Hi,
I am deploying ESXi on my HP Gen9 DL360p server, with RAID 1 for the OS (ESXi) and RAID 6 for the datastore. While creating the RAID arrays I came across a screen where the Stripe Size / Full Stripe Size, Sectors / Track, Size, etc. can be customized. Is there any VMware-recommended best practice for selecting these?
At present I just use the defaults that HP ships with.
Please help
I'm not aware of a recommendation from VMware. It's really the RAID controller vendor who knows best what suits their hardware. I deployed a bunch of DL380 Gen9 hosts this week and left all settings at their default values (as I did in the past with other models).
>>> ... with RAID 1 for OS (ESXI) and RAID 6 for Data Store ....
Although you can do this, there's absolutely no need to. The installer will format the drive/logical volume as required. ESXi, once loaded, runs in the host's memory, so even if it is installed on a USB/SD device (e.g. a MicroSD card in the Gen9 models) there is no performance impact. It may just take a few more seconds to boot.
André
VMware doesn't have a stripe-size recommendation for creating RAID arrays.
I'm afraid there is no such thing as a "best stripe size" for a VM datastore. It all depends on the usage scenario (many small files or fewer big files? read/write ratio? etc.). There are, however, some other values you should consider:
- disk sector size: 512 B or 4 KB (can vary for SAS drives)
- primary filesystem block size: for VMFS5 it is 1 MB (with an 8 KB sub-block)
- secondary filesystem block size (that of the VMs): depends on the filesystem used (e.g. Btrfs has a default block size of 16 KB, NTFS 4 KB, etc.)
- SSD erase-block size: this differs between vendors, mostly 4 MB, but I have seen values between 1 MB and 8 MB
- cache size of your RAID controller (can be anything between zero and a few GB)
If you do not have time for testing, just pick the default value the RAID controller offers. If that is what the screenshot shows, I think it is quite a good default choice and it makes sense to me (the full stripe size equals the VMFS5 block size, 1 MB). It does not make sense to go below this value, but I would increase it if you store predominantly large files.
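To make the "full stripe size equals VMFS5 block size" reasoning above concrete, here is a small sketch of the arithmetic. The drive counts and stripe sizes are illustrative assumptions, not vendor recommendations; RAID 6 reserves the capacity of two drives for parity, so the full stripe is the per-drive stripe size times the number of data drives.

```python
# Hypothetical helper illustrating the stripe-alignment arithmetic.
# Values below are example assumptions, not controller recommendations.

def full_stripe_kib(stripe_kib: int, drives: int, parity_drives: int) -> int:
    """Full stripe size = per-drive stripe size x number of data drives."""
    data_drives = drives - parity_drives
    return stripe_kib * data_drives

# RAID 6 uses two drives' worth of parity. With 6 drives and a
# 256 KiB per-drive stripe, the full stripe matches the VMFS5
# block size of 1 MiB (1024 KiB):
VMFS5_BLOCK_KIB = 1024
fs = full_stripe_kib(stripe_kib=256, drives=6, parity_drives=2)
print(fs, fs == VMFS5_BLOCK_KIB)  # 1024 True
```

With a different drive count you would pick a different per-drive stripe so the product still lands on (or above) the 1 MiB VMFS5 block size.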
Thank you JarryG.