LeerIT
Contributor

VMFS datastore striping on the Intel P3608?

TL;DR version: Is it really impossible to combine disks attached locally to a VM host into a single datastore, with the full performance benefit of striping (something like software RAID-0)?

Full story: We purchased an Intel DC P3608 to replace our aging HP ioDrive2 as the high-speed datastore on our Epicor ERP virtual host, which runs the SQL Server and app server VMs from that datastore. What we weren't aware of is that the Intel DC P3608 achieves its performance by essentially putting two PCIe SSDs on one card, and it presents itself that way to the mainboard you plug it into. So I was disheartened to see two 750 GB PCIe SSD drives in VMware after rebooting the host, rather than one 1.6 TB high-performance drive. Google led me to believe the answer is Intel Rapid Storage Technology enterprise (RSTe), which would essentially RAID-0 the two "drives" together to get the full capacity and performance out of the device. However, it seems there is no version of RSTe available for VMware.
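For anyone hitting the same surprise, this is roughly how you can confirm from the ESXi shell that the card enumerates as two separate devices (the commands are standard esxcli; the exact output will of course vary by host):

```shell
# Show PCI devices on the host; the P3608 should appear
# as two separate NVMe/SSD controller entries.
esxcli hardware pci list

# Show the storage devices ESXi actually sees; expect two
# ~750 GB devices instead of one 1.6 TB device.
esxcli storage core device list
```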

Long story short: From here, I resorted to presenting both of the "drives" to my SQL VM via PCI passthrough and then attempting to install RSTe inside the VM. That also failed, presumably because RSTe expects Windows running on bare metal, not a Windows VM. So I had to fall back further to striping the two disks with Windows Disk Management; in the benchmarking I did, this does let the drive perform at full speed. However, it means I can only move the SQL mdf/ldf files onto the drive, not keep the entire VMDK on it as I originally intended. PCI passthrough also has a couple of other limitations that might hurt in the future (e.g., the VM's RAM must be fully reserved, and you can't suspend the VM or take snapshots).
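For reference, the in-guest stripe was just a standard Windows Disk Management striped volume; the same thing as a diskpart script (a sketch only; the disk numbers 1 and 2 and the drive letter are assumptions, so verify yours with `list disk` first) would look something like:

```shell
rem stripe.txt -- run inside the guest with: diskpart /s stripe.txt
rem Disk numbers 1 and 2 are assumptions; check with "list disk" first.
select disk 1
convert dynamic
select disk 2
convert dynamic
rem Create a RAID-0-style striped volume across both passthrough disks.
create volume stripe disk=1,2
format fs=ntfs quick label="P3608"
assign letter=S
```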

I've looked into vCenter vSAN, but that seems to be a completely different animal.

This thread is a similar discussion:

Re: Managing VMFS datastores.
