efststamb
Contributor

Intel VMD and VROC support on ESXi 7.0

Hello,

I know this has been asked before, but there is no definitive answer, and both related questions were posted before the release of ESXi 7.0.

Reading this https://www.intel.com/content/dam/support/us/en/documents/memory-and-storage/ssd-software/Intel_VMD_...

As another user has mentioned, it is supposedly supported as long as the drives are Intel, U.2 form factor, and sit on a hot-pluggable backplane.

On the VMware Compatibility Guide, the Intel VMD driver is listed as an inbox (native) driver in ESXi 7.0

(iavmd version 2.0.0.1055-3vmw)

I am planning to order a new server for hosting ESXi, and I have to clear this up before choosing my configuration.

(The server will be a Supermicro, most probably a 1029U-TRTP2, with Intel P4610 NVMe drives.)

It will be a single host, no HA, vSAN or anything like that, with all VMs on the local datastore, which I would like to put on the NVMe drives.

However, I don't feel at ease with a single drive; I would prefer a RAID 1 array of two of the aforementioned drives.

Regular backups will be implemented (I still haven't decided which path to follow for this), but if the single drive fails, I will still have downtime.

Putting the two drives behind a RAID controller defeats the purpose of going NVMe.

Right now I am stuck with an HP DL380 G6 server running ESXi 5.5, so unfortunately I have no way of testing this.

So my question is, has anyone been able to implement this successfully?

stay safe...


Accepted Solutions
ronald_bronsink
Contributor

Hello,

It's not working. I just tried it, but ESXi sees the drives as separate drives and not the RAID 0 volume I just created.

The only way at this point is with the MegaRAID 9460-16i (which I recently bought):

https://www.broadcom.com/products/storage/raid-controllers/megaraid-9460-16i

You also need this cable, 05-50062-00 (the bright green cables):

https://kijkshop.nl/computers-en-printers/computer-en-tablets-opslag/opslag-accessories/broadcom-05-...

But it is only possible to attach a maximum of 4 NVMe drives to one controller.

Good luck

11 Replies
NGeisler
Contributor

Per Intel: "For ESXi, the Intel® product does not provide full Intel® VROC RAID features, so it is labeled the Intel® VMD-enabled NVMe Driver for ESXi."

So the ESXi driver has the RAID function disabled.

Intel® Virtual RAID on CPU User Guide
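
For anyone who already has a 7.0 host to check against, the inbox driver and the way the NVMe devices are exposed can be confirmed from the ESXi shell with standard esxcli commands (a quick sketch; exact output will vary with your hardware):

esxcli software vib list | grep -i vmd    # shows the inbox iavmd VIB and its version
esxcli storage core adapter list          # lists storage adapters and the driver that claimed each one
esxcli storage core device list           # each NVMe drive appears as an individual device, not as a RAID volume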

efststamb
Contributor

Thank you for the answers...

Because I wanted RAID for my main datastore, and I had kind of figured out what you guys confirmed, I decided to go with a Supermicro SYS-1029U-E1CRTP2 server, an AOC-S3108L-H8iR-16DD RAID card, and a pair of XS1600ME70004 SAS SSDs in a RAID 1 configuration.

For my use they are more than enough...

It works as it should, and the volume is presented to ESXi with no problems.
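
For keeping an eye on the array from the ESXi shell, Broadcom's storcli utility can be used against the 3108-based controller (a rough sketch, assuming the storcli VIB for ESXi is installed at its usual path; controller and virtual-drive IDs will differ per system):

/opt/lsi/storcli/storcli /c0 show         # controller summary, including physical drives and virtual drive state
/opt/lsi/storcli/storcli /c0/v0 show all  # detailed status of the first virtual drive (the RAID 1 volume)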

The only "problem" when going with SAS SSD's is that one doesn't have as many choices as I would like to have.

Choices were down to Seagate, WD, and Kioxia drives, all with a 10 DWPD endurance rating.

Time will tell how the drives will hold up.

I am now in the process of deciding which backup software (for the VMs) to go with...

btechit
Enthusiast

What are you running that you need more than 10 drive writes per day and more than a 1+1 (mirror)?

btechit
Enthusiast

That's a lot of drive writes. They must be some crazy apps/VMs to write that much data per day.

btechit
Enthusiast

Wait, you are concerned about how a 10 DWPD SSD will hold up? You do realize this is way beyond the expected life of spinning drives, with less power consumption, less heat, less risk of vibration damage, less time under load (since data is read and written so quickly), and much, much faster rebuild times.

For context, I have been running SAS and SATA SSDs rated at a mix of 1 to 3 DWPD for years (no 10 DWPD drives in service yet), typically in RAID 1 and RAID 10, with zero drive failures so far. My worst case right now is drives with 80% life left, and the servers will notify me when they get down to 20% life remaining so we can replace them well ahead of the expected failure.

I have yet to replace a failed SSD in a RAID array, while replacing failed spinning drives in the RAID arrays I manage is routine. And let's not forget that spinning drives take so long to rebuild, risking a second drive failure during the rebuild. SSDs rebuild an array very quickly (observed when testing the drive-replacement scenario, since I have not yet seen one fail in production).

Jmarc_Syd
Contributor

Thanks! So does this mean I can use RAID 1 on data volumes with VROC and an HPE ProLiant MicroServer Gen10 Plus v2?

Looking for a RAID 1 solution for my future homelab server but it's proving difficult to find on a small server...

btechit
Enthusiast

No, it doesn't look like it. You need the correct CPU with VMD support, VMD firmware, and motherboard/VMD firmware support. From what I can tell, the HPE MicroServer Gen10 only supports the older-style VROC for SATA drives, but do reach out to HPE to confirm; they will usually have documentation and drivers available if it is supported. You could get a SATA RAID card instead; just double-check that it's on the VMware HCL for versions 7 and 8.
https://www.storagereview.com/review/hpe-proliant-microserver-gen10-plus-review
https://www.youtube.com/watch?v=BbnlEVi9oEE&t=172s


Here is an example where it is supported/implemented by Supermicro:
https://www.supermicro.com/manuals/other/AOC-VROCxxxMOD_VMware_ESXi.pdf

"Note ESXi* supports VMD Enabled NVMe* RAID 1 Boot Volumes and data stores created with Intel® Virtual RAID on CPU (Intel® VROC) version 7.5 on supported platforms."
Intel documentation:
https://www.intel.com/content/www/us/en/support/articles/000037947/server-products/legacy-server-pro...

Here are the VROC versions and their supported configs: 
https://www.intel.com/content/www/us/en/support/articles/000030310/memory-and-storage/datacenter-sto...


VMware inbox driver in ESXi 8.0 U1:

https://www.vmware.com/resources/compatibility/detail.php?deviceCategory=io&productid=43948

Here is a good example if you have a latest-generation (4th Gen) Xeon Scalable CPU with support for VROC 8 and VMD 3:
https://www.intel.com/content/dam/support/us/en/documents/memory-and-storage/ssd-software/Intel_VROC...

Jmarc_Syd
Contributor

Thanks for the comprehensive response! It looks like this is slowly going towards the "too hard" basket.

Indeed, I've been looking at PCIe SATA RAID cards, but they're quite large and I'd need to move up to a bigger case (SFF doesn't cut it). I want something small and quiet that will live in a cabinet at home.

I'll probably end up giving up on RAID 1 and just get a good Intel NUC and no redundancy for my homelab.

Aviator777
Contributor

I know it's been a few years, but I just wanted to clarify: RAID 1 on M.2 NVMe SSDs with Intel VMD on ESXi is possible. The most important piece is the Intel VROC key installed on the motherboard; it looks similar to the first attachment.

The Standard version of the key allows RAID 0 or 1; the Pro version allows RAID 0/1/5/10. Since we only have 2 x M.2 sockets, we went with the Standard version. After installing the key, enabling VMD in the BIOS (Chipset --> IIO --> VMD: Enabled), and rebooting, a VMD option appears under Advanced, listed among the NICs.

Create the RAID there and install ESXi; the installer will only list the RAID VMD volume and other non-RAIDed drives, and the individual M.2 drives are not shown.

After the install, follow the Intel instructions to install the CLI tool; it is not a driver, but a tool to interact with the built-in VMware driver.

Tool: https://www.intel.com/content/www/us/en/download/784752/intel-vmd-vroc-and-led-management-tool-for-v...
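
If the download is delivered as an offline bundle, it can typically be installed with esxcli like any other package (a loose sketch; the path and filename below are placeholders, so follow Intel's install notes for the exact package name and any reboot requirement):

esxcli software vib install -d /vmfs/volumes/datastore1/intel-vmd-vroc-tool-offline-bundle.zip   # placeholder path/filename
esxcli software vib list | grep -i intel   # confirm the package shows up afterwards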

I found this post while looking for the method to rebuild the array, if needed in the future, as the instructions are vague on that point for those without a hot spare. Apparently, in systems with a hot spare, it will auto-rebuild.

Hope this helps future searchers!