It's been just under a week since we figured out the last issue, but now I'm getting pretty close to the end-game. I'll try to keep it short, for all of you busy experts out there.
I have an HBA (LSI SAS9201-16e) connected to a drive cage (Kingwin MKS-435TL) that is holding four SAS HDDs (IBM Storwize V7000 98Y3241). I wish to be able to pass each individual drive to a different VM. I originally thought that this would involve SR-IOV, but now I am not so sure. Instead of wandering around in the dark, I'll ask this:
What is the recommended method for enabling passthrough of individual drives that are connected via an HBA?
If you want/need more info about my setup, tell me - I'll give you the info you request.
Here are some of the sources that I've read thus far:
So far, I've had no luck getting it to work:
Your assistance is greatly appreciated.
Hey TopHatProductions115,
Something that you can do is configure one LUN spanning the full space of each disk, present them as RDMs to the ESXi hosts, and then present one to each VM individually. Have you thought about this?
I've never done that before. How would I set that up?
Basically, an RDM (Raw Device Mapping) is a way of presenting storage LUNs directly to VMs without them being formatted as datastores by ESXi. The LUN is presented to ESXi over a block-storage protocol, and instead of adding a Hard Disk to the VM, you add the RDM, which is the unformatted LUN.
The way of adding it to the VM is quite easy and you can see that here: Add an RDM Disk to a Virtual Machine
I do not recommend configuring SR-IOV unless it is absolutely needed, as it ties the VM to a single host and you lose features such as vMotion and HA, which makes that ESXi host a single point of failure.
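For anyone finding this thread later, the mapping step on the ESXi side can be sketched roughly like this. The NAA device ID and datastore path below are placeholders, not values from this thread; the real device ID comes from `esxcli storage core device list` on your own host.

```shell
#!/bin/sh
# Sketch of the RDM mapping step run from the ESXi shell.
# Build the vmkfstools command that creates a physical-mode RDM pointer
# file for one raw disk. -z = physical compatibility mode (SCSI commands
# are passed through to the device); -r would give virtual compatibility.
rdm_cmd() {
    device="$1"   # e.g. /vmfs/devices/disks/naa.600... (placeholder)
    pointer="$2"  # .vmdk pointer file on an existing VMFS datastore
    printf 'vmkfstools -z %s %s\n' "$device" "$pointer"
}

# Print the command for review; on the ESXi host you would run it directly.
rdm_cmd "/vmfs/devices/disks/naa.600XXXXXXXXXXXXX" \
        "/vmfs/volumes/datastore1/my-vm/rdm-disk1.vmdk"
```

The resulting .vmdk pointer file is then attached to the VM as an existing disk, as described in the "Add an RDM Disk to a Virtual Machine" doc linked above.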
I will definitely be sure to test this next time I'm on the server. Thank you for showing me this.
Okay - I figured out part of the issue earlier today. Now I'm waiting on one more cable, so that I can actually start using the 16TB array. I'll give an update when the time comes:
TXP-Network Does :: ESXi Server - HBA Storage Array Update! - YouTube
I hopped into the vCenter Server Appliance to attempt configuring RDM disk(s) for VMs. But none of the 4TB disks show up for some reason. I know for a fact that they're online and spinning, but they don't show up in ESXi or vCenter. I may have to try initialising the 4TB HDDs on a different machine. I'll let you know how it goes.
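In case it helps anyone hitting the same wall, here are the ESXi-shell checks I mean to run before pulling the HBA. These are standard esxcli calls; the grep pattern is just a guess at what to look for.

```shell
#!/bin/sh
# Sketch: checks for HBA-attached disks that don't appear in ESXi/vCenter.
# esxcli only exists on the ESXi host, so bail out gracefully elsewhere.
if command -v esxcli >/dev/null 2>&1; then
    # Is the HBA itself detected? Look for an LSI/Avago vmhba entry.
    esxcli storage core adapter list
    # Force a rescan of all adapters, then list the devices found.
    esxcli storage core adapter rescan --all
    esxcli storage core device list | grep -i 'Display Name'
    ran=yes
else
    echo "esxcli not found: run this from the ESXi shell" >&2
    ran=no
fi
```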
Just checked VMware compatibility for the HBA I have. Not sure if I need to install drivers and/or do a firmware update:
I'll attempt the solution you mentioned in a bit. I'll let you know how it goes between tonight and tomorrow evening.
Just redid the compatibility search for my HBA, to make sure I was looking at the correct page:
This makes me think the right VIB/driver should already be installed. I'll take the HBA, throw it into a different system, and try initialising the drives, to see if that's the issue.
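For reference, the driver question can be checked from the ESXi shell directly. The lsi/mpt grep pattern is my assumption for an SAS2116-based card like the 9201-16e, so adjust it if your VIB is named differently.

```shell
#!/bin/sh
# Sketch: confirm which driver/VIB ESXi loaded for the LSI HBA.
# esxcli only exists on the ESXi host, so bail out gracefully elsewhere.
if command -v esxcli >/dev/null 2>&1; then
    # Which vmhba does the card show up as, and which driver claimed it?
    esxcli storage core adapter list
    # Is a matching VIB installed? (The lsi/mpt naming is an assumption
    # for an SAS2116-based card like the 9201-16e.)
    esxcli software vib list | grep -i -E 'lsi|mpt'
    ran=yes
else
    echo "esxcli not found: run this from the ESXi shell" >&2
    ran=no
fi
```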
As noted here, there were multiple parts to this solution. I'll be posting a finalised summary in a few minutes.
It has been a while, but progress has not stopped. After all of the research-intensive work mentioned here (getting the HBA flashed to newer firmware, making sure the drive cage was powered properly, etc.), I was finally able to implement the RDM/LUN solution mentioned by Lalegre. As such, their solution shall be marked as the accepted solution. I will also link the relevant prerequisite information, in case anyone tries anything similar, so that they won't face the hardware compatibility issues I did.