I am having a problem with a VM: I cannot get a 10.9 TB RAW disk to be seen from the guest.
Here is the environment:
- ESXi 5.1 (latest)
- Adaptec 5805 PCIe (passthrough to the VM)
- Windows 7 Ultimate x64 as the guest
I have an existing VM (Win7 Ult x64) to which I added the Adaptec RAID PCIe device as a physical passthrough. In the RAID BIOS I had created a RAID 6 array of 8x 2TB drives, which shows as 10.9 TB total. I'm trying to pass this through as a physical RDM to the VM.
The driver for the Adaptec card loads in the guest (I've tried both the in-box Windows driver and Adaptec's own driver, with the same result), but I cannot see my RAW disk in Windows Disk Management. If I load Adaptec Storage Manager for Windows, blow away the RAID 6 array right there, and try to create another RAID 6 array, it will create it but downsize it to the maximum it allows, 2 TB, so I then have a RAID 6 array built from the 8x 2TB drives that is only 2 TB RAW in size.
That does then show up as 2 TB RAW in Disk Management. So it appears to me that either the VMware guest's BIOS or something else in the OS is preventing it from addressing the larger disk size. I can also set up 8 separate JBOD disks, and they all show up fine in Disk Management as 8x 2TB RAW drives.
So I am at a loss as to why it is not working, since it should support up to 64 TB as far as I can tell (per the ESXi 5.1 Configuration Maximums guide). This VM also has 2 other VMDKs, one for the OS and one 1 TB virtual disk, which should have nothing to do with the PCIe RAID card storage I'm passing through. Does the OS have to be installed on a VMFS-5 datastore for the guest to be able to address the large space on a separate passthrough PCIe RAID disk?
Does anyone have any ideas on how I can get the VM guest to see all 10.9 TB of RAW space? Also, BTW, this is not a boot drive; it's only going to be used for storage.
Hi, are you configuring the RDM disk with Virtual Compatibility? Support for up to 64 TB is only for Physical Compatibility RDMs on VMFS-5 datastores.
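If you end up creating the mapping from the command line, the two compatibility modes correspond to different vmkfstools flags. Just as an illustration (the device ID and datastore path below are placeholders, not from your host):

vmkfstools -r /vmfs/devices/disks/<device-id> /vmfs/volumes/<datastore>/<vm-folder>/rdm.vmdk   (Virtual Compatibility, still subject to the 2 TB virtual disk limit)
vmkfstools -z /vmfs/devices/disks/<device-id> /vmfs/volumes/<datastore>/<vm-folder>/rdm.vmdk   (Physical Compatibility / passthrough, the mode that supports the larger sizes)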
No, as far as I know I am not. I am trying to pass the Adaptec RAID card through to the Windows guest as a storage-only device, so there will be no datastore on that disk in the first place. The RAW disk should just pass through and get formatted with NTFS directly. And it should be set up properly, because I do pass the whole RAID card through in the ESXi config, then add it to the Windows VM guest, and on boot of that guest I see the Adaptec RAID card in Device Manager. I can even get it to present disks of 2 TB and smaller to the OS in Disk Management.
VMDK or VMFS shouldn't even touch this disk.
This is why I am puzzled as to why it doesn't work: it is a 64-bit OS, the VM is on the newest virtual machine version, and the OS .vmdk is on a VMFS-5 datastore. The only thing I can think of is that maybe I had an older version of the VMware BIOS, and upgrading the virtual machine version perhaps did not update the BIOS for large disk addressing, or something weird like that.
I may try a fresh install with all the right settings, but I didn't really want to have to reinstall everything if I can avoid it. Plus, that is the too-easy fix; I want to know why, so if I ever hit this issue again I will know what causes it and how to work around it.
Is there some other 'advanced' setting in ESXi where I need to specify this as physical or virtual RDM?
I thought that when you pass through a PCI card it just presents the whole card to the guest and is seen as local storage, and I assumed that counts as 'physical'. Please correct me if I am wrong.
EDIT:
Ok, it looks like my terminology is messed up here. I incorrectly referred to this as RDM, but upon researching it more, I am not actually using RDM. This is just PCIe passthrough. Sorry for the confusion.
Hello, are you attaching this virtual disk to the VM as an RDM? If so, in the same wizard you should see the "Physical Compatibility" option. By the way, in Edit Settings you can check the properties of the disk as it is currently attached.
Please let us know your configuration.
Regards
It is an Adaptec hardware RAID PCIe card, and I pass the whole card through to the VM guest, so it should just be a raw disk (one RAID 6 array of 10.9 TB) passed straight through to the VM.
It is not sitting on a datastore at all. Just raw PCIe card passthrough (VMDirectPath).
Ok, well just for kicks I turned off the PCI passthrough for the Adaptec card and was able to create a 10.9 TB datastore on the ESXi host itself, but then when trying to create a virtual disk on the VM it limits me to 2 TB.
Is it possible at all to get my 10.9 TB of storage on a Windows 7 x64 VMware guest?
The RDM option when I try to create a disk is greyed out, even though I am running ESXi 5.1, using only VMFS-5 on all datastores, etc.
So what is the best way to get 10.9 TB of fast storage on a VM guest, is it possible?
I understand you now! Sorry. But I don't understand why you use this configuration. Is the Adaptec not able to present the RAID to ESXi? The typical configuration is on the host: the host sees the RAID, and you just add a new disk to the VM as a Raw Device... maybe I am missing some goal in this scenario.
Regards
To use this volume as an RDM you don't need a datastore on it. Delete the newly created 10.9 TB datastore and try again to add an RDM disk.
THANKS for all your suggestions so far...
So is there a performance difference between using the RAID card in VMDirectPath configuration vs physical RDM passthrough?
I need the lowest possible latency, but I just cannot get the disks (when the RAID card is attached to the VM guest with VMDirectPath) to show up in Disk Management if they are larger than 2 TB.
I even tried building a fresh VM guest running Win7 x64, newest VM version, etc.
I may try the RDM approach now as a test, but will it be able to pass through more than 2 TB to the guest?
Well, I just tried the create-disk option per your suggestion, but the Raw Device Mappings option is still greyed out...
Well, I just gave up on the idea of passing the PCI card through to the VM and getting it to see all the space. It seems like either the VMware guest BIOS, the virtual hardware, or the Adaptec driver is just not able to get past the 2 TB limit (even with everything on the latest version).
I also gave up on the physical RDM passthrough idea. ESXi recognizes the 10.9 TB from my RAID card just fine, but I cannot even create a disk out of it unless it is virtual (and even then I never tested whether the VM could see that disk past 2 TB).
So I reverted to my last-resort idea for now. I went back to my virtualized SAN (NexentaStor Community Edition), using the PCI RAID card passthrough, with VMXNET3 networking between the Windows and SAN VMs, but this time I'm trying an iSCSI (block-based) LUN instead of CIFS (a file-based protocol) to see if that will bring the latency of my app within tolerance.
I just got it back running a few minutes ago and now have a 10 TB LUN (ZFS software RAID), and am copying a bunch of files to it at the moment and getting around 80 MB/s upload to the LUN, but the key test will be access latency from within my app...
I'd just as soon use it this way if it works, because the Solaris/ZFS-on-JBOD setup has far better parity and checksum protection than hardware RAID under NTFS anyway. So I will test it here once I get my test files uploaded and see what happens!
I am still curious as to why the Windows x64 VM couldn't see more than 2 TB over PCI passthrough.
Just to give some resolution on this old issue...
Turns out the Adaptec card was the problem.
I later put in an LSI 8480E (the same card as the PERC 5/E), and this allowed the VM to see more than 2 TB.
Supposedly these LSI 8480E cards may have issues with PCI passthrough, so I did not even try that; what I wound up doing instead was creating a manual pRDM mapping using the following command:
vmkfstools -z /vmfs/devices/disks/vml.0200000000600605b0004cca3019df2cd6bfe575604d6567615241 /vmfs/volumes/524889b9-136860f6-690b-000423dd1e8e/Win7-Ultimate/rdm1.vmdk -a lsilogic
...and then I simply added an existing disk to this Windows VM, browsing to the location of that rdm1.vmdk mapping file, and it worked. When I booted up the Windows box it was able to see all ~11 TB of storage.
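In case it helps anyone else: the long vml.* identifier in that command isn't something you type by hand. I just listed the device nodes on the host and copied it from there, something along these lines (output will obviously differ per host):

ls -l /vmfs/devices/disks/
esxcli storage core device list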
So the weird thing was that with the old Adaptec card ESXi could see the space, but I kept getting an error when trying to create the manual map via CLI:
"Failed to create virtual disk: Invalid argument (1441801)"
Then after installing the LSI 8480E I no longer got that error and was able to create the mapping successfully. The reason it was necessary to create the mapping manually via the CLI is that the RDM option in the VSC Add Disk wizard was greyed out: the PCI SAS RAID cards are seen as 'local storage', and ESX only offers RDM for iSCSI or FC storage, but the CLI will let you create the mapping manually, which worked.
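If you want to double-check that the mapping really came out as a passthrough (physical) RDM rather than virtual, vmkfstools can query the mapping file; on my host something like this reports the RDM type and the backing device (the path is just my VM's folder, adjust for yours):

vmkfstools -q /vmfs/volumes/524889b9-136860f6-690b-000423dd1e8e/Win7-Ultimate/rdm1.vmdk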
At any rate, issue is resolved now...