VMware Cloud Community
csementuh
Contributor

Do >2TB Physical RDMs NEED DirectPath I/O?

I've been struggling with this for over a week now so I could really use some advice.

ESXi 5.1

I need to set up three 3TB hard drives and give a VM running FreeNAS (or similar) direct access to them for a ZFS RAID-Z data storage setup.

From my research it looks like a physical RDM is my best option. I want the guest OS to have full, direct control of the drives with as little hypervisor intervention as possible. The reason is that I want to be able to swap the drives into a spare physical server if I ever need to recover the data.

I am able to create physical RDMs from the command line and add them to a VM, but once attached they don't work in the guest: the OS can see the drives but can't use them.
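For reference, the mapping files were created along these lines (the disk identifier and datastore paths here are just placeholders, not my actual ones):

# list the local disks to find their device identifiers
ls -l /vmfs/devices/disks/
# create a physical-mode (passthrough) RDM pointer file on an existing datastore
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_3TB_DISK /vmfs/volumes/datastore1/freenas/rdm_disk1.vmdk

The resulting .vmdk is then added to the VM as an existing disk.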

I am unable to make virtual RDMs because, obviously, the drives are over the 2TB-minus-512-byte limit.

Where do I turn? Do I NEED hardware that supports DirectPath I/O to use these large RDMs? I can't confirm that, but it's the only other thing I can think of. The newer versions of ESXi are supposed to support RDMs up to 64TB....

My hardware doesn't support DirectPath I/O, so I can't just pass a whole storage card with the drives attached through to the VM. I can buy a new motherboard that supports passthrough if that's the answer. Also, for physical RDMs, will I need to pass the VM a whole controller (card), or can I use physical RDMs on drives attached to the onboard SATA connectors?

Any advice would be greatly appreciated. 😉

4 Replies
lenzker
Enthusiast (Accepted Solution)

What type of storage are you using? iSCSI or FC?

You definitely don't need DirectPath I/O for a physical raw device mapping > 2TB (minus 512 bytes), so you can save that money 🙂

Is anything mentioned in vmkernel.log when you attach the RDM?
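On ESXi 5.x you can watch it live from the shell while you attach the disk, for example:

# follow the VMkernel log while adding the RDM to the VM
tail -f /var/log/vmkernel.log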

VCP,VCAP-DCA,VCI -> https://twitter.com/lenzker -> http://vxpertise.net
csementuh
Contributor

Hello and thanks for the response!

The storage is three 3TB Toshiba (Hitachi) SATA drives attached directly to the ESXi server. They are connected to the onboard SATA controller, just like the 500GB datastore drive. The server boots from a USB flash drive. This is a home budget setup; I'll leave the high-end stuff for work. 😉

The RDMs are created, but no guest OS will see the drives correctly. They all 'see' the drives but can't 'use' them. FreeNAS shows an 'unsupportable block size' error; sometimes the reported block size is some random number, other times it is 0. This leads me to believe that ESXi is not presenting the drives to the VM properly.
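If it helps, what ESXi itself reports for those disks can be pulled from the host shell; the "Size" and "Is RDM Capable" fields look like the interesting ones here:

# list every storage device with its size, device type and RDM capability as seen by ESXi
esxcli storage core device list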

A Windows Server 2008 R2 VM sees the drives but will not let me initialize them. I've also tried OpenIndiana and others with the same results: almost everything can see the drives but can't use them.

The ultimate goal is to let the VM use the drives in a RAID-Z pool for home media storage.
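(For context, the pool creation itself is the easy part once the OS can actually use the disks; on FreeNAS/FreeBSD it would be something along these lines, with da1-da3 standing in for whatever the guest calls the three drives and "tank" as a placeholder pool name:)

# create a single-parity RAID-Z pool across the three disks, then check its health
zpool create tank raidz da1 da2 da3
zpool status tank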

I use VMware at work often, but I'm not entirely versed in the log files. I'll do some research and see if there are any errors there.

EDIT: I installed FreeNAS directly onto the server hardware from a flash drive, and the OS was able to see, read, and create the ZFS filesystem properly. The drives work perfectly fine, so my problem is definitely an issue with ESXi. For whatever reason the drives are not being presented correctly to the FreeNAS VM through the RDMs. Does anyone know how to fix this so the access is physical and my setup will work?

lenzker
Enthusiast

Can you test whether you can format the LUN with VMFS and create some files or a virtual machine there? That way we can make sure ESXi can at least communicate with the LUNs/disks.
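(If you prefer the ESXi shell over the vSphere Client for that test, a rough sketch would be something like the following; the disk identifier is a placeholder and END_SECTOR comes from the getptbl output:)

# check the current partition table and note the disk's last usable sector
partedUtil getptbl /vmfs/devices/disks/t10.ATA_____EXAMPLE_3TB_DISK
# write a GPT with a single VMFS partition (AA31E02A... is the VMFS partition type GUID)
partedUtil setptbl /vmfs/devices/disks/t10.ATA_____EXAMPLE_3TB_DISK gpt "1 2048 END_SECTOR AA31E02A400F11DB9590000C2911D1B8 0"
# format that partition as VMFS-5
vmkfstools -C vmfs5 -S test3tb /vmfs/devices/disks/t10.ATA_____EXAMPLE_3TB_DISK:1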

Usually an RDM is not supported with locally attached storage; the disks shouldn't even show up in the list of LUNs when you add an RDM. (An unsupported workaround is described here: http://blog.davidwarburton.net/2010/10/25/rdm-mapping-of-local-sata-storage-for-esxi/ )

VCP,VCAP-DCA,VCI -> https://twitter.com/lenzker -> http://vxpertise.net
csementuh
Contributor

Thanks for the additional help. I was already using that RDM workaround and it still wasn't functional. I'm not sure why, but ESXi 5.1 apparently still doesn't like >2TB drives even when they are mapped as physical RDMs.

I broke down and bought a different motherboard with full VT-d support, and now I have simply passed the onboard SATA controller for the drives through to the VM. It works just fine now!
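(For anyone who lands here later: the passthrough itself is configured in the vSphere Client under the host's Configuration > Advanced Settings (DirectPath I/O), followed by a host reboot and adding the controller to the VM as a PCI device. The controller can also be identified from the shell beforehand, for example:)

# list PCI devices, then look for the onboard SATA/AHCI controller in the output
esxcli hardware pci list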
