VMware Cloud Community
doron1
Enthusiast

ESXi 5.1: Attach 4TB physical (raw) drives to a VM - possible?

Hi all,

I'm new to ESXi but not to virtualization. So, seven hours of frustration after installing ESXi on my machine, it seems I've hit a brick wall. I need to attach 3x 4TB drives directly to a guest.

I have tried everything Google furnished me with, to no avail. RDM is out - even after turning off the RDM filter, which one of the KB articles says "unlocks" RDM for local disks, the option is still greyed out.

Creating a vmdk with vmkfstools fails with "Failed to create virtual disk: The destination file system does not support large files (12)".

Note these are 4TB drives.
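For reference, here's a sketch of the sort of command I've been running (the device name below is made up; real identifiers come from listing /vmfs/devices/disks/):

# list the local disks to find the device identifier
ls -l /vmfs/devices/disks/
# create a physical-compatibility RDM pointer on an existing datastore
# (the device name here is an example; substitute your own)
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_4TB_DISK /vmfs/volumes/datastore1/rdm/disk1-rdmp.vmdk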

Is there really no way to do this?!

Thanks in advance!

15 Replies
suhaimim
Enthusiast

Hi there,

So far, I am using a 2.8TB LUN in ESXi 5.1. I've never tried 4TB in one shot. Is there any necessity? Thank you for sharing. 🙂

doron1
Enthusiast

Thanks. I'd like the guest to access the physical drives directly (it's a NAS, and I don't want a layer of virtualization between it and the HDDs).

I thought this would be a cakewalk, with the maturity of ESXi and all...

The 4TB drives are connected to the local on-board SATA ports, which, I understand, makes life more difficult.

From all I've read, I gather that it's not possible for large drives, neither via RDM nor via a raw-device VMDK. I guess I just need an authoritative answer 🙂

doron1
Enthusiast

(bump)

Can anyone say authoritatively it's not possible, or show me how to do it if it is?

Thanks!!

Josh26
Virtuoso

If you get it to work, it won't be supported; RDMs are not supported on local storage.

The maturity of ESXi is complemented by the enterprise nature of its design, whereas local SATA disks run contrary to that "enterprise" focus.

I don't want a layer of virtualization between it and the HDDs.

Why? What do you feel it hurts?

doron1
Enthusiast

Josh26 wrote:

If you get it to work, it won't be supported; RDMs are not supported on local storage.

The maturity of ESXi is complemented by the enterprise nature of its design, whereas local SATA disks run contrary to that "enterprise" focus.

Thanks for the reply. So you're saying that even if I get the vmkfstools -z or -r setup to work for my HDDs, it will not be supported?

(At one point in my little research I realized that my rig lacks VT-d, which I thought might be the problem; so even if I upgrade to a VT-d-capable mobo+CPU, this is not supposed to work?)

I understand enterprise, absolutely. Yet even on enterprise servers, you find local SATA from time to time... but your point is well taken.

Josh26 wrote:

I don't want a layer of virtualization between it and the HDDs.

Why? What do you feel it hurts?

(a) The data already on the HDDs (I was trying to virtualize an existing setup); (b) a bit of performance. This is a file server, and I'd like to maximize performance off of it.

What I've done for now is turn the order upside down - run my virtualization under the file server OS (including raw disk access). But it's kind of backwards...

Josh26
Virtuoso

So you're saying that even if I get the vmkfstools -z or -r setup to work for my HDDs, it will not be supported?

Yes.

(b) a bit of performance

The performance hit attributed to virtualised disks is a Hyper-V-related myth. On modern ESXi systems, I've never seen a benchmark show a discernible difference (excepting, of course, the "I built two high-load VMs on one host and they performed slower than on dedicated hardware" scenario).

w00f
Contributor

Did you try this? http://blog.davidwarburton.net/2010/10/25/rdm-mapping-of-local-sata-storage-for-esxi/

I've done it myself (not with on-board SATA, but with a 3ware RAID controller); I had a RAID5 with an existing 20TB partition that I couldn't "break". I raw-mapped the system disk on the array plus the storage one, and the VM booted like a charm with nothing to update (except adding VMware Tools).
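In case it helps: once the mapping file exists, it can also be attached to the VM from the console rather than the GUI; something like this (the vmid, controller and unit numbers below are placeholders):

# find the VM's numeric id
vim-cmd vmsvc/getallvms
# attach the existing mapping file as a disk on SCSI controller 0, unit 1
vim-cmd vmsvc/device.diskaddexisting 1 /vmfs/volumes/datastore1/rdm/disk1-rdmp.vmdk 0 1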

doron1
Enthusiast

w00f wrote:

Did you try this? http://blog.davidwarburton.net/2010/10/25/rdm-mapping-of-local-sata-storage-for-esxi/

Absolutely did. This is what I referred to in the original post above. When I issued the vmkfstools command, I got:

Failed to create virtual disk: The destination file system does not support large files (12)

I presumed that the text of the error message was a bit off (which destination filesystem does not support what large files?), but nothing I did could make it go away.

Now that I've read your response and gone back to that post, something just hit me. The xxx-rdmp files that he mentions in step 4 (created as a result of vmkfstools -z) report a huge file size - effectively the size of the raw HDD - although in practice they are supposed to be tiny. Since my drives are 4TB, and I don't recall exactly which filesystem I placed the datastore on during my experimentation, maybe - just maybe - some file-size limit somewhere kicked in and told me that I can't have a 4TB-sized file. Hmm!
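If it helps anyone reproduce this, checking what actually backs a datastore is quick from the console (the datastore name is a placeholder):

# show the filesystem type and limits behind a given datastore
vmkfstools -P /vmfs/volumes/datastore1
# or list every mounted volume with its type (VMFS-5, NFS, vfat, ...)
esxcli storage filesystem list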

w00f wrote:


I've done it myself (not with on-board SATA, but with a 3ware RAID controller); I had a RAID5 with an existing 20TB partition that I couldn't "break". I raw-mapped the system disk on the array plus the storage one, and the VM booted like a charm with nothing to update (except adding VMware Tools).

In that case, I envy you 😉 It did not work for me.

You do say it's a 20TB "disk", so if my theory above is correct, you'd have hit the same issue - but I need to go back and see which filesystem I placed the datastore on. There might be an answer lurking there. Time to rebuild the lab.

Thank you!

w00f
Contributor

Here it is in the console:

/vmfs/volumes/51c1f457-e33056bd-2dbf-0018fe6a745e/3ware_9650 # ls -l
-rw-------    1 root     root     21913978012160 Jun 21 09:12 20TB-rdmp.vmdk
-rw-------    1 root     root                523 Jun 21 15:45 20TB.vmdk
-rw-------    1 root     root        85899345408 Jun 21 09:12 80GB-rdmp.vmdk
-rw-------    1 root     root                519 Jun 21 15:45 80GB.vmdk

Just be sure to do it on a VMFS5 partition and not on vfat.
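A quick sanity check, by the way: ls reports the mapped device's full size, but the mapping file itself should occupy almost nothing on the datastore, which du ought to confirm:

# ls shows the size of the mapped raw device...
ls -l 20TB-rdmp.vmdk
# ...while du should show only the pointer's actual footprint
du -h 20TB-rdmp.vmdk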

doron1
Enthusiast

Got it. Thanks, that helps a lot.

I will need to set it up again to test - probably more towards weekend - and will report back.

subhasis2009
Enthusiast

RDM in virtual compatibility mode supports a max of 2TB per partition, but in physical compatibility mode you can map up to 64TB on a single partition - provided your OS version supports it.
subhasis
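In vmkfstools terms, the two compatibility modes above look like this (the device path is a placeholder):

# virtual compatibility mode (-r): allows snapshots, ~2TB limit on ESXi 5.1
vmkfstools -r /vmfs/devices/disks/<device> /vmfs/volumes/datastore1/rdm/disk-rdm.vmdk
# physical compatibility mode (-z): passes SCSI commands straight through, up to 64TB
vmkfstools -z /vmfs/devices/disks/<device> /vmfs/volumes/datastore1/rdm/disk-rdmp.vmdk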

doron1
Enthusiast

Reporting back for posterity: indeed, the problem was the huge (fake) file size of the -rdmp files. It turns out that in the previous experiment I had placed the datastore on NFS (just for the experiment, not as a permanent solution), and apparently something in the NFS layer rejected a file with such a reported size (4TB). Once I moved the datastore to local storage, vmkfstools completed just fine and I (seem to) have raw-disk VMDK files.

Hope this will help someone someday.

Thanks to everyone who replied!

Not that my issues with ESXi are over: I now find myself unable to launch the console. I get "VMRC console has disconnected - attempting to reconnect..." and no console. But that's for a separate thread.

It's like pulling teeth.

bogd
Contributor

For the record, in case someone else stumbles across this discussion - RDM files cannot be used with NFS storage.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=100185...

SCSI commands are sent over the RDM without any changes, so you need underlying storage capable of accepting SCSI commands. Basically, any VMFS datastore will do.

If you do not have any other local disks on that host (to configure a VMFS datastore on them), you can use an iSCSI/FC datastore. I ended up configuring a very small (5GB) iSCSI datastore just to store my RDM mapping files.

Yes, it does seem really strange to go over the network to an iSCSI server just to get back and access my own local storage, but... this was an unsupported config from the start, so I can live with that 🙂
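In other words, only the small pointer file lives on the iSCSI-backed VMFS volume; something like this (paths are placeholders):

# the mapping file goes on the (tiny) iSCSI-backed VMFS datastore;
# the actual data blocks stay on the local raw disk it points to
vmkfstools -z /vmfs/devices/disks/<local-device> /vmfs/volumes/iscsi-rdm-store/local-disk1-rdmp.vmdk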

BreathOfIT
Contributor

The best way to do what you are trying to do, in my opinion, is to have a host system running an Intel CPU with support for VT-d.

Then buy a PCIe SATA controller.

Then pass the SATA controller directly to the guest.

I did this with a 24-bay SAN tray. Lots of people do this for home servers.
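If you go that route, the console can at least confirm what the host sees before you mark the controller for passthrough in the vSphere Client (Configuration > Advanced Settings, if I remember right):

# list PCI devices to find the SATA controller's address for passthrough
esxcli hardware pci list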

kbulgrien4freed
Enthusiast

The original web address is toast.  See the equivalent URL on archive.org:

RDM mapping of local SATA storage for ESXi | David Warburton
