VMware Cloud Community
Yoot
Contributor

Best Practices for Large Virtual Disk

I'd like to add a large virtual disk (16TB) to use as backup storage, and ideally thin provision it to allow a lot of potential free space, at least initially.  The headache is the 2TB limit for virtual disks in ESXi 5.

I know you can span 2TB virtual disks in the OS (Windows 2008 R2, in this case).  But is this a good idea?  The underlying array is RAID 6, so I guess it's safe, but spanning 8 disks sounds at least potentially worrisome to me.  Not a concern?

Performance?

RDM is less flexible and maybe not possible (raid array is local).

Anyone have suggestions as to best practices here?  What would YOU do?

6 Replies
eeg3
Commander

Do you mean the storage is all local to the host? If using a SAN, another option is a direct iSCSI connection from the guest using a regular software initiator inside the VM. If it is local and all on the same RAID set, I think spanning is a good choice.

Blog: http://blog.eeg3.net
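If the storage is on a SAN, the guest-side software initiator route can be scripted from inside the Windows VM with iscsicli.  A minimal sketch, assuming a Windows Server 2008 R2 guest; the portal address and target IQN below are hypothetical placeholders, and the commands are echoed rather than executed so they can be read without a live SAN:

```shell
# Guest-side software iSCSI sketch (Windows Server 2008 R2 iscsicli).
# PORTAL and IQN are example values only -- substitute your SAN's.
PORTAL="192.0.2.10"                          # hypothetical SAN portal IP
IQN="iqn.2011-01.net.example:backup-lun"     # hypothetical target IQN

# Quick-add the portal, then log in to the target; the LUN then shows
# up in Disk Management like any local disk, bypassing the 2TB VMDK limit.
echo "iscsicli QAddTargetPortal $PORTAL"
echo "iscsicli QLoginTarget $IQN"
```

Since the connection is made by the guest, the disk never passes through the VMFS layer at all, which is why the 2TB virtual disk limit doesn't apply.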
cdc1
Expert

For a scenario like that, I would see if I could use VT-d to tie the RAID adapter to a VM.

You could then do whatever you needed to do to that disk inside the VM, as if it were on a standalone physical box.

vLarus
Enthusiast

I don't recommend the spanning plan for the disks. :)  One bad 2TB VMDK file and it's all gone (several ways I could see that happening: snapshot problems, corruption, etc.)

Since these are local disks anyway, you're already losing some flexibility, so I recommend using RDMs.

One huge RDM for this VM.

Check out http://blog.davidwarburton.net/2010/10/25/rdm-mapping-of-local-sata-storage-for-esxi/ for configuring it.
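The linked walkthrough boils down to creating an RDM mapping file on VMFS with vmkfstools from the ESXi shell.  A minimal sketch, with a hypothetical device identifier and datastore path (list real device IDs with `ls /vmfs/devices/disks/`); the command is echoed here rather than executed:

```shell
# Local-disk RDM sketch for the ESXi shell.  DEVICE and MAPPING are
# hypothetical placeholders -- substitute your own device ID and datastore.
DEVICE="/vmfs/devices/disks/t10.ATA_EXAMPLE_DISK"              # example ID
MAPPING="/vmfs/volumes/datastore1/backupvm/backup-rdm.vmdk"

# -z creates a physical-mode (passthrough) RDM; the .vmdk on VMFS is just
# a small mapping stub pointing at the raw device.  Use -r instead for a
# virtual-mode RDM if you want snapshot support.
CMD="vmkfstools -z $DEVICE $MAPPING"
echo "$CMD"    # on a real host, run the command instead of echoing it
```

The resulting .vmdk is then attached to the VM as an existing disk, and the guest sees the full raw LUN regardless of the 2TB VMFS virtual disk limit.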

As for what I would do: I have done this before and I used RDMs, though just for 8TB. :)

vmice.net
Yoot
Contributor

I think this (VT-d/DirectPath) is a really cool idea, but, at least from some of the posts I've seen, it still sounds somewhat experimental, with people having to enter hardware-specific config parameters to avoid, e.g., IRQ conflicts (which, though perhaps unfair, reminds me of the olden days of DOS).  Maybe this has progressed since the 2010 and early-2011-era posts I've seen.

If I understand things correctly, it would also require me to buy an additional controller, since I assume (incorrectly?) that I need one non-virtualized controller to run the host.  (Or can one controller using DirectPath host VMFS, ESXi, *and* direct VM connections?  I haven't seen examples of this, anyway.)

cdc1
Expert

Yes, if you want to run other VMs off that controller, or have ESXi running from it, then a second controller for your other disks would be required.

If you just have ESXi on the disk, you could look at installing and running ESXi off a USB key instead.  But if you also need to run other VMs off that disk, then a second controller would be needed.

Another option, but one which would complicate the setup, is to run ESXi off a USB key (not the complicated part), pass the whole controller through to a VM with VT-d, and use that VM to share the "VM" storage from the controller back to the host via NFS or iSCSI, keeping the other huge LUNs on that VM (this is the complicated part).

Your only other option is to not use VT-d, add several 2TB-minus-512-byte VMDKs (the ESXi 5 maximum) to the VM, and use some sort of software RAID in the guest OS.  I would not go the JBOD route.

You can also get the LUNs from the local controller added as RDMs, but that configuration is not supported by VMware, as far as I know.  So if you go that route and run into issues, you'd be in a bind.
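On the multiple-VMDK option, a quick capacity sketch shows why the per-disk ceiling matters for a 16TB target (the 2TB-minus-512-byte figure is the ESXi 5 per-VMDK maximum; sizes here are binary TB):

```shell
# Capacity math for spanning max-size VMDKs under the ESXi 5 limit.
TB=$(( 1024 * 1024 * 1024 * 1024 ))      # bytes in one binary TB (TiB)
MAX_VMDK=$(( 2 * TB - 512 ))             # ESXi 5 per-VMDK ceiling
SPANNED=$(( 8 * MAX_VMDK ))              # eight max-size extents
SHORTFALL=$(( 16 * TB - SPANNED ))       # gap vs. a full 16 TB

echo "8 extents fall $SHORTFALL bytes short of 16 TB"
# prints: 8 extents fall 4096 bytes short of 16 TB
```

So eight maximum-size extents land 4KB shy of a true 16TB volume: close enough in practice, but worth knowing when sizing the guest-side spanned or software-RAID volume.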

Yoot
Contributor

This (the RDM solution) is what I did, and it seems to work great so far.  Thanks very much.  In the controller firmware, I carved my large RAID up into a 16.5TB virtual disk and a 1TB virtual disk, then used the method from your link to create a 16.5TB RDM, and used the 1TB disk for a regular VMFS datastore.

I am a little puzzled as to why this is unsupported, and the performance situation is yet to be determined.

Thanks very much.
