Here I go again - first posting and it's a bug!
Having waited patiently for VMware to eventually break the dreaded 2 TByte limit, I have now tried ESXi 5.5.
Let me first explain my situation: I have a server running Linux with an Areca controller that has 24 TByte storage online. This is divided into 2 TByte of RAID1 and 22 TByte of RAID5. Both volumes are GPT-partitioned and each has a single LVM PV on them. The LVM LVs carry LUKS-encrypted volumes that are mounted on the server. The boot system of the server is on a separate SSD.
I wanted to virtualize that server in order to be able to upgrade or change it at will. Usually, one would create disk files on VMware's VMFS and attach those to a VM. With all VMware versions up to 5.1 this would have been a great hassle, as the 2 TByte limit would have forced me to map eleven 2 TByte files into the server VM.
Now, with 5.5, we can use one huge 22 TByte image file. However, I did not really want to repartition and copy everything over - in fact, I do not have another 22 TByte to spare. So I had hoped to map the raw devices directly (DirectPath I/O is no option for me, as the server has an Intel K-type CPU, which lacks VT-d). I therefore used VMware vCenter Converter 5.5 to migrate the physical machine to a VM and started an ESXi 5.5 host. (Thankfully, the P2V migration now works despite the presence of partitions larger than 2 TByte in the physical machine - a problem with versions before vSphere Converter 5.5.)
Creating the raw device mappings (RDMs) on a local datastore with vmkfstools actually worked (my first try was not so successful, as RDMs cannot be created on an NFS datastore). The RDMs appeared fine and with the correct sizes:
/vmfs/volumes/52695c53-8f0e2250-5066-902b34d091bf/rdm # ls -la
total 1032
drwxr-xr-x 1 root root 1120 Oct 24 19:20 .
drwxr-xr-t 1 root root 1540 Oct 24 18:01 ..
-rw------- 1 root root 23999999901696 Oct 24 19:20 areca-huge-rdmp.vmdk
-rw------- 1 root root 504 Oct 24 19:20 areca-huge.vmdk
-rw------- 1 root root 1967999680512 Oct 24 18:28 areca-raid1-rdm.vmdk
-rw------- 1 root root 491 Oct 24 19:13 areca-raid1.vmdk
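For reference, mappings like these are created with vmkfstools roughly as follows. This is ESXi-only CLI and a sketch rather than a tested recipe; the device paths with naa.600... are placeholders (on a real host, pick the identifier from ls /vmfs/devices/disks/), and the datastore path is illustrative.

```
# Physical (pass-through) compatibility RDM: -z
vmkfstools -z /vmfs/devices/disks/naa.600... \
    /vmfs/volumes/datastore1/rdm/areca-huge.vmdk

# Virtual compatibility RDM: -r
vmkfstools -r /vmfs/devices/disks/naa.600... \
    /vmfs/volumes/datastore1/rdm/areca-raid1.vmdk
```

The only difference between the two is the compatibility mode flag; both produce a small descriptor .vmdk plus the -rdm/-rdmp mapping file seen in the listing above.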
/vmfs/volumes/52695c53-8f0e2250-5066-902b34d091bf/rdm # more areca-huge.vmdk
# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=fffffffe
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfsPassthroughRawDeviceMap"
# Extent description
RW 46874999808 VMFSRDM "areca-huge-rdmp.vmdk"
# The Disk Data Base
#DDB
ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "2917833"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.longContentID = "f456abe4dde608279c006275fffffffe"
ddb.uuid = "60 00 C2 97 a1 18 29 46-bb da 5e fd 4e 37 f5 66"
ddb.virtualHWVersion = "10"
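As a sanity check on the descriptor: the number in the extent line ("RW 46874999808") is the disk size in 512-byte sectors, and multiplying it out reproduces the byte size that ls reported for the -rdmp mapping file:

```shell
# Extent size in 512-byte sectors, copied from the "RW ..." line
sectors=46874999808
# Convert to bytes; this should equal the 23999999901696 bytes shown by ls
bytes=$((sectors * 512))
echo "$bytes"
```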
However, only the RDM smaller than 2 TByte actually works. Whenever the VM (hardware version 8, as version 10 can no longer be managed by the vSphere Client) containing the large disk is started, I get an error message that the disk image or its corresponding snapshot cannot be locked. This happens regardless of whether I define the disk as independent or not (which was, of course, what I tried first). I also tried creating the mapping with both vmkfstools -z and -r.
So, it seems that RDMs with more than 2 TByte do not work with ESXi 5.5 - still (at least with VM version 8).
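To put numbers on that observation (sizes taken from the ls output earlier in the thread): only the RAID1 mapping stays under the classic 2 TiB ceiling, which lines up with which RDM works and which fails to lock.

```shell
limit=$((2 * 1024 ** 4))      # 2 TiB in bytes = 2199023255552
raid1=1967999680512           # areca-raid1-rdm.vmdk -- this one works
huge=23999999901696           # areca-huge-rdmp.vmdk -- this one fails to lock
[ "$raid1" -lt "$limit" ] && echo "raid1 RDM is below the 2 TiB limit"
[ "$huge" -gt "$limit" ] && echo "huge RDM exceeds the 2 TiB limit"
```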
BTW: The actual error dialog (German UI) showed empty "Task details", "Error stack", and "Additional task details" sections; the only concrete information was:

Host build: 1331820
Just a guess: does the smaller RDM also have ddb.virtualHWVersion = "10" in its descriptor?
André
Yes, why?
I was just wondering whether this may cause an issue with the VM being at HW version 8.
André
I did not choose that value deliberately; the file was created entirely by vmkfstools. But the smaller RDM works fine despite the same hardware version entry.
Possibly, only VM version 10 machines are able to work with disks larger than 2 TByte.
If I upgrade the VM version, I have to use the vSphere Web Client to configure my VM afterwards. That requires Single Sign-On as a prerequisite, which in turn requires VMware tc Server - and that installs only on Windows Server 2008, not on Windows 7.
I guess I am trapped in a vicious circle just when I try to verify that... or at least have to set up a whole data center infrastructure.
Starting with ESXi 5.0 and VMFS5, pass-through (physical mode) RDMs are supported with a size of up to ~62 TB, so it should work with a hardware version 8 VM. Unfortunately, I can't tell you whether the issue is related to the HW version entry in the .vmdk file or to the fact that you are using a local disk rather than a shared LUN. Anyway, did you try changing the HW version from 10 to 8 in the .vmdk file (or even removing the entry) to see whether this makes a difference?
André
I am positive that 5.0 and 5.1 supported only 2 TB for virtual disks and for RDMs in virtual compatibility mode (physical mode was already at 64 TByte, as you wrote). This was the main reason for me to wait for 5.5.
So there is a good chance that the version really matters - but that of the VM, not of the RDM. It may well be that the APIs for the two machine generations use different sizes for requests.
Maybe I will try to upgrade the VM to version 10 just to verify that. However, I find it unacceptable that there is a thicket of prerequisites just to be able to manage version 10 VMs.
P.S.: There is an easier way to do this, namely using the vCenter Server Appliance, which already includes SSO and the vSphere Web Client and is Linux-based, unlike the installable software. BUT: Either way you need vCenter Server for the inventories - and that needs a paid license after the 60-day trial. To stay free of charge, you have to stick with VM version 9 and never upgrade to version 10 - but then you cannot use the new features like virtual disks larger than 2 TByte.
