I am migrating my VI 3.5 environment to vSphere 4.0. One of my VMs has an RDM configured, and the RDM is 8TB. Unfortunately, I cannot power on that VM on vSphere. The error message I receive is "Disks bigger than 2TB - 512B are not supported. Unable to create virtual SCSI device for scsi1:xx". I am aware of the 2TB limit; however, I could break that barrier on 3.5, so why can't I on ESX 4? Any ideas are more than welcome.
The only way you can use a single LUN bigger than 2TB in vSphere:
1) It has to be iSCSI storage
2) You connect this iSCSI LUN with a software iSCSI initiator from inside the guest OS.
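The failure in the original post follows directly from that limit. A quick sanity check in Python (the 2TB - 512B figure is from the error message and the Configuration Maximums document; the helper name is just for illustration):

```python
# vSphere 4 virtual SCSI disk / RDM ceiling: 2 TB minus 512 bytes
MAX_DISK_BYTES = 2 * 1024**4 - 512

def fits_as_rdm(lun_bytes: int) -> bool:
    """Return True if a LUN of this size can be presented as an RDM to the VM."""
    return lun_bytes <= MAX_DISK_BYTES

print(fits_as_rdm(8 * 1024**4))        # the 8TB LUN from the post -> False
print(fits_as_rdm(2 * 1024**4 - 512))  # exactly at the limit -> True
```

Anything at or under 2TB - 512B maps fine; the 8TB LUN is rejected at power-on, which is exactly the error quoted above.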
MCSA, MCTS, VCP, VMware vExpert '2009
Update. According to Configuration Maximums, the max RDM size is 2TB - 512B. In addition, VMDK size is limited to 2TB - 512B too, because of the VMFS file size limit, which is 2TB - 512B when ESX formats the volume with the maximum block size of 8MB.

IMHO, that makes the 64TB (- 16K) maximum volume size useless, because it is not possible to place large VMDK files (over 2TB) on it. Of course, one could place many VMDKs on the same large datastore (built with extents) and use dynamic disks inside the guest to combine them into one large volume. IMHO that's not a good idea, but it's possible. But what if I don't want to use dynamic disks? It is possible to use a software iSCSI initiator inside the VM - quite a poor alternative from a performance perspective.

I was very surprised by vSphere's behaviour, because ESX 3.5 handles large RDMs pretty fine. Does the 2TB LUN size limit make the 64TB volume size useless? It is a big step backward, IMHO. I have just figured out one answer: placing vswp files on large datastores can be a good solution in large environments. Any different propositions?
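To make the block-size reasoning above concrete, here is a small sketch of the published VMFS-3 per-block-size file limits (the figures are the documented VMFS-3 maximums; the dict and loop are just illustration):

```python
# VMFS-3 maximum file size depends on the block size chosen at format
# time; every tier tops out at a power-of-two size minus 512 bytes.
TB = 1024**4
GB = 1024**3

# Documented VMFS-3 maximums: block size (MB) -> max file size (bytes)
MAX_FILE_SIZE = {
    1: 256 * GB - 512,
    2: 512 * GB - 512,
    4: 1 * TB - 512,
    8: 2 * TB - 512,  # even the largest block size caps a file at 2TB - 512B
}

for block_mb, limit in sorted(MAX_FILE_SIZE.items()):
    print(f"{block_mb}MB block size -> max file size {limit} bytes")

# So an 8TB VMDK cannot exist on VMFS-3 regardless of block size:
print(8 * TB <= MAX_FILE_SIZE[8])  # False
```

This is the point of the complaint: a 64TB datastore built from extents can hold many 2TB files, but never one file larger than 2TB - 512B.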
I was very surprised by vSphere's behaviour, because ESX 3.5 handles large RDMs pretty fine.
It was clearly stated in the 3.5 docs that the maximum LUN size is 2 TB.
VMware Infrastructure 3: Update 2 and later for ESX Server 3.5, ESX Server 3i version 3.5, VirtualCenter 2.5
Table 2. Storage Maximums: Raw Device Mapping size (TB): 2
Wow, you guys just ruined my day. Right now on my ESX 3.5 farm I am running VMs with 24TB raw device mappings. I just went to add a 2TB raw LUN mapping to my ESX 4 environment and it failed! I suspect the LUN is just over 2TB, which may be causing the problem. But now I'm left with a big problem: what to do with all my 24TB LUN mappings? I do not have iSCSI set up and have no idea how it works. Our environments are all FC!!
"Crippling Microsoft is the geek equivalent of taking down the Death Star"
Just curious... In the near future I am looking at moving one of my current SANs (a Dell PE 850 connected to a SAS MD1000) to an iSCSI-based setup. Along with this migration I want to make the host a VM (no reason it shouldn't be). I am currently using about 1.8TB and expect that to grow to 3TB by the end of next year. What would you recommend as the best setup for the VM? I would be using an EQL PS4000 with SATA disks.
Along with this migration I want to make the host a VM (no reason it shouldn't be). I am currently using about 1.8TB and expect that to grow to 3TB by the end of next year. What would you recommend as the best setup for the VM? I would be using an EQL PS4000 with SATA disks.
In that case you will need to use the iSCSI initiator inside the guest to map the disk once it goes over the 2 TB limit.
How would I set that up with the iSCSI initiator inside the guest? Do I need to create a new vSwitch for iSCSI just for this VM, or can I use the same vSwitch that I currently have configured for my existing iSCSI connections? What is the best practice in this type of setup? Use a dedicated pNIC for the VM's iSCSI initiator?
I assume you use iSCSI for connecting storage to ESX via a VMkernel port. In that case you place your VMDKs on an iSCSI datastore, so all the limitations still apply.
To connect iSCSI directly to your VM, you need an iSCSI initiator for the guest OS; these are available for free or built into the OS. I recommend using a dedicated pNIC (or two, for redundancy and depending on I/O workload). I would also connect each pNIC to a separate physical switch, and consider experimenting with the frame size inside the guest OS to achieve better performance.
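As a rough sketch of the guest-side setup on a Linux VM using open-iscsi (the portal address and IQN below are placeholders for your array, and your distribution's command names may differ):

```shell
# Jumbo frames on the dedicated guest NIC -- the vSwitch, physical
# switch ports, and the array must all be set to the same MTU.
ip link set eth1 mtu 9000

# Discover targets on the array's iSCSI portal (placeholder IP)
iscsiadm -m discovery -t sendtargets -p 192.168.10.20:3260

# Log in to the discovered target (placeholder IQN); the LUN then
# appears as a local SCSI disk (e.g. /dev/sdb) inside the guest,
# bypassing the ESX-side 2TB virtual disk limit
iscsiadm -m node -T iqn.2001-05.com.equallogic:example-vol \
         -p 192.168.10.20:3260 --login
```

Windows guests can do the equivalent with the built-in Microsoft iSCSI Software Initiator.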
You can create a VM network on the same uplink as your vmkernel storage network.
To get the full picture of vSwitches and best practices, please have a read here:
Ken Cline knows his stuff