sebek
Enthusiast

2TB LUN Limit

Hi,

I am migrating from VI 3.5 to vSphere 4.0, and I am trying to move one of my VMs that has an RDM configured. The RDM is 8 TB. Unfortunately, I cannot power on that VM on vSphere. The error message I receive is "Disks bigger than 2TB - 512B are not supported. Unable to create virtual SCSI device for scsi1:xx". I am aware of the 2 TB limit; however, I can break that barrier on 3.5, so why can't I on ESX 4? Any ideas are more than welcome.
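For reference, the "2TB - 512B" figure from the error message works out as follows (a quick sketch, taking 2 TB to mean 2 * 2^40 bytes with one 512-byte sector subtracted):

```shell
# The "2TB - 512B" ceiling in bytes: 2 TB minus one 512-byte sector.
limit=$(( 2 * 1024 * 1024 * 1024 * 1024 - 512 ))
echo "max supported disk size: $limit bytes"   # 2199023255040

# An 8 TB RDM, as in the post above, is far past it:
rdm=$(( 8 * 1024 * 1024 * 1024 * 1024 ))
if [ "$rdm" -gt "$limit" ]; then
  echo "RDM exceeds the limit by $(( rdm - limit )) bytes"
fi
```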

0 Kudos
14 Replies
mcowger
Immortal

It was a 'bug' that let you do it on 3.5, and that bug has been 'corrected'.

--Matt

VCP, vExpert, Unix Geek

AntonVZhbankov
Immortal

The only way you can use a single LUN bigger than 2 TB in vSphere is:

1) It has to be iSCSI storage.

2) You connect this iSCSI LUN with a software iSCSI initiator from inside the guest OS.


---

MCSA, MCTS, VCP, VMware vExpert '2009

http://blog.vadmin.ru

sebek
Enthusiast

Just as I supposed. Disappointing :( Thanks anyway, guys.

0 Kudos
sebek
Enthusiast

Update. According to Configuration Maximums, the max RDM size is 2 TB - 512 B. In addition, vmdk size is limited to 2 TB - 512 B as well, because of the VMFS file size limit, which is also 2 TB - 512 B when ESX formats the volume with the maximum block size of 8 MB.

IMHO, that makes the 64 TB (- 16 KB :) ) maximum volume size useless, because it is not possible to place large vmdk files (over 2 TB) on it. Of course, one could place many vmdks on the same large datastore (built with extents) and use dynamic disks to combine the vmdks into one large volume inside the guest. IMHO that is not a good idea, but it is possible. And what if I don't want to use dynamic disks? It should be possible to use a software iSCSI initiator inside the VM, though that is a rather poor alternative from a performance perspective.

I was very surprised by vSphere's behaviour, because ESX 3.5 handles large RDMs just fine. Does the 2 TB LUN size limit make the 64 TB volume size useless? It is a big step backward, IMHO. I have just figured out one answer: placing vswp files on large datastores can be a good use in large environments. Any other propositions?
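The dynamic-disk workaround above, translated to a Linux guest, would look roughly like this with LVM (the device names /dev/sdb through /dev/sde are placeholders for the individual sub-2 TB vmdks; this is a sketch, not a recommendation):

```shell
# Hypothetical Linux guest: combine several sub-2TB vmdks, seen by the guest
# as /dev/sdb../dev/sde, into one large logical volume with LVM.
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde        # initialize each vmdk as a physical volume
vgcreate bigvg /dev/sdb /dev/sdc /dev/sdd /dev/sde  # group them into one volume group
lvcreate -l 100%FREE -n bigdata bigvg               # one logical volume spanning all the space
mkfs.ext3 /dev/bigvg/bigdata                        # ext3 was current in the ESX 4 era
mount /dev/bigvg/bigdata /data
```

Like Windows dynamic disks, this striping/concatenation happens entirely inside the guest, so the 2 TB per-vmdk limit still applies to each member disk.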

thanks,

0 Kudos
AntonVZhbankov
Immortal

I was very surprised of vsphere behaviour, because esx 3.5 handles large rdms pretty fine.

It was clearly stated in the 3.5 docs that the maximum LUN size is 2 TB.

Configuration Maximums

VMware Infrastructure 3: Update 2 and later for ESX Server 3.5, ESX Server 3i version 3.5, VirtualCenter 2.5

Table 2. Storage Maximums

Item                          Maximum
Raw Device Mapping size (TB)  2


---

MCSA, MCTS, VCP, VMware vExpert '2009

http://blog.vadmin.ru

0 Kudos
merovingianA51
Contributor

Wow, you guys just ruined my day. Right now on my ESX 3.5 farm I am running VMs with 24 TB raw device mappings. I just went to add a 2 TB raw LUN mapping to my ESX 4 environment and it failed! I suspect the LUN is just over 2 TB, which may be causing the problem. But now I'm left with a big problem: what to do with all my 24 TB LUN mappings. I do not have iSCSI set up and have no idea how it works. Our environments are all FC!!

----


"Crippling Microsoft is the geek equivalent of taking down the Death Star"

0 Kudos
AntonVZhbankov
Immortal

Go back to 3.5 until you figure out what to do with the storage.


---

MCSA, MCTS, VCP, VMware vExpert '2009

http://blog.vadmin.ru

0 Kudos
s1xth
VMware Employee

Anton..

Just curious... In the near future I am looking at moving one of my current SANs (a Dell PE 850 connected to a SAS MD1000) to an iSCSI-based setup, and along with this migration I want to make the host a VM (no reason it shouldn't be). I am currently using about 1.8 TB and expect that to grow to 3 TB by the end of next year. What would you recommend as the best setup for the VM? I would be using an EQL PS4000 with SATA disks.

Thanks!!

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
0 Kudos
larstr
Champion

along this migration I want to make the host a VM (no reason it shouldn't be). I am currently using about 1.8 TB and expect that to grow to 3 TB by the end of next year. What would you recommend for the best setup for the VM? I would be using an EQL PS4000 with SATA disks.

In that case you will need to use the iSCSI initiator inside the guest to map the disk once it goes over the 2 TB limit.

Lars

0 Kudos
s1xth
VMware Employee

How would I set that up with the iSCSI initiator inside the guest? Do I need to create a new vSwitch for iSCSI just for this VM, or can I use the same vSwitch that I currently have configured for my existing iSCSI connections? What is the best practice in this type of setup? Use a dedicated pNIC for the VM's iSCSI initiator?

Thanks!!

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
0 Kudos
sebek
Enthusiast

I assume you use iSCSI for connecting storage to ESX via a VMkernel port. In that case you place your vmdks on an iSCSI datastore, and thus all the limitations still apply.

To connect iSCSI to your VM, you need an iSCSI initiator for the guest OS; these are available for free or are built into the OS. I recommend using a dedicated pNIC (or two), both for redundancy and depending on the I/O workload. I would connect each pNIC to a separate physical switch, too, and consider playing with the frame size inside the guest OS to achieve better performance.
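On a Linux guest with open-iscsi, the in-guest connection sketched above looks roughly like this (the portal IP and target IQN are placeholders, not values from this thread):

```shell
# Hypothetical Linux guest using open-iscsi; portal IP and IQN are placeholders.
# Discover targets advertised by the array:
iscsiadm -m discovery -t sendtargets -p 192.168.10.50:3260
# Log in to the discovered target:
iscsiadm -m node -T iqn.2001-05.com.example:big-lun -p 192.168.10.50:3260 --login
# The LUN then appears as a normal block device (e.g. /dev/sdb) inside the
# guest, bypassing the ESX 2 TB virtual disk / RDM limit entirely.
```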

0 Kudos
larstr
Champion

You can create a VM network on the same uplink as your vmkernel storage network.
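From the ESX service console, adding a VM port group on the existing iSCSI vSwitch could look roughly like this (the vSwitch and port-group names are assumptions for illustration):

```shell
# Sketch for ESX 4 classic; "vSwitch1" and "Guest-iSCSI" are assumed names.
esxcfg-vswitch -A "Guest-iSCSI" vSwitch1       # add a VM port group on the iSCSI vSwitch
esxcfg-vswitch -v 0 -p "Guest-iSCSI" vSwitch1  # optional: set the VLAN ID (0 = none)
esxcfg-vswitch -l                              # list vSwitches and port groups to verify
```

The VM's second vNIC is then attached to "Guest-iSCSI", so guest iSCSI traffic shares the same uplinks as the VMkernel storage network.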

To get the full picture of vSwitches and best practices, please have a read here:

http://kensvirtualreality.files.wordpress.com/2009/12/the-great-vswitch-debate-combined.pdf

Ken Cline knows his stuff :)

Lars

0 Kudos
DSTAVERT
Immortal

I would think this might be a good candidate for VMDirectPath.

-- David -- VMware Communities Moderator
0 Kudos
sebek
Enthusiast

If VMotion is not necessary.

0 Kudos