VMware Cloud Community
pitogo
Contributor

2TB limit and still barfing

I bound a 2TB (2048GB) LUN, but ESX is still barfing at me and I can't format it as a VMFS volume. What the heck? What is the real limit? Geez, it's almost 2010 and they're still on 2TB limits?

9 Replies
mcowger
Immortal

The limit is 2TB minus 512 bytes, actually. So 2199023255040 bytes is the limit.

The limit is imposed by the SCSI-2 protocol, which only has 32-bit block addressing.
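That per-LUN ceiling falls straight out of the 32-bit block address; a quick sketch of the arithmetic, assuming the standard 512-byte block size:

```python
# SCSI-2 READ CAPACITY reports the last LBA as a 32-bit value,
# so the highest addressable block is 2**32 - 1.
SECTOR_SIZE = 512                  # bytes per block on a standard LUN
MAX_LBA = 2**32 - 1                # 4294967295, the largest 32-bit LBA

max_bytes = MAX_LBA * SECTOR_SIZE  # total addressable capacity
print(max_bytes)                   # -> 2199023255040, i.e. 2TB - 512 bytes
print(2 * 1024**4 - max_bytes)     # -> 512, the shortfall from a full 2TB
```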

--Matt

VCP, vExpert, Unix Geek
pitogo
Contributor

Yes, great! Thanks, and that's exactly my point: it's thinking like it's 1994. Why still be hampered by SCSI-2 technology?

Hopefully 4294967295 blocks is the magic key.

Rumple
Virtuoso

Don't forget, there are other considerations beyond size.

The recommendation for VMFS volumes is still 10-15 VMs per volume. This is due to SCSI reservation issues you can have with snapshots causing systems to pause too long when multiple snapshots hit the LUN at once (like during backups). This is also a problem during boot storms, when multiple VMs start at once.

If you want a massive volume with lots of VMs on it, look at NFS. The technology itself is different and is based on file-level locking, not volume-level locking... so other than performance, reservations are not a problem.

Even in the physical world, in a lot of cases we break up larger volumes into smaller 1-2TB max volume sizes for recovery and backup performance reasons (since most enterprise backup systems work with multiple streams across multiple volumes at once).

A single 4TB volume backs up 4x as slow using HP Data Protector as 4x1TB volumes, due to concurrency...
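The concurrency argument is easy to put numbers on. A toy model, assuming each volume gets its own backup stream; the 100 MB/s per-stream throughput is made up purely for illustration:

```python
STREAM_MBPS = 100  # assumed per-stream backup throughput; illustrative only

def backup_hours(total_tb, n_volumes):
    """Wall-clock hours when each volume backs up on its own stream."""
    per_volume_mb = total_tb * 1024 * 1024 / n_volumes
    # Volumes run concurrently, so wall time is one volume's time.
    return per_volume_mb / STREAM_MBPS / 3600

one_big = backup_hours(4, 1)     # one 4TB volume, a single stream
four_small = backup_hours(4, 4)  # four 1TB volumes, four parallel streams
print(one_big / four_small)      # -> 4.0, the 4x difference described above
```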

Just because you can put a Porsche engine in a Chevette doesn't mean you should.

pitogo
Contributor

I do the opposite, and I rarely put multiple VMs in the same LUN; only when I know they are low I/O. Most of my needs are for high I/O and lots of storage for single VMs in a large cluster. So far, no noticeable difference between VMFS and RDM storage for the high-I/O volumes.

It's mainly for Lotus Notes. I need the speed of block storage and the space. Storage is evenly split among 11 VMs, with each server splitting its storage into different data types: mail DB, attachment DB, and archive DB, all carved from 35TB. It's a constant battle for space in the ILM of it all. One archive server has a 3.5TB volume using iSCSI, but 1GbE isn't fast enough, and the backend of the array only has 4 Ethernet and 12 FC ports. 10GbE is where it will be at.

mcowger
Immortal

4294967295 blocks is right: with regular 512-byte blocks, 4294967295 blocks x 512 bytes works out exactly to the 2199023255040-byte limit.

Why are they still using SCSI-2? Not exactly sure, though I suspect it's a compatibility thing - you'd be surprised at the number of cheap, low-end (and even higher-end!) arrays that report that they will handle SCSI-3 semantics, but don't do it to spec :).

--Matt

VCP, vExpert, Unix Geek
Texiwill
Leadership

Hello,

Not sure I would do 1 VM per LUN, just seems like a waste of disk space... But if you have storage to burn, go for it... The key here is that instead of making one giant 2TB-minus-512-byte LUN, make several 1TB LUNs and then, using Dynamic Disks or LVM, create a LARGE filesystem within your VM. You may even set up striping, etc., if you wanted to do so.


Best regards,
Edward L. Haletky VMware Communities User Moderator, VMware vExpert 2009

Virtualization Practice Analyst
Now Available: 'VMware vSphere(TM) and Virtual Infrastructure Security'
Also available: 'VMware ESX Server in the Enterprise'
SearchVMware Pro (http://www.astroarch.com/wiki/index.php/Blog_Roll) | Blue Gears | Top Virtualization Security Links
Virtualization Security Round Table Podcast (http://www.astroarch.com/wiki/index.php/Virtualization_Security_Round_Table_Podcast)

RParker
Immortal

Yes, great! Thanks, and that's exactly my point: it's thinking like it's 1994. Why still be hampered by SCSI-2 technology?

You know, everyone can defend VMware: the SCSI-2 spec, grandfathered specs, the technology works fine, limit the number of VMs, if it ain't broke don't fix it... BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH

-deep breath-

BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH BLAH

Who cares? Like he SAID, it's 2010. When are WE going to move forward? Excuses are one thing; getting us to the NEXT phase is another.

I say he is right. It's 2010. We have SINGLE DRIVES bigger than 2TB. *HELLO?!?!?!?!* So when is this going-with-the-flow technology and these OLD standards going to stop?

2TB is a ridiculously LOW point, I don't care how much you defend it, it's STILL low. We have a 40TB array, we use Fibre Channel, which means we have to break that up into 20 (minimum) LUNs. That's absolutely ridiculous. Why? Because deduplication can only work at the volume level (blame NetApp, but it's STILL because of the 2TB limit), and therefore we are getting 45% deduplication... ONLY.

This is why people applaud NFS: not because it's better, but because it's NOT limited to a LUN. I like LUNs, but let's be real, we could get more like 80% deduplication if we could ditch the 2TB limit.

What's next, let's keep 15-gallon gas tanks too, like they did in the '70s, and LPs and 8-tracks too... yeah! There's a reason we call it PROgression... so we can MOVE forward and not stay stuck in the past...

I agree, it's time to STOP the 2TB limit, I don't care what the EXCUSE is, FIX IT! Excuses are for people that don't know how to do things right.

Texiwill
Leadership

Hello,

There is a limit; we have to work within that limit regardless of our desire for something different. There are limited options for working within these limitations. I neither defend nor denigrate the current limitations: they exist, we work within them, and we find ways around them. BTW, with 'extents', which can join multiple LUNs together as one, the limit is 64TB - 512 bytes... So there is a mechanism to use your 40TB array as one giant VMFS, even if this is just not recommended from a performance perspective.
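The extent arithmetic is easy to sanity-check. A rough sketch, assuming the VMFS-3 maximum of 32 extents with each extent capped at the per-LUN SCSI-2 limit (per-extent metadata overhead is glossed over here):

```python
PER_LUN_LIMIT = 2 * 1024**4 - 512  # bytes: the 2TB - 512 SCSI-2 cap per LUN
MAX_EXTENTS = 32                   # VMFS-3 extents that can be joined

max_vmfs_bytes = MAX_EXTENTS * PER_LUN_LIMIT
print(max_vmfs_bytes / 1024**4)    # just under 64TB

# The 40TB array from the thread fits in one extent-backed VMFS volume:
array_bytes = 40 * 1024**4
print(array_bytes <= max_vmfs_bytes)  # -> True
```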

It is still better to have more than one LUN in use, to balance reads/writes across as many queues as possible within the HBA. Granted, with a true Multipath Plugin you can do this at the HBA instead... But for those without Enterprise Plus licenses or PowerPath, etc., we are stuck with the limits of the OS we are using... ESX.


Best regards,
Edward L. Haletky VMware Communities User Moderator, VMware vExpert 2009

Virtualization Practice Analyst
Now Available: 'VMware vSphere(TM) and Virtual Infrastructure Security'
Also available: 'VMware ESX Server in the Enterprise'
SearchVMware Pro (http://www.astroarch.com/wiki/index.php/Blog_Roll) | Blue Gears | Top Virtualization Security Links
Virtualization Security Round Table Podcast (http://www.astroarch.com/wiki/index.php/Virtualization_Security_Round_Table_Podcast)

larstr
Champion

Who cares? Like he SAID, it's 2010 . . . . .

I say he is right. It's 2010. . . . . . .

Time surely passes quickly. But in the rest of the world we're still in 2009, or maybe the Year of the Earth Ox if you're in China.
