VMware Cloud Community
Rhomboid
Contributor

2.7TB drive space, 744GB max datastore size?

I just installed ESXi 4.0 U1 on a machine with an Adaptec 5405 SAS/SATA RAID controller that I bought specifically for this purpose, since it's on the compatible hardware list. There are four 1TB drives attached, configured in RAID-5.

Under "Configuration -> Storage" I see the correct capacity (2.73 TB) , yet when I go to create a datastore the maximum size is 744 GB. What's going on here? I've got no snapshots or anything as this is a brand new, clean install. How can I use the drive space I just bought?

7 Replies
golddiggie
Champion

I've lost track of how many times this has needed to be said...

ESX VMDKs/LUNs have an absolute maximum size of 2TB minus 512 bytes. If a volume goes over that, ESX subtracts the 2TB-512B from the total size and presents only what's remaining as available. Carve the array/logical volume into two chunks under that limit and you'll be fine.
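
Rough arithmetic showing where the 744 GB comes from (my own back-of-the-envelope numbers; the exact figure depends on controller overhead and TB/TiB rounding):

    # Back-of-the-envelope check of the 744 GB figure (assumed numbers,
    # ignoring controller metadata overhead and exact rounding).
    raw_bytes = 3 * 10**12              # 4x 1 TB drives in RAID-5 -> ~3 TB usable
    usable_gib = raw_bytes / 2**30      # ~2794 GiB, which the client shows as 2.73 TB
    vmfs_cap_gib = 2 * 1024             # 2 TB (minus 512 bytes) per-LUN/VMDK cap
    leftover_gib = usable_gib - vmfs_cap_gib
    print(round(usable_gib), round(leftover_gib))   # ~2794 and ~746, i.e. roughly the 744 GB offered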

This is just one reason NOT to use large local datastores. With a SAN, it's fairly easy to carve the array up into LUNs that are small enough to be recognized and sized to do the job right, i.e. just what you expect to need for the X VMs that will go on that LUN (keeping it under 12-15 total per LUN).

Network Administrator

VMware VCP4

Consider awarding points for "helpful" and/or "correct" answers.

Rhomboid
Contributor

Thanks. I had finally pieced together the right search terms (744GB appears to be the magic one) and found other people running into this. The 5405 doesn't have the ability to split things up; you can only select drives and a RAID level. I saw one person say it worked after a firmware update on the same card, so I'm trying that now. If it doesn't work I'm just going to go back to VMware Server.

This is a research box that generally only runs four VMs or so. A SAN is not a cost-effective option, but VMware Server lags noticeably, so I was hoping to squeeze a bit more performance out of ESXi. I'm a little surprised NFS is a datastore option when local storage seems to be such a poor one. I've done a lot of testing on DAS systems and they can be just as fast as, or faster than, a lot of SANs (I'm evaluating a PCI flash storage card, for example). I guess NFS is for the guys with shiny NetApps and such...

Thanks for the answer. I was actually just about to mark the question as answered and tell people to ignore it, since I'd found the other posts...

Rhomboid
Contributor

My original searches didn't turn up an answer. I finally got the right terms in and found other answered questions about the same issue.

J1mbo
Virtuoso

Do you really need all that space? 4x 1TB SATA disks will perform much better as RAID-10, and that keeps you within the 2TB limit.
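
The capacity math for the two layouts (my own quick figures, ignoring controller overhead):

    # Usable space for 4x 1 TB drives under each RAID level (rough, assumed numbers).
    drives, size_tb = 4, 1.0
    raid5_usable = (drives - 1) * size_tb     # 3.0 TB -> blows past the 2TB datastore cap
    raid10_usable = (drives // 2) * size_tb   # 2.0 TB -> fits in a single datastore
    print(raid5_usable, raid10_usable)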

http://blog.peacon.co.uk

Please award points to any useful answer.

davidpanchina
Contributor

2TB limit.

golddiggie
Champion

I would go with the RAID 10 recommendation for the four drives. There's a 99% chance you'll see no worse, and probably better, overall storage performance that way. With only four drives in a RAID 5 array, the write penalty is still too high; RAID 5 only starts to become worth it once you go beyond six drives in the array.
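
To put rough numbers on that penalty (my own assumptions: ~75 random IOPS per 7.2k SATA spindle, the usual write penalty of 4 for RAID 5 and 2 for RAID 10):

    # Rough random-write comparison for a 4-disk array (assumed per-spindle IOPS).
    spindles, iops_per_disk = 4, 75
    raw_iops = spindles * iops_per_disk       # ~300 back-end IOPS total
    raid5_writes = raw_iops / 4               # each host write costs 2 reads + 2 writes
    raid10_writes = raw_iops / 2              # each host write costs 2 mirrored writes
    print(raid5_writes, raid10_writes)        # ~75 vs ~150 random write IOPS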

Network Administrator

VMware VCP4

Consider awarding points for "helpful" and/or "correct" answers.

Napsty
Contributor

I had a similar issue on a newly installed ESXi 4.1 host with a 2.86 TB disk. You have to do some manual steps to be able to use the full disk. I wrote a how-to here: http://www.claudiokuenzler.com/ithowtos/vmware_esxi_4.1_disk_not_full_used.php

Maybe it helps you.
