VMware Cloud Community
tgeyer
Enthusiast

FC LUN sizing

I have the fortunate opportunity to design a brand-new FC SAN for vSphere use, from the ground up. I'm trying to decide how to carve the RAID sets up into LUNs. The ESX 4 documentation does not provide clear best-practice advice on sizing LUNs for VMFS, so I wonder if there is a consensus out there on the best way to do this. Here's the scenario:

Given, for instance, a 1 TB RAID set, would it be better to allocate one 1 TB LUN, two 500 GB LUNs, or four 250 GB LUNs?

No single VM will require more than 250 GB of storage.

The same number of VMs would occupy the storage in each case.

With one large LUN, I can see an upside in the flexibility that comes from pooling a storage resource: storage over-commitment with thin provisioning, fewer space restrictions on snapshots, and so on.

Is there a downside to consider?

Any insight, opinions, or comments would be appreciated.

TG

I believe the RAID type is irrelevant to this discussion, but feel free to correct me if I'm wrong.
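For concreteness, here is the rough arithmetic behind the three options (the average VM size and thin-provisioning ratio below are placeholder assumptions, not real numbers from my environment):

# Rough sizing arithmetic for a 1 TB RAID set carved three ways.
# Assumptions (illustrative only): average provisioned VM size of
# 60 GB, written thin at roughly 50% actual usage.
TOTAL_GB = 1024
AVG_VM_PROVISIONED_GB = 60      # assumption
THIN_USAGE_RATIO = 0.5          # assumption

for luns in (1, 2, 4):
    lun_gb = TOTAL_GB / luns
    # How many average VMs fit per LUN, counting only thin (actual) usage
    vms_per_lun = lun_gb // (AVG_VM_PROVISIONED_GB * THIN_USAGE_RATIO)
    print(f"{luns} x {lun_gb:.0f} GB LUNs -> ~{vms_per_lun:.0f} VMs per LUN "
          f"({vms_per_lun * luns:.0f} total, before snapshot headroom)")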

5 Replies
RParker
Immortal

Given, for instance, a 1 TB RAID set, would it be better to allocate one 1 TB LUN, two 500 GB LUNs, or four 250 GB LUNs?

Well, I am sure there are many answers, but this also isn't clear-cut anymore. For one thing, new SANs have deduplication, so the larger the LUN, the better the deduplication.

I started making LUNs 2 TB for this reason. I never liked the whole small-LUN thing; it's just old thinking. The ONLY reason LUNs were small is that they were based on the smaller drives of 15 years ago. That's all. Twenty 9 GB or 18 GB drives make a pretty small LUN by comparison, but they are still 20 disks.

20 disks NOW are easily 1 TB each, at least 400 GB, so do the math on that. You STILL have the same performance and the SAME disk array (with better components), so why not make the LUNs as big as possible? Anything else doesn't make sense; it's a waste. More LUNs means more overhead and more virtual disks, each with overhead. I am not a fan.
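To spell that math out (the disk counts and sizes are the ones mentioned above; the snippet is just illustrative arithmetic):

# Same spindle count, very different capacity: the old reason for
# small LUNs has disappeared as drive sizes have grown.
DISKS = 20
for disk_gb in (18, 400, 1000):   # 18 GB-era drives vs. the sizes above
    raw_tb = DISKS * disk_gb / 1024
    print(f"{DISKS} x {disk_gb} GB drives -> ~{raw_tb:.1f} TB raw, "
          f"same {DISKS} spindles behind the LUN(s)")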

The bigger the better. People will say that if the drives fail you LOSE all that data; so what? That's why we have backups! If you lose a LUN you restore the VMs, and as each VM comes online you can start it; you don't have to wait for the entire LUN to be rebuilt. I still say small LUNs are a waste.

The KEY is the drives. Are you using SATA (I am guessing so, given the drive size)? If so, your performance is going to be slow... once you start pushing the I/O, SATA just can't keep up. SAS is better, so in this case (with SATA) you should go RAID 10.

If you have SAS, then RAID 5 is sufficient; make it one big LUN. Besides, if you make four LUNs they still go across those same disks... so why even bother with the extra overhead?
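For the capacity side of that RAID 5 vs. RAID 10 choice, a rough sketch (20 disks and 400 GB drives are taken from the numbers above; hot spares and formatting overhead are ignored):

# Usable capacity of a 20-disk set under the two RAID levels discussed.
# RAID 5 loses one disk to parity; RAID 10 mirrors, so half the disks.
DISKS = 20
DISK_GB = 400

raid5_usable = (DISKS - 1) * DISK_GB
raid10_usable = (DISKS // 2) * DISK_GB
print(f"RAID 5 : {raid5_usable / 1024:.1f} TB usable")
print(f"RAID 10: {raid10_usable / 1024:.1f} TB usable")
# Whether that space is presented as 1 LUN or 4, the I/O still lands
# on the same 20 spindles.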

AndreTheGiant
Immortal

There is also the number of concurrent hosts / VMs per single LUN to consider.

As a best practice, this (for VMs that are servers) should be less than 20 (OK, it depends on storage type, cache, disks, RAID level, ...).

So having very large LUNs could be a problem (unless you also have very large VMDK files).

Another point: if your storage has active/active storage processors (at least at the LUN level), then at least two LUNs are recommended (one owned by each storage processor).
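A minimal sketch of how those two guidelines combine into a LUN count (the 35-VM figure is just an assumed example):

import math

# Assumption: ~35 general-purpose server VMs to place.
VM_COUNT = 35
MAX_VMS_PER_LUN = 20      # the rough ceiling discussed above

luns_needed = math.ceil(VM_COUNT / MAX_VMS_PER_LUN)
# Round up to an even number so the LUNs can be balanced across the
# two storage processors of an active/active array.
if luns_needed % 2:
    luns_needed += 1
print(f"{VM_COUNT} VMs -> at least {luns_needed} LUNs "
      f"({luns_needed // 2} per storage processor)")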

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
Rumple
Virtuoso

"20 disks NOW are easily 1 TB each, at least 400 GB, so do the math on that. You STILL have the same performance and the SAME disk array (with better components), so why not make the LUNs as big as possible? Anything else doesn't make sense; it's a waste. More LUNs means more overhead and more virtual disks, each with overhead. I am not a fan."

It really comes down to utilization (performance) and the effect of SCSI reservations on those VMs.

Typically VMware recommends 10-15 VMs max per LUN, because every time there is a snapshot or a disk-growth event (while snapshots are active), the entire LUN gets locked for a few milliseconds... usually not enough for the OS to even notice. But put 30 VMs with a combination of SQL, Exchange, AD, web servers, etc. on a LUN, then run backups where a significant number (or even 4 or 5) of highly active VMs are all locking the disk at the same time or one right after the other, and you are going to start seeing SCSI reservation errors in the logs PDQ...
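If you want to see whether a host is already hitting this, here is a rough sketch of the kind of log check I mean (the /var/log/vmkernel path and the "reservation conflict" wording are assumptions; check what your ESX build actually logs and adjust):

# Count SCSI-reservation-related messages in the ESX host's vmkernel log.
# Both the log path and the matched message fragment are assumptions.
import glob

PATTERN = "reservation conflict"   # assumed message fragment
count = 0
for path in glob.glob("/var/log/vmkernel*"):
    with open(path, errors="ignore") as log:
        for line in log:
            if PATTERN in line.lower():
                count += 1
print(f"Lines mentioning '{PATTERN}': {count}")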

Personally, if I could do nothing but NFS anymore I would be the happiest person alive, just for the file-level vs. LUN-level locking... and the ease of growing and shrinking the volume without doing anything in ESX... and the fact that I've seen 800+ VMs on a single NFS mount with NetApp dedupe turned on with no impact to performance (none that was noticeable, anyhow).

tgeyer
Enthusiast

Thanks to everyone who responded. I wish I had more Helpful and Correct buttons to press; you were all helpful.

Datto, thanks for the links to the Epping site. I've added it to my VMware resource favorites folder. So many expert sites with good advice, so little time to wade through them all. The long discussion thread leads me to conclude there is no true best practice for LUN sizing, which is probably why the vSphere documentation doesn't address the issue.

Rumple - I had considered going NFS, but we're heavily invested in FC infrastructure for our new equipment, and NFS over 1 Gb connections just wasn't an option.

"Good practice" take-aways from the time I've spent looking into the subject:

1. Isolate high-I/O VMs on their own LUNs.

2. Limit the number of average-I/O VMs per LUN to 15-20 (a rough packing sketch follows below).

3. SCSI locking is less of a performance concern with ESX 4.

4. Don't spend too much time on this subject; I/O performance bottlenecks are easily tuned later by simply moving virtual disks around with Storage vMotion.
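For illustration, a minimal sketch of what points 1 and 2 look like in practice (the VM inventory and I/O classes below are made up):

# Place high-I/O VMs on dedicated LUNs; pack the rest at <= 15 per LUN.
# The inventory below is purely illustrative.
MAX_AVG_VMS_PER_LUN = 15

vms = [("sql01", "high"), ("exch01", "high"),
       ("web01", "avg"), ("web02", "avg"), ("ad01", "avg"),
       ("file01", "avg"), ("app01", "avg"), ("app02", "avg")]

dedicated = [name for name, io in vms if io == "high"]
shared = [name for name, io in vms if io == "avg"]

layout = {f"LUN-{name}": [name] for name in dedicated}
for i in range(0, len(shared), MAX_AVG_VMS_PER_LUN):
    layout[f"LUN-shared-{i // MAX_AVG_VMS_PER_LUN + 1}"] = \
        shared[i:i + MAX_AVG_VMS_PER_LUN]

for lun, members in layout.items():
    print(lun, "->", ", ".join(members))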

Thanks again.

TG
