VMware Cloud Community
Strago
Contributor

Calculate VMs per LUN

1) Is this article still accurate for vSphere 4.1? : http://www.yellow-bricks.com/2009/07/07/max-amount-of-vms-per-vmfs-volume/

2) If so, please help me fill in the blanks for the chart:

(n) : is this a number I get from my storage vendor?

(a) : do I get this from client OS level metrics like perfmon?

(d) : not sure what this is.

3) Is it no longer recommended to alter queue depth settings? rparker states this here: http://communities.vmware.com/thread/249456

Thanks,

Jaime

RParker
Immortal

All I can tell you is that the number of VMs per LUN isn't a fixed answer, it's an estimate. It depends on the size of the VMs and the size of the LUN.

If you have Fibre Channel, it's a very robust system. You can have millions of files on a single LUN. ESX may manage the datastore on those LUNs, but even 20 or 30 VMs shouldn't be a significant factor, so those published numbers are based on a very conservative estimate. I have tested this myself with 40 VMs accessed by 8 ESX hosts on a single LUN and experienced no performance impact versus only 5 VMs on a different LUN. I have tried many configurations, but it always comes down to your storage.

That is the single biggest, most important element. If you don't have good CENTRAL storage, nothing else matters. Use proper configurations with lots of spindles and high-IO disks such as SAS. SATA will have a HUGE impact on performance, so don't expect the same performance from SATA disks that you get from SAS, because you won't.

(n) : is this a number I get from my storage vendor?

Yes, that comes from your SAN vendor.

You can use this as a starting point, but it's not a definitive guide. I can dispute many of these numbers, because we have been running more than this for over 3 years with no issues. The issues come from ESX itself (mostly due to 32-bit vs 64-bit, which has since been fixed). Early versions of ESX were a lot less resilient; ESX 4.0 has been much better, so take that into account as well.

(a) : do I get this from client OS level metrics like perfmon?

No, from the host: esxtop in ESX.
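If you want numbers you can keep rather than the live screen, esxtop also has a batch mode; the flags below are standard in ESX 4.x, though the sample interval, count, and output path here are just illustrative:

```shell
# Capture esxtop in batch mode: one sample every 5 seconds, 12 samples
# (about a minute of data). The resulting CSV can be opened in perfmon
# or a spreadsheet to review per-LUN latency and queue statistics.
esxtop -b -d 5 -n 12 > /tmp/esxtop-capture.csv
```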

3) Is it no longer recommended to alter queue depth settings? rparker states this here:

Yes, and I stand by that remark. You should NEVER alter a setting just for the sake of altering it. You may not even need to. Adjust queue depth ONLY IF you have issues; then you can tune it to suit your needs. The new default is 64, and while it should be watched, very few problems can actually be attributed to queue depth. It's a PER-HOST setting, so it still depends on your SAN and on the other hosts attached; increasing or decreasing it could have a negative impact on other systems, which is why I say don't touch it.
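For reference, if monitoring really does point at full queues, the HBA queue depth on classic ESX 4.x is changed as a driver module option. This is a sketch, not a recommendation: the module name and option below apply to QLogic HBAs (Emulex uses a different module and option), and a host reboot is required for the change to take effect:

```shell
# Show the options currently set on the QLogic driver module
esxcfg-module -g qla2xxx

# Set the per-LUN queue depth to 64 (takes effect after reboot)
esxcfg-module -s ql2xmaxqdepth=64 qla2xxx

# Keep the VMkernel's outstanding-request limit in line with the
# new queue depth, as VMware advises when changing one or the other
esxcfg-advcfg -s 64 /Disk/SchedNumReqOutstanding
```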

Max VMs per LUN is a recommendation by VMware. They have to be conservative, otherwise they can't support VMs, so these numbers must be treated as a general guideline. If, however, you go over the numbers they recommend and you have problems, they may not be able to help. It still depends on MANY factors; one set of numbers cannot be applied to EVERY environment.

Strago
Contributor

Our SAN is pretty solid; I'm just being proactive. We did get SCSI reservation errors pre-ESX 3.5, and I have sized LUNs more conservatively since then.

How can I confirm the new default is 64? Is DQLEN from esxtop what we're talking about? It shows 32 on my ESXi 4.1 host.

Should Disk.SchedNumReqOutstanding from vCenter and 'execution throttle' in the QLogic BIOS be kept in sync with the above?

thakala
Hot Shot

The DQLEN value seen in esxtop is your device queue depth. To see whether you have a problem with your queue depth, monitor the ACTV (active IOs) and QUED (queued IOs) values in esxtop; if you constantly see a number close to DQLEN in ACTV, then your queues are full and you could benefit from increasing the queue depth. High values in VMkernel IO latency (KAVG) can also be an indication of ESX host-based storage issues.
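The rule of thumb above can be sketched as a tiny helper; the function name and thresholds are my own for illustration, not anything esxtop itself reports:

```shell
# Flag a saturated device queue from esxtop counters:
#   ACTV  = active I/Os on the device
#   QUED  = I/Os queued by the VMkernel
#   DQLEN = device queue depth
# If I/Os are queuing, or ACTV is pinned at DQLEN, the queue is full.
queue_saturated() {
  actv=$1; qued=$2; dqlen=$3
  if [ "$qued" -gt 0 ] || [ "$actv" -ge "$dqlen" ]; then
    echo "saturated"
  else
    echo "ok"
  fi
}

queue_saturated 31 8 32   # I/Os queuing: a deeper queue may help
queue_saturated 4 0 32    # plenty of headroom
```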

Please understand that by increasing queue depth you could increase the throughput of a single VM, but at the same time you could hurt storage responsiveness for the other VMs.

Quick edit: by the way, to get rid of SCSI reservation problems for good, upgrade to vSphere 4.1 and have your storage vendor upgrade your storage array with VAAI-capable firmware. To my knowledge, VAAI support is available today for EMC CX4 and HDS AMS 2x00 arrays; for EMC V-Max, support is coming in November, for IBM XIV in January 2011, and for HDS USP in Q1/2011.

Tomi

VCP3, VCP4, VSP4, VTSP4

http://v-reality.info

joergriether
Hot Shot

Hi,

Dell EqualLogic also fully supports VAAI since firmware version 5.

best regards,

Joerg
