VMware Cloud Community
jedijeff
Enthusiast

Revisiting queue depth settings and disk.schednumreqoutstanding

While moving from ESXi 4.1 to 5.1, and to a different SAN with a faster back end and more front-end capacity, I am considering raising Disk.SchedNumReqOutstanding from 32 to match my HBA queue depth setting of 128.

However, a couple of things: I have read some conflicting articles. While many say to match this setting to the HBA queue depth, this particular blog suggests the contrary:

http://www.yellow-bricks.com/2011/06/23/disk-schednumreqoutstanding-the-story/

I also noticed that when I changed the Emulex queue depth from the default of 30 to the max of 128, without changing Disk.SchedNumReqOutstanding, the DQLEN value in esxtop now flaps between 128 and 32. Why would it not just be 32, the Disk.SchedNumReqOutstanding value? It does this without any commands being queued.

So, any guidance on the blog, and also on why I am seeing the flapping?

Thank you.

11 Replies
vPatrickS
Enthusiast

Hi

I don’t know why the DQLEN value is flapping, because I still have some trouble understanding the value itself.

Allow me to share a link to my own question on this: http://communities.vmware.com/thread/422276?tstart=0

Hopefully somebody can provide us with some details around DQLEN.

Regards

Patrick

depping
Leadership

Are you guys using Storage IO Control?

Also, my article was reviewed by the storage stack engineers and this is what they recommend because of the "fairness" factor.

jedijeff
Enthusiast

I am not using Storage IO Control at all. Each LUN of course shows "Disabled" for Storage IO Control in vCenter. So I have no idea why DQLEN would flap. There is no load on those LUNs either.

I have bumped my Emulex queue depth back down to the default of 30 commands per LUN, so DQLEN now goes back to showing 30 with no flapping. I guess that's because it is lower than the default of 32 for Disk.SchedNumReqOutstanding.

The problem is that, moving to vSphere 5.1, I am going to pile more VMs onto bigger LUNs, so I am a bit concerned about the 30 setting.

vGuy
Expert

jedijeff
Enthusiast

I do not think that is it. I was noticing the flapping even when there was no activity on the LUN, except for the guest OSs just idling. ACTV and QUED were both 0 at that time, so there was really no data going through.

jedijeff
Enthusiast

Upon further looking at this, it appears DQLEN jumps from 128 down to 32 (the Disk.SchedNumReqOutstanding value) when my CMD/s starts getting over 32 or higher.

Again I am not using SIOC at all.

Odd...
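The pattern described above can be sketched as a small model. This is an assumption based on how Disk.SchedNumReqOutstanding (DSNRO) is commonly documented, not on VMkernel source: the reported device queue depth follows the HBA per-LUN queue depth until the scheduler starts throttling for fairness, at which point it clamps to DSNRO. The function name and the `throttling` flag are hypothetical, purely for illustration.

```python
# Hedged sketch (assumption, not VMkernel behavior verbatim): DQLEN as
# reported by esxtop follows the HBA/LUN queue depth until the I/O
# scheduler throttles, then clamps to Disk.SchedNumReqOutstanding.

def effective_dqlen(hba_lun_qdepth: int, dsnro: int, throttling: bool) -> int:
    """Model the DQLEN value esxtop would report."""
    if throttling:
        # Scheduler enforces fairness: clamp to DSNRO.
        return min(hba_lun_qdepth, dsnro)
    return hba_lun_qdepth

# Emulex depth at 128, DSNRO at its default of 32 -> flaps 128/32:
print(effective_dqlen(128, 32, throttling=False))  # 128
print(effective_dqlen(128, 32, throttling=True))   # 32
# Emulex depth back at the default of 30 -> stays 30 either way:
print(effective_dqlen(30, 32, throttling=True))    # 30
```

Under this model, the flapping observed above would simply be the scheduler clamping on and off as command rates cross the threshold, which is consistent with DQLEN holding steady at 30 once the Emulex depth is below DSNRO.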

On another note, can someone confirm whether upping your queue depth will lower your number of paths? Someone told me it would, but I have never heard that anywhere else, and with 8 paths per LUN I am already constrained to 128 LUNs. Thanks!

vPatrickS
Enthusiast

I've never heard of anything like that.

The queue depth has no impact on the number of paths.

Regards

Patrick

vPatrickS
Enthusiast

That KB refers to ESX 3.5 only.

jedijeff
Enthusiast

I was not sure if it applied to versions beyond 3.5.

CHogan
VMware Employee

The KB refers to a possible heap depletion issue if you deploy too many paths per device. It's not related to queue depth.

When considering queue depth, you'll need to work out what your HBA is capable of.

Then you can decide if your queue depth value, multiplied by the number of devices attached to your ESXi host, is going to cause queue-full issues.

i.e., with an HBA queue depth of 4096 and a per-device queue depth setting of 32, you can safely have 128 devices (aka LUNs).
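The arithmetic above can be written out as a quick sanity check. This is just a sketch of the division described in the example; the function name is made up for illustration:

```python
# Sketch of the guideline above: per-LUN queue depth times the number of
# LUNs should not exceed the HBA's total queue capability.

def max_safe_luns(hba_queue_capability: int, per_lun_queue_depth: int) -> int:
    """Largest LUN count that keeps total outstanding I/Os within the HBA."""
    return hba_queue_capability // per_lun_queue_depth

print(max_safe_luns(4096, 32))   # 128 LUNs, as in the example above
print(max_safe_luns(4096, 128))  # only 32 LUNs if the depth is raised to 128
```

This also shows the trade-off discussed earlier in the thread: raising the per-LUN queue depth from 32 to 128 cuts the safe device count by a factor of four on the same HBA.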

HTH

Cormac

http://cormachogan.com