VMware Cloud Community
sketchy00
Hot Shot

setting iSCSI queue depth...

So, some iSCSI storage manufacturers suggest changing the iSCSI queue depth from the default of 1000 to 3 (via the storage nmp command, as shown below) so that VMware Round Robin can better balance the load across multiple GbE connections.  My question is: if a manufacturer doesn't explicitly state this as a recommendation, would it be considered good practice to do it anyway?  I have a Synology DS1512+ NAS that can serve up iSCSI, and I checked the settings, and it does indeed default to 1000.  Any thoughts on setting this for a Synology NAS?
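
For reference, the commands I'm talking about look something like this (a sketch; naa.______ is a placeholder for the real device ID, and it assumes the device is already on the Round Robin path policy):

esxcli storage nmp psp roundrobin deviceconfig get -d naa.______   # show the current IOPS limit (defaults to 1000)
esxcli storage nmp psp roundrobin deviceconfig set -d naa.______ -t iops -I 3   # rotate to the next path every 3 IOs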

john23
Commander

The iSCSI queue depth value should only be changed when the storage vendor recommends it; otherwise, best practice is to use the default.

Thanks -A Read my blogs: www.openwriteup.com
sketchy00
Hot Shot

Yeah, I suppose that is pretty good advice.  But it does leave the bigger question: if the rationale for reducing it down to 3 is so clear and sensible, why is the default threshold so high?  I could see leaving it at the default if there weren't such a disparity, and knowing that the queue depth threshold can have a significant effect on whether the round robin option kicks over to the other adapter, I wonder why there is such a difference, and why a vendor would NOT recommend this change.

beckham007fifa

sketchy00 wrote:

So, some iSCSI storage manufacturers suggest changing the iSCSI queue depth from the default of 1000, to 3,

Honestly, I am not sure what value you were referring to with 1000.

But before you actually change something you have never worked with, it's very important to learn what it does and how to calculate it. Queue depth (QD) is something you change on the ESX end to allow the HBA to process more commands and optimize I/O in a balanced environment. However, just increasing it doesn't mean you are going to get higher I/O; it's your complete setup that matters for better performance.

The iscsi_max_lun_queue parameter is used to set the maximum outstanding commands, or queue depth, for each LUN accessed through the software iSCSI adapter. The default is 32, and the valid range is 1 to 255.

Commands to alter the value

esxcfg-module -s iscsi_max_lun_queue=value iscsi_mod

After you issue this command, reboot your system.
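
If you want to double-check the option string before and after the reboot, esxcfg-module can read it back (my assumption: the module name is the same iscsi_mod used above):

esxcfg-module -g iscsi_mod   # print the currently configured options for the module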

http://www.vmware.com/pdf/vi3_35/esx_3/r35/vi3_35_25_iscsi_san_cfg.pdf

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=100811...

Also, queue depth can be calculated using this formula:

T ≥ P * Q * L

T = Target Port Queue Depth
P = Paths connected to the target port
Q = Queue depth (per LUN)
L = number of LUNs presented to the host through this port

For example, an 8-host ESX cluster connects to 15 LUNs (L) presented by an EVA8000 (4 target ports, each with a target port queue depth T of 2048). An ESX server issues I/O through one active path, so P = 1 and L = 15.

Solving 2048 ≥ 1 * Q * 15 means the execution throttle/queue depth could be set as high as Q = 136.5. But at that setting one ESX host could fill the entire target port queue by itself, and the environment consists of 8 ESX hosts: 136.5 / 8 = 17.06.
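
Put another way, the per-host value works out step by step as:

Q = T / (P * L * number of hosts) = 2048 / (1 * 15 * 8) ≈ 17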

Reference: the great Frank Denneman's blog.

Thanks

Regards, ABFS
sketchy00
Hot Shot

Perhaps I wasn't quite clear in my original question.  I have set the IOPS setting from the default of 1000 to 3 via the esxcli storage nmp psp roundrobin deviceconfig set -d naa.______ -I 3 -t iops command on my datastores living on my EqualLogic arrays, per their best practices.  That has been working fine and has proved more effective at utilizing both links.  The thought was to apply the same setting to a Synology NAS unit that can serve up multipathed iSCSI, but I couldn't find any explicit vendor recommendations for the Synology unit one way or the other.  My question was really regarding a generic setup, similar to:  http://jpaul.me/?p=2492  Since the Synology is a lab setup, I suppose I can experiment (see the sketch below), but I was just curious what user input on the matter was.  Thanks for your input.
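
If I do experiment, I'll probably apply it across all the devices in one go, something along these lines (a rough sketch; the grep pattern is a placeholder and would need to be narrowed to match only the Synology LUN IDs, and it assumes each device is already set to the VMW_PSP_RR policy):

for dev in $(esxcli storage nmp device list | grep '^naa.'); do
   esxcli storage nmp psp roundrobin deviceconfig set -d $dev -t iops -I 3
done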

beckham007fifa

You wrote

esxcli storage nmp psp roundrobin deviceconfig set -d naa.______ -I 3 -t iops

OK, this means turning down the number of IOs issued per path per turn for Round Robin. You said you have implemented it; is there any significant change in your throughput?

Regards, ABFS
john23
Commander

Correct... it varies from vendor to vendor. In a production environment it's good to stick with the vendor-recommended value.

Since VMware supports a lot of array vendors, that may be the reason the default value is so high.

Thanks -A Read my blogs: www.openwriteup.com
sketchy00
Hot Shot

Thank you everyone for contributing.  I will proceed with some testing on the Synology, as that is in a lab environment anyway.

WHardy
Contributor

Were you able to discover any way to get multipathing on the DS1512+ to go higher than 120MB/s?  I've also noticed this issue on mine.
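
For what it's worth, ~120MB/s is right around where a single GbE link tops out, so on mine I've been checking whether both paths are actually carrying traffic (the device ID is a placeholder):

esxcli storage nmp device list -d naa.______   # confirm the PSP is VMW_PSP_RR and that both paths show as working

If only one path is passing I/O, the IOPS=3 tweak discussed above would be the next thing I'd try.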
