VMware Cloud Community
jamesrpt
Contributor

Dell EqualLogic iSCSI / VMkernel / RR command limit 1000 to 3

I have a few questions about a vSphere setup that we are about to do with some Dell Servers and Dell EqualLogic arrays.

Here is the physical equipment and setup:

2x Dell PS6000 iSCSI SANs

4x Dell PowerEdge 1950 servers with 10 NICs each (two 4-port cards plus onboard)

2x front-end GbE switches and 2x back-end GbE switches

NIC layout:

Expansion card 0:

pnic9 used for SC (service console)

pnic8 used for VM network traffic

pnic7 used for iSCSI going to back-end switch 0

pnic6 used for iSCSI going to back-end switch 1

Expansion card 1:

pnic5 used for vMotion going to back-end switch 0

pnic4 used for VM network traffic

pnic3 used for iSCSI going to back-end switch 0

pnic2 used for iSCSI going to back-end switch 1

Onboard NICs:

pnic1 used for SC

pnic0 used for vMotion going to back-end switch 1

The iSCSI NICs will be attached to a single vSwitch per ESX host. We will be using jumbo frames.
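For reference, here is roughly how I expect to build each host's iSCSI vSwitch from the service console, since as far as I know jumbo-frame VMkernel ports have to be created from the CLI in ESX 4 (the vSwitch name, uplinks, port group, and IP address are placeholders for our environment):

```
# Sketch only -- names and addresses are placeholders.
esxcfg-vswitch -a vSwitch2                 # create the iSCSI vSwitch
esxcfg-vswitch -m 9000 vSwitch2            # enable jumbo frames on the vSwitch
esxcfg-vswitch -L vmnic2 vSwitch2          # attach the iSCSI uplinks
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A iSCSI1 vSwitch2          # one port group per VMkernel port
esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 -m 9000 iSCSI1
```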

Questions:

How many VMkernel ports should we associate with each physical NIC for iSCSI? I've read that ESX 4 limits a vSwitch to 8 VMkernel ports, so should we do 2 VMkernel ports per physical NIC, or just 1? My understanding is that a 2:1 ratio allows for slightly better performance.
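For what it's worth, my understanding of the binding step with a 2:1 ratio looks something like the following; vmhba33 is just a placeholder for the software iSCSI adapter name, and each VMkernel port's port group would have its failover order overridden to a single active uplink:

```
# Sketch only -- vmk/vmhba names are placeholders.
# vmk1/vmk2 would sit on one iSCSI pNIC, vmk3/vmk4 on the other.
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
esxcli swiscsi nic add -n vmk3 -d vmhba33
esxcli swiscsi nic add -n vmk4 -d vmhba33
esxcli swiscsi nic list -d vmhba33         # verify the bindings
```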

Some people have suggested changing the round-robin iSCSI command limit from the default of 1000 down to 3 (or even 1), supposedly to allow for better load balancing. Has anyone done this with the EqualLogic PS6000 SAN? I've seen arguments for and against doing it, but nothing specific to the EqualLogic SANs.
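For clarity, the change being suggested is the IOPS limit on the round-robin path selection policy, something along these lines (the naa device ID is a placeholder for the EqualLogic volume, taken from esxcli nmp device list):

```
# Sketch only -- substitute the real device ID.
esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
esxcli nmp roundrobin setconfig --device naa.xxxxxxxxxxxxxxxx --type iops --iops 3
```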

Any other advice based on your experience is certainly welcome.

thank you for your time!

James

6 Replies
s1xth
VMware Employee

James...

What kind of switches are being used? (vendor/model).

I would say a 2:1 configuration would be best, though this depends on your environment and the number of iSCSI connections your array supports. Don't forget that every volume you create will generate as many iSCSI connections as you have configured on each host; especially with MPIO, these connections can really add up. Jumbo frames are a good addition; I am using them now. Some people say they don't make a difference, but from my personal testing I have seen a benefit.

In regards to the IOPS setting, there has actually been quite a bit of chatter about it over the last few days. Some storage blogs have done testing with this change, and most have not seen a performance difference; if anything, they were impacted negatively. Below are a couple of links referencing this:

http://www.yellow-bricks.com/2010/03/30/whats-the-point-of-setting-iops1/

I strongly recommend NOT making this change, as there is a bug in the current version of vSphere that changes the IOPS setting to a crazy number after restarting the host.

http://virtualgeek.typepad.com/virtual_geek/2010/03/understanding-more-about-nmp-rr-and-iooperations...

Jonathan

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
jamesrpt
Contributor

I am pretty sure they are Cisco 2960G switches on the back end.

"Don't forget for every volume you create will create the amount of iSCSI connections as you have configured on each host, espically using MPIO these connections can really add up"

I am not quite getting this. Let's say we have 2x 2 TB LUNs on one of the EqualLogic boxes, and they are presented to each of the 4 ESX hosts. Are you just saying that there will be 4 iSCSI connections per host for each LUN?

Thanks for the reply

James

0 Kudos
s1xth
VMware Employee

Yes... the number of VMkernel ports you have under each NIC needs to be counted as a connection to each volume. For example, if you have 8 VMkernel ports and 2 volumes, you will have 16 connections per host; multiply by the number of hosts in your cluster (for example, 3) and you get 48 connections. Just something to keep an eye on...
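A quick sketch of that arithmetic, using the example numbers above:

```
# connections = VMkernel ports x volumes x hosts
vmks=8; volumes=2; hosts=3
echo $(( vmks * volumes * hosts ))   # 48 connections to the group
```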

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
jamesrpt
Contributor

Thanks for the clarification! That is helpful.

James

Bane3d
Contributor

I think I can answer this one for you. Here is our environment:

8 ESX hosts with 8-10 NICs per host. I have 2 of the NICs on each host dedicated to iSCSI connections to three EqualLogic PS6000X arrays. They are interconnected via three Cisco GbE switches. I just ran into a limitation today of only 512 concurrent iSCSI connections. This came from following this document: http://docs.google.com/viewer?a=v&q=cache:xYZQEuqAeA4J:communities.vmware.com/servlet/JiveServlet/do...dellequallogic+vsphere&hl=en&gl=us&pid=bl&srcid=ADGEESjocFPYnX3cee848xGhPMjF68TMfkdH_Y9RkBye5bM7N0uXbUm2E3CRoBBlEdl6uRk4-FwdGzCGtABBwpZXtX0xu9kYVTRwsJSi9kY9xh0k3bLcpJaw-2tgm0m-ueTK7WCs5nRP&sig=AHIEtbTYHQaZpbpz5Ip7oiMqBOZ-va5mWg

By following its instructions, I ended up with a total of 6 VMkernel ports per host, for a total of 48 connections per LUN. I have about 10 LUNs connected to vSphere, so that means I'm using about 480 connections. With the limit of 512, I'm now stuck. I can reduce to 4 VMkernel ports per host and drop that number to 32 per LUN, which gets me down to 320 connections, but I'm still not comfortable with the potential for growth. I'm going to have to contact Dell to find a better solution. I've been using round robin for a while and the performance has been satisfactory, but I do worry after reading a few articles.
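To see where the ceiling is, here is a rough sketch of the numbers for a few per-host VMkernel-port counts against the 512-connection pool limit (8 hosts and 10 LUNs, as above):

```
# connections per pool = vmks per host x hosts x LUNs
hosts=8; luns=10; limit=512
for vmks in 2 4 6; do
  echo "$vmks vmks/host -> $(( vmks * hosts * luns )) connections (limit $limit)"
done
```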

This limitation of only 512 connections per pool is ridiculous. Don't follow the recommended 3 VMkernel ports per NIC; you will work yourself into a corner in a fairly short amount of time.

Now for the fun part. If you read through the documentation, you set up 3 VMkernel ports bound to one NIC and 3 VMkernel ports bound to the other; each VMkernel port works with only one NIC. I don't see a real issue so far with going with only 2 VMkernel ports per server. I'm going to test it on one of my servers for a month or two and do some trend analysis to see if there is any detriment to performance. If all goes well, I'll end up reconfiguring all of my hosts this way, or pray that Dell raises the limit to 1024 shortly.

d

s1xth
VMware Employee

Just some info from my experience with this same type of thing. I have two PS4000s connected to two Dell PowerConnect 5448 switches with a 4 Gb LAG, which is plenty for the 2 active NICs on the PS4000. As for the number of VMkernel ports per NIC, I use two dual-port NIC cards in my R610 servers with 2 VMkernel ports per pNIC. I have three volumes and three R610 hosts in a cluster. This gives me 36 total iSCSI connections, and the PS4000 has an even lower connection limit of 256.

Good news: at the previous Dell EqualLogic conference in Dallas, TX, it was mentioned that this limitation on active connections will be raised. When? No direct time frame, but I have a feeling we will see something around VMworld 2010.


http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi