VMware Cloud Community
clintonm9
Contributor

Dell EqualLogic Multipath (High IO throughput)

We have recently purchased a Dell EqualLogic SAN. Our main goal for this device was to start using resource pools and be able to do HA and live migration. We have been working with XenServer and Xen Cloud for a while and haven't gotten the IO performance over iSCSI that we were hoping for. Using multipath on the main host machine we get about 250 MB/s (which we are happy with); on the guest machines we only get 120 MB/s.

My question for the VMware community: if we wanted to use the VMware products, is there a way to get more than 1 Gbps of IO throughput on a guest machine? And if so, is there a PDF or a post somewhere that explains how to do this?

Let me know, thanks!

Clinton

1 Solution

Accepted Solutions
s1xth
VMware Employee

Attached is the correct configuration doc... hope it's of some help. I believe this is also available under the TR docs when you register your PS array.

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi

24 Replies
dcoz
Hot Shot

Hi Clinton,

Yes, you can get more than 1 Gbps of bandwidth from your ESX host to iSCSI storage.

ESX 4 allows up to eight 1 Gbps NICs to be bound to the software iSCSI initiator.

The best doc to read is the iSCSI SAN configuration guide: http://www.vmware.com/pdf/vsphere4/r40_u1/vsp_40_u1_iscsi_san_cfg.pdf

I would also advise reading this great post: http://virtualgeek.typepad.com/virtual_geek/2009/01/a-multivendor-post-to-help-our-mutual-iscsi-cust...

Hope this helps.

Regards

DC

clintonm9
Contributor

I want to be able to get more than 1 Gbps on the guest, not the host. Do most people that use iSCSI just accept the 1 Gbps limitation?

I feel like I get much better results with local storage.

DCasota
Expert

Hi

Do you use the SW iSCSI initiator? If so, use multiple VMkernel ports on your vSwitch with multiple physical NICs and overridden vSwitch failover settings (for each VMkernel port: one vmnic active, all others unused). Additionally, you have to add each additional VMkernel port manually to the SW iSCSI initiator in the Service Console:

esxcli swiscsi nic add -n vmk[x] -d vmhba[32+y]

x=0,1,...depends on the VMKernel port number

y=0,1,2,...depends on the number of the vmhba for the sw iscsi initiator

Afterwards, rescan the SW iSCSI initiator vmhba.

Configure round robin MPIO for each LUN on each ESX host. After that you should see the effect of MPIO, for example in the performance tab (usage of the vmnics).
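
For example, with two iSCSI VMkernel ports already created, the binding and rescan could look like this (vmk1/vmk2 and vmhba33 are only example names; check yours with esxcfg-vmknic -l and the Storage Adapters view):

esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
esxcli swiscsi nic list -d vmhba33   # verify both bindings are listed
esxcfg-rescan vmhba33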

Hope it helps.

Bye Daniel

s1xth
VMware Employee

What model EQL box do you have? If you have the PS4000 you will only be able to use 2 Gb of throughput even with MPIO, since it only has 2 ports. The PS6000 has 4 ports (so technically 4 Gb of throughput, though many use one port for out-of-band management). Another thought: even with 2 Gb of throughput, the disks themselves can't saturate that link to begin with. If you have some SSD drives, then you will most definitely over-saturate the link and need to move to 10 Gb links like on the new PS6500.

clintonm9
Contributor

We have a PS6000. We tried to set up MPIO and we only saw 1 Gbps. Is there anything special you had to do to get it to work?

s1xth
VMware Employee

Did you follow the EQL MPIO and vSphere configuration document? You need to attach each VMkernel port to the iSCSI initiator and then turn on Round Robin multipathing. Here is the document attached to this post.
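
If it helps, switching a LUN to Round Robin from the Service Console is a one-liner (the naa ID below is just a placeholder; list your devices first to find the real ones):

esxcli nmp device list                  # note the naa.* ID of each EqualLogic LUN
esxcli nmp device setpolicy --device <naa-id> --psp VMW_PSP_RR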

clintonm9
Contributor

This document doesn't seem to walk you through the commands; it talks more about the networking.

s1xth
VMware Employee

Oops, I uploaded the wrong one! When I get to a PC I'll upload the one with the commands.

s1xth
VMware Employee

Attached is the correct configuration doc... hope it's of some help. I believe this is also available under the TR docs when you register your PS array.

Anders_Gregerse
Hot Shot

I might be wrong, but I would be surprised if you get more than 1 Gbps in a VM using a VMDK on one LUN on one EQL array. Even though you can set up round robin and bundle multiple NICs into one, it doesn't change the underlying architecture: each iSCSI session is established over one of those links, it just has more links to choose from and round robin between.

There are at least two ways to get more than 1 Gbps: one is using 10 Gbps links (still an expensive alternative), the other is using a software iSCSI initiator in the guest OS with at least two vNICs.

There is another thread talking about an MPIO module for vSphere, but it has been on the verge of release since vSphere itself was released.

spseabra
Contributor

Also, please bear in mind that the default behavior of RR multipathing uses a turnover threshold of 1000 iSCSI commands. That is, every 1000 iSCSI commands the VMkernel changes the iSCSI path used. So, unless there is some kind of overlap due to the number of VMs/iSCSI commands issued by your vSphere host, you will only see activity on one initiator.

On the other hand, if your vSphere systems are indeed pumping more than 1000 aggregated iSCSI commands, then the VMkernel will use more than one NIC and the EqualLogic internal port redirection will do the rest.
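
You can check the current threshold per LUN from the Service Console (the device ID is a placeholder; the output should show an IO operation limit of 1000 by default):

esxcli nmp roundrobin getconfig --device <naa-id>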

Cheers

krismcewan
Enthusiast

Another thing to check is the size of your LUNs, whether anything is in snapshot, and if so for how long.

SCSI reservations are a bane of storage, but they seem to be magnified with iSCSI SANs.

A VMware Consultant in Scotland for Taupo Consulting http://www.taupoconsulting.co.uk If you think I'm right or helpful award me some points please
EPL
Contributor

Here's what I've done to get some nice throughput when I need it. I have two switches (dvSwitches in my case) with one pNIC each going to my dedicated iSCSI physical switch. Within the guest, I assign two additional vNICs connected to those two dvSwitches. Then I install the HIT Kit (for Windows) to get the EQL DSM MPIO module and map the iSCSI LUN directly in the guest.

This has its drawbacks, but I believe if performance is your objective you should be able to achieve it this way.

J1mbo
Virtuoso

To get over 120 MB/s from the PS4000, as stated, there need to be two GigE NICs on the iSCSI vSwitch and two VMkernel ports for iSCSI. Then bind both to the SW iSCSI initiator, set the LUN to use round robin, and finally, on a per-LUN basis, set the IOPS value to 3 with the command:

esxcli nmp roundrobin setconfig --device <naa-id> --iops 3 --type iops

Please award points to any useful answer.

EPL
Contributor

The question at hand is not how to get the HOST to use more than 1 Gbps; it's how to get the GUEST to use more than 1 Gbps of throughput.

J1mbo
Virtuoso

Indeed. I have a PS4000 configured like this and see over 160 MB/s within a guest.

s1xth
VMware Employee

J1mbo,

Why do you say to change the IOPS setting on a "per LUN" basis? Do you not recommend setting this to 3 on all LUNs? I currently have it set to 3 on the two LUNs that I have assigned to my hosts.

Just wondering your opinion on the IOPS change, as there isn't a lot of information out there about it.

Thanks!

J1mbo
Virtuoso

I just meant that the command needs to be manually issued for each LUN as the default is always 1000 as far as I can see.
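
A rough way to do that for every EqualLogic LUN in one go (assuming the volumes show up with the usual naa.6090a prefix; check with esxcli nmp device list before trusting the filter):

for dev in `ls /vmfs/devices/disks/ | grep -i naa.6090a | grep -v ":"` ; do
  esxcli nmp roundrobin setconfig --device $dev --type iops --iops 3
done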

I read a blog that recommended 3 for these arrays. My own testing seemed to confirm it; the setting is changed on the fly, so it's easy to test - run up IOMeter at 100% sequential, then fiddle with the setting and watch the counters.

The only issue I have is that after a controller failover the performance has gone a bit funny. Since I'm still testing, I just deleted the LUNs and added them back.

spseabra
Contributor

Reducing the I/O operation limit to 3 will incur some CPU penalty on the host due to the increase in VMkernel port switching, but provided there is enough CPU power this shouldn't be a problem.

Cheers,
