mcwill
Expert

ESX4 swiscsi MPIO to Equallogic dropping

We've updated to ESX4 and implemented round-robin MPIO to our EQL boxes (we didn't use round robin under 3.5); however, I'm seeing 3-4 entries per day in the EQL log that indicate a dropped connection. See the logs below for the EQL and vCenter views of the event.

EQL Log Entry

INFO 10/06/09 23:50:32 EQL-Array-1

iSCSI session to target '192.168.2.240:3260, iqn.2001-05.com.equallogic:0-8a0906-bc6459001-cf60002a3a648493-vm-exchange' from initiator '192.168.2.111:58281, iqn.1998-01.com.vmware:esxborga-2b57cd4e' was closed.

iSCSI initiator connection failure.

Connection was closed by peer.

vCenter Event

Lost path redundancy to storage device naa.6090a018005964bc9384643a2a0060cf.

Path vmhba34:C1:T3:L0 is down. Affected datastores: "VM_Exchange".

warning

6/10/2009 11:54:47 PM

I'm aware that the EQL box will shuffle connections from time to time, but those appear in the logs as follows (although vCenter will still display a "Lost path redundancy" event):

INFO 10/06/09 23:54:47 EQL-Array-1

iSCSI session to target '192.168.2.245:3260, iqn.2001-05.com.equallogic:0-8a0906-bc6459001-cf60002a3a648493-vm-exchange' from initiator '192.168.2.126:59880, iqn.1998-01.com.vmware:esxborgb-6d1c1540' was closed.

Load balancing request was received on the array.

Should we be concerned, or is it now normal operation for the ESX iSCSI initiator to drop and re-establish connections?
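For anyone who wants to cross-check from the ESX console, something like the following should show the current path state; this is only a rough sketch, and the naa device ID is simply the one from the vCenter event above (substitute your own).

# Brief listing of all devices and their paths (look for dead/standby paths)
esxcfg-mpath -b
# Multipathing policy and paths per device as seen by the NMP
esxcli nmp device list | grep -i -A6 naa.6090a018005964bc9384643a2a0060cf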

179 Replies
tWiZzLeR
Enthusiast

I have an open case with Dell EqualLogic support and yesterday I received an email regarding this issue:

"I have heard that VMware will not put the fix for this problem in their VMware V4.1 release. Which means that the fix won't be available for quite a while."

johnz333
Contributor

So what is the recommendation, that we run 1 path across two NICs? I am running production now with 6 paths across two NICs; scary, but it's working.

John Z

Edificom
Contributor

I couldn't get any recommendation apart from "it works fine in 3.5", but 3.5 has lower iSCSI performance anyway, so I'd rather not rebuild everything again.

Amazed this is happening really.

As before, I am waiting to go into production using the 1:1 approach, but I really don't want to risk this if I am going to have connection drops and possible data issues.

Then again, waiting months isn't an option either...

s1xth
VMware Employee

I will be writing a blog post on this issue sometime this week. It's really a shame, given how heavily used the iSCSI framework is, how much VMware has PUSHED the use of MPIO in their marketing, and the benefits of the rewritten iSCSI initiator. For those of us using EqualLogic boxes, the only option is to wait and hope EQL comes out with their 3rd-party plugin, and from what I have heard recently that is not going so smoothly.

I am surprised people are still seeing drops in a 1:1. I haven't seen any drops in my configuration, running Dell PC5424 switches and an EQL PS4000. Anything above that, though, and I get drops constantly. Let's not forget that even with drops there is always an active path available for communication; if you are seeing a complete drop and losing your volumes from the host, then you may have other issues going on. When I was experiencing the drops in a 3:1 I never lost a volume. Let's also not forget that 1000 commands go down each path before RR switches to another path.

I haven't heard anything from EQL regarding my ticket, which is also still open, but I have a feeling we won't be seeing a fix for this until U2.

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
Ben150
Contributor

This latest development about the long ETA on a fix is definitely concerning. We have been in production with 3 VMs since before Christmas, and are still undecided whether to proceed with more P2V or hold at our current position.

Has anyone ever seen all paths to a LUN drop at the same time? I'm assuming that would be the (only) nightmare scenario where data loss could occur. I've gone over our SAN logs, and so far we have never seen an instance where we completely lost all paths between an ESX4 host and a LUN, but maybe that's just luck. I would have thought that the more paths are configured, the less of an issue this is, as the iSCSI commands will just be delivered via a different connection in the event of a drop. Even so, the only really safe option has to be falling back to fixed paths instead of round robin and losing the performance. I guess that's our most likely course of action until this is sorted.
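If it comes to that, the fallback can be done per device from the console rather than rebuilding anything. A rough sketch only: the naa ID is the example device from the first post, so substitute your own, and check the preferred path afterwards in the vSphere Client.

# Revert the path selection policy for one device from Round Robin to Fixed
esxcli nmp device setpolicy --device naa.6090a018005964bc9384643a2a0060cf --psp VMW_PSP_FIXED
# Confirm the change
esxcli nmp device list | grep -i -A6 naa.6090a018005964bc9384643a2a0060cf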

I remember an earlier poster mentioned that DELL were pursuing this aggressively with VMware, but that seems hard to believe after hearing this latest development. Surprising really, as the EQL SAN is DELL's flagship iSCSI product range, and I know they're pushing it along with the R710 servers and vSphere at the moment. I'm in agreement with others; it really does seem amazing that this issue is still unresolved, as it seems fairly critical to me.

DCasota
Expert

Hi everybody

Some questions:

- How many LUNs do you have on your EQL?

- How many VMkernel ports do you have on your vSwitch for the swiscsi traffic?

- How many ports do you have configured on the vSwitch?

The reason I'm asking is the following:

I've had the same problems as everybody in this thread. The vSwitch for the swiscsi traffic, created with 8 ports, contains 6 VMkernel ports. We have 16 LUNs on our PS5000. I never saw all LUNs when rescanning, and I had to reduce the number of VMkernel ports. However, in the document Equallogic_vSphere.pdf they tested with the same config (6 VMkernel ports, 2 pNICs), but with one difference: they had 56 ports configured on the vSwitch. They didn't mention how many LUNs they had, however.

Is it possible that the vSwitch port value matters? With 16 LUNs, each with 6 possibilities (VMkernel ports), that gives 96 connections.

Today I changed the port value to 120, which is the next higher value after 96. I haven't had any errors so far, but this isn't a scientific explanation.
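For what it's worth, a quick way to compare the configured port count against what is actually in use is from the console. This is a sketch only; the vSwitch name is an example, and changing the port count of an existing vSwitch is normally done in the vSphere Client and needs a host reboot to take effect.

# Show each vSwitch with its configured and used port counts
esxcfg-vswitch -l
# A new vSwitch can be created with a larger port count up front, e.g. 128
esxcfg-vswitch -a vSwitch3:128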

Maybe someone with deeper knowledge of vSwitch internals can explain whether there is a correlation between the port value and swiscsi connections?

Bye

Daniel

s1xth
VMware Employee

Good thinking there, but I had the drops with just one "LUN" or volume behind the PS4000/vSphere, so there were definitely enough vSwitch ports.

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
theflakes
Contributor

I have two PS4000s in a RAID 6 group, three ESXi boxes with three 1:1 iSCSI setups each, and two Dell 5424 switches dedicated to the iSCSI network. The Dell switches have four ports EtherChannelled between them. I do see drops, but I have not seen any problems caused by them. Performance is still very good, and so is throughput. I have ~17 VMs running on this setup.

fyi,

Brian

theflakes
Contributor

Sorry, we presently have 11 LUNs defined on the two PS4000s.

DCasota
Expert

OK, if you don't have a VM network port group with a couple of VMs on the same vSwitch as the swiscsi VMkernel ports (which wouldn't be best practice...) or a couple of service console ports (which doesn't make sense for vSphere...), then the theory was wrong.

So far I haven't seen any drops, but this may change during the week. If VMware confirms that this is a known issue, then it is one...

dwilliam62
Enthusiast

I'd be curious to know who you are working with at Dell/EQL who's saying that 4.1 won't have the fix, since even VMware support hasn't said exactly when it will be released. However, IMO, I doubt it will take very long for them to release the patch. Also, there are EFFECTIVE workarounds, other than going back to 3.5, until they release the fix.

Let's make sure we're talking about the same issue: multiple GbE physical interfaces in one vSwitch for iSCSI, configured following the guide that Dell issued (which is also found in the wild on the web). The VMware reference number is PR484220.

First thing: the dropped connections occur during extremely LOW levels of IO; if there's IO going on all the time, you don't see the drops. That's why I suspect people running 1:1 VMkernel ports to GbE interfaces aren't seeing the problem as much, or at all, since they're more likely to have traffic going across every link. Also, some customers have lowered the number of IOs in the RR config from 1000 IOs before switching paths to as low as 3 or even 1, which also forces more IO across all the VMkernel ports. In /var/lib/vmkiscsid.log you will see "no-op" errors and terminated connections, and later you see the connections get re-established. In the Dell/EQL event log you will see "reset" on the iSCSI connections at the same time.
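For reference, the IOs-per-path setting can be checked and changed per device with esxcli. This is a rough sketch only: the naa ID is the example device from the first post, so substitute your own volumes, and verify the default of 1000 on your own build first.

# Show the current round robin settings for the device
esxcli nmp roundrobin getconfig --device naa.6090a018005964bc9384643a2a0060cf
# Rotate paths every 3 IOs instead of the default 1000
esxcli nmp roundrobin setconfig --device naa.6090a018005964bc9384643a2a0060cf --type iops --iops 3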

One sure way to avoid the issue comes from VMware support: don't have more than one GbE interface per vSwitch. If you have three GbE interfaces for iSCSI, create three vSwitches, each with one VMkernel port and one GbE interface. If you have already configured it the other way, remove the iSCSI vSwitch and all the iSCSI VMkernel ports, reboot the ESX server, and then recreate everything from scratch; otherwise some people have reported terrible performance.
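If it helps anyone, creating the per-NIC vSwitches from the service console looks roughly like this. A sketch only: the vSwitch and port-group names, vmnic numbers, and IP addresses are placeholders, and the VMkernel ports still need to be bound to the software iSCSI adapter afterwards.

# One vSwitch per iSCSI NIC, each with a single VMkernel port
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -A iSCSI1 vSwitch2
esxcfg-vmknic -a -i 192.168.2.201 -n 255.255.255.0 iSCSI1

esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic3 vSwitch3
esxcfg-vswitch -A iSCSI2 vSwitch3
esxcfg-vmknic -a -i 192.168.2.202 -n 255.255.255.0 iSCSI2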

-don

Edificom
Contributor

Thanks for the info Don, looks really interesting. You said this info came from VMware support?

Was that from a KB article or via a support tech?

Cheers,

s1xth
VMware Employee

Don- Very interesting information. Did you get this work around from VMware? If so is there a KB yet?

I have to agree with you. I saw drops in a 3:1 configuration under very low I/O; the array wasn't in production yet and I was testing it out. With only three VMs running on the array the I/O was only around 10 IOPS, and I was still seeing drops. That would explain why the connection is stable when I switch to a 1:1, even though the I/O is low.

You state that VMware is recommending TWO vSwitches instead of one vSwitch with multiple VMkernel ports under it and the NICs teamed across, exactly as described in the Dell PR document. How does this fix the problem? Have you tried creating two vSwitches, each with one NIC and each with 3 VMkernel ports, to see if the drops still occur? I am trying to wrap my head around WHY this configuration would NOT have drops but the other configuration would. All we are doing is separating the vSwitches.

Thoughts?

I may try this configuration on one of my lab machines connected to the same PS storage to see if I see any drops. Has anyone tried this yet?

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
dwilliam62
Enthusiast

The reason the change "fixes" the problem is that the problem is NOT in the MPIO code. Where it is, I can't say as I'm under NDA. However, if you talk to VMware support they will probably tell you.

I don't have a KB article number. I didn't even think to ask the VMware support guy. Duh. I understood why they suggested it and continued on. I've set up several customer sites this way and had no issues.

So yes, if you are using 2x GbE interfaces for iSCSI with one vSwitch, you would remove that entirely. Reboot, then create two vSwitches, each with a VMkernel port and one GbE interface. Then re-do the binding of the VMkernel ports to the iSCSI HBA and enable Round Robin.
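The re-binding step Don mentions is the esxcli swiscsi part; roughly like the following. The vmk numbers are placeholders, and vmhba34 is just the adapter name taken from the path in the first post, so check your own adapter name with esxcfg-scsidevs -a first.

# Bind each iSCSI VMkernel port to the software iSCSI adapter
esxcli swiscsi nic add -n vmk1 -d vmhba34
esxcli swiscsi nic add -n vmk2 -d vmhba34
# Verify the bindings
esxcli swiscsi nic list -d vmhba34
# Re-enable Round Robin on the EQL device(s)
esxcli nmp device setpolicy --device naa.6090a018005964bc9384643a2a0060cf --psp VMW_PSP_RR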

s1xth
VMware Employee

Don....

Would you recommend having only ONE VMkernel port per NIC rather than multiple VMkernel ports per NIC? For example, 3 VMkernel ports on each NIC?

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
dwilliam62
Enthusiast

To start, I would use 1:1. Measure the results and verify that you're not seeing the problem, then add the additional VMkernel ports and check again.
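For the verification step, something along these lines should be enough to watch for the drops. A sketch only: vmhba34 is the adapter name from the first post, and the log path is the one mentioned earlier in this thread.

# Confirm which VMkernel ports are bound to the software iSCSI adapter
esxcli swiscsi nic list -d vmhba34
# Quick check for any dead paths
esxcfg-mpath -b | grep -i dead
# Watch the iSCSI daemon log for no-op timeouts / closed connections
tail -f /var/lib/vmkiscsid.log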

-don

Ian78118
Contributor

Would this one-vSwitch-per-physical-NIC approach be the permanent solution, or would the recommendation be to put everything back into one vSwitch once the patch/fix has been released?

dwilliam62
Enthusiast

That would be up to each administrator. The benefit of one vSwitch is less "clutter" in the GUI and slightly less memory overhead; each vSwitch takes up a certain amount of memory.

-don

s1xth
VMware Employee

So with that being said, this is more of a 'fix' for NOW, until they solve the underlying problem with multiple NICs on a single vSwitch?

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
s1xth
VMware Employee

Update:

Dell/EqualLogic have been very responsive to my recent blog post regarding this problem. I have just received an official response from them on my blog at:

If anyone has additional questions for EQL, I would post them there, as they will be monitoring readers' questions on the matter. In the end, though, we are just going to have to wait for a patch from VMware on this issue and use the workaround that was posted above (which I personally will be testing shortly).

Thanks!!

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi