VMware Cloud Community
blondie72
Contributor

iSCSI 10G NIC recommendation

Hi NG,

we would like to deploy a VMware solution based on a 10G Ethernet infrastructure, using a 10G iSCSI SAN. I have some questions concerning iSCSI performance.

I have found some threads in the communities discussing this topic, but most of them are from '08, and since a lot has changed since then (new 10G hardware, ESX 4, ...), I would like to ask about your experience with 10G iSCSI and VMware.

I have read that ESX only performs well when you have enough LUNs on your SAN, but many LUNs mean fragmenting the SAN space into many arrays. Is this still true with ESX 4?

Another thing I read was that ESX (3.5 at the time) does not perform well with 10G NICs (not much more throughput than with 1G cards). Is this still true? If so, would the solution be to use more blades (with fewer VMs on each) with 1G iSCSI NICs, or would you recommend 10G NICs nowadays?
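Just to put some rough numbers on the 1G-vs-10G question, here is a back-of-envelope sketch; the ~10% protocol overhead figure (Ethernet + TCP/IP + iSCSI headers) is my own assumption for illustration, not a measured value:

```python
# Back-of-envelope comparison of raw link rate vs. usable iSCSI payload
# throughput. The 10% overhead is an assumed figure, not a benchmark result.

def usable_throughput_mb_s(link_gbps, overhead=0.10):
    """Approximate usable payload rate in MB/s for a given link speed."""
    bits_per_second = link_gbps * 1e9 * (1 - overhead)
    return bits_per_second / 8 / 1e6  # bits -> bytes -> megabytes

for gbps in (1, 10):
    print(f"{gbps:>2} GbE: ~{usable_throughput_mb_s(gbps):.0f} MB/s usable")
```

So in theory a single 10G link should carry on the order of 1 GB/s of payload; the reports of it barely beating 1G suggest the bottleneck is elsewhere (driver, initiator, or array), not the wire.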

What 10G NICs would you recommend? As far as I see, there are no real 10G iSCSI HBA in the market, but HBA-like NICs with some offloading features. The NICs of Intel (82599eb) look quite interesting. IBM uses NetXen in their blades. What experience do you have with these cards; any recommendations?

Is it okay using spanning tree (MSTP, VLAN based loadbalancing/ fast port for iSCSI devices) with iSCSI?

Regards ...

jasoncllsystems
Enthusiast

You may read this: http://malaysiavm.com/blog/cisco-nexus-5000-poc/ I'm still looking for the "why".

Regards,

Jas aka Superman

MALAYSIA VMware Communities

http://www.malaysiavm.com

*** If you found this or any other answer useful please consider allocating points for helpful or correct answers ***

blondie72
Contributor

Hello Jas,

Thanks for your reply. This is really sad news. I would not have believed that using 10G NICs could result in such poor performance...

Did you ask VMware support? What is their explanation? Obviously ESX is loading the wrong (Intel) driver for your card...

Just for the understanding:

You used the dual-port 10G QLE8042 NIC, one per ESX host. Then you used one port for VM network traffic (with poor performance) and one port for FCoE traffic. So you are using an FC SAN; if so, what about the FC traffic (poor performance as well)?

BTW: Just wondering why there is not much feedback. It seems hardly anybody is currently using 10G iSCSI...

Regards

JamesSykes
Contributor

Yeah 10G discussions are extremely quiet at the moment!

I'm guessing that not too many people have the requirement for that sort of performance... and if they do then they are probably on FC!

I'm looking at building a new platform on 10G but am not quite sure it's going to give me the performance I need after reading these threads.

pasikarkkainen
Contributor

Some old links from 2006 and 2007 about 10 Gbps iSCSI performance:

http://www.myri.com/scs/performance/Myri10GE/iSCSI/openiscsi.html

http://www.chelsio.com/10gbe_performance.html

And some 10 Gbps network performance from 2008:

http://www.myri.com/scs/performance/Myri10GE/

Dunno how much those help with your problem, but at least they should give you some idea of what should be possible. Things should be even better now in 2009 than in 2006/2007 :)

JamesSykes
Contributor

The big question is how is performance over 10G when using the software initiator?

I'm not even sure there are 10G iSCSI hardware initiators yet...

TimPhillips
Enthusiast

Try using a software iSCSI target: it lets you avoid possible hardware incompatibilities and easily use the full bandwidth of your 10Gb network. Based on my experience and lab tests, I suspect you will like StarWind iSCSI. If you are interested in performance, you'll find this article very useful.

pasikarkkainen
Contributor

The big question is how is performance over 10G when using the software initiator?

I'm not even sure there are 10G iSCSI hardware initiators yet...

The URLs I posted contain benchmarks with the software initiator (open-iscsi). I believe vSphere (ESX 4.0) uses open-iscsi.

And yes, there are 10 Gbps NICs that have hardware offloading for iSCSI. I don't know if there are fully featured 10 Gbps iSCSI HBAs though.
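For reference, here is what a software-initiator session with open-iscsi on plain Linux looks like; the portal address and IQN below are made-up placeholders, not values from this thread:

```sh
# Sketch of connecting a Linux open-iscsi software initiator to a target.
# 192.168.10.20 and the IQN are hypothetical examples.
iscsiadm -m discovery -t sendtargets -p 192.168.10.20:3260   # list targets
iscsiadm -m node -T iqn.2009-01.com.example:storage.lun1 \
         -p 192.168.10.20:3260 --login                       # open a session
```

The benchmarks linked above were produced with this kind of setup, so they are a reasonable upper bound for what a software initiator can do on 10G hardware.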

TimPhillips
Enthusiast

I haven't seen a full-featured 10Gb iSCSI HBA either.

1ppInc
Contributor

I am using two HP DL360G5 Servers with HP NC510C 10GbE NICs, as ESX4 servers.

I am using two HP DL185G5 servers with Intel 82598EB 2-port 10GbE NICs, running Open-E DSS v5 as the iSCSI target software.

The servers are interconnected using an HP 6400CL 6-port 10GbE CX4 switch.

I have been fairly pleased with the performance compared to 1Gb iSCSI connections. We use the VMware software initiator. Since 3.5 update 1 or 2 (I think), VMware has provided 10GbE support for iSCSI connections.

We run about 6 heavy-load VMs per host, so 12 in total. Disk latency is never a bottleneck for our systems.

I did run into some problems with the HP NC510C NICs and would not recommend them; the Intel-based cards, however, are stellar and I have had no problems with them. They are certified for vSphere 4 and Open-E DSS.
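For anyone reproducing a setup like this, the ESX side of the software initiator boils down to a few service-console commands; the vSwitch name, port-group name, and IP addresses below are assumptions for illustration, not values from this post:

```sh
# ESX 3.5/4.0 service-console sketch: enable the software iSCSI initiator
# and give it a vmkernel port. Names and addresses are hypothetical.
esxcfg-swiscsi -e                                 # enable software iSCSI
esxcfg-vswitch -A iSCSI vSwitch1                  # add an "iSCSI" port group
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 iSCSI   # vmkernel NIC
vmkping -s 8972 -d 192.168.10.20                  # check jumbo frames end-to-end
```

The `vmkping` check with a large, don't-fragment payload is worth doing on 10G: an MTU mismatch anywhere on the path is a common cause of disappointing iSCSI throughput.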

TimPhillips
Enthusiast

Open-E DSS v5, as far as I know, is fairly old software and does not support some of the advanced functions available in other iSCSI targets, such as StarWind.

blondie72
Contributor

Hello NG,

First of all, thanks for your support. Nice to hear that 10G does perform well after all with the right hardware in place.

This 10G solution should work with the IBM BladeCenter H and HS21 blades. So the only 4-port 10G NIC seems to be this Broadcom:

44W4473 - 5479 Broadcom 10Gb 4-port Ethernet Expansion Card (CFFh) for IBM BladeCenter

http://www-03.ibm.com/systems/xbc/cog/bchs21/bchs21io.html

It should be supported by vSphere 4 (and by ESX 3.5 for some months); it is on the HCL. But we didn't find any reports based on lab tests or experience in a production environment.

Is a 4-port card needed to gain redundancy for both LAN traffic and iSCSI?

Hope somebody has done this before and has a success story ;)

Cheers...
