VMware Cloud Community
gallopa
Enthusiast

iSCSI HBA vs. Std NIC

Does anyone have any experience of the performance differences between iSCSI HBAs compared to standard NICs? I have heard that a standard NIC with the software iSCSI initiator can perform as well as, or better than, an iSCSI HBA. Do you agree or disagree? Has anyone conducted any comparison tests that they could share?

Also, I have heard that iSCSI HBAs are susceptible to dropping packets under high load, and that this causes HBA performance to tank as more traffic is generated to re-send the same packets. Does anyone have any experience of this?

Thanks for your help.

lamw
Community Manager

An iSCSI HBA usually also includes a TOE (TCP/IP Offload Engine). This differs from a normal NIC in that it offloads the processing of iSCSI packets from the server's CPU onto the HBA card. A traditional NIC used for iSCSI passes the iSCSI packets to the CPU to be processed there, and you can swamp your CPU with that packet processing alone, leaving little processing power for anything else. Hence, if you know you're going to use iSCSI intensively, with either lots of I/O or lots of VM traffic, you'll want a dedicated HBA to offload that work.

So in that regard, I think software iSCSI can perform well, but definitely not better, as it takes CPU resources away from your VMs. Getting a very good iSCSI HBA can make a real difference in performance; QLogic is usually a good one. There is a cost associated with an iSCSI HBA versus the software initiator, though, so if you need the performance, price may not be a limiting factor. In terms of performance testing, I have not seen anything out there, but I'm sure someone on the forum can point you to some benchmarks.

This article might help clear some things up as well:

http://www.infostor.com/Articles/Article_Display.cfm?Section=Archives&Subsection=Display&ARTICLE_ID=...

maddenwr
Enthusiast

Did you ever get any more information to help you decide between the hardware and software iSCSI initiators?

I am setting up a new iSCSI environment and am researching the alternatives... looks like I'm arriving at the same questions you had, so I am curious what you found and what you decided.

Kahonu84
Hot Shot

Aloha - I've been using teamed GB NICs for nearly two years with no problems. I would suggest trying the NICs before making an investment in an HBA. Of course, it depends on your CPU horsepower, but you certainly have nothing to lose by trying the NICs first.
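If it helps, here is a rough sketch of how you could switch on the software initiator from a script instead of the VI Client, so you can try the NIC route quickly. This is only an illustration using pyVmomi; the vCenter address, host name and credentials are placeholders you would replace, and you would still add your targets and rescan afterwards.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - adjust for your environment.
ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx01.example.com")

# Enable the software iSCSI initiator if it is not already enabled.
storage = host.configManager.storageSystem
if not storage.storageDeviceInfo.softwareInternetScsiEnabled:
    storage.UpdateSoftwareInternetScsiEnabled(True)

# Show the software iSCSI adapter (vmhba) that the host now exposes.
for hba in storage.storageDeviceInfo.hostBusAdapter:
    if isinstance(hba, vim.host.InternetScsiHba) and hba.isSoftwareBased:
        print(hba.device, hba.iScsiName)

Disconnect(si)
```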

Bill

AndreTheGiant
Immortal

With vSphere there are a lot of improvements in the software iSCSI initiator.

Jumbo frames are now supported for the software initiator as well.

So the difference in performance could be minimal.

NB: be sure to follow your storage vendor's best practices, also in the iSCSI network design.
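For example, here is a rough pyVmomi sketch of raising the MTU to 9000 on the vSwitch and vmkernel port used for iSCSI. The names vSwitch1 and vmk1, and the vCenter and host names, are only examples; jumbo frames also only help if the physical switches and the storage array are set to the same MTU end to end, and on older ESX releases you may need to recreate the vmkernel port with the desired MTU instead.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx01.example.com")
net = host.configManager.networkSystem

# Raise the MTU on the vSwitch that carries iSCSI traffic (example: vSwitch1).
for vsw in net.networkInfo.vswitch:
    if vsw.name == "vSwitch1":
        spec = vsw.spec
        spec.mtu = 9000
        net.UpdateVirtualSwitch(vswitchName="vSwitch1", spec=spec)

# Raise the MTU on the iSCSI vmkernel port as well (example: vmk1).
for vnic in net.networkInfo.vnic:
    if vnic.device == "vmk1":
        nic_spec = vnic.spec
        nic_spec.mtu = 9000
        net.UpdateVirtualNic(device="vmk1", nic=nic_spec)

Disconnect(si)
```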

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro

maddenwr
Enthusiast

I guess the real questions for me now are somewhat hardware related... when configuring the ESX host, do I get iSCSI HBAs or just multi-function NICs? There seem to be a lot of people of the opinion that current CPUs have plenty of spare capacity and that using standard (low cost) NICs will get the job done with reasonably good performance, and further that the cost of iSCSI HBAs does not justify the performance improvement.

I believe these opinions are valid but were formed in isolation and are best applied to standalone servers, not to an enterprise virtual infrastructure. My line of thinking is that I would prefer to offload as much processing from the host CPUs as possible, leaving the extra for the VMs.

So, a multi-function NIC is supposed to have some TOE capability, whereas a standard low-cost NIC has none:

a) is a multi-function NIC an acceptable alternative that offers some offload and gives better performance than a low cost NIC?

b) does ESX recognize and utilize the capabilities of a multi-function NIC?

Otherwise, the only option for me may be the iSCSI HBA.

Thanks for your input

joergriether
Hot Shot

In vSphere 4 the iSCSI software initiator was redesigned from scratch, with huge improvements. Now add the CPU power of current multicore systems and give this to an X520 (if you prefer 10Gb) or a standard e1000 (if you prefer 1Gb) - I would be surprised if you got more power with a dedicated HBA.

I can add one point many forget to mention: hardware offloading requires 100% functional and polished drivers - otherwise you could end up hunting for the cause of this latency or that latency ;-)

I don't want to badmouth hardware iSCSI HBAs - not at all - because they really rock. But I'd say VMware software iSCSI rocks too, and equipped with good host CPU power, it rocks very well!

best regards

Joerg

hbato
Enthusiast

Completely agree with Joerg.

I think in the end it boils down to the price/performance of an HBA versus a NIC. If you have underutilized CPU, you might want to go with standard NICs; if you want to reserve more of your CPUs' horsepower, then use an HBA.
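A quick way to sanity-check how much CPU headroom your hosts actually have is to pull the usage numbers from vCenter. A rough pyVmomi sketch (the vCenter address and credentials are placeholders):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)

# Print current CPU usage versus capacity for every host in the inventory.
for host in view.view:
    hw = host.summary.hardware
    qs = host.summary.quickStats
    capacity_mhz = hw.cpuMhz * hw.numCpuCores
    used_pct = 100.0 * qs.overallCpuUsage / capacity_mhz
    print("%s: %d/%d MHz (%.0f%% CPU used)" %
          (host.name, qs.overallCpuUsage, capacity_mhz, used_pct))

Disconnect(si)
```

If the hosts sit well below capacity even at peak, the cycles the software initiator burns will probably never be missed; if they are already running hot, that is an argument for the HBA.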

Regards, Harold

AndreTheGiant
Immortal

See also: http://vinfrastructure.it/en/vdesign/vstorage-software-vs-hardware-iscsi/

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro

Khue
Enthusiast

Hey Andre,

I was wondering how I can identify whether a particular product supports iSCSI offloading. Any tips/suggestions? I checked the HCL and my card is listed, but it doesn't say much else other than generic compatibility. This will be my first foray into iSCSI storage, as most of my experience is isolated to FC. My servers will be equipped with HP NC522SFPs.

Thanks for any help in advance!

eilz
Contributor

Can I use iSCSI HBA cards only on my two/three ESX host servers, but normal 2 x gigabit NICs on the dedicated PCs running OpenFiler where the VMs will be stored?  Would there be any performance and/or CPU offload benefits in doing this?

OR

Could I use NICs with TOE support to offload TCP/IP traffic at both ends? Is it worth going down this route? I have heard nothing really supports TOE yet.

My thinking was to offload iSCSI traffic from the ESX host(s) CPUs, and I assumed I don't really need to offload anything on the dedicated OpenFiler boxes, so I would just use 2 x gigabit NICs on them.

Is this worth considering/doing? Would there be any gain here, or would I just be better off using gigabit NICs all round and the software initiator instead?

Thanks in advance..

Bisti
Enthusiast

The main benefit you could see from NICs with TOE is lower CPU usage, so the question is: how busy is your CPU? If it's heavily loaded, then you could probably see the difference. Some TOE / dependent HW iSCSI initiators rule out using jumbo frames though, which can lower performance in the end.

Like Khue, I'm interested whether there is a list stating if a particular iSCSI initiator is dependent or independent.

davidbewernick
Contributor

Hi,

We install iSCSI environments quite a lot and have found that in most cases switching off the TOE and using the software initiator ended up giving the best performance. Most NICs have the ability to act as an iSCSI HBA these days, but the NIC's onboard hardware was mostly slower than the CPU.

Since most environments lack RAM rather than CPU resources, the software initiator is a good option, I think.

AndreTheGiant
Immortal


I was wondering how I can identify if a particular product supports iSCSI offloading. Any tips/suggestions?

You must check the VMware HCL... but that is not enough... some NICs may require a license of some kind, or simply enabling the function in the BIOS.
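Once the offload is licensed and enabled, one way to verify that ESX actually sees it is to list the iSCSI adapters on the host: if the card's offload is active, it appears as an additional vmhba next to (or instead of) the software initiator. A rough pyVmomi sketch with placeholder names; it does not tell you dependent versus independent, but it confirms whether an offload adapter is present at all.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx01.example.com")

# List every iSCSI host bus adapter the host exposes.
for hba in host.configManager.storageSystem.storageDeviceInfo.hostBusAdapter:
    if isinstance(hba, vim.host.InternetScsiHba):
        kind = "software initiator" if hba.isSoftwareBased else "hardware-based"
        print("%s  model=%s  driver=%s  (%s)" %
              (hba.device, hba.model, hba.driver, kind))

Disconnect(si)
```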

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro