ericsl
Enthusiast

Performance of QLogic iSCSI vs Software Initiator in ESX 3.5

Hello All,

Prior to my question I would like to thank everyone that has helped me to this point. This board is awesome!

I am running ESX 3.5 accessing our Compellent SAN over iSCSI via the ESX software initiator. I have heard that the performance of the software initiator is not that great, but I have not tested it myself yet. The SAN has QLogic QLA4052s.

How much of a performance benefit would there be in putting those cards in the ESX server?

TYIA,

Eric

8 Replies
kukacz
Enthusiast

You can expect the same throughput. The difference is the CPU penalty of the software initiator; at throughput peaks it can reach 20-30%.


--

Lukas Kubin

ericsl
Enthusiast

Okay, so for large copy jobs on the iSCSI network this might be an issue (like starting VMs that boot from the SAN, etc.), but there's no throughput gain.

Thanks,

Eric

kukacz
Enthusiast

Starting a VM doesn't need many I/O resources, but large copy jobs are a real concern, especially multiple large transfers in parallel, like a file server with multiple clients.

Still, I prefer using the software initiator over common server NICs (like multiport Intels) rather than purchasing QLogic HBAs. With the software initiator you have more options: teaming multiple ports, aggregating traffic through MPIO drivers inside the VM, etc.

--

Lukas Kubin

kukacz
Enthusiast

Also, I forgot to mention there are some iSCSI features which require the iSCSI volume to be initiated inside the virtual machine, for example VSS snapshot coordination, or aggregating ports through the multiple-connections-per-session feature. With a hardware initiator you cannot perform VM-level initiation; you can only map the iSCSI LUN at the ESX level, because the VM can't see the HBA ports.
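For illustration, logging a Windows guest into a volume with Microsoft's software initiator from the command line looks roughly like this (a sketch only; the portal address and target IQN below are placeholders, not real values):

```shell
# Inside the Windows VM: register the SAN's portal, list what it
# advertises, then log in to the target. Placeholders throughout.
iscsicli QAddTargetPortal 10.0.0.50
iscsicli ListTargets
iscsicli QLoginTarget iqn.2002-03.com.compellent:vol1
```

Once the guest owns the session like this, VSS-aware tools on the SAN side can coordinate snapshots with the initiator inside the VM.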

This might change in the future, as it did with Fibre Channel; currently it's not possible, AFAIK.

--

Lukas Kubin

christianZ
Champion

You'll get the highest I/Os per second with the MS iSCSI initiator inside the VM, but that costs VM CPU and needs additional NICs. With an iSCSI HBA the VM CPU utilization is lower and the configuration is simpler, but as already mentioned you can't use features like consistent storage snapshots (with VSS), etc.

For some results check this:

Brian_D1
Contributor

What iSCSI initiator setup would you use to get the highest I/Os per second to the system drives (C:) of VMs?

The VMware software initiator does not appear to support jumbo frames. Also, it does not seem to load balance very well, if at all. I took a test ESX host with two gigabit ports assigned to a vSwitch used only for iSCSI traffic, created a single volume on a LeftHand SAN, and fired off storage-I/O-intensive processes on four separate VMs running on the host. However, our reporting showed 99% of the iSCSI traffic from the host went over a single NIC. Am I missing something here?

The QLogic iSCSI initiator at least seems to support jumbo frames, but I haven't been able to find anything about its support for load balancing.

Any insight would be greatly appreciated.

Paul_Lalonde
Commander

The VMkernel interfaces can indeed be configured to support jumbo frames, but this functionality isn't officially supported by VMware at this time.
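For reference, the unsupported jumbo-frame setup is done from the service console. A rough sketch, assuming a vSwitch and VMkernel port group dedicated to iSCSI (names and IPs are placeholders):

```shell
# Experimental/unsupported in ESX 3.5: raise the MTU on the iSCSI
# vSwitch, then recreate the VMkernel NIC with a matching 9000 MTU.
# vSwitch, port group, and address values below are placeholders.
esxcfg-vswitch -m 9000 vSwitch1
esxcfg-vmknic -d "iSCSI"
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 -m 9000 "iSCSI"
esxcfg-vmknic -l    # verify the MTU column now shows 9000
```

The physical switch ports and the SAN's interfaces also need jumbo frames enabled end to end, or you'll see fragmentation or dropped frames instead of a gain.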

As for load-balancing, there isn't any means of achieving it through network configuration. The iSCSI VMkernel interface will always use the same NIC (same path) to reach the storage.

Look into multipathing with esxcfg-mpath -rr (search the forums) and you'll see how you can set up true multipathing by using separate iSCSI VMkernel interfaces spread across separate vSwitches.
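The layout described above can be sketched from the service console like this (an assumed example, not a drop-in config; vSwitch names, vmnics, and IPs are placeholders):

```shell
# Second vSwitch with its own uplink and its own VMkernel iSCSI
# interface, so a second path to the storage exists. Placeholders
# throughout; repeat per additional path.
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A "iSCSI2" vSwitch2
esxcfg-vmknic -a -i 10.0.0.12 -n 255.255.255.0 "iSCSI2"
esxcfg-mpath -l    # list the paths to each LUN and the current policy
```

With both VMkernel interfaces able to reach the array, the multipath policy decides how the paths are used.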

Regards,

Paul

BigHug
Enthusiast

In our testing, an iSCSI HBA and the MS software iSCSI initiator deliver the same random I/O speed, while the ESX software iSCSI initiator is about 20% slower. But that was on 3.0.2; maybe the ESX software initiator is improved in 3.5. YMMV.
