Prior to my question I would like to thank everyone that has helped me to this point. This board is awesome!
I am running ESX 3.5 accessing our Compellent SAN with iSCSI via the ESX software initiator. I have heard that the performance of the software initiator is not that great, but I haven't tested it yet myself. The SAN has QLogic QLA4052s.
How much of a performance benefit would there be in putting those cards in the ESX server?
Starting a VM doesn't need much in the way of I/O resources, but the point about large copy jobs is true, especially multiple large transfers in parallel, like a file server with multiple clients.
Still, I prefer using the software initiator over common server NICs (like multiport Intels) rather than purchasing QLogic HBAs. With the software initiator you have more options: teaming multiple ports, aggregating traffic through MPIO drivers inside the VM, etc.
Also, I forgot to mention that some iSCSI features require the iSCSI volume to be initiated inside the virtual machine, for example VSS snapshot coordination, or aggregating ports through the multiple-connections-per-session feature. With a hardware initiator you cannot perform VM-level initiation; you can only map the iSCSI LUN at the ESX level, because the VM can't see the HBA ports.
This might change in the future as it did with Fibre Channel, but currently it's not possible AFAIK.
The highest IOs/sec you can get is with the MS iSCSI initiator inside the guest, but that costs VM CPU cycles and requires additional NICs. With an iSCSI HBA the VM CPU utilization is lower and the configuration is simpler, but as already mentioned you can't use features like consistent storage snapshots (with VSS), etc.
For some results check this:
What iSCSI initiator setup would you use to get the highest IOs/sec to the system drives (C:) of VMs?
The VMware software initiator does not appear to support jumbo frames. Also, it does not seem to load balance very well, if at all. I took a test ESX host with 2 gig ports assigned to a vSwitch used only for iSCSI traffic, created a single volume on a LeftHand SAN, and fired off storage-I/O-intensive processes on 4 separate VMs running on the host. Our reporting showed that 99% of the iSCSI traffic from the host went over a single NIC. Am I missing something here?
The QLogic iSCSI initiator at least seems to support jumbo frames, but I haven't been able to find anything out about its support for load balancing.
Any insight would be greatly appreciated.
The VMkernel interfaces can indeed be configured to support jumbo frames, but this functionality isn't officially supported by VMware at this time.
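For reference, on ESX 3.5 the (unsupported) jumbo-frame setup is done from the service console along these lines; the vSwitch name, port group name, and IP details below are placeholders for your own configuration:

```shell
# Raise the MTU on the vSwitch carrying iSCSI traffic ("vSwitch1" is an example)
esxcfg-vswitch -m 9000 vSwitch1

# A VMkernel NIC's MTU can't be changed in place on 3.5; delete and
# re-create the interface with -m 9000 ("iSCSI" port group, IP, and
# netmask are placeholders for your environment)
esxcfg-vmknic -d "iSCSI"
esxcfg-vmknic -a -i 10.0.0.10 -n 255.255.255.0 -m 9000 "iSCSI"

# Verify the new MTU is in effect
esxcfg-vmknic -l
```

Keep in mind the switch ports and SAN interfaces in between also need jumbo frames enabled end to end, or you'll see fragmentation or dropped frames instead of a speedup.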
As for load balancing, there isn't any way to achieve it through network configuration alone: the iSCSI VMkernel interface will always use the same NIC (same path) to reach the storage.
Look into multipathing with esxcfg-mpath -rr (search the forums) and you'll see how you can set up true multipathing by using separate iSCSI VMkernel interfaces spread across separate vSwitches.
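As a rough sketch of what that looks like from the service console (I'm going from memory on the exact flags, so check `esxcfg-mpath -h` on your build; the vmhba identifier below is made up, substitute one from the `-l` output for your LUNs):

```shell
# List all paths to see current policy and the LUN identifiers
esxcfg-mpath -l

# Switch a LUN to the (experimental in 3.5) round-robin policy
# "vmhba40:0:1" is an example identifier taken from the -l listing
esxcfg-mpath --lun=vmhba40:0:1 --policy=rr
```

Note that round-robin was still considered experimental on ESX 3.5, so test it on a non-production LUN before rolling it out.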