Well, the answer is both yes and no (re: is it possible to link two 1Gbps Ethernet connections between an ESX box and SANMelody).
The "no" part is because ESX does not allow you to load balance an iSCSI session over two or more physical network adapters. Common implementations of this include Microsoft MPIO (multiple sessions/paths per target) and MC/S (multiple TCP connections per session).
Neither MPIO nor MC/S is supported by ESX Server. Teaming provides no benefit either, because the bonding algorithm will still direct all traffic between the ESX host and the SAN box across a single NIC (it's always the same source/destination pair).
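The teaming point can be sketched in a few lines: a typical bonding algorithm hashes the source/destination address pair to pick the uplink, so one ESX-to-SAN session always lands on the same NIC no matter how many uplinks the team has. The hash below is purely illustrative, not VMware's actual implementation.

```python
import ipaddress

def pick_uplink(src_ip: str, dst_ip: str, num_nics: int) -> int:
    """Choose an uplink NIC index by hashing the source/destination pair."""
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % num_nics

# One iSCSI session = one fixed src/dst pair = always the same NIC.
esx_ip, san_ip = "10.0.0.10", "10.0.0.20"
assert pick_uplink(esx_ip, san_ip, 2) == pick_uplink(esx_ip, san_ip, 2)

# Only a *different* address pair (e.g. a second target IP) can hash
# onto the other uplink -- which is exactly the two-LUN trick below.
```

The addresses and the hash are made up; the only point is that a single session can never spread across two uplinks.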
However, the "yes" part is somewhat good news. If you can divide your iSCSI storage capacity into *two* LUNs (as opposed to just one), you can have the iSCSI server present each LUN through a different network adapter, i.e. LUN1 is advertised to ESX from NIC #1, and LUN2 is advertised to ESX from NIC #2.
That way, you can configure two network cards on the ESX server and run iSCSI on both of them. The first NIC sees LUN1 and the second NIC sees LUN2, giving you an effective throughput of 2Gbps for iSCSI traffic.
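A toy model of the two-LUN split described above, just to make the arithmetic explicit (all names are made up; this is a sketch of the idea, not an ESX API):

```python
NIC_GBPS = 1.0  # each physical link is 1Gbps

# Each LUN is presented from its own SAN-side NIC and reached through
# its own ESX-side NIC, so each iSCSI session gets a dedicated link.
lun_paths = {
    "LUN1": ("vmnic1", "san-nic1"),
    "LUN2": ("vmnic2", "san-nic2"),
}

# Aggregate iSCSI throughput is the sum over the distinct links in use.
links_in_use = set(lun_paths.values())
aggregate_gbps = NIC_GBPS * len(links_in_use)
print(f"effective iSCSI throughput: {aggregate_gbps:.0f}Gbps")
```

If both LUNs sat behind the same NIC pair, `links_in_use` would collapse to one entry and you'd be back to 1Gbps.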
Hope this helps and isn't confusing.
Currently you only get one path between your ESX host and the iSCSI LUN. We set up as Paul has described. At least you get 2G.
For some iSCSI arrays (EqualLogic) you point at a group address, and the TCP sessions are load balanced across three 1G connections to the LUN. But this is done at the array, not by ESX.
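The array-side balancing mentioned above can be sketched roughly like this: the host always logs in to the one group address, and the array redirects each new session to whichever member port currently has the fewest sessions. Port names and the placement policy are assumptions for illustration, not EqualLogic's actual algorithm.

```python
from collections import Counter

def redirect_session(member_ports, active_sessions):
    """Array redirects a new iSCSI login to the least-loaded member port."""
    load = Counter({port: 0 for port in member_ports})
    load.update(active_sessions)
    return min(member_ports, key=lambda port: load[port])

ports = ["eth0", "eth1", "eth2"]   # three 1G ports behind the group address
sessions = ["eth0", "eth1"]        # two logins already placed
assert redirect_session(ports, sessions) == "eth2"
```

The key contrast with ESX is that the initiator stays dumb: it only ever sees the group address, and all the balancing intelligence lives on the array.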
VMware need to step up to the mark with regard to iSCSI offerings and support. 10G is out soon and we still can't even offer MPIO.
I've done the same using a 4052c: created two LUNs and tweaked the paths to the LUNs so they run across different NICs. Works great.
I also echo the need for VMware to get up off it and give us true MPIO instead of this weak path tweaking.
However, based on the presentations at VMworld, they're putting their efforts into rearchitecting the drivers so the vendors are on the hook for high-performance features.
Are there disadvantages to 'hard coding' the path when dealing with HA/DRS?
Ok.... I think I get it. Example - Boot drive on one LUN using one NIC and Data drive on second LUN using another NIC. Or, one VM on one LUN and the other VM on another LUN. Is this correct?
So this is like "manual" load balancing. I assume that, on a two-port QLA5052, this approach loses the failover capability, since each LUN is bound to a specific NIC and no redundant ports are left(?).
That's what I was getting at.
When you connect to your iSCSI store via a dual-port HBA you'll get at least two paths to each LUN. One path will go over hba1 and the other over hba2 on the dual-port HBA.
In the storage area of VC, if you select a datastore and click Properties you'll see these paths. Once you click Manage Paths you can specify which is the active path.
By setting each datastore's active path to go over a particular HBA port, you can manually load balance the datastore access traffic.
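The manual balancing described above amounts to a round-robin assignment done by hand in the Manage Paths dialog: alternate which HBA port carries each datastore's active path. A minimal sketch, with made-up datastore and adapter names:

```python
def assign_active_paths(datastores, hba_ports):
    """Round-robin each datastore's active path across the HBA ports."""
    return {ds: hba_ports[i % len(hba_ports)]
            for i, ds in enumerate(datastores)}

# Two datastores over a dual-port HBA: one active path per port.
active = assign_active_paths(["datastore1", "datastore2"],
                             ["vmhba1", "vmhba2"])
assert active == {"datastore1": "vmhba1", "datastore2": "vmhba2"}
```

With more datastores than ports, the assignment wraps around, so the traffic stays roughly even across the two 1G links.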
As far as booting over iSCSI, I believe you could use either port; you would just have to define which one loads the BIOS from the Fast!UTIL utility.
The redundancy still exists for the LUNs because if a path fails, it will switch over to the standby path on the other HBA. I also assume that if you had zoned and presented the boot LUNs to both HBAs, you would still be able to operate over the standby path for those as well. If you needed to reboot while the primary HBA path was unavailable, you might have to enable the BIOS on the second HBA. However, I have never tried booting off the iSCSI adapter and would encourage you to test these scenarios.
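The failover behavior described above boils down to: each LUN keeps an ordered list of paths, and I/O uses the first one that hasn't failed. A sketch with made-up vmhba path names:

```python
def select_path(paths, failed):
    """Return the first configured path that hasn't failed, else None."""
    for path in paths:
        if path not in failed:
            return path
    return None

paths = ["vmhba1:0:1", "vmhba2:0:1"]  # active/preferred first, standby second
assert select_path(paths, failed=set()) == "vmhba1:0:1"
# Active path fails -> I/O switches to the standby on the other HBA.
assert select_path(paths, failed={"vmhba1:0:1"}) == "vmhba2:0:1"
```

So pinning each LUN's *active* path to one port doesn't sacrifice redundancy, as long as both ports still have a presented path to that LUN.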
Hope this makes some sense. Keep asking if it doesn't.