Greg, please edit your post and change it to a "question" to encourage the forum users to respond.
Do you mean 3 NICs each for a total of 6?
My suggestion (and I'm sure others will provide a different one) is to "bond" two of the three NICs and use it for a combined Service Console / Virtual Machine Network / VMotion vswitch.
Use the 3rd NIC strictly for iSCSI; that is, create a separate vSwitch and configure Service Console and VMkernel services on it for iSCSI usage. (The Service Console connection is required because it is responsible for discovering iSCSI targets.)
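For reference, a rough sketch of that dedicated iSCSI vSwitch from the ESX service console (ESX 3.x syntax assumed; the vmnic number and IP addresses are placeholders for your environment):

```shell
# Create a dedicated vSwitch for iSCSI and link the 3rd NIC to it
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1

# VMkernel port for the iSCSI data path
esxcfg-vswitch -A "iSCSI VMkernel" vSwitch1
esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 "iSCSI VMkernel"

# Second Service Console port on the iSCSI subnet (needed for target discovery)
esxcfg-vswitch -A "Service Console 2" vSwitch1
esxcfg-vswif -a vswif1 -p "Service Console 2" -i 10.10.10.12 -n 255.255.255.0
```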
Since you only have two ESX servers, the 1Gbps of dedicated bandwidth to the EQL array should provide exceptional performance for your environment.
Set up an iSCSI lab today with a PS3600x using a couple of servers, each with 3 NICs.
Exactly as Paul suggests...
More NICs give you more flexibility, though.
But you rarely see the NIC as the first resource to bottleneck.
Yep, I'd agree with Paul, with the only exception being that I think you have 6 NICs per server, which means you can also dedicate one to VMotion rather than having the shared service console/virtual machine/VMotion config. This also leaves a couple spare, which may be needed depending on the complexity of your VMs' networking, or which can be used for additional bandwidth for either the iSCSI network or the VM network.
It would be 6 total ports per server, with 3 dual-port NICs per server.
therefore you could go:
vSwitch #1 - service console (1 NIC)
vSwitch #2 - VMotion (1 NIC)
vSwitch #3 - iSCSI (service console & VMkernel) (1 NIC) - start with one and see how performance goes
vSwitch #4 - Virtual Machines (1 or 2 NICs off separate cards)
This leaves you one NIC spare for an additional Virtual Machine vSwitch or to add resilience/additional bandwidth to the iSCSI network if needed.
vSwitch #1 could potentially double as a Virtual Machine switch too, for even more flexibility.
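To illustrate, the VMotion and Virtual Machine pieces of the layout above might look something like this from the service console (ESX 3.x commands; the vmnic numbers are placeholders - check which port belongs to which card with `esxcfg-nics -l` so the VM uplinks land on separate cards):

```shell
# vSwitch #2 - VMotion on its own NIC
# (VMotion itself still gets enabled on this port group via the VI Client)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -A "VMotion" vSwitch1
esxcfg-vmknic -a -i 10.10.20.11 -n 255.255.255.0 "VMotion"

# vSwitch #4 - Virtual Machines, two uplinks off separate dual-port cards
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic3 vSwitch3
esxcfg-vswitch -L vmnic5 vSwitch3
esxcfg-vswitch -A "VM Network" vSwitch3
```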
With 6 NICs on a server there are several options.
I would probably set it up as:
2 NIC ports for your VMkernel iSCSI
1 NIC for VMotion
2 NICs for the Guest VM Network (to your main LAN)
(The last one can be used for additional bandwidth for either iSCSI or the VM Network. Or you could use it for another dedicated COS network, but that's a waste of a GigE NIC in my opinion.)
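A sketch of the two-uplink iSCSI piece of that layout (again ESX 3.x service-console syntax; vmnic numbers and addresses are placeholders):

```shell
# Two NIC ports teamed on the iSCSI vSwitch for redundancy/extra bandwidth
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic0 vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -A "iSCSI" vSwitch1
esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 "iSCSI"
```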
Using the COS for management on the iSCSI subnet works well since it's a natural management network, in that ONLY the right people ever know it's there.
The important thing is to look at your switches closely. If they are working as a single switch then you will certainly want to alternate the NIC connections for redundancy.
**** Make sure you have Flow Control enabled on the switches!!! Don't bother with Jumbo Frames at the switch since ESX can't understand them anyway. ****
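On the switch side, enabling receive flow control on the ports facing the ESX NICs and the array would look roughly like this on an IOS-style switch (interface names are placeholders, and the exact syntax varies by vendor and model - check your switch's documentation):

```shell
# Cisco IOS-style example - apply to every port facing an ESX NIC or the array
configure terminal
 interface range GigabitEthernet0/1 - 6
  flowcontrol receive on
 end
```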
For management of the EQL and the ESX hosts, just make sure there is an ISL between your iSCSI network and the main LAN; if not, you'll need a dual-homed server to use as a control machine. I've seen people set up their Virtual Center server as this unit.
Hope this helps
If you go with HBAs and decide to use jumbo frames be aware that the latest firmware release notes have a correction in them.
Previous documentation stated that you shouldn't enable jumbo frames and flow control together.
Apparently that was mistaken.