IMO you should use dedicated NICs for iSCSI traffic (unless you run 10 Gbit/s). In addition to this, consult the LeftHand (sorry, HP) best practices. You may get better iSCSI throughput with two separate vSwitches (1 NIC each) for iSCSI.
I would recommend separating them onto their own vSwitch as well. Technically you can achieve the same thing with port groups and setting the NICs to active/unused or active/standby, but from a management perspective it is easier when they are separated.
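If it helps, here is a minimal pyVmomi sketch of that active/unused override: each iSCSI port group gets exactly one active uplink and everything else is left unused. The vCenter/host names, vSwitch, port group and vmnic names are placeholders from an assumed lab setup, and it presumes the storage vSwitch already exists with both uplinks attached.

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()  # lab shortcut; verify certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
host = si.content.searchIndex.FindByDnsName(None, "esxi01.example.local", vmSearch=False)
net_sys = host.configManager.networkSystem

def add_pinned_iscsi_portgroup(pg_name, vswitch, vlan, active_nic):
    """Create an iSCSI port group whose teaming policy uses exactly one active uplink."""
    teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
        policy="failover_explicit",
        nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
            activeNic=[active_nic],   # single active uplink for this VMkernel port
            standbyNic=[]))           # anything not listed is effectively unused
    spec = vim.host.PortGroup.Specification(
        name=pg_name, vswitchName=vswitch, vlanId=vlan,
        policy=vim.host.NetworkPolicy(nicTeaming=teaming))
    net_sys.AddPortGroup(portgrp=spec)

add_pinned_iscsi_portgroup("iSCSI-A", "vSwitch1", 0, "vmnic2")
add_pinned_iscsi_portgroup("iSCSI-B", "vSwitch1", 0, "vmnic3")
Disconnect(si)
```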
Following various best-practice articles and a bit of our own testing, we settled on the model shown in the figure below. The highlights of the setup are:
- all NICs are Broadcom iSCSI offload (dependent HBA) type
- separate physical switches for storage and VM traffic
- the Broadcom iSCSI adapters are driven by the built-in software iSCSI adapter
- round robin load balancing across the external NICs (see the sketch after this list)
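On the round robin point: in vSphere that is normally expressed as the VMW_PSP_RR path selection policy on the iSCSI LUNs rather than as a vSwitch teaming mode, so the sketch below shows it that way; treat that reading as my assumption about the poster's setup. The host name is a placeholder, and the loop deliberately walks every multipathed LUN, so in practice you would filter it down to the devices presented by your array first.

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()  # lab shortcut; verify certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
host = si.content.searchIndex.FindByDnsName(None, "esxi01.example.local", vmSearch=False)
storage_sys = host.configManager.storageSystem

# Switch every multipathed LUN to the round robin PSP; in real life, filter
# this down to the LUNs presented by your iSCSI array.
for lun in storage_sys.storageDeviceInfo.multipathInfo.lun:
    if lun.policy is None or lun.policy.policy != "VMW_PSP_RR":
        storage_sys.SetMultipathLunPolicy(
            lunId=lun.id,
            policy=vim.host.MultipathInfo.LogicalUnitPolicy(policy="VMW_PSP_RR"))
Disconnect(si)
```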
A few concepts we eliminated en route to our deployment:
- jumbo frames: not worth the effort; we saw inconsistent performance and loss of service
- a shared switch with storage and VM traffic separated by VLANs: performance degraded, since it is still within the same switch fabric
- SFP+ connectors on the servers: we were not sure about backwards compatibility with SFP slots
To answer your query: yes, it is imperative that you have separate, dedicated NICs for iSCSI traffic. You could get away with VLANs on a single NIC, but that would not be ideal in a production environment.
(Attached figure: vmWare Ports.png)
As a rule, I always separate storage traffic, whether it is iSCSI or NFS.
Storage NICs invariably carry more traffic than any other NIC, meaning large disk reads and writes can heavily affect your network bandwidth utilisation.
As an add-on to separating iSCSI, wherever possible I also make my storage networks non-routable, to further isolate network traffic and provide additional security.
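For what it's worth, the non-routable part is mostly a network-side decision (no gateway or L3 interface on the storage VLAN). On the ESXi side you simply give the storage VMkernel port a static IP and mask and never reference a gateway for it. A hedged pyVmomi sketch, with placeholder names, port group and addresses:

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()  # lab shortcut; verify certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
host = si.content.searchIndex.FindByDnsName(None, "esxi01.example.local", vmSearch=False)
net_sys = host.configManager.networkSystem

# Static address on the isolated storage subnet; no gateway is ever referenced,
# so the VMkernel port can only reach hosts on its own segment.
vmk_spec = vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False,
                         ipAddress="192.168.50.11",
                         subnetMask="255.255.255.0"))
vmk_name = net_sys.AddVirtualNic(portgroup="iSCSI-A", nic=vmk_spec)
print("Created storage VMkernel port:", vmk_name)
Disconnect(si)
```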