VMware Cloud Community
taylorb
Hot Shot

Should iSCSI have its own physical NICs?

Trying to decide what the best course of action is here.  I have about half my VMs on an HP Lefthand system, and (like most of us) I am running out of NICs on my VM host.   I have 4 gigabit ports left to share between the main internal networks and iSCSI.   I am trying to decide whether to pair them into two vSwitches, one with 2 links on the iSCSI VLAN and one with 2 links trunked with my internal VLANs, or lump them all together in one big vSwitch with 4 links and use port groups.   See the crude drawing below for the two options I am considering, and the CLI sketch after it.   I know either way works, but I am wondering what the most efficient setup would be, as well as any pros and cons.

[attached image: switches.gif]
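
For reference, here is roughly what each option looks like in esxcfg-vswitch terms (the vmnic numbers, port group names, and VLAN IDs are just placeholders for whatever I end up using):

```
# Option 1: two vSwitches, two uplinks each
esxcfg-vswitch -a vSwitch1                    # iSCSI vSwitch
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -a vSwitch2                    # internal/VM traffic vSwitch
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2

# Option 2: one vSwitch, four uplinks, traffic split by port group/VLAN
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -L vmnic4 vSwitch1
esxcfg-vswitch -L vmnic5 vSwitch1
esxcfg-vswitch -A iSCSI vSwitch1              # iSCSI port group
esxcfg-vswitch -v 20 -p iSCSI vSwitch1        # tag it with the iSCSI VLAN
esxcfg-vswitch -A Internal vSwitch1           # internal VLANs port group
esxcfg-vswitch -v 10 -p Internal vSwitch1
```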

4 Replies
a_p_
Leadership

IMO you should use dedicated NICs for iSCSI traffic (unless you run 10 Gbit/s). In addition, consult the Lefthand (sorry, HP) best practices. You may get better iSCSI throughput with two separate vSwitches (1 NIC each) for iSCSI.
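
A minimal sketch of that two-vSwitch layout, assuming the software iSCSI adapter is vmhba33 and vmnic2/vmnic3 are the free ports (adjust all names and addresses to your host):

```
# One vmkernel port per vSwitch, one uplink each
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -A iSCSI-1 vSwitch2
esxcfg-vmknic -a -i 192.168.50.11 -n 255.255.255.0 iSCSI-1

esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic3 vSwitch3
esxcfg-vswitch -A iSCSI-2 vSwitch3
esxcfg-vmknic -a -i 192.168.50.12 -n 255.255.255.0 iSCSI-2

# Bind both vmkernel ports to the software iSCSI adapter (ESX/ESXi 4.x syntax)
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
```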

André

depping
Leadership

I would recommend separating them on their own vSwitch as well. Although, technically speaking, you can achieve the same thing with port groups by setting NICs to active/unused or active/standby, from a management perspective it is easier when you separate them.
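
If you do go the port group route, the override looks something like this with the ESXi 5.x esxcli (port group and uplink names are placeholders; uplinks left off the lists end up unused):

```
# Pin each iSCSI port group to exactly one active uplink
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name iSCSI-1 --active-uplinks vmnic2
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name iSCSI-2 --active-uplinks vmnic3
```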

Duncan

Yellow-bricks.com | HA/DRS technical deepdive - the ebook

anjaneshbabu
Contributor

Following various best-practice articles and a bit of our own testing, we settled on the model given in the figure below (see also the CLI sketch after this list). The highlights of the setup are:

- all NICs are Broadcom iSCSI offload (dependent HBA) type

- separate physical switches for storage and VM traffic

- the Broadcom iSCSI adapters are driven by the built-in software adapters

- the vSwitches are set up for round-robin load balancing across the external NICs
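
A rough sketch of the storage side of that setup, using ESX/ESXi 4.x commands (the vmk/vmhba names and device ID are placeholders; each Broadcom dependent HBA shows up as its own vmhba, and by round robin I mean the storage path selection policy):

```
# Bind one vmkernel port to each Broadcom dependent iSCSI HBA
esxcli swiscsi nic add -n vmk1 -d vmhba34
esxcli swiscsi nic add -n vmk2 -d vmhba35

# Set round-robin path selection on an iSCSI device
esxcli nmp device setpolicy --device naa.<device-id> --psp VMW_PSP_RR
esxcli nmp device list                        # verify the active policy
```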

A few approaches we eliminated en route to our deployment:

- jumbo frames: not worth the effort, and we saw inconsistent performance with loss of service

- a shared switch with storage and VM traffic separated using VLANs: degraded performance, since it is still within the same switch fabric

- SFP+ connectors on the server: not sure about backwards compatibility with SFP slots

To answer your query: yes, it is imperative that you have separate, dedicated NICs for iSCSI traffic. You could get away with VLANs on a single NIC (sketched below), but that would not be ideal in a production environment.
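
For completeness, the single-NIC VLAN variant looks something like this (VLAN ID and addresses are just examples):

```
# iSCSI as just another tagged port group on a shared vSwitch
esxcfg-vswitch -A iSCSI vSwitch0
esxcfg-vswitch -v 20 -p iSCSI vSwitch0                      # example VLAN ID
esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 iSCSI    # vmkernel port
```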

anjanesh babu

www.itgeeks.info

bulletprooffool
Champion

As a rule, I always separate storage traffic, whether it is iSCSI or NFS.

Storage NICs invariably carry more traffic than any other NIC, meaning large disk reads/writes can heavily affect your network bandwidth utilisation.

As an add-on to separating iSCSI, wherever possible I also make my storage networks NON-routable, to further isolate network traffic and provide additional security.
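
In practice that just means putting the iSCSI vmkernel ports on a subnet that has no gateway defined anywhere (addresses here are examples):

```
# vmkernel port on an isolated, non-routable storage subnet
esxcfg-vmknic -a -i 172.16.99.11 -n 255.255.255.0 iSCSI-1
esxcfg-route -l     # confirm no route sends storage traffic off-subnet
```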

One day I will virtualise myself . . .