VMware Cloud Community
Morrow201110141
Contributor

Assigning NICs to my ESX hosts

Hello,

I have recently been tasked with building a second VMware environment at our DR location.

I have limited experience with VMware, but I have been looking after our production servers for a while and I am fairly sure that the network configuration there is poor. I'd like to get it right at the DR site and then eventually correct it in production.

Each of my ESX hosts has 6 NICs available.
I am using iSCSI at the DR site as opposed to FC in production.
I have only 1 subnet available for all connections. I can use VLANs if needed, and in the future I might be able to use separate switches for my iSCSI traffic, but not from the start.

I have currently assigned:

1 NIC to the service console
1 NIC to vMotion
2 NICs to the VM network
1 NIC to a second service console on a different subnet (this serves no real purpose other than clearing the alarm about a non-redundant service console)
1 unassigned NIC, which I was thinking of teaming with the first NIC to provide redundancy

So how should I best assign these NICs? Which NIC is used for the iSCSI traffic? Any suggestions would be greatly appreciated.

Regards,

logiboy123
Expert

Don't use ESX, use ESXi.

2 NICs for the Management and vMotion network

2 NICs for the iSCSI network

2 NICs for VM Networking

Are you using Enterprise Plus?

Is your vCenter server physical or virtual, and where does it live? In the production vSphere environment?

Typical best practice for Management and vMotion is to have two NICs on a standard vSwitch (vSS), with the VMkernel ports set up so that:

Management - vmnic0 active / vmnic3 standby

vMotion - vmnic0 standby / vmnic3 active

This is documented in one of the best practice documents, but I can't remember off the top of my head which one.
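
A minimal console sketch of that layout, assuming the management network already lives on vSwitch0 with vmnic0; the vMotion IP address and the vmk number are placeholders:

# add the second uplink to the management vSwitch
esxcfg-vswitch -L vmnic3 vSwitch0

# create a vMotion port group and VMkernel port (IP is a placeholder)
esxcfg-vswitch -A vMotion vSwitch0
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 vMotion

# enable vMotion on the new VMkernel port (check esxcfg-vmknic -l for the vmk number)
vim-cmd hostsvc/vmotion/vnic_set vmk1

# the per-port-group active/standby override is then set in the vSphere Client
# under vSwitch0 Properties > port group > NIC Teaming > Failover Order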

Typical best practice for iSCSI implementations is to bind the iSCSI traffic to the respective ports and once again define the load balancing policy on the VMkernel ports (example binding commands follow this list) so that:

iSCSI1 - vmnic1 active / vmnic4 disabled

iSCSI2 - vmnic1 disabled / vmnic4 active
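
On ESXi 4.x the binding itself is done from the console with esxcli. A minimal sketch, assuming the software iSCSI adapter came up as vmhba33 and the two iSCSI VMkernel ports are vmk1 and vmk2 (all three names are placeholders; check with esxcfg-scsidevs -a and esxcfg-vmknic -l):

# bind each iSCSI VMkernel port to the software iSCSI adapter
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# confirm both ports are bound
esxcli swiscsi nic list -d vmhba33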

See the following document for a vendor-agnostic walkthrough:

http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vs...

For a LeftHand-specific document, also try:

http://virtualy-anything.blogspot.com/2009/12/how-to-configure-vsphere-mpio-for-iscsi.html

If you haven't decided on an iSCSI solution yet, then I'll simply tell you to buy a LeftHand. It is pure awesome. If you want a full list of reasons why, just let me know.

Then that leaves the VM Network switch. Simply put the two NICs in an active/active configuration and you now have 2 Gb of throughput for your VMs.
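
For example, assuming the VM traffic sits on vSwitch1 and the two remaining NICs are vmnic2 and vmnic5 (placeholder names), linking both uplinks and leaving the default teaming policy gives you active/active:

esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic5 vSwitch1

# the default policy (route based on originating virtual port ID) keeps
# both uplinks active, so no further teaming changes are needed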

logiboy123
Expert

The above networking recommendations change if you are using Enterprise Plus licensing and vCenter is physical, or if you will be using the vCenter server from your production environment.

Morrow201110141
Contributor

Hi,

I am using ESXi at the DR site. The vCenter server is physical and we are running Enterprise Plus.

Where do I make the changes to the binding that you suggest?

logiboy123
Expert

With Enterprise Plus and vCenter as physical, I would recommend using a vNetwork Distributed Switch (vDS).

Assign all 6 NICs as dvUplinks on the vDS with the following configuration:

dvUplink1 - Management - Active, vMotion - Standby, VM Networks - Standby

dvUplink2 - Management - Standby, vMotion - Active, VM Networks - Standby

dvUplink3 - Management - Standby, vMotion - Standby, VM Networks - Active

dvUplink4 - Management - Standby, vMotion - Standby, VM Networks - Active

dvUplink5 - Management - Standby, vMotion - Standby, VM Networks - Active

dvUplink6 - Management - Standby, vMotion - Standby, VM Networks - Active

You will probably need to follow these steps (a console sketch for checking the host side follows the list):

1) Create the host

2) Add host to vCenter

3) Create vDS

4) Assign NIC's to dvUplink profiles

5) Configure load balancing and VLAN's

6) Attach ESXi host to vDS - Migrate the management network to dvUplink1
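
Most of these steps run from vCenter rather than on the host, but the host side of the uplink assignment can be inspected from the console, or repaired there if the management network migration goes wrong. A sketch, assuming a vDS named dvSwitch; the dvPort ID (100 here) is a placeholder read from the -l listing:

# list standard and distributed switches, uplinks and dvPort IDs
esxcfg-vswitch -l

# attach a physical NIC to a free dvUplink port on the vDS
esxcfg-vswitch -P vmnic0 -V 100 dvSwitch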

Using a vDS is quite a bit more complex than using a vSS, but far more elegant and resilient, and subsequent management tasks become quite simple. I suggest reading up on vNetwork Distributed Switches.

The only caveat is that your iSCSI solution might not be supported on the vDS; I'd check with your vendor to confirm. Also, multipath IO is supported on a vDS under vCenter 4.1, so if you are running vCenter 4.1 this solution will work for you. Otherwise you could use 2 NICs on a vSS and the other 4 on your vDS.

logiboy123
Expert

Check this out if you are considering the simpler vSS solution:

http://www.kendrickcoleman.com/index.php?/Tech-Blog/vsphere-host-nic-design-6-nics.html

Certainly some considerations should be (example commands for points 1 and 4 follow the list):

1) Using Jumbo Frames for your iSCSI network

2) Getting NIC failure redundancy covered

3) Maximising throughput for Management, vMotion and VM Networks

4) Isolating Management, vMotion and VM Networks (something I always do using VLAN tagging at the very least)
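
Console examples for points 1 and 4, assuming an iSCSI vSwitch named vSwitch2 with a VMkernel port group called iSCSI1; the IP address and VLAN IDs are placeholders, and jumbo frames must also be enabled end-to-end on the physical switches:

# 1) jumbo frames: set the MTU on the vSwitch and on the VMkernel port
esxcfg-vswitch -m 9000 vSwitch2
esxcfg-vmknic -a -i 10.0.1.10 -n 255.255.255.0 -m 9000 iSCSI1

# 4) isolate traffic with VLAN tagging per port group
esxcfg-vswitch -v 20 -p vMotion vSwitch0
esxcfg-vswitch -v 30 -p iSCSI1 vSwitch2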

Morrow201110141
Contributor

Hi,

Thanks for your feedback. I think I will go with the simpler vSS design, as the learning curve is steep for me and my timescales are short. I don't really handle networking on a day-to-day basis, so simple will probably be better!

There is just one point that I'm not 100% clear on. It seems that I have to set the binding for iSCSI to the NIC(s) via the command line? Is there no easier way to assign this? By default, I assume it uses the service console adapter for software HBA based iSCSI? Is the process for changing the NIC the same for a software HBA as for hardware ones?
