zach1008
Enthusiast

ESXi 5.1 Cluster Networking best practices

We are getting ready to replace our existing SAN with a new 10GbE Dell MD3620i plus an MD1220 expansion chassis.  We are adding a dual-port 10GbE network card to every host.  We have a total of 6 hosts, and in addition to the dual-port 10GbE card, each host has 8 1Gb network connections.  I am looking for suggestions on the best way to configure the networking on these hosts.

We are using ESXi 5.1 on all hosts and vCenter Server 5.1.  Each host currently has 192GB of RAM.  We have a total of 85 VMs running right now and plan to grow that number with the addition of the new, faster storage and the 5th and 6th hosts.

Currently we have pairs of NIC ports set up on separate vSwitches (a rough sketch of how I pull this layout per host follows the list):

vSwitch 1 - Management and vMotion

vSwitch 2 - iSCSI

vSwitch 3 - iSCSI

vSwitch 4 - VMs

vSwitch 5 - VMs
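
For reference, here is roughly how I pull that layout from each host with pyVmomi; the vCenter hostname and credentials below are placeholders, not our real ones.

```python
# Rough pyVmomi sketch: dump each host's vSwitch-to-uplink layout.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Walk every host in the inventory and print its vSwitches and uplinks.
view = content.viewManager.CreateContainerView(
    container=content.rootFolder, type=[vim.HostSystem], recursive=True)
for host in view.view:
    print(host.name)
    for vss in host.config.network.vswitch:
        bridge = vss.spec.bridge
        nics = bridge.nicDevice if isinstance(
            bridge, vim.host.VirtualSwitch.BondBridge) else []
        print("  {} -> {}".format(vss.name, ", ".join(nics) or "no uplinks"))
view.DestroyView()
Disconnect(si)
```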

Our initial plan for the new setup is to keep the two iSCSI vSwitches and give each one a single 10GbE port, providing the two paths to the SAN.
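
This is roughly what I would script for that, per host, reusing a host object looked up as in the sketch above. The vSwitch names, vmnic names, and IPs are placeholders, and the 9000 MTU assumes the physical switches do jumbo frames end to end.

```python
from pyVmomi import vim

def add_iscsi_path(host, vswitch, portgroup, uplink, ip):
    """Create one iSCSI vSwitch with a single 10GbE uplink and a VMkernel port."""
    net_sys = host.configManager.networkSystem

    # vSwitch backed by exactly one 10GbE NIC (one vSwitch per SAN path).
    net_sys.AddVirtualSwitch(
        vswitchName=vswitch,
        spec=vim.host.VirtualSwitch.Specification(
            numPorts=128,
            mtu=9000,  # assumes jumbo frames are enabled end to end
            bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=[uplink])))

    # Port group that will hold the VMkernel port.
    net_sys.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name=portgroup, vlanId=0, vswitchName=vswitch,
        policy=vim.host.NetworkPolicy()))

    # VMkernel port for the software iSCSI initiator to bind to.
    net_sys.AddVirtualNic(portgroup=portgroup, nic=vim.host.VirtualNic.Specification(
        mtu=9000,
        ip=vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask="255.255.255.0")))

# Example: two vSwitches, one 10GbE uplink each, for the two SAN paths.
# add_iscsi_path(host, "vSwitch-iSCSI-A", "iSCSI-A", "vmnic8", "10.10.10.11")
# add_iscsi_path(host, "vSwitch-iSCSI-B", "iSCSI-B", "vmnic9", "10.10.20.11")
```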

I have seen people suggest a single vSwitch with 6 ports teamed for all the VMs on the host, to provide the most available bandwidth as it is needed.  I'm not sure what the best setup would be, so we are looking for suggestions.  Again, we have a total of 6 hosts, each with 8 1Gb NIC ports and 2 10GbE ports for the SAN.

thanks

Zach

Josh26
Virtuoso

People will suggest teaming 47 adapters if you have the hardware for it.

Absolutely the biggest gain here is on storage, and multipathing those two 10GbE adapters to the storage is the way to do it. Since you should be using dedicated iSCSI switches, sharing those 10GbE NICs with anything else is pretty much a no-go.
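
If it helps, here is a rough pyVmomi sketch of what I mean: once both paths are up, switch the Dell LUNs over to the Round Robin path selection policy. The vendor-string match is an assumption about how the MD36xx presents itself, and the same change can be made per device with esxcli.

```python
from pyVmomi import vim

def use_round_robin(host):
    """Switch the host's Dell LUNs to the Round Robin path selection policy."""
    storage = host.configManager.storageSystem
    # Map ScsiLun keys to the LUN objects so we can look at the vendor string.
    luns = {lun.key: lun for lun in storage.storageDeviceInfo.scsiLun}
    for mp_lun in storage.storageDeviceInfo.multipathInfo.lun:
        lun = luns.get(mp_lun.lun)
        if lun is not None and "DELL" in (lun.vendor or "").upper():
            storage.SetMultipathLunPolicy(
                lunId=mp_lun.id,
                policy=vim.host.MultipathInfo.LogicalUnitPolicy(policy="VMW_PSP_RR"))
```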

Outside of storage, is what you have now currently working? If you like the rest of your configuration, there's no reason to add more NICs to it.

Very few VM environments will use more than 2 x 1Gb ports outside of storage.

zach1008
Enthusiast

So in most setups a single pair of 1Gb network connections is all that is used for VMs to reach the core data network?  It just seems like that could be a bottleneck depending on the VM density of the host.  I realize the typical server uses only a small percentage of a network connection, but I would think 15 or so machines could generate a significant load.

Based on your response and other postings, we will plan on keeping the dedicated vSwitches for the iSCSI traffic with the new 10GbE NICs.

For the virtual machines and management, I may create two vSwitches: one for management and vMotion traffic with 2 NIC ports, and another with the remaining NIC ports to provide a single switch that is load balanced and fault tolerant for all the VMs on the host.
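
Roughly what I have in mind for that second vSwitch, as a pyVmomi sketch. The vmnic numbers and names are placeholders, and the teaming policy shown is just the default "route based on originating virtual port ID".

```python
from pyVmomi import vim

def build_vm_switch(host, uplinks=("vmnic2", "vmnic3", "vmnic4", "vmnic5")):
    """One vSwitch with the remaining 1Gb uplinks teamed for VM traffic."""
    net_sys = host.configManager.networkSystem

    # All remaining 1Gb NICs as active uplinks on a single vSwitch.
    net_sys.AddVirtualSwitch(
        vswitchName="vSwitch-VM",
        spec=vim.host.VirtualSwitch.Specification(
            numPorts=256,
            bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=list(uplinks)),
            policy=vim.host.NetworkPolicy(
                nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(
                    policy="loadbalance_srcid",  # route based on originating port ID
                    nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                        activeNic=list(uplinks))))))

    # Port group the VMs attach to; it inherits the teaming policy above.
    net_sys.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name="VM Network", vlanId=0, vswitchName="vSwitch-VM",
        policy=vim.host.NetworkPolicy()))
```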

Do you see any issues with doing that?
