Byron_Zhao
Enthusiast

vSphere with 10Gb

We are going to set up two vSphere servers for a new project, and we are getting a 10Gb network in our environment for the first time. That raises the question of how to set up the vSphere servers to make the best use of the 10Gb while keeping redundancy.

In the old environment with 1Gb networking, we use HBA cards for storage, plus a team of two 1Gb NICs for SC and vMotion, each with its primary on a different NIC (SC primary on NIC1 with NIC2 as standby, vMotion primary on NIC2 with NIC1 as standby). Another two 1Gb NICs carry VM network traffic.
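For reference, here is roughly how that teaming looks if you script it with pyVmomi. This is only a minimal sketch: the host name, credentials, port group names, and vmnic numbers are placeholders, and it assumes the port groups already exist on vSwitch0.

```python
# Minimal pyVmomi sketch of the active/standby layout described above.
# Assumes the "Service Console" and "vMotion" port groups already exist
# on vSwitch0; host name, credentials, and vmnic numbers are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certs in production
si = SmartConnect(host="esx01.example.local", user="root", pwd="password", sslContext=ctx)
host = si.RetrieveContent().searchIndex.FindByDnsName(
    dnsName="esx01.example.local", vmSearch=False)
net_sys = host.configManager.networkSystem

def pin_port_group(pg_name, vswitch, vlan, active, standby):
    """Give a port group an explicit failover order (active + standby uplinks)."""
    spec = vim.host.PortGroup.Specification(
        name=pg_name,
        vswitchName=vswitch,
        vlanId=vlan,
        policy=vim.host.NetworkPolicy(
            nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(
                policy="failover_explicit",
                nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                    activeNic=active, standbyNic=standby))))
    net_sys.UpdatePortGroup(pgName=pg_name, portgrp=spec)

# SC primary on vmnic0 with vmnic1 standby; vMotion the other way around.
pin_port_group("Service Console", "vSwitch0", 0, ["vmnic0"], ["vmnic1"])
pin_port_group("vMotion", "vSwitch0", 0, ["vmnic1"], ["vmnic0"])

Disconnect(si)
```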

In the new environment, since we are getting 10Gb networking, we are thinking of not using HBAs and instead running iSCSI or NFS datastores over the 10Gb. Here come my questions: how do you guys out there set up your environments with 10Gb? Do you use two 10Gb NICs in each server and put everything on them: datastores, Service Console (or management network), vMotion, and VM traffic? Or would you use four 10Gb NICs, two for datastores and two for SC, vMotion, and VM traffic? Or two 10Gb plus two 1Gb?

Thanks for any input

-Byron

chriswahl
Virtuoso

I go with at least two cards. Whether that means two dual-port cards or two single-port cards is a question of northbound port density, use case, and cost.

For most environments, a pair of 10GbE connections is enough to run everything, and I advise using NIOC to set priorities on the various traffic types. If you have the ability, ports, and cash to go with four 10GbE (2x storage and 2x the rest), then go for it. :)
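To make the NIOC point concrete, here is a bit of Python arithmetic showing how shares split a 10GbE uplink when everything contends at once. The share values are made-up examples, not recommendations.

```python
# Rough arithmetic: how NIOC shares divide a 10GbE uplink when every
# traffic type is busy at the same time. Share values are examples only.
LINK_GBPS = 10

shares = {
    "management": 20,
    "vmotion": 50,
    "nfs_or_iscsi": 100,
    "vm_traffic": 50,
}

total_shares = sum(shares.values())
for traffic, share in shares.items():
    guaranteed = LINK_GBPS * share / total_shares
    print(f"{traffic:13s} ~{guaranteed:.1f} Gb/s worst case under full contention")

# Shares only kick in when the uplink is saturated; idle traffic types
# lend their bandwidth to whoever needs it.
```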

Here are some designs I wrote up for 2x and 4x NIC layouts. I did them for a lab setup, but the basic principles are the same.

http://wahlnetwork.com/2012/07/16/efficient-virtual-networking-designs-for-vsphere-home-lab-servers/

Cheers

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
Byron_Zhao
Enthusiast

Thanks for your response, Chris, and great post on your blog.

Your designs basically match what I had in mind. Do you have a 10Gb setup in any of your production environments? I would really like to hear feedback on performance from a real production environment. What I really want to compare is two 10Gb as in your design versus two HBAs plus four 1Gb. To me, two 10Gb for VM traffic, management, and vMotion seems a little bit of overkill. For our existing cluster, two 1Gb NICs are sufficient for VM traffic, which has me thinking of using two 10Gb for datastores, management, and vMotion, and two 1Gb for VM traffic.
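Just to frame the raw numbers I am weighing (the 8Gb FC figure is an assumption; plug in whatever HBAs you actually run):

```python
# Back-of-the-envelope aggregate bandwidth per host for the layouts being
# compared. The 8Gb FC HBA speed is an assumption, not from the thread.
layouts_gbps = {
    "2x 10GbE, everything converged": 2 * 10,
    "2x 8Gb FC HBA + 4x 1GbE (old design)": 2 * 8 + 4 * 1,
    "2x 10GbE (storage/mgmt/vMotion) + 2x 1GbE (VM)": 2 * 10 + 2 * 1,
}

for layout, gbps in layouts_gbps.items():
    print(f"{layout}: ~{gbps} Gb/s total")
```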

chriswahl
Virtuoso

I've personally run all of these configs in production: 4x 10Gb (two dual-port cards), 2x 10Gb + 4x 1Gb (LOM), and 2x 10Gb (BladeSystem and UCS). They all work fine. If you have a 4x 1GbE LOM card, you may want to use it for management, but be warned that it is a single point of failure; in many cases I simply disabled the LOM and didn't use it for that reason.

The other design question is mainly around the storage piece: as long as it is properly VLAN'ed off and NIOC makes sure it gets preference, it runs just fine. In the case where I ran 4x 10Gb, I needed a lot of throughput for NFS storage, so I gave a pair of 10GbE uplinks specifically to storage.

Also, be careful with vMotion and limit how much bandwidth it can consume. A maintenance mode operation can easily saturate a 10GbE link if you let it run wild. :)
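Here is a quick pyVmomi sketch of the kind of cap I mean: egress traffic shaping on the vMotion port group of a standard vSwitch. The 4 Gb/s figure, host name, and object names are only placeholders; on a distributed switch, a NIOC limit on the vMotion traffic class is the cleaner way to do the same thing.

```python
# Sketch: egress traffic shaping on the vMotion port group of a standard
# vSwitch so a maintenance-mode evacuation can't eat the whole 10GbE link.
# Host name, credentials, object names, and the 4 Gb/s cap are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

CAP_BPS = 4 * 1000**3  # cap average and peak at roughly 4 Gb/s

ctx = ssl._create_unverified_context()  # lab only; verify certs in production
si = SmartConnect(host="esx01.example.local", user="root", pwd="password", sslContext=ctx)
host = si.RetrieveContent().searchIndex.FindByDnsName(
    dnsName="esx01.example.local", vmSearch=False)
net_sys = host.configManager.networkSystem

spec = vim.host.PortGroup.Specification(
    name="vMotion",
    vswitchName="vSwitch0",
    vlanId=0,
    policy=vim.host.NetworkPolicy(
        shapingPolicy=vim.host.NetworkPolicy.TrafficShapingPolicy(
            enabled=True,
            averageBandwidth=CAP_BPS,
            peakBandwidth=CAP_BPS,
            burstSize=100 * 1024**2)))  # 100 MB burst allowance

net_sys.UpdatePortGroup(pgName="vMotion", portgrp=spec)
Disconnect(si)
```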

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
Byron_Zhao
Enthusiast

Thanks for sharing. It helps me understand the different designs and their impact on performance.
