VMware Cloud Community
ferexderta
Enthusiast

Network design

I am setting up a new cluster with 6 physical servers. Each server has 4 x 10 Gb and 2 x 1 Gb network cards. I will use standard switches. What are the best practices? What happens if I don't use the 2 x 1 Gb Ethernet cards?

Is it okay if I use 2 x 10 Gb for Management and vMotion, and 2 x 10 Gb for VM networks? Also, should I configure one or two standard switches?

5 Replies
a_p_
Leadership

How do you attach the shared storage (which I assume you have) to the hosts?

André

ferexderta
Enthusiast

There is a SAN switch, and it is connected with fiber cable.

a_p_
Leadership

In case you want to go with 4x 10 Gbps, I'd probably create a single vSwitch with all 4 uplinks, and configure vMotion with one of the vmnics as active and the other ones as standby. For all other port groups (including the Management port group), set the vMotion vmnic as standby and the other 3 vmnics as active.
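A minimal sketch of that layout using esxcli on one host. The vSwitch name, port group names, and vmnic numbering (vmnic0–vmnic3 as the four 10 Gb uplinks, vmnic3 dedicated to vMotion) are assumptions for illustration; adjust them to match your environment.

```shell
# Assumed names: vSwitch1, vmnic0-vmnic3 = the 4x 10 Gb uplinks.
# Create the vSwitch and attach all four 10 Gb uplinks.
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

# vSwitch-level default: vmnic0-2 active, the vMotion vmnic (vmnic3) standby.
# Port groups inherit this unless overridden.
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 \
    --active-uplinks=vmnic0,vmnic1,vmnic2 --standby-uplinks=vmnic3

# vMotion port group: invert the order so vmnic3 is active, the rest standby.
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=vMotion
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion \
    --active-uplinks=vmnic3 --standby-uplinks=vmnic0,vmnic1,vmnic2
```

With this override pattern, vMotion normally has a 10 Gb uplink to itself, but every port group can still fail over to any surviving uplink if a link goes down.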

André

ferexderta
Enthusiast

Why don't we create 2 switches? Is there a special reason? We could add 2 NICs to each virtual switch. Is this a wrong design? There are 2 x 1 Gb NICs on each server, and there is a 1 Gb physical switch for the ESXi management ports in the environment. Do you mind if I use 1 Gb for the management port? Can you share your experience here?

a_p_
Leadership

You can of course use the 1 Gb vmnics for Management, but from your initial post I understood that you didn't want to use them!?

Anyway, try to keep vMotion on its own vmnic, as vMotion can completely saturate a vmnic, which may cause latency for VMs using the same vmnic.
The reason for my suggestion was to have as many vmnics as possible available for VM and Management traffic.

André
