jlovera
Contributor

VMware Switch Networking

Good morning. I currently have servers with 6 network cards: 2 physical 1 Gb NICs and 4 10 Gb NICs.

In each site we are going to install a VMware cluster with 4 nodes. The customer purchased an Enterprise Plus license. I would like to configure a vDS for host management in each vCenter, but I would like to hear your recommendations on how to use the physical NICs for management traffic and for vMotion traffic.

My ideas:

1- Create a vSS for the management and VM networks, then a vDS for the vMotion network (using the 10 Gb NICs).

2- Create a vSS for the management and VM networks, then a second vSS for the vMotion network.

3- Create a vSS for the management network (1 Gb NICs), then a vDS for the VM and vMotion networks (10 Gb NICs) using the "Route based on physical NIC load" teaming policy.

4- Create a single vDS and place all traffic there: management, VM, and vMotion.

I would welcome your suggestions and comments. Thank you.

JRLD
6 Replies
scott28tt
VMware Employee

Moderator: Thread moved to the vSphere area.

ZibiM
Enthusiast

Hi

It really depends on the amount of traffic you need to push through.

A few things to consider:

1. Do you have NFS or vSAN?

2. How much memory do your ESXi hosts have? Or rather, how much data will your hosts have to transfer when entering maintenance mode?

3. What are your VMs' traffic requirements? Do you have any network-separation needs, like DMZ traffic on dedicated links?

4. Do you plan to use the ESXi management network for VM backup?

5. Are there any NSX considerations?

With all that to ponder, I'd do something like this:

1st vDS for "infrastructure" (ESXi mgmt, vMotion, network-based storage): load-based teaming policy, vMotion set to Low in NIOC, 2x 10 Gb uplinks per ESXi node, physical switch config: trunked VLANs.

2nd vDS for VMs: load-based teaming policy, 2x 10 Gb uplinks per ESXi node, physical switch config: trunked VLANs.

If you'd like, you can also add:

3rd, a vSS for backup ESXi mgmt access: an extra vmkernel adapter in a different subnet, with firewall rules to allow HTTPS and SSH.
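The per-host side of a design like this can be sketched with esxcli once the vDS and its port groups exist in vCenter. This is only a hedged sketch: the vDS name (vds-infra), the dvport ID, and the addresses are assumptions, not values from this thread.

```shell
# Hedged sketch, run per ESXi host. vds-infra, dvport 10, and the
# 192.168.50.0/24 subnet are illustrative assumptions.

# Attach a vMotion vmkernel interface to a free dvport on the "infrastructure" vDS
esxcli network ip interface add --interface-name=vmk1 \
    --dvs-name=vds-infra --dvport-id=10

# Give it a static address in the vMotion subnet
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --type=static --ipv4=192.168.50.11 --netmask=255.255.255.0

# Tag the interface for vMotion traffic
esxcli network ip interface tag add --interface-name=vmk1 --tagname=VMotion

# Verify the vmkernel interfaces on this host
esxcli network ip interface list
```

The teaming policy and NIOC shares themselves are set on the vDS port groups in vCenter, not per host.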

jlovera
Contributor

Here are my answers to your questions.

1. Do you have NFS or vSAN?

The customer does not have vSAN; this is a new environment. I know they have an HP 3PAR array.

2. How much memory do your ESXi hosts have? Or rather, how much data will your hosts have to transfer when entering maintenance mode?

RAM memory: 16 DIMMs x 64 GB = 1,024 GB

3. What are your VMs' traffic requirements? Do you have any network-separation needs, like DMZ traffic on dedicated links?

It is a totally new environment; I do not know which VMs the client will install in the future.

4. Do you plan to use the ESXi management network for VM backup?

I'm open to recommendations.

5. Are there any NSX considerations?

No.

JRLD
ZibiM
Enthusiast

2. How much memory do your ESXi hosts have? Or rather, how much data will your hosts have to transfer when entering maintenance mode?

RAM memory: 16 DIMMs x 64 GB = 1,024 GB

10 Gb is a bit low for that amount of RAM.

Consider Multi-NIC vMotion or LACP.

jlovera
Contributor

Multi-NIC vMotion is configured with two port groups and two vmkernel adapters, correct?

JRLD
ZibiM
Enthusiast

Yup.

Two port groups, the same VLAN ID, the same address space.

First port group: 1st NIC active, 2nd NIC standby.

Second port group: 1st NIC standby, 2nd NIC active.

VMware Knowledge Base

It works quite well, and it's immune to a single link failure.
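The mirrored active/standby setup above can be sketched with esxcli on a standard switch. Treat this as a hedged example only: the vSwitch and port group names, vmnic numbers, VLAN ID, and addresses are all assumptions.

```shell
# Hedged sketch of Multi-NIC vMotion on a vSS. vSwitch1, vmnic2/vmnic3,
# VLAN 50, and the 192.168.50.0/24 subnet are illustrative assumptions.

# Two port groups on the same vSwitch, same VLAN
esxcli network vswitch standard portgroup add -v vSwitch1 -p vMotion-1
esxcli network vswitch standard portgroup add -v vSwitch1 -p vMotion-2
esxcli network vswitch standard portgroup set -p vMotion-1 --vlan-id 50
esxcli network vswitch standard portgroup set -p vMotion-2 --vlan-id 50

# Mirrored failover order: each NIC active on one port group, standby on the other
esxcli network vswitch standard portgroup policy failover set -p vMotion-1 \
    --active-uplinks=vmnic2 --standby-uplinks=vmnic3
esxcli network vswitch standard portgroup policy failover set -p vMotion-2 \
    --active-uplinks=vmnic3 --standby-uplinks=vmnic2

# One vmkernel adapter per port group, both in the same subnet
esxcli network ip interface add -i vmk1 -p vMotion-1
esxcli network ip interface add -i vmk2 -p vMotion-2
esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.50.11 -N 255.255.255.0
esxcli network ip interface ipv4 set -i vmk2 -t static -I 192.168.50.12 -N 255.255.255.0

# Tag both interfaces for vMotion so both links carry traffic simultaneously
esxcli network ip interface tag add -i vmk1 -t VMotion
esxcli network ip interface tag add -i vmk2 -t VMotion
```

With both vmkernels tagged, a single vMotion can spread across both NICs; if one link fails, its port group fails over to the standby uplink.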
