VMware Cloud Community
rdarlin2
Contributor

Asking for suggestions and direction: Networking vSphere v6.0

I am asking for assistance, suggestions and direction on how to best configure networking in our brand spanking new vSphere 6.0 environment.

I ask because after much reading and research, I'm still very confused.

Our environment now contains one vCenter Server and four ESXi hosts:

1 VM Network covering 4 hosts

Each host has one vSwitch0 with a

     Management Network

     VLAN ID: -

     VMkernel ports: 1

     vmk0: (IP address of the host)                       vmk0 is connected to both physical network adapters.

and a

     VMnetwork

     VLAN ID: -

     Virtual machines: (# on that host)             no connections here

So there is no adapter/vNIC for the VMnetwork port group, and neither vMotion nor HA is available.

I "understand" (loose description here) that I should have vMotion on it's own network segment.. or isolated from other traffic... same for vCenter-to-host traffic, same for VM Server to world (or regular server production) traffic.  What is best practice in accomplishing this?

The more detailed you can get, the better I can follow you.

I do appreciate any assistance and direction you can provide.

Thanks,

Rich

3 Replies
Nick_Andreev
Expert

Network setup is always dictated by the capabilities of your hosts. How many network ports do you have and what's their speed?

The general recommendation is to use dedicated physical adapters for vMotion, because it can easily saturate even 10Gb uplinks and affect other types of traffic using the same uplinks. Likewise, use dedicated physical adapters for IP storage (NFS and iSCSI) to make sure nothing affects storage latency.
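If it helps with that inventory, below is a rough sketch using pyVmomi (the Python SDK for the vSphere API) that prints each host's physical adapters and link speeds. The vCenter address and credentials are placeholders, and disabling certificate verification is for a lab only.

# Rough sketch, not a polished tool: list every host's physical NICs and speeds.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only; use valid certs in production
si = SmartConnect(host='vcenter.example.local',   # placeholder vCenter and credentials
                  user='administrator@vsphere.local',
                  pwd='********',
                  sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)   # walk all ESXi hosts in the inventory

for host in view.view:
    print(host.name)
    for pnic in host.config.network.pnic:         # physical adapters: vmnic0, vmnic1, ...
        speed = f"{pnic.linkSpeed.speedMb} Mb/s" if pnic.linkSpeed else "link down"
        print(f"  {pnic.device}: {speed}")

view.Destroy()
Disconnect(si)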

---
If you found my answers helpful please consider marking them as helpful or correct.
VCIX-DCV, VCIX-NV, VCAP-CMA | vExpert '16, '17, '18
Blog: http://niktips.wordpress.com | Twitter: @nick_andreev_au
rdarlin2
Contributor

The 4 ESXi hosts are HP DL980s, each with 2 sets of Fibre Channel connections to the SAN, four 1Gb NICs and two 10Gb NICs... so a 'plethora' of available connections and speeds.  While not all NICs are wired up, I was thinking of teaming several of the 1Gb ports for VMkernel/management traffic and somehow using the two 10Gb connections for vMotion and other high-demand needs?

The 10Gb NICs and the 1Gb NICs are connected to different switches.  I am learning both VMware and how to configure the Cisco switches, so it's all on me, and direction/support is happily accepted.

Thanks,

Rich

Nick_Andreev
Expert

OK, so if you have a dedicated FC fabric for storage, I wouldn't even bother using the 1Gb uplinks for management. You can simply create one vSwitch with two 10Gb uplinks. Make the first uplink active for VM traffic and management (with the second as standby), and make the second uplink active for vMotion (with the first as standby).

That way you kill two birds with one stone:

  1. Storage traffic, vMotion traffic and VM/management traffic are isolated from each other on separate physical NIC ports.
  2. You have physical redundancy across all types of traffic (because of the standby uplink).
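In case you want to script it, here is a rough pyVmomi sketch of that layout on one host: a single vSwitch backed by the two 10Gb uplinks, VM/management traffic active on the first uplink, vMotion active on the second, plus a VMkernel port for vMotion. The vmnic names, port group names, VLAN IDs and IP address are placeholders for your environment, and moving the existing management vmk0 onto the new switch is not shown.

from pyVmomi import vim

def build_network(host):                          # host: a vim.HostSystem already retrieved
    ns = host.configManager.networkSystem

    # One vSwitch backed by both 10Gb uplinks (assumed here to be vmnic4/vmnic5).
    ns.AddVirtualSwitch(
        vswitchName='vSwitch1',
        spec=vim.host.VirtualSwitch.Specification(
            numPorts=128,
            bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=['vmnic4', 'vmnic5'])))

    def add_portgroup(name, vlan, active, standby):
        # Override only the NIC failover order at the port group level;
        # everything else is inherited from the vSwitch settings.
        policy = vim.host.NetworkPolicy(
            nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(
                nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                    activeNic=active, standbyNic=standby)))
        ns.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
            name=name, vlanId=vlan, vswitchName='vSwitch1', policy=policy))

    add_portgroup('VM Network', 20, ['vmnic4'], ['vmnic5'])   # VM/management side
    add_portgroup('vMotion',    30, ['vmnic5'], ['vmnic4'])   # vMotion side

    # VMkernel port for vMotion on its own subnet (example addressing).
    vmk = ns.AddVirtualNic(
        portgroup='vMotion',
        nic=vim.host.VirtualNic.Specification(
            ip=vim.host.IpConfig(dhcp=False,
                                 ipAddress='192.168.30.11',
                                 subnetMask='255.255.255.0')))
    # Tag the new VMkernel port so the host actually uses it for vMotion.
    host.configManager.virtualNicManager.SelectVnicForNicType(
        nicType='vmotion', device=vmk)

You would run something like that per host (or just do the same steps in the vSphere Web Client); the key point is the opposite active/standby order on the two port groups.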
---
If you found my answers helpful please consider marking them as helpful or correct.
VCIX-DCV, VCIX-NV, VCAP-CMA | vExpert '16, '17, '18
Blog: http://niktips.wordpress.com | Twitter: @nick_andreev_au