ldoodle
Contributor

Returning to VMware... from Hyper-V - Networking

Hi,

This is a "how is it done nowadays in VMware" question... along with a little bit of translation of terminology.

I took some time out (8+ years ago) to work on Hyper-V (requirement, not choice). There my default go-to configuration became:

2x 1Gb for a management (aka RDP) NIC team
2x 10Gb (or faster) for an 'Access' NIC team
2x 10Gb (or faster) for a 'Live Migration' NIC team
2x 10Gb (or faster) for storage connectivity using MPIO

All NIC teams were spread across distinct NIC cards. The 'Access' NIC team was used for a vSwitch with no host (management OS) sharing. All NICs were fixed to their maximum port speed, jumbo frames enabled, power saving disabled, flow control disabled, and unnecessary NIC-based services disabled where appropriate ('File and Print Sharing for Microsoft Networks' disabled on the storage NICs, for example).

Returning to VMware in a new environment: given a VMware host with similar port connectivity and NO storage connectivity requirements, what is the usual approach these days? There are 2 hosts that will not have HA at the hypervisor level but will run VMs like SQL with their own HA. Both have local storage. I have cabled up the ports as above, so the intended NIC teams will span 2 physical NIC cards.

They will both be managed by the existing vSphere instance and will be required to migrate VMs off the other production VMware hosts, but not the other way - it's a one-way, one-time vMotion of both compute and storage.

I've lost a little bit of knowledge with respect to networking terminology vs. Hyper-V: VMkernel, Distributed Switch, Standard Switch, etc.

Thanks!

stadi13
Hot Shot

Hi @ldoodle 

Which vSphere edition do you have? If Enterprise Plus, you should go with a Distributed Switch (dvSwitch); if not, Standard Switches (VSS) are used. Your Hyper-V setup looks quite familiar to me.

It depends on whether you want to use all of the adapters or just what fits the requirement. I would recommend using two NICs (from different cards, connected to different switches) for VMkernel traffic (Management for ESXi as well as vMotion), and two other NICs (again from different cards, connected to different switches) for VM networking in active-active mode with the default load balancing. If you are not on Enterprise Plus, you will create two Standard vSwitches: one for Management and vMotion and one for VM networking.
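For reference, a rough sketch of that two-vSwitch layout using pyVmomi (the vSphere Python SDK) might look like the following. The vCenter/host names, vmnic assignments, VLAN ID and vMotion IP are placeholders, not values from this thread, and the management VMkernel adapter (vmk0) is assumed to exist already, so only the vMotion VMkernel port is created here.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; validate certificates in production
si = SmartConnect(host='vcenter.example.local', user='administrator@vsphere.local',
                  pwd='***', sslContext=ctx)
content = si.RetrieveContent()

# Find the target ESXi host and its network configuration manager
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esxi01.example.local')
ns = host.configManager.networkSystem

# Standard switch #1: Management + vMotion VMkernel traffic on two uplinks
ns.AddVirtualSwitch(
    vswitchName='vSwitch-Mgmt',
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=['vmnic0', 'vmnic1'])))

# vMotion port group plus a VMkernel adapter tagged for vMotion
ns.AddPortGroup(vim.host.PortGroup.Specification(
    name='vMotion', vlanId=20, vswitchName='vSwitch-Mgmt',
    policy=vim.host.NetworkPolicy()))
vmk = ns.AddVirtualNic('vMotion', vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False, ipAddress='10.0.20.11', subnetMask='255.255.255.0')))
host.configManager.virtualNicManager.SelectVnicForNicType('vmotion', vmk)

# Standard switch #2: VM networking on the other two uplinks, default active/active teaming
ns.AddVirtualSwitch(
    vswitchName='vSwitch-VM',
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=['vmnic2', 'vmnic3'])))
ns.AddPortGroup(vim.host.PortGroup.Specification(
    name='VM Network', vlanId=0, vswitchName='vSwitch-VM',
    policy=vim.host.NetworkPolicy()))

Disconnect(si)

The same result is of course achievable by clicking through the vSphere Client; the sketch is only there to make the layout concrete.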

Regards

Daniel

Tibmeister
Expert

Regardless of whether it's a Standard Switch or a Distributed Switch, I don't see why you can't have all the 10Gb NICs on a single vSwitch and use the "Teaming and Failover" settings of each port group to set specific ones as Active and leave the rest as Standby.

For instance, say you have NIC2-7 as your 10Gb NICs (not actual names, just placeholders). You can have them all on one vSwitch and create a port group for Access, one for vMotion, and one for Storage. In the Access port group, you could set, say, NIC2 and NIC7 as Active, leaving NIC3-6 as Standby for that port group. For the vMotion port group, you can set NIC3 and NIC6 as Active, leaving the other 4 as Standby. And for Storage, you can set NIC4 and NIC5 as Active and leave the rest as Standby.
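A hedged pyVmomi sketch of that per-port-group Active/Standby override, assuming ns is the host's networkSystem object obtained as in the earlier sketch, a single standard switch named 'vSwitch1' that already carries uplinks vmnic2-vmnic7, and placeholder VLAN IDs:

from pyVmomi import vim

def portgroup_spec(name, vlan, active, standby):
    # Port-group-level teaming override: explicit Active/Standby NIC order.
    # Any uplink left out of both lists shows up as Unused for that port group.
    teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
        policy='loadbalance_srcid',  # "Route based on originating virtual port" (the default)
        nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
            activeNic=active, standbyNic=standby))
    return vim.host.PortGroup.Specification(
        name=name, vlanId=vlan, vswitchName='vSwitch1',
        policy=vim.host.NetworkPolicy(nicTeaming=teaming))

# Access: vmnic2/vmnic7 active, the rest standby
ns.AddPortGroup(portgroup_spec('Access', 10,
                               ['vmnic2', 'vmnic7'],
                               ['vmnic3', 'vmnic4', 'vmnic5', 'vmnic6']))
# vMotion: vmnic3/vmnic6 active, the rest standby
ns.AddPortGroup(portgroup_spec('vMotion', 20,
                               ['vmnic3', 'vmnic6'],
                               ['vmnic2', 'vmnic4', 'vmnic5', 'vmnic7']))
# Storage: vmnic4/vmnic5 active, the rest standby
ns.AddPortGroup(portgroup_spec('Storage', 30,
                               ['vmnic4', 'vmnic5'],
                               ['vmnic2', 'vmnic3', 'vmnic6', 'vmnic7']))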

The only reason I would use a separate vSwitch is a separate backend switch fabric, such as a dedicated storage fabric. You can have the 1Gb NICs on the same vSwitch and use the Teaming settings to also "steer" the Management traffic to those two NICs, and prevent the other port groups from even using the 1Gb NICs by setting them to Unused.

If you are using NIC teaming on the physical switches, you will need to change the load balancing method to match what you are doing, but I would avoid all that complexity and just use load-based failover and let the environment run itself.

I wouldn't turn off vSphere HA just because you have HA at the application layer; it provides an added layer of protection and will work to bring the environment out of a degraded state by powering VMs back on as appropriate. You can configure DRS affinity rules to say that the SQL servers should run on separate hosts; that way, if you lose a host, HA will restart the SQL server on the remaining host, restoring your app-level HA, and when you get the failed host back up, it will re-separate the VMs. Otherwise, you are removing a lot of the "self-healing" aspects of vSphere.
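For reference, a rough pyVmomi sketch of that keep-the-SQL-VMs-apart rule (a DRS anti-affinity rule), assuming the hosts sit in a DRS-enabled cluster; the cluster and VM names below are placeholders:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; validate certificates in production
si = SmartConnect(host='vcenter.example.local', user='administrator@vsphere.local',
                  pwd='***', sslContext=ctx)
content = si.RetrieveContent()

def find(vimtype, name):
    # Look an inventory object up by name
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

cluster = find(vim.ClusterComputeResource, 'Prod-Cluster')
sql_vms = [find(vim.VirtualMachine, 'SQL01'), find(vim.VirtualMachine, 'SQL02')]

# Anti-affinity rule: DRS keeps these VMs on different hosts
rule = vim.cluster.AntiAffinityRuleSpec(name='separate-sql', enabled=True, vm=sql_vms)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation='add', info=rule)])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)

Disconnect(si)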

ldoodle
Contributor

Hi, sorry for the no-show!

Thanks so much for the replies. Turned out we needed to align with the existing hosts for simplicity. They have full Enterprise Plus but not a single dvSwitch in sight!

ldoodle
Contributor

"For instance, you have NIC2-7 as your 10GB NICs (not actual names, just place holders).  You can have them all on one vSwitch and create a portgroup for Access, one for vMotion, and one for Storage.  In the Access Portgroup, you could set say NIC2 and NIC7 as active, leaving NIC3,4,5,6 as Standby for that portgroup.  For the vMotion portgroup, you can set NIC3 and NIC6 as Active, leaving the other 4 as Standby.  And for Storage, you can set NIC4 and NIC5 as the Active and leave the rest as Standby.

The only reason I would use a separate vSwitch is: 1) separate backend switch fabric, such as dedicated storage fabric.  You can have the 1GB on the same vSwitch and use the Teaming feature to also "steer" the Management traffic to those two NICs and prevent the other portgroups from even using the 1GB NICs by setting them to Inactive/Unused."

Oooh, I like the sound of that. One of the reasons I avoided doing "one big NIC team" in Hyper-V was Live Migration traffic "flooding" the Access and other NIC functions.

"I wouldn't turn of vSphere HA just because you have HA at the application"

These 2 new hosts aren't connected to the SAN, so they are standalone hosts, just managed by the existing vSphere instance (not my choice, btw!).
