chrisLE
Contributor

10GbE and 1GbE mixed setup best practice

Hi there!

Soon we'll finally have 10GbE on our new ESXi hosts, and I'm wondering how to set up the different vSS (and probably vDS next year, after the planned upgrade) to get the best out of speed and redundancy. The hosts will have 2x 10GbE (on one NIC) and 4x 1GbE (those are onboard and guaranteed to all be on one NIC as well).

The current hosts have 6x 1GbE on two NICs; one port from each NIC is in a vSwitch for the vMotion (vmk0) and management (vmk1) adapters, and the remaining four are on another vSwitch for the VMs. All physical ports are active. We use NFS, which is in the same subnet as vmk0.

I don't have a good idea yet how to combine the 10GbE with the 1GbE ports. First, I'll probably create a new vmk for the NFS traffic*, or is it okay to keep running those two in the same VLAN like we do now? After the migration to the new hosts we will have a maximum of three hosts.

Then I'll put either one or two 1GbE ports into the first vSwitch for the management vmk adapter, and the rest into another vSwitch. There, both 10GbE adapters will be active and the 1GbE adapters will be on standby.
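
For what it's worth, roughly how I'd script that teaming with pyVmomi, just as a sketch; the vmnic names below are placeholders and would need to be checked against the actual host first:

from pyVmomi import vim

def create_vm_vswitch(host, name="vSwitch1",
                      active=("vmnic4", "vmnic5"),    # the two 10 GbE ports (placeholder names)
                      standby=("vmnic2", "vmnic3")):  # 1 GbE ports used as standby (placeholder names)
    """Create a standard vSwitch with 10 GbE active and 1 GbE standby uplinks."""
    net_sys = host.configManager.networkSystem  # host is a connected vim.HostSystem

    spec = vim.host.VirtualSwitch.Specification()
    spec.numPorts = 128
    # Attach all uplinks to the vSwitch ...
    spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=list(active) + list(standby))
    # ... and set an explicit failover order: 10 GbE active, 1 GbE standby.
    teaming = vim.host.NetworkPolicy.NicTeamingPolicy()
    teaming.policy = "loadbalance_srcid"  # "route based on originating virtual port"
    teaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy(
        activeNic=list(active), standbyNic=list(standby))
    spec.policy = vim.host.NetworkPolicy(nicTeaming=teaming)

    net_sys.AddVirtualSwitch(vswitchName=name, spec=spec)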

Would this be a valid and sensible way to distribute the physical ports between the host services and the VMs? The physical switches on the other side are of course stacked, and I'll connect to them redundantly like we do already.

Kind regards,

Chris

*I'll probably just create a new vMotion VMkernel adapter so I don't have to change anything on the storage side, and only create a new VLAN and subnet for vMotion.
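
For reference, a rough pyVmomi sketch of that step (a tagged vMotion port group plus a new VMkernel adapter; the VLAN ID, IP and vSwitch name are placeholders, not our real values):

from pyVmomi import vim

def add_vmotion_vmk(host, vswitch="vSwitch0", vlan_id=30,
                    ip="192.168.30.11", netmask="255.255.255.0"):
    """Create a vMotion port group on its own VLAN and a VMkernel adapter in the new subnet."""
    net_sys = host.configManager.networkSystem  # host is a connected vim.HostSystem

    # Port group for vMotion, tagged with the new VLAN
    pg = vim.host.PortGroup.Specification()
    pg.name = "vMotion"
    pg.vlanId = vlan_id
    pg.vswitchName = vswitch
    pg.policy = vim.host.NetworkPolicy()  # inherit teaming/security from the vSwitch
    net_sys.AddPortGroup(portgrp=pg)

    # VMkernel adapter with a static IP in the new vMotion subnet
    vnic = vim.host.VirtualNic.Specification()
    vnic.ip = vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask=netmask)
    vmk = net_sys.AddVirtualNic(portgroup="vMotion", nic=vnic)

    # Tag the new adapter for vMotion traffic
    host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", vmk)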

4 Replies
aleex42
Enthusiast

Only 2x 10GbE isn't the best setup, so I would prefer:

2x 10 GbE: NFS
2x 1 GbE: "Frontend" (VMs)
2x 1 GbE: vMotion

If you need 10 GbE for the VMs, then maybe this mixed setup:

1x 10 GbE (active), 1x 1 GbE (standby): NFS
1x 10 GbE (active), 1x 1 GbE (standby): "Frontend"

2x 1 GbE (active): vMotion

And if you have big VMs (mainly ones with a lot of RAM), then I would combine vMotion and Frontend on one vSwitch.

In every variant, I would separate each traffic type by network and VLAN, so you have three port groups (see the sketch after this list):

* Network 1, VLAN 1: NFS
* VLAN 2: VMs
* Network 2, VLAN 3: vMotion
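
Roughly like this with pyVmomi, just as a sketch (the VLAN IDs and vSwitch names are placeholders, use whatever fits your environment):

from pyVmomi import vim

def create_portgroups(host):
    """Create the three port groups, each on its own VLAN."""
    net_sys = host.configManager.networkSystem  # host is a connected vim.HostSystem
    portgroups = [
        ("NFS",     10, "vSwitch1"),  # storage network
        ("VMs",     20, "vSwitch2"),  # frontend / VM traffic
        ("vMotion", 30, "vSwitch2"),  # vMotion network
    ]
    for name, vlan, vswitch in portgroups:
        spec = vim.host.PortGroup.Specification()
        spec.name = name
        spec.vlanId = vlan
        spec.vswitchName = vswitch
        spec.policy = vim.host.NetworkPolicy()  # inherit teaming/security from the vSwitch
        net_sys.AddPortGroup(portgrp=spec)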

Best regards,

Alex

-- Alex (VMware VCAP-DCV, NetApp NCIE, LPIC 2)
chrisLE
Contributor

Hi Alex,

thanks for your fast reply. Because I also need iSCSI from within a few VMs, I need 10GbE in the frontend as well. I like your distribution for this:

1x 10 GbE (active), 1x 1 GbE (standby): NFS
1x 10 GbE (active), 1x 1 GbE (standby): "Frontend"

2x 1 GbE (active)

I guess you mean the 2x 1 GbE in the last line are for vMotion? Most VMs have around 4-8GB of RAM, so I could live with the "slow" vMotion there.

But I'm not a big fan of single redundancy if I can have more. I would put vMotion into the frontend vSS like you suggested and add one more 1 GbE to each of the two vSwitches as an additional standby adapter.
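
Roughly what I have in mind, as a pyVmomi sketch (the vSwitch and vmnic names are placeholders):

def add_standby_uplink(host, vswitch_name, new_standby_nic):
    """Attach one more 1 GbE uplink to an existing vSwitch and append it to the standby list."""
    net_sys = host.configManager.networkSystem  # host is a connected vim.HostSystem
    vswitch = next(v for v in net_sys.networkInfo.vswitch if v.name == vswitch_name)

    spec = vswitch.spec                            # start from the current configuration
    spec.bridge.nicDevice.append(new_standby_nic)  # attach the extra uplink
    order = spec.policy.nicTeaming.nicOrder
    order.standbyNic = list(order.standbyNic or []) + [new_standby_nic]

    net_sys.UpdateVirtualSwitch(vswitchName=vswitch_name, spec=spec)

# e.g. add_standby_uplink(host, "vSwitch1", "vmnic3")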

Also, where would you put the management adapter in your second distribution? Frontend or vMotion?

Kind regards,

Chris

aleex42
Enthusiast

Why do you want more than one level of redundancy?

Because management doesn't generate much traffic, I would combine it with Frontend or vMotion.

And yes, that last line was meant to be vMotion 🙂

-- Alex (VMware VCAP-DCV, NetApp NCIE, LPIC 2)
chrisLE
Contributor

Okay, the thought wasn't more redundancy but more throughput. If the 10G link fails, I was hoping that the two standby adapters would both become active. That would only drop our maximum speed to a fifth, not a tenth (a single session would still be capped at 1G, obviously).

If that isn't the case (because only one standby adapter is activated at a time), it makes no sense to have more than one standby adapter configured for each vSS.

This should be another story when using a vDS, because there we can use LACP to team the adapters into a LAG before assigning them to the vSwitch.
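
Roughly what I mean, as a pyVmomi sketch (assuming the vDS supports enhanced LACP, i.e. version 5.5 or later; the LAG name and uplink count are placeholders):

from pyVmomi import vim

def add_lacp_lag(dvs, lag_name="lag-10g", uplink_count=2):
    """Create an LACP LAG on an existing vim.dvs.VmwareDistributedVirtualSwitch."""
    lag = vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupConfig()
    lag.name = lag_name
    lag.mode = "active"        # actively negotiate LACP with the physical switch stack
    lag.uplinkNum = uplink_count

    spec = vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupSpec()
    spec.lacpGroupConfig = lag
    spec.operation = "add"

    return dvs.UpdateDVSLacpGroupConfig_Task(lacpGroupSpec=[spec])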
