Best way to add 10GbE NIC to existing ESXi hosts with 1GbE NICs

I'm getting 10GbE NICs to add to my ESXi hosts, which already have 1GbE NICs, and I wanted advice on adding them.

Each ESXi host's current setup is:

   - VMware Essentials license, so I am only using standard vSwitches (VSS), not distributed switches (VDS).

   - Unfortunately, I can't use vMotion, due to license (I may upgrade in the future).

   - Currently, each ESXi host has 3 or 4 1GbE ports. The 1GbE ports are connected to a Cisco managed switch (which only has 1GbE ports). The ports are configured on the Cisco switch in the same static link aggregation group (LAG) (LACP off).

   - Each host's vSwitch is configured with "Route based on IP hash" load balancing.

   - I don't use VLANs at this time.

   - I currently use NFS but not iSCSI. I plan on adding iSCSI in the future.

   - Each VM is on local storage, but also uses an NFS share.

For two of the ESXi hosts, I am getting one single-port 10GbE NIC each. For the third host, I am getting one dual-port 10GbE NIC.

The 10GbE NICs will be connected via SFP+ to a MikroTik CRS309-1G-8S+IN switch that supports 10Gbps (aside: love the price of this switch).

The 1GbE Cisco switch and the 10GbE MikroTik switch will either be (a) both connected to a third Cisco switch, OR (b) I'll connect the MikroTik switch to the same Cisco switch that the 1GbE ESXi host ports are connected to.

To keep things simple, I am thinking of just having one vSwitch per ESXi host with the following port groups:

1. Management Network port group

2. VM Network (traffic) port group (to be used by all VM traffic)

3. In the future, a storage port group (e.g. for iSCSI).

4. If I upgrade the license, a vMotion port group.
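The layout above could be sketched with esxcli like this (a sketch only, assuming the default vSwitch0 and these port group names; "Management Network" and "VM Network" typically already exist on a fresh host):

```shell
# Sketch: one standard vSwitch per host with the port groups listed above.
# vSwitch and port group names are examples; adjust to taste.
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name="Management Network"
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name="VM Network"
# Later, when storage and/or vMotion are added:
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name="iSCSI"
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name="vMotion"
```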

By default, I want the 10GbE ports to be prioritized over the 1GbE ports.

Right now, with only 1GbE ports, I am using "Route based on IP hash" for load balancing (my Cisco switch has the 1GbE ports configured in the same LAG group).

However, when adding the 10GbE ports, I'm not sure how to configure NIC teaming. I suppose one simple approach is to "Use explicit failover order", and for the failover order, list the 10GbE ports first, followed by the 1GbE ports. With this approach, I wouldn't need to set up LAG groups on my switches?

I guess "Use explicit failover order" is simpler? As long as the 10GbE port(s) are healthy, I probably don't need the bandwidth of load balancing between two 10GbE ports, and I could probably even use unmanaged switches since I don't need LAG groups?
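For what it's worth, the explicit-failover idea could be set like this (a sketch, assuming the 10GbE ports show up as vmnic4/vmnic5 and the 1GbE ports as vmnic0-vmnic3; check with `esxcli network nic list`):

```shell
# Sketch: "Use explicit failover order" with the 10GbE uplinks listed first.
# With this policy only the highest-priority healthy uplink carries traffic,
# so no LAG is needed on either physical switch.
esxcli network vswitch standard policy failover set \
  --vswitch-name=vSwitch0 \
  --load-balancing=explicit \
  --active-uplinks=vmnic4,vmnic5,vmnic0,vmnic1,vmnic2,vmnic3
```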

But is it possible (and desirable) to define load balancing so that the 10GbE ports use "Route based on IP hash" while the 1GbE ports are standby only, and on failover to the standby ports, the 1GbE ports also use "Route based on IP hash" amongst each other? I'm not sure how to set this up (e.g. how to configure the vSwitch or port group to treat the two 10GbE ports as the first, higher-priority group and to treat the 1GbE ports as a second, standby group). How would I set this up?

Also, for the Management Network port group, would it be better to prioritize the use of the 1GbE ports and only use the 10GbE ports for failover (but in the VM Network port group, prioritize the use of the 10GbE ports and only use the 1GbE ports for failover)?
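If you do want the Management Network port group to prefer a 1GbE uplink, standard vSwitches let a port group override the vSwitch teaming policy. A sketch, with the same assumed vmnic numbering as above:

```shell
# Sketch: per-port-group teaming override for the Management Network port group.
# Here a 1GbE uplink (vmnic0) is active and a 10GbE uplink (vmnic4) is standby,
# the reverse of the vSwitch-level policy used by the VM Network port group.
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name="Management Network" \
  --active-uplinks=vmnic0 \
  --standby-uplinks=vmnic4
```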

Finally, I'm not sure if it's worth the trouble of using jumbo frames on the 10GbE ports and switch?
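If you do try jumbo frames, the MTU has to match end to end (vSwitch, VMkernel port, and the MikroTik switch ports). A sketch, assuming vSwitch0 and a storage VMkernel port vmk1:

```shell
# Sketch: enable jumbo frames on the vSwitch and the storage VMkernel port.
# The physical switch ports must also be configured for MTU 9000.
esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
# Verify with a non-fragmenting ping (8972 = 9000 minus IP/ICMP headers):
vmkping -d -s 8972 <nfs-server-ip>
```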

Earlier I thought I might put the 10GbE ports in a different subnet than the 1GbE ports, but I don't think this is necessary?

Anyone have any advice?

2 Replies

Forget link aggregation or LACP with standard vSwitches. You cannot use that setup anyway with vmnics on a vSwitch that are connected to different physical switches which are not stacked.

If I were you, I'd connect the ESXi hosts to default physical switch ports (for Cisco that's access/trunk ports; for other vendors, untagged/tagged ports), set the vSwitch load balancing to "Route based on originating virtual port ID", and configure Teaming & Failover with the 10GbE ports as active uplinks and the 1GbE ports as standby uplinks.
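That recommendation could be applied like this (a sketch, with assumed vmnic numbering; the 10GbE ports here are vmnic4/vmnic5):

```shell
# Sketch: port-ID load balancing, 10GbE uplinks active, 1GbE uplinks standby.
esxcli network vswitch standard policy failover set \
  --vswitch-name=vSwitch0 \
  --load-balancing=portid \
  --active-uplinks=vmnic4,vmnic5 \
  --standby-uplinks=vmnic0,vmnic1,vmnic2,vmnic3
```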



Ah, thanks, André.

Unfortunately, it would be too expensive to get one big switch that supports all my NICs. So, I have to continue to use the existing 1GbE Cisco switch and add a new (inexpensive) 10GbE MikroTik switch for the new 10GbE NICs. I didn't know if it was possible to have one link aggregation group for the 1GbE ports on the Cisco switch and a second link aggregation group for the 10GbE ports on the MikroTik switch, and then have ESXi fail over between the two groups. So, I guess this isn't possible.

So, if I understand you correctly, I would add the two 10GbE NICs and set their status to active, add the four 1GbE NICs and set their status to standby, and use "Route based on originating port ID". Question: what happens if the two 10GbE NICs become unavailable (maybe the MikroTik switch goes down)? Will all four 1GbE NICs change from standby to active, so all four share the traffic? And when one or both 10GbE NICs become available again, will all four 1GbE NICs automatically go back to standby?
