VMware Cloud Community
shane_presley
Enthusiast

NIC layout

Hello,

We have two ESXi 5 servers (Dell R710s) in a cluster.  We use all separate physical switches, no trunks/VLANs.  We have redundant switches for all networks.

Today we have one management interface (management and vMotion), two iSCSI interfaces (for path redundancy to our SAN), and one VM network interface.  We don't do a lot of vMotion, maybe once a week.  No DRS, HA, or FT today (although we're licensed for all of it).

We have a total of 6 physical interfaces available.  What's the best way to improve our design?  I know we could separate management and vMotion, or we could go with two management/vMotion interfaces.  We should also team our VM network interfaces.

I'm thinking "ideal" would be two management, two vMotion, two iSCSI, and two VM network interfaces.  But that would require buying another NIC.  If that's strongly recommended, I can consider it.  But if not, what's the best improvement we could make?

Thanks

Shane

6 Replies
jrmunday
Commander

Hi Shane,

For the low cost of an additional NIC, I would highly recommend splitting out all services for full redundancy. Although it's not one of your primary requirements, an easy win to double your vMotion speed is to add a second VMkernel port (which requires an additional IP address) to the vSwitch and invert the failover adapter order for each port group. In this case the vSwitch uses two uplinks (for example vmnic2 and vmnic3) in an active/active configuration, and each port group overrides the switch to use an active/standby failover policy: the first VMkernel port uses vmnic2 as active and vmnic3 as standby, and the second uses vmnic3 as active and vmnic2 as standby (effectively using both links actively while maintaining redundancy).
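
If it helps, here's a rough PowerCLI sketch of that setup. The vCenter/host names, the vSwitch and port group names, and the IP address are all placeholders for your environment (I'm assuming vSwitch1 already carries vmnic2/vmnic3 and has an existing vMotion-01 port group):

# All names and IPs below are placeholders - adjust for your environment
Connect-VIServer vcenter.example.com
$vmh = Get-VMHost esx01.example.com
$vsw = Get-VirtualSwitch -VMHost $vmh -Name vSwitch1    # vMotion vSwitch with vmnic2/vmnic3 as uplinks

# Second vMotion VMkernel port (new port group plus an extra IP on the vMotion subnet)
New-VMHostNetworkAdapter -VMHost $vmh -VirtualSwitch $vsw -PortGroup "vMotion-02" -IP 192.168.50.12 -SubnetMask 255.255.255.0 -VMotionEnabled:$true

# Invert the failover order per port group (overrides the vSwitch-level active/active setting)
Get-VirtualPortGroup -VMHost $vmh -Name "vMotion-01" | Get-NicTeamingPolicy | Set-NicTeamingPolicy -InheritFailoverOrder:$false -MakeNicActive vmnic2 -MakeNicStandby vmnic3
Get-VirtualPortGroup -VMHost $vmh -Name "vMotion-02" | Get-NicTeamingPolicy | Set-NicTeamingPolicy -InheritFailoverOrder:$false -MakeNicActive vmnic3 -MakeNicStandby vmnic2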

I currently do the following (each service on a separate vSwitch):

  • Management (2x 1Gb links - active/standby - access ports)
  • vMotion (2x 1Gb links - active/active with 2x VMkernel ports as described above - access ports)
  • IP_Storage (2x 1Gb links - active/standby - access ports)
  • Guest Networking (2x 1Gb links - active/active - trunk ports with VLAN tagging)

Each uplink is patched into a separate physical switch for redundancy.
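
If you later go the trunk route for guest networking, VLAN tagging just means each VM port group carries a VLAN ID. A rough PowerCLI sketch (host, vSwitch and port group names and VLAN IDs are made up; with your current access-port setup you would simply omit -VLanId):

# Placeholder names and VLAN IDs - trunked guest networking vSwitch
$vmh = Get-VMHost esx01.example.com
$vsw = Get-VirtualSwitch -VMHost $vmh -Name vSwitch3
New-VirtualPortGroup -VirtualSwitch $vsw -Name "VM_Prod_VLAN10" -VLanId 10
New-VirtualPortGroup -VirtualSwitch $vsw -Name "VM_DMZ_VLAN20" -VLanId 20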

If you're licensed for HA, DRS, etc., I would also use those features to your advantage :)

Cheers,

Jon

vExpert 2014 - 2022 | VCP6-DCV | http://www.jonmunday.net | @JonMunday77
shane_presley
Enthusiast

That sounds great, thanks!

For your vMotion suggestion, I think I follow that.  But how does vMotion know to make use of the second IP/path?  Does it only come into play if there are multiple vMotions going on?

Also for HA and DRS -- would the design you propose support that?  I think HA requires dual management networks, so I see we have that covered.

jrmunday
Commander

Both NICs are actively used, even with a single vMotion operation.

See this post, where I have uploaded screenshots of the configuration and tested the performance:

ESXi 5.1 Seperating SAN traffic w/Vlans | VMware Communities

For HA/DRS you need to create a cluster, and therefore need vCenter - do you already use vCenter?
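
Once the hosts are in a cluster, enabling HA and DRS from PowerCLI is a one-liner; the datacenter and cluster names below are just placeholders:

# Placeholder names - create a cluster with HA and DRS, or enable them on an existing one
New-Cluster -Location (Get-Datacenter "DC01") -Name "Prod-Cluster" -HAEnabled -DrsEnabled
Set-Cluster -Cluster "Prod-Cluster" -HAEnabled:$true -DrsEnabled:$true -Confirm:$false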

vExpert 2014 - 2022 | VCP6-DCV | http://www.jonmunday.net | @JonMunday77
shane_presley
Enthusiast

Got it, thanks.

And yes, we already have vCenter and the hosts in a cluster.

With DRS, how do we manage the rules that it uses to move VMs around?  I'd really like to be cautious about letting it move VMs in the beginning.

jrmunday
Commander

When you configure DRS you have the option of setting the automation level to manual, partially automated, or fully automated. Even with full automation you can use a scale (1-5) to select how conservative or aggressive you want it to be. I use full automation with level 3.
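
For example, in PowerCLI (the cluster name is a placeholder; as far as I know the 1-5 migration threshold isn't a direct Set-Cluster parameter, so that part is set in the vSphere Client or through the API):

# Placeholder cluster name - valid levels are Manual, PartiallyAutomated and FullyAutomated
Set-Cluster -Cluster "Prod-Cluster" -DrsEnabled:$true -DrsAutomationLevel FullyAutomated -Confirm:$false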

In addition, you can create DRS rules to keep VMs together or separate them onto different hosts, so it's flexible around your requirements.
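
A couple of example rules in PowerCLI (the VM names and rule names are made up):

# Placeholder VM and rule names - affinity and anti-affinity rules
New-DrsRule -Cluster (Get-Cluster "Prod-Cluster") -Name "Keep-App-Tier-Together" -KeepTogether:$true -VM (Get-VM app01, app02)
New-DrsRule -Cluster (Get-Cluster "Prod-Cluster") -Name "Separate-Domain-Controllers" -KeepTogether:$false -VM (Get-VM dc01, dc02)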

vExpert 2014 - 2022 | VCP6-DCV | http://www.jonmunday.net | @JonMunday77
shane_presley
Enthusiast

Perfect.  You've been a big help, thanks again
