VMware Cloud Community
krsie
Contributor

ESXi network setup with 4 NICs

Hi,

I am currently configuring a new ESXi 4.1 environment with 4 NICs on each host. I have seen a lot of discussion about separating vMotion/management traffic, but I am not sure that is ideal with this NIC count, since it would mean spending 2 NICs on management and management redundancy.

My current thinking is to split the 4 NICs across two different switches and, on each switch, create a trunk (EtherChannel on Cisco) containing two of the host's NICs, so each trunk carries 50% of the NICs for one host. If I instead spend 2 NICs on management (one per switch), I will no longer be able to trunk/EtherChannel on the switch, and will therefore lose both the redundancy on that particular switch and the extra bandwidth I get from trunking two interfaces.
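For illustration, the two-uplinks-per-switch layout described above might look like this using the ESXi 4.x service-console commands (the vSwitch and vmnic names are assumptions, not from the thread):

```shell
# Sketch only -- vSwitch/vmnic names are assumptions.
# One vSwitch per physical switch, two uplinks each.
esxcfg-vswitch -a vSwitch0            # vSwitch facing physical switch A
esxcfg-vswitch -L vmnic0 vSwitch0     # uplink to switch A, port 1
esxcfg-vswitch -L vmnic1 vSwitch0     # uplink to switch A, port 2

esxcfg-vswitch -a vSwitch1            # vSwitch facing physical switch B
esxcfg-vswitch -L vmnic2 vSwitch1     # uplink to switch B, port 1
esxcfg-vswitch -L vmnic3 vSwitch1     # uplink to switch B, port 2
```

Note that a static EtherChannel on the Cisco side would also require the vSwitch teaming policy "Route based on IP hash", which is set per vSwitch or port group in the vSphere Client.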

Any ideas on this matter?

/Kris

6 Replies
mjcar
Contributor

What is the expected bandwidth usage by your guests? Will they be impacted if the host starts using 90% of the bandwidth to throw virtual machines around during a vMotion?

The idea of the separation is to limit the impact of heavy-duty management functions like vMotion on your guest traffic.

krsie
Contributor

I went in to check our current production environment, and most hosts seem to peak at 2500-4000 KBps, though we expect somewhat more after the upgrade due to more capacity on the hosts. vMotion during office hours will be minimal, and all hosts have shared storage, so the daily vMotion traffic will not be that intense. Is it worth spending two NICs, and losing the ability to trunk/EtherChannel on the switch, just to separate management/vMotion from the VM traffic in this case?
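For perspective, those observed peaks are a tiny fraction of one gigabit uplink. A quick back-of-the-envelope check (assuming 1 Gbit/s NICs and reading KBps as kilobytes per second):

```python
# Numbers from the thread; link speed of 1 Gbit/s is an assumption.
link_bps = 1_000_000_000            # one gigabit uplink, in bits/s
peak_guest_Bps = 4000 * 1024        # observed peak: ~4000 KBps per host

peak_guest_bps = peak_guest_Bps * 8          # bytes/s -> bits/s
utilization = peak_guest_bps / link_bps
print(f"{utilization:.1%} of a single 1 Gbit uplink")  # 3.3% of a single 1 Gbit uplink
```

So even the busiest hosts are using only a few percent of a single link, which is why sacrificing aggregated bandwidth for traffic separation is a real trade-off question here rather than an obvious win.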

mjcar
Contributor

I guess the theoretical ideal is to be able to vMotion in the middle of the day without anyone noticing the impact.

If you are aware of the limitations, then you can choose whatever configuration suits you. Given your setup, I would probably choose production performance and lose some flexibility for the hopefully seldom-required vMotion.

The other advantage of a completely separate management network is that it works around catastrophic switch failure, but it doesn't look like your redundancy stretches that far (that would need a third, management-only switch).

krsie
Contributor

I totally agree that the best theoretical solution would be separation, but with shared storage in place I cannot see how my configuration, with 2 Gbit of incoming trunked bandwidth and four 1 Gbit NICs for outgoing traffic, could be overloaded by perhaps a few vMotions during office hours. Most vMotion will happen outside office hours and in maintenance windows. Do you know of any way to traffic-shape or prioritize VM traffic ahead of vMotion?

Kris

mjcar
Contributor

The only way I could think of would be a managed switch doing TCP-port-based QoS, as I'm fairly sure the management and vMotion traffic will be on different ports from your servers' traffic.

It will depend on the make and model of your switch and how it constructs its ACLs/classes, but it should be fairly easy to set up given the right level of managed layer-3 switch.

The issue is whether it can do this level of frame/packet inspection without incurring too much overhead.

krsie
Contributor

Thanks, and sorry for the late reply. I ended up using 2 NICs for management and vMotion on one vSwitch and the last two for VM data. That way I can expand my vSwitch with more pNICs should I need to.
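That final layout might be sketched like this with the ESXi 4.x service-console commands (vSwitch, vmnic, and port group names are assumptions for illustration):

```shell
# Sketch only -- names are assumptions, not from the thread.
# vSwitch0: management + vMotion, two uplinks for redundancy
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "Management Network" vSwitch0
esxcfg-vswitch -A "vMotion" vSwitch0

# vSwitch1: VM traffic on the remaining two uplinks
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1
```

Either vSwitch can later be expanded by linking additional pNICs with further `esxcfg-vswitch -L` calls.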

Thanks.
