We are using Dell 10th-generation blade chassis for our ESX environment. Each box has 2 Fibre Channel cards for storage and 4 network cards. I am torn as to the best way to handle these 4 NICs. I wanted to create a separate VMkernel network, but find it hard to justify using 2 NICs for VMkernel (redundancy). So I teamed all 4 NICs and share them between the VMkernel and guest networks (traffic is separated by VLANs). Yesterday 1 of our 4 switches failed, and all of the hosts lost connectivity. It is making me rethink our networking setup.
What are others doing when they only have 4 NICs available? What is the recommended configuration for VMkernel and guest networks?
vSwitch0:
Service Console and VMotion Network: vmnic0/2, each a standby for the other
vSwitch1:
Virtual Machine Port Group: vmnic1/3, all active
Hope this helps.
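As a rough sketch, that layout could be built from the classic ESX service console like so (esxcfg-vswitch syntax; the port group names here are assumptions, and the per-port-group active/standby order itself has to be set in the VI Client):

```shell
# Hypothetical sketch: two vSwitches, two uplinks each
esxcfg-vswitch -a vSwitch0                    # create vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0             # add uplink vmnic0
esxcfg-vswitch -L vmnic2 vSwitch0             # add uplink vmnic2
esxcfg-vswitch -A "Service Console" vSwitch0  # SC port group
esxcfg-vswitch -A "VMotion" vSwitch0          # VMotion port group

esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1
# The active/standby split (vmnic0 active for the SC with vmnic2 on
# standby, and the reverse for VMotion) is set per port group in the
# VI Client NIC-teaming properties.
```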
Troy's layout sounds good to me.
Troy's config is about the best you can do with 4 NICs.
If you found this or any other answer useful please consider the use of the Helpful or correct buttons to award points
Tom Howarth
VMware Communities User Moderator
Like everyone else has said, Troy's layout for 4 NICs is just about as good as you can do. Using vmnic0 and vmnic2 gives you separation (hopefully across a physical NIC failure / bus failure), along with the same concept for your VM network on vmnic1 and vmnic3.
Since you had a switch go down and lost all connections on that switch, you might want to think about splitting the two sets of NICs across different switches, e.g. vmnic0 goes to switch 1 and vmnic2 goes to switch 4, which would give you added redundancy.
Kyle
I have been using the same setup as Troy mentions. It's the best you can get with 4 NICs.
Each vSwitch has two pNICs. Each pNIC goes to a separate physical switch. This is handled with VLANs (dot1q trunks) and link aggregation. We also use IP-hash load balancing in the vSwitches.
This gives us a fault-tolerant and better-performing virtual network configuration.
Essentially, work requiring downtime can be carried out on a core switch, or a NIC can fail or a cable be cut or unplugged, and everything carries on going. :smileygrin:
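For intuition, here is a simplified model of how "Route based on IP hash" picks an uplink for a given flow. This is a hypothetical sketch, not the real ESX implementation: ESX hashes the full source/destination IP pair, while this toy version XORs just the last octets for illustration.

```shell
# Toy model of IP-hash uplink selection (assumption: last-octet XOR
# stands in for the real hash over the full src/dst IP pair).
ip_hash_uplink() {
  local src_last=${1##*.}    # last octet of the source IP
  local dst_last=${2##*.}    # last octet of the destination IP
  local uplinks=$3           # number of active pNICs in the team
  echo $(( (src_last ^ dst_last) % uplinks ))
}

ip_hash_uplink 10.0.0.5 192.168.1.8 2   # -> 1 (second of 2 uplinks)
```

The useful property this illustrates: each source/destination pair always lands on the same uplink, so a single flow never exceeds one NIC's bandwidth, but many flows spread across the team.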
Andy, VMware Certified Professional (VCP),
If you found this information useful please award points using the buttons at the top of the page accordingly.
All 4 NICs are on completely separate network fabrics: separate switches, separate uplinks, etc. My only hesitation is that it seems like a ton of overkill and waste to have 2 NICs dedicated to VMkernel and VMotion. The real traffic actually happens on the other networks.
If it is all done on a Gig switch, it really isn't that much overkill. In reality, if you look at esxtop networking, how much bandwidth are you really using on your hosts? Enough to bog down even one NIC? I doubt it. We run a similar setup to yours (plus an additional 2 pNICs dedicated to a DMZ) and we don't even come close to using all the bandwidth on one Gig NIC.
Also, it won't seem like overkill when one of your mgmt/SC NICs goes down and you don't have a redundant NIC there to pick up the slack.
Kyle
It is nothing to do with performance and more to do with resilience and security. By having physical separation of your Service Console and VMotion networks, you are minimising the risk of compromise.
Tom Howarth
VMware Communities User Moderator
I guess the question I should be asking is: why physical NIC separation instead of VLANs? If I do VLANs, I can have 4 active NICs for everything. I have double the redundancy and double the available capacity on each side. I still have traffic separation through the tagged VLANs.
I am looking at purchasing the M-series blades in the future. We have an FC datastore now (EMC CX3-20) and I was going to spec the same NIC config as the opening poster. However, I was hoping to start looking at getting some EQL boxes as well.
How would the NIC setup go then if you had 2x FC and 4x Ethernet as your config?
Cheers
Aaron
Exactly the same way: FC x2 for multipath to the SAN,
NIC 0 - Service Console, failover to NIC 2
NIC 2 - VMotion/VMkernel, failover to NIC 0
NIC 1 & 3 - Production VLANs
Tom Howarth
VMware Communities User Moderator
Why not put all pNICs in a single vSwitch and use the same failover order as mentioned? This maximises flexibility in NIC config and failover. Another upside is that you have to configure a single vSwitch instead of two; the downside is that you have to set priority on the uplinks (active/standby/unused).
The nice thing about this setup is that you are even able to fail production networks over to NIC 0 or 2 if you wish. Using two vSwitches cannot accomplish this level of failover. The use of dot1q trunks is highly recommended in a setup like this.
I'm still learning, so could someone explain this, please?
vSwitch0:
Service Console and VMotion Network: vmnic0/2, each a standby for the other
vSwitch1:
Virtual Machine Port Group: vmnic1/3, all active
Hope this helps.
Why not have both NICs in vSwitch0 as active?
I've seen it mentioned several times, so I'm sure there's a good reason for it.
Hi,
What Troy is trying to point out is that you can actually tell ESX to use one NIC for VMotion and never use the other NIC until a link fails. So when both links are up, the SC and VMotion each have their own NIC, so to speak. As soon as either of the two NICs fails, the affected traffic will fail over to the NIC that is still up.
So basically this configuration separates both network streams, but still allows for failover.
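The failover behaviour described above can be modelled in a few lines: traffic uses the first NIC in its failover order whose link is up. This is a toy sketch (the LINK_* variables are assumptions standing in for real link state), not how ESX implements it internally.

```shell
# Toy model of active/standby failover order.
# Usage: pick_nic <active-nic> <standby-nic> ...
pick_nic() {
  for nic in "$@"; do
    eval "state=\$LINK_$nic"               # read simulated link state
    [ "$state" = "up" ] && { echo "$nic"; return; }
  done
  echo "none"                              # no usable uplink left
}

LINK_vmnic0=up LINK_vmnic2=up
pick_nic vmnic0 vmnic2    # SC order: prints vmnic0 while its link is up

LINK_vmnic0=down
pick_nic vmnic0 vmnic2    # after a failure: prints vmnic2
```

Run the same function with the orders reversed (vmnic2 first) for the VMotion port group, and you get exactly the "each NIC is dedicated until something breaks" behaviour.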
Can I use the same NIC for the Service Console and VMotion?
In my network setup I have one single NIC, on a separate physical switch, dedicated only to the VMotion network.
I have 4 hosts, each with 6 network cards: 2 onboard and 4 in PCI Express slots. I have created one vSwitch only for virtual machines and put all four network cards into it; via VLANs I can choose where to put a VM.
On the physical switch, I have set all the ports the network cards are connected to as tagged for all the VLANs I need to use.
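On the ESX side, that kind of VLAN-per-port-group setup looks roughly like the following sketch (classic esxcfg-vswitch syntax; the VLAN IDs and port group names here are made up for illustration). Each port group is tagged with its VLAN ID, and the physical switch ports must carry those VLANs as a dot1q trunk:

```shell
# Hypothetical sketch: one port group per VLAN on the VM vSwitch
esxcfg-vswitch -A "VM Network 105" vSwitch1
esxcfg-vswitch -v 105 -p "VM Network 105" vSwitch1   # tag VLAN 105

esxcfg-vswitch -A "VM Network 210" vSwitch1
esxcfg-vswitch -v 210 -p "VM Network 210" vSwitch1   # tag VLAN 210
```

Placing a VM on a network is then just a matter of choosing the right port group.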
Yes, you can, but I do not recommend doing this.
There will be heavy traffic on the VMotion interface during VM migration.
---
We currently have three vSwitches:
vSwitch0 (vmnic0) - Service Console
vSwitch1 (vmnic1) - VMotion and Secondary Service Console (as part of the HA clustering best practices - http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=100264...
vSwitch2 (vmnic2 & vmnic3) - VMs
Cheers,
Ben
Sorry Tom, I don't think I explained my question properly. Yes, that would be the initial setup, but what would I change it to if I were to include iSCSI in the mix?
Both NICs are in fact active, but only for one set of traffic:
NIC 0 is active for Service Console traffic and the failover standby for VMotion/VMkernel,
and
NIC 2 is active for VMotion/VMkernel and standby for Service Console traffic.
Tom Howarth
VMware Communities User Moderator