So we have a flat network, no VLANs. I have a total of 6 NICs per ESXi host. Since there is no Service Console (or rather, management is shared with the Prod vSwitch), here's how I have the NICs set up.
Keep in mind we're using ESXi 3.5.
vSwitch0 - VMotion - 2 NICs
vSwitch1 - VMkernel port & Virtual Machine port group - 4 NICs
I've attached the screen shot.
Please let me know if you would do anything differently.
Dave's bang on! The ESXi VMotion IP will serve the same purpose. Too much ESX on the brain, sorry.
Each VM will get no more than 1 GbE over the four NICs. Once a VM is assigned a NIC it will stay on that NIC under the default configuration, and that's fine because the VMs are spread across all four of them.
vExpert 2009
Thanks Mike. The network guys are looking into VLANs. I'm also trying to explain to them the value of adding the 4 NICs to the production virtual switch. How do I go about explaining this?
I'm getting redundancy with 4 NICs and also getting more throughput, correct?
How do I go about adding a second Service Console?
ESXi uses a VMkernel port for management, so if you have 2 VMkernel ports (e.g. your vMotion port) then you have 2 IPs to use. If you don't end up using VLANs, you should at least put the vMotion NICs on an isolated switch.
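If it helps, here is a minimal sketch of creating that second VMkernel port from the command line. The port group name, vSwitch name, and IP values are examples only; on ESXi 3.5 you would typically run the equivalent `vicfg-vswitch` / `vicfg-vmknic` commands through the Remote CLI, since there is no supported local shell.

```shell
# Sketch only: "VMotion", vSwitch0, and the IP/netmask are placeholders,
# not your real values. Run via the Remote CLI (vicfg-*) against the host.

# Create a port group for vMotion on the existing vSwitch
esxcfg-vswitch -A "VMotion" vSwitch0

# Attach a VMkernel NIC (your second VMkernel IP) to that port group
esxcfg-vmknic -a -i 192.168.1.10 -n 255.255.255.0 "VMotion"

# Verify: both VMkernel ports and their IPs should now be listed
esxcfg-vmknic -l
```

Each VMkernel port gets its own IP, which is what gives you the two management/vMotion addresses mentioned above.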
ESXi will load-balance your outbound connections across the 4 NIC ports. Do you expect a lot of network traffic?
Yes, we have VMotion on an isolated network, the 192.168 network.
I would do it differently. You only really need one NIC for vMotion and a second to provide failover. The same is true for the Management Network. I would keep the two of them on vSwitch0 and configure the Management Network to be active on vmnic0 and standby on the other NIC, say vmnic2. Then I would configure the vMotion port group to be active on vmnic2 and standby on vmnic0. This gives each one a dedicated NIC and still allows for redundancy in case of failure.
I would allocate only two NICs to your production VM port group, make them active/active, and if you are looking to achieve better utilization, configure the vSwitch to use route based on IP hash and appropriately configure the physical switch with an EtherChannel trunk to support that configuration.
The other two NICs I would use for backup traffic if you are doing agent-based guest backups. I would highly recommend using VLANs for traffic segmentation; they would be required to support this configuration.
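For the physical-switch side of the IP-hash setup, the vSwitch in ESX/ESXi 3.5 needs a *static* EtherChannel (channel mode "on"); it does not negotiate LACP or PAgP. A hypothetical Cisco IOS fragment, with interface names and VLANs as placeholders:

```
! Sketch only: interface names are placeholders for the ports
! the two production uplinks plug into.
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
interface GigabitEthernet0/1
 channel-group 1 mode on
!
interface GigabitEthernet0/2
 channel-group 1 mode on
```

The "mode on" part is the important bit: if the channel is set to negotiate (LACP/PAgP), the vSwitch will not bring the team up correctly.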
Sid Smith
-
VCP, VTSP, CCNA, CCA(Xen Server), MCTS Hyper-V & SCVMM08
http://www.dailyhypervisor.com
Don't forget to award points for correct and helpful answers. ;-)
Thanks for all your help.
I guess my question now is: if I do leave things the way they are, am I losing any functionality or redundancy?
It depends on your physical switch configuration. Are all NICs going to the same physical switch? For best redundancy you should split the NICs between two switches. This will allow each vSwitch to survive a switch failure. Other than that, you do have redundancy in your configuration because you have more than one NIC for each vSwitch.
Sid Smith
-
VCP, VTSP, CCNA, CCA(Xen Server), MCTS Hyper-V & SCVMM08
Don't forget to award points for correct and helpful answers. ;-)
What if the scenario is:
3 physical NICs in the ESX host, a VLAN for VMotion, and a VLAN for VMs. Currently I have 2 NICs (trunked) for the VMs and the VMkernel (Management Network), and 1 NIC (not trunked) for VMotion.
Since ESXi just has the VMkernel, can I only enter one default gateway, or am I totally missing something? I'm trying to justify moving from ESX to ESXi.
Three is an odd number of NICs to have, but in that scenario I would use only one vSwitch. Redundancy is important. I would make sure all the traffic is segmented using 802.1q VLAN tagging.
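A hedged sketch of what that 802.1q segmentation looks like on the host side, using esxcfg-style commands (via the Remote CLI on ESXi 3.5). The VLAN IDs and port group names here are made up for illustration:

```shell
# Sketch only: VLAN IDs (100/200), port group names, and vSwitch0
# are example values; substitute your own.

# Tag each port group so all traffic can share one trunked vSwitch
esxcfg-vswitch -A "VM Network" vSwitch0
esxcfg-vswitch -v 100 -p "VM Network" vSwitch0   # VM traffic on VLAN 100

esxcfg-vswitch -A "VMotion" vSwitch0
esxcfg-vswitch -v 200 -p "VMotion" vSwitch0      # vMotion on VLAN 200

# Verify the VLAN column in the port group listing
esxcfg-vswitch -l
```

All the uplinks on that vSwitch then carry every VLAN as an 802.1q trunk from the physical switch, which is how three NICs can back everything with redundancy.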
Sid Smith
-
VCP, VTSP, CCNA, CCA(Xen Server), MCTS Hyper-V & SCVMM08
Don't forget to award points for correct and helpful answers.