Hi, we have 10 NICs in an ESX host: 5 go to the primary switch and 5 to the secondary switch.
Earlier, we had:
1. 2 NICs for the Service Console
2. 2 NICs for vMotion
3. 6 NICs for the Virtual Machine port group
Can we have:
4 NICs - Management Network and vMotion (multi-NIC) with the same IP address?
6 NICs - for virtual machines?
Do you see any problems with this configuration? Will HA work fine, or should we change it?
We are not looking at FT right now.
Thanks
Tom
tomtom1 wrote:
Can we have:
4 NICs - Management Network and vMotion (multi-NIC) with the same IP address?
6 NICs - for virtual machines?
If you have 4 NICs to be used for vMotion and management, a better solution would be:
Management-VMK = vSwitch0 = two physical network cards (vmnics) connected to two physical switches.
vMotion-VMK = vSwitch1 = two other physical network cards connected to two physical switches.
Use different IP addresses for vMotion and management, and keep them on different subnets.
Set up multi-NIC-vMotion to be able to use both vMotion cards at the same time.
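As a rough sketch of that multi-NIC vMotion layout with esxcli (the vSwitch/port group names, vmnic numbers and IP addresses below are assumptions for illustration; verify the exact options with `--help` on your ESXi build):

```shell
# Dedicated vMotion vSwitch with two uplinks (names/numbers assumed)
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

# Two vMotion port groups, each pinned to a different active uplink
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=vMotion-1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=vMotion-2
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name=vMotion-1 --active-uplinks=vmnic2 --standby-uplinks=vmnic3
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name=vMotion-2 --active-uplinks=vmnic3 --standby-uplinks=vmnic2

# One vmkernel port per port group, both on the same (non-management) subnet
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion-1
esxcli network ip interface ipv4 set --interface-name=vmk1 \
  --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vMotion-2
esxcli network ip interface ipv4 set --interface-name=vmk2 \
  --ipv4=192.168.50.12 --netmask=255.255.255.0 --type=static
```

vMotion then still has to be enabled on both vmknics (via the vSphere Client, or e.g. `vim-cmd hostsvc/vmotion/vnic_set vmk1`).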
That is what we had earlier. I am now thinking the Management Network and vMotion should share the same NICs.
Normally the management network is only used for agent communication and doesn't take much bandwidth, so why not use those NICs for vMotion traffic as well?
thanks
You could configure all four vmnics in a single vSwitch, then use vmnic0 for the management port group and the remaining vmnics for multi-NIC vMotion port groups.
For redundancy of the management port group, use one of the other three vmnics as standby. Do not use the same IP; separate the traffic using VLANs.
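That failover-order and VLAN separation could be sketched with esxcli like this (port group names, vmnic numbers and VLAN IDs are assumptions; confirm the option names on your build):

```shell
# Management Network: vmnic0 active, one other vmnic standby
# (overrides the vSwitch-level teaming default for this port group only)
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name="Management Network" \
  --active-uplinks=vmnic0 --standby-uplinks=vmnic1

# Separate management and vMotion by VLAN instead of sharing an IP (VLAN IDs assumed)
esxcli network vswitch standard portgroup set --portgroup-name="Management Network" --vlan-id=10
esxcli network vswitch standard portgroup set --portgroup-name=vMotion-1 --vlan-id=50
esxcli network vswitch standard portgroup set --portgroup-name=vMotion-2 --vlan-id=50
```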
For reference on the multi-nic vMotion setup: http://www.yellow-bricks.com/2011/09/17/multiple-nic-vmotion-in-vsphere-5/
...hth!
Obaid wrote:
You could configure all four vmnics in a single vSwitch, then use vmnic0 for the management port group and the remaining vmnics for multi-NIC vMotion port groups.
For redundancy of the management port group, use one of the other three vmnics as standby. Do not use the same IP; separate the traffic using VLANs.
That's what we have implemented.
I would also implement egress traffic shaping on the vMotion vSwitch, because if you perform a lot of vMotions (e.g. when entering maintenance mode) you can saturate the management interface, which is used for the HA heartbeat. If you cause management to stop responding in the middle of a vMotion, you are really going to hate yourself.
We run everything (storage, vMotion, LAN and management) on 2x 10 Gb NICs, but limit vMotion to about 3 Gbps so it can't take all the bandwidth (on ESX 4.1+, vMotion can easily saturate a 10 Gbps link). That would drop storage and management, and then I'd probably be looking for a job :smileyshocked:
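A cap like that can be sketched as standard-vSwitch traffic shaping via esxcli (vSwitch name and the exact numbers are assumptions; values are in Kbps / KB, and a standard vSwitch shapes outbound traffic only, so check the option names with `--help` first):

```shell
# Limit vMotion egress on its vSwitch to roughly 3 Gbps
# (avg/peak bandwidth in Kbps, burst size in KB; all values illustrative)
esxcli network vswitch standard policy shaping set \
  --vswitch-name=vSwitch1 \
  --enabled=true \
  --avg-bandwidth=3000000 \
  --peak-bandwidth=3000000 \
  --burst-size=102400
```

Note this applies to every port group on that vSwitch, which is another argument for keeping vMotion on its own vSwitch (or using a distributed switch, where shaping and NIOC give finer control).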
I know this post is old, but I highly recommend what Rumple said. I have hit this before, where vMotion traffic saturated the link and interrupted the management traffic, including the HA heartbeat. I put a host into maintenance mode and one of the VMs failed to vMotion correctly because the management traffic was lost. It got stuck in limbo, and I had to kill the process manually to bring it back online.
Now, this was a dev VM, so it wasn't a big problem, but it could have been, especially if you have IP storage over converged networking links.
Now I either use egress traffic shaping as Rumple suggested, or keep the roles separate.