We're moving from ESX 4.0 to ESXi 4.1. Our servers have 4 physical Gigabit NICs.
On ESX 4.0, we're running 2 vSwitches:
vSwitch0
Service Console - vmnic0 Active - vmnic3 Standby
VMkernel - vmnic3 Active - vmnic0 Standby
(Unique NICs / IPs per function)
vSwitch1
VM Port Groups - vmnic1 and vmnic2 Active
(Multiple trunked VLANs)
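For reference, this layout corresponds roughly to the following service console commands on ESX 4.0 (the IPs and the VLAN ID are placeholders; the per-port-group active/standby ordering is set through the vSphere Client, since esxcfg-vswitch doesn't manage teaming policy):

# vSwitch0: Service Console + VMkernel on vmnic0/vmnic3
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic3 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswif -a vswif0 -p "Service Console" -i 192.168.10.11 -n 255.255.255.0   # placeholder IP
esxcfg-vswitch -A "VMkernel" vSwitch0
esxcfg-vmknic -a -i 192.168.10.12 -n 255.255.255.0 "VMkernel"                   # placeholder IP

# vSwitch1: VM traffic on vmnic1/vmnic2, one port group per trunked VLAN
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A "VM Network 100" vSwitch1
esxcfg-vswitch -v 100 -p "VM Network 100" vSwitch1                              # placeholder VLAN ID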
With the changes in ESXi, is it recommended to separate Management from VMotion as we did with ESX? Note that we're using the same isolated subnet for both of these functions.
Personally I'd prefer combining Management and VMotion. Wouldn't VMotion benefit from the use of an additional NIC, especially with multiple simultaneous VMotions? At the same time, it doesn't seem that the Management traffic would be impeded to the point of needing separation, especially since we're using the same subnet. Also, security shouldn't be an issue, since again we're using the same isolated subnet for Management and VMotion.
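A minimal sketch of what that combined setup might look like from the ESXi 4.1 Tech Support Mode shell, assuming vmnic0/vmnic3 stay the management uplinks (on a fresh install, vSwitch0, the Management Network port group, and vmk0 already exist):

# Combined management + vMotion on the default vSwitch0:
# add the second uplink, then tag the existing management vmknic for vMotion
esxcfg-vswitch -L vmnic3 vSwitch0       # vmnic0 is already the management uplink
vim-cmd hostsvc/vmotion/vnic_set vmk0   # vmk0 = the default Management Network vmknic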
Your config is compliant with 'best practice'. I prefer to separate VMkernel from Management traffic myself, even if it costs me some vMotion performance.
---
MCITP: SA+VA, VCP 3/4, VMware vExpert
Yes. Since vSphere 4.1 can run up to 4 vMotions concurrently, I would definitely recommend that as well. I also agree with Anton.
Cheers,
Chad King
VCP-410 | Server+
If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
Hey,
we have some hosts with 4 NICs too. I configured 3 vSwitches:
1. 1 NIC for management
2. 2 NICs for VM Network
3. 1 NIC for VMkernel for VMotion, running on a separate GBit switch.
Adding another NIC would probably speed up the VMotion process, but right now I have no requirement to make it faster. I also like to keep the VMotion traffic out of my production network.
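A rough sketch of that three-vSwitch split in esxcfg terms, assuming this vmnic ordering and a placeholder vMotion IP:

esxcfg-vswitch -a vSwitch0                                     # 1 NIC: management only
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -a vSwitch1                                     # 2 NICs: VM network
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1
esxcfg-vswitch -a vSwitch2                                     # 1 NIC: vMotion, own physical switch
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A "vMotion" vSwitch2
esxcfg-vmknic -a -i 192.168.50.10 -n 255.255.255.0 "vMotion"   # placeholder IP
vim-cmd hostsvc/vmotion/vnic_set vmk1                          # vmk1 assumed to be the vMotion vmknic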
Regards
>we have some hosts with 4 NICs too. I configured 3 vSwitches:
In your case you have no redundancy for Management Traffic and vMotion.
---
MCITP: SA+VA, VCP 3/4, VMware vExpert
The third vSwitch has a second management network configured for HA; forgot to mention that.
vMotion redundancy is not really needed because we don't use DRS. Of course, in bigger environments you should have a second NIC for the VMkernel, but then you should also have more than 4 NICs.
[~117533],
please let us know if you need any further assistance! We are here to help!
Cheers,
Chad King
VCP-410 | Server+
If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
Thank you both very much. That's what I wanted to hear!
One thing needs clarification, as below.
vSwitch0 - 2 NICs for Management console & vMotion, with virtual port ID load balancing configured.
Management console active adapter is vmnic0 and standby is vmnic4
vMotion active adapter is vmnic4 and standby is vmnic0
vSwitch1 - 2 NICs for the NFS VMkernel, with IP hash configured.
vSwitch2 - 4 NICs for VMs, with IP hash configured.
All the vSwitches above are on the same network (10.3.100.x).
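For what it's worth, the layout and the VMkernel path can be sanity-checked from Tech Support Mode before any cable-pull testing (the NFS server IP below is a placeholder):

esxcfg-vswitch -l      # list vSwitches, port groups, and their uplinks
esxcfg-nics -l         # link state per vmnic (useful to watch during the cable pulls)
vmkping 10.3.100.200   # confirms the VMkernel path to the NFS server (placeholder IP)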
Questions:
1] What is the best practice for the management console & vMotion configuration if I have only 2 NICs?
2] If I assign 2 NICs for the management console & vMotion in one vSwitch, as configured now, should I configure it as IP hash? In my testing (unplugging the UTP cables port by port in sequence while continuously pinging the management console IP), the IP hash configuration produced less packet loss than the active-standby configuration.
Regards.