We'll be rolling out our first vSphere 5 servers with 10 NICs in each of the 4 ESXi hosts, and I have questions about how to best configure the networking.
All hosts have 10 1 Gb NICs.
My first thought was to build 4 vSwitches:
vswitch0:
Management Network VLAN10
VM Network VLAN10
vmnic0 + vmnic1 to physical switch1
vmnic6 + vmnic7 to physical switch2
vswitch1:
DMZ VLAN20
vmnic2 to physical switch1
vmnic8 to physical switch2
vswitch2:
vMotion
vmnic4 to a dedicated vmotion switch
vswitch3:
Backup Network with NFS for Veeam VLAN30
vmnic3 to physical switch1
vmnic9 to physical switch2
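If it helps, the layout above can also be scripted on each host with esxcli. A minimal sketch for vSwitch0 only (the other vSwitches follow the same pattern; portgroup name and VLAN ID are taken from the plan above, and this obviously needs shell access to the host):

```
# create vSwitch0 and attach its four uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic6 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic7 --vswitch-name=vSwitch0

# VM portgroup on VLAN 10
esxcli network vswitch standard portgroup add --portgroup-name="VM Network" --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name="VM Network" --vlan-id=10
```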
Now my questions:
Are there any design improvements? What could I do better?
Physical switch configuration:
Do I have to configure an EtherChannel between vmnic0, vmnic1, vmnic6 and vmnic7 and change the policy to "Route based on IP hash"?
Do I configure vSwitch0 with active and standby NICs, or should I use all NICs in the active state?
If it is better to work with active and standby, which NICs do I change to standby?
Thank you for your help
regards
Dennis
Hi Dennis,
Can you post the vmnic hardware configuration (which ones are embedded, which ones are on expansion cards), something like:
vmnic0, vmnic1 - embedded
vmnic2, vmnic3 - dual-port NIC, etc.
That will make it easier to design a better solution for you.
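On ESXi 5 this can be pulled straight from the host (assuming shell/SSH access is enabled); for example:

```
# list all vmnics with driver, MAC address, link state and speed
esxcli network nic list

# the PCI address column shows which vmnics share the same physical card
esxcfg-nics -l
```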
HI Arthur,
yes, of course
vmnic0, vmnic1, vmnic2, vmnic3 Intel82580 QuadPort Card
vmnic4, vmnic5 Intel I350 embedded
vmnic6, vmnic7, vmnic8, vmnic9 Intel82580 QuadPort Card
The physical switches are from HP.
One is an HP 2848 and the other is an HP 2910al.
cheers
Dennis
hi
I would do something like that, see attachment:
vMotion and mgmt in one vSwitch with an active/standby approach - VLAN trunking must be done on the physical switch ports
DMZ over two vmnics, Active/Active
VM_LAN over 4 vmnics, all Active
backup over 2 vmnics, Active/Active
All critical networks (mgmt, vMotion, VM_LAN and DMZ) are redundant (split between the 2 quad-port adapters), and the backup LAN runs over the embedded adapter.
No EtherChannel - I have never seen a network saturated by VM traffic, and with 4 vmnics here it will work fine.
Hello Arthur,
thank you very much for the scenario 😉
Why do you separate the mgmt traffic from the VM traffic?
Performance? Security?
regards
Dennis
It's both for performance and for security.
Hi,
next question: why do you configure mgmt together with vMotion as active/standby and not active/active?
This is because for the management VMkernel port, suppose nic1 is active and nic2 is standby, while for the vMotion VMkernel port nic2 is active and nic1 is standby. This is done to provide a dedicated NIC for each kind of traffic; in case of a failure, the standby adapter becomes the active one. If both were active, both VMkernel ports might send their traffic over the same NIC.
I would go with arturka's suggestion; in addition, take a look at this: vSphere Host NIC Design - 10 NICs
Are you using FC or IP based storage?
I am using FC-based storage.
ditro2001 wrote:
Physical switch configuration:
Do I have to configure an EtherChannel between vmnic0, vmnic1, vmnic6 and vmnic7 and change the policy to "Route based on IP hash"?
No, do not change this, and do not configure any link aggregation on the physical switches (called a "trunk" on HP ProCurve devices). The suggestions above are very good and will give you both good performance and failover capacity in case of a switch reboot/failure/power loss.
thank you very much for the scenario 😉
hi
you are very welcome,
if you have additional questions don't hesitate to post them 🙂
BTW, your topic is a good "material" for blog post 😉
Artur,
one small question: what would you recommend as the traffic balancing algorithm for the VM network? Also, why not use a dvSwitch for that environment? And why no EtherChannel? I am curious.
Tom Howarth wrote:
also why no etherchannel??
Not speaking for Artur, but with HP ProCurve switches you cannot create an EtherChannel (an HP "trunk") that spans two physical switches, which would leave you without fault tolerance.
Hi Tom,
My design is based on my experience and on blogs run by engineers more experienced than myself.
Anyway, see my comments below:
one small question, what would you recommend as the traffic balancing algorithm for the VM network?
for vSS - Route based on originating virtual port ID
for vDS - LBT (Load-Based Teaming)
also why not have a DVS for that environment?
I made the assumption that the user knows what he is asking for; in his question he mentioned only vSS, so I assumed that he does not have an Enterprise Plus license.
also why no etherchannel?? I am curious.
EtherChannel would be an option. It is possible to create two EtherChannels, one with vmnic2 and vmnic8 and a second with vmnic3 and vmnic9. Most probably it would work, but I don't know what would happen if one network port in the EtherChannel went down - how would ESX behave? I read (on the Yellow Bricks blog) that it might be a problem.
So in general, on the community and also on my blog, I only give advice that I'm 100% sure will work.
Tom, if you know that it would work with EtherChannel, you can advise the user to use it. But I don't see a big benefit to creating an EtherChannel (for VM traffic) over the standard approach without one.
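For reference, the teaming policy recommended here for a vSS can be set per vSwitch from the host's shell. A sketch (the vSwitch name is just an example - use whichever vSwitch carries the VM traffic):

```
# "Route based on originating virtual port ID"
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=portid

# "iphash" would only be valid together with a static EtherChannel on the physical switch
```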
BTW, your topic is a good "material" for blog post 😉
Thank you very much.
I do not have my own blog. Don't hesitate to post it on yours. I think many people have the same questions.
cheers
Dennis
Artur wrote:
It is possible to create two EtherChannels, one with vmnic2 and vmnic8 and a second with vmnic3 and vmnic9; most probably it would work,
In my opinion it would not work in this situation. All vmnics in a vSwitch using IP hash are required to be attached to the same physical switch; otherwise it would create massive flapping of MAC addresses on the physical network.
thanks for the clarification, that's why I wrote "most probably" 🙂 because I have never tested it
Ranjna Aggarwal wrote:
This is because for the management VMkernel port, suppose nic1 is active and nic2 is standby, while for the vMotion VMkernel port nic2 is active and nic1 is standby. This is done to provide a dedicated NIC for each kind of traffic; in case of a failure, the standby adapter becomes the active one. If both were active, both VMkernel ports might send their traffic over the same NIC.
So I configure the NICs within the port groups and not for the whole vSwitch, right?
Portgroup MGMT with vmnic0 active and vmnic6 standby, and
portgroup vMotion with vmnic6 active and vmnic0 standby, right?
Why is the "trunk" in the picture?
regards
So I configure the NICs within the port groups and not for the whole vSwitch, right?
right
Portgroup MGMT with vmnic0 active and vmnic6 standby, and
portgroup vMotion with vmnic6 active and vmnic0 standby, right?
right: MGMT - vmnic0 Active, vmnic6 Standby
vMotion - vmnic6 Active, vmnic0 Standby
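In esxcli terms, the per-portgroup override could look like the sketch below (the portgroup names are examples - use whatever yours are actually called, and note this follows the MGMT = vmnic0-active pairing stated above):

```
# MGMT: vmnic0 active, vmnic6 standby
esxcli network vswitch standard portgroup policy failover set --portgroup-name="Management Network" --active-uplinks=vmnic0 --standby-uplinks=vmnic6

# vMotion: the mirror image
esxcli network vswitch standard portgroup policy failover set --portgroup-name="vMotion" --active-uplinks=vmnic6 --standby-uplinks=vmnic0
```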
Why is the "trunk" in the picture?
that might be a little confusing; it is to let you know that on the physical port where the vmnic is connected you must trunk the VLANs (for vMotion and mgmt traffic)
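On the ProCurve side, "trunking a VLAN" here just means tagging it on the ESXi-facing port. A sketch (the port number is a placeholder, and the thread only names VLAN 10 for mgmt - substitute your vMotion VLAN ID as well):

```
configure
vlan 10
   tagged 1
exit
```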
Artur