Currently I am having a discussion with someone in my group who disagrees with anything I say, regardless of the content. What I am looking for in this thread is a general consensus about a specific topic.
And that topic is network configuration on a blade that has only two network interfaces. The blades run ESX 3.0.1 and are added to VC2; we have HA and DRS licenses and use them in 3- or 4-blade clusters. We do not set aggressive DRS thresholds and don't do a lot of manual VMotioning.
Currently my counterpart has the NICs on vSwitch0 set up in a team. Both NICs are active. I agree with that. Where we disagree is that I prefer to follow VMware recommended practice, which is to dedicate a NIC to VMotion. This was actually a suggestion by Massimo (thanks bud), something I had considered all along but never set up. One NIC active (vmnic0) for the VM Network and Service Console, the other NIC as standby, with VMotion having the opposite configuration: vmnic1 active for the VMotion portgroup and the other NIC as standby. That still gives you fault tolerance but dedicates an interface to VMotion.
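For reference, a rough sketch of that layout from the service console, assuming everything lives on vSwitch0 and the port group is named "VMotion" (name is illustrative). Note the per-port-group active/standby override itself is done in the VI Client (port group Edit > NIC Teaming > Failover Order), not from this CLI:

```shell
# Link both physical NICs to vSwitch0 as team members
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0

# Add a dedicated port group for VMotion traffic
esxcfg-vswitch -A "VMotion" vSwitch0

# Then, in the VI Client, override failover order per port group:
#   Service Console / VM Network : active vmnic0, standby vmnic1
#   VMotion                      : active vmnic1, standby vmnic0

# Verify the vSwitch, uplinks, and port groups
esxcfg-vswitch -l
```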
The other guy is whining about the bandwidth advantage of the NIC team and the fact that we don't use VMotion very much. The blades he set up have both NICs active for all portgroups. Fault tolerance: yes, plus the extra bandwidth of the second pipe.
Well, there are 5 or 6 VMs on each blade connected to a 4-port GigE switch module on the blade chassis. These VMs aren't using a single Gig NIC's worth of bandwidth, much less two Gigs' worth.
One thing I am a little confused about, which someone might be able to help me with as well: when I change the configuration I lose connectivity until I remove the VMotion-active NIC as an uplink from the command line with esxcfg-vswitch -U vmnic1 vSwitch0. Once I do that, pings come back immediately.
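The workaround described above boils down to unlinking and relinking the uplink from the service console; a minimal sketch, assuming the standard vSwitch0/vmnic1 names from this thread:

```shell
# List vSwitches with their uplinks and port groups to see the current state
esxcfg-vswitch -l

# Unlink vmnic1 from vSwitch0 -- in the behavior described above,
# pings return immediately after this
esxcfg-vswitch -U vmnic1 vSwitch0

# Relink it once connectivity is back and the teaming config has settled
esxcfg-vswitch -L vmnic1 vSwitch0
```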
Other than that, I am really looking for opinions from those who currently have blades with two NICs, or consultants/gurus who have specific opinions and justification for a dedicated NIC for the VMotion portgroup.
I'm using 2 NICs active for all my port groups... console, vmotion, lan1, lan2, etc. I thought about having one NIC active for vmotion and console but have not seen the need for it... yet.
Although I can see how, when a VMotion is running and using 700-800Mb/s, it may impact some of the virtual machines' network throughput. I'm not sure how fast ESX moves the VM traffic over to vmnic1 if vmnic0 is saturated.
Our ESX blades are setup as follows:
NIC1 and NIC2 - trunked connection shared for the console, the VMotion VLAN, a DMZ VLAN, a second DMZ VLAN, and a VLAN for backups (vSwitch0)
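A trunked setup like that maps to one port group per VLAN on the shared vSwitch. A sketch of what it might look like; the port group names and VLAN IDs here are hypothetical, and the physical switch ports would need to be trunking those VLANs (802.1Q):

```shell
# Create one port group per VLAN on vSwitch0, then tag each with its VLAN ID.
# Names and IDs below are illustrative only.
esxcfg-vswitch -A "VMotion" vSwitch0
esxcfg-vswitch -p "VMotion" -v 20 vSwitch0

esxcfg-vswitch -A "DMZ1" vSwitch0
esxcfg-vswitch -p "DMZ1" -v 30 vSwitch0

esxcfg-vswitch -A "DMZ2" vSwitch0
esxcfg-vswitch -p "DMZ2" -v 31 vSwitch0

esxcfg-vswitch -A "Backups" vSwitch0
esxcfg-vswitch -p "Backups" -v 40 vSwitch0
```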
This is the setup recommended by IBM. The advantage is that if one connection goes down then everything will continue to run on the other ... probably slower but it will still run.
We learned that for our environment, blades are not the most effective solution. We just ordered some more servers to expand our VI environment; they are dual quad core, 16GB RAM, 10 total NIC ports, and dual HBAs.
I hope this helps
Yeah, our original blades have some drawbacks, lack of network interfaces being the only one with a real negative impact. The newer blades are two-slot, with 4 dual-core Opterons, 32GB of memory, and 4 NICs, so for the most part horsepower and connectivity options are a non-issue with what we're getting moving forward. But for the VI3 environment the single-slot two-ways could stand to be replaced. Won't happen anytime soon, but it'd be nice.
I've been playing around with it and seeing some bizarro behavior. I expect bouncing the hosts after I make changes would fix it, but until I get the LUNs presented to two of my other hosts in that cluster I'm sort of handcuffed in what I can tinker with.
Thanks for the input, guys. I need to bone up on and work with VLANs on the hosts, I believe. Not really strong with networking, so it's a battle, but good for me. 'Preciate the thoughts.
The networking setup isn't very hard. I believe there were some tech papers with detailed setup instructions under ESX 2.x. The ESX side is even easier in 3.x.