VMware Cloud Community
turkina
Contributor

ESX Networking Best Practices documentation

The company I work for has a number of ESX servers, each set up with only 2 pNICs and everything teamed through the pair. VLANs are assigned to port groups to keep the traffic separated on the physical network. I explained to my boss why 4 pNICs per blade would be much better: we would be physically isolating traffic and, as we grow, providing more bandwidth and lower latency for the VM network, since all the traffic from the roughly 12-15 VMs per blade wouldn't be riding on a single team of links. I also remember there being issues with having HA heartbeats and VM traffic on the same pNIC.
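To put rough numbers on the bandwidth side of that argument, here is a quick back-of-the-envelope sketch in Python. The 1 Gb uplink speed, the 15 VMs per blade, and the assumption that every VM pushes traffic at once are illustrative figures only, not measurements from our environment:

LINK_MBIT = 1000      # assume 1 Gb pNICs, in Mbit/s
VMS_PER_BLADE = 15    # upper end of the 12-15 VMs per blade mentioned above

def per_vm_share(uplinks, failed=0):
    # Mbit/s left per VM if all VMs transmit at once and `failed` uplinks are down.
    usable = max(uplinks - failed, 1)
    return usable * LINK_MBIT / VMS_PER_BLADE

for uplinks in (2, 4):
    print(f"{uplinks} pNICs: ~{per_vm_share(uplinks):.0f} Mbit/s per VM, "
          f"~{per_vm_share(uplinks, failed=1):.0f} Mbit/s with one uplink down")

Real traffic is bursty, of course, but it shows how thin 2 shared uplinks get once management, VMotion and VM traffic all contend for them.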

Since more pNICs would be more expensive, he has to go to the CIO and present a case for why we should get 4 pNICs per blade going forward, and he wants solid documentation from VMware explaining why more than 2 pNICs are advised. I'm having trouble locating anything like that on VMware's website. Does anyone know of documentation along these lines?

Thanks!

VI3 VCP

VCP3/4/5, VCAP5-DCA
0 Kudos
8 Replies
jrenton
Hot Shot

It's on page 7 of this VMware document

John

0 Kudos
turkina
Contributor

Thanks. I have seen that document; however, what I'm looking for is something along the lines of:

VMware recommends X number of pNICs per ESX host. If less than that, the following negative consequences can occur:

I can explain what I've seen myself, and that having more than 2 pNICs is advised for traffic isolation and better reliability, but the CIO is likely going to want to see compelling documentation before he spends the money to go from 2 pNICs per blade to 4, since (I'm told) we would need a new BladeCenter and more network infrastructure to support it.

VI3 VCP

VCP3/4/5, VCAP5-DCA
0 Kudos
jrenton
Hot Shot

Just use physical servers. Blades are prone to memory module failure. The additional cost of extra NICs is negligible with a traditional server. You are already consolidating, so why try to save an extra 10 percent by using blades? I have found them less reliable and an extra layer of complexity, which doesn't strike me as very cost effective once you factor in the extra configuration time now and the support effort later.

0 Kudos
alin1
Contributor

0 Kudos
jbogardus
Hot Shot

Ken Cline wrote a very detailed series of blog posts earlier this year that does an excellent job of covering your questions. Ken Cline is now a VMware employee.

Halfway through Part 5 there are a few tables that indicate the security and performance impact of mixing network connection functions on one NIC.

Part 7 discusses the details of the recommended 6-, 4-, and 2-NIC configurations, and explains why the 2-NIC config isn't desirable.

http://kensvirtualreality.wordpress.com/2009/04/17/the-great-vswitch-debate-part-5/
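For what it's worth, here is my own rough sketch (not a layout copied from Ken's posts - the vSwitch, vmnic and port group names are assumptions) of how a 4-NIC split is commonly described, plus a trivial check that VM traffic never shares an uplink with Service Console/VMkernel traffic:

# My own illustration of a typical 4-NIC split; all names are assumptions.
proposed = {
    "vSwitch0": {"uplinks": ["vmnic0", "vmnic1"],
                 "portgroups": ["Service Console", "VMkernel VMotion"]},
    "vSwitch1": {"uplinks": ["vmnic2", "vmnic3"],
                 "portgroups": ["VM Network VLAN10", "VM Network VLAN20"]},
}

# Roughly the 2-pNIC setup the original poster described: everything on one team.
current = {
    "vSwitch0": {"uplinks": ["vmnic0", "vmnic1"],
                 "portgroups": ["Service Console", "VMkernel VMotion",
                                "VM Network VLAN10", "VM Network VLAN20"]},
}

def uplinks_shared_by_vm_and_mgmt(layout):
    # Return the uplinks that carry both VM port groups and SC/VMkernel traffic.
    mgmt, vm = set(), set()
    for vswitch in layout.values():
        for pg in vswitch["portgroups"]:
            name = pg.lower()
            bucket = mgmt if ("console" in name or "vmkernel" in name) else vm
            bucket.update(vswitch["uplinks"])
    return sorted(mgmt & vm)

print("2-pNIC layout shares:", uplinks_shared_by_vm_and_mgmt(current))
print("4-pNIC layout shares:", uplinks_shared_by_vm_and_mgmt(proposed))

The 2-pNIC layout reports both vmnics as shared, which is exactly the kind of mixing the tables in Part 5 warn about.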

I also agree with the earlier poster about the extra problems of configuring and maintaining a desirable ESX networking setup on blades. If you have a choice, seriously consider using rackmount 2U servers to get a more suitably balanced mix of CPU and memory scalability.

0 Kudos
Josh26
Virtuoso

Just use physical servers. Blades are prone to memory module failure. The additional cost of extra NICs is negligible with a traditional server. You are already consolidating, so why try to save an extra 10 percent by using blades? I have found them less reliable and an extra layer of complexity, which doesn't strike me as very cost effective once you factor in the extra configuration time now and the support effort later.

Where did you pull this from?

Every blade center I've worked with uses exactly the same memory chips as the equivalent physical servers.

Sure, additional NICs might be cheap on a physical server - but how would buying a whole bunch of new servers to replace his BladeCenter be in any way a negligible price?

To the original poster: how is your storage managed? If there's iSCSI involved, then the answer is simple - you need as much performance as you can get on the iSCSI network. Having someone flood a link, and in turn actually cause IO latency, is not a good place to be.
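To put a rough figure on that, here is a small sketch with assumed numbers (a shared 1 Gb uplink and 64 KB IOs, not measurements from any real setup) of how little headroom iSCSI is left with once VM traffic floods the link:

LINK_MBIT = 1000   # shared 1 Gb uplink, assumed
IO_SIZE_KB = 64    # assumed iSCSI IO size

def iscsi_headroom(vm_traffic_mbit):
    # Bandwidth left for iSCSI and the resulting best-case IO rate.
    left_mbit = max(LINK_MBIT - vm_traffic_mbit, 0)
    mb_per_sec = left_mbit / 8.0
    ios_per_sec = mb_per_sec * 1024 / IO_SIZE_KB
    return left_mbit, ios_per_sec

for vm_load in (100, 500, 900):
    left, iops = iscsi_headroom(vm_load)
    print(f"VM traffic {vm_load} Mbit/s -> {left} Mbit/s for iSCSI "
          f"(~{iops:.0f} x {IO_SIZE_KB} KB IOs/s best case)")

Dedicated (or at least separately teamed) uplinks for storage take that contention off the table entirely.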

0 Kudos
jrenton
Hot Shot

In my personal experience I have found blades less reliable than traditional servers. I know the hardware is the same, but the environment is different. It could be something to do with the amount of ambient air around the memory modules that makes the difference.

If this is a new implementation, then I would consider traditional servers over blades. If you already have blades, then I agree it would not be the best idea to replace the whole environment just to get cheaper additional network interfaces.

0 Kudos
bulletprooffool
Champion

Take the cost of 15 VMs on your blades with 4 pNICs... and the cost of 15 complete blades with 2 NICs each... show the CIO option 2... then show him option 1... and ask him which he prefers... hehe

One day I will virtualise myself . . .
0 Kudos