VMware Cloud Community
petersonmd
Contributor

Input wanted - 6 NIC setup with ESXi 4.x

Just getting into the ESXi arena after a couple of years of messing around with VMware Server. I have a brand spanking new server with 6 NICs - and am looking for advice on how they should be configured.

Right now - I have one NIC dedicated for the management interface (192.168.10.2) in vSwitch0. The other 5 NICs are pooled in vSwitch1 and will be allocated across my 5 virtual machines (I guess). Each virtual machine will have a static IP in the 192.168.10.11-.20 range.

Am I better off leaving all 5 NICs in vSwitch1 and letting ESXi handle the switching to/from the virtual machines (getting some sort of redundancy, I suppose), or should I set up a separate vSwitch for each virtual machine and dedicate a single NIC to each? 2 of the 5 virtual machines will be hit fairly regularly (a file/print server and an application terminal server). The other 3 virtual machines will get light use.

Have attempted to find a best practices document (or something similar) but haven't stumbled across it yet.

Thanks!

Mark

3 Replies
Dave_Mishchenko
Immortal

A single vSwitch for the VMs will be sufficient. ESXi will load balance outbound connections. You can load balance inbound connections as well depending on what you use for physical switches. I would also move one of the NICs back to vSwitch0 so that you have redundancy there.
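For what it's worth, moving an uplink between vSwitches on ESXi 4.x can be done from the console roughly like this - a sketch that assumes the spare uplink is vmnic1; run `esxcfg-vswitch -l` first to confirm the actual NIC and vSwitch names on your host:

```sh
# List vSwitches and their current uplinks to confirm names
esxcfg-vswitch -l

# Unlink the spare uplink from the VM vSwitch (vmnic1 is an assumption)
esxcfg-vswitch -U vmnic1 vSwitch1

# Link it to the management vSwitch so vSwitch0 has two uplinks
esxcfg-vswitch -L vmnic1 vSwitch0
```

The same change can be made in the vSphere Client under Configuration > Networking if you prefer the GUI.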

Do you plan to manage ESXi from a separate management network, use network storage, or put VMs in a DMZ? Even with 4 NICs in your VM vSwitch you'll likely have plenty of unused capacity, as even the network load from the terminal server should be relatively light. Here's some good reading on vSwitch setup - http://kensvirtualreality.wordpress.com/2009/03/29/the-great-vswitch-debate-part-1/.

Dave

VMware Communities User Moderator


petersonmd
Contributor

Thanks for the input. Management of ESXi will be from the local subnet only (192.168.10.x). No DMZ or network storage - the server itself has 36 GB of RAM and 2 TB of local storage, which is more than enough for what we're doing. It's a pretty basic small-office environment - we just have multiple conflicting software packages that necessitate the separation into virtual machines.

HHS_Jason
Contributor

Hello, I believe you could also just place all the NICs into vSwitch0 and forget about vSwitch1 altogether. That way all the NICs take part in load balancing, and unless you are using vMotion or other advanced features, the management traffic will be insignificant.
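If you do consolidate everything onto vSwitch0, the console steps would look roughly like this - again a sketch with assumed vmnic numbers and port group names; verify yours with `esxcfg-vswitch -l` before making changes:

```sh
# Move each uplink from vSwitch1 to vSwitch0 (vmnic2 shown; repeat per NIC)
esxcfg-vswitch -U vmnic2 vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch0

# Make sure vSwitch0 has a port group for the VMs to attach to
esxcfg-vswitch -A "VM Network" vSwitch0

# Once vSwitch1 has no uplinks or port groups left, remove it
esxcfg-vswitch -d vSwitch1
```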

Separate vSwitches are useful when you have different VLANs and subnets, are running iSCSI, or need to dedicate bandwidth and isolate network traffic for critical applications. Given what you have described, none of that applies to your situation, and you can always reassign the NICs at a later date as the need arises.

There was mention of redundancy, which would only apply if you have two physical switches, and in that case you would want to evaluate your whole solution, e.g. multiple ESX servers hosting redundant servers, so that no single point of failure can bring down your whole system. For example, there is no point in having two physical switches if they are both connected to the same power source and the power fails.

Hope that helps.
