VMware Cloud Community
JFWilmer
Contributor

Ethernet Best Practices

Folks

I am installing 3 new ESX 3.0.2 servers. Each server has 2 onboard gig ports and 2 Intel dual-port gig cards, for a total of 6 gig ports.

My initial setup was to be the following:

onboard 1 = console in its own vlan

onboard 2 = vmotion in its own vlan

Intel ports 1-4 = user access, balanced across 2 core switches, in their own vlan

A consultant who is coming in recommends the following:

onboard 1 and Intel port 1 = console

onboard 2 and Intel port 3 = vmotion

Intel port 2 and 4 = user access

He says this builds in redundancy, but it only leaves me with 2 links for user connectivity.

What are your thoughts???

Thanks, you have been very helpful!!

6 Replies
admin
Immortal

I prefer to use the 4 Intel ports to make a trunk for user connectivity. You will get failover and load balancing by configuring your switch with the 802.3ad protocol (if you have a Cisco switch, the technology is EtherChannel). You may also use VLAN tagging to present more than one VLAN to the virtual machines.
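A rough Cisco IOS sketch of that idea (interface names and VLAN IDs are made up for illustration; note that ESX 3.x supports only static EtherChannel, not dynamic LACP, so the channel-group mode must be "on" and the vSwitch load-balancing policy must be set to "Route based on IP hash" in the VI Client):

```
! Bundle the four Intel-facing ports into one static EtherChannel
interface range GigabitEthernet0/1 - 4
 description ESX user-access uplinks
 channel-group 1 mode on          ! static bundle: ESX 3.x has no LACP support
!
interface Port-channel1
 switchport mode trunk            ! carry multiple VLANs for VLAN tagging (VST)
 switchport trunk allowed vlan 10,20,30
```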

The consultant's point is good (think about redundancy), but the console and vMotion ports are not as critical as the user access ports. If you lose the console or vMotion port, your VMs will keep running, which gives you time to correct the problem and bring the infrastructure back (a feature that requires VirtualCenter).


Paul_B1
Hot Shot

We have 6 NICs as well and here's what we do:

Onboard 1 = Console to pSwitch 1

Onboard 2 = VMotion/NFS/Backup Network

Intel 1-1 = VMs (trunked) to pSwitch1

Intel 1-2 = DMZ

Intel 2-1 = VMs (trunked) to pSwitch2

Intel 2-2 = Failover for the Console NIC (configured in standby mode in VirtualCenter) to pSwitch2

We have NEVER even come close to touching the bandwidth capacity of the 2 trunked NICs dedicated to the VMs.

peetz
Leadership

We usually add the two onboard ports to a single vSwitch and run both the service console and VMKernel/VMotion over it. The two physical ports are connected to two redundant physical switches. This way we have redundancy for both the service console and VMKernel connections while just using two physical ports.

You cannot (or should not) do this if there are security concerns running console and VMKernel traffic on the same VLAN or if you plan to have a heavy network load (e.g. backup jobs) in the service console (the latter is not a good idea anyway).

We then use all the external NICs for VM networks but always care for having redundant connections to different physical switches within each vSwitch.
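On ESX 3.x, the shared console/VMkernel vSwitch described above could be set up from the service console roughly like this (a sketch only: vmnic numbers, port group names, VLAN IDs, and IP addresses are placeholders for your environment):

```
# One vSwitch with both onboard ports as redundant uplinks
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0

# Service console port group on its own VLAN
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -v 10 -p "Service Console" vSwitch0
esxcfg-vswif -a vswif0 -p "Service Console" -i 192.168.10.11 -n 255.255.255.0

# VMkernel/VMotion port group on a second VLAN
esxcfg-vswitch -A "VMkernel" vSwitch0
esxcfg-vswitch -v 20 -p "VMkernel" vSwitch0
esxcfg-vmknic -a "VMkernel" -i 192.168.20.11 -n 255.255.255.0
```

Per-port-group failover order (active/standby per uplink) is then set in the VI Client.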

- Andreas

Twitter: @VFrontDe, @ESXiPatches | https://esxi-patches.v-front.de | https://vibsdepot.v-front.de
spex
Expert

Console and VMotion work perfectly together. So:

onboard 1 + intel 1 = trunk for console + vmotion (2 vlans)

onboard 2 + intel 2 = first user trunk

intel 3 + intel 4 = second user trunk, e.g. for DMZ, ... (where a VLAN is not safe enough)
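A rough ESX 3.x sketch of that three-pair layout (NIC names, port group names, and VLAN IDs are assumptions, not a prescribed mapping; load balancing and failover order are set per port group in the VI Client):

```
# Pair 1: console + vmotion on one vSwitch, two VLANs
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0   # onboard 1
esxcfg-vswitch -L vmnic2 vSwitch0   # intel 1
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -v 10 -p "Service Console" vSwitch0
esxcfg-vswitch -A "VMotion" vSwitch0
esxcfg-vswitch -v 20 -p "VMotion" vSwitch0

# Pair 2: first user trunk
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1   # onboard 2
esxcfg-vswitch -L vmnic3 vSwitch1   # intel 2
esxcfg-vswitch -A "VM Network" vSwitch1

# Pair 3: second user trunk, e.g. for the DMZ
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2   # intel 3
esxcfg-vswitch -L vmnic5 vSwitch2   # intel 4
esxcfg-vswitch -A "DMZ" vSwitch2
```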

Regards Spex

JDLangdon
Expert

Console and VMotion work perfectly together. So:

onboard 1 + intel 1 = trunk for console + vmotion (2 vlans)

onboard 2 + intel 2 = first user trunk

intel 3 + intel 4 = second user trunk, e.g. for DMZ, ... (where a VLAN is not safe enough)

I have to agree with spex. I like the idea of having the COS teamed, either with a VMkernel or on its own.

Jason

Ken_Cline
Champion

This topic comes up on a regular basis. Look at this thread for a discussion, and also follow the links to other related threads.

Ken Cline, VMware vExpert 2009, VMware Communities User Moderator. Blogging at: http://KensVirtualReality.wordpress.com/