andoven
Contributor

Questions about cabling ESX hosts to physical network switches

We have three ESX 3.5 Update 1 hosts in a VMware cluster. Each ESX host has six network cables attached: one cable is dedicated to VMotion, one cable is dedicated to DMZ traffic, and the remaining four cables are connected to the internal virtual switch, which handles local LAN traffic. Each ESX host has five of its cables connected to its own Dell network switch. The DMZ cables are all attached to one switch that has a VLAN for the DMZ traffic. All networking has worked fine so far. We don't have any trunking or similar features set up on our physical network switches.

I am curious whether there would be any benefit in spreading the data cables across multiple network switches, or whether it's better for each ESX host to be dedicated to one switch.

Configuration:

  • ESX host 1 has four data cables and one VMotion cable attached to switch 1.

  • ESX host 2 has four data cables and one VMotion cable attached to switch 2.

  • ESX host 3 has four data cables and one VMotion cable attached to switch 3.

  • ESX hosts 1, 2, and 3 each have one DMZ cable attached to switch 1.

  • The DMZ cables are associated with a VLAN. A fourth cable in the VLAN comes from our firewall into switch 1, so the DMZ uses four ports on that switch in total.
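
In vSwitch terms, the per-host layout above amounts to roughly the following, as built from the service console (the vmnic numbering and vSwitch names here are illustrative assumptions, not our actual names):

# Assuming the default vSwitch0 carries the LAN traffic and Service Console:
esxcfg-vswitch -L vmnic0 vSwitch0    # four LAN uplinks, all cabled
esxcfg-vswitch -L vmnic1 vSwitch0    # to the same Dell switch as
esxcfg-vswitch -L vmnic2 vSwitch0    # this host
esxcfg-vswitch -L vmnic3 vSwitch0
esxcfg-vswitch -a vSwitch1           # VMotion vSwitch
esxcfg-vswitch -L vmnic4 vSwitch1    # dedicated VMotion uplink, same switch
esxcfg-vswitch -a vSwitch2           # DMZ vSwitch
esxcfg-vswitch -L vmnic5 vSwitch2    # dedicated DMZ uplink to the DMZ VLAN on switch 1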

  1. On the one hand, isolating each host's network cables to one switch means that if that switch ever goes down, only one ESX host would be affected. The bad news is that all of the VMs on the ESX host behind the failed physical network switch would be shut down and restarted on another ESX host while the switch was not functioning.

  2. Alternatively, if ESX host 1 had two data cables on switch 1 and two data cables on switch 2, the load would be spread over more than one switch. But I wonder whether the network traffic would be less efficient because one ESX host is sending and receiving data over two physical switches. Are there any other reasons this approach would be good or bad?

  3. Also, is there any benefit to putting all the VMotion cables on one switch with its own VLAN? Is there any benefit to that approach vs. having the three VMotion cables each connected to a unique physical network switch?

  4. What benefit could be gained by trunking our data cables vs. not trunking them?

Thanks, Andoven

vmPUNK
Enthusiast

So, a couple of things.

First, to answer your question: I would use only two switches for all your LAN traffic, so for each ESX host, two cables to switch 1 and two to switch 2.

VMotion can have its own switch, or you can just split it between the two above.

Other notes:

DMZ and VMotion in your case have no redundancy. Also, where is your service console?

What you could do is the following, per ESX host:

  1. For DMZ: since you only have one switch for that anyway, physical redundancy doesn't exist.

  2. For SC and VMotion: say NIC1 (NIC2 standby) is active for SC and those ports all go to switch 1, and NIC2 (NIC1 standby) is active for VMotion and those all go to switch 2. This way VMotion traverses only one switch, and you have redundancy for both (see the sketch at the end of this post).

  3. For LAN: whichever way you split it is fine. You have an odd number of ports, so four on switch 1 and five on switch 2; the goal is to connect all the ESX hosts to both switches for redundancy.

That would leave you with one extra switch, I think, and then you could dedicate it to DMZ instead of having VLANs.
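
A minimal service console sketch of item 2, assuming vmnic1/vmnic2 are the two SC/VMotion uplinks (the vSwitch name, vmnic numbers, and the VMotion IP are made up):

# Shared SC/VMotion vSwitch with two uplinks
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1            # cabled to switch 1
esxcfg-vswitch -L vmnic2 vSwitch1            # cabled to switch 2
esxcfg-vswitch -A "Service Console" vSwitch1
esxcfg-vswitch -A "VMotion" vSwitch1
esxcfg-vmknic -a -i 192.168.50.11 -n 255.255.255.0 "VMotion"
# The per-portgroup active/standby order (SC: nic1 active, nic2 standby;
# VMotion: nic2 active, nic1 standby) is set in the VI Client under the
# portgroup's NIC Teaming tab, and VMotion is enabled on the VMkernel
# interface there as well; esxcfg-vswitch does not set failover order.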

andoven
Contributor

My service console is attached to the virtual LAN switch with the four data cables.

Texiwill
Leadership

Hello,

Using multiple pSwitches is generally a case for redundancy. You have four networks to be concerned about:

  • vMotion

  • DMZ

  • VM Network

  • SC Traffic

Given that, I would set up redundancy quite a bit differently than you have. I would set up multiple pNIC/vSwitch combinations, using for each ESX server:

  • vMotion: 1 pNIC, 1 vSwitch

  • SC: 1 pNIC, 1 vSwitch

  • DMZ: 2 pNICs, 1 vSwitch

  • VM Network: 2 pNICs, 1 vSwitch

I would then make use of at least two pSwitches per host. I would definitely make vMotion its own private VLAN; it is far too important a network to let anything but ESX servers on it. So you end up with the following per server:

pSwitch0 -> pNIC0 -> vSwitch0 -> SC Portgroup

pSwitch0 -> pNIC1 -> vSwitch1 -> vMotion Portgroup

pSwitch1 -> pNIC2 -> vSwitch2 -> DMZ Portgroup

pSwitch2 -> pNIC3 -> vSwitch2 -> DMZ Portgroup

pSwitch1 -> pNIC4 -> vSwitch3 -> VM Network

pSwitch2 -> pNIC5 -> vSwitch3 -> VM Network
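
As a rough service console sketch of that layout (the vmnic-to-pSwitch cabling follows the list above; portgroup names and the vMotion IP are assumptions, and on a default install vSwitch0 and the Service Console portgroup already exist):

esxcfg-vswitch -a vSwitch0                      # SC, uplink to pSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -a vSwitch1                      # vMotion, uplink to pSwitch0
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -A "vMotion" vSwitch1
esxcfg-vmknic -a -i 192.168.60.11 -n 255.255.255.0 "vMotion"
esxcfg-vswitch -a vSwitch2                      # DMZ, uplinks to pSwitch1/pSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A "DMZ" vSwitch2
esxcfg-vswitch -a vSwitch3                      # VM Network, uplinks to pSwitch1/pSwitch2
esxcfg-vswitch -L vmnic4 vSwitch3
esxcfg-vswitch -L vmnic5 vSwitch3
esxcfg-vswitch -A "VM Network" vSwitch3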

If you have iSCSI or NFS storage then this will not work. If you want redundancy for SC/vMotion, you can combine them onto the same vSwitch, but you will need VLANs for each to make it work.

Trunking is used mostly to get around a low number of pNICs, but it would be beneficial here because you really need eight pNICs per server to become fully redundant. So yes, I would trunk at minimum the vMotion and SC VLANs.
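
A sketch of what the combined, VLAN-tagged SC/vMotion vSwitch might look like (the VLAN IDs 100 and 200 are made up, and the pSwitch ports for both uplinks must be 802.1Q trunks carrying both VLANs):

esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -v 100 -p "Service Console" vSwitch0   # tag SC traffic
esxcfg-vswitch -A "vMotion" vSwitch0
esxcfg-vswitch -v 200 -p "vMotion" vSwitch0           # tag vMotion traffic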


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education. CIO Virtualization Blog: http://www.cio.com/blog/index/topic/168354, as well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization

vmPUNK
Enthusiast

So you could just run two cables from each host, one to each of two switches, for LAN and SC.

Then one switch for all the VMotion cables.

Or, here may be a good one:

Still run two cables from each host, one to each of two switches, for LAN and SC.

Then set up VMotion and DMZ on the same vSwitch with two ports. Configure the VMotion port group to be active on NIC1, standby on NIC2, and the DMZ port group active on NIC2, standby on NIC1.

If you can, run the NIC1 (VMotion) ports to one switch and the NIC2 (DMZ) ports to the other switch. Now you have redundant switches for both your DMZ and VMotion.

If you're worried about someone hacking your VMotion IP since it's on the same network as the DMZ, you can do two things: VLAN the port groups, or use different subnets.

If you VLAN the port groups, your switch ports would need to be set to trunk mode; a quick sketch follows.
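
Assuming the shared VMotion/DMZ vSwitch is vSwitch2 (the name and VLAN IDs are made up):

esxcfg-vswitch -v 20 -p "VMotion" vSwitch2   # tag VMotion traffic
esxcfg-vswitch -v 30 -p "DMZ" vSwitch2       # tag DMZ traffic
# The pSwitch ports that these two uplinks plug into must then be
# configured as 802.1Q trunks carrying VLANs 20 and 30.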

Hope that makes sense.

Texiwill
Leadership

Hello,

I would keep the vMotion VLAN or network off the switches you are using for DMZ, but that is just me. It is a risk, and you really do not want clear-text vMotion traffic ending up in your DMZ. Ideally I would use a separate set of DMZ physical switches, placing the other networks on their own switches that have no routes or anything to the DMZ network.


Best regards,

Edward L. Haletky

VMware Communities User Moderator

khughes
Virtuoso

Everyone above me has basically covered all the points, but what we do (which is similar to how you're set up, or going to be set up) is this: we have two gigabit switches and split the production network across them, so half of the production pNICs go to switch 1 and the other half go to the second switch. That way, if one of the switches goes down, we won't lose network connectivity to that ESX host.
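
In service console terms, that split is just both production uplinks on the same vSwitch, with the cables run to different physical switches (vmnic names are illustrative):

esxcfg-nics -l                      # list the pNICs to identify them
esxcfg-vswitch -L vmnic0 vSwitch0   # this cable runs to switch 1
esxcfg-vswitch -L vmnic1 vSwitch0   # this cable runs to switch 2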

- Kyle
