VMware Cloud Community
timjwatts
Enthusiast

Network architecture - n00b question

Hi,

I'm an experienced sysadmin (including networking) but fairly green with ESX - and I'm planning a new installation.

Got 3 hosts with 12 NICs each, pair of stacked gig switches and a PS6500E SAN.

Clearly, as per PS6500E docs, I will have one iSCSI VLAN, 4x1Gbit bonded links to all hosts and both SPs in the SAN.

I'll also have feeds from each host to the public network (1 or 2x1 bonded)

Now - the real question is - would it be "normal" architecture to have a 4x1Gbit bonded link on another private, unrouted VLAN[1] between the 3 hosts for vMotion (and possibly miscellaneous) purposes?

[1] On the same switch stack as the iSCSI in my case, because there's plenty of spare capacity there.

Superficially, it seems an obvious thing to do, but I will admit I'm 25% the way through reading all the docs and I haven't yet noticed the issue mentioned.

Thanks for any pointers 🙂

Tim

11 Replies
AndreTheGiant
Immortal

For iSCSI I usually use dedicated switches and NICs.

You can partition your switches into different VLANs to carry different types of traffic.

But try to have at least dedicated NICs for iSCSI.

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
timjwatts
Enthusiast

Hi Andre,

Yes - the hosts have 12 NICs in and 4 per host will be dedicated to iSCSI.

My actual question (sorry - difficult to know how much background to put in without muddying the water) is:

Is it worth having a dedicated VLAN (and NICs) for interhost VMotion traffic between the 3 hosts?

Or to put it another way - is that a usual setup? 🙂

Ta,

Tim

AndreTheGiant
Immortal

A dedicated VLAN is common...

A dedicated NIC, maybe. For example, I use a vSwitch with 2 or 3 NICs carrying the Management, vMotion (and, if needed, the FT) interfaces, and then use a different active NIC for each interface.

Of course all on different VLANs.

With vSphere 5, vMotion can now use more than 1 active NIC... so the previous scheme can change.
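Roughly, that multi-NIC vMotion setup means two vMotion port groups on one vSwitch, each with its own VMkernel interface. A sketch from the host shell - the vSwitch, port group names, VLAN and IPs here are all illustrative, not from this thread:

```shell
# Illustrative multi-NIC vMotion layout (vSphere 5): two vMotion port
# groups on one vSwitch, each with its own VMkernel NIC. Each port group
# is then pinned to a different active uplink in the vSphere Client's
# NIC Teaming tab.
esxcfg-vswitch -a vSwitch1                      # create the vSwitch
esxcfg-vswitch -L vmnic2 vSwitch1               # first uplink
esxcfg-vswitch -L vmnic3 vSwitch1               # second uplink
esxcfg-vswitch -A vMotion-1 vSwitch1            # first vMotion port group
esxcfg-vswitch -A vMotion-2 vSwitch1            # second vMotion port group
esxcfg-vswitch -v 20 -p vMotion-1 vSwitch1      # same vMotion VLAN on both
esxcfg-vswitch -v 20 -p vMotion-2 vSwitch1
esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 vMotion-1
esxcfg-vmknic -a -i 192.168.20.12 -n 255.255.255.0 vMotion-2
```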

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
timjwatts
Enthusiast

Hi Andre,

Cool - thanks for confirming that...

One thing occurred to me though:

I have been assuming all along that ESX 4.1 can do LACP/802.3ad link bonding at the lowest (hypervisor?) level.

Is this actually true?...

Attached is a PDF of the logical wiring diagram of the sort of layout I was envisaging:

Cheers,

Tim

a_p_
Leadership

LACP is only supported with Enterprise Plus and the Cisco NEXUS 1000V add-on. For available/supported configurations, take a look at http://www.vmware.com/files/pdf/virtual_networking_concepts.pdf

from this document:

... That switch or set of stacked switches must be 802.3ad-compliant and configured to use that link-aggregation standard in static mode (that is, with no LACP).
...
Etherchannel negotiation, such as PAgP or LACP — must be disabled because they are not supported.
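On the switch side, "static mode with no LACP" looks roughly like this on PowerConnect-style CLIs - the port numbers and channel-group syntax are a guess at that family, so check your switch manual before trusting it:

```shell
# Hypothetical switch-side counterpart (PowerConnect-style CLI, syntax
# may differ on your model): a STATIC port-channel - "mode on" rather
# than LACP negotiation - to match ESX's IP-hash teaming requirement.
configure
interface range ethernet 1/g1-1/g4
channel-group 1 mode on        # static 802.3ad; no LACP/PAgP negotiation
exit
```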

André

timjwatts
Enthusiast

Hi Andre,

You sir are a font of all knowledge.

Yep - the PC 6224 switches support static 802.3ad (up to 8 ports per group) and LACP may be disabled. This is also supported across a number of switches interconnected with a stacking kit - which we have (2 switches, stacked, for redundancy in the event of failure).

Thanks for that PDF - I had not come across that (and I was looking for a guide like that) - now downloaded. Looks like my initial wiring diagram is sane - I'll build that for the initial tests.

First job - USB-boot the hosts to Linux and speed-tune the SAN! Then we go on to VMware 🙂

All the best!

Tim

a_p_
Leadership

Good Luck.

In case you are interested, VMware provides a couple of sample network configurations in the KB, e.g.

Sample configuration of EtherChannel / Link aggregation with ESX/ESXi and Cisco/HP switches

Sample configuration of virtual switch VLAN tagging (VST Mode)

André

timjwatts
Enthusiast

Thanks for those Andre,

Very interesting. So only IP-hash is supported at the vSwitch level (I assume vMotion uses a vSwitch?).

So it appears we are not going to benefit from bonding for a single vMotion from Host A->Host B, but we might if we have to vMotion A->B && A->C - assuming vMotion is limited to using 1 IP per host. Something I will be reading up on. It may therefore be the case that bonding more than 2 links is a waste of time, and those 2 links are all we need for redundant paths via the 2 switches...
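As a sanity check on that reasoning, here's a toy model of IP-hash uplink selection - a deliberate simplification, not the exact ESX implementation. The point it illustrates: a fixed src/dst address pair always hashes to the same uplink, so one A->B stream never spreads across the bond.

```shell
# Toy illustration of "Route based on IP hash" (simplified; not the
# exact ESX algorithm): XOR the source and destination IPs, take the
# result modulo the number of uplinks. A given src/dst pair always
# lands on the same uplink, so a single vMotion stream uses one link.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}
pick_uplink() {   # pick_uplink <src-ip> <dst-ip> <num-uplinks>
  echo $(( ( $(ip_to_int "$1") ^ $(ip_to_int "$2") ) % $3 ))
}
pick_uplink 192.168.10.11 192.168.10.12 2   # A->B: always the same uplink
pick_uplink 192.168.10.11 192.168.10.13 2   # A->C: may hash to the other one
```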

Cheers,

Tim

a_p_
Leadership

Actually, I don't think you will really benefit from configuring EtherChannel for the vMotion network. By default, vSphere 4.1 allows only 4 concurrent vMotion tasks anyway (see http://kb.vmware.com/kb/1022851), and IMO it does not really matter if it takes a few seconds more or less for the VMs to be migrated.

What I usually do - and it's also recommended by others - is to use one vSwitch (with the default "Route based on originating Port ID" policy) for Management and vMotion, with two uplinks for redundancy. Assign a separate VLAN ID to each port group, and configure the uplinks on the port groups as active/standby (one uplink active for Management, the other one for vMotion). This way - during normal operation - each of those two VMkernel port groups has its dedicated uplink, and also a failover in case of a link loss.
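A sketch of that layout from the host shell, with made-up VLAN IDs and addresses (the per-port-group active/standby failover order itself is set in the vSphere Client: port group > Edit > NIC Teaming > override switch failover order):

```shell
# Illustrative Management + vMotion vSwitch: two uplinks, two port
# groups on separate VLANs. Under normal operation vmnic0 carries
# Management and vmnic1 carries vMotion; either takes over both roles
# if the other link fails (active/standby set per port group in the GUI).
esxcfg-vswitch -L vmnic0 vSwitch0               # uplink 1 (active for Mgmt)
esxcfg-vswitch -L vmnic1 vSwitch0               # uplink 2 (active for vMotion)
esxcfg-vswitch -A vMotion vSwitch0              # add the vMotion port group
esxcfg-vswitch -v 10 -p "Management Network" vSwitch0
esxcfg-vswitch -v 20 -p vMotion vSwitch0
esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 vMotion
```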

André

timjwatts
Enthusiast

Brilliant - thanks Andre.

Cheers,

Tim

timjwatts
Enthusiast

Heh - just thought of something else this morning, on a related note - while I was reading the ESX docs...

Of course, I will be wanting a vSwitch to span all 3 hosts so that the VMs can talk to each other (which happens a lot - we use a lot of layered web apps, so there are database servers, LDAP, second-tier app servers (Tomcat), fileservers and so on).

I don't want all this traffic going through the main uplinks to the public net, through that switch (which I don't own and which has limited ports) and back down again - so I suspect I will need another bundle of interconnects through my local switch.

Must read up on how vSwitches work now...
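One thing worth knowing up front: standard vSwitches are strictly per-host, so the "spanning" network is really just an identically named port group on the same VLAN on each host. A sketch with made-up names - run the equivalent on all 3 hosts:

```shell
# Standard vSwitches don't span hosts: repeat these (illustrative)
# commands on each host, keeping the port group NAME and VLAN identical
# so vMotion finds the same network everywhere. VM-to-VM traffic on the
# same host and port group never leaves the vSwitch; traffic between
# hosts crosses the local switch stack on VLAN 30, not the public uplinks.
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
esxcfg-vswitch -A "AppTier" vSwitch2
esxcfg-vswitch -v 30 -p "AppTier" vSwitch2
```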

Tim
