VMware Cloud Community
WillL
Enthusiast

Physical switch implementation for vCNI backed networks

Hi,

Let's say one vCNI network pool is created on a vDS with an optional VLAN ID and the MTU set to 1524.

For each Organization, one Routed and one Internal network are created based on the vCNI network pool; the respective networks are 192.168.0.0/24 and 192.168.1.0/24 by default, and the respective vSE VMs are also created.

The vDS uplinks connect to physical switch ports; what configuration is required on the physical switch? I'm not a network guy, so I can only guess that the VLAN ID, the two networks, and the MTU need to be set up on the required ports. The traffic flow diagrams in Duncan's blog at http://www.yellow-bricks.com/2010/09/15/vcd-networking-part-3-use-case/ are very helpful.

It looks like MAC-in-MAC encapsulation is used to isolate vCNI-backed networks, since they share the same IP ranges. Is there an easy-to-understand tutorial on MAC-in-MAC encapsulation?

Thanks,

William

Reply
0 Kudos
20 Replies
_morpheus_
Expert

The VLAN ID used for your vCNI pool must exist in the physical switching infrastructure, or at least in the switch that all of your ESX hosts are connected to. When creating networks from a vCNI pool with VLAN, VCD will create portgroups with that VLAN tag, so the switch must also support VLAN trunking to your ESX switchports.

As for the MTU, the switch must support the higher MTU (jumbo frames); if it doesn't, you can just use the default MTU.
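If you want to sanity-check the physical path yourself before pulling in the network team, something like the rough Python/scapy sketch below can tell you whether tagged, slightly oversized frames actually make it from one switchport to another. The VLAN ID, interface name, and payload size are placeholders for your environment, not values vCD requires.

```python
# Rough sanity check of the switch path for a vCNI VLAN: send one tagged,
# oversized broadcast frame from a test box and watch for it on a second box
# plugged into another port on the same switch. Values below are placeholders.
from scapy.all import Ether, Dot1Q, Raw, sendp

VLAN_ID = 123              # the VLAN you assigned to the vCNI network pool
IFACE = "eth0"             # NIC patched into the switchport under test
PAYLOAD = b"\x00" * 1510   # pushes the on-wire frame past the default 1500-byte MTU

# Raise the test NIC's MTU first, otherwise the OS will refuse to send it:
#   ip link set eth0 mtu 1600

frame = (
    Ether(dst="ff:ff:ff:ff:ff:ff")   # broadcast so the second box will see it
    / Dot1Q(vlan=VLAN_ID)            # 802.1Q tag the switchports must trunk
    / Raw(PAYLOAD)
)
sendp(frame, iface=IFACE, verbose=True)

# On the second box:  tcpdump -n -e -i eth0 vlan 123
# If no frame arrives, either the VLAN isn't trunked to those ports or the
# switch is dropping frames larger than its configured MTU.
```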

Reply
0 Kudos
WillL
Enthusiast

Our proof-of-concept environment has two physical servers. I didn't ask the network team for any setup on the switch; I just created the vCNI network pool with a randomly selected VLAN ID, and according to vCenter the vSE VM got created on ESX server 1.

Some interesting findings after creating a fenced vApp containing two VMs:

According to vCenter, the fenced vSE and VM-1 were created on ESX server 1 and VM-2 on ESX server 2, with the default gateway set to the vSE of the vCNI network pool. Interestingly, VM-1 and VM-2 can connect to each other. Since they are on different ESX servers, the network traffic must have flowed through the physical switch; I didn't expect this to work, as nothing had been done on the physical switch.

Now I'm confused ;) Please explain to me, thanks!

Note: due to a lack of NICs, the same vDS also has a port group configured for the External Network with no VLAN ID.

Reply
0 Kudos
_morpheus_
Expert

If the VLAN wasn't set up in the switch, then it shouldn't work. If it does work, that indicates the VLAN is working at the switch level.

Reply
0 Kudos
WillL
Enthusiast

Okay, I will get our network guy to take a look.

Reply
0 Kudos
depping
Leadership

Not sure what happened here, but normally when an incorrect VLAN ID is used the traffic should be dropped by the physical switch. I can only assume the network team created a trunk and your guess was a very lucky one.

You will require the following to be configured:

- a single VLAN per vCNI Pool

- preferably an increase of the MTU to 1524 to avoid fragmentation

I would also recommend using a dedicated dvSwitch for the Network Pool. As the port groups are dynamically created, combining them with the statically created external networks might be confusing and might complicate troubleshooting.



Duncan

VMware Communities User Moderator | VCDX

-


Now available: Paper - vSphere 4.0 Quick Start Guide (via amazon.com): http://www.amazon.com/gp/product/1439263450?ie=UTF8&tag=yellowbricks-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=1439263450 | PDF (via lulu.com): http://www.lulu.com/product/download/vsphere-40-quick-start-guide/6169778

Blogging: http://www.yellow-bricks.com | Twitter: http://www.twitter.com/DuncanYB

Reply
0 Kudos
_morpheus_
Expert

Depping,

It's important to have two NICs per vSwitch for redundancy. I think it's OK to have a dvSwitch dedicated to network pools, but not at the expense of losing redundancy. My servers have four NICs, so if I were to have one dvSwitch dedicated to network pools, that dvSwitch would only have one NIC.

Reply
0 Kudos
depping
Leadership

Of course it is, Morpheus; I made the assumption that everyone understands that implementing a recommendation like that should not decrease uptime. Of course each vSwitch/dvSwitch should have a minimum of two NIC ports!



Duncan

VMware Communities User Moderator | VCDX

-


Now available: Paper - vSphere 4.0 Quick Start Guide (via amazon.com): http://www.amazon.com/gp/product/1439263450?ie=UTF8&tag=yellowbricks-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=1439263450 | PDF (via lulu.com): http://www.lulu.com/product/download/vsphere-40-quick-start-guide/6169778

Blogging: http://www.yellow-bricks.com | Twitter: http://www.twitter.com/DuncanYB

Reply
0 Kudos
XIII
Contributor

Hi

A small question regarding the increase in MTU: if a VLAN ID is specified for the VCDNI-backed network, wouldn't the encapsulation overhead increase to 28 bytes? I.e., 24 bytes of VCDNI overhead, plus 4 bytes for the dot1q EtherType and the VLAN ID in front of the VCDNI EtherType.
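Just to show where my 28 comes from (the 24-byte figure is only my assumption, inferred from the recommended MTU of 1524, not from an official frame-format spec):

```python
# Back-of-the-envelope arithmetic only; the 24-byte vCD-NI overhead is an
# assumption inferred from the 1524 recommendation, not from documentation
# describing the actual header layout.
STANDARD_MTU = 1500
VCDNI_OVERHEAD = 24     # assumed: 1524 - 1500
DOT1Q_TAG = 4           # extra 802.1Q tag when a VLAN ID is set on the pool

print(STANDARD_MTU + VCDNI_OVERHEAD)               # 1524
print(STANDARD_MTU + VCDNI_OVERHEAD + DOT1Q_TAG)   # 1528 -> hence my question
```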

Thanks!

Reply
0 Kudos
manythanks
Contributor

Guys, it is NOT MAC-in-MAC encapsulation; it is not 802.1ah or anything like it. It is a proprietary 'lab-manager' encapsulation performed by a special services VM called vShield-PGI. This is a Linux VM running on ESX that 'bridges' between the VMs and the external vmNIC and does the 'lab-manager' encapsulation for all orgs and all VMs on that specific ESX host. It came from Akimbi (a VMware acquisition), and the MAC encapsulation used is still Akimbi's (the services VM sits in front of each VM). The frame format is attached; you can see it has nothing to do with MAC-in-MAC, just special VMware-specific fields inside the Ethernet frame.

pcap file attached so your networking guys can take a look at this encapsulation.

Indeed, VMware provides very little/no information on that encapsulation; they just call it 'MAC-in-MAC', which is a different IEEE protocol not used here.

Reply
0 Kudos
mreferre
Champion

> ...performed by a special services VM called vShield-PGI. This is a Linux VM running on ESX that 'bridges' between the VMs and the external vmNIC...

We no longer require that system VM from vSphere 4.0U2 and above. This now all happens inside the vDS.

Massimo.



Massimo Re Ferre'

VMware vCloud Architect

twitter.com/mreferre

www.it20.info

Reply
0 Kudos
manythanks
Contributor

Massimo, it would be great to get insight into how the vDS does it now; I still see a VCD-created PGI VM per ESX host.

Reply
0 Kudos
mreferre
Champion

We implemented it as a filter driver on the vDS.

I find it weird that you see that PGI VM with Redwood. You may see that VM if you enable Port Group Isolation from vShield Manager. Since VSM also supports ESX 4.0U1 (which requires the PGI VM), it just instantiates it regardless of the version.

In vCD (which supports only 4.0U2 and 4.1 and those versions do not require the PGI vm) we do not deploy it.

Are you sure you are using the GA code? We used to deploy it with the Beta of vCD.




Massimo Re Ferre'

VMware vCloud Architect

twitter.com/mreferre

www.it20.info

Reply
0 Kudos
manythanks
Contributor

OK, I just deleted the PGI VM and added a VCDNI pool in VCD, and it did not create it again in vCenter, so I guess this is for older ESX only, as you say. Thanks a lot.

So now the vDS switch component has all the code from the old PGI VM and does the encapsulation for all the different organizations?

Do ORG-1's vcdni-pool 1 and ORG-2's vcdni-pool 2 share the vDS code, share the vDS uplink, and share one VLAN used by the vDS to send 'lab-manager'-encapsulated traffic per org?

How exactly does the vDS decide on the MAC used for the 'outer' frame? I see the real internal MACs of the ORG1 and ORG2 VMs on the external switch, along with a new MAC (per ESX host), so I am trying to figure out how the vDS is still able to tell which frame's destination MAC goes where. What will the vDS do with a regular Ethernet frame destined for ORG1 VM1's MAC in that case?

Reply
0 Kudos
mreferre
Champion

> Do ORG-1's vcdni-pool 1 and ORG-2's vcdni-pool 2 share the vDS code, share the vDS uplink, and share one VLAN used by the vDS to send 'lab-manager'-encapsulated traffic per org?

I am not sure why you insist on calling this a 'lab-manager' thing. Not that I don't like it ... as it means we have been doing cloud for 5 years. Let's just call it by its name (vCD-NI).

Also, using the word "share" three times in a single sentence may give the impression you are implying some sort of ... negative aspect? Yes, we do share a lot of things in virtualization... if you think about it, we share the same code base to run dozens of mixed VMs on the same host, sharing CPU, memory, storage and networks. We have been doing that for about 10 years. We have just made a step forward, and we can now share a VLAN.

The vDS knows which vCD-NI PG the original MAC (VM) is attached to and knows which other MACs (VMs) belong to the same PG (L2 segments). In a way, we effectively use the VLAN as a trunk to carry multiple virtual L2 networks. This is (logically) similar to using a single wire to carry multiple VLANs. It's just an additional level of virtualization of the network built on top of the first level of virtualization (VLANs).
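Purely as an illustration of that last paragraph, here is a toy sketch of the idea. This is my own simplification for this thread, not our implementation and not the real header layout; the field names are made up.

```python
# Toy model only: one transport VLAN carrying several isolated virtual L2
# networks, each identified by a network ID in the encapsulation header.
# Field names and the forwarding logic here are illustrative assumptions.

# Which vCD-NI network (port group) each inner MAC belongs to
mac_to_network = {
    "00:50:56:aa:00:01": "org1-routed",    # ORG1 VM1
    "00:50:56:aa:00:02": "org1-routed",    # ORG1 VM2 (on another host)
    "00:50:56:bb:00:01": "org2-internal",  # ORG2 VM1
}

def deliver(encapsulated_frame):
    """Decide whether an encapsulated frame reaching this host is handed
    to a local VM, in the same spirit as the vDS demultiplexing."""
    network = encapsulated_frame["network_id"]     # carried in the encapsulation header
    inner_dst = encapsulated_frame["inner_dst_mac"]
    # A frame is only delivered if the destination MAC is known *and* sits on
    # the same virtual L2 segment; ORG1 traffic never leaks into ORG2 even
    # though both travel over the same VLAN and the same uplinks.
    return mac_to_network.get(inner_dst) == network

print(deliver({"network_id": "org1-routed",
               "inner_dst_mac": "00:50:56:aa:00:02"}))   # True
print(deliver({"network_id": "org2-internal",
               "inner_dst_mac": "00:50:56:aa:00:02"}))   # False
```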

We understand that this is disruptive and we understand that it's a radically different way to do things. Arguably a better way. I was expecting lots of push back on this because, having been in this space for about 10 years, I went through all of these same discussions with the "server people".

"Running two OSes on the same piece of hardware? Sharing the same memory? The same NIC? You must be kidding! This will never happen here!". Yeah sure...

While I wasn't doing networking stuff when VLANs were introduced, I can imagine similar discussions.



Massimo Re Ferre'

VMware vCloud Architect

twitter.com/mreferre

www.it20.info

Reply
0 Kudos
manythanks
Contributor

I did not mean to do the 'cost savings versus security' thing; clearly it is a moot point.

I am looking for more information on the way VCDNI is done. It is not MAC-in-MAC, because if it were, only the ESX/vDS outer MAC would have been seen on the switch, and the internal frame would have used a session ID and the like to isolate each org. I am calling it 'lab-manager' because that is what the data in the frame is called, trying to be more accurate than just calling it MAC-in-MAC, which it is not. If you take a look at the frame sent on the wire, you will see the 'lab-manager' encapsulation with all the related data (at least that is what Wireshark calls it; it also identifies the MAC used as belonging to the Akimbi company), so it is not 802.1ah or the like. There are other ways to save VLANs and isolate tenants on an L2 wire; I am just looking for info on how and why VMware decided to do it this way.

Reply
0 Kudos
mreferre
Champion

Well there are many management tools out there that would show an ESX host and would call it a "Linux" machine. Should we call ESX "Linux" simply because a management tool hasn't been updated for 5 years? Never mind ... moot point.

The reason I have been picky in my responses is that you have taken an offensive approach to vCD-NI from the get-go on this forum. We obviously understand this is not for everyone (as we have tried to explain multiple times here), but your approach of "this thing is not secure no matter what, period" didn't help in having a constructive, balanced discussion on this.

Don't get me wrong, we understand there is a need for more information on this. There is some information available externally (admittedly not a lot) and a bit more internally. We have asked to make some of this information available externally too, for these reasons, but it's a journey. Your frustration is a fraction of our frustration.

We may share some of this (so far) VMware-confidential information with selected partners and customers interested in understanding more about this. If you want to know more, you can send me an e-mail / PM with your name and company to see what we can do (please do not send me an e-mail from manythanks@hotmail.com ... it will go straight into the recycle bin).

In addition to this, it is usually good practice and good professional behavior to clearly show your affiliation on a public forum (any public forum), especially if it's a technology vendor forum (like this one) and your first anonymous comment is "this technology is s__t, don't use it". I don't think I need to explain why.

Thanks.

Massimo.



Massimo Re Ferre'

VMware vCloud Architect

twitter.com/mreferre

www.it20.info

Reply
0 Kudos
WillL
Enthusiast

Okay, let's pull this thread back to topic.

I have the same question as XIII: the MTU increase is given as 1528 in some VMware documents. Can someone please clarify whether it's 1524 or 1528? Of course, we can always use 1528 to be on the safe side.

Also, do we need to increase the MTU on the underlying vDS and its uplinks? Will that be done manually or automatically by vCD?

Thanks,

William

Reply
0 Kudos
manythanks
Contributor

Back to topic, Will: put a sniffer on the VCNI network and you will see. Here is a pcap file to save you some time calculating the overhead added by the lab-manager header to the frame. It is done automatically on the vDS, but you need to change it on the external switch as well for the largest frames to work ;)

Can you mark the answer as helpful?

Reply
0 Kudos
depping
Leadership

1524 is the correct value.

Duncan

VMware Communities User Moderator | VCDX

-


Soon to be released: vSphere 4.1 HA and DRS deepdive (end of November through Amazon.com)

Blogging: http://www.yellow-bricks.com | Twitter: http://www.twitter.com/DuncanYB

Reply
0 Kudos