I have the same problem, but didn't get it to work with the VLAN 4095 workaround.
If I use VLAN 0, QinQ packets are forwarded to the Linux VM, but all outgoing packets are dropped on the vSwitch as described here.
With VLAN 4095 a single-tagged trunk goes through, but a QinQ packet is dropped already on ingress. Any ideas what to do here?
We found one solution: we build a layer 2 tunnel from an outside Linux server to the Linux VM and tunnel the QinQ packets through it. This works, but we would prefer a solution without an additional tunnel.
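The post doesn't say which tunnel type was used; one common way to carry full Ethernet frames (tags included) between two Linux boxes is a GRETAP tunnel bridged to the QinQ-facing interface. A minimal sketch, with placeholder addresses and interface names not taken from the post:

```shell
# On each end (outside server and the Linux VM); 192.0.2.x and eth1
# are placeholders. GRETAP encapsulates whole L2 frames in GRE/IP,
# so stacked 802.1Q tags pass through untouched by the vSwitch.
ip link add gretap1 type gretap local 192.0.2.1 remote 192.0.2.2
ip link set gretap1 mtu 1462 up          # leave room for GRE + IP overhead

# Bridge the tunnel to the interface carrying the QinQ frames:
ip link add br0 type bridge
ip link set eth1 master br0
ip link set gretap1 master br0
ip link set br0 up
```

The MTU value is approximate (outer IP + GRE + inner Ethernet headers add roughly 38 bytes); adjust for your path.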
I also hope that the Nexus 1000V will support QinQ, but I think the chances are not high, as the Nexus 5000 didn't support it. On other Cisco switches like the Catalyst 3750 or 6500 you can configure dot1q tunneling.
Maybe I don't understand your workaround correctly. As I understand it, the plan is to decapsulate the QinQ packets on Linux VMs running on the ESX host. But how do you get the QinQ packets to them? Don't they also have to pass through a vSwitch, where they are dropped even with VLAN 4095?
We have observed that when the vswitch is set to 4095, traffic is moved in and out of the VM and/or the physical NIC without being touched, whether the frame has a tag or not. It sounds as though this is not what you have experienced, which surprises me.
In other words, a vswitch with a tag of 4095 behaves like a true switch in that it simply forwards frames regardless of what they contain (untagged, single-tagged, multiply tagged).
It's only when we attempt to tag or untag using a vswitch (i.e. tag != 4095) that we experience the inability to stack tags.
I don't believe QinQ will be supported at initial release. Subsequent releases are another story.
Just to be clear here: do the virtual switches support QinQ packets or do they drop them? In my experience the QinQ packets are dropped, but it seems that some people have seen the packets come through. Do different builds of ESXi have different behavior with QinQ packets? Is the official VMware stance that QinQ packets are dropped?
An update on this issue.
We have fully characterized the behavior and have a workaround that operates correctly.
What the virtual switch is doing is dropping any frame that has two (or more) tags where the outermost and next innermost tags both have ethertype 0x8100. This is likely due to the anti-DoS machinery baked into the virtual switches. We would love to be able to turn off the anti-DoS behavior, but we don't know how.
Here's the workaround:
For untagged traffic -- do nothing;
For single-tagged traffic, set the ethertype on the end point to something other than 0x8100 (we use 0x88A8);
For double-tagged traffic, set the end point outer tag to something other than 0x8100 (we use 0x88A8), and set the inner tag to 0x8100.
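On a Linux end point, the outer/inner ethertype split described above can be configured with the kernel's 802.1ad support. A sketch with placeholder interface names and VLAN IDs (not taken from the post):

```shell
# Outer (S-)tag with ethertype 0x88A8, inner (C-)tag with 0x8100.
# eth0, VID 200, and VID 300 are placeholders.
ip link add link eth0 name eth0.200 type vlan protocol 802.1ad id 200
ip link add link eth0.200 name eth0.200.300 type vlan protocol 802.1Q id 300
ip link set eth0.200 up
ip link set eth0.200.300 up
```

Frames sent out `eth0.200.300` then leave `eth0` as 0x88A8 / 0x8100, which is exactly the stacking the workaround needs to avoid the back-to-back 0x8100 tags the vSwitch drops.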
Using this setup, we are able to move double-tagged traffic between VMs, and between VMs and external devices that are connected to the host via an external physical switch that tags/untags using ethertype 0x8100. Note that the traffic between the host and the external physical switch actually has three tags -- 0x8100 / 0x88A8 / 0x8100, where the outermost 0x8100 is used to multiplex across the single external link.
This setup works great, with two caveats:
1. You must be comfortable with and able to independently set the inner and outer ethertypes;
2. Since we're triple-stacking on egress of the host, the MTU on the VMs must be reduced to 1500 - 4 = 1496 bytes.
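To make the tag stacking and the MTU arithmetic above concrete, here is a small sketch that builds the triple-tagged header byte layout described in this post. The MAC addresses and VLAN IDs are made-up placeholders, not values from the post:

```python
import struct

def qinq_header(dst, src, outer_vid, svlan_vid, cvlan_vid, payload_ethertype=0x0800):
    """Build the 26-byte header of a triple-tagged frame:
    outermost 0x8100 (multiplexing on the host <-> physical switch link),
    then the end point's 0x88A8 outer tag, then the 0x8100 inner tag."""
    hdr = dst + src
    for tpid, vid in ((0x8100, outer_vid), (0x88A8, svlan_vid), (0x8100, cvlan_vid)):
        hdr += struct.pack("!HH", tpid, vid & 0x0FFF)  # TPID + TCI (PCP/DEI = 0)
    hdr += struct.pack("!H", payload_ethertype)
    return hdr

# Placeholder addresses and VIDs:
dst = bytes.fromhex("ffffffffffff")
src = bytes.fromhex("020000000001")
hdr = qinq_header(dst, src, outer_vid=100, svlan_vid=200, cvlan_vid=300)

# 12 bytes of MACs + 3 x 4-byte tags + 2-byte ethertype = 26 bytes.
assert len(hdr) == 26

# The physical link absorbs one tag beyond a normal single-tagged frame,
# so the VM's MTU must shrink by one 4-byte tag:
vm_mtu = 1500 - 4
print(vm_mtu)  # 1496
```

Each tag costs 4 bytes (2-byte TPID + 2-byte TCI), which is where the 1496-byte VM MTU in caveat 2 comes from.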
See attached image for the topology.
vmware-topology.jpg 54.5 K
I am sure that this is a dead post, but I had a question on it.
Have you had others outside of your company look at your networking requirements?
I am sure there is more to it than what is in your attached diagram, but the diagram doesn't suggest this level of complexity, and sometimes having another pair of eyes helps simplify things -- even more so when the pair of eyes belongs to a more simple person.