VMware Cloud Community
Vromaidis
Contributor

VDS switch configuration help

Hi all,

I am building a new virtual infrastructure to host my VMs. Because I need many virtual port groups, I decided to go with a VDS for my production networks.

I have created a VDS with two physical NICs, added my port groups, and set Ephemeral binding mode on all port groups.

I have migrated three virtual servers there for testing, and my problem is that, from a networking perspective, I cannot have all three virtual machines running at the same time on the same ESXi host. The VMs run fine, but I can only ping two of them. If I migrate one of them to another ESXi host, everything is fine. It seems to be a config issue, so any comments would be great.

I also attach a picture which shows the basic config of my VDS.

18 Replies
rickardnobel
Champion

Vromaidis wrote:

my problem is that, from a networking perspective, I cannot have all three virtual machines running at the same time on the same ESXi host. The VMs run fine, but I can only ping two of them.

This sounds like a problem on your physical switches. Do you know whether any port-security settings are configured, perhaps a maximum allowed number of MAC addresses per port? Check this; it needs to be unlimited, or at least high enough for all your VMs.
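The symptom described above (exactly two of three VMs reachable on the same host) matches how a MAC limit would behave. A toy model of this idea, with invented names purely for illustration (real switches implement port security in hardware):

```python
# Hypothetical model of a physical switch port with port security:
# the port learns at most `max_macs` source MAC addresses, and frames
# from any further MAC are dropped.

class SwitchPort:
    def __init__(self, max_macs: int):
        self.max_macs = max_macs
        self.learned = set()

    def frame_from(self, mac: str) -> bool:
        """Return True if a frame from this MAC is forwarded, False if dropped."""
        if mac in self.learned:
            return True
        if len(self.learned) < self.max_macs:
            self.learned.add(mac)
            return True
        return False  # port-security violation: MAC not learned

# One uplink carrying traffic for three VMs, but limited to 2 MACs:
port = SwitchPort(max_macs=2)
vms = ["00:50:56:aa:00:01", "00:50:56:aa:00:02", "00:50:56:aa:00:03"]
print([port.frame_from(mac) for mac in vms])  # -> [True, True, False]
```

Whichever VM's MAC is learned last is the one that becomes unreachable, which is consistent with "only two of three pingable" on a single host.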

My VMware blog: www.rickardnobel.se
Vromaidis
Contributor

Thanks for the reply, mate.

The ESXi servers are actually Dell PE M915 blades in a Dell M1000e chassis with a Dell Ethernet Pass-Through module. There is no such limitation on the Cisco switches behind it, and I don't think the pass-through module has such an option.

Regards,

Vasilis

rickardnobel
Champion

Even if I do not think it should cause the problem: is there any reason why you have chosen the Ephemeral port binding policy?

Vromaidis
Contributor

That was actually my first troubleshooting step; the problem was the same with static binding...

rickardnobel
Champion

Vromaidis wrote:

That was actually my first troubleshooting step; the problem was the same with static binding...

I would recommend going back to static binding, since it is both the default and has some nice features, like keeping port statistics across VM vMotions.

Vromaidis
Contributor

Indeed, that is my choice too, and I will roll back once I solve the problem.

Regards,

Vasilis

rickardnobel
Champion

Vromaidis wrote:

Indeed, that is my choice too, and I will roll back once I solve the problem.

I see no reason not to set this back already, since it will make everything "more default".

Another configuration question, which NIC teaming policy are you using?

Vromaidis
Contributor

The NIC teaming policy is at the default value of "Route based on originating virtual port". I need to check whether to switch to "Route based on physical NIC load"; I need to study it first, because I do not know it well enough.
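For reference, the default policy maps each virtual switch port to a fixed uplink, regardless of traffic load. A simplified sketch of that idea (the modulo mapping here is an approximation of the real round-robin-style assignment, not VMware's exact algorithm):

```python
# Simplified model of "Route based on originating virtual port":
# each virtual port gets a fixed uplink; load is never considered.
# The modulo mapping below is an assumed approximation for illustration.

def uplink_for_port(port_id: int, num_uplinks: int) -> int:
    """Map a virtual switch port to one of the active uplinks."""
    return port_id % num_uplinks

# Three VMs on virtual ports 10, 11, 12 with two uplinks:
for port_id in (10, 11, 12):
    print(f"port {port_id} -> uplink {uplink_for_port(port_id, 2)}")
```

The key property is that the mapping is static per port: a single VM never uses more than one uplink at a time, which is also why this policy needs no special physical-switch configuration.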

firestartah
Virtuoso

Hi

I had the same kind of problem on a vSphere 4 VDS when I had selected ephemeral binding. As Rickard says, once I deleted the port group and recreated it with static binding it worked perfectly. "Route based on physical NIC load" takes the virtual machine network I/O load into account and tries to avoid congestion by dynamically reassigning virtual switch port to physical NIC mappings. In my opinion it works best if you have four or more uplinks assigned.
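The dynamic reassignment described above can be sketched roughly like this (a hedged approximation: the real implementation uses a 75% utilization threshold checked periodically; the data structures and rebalancing choice here are invented for illustration):

```python
# Rough sketch of "Route based on physical NIC load": when an uplink's
# utilization crosses a threshold, move one of its virtual ports to the
# least-loaded uplink. Threshold and selection logic are assumptions.

def rebalance(mapping: dict, load: dict, threshold: float = 0.75) -> dict:
    """mapping: vm port -> uplink index; load: uplink index -> utilization 0..1."""
    for port, uplink in list(mapping.items()):
        if load[uplink] > threshold:
            target = min(load, key=load.get)  # least-loaded uplink
            if target != uplink:
                mapping[port] = target  # move one port, then stop and re-check later
                break
    return mapping

mapping = {"vm1": 0, "vm2": 0, "vm3": 1}
load = {0: 0.9, 1: 0.2}
print(rebalance(mapping, load))  # vm1 moves off the congested uplink 0
```

Unlike the originating-port policy, the port-to-uplink mapping here changes over time, but it still never splits one VM's traffic across uplinks, so it also requires no Etherchannel on the physical side.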

Gregg

If you found this or other information useful, please consider awarding points for "Correct" or "Helpful". Gregg http://thesaffageek.co.uk
Vromaidis
Contributor

My VDS was originally created with static binding, so I think the issue is somewhere else. Thanks for the tip on load balancing.

My next step is to drop the VDS and create a standard virtual switch. I will test it, and if I have no problems I will know the error is in my VDS config only. If the problem appears there too, the error will be in the blade switching or the backend.

Regards

rickardnobel
Champion

Since there really is not much configuration inside the Distributed Switch around this, there should not be any real issues here.

One possible mismatch: IP hash policy on the vSwitch, with physical switches that do not have Etherchannel.

Another: Port ID or Load Based policy on the vSwitch, with physical switches that do have Etherchannel.

Do you know how the physical ports are set up?

Vromaidis
Contributor

Trunk with Etherchannel, mate.

rickardnobel
Champion

Vromaidis wrote:

Trunk with Etherchannel, mate.

Then that is likely part of the problem. When you use Etherchannel on the physical switches, you must use IP hash load balancing on the virtual switches.

Try changing this and see if your connectivity changes.
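To see why the two sides must match: IP hash can send one VM's traffic out of different uplinks depending on the destination, so the physical switch must treat the bundle as one logical link. A sketch of the uplink selection, assuming the commonly described algorithm (XOR of the last octet of source and destination IP, modulo the number of active uplinks; the exact hash may differ by version):

```python
import ipaddress

# Assumed sketch of IP-hash uplink selection: XOR the low byte of the
# source and destination IPs, then take it modulo the uplink count.

def ip_hash_uplink(src: str, dst: str, num_uplinks: int) -> int:
    s = int(ipaddress.ip_address(src)) & 0xFF
    d = int(ipaddress.ip_address(dst)) & 0xFF
    return (s ^ d) % num_uplinks

# One VM talking to two destinations can end up on both uplinks of the
# Etherchannel, which is why the physical side must bundle the ports:
print(ip_hash_uplink("10.0.0.5", "10.0.0.20", 2))  # -> 1
print(ip_hash_uplink("10.0.0.5", "10.0.0.21", 2))  # -> 0
```

With the other policies (Port ID, Load Based), a VM's MAC only ever appears on one uplink at a time, which is why they break when the physical ports are bundled and why IP hash breaks when they are not.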

Vromaidis
Contributor

I have switched it without any luck, but I remember that I had these kinds of problems when I built my previous VMware infrastructure. So I dropped the VDS and created a standard switch. With the default config no VM had connectivity; when I switched it to "Route based on IP hash", everything worked fine.

So at this point my situation is that I am doing fine with the standard switch, but I cannot make the VDS work. I have big farms and a lot of VLANs, and I was hoping to avoid building every host's networking from scratch and then adding every new port group n times :(

Thank you Rickard

rickardnobel
Champion

It really "should" work, and since you have the license for the Distributed Switch, I think you should use it.

Could you not remove the Etherchannel configuration from the physical switches and then use Port ID or Load Based on the vSwitches? That setup would be much simpler to set up and troubleshoot.

It is easy to get Etherchannels wrong, for example by configuring them in a non-static way (on Cisco they need to be in "mode on"). The non-Etherchannel policies still give a good distribution of traffic, with reduced complexity.

firestartah
Virtuoso

Configuring your VLANs x number of times is easily remedied by using host profiles.

Vromaidis
Contributor

Hi folks,

I had a WebEx with VMware support. While the support engineer was looking around, I saw that every port group I had configured with IP hash on the VDS had changed back to the default option (Route based on originating virtual port). I changed it back to IP hash and it worked. I continued with the other VDSes; some worked, some did not. For the VDSes that did not work, I deleted and recreated them, changing each port group to IP hash before creating the next one.

Unfortunately, VMware's engineer did not find anything, and what I described above is pure magic, so I do not know what the real issue was, and I feel nervous about it.

Vromaidis
Contributor

Hi folks,

I have removed the VDSes and recreated them, changing every port group to IP hash after creating it. It works fine now.

So the issue was with IP hash. The strange thing is that although I had changed the port groups to IP hash, the next day they had changed back to the default value (originating virtual port, I think). Now I am OK, I think.

Thanks all,

Vasilis
