VMware Cloud Community
tekhie
Contributor

CDP stops working when adding a VM to the vSwitch

Hi - I have an ESXi 4.1 host running with a number of vSwitches. All of the vSwitches have only a single VLAN presented to them and they all work fine. My network team set up two Cisco switch ports with two VLANs presented so that I could make use of VLAN tagging. I created a new vSwitch, and as long as no VMs are assigned to it the CDP information is visible. As soon as I add a VM to the vSwitch I get the following message when trying to view the CDP info: "Cisco Discovery Protocol is not available on this physical adapter". If I take the VM off, CDP starts working again; if I put a VM back on, it stops working! Has anyone come across this before? Or better still, does anyone know how to fix it? ;)
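For context, the new vSwitch is set up for virtual switch tagging, roughly along these lines - the vSwitch name, vmnic and VLAN IDs below are just examples, not my exact config:

    esxcfg-vswitch -a vSwitch2                     # create the new vSwitch
    esxcfg-vswitch -L vmnic2 vSwitch2              # link one of the trunked uplinks to it
    esxcfg-vswitch -A "VLAN10-PG" vSwitch2         # add a port group for the VMs
    esxcfg-vswitch -v 10 -p "VLAN10-PG" vSwitch2   # tag the port group with VLAN 10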

7 Replies
mblake4u
Contributor

Hi,

Did you manage to solve your problem? I'm having a similar problem with CDP. I've got two ESXi servers connected to the same switches, which have CDP enabled. I can see the switch information on one of the ESXi servers but not the other. Each of the servers has two NICs assigned to the vSwitch, and the only difference between the two is that the one that doesn't work has EtherChannel configured on the physical switch.

I'm wondering if configuring EtherChannel disables CDP...
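If it helps, this is roughly what I'm planning to check on the physical switch side - the interface name is just an example from my setup:

    show cdp interface GigabitEthernet0/1   (is CDP actually enabled on the member ports?)
    show cdp                                (global CDP status and timers)
    show etherchannel summary               (confirm which ports are bundled in the channel)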

Regards, Michael

tekhie
Contributor

Hi Michael - I'll find out from our network team. I believe it was some kind of security setting on the switch relating to the native VLAN. I'll get the info from them and post it here for you to check against your environment.

mblake4u
Contributor

Hi, many thanks for that - it'll be good to get this fixed.

avallk
Contributor

I also have one NIC that has just started doing this. I'll be checking with our network guys as well.

tekhie
Contributor

Here's the response from my network guys - maybe you can relate this to your environment...

CDP info works fine in Site1 but not in Site2 on trunk interfaces because of the global command "vlan dot1q tag native" on the server farm switches.

This command is used to tag the native VLAN egress traffic and drop all untagged ingress traffic.

For a reason I don't know, the CDP info packets exchanged between the physical switch and the virtual switch are not tagged, and because of the above command they are all dropped.
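To make that concrete, the relevant bit of switch config looks something like this - the interface and VLAN numbers are placeholders, not our actual config:

    vlan dot1q tag native
    !
    interface GigabitEthernet1/0/10
     description ESXi trunk uplink
     switchport trunk encapsulation dot1q
     switchport mode trunk
     switchport trunk allowed vlan 10,20
     ! native VLAN is left at the default (1); with "vlan dot1q tag native" set
     ! globally, untagged frames arriving on this trunk are dropped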

Possible options to fix the issue:

1) Create a new VLAN and configure it as native on the ESX trunk interfaces (sketched just after this list)

2) Remove the global command from all server farm switches - need to identify whether this is a standard command or not
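For option 1, as I understand it, the change on each ESX trunk port would be something like this (VLAN 999 is just a placeholder for whatever unused VLAN gets created):

    vlan 999
     name ESX-NATIVE
    !
    interface GigabitEthernet1/0/10
     switchport trunk native vlan 999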

So it seems to be an issue with a global command. Hope it helps ;)

mblake4u
Contributor

Thanks for your response (sorry about the delay in replying). I've tried to understand the native VLAN issue and have had a look over our switch configurations.

There seem to be quite a lot of issues caused by the ESXi native VLAN setting. I'm going to set up a test system soon to try to replicate this problem and see whether this is the cause.

Many thanks for your help, Michael
