VMware Cloud Community
ozzorag
Contributor

vPC with Nexus 5K and without 1000V?

We are using ESXi 5.0 WITHOUT a Nexus 1000V, going to two Nexus 5Ks over 10 GigE. We are trying to set up a port channel / vPC across the two Nexus switches without LACP, using channel-group mode on. We have spoken to several Cisco engineers who say this should work, and they have verified our Nexus configs and UCS settings; everything looks correct. Pings go out but do not come back as long as the vPC is enabled. If we disable one NIC or break the vPC, pings and connectivity work as they should. Cisco is now blaming it on a VMware config. I am starting to question whether vPC is even supported without a 1000V. Anyone know? I can't seem to find anyone with this use case.

And before anyone links to the KB article below: we have already tried it. IP hash load balancing is set up on both the switch side and the VMware side, including the management network.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=102275...
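For the record, this is roughly how the host side gets set from the ESXi CLI (a sketch only; vSwitch0 and the "Management Network" port group name are assumptions, substitute your own):

# set IP hash load balancing on the vSwitch (assumed name: vSwitch0)
esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash
# and on the management port group (assumed name: Management Network)
esxcli network vswitch standard portgroup policy failover set -p "Management Network" -l iphash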

This is what the 5K config looks like; it is identical on both switches:

interface Ethernet1/9
  description esx1 Cisco c200
  switchport mode trunk
  spanning-tree port type edge trunk
  channel-group 5

interface port-channel5
  switchport mode trunk
  spanning-tree port type edge trunk
  speed 10000
  vpc 5
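
(The vpc domain and peer-link are configured separately and are not shown above; that part follows the usual NX-OS pattern, roughly the sketch below, where the keepalive addresses and the peer-link port-channel number are placeholders rather than our real values.)

vpc domain 5
  peer-keepalive destination 192.168.1.2 source 192.168.1.1

interface port-channel1
  switchport mode trunk
  vpc peer-link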

vPC status
----------------------------------------------------------------------------
id     Port        Status Consistency Reason                     Active vlans
------ ----------- ------ ----------- -------------------------- -----------
5      Po5         up     success     success                    1,11-13,15,19-20,25,30,69,101,201,216,302-303,333,40....

Thanks,

lwatta
Hot Shot

You do not need the Nexus 1000V to do vPC port channels down to the ESX host. I just did a quick test in my lab with ESXi 5 and it's working with no issues. Do you have a Cisco TAC case ID I can look up?

Also, are you using the DVS or the standard vSwitch? If you create a VM on the host, does it experience the same behavior as the mgmt interface?


Which 10G card are you using? Is it a CNA? If so, are you also passing FC traffic over it?

louis

ozzorag
Contributor

Thanks for the reply; the case # is 620247845. The NIC we are using is the Cisco P81E, with SFP+ copper cables. I had a QLogic 8242 overnighted, which I received today, and replaced the P81E in one server. Without changing anything else, the setup is working as it should. Not sure if the P81E is having problems with vPC or with ESXi 5, but as of now the Cisco NIC seems to be the culprit.

lwatta
Hot Shot

Ah. I tested with a QLogic as well. I'll find one of our Cisco cards and see if I get the same behavior.

I'll also take a look at the case.

louis

ozzorag
Contributor

Thanks. My case # with Cisco is 620247845 and my case # with VMware is 12132403401; I think I may have only given you the Cisco one earlier.

lwatta
Hot Shot

I was able to get it working in my lab. I had some minor issues that required me to remove and re-add one of the NICs manually with esxcfg-vswitch, but once I added it back, all is working. I can turn interfaces off and on, traffic fails over from one interface to the other, and the port channel seems to be working fine.
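For reference, the remove/re-add was just the standard unlink and relink (vmnic1 and vSwitch0 below are stand-ins for whatever your uplink and vSwitch are actually called):

esxcfg-vswitch -U vmnic1 vSwitch0   # unlink the uplink from the vSwitch
esxcfg-vswitch -L vmnic1 vSwitch0   # link it back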

One thing I did notice is that I'm running older enic code than you are. I'm also not at the latest and greatest firmware for the P81E. Do you recall what firmware your card was running? Also, did you patch your ESXi 5 build after install? I'm downloading all the patches now and will patch my ESXi 5 to try to get to the same version you are running.
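If you want to compare versions on your end, the driver and firmware show up on the host with something like this (vmnic0 standing in for whichever uplink the P81E presents):

esxcli network nic get -n vmnic0   # driver name, driver version, and firmware version
vmkload_mod -s enic                # details of the loaded enic module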

Are you running FC over this card as well? If so, is it SAN boot or just datastore storage?

Looking at the case, I think you should escalate. The issue is clearly something with either the card or the driver, which makes it our (Cisco) problem. I'm going to ping engineering for you as well.

I'll let you know what happens after I upgrade ESX.

louis

ModenaAU
Enthusiast

Was there a resolution to this? We are experiencing the same problem....

Sreejesh_D
Virtuoso

Can you try the load balancing policy "Route based on IP hash"? Also, ensure both uplinks to the virtual switches are active.
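Both can be checked from the host CLI, e.g. (assuming a standard vSwitch named vSwitch0):

esxcli network vswitch standard policy failover get -v vSwitch0
# confirm Load Balancing shows "iphash" and both vmnics appear under Active Adapters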

ModenaAU
Enthusiast

edit: ooo

ModenaAU
Enthusiast

We're already using Route based on IP hash! I have other installs the same as this that all work, except this is the first on ESXi 5.1. I am patching the host and will see what happens...

Deepak_Balaji
Contributor

Hi colleague,

I am on time off from 25th - 27th April 2013, with no access to my mail & phone.

For issues/queries, please reach my colleagues via email @ 'DL GLOBAL IT LIT SM 2nd level WIN_VM (external)' or by creating an IT Direct with category ID 'IMIS_SRVR_DEV_WINVMWARE' or 'SRIS_SRVR_DEV_WINVMWARE'.

Thank you

Regards,

K.Deepak Balaji
