4 Replies Latest reply on Dec 12, 2013 2:58 AM by PeterLind

    VMs on DVS not able to communicate?

    PeterLind Novice

      Hello people. :-)


      We've had problems with VMs on the Nexus 1000v not being able to communicate at times; when we move them to the standard switch, the problem disappears. We've stumbled upon some lines in vmkernel.log on the affected ESXi host that might give us a hint, but we can't work out what they mean:


      2013-12-03T06:45:27.675Z cpu62:8254)<3>nx_nic[vmnic3]: Bad Rcv descriptor ring

      2013-12-03T06:45:27.763Z cpu62:8254)<3>nx_nic[vmnic3]: Bad Rcv descriptor ring

      2013-12-03T06:45:27.783Z cpu62:8254)<3>nx_nic[vmnic3]: Bad Rcv descriptor ring

      2013-12-03T06:45:27.951Z cpu63:8255)<3>nx_nic[vmnic3]: Bad Rcv descriptor ring

      2013-12-03T06:45:27.988Z cpu58:8250)<3>nx_nic[vmnic3]: Got a buffer index:11f for Jumbo desc type. Max is 80

      2013-12-03T06:45:27.991Z cpu58:8250)<3>nx_nic[vmnic3]: Got a buffer index:117 for Jumbo desc type. Max is 80


      2013-12-04T12:29:56.114Z cpu44:8287)sf_netif_port_unreserve: DEBUG port 33554454-2000016 clientName is SRV7WWWMID002 ethernet0, unlicensed/headless VEM

      2013-12-05T08:43:44.725Z cpu26:5272602)sf_netif_port_connect: Cannot set uplink tree capabilities, error returned was Not found


      2013-12-06T08:59:58.554Z cpu5:8285)sf_netif_port_unreserve: DEBUG port 33554449-2000011 clientName is AV01_Prod.eth0, unlicensed/headless VEM

      2013-12-06T09:02:10.196Z cpu56:8287)sf_netif_port_unreserve: DEBUG port 33554458-200001a clientName is SRV9DMZAPP002 ethernet0, unlicensed/headless VEM

      Can you help us interpret them?


      ESXi 5.0 Update 2.

      VEM/VSM version 4.2(1)SV1(5.1)
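When chasing intermittent errors like these, it can help to count how often each vmnic logs the receive-ring error before and after a change. A minimal sketch, run here against a sample file standing in for `/var/log/vmkernel.log` (on the host you would point the `grep` at the real log instead):

```shell
# Write a few sample vmkernel.log lines (taken from the post above) to a
# scratch file so the filtering itself is easy to see.
cat > /tmp/vmkernel.sample <<'EOF'
2013-12-03T06:45:27.675Z cpu62:8254)<3>nx_nic[vmnic3]: Bad Rcv descriptor ring
2013-12-03T06:45:27.763Z cpu62:8254)<3>nx_nic[vmnic3]: Bad Rcv descriptor ring
2013-12-03T06:45:27.951Z cpu63:8255)<3>nx_nic[vmnic3]: Bad Rcv descriptor ring
EOF

# Count "Bad Rcv descriptor ring" style errors per vmnic: extract the
# nx_nic[vmnicN] tag from each line, then tally occurrences.
grep -o 'nx_nic\[vmnic[0-9]*\]' /tmp/vmkernel.sample | sort | uniq -c
```

A count that is pinned to one vmnic (as in the log excerpt, where everything is on vmnic3) points toward that uplink's driver or hardware rather than the DVS configuration.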


        • 1. Re: VMs on DVS not able to communicate?
          AJ Master

          Pete, most likely you are getting this error because NUM_RCV_DESC_RINGS is not set properly.

          I recommend recreating the DVS with just one uplink and port group and seeing if the problem disappears.

          • 2. Re: VMs on DVS not able to communicate?
            grasshopper Virtuoso

            Thanks for sharing; I haven't seen this one before.


            -  Can you reproduce this on demand?

            -  Is the example VM using the e1000?  If so, can you try the vmxnet3?

            Note:  The reason I ask is purely speculation, but I wonder if you are on your way to experiencing this

            Are you using the new 'free' 1000v licensing model (i.e. no advanced features)?  If not, please compare the output of the following command on good vs. bad hosts:

            vemcmd show card (scroll down to the bottom and look at the license status; there's a lot of other useful info in that output as well)


            If in doubt, capture a 'vm-support' and 'vem-support all' during the time of the issue.  It may also help to tail or otherwise review the vmware.log of an affected VM.
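The unlicensed/headless VEM messages in the original log make the license check above worth doing first. `vemcmd show card` only runs on the ESXi host itself, so the sketch below parses a hypothetical sample of its output; the exact field names here are assumptions for illustration, not verbatim VEM output:

```shell
# Hypothetical samples of "vemcmd show card" output from a good and a bad
# host (field names are illustrative, not exact VEM output).
cat > /tmp/vemcard.good <<'EOF'
Card name: esx-good-01
Card license status: licensed
EOF
cat > /tmp/vemcard.bad <<'EOF'
Card name: esx-bad-01
Card license status: unlicensed
EOF

# Compare just the license lines side by side, good host vs. bad host.
grep -i 'license' /tmp/vemcard.good /tmp/vemcard.bad
```

If the bad host reports unlicensed while the good one is licensed, that lines up with the "unlicensed/headless VEM" entries in vmkernel.log and makes this a licensing issue rather than a network one.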

            1 person found this helpful
            • 3. Re: VMs on DVS not able to communicate?
              lwatta Expert

              I talked to some folks here in the office, and the nx_nic errors you are getting are most likely driver or hardware related. Can you tell us which driver version your NIC is running?


              If these are HP/NetXen drivers, there is a known issue with version 4.0.550-1.2.
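On an ESXi 5.x host, the usual ways to read the driver version are `ethtool -i vmnic3` or `esxcli network nic get -n vmnic3`. Those only run on the host, so as a sketch the snippet below parses a sample of `ethtool -i`-style output (the version shown is the known-bad one mentioned above; the sample text is otherwise illustrative):

```shell
# Sample "ethtool -i vmnic3" style output; on the host you would pipe the
# real command output into the awk filter instead of this file.
cat > /tmp/ethtool.sample <<'EOF'
driver: nx_nic
version: 4.0.550-1.2
firmware-version: 4.0.579
EOF

# Pull out just the driver version line ("version: X"), ignoring the
# separate "firmware-version" field.
awk -F': ' '$1 == "version" {print $2}' /tmp/ethtool.sample
```

Comparing that value against the known-bad 4.0.550-1.2 release tells you immediately whether a driver update is the first thing to try.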

              1 person found this helpful
              • 4. Re: VMs on DVS not able to communicate?
                PeterLind Novice

                Thanks for the input, guys.


                The log errors didn't relate to the problem we had. We had a Cisco technician look through our setup and logs, and he didn't find anything. He suspects that our VSM/VEM version is too old for the ESXi build we're running. Either that, or a hardware driver failure. We're updating everything, and then we'll see how it goes.


                He recommended getting these logs if we experience the error again:

                ESXi: vem-support all

                Nexus 1000v: show tech-support svs


                Take care