5 Replies · Latest reply on Jun 17, 2019 8:14 AM by BaumMeister

    Stopping I/O on vmnic0

    M.Meuwese Novice

      Hello,

       

      Since a couple of weeks we have been experiencing intermittent connectivity on a daily basis. It does not always happen at the same time, and not every day, but in the ESXi logs we see the following:

       

      2019-03-13T20:06:20.426Z cpu2:2097220)igbn: indrv_UplinkReset:1447: indrv_UplinkReset : vmnic0 device reset started

      2019-03-13T20:06:20.426Z cpu2:2097220)igbn: indrv_UplinkQuiesceIo:1411: Stopping I/O on vmnic0

      2019-03-13T20:06:20.462Z cpu2:2097220)igbn: indrv_DeviceReset:2306: Device Resetting vmnic0

      2019-03-13T20:06:20.462Z cpu2:2097220)igbn: indrv_ChangeState:326: vmnic0: change PF state from 2 to 8

      2019-03-13T20:06:20.462Z cpu2:2097220)igbn: indrv_Stop:1890: stopping vmnic0

      2019-03-13T20:06:20.462Z cpu2:2097220)igbn: indrv_ChangeState:326: vmnic0: change PF state from 8 to 4

      2019-03-13T20:06:20.492Z cpu2:2097220)igbn: indrv_ChangeState:326: vmnic0: change PF state from 4 to 1

      2019-03-13T20:06:20.492Z cpu2:2097220)igbn: indrv_ChangeState:326: vmnic0: change PF state from 1 to 20

      2019-03-13T20:06:20.493Z cpu2:2097220)igbn: indrv_ChangeState:326: vmnic0: change PF state from 20 to 2

      2019-03-13T20:06:20.493Z cpu2:2097220)igbn: indrv_UplinkStartIo:1393: Starting I/O on vmnic0

      2019-03-13T20:06:20.507Z cpu2:2097220)igbn: indrv_UplinkReset:1464: indrv_UplinkReset : vmnic0 device reset completed

      2019-03-13T20:06:27.426Z cpu2:2097220)NetqueueBal: 5032: vmnic0: device Up notification, reset logical space needed

      2019-03-13T20:06:27.427Z cpu3:2212666)NetSched: 654: vmnic0-0-tx: worldID = 2212666 exits

      2019-03-13T20:06:27.428Z cpu3:2224632)NetSched: 654: vmnic0-0-tx: worldID = 2224632 exits

      2019-03-13T20:06:27.428Z cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 1

      2019-03-13T20:06:27.428Z cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 4

      ...
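
      For reference, the driver and firmware behind these igbn messages can be checked directly on the host. A generic sketch (standard ESXi 6.7 commands; verify the output on your own host):

      # List all physical NICs and the driver each one is claimed by (igbn here)
      esxcli network nic list
      # Driver version, firmware version and link details for vmnic0
      esxcli network nic get -n vmnic0
      # Search the vmkernel log for the reset/watchdog events quoted above
      grep -iE "watchdog|UplinkReset" /var/log/vmkernel.log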

       

      In Syslog we see some more related events, as the access switch in the datacenter reports an up/down event:

       

              

      Date/Time | Facility | Level | Hostname | Message
        13/03/2019 21:06:53 | User | Info | vCenter1 | 2019-03-13T20:06:56.340772+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:56:339Z 'Activation.trace' 140109948208896 INFO [activationValidator, 1261] Trace objects loaded.
        13/03/2019 21:06:53 | User | Info | vCenter1 | 2019-03-13T20:06:56.340557+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:56:337Z 'InternalScheduledTasksMgr' 140109948208896 INFO [internalScheduledTasksMgr, 853] Temp directory disk free space is:8407379968
        13/03/2019 21:06:53 | User | Info | vCenter1 | 2019-03-13T20:06:56.340338+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:56:337Z 'InternalScheduledTasksMgr' 140109948208896 INFO [internalScheduledTasksMgr, 804] Patch store disk free space is:104520380416
        13/03/2019 21:06:53 | User | Info | vCenter1 | 2019-03-13T20:06:56.340114+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:56:337Z 'InternalScheduledTasksMgr' 140109948208896 INFO [internalScheduledTasksMgr, 303] Internal Scheduled Tasks Manager Timercallback end of this timer slice.....Rescheduling after 300000000 microseconds
        13/03/2019 21:06:53 | User | Info | vCenter1 | 2019-03-13T20:06:56.339871+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:56:337Z 'InternalScheduledTasksMgr' 140109948208896 INFO [internalScheduledTasksMgr, 724] InvokeCallbacks. Total number of callbacks: 7
        13/03/2019 21:06:53 | User | Info | vCenter1 | 2019-03-13T20:06:56.339443+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:56:337Z 'InternalScheduledTasksMgr' 140109948208896 INFO [internalScheduledTasksMgr, 194] Internal Scheduled Tasks Manager Timercallback...
        13/03/2019 21:06:24 | Local7 | Notice | Network Switch | 819: Mar 13 2019 20:06:27.732 UTC: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet1/0/2, changed state to up
        13/03/2019 21:06:22 | Local7 | Error | Network Switch | 818: Mar 13 2019 20:06:25.686 UTC: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/2, changed state to up
        13/03/2019 21:06:20 | Local7 | Error | Network Switch | 817: Mar 13 2019 20:06:22.875 UTC: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/2, changed state to down
        13/03/2019 21:06:19 | Local7 | Notice | Network Switch | 816: Mar 13 2019 20:06:21.860 UTC: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet1/0/2, changed state to down
        13/03/2019 21:06:17 | User | Info | vCenter1 | 2019-03-13T20:06:21.757084+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:21:756Z 'VcIntegrity' 140109287241472 INFO [vcIntegrity, 1536] Cannot get IP address for host name: tpvc-pvvm-003
        13/03/2019 21:06:17 | User | Info | vCenter1 | 2019-03-13T20:06:21.752196+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:21:742Z 'VcIntegrity' 140109287241472 INFO [vcIntegrity, 1519] Getting IP Address from host name: tpvc-pvvm-003
        13/03/2019 21:06:17 | User | Info | vCenter1 | 2019-03-13T20:06:21.742931+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:21:742Z 'Activation' 140109287241472 INFO [activationValidator, 368] Leave Validate. Succeeded for integrity.VcIntegrity.retrieveHostIPAddresses on target: Integrity.VcIntegrity
        13/03/2019 21:06:07 | User | Info | vCenter1 | 2019-03-13T20:06:11.089546+00:00 tpvc-pvvm-003 vpxd 4459 - - Event [614748] [1-1] [2019-03-13T20:06:11.089256Z] [vim.event.UserLoginSessionEvent] [info] [TRUEPARTNER\sa-veeam] [] [614748] [User TRUEPARTNER\sa-veeam@x.x.3.101 logged in as VMware VI Client]
        13/03/2019 21:05:58 | User | Debug | vCenter1 | 2019-03-13T20:06:02.401034+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:02:400Z 'JobDispatcher' 140109819787008 DEBUG [JobDispatcher, 415] The number of tasks: 0
        13/03/2019 21:05:58 | Cron | Info | vCenter1 | 2019-03-13T20:06:01.830614+00:00 tpvc-pvvm-003 CROND 55022 - - (root) CMD (. /etc/profile.d/VMware-visl-integration.sh; /usr/lib/applmgmt/backup_restore/scripts/SchedulerCron.py>>/var/log/vmware/applmgmt/backupSchedulerCron.log 2>&1)
        13/03/2019 21:05:57 | Cron | Info | vCenter1 | 2019-03-13T20:06:01.830222+00:00 tpvc-pvvm-003 CROND 55021 - - (root) CMD ( test -x /usr/sbin/vpxd_periodic && /usr/sbin/vpxd_periodic >/dev/null 2>&1)
        13/03/2019 21:05:57 | User | Info | vCenter1 | 2019-03-13T20:06:01.756909+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:01:756Z 'VcIntegrity' 140109286442752 INFO [vcIntegrity, 1536] Cannot get IP address for host name: tpvc-pvvm-003
        13/03/2019 21:05:57 | User | Info | vCenter1 | 2019-03-13T20:06:01.752179+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:01:743Z 'VcIntegrity' 140109286442752 INFO [vcIntegrity, 1519] Getting IP Address from host name: tpvc-pvvm-003
        13/03/2019 21:05:57 | User | Info | vCenter1 | 2019-03-13T20:06:01.744337+00:00 tpvc-pvvm-003 updatemgr - - - 2019-03-13T20:06:01:743Z 'Activation' 140109286442752 INFO [activationValidator, 368] Leave Validate. Succeeded for integrity.VcIntegrity.retrieveHostIPAddresses on target: Integrity.VcIntegrity
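
      Since the access switch logs a physical link down/up on GigabitEthernet1/0/2 at the same moment, comparing error counters on both ends of the cable may help. A rough sketch (the interface name is taken from the switch log above; adjust as needed):

      # On the ESXi host: per-NIC packet and error counters for vmnic0
      esxcli network nic stats get -n vmnic0
      # On the Cisco 3750 stack: input/CRC errors and flap history for the port
      show interfaces GigabitEthernet1/0/2
      show logging | include GigabitEthernet1/0/2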

       

      Hardware profile:

       

      Cisco 3750 switch stack

       

      Server hardware:

       

      • Hypervisor: VMware ESXi, 6.7.0, 11675023
      • Model: PowerEdge R720
      • Processor Type: Intel(R) Xeon(R) CPU E5-2609 v2 @ 2.50GHz
      • Logical Processors: 8
      • NICs: 4
      • Virtual Machines: 22
      • State: Connected
      • Uptime: 11 days

       

      At this moment it is also unclear why vmnic1 is not taking over the traffic, as it is configured to become active when the primary link (vmnic0) fails.
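
      For what it's worth, the active/standby order and the failure-detection settings can be dumped per vSwitch and per portgroup. A sketch (vSwitch0 and the portgroup name are assumptions; substitute your own):

      # Teaming/failover policy of the standard vSwitch: active and standby uplinks, failback, detection method
      esxcli network vswitch standard policy failover get -v vSwitch0
      # Per-portgroup override, e.g. for the Management Network portgroup
      esxcli network vswitch standard portgroup policy failover get -p "Management Network"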

       

      Any suggestions are welcome; if more information is required, let me know.

       

      Regards,

       

      Martin Meuwese

        • 1. Re: Stopping I/O on vmnic0
          Diego Oliveira Master
          vExpert · User Moderators

          Hi

          Your server is not supported for vSphere 6.7.

          VMware Compatibility Guide - System Search
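
          The PCI IDs needed for an I/O device lookup in the compatibility guide can be pulled from the host itself; a minimal sketch using a standard ESXi shell command:

          # Vendor/device IDs (VID:DID SVID:SSID) of each vmnic, for the VMware I/O compatibility lookup
          vmkchdev -l | grep vmnic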

          • 2. Re: Stopping I/O on vmnic0
            M.Meuwese Novice

            Some more details of all logs combined:

             

                

            20:05:10.190error hostd[2098841] [Originator@6876 sub=Hostsvc.NsxSpecTracker] Object not found/hostspec disabled
            20:05:19Z sdInjector[2098648]: Injector: Sleeping!
            20:05:20.016warning hostd[2098657] [Originator@6876 sub=Statssvc] Calculated write I/O size 844686 for scsi0:2 is out of range -- 844686,prevBytes = 81267164160 curBytes =81297572864 prevCommands = 479208curCommands = 479244
            20:05:20.016warning hostd[2098657] [Originator@6876 sub=Statssvc] Calculated write I/O size 889928 for scsi0:5 is out of range -- 889928,prevBytes = 900956103680 curBytes =901019288576 prevCommands = 1216542curCommands = 1216613
            20:05:40.018warning hostd[2098657] [Originator@6876 sub=Statssvc] Calculated write I/O size 718950 for scsi0:2 is out of range -- 718950,prevBytes = 81297572864 curBytes =81355088896 prevCommands = 479244curCommands = 479324
            20:05:40.018warning hostd[2098657] [Originator@6876 sub=Statssvc] Calculated write I/O size 967077 for scsi0:5 is out of range -- 967077,prevBytes = 901019288576 curBytes =901150811136 prevCommands = 1216613curCommands = 1216749
            20:05:40.191error hostd[2099336] [Originator@6876 sub=Hostsvc.NsxSpecTracker] Object not found/hostspec disabled
            20:05:51Z sdInjector[2098648]: Injector: Sleeping!
            20:05:56.278info hostd[2099329] [Originator@6876 sub=Libs opID=ef8bddf0] NetstackInstanceImpl: congestion control algorithm: newreno
            20:06:00.017warning hostd[2099341] [Originator@6876 sub=Statssvc] Calculated write I/O size 831981 for scsi0:5 is out of range -- 831981,prevBytes = 901150811136 curBytes =901420372992 prevCommands = 1216749curCommands = 1217073
            20:06:10.192error hostd[2098841] [Originator@6876 sub=Hostsvc.NsxSpecTracker] Object not found/hostspec disabled
            20:06:20.017warning hostd[2098843] [Originator@6876 sub=Statssvc] Calculated write I/O size 915240 for scsi0:5 is out of range -- 915240,prevBytes = 901420372992 curBytes =901524710400 prevCommands = 1217073curCommands = 1217187
            20:06:20.426cpu2:2097220)igbn: indrv_UplinkReset:1447: indrv_UplinkReset : vmnic0 device reset started
            20:06:20.426cpu2:2097220)igbn: indrv_UplinkQuiesceIo:1411: Stopping I/O on vmnic0
            20:06:20.429[netCorrelator] 899430151427us: [vob.net.uplink.watchdog.timeout] Watchdog timeout occurred for uplink vmnic0
            20:06:20.462cpu2:2097220)igbn: indrv_DeviceReset:2306: Device Resetting vmnic0
            20:06:20.462cpu2:2097220)igbn: indrv_ChangeState:326: vmnic0: change PF state from 2 to 8
            20:06:20.462cpu2:2097220)igbn: indrv_Stop:1890: stopping vmnic0
            20:06:20.462cpu2:2097220)igbn: indrv_ChangeState:326: vmnic0: change PF state from 8 to 4
            20:06:20.492cpu2:2097220)igbn: indrv_ChangeState:326: vmnic0: change PF state from 4 to 1
            20:06:20.492cpu2:2097220)igbn: indrv_ChangeState:326: vmnic0: change PF state from 1 to 20
            20:06:20.492cpu2:2097220)igbn: indrv_EnableISR:1060: registering RX IRQ[0]=20
            20:06:20.492cpu2:2097220)igbn: indrv_EnableISR:1080: registering TX IRQ[1]=21
            20:06:20.492cpu2:2097220)igbn: indrv_EnableISR:1100: registering misc IRQ[2]=22
            20:06:20.493cpu2:2097220)igbn: igbn_CheckLink:1272: Link got up for device 0x4307747850c0
            20:06:20.493cpu2:2097220)igbn: indrv_ChangeState:326: vmnic0: change PF state from 20 to 2
            20:06:20.493cpu2:2097220)igbn: indrv_UplinkStartIo:1393: Starting I/O on vmnic0
            20:06:20.507cpu2:2097220)igbn: indrv_UplinkReset:1464: indrv_UplinkReset : vmnic0 device reset completed
            20:06:20.507cpu2:2097220)igbn: indrv_EventISR:922: Event ISR called on pf 0x4307747850c0
            20:06:20.507cpu0:2097599)igbn: indrv_Worker:2032: Checking async events for device 0x4307747850c0
            20:06:20.507cpu0:2097599)igbn: igbn_CheckLink:1272: Link went down for device 0x4307747850c0
            20:06:20.508[netCorrelator] 899430232265us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: UAT Servers. 0 uplinks up. Failed criteria: 128
            20:06:20.508[netCorrelator] 899430232269us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Production servers. 0 uplinks up. Failed criteria:128
            20:06:20.508[netCorrelator] 899430232271us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: VM Network. 0 uplinks up. Failed criteria: 128
            20:06:20.508[netCorrelator] 899430232272us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Management Network. 0 uplinks up. Failed criteria:128
            20:06:20.508[netCorrelator] 899430232281us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: UAT Servers. 1 uplinks up. Failed criteria: 128
            20:06:20.508[netCorrelator] 899430232282us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Production servers. 1 uplinks up. Failed criteria:128
            20:06:20.508[netCorrelator] 899430232283us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: VM Network. 1 uplinks up. Failed criteria: 128
            20:06:20.508[netCorrelator] 899430232284us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Management Network. 1 uplinks up. Failed criteria:128
            20:06:20.508[netCorrelator] 899430232291us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: UAT Servers. 1 uplinks up. Failed criteria: 128
            20:06:20.509[netCorrelator] 899430232292us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Production servers. 1 uplinks up. Failed criteria:128
            20:06:20.509[netCorrelator] 899430232293us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: VM Network. 1 uplinks up. Failed criteria: 128
            20:06:20.509[netCorrelator] 899430232294us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Management Network. 1 uplinks up. Failed criteria:128
            20:06:20.509[netCorrelator] 899430232301us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: UAT Servers. 1 uplinks up. Failed criteria: 128
            20:06:20.509[netCorrelator] 899430232302us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Production servers. 1 uplinks up. Failed criteria:128
            20:06:20.509[netCorrelator] 899430232303us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: VM Network. 1 uplinks up. Failed criteria: 128
            20:06:20.509[netCorrelator] 899430232304us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Management Network. 1 uplinks up. Failed criteria:128
            20:06:20.509[netCorrelator] 899430232311us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: UAT Servers. 1 uplinks up. Failed criteria: 128
            20:06:20.509[netCorrelator] 899430232312us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Production servers. 1 uplinks up. Failed criteria:128
            20:06:20.509[netCorrelator] 899430232313us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: VM Network. 1 uplinks up. Failed criteria: 128
            20:06:20.509[netCorrelator] 899430232314us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Management Network. 1 uplinks up. Failed criteria:128
            20:06:20.509[netCorrelator] 899430232320us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: UAT Servers. 1 uplinks up. Failed criteria: 128
            20:06:20.509[netCorrelator] 899430232321us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Production servers. 1 uplinks up. Failed criteria:128
            20:06:20.509[netCorrelator] 899430232322us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: VM Network. 1 uplinks up. Failed criteria: 128
            20:06:20.509[netCorrelator] 899430232323us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Management Network. 1 uplinks up. Failed criteria:128
            20:06:20.509[netCorrelator] 899430232329us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: UAT Servers. 1 uplinks up. Failed criteria: 128
            20:06:20.509[netCorrelator] 899430232330us: [vob.net.pg.uplink.transition.down] Uplink: vmnic0 is down. Affected portgroup: Production servers. 1 uplinks up. Failed criteria:128
            20:06:20.586warning hostd[2099334] [Originator@6876 sub=Hostsvc.Tpm20Provider opID=ef8bde0c] Unable to retrieve TPM/TXT status. TPM functionality will be unavailable. Failurereason: Unable to get node: Sysinfo error: Not foundSee VMkernel lo.
            20:06:20.605info hostd[2099334] [Originator@6876 sub=Libs opID=ef8bde0c] IOFilterInfoImpl: Inbox-IOFilter Id: VMW_spm_1.0.0, localId: spm
            20:06:20.605info hostd[2099334] [Originator@6876 sub=Libs opID=ef8bde0c] IOFilterInfoImpl: Inbox-IOFilter Id: VMW_vmwarevmcrypt_1.0.0, localId: vmwarevmcrypt
            20:06:20.609info hostd[2099334] [Originator@6876 sub=Libs opID=ef8bde0c] PluginLdr_Load: Loaded plugin 'libvmiof-disk-spm.so' from '/usr/lib64/vmware/plugin/libvmiof-disk-spm.so'
            20:06:20.611info hostd[2099334] [Originator@6876 sub=Libs opID=ef8bde0c] PluginLdr_Load: Loaded plugin 'libvmiof-disk-vmwarevmcrypt.so' from '/usr/lib64/vmware/plugin/libvmiof-disk-vmwarevmcrypt.so'
            20:06:23Z sdInjector[2098648]: Injector: Sleeping!
            20:06:24.283cpu0:2097599)igbn: indrv_Worker:2032: Checking async events for device 0x4307747850c0
            20:06:24.283cpu2:2100334)igbn: indrv_EventISR:922: Event ISR called on pf 0x4307747850c0
            20:06:24.283cpu0:2097599)igbn: igbn_CheckLink:1272: Link got up for device 0x4307747850c0
            20:06:24.283[netCorrelator] 899434008232us: [vob.net.vmnic.linkstate.up] vmnic vmnic0 linkstate up
            20:06:24.362warning hostd[2099342] [Originator@6876 sub=Hostsvc.Tpm20Provider opID=ef8bde18] Unable to retrieve TPM/TXT status. TPM functionality will be unavailable. Failurereason: Unable to get node: Sysinfo error: Not foundSee VMkernel lo.
            20:06:24.384[netCorrelator] 899434108455us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: UAT Servers. 0 uplinks up
            20:06:24.384[netCorrelator] 899434108459us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Production servers. 0 uplinks up
            20:06:24.384[netCorrelator] 899434108460us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: VM Network. 0 uplinks up
            20:06:24.384[netCorrelator] 899434108461us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Management Network. 0 uplinks up
            20:06:24.384[netCorrelator] 899434108469us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: UAT Servers. 2 uplinks up
            20:06:24.384[netCorrelator] 899434108470us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Production servers. 2 uplinks up
            20:06:24.384[netCorrelator] 899434108471us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: VM Network. 2 uplinks up
            20:06:24.384[netCorrelator] 899434108472us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Management Network. 2 uplinks up
            20:06:24.384[netCorrelator] 899434108478us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: UAT Servers. 2 uplinks up
            20:06:24.384[netCorrelator] 899434108479us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Production servers. 2 uplinks up
            20:06:24.384[netCorrelator] 899434108480us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: VM Network. 2 uplinks up
            20:06:24.384[netCorrelator] 899434108481us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Management Network. 2 uplinks up
            20:06:24.384[netCorrelator] 899434108486us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: UAT Servers. 2 uplinks up
            20:06:24.384[netCorrelator] 899434108487us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Production servers. 2 uplinks up
            20:06:24.384[netCorrelator] 899434108488us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: VM Network. 2 uplinks up
            20:06:24.384[netCorrelator] 899434108489us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Management Network. 2 uplinks up
            20:06:24.384[netCorrelator] 899434108494us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: UAT Servers. 2 uplinks up
            20:06:24.384[netCorrelator] 899434108495us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Production servers. 2 uplinks up
            20:06:24.384[netCorrelator] 899434108496us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: VM Network. 2 uplinks up
            20:06:24.384[netCorrelator] 899434108497us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Management Network. 2 uplinks up
            20:06:24.384[netCorrelator] 899434108502us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: UAT Servers. 2 uplinks up
            20:06:24.384[netCorrelator] 899434108503us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Production servers. 2 uplinks up
            20:06:24.384[netCorrelator] 899434108504us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: VM Network. 2 uplinks up
            20:06:24.384[netCorrelator] 899434108505us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Management Network. 2 uplinks up
            20:06:24.385info hostd[2099342] [Originator@6876 sub=Libs opID=ef8bde18] IOFilterInfoImpl: Inbox-IOFilter Id: VMW_spm_1.0.0, localId: spm
            20:06:24.385info hostd[2099342] [Originator@6876 sub=Libs opID=ef8bde18] IOFilterInfoImpl: Inbox-IOFilter Id: VMW_vmwarevmcrypt_1.0.0, localId: vmwarevmcrypt
            20:06:24.385[netCorrelator] 899434108510us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: UAT Servers. 2 uplinks up
            20:06:24.385[netCorrelator] 899434108511us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Production servers. 2 uplinks up
            20:06:24.385[netCorrelator] 899434108512us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: VM Network. 2 uplinks up
            20:06:24.385[netCorrelator] 899434108512us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Management Network. 2 uplinks up
            20:06:24.385[netCorrelator] 899434108517us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: UAT Servers. 2 uplinks up
            20:06:24.385[netCorrelator] 899434108518us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Production servers. 2 uplinks up
            20:06:24.385[netCorrelator] 899434108519us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: VM Network. 2 uplinks up
            20:06:24.385[netCorrelator] 899434108520us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: Management Network. 2 uplinks up
            20:06:24.385[netCorrelator] 899434108525us: [vob.net.pg.uplink.transition.up] Uplink:vmnic0 is up. Affected portgroup: UAT Servers. 2 uplinks up
            20:06:24.387info hostd[2099342] [Originator@6876 sub=Libs opID=ef8bde18] PluginLdr_Load: Loaded plugin 'libvmiof-disk-spm.so' from '/usr/lib64/vmware/plugin/libvmiof-disk-spm.so'
            20:06:24.388info hostd[2099342] [Originator@6876 sub=Libs opID=ef8bde18] PluginLdr_Load: Loaded plugin 'libvmiof-disk-vmwarevmcrypt.so' from '/usr/lib64/vmware/plugin/libvmiof-disk-vmwarevmcrypt.so'
            20:06:25.345warning hostd[2098658] [Originator@6876 sub=VigorStatsProvider(0000009353bad2b0)] AddVirtualMachine: VM '64' already registered
            20:06:25.345warning hostd[2098658] [Originator@6876 sub=VigorStatsProvider(0000009353bad2b0)] AddVirtualMachine: VM '79' already registered
            20:06:27.002info hostd[2098657] [Originator@6876 sub=Hostsvc.VmkVprobSource] VmkVprobSource::Post event: (vim.event.EventEx) {
            20:06:27.002[netCorrelator] 899462940618us: [esx.problem.net.vmnic.watchdog.reset] Uplink vmnic0 has recovered from a transient failure due to watchdog timeout
            20:06:27.003info hostd[2098657] [Originator@6876 sub=Vimsvc.ha-eventmgr] Event 1355 : Uplink vmnic0 has recovered from a transient failure due to watchdog timeout
            20:06:27.426cpu2:2097220)NetqueueBal: 5032: vmnic0: device Up notification, reset logical space needed
            20:06:27.426cpu2:2097220)NetPort: 1580: disabled port 0x2000002
            20:06:27.427cpu3:2212666)NetSched: 654: vmnic0-0-tx: worldID = 2212666 exits
            20:06:27.427cpu2:2097220)Uplink: 11681: enabled port 0x2000002 with mac ec:f4:bb:c4:f9:1c
            20:06:27.428cpu2:2097220)Uplink: 537: Driver claims supporting 0 RX queues, and 0 queues are accepted.
            20:06:27.428cpu2:2097220)Uplink: 533: Driver claims supporting 0 TX queues, and 0 queues are accepted.
            20:06:27.428cpu2:2097220)NetPort: 1580: disabled port 0x2000002
            20:06:27.428cpu3:2224632)NetSched: 654: vmnic0-0-tx: worldID = 2224632 exits
            20:06:27.428cpu2:2097220)Uplink: 11681: enabled port 0x2000002 with mac ec:f4:bb:c4:f9:1c
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 1
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 4
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 9
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 8
            20:06:27.428cpu2:2097220)igbn: igbn_ChangeUplinkCap:1170: toggled hw VLAN offload off
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 7
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 3
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 5
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 10
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 6
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 11
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 13
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 1
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 4
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 9
            20:06:27.428cpu2:2097220)igbn: igbn_ChangeUplinkCap:1179: toggled hw VLAN offload on
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 8
            20:06:27.428cpu2:2097220)igbn: igbn_ChangeUplinkCap:1170: toggled hw VLAN offload on
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 7
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 3
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 5
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 10
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 6
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 11
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 13
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 1
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 4
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 9
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 8
            20:06:27.428cpu2:2097220)igbn: igbn_ChangeUplinkCap:1170: toggled hw VLAN offload off
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 7
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 3
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 5
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 10
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 6
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 11
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 13
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 1
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 4
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 9
            20:06:27.428cpu2:2097220)igbn: igbn_ChangeUplinkCap:1179: toggled hw VLAN offload on
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 8
            20:06:27.428cpu2:2097220)igbn: igbn_ChangeUplinkCap:1170: toggled hw VLAN offload on
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 7
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 3
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 5
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 10
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 6
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkEnableCap:1101: Enable vmnic0 cap function 11
            20:06:27.428cpu2:2097220)igbn: indrv_UplinkDisableCap:1114: Disable vmnic0 cap function 13
            20:06:29.662info hostd[2099336] [Originator@6876 sub=Libs opID=HB-host-9@27611-9c06fe0-75-de22 user=vpxuser] NetstackInstanceImpl: congestion control algorithm: newreno
            20:06:29.664warning hostd[2099336] [Originator@6876 sub=Hostsvc.Tpm20Provider opID=HB-host-9@27611-9c06fe0-75-de22 user=vpxuser] Unable to retrieve TPM/TXT status. TPM functionality will be unavailable. Failure reason: Unable to get node: Sys.
            20:06:29.718info hostd[2099336] [Originator@6876 sub=Libs opID=HB-host-9@27611-9c06fe0-75-de22 user=vpxuser] Could not expand environment variable HOME.
            20:06:29.723info hostd[2099336] [Originator@6876 sub=Libs opID=HB-host-9@27611-9c06fe0-75-de22 user=vpxuser] Could not expand environment variable HOME.
            20:06:40.015warning hostd[2098841] [Originator@6876 sub=Statssvc] Calculated write I/O size 989341 for scsi0:5 is out of range -- 989341,prevBytes = 901524710400 curBytes =901563294720 prevCommands = 1217187curCommands = 1217226
            20:06:40.195error hostd[2099330] [Originator@6876 sub=Hostsvc.NsxSpecTracker] Object not found/hostspec disabled
            20:06:50.427cpu4:2097641)NMP: nmp_ResetDeviceLogThrottling:3569: Error status H:0x0 D:0x2 P:0x0 Sense Data: 0x5 0x24 0x0 from dev "mpx.vmhba32:C0:T0:L0" occurred 1 times(of1commands)
            20:06:55Z sdInjector[2098648]: Injector: Sleeping!
            20:07:10.197error hostd[2098845] [Originator@6876 sub=Hostsvc.NsxSpecTracker] Object not found/hostspec disabled
            20:07:27Z sdInjector[2098648]: Injector: Sleeping!
            20:07:31.676info hostd[2099342] [Originator@6876 sub=Libs opID=ef8bde70] NetstackInstanceImpl: congestion control algorithm: newreno
            20:07:40.201error hostd[2099329] [Originator@6876 sub=Hostsvc.NsxSpecTracker] Object not found/hostspec disabled
            20:07:59Z sdInjector[2098648]: Injector: Sleeping!
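
            For completeness, the igbn driver version that is actually installed and its module parameters can be listed as follows (standard ESXi commands; no particular parameter change is implied):

            # Installed igbn driver VIB, to compare against the latest driver offered for ESXi 6.7
            esxcli software vib list | grep -i igbn
            # Current igbn module parameters (empty value = driver default)
            esxcli system module parameters list -m igbn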
            • 3. Re: Stopping I/O on vmnic0
              M.Meuwese Novice

              Hello Diego,

               

              Before we upgraded to ESXi 6.7, we checked the hardware compatibility list and also verified with the vendor (Dell). The vendor confirmed that the server supports ESXi 6.7, although it had not yet been added to the hardware compatibility list.

               

              Also, none of our other servers are experiencing this issue.

               

              Regards,

               

              Martin

              • 4. Re: Stopping I/O on vmnic0
                anvanster Enthusiast

                 Hi Martin,

                 

                 Can you please share some details about your environment? How are your ESXi hosts connected, and what kind of traffic is running over vmnic0 when you experience the connection loss?
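
                 In case it helps to answer this, the mapping of portgroups and VMs onto the uplinks can be listed on the host itself. A generic sketch (world IDs will differ per host):

                 # vSwitch topology: uplinks, portgroups and connected clients
                 esxcli network vswitch standard list
                 # Networked VMs and their world IDs
                 esxcli network vm list
                 # Ports (and active uplink) used by a specific VM world ID
                 esxcli network vm port list -w <worldID>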

                 

                Thank you.

                • 5. Re: Stopping I/O on vmnic0
                  BaumMeister Lurker

                   I have a similar problem (same system behavior, same NIC driver):

                  dead I/O on igb-nic (ESXi 6.7)