<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic dead I/O on igb-nic (ESXi 6.7) in ESXi Discussions</title>
    <link>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232590#M217150</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I'm running a homelab with ESXi 6.7 (&lt;SPAN class="summary-value"&gt;&lt;SPAN data-test-id="Hypervisor:"&gt;13006603). I have three NICs in my host; two are onboard and one is an Intel ET 82576 dual-port PCIe card. All NICs are assigned to the same vSwitch; actually only one is connected to the (physical) switch atm.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="summary-value"&gt;&lt;SPAN data-test-id="Hypervisor:"&gt;When I'm using one of the 82576 NICs and put heavy load on it (like backing up VMs via Nakivo B&amp;amp;R), the NIC stops working after a while and is dead/not responding anymore. Only a reboot of the host or (much easier) physically reconnecting the NIC (cable out, cable in) solves the problem.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="summary-value"&gt;&lt;SPAN data-test-id="Hypervisor:"&gt;I suspected a driver issue, so I updated to the latest driver from Intel:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier;"&gt;[root@esxi:~] /usr/sbin/esxcfg-nics -l&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier;"&gt;Name&amp;nbsp;&amp;nbsp;&amp;nbsp; PCI&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Driver&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Link Speed&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Duplex MAC Address&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; MTU&amp;nbsp;&amp;nbsp;&amp;nbsp; Description&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier;"&gt;vmnic0&amp;nbsp; 0000:04:00.0 ne1000&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Down 0Mbps&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Half&amp;nbsp;&amp;nbsp; 00:25:90:a7:65:dc 
1500&amp;nbsp;&amp;nbsp; Intel Corporation 82574L Gigabit Network Connection&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier;"&gt;vmnic1&amp;nbsp; 0000:00:19.0 ne1000&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Up&amp;nbsp;&amp;nbsp; 1000Mbps&amp;nbsp;&amp;nbsp; Full&amp;nbsp;&amp;nbsp; 00:25:90:a7:65:dd 1500&amp;nbsp;&amp;nbsp; Intel Corporation 82579LM Gigabit Network Connection&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier;"&gt;vmnic2&amp;nbsp; 0000:01:00.0 igb&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Down 0Mbps&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Half&amp;nbsp;&amp;nbsp; 90:e2:ba:1e:4d:c6 1500&amp;nbsp;&amp;nbsp; Intel Corporation 82576 Gigabit Network Connection&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier;"&gt;vmnic3&amp;nbsp; 0000:01:00.1 igb&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Down 0Mbps&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Half&amp;nbsp;&amp;nbsp; 90:e2:ba:1e:4d:c7 1500&amp;nbsp;&amp;nbsp; Intel Corporation 82576 Gigabit Network Connection&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier;"&gt;[root@esxi:~] esxcli software vib list|grep igb&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier;"&gt;net-igb&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 5.2.5-1OEM.550.0.0.1331820&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Intel&amp;nbsp;&amp;nbsp; VMwareCertified&amp;nbsp;&amp;nbsp; 2019-06-16&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, 
courier;"&gt;igbn&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0.1.1.0-4vmw.670.2.48.13006603&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; VMW&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; VMwareCertified&amp;nbsp;&amp;nbsp; 2019-06-07&lt;/SPAN&gt;&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Unfortunately this didn't solve the problem.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;However ... this behaviour doesn't occur, when I'm using one of the nics using the ne1000 driver.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Any idea how to solve the issue?&lt;/P&gt;&lt;P&gt;(... or at least dig down to it's root?)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks a lot in advance.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;Chris&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;PS: I found another thread which might be connected to my problem: &lt;A href="https://communities.vmware.com/thread/607329"&gt;Stopping I/O on vmnic0&lt;/A&gt;&amp;nbsp; Same system behaviour, same driver.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Mon, 17 Jun 2019 14:31:14 GMT</pubDate>
    <dc:creator>BaumMeister</dc:creator>
    <dc:date>2019-06-17T14:31:14Z</dc:date>
    <item>
      <title>dead I/O on igb-nic (ESXi 6.7)</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232590#M217150</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I'm running a homelab with ESXi 6.7 (&lt;SPAN class="summary-value"&gt;&lt;SPAN data-test-id="Hypervisor:"&gt;13006603). I have three NICs in my host; two are onboard and one is an Intel ET 82576 dual-port PCIe card. All NICs are assigned to the same vSwitch; actually only one is connected to the (physical) switch atm.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="summary-value"&gt;&lt;SPAN data-test-id="Hypervisor:"&gt;When I'm using one of the 82576 NICs and put heavy load on it (like backing up VMs via Nakivo B&amp;amp;R), the NIC stops working after a while and is dead/not responding anymore. Only a reboot of the host or (much easier) physically reconnecting the NIC (cable out, cable in) solves the problem.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="summary-value"&gt;&lt;SPAN data-test-id="Hypervisor:"&gt;I suspected a driver issue, so I updated to the latest driver from Intel:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier;"&gt;[root@esxi:~] /usr/sbin/esxcfg-nics -l&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier;"&gt;Name&amp;nbsp;&amp;nbsp;&amp;nbsp; PCI&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Driver&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Link Speed&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Duplex MAC Address&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; MTU&amp;nbsp;&amp;nbsp;&amp;nbsp; Description&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier;"&gt;vmnic0&amp;nbsp; 0000:04:00.0 ne1000&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Down 0Mbps&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Half&amp;nbsp;&amp;nbsp; 
00:25:90:a7:65:dc 1500&amp;nbsp;&amp;nbsp; Intel Corporation 82574L Gigabit Network Connection&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier;"&gt;vmnic1&amp;nbsp; 0000:00:19.0 ne1000&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Up&amp;nbsp;&amp;nbsp; 1000Mbps&amp;nbsp;&amp;nbsp; Full&amp;nbsp;&amp;nbsp; 00:25:90:a7:65:dd 1500&amp;nbsp;&amp;nbsp; Intel Corporation 82579LM Gigabit Network Connection&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier;"&gt;vmnic2&amp;nbsp; 0000:01:00.0 igb&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Down 0Mbps&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Half&amp;nbsp;&amp;nbsp; 90:e2:ba:1e:4d:c6 1500&amp;nbsp;&amp;nbsp; Intel Corporation 82576 Gigabit Network Connection&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier;"&gt;vmnic3&amp;nbsp; 0000:01:00.1 igb&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Down 0Mbps&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Half&amp;nbsp;&amp;nbsp; 90:e2:ba:1e:4d:c7 1500&amp;nbsp;&amp;nbsp; Intel Corporation 82576 Gigabit Network Connection&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier;"&gt;[root@esxi:~] esxcli software vib list|grep igb&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier;"&gt;net-igb&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 5.2.5-1OEM.550.0.0.1331820&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Intel&amp;nbsp;&amp;nbsp; VMwareCertified&amp;nbsp;&amp;nbsp; 2019-06-16&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, 
courier;"&gt;igbn&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0.1.1.0-4vmw.670.2.48.13006603&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; VMW&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; VMwareCertified&amp;nbsp;&amp;nbsp; 2019-06-07&lt;/SPAN&gt;&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Unfortunately this didn't solve the problem.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;However ... this behaviour doesn't occur, when I'm using one of the nics using the ne1000 driver.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Any idea how to solve the issue?&lt;/P&gt;&lt;P&gt;(... or at least dig down to it's root?)&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks a lot in advance.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;Chris&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;PS: I found another thread which might be connected to my problem: &lt;A href="https://communities.vmware.com/thread/607329"&gt;Stopping I/O on vmnic0&lt;/A&gt;&amp;nbsp; Same system behaviour, same driver.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 17 Jun 2019 14:31:14 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232590#M217150</guid>
      <dc:creator>BaumMeister</dc:creator>
      <dc:date>2019-06-17T14:31:14Z</dc:date>
    </item>
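The esxcfg-nics -l listing quoted above can be narrowed to just the ports bound to the igb driver with a small awk filter; a hedged sketch, using rows condensed from the post (the field index assumes the stock column order, Name/PCI/Driver first):

```shell
# Rows condensed from the esxcfg-nics -l output quoted in the post above.
printf '%s\n' \
  'vmnic0  0000:04:00.0 ne1000 Down 0Mbps    Half 00:25:90:a7:65:dc 1500 Intel 82574L' \
  'vmnic1  0000:00:19.0 ne1000 Up   1000Mbps Full 00:25:90:a7:65:dd 1500 Intel 82579LM' \
  'vmnic2  0000:01:00.0 igb    Down 0Mbps    Half 90:e2:ba:1e:4d:c6 1500 Intel 82576' \
  'vmnic3  0000:01:00.1 igb    Down 0Mbps    Half 90:e2:ba:1e:4d:c7 1500 Intel 82576' |
awk '$3 == "igb" { print $1, $3 }'
# prints:
# vmnic2 igb
# vmnic3 igb
```

On a live host the same filter would be fed from /usr/sbin/esxcfg-nics -l directly instead of the pasted rows.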
    <item>
      <title>Re: dead I/O on igb-nic (ESXi 6.7)</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232591#M217151</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;What does vmkernel.log say? Can you post the vmkernel logs here?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 17 Jun 2019 16:37:19 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232591#M217151</guid>
      <dc:creator>SureshKumarMuth</dc:creator>
      <dc:date>2019-06-17T16:37:19Z</dc:date>
    </item>
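On ESXi 6.7 the messages in question live in /var/log/vmkernel.log; a minimal sketch of narrowing an excerpt to the link-state and all-paths-down events (the sample lines are condensed from the log pasted later in this thread):

```shell
# Sample lines condensed from the vmkernel.log excerpt quoted in this thread.
printf '%s\n' \
  '2019-06-17T12:45:39.351Z cpu3:2097615)igb: vmnic3 NIC Link is Down' \
  '2019-06-17T12:40:44.190Z cpu0:2097707)DVFilter: 5964: Checking disconnected filters' \
  '2019-06-17T12:35:42.190Z cpu0:2098034)StorageApdHandler: 1203: APD start' |
grep -E 'NIC Link|APD'
# keeps the Link-Down and APD lines and drops the DVFilter noise
```

On the host itself this would be the same grep run against /var/log/vmkernel.log (the default log path on ESXi 6.7).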
    <item>
      <title>Re: dead I/O on igb-nic (ESXi 6.7)</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232592#M217152</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;The igb driver 5.2.5 that you are using was released in 2014 and is quite old.&lt;/P&gt;&lt;P&gt;Unfortunately, your card is not supported by the newer "igbn" driver.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 17 Jun 2019 20:29:51 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232592#M217152</guid>
      <dc:creator>anvanster</dc:creator>
      <dc:date>2019-06-17T20:29:51Z</dc:date>
    </item>
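The VIB listing in the first post can likewise be reduced to package name and version to spot the 2014-era OEM driver; a sketch with rows condensed from that listing, assuming the third column is the vendor as in the quoted output:

```shell
# Rows condensed from the esxcli software vib list output quoted above.
printf '%s\n' \
  'net-igb 5.2.5-1OEM.550.0.0.1331820 Intel VMwareCertified 2019-06-16' \
  'igbn 0.1.1.0-4vmw.670.2.48.13006603 VMW VMwareCertified 2019-06-07' |
awk '$3 == "Intel" { print $1, $2 }'
# prints: net-igb 5.2.5-1OEM.550.0.0.1331820
```

The OEM build suffix (550.x, i.e. built against the 5.5 stack) is what dates the net-igb package relative to the inbox igbn VIB.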
    <item>
      <title>Re: dead I/O on igb-nic (ESXi 6.7)</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232593#M217153</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;You're right about the newer igbn driver not supporting the NIC anymore.&lt;/P&gt;&lt;P&gt;However, the NIC and driver I'm using are on VMware's HCL:&lt;/P&gt;&lt;P&gt;&lt;A href="https://www.vmware.com/resources/compatibility/detail.php?deviceCategory=io&amp;amp;productid=5325&amp;amp;deviceCategory=io&amp;amp;details=1&amp;amp;partner=46&amp;amp;releases=428&amp;amp;keyword=82576&amp;amp;deviceTypes=6&amp;amp;page=1&amp;amp;display_interval=10&amp;amp;sortColumn=Partner&amp;amp;sortOrder=Asc" title="https://www.vmware.com/resources/compatibility/detail.php?deviceCategory=io&amp;amp;productid=5325&amp;amp;deviceCategory=io&amp;amp;details=1&amp;amp;partner=46&amp;amp;releases=428&amp;amp;keyword=82576&amp;amp;deviceTypes=6&amp;amp;page=1&amp;amp;display_interval=10&amp;amp;sortColumn=Partner&amp;amp;sortOrder=Asc"&gt;VMware Compatibility Guide - I/O Device Search&lt;/A&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 18 Jun 2019 08:07:19 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232593#M217153</guid>
      <dc:creator>BaumMeister</dc:creator>
      <dc:date>2019-06-18T08:07:19Z</dc:date>
    </item>
    <item>
      <title>Re: dead I/O on igb-nic (ESXi 6.7)</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232594#M217154</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Sure.&lt;/P&gt;&lt;P&gt;Here's the log output in the relevant time slot.&lt;/P&gt;&lt;P&gt;I marked the line that shows when the 82576 NIC (-&amp;gt; vmnic3) went down. vmnic1 is running with the ne1000 driver.&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:20:44.190Z cpu4:2097707)DVFilter: 5964: Checking disconnected filters for timeouts&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:23:04.707Z cpu3:2097182)vmw_ahci[0000001f]: AHCI_EdgeIntrHandler:new interrupts coming, IS= 0x2, no repeat&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:30:44.190Z cpu0:2097707)DVFilter: 5964: Checking disconnected filters for timeouts&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:35:42.190Z cpu0:2098034)StorageApdHandler: 1203: APD start for 0x430c44ee76d0 [3a5eb32c-7141e730]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:35:42.190Z cpu0:2098034)StorageApdHandler: 1203: APD start for 0x430c44ee95d0 [a16fe90b-d7095fcc]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:35:42.190Z cpu3:2097369)StorageApdHandler: 419: APD start event for 0x430c44ee76d0 [3a5eb32c-7141e730]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:35:42.190Z cpu0:2098034)StorageApdHandler: 1203: APD start for 0x430c44eeb4c0 [37c6519b-ec9783e7]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:35:42.190Z cpu3:2097369)StorageApdHandlerEv: 110: Device or filesystem with identifier 
[3a5eb32c-7141e730] has entered the All Paths Down state.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:35:42.190Z cpu3:2097369)StorageApdHandler: 419: APD start event for 0x430c44ee95d0 [a16fe90b-d7095fcc]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:35:42.190Z cpu3:2097369)StorageApdHandlerEv: 110: Device or filesystem with identifier [a16fe90b-d7095fcc] has entered the All Paths Down state.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:35:42.190Z cpu3:2097369)StorageApdHandler: 419: APD start event for 0x430c44eeb4c0 [37c6519b-ec9783e7]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:35:42.190Z cpu3:2097369)StorageApdHandlerEv: 110: Device or filesystem with identifier [37c6519b-ec9783e7] has entered the All Paths Down state.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:37:06.190Z cpu7:2098034)WARNING: NFS: 337: Lost connection to the server 10.0.0.199 mount point /volume1/VMs, mounted as 3a5eb32c-7141e730-0000-000000000000 ("VMs@Fuchur")&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:37:06.190Z cpu7:2098034)WARNING: NFS: 337: Lost connection to the server 10.0.0.199 mount point /volume1/VM_Backups/, mounted as a16fe90b-d7095fcc-0000-000000000000 ("VM_Backups@Fuchur")&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:37:06.190Z cpu7:2098034)WARNING: NFS: 337: Lost connection to the server 10.0.0.199 mount point /volume1/Media, mounted as 37c6519b-ec9783e7-0000-000000000000 ("Media@Fuchur")&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; 
font-size: 10pt;"&gt;2019-06-17T12:38:02.191Z cpu0:2097369)StorageApdHandler: 609: APD timeout event for 0x430c44ee76d0 [3a5eb32c-7141e730]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:38:02.191Z cpu0:2097369)StorageApdHandlerEv: 126: Device or filesystem with identifier [3a5eb32c-7141e730] has entered the All Paths Down Timeout state after being in the All Paths Down state for 140 seconds. I/Os will now be fast failed.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:38:02.191Z cpu0:2097369)StorageApdHandler: 609: APD timeout event for 0x430c44ee95d0 [a16fe90b-d7095fcc]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:38:02.191Z cpu0:2097369)StorageApdHandlerEv: 126: Device or filesystem with identifier [a16fe90b-d7095fcc] has entered the All Paths Down Timeout state after being in the All Paths Down state for 140 seconds. I/Os will now be fast failed.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:38:02.191Z cpu0:2097369)StorageApdHandler: 609: APD timeout event for 0x430c44eeb4c0 [37c6519b-ec9783e7]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:38:02.191Z cpu0:2097369)StorageApdHandlerEv: 126: Device or filesystem with identifier [37c6519b-ec9783e7] has entered the All Paths Down Timeout state after being in the All Paths Down state for 140 seconds. 
I/Os will now be fast failed.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:40:44.190Z cpu0:2097707)DVFilter: 5964: Checking disconnected filters for timeouts&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG style="color: #e23d39; font-size: 10pt; font-family: courier new, courier;"&gt;2019-06-17T12:45:39.351Z cpu3:2097615)&amp;lt;6&amp;gt;igb: vmnic3 NIC Link is Down&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:45:42.732Z cpu7:2097615)&amp;lt;6&amp;gt;igb: vmnic3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:45:43.190Z cpu4:2097220)NetqueueBal: 5032: vmnic3: device Up notification, reset logical space needed&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:45:43.190Z cpu4:2097220)NetPort: 1580: disabled port 0x2000004&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:45:43.190Z cpu2:2097770)NetSched: 654: vmnic3-0-tx: worldID = 2097770 exits&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:45:43.190Z cpu4:2097220)Uplink: 11689: enabled port 0x2000004 with mac 90:e2:ba:1e:4d:c7&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:45:43.190Z cpu4:2097220)NetPort: 1580: disabled port 0x2000004&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:45:43.190Z cpu4:2097220)Uplink: 11689: enabled port 0x2000004 with mac 90:e2:ba:1e:4d:c7&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:45:43.191Z cpu5:2097296)CpuSched: 
699: user latency of 2102301 vmnic3-0-tx 0 changed by 2097296 NetSchedHelper -6&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:45:43.191Z cpu2:2102301)NetSched: 654: vmnic3-0-tx: worldID = 2102301 exits&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:45:43.191Z cpu5:2097296)CpuSched: 699: user latency of 2102302 vmnic3-0-tx 0 changed by 2097296 NetSchedHelper -6&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:45:48.941Z cpu3:2098034)NFS: 346: Restored connection to the server 10.0.0.199 mount point /volume1/Media, mounted as 37c6519b-ec9783e7-0000-000000000000 ("Media@Fuvchur")&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:45:48.941Z cpu4:2097369)StorageApdHandler: 507: APD exit event for 0x430c44eeb4c0 [37c6519b-ec9783e7]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:45:48.941Z cpu3:2098034)NFS: 346: Restored connection to the server 10.0.0.199 mount point /volume1/VMs, mounted as 3a5eb32c-7141e730-0000-000000000000 ("VMs@Fuchur")&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:45:48.941Z cpu4:2097369)StorageApdHandlerEv: 117: Device or filesystem with identifier [37c6519b-ec9783e7] has exited the All Paths Down state.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:45:48.941Z cpu4:2097369)StorageApdHandler: 507: APD exit event for 0x430c44ee76d0 [3a5eb32c-7141e730]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:45:48.941Z cpu4:2097369)StorageApdHandlerEv: 117: Device or filesystem with identifier [3a5eb32c-7141e730] 
has exited the All Paths Down state.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:45:49.613Z cpu3:2098034)NFS: 346: Restored connection to the server 10.0.0.199 mount point /volume1/VM_Backups/, mounted as a16fe90b-d7095fcc-0000-000000000000 ("VM_Backups@Fuchur")&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:45:49.613Z cpu4:2097369)StorageApdHandler: 507: APD exit event for 0x430c44ee95d0 [a16fe90b-d7095fcc]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:45:49.613Z cpu4:2097369)StorageApdHandlerEv: 117: Device or filesystem with identifier [a16fe90b-d7095fcc] has exited the All Paths Down state.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:19.476Z cpu3:2097615)&amp;lt;6&amp;gt;igb: vmnic3 NIC Link is Down&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:29.190Z cpu6:2098637 opID=f97c863c)World: 11943: VC opID sps-Main-767271-893-94-37-bba6 maps to vmkernel opID f97c863c&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:29.190Z cpu6:2098637 opID=f97c863c)SunRPC: 3303: Synchronous RPC abort for client 0x4304520bfb90 IP 10.0.0.199.8.1 proc 1 xid 0x76d7dd9e attempt 1 of 3&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:39.190Z cpu6:2098637 opID=f97c863c)SunRPC: 3303: Synchronous RPC abort for client 0x4304520bfb90 IP 10.0.0.199.8.1 proc 1 xid 0x76d7dda2 attempt 2 of 3&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:49.190Z cpu6:2098637 opID=f97c863c)SunRPC: 3303: Synchronous RPC abort for client 
0x4304520bfb90 IP 10.0.0.199.8.1 proc 1 xid 0x76d7dda6 attempt 3 of 3&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:49.190Z cpu6:2098637 opID=f97c863c)WARNING: NFS: 2335: Failed to get attributes (I/O error)&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:49.190Z cpu6:2098637 opID=f97c863c)NFS: 2444: [Repeated 1 times] Failed to get object (0x451a1b49b3ce) 36 3a5eb32c 7141e730 70001 686a001 0 829c3d42 976c7782 0 0 0 0 0 :No connection&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:49.190Z cpu6:2098637 opID=f97c863c)NFS: 2449: Failed to get object (0x451a1751b16e) 36 37c6519b ec9783e7 70001 48001 0 829c3d42 976c7782 0 0 0 0 0 :I/O error&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:51.673Z cpu5:2099927)DEBUG (ne1000): checking link for adapter vmnic1&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:52.679Z cpu3:2097566)INFO (ne1000): vmnic1: Link is Up&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:52.679Z cpu3:2097566)DEBUG (ne1000): Reporting uplink 0x43044d090250 status&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)NetqueueBal: 4967: vmnic1: new netq module, reset logical space needed&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)NetqueueBal: 4996: vmnic1: plugins to call differs, reset logical space&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z 
cpu3:2097220)NetqueueBal: 5032: vmnic1: device Up notification, reset logical space needed&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)Uplink: 537: Driver claims supporting 0 RX queues, and 0 queues are accepted.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)Uplink: 533: Driver claims supporting 0 TX queues, and 0 queues are accepted.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)NetPort: 1580: disabled port 0x2000008&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu1:2097761)NetSched: 654: vmnic1-0-tx: worldID = 2097761 exits&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)Uplink: 11689: enabled port 0x2000008 with mac 00:25:90:a7:65:dd&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu5:2097296)CpuSched: 699: user latency of 2102444 vmnic1-0-tx 0 changed by 2097296 NetSchedHelper -6&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)INFO (ne1000): vmnic1: Disabled 'Capable To Xmit Scatter-Gathered Data'&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)INFO (ne1000): vmnic1: Disabled 'Capable To Offload Checksum for IPv4'&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)INFO (ne1000): vmnic1: Disabled 'Capable To Offload TCP Segmentation for 
IPv4'&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)INFO (ne1000): vmnic1: Disabled 'Capable To Insert VLAN Tag'&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)DEBUG (ne1000): writing uplink config&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)DEBUG (ne1000): writing adapter config&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)INFO (ne1000): vmnic1: Disabled 'Capable To Strip VLAN Tag'&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)DEBUG (ne1000): writing uplink config&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)DEBUG (ne1000): writing adapter config&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)INFO (ne1000): vmnic1: Disabled 'Capable To Xmit Scatter-Gathered Across Multiple Pages'&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)INFO (ne1000): vmnic1: Disabled 'Capable To Offload Checksum for IPv6'&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)INFO (ne1000): vmnic1: Disabled 'Capable To Offload TCP Segmentation for IPv6'&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)INFO (ne1000): vmnic1: Enabled 'Capable To Xmit Scatter-Gathered 
Data'&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)INFO (ne1000): vmnic1: Enabled 'Capable To Offload Checksum for IPv4'&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)INFO (ne1000): vmnic1: Enabled 'Capable To Offload TCP Segmentation for IPv4'&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)INFO (ne1000): vmnic1: Enabled 'Capable To Insert VLAN Tag'&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)DEBUG (ne1000): writing uplink config&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)DEBUG (ne1000): writing adapter config&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)INFO (ne1000): vmnic1: Enabled 'Capable To Strip VLAN Tag'&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)DEBUG (ne1000): writing uplink config&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)DEBUG (ne1000): writing adapter config&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)INFO (ne1000): vmnic1: Enabled 'Capable To Xmit Scatter-Gathered Across Multiple Pages'&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)INFO (ne1000): vmnic1: Enabled 'Capable To Offload Checksum for 
IPv6'&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)INFO (ne1000): vmnic1: Enabled 'Capable To Offload TCP Segmentation for IPv6'&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:53.190Z cpu3:2097220)INFO (ne1000): vmnic1: Disabled 'Driver Requires No Packet Scheduling'&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:54.190Z cpu6:2098034)StorageApdHandler: 1203: APD start for 0x430c44ee76d0 [3a5eb32c-7141e730]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:54.190Z cpu6:2098034)StorageApdHandler: 1203: APD start for 0x430c44ee95d0 [a16fe90b-d7095fcc]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:54.190Z cpu6:2098034)StorageApdHandler: 1203: APD start for 0x430c44eeb4c0 [37c6519b-ec9783e7]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:54.190Z cpu4:2097369)StorageApdHandler: 419: APD start event for 0x430c44ee76d0 [3a5eb32c-7141e730]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:54.190Z cpu4:2097369)StorageApdHandlerEv: 110: Device or filesystem with identifier [3a5eb32c-7141e730] has entered the All Paths Down state.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:54.190Z cpu4:2097369)StorageApdHandler: 419: APD start event for 0x430c44ee95d0 [a16fe90b-d7095fcc]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:54.190Z cpu4:2097369)StorageApdHandlerEv: 110: Device or filesystem with identifier 
[a16fe90b-d7095fcc] has entered the All Paths Down state.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:54.190Z cpu4:2097369)StorageApdHandler: 419: APD start event for 0x430c44eeb4c0 [37c6519b-ec9783e7]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:49:54.190Z cpu4:2097369)StorageApdHandlerEv: 110: Device or filesystem with identifier [37c6519b-ec9783e7] has entered the All Paths Down state.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:00.969Z cpu2:2098034)StorageApdHandler: 1315: APD exit for 0x430c44eeb4c0 [37c6519b-ec9783e7]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:00.969Z cpu4:2097369)StorageApdHandler: 507: APD exit event for 0x430c44eeb4c0 [37c6519b-ec9783e7]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:00.969Z cpu2:2098034)StorageApdHandler: 1315: APD exit for 0x430c44ee76d0 [3a5eb32c-7141e730]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:00.969Z cpu4:2097369)StorageApdHandlerEv: 117: Device or filesystem with identifier [37c6519b-ec9783e7] has exited the All Paths Down state.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:00.969Z cpu2:2098034)StorageApdHandler: 1315: APD exit for 0x430c44ee95d0 [a16fe90b-d7095fcc]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:00.969Z cpu4:2097369)StorageApdHandler: 507: APD exit event for 0x430c44ee76d0 [3a5eb32c-7141e730]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 
10pt;"&gt;2019-06-17T12:50:00.969Z cpu4:2097369)StorageApdHandlerEv: 117: Device or filesystem with identifier [3a5eb32c-7141e730] has exited the All Paths Down state.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:00.969Z cpu4:2097369)StorageApdHandler: 507: APD exit event for 0x430c44ee95d0 [a16fe90b-d7095fcc]&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:00.969Z cpu4:2097369)StorageApdHandlerEv: 117: Device or filesystem with identifier [a16fe90b-d7095fcc] has exited the All Paths Down state.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.325Z cpu6:2099723)VSCSI: 6602: handle 8209(vscsi0:0):Destroying Device for world 2099687 (pendCom 0)&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.327Z cpu3:2099715)VSCSI: 6602: handle 8208(vscsi0:0):Destroying Device for world 2099688 (pendCom 0)&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.327Z cpu2:2099723)CBT: 723: Disconnecting the cbt device 2f0796-cbt with filehandle 3082134&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.328Z cpu3:2099715)CBT: 723: Disconnecting the cbt device 31072d-cbt with filehandle 3213101&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.342Z cpu1:2099723)CBT: 1352: Created device 41078e-cbt for cbt driver with filehandle 4261774&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.342Z cpu3:2099715)CBT: 1352: Created device 320792-cbt for cbt driver with filehandle 3278738&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN 
style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.345Z cpu1:2099723)CBT: 1352: Created device 5107a4-cbt for cbt driver with filehandle 5310372&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.346Z cpu1:2099723)CBT: 723: Disconnecting the cbt device 41078e-cbt with filehandle 4261774&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.346Z cpu3:2099715)CBT: 1352: Created device 2807a7-cbt for cbt driver with filehandle 2623399&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.346Z cpu3:2099715)CBT: 723: Disconnecting the cbt device 320792-cbt with filehandle 3278738&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.346Z cpu1:2099723)CBT: 723: Disconnecting the cbt device 5107a4-cbt with filehandle 5310372&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.346Z cpu3:2099715)CBT: 723: Disconnecting the cbt device 2807a7-cbt with filehandle 2623399&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.347Z cpu3:2099715)CBT: 1352: Created device 2a07a7-cbt for cbt driver with filehandle 2754471&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.348Z cpu1:2099723)CBT: 1352: Created device 5307a4-cbt for cbt driver with filehandle 5441444&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.348Z cpu3:2099715)SVM: 5032: SkipZero 0, dstFsBlockSize -1, preallocateBlocks 0, vmfsOptimizations 0, useBitmapCopy 1, skipPlugGrain 1, destination disk grainSize 
0&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.349Z cpu3:2099715)SVM: 5126: SVM_MakeDev.5126: Creating device 2a07a7-3407aa-svmmirror: Success&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.349Z cpu3:2099715)SVM: 5175: Created device 2a07a7-3407aa-svmmirror, primary 2a07a7, secondary 3407aa&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.349Z cpu3:2099715)VSCSI: 3782: handle 8212(vscsi0:0):Using sync mode due to sparse disks&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.349Z cpu3:2099715)VSCSI: 3810: handle 8212(vscsi0:0):Creating Virtual Device for world 2099688 (FSS handle 4327310) numBlocks=41943040 (bs=512)&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.349Z cpu3:2099715)VSCSI: 273: handle 8212(vscsi0:0):Input values: res=0 limit=-2 bw=-1 Shares=1000&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.349Z cpu3:2099715)Vmxnet3: 18569: indLROPktToGuest: 1, vcd-&amp;gt;umkShared-&amp;gt;vrrsSelected: 3 port 0x200000b&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.349Z cpu3:2099715)Vmxnet3: 18810: Using default queue delivery for vmxnet3 for port 0x200000b&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.349Z cpu1:2099723)SVM: 5032: SkipZero 0, dstFsBlockSize -1, preallocateBlocks 0, vmfsOptimizations 0, useBitmapCopy 1, skipPlugGrain 1, destination disk grainSize 0&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 
10pt;"&gt;2019-06-17T12:50:32.349Z cpu1:2099723)SVM: 5126: SVM_MakeDev.5126: Creating device 5307a4-3b07ad-svmmirror: Success&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.349Z cpu1:2099723)SVM: 5175: Created device 5307a4-3b07ad-svmmirror, primary 5307a4, secondary 3b07ad&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.349Z cpu1:2099723)VSCSI: 3782: handle 8213(vscsi0:0):Using sync mode due to sparse disks&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.349Z cpu1:2099723)VSCSI: 3810: handle 8213(vscsi0:0):Creating Virtual Device for world 2099687 (FSS handle 3606440) numBlocks=62914560 (bs=512)&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.349Z cpu1:2099723)VSCSI: 273: handle 8213(vscsi0:0):Input values: res=0 limit=-2 bw=-1 Shares=1000&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.350Z cpu1:2099723)Vmxnet3: 18569: indLROPktToGuest: 1, vcd-&amp;gt;umkShared-&amp;gt;vrrsSelected: 3 port 0x200000d&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:32.350Z cpu1:2099723)Vmxnet3: 18810: Using default queue delivery for vmxnet3 for port 0x200000d&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.185Z cpu2:2102534)SVM: 2847: scsi0:0 Completed copy in 821 ms. vmmLeaderID = 2099688.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.223Z cpu0:2102533)SVM: 2847: scsi0:0 Completed copy in 858 ms. 
vmmLeaderID = 2099687.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.275Z cpu0:2099715)VSCSI: 6602: handle 8212(vscsi0:0):Destroying Device for world 2099688 (pendCom 0)&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.276Z cpu0:2099715)SVM: 2548: SVM Mirrored mode IO stats for device: 2a07a7-3407aa-svmmirror&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.276Z cpu0:2099715)SVM: 2552: Total # IOs mirrored: 0, Total # IOs sent only to source: 0, Total # IO deferred by lock: 0&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.276Z cpu0:2099715)SVM: 2556: Deferred IO stats - Max: 0, Total: 0, Avg: 1 (msec)&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.276Z cpu0:2099715)SVM: 2570: Destroyed device 2a07a7-3407aa-svmmirror&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.281Z cpu3:2099723)VSCSI: 6602: handle 8213(vscsi0:0):Destroying Device for world 2099687 (pendCom 0)&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.282Z cpu7:2099723)SVM: 2548: SVM Mirrored mode IO stats for device: 5307a4-3b07ad-svmmirror&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.282Z cpu7:2099723)SVM: 2552: Total # IOs mirrored: 0, Total # IOs sent only to source: 0, Total # IO deferred by lock: 0&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.282Z cpu7:2099723)SVM: 2556: Deferred IO stats - Max: 0, Total: 0, Avg: 1 
(msec)&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.282Z cpu7:2099723)SVM: 2570: Destroyed device 5307a4-3b07ad-svmmirror&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.335Z cpu1:2099715)CBT: 723: Disconnecting the cbt device 2a07a7-cbt with filehandle 2754471&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.341Z cpu6:2099723)CBT: 723: Disconnecting the cbt device 5307a4-cbt with filehandle 5441444&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.350Z cpu3:2099715)CBT: 1352: Created device 6d09cd-cbt for cbt driver with filehandle 7145933&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.350Z cpu3:2099715)VSCSI: 3782: handle 8214(vscsi0:0):Using sync mode due to sparse disks&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.350Z cpu3:2099715)VSCSI: 3810: handle 8214(vscsi0:0):Creating Virtual Device for world 2099688 (FSS handle 12388969) numBlocks=41943040 (bs=512)&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.350Z cpu3:2099715)VSCSI: 273: handle 8214(vscsi0:0):Input values: res=0 limit=-2 bw=-1 Shares=1000&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.351Z cpu3:2099715)Vmxnet3: 18569: indLROPktToGuest: 1, vcd-&amp;gt;umkShared-&amp;gt;vrrsSelected: 3 port 0x200000b&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.351Z cpu3:2099715)Vmxnet3: 18810: Using default queue delivery for vmxnet3 for port 
0x200000b&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.357Z cpu4:2099723)CBT: 1352: Created device 220ba5-cbt for cbt driver with filehandle 2231205&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.357Z cpu4:2099723)VSCSI: 3782: handle 8215(vscsi0:0):Using sync mode due to sparse disks&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.357Z cpu4:2099723)VSCSI: 3810: handle 8215(vscsi0:0):Creating Virtual Device for world 2099687 (FSS handle 1706919) numBlocks=62914560 (bs=512)&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.357Z cpu4:2099723)VSCSI: 273: handle 8215(vscsi0:0):Input values: res=0 limit=-2 bw=-1 Shares=1000&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.357Z cpu4:2099723)Vmxnet3: 18569: indLROPktToGuest: 1, vcd-&amp;gt;umkShared-&amp;gt;vrrsSelected: 3 port 0x200000d&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new, courier; font-size: 10pt;"&gt;2019-06-17T12:50:33.357Z cpu4:2099723)Vmxnet3: 18810: Using default queue delivery for vmxnet3 for port 0x200000d&lt;/SPAN&gt;&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 18 Jun 2019 08:20:10 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232594#M217154</guid>
      <dc:creator>BaumMeister</dc:creator>
      <dc:date>2019-06-18T08:20:10Z</dc:date>
    </item>
    <item>
      <title>Re: dead I/O on igb-nic (ESXi 6.7)</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232595#M217155</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Sorry for the late response.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The log above does not explain why the NIC went down. We would have to enable debug logging for the driver to find out what made the NIC go down at that time. However, if the issue turns out to be in the driver, we can't do much beyond updating the driver/firmware, which you have already done; only the NIC vendor can help us.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Alternatively, if you see no issues with ne1000, you may use that driver instead of igb. &lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 25 Jun 2019 15:30:00 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232595#M217155</guid>
      <dc:creator>SureshKumarMuth</dc:creator>
      <dc:date>2019-06-25T15:30:00Z</dc:date>
    </item>
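The driver fallback suggested above can be prepared from the ESXi shell. A minimal dry-run sketch, assuming the igb module named in this thread; the commands are only printed, not executed, since they apply on an ESXi host and disabling a module takes effect after a reboot:

```shell
#!/bin/sh
# Dry-run sketch: build (and print) the commands for checking which driver
# claims each uplink and for disabling the igb module so that another
# certified driver can claim the 82576 on the next boot.
MODULE="igb"

list_nics="esxcfg-nics -l"                                         # vmnic-to-driver bindings
list_module="esxcli system module list | grep $MODULE"             # is the module loaded/enabled
disable_module="esxcli system module set --enabled=false --module=$MODULE"

printf '%s\n' "$list_nics" "$list_module" "$disable_module"
# After disabling the module, reboot the host and re-check esxcfg-nics -l.
```

Verify the bound driver on your own host before disabling anything; if no other driver claims the device, the uplink stays down after the reboot.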
    <item>
      <title>Re: dead I/O on igb-nic (ESXi 6.7)</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232596#M217156</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Exact same behavior here with ESXi 6.5 U3 and an Intel NIC 82576. Everything was fine in ESXi 6.5 U2.&lt;/P&gt;&lt;P&gt;I've updated the igb driver from 5.0.5 to 5.2.5 (the last officially supported version); let's say it's a "little" better: it now takes two weeks (instead of two days) before the NIC stops passing traffic. Plugging the Ethernet cable out/in, or remotely downing/upping the port on the switch, solves the issue.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Did you find any solution to this issue? Using the ne1000 driver with this NIC is possible, right? How do I switch drivers?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 15 Aug 2019 08:51:15 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232596#M217156</guid>
      <dc:creator>nague</dc:creator>
      <dc:date>2019-08-15T08:51:15Z</dc:date>
    </item>
    <item>
      <title>Re: dead I/O on igb-nic (ESXi 6.7)</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232597#M217157</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;We're having the same random issue with Intel Corporation 82576 Gigabit Network Connection QP NICs on our vSphere 6.5 hosts. We opened a support ticket and, of course, the suggestion is to upgrade to the 5.2.5 driver.&amp;nbsp; We're going to proceed, but this thread doesn't make me confident.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 29 Aug 2019 15:34:39 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232597#M217157</guid>
      <dc:creator>monderick</dc:creator>
      <dc:date>2019-08-29T15:34:39Z</dc:date>
    </item>
    <item>
      <title>Re: dead I/O on igb-nic (ESXi 6.7)</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232598#M217158</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Have the same problem here under load, for example 2-3 hours into backups over the NICs.&lt;/P&gt;&lt;P&gt;Two different servers; tried both the inbox and 5.2.5 versions of the driver.&lt;/P&gt;&lt;P&gt;If the system is stable I can recover via the CLI by running "esxcli network nic down -n vmnic0" and "esxcli network nic up -n vmnic0",&amp;nbsp; which gets the NICs back online without a reboot.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sat, 07 Dec 2019 21:32:00 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232598#M217158</guid>
      <dc:creator>PeterCr</dc:creator>
      <dc:date>2019-12-07T21:32:00Z</dc:date>
    </item>
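The recovery described above can be wrapped in a small helper; note that the esxcli namespace for uplinks is `network nic`. A dry-run sketch in which the vmnic name is an assumption for illustration and the commands are printed rather than executed:

```shell
#!/bin/sh
# Sketch of the no-reboot workaround: bounce a hung uplink from the ESXi
# shell. The helper prints the two esxcli commands (namespace "network nic")
# instead of running them, so it can be inspected outside an ESXi host.
bounce_nic() {
    echo "esxcli network nic down -n $1"
    echo "esxcli network nic up -n $1"
}

bounce_nic "vmnic0"
```

On a real host you would drop the `echo`s and run the commands directly; be aware this briefly drops all traffic on that uplink, including any NFS/iSCSI paths using it.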
    <item>
      <title>Re: dead I/O on igb-nic (ESXi 6.7)</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232599#M217159</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Your vmnic3 is "only" down for 3 seconds - still too long, and it should not happen.&lt;/P&gt;&lt;P&gt;But did you overlook the APD events before the NIC went down? It seems that you lost the storage connection to your NFS.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sun, 08 Dec 2019 22:49:18 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232599#M217159</guid>
      <dc:creator>berndweyand</dc:creator>
      <dc:date>2019-12-08T22:49:18Z</dc:date>
    </item>
    <item>
      <title>Re: dead I/O on igb-nic (ESXi 6.7)</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232600#M217160</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Tried ESXi 6.7 with the older 4.2.16.8 driver with the same result; also confirmed it is happening on ESXi 6.5 U3.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sat, 14 Dec 2019 01:46:03 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232600#M217160</guid>
      <dc:creator>DataBitz</dc:creator>
      <dc:date>2019-12-14T01:46:03Z</dc:date>
    </item>
    <item>
      <title>Re: dead I/O on igb-nic (ESXi 6.7)</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232601#M217161</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;This is exactly the same issue I have with one of my servers. It's a Supermicro &lt;SPAN&gt;&lt;SPAN&gt;X9DRH-7TF with the onboard 1 Gbit interfaces. Both are Intel 82576; one is used for Management (vmnic2), the other one (vmnic3) for the guests (1x CentOS, 2x Ubuntu 18.04 LTS, 4x Windows Server 2012R2/2019) with its own vSwitch.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Everything was working with ESXi 6.7 Build 13473784. The problem first occurred after installing ESXi 6.7 Build 15160138.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;vmnic2&amp;nbsp; 0000:02:00.0 igb&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Up&amp;nbsp;&amp;nbsp; 1000Mbps&amp;nbsp;&amp;nbsp; Full&amp;nbsp;&amp;nbsp; &amp;lt;MAC address&amp;gt; 1500&amp;nbsp;&amp;nbsp; Intel Corporation 82576 Gigabit Network Connection&lt;/P&gt;&lt;P&gt;vmnic3&amp;nbsp; 0000:02:00.1 igb&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Up&amp;nbsp;&amp;nbsp; 1000Mbps&amp;nbsp;&amp;nbsp; Full&amp;nbsp;&amp;nbsp; &amp;lt;MAC address&amp;gt; 1500&amp;nbsp;&amp;nbsp; Intel Corporation 82576 Gigabit Network Connection&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;SPAN&gt;The Management network is always reachable, while the other one stops passing traffic when there is heavy traffic on it (e.g. backups). The logs don't show anything and the link is always "Up".&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;SPAN&gt;First, some Linux VMs caused&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;Vmxnet3: 24934: &amp;lt;Linux VM&amp;gt;,&amp;lt;MAC address&amp;gt;, portID(83886088): Hang detected,numHangQ: 1, enableGen: 183&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&lt;SPAN&gt;&lt;SPAN&gt;so I changed all Linux VMs to e1000e. 
No "Hang" in logs since... But the problem wasn't resolved: vmnic3 stops passing traffic without any log entry.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;SPAN&gt;&lt;SPAN&gt;esxcli network nic down -n vmnic3&lt;BR /&gt;esxcli network nic up -n vmnic3&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;SPAN&gt;and it immediately starts passing traffic again.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;&lt;SPAN style="background-color: #f6f6f6; color: #3d3d3d; font-family: inherit; font-size: 14px; font-style: normal; font-weight: 400; text-align: left; text-indent: 0px;"&gt;net-igb&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 5.0.5.1.1-5vmw.670.0.0.8169922&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; VMW&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; VMwareCertified&amp;nbsp;&amp;nbsp; 2019-05-09&lt;/SPAN&gt;&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;SPAN&gt;As others stated, a driver update does not seem to solve the issue. Is there anything I could try to resolve this? Perhaps some extended logging?&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;SPAN&gt;Edit:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;SPAN&gt;The issue occurs more or less randomly, but at least once every 48-72 h.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 23 Dec 2019 10:14:26 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232601#M217161</guid>
      <dc:creator>HobbyStudent</dc:creator>
      <dc:date>2019-12-23T10:14:26Z</dc:date>
    </item>
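On the "extended logging" question above, some evidence worth collecting from the ESXi shell around the next hang. A hedged dry-run sketch: the commands are printed rather than executed (they require an ESXi host), and the vmnic name is taken from the post, so adjust it for your own host:

```shell
#!/bin/sh
# Dry-run sketch of data to gather before and after a hang: the driver's
# module parameters, the NIC's error/drop counters, and the vmkernel log.
NIC="vmnic3"

printf '%s\n' \
    "esxcli system module parameters list -m igb" \
    "esxcli network nic stats get -n $NIC" \
    "tail -n 200 /var/log/vmkernel.log"
# Compare the stats counters taken before and after a hang; a rising error
# or drop count points at the driver/hardware rather than the vSwitch.
```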
    <item>
      <title>Re: dead I/O on igb-nic (ESXi 6.7)</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232602#M217162</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hello all, &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I have the same problem ...&lt;/P&gt;&lt;P&gt;I use ESXi 6.7.0 Update 3 (Build 14320388) and am &lt;SPAN style="color: #666666; font-family: proxima-nova, Arial, sans-serif;"&gt;also using one of the 82576 NICs. It's working for me, but with a latency of more than 500 ms ....&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="color: #666666; font-family: proxima-nova, Arial, sans-serif;"&gt;For the moment I have not found a solution .... but if you have other information, I'm all ears ....&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Best regards, &lt;/P&gt;&lt;P&gt;Theo&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 09 Jan 2020 21:48:17 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232602#M217162</guid>
      <dc:creator>theoha</dc:creator>
      <dc:date>2020-01-09T21:48:17Z</dc:date>
    </item>
    <item>
      <title>Re: dead I/O on igb-nic (ESXi 6.7)</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232603#M217163</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I've been having very similar issues since upgrading 3 of my hosts to the latest 6.5 (v&lt;SPAN class="summary-value"&gt;&lt;SPAN data-test-id="Hypervisor:"&gt;15256549).&amp;nbsp; The version I was running prior to this update was very old, as I had been slacking on updates.&amp;nbsp; I don't even recall what version it was, but I think it was a 6.5 version from around 12/2018. &lt;SPAN class="summary-value"&gt;&lt;SPAN data-test-id="Hypervisor:"&gt;As I mentioned, I have 3 hosts (of similar vintage - older Dell M610 blade servers) and they've all got dual quad-port Intel &lt;SPAN class="vx-property-view-section-property-values-table"&gt;&lt;SPAN class="vx-property-view-section-property-value"&gt;&lt;SPAN class="vx-property-view-section-property-value-text"&gt;82576's.&amp;nbsp; &lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;Unfortunately, the upgrade process went completely fine and gave me no indication of a problem until the very last host.&amp;nbsp; I was vMotioning VMs between the 3 of them the entire time and had no issues at all.&amp;nbsp; It was only after I completed the process and went to vMotion off the last host that the vMotion failed and all hell broke loose.&amp;nbsp; &lt;BR /&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="summary-value"&gt;&lt;SPAN data-test-id="Hypervisor:"&gt;&lt;SPAN class="vx-property-view-section-property-values-table"&gt;&lt;SPAN class="vx-property-view-section-property-value"&gt;&lt;SPAN class="vx-property-view-section-property-value-text"&gt;I use 4 of the uplinks in a static LAG (4 active uplinks with IP hash teaming mode on the VMware side, and a static LAG on the switch side).&amp;nbsp; This configuration has been in place for almost 9 years and has worked flawlessly.&amp;nbsp; My findings are completely in line with what has been mentioned here -- after a period of time, either some VMs or all VMs on a host 
stop passing traffic.&amp;nbsp; Simply downing a NIC and bringing it back up brings it back online.&amp;nbsp; When some hosts don't work, it's usually (maybe always?) the last NIC in the group that has a problem.&amp;nbsp; Some hosts can ping hosts that other hosts can't, and vice versa.&amp;nbsp; Using a vmware IP hash calculator (&lt;A href="https://techslaves.org/2014/02/25/vmware-ip-hash-algorithm-calculator/" title="https://techslaves.org/2014/02/25/vmware-ip-hash-algorithm-calculator/"&gt;https://techslaves.org/2014/02/25/vmware-ip-hash-algorithm-calculator/&lt;/A&gt; ) you can work out which vmnic the traffic would be sent over, and from that identify which NIC is the one with the problem.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="summary-value"&gt;&lt;SPAN data-test-id="Hypervisor:"&gt;&lt;SPAN class="vx-property-view-section-property-values-table"&gt;&lt;SPAN class="vx-property-view-section-property-value"&gt;&lt;SPAN class="vx-property-view-section-property-value-text"&gt;I started a support call immediately with vmware on this issue and I've gotten very little help, #1 because my servers are technically only certified up to 6.0 U3.&amp;nbsp; So it's very easy for them just to blame that.&amp;nbsp; However, these servers have been running on 6.5 for at least 2 years with no problem.&amp;nbsp; I wasn't about to roll back to a version that is end of life in 2 months.&amp;nbsp; One of the things we tried that seemed to work at first was just destroying the vswitch and recreating a new one.&amp;nbsp; That actually worked for 3-4 days without an issue.&amp;nbsp; At that point I had recreated the vswitches on the other 2 hosts and started moving some production VMs to them.&amp;nbsp; Then the problems started cropping up again on all three hosts.&amp;nbsp; Every time I call back in to vmware they want to blame either my old servers or my physical switches, so I had to take matters into my
own hands to do some real debugging.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN class="summary-value"&gt;&lt;SPAN data-test-id="Hypervisor:"&gt;&lt;SPAN class="vx-property-view-section-property-values-table"&gt;&lt;SPAN class="vx-property-view-section-property-value"&gt;&lt;SPAN class="vx-property-view-section-property-value-text"&gt;Last weekend I spent many hours debugging it, and this is what I found... You can use pktcap-uw to capture packets at different points through the system.&amp;nbsp; This document describes the different stages of pktcap-uw: &lt;A href="https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.networking.doc/GUID-33B3FDD7-0555-4D54-B9A9-CDBC827504DA.html" title="https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.networking.doc/GUID-33B3FDD7-0555-4D54-B9A9-CDBC827504DA.html"&gt;Capture Points of the pktcap-uw Utility&lt;/A&gt;. Using a physical server that could not ping one of my virtual hosts (while others could), I opened a continuous ping to the VM from that physical server.&amp;nbsp; I could identify the packets coming in from the non-vmware-related portion of the network into the vmware-related switches and eventually reaching the host.&amp;nbsp; The host receives the ping packets and replies to them.&amp;nbsp; I see the return packets exit the VM and enter the vswitch, &lt;STRONG&gt;but they never leave the vswitch and get put on the physical adapter&lt;/STRONG&gt;.&amp;nbsp; Here are the steps I used:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;ping source 10.212.132.50&lt;/P&gt;&lt;P&gt;ping dest 10.100.32.25&lt;/P&gt;&lt;P&gt;dest VM name: ghost&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;PRE __default_attr="plain" __jive_macro_name="code" class="_jivemacro_uid_15803108462559592 jive_macro_code jive_text_macro" data-renderedposition="575_8_1232_800" jivemacro_uid="_15803108462559592" 
modifiedtitle="true"&gt;[root@vm2:~] esxcli network vm&lt;BR /&gt;list&lt;P&gt;World ID&amp;nbsp; Name&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Num Ports&amp;nbsp; Networks&lt;/P&gt;&lt;P&gt;--------&amp;nbsp; ---------&amp;nbsp; ---------&amp;nbsp; --------------------------------&lt;/P&gt;&lt;P&gt; 2102595&amp;nbsp; ghost&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1&amp;nbsp; Data Network A&lt;/P&gt;&lt;P&gt; 2102796&amp;nbsp; Server 2&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1&amp;nbsp; Data Network B&lt;/P&gt;&lt;P&gt; 2102159&amp;nbsp; Server 3&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1&amp;nbsp; Data Network C&lt;/P&gt;&lt;P&gt; 2101973&amp;nbsp; Server 4&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1&amp;nbsp; Data Network A&lt;/P&gt;&lt;P&gt; 2101731&amp;nbsp; Server 5&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1&amp;nbsp; Data Network B&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[root@vm2:~] esxcli network vm port list -w 2102595&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Port ID: 83886095&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; vSwitch: vSwitch3&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Portgroup: Data Network A&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; DVPort ID:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; MAC Address: 00:50:56:bc:2e:e9&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; IP Address: 0.0.0.0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Team Uplink: all(4)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Uplink Port ID: 0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Active Filters:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; &lt;/P&gt;&lt;P&gt;[root@vm2:~] pktcap-uw --switchport 83886095 --capture PortInput --dstip 10.212.132.50 -o- |tcpdump-uw -enr 
-&lt;/P&gt;&lt;P&gt;The switch port id is 0x0500000f.&lt;/P&gt;&lt;P&gt;The session capture point is PortInput.&lt;/P&gt;&lt;P&gt;The session filter destination IP address is 10.212.132.50.&lt;/P&gt;&lt;P&gt;07:09:58.812762 00:50:56:bc:2e:e9 &amp;gt; b8:af:67:70:92:c6, ethertype IPv4 (0x0800), length 74: 10.100.32.25 &amp;gt; 10.212.132.50: ICMP echo reply, id 1, seq 3323, length 40&lt;/P&gt;&lt;P&gt;07:10:03.813107 00:50:56:bc:2e:e9 &amp;gt; b8:af:67:70:92:c6, ethertype IPv4 (0x0800), length 74: 10.100.32.25 &amp;gt; 10.212.132.50: ICMP echo reply, id 1, seq 3324, length 40&lt;/P&gt;&lt;P&gt;07:10:08.809467 00:50:56:bc:2e:e9 &amp;gt; b8:af:67:70:92:c6, ethertype IPv4 (0x0800), length 74: 10.100.32.25 &amp;gt; 10.212.132.50: ICMP echo reply, id 1, seq 3325, length 40&lt;/P&gt;&lt;P&gt;07:10:13.808301 00:50:56:bc:2e:e9 &amp;gt; b8:af:67:70:92:c6, ethertype IPv4 (0x0800), length 74: 10.100.32.25 &amp;gt; 10.212.132.50: ICMP echo reply, id 1, seq 3326, length 40&lt;/P&gt;&lt;P&gt;pktcap: Dumped 4 packet to file -, dropped 0 packets.&lt;/P&gt;&lt;P&gt;pktcap: Done.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[root@vm2:~] pktcap-uw --uplink vmnic9 --capture PortOutput --dstip 10.212.132.50 -o- |tcpdump-uw -enr -&lt;/P&gt;&lt;P&gt;The name of the uplink is vmnic9.&lt;/P&gt;&lt;P&gt;The session capture point is PortOutput.&lt;/P&gt;&lt;P&gt;The session filter destination IP address is 10.212.132.50.&lt;/P&gt;&lt;P&gt;07:11:23.808640 00:50:56:bc:2e:e9 &amp;gt; b8:af:67:70:92:c6, ethertype IPv4 (0x0800), length 74: 10.100.32.25 &amp;gt; 10.212.132.50: ICMP echo reply, id 1, seq 3340, length 40&lt;/P&gt;&lt;P&gt;07:11:28.808943 00:50:56:bc:2e:e9 &amp;gt; b8:af:67:70:92:c6, ethertype IPv4 (0x0800), length 74: 10.100.32.25 &amp;gt; 10.212.132.50: ICMP echo reply, id 1, seq 3341, length 40&lt;/P&gt;&lt;P&gt;07:11:33.810570 00:50:56:bc:2e:e9 &amp;gt; b8:af:67:70:92:c6, ethertype IPv4 (0x0800), length 74: 10.100.32.25 &amp;gt; 10.212.132.50: ICMP echo reply, id 1, seq 3342, length 
40&lt;/P&gt;&lt;P&gt;07:11:38.809677 00:50:56:bc:2e:e9 &amp;gt; b8:af:67:70:92:c6, ethertype IPv4 (0x0800), length 74: 10.100.32.25 &amp;gt; 10.212.132.50: ICMP echo reply, id 1, seq 3343, length 40&lt;/P&gt;&lt;P&gt;pktcap: Dumped 4 packet to file -, dropped 0 packets.&lt;/P&gt;&lt;P&gt;pktcap: Done.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[root@vm2:~] pktcap-uw --uplink vmnic9 --capture UplinkSnd --dstip 10.212.132.50 -o- |tcpdump-uw -enr -&lt;/P&gt;&lt;P&gt;The name of the uplink is vmnic9.&lt;/P&gt;&lt;P&gt;The session capture point is UplinkSnd.&lt;/P&gt;&lt;P&gt;The session filter destination IP address is 10.212.132.50.&lt;/P&gt;&lt;P&gt;pktcap: Dumped 0 packet to file -, dropped 0 packets.&lt;/P&gt;&lt;P&gt;pktcap: Done.&lt;/P&gt;&lt;/PRE&gt;&lt;P&gt;... note that in the last packet capture, no packets were captured.&amp;nbsp; PortInput in my first capture is, basically, the vswitch receiving the packet from the VM.&amp;nbsp; PortOutput is the packet leaving the vswitch.&amp;nbsp; UplinkSnd is the vswitch putting the packet on the physical adapter.&amp;nbsp; Note that I used "--switchport 83886095" for the first capture, which theoretically captures all packets from/to that host's portgroup.&amp;nbsp; I used "--uplink vmnic9" on the other two commands because at that point you're dealing with the vswitch itself.&amp;nbsp; So you have to know (or find by trial and error) the vmnic.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Here are similar tests that produce the same result, but using different "[stage]s" and "[dir]ection" switches for the command instead.&amp;nbsp; My understanding is that the "--capture" points are basically alternates to using --dir and --stage.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;PRE __default_attr="plain" __jive_macro_name="code" class="jive_macro_code _jivemacro_uid_15803115408253885 jive_text_macro" data-renderedposition="1522_8_1232_1184" jivemacro_uid="_15803115408253885"&gt;&lt;P&gt;[root@vm2:~] pktcap-uw --uplink vmnic9 --dir 0 --stage 0 -o- 
|tcpdump-uw -enr -|grep 10.212&lt;/P&gt;&lt;P&gt;The name of the uplink is vmnic9.&lt;/P&gt;&lt;P&gt;The Stage is Pre.&lt;/P&gt;&lt;P&gt;pktcap: The output file is -.&lt;/P&gt;&lt;P&gt;pktcap: No server port specifed, select 40524 as the port.&lt;/P&gt;&lt;P&gt;pktcap: Local CID 2.&lt;/P&gt;&lt;P&gt;pktcap: Listen on port 40524.&lt;/P&gt;&lt;P&gt;pktcap: Accept...&lt;/P&gt;&lt;P&gt;pktcap: Vsock connection from port 1152 cid 2.&lt;/P&gt;&lt;P&gt;reading from file -, link-type EN10MB (Ethernet)&lt;/P&gt;&lt;P&gt;07:30:38.809784 b8:af:67:70:92:c6 &amp;gt; 00:50:56:bc:2e:e9, ethertype IPv4 (0x0800), length 74: 10.212.132.50 &amp;gt; 10.100.32.25: ICMP echo request, id 1, seq 3571, length 40&lt;/P&gt;&lt;P&gt;07:30:43.808875 b8:af:67:70:92:c6 &amp;gt; 00:50:56:bc:2e:e9, ethertype IPv4 (0x0800), length 74: 10.212.132.50 &amp;gt; 10.100.32.25: ICMP echo request, id 1, seq 3572, length 40&lt;/P&gt;&lt;P&gt;tcpdump-uw: pcap_loop: error reading dump file: Interrupted system call&lt;/P&gt;&lt;P&gt;pktcap: Join with dump thread failed.&lt;/P&gt;&lt;P&gt;pktcap: Destroying session 128.&lt;/P&gt;&lt;P&gt;pktcap:&lt;/P&gt;&lt;P&gt;pktcap: Dumped 130 packet to file -, dropped 0 packets.&lt;/P&gt;&lt;P&gt;pktcap: Done.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[root@vm2:~] pktcap-uw --uplink vmnic9 --dir 0 --stage 1 -o- |tcpdump-uw -enr -|grep 10.212&lt;/P&gt;&lt;P&gt;The name of the uplink is vmnic9.&lt;/P&gt;&lt;P&gt;The Stage is Post.&lt;/P&gt;&lt;P&gt;pktcap: The output file is -.&lt;/P&gt;&lt;P&gt;pktcap: No server port specifed, select 40537 as the port.&lt;/P&gt;&lt;P&gt;pktcap: Local CID 2.&lt;/P&gt;&lt;P&gt;pktcap: Listen on port 40537.&lt;/P&gt;&lt;P&gt;pktcap: Accept...&lt;/P&gt;&lt;P&gt;reading from file -, link-type EN10MB (Ethernet)&lt;/P&gt;&lt;P&gt;pktcap: Vsock connection from port 1153 cid 2.&lt;/P&gt;&lt;P&gt;07:30:53.810564 b8:af:67:70:92:c6 &amp;gt; 00:50:56:bc:2e:e9, ethertype IPv4 (0x0800), length 74: 10.212.132.50 &amp;gt; 10.100.32.25: ICMP echo request, id 1, 
seq 3574, length 40&lt;/P&gt;&lt;P&gt;07:30:58.812753 b8:af:67:70:92:c6 &amp;gt; 00:50:56:bc:2e:e9, ethertype IPv4 (0x0800), length 74: 10.212.132.50 &amp;gt; 10.100.32.25: ICMP echo request, id 1, seq 3575, length 40&lt;/P&gt;&lt;P&gt;tcpdump-uw: pcap_loop: error reading dump file: Interrupted system call&lt;/P&gt;&lt;P&gt;pktcap: Join with dump thread failed.&lt;/P&gt;&lt;P&gt;pktcap: Destroying session 129.&lt;/P&gt;&lt;P&gt;pktcap:&lt;/P&gt;&lt;P&gt;pktcap: Dumped 91 packet to file -, dropped 0 packets.&lt;/P&gt;&lt;P&gt;pktcap: Done.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[root@vm2:~] pktcap-uw --uplink vmnic9 --dir 1 --stage 0 -o- |tcpdump-uw -enr -|grep 10.212&lt;/P&gt;&lt;P&gt;The name of the uplink is vmnic9.&lt;/P&gt;&lt;P&gt;The Stage is Pre.&lt;/P&gt;&lt;P&gt;pktcap: The output file is -.&lt;/P&gt;&lt;P&gt;pktcap: No server port specifed, select 40550 as the port.&lt;/P&gt;&lt;P&gt;pktcap: Local CID 2.&lt;/P&gt;&lt;P&gt;pktcap: Listen on port 40550.&lt;/P&gt;&lt;P&gt;reading from file -, link-type EN10MB (Ethernet)&lt;/P&gt;&lt;P&gt;pktcap: Accept...&lt;/P&gt;&lt;P&gt;pktcap: Vsock connection from port 1154 cid 2.&lt;/P&gt;&lt;P&gt;07:31:13.811837 00:50:56:bc:2e:e9 &amp;gt; b8:af:67:70:92:c6, ethertype IPv4 (0x0800), length 74: 10.100.32.25 &amp;gt; 10.212.132.50: ICMP echo reply, id 1, seq 3578, length 40&lt;/P&gt;&lt;P&gt;07:31:18.813389 00:50:56:bc:2e:e9 &amp;gt; b8:af:67:70:92:c6, ethertype IPv4 (0x0800), length 74: 10.100.32.25 &amp;gt; 10.212.132.50: ICMP echo reply, id 1, seq 3579, length 40&lt;/P&gt;&lt;P&gt;07:31:23.811731 00:50:56:bc:2e:e9 &amp;gt; b8:af:67:70:92:c6, ethertype IPv4 (0x0800), length 74: 10.100.32.25 &amp;gt; 10.212.132.50: ICMP echo reply, id 1, seq 3580, length 40&lt;/P&gt;&lt;P&gt;tcpdump-uw: pcap_loop: error reading dump file: Interrupted system call&lt;/P&gt;&lt;P&gt;pktcap: Join with dump thread failed.&lt;/P&gt;&lt;P&gt;pktcap: Destroying session 130.&lt;/P&gt;&lt;P&gt;pktcap:&lt;/P&gt;&lt;P&gt;pktcap: Dumped 8 packet to 
file -, dropped 0 packets.&lt;/P&gt;&lt;P&gt;pktcap: Done.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;[root@vm2:~] pktcap-uw --uplink vmnic9 --dir 1 --stage 1 -o- |tcpdump-uw -enr -|grep 10.212&lt;/P&gt;&lt;P&gt;The name of the uplink is vmnic9.&lt;/P&gt;&lt;P&gt;The Stage is Post.&lt;/P&gt;&lt;P&gt;pktcap: The output file is -.&lt;/P&gt;&lt;P&gt;pktcap: No server port specifed, select 40560 as the port.&lt;/P&gt;&lt;P&gt;pktcap: Local CID 2.&lt;/P&gt;&lt;P&gt;pktcap: Listen on port 40560.&lt;/P&gt;&lt;P&gt;reading from file -, link-type EN10MB (Ethernet)&lt;/P&gt;&lt;P&gt;pktcap: Accept...&lt;/P&gt;&lt;P&gt;pktcap: Vsock connection from port 1155 cid 2.&lt;/P&gt;&lt;P&gt;pktcap: Join with dump thread failed.&lt;/P&gt;&lt;P&gt;tcpdump-uw: pcap_loop: error reading dump file: Interrupted system call&lt;/P&gt;&lt;P&gt;pktcap: Destroying session 131.&lt;/P&gt;&lt;P&gt;pktcap:&lt;/P&gt;&lt;P&gt;pktcap: Dumped 0 packet to file -, dropped 0 packets.&lt;/P&gt;&lt;P&gt;pktcap: Done.&lt;/P&gt;&lt;/PRE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;So in this test, the first test is dir 0/stage 0, then dir 0/stage 1, then dir 1/stage 0, then finally dir 1/stage 1 where it fails.&amp;nbsp; Again, same tests just different variations of the commands.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Then I found the "trace" switch for that command and this further proves my findings.&amp;nbsp; Here is a &lt;STRONG&gt;successful&lt;/STRONG&gt; trace of an ICMP packet:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;PRE __default_attr="plain" __jive_macro_name="code" class="jive_macro_code jive_text_macro _jivemacro_uid_15803118375765782" data-renderedposition="2811_8_1232_608" jivemacro_uid="_15803118375765782"&gt;&lt;P&gt;[root@vm2:~] pktcap-uw --trace --ip 10.100.1.5&lt;/P&gt;&lt;P&gt;The trace session is enabled.&lt;/P&gt;&lt;P&gt;The session filter IP(src or dst) address is 10.100.1.5.&lt;/P&gt;&lt;P&gt;No server port specifed, select 56026 as the port.&lt;/P&gt;&lt;P&gt;Output the packet info to 
console.&lt;/P&gt;&lt;P&gt;Local CID 2.&lt;/P&gt;&lt;P&gt;Listen on port 56026.&lt;/P&gt;&lt;P&gt;Accept...&lt;/P&gt;&lt;P&gt;Vsock connection from port 1207 cid 2.&lt;/P&gt;&lt;P&gt;18:39:04.106975[1] Captured at PktFree point, TSO not enabled, Checksum not offloaded and not verified, length 74.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PATH:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.106955] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; UplinkRcv |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.106957] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; UplinkRcvKernel |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.106958] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PortInput |&amp;nbsp;&amp;nbsp; 83886086 |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.106958] 
|&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; IOChain |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; | UplinkDoSwLRO@vmkernel#nover&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.106959] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; EtherswitchDispath |&amp;nbsp;&amp;nbsp; 83886086 |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.106961] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; EtherswitchOutput |&amp;nbsp;&amp;nbsp; 83886095 |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.106961] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PortOutput |&amp;nbsp;&amp;nbsp; 83886095 |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.106962] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; IOChain |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; | 
VLAN_OutputProcessor@com.vmware.vswitch#1.0.0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.106963] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; IOChain |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; | VSwitchDisablePT@com.vmware.vswitch#1.0.0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.106968] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; VnicRx |&amp;nbsp;&amp;nbsp; 83886095 |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.106974] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PktFree |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; |&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;18:39:04.107284[2] Captured at PktFree point, TSO not enabled, Checksum not offloaded and not verified, VLAN tag 101, length 74.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PATH:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.107187] 
|&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; VnicTx |&amp;nbsp;&amp;nbsp; 83886095 |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.107189] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PortInput |&amp;nbsp;&amp;nbsp; 83886095 |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.107190] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; IOChain |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; | VLAN_InputProcessor@com.vmware.vswitch#1.0.0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.107192] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; EtherswitchDispath |&amp;nbsp;&amp;nbsp; 83886095 |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.107195] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PortOutput |&amp;nbsp;&amp;nbsp; 83886084 
|&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.107195] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; IOChain |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; | UplinkGenericOffload@vmkernel#nover&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.107196] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; IOChain |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; | UplinkTSO6ExtHdrs@vmkernel#nover&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.107197] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; IOChain |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; | UplinkCSum6ExtHdrs@vmkernel#nover&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.107197] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 
IOChain |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; | Uplink_BuildWritableInetHeaders@vmkernel#nover&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.107198] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; IOChain |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; | NetSchedInput@vmkernel#nover&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.107200] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; UplinkSndKernel |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.107201] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; UplinkSnd |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:39:04.107281] 
|&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PktFree |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; |&lt;/P&gt;&lt;/PRE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;...The first packet is the ICMP being received, and the second is the reply.&amp;nbsp; Note lines 36 and 37-- the &lt;STRONG&gt;UplinkSndKernel&lt;/STRONG&gt; and &lt;STRONG&gt;UplinkSnd&lt;/STRONG&gt; before the PktFree.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Now here is an &lt;STRONG&gt;UNsuccessful&lt;/STRONG&gt; ICMP trace:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;PRE __default_attr="plain" __jive_macro_name="code" class="_jivemacro_uid_15803120007193358 jive_macro_code jive_text_macro" data-renderedposition="3524_8_1232_576" jivemacro_uid="_15803120007193358"&gt;&lt;P&gt;[root@vm2:~] pktcap-uw --trace --ip 10.100.0.54&lt;/P&gt;&lt;P&gt;The trace session is enabled.&lt;/P&gt;&lt;P&gt;The session filter IP(src or dst) address is 10.100.0.54.&lt;/P&gt;&lt;P&gt;No server port specifed, select 55910 as the port.&lt;/P&gt;&lt;P&gt;Output the packet info to console.&lt;/P&gt;&lt;P&gt;Local CID 2.&lt;/P&gt;&lt;P&gt;Listen on port 55910.&lt;/P&gt;&lt;P&gt;Accept...&lt;/P&gt;&lt;P&gt;Vsock connection from port 1205 cid 2.&lt;/P&gt;&lt;P&gt;18:34:21.838652[1] Captured at PktFree point, TSO not enabled, Checksum not offloaded and not verified, length 74.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PATH:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:34:21.838622] 
|&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; UplinkRcv |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:34:21.838626] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; UplinkRcvKernel |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:34:21.838627] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PortInput |&amp;nbsp;&amp;nbsp; 83886088 |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:34:21.838628] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; IOChain |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; | UplinkDoSwLRO@vmkernel#nover&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:34:21.838630] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 
EtherswitchDispath |&amp;nbsp;&amp;nbsp; 83886088 |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:34:21.838633] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; EtherswitchOutput |&amp;nbsp;&amp;nbsp; 83886095 |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:34:21.838634] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PortOutput |&amp;nbsp;&amp;nbsp; 83886095 |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:34:21.838634] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; IOChain |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; | VLAN_OutputProcessor@com.vmware.vswitch#1.0.0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:34:21.838636] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; IOChain |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; | VSwitchDisablePT@com.vmware.vswitch#1.0.0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- 
[18:34:21.838642] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; VnicRx |&amp;nbsp;&amp;nbsp; 83886095 |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:34:21.838651] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PktFree |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; |&lt;/P&gt;&lt;BR /&gt;&lt;P&gt;18:34:21.838900[2] Captured at PktFree point, TSO not enabled, Checksum not offloaded and not verified, VLAN tag 101, length 74.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PATH:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:34:21.838878] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; VnicTx |&amp;nbsp;&amp;nbsp; 83886095 |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:34:21.838881] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PortInput |&amp;nbsp;&amp;nbsp; 83886095 
|&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:34:21.838882] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; IOChain |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; | VLAN_InputProcessor@com.vmware.vswitch#1.0.0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:34:21.838885] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; EtherswitchDispath |&amp;nbsp;&amp;nbsp; 83886095 |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:34:21.838888] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PortOutput |&amp;nbsp;&amp;nbsp; 83886088 |&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:34:21.838889] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; IOChain |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; | UplinkGenericOffload@vmkernel#nover&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:34:21.838890] 
|&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; IOChain |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; | UplinkTSO6ExtHdrs@vmkernel#nover&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:34:21.838891] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; IOChain |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; | UplinkCSum6ExtHdrs@vmkernel#nover&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:34:21.838891] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; IOChain |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; | Uplink_BuildWritableInetHeaders@vmkernel#nover&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:34:21.838892] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; IOChain |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; | 
NetSchedInput@vmkernel#nover&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; +- [18:34:21.838899] |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; PktFree |&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; |&lt;/P&gt;&lt;/PRE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;... Note the &lt;STRONG&gt;lack&lt;/STRONG&gt; of &lt;STRONG&gt;UplinkSndKernel&lt;/STRONG&gt; and &lt;STRONG&gt;UplinkSnd&lt;/STRONG&gt; before the PktFree.&amp;nbsp; The packet never gets put on the line.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I didn't mention it earlier, but just to put it out there: before doing all of this this past weekend, I did a fresh install of the latest ESXi 6.7 just to see if the problem was something that had lingered from previous upgrades.&amp;nbsp; It didn't help, obviously, because here I am.&amp;nbsp; All of the tests and the capture above are from a fresh install of the latest 6.7 as of last weekend.&amp;nbsp; I figured I would try this before forcing myself into installing a soon-to-be-obsolete 6.0 version.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Now, the interesting thing was that the igb driver that ships inside 6.7 was version 5.0.5, &lt;STRONG&gt;I believe&lt;/STRONG&gt;.&amp;nbsp; Searching around for an update to it, I found this: &lt;A href="https://my.vmware.com/web/vmware/details?downloadGroup=DT-ESXI60-INTEL-IGB-533&amp;amp;productId=491" title="https://my.vmware.com/web/vmware/details?downloadGroup=DT-ESXI60-INTEL-IGB-533&amp;amp;productId=491"&gt;Download VMware vSphere&lt;/A&gt;, which is version 5.3.3 of the driver.&amp;nbsp; It is also listed as compatible with version 6.7 on another page, although that link doesn't say so.
After upgrading the driver to v5.3.3 I &lt;STRONG&gt;haven't had a problem... &lt;SPAN style="text-decoration: underline;"&gt;yet&lt;/SPAN&gt;&lt;/STRONG&gt;.&amp;nbsp; However, after finding and reading this thread I am not so confident that the problem is solved.&amp;nbsp; After all, I mentioned earlier that simply recreating the vSwitch had lasted 3-4 days before, and I'm on day 3 now.&amp;nbsp; I also have only 3 non-production VMs on the host right now.&amp;nbsp; I am glad to see I am not the only one having the problem, and maybe if enough of us bark up the right trees we can get some sort of solution to this.&amp;nbsp; I know I need to replace my aging servers, and I have it in the plans to do so about a year from now, but they all worked fine and suited our needs before this.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 29 Jan 2020 15:46:27 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232603#M217163</guid>
      <dc:creator>MattSnead</dc:creator>
      <dc:date>2020-01-29T15:46:27Z</dc:date>
    </item>
    <item>
      <title>Re: dead I/O on igb-nic (ESXi 6.7)</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232604#M217164</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;.... AAANNNDDD it just happened to me again.&amp;nbsp; And this time it happened on the second uplink (vmnic4, 5, 8, and 9 make up the LAG; vmnic5 is the one that faulted).&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 30 Jan 2020 03:47:30 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232604#M217164</guid>
      <dc:creator>MattSnead</dc:creator>
      <dc:date>2020-01-30T03:47:30Z</dc:date>
    </item>
    <item>
      <title>Re: dead I/O on igb-nic (ESXi 6.7)</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232605#M217165</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;FYI, in case anyone finds this post in the future: I have not found any solution to make this work on the latest versions of 6.5 or 6.7.&amp;nbsp; The only solutions I have found were rolling back to 6.0 (which is fine with all the latest patches as of this writing) or to 6.5 build 10719125.&amp;nbsp; Something in 6.5 build 10884925 is what breaks it for me.&amp;nbsp; If you install a fresh 6.5 U2 you can create a custom baseline that only includes updates released before 11/27/2018 (11/26 or earlier).&amp;nbsp; That will take you up to build 10719125.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 13 Feb 2020 12:58:16 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232605#M217165</guid>
      <dc:creator>MattSnead</dc:creator>
      <dc:date>2020-02-13T12:58:16Z</dc:date>
    </item>
    <item>
      <title>Re: dead I/O on igb-nic (ESXi 6.7)</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232606#M217166</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;We experienced the same issue with ESXi 6.7 and quad port cards:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;TABLE&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD&gt;Vendor:&lt;/TD&gt;&lt;TD&gt;Intel Corporation &lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt; Vendor ID:&lt;/TD&gt;&lt;TD&gt;0x8086 &lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt; Device ID:&lt;/TD&gt;&lt;TD&gt;0x10e8 &lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt; Sub-Vendor ID:&lt;/TD&gt;&lt;TD&gt;0x8086 &lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt; Sub-Device ID:&lt;/TD&gt;&lt;TD&gt;0xa02c &lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt; Device name:&lt;/TD&gt;&lt;TD&gt;82576 Gigabit Network Connection &lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;VMware informed us that the support for this card was dropped in ESXi 6.7:&lt;/P&gt;&lt;P&gt;&lt;A class="jive-link-external-small" href="https://www.vmware.com/resources/compatibility/detail.php?deviceCategory=io&amp;amp;productid=12997&amp;amp;deviceCategory=io&amp;amp;details=1&amp;amp;VID=8086&amp;amp;DID=10e8&amp;amp;SVID=8086&amp;amp;SSID=a02c&amp;amp;page=1&amp;amp;display_interval=10&amp;amp;sortColumn=Partner&amp;amp;sortOrder=Asc" rel="nofollow"&gt;https://www.vmware.com/resources/compatibility/detail.php?deviceCategory=io&amp;amp;productid=12997&amp;amp;deviceCategory=io&amp;amp;details=1&amp;amp;VID=8086&amp;amp;DID=10e8&amp;amp;SVID=8086&amp;amp;SSID=a02c&amp;amp;page=1&amp;amp;display_interval=10&amp;amp;sortColumn=Partner&amp;amp;sortOrder=Asc&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;As we have just a few of these cards we decided to replace them.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 28 Feb 2020 21:52:19 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232606#M217166</guid>
      <dc:creator>horfor</dc:creator>
      <dc:date>2020-02-28T21:52:19Z</dc:date>
    </item>
    <item>
      <title>Re: dead I/O on igb-nic (ESXi 6.7)</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232607#M217167</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I have exactly the same problem. Any solution?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 19 May 2020 13:24:13 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232607#M217167</guid>
      <dc:creator>leotog</dc:creator>
      <dc:date>2020-05-19T13:24:13Z</dc:date>
    </item>
    <item>
      <title>Re: dead I/O on igb-nic (ESXi 6.7)</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232608#M217168</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I've had this same issue, and this forum post has been helpful in troubleshooting it. I had issues with both a 4-port 82576 card and a 2-port 82575EB card in my 2-host lab environment. I was using the original 5.0.5 igb driver that is included with ESXi 6.7u3 when I first experienced the issue: the 2nd host, the one not hosting vCenter, dropped its connection to vCenter. Previously I noticed it mainly with vMotions, but then I started noticing it with any high-traffic operation, even inside VMs. Issues happened more often when I had 2 uplink ports on a vSwitch for redundancy. Multi-NIC vMotion is set up as well.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;So I went from igb 5.0.5 to 5.2.5 and, like others have said, the issue persisted. I was going to try 5.3.3, even though another user had mentioned having the same issue with that version. However, I started acquiring every version of the igb driver that I could find and noticed that 5.3.2 was the last version to be a similar size to 5.3.0 and 5.3.1, so I tried 5.3.2 instead. So far I have not had any of the issues I was seeing before. With the other driver versions I would see the issue during roughly 50% of vMotions and within a few minutes of any high-transaction operation. That includes a gigabit-speed backup that maxes out 1 uplink, which would have failed before with the other drivers.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Also, pfSense was having issues, logging the error "vmx0: watchdog timeout on queue 0" while pushing a decent amount of internet traffic but not maxing out my connection. I could only get around it by using an e1000 nic instead of vmxnet3. Now that is working with vmxnet3 as well.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Time will tell, but I thought I'd share my early results.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;igb 5.3.2 download:&lt;/P&gt;&lt;P&gt;&lt;A href="https://my.vmware.com/web/vmware/details?downloadGroup=DT-ESXI55-INTEL-IGB-532&amp;amp;productId=323" title="https://my.vmware.com/web/vmware/details?downloadGroup=DT-ESXI55-INTEL-IGB-532&amp;amp;productId=323"&gt;https://my.vmware.com/web/vmware/details?downloadGroup=DT-ESXI55-INTEL-IGB-532&amp;amp;productId=323&lt;/A&gt; &lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 10 Jul 2020 20:41:15 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232608#M217168</guid>
      <dc:creator>VirtualSlam</dc:creator>
      <dc:date>2020-07-10T20:41:15Z</dc:date>
    </item>
    <item>
      <title>Re: dead I/O on igb-nic (ESXi 6.7)</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232609#M217169</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Scratch the pfSense part. It still has issues with vmxnet3, but I was only half suspecting that it was a related issue. Everything else still looks good with 5.3.2.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 10 Jul 2020 20:46:44 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/dead-I-O-on-igb-nic-ESXi-6-7/m-p/2232609#M217169</guid>
      <dc:creator>VirtualSlam</dc:creator>
      <dc:date>2020-07-10T20:46:44Z</dc:date>
    </item>
  </channel>
</rss>

