TryllZ
Expert

vCenter 7.0.2 not migrating free vmnics to dvSwitch?!

Hi,

I have 2 free vmnics on a host which I'm trying to migrate to a dvSwitch, but the migration fails.

Earlier I used to get a Rollback error; now I get an Operation Timed Out error.

Any thoughts on what's going on?

The host FQDN is w-esx-vsn-01.vlab.lab, IP 192.168.10.37
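In case it helps to reproduce this outside the UI, below is roughly the operation I'm attempting, as a pyVmomi sketch (a sketch only: credentials, switch name, and vmnic list are placeholders from my lab, and the spec-building follows the pyvmomi community samples, not whatever vCenter runs internally):

Spoiler
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Placeholder lab credentials/names -- adjust before running.
VCENTER, USER, PWD = "vcenter.vlab.lab", "administrator@vsphere.local", "***"
HOST_NAME, DVS_NAME = "w-esx-vsn-01.vlab.lab", "DSwitch"
# The backing below replaces the host's whole uplink list on the dvSwitch,
# so list every vmnic that should end up attached, not just the new ones.
PNICS = ["vmnic4", "vmnic5"]

ctx = ssl._create_unverified_context()  # lab only: self-signed certs
si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=ctx)
content = si.RetrieveContent()

host = content.searchIndex.FindByDnsName(dnsName=HOST_NAME, vmSearch=False)
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == DVS_NAME)
view.Destroy()

# Desired pnic backing for this host on the dvSwitch.
backing = vim.dvs.HostMember.PnicBacking()
backing.pnicSpec = [vim.dvs.HostMember.PnicSpec(pnicDevice=p) for p in PNICS]

host_member = vim.dvs.HostMember.ConfigSpec()
host_member.operation = vim.ConfigSpecOperation.edit  # host is already a member
host_member.host = host
host_member.backing = backing

spec = vim.DistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion
spec.host = [host_member]

WaitForTask(dvs.ReconfigureDvs_Task(spec))  # this is the call that times out
Disconnect(si)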

Portgroup - https://i.ibb.co/WVpwbQr/W-SRV-ADDC-01-2021-07-16-20-14-48.png

available vmnics - https://i.ibb.co/LPhh1zB/W-SRV-ADDC-01-2021-07-16-20-14-57.png

Thanks..

Spoiler
2021-07-16T20:01:52.199+01:00 warning vpxd[07206] [Originator@6876 sub=Default] AsyncResolve took 1988791 us, 00:00:01.988791 (hh:mm:ss.us) on host w-esx-vsn-01.vlab.lab
2021-07-16T20:01:55.577+01:00 info vpxd[07937] [Originator@6876 sub=vpxLro opID=W3-aa] [VpxLRO] -- FINISH lro-2314
2021-07-16T20:01:59.729+01:00 warning vpxd[07918] [Originator@6876 sub=Default] Failed to connect socket; <io_obj p:0x00007f12bc4c97c8, h:59, <TCP '192.168.10.40 : 32860'>, <TCP '10.0.20.14 : 443'>>, e: 110(Connection timed out)
2021-07-16T20:01:59.729+01:00 warning vpxd[07984] [Originator@6876 sub=HostAccess] Creating SOAP stub adapter failed: N7Vmacore24InvalidArgumentExceptionE(No version for VMODL calls to <cs p:00007f12ac5980b0, TCP:d-esx-usr-04.vlab.lab:443>)
--> [context]zKq7AVECAQAAAEcGEgEYdnB4ZAAAUnE0bGlidm1hY29yZS5zbwAACaUpABGaKgCsZS8B7W8BbGlic3NvY2xpZW50LnNvAAHXegEB/PcBAQkAAgEjAQICz9YWbGlidm1vbWkuc28AAhlJEwPN7512cHhkAATo4/ZsaWJ2aW0tdHlwZXMuc28Ag9kEKgGDcF0oAYMSmSgBg7uoKAGDVzQoAYMOCykBAPFAIAACnSAAJgA0BYd/AGxpYnB0aHJlYWQuc28uMAAGvzUPbGliYy5zby42AA==[/context].
2021-07-16T20:02:20.170+01:00 warning vpxd[06931] [Originator@6876 sub=VpxProfiler opID=EventManagerProcessJobs-29f232d0] EventManagerProcessJobs [TotalTime] took 30000 ms
2021-07-16T20:02:23.156+01:00 error vpxd[07047] [Originator@6876 sub=hostMethod] Commit call for method [updateNetworkConfig] transaction Id [1] failed on host [[vim.HostSystem:host-1044,w-esx-vsn-01.vlab.lab]] with exception:[(vmodl.fault.SystemError) {
--> faultCause = (vmodl.MethodFault) null,
--> faultMessage = <unset>,
--> reason = "Transaction has rolled back on the host."
--> msg = "Received SOAP response fault from [<cs p:0000564a685dc9a0, TCP:w-esx-vsn-01.vlab.lab:443>]: commitTransaction
--> Received SOAP response fault from [<cs p:00000036dcb74130, TCP:localhost:8307>]: commitTransaction
--> A general system error occurred: Transaction has rolled back on the host."
--> }]
2021-07-16T20:02:23.793+01:00 warning vpxd[07007] [Originator@6876 sub=Default] Failed to connect socket; <io_obj p:0x00007f12c005e798, h:32, <TCP '192.168.10.40 : 59774'>, <TCP '192.168.10.37 : 443'>>, e: 110(Connection timed out)
2021-07-16T20:02:23.793+01:00 error vpxd[07007] [Originator@6876 sub=vmomi.soapStub[17]] Resetting stub adapter for server <cs p:0000564a685dc9a0, TCP:w-esx-vsn-01.vlab.lab:443> : service state request failed: N7Vmacore15SystemExceptionE(Connection timed out)
-->
[context]zKq7AVECAQAAAEcGEgEPdnB4ZAAAUnE0bGlidm1hY29yZS5zbwAACaUpABGaKgAjZC8AAssgAA3tIABu+CAASPwgANkGIQCypSAA8UAgAAKdIAAmADQBh38AbGlicHRocmVhZC5zby4wAAK/NQ9saWJjLnNvLjYA[/context]
2021-07-16T20:02:23.794+01:00 warning vpxd[07007] [Originator@6876 sub=vmomi.soapStub[17]] Terminating invocation: server=<cs p:0000564a685dc9a0, TCP:w-esx-vsn-01.vlab.lab:443>, moref=vpxapi.VpxaService:vpxa, method=fetchQuickStats
2021-07-16T20:02:23.796+01:00 warning vpxd[08035] [Originator@6876 sub=QuickStats] Error returned from calling FetchQuickStats for [vim.HostSystem:host-1044,w-esx-vsn-01.vlab.lab]: N7Vmacore17CanceledExceptionE(Operation was canceled)
-->
[context]zKq7AVECAQAAAEcGEgEddnB4ZAAAUnE0bGlidm1hY29yZS5zbwAACaUpABGaKgDbYi8AovIsAZAkFmxpYnZtb21pLnNvAAENJRYBXC8WAaQxFgDhyCwA8PssALn8LACkHi0A9CIpAPQiKQDKZS0A9CIpAPQoIQAY2SAA1e0gAG74IABI/CAA2QYhALKlIADxQCAAAp0gACYANAKHfwBsaWJwdGhyZWFkLnNvLjAAA781D2xpYmMuc28uNgA=[/context]
2021-07-16T20:02:23.928+01:00 info vpxd[06921] [Originator@6876 sub=vpxLro opID=kr78etp2-a0-h5-bd] [VpxLRO] -- FINISH lro-2347
2021-07-16T20:02:23.928+01:00 info vpxd[06921] [Originator@6876 sub=Default opID=kr78etp2-a0-h5-bd] [VpxLRO] -- ERROR lro-2347 -- SessionManager -- vim.SessionManager.loginByToken: vim.fault.InvalidLogin:
--> Result:
--> (vim.fault.InvalidLogin) {
--> faultCause = (vmodl.MethodFault) null,
--> faultMessage = <unset>
--> msg = ""
--> }
--> Args:
-->
--> Arg locale:
--> "en"
2021-07-16T20:02:24.938+01:00 info vpxd[07980] [Originator@6876 sub=HostCnx opID=CheckforMissingHeartbeats-77411b53] [VpxdHostCnx] No heartbeats received from host; cnx: 525cad65-509e-ac27-6822-18024f709264, h: host-1032, time since last heartbeat: 442991ms
2021-07-16T20:02:24.938+01:00 info vpxd[07980] [Originator@6876 sub=HostCnx opID=CheckforMissingHeartbeats-77411b53] [VpxdHostCnx] No heartbeats received from host; cnx: 5287fdef-319c-f419-a951-d64cf51ad04e, h: host-1016, time since last heartbeat: 442991ms
2021-07-16T20:02:24.938+01:00 info vpxd[07980] [Originator@6876 sub=HostCnx opID=CheckforMissingHeartbeats-77411b53] [VpxdHostCnx] No heartbeats received from host; cnx: 527709f3-0879-07eb-1912-5a63e4a2c677, h: host-1035, time since last heartbeat: 442991ms
2021-07-16T20:02:24.938+01:00 info vpxd[07980] [Originator@6876 sub=HostCnx opID=CheckforMissingHeartbeats-77411b53] [VpxdHostCnx] No heartbeats received from host; cnx: 524854c1-4378-ef77-53f4-c357215fd5f5, h: host-1038, time since last heartbeat: 442991ms
2021-07-16T20:02:24.938+01:00 info vpxd[07980] [Originator@6876 sub=HostCnx opID=CheckforMissingHeartbeats-77411b53] [VpxdHostCnx] No heartbeats received from host; cnx: 52ed8dd1-7e59-151e-9b52-e24aaaa7053f, h: host-1041, time since last heartbeat: 442991ms
2021-07-16T20:02:24.938+01:00 info vpxd[07980] [Originator@6876 sub=HostCnx opID=CheckforMissingHeartbeats-77411b53] [VpxdHostCnx] No heartbeats received from host; cnx: 52a4c002-dbfd-dfce-55b7-ce754f4f5221, h: host-1063, time since last heartbeat: 442991ms
2021-07-16T20:02:24.938+01:00 info vpxd[07980] [Originator@6876 sub=HostCnx opID=CheckforMissingHeartbeats-77411b53] [VpxdHostCnx] No heartbeats received from host; cnx: 52e451e5-a764-261f-46ed-d6fdd4758221, h: host-1060, time since last heartbeat: 442991ms
2021-07-16T20:02:24.938+01:00 info vpxd[07980] [Originator@6876 sub=HostCnx opID=CheckforMissingHeartbeats-77411b53] [VpxdHostCnx] No heartbeats received from host; cnx: 52084541-2887-a2a2-4199-f927ba636602, h: host-1025, time since last heartbeat: 442991ms
2021-07-16T20:02:24.938+01:00 info vpxd[07980] [Originator@6876 sub=HostCnx opID=CheckforMissingHeartbeats-77411b53] [VpxdHostCnx] No heartbeats received from host; cnx: 52acd480-2560-6edb-fb18-e2fb691f1b9e, h: host-1047, time since last heartbeat: 442991ms
2021-07-16T20:02:24.938+01:00 info vpxd[07980] [Originator@6876 sub=HostCnx opID=CheckforMissingHeartbeats-77411b53] [VpxdHostCnx] No heartbeats received from host; cnx: 52f70881-cc91-04e8-1f54-2395743e05f1, h: host-1050, time since last heartbeat: 442991ms
2021-07-16T20:02:24.938+01:00 info vpxd[07980] [Originator@6876 sub=HostCnx opID=CheckforMissingHeartbeats-77411b53] [VpxdHostCnx] No heartbeats received from host; cnx: 52d7da83-8eb3-c005-e552-23ae613d2656, h: host-1022, time since last heartbeat: 442991ms
2021-07-16T20:02:24.938+01:00 info vpxd[07980] [Originator@6876 sub=HostCnx opID=CheckforMissingHeartbeats-77411b53] [VpxdHostCnx] No heartbeats received from host; cnx: 52514759-e6f6-a197-9385-8a763b4b56c2, h: host-1019, time since last heartbeat: 442991ms
2021-07-16T20:02:24.938+01:00 info vpxd[07980] [Originator@6876 sub=HostCnx opID=CheckforMissingHeartbeats-77411b53] [VpxdHostCnx] No heartbeats received from host; cnx: 52b75c9f-8ba4-8ef6-05fc-fb39591d9ebe, h: host-1028, time since last heartbeat: 442991ms
2021-07-16T20:02:24.942+01:00 info vpxd[07911] [Originator@6876 sub=vpxLro opID=sps-SHPoller-461844-219-80] [VpxLRO] -- BEGIN lro-2353 -- SessionManager -- vim.SessionManager.sessionIsActive -- 526ba478-fce0-b6e3-969a-185385070779(520d7e45-5108-4c6b-eaa0-d6540f24b6f5)
2021-07-16T20:02:24.942+01:00 info vpxd[07911] [Originator@6876 sub=vpxLro opID=sps-SHPoller-461844-219-80] [VpxLRO] -- FINISH lro-2353
2021-07-16T20:02:31.216+01:00 warning vpxd[07925] [Originator@6876 sub=Default] Failed to connect socket; <io_obj p:0x00007f12987f7298, h:59, <TCP '192.168.10.40 : 54552'>, <TCP '10.0.20.11 : 443'>>, e: 110(Connection timed out)
2021-07-16T20:02:31.217+01:00 warning vpxd[07984] [Originator@6876 sub=HostAccess] Creating SOAP stub adapter failed: N7Vmacore24InvalidArgumentExceptionE(No version for VMODL calls to <cs p:00007f12ac57cfc0, TCP:d-esx-usr-01.vlab.lab:443>)
--> 
scott28tt
VMware Employee

It’s best to use a spoiler when you want to post dumps of text into a post. To do this, select the triangle containing an exclamation mark from the editor toolbar.

This is the effect:

Spoiler
This makes it much easier for other users to scroll through a thread, while still letting them expand the spoiler if they want to see the text.

-------------------------------------------------------------------------------------------------------------------------------------------------------------

Although I am a VMware employee I contribute to VMware Communities voluntarily (i.e. not in any official capacity)
VMware Training & Certification blog
TryllZ
Expert

Done, thanks for that..

TryllZ
Expert

I also have a test network in which I migrated the vmnics, and they migrated flawlessly.

TryllZ
Expert

On a different host I migrated different vmnics, and as it turns out only 4 vmnics get migrated for some reason; the rest fail even though the number of uplinks is set to 6 on the dvSwitch.
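To rule out simply running out of uplink ports, I compared the dvSwitch's configured uplink names against what the host's proxy switch actually claims, along these lines (pyVmomi again, reusing the si/host/dvs lookups from the sketch in my first post):

policy = dvs.config.uplinkPortPolicy                 # name-array uplink policy
print("dvSwitch uplinks:", policy.uplinkPortName)    # expect 6 names here

for proxy in host.config.network.proxySwitch:        # the host's view of the dvSwitch
    if proxy.dvsName == dvs.name:
        print("pnics bound on host:", proxy.pnic)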

TryllZ
Expert

Not sure what's going on. I removed the host and re-added it, and now I can migrate 5 vmnics, but the 6th one fails?! vSwitch0 has no vmnics attached to it either.

https://i.ibb.co/M18jCnD/W-SRV-ADDC-01-2021-07-17-14-41-30.png
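To double-check from the API side which vmnics are actually free (and that vSwitch0 really has nothing attached), something like this works, continuing from the first sketch:

net = host.configManager.networkSystem.networkInfo

claimed = set()
for vsw in net.vswitch:          # standard vSwitches, e.g. vSwitch0
    claimed.update(vsw.pnic or [])
for proxy in net.proxySwitch:    # dvSwitch proxy switches
    claimed.update(proxy.pnic or [])

for pnic in net.pnic:
    print(pnic.device, pnic.driver,
          "claimed" if pnic.key in claimed else "free")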

TryllZ
Expert

The following was obtained from the host whose vmnic fails to migrate.

vmnic5 is the one failing to migrate; below are the most noticeable errors:

2021-07-17T22:46:24.266Z cpu1:134549)netschedHClk: NetSchedHClkWatchdogSysWorld:6678: vmnic5: failed to get the coalescing settings: Not supported

2021-07-17T22:46:24.266Z cpu1:134549)netschedHClk: NetSchedHClkWatchdogSysWorld:6700: vmnic5: failed to push the coalescing settings, increase inflight window to 640000 bytes: Not supported

2021-07-17T22:46:25.775Z cpu1:131122)NetqueueBal: 5056: vmnic5: new netq module, reset logical space needed

2021-07-17T22:46:25.775Z cpu1:131122)NetqueueBal: 5085: vmnic5: plugins to call differs, reset logical space
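For reference, I pulled the vmnic5 lines above out of the host's /var/log/vmkernel.log with a quick filter like this (run against a copy of the log; the keyword list is just my guess at what matters):

# Print vmkernel.log lines mentioning vmnic5 that look like failures.
KEYWORDS = ("failed", "Not supported", "Not found")

with open("vmkernel.log") as log:
    for line in log:
        if "vmnic5" in line and any(k in line for k in KEYWORDS):
            print(line.rstrip())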

Spoiler

2021-07-17T22:46:24.264Z cpu0:133281 opID=6669a501)netschedHClk: NetSchedHClkConnect:7413: vmnic5: do not enable multiple queue since we cannot have more than one tx queue
2021-07-17T22:46:24.264Z cpu0:133281 opID=6669a501)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 1
2021-07-17T22:46:24.264Z cpu0:133281 opID=6669a501)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 0
2021-07-17T22:46:24.264Z cpu0:133281 opID=6669a501)netschedHClk: NetSchedHClkNotify:4655: vmnic5: link up notification
2021-07-17T22:46:24.264Z cpu0:133281 opID=6669a501)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 7
2021-07-17T22:46:24.264Z cpu0:133281 opID=6669a501)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 6
2021-07-17T22:46:24.264Z cpu0:133281 opID=6669a501)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 3
2021-07-17T22:46:24.264Z cpu0:133281 opID=6669a501)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 2
2021-07-17T22:46:24.264Z cpu0:133281 opID=6669a501)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 4
2021-07-17T22:46:24.264Z cpu0:133281 opID=6669a501)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 5
2021-07-17T22:46:24.264Z cpu0:133281 opID=6669a501)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 8
2021-07-17T22:46:24.264Z cpu0:133281 opID=6669a501)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 9
2021-07-17T22:46:24.264Z cpu0:133281 opID=6669a501)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 10
2021-07-17T22:46:24.264Z cpu0:133281 opID=6669a501)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 11
2021-07-17T22:46:24.264Z cpu0:133281 opID=6669a501)Uplink: 12140: enabled port 0x84000015 with mac 00:0c:29:a2:61:92
2021-07-17T22:46:24.264Z cpu1:131568)NetSched: 723: vmnic5-0-tx: worldID = 131568 exits
2021-07-17T22:46:24.265Z cpu1:131164)CpuSched: 817: user latency of 134547 vmnic5-0-tx 0 changed by 131164 NetSchedHelper -6
2021-07-17T22:46:24.265Z cpu0:134547)NetSched: 723: vmnic5-0-tx: worldID = 134547 exits
2021-07-17T22:46:24.265Z cpu0:131461)Net: 2182: connected Shadow of vmnic5 to null config, portID 0x4000016
2021-07-17T22:46:24.265Z cpu1:131164)CpuSched: 817: user latency of 134548 hclk-sched-vmnic5 0 changed by 131164 NetSchedHelper -6
2021-07-17T22:46:24.265Z cpu0:131461)Mirror.cswitch: VSwitchMirrorPortEnable:3833: [nsx@6876 comp="nsx-esx" subcomp="vswitch"]Failed to get port 4000016 linkup flag
2021-07-17T22:46:24.265Z cpu0:131461)NetPort: 1524: enabled port 0x4000016 with mac 00:50:56:5e:ea:17
2021-07-17T22:46:24.266Z cpu1:134549)netschedHClk: NetSchedHClkWatchdogSysWorld:6523: vmnic5: link up event received, device running at 10000 Mbps so setting queue depth to 320000 bytes with expected 1310 bytes/us
2021-07-17T22:46:24.266Z cpu1:134549)netschedHClk: NetSchedHClkWatchdogSysWorld:6678: vmnic5: failed to get the coalescing settings: Not supported
2021-07-17T22:46:24.266Z cpu1:134549)netschedHClk: NetSchedHClkWatchdogSysWorld:6700: vmnic5: failed to push the coalescing settings, increase inflight window to 640000 bytes: Not supported
2021-07-17T22:46:25.775Z cpu1:131122)NetqueueBal: 5056: vmnic5: new netq module, reset logical space needed
2021-07-17T22:46:25.775Z cpu1:131122)NetqueueBal: 5085: vmnic5: plugins to call differs, reset logical space
2021-07-17T22:46:25.775Z cpu1:131122)Uplink: 537: vmnic5: Driver claims supporting 0 RX queues, and 0 queues are accepted.
2021-07-17T22:46:25.775Z cpu1:131122)Uplink: 533: vmnic5: Driver claims supporting 0 TX queues, and 0 queues are accepted.


2021-07-17T22:46:25.775Z cpu1:131122)netschedHClk: NetSchedHClkConnect:7413: vmnic5: do not enable multiple queue since we cannot have more than one tx queue
2021-07-17T22:46:25.775Z cpu1:131122)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 1
2021-07-17T22:46:25.775Z cpu1:131122)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 0
2021-07-17T22:46:25.775Z cpu1:131122)netschedHClk: NetSchedHClkNotify:4655: vmnic5: link up notification
2021-07-17T22:46:25.775Z cpu1:131122)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 7
2021-07-17T22:46:25.775Z cpu1:131122)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 6
2021-07-17T22:46:25.775Z cpu1:131122)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 3
2021-07-17T22:46:25.775Z cpu1:131122)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 2
2021-07-17T22:46:25.775Z cpu1:131122)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 4
2021-07-17T22:46:25.775Z cpu1:131122)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 5
2021-07-17T22:46:25.775Z cpu1:131122)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 8
2021-07-17T22:46:25.775Z cpu1:131122)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 9
2021-07-17T22:46:25.775Z cpu1:131122)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 10
2021-07-17T22:46:25.775Z cpu1:131122)netschedHClk: NetSchedHClkCreateQueue:5307: vmnic5: look up for vm leaf pool tag gave out 11
2021-07-17T22:46:25.776Z cpu1:131122)Uplink: 12140: enabled port 0x84000015 with mac 00:0c:29:a2:61:92
2021-07-17T22:46:25.776Z cpu1:131164)CpuSched: 817: user latency of 134551 vmnic5-0-tx 0 changed by 131164 NetSchedHelper -6
2021-07-17T22:46:25.776Z cpu1:134551)NetSched: 723: vmnic5-0-tx: worldID = 134551 exits
2021-07-17T22:46:25.776Z cpu1:131164)CpuSched: 817: user latency of 134552 vmnic5-0-tx 0 changed by 131164 NetSchedHelper -6
2021-07-17T22:46:25.776Z cpu0:134552)NetSched: 723: vmnic5-0-tx: worldID = 134552 exits
2021-07-17T22:46:25.776Z cpu1:131164)CpuSched: 817: user latency of 134553 hclk-sched-vmnic5 0 changed by 131164 NetSchedHelper -6
2021-07-17T22:46:25.776Z cpu0:134554)netschedHClk: NetSchedHClkWatchdogSysWorld:6523: vmnic5: link up event received, device running at 10000 Mbps so setting queue depth to 320000 bytes with expected 1310 bytes/us
2021-07-17T22:46:25.777Z cpu0:134554)netschedHClk: NetSchedHClkWatchdogSysWorld:6678: vmnic5: failed to get the coalescing settings: Not supported
2021-07-17T22:46:25.777Z cpu1:134549)netschedHClk: NetSchedHClkWatchdogSysWorld:6299: vmnic5: hclk scheduler instance clean up
2021-07-17T22:46:25.777Z cpu0:134554)netschedHClk: NetSchedHClkWatchdogSysWorld:6700: vmnic5: failed to push the coalescing settings, increase inflight window to 640000 bytes: Not supported
2021-07-17T22:46:25.781Z cpu1:134549)netschedHClk: NetSchedHClkWatchdogSysWorld:6446: vmnic5: watchdog world (worldID = 134549) exits


2021-07-17T22:46:54.774Z cpu0:134554)netschedHClk: NetSchedHClkWatchdogSysWorld:6286: vmnic5: watchdog failed to acquire the device port 0x84000015 while hclk object reference count is 0: Not found
2021-07-17T22:46:54.774Z cpu0:134554)netschedHClk: NetSchedHClkWatchdogSysWorld:6299: vmnic5: hclk scheduler instance clean up
2021-07-17T22:46:54.777Z cpu1:134554)netschedHClk: NetSchedHClkWatchdogSysWorld:6446: vmnic5: watchdog world (worldID = 134554) exits
2021-07-17T22:46:55.773Z cpu0:131122)NetqueueBal: 5056: vmnic5: new netq module, reset logical space needed
2021-07-17T22:46:55.773Z cpu0:131122)NetqueueBal: 5085: vmnic5: plugins to call differs, reset logical space
2021-07-17T22:46:55.773Z cpu0:131122)NetqueueBal: 5121: vmnic5: device Up notification, reset logical space needed
2021-07-17T22:46:55.773Z cpu0:131122)Uplink: 537: vmnic5: Driver claims supporting 0 RX queues, and 0 queues are accepted.
2021-07-17T22:46:55.773Z cpu0:131122)Uplink: 533: vmnic5: Driver claims supporting 0 TX queues, and 0 queues are accepted.
2021-07-17T22:46:55.773Z cpu0:131122)NetqueueBal: 3142: vmnic5: rxQueueCount=0, rxFiltersPerQueue=0, txQueueCount=0 rxQueuesFeatures=0x0
2021-07-17T22:46:55.774Z cpu0:131122)NetPort: 1783: disabled port 0x80000017
2021-07-17T22:46:55.774Z cpu0:131122)Uplink: 12127: The default queue id for vmnic5 is 0x44000.
2021-07-17T22:46:55.774Z cpu0:131122)Uplink: 12140: enabled port 0x80000017 with mac 00:0c:29:a2:61:92
2021-07-17T22:46:55.775Z cpu0:134563)NetSched: 723: vmnic5-0-tx: worldID = 134563 exits

TryllZ
Expert

Seemingly there is some intermittent issue: I removed and re-added the same host once more, and this time it migrated all vmnics without any issues.
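In case anyone hits the same thing and wants to script the workaround, the remove/re-add cycle is roughly the following in pyVmomi (cluster name and host credentials are placeholders, and this drops the host from inventory entirely, so treat it as a lab-only hammer):

Spoiler
# Continuing from the first sketch: si/content/host already looked up.
cview = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in cview.view if c.name == "Cluster-01")  # placeholder
cview.Destroy()

WaitForTask(host.DisconnectHost_Task())  # host must be disconnected first
WaitForTask(host.Destroy_Task())         # remove from inventory

cnx = vim.host.ConnectSpec()
cnx.hostName = HOST_NAME
cnx.userName = "root"                    # placeholder host credentials
cnx.password = "***"
cnx.force = True
WaitForTask(cluster.AddHost_Task(spec=cnx, asConnected=True))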
