Hi
After adding two additional interfaces to the blades, only one of them is recognized as a vmnic. The first six interfaces were added with no issues.
Troubleshooting steps:
vmkchdev -l
.....
0000:03:00.0 103c:323b 103c:3355 vmkernel vmhba0
0000:04:00.0 19a2:0710 103c:337b vmkernel vmnic0
0000:04:00.1 19a2:0710 103c:337b vmkernel vmnic1
0000:04:00.2 19a2:0710 103c:337b vmkernel vmnic2
0000:04:00.3 19a2:0710 103c:337b vmkernel vmnic3
0000:04:00.4 19a2:0710 103c:337b vmkernel vmnic4
0000:04:00.5 19a2:0710 103c:337b vmkernel vmnic5
0000:04:00.6 19a2:0710 103c:337b vmkernel vmnic6
0000:04:00.7 19a2:0710 103c:337b vmkernel vmnic7
.....
esxcli network nic list
Name PCI Device Driver Admin Status Link Status Speed Duplex MAC Address MTU Description
------ ------------ ------ ------------ ----------- ----- ------ ----------------- ---- -----------------------------------------------------------
vmnic0 0000:04:00.0 elxnet Up Up 10000 Full f0:92:1c:03:bf:d0 1500 Emulex Corporation HP FlexFabric 10Gb 2-port 554FLB Adapter
vmnic1 0000:04:00.1 elxnet Up Up 10000 Full f0:92:1c:03:bf:d4 1500 Emulex Corporation HP FlexFabric 10Gb 2-port 554FLB Adapter
vmnic2 0000:04:00.2 elxnet Up Up 10000 Full f0:92:1c:03:bf:d1 1500 Emulex Corporation HP FlexFabric 10Gb 2-port 554FLB Adapter
vmnic3 0000:04:00.3 elxnet Up Up 10000 Full f0:92:1c:03:bf:d5 1500 Emulex Corporation HP FlexFabric 10Gb 2-port 554FLB Adapter
vmnic4 0000:04:00.4 elxnet Up Up 10000 Full f0:92:1c:03:bf:d2 9000 Emulex Corporation HP FlexFabric 10Gb 2-port 554FLB Adapter
vmnic5 0000:04:00.5 elxnet Up Up 10000 Full f0:92:1c:03:bf:d6 9000 Emulex Corporation HP FlexFabric 10Gb 2-port 554FLB Adapter
vmnic7 0000:04:00.7 elxnet Up Up 10000 Full f0:92:1c:03:bf:d7 1500 Emulex Corporation HP FlexFabric 10Gb 2-port 554FLB Adapter
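To make the gap obvious: vmkchdev -l assigns an alias to 0000:04:00.6 (vmnic6), but that alias never shows up in esxcli network nic list. A plain-shell sketch of that cross-check, using the posted output as sample data (on a live host you would capture the real command output instead of the hard-coded strings):

```shell
# Cross-check: which vmnic aliases from `vmkchdev -l` have no matching
# entry in `esxcli network nic list`? The sample data below mirrors the
# output pasted above; substitute real command output on a live host.
vmkchdev_out="0000:04:00.5 vmnic5
0000:04:00.6 vmnic6
0000:04:00.7 vmnic7"
niclist_out="vmnic5
vmnic7"
for nic in $(echo "$vmkchdev_out" | awk '{print $2}'); do
  echo "$niclist_out" | grep -qx "$nic" || echo "missing uplink: $nic"
done
```

This prints one line per alias that the PCI layer knows about but the network stack never brought up, which is exactly the vmnic6 situation here.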
I tried to add it manually:
#vmkchdev -v 0000:04:00.6
#vim-cmd hostsvc/net/refresh
I also tried switching the NIC to passthrough and back to vmkernel:
> vmkchdev -p 0000:04:00.6
> vmkchdev -l
....
0000:03:00.0 103c:323b 103c:3355 vmkernel vmhba0
0000:04:00.0 19a2:0710 103c:337b vmkernel vmnic0
0000:04:00.1 19a2:0710 103c:337b vmkernel vmnic1
0000:04:00.2 19a2:0710 103c:337b vmkernel vmnic2
0000:04:00.3 19a2:0710 103c:337b vmkernel vmnic3
0000:04:00.4 19a2:0710 103c:337b vmkernel vmnic4
0000:04:00.5 19a2:0710 103c:337b vmkernel vmnic5
0000:04:00.6 19a2:0710 103c:337b passthru vmnic6
0000:04:00.7 19a2:0710 103c:337b vmkernel vmnic7
......
vmkchdev -v 0000:04:00.6
vim-cmd hostsvc/net/refresh
>lspci
0000:04:00.0 Ethernet controller: Emulex Corporation HP FlexFabric 10Gb 2-port 554FLB Adapter [vmnic0]
0000:04:00.1 Ethernet controller: Emulex Corporation HP FlexFabric 10Gb 2-port 554FLB Adapter [vmnic1]
0000:04:00.2 Ethernet controller: Emulex Corporation HP FlexFabric 10Gb 2-port 554FLB Adapter [vmnic2]
0000:04:00.3 Ethernet controller: Emulex Corporation HP FlexFabric 10Gb 2-port 554FLB Adapter [vmnic3]
0000:04:00.4 Ethernet controller: Emulex Corporation HP FlexFabric 10Gb 2-port 554FLB Adapter [vmnic4]
0000:04:00.5 Ethernet controller: Emulex Corporation HP FlexFabric 10Gb 2-port 554FLB Adapter [vmnic5]
0000:04:00.6 Ethernet controller: Emulex Corporation HP FlexFabric 10Gb 2-port 554FLB Adapter [vmnic6]
0000:04:00.7 Ethernet controller: Emulex Corporation HP FlexFabric 10Gb 2-port 554FLB Adapter [vmnic7]
# cat /etc/vmware/esx.conf | grep -i vmnic
/device/00000:004:00.0/vmkname = "vmnic0"
/device/00000:004:00.4/vmkname = "vmnic4"
/device/00000:004:00.1/vmkname = "vmnic1"
/device/00000:004:00.2/vmkname = "vmnic2"
/device/00000:004:00.5/vmkname = "vmnic5"
/device/00000:004:00.3/vmkname = "vmnic3"
/device/00000:004:00.6/vmkname = "vmnic6"
/device/00000:004:00.7/vmkname = "vmnic7"
/vmkdevmgr/logical/pci#m00008507#0/alias = "vmnic6"
/vmkdevmgr/logical/pci#m00008504#0/alias = "vmnic3"
/vmkdevmgr/logical/pci#m00008506#0/alias = "vmnic5"
/vmkdevmgr/logical/pci#m00008503#0/alias = "vmnic2"
/vmkdevmgr/logical/pci#m00008505#0/alias = "vmnic4"
/vmkdevmgr/logical/pci#m00008502#0/alias = "vmnic1"
/vmkdevmgr/logical/pci#m00008508#0/alias = "vmnic7"
/vmkdevmgr/logical/pci#m00008501#0/alias = "vmnic0"
/vmkdevmgr/pci/m00008501/alias = "vmnic0"
/vmkdevmgr/pci/m00008502/alias = "vmnic1"
/vmkdevmgr/pci/m00008506/alias = "vmnic5"
/vmkdevmgr/pci/m00008503/alias = "vmnic2"
/vmkdevmgr/pci/m00008504/alias = "vmnic3"
/vmkdevmgr/pci/m00008505/alias = "vmnic4"
/vmkdevmgr/pci/m00008507/alias = "vmnic6"
/vmkdevmgr/pci/m00008508/alias = "vmnic7"
/net/vmkernelnic/child[0000]/macFromPnic = "vmnic0"
/net/pnic/child[0006]/name = "vmnic7"
/net/pnic/child[0005]/name = "vmnic5"
/net/pnic/child[0001]/name = "vmnic1"
/net/pnic/child[0004]/name = "vmnic4"
/net/pnic/child[0003]/name = "vmnic3"
/net/pnic/child[0002]/name = "vmnic2"
/net/pnic/child[0000]/name = "vmnic0"
/net/vswitch/child[0003]/teamPolicy/uplinks[0000]/pnic = "vmnic7"
/net/vswitch/child[0003]/uplinks/child[0000]/pnic = "vmnic7"
/net/vswitch/child[0001]/teamPolicy/uplinks[0000]/pnic = "vmnic4"
/net/vswitch/child[0001]/teamPolicy/uplinks[0001]/pnic = "vmnic5"
/net/vswitch/child[0001]/uplinks/child[0001]/pnic = "vmnic5"
/net/vswitch/child[0001]/uplinks/child[0000]/pnic = "vmnic4"
/net/vswitch/child[0000]/portgroup/child[0000]/teamPolicy/uplinks[0000]/pnic = "vmnic0"
/net/vswitch/child[0000]/portgroup/child[0000]/teamPolicy/uplinks[0001]/pnic = "vmnic1"
/net/vswitch/child[0000]/teamPolicy/uplinks[0000]/pnic = "vmnic0"
/net/vswitch/child[0000]/teamPolicy/uplinks[0001]/pnic = "vmnic1"
/net/vswitch/child[0000]/uplinks/child[0000]/pnic = "vmnic0"
/net/vswitch/child[0000]/uplinks/child[0001]/pnic = "vmnic1"
/net/vswitch/child[0002]/uplinks/child[0000]/pnic = "vmnic2"
/net/vswitch/child[0002]/uplinks/child[0001]/pnic = "vmnic3"
/net/vswitch/child[0002]/teamPolicy/uplinks[0000]/pnic = "vmnic2"
/net/vswitch/child[0002]/teamPolicy/uplinks[0001]/pnic = "vmnic3"
Version:
6.7.0 #1 SMP Release build-13644319 May 9 2019
Any help is very much appreciated.
Thanks for the comments. Here is the relevant extract from vmkernel.log:
2019-07-15T16:57:23.654Z cpu28:2097846)Uplink: 11671: Device vmnic5 not yet opened
2019-07-15T16:57:23.654Z cpu28:2097846)Uplink: 13950: Opening device vmnic5
2019-07-15T16:57:23.654Z cpu7:2097292)elxnet: elxnet_uplinkCapEnable:2718: [vmnic5] vmk_UplinkCap: 0x1
2019-07-15T16:57:23.654Z cpu7:2097292)elxnet: elxnet_uplinkCapEnable:2718: [vmnic5] vmk_UplinkCap: 0x4
2019-07-15T16:57:23.654Z cpu7:2097292)elxnet: elxnet_uplinkCapEnable:2718: [vmnic5] vmk_UplinkCap: 0x9
2019-07-15T16:57:23.654Z cpu7:2097292)elxnet: elxnet_uplinkCapEnable:2718: [vmnic5] vmk_UplinkCap: 0x8
2019-07-15T16:57:23.654Z cpu7:2097292)elxnet: elxnet_uplinkCapEnable:2718: [vmnic5] vmk_UplinkCap: 0x7
2019-07-15T16:57:23.654Z cpu7:2097292)elxnet: elxnet_uplinkCapEnable:2718: [vmnic5] vmk_UplinkCap: 0x3
2019-07-15T16:57:23.654Z cpu7:2097292)elxnet: elxnet_uplinkCapEnable:2718: [vmnic5] vmk_UplinkCap: 0x5
2019-07-15T16:57:23.654Z cpu7:2097292)elxnet: elxnet_uplinkCapEnable:2718: [vmnic5] vmk_UplinkCap: 0xa
2019-07-15T16:57:23.654Z cpu7:2097292)elxnet: elxnet_uplinkCapDisable:2829: [vmnic5] vmk_UplinkCap: 0xd
2019-07-15T16:57:23.654Z cpu7:2097292)elxnet: elxnet_uplinkStateSet:1917: [vmnic5] vmnic5: driverData: 0x450194200000
2019-07-15T16:57:23.654Z cpu7:2097292)elxnet: elxnet_uplinkStartIO:3247: [vmnic5] Received Uplink Start I/O
2019-07-15T16:57:23.793Z cpu7:2097292)elxnet: elxnet_registerMSIxInterrupts:1791: [vmnic5] Registered 4 vectors
2019-07-15T16:57:23.793Z cpu7:2097292)elxnet: elxnet_enableIntrs:2960: [vmnic5] *** intrCookies[0]= 0x32 ***
2019-07-15T16:57:23.793Z cpu7:2097292)elxnet: elxnet_enableIntrs:2973: [vmnic5] Enabled interrupt i=0
2019-07-15T16:57:23.793Z cpu7:2097292)elxnet: elxnet_enableIntrs:2960: [vmnic5] *** intrCookies[1]= 0x33 ***
2019-07-15T16:57:23.793Z cpu7:2097292)elxnet: elxnet_enableIntrs:2973: [vmnic5] Enabled interrupt i=1
2019-07-15T16:57:23.793Z cpu7:2097292)elxnet: elxnet_enableIntrs:2960: [vmnic5] *** intrCookies[2]= 0x34 ***
2019-07-15T16:57:23.793Z cpu7:2097292)elxnet: elxnet_enableIntrs:2973: [vmnic5] Enabled interrupt i=2
2019-07-15T16:57:23.793Z cpu7:2097292)elxnet: elxnet_enableIntrs:2960: [vmnic5] *** intrCookies[3]= 0x35 ***
2019-07-15T16:57:23.793Z cpu7:2097292)elxnet: elxnet_enableIntrs:2973: [vmnic5] Enabled interrupt i=3
2019-07-15T16:57:23.838Z cpu7:2097292)elxnet: elxnet_linkStatusUpdate:944: vmnic5 : 0000:04:00.5 Link up - 10000 Mbps
2019-07-15T16:57:23.879Z cpu7:2097292)elxnet: elxnet_keyValueInit:2118: [vmnic6] Initialization of Key-Value with mgmt succeeded
2019-07-15T16:57:23.880Z cpu28:2097846)Uplink: 11671: Device vmnic6 not yet opened
2019-07-15T16:57:23.880Z cpu28:2097846)Uplink: 13950: Opening device vmnic6
2019-07-15T16:57:23.880Z cpu7:2097292)elxnet: elxnet_uplinkCapEnable:2718: [vmnic6] vmk_UplinkCap: 0x1
2019-07-15T16:57:23.880Z cpu7:2097292)elxnet: elxnet_uplinkCapEnable:2718: [vmnic6] vmk_UplinkCap: 0x4
2019-07-15T16:57:23.880Z cpu7:2097292)elxnet: elxnet_uplinkCapEnable:2718: [vmnic6] vmk_UplinkCap: 0x9
2019-07-15T16:57:23.880Z cpu7:2097292)elxnet: elxnet_uplinkCapEnable:2718: [vmnic6] vmk_UplinkCap: 0x8
2019-07-15T16:57:23.880Z cpu7:2097292)elxnet: elxnet_uplinkCapEnable:2718: [vmnic6] vmk_UplinkCap: 0x7
2019-07-15T16:57:23.880Z cpu7:2097292)elxnet: elxnet_uplinkCapEnable:2718: [vmnic6] vmk_UplinkCap: 0x3
2019-07-15T16:57:23.880Z cpu7:2097292)elxnet: elxnet_uplinkCapEnable:2718: [vmnic6] vmk_UplinkCap: 0x5
2019-07-15T16:57:23.880Z cpu7:2097292)elxnet: elxnet_uplinkCapEnable:2718: [vmnic6] vmk_UplinkCap: 0xa
2019-07-15T16:57:23.880Z cpu7:2097292)elxnet: elxnet_uplinkCapDisable:2829: [vmnic6] vmk_UplinkCap: 0xd
2019-07-15T16:57:23.880Z cpu7:2097292)elxnet: elxnet_uplinkStateSet:1917: [vmnic6] vmnic6: driverData: 0x450194600000
2019-07-15T16:57:23.880Z cpu7:2097292)elxnet: elxnet_uplinkStartIO:3247: [vmnic6] Received Uplink Start I/O
2019-07-15T16:57:23.977Z cpu7:2097292)WARNING: elxnet: elxnet_mccComplProcess:1153: [vmnic6] Mailbox/MCC command opcode 8-3 failed:status 1-22
2019-07-15T16:57:23.977Z cpu7:2097292)WARNING: elxnet: elxnet_rxQueuesCreate:3326: [vmnic6] elxnet_cmdRxqCreateUseMcc failed for rx 2
2019-07-15T16:57:23.999Z cpu7:2097292)WARNING: elxnet: elxnet_uplinkStartIO:3257: [vmnic6] elxnet driver: Failed to create Rx queues
2019-07-15T16:57:23.999Z cpu7:2097292)elxnet: elxnet_uplinkCapDisable:2829: [vmnic6] vmk_UplinkCap: 0x1
2019-07-15T16:57:23.999Z cpu7:2097292)elxnet: elxnet_uplinkCapDisable:2829: [vmnic6] vmk_UplinkCap: 0x4
2019-07-15T16:57:23.999Z cpu7:2097292)elxnet: elxnet_uplinkCapDisable:2829: [vmnic6] vmk_UplinkCap: 0x9
2019-07-15T16:57:23.999Z cpu7:2097292)elxnet: elxnet_uplinkCapDisable:2829: [vmnic6] vmk_UplinkCap: 0x8
2019-07-15T16:57:23.999Z cpu7:2097292)elxnet: elxnet_uplinkCapDisable:2829: [vmnic6] vmk_UplinkCap: 0x7
2019-07-15T16:57:23.999Z cpu7:2097292)elxnet: elxnet_uplinkCapDisable:2829: [vmnic6] vmk_UplinkCap: 0x3
2019-07-15T16:57:23.999Z cpu0:2097892)NetSched: 654: vmnic6-0-tx: worldID = 2097892 exits
2019-07-15T16:57:23.999Z cpu7:2097292)elxnet: elxnet_uplinkCapDisable:2829: [vmnic6] vmk_UplinkCap: 0x5
2019-07-15T16:57:23.999Z cpu7:2097292)elxnet: elxnet_uplinkCapDisable:2829: [vmnic6] vmk_UplinkCap: 0xa
2019-07-15T16:57:23.999Z cpu7:2097292)elxnet: elxnet_uplinkCapEnable:2718: [vmnic6] vmk_UplinkCap: 0xd
ESXi 6.5 supported a maximum of 8 elxnet-based 10 GbE NICs. I don't see that same number in the configuration maximums guide, but that would be my guess.
The logs would tell us more. Do the refresh and post the vmkernel.log events, and also check boot.gz to see why that PCI device was not loaded.
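A sketch of how to do that filtering; the log paths are the standard ESXi 6.x locations, so verify them on your build. The runnable part below just demonstrates the grep pattern on one warning line from this thread:

```shell
# On the host, filter the compressed boot log and the live vmkernel log
# for the unclaimed function (standard ESXi 6.x log locations):
#   zcat /var/log/boot.gz | grep -i '0000:04:00.6'
#   grep -i 'vmnic6' /var/log/vmkernel.log
# Demonstration of the pattern on one warning line posted in this thread:
line='2019-07-15T16:57:23.977Z cpu7:2097292)WARNING: elxnet: elxnet_mccComplProcess:1153: [vmnic6] Mailbox/MCC command opcode 8-3 failed:status 1-22'
echo "$line" | grep -c 'vmnic6'
```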
From vmkernel.log:
2019-07-15T16:57:23.977Z cpu7:2097292)WARNING: elxnet: elxnet_mccComplProcess:1153: [vmnic6] Mailbox/MCC command opcode 8-3 failed:status 1-22
2019-07-15T16:57:23.977Z cpu7:2097292)WARNING: elxnet: elxnet_rxQueuesCreate:3326: [vmnic6] elxnet_cmdRxqCreateUseMcc failed for rx 2
2019-07-15T16:57:23.999Z cpu7:2097292)WARNING: elxnet: elxnet_uplinkStartIO:3257: [vmnic6] elxnet driver: Failed to create Rx queues
I also verified that I am running the latest driver from VMware:
esxcli network nic get -n vmnic7
Advertised Auto Negotiation: false
Advertised Link Modes: 10000None/Full
Auto Negotiation: false
Cable Type:
Current Message Level: 4631
Driver Info:
Bus Info: 0000:04:00:7
Driver: elxnet
Firmware Version: 4.6.247.5
Version: 12.0.1115.0
To make it more mysterious, further testing shows the issue across six blades running ESXi, while the same configuration in the same blade center, on the same blade model, works fine when running RHEV: all NICs are accounted for there.
If I were you, I'd open an SR with VMware and ask them if the 8 vmnic limit for elxnet devices is still in effect for 6.7.
I appreciate your feedback.
I already checked the configuration maximums at https://configmax.vmware.com/guest?vmwareproduct=vSphere&release=vSphere%206.7&categories=2-4
and it is 16 NICs for 10 GbE on the host (no exception listed).
Thanks
I think this issue is related to the driver, which is limiting the Rx queues to 8. The following KB talks about a similar issue in an older driver; you can check with VMware / Broadcom to confirm whether it really is a driver problem.
That was it. I appreciate the feedback.
Executing:
esxcfg-module -s "msix=0" elxnet
on each of the ESXi blades, then rebooting, resolved the issue with the missing vmnic definition.
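For anyone applying the same workaround, the full sequence as I understand it (a sketch of host-only commands, not runnable off the host; the esxcli verification step is from a standard 6.7 install, so confirm it on your build):

```shell
# Set the elxnet module option to disable MSI-X (the workaround above),
# then reboot the host for it to take effect.
esxcfg-module -s "msix=0" elxnet
reboot

# After reboot, confirm the option stuck and the missing uplink appeared:
esxcli system module parameters list -m elxnet | grep -i msix
esxcli network nic list | grep vmnic6
```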
Just for reference, the vSphere installation:
1- HPE c7000 Gen3 blade center
2- Connectivity through an HP VC FlexFabric 10Gb/24-Port Module, firmware 4.63 (2018-08-21T19:27:31Z)
Thanks to all who contributed.