beezleinc's Posts

Thank you Bob.  I just found the same answer yesterday!  Yes, it was two VMs with duplicate bios.uuid values. In the process of deploying/testing Veeam Backup & Replication, this issue came to light again, as it barks if duplicate UUIDs are present as well. Problem solved.  Thank you again.
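For anyone who lands here with the same symptom: the duplicates show up if you compare the uuid.bios line from each VM's .vmx file. A rough Python sketch of the idea — the sample .vmx snippets and the helper name are made up for illustration, not VMware tooling:

```python
import re
from collections import defaultdict

def find_duplicate_bios_uuids(vmx_files):
    """vmx_files: dict of {vm_name: vmx_file_text}.
    Returns {uuid: [vm names]} for any uuid.bios shared by more than one VM."""
    by_uuid = defaultdict(list)
    for name, text in vmx_files.items():
        m = re.search(r'uuid\.bios\s*=\s*"([^"]+)"', text)
        if m:
            by_uuid[m.group(1)].append(name)
    return {u: names for u, names in by_uuid.items() if len(names) > 1}

# Made-up sample data standing in for real .vmx contents
vmx = {
    "vm-a": 'uuid.bios = "56 4d aa bb"\n',
    "vm-b": 'uuid.bios = "56 4d aa bb"\n',  # clone kept the original UUID
    "vm-c": 'uuid.bios = "56 4d cc dd"\n',
}
print(find_duplicate_bios_uuids(vmx))  # {'56 4d aa bb': ['vm-a', 'vm-b']}
```

On a real host you'd feed it the contents of each VM's .vmx instead of the sample strings.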
When I look at the Virtual Object Placement Details for a vSAN cluster, I have all of the objects belonging to one VM stashed under the VO folder of another VM, even though when I look at the vsanDatastore filesystem itself, the objects are in their correct 'folders'. I am assuming this is a VC instrumentation database mixup and nothing wrong with the vSAN itself. Is there a way to reset this so it shows the correct folder tree under this display? Also, I notice that VC does not always show 'vSAN Default Storage Policy' for some objects even though it shows correctly when I edit the VM properties. TIA!
I have the same issue.  Four identical boxen running 6.7.0 Update 1 (Build 11675023). Only one box has the performance stats resetting every 15 minutes or so.
FYI, I finally got around to rebooting my witness appliances and the high CPU load has been cured for now.  Both of my witness appliances exhibited the same high CPU trigger at the same time. This is the one-month graph of CPU W/A usage off of my Virtual Center.  Weird. I'll post back if it happens again.
Hi Bob.  I've logged into both witness ESXi appliances, and esxtop simply shows the 'system' process consuming a steady CPU %USED of 45 and a %RDY of ~50-60, which, from what I've been reading, indicates CPU contention... but the host ESXi shows each witness 'world' with a very low %RDY, which indicates it's getting all the CPU it's requesting(?)

The ESXi host is a Dell PowerEdge with 2 x 4-core CPUs (Intel(R) Xeon(R) CPU X5667 @ 3.07GHz) and 128GB RAM. Each witness was installed with the 'normal' setting from the OVF: 16GB RAM, 2 vCPUs. (There are a couple of other low-use VMs on the box as well, but it is well under-utilized from a memory or disk perspective.)

Anyway, just thought I'd throw it out there and would be curious to know how this compares with other witness appliances. This may just be the normal idle state of an embedded ESXi within an ESXi, idk. Thx, -a

esxtop from within one of the witness ESXi appliances:

4:11:05pm up 55 days 21:09, 590 worlds, 0 VMs, 0 vCPUs; CPU load average: 0.24, 0.23, 0.22
PCPU USED(%):  60 9.7 AVG:  35
PCPU UTIL(%): 100  10 AVG:  55

      ID      GID NAME             NWLD   %USED    %RUN    %SYS   %WAIT %VMWAIT    %RDY   %IDLE  %OVRLP   %CSTP  %MLMTD  %SWPWT
       1        1 system            170   45.61  193.84    0.00 16974.81      -   59.31    0.00   18.09    0.00    0.00    0.00
88798951 88798951 esxtop.13388107     1    5.48    5.77    0.00   95.40       -    0.06    0.00    0.01    0.00    0.00    0.00
   11350    11350 hostd.2099283      27    1.04    1.04    0.00 2700.00       -    0.07    0.00    0.00    0.00    0.00    0.00
   15968    15968 vpxa.2099911       38    0.13    0.13    0.00 3800.00       -    0.05    0.00    0.00    0.00    0.00    0.00
  222771   222771 sh.2136210          1    0.10    0.10    0.00  100.00       -    0.41    0.00    0.00    0.00    0.00    0.00
   18963    18963 dcui.2100333        4    0.08    0.08    0.00  400.00       -    0.18    0.00    0.00    0.00    0.00    0.00
  223299   223299 python.2136276     33    0.08    0.08    0.00 3300.00       -    0.02    0.00    0.00    0.00    0.00    0.00
...

esxtop from within the host ESXi:

4:19:38pm up 63 days 14:38, 680 worlds, 4 VMs, 7 vCPUs; CPU load average: 0.52, 0.54, 0.54
PCPU USED(%):  71 9.6  23  54  23  24  23  20  10 5.6 4.5  47 8.2 8.0 3.8  63 AVG:  25
PCPU UTIL(%):  74  15  39  59  36  38  37  29  10 6.1 5.8  44 8.6 8.1 5.1  60 AVG:  30
CORE UTIL(%):  80      82      61      57      15      47      16      63     AVG:  53

      ID      GID NAME             NWLD   %USED    %RUN    %SYS   %WAIT %VMWAIT    %RDY   %IDLE  %OVRLP   %CSTP  %MLMTD  %SWPWT
  358419   358419 hostd.2169563      37  126.64  139.63    0.00 3552.60       -   23.59    0.00    0.25    0.00    0.00    0.00
  368325   368325 abs-vsan-witnes    13  115.03  111.46    0.06 1193.98    0.00    0.24   89.24    0.11    0.00    0.00    0.00
  370468   370468 abs-vsan-witnes    13  111.59  104.88    0.08 1200.57    0.00    0.16   96.01    0.11    0.00    0.00    0.00
       1        1 system            309   33.21 1232.67    0.00 29402.59      -  414.86    0.00   14.78    0.00    0.00    0.00
 2714598  2714598 absvc1             11   10.47   10.57    0.06 1093.81    0.02    0.37  190.07    0.07    0.00    0.00    0.00
 2754165  2754165 esxtop.2648390      1    5.29    4.99    0.01   95.46       -    0.00    0.00    0.01    0.00    0.00    0.00
  498795   498795 WitnessRouter      10    0.24    0.25    0.01 1000.00    0.00    0.03  100.85    0.00    0.00    0.00    0.00
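Side note for anyone comparing numbers later: in the grouped esxtop view, %RDY is summed across all of a group's worlds, so I normalize by NWLD before worrying about it. A rough Python sketch of that sanity check — the column positions and the ~10% per-world rule of thumb are my own assumptions, not official VMware guidance:

```python
def parse_esxtop_rows(text):
    """Parse whitespace-separated esxtop CPU rows into dicts.
    Assumes the column order shown above: ID GID NAME NWLD %USED ... %RDY is column 10."""
    rows = []
    for line in text.strip().splitlines():
        parts = line.split()
        rows.append({
            "name": parts[2],
            "nwld": int(parts[3]),
            "used": float(parts[4]),
            "rdy": float(parts[9]),
        })
    return rows

def flag_contention(rows, per_world_rdy_threshold=10.0):
    """%RDY is summed across a group's worlds, so divide by NWLD first."""
    return [r["name"] for r in rows
            if r["rdy"] / r["nwld"] > per_world_rdy_threshold]

# Two rows lifted from the witness-appliance output above
sample = """\
1 1 system 170 45.61 193.84 0.00 16974.81 - 59.31 0.00 18.09 0.00 0.00 0.00
11350 11350 hostd.2099283 27 1.04 1.04 0.00 2700.00 - 0.07 0.00 0.00 0.00 0.00 0.00"""
rows = parse_esxtop_rows(sample)
print(flag_contention(rows))  # [] -- system's 59.31 spread over 170 worlds is tiny
```

By that measure neither box is actually contended, which matches the low per-world %RDY the host shows.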
Hi.  I have a question about Witness Appliance CPU usage... I have a 6.7 witness appliance running on an ESXi 6.7U1 (free) host, and it is steadily consuming roughly half of the host's CPU even while there is very little disk activity.  This seems very excessive to me. The vSAN is healthy, so I know the witness is working OK. Are there any other data points on witness CPU usage that I can compare to?  Any way to check the witness to see why it is running wide open?  I haven't tried rebooting the witness yet, but that is my next step. FYI, I actually have two witness appliances running on the same ESXi host, and they are both exhibiting the same high CPU usage, so my ESXi host is running at almost 100% CPU. Pics of the host CPU usage view plus the witness appliance view of the CPU attached. TIA.
Very cool.  Thank you sir!     So Foundation vCenter will support two (2)  2-node+witness clusters.  This is good info to know.
Hi Bob.  Just curious if you had a chance to try this configuration out with Foundation.  Thanks in advance! -a
Thank you, that is my understanding as well...  My concern is adding two witnesses and four hosts to Foundation vCenter... for six total nodes (2 separate 2-node+witness clusters).  Or is the fix just for that 'fifth' host?  I just can't find that written down anywhere, and I don't have the license to test it myself.
Thanks, but the release note from 6.5 suggests that the witness can be added above and beyond the vCenter capacity... at least for the Essentials version.  It sort of implies that if I add the two witness appliances first, I can add the 4 vSphere hosts to Foundation and it will take them.  Obviously I can't try this since I don't own Foundation.  Maybe this is just an Essentials thing, idk.

Licensing Issues

Cannot add witness virtual machine to vCenter Server with Essentials license
When the witness host for a stretched cluster is an appliance that resides in a virtual machine, it incorrectly consumes a host license. This problem occurs because the vCenter Server considers the witness appliance to be a physical host. If your license does not cover an additional host, you cannot add the witness appliance to vCenter Server.
Workaround: Add the witness appliance VM to vCenter Server before you add the physical hosts.
Oh, this is 6.7U1 where vCenter Foundation does support up to 4 vSphere hosts. 
I understand that the 2-node + witness scenario is supported with any license type of vSAN (Standard, Advanced or Enterprise). I also understand that the witness node appliance doesn't count toward vSphere licenses inside vCenter. What I want is to run 2 separate 2-node + witness clusters for a total of 6 hosts (4 vSphere and 2 witnesses). Can I load the 2 witnesses and the 4 vSphere hosts inside vCenter Foundation and not have it complain about licenses? I have this scenario built and working with all eval licenses, but they allow everything.  I don't want to buy vCenter Foundation just to have it kick out one or both witness hosts. Anyone ever try this setup?
Hi.  Tried the U1 version.  Same error.  Looks like it happens just when the Tiny config is selected.  I switched to the Normal config size and it completed. Thank you.
Trying to install VMware-VirtualSAN-Witness-6.7.0-8169922.ova on a 'free' ESXi 6.7.0 Update 1 (Build 11675023) and getting "Line 644: Duplicate Element 'InstanceID'" on the last step. Any ideas?
I've tried multiple versions of ESXi and the bnxtnet driver.  All fail to recognize the card in ESXi. I swapped the Broadcom 57406 with an Intel X540, and it was recognized and worked perfectly in all of the servers. Bottom line: the BCM57406 card is on the VMware HCL, but it clearly does not work correctly in the PowerEdge R730, and I don't have time to troubleshoot further. I hope this helps someone else avoid a 6+ hour Dell tech support call.

*** FOLLOW UP *** Dell's escalation team has confirmed that the BCM57406 has issues with Linux SLI and ESXi.  They have agreed to swap mine for Intel X550-T2 cards.
I have four brand new identical Dell PowerEdge R730's with BCM57406 10G NIC adapters (on the 6.7U1 HCL):

Model : BCM57406
Device Type : Network
Brand Name : DELL
Number of Ports: 2
DID : 16d2
SVID : 14e4
SSID : 4060
VID : 14e4

One of the four servers will load the bnxtnet driver and activate the NIC just fine.  The other three will not, and I am stumped.  I have checked any and all BIOS/NIC settings. All firmware is identical, PCI slots are identical, ESXi 6.7U1 is loaded identically... and yet I cannot get three of them past this error.

vmkernel.log from the server that works:

2019-01-31T12:19:13.436Z cpu1:2097664)Loading module bnxtnet ...
2019-01-31T12:19:13.437Z cpu1:2097664)Elf: 2101: module bnxtnet has license BSD
2019-01-31T12:19:13.441Z cpu1:2097664)Device: 192: Registered driver 'bnxtnet' from 22
2019-01-31T12:19:13.441Z cpu1:2097664)Mod: 4962: Initialization of bnxtnet succeeded with module ID 22.
2019-01-31T12:19:13.441Z cpu1:2097664)bnxtnet loaded successfully.
2019-01-31T12:19:13.442Z cpu6:2097620)bnxtnet: bnxtnet_initialize_devname:61: [0000:06:00.0 : 0x4309fd3bfe10] PCI device 16d2:14e4:4060:14e4 detected
2019-01-31T12:19:13.442Z cpu6:2097620)bnxtnet: bnxtnet_dev_probe:1275: [0000:06:00.0 : 0x4309fd3bfe10] Starting Cumulus device probe
2019-01-31T12:19:13.442Z cpu6:2097620)DMA: 679: DMA Engine 'cumulus-0000:06:00.0' created using mapper 'DMANull'.
2019-01-31T12:19:13.442Z cpu6:2097620)DMA: 679: DMA Engine 'cumulus-co-0000:06:00.0' created using mapper 'DMANull'.
2019-01-31T12:19:13.442Z cpu6:2097620)VMK_PCI: 914: device 0000:06:00.0 pciBar 0 bus_addr 0x91c20000 size 0x10000
2019-01-31T12:19:13.442Z cpu6:2097620)bnxtnet: bnxtnet_map_pci_mem:784: [0000:06:00.0 : 0x4309fd3bfe10] mapped pci bar 0 at vaddr 0x450196a40000
2019-01-31T12:19:13.442Z cpu6:2097620)VMK_PCI: 914: device 0000:06:00.0 pciBar 2 bus_addr 0x91c30000 size 0x10000
2019-01-31T12:19:13.442Z cpu6:2097620)bnxtnet: bnxtnet_map_pci_mem:784: [0000:06:00.0 : 0x4309fd3bfe10] mapped pci bar 2 at vaddr 0x450196a60000
2019-01-31T12:19:13.442Z cpu6:2097620)VMK_PCI: 914: device 0000:06:00.0 pciBar 4 bus_addr 0x91dc2000 size 0x2000
2019-01-31T12:19:13.442Z cpu6:2097620)bnxtnet: bnxtnet_map_pci_mem:784: [0000:06:00.0 : 0x4309fd3bfe10] mapped pci bar 4 at vaddr 0x450196468000
2019-01-31T12:19:13.443Z cpu6:2097620)bnxtnet: dev_init_device_info:1113: [0000:06:00.0 : 0x4309fd3bfe10] PHY is AutoGrEEEn capable
2019-01-31T12:19:13.479Z cpu6:2097620)WARNING: bnxtnet: bnxtnet_alloc_mem_probe:933: [0000:06:00.0 : 0x4309fd3bfe10] Disable VXLAN/Geneve RX filter due to firmware bug. Refer to VMware Compatibilit
2019-01-31T12:19:13.479Z cpu6:2097620)bnxtnet: bnxtnet_alloc_intr_resources:899: [0000:06:00.0 : 0x4309fd3bfe10] The intr type set to MSIX
2019-01-31T12:19:13.479Z cpu6:2097620)VMK_PCI: 764: device 0000:06:00.0 allocated 16 MSIX interrupts
2019-01-31T12:19:13.479Z cpu6:2097620)bnxtnet: bnxtnet_dev_probe:1352: [0000:06:00.0 : 0x4309fd3bfe10] Interrupt mode: MSIX, max fastpaths: 16 max roce irqs: 0
2019-01-31T12:19:13.479Z cpu6:2097620)bnxtnet: bnxtnet_dev_probe:1358: [0000:06:00.0 : 0x4309fd3bfe10] Ending successfully cumulus device probe
2019-01-31T12:19:13.479Z cpu6:2097620)bnxtnet: bnxtnet_attach_device:235: [0000:06:00.0 : 0x4309fd3bfe10] Driver successfully attached cumulus device (0x2d544305d9cc7d46) with Chip ID=0x16D2 Rev/Me
2019-01-31T12:19:13.480Z cpu6:2097620)Device: 327: Found driver bnxtnet for device 0x2d544305d9cc7d46
2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097666 netpoll-backup 0 changed by 2097620 vmkdevmgr -6
2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097667 netpoll-backup 0 changed by 2097620 vmkdevmgr -6
2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097668 netpoll-backup 0 changed by 2097620 vmkdevmgr -6
2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097669 netpoll-backup 0 changed by 2097620 vmkdevmgr -6
2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097670 netpoll-backup 0 changed by 2097620 vmkdevmgr -6
2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097671 netpoll-backup 0 changed by 2097620 vmkdevmgr -6
2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097672 netpoll-backup 0 changed by 2097620 vmkdevmgr -6
2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097673 netpoll-backup 0 changed by 2097620 vmkdevmgr -6
2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097674 netpoll-backup 0 changed by 2097620 vmkdevmgr -6
2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097675 netpoll-backup 0 changed by 2097620 vmkdevmgr -6
2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097676 netpoll-backup 0 changed by 2097620 vmkdevmgr -6
2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097677 netpoll-backup 0 changed by 2097620 vmkdevmgr -6
2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097678 netpoll-backup 0 changed by 2097620 vmkdevmgr -6
2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097679 netpoll-backup 0 changed by 2097620 vmkdevmgr -6
2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097680 netpoll-backup 0 changed by 2097620 vmkdevmgr -6
2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097681 netpoll-backup 0 changed by 2097620 vmkdevmgr -6
2019-01-31T12:19:13.480Z cpu6:2097620)bnxtnet: bnxtnet_start_device:389: [0000:06:00.0 : 0x4309fd3bfe10] Driver successfully started cumulus device (0x2d544305d9cc7d46)
2019-01-31T12:19:13.480Z cpu6:2097620)Device: 1466: Registered device: 0x4305d9cc0070 pci#s00000005.00#0 com.vmware.uplink (parent=0x2d544305d9cc7d46)
2019-01-31T12:19:13.480Z cpu6:2097620)bnxtnet: bnxtnet_scan_device:559: [0000:06:00.0 : 0x4309fd3bfe10] Successfully registered uplink device

vmkernel.log from the other three servers that don't work:

2019-01-31T12:18:56.545Z cpu4:2097664)Loading module bnxtnet ...
2019-01-31T12:18:56.546Z cpu4:2097664)Elf: 2101: module bnxtnet has license BSD
2019-01-31T12:18:56.550Z cpu4:2097664)Device: 192: Registered driver 'bnxtnet' from 22
2019-01-31T12:18:56.550Z cpu4:2097664)Mod: 4962: Initialization of bnxtnet succeeded with module ID 22.
2019-01-31T12:18:56.550Z cpu4:2097664)bnxtnet loaded successfully.
2019-01-31T12:18:56.551Z cpu7:2097620)bnxtnet: bnxtnet_initialize_devname:61: [0000:05:00.0 : 0x4309fd3bfe10] PCI device 16d2:14e4:4060:14e4 detected
2019-01-31T12:18:56.552Z cpu7:2097620)bnxtnet: bnxtnet_dev_probe:1275: [0000:05:00.0 : 0x4309fd3bfe10] Starting Cumulus device probe
2019-01-31T12:18:56.552Z cpu7:2097620)DMA: 679: DMA Engine 'cumulus-0000:05:00.0' created using mapper 'DMANull'.
2019-01-31T12:18:56.552Z cpu7:2097620)DMA: 679: DMA Engine 'cumulus-co-0000:05:00.0' created using mapper 'DMANull'.
2019-01-31T12:18:56.552Z cpu7:2097620)VMK_PCI: 914: device 0000:05:00.0 pciBar 0 bus_addr 0x91c20000 size 0x10000
2019-01-31T12:18:56.552Z cpu7:2097620)bnxtnet: bnxtnet_map_pci_mem:784: [0000:05:00.0 : 0x4309fd3bfe10] mapped pci bar 0 at vaddr 0x450196540000
2019-01-31T12:18:56.552Z cpu7:2097620)VMK_PCI: 914: device 0000:05:00.0 pciBar 2 bus_addr 0x91c30000 size 0x10000
2019-01-31T12:18:56.552Z cpu7:2097620)bnxtnet: bnxtnet_map_pci_mem:784: [0000:05:00.0 : 0x4309fd3bfe10] mapped pci bar 2 at vaddr 0x450196560000
2019-01-31T12:18:56.552Z cpu7:2097620)VMK_PCI: 914: device 0000:05:00.0 pciBar 4 bus_addr 0x91c42000 size 0x2000
2019-01-31T12:18:56.552Z cpu7:2097620)bnxtnet: bnxtnet_map_pci_mem:784: [0000:05:00.0 : 0x4309fd3bfe10] mapped pci bar 4 at vaddr 0x450196468000
2019-01-31T12:18:56.552Z cpu7:2097620)bnxtnet: dev_init_device_info:1113: [0000:05:00.0 : 0x4309fd3bfe10] PHY is AutoGrEEEn capable
2019-01-31T12:18:58.068Z cpu7:2097620)WARNING: bnxtnet: hwrm_send_msg:168: [0000:05:00.0 : 0x4309fd3bfe10] HWRM cmd resp_len timeout, cmd_type 0x11(HWRM_FUNC_RESET) seq 5
2019-01-31T12:18:59.583Z cpu7:2097620)WARNING: bnxtnet: hwrm_send_msg:168: [0000:05:00.0 : 0x4309fd3bfe10] HWRM cmd resp_len timeout, cmd_type 0x11(HWRM_FUNC_RESET) seq 6
2019-01-31T12:18:59.583Z cpu7:2097620)DMA: 724: DMA Engine 'cumulus-0000:05:00.0' destroyed.
2019-01-31T12:18:59.583Z cpu7:2097620)DMA: 724: DMA Engine 'cumulus-co-0000:05:00.0' destroyed.
2019-01-31T12:18:59.583Z cpu7:2097620)WARNING: bnxtnet: bnxtnet_attach_device:208: [0000:05:00.0 : 0x4309fd3bfe10] failed to find cumulus device (status: Failure)
2019-01-31T12:18:59.583Z cpu7:2097620)Device: 2628: Module 22 did not claim device 0x1bd34305d9cc7d46.
2019-01-31T12:18:59.584Z cpu7:2097620)bnxtnet: bnxtnet_initialize_devname:61: [0000:05:00.1 : 0x4309fd3bfe10] PCI device 16d2:14e4:4060:14e4 detected
2019-01-31T12:18:59.584Z cpu7:2097620)bnxtnet: bnxtnet_dev_probe:1275: [0000:05:00.1 : 0x4309fd3bfe10] Starting Cumulus device probe
2019-01-31T12:18:59.585Z cpu7:2097620)DMA: 679: DMA Engine 'cumulus-0000:05:00.1' created using mapper 'DMANull'.
2019-01-31T12:18:59.585Z cpu7:2097620)DMA: 679: DMA Engine 'cumulus-co-0000:05:00.1' created using mapper 'DMANull'.
2019-01-31T12:18:59.585Z cpu7:2097620)VMK_PCI: 914: device 0000:05:00.1 pciBar 0 bus_addr 0x91c00000 size 0x10000
2019-01-31T12:18:59.585Z cpu7:2097620)bnxtnet: bnxtnet_map_pci_mem:784: [0000:05:00.1 : 0x4309fd3bfe10] mapped pci bar 0 at vaddr 0x450196500000
2019-01-31T12:18:59.585Z cpu7:2097620)VMK_PCI: 914: device 0000:05:00.1 pciBar 2 bus_addr 0x91c10000 size 0x10000
2019-01-31T12:18:59.585Z cpu7:2097620)bnxtnet: bnxtnet_map_pci_mem:784: [0000:05:00.1 : 0x4309fd3bfe10] mapped pci bar 2 at vaddr 0x450196520000
2019-01-31T12:18:59.585Z cpu7:2097620)VMK_PCI: 914: device 0000:05:00.1 pciBar 4 bus_addr 0x91c40000 size 0x2000
2019-01-31T12:18:59.585Z cpu7:2097620)bnxtnet: bnxtnet_map_pci_mem:784: [0000:05:00.1 : 0x4309fd3bfe10] mapped pci bar 4 at vaddr 0x45019469c000
2019-01-31T12:19:00.090Z cpu7:2097620)WARNING: bnxtnet: hwrm_send_msg:168: [0000:05:00.1 : 0x4309fd3bfe10] HWRM cmd resp_len timeout, cmd_type 0x0(HWRM_VER_GET) seq 0
2019-01-31T12:19:00.090Z cpu7:2097620)DMA: 724: DMA Engine 'cumulus-0000:05:00.1' destroyed.
2019-01-31T12:19:00.090Z cpu7:2097620)DMA: 724: DMA Engine 'cumulus-co-0000:05:00.1' destroyed.
2019-01-31T12:19:00.090Z cpu7:2097620)WARNING: bnxtnet: bnxtnet_attach_device:208: [0000:05:00.1 : 0x4309fd3bfe10] failed to find cumulus device (status: Failure)
2019-01-31T12:19:00.090Z cpu7:2097620)Device: 2628: Module 22 did not claim device 0x602e4305d9cc7eef.

The server with the working NIC is actually working with the older driver:

bnxtnet     20.6.101.7-11vmw.670.0.0.8169922     VMW   VMwareCertified   2019-01-16
bnxtroce    20.6.101.0-20vmw.670.1.28.10302608   VMW   VMwareCertified   2019-01-16

But I have tried both the older and the newest version on the other three:

bnxtnet     212.0.119.0-1OEM.670.0.0.8169922     BCM   VMwareCertified   2019-01-31
bnxtroce    212.0.114.0-1OEM.670.0.0.8169922     BCM   VMwareCertified   2019-01-31

I have swapped NICs between the servers and the results are the same: the server with the working NIC works with any of the NICs and the other three servers won't, so the physical NIC cards are fine. I don't know if this is a VMware or a Dell issue. Any ideas/thoughts on possible issues or other things to try?  Next step is to swap the Dell PCI riser and see if maybe somehow that might be an issue.
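In case it helps anyone triage the same thing: the telltale in the bad logs is the HWRM cmd resp_len timeout warning. Here's the throwaway Python I used to compare logs across the four boxes and pull out which PCI functions failed the probe (the regex is tailored to the log lines above; adjust to taste):

```python
import re

def failed_bnxtnet_probes(log_text):
    """Return sorted PCI addresses that hit an HWRM cmd resp_len timeout."""
    pat = re.compile(r'bnxtnet: hwrm_send_msg:\d+: \[(\S+) :.*HWRM cmd resp_len timeout')
    return sorted({m.group(1) for line in log_text.splitlines()
                   if (m := pat.search(line))})

# A few lines lifted from the vmkernel.log dumps above
sample = """\
2019-01-31T12:18:58.068Z cpu7:2097620)WARNING: bnxtnet: hwrm_send_msg:168: [0000:05:00.0 : 0x4309fd3bfe10] HWRM cmd resp_len timeout, cmd_type 0x11(HWRM_FUNC_RESET) seq 5
2019-01-31T12:19:00.090Z cpu7:2097620)WARNING: bnxtnet: hwrm_send_msg:168: [0000:05:00.1 : 0x4309fd3bfe10] HWRM cmd resp_len timeout, cmd_type 0x0(HWRM_VER_GET) seq 0
2019-01-31T12:19:13.441Z cpu1:2097664)bnxtnet loaded successfully."""
print(failed_bnxtnet_probes(sample))  # ['0000:05:00.0', '0000:05:00.1']
```

An empty result on the good box vs. both functions of the card on the bad boxes is what convinced me it wasn't a driver-version problem.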
Thank you. That is basically what I wanted to know. I was under the impression the vSphere virtual hardware was different 'enough' from ESX 3.5 to trigger a Windows reactivation.
I am in the process of upgrading a customer's ESX 3.5 to vSphere. I have a single Windows Server 2003 Enterprise license and am running 4 virtual machines (clones) on a single ESX host (one actual activation and 3 clones of the already-activated Windows VM). After I upgrade to vSphere, each Windows VM is going to require Windows reactivation. Can anyone tell me what to expect: will the reactivation just work, or will I have to call Microsoft? Will Microsoft let me reactivate all 4 VMs? Will the fact that the VMs are clones impact the reactivation process with Microsoft? Thank you.