Hi All,
I've run into a very strange issue and I cannot find a reason or resolution.
I'm currently running vCenter 6.0 U2 and I have 7 ESXi hosts with mixed versions, 6.0 U1 & 6.0 U2.
These hosts have 2 dedicated vMotion port groups tied to 2 NICs, configured as active/active.
The management port group is configured for management traffic only. So it looks like this:
vmotion 1 - (vmotion only)
vmotion 2 - (vmotion only)
management - (management only)
Here's the strange issue. Sometimes, maybe once per week, I notice that the management network automatically enables vMotion. This occurs on different hosts at different times, and I can't figure out a pattern. I find out because the logs show vMotion failures coming from the management IP address. I resolve it by unchecking vMotion on the management port group, and everything is happy again.
Would there be any reason this would happen, aside from someone actually checking off "vMotion" on the management network? I can say with 100% certainty that this is happening automatically, and nobody is making changes.
Thanks!
I have the same problem, on ESXi U3 and vCenter U3a.
I have tested a workaround:
disable vMotion on the ESXi management VMkernel and reboot the host.
You have about one minute to restart the host; after that, vMotion is re-activated automatically on the management NIC.
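For reference, the same disable step can be run from the ESXi Shell. This is only a sketch, assuming vmk0 is the management VMkernel interface (adjust the name for your host):

```shell
# Remove the VMotion service tag from the management vmkernel (vmk0 assumed)
esxcli network ip interface tag remove -i vmk0 -t VMotion

# Verify the tag list no longer shows VMotion before rebooting
esxcli network ip interface tag get -i vmk0

# Reboot within the roughly one-minute window described above,
# before vMotion gets re-enabled automatically
reboot
```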
I am having the exact same issue, at the same code levels as the OP. Can VMware respond at all, or are we missing a bug/feature? This seems to break HA as well, since some hosts try to reach others on a (now) vMotion-enabled vmk when the others don't have the same configuration.
Same here.
I'm a support engineer for many customers, so I have access to many configurations.
I've seen it on different vCenter farms and different hardware, from 6.0 to 6.0 U3a.
We're not ready to upgrade to 6.5, so I can't say whether that version is affected.
I have the same problem, but vMotion is getting enabled on one particular host after the upgrade to 6.5 (not on the management network), and it's causing vMotions to fail.
Are you using vmk10 or higher for vMotion traffic? Assuming your mgmt is vmk1?
I'm having the exact same issue in my home lab. 3 NUCs with vSAN, 6.5 all around with external PSC, all at current code. Every time I reconfigure for HA, it checks the vMotion box on the management interface and vMotions start failing. I've run this setup for nearly a year without this issue, and it just started happening a week or so ago. Anyone have any updates on solutions you may have found? Thanks!
I found that if I disable VMotion going through the Virtual Switches menu, the damn checkbox will re-enable itself. However, if I go through VMKernel Adapters and disable VMotion there, it sticks! Very annoying for sure
Running vCenter 6.5 build 6671409
ESXi 6.5 build 4887370
Never mind. Turns out the damn checkbox re-checked itself. Opened a case with VMware.
I faced the same issue over the last two days.
The workaround I found is to create another VMkernel interface and move management traffic to it.
I previously had vmk0 for management traffic. When I deleted and recreated it, the vMotion check mark kept getting enabled automatically. However, when I created a new VMkernel interface (vmk2) and enabled management traffic on it, the issue stopped. It's been more than 24 hours and it is stable. After migrating the traffic to the new VMkernel interface, I deleted vmk0.
You can connect directly to the individual host to add and delete VMkernel interfaces, or use the "esxcli network ip interface add/remove" command line from a remote console.
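As a rough sketch of the esxcli approach just mentioned (the port group name and IP addresses below are placeholders, not values from this post):

```shell
# Create a replacement management vmkernel (vmk2) on a hypothetical port group
esxcli network ip interface add --interface-name=vmk2 --portgroup-name="Management-2"

# Give it a static address (placeholder values; use your own management subnet)
esxcli network ip interface ipv4 set --interface-name=vmk2 \
    --ipv4=192.168.10.5 --netmask=255.255.255.0 --type=static

# Tag the new interface for management traffic
esxcli network ip interface tag add -i vmk2 -t Management

# Only after confirming the host is reachable on vmk2, remove the old vmk0
esxcli network ip interface remove --interface-name=vmk0
```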
Did you find a solution for this problem ?
What is your network configuration for the interfaces?
I have seen problems like this when the two interfaces (vmk0 used for management and vmk1 used for vMotion) had a network configuration similar to the below:
vmk0 - 10.10.0.131 / 255.255.255.0
vmk1 - 10.0.0.91 / 255.255.0.0
If I moved vmk1 to e.g. 172.16.1.91 / 255.255.255.0, the problem went away.
thanks
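A side note on checking configurations like the one above: whether or not two given addresses actually overlap, mismatched masks across VMkernel interfaces are easy to misjudge by eye. Here is a small bash sketch for testing whether one VMkernel address lands inside another interface's subnet; the sample addresses in the demo call are hypothetical, chosen to show an overlapping case:

```shell
#!/bin/bash
# Convert a dotted-quad IPv4 address to a 32-bit integer
ip_to_int() {
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Does address $3 fall inside the subnet defined by address $1 and mask $2?
in_subnet() {
  local net mask addr
  net=$(ip_to_int "$1")
  mask=$(ip_to_int "$2")
  addr=$(ip_to_int "$3")
  [ $(( addr & mask )) -eq $(( net & mask )) ]
}

# Hypothetical example: a vMotion vmk on 10.10.5.91/255.255.0.0 would
# share a network with a management address of 10.10.0.131
if in_subnet 10.10.5.91 255.255.0.0 10.10.0.131; then
  echo "overlap: the two vmkernel interfaces share a subnet"
else
  echo "no overlap"
fi
```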
In my case, the same problem occurred when changing the VMkernel interface used for vMotion / VM migration.
Comparing against an upgraded host where the vMotion interface had been changed to vmk1 just before restarting, I reviewed the advanced options and found one setting that had changed on that host but had not been updated on the others:
"Migrate.Vmknic"
By default the option has the value "vmk0", while on the other hosts it was empty, so on each host I set it to the interface "vmk1" or "vmk2" as appropriate.
It would be interesting if anyone else hitting this problem could try this advanced option and report whether it solves the issue.
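For anyone who prefers the command line over the UI, the same advanced option should be visible via esxcli. A sketch only; "vmk1" here is just the example value from the post above:

```shell
# Show the current value of the Migrate.Vmknic advanced option
esxcli system settings advanced list -o /Migrate/Vmknic

# Point it at the intended vMotion interface (vmk1 as an example)
esxcli system settings advanced set -o /Migrate/Vmknic --string-value vmk1
```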
I am facing a similar issue, but not with the vMotion service. In my case, the Management service gets auto-enabled.
Each ESXi host in the cluster has a dedicated virtual switch for iSCSI, management, and vMotion, and 2 uplinks are configured as Active/Active for each VSS/VDS.
So it looks like this:
iSCSI VSS (vmnic0, vmnic1)
iSCSI PG - vmk0,vmk1
MGMT-VDS (vmnic2, vmnic3)
Management PG - vmk2, vmk3 (management service enabled)
vMotion-VDS (vmnic4, vmnic5)
vMotion PG - vmk4, vmk5 (vmotion enabled)
Here's the strange issue: the Management service gets auto-enabled on vmk0. This happens on different hosts at different times, and I can't figure out the pattern. Manually unchecking the Management service on vmk0 resolves it on a temporary basis; after an hour, or after a reboot of the ESXi host, the Management service comes back on vmk0 again.
Versions used:
vCenter 6.5.0 build 4602587
ESXi 6.5.0 build 8294253