Anyone else having problems with Windows 8.1 and (potentially) Windows 10 guests whose network just stops working for no reason? It seems to be happening more and more often, and so far these are the only two OSes I've noticed it on.
They are running on hosts with ESXi 6.0 U1a (build 3073146) and the version of VMware Tools bundled with that release, although I'm pretty sure I've seen the problem on at least one VM that still had an older version of Tools from 5.5.
All VMs are running vmxnet3 adapters. A reboot always solves the problem.
I have no idea how to reproduce it yet or when it may occur. Am I alone, or is anyone else seeing this problem?
Hello FreddyFredFred,
I am also seeing this issue since we upgraded to vSphere 6. Here is a description of our issue:
I have had a ticket open with VMware about this for a month now without any progress. Were you able to find anything more on this issue?
Regards,
Darrenoid
After much testing (and probably some dumb luck) I found the issue in my environment was being caused by IPv6. I still have no idea what in my environment is triggering it, but at least I can work around it.
As soon as a Windows 8.1 or 10 VM accumulated 9 or 10 IPv6 addresses, the network would die. You could easily see the number of IPs in the vSphere client, or within the VM by running ipconfig /all. Within Windows you would see a bunch of temporary addresses which weren't being released properly. I tried a VM with 2 NICs to see if the limit was 10 IPs total, but no, it was really 10 IPs per NIC.
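If you want to watch for this condition before the network dies, you can count the IPv6 addresses per adapter from ipconfig /all output. A minimal sketch, assuming the usual English-locale Windows ipconfig format; count_ipv6_per_adapter is a hypothetical helper, not part of any VMware tooling:

```python
import re

def count_ipv6_per_adapter(ipconfig_output):
    """Count IPv6 addresses listed per adapter in `ipconfig /all` text.

    Hypothetical helper: assumes the English-locale output format, where
    adapter headings start at column 0 and end with a colon, and both
    "IPv6 Address" and "Temporary IPv6 Address" lines are indented.
    """
    counts = {}
    adapter = None
    for line in ipconfig_output.splitlines():
        # Adapter section headers are unindented and end with ":"
        if line and not line[0].isspace() and line.rstrip().endswith(":"):
            adapter = line.rstrip().rstrip(":")
            counts[adapter] = 0
        elif adapter and re.search(r"IPv6 Address", line):
            counts[adapter] += 1
    return counts

# Trimmed sample of ipconfig /all output (addresses are made up)
sample = """\
Ethernet adapter Ethernet0:

   IPv6 Address. . . . . . . . . . . : fd00::1
   Temporary IPv6 Address. . . . . . : fd00::a1b2
   Temporary IPv6 Address. . . . . . : fd00::c3d4
"""
print(count_ipv6_per_adapter(sample))  # {'Ethernet adapter Ethernet0': 3}
```

Run periodically inside the guest, a count approaching 9-10 on any one NIC would be the warning sign described above.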
The conditions under which the problem would happen:
Windows 8.1/10
VMXNET 3
Distributed switch
(and probably ESXi 6 or distributed switch version 6, since I didn't have the issue in 5.5)
If any one of those changed, the problem wouldn't happen, even though the VM still picked up 10 IPs. I still believe this is a VMware issue (since you need vmxnet3 and distributed switches to trigger it).
There are a number of workarounds (the last one was provided by VMware):
1) Disable IPv6 (just uncheck it in the adapter's network settings)
2) Change vmxnet3 to e1000 (probably e1000e is also ok)
3) Move the VM from a distributed switch to standard switch
4) Run these two commands to stop windows from picking up more of those temporary IPv6 addresses:
netsh interface ipv6 set global randomizeidentifiers=disabled
netsh interface ipv6 set privacy state=disabled
In the end I added a step to my VM provisioning workflow to run the commands in #4. This allowed me to keep my distributed switch, vmxnet3, and IPv6 enabled. I haven't seen the problem since applying the fix.
Edit: In my case those extra IPs were being picked up at a rate of about one every 6 to 8 hours. It would take about 4-5 days before a VM had the 9-10 IPv6 addresses and stopped responding.
Thanks so much for responding FreddyFredFred,
I think we have the exact same issue. We also use IPv6 and were wondering whether the many temporary IPv6 addresses were a symptom or a cause. I will try the netsh commands you posted and see if the issue happens again.
FYI, not sure if it matters, but we are using HP switches as our IPv6 router.
Regards,
Darrenoid