I have installed Windows Server 2016 on the latest version of ESXi, 6.5.0b, but it's almost unusable.
The VM has 2 vCPUs and 2 GB of RAM with VMware Tools 10.1.5 (5055683) and is used as an Active Directory server for my lab, but it's really slow.
For example, opening the DNS console takes almost 10 seconds.
On ESXi I get a High VM Memory Usage alarm, but if I check Task Manager in the guest VM, RAM usage is always around 40% and CPU is around 3-4% max, and even if I give it 4 GB the problem is still there.
Is this a known problem, or is there something I can do to solve it besides downgrading to Windows Server 2012 R2?
My ESXi host specs are:
128 GB RAM DDR4
1x SSD 250 GB + 4x SSD 500 GB on an LSI 9260-4i in RAID 0
ESXi booted from a USB stick
Hey Marko, I'm having a similar problem with that too, though I'm using Server 2012 R2. I found that running anything on the 2012 R2 VM is massively slow because of the C: drive disk queue length. You can see the disk performance through Performance Monitor; anything over 1 for disk queue length is bad, and I've been seeing anywhere from 5 to 50. If I run a 2008 R2 VM on the same host it's fine. I haven't tried downgrading the hardware yet, but what version is your VM at? There is another thread that states downgrading the VM hardware to v10 or v9 will resolve the issue.
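If you want to watch that counter without opening Performance Monitor, something like this should work from an elevated prompt inside the Windows guest (the counter path below is the English-locale name and is an assumption for non-English installs):

```shell
:: Sample the current disk queue length once per second, five times.
:: Sustained values above ~1-2 per spindle indicate a disk bottleneck.
typeperf "\PhysicalDisk(_Total)\Current Disk Queue Length" -si 1 -sc 5
```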
You can email me at firstname.lastname@example.org
Here are some resolutions, but they're not official VMware ones.
Windows Server 2012 and above support in-guest UNMAP and TRIM.
That function causes a problem with VMware's new in-guest UNMAP support.
It works well under Hyper-V but doesn't work well with VMware's implementation.
01. Use VM hardware version 10 (ESXi 5.5 compatible), which doesn't support in-guest UNMAP.
If you downgrade to VM hardware version 10, you will escape the high disk usage.
But you will then never be able to use the in-guest UNMAP function.
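If you want to confirm which hardware version a VM is currently at before deciding on this workaround, the VM's .vmx file records it; a quick check from the ESXi shell (the datastore and VM names below are placeholders):

```shell
# Print the hardware version line from a VM's config file.
# Hardware version 10 shows as: virtualHW.version = "10"
grep virtualHW.version /vmfs/volumes/datastore1/MyVM/MyVM.vmx
```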
02. Use the fsutil command in the Windows guest OS (Windows Server 2012 or above).
In a Windows command shell with administrative rights, execute the commands below.
These commands will disable Windows in-guest UNMAP and TRIM on NTFS and ReFS v1/v2.
-- disable unmap and trim on NTFS & ReFS v1
fsutil behavior set DisableDeleteNotify NTFS 1
-- disable unmap and trim on ReFS v2
fsutil behavior set DisableDeleteNotify ReFS 1
If you want to roll back to the Windows UNMAP function, execute the following:
fsutil behavior set DisableDeleteNotify NTFS 0
fsutil behavior set DisableDeleteNotify ReFS 0
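Before changing anything, you can also check the current state with fsutil's query subcommand (run in the same elevated prompt inside the guest):

```shell
:: DisableDeleteNotify = 0 means UNMAP/TRIM is enabled; 1 means disabled.
fsutil behavior query DisableDeleteNotify NTFS
fsutil behavior query DisableDeleteNotify ReFS
```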
Even if you disable in-guest UNMAP, you can still unmap on VMFS manually with the vmkfstools/esxcli commands... :)
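For that manual reclaim, on ESXi 5.5 and later the supported path is esxcli rather than the older vmkfstools -y; a sketch from the ESXi shell (the datastore name is a placeholder):

```shell
# Reclaim dead space on a VMFS datastore.
# -l is the datastore's volume label; -n is the number of VMFS blocks
# reclaimed per iteration (larger values finish faster but add I/O load).
esxcli storage vmfs unmap -l datastore1 -n 200
```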
I installed this update using a script and rebooted the server, but the problem remained.
esxcli software profile update -p ESXi-6.5.0-20170304001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
Only disabling these modes in the guest OS really fixes the situation.
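To confirm which build the host is actually running before and after patching, the ESXi shell can report it directly:

```shell
# Show the running ESXi version and build number
vmware -vl
# Or, with more detail:
esxcli system version get
```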
Can confirm that patching to the latest 6.5.0 has improved speeds from 20-30 MB/s read to 250 MB/s read. Just updated via ISO and bootable media for an easy life.
Thanks for all the help from those who reassured me it was just something that needed patching.
I ran into this issue over the last few days: very sluggish, with 99% guest disk activity on a local SSD datastore. Experienced it with both Server 2016 and Windows 10 guests. Disabling UNMAP in the guest fixed it.
Just updated a Dell T610 with VMware-VMvisor-Installer-6.5.0.update01-5969303.x86_64-DellEMC_Customized-A00.iso.
Re-enabled in-guest UNMAP. Tested. 6.5 U1 fixed it.
This thread was very useful to me recently, although my issue differed somewhat from the situations described in other posts and in this article. The crux is that the "DisableDeleteNotify" settings corrected the problems in both occurrences. So if you find yourself facing "very poor file I/O performance with Windows 2016 and VMware ESXi 6.5" (any version, I'd say), these commands are worth a try.
First a client called us with a problem on a brand new VM that was built on their brand new VMware 6.5 platform on their brand new SAN and host hardware. Other existing VMs (brought over from the old vSphere 5.5 system) were running fine. However when there was any file I/O initiated from a physical machine where the file resided on this new VM, the file I/O was so poor as to be unusable. We first confirmed the behavior from a physical machine. We then tested from a VM and saw the same thing. Next we migrated our test machine to be on the same host as the new, problematic server and the file I/O was great. Interesting but not immediately helpful. After a little more testing and a call to VMware the first thing that was suggested was to try the DisableDeleteNotify settings mentioned above in this thread. The client's system runs Dell-customized ESXi 6.5.0 Build 8294253, which is 6.5 U2. The article linked above describes the problem as being corrected in 6.5 U1. Therefore I was skeptical that setting DisableDeleteNotify to 1 would correct the problem. But it did. After running both the NTFS and the ReFS commands and rebooting the server the file I/O problem was GONE. It's been a couple of weeks now and the performance has been great. VM hardware is version 13, by the way. Windows Server OS on the problematic server is 2016.
Then this past Sunday morning I installed two Microsoft updates to our primary file and print server (KB890830 – Malicious Software Removal Tool and KB4480961 – Cumulative Update from January 8). We immediately began having file I/O problems that felt very similar to those that my client had on their system a couple weeks ago, described above. We are on Dell-customized 6.5.0 build 8935087. VM hardware version is 13. Windows OS is Server 2016. With my client's situation fresh on my mind I quickly tried applying the DisableDeleteNotify=1 settings and the problem went away immediately.
Therefore my advice is that if you have any sudden file I/O issues with a new-ish Windows OS on a host running VMware ESXi 6.5 (any build), you should at least try these commands:
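For reference, the commands in question, from earlier in this thread (run in an elevated prompt inside the guest, then reboot):

```shell
:: Disable in-guest UNMAP/TRIM on NTFS and ReFS
fsutil behavior set DisableDeleteNotify NTFS 1
fsutil behavior set DisableDeleteNotify ReFS 1
```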
I will dare to suggest that the fix works on any 6.x, as I was NOT on 6.5 and it improved my performance just as expected. I am on 6.0.0 Update 3 (Build 5050593), image profile HPE-ESXi-6.0.0-Update3-iso-600.9.7.0.17 (Hewlett Packard Enterprise), on bare metal with 12 CPUs x Intel(R) Xeon(R) CPU X5690 @ 3.47GHz.
Quick follow-up: the issue does NOT exist with Windows Server 2019. Phew. That was a pleasant surprise once I rolled out the first 2019 server on the same infrastructure where 2016 needed these tweaks.