Recently we installed some of Samsung's new NVMe SSDs (PCIe M.2), and there is quite a difference in the performance of that SSD on the Win 7 PC directly versus within a Win 7 virtual machine on that very same PC...
The AS SSD Benchmark gives the following results:
Seq Read, 10 GB on PC directly: 2611 MB/s
Seq Write, 10 GB on PC directly: 1620 MB/s
Seq Read, 10 GB in VM: 1446 MB/s
Seq Write, 10 GB in VM: 778 MB/s
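For anyone curious what such a sequential test boils down to, here is a minimal Python sketch of the idea (my own illustration, not how AS SSD works internally: AS SSD bypasses the OS cache, while this sketch does not, so its absolute numbers will be optimistic):

```python
import os
import tempfile
import time

# Crude sequential write/read timing, a stand-in for AS SSD's "Seq" test.
# Goes through the page cache, so results are only illustrative.
SIZE_MB = 64            # AS SSD used 10 GB; kept small here
CHUNK = 1024 * 1024     # 1 MiB per write/read
buf = os.urandom(CHUNK)

fd, path = tempfile.mkstemp()
try:
    t0 = time.perf_counter()
    for _ in range(SIZE_MB):
        os.write(fd, buf)
    os.fsync(fd)        # force data to disk before stopping the clock
    write_mbps = SIZE_MB / (time.perf_counter() - t0)

    os.lseek(fd, 0, os.SEEK_SET)
    t0 = time.perf_counter()
    while os.read(fd, CHUNK):
        pass
    read_mbps = SIZE_MB / (time.perf_counter() - t0)
    print(f"seq write: {write_mbps:.0f} MB/s, seq read: {read_mbps:.0f} MB/s")
finally:
    os.close(fd)
    os.remove(path)
```

Running the same script on the host and inside the guest gives a rough comparison of the same kind AS SSD makes.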
The VM accesses the SSD via a virtual SCSI controller.
Since this is the very first time I have installed a VM, I wonder whether that loss of disk performance is to be expected, or whether the VM should perform better.
If read/write performance on the PC directly is set at 100%, what performance could be expected within the VM?
80%, or maybe just 60%? Or did I do something wrong?
If the disk performance in the VM can be improved, how so?
Thanks in advance,
There certainly is some loss of performance to be expected.
Unfortunately there is no easy answer as to how much performance loss to expect, as many factors are involved.
To name just a few:
You have to take into account that the host OS (Windows 7) was not designed as a virtualisation host.
It was designed as a desktop OS, so it is tuned for desktop use.
Then there can be load on the host OS that causes a slowdown in the guest.
E.g. your physical hardware is bound to a certain number of input/output operations per second (IOPS), so if the host is reading/writing to the disk for another application, the maximum read/write performance the guest can achieve will be lower.
On top of that there's the extra layer: your guest OS uses its normal operating system calls to write to disk, but in reality it is just passing those reads/writes on to the host OS.
It is the host OS that does the actual writing.
Then there's a common performance-measurement mistake.
For example, if you take a few snapshots and then run a read (or write) test, it will be much slower.
Why? Because the VM now has to consult several snapshot delta files just to read that one sector from disk.
If you really want maximum performance, you should test with a pre-allocated disk: growing the virtual disk while the speed test is writing to it will also slow things down.
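If the disk is currently growable, it can be converted with the vmware-vdiskmanager tool that ships with Workstation (run it with the VM powered off; on the versions I have used, -r converts a disk and -t 2 selects a preallocated single-file target, but check vmware-vdiskmanager's help output for your version):

```
# Convert a growable disk to a preallocated single-file disk (VM powered off):
vmware-vdiskmanager -r "growable.vmdk" -t 2 "preallocated.vmdk"
```

Afterwards, point the VM's disk at the new .vmdk and keep the old one until you have verified the VM boots.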
On top of all that, your I/O now also depends on things like raw CPU power. Is your host or guest swapping to disk?
Is the RAM backing (.vmem) file being written to?
As you see, there's no easy answer, but hopefully this gives you a bit better insight into things.
Thanks for your answer!
Some additional info from me:
- The PC in question is a single-user development box and is thus mostly used to compile, build and run executables.
- The two tests were run with no other application running on either the host or guest system.
- The box has 16 GB RAM and the VM was set to 6 GB RAM; no swapping was noticed in the host or guest system.
- Also, the VM was set up with a single pre-allocated file. And after browsing through this forum some weeks ago, I already got rid of the 11 snapshots I created when setting up the VM. However, getting rid of those did not really improve performance at all.
Having read your answer I'd like to ask two new questions:
1) Can the VM (either by starting from scratch, or by changing some options of the existing image within VMware WS 12) be tweaked to be more performant?
Info: e.g. was using the SSD via SCSI the right choice to make?
2) What is the most performant way to set up a virtual machine to run on a PC?
Info: in the forum I read that with ESXi one could add an NVMe controller, and the overhead of an additional host OS would also be gone.
So would I have been better off using ESXi or another product?
Thanks for your time!
There are still a lot of unknown factors (posting a vmware.log of the VM helps).
Getting the max performance does depend on details, though quite often the differences are tiny.
1) Not sure. If you selected SCSI, then I hope you selected SAS, not parallel SCSI.
2) Yes, I would expect ESXi to be faster, but ESXi is also a bit more picky about what hardware it supports, and performance still won't be the same as native. There are also sharing algorithms that optimise performance across multiple guests, which do not help when running just a single guest.
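For reference, the adapter choice from 1) shows up in the VM's .vmx file as something like the following (entry names as I have seen them on my own installs; back up the .vmx before editing):

```
# Which adapter the first virtual SCSI controller emulates:
# "lsisas1068" is the SAS adapter, "lsilogic" is parallel SCSI.
scsi0.virtualDev = "lsisas1068"
```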
I don't have a "guaranteed to give you best performance" recipe.
Oh, and FWIW, I actually installed a few NVMe disks here last week, for the exact same reason: faster build times, as opposed to compiling on my VM that runs off a normal SSD.
Unfortunately I have been too busy over the week to run many tests (deadlines, oh joy).
As it happens, those machines dual-boot ESXi as well as Workstation (both Windows and Linux), so I can test the differences once I get the time and if the NVMe disks are recognized (not sure; my ESXi on there might be too old).
I posted this important information in January 2019, and apparently some unhappy individual (presumably with VMware) foolishly removed it, so here it is again. It is just as relevant today under version 15.5.x as it was when I posted it back in January 2019:
As of version 15.0.2 of Workstation, this is still NOT fixed. VMware's claim of "performance improvements for virtual NVMe storage" is just hot air. I have extensively tested the latest versions of Workstation Pro 15 and Workstation Pro 14, and can definitively state that the SCSI drive option continues to be faster than the NVMe drive option, and both are much slower than they should be within VMs. (Even under the SCSI option, read/write performance is DRAMATICALLY worse in the VM than on the host for NVMe drives.) I tested using a Samsung 970 Evo 2TB NVMe drive in all cases. There is not even a recognizable marginal improvement in NVMe performance between Workstation 14 and 15, so VMware has yet to actually address this problem. Notably, Workstation 15 continues to recommend SCSI drives over all others for performance, so at least that much still holds.
NOTE: Within a VM, NVMe drives particularly suffer on multithreaded reads and writes, as may be seen in the AS SSD Benchmark tool's 4K-64Thrd (i.e., 64-thread) read and write tests.
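To see the multithreaded 4K effect without AS SSD, a rough Python stand-in for the 4K-64Thrd test looks like this (my own sketch, scaled down to 8 threads and a small file, going through the page cache rather than doing unbuffered I/O, so treat the numbers as illustrative only; os.pread is POSIX-only):

```python
import os
import random
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

# Crude cousin of AS SSD's 4K-64Thrd test: several threads issuing 4 KiB
# random reads against one file. os.pread releases the GIL during the
# system call, so the threads genuinely overlap their I/O.
BLOCK = 4096
FILE_MB = 16
READS_PER_THREAD = 200
THREADS = 8                 # AS SSD uses 64; scaled down here

fd, path = tempfile.mkstemp()
os.write(fd, os.urandom(FILE_MB * 1024 * 1024))
os.fsync(fd)

def worker(_):
    max_block = FILE_MB * 1024 * 1024 // BLOCK - 1
    for _ in range(READS_PER_THREAD):
        # pread takes an absolute offset, so no shared file-position races
        os.pread(fd, BLOCK, random.randint(0, max_block) * BLOCK)

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=THREADS) as pool:
    list(pool.map(worker, range(THREADS)))
iops = THREADS * READS_PER_THREAD / (time.perf_counter() - t0)
print(f"4K random read, {THREADS} threads: {iops:.0f} IOPS")

os.close(fd)
os.remove(path)
```

Comparing the figure host-side versus guest-side shows the same gap the 4K-64Thrd numbers do.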
There are very few reasons to upgrade to 15 Pro from 14 Pro IMO. Even 4K support is questionable: one can achieve workable 1080P across 1080P and 4K screens with Windows 10 (1809) for example. (Just set the host's resolution to 1080P before launching Workstation 14 Pro, AND ensure that you have set the "high DPI settings" appropriately: using the Properties for the Workstation Pro shortcut, go to the Compatibility tab, then "Change high DPI settings," and select "Program DPI" with "I signed in to Windows" for the corresponding dropdown option.) This said, the 4K support is nice if you do decide to fork out the bucks for 15 Pro. If you are using 15 Pro, you may also want to enable (check) "Automatically adjust user interface size in the virtual machine" (under VM --> Settings --> Hardware --> Display).
Now, regarding overall VM performance, including drive performance, here are your best options to date: use a SCSI disk (NOT NVMe), enable the "Disable memory page trimming" option under VM --> Settings --> Options --> Advanced, and select "Fit all virtual machine memory into reserved host RAM" under Edit --> Preferences --> Memory. (For this last option, you will want to ensure you have plenty of physical host RAM that you can spare, i.e., dedicate exclusively, to the VM should the VM need it.) Lastly, if you are using 15 Pro, you may want to set Graphics memory (under VM --> Settings --> Hardware --> Display) to 3GB.
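For what it's worth, on the versions I have used, those two memory options correspond to entries like the following (the first goes in the VM's .vmx, the second in the host-wide settings file, e.g. config.ini on a Windows host; verify the names against your own install before relying on them):

```
# VM's .vmx -- "Disable memory page trimming":
MemTrimRate = "0"

# Host-wide settings -- "Fit all virtual machine memory into reserved host RAM":
prefvmx.minVmMemPct = "100"
```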
Finally, if you are working on a system that has hybrid graphics (e.g., a laptop with an NVIDIA video card alongside the built-in Intel graphics), you may want to use the relevant "control panel" application to instruct your system to use the discrete graphics card for vmware.exe and vmware-vmx.exe.
Wanted to toss my $0.02 out there since I came across this thread.
I have been using VMware Workstation (various versions) on top of Linux Mint (Ubuntu-based) on a ThinkPad P50 laptop with NVMe for the past 4 years. It always felt really slow: not unusable, just far slower than NVMe should be. The guest OS is still Win7/64. iostat on the Linux host would report the NVMe drive as maxed out (% busy time over 90%) during high I/O, though I was probably getting only 2-3% of the drive's I/O capacity. The system dual-boots to Win7 (though it stays in Linux 99% of the time). In Win7 I have the Samsung tools installed, and they show a full 10G connection to the NVMe cards (which are Samsung 960 Pros); the host operating systems run off a 1TB Samsung 950 Pro.
I decided to dig a little deeper yesterday. Since I have two NVMe drives in this laptop, I decided to stitch two partitions together, one from each drive, in RAID 0. Performance from VMware Workstation was still poor: I'm talking just a few hundred MB/sec and maybe 2-3k IOPS, with the drives reported as maxed out by the Linux host (Linux Mint 20, based on Ubuntu 20, with an ext4 file system).
I came across this NVMe option in Workstation itself (I didn't know the option was there until yesterday) and tried to look up what it actually did, but could not find any info on what it does differently or what effect, if any, it may have (I originally assumed it would require raw access to the underlying NVMe drive). I came across a web page that talked about "hacking" the configuration to make your existing disk NVMe, which I did, after installing the drivers in the guest OS.
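For anyone wanting to try the same thing, the edit amounts to .vmx changes along these lines (my own paraphrase of that page; the controller numbers and the .vmdk file name are examples, and you should back up the .vmx and install the guest NVMe drivers first):

```
# Before: disk attached to the virtual SCSI controller
# scsi0.present = "TRUE"
# scsi0:0.present = "TRUE"
# scsi0:0.fileName = "my-disk.vmdk"

# After: hand the same .vmdk to a virtual NVMe controller
nvme0.present = "TRUE"
nvme0:0.present = "TRUE"
nvme0:0.fileName = "my-disk.vmdk"
```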
Performance went up probably at least 3-8x. I saw one second of iostat with 12,000 IOPS and 1.5GB/sec of throughput (most of that writes). It is still a far cry from what the underlying drives should be capable of (each being rated "up to 3.5GB/sec and 330,000 IOPS", still running RAID 0 between both drives), but it is FAR faster than it was before, and more than adequate for my needs (it worked fine before too, it was just sluggish).
I haven't run any formal disk benchmarks. As I was writing this I tried to look some up, since Windows is not generally my thing. I came across the Crystal disk benchmark, which I have heard of before, but it could not find any disks in my VM.
[Per the previous comment talking about other settings: I run pretty generic stuff, no games. I keep hardware 3D acceleration off for improved stability, and I run regular 1080p even though I have a 4K screen; 1080p just works better for me.]
And while it may be obvious: if you're buying Workstation, it's generally best to buy on Black Friday. VMware seems to have a sale on Workstation every Black Friday; at least for the past several years, that is when I have bought.
(VMware user since 1999, going back to before it was named Workstation and before version 1.0; 95% of the time with a Linux host.)