On all 3 of my Linux hosts I have this option enabled for running VMs, but there are no .vmem files in /tmp...
Do you have the server configured to swap to the drive if necessary, or to keep all VMs in memory?
I use it on Linux all the time. Yes, there is some space taken up in /tmp, but the performance is still DRAMATICALLY improved over not having the option.
Did you actually see .vmem files in /tmp??
Maybe I should include a statement whenever I post, like
caveat.emptor = "TRUE"
You don't see *.vmem files in /tmp, but I suspect that you MUST have as much space in /tmp as you have virtual memory allocated for each running virtual machine: after modifying the .vmx configs for 3 of our 10 running virtual machines, each configured with 768 MB of RAM, I ran out of disk space overnight in /tmp, where the /tmp partition was 2 GB in size.
Interestingly, "df -h /tmp" reports that disk space is being used, but "du -sh /tmp" shows very little disk space in use.
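The df/du mismatch is easy to see with a couple of commands - a sketch only; the exact numbers will obviously differ on your host:

```shell
# df reads the filesystem's block counters, so it counts space held by
# unlinked files that still have open handles; du walks the directory
# tree, so it can only sum up files that still have names.
df_used=$(df -kP /tmp | awk 'NR==2 {print $3}')
du_used=$(du -sk /tmp 2>/dev/null | awk '{print $1; exit}')
echo "df reports ${df_used} KB in use; du can account for ${du_used} KB by name"
```

A large gap between the two numbers is the space held by the unlinked files.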
I notice from "lsof /tmp" that the vmware-vmx process appears to create the files, which are then deleted with the file handles still open. Perhaps a parent process creates the files, passes the file descriptors to a child process, and then unlinks the files so that they don't appear in the directory. However, the size of the deleted files is greater than the amount of disk space being used, so I'm not sure where the used disk space actually resides.
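This "create, hand off, unlink" pattern can be reproduced outside of vmware with a throwaway file (the vmem-demo name is made up for the demonstration):

```shell
# Create a file in /tmp, hold it open, then unlink it: the name vanishes,
# but the space stays allocated until the descriptor is closed.
tmpf=$(mktemp /tmp/vmem-demo.XXXXXX)
exec 3>"$tmpf"                                      # keep a descriptor open
dd if=/dev/zero bs=1024 count=1024 >&3 2>/dev/null  # write 1 MB through it
rm "$tmpf"                                          # the name is gone...
size=$(stat -L -c %s /proc/$$/fd/3)                 # ...but the data is not
echo "deleted file still holds ${size} bytes"
exec 3>&-                                           # closing the fd frees the space
```

This is why df sees the usage and du doesn't: the inode survives as long as some process holds it open, and /proc/&lt;pid&gt;/fd is the only place it still shows up.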
At the end of the day, if using mainMem.useNamedFile = "FALSE" provides a significant performance boost, then the advice to use it should be followed by advice that /tmp needs to be at least as big as the total memory allocated to all the virtual machines you plan to run.
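For anyone finding this thread later, the setting under discussion is a single line in the VM's .vmx file:

```
mainMem.useNamedFile = "FALSE"
```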
And just in case you guys are still not convinced, I note that ksc from VMware made the following comments in a previous post - which I didn't read before making the change:
Should add a warning.
On Linux / Mac, "mainmem.usenamedfile = false" just relocates the memory swap file from the VM's directory to a temporary directory (by default, under /tmp).
On Windows, we can back this file with swap, but that consumes disk space too.
Yes, there is some allocation in /tmp that does take place. My apologies - we should include that as a warning when we recommend that option. We recommend that option, though, when people post here pulling their hair out because they can't figure out why their VM with 2GB of RAM is running so slow and dragging the host down with it - in many cases, a host that should have more than enough spare capacity for running that VM plus others.
I think ksc's post was a bit simplified - those files aren't simply relocated; something else must change when you enable that option, otherwise the performance increase would not happen. If this option really just relocated .vmem files from the VM's directory to /tmp, you would see ABSOLUTELY NO performance improvement when using it - especially those of us who run our VMs off the same disks the /tmp partition resides on.
Okay, so maybe we should include that bit of knowledge, but, on the other hand, the space is going to be taken up *somewhere* anyway, so maybe we just assume that people know that?? I'm just kidding - you're right, I will try to remember to include that the next time I tell people to enable that option.
Actually, setting this option to FALSE doesn't really provide any performance benefit. In fact, suspending and resuming can be much slower if you set it to FALSE. The reason is that when suspending, the .vmem file has to be created in the VM folder: the memory-mapped file is read from /tmp and the .vmem file is written to the VM folder. None of that would be needed if the .vmem file already existed there and was memory mapped.
While resuming, it has to copy the .vmem file into the memory-mapped region whose backing store is in /tmp, needing additional IO. Again, if the option is not set to FALSE, it would just memory-map the existing .vmem file and resume almost instantly, provided the host hasn't rebooted since the VM was suspended and you have enough RAM. If your host was rebooted or was under memory pressure, pages from the .vmem file will be brought into memory slowly and the resume will be slower - but not any slower than if you had set the option to FALSE.
Now, the performance benefit while the VM is running: people tend to think that since .vmem is a file, every main-memory access by the VM goes through filesystem IO. That's not entirely true, because the kernel caches memory-mapped regions and won't give up those pages until it is under memory pressure. And what happens if you set the option to FALSE, no .vmem file is created, and the kernel comes under memory pressure? The very same thing. The kernel needs to throw something out, and all memory-mapped regions are fair game, whether they live in /tmp or /mnt/vmware.
The above wouldn't hold if vmware did some sort of mlockall() on the real-time vmware-vmx process only when this option is set to FALSE. Does it? Only the VMware devs can tell us.
Likewise, there's no difference in performance if vmware does mlockall() under both the FALSE and TRUE settings of this option.
Well, in the ideal VMware land, maybe it doesn't provide any performance benefit, but in my real world life, it has made some great performance improvements. Obviously I'm not familiar with VMware internals, but from my experience and postings in this forum, it makes vast performance improvements, especially for machines with 1GB+ of memory.
I understand that suspending/resuming is slower with the option disabled, but I'm willing to sacrifice that if my VMs with 3GB of RAM actually run in a usable fashion.
What does and does not make a difference...
1) If the file is created and unlinked, you will not be able to see the usage with the du command, as du walks the directory tree to summarize usage. The space is still being used, but it cannot be found by name. Use "df -kh" to look at all your disk usage, and compare before and after startup of your VM to find the disk usage.
2) If your /tmp file system is on disk, then there is probably no point in moving the .vmem file there. Many (most modern?) Linux distributions now use a "tmpfs" (a RAM-backed file system) for /tmp. You can tell whether this is the case by checking mount or /etc/fstab, or just by noting whether your /tmp directory is empty every time you reboot. 8-)
2a) If /tmp is on disk, but on a different physical disk than wherever your .vmx files live, the move may still be worthwhile, because the guest's use of "virtual memory" will no longer contend with the host's. In particular, a small-memory VM, or a VM of any size running Windows as a guest, will want to page to its virtual drive. [ASIDE: do NOT remove pagefile.sys or disable virtual memory in your Windows guest. DLLs and EXEs have to be "relocated" in memory, and the Windows pagefile is how Windows caches that relocation information... it isn't "real" virtual memory in the Linux sense.]
3) If your /tmp file system is a tmpfs or other RAM-backed file system, your moment-by-moment performance will be greatly improved, BUT be aware of several important things...
3a) Checkpoints and Snapshots are going to be slower... get over it... 8-)
3b) The size of the tmpfs is constrained, by default, to half of physical memory, and that constraint applies to the total memory consumption of all the VMs running simultaneously. That is, if you have 2 GB and you haven't changed the constraint in your /etc/fstab or wherever, you will have a total of 1 GB in /tmp, and so you won't be able to run three 1 GB VMs with their files relocated. Tune wisely.
3c) The memory used by the tmpfs for this purpose is essentially "free", in that if you have tuned vmware to disallow paging of the reserved memory, the mmap(ed) pages are simultaneously part of the VM and part of the tmpfs. So a 1 GB VM will use 1 GB of memory and no disk space.
3d) It is possible, and even reasonable or necessary, to set the Linux tmpfs size to a value larger than physical RAM, IF and ONLY IF you have the swap space (q.v. mkswap etc.) to make good on that size. If you don't have adequate swap space in the host OS, you will run out of RAM as you fill the tmpfs, the out-of-memory killer (in some kernels) will wake up, and stuff will just start getting booted out of memory (or, on other kernels, strange and random things will appear to fail with out-of-memory conditions).
4) Don't just stare at this one issue when performance tuning: the disk readahead parameters in the host OS stack on top of the disk readahead behaviors in the guest(s), so if you don't tune your disk scheduler and your swap space expectations, you will be punished without your knowledge. The same goes for putting many partitions on one physical drive and then competing for that drive by spreading your files amongst those partitions.
5) Don't take any advice from the internet which doesn't include and acknowledge that your mileage will vary because of these issues...
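To put quick numbers on points 2), 3b) and 3d) above, here's a self-check sketch (assumes GNU df and a Linux /proc/meminfo; the 3 GB target size is just an example):

```shell
# 2) Is /tmp RAM-backed at all?
fstype=$(df --output=fstype /tmp | tail -n 1)
echo "/tmp filesystem type: ${fstype}"   # "tmpfs" means RAM-backed

# 3b) The default tmpfs cap is half of physical RAM.
mem_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
default_cap_kb=$((mem_kb / 2))
echo "RAM: ${mem_kb} KB; default tmpfs cap: ${default_cap_kb} KB"

# 3d) If you enlarge the tmpfs (e.g. "mount -o remount,size=3G /tmp",
# or "tmpfs /tmp tmpfs size=3G 0 0" in /etc/fstab), RAM plus swap
# must be able to make good on that size.
swap_kb=$(awk '/^SwapTotal/ {print $2}' /proc/meminfo)
wanted_kb=$((3 * 1024 * 1024))           # hypothetical 3 GB tmpfs
if [ "$wanted_kb" -gt $((mem_kb + swap_kb)) ]; then
    echo "3 GB tmpfs exceeds RAM+swap: add swap (mkswap/swapon) first"
else
    echo "RAM+swap can back a 3 GB tmpfs"
fi
```

Your mileage will vary, as stated, but at least these numbers tell you which of the cases above you're actually in.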
People (Hi there troyp!) need to understand that the implications of moving the .vmem file will vary by distribution and configuration.
P.S. your mileage will vary because of these issues...
BitOBear, you're a genius! I've been trying for over a year to get rid of the damned virtual memory files, which are major performance hits under a Linux host - whether they are named and reside in the VM directory, or hidden in /tmp (where they can be unmasked/tracked down through /proc/<pid>/fd).
For some reason I'd never investigated tmpfs. Now I've moved /tmp to a tmpfs on the host with mucho swap space, and all is sweetness and light - albeit one hell of a kludge of a workaround for what has to be a mega bug in VMServer.
I don't know if that bug is fixed in 2.x, but given its limitations, like most Linux users I guess I'll be sticking with 1.x for the foreseeable future...