80 Replies - Latest reply on Aug 28, 2010 11:58 PM by danb1974

    VMEM Files Linux vs. Windows

    zenix Novice


      So, I spent several days last week playing around with the pros and cons of running a Linux (CentOS) vs. a Windows (Server 2003) host.



      One thing that was frustrating me was that I couldn't find a way to get rid of the disk I/O by disabling the vmem files.  As many of you know, you can disable this in your config, but on Linux it moves a file of apparently similar functionality to your /tmp directory, and on Windows, at least from what I read, it moves into the pagefile.



      This is where it gets interesting: if I disable the page file on the Windows server, I never see a change in my used disk space.  So it would appear that disabling the pagefile on a Windows server host gets past the disk I/O (not to mention disk space!) used by VMware for memory.  And no, it does not appear to use any more memory either!



      So, as someone who until this point has always tried to run Linux for my VM Server host, I'm wondering if anyone knows whether this can be achieved on Linux?  Getting memory off the disk is way too much of a performance gain for me to ignore.



      Thanks for exploring this topic with me.






        • 1. Re: VMEM Files Linux vs. Windows
          KevinG Guru


          The virtual machine uses the physical RAM of the host.



          The vmem file is the virtual machine’s paging file, which backs up the

          guest main memory on the host file system.



          Your physical machine's OS has the same thing: a pagefile on Windows, a swap file on Linux.



          Your virtual machine does not use the vmem file as its guest memory, so there is no need to worry about performance just because this backing file exists.



          Unless of course the host does not have enough memory to supply what has been allocated to the virtual machine.



          This vmem file must exist; using mainMem.useNamedFile = "FALSE" just changes where it is stored.






          • 2. Re: VMEM Files Linux vs. Windows
            zenix Novice





            Thanks for the reply.  That is also my understanding, but on Linux it doesn't appear to be using the swap partition for this; it uses space in the /tmp directory and consumes space (I'm not sure if it's equal to the amount of RAM used for the guests, but it's a lot) on the partition where the /tmp directory is located.  However, on the Windows server, if I turn off the page file (in the Virtual Memory settings) and use mainMem.useNamedFile = "FALSE", no additional disk space is used.  In fact, I can't see any extra disk I/O or disk space consumed.  My free space stays exactly the same whether I have all the VMs shut down or powered up.  I don't know where it's putting that guest main memory file, but I can't find it anywhere and my performance looks great.
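            For anyone wanting to check where the backing space lands on a Linux host, here's a quick sketch.  The path /tmp/vmware-root is a guess based on reports in this thread, and the helper name is my own; substitute whichever directory creeps up in your df output.

```shell
#!/bin/sh
# Sketch: report how much space a VMware temp area consumes.
# /tmp/vmware-root is an assumed default; adjust as needed.
usage_kb() {
    # Total size in kB of a directory tree (empty if it doesn't exist)
    du -sk "$1" 2>/dev/null | cut -f1
}
dir="${1:-/tmp/vmware-root}"
echo "${dir}: $(usage_kb "$dir") kB"
```

            Run it before and after powering the VMs up to see whether the guest memory is being backed on that partition.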



            So, I'm trying to see if this is also possible on a Linux host. 



            Thanks again for any input on this.



            • 3. Re: VMEM Files Linux vs. Windows
              KevinG Guru

              Yes, the mainMem.useNamedFile = "FALSE" option moves the vmem file from the virtual machine directory to /tmp on a Linux host.






              On a Windows host it is moved to the Windows host swap, so in both cases disk space is being used.

              • 4. Re: VMEM Files Linux vs. Windows
                zenix Novice




                Thanks again for trying to help clarify this.  But what I'm saying is that on a Windows Server 2003 host with the pagefile disabled, no extra disk space is used for the VMs.  When I run the same VMs with the same settings on the same machine, but switch the host to Linux, disk space utilisation goes way up with the VMs running.  On the Windows host with the page file disabled, this simply does not happen.

                Since it's possible to do this on the Windows host, I was trying to figure out if it is possible on a Linux host.



                And to be clear both host environments contain:


                • 32 GB physical RAM

                • Set to fit all vm memory into physical RAM

                • mainMem.useNamedFile = "FALSE"


                The only difference is that on the Windows host, I've disabled the Page File.



                Thanks again.



                • 5. Re: VMEM Files Linux vs. Windows
                  devzero Master

                  good question - i also wonder how this can be disabled completely.....

                  • 6. Re: VMEM Files Linux vs. Windows
                    zenix Novice
                    devzero wrote:

                    good question - i also wonder how this can be disabled completely.....

                    Well, on a Server 2003 host it appears that you can disable it completely as long as you disable the Windows page file (make sure you've got plenty of RAM).  I just haven't seen any way of doing it on a Linux host.

                    • 7. Re: VMEM Files Linux vs. Windows
                      Simon.H Enthusiast


                      I'm running a Red Hat EL5 host and this is exactly the behaviour I'm seeing - whatever I try, I don't seem to be able to stop the guest's memory from being written to disk eventually.



                      I'm in a similar position to zenix. I've got 6GB of RAM (PAE kernel) and two Windows Server 2003 VMs, each with 2GB of memory. I have no Linux swap and want to keep everything in memory. My problem is lack of disk space, so I would really prefer not to have 2 x 2GB .vmem files (or space used from /tmp). (I realise 4GB doesn't sound very much by modern desktop standards, but this is a 3-year-old server with RAID1 and 3 drive bays!)



                      Whilst suspend/snapshots are very nice features of VMware, I can live without them for this environment. My .vmx files contain:



                      memsize = "2048"
                      MemAllowAutoScaleDown = "FALSE"
                      mainMem.useNamedFile = "FALSE"
                      MemTrimRate = "0"
                      snapshot.disabled = "TRUE"



                      My /etc/vmware/config file contains:



                      prefvmx.minVmMemPct = "100"
                      prefvmx.allVMMemoryLimit = "4269"



                      i.e. I've disabled memory trimming and asked it to fit the whole of guest memory into physical RAM. When I switched useNamedFile to false, the .vmem file disappeared, but then disk usage (from df -k) crept up over a few hours (in /tmp/vmware-root, I think, but I couldn't see the actual file usage).



                      Reading the comments on this thread (and lots of others!), does it mean that with a Linux host, as the VM memory is accessed, it also gets written to the "backing file" (.vmem or /tmp), even if you've asked it to fit the whole of guest memory into RAM? During this process, does that also mean the VM's physical memory usage is decreasing? If so, and if it's not possible to stop this behaviour, would it be sensible to write the .vmem to a filesystem in shared memory (/dev/shm)? Would that keep the memory usage constant but save using the physical disk space?



                      • 8. Re: VMEM Files Linux vs. Windows
                        Simon.H Enthusiast

                        Just to update with some results I've found... I reduced one of the VMs from 2GB to 1GB of memory: after 10 minutes it was using 88MB of disk, after 2.5 hours it was 630MB, and then 820MB after running overnight.


                        As far as I can see the memory usage has not decreased:




                          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
                         7045 root       6 -10 1243m 1.1g 1.0g S    6 18.5 167:44.90 vmware-vmx





                        Therefore I assume that the data on disk is a copy of the physical memory.




                        I found one discussion (http://communities.vmware.com/message/370464), where petr suggests that using /dev/shm is a bad idea: "I personally would highly discourage use of tmpfs (or any ram based filesystem) for *.vmem. Your machine may deadlock when you'll put it under memory stress."




                        Ironically it would be cheaper for me to buy more memory (even though it would serve no purpose !) than to replace four 36GB 15k SCSI disks.




                        For Linux servers with enough RAM to cover their VM memory requirements, the only benefit of the .vmem file I can see is that it would allow you to suspend & resume quickly (not a requirement for me). However, for me, 4GB is a fair chunk out of a 33GB filesystem to lose for no benefit...








                        One final thought, if the .vmem file can't be turned off by a .vmx parameter: if VMware is only ever writing to it, is it possible/safe to point it at /dev/null?!

                        • 9. Re: VMEM Files Linux vs. Windows
                          zenix Novice





                          If you have enough memory, the cheapest route around this is to purchase a Server 2003 license.  If you do that, you can simply disable the page file, and then no extra disk space or memory is used.  I have two servers running in this setup right now.  I also have one Linux server that likes to eat disk space.






                          • 10. Re: VMEM Files Linux vs. Windows
                            Simon.H Enthusiast


                            That's a good idea, Daniel - unfortunately the whole idea was to move the server to Linux as the host for management/reliability/consistency etc. with our other servers (this server used to be the physical version of one of the Windows VMs).






                            This issue has surprised me - so far I have been very impressed with VMware... they seem to have thought of everything! However, this seems to be an oversight, especially when server consolidation and making the most of (potentially) legacy hardware seem to be among the drivers for virtualisation. It might look trivial, but 4GB of disk space is a lot on this server. The frustration for me is that I can't understand why VMware would need to write to .vmem if I have enough RAM and don't need to suspend the VM state. Surely there must be a magic parameter to turn it off...?!



                            PS. Given that nobody has wailed in horror at my /dev/null suggestion I'm going to give that a go on a test server - I'll keep you posted.



                            • 11. Re: VMEM Files Linux vs. Windows
                              devzero Master

                              > If you have enough memory, the cheapest route around this is to purchase a Server 2003 license.


                              you're kidding, aren't you?



                              • 12. Re: VMEM Files Linux vs. Windows
                                zenix Novice

                                Yes, I'm very serious.  For $700 you can get a Server 2003 license.  Have you priced SCSI/SAS disks lately?  Not to mention the less tangible costs of performance loss.  In the big picture, I consider that cheap.  I would love to run more Linux hosts, in fact it's my preference, but the performance trade-off is hands down worth the $700 to me and my clients.  I can't afford to have servers slow down.

                                • 13. Re: VMEM Files Linux vs. Windows
                                  devzero Master


                                  for around twice the money you can get VMWare ESX Starter.






                                  • 14. Re: VMEM Files Linux vs. Windows
                                    Simon.H Enthusiast

                                    zenix: I'm not sure I'd say it was "eating disk space" on Linux - I've set all my virtual disks to pre-allocate, and before I used the mainMem.useNamedFile = "FALSE" parameter, I would have said that the disk usage was constant (provided I didn't take a snapshot). I hadn't really noticed the .vmem file, though.




                                    Anyway, I'm sure people are more interested in my /dev/null test... well, so far, so good!



                                    This is what I did on a test machine:

                                    /etc/vmware/config:
                                    tmpDirectory = "/dev/null"



                                    a test VM .vmx file:

                                    memsize = "900"
                                    MemAllowAutoScaleDown = "FALSE"
                                    mainMem.useNamedFile = "FALSE"
                                    MemTrimRate = "0"
                                    snapshot.disabled = "TRUE"



                                    Note: I didn't restart the VMserver service (perhaps I should).



                                    This server has a single filesystem which contains everything (VMs, /tmp etc). Before starting VM, df -k was:

                                    Filesystem           1K-blocks      Used Available Use% Mounted on
                                    /dev/sda6             28123228  18445684   8248968  70% /



                                    After it was running:

                                    Filesystem           1K-blocks      Used Available Use% Mounted on
                                    /dev/sda6             28123228  18445880   8248772  70% /



                                    i.e. it has only used 196kB so far! Unless I've missed something, this means that the .vmem data isn't being written to disk.
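                                    The before/after comparison can be scripted rather than eyeballed; a small sketch (nothing VMware-specific, just POSIX df on the filesystem in question):

```shell
#!/bin/sh
# Print the used 1K-blocks of the filesystem holding a given path,
# so two readings can be diffed before/after powering on a VM.
used_kb() {
    # 3rd field of the df -kP data line = used 1K-blocks
    df -kP "${1:-.}" | awk 'NR==2 {print $3}'
}
before=$(used_kb /)
# ... power the virtual machine on here ...
after=$(used_kb /)
echo "delta: $((after - before)) kB"
```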



                                    Memory: the VM is currently only using 300M:



                                      PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
                                     5629 root       5 -10  297m 106m  97m S    8  5.3   2:49.56 vmware-vmx



                                    Yet, when I look in the VM OS itself I see the 900M available:



                                    Mem:    908604k total,    68780k used,   839824k free,     9492k buffers




                                    Here are some notes:




                                    2) By changing the /etc/vmware/config you are affecting every VM on the server



                                    3) Your VM mustn't need to use the memory "backing file" - i.e. you must have it set to fit ALL VM memory into main RAM (see config above). I assume you mustn't try to suspend it either - I guess it may suspend OK, but given that the .vmem data has gone into a black hole, I'm sure it wouldn't resume (and quite possibly it might corrupt the VM).



                                    4) I have no idea what else VMserver might try to put in the tmpDirectory - clearly, anything it puts there it's not going to be able to retrieve later. This could still prove to be very bad indeed.
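                                    One way to gauge what would be lost is to list which files under /tmp the process actually holds open before redirecting tmpDirectory. A sketch using /proc (so Linux-only; the helper name is my own - point it at the vmware-vmx PID, e.g. from pgrep):

```shell
#!/bin/sh
# Print the /tmp paths among a process's open file descriptors.
# Pass the PID of vmware-vmx; defaults to this shell's own PID.
open_tmp_files() {
    # Each entry in /proc/PID/fd is a symlink to the real file
    ls -l "/proc/$1/fd" 2>/dev/null | awk '$NF ~ /^\/tmp\// {print $NF}'
}
open_tmp_files "${1:-$$}"
```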


                                    I'd welcome any comments on the above, especially from anyone who knows the VMserver internals and could give me an idea whether this might actually work. In the meantime, I'll leave it running overnight and let's see what happens...





                                    PS. for the sake of clarity: this is a risky experiment - do this kind of thing at your own peril - don't blame me if it corrupts your VMs!
