We have developed an OCR product that works fine on physical machines; currently I am testing it with Windows 7 SP1 on a physical machine.
No problems at all.
We have a customer who reports that our product consumes all available memory when running under VMware with Windows 7 SP1 as a guest.
They are seeing the problem with VMware vCloud Director 5.2 and also VMware Workstation 10.0.
After a lot of work at both ends, I am coming to the conclusion that the "problem" is an illusion.
No one at our company has access to VMware to test this, however.
What seems to be happening (from our client's reports) is that our application (which never exits) increases its private memory usage up to the limit allocated to the VM before plateauing. It is the only process running in its own VM, and it is only processing the same 4 files in a continuous loop.
My contention is that:
- because only 4 files are ever being handled, the disk cache is not being exercised,
- with no competition for resources, Windows is being "lazy" and never reclaiming used memory from our application until all free memory is exhausted.
Has anyone else noticed this behaviour with any other application ?
I see a lot of similar issues while googling, mainly for Apple's iOS; very little for Windows systems.
Sorry for not being a VMware user yet, but hey we are trying to help one.
It is my understanding that if memory is requested and written to (maybe during your file reads), the guest will use the host memory and not give it back. The give-back happens when something else needs the memory (but ONLY if you have current VMware guest installed). You stated the app increases private memory usage - if the app still holds the allocation, the OS is likely not going to give the memory back when VMware asks, as the OS will think the VM is still using the memory actively.
Thanks for the reply, cfor; there is a lot I do not understand yet.
To explain a little more about how the application uses memory:
On startup the application mallocs memory for permanent internal tables; these are never released (except implicitly at app exit).
For every new file, new mallocs are issued to read the file (usually a compressed image, which is then expanded).
As the file is processed, new mallocs are issued to hold temporary data.
On completion of work on the file, all memory we allocated to process that file is released.
The permanent internal tables are retained.
It all seems to work in a multi-tasking environment on a physical machine, but under VMware it does not.
Are you saying that the application might have used host memory for the file I/O ?
- I would have expected that to be the responsibility of the guest OS, but I could be wrong here; after all, disk access from multiple guest OSes must be combined safely.
When you say "but ONLY if you have current VMware guest installed", that sounds like I am missing something.
- I thought the guest OS had to be there for the app to work; it sounds like you mean something else. Can you explain further?
Sorry I was not clearer.
VMware Tools needs to be installed inside the guest OS in order for memory to be reclaimed for re-use on the VMware host.
It does seem odd that memory is growing if the malloc'd memory is being released; I had assumed it was just growing without release on the guest (it does not sound like it).
In trying to think of anything that could cause this, a test comes to mind (if possible). A VMware VM has an option to "reserve" memory for it; if possible, set this reservation to 100% of the needed memory and see if that changes anything. What I am wondering is whether the VM host needs to get some memory back and is doing a "ballooning" action, making the guest show memory as all used when it really is not.
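For what it's worth, the reservation can also be set directly in the VM's .vmx file rather than through the UI. A sketch of the relevant entries, assuming a 4 GB guest (values are in MB; `sched.mem.min` is my understanding of the reservation key, so please verify against your VMware version's documentation before relying on it):

```
memsize = "4096"
sched.mem.min = "4096"
```

Setting the reservation equal to `memsize` should prevent the host from ballooning or swapping that guest's memory, which would isolate whether ballooning is what the client is seeing.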
Thanks again for the very prompt response.
I will get back to the client with your thoughts and see if your suggestions make a difference.
That will be tomorrow now.
I was thinking that because the app has the world to itself, so to speak, in a VMware guest, and Windows as the guest has very little to do except fopen, fclose, fread, malloc, realloc, free, Windows entered some strange, normally unseen mode where it was doing something sensible like giving the app whatever it needed and not reclaiming memory until such time as all free memory was used up, and only then reclaiming used and discarded memory.
I would guess that only a persistent app, one that never exits (ours runs for days), with no resource competition, might show this behaviour.