Your idea makes sense but has a few problems.
When running 3 VMs that all use the ramdisk for a nonpersistent second vmdk with the pagefile located on it,
chances are high that you run out of space in the ramdisk; that would then crash the first VM that runs out of space.
I have used Windows ramdisks, and some of them can be expanded on the fly, but I don't know of one that can expand into the pagefile.
What about using an SSD to store the vmem files instead?
Then you could configure the VMs in such a way that only the vRAM actually in use is allocated in real RAM, and you would not run the risk of crashing one of the VMs by running out of space.
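For reference, a minimal sketch of the .vmx settings involved; the path is just an example, and how aggressively Workstation backs vRAM with real RAM also depends on the host memory settings in the UI:

```
# Put the .vmem / suspend / snapshot files on the SSD
# (example path; workingDir moves all of these out of the VM directory)
workingDir = "S:\vmtemp"

# Or skip the .vmem backing file entirely and keep guest RAM in host memory
mainMem.useNamedFile = "FALSE"
```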
What host OS do you use now?
Stuck with Windows XP for now.
Also stuck with an IDE interface that is limited to 66 MB/s theoretically (the drive benchmarked at 50 MB/s with HD Tune), so an SSD would add little or nothing to performance.
Notice the last paragraph of the OP: I'm looking for a ramdisk that grows into host swap space, transparently to its users, so VMs won't crash. tmpfs does exactly that, I have just found. But I would rather avoid Linux as a host.
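For anyone unfamiliar with it, a tmpfs mount on Linux is a one-liner (mount point is an example). The size= value is only a cap: memory is consumed as files are written, and under memory pressure tmpfs pages can be swapped out to host swap, which is exactly the grow-into-swap behavior described above:

```
# Cap at 2 GB; RAM is used only as files are written, and pages
# can spill into host swap when physical memory runs short.
mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk
```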
What if Linux becomes a VM hosting other VMs? If that's possible, Damn Small Linux will do it with very little RAM, possibly less than ESX.
I know ESX can be installed into a VM if you change the .vmx a bit, so other VMs can be run inside that VM. Why not do the same with Linux? Specifically, a Linux with extremely low memory requirements, like Damn Small Linux.
IMO running nested virtual machines is not intended for production use, and in general, even with appropriate tweaks, it tends to be slow, gruelingly so in some cases. It's fine for proof-of-concept scenarios, demos, and testing, but production use is a snooze fest.
Alright, back to tmpfs for Windows then. Can't believe it hasn't been ported yet. Or maybe it has: can the BartPE liveCD use a pagefile?
can BartPE liveCD use a pagefile?
Do you know the MOA BartPE I make? That is a BartPE modified so that it runs Workstation.
So far I support WS up to version 6.5.4; I will add support for WS 7 as soon as WS 7 is good enough.
Yes, it can use a pagefile.
can BartPE liveCD use a pagefile?
Yes it can and I've had to do it at times on systems that didn't have a lot of RAM.
What happens in BartPE if you put a nonpersistent virtual drive onto its ramdisk, and that drive grows larger than the available memory?
I have not explicitly tried what you're asking; however, running out of space under any conditions is not a good thing, and in some cases, depending on the conditions/circumstances, real damage can occur.
I have a screenshot somewhere of me doing exactly this, but I can't find the link right now ...
Well, as expected, the VM asks the usual question, continue or abort, and both answers result in a crash of the VM.
Good to know that the host system - the BartPE in this case - does not suffer.
After some use the swap file of the VM grows and the corresponding nonpersistent drive deltas appear on the host ramdisk and grow.
Funny thing about RAM disks: where does the memory come from? If you allocate memory from ESX to create a RAM disk, that leaves LESS memory for the VMs, which means LARGER swap for the VMs.
So if you simply increase the RAM, or don't use a RAM disk in the first place, you are much better off. RAM disks were primarily needed because older hard drives were anemic; modern disks are fast enough for swap (even SATA drives). So I would buy an external drive or RAID and put swap on that.
That would make better sense. RAM disks aren't used any more; they have outlived their usefulness.
RParker: "so if you simply increase the RAM or not use RAM disk in the first place you are much better off."
Of course. But notice the OP:
"Rather than change the definition of VM's again and again to take advantage of as much memory as possible,"
The context is from the previous paragraph:
"Problem: if you run 3 vmware VM's simultaneously the definition of each has to specify 1/3 of the available memory, or whatever sums up to the available memory. But then such a VM run alone will miss 2/3 of the host memory"
In short: instead of changing the .vmx files all the time, making them all 400 MB when you run 3 VMs, 600 MB when you run 2, and 1200 MB when you run 1, just give them all swap space that is almost as fast as physical memory, because it comes off a host ramdrive that extends into host swap space when physical memory runs out, as Unix's tmpfs does.
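The resizing chore above boils down to trivial arithmetic; a tiny sketch using the numbers from the example (the helper name is mine, and 1200 MB stands in for the usable host RAM):

```python
HOST_RAM_MB = 1200  # usable host memory in the example above

def per_vm_ram(n_vms: int) -> int:
    """Fixed-allocation approach: each .vmx must be hand-edited
    to 1/n of host RAM whenever the number of running VMs changes."""
    return HOST_RAM_MB // n_vms

# Three different VM counts force three different .vmx edits:
print(per_vm_ram(3))  # 400
print(per_vm_ram(2))  # 600
print(per_vm_ram(1))  # 1200
```

The growable-ramdisk idea replaces those three edits with one fixed (small) RAM setting per VM plus swap that lives on the host ramdrive.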