As part of our backup and DR strategy we have set up a separate ESX host to snapshot and export the VMDKs of our virtual machines. Since this is a dedicated backup server, and the backups are scheduled to run during off-hours, we want this host to read and write data at media speeds.
However, after looking at the data rates we are actually getting it would seem that there is a cap somewhere on how much IO the Service Console is allowed to generate. Specifically, none of the export jobs exceed 20MB/s of throughput.
This makes perfect sense for an ESX host in general - but in this particular case we know the disk subsystem can sustain drastically higher loads than we are generating, and a full backup already takes 10 hours. With double the VM count we will be facing a 20-hour backup window, which is not a good thing.
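To make the scaling concrete, here is a back-of-envelope sketch using only the numbers from the post (a 20MB/s cap and a 10-hour window); the variable names are my own:

```shell
# Rough arithmetic on the observed service-console cap
RATE=20                          # MB/s - observed throughput ceiling
HOURS=10                         # current full-backup window
DATA=$((RATE * 3600 * HOURS))    # total MB moved in one full backup
echo "data moved: ${DATA} MB (~$((DATA / 1024)) GB)"
echo "double the VMs: $((2 * HOURS)) hr window at the same cap"
```

At a fixed cap, the window grows linearly with the amount of data, which is exactly the 10hr-to-20hr problem described above.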
So - does anyone know of a way to tweak the service console's performance to achieve higher throughput against disk?
The fastest way to back up is to use a VCB proxy on dedicated hardware connected to the SAN via fiber.
You can do file-level backups of Windows VMs and full-image backups of the other VMs.
The question wasn't what the fastest backup solution is, the question was how to increase the performance of the service console.
I am intimately familiar with VCB and, unfortunately, in my opinion it is not a production-ready product for the enterprise. It is too unreliable, it provides no good restore mechanism, and for Disaster Recovery purposes it is as useless as every other backup software solution. You cannot build a hot standby site on VCB!
We have set up a combined DR and backup solution with which we are very satisfied. It suffers from one problem - it relies on vmkfstools in the service console to do the dirty work, and even on a dedicated node with no virtual machines the performance caps off at around 20MB/s.
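For readers unfamiliar with the method being described, a dry-run sketch of that kind of vmkfstools-based export loop might look like the following. The datastore paths and VM names here are illustrative assumptions, not from the original post, and the script only echoes the commands it would run:

```shell
# Hypothetical per-VM export loop using vmkfstools -i (clone/export).
# SRC/DST paths and VM names are made up for illustration.
SRC=/vmfs/volumes/datastore1
DST=/vmfs/volumes/backup
for VM in vm01 vm02; do
  # echo instead of executing, so the sketch is safe to run anywhere
  echo vmkfstools -i "$SRC/$VM/$VM.vmdk" "$DST/$VM/$VM.vmdk"
done
```

Each `vmkfstools -i` clone in such a loop is what appears to hit the ~20MB/s ceiling in the service console.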
I've also verified that parallelizing the export does not help - if I start two exports at once, they both run at 10MB/s.
Since the number is so consistent, I am fairly certain that there is a hard limit somewhere on how much IO the service console can generate. Knowing VMware, if that is the case there probably exists a switch somewhere that regulates this limit. And since this is a dedicated host, we do not mind if twiddling that switch interferes with any running VMs on the host - there are none.
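The parallel test is what makes a host-wide cap plausible: two concurrent jobs at 10MB/s each sum to the same 20MB/s aggregate as one job alone. Trivial to check:

```shell
# Two concurrent exports at 10MB/s each = same aggregate as one at 20MB/s
JOBS=2
PER_JOB=10   # MB/s observed per job when run in parallel
echo "aggregate: $((JOBS * PER_JOB)) MB/s"
```

If the limit were per-process rather than host-wide, parallel jobs would have scaled the total instead of splitting it.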
Try this thread:
http://www.vmware.com/community/thread.jspa?threadID=80633&start=0&tstart=0
You may find useful info in it, even though it is not about local storage.
We are working on breaking the limit.... Check this out... I just have to share...
When I saw this, I almost fell out of my chair... This is going to be awesome...
VMFS writes... Better...
Whoah, that's pretty amazing - but where is the source code?
Is there anything new on this subject? Any available solution, preferably.
Well, nothing new as far as figuring out ways of improving export speeds, but interestingly I've been able to dump files at 180MB/s in testing using local (DAS) RAID volumes. The host in question was not running any VMs, so the result is somewhat artificial, but I was blown away by how fast the exports were going using exactly the same method as before, i.e. vmkfstools -i.
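Combining that 180MB/s figure with the earlier numbers (20MB/s cap, 10-hour window, so roughly 720000 MB per full backup) gives a rough sense of the potential win; this is my own arithmetic, not a measured backup time:

```shell
# What the earlier 10hr/20MB/s workload would take at 180MB/s (DAS result)
DATA=720000   # MB, from 20 MB/s * 3600 s * 10 hr
FAST=180      # MB/s observed on local DAS RAID
SECS=$((DATA / FAST))
echo "seconds at ${FAST} MB/s: ${SECS}"
awk -v s="$SECS" 'BEGIN { printf "hours: %.1f\n", s / 3600 }'
```

That is roughly a 9x speedup over the capped service-console exports, so the ceiling is clearly not the disks themselves.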
Hi, I'm facing the same issue with ESX 3.5U2. Did you find a solution in the meantime? A backup of a 20GB guest takes me about 2hrs! I use vcbMounter running in the service console, writing to an NFS share.
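For comparison with the ~20MB/s cap discussed above, 20GB in roughly 2 hours works out to well under 3MB/s - quick arithmetic on the numbers in the post:

```shell
# Effective throughput of a 20GB guest backup taking ~2 hours
SIZE_GB=20
SECS=$((2 * 3600))
awk -v mb=$((SIZE_GB * 1024)) -v s="$SECS" \
  'BEGIN { printf "effective rate: %.1f MB/s\n", mb / s }'
```

So the vcbMounter-to-NFS path is running nearly an order of magnitude below even the capped local export rate, suggesting the NFS write path adds its own bottleneck.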
Regards