Yet again... another question about VDR 2 (or 1.2) and disk performance...
My current setup mounts an HP XP2400 over FC, backed by a big bunch of SATA drives. Usual performance is close to 3GB/s with little to no latency.
My VDR 2.0 has a 500GB LUN with a .vmdk of almost the full size as the backup destination.
When doing an initial full backup of a 60GB VM to an empty backup destination, I have the following stats:
- Datastore latency write: Maximum: 14ms, Average: 4.3ms
- Datastore latency read: Maximum: 26ms, Average: 4ms
- Virtual Disk latency read: Maximum: 19ms, Average: 6.4ms
- Virtual Disk latency write: Maximum: 1593ms, Average: 134ms :smileyshocked:
- Disk write rate average: 8547 KBps :smileyshocked:
- Disk read rate average: 2017 KBps
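A quick back-of-the-envelope check on those numbers (a Python sketch, assuming the vSphere-chart convention that KBps means KiB/s and GB means GiB) shows why the initial full pass feels so slow at that write rate:

```python
# Rough sanity check: how long does a 60GB full backup take if the
# destination only absorbs ~8547 KBps (the average write rate above)?
vm_size_kib = 60 * 1024 * 1024      # 60 GiB expressed in KiB
write_rate_kibps = 8547             # average disk write rate from the stats

seconds = vm_size_kib / write_rate_kibps
hours = seconds / 3600
print(f"~{hours:.1f} hours for the initial full pass")  # prints "~2.0 hours ..."
```

So even before dedup overhead, that write rate alone pins the initial 60GB backup at around two hours.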
OK, I have read other discussions about speed, but how could it be this bad with such good hardware? Something tells me I am doing something wrong...
I am having the same problems and I would like to know if you ever found a solution/answer to this issue.
I am using much weaker HW (an HP MSA 1500 fully loaded with 14 SCSI disks in RAID 5, used exclusively for VDR), but the latencies I get compared to the data throughput still don't look realistic. I have no idea why this is happening and would appreciate any answer/solution.
Honestly, I have given up on those numbers... VDR is the kind of product where you stop asking yourself questions - at least in my case.
Out of 300 VMs, I use VDR to back up about 40 as a (production) "test bench". The initial backup is super slow, the subsequent ones are just OK, but the numbers are always strange. For example, on a LUN where I have two nearly identical machines to back up (not more than a 100MB difference between them, and only those two VMs on the LUN), machine A takes 1 hour at 1117.3 MB/min while machine B takes 2 hours at 499.5 MB/min.
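To show why that looks strange, a quick calculation (Python; durations and rates taken from my example above) confirms the two jobs moved roughly the same amount of data despite the 2x runtime gap:

```python
# Total data reported per job = duration (min) * average rate (MB/min).
a_total_mb = 60 * 1117.3    # machine A: 1 hour at 1117.3 MB/min
b_total_mb = 120 * 499.5    # machine B: 2 hours at 499.5 MB/min

print(f"A processed {a_total_mb / 1024:.1f} GB, B processed {b_total_mb / 1024:.1f} GB")
print(f"data difference: {a_total_mb / b_total_mb:.2f}x, runtime difference: 2x")
```

So roughly 65GB vs. 59GB of data processed, about a 12% difference, yet one job runs at less than half the throughput of the other.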
In my case, I found RDM destination disks more "stable" or "reliable", and I couldn't say whether a 1TB LUN gives more trouble than a 500GB one. Over the last few months, I have gone from "Whoa, it looks OK now!" to "Crap, I have to run the integrity check on those damaged restore points *again*". Nothing has changed in my configuration in the last 8 months, and I now get weekly damaged restore points... even on a brand new destination LUN! We have other software deduplication tools running against much bigger source/destination LUNs that have never had problems like that.
Generally speaking, the tool would be OK if we didn't have those dedup errors... Make sure your VMs are on hardware v7 in order to back up only the changed blocks. It looks slow, but when it works, it is nice. Restoring is easy and file-level restore works well. As soon as I have my other projects finished, I will probably look for another solution though. Free is nice, but free with daily management and weekly downtime is not a production-viable solution.
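On the hardware v7 point: the version is recorded in each VM's .vmx file under the key virtualHW.version, so you can spot VMs that won't get changed-block tracking before kicking off a backup. A minimal sketch (the .vmx fragment below is a made-up sample for illustration; the key name is the real one):

```python
def hw_version(vmx_text: str) -> int:
    """Extract virtualHW.version from the text of a .vmx file."""
    for line in vmx_text.splitlines():
        key, _, value = line.partition("=")
        if key.strip() == "virtualHW.version":
            # Values in .vmx files are quoted strings, e.g. "7"
            return int(value.strip().strip('"'))
    raise ValueError("virtualHW.version not found")

# Hypothetical sample .vmx fragment for illustration:
sample = '''
.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "7"
displayName = "test-vm"
'''

v = hw_version(sample)
print("changed-block backup eligible (hardware v7+):", v >= 7)
```

In practice you would read the real .vmx from the datastore (or query the VM's version through your management tooling) instead of a hardcoded sample.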
Feel free to ask questions since I am still using it 🙂