I've just run some tests on an IBM x3650 server comparing SCSI pass-through backup performance from a guest VM to a VXA-3 tape deck against the same backup run natively on the host.
Windows Server 2003 R2 (32-bit) on both host and guest.
The disks run off a ServeRAID 8k controller in a RAID-5 configuration, while the tape deck runs off a separate Adaptec SCSI-3 controller.
I used NTBackup in both cases to back up a set of large files stored in a host directory. For the guest-VM test, I used a host-only NIC configuration, shared the host directory, and mapped a drive to that share from within the guest. I also disabled TCP Chimney offload on the host, as recommended elsewhere.
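For reference, the setup amounted to something like the following (the host name, share name, and drive letter are placeholders, not my actual config):

```
:: On the host (Windows Server 2003 SP2): disable TCP Chimney offload
netsh int ip set chimney disabled

:: In the guest: map a drive to the host share over the host-only NIC
net use Z: \\HOSTNAME\BackupShare /persistent:no
```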
What I found was that the guest backup ran at about 12.8 MB/sec, and the same operation performed natively on the host came in at the same overall transfer speed! The VXA-3 has a sustained transfer rate of 12 MB/sec without compression, and since I was backing up binary data, that figure seems reasonably respectable.
To confirm this was as good as it could get, I checked disk throughput by xcopying the same files from the host to the guest's virtual disk, which ran at about 24 MB/sec. Since the virtual disk was expanding as the copy progressed, that seems pretty good.
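If anyone wants to repeat the throughput check, it was just an xcopy of the test set from the mapped share into the guest's virtual disk (paths are placeholders):

```
:: Copy the test files from the mapped host share onto the guest's virtual disk
xcopy Z:\TestFiles\*.* D:\TestFiles\ /E /Y
```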
So I conclude that SCSI pass-through to the tape controller from the guest VM suffers no material performance degradation, which I find rather amazing.
Based on this result, is there any reason why using the guest VM to perform backups is not a perfectly viable approach?
I don't plan on using the VSS stuff on the host because I'll probably be moving the host to 64-bit Linux. That means I'll either have to run a backup client agent within each VM (probably relatively slow), or manually quiesce or shut down the VMs somehow and back up each VM's virtual disk files from the host.
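For the shutdown-and-copy option, I'm picturing something along these lines on the Linux host. This is only a rough sketch: it assumes VMware Server's vmware-cmd tool is available, that /vms holds one directory per VM, and that the tape is on the non-rewinding device /dev/nst0; adjust paths and device names to suit.

```
#!/bin/sh
# Rough sketch: quiesce each VM, archive its directory to tape, restart it.
for vmx in /vms/*/*.vmx; do
    vmware-cmd "$vmx" stop soft            # graceful guest shutdown
    tar -cf /dev/nst0 "$(dirname "$vmx")"  # write the VM directory to tape
    vmware-cmd "$vmx" start                # power the VM back on
done
```

Using the non-rewinding device keeps each VM's archive as a separate file on the tape rather than overwriting the previous one.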
Also, what would be the optimal NIC setup for VM-to-VM connections? Can two VMs communicate with each other via host-only NICs?