Apparently there isn't an answer to this issue; my first disappointment with VMware. The vcbMounter process takes over 2 hours to push an 8 GB VM to the proxy, which doesn't make sense. How could you ever back up multiple VMs in the same job?
Folks-
Having an issue backing up full VMs in the following environment:
-ESX 3.5, VC 2.5, Backup Exec 11d
-Proxy server: Dell 1950, Win2003 R2
-iSCSI SAN: Dell AX150i
File-level backups work great, but full VMs copy very slowly to the proxy: a 12 GB VM takes over 2 hours. Other file transfers over the same link are much, much faster. My problem is that I'm running up against Backup Exec's 300-minute timeout. Any ideas would be appreciated.
What method are you using for VCB backups?
For iSCSI you should use nbd or nbdssl (backup over network).
It is probably advisable to install additional physical NICs on a completely dedicated network to keep backup traffic separate from normal network traffic.
SAN mode may work with an iSCSI initiator on the VCB proxy, but it is not the supported method for iSCSI.
What kind of storage controller do you have in that proxy, and does it have write cache enabled?
A file-level backup just mounts the snapshot and copies files directly to the backup media; a fullvm backup first has to copy the entire VMDK over to the proxy.
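As an illustration of switching transports, a fullvm export forced over the network might look like this on the proxy. The host name, credentials, VM name, and export directory are all placeholders; check your VCB version's vcbMounter help and the user guide for the exact flags:

```bat
:: Hypothetical vcbMounter run using the NBD (network) transport
:: instead of SAN mode. Host, credentials, VM name, and export
:: directory below are placeholders, not values from this thread.
vcbMounter -h vcserver.example.com -u backupuser -p s3cret -a name:myvm01 -r D:\vcb-mnt\myvm01 -t fullvm -m nbd
```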
Since I use VCB with an EMC FC SAN, I can't answer your question directly. However, two things come to mind. First, the idea of VCB is LAN-free backup, which in your case means all traffic to and from your VMs, plus the VCB backup itself, shares the same 1 Gb link to your iSCSI unit. Second, the EMC AX150 is not one of the better-performing SAN units: it has SATA drives and rather slow controllers. I know this because we have one, and even over FC it is slow.
I think this adds up to the slow performance you are seeing with VCB. Not a great answer, but....
The network option worked for me. FullVM times went from several hours to several minutes.
Thank you
What network option did you set?
I ended up moving to a different solution: agents for file-level backup and VISBU for fullVMs. VCB was getting cumbersome for a small organization.
If you have a chance, try it again with the parameter -M 0. That changes the export from a monolithic file to a series of 2 GB files. It may not make much of a difference on an 8 GB image, but it sure did for me on one that's 300 GB.
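For what it's worth, that switch slots into the same kind of invocation; everything here other than -M 0 is a placeholder:

```bat
:: Hypothetical fullvm export split into 2 GB files via -M 0 rather
:: than one monolithic VMDK; all other values are placeholders.
vcbMounter -h vcserver.example.com -u backupuser -p s3cret -a name:myvm01 -r D:\vcb-mnt\myvm01 -t fullvm -M 0
```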
I'm running into the same phenomenon.
My environment is as follows.
-ESX 3.0.2 Update 1: HP BL460c
-VC 2.0.2 Update 3: HP BL460c
-Backup server: Win2003 Std R2, Symantec Backup Exec 11d, HP DL380G5
-iSCSI storage: EMC AX150i, dual controller
As in the original post, copying to the backup server takes a very long time when running a fullVM backup.
Watching the iSCSI network traffic between the AX150i and the backup server, the transfer rate is only 1-5% of the 1 Gbps link.
A fullVM backup of an approximately 20 GB virtual machine took several hours.
In a different but very similarly configured environment I used before, a fullVM backup completed in tens of minutes at a transfer rate of 30-40% of 1 Gbps.
What setting causes the difference between these transfer rates?
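Those utilization figures line up with the reported times. A quick sanity check, as a sketch that treats GB as GiB and ignores protocol overhead:

```python
# Back-of-envelope: time to copy a VM image at a given fraction
# of a 1 Gbps link (ignoring protocol and snapshot overhead).
def copy_time_hours(image_gb, link_fraction, link_gbps=1.0):
    image_bits = image_gb * 8 * 1024**3         # GB -> bits (GiB-based)
    rate_bps = link_fraction * link_gbps * 1e9  # effective bits/second
    return image_bits / rate_bps / 3600

# A 20 GB VM at 1-5% of 1 Gbps:
slow = copy_time_hours(20, 0.01)   # roughly 4.8 hours
fast = copy_time_hours(20, 0.05)   # roughly 1 hour
# The same VM at ~35% utilization finishes in tens of minutes:
good = copy_time_hours(20, 0.35)   # roughly 8 minutes
```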
I gather you're using an iSCSI initiator in your proxy, and using the SAN mode.
The mode VCB uses for the backups can be set in the configuration file (config.ini iirc)
This is also described in the VCB user guide.
SAN mode is for fibre attached SANs.
Anything else, and you should use NBD or NBDSSL (Network Block Device, either SSL secured or not).
This means you could even use VCB for VMs on local storage, which was not possible in the past.
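If your setup drives VCB through the integration scripts' config file mentioned above, the transport is typically a single key in that file. The key name and the file name vary by version, so treat this as a sketch and confirm against the VCB user guide:

```
# Hypothetical config entry selecting the network transport
TRANSPORT_MODE="nbd"
```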
Thank you for your answer.
When I ran it in NBD mode, I got about 10 times the throughput.
But the VCB copy traffic goes over the Service Console segment (the business segment), not the iSCSI segment.
In SAN mode, the VCB copy goes over the iSCSI segment.
Is it possible to direct the VCB copy traffic onto the iSCSI segment when using NBD mode?
You could set it up to use your iSCSI segment by adding a service console port in that segment and configuring the proxy to use that address. (If you use VCB by connecting to VirtualCenter instead of the host directly, you'll have to put VC in that segment too.)
But then this will also influence performance, because the data will be read from disk over the iSCSI link, then copied to the proxy over that same network.
A better way would be to use 3 separate networks for the service console (and backup), business and iSCSI.
Hi Everyone -
I'm having the same issue with an EMC CLARiiON AX4-5i and Backup Exec 12.5. I spoke with a VMware engineer who made an entry in the registry to correct the issue.
Drill down to HKEY_LOCAL_MACHINE --> SYSTEM --> CurrentControlSet --> Services --> Tcpip --> Parameters --> Interfaces. Find the entry for your iSCSI network adapter and add the following DWORD value:
TcpAckFrequency with a value of 1.
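The same value can also be added from a command prompt with reg add; the interface GUID below is a placeholder you would replace with the one for your iSCSI NIC, found under the Interfaces key:

```bat
:: Replace {YOUR-ISCSI-NIC-GUID} with the interface GUID of the iSCSI
:: adapter; a reboot may be needed before the change takes effect.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{YOUR-ISCSI-NIC-GUID}" /v TcpAckFrequency /t REG_DWORD /d 1 /f
```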
Went from 50MB per minute to 1GB per minute.