VMware Cloud Community
edk
Contributor

Data Recovery 1.2 only supports 2 destinations at a time

I started with VDR 1.1 and had 3 destinations set up (due to the 256 GB limit): 2 were shared datastores (from 1 physical 500 GB disk) and 1 was a network share.

I installed the version 1.2 appliance to "upgrade" from version 1.1.

The mount of the network share went OK and 1 of the datastores mounted automatically, but I can no longer mount the third destination; a warning window shows up indicating the Data Recovery appliance no longer supports more than 2 destinations at a time...

Right now I can't do anything while I wait for the integrity check (it looks like it will take hours).

So my questions are:

Can I migrate those jobs to the other datastores without creating new jobs?

Can I set up another new appliance to use that datastore?

Could I "extend" one data store to another

Any other recommendations would be helpful.

- Ed K | PHD Virtual Support
4 Replies
kcucadmin
Enthusiast (accepted solution)

VMware has always said no more than 2 destinations, but they didn't restrict it. Now it looks like they do.

However, there is good news here. VMware now allows up to 10 VDR appliances per vCenter Server, and you can switch between them from the VDR plugin. So you could set up 2 separate VDR appliances with 2 destinations each, letting you distribute your dedupe load across multiple hosts.

I now have 2 VDR appliances set up with 2 VMFS3 VMDK dedupe stores, and will perform my initial backups tonight. Wish me luck.

P.S. I didn't have much luck upgrading my 1.1 restore points; I figured it's probably just best to start fresh... You can keep the VMDK files if you have the room, in case you need to perform a restore; just don't configure them in the new VDR or mount them.

Just a suggestion.

admin
Immortal

1.1 also only "supported" 2 destinations. It's just now being enforced in 1.2.

"can I migrate those jobs to the other data stores without creating new jobs ?"

Yes, just edit the job and change the destination.

"Can I setup another new appliance to use that data store ?"

Destination datastores can only be used on one appliance

"Could I "extend" one data store to another"

Destination datastores are independent of each other. So no, you can not combine 2 destinations together.

edk
Contributor

Thanks to the two replies so far.

They both confirmed my issue and gave me some "tools" to work with.

Here is my game plan.

I have learned how to create a VMDK larger than 256 GB (by increasing the VMFS block size; see the sketch below).

I created a single volume as big as the two I had started with.

All my weekly backup jobs will be redirected to the new larger volume.

I will keep the two smaller volumes for restores until the new backup jobs pass the retention period.
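
For anyone else bumping into the 256 GB wall: the limit comes from the VMFS3 block size chosen when the datastore is formatted. To my understanding, the max file size is roughly block size × 256K file blocks, so 1 MB blocks cap a VMDK at about 256 GB and 8 MB blocks at about 2 TB. Here's a minimal Python sketch of that arithmetic; the 500 GB figure is just an example, not my exact layout:

```python
# VMFS3 max file size scales with the block size picked at format
# time: roughly block_size * 2**18, i.e. 1 MB -> 256 GB,
# 2 MB -> 512 GB, 4 MB -> 1 TB, 8 MB -> 2 TB (actual caps are a
# hair under these figures -- VMFS3 subtracts 512 bytes).

GB = 1024 ** 3
MB = 1024 ** 2

# block size in MB -> approximate max file size on VMFS3
VMFS3_MAX_FILE = {bs: bs * MB * 2 ** 18 for bs in (1, 2, 4, 8)}

def min_block_size_mb(required_bytes):
    """Smallest VMFS3 block size whose file-size cap fits the disk."""
    for bs in sorted(VMFS3_MAX_FILE):
        if required_bytes <= VMFS3_MAX_FILE[bs]:
            return bs
    raise ValueError("larger than the 2 TB VMFS3 ceiling")

# Example: one dedupe volume replacing two ~250 GB stores.
print(min_block_size_mb(500 * GB))  # -> 2 (MB block size or larger)
```

So for a ~500 GB volume, anything from a 2 MB block size up does the job.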

It's nice to finally have a number on the limit of VDR appliances per vCenter Server.

Lessons learned

- Ed K | PHD Virtual Support
kcucadmin
Enthusiast

Regarding the block size issue: the EMC and VMware techs I've worked with have pretty much told me that regardless of the actual VMFS store size, it's best to use the 8 MB block size setting. I was worried about wasted block space, but they both assure me ESX will use the free block space.

Using the 8 MB block size can speed up reads/writes, I guess; less overhead, or something like that.

Just another tidbit of info:

I think Storage vMotion also behaves better when all your datastores use the same block size; I've seen a speed difference going from an 8 MB datastore to a 2 MB datastore.

I'm sure someone more knowledgeable can speak to the mechanics, but anyway, if you're rebuilding your dedupe stores, now's the perfect time to size them with an 8 MB block size.
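
For anyone rebuilding dedupe stores from scratch, the service-console steps look roughly like the sketch below. I'm driving vmkfstools from Python just to keep the commands in one place; the device path, volume name, and VMDK size are all placeholders, and the flags are worth double-checking against your ESX release before running anything:

```python
# Sketch: format a LUN as VMFS3 with an 8 MB block size, then carve
# out one large VMDK to attach to a VDR appliance as a dedupe
# destination. All names/paths below are placeholders.

import subprocess

DEVICE = "/vmfs/devices/disks/naa.placeholder:1"  # hypothetical partition
VOLUME = "DedupeStore8M"                          # hypothetical datastore name

# 1) Create the VMFS3 filesystem with 8 MB blocks (file cap ~2 TB).
subprocess.check_call(
    ["vmkfstools", "-C", "vmfs3", "-b", "8m", "-S", VOLUME, DEVICE]
)

# 2) Create a single 500 GB virtual disk on it (size given in MB,
#    which seems to be the most portable suffix across ESX versions).
subprocess.check_call(
    ["vmkfstools", "-c", "512000m",
     "/vmfs/volumes/%s/dedupe01.vmdk" % VOLUME]
)
```

From there the disk gets attached to the VDR appliance and formatted as a dedupe store from the plugin, same as any other destination.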
