We are using Linked Clones, and the replica creation process takes anywhere from 15 to just over 20 minutes to create a 20 GB replica. I have tested with an EVA4400 using 15k FC disks, a CX4-120 with 15k FC disks, and also had the chance to test with some EFD drives (all tests over fibre), but performance stayed roughly the same, around 20 MB/s, for the creation process. Testing was done with minimal unrelated disk activity at the same time; in most cases the SAN was doing nothing but creating the replica.
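For what it's worth, those numbers are self-consistent: 20 GB moved at roughly 20 MB/s works out to about 17 minutes, squarely inside the observed 15-20 minute window. A quick sketch (the assumption that the full 20 GB is copied sequentially is mine; the real I/O pattern during replica creation is more complex):

```python
def copy_time_minutes(size_gb: float, throughput_mb_s: float) -> float:
    """Estimated time to move size_gb at throughput_mb_s, in minutes."""
    return (size_gb * 1024) / throughput_mb_s / 60

# 20 GB at ~20 MB/s -> roughly 17 minutes, in line with what was observed.
print(copy_time_minutes(20, 20))
```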
A few questions stemming from this:
1. Should I realistically be seeing the same throughput from each of these three combinations? I was under the impression that the EFDs should be considerably faster.
2. Whilst this isn't a large amount of time to wait, is it possible to create a replica and then keep it?
Essentially: I have a snapshot on my golden image from which a replica is created. This replica is used in a specific resource pool, and neither the base image/snapshot nor the resource pool has changed. If I disable provisioning on the Desktop Pool that uses this replica and delete all desktops from the inventory, the replica deletes itself once the desktops are gone, meaning it has to be recreated when provisioning is enabled again.
I would've thought it would make sense to keep the replica there, if no changes have been made, to speed up deployment of your Linked Clones; ideally I would like to keep multiple replicas from different snapshots and not have to re-create them. This has really only ever been an issue with unscheduled outages and the occasional problem cropping up, such as desktops not deleting on log off, which requires the whole pool to be refreshed. These are infrequent events, but peace of mind is a lovely thing.
Any insight on these questions would be greatly appreciated!
1: I have never timed or measured our replica creation process, but it has never felt like it was taking an extremely long time. We have a maintenance period this weekend and will be pushing out a new image, so I'll keep track of how long it takes. The replica is thin provisioned, so is your 20 GB number the space actually used or the provisioned size?
2: I have never heard of a way to archive a replica. You could probably do it, but it would more than likely require a lot of manipulation of the ADAM database to link the clones to that particular replica. I don't think it would be supported, nor do I really have any idea where to start on something like that.
Creating a replica is not a simple file copy operation, so no, you won't see the same throughput. If the snapshot chain is complex, meaning the same sectors have been changed many times across snapshots, vCenter has to work out the end resulting disk that it should commit, and those calculations take time. This is why it takes longer to make a replica.
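To make the "not a simple copy" point concrete, here is a purely conceptual model (not how ESX actually stores VMDKs): flattening a snapshot chain means replaying every delta over the base disk, so the work grows with the number of snapshots and the sectors they touch, not just the disk size.

```python
def flatten(base: dict, deltas: list) -> dict:
    """Apply each delta (sector -> data) over the base, oldest first."""
    disk = dict(base)
    for delta in deltas:
        disk.update(delta)  # a later snapshot overwrites earlier writes
    return disk

base = {0: "a", 1: "b", 2: "c"}
deltas = [{1: "b2"}, {1: "b3", 2: "c2"}]  # sector 1 was changed twice
print(flatten(base, deltas))  # {0: 'a', 1: 'b3', 2: 'c2'}
```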
Regarding keeping replicas: there is no way to keep them. Composer automatically checks the database for any dependencies on a replica and, if it finds none, removes it. The only way to stop this operation would be to stop the Composer service on your vCenter server.
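If you did want to experiment with that (unsupported, as noted above), stopping the service from a script on the Windows vCenter server might look like the sketch below. The service name "svid" is my assumption, so verify the actual View Composer service name in services.msc first.

```python
import subprocess

def stop_service_command(service_name: str = "svid") -> list:
    """Build the Windows 'net stop' command for the given service.
    "svid" is an assumed name for the View Composer service."""
    return ["net", "stop", service_name]

def stop_composer(service_name: str = "svid") -> None:
    # Must run on the vCenter (Windows) host with admin rights.
    subprocess.run(stop_service_command(service_name), check=True)
```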
Just for comparison, I just deployed a new pool. The golden image is 20 GB with about 13 GB utilized. I deployed using View 4.5 on a 4.1 cluster connected to a CX480 SAN with VAAI. The total time to create the replica was 10 minutes.
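If Composer only copies the utilized space (an assumption on my part), that comparison lands in the same ballpark as the original poster's figures: about 13 GB in 10 minutes is roughly 22 MB/s.

```python
def throughput_mb_s(used_gb: float, minutes: float) -> float:
    """Effective throughput if used_gb is copied in the given minutes."""
    return (used_gb * 1024) / (minutes * 60)

print(round(throughput_mb_s(13, 10), 1))  # ~22.2 MB/s
```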