I'm wondering if anyone knows how Linked Clones will affect datastore sizing with regard to how many linked clone desktops should be run per VMFS datastore?
The question of how many VMs/VDIs to run on a datastore was a tough question to answer before, and now I believe it is even more complicated with the addition of linked clones.
With VDM 2.1, I was planning to run ~25 XP desktops per datastore. I'm not really sure how I came up with that number, but I did (Note: I'm using a NetApp FCP SAN).
View 3.0 has linked clones, which use a single base image to create a replica to which all clones are linked. So, instead of making a full copy of the image for each new desktop, View simply links to the single common replica (great disk savings, and quicker deployment I hope), and creates a "delta" file that captures the changes between the read-only replica image and whatever is happening on the running desktop.
How does this new delta file architecture affect disk performance etc. on the datastores? Does it have any effect at all? If I was previously planning to size my datastores to fit 25 desktops, should I now size them to fit the delta files for 25 linked clone desktops? What about the thin provisioned user data portion?
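To make the sizing question concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it (base image size, average delta size, user data disk size, overhead factor) is a made-up placeholder, not anything from the View documentation — substitute your own measurements. It just shows why a linked clone datastore is sized around one replica plus per-clone deltas, rather than per-clone full copies:

```python
# Rough capacity comparison for one datastore: full clones vs. linked clones.
# All sizes are illustrative guesses (GB); substitute your own measurements.

BASE_IMAGE_GB = 15.0   # thick XP base image (assumed)
DESKTOPS = 25          # desktops per datastore, as planned above
AVG_DELTA_GB = 3.0     # assumed steady-state delta per linked clone
USER_DATA_GB = 2.0     # assumed thin-provisioned user data disk per clone
OVERHEAD = 1.25        # headroom for swap files, logs, snapshot growth

def full_clone_usage():
    # old VDM 2.1 style: every desktop is a full thick copy of the image
    return DESKTOPS * BASE_IMAGE_GB * OVERHEAD

def linked_clone_usage():
    # one shared read-only replica, plus a delta and user-data disk per clone
    replica = BASE_IMAGE_GB
    per_clone = DESKTOPS * (AVG_DELTA_GB + USER_DATA_GB)
    return (replica + per_clone) * OVERHEAD

print(f"full clones:   {full_clone_usage():.0f} GB")
print(f"linked clones: {linked_clone_usage():.0f} GB")
```

The interesting knob is `AVG_DELTA_GB`: it depends entirely on how much the desktops churn between refreshes, which is why the refresh schedule and the datastore sizing have to be decided together.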
I am just at the installation stage for the linked clones piece and haven't actually run it yet, but it also looks like you may have the option to choose a separate datastore for the user data. How does this fit into the sizing question? Previously, I was using Folder Redirection and roaming profiles to store users' My Documents etc. on network storage, and I would like to continue doing this (there are nice snapshotting features etc. on the network storage). However, I'm guessing that there is some user data outside of what is kept in the folder redirection or roaming profile, and this data likely ends up in the thin provisioned user portion of the linked clone desktop. Can anyone comment on what they are doing with the user data? Is it on a separate datastore from the OS delta files?
Any knowledge or experiences are appreciated.
Thanks.
Some good questions; this has been a topic of discussion around the blogs for the last week or so.
You want to have a read of the following blog posts which run through the issue:
Then review these two documents from the View Reference Architecture.
These have details about the storage required for hundreds of linked clones.
In your design you will need to consider:
Storage over commit
How many datastores to create for each pool
How often are you going to refresh the desktops to reduce their storage space
When you are going to rebalance your datastores within pools
How large to create each datastore, given the replica size and the estimated combined size of the deltas/snapshots you plan to run
The physical storage design for each datastore to provide the IOPS you require
What monitoring you are going to put in place for all of the above, to catch issues before they cause a problem
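The storage over-commit and sizing items in that checklist can be tied together in a simple formula. A minimal sketch, where every input (replica size, clone count, delta size, over-commit ratio, headroom fraction) is an assumed placeholder rather than a recommendation:

```python
# Sketch of datastore sizing with storage over-commit.
# All inputs are placeholder assumptions; measure your own environment.

def datastore_size_gb(replica_gb, clones, delta_gb_each,
                      overcommit=1.0, headroom=0.2):
    """Estimate the required datastore size in GB.

    overcommit > 1.0 means you provision less space than the worst case,
    betting that deltas stay small because you refresh regularly.
    headroom reserves a fraction for swap files, logs, and growth spikes.
    """
    worst_case_deltas = clones * delta_gb_each
    committed = replica_gb + worst_case_deltas / overcommit
    return committed * (1.0 + headroom)

# e.g. 64 clones on one datastore, a 15 GB replica, 4 GB expected deltas,
# 2x over-commit and 20% headroom (all hypothetical numbers)
size = datastore_size_gb(replica_gb=15, clones=64, delta_gb_each=4,
                         overcommit=2.0, headroom=0.2)
print(f"~{size:.0f} GB per datastore")
```

The more aggressively you over-commit, the more the monitoring item at the end of the checklist matters, because a datastore filling up faster than expected is exactly the failure mode you are betting against.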
The good thing is that you can have a go at it and then adjust very easily, as long as your machines can handle a refresh, which they should if you are using linked clones.
The reference architecture looks like it places about 64 linked clones per datastore.
Come back and report your progress or ask any further specific questions.
Rodos
Consider the use of the helpful or correct buttons to award points. Blog: http://rodos.haywood.org/
Thanks for the pointer in the right direction, and for the work you have posted.
In VDM 2.1, one of my big concerns was the amount of time required to provision a desktop (a full copy of a thick provisioned image). With the linked clones feature of View, this has largely been dealt with. My brief testing yesterday, plus the reference architecture you pointed out, shows how much of an improvement this is (not to mention the disk space savings with linked clones).
As with many environments, I have different classes of users. Many of my users are fairly basic, and I am able to create a few locked down base images that will satisfy them. For these users, I am using non-persistent desktop pools, set to delete after first use (with folder redirection and roaming profiles for user data), and the non-persistent linked clones are perfect. The space savings and provisioning performance improvement are just what I was looking for. Plus, the added domain joining and OU placement feature of QuickPrep was a basic addition that VMware needed to include (a netdom run-once script worked, but was not the best). So, for these users with limited desktop requirements, I am very pleased.
For the more complicated class of users, there are more complicated issues. My more advanced users are primarily developers, and they all seem to have different tools, licenses, upgrade schedules etc. that make any sort of re-imaging or refresh/recomposition idea very difficult. Because of their custom environments, I think I am stuck with using the old style thick persistent desktop, which can be customized for them and updated/patched etc. just like a regular desktop without any type of refresh. I don't really see any way of getting around this with the current set of tools. It sounds like the ThinApp idea, and some other technologies, are moving towards addressing this, but it isn't there yet. I suppose I could still use linked clones to get the thin provisioning and performance improvement when deploying the base image for these high end users, but if I never see a possibility for refreshing/recomposing their desktops, then I don't know if this is a good idea. Over time, their desktop delta file will grow very large, eventually getting close to the full size, and I would have to keep the original parent/snapshot around for as long as the desktop exists. I think the old thick image would be best... what do you think? Am I missing any benefits?
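The delta-growth concern above can be sketched with a toy model. The growth rate here is a pure assumption (there is no standard figure for developer-desktop churn), and the replica cost is attributed to a single clone even though in practice it is shared across all clones on the datastore, so treat this only as an illustration of why a never-refreshed linked clone loses its space advantage:

```python
# Toy model: a persistent linked clone that is never refreshed, versus a
# thick full clone. The monthly churn figure is an assumed placeholder.

BASE_GB = 15.0              # base image / thick disk size (assumed)
GROWTH_GB_PER_MONTH = 1.5   # assumed delta churn for a developer desktop

def linked_clone_gb(months):
    # the delta grows until it approaches the size of the base disk,
    # and the replica must be kept around the whole time (shared across
    # clones in reality, so amortize it if you have many)
    delta = min(months * GROWTH_GB_PER_MONTH, BASE_GB)
    return BASE_GB + delta  # replica + delta

for months in (3, 12, 24):
    print(f"{months:2d} months: {linked_clone_gb(months):.1f} GB "
          f"(vs {BASE_GB:.1f} GB thick)")
```

Under these assumptions the clone plus its replica eventually costs more space than a plain thick disk would have, which supports the conclusion that without a refresh schedule the old thick image is the simpler choice.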
I'm intrigued by the Linked Clones option for a separate thin provisioned user data disk that can be refreshed separately from the OS drive, but since it cannot be shrunk, I don't think it is really of that much use. When I first saw it, I thought of it as an alternative to using Roaming Profiles and Folder Redirection, but since you can never shrink the user data disk (even if the user deletes everything in it), I don't think it is a very good solution. Having Roaming Profiles and Folder Redirection on network storage that can grow and shrink as required seems much more flexible. Am I missing something here? Is there some way of using this feature that I have overlooked? Or maybe I should just take it as a first cut, and hope that it will be improved in a future release?
On a side note, any idea why there is an 8 ESX server maximum cluster size with View 3.0?
In any case, I am very pleased with the improvements in this release, and I hope that they continue.
I am replacing 400 off-lease desktops with View based XP desktops over the next 2 months.
Environment:
HP c7000 Bladecenter
6 x GbE2c L2/3 Ethernet Interconnects (connected to an HP ProCurve based network running MSTP, VRRP, 802.1q, trunking)
2 x Brocade 4/24 SAN Switch (using NPIV Access Gateway Mode)
8 x BL465 Blades (24GB RAM, Dual Quad Cores, Qlogic QMH2462 Mezzanine, NC325m Quad Port Eth Mezzanine)
NetApp FAS3020 FCP SAN (for VMFS LUNs, and also CIFS shares for Roaming Profiles and Folder Redirection)
Wyse V10L Thin OS based thin clients
Thanks
