We have licenses to run 100 concurrent desktops at one of our locations. Given that, I am wondering if I should create pools with 50 desktops each. My thinking is that 50 linked clones will easily run on a 500 GB LUN (our default size), and it will segment the desktops somewhat in case someone ever does something detrimental to a pool. Should I be thinking along these lines, or should I just create one big pool of 100 desktops?
Thoughts?
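To put rough numbers behind the LUN math, here is a back-of-envelope sketch. The per-clone delta growth and swap figures are assumptions for illustration, not measurements:

```shell
# Illustrative linked-clone datastore sizing -- all inputs are assumed examples
replica_gb=20     # assumed full replica of the parent image
clones=50         # clones per datastore
delta_gb=4        # assumed delta-disk growth per clone between refreshes
swap_gb=2         # vswp file roughly equals configured RAM with no memory reservation
total=$(( replica_gb + clones * (delta_gb + swap_gb) ))
echo "Estimated usage: ${total} GB of the 500 GB LUN"
```

With these assumed numbers the 50-clone layout fits with headroom, but the dominant term is delta growth between refreshes, so the estimate is worth re-checking against real pilot data.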
I would go with one big pool as it will make your management easier. You can select multiple datastores to store your clones and depending on your refresh cycle you probably will not need a LUN that is 500 GB in size.
Yeah, I have thought about the additional management burden, and I agree it would be much easier to just create one big pool. I was just wondering whether others have considered that you are putting all your eggs in one basket: if something went wrong with that pool, you would affect all VDI users rather than a subset.
I plan on refreshing every weekend to keep the delta disks down.
Thanks,
Michael
I don't see much that can go wrong with a pool. I guess someone could accidentally disable the pool, but that's a Change Management issue IMO.
OK... want to run the following by you quickly to see what you think of it... appreciate the advice you are sharing.
Parent VM Specs - Linked Clone Pool w/ Persistent Disk
Win 7 64 Bit
2 GB RAM
20 GB C Drive
Anything wrong with this?
I don't see anything wrong with that. I for one don't utilize the reservations but it's easily changed if you run into any issues. Did you review the Windows 7 optimization guide provided by VMware?
I read it, along with all the blog posts describing best practices, etc. Once you read so many different perspectives, it becomes a mess trying to decide which route to try first :)
Thanks for the help.
Michael
I agree there is so much out there that you can easily run in circles. At least with what you are doing you retain enough flexibility to make changes if necessary. If you don't mind me asking, did you do performance testing to get your memory numbers? We are testing Windows 7 at the moment and 2 GB looks to be the number we are heading toward.
I haven't done any testing yet with Windows 7. I came up with 2GB based upon recommendations and decided that it would be a good starting point. We are going to do a 20-30 user pilot test group once I have the parent / pool created and will see if I need to change anything based upon their feedback and our analysis. I would rather come up with performance specs based upon production data rather than a test lab using a workload script.
Are you guys installing MS Office 2007 / 2010 in the parent image, or ThinApping it and running it off a file share? I can see benefits to both.
We have a lot of third party add-ins for Microsoft Office so we opted to install it into the parent.
Same here... I will be doing the same.
Lastly, do you know if you can make the locally installed browser use the ThinApped version of Flash / Java?
My thoughts for what it is worth:
-Unless you see a real possibility of increasing RAM on the desktops, I would not run 64-bit: additional overhead for zero return. If you feel you may increase memory over time to 3.25 GB or more, then by all means run 64-bit now.
-Don't run a reservation on the parent. Instead, create one or more resource pools in your cluster, depending upon your situation (I assume you are using more than one ESX server with ~100 desktops), and place your desktops into the resource pool(s). This allows much easier control over managing resources without having to recompose your pool. It is a requirement if you are sharing cluster resources between servers and desktops, to avoid ESX swapping out too much RAM from idle desktops, which creates a very noticeable impact for your desktop users. If you are using separate clusters for servers and desktops you can typically eliminate reservations altogether, though as a standard rule I always use them for View deployments.
-Always set your minimum and maximum pagefile to the same amount to avoid growth and shrinkage of the pagefile, which hurts I/O by fragmenting it. I suggest a starting point of 512 MB, then monitor pagefile use over a normal user's workday.
-Follow additional best practices per VMware KB article: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=102104...
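The pagefile suggestion above can be baked into the parent image before taking the snapshot. One way, assuming an elevated cmd.exe prompt on the Windows 7 parent (the 512 MB figure is just the suggested starting point, and this is a config step rather than something to script into the pool):

```shell
REM Run once inside the Windows 7 parent VM (elevated cmd.exe) -- sizes are examples
REM 1. Stop Windows from managing the pagefile size automatically
wmic computersystem where name="%COMPUTERNAME%" set AutomaticManagedPagefile=False

REM 2. Pin minimum and maximum to the same value (512 MB here)
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=512,MaximumSize=512
```

A reboot of the parent is needed before the new pagefile settings take effect.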
jdaconsulting, I agree with your pagefile best practice of setting minimum and maximum to the same. In his case he is using a disposable disk for his pagefile. Do you think this change still carries the same weight, since the disk gets deleted on every power-off of the VM?
Good point - my mistake, I missed where he mentioned that. Assuming the disk is deleted at logoff, then fragmentation should not be a concern. This assumes his users actually log off regularly. We had to set idle timers via GPO and logoff-on-disconnect timers at the pool to address this; it might be surprising how many people do not log off, for various reasons.
Anyway, fragmentation should not be a concern but there is still a (perhaps minor) I/O hit from growing and shrinking the pagefile and personally, I would rather zero in on a single target, set it and be done. Just my .02 though.
