VMware Horizon Community
Cameron2007
Hot Shot

VDI Sizing requirements - live environment

Hi guys,

I am looking for some guidance on sizing a VDI environment. I have read the VDI sizing PDF, but would like to know what I would need for a 200+ seat environment. I currently have 3x DL585s with 32GB each, which I think is way too small. The customer is expecting 768MB per virtual desktop, so from that I calculate 32GB/768MB ≈ 42 VMs per host. As it is three nodes with N+1 failover, I'd plan on roughly 60% of that: 42 VMs × 60% ≈ 25 per host, so across the 3 nodes I would be looking at 75 VDIs in total.
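In code form, the back-of-envelope I'm doing (the 60% usable figure is just my own N+1 headroom assumption, not anything from the sizing PDF):

```python
# Back-of-envelope VDI sizing; the 60% N+1 headroom factor is my own assumption.
host_ram_mb = 32 * 1024                     # 32GB per DL585
vm_ram_mb = 768                             # customer's expected desktop size
vms_per_host = host_ram_mb // vm_ram_mb     # -> 42
usable_per_host = int(vms_per_host * 0.60)  # N+1 headroom -> 25
print(usable_per_host * 3)                  # 3 nodes -> 75 desktops
```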

Does this fit with deployments anybody out there has implemented in a live environment? I realise I would also require a VDM server within the VI.

Thanks in advance

Troy_Clavell
Immortal

One thing I would be careful of is the number of VMs you plan to host on each ESX box. There is a bug in HA: if a host has more than 80 VMs, the HA agent throws an error. HA still works, but the volume of logging will cause your VCMS database to grow quite large. We host around 125+ VMs per host and have seen this error.

Our VMs run with 8GB C: drives and 1GB of RAM. I think the 8GB C: drive is fine, but 1GB of RAM is overkill. XP VMs will run very well at 512MB, which will free up some physical resources on your ESX hosts.

My suggestion is that you run 4 ESX hosts, just so you don't hit 80 VMs per host. You may also want to look at 4x 500GB LUNs. One VDM broker will be fine, but maybe consider two so you can set up a replica, and perhaps use round-robin DNS for redundancy.
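To put rough numbers on that (a sketch only; the per-LUN figure ignores vswap, log and snapshot overhead):

```python
# How 200+ desktops spread across 3 vs. 4 hosts, against the ~80-VM HA bug,
# plus how many 8GB C: drives fit on a 500GB LUN (overheads ignored).
total_vms = 200
for hosts in (3, 4):
    per_host = -(-total_vms // hosts)  # ceiling division
    flag = "  <-- over the ~80-VM HA limit" if per_host > 80 else ""
    print("%d hosts -> %d VMs/host%s" % (hosts, per_host, flag))

lun_gb, c_drive_gb = 500, 8
print("~%d VMs per 500GB LUN before overhead" % (lun_gb // c_drive_gb))
```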

hope this helps a bit.

Cameron2007
Hot Shot

Thanks Troy. I will do some testing directly against an XP VDI using a crossover cable straight into the VM network and see if the end-user experience is OK at 512MB, then test through some of the switches and firewalls etc., as it is important that the end-user experience is good.

Thanks for your help

Cameron2007
Hot Shot

Hi Troy,

Given your answer, I think this would then become a better setup (albeit with the potential HA errors).

If we assume a 3-node cluster in an N+1 configuration, the sizing could be as follows:

3 nodes × 32GB = 96GB. If we subtract 3GB for the VDM broker VM and another 1GB per host for ESX, we have 90GB of total usable memory, or 30GB per host. That gives 30GB/512MB = 60 VMs. Again, as we are running a 3-node N+1 cluster, the maximum is likely to be about 60%, therefore 36 VMs per host, × 3 = 108 XP VMs.

Still too small on 3 nodes at 32GB ;-(. Even at 64GB, the formula above gives ((3 × 64GB − 6GB) / 512MB) × 60% ≈ 223 XP VMs, or about 74 per host. So not really greatly scalable if necessary.

4 node solution looks more promising.

4 nodes at 64GB gives ((4 × 64GB − 7GB) / 512MB) × 70% ≈ 348 XP VMs.

Would I be correct in thinking that VMware recommend a maximum of 8 VMs per core (even with Update 3, which can theoretically host 20 VMs per core, although I wouldn't want to try that)? In theory we could then have 128 XP VMs per host running 4x quad-core processors.
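Putting the whole formula into a quick script to sanity-check the numbers above (the overheads and headroom factors are the assumptions from this thread, and I'm capping at 8 VMs per core):

```python
def vdi_capacity(hosts, host_ram_gb, vm_ram_mb=512, esx_gb_per_host=1,
                 broker_gb=3, headroom=0.6, cores_per_host=16, vms_per_core=8):
    """Rough XP VDI capacity: the lower of the memory-bound and CPU-bound
    counts. Overheads and headroom are this thread's assumptions, not
    official VMware guidance."""
    usable_gb = hosts * host_ram_gb - hosts * esx_gb_per_host - broker_gb
    mem_bound = int(usable_gb * 1024 / vm_ram_mb * headroom)
    cpu_bound = int(hosts * cores_per_host * vms_per_core * headroom)
    return min(mem_bound, cpu_bound)

print(vdi_capacity(3, 32))                # -> 108
print(vdi_capacity(3, 64))                # -> 223
print(vdi_capacity(4, 64, headroom=0.7))  # -> 348
```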

Is this a better solution and is the formula a decent approximation? Thanks again for your help.

Troy_Clavell
Immortal
Accepted Solution

Your logic does make sense. I said 4 ESX hosts because you told me 200+ VMs. With 3 hosts you are at 67 VMs per host, which would give you some room to grow, but not a lot. Plus, a 4-host cluster with 200 VMs and 32GB of RAM per host would be a very happy cluster.

Will all this be running on its own VCMS, or will you be adding to an existing instance and creating a new cluster? Although VirtualCenter 2.5 now supports upwards of 2,500 VMs and 192 vCPUs, I have been told the VMware Engineering best practice suggestion is around 1,000 per instance, which I think is very conservative.

We have a 2,800-seat XP VDI running on DL580s with 16 cores and 128GB of RAM. It's a great resource pool, but trying to put one of the hosts into maintenance mode takes upwards of 45 minutes, and two takes about an hour and a half. My advice would be to scale out instead of up. 6 to 8 VMs per core is a good recommendation; you can always run more, but I think 8 is good.

Cameron2007
Hot Shot

I currently have a 6-node ESX cluster for application servers, so I would probably install another VC server rather than have the whole database sitting on one VC instance. I could just create another datacenter within the current VC, but on reflection it is probably best to keep the VDI implementation separate and procure new hardware for the VDI cluster. The server cluster contains both 585 G1s and G2s, so I had to do CPUID masking to allow 6-node VMotion (32-bit OSes only), and I don't really want to go there again.

Thanks again for all your help

JayArr
Contributor

I apologize up front for a slightly off-topic reply, but with a VDI deployment of 200-1,000 machines on a cluster, I'm curious what kind of storage network you use to support this?

Our existing iSCSI SAN would be brought to its knees by a load like that, but looking at future upgrade paths, I'd like to know what works for you. Feel free to PM me or email me directly.

Cameron2007
Hot Shot

We are currently purchasing a NetApp 6080 and will have FC connections, although I'm not sure of the exact connectivity yet. There is a post which I think has some evidence of load that may be worth looking at:

http://communities.vmware.com/thread/155056?tstart=0
