vSphere Experience 101 - Growing pains of VDI

Welcome to my world of vSphere.  Here I will be sharing my experiences with vSphere and the challenges we have faced with the various implementations of it.  Being a part of one of the largest infrastructures in the world, we have learned from our mistakes and learned the importance of "best practices" when working within a VI (Virtual Infrastructure) environment.  As with most large enterprises, we started out using MS Virtual Server as our first implementation and used VMware very little at first.  Being an MS shop proved beneficial when it came to licensing and such, which is probably why we began with it.  Oh, those were the days...

After much testing in the lab I noticed that the virtual environment began to grow dramatically.  We soon had clusters in our lab, testing and working with various implementations.  The MS Virtual Server boxes sat in the racks still running the 3 or 4 VMs they had always run, and that solution never really grew much.  Hyper-V soon came out, but after seeing the maturity of VMware over Hyper-V, the choice still seemed pretty obvious.  Testing went on and slowly bled out into our production environment.  We had some clusters of Dell PE2900s and R900s that were being used for some of our home office data center servers.  Finally, after a slow rollout, in the third quarter of 2008 I got to experience the first large implementation of VI3 in our environment.

The first large-scale implementation used Dell PowerEdge M1000e blade systems fully populated with M600s, attached to Dell EqualLogic iSCSI arrays for storage.  Each host had 4 NICs and 32 GB of RAM (at the time 4 NICs seemed plenty, but they ended up being too little).  This was the beginning of our first desktop virtualization, or VDI (Virtual Desktop Infrastructure), implementation, running on ESX 3.5 Update 4.

This was the first initiative to help cut down the large costs associated with vendor desktops sprawled all throughout our technical center.  You could walk the aisles of our tech center and see anywhere from 1 to 12 desktops stacked up that various technical teams used for vendors, or even for themselves, for remoting, support, and development services.  When it was all said and done, I am sure the number of systems was well over a few thousand.  That is a huge amount of overhead when you look at the power cost alone.  This VDI solution was the remedy to that problem, but being the first implementation didn't make it our best.  Little did we know about the risk of "disk alignment" and its effects when using the iSCSI arrays with our VDI environment.  For the brokers, aka gateway servers, we used Citrix NetScalers (SSL), and from there we managed various applications that were streamed to the VDI environment and other TCs (thin clients).
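To give a quick feel for what "disk alignment" actually means, here is a rough illustrative sketch (the block sizes and offsets are example values, not the exact geometry of our arrays): if a guest partition does not start on a multiple of the array's block or stripe size, every guest I/O can end up straddling two backend blocks.  Older Windows templates defaulted to a 63-sector (31.5 KB) starting offset, which is misaligned for most arrays.

```python
# Illustrative sketch only: check whether a partition's starting offset is
# aligned to an assumed array block size. Values are examples.

def is_aligned(start_offset_bytes: int, array_block_bytes: int = 4096) -> bool:
    """A partition is aligned when its starting offset is an exact
    multiple of the array block (or stripe) size."""
    return start_offset_bytes % array_block_bytes == 0

# Classic misaligned Windows default: 63 sectors * 512 bytes = 32,256 bytes
print(is_aligned(63 * 512))    # False -> I/O can straddle two backend blocks
# A 64 KB starting offset (128 sectors) lines up cleanly
print(is_aligned(128 * 512))   # True
```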

All users were set up and managed from the Citrix broker servers.  We already had Citrix licensing purchased, so it made sense to simply utilize what we had for this portion of the VDI environment.  One other problem we saw with this implementation was that the engineers didn't plan HA very well in the layout of hosts vs. blade chassis.  Instead of spreading the hosts across blade chassis, they put all 16 hosts in a single blade chassis.  That meant a single chassis failure could take down an entire DRS/HA cluster.  Also, due to the "disk alignment" issue, we had to perform a storage migration to remedy the problem, which turned into a large-scale project with many hours of labor, all of which could have been avoided if proper thought and research had been put in up front.  The real downer was that this ruined the chance for iSCSI to prove itself as a P.O.C. (Proof of Concept).

Lesson learned, and well, for that matter.  Now we are beginning the second phase of our VDI environment, using high-performance NetApp arrays with NFS-based storage.  We still use the old Dells, but the current hosts are HP BL-class blade systems.  The new hosts have 72 GB of RAM, 2 quad-core processors, and 6 NICs - though I still believe it would have been worth going with 2x 10GbE instead.  The old EqualLogic iSCSI storage is currently unused, and the Dell blades are utilizing the NetApp NFS as well.  The second phase is on vSphere 4, and we are also using the PSA for NetApp, which is quite nice.  Though we are not using all the features of the NetApp storage solution, the performance improvement over the old iSCSI setup has been dramatic.  Of course, good planning and engineering is always a good thing.  This time we spread the hosts across blade chassis.  There are approximately 16 hosts per DRS/HA cluster, and we placed 4 of those 16 hosts on each of four separate chassis, so in the event of a chassis failure or network issue we would lose 4 hosts at most.  We currently manage failover capacity by percentage.
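For a back-of-the-envelope feel of how the chassis spread relates to the admission-control percentage (the numbers below are illustrative, not our exact reservation settings): with 16 hosts split 4 per chassis, a full chassis failure takes out a quarter of the cluster, so the reserved failover capacity needs to cover at least that much.

```python
# Rough sketch: how much failover capacity (as a percentage) a whole-chassis
# failure would require. Numbers are illustrative assumptions.

hosts_per_cluster = 16
hosts_per_chassis = 4   # hosts from this cluster placed on any one chassis

# Fraction of the cluster lost if one chassis (or its uplinks) goes down
worst_case_loss = hosts_per_chassis / hosts_per_cluster
print(f"Worst-case host loss from one chassis: {worst_case_loss:.0%}")  # 25%

# The HA admission-control reservation should cover at least that loss
required_failover_pct = worst_case_loss * 100
print(f"Reserve at least {required_failover_pct:.0f}% of cluster resources")
```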

Though VDI is still relatively new to us, we are still learning the dos and don'ts as we go along.  One thing is certain: we will have to get all our 3.5 hosts to 4.1.  Our new environment is being used for dynamic desktop virtualization.  It's getting closer each day, and we finally got to give it a run when VMware View 4.5 was released.  So far we have liked what we have seen, though I think we will still use Citrix for profile management/apps, and MS App-V in the future.
