We have the following setup:
Two IBM x3650 M2 servers running ESXi 4.1, each with 2 dual-port QLogic iSCSI HBAs
The 8 total iSCSI ports are connected to an EqualLogic PS6000 series array (16 x 300 GB SAS drives)
View 4.6 running on 2 virtual servers, both running Windows Server 2008 (also on the same 2 ESXi hosts)
A Windows 7 pool of 35 linked-clone desktops in non-persistent mode
We think the super-slow Windows 7 logon times are because the virtual disk is running slow. I am using the LSI SAS virtual controller that was suggested in the Windows 7 best practices manual from VMware (I went through that manual and applied all the other suggestions too).
When we run Iometer we see really slow disk throughput (averaging around 20-30 MB/sec).
Any suggestions on where to look to find the disk bottleneck? I checked the performance monitor within the EqualLogic and the 16 disks look as though they are doing almost no work. I think it measures performance in I/Os per second, and as far as I can tell the physical drives never go above 100.
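One thing worth checking: an MB/sec figure on its own doesn't say much without knowing the block size of the Iometer access spec, because the same throughput can mean very different IOPS loads. A quick conversion sketch (the block sizes below are just illustrative examples, not your actual Iometer settings):

```python
def iops_from_throughput(mb_per_s, block_kb):
    """Convert a throughput figure (MB/s) to IOPS for a given block size (KB)."""
    return mb_per_s * 1024 / block_kb

# 25 MB/s looks very different depending on the access pattern:
print(iops_from_throughput(25, 4))   # 4 KB random I/O  -> 6400.0 IOPS
print(iops_from_throughput(25, 64))  # 64 KB sequential -> 400.0 IOPS
```

So 20-30 MB/sec at a small block size could actually mean the array is pushing thousands of IOPS, while the same number at a large sequential block size really would be slow.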
I would install EqualLogic's SAN HQ and look at the analysis it provides to determine if you're seeing bottlenecks. Are these slow login times when everyone logs in at the same time or just in general?
Are you using Equallogic's MEM Multipathing plugin?
The Windows 7 logon times are slow all the time: they average about a minute, whether one person is logging in or several. We turned on a local group policy that shows verbose startup logging, and the longest stretch is always the "Preparing your desktop" phase (since these are non-persistent desktops, the local user profiles get wiped at logoff). This is why we were looking at LiquidwareLabs ProfileUnity to speed up the logon times.
I do not have the SAN HQ software installed (but I will, thanks), and I am not using the EqualLogic MEM plugin, but I will look into it. I am pretty sure multipathing is set up to use VMware's round robin policy.
I appreciate your help.
If you're running ProfileUnity, try adding an 'Active Setup' portability ruleset. The Active Setup registry key is one of the things that controls the 'Preparing your desktop' action.
I had a post on the VDI.com forums that shows that rule.
I haven't gotten our test instance of LiquidwareLabs to work yet, but I don't think our drive speeds are the issue. With the help of others I ended up finding an Active Setup entry related to Windows Mail (which we do not use) that was eating a good 25-30 seconds during the "Preparing your desktop" phase of the logon process. Here is the thread where I describe it.
You could try spinning up a single VM (clone the one you're using as your replica) and connecting to that to see if performance picks up, since it will be a 1:1 setup. If it's faster, then your issue is more than likely storage performance. You can also dig into the performance charts built right into the vSphere client and look at disk latency on those datastores, etc.
I've seen a pool of 34 linked clones chew up 9K+ IOPS during login (all logging in around the same time).
Also, what kind of IO numbers does that SAN provide and are you running other services on those disks other than VDI?
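To put that login-storm number up against the hardware, here's a rough back-of-envelope. The ~150 IOPS per spindle is an assumption (a common planning figure for SAS drives; the real number depends on RPM, RAID overhead, and caching), not a measurement of this array:

```python
# Assumed planning figure, not a measured value for this PS6000
SPINDLES = 16
IOPS_PER_SPINDLE = 150

raw_read_iops = SPINDLES * IOPS_PER_SPINDLE
print(raw_read_iops)  # 2400

# A 9,000-IOPS login storm would oversubscribe that several times over
storm_iops = 9000
print(storm_iops / raw_read_iops)  # 3.75
```

Even with generous per-drive numbers, a simultaneous login storm can easily outrun 16 spindles, which is why staggered logins (or array-side caching/SSD tiers) make such a difference for VDI.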
Did you build your base disk (replica) using VMware's Windows 7 guide? They include a great script that will optimize (disable) services that aren't required by VDI and will increase the performance of your Windows 7 desktops.