VMware Horizon Community
ElevenB2003
Enthusiast

View 4.6 - Linked Clones - 34 desktops - IOps normal?

Hi,

    We have a pool of 34 linked clones (floating pool) in use for a lab (school district) and during login, I'm seeing some huge IO numbers for the datastore(s) when all of the students are first logging in.  I know there is going to be a spike during simultaneous logins (boot storm) on linked clone pools, but this much and for only 32-34 desktops?

VMware environment: View 4.6, ESXi 4.1 U1, 4 Gb fibre-attached storage, single path (only one HBA uplink per host at the moment), Xiotech ISE 5000 SAN rated at approximately 6300 IOPS, attached at 4 Gb. Note: there are only about 50 desktops on this storage right now.

Virtual Desktops: Windows 7 Pro 32-bit, 1GB RAM, customized using the VMware guide - all non-essential services disabled, windows updates disabled, no AV at this point, floating pool, and no roaming profiles.

Users are not complaining about the speed of the desktops; I just want to make sure that as we scale up more and add more labs that these IO numbers are normal. We could potentially have 3-4 labs all logging in at the same time.  Obviously, once things settle down and the students begin to use their desktops as normal, the IO settles down and things look great.
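As a rough sanity check on the scaling question, here is a back-of-the-envelope sketch. The array capacity and lab size are from the post above; the per-desktop login IOPS figure is an assumed rule of thumb for a tuned Windows 7 desktop, not a measurement from this environment.

```python
# Rough login-storm sizing sketch. ARRAY_IOPS and DESKTOPS_PER_LAB come
# from the thread; LOGIN_IOPS_PER_DESKTOP is an illustrative assumption.

ARRAY_IOPS = 6300             # quoted capacity of the Xiotech ISE 5000
DESKTOPS_PER_LAB = 34
LOGIN_IOPS_PER_DESKTOP = 50   # assumed peak per Win7 desktop during login

def labs_supported(array_iops, desktops_per_lab, login_iops_per_desktop):
    """How many labs could log in simultaneously before saturating the array?"""
    storm_iops = desktops_per_lab * login_iops_per_desktop
    return array_iops // storm_iops

print(labs_supported(ARRAY_IOPS, DESKTOPS_PER_LAB, LOGIN_IOPS_PER_DESKTOP))
# one lab needs ~1700 IOPS at these assumptions, so ~3 labs fit in 6300
```

Under those assumptions, 3-4 labs logging in at once would put the array right at its limit, which matches the concern about scaling up.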

I've attached a screenshot from the Xiotech ISE showing the IOps for the 3 hosts in the View cluster.

Thanks!

3 Replies
Meph1234
Enthusiast

Hi

Just to throw my 2c in. I'm not sure about the IOPS, but I do know that the latency looks very good (I'm assuming that's ms). I can also see there are far more writes than reads, but I think that comes down to not having roaming profiles.

If the pool is not dedicated assignment and the machines are being refreshed regularly, then a profile is created each time a user logs in. I'm not familiar with your setup, but perhaps consider roaming profiles/folder redirection (or persistent disks) with only a small quota (50 MB or so). That would stop the profiles from being created on every login.

Are you able to get those statistics from your storage array? See if you can narrow down which IOPS are hitting the replicas and which are hitting the delta disks.

If you haven't done so, consider moving the replicas to their own LUN, preferably on separate spindles from the deltas (or on SSDs if you are rolling in funding), which should also help performance. But as you said, no one is complaining.

Cheers

Phil

VCA4-DT
JoJoGabor
Expert

You should expect around 66% read IOPS during a boot storm, login, and the first time applications are launched. After that, you should normally expect around 66% write IOPS, so something looks unusual. Check the Windows OS to see which processes are generating the disk I/O, but as a first test increase the RAM to 2 GB; I suspect your OS is paging a lot, which increases the disk I/O.
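To make the ratios above concrete, here is a small sketch. The 66% figures come from this reply; the 3000 IOPS total is purely illustrative, not a number from the thread.

```python
# Split a total IOPS figure into reads and writes at a given read fraction.
# The 0.66 boot-storm / 0.34 steady-state fractions are from the reply;
# the 3000 IOPS total is an illustrative assumption.

def split_iops(total_iops, read_fraction):
    """Return (read_iops, write_iops) for a given total and read fraction."""
    reads = round(total_iops * read_fraction)
    return reads, total_iops - reads

print(split_iops(3000, 0.66))  # boot/login storm: (1980, 1020)
print(split_iops(3000, 0.34))  # steady state:     (1020, 1980)
```

If the array shows the steady-state pattern (writes dominating) during a login storm, that is the "something unusual" worth chasing, e.g. with the paging check above.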

Regarding profiles: if you are just using a local profile, login will mostly just read certain components. If you introduce roaming profiles, the SAN will need to read the CIFS share and copy the entire contents to the virtual disk, so it will actually increase the IOPS required and slow down login.

As said above, separate out your replicas, and if possible set the cache on the array to 100% read for that LUN.

dfrazer
Contributor

I see similar IOPS in my environment with similar VMs: up to 4200 IOPS during a refresh of 30 linked clones.

I'm scrambling to figure out a solution other than dumping money into SSD's.

To get the most accurate data on disk usage, I recommend running esxtop on the ESXi host.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=100820...

Great info in there; I just wish there were a way to see the replica IO load specifically.
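One way to work with esxtop data offline is batch mode (something like `esxtop -b -d 5 -n 60 > stats.csv`) and then summing the disk counters per sample. A minimal sketch, assuming esxtop's usual `\\host\Group(Instance)\Counter` header convention; check the header row of your own capture, since exact counter names vary by build:

```python
# Hedged sketch: find the peak total "Commands/sec" across all sampled
# intervals of an esxtop batch-mode CSV capture. The column names in the
# synthetic sample below are assumptions modeled on esxtop's header format.

import csv
import io

def peak_commands_per_sec(csv_text, counter="Commands/sec"):
    """Return the highest total of all columns ending in `counter`."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    cols = [i for i, name in enumerate(header) if name.endswith(counter)]
    peak = 0.0
    for row in reader:
        total = sum(float(row[i]) for i in cols)
        peak = max(peak, total)
    return peak

# Tiny synthetic capture: two disk adapter columns, two 5-second samples.
sample = (
    '"Time","\\\\host\\Disk(vmhba1)\\Commands/sec","\\\\host\\Disk(vmhba2)\\Commands/sec"\n'
    '"10:00:00","1200","900"\n'
    '"10:00:05","2600","1600"\n'
)
print(peak_commands_per_sec(sample))  # 4200.0
```

This won't separate replica IO from delta IO either, but it does make it easy to line up the IOPS peaks with login times.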

-D
