JHT_Seattle
Hot Shot

Which stack is the one slowing the logon down? Also, logs suck.

If there's an easy way to tell, someone PLEASE tell me.  As it is now, we're finding it impossible to tell which stack (or stacks) is the culprit behind slow logons (we just end up waiting out the 180-second timeout).  The frustrating thing is that once the timeout passes and the user lands on the desktop, all of the attached apps DO work.  So why the delay?  And how can we figure out which stack caused it?

Side note: the logs generated by App Volumes in the VM make me want to claw my eyes out.  One giant run-on sentence, no delimiters, no line breaks.  Has no one on the App Volumes team ever had to actually read one of those logs?  I find that impossible to believe, so how about making them just slightly more readable in 2.next or 3.0? /rant
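On the readability point: until the logs improve, a small script can at least break the blob into one entry per line. A minimal sketch, assuming each entry begins with a bracketed `YYYY-MM-DD` timestamp (the actual App Volumes log format may differ, so adjust the pattern to whatever your log actually uses):

```python
import re

# Assumption: each log entry starts with a bracketed timestamp such as
# "[2015-10-02 14:03:21] ...". Adjust ENTRY_START to the real format.
# (re.split on a zero-width lookahead requires Python 3.7+.)
ENTRY_START = re.compile(r"(?=\[\d{4}-\d{2}-\d{2})")

def split_log(blob):
    """Split one run-on log string into a list of entries, one per timestamp."""
    return [p.strip() for p in ENTRY_START.split(blob) if p.strip()]

if __name__ == "__main__":
    blob = ("[2015-10-02 14:03:21] svservice started "
            "[2015-10-02 14:03:25] volume attached")
    for entry in split_log(blob):
        print(entry)
```

Nothing App Volumes specific here; the same trick works on any single-line log dump where entries start with a recognizable token.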

8 Replies
Ray_handels
Virtuoso

Normally, if you're hitting the full 180-second timeout, I'd guess it's not related to one single AppStack but to some sort of configuration error.

I would suggest raising a ticket; the App Volumes guys normally know how to handle these things.

Also, do you use SCCM?? It has a brilliant log file reader called CMTrace. If you open the log with it, the information is laid out nicely, line by line. My guess is that the App Volumes guys are using some sort of log file reader themselves.

JHT_Seattle
Hot Shot

Thanks for the reminder about CMTrace; I'd almost put my SCCM past behind me...

As for our slow logons and troublesome AppStacks, the issue appears to be entirely Symantec Endpoint Protection related.  I've never seen any documentation on which antivirus exclusions should be set, but I think SEP is disagreeing with the filter driver, or balking at the multiple VMDK files.  Anyway, with SEP uninstalled our logons are speedy and all apps attach as expected.  I've got a case open, so hopefully I'll get some feedback on the exclusions, which I can share here if anyone's interested.

Ray_handels
Virtuoso

We don't even install the virus scanner in the machine; we use vShield for it.

It's quite interesting to look into. To be honest, I wasn't sure whether Symantec had an agentless virus scanner, but I did some digging and SEP is on the list of software supported by vShield.

http://www.vmware.com/files/pdf/products/vcns//vmware-integrated-partner-solutions-for-networking-an...

We also had a discussion very early on with the App Volumes guys about needing to exclude quite a few directories from the virus/malware scanners. That's why we eventually ended up offloading scanning to dedicated machines.

Also, a link to VMware's vShield page:

https://www.vmware.com/nl/products/vsphere/features/endpoint

JHT_Seattle
Hot Shot

Symantec does, technically, support vShield, but their implementation of it is so fundamentally flawed as to be a joke.  They committed to supporting it way too early, without a plan to provide an agentless solution.  We're locked into using them because, you know, corporate stuff, and supposedly the agentless version is the next release (or the one after that, who knows).

So for now we have to be very liberal with our exclusions.  Do you recall the list that was given to you?

I was thinking of:


C:\Program Files (x86)\CloudVolumes

C:\SnapVolumesTemp

C:\SVROOT

C:\Windows\system32\drivers\svdriver.sys
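For what it's worth, a tiny helper can sanity-check an AV policy export against a list like that. This is a hypothetical sketch, not a real SEP API: the required paths are just the ones proposed above, and `configured` is whatever list of exclusion paths you pull out of your own AV console:

```python
# Hypothetical helper: REQUIRED_EXCLUSIONS is the list proposed above;
# `configured` is whatever you export from the AV policy.
REQUIRED_EXCLUSIONS = [
    r"C:\Program Files (x86)\CloudVolumes",
    r"C:\SnapVolumesTemp",
    r"C:\SVROOT",
    r"C:\Windows\system32\drivers\svdriver.sys",
]

def missing_exclusions(configured):
    """Return required paths absent from `configured` (case-insensitive compare)."""
    have = {p.lower().rstrip("\\") for p in configured}
    return [p for p in REQUIRED_EXCLUSIONS if p.lower().rstrip("\\") not in have]
```

For example, `missing_exclusions([r"C:\SVROOT"])` flags the other three paths as missing.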

Ray_handels
Virtuoso

Hey JHT, we will be going forward with McAfee MOVE. I will let you know about the test results.

And it's nice to be locked into one antivirus vendor, right?? We also noticed that documentation is very limited (as in almost none) on how to pull this off..

This is info I received, but it's "as is": no official statement or anything, more of a discussion between me and the App Volumes guys on how we could handle virus scanning. I would also strongly suggest excluding SnapVolumesTemp, although I'm not quite sure whether virus scanners even see that directory, due to it being mount points. This info is from way back when it was still CloudVolumes, but my guess is it still holds up now.

1) Scan the provisioning VM before anything gets attached, to make sure the base image is clean.  Fully finish provisioning the applications, enable real-time scanning, then reattach the AppStack to the VM to make sure it works as expected and is virus free.  That way the AppStack gets verified before anybody uses it.  I understand that you still want to scan on access, so here is another option:

2) For the other VMs, modify the scan policy to exclude the application directories (like Program Files (x86)\Office or C:\AutoCAD, for example) based on the apps each VM will be attaching.  The files captured in an AppStack are pretty much just the application files, so excluding them minimizes the scan scope.  An AppStack is read-only, so once you verify it in option 1 it will not be modified.
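Option 2 above amounts to maintaining a per-pool exclusion list keyed by the AppStacks assigned to it. A rough sketch of that bookkeeping, with an entirely made-up stack-to-directory mapping (stack names and paths are illustrative, not from any real inventory):

```python
# Illustrative mapping: which install directories each AppStack delivers.
# Stack names and paths are made up for the example.
APPSTACK_DIRS = {
    "office-stack": [r"C:\Program Files (x86)\Microsoft Office"],
    "autocad-stack": [r"C:\AutoCAD"],
}

def exclusions_for(assigned_stacks):
    """Union of application directories to exclude for the given AppStacks."""
    dirs = set()
    for stack in assigned_stacks:
        dirs.update(APPSTACK_DIRS.get(stack, []))
    return sorted(dirs)
```

Feeding the list of stacks assigned to a pool into `exclusions_for` then gives you the exclusion set to paste into that pool's scan policy.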

JHT_Seattle
Hot Shot

Final follow up:

We did indeed implement the following AV exclusions:

C:\Program Files (x86)\CloudVolumes

C:\SnapVolumesTemp

C:\SVROOT

We also rebuilt our parent image natively in our vCenter 6 environment.  As a result, all logon issues with multiple stacks have been resolved.

Boe_K
Enthusiast

Hey JHT_Seattle, what are you seeing for your average login time after implementing your fix? Right now, when I log in with 4 AppStacks attached I get the following times:

Windows client = 45 seconds

LG zero client = 1 minute 30 seconds

JHT_Seattle
Hot Shot

The amount of time it takes to log on with stacks will likely vary depending on the size/complexity of your AppStacks, the storage they sit on, the interconnect speeds, and how your user profiles are configured.  A lot of variables!  That said, our stacks are single-app stacks (for the most part), sitting on a SAN LUN pinned to the flash and 15k tiers, with 8Gb connectivity to the ESXi hosts (HP DL380p Gen8s).  After rebuilding the VMs from scratch (and either updating or rebuilding the AppStacks with the 2.7 release), most of our issues have dissipated.  We can log a user on with a roaming profile (using redirected folders, so no Persona or writable volumes in play) and several AppStacks attached within 20-30 seconds.  Sometimes a little longer, but like I said, it depends on the stack.

I'm perplexed that you'd see a difference between a physical computer and a zero client (we have both in our environment as well), because nothing client-side should matter when it comes to the stacks being attached, unless the clients are connecting to different pools with different GPOs or something else affecting them.
