We're struggling to decide what to do about GPU resources in knowledge-worker VDI deployments. VMware published a very interesting blog on the subject, "VMware vSGA for Content-Rich VDI", which makes a pretty compelling case for vSGA. As I understand it, vSGA does a good job of allocating GPU resources to some users and falling back to CPU rendering when needed, whereas a vGPU or MxGPU solution would require a dedicated cluster providing GPU to all users, since the integrated driver can't fall back cleanly if the GPU is over-committed or the licensing server fails.
So while vSGA sounds like a good fit for a non-engineering, knowledge-worker VDI deployment, the lack of hardware that can run it makes it a tough choice right now. The hardware options seem strangely limited: on 6.7 U3 there's nothing from AMD, and NVIDIA doesn't support vSGA on their current hardware. They even have a footnote in their release notes specifically excluding it on all but older GPUs (see docs.nvidia.com/grid/latest/product-support-matrix/index.html#abstract__only-horizon-supports-vsga).
So I'm stuck. I'd feel foolish buying very old cards just to use vSGA and having them go obsolete in a year, but I also don't think vGPU makes sense for us: limited additional performance benefit (see the article above) and significant administrative limitations, such as needing the entire cluster configured for it, no true vMotion, etc.
For now we're left doing no GPU acceleration at all for knowledge-worker VDI, which is a shame: we have the budget allocated for hardware and software licenses but can't find anything to buy, and ultimately it's going to diminish the VDI experience for end users.
Very interested to hear what others are doing about GPU in knowledge worker VDI deployments.
We recently ran into a similar situation. We just stood up a new Horizon 7.12 environment and decided early on to provide some sort of vGPU solution. Long story short, we went with the NVIDIA T4. We didn't really have a clear goal in mind, just that we wanted to purchase all the hardware up front because we had the budget. Not being familiar with the NVIDIA hardware, we didn't ask the right questions and found out after the fact that the T4 doesn't support vSGA.

What we've decided to do is provide 1 GB of framebuffer for knowledge workers and 2 GB (at least for now) for CAD users. We can run up to two different vGPU profiles in our single cluster because each of our hosts has two T4 cards, and a physical GPU can only run one profile type at a time. Another disappointing discovery: with the "GRID Virtual PC" licenses we purchased, we can only assign up to 2 GB of framebuffer per desktop; going beyond 2 GB would require the "Quadro vDWS" license.

I have not had any issues with vMotion of vGPU-assigned desktops between hosts. One thing I do need to stay cognizant of is how many vGPU-assigned desktops are deployed at any one time, to be sure there are sufficient resources available to perform host maintenance.
It would be nice if there were a vGPU profile with 512 MB of framebuffer... but unfortunately there isn't.
We are still in the early testing stages for CAD users, and I have not rolled out vGPU to all knowledge workers yet. Baby steps...
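For the host-maintenance headroom point above, a rough back-of-envelope calculation helps when deciding how many vGPU desktops to deploy. Here's a minimal sketch in Python; it assumes 16 GB of framebuffer per T4 and one vGPU profile type per physical GPU (those figures are my assumptions, not from this thread):

```python
# Rough capacity math for vGPU-backed desktops.
# Assumptions (not from the thread): each T4 card has 16 GB of
# framebuffer, and a physical GPU runs only one profile type at a time.

def desktops_per_host(cards: int, fb_gb_per_card: int, profile_gb: int) -> int:
    """Max vGPU desktops one host can carry with a single profile size."""
    return cards * (fb_gb_per_card // profile_gb)

def maintenance_capacity(hosts: int, per_host: int) -> int:
    """Desktops that still fit with one host in maintenance (N-1)."""
    return (hosts - 1) * per_host

# Example: 2 x T4 per host with the 1 GB profile gives 32 desktops
# per host, so a 3-host cluster keeps N-1 headroom at 64 desktops.
per_host = desktops_per_host(cards=2, fb_gb_per_card=16, profile_gb=1)
print(per_host, maintenance_capacity(hosts=3, per_host=per_host))  # 32 64
```

Keeping deployed vGPU desktops at or below the N-1 number is what lets you evacuate a host for patching without stranding anyone.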
There used to be a 512 MB profile, and while it kinda worked on Windows 7, it did not work reliably on Windows 10: it worked sometimes, but most of the time it didn't. So I guess it was easy for NVIDIA to just make the 1 GB profiles the smallest ones to avoid those problems.