VMware Horizon Community
LMUISHuntsville
Enthusiast

Splitting up an NVIDIA GRID K2's GPUs?

What I'm talking about is designating one GPU for vSGA and passing the second GPU through to a VM for high-performance 3D work.  In theory this should work, but I'm curious whether anyone has actually done this or could contribute to this line of thinking.  Right now I'm not champing at the bit to buy a $3K+ card just to test the theory.
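
On paper, I picture the host-side setup looking roughly like this (just a sketch -- I haven't run any of it, and the vSGA half assumes the NVIDIA VIB is already installed on the host):

# the K2 shows up as two separate PCI devices, so each GPU should be addressable on its own
lspci | grep -i nvidia

# vSGA half: confirm the NVIDIA VIB is present and the Xorg service is running for the GPU that stays shared
esxcli software vib list | grep -i NVIDIA
/etc/init.d/xorg start

# vDGA half: mark the second GPU for passthrough (DirectPath I/O) in the vSphere Client,
# add it to the VM as a PCI device, and reserve all of the VM's memory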

Regards,

Nathan

Accepted Solutions
gunnarb
Expert

I know Warren is the SME in this area, but I just wanted to say I have had success splitting the K2, using one of its GPUs as direct and the other as shared. The shared side is less than stellar performance-wise; the direct side was pretty good for 3D models. I even shot a YouTube video of the direct setup:

NVIDIA K2 on VMware View running Solidworks - YouTube
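
If it helps anyone trying the same split, these are roughly the host-side checks I use to see which GPU is doing what (ESXi 5.1 with the NVIDIA VIB installed; tool availability may vary by build):

# lists only the GPU still in shared (vSGA) mode once the other is marked for passthrough
nvidia-smi

# shows which running VMs are attached to the shared GPU
gpuvm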

Right now I'm playing around with the Quadro 4000, trying to get direct to work, and I'm having more issues than I did getting the K2 to work in direct mode.

Gunnar Berger http://www.gunnarberger.com http://www.endusercomputing.com


admin
Immortal

I have never used a K2 myself. I do know we have never tested it internally, though.

WP

LMUISHuntsville
Enthusiast

Yes, it is probably a small use case and wouldn't have been a popular thing to test.  It would be the same concept with the K1; maybe someone has used that card, since you could divide its GPUs up for different things (one for vSGA, three for pass-through, etc.).

admin
Immortal

I think it's more that vDGA isn't working with the K2 card yet ;-). There is a new set of APIs that needs to be worked in.

WP

LMUISHuntsville
Enthusiast

We are definitely looking forward to vDGA being fully integrated into View; in the interim, however, we were looking at other means of connecting to a VM with a pass-through GPU (HP RGS).  I have gotten vDGA to work within View, but the remoting experience (lag) is worse than the vSGA experience.  I'm sure this is what you are working on (integrating the VGX fast-remoting APIs) and why it's not officially supported yet.

Another idea we have been tossing around, in an effort to stay all VMware View at the endpoints (using zero clients), is running a VM with XenApp and a dedicated GPU (to publish our 3D apps to multiple users) and then integrating that into View.  It looks like you would connect to a View VM that has desktop shortcuts presenting the XenApp-published apps through Citrix Receiver (we would install the Citrix Receiver plug-in on the virtual machine the zero client connects to).  With this approach, though, I'm not sure how good the remote session's experience (FPS/lag) would be going through all these layers.

Regards,

Nathan

admin
Immortal

>> I'm sure this is what you are working on (integrating the VGX fast-remoting APIs) and why it's not officially supported yet.

This was done a long time ago; I have used a vDGA desktop as my primary for 12 months now. I think (could be wrong) the bits are actually in the latest released builds. I assume so, since I have seen some really nice customer benchmarks lately comparing different configurations; I recall them being based on 5.2, but I would need to look back. You are right that it's not yet officially supported. I have never seen any lag with vDGA except in really early development. I really do not have lag with vSGA either, though that depends on the workload. I am not a super heavy 3D user, but during development I hammered it a lot and know where most of the bottlenecks are. vDGA is faster than my Core Duo Mac, I know that :-). There are new APIs for the K2, beyond the originals we used.

>> Another idea we have been tossing around, in an effort to stay all VMware View at the endpoints (using zero clients), is running a VM with XenApp and a dedicated GPU (to publish our 3D apps to multiple users) and then integrating that into View.  It looks like you would connect to a View VM that has desktop shortcuts presenting the XenApp-published apps through Citrix Receiver (we would install the Citrix Receiver plug-in on the virtual machine the zero client connects to).  With this approach, though, I'm not sure how good the remote session's experience (FPS/lag) would be going through all these layers.

It technically would work, but I do not think the performance would be that great, and you would most likely have to cut back on your virtual desktop consolidation as well. The Citrix Receiver requirements for 3D Pro are beefy: for client-side decode they recommend a dual-core 3 GHz client with at least 2 GB of RAM, and that is before you include any other apps on the virtual desktop. I am not sure I would want the added complexity. Unless there is a compelling reason to mix VDI and SBC, based on what you are describing I would just go straight XenApp on ESX with Direct-Pass GPUs. If there is a compelling reason to have both, I might sway in that direction.

WP

LMUISHuntsville
Enthusiast

Interesting about vDGA.  I wonder why we have a worse experience, since there are really no settings per se to get vDGA to work with View.  We are also using the latest View release (5.2).  You know, on second thought, I need to go back and check this with the Quadro 4000 we have; I just remembered that the last time I was messing with this (the bad experience I mentioned) I was using a FirePro V7900 card (Direct-Pass GPU).  It sounds like this is optimized only for NVIDIA cards right now?

We use a 3D map application that can tax the GPU a bit, but we actually get a good frame rate from the application itself (35-45 FPS).  Our issue is getting a good PCoIP session FPS (which I have mentioned before on these forums).  Our latest config uses Tera2 zero clients with an APEX offload card in the server.  We run four monitors on our zero client, and on the 3D map display we average 15-19 FPS (PCoIP session frame rate) when moving around quickly.  If we are also playing video on one of the other displays while moving around in the map app, the average frame rate drops to 13-15 FPS.  If we do not use a zero client, these frame rates drop quite significantly.  The APEX card gave us about a 2-3 FPS bump as well as CPU offload.  I guess we are probably getting close to the limits of the PCoIP session's capabilities with this setup?
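
(If anyone wants to reproduce these numbers, one way to watch the session frame rate is the PCoIP performance counters inside the guest. The counter path below is approximate -- the set and counter names vary by agent version, so check Perfmon on your desktop for the exact spelling:)

:: counter set/name approximate -- verify the exact path in Perfmon on the View agent
typeperf "\PCoIP Session Imaging Statistics(*)\Encoded frames/sec" -si 1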

The reason we have to mix VDI and Direct-Pass GPU VMs is the requirement to access all the different VMs from the same endpoint and to use only one type of client device (zero clients).  If we used a regular computer with the software View client and Citrix Receiver (for connecting to the Direct-Pass GPU VMs), the vSGA VMs would suffer very badly (the PCoIP session frame rate, that is).

admin
Immortal

>> Interesting about vDGA.  I wonder why we have a worse experience, since there are really no settings per se to get vDGA to work with View.  We are also using the latest View release (5.2).

>> We run four monitors on our zero client, and on the 3D map display we average 15-19 FPS (PCoIP session frame rate) when moving around quickly.  If we are also playing video on one of the other displays while moving around in the map app, the average frame rate drops to 13-15 FPS.  If we do not use a zero client, these frame rates drop quite significantly.  The APEX card gave us about a 2-3 FPS bump as well as CPU offload.  I guess we are probably getting close to the limits of the PCoIP session's capabilities with this setup?

From these details I do not think you are really using vDGA. You might have a VM with a GPU assigned to it, but not everything connected end to end. There are some specific (undocumented) things that have to be done for vDGA to work properly with View and PCoIP. If they are not done, one of the components will not actually enable GPU-based rendering. The GPU might be assigned to the VM and visible inside it, but the rendering is not happening there; it's likely our soft3D.

The other hint here is the quad display. vDGA will only work with dual displays at the moment.

Not sure about the APEX card. We have never tested it, so I do not know if it will work with vDGA. I cannot think of a reason it wouldn't, though.
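
An easy way to confirm where rendering is actually happening is to pull a dxdiag report from inside the PCoIP session and look at which adapter owns the display, for example:

dxdiag /t %TEMP%\dxdiag.txt
:: in the report, check "Card name" under Display Devices --
:: "VMware SVGA 3D" means our soft3D is doing the rendering; the NVIDIA/Quadro name means the passthrough GPU is.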

WP

LMUISHuntsville
Enthusiast

I should have mentioned this before about vDGA.  When testing that config we were only using dual monitors, and we did go in through the vSphere Client (console) first to make sure the GPU's display was the primary.  Then, once we connected through the zero client, we made sure under the resolution/display settings that the Direct-Pass GPU was connected to the display being presented through View.  Is this what you were referring to as "undocumented"?  If there are other things to do, could you let me know?

With respect to only dual monitors working with vDGA right now, I thought that was really dependent on the capabilities of the GPU being passed through.  Are you saying there are other limitations within View itself that would restrict a Kepler GPU (capable of driving four monitors) from displaying all four?

I edited my post above, but basically I said I need to go back and test our Quadro 4000 card, as we were previously using a FirePro V7900.  It sounds like NVIDIA cards may do better with View?

admin
Immortal

Nope... that will not do it.

WP

LMUISHuntsville
Enthusiast

Ah, I see.  Is this information you could share with me (PM, email, etc.)?  It would significantly help our internal testing as we look at implementing View technologies in our future systems.

LMUISHuntsville
Enthusiast

Are there any plans to implement this (View and vDGA) for AMD GPUs in the near future?

Linjo
Leadership

As usual we cannot comment on possible future plans on public forums.

You could ask for an NDA Roadmap presentation from your local VMware Team.

// Linjo

mikeleahy1234
Contributor

Hi guys

I have ESXi 5.1 U1 / View 5.2 / Windows 7 64-bit.

I have a K1 card that I'm trying to set up for vDGA, but it's not working.

I passed the card through to the VM, but the View VM still says it's using "VMware SVGA 3D" under Display in dxdiag.exe.

I am connecting over PCoIP and have tried different endpoints/laptops.

When I run the command esxcfg-module -l | grep -i vttdmar I don't get any output; the guide says that I should. Is this the issue?

I am using a Dell R720 with the latest BIOS, and I have tried both the latest and the previous version of the NVIDIA driver in the VM.

When I check Device Manager in the VM it shows the K1 card as well as the SVGA adapter. Why won't my View session use the GRID card?

admin
Immortal

As mentioned above, vDGA has not officially shipped as a supported feature. There is a series of undocumented things you have to do to force it to work; passing the card through is not enough. The bits that are there now will not work properly with the K1/K2 cards either.

WP

mikeleahy1234
Contributor

OK, thanks for this info.

Is this only with View?

Another person told me it worked fine with Citrix.

mikeleahy1234
Contributor

Also, this is the guide I have been using:

http://www.vmware.com/files/pdf/techpaper/vmware-horizon-view-graphics-acceleration-deployment.pdf

Why would this guide be freely available if it doesn't work?

admin
Immortal

There are multiple parts to the picture.

Platform support - support for passing a GPU through to a VM.

VM remoting support - integration with a remoting protocol to efficiently access the GPU's frame buffer, encode the content, and send it to a remote endpoint.

Neither part of vDGA is supported today with View or XenDesktop.

I did not realize we had released the whitepaper with the steps for enabling the tech preview of vDGA. If you follow that guide you can get things going. From your message above it looks like you are missing a key step from page 19.

Note: you will likely have issues with the K1/K2, as the bits in the preview are not updated to properly support them.


WP

mikeleahy1234
Contributor

Thanks.

I think the step you are talking about is enabling the proprietary NVIDIA capture API (MontereyEnable).

I ran that command and the screen just kind of froze and went a little screwy for 1-2 seconds, then returned to normal. Nothing really happened, so I presume this part didn't finish properly?

The guide is for K1/K2 cards, so why wouldn't it work properly?

If the issue is with the capture API, is there anything I can do to get this working?

admin
Immortal
Immortal
Jump to solution

You have to reboot afterward. When the VM comes up it should hang at the Windows logo (you will not have console access); if that happens, it's using vDGA.

It might work some, but not reliably, and there are several issues with the tech preview code. The code was locked down before the cards were available and before the new APIs needed to make them work properly existed.

The guide generally states it's for the K1/K2 and is not as clear as it could be about vSGA versus vDGA. It also points to the HCL, which states only the K1/K2 are supported with vSGA, which is not the case. Most of our work has been on the Quadro cards.
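
For anyone following along, the guest-side part of the tech preview boils down to roughly this (I am paraphrasing from memory -- the exact tool location and switches are on page 19 of the guide):

:: inside the guest, after the NVIDIA driver is installed on the passed-through GPU
MontereyEnable.exe -enable
shutdown /r /t 0
:: after the reboot the console sits at the Windows logo; connect over PCoIP from then on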

WP
