VMware Horizon Community
MNKrantz
Enthusiast

can vDGA ever really match the performance of a physical machine with a high end gpu?

I have been wrestling with the implementation of vDGA for a while now and am starting to think that there is no way it could match the performance of a physical machine with a high-end GPU. I have followed all the implementation guides and read numerous articles on deployment and recommended tuning. I'm not sold on the technology for supporting CAD users. Has anyone out there had an experience to the contrary? If so, can you please share your experience so that it may help us find what we're missing...if anything. Any links with helpful info would be appreciated, although I think we have seen most of the official documentation on the topic.

Thanks in advance.

9 Replies
Linjo
Leadership

I would say it can. What kind of issues are you seeing, and what does your configuration look like?

I'm working on a configuration and best practice guide and am looking for input/issues.

// Linjo

Best regards, Linjo Please follow me on twitter: @viewgeek If you find this information useful, please award points for "correct" or "helpful".
MNKrantz
Enthusiast

First...thanks for responding!

You say it can, but have you actually seen that level of performance?

We are running View 5.2 (we are aware that vDGA is officially supported in 5.3). We have a Dell R720 with two K1 cards installed. We started with the vSGA model and, based on all of the post-installation steps in the Virtual Machine Graphics Acceleration Deployment Guide, everything appears to be working as it should. However, we are not seeing the performance benefits advertised, especially when it came to OpenGL 2.1 with CAD. We then moved on to the vDGA model and were even more disappointed. Before we even got to testing CAD apps, we saw poor performance with basic functions such as moving windows, and misbehaving busy circles. The gpuvm command shows GPU assignment to the VM, but the nvidia-smi command shows no utilization as it did with the vSGA model. In addition, DxDiag shows the VM as using the K1 card. Is there any other info you are curious about?

Linjo
Leadership

So it depends on what you mean by "level of performance"; some of it can be hundreds of times better, for example when opening a big CAD drawing compared to doing it over a slow WAN connection...

The K1 is really four Quadro 600s stacked on one PCIe card, so do not expect stellar performance from it.

I always recommend going for the K2, since its total performance is about three times that of the K1.

You will not see any performance numbers from vDGA by using nvidia-smi on the ESX console or over SSH, since the GPU is now passed through to the virtual machine.

You need to monitor the GPU from inside the virtual machine instead.

I suspect that there are other things causing the poor performance. How are the PCoIP settings configured?

// Linjo

MNKrantz
Enthusiast

Thanks for the info regarding the nvidia-smi command not working in the vDGA model. That makes perfect sense. We have not moved much from the defaults on the PCoIP settings other than increasing the frame rate. Any suggestions?

Linjo
Leadership

Start with checking on how many frames are rendered in the vm:

Open performance monitor (perfmon.exe)

  1. Start – Administrative Tools – Performance Monitor.  Uncheck Processor Time box.
  2. Click on green Plus button, browse to PCOIP Imaging Statistics – Imaging Encoded Frames/sec
  3. Highlight PCOIP session in Instances of selected Object, click on Add button
  4. By default you should see the encoding session running at 30fps.
  5. Add or modify the following registry keys

HKLM\Software\Policies\Teradici\PCoIP\pcoip_admin_defaults\pcoip.maximum_frame_rate = 60
HKLM\Software\VMware, Inc.\VMware SVGA DevTap\Win32FrameRate = 60


The framerate should now jump to 60. (This applies if you are running 5.3; on an earlier version you need to disconnect and reconnect for the setting to take effect.)
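The two registry keys above can also be applied as an importable .reg file instead of editing them by hand. This is a sketch only: the key paths and values (60 fps) are taken from the steps above, but the DWORD value type is an assumption — verify against Teradici's and VMware's documentation before importing.

```reg
Windows Registry Editor Version 5.00

; Raise the PCoIP encoder frame-rate cap (frames per second; 0x3c = 60).
; DWORD type is an assumption -- confirm against Teradici's documentation.
[HKEY_LOCAL_MACHINE\Software\Policies\Teradici\PCoIP\pcoip_admin_defaults]
"pcoip.maximum_frame_rate"=dword:0000003c

; Raise the SVGA DevTap frame-rate cap to match (0x3c = 60).
[HKEY_LOCAL_MACHINE\Software\VMware, Inc.\VMware SVGA DevTap]
"Win32FrameRate"=dword:0000003c
```

Import it from an elevated prompt (the filename here is just an example): `reg import frame-rate.reg`, then reconnect the session if you are on a version earlier than 5.3.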

MNKrantz
Enthusiast

The registry keys may have helped things somewhat, thank you.

What's the best way to monitor the GPU from inside the virtual machine?

Any other tuning suggestions?

Linjo
Leadership

There is always a bottleneck somewhere; it's just a matter of finding the next one...

How much bandwidth are you using? There is a cap in PCoIP that can be set with this regkey:

HKLM\Software\Policies\Teradici\PCoIP\pcoip_admin_defaults\PCoIPMaxLinkRate
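If you want to try raising that cap, here is a minimal .reg sketch. The post does not state the unit for this key; Teradici's bandwidth settings are typically expressed in kilobits per second, so the value below (90000 ≈ 90 Mbit/s) is an illustrative assumption, not a recommendation — verify the unit and a sensible value for your link before applying it.

```reg
Windows Registry Editor Version 5.00

; PCoIP session bandwidth cap. Unit assumed to be kilobits per second
; (0x15f90 = 90000, roughly 90 Mbit/s) -- verify before use.
[HKEY_LOCAL_MACHINE\Software\Policies\Teradici\PCoIP\pcoip_admin_defaults]
"PCoIPMaxLinkRate"=dword:00015f90
```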

TheLoneVM
VMware Employee

Typically the bottleneck for desktop performance is sending the data to the remote desktop. With a fast local network and proper configuration, you could get raw frame-rate performance that would rival a local machine with the exact same hardware. However, a local desktop with a high-end GPU will usually outperform a vDGA configuration with a K1 board. As for monitoring GPU resources, GPU-Z or NVIDIA System Monitor should help out here.

NVIDIA System Monitor | NVIDIA

GPU-Z Video card GPU Information Utility

Linjo
Leadership

I find the "Fraps" application to be useful; with it you can find out what framerate the GPU is producing.

Then compare that to the FPS that PCoIP is encoding and how many frames the client actually receives. (The last one is easier to measure on a zero client.)

// Linjo
