VMware Horizon Community
paul2ouy
Enthusiast

VMware View 5.3 and NVIDIA GRID K2 cards

Any help appreciated

I am running a VMware View 5.3 POC

vSphere 5.5

NVIDIA-VMware ESXi 5.5 319.65xxxxxx.vib

View 5.3

SL250s

NVIDIA GRID K2

Issues

1.

SSH to the host.

VT-d (Intel Virtualization Technology for Directed I/O) is enabled in the BIOS, but if I run esxcfg-module -l | grep vtddmar, the vtddmar module does not appear in the list.

Am I missing something?
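(For reference, that check from the ESXi shell looks roughly like the lines below; this is only a sketch, and note the corrected -l (lowercase L) flag:)

# List loaded kernel modules and look for the Intel VT-d DMA remapper
esxcfg-module -l | grep vtddmar

# Confirm the host at least sees the two GRID K2 GPUs on the PCI bus
lspci | grep -i nvidia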

2.

This may relate to the above. I am trying to use the NVIDIA GRID K2 card (2 GPUs) for GPU pass-through. If I enable PCI passthrough in the hardware settings on the ESXi host for both GPUs, the card stops working and no longer shows up when running /etc/init.d/xorg status and/or gpuvm. If I enable only 1 GPU under the PCI devices, it does show up in xorg and gpuvm.
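(The host-side checks mentioned above look roughly like this from the ESXi shell; a sketch only, and gpuvm and nvidia-smi are only available once the NVIDIA VIB is installed:)

# Confirm the NVIDIA host driver VIB is installed
esxcli software vib list | grep -i nvidia

# Check that the X server used for vSGA is running
/etc/init.d/xorg status

# Show which VMs are currently using GPU resources
gpuvm

# The driver's own view of the GPUs
nvidia-smi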

Any help in getting these cards working would be appreciated.

Regards

Paul

25 Replies
Linjo
Leadership

Hey Paul.

So you seem to be mixing up vSGA (Virtual Shared Graphics Acceleration) and vDGA (Virtual Dedicated Graphics Acceleration).

With vSGA you load the driver on the host (the NVIDIA VIB) and enable X.Org, and you do NOT use PCI passthrough.

With vDGA you load the standard NVIDIA driver in the guest, install NO driver on the host, and you DO use passthrough.
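(As a quick host-side sanity check, the two setups look different from the ESXi shell; a rough sketch, assuming standard 5.5 tooling:)

# vSGA host: NVIDIA VIB installed and X.Org running
esxcli software vib list | grep -i nvidia
/etc/init.d/xorg status

# vDGA host: the GPU is flagged for passthrough instead; on 5.x the
# passthrough setting is typically recorded in /etc/vmware/esx.conf
grep -i passthru /etc/vmware/esx.conf
esxcli hardware pci list | grep -i nvidia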

Which one are you trying to use?

// Linjo

paul2ouy
Enthusiast

Linjo,

Good answer, and yes, we are trying to use both, but not at the same time, so this puts me on the right track. I will try this now. A couple of questions for you:

1. Concerning vDGA, do I enable both PCI GPUs in the hardware settings?

2. With VT-d enabled in the BIOS, should the vtddmar module then appear when I run esxcfg-module -l | grep vtddmar? The module does not appear on the host that has the GRID K2 card.

Regards

Paul

paul2ouy
Enthusiast

Linjo

Do you have to enable VT-d and make sure the module is loaded (vtddmar for Intel)?

Regards

Paul

Linjo
Leadership

Yes, you have to enable VT-d to use vDGA.

You can skip the vtddmar check; it is not always a valid indicator.
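(If you want another data point besides that module, a rough sketch of what can be checked from the ESXi shell, assuming standard 5.5 log locations; VT-d/IOMMU messages may or may not appear depending on the build:)

# Look for IOMMU / VT-d related messages from boot
grep -i iommu /var/log/vmkernel.log

# Confirm the GRID K2 GPUs are visible as PCI devices
esxcli hardware pci list | grep -i nvidia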

// Linjo

paul2ouy
Enthusiast

Linjo,

vSGA is working well. I am now using another host for vDGA. I am following the "Graphics Acceleration in VMware Horizon View Virtual Desktops 5.3" VMware white paper, as well as what you told me for vDGA, but I keep hitting an error:

"No host is compatible with the virtual machine."

Host: SL250s with NVIDIA GRID K2. I enabled 1 of the GPUs under the hardware PCI devices, and this device is available for VMs to use.

Gold image: I added the PCI device (the list shows the NVIDIA GPU enabled on the host) and installed the NVIDIA driver on Windows 7. From there I created my pool with 2 desktops for testing, but I keep getting "No host is compatible with the virtual machine" once the machine tries to power on.

Note: at the bottom of the PCI devices page on the host, the device listing shows that the 2 VMs created for the pool can use the card.

Any help or direction would be appreciated.

Regards

Paul

Linjo
Leadership

Have you reserved all the memory allocated?

Have you added pcihole.start = 2048 in the advanced settings on the VM?
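(For reference, those two settings end up in the VM's .vmx roughly as below. This is a hedged sketch: the 8192 values assume a hypothetical 8 GB VM, sched.mem.min is the reservation in MB, and the reservation can equally be set with "Reserve all guest memory" in the vSphere Web Client instead of editing the file:)

memSize = "8192"
sched.mem.min = "8192"
pcihole.start = "2048"

Without the full reservation, the power-on of a VM with a passthrough device usually fails with a message that the memory reservation should equal the memory size.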

Please attach the vmware.log.

// Linjo

paul2ouy
Enthusiast

Linjo,

This I have not done; I will complete it in the gold image:

reserve all the memory and pcihole.start = 2048

Concerning the hardware PCI devices setting for the card (GRID K2), under "DirectPath I/O Devices Available to VMs" I have enabled both GPUs, as there will be 1 VM for each GPU. Once I have completed the gold image for the vDGA POC, is it best to create a manual pool or linked clones? Sorry to be a pain, but this has been a bit of a mission; I have not fully grasped the "Graphics Acceleration in VMware Horizon View Virtual Desktops - 5.3" white paper.

Regards

Paul

Linjo
Leadership

I can understand the confusion; our documentation has not been "top notch" regarding this, but it's slowly improving.

paul2ouy
Enthusiast

Linjo,

Sorry for the delay in getting back to you. I completed the settings in the gold image and created a pool with 2 linked clones, adding a PCI device to each and adjusting each one to point at one of the 2 different GPUs on the GRID K2. I powered on each VM and, looking at Host - Manage - Settings - Hardware - PCI Devices, under "DirectPath I/O PCI Devices Available to VMs" I clicked on each GPU, and at the bottom "VMs Using This Device" shows one VM per GPU. So far so good, I think.
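(A quick way to double-check that mapping from the ESXi shell is to look at the passthrough entries each clone picked up in its .vmx; a rough sketch, with the datastore and VM names as placeholders:)

# Each clone should show pciPassthru0.present = "TRUE" and a pciPassthru0.id
# matching the PCI address of the GPU assigned to it
grep -i pciPassthru /vmfs/volumes/<datastore>/<clone-name>/<clone-name>.vmx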

Looking in each VM, dxdiag.exe still indicates I am using vSGA mode under the display devices. How do I change this to vDGA, as Detect is greyed out?

I have read this can be done via the registry and the View Agent GPO.

Regards

Paul

Linjo
Leadership

So if you look in device manager in your guest, do you see the Nvidia GPU there? Have you installed the latest Nvidia drivers?

If so, you then need to run this command: "C:\Program Files\Common Files\VMware\Teradici PCoIP Server\MontereyEnable.exe" -enable

Then you need to restart the guest and connect from a client, full screen with PCoIP

// Linjo


paul2ouy
Enthusiast

I did all this on the gold image before creating the new pool for the 2 linked clones. I guess I have to run this on each linked clone?

Linjo
Leadership

No, once done on the parent it should not be needed on the linked clones (if done right).

Did you get it to work with the parent?

paul2ouy
Enthusiast


Linjo,

Excuse my ignorance, but do I run MontereyEnable.exe -enable on each provisioned linked clone (in my case 2)? I only ran this on the gold image before creating the new pool for the linked clones. I think this is where I am going wrong.

Regards

Paul

paul2ouy
Enthusiast

I sent my message just before I got yours. I see what I am doing wrong; I will complete this again on the gold image and then recreate my linked clones from it. I will make sure the gold image is working first. Thank you for your help.

Regards

Paul

paul2ouy
Enthusiast

Linjo,

I just read your message again. All this was working on the parent, but I did not test it all the way through for passthrough with the PCI device; maybe that is what I should do first. One thing I came across: does the parent need to have the PCI card added before I create the linked clones? I took it out and added it to the linked clones once they were created, giving each its respective GPU.

Regards

Paul

paul2ouy
Enthusiast


Linjo

Just to say thanks for your help. It has been a slog, but I am new to View... vDGA is all up and running.

Regards

Paul

Linjo
Leadership

Great to hear and thanks for reporting back!

I would be very interested in hearing about your experiences once you've been running it for a while.

// Linjo

IT_Vision
Contributor

Linjo,

I have recently upgraded my environment to View 5.3.1 in the hope of resolving an issue we were seeing in 5.2. On one of our hosts we are using vDGA with a GRID K1 card. The issue I was hoping would get resolved is that when the VM is rebooted, it appears to lose knowledge of the PCI passthrough device attached to it. The dxdiag results show n/a for all items under the Display tab. However, Device Manager shows the NVIDIA K1 video adapter available and running the latest NVIDIA drivers. When this happens I have to remove the PCI device from the VM and re-add it, and then dxdiag reports the correct info (the K1 card being available) until the VM gets rebooted again. The VM also shows the VMware SVGA 3D adapter, which I have manually disabled to force the VM to use the K1 card.

Any thoughts on this, and is this still a known issue with Horizon View 5.3.1?

Any feedback is appreciated.

Cheers

Linjo
Leadership

So does the PCI Passthrough config stick on the actual VM?

Did you run the MontereyEnable command in the VM?
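(One way to check whether the passthrough assignment actually persists across reboots, from the ESXi shell; a rough sketch with placeholder paths:)

# The passthrough device should stay listed in the VM's .vmx
grep -i pciPassthru /vmfs/volumes/<datastore>/<vm-name>/<vm-name>.vmx

# The VM's vmware.log in the same folder records passthrough messages at power-on
grep -i passthru /vmfs/volumes/<datastore>/<vm-name>/vmware.log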

// Linjo
