VMware Cloud Community
alg-interexpert
Contributor

vDGA support with HP NVIDIA K4000 passthrough

I am trying to use vDGA on an HP ProLiant DL380 Gen8 with an HP NVIDIA K4000 video card. Once this works, I will use the configuration for Horizon View clients. But when I create a VM, add the PCI device that becomes available after the passthrough configuration is completed, and then power on the VM, I receive the following error:

An error was received from the ESX host while powering on VM w7-view-template-20140110-jvdm.

Failed to start the virtual machine.

Failed to register the device pciPassthru1 for 4:0.0 due to unavailable hardware or software support

I am desperate to make this work. Can anyone help me out?

Thanks in advance,

Jan

1 Solution

Accepted Solutions
Linjo
Leadership

Great news, thanks for posting back!

This is your only option; it is not possible to share the K4000 between virtual machines. The non-Kepler Quadro 4000 (and the GRID K1/K2) can be used with vSGA (shared graphics), but that is not full passthrough.

// Linjo


10 Replies
Linjo
Leadership

You need to do a few things to get it to work:

1. Reserve all the assigned memory to the virtual machine.

2. In the advanced settings, add pciHole.start = "2048" (see the example below).
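For example, for a VM with 3072 MB of memory, the relevant .vmx entries would look roughly like this; the reservation can also be set in the vSphere Client under the VM's resource settings, and the values here are only a sketch to adjust to your own VM:

sched.mem.min = "3072"
sched.mem.pin = "TRUE"
pciHole.start = "2048"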

If it still does not work, please attach the .vmx file and the vmware.log file from the virtual machine.

// Linjo

alg-interexpert
Contributor

It still doesn't work; the machine won't start. The startup error is listed in the log.

VMX file:

======================
.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "9"
vmci0.present = "TRUE"
displayName = "w7-view-template-20140110-jvdm"
extendedConfigFile = "w7-view-template-20140110-jvdm.vmxf"
svga.vramSize = "8388608"
memSize = "3072"
sched.cpu.units = "mhz"
tools.upgrade.policy = "manual"
scsi0.virtualDev = "lsisas1068"
scsi0.present = "TRUE"
ide1:0.startConnected = "FALSE"
ide1:0.deviceType = "cdrom-raw"
ide1:0.clientDevice = "TRUE"
ide1:0.fileName = "/usr/lib/vmware/isoimages/windows.iso"
ide1:0.present = "TRUE"
scsi0:0.deviceType = "scsi-hardDisk"
scsi0:0.fileName = "w7-view-template-20140110-jvdm.vmdk"
sched.scsi0:0.shares = "normal"
scsi0:0.present = "TRUE"
floppy0.startConnected = "FALSE"
floppy0.clientDevice = "TRUE"
floppy0.fileName = "vmware-null-remote-floppy"
ethernet0.virtualDev = "vmxnet3"
ethernet0.networkName = "productie"
ethernet0.addressType = "vpx"
ethernet0.generatedAddress = "00:50:56:a5:23:39"
ethernet0.present = "TRUE"
guestOS = "windows7-64"
toolScripts.afterPowerOn = "TRUE"
toolScripts.afterResume = "TRUE"
toolScripts.beforeSuspend = "TRUE"
toolScripts.beforePowerOff = "TRUE"
tools.syncTime = "FALSE"
uuid.bios = "42 25 5c 89 d1 a6 c0 53-33 7d 7a 6f da 69 c5 fa"
vc.uuid = "50 25 7a 82 27 4d c7 17-70 98 68 84 35 52 32 14"
sched.cpu.min = "0"
sched.cpu.shares = "normal"
sched.mem.min = "3072"
sched.mem.minSize = "3072"
sched.mem.shares = "normal"
uuid.location = "56 4d a0 0a 0a 94 ac a9-19 80 11 95 79 10 5b 45"
svga.present = "TRUE"
vmci.filter.enable = "true"
tools.guest.desktop.autolock = "false"
hpet0.present = "TRUE"
nvram = "w7-view-template-20140110-jvdm.nvram"
virtualHW.productCompatibility = "hosted"
scsi0.pciSlotNumber = "160"
pciBridge0.present = "true"
sched.scsi0:0.throughputCap = "off"
ethernet0.pciSlotNumber = "192"
pciBridge4.present = "true"
vmci0.pciSlotNumber = "32"
snapshot.action = "keep"
sched.cpu.latencySensitivity = "low"
pciBridge4.virtualDev = "pcieRootPort"
replay.supported = "FALSE"
unity.wasCapable = "FALSE"
pciBridge0.pciSlotNumber = "17"
pciBridge4.pciSlotNumber = "21"
pciBridge5.pciSlotNumber = "22"
pciBridge6.pciSlotNumber = "23"
pciBridge7.pciSlotNumber = "24"
tools.remindInstall = "FALSE"
hostCPUID.0 = "0000000b756e65476c65746e49656e69"
hostCPUID.1 = "000206c200200800029ee3ffbfebfbff"
hostCPUID.80000001 = "0000000000000000000000012c100800"
guestCPUID.0 = "0000000b756e65476c65746e49656e69"
guestCPUID.1 = "0002065100010800829822030fabfbff"
guestCPUID.80000001 = "00000000000000000000000128100800"
pciBridge4.functions = "8"
userCPUID.0 = "0000000b756e65476c65746e49656e69"
userCPUID.1 = "000206c200200800029822030fabfbff"
userCPUID.80000001 = "00000000000000000000000128100800"
evcCompatibilityMode = "TRUE"
vmotion.checkpointFBSize = "8388608"
softPowerOff = "TRUE"
scsi0.sasWWID = "50 05 05 69 d1 a6 c0 50"
pciBridge5.present = "true"
pciBridge5.virtualDev = "pcieRootPort"
pciBridge5.functions = "8"
pciBridge6.present = "true"
pciBridge6.virtualDev = "pcieRootPort"
pciBridge6.functions = "8"
pciBridge7.present = "true"
pciBridge7.virtualDev = "pcieRootPort"
pciBridge7.functions = "8"
toolsInstallManager.updateCounter = "2"
toolsInstallManager.lastInstallError = "0"
pciPassthru0.deviceId = "0xe0b"
pciPassthru0.id = "04:00.1"
pciPassthru0.systemId = "4d78967e-963f-c8c0-81d5-001e0bd1fcb0"
pciPassthru0.vendorId = "0x10de"
pciPassthru0.pciSlotNumber = "-1"
sched.swap.derivedName = "/vmfs/volumes/515e81ae-c096ffe8-eef8-ac162d771d50/w7-view-template-20140110-jvdm/w7-view-template-20140110-jvdm-980748ed.vswp"
pciHole.start = "2048"
svga.autodetect = "true"
pciPassthru1.deviceId = "0x11fa"
pciPassthru1.id = "04:00.0"
pciPassthru1.present = "TRUE"
pciPassthru1.systemId = "4d78967e-963f-c8c0-81d5-001e0bd1fcb0"
pciPassthru1.vendorId = "0x10de"
pciPassthru1.pciSlotNumber = "256"
sched.mem.pin = "TRUE"
replay.filename = ""
scsi0:0.redo = ""
======================

Linjo
Leadership

OK, I have seen that before; let's try to fix it.

First, update the BIOS on the host to the latest version available from HP. I believe it should be this one:

Drivers, Software and Firmware for HP ProLiant DL380e Gen8 Server - HP Support Center

Check that the GPU is mapped below the 4 GB boundary by disabling the server's SBIOS option that controls 64-bit memory-mapped I/O support. This option may be labeled "Enable >4G Decode" or "Enable 64-bit MMIO".

If that still does not work, try changing pciHole.start to 1024.

If it still does not work, please attach the vmkernel.log (the commands below show how to pull out the relevant lines).
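If you want to check the host side yourself first, something like the following from the ESXi shell should show how the vmkernel sees the card and any messages logged around power-on (this assumes the GPU is still at PCI address 04:00, as in your error message; exact output varies by ESXi build):

# List the PCI devices the vmkernel sees; look for the NVIDIA entries
esxcli hardware pci list

# Pull vmkernel log lines that mention the GPU's PCI address
grep -i "04:00" /var/log/vmkernel.log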

There was a similar thread that was solved by updating the BIOS; read more here:

NVIDIA k20m PCI passthrough fails ESXi 5.5

Linjo
Leadership

I have been researching this a bit, and it seems that the HP Gen8 does not work very well with VMDirectPath I/O (passthrough) because of some incompatibilities.

I would advise opening support cases with both VMware and HP so they are more motivated to fix this problem.

// Linjo

alg-interexpert
Contributor

Hello Linjo,

Thanks for the research. I will try the firmware update later this week; since this is a production server, I have to schedule the maintenance. At the same time I will check the BIOS setting.

To be sure, I will open a service request with both HP and VMware as well.

I will keep you informed.

Jan

alg-interexpert
Contributor

Hello Linjo,

After the BIOS update to 2013.09, PCI passthrough for the K4000 works! I couldn't find the parameter for 64-bit memory-mapped I/O.

In your experience: I want to share the GPU between virtual machines (full passthrough). My passthru.map lists:

#NVIDIA

10de ffff bridge false
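If I read the file header correctly, the columns are vendor-id, device-id, resetMethod and fptShareable, so that wildcard line should already match every NVIDIA device. For reference, the surrounding lines look roughly like this:

# file format: vendor-id device-id resetMethod fptShareable
# reset methods: flr, d3d0, link, bridge, default
# fptShareable: true/default, false
#NVIDIA
10de ffff bridge false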

Do I have to add other options to that?

According to http://h20195.www2.hp.com/V2/GetPDF.aspx%2F4AA4-1701ENW.pdf it is not possible to do this with VMware View and PCoIP.

Jan

Linjo
Leadership

Great news, thanks for posting back!

This is your only option; it is not possible to share the K4000 between virtual machines. The non-Kepler Quadro 4000 (and the GRID K1/K2) can be used with vSGA (shared graphics), but that is not full passthrough.

// Linjo

alg-interexpert
Contributor

Hello Linjo,

I saw some discussions you were involved in about the AMD FirePro S7000. Is that card a possibility for vSGA? What would you recommend as an affordable alternative to high-end graphics solutions like dedicated blades?

Anyway thanks for your help!

Jan

Linjo
Leadership

The S7000 is supported by VMware for vSGA (shared graphics), but AMD has not yet released an ESXi driver (VIB) for it.

"What would you recommend as an affordable alternative to high-end graphics solutions like dedicated blades?"

It depends on what you mean by "high end". What applications will you use, and what requirements are there with regard to high availability?

vDGA (passthrough) with a GRID K2 will provide the best "high-end graphics" possible in a VDI desktop, but as you noted these GPUs are not cheap.

The Quadro 4000 is reasonably priced and still very potent; it works with both vSGA and vDGA.

// Linjo

alg-interexpert
Contributor

I tried the K4000 because my suppliers say the Quadro 4000 is outdated and only the K4000 can be ordered.

We are using the card for medium 3D users (CAD applications) and about 10-15 Windows 7 Horizon View desktops on the same server. This solution is for a construction company that wants to provide desktops for workers on a construction site, or for people who work from home. With the DL380 we can only install two K4000 cards per server for vDGA with PCI passthrough; with vSGA we could assign the card to a pool of clients.

Jan
