VMware Horizon Community
mcrampton
Enthusiast

vGPU VM will only power on if it has the same NVIDIA profile as all the VMs on the host

I have a host with 2 x NVIDIA GRID K1 cards in it. I've got two VMs running successfully on it using vGPU, with the grid_k140q profile.

If I create a new VM with profile grid_k120q, it will not power on. I receive the following error:

Power On virtual machine:Disconnected from virtual machine.

See the error stack for details on the cause of this problem.

Time: 2016-01-15 11:51:37 AM

Target: Jan15-Active-vGPU

vCenter Server: vcenter.mydomain.com

Error Stack

An error was received from the ESX host while powering on VM Jan15-Active-vGPU.

Remote connection failure

Failed to establish transport connection (9): There is no VMware process running for config file /vmfs/volumes/537bdb04-c71878eb-1744-74867ad4ea02/Jan15-Active-vGPU/Jan15-Active-vGPU.vmx.

Disconnected from virtual machine.

If I give it profile grid_k140q, it powers on successfully.

Is this expected behaviour?

2 Replies
Linjo
Leadership

Hi Mike.

By default this is not expected. Have a look at this setting:

"vGPU.consolidation" in /etc/vmware/config

By default this should be set to "false", but it sounds like yours is set to "true".
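For reference, the line in /etc/vmware/config looks roughly like this (a sketch from memory; double-check the exact syntax against the GRID documentation for your vSphere release, and note that running VMs may need a power cycle to pick up a change):

vGPU.consolidation = "TRUE"

where "TRUE" enables consolidation (depth-first allocation) and "FALSE", or the line being absent, gives the default behaviour.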


// Linjo

Azreal75
Contributor

Not sure if this is still an open question; I'm looking into altering the vGPU.consolidation setting myself, to allow the use of two different vGPU profiles on the same card.

The issue you describe is indeed by design, and it follows from what is (at least today) the default setting, vGPU.consolidation=false.

By default, each VM with a vGPU resource is assigned to one of the two physical GPUs on the GRID card, so the first VM goes to GPU#1 and the second to GPU#2. Since vGPU profiles cannot be mixed on the same physical GPU, after powering on two vGPU VMs you have locked the physical GPUs into running VMs with those vGPU profiles. This is termed 'breadth-first' allocation.

Setting vGPU.consolidation=true changes this allocation to 'depth-first': instead of loading vGPU profiles horizontally across the physical GPUs, they are allocated vertically. That is, VMs with the same vGPU profile get allocated to GPU#1 until GPU#1 is at capacity, and only then do they get allocated to GPU#2.

Alternatively, again with vGPU.consolidation=true, you can deploy a single VM with a 1GB vGPU profile and one with a 2GB vGPU profile. The 1GB vGPU VM will be allocated to GPU#1 and the 2GB vGPU VM to GPU#2, since a physical GPU can only run one type of vGPU profile.

Subsequent 1GB vGPU VMs will then get allocated to GPU#1 until it reaches capacity, and subsequent 2GB vGPU VMs will be allocated to GPU#2 until it reaches capacity.
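To make the difference concrete, here is a small Python sketch of the placement logic as I understand it (the GPU count, per-GPU capacities, and all names are illustrative assumptions, not actual ESXi internals):

def place_vm(gpus, profile, policy):
    # Each physical GPU runs at most one vGPU profile type; 'slots' says how
    # many VMs of a given profile fit on one GPU (illustrative numbers).
    slots = {"grid_k120q": 8, "grid_k140q": 4}
    candidates = [g for g in gpus
                  if g["profile"] in (None, profile)
                  and len(g["vms"]) < slots[profile]]
    if not candidates:
        return None  # no GPU can host this profile: power-on would fail
    if policy == "breadth-first":  # default: spread VMs across GPUs
        target = min(candidates, key=lambda g: len(g["vms"]))
    else:                          # depth-first: fill one GPU before the next
        target = max(candidates, key=lambda g: len(g["vms"]))
    target["profile"] = profile
    target["vms"].append(profile)
    return target["name"]

def fresh_gpus(n=2):
    return [{"name": "GPU#%d" % (i + 1), "profile": None, "vms": []}
            for i in range(n)]

for policy in ("breadth-first", "depth-first"):
    gpus = fresh_gpus()
    print(policy)
    for p in ("grid_k140q", "grid_k140q", "grid_k120q"):
        print(" ", p, "->", place_vm(gpus, p, policy))

Under breadth-first the third VM comes back as None, which is exactly the failed power-on in the original post; under depth-first both k140q VMs pack onto GPU#1, leaving GPU#2 free for the k120q profile.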
