I want to add something. My graphics cards are M60s, which have 2 GPU chips per card. On my configuration, I cannot mix different vGPU profiles on the same GPU chip. All of my guests were using a 1GB profile, and when I changed one to a different profile (like 4q) it wouldn't power on, even though I had 4GB of RAM left on the physical card. When I changed it back to match the rest of the guests, it powered on fine. You have to fully evacuate an entire GPU chip and allocate only one profile to it.
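The rule above can be sketched in a few lines. This is a toy model, not vendor code: the profile names and framebuffer sizes are assumptions for illustration, and the real scheduler lives in the NVIDIA host driver.

```python
# Toy sketch (not vendor code) of the "one vGPU profile per physical GPU"
# placement rule described above. Profile names and sizes are assumed.

M60_FB_MB = 8192  # each M60 board has 2 GPUs, ~8 GB framebuffer per GPU

PROFILE_FB_MB = {"grid_m60-1q": 1024, "grid_m60-4q": 4096}  # assumed sizes

def can_place(gpu_profiles, new_profile):
    """A GPU can host a new vGPU only if it is empty or already runs the
    exact same profile, and has framebuffer left for one more instance."""
    if gpu_profiles and gpu_profiles[0] != new_profile:
        return False  # mixed profiles on one GPU chip are rejected
    used = sum(PROFILE_FB_MB[p] for p in gpu_profiles)
    return used + PROFILE_FB_MB[new_profile] <= M60_FB_MB

# A GPU already running four 1q guests rejects a 4q guest,
# even though 4 GB of framebuffer is technically still free.
gpu = ["grid_m60-1q"] * 4
print(can_place(gpu, "grid_m60-4q"))  # False: profile mismatch
print(can_place(gpu, "grid_m60-1q"))  # True: same profile, FB available
print(can_place([], "grid_m60-4q"))   # True: an empty GPU takes any profile
```

This matches the behavior I saw: the "won't power on" case is the mixed-profile check failing, not an out-of-memory condition.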
If you go into the CLI of the vSphere host, you can run the command
nvidia-smi vgpu
That will show you which guests are located on which GPU, and on which card if there are multiple cards. Power off and remove the vGPU devices from guests until you have evacuated one GPU. Then you can change the profile to something else and the VM should power on. Add the vGPU devices back to the other guests, and they should power on using another GPU.
This was the issue I was having, with the exact same error messages as the rest of you.