Let me try to answer your questions.
So, for your first question: what I have seen so far is that when you have a VM with more than 2GB of vRAM configured and you add a VMDirectPath graphics adapter, you get the pci.hole error and the VM fails to power on.
In fact, even if you do a fresh installation with the VMDP graphics adapter attached, you will get the same behaviour.
The problem is that even if you add the parameter to your .vmx and manage to power on the VM, you will not be able to boot the machine, as you will get BSODs all the time. The only workaround seems to be to lower the configured vRAM to 2GB or below!
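For reference, the .vmx workaround being discussed is the pciHole setting. A minimal sketch, with values that are my assumption and may need tuning for your hardware:

```ini
pciHole.start = "1200"
pciHole.end   = "2200"
```

Even with this in place, as noted above, the guest still BSODs once you go past 2GB of vRAM.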
The 32-bit systems behave slightly differently, but are still no better at all (see my posts above). Could you please experiment a bit more and confirm you are getting the same behaviour?
Second and third question: you can't get rid of the default VMware graphics adapter. Think of it as a system with two VGAs. You will need to go to the graphics properties and choose to extend the display to the second (the VMDP) graphics adapter. Then you are better off outputting only to that adapter, to avoid any conflict between the VMware adapter and the physical one. You will then notice that the VI console goes black and your VM sends its video output to your screen (where you once had your ESX server output) via the VMDP graphics adapter. This way you won't get "no signal" on your display anymore.
Unfortunately, as soon as you power off this VM, the screen output will not redirect back to your ESX server's console. It will just go black. You can reboot the ESX server to get its console back on your screen, or you can use two graphics adapters if you want both the ESX console and the VM's VMDP display running independently.
Thanks a lot for the reply twood! You so made my day!
Haven't had time to test it tho', so I will get back soon with the results and certainly with some more questions, hehe
Holy smokes, great instructions Theo! It seems like the problems you guys report are bugs; more and more people are complaining about them. I hope the VMware support team reads the forum and passes this info to the developers for a fix.
I have a question - redirecting video output to the ESXi host monitor is great, but how about keyboard and mouse? Do you need VMDirectPath for them as well, or once you redirect the video output do the keyboard and mouse of the ESXi host kick in? (If they don't, it would be very miserable, as there is a two-device limit for VMDirectPath as I understand it.) My idea is to get full local access to one of my virtual machines (especially the Windows one); otherwise there is almost no benefit of VMDirectPath for video cards.
Off-topic: how about sound cards, have any of you guys tried to map Creative cards, for example? These are the two pieces of hardware I'm looking forward to giving the OS direct access to.
VMDirectPath deals with PCI devices. For a mouse/keyboard you would have to add an additional mouse/keyboard via USB and redirect those to a VM. This requires ESXi 4.1 or higher.
Stupid question, sorry:) My brain is running out of energy at 7am.
As for the 2GB memory limitation - I just watched this movie:
If you rewind to 3:00 you'll hear the guy mention that if you don't change the sched.mem.minsize parameter to match the memory assigned to the virtual machine, you will have problems starting the VM, as this is an issue directly related to VMDirectPath I/O. I couldn't find anything in the KB relating this parameter to your particular issue, guys, but it's still worth a try.
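For reference, the parameter the video talks about goes into the VM's .vmx file. A minimal sketch for a VM with 2048MB configured (the key name is as quoted from the video; on some builds it appears as sched.mem.min, so treat this as an assumption to verify on your setup):

```ini
memsize           = "2048"
sched.mem.minsize = "2048"   # reservation must match the configured memory
```

Setting the memory reservation to the full configured size in the vSphere Client should achieve the same thing.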
In fact, you can click inside the VI console of the virtual machine and then switch your monitor to your VMDP VGA (assuming you have assigned a VMDP VGA to your VM), and you will actually see the cursor moving there, which is very funny considering you "left" your input at the VI console.
This is very buggy though (a slow-moving and jumpy cursor), and you also need a second PC to launch your VI console from.
So the best way to do this, as Dave mentioned, is to VMDP a couple of USB ports to your virtual machine and attach a keyboard and mouse there.
Now, regarding both input and sound output: in my setup I have my ESXi 4.1 running happily, and I have passed through via VMDP one USB port (because I have a PC next to my ESXi box and I use a KVM switch with the keyboard and mouse attached), one ATI 5450 VGA adapter, one Creative Labs SoundBlaster Audigy, and the onboard FireWire, which I have not tested yet.
Input, sound and graphics all work great. For the sound, I couldn't VMDP the onboard sound card, as it was crashing my ESXi for some reason, so I just plugged in a good old SoundBlaster Audigy that was lying around.
Regarding which USB port to VMDP (which actually means which of the USB controllers to use), you will have to do some trial and error until you find the right ones. I recommend trying one or two USB controllers at a time, not all at once.
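To make the trial and error a bit less blind, it helps to first map out which USB controllers the box has and their PCI addresses. A rough sketch; the lspci output below is made-up sample data (real addresses and controller names will differ on your host, where you would run lspci itself from Tech Support Mode instead of the here-doc):

```shell
# Made-up sample of what `lspci` might print on the host.
cat <<'EOF' > lspci.txt
00:1a.0 USB Controller: Intel Corporation 82801JI (ICH10) USB UHCI Controller #4
00:1d.0 USB Controller: Intel Corporation 82801JI (ICH10) USB UHCI Controller #1
01:00.0 VGA compatible controller: ATI Radeon HD 5450
EOF

# Keep only the USB controllers; the PCI addresses (00:1a.0, ...) are what
# you match against the passthrough list in the vSphere Client.
grep -i 'usb controller' lspci.txt
```

Passing through one controller at a time, then checking which physical ports went dead on the host, tells you which controller owns which ports.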
This is very interesting, I will test and post my results.
It is very strange indeed that there is nothing in the VMware knowledge base about this problem with VMDP and configured memory.
Well, what the guy mentions in the video is that you need a memory reservation on the VM that has VMDP devices assigned equal to the configured vRAM, otherwise you get an error on power-on.
This is a well-known prerequisite for running a VM that uses VMDP devices.
Our problems with the 2GB limitation begin well after that.
The problem is well described on the first posts of the thread.
I wish a VMware representative would come to this thread and enlighten us!
Great news, I'm planning to attach my good old Audigy 2 ZS as well. I don't have an extra laptop at home for my parents so I need the machine to be usable from the console, that's why I'm planning such twisted setups:)
One thing I don't understand though - if you VMDP a device, is it directly accessible by all virtual machines at the same time (this would mean shared DMA, IRQs and other stuff that would decrease performance significantly), or does just one of them get direct access while the others see the VMware virtual adapter?
Well, the way I understand it: when you VMDP a device, you are declaring it as a device that can be used directly by a virtual machine, bypassing the vmkernel for that device. It is like taking the device away from the ESX server and having it ready for passthrough. At this point nothing really happens, except of course that the ESX server can no longer access the device, so be careful what you choose there, as you may ruin your ESX installation (for instance, by passing through the SATA controller your ESX boot disk relies on!).
You will also need to reboot the ESX server after configuring devices for passthrough.
After that, you assign the VMDP device by editing the target VM's configuration and adding the device found under "PCI devices". You can add up to 6 VMDP devices, and of course you cannot have more than one VM sharing the same VMDP device.
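For the curious, adding a device under "PCI devices" ends up as pciPassthru entries in the VM's .vmx file. A sketch with made-up IDs (the vSphere Client records the real values for you when you add the device, so treat these purely as illustration):

```ini
pciPassthru0.present  = "TRUE"
pciPassthru0.id       = "01:00.0"   # host PCI address of the device
pciPassthru0.deviceId = "0x68f9"    # example IDs only; the client fills
pciPassthru0.vendorId = "0x1002"    # in the real ones when you add it
```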
This will give you a performance benefit, but it will sacrifice some of the nice features of the ESXi hypervisor, such as snapshots, hot-plugging disks or NICs, etc.
And of course it will bind you to reserving all the memory configured on the VM, and to the dreadful pci.hole error when you try to give more than 2GB of RAM to the VM.
It is a nice feature though this bug seems to spoil the fun.
On the attached screenshots you can see my VMDP configurations.
I have read that directpath does not support hardware acceleration with video cards - so I am wondering what exactly the use cases for a directpath video card would be? I'd love to virtualize my media center but I am assuming that streaming 1080p will be a no go with directpath?
I was finally able to get this to work with Win 7 32-bit with only 2GB of RAM and a Radeon 5750 1GB.
I am curious as to why video acceleration is not supported. With the latest Flash player, things like YouTube videos hang with a green screen. I also see the same thing with DXVA video acceleration. After disabling hardware acceleration for both, everything works fine again (at the cost of a higher strain on the CPU, of course). Does it have anything to do with the fact that there are two video adapters present, and the VMware video adapter doesn't handle the acceleration request properly? Is there any way to completely remove the virtual video adapter so the VMDP adapter becomes the primary?
I also have one other hiccup, but I shouldn't be surprised. I chose to VMDP the primary video card used by ESXi 4.1, so the console video gets ripped away by the VM when it boots up. I can live without console video, but when I go to reboot the ESXi server, it just hangs near the final stages before the reboot. When I reset the machine, things are back to normal again. I guess I could trace it to see what is actually happening.
This whole idea of VMDP is pretty impressive. I just hope VMware works to further improve/stabilize it so it can be used with many more PCIe devices.
I too am trying to configure a Radeon HD5450 video card as a VMDP device in a Windows 7 guest. I added both the video and audio PCI devices and installed the ATI device drivers in Win7. Both of these devices are active in Device Manager. But the problem is there is no video output from this card. In the Windows 7 video settings, only the VMware VGA is displayed as a device; the ATI video device is not listed. I'm trying to use the same video card as a VMDP device for this VM while it is also the primary for VMware. I get the VMware startup on the display, and once I start the VM, the display goes blank and says no signal. The same has been experienced by Haxxfilif, as he said in this forum.
My question to you: what guest OS are you running with successful video output on the HD5450? Do you use this adapter both as the primary for VMware and configured exclusively for a guest OS, or do you have more than one video card? Could you also give me some more information on the hardware it's installed in?
I have the HD5450 installed in a Dell PowerEdge T110 server. It sits in PCIe slot 2, as that's the only x16-length slot that can accept this card. It has 1GB of video RAM.
Any and all help will be appreciated.
I'm also using one VGA card (ATI 5450) both for the ESX server and as a VMDP device.
My ESX 4.1 server is running on an Asus whitebox mobo (P6X58D-E), and the guest OS with the VGA assigned as VMDP is a Windows 7 64-bit machine.
I have also tried with Windows XP and Windows 7 32-bit and it works there too. I am just always limited to 2GB of vRAM.
So anyway, back to your problem: I think that what you describe is normal up to a point.
In the beginning, when I initially configured my video for passthrough, I too noticed that as soon as I powered on the VM with the passthrough video assigned, the ESX server's console went blank.
Back in my VI console, I went to the screen resolution settings, and first I had to select "extend these displays" and then "show desktop only on 2", which is the VMDP display, as it seems that the VMware adapter is still the primary one.
After that, every time I power on this VM, the ESX console redirects to the VMDP display and the VI console just shows the Win7 boot logo, but this is fine, as everything is displayed directly on my monitor, which is actually so cool.
Makes the VM feel like a native machine.
So try hitting Detect under the screen settings, then choose "extend displays", hit Apply, then "show only on 2" and Apply again.
If nothing happens, then I can't really tell what could be wrong with your setup.
You will also need to VMDP a USB controller too, so as to have native input on your VM.
screen.jpg 97.8 K
I also gave this delightful unsupported feature a try. I added a second GPU to my server and tried to dedicate that one to a VM (in order not to mess with the console).
Sadly, with this configuration I was unable to redirect the VM display to the dedicated GPU. It appeared in Device Manager, but it acted as if no display was attached to it. When I tried to update the driver, it sometimes couldn't start (the driver, that is).
I tried with an NVIDIA 6200 and the well-known ATI 5450.
Has anyone done any new tests since Update 1 for ESX was released?