I've been successful with both an AMD Radeon HD 3450 as well as a Radeon HD 6850.
The ESXi host is a SuperMicro X8SIA-F (Intel 3420 platform) with a Xeon X3440 and 16GB RAM.
The VM that's getting the graphics card is Windows 7 Ultimate x64. Both the older (3450) and newer (6850) cards actually present two PCIe devices. The first is the GPU and the second is an audio device (for audio over HDMI / DisplayPort).
In addition to the AMD GPU and AMD audio device, I'm passing through one of my two Intel USB host controllers (onboard device coming off of the 3420 PCH). This lets me connect my keyboard, mouse, webcam and USB sound card for a fully functioning workstation as a VM.
I can confirm that any VM which has VMDirectPath devices must have 100% of its configured memory reserved (this is the reason I upgraded the host from 8GB to 16GB of RAM).
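For reference, the reservation shows up in the VM's .vmx file as a pair of entries like the following (a sketch; the 2048 value is just an example, and sched.mem.min is the reservation in MB, which must match memsize for VMDirectPath to work):

```
memsize = "2048"
sched.mem.min = "2048"
```

You can get the same effect by dragging the memory reservation slider to the maximum in the vSphere client's VM settings.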
I am also running into the RAM limitation. It took quite a few reboots of the VM, but I've determined that the RAM ceiling for my workstation VM is 2816 MB. It's an odd number, but if I go any higher, Windows quits at boot with a random BSOD. It is very frustrating that I can't configure the workstation VM with a proper 4-6GB of RAM, but it's a tradeoff I'm willing to make for this setup. Hopefully a future release of ESXi will address this limitation, but I'm not holding my breath as this setup is definitely not supported.
I initially tried using USB passthrough for the keyboard and mouse, but they didn't even show up in the list of available USB devices to pass through in vSphere. It seems only USB storage and hardware locking keys are supported for USB passthrough. Giving the VM direct access to the USB controller has been a great workaround.
Some tips for anyone else trying this out:
1. Make sure that you are using the WDDM 1.1 driver for the VMware SVGA display adapter. There is an older, non-WDDM driver but Windows 7 doesn't support heterogeneous display adapter setups unless all of the adapters are running WDDM 1.1 drivers.
2. I ended up disabling the desktop display on the VMware adapter. This will prevent you from accessing the console from vSphere, but I use Remote Desktop for situations when I'm not at the local console. The advantage is that your mouse won't disappear onto the extended virtual screen when you're at the local console.
3. Start off at 2GB of RAM and then work your way up if you're stubborn like me and insist on > 2GB. Make sure you update the memory reservation whenever you increase the amount of configured RAM for the VM.
Unfortunately no hardware acceleration.
However, I think the answer is it depends on your video card and CPU. With my ATI 5750 I have no problems whatsoever watching 1080p YouTube or any other video for that matter. I'd imagine any mid-level video card and decent CPU will do just fine.
One thing to watch out for, though: hardware acceleration gets turned on by default in new versions of Flash if it detects a compatible video card. You need to shut this off before trying to watch a video or it may hang.
Hardware accelerated h.264 playback (DXVA2) is working for me using the system described in an earlier post (AMD 6850).
I used XBMC as my test bed. There is an easy method to enable/disable accelerated playback in XBMC and you can confirm whether it is enabled by pressing 'o' during playback.
With hardware acceleration enabled, my CPU sits at 3-5% load during playback of 1080p video, and it is perfectly smooth.
When I disable hardware acceleration, CPU usage ranges from 30-50%, and on certain videos playback skips badly. I'm not sure why, as the CPU should be more than capable of software-only playback.
I am also able to play games with native GPU performance, including three 1680x1050 displays running in an eyefinity setup.
I'm wondering if the DXVA issues that others were seeing are a simple driver or software bug that is completely unrelated to ESXi and VMDP. I read that AMD had to update their display drivers after Adobe enabled DXVA support in Flash. For anyone who is trying to make VMDP work with DXVA and having trouble, I recommend trying the latest versions of Flash as well as your display card drivers (if you haven't already).
First, I want to say thanks for this brilliant solution. This is the only way I've found so far where you can use a VM like a normal PC without any further hardware.
Now what I am concerned about is how I do upgrades and patches. If I have an upgrade requiring the ESXi host to go into maintenance mode, this will leave me with no display. As long as I still have a second computer, I will be able to do the upgrade from there and put ESXi back into normal operation mode again.
But what to do without a second PC? Maybe one could do the update with a script (SSH to the ESXi host, put a script there which shuts down all VMs, enters maintenance mode, upgrades, leaves maintenance mode and reboots). If everything goes well you will be back up in a few minutes. However, as we all know, things tend to go wrong. Therefore I'd rather have a view of the ESXi console when doing the update.
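Something along these lines might work as that script (an untested sketch: the bundle path is a placeholder, and I'm not certain the esxupdate syntax is identical on every 4.x release, so check before relying on it):

```
#!/bin/sh
# Untested sketch: shut down all VMs, enter maintenance mode,
# apply a patch bundle, then return the host to normal operation.
for vmid in $(vim-cmd vmsvc/getallvms | awk 'NR>1 {print $1}'); do
    vim-cmd vmsvc/power.shutdown "$vmid"   # graceful guest shutdown (needs VMware Tools)
done
sleep 120                                  # give the guests time to power off
vim-cmd hostsvc/maintenance_mode_enter
esxupdate --bundle=/vmfs/volumes/datastore1/patch-bundle.zip update   # placeholder path
vim-cmd hostsvc/maintenance_mode_exit
reboot
```

If any guest hangs on shutdown, the script would stall at the maintenance mode step, which is exactly the "things tend to go wrong" case I'm worried about.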
Is there a way to disable VMDP for the graphics adapter from within a running VM? Or maybe I can alter a config file and then reboot without VMDP for graphics?
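I haven't tried it, but the passthrough devices appear as pciPassthruN entries in the VM's .vmx file, so maybe editing it with the VM powered off and flipping those entries off would let it boot once without the GPU, something like:

```
pciPassthru0.present = "FALSE"
pciPassthru1.present = "FALSE"
```

(Assuming device 0 is the GPU and device 1 is its HDMI audio function; the numbering depends on the order the devices were added.)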
I was able to add a second GPU to my ESXi host; that way I was able to keep the ESXi console and still have VMDP working.
With the local console you should be able to apply patches manually through the shell (I haven't done that very often; not sure how it works exactly).
I am glad to hear that some of you were able to get DXVA working. I ran some more tests on my own and had not-so-bad results.
- NVIDIA GT210 did not work (the driver failed to load, as far as I can remember)
- HD5450 did work, but I was not able to get DXVA working. The driver for the HDMI audio also failed to load, though
In order to get DXVA, I guess you have to disable the VMware GPU. How have you done it? Did you simply disable the display, or did you disable the whole GPU in Device Manager?
I'll try to do some more tests tonight; using a VM as an HTPC would be so cool. Which leads me to another related question: has anyone tried to use a TV tuner through VMDP? Any feedback welcome.
I was able to get the HD5450 working, even with the sound, but...
- I was unable to get DXVA working. Every time I tried, I got a green picture and the application crashed (tried with ffdshow DXVA and Flash). I already had the driver up to date (11.5 & 11.6)
- TV through 7MC did not work, because it makes use of DXVA.
- The TV tuner (Nova-T) nearly worked. It detected some channels (108 the first time, 92 the second, instead of all 130), but I was unable to get any picture because of the DXVA bug (sound was fine, though).
- Sometimes the picture flickered and the colors were inverted for a while. It may be related to my cheap cable.
I've been trying to get this working for a couple of sleepless days now.
My motherboard is Intel DP67BG which seems like the only P67 chipset motherboard with VT-d.
I got the passthrough working for SATA controller.
When I pass through a GPU, one of my NVIDIA PCIe cards (a 6200 and an 8400GS), I see the driver load OK, but I have no display available in the screen settings.
The driver works only after I disable the driver for the VMware display.
The NVIDIA control panel doesn't open; it says no NVIDIA GPU is connected.
When I try with a PCIe ATI card (4650HD) I get bluescreens on both XP and Win7.
How do you install the VMware WDDM 1.1 driver? I have the latest ESXi, and the driver from the Tools is WDDM 1.0.
I tried first with the two PCIe cards installed, then I found an old ATI Mach64 PCI card.
I tried sending the ESXi screen to the PCI card and passing the PCIe card to the VM, but always the same result: with my NVIDIA cards the driver works but I have no screens to select (detect only shows the VMware screen, even when that driver is disabled), and the ATI card gives the BSOD.
Anyone got an idea? I saw someone who checked combinations of PCI/PCIe cards, and it looks like a PCI ATI 7000VE with an NVIDIA PCIe card might work.
Anyone with any ideas?
I was also unlucky with NVIDIA cards. I noticed the same problems with an 8400 and a GT210.
For the WDDM, I did as you did; I installed the tools provided by my ESXi (4.1U1, I haven't done any other updates).
I'd be interested in more details about the configuration working with DXVA (ESXi version, driver, ...). I didn't check whether I had WDDM 1.0 or 1.1, though.
Apologies in advance for the huge post...
Here are the details of my current setup:
Motherboard: SuperMicro X8SIA-F
- Intel 3420 chipset (for Lynnfield based Xeons)
- ICH10R SATA controller with 6x ports (Vendor: 8086, Device: 3B34)
- I have this passed through to my Nexenta NAS VM via VT-d
- There are 5x Samsung F4 2TB HDDs connected to this SATA controller
- Two intel USB 2.0 controllers (Vendor: 8086, Devices: 3B34 and 3B3C)
- One of these USB controllers (Device 3B34) is passed through to my Win7 workstation VM via VT-d
- To the passed through controller, I connect a keyboard, mouse, sound card, and webcam
- IPMI v2.0 via Winbond WPCM450 BMC chip
- The BMC chip includes a legacy PCI video core that is identified as a Matrox G200eW (Vendor: 102B, Device: 532)
- This is connected to the on-board VGA port and is also accessible via the remote IPMI console
- This is the video adapter that ESXi is using for the console
Processor: Intel Xeon X3440
RAM: 16GB total (4x4GB) Registered ECC DDR3 (Kingston KVR1066D3Q8R7S/4G)
- XFX Radeon HD 6850 ZDFC (AMD GPU, Vendor: 1002, Device: 6739)
- This is a PCIe 2.0 x16 device
- The audio device shows up as (Vendor: 1002, Device: AA88)
- This card (including its audio device) is passed through to my Win7 workstation VM via VT-d
- Promise SATA300 TX4 PCI (Vendor: 105A, Device: 3D17)
- There is only a single device connected to this card, an OCZ Vertex2 60GB SSD
- I installed ESXi onto the SSD and with the leftover space I created a datastore which is used for the Nexenta and Win7 VMs
ESXi version: 4.1.0, 260247
- NexentaStor [v3.0.4] (NAS - 1x vCPU + 4GB RAM reserved)
- VT-d devices: On-board Intel ICH10R SATA controller
- 5x 2TB HDDs used to create a RAIDZ1 Zpool which is exported to ESXi via NFS and the rest of the network via CIFS
- VM Set to auto-start after ESXi power on
- Windows 7 x64 [v6.1.7601] (Workstation - 4x vCPU+ 2816 MB RAM reserved)
- VT-d devices: AMD 6850 GPU+audio; Intel USB controller
- VMware Tools v8.3.2, build-257589
- Display adapters shown in device manager (Note that both devices are ENABLED):
- AMD Radeon HD 6800 Series, Driver v8.850.0.0 dated 4/19/2011
- VMware SVGA 3D (Microsoft Corporation - WDDM), Driver v126.96.36.199 dated 3/1/2010
- For initial setup, I left display output enabled for both the VMware adapter (accessed via remote vSphere) as well as the physical displays connected to the AMD 6850
- After I was confident that the 6850 was working reliably, including after rebooting the Win7 VM as well as the entire ESXi system, I right-clicked on the desktop, selected "Screen Resolution", and simply disabled screen output on the VMware adapter. That eliminates the problem of the mouse disappearing off of the desktop on the physical monitors and onto the virtual VMware display. If I ever need to access the console via vSphere, I simply re-enable that display output, but this is rarely needed as RDP works most of the time.
- Note that I am not disabling the VMware SVGA 3D adapter in device manager, simply disabling the display output in "Control Panel\Appearance and Personalization\Display\Screen Resolution".
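As an aside on the storage layout described above: the pool setup on the Nexenta side boils down to something like the following sketch (the pool/filesystem names and the Solaris cXtYdZ disk IDs are examples, not my actual ones):

```
# Create a single RAIDZ1 pool from the five 2 TB disks
zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
# Filesystem for VM storage, exported over NFS to ESXi
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore
# CIFS share for the rest of the network
zfs create tank/media
zfs set sharesmb=on tank/media
```

The Nexenta GUI does all of this for you, but it helps to know what it maps to underneath.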
Suggestions for anyone encountering BSODs when booting a Win7 VM which uses a VT-d GPU:
- Double check the amount of configured RAM on your VM. It has been stated by others in this thread that anything over 2GB can cause problems. I was able to push mine up to 2816 MB through trial and error; this configuration works for me, YMMV. My suggestion is to start at 1.5GB and get the GPU stable before trying to push the VM's RAM up.
- Remember to ensure that 100% of your configured VM RAM is reserved for any VM which takes VT-d devices.
- Disable sleep mode in Control Panel power management. Whenever my Win7 VM went to sleep, I couldn't wake it back up via the USB keyboard or mouse (note my USB controller is passed through via VT-d). The only thing that worked for me was to connect via the remote vSphere console and click on the black screen. This would wake up the VM so I could log back in via the physical console. This is worth trying if you ever encounter a black screen on your previously working VT-d GPU display.
- Set cpuid.coresPerSocket per VMware KB1010184 if you're having trouble getting all of your cores to show up in the Win7 VM (not really specific to passing through GPUs, but it is a tweak I had to do for my setup).
- Don't rely solely on Flash to determine whether ESXi+hardware+drivers are playing nicely together. Flash has a history of issues when it comes to GPU acceleration. My validation steps consisted of many VM reboot cycles to ensure I wasn't going to encounter any more BSODs during startup, followed by full screen video playback in various players (VLC, XBMC, MPC-HC, Media Player), and then testing with a number of games.
- Note that you might not see your VM's boot sequence on the VT-d GPU display. On my system, when the VM is rebooting, the physical screens are black until the Win7 login screen appears. The actual boot sequence (VM POST, Win7 loading screen) is visible via the vSphere remote console. I guess the VMware display adapter defaults to being the primary. There might be a way to change this in the VM BIOS, but I haven't bothered to do so.
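The cpuid.coresPerSocket tweak from KB1010184 mentioned above is just a one-line .vmx addition; the value "4" below is an example for presenting four vCPUs as a single quad-core socket (Win7 licensing limits the number of sockets, not cores):

```
cpuid.coresPerSocket = "4"
```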
Thanks for the very thorough post, somedude1234.
Some things I learned from your post regarding software (as the hardware is obviously different):
1.) Is there a way you can try this setup with the latest ESXi 4.1U1, which is version 4.1.0, 348481?
I will try and test on the version you are using; maybe the GPU passthrough got broken somehow in the newer release...
2.) The WDDM version of your Tools is also lower than the latest; I will try that as well.
3.) Did you install Windows 7 with the GPU passthrough in place, or did you only add it after the OS was installed?
As I did already try a PCI video card for the ESXi console, I still hope I will be luckier with a different combination, so I might search for a Matrox G200 PCI, although I could only see the G450 available (on eBay). The Radeon 6850 is still pretty expensive for test purposes; I would buy it in a second if I was completely sure it would work. I hate good hardware just gathering dust in my pile of unusable good things...
A small tip I can give regarding your storage with NexentaStor: I also tried it first, but currently it doesn't support the vmxnet3s network driver, which limits the network interface to 1Gb (about 128 MB/s). Which is nice... but...
I ended up installing OpenIndiana (a Solaris variant) with napp-it (the GUI for the storage), which supports the vmxnet3s 10Gb card (from the VMware Tools).
I was pleasantly surprised: from a virtual machine with its disk on NFS storage served by the OpenIndiana VM, I got about 430 MB/s read and 270 MB/s write. I have RAIDZ2 with 6 drives, but even with fewer drives the 1Gb Ethernet limit would have been reached sooner.
When Nexenta supports this driver I will probably move back to it, as the GUI and CLI are much more impressive.
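For reference, switching a VM's NIC type is a one-line change in the .vmx (I'm assuming ethernet0 is the adapter in question; adjust the index if not), and the guest then needs the matching VMware Tools driver:

```
ethernet0.virtualDev = "vmxnet3"
```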
1) I built the system back in the Dec/Jan timeframe and haven't updated the software since it was put into production. I've been contemplating an update to ESXi, I might give it a shot over the long holiday weekend.
2) Yes, I'm using the VMware tools version that is included with my slightly down-rev version of ESXi, which likely explains the VMware SVGA 3D driver also being slightly down-rev. I believe I misstated in an earlier post that you need WDDM 1.1 drivers for all of your mis-matched video cards in order to work correctly in Windows 7. I based this statement on the WDDM wiki entry, specifically, the following is listed as a "new feature" with WDDM 1.1:
- Support multiple drivers in a multi-adapter and multi-monitor setup
In my case, it appears that my AMD 6850 has a WDDM 1.1 driver, while the VMware SVGA 3D driver is WDDM 1.0.
3) I created the VM, installed Win7, ran windows updates, and then shut the VM down to add the GPU via PCIe passthrough.
Note that I borrowed an old AMD RadeonHD 3450 for my "proof of concept" testing before ordering the 6850. I was successful with both the 3450 as well as the 6850. I would expect that any of the ATI/AMD PCIe cards between the 3450 and the current 6800 series should work fine, but you never know unless you test a specific combination of hardware.
Thanks for the tip on OpenIndiana support for VMXNET3. I was quite disappointed when I realized that I wouldn't be able to utilize this driver during my build (all of my other VMs are running with this driver; it's too bad the NAS is the one that doesn't work). At the time of my build, I considered OpenIndiana + napp-it for the NAS, but I felt like OpenIndiana was a bit "too new" at the time for my data. You've inspired me to build an OpenIndiana VM and do some A/B testing with the same array to see how much of a difference it makes. In theory, I can just re-assign the SATA controller from the Nexenta VM over to the OpenIndiana VM.
I'm happy enough with the current performance, but I'm always willing to try out something that will provide an increase.
Have you tried DXVA with something other than XBMC? My configuration is quite close to yours, and I am pretty sure that it should also be able to run DXVA.
I was unable to get it working with: Flash, FFDSHOW, W7 default WMF codecs.
Here is my configuration:
- ESXi 4.1
- SuperMicro mainboard (can't remember the ref; I'll check tonight)
- HD 5450 (VMDP)
- Highpoint RocketRaid 3520 (VMDP)
- 24 GB of RAM
- Hauppauge Nova-T (VMDP, but still testing)
I'll maybe give it a try tonight with a fresh VM. Having a VM as an HTPC would be awesome.