GBowman's Posts

Yes, I have tried dual GPU in a Crossfire setup with my X58 board.  Crossfire worked perfectly on two 5770s with a bridge connector; however, when I tried two 6770s that don't have the bridge connector I could not enable Crossfire.  Both cards were detected in Windows, but I did not try a monitor on both at the same time (as I was testing Crossfire).  My aim was to use two cards for two separate HTPC machines, but then get a decent gaming machine by shutting both down and starting a single VM with the combined resources of both.

The 6770s do work on the same platform when the host is booted straight to Windows, so the lack of a bridge connector is not the issue.  I believe it's the ATI drivers, which will allow Crossfire without a bridge only on machines where there are no other GPUs, and the VM will always have the VMware SVGA GPU.
I have to say all these problems you are having are exactly the start I had to this challenge.  Nvidia cards will get as far as loading up in Device Manager but will always have the Code 10 yellow exclamation mark, as far as my experience goes.  ATI cards need some fine tuning of the BIOS settings and will still not work in all scenarios; I have two machines with multiple x16 slots where only one will work with VGA passthrough.  I then have an X58 board which has had five cards passed through no problem (four x16 slots and an x1).

I am at 4.1 for my main system, over two years down and going strong as an HTPC, Asterisk, VPN, sandbox, telecommuting, IPTV server, NAS and home automation system.  I have tried all the others on my test machines and have had the same issues as others with USB on 5.1, so I am at 5.0 for those.  Thinking of adding a router VM as well.

I believe it is easier to add the network driver to a running machine, so if possible fit a supported NIC for the install; then, once ESXi is running, you can add the driver for the Realtek and then remove (or pass through) the Intel NIC.  Don't upgrade to a hardware version above 8 if using 5.5, as versions 9 and 10 require the (not free) web interface.
I have tried 3xxx, 5xxx and 6xxx cards without DXVA success, but I have never tried a 7xxx card.  I have a GA-X58A-UD3R mobo and have had it running five graphics cards at once without issue; one was in an x1 slot.  With this number of cards, though, I ran out of USB controllers to pass through even with the USB3.  I never tried USB passthrough, only PCI passthrough of the physical USB controller.  This was with version 5.0.
It's trial and error, I'm afraid.  All of my onboard USB devices are passed through, and yours looks to be the same as mine: each USB 2.0 controller has three USB hubs.  I pass through the USB 2.0 controller and three of the hubs to each VM using USB.  If you are running out of PCI passthrough slots to the VM you can pass through fewer; as you say, each hub is two ports, but I have found you must pass through the 2.0 adapter to get it running at 2.0 speeds.
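For anyone wanting to see what this looks like on disk: passing a controller plus its hubs through ends up as one pciPassthru entry per device in the VM's .vmx file.  This is only a rough sketch — the PCI addresses below are placeholders, and the real values come from your host's passthrough configuration list:

```
# Hypothetical .vmx fragment — one pciPassthru entry per passed-through device.
# The id values are placeholders; use the addresses from your own host.
pciPassthru0.present = "TRUE"
pciPassthru0.id = "00:1d.7"    # USB 2.0 (EHCI) controller
pciPassthru1.present = "TRUE"
pciPassthru1.id = "00:1d.0"    # USB hub 1
pciPassthru2.present = "TRUE"
pciPassthru2.id = "00:1d.1"    # USB hub 2
pciPassthru3.present = "TRUE"
pciPassthru3.id = "00:1d.2"    # USB hub 3
```

Each entry counts against the VM's PCI passthrough limit, which is why dropping a hub or two frees up slots as described above.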
I have all my tuners passed through as PCI/PCIe devices, not as USB.  Although my older PCI cards do consist of a USB controller and then two USB devices, I have never tried to pass them through as USB devices, instead passing the whole controller through to the guest VM.

Have you tried DPC Latency Checker?
OK, just a follow-up on my Crossfire success.  It appears it will have no usable merit in this instance, since it will not work without a bridge connector.  I can get two 6770 cards that do not have the option of a bridge (no pins on the PCB) to work when booted natively into Windows, but not when using the same two cards in a VM on the same machine.  I believe the problem is that CCC will not enable software Crossfire on a system with another type of GPU present, so the VMware GPU device has thwarted me again.  I really wish there was a way to remove this from the virtual hardware.

So my plan to use two cards with separate machines some of the time, and then bring them together along with the CPU and RAM resources into one machine, has failed.  I can only get hardware (bridged/linked) Crossfire to work in a VM.  I have not tried using two cards with a bridge connector attached to two separate VMs, as I don't want to risk possibly damaging my hardware.  For most of us there will be no point in using two PCIe slots for two budget cards on one VM when we can use the same two slots for good cards serving two different VMs.

On a side note not relating to ESXi: Crossfire with no bridge is possible only when the primary card does not have the connector.  This differs from AMD's official list, which says both must have the connector or both must not.  If you have one with and one without, this will work as well.
I just tested my HD5450 in an x1 slot; it works perfectly.  This is on my X58 UD3R board, but it proves that at least on some systems you can use x1 slots for GPUs.  Performance did not appear to be down either, but then the 5450 is not a great performer.  I used a passive PCI x1-to-x16 slot converter, which lets you do this without having to butcher a slot; the card was working at x1 speeds but is a full x16 card.

I would suggest your problem is the card.  Have you tried the x1 card in the x16 slot, just to rule it out as a card issue?
I had a similar issue: it would not lock up the ESXi host, but it would only pass through the card on the first power-on of the machine and not on subsequent power-ups.  I found an option in a white paper about RAID cards, tried it, and it solved the problem for me.  Add the following to your configuration file, obviously with the correct number for your passthrough device:

pciPassthru0.msiEnabled = false

I now add this line as a matter of course to all GPU passthroughs and most others.  It will be deleted if you remove and re-add the hardware, so watch out for that.

On another note, I have tested Crossfire on my X58 board and it works!  I have 2x 5770s working in CF, but with a bridge fitted, as the drivers would only enable CF with the bridge in place.  I am going to borrow another 6770, as the one I have has no bridge connector, so it should allow software Crossfire.  If this works (I see no reason why it would not) then I can have two VMs each with two cores and 2 GB when needed, and change to one VM with twice the power for gaming when needed.
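To illustrate where that line sits, here is a sketch of the relevant part of a .vmx file — the device and vendor IDs are placeholders, and pciPassthru0 is assumed to be the GPU; match the number to whichever passthrough entry is your card:

```
# Hypothetical .vmx excerpt — pciPassthru0 assumed to be the passed-through GPU.
pciPassthru0.present = "TRUE"
pciPassthru0.deviceId = "0x68b8"     # placeholder device ID
pciPassthru0.vendorId = "0x1002"     # placeholder vendor ID
pciPassthru0.msiEnabled = "FALSE"    # disable MSI so passthrough survives VM restarts
```

Remember the caveat above: editing the device via the client removes and re-adds the pciPassthru entry, which wipes the msiEnabled line, so it has to be re-added afterwards.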
Yes, you're right.  I have removed my GPU and the results are the same.  I still have the option to enable 3D support, and I still get the same WEI results.  All the applications I tested still work; DxDiag reports hardware acceleration and DirectX 11 support with no GPU in the system at all.

I had read so many articles that I was convinced the Enable 3D checkbox would only be present if the hardware supported it.  I also read that DxDiag would only report hardware acceleration if that was the case, but it is not so (tested in Windows 7 and 8).  Software 3D emulation is really very good with ESXi 5.0, hardware version 8.
From what I can tell, I have vSGA working in a test server with ESXi 5.0.  I get full Aero in Windows 7 and a WEI score of 3.0 for graphics (2.7 in Windows 8, where the vSGA driver is not used and a generic Microsoft driver is used instead).  I can run a few 3D (DirectX, not OpenGL) apps from the VMware console, but the biggest advantage is that Windows 8 uses the GPU to help remote desktop, so with Windows 8 and vSGA there is almost no need for VMware View or Microsoft RemoteFX.

I believe only Nvidia GPUs work; it certainly did not work for me when I tried an ATI GPU (HD5770).  You must have them installed during the ESXi install; I had a 460 GTX installed and have now downgraded to a 450 GTS for power reasons.  I have read about the VIB and, as far as I am aware, it's installed automatically at install time if the hardware is there.  I can confirm VMs with a vSGA adapter and DxDiag reporting acceleration, although there is not a lot of info around on the net, so I think its implementation is still not up to full release standard.

All that said, a Windows 8 VM with a passed-through ATI card is better still: full GPU power to the remote desktop acceleration.  I can run the Unigine Heaven benchmark over RDP at 30+ FPS.

Regarding passthrough DXVA, I can only get hardware acceleration to work when using ATI's own codecs, and I can't confirm that is DXVA; with FFDShow, DivX and LAV specifically using DXVA, it fails with green screens or no video.  I have only tried 5xxx and 6xxx cards, though.  It's not always possible to select ATI's codecs, and even ATI screwed up when they released a new driver version which disabled them, so I am stuck with version 11.4 for the moment to keep using good codecs.  Annoyingly, DXVA does work with vSGA!
Another thing, regarding GPU errors in the event log: I get these as well, but they relate to the vSGA adapter.  Remember, the VM does not know you have a second GPU, so it still uses its own video adapter; all my VMs get an error telling me I have too little video RAM etc., but I ignore it as it will not affect the machine's operation.

If, however, you want to remove the error about 3D capabilities, you must install an Nvidia card and then not pass it through.  I believe the card must be installed during the install of ESXi for it to be picked up and used for vSGA.
You have to be really careful which codecs to use, and don't use DXVA, it's broken.  I use Shark007 codecs and select LAV with DXVA off.  Obviously with DXVA off you need more CPU; I have found a minimum of four are needed for 1080p.  Blu-ray is something I tried and could not get to work.  WinDVD and PowerDVD will not work in a VM.  I used AnyDVD HD to overcome HDCP, but VLC could not play back Blu-rays at a decent rate.  DVD playback under a VM also fails for me: I always get a copy protection error, and even with AnyDVD HD the video is jerky, but VLC will play DVDs OK, so I use that and don't watch Blu-rays on my VM HTPC.  MediaPortal is going to support non-copy-protected Blu-rays in its next release, but you will need AnyDVD HD or DVDFab running in the background to play copy-protected material.

You must turn off hardware acceleration in Flash to get rid of the green screen.  In Windows 7 you can right-click on the Flash box to bring up the settings, but in Windows 8 you must go into Internet Explorer options and turn off hardware acceleration for all Explorer functions.  I can't use HDMI sound as it's just too glitchy, and to get the onboard sound passed through and sounding acceptable I had to reserve some CPU; I have 2000 MHz reserved and this cleared the sound up.  I have a device to mix S/PDIF and DVI into HDMI, so I will use this when I get time, but they are expensive new; it's a shame modern GPUs don't have the S/PDIF input like the old Nvidia ones have.
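For reference, a CPU reservation like the 2000 MHz mentioned above is normally set in the vSphere client (Edit Settings → Resources → CPU), and it lands in the VM's .vmx file roughly like this — a sketch only; exact keys can vary by ESXi version:

```
# Hypothetical .vmx fragment — reserve 2000 MHz of CPU for the VM
# to keep passed-through audio glitch-free.
sched.cpu.min = "2000"          # reservation in MHz
sched.cpu.units = "mhz"
sched.cpu.shares = "normal"
```

The reservation guarantees the VM that much CPU even when the host is busy, which is what stops the audio stuttering described above.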
Other than a few codecs and hardware acceleration not working, my VMs are perfectly smooth and play all the games and applications I have tried.  You must choose to display only on the ATI monitor, and not the VMware one, if you want Aero.

I have to say I have not had anything like the troubles some have.  Once I found my working hardware I have installed and reinstalled dozens of times; Windows just works no matter what order I do things.  Install the OS, VMware Tools and CCC (although I am on an old version, about 11.4) and it all just works.  Windows 8 has behaved exactly the same on all my setups.

X58 is the king, though.  I have a UD3R board now with 4x PCIe x16 slots and they all work for passthrough.  Currently I have an Nvidia card for vSGA in slot 1 and three ATI cards for passthrough.  Going to give Crossfire a go just to see if it works!
I think hardware version 9 came with ESXi 5.1; you cannot use that version with ESXi 5.0, only hardware version 8.

I have made an interesting discovery that might have been common knowledge all along, but is not mentioned much that I can find.  Windows 8 offloads remote desktop encoding to the GPU, so I can now run full DirectX and 1080p over remote desktop using a passed-through Radeon card and a Windows 8 guest.  This is not RemoteFX or VMware View, where there is a vGPU being used.  I tried RemoteFX with Windows 8 under Server 2012 and, because you only get a portion of the physical card, its performance is poor.  With ESXi 5.0 I can have the full power of my GPU over RDP (less about 10%).

That said, I have only been able to use Windows 8 Enterprise for my testing, so I hope the same goes for Pro.  Of course this works for physical machines running Windows 8 as well, but that's just not as much fun!  I am struggling to find out why there is not more information on this massive step up for RDP on the net, only RemoteFX.

Now, if VMware start to officially support GPU passthrough and can give USB redirection via VMware Tools or a separate application (third-party ones already exist), then this could be the answer to all the 'how can I pass more GPU RAM or power to each individual virtual machine?' questions — albeit that you are limited to one VM per PCIe slot.
I get the green screen as well, using some codecs and Flash Player in IE.  Disabling hardware acceleration in Flash cures that problem, but for the green screen on codecs I just find a codec that works.  The best codec seems to be the ATI MPEG Decoder that you get with the extended packages (Hydravision or the AVIVO package); this works with H.264 despite the confusing name.  This, for me, gave the lowest CPU usage and perfect video.

I am now looking at passing a card through to the new Hyper-V Server 2012, a free version of Hyper-V that hopefully will support RemoteFX on its guest VMs.  I will have to upgrade to 5.0 for nested VMs, though.
PCI passthrough is not a virtual 3D adapter; it is a passed-through physical one.  It's totally different and unsupported by VMware.

If you want to remove the cnic_register hang, then revert back to ESXi 4.1.  I am still using this and it gets all the way to the ESXi console before trying to pass the GPU through.
I had this problem.  To overcome it you have to add pciPassthru(n).msiEnabled and set it to 'False' in the configuration file.  I do it only for the PCI passthrough device that is the GPU; all others I leave at the default, which is 'True'.

In reply to earlier posts about PCI passthrough rather than PCIe: I have the same issues with my TV tuner cards; they pass through, but I cannot get any signal.  Other cards I have tried are working: a Digium telephony card is currently stable running my home phones, and a SoundBlaster Live! 24 is in my test PC as the onboard sound does not work.
My i7 820 has no integrated graphics, as said, and my Q6700 has only the motherboard graphics of the Intel Q45 chipset.  These are not disabled, but they do not appear in the list of devices that could be passed through.

I will try to run the Linux diagnostic on my Q45/Q6700 machine in the next few days.  I can't shut down the i7, as it is the TV for the whole family and also now has a Digium card passed through for the telephones.  Got to wait until I'm home alone!
Which player software are you using?  How have you got the drive connected?
My i7 870 on a Q57M-SH2 is a cheap Intel build and M-ATX, all in an HTPC case and running for more than 12 months without fault.  Uptime of over 100 days now, and it only got turned off before that to fit more RAM.

I would not compare VT-d on ESXi with Xen, since Xen can do GPU passthrough on systems with no VT-d; they must use a different way of emulating the GPU into the guest OS.