<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic RTX 4090 GPU passthru, esxi 8.0 in ESXi Discussions</title>
    <link>https://communities.vmware.com/t5/ESXi-Discussions/RTX-4090-GPU-passthru-esxi-8-0/m-p/2936330#M284459</link>
    <description>&lt;P&gt;I'm working on passing an RTX 4090 GPU through to a VM on an Intel 13900K system.&lt;/P&gt;&lt;P&gt;Things I did first: enabled VT-d and &amp;gt;4G addressing in the BIOS (no ACS option; it appeared enabled by default).&lt;/P&gt;&lt;P&gt;No problem turning on passthru and assigning it to a VM. I also pass through some NVMe drives and a Renesas USB controller, which all seem to work. EFI firmware mode. Windows 11. 48 GB RAM (out of 96 on the host). 8 cores (out of 24).&lt;/P&gt;&lt;P&gt;The VM won't power on without a 64bitMMIO size of at least 64 GB (as expected for a 24 GB card).&lt;/P&gt;&lt;P&gt;The VM then powers on and works in Windows. BUT, the first time after each host boot, the VM will spontaneously die (power off) within a few seconds of reaching the login screen.&amp;nbsp; The VM log gives the message "attempted to map 65000 pages to host memory" and recommends setting pciHole.start to 1536.&lt;/P&gt;&lt;P&gt;If I do that, the VM doesn't power off as above, BUT the Windows OS inside it dies at the same time and the system reboots (successfully).&lt;/P&gt;&lt;P&gt;With or without pciHole.start, the VM can then be restarted and seems to work fine.&amp;nbsp; I have seen it blow up spontaneously once in a few days of testing; otherwise it's rock solid.&amp;nbsp; Only the first power-up after booting the physical host seems affected.&lt;/P&gt;&lt;P&gt;If I turn "resize BAR" off in the BIOS, the pciHole address changes, but otherwise behavior is as above.&amp;nbsp; ReBAR works in the VM OS if enabled in the BIOS.&lt;/P&gt;&lt;P&gt;Several other VMs on the same host, using 2070 GPUs, don't show this behavior (and also don't require 64bitMMIO).&lt;/P&gt;&lt;P&gt;The same Windows OS with the same 4090 boots on the same hardware (without ESXi) and works fine.&lt;/P&gt;&lt;P&gt;Any ideas?&lt;/P&gt;&lt;P&gt;Thanks for thinking about it.&lt;/P&gt;</description>
    <pubDate>Wed, 02 Nov 2022 00:18:58 GMT</pubDate>
    <dc:creator>Memnarch</dc:creator>
    <dc:date>2022-11-02T00:18:58Z</dc:date>
    <item>
      <title>RTX 4090 GPU passthru, esxi 8.0</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/RTX-4090-GPU-passthru-esxi-8-0/m-p/2936330#M284459</link>
      <description>&lt;P&gt;I'm working on passing an RTX 4090 GPU through to a VM on an Intel 13900K system.&lt;/P&gt;&lt;P&gt;Things I did first: enabled VT-d and &amp;gt;4G addressing in the BIOS (no ACS option; it appeared enabled by default).&lt;/P&gt;&lt;P&gt;No problem turning on passthru and assigning it to a VM. I also pass through some NVMe drives and a Renesas USB controller, which all seem to work. EFI firmware mode. Windows 11. 48 GB RAM (out of 96 on the host). 8 cores (out of 24).&lt;/P&gt;&lt;P&gt;The VM won't power on without a 64bitMMIO size of at least 64 GB (as expected for a 24 GB card).&lt;/P&gt;&lt;P&gt;The VM then powers on and works in Windows. BUT, the first time after each host boot, the VM will spontaneously die (power off) within a few seconds of reaching the login screen.&amp;nbsp; The VM log gives the message "attempted to map 65000 pages to host memory" and recommends setting pciHole.start to 1536.&lt;/P&gt;&lt;P&gt;If I do that, the VM doesn't power off as above, BUT the Windows OS inside it dies at the same time and the system reboots (successfully).&lt;/P&gt;&lt;P&gt;With or without pciHole.start, the VM can then be restarted and seems to work fine.&amp;nbsp; I have seen it blow up spontaneously once in a few days of testing; otherwise it's rock solid.&amp;nbsp; Only the first power-up after booting the physical host seems affected.&lt;/P&gt;&lt;P&gt;If I turn "resize BAR" off in the BIOS, the pciHole address changes, but otherwise behavior is as above.&amp;nbsp; ReBAR works in the VM OS if enabled in the BIOS.&lt;/P&gt;&lt;P&gt;Several other VMs on the same host, using 2070 GPUs, don't show this behavior (and also don't require 64bitMMIO).&lt;/P&gt;&lt;P&gt;The same Windows OS with the same 4090 boots on the same hardware (without ESXi) and works fine.&lt;/P&gt;&lt;P&gt;Any ideas?&lt;/P&gt;&lt;P&gt;Thanks for thinking about it.&lt;/P&gt;</description>
      <pubDate>Wed, 02 Nov 2022 00:18:58 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/RTX-4090-GPU-passthru-esxi-8-0/m-p/2936330#M284459</guid>
      <dc:creator>Memnarch</dc:creator>
      <dc:date>2022-11-02T00:18:58Z</dc:date>
    </item>
    <item>
      <title>Re: RTX 4090 GPU passthru, esxi 8.0</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/RTX-4090-GPU-passthru-esxi-8-0/m-p/2936714#M284489</link>
      <description>&lt;P&gt;First of all, congrats on getting your hands on this video card! However, the first hurdle is making it past the VMware HCL, and if you manage to get this setup working on ESXi 8, then your next hurdle is FPS and mouse response in gaming.&lt;/P&gt;&lt;P&gt;For example, it's going to make you an easy target in a first-person shooter scenario.&lt;/P&gt;&lt;P&gt;-r&amp;nbsp; &amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 03 Nov 2022 19:49:54 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/RTX-4090-GPU-passthru-esxi-8-0/m-p/2936714#M284489</guid>
      <dc:creator>RobBenedit</dc:creator>
      <dc:date>2022-11-03T19:49:54Z</dc:date>
    </item>
    <item>
      <title>Re: RTX 4090 GPU passthru, esxi 8.0</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/RTX-4090-GPU-passthru-esxi-8-0/m-p/2937474#M284610</link>
      <description>&lt;P&gt;So, after considerable debugging...&lt;/P&gt;&lt;P&gt;TL;DR&lt;/P&gt;&lt;P&gt;I think it's a bug in how ESXi releases the console-claimed GPU from the console to a VM, and there is a workaround.&lt;/P&gt;&lt;P&gt;Turn off the boot display with:&lt;/P&gt;&lt;P&gt;esxcli system settings kernel set -s vga -v FALSE&lt;/P&gt;&lt;P&gt;(Undo with: esxcli system settings kernel set -s vga -v TRUE)&lt;/P&gt;&lt;P&gt;The VM is now stable, including on first boot.&lt;/P&gt;&lt;P&gt;Long version:&lt;/P&gt;&lt;P&gt;ESXi had a "bug" in 7.0 where the display output claimed for the hypervisor console itself had to be manually re-enabled for passthru on every reboot (in 6.7, this wasn't needed).&amp;nbsp; This was reportedly later changed back to the 6.7 behavior.&amp;nbsp; The above command stops ESXi from claiming any GPU for the console at all, and during the early phases of 7.0 it therefore removed the need to manually re-enable one GPU for passthru on every boot. The need for this command was reportedly removed when the behavior was changed back to the 6.7 style, where passthru remained enabled on reboot and the VM would simply take over the GPU automatically on its first boot.&lt;/P&gt;&lt;P&gt;Using this command appears to completely fix the problem described in the initial post, where the VM spontaneously combusts on its first boot after a host boot; nothing else I tried does. It's probably not a coincidence that the 4090 is in the primary graphics slot, and ESXi does in fact claim it for the console unless I force it not to. (I never had to re-enable passthrough on boot, so I didn't see any need for this command until testing whether it solved the problematic behavior.)&lt;/P&gt;&lt;P&gt;Source:&amp;nbsp;&lt;A href="https://williamlam.com/2020/06/passthrough-of-integrated-gpu-igpu-for-standard-intel-nuc.html" target="_blank"&gt;https://williamlam.com/2020/06/passthrough-of-integrated-gpu-igpu-for-standard-intel-nuc.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Beware: the setting persists across boots. It is reversible, but you can run into trouble if your host loses network connectivity for some reason: you will have no console AND no network access, and this usually requires reinstalling ESXi to fix. In particular, the combination of "no console" and an off-HCL network driver is a very dicey idea, since those drivers often require resetting network configurations and host services for minor changes.&amp;nbsp; You have been warned.&lt;/P&gt;</description>
      <pubDate>Wed, 09 Nov 2022 15:10:25 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/RTX-4090-GPU-passthru-esxi-8-0/m-p/2937474#M284610</guid>
      <dc:creator>Memnarch</dc:creator>
      <dc:date>2022-11-09T15:10:25Z</dc:date>
    </item>
    <item>
      <title>Re: RTX 4090 GPU passthru, esxi 8.0</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/RTX-4090-GPU-passthru-esxi-8-0/m-p/2949209#M286063</link>
      <description>&lt;P&gt;Did you run into error 43 in the VM?&amp;nbsp; I've been trying to pass through a 3060 for a few days on ESXi 8, no dice.&amp;nbsp; Do you think disabling the boot display will resolve this?&amp;nbsp; Tried setting&amp;nbsp;&lt;SPAN&gt;hypervisor.cpuid.v0 = "FALSE", didn't work.&amp;nbsp; Tried editing&amp;nbsp;/etc/vmware/passthru.map from bridge to link, same issue.&amp;nbsp; Any help would be appreciated!&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 17 Jan 2023 20:19:31 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/RTX-4090-GPU-passthru-esxi-8-0/m-p/2949209#M286063</guid>
      <dc:creator>fatbob01</dc:creator>
      <dc:date>2023-01-17T20:19:31Z</dc:date>
    </item>
    <item>
      <title>Re: RTX 4090 GPU passthru, esxi 8.0</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/RTX-4090-GPU-passthru-esxi-8-0/m-p/2976288#M288908</link>
      <description>&lt;P&gt;Did you ever get your setup working? I can't get direct access working at all; I just get a second VGA adapter, but for what I'm doing I need direct access to the CUDA cores.&lt;/P&gt;&lt;P&gt;Depending on how I install the driver, when I run "nvidia-smi", I get either:&lt;/P&gt;&lt;P&gt;NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.&lt;/P&gt;&lt;P&gt;or&lt;/P&gt;&lt;P&gt;No devices were found&lt;/P&gt;&lt;P&gt;____________________________________________________________________________________________&lt;/P&gt;&lt;P&gt;Here's what my devices look like:&lt;/P&gt;&lt;P&gt;*-display&lt;BR /&gt;description: &lt;STRONG&gt;VGA compatible controller&lt;/STRONG&gt;&lt;BR /&gt;product: SVGA II Adapter&lt;BR /&gt;vendor: VMware&lt;BR /&gt;physical id: f&lt;BR /&gt;bus info: pci@0000:00:0f.0&lt;BR /&gt;logical name: /dev/fb0&lt;BR /&gt;version: 00&lt;BR /&gt;width: 32 bits&lt;BR /&gt;clock: 33MHz&lt;BR /&gt;capabilities: vga_controller bus_master cap_list rom fb&lt;BR /&gt;configuration: depth=32 driver=vmwgfx latency=64 resolution=1176,885&lt;BR /&gt;resources: irq:16 ioport:840(size=16) memory:f0000000-f7ffffff memory:ff000000-ff7fffff memory:c0000-dffff&lt;BR /&gt;*-display&lt;BR /&gt;description: &lt;STRONG&gt;VGA compatible controller&lt;/STRONG&gt;&lt;BR /&gt;product: &lt;STRONG&gt;NVIDIA Corporation&lt;/STRONG&gt;&lt;BR /&gt;vendor: &lt;STRONG&gt;NVIDIA Corporation&lt;/STRONG&gt;&lt;BR /&gt;physical id: e&lt;BR /&gt;bus info: pci@0000:02:05.0&lt;BR /&gt;version: a1&lt;BR /&gt;width: 64 bits&lt;BR /&gt;clock: 33MHz&lt;BR /&gt;capabilities: pm msi pciexpress vga_controller bus_master cap_list&lt;BR /&gt;configuration: driver=nvidia latency=64&lt;BR /&gt;resources: irq:18 memory:fd000000-fdffffff memory:c0000000-cfffffff memory:d0000000-d1ffffff ioport:a80(size=128)&lt;/P&gt;&lt;P&gt;____________________________________________________________________________________________&lt;/P&gt;&lt;P&gt;Here's my build guide so far:&lt;/P&gt;&lt;P&gt;###BIOS&lt;BR /&gt;VT-d (Enabled)&lt;BR /&gt;SR-IOV (Enabled)&lt;/P&gt;&lt;P&gt;###Hypervisor Video Turned Off&lt;BR /&gt;esxcli system settings kernel set -s vga -v FALSE&lt;BR /&gt;(Undo with: esxcli system settings kernel set -s vga -v TRUE)&lt;/P&gt;&lt;P&gt;###System Settings&lt;BR /&gt;30 GB (Reserved)&lt;BR /&gt;2 CPUs (x1 Core)&lt;/P&gt;&lt;P&gt;###Assigned PCI Device to VM&lt;BR /&gt;RTX 4090&lt;BR /&gt;Audio Device&lt;/P&gt;&lt;P&gt;###VM Options&lt;BR /&gt;pciPassthru.use64bitMMIO = TRUE&lt;BR /&gt;pciPassthru.64bitMMIOSizeGB = 64&lt;BR /&gt;hypervisor.cpuid.v0 = FALSE&lt;BR /&gt;pciHole.start = 1536&lt;BR /&gt;pciHole.end = 2200&lt;/P&gt;&lt;P&gt;###Minimal Install + No Drivers&lt;BR /&gt;sudo apt-get install openssh-server&lt;BR /&gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get upgrade&lt;/P&gt;&lt;P&gt;###Install Nvidia Requirements&lt;BR /&gt;sudo apt install build-essential&lt;BR /&gt;sudo apt install pkg-config libglvnd-dev&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;###Shutdown GUI&lt;BR /&gt;sudo systemctl set-default multi-user.target&lt;BR /&gt;sudo telinit 3&lt;BR /&gt;sudo reboot&lt;/P&gt;&lt;P&gt;###Installed Nvidia Drivers&lt;BR /&gt;sudo apt install nvidia-driver-525 nvidia-dkms-525&lt;/P&gt;&lt;P&gt;or&lt;BR /&gt;sudo sh NVIDIA-Linux-x86_64-535.54.03.run&lt;/P&gt;&lt;P&gt;____________________________________________________________________________________________&lt;/P&gt;&lt;P&gt;I've tried several combinations of the things above to get it to detect, but VMware absolutely refuses to pass the RTX 4090 through directly. Need help; I've been working on this for days now.&amp;nbsp; Thanks for any tips/information.&lt;/P&gt;</description>
      <pubDate>Sat, 08 Jul 2023 02:43:14 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/RTX-4090-GPU-passthru-esxi-8-0/m-p/2976288#M288908</guid>
      <dc:creator>Jukari</dc:creator>
      <dc:date>2023-07-08T02:43:14Z</dc:date>
    </item>
  </channel>
</rss>

