Hi All,
I'm playing with ESXi in a home lab environment - this is my first experiment with ESXi, and I think I'm doing OK; I have some virtualisation experience with Hyper-V, but I'm still a novice home user! ESXi 6.7 Free is running from a USB drive on an AsRock H61M-DGS with 16GB RAM and a couple of 750GB hard drives on the motherboard, plus a 500GB WD drive attached to a PCI-E Syba SI-PEX40064 SATA card with a Marvell 88SE9215 chipset.
I'm basically trying to get passthrough working so that I can present disks directly to a couple of VMs (FreeNAS, Windows 10, a few others) for some testing, including to see if I can get the SMART data to pass through. I have one of the 750GB disks as a traditional datastore with all the VMs, and it's all been running really well until I tried to get the passthrough working!
I set up a new Windows 10 VM without passthrough, with VMware Tools installed, and all was working well. I physically installed the PCI-E SATA card and ESXi recognised it and installed the drivers - it seemed to be working OK. I then set the card to run as passthrough and rebooted ESXi. All good so far:
I took a snapshot of the VM then went to VM hardware tab to add the card to the VM:
I had to adjust the memory reservation etc as below:
However, now it won't boot. I can't even get it to boot off the install DVD into recovery mode! Nothing comes up on screen when I use the console, and the log has the following in it, but I have no idea where the error starts, and Google has not been able to help me out:
2018-08-26T18:58:14.324Z| vcpu-0| I125: MemSched (MB): min: 9 sizeLimit: 1024 swapInitialFileSize: 110 prealloc: FALSE.
2018-08-26T18:58:14.324Z| vcpu-0| I125: MemSched (MB): min: 9 sizeLimit: 1024 swapInitialFileSize: 110 prealloc: FALSE.
2018-08-26T18:58:14.328Z| vcpu-0| I125: Guest: EFI ROM version: VMW71.00V.7581552.B64.1801142334 (64-bit RELEASE)
2018-08-26T18:58:14.413Z| vcpu-0| I125: BIOS-UUID is 56 4d 73 55 76 0f dc 19-84 ac 45 6f b6 65 7d db
2018-08-26T18:58:14.489Z| vcpu-0| I125: UHCI: HCReset
2018-08-26T18:58:14.522Z| vcpu-0| I125: SVGA: Registering MemSpace at 0xf0000000(0x0) and 0xfb800000(0x0)
2018-08-26T18:58:14.523Z| vcpu-0| I125: SVGA: Unregistering MemSpace at 0xf0000000(0xf0000000) and 0xfb800000(0xfb800000)
2018-08-26T18:58:14.563Z| vcpu-0| I125: SVGA: Registering IOSpace at 0x2040
2018-08-26T18:58:14.563Z| vcpu-0| I125: SVGA: Unregistering IOSpace at 0x2040
2018-08-26T18:58:14.564Z| vcpu-0| I125: AHCI: Tried to enable/disable IO space.
2018-08-26T18:58:14.564Z| vcpu-0| I125: PCIBridge4: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.564Z| vcpu-0| I125: pciBridge4:1: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.564Z| vcpu-0| I125: pciBridge4:2: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.564Z| vcpu-0| I125: pciBridge4:3: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.564Z| vcpu-0| I125: pciBridge4:4: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.564Z| vcpu-0| I125: pciBridge4:5: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.565Z| vcpu-0| I125: pciBridge4:6: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.565Z| vcpu-0| I125: pciBridge4:7: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.565Z| vcpu-0| I125: PCIBridge5: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.565Z| vcpu-0| I125: pciBridge5:1: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.565Z| vcpu-0| I125: pciBridge5:2: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.565Z| vcpu-0| I125: pciBridge5:3: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.565Z| vcpu-0| I125: pciBridge5:4: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.565Z| vcpu-0| I125: pciBridge5:5: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.565Z| vcpu-0| I125: pciBridge5:6: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.565Z| vcpu-0| I125: pciBridge5:7: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.565Z| vcpu-0| I125: PCIBridge6: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.566Z| vcpu-0| I125: pciBridge6:1: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.566Z| vcpu-0| I125: pciBridge6:2: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.566Z| vcpu-0| I125: pciBridge6:3: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.566Z| vcpu-0| I125: pciBridge6:4: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.566Z| vcpu-0| I125: pciBridge6:5: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.566Z| vcpu-0| I125: pciBridge6:6: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.566Z| vcpu-0| I125: pciBridge6:7: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.566Z| vcpu-0| I125: PCIBridge7: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.566Z| vcpu-0| I125: pciBridge7:1: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.566Z| vcpu-0| I125: pciBridge7:2: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.566Z| vcpu-0| I125: pciBridge7:3: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.566Z| vcpu-0| I125: pciBridge7:4: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.566Z| vcpu-0| I125: pciBridge7:5: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.567Z| vcpu-0| I125: pciBridge7:6: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.567Z| vcpu-0| I125: pciBridge7:7: ISA/VGA decoding enabled (ctrl 001C)
2018-08-26T18:58:14.574Z| svga| I125: MKSScreenShotMgr: Taking a screenshot
2018-08-26T18:58:14.576Z| vcpu-0| I125: SVGA: Registering IOSpace at 0x2040
2018-08-26T18:58:14.576Z| vcpu-0| I125: SVGA: Registering MemSpace at 0xf0000000(0xf0000000) and 0xfb800000(0xfb800000)
2018-08-26T18:58:14.580Z| svga| I125: SVGA enabling SVGA
2018-08-26T18:58:14.582Z| svga| I125: SVGA-ScreenMgr: Screen type changed to RegisterMode
2018-08-26T18:58:14.734Z| vcpu-0| I125: Tools: Running status rpc handler: 0 => 1.
2018-08-26T18:58:14.734Z| vcpu-0| I125: Tools: Changing running status: 0 => 1.
2018-08-26T18:58:14.734Z| vcpu-0| I125: Tools: Removing Tools inactivity timer.
2018-08-26T18:58:14.842Z| svga| I125: MKSScreenShotMgr: Taking a screenshot
2018-08-26T18:58:14.854Z| vmx| I125: VigorTransportProcessClientPayload: opID=db5d949d seq=8821: Receiving MKS.IssueTicket request.
2018-08-26T18:58:14.854Z| vmx| I125: SOCKET creating new socket listening on /var/run/vmware/ticket/6b98b0066629a08a
2018-08-26T18:58:14.854Z| vmx| I125: SOCKET 5 (108) creating new listening socket on port -1
2018-08-26T18:58:14.854Z| vmx| I125: Issuing new webmks ticket 6b98b0... (120 seconds)
2018-08-26T18:58:14.854Z| vmx| I125: VigorTransport_ServerSendResponse opID=db5d949d seq=8821: Completed MKS request with messages.
2018-08-26T18:58:15.037Z| mks| I125: Accepting connection for webmks ticket 6b98b0...
2018-08-26T18:58:15.037Z| mks| I125: Expiring webmks ticket 6b98b0...
2018-08-26T18:58:15.037Z| mks| I125: SOCKET 6 (111) AsyncTCPSocketSetOption: Option layer/level [6], option/name [1]: successfully set OS option for TCP socket.
2018-08-26T18:58:15.037Z| mks| W115: SOCKET 7 (111) unable to determine remote IP address
2018-08-26T18:58:15.037Z| mks| I125: SOCKET 6 (111) AsyncTCPSocketSetOption: sendLowLatencyMode set to [1].
2018-08-26T18:58:15.037Z| mks| I125: SOCKET 7 (111) Creating VNC remote connection.
2018-08-26T18:58:15.037Z| mks| I125: MKSControlMgr: New VNC connection 0
2018-08-26T18:58:15.038Z| svga| I125: VNCENCODE 7 VNCEncode: VNCEncode_ServerSetTopology - original root: (0, 0) size: (1024, 768)
2018-08-26T18:58:15.038Z| svga| I125: VNCENCODE 7 VNCEncode: Number of screens changed from 0 to 1
2018-08-26T18:58:15.038Z| svga| I125: VNCENCODE 7 VNCEncode: screen: 0 BoundingBox: (1024x768) Screen (1024x768) @ (0,0) bytesPerLine: 4096
2018-08-26T18:58:15.421Z| mks| W115: VNCENCODE 7 JPEG quality levels (min, mid, max). Input: (25, 35, 90) Clamped: (25, 35, 90)
2018-08-26T18:58:15.421Z| mks| W115: VNCENCODE 7 failed to allocate VNCBlitDetect
2018-08-26T18:58:15.421Z| mks| I125: VNCENCODE 7 VNCEncodeChooseRegionEncoder: region encoder adaptive. Screen 1/1 @ Resolution: 1024 x 768
2018-08-26T18:58:15.536Z| vcpu-0| I125: AHCI-VMM:HBA reset issued on sata0.
2018-08-26T18:58:15.537Z| vcpu-0| I125: AHCI-VMM: sata0:0: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.537Z| vcpu-0| I125: AHCI-VMM: sata0:1: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.537Z| vcpu-0| I125: AHCI-VMM: sata0:2: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.548Z| vcpu-0| I125: AHCI-VMM: sata0:3: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.558Z| vcpu-0| I125: AHCI-VMM: sata0:4: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.569Z| vcpu-0| I125: AHCI-VMM: sata0:5: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.580Z| vcpu-0| I125: AHCI-VMM: sata0:6: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.591Z| vcpu-0| I125: AHCI-VMM: sata0:7: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.602Z| vcpu-0| I125: AHCI-VMM: sata0:8: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.613Z| vcpu-0| I125: AHCI-VMM: sata0:9: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.624Z| vcpu-0| I125: AHCI-VMM: sata0:10: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.635Z| vcpu-0| I125: AHCI-VMM: sata0:11: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.646Z| vcpu-0| I125: AHCI-VMM: sata0:12: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.657Z| vcpu-0| I125: AHCI-VMM: sata0:13: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.668Z| vcpu-0| I125: AHCI-VMM: sata0:14: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.680Z| vcpu-0| I125: AHCI-VMM: sata0:15: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.691Z| vcpu-0| I125: AHCI-VMM: sata0:16: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.702Z| vcpu-0| I125: AHCI-VMM: sata0:17: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.713Z| vcpu-0| I125: AHCI-VMM: sata0:18: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.724Z| vcpu-0| I125: AHCI-VMM: sata0:19: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.735Z| vcpu-0| I125: AHCI-VMM: sata0:20: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.746Z| vcpu-0| I125: AHCI-VMM: sata0:21: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.757Z| vcpu-0| I125: AHCI-VMM: sata0:22: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.768Z| vcpu-0| I125: AHCI-VMM: sata0:23: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.778Z| vcpu-0| I125: AHCI-VMM: sata0:24: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.789Z| vcpu-0| I125: AHCI-VMM: sata0:25: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.800Z| vcpu-0| I125: AHCI-VMM: sata0:26: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.812Z| vcpu-0| I125: AHCI-VMM: sata0:27: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.823Z| vcpu-0| I125: AHCI-VMM: sata0:28: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.834Z| vcpu-0| I125: AHCI-VMM: sata0:29: PxSCTL.DET already 0. Ignoring write 0.
2018-08-26T18:58:15.890Z| vmx| I125: Tools_SetGuestResolution: Sending rpcMsg = Resolution_Set 1024 768
2018-08-26T18:58:17.498Z| svga| I125: MKSScreenShotMgr: Taking a screenshot
2018-08-26T18:58:18.878Z| vcpu-0| I125: Guest: DSDT: CNOT method is 38 bytes long.
2018-08-26T18:58:18.980Z| vcpu-0| I125: Guest: About to do EFI boot: Windows Boot Manager
2018-08-26T18:58:19.014Z| vcpu-0| I125: AHCIHandleFirstWrite: First write on sata0:1.fileName='/vmfs/volumes/5aea2ac7-f8a56fc8-0045-d050994a8c62/Win 10/Win 10.vmdk'
2018-08-26T18:58:19.014Z| vcpu-0| I125: DDB: "longContentID" = "ecb91f0866d6fb3533d9ca95961368bd" (was "e1dca447b4f2ea5ec9ecc5a389dc3b1a")
2018-08-26T18:58:19.213Z| vcpu-0| I125: DISKLIB-CHAIN : DiskChainUpdateContentID: old=0x89dc3b1a, new=0x961368bd (ecb91f0866d6fb3533d9ca95961368bd)
2018-08-26T18:58:26.119Z| vcpu-0| I125: UHCI: HCReset
2018-08-26T18:58:26.140Z| vcpu-0| I125: Guest: Firmware has transitioned to runtime.
2018-08-26T18:58:26.234Z| vcpu-0| I125: Guest MSR write (0x48: 0x2)
2018-08-26T18:58:26.234Z| vcpu-0| I125: Preparing for SPEC_CTRL Guest MSR write (0x48) passthrough.
2018-08-26T18:58:26.235Z| vcpu-0| I125: APIC CMCI LVT write: 0x100d8
2018-08-26T18:58:27.028Z| vcpu-0| I125: SVGA: Unregistering IOSpace at 0x2040
2018-08-26T18:58:27.028Z| vcpu-0| I125: SVGA: Unregistering MemSpace at 0xf0000000(0xf0000000) and 0xfb800000(0xfb800000)
2018-08-26T18:58:27.029Z| vcpu-0| I125: SVGA: Registering IOSpace at 0x2040
2018-08-26T18:58:27.029Z| vcpu-0| I125: SVGA: Registering MemSpace at 0xf0000000(0xf0000000) and 0xfb800000(0xfb800000)
2018-08-26T18:58:28.517Z| vcpu-0| I125: AHCI-VMM:HBA reset issued on sata0.
2018-08-26T18:58:30.297Z| vcpu-0| E105: PANIC: VERIFY bora/devices/pcipassthru/pciPassthru.c:948
2018-08-26T18:58:33.104Z| vcpu-0| W115: A core file is available in "/vmfs/volumes/5aea2ac7-f8a56fc8-0045-d050994a8c62/Win 10/vmx-zdump.000"
2018-08-26T18:58:33.104Z| mks| W115: Panic in progress... ungrabbing
2018-08-26T18:58:33.104Z| mks| I125: MKS: Release starting (Panic)
2018-08-26T18:58:33.104Z| mks| I125: MKS: Release finished (Panic)
2018-08-26T18:58:33.152Z| vcpu-0| I125: Writing monitor file `vmmcores.gz`
2018-08-26T18:58:33.156Z| vcpu-0| W115: Dumping core for vcpu-0
2018-08-26T18:58:33.156Z| vcpu-0| I125: VMK Stack for vcpu 0 is at 0x451a0ba93000
2018-08-26T18:58:33.156Z| vcpu-0| I125: Beginning monitor coredump
2018-08-26T18:58:33.577Z| vcpu-0| I125: End monitor coredump
2018-08-26T18:58:34.317Z| vcpu-0| I125: Printing loaded objects
2018-08-26T18:58:34.317Z| vcpu-0| I125: [0xC5683D4000-0xC56953D044): /bin/vmx
2018-08-26T18:58:34.317Z| vcpu-0| I125: [0xC5A9B7C000-0xC5A9B82630): /lib64/librt.so.1
2018-08-26T18:58:34.317Z| vcpu-0| I125: [0xC5A9D84000-0xC5A9D85E90): /lib64/libdl.so.2
2018-08-26T18:58:34.317Z| vcpu-0| I125: [0xC5A9F88000-0xC5AA21D364): /lib64/libcrypto.so.1.0.2
2018-08-26T18:58:34.317Z| vcpu-0| I125: [0xC5AA44F000-0xC5AA4B834C): /lib64/libssl.so.1.0.2
2018-08-26T18:58:34.317Z| vcpu-0| I125: [0xC5AA6C3000-0xC5AA7D737C): /lib64/libX11.so.6
2018-08-26T18:58:34.317Z| vcpu-0| I125: [0xC5AA9DD000-0xC5AA9EC01C): /lib64/libXext.so.6
2018-08-26T18:58:34.317Z| vcpu-0| I125: [0xC5AABEE000-0xC5AACD2341): /lib64/libstdc++.so.6
2018-08-26T18:58:34.317Z| vcpu-0| I125: [0xC5AAEF1000-0xC5AAFECB94): /lib64/libm.so.6
2018-08-26T18:58:34.317Z| vcpu-0| I125: [0xC5AB1EE000-0xC5AB202BC4): /lib64/libgcc_s.so.1
2018-08-26T18:58:34.317Z| vcpu-0| I125: [0xC5AB405000-0xC5AB41C858): /lib64/libpthread.so.0
2018-08-26T18:58:34.317Z| vcpu-0| I125: [0xC5AB622000-0xC5AB7C62E0): /lib64/libc.so.6
2018-08-26T18:58:34.317Z| vcpu-0| I125: [0xC569959000-0xC5699789C0): /lib64/ld-linux-x86-64.so.2
2018-08-26T18:58:34.317Z| vcpu-0| I125: [0xC5AB9D0000-0xC5AB9EA634): /lib64/libxcb.so.1
2018-08-26T18:58:34.317Z| vcpu-0| I125: [0xC5ABBEC000-0xC5ABBED95C): /lib64/libXau.so.6
2018-08-26T18:58:34.317Z| vcpu-0| I125: [0xC5ABE50000-0xC5ABFFF284): /usr/lib64/vmware/plugin/objLib/vsanObjBE.so
2018-08-26T18:58:34.317Z| vcpu-0| I125: [0xC5AC2BF000-0xC5AC2D865C): /lib64/libz.so.1
2018-08-26T18:58:34.317Z| vcpu-0| I125: [0xC5AC722000-0xC5AC72D758): /lib64/libnss_files.so.2
2018-08-26T18:58:34.317Z| vcpu-0| I125: End printing loaded objects
2018-08-26T18:58:34.317Z| vcpu-0| I125: Backtrace:
2018-08-26T18:58:34.317Z| vcpu-0| I125: Backtrace[0] 000000c5ae64e450 rip=000000c568afcb47 rbx=000000c568afc640 rbp=000000c5ae64e470 r12=0000000000000000 r13=0000000000000001 r14=0000000000000800 r15=0000000000000001
2018-08-26T18:58:34.317Z| vcpu-0| I125: Backtrace[1] 000000c5ae64e480 rip=000000c5685af090 rbx=000000c5ae64e4a0 rbp=000000c5ae64e980 r12=000000c5698033f0 r13=0000000000000001 r14=0000000000000800 r15=0000000000000001
2018-08-26T18:58:34.317Z| vcpu-0| I125: Backtrace[2] 000000c5ae64e990 rip=000000c5686db107 rbx=000000c569e06750 rbp=000000c5ae64ea10 r12=0000000000000005 r13=0000000000000005 r14=0000000000000800 r15=0000000000000001
2018-08-26T18:58:34.317Z| vcpu-0| I125: Backtrace[3] 000000c5ae64ea20 rip=000000c5686db18f rbx=0000000000000006 rbp=000000c5ae64ea80 r12=0000000000000000 r13=000000c569e06828 r14=000000c569e06750 r15=000000c569e06810
2018-08-26T18:58:34.317Z| vcpu-0| I125: Backtrace[4] 000000c5ae64ea90 rip=000000c5686dbc8c rbx=0000000000000002 rbp=000000c5ae64eae0 r12=000000c569e06750 r13=0000000000000001 r14=000000c5ae64eb0c r15=000000c569e06800
2018-08-26T18:58:34.317Z| vcpu-0| I125: Backtrace[5] 000000c5ae64eaf0 rip=000000c5686d5f8c rbx=0000000000000031 rbp=000000c5ae64eb40 r12=0000000000300004 r13=000000c5ad250020 r14=0000000000000000 r15=000000c569e06f80
2018-08-26T18:58:34.317Z| vcpu-0| I125: Backtrace[6] 000000c5ae64eb50 rip=000000c5689b2781 rbx=000000c5698fdea0 rbp=000000c5ae64eb80 r12=000000c56960d880 r13=0000000000000164 r14=000000c569c870b0 r15=0000000000000000
2018-08-26T18:58:34.317Z| vcpu-0| I125: Backtrace[7] 000000c5ae64eb90 rip=000000c5689d4216 rbx=000000000000012d rbp=000000c5ae64ebd0 r12=000000c5698033f0 r13=000000c5698f4fe0 r14=000000c5697cfd40 r15=0000000000000000
2018-08-26T18:58:34.317Z| vcpu-0| I125: Backtrace[8] 000000c5ae64ebe0 rip=000000c5689b2891 rbx=0000000000000000 rbp=000000c5ae64ebf0 r12=000000c5ad2510e8 r13=000000c5ae64f9c0 r14=000000c569b79040 r15=0000000000000003
2018-08-26T18:58:34.317Z| vcpu-0| I125: Backtrace[9] 000000c5ae64ec00 rip=000000c568a9dd17 rbx=000000c5ae64ec00 rbp=000000c5ae64ed20 r12=000000c569d5a1e0 r13=000000c5ae64f9c0 r14=000000c569b79040 r15=0000000000000003
2018-08-26T18:58:34.317Z| vcpu-0| I125: Backtrace[10] 000000c5ae64ed30 rip=000000c5ab40d06b rbx=0000000000000000 rbp=0000000000000000 r12=0000032e27590b10 r13=000000c5ae64f9c0 r14=000000c569b79040 r15=0000000000000003
2018-08-26T18:58:34.317Z| vcpu-0| I125: Backtrace[11] 000000c5ae64ee40 rip=000000c5ab70baed rbx=0000000000000000 rbp=0000000000000000 r12=0000032e27590b10 r13=000000c5ae64f9c0 r14=000000c569b79040 r15=0000000000000003
2018-08-26T18:58:34.317Z| vcpu-0| I125: Backtrace[12] 000000c5ae64ee48 rip=0000000000000000 rbx=0000000000000000 rbp=0000000000000000 r12=0000032e27590b10 r13=000000c5ae64f9c0 r14=000000c569b79040 r15=0000000000000003
2018-08-26T18:58:34.317Z| vcpu-0| I125: SymBacktrace[0] 000000c5ae64e450 rip=000000c568afcb47 in function (null) in object /bin/vmx loaded at 000000c5683d4000
2018-08-26T18:58:34.317Z| vcpu-0| I125: SymBacktrace[1] 000000c5ae64e480 rip=000000c5685af090 in function (null) in object /bin/vmx loaded at 000000c5683d4000
2018-08-26T18:58:34.317Z| vcpu-0| I125: SymBacktrace[2] 000000c5ae64e990 rip=000000c5686db107 in function (null) in object /bin/vmx loaded at 000000c5683d4000
2018-08-26T18:58:34.317Z| vcpu-0| I125: SymBacktrace[3] 000000c5ae64ea20 rip=000000c5686db18f in function (null) in object /bin/vmx loaded at 000000c5683d4000
2018-08-26T18:58:34.317Z| vcpu-0| I125: SymBacktrace[4] 000000c5ae64ea90 rip=000000c5686dbc8c in function (null) in object /bin/vmx loaded at 000000c5683d4000
2018-08-26T18:58:34.317Z| vcpu-0| I125: SymBacktrace[5] 000000c5ae64eaf0 rip=000000c5686d5f8c in function (null) in object /bin/vmx loaded at 000000c5683d4000
2018-08-26T18:58:34.317Z| vcpu-0| I125: SymBacktrace[6] 000000c5ae64eb50 rip=000000c5689b2781 in function (null) in object /bin/vmx loaded at 000000c5683d4000
2018-08-26T18:58:34.317Z| vcpu-0| I125: SymBacktrace[7] 000000c5ae64eb90 rip=000000c5689d4216 in function (null) in object /bin/vmx loaded at 000000c5683d4000
2018-08-26T18:58:34.317Z| vcpu-0| I125: SymBacktrace[8] 000000c5ae64ebe0 rip=000000c5689b2891 in function (null) in object /bin/vmx loaded at 000000c5683d4000
2018-08-26T18:58:34.317Z| vcpu-0| I125: SymBacktrace[9] 000000c5ae64ec00 rip=000000c568a9dd17 in function (null) in object /bin/vmx loaded at 000000c5683d4000
2018-08-26T18:58:34.317Z| vcpu-0| I125: SymBacktrace[10] 000000c5ae64ed30 rip=000000c5ab40d06b in function (null) in object /lib64/libpthread.so.0 loaded at 000000c5ab405000
2018-08-26T18:58:34.317Z| vcpu-0| I125: SymBacktrace[11] 000000c5ae64ee40 rip=000000c5ab70baed in function clone in object /lib64/libc.so.6 loaded at 000000c5ab622000
2018-08-26T18:58:34.317Z| vcpu-0| I125: SymBacktrace[12] 000000c5ae64ee48 rip=0000000000000000
2018-08-26T18:58:34.317Z| vcpu-0| I125: Msg_Post: Error
2018-08-26T18:58:34.317Z| vcpu-0| I125: [msg.log.error.unrecoverable] VMware ESX unrecoverable error: (vcpu-0)
2018-08-26T18:58:34.317Z| vcpu-0| I125+ VERIFY bora/devices/pcipassthru/pciPassthru.c:948
2018-08-26T18:58:34.317Z| vcpu-0| I125: [msg.panic.haveLog] A log file is available in "/vmfs/volumes/5aea2ac7-f8a56fc8-0045-d050994a8c62/Win 10/vmware.log".
2018-08-26T18:58:34.317Z| vcpu-0| I125: [msg.panic.requestSupport.withoutLog] You can request support.
2018-08-26T18:58:34.317Z| vcpu-0| I125: [msg.panic.requestSupport.vmSupport.vmx86]
2018-08-26T18:58:34.317Z| vcpu-0| I125+ To collect data to submit to VMware technical support, run "vm-support".
2018-08-26T18:58:34.317Z| vcpu-0| I125: [msg.panic.response] We will respond on the basis of your support entitlement.
2018-08-26T18:58:34.317Z| vcpu-0| I125: ----------------------------------------
2018-08-26T18:58:34.319Z| vcpu-0| I125: Exiting
[Sorry - I can't see how to upload that as text code to make it smaller]
So - can anyone help identify why this card is not working in passthrough mode and is preventing my VM from booting?! TIA.
To add,
I just changed the SATA card to non-passthrough mode, and it works fine - the controller and the hard disk attached to it show up as available to ESXi to create a datastore on if required - so I'm fairly sure it's set up right... I just can't get the passthrough to work!
Instead of trying to use passthrough, have you tried using it as an RDM drive/disk? Remember, this is NOT Hyper-V you're using now, so most of the ways things 'work' there don't apply. IME, Hyper-V is a SS compared to ESXi/vSphere.
Hi golddiggie,
thanks for the reply - sorry for my delayed response - I somehow missed that someone had replied! OK, I'm happy to try the RDM method - I thought I'd try pass-through first as:
To be honest, I never used pass-through in Hyper-V, so I'm not hung up on any previous experience there; I'm just happy to learn new techniques and experiment. What do you mean by "Hyper-V is SS" compared to ESXi?
Finally, seeing as I tried the passthrough method first, is there anyone who can explain what is going wrong with my system and preventing the VM booting? Many thanks!
Oddly, it also fails to boot when I try to boot off the Windows install DVD while the PCI-E SATA card is passed through to the VM.
It does exactly the same as when trying to boot the VM from its native virtual HDD - it gets as far as the Windows logo and a few spins of the loading dots, then crashes and stops the VM completely.
Also, if I pass through the PCI-E SATA controller to a FreeNAS VM, it all goes through without any issue and I can see it in the VM.
Also, also... booting a Windows 10 installation directly on the hardware (not in a VM) works fine too with the card physically installed.
Here's my recommendation, as I see home labbers trying to do this all the time - especially with ESXi Free. This isn't going to be popular, fair warning: stop trying to pass through storage controllers and other sundry desktop hardware. You're only going to meet with tedium, heartbreak, and, ultimately, most likely failure. The reasons are as follows:
1.) ESXi and other type-1 hypervisors go to great lengths to abstract physical components into logical representations for the purposes of resource sharing, stability, and standardization. Most of their development effort is focused on making this really, really rock solid, and for the most part it is - which is probably why you're even here in the first place. Physical passthrough is a corner case that is rarely encountered in "real" vSphere environments.
2.) Consumer-grade hardware isn't certified for ESXi, as I'm sure you know. It's hard enough making it work as it is, much less exposing it even further up the stack - don't press your luck. If you're hell-bent on accomplishing this, you need to invest in server hardware. Yes, I know it's more expensive, but this is a platform built for business use, so that's just table stakes.
3.) Further to #2 above, single, direct disk access under any type-1 hypervisor is just not a good idea anyhow, and much of the need for it can be met with even mid-grade NAS storage, which can monitor itself. There are numerous vendors with such products on the market that don't break the bank.
I can appreciate that you're learning ESXi and maybe later vSphere, but set yourself up for success through an experience that somewhat mimics the real uses of this platform by getting a little closer to its intended and designed operating environment.
Hi @daphnissov
Thanks for taking the time to reply, and for the warning. I understand where you are coming from, and would like to offer my rebuttal to your three points:
I don't mean to be rude; I just wanted to explain where I am coming from, and I thank you for your thoughts and warning. So, to try and work towards fixing the issue, and with your warning in mind, maybe you can help with some advice on a workable solution?! The main reason I want to set up passthrough is so that I can use a free / cheap Windows-based SMART monitoring tool to keep an eye on my hard disk health, temperatures, etc. Passthrough seemed like the best way to do this.
As you are suggesting this is not a good idea, please can you advise how to monitor / report / auto-email this kind of health information to me as the admin, either from within ESXi or any other way? Many thanks again.
I certainly get what you're saying, why you're experimenting, and what you're trying to do. Still, though, it seems a few things are off.
but the software is designed to do it, and I'm trying to understand and use a capability of the software as designed
Yes, that's certainly true, but it isn't designed to do it with that type of hardware. Pass-through is either on or off; it works or it doesn't. And in your case, it seems not to work with this hardware, which isn't surprising to me at all. There aren't a lot of advanced or secret knobs to turn to make it work.
I understand the risks, and I'm trying to learn how the operating system / hypervisor works before I spend my hard-earned cash on more hardware.
Always a good idea, but if this is your aim, then test within the major use cases of the hypervisor, not corner cases that really aren't used in the real world. Not only are virtual disks in use the vast majority of the time, but pass-through physical devices and RDMs actually limit the number and types of features you can take advantage of later because, again, they're throwbacks to a physical world, which is not what a hypervisor is about. So if you really want to kick the tires and learn all you can about ESXi or vSphere, you're shooting yourself in the foot in a major way by using physical pass-through devices here.
The main reason I want to set up passthrough is so that I can use a free / cheap windows-based SMART monitoring software solution to keep an eye on my hard disk health, temperatures, etc. passthrough seemed like the best way to do this.
I get that this is why you're going to such lengths, but you're still trying to fit a square peg into a round hole. If your sole reason for passing through consumer-grade, old, and unsupported hardware is to monitor SMART data, then perhaps you should just stick to running Windows physically on this hardware and use more appropriate hardware for ESXi. ESXi/vSphere really aren't designed around local storage, but rather shared storage. Does local storage work? Sure. Is it going to work just like it did in a purely physical world? No. It's a corner case, so robust mechanisms for handling this type of configuration aren't built in unless you're using a RAID controller to abstract the physical disks away from the hypervisor.
When testing new platforms, there's always a need to look down and also look ahead to figure out what are and aren't the intended uses of the platform. For example, if I'm learning SQL Server as a beginner, then I probably want to focus my efforts on the basic and predominant use cases of the software: learning to store and manage structured data, learning how to query it, and consuming it as a data source. But if I skip over these things and instead jump straight to making natural language work with Alexa, rewriting SSNS into a robust notification service, or forcing a relational database to output JSON, then my time is essentially being wasted on folly. Solving problems unique to you is always a good way to learn, but at some point, if you're purposely throwing tacks down in front of your bare feet, it becomes counterproductive.
Thanks daphnissov. So if I take hardware passthrough off the table for now, can you recommend another way to monitor the SMART data and temperatures on my hard disks, either from within ESXi, from a host VM / separate client, or any other way?
Yes, do so from the native ESXi level, which has been available since 5.1 and is illustrated here. If this is something you want checked in an automated fashion, then you may need to script it. As I know I've said multiple times now, ESXi really isn't designed to handle local hard disks well (certainly not single hard disks) that aren't part of a vSAN group. As a result, don't expect the same kind of introspection you'd get with desktop tools on consumer hardware. This is why I recommended hardware a little more in line with the abilities of ESXi, which really come into their own with external, shared storage. Far too often I see people insistent on using single, local spinning rust with ESXi, and it's really just not a good idea - not good for a single VM, and not good to incorporate into the platform from the beginning. This means the cost is indeed higher, but there's a minimum ante when dealing with business- and enterprise-class platforms. When you have that, life becomes much easier.
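If you do go the scripting route, here's a minimal sketch of the idea, heavily hedged: it parses output in the shape produced by `esxcli storage core device smart get -d <device>` and flags any attribute whose value has fallen to the vendor threshold. The sample table, device values, and helper names below are made up for illustration - the exact columns vary by ESXi release, so treat the parser as a starting point, not a finished tool:

```python
# Parse the table printed by `esxcli storage core device smart get`
# into a dict, then flag parameters whose normalized value has dropped
# to or below the vendor threshold. Output is hard-coded here for
# illustration; on a real host you'd capture the command's stdout
# (e.g. over SSH or from a cron job on the host).

SAMPLE = """\
Parameter                     Value  Threshold  Worst
----------------------------  -----  ---------  -----
Health Status                 OK     N/A        N/A
Drive Temperature             34     0          41
Reallocated Sector Count      100    36         100
"""

def parse_smart(text):
    rows = {}
    for line in text.splitlines()[2:]:      # skip header and separator
        parts = line.rsplit(None, 3)        # parameter name may contain spaces
        if len(parts) == 4:
            name, value, threshold, worst = parts
            rows[name] = {"value": value, "threshold": threshold, "worst": worst}
    return rows

def warnings(rows):
    """Return the names of attributes at or below their threshold."""
    bad = []
    for name, r in rows.items():
        if r["value"].isdigit() and r["threshold"].isdigit():
            if int(r["value"]) <= int(r["threshold"]):
                bad.append(name)
    return bad
```

From there, emailing anything `warnings()` returns is a few lines of `smtplib`, and the whole thing can run on a schedule from any machine that can reach the host.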
OK, thanks - I'll take a look at the manual method and learning some scripting!
For the record, I've been dealing with nearly this exact problem, with nearly identical context for the last 48 hours.
I can pass through my P420 and Intel software RAID controllers to a FreeNAS install, but only the Intel drives show up - not the existing array, and none of the 8 disks connected to the P420. When I try the same in a Windows Server 2016 install, it crashes at the spinning dots.
It's really a huge bummer, because it means the alternative for me is to double my power consumption and run a second server to get where I need to be - one running FreeNAS for iSCSI and one running ESXi for the OSes.
In the boot options of your VM, change the firmware setting from UEFI to BIOS (you will need to reinstall the guest OS if you already had one set up), and that should allow Windows to recognise the SATA controller and the disks attached to it.
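For what it's worth, the same switch the boot options page makes can be done by editing the VM's .vmx file directly while the VM is powered off - the relevant key is `firmware` (`"efi"` vs `"bios"`). A small sketch, assuming nothing beyond the standard key = "value" .vmx layout; the helper name is mine, not a VMware tool:

```python
# Rewrite the firmware setting in .vmx text to legacy BIOS boot.
# .vmx files are simple key = "value" lines, so we replace any
# existing firmware line, or append one if the key is absent.

def set_bios_firmware(vmx_text):
    lines, found = [], False
    for line in vmx_text.splitlines():
        if line.strip().startswith("firmware"):
            lines.append('firmware = "bios"')
            found = True
        else:
            lines.append(line)
    if not found:
        # Key absent: write it explicitly rather than rely on defaults
        lines.append('firmware = "bios"')
    return "\n".join(lines) + "\n"
```

As noted above, the guest OS will need reinstalling either way, since a Windows install laid down in EFI mode won't boot under BIOS.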
@asansi is right. SATA controller pass-through works in Windows guest VMs only if the VM is configured for BIOS boot mode; in UEFI mode it just crashes during OS initialization, a few seconds after the spinning dots appear. Oddly enough, in Linux guests like Ubuntu 22.04 it works in both UEFI and BIOS boot modes. I tried with a Supermicro server that has an X10SRi-F motherboard: both onboard SATA Wellsburg AHCI controllers (SATA and sSATA) worked fine in Linux guest VMs, while Windows guests (server and workstation versions) crashed when booting in UEFI mode and only worked in BIOS mode.
I agree with both daphnissov and Wafy_Tony - with @daphnissov in the sense that the platform is primarily designed for business/enterprise use. I know this because I have 15 years working with vSphere, from the 5.5 days all the way to the 7.x iteration, in an environment with a lot of enterprise-grade hardware, mainly HPE and Cisco kit (Blades, 3PARs, Primera, Synergy, Cisco Nexus, ASRs, etc.), and he's right: the strong point of the platform is enterprise-grade gear aimed at rock-solid 24/7 operation. But the platform also supports the kind of thing @Wafy_Tony wants to do, to a certain point of course. Heck, I myself have two servers at home doing the same stuff he wants to do, and that's why I agree with him too.
That's the fun of computing and the internet: you can try things that push the limits of something, and in the process have fun and learn new stuff. If something doesn't work, you always have the internet, where you can ask whether somebody else has tried the same thing before. Also, these kinds of crazy things - corner cases, as daphnissov stated - can only be done in a home lab, or a small business with a low budget, where, with proper backups, you can risk trying something new and save a few bucks in the process. In an enterprise environment, there are two reasons not to try this kind of thing. First, there's no need: the hardware you use is designed and certified to work with the platform, and the manufacturers tell you beforehand what the solutions can and can't do, so there's no way to go wrong. But second, and most important, there's too much at risk from a responsibility point of view to be trying this kind of thing: data, money, time, and your job.
@Wafy_Tony, have fun with your home server - I definitely had fun building mine. I tried TrueNAS Scale, but didn't like it; coming from ESXi and vSphere, TrueNAS virtualization looked like kids' stuff to me. I ended up with a server running a TrueNAS VM (HBA passed through); two VMs running Windows and Steam Remote Play with two NVIDIA cards for game streaming (one Quadro and one GeForce), which is really fun for playing PC games on a tablet and TV without PCs or game consoles attached to them; a MySQL VM for the Kodi media library; Plex for transcoding media for low-power devices that lack codec support for uncommon content; etc. I hope you have fun and learn a lot in the process.