I'm building a new ESXi 5.5 system on an Asus H87I-ITX board with an Intel Core i5-4670T (Haswell) CPU. As far as I can tell, both the motherboard and the CPU support VT-d / DirectPath I/O, but I have a problem.
In ESXi most devices, such as video, audio, NIC and USB controller, are available for DirectPath I/O passthrough; everything, in fact, except the Lynx Point SATA AHCI controller.
Can anybody help diagnose why the SATA controller is missing, or suggest steps for more detailed debugging?
The previous-generation Cougar Point AHCI SATA controller is supported in ESXi, so why not Lynx Point?
Also, the Lynx Point SATA AHCI controller is listed on the VMware HCL page here:
VMware Compatibility Guide: I/O Device Search Intel "Lynx Point" Series Chipset 6-Port SATA AHCI 8086:8c02
The strange thing is that my ESXi sees the Lynx Point SATA AHCI controller correctly under Storage Adapters, showing 6 x vmhba listed, so why does it not appear for passthrough, not even greyed out?
If there is any extra diagnostic I can run, please tell me.
I would like to avoid physical raw device mapping (RDM) if possible; DirectPath I/O passthrough would be a much nicer method. 🙂
I'm currently searching for a Haswell-based i7 ESXi setup like yours. I've found that the Lynx Point SATA controller cannot be passed through in ESXi 5.1.
My hope was that ESXi 5.5 would make it possible for the Lynx Point controller. While researching the topic, I've read that missing passthrough support might be a BIOS problem, and that the manufacturer may be able to help with a BIOS update that makes passthrough possible.
On the other hand, since I'm preparing a new setup: is there any Haswell-based motherboard that uses a different onboard SATA controller with passthrough support in ESXi?
I am curious how the BIOS could be the cause of this problem; all the other DirectPath / VT-d devices exposed by the BIOS work fine in ESXi, so it seems strange that this would be a BIOS issue.
Do any VMware experts understand exactly how DirectPath works and why the SATA controller may not be appearing?
My theory is that maybe the ESXi (Linux-derived) SATA driver requires a special code modification to add DirectPath support in ESXi?
It would be easy if someone from VMware could comment.
Hi VMware, can you please address this hardware, since it is now very common across the Intel Z87-series chipsets?
I have hit a dead end in my server design, since there is also a problem with >2 TB raw device mappings.
I made some custom changes to /etc/vmware/esx.conf, but my system didn't start up properly afterwards: it loops through the usual "Press F2...", a dark screen flash, "Press F2...", a dark screen flash, and so on. It is also unreachable from the network, neither via vSphere nor ping.
Intel DQ87PG Haswell Motherboard
Intel Core i5 4570
An old PCI SATA controller is used for the root device.
From /etc/vmware/esx.conf:
/device/000:000:31.2/vmkname = "vmhba0"
/device/000:000:31.2/owner = "passthru"
/device/000:000:31.2/device = "8c02"
/device/000:000:31.2/vendor = "8086"
I also tried to remove the AHCI driver:
~ # esxcli software vib remove -n sata-ahci
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Removed: VMware_bootbank_sata-ahci_3.0-18vmw.518.104.22.1683387
I tinkered with some other system files too... no changes... it's still not listed.
After the system crashed, I simply disabled the AHCI controller in the Intel BIOS/EFI. This gave me the opportunity to boot the system properly; my changes no longer took effect, as the controller was not available anymore.
It can be re-enabled afterwards to use it again.
I need to pass through the controller, as using ZFS with raw device mapping is not recommended. My drives are also larger than 2 TB.
The controller works properly under Linux KVM: using modern VFIO, I was able to pass the controller through to a FreeBSD 10 machine, which booted directly off a drive on this controller, thanks to SeaBIOS. So it is technically possible!
I'll use Linux then. VMware has really proved its customer friendliness here by not fixing this issue.
EDIT: Using the latest version of ESXi: 5.5.0 #1 SMP Release build-1623387
Just a short (first) post to say I'd love to see this fixed for Haswell.
Mr VMware, if you're listening... I'm really desperate: I put €250 into a latest-generation Core i5 and a motherboard that can't pass through its SATA controller (and its USB controller, but that will be solved with a D-Link DUB-1310 USB 3.0 controller; tested, it works).
Have a nice day folks,
Ivan, from France.
I opened Support Request #14496891806. I explained that there was no solution to a problem with hardware that is on the HCL, and that this thread has been open for almost a year.
I was unable to receive any assistance without an "active contract" for technical support, and I refuse to use my employer's contract to resolve a personal issue.
I was simply told to browse through tech papers, the knowledge base, and pubs, and to repost to this forum.
It seems the solution is either to buy new hardware or not to use vSphere.
The HCL primarily lists devices that will work with ESXi itself – i.e. you can use the device to attach the ESXi host to a storage device, a network, and so forth, using drivers that are supported on ESXi itself.
DirectPath I/O has a different set of requirements, in some ways more strict (limitations on what the device can actually do at the hardware level) and in some ways less strict (no need for ESXi drivers). Motherboard/builtin devices (and particularly chipset devices/functions) are particularly difficult to pass through to a guest, since the hardware/firmware/drivers may operate on the assumption that the whole chipset is under the control of one OS – not an unreasonable assumption in the real world.
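One concrete hardware-level limitation worth checking is whether the device advertises Function Level Reset (FLR): without a clean per-function reset, a controller is hard to pass through safely. Under Linux you can see this in the DevCap line of `lspci -vv` output. A minimal sketch of reading that flag (the sample line below is illustrative, not captured from a real Lynx Point controller):

```shell
# Illustrative DevCap line as printed by `lspci -vv` on Linux.
# "FLReset-" means the function does NOT advertise Function Level Reset.
devcap='DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <64ns, FLReset-'
case "$devcap" in
  *FLReset+*) flr=yes ;;
  *)          flr=no ;;
esac
echo "FLR advertised: $flr"
```

If the flag is absent, ESXi has to fall back to cruder reset methods (see the passthru.map discussion later in this thread), which may or may not work for a given chipset function.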
I don't have a ready link to the DirectPath I/O device compatibility list... it is much shorter than the full HCL.
The only list I can find that is similar to what you describe falls under "Systems / Servers" on the VMware Compatibility Guide search page, where you can select "VM Direct Path IO" under Features.
It still seems that "VM Direct Path IO" should be listed under I/O Devices as well: if the CPU supports VT-d and we are able to pass through every device other than the subject SATA controller, this should be identified somewhere for all of us.
I've been able to remove the AHCI driver by removing the sata_ahc (IIRC) module from boot.cfg; subsequent testing was done in that configuration. I'm not sure whether this is necessary; it certainly doesn't seem to be sufficient to get things working.
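For anyone repeating this, the modules list in ESXi's boot.cfg is a single line of entries separated by " --- ", so the edit amounts to deleting one entry from that string. A hedged sketch of the string edit only (the file name sata_ahc.v00 and the neighbouring entries are assumptions for illustration, not copied from a real boot.cfg):

```shell
# modules= value from a hypothetical /bootbank/boot.cfg (entries are examples)
modules='jumpstrt.gz --- useropts.gz --- sata_ahc.v00 --- ahci.v00'
# Drop the sata_ahc entry together with its " --- " separator
modules=$(printf '%s' "$modules" | sed 's/ --- sata_ahc\.v00//')
echo "$modules"
```

On a real host you would make the equivalent change in /bootbank/boot.cfg itself and keep a backup, since a malformed modules line can leave the host unbootable.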
FYI, a quick way to get things working again when you break something is to disable the SATA controller and reboot, then undo the breakage and reboot again with the SATA controller re-enabled. Also, vmkchdev -l outputs PCI device numbers in hex, whereas vmkchdev -v/-p requires its inputs in decimal.
Updating /etc/vmware/passthru.map with an entry for the Lynx Point SATA controller, per the VMware doc http://www.vmware.com/pdf/vsp_4_vmdirectpath_host.pdf, viz.:
8086 8c02 d3d0 false (or true)
followed by auto-backup.sh and a reboot, gets me as far as seeing the SATA controller in the vSphere Client passthrough configuration GUI, and after a further reboot I can add the PCI device to a guest. However, when I boot a Linux or FreeBSD guest I get an AHCI reset failure, irrespective of the .msiEnabled setting in the .vmx file. I've tried a few kernel boot command-line options with Linux, but it's still broken.
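For reference, the .vmx knob being toggled here looks like the fragment below. The pciPassthru0 index assumes the SATA controller is the first (only) passthrough device attached to the VM; adjust the index otherwise.

```
pciPassthru0.present = "TRUE"
pciPassthru0.msiEnabled = "FALSE"
```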
I tried the other reset methods documented for passthru.map, but they either resulted in boot failure or in no SATA controller option appearing in the vSphere Client passthrough configuration GUI.
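For anyone else iterating on this: the passthru.map columns are vendor-id, device-id, resetMethod, and fptShareable, and the reset methods documented in the file's own header are flr, d3d0, link, bridge, and default. An annotated version of the entry above (illustrative; per this thread, only d3d0 even got the device to show up):

```
# vendor-id  device-id  resetMethod  fptShareable
# resetMethod: one of flr, d3d0, link, bridge, default
8086  8c02  d3d0  false
```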
Windows 8 / Server 2012 R2 are not my guest targets, and I don't have media in any case.
I looked at the Linux ahci driver's reset code; it didn't seem to be doing anything particularly special, and I don't know much about PCI bus device resets.
I've iterated this as far as I can see a path, and at the moment I have no further ideas, which is one reason I posted: hoping to trigger an idea in someone else.
ESXi was only ever one possibility for me, and not one that has been fruitful over the years; even though I've kept trying VMware, I keep ending up going some other way. At this point I think Linux KVM will better suit my needs for a lab in a single box, and it eliminates the VMDirectPath requirement too, although searching suggests that passthrough of the AHCI controller does work under KVM.