inferno480
Contributor

Adaptec 7805 PCI Passthrough Issue w/ ESXi 5.1 and Ubuntu 12.04 + Windows 2008 R2 guests

Hello,

I am pretty new to VMware/ESXi but I am taking ESXi 5.1 for a spin (free trial license) to see if it will become the hypervisor of choice for my home server.  So far so good, until I wanted to get around the 2TB VMFS limit by passing my Adaptec 7805 card through directly to an operating system.

I cannot, for the life of me, get it to work right.  I made sure I am using the compatible 1.2.1-29900 drivers on both the host and the guest operating systems, but I am having some strange issues when actually trying to do the pass-through.  I don't know if I am encountering a bug, doing something wrong, or attempting something unsupported by my license.

As expected, after enabling the IOMMU in the host's BIOS, the Adaptec card (as well as my NICs) showed up under Advanced Configuration as eligible for pass-through/DirectPath.  So, of course, I enabled it on the PCIe Adaptec 7805.

I am able to get the device passed through in the VM settings, and once the operating system is installed, Ubuntu shows the PCI card under "lspci":

inferno@amahi:~$ lspci | grep Adaptec

0b:00.0 RAID bus controller: Adaptec Device 028c (rev 01)

...and, if I try a Windows 2008 R2 VM, it will show up as a "RAID Controller" with the yellow exclamation mark, as expected.

[...]

The headaches begin when I attempt to install drivers in either type of VM.  Note: I am not attempting to use both VMs at the same time; I've tried these one at a time, with the other VM completely deleted.

In Ubuntu 12.04, the stock/default (v1.1.x) aacraid driver doesn't cause me any problems, but it never sees the RAID10 array that's built in the controller card's BIOS.  I assumed this might be because it doesn't match the driver version running on the host, so I've tried a myriad of things, and I get pretty much the same results as soon as I tinker with the aacraid drivers.  Here is what I've attempted:

- installing the 1.2.1-29900 Ubuntu 12.04 binary drivers from Adaptec

- compiling the 1.2.1-29900 dkms drivers myself and installing them

- installing the 1.2.1-33000 Ubuntu 12.04 binary drivers from Adaptec

- upgrading the host drivers to build 33000 (as well as Ubuntu's), although these are not listed as compatible on the VMware site.

What happens after I install the aacraid drivers is pretty strange (and it happens in any of the four scenarios above): for an extended period after the first reboot following the driver installation, I just get a black screen on the console.  The GUI never loads, but eventually I get a text-based Ubuntu login prompt.  That only lasts 5-10 minutes, then the console (but not the VM!) completely dies: a completely black screen, no mouse cursor, typing does nothing, yet the play button icon stays depressed.  The VM is not completely hung -- just the console -- as I am able to log into the VM through SSH and poke around, but if I run "arcconf" (Adaptec's CLI controller management utility), it does not detect a controller, and dmesg / fdisk / udev all fail to see the RAID array as /dev/sdb (or any other device name, for that matter).
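For reference, after each driver install I've been checking which aacraid module the guest actually loaded with a few standard commands (noting them here in case I'm simply looking in the wrong place):

inferno@amahi:~$ modinfo aacraid | grep -i version

inferno@amahi:~$ lsmod | grep aacraid

inferno@amahi:~$ dmesg | grep -i aacraid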


In Windows 2008 R2, it also will not work.  If I manually try to update the yellow-exclamation-mark "RAID Controller" driver to the appropriate drivers from Adaptec (the same version running on the host), the entire VM instantly crashes (the play button pops out, the stop button depresses), and I get an error notification in the host's event log that states:


"VMware ESX unrecoverable error: (vcpu-0) PCIPassthru: 01:00.0 tried to modify MSI-X vectors number 32-32, but maximum supported vector number is 31"


I actually have no idea what that means, and my preliminary Google searches on it have led me nowhere.  Any help with this issue would be greatly appreciated; my goal is to get pass-through working to the Ubuntu VM so it can become my file server with full access to the RAID10 array, both to avoid the 2TB VMFS limitation and for better performance...
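If it helps with diagnosis, I can dump the card's MSI/MSI-X capability info from inside the Ubuntu guest with something like this (using the slot address lspci reported above):

inferno@amahi:~$ sudo lspci -vv -s 0b:00.0 | grep -i msi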


Here are my system specs for reference:


- ESXi 5.1 w/Update 1

- AMD Opteron 6300 16-core CPU

- 64GB ECC RAM (I have read that the free ESXi only supports 32GB of RAM -- is that the case with the trial version as well?  Are my issues related?)

- SuperMicro H8SGL-F Motherboard

- Adaptec 7805 SAS RAID Controller Card

- 128GB SSD (attached to on-board SATA controller, for datastore/VMs)

Any help, tips, or suggestions would be greatly appreciated!

inferno480
Contributor

Been tinkering some more... first, a correction: it's the 29900 and 30200 drivers I've been experimenting with, not "33000" (I don't think those exist).  I tried upgrading the host drivers to 30200 and building the 30200 drivers from source in the VM.  Same results. :(
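For the record, the way I'm updating the host-side driver is by copying Adaptec's offline bundle to a datastore and installing it from the ESXi shell, roughly like this (the bundle filename here is just an example, not the exact one I used), followed by a host reboot:

~ # esxcli software vib install -d /vmfs/volumes/datastore1/aacraid-offline-bundle.zip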

I did a tail on my vmkernel.log file while booting the VM with the PCIe Adaptec card in pass-through mode.

/var/log # tail -f vmkernel.log

2013-05-25T08:55:45.728Z cpu4:11058)AMDIOMMU: 250: remove device 0x100 (alias=0x100) from domain 993

2013-05-25T08:55:45.728Z cpu4:11058)AMDIOMMU: 178: assign device 0x100 (alias=0x100) to domain 992

2013-05-25T08:55:49.747Z cpu4:11058)AMDIOMMU: 250: remove device 0x100 (alias=0x100) from domain 992

2013-05-25T08:55:49.747Z cpu4:11058)AMDIOMMU: 178: assign device 0x100 (alias=0x100) to domain 993

2013-05-25T08:55:50.429Z cpu4:11058)VMKPCIPassthru: 2567: BDF = 01:00.0 intrType = 2 numVectors: 1

2013-05-25T08:55:50.429Z cpu4:11058)IRQ: 240: 0x39 <pcip_01:00.0> exclusive, flags 0x0

2013-05-25T08:55:51.587Z cpu4:11050)PVSCSI: 2390: Failed to issue sync i/o : Busy (btstat=0x0 sdstat=0x8) <<< this doesn't sound good :(

2013-05-25T08:56:55.285Z cpu3:11052)VSCSIFs: 2035: handle 8194(vscsi0:0):Invalid Opcode (0x85) from (vmm0:Amahi_(File_Server))

2013-05-25T08:56:55.289Z cpu3:11052)VSCSIFs: 2035: handle 8194(vscsi0:0):Invalid Opcode (0x85) from (vmm0:Amahi_(File_Server))

2013-05-25T08:56:55.760Z cpu3:11059)NetPort: 1380: enabled port 0x2000007 with mac 00:0c:29:df:e6:09

2013-05-25T08:58:29.676Z cpu2:8194)NMP: nmp_ThrottleLogForDevice:2319: Cmd 0x1a (0x4124404025c0, 0) to dev "mpx.vmhba32:C0:T0:L0" on path "vmhba32:C0:T0:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0. Act:NONE <<< same here. When this happened, I lost the console to the VM :( but it's not hung, I can still SSH to it.

2013-05-25T08:58:29.676Z cpu2:8194)ScsiDeviceIO: 2331: Cmd(0x4124404025c0) 0x1a, CmdSN 0xaf6 from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.

2013-05-25T09:01:03.001Z cpu0:8275)WARNING: VFAT: 4346: Failed to flush file times: Stale file handle

Another interesting thing: if I disable the PCI passthrough for the VM, the VM works fine -- no disappearing/hanging console, and the VM comes right up in GUI mode.  If I have the SCSI controller in PCI passthrough, it takes quite a while to boot each time and will only boot to text mode (and then, after about two minutes, the console hangs).
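In case anyone wants to compare against their own host, I'm just pulling the passthrough-related lines out of the log like so:

/var/log # grep -E 'PCIPassthru|AMDIOMMU|aacraid' vmkernel.log | tail -20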

inferno480
Contributor

Tried upgrading the firmware on my Adaptec 7805 to the latest and greatest... no change. :(

Tried installing Fedora 14 as another test case... with PCI passthrough enabled, the OS will not finish booting. :(  It sits for an extremely long time on the white "Fedora 14" progress bar but never completes.  The box never gets to the point of bringing up its network interfaces, so I have no way to get in remotely and see what it's hung up on.

This problem is extremely frustrating given the amount of $$$ I've spent on the hardware, especially since I heavily researched compatibility beforehand.  Evidently PCI passthrough is not covered by that. :(

Guess my next step is to try a different hypervisor, which is unfortunate because ESXi 5.1 is pretty neat (aside from this problem).

inferno480
Contributor

Still not having any luck whatsoever... starting to get extremely frustrated.  I installed Proxmox 3.0 temporarily to see what would happen.  Amazingly, PCI passthrough with the Adaptec 7805 worked perfectly fine; the only problem was that performance was completely terrible.

I was only able to achieve 30-40 MBytes/sec reading and writing to the RAID10 array.  In my current (non-virtualized) environment, I can saturate a GigE link at 100-110 MBytes/sec using an Adaptec 5805 and SAS 3.0Gb/s drives.  This is a gigantic bummer because I REALLY do not want to use Proxmox; it is nowhere near as polished and robust as ESXi and seems very kludgy to me.

Any suggestions?  Are there any official lines of support I can contact who may be able to help, since this is all supported hardware?  I had planned on paying for ESXi, but not if it doesn't work.  Therein lies the problem...

inferno480
Contributor

So I believe I've "sort of" resolved my issue! 

I can at least get the VMs to boot and see the Adaptec 7805 in ESXi, but I am still doing some preliminary performance benchmarking (more to come on that).

The trick?  Editing the .vmx file manually and adding the following line for the PCIe card being passed through:

pciPassthru0.msiEnabled = "FALSE"
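For context, the whole passthrough block in my .vmx now looks roughly like this (the PCI address and device/vendor IDs are from my box and partly from memory, so treat them as placeholders, and there's also a host-specific systemId line I've left out; the msiEnabled line is the only one I added by hand):

pciPassthru0.present = "TRUE"

pciPassthru0.id = "01:00.0"

pciPassthru0.deviceId = "028c"

pciPassthru0.vendorId = "9005"

pciPassthru0.msiEnabled = "FALSE"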

NTShad0w
Enthusiast

Mate,

Damn, my browser destroyed my long post before I could save/post it, argh :(

But in short:

Officially, VMDP (VM DirectPath, i.e. VM passthrough using Intel VT-d or AMD IOMMU; SR-IOV is the related solution suggested by the PCI-SIG group) was designed for passing through NIC cards only.  Of course, in practice there is a huge possibility to pass through almost anything (USB, PCIe devices, graphics cards, RAID/HBA controllers), but in theory they should be SR-IOV capable, which their vendors would have to build in, and to my best knowledge there are no such cards on the market (maybe something from LSI, but I'm not sure it really works).

There is a long thread about VMDP with ATI graphics cards here (look especially from page 22 onward, but if you want to understand the VMDP problem, read the whole long discussion):

http://communities.vmware.com/message/1668739#1668739

In my opinion VMDP will be used to pass through most kinds of hardware in the near future, but it isn't yet easy or documented for all hardware; some cards work easily and some are really hard to get working.  Most of the successes come from editing/adding parameters in the .vmx file and/or advanced parameters on the host (harder to do, and it may cause crashes on ESXi).  For example, the solution for getting VMDP graphics cards working is editing the .vmx file and adding something like this:

pciHole.start = "1200"

pciHole.end = "2200"

and then you can use more than 1 vCPU and more than 2 GB of RAM (I used 16 GB and 4 vCPUs with an ATI Radeon 5870 and some onboard Matrox G200, as I remember).

So this technology still needs tuning for graphics cards and RAID/HBA controllers, which I think will happen over the next few years, but it's very good to post the problems and solutions we find, to share important and otherwise-unknown knowledge and save other people time and money on testing. :)

So thank you, mate. :)

And about the performance of RAID/HBA controllers on bare metal, in a VM, and passed through natively via VMDP, I've analysed it and it looks like this:

- on bare metal/HW, performance is 100%

- in a VM with VMDP, performance varies but should be around 90%; I plan to use my Areca 1882-24 RAID6 controller like that in the near future, so I will write up my opinions/problems/solutions here or create a new topic and link it here.

- in a VM acting as a virtual storage appliance (VSA), where a hardware RAID controller is attached to ESXi (v4/v5) and the VSA's storage lives on VMFS-backed virtual disks that are then served back to the local or another ESXi host, performance is poor, in the range of 20-30%

- in a VM (as a VSA) where a hardware RAID controller is attached to ESXi (v4/v5) and the storage is given to the VSA as LUNs mapped directly from the RAID controller, then served back to the local or another ESXi host, performance is still rather poor, in the range of 30-50%, so it only makes sense to use SSDs this way; normal disks, especially 7.2k SATA, perform terribly in this model :(.  There are some VSAs that try to make this kind of setup fast, but honestly (in my opinion, after long years of tests) it doesn't work fast and is only good for low-performance lab/test environments (I absolutely don't use it in my lab, even though I've bought licenses for some of these products).

Additionally, about storage performance for VMware ESXi 4/5, you should note that:

- The RAID level is really important; the fastest are RAID 1, RAID 0, RAID 10 and a single disk (yep, SSDs work best as single disks or a simple RAID 1).

- To get good (optimal) performance, especially with RAID, you have to have a BBU (battery backup unit) on the RAID controller; otherwise, in RAID 5 for example, you will see less than 10 MB/s on writes, and reads suffer too.  There are ways to cheat and enable the full write cache without a BBU, but it's really dangerous: after 2-3 crashes/reboots/halts of the ESXi host your VMFS will be gone :( and that's normal, unfortunately.

- With RAID, especially RAID 5 and RAID 6, you have to choose the stripe size very carefully; it's VERY important.  For example, a VM's OS generates chunks averaging about 6 kB, so the best stripe size is something close to that, e.g. the 8 kB I used.  You really notice a huge difference once your lab server runs more than 5-8 VMs that actually hit the disk, for example pulling updates (start 8 VMs that are not up to date and they will download updates and kill your storage wonderfully), and of course when your VMs are booting.  With 20+ VMs on one datastore you will really notice the difference even if the VMs don't do much on storage.  Note also that a single VMFS volume should officially host a maximum of 8 VMs!

- With RAID 5 and RAID 6 you should also, whenever possible, choose the number of disks carefully.  It matters because a bad disk count can cut your storage performance in half (-50%): when you do something on the storage like admin tasks, or just copy something that issues large chunks (64-256 kB), the array has to do twice as many IOPS as it should.  So the idea when designing RAID 5 storage is to use 4+1 disks, then 8+1, then 16+1 (in theory you can use 12+1, but I won't recommend it, same as 6+1); for RAID 6 the idea is 4+2, 8+2 or 16+2 (and in theory 12+2, which I also won't recommend, same as 6+2).  For performance, price and safety reasons combined, I recommend RAID 5 as 4+1 plus a hot spare, or 8+1 plus 2 hot spares (ideally RAID 5EE), and RAID 6 as 8+2 or 16+2 plus a hot spare, with the caveat that RAID controllers sometimes only support up to 16 drives in a RAID 5/RAID 6 set.  These last recommendations are mainly for test/lab/home/SOHO/SMB environments; for enterprise there are different rules, depending also on the type of storage, disks and array software/vendor used.
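To make that disk-count point concrete with a rough worked example (my own illustration, not measured numbers): with RAID 5 as 4+1 and a 64 kB stripe, one full stripe of data is 4 x 64 kB = 256 kB, so a 256 kB sequential write can go down as a single full-stripe write with parity computed from the new data alone; with 6+1 the full stripe is 6 x 64 kB = 384 kB, so that same 256 kB write only partially fills a stripe and the controller has to read old data and parity back before it can write, roughly doubling the back-end I/O for the same front-end work.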

Regards

NTShad0w

NTShad0w
Enthusiast

mates,

A little update: about a week ago I installed an Areca ARC-1882-24 RAID controller (8 GB ECC cache + BBWC) in my ESXi 5.1 build 1117900 host and passed it through (VMDP) to VMs such as Windows 2008 R2 and FreeNAS 9.1 RC, and after a little tuning it is working really nicely.

Hardware used:

- Motherboard: Supermicro X8DAH+-F server board

- 144 GB RAM at 1333 MHz (forced)

- 2x Intel Xeon L5638 CPUs (six-core)

- 1x Areca ARC-1882-24 with 6x cheap Seagate 3 TB drives in hardware RAID 6 (8 kB cluster size) and 7x SSDs in RAID 6 (4 kB cluster size)

Software setup:

- ESXi 5.1 build 1117900 on the host

- 1x FreeNAS 9.1 RC VM with the Areca ARC-1882-24 RAID controller passed through via VMDP

- 1x Windows 2008 R2 VM with the Areca ARC-1882-24 RAID controller passed through via VMDP

- .vmx files modified to use mate inferno480's suggested solution: pciPassthru0.msiEnabled = "FALSE"

On these 2 systems the Areca and ESXi itself have been working stably under moderate load (500-2000 IOPS) for about 5 days now; there is no strange idle CPU consumption or anything like that, so I can say it looks stable and smooth (but for a real verdict we need to wait 2-3 months to be sure it's really stable and fast).

Soon I will also test other NAS solutions in this configuration to choose the best cheap NAS for a home/SOHO small lab environment.  I'll try to let you mates know how it rides... :P

Regards

NTShad0w

Dafty
Contributor

I'm having the same problem passing through my Adaptec 7805 and getting the guest OS to boot properly.  The card passes through just fine, but my VM hangs at the loading screen any time I add the device to it.

I edited the .vmx file using the suggestions in this thread, but I'm pretty sure I'm doing something wrong, because when I make changes it breaks the virtual machine completely and I have to restore the original .vmx file.

Do I add the pciPassthru0.msiEnabled = "FALSE" line in addition to the existing entries, or am I replacing an existing line for this device?

This is what I have now; the msiEnabled entry at the end is what I added.

"01:00.0"pciPassthru1.deviceId = "01:00.0"pciPassthru1.msiEnabled = "FALSE"

Is this correct?

I downloaded the file from the datastore to my desktop, made the changes in a text editor, saved it as a .vmx file again, then uploaded it and overwrote the original.  Am I supposed to do something different?
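Or should I be editing the file in place over SSH and reloading the VM's configuration instead of re-uploading it?  Something like this, maybe (the path and VM ID here are just examples, I'd look up the real ID with the first command)?

~ # vim-cmd vmsvc/getallvms

~ # vi /vmfs/volumes/datastore1/MyVM/MyVM.vmx

~ # vim-cmd vmsvc/reload 12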

Any help is greatly appreciated!!

-DJ
