9 Replies · Latest reply: Sep 29, 2004 8:04 PM by blackcat

    Guest able to directly access PCI cards

    blackcat Lurker

      If the host is not using a card in a certain PCI slot (i.e. it does not have drivers running for that card), then a guest OS should be able to access it directly.

       

      For example, there is a very popular Linux-only telephony/PBX application called Asterisk, which uses PCI telephony cards from Digium. These cards only have Linux drivers and can't be used in Windows.

       

      So, on a Windows machine, it would be logical to run a Linux VM in which Asterisk could run and access its telephony cards. However, you can't currently do this.

        • 1. Re: Guest able to directly access PCI cards
          Big Al Novice

           I think you will find this a problem, as Windows uses the hardware abstraction layer (HAL) to communicate with all system devices, whether it is using them or not. The virtual machines within Workstation cannot communicate with the devices directly because of the hardware abstraction layer running on the host OS.

          • 2. Re: Guest able to directly access PCI cards
            Daryll Master

            This is not likely to happen at all in the foreseeable future.

             

            We would need to do a lot more work than just making the PCI slot available to the VM.

             

            I agree that this falls under the "it would be nice to have" umbrella. Given the technical challenges behind this, however, you're probably not going to see this feature included in our products.

             

            -Daryll

            • 3. Re: Guest able to directly access PCI cards
              blackcat Lurker

              I'm aware of the issues but, as ever, it's just a case of how best to work around them.

               

              Wouldn't it be a case of replacing the PCI device's Windows driver with a customised driver which does nothing more than bridge the device across to the VM, where it is virtualised in the normal way?

               

              The bridging driver doesn't need to know what the device actually does, but it does need to stop the device being seen by the rest of Windows and provide access to I/O ports, DMA transfers and IRQs to the VM (something similar must be going on already with, for example, USB devices).
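
              To make the idea concrete, here is a minimal sketch of such a host-side stub, written against Linux's PCI driver API purely for illustration (on a Windows host it would be a stub/filter driver instead). The vendor ID is a placeholder, and the commented-out vmbridge_export() call is hypothetical - it stands in for whatever host-to-VM channel would actually hand the resources over:

                /* Sketch of a host-side "bridging" driver: it claims the PCI
                 * device so nothing else on the host touches it, then hands
                 * its resources to the VMM. */
                #include <linux/module.h>
                #include <linux/pci.h>

                #define EXAMPLE_VENDOR_ID 0xe159   /* placeholder vendor ID */

                static const struct pci_device_id bridge_ids[] = {
                    { PCI_DEVICE(EXAMPLE_VENDOR_ID, PCI_ANY_ID) },
                    { 0, }
                };
                MODULE_DEVICE_TABLE(pci, bridge_ids);

                static int bridge_probe(struct pci_dev *pdev,
                                        const struct pci_device_id *id)
                {
                    int err = pci_enable_device(pdev);
                    if (err)
                        return err;

                    /* Reserve the BARs so no other host driver can bind. */
                    err = pci_request_regions(pdev, "vmbridge");
                    if (err) {
                        pci_disable_device(pdev);
                        return err;
                    }

                    /* Hypothetical: publish ports, MMIO and the IRQ line to
                     * the VMM so the guest's native driver can reach them.
                     * vmbridge_export(pdev, pci_resource_start(pdev, 0),
                     *                 pci_resource_len(pdev, 0), pdev->irq);
                     */
                    return 0;
                }

                static void bridge_remove(struct pci_dev *pdev)
                {
                    pci_release_regions(pdev);
                    pci_disable_device(pdev);
                }

                static struct pci_driver bridge_driver = {
                    .name     = "vmbridge",
                    .id_table = bridge_ids,
                    .probe    = bridge_probe,
                    .remove   = bridge_remove,
                };

                static int __init bridge_init(void)
                {
                    return pci_register_driver(&bridge_driver);
                }

                static void __exit bridge_exit(void)
                {
                    pci_unregister_driver(&bridge_driver);
                }

                module_init(bridge_init);
                module_exit(bridge_exit);
                MODULE_LICENSE("GPL");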

              • 4. Re: Guest able to directly access PCI cards
                sherold Master

                Keep in mind one of the reasons we all love ESX: the lightweight VMkernel that allows for minimal virtualization overhead.  By creating a driver in such a fashion, there would need to be quite a bit of modification and bloating of the VMkernel (to provide DMA, IRQs, and manage the memory associated with these functions).  Perhaps with Intel's announcement of the Vanderpool project (http://www.tomshardware.com/hardnews/20040907_141505.html) some of the VMkernel's functionality can be offloaded, allowing for increased functionality in the VMkernel. *Content voluntarily deleted*

                 

                Scott

                 

                 


                • 5. Re: Guest able to directly access PCI cards
                  blackcat Lurker

                  Vanderpool is interesting, but I was hoping not to have to upgrade CPU, motherboard, memory, OS *and* VMware to do the suggested functions.

                   

                  To me, this functionality is primarily of interest to developers who have to, say, write software to interact with a hardware device on an OS other than the one on which they normally work (e.g. writing Windows drivers on a Linux platform, or writing Linux drivers on a Windows platform).

                   

                  So couldn't there be a VMware "Developer Edition" which implements this functionality without bloating the other products?

                   

                  BTW I'm not even sure that bloat is an issue. The VMware kernel must already be virtualising a PCI bus and associated hardware controls (IRQs/ports/etc). What is new here is passing some of those over to a new driver on the host (where the hard work is done) for execution.

                   

                  Mind you, I'm probably talking nonsense!!

                  • 6. Re: Guest able to directly access PCI cards
                    sherold Master

                    > BTW I'm not even sure that bloat is an issue. The VMware kernel must already be virtualising a PCI bus and associated hardware controls (IRQs/ports/etc). What is new here is passing some of those over to a new driver on the host (where the hard work is done) for execution.
                    >
                    > Mind you, I'm probably talking nonsense!!

                     

                    You make a valid point, and I may be the one talking nonsense here, but VMware doesn't actually pass the physical PCI devices it can currently use (NIC, SCSI, HBA) through to the VMs directly.  The guest operating system never sees these physical devices.  It sees emulated (for lack of a better word) devices at the VMkernel level.  The VMkernel then transforms the appropriate requests back to the physical hardware.  I have NO IDEA how they do this... I chalk it up to magic.  I would think it would take significantly more code to give virtual machines direct interaction with the physical hardware, since it would require some form of direct physical hardware abstraction layer.
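
                    For what it's worth, the pattern presumably looks something like this toy sketch (every name here is invented): the guest programs a device that exists only in software, and the VMM re-expresses the request as an ordinary call into the host's own stack:

                      /* Toy sketch of "emulate, then forward".  The guest's
                       * driver writes to what it thinks is a real NIC; the
                       * VMM catches the write and turns it into a normal
                       * host-side operation.  All names are invented. */
                      #include <stdint.h>

                      #define VNIC_REG_TX_ADDR 0x00  /* guest-physical buffer address */
                      #define VNIC_REG_TX_LEN  0x04  /* packet length                 */
                      #define VNIC_REG_TX_GO   0x08  /* "transmit now" doorbell       */

                      /* Hypothetical VMM services: */
                      void *guest_mem_ptr(uint32_t guest_phys);
                      void  host_send_packet(const void *buf, uint32_t len);

                      struct vnic {
                          uint32_t tx_addr;
                          uint32_t tx_len;
                      };

                      /* Called by the VMM when the guest writes a register of
                       * the purely virtual NIC. */
                      void vnic_reg_write(struct vnic *nic, uint32_t reg, uint32_t val)
                      {
                          switch (reg) {
                          case VNIC_REG_TX_ADDR:
                              nic->tx_addr = val;
                              break;
                          case VNIC_REG_TX_LEN:
                              nic->tx_len = val;
                              break;
                          case VNIC_REG_TX_GO:
                              /* The guest never touches real hardware: the VMM
                               * copies the packet out of guest memory and sends
                               * it through the HOST's networking stack. */
                              host_send_packet(guest_mem_ptr(nic->tx_addr),
                                               nic->tx_len);
                              break;
                          }
                      }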

                     

                    Scott

                    • 7. Re: Guest able to directly access PCI cards
                      blackcat Lurker

                      I would have thought that the VMware kernel actually does virtualise hardware devices by, for example, trapping I/O port requests (possible from the 386 onwards using its I/O permission map), then simulating what a real physical device (LAN adapter, serial port, VGA card, USB controller, etc.) would do and return.

                       

                      That's how the guest OS's drivers for the supported devices work without modification - they don't know that they're not talking to real hardware devices, but to software simulations of them.

                       

                      So I suppose that what I'm suggesting is the implementation of a "generic PCI device", where the software emulation (where configured, for a specific PCI device) is not so much an emulation as an interface to the actual device, through a special driver on the host, using the VM-host interface that the VM tools provide.
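
                      Concretely, the trap handler might look something like this sketch (all names are invented): port accesses that fall in the range claimed by the configured pass-through device go out to the special host driver, while everything else hits the usual software simulations:

                        /* Sketch of a VMM port-I/O trap handler with a
                         * "generic PCI device" bolted on.  Names invented. */
                        #include <stdint.h>

                        /* Port range claimed by the pass-through device,
                         * taken from its configuration entry. */
                        static uint16_t pt_base = 0xd000;  /* placeholder */
                        static uint16_t pt_len  = 0x20;    /* placeholder */

                        uint32_t emulated_port_read(uint16_t port);   /* existing device models */
                        uint32_t host_bridge_port_read(uint16_t off); /* hypothetical: the host
                                                                         driver does the real IN */

                        /* The 386 I/O permission bitmap delivers every guest
                         * IN instruction here. */
                        uint32_t vmm_handle_in(uint16_t port)
                        {
                            if (port >= pt_base && port < pt_base + pt_len)
                                /* Configured pass-through range: touch the
                                 * real card via the special host driver. */
                                return host_bridge_port_read(port - pt_base);

                            /* Everything else: the usual simulation. */
                            return emulated_port_read(port);
                        }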

                      • 8. Re: Guest able to directly access PCI cards
                        petr Champion VMware Employees

                        It is not that simple.  I think that I already answered this in some other thread, but I'm too lazy to look for it, so I'll retype it here.

                         

                        There are three (or maybe more) problems with generic support for PCI cards:

                         

                        (1) It is not easy to say whether the host OS is using a card or not.  ESX makes it simpler, but we do not want to run ESX, yes?

                         

                        (2) PCI cards can do busmastering.  When you program the hardware, you cannot distinguish the programming of busmastering transfers from the normal programming of the card.  So you would have to maintain a 1:1 mapping between the guest's physical memory and the host's physical memory (which means that you can run only one guest OS, and you'd have to use a special host OS).  So for hosted products busmastering is impossible; for ESX maybe, but at a very high cost.

                         

                        (3) After we eliminate busmastering, there are interrupts.  As only the guest OS has an idea how to acknowledge (and deassert) interrupts generated by the hardware, we cannot share the generic PCI device's interrupt line with any other device used by the host - all we know how to do is disable the IRQ on the PIC/APIC/IOAPIC.  Maybe PCI-X's generic IRQ enable bit on the card could lift this requirement, but I'm afraid that touching this bit could change hardware behavior.

                         

                        (4) After we eliminate cards with busmastering and interrupts - who's still in contention?  I'm afraid that nobody is.  And from those devices which are still in contention we have to eliminate devices which can lock up the PCI bus while being programmed - for example (from my experience), when you set up Matrox cards there are time windows during which framebuffer access locks up the card hard (not that it matters much - all except the old Millennium I were already eliminated by the no-busmastering condition above, though you could try to run them without busmastering).  So you do not want to run untrusted VMs, as a VM could now crash your host.

                         

                        Due to the conditions above I do not think that generic PCI card access is possible without BIG support from the hardware.  Maybe chipsets with an IOMMU could shield the system from damage by a PCI card's busmastering, but until an IOMMU is generally available I'm afraid that there is no chance anybody can implement a usable framework.

                         

                        While we are talking about it, I think that instead of direct PCI access you should ask for an API which would allow you to write your own virtual devices - that way, with the device's datasheet, you should be able to write a simple pass-through filter which knows which accesses are for busmastering/interrupts and which are not; then you can map/translate what needs to be translated, and pass the safe operations straight through.
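
                        Such a filter could be quite small.  A sketch (function names are invented, and the register offset is a placeholder for what the datasheet would actually give you):

                          /* Sketch of a per-device pass-through filter.  It
                           * knows, from the datasheet, which register carries
                           * the busmaster DMA address, translates that, and
                           * passes everything else through.  Names invented. */
                          #include <stdint.h>

                          #define REG_DMA_ADDR 0x10  /* placeholder offset */

                          /* Hypothetical VMM services: */
                          uint64_t guest_phys_to_host_phys(uint64_t gpa);
                          void     real_device_write(uint32_t reg, uint64_t val);

                          /* Called whenever the guest writes one of the card's
                           * registers. */
                          void filter_reg_write(uint32_t reg, uint64_t val)
                          {
                              if (reg == REG_DMA_ADDR) {
                                  /* The guest supplied a guest-physical address,
                                   * but the card busmasters against host-physical
                                   * memory - translate before it reaches hardware. */
                                  val = guest_phys_to_host_phys(val);
                              }
                              /* Safe registers (and the translated address) go
                               * straight through to the real device. */
                              real_device_write(reg, val);
                          }

                        Interrupt acknowledgement would need a similar device-specific hook, so that the host can deassert the line on the guest's behalf.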

                        • 9. Re: Guest able to directly access PCI cards
                          blackcat Lurker

                          But that's exactly what I meant when I said above:

                           

                          "...[i]replacing the PCI device's Windows driver with a customised driver which does nothing more than bridge the device across to the VM, where it is virtualised in the normal way.[/i]

                           

                          "[i]Virtualised in the normal way[/i]" means that the VM has a corresponding virtual device, which front-ends to the special host driver.

                           

                          None of the above are then an issue.

                           

                          It might be possible to have a fairly generalised implementation of both of these, e.g. using a configuration file to specify the device's main details (a bit like the Windows Unimodem driver, in principle). The source code of these would be available so that people could tailor them to the specific devices they need to support.