3 Replies Latest reply on Aug 11, 2008 6:26 PM by jjeff1

    server 2.0 beta - neverending pci to pci bridge

    jjeff1 Novice


      I updated a host from Server 1.0.6 to 2.0 beta. The guest Windows 2003 Server failed to boot, complaining of a missing biosinfo.inf. I replaced that file by hand and my guest now boots normally.
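For anyone hitting the same boot failure, here is a minimal sketch of the by-hand fix, assuming you restore the file from the Windows CD via the Recovery Console (the drive letters and target path are assumptions; adjust them to your install):

```shell
# Boot the guest from the Windows 2003 CD, press R for the Recovery Console,
# and log into the installation. Assuming D: is the CD and C: the system drive:
expand D:\i386\biosinfo.in_ C:\windows\system32\biosinfo.inf

# Leave the Recovery Console and reboot the guest:
exit
```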



      But once booted, the Found New Hardware wizard finds neverending PCI-to-PCI bridges. So far I have about 30 of them, plus a number of other unknown devices. The system is also finding multiple IDE and SCSI controllers. I've upgraded the VM hardware and installed the updated VMware Tools.



      Before the software upgrade the system was working normally.



      Luckily, the system seems to run fine. It's just a web server, and that part doesn't seem to have any problems.



      Any ideas about the neverending devices?



        • 1. Re: server 2.0 beta - neverending pci to pci bridge
          RDPetruska Guru
          User Moderator, vExpert

          Thread moved to VMware Server 2 beta:Installation/Upgrade forum.

          • 2. Re: server 2.0 beta - neverending pci to pci bridge
            jjeff1 wrote:

            Any ideas about the neverending devices?


            Do you really mean never ending, or do you just mean that there are a lot of them?


            You get a lot of them once you upgrade to hardware version 7. I don't know the details, but doing it this way allows us to support a bunch of stuff that people care about (like having 128 NICs in a VM, or something insane like that, which apparently matters to a bunch of people).
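For reference, the virtual hardware version is recorded in the VM's .vmx file. A sketch of the relevant line after an upgrade (the value shown is illustrative for a hardware version 7 VM, not copied from the poster's config):

```shell
# Excerpt of a .vmx config after upgrading the virtual hardware.
# Hardware version 7 is what introduces the extra PCI-to-PCI bridges.
virtualHW.version = "7"
```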

            • 3. Re: server 2.0 beta - neverending pci to pci bridge
              jjeff1 Novice


              When I first booted and installed the VMware Tools, the only problems seemed to be 4 unknown devices and the numerous PCI-to-PCI bridges.



              I thought that by removing the PCI BUS device (or maybe it was named PCI HOST BUS) after installing the 2.0 VMware Tools, Windows would redetect all the child devices, this time using the properly updated VMware drivers rather than the 1.0.6 VMware Tools drivers it had when first booting with the new hardware.



              That was apparently a mistake. It then found even more bad devices than before. I've stitched together a screenshot of my Device Manager and attached it as a jpg. I gave up hitting OK at 33 PCI-to-PCI bridges. Sometimes I'd tell Windows to go search on its own for a driver, sometimes I pointed it at the VMCI driver. It didn't seem to make any difference.
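If it helps to inventory the bridges instead of clicking through the wizard one device at a time, Microsoft's devcon utility can list devices by hardware-ID pattern. CC_0604 is the standard PCI class code for a PCI-to-PCI bridge; the pattern syntax below is a sketch from memory, so verify it against devcon's help output:

```shell
# List all PCI-to-PCI bridge devices by hardware-ID pattern
devcon find "PCI\CC_0604*"

# Show their driver/problem status (e.g. devices still waiting for a driver)
devcon status "PCI\CC_0604*"
```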



              I read about a similar problem with VMware Fusion and 32 PCI-to-PCI bridges, which is why I went all the way to 33.



              This is Windows 2003 Server SP2.