3 Replies Latest reply on Aug 8, 2008 1:54 PM by CLSsupport

    Maximum Network Performance

    bmernz Lurker

       

Can anyone tell me what sort of throughput I should expect from a single W2K3 Server guest on a Linux VMware Server 1.0.6 host with a bridged Gigabit NIC?

       

       

        • 1. Re: Maximum Network Performance
          Peter_vm Guru

          Depends on many things:

          1. host CPU(s)

          2. host CPU(s) utilization

          3. physical NIC host OS advanced settings

          4. physical network switch type and settings

5. what the guest throughput is measured against

          6. other network activity on the host at that time

          7. throughput measurement technique

          8. make and model of your physical NIC

9. whether VMware Tools is installed in the guest or not

          10. physical NIC host OS driver

...and probably 20 other things that I have forgotten to mention.

           

Ideally that would be in the 50 MB/s range - at least, that is as much as I have seen.
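          For what it's worth, if you want a number that is independent of any particular tool, a small raw TCP test between the guest and another machine is easy to reproduce. The sketch below is only an illustration - the port, transfer size, and hostname are arbitrary placeholders, nothing specific to VMware - and the sender-side figure is approximate because data may still be in flight when sendall() returns:

          # Minimal TCP throughput check (sketch): run the receiver on one machine,
          # the sender on the other, and compare the MB/s with what NetCPS reports.
          import socket, sys, time

          PORT = 5001                 # arbitrary test port (placeholder)
          CHUNK = 64 * 1024           # 64 KB per send/receive call
          TOTAL = 500 * 1024 * 1024   # 500 MB per run

          def receiver():
              srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
              srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
              srv.bind(("", PORT))
              srv.listen(1)
              conn, addr = srv.accept()
              got, start = 0, time.time()
              while got < TOTAL:
                  data = conn.recv(CHUNK)
                  if not data:
                      break
                  got += len(data)
              secs = time.time() - start
              print("received %.0f MB in %.1f s -> %.1f MB/s" % (got / 1e6, secs, got / 1e6 / secs))

          def sender(host):
              cli = socket.create_connection((host, PORT))
              buf = b"\0" * CHUNK
              sent, start = 0, time.time()
              while sent < TOTAL:
                  cli.sendall(buf)
                  sent += len(buf)
              cli.close()
              secs = time.time() - start
              print("sent %.0f MB in %.1f s -> %.1f MB/s" % (sent / 1e6, secs, sent / 1e6 / secs))

          if __name__ == "__main__":
              # no argument = receiver; argument = hostname/IP of the receiver
              receiver() if len(sys.argv) == 1 else sender(sys.argv[1])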

          • 2. Re: Maximum Network Performance
            bmernz Lurker

             

That is what I expected. So when I was getting around 7 MB/s, I wasn't unreasonably disappointed then?

             

             

1/2 - The host is an HP DL185 G5 with dual quad-core Opteron 2352s, with low utilisation.

3 - What needs tweaking? Can you point me in the right direction?

4 - The physical switch is an Allied Telesyn AT-9924T Gigabit Layer 3 managed switch.

5 - Between the Win2K3 Server guest and an HP workstation with a Gigabit NIC running XP Pro, on the same switch.

6 - Isolated for the test.

7 - I used the NetCPS utility and was getting around 50 MB/s from one workstation to another, but only about 7 MB/s to the guest.

            8 -

             

             

            02:02.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5704 Gigabit Ethernet (rev 10)

             

             

            9 - Yes - build 91891

            10 -

             

             

            tg3.c:v3.91 (April 18, 2008)

            ACPI: PCI Interrupt 0000:02:02.0[A] -> GSI 40 (level, low) -> IRQ 40

            eth0: Tigon3 (PCIX:133MHz:64-bit) 10/100/1000Base-T Ethernet 00:1c:c4:5f:61:56

            eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] WireSpeed[1] TSOcap[0]

            eth0: dma_rwctrl[769f4000] dma_mask[64-bit]

             

             

            Can you advise from here, or point me in the direction of the relevant docs?

             

             

            Thanks for your time!

             

             

            • 3. Re: Maximum Network Performance
              CLSsupport Novice

              Hi bmernz

               

I hadn't seen your thread before, but I'm a little bit further along in investigating network throughput performance.

              Peter_vm is right - there are a lot of additional parameters to consider. So let's go ...

               

               

Please download the CrystalDiskMark application and tell me your values for the following tests.

               

Local drives (NTFS or VMFS)

              Run a 50 MB sequential read/write test against drive C: of the virtual W2K3 server guest system.

              Also tell me which RAID configuration your host has and how many guests are running at the same time.

              Then run CrystalDiskMark on drive C: of your physical WXP workstation.

               

Virtual to Physical

              Then run it again on the physical WXP workstation, testing the shared drive \\server\share that you mapped to a drive letter, e.g. S:\

              Then run it again on the W2K3 server, using the shared drive \\wkstphys\test from your physical WXP workstation, mapped to a drive letter, e.g. W:\

               

Virtual to Virtual

              Then run it again on the virtual WXP workstation, testing the shared drive \\server\share that you mapped to a drive letter, e.g. S:\

              Then run it again on the W2K3 server, using the shared drive \\wkstvirt\test from your virtual WXP workstation, mapped to a drive letter, e.g. V:\

               

If you find other interesting combinations (like physWKST to physWKST, virtTerminalServer to virtFileServerDC, or virtWkst to virtWkst), then measure them and tell me.
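              If CrystalDiskMark is not handy on one of the machines, a rough stand-in for the 50 MB sequential read/write run can be scripted. The sketch below is only an approximation (the test path is a placeholder, and the read figure may be inflated by the OS cache), so use CrystalDiskMark itself wherever possible:

              # Rough stand-in for a 50 MB sequential read/write run (sketch only,
              # not a replacement for CrystalDiskMark's averaged results).
              import os, time

              TEST_FILE = r"C:\temp\diskmark_test.bin"   # placeholder path on the drive under test
              SIZE = 50 * 1024 * 1024                    # 50 MB total
              BLOCK = 1024 * 1024                        # 1 MB blocks

              buf = os.urandom(BLOCK)

              start = time.time()
              with open(TEST_FILE, "wb") as f:
                  for _ in range(SIZE // BLOCK):
                      f.write(buf)
                  f.flush()
                  os.fsync(f.fileno())                   # force the data out to disk
              write_mb_s = SIZE / 1e6 / (time.time() - start)

              start = time.time()
              with open(TEST_FILE, "rb") as f:           # note: may be served from the OS cache
                  while f.read(BLOCK):
                      pass
              read_mb_s = SIZE / 1e6 / (time.time() - start)

              os.remove(TEST_FILE)
              print("sequential write: %.1f MB/s, read: %.1f MB/s" % (write_mb_s, read_mb_s))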

               

I'm on holiday until the middle of August and cannot tell you my exact values, but they are definitely higher - although still too low.

               

After that, I will ask you to report or change some further details.

               

              For example:

               

               

1. Which NIC driver do you have installed in virtW2K3srv: vlance, e1000, or vmxnet?

              2. Which virtual hardware version do you use?

              3. Which RAID configuration do you have for testing?

              4. Have you enabled the write cache on the drives? (Be careful using that in a production environment.)

              5. Have you enabled the write cache on the RAID controller? (Be careful using that in a production environment, and only with a BBU.)

              6. Which type of write cache have you enabled? (There are several types in the LSI MegaRAID in my FSC server.)

              7. Did you install more than one Ethernet NIC in the physHOST, the physWKSTs, and the virtServer?

              8. If yes, did you configure a teaming technology?

              9. Have you enabled TCP optimization on the physical Broadcom server NIC?

              10. Which NIC do you have installed in the physical Wkst?

              11. Could you manage to install Intel Pro 1000 dual PCIe NICs in both the physical Wkst AND the physHOST, AND maybe in the virtServer, to ensure that the same TCP optimization can be used?

              12. Have you ever checked whether you can set jumbo frames (I prefer 9 kB, like Intel) and whether your physical switch can handle them? See the sketch after this list for a quick local check.
                     (Some say that Intel I/O AT or the Broadcom TCP Offload Engine removes the need for jumbo frames.)

              13. Have you ever configured NIC teaming together with jumbo frames?
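
              On the jumbo frame question (12), a quick check from the Linux host is sketched below. It only confirms that the sending interface/path MTU accepts a jumbo-sized, non-fragmenting datagram - it does not prove that the switch or the receiver passes it (the peer address is a placeholder), so an end-to-end echo test is still needed:

              # Local jumbo-frame sanity check (Linux host): try to send a UDP datagram
              # larger than a standard 1500-byte MTU with fragmentation disabled.
              import errno, socket, sys

              TARGET = sys.argv[1] if len(sys.argv) > 1 else "192.168.0.10"  # placeholder peer
              PAYLOAD = 8192          # > 1500-byte standard MTU, < 9000-byte jumbo MTU

              # Linux-specific socket options; the numeric fallbacks are the Linux values.
              IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
              IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)   # set DF, never fragment

              sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
              sock.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
              try:
                  sock.sendto(b"\0" * PAYLOAD, (TARGET, 9))   # UDP discard port
                  print("%d-byte datagram accepted by the local stack (MTU looks jumbo-capable)" % PAYLOAD)
              except socket.error as e:
                  if e.errno == errno.EMSGSIZE:
                      print("EMSGSIZE: the local/path MTU is smaller than %d bytes" % PAYLOAD)
                  else:
                      raise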

               

My questions in another thread are going in the same direction, but I haven't received comparable values from other users so far.

              You have comparable testing equipment, and I would be very interested in getting a step further!

               

My near-term goals are: Which configuration of VMserver RC1 is really optimal for network throughput?

              Where are the bottlenecks?

              My further goal is: Which performance benefits can be achieved when using 10GbE, Intel I/O AT, or iSCSI over 10GbE combined with FreeNAS, using ESX 3.5 and later?

               

              Michael

               

              FujSie RX300 S3 2xQC 1,6GHz 10GB LSI 6x144GB SAS 10k / WINHOST64 / VMserver2RC1 / WinDC32 WinTS32 WXPpro Knoppix