4 Replies Latest reply on Aug 26, 2009 2:09 PM by jwnchoate

    SAN performance in VM

    jwnchoate Enthusiast

       

      Can some of you post observed guest VM performance data for your SANs?  We're running a Compellent system with 2 controllers and 2 shelves (one 10-disk FC shelf on Tier 1, one 11-disk SATA shelf).  We've been testing with Iometer over 4 Gb fibre and we're getting these results:

      On a VM running its VMDK on Tier 1 we're getting about 90 MB/s.

      On a VM running its VMDKs on Tiers 1 and 3 we're getting about 60 MB/s.

      If we multithread our reads we're getting about 250 MB/s, but most apps run single-threaded I/O.  We really want to improve performance, but we're not sure adding shelves or disks will help.

      Can some of you give me an idea of any benchmarks you have done on your VMs?  How many shelves and disks, what vendor, and what speeds do you see?  We don't want to pressure the vendor if we're already seeing the maximum a VM can get off a SAN.

      Any numbers would really help; we'd like to know whether a guest VM can access its disk at throughputs close to 200 MB/s on a single thread.  Perhaps this is just a dream unless we spend a million+ (not likely) on a fat EMC cabinet.  If there is any hope, though, we are willing to make some upgrades.
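For anyone who wants to reproduce a comparable single- vs multi-threaded comparison outside Iometer, here is a minimal sketch in Python (the file path and sizes are placeholders; a 64 MB demo file will mostly measure the OS cache rather than the SAN, so a real run needs a file on the datastore under test that is several times larger than guest RAM):

```python
import os
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

BLOCK = 256 * 1024  # 256 KB request size, matching the Iometer runs above


def make_test_file(path, size):
    """Write `size` bytes of zeros; a real test needs a file larger than RAM."""
    with open(path, "wb") as f:
        f.write(b"\0" * size)


def read_range(path, offset, length):
    """Sequentially read one slice of the file in BLOCK-sized requests."""
    done = 0
    with open(path, "rb") as f:
        f.seek(offset)
        while done < length:
            chunk = f.read(min(BLOCK, length - done))
            if not chunk:
                break
            done += len(chunk)
    return done


def throughput_mb_s(path, size, workers):
    """Split the file into `workers` equal ranges, read concurrently, return MB/s."""
    part = size // workers
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(lambda i: read_range(path, i * part, part),
                             range(workers)))
    return total / (time.perf_counter() - start) / 1e6


if __name__ == "__main__":
    size = 64 * 1024 * 1024  # 64 MB demo file; far too small for a real SAN test
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        make_test_file(path, size)
        print(f"1 thread : {throughput_mb_s(path, size, 1):.1f} MB/s")
        print(f"4 threads: {throughput_mb_s(path, size, 4):.1f} MB/s")
    finally:
        os.remove(path)
```

Pointed at a genuinely uncached file on the LUN in question, the gap between the one-thread and four-thread numbers is the same effect described above: a single outstanding request leaves the array idle between completions, while parallel requests keep more spindles busy.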


        • 1. Re: SAN performance in VM
          bobross Hot Shot

           

          As with most performance tests, the answer is: "it depends".

          What block size are you using?  Are you using a filesystem or raw volume(s) as the Iometer target?  How many paths are you using?

          • 2. Re: SAN performance in VM
            jwnchoate Enthusiast

            Obviously, it depends, but my question is not to help me with my settings.

            What I would like to hear is what kind of SAN, how many disks, shelves, back-end loops, front-end loops, block sizes, etc. you are using if you are getting near 200 MB/s or better.

            The basic guest VM in our environment is a Windows 2003/2008 x32/x64 server with an O/S .vmdk and a data .vmdk in the same folder on the same LUN with a single 'active' path.  We do have standby paths.  I can get multiple VMs, or a single VM with multithreaded I/O, to go past 250+ MB/s over a single active path to the SAN.  That is why I am asking about single-thread I/O in your environment.

            My goal is to determine whether single-thread I/O at or above 200 MB/s is possible with a Windows guest VM using VMDKs on a VMFS3 volume, and if so, what it takes to get there.

            • 3. Re: SAN performance in VM
              bobross Hot Shot

               

              Well, I don't have a SAN with shelves or loops.  I have a SAN with two ISEs and a fabric.  I just measured single-stream performance from a Dell 1950 (3.5 U4), Windows 2003 R2 VM, single path, single HBA (QLogic 24xx), to a single NTFS volume, single-threaded with queue depth = 1, using Iometer at a 256 KB block size, and got 182 MB/sec.

              Sorry to hear that your similarly configured VM's performance is 90 MB/sec.
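The 256 KB block size in that measurement matters: sequential throughput climbs steeply with request size until per-request overhead stops dominating. A quick way to see the effect is the sketch below (a hedged illustration, not a substitute for Iometer; the file path and 64 MB size are placeholders, and a file that fits in the OS cache will show memory bandwidth rather than SAN limits):

```python
import os
import tempfile
import time


def read_throughput_mb_s(path, block):
    """Read the whole file sequentially with `block`-byte requests; return MB/s."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block):
            pass
    return size / (time.perf_counter() - start) / 1e6


if __name__ == "__main__":
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        with open(path, "wb") as f:
            f.write(b"\0" * (64 * 1024 * 1024))  # 64 MB demo file (placeholder)
        for block in (4 * 1024, 64 * 1024, 256 * 1024):
            print(f"{block // 1024:4d} KB blocks: "
                  f"{read_throughput_mb_s(path, block):.1f} MB/s")
    finally:
        os.remove(path)
```

Run against an uncached file on the datastore, the small-block numbers typically fall well below the large-block ones, which is why comparing results at matching block sizes (as both posters do here) is essential.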

               

               

              • 4. Re: SAN performance in VM
                jwnchoate Enthusiast

                 

                Thanks, that tells me it's possible to get there.  Anyone else out there got any additional info?

                BTW, yes, we also used the 256K block size in our Iometer tests, so that's good to know.