      • 15. Re: MSA2312i and ESX4.1 slow performance
        DaIceMan Enthusiast

          Status Update:

             After some further debugging we narrowed the issue down to one of the two MSA controllers. Somehow the second storage controller was presenting the LUNs normally but did not allow access to one of them. In ESX that LUN would appear as available but not connected, it was not owned by any controller, and connections through the switch would apparently just time out while multipathing (the MSA presents all LUNs on all ports); not even an HBA/VMFS rescan helped. After restarting the second SC everything went back to normal. The fault must have severely confused the multipathing.
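
             For anyone hitting something similar, the path state is easy to check from the service console. A minimal sketch, assuming vmhba33 stands for your own iSCSI adapter:

               # rescan the iSCSI adapter after restarting the SC
               esxcfg-rescan vmhba33
               # show each device with its owning plugin, path selection policy and working paths
               esxcli nmp device list

             If a LUN still shows no working paths after the rescan, that points at the SP rather than at ESX.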

           While we were at it, we ran some file copy tests from physical to virtual and from virtual to virtual. An 8GB copy from a virtual server (2008 R2) to a Z200 workstation ran at a sustained 110MB/s, which is about the limit of a 1Gbit connection (the host was directly connected to one port of one SC). The opposite direction, a write, maxed out at around 55MB/s (it is a RAID 6 vdisk of six 1TB SATA disks, split into 8 small boot LUNs and 2 larger ones; we run at most 20 low-I/O VMs, so that was enough).
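
           As a point of reference, 1Gbit/s is 125MB/s raw, so roughly 110-118MB/s of payload after TCP/IP and iSCSI overhead is as good as a single link gets; the 110MB/s reads are effectively wire speed. To take the network out of the picture entirely you can run a crude sequential test from the service console straight against the datastore. A rough sketch (the datastore name and sizes are only examples, and the read may be flattered by array cache; remember to remove the test file):

             # sequential write: 4GB of zeros onto the VMFS volume
             time dd if=/dev/zero of=/vmfs/volumes/MyDatastore/ddtest.bin bs=1M count=4096
             # sequential read of the same file
             time dd if=/vmfs/volumes/MyDatastore/ddtest.bin of=/dev/null bs=1M
             rm /vmfs/volumes/MyDatastore/ddtest.bin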

          We also swapped the QMH4062 out of one of our hosts and used the all-software dependent iSCSI instead (we have a quad-port Tigon3 NC325 mezzanine, so we used one of its ports through the switch; it was the only "switched" port) to test the difference. With this kind of storage there was no difference in read or write performance, but CPU usage was naturally higher. With the software initiator the write DAVG bounced between 150ms and 1500ms but averaged around 200ms, while in read it was around 12ms. The write DAVG on the QLogic hosts during sustained writes was also around 150ms, indicating that our MSA (the SATA disks) was the bottleneck. In read the DAVG was likewise around 12ms at 110MB/s of bandwidth. These tests were all done with direct host-to-SC connections bypassing the switch, with jumbo frames ON on the QMH4062 controller and the SC but not on the software iSCSI, so I was expecting more issues on the software side, but in practice there was no difference.

          I then enabled jumbo frames on the relevant vSwitch and vmk port and rebooted. The read tests showed a dramatic drop in performance (less than 30MB/s instead of 110), so something is not working here, either on the physical switch (ProCurve 2810-24) or on the ESX side, so we decided to disable jumbo frames (we don't have such an intensive, large-file workload) and put an end to the pain for the moment.
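
          For what it's worth, the DAVG numbers above were read off esxtop on the service console. A quick sketch of how to watch them while a test is running:

            # start esxtop, then press:
            #   d  for the disk adapter view (per vmhba)
            #   u  for the disk device view (per LUN)
            # watch DAVG/cmd (device/array latency) and KAVG/cmd (time spent in the kernel)
            esxtop

          A high DAVG with a low KAVG, as in our case, points at the array rather than at the host.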

        • 16. Re: MSA2312i and ESX4.1 slow performance
          Syl20m Novice

          Hello,

           

          Sorry for this so late feedback but I was very busy those last days!

          I ran some further tests and haven't yet identified the bottleneck in my infrastructure.

          In fact, we realized that if we copy from a physical server on the VMware management network to a VM on the production LAN (with a router with a 1500 MTU in between), the copy is more stable but still slow.

          When we copy from a physical machine to a VM inside the production LAN, the transfer rate starts above 50MB/s and sometimes drops to 5-10MB/s.

          So I decided to play with the MTU inside the VMs and changed all the vNICs to Flexible in order to have the choice of MTU in the Windows driver. With a 1300 MTU inside all the VMs, copies between a physical machine and a VM inside the production LAN become as stable as copies from the management LAN to the production LAN (through the router).
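
          For reference, a don't-fragment ping from inside a VM shows what MTU actually survives the path. A rough sketch, where 192.168.1.10 stands for the copy target and 1472 is 1500 minus the 28 bytes of IP/ICMP headers:

            rem fails with "Packet needs to be fragmented but DF set" if the path MTU is below 1500
            ping -f -l 1472 192.168.1.10
            rem show the MTU currently configured on each interface (Windows 2008/R2)
            netsh interface ipv4 show subinterfaces

          Lowering the -l value until the ping succeeds gives the real path MTU.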

          I don't know what changes I can make to improve performance. Do you think migrating the RAID5 to RAID10 would help?

          • 17. Re: MSA2312i and ESX4.1 slow performance
            Syl20m Novice

            Hi DaIceMan,

            I haven't found any solution yet. Have you solved your problem since your last post, or have you run any other tests? Have you called VMware support?

            I analyzed my VM logs and don't see anything. I just get regularly, in the events of the two ESX hosts (but not at the same time): "Lost access to volume (Datastore LUN0) due to connectivity issue", followed 5 to 20 seconds later by "Successfully restored access to (Datastore LUN0)".
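
            For reference, these events can be pulled straight out of the logs from the service console. A small sketch (the file may be /var/log/vmkernel or /var/log/vmkernel.log depending on the build):

              # every loss/restore event with a few lines of context
              grep -i -B3 -A3 "Lost access to volume" /var/log/vmkernel
              grep -i "restored access" /var/log/vmkernel
              # warnings only
              tail -n 200 /var/log/vmkwarning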

            Did you have those same messages?

            Thanks in advance,

             

            Syl20m

            • 18. Re: MSA2312i and ESX4.1 slow performance
              binoche Expert
              VMware Employees

              Hi Syl20m,

               

              Could you please upload /var/log/vmkernel.log? We can check what could be wrong, thanks.

              • 19. Re: MSA2312i and ESX4.1 slow performance
                Syl20m Novice

                Hi,

                Here is the vmkernel log file from the first ESX server. Thanks in advance for your analysis!

                • 20. Re: MSA2312i and ESX4.1 slow performance
                  binoche Expert
                  VMware Employees

                  The messages below look strange to me; I am not sure whether they indicate that something is wrong.

                   

                  Mar 15 04:43:23 ESX1 vmkernel: 47:05:55:37.685 cpu12:4269)WARNING: vmklinux26: __kmalloc: __kmalloc: size == 0 
                  Mar 15 04:43:23 ESX1 vmkernel: 47:05:55:37.733 cpu13:4271)WARNING: vmklinux26: __kmalloc: __kmalloc: size == 0
                  Mar 15 04:43:23 ESX1 vmkernel: 47:05:55:37.737 cpu13:4267)WARNING: vmklinux26: __kmalloc: __kmalloc: size == 0
                  Mar 15 04:43:23 ESX1 vmkernel: 47:05:55:37.740 cpu13:4272)WARNING: vmklinux26: __kmalloc: __kmalloc: size == 0
                  Mar 15 04:43:25 ESX1 vmkernel: 47:05:55:39.729 cpu15:4264)WARNING: vmklinux26: __kmalloc: __kmalloc: size == 0
                  Mar 15 04:43:25 ESX1 vmkernel: 47:05:55:39.753 cpu8:4262)WARNING: vmklinux26: __kmalloc: __kmalloc: size == 0
                  Mar 15 04:43:25 ESX1 vmkernel: 47:05:55:39.770 cpu15:4274)WARNING: vmklinux26: __kmalloc: __kmalloc: size == 0
                  Mar 15 04:43:25 ESX1 vmkernel: 47:05:55:39.781 cpu12:4269)WARNING: vmklinux26: __kmalloc: __kmalloc: size == 0
                  Mar 15 04:43:25 ESX1 vmkernel: 47:05:55:39.786 cpu12:4273)WARNING: vmklinux26: __kmalloc: __kmalloc: size == 0
                  Mar 15 04:43:25 ESX1 vmkernel: 47:05:55:39.791 cpu13:4271)WARNING: vmklinux26: __kmalloc: __kmalloc: size == 0
                  Mar 15 04:43:26 ESX1 vmkernel: 47:05:55:39.849 cpu13:4272)WARNING: vmklinux26: __kmalloc: __kmalloc: size == 0
                  Mar 15 04:43:26 ESX1 vmkernel: 47:05:55:39.910 cpu15:4264)WARNING: vmklinux26: __kmalloc: __kmalloc: size == 0
                  Mar 15 04:43:26 ESX1 vmkernel: 47:05:55:39.918 cpu22:4270)WARNING: vmklinux26: __kmalloc: __kmalloc: size == 0
                  Mar 15 04:43:26 ESX1 vmkernel: 47:05:55:39.921 cpu17:4268)WARNING: vmklinux26: __kmalloc: __kmalloc: size == 0
                  Mar 15 04:43:26 ESX1 vmkernel: 47:05:55:39.926 cpu13:4277)WARNING: vmklinux26: __kmalloc: __kmalloc: size == 0
                  Mar 15 04:43:26 ESX1 vmkernel: 47:05:55:39.929 cpu13:4275)WARNING: vmklinux26: __kmalloc: __kmalloc: size == 0
                  • 21. Re: MSA2312i and ESX4.1 slow performance
                    opbz Hot Shot

                    Have you looked at /var/log/vmkiscsid.log?
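
                    Something along these lines from the console is usually enough to see whether it is still noisy:

                      # watch the software iSCSI daemon log while reproducing the problem
                      tail -f /var/log/vmkiscsid.log
                      # or just look for obvious errors after the fact
                      grep -i -E "error|fail|timeout" /var/log/vmkiscsid.log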

                     

                    I have not used the MSA boxes, but I have seen similar issues with EqualLogic boxes, and in my case the issue came down to misconfiguration.

                     

                    I would suggest the following:

                    Ensure you have the latest version of the iSCSI configuration guide for VMware for the MSA. I have seen cases where config details were copied over from different devices and caused issues. Also check for firmware updates...

                     

                    If it is active/passive storage you will most likely need separate subnets;

                    if it is active/active you only need one.

                     

                    Check with esxcli that all your vmknics are associated with the correct vmhba (see the example below).
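
                    On 4.x the software iSCSI port binding can be listed like this; vmhba33 and vmk1 are placeholders for your own adapter and vmkernel port, and this only applies to the software initiator, not to the QLogic hardware HBAs:

                      # list the vmknics bound to the software iSCSI adapter
                      esxcli swiscsi nic list -d vmhba33
                      # bind a missing vmkernel port if needed
                      esxcli swiscsi nic add -n vmk1 -d vmhba33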

                     

                    If you use jumbo frames, ensure they are enabled all the way through your iSCSI network. Also check for any particular settings you have on your network.
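
                    A quick end-to-end check from the host; a sketch where 10.0.0.10 stands for one of the SP ports, and 8972 is 9000 minus the 28 bytes of IP/ICMP headers (adding -d to set the don't-fragment bit makes the test stricter, where your vmkping build supports it):

                      # large ping from the iSCSI vmkernel network to the SP;
                      # if jumbo frames are not clean end to end this will fail or fragment
                      vmkping -s 8972 10.0.0.10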

                     

                    By the way, vmkiscsid.log throws out a lot of rubbish while it is connecting to the iSCSI targets, but once proper connections are made it usually quiets down.

                     

                    hope this helps...

                    • 22. Re: MSA2312i and ESX4.1 slow performance
                      DaIceMan Enthusiast

                      Syl,

                         we are satisfied with our current setup, as it won't give more throughput. We get about 30-40MB/s copying from VM to VM on different hosts, which is acceptable; from VM to a physical machine we can get around 80MB/s. We will replace our MSA with a new-generation one with 12x 2.5" 600GB SAS disks, on which I will run further tests on a RAID10 volume for best overall performance. Our present write limit on the slower RAID6 volume is about 60MB/s; in read we can get a little over 110MB/s, maxing out the Gbit connection.

                       

                           Regarding your problem, I would suggest taking it one step at a time. First disable all jumbo frame support on the vSwitches, vmkernel ports, NICs and your 1810 switches, and instead enable Flow Control on the ports where your iSCSI NICs and your four storage ports are connected; that is more important than jumbo frames if you cannot support both (and the 1810, like our 2810, cannot). Then run some tests.
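
                           On the ESX side the revert looks roughly like this; a sketch where vSwitch2, the port group name and the IP are placeholders for your own iSCSI networking, and where flow control itself is enabled from the switch and MSA management interfaces rather than from ESX:

                             # put the iSCSI vSwitch back to the default 1500 byte MTU
                             esxcfg-vswitch -m 1500 vSwitch2
                             # re-create the iSCSI vmkernel port without the jumbo MTU
                             esxcfg-vmknic -d "iSCSI-1"
                             esxcfg-vmknic -a -i 10.0.0.21 -n 255.255.255.0 "iSCSI-1"
                             # verify the MTU column
                             esxcfg-vswitch -l
                             esxcfg-vmknic -l

                           If that vmkernel port is bound to the software iSCSI adapter you may need to re-bind it after re-creating it.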

                       

                          The MSA controllers are actually active/active, though they adhere to the ALUA specification. The two SPs present all LUNs on all ports, but each SP serves I/O directly only for the LUNs it owns; if a request arrives at the SP that does not own the LUN, it simply hands the I/O over internally to the owning SP (the two SPs are interconnected by a bus inside the MSA). This supposedly keeps the path active in case of a failure. Personally, I have not yet found any relevant performance data on how much this proxying loads the non-owning SP compared to having directly connected paths to the respective owning SPs, or on whether enabling Round Robin instead of MRU really helps in this situation, but that is another story.
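
                          If you want to see how ESX has claimed the LUNs, it is visible from the console; a sketch where the naa ID is a placeholder (check HP's current recommendation before actually changing the policy):

                            # show the SATP and path selection policy chosen for each device
                            esxcli nmp device list
                            # switch one device from MRU to Round Robin
                            # (replace naa.XXXXXXXX with the device ID from the list above)
                            esxcli nmp device setpolicy --device naa.XXXXXXXX --psp VMW_PSP_RR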

                       

                           If you have any doubts about your MSA's behaviour, and if you can, try this test, which works with a maximum of 4 ESX hosts:

                       

                      One at a time, disconnect one SP port and one iSCSI NIC and connect them directly to each other, bypassing the switch (be sure you have auto MDI-X capable ports or use a crossover cable; in our case they were all auto MDI-X through the blade interconnects), then disconnect the second iSCSI NIC from the switch. Give the kernel time to fail over and re-establish another path before doing the next one (wait about a minute), and "Rescan All" from the storage adapters. Do this for all 4 SP ports. In the end you will have 4 iSCSI NICs connected directly to the 4 SP ports (if you are using 4 hosts, of course). Given the MSA behaviour, all the hosts will still be able to see the LUNs. Note that the MSA path detection apparently behaves differently if the second ports of both SPs are on a different subnet than if they are on the same one, but that is another story.

                      Now you should see each vmhba and its relevant direct path. If you don't while you are doing this, the SP may be malfunctioning, as happened to us, so go ahead and restart the relevant SP from the MSA web interface and "Rescan All" afterwards. If everything looks OK, you can run I/O tests from physical to virtual and from virtual to virtual and see how it performs.
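
                      A compact way to verify each vmhba really has its direct path during this exercise; a sketch from the service console, where vmhba33 is just a placeholder for whichever adapter you moved last:

                        # one line per path, showing the adapter, target and LUN it runs over
                        esxcfg-mpath -b
                        # re-run after each cable move, together with a rescan of the adapter
                        esxcfg-rescan vmhba33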

                       

                       Another note, which may or may not be relevant for your setup: if you are using a LUN 0, that LUN will show up as "Enclosure" on all the other hosts which do not have explicit access to it, so don't be alarmed. Access to the LUN is not allowed unless it is explicitly enabled (or allowed by the default behaviour) on the MSA.

                       

                        You can monitor the vmkwarning file from the console (tail -f /var/log/vmkwarning) while running tests and disconnecting and reconnecting paths.

                       

                        These tests will help you isolate the problem and tell whether it is MSA, switch or ESX related (a configuration issue or a malfunction).
