edinburgh1874
Enthusiast

CX3-40f Performance on ESXi 5.1

Hi All,

We are having hardware issues with our MSA P2000 G3s, so we have deployed an EMC CX3-40f to offload some of our VMs onto. The EMC has been donated to us by another part of our company, so it comes with no support or PowerPath licenses.

I'm running into a few performance issues, and am hoping someone can help me.

**Our setup**

C7000 enclosure with BL460c G7 blades -> QLogic Q2462 HBAs -> Brocade 8Gb switches (HP 4/24) -> EMC CX3-40f (FLARE 26) and MSA P2000

Installed OS: ESXi 5.1, build 914609

Test LUN (CX3) - non-meta LUN, 12 x 15K FC disks, RAID 10

Test LUN (P2000) - 4 x 10K SAS disks, RAID 10

Test VM - 4 vCPUs, test disk on a separate PVSCSI (Paravirtual) controller

Test datastore - VMFS-5, 1MB block size

Brocade zoning - single-initiator/single-target zones (one host HBA port and one array port per zone)

Multipathing: FIXED path policy (VMW_PSP_FIXED) via VMW_SATP_CX
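
For reference, this is roughly how I checked and set the claiming from the ESXi shell - a minimal sketch; the SATP and PSP names are the standard ESXi 5.1 ones, and device IDs will obviously differ on other hosts:

esxcli storage nmp satp list
esxcli storage nmp device list
esxcli storage nmp satp set --satp VMW_SATP_CX --default-psp VMW_PSP_FIXED

The first two commands confirm that VMW_SATP_CX has claimed the CX3 LUNs and show which PSP each device is actually using; the last one makes FIXED the default PSP for anything VMW_SATP_CX claims in future.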

**Troubleshooting steps**

Tested 64, 128 and 256 stripe element sizes - no change.

Disabled read/write cache - no change.

Disabled prefetch - no change.

Reduced the QLogic execution throttle from ~65000 to 256 (see the sketch after this list for how the driver parameters can be checked).

Updated the HBA firmware via the HP update ISO.

Updated the qla driver via VMware Update Manager.

Installed Windows on a physical server - same results as on the ESXi host.
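
For the throttle change above, this is the general approach from the ESXi shell - a sketch only, since the exact parameter name depends on the qla2xxx driver version in use:

esxcli system module parameters list -m qla2xxx
esxcli system module parameters set -m qla2xxx -p "<parameter>=256"

The first command lists the driver's current options (look for the queue depth / execution throttle parameter for your driver build); the second sets it, and the host needs a reboot before the new value takes effect.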

I'm using HD Tune Pro to obtain some IOPS and bandwidth results:

**CX3**

512 bytes - 80 IOPS - 0.1MB/s

4KB - 272 IOPS - 1MB/s

64KB - 221 IOPS - 11MB/s

1MB - 100 IOPS - 83MB/s

Random - 138 IOPS - 62MB/s

Average access time: 4-5 ms

**MSA P2000**

512 bytes - 1500 IOPS - 0.7MB/s

4KB - 1057 IOPS - 4MB/s

64KB - 887 IOPS - 55MB/s

1MB - 409 IOPS - 409MB/s

Random - 610 IOPS - 309MB/s

As you can see, the MSA P2000 is performing much better than the CX3, even though the MSA LUN only has 4 disks versus 12 in the CX3 LUN!

Is anyone else using the CX3 with 5.1 successfully, or am I putting too much faith in this old unit?

3 Replies
edinburgh1874
Enthusiast

I just tried a further step - forcing the HP blade HBA to 4Gb... suddenly the 4K IOPS test jumped from 80 to 8000 IOPS, which is looking much better.

So it looks like using 8Gb or AUTO on the QLogic HBA is causing some sort of issue... if this is the case, then the P2000 will need to be limited to 4Gb too.

My understanding is that 4Gb Fibre Channel should work OK on 8Gb switches without having to force the whole fabric to 4Gb... am I wrong?
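
If it does turn out to be a speed negotiation problem, the Brocade side lets you set speed per port rather than fabric-wide - just a sketch, assuming the HP 4/24 runs the standard Brocade FOS CLI and using a placeholder port number:

switchshow           (identify which ports the blade HBAs and array SPs log in on)
portcfgspeed 5 4     (lock port 5 to 4Gb; 0 sets it back to auto-negotiate)
portshow 5           (confirm the negotiated speed and check for link errors)

Each FC link negotiates its own speed, so the P2000 ports should be able to stay at 8Gb/auto; the trade-off is that a 4Gb host link caps that host's bandwidth to everything behind the switch, including the P2000.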

Anyone else seen issues like this?

mcowger
Immortal

1) The CX3 was designed in early 2006. It is now a 7-year-old array. It's certainly not fair to compare it to a P2000, which was designed in the last couple of years.

2) Disabling the caches certainly won't help your performance.

3) With a non-meta LUN you are only using one controller's worth of performance.

4) Are the RAID sets still zeroing/initializing? That won't help either (see the sketch after this list for one way to check).

5) You should understand that the CX3 is not supported with ESXi 5.1 in any way, shape or form.
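
On point 4, you can check from the array's CLI - syntax from memory, so treat it as a rough sketch; the SP IP and credentials are placeholders and the exact field names may differ on FLARE 26:

naviseccli -h <SP-A IP> -user <user> -password <pass> -scope 0 getlun

In the getlun output, look at fields such as "Prct Bound" and "Prct Rebuilt" for each LUN - anything below 100 means the LUN is still binding or rebuilding, and performance will suffer until that finishes.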

--Matt VCDX #52 blog.cowger.us
edinburgh1874
Enthusiast

Hi mcowger,

Thanks for your answer - yes, I understand all of what you've listed. I only turned those options off to see if there was any effect, and I'm well aware that the CX3 is unsupported and quite old now.

My point is that I appear to have improved the performance by forcing the HP blades to 4Gb; I'm now getting 8000 IOPS.

I wondered if anyone could shed some light on why this might have such an effect. Running the whole fabric at 4Gb isn't ideal, as the P2000 shares it.

Cheers
