VMware Cloud Community
jrmunday
Commander

EMC VMAX 40K - NMP vs PowerPath/VE

Hi All,

We're currently replacing our HDS SAN with an EMC Symmetrix VMAX 40K, and I'm midway through benchmarking and getting to grips with the new storage.

In my testing I found that when using NMP, I got the best performance with the Round Robin path selection policy and iops=1 (as opposed to the default iops=1000). I've read many conflicting recommendations regarding this setting, but in my testing IOPS and throughput more than doubled at the same latency. This held even in my limited scale-out tests, with the trade-off of higher CPU utilisation for the increased performance.
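
For reference, the change can be made with something along these lines on vSphere 5.x (the naa device ID below is just a placeholder, not one of our real devices):

# Set Round Robin with an IO operation limit of 1 on a specific device
esxcli storage nmp psp roundrobin deviceconfig set --device naa.60000970000xxxxxxxxxx533030303031 --type iops --iops 1

# Optionally, add a SATP claim rule so new VMAX devices default to RR with iops=1
esxcli storage nmp satp rule add -s VMW_SATP_SYMM -V EMC -M SYMMETRIX -P VMW_PSP_RR -O "iops=1"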

I'm just about to start the PowerPath MPP testing, and hopefully it will simply be so much better than NMP that the business case for buying it speaks for itself. If, however, the performance results are similar to NMP, then I might struggle to support the business case for the additional spend.

So with this in mind, if you are using EMC storage, specifically the VMAX 40K with NMP, then I would appreciate any feedback on your experience and configuration. Is there anything that I must absolutely do or avoid doing? If so, please back this up with some information so that I can understand the decision-making process and weigh up the pros and cons.

Any additional feedback or comments would also be appreciated.

Cheers,

Jon

vExpert 2014 - 2022 | VCP6-DCV | http://www.jonmunday.net | @JonMunday77
DavoudTeimouri
Virtuoso

Hi,

I'm using a Symmetrix DMX3 and a VNX7500 in my environment, with NMP for multipathing. We added the VNX7500, with 15 TB of fast disks and 4 TB of SSD disks, to our environment for our Composer project. Each LUN has four paths; on the VNX two of them are active I/O, while on the DMX all of them are active I/O.

Changing the IO operation limit to 1 improved throughput on both storage arrays, the same as you saw.
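
If it helps to verify, the current limit can be checked per device with something like this (the device ID is just a placeholder):

esxcli storage nmp psp roundrobin deviceconfig get --device naa.xxxxxxxxxxxxxxxx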

Also, based on our storage team's advice, I changed my hosts' IO block size to 64 KB, and that change improved performance as well.

Another important thing is queue depth. I didn't see any queue-full conditions in my logs, but after increasing the HBA queue depth to 128 and "Disk.SchedNumReqOutstanding" to 64, latency was reduced on my hosts. Of course, the right values depend on your environment size and your storage array; they need to be calculated.
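
For reference, the changes can be made with something like the following (QLogic HBAs are assumed here; use the Emulex or Brocade module and parameter names if that's what you have, and size the values for your own environment):

# Raise the HBA queue depth (QLogic driver; takes effect after a reboot)
esxcli system module parameters set -m qla2xxx -p ql2xmaxqdepth=128

# Raise Disk.SchedNumReqOutstanding (a global advanced setting on vSphere 5.0/5.1)
esxcli system settings advanced set -o /Disk/SchedNumReqOutstanding -i 64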

There is no CPU bottleneck on my hosts.

I haven't used PowerPath in my tests yet, so please share your test results with me.

BR

-------------------------------------------------------------------------------------
Davoud Teimouri - https://www.teimouri.net - Twitter: @davoud_teimouri Facebook: https://www.facebook.com/teimouri.net/
jrmunday
Commander

Hi Davoud,

Here are my test results for NMP vs PowerPath/VE:

EMC Symmetrix VMAX 40K testing on vSphere 5.0

Cheers,

Jon

vExpert 2014 - 2022 | VCP6-DCV | http://www.jonmunday.net | @JonMunday77