VMware Cloud Community
djciaro
Expert

Performance on StorageTek 6140 array

Hi Guys,

I am looking for thoughts, feedback, or similar experiences from anyone using StorageTek 6140 arrays.

We have an environment made up of HP blades with Virtual Connect, running ESXi 5.1. Our storage layout is: boot from SAN, local SSD disks for host cache, and datastores on one of:

  • VPLEX distributed (SAS and SSD – auto-tiering)
  • VPLEX distributed (SAS only)
  • VNX local only disks (NL-SAS)

These disks are housed in a VNX 5700 with FAST Cache enabled.

For DEV and test machines we are using StorageTek 6140 arrays with FC disks.

Datastores are all VMFS-5, sized at 1 TB, 1.6 TB and 2 TB.

The latency on the 6140 is terrible and is having a very negative impact on the virtualization project.

The problem is that the four 6140 boxes have already been paid for, and maintenance and running costs are cheap, so it is hard to get management to agree to scrapping them.

Each host has 4 HBAs (FCoE on CNAs). The 6140 LUNs use the VMware MRU PSP, and all EMC LUNs use PowerPath/VE 5.8. The VPLEX LUNs have 6 paths (4 local and 2 remote); the VNX and 6140 LUNs have 4 paths each.
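One thing worth checking with MRU on an older active/passive array like this is whether all the 6140 LUNs have ended up active on the same controller, leaving the other idle. A rough sketch of how I inspect this from the ESXi 5.1 shell (the naa ID below is a placeholder, substitute your own device; whether Round Robin is safe depends on the array firmware, so treat the last command as an experiment, not a recommendation):

```shell
# List every device with its current SATP and PSP
esxcli storage nmp device list

# Show all paths for one 6140 LUN, to see which are active vs. standby
# (naa.600a0b8000xxxxxxxx is a placeholder -- use your own device ID)
esxcli storage nmp path list --device naa.600a0b8000xxxxxxxx

# Only if the array/firmware supports it: try Round Robin on a test LUN
esxcli storage nmp device set --device naa.600a0b8000xxxxxxxx --psp VMW_PSP_RR
```

If most LUNs show their active path through one controller, rebalancing the preferred ownership on the array side may help before any PSP change.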

In performance tests the VPLEX and VNX LUNs comfortably outperform the 6140. We have used many different scenarios (2, 4, 6, 8, 16 and 32 threads) and different VMDK sizes: 1 GB, 10 GB, 100 GB, 150 GB and 500 GB.

We also used the test set from this VMware KB article:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=201913...

With a higher number of workers we see latencies of 160 ms on the 6140.
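For what it's worth, we cross-check the guest-side numbers with esxtop in batch mode on the host, which separates array latency from host-side queuing. A minimal capture sketch (the interval, sample count and filename are arbitrary choices, not anything prescribed):

```shell
# Capture 60 samples at 5-second intervals into a CSV for offline analysis
# (load into Windows perfmon or esxplot). In the disk views, DAVG is
# device/array latency and KAVG is kernel/queuing latency, so a high DAVG
# with low KAVG points at the 6140 itself rather than the host.
esxtop -b -d 5 -n 60 > esxtop-6140-test.csv
```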

The only time the 6140 provides better performance is when we use a large disk and thereby eliminate the benefit of caching on the storage array. In real-world test cases (financial applications running batches on full working sets, reading large files) the 6140 was also faster than NL-SAS on the VNX. The only explanation (theory) I have is that with small files the reads are served from cache rather than actually hitting the disks, so the VNX beats the 6140; but when we read large files the I/O reaches the disks, and the FC disks in the 6140 are faster than the NL-SAS disks in the VNX.

So, is anyone using the StorageTek 6140 with vSphere 5.1? What sort of experiences have you had? Any thoughts on improving performance? We asked EMC to support the 6140 behind VPLEX, but that was declined.

Many thanks

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!
1 Solution

Accepted Solutions
djciaro
Expert

Nobody else seems to be using the StorageTek 6140 with VMware (which I can perfectly understand). In the end we decided to retire the 6140s and replace them with Tintri boxes after a successful POC. We are going with the Tintri T540 (even though there is a new T600 series, the order was placed before it was released).

