VMware Cloud Community
RalphyBoy
Contributor

ESXi Shared Storage - EMC vs Dell vs NetApp vs HP

Hi,

I am investigating the purchase of a new SAN for our VMware ESXi servers. Currently we are using a Dell MD3000i and the performance from host to disk is slow. It is preventing me from running our SQL Server virtualised. I would like to virtualise my entire infrastructure, and I need storage with suitable performance.

I have started investigating the options that are available for our budget and have come up with the following list from each vendor.

  • EMC VNXe 3300
  • Dell EqualLogic (Not sure which model is equivalent yet)
  • NetApp 2040
  • HP P2000

Out of these options and from your experience, which one will give us the best performance from host to disk?

Can you recommend something different that would give us a better storage solution?

Thanks for your opinions.

19 Replies
mcowger
Immortal

There is absolutely no way to predict that.

You could ask about interface protocols, but I'm pretty sure all of those support 10GbE.

Your performance will be dictated by the number of drives, their type, RAID levels, workload profile, etc.

Any of the above can perform well.

Do you have more information about your environment and requirements?

(Fair warning: I work for EMC, but clearly nothing I've said above is product specific.)

--Matt VCDX #52 blog.cowger.us
RalphyBoy
Contributor

Sorry, I know I should be more specific...

Scenario: let's just say that each unit is using 15 x 600GB SAS drives in RAID 5. One big LUN is presented to each ESX host.

What connector, or what configuration of connections (aggregation, multipathing, or otherwise), would give the best performance results for one VM running on one host communicating with the storage via:

1Gbps Ethernet interfaces

10Gbps Ethernet interfaces

8Gbps FC interfaces

6Gbps SAS interfaces

Can any aggregation tricks be used to make the pipe thicker from one VM running on one host accessing the storage?

What is the best way of connecting to get maximum performance?
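For a rough sense of scale, here is a back-of-envelope sketch in Python (purely illustrative: it assumes standard link encoding overhead and ignores protocol, controller, cache and disk limits) of the theoretical per-link ceilings for the interfaces listed above:

# Rough theoretical ceilings only; real-world throughput is lower and is
# usually limited by the disks and controller, not the link.

def mb_per_s(gbit_data_rate):
    # Convert a usable data rate in Gbit/s to MB/s (8 bits per byte).
    return gbit_data_rate * 1000 / 8

links = {
    "1Gbps Ethernet (iSCSI/NFS)":  mb_per_s(1.0),      # ~125 MB/s raw
    "10Gbps Ethernet (iSCSI/NFS)": mb_per_s(10.0),     # ~1250 MB/s raw
    "8Gbps FC":                    mb_per_s(6.8),      # 8.5 Gbaud, 8b/10b -> roughly 800-850 MB/s
    "6Gbps SAS, one lane":         mb_per_s(4.8),      # 6 Gbaud, 8b/10b -> ~600 MB/s per lane
    "6Gbps SAS, x4 wide port":     mb_per_s(4.8) * 4,  # external SAS ports are typically 4 lanes
}

for name, ceiling in links.items():
    print(f"{name:28s} ~{ceiling:.0f} MB/s theoretical ceiling")

In practice a 15-spindle RAID group will bottleneck well below most of these ceilings, which is the point the replies below keep coming back to.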

cmcminn
Enthusiast

Every vendor will tell you their baby is the prettiest.  To mcowger's point, some of the variables you are going to have to look at are IOPS per spindle, or SSD if you go that route.  Are you planning on buying an FC infrastructure to go along with a potential SAN solution?  Are you planning on going iSCSI, or even NFS?  Is your Ethernet network 1G or 10G?  Are you looking to create multiple datastores by carving up LUNs, or are you looking for a solution that provides speed and simplicity?

There are a ton of vendors in the market space right now that fill your needs...  You might also look at Tintri and Nimble.  Both good solutions...

Everything you have listed, plus these two, will most likely satisfy your needs at a certain price point (and the price points will be different per vendor, along with speeds and VMware integration).

Disclaimer: I used to work for EMC, but now work for one of the other two I mentioned.  I like to throw in two "other" options so I feel better about myself :)

RalphyBoy
Contributor

I am looking for speed and simplicity....

Performance between disk and host!

cmcminn
Enthusiast

In order of simplicity (easiest on top), IMHO - performance will vary, and I would not want to steer you in the wrong direction without understanding your workloads and entire environment (all of the solutions below have, or can scale to, decent performance):

  • Tintri
  • EQL
  • Nimble
  • NTAP
  • EMC
  • HP

Again, all good choices and simplicity is in the eye of the operator...  But each vendor will tell you theirs is the best solution, so take each comment (including my own) with a grain of salt...

RalphyBoy
Contributor

Thanks cmc

You must work for Tintri....

The workload is one VM on one host connected with the best connection type to the disk... I can't make it any simpler than this...

What sequential read performance in MB/s would I see?

Doesn't have to be exact, just a ballpark for each type of connection...

Geez, it's hard to get a straight answer from any vendor!!! :'(

cmcminn
Enthusiast

Everyone is very politically correct on the boards (from my experience)!

Let's go 10G from your host through a 10G switch to an NFS datastore - let's say Tintri ;) - you should get 600+ MB/s minimum and good IOPS on a sequential workload.  The network can be pushed to saturation, and I have seen this before, but you really have to push it with a sequential workload from a single VM or a few VMs.  What type of application are you using?

Best connection is a tricky statement...

And each vendor can speak on their behalf...

  • FC is EMC's bread and butter (they are decent at NFS as well)
  • NTAP is great with NFS and good with FC
  • EQL is iSCSI only (I believe)
  • Tintri is great at NFS

Honestly, you're probably not going to get a vendor stating their baby is the prettiest here...  We are all good, or we wouldn't be in business.  Look at the feature sets; you will get performance from each vendor, but operational simplicity, awareness, deduplication, etc. are other factors in this equation...

:)

RalphyBoy
Contributor

OK, now we are finally getting somewhere... first straight answer so far! Thank you cmc, you might have a sale soon!

600 MB/s = 10Gbps iSCSI

....now

What about FC and SAS?

Just to make things even simpler... the workload is a tool called Crystal DiskMark. Very simple tool, like a kid's toy. ;) Just run it on the single disk of the single VM on the single host on the single storage via:

a) 10Gbps iSCSI = 600 MB/s

b) 8Gbps FC = ????? (still looking for a ballpark figure)

c) 6Gbps SAS = ???? (still looking for a ballpark figure)

Any ideas?

EdWilts
Expert

RalphyBoy wrote:

I have started investigating the options that are available for our budget and have come up with the following list from each vendor.

  • EMC VNXe 3300
  • Dell EqualLogic (Not sure which model is equivalent yet)
  • NetApp 2040
  • HP P2000

The NetApp 2040 is already obsolete - look at the 2240.  It will support 10Gbps, whereas I don't think the 2040 will.  In any case, the controller and the interface are typically not the limiting factor.

One of the things you have to look at is not just the interface on the controller but the spindles behind it.  SQL has a very specific workload characteristic, and you shouldn't be presenting a single large RAID 5 volume from any of these solutions.  In fact, RAID 5 sucks for write operations and does not follow Microsoft's best practices.  You will need to present multiple volumes on multiple heads of each of the solutions you're considering so you can spread the workload out and parallelize it as much as you can, or your transaction logs will choke.
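To put rough numbers on that write penalty, here is a back-of-envelope sketch; the per-spindle IOPS figure and the 70/30 read/write mix are generic rules of thumb, not measurements of any of the arrays in this thread:

# Approximate host-visible random IOPS for a small RAID group.
# Rule-of-thumb spindle figures only; cache and controllers change the picture.

SPINDLE_IOPS = 175   # rough rule of thumb for a 15k RPM SAS drive
DRIVES = 15
WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}   # backend I/Os per host write

def host_iops(raid_level, read_fraction):
    raw = SPINDLE_IOPS * DRIVES
    write_fraction = 1 - read_fraction
    return raw / (read_fraction + write_fraction * WRITE_PENALTY[raid_level])

for raid_level in ("RAID10", "RAID5"):
    print(f"{raid_level}: ~{host_iops(raid_level, 0.70):.0f} host IOPS at a 70/30 read/write mix")

The exact figures don't matter much; the point is that RAID level and read/write mix move the host-visible number far more than the interface choice does.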

If your current solution is utilizing a single large volume for everything, your money might be better spent on a consultant who can optimize your current environment.  Throwing more money at hardware and then choking it with a bad configuration is not going to make anybody happy.

.../Ed (VCP4, VCP5)
RalphyBoy
Contributor

Hi Ed,

Thanks for your comments.

I am wondering which interface is the fastest. Let's imagine the disks are set up in the quickest available configuration.

What would be the limitations of the interfaces in MB/s?

To keep things simple.

One VM (Running Windows Server)

One Host

One LUN

One disk group (running the fastest disks and RAID level possible)

One connection type from host to shared storage

What would be the quickest: 8Gbps FC, 10Gbps Ethernet, or 6Gbps SAS?

The application running would be a tool called "Crystal DiskMark"

Thanks!

mcowger
Immortal

If all you care about is interface performance (which is a worthless, silly benchmark), then here you go:

10Gbit FCoE > 10Gbit NFS > 8Gbit FC > 6Gbit SAS, by (approximately) the ratio of their nominal rates.  So from a pure link-rate standpoint, 10Gbit FCoE is (about) twice as fast as 6Gbit SAS, and so on.

However, interface speed is the LEAST important thing to consider, as a workload that is 100% sequential read basically doesn't exist outside of benchmarks.  Your SQL Server is almost certainly NOT doing that, so why evaluate based on a workload that's not real?  Your real limiting factors will be controller performance, cache hit rates, backend disk performance, and RAID level.

Also, evaluating based on a single VM with a single disk is pointless.  You artificially limit what the array and link can do by using (effectively) only one thread at a time.  A real workload has multiple I/O threads across multiple VMs and multiple LUNs, which gives (on ANY array) far better performance.
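A quick illustration of why that matters, as a rough sketch using Little's Law (the latency and I/O-size figures below are made-up assumptions, not measurements of any array in this thread):

# Little's Law: throughput (IOPS) ~= outstanding I/Os / average latency.
# A single-VM, single-vdisk benchmark often keeps only a few I/Os in flight,
# so the link is rarely the limit.  Figures below are illustrative only.

LATENCY_S = 0.005    # assume 5 ms average service time per I/O
IO_SIZE_KB = 64      # assume 64 KB I/Os

def throughput_mb_per_s(outstanding_ios):
    iops = outstanding_ios / LATENCY_S
    return iops * IO_SIZE_KB / 1024

for queue_depth in (1, 4, 32, 128):   # one thread vs. many VMs/LUNs in parallel
    print(f"queue depth {queue_depth:3d}: ~{throughput_mb_per_s(queue_depth):6.0f} MB/s")

With a queue depth of 1 and a few milliseconds of latency per I/O, even the fattest link sits idle most of the time.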

*Any* of these arrays can saturate a 10GbE link with the right disks, but NONE of them will do it with only 15 drives, for example.

As an analogy: you've thrown a BMW M3, a Porsche 911 Turbo and a Bugatti Veyron onto a track at the same time and asked, "Which is best?  I care only about horsepower."  Well, then obviously the Bugatti at 1001 HP wins, but why even have a track?  You can do that comparison on paper.  If what you really care about is performance for YOU (rather than some arbitrary number), then you need to understand the track.  If it's a drag strip, the Bugatti will probably win.  If it's a winter snow/ice course, the Porsche will probably win.  If it's an autocross course, the M3 would likely win.

Workload matters more than a single interface performance number, and purchasing an array based on a single number like that is a recipe for ending up where you are now - with something that doesn't meet your needs.

As an aside - why are all the vendors hesitant to quote a number?  Because we all know it's completely out of context.  Can a Tintri box get you 600 MB/s?  Sure.  Can a VNX get more?  Yes - we've demonstrated 10Gbit of aggregate throughput on a single VNX.  But without knowing the workload, it's pointless, and we all know it.  So we do you, the customer, a favor by asking relevant questions before throwing out a number that means essentially nothing.  All of the arrays mentioned are decent boxes and can achieve great performance with the right setup.

PS: Part of your performance problem on the MD3000i is likely related to the fact that you have a single LUN - that will severely impact performance, regardless of interface speed.

--Matt VCDX #52 blog.cowger.us
scottyyyc
Enthusiast

+1 to mcowger's points. Comparing interface performance is not terribly useful. If you're going purely off that, then 10GbE wins - and that still doesn't factor in multipathing or aggregation.

Keep in mind that running SQL on a single disk isn't terribly common - you normally have separate disks for Logs, SQL Data, and the OS. This also doesn't always happen with virtual disks/datastores - many vendors include SQL snapshotting and maintenance tools with their SANs, which in most cases require RDMs.

I'm a big fan of EqualLogic and Compellent gear, both on power and ease of use. Compellent does things substantially differently than most other vendors, so you'd be doing yourself a disservice by not at least sitting down to a demo.

You'll find as many opinions about storage as you will people who sell it.

P.S. - There are many questions that beg asking about your MD3000 setup - hence why most storage vendors will be hesitant to quote numbers. Are you multipathing? What method of multipathing? How many links? How many controllers? How fast are the controllers? And so on. You also have to factor in how many other things are going to be running on the storage (or is it just the single VM?). Speed and throughput are garbage numbers if latency isn't factored in.

Titans99
Enthusiast

If you do not foresee needing more than 4 hosts, for simplicity's sake I would definitely go with SAS connections over any of the others.  I've used the P2000 SAS and iSCSI models and they are nice units, but the SAS model is so much easier to troubleshoot.  I hear the Dell equivalent is nice, but HP's management, installation and monitoring tools blow Dell out of the water.

scottyyyc
Enthusiast

"I hear the Dell equivilent is nice, but HP's management, installation and monitoring tools blow Dell out of the water."

What models/lines of HP are you dealing with? In my experience, Dell's EqualLogic is nothing short of amazing, so for something to blow it out of the water, it would pretty much have to read your mind and wash your car for you.

I played with some HPs a couple of years ago, and while decent, I wouldn't say they were anywhere in the same ballpark as the EqualLogic offerings.

mcowger
Immortal

All of the vendors have made pretty significant strides recently (Compellent, EQL, HP/3PAR, EMC VNX) on usability - I wouldn't rule any of them out based on experience more than 6 months old.

Going the SAS route is OK, but pretty limiting.  You are, as you note, limited to 4 hosts, with limited multipathing options, limited backup options, limited replication options, and on a class of hardware that is unlikely to see significant updates compared to the FAR more common FC, iSCSI, FCoE and NFS based options.

Unless I had a customer TRULY *unwilling* to consider FC/iSCSI/NFS, I'd stay well away from SAS-based connections.  And if my customer were truly unwilling, I'd consider it a personal failing of mine if I couldn't convince them why it was a bad idea.

--Matt VCDX #52 blog.cowger.us
EdWilts
Expert

Sometimes all you have is a remote site that you want to do host patches/upgrades for, and which you know will never need more than even one host's worth of performance.  We have a terrible time scheduling upgrades because it's effectively a site-down situation.  Nothing like maintenance mode to make an admin happy...

I'm thinking that something like a pair of smaller hosts with a P2000, all connected via SAS, would give us a lot of flexibility and less downtime due to host issues. 

.../Ed (VCP4, VCP5)
mcowger
Immortal

In those cases I'd argue that a VSA solution (P4000, VMware's) is still better than SAS. 

--Matt VCDX #52 blog.cowger.us
Josh26
Virtuoso

RalphyBoy wrote:

Just to make things even simpler... the workload is a tool called Crystal DiskMark. Very simple tool, like a kid's toy. ;) Just run it on the single disk of the single VM on the single host on the single storage via:

a) 10Gbps iSCSI = 600 MB/s

b) 8Gbps FC = ????? (still looking for a ballpark figure)

c) 6Gbps SAS = ???? (still looking for a ballpark figure)

Any ideas?

It's simply not a fair question to ask. Any number anyone can give you will be either theoretical and unachievable, or so dependent on other factors that you won't replicate it.

For example, if I put 3 x SATA disks in a low-end SAN with 10Gbps iSCSI, you will NOT get 600 MB/s - you won't even get close to it; you may be lucky to push 50.

Then there's the multipath configuration. If I have 2 x 8Gb FC connections, and one LUN is able to pull 400 MB/s down one path while a second LUN simultaneously pulls 400 MB/s down the other, the "throughput" would be 800 MB/s, but no tool will detect it.
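A small sketch of that point, assuming VMware's standard path selection policies (Fixed, MRU, Round Robin) and reusing the 400 MB/s-per-path figure from the example above:

# Illustrative only: a single-LUN benchmark on a Fixed/MRU policy only ever
# exercises one path, so it never shows the fabric's aggregate throughput.

PATH_MB_S = 400   # the per-path figure from the example above
PATHS = 2

def single_lun_ceiling(policy):
    if policy in ("Fixed", "MRU"):    # one active path per LUN
        return PATH_MB_S
    if policy == "RoundRobin":        # I/O spread across all active paths
        return PATH_MB_S * PATHS
    raise ValueError(policy)

print("Single LUN, Fixed/MRU policy:  ", single_lun_ceiling("Fixed"), "MB/s")
print("Single LUN, Round Robin policy:", single_lun_ceiling("RoundRobin"), "MB/s")
print("Two LUNs on separate paths:    ", PATH_MB_S * PATHS, "MB/s aggregate")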

> Crystal DiskMark

No one runs a production environment built out of benchmark software. So what good is any answer you're going to get that discusses a tool like this?

Titans99
Enthusiast

I'm just saying that in terms of simplicity and cost, 6Gb SAS is the way to go if you don't need more than 3-4 hosts.  The "blow Dell out of the water" comment was simply in reference to HP's management and monitoring tools versus Dell's (i.e. SmartStart, iLO 3/4, Management Agents, firmware updates, etc.).

The rest of the performance related questions will depend on the App, RAID, I/O, etc.
