VMware Cloud Community
jjeff1
Contributor

HP EVA 4400 vs NetApp FAS2000

I'm looking at two proposals, an HP EVA 4400 and a NetApp FAS2050. Both would be used as an FC SAN to host 4 VMware servers, with about a dozen guests each. I asked each vendor to provide a solution with 6 TB of usable storage after RAID 5. Guests will be application servers and a couple of file servers. Most apps are low utilization, but they have to be on separate servers.

NetApp came back with 10.2 TB of raw disk; HP came back with 9,600 GB. I've read some information about NetApp being inefficient with disk space, but nothing too concrete.
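To sanity-check the raw-vs-usable gap myself, I've been using a rough sketch like the one below (plain Python; the disk counts, spare count, and overhead figure are assumptions to be replaced with each vendor's actual numbers, not either vendor's sizing tool):

```python
# Rough RAID 5 usable-capacity estimate. All inputs are assumptions for
# illustration; substitute the disk size/count, spares, and overhead
# figures from the actual vendor quote.

def usable_tb(disk_count, disk_size_gb, hot_spares=1, raid_group_size=8,
              overhead_pct=0.10):
    """Estimate usable TB after RAID 5, hot spares, and system overhead."""
    data_disks = disk_count - hot_spares
    # One parity disk's worth of capacity is lost per RAID 5 group.
    raid_groups = max(1, data_disks // raid_group_size)
    effective_disks = data_disks - raid_groups
    raw_gb = effective_disks * disk_size_gb
    return raw_gb * (1 - overhead_pct) / 1000  # decimal TB

# Hypothetical example: 34 x 300 GB drives, one hot spare, 8-disk RAID 5 groups
print(round(usable_tb(34, 300), 1), "TB usable (rough estimate)")
```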

I've had a hard time finding any real comparative data. Can someone who has experience with these boxes give me some insight? Are they similar in terms of performance?

chrisfmss
Enthusiast

The HP EVA 4400 supports 8 Gb FC ports and has 450 GB FC drives.

The NetApp FAS2000 supports only 4 Gb FC ports and a maximum of 300 GB SAS drives.

AWo
Immortal

I can only give you some small insights about the EVA 4400. Maybe that will help you in your evaluation.

Regarding performance, I can only tell you that the EVA uses disk groups and spreads the data over all disks. The system tries to keep the load and space usage even on every disk. That means more disks in a disk group results in more performance. If you add disks/spindles (it should be a multiple of 6 - 8 disks), performance will increase and the space used on each disk will decrease.
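As a rough illustration of the spindle math (the per-spindle IOPS figure and workload mix below are assumptions, not EVA specs):

```python
# Back-of-the-envelope spindle math for "more disks = more performance".
# The per-spindle IOPS figure and workload mix are assumptions, not EVA specs.

def disk_group_iops(spindles, iops_per_spindle=170, read_pct=0.7,
                    raid5_write_penalty=4):
    """Rough host-visible IOPS for a RAID 5 disk group."""
    backend = spindles * iops_per_spindle
    # RAID 5 turns each host write into roughly 4 back-end I/Os.
    denom = read_pct + (1 - read_pct) * raid5_write_penalty
    return backend / denom

for n in (8, 16, 24, 48):
    print(f"{n:2d} spindles -> ~{disk_group_iops(n):,.0f} host IOPS")
```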

I think it is hard to give solid information about performance, as the performance (quick)specs of almost all vendors refer to the maximum expansion of the system (96 disks for the 4400). The performance you get depends on:

- how many disks are used in the disk group for VMware (you will have between 2 and 4 disk groups in the end; if you are only using VMware, I guess you will have 1)

- whether you have a second EVA as a mirror, and at which speed the fibre connection between the two is working

- RAID level of your disk group

- ...

If you go with the EVA, try to get as many shelves as possible, as they are important for availability. You can have the same number of disks in fewer shelves, but you will decrease availability. This is because disk groups are created beginning with the first disk in the first shelf, then the first disk in the second shelf, and so on.
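To picture why fewer, fuller shelves hurt availability, here is a simplified sketch of that round-robin layout (the real EVA leveling logic is more involved):

```python
# Simplified illustration of EVA-style disk group layout: disks are taken
# round-robin across shelves (first disk of shelf 1, first disk of shelf 2, ...),
# so more shelves means a single shelf failure takes out fewer members of a group.

def round_robin_layout(shelves, disks_per_shelf, group_size):
    order = [(shelf, slot) for slot in range(disks_per_shelf)
             for shelf in range(shelves)]
    group = order[:group_size]
    # Count how many of the group's disks end up in each shelf.
    per_shelf = {s: sum(1 for shelf, _ in group if shelf == s)
                 for s in range(shelves)}
    return per_shelf

# The same 24-disk group spread over 4 shelves vs. crammed into 2 shelves:
print("4 shelves:", round_robin_layout(4, 12, 24))   # 6 disks per shelf
print("2 shelves:", round_robin_layout(2, 12, 24))   # 12 disks per shelf
```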

vExpert 2009/10/11 [:o]===[o:] [: ]o=o[ :] = Save forests! rent firewood! =
kjb007
Immortal

There's good and bad about every vendor, but the question is: other than the space, do they both have the functionality that you want? I run a lot of NetApp along with EMC and Hitachi; I like NetApp, and I haven't run into performance issues. If you're comparing performance, I'd venture that the two will be fairly comparable. Make sure they have the bells and whistles that you want, at the price you want to pay.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
KungFuJoe
Enthusiast

I would go NetApp all the way. They are more flexible and offer more features, such as deduplication, NFS (which is faster than VMFS due to their WAFL filesystem), and their soon-to-be-released SnapManager for VMware. In my opinion, other SAN vendors are behind NetApp in terms of their VMware presence. NetApp has SCs who are very fluent in VMware and more than able to assist in design and implementation.

Definitely take a look at NFS. It offers benefits in performance and flexibility that you can't get from iSCSI or FCP, such as being able to grow and SHRINK volumes on the fly without interruption, easy allocation of storage (create a new export, mount it to the ESX hosts and you're done), not having to worry about SCSI command queues or LUN reservations, etc.

Here's a link to their ESX3 best practices guide.

AWo
Immortal

I would go NetApp all the way. They are more flexible and offer more features, such as deduplication, NFS (which is faster than VMFS due to their WAFL filesystem), and their soon-to-be-released SnapManager for VMware.

What do you mean exactly by the statement that it is faster than VMFS? Don't you use VMFS on the NetApp? Do you use only raw devices? Is it really faster via NFS without VMFS compared to a VMFS volume over an FC connection?

Can you clarify, please?

vExpert 2009/10/11 [:o]===[o:] [: ]o=o[ :] = Save forests! rent firewood! =
kjb007
Immortal

I would have to disagree with you here. I run NetApp over FC with very good success. Unless you are running over 10 Gb Ethernet, I don't see how NFS compares to FC. Do you have numbers to support this?

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
jeremypage
Enthusiast

Well, with either of those two systems you are almost certain to be spindle-bound, not bottlenecked on the pipe (FC/Ethernet), so it's really a trade-off on other things. NFS can be faster than VMFS since with NFS the VMDK files are locked for access, whereas with VMFS you've got the locking done at the partition level. We saw a significant improvement in I/O when we moved to 2 Gb (EtherChannel) NFS from 2 Gb FC (DS4500).

NFS also allows automatic thin provisioning, the ability to access your VMDK files from management machines, the ability to snapshot/restore from snapshot and NDMP backups - more or less replacing the need for VCB and allowing you to back up directly from your storage device.

williambishop
Expert

I'm a little lost, Jeremy. VMFS is a file-locking file system; the locking is not at the partition level. I would honestly look at your configuration if you got better performance and lower latency out of an Ethernet-based file system vs. an FC one, especially at equal bandwidth. FC, in my experience, and even in most people's docs, will give you a 20% better metric. It's a protocol designed for low overhead and large transfers, vs. Ethernet.

--"Non Temetis Messor."
jeremypage
Enthusiast

I'm not sure how the actual mechanics work, but I was under the impression that only one ESX host could write to any given VMFS filesystem at a time. If that's not true, then the only explanation is that we do a lot of non-latency-sensitive, non-sequential, small-block I/O, which under NFS would get aggregated into larger writes, where FC would try to actually write them in order as fast as possible. We did see a pretty good sized improvement (an average of 15-20%, based on SQLIO) when testing FC versus NFS. To be fair, we are 2 Gb FC across the board and 20 Gb EtherChannel from the NAS to the switch, but the data was on the same RAID groups on the SAN, so the disk layout was on exactly the same spindles; we just deleted the LUNs and created an NFS share instead.

I'm not sure how any of the protocols are likely to be the limiting factor unless you have a relatively large installation; most folks' bottlenecks are not going to be on the pipe itself, but on the spindles or local memory. If you read the VMware paper "Comparison of Storage Protocol Performance", the main things that FC gives you are lower latency and lower CPU utilization. The paper makes FC look faster in other respects too, but that's because it's comparing 2 Gb FC to 1 Gb Ethernet, when hopefully anyone putting IP storage into production would at least be binding 2 NICs together for fault tolerance, if not throughput.

williambishop
Expert

No, VMFS is a shared resource. It is file-locking, so in my case, all 6 hosts can see a particular LUN.

A study of FC vs. Ethernet will show you that a much larger portion of the frame is payload, with very little overhead. You SHOULD get quite a bit better performance out of FC. I would look at configuration...
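To put rough numbers on that overhead point, here is a quick sketch using the usual textbook per-frame figures (standard 1500-byte Ethernet frames, no jumbo frames or TCP options, so treat it as approximate):

```python
# Rough payload efficiency of a full-size FC frame vs. a standard 1500-byte
# Ethernet frame carrying NFS over TCP/IP. Overhead values are the usual
# textbook figures, so treat the result as approximate.

fc_payload = 2112                          # max FC data field, bytes
fc_overhead = 24 + 4 + 4 + 4               # FC header + SOF + EOF + CRC
eth_payload = 1500 - 20 - 20               # MTU minus IP and TCP headers
eth_overhead = 14 + 4 + 8 + 12 + 20 + 20   # Eth hdr + FCS + preamble + IFG + IP + TCP

for name, payload, overhead in [("FC", fc_payload, fc_overhead),
                                ("Ethernet/TCP", eth_payload, eth_overhead)]:
    eff = payload / (payload + overhead)
    print(f"{name:>12}: {eff:.1%} of bytes on the wire are payload")
```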

Even apples to apples, FC will almost always (I wouldn't qualify it with "almost", but there's always that one time) beat Ethernet.

--"Non Temetis Messor."
jeremypage
Enthusiast

I think that we are talking about two different things. You're saying FC is more efficient (which I am quite willing to agree with). I am saying it's a moot point: unless you are pushing enough data to use a significant amount of your pipe, the protocol is unlikely to have much effect at all (all other things being the same), since you are much more likely to have a bottleneck at another level.

I'm not saying NFS is going to be faster for most folks; I haven't tested it in anyone's environment but my own. I am saying that whenever one of these threads is posted, people come out of the woodwork saying how much faster FC is than NFS, which is at least misleading. It's like saying a bus going 50 miles an hour is faster than a car going 50 miles an hour. The fact is that unless you've got more people than can fit in the car, it's a moot point. If you are virtualizing your systems with the sole goal of increasing performance, you're probably doing it for the wrong reasons anyway... and should be using RDMs, not VMFS. If you are trying to reduce costs, increase the flexibility of your infrastructure, and make your systems easier to manage, then for MANY people NFS is at the very least something to consider.

kjb007
Immortal

VMFS uses SCSI-based locking, so, yes, only one host holds the lock on a LUN at a time, but these are short reserve-and-release types of I/O. This is also why the general best practice is to limit each LUN to about 15-20 VMs, so as not to cause too high a performance overhead from the SCSI reservations.
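As a quick illustration of that rule of thumb (the per-VM size and headroom below are only placeholders):

```python
# Quick helper for the "15-20 VMs per VMFS LUN" rule of thumb mentioned above,
# to keep SCSI reservation contention down. Sizes below are illustrative only.
import math

def datastores_needed(total_vms, vms_per_lun=16, avg_vm_gb=40, headroom=0.2):
    luns = math.ceil(total_vms / vms_per_lun)
    lun_size_gb = math.ceil(vms_per_lun * avg_vm_gb * (1 + headroom))
    return luns, lun_size_gb

# The original poster's case: 4 hosts x ~12 guests each
luns, size = datastores_needed(4 * 12)
print(f"{luns} LUNs of ~{size} GB each")
```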

The speed is one thing, but I noticed huge latency differences when you increase the block size. With small block sizes, 8 KB, the numbers were close, but as you go higher into more intensive and larger block writes, 64 KB, the difference in latency is much greater. Again, as already mentioned, being able to grow/shrink the NFS mounts and being able to use huge LUN sizes are good benefits, but I wasn't able to get over the differences in latency of FC vs. Ethernet. If I could find my numbers, I would post them.
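Here is a rough sketch of just the wire-serialization piece of that latency; it ignores protocol processing, queuing, and disk time, so it only shows why the gap widens with block size:

```python
# Wire-serialization time only (ignores protocol processing, queuing, and disk
# latency) for 8 KB vs. 64 KB blocks on nominal 1 Gb/s Ethernet vs. 4 Gb/s FC.

def serialization_us(block_bytes, link_gbps):
    return block_bytes * 8 / (link_gbps * 1e9) * 1e6  # microseconds

for block in (8 * 1024, 64 * 1024):
    t_eth = serialization_us(block, 1)
    t_fc = serialization_us(block, 4)
    print(f"{block // 1024:2d} KB block: 1 GbE ~{t_eth:5.0f} us, "
          f"4 Gb FC ~{t_fc:5.0f} us, delta ~{t_eth - t_fc:.0f} us")
```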

You're only as strong as your network, so, if you have fast access/distribution/core switches and routers and don't flood your network, then performance can be good over ethernet, but I've found FC the way to go for mission critical type of apps.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
KungFuJoe
Enthusiast

Don't forget about NetApp's WAFL file system. They are pretty much the market leaders in terms of NFS performance. I won't go into the SCSI command queuing/LUN reservation deal here... it's been stated by many others.

In most scenarios, your bottleneck will be the resources (most likely RAM) on an ESX host before you can fill up a 1 Gb pipe. Of course, this is dependent on what your VMs are doing, but I'm speaking generally here. I like to use the analogy of FC being a freeway with a 400 MPH limit and NetApp NFS being a freeway with a 100 MPH limit; if you never drive more than 50 MPH, does it matter?
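As a rough illustration of that point (the per-VM throughput figures below are placeholders; plug in your own esxtop/perfmon numbers):

```python
# Illustration of the "you rarely fill the 1 Gb pipe" argument. The per-VM
# throughput figures are placeholders; substitute your own esxtop/perfmon data.

GBE_CAPACITY_MBPS = 125          # 1 Gb/s in MB/s, ignoring protocol overhead

vms_per_host = 12                # assumed guest count per host
avg_mbps_per_vm = 2.5            # assumed steady-state MB/s per guest
peak_factor = 4                  # assumed burst multiplier

steady = vms_per_host * avg_mbps_per_vm
peak = steady * peak_factor
print(f"steady demand ~{steady:.0f} MB/s ({steady / GBE_CAPACITY_MBPS:.0%} of 1 GbE)")
print(f"burst demand  ~{peak:.0f} MB/s ({peak / GBE_CAPACITY_MBPS:.0%} of 1 GbE)")
```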

BTW...I speak only of NetApp's NFS/WAFL implementation...

http://viroptics.blogspot.com/2007/11/why-vmware-over-netapp-nfs.html

kjb007
Immortal

I understand your point, and your analogy is good regarding the bandwidth; it's true you may never go over 50 MPH, but the second point was latency. When you hit the accelerator and have to wait 10 seconds before the car moves, vs. waiting 1 second for the car to take off, you'll notice it takes a lot longer to get to 50 MPH with one option vs. the other. That may not be the case in your network, but in more networks than not, latency is a huge issue, due to multiple switches, routers, and who knows what other hops of network devices along the way.

Again, I'm not knocking NetApp, I have several of their frames; I'm just commenting on the statement made earlier regarding VMFS and NFS, and FC vs. copper. If it were up to me, I'd use FC all the way, but the additional investment in the fibre, the switches, and the HBAs isn't always required when you already have a built-in copper infrastructure.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
williambishop
Expert

I'll back you one step further kjb....

I just bought some port expansion (I was getting low in DC2): 2x 32-port MDS switches (4 Gb), with 2x 10 Gb copper interconnects. All SFPs and licenses included, for about 20 grand. That's a dual-fabric setup for smaller companies, or an additional data center set for a larger place.

HBA prices are right on target with iSCSI HBAs...

So, in reality, it's not that much cheaper to do iSCSI than FC, and it's only slightly cheaper to do NFS than iSCSI (NFS doesn't use HBAs).

Some might say "but I can save 20%!"

I learned a long time ago, it's where you spend the money, not how much you spend. 20% more, for a much lower latency, much more reliable, and much more enterprise ready setup? Yeah, I'd spend it in a heartbeat.

That's not to say I don't have a place for Ethernet storage, because I do. I have over a hundred terabytes in Ethernet gear... But for my virtualization, my databases, my EMR apps... it's FC all the way. As you say, 50 MPH might be fine for driving, but for a lot of stuff you need Porsche acceleration, not Chevy.

--"Non Temetis Messor."
jeremypage
Enthusiast

That's fine if you need it. Most shops don't have hundreds of TB of anything, and it's silly to tell someone who's going to virtualize a dozen machines that they need X because it's going to be faster. You also are not taking into account the other benefits of NFS, like the fact that you're not at the mercy of VCB for backups, having to go to tape for restores, etc. People crowing about FC's constant superiority sound like the same folks who "know" that Windows can't run a mission-critical application or "know" that Linux is not ready for the enterprise market.

"much lower latency, much more reliable, and much more enterprise ready setup?"

Sounds like you defined "enterprise ready" as expensive. I prefer to define it as what best fits the business I am supporting.

jeremypage
Enthusiast

"latency is a huge issue, due to multiple switches, routers, and who knows other hops of network devices along the way.["

Latency can be an issue, just like it can be with a poorly configured FC network (as anyone with an ISL bottleneck or hosed-up TOVs across different switch manufacturers can vouch for). Once again, if you are looking for extremely low-latency I/O, you probably should not be virtualizing in the first place. There is no doubt that FC can provide better performance in the majority of implementations; the point is that performance is not the reason anyone should run virtualization software (or any software, for that matter), it's driven by what meets the needs of the business you support. Also, latency is a great indicator of a bottleneck, but you rarely get a speed increase after going under a certain threshold (which is dependent on the specific application). Because NFS aggregates reads and writes to a much greater extent than FC, in an environment where you are doing a ton of slow transfers (web servers here, for instance) you can see a performance increase.
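As a toy illustration of the aggregation idea (this is only the concept of coalescing adjacent small writes, not how the NFS client or WAFL actually implements it):

```python
# Toy illustration of write aggregation: adjacent small writes get coalesced
# into fewer, larger I/Os before going out. Concept only; not how the NFS
# client or WAFL actually implements it.

def coalesce(writes, max_io=64 * 1024):
    """writes: list of (offset, length) tuples, assumed sorted by offset."""
    merged = []
    for off, length in writes:
        last = merged[-1] if merged else None
        # Merge if this write starts where the previous one ended and fits.
        if last and off == last[0] + last[1] and last[1] + length <= max_io:
            merged[-1] = (last[0], last[1] + length)
        else:
            merged.append((off, length))
    return merged

# 16 sequential 4 KB writes collapse into a single 64 KB I/O:
small = [(i * 4096, 4096) for i in range(16)]
print(len(small), "writes in ->", len(coalesce(small)), "I/O out")
```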

If performance was the sole goal why would you virtualize in the first place? You wouldn't.

kjb007
Immortal

We're also not talking about the performance of just one VM here; if we were, we would be talking about a different setup. We're talking about the performance of 10-40 VMs on one host. The goal is to optimize performance so we can host as many VMs as possible, is it not?

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
williambishop
Expert

Jeremy;

It sounds more like you are the one pigeonholing a technology. You are describing a niche and placing all of the world in it. I've already said I use Ethernet storage; most of us do. And I've shown that FC isn't that much more expensive than IP. If you search back, you will always find that my answer is "it depends on the environment".

You will find very few of us here who fit into the niche you want to put us into, either. The "Windows can't run mission critical, Linux isn't for enterprise" people are generally not the kind you find in virtualization.

I am not defining enterprise as expensive; no one is. I'm defining it as robust and solid under heavy load. We all design according to need here. I have heavy hitters in virtualization; again, it sounds like you want to put everything into your own categorical view of the world. Why would I not virtualize a host if it needed performance? If you can understand that I might get only one guest on a host and still see it as worth it, you will understand why we design the way we do.

You've got to get out of the mindset of "virtualize it to save money" and add in a new category. "Let's move 98% to virtual and greatly simplify our lives as well as save money!"

W.

--"Non Temetis Messor."