What enterprise storage product do you use and why? If you had the option to do it over again which product would you use and why?
To frame the scenario a bit: say you're in a 1000-physical-server environment with many going EOL in the next year and an average 10% annual DASD growth rate, and management has finally decided to take the virtualization plunge, both for servers as they go EOL and for provisioning anything new. What storage product would you architect around to take full advantage of VMware, including SRM? Is it fair to say that Hitachi, EMC, and NetApp would be the main contenders? I'd love to hear your thoughts.
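As a side note on the 10% growth figure: annual growth compounds, so it's worth projecting it out before sizing an array. A minimal sketch (the 100 TB starting capacity is just an illustrative assumption, not a number from this thread):

```python
# Rough capacity projection: 10% compound annual growth.
# The 100 TB starting figure is an illustrative assumption.
def projected_capacity(start_tb: float, annual_growth: float, years: int) -> float:
    """Capacity after `years` of compound growth."""
    return start_tb * (1 + annual_growth) ** years

for year in range(1, 6):
    print(f"Year {year}: {projected_capacity(100.0, 0.10, year):.1f} TB")
# Year 5 comes out to roughly 161 TB, not the 150 TB linear growth would suggest.
```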
regards,
Ray
I have a former coworker using their storage, and I respect his abilities.
I was never really impressed, though, by the fact that they don't support anything besides RAID 50.
--Matt
We have an HP MSA1500 FC with 28 x 146 GB 10K and 14 x 300 GB 10K drives, serving 25 physical servers. Performance is OK, but we are having problems with free space, and the LUNs are not virtualized. That's why we are in the process of buying 2 x EqualLogic PS5000E (16 TB).
We use NetApp. I'm 100% sold; I've been using them for 8 years now. I currently have (2) NetApp 3040 clusters, with 10 TB on one and 30 TB on the other. The great thing about these is that I can run multiple protocols (Fibre Channel, iSCSI, NFS, CIFS, FTP, etc.) and I can perform non-disruptive upgrades without problems (Non-Disruptive upgrade procedure). Performance of these things is great, and so is the support.
Also, isn't NetApp the only storage vendor certified for NFS on VMware? Pair NetApp A-SIS (deduplication) with VMware ESX over NFS and you can really see some space-saving benefits.
Here's my cabling diagram with the fibre switches, if anyone is interested.
Matt Brown
EWU
-
MSA1500 x2 SCSI and SATA.
Moved to an EVA4100 and upgrading to a 6100 soon. The EVA is faster and more reliable, and also easier to manage. Cost-effective storage.
I cannot recommend NetApp storage when using FC. I don't know if NFS is better.
So far we have seen many write delays in the event logs of the VMs.
You may have something else going on here. I run a lot of NetApp, and I have not noticed anything of the sort you are describing.
-KjB
I work in a shop that uses NetApp for our storage. I've got nothing but good things to say about our clustered 3040s.
Initially we deployed using VMFS and iSCSI mounts, but moved over to NFS to take advantage of dedupe and other tools that work better with VMs sitting on WAFL. (If only Storage VMotion had been out then!) Tremendous savings of storage space; if you design it well you can save a lot of space. A lot. No seriously, A LOT (YMMV). NFS is the only way to go with NetApp, though, IMO, to take advantage of the additional features you get with an ONTAP filer (also available with FC and iSCSI).
Our internal testing showed that iSCSI and NFS were performing at the same speeds, even under increased load.
Not knocking mcowger's comment, but we haven't had any issues, and don't see the idea of WAFL as "sketchy". It's worked great and we're saving money. Block alignment will save you ops on the filer as well; if you go with NetApp, make sure you do this before you really get started deploying VMs.
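To illustrate the block-alignment point: WAFL works in 4 KB blocks, and the classic MBR default of starting the first partition at sector 63 (with 512-byte sectors) leaves the guest filesystem misaligned, so a single guest 4 KB I/O can straddle two filer blocks and cost double the ops. A minimal sketch of the alignment check (the constants are the standard sector and WAFL block sizes; the function name is my own):

```python
SECTOR_BYTES = 512
WAFL_BLOCK_BYTES = 4096  # WAFL's native block size

def is_aligned(start_sector: int) -> bool:
    """True if the partition's byte offset is a multiple of the 4 KB WAFL block."""
    return (start_sector * SECTOR_BYTES) % WAFL_BLOCK_BYTES == 0

print(is_aligned(63))  # classic MBR default: 32256 bytes -> False (misaligned)
print(is_aligned(64))  # 64 * 512 = 32768 bytes        -> True  (aligned)
```

In practice you'd set the partition start with a tool like diskpart or fdisk before formatting the guest disk; any start sector divisible by 8 is 4 KB-aligned.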
With the new SnapManager for VI and its integration with VirtualCenter, it's going to be "a good thing". Letting the VMware admins do backups, FlexClones, and restores of VMs directly on the storage through VirtualCenter is going to be fantastic. (I just saw a demo, so the Kool-Aid may still be in effect.)
We did have an incident recently where a member of our 3040 cluster failed. A part in the filer just stopped working. Failover worked swimmingly, and although we had some latency for a few minutes, the VMs didn't crash. The other filer just picked up the load and our ESX environment kept working. Although nerve-wracking, it was fantastic to see it actually work. The failback later that day also went fine.
not a netapp employee, just a happy customer.
-Theron
I'm definitely not trying to say that WAFL is sketchy: it's a damn cool FS, surpassed in my mind only by ZFS. For CIFS, NFS, and DAV use it's awesome, and nearly unrivaled.
What I do feel is sketchy is the NetApp method of creating LUNs by creating a giant file on the WAFL FS and sharing that out as a LUN. I've had nothing but bad performance with that design, including on NetApp gear (and yes, I do know what I'm doing with it).
--Matt
Sounds awfully familiar, that creating-a-file-on-a-filesystem-and-presenting-it-as-a-LUN thing.
-KjB
Matt (mcowger): I've seen some performance threads over WAFL, pros and cons, etc. What filers were you using, if you don't mind me asking?
A 270c cluster and a 3050 cluster.
If I were running my VM environment on NFS, it would definitely be NetApp. I would even choose NetApp/NFS over iSCSI, I bet.
But I am still an FC lover, and I can afford to use it, so I stick with FC.
--Matt
Valid point
Though with VMDKs I don't have a choice (I have to do it VMware's way), whereas when building LUNs on enterprise storage, I can choose the way that seems right to me :)
--Matt
Agreed. I have a few 30xx's, and my main SAN for my virtual machine environment is a 6030 that runs FC. I agree about the choice: what seems right to you is what you should go with. As with any piece of technology, no solution will be the right one every time.
-KjB
3PAR doesn't offer standard cards with mainframe connectivity. Here you could use a Luminex gateway. They're expensive but do the job.
3PAR has been designed for highly demanding, unpredictable workloads (e.g. hundreds or thousands of VMware servers hitting the same storage device). In September the new T-Series will come out with even higher performance. It's worth taking into consideration, especially given their thin provisioning, which will save a lot of money as it eliminates the need for over-allocated disk space!
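A back-of-the-envelope way to see the thin-provisioning saving (all figures here are illustrative assumptions, not vendor numbers): with thick provisioning you must buy everything you allocate up front, while thin provisioning only consumes what guests have actually written.

```python
# Illustrative thin-provisioning arithmetic; every figure is an assumption.
servers = 1000
allocated_gb_each = 100   # capacity promised to each server
written_gb_each = 40      # capacity each server has actually written

thick_tb = servers * allocated_gb_each / 1000   # must be purchased up front
thin_tb = servers * written_gb_each / 1000      # physically consumed
print(f"Thick: {thick_tb:.0f} TB, thin: {thin_tb:.0f} TB, "
      f"saved: {thick_tb - thin_tb:.0f} TB ({100 * (1 - thin_tb / thick_tb):.0f}%)")
```

The usual caveat applies: thin provisioning only defers the purchase, so you still need to monitor actual consumption and grow the pool before it fills.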