VMware Cloud Community
korman
Contributor

Single ESX Cluster utilizing storage on Netapp and EMC Arrays

I am deploying a Netapp array in a DR Site and am planning to present Netapp luns to an existing 5 host ESX Cluster which is already attached to an EMC Symmetrix array.

My initial thought is to use two dual-port adapters and present the NetApp over dual-port HBA1 and the EMC over dual-port HBA2. Both arrays will be attached to Cisco MDS switches, which isolate NetApp and EMC traffic using VSANs.

Any problems with this?

I believe this will work, but I am concerned about support.
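Roughly, I picture the isolation on the MDS side looking something like the sketch below - the VSAN numbers, interface names, and WWPNs are placeholders, not my real config:

```
! Cisco MDS sketch - illustrative values only
vsan database
  vsan 10 name EMC_SYMM
  vsan 20 name NETAPP_DR
  vsan 10 interface fc1/1     ! HBA2 ports + Symmetrix FA ports
  vsan 20 interface fc1/5     ! HBA1 ports + NetApp target ports

zone name ESX01_HBA1_NETAPP vsan 20
  member pwwn 21:00:00:xx:xx:xx:xx:01   ! HBA1 port (placeholder)
  member pwwn 50:0a:09:xx:xx:xx:xx:01   ! NetApp target (placeholder)
```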

8 Replies
weinstein5
Immortal

I see no problems with this configuration -

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful

korman
Contributor

Are you the same vmware employee that answered my support ticket?

weinstein5
Immortal

Nope - I am a consultant, not in support.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful

korman
Contributor

Have you worked with customers that are doing this?

frankdenneman
Expert

Instead of using one HBA to connect to one storage array, why not use both HBAs to connect to both arrays?

With your design, if one HBA fails you lose all connectivity to one of your arrays.

A dual-port card is really two separate HBA adapters, each with its own unique WWN, on one PCI card.

The SAN switch doesn't know they are two PCI cards; it just sees four separate WWNs.

Why not use port 1 of HBA1 and HBA2 to connect to your EMC storage, and port 2 of HBA1 and HBA2 to connect to your NetApp?

This way you have more redundancy.
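In other words, a cross-connected layout along these lines (the fabric assignments here are just an example):

```
HBA1 port 1  ->  VSAN/fabric for EMC     (Symmetrix FA ports)
HBA1 port 2  ->  VSAN/fabric for NetApp  (NetApp target ports)
HBA2 port 1  ->  VSAN/fabric for EMC
HBA2 port 2  ->  VSAN/fabric for NetApp
```

Each array then sees one port from each physical card, so losing a whole card - not just a port - still leaves a path to both arrays.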

Frank

If you found this information helpful, please consider awarding points for "correct" or "helpful". Thanks!

Blogging: frankdenneman.nl Twitter: @frankdenneman Co-author: vSphere 4.1 HA and DRS technical Deepdive, vSphere 5x Clustering Deepdive series
korman
Contributor

Thanks for the feedback.

I thought about doing that, but the HBA firmware revisions EMC and NetApp require for the same HBA model are different.

If I had four PCI Express slots available, I would have designed with four single-port HBAs.

kcollo
Contributor

Sounds OK to me. We have both our EMC and our NetApp doing FC to our ESX clusters. The setup is the same at both our primary and DR sites, using Cisco MDS 9120 switches. Things have been working for over a year with no issues, and it sounds like your setup should as well. Firmware versioning was not an issue for us, but it all comes down to zoning anyway, and that should be fine in your case.

Kevin Goodman

Linux / SAN / Virtualization

kevin@colovirt.com

http://blog.colovirt.com

korman
Contributor

Did you run the NetApp FC Host Utilities (attach kit) on your ESX hosts? I am guessing the changes it makes are in line with the EMC settings. I believe EMC has a Grab utility for ESX now, so I will run a HEAT report and confirm.

greg
