VMware Cloud Community
sdog
Contributor

7-host DRS/HA cluster with HP EVA8400

This is my first experience with HP hardware, and HP tech support recommended splitting our 7-host cluster (BL680s) into 2 separate clusters. The tech mentioned something about LUN contention but wasn't very specific. Things have been running relatively well, but we haven't put our VMs into production yet.

The cluster currently has 80 VMs on 15 LUNs on an EVA 8400, and I expect both numbers to grow. Are there any best practices or white papers for using vSphere DRS/HA clusters with an EVA 8400?

6 Replies
AntonVZhbankov
Immortal

Can't see any problem here.

80 VMs / 7 hosts is roughly 11 VMs per host, so it's not an issue for HA.

LUN I/O contention? Just make sure the SAN can provide enough IOPS, and split the VMs across several LUNs.
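
(If you want to test the contention claim before restructuring anything, a quick sketch; this is just standard practice rather than anything EVA-specific: esxtop's disk views show per-device latency from the service console.)

#interactive: run esxtop, press 'u' for the disk device view;
#DAVG/cmd is array-side latency, and sustained high values point at the SAN
#or capture batch samples for later analysis:
esxtop -b -d 10 -n 60 > /tmp/esxtop.csv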

The only reason I see to split into 2 separate clusters, if everything works well, is to separate the production and test environments. Nothing more.


---

MCSA, MCTS, VCP, VMware vExpert '2009

http://blog.vadmin.ru

MKguy
Virtuoso

From a performance perspective, splitting the VMs across many LUNs isn't really helpful with an EVA as long as the LUNs reside in the same disk group.

VMFS handles multiple hosts accessing a LUN very well, so I would not bother with splitting the hosts into multiple separate clusters.

Here are some recommendations for EVA storage and ESX:

http://www.ivobeerens.nl/?p=465

On another note, if you have another blade enclosure, spread a cluster's HA primary hosts across both in case one enclosure fails completely.

Read more about this at:

http://www.yellow-bricks.com/vmware-high-availability-deepdiv/#HA-primariesandsecondaries
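
(For reference, and this is my assumption about a default ESX 4 classic install rather than anything from that article: you can check which hosts are currently HA primaries from the service console via the AAM CLI.)

#launch the HA (AAM) command-line tool, then type 'ln' at its prompt
#to list cluster nodes and see which are primaries
/opt/vmware/aam/bin/Cli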

-- http://alpacapowered.wordpress.com
smudger
Enthusiast

Hi,

While I'm not familiar with your VM workloads, at face value it seems strange to split a cluster. One area I would recommend you read up on (if you haven't already) is HP's best practices for the EVA8400 and vSphere, especially around multipathing. Once it's configured correctly you'll see a large bump in storage throughput (FWIW, we've just been through a similar exercise).

The link to HP's doc is here (titled "Configuration best practices for HP StorageWorks Enterprise Virtual Array (EVA) family and VMware vSphere 4"):

http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA1-2185ENW.pdf
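
(One check I'd add before changing anything, not from the HP paper itself: record what policy each LUN is using today so you can confirm the switch later. On the ESX 4 service console:)

#list each EVA device together with its current Path Selection Policy
esxcli nmp device list | grep -E '^naa.600|Path Selection Policy:'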

Best regards,

Neil

depping
Leadership

I don't see why you would want to split it. Are you experiencing performance problems? If not, it sounds like you are already following normal best practices.

Duncan

VMware Communities User Moderator | VCP | VCDX

-


Now available: Paper - vSphere 4.0 Quick Start Guide (via amazon.com) | PDF (via lulu.com)


sdog
Contributor

It's not that I want to split it. I know I'm following best practice with the physical layout. However, we had some issues a week ago where the EVA wasn't presenting a LUN correctly to the farm: some physical Windows boxes lost connections to the storage controllers, and a VM on one LUN was unavailable to the vSphere farm.

We ended up having to reset both storage controllers. HP reviewed the logs and said there was LUN thrashing in the farm, which may have caused our EVA to puke. The HP tech recommended we split the farm into 2 separate farms and upgrade the EVA firmware. I upgraded the firmware, but I'd rather not split the farm.

As I stated earlier, this is my first experience with HP hardware. The purpose of this discussion was to ask people with similar vSphere environments on HP hardware whether they have any suggestions or best practices. I know I definitely want to switch the pathing to round-robin, so thank you to all who recommended this; I will definitely review the white paper.

Let me follow up by asking you all this: is there any risk to switching the pathing method in a production environment? I was thinking I should explore PowerShell and try to script out the task of switching all paths on this specific farm to round-robin. Does anyone have experience performing this, or should I just do one host at a time?

Regards

SD

jpdicicco
Hot Shot

I have done this with running VMs and had no connectivity issues running the commands in the HP doc. You just want to make sure you have things configured properly, starting at the storage end (controller failover settings, etc.) and working your way up to the host and guest. I have an EVA 8000, with 2 ports per controller going to each HBA. ALUA RR keeps both ports per HBA on the active controller marked active.
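
(Not from the HP doc, but you can see this for yourself: on ESX 4, list the paths for a device and check their states. The device ID below is a placeholder; substitute one of your own.)

#show path states for one EVA LUN; under ALUA RR the ports on the
#owning controller should be marked active
esxcli nmp path list --device naa.600xxxx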

I did have an issue with their script, however: some of the disk IDs it returned had values ending in ':1'. I changed the for loop to look like this:

#run once per server: make Round Robin the default PSP for the ALUA SATP,
#then reset each EVA device (naa.600*) to its SATP's default policy

esxcli nmp satp setdefaultpsp --satp VMW_SATP_ALUA --psp VMW_PSP_RR

for i in `esxcli nmp device list | grep ^naa.600`; do
    esxcli nmp device setpolicy --default --device $i
done

#run once per LUN, maybe once per reboot (see below):
#rotate to the next path after every I/O instead of the default 1000

for i in `esxcli nmp device list | grep ^naa.600`; do
    esxcli nmp roundrobin setconfig --type "iops" --iops=1 --device $i
done
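
(One step I'd add that isn't in the HP doc: spot-check a device afterwards to confirm the settings stuck. Again, the device ID is a placeholder for one of your real naa.600 IDs.)

#verify the Round Robin IOPS setting on a single device
esxcli nmp roundrobin getconfig --device naa.600xxxx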

It has been reported that the IOPS value changes to a different value when ESX boots, so you may want to look into updating an rc script to make this persistent. I haven't done it yet, so I can't make a good suggestion for which script to put it into.
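
(If you do go the rc-script route, here's a minimal sketch of the idea. Assuming ESX 4 classic with a service console; /etc/rc.local is my guess at a file that runs late enough at boot, not something I've tested.)

#appended to /etc/rc.local: re-apply the IOPS setting at every boot
for i in `esxcli nmp device list | grep ^naa.600`; do
    esxcli nmp roundrobin setconfig --type "iops" --iops=1 --device $i
done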

Happy virtualizing!

JP

Please consider awarding points for correct and/or helpful answers
