VMware Cloud Community
GManNorth
Contributor

VMotion between two HP C7000 Chassis

We have two HP C7000 chassis holding a total of 16 BL685c blades, all part of one VMware cluster, and we have it set up to VMotion between the two chassis. I think VMotioning between the two chassis is overkill, because there would be many more possibilities and options if the two chassis were two separate clusters with no VMotion between them. Opinions? Does anyone else have such a setup?

7 Replies
mikepodoherty
Expert

Setting up VMotion capability between the chassis provides more reliability - if you have a problem on a chassis, or want to do an upgrade or otherwise take the chassis out of service, you can evacuate the VMs first.

For our blades, we try to have both cross-rack and cross-chassis VMotion capability.

GManNorth
Contributor

It definitely gives chassis redundancy - no question. However, in our case that uses four Cisco 3020 switches (two per chassis) dedicated to VMotion only. That is a lot of $$. A consultant pointed out that if we separated the chassis, we would have four Cisco 3020 switches to utilize, opening up other possibilities like dedicating one to NFS (if we choose to do so in the future) and one to the DMZ. It opens up more options, which I think outweighs the chassis redundancy....

mikepodoherty
Expert

And that really is a decision you have to make. We have 100+ ESX hosts supporting multiple applications. For us, the choice is to incorporate as much redundancy in the setup as we can.

Anders_Gregerse
Hot Shot

I might be a novice on the C7000 (planning to buy one), but isn't it possible to interconnect those without using switches by using Virtual Connect instead? As far as I know it is layer 2 only, but that would be enough?

timmp
Enthusiast

I love blades from a space and cable management perspective, but I really hate them for ESX. From a redundancy perspective, I would certainly agree that having (2) separate chassis installed in two separate racks, with different power feeds and separate network uplinks to two or more upstream switches, gives you higher availability.

From a switch perspective, and the DMZ as one person indicated, you could dedicate (2) interconnects on one chassis to your inside network and (2) other interconnects to a DMZ network if you wanted to separate at the physical layer. Unfortunately, that would only allow (2) NICs to be set up for failover and shared by the SC, VMotion, and VMs. That said, a lot would have to fail before you lost everything, so you could still achieve server availability.

depping
Leadership

I would set up two clusters across the chassis for optimal redundancy - 8 hosts per cluster. I wrote an article a while back on this subject: http://www.yellow-bricks.com/2009/02/09/blades-and-ha-cluster-design/
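A cross-chassis layout like that can be sketched quickly. Here is a minimal Python illustration (hostnames and chassis labels are made up, not from the thread) that splits 16 blades into two 8-host clusters, each cluster drawing 4 blades from each chassis, so losing a whole chassis leaves every cluster at half capacity rather than wiping one out:

```python
# Hypothetical layout: 16 blades, 8 per chassis, split into two
# HA clusters that each span both chassis. If chassis A dies,
# each cluster still has 4 hosts on chassis B.
chassis_a = [f"esx-a{n:02d}" for n in range(1, 9)]
chassis_b = [f"esx-b{n:02d}" for n in range(1, 9)]

# Cluster 1 takes the first half of each chassis,
# Cluster 2 takes the second half of each chassis.
cluster1 = chassis_a[:4] + chassis_b[:4]
cluster2 = chassis_a[4:] + chassis_b[4:]

for name, hosts in (("Cluster1", cluster1), ("Cluster2", cluster2)):
    from_a = sum(h.startswith("esx-a") for h in hosts)
    print(f"{name}: {len(hosts)} hosts, "
          f"{from_a} on chassis A, {len(hosts) - from_a} on chassis B")
```

The point of the split is that HA admission control then only needs to reserve capacity for half a chassis' worth of hosts per cluster, instead of planning for 8 simultaneous host failures in one 16-node cluster.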

Duncan

VMware Communities User Moderator

If you find this information useful, please award points for "correct" or "helpful".

timmp
Enthusiast

Yes, indubitably...I totally agree. I think taking (2) separate chassis and creating a 16-node cluster would not be such a good idea. I try to keep my clusters to 6-8 systems as well. This is again why blades are, in my opinion, not a good solution for ESX. I would rather take 4U servers like a DL580/DL585, or comparable servers from Dell/IBM, and balance across the cluster. You get lots of scalability with these servers.
