VMware Cloud Community
BryanMcC
Expert

VMware 3.x on IBM Blades

Currently we are running ESX on Dell 1955 Blades and due to Network IO and other limitations I am looking to another vendor for our blade servers. I myself am leaning towards the HP C-Class blades however my manager is a bit of an IBM fan and would really like me to take this platform into consideration.

My questions are....

1) If you run on IBM blades what kind of platform are you using (chassis and blade combo)?

2) What kind of network config (just brief explanation of bonds and vSwitch config)?

3) What are your thoughts on this platform?

4) How is your hardware support?

Thanks for the help..




Help me help you by scoring points.
6 Replies
Chris_S_UK
Expert

1) HS20s with 2 NICs for test, and HS40s with 4 NICs for some production VMs. The chassis is the standard 14-slot one.

2) For the 2-NIC blades, everything goes over one bond - test only, after all. For the 4-NIC ones, one bond for the Console/VMkernel and another bond for the VM network (multiple portgroups/VLANs).
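For anyone wanting to script that 4-NIC layout, it could be sketched from the ESX 3.x service console roughly as below. The vSwitch names, vmnic numbering, portgroup names and VLAN ID are all assumptions for illustration, not Chris's actual config:

```shell
# Sketch only: 4-NIC layout, two bonds, assuming ESX 3.x esxcfg tools.
# All names/IDs below are hypothetical.

# Bond 1: Service Console + VMkernel on vmnic0/vmnic1
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -A "VMkernel" vSwitch0

# Bond 2: VM network on vmnic2/vmnic3, with tagged VLAN portgroups
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "VM-VLAN100" vSwitch1
esxcfg-vswitch -v 100 -p "VM-VLAN100" vSwitch1
```

Teaming/failover policy for each bond would still be set in the VI Client (or via the console), and the physical switch ports need the matching VLAN trunking.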

3) Fine, for what they are, but we use them for lower-powered VMs and test. We use x3650s and x3850s for "proper" VMs.

4) Fine!

Chris

mikepodoherty
Expert

We have approximately 200 IBM LS20 and HS20 blades supporting both VMware 3.0.x and 2.5.x - plans are underway to upgrade the 2.5.x hosts to 3.0.x.

We've been operational for over a year. I'd have to check on the chassis - they are owned and operated by another organization; we own and manage the servers.

NIC teaming is required and set up on all hosts. Most have the vSwitch set up for 3 VLANs - VirtualCenter, production and VMotion.

We were early adopters of the LS20s and as such saw fairly high rates of component failure. Things seem to be getting better, judging from comments of other organizations now deploying them into the data center. IBM support, however, has been excellent.

The IBM Management Module has proved very useful, but results with IBM Director have been mixed.

Mike

BryanMcC
Expert

Thanks Chris....

Now you have produced another set of questions....

When you say "proper" VMs what exactly do you mean?...

I have always preferred blades due to density and can usually get 12 to 20 VMs per blade. This has worked out well for me and lets me take a scale-out rather than scale-up approach, aside from heavy-I/O VMs such as SQL, Exchange or some highly utilized file servers. Can you give me a brief explanation as to why I would want to either...

A) Put more VMs on a conventional server and have less hosts?

B) Not use blades for the scale out approach to virtualization?

I know this can be a heated topic, but some opinions would be nice to hear.




Help me help you by scoring points.
Chris_S_UK
Expert

Don't misunderstand me - blades are fine for ESX if sized correctly (specifically, the number of NICs). It's just that the x3650s and x3850s we have are quite a bit more powerful and have more I/O than our blades, so I tend to deploy VMs accordingly.

Chris

BryanMcC
Expert

Excellent response....

I too have been tossing around the idea of using conventional servers for some of the heavier-I/O VMs, so that I can give the company more flexibility in what kinds of applications we can virtualize. What kind of HA and/or DRS solution, if any, do you use for your conventional servers?

What I mean is: about how many VMs per server (on average), and how many hosts in your HA/DRS cluster?

What kind of utilization do you get from your conventional servers in this scenario? (I average roughly 50% utilization per host.)

Any issues with admission control when using highly utilized VMs in this setup?

My concern is allowing for the migration and distribution of these "proper" VMs when a host failure has occurred.



Help me help you by scoring points.
Jwoods
Expert

We have 2 blade chassis with HS20s (2 NICs) running a mixture of Prod/Dev/Test. We also have 2 x3950s (dual-core, 4-proc, 6 NICs) for our "heavy" VMs, which get 2 vCPUs with 4GB or 8GB of RAM per VM.

A single bond with multiple VLANs for the blades. Three bonds on the x3950s: COS and VMotion on Bond1 (2 NICs), DMZ on Bond2 (1 NIC), Prod/Test on Bond3 (3 NICs).
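In case it helps anyone compare, that 3-bond / 6-NIC layout could be sketched with the ESX 3.x service console commands below. Again, the vSwitch names, vmnic numbering and portgroup names are made up for illustration, not the actual config:

```shell
# Sketch only: 6-NIC x3950 layout as three bonds (vSwitches),
# assuming ESX 3.x esxcfg tools. All names below are hypothetical.

# Bond1: COS + VMotion (2 NICs)
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -A "VMotion" vSwitch0

# Bond2: DMZ (1 NIC)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A "DMZ" vSwitch1

# Bond3: Prod/Test (3 NICs)
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
esxcfg-vswitch -A "Prod" vSwitch2
esxcfg-vswitch -A "Test" vSwitch2
```

Keeping the DMZ on its own bond with no shared uplinks is the usual reason for splitting it out rather than just giving it a VLAN portgroup on the big bond.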

We're a big IBM shop (once an HP/Compaq shop) and have been enjoying the IBM platform. Their support has been excellent.

Our biggest problem with the blades at the time of purchase was the 8GB memory max. But now with the HS21s - 32GB, 4 NICs, quad-core blades - the field is wide open!