VMware Cloud Community
acamus
Contributor

What are the limitations of running VMotion on IBM HS21 blade servers?

I believe best practice is to have a dedicated network card for the service console and a dedicated network card for VMotion, but what if you are stuck with a blade server, an IBM HS21, that only allows for two NICs (the HBA card takes up the two slots)?

How much overhead is there typically with the service console?

How much overhead is there typically with VMotion?

What potential I/O bottlenecking could there be?

Also, the two NICs will be in a trusted network and a DMZ (a security risk for the host server?).

3 Replies
brianeiler
Contributor

We have been running around 120 VMs on 14 HS21 Blades (running 3.0.1) for about six months. So far we've had no performance issues.

Our LAN is carved into multiple VLANs, one of which is for VMotion. The service console is attached to our main VLAN, and we created a DMZ VLAN for security-sensitive VMs.

I have yet to notice an I/O bottleneck on the LAN side. We typically hit the CPU limit long before we tax the dual Gig-E ports. If you're really concerned about network bandwidth, you could upgrade your blades: IBM just released a new mini expansion card for the HS21 blades that gives you a total of four Gig-E NICs in the one-blade form factor (without sacrificing the two Fibre Channel connectors). I haven't installed one yet because we aren't hitting anywhere close to gigabit speeds on our ports (aside from the occasional burst).
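If it helps, here is a rough sketch of how a layout like ours could be scripted from the ESX service console. It assumes ESX 3.x with the esxcfg-* tools available; the VLAN IDs, IP addresses, and port group names are just placeholders, and a fresh install already has vSwitch0 with vmnic0 and a Service Console port group, so treat it as illustrative rather than paste-and-run.

#!/usr/bin/env python
# Sketch: one vSwitch with both blade NICs as uplinks and VLAN-tagged
# port groups for Service Console, VMotion, VM traffic, and a DMZ.
# Assumes an ESX 3.x service console with the esxcfg-* tools on PATH;
# the VLAN IDs and IP addresses below are placeholders.
import subprocess

def run(cmd):
    # Print each command; keep going if a step already exists.
    print(" ".join(cmd))
    subprocess.call(cmd)

# Hang both physical NICs off the default vSwitch as uplinks.
run(["esxcfg-vswitch", "-L", "vmnic0", "vSwitch0"])
run(["esxcfg-vswitch", "-L", "vmnic1", "vSwitch0"])

# One port group per VLAN (names and IDs are illustrative).
for name, vlan in [("Service Console", "10"),
                   ("VMotion", "20"),
                   ("VM Network", "30"),
                   ("DMZ", "40")]:
    run(["esxcfg-vswitch", "-A", name, "vSwitch0"])
    run(["esxcfg-vswitch", "-v", vlan, "-p", name, "vSwitch0"])

# Service console interface and VMkernel port for VMotion.
run(["esxcfg-vswif", "-a", "vswif0", "-p", "Service Console",
     "-i", "10.0.10.11", "-n", "255.255.255.0"])
run(["esxcfg-vmknic", "-a", "-i", "10.0.20.11",
     "-n", "255.255.255.0", "VMotion"])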

Hope this helps!

Athyra
Contributor

IBM is supposed to come out with a 4-port NIC for the HS21 that would work perfectly in this instance; I can't remember if they were listed on the HCL yet.

Cheers

ktwebb68
Contributor

VLANs are the preferred method; however, we haven't implemented that yet, so we still run a dedicated VMotion interface with redundancy.

vmnic0 carries the service console and VM network traffic (primary), with the other NIC as standby. The NIC roles are reversed for the VMotion traffic.
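Roughly, the plumbing for that looks like the sketch below, again assuming ESX 3.x with the esxcfg-* tools (the VMkernel address is a placeholder). Note that esxcfg-vswitch does not set failover order, so the per-port-group active/standby override (vmnic0 active for Service Console and VM Network, vmnic1 active for VMotion) is still done in the VI Client under each port group's NIC Teaming settings.

#!/usr/bin/env python
# Sketch of the two-NIC layout described above: both uplinks on one
# vSwitch, separate port groups for Service Console / VM Network and
# VMotion. Assumes an ESX 3.x service console; the per-port-group
# active/standby order is set afterwards in the VI Client, since the
# esxcfg-* tools do not expose NIC teaming failover order.
import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.call(cmd)

# Both physical NICs as uplinks on the same vSwitch.
run(["esxcfg-vswitch", "-L", "vmnic0", "vSwitch0"])
run(["esxcfg-vswitch", "-L", "vmnic1", "vSwitch0"])

# Port groups; "Service Console" and "VM Network" usually exist already.
for pg in ["Service Console", "VM Network", "VMotion"]:
    run(["esxcfg-vswitch", "-A", pg, "vSwitch0"])

# VMkernel port for VMotion (address is a placeholder).
run(["esxcfg-vmknic", "-a", "-i", "192.168.50.11",
     "-n", "255.255.255.0", "VMotion"])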

Before I noticed how they were set up, another admin had configured everything on one NIC team. It works, but if you have to VMotion a VM with a lot of memory it can be a problem: lag on the other VMs and the potential for memory corruption as it moves from host to host.
