I have one setup which is an 8776 BladeCenter running seven LS20 blades, with external Fibre Channel storage connected directly via an FC pass-through module.
Each of the blades has between 8GB and 16GB of memory, and I was thinking of upgrading some of them to newer blades with 32GB of memory.
However, I have been trying to figure out the value of moving all of those VMs onto a single x3755 server. More than anything, the apps running on the BladeCenter are becoming uneconomical to keep running, so I am looking for ways to save on power costs while keeping the old apps going; it's simpler than shutting them down.
What I am trying to find out, however, is whether this is a good idea or not.
I have about 30 VMs on the BladeCenter, and many of them mostly communicate with each other. Consolidating them onto one server seems to make sense, as they could talk to each other directly while also using the server's built-in drive space instead of the external FC storage. I could even consolidate some of those VMs into others to lower the overall count.
The box has 64GB of memory and four quad-core CPUs, so there would be plenty of resources. I'm just not sure this is a good move, and I have no idea how many VMs the x3755 can handle and run efficiently.
Not sure what else to add as details, but please ask if I've missed something.
Moving everything to just one server is not that great an idea if these are critical VMs, because you will have no host redundancy; even if one x3755 could run all the VMs with acceptable performance, you'd want a second one for failover.
Can one of these servers handle it all? Maybe. You need to look at your aggregate CPU usage, memory, storage adapter bandwidth, network adapter bandwidth, disk IOPS, etc., and see if it makes sense.
Putting over 30 VMs on local disk is probably not going to perform to your liking; how badly depends on how heavily those VMs use the disk. If you already have FC storage set up for this, I would keep them there unless there is another reason to move them.
I am assuming here your seven blades are being managed by vCenter and are not standalone ESXi.
Thank you for the reply.
I posted the message looking for input from others who might have consolidated in this way. Everything you said is a good point, and those are all things I've considered.
Yes, I'll have backup hardware of course, and backups of the VMs as well. However, you're right, I should have mentioned that these are individual standalone ESXi blades on different LANs. Setting up separate LANs on the single box is not a big deal, and all of the VMs perform wonderfully off the FC storage. Since everything is AMD-based, it should not be a problem moving from one host to another.
One thing I wondered about is something you mentioned: that this number of VMs might not perform well on local disk. While I can run as many VMs as the box can handle, I think you are right that I would have to keep them on FC storage to continue getting good performance.
I guess you've confirmed everything I was wondering about. I just wanted a little input to know whether I'd covered the bases.
It sounds like your requirements are to reduce cost, reduce complexity, and simplify operations. The points mentioned above about hardware redundancy are certainly valid and need to be considered. However, if you are set on doing this, you could keep one or two of the old blades as cold spares in case you encounter a hardware issue. Since you have seven of these old blades, you would have plenty of offline cold-spare parts if needed.
Just another thought if these are not super-critical and you can afford some downtime.
I can afford a little bit of downtime, but not too much. My plan was to have a second, smaller IBM server handy for redundancy, but it really all comes down to cooling and power. This particular setup is becoming unjustifiable to run because it costs us so much in power, which is why I'm trying to find a way to cut costs while keeping the setup running.
I believe the BladeCenter draws some 600 watts simply idling; I'll have to look at how it all breaks down.
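For a rough sense of scale, that 600 W idle figure is easy to turn into an annual cost. This is just back-of-the-envelope arithmetic; the electricity rate below is an assumed placeholder, not a number from this thread.

```python
# Rough annual power-cost estimate for continuous idle draw.
# idle_watts comes from the chassis estimate above; the
# electricity rate is an assumed placeholder -- use your own.
idle_watts = 600
rate_per_kwh = 0.12  # USD per kWh, assumed

hours_per_year = 24 * 365
kwh_per_year = idle_watts / 1000 * hours_per_year
annual_cost = kwh_per_year * rate_per_kwh

print(f"{kwh_per_year:.0f} kWh/year, ~${annual_cost:.0f}/year at idle")
# -> 5256 kWh/year, ~$631/year at idle
```

And that's idle only; loaded blades plus cooling overhead will push the real number higher.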
We did nearly the same thing on HP c7000 blade enclosures. We could afford some downtime, so we just exported an OVA to shared storage and then imported it into the new VM cluster. That worked really well. I would advise testing and timing the moves so you have an idea of how long they will take.
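The export/import step described above can be driven from the command line with VMware's ovftool. A minimal sketch, assuming ovftool is installed and the VM is powered off first; the hostnames, credentials, datastore, and VM name here are placeholders, not names from this thread:

```shell
# Export the VM from the old standalone ESXi host to an OVA on
# shared storage (VM should be powered off first):
ovftool vi://root@old-esxi-host/app-vm01 /mnt/share/app-vm01.ova

# Import the OVA onto the consolidated host, picking a datastore:
ovftool -ds=datastore1 /mnt/share/app-vm01.ova vi://root@new-host
```

Timing a test run of these two commands on a representative VM gives you the per-VM window to multiply out for the whole move.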
We had multiple hosts built out in the new cluster, but the actual virtual server move would be essentially the same.
Right, yes, indeed, I would test for sure.
I was thinking that maybe someone else had done something almost exactly like what I'm planning, so I was hoping to hear the pluses and minuses.
In the end, moving multiple blades' worth of VMs to one server plus a redundant server is a fairly common thing to do, so what I'm getting from the replies is that I really need to look into the power/cooling numbers.
We are doing much the same thing, with a couple of differences:
We're moving from 3 enclosures to 1
We just did OVA exports from the old environments and imported to the new, which required a short maintenance window
In the end, we'll have 10 blades in the new c7000 enclosure, so we're not down to just a single server.
Like you said, we also did this based on the power costs alone, which were killing us.
But the difference is that you are consolidating blades onto one chassis, which really means multiple servers.
In my case, I am considering moving VMs from multiple blades onto one single server; blade or otherwise, it's still going to be just one server.
Another way to look at this is to take the sum of each current blade's average MHz usage and the sum of each blade's average memory usage. If you are close to maxing out the single server you want to move to (I wouldn't go higher than 50-60% average CPU and 70% memory, personally), then you need another server. But if these are critical VMs, I wouldn't tie them to a single host at all. If you already have a standby server, the Essentials Plus license for vSphere is not very expensive, and you could put the two hosts in an HA configuration so you wouldn't have to worry about your one host having a problem.
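The headroom check described above is just two sums compared against the target box at those ceilings. A minimal sketch; the per-blade usage figures and the target server's clock speed below are made-up placeholders, so substitute the real averages from each host's performance stats:

```python
# Quick headroom check: sum each blade's average CPU (MHz) and
# memory (GB) usage, then compare against the target server's
# capacity at the suggested ceilings (60% CPU, 70% memory).
# All per-blade numbers here are illustrative placeholders.

blades = [  # (avg CPU MHz used, avg memory GB used) per blade
    (2400, 6), (2100, 7), (3000, 8), (2200, 5),
    (1900, 6), (2700, 7), (2100, 4),
]

target_cpu_mhz = 4 * 4 * 2300   # 4 sockets x 4 cores x 2.3 GHz (assumed clock)
target_mem_gb = 64              # matches the x3755 in the thread

cpu_used = sum(c for c, _ in blades)
mem_used = sum(m for _, m in blades)

cpu_pct = 100 * cpu_used / target_cpu_mhz
mem_pct = 100 * mem_used / target_mem_gb

print(f"CPU: {cpu_used} MHz -> {cpu_pct:.0f}% of target")
print(f"Memory: {mem_used} GB -> {mem_pct:.0f}% of target")
print("Fits" if cpu_pct <= 60 and mem_pct <= 70 else "Needs another host")
```

With these sample numbers the single box fits with headroom; if either percentage crosses its ceiling, that's the signal to keep a second host in the plan.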