
    Physical Server Configurations

    jacquesdp Novice

       

      Hi all,

       

       

      We are in the planning phase of virtualizing our infrastructure, and we are currently stuck on the question of whether we should invest in a couple of really powerful machines or in a blade solution with a larger number of less powerful blades. I am sure other companies have faced this choice, and I would be grateful for any comments.

       

       

      Thanks!

       

       

      Jacques

        • 1. Re: Physical Server Configurations
          gary1012 Master

          You'll get a lot of different opinions on this question. If you have current issues with power and available core data center network/SAN ports, go with blades that have switch options. Most of the hardware vendors have now released blades that have either reduced or eliminated concerns surrounding I/O expansion.

          • 2. Re: Physical Server Configurations
            khughes Virtuoso

             

            Well, I had a nice little post typed out and, yay, it errored when posting. So, round 2:

            Like Gary said, it can go either way, with lots of different opinions. There are a couple of factors to consider, like whether you're going to be using ESXi or ESX. If you're using free ESXi, then having lots of blades obviously isn't going to cost you anything in licensing; but if you are going to use ESX and pay for licenses, having a lot of tiny blades might not make the most sense (see the rough cost sketch below). Also, when you think about the hardware you're going to virtualize, do you have any really big boxes that might eat up a lot of resources and possibly swamp a blade?

            In the end it's all about the resources delivered, and how you go about delivering them. A VM doesn't care where it gets its resources from, as long as it gets them.
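            To put rough numbers on the licensing point (a sketch only - the per-socket price, the per-host overhead, and the assumption that ESX is licensed per socket are illustrative, not actual VMware pricing):

            # Hypothetical license cost comparison for the same 16 sockets of
            # capacity, packaged as a few big hosts vs. many small blades.
            LICENSE_PER_SOCKET = 1500   # assumed USD, illustrative only
            SUPPORT_PER_HOST = 500      # assumed per-host support/management cost

            def yearly_cost(num_hosts, sockets_per_host):
                licenses = num_hosts * sockets_per_host * LICENSE_PER_SOCKET
                support = num_hosts * SUPPORT_PER_HOST
                return licenses + support

            print(f"2 big hosts (8 sockets each):    ${yearly_cost(2, 8)}")
            print(f"8 small blades (2 sockets each): ${yearly_cost(8, 2)}")
            # Per-socket licensing is a wash; the per-host overhead is what
            # penalizes a fleet of tiny blades.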


            • Kyle
            • 3. Re: Physical Server Configurations
              Ken.Cline Champion

               

              How large is your environment? Keep in mind that you're going to want to be able to take at least one host offline for patching / testing / failures. If you scale up (bigger boxes), you may wind up provisioning a lot of extra capacity for a small environment - and lose some flexibility. Remember that you're going to want to upgrade to the "next" version of ESX at some point in time. You'll want to do this as a rolling upgrade - again, a good reason to scale out rather than up. Also, if you plan to use VMware HA, consider how long it will take to restart the VMs from a failed host. If you've got 20 VMs on a host, it will take "time X"; if you've got 40 VMs per host, it may take "time X*2" or longer. Same thing if you want to take a host offline for maintenance: when you put it into maintenance mode, it will begin migrating VMs to other hosts - with 20 VMs, that will take a while; with 40 VMs, that will take a long while.
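              To make the scaling concrete, here is a rough back-of-the-envelope sketch of that point; the per-VM timings and the parallelism factor are purely illustrative assumptions, not VMware figures:

              # Rough sketch of how HA restart and maintenance-mode evacuation
              # times grow with VMs per host. All timing constants are assumptions.
              AVG_RESTART_SEC = 45   # assumed time to power on one VM
              AVG_VMOTION_SEC = 90   # assumed time to VMotion one VM
              PARALLEL_STARTS = 4    # assumed concurrent power-ons

              def ha_restart_estimate(vms_per_host):
                  """Seconds to restart a failed host's VMs on surviving hosts."""
                  waves = -(-vms_per_host // PARALLEL_STARTS)  # ceiling division
                  return waves * AVG_RESTART_SEC

              def evacuation_estimate(vms_per_host):
                  """Seconds to migrate every VM off a host, one VMotion at a time."""
                  return vms_per_host * AVG_VMOTION_SEC

              for n in (20, 40):
                  print(f"{n} VMs/host: HA restart ~{ha_restart_estimate(n)}s, "
                        f"maintenance evacuation ~{evacuation_estimate(n)}s")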

               

               

              Ken Cline

              Technical Director, Virtualization

              Wells Landers

              TVAR Solutions, A Wells Landers Group Company

              VMware Communities User Moderator

               

               

              • 4. Re: Physical Server Configurations
                azn2kew Champion

                 

                What is your long-term strategy for data center consolidation? If you want less rack space, lower power consumption, and the flexibility of a modular blade system, then try out the new Dell PE M600 series; these are capable of running ESX 3.5 hosts with up to 64GB RAM if you wish, and they have everything you need to virtualize your systems. If you want more powerful, high-end rack servers, then use a PE 2950, 6950, or R900 maxed out with 256GB RAM - plenty of power for any solution.

                 

                 

                No matter what type of server, you must have networking, storage, security, and implementation planned out thoroughly so you aren't exposed to performance and disk I/O issues. Price out which type is cheaper and more reasonable, then use it; otherwise, either solution is perfectly fine. New blade systems are no longer limited in NIC/HBA expansion or CPU cores.

                 

                 

                If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!

                 

                Regards,

                 

                Stefan Nguyen

                iGeek Systems Inc.

                VMware, Citrix, Microsoft Consultant

                 

                 

                • 5. Re: Physical Server Configurations
                  jacquesdp Novice

                   

                  Hi Gary, well we are not too concerned about that, since we will be eliminating (virtualizing) quite a few physical machines. So it sounds like, if you had the choice, you would go for a blade solution?

                  Thanks

                   

                   

                  Jacques

                  • 6. Re: Physical Server Configurations
                    jacquesdp Novice

                     

                    Hi Kyle,

                     

                     

                    The blades we are looking at are quad-socket, quad-core IBM machines, so there will be 16 cores available on each machine. The question really is whether we should consider buying one or two machines with even more processors available, or split them across a couple of blades. With VMotion and HA it makes for a pretty 'available' solution. We are running ESXi on a few machines currently, but will use ESX when the time comes.

                     

                     

                    Jacques

                    • 7. Re: Physical Server Configurations
                      jacquesdp Novice

                       

                      Hi Ken,

                       

                       

                      We have about 150 servers. You are making a good point in that having big boxes is putting all your eggs in one basket. I think that is also what I am leaning towards: having blades, but powerful ones (16 cores each), and using VMotion between them. Just one question: is it possible for ESX on blade A to use resources from ESX on blade B (processor, memory, etc.)?

                      Thanks

                       

                       

                      Jacques

                      • 8. Re: Physical Server Configurations
                        TomHowarth Guru
                        vExpertUser Moderators

                        My personal view on blades is that they just add a level of complexity and, in the majority of cases, a reduction in resilience.

                         

                        In my experience, clients who have gone for blade technology feel they are getting more bang for their buck; however, they fail to see that by packing 8 servers into a blade chassis they are compounding their resilience risk.

                         

                         

                        For example:

                         

                         

                        A client requires 8 servers to virtualise their environment. Now, I have not yet found a client who will purchase 2 blade chassis and put 4 blades in each; they will all buy one. So what happens if your blade chassis goes south? Bang - no environment. OK, suddenly they want to buy two chassis. That is better, but again, a blade chassis goes bang and half your farm is gone. But hey, that's OK - HA and DRS will sort us out.

                         

                         

                        Except that now, instead of having 40 VMs restarting, you have 160 VMs restarting on only four blades. So much for your N+1 strategy. Can you survive a 50% failure of your farm? No - so buy 3 or 4 chassis to minimise your risk.
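                        A quick sketch of that arithmetic (the farm size and VM density are assumptions chosen to match the example):

                        # Illustration of the chassis-as-failure-domain arithmetic
                        # above. Assumed farm: 8 ESX hosts at 40 VMs each.
                        TOTAL_HOSTS = 8
                        VMS_PER_HOST = 40

                        def chassis_failure_impact(num_chassis):
                            """VMs stranded and hosts left when one chassis fails."""
                            hosts_per_chassis = TOTAL_HOSTS // num_chassis
                            failed_vms = hosts_per_chassis * VMS_PER_HOST
                            surviving_hosts = TOTAL_HOSTS - hosts_per_chassis
                            return failed_vms, surviving_hosts

                        for chassis in (1, 2, 4):
                            vms, hosts = chassis_failure_impact(chassis)
                            print(f"{chassis} chassis: one enclosure failure takes out "
                                  f"{vms} VMs, leaving {hosts} hosts to restart them")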

                         

                         

                        My argument is this: unless you are going to have a very big farm, purchasing blades can put your company in a very precarious predicament.

                        If you found this or any other answer useful, please consider the use of the Helpful or Correct buttons to award points.

                        Tom Howarth

                        VMware Communities User Moderator

                        Blog: www.planetvm.net

                        • 9. Re: Physical Server Configurations
                          Ken.Cline Champion
                          jacquesdp wrote:

                          We have about 150 servers. You are making a good point in that having big boxes is putting all your eggs in one basket. I think that is also what I am leaning towards.

                           

                          I would recommend at least four hosts. That way, in the event of a host failure, you're looking at an average of 50 VMs per host. This, of course, assumes "reasonable" workloads. The "average" loading is about four vCPUs per core, so with 16 cores, that would put you in pretty good shape (assuming you have enough RAM - general rule of thumb: 4GB/core).
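                          As a minimal sizing sketch based on those rules of thumb (the ratios are the stated averages; treat them as assumptions and validate against your own workloads):

                          # Minimal capacity sketch using the rules of thumb above.
                          TOTAL_VMS = 150
                          HOSTS = 4
                          CORES_PER_HOST = 16
                          VCPUS_PER_CORE = 4    # "average" consolidation ratio
                          RAM_PER_CORE_GB = 4   # rule-of-thumb RAM per core

                          vcpu_capacity = CORES_PER_HOST * VCPUS_PER_CORE  # 64 vCPUs/host
                          ram_per_host = CORES_PER_HOST * RAM_PER_CORE_GB  # 64 GB/host
                          vms_after_failure = TOTAL_VMS / (HOSTS - 1)      # one host down

                          print(f"vCPU capacity per host: {vcpu_capacity}")
                          print(f"Suggested RAM per host: {ram_per_host} GB")
                          print(f"Avg VMs per surviving host after one failure: "
                                f"{vms_after_failure:.0f}")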

                           

                          Having blades, but powerful ones (16 cores each), and using VMotion between them.

                           

                          Sounds like a plan.

                           

                          Just one question: is it possible for ESX on blade A to use resources from ESX on blade B (processor, memory, etc.)?

                           

                          No. An ESX box is an island unto itself. When a VM is on that host, it can use the resources of that host only. A VMotion is like getting into a boat and going to another island - once you get there, you're limited to the resources on the other island (host)...

                           

                          Ken Cline

                          Technical Director, Virtualization

                          Wells Landers

                          TVAR Solutions, A Wells Landers Group Company

                          VMware Communities User Moderator

                          • 10. Re: Physical Server Configurations
                            Ken.Cline Champion
                            tom howarth wrote:

                            My personal view on blades is that they just add a level of complexity and, in the majority of cases, a reduction in resilience.

                             

                            In my experience, clients who have gone for blade technology feel they are getting more bang for their buck; however, they fail to see that by packing 8 servers into a blade chassis they are compounding their resilience risk.

                             

                             

                            Ah, Tom... I'm going to disagree with you on this one. A blade chassis full of blades, in most cases, is actually more reliable than a bunch of discrete servers. (I'm going to refer to the HP c7000 chassis in this narrative, but most other vendors are comparable.) Now you ask "But Ken, how can that be?" - well, the chassis itself is a passive device. There are no moving parts - it's just a hunk of metal. And you say "Yes, that's true... but what about the fact that with eight discrete servers I have 16 power supplies?" - hmm, good question! Well, you have 16 power supplies, but only two per server. You could lose capacity with the failure of just two power supplies (both in the same server), whereas the blade chassis has six power supplies and it is possible to run the whole shebang off just one - so you would have to lose all six before you lose capacity. Basically, there is no single point of failure in a blade chassis. By removing a bunch of moving parts from the "server" and putting them into the "infrastructure", you're improving the MTBF of an individual server. You're improving your MTTR, because all you have to do to fix a server is swap a blade - no plugging and unplugging cables (a major cause of outages). And you're simplifying your cable plant.
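                            A toy model of that power-supply comparison (the failure probability and the independence assumption are mine, purely for illustration):

                            # Toy comparison of the two power-supply layouts described
                            # above. Assumes each PSU independently fails with
                            # probability p over some service window.
                            def p_loss_discrete(p, servers=8, psus_per_server=2):
                                """Capacity drops if any discrete server loses both PSUs."""
                                p_server_down = p ** psus_per_server
                                return 1 - (1 - p_server_down) ** servers

                            def p_loss_chassis(p, psus=6):
                                """Chassis runs on one PSU, so all six must fail."""
                                return p ** psus

                            for p in (0.01, 0.05):
                                print(f"p={p}: discrete ~{p_loss_discrete(p):.2e}, "
                                      f"chassis ~{p_loss_chassis(p):.2e}")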

                             

                            Blades of old were problematic and did have some significant issues. I like the new blades and have no problem recommending them...

                             

                            My argument is this: unless you are going to have a very big farm, purchasing blades can put your company in a very precarious predicament.

                             

                             

                            I agree that you do need a "reasonable" number of VMs - if you're looking at HP blades, you could go with the c3000, which can hold 8 half-height or 4 full-height blades and get good ROI with as few as 100-150 or so VMs.

                             

                            Ken Cline

                            Technical Director, Virtualization

                            Wells Landers

                            TVAR Solutions, A Wells Landers Group Company

                            VMware Communities User Moderator

                            • 11. Re: Physical Server Configurations
                              gary1012 Master

                              We're primarily a blade shop due to density, power, and cooling reasons. Plus we get to say we're "green-friendly." If single points of failure cannot be tolerated, then using multiple enclosures is a must. That being said, we've had good luck with the enclosures and have not had one go down. At some point you'll have to ask yourself what's good enough. As for the blade types, we use HP 480s and are considering the 495s and perhaps the Dell 905s. To my knowledge, there isn't a NUMA joined-bus blade design like the IBM x3950s, but I could be wrong. You still have software options to provide resilience and pooled resources through HA and DRS...

                               

                              As for the wide or high argument, each has pros and cons.

                              4-processor/multi-core hosts

                              Pros: usually multiple PCI buses, more I/O slots, more memory slots with memory RAID capabilities, more VMs per host, fewer hosts/licenses to manage

                              Cons: higher cost per unit, RAM kits above 8GB are expensive, less resilience / a higher pain threshold when a single host fails

                               

                              2-processor/multi-core hosts

                              Pros: cheaper cost per unit, hardware is more affordable and commodity-like (providing the ability to hot-spare servers), a lower pain threshold when a single host fails

                              Cons: more hosts/licenses to manage, fewer VMs per host

                               

                              I'm sure I've left something out and I'm sure some will disagree...

                              • 12. Re: Physical Server Configurations
                                jacquesdp Novice

                                 

                                Hi Gary,

                                 

                                 

                                The thing is that we do not want to end up with one VM per blade. Some machines we want to virtualize already need two quad-core CPUs (though, having said that, they are probably overspecified by the vendor). But we need to provide them with their stated requirements. So I guess by getting powerful blades we will, in a sense, be providing the best of both worlds.

                                 

                                 

                                Jacques

                                • 13. Re: Physical Server Configurations
                                  mreferre Virtuoso

                                  Ken,

                                   

                                  Do you remember the good old scale-up vs. scale-out discussions? I love them...

                                   

                                  Massimo.

                                  • 14. Re: Physical Server Configurations
                                    gary1012 Master

                                    You'll get far more than one VM per blade. On a BL480c, we're getting ~10-12 VMs per blade. As for those monster apps that require 2 quad-cores, you'd be better off leaving those as physical machines. If I remember right, you cannot create a VM with more than 4 vCPUs. Even if you could create an 8-vCPU VM, your ROI argument might not be as attractive - or, more than likely, it won't work at all.
