9 Replies Latest reply on May 29, 2014 6:12 AM by Texiwill

    Pros & Cons of more ESXi hosts vs ESXi hosts with more CPUs

    adrianych Enthusiast

      Hi, I need some advice as I am about to refresh my servers. I am currently running VMware vSphere 5.5 (Standard, licensed for 12 processors) on 6x blade servers, each with 2x 6-core Xeon CPUs.

       

      My VM usage is approximately:

      - RAM: 600GB (approx. 75% used on each ESXi host)

      - CPU: 5% average, 15% max (quite low on each ESXi host; even when some VMs hit 90% CPU, the host only goes up to about 15%)

      - no local HDD

      - boot using RAID 1 SD cards (installed with VMware ESXi 5.5)

      - VMs housed on Dell EQL storage (via iSCSI network)
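The gap between per-VM and host-level CPU usage falls out of simple arithmetic: a VM's vCPUs are a small fraction of the host's physical cores. A minimal sketch of that math (the busy VM's vCPU count is an assumption for illustration, not from the post):

```python
# Why a VM at 90% CPU barely moves a 12-core host.
# Assumed figures: one busy 2-vCPU VM on a host with
# 2x 6-core Xeons (12 physical cores, hyperthreading ignored).

host_cores = 12          # 2 sockets x 6 cores
vm_vcpus = 2             # hypothetical busy VM
vm_usage = 0.90          # 90% CPU inside the guest

# Host-level usage contributed by that one VM:
host_usage = vm_usage * vm_vcpus / host_cores
print(f"{host_usage:.0%}")   # 15%
```

So a single saturated 2-vCPU VM can only ever push a 12-core host to about 15%, which matches the behaviour described above.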

       

      So instead of refreshing 6x blade servers with 2 CPUs each, I am considering getting 8 or 10 blade servers with single CPU (6, 8 or 10 cores depending on price and availability).

       

      Assuming the following are true, kindly advise on the pros and cons, especially in terms of VMware:

      - current CPUs are more powerful than what we had 5 years ago

      - RAM per new server will be equal to or more than current (RAM should be cheaper now)

      - no changes to the networking setup (2 NICs for LAN, 2 NICs for iSCSI)

      - initially purchase only 8 blades, scaling up to 10 or 12 if necessary

      - Blade chassis has 16 slots, 1 slot spare, only 3 slots reserved (AD, VDI-1 and VDI-2)
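One concrete trade-off worth computing before buying: with more, smaller hosts, losing one host takes a smaller fraction of the cluster with it, but each host also carries less RAM. A rough N+1 failover sketch (the per-blade RAM figures are assumptions for illustration, not quotes from any vendor):

```python
# Compare N+1 failover headroom: fewer big hosts vs more small hosts.
# Per-blade RAM configs below are assumed for illustration only.

def n_plus_1(hosts, ram_per_host_gb, ram_demand_gb):
    """RAM headroom if one host fails (admission-control style check)."""
    surviving = (hosts - 1) * ram_per_host_gb
    return surviving - ram_demand_gb   # > 0 means the cluster still fits

demand = 600  # GB of VM RAM in use today (from the post)

# Option 1: 6 dual-socket blades, 128GB each (current-style, assumed)
# Option 2: 10 single-socket blades, 96GB each (assumed)
for name, hosts, ram in [("6x 2-socket", 6, 128), ("10x 1-socket", 10, 96)]:
    headroom = n_plus_1(hosts, ram, demand)
    print(f"{name}: headroom after 1 failure = {headroom}GB, "
          f"capacity lost per failed host = {1 / hosts:.0%}")
```

Under these assumed numbers the scale-out option survives a host failure with far more RAM to spare (264GB vs 40GB) and loses only 10% of cluster capacity per failed host instead of 17% — but it needs more licences, switch ports, and chassis slots.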

        • 2. Re: Pro & Cons of More ESXi hosts vs ESXi host with more CPUs
          Ethan44 Hot Shot

          Hi

           

           

          Welcome to communities.

           

          No difference from a licensing point of view, but if you add more hosts, then power consumption, rack space, and budget become big factors.

          The disadvantage of fewer hosts is exposure to a single point of failure: with only one host there is no failover, etc.

          • 3. Re: Pro & Cons of More ESXi hosts vs ESXi host with more CPUs
            Texiwill Guru
            User Moderator, vExpert

            Hello,

             

            The real question to me is: do you have to refresh? If so, why go with less CPU per box than you already have? Please remember that cores usually share the same cache, so with 2 sockets you actually have more cores (perhaps) and more cache (definitely) available. Determine whether you really need to replace, or whether you should spend the money adding more memory (if possible), upgrading the CPUs, etc. That depends on the model of blade.

             

            What is the need to upgrade? Just because it is "time", or because of some specific need?

             

            I am in the same boat, really: upgrade my G6 blades and go to G7/G8 (I use HP), or stick with what I have, which is working out quite well and has more than enough CPU/memory, etc. I like the 2-proc boxes; they are very good. Do you need to upgrade to use VSAN? If so, blades can be done, but it takes some serious planning.

             

            Best regards,
            Edward L. Haletky
            VMware Communities User Moderator, VMware vExpert 2009, 2010, 2011,2012,2013,2014

            Author of the books 'VMware ESX and ESXi in the Enterprise: Planning Deployment of Virtualization Servers', Copyright 2011 Pearson Education, and 'VMware vSphere and Virtual Infrastructure Security: Securing the Virtual Environment', Copyright 2009 Pearson Education.

            Virtualization and Cloud Security Analyst: The Virtualization Practice, LLC -- vSphere Upgrade Saga -- Virtualization Security Round Table Podcast

            • 4. Re: Pro & Cons of More ESXi hosts vs ESXi host with more CPUs
              adrianych Enthusiast

              Hi Edward, I need to refresh the entire Blade Chassis + Blade servers + Storage & switches (LAN + 10Gbps iSCSI) as the support is about to expire.

               

              Something they failed to mention during the sales pitch: although the blade chassis can support 3 to 4 generations of blade servers, support for the chassis itself runs only 5 to 7 years. Yet the chassis was sold as "future-proof", since 3 to 4 server generations is easily 10 to 14 years. For example, our chassis and blades are 5 years old; we cannot plug in new blades, because the new blades would carry 5 years of support while the chassis support expires in 2 years, even with a 2-year extension.

               

              It is like switching out the entire rack.

              It is one major overhaul, but I am planning to do the upgrade in phases: blades first, then switches, then storage.

               

              Anyway, I have already maxed out the RAM slots on the current/old blades. So for the new blades I have to choose carefully, as single-CPU servers can only use half the RAM slots.

               

              But I also have to look into other hidden pros and cons, especially with VMware and the hardware, like the RAM slot issue I have just found out about: 256GB RAM is advertised, but only with 2 CPUs installed; otherwise it is 128GB.

              • 5. Re: Pro & Cons of More ESXi hosts vs ESXi host with more CPUs
                Texiwill Guru
                vExpert, User Moderator

                Hello,

                 

                Ouch, that is a hidden cost: upgrading the chassis to stay in "support" even though it is working. You know, I might just approach Dell about extending the chassis support itself. If that is not possible:

                 

                All modern systems use NUMA, which ties memory slots to CPUs: more CPUs, more memory slots. So yes, 2-CPU systems have twice the memory capacity of 1-CPU systems. I would also pay very close attention to the hardware requirements for VSAN. These will dictate the SCSI controller(s) you can use locally, which could in turn change the blades you may want to use; I know it does for my blades. This is also important if you ever want to use SSDs for anything.
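The NUMA point can be put in numbers: each populated socket brings its own bank of DIMM slots, so an unpopulated socket strands half the advertised capacity. A sketch with assumed slot counts and DIMM sizes (check the blade's manual for the real values):

```python
# Max RAM as a function of populated sockets (NUMA: slots belong to sockets).
# Slots-per-socket and DIMM size are assumptions for illustration.

def max_ram_gb(populated_sockets, slots_per_socket=8, dimm_gb=16):
    """Only DIMM slots attached to a populated socket are usable."""
    return populated_sockets * slots_per_socket * dimm_gb

print(max_ram_gb(1))  # 128 -> the "only 128GB with 1 CPU" surprise
print(max_ram_gb(2))  # 256 -> the advertised 256GB needs both sockets
```

With these assumed values the output reproduces the 128GB-vs-256GB situation described earlier in the thread.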

                 

                Actually, since you already boot from an SD slot, having one storage slot for an SSD and one for a large SAS drive would give you good capability for VSAN (if you can get that SAS drive large enough, 2-6TB); otherwise you are really looking at non-blades to support VSAN. That is, if you really want to use VSAN in the future... just a thought.

                 

                So CPUs give you memory, and SCSI controllers are just as important for the future... I would also go with something that allows at least 1-2 CPU upgrades. The new Sandy Bridge/Haswell v2 systems are nice that way, but I have also heard the v3 systems are coming out soon... Oh, and you definitely want Sandy Bridge/Haswell to get the latest AES-NI instruction set for encryption...

                 

                Do you plan on ever using NVIDIA Grid systems? If so, that makes using blades difficult as well depending on the blades... Once more just another consideration.

                 

                Hardware has not been this much of an issue for the last few years; now it has become one again when planning upgrades for the future.


                General Recommendations:

                * 2 procs per blade for the memory slots (Sandy Bridge/Haswell v2 chipsets, or even v3 if you can wait); minimum 6 cores, but deca-cores work out well and will reduce the need to fill all blade slots --> upgrade room once more.

                * 256 - 512GB memory for blades

                * SD Slots still available

                * Make sure the SCSI controller is on the VSAN HCL (just to be safe, these all work well with SSD)

                 

                Best regards,
                Edward L. Haletky

                • 6. Re: Pro & Cons of More ESXi hosts vs ESXi host with more CPUs
                  adrianych Enthusiast

                  Hi Edward. Thanks so much for your input.

                   

                  Actually, I do not know much about VSAN. I am using Dell EqualLogic; we started off with a PS6000 with 16x 1TB SATA, then added 2 more PS6000s with 16x 400GB SAS each.

                   

                  Later we upgraded to 10G using a PS6110 with 17x SAS + 7x SSD. The older EQLs are on copper 1G ports of the 10G switch, while we added a pair of 10G M8024-K switch modules to the blade chassis.

                   

                  As for blade chassis and blade server support, Dell only supports them for up to 7 years. So at 5 years, I cannot be putting in new blade servers.

                  Most brands would give you the blade chassis for free nowadays, but that does not include the 10Gbps switch module (USD 10K), the other 1Gbps switch modules (USD 7K), or the 5-year (USD 10K) or 7-year (USD 16K) support.

                   

                  I would like to know what you meant by "Make sure the SCSI controller is on the VSAN HCL (just to be safe, these all work well with SSD)".

                  I tried reading up but I don't get much info.

                  • 7. Re: Pro & Cons of More ESXi hosts vs ESXi host with more CPUs
                    Texiwill Guru
                    User Moderator, vExpert

                    Hello,

                     

                    When you use VSAN (which makes use of local disks within each host), the primary factor is a good SCSI controller, along with the quality of the SSD. This is really about planning for the future. Any caching mechanism (VSAN, vFRC, PernixData, SanDisk, etc.) requires access to an SSD (or memory), so a SCSI controller that is on the VSAN HCL is blessed to work well with these caching layers and SSDs. It is a way of somewhat future-proofing new boxes.

                     

                    You cannot just use any old SCSI controller and expect caching mechanisms to work: some do not recognize SSDs as anything but spinning disks, and such controllers have other issues under high performance. So it is best to go with one on the HCL. I am in the same boat; to use caching software I need better controllers for access to SSDs.

                     

                    Best regards,
                    Edward L. Haletky

                    • 8. Re: Pro & Cons of More ESXi hosts vs ESXi host with more CPUs
                      adrianych Enthusiast

                      Sorry, but I am getting a little confused.

                       

                      Just to clarify: my blades do not have local storage or any HDDs, as they boot from SD cards (1GB each, RAID 1).

                      The blades connect to the Dell EQL via iSCSI through a 10G M8024-K blade chassis switch module.

                      The dual/redundant SCSI or HDD controllers are within the Dell EQL units.

                       

                      I take it that I am pretty much covered in terms of the issues you mentioned?

                      • 9. Re: Pro & Cons of More ESXi hosts vs ESXi host with more CPUs
                        Texiwill Guru
                        User Moderator, vExpert

                        Hello,

                         

                        Yes, you are. Most blades have local disk capability regardless of whether disks are actually installed, so for future growth you may want to consider upgrades that let you expand that capability a bit more, is all. Caching, even in front of an EQL, is always a good thing, and you can achieve it with SSDs, memory in the box, Fusion-io, etc. Caching is a relatively easy upgrade for any storage environment. If you eventually want to do this, you may have to think about SCSI controllers for local SSDs within the blades (if your blades have that capability).

                         

                        When upgrading hardware, all the new 'options' for improving performance require thinking about whether or not your environment needs those options, etc.

                         

                        Best regards,
                        Edward L. Haletky