VMware Cloud Community
Sean_Kane
Enthusiast

When to add a new host

Hi Everyone,

My apologies for what may be a very simple and rudimentary question.

I have a two-host ESXi 5.1 cluster in Europe right now, and I am trying to figure out the best way to start requesting CapEx money for new hardware.

Based on the information below, do you think the cluster could continue to run on a single host, even though memory usage is near capacity on both hosts? None of my VMs have CPU or memory reservations.

Thanks in advance!!

-S

Here is the state of my environment:

Host 1:

  • Memory
    • Capacity: 196 GB
    • Usage: 174 GB
  • CPU
    • Capacity: 16 x 2.599 GHz
    • Usage: 1562 MHz

Host 2:

  • Memory
    • Capacity: 196 GB
    • Usage: 174 GB
  • CPU
    • Capacity: 16 x 2.599 GHz
    • Usage: 6374 MHz

Per the VMware HA Advanced Runtime Info screen for the cluster:

[Screenshot: ClusterInfo.png, the cluster's HA Advanced Runtime Info]

6 Replies
vuzzini
Enthusiast
(Accepted Solution)

Hello Sean_Kane,

Since you do not have any reservations on the VMs in the cluster, there is no immediate need to add another host. The main reason is that your vSphere cluster still has plenty of available slots.

Also, if HA is triggered by an ESXi host failure, the VMs will be restarted on the other host without restriction, since you currently have only 31 VMs powered on, i.e. only 31 slots used.

The only exception is if the memory and CPU allocated to individual VMs is too high; in that case, you may want to consider adding another ESXi host to the cluster.
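
To make the slot reasoning concrete, here is a toy Python sketch of the arithmetic. The slot sizes below are my assumptions, since you have no reservations set (vSphere 5.x HA falls back to a small default CPU slot, and the memory slot comes from the largest per-VM memory overhead); the real values are shown on your HA Advanced Runtime Info screen.

```python
# Toy illustration of the HA slot math described above. Slot sizes are
# assumptions: ~32 MHz CPU per slot (the das.vmcpuminmhz default) and a
# guessed 200 MB memory slot (largest per-VM memory overhead).

host_cpu_mhz = 16 * 2599      # per-host CPU capacity from the stats above
host_mem_mb = 196 * 1024      # per-host memory capacity

slot_cpu_mhz = 32             # assumed default CPU slot size
slot_mem_mb = 200             # assumed memory slot size

slots_per_host = min(host_cpu_mhz // slot_cpu_mhz,
                     host_mem_mb // slot_mem_mb)

print(f"Slots per host: {slots_per_host}, slots in use: 31")
# With numbers like these, one host alone has far more than 31 slots,
# which is why HA admission control is not the constraint here.
```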

Sean_Kane
Enthusiast

Hi Vuzzini,

Thank you for responding!

When you say "the memory and CPU allocated to individual VMs is too high," do you mean the memory actually assigned to the VMs themselves?

So, essentially, I should add up the memory assigned to all the VMs and make sure the total is not greater than 192 GB?


Sean

vuzzini
Enthusiast

Yes, I am talking about the memory assigned to the VMs themselves. Adding up the memory on all VMs and making sure the sum stays under 192 GB works in theory, but it would defeat the point of virtualization, which is to safely overcommit resources.

Instead, review the cluster's performance: look at how much of the resources are actually used during peak hours, and check whether consumption exceeds what a single host provides, i.e. 192 GB of memory or roughly 41.6 GHz of CPU (16 x 2.599 GHz).
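
If it helps, here is a minimal pyVmomi sketch of how you could pull those two numbers: total memory assigned to powered-on VMs versus what the hosts actually consume. The vCenter hostname and credentials are placeholders, and disabling certificate checks is a lab-only shortcut.

```python
# Minimal pyVmomi sketch: assigned memory vs. actual host consumption.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience only
si = SmartConnect(host="vcenter.example.com",          # placeholder
                  user="administrator@vsphere.local",  # placeholder
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Sum the memory configured on every powered-on VM.
vm_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
assigned_mb = sum(vm.config.hardware.memoryMB for vm in vm_view.view
                  if vm.config and vm.runtime.powerState == "poweredOn")
vm_view.Destroy()

# Report what each host is actually consuming right now.
host_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for h in host_view.view:
    qs = h.summary.quickStats
    print(f"{h.name}: {qs.overallMemoryUsage} MB consumed, "
          f"{qs.overallCpuUsage} MHz CPU")
host_view.Destroy()

print(f"Assigned to powered-on VMs: {assigned_mb / 1024:.0f} GB")
Disconnect(si)
```

Run it during peak hours and compare the consumed figures against a single host's capacity.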

BCBSSC
Contributor

If a host were to go down right now, most of the guests from the failed host would end up paging much of their memory out to your datastores, which is a huge performance hit compared to physical RAM. The rule of thumb is to keep enough spare CPU and RAM capacity to absorb an HA failover without hitting swap, which in your two-node cluster means staying at or below 50% utilization on each node.
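
To put numbers on it, here is the check with the figures from the original post (plain Python, nothing environment-specific):

```python
# Plugging the posted numbers into the 50% rule of thumb.
host_capacity_gb = 196        # physical RAM per host, as listed above
consumed_gb = [174, 174]      # consumed memory on host 1 and host 2

total_consumed = sum(consumed_gb)      # 348 GB across the cluster
survivor_capacity = host_capacity_gb   # only one host left after a failure

print(f"Memory in use across the cluster: {total_consumed} GB")
print(f"Memory on the surviving host:     {survivor_capacity} GB")
if total_consumed > survivor_capacity:
    print("A failover today would overcommit the survivor; "
          "expect heavy ballooning and swapping.")
```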

Sean_Kane
Enthusiast

OK. Thank you for your advice, Vuzzini!


Sean

Sean_Kane
Enthusiast

BCBSSC,

Yeah, I just did some rough calculations.

There is approximately 500 GB of RAM assigned to VMs in the cluster. Looking at the two hosts, there is no memory ballooning going on.

So, even though each host has only about 192 GB of usable RAM, we have far more than that assigned to VMs across the cluster.


That said, the Active and Consumed memory on each host is dramatically lower than the Granted figure.
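
Here is the quick math, in case anyone wants to follow along:

```python
# Back-of-the-envelope overcommit math from my rough numbers above.
assigned_gb = 500              # RAM assigned to all VMs (rough estimate)
per_host_gb = 192              # usable RAM per host
cluster_gb = 2 * per_host_gb   # both hosts healthy

print(f"Overcommit with both hosts up:  {assigned_gb / cluster_gb:.2f}x")
print(f"Overcommit after losing a host: {assigned_gb / per_host_gb:.2f}x")
# Roughly 1.30x today and 2.60x after a failure. Since Active and
# Consumed are well below Granted, the real question is how much of that
# assigned memory the VMs would actually touch after a failover.
```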

Sean
