VMware Cloud Community
zacvmnovice
Contributor

Should you always restrict resources?

Hi,

I'm quite new to ESXi and use both 4.1 and 5, and I've had the odd issue where the server seems to freeze everything until rebooted. It looks like a resource issue; I just wondered whether resource pools should be used to restrict resources, to stop VMs trying to take as much resource as possible?

Thanks


Zac

11 Replies
Techstarts
Expert

Configuring resource pools, reservations, and limits is strongly recommended in situations where resources are limited.

Analyze your VMs' resource usage and allocate only as much as each workload needs.

You can analyse your VMs using the performance charts in vCenter.
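As a rough illustration of that kind of analysis, here is a small Python sketch. The `suggest_memory_mb` helper and the 95th-percentile-plus-headroom rule are my own assumptions for the example, not an official VMware recommendation:

```python
# Hypothetical right-sizing helper: given memory-usage samples (in MB)
# read off the vCenter performance charts, suggest an allocation with
# some headroom. The 95th-percentile + 25% headroom rule is an
# illustrative assumption, not VMware guidance.

def suggest_memory_mb(samples_mb, headroom=0.25):
    """Suggest a memory allocation based on observed usage."""
    if not samples_mb:
        raise ValueError("need at least one usage sample")
    ordered = sorted(samples_mb)
    # 95th percentile, so one-off spikes don't drive the sizing
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    return int(p95 * (1 + headroom))

# A VM that mostly sits at ~1 GB with one 4 GB spike gets sized near
# its steady state, not its spike:
suggestion = suggest_memory_mb([1000] * 19 + [4000])
```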

With Great Regards,
RParker
Immortal

zacvmnovice wrote:

Hi,

I'm quite new to ESXi and use both 4.1 and 5, and I've had the odd issue where the server seems to freeze everything until rebooted. It looks like a resource issue; I just wondered whether resource pools should be used to restrict resources, to stop VMs trying to take as much resource as possible?

Thanks


Zac

Freezing is a driver issue or a hardware issue. It doesn't matter how much memory you have or how much of it is being used; that should not FREEZE the server.

zacvmnovice
Contributor

Thanks for the replies.

I presume it's not an OS driver issue?

If you give a VM a set number of cores and a set amount of memory, is there any chance it can automatically decide it needs more and try to take resources from other VMs?

golddiggie
Champion

Are the VMs freezing, or is the host server freezing/locking up? What are you using for a host server (hardware)? If it's the VMs that are locking up, but you can still get in and administer the host, it could be because the host doesn't have enough resources to do what you're asking of it. Generally speaking, we give VMs the minimum amount of resources to do the job, such as starting off with one vCPU assigned and a minimum level of RAM. If the VM shows it really NEEDS more resources, you slowly increase them.

At least with version 4.x you also didn't want to allocate more than about half your total core count to a single VM. So if you have a total of four cores in your host, don't give a single VM four vCPUs, and definitely don't give multiple VMs four vCPUs each. Unless a VM actually tosses a lot of CPU alarms at me, I don't give it more than two vCPUs, and that's with a total of eight cores in my home lab host. Most of the servers where I'm working are given either one or two vCPUs as well; very few have more assigned, and those are typically heavily hit SQL or SharePoint servers.

Post up what you're using for a host server and how the VMs are configured. Knowing what you're running for the host, plus how the VMs are set up, will help us to help you out more...

BTW, with ESX/ESXi, locks/crashes are very often due to hardware issues. This is most often the case with Linux/Linux-based operating systems.
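The vCPU rule of thumb above can be captured in a few lines of Python. The half-the-cores cap and the `max_vcpus_for_vm` helper are just a restatement of the poster's guidance, not an official VMware limit:

```python
# Sketch of the forum rule of thumb: start at 1 vCPU and don't give a
# single VM more than about half the host's physical cores. This is
# community guidance, not a hard ESXi constraint.

def max_vcpus_for_vm(host_cores):
    """Suggest an upper bound of half the host's physical cores (min 1)."""
    return max(1, host_cores // 2)

# On a 4-core host, cap a single VM at 2 vCPUs; on 8 cores, at 4.
```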

wdroush1
Hot Shot

golddiggie wrote:

This is most often the case with Linux/Linux-based operating systems.

Wat?

zacvmnovice
Contributor

When I'm creating VMs, within resources do you leave the resources on unlimited, or would you set a limit on all VMs?

I have a Dell R710 with 2 x 4-core processors (hyperthreaded) and 36GB RAM.

At the minute I'm just running 1 x SBS 2011 server and 1 x Server 2008 running SQL.

The SBS 2011 server has 2 cores and 16GB RAM, and the 2008 server has 2 cores and 12GB RAM.

Thanks


Zac 

a_p_
Leadership

When I'm creating VMs, within resources do you leave the resources on unlimited, or would you set a limit on all VMs?

No! It may make sense to set limits on resource pools, but IMO there are very few use cases for limits on individual VMs; they could even add additional overhead. If you don't need that much memory in your VMs, lower the assigned memory instead.
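A quick back-of-the-envelope check with the numbers from this thread shows why lowering the assigned memory is enough here. The ~2 GB hypervisor overhead figure and the `memory_headroom_gb` helper are my own rough assumptions for illustration:

```python
# Sanity check of memory allocation vs. host capacity, using this
# thread's numbers: 36 GB host, one 16 GB VM and one 12 GB VM.
# The 2 GB hypervisor/overhead allowance is a rough assumption.

def memory_headroom_gb(host_gb, vm_allocations_gb, hypervisor_overhead_gb=2.0):
    """Return how much host memory remains after the VMs and overhead."""
    return host_gb - hypervisor_overhead_gb - sum(vm_allocations_gb)

headroom = memory_headroom_gb(36, [16, 12])
# ~6 GB left: no overcommit, so limits would add nothing but risk here.
```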

What kind of disks/RAID do you use (number of disks, RAID level)? ESXi relies entirely on the RAID controller's capabilities for caching, and there's a huge performance difference between running a RAID controller without a battery-backed write cache (write-through mode) and with one (write-back mode).

André

Dracolith
Enthusiast

zacvmnovice wrote:

When I'm creating VMs, within resources do you leave the resources on unlimited, or would you set a limit on all VMs?

I have a Dell R710 with 2 x 4-core processors (hyperthreaded) and 36GB RAM.

No. I would say almost never set resource limits, and especially don't set memory limits.

Instead, create resource pools to divide the available resources, and set reservations or change the share priority in order to make assurances about the resources available to higher-priority services at times when there is resource contention.

In regards to the number of virtual CPUs assigned to a virtual machine, always set this to 1 unless you know the VM will benefit from virtual SMP; if it will, increase the number only when there is a performance impact, e.g. increase to 2. Do not increase to 4 unless there is a compelling case in regards to the parallelism of the tasks on the VM.

A virtual machine only benefits from virtual SMP if there are multiple CPU-bound tasks on it, or there is a CPU-bound task that is multithreaded and will benefit from the availability of multiple CPUs. Usually when we create virtual machines, we run one application per virtual machine; in that case there will generally be a host-wide performance cost for assigning multiple vCPUs and no real benefit to the VM.
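The shares mechanism described above can be modelled in a few lines. This is a deliberately simplified sketch of proportional-share scheduling (it ignores reservations and limits, and the pool names and MHz figures are made up for the example):

```python
# Simplified model of ESXi shares under contention: when demand exceeds
# capacity, each pool's entitlement is proportional to its shares.
# Ignores reservations/limits; for intuition only.

def entitlements_mhz(capacity_mhz, shares_by_pool):
    """Split capacity across pools in proportion to their shares."""
    total = sum(shares_by_pool.values())
    return {name: capacity_mhz * s / total for name, s in shares_by_pool.items()}

# "High" (2000) vs "Normal" (1000) CPU shares on a fully contended host:
split = entitlements_mhz(9000, {"prod": 2000, "dev": 1000})
# prod is entitled to 6000 MHz and dev to 3000 MHz -- a 2:1 split, but
# only during contention; otherwise either pool can use whatever is free.
```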

wdroush1
Hot Shot

Dracolith wrote:

zacvmnovice wrote:

When I'm creating VMs, within resources do you leave the resources on unlimited, or would you set a limit on all VMs?

I have a Dell R710 with 2 x 4-core processors (hyperthreaded) and 36GB RAM.

No. I would say almost never set resource limits, and especially don't set memory limits.

Instead, create resource pools to divide the available resources, and set reservations or change the share priority in order to make assurances about the resources available to higher-priority services at times when there is resource contention.


To be honest, I have a test bed resource pool, and I set memory limits on it so that it goes into memory contention way early, before putting pressure on our dev network (it also lets you play with memory-contested services).


Of course this isn't a hard limit, and it can be raised if need be, but it's mostly in place so that if anyone derps around on the test bed they can't affect the other VMs.

But yeah, pretty specific setups indeed.

Dracolith
Enthusiast

William Roush wrote:

To be honest, I have a test bed resource pool, and I set memory limits on it so that it goes into memory contention way early, before putting pressure on our dev network (it also lets you play with memory-contested services).

Not that there is an issue with using limits when they may be called for, particularly on resource pools of that nature. If you do intentionally create memory contention in that way, make sure it's "safe" memory contention.

What's potentially unsafe? Any level of contention that excessively impacts host performance, including performance for VMs outside the resource pool; for example, forcing dev VMs into heavy contention when their swapfiles or guest OS disks are stored on the same datastore as production VMs. Heavy swapping activity can hammer the storage I/O queues on the array and the host. But if you have dedicated LUNs and dedicated HBAs for your dev VMs' storage, it should be OK.

wdroush1
Hot Shot

Dracolith wrote:

William Roush wrote:

To be honest, I have a test bed resource pool, and I set memory limits on it so that it goes into memory contention way early, before putting pressure on our dev network (it also lets you play with memory-contested services).

Not that there is an issue with using limits when they may be called for, particularly on resource pools of that nature. If you do intentionally create memory contention in that way, make sure it's "safe" memory contention.

What's potentially unsafe? Any level of contention that excessively impacts host performance, including performance for VMs outside the resource pool; for example, forcing dev VMs into heavy contention when their swapfiles or guest OS disks are stored on the same datastore as production VMs. Heavy swapping activity can hammer the storage I/O queues on the array and the host. But if you have dedicated LUNs and dedicated HBAs for your dev VMs' storage, it should be OK.

And that is why auto-tiering is terrible. ;-)
