VMware Cloud Community
shub
Contributor

Linux VM performance

We are currently running Linux VMs on ESX servers. Designers log into these VMs to perform various tasks such as editing code, compiling, etc.

From time to time, the sessions appear to freeze for 10-30 seconds and performance is very sluggish. This behavior is not exhibited on physical servers.

Has anyone else running Linux experienced similar issues?

4 x AMD dual-core ESX servers with 32 GB of memory, running under 75% utilization; the guests are Red Hat Enterprise Linux 4, 32-bit.

We have tried several kernels (the issue appeared on several of them), checked networking (no errors), and ensured the latest VMware Tools were installed.

Another problem is that we can't reproduce the issue at will; it just happens on its own.

1 Solution

Accepted Solutions
larstr
Champion

When the total allocated memory approaches 80%, ESX will start to use ballooning and swapping as countermeasures against running out of memory. As we can see, you have 14 GB of memory already swapped. Depending on your VMs' active RAM usage, swapping memory in and out will make their performance suffer very, VERY badly.

Lars
9 Replies
weinstein5
Immortal

How many VMs are you running? Are they single-vCPU VMs?

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
mike_laspina
Champion

Hi,

Can you post the file created by the following command so we can see the resource allocation?

esxcfg-info -r > esxresource.txt
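If it helps to narrow the dump down before posting, a small helper like this can pull out the memory counters. This is just a sketch; the field labels (Balloon, Swapped, Active, Target Allocation) are assumptions based on the usual "Memory Stats" section of the report.

```shell
# Sketch: summarize the memory-pressure counters from an esxcfg-info -r dump.
# The field labels are assumptions based on the typical report format.
summarize_mem() {
    grep -E 'Balloon|Swapped|Active|Target Allocation' "$1"
}
```

Usage: `summarize_mem esxresource.txt`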

http://blog.laspina.ca/ vExpert 2009
shub
Contributor

It's a 5-server cluster with 60+ VMs, mostly Windows.

The Linux VMs seeing the freezes have 2 vCPUs and 6 GB of memory.

shub
Contributor

I have attached the output, Mike.

mike_laspina
Champion

The first thing I see is that your host is in an overcommitted state. The demand for physical memory is greater than the amount installed. This will be your highest area of concern, as it creates exactly the symptoms you have now through extreme disk swapping activity, which is on the order of 1000 times slower than memory.

Here is the indicator from your resource report.

==+Memory Stats :

|----Num Clients.....................................15

|----Shared..........................................1.04 GB

|----Shared Zero.....................................696.66 MB

|----COW.............................................1.19 GB

|----Balloon.........................................6.68 GB

|----Swapped.........................................13.90 GB

|----Mapped..........................................9.55 GB

|----Active..........................................1.61 GB

|----Overhead........................................1.67 GB

|----Working Set Estimate............................1.61 GB

|----Total Minimum (base)............................2.83 GB

|----Total Maximum (base)............................51.29 GB

|----Effective Minimum...............................2.83 GB

|----Target Allocation...............................29.13 GB

The total maximum is what was demanded and the target allocation is the physical limit. Every request past the physical limit will cause swapping.
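As a back-of-the-envelope check, here is a one-liner using the two numbers from the report above; anything over 1.0 means demand exceeds what the host can actually grant.

```shell
# Overcommit ratio from the report above: demanded memory (Total Maximum)
# vs. what the host can actually grant (Target Allocation).
demanded=51.29   # Total Maximum (base), in GB
granted=29.13    # Target Allocation, in GB
awk -v d="$demanded" -v g="$granted" \
    'BEGIN { printf "overcommit ratio: %.2f\n", d / g }'
```

A ratio of about 1.76 means roughly 76% more memory has been promised to the guests than the host can back with physical RAM.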

You need to double the memory in the host to allow for this load, or start looking at ways to reduce memory requests from the VMs.

Are you running Java apps? Memory is commonly over-allocated for the Java VM environment.

http://blog.laspina.ca/ vExpert 2009
shub
Contributor
Contributor
Jump to solution

In VC our servers are usually under 75% memory and sometimes peak over 80%, but DRS keeps the cluster balanced quite well. So unless DRS is incorrect, the host server never runs out of memory.

Yes, we are overcommitted, but this is one of the benefits of ESX: when a VM requires more resources it will have them, and DRS will move VMs around to load balance.

The problem we are finding is that when a Linux VM needs resources, there seems to be a time delay or freeze.

weinstein5
Immortal

Do your single-vCPU VMs exhibit this freezing behavior? Given the way you have described your cluster, I suspect it has to do with the simultaneous scheduling of the two vCPUs of your Linux VMs. Can you perhaps convert one to a single-vCPU VM to see if the freezing stops?

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
mike_laspina
Champion

The guest usage reported in VC is the allocated active real memory as a percentage of the grant. There is also a per-VM memory cost overhead, and you must account for this as well, not just the real memory allocation: each virtual hardware component needs real memory outside of the guest. Being 40% overcommitted is not good for performance, and that is where you are now. Open a support ticket on this one. There are a few things you can do in the resource pool to tune it, and maybe some other advanced suggestions will help. It really comes down to how much memory the host has; not many shops push that level of overcommit.

http://blog.laspina.ca/ vExpert 2009
larstr
Champion

When the total allocated memory approaches 80%, ESX will start to use ballooning and swapping as countermeasures against running out of memory. As we can see, you have 14 GB of memory already swapped. Depending on your VMs' active RAM usage, swapping memory in and out will make their performance suffer very, VERY badly.
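To see this from inside an affected guest while a freeze is happening, a quick check of the swap counters can help confirm it (a sketch; it only shows a snapshot, so compare values before and during a freeze):

```shell
# Inside an affected Linux guest: if SwapFree is shrinking during a freeze,
# the balloon driver is pushing the guest into its own swap on top of the
# host-level swapping shown in the resource report.
grep -E 'MemTotal|SwapTotal|SwapFree' /proc/meminfo
```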

Lars
