VMware Cloud Community
MikeDyn
Contributor

VMWare Server and Hypervisor Host CPU utilization question

Hi,

Running 6 WinXP Pro VMs on a 6-core Windows Server 2003 64-bit host, using VMware Server 1.0.10 (I prefer the VMWS 1 console to VMWS 2 and vSphere).

When I boot all of them at once, they take a very long time to boot, and host CPU utilization stays very low (2%-4%). But when running only one VM, boot performance is much better and host CPU utilization is more like 35%. This made me think that VMware Server gives an equal amount of CPU time to every running VM, regardless of differences in load among VMs. So with 6 VMs, each gets 1/6 of the host CPU time, even if 5 VMs are idle and one VM could use most of the host CPU.

I also thought disk contention was an issue, so I eliminated all swap files on all VMs and got much better boot performance and much higher host CPU utilization (close to 100%).

But the question remains. Can anyone confirm:

  1. Does VMware Server equally divide host CPU cycles across all running VMs regardless of how much each VM could use, or can a VM use more than 1/n of the available host CPU cycles if it needs them?
  2. Is there a difference in the way VMware Server 1, VMware Server 2, and the vSphere Hypervisor handle host CPU utilization/allocation?

Thanks,

Mike

6 Replies
mcowger
Immortal

1) It's not a fixed 1/n. Every VM gets whatever the host can give it, when it can give it. E.g., if you have 6 VMs running, 5 of which are idle and 1 of which is active, that 1 VM gets more than just 1/6th of the CPU power.

2) There are some differences, yes, but overall they use very similar techniques. The vSphere hypervisor *is* much more efficient.

I suspect what you saw was the effects of disk contention across 6 VMs onto a single SATA disk.

--Matt VCDX #52 blog.cowger.us
shishir08
Hot Shot

When CPU resources are overcommitted, the host time-slices the physical processors across all virtual machines, so each virtual machine runs as if it has its specified number of virtual processors. When the host runs multiple virtual machines, it allocates a share of the physical resources to each virtual machine.

So, whatever number of vCPUs you assign to each VM and whatever its shares setting (high, normal, low), the host will divide the CPU cycles among the VMs accordingly.


Taking your problem:

Suppose the host has 12000 MHz of CPU capacity across 8 cores, and you power on six VMs, dividing the CPU cycles like this:

VM1 - 2 vCPU - 2000 MHz
VM2 - 2 vCPU - 2000 MHz
VM3 - 2 vCPU - 2000 MHz
VM4 - 2 vCPU - 2000 MHz
VM5 - 2 vCPU - 2000 MHz
VM6 - 2 vCPU - 2000 MHz

Here you are limiting each VM to that many CPU cycles (2000 MHz), so a VM cannot go ahead and take the other VMs' CPU cycles even when those VMs are inactive.
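The arithmetic above can be sketched in a few lines (this is only an illustration of hard per-VM limits, not VMware's actual scheduler; all numbers are the hypothetical ones from the example):

```python
# Illustrative sketch: dividing a hypothetical 12000 MHz host evenly
# across six VMs, each given a hard limit equal to its share.
HOST_MHZ = 12000  # assumed host capacity from the example above
NUM_VMS = 6

per_vm_limit = HOST_MHZ / NUM_VMS  # 2000.0 MHz each

# With a hard limit set, an active VM cannot borrow cycles left idle
# by the others; it is capped at its own 2000 MHz allocation.
print(per_vm_limit)  # 2000.0
```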

HTH

Shishir

MikeDyn
Contributor

Thanks to both of you for the replies. However they seem to contradict each other.

mcowger states that one active VM will consume more than 1/n of the host CPU cycles if the other running VMs are idle.

However, shishir08 states that CPU cycles are divided evenly and an active VM can consume at most 1/n of the host CPU cycles (which seems to match what I observed).

Can anyone cite VMware documentation that confirms how host CPU cycles are allocated to VMs in VMware Server and the vSphere Hypervisor?


Thanks

Mike

mcowger
Immortal


> However shishir08 states that CPU cycles are divided evenly and an active VM can consume at most 1/n of the Host CPU cycles (this seems to match what I observed).

> Can anyone cite VMWare documentation that confirms how Host CPU cycles are allocated to VMs in VMWare Server and Hypervisor?

The two responses don't actually contradict each other. shishir08's point is that a VM can consume at most 1/n of the host CPU *if there are n VMs competing*. If all VMs are powered on at the same time, it's reasonable to assume that for some short period they will all be trying to boot (and use CPU) simultaneously, in which case none of them will get more than their fair share (1/n). However, if not all of them are actively working, each VM gets what it requests (e.g., in my example of 1 VM doing work and 5 idle, that one VM is the only one competing, and it gets 1/n of the CPU, where n = 1).

The value of n is defined not by the number of running VMs, but by the number of running VMs that have work ready to be processed.
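To make that concrete, here is a toy sketch of the idea (not ESX's real scheduler, which also weighs shares, reservations, and limits): CPU is divided only among VMs with runnable work, and each VM's slice is capped by its own demand. The function name and all numbers are hypothetical.

```python
def allocate(host_mhz, demands):
    """Split host_mhz across VMs by demand.

    Idle VMs (demand 0) do not count toward n, so a lone busy VM
    can take far more than 1/(number of running VMs).
    """
    active = [d for d in demands if d > 0]
    if not active:
        return [0.0] * len(demands)
    fair = host_mhz / len(active)  # 1/n among *active* VMs only
    return [min(d, fair) if d > 0 else 0.0 for d in demands]

# One busy VM among five idle ones gets the whole host, not 1/6 of it:
print(allocate(12000, [12000, 0, 0, 0, 0, 0]))
# Six VMs all booting at once each get their 1/6 fair share:
print(allocate(12000, [5000] * 6))
```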

Here is VMware's information about how the scheduler in ESX works:

http://www.google.com/url?sa=t&source=web&cd=5&sqi=2&ved=0CEYQFjAE&url=http%3A%2F%2Fwww.vmware.com%2...

Note that there isn't a similar document for VMware Server (which has been out of support for a while), but it uses very similar techniques.

I am nearly certain that the performance behavior you are seeing during virtual machine boot is due to disk resource (IOPS) exhaustion, not CPU exhaustion. Your symptoms (long boots during a boot storm, low CPU utilization but bad performance) are classic signs of insufficient disk performance during a boot storm.

--Matt VCDX #52 blog.cowger.us
shrikanthegde
VMware Employee

> Does VMWare Server equally devide hostt CPU cycles across all running VMs regardless of how much each VM could use, or can a VM use more than 1/n available host CPU cycles if it needs them?

If all 'n' VMs are active and demanding resources, the available CPU resource will be divided equally among them (assuming all VMs have the same configuration and shares). But one thing to consider here: if the host has 8 pCPUs (i.e., 800% total CPU resource) and four 1-vCPU VMs (all active) are running on it, then according to the above, each VM gets 800/4 = 200%. But since each VM has only one vCPU, it can't use more than 100% (the capacity of one pCPU).

On the other hand, if only one VM is active and all the others are idle (idle VMs still have a small management overhead), then the active VM gets the full resources, up to the extent it is allowed to use based on its configuration.
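The capacity cap described above can be shown with a couple of lines (purely illustrative arithmetic using the hypothetical numbers from this post, not scheduler code):

```python
# A 1-vCPU VM can't use its full 200% equal share, because one vCPU
# can occupy at most one physical core (100%).
PCPUS = 8
TOTAL_PCT = PCPUS * 100       # 800% total CPU resource on the host
ACTIVE_VMS = 4
VCPUS_PER_VM = 1

equal_share = TOTAL_PCT / ACTIVE_VMS    # 200.0% available per VM
vcpu_cap = VCPUS_PER_VM * 100           # 100% cap for a 1-vCPU VM

usable = min(equal_share, vcpu_cap)     # the VM can actually use 100%
print(usable)  # 100
```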

J1mbo
Virtuoso

> I also thought disk contention was an issue, so I eliminated all swap files on all VMs and got much better boot performance and much higher host CPU utilization (close to 100%).

I'll stick my neck out and guess the disk subsystem is RAID-5 without battery-backed write cache?
