We have a Windows 2003 VM running SQL 2005 that is experiencing high processor queue lengths (as high as 62). We have the VM configured with 2 virtual CPUs and 4 GB RAM. Most of the time the server experiences normal activity, but it seems to go through a daily cycle of high processor queue lengths, to the extent that the server becomes unusable. When you log into the VM, the VM appears to have 100% CPU utilization, while externally the host server shows very little CPU utilization (32 MHz). Problem seems to occur for a
We have 3 production hosts (each a dual quad-core, 2.833 GHz, 32 GB RAM HP ProLiant DL380 G5). The production host this VM is on has 22 VMs powered up.
Have you looked at the %RDY within esxtop for the host it's running on? My guess would be you have some major CPU contention. How many of those other 22 VMs have 2 vCPUs? Even if they all have 1 vCPU you're already running into some CPU constraints. If you PuTTY into the ESX host where the SQL server is, run esxtop and look at the %RDY column; if it's around 8-10% or higher, you've most likely found your problem.
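If it helps, here's roughly how I check it; keys and batch-mode flags may differ slightly between ESX versions:

    # SSH (PuTTY) into the ESX service console, then run esxtop interactively:
    esxtop            # press 'c' for the CPU screen, 'V' to show only VMs,
                      # then watch the %RDY column for your SQL VM

    # Or capture a few samples in batch mode for later review:
    esxtop -b -d 5 -n 12 > /tmp/esxtop-snapshot.csv   # 12 samples, 5s apart

If I remember right, the group-level %RDY is summed across the VM's worlds, so a 2 vCPU VM can legitimately show a higher number than a 1 vCPU VM under the same contention.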
Kyle
%RDY looks like it hovers around 2 to 3, at least at this point in time. Idle seems to run between 70 and 120% as I watch it over time.
Also, we have 7 other VMs running with 2 vCPUs.
Which patch are you referring to?
I am thinking it might be a scheduling issue with the dual-vCPU VM - the VMkernel schedules the two vCPUs simultaneously, so if it cannot schedule both, it will not schedule either. What happens if you set the VM to a single vCPU?
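If you want to try it, the change has to be made with the VM powered off. A sketch of the relevant .vmx setting is below; the datastore path and VM name are made up, and Edit Settings in the VI Client does the same thing:

    # With the VM powered off, the vCPU count lives in the VM's .vmx file
    # (path and VM name here are hypothetical):
    grep numvcpus /vmfs/volumes/datastore1/SQLVM/SQLVM.vmx
    #   numvcpus = "2"
    # Change it to:
    #   numvcpus = "1"
    # then power the VM back on. A single vCPU removes the co-scheduling
    # requirement entirely.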
If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
This might sound dumb, but I am sure you checked it... did you verify that VMware Tools is installed and working correctly? Does VC give you an option to upgrade the virtual hardware from the VM menu?
On that host you have 8 cores to schedule from. You say you have that VM plus 7 other VMs with 2 vCPUs assigned to them - that's 16 vCPUs right there - and adding the other 14 single-vCPU machines makes roughly 30 vCPUs contending for your 8 physical cores (quick math below). Do those 7 VMs really need 2 vCPUs? It really sounds like CPU contention, and that's why your VM is performing so slowly: it has to wait for 2 cores to become available before it can process anything, then wait again when the next instruction set is sent to the cores.
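Back-of-the-envelope, using the numbers from this thread:

    #   8 VMs x 2 vCPUs + 14 VMs x 1 vCPU = 30 vCPUs
    #   2 sockets x 4 cores               =  8 physical cores
    echo "scale=2; (8*2 + 14*1) / (2*4)" | bc    # -> 3.75 vCPUs per core

Anywhere much past 3:1 on this era of hardware is where I'd start expecting ready-time problems, though the tolerable ratio depends heavily on how busy the VMs actually are.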
Kyle
VMware Tools is installed and current on this VM.
All, we had another guy come in and look at the system, and he noticed that memory swap is high. Could this have any effect on what we are seeing? I rebooted the problem VMs and saw the per-VM and host swap numbers go down.
2009-03-09

20:15 Pacific - VM-ESX13: Latest 6473396 / Maximum 6503036 / Minimum 6401728 / Average 6432438
20:54 Pacific - VM-ESX13: Latest 6683408 / Maximum 7044972 / Minimum 6401728 / Average 6665320
20:15 Pacific - EPLP-NWN-DB: Latest 469140 / Maximum 471716 / Minimum 397504 / Average 410875.7
20:53 Pacific - EPLP-NWN-DB: Latest 0 / Maximum 945752 / Minimum 0 / Average 481043.9
21:08 - EPLP-NWN-APP: Latest 977992 / Maximum 1019464 / Minimum 969696 / Average 10111136
It could have been part of the problem; I'm not quite sure what I'm supposed to make of those numbers you posted. My money is still on CPU contention, but certainly make sure all your VMs are getting the CPU and memory resources they need. I can't remember if you have multiple ESX hosts; if so, have you tried migrating some of your 2 vCPU virtual machines to a different host to reduce the vCPU load on the current host, and seeing whether you still get the same poor performance? You can also watch the swap counters in esxtop's memory screen (sketch below) to see whether the host is actively swapping.
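Counter names below are from esxtop's memory panel as I remember them; exact columns vary by ESX version:

    esxtop            # press 'm' for the memory screen
    # Per-VM counters worth watching:
    #   SWCUR - current swapped memory for the VM
    #   SWTGT - the swap target the VMkernel is aiming for
    #   SWR/s, SWW/s - swap-in/swap-out rates; sustained non-zero values
    #                  mean the host is actively swapping and guests will stall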
Kyle