Need some help in setting up the number of vCPUs for each VM on the ESXi Server.
Server = HP ProLiant DL380 with 2x Xeon processors, 6 cores each @ 3 GHz, and 98 GB RAM. RAID 5 with 10k RPM drives.
Have 6 VMs:
1. Database Server - new VM, Server 2008 R2, currently has 8 vCPU (2 sockets, 4 cores) and 32 GB RAM
2. Terminal Server1 - was V2V'd from Hyper-V, Server 2003, currently has 2 vCPU and 4 GB RAM
3. Terminal Server2 - was V2V'd from Hyper-V, Server 2003, currently has 4 vCPU and 4 GB RAM
4. App Server - new VM, Server 2008 R2, currently has 1 vCPU and 8 GB RAM
5. vCenter Mobile Access Server - currently has 1 vCPU and 512 MB RAM
6. Management Server - new VM, handles backups etc., currently has 3 vCPU and 8 GB RAM
Right now we are seeing an issue with the Database Server, and running reports is slow.
vSphere shows the Server Usage in MHz averaging 2.5 GHz in the Performance Chart. The Virtual Machines tab shows Terminal Server 2 has the most Host CPU usage at 2.1 GHz - all other servers are at 733 MHz or less.
How should we setup the CPU Sockets & Cores of each VM?
This will sound counterintuitive, but cut the number of vCPUs you are using. If I do not have data on the resource consumption of a machine, I will always start with a single vCPU and add more as needed. If you do not want to touch all of them, I would start by dropping the number of vCPUs on the DB server, cutting it down to 2 vCPUs.
As weinstein5 has said, I would reduce the number of vCPUs assigned to each VM, but I'd also look at using resource pools and shares so that the DB server always has a certain amount of CPU cycles available.
If you look at the performance graphs for CPU wait times, I would imagine they're pretty bad at the moment. Each time the DB server needs to execute, it has to wait for 8 physical cores to be available at the same time, which increases the wait time. Assigning more vCPUs doesn't always improve performance because of this.
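The intuition above can be sketched with a rough back-of-envelope simulation. This is not a model of ESXi's actual scheduler (modern ESX uses relaxed co-scheduling, which is more forgiving than strict gang scheduling), and the 50% per-core busy probability is an assumption for illustration only; it just shows why a wide VM finds a full set of idle cores far less often than a narrow one.

```python
import random

# Assumed numbers, not measurements: 12 physical cores (2 sockets x 6),
# each independently busy 50% of the time when a VM wants to run.
CORES = 12
BUSY_PROB = 0.5
TRIALS = 100_000

def chance_cores_free(needed, busy_prob=BUSY_PROB, trials=TRIALS):
    """Fraction of scheduling attempts where `needed` cores are idle at once."""
    hits = 0
    for _ in range(trials):
        free = sum(random.random() > busy_prob for _ in range(CORES))
        if free >= needed:
            hits += 1
    return hits / trials

# A 2-vCPU VM finds enough free cores far more often than an 8-vCPU VM,
# so the 8-vCPU VM accumulates much more wait/ready time.
print(f"2 cores free: {chance_cores_free(2):.2%}")
print(f"8 cores free: {chance_cores_free(8):.2%}")
```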
.. also would be worth checking which family of processor is installed, and determining whether there might be some advantage to enabling hyperthreading support.
James Saunders wrote:
As weinstein5 has said, I would reduce the number of vCPUs assigned to each VM, but I'd also look at using resource pools and shares so that the DB server always has a certain amount of CPU cycles available.
If you look at the performance graphs for CPU wait times, I would imagine they're pretty bad at the moment. Each time the DB server needs to execute, it has to wait for 8 physical cores to be available at the same time, which increases the wait time. Assigning more vCPUs doesn't always improve performance because of this.
The DB Server has 42% of the CPU Shares, and no VM has any CPU reservations. Should I add some for the DB Server?
CPU Wait Times are averaging 360391.
So performance will actually increase if I go down to 2 CPUs?
Does it matter if I choose 2 sockets / 1 core, or 1 socket / 2 cores?
hrp wrote:
.. also would be worth checking which family of processor is installed, and determining whether there might be some advantage to enabling hyperthreading support.
CPUs are Xeon X5675 - Hyperthreading is enabled.
Be careful with resource pools or you could end up hurting performance. See my post here (http://vmtoday.com/2012/03/vmware-vsphere-resource-pools-resource-allocation-revisited/) for an explanation of how to best use VMware Resource Pools, common mistakes in using Resource Pools, and how incorrectly using Resource Pools can negatively affect performance.
Also, check your CPU Ready value to see if you would benefit from fewer vCPUs. Use this as a guide: http://vmtoday.com/2010/08/high-cpu-ready-poor-performance/
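For reference, the CPU Ready "summation" value the performance charts report is in milliseconds per sample interval, and converting it to a percentage can be sketched like this. The interval lengths below are the vSphere chart defaults, and dividing by the vCPU count to get a per-vCPU figure is a common convention, not something the chart does for you:

```python
# Chart default sample intervals in milliseconds (assumed vSphere defaults).
INTERVALS_MS = {
    "realtime": 20_000,      # 20-second samples
    "past_day": 300_000,     # 5-minute samples
    "past_week": 1_800_000,  # 30-minute samples
}

def cpu_ready_percent(summation_ms, interval="realtime", num_vcpus=1):
    """Convert a CPU Ready summation (ms) to a per-vCPU percentage."""
    pct = summation_ms / INTERVALS_MS[interval] * 100
    return pct / num_vcpus

# e.g. 2000 ms of ready time in one real-time sample on a 1-vCPU VM
# works out to 10% ready - generally considered a problem sign.
print(cpu_ready_percent(2000))
```

The same reading on a VM with more vCPUs is proportionally less alarming per vCPU, which is another way of seeing why right-sizing the vCPU count matters before reaching for reservations.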