VMware Cloud Community
bikeracer
Contributor

vCPUs for AD and MySQL VM servers

Hello, I have two PowerEdge 6950 servers, each with four dual-core AMD Opteron CPUs (8 physical cores total), 32 GB RAM, and five 10k RPM SAS drives in a RAID 5 array.  I am going to be consolidating most of my servers onto these two machines, along with a couple of other robust servers.  Among the servers I am setting up will be a primary domain controller running Windows Server 2008 R2 for 45 employees, as well as another server running an editorial database on MySQL 4.1.19 with a database size of about 80 GB and no more than 9 users connecting to it at any time.  This MySQL server regularly has CPU spikes and pushes its current quad-core CPU to a normalized 35% utilization with many 100% spikes.

I am running ESXi 5.0 on both, and my question is about the best way to set up the vCPUs for optimum performance when I create the new VM configurations.  I am asked for the number of virtual sockets and the number of cores per socket.  Does anyone have recommendations for how to allocate vCPUs for these servers?  I am assuming 1 socket and 1 core for both servers, but I was not sure whether I would benefit from setting 2 or 4 sockets or cores for either my AD server or my MySQL server.

Thanks,

Eric H.

The Porterville Recorder Newspaper

13 Replies
weinstein5
Immortal

Welcome to the Community - best practice is always to start with the fewest vCPUs. So unless you have empirical evidence that you need more than a single vCPU, start with that.
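
If you want to gather that empirical evidence once the VMs are running, esxtop on the ESXi host is one way to do it. A minimal sketch; the thresholds mentioned are common rules of thumb, not official limits:

# On the ESXi host (SSH or the local shell), start esxtop and switch to the CPU view:
esxtop          # press 'c' for the CPU view, 'V' to show only virtual machines
# Watch two columns per VM:
#   %USED - how much CPU the VM is actually consuming
#   %RDY  - how long the VM waited for a physical core; sustained values
#           above roughly 5-10% per vCPU suggest CPU contention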

Also moved to the appropriate forum - 

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
RParker
Immortal

Yes, I agree with weinstein. Along with that, NEITHER of these environments is CPU intensive.  SQL is disk intensive, so that will be more important than CPU.  1 vCPU should be sufficient for both, provided each one is its own VM instance.
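
Since disk will likely be the bottleneck on a five-disk RAID 5 array, it is worth watching storage latency in esxtop as well. A rough sketch; the latency figures below are general rules of thumb:

esxtop          # press 'd' for the disk adapter view (or 'u' for the device view)
# Key columns:
#   DAVG/cmd - latency at the storage device, in ms
#   KAVG/cmd - latency added by the hypervisor, in ms
# Sustained DAVG much above ~20 ms, or KAVG above ~2 ms, usually means the
# array (not the CPU) is what's holding the MySQL VM back.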

bikeracer
Contributor

These are great answers, thanks for the help.  I want to dig one step deeper on this.  If I choose to set up 3 virtual servers, each with 1 vCPU, does this mean that 5 of my 8 physical cores are going to be mostly unused, or is ESXi 5 intelligent enough to leverage the processing power of all 8 physical cores for those three vCPUs to share?  Basically, I am wondering whether my server with 8 physical cores will only utilize 3 of them effectively.

Thanks,

Eric Henson

The Porterville Recorder Newspaper

weinstein5
Immortal

Yes, your environment will be underutilized - each vCPU will run on a single core at a time. Look at it this way: you have room to grow. To give you an idea, you can have 6-8 vCPUs per core as a rule of thumb.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
Dave_Mishchenko
Immortal

A virtual CPU within a VM can only run as fast as a single CPU core.  At this point hypervisors don't have a way to consolidate multiple CPU cores into a single virtual CPU.  ESXi will schedule the CPU requests from your VMs onto various cores, but in your scenario with 3 vCPUs you would have 5 largely idle physical cores.

bikeracer
Contributor

Would you expect a slight performance increase if I give my high-demand MySQL server 2 or 4 vCPUs?  I realize that the scheduler can cause a performance hit by trying to decide how to distribute the load, but it seems like that may be acceptable if I am allocating additional physical CPUs.

Should I also only use as many vCPUs on a single ESXi 5 host as the number of physical cores?  For instance, if I have virtual servers on one ESXi host with 8 physical cores, should the total vCPUs across all servers equal the total number of physical cores on the host?  If I have 6 virtual servers, and 4 of them have 1 vCPU and two have 2 vCPUs, is that the most vCPUs that I should configure, or is ESXi smart enough to handle more vCPUs than I have physical cores?

Eric H.

Porterville Recorder Newspaper

RParker
Immortal

bikeracer wrote:

Would you expect a slight performance increase if I give my high-demand MySQL server 2 or 4 vCPUs?  I realize that the scheduler can cause a performance hit by trying to decide how to distribute the load, but it seems like that may be acceptable if I am allocating additional physical CPUs.

Should I also only use as many vCPUs on a single ESXi 5 host as the number of physical cores?  For instance, if I have virtual servers on one ESXi host with 8 physical cores, should the total vCPUs across all servers equal the total number of physical cores on the host?  If I have 6 virtual servers, and 4 of them have 1 vCPU and two have 2 vCPUs, is that the most vCPUs that I should configure, or is ESXi smart enough to handle more vCPUs than I have physical cores?

Eric H.

Porterville Recorder Newspaper

The hypervisor can schedule a single vCPU onto any of the physical cores on a host (one core at a time), so every VM gets the benefit of the host's CPUs.  That's why the scheduling is important: by adding more vCPUs PER VM you create more overhead and diminish the hypervisor's ability to manage the load effectively.  It's easier to schedule separate vCPUs in different VMs than one VM with many vCPUs, because ALL of those vCPUs must be co-scheduled. So the answer is to keep 1 vCPU PER VM, and if you need additional workload capacity, spawn new VMs.
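
If you do end up with multi-vCPU VMs, esxtop can show how much that co-scheduling is costing. A minimal sketch; the ~3% figure is a commonly quoted rule of thumb, not an official limit:

esxtop          # press 'c' for the CPU view, then 'e' plus a VM's GID to expand its worlds
# %CSTP (co-stop) is time a vCPU was held back waiting for the VM's other
# vCPUs to be scheduled together. If %CSTP stays above roughly 3%, the VM
# probably has more vCPUs than it can use efficiently.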

Dave_Mishchenko
Immortal

You can exceed the number of physical cores with vCPUs.  In some environments it's easy to get 5-8 vCPUs per physical core.  If you have CPU-intensive VMs you might not get that.  I would set the SQL VM to 2 vCPUs if it is already putting a heavy CPU load on a physical box.  But I would also look at doing some SQL tuning, as that might be able to reduce the overall CPU (and potentially disk) load.
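
For the tuning side, a couple of my.cnf settings are usually the first things to look at on a MySQL 4.1 box with an 80 GB database. This is only a hedged sketch - the sizes assume the VM gets around 8 GB of RAM and that the data is mostly InnoDB, and the log path is just an example; adjust to your actual tables and memory:

[mysqld]
# Cache as much of the working set in memory as the VM can spare,
# so fewer queries have to hit the RAID 5 array.
innodb_buffer_pool_size = 4096M
# Only relevant if some tables are MyISAM; keep it small otherwise.
key_buffer_size = 256M
# Log anything slower than 2 seconds so the expensive queries can be
# found and indexed (MySQL 4.1 option names).
log-slow-queries = /var/log/mysql-slow.log
long_query_time = 2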

bikeracer
Contributor

Thanks for all of the advice.  I will now move forward and create my new Windows Server 2008 R2 Active Directory server and my editorial database MySQL server with 1 vCPU each.  Even after the answers, I am still a little confused, because if it really causes a decrease in performance to use multiple vCPUs, why do they make it an option?  If I want to give half of the host's CPU resources to one server, should I allow it 4 vCPUs and then, if I have 4 more less-taxing systems on that host, just give them 1 each?  It's a hard decision to move a server from a locally installed quad-core server that averages 30-40% CPU utilization to a single vCPU that might only utilize 1 CPU core while the other 7 cores sit idle.

Thanks again for all of your responses,

Eric

Dave_Mishchenko
Immortal

I would create the SQL VM with at least 2 vCPUs, as it already has a significant CPU load and that's not going to get less when it moves to a VM.

A while back on 4.1 I did an experiment with 3 single-vCPU VMs on a host with 8 cores.  I started each in a staggered manner and ran a CPU utility to max out the CPU in each VM.  Essentially ESXi kept the VMs on the same cores, and 5 cores were totally idle (besides some ESXi host overhead).

ESXi has a pretty good CPU scheduler, but there's really no issue until you start to load up the host.  If you create 7 single-vCPU VMs on an 8-core host and max out the VMs CPU-wise, the vCPUs will run at about the same speed as a physical core and the host will show 7 cores maxed out.  ESXi won't have to move the CPU threads between cores, as there won't be any need to.

Let's say you then move to 16 single-vCPU VMs on 8 cores.  Every time a VM needs a CPU cycle, the scheduler needs to figure out where to best place the CPU demand, and the CPU cycles may move around between cores.  If the VMs are running at 50% CPU, they'll all pretty much get the CPU cycles they're looking for.

Things get complicated when you add multi-vCPU VMs.  Instead of finding just a single free CPU core, the scheduler has to find 2 or more free cores at the same time.  It doesn't have to be exactly the same time (from the CPU's perspective), but really close.  That's when the VMs can start to experience some wait time (and perceived bad performance).  That assumes the VMs are putting a heavy load on the host.  As I mentioned earlier, you can run many vCPUs per physical core, because a VM doesn't require CPU 100% of the time, and when it isn't asking for CPU, ESXi doesn't have to schedule CPU resources for it.
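
If you want numbers rather than a live view, esxtop's batch mode can capture these counters over time so you can compare before and after changing vCPU counts. A sketch only; the interval, sample count, and output file name are arbitrary choices:

# Capture all counters every 10 seconds for one hour into a CSV file,
# then graph %RDY / %CSTP per VM in perfmon or a spreadsheet.
esxtop -b -a -d 10 -n 360 > /vmfs/volumes/datastore1/cpu-baseline.csv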

If you want some in-depth material, check out this whitepaper - http://www.vmware.com/files/pdf/perf-vsphere-cpu_scheduler.pdf.

bikeracer
Contributor

This is a great discussion; I really appreciate the input.  In my situation, I am at a small-town newspaper with 45 employees, and once I am done consolidating servers, I will only be virtualizing about 4-5 production servers and about 7-8 seldom-used virtual workstations on two 8-core servers with 32 GB RAM each.  Only one of my servers will have a somewhat heavy load, which is my big MySQL server.  If I have the extra CPUs available, should I just assign about 4 vCPUs to my one high-load system, with the rest of my servers as single vCPU?  My main concern for performance is that one MySQL server, and the others can take a hit in performance to make sure that one server gets full use of the resources available to it.

This discussion has helped greatly, and I know others have received some great insight into their own configurations as a result.  Thanks again for all of your help and assistance in this core decision for the future of my network.

Eric H.

The Porterville Recorder Newspaper

Dave_Mishchenko
Immortal

With 16 cores and your VM load I wouldn't worry about a single 4 vCPU VM.

bikeracer
Contributor

I have another question.  I am going to use ESXi as the platform for most of my production servers.  I have one PowerEdge 1950 with 1 quad-core CPU and 16 GB RAM which I plan to use as a Windows Server 2008 R2 file server for about 40 employees.  I would like to have the underlying control that the ESXi console provides under my file server, but I was not sure if it's a bad idea to use ESXi for a single VM on a server.  Would it be smart to just set up ESXi with one VM and allocate all of my CPUs and memory to that VM, or is it a bad idea to use ESXi for a single VM on a physical server?

Eric Henson

Information Technology Manager

The Porterville Recorder

(559) 784-5000 Ext. 1070

Fax: 559-781-1689

ehenson@portervillerecorder.com
