VMware Cloud Community
jwnchoate
Contributor

Production MS SQL Server performance in a VM?

Hello all. I'd like to solicit some comments on our situation concerning database servers on VMs.

CURRENTLY: We have been running an ESX blade system for about 4 years now, and upgraded to VI3 last summer. These boxes are not beasts either; they are simple dual-CPU hosts with Hyper-Threading or dual dual-core hosts. They have been running Oracle and SQL servers the whole time. On the whole, the performance has been good, with a few solvable problems. Occasionally we had issues that required us to give more resources or shares to the servers. We do have 2Gb Fibre Channel on the disk subsystem.

We built a new blade system to grow on (we are still keeping the old system) with dual quad-core CPUs and twice the RAM as before (14GB). I am assuming that doubling the power should be a nice step up. My only wish is that I had an FC back end, but my boss wants to go iSCSI for cost savings, and for the most part it works well; I am only nervous because of the 1Gb vs. 2Gb throughput.

SITUATION: We have some new database servers to build. Plans are to virtualize them as we did the others. The base VM will be on the order of 4 vCPU and 6GB RAM, and will probably share the ESX host with only a few (2-5) lower-resource VMs. Our older setup was 2 vCPU and 4GB RAM, and I considered this fairly successful. However, every time we sit down with a vendor or have some kind of performance issue with a support person, we get an earful about how they don't like virtualization because it's too slow. I end up trying to explain that we have run fairly successfully for several years. Granted, you should expect a slight performance hit, but it is worth it when you consider the other advantages. It's always the same: general comments, no hard data, and stories of another customer who had a nightmare.

PS: I know FC has always been considered faster, but I am going to be stuck with the iSCSI solution on the new blades :-(. So far, though, I have not had any real problems, but my production DBs are not installed yet.

WHAT I NEED: I am working with our DB guy on some way of getting hard benchmark numbers to see the bottom line, but I would like to know what others think of running databases on VMs and what their experiences are. THANKS!!
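In case it helps anyone else doing the same comparison, this is roughly the kind of timing harness we have in mind - a sketch only, with hypothetical server/database/query names - so the identical script can be pointed at a physical box and at the VM and the numbers compared:

```
# Rough timing harness (sketch) - run the same script against a physical
# SQL Server and the virtualized one, then compare the numbers.
# Assumes the pyodbc module and hypothetical server/database/query names.
import time
import pyodbc

CONN_STR = (
    "DRIVER={SQL Server};"
    "SERVER=testsql01;"          # hypothetical server name
    "DATABASE=BenchDB;"          # hypothetical test database
    "Trusted_Connection=yes;"
)
QUERY = "SELECT COUNT(*) FROM dbo.Orders"   # substitute a representative query
ITERATIONS = 500

def run_benchmark():
    conn = pyodbc.connect(CONN_STR)
    cursor = conn.cursor()
    latencies = []
    for _ in range(ITERATIONS):
        start = time.time()
        cursor.execute(QUERY)
        cursor.fetchall()
        latencies.append(time.time() - start)
    conn.close()

    total = sum(latencies)
    print("queries/sec : %.1f" % (ITERATIONS / total))
    print("avg latency : %.1f ms" % (1000 * total / ITERATIONS))
    print("max latency : %.1f ms" % (1000 * max(latencies)))

if __name__ == "__main__":
    run_benchmark()
```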

5 Replies
FredPeterson
Expert

4 vCPU with 8 cores available = Bad Mojo. I don't think I'd ever consider a 4 vCPU VM unless I had 16 or more cores to utilize.

edit:

Of course, you did qualify it by saying you'd be running smaller-utilization VMs alongside it. So long as you don't exceed 7 assigned vCPUs (one 4-vCPU VM and three 1-vCPU VMs + service console = 8), everything should work swimmingly. If you ever attempt to run multiple 4-vCPU and 1-vCPU VMs on the 8 cores, you'll see some higher %Ready times.

jwnchoate
Contributor

Perhaps I misphrased/misunderstood something. I am not assigning or dedicating a CPU, just building a 4-vCPU VM. I should be able to allocate more virtual CPUs than I have logical CPUs. In fact, we often have two times the virtual CPUs (some 2-vCPU VMs) on hosts that have just two physical single-core processors with Hyper-Threading (4 logical CPUs per host), and ready values are low with some occasional spikes between 5-10%. Even the busiest host we have, with two single cores with Hyper-Threading (running 85% PCPU on average), shows acceptable performance but not great ready values (spikes above 5%, with the occasional 10-20%). Even then, most of the time we are below 5%; there are high spikes that pop up and then drop back down. A dual quad-core host should be able to do even better than that.
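For reference, when I pull these ready numbers out of the VirtualCenter performance charts rather than esxtop, the charts report ready time in milliseconds per sample, so converting to a percentage is just a division. A minimal sketch, assuming the realtime chart's 20-second sample interval:

```
# Convert a CPU "ready" summation value from the VirtualCenter realtime
# chart (milliseconds of ready time per sample) into a percentage.
# Assumes the realtime chart's 20-second sample interval.
SAMPLE_INTERVAL_MS = 20 * 1000   # 20-second realtime samples

def ready_percent(ready_ms, num_vcpus=1):
    """Ready %, averaged across the VM's vCPUs."""
    return (ready_ms / float(SAMPLE_INTERVAL_MS * num_vcpus)) * 100

# Example: 1600 ms of ready time in a 20 s sample on a 2-vCPU VM ~= 4%
print("%.1f%%" % ready_percent(1600, num_vcpus=2))
```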

Even so, I read your comment as saying we should not allocate more VM CPUs than we have logical CPUs minus the service console. That doesn't seem right. That's the great part of VMware, isn't it? To leverage fewer physical CPU cycles across many VMs?

I totally get that performance is going to be better if I keep LCPUs = VCPUs, but that shouldn't be something written in stone. I generally try to keep my busiest VMs on hosts without too much competition anyway.

FredPeterson
Expert

I didn't mean to imply that you should match vCPU to pCPU 1:1; that defeats the major benefit of virtualization.

I hope I'm not stating something you already know, but I'm repeating it for the benefit of others down the road who might run searches that pull up this thread, etc.

The problem arises in the scheduling of the VM. VMware doesn't utilize just 1 pCPU if that's all the VM needs when 2 or more virtual CPUs are assigned. When that 4-vCPU VM needs attention, absolutely no other VM can be serviced if all you have is 4 cores; the reverse of this is that if all 4 cores are not available, that 4-vCPU VM cannot run either, resulting in the %Ready times. The problem becomes compounded as you add varying vCPU VMs to a host.
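To make that concrete, here is a back-of-the-envelope simulation - a simplified model, not how the ESX scheduler actually works internally - of strict co-scheduling on a 4-core host: each timeslice, a VM only runs if there are enough free cores for all of its vCPUs at once; otherwise it accrues ready time.

```
# Toy model of strict co-scheduling on a 4-core host: one 4-vCPU VM
# competing with a few always-busy 1-vCPU VMs.  Simplified illustration
# only; the real ESX scheduler is far more sophisticated.
import random

CORES = 4
TIMESLICES = 10000

vms = [
    {"name": "sql-4vcpu", "vcpus": 4, "ready": 0, "ran": 0},
    {"name": "small-1",   "vcpus": 1, "ready": 0, "ran": 0},
    {"name": "small-2",   "vcpus": 1, "ready": 0, "ran": 0},
    {"name": "small-3",   "vcpus": 1, "ready": 0, "ran": 0},
]

for _ in range(TIMESLICES):
    free = CORES
    # Randomize dispatch order so no VM is permanently favored.
    order = vms[:]
    random.shuffle(order)
    for vm in order:
        if vm["vcpus"] <= free:          # all vCPUs must fit at once
            free -= vm["vcpus"]
            vm["ran"] += 1
        else:                            # can't co-schedule -> ready time
            vm["ready"] += 1

for vm in vms:
    pct = 100.0 * vm["ready"] / TIMESLICES
    print("%-10s ready %.0f%% of timeslices" % (vm["name"], pct))
```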

It all depends on the number of VMs. Just as an example, I have a quad-Xeon x366 running about 20 servers (one of which is a 2-vCPU VM). I P2V'd a dual-CPU physical box, and because VMware Converter is so bright, it assigned 4 vCPUs and I didn't realize it. When it was done and fired up... WOW, I had 100% CPU utilization across all 8 "logical" CPUs according to esxtop. Of the 20 VMs, half had %Ready over 20, and it took absolutely forever for that 4-vCPU VM to actually boot to the point where I could gracefully stop it. At one point I saw the 4-vCPU VM sitting at 100% Ready. I was on the verge of going to the COS and kill-ing the PID when I finally got Windows to shut down normally. The other VMs continued to run, but their performance absolutely tanked. Thank god 90% of our VMs sit there and do nothing to begin with, so it wasn't that serious of an issue for those 10 minutes.

IrNico
Contributor

Fred, you wrote "The problem becomes compounded as you add varying vCPU VMs to a host."

Can you explain why it gets worse when you have VMs with varying numbers of vCPUs?

raadek
Enthusiast

I think the concern/problem is that at any given time ALL vCPUs of a VM have to take cycles from pCPUs (or cores, to be more precise).

So:

If we have just a bunch of single-vCPU VMs and, say, 2x quad-core CPUs in the box (eight cores), then whenever just one core is 'free' we can use it for any of the VMs.

On the other hand, if we have just two 4-vCPU VMs, we will not be able to run them simultaneously! One core is taken by the console, so we are left with 7, and we have to supply physical cores to ALL vCPUs of a given VM at any given time. Four cores for one 'beefy' VM, one for the console, and three cores sitting and doing nothing - not good.

It is good to split the workload whenever possible and create as many single-vCPU VMs as possible - four 1-vCPU VMs will almost certainly outperform one 4-vCPU VM.
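A quick toy calculation to illustrate (same simplifying assumption as Fred's point above - every VM always wants to run, and a VM runs only if all of its vCPUs fit on free cores at once):

```
# Toy comparison on 7 usable cores (8 minus the console):
# two 4-vCPU VMs must take turns, while single-vCPU VMs can almost all
# run every timeslice.  Purely illustrative arithmetic.
USABLE_CORES = 7

def vms_running_per_slice(vms_vcpus):
    """Greedy per-timeslice packing: how many VMs actually get to run."""
    free, running = USABLE_CORES, 0
    for vcpus in vms_vcpus:
        if vcpus <= free:
            free -= vcpus
            running += 1
    return running

print(vms_running_per_slice([4, 4]))   # -> 1 : only one 4-vCPU VM fits per slice
print(vms_running_per_slice([1] * 8))  # -> 7 : seven of eight 1-vCPU VMs run
```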

Rgds.
