VMware Cloud Community
zenomorph
Contributor

Virtual Machine vSMP allocation (Windows)

Hi,

I'd like some assistance and clarification, please. Our company is now starting to use ESX to consolidate some of our application servers (Win2k3 Server + SQL).

Some of our physical application servers run Win2k3 32-bit (4GB RAM) and some 64-bit (16GB RAM), on HP G5 boxes with 2 dual-core CPUs, while others have 4 quad-core CPUs.

We are going to migrate and consolidate these servers using P2V. The ESX host will be a DL585 with 4 quad-core CPUs and 32GB RAM, so we will have 16 vSMP. What I'm not sure about is how to size these VMs in terms of vSMP.

Should I allocate more than one vSMP to these Win2k3 (32/64-bit) VMs? E.g. for our servers that were on 32-bit with 2 dual-core CPUs, should I also allocate 2 vSMP, and for the 4-CPU quad-cores, should I allocate the VMs 4 vSMP? (I'm assuming that the physical servers are currently running at 50% CPU utilisation and 50% memory allocation.)

How will the performance of the VMs be impacted by allocating them 1, 2, or 4 vSMP? Will they perform better? I think this sizing part of the P2V seems to be the most difficult aspect of consolidating all these physical servers.

How should I allocate the RAM to these VMs: the same as they had on the physical hardware, letting ESX handle it by itself? We're going to use resource pools, so this will control some of the ESX resources.

We plan to run 4-5 of these P2V VMs on the 4*CPU Quad Core ESX server.

Any suggestions and help would be much appreciated, as I'm still very new to this.

Many thanks

1 Solution

Accepted Solutions
kjb007
Immortal

Remember, when we say vSMP, it is meant in terms of the guest OSes, not the actual host. Your host has 16 processing cores, not 16 vSMP.

Now, if you have performance statistics from your currently running physical servers, they will go a long way toward telling you how much processing is actually being done, regardless of how many resources are currently allocated in your physical environment. From that, you should be able to figure out how much CPU/memory you will actually need to run your VMs, rather than what you are currently allocating to the servers.
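As a back-of-the-envelope sketch of that calculation (the core counts, utilisation figures, and headroom factor below are hypothetical examples, not measurements from this thread):

```python
import math

def size_vm(phys_cores, peak_cpu_pct, peak_mem_gb, headroom=1.25):
    """Translate observed physical utilisation into a starting VM allocation.

    Sizes from work actually done (peak utilisation x cores), not from
    what the physical box happens to have installed.
    """
    # Whole cores of work at peak, plus some headroom, rounded up.
    vcpus = max(1, math.ceil(phys_cores * peak_cpu_pct * headroom / 100))
    mem_gb = math.ceil(peak_mem_gb * headroom)
    return vcpus, mem_gb

# Example: a 4-core box peaking at 40% CPU and 3 GB of RAM in use.
print(size_vm(4, 40, 3.0))  # -> (2, 4)
```

This is why starting at 2 vCPUs usually covers a lightly loaded 4-core physical server: the work being done, not the hardware installed, drives the allocation.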

More often than not, what I have seen is that physical server resources are very minimally used, hence the need for consolidation and virtualization. I would not start the VM off with 4 vCPUs. Instead, if your current statistics show that you will need the CPU, then don't create the VM with fewer than 2 vCPUs. When you initially do your P2V and you assign 2 vCPUs, the OS will keep its SMP HAL, which is currently loaded, and you won't have performance issues related to a misconfigured HAL.

If your statistics show you're not using the CPU heavily, you can always reduce to 1 vCPU, and more often than not an SMP HAL will be OK with only 1 vCPU. The other way around is more problematic, meaning a uniprocessor HAL in a multi-CPU configuration.

That being said, it is a very rare scenario indeed where creating a 4-vCPU VM will yield better performance. More often than not, VMs perform pretty well with 1 or 2 vCPUs.

So, ultimately, I recommend creating the VM with 2 vCPUs and as much memory as you're currently using, and then adjusting up or down after reviewing your new VM statistics.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB

5 Replies
mcowger
Immortal

This will ENTIRELY depend on YOUR implementation of your application.

Start small (1 vCPU per VM) and go up from there. Be aware that vCPUs competing with other vCPUs can impact performance, so starting smaller is good.

--Matt

--Matt VCDX #52 blog.cowger.us
zenomorph
Contributor

Kjb007,

Many thanks for your reply; it sort of clears things up for me. Based on a review of the performance stats, I guess what I'm going to do is run 5 VMs on the ESX host:

3 instances of Win2k3 32-bit with SQL2k5, 2 vSMP and 4GB RAM each, and then 2 instances of Win2k3 64-bit with 2 vSMP and 8GB RAM each, and see how things go.

I'm going to make 2 resource pools, one for the 32-bit VMs (with medium priority, 4000 shares) and another for the 64-bit ones (with high priority, 8000 shares), to see how things go.
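Worth remembering that shares only matter under contention, and the split between pools is proportional. A quick sketch of how 4000 vs 8000 shares would divide contended CPU between the two pools described above (pool names are just labels for illustration):

```python
def share_split(pools):
    """Fraction of contended CPU each resource pool receives,
    proportional to its share value."""
    total = sum(pools.values())
    return {name: round(shares / total, 3) for name, shares in pools.items()}

pools = {"32bit-pool": 4000, "64bit-pool": 8000}
print(share_split(pools))  # -> {'32bit-pool': 0.333, '64bit-pool': 0.667}
```

So under full contention the 64-bit pool would get roughly two-thirds of the host's CPU; when there is no contention, both pools can use whatever is free.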

Cheers

Rumple
Virtuoso

Just a word of warning....

When I look at virtualizing servers, I would never start with the servers you have indicated. Unless a server has shown that it can run as a single-CPU machine (or, at the extreme limit, 2 CPUs), with a maximum configuration of 1-2GB of RAM tops, I don't bother virtualizing it. I can get rid of more underutilized servers that way than by taking on the possible problem servers.

My idea of virtualization is to get rid of the half of the environment running things like single lightly-used apps, domain controllers, file servers, etc. that really are utilizing 1-2% of their resources, and to put them across a couple of servers (for redundancy) to bring the overall utilization of that hardware up into the 40-50% range (so I have failover capacity if a host fails).
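That 40-50% target falls out of simple failover arithmetic (the host counts below are illustrative):

```python
def max_safe_utilisation(hosts, tolerated_failures=1):
    """Highest average utilisation that still lets the surviving hosts
    absorb the load of the failed ones."""
    return (hosts - tolerated_failures) / hosts

# Two hosts, either one can fail: each should stay at or below 50%.
print(max_safe_utilisation(2))  # -> 0.5
```

With more hosts in the cluster, the same one-host-failure budget lets you run each host hotter, e.g. four hosts can safely average 75%.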

What exactly are you trying to accomplish by virtualizing those systems anyhow? Your overall return on investment on the hardware and software you just purchased for your virtual environment is probably lower than your return on investment for the initial purchase of all that other hardware. With the ESX servers you just purchased, I would be running approximately 30-50 virtual servers and leaving those apps alone.

You had really better understand the performance characteristics of those servers, because 30-50% of the people who virtualize transaction-based servers like Exchange and databases are shocked when the performance isn't as good as the physical box, because they took a guess at how it would react in a virtual world. If you virtualize those servers and the performance is not acceptable, then you are going to do your VMware environment an injustice, as well as put your own reputation at risk, because you recommended a course of action that totally flopped.

Have you gathered any disk performance stats using iometer or similar, to simulate how a single VM, as well as multiple transaction-based VMs, will behave when virtualized on that hardware, and then compared the results of those tests against the combined disk usage of the physical SQL servers? Next, do the same for those monster 64-bit servers, and then try it all out together.
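One simple sanity check along those lines (server names and IOPS figures below are placeholders, not benchmark results): sum the measured IOPS of the physical SQL servers and compare the total against what the iometer run on the new host sustained, with a safety margin:

```python
def fits_on_host(physical_iops, host_measured_iops, safety=0.7):
    """Return (fits, total_demand): whether the combined physical disk
    load fits within a safety margin of what the consolidated host
    demonstrated under an iometer-style benchmark."""
    demand = sum(physical_iops.values())
    return demand <= host_measured_iops * safety, demand

# Hypothetical measured IOPS for the five candidate servers.
phys = {"sql-32a": 400, "sql-32b": 350, "sql-32c": 300,
        "sql-64a": 900, "sql-64b": 850}
ok, total = fits_on_host(phys, host_measured_iops=4500)
print(ok, total)  # -> True 2800
```

If the combined demand lands above the margin, the disk layout, not the vCPU count, is the first thing to rework.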

Without a good disk layout and due diligence, you are not going to do anyone any good by making guesses like how many CPUs and how much RAM to give to the VMs. That's only about 30% of the total equation.

Unless, of course, those servers are working no harder than your typical desktop, in which case virtualize away and fire the guy who built those servers.

Also, it sounds as if you have a single server and you are going to virtualize 5 physical servers onto a single box (which I assume has local storage, as I see nothing about SAN disk). Unless all those servers depend upon each other, you are potentially bringing down a large chunk of your environment WHEN that server fails (all servers fail at some point, don't forget), instead of losing just a service.

I'd like to hear how things work out, to see if maybe I'm just overzealous when it comes to analysing P2V situations, as well as ESX environments in general.

zenomorph
Contributor

Rumple,

Thanks for the advice. I will have a second look at those servers and see how we should do this.

:)
