SS_DP
Contributor

VM vCPU

Hi there,

I am currently deploying a new ESXi host.

Server Model: Dell PowerEdge R510

CPU: 2x Intel Xeon E5607 (quad-core)

My ESXi host states that it can see 8 CPUs x 2.266 GHz, as expected.

However, my question is this: only 1 VM is going to be deployed onto this host at the moment, and I would like as much resource as possible to be given to this VM. The VM is a Terminal Server for multiple users and is going to have to be quite the workhorse.

I have read numerous articles that state the best way to go is to assign the VM's vCPU as 1 virtual socket with 1 core per socket, then let ESXi / vCenter look after the resource allocation.

If this is followed in my case will both processors be utilized?

I hope this makes sense.

Regards, Dean.

7 Replies
weinstein5
Immortal

Welcome to the Community - the vmkernel schedules a vCPU onto a logical CPU (which can be a physical core or a hyperthread) and will move it if necessary - so with what you describe, yes, all cores will be used over time, but there will be periods where 7 of the 8 cores might be idle -

With your scenario you could add a second vCPU to your VM -

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
lenzker
Enthusiast

Both CPUs will be utilized if you give your VM the maximum of available logical CPUs. Keep things like NUMA in mind when you size your VM (http://vxpertise.net/2012/06/summarizing-numa-scheduling/) .
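The NUMA point above can be reduced to a rule of thumb: a VM whose vCPU count fits within one physical socket's cores can stay on a single NUMA node and keep its memory local. A minimal sketch (the host numbers are the R510 from this thread; the helper function is illustrative, not a VMware API):

```python
def fits_numa_node(vcpus, cores_per_socket):
    """Return True if the vCPU count fits inside one NUMA node,
    i.e. within the cores of a single physical socket."""
    return vcpus <= cores_per_socket

# Dell R510 in this thread: 2 sockets x 4 cores each
print(fits_numa_node(4, 4))  # True  - VM can stay on one NUMA node
print(fits_numa_node(8, 4))  # False - "wide" VM spans both nodes
```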

As long as you have a one-to-one relationship of VMs to ESXi hosts this configuration is OK, but as soon as you consolidate more VMs on this host, make sure to size your VMs as small as possible.

VCP,VCAP-DCA,VCI -> https://twitter.com/lenzker -> http://vxpertise.net
FredPeterson
Expert

Why do people continue to this day to insist on the whole 1 vCPU thing?

That was quite true pre v4.1 and most assuredly not true anymore with 5.x. Sure, if you have a server that for whatever reason must exist stand-alone and literally does nothing but serve up a license file and never uses more than 500 MHz, give it just one CPU. But if it's ever going to be doing any real work - and any multi-threaded work for that matter - just give it two.

If this entire host is dedicated to one VM, just give the VM 4 CPUs to start and go up as necessary (or even down, but I would never drop below 3 vCPU for a TS) and give it an appropriate memory size.  Be sure to enable Hot Add of memory so you can increase the memory on the fly if need be.
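On the hot add point: it is a per-VM setting that must be enabled while the VM is powered off. A sketch of the relevant .vmx entries (key names as used by recent ESXi releases; verify against your version, since the same switches are normally set through the vSphere Client UI):

```
mem.hotadd = "TRUE"
vcpu.hotadd = "TRUE"
```

With memory hot add enabled, RAM can be increased while the guest is running, provided the guest OS (e.g. Windows Server 2008+) supports it.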

Also the whole NUMA thing only matters for very large VMs that either cross NUMA CPU boundaries or memory boundaries.

SS_DP
Contributor

Thank you very much for the replies. All are very much appreciated.

So, it seems we have some differing views! Which in some ways I'm glad about.

Would anybody be able to advise me of any disadvantages of assigning my VM all of the available resources? In this case 8 virtual sockets, 1 core per socket.

I cannot see this host ever being used for more than the one TS VM it is running at the moment.

Also, another point. I was under the impression that it was not advised to change vCPU settings after the base OS has been installed on a VM. Is this an incorrect piece of advice I have picked up?

Thanks again, Dean.

zXi_Gamer
Virtuoso


Would anybody be able to advise me of any disadvantages of assigning my VM all of the available resources? In this case 8 virtual sockets, 1 core per socket.

Well, your VM is just another process to the vmkernel, and the kernel has work of its own to do: storage requests, swapping if needed, network load balancing [in your case heavily used]. If the VM owns every logical CPU, its vCPUs can end up sitting in a WAIT state while the kernel does that work, which is bad. Likewise, if you give full reservations and affinity to your VM, you can expect somewhat slower performance on the ESX side, which is not advisable.

Also, another point. I was under the impression that it was not advised to change vCPU settings after the base OS has been installed on a VM. Is this an incorrect piece of advise I have picked up?

This used to be the case with older guest operating system kernels, which could panic due to the change in CPU count - in some cases a uniprocessor (UP) kernel was loaded for 1 vCPU and the VM was then changed to 2 vCPUs. I believe most recent OSes handle CPU count changes gracefully instead of panicking.

HTH,

zXi

FredPeterson
Expert

As Gamer mentioned, giving it the same number of CPUs as the host would be ill-advised: if the server ever went to 100% CPU you'd be impeding the kernel AND the guest AND I/O operations, resulting in everything being slow(er).

But the odds of that are slim; however, I wouldn't do it. As I suggested, start with 4, and if performance monitoring (namely processor queue depth) indicates more CPU horsepower is required, you can add CPUs. CPU usage alone isn't a good indicator that more CPUs would actually help - processor queue depth is far more important, especially on a terminal server. You never want your processor queue depth to exceed 2 per core for much time. You will see spikes, but they should never be sustained for more than a poll cycle or three (if watching per second). In the real world, with only 8 total cores in the host, I would not exceed 6 cores in the guest.
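The queue-depth rule of thumb above (no more than 2 per core, and never sustained beyond a few polls) can be sketched as a simple check over per-second samples from perfmon or esxtop. The function name and thresholds are illustrative, not from any monitoring tool:

```python
def sustained_queue_pressure(samples, cores, per_core_limit=2, max_run=3):
    """Return True if processor queue depth exceeds per_core_limit * cores
    for more than max_run consecutive samples (e.g. per-second polls)."""
    limit = per_core_limit * cores
    run = 0
    for depth in samples:
        run = run + 1 if depth > limit else 0
        if run > max_run:
            return True
    return False

# 4-vCPU terminal server, so the limit is 8: a brief spike is fine...
print(sustained_queue_pressure([3, 12, 5, 2, 4], cores=4))       # False
# ...but a sustained run suggests more vCPUs might actually help
print(sustained_queue_pressure([9, 10, 11, 12, 9, 3], cores=4))  # True
```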

Also, as he mentioned, the kernel issue with the number of processors was resolved in Windows 2008 - there is no longer a separate Uniprocessor and Multiprocessor kernel.

jdptechnc
Expert

So if you never plan on running anything on this hardware other than one VM, why are you even using ESXi? Why not just install Windows on this hardware and eliminate a layer of complexity?

Please consider marking as "helpful", if you find this post useful. Thanks!... IT Guy since 12/2000... Virtual since 10/2006... VCAP-DCA #2222