In the VMX file the only options offered are 1, 2 or 4.
But with resource pools you can manage your VM's performance more effectively.
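For context, the vCPU count lives in the VM's .vmx file as a single parameter. On ESX 3.x the line looks something like this (the value shown is just an example; edit at your own risk, as hand-editing the VMX is how people end up with "unsupported" counts in the first place):

```
numvcpus = "2"
```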
I understand that the default options are only 1, 2 and 4 (i.e., 2^0, 2^1 and 2^2), and what resource pools are for. But 3 CPUs is a valid configuration (I've had them in production physical servers before). Having the minimum number of vCPUs is best practice as it reduces contention (and there are upper/recommended limits on vCPUs per physical core).
I've had a server running in our proof of concept environment for a month or so with 3 vCPUs on ESX 3.5 without issues.
Is there a valid reason why 3 vCPUs is not a valid configuration?
Would the kernel have issues scheduling an uneven number of vCPUs?
Would certain guest OSes not work in this configuration?
Would applications like ESXTOP report correctly?
While I'm not able to answer specifically about co-scheduling issues, I would say that 3-vCPU VMs should be just fine. I have been running many tests with "odd" numbers of vCPUs. Moreover, AMD (and maybe Intel?) have actually announced physical CPUs with 3 cores. This being the case, one would expect guest operating systems (and native ones) to work fine with 3 vCPUs.
There is a valid reason: 3-core CPUs are very new, and VMware has implemented only the standard configurations.
Thanks for the response, this is what I was looking for.
Obviously this isn't a 'supported' configuration.
When VMware moves into the > 4 vCPU space (later this year perhaps?), are we likely to see any of these odd options, or just 1, 2, 4, 8?
Intel architecture is a binary-based system: 2^0 = 1, 2^1 = 2, 2^2 = 4, etc.
This is why a physical host has 1 CPU, 2 CPUs, 4 CPUs, etc. Software vendors develop for this type of architecture, so they didn't include odd CPU counts (and yes, 1 is an odd number, but it's the special case). Multiple CPUs work in tandem, meaning each has a "sister"; memory works this way, as do hard drive controllers and CPUs.
So that's why. Sure it may work, and Windows may not complain, but physical machines were not designed with 3, 5 or 7 CPUs in mind; they were made to be binary based.
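The power-of-two point above is easy to state precisely: a count n is a power of two exactly when n > 0 and n & (n - 1) == 0. A minimal illustration (nothing VMware-specific here, just the arithmetic behind the 1/2/4/8 options):

```python
def is_power_of_two(n: int) -> bool:
    """True if n is a positive power of two (1, 2, 4, 8, ...)."""
    # A power of two has exactly one bit set, so n & (n - 1) clears it to 0.
    return n > 0 and (n & (n - 1)) == 0

# The vCPU counts ESX exposes are exactly the power-of-two values:
supported = [n for n in range(1, 9) if is_power_of_two(n)]
print(supported)  # [1, 2, 4, 8]
```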
You are only testing this at this point, and you may get it to work, but I wouldn't depend on it to work for everything.
Besides, if you can get 2 CPUs out of a VM, what purpose would a third CPU serve? You don't get much use out of a multi-CPU VM anyway, so why complicate things when it wasn't designed for these oddball CPU counts?
Keep the default configuration, and don't change it. If it ain't broke, don't fix it. There is no reason to use 3 CPUs when 4 will work. I know sequentially you'd think that adding one more CPU gives you that much more performance, but it really won't.
2 CPUs is the most you will probably need in virtualised machines (Windows Virtual Server included). 1 is fine for 99% of things out there, because a vCPU doesn't equate to a physical CPU; it's a time slice in the end. So 1 CPU is good enough for most things, and adding more CPUs only adds more overhead to your ESX server, and you are probably never going to see any benefit.
The only systems that benefit from extra CPUs are databases, and they are constrained by I/O, because there is only one path on an ESX server: ALL I/O goes over either 1 iSCSI link, 1 RAID controller, or 1 Fibre HBA. So that's where the bottleneck is, not CPU.
I should have made it more clear that I have no intention, ever, of putting this in a production environment, it's purely curiosity.
I understand the binary implications and architecture and the world of 1's and 0's.
Using single vCPUs is always encouraged, though it's quite unrealistic in a lot of situations; but that's getting slightly off topic.
There are many occasions where a single CPU doesn't cut the mustard. This is mostly from the pure consolidation perspective (consolidating under-utilised servers etc.), but we are moving to an era where it's more than that. In our environment, for example, there are large numbers of Citrix servers which run pretty hot. A lot of these wouldn't run at all on a single CPU because of the workload profile. Sure, you could always argue that it's not a good virtualisation candidate, but lots of organisations are moving to a 'virtualise first' policy to gain the holistic benefits of virtualisation. Multiple vCPUs can also give a kind of natural isolation/protection in certain situations (long-running single-threaded reports, for example, while leaving enough cycles for the remaining users). During the conversion phase of existing physical machines, some applications (certain non-MSSQL DB servers) are installed and configured to use their 4 CPUs, and hence need to be migrated that way or re-installed.
Much of the work that happens at the moment is the initial virtualisation objective: consolidation. Much of the work from then on is about managing real workloads (DBs, Exchange etc.), which is why VMware are publishing numerous papers on these topics (Citrix scalability, Exchange, SQL) and being very pointed about optimising workload performance vs. vCPUs.
There are numerous other strategies, from managing workloads (e.g. AppSense, ThreadMaster) to QoS at an application/thread level rather than a VM level. Some machines simply require access to more than a single vCPU's worth of cycles. Others require periodic or sustained access to more than 2.
It's all good info.
Bottom line is: you can do it, but don't. It's not supported, even though it works. Also, you can't VMotion a VM configured this way.
Thanks for the replies!