2 x Dell 1955 Blades
Blade 1 has 2 x dual-core 2.992 GHz processors
Blade 2 has 2 x quad-core 1.995 GHz processors
So both blades have 2 sockets, but one blade has 4 cores total and the other 8 cores total.
Guest VM: Windows Server 2003 Enterprise
ESX only allows me to assign 4 processors to the guest VM (is 4 the maximum ESX Server allows?)
If I put the guest on Blade 1 and give it 4 processors, it will make use of all 2 x 2 cores, yes?
If I put the guest on Blade 2 and give it 4 processors, how will the cores be assigned? Will it use 2 cores from each processor, or take one processor for itself?
Or are the 8 cores in effect a CPU pool, so that assigning 4 processors draws from the whole pool of 8 cores?
Given the above, would I be better off running this guest on Blade 1, as its cores are faster? I will only run a single guest on the blade, as I need all of its memory.
I have had a good look for documentation on this with no luck, so if you know of something, please let me know.
Never assign 4 vCPUs on a 4-CPU system. It incurs too much overhead; you get much better performance by spreading the load across several 1-vCPU or 2-vCPU VMs.
ESX sees all the cores as one big CPU pool and schedules internally which task runs on which core. A vCPU is a virtual CPU and is not the same as a physical core.
You will also be very limited in the number of VMs you can add if you assign 4 vCPUs.
So if I have a blade with 2 processors, both dual-core (4 CPUs to ESX), I should not build a guest VM on it with 4 vCPUs?
If this is correct, I need to move some of my guests (as above) to my new 2 x quad-core blades so they can properly use 4 vCPUs, as that will leave 50% of the available CPUs free for other processes/guests/the ESX OS.
There is a co-scheduling overhead associated with vSMP. You should be able to run esxtop on your console and view the VM statistics while the VM is running. The problem arises when you have multiple VMs with multiple vSMP configurations. The CPU scheduler can only schedule your 4-vCPU VM to run on all 4 cores at the same time, not split them up; that is where the overhead comes from. With multiple VMs, the large-vCPU VM may have to wait longer to be scheduled, or your other VMs will, causing performance issues. esxtop shows counters like %WAIT, %CSTP and %RDY that will tell you whether you are hitting these problems.
So, as stated, run your VMs with as little SMP as needed; in some cases a VM may perform just as well with 1 or 2 vCPUs as with 4.
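To make the co-scheduling point concrete, here is a toy model in plain Python (not a real ESX tool, and it ignores time-slicing and the relaxed co-scheduling of later ESX versions): a VM is only dispatched when one free core per vCPU is available at the same instant, so a 4-vCPU VM on a busy 4-core host sits in the ready queue while smaller VMs slip in.

```python
# Toy model of strict vSMP co-scheduling. A VM can only run when
# the host has one free core per vCPU simultaneously; otherwise it
# waits (accruing what esxtop would report as %RDY time).
def schedule_tick(free_cores, vms):
    """Greedily dispatch VMs for one scheduling tick.

    vms is a list of (name, vcpu_count) tuples.
    Returns (dispatched, ready) lists of VM names.
    """
    dispatched, ready = [], []
    for name, vcpus in vms:
        if vcpus <= free_cores:
            free_cores -= vcpus      # claim all its cores at once
            dispatched.append(name)
        else:
            ready.append(name)       # runnable, but must wait

    return dispatched, ready

# 4 physical cores: two 1-vCPU VMs fit, the 4-vCPU VM has to wait.
run, wait = schedule_tick(4, [("small-a", 1), ("small-b", 1), ("big", 4)])
print(run, wait)   # ['small-a', 'small-b'] ['big']
```

The hypothetical VM names and the single-tick simplification are mine; the underlying point matches the post: the more vCPUs a guest has, the harder it is to find a slot where all of them can run together.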
Just because you can do a thing does not mean you should. 4 vCPUs should only be used on those occasions when the application running on the guest really needs the grunt, e.g. messaging, DBs, BI, CRM etc. If you are using 4 vCPUs and are set on a virtual environment, I would be investigating larger servers than dual quad-cores, something like the IBM 3850 M2 with four quad-core sockets; these have the ability to be joined together to make even larger machines. The scheduling overhead that results from creating 4-vCPU guests is such that the internal performance of the guest will be severely impacted: very high context switching in the guest, high CPU wait times, and most importantly of all, a poor user experience.
A 4-vCPU guest requires 4 physical cores to be available for every vCPU cycle, so even on a dual quad-core you only have 8 cores available to guests. Run 4 guests on this (3 x single-vCPU and 1 x quad-vCPU) and you are at the maximum you can use without CPU contention; remember the Service Console requires access to CPU0. Increase guest density and the first guest to be severely impacted is the 4-vCPU guest.
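The core budget above is just arithmetic; a few lines make it easy to re-check for other guest mixes (the guest names are hypothetical, and this assumes strict co-scheduling with CPU0 set aside for the Service Console):

```python
# Core budget on a dual quad-core blade, per the example above.
total_cores = 2 * 4                  # 2 sockets x 4 cores = 8
usable_cores = total_cores - 1       # CPU0 reserved for Service Console

# Hypothetical guest mix: 3 x single-vCPU plus 1 x quad-vCPU guest.
guests = {"guest1": 1, "guest2": 1, "guest3": 1, "big-guest": 4}
needed = sum(guests.values())        # cores all guests need at once

print(needed, usable_cores)          # 7 7 -> at the contention limit
```

Add one more guest to the dictionary and `needed` exceeds `usable_cores`, which is the point in the post where the 4-vCPU guest starts to suffer first.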
Personally I would start at a single vCPU for all guests; it is easier to increase CPUs in a guest than to decrease them. If a guest really needs the raw power of 4 CPUs, then questions should be raised as to whether the service is a suitable candidate for virtualisation.
VMware Communities User Moderator
I concur with Tom - I tell clients to start with a single vCPU, because I have found that in most cases the performance will be what they need, even for apps like SQL servers (lightly loaded, of course). If you do have an app that needs multiple vCPUs, then start with a dual and go up from there.
Cheers guys, food for thought!
It's for a test server that will be running an Oracle-based admin system, so I can try a number of VM configurations as well as SAN/LUN configurations.
This has, however, led me to re-examine some of the blade configurations I'm currently running, so it's been a great help.