Moderator note: This is not a question about the VCAP certifications/exams, so I've moved it to the ESXi 5 forum area (I've assumed you are using that version of ESXi)
You can use any of these options. The performance of your virtual machine won't change whichever one you choose.
But keep in mind that if you plan to increase the vCPU count later using the CPU Hot-Add feature, it only lets you add sockets, not cores.
Read this article:
It won't really matter in your case, but the bottom line is to always stick with sockets and leave cores at 1, unless you have a good reason, like licensing, to do something else.
A CPU (socket) is a physical processor in the machine, whereas cores are the independent execution units inside that CPU.
Having a dual-core processor means I have 1 CPU with two execution threads, so my workload can run on two threads with one CPU; in other words, the cores can run multiple instructions at the same time, increasing overall speed for programs.
This functionality has been available from vSphere 5.0 onwards, where you can set the cores per CPU for a virtual machine to get better performance for the respective application.
A 2x2 or 4x1 configuration means 2 CPUs with dual cores, or 4 CPUs with a single core each.
Which resources to give depends on the application, but as I said before, with the 2x2 configuration you will have 2 sockets to schedule, so your response can be faster than with 4x1, since the VMkernel would wait until it gets 4 free CPUs from the host, compared to the 2x2 requirement.
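To make the 2x2 vs. 4x1 comparison above concrete, here is a minimal sketch; the `VcpuTopology` class is a hypothetical illustration for this thread, not a VMware API:

```python
# Sketch of the 2x2 vs 4x1 topologies discussed above.
# VcpuTopology is a hypothetical helper, not part of any VMware SDK.

class VcpuTopology:
    def __init__(self, sockets, cores_per_socket):
        self.sockets = sockets
        self.cores_per_socket = cores_per_socket

    @property
    def total_vcpus(self):
        # The guest always sees sockets * cores_per_socket logical CPUs.
        return self.sockets * self.cores_per_socket

two_by_two = VcpuTopology(sockets=2, cores_per_socket=2)   # "2x2"
four_by_one = VcpuTopology(sockets=4, cores_per_socket=1)  # "4x1"

# Both layouts present the same number of vCPUs to the guest.
print(two_by_two.total_vcpus)   # 4
print(four_by_one.total_vcpus)  # 4
```

Note that both layouts total 4 vCPUs, which is why several replies below say there is no raw performance difference between them.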
Very useful link you have shared with me.
Thanks a lot for your valuable reply.
There is no performance impact of setting it one way or another. Because the hypervisor schedules the resources on the back end, it really doesn't matter what configuration they're presented to the guest OS in, at least not from a performance perspective.
The reason you used to have the option to select sockets vs. cores was to get around a limitation in the guest OS. For example, Server 2008 will only use up to 4 physical CPUs. By increasing the number of cores per socket, you can raise the number of CPUs the guest OS will allow you to use (since it sees them as cores). The one consideration to keep in mind is that if there isn't a reason to trick the guest OS using cores, scale VMs using the socket setting. The number of virtual sockets can/will affect vNUMA calculations.
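The socket-limit workaround above can be sketched as follows; the numbers and the `usable_vcpus` helper are illustrative assumptions, using the example of a guest edition that recognizes at most 4 sockets but counts every core inside a socket:

```python
# Illustration of the guest-OS socket-limit workaround described above.
# GUEST_SOCKET_LIMIT and usable_vcpus are assumptions for this example,
# not actual Windows or VMware values/APIs.

GUEST_SOCKET_LIMIT = 4  # maximum sockets this guest edition will use

def usable_vcpus(sockets, cores_per_socket, socket_limit=GUEST_SOCKET_LIMIT):
    """vCPUs the guest can actually use, given its socket limit."""
    recognized_sockets = min(sockets, socket_limit)
    return recognized_sockets * cores_per_socket

# 8 vCPUs presented as 8 sockets x 1 core: the guest only uses 4 of them.
print(usable_vcpus(sockets=8, cores_per_socket=1))  # 4
# The same 8 vCPUs as 4 sockets x 2 cores: the guest uses all 8.
print(usable_vcpus(sockets=4, cores_per_socket=2))  # 8
```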
I don't think there would be any performance difference in this situation.
Following the article suggested by MKguy, the real benefit of using cores vs. sockets depends on your physical architecture and virtual constraints (like licensing, OS efficiency, ...). The main thing you must consider, depending on CPU architecture, is NUMA and vNUMA. The first depends on the physical architecture, the second on the virtual architecture; in some cases more vNUMA nodes increase performance, while in other cases adding vNUMA nodes can hurt VM performance.
In the physical world, before choosing an architecture I suggest learning the needs of your application: if your application's memory footprint is smaller than a NUMA node's memory size, it will never need to request remote memory, and your process can be more efficient. In vSphere the behavior can be different, due to the CPU scheduler that governs how vCPUs run (as said by Paltelkalpesh), but the previous consideration can be carried over to the virtual world, because <<A virtual machine that has more vCPUs than the number of cores per NUMA node is referred to as "wide" virtual machine because its width is too wide to fit into a NUMA node.>>
For this reason, during design, choose an architecture that fits the application's needs (as you would for a physical architecture), but allow for CPU and core adjustments during pre-production, or CPU adjustments during the production phase.
Hope you find this article https://www.vmware.com/files/pdf/techpaper/VMware-vSphere-CPU-Sched-Perf.pdf useful too.