nachogonzalez

The ability to select between cores and sockets exists for two main reasons:
- Licensing:
Since virtualization is essentially the abstraction of physical hardware resources into software, you need to be able to emulate as many hardware configurations as possible.
Some operating systems, such as Windows Server or Red Hat, used to be licensed per CPU socket.
So with 2 vCPUs, a (1 core x 2 sockets) layout and a (2 cores x 1 socket) layout would consume a different number of licenses.
- NUMA:
The placement of vCPU resources directly affects performance on some systems.
Each CPU socket has direct access to a subset of the memory banks; this is called NUMA (Non-Uniform Memory Access).
So the ability to change cores per socket lets you align the VM's topology with the host's NUMA layout and gain performance (if you are using vSphere 6.5 or later, don't do it unless it's absolutely necessary).
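To make the licensing point concrete, here is a minimal sketch (not vSphere tooling, just illustrative Python) that enumerates the possible cores-per-socket layouts for a fixed vCPU count. Every layout delivers the same compute, but under per-socket licensing each one would consume a different number of licenses:

```python
# Illustrative sketch: all (cores per socket, sockets) layouts for a vCPU count.
# Under per-socket licensing, the license count equals the socket count.
def topologies(vcpus: int):
    """Yield every (cores_per_socket, sockets) pair whose product is vcpus."""
    for cores in range(1, vcpus + 1):
        if vcpus % cores == 0:
            yield cores, vcpus // cores

for cores, sockets in topologies(4):
    print(f"{cores} core(s) x {sockets} socket(s) -> {sockets} socket license(s)")
```

For 4 vCPUs this yields (1 x 4), (2 x 2), and (4 x 1): identical compute, but four, two, or one socket licenses respectively.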

As for the controllers, I'm not 100% sure, but:
With one controller per disk, each disk gets a dedicated controller and never waits behind other disks in the controller's queue. This is usually not necessary, since a single controller can handle many disks (up to 15 per controller) without issue at normal IO levels.
The best layout depends on how your disks consume IO.
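The queueing argument above can be sketched with some simple arithmetic. The controller queue depth of 64 below is an assumed illustrative value, not a figure from vSphere documentation; the point is only that all disks on one controller contend for the same queue:

```python
# Hypothetical sketch: disks sharing one virtual SCSI controller contend for
# its queue. The queue depth of 64 is an assumed value for illustration.
def queue_share_per_disk(controller_queue_depth: int, disks_on_controller: int) -> float:
    """Worst-case share of the controller queue available to each busy disk."""
    return controller_queue_depth / disks_on_controller

# A lone disk on its own controller gets the whole queue...
print(queue_share_per_disk(64, 1))   # 64.0
# ...while 15 disks on one controller split it under concurrent load.
print(round(queue_share_per_disk(64, 15), 1))
```

This is why a dedicated controller per disk only pays off for disks that are busy enough to saturate a shared queue; at normal IO levels the shared layout is fine.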

Let me know if this answers your questions.
