You can assign 2 vCPUs per VM, but they aren't statically bound in any way. The vCPUs will be scheduled across different physical cores unless you configure something like CPU affinity.
You can use 2 sockets with 1 core each, or 1 socket with 2 cores - it still gets scheduled the same way.
To answer your original question: there is no difference between 1 virtual CPU with 2 virtual cores and 2 virtual CPUs with 1 virtual core each. One common use case for multi-core vCPUs is guest OS licensing - some Windows editions limit how many sockets they will use, but you can work around that by adding virtual cores per socket. From the perspective of ESXi, it is just another available core either way.
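To make the point concrete, here's a trivial sketch (the function name is mine, just for illustration) showing that the hypervisor only cares about the product of sockets and cores, not how you split them:

```python
def total_vcpus(sockets: int, cores_per_socket: int) -> int:
    """ESXi schedules the product; the socket/core split is only
    topology presented to the guest OS (mainly for licensing)."""
    return sockets * cores_per_socket

# Both layouts present the same 2 schedulable vCPUs to ESXi:
assert total_vcpus(2, 1) == total_vcpus(1, 2) == 2
```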
I think you may have a misunderstanding of how virtual CPUs are allocated in vSphere. Allocating vCPUs to a VM does not bind that VM to specific physical CPU cores at all. When a VM needs to execute instructions, the ESXi scheduler places its vCPUs (two of them, for a 2-core VM) on whatever physical cores are available at that moment. It is valid to "overcommit" and allocate more vCPUs across your VMs than the host has physical cores. If your workload is relatively light, you will not notice a performance impact from doing this.
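As a rough sketch of what "overcommit" means in numbers (the function and the example figures are mine, not from any VMware tool - acceptable ratios depend entirely on your workload):

```python
def vcpu_overcommit_ratio(vcpus_allocated: int, physical_cores: int) -> float:
    """Ratio of vCPUs allocated across all VMs to physical cores on
    the host; a value above 1.0 means the host is overcommitted."""
    return vcpus_allocated / physical_cores

# Hypothetical host: 24 vCPUs handed out across VMs, 16 physical cores.
ratio = vcpu_overcommit_ratio(24, 16)
print(f"overcommit ratio: {ratio:.2f}")  # 1.50 - fine for light workloads
```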
This whitepaper should provide more insight into how CPU scheduling works.