- It doesn't, yet. vCPU hot-add disables vNUMA, meaning that as soon as your VM size is >= 25 vCPUs on those hosts, only one vNUMA node will be presented to the guest even though the VM is scheduled across two pNUMA nodes.
- coresPerSocket only defines the guest-visible CPU and cache topology; it doesn't affect ESXi scheduling or vNUMA autosizing (since 6.5). Right now the VM runs on a single socket, yet you present two sockets to the guest. The OS / application might schedule preferentially on one "socket" because it doesn't know that all vCPUs are actually in the same one.
In general, better locality is preferable to a wider distribution; the latter is only beneficial if the application is NUMA-optimized to a high degree _and_ can benefit from the additional memory bandwidth.
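To illustrate the point about guest-visible topology, a .vmx configuration like the following (values purely hypothetical) presents two sockets to the guest while ESXi, since 6.5, still sizes and schedules vNUMA independently of it:

```
# Hypothetical example: 16 vCPUs presented as 2 sockets x 8 cores.
# Since 6.5 this only changes the guest-visible CPU/cache topology;
# it does NOT influence ESXi's NUMA scheduling or vNUMA autosizing.
numvcpus = "16"
cpuid.coresPerSocket = "8"
```

If the whole VM fits into one pNUMA node, the guest would see two "sockets" here while all vCPUs actually share one physical socket, which is exactly the mismatch described above.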
Before 6.5, setting cpuid.coresPerSocket also set numa.vcpu.maxPerVirtualNode (edit: sorry, I used the internal short form maxVcpusPerNode earlier), and the two resulting NUMA clients were then most likely scheduled on two different pNUMA nodes. - There isn't really a disadvantage; just make sure that if you cross the memory capacity of a single pNUMA node, you manually size the vNUMA nodes, since ESXi's autosizing only considers vCPUs and cores per pNUMA node.
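As a sketch of that manual sizing (advanced .vmx settings; the vCPU counts are illustrative assumptions, not your sizing): 

```
# Hypothetical example: a 24-vCPU VM whose memory footprint exceeds
# a single pNUMA node. Cap each vNUMA client at 12 vCPUs so ESXi
# creates two clients instead of autosizing on cores-per-node alone.
numa.vcpu.maxPerVirtualNode = "12"
# Optionally align the guest-visible topology with the vNUMA layout:
cpuid.coresPerSocket = "12"
```

The key setting is numa.vcpu.maxPerVirtualNode; aligning coresPerSocket afterwards just keeps the guest-visible sockets consistent with the vNUMA clients.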
vMotion impact is mostly dictated by memory / CPU activity, and this particular workload doesn't seem excessive. There were some fairly substantial issues with vMotion of large VMs pre-6.5, but that is no longer applicable. There are of course still monster workloads that might be somewhat impacted during the trace / resume phase, but yours isn't getting close.
Even for those, the tracing impact was dramatically reduced in 7.0, and 7.0 U1 included some major changes to the resume phase.
Definitely read: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/vmotion-7u1-... if you want to know more.