I've built a lab with 3 nested ESXi 6.7 hosts on an HP Z420 with 128 GB of RAM and an E5-2640.
I'm not particularly interested in tuning my lab for high performance; my objective is to maintain my sysadmin skills and learn new ones.
However, after vMotioning my VCSA from one host to another, I noticed that the transfer speed was not really close to what I would expect. The speed isn't terrible, but I'm wondering why I'm not achieving more than 3.5 Gbit/s.
As far as I understand, the data being moved in a nested vMotion job resides in the RAM of my physical host, so there should be no bottleneck at that level.
My nested ESXi host VMs are configured with the standard VMXNET3 adapter. Within ESXi, the vmnics report 10 Gbit/s, full duplex. The vMotion VMkernel port is attached to a dvSwitch with MAC learning enabled, and the port group carrying the nested ESXi VMs on the physical host is also on a dvSwitch with MAC learning enabled.
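For reference, this is roughly how the link speed and vMotion interface settings can be checked from the ESXi shell (the vmknic name `vmk1` and the target IP are placeholders for your own vMotion interface and peer host):

```shell
# List physical NICs with their negotiated link speed and duplex
esxcli network nic list

# List VMkernel interfaces with their MTU
esxcli network ip interface list

# Test jumbo-frame connectivity to the other host's vMotion vmknic
# (placeholder IP; -s 8972 = 9000-byte MTU minus headers, -d = don't fragment)
vmkping -I vmk1 -s 8972 -d 10.0.0.12
```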
Why am I not getting close to the max speed of 10 Gbit/s when I vMotion?
There's a lot going on under the hood: see "The vMotion Process Under the Hood" on the VMware vSphere Blog.
And usually the load would be split across 2 separate physical systems. Sure, in your nested setup there should be no physical network traffic, but everything else in the process is the same.