I am going to add a 10G dual-port adapter to all the ESXi hosts in the cluster. Should I use these for VM Network or vMotion traffic? I initially bought these adapters for vMotion because, when I was putting a host into maintenance mode, some VMs migrating to different hosts would lose a few pings, and these VMs host sensitive applications. Would adding 10G to the vMotion traffic resolve the issue I am encountering?
Do you have shared storage in your virtual infrastructure? By default, you shouldn't see VMs lose network connectivity while vMotion moves them between hosts ...
I prefer to use the 10G adapters for virtual machine traffic. However, it really depends on the VI requirements and many other factors, such as the average network bandwidth usage of the production VMs, and so on!
Can you describe your actual network architecture in more detail? It is better to use the 10Gb interfaces for workload or for storage traffic such as iSCSI or NFS. How many pings did you lose when your VMs were vMotioned?
As I mentioned before, it depends on your average network/storage traffic. In a well-designed VI structure, I prefer to use FC HBAs for storage communication (8 or 16 Gbps, based on requirements and what the FC storage supports) and 10 Gbps NICs for VM workloads. That said, you may be forced to use 10 Gbps for both iSCSI/NFS and VM traffic because of budget limitations, and in some situations it's also not a bad idea to use CNA cards for both storage and network communication...
So if you are not worried about the SAN traffic, consider the 10G NICs for the VM workloads as the first priority. You can also carry the management VMkernel traffic there, because it is not high-bandwidth (although it is very important and critical). Or you can dedicate 1G NIC ports to management if you prefer to separate it physically.
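As a rough sketch of that layout on a standard vSwitch (the names vSwitch1, vmnic4/vmnic5, the vMotion port group, and the IP addressing are all assumptions here; substitute your own):

```shell
# Assumed names: vSwitch1 carries VM traffic, vmnic4/vmnic5 are the new
# 10G ports, and vmk1 on the "vMotion" port group stays on a 1G uplink
# for physical separation. Adjust everything to your environment.

# Attach the 10G uplinks to the VM-traffic vSwitch
esxcli network vswitch standard uplink add --uplink-name=vmnic4 --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic5 --vswitch-name=vSwitch1

# Create a separate VMkernel interface for vMotion with a static IP
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static

# Enable vMotion on that VMkernel interface
vim-cmd hostsvc/vmotion/vnic_set vmk1
```

This has to be repeated on every host in the cluster (each with its own vMotion IP), and the port groups/VLANs must of course exist on the physical switches first.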