When I put a host in maintenance mode and watch resxtop on vmnic0 and vmnic4, most of the traffic goes to vmnic0 and hardly any to vmnic4.
There are 2 dvUplinks and 2 vMotion port groups; in each port group one NIC is active and the other is standby.
Any idea why?
If you have 2 uplinks on the vSwitch where the vMotion vmkernel ports are, and one is standby and the other is active, then naturally the traffic will use only the active one.
If I misunderstood your configuration, send some screenshots, and also the output of "esxcfg-vswitch -l" and "esxcfg-vmknic -l".
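For reference, both commands are run from the ESXi shell (the names in the output will of course be your own vSwitches and port groups):

```
# List vSwitch/port group configuration, including which uplinks each port group uses
esxcfg-vswitch -l

# List VMkernel NICs: IP address, netmask, MTU, and the port group each one is on
esxcfg-vmknic -l
```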
Yes, but I have 2 vMotion port groups bound to 2 uplinks; each vMotion port group has only one uplink active. I followed this exactly:
http://www.yellow-bricks.com/2011/09/17/multiple-nic-vmotion-in-vsphere-5/
So in (r)esxtop you see that one physical vmnic is not being used like you expect, but what about the actual vmkernel NICs? Are they both being utilized or only one? If both, check which uplink they are assigned to in (r)esxtop.
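One way to check this (assuming you connect with resxtop from the vMA/vCLI; the host name below is a placeholder):

```
# Connect to the host, then press 'n' for the network view
resxtop --server esxi01.example.com
# In the network screen, check:
#   USED-BY            -> the vmkernel ports (vmk1, vmk2)
#   TEAM-PNIC          -> which physical uplink each vmk port is currently mapped to
#   PKTTX/s, PKTRX/s   -> per-port traffic while the vMotion is running
```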
Do you have failback enabled on the vmkernel port groups? Are you running a recent ESXi version with Update 1? There were a couple of nasty issues with standby/unused configurations like this which were fixed in U1, see:
http://vmtoday.com/2012/02/vsphere-5-networking-bug-2-affects-management-network-connectivity/
Also, do both the source and the destination host have 2 vMotion-enabled vmkNICs, and can you vmkping both?
http://www.yellow-bricks.com/2011/12/14/multi-nic-vmotion-how-does-it-work/
>In other words if the source has 2 x 1GbE and the destination 1 x 1GbE only 1 connection would be opened.
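To verify that, you can vmkping each of the destination's vMotion addresses from the source host (the addresses below are placeholders for your vMotion network; note that `-I` to pick the outgoing vmk interface is only available on ESXi 5.1 and later, so on 5.0 put each vMotion vmkNIC on its own subnet so routing selects it):

```
# From the source host, ping the destination's vMotion vmkNICs
vmkping 10.0.0.102     # destination vmk1 (same subnet as source vmk1)
vmkping 10.0.1.102     # destination vmk2 (same subnet as source vmk2)

# On ESXi 5.1+ you can force the interface explicitly:
# vmkping -I vmk1 10.0.0.102
```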
Using 2 NICs depends a lot on the hash algorithm in use. Are you hashing on MAC or IP address? I suspect your hash is the issue, not the amount of traffic.
I found that when I move VMs out of the host, both vmk NICs are used, but when I move VMs back into the host, only vmk1 is used.
Both hosts I vMotion between are on build 702118.
The load balancing policy is Route based on originating virtual port.
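For what it's worth, on a standard vSwitch the effective teaming/failover policy can be read from the host with esxcli (the port group name below is a placeholder; on a dvSwitch the per-port-group teaming and failover order is configured and viewed in vCenter instead):

```
# Switch-level load balancing policy and failover order
esxcli network vswitch standard policy failover get -v vSwitch0

# Per-port-group override (this is where the active/standby order
# for each vMotion port group would show up)
esxcli network vswitch standard portgroup policy failover get -p vMotion-01
```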
Could you provide esxtop views from the network screen while moving VMs out of and into the host?
Sounds like your physical switch is either not aware of your hashing policy, or the policy is working and needs to be changed. Typical hash policies are MAC- or IP-based. Also, EtherChannel does not typically balance traffic exactly 50/50 between ports. With only one inbound link seeing traffic, check your hash policy.
I think it works after all. On the receiving host I had been looking at TXpkts instead of RXpkts; RXpkts showed packets going over both vmk NICs.
Also, I think patching the hosts helped.
Looks like traffic is not 50/50
I'm not using EtherChannel; the load balancing policy is the default,
Route based on originating virtual port.