oscarleopard
Contributor

VM migration not adhering to vMotion configuration

I am in the process of migrating a number of VMs from a cluster onto a standalone host. All machines share the following configuration:

Management VMkernel: 10.0.x.1/8 (1 GbE)

vMotion VMkernel: 10.0.x.20/8 (10 GbE)

When I started the migration, everything seemed fine, but when I looked into the network performance I noticed it was no longer using the correct network cards. As shown in the image, the initial bandwidth and vmnic1 usage were correct (highlighted in the blue circle). At some point (orange circle) the 1 GbE vmnic3 appeared to start carrying vMotion traffic as well. Later it stopped using vmnic1 entirely (the network is still up and the host can ping using vmkping); this is highlighted in yellow. I cancelled and restarted the migration (green circle) and it worked as expected for a while, then transitioned to vmnic3 only.

[Attachment: tempsnip.png]

I have just started another migration and it is using vmnic3 exclusively.

[Attachment: Capture.PNG]

[Attachment: adapters.PNG]

[Attachment: vmkernel.PNG]

I am not sure whether this is something to do with having the 1 GbE NICs on a standard vSwitch and the 10 GbE NICs on a distributed switch, but given that it seemed to work fine when the migration started, I would have expected it to stay that way.
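For reference, here is the quick PowerCLI check I have been running against each host to confirm which vmkernel adapter is tagged for vMotion and what the physical NICs report; the host name is just a placeholder for one of mine:

# List vmkernel adapters and show which are enabled for vMotion
Get-VMHost "esx01.lab.local" |
    Get-VMHostNetworkAdapter -VMKernel |
    Select-Object Name, IP, PortGroupName, VMotionEnabled

# List physical NICs with their link speed (Mb) to tell the 1 GbE and 10 GbE cards apart
Get-VMHost "esx01.lab.local" |
    Get-VMHostNetworkAdapter -Physical |
    Select-Object Name, BitRatePerSec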

daphnissov
Immortal

You probably have two uplinks configured for your vDS with no preference set for the vMotion interface, so it's selecting an uplink based on the teaming policy in the vDS config.
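You can inspect the teaming policy, and pin vMotion to the 10 GbE uplinks if you want, with something like this in PowerCLI; the port group and uplink names here are just examples, so substitute your own:

# Show the current teaming policy on the vMotion port group
Get-VDPortgroup -Name "vMotion" | Get-VDUplinkTeamingPolicy

# Pin vMotion to the 10 GbE uplinks and mark the 1 GbE ones unused
Get-VDPortgroup -Name "vMotion" |
    Get-VDUplinkTeamingPolicy |
    Set-VDUplinkTeamingPolicy -ActiveUplinkPort "Uplink 3","Uplink 4" -UnusedUplinkPort "Uplink 1","Uplink 2"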

oscarleopard
Contributor

Thanks for the quick response.

This is the current configuration for the vDS:

[Attachment: vds.PNG]

Uplinks 3 and 4 are assigned to the 10 GbE cards on all servers.

[Attachment: topology.PNG]

Should that therefore be fine, with vMotion running on the 10 GbE cards?


Thanks

daphnissov
Immortal

The vmkernel port tagged for vMotion can use whichever vmnics on this host are designated as Uplink 3 and Uplink 4. Whether that's okay depends on you and your network design. It clearly works.

oscarleopard
Contributor

That's the thing: the cards assigned to Uplinks 3 and 4 are the 10 GbE cards.

daphnissov
Immortal

Oh, I read your first post again. Ensure all your hosts are using consistent vmnics for their vDS uplinks; a vmkernel port can't use an uplink that isn't designated in its switch's uplink configuration. You can verify the per-host mapping as shown below.
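A quick way to dump which physical vmnic backs each dvUplink on every host; the switch name is just an example, so replace it with yours:

# Show the vmnic behind each dvUplink, per host
Get-VDSwitch -Name "dvSwitch01" |
    Get-VDPort -Uplink |
    Select-Object ProxyHost, Name, ConnectedEntity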

oscarleopard
Contributor

Yes, I've confirmed that all hosts use the same configuration: 1 GbE for Management, 10 GbE for vMotion. The only difference is that on three hosts vmnic0/vmnic1 are the 10 GbE cards and on two hosts vmnic2/vmnic3 are, but all are assigned to the correct uplinks.

I have even gone so far as to change the vMotion network to something completely different and put it in a VLAN, and still the problem persists.

daphnissov
Immortal

Your vDS really shouldn't be specifying vmnics (pNICs) with different characteristics, so the uplinks should be consistent: either two 1 GbE interfaces or two 10 GbE interfaces. If necessary, you can have multiple vDSes to separate the two.
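If it helps, here's a sketch for auditing link speeds across the cluster so you can see exactly what sits behind each uplink on each host; the cluster name is a placeholder:

# List every physical NIC per host with its link speed in Mb
Get-Cluster "Production" | Get-VMHost |
    Get-VMHostNetworkAdapter -Physical |
    Select-Object VMHost, Name, BitRatePerSec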
