We are migrating some of our 10GbE adapters which form the uplinks of our existing vDSes to 40GbE NICs.
While I found quite a bunch of guides on how to migrate from a "classic" vSwitch to a vDS, I haven't found any recommendations on how to replace NICs of an existing vDS.
Are there any best practice guides I missed on Google? Or is it as straightforward as assigning the new NICs to the vDS and then unchecking the old ones?
Thanks,
Tom
Like most other OSes, ESXi numbers its NICs by their PCI addresses. If you remove physical NIC vmnic0 and replace it with a completely different NIC in the same PCI slot, the new card will become your new vmnic0. No additional configuration is necessary; the (d)vSwitch will continue to use the NIC designated as vmnic0.
If you're replacing the NIC, the host should re-use the same alias (vmnic#) and no other action is required.
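If the new 40GbE cards end up in different PCI slots and therefore receive new vmnic aliases, you can swap the uplinks per host from the ESXi shell instead. This is only a sketch: the vDS name `dvSwitch0`, the aliases `vmnic0`/`vmnic4`, and the DVUplink port ID `16` are placeholders you would confirm from the list commands first.

```
# List all physical NICs with their vmnic aliases, PCI addresses and link speeds
esxcli network nic list

# Show the vDS and which vmnics / DVUplink ports it currently uses
esxcli network vswitch dvs vmware list

# Unlink the old 10GbE uplink from its DVUplink port
# (port ID 16 is a placeholder taken from the output above)
esxcfg-vswitch -Q vmnic0 -V 16 dvSwitch0

# Link the new 40GbE NIC to the freed DVUplink port
esxcfg-vswitch -P vmnic4 -V 16 dvSwitch0
```

The same swap can also be done in the vSphere Client via the host's "Manage Physical Adapters" dialog for the vDS, which avoids touching the shell at all.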