Basically there are 3 ways how to do it:

- GUI
- CLI (from the Manager node)
- API

GUI - I crossed this option out even in the testing phase (lab) - it did not work properly for me.

CLI can be OK, but there is a big BUT. The migration process is more or less about preparing a VDS according to the N-VDS configuration, preparing an uplink profile, putting the host into MM, migrating the host to the VDS and taking it out of MM. The problem is in putting the host into MM - there is a default timer of 300 seconds, and if the host is not in MM by that time, the tool moves on to the next host regardless of the state of the cluster from a resource perspective. It is possible to prolong this timer with the maintenance-timeout option, like "vds-migrate esxi-cluster-name cluster maintenance-timeout 60", but I simply do not like the way it works.

I really recommend the API way - it is semi-manual - you have to put the host into MM and take it out yourself, but the process is much more under your control, and it has some other possibilities, like renaming the VDS at the time of its creation to a name matching your naming convention.

Other than that, what I have noticed: the N-VDS should have exactly the same configuration on each host (e.g. LLDP disabled / enabled everywhere), otherwise multiple VDSes will be created (but that is noted in the documentation - https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2/administration/GUID-1039A36F-F55E-4A0A-B6C6-2C383F4A716D.html).

I also had trouble with IP collisions for TEPs - some hosts were assigned IPs that were already in use (in that case, do not take the host out of MM - if you have DRS, VMs will automatically be moved there but won't communicate due to the collision, since the tunnels will be down for that host). A simple change of the IP Pool to DHCP, saving, and then assigning the IP Pool back fixed the issue.
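The API flow described above can be sketched roughly like this. Everything here is an assumption, not a verified recipe: the manager FQDN and credentials are placeholders, and the endpoint paths follow the NSX-T 3.x "NVDS Upgrade Readiness Tool" (nvds-urt) API family - check them against the API guide for your exact NSX-T version before using them.

```python
# Hedged sketch of the API-driven N-VDS -> VDS migration flow, where *you*
# control maintenance mode per host instead of relying on the CLI's 300 s timer.
# Endpoint paths, manager address and credentials below are ASSUMPTIONS based
# on the NSX-T 3.x nvds-urt API family; verify against your version's API guide.
import base64
import json
import ssl
import urllib.request

NSX_MANAGER = "https://nsx-manager.example.com"  # hypothetical manager FQDN

# Assumed nvds-urt endpoints (check your version's API reference):
PRECHECK = "/api/v1/nvds-urt/precheck"               # POST: start migration precheck
STATUS = "/api/v1/nvds-urt/status-summary/{id}"      # GET: poll precheck result
TOPOLOGY = "/api/v1/nvds-urt/topology?action=apply"  # POST: apply (edited) topology


def url(path: str, precheck_id: str = "") -> str:
    """Build the full request URL for a given API path."""
    return NSX_MANAGER + path.format(id=precheck_id)


def call(method: str, path: str, body=None, user="admin", password="changeme"):
    """Minimal urllib wrapper with basic auth; lab-grade TLS handling."""
    req = urllib.request.Request(url(path), method=method)
    cred = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {cred}")
    req.add_header("Content-Type", "application/json")
    data = json.dumps(body).encode() if body is not None else None
    ctx = ssl._create_unverified_context()  # lab only -- validate certs in prod
    with urllib.request.urlopen(req, data=data, context=ctx) as resp:
        return json.loads(resp.read() or b"{}")


# Intended per-host flow (MM entered/exited by you in vCenter, so you verify
# cluster resources and TEP/tunnel state at every step):
#   1. call("POST", PRECHECK)            -> returns a precheck id
#   2. call("GET", STATUS, ...)          -> wait until the precheck completes
#   3. fetch the recommended topology, rename the VDS in the returned JSON to
#      match your naming convention, then call("POST", TOPOLOGY, body=topology)
#   4. put the host into MM, trigger its migration, check tunnels, exit MM
```

The point of the sketch is the control loop in the trailing comments: nothing moves to the next host until you have checked the cluster yourself, which is exactly what the CLI's 300-second timer takes away.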