1 Reply Latest reply on Jul 27, 2020 12:39 PM by TheBobkin

    vSAN Migration to new hosts Procedure

    MJMSRI Enthusiast

      Hi all, I'm interested to know people's views on the scenario/project below and which way you would go. Currently in place:


      • 6.5 Update 3 All Flash vSAN Stretched Cluster in place, 8 hosts (4 in each datacenter) and 1 witness appliance in a 3rd site.
      • 1 x 6.5u3 Windows vCenter Server that is within the vSAN Cluster.
      • Licences are vCenter and ESXi Standard.
      • MGMT network on standard switches
      • vMotion and vSAN on Distributed Switches 
      • Used vSAN capacity: 24 TB


      The objective is to move to VCSA 6.7u3 or 7.0 and replace all 8 vSAN hosts' hardware with new servers. Options could be:


      • Option 1: Set up a new vSAN cluster and a new VCSA side by side, then move the VMs and their storage:
        1. Cross-vCenter vMotion is not possible, as the existing cluster only has ESXi Standard licensing: https://kb.vmware.com/s/article/2106952
        2. Another option might be to follow the KB for connecting the existing vSAN cluster to the new vCenter (VMware Knowledge Base), but I think this might be a big risk.
      • Option 2 could be a one-in-one-out approach, whereby the following would be done:
        1. In-place upgrade and migration from the Windows vCenter to VCSA 6.7.
        2. Rack the new hosts and install ESXi 6.7.
        3. Put the first existing 6.5u3 host into maintenance mode with Full Data Migration, then remove it from the vSAN cluster.
        4. Configure the first new host with the same hostname and IP as the host above, and add it to the vSAN cluster.
        5. Repeat the above for the 7 other hosts.
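      The one-in-one-out rotation above can be sketched as a simple runbook generator. This is only an illustration of the ordering of the steps; the host names and the 8-host count are placeholder assumptions, not details of the real environment:

```python
# Hypothetical sketch of the one-in-one-out rotation (Option 2).
# Host names are illustrative placeholders, not real inventory.

def replacement_steps(old_hosts):
    """Yield the per-host steps for a rolling one-in-one-out swap."""
    for i, host in enumerate(old_hosts, start=1):
        yield f"{i}a: Maintenance mode '{host}' with Full Data Migration"
        yield f"{i}b: Remove '{host}' from the vSAN cluster"
        yield f"{i}c: Configure new host with hostname/IP of '{host}', add to cluster"

# 8 old hosts, 4 per site (assumed layout)
old_hosts = [f"esx{n:02d}" for n in range(1, 9)]
steps = list(replacement_steps(old_hosts))
for s in steps:
    print(s)
```

      Note the serial nature of the loop: each host must finish its Full Data Migration before the next one starts, which is the main time cost of this option.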


      Any options I have missed?


      Which approach have you taken / would you take?



        • 1. Re: vSAN Migration to new hosts Procedure
          TheBobkin Virtuoso
          vExpert, VMware Employee

          Hello MJMSRI


          I wouldn't really consider adding an existing cluster to a new vCenter a "big risk" - I have done all or part of this many times, and as long as all of the elements that need consideration (as covered in the KB) are adhered to, it shouldn't be a problem. (That being said, I fix problems for a living, so maybe there's just nothing I would see as a big problem.)

          If you wanted to avoid some of the vDS import/export aspects, you could migrate all networking to standard switch(es) and then back to a new vDS in the new vC.


          Potentially a combination of Option 1 and 2 could be done e.g.:

          - Upgrade and migrate current vC to vCSA 6.7 U3.

          - Add the new cluster to this vC as its own cluster and configure it as a 4+4+1 with whatever features you have/want (you could temporarily use an Evaluation license on this cluster until you can use the old cluster's license).

          - SvMotion all the VMs and/or other data to the new cluster.

          - Once empty, decommission the old cluster.


          While SvMotion may not match the throughput of moving data with the maintenance mode Full Data Migration (FDM) option, it makes up for this (at least in part) in two ways: 1. no data is moved more than once (whereas with Option 2, data is moved to old hosts, then to a mix of old and new hosts, and so on); and 2. it doesn't require enough free space to fully evacuate a node in one site, and thus avoids any potential space issues. The latter would also apply if, for instance, a RAID5+RAID5-per-site Storage Policy were in use: FDM of any node would not be possible, as RAID5 requires a minimum of 4 nodes per site (4+4+1) for component placement.
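          As a rough illustration of the free-space point, here is a back-of-the-envelope sketch. All inputs are assumptions, not figures from the actual cluster: 24 TB raw used split evenly between sites (12 TB each, as a stretched cluster mirrors data across sites), 4 hosts of 8 TB raw per site, and 30% of raw capacity kept free as slack:

```python
# Back-of-the-envelope free-space check for Full Data Migration (FDM)
# of one host. All inputs are illustrative assumptions: raw used
# capacity split evenly between sites and across hosts.

def fdm_feasible(site_used_tb, hosts_per_site, host_raw_tb, slack=0.30):
    """Can one host's data be absorbed by the rest of its site?

    site_used_tb:   raw capacity consumed in this site
    hosts_per_site: hosts in the site (before evacuation)
    host_raw_tb:    raw capacity of each host
    slack:          fraction of raw capacity to keep free
    """
    per_host_used = site_used_tb / hosts_per_site
    remaining_hosts = hosts_per_site - 1
    usable_per_host = host_raw_tb * (1 - slack)
    # Usable space left on the surviving hosts after they also hold
    # all of the site's data, including the evacuated host's share:
    free_after = remaining_hosts * usable_per_host - site_used_tb
    return free_after >= 0, per_host_used, free_after

ok, per_host, headroom = fdm_feasible(
    site_used_tb=12.0,   # assumed: 24 TB total / 2 sites
    hosts_per_site=4,
    host_raw_tb=8.0,     # assumed host raw capacity
)
print(ok, per_host, headroom)
```

          With these assumed numbers the evacuation fits, but with smaller hosts or higher utilisation the check fails, which is exactly the situation where SvMotion to the new cluster avoids the problem.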

          What I have seen some customers do is add the new nodes to the cluster to make it an 8+8+1 (4 old and 4 new in each site), then do pretty much what you described in Option 2, though this would require (temporarily at least) that the cluster be vSAN-licensed for all 16 nodes' worth of sockets.
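          A quick sanity check of the licensing arithmetic for that temporary 8+8+1 layout, assuming 2 sockets per host (an assumption - check the actual hardware):

```python
# Illustrative licensing arithmetic for the temporary 8+8+1 layout.
# Sockets per host is an assumption, not a figure from the thread.
sockets_per_host = 2          # assumed
old_hosts, new_hosts = 8, 8
total_sockets = (old_hosts + new_hosts) * sockets_per_host
print(total_sockets)
```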


          daphnissov has a really good blog post on the pros and cons of upgrading vs. migrating, which should be considered for the vCenter aspect here:

          Upgrading vSphere through migration