Can you please tell us more about your network setup? And is there shared storage involved?
What you want to do is similar to something I did a couple of weeks ago.
We extended our datacenter with a new cluster. Because those 2 clusters were using the same vMotion network, I didn't have to migrate any hosts and just used vMotion to move VMs between the 2 clusters without any downtime.
It doesn't matter if one cluster has a different EVC mode than the other, unless your VM requires those specific CPU features. In that case you can still migrate, but you have to power the VM off and back on so the new CPU features are applied.
If this is also your situation, then I think I've answered your question. If I'm missing important parts, please tell me more about your network and storage setup.
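As a rough PowerCLI sketch (cluster name and EVC baseline are placeholders, and this assumes you're already connected via Connect-VIServer), checking and setting the EVC mode looks something like:

```powershell
# List the current EVC mode of every cluster
Get-Cluster | Select-Object Name, EVCMode

# Set the EVC baseline on a cluster (name and baseline are placeholders)
Set-Cluster -Cluster (Get-Cluster -Name "NewCluster") -EVCMode "intel-sandybridge" -Confirm:$false
```

Comparing the EVCMode values of the source and destination clusters before the vMotion tells you whether running VMs will move cleanly or need a power cycle first.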
By the way, why do you want to provision a new vCenter?
I've used a similar procedure to migrate stuff from 5.5 to 6.0. Upgrading the old vCenter wasn't a real option for several reasons. The only problem was the distributed virtual switches. So I wrote some PowerCLI scripts that grab a NIC from the VDS, create a VSS, read the portgroups from the VDS, create matching portgroups on the VSS, and migrate the VMs from VDS to VSS. Then take over the ESXi host from the new vCenter, vMotion the VMs (EVC configured), and use the reverse procedure to move the VMs from the VSS to the new VDS. I think I migrated about 150 VMs (and 20 VLANs) that way. It worked well.
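The VDS-to-VSS part of those steps can be sketched roughly like this in PowerCLI. Host, switch and NIC names are placeholders, and the order matters: free an uplink, build the VSS, recreate the portgroups, then move the VM NICs. Test on a lab host first, since pulling uplinks in the wrong order can drop connectivity.

```powershell
# Assumes Connect-VIServer has already been run against the old vCenter
$vmhost = Get-VMHost -Name "esx01.example.com"
$vds    = Get-VDSwitch -Name "old-vds"

# 1. Free one physical NIC from the VDS so the VSS has an uplink
$nic = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic1"
Remove-VDSwitchPhysicalNetworkAdapter -VMHostPhysicalNic $nic -Confirm:$false

# 2. Create a standard switch with that uplink
$vss = New-VirtualSwitch -VMHost $vmhost -Name "migration-vss" -Nic $nic

# 3. Recreate the VDS portgroups (same names and VLAN IDs) on the VSS
foreach ($pg in Get-VDPortgroup -VDSwitch $vds) {
    New-VirtualPortGroup -VirtualSwitch $vss -Name $pg.Name -VLanId $pg.VlanConfiguration.VlanId
}

# 4. Move each VM NIC to the same-named portgroup on the VSS
foreach ($vm in Get-VM -Location $vmhost) {
    foreach ($adapter in Get-NetworkAdapter -VM $vm) {
        $target = Get-VirtualPortGroup -VirtualSwitch $vss -Name $adapter.NetworkName
        Set-NetworkAdapter -NetworkAdapter $adapter -PortGroup $target -Confirm:$false
    }
}
```

Keeping the portgroup names identical on both switches is what makes step 4 a simple name lookup instead of a manual remap per VM.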
May I ask what the problem was with the dVS? There should be no problem upgrading and using ESXi 6.0 with dVS version 5.5. The only "issue" is that you cannot use the NIOC v3 that comes with dVS 6.
There was a major restructuring of the environment, and the old vCenter servers had their own problems, so we decided to build a new vCenter, add some new hosts, migrate VMs by moving their host to the new vCenter, vMotion the VMs, then reinstall the not-so-old hosts and decommission the really old ones. Repeat until finished, while the VMs stay running. So we had to move ESXi hosts with a dVS configuration to a new vCenter with a different dVS configuration. There may be other ways to do this, but manually remapping the network config of >150 VMs and >20 portgroups isn't fun (and there will be some collateral damage), so I scripted that part. It worked, so everybody was happy.
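Once the host has been taken over by the new vCenter, the reverse step is essentially the same name-based mapping in the other direction. A sketch, again with placeholder host/switch/NIC names and assuming an existing Connect-VIServer session to the new vCenter:

```powershell
$vmhost = Get-VMHost -Name "esx01.example.com"
$newVds = Get-VDSwitch -Name "new-vds"

# Join the host to the new VDS and give it a free uplink
Add-VDSwitchVMHost -VDSwitch $newVds -VMHost $vmhost
$nic = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic0"
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $newVds -VMHostPhysicalNic $nic -Confirm:$false

# Map each VM NIC from its VSS portgroup to the VDS portgroup with the same name
foreach ($vm in Get-VM -Location $vmhost) {
    foreach ($adapter in Get-NetworkAdapter -VM $vm) {
        $target = Get-VDPortgroup -VDSwitch $newVds -Name $adapter.NetworkName
        Set-NetworkAdapter -NetworkAdapter $adapter -Portgroup $target -Confirm:$false
    }
}
```

After all VM NICs are back on the new dVS, the temporary VSS and its portgroups can be removed and the remaining uplinks moved over.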