I have questions about our upgrade from vSphere 6.7 to vSphere 7. We have to replace the SD card local storage on our Cisco UCS VMware hosts. After discussions with Cisco and VMware, we are replacing it with M.2 flash drives and an M.2 RAID controller in a RAID 1 configuration. To install the new hardware, we need to remove the current RAID controller with the two SD cards containing ESXi, since they use the same port.
We have a backup data center in a different state. My question is: would it be possible, using a reference host of the same server model, to install ESXi onto the M.2 flash drives, then take those flash drives to our backup data center, place them in the hosts there, and complete the configuration on site, to save time? We have approximately 30 hosts at our backup data center that need the M.2 flash drive replacements. I want to verify whether an unconfigured ESXi installation contains any setting that ties it back to the physical host it was installed on (like a SID) and could cause issues when the drives are placed in different same-model hosts.
If that raises no issues, could we go further: install ESXi on the reference host for the servers at our backup data center, import the host into our production vCenter Server under a temporary hostname and IP, and complete most of the configuration in vCenter? Then remove the host from vCenter, take the M.2 flash drives / RAID controllers to our backup data center, install them in the hosts there, rename and re-IP the ESXi hosts, import them into the vCenter at our backup site, and finish any remaining configuration. Both vCenter Servers are running in linked mode. I also want to confirm that, with this method, it is OK to change both the name and the IP of an ESXi host without causing issues.
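For what it's worth, the rename/re-IP step is normally done from the ESXi console or SSH before rejoining vCenter. A hedged sketch, using standard esxcli commands (the hostname, IP, netmask, and gateway values below are placeholders, not your actual values):

```
# Set the new hostname/FQDN
esxcli system hostname set --fqdn=esxi01.dr.example.com

# Re-IP the management interface (vmk0 assumed to be the management vmknic)
esxcli network ip interface ipv4 set -i vmk0 -t static -I 10.20.0.11 -N 255.255.255.0

# Set the default gateway
esxcli network ip route ipv4 add --gateway 10.20.0.1 --network default
```

Doing this while the host is disconnected from vCenter (then adding it back under the new name) avoids a stale inventory entry pointing at the old identity.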
The other question: since this will take time, could there be any issues with running the latest version of vCenter 7 against clusters still on ESXi 6.7 for an extended period? I'm almost positive this should not be an issue, but I want to verify. Also, we have several large clusters, and it may take several days to a week to complete installation of ESXi 7 on all of the hosts in those clusters. Could a mixed cluster of ESXi 7 and 6.7 hosts for an extended period (maybe a week) also cause issues?
Thanks for any insight you can provide.
Just a suggestion: create an ISO or USB flash drive with a kickstart script for the setup (see "Installation and Upgrade Script Commands" on vmware.com). You can do quite a lot with "%firstboot", and it would be a clean install.
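As a rough illustration (hostname, IP, and password below are placeholders, not recommendations), a minimal scripted-install ks.cfg using the documented directives might look like:

```
# Accept the EULA and install to the first local disk
vmaccepteula
install --firstdisk --overwritevmfs
# Placeholder root password -- change before use
rootpw TempPassw0rd!
# Static management network; per-host placeholder values
network --bootproto=static --device=vmnic0 --ip=192.168.1.10 --netmask=255.255.255.0 --gateway=192.168.1.1 --hostname=esxi-temp.example.com
reboot

%firstboot --interpreter=busybox
# Example post-install customization: enable SSH
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh
```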
ESXi 6.7 hosts are fully supported under vCenter 7.0 U3 (see the Product Interoperability Matrix on vmware.com).
Thanks. I don't believe that will save us much time, though. The point is to do as much as possible before going to our DR site, so we do not need to spend up to two weeks in another state reinstalling ESXi across our entire environment. The kickstart script might save a few minutes by not having to configure the management network manually, but you still need to edit the script with the IP information for each host.
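That per-host editing can itself be automated. As a sketch (the CSV layout, file names, and kickstart directives here are assumptions for illustration), a short Python script can stamp out one kickstart file per host from a hostname/IP list:

```python
import csv
from string import Template

# Minimal kickstart template; ${hostname} and ${ip} are filled in per host.
# Netmask/gateway are placeholder values shared by all hosts in this sketch.
KS_TEMPLATE = Template("""\
vmaccepteula
install --firstdisk --overwritevmfs
rootpw TempPassw0rd!
network --bootproto=static --device=vmnic0 --ip=${ip} --netmask=255.255.255.0 --gateway=192.168.1.1 --hostname=${hostname}
reboot
""")

def generate_kickstarts(csv_path):
    """Read hostname,ip rows from a CSV and write one ks-<hostname>.cfg per host."""
    written = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            out_name = f"ks-{row['hostname']}.cfg"
            with open(out_name, "w") as out:
                out.write(KS_TEMPLATE.substitute(
                    hostname=row["hostname"], ip=row["ip"]))
            written.append(out_name)
    return written
```

With a 30-row CSV this produces 30 ready-to-burn scripts in one pass, so the only per-host work left is pointing the installer at the right file.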
I believe it used to be possible to install ESXi on a USB drive that could be swapped into another same-model host in case of hardware failure, which would seem to indicate there isn't anything in the ESXi installation that ties it back to a particular physical server. If we install ESXi on the M.2 flash drives without any configuration (no management network config, etc., just the base install), then install those flash drives in our hosts at the DR site and configure ESXi once the drives are in the servers, should that be OK?