Hello,
I've been searching for documentation on how to physically replace/upgrade my host hardware. We currently have 2 ESXi hosts in our cluster, each running about two dozen guests. Many of the guests are business critical and cannot have their availability affected. I purchased some similar servers for a test lab to practice the transition, but am at a loss as to the best way to do it.
The client/my employer wants, for one reason or another, the replacement hardware to keep the same IPs and hostnames as the originals.
1) Is there a white paper or FAQ on best practices for replacing host hardware?
2) Should it be as easy as running a vicfg-cfgbackup backup on the original host and a vicfg-cfgbackup restore on the replacement host? When I try this, I get mismatch errors because the servers are not like-for-like.
3) I considered adding the replacement hardware to the cluster and migrating the VMs that way, but I am maxed out on my licenses.
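On point 2, here is a rough sketch of the vicfg-cfgbackup round trip from the vSphere CLI (the hostnames and file paths are placeholders, not values from this environment):

```shell
# Save the configuration of the original host
# (run from a machine with the vSphere CLI installed, or from the vMA)
vicfg-cfgbackup --server old-esxi.example.com --username root \
    -s /tmp/old-host-config.bak

# Restore it onto the replacement host; -f forces the load past a
# build-number mismatch, but it does NOT work around different hardware
vicfg-cfgbackup --server new-esxi.example.com --username root \
    -l /tmp/old-host-config.bak -f
```

Note that the backup file records hardware-specific details (NIC layout, system UUID), so restoring it onto a non-identical server is generally unsupported, which would explain the mismatch errors above.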
Hardware info:
Original hosts: HP ProLiant DL360 G5, dual quad-core CPUs, plus a 4-port gigabit Ethernet card
Replacement hosts: HP ProLiant DL360 G6, dual hex-core CPUs, plus a 4-port gigabit Ethernet card
Thanks for your help. I am new to this and am trying to get up to speed quickly while maintaining service levels (hence the test lab). Your advice is appreciated,
Welcome to the Community,
you said "... but am maxed on my licenses". Can you please clarify this? A newly installed ESXi host can run in evaluation mode for 60 days, so unless you are limited by the number of supported hosts (e.g. 3 hosts with vCenter Foundation or vCenter Essentials), the easiest approach would indeed be to add the new host to vCenter Server and migrate the VMs.
André
By maxed licenses, I mean that I am licensed for two dual-CPU servers, and both of those are live.
So I would be able to add a 3rd server to the cluster in evaluation mode without problems? I was worried it would conflict with the working environment.
You shouldn't have issues adding a host in evaluation mode. That's actually a common way to do a rolling hardware upgrade. Once the "old" host is evacuated, you can remove it from the cluster to free up its licenses, which you can then assign to the new host.
André
When attempting to add that 3rd server to the cluster, vCenter replies with "there are not enough licenses installed to perform the operation".
Is there something I need to do so this machine can be rolled in under evaluation mode? It is a fresh install of ESXi.
Thanks!
What's the version and build of your vCenter Server and the new host?
Can you confirm that the time is properly synchronized/set on the new host?
Can you further confirm that the newly installed host is still configured/running in "Evaluation Mode"?
André
Hello AP,
Thanks for your replies. I've got more info for you.
My versioning is as follows:
VMware Infrastructure Client v2.5.0 build 341446
VMware VirtualCenter v2.5.0 build 341446
ESXi hosts: 3.5.0 build 391406
Today I tried a different approach: I attempted to connect to the production hosts from my test vCenter Server, and got a message stating that the host is already being managed by another vCenter (identified by IP), and that only one vCenter Server may manage a host at a time. It then asked whether I wanted to disconnect the host from its original management server. I answered no, as I do not want to interfere with production.
I would very much like to add this 3rd server to my production cluster so I can move VMs to it and decommission one of the older hosts, but I keep hitting a wall.
Suggestions?
Thanks!
I am new to this company and product so I am learning a lot working through this.
It would seem the company is running 3.5 but is entitled to 4.0 and 4.1, based on the licensed downloads in their VMware portal account.
Would a workable plan be to install VMware vCenter 4.1 on a separate server and then pull in the production ESXi 3.5 hosts? Since that vCenter would be in evaluation mode, would it let me bring in a 3rd server, move VMs to it, and then remove one of the 3.5 ESXi hosts?
I am trying to interrupt operations as little as possible while optimizing the setup. My plan to leverage an eval vCenter 4.1 seems to keep the production hosts churning along happily while VMs are moved around.
Does this make sense or am I wrong?
Thanks!
Here is the upgrade path from 3.5 to 4.0/4.1:
VMware KB: Upgrading ESX/ESXi 3.5 or 4.0 to ESX/ESXi 4.1
You can build a new server for vCenter and migrate the database over (so you don't lose any historical data), or just start from scratch... When I went from vCenter 2.5 to 4.1, we had to start over because our 2.5 database had become corrupted. You lose the historical performance data, permissions, folders, etc., but the VMs and hosts stay intact.
Thanks Ben,
I will go that route then, thanks for the link.
As an aside, not related to the thread's original topic: since you lost that vCenter database previously, have you changed how you deal with vCenter and the data it stores? E.g., have you started backing up that server or its database so that you don't ever lose the historical data again?
No. I've always backed up the database on a regular basis in case of issues. In this instance the database was fine for vCenter itself, but there was corruption that wouldn't allow us to upgrade, so we just started fresh. I had used PowerCLI scripts to back up the permissions and roles and move them to the new vCenter, so it was a pretty easy move.
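For anyone reading later, here is a minimal PowerCLI sketch of the kind of roles/permissions backup described above (the server names and file paths are placeholders, and this is not the actual script used, just one way to do it):

```powershell
# Connect to the old vCenter (prompts for credentials)
Connect-VIServer -Server old-vcenter.example.com

# Dump custom roles along with their privilege lists
Get-VIRole | Where-Object { -not $_.IsSystem } |
    Select-Object Name, @{N='Privileges';E={$_.PrivilegeList -join ';'}} |
    Export-Csv C:\backup\roles.csv -NoTypeInformation

# Dump permission assignments (who holds which role on which object)
Get-VIPermission |
    Select-Object Principal, Role, Propagate, @{N='Entity';E={$_.Entity.Name}} |
    Export-Csv C:\backup\permissions.csv -NoTypeInformation

Disconnect-VIServer -Confirm:$false
```

The CSVs can then be replayed against the new vCenter with New-VIRole and New-VIPermission.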
