kaychan
Contributor

Failed to install VirtualCenter agent service after upgrade

Hi,

I ran an upgrade of VirtualCenter from 2.0 to 2.0.1 for my ESX hosts, which have been upgraded to 3.0.1 (as I want to run 64-bit). The upgrade went through successfully, but my ESX hosts were disconnected. When I tried to reconnect, I got the error message "Failed to install VirtualCenter agent service".

I browsed through the forum, and someone suggested running the agent install manually by doing the following:

Run the vpx-upgrade-esx-3-linux-xxxxx script (matching your ESX version) manually on the server from the Service Console.

These are located in the VirtualCenter install folder.

\VMware\VMware VirtualCenter 2.0\upgrade

But when I look into that folder, there are a number of vpx-upgrade files, named from vpx-upgrade-esx-1-linux... to vpx-upgrade-esx-4-linux... Which one should I run? Should I just use vpx-upgrade-esx-3-linux-32042? Also, where (which directory) should I run it from after logging on to the Service Console? I am not a heavy Linux user.

Thanks.

13 Replies
bbshaw
Contributor

Did you find the answer to this question? I had the exact same thing happen to me just now. Do the numbers on the files refer to versions of the vpxd agent?

Thanks

kaychan
Contributor

I did not get any answer, and I ended up having to reboot my ESX server, which fixed the problem.

admin
Immortal

What percentage does the add host task get to before it fails? If it doesn't get to 19%, that probably means there is a network problem between your VC server and the ESX host (the upload of the VC agent can time out because of low bandwidth between the machines, often caused by mismatched duplex settings).
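If you want to check from the ESX side, the Service Console can show the negotiated settings (a sketch; vmnic0 is just a placeholder for whichever NIC carries your management traffic):

# List physical NICs with their current speed/duplex settings
esxcfg-nics -l

# If autonegotiation picked the wrong mode, force it explicitly, e.g.:
esxcfg-nics -s 100 -d full vmnic0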

jaygt
Enthusiast

I had the same issue and put in a call to VMware support.

I found the management interface had to be restarted to get the host to reconnect and install the new VC agent.

So I used PuTTY to get into each of the disconnected hosts, executed "/sbin/service mgmt-vmware restart", and once it completed I waited a few seconds and was then able to reconnect the host to VC. A few times I was prompted to enter a username and password, but not often.
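Per host it was roughly this (a sketch; the status check assumes the init script supports it):

# Restart the Service Console management agents
/sbin/service mgmt-vmware restart

# Wait a few seconds, then confirm the daemon came back up
/sbin/service mgmt-vmware status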

When I had all of the hosts reconnected, I had to reconfigure HA on the cluster the hosts were in, by removing HA from the cluster and then reapplying it.

The strange thing is that not all of my servers were affected.

netlinecg
Contributor

Same problem here.

After the upgrade to VC 2.0.1 I had this error.

In my case the resolution was to create the folder /tmp/vmware-root on the ESX hosts. The installer sometimes doesn't create it, so the install can't go on.
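On each affected host, from the Service Console, that is simply (mkdir -p is a no-op if the folder already exists):

# Create the temp directory the agent installer expects
mkdir -p /tmp/vmware-root

After that the reconnect from VC should be able to proceed.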

GBromage
Expert

"But when I look into that folder, there are a number of vpx-upgrade files named from vpx-upgrade-esx-1-linux... to vpx-upgrade-esx-4-linux... Which one should I run?"

Which one you run depends on the version of ESX on the server you're connecting to.

2.0.1+ = vpx-upgrade-esx-0-linux-*

2.1.0+ = vpx-upgrade-esx-1-linux-*

2.5.0 = vpx-upgrade-esx-2-linux-*

2.5.1 = vpx-upgrade-esx-3-linux-*

2.5.2 = vpx-upgrade-esx-4-linux-*

2.5.3 = vpx-upgrade-esx-5-linux-*

3.0.0+ = vpx-upgrade-esx-6-linux-*

e.x.p = vpx-upgrade-esx-6-linux-*

That info is contained in the bundleversion.xml file in the same folder, which I only looked at by pure chance. As far as I know (but I could be wrong), it's not formally documented elsewhere.

I hope this information helps you. If it does, please consider awarding points with the 'Helpful' or 'Correct' buttons. If it doesn't help you, please ask for clarification!
stichnip
Contributor

Here is what you need to do:

1) Copy "vpx-upgrade-esx-6-linux-33643" to the /tmp folder of your ESX host.

2) Change to that folder with "cd /tmp".

3) Run "ls" to make sure the file is there.

4) Install the new agent with "sh vpx-upgrade-esx-6-linux-33643".

It doesn't require a reboot. Once it has finished running, you can go into VC and just add the host; the whole sequence on the Service Console looks like the sketch below.
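Assuming the bundle has already been copied to the host (e.g. with WinSCP):

# 2) Change to the tmp folder
cd /tmp

# 3) Make sure the file is there
ls

# 4) Install the new agent
sh vpx-upgrade-esx-6-linux-33643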

stichnip
Contributor

Here are the "book" steps to do this:

Install VPXA on ESX server manually

----


Please follow the steps below to install the VC agent manually:

1) Log in to the VC 2.0 server as Administrator.

2) Open the folder for the VC 2.0 installation. By default this will be "C:\Program Files\VMware\VMware VirtualCenter 2.0\upgrade"

3) You need to use the correct file for each version of ESX Server. You can find the mapping in bundleversion.xml, e.g.:

ESX 2.5.2: vpx-upgrade-esx-4-linux-*

ESX 2.5.3: vpx-upgrade-esx-5-linux-*

ESX 3.0.0: vpx-upgrade-esx-6-linux-*

4) Copy the file "vpx-upgrade-esx-y-linux-xxxxx" to your ESX host, where y and xxxxx are based on bundleversion.xml; xxxxx is the build number, e.g.:

VC 2.0.0 build number is 27704

VC 2.0.1 build number is 32042

VC 2.0.1 Patch 1 build number is 33643

(Use a secure copy utility such as WinSCP or PuTTY PSFTP to copy this file to the ESX server.)

5) Log in to the ESX server as root.

6) In the directory where you copied the upgrade bundle, run the command:

sh ./vpx-upgrade-esx-y-linux-xxxxx

7) Run the commands:

- service vmware-vpxa restart

- service mgmt-vmware restart

8) Log in to the VC 2.0 server as Administrator.

9) Open the "Control Panel -> Administrative Tools -> Services" window.

10) Restart the "VMware License Server" service.

11) Restart the "VMware VirtualCenter Server" service.

12) Now open your VI Client and try to connect to the ESX host.
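If you want a sanity check after step 7, something like this on the Service Console will confirm the agent actually landed (a sketch; the version string will match whichever bundle you installed):

# The new agent package should show up, e.g. VMware-vpxa-2.0.1-xxxxx
rpm -qa | grep -i vpxa

# The vpxa process should be running ([v] keeps grep out of its own results)
ps aux | grep [v]pxa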

TFX
Contributor

Hello all,

My problem seems to go further. I did all of the above, but the installation of "sh ./vpx-upgrade-esx-6-linux-40644" hangs (see the log below). There are six VMs running on this ESX 3.0.1 server.

The VI Client's error is "Failed to install the VirtualCenter Agent service". Some help would be appreciated.

I have a total of 8 DL380 G5s with ESX 3.0.1, and only this one has the problem.

Now there is no vpxa installed anymore; "vpxa -v" returns "-bash: vpxa: command not found". Uh, yikes!

NOTE: The root password is correct, but the VI Client reports "Login failed due to a bad username or password." Then it starts to install the update and hangs for about two (!) hours at 19% on "Reconnect host".

Log:

[27747] 2007-03-13 11:20:03 AM: Logging to /var/log/vmware/vpx-ivpx-upgrade-esx-6-linux-40644.log
[27747] 2007-03-13 11:20:03 AM: exec mkdir -p /tmp/VMware-vpx-esx-6-linux-40644
[27747] 2007-03-13 11:20:03 AM: status = 0
[27747] 2007-03-13 11:20:03 AM: exec tar -xvf /tmp/VMware-vpx-esx-6-linux-40644.tar -C /tmp/VMware-vpx-esx-6-linux-40644
[27747] 2007-03-13 11:20:03 AM: status = 0
[27747] 2007-03-13 11:20:03 AM: exec cd /tmp/VMware-vpx-esx-6-linux-40644/
[27747] 2007-03-13 11:20:03 AM: status = 0
[27747] 2007-03-13 11:20:03 AM: Build: 39823 List: LGTOaama-5.1.2-1.i386.rpm LGTOaamvm-5.1.2-1.i386.rpm VMware-vpxa-2.0.1-40644.i386.rpm
[27747] 2007-03-13 11:20:03 AM: Rpmlist: VMware-vpxa-2.0.1-40644.i386.rpm LGTOaamvm-5.1.2-1.i386.rpm LGTOaama-5.1.2-1.i386.rpm
[27747] 2007-03-13 11:20:03 AM: exec rpm -ev --allmatches VMware-vpxa
[27747] 2007-03-13 11:20:03 AM: status = 0
[27747] 2007-03-13 11:20:03 AM: exec rpm -ev --allmatches LGTOaamvm


stichnip
Contributor

What version of VC are you using?

TFX
Contributor

Patch 2.

We solved it by killing a few processes and then executing "service mgmt-vmware restart" again.
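Roughly what we did (a sketch; which processes are hung will vary, so check the ps output rather than copying the PID):

# Find leftover installer/agent processes
ps auxwww | grep -E 'vpx|rpm'

# Kill the hung ones by PID (27747 was the installer's PID in our log)
kill 27747

# Then restart the management service again
service mgmt-vmware restart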

Greetings,

TFX

mkhwc
Contributor

I too ran the upgrade to 2.0.1 build 40644 and had the same problem, but it did not affect all the servers. I had recently installed two new servers that were not in production yet, and I was able to connect to them without restarting any services.

Running "service mgmt-vmware restart" fixed the problem on my other ESX 3.0.1 build 34176 servers.

I did have to re-enter the username and password a few times before they would connect, but it eventually worked.

Now I get HA errors; I suppose I'll have to reconfigure that now.

stichnip
Contributor

Most of the HA errors I have come across were DNS errors. The rest I usually had to fix by putting the hosts into a new cluster.
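A quick way to rule DNS in or out from each host's Service Console (a sketch; the hostname and IP are examples):

# HA needs forward and reverse lookups to work for every host in the cluster
hostname                      # should return the name the host is registered under
nslookup esx01.example.com    # forward lookup of a cluster member
nslookup 192.168.1.10         # reverse lookup of its IP

# If DNS can't be fixed quickly, matching entries in /etc/hosts on every host
# are a common workaround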
