I too upgraded to VC 2.0.2 from 2.0.1. Then upgraded
my six ESX hosts to 3.0.2 from 3.0.1. All went well,
except now VC does not show the traffic light
statuses for any VM's when viewed in the Virtual
Machines tab of VC. The statuses do show in the
Hosts tab window. Wondering what's going on to cause
this.
I was having the same problem and a service vmware-vpxa restart brought back all of the status lights for my vms.
Thanks to bjmore for his suggestion. Restarting the vmware-vpxa service on each ESX host resolved the issue, and the status traffic lights now display correctly in VirtualCenter.
Hi there,
if you get the following error message
Sequence VPX_INVENTORY_SEQ is not defined on the database
Failed to initialize VMware VirtualCenter. Shutting down...
take care of the last paragraph of the Release Notes, which describes what you have to do in case you start with a fresh install of VirtualCenter but use your old database. In that case you have to upgrade the DB manually. Refer to this KB: http://kb.vmware.com/kb/1001680
Regards,
daniel
Hi,
it seems that something went wrong during the update. We see timeouts during VMotion operations as well as timeouts when reconfiguring the cluster (e.g. it is not possible to reconfigure HA on some ESX boxes).
Did anybody face these problems or know where to look?
Regards,
daniel
Dear raoulst,
you saved my life!! Restarting the services worked for me.
Cheers,
daniel
If we restart the services that raoulst mentioned, will that disconnect the users from the VMs on that host? Just wondering if this needs to be done during off hours.
If you restart your VirtualCenter service, do your VM status lights stop working again? I have an open ticket with support, but so far we have been unable to permanently fix the status light issue. Any time I reboot a host or restart the VC Server service, all my VMs go back to white. I then have to ssh to each host and issue the command "service vmware-vpxa restart".
I am wondering how many other people have this issue, and how they are dealing with it in a permanent way.
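Until a permanent fix turns up, a small loop along these lines could take the manual work out of the per-host restart. The host names are placeholders for your own inventory; the sketch only prints the ssh commands, so you can review them before piping the output to `sh`:

```shell
# Hypothetical inventory -- replace with your own ESX host names.
HOSTS="esx01 esx02 esx03"

# Print one restart command per host. Pipe the output to `sh` to run
# them, or execute the lines by hand during a maintenance window.
# Restarting vpxa only bounces the VC management agent on the host;
# it does not touch the running VMs themselves.
for h in $HOSTS; do
    printf 'ssh root@%s "service vmware-vpxa restart"\n' "$h"
done
```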
Hi, I had the same problem with a fresh 3.0.2 / VC 2.0.2 install (not being able to enable HA on a cluster). It turned out that both hosts gave an error when being added to the HA cluster:
multipleshorthostnames: Multiple short hostnames configured for host: name and Name
If you look at the config of the host, under DNS and Routing, the "host identification" name was spelled as "Name". When I changed this to "name" (all lower case) I could successfully add them to the HA cluster!
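If you want to check for this kind of mismatch from the service console before adding a host to the cluster, a quick comparison like the following shows whether the reported short hostname and the one mapped in /etc/hosts differ only in case. This is a rough sketch: the awk parsing assumes the short name is the last field on its /etc/hosts line, which may not hold for every layout.

```shell
# Short hostname as the host itself reports it.
short=$(hostname -s)

# Short hostname as mapped in /etc/hosts (case-insensitive lookup).
# Assumption: the short name is the last field on the matching line.
mapped=$(grep -iw "$short" /etc/hosts | awk '{print $NF}' | head -n 1)

# Flag entries that match the name but differ in capitalisation,
# e.g. "name" vs "Name" -- the trigger for multipleshorthostnames.
if [ -n "$mapped" ] && [ "$short" != "$mapped" ]; then
    echo "Case mismatch: hostname says '$short', /etc/hosts says '$mapped'"
fi
```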
I am getting this status problem as well.
After restarting the service on each host, the problem comes back even if I do not restart VC or the ESX hosts; it just starts again after a few days.
And while the status is missing, none of the alarms work; for example, if I have a VM at 100% CPU for hours, it will still show as green in VC.
This is a critical problem, hope VMware resolves it soon.
Same issue here.
Haven't looked at it though, because we have a much bigger problem with HA
Gabrie
While I have tested VMotion, I did not test HA, as that requires failing a host. Does the upgrade cause problems with HA?
Sam
Hi there,
problems we faced after upgrading to VC 2.0.2:
- alarm problem and HA problem
I have two SRs open and VMware support suggested upgrading to ESX 3.0.2. In short, they did
service mgmt-vmware restart
to help me. I did this already, but the problems came back.
So the upgrade is under way. The first box of 8 came up as 3.0.2 without any problems. Let's see if this solves the HA and alarm problems in the end.
I'll post any news once the upgrade to 3.0.2 is finished for all hosts, if you are interested...
regards,
daniel
I just upgraded. Had the usual problems. I disabled HA, but still had my hosts come up in a disconnected state. I had to run the service restart commands.
I was not able to upgrade my license server. It kept telling me that I had an invalid license file. What's odd about this is that I just added more licenses Friday. I had no problem re-reading the license file on Friday. The file looks fine when I look at it.
adam
same problems with license files. The new server didn't accept my old license file.
I forgot about that because of my more 'interesting' HA and alarm problems.
For the license server upgrade I had to re-download my license files from the website and paste them manually into a single file.
Afterwards the license server accepted the file.
Is the only problem with HA that it cannot be enabled, or can it be enabled but VMs do not fail over to another host?
I am not having a problem enabling HA, but now I am wondering if the VMs will really fail over in the case of a host failure. Did anyone test failover?
Thanks,
Sam
You can test HA failover by executing the shutdown command at the console. Just shut down the host and you will see that the VMs are brought up on other hosts. Not a shutdown through VC!!!
I normally use HP's iLO board and really cut power from the server to test. Only very rarely does the ESX OS get disk problems because of unclosed files, and even if it does, I have a new install running in 15 minutes.
I use VMs that I created for this, because you don't want to shut down production guest OSes like this.
Gabrie
You should be able to do a shutdown from VC; you would just have to manually restart the host. As long as you don't put the host into maintenance mode first, it should act just like a sudden outage, without the open-files problem.
I haven't tried this on 3.0.2 but I have done it on 3.0.1 in the past.
I have tested HA in the past, but now I have many production VMs on all hosts and this is why I can not test it.
I was hoping that someone had already tested it, to avoid testing in the current environment. That would require migrating all production VMs off a host, adding a few test VMs there, and doing all of it on a weekend. Yes, that's how it works here.
Simulating a failing host is even more easily done by shutting the console port(s) from the switch. In this case you are not dependent on which process is killed at what time, the HA cluster will instantly discover that one host is isolated, and failover will (erm should) commence. Simply "no shut" the port(s) and the host is "up" again instantly. Very easy for testing HA.
A good way to keep an eye on HA is monitor the logs in /opt/LGTOaam512/log/vmware_HOSTNAME.log on the hosts in the cluster (this file only exists if HA has been enabled at least once in the cluster)
During HA fail over testing watch for notifications such as:
===================================
Info FT Tue Apr 24 14:24:49 WST 2007
By: isolationScript on Node: HOSTNAMEA
MESSAGE: user HOSTNAMEA Automated Availability Manager Isolated, Notifying VPXA
===================================
Info NODE Tue Apr 24 14:24:49 2007
By: FT/Agent on Node: HOSTNAMEA
MESSAGE: Agent on HOSTNAMEB has stopped
===================================
Error NODE Tue Apr 24 15:22:11 2007
By: FT/Agent on Node: HOSTNAMEA
MESSAGE: Agent on HOSTNAMEB has failed. Ping Node results: XXX.YYY.ZZZ.NNN=ALIVE
Or better still, watch 'tail -f vmware_HOSTNAME.log -n 30' to keep an eye on it in real time.
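The log excerpts above can also be filtered down to just the interesting HA events. The snippet below writes a few sample lines (modelled on the excerpts in this post) to a temp file so the filter can be shown without an ESX host; on a live host you would point the same grep, usually via tail -f, at /opt/LGTOaam512/log/vmware_HOSTNAME.log instead:

```shell
# Fake a few AAM log lines (modelled on the excerpts above) so the
# filter can be demonstrated anywhere.
log=$(mktemp)
cat > "$log" <<'EOF'
MESSAGE: user HOSTNAMEA Automated Availability Manager Isolated, Notifying VPXA
MESSAGE: Agent on HOSTNAMEB has stopped
MESSAGE: Agent on HOSTNAMEB has failed. Ping Node results: XXX.YYY.ZZZ.NNN=ALIVE
By: FT/Agent on Node: HOSTNAMEA
EOF

# On a live host you would follow the real log instead:
#   tail -n 30 -f /opt/LGTOaam512/log/vmware_HOSTNAME.log | grep -E 'Isolated|has stopped|has failed'
grep -E 'Isolated|has stopped|has failed' "$log"
```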