All of our Red Hat 3 Update 8 VMs show up in VC with "(Invalid)" next to the hostnames and are grayed out.
We have 7 HP DL585s in an ESX 3 farm. We created 6 Red Hat 3 Update 8 VMs on them, which all came up fine, and VMware Tools installed fine. Within minutes of the tools being installed, none of these VMs could be reached via SSH or ping. In VC they all showed a grayed-out status with "(Invalid)" next to each VM.
A few hours later the server rebooted itself, and 4 of the 5 VMs came back up with a green status. During the reboot, HA/DRS moved all the other VMs off this server except these 5 Red Hat VMs.
We are running HP SIM agent 7.6.0.
The one host that was running all the Red Hat VMs had an ASM error, and since ASR was enabled from SIM, it rebooted the server. I have since disabled the ASR setting in SIM in case the server reboots again, so we can see the error.
Has anyone seen this before?
Had a similar problem under slightly different circumstances: I had a valid RedHat V4U5 VM running, went to move the VM to a new VLAN/portgroup, and the VM came up invalid in VI 2.0.0. The current recommendation from VMware is:
Please try the following:
1) From VI client, click on Configuration.
2) Make sure "Virtual Machine Startup/Shutdown" is disabled.
3) Restart hostd via /etc/init.d/mgmt-vmware restart.
on the ESX server host, which I guess you effectively did with the server reboot. Were you able to get all your VMs back out of the Invalid state? Did you find any other methods of restoring them to a normal state, or did you not take it any further?
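For reference, step 3 boils down to the following on the service console. This is a sketch for an ESX 3.x host (run as root); the process name check is an assumption based on the usual hostd process name on that release:

```shell
# On the ESX 3 service console, after disabling
# "Virtual Machine Startup/Shutdown" in the VI client:
/etc/init.d/mgmt-vmware restart

# Confirm the management agent (hostd) came back up before
# reconnecting with the VI client:
ps -ef | grep vmware-hostd | grep -v grep
```

Restarting mgmt-vmware forces hostd to re-read the VM inventory, which is often enough to clear a stale "(Invalid)" entry without rebooting the whole host.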
First of all, as I mentioned in this thread:
http://www.vmware.com/community/thread.jspa?messageID=537564
Try upgrading the HP Management Agents to 7.7.0.
I've seen a lot of errors with 7.6.0.
What about the logs?
Was the reboot caused by ASR?
I won't upgrade until it is certified by VMware. I know that the ASR issues were more than likely caused by this agent; however, it couldn't have caused the "(Invalid)" VM issues.
Agreed. We have had no issues with 7.6.
Hello,
If HPASM rebooted the server, it could have been a legitimate issue, or it could have been spurious. If you run 'service hpasm reconfigure' you can disable all the storage agents, which are the general cause of most hpasm issues.
You can also run 'hplog -v' to determine why the server rebooted. If it was an ASR (Automatic Server Recovery) reset, check to see why. If the hpasm storage agents are disabled and this continues, then you need to capture the console somehow, as the reason will be there. You can use 'expect' to connect to the iLO console and every now and then send a byte to the server to keep the session alive, capturing all the output to a file. If you do this, however, you cannot access the iLO until you stop the 'expect' script. You could then tail the log file and see why it rebooted.
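A minimal sketch of that expect approach, assuming SSH access to the iLO and that its CLI offers a 'vsp' (virtual serial port) console command; the hostname, username, password, and log path below are placeholders you would adjust for your environment:

```shell
#!/usr/bin/expect -f
# Hypothetical sketch: hold the iLO text console open and log everything,
# so the reason for the next unexpected reboot is captured to a file.
set timeout -1
log_file -a /var/log/ilo-console.log   ;# append all console output here

spawn ssh Administrator@ILO_HOST        ;# ILO_HOST is a placeholder
expect "password:"
send "YOURPASSWORD\r"                   ;# placeholder credential
expect -re ">"
send "vsp\r"                            ;# open the virtual serial console

# Send a byte every few minutes so the session stays alive.
while {1} {
    sleep 300
    send "\r"
}
```

While this runs, the iLO console is tied up; stop the script to get it back, then review the captured log (e.g. with tail) alongside the 'hplog -v' output.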
You could have a fan issue or something subtle.
Best regards,
Edward