Hello all. We are currently running a vCenter 5.5U2 instance after upgrading from 5.1.
vMotion jobs, whether fully automated or manual, are failing with the following error: pbm.fault.pbmFault.summary
The only fixes I have been able to find online refer to this same error, but in the vCenter Appliance. We are running vCenter on Windows Server 2008 R2.
Has anyone seen this before and/or have a fix on hand? Thanks in advance.
Have you checked if Profile-Driven Storage is running?
Log in to the web client, check the vCenter service health, and fix anything unusual.
Have you checked if the inventory service is up and running? If it isn't, bring it up and then try to migrate the machine.
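On a Windows vCenter, one way to check these services is from an elevated command prompt. A sketch follows; the service names "vimQueryService" (Inventory Service) and "vimPBSM" (Profile-Driven Storage) are assumptions based on a default 5.5 install, so confirm them in services.msc first:

```shell
rem Assumed default service names on a Windows vCenter 5.5 install;
rem verify yours first with: sc query state= all | findstr /i "vim"
sc query "vimQueryService"
sc query "vimPBSM"

rem If the Inventory Service is stopped, bring it back up
net stop "vimQueryService"
net start "vimQueryService"
```

If either query reports a state other than RUNNING, restarting that service before retrying the migration is a reasonable first step.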
Hi there, thanks for the feedback. That was the first thing I checked. The service was up, but I restarted it anyway; the issue still persisted after it came back up. I have also checked the vMotion port configuration on the hosts and the DRS settings in the cluster properties, which are all in place.
Thanks Fritzbrauz. We are using the Windows version of vCenter. Regardless, I came across that same post and tried restarting the service on the Windows box anyway. No luck, however.
I would suggest you open a ticket with VMware support. Meanwhile, I will look into it and let you know if I can find something.
Awesome thanks Abhilashhb
Hello again. Actually, there is an alert on "VMWare VCenter Storage Monitoring Service" in vCenter, with the message "Provider sync failed". I am unable to find that service on the vCenter VM. Did the name perhaps change to something else, the way SSO changed going from 5.1 to 5.5? All services except for the Orchestrator and Update Manager services appear to be running fine.
When trying to check the storage cluster's properties, the following error pops up:
java.util.NoSuchElementException
That was indeed the issue. Many thanks! vMotion is sorted after restarting Profile-Driven Storage, even though the service appeared to be running fine.
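For anyone who lands here later, the restart on a Windows vCenter 5.5 box can be done from an elevated command prompt. This is a sketch only; "vimPBSM" is an assumed default name for the Profile-Driven Storage service, so verify it with services.msc or the query below before running the restart:

```shell
rem Assumed service name; confirm on your install first:
sc query state= all | findstr /i "PBSM"

rem Restart the Profile-Driven Storage service
net stop "vimPBSM"
net start "vimPBSM"
```

Note that a service can report RUNNING while its storage provider sync is broken, which is why a restart helped here despite the service appearing healthy.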
Glad to know the issue is resolved.