Hi!
At my workplace, we have an MS-clustered VC 2.5 running on two physical servers.
I've been trying to install VMware Update Manager (VUM) on a third server (a VM), with an MS SQL DB.
The MS SQL database is on another two clustered servers, and the VUM machine has the correct ODBC configuration.
I installed VUM while the active VC node was node A, and all went well: no ports were changed, and the VC 2.5 clients successfully downloaded and enabled the plug-in.
BUT - if I fail over so that the active VC node is node B,
then when enabling the VUM plug-in I receive an error saying that either the VC or the database could not be reached, and therefore the plug-in could not be enabled or work properly.
The strange thing is that if I install the VUM server while VC 2.5 node B is the active one, the same error repeats itself when A is active, and the plug-in works like a charm when VC 2.5 is on node B.
When prompted for VC credentials during the VUM install, I entered the cluster service user and the IP address in the VC cluster group.
Can anyone help me solve this problem? I fear it may be a problem with the VUM certificates.
Please help!
Are you pointing VUM to the virtual name of the cluster? If not, it's always going to fail when it points to node A while node B is active, and vice versa.
If you find this information helpful or correct, please consider awarding points.
I point it to the clustered VC IP.
Meaning I have a network name called "vc" that is in the VC group in the cluster, and I point VUM to the IP of that name.
I've tried pointing it just to "vc", but I get the same problem, and I assumed an IP address is better, since the default value in the field asking for your VC server is an IP address...
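One way to narrow down whether this is a name-resolution or reachability problem is to check, from the VUM machine, that the cluster's virtual name and IP answer on the relevant ports in both failover states. Below is a minimal sketch: the host names `vc` and `sqlcluster` are placeholders for your cluster's virtual names, and the ports (443/902 for VirtualCenter, 1433 for a default SQL instance) are the usual defaults, which your environment may have changed.

```python
import socket

def reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical names -- substitute your VC cluster virtual name/IP
# and your SQL cluster virtual name before running.
checks = [
    ("vc", 443, "VC virtual name, HTTPS"),
    ("vc", 902, "VC virtual name, host agent port"),
    ("sqlcluster", 1433, "SQL cluster virtual name, default instance"),
]

if __name__ == "__main__":
    for host, port, desc in checks:
        status = "OK" if reachable(host, port) else "UNREACHABLE"
        print(f"{host}:{port} ({desc}): {status}")
```

Running this once with node A active and once with node B active would show whether the failure tracks the failover state, which would point at networking rather than at the VUM certificates.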
Bump - can anyone help, please?
Not an answer, I'm afraid, but I'm getting exactly the same issue. Did you manage to resolve it?
Any help would be appreciated...
Warren