VMware Cloud Community
juchestyle
Commander

Moving Virtual Center from VM back to Physical

Hey guys,

While I think this is straightforward, I would like to confirm it and, of course, hand out points for worthwhile contributions.

Here is what we've got: VC 2.5 running in a VM that connects to a remote SQL instance. I have a physical computer ready for the VC install; the license file has been moved over, but nothing is installed yet.

What we want to do: We think the right approach is to stop the VM's connection to the remote SQL database, turn off the VM, then install VC 2.5 on the physical machine and point it at the same remote SQL instance: next, next, finish.
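For concreteness, here is roughly what we picture on the Windows side. This is only a sketch: vpxd is the VirtualCenter service name, and SQLHOST / VCDB are placeholders for our real SQL server and database:

    rem On the old VC VM: stop VirtualCenter so nothing writes to the DB
    net stop vpxd
    shutdown /s /t 0

    rem On the new physical box, before installing VC 2.5:
    rem confirm the remote SQL instance is reachable (Windows auth shown; adjust)
    osql -E -S SQLHOST -d VCDB -Q "SELECT @@SERVERNAME"

Then run the VC 2.5 installer and point its DSN at the same database.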

Anything we should be wary of? Is it really this simple and straightforward? Any gotchas?

Respectfully,

Matthew

Kaizen!
26 Replies
samugi
Enthusiast

Sorry, missed page 2 :) and only read half the posts.

juchestyle
Commander

Hey Sam,

Can you elaborate on this a little more? I like the sound of it.

Respectfully,

Matthew

Kaizen!
samugi
Enthusiast

The unique identifier is created automatically at installation. If you reinstall VC and point it at the same DB, the new identifier will likely be different, and that will cause problems because the VC installation references this identifier within the database.

If this is your problem, boot your original VC VM and record the unique identifier. Shut the VM down and boot your physical VC server, then see if its unique identifier is different. If it is, change it to the value the VM used.
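For what it's worth, VC 2.5 also shows this value in the VI Client under Administration > VirtualCenter Management Server Configuration > Runtime Settings. If you want to peek at the database directly, something like the query below may work; a sketch only, since the table and key names (VPX_PARAMETER, 'instance.id') are my assumption about the schema, so verify against your own DB first:

    rem Run from a Windows box against the VC database (SQL auth shown; adjust)
    rem ASSUMPTION: the ID is stored in VPX_PARAMETER under a name like 'instance.id'
    osql -U vcuser -P vcpass -S SQLHOST -d VCDB -Q "SELECT NAME, VALUE FROM VPX_PARAMETER WHERE NAME LIKE '%instance%'"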

juchestyle
Commander

Hey Parker,

As part of the troubleshooting with VMware, we did try a brand new (clean) DB and a fresh install of VC - no go. We have something special here.

Respectfully,

Matthew

Kaizen!
mittim12
Immortal

Hey Matthew,

I was just curious whether you were able to resolve this issue and, if so, what ended up being the problem.

juchestyle
Commander

Hey all,

I wanted to share a little of our experience on this issue. It turned out to be a big one in the following ways: we started with tech support in Canada, then moved to Palo Alto, then to my favorite place, India! We started working on this "simple" migration around 10 a.m. on Wednesday and didn't finish cleaning up our mess until around 1:30 a.m. on Thursday. Yup, if my math is right, that is 15.5 hours of work for something we thought was going to be a simple half-hour deal-e-o.

After spending one VMware tech's whole shift and then some, we were moved with the sun to Palo Alto, where we introduce our next character and pals!

First off, we were most impressed with the Palo Alto crew! We were working with a rock star named Damien! I don't know what he looked like, but based on some of the cool stuff he did, I imagine he rides a white horse to work every day and stands seven feet tall! He and his crew pulled magic tricks out of their ten-gallon hats left and right, and we still got slapped around by this unknown virtual force. We even discovered that one of the authentication files had a different checksum than on our working ESX hosts. Yup, we grabbed a good copy of the file, put it in place, and restarted the service - no go.

We were sorry to see our Californians go, but alas, the sun was setting on our empire and rising in India. And while we had perceptions of our heroes in our heads, it was tough for India to take over. They aren't nearly as charismatic or customer-friendly as our culture has become. In fact, I might embarrassingly add that I kind of looked down on our Indian counterparts at the beginning of the call. However, it seems they had a working model of what might have been happening, which I will elaborate on shortly. They don't talk to you a lot; rather, they seem to walk away with a problem, consult an oracle, and then come back and tell you very matter-of-factly what the issue probably is, in very technical terms. On some level you may think they look down on us for not being as brilliant as they are, but we have our street smarts, and look where that has gotten us.

Regressing back to our main topic, I have to ask what is important. I don't care if a tech treats me like crap if I at least learn something and, above all, my problem gets solved! Since no one really solved the problem without requiring us to reboot our ESX servers, no one gets correct points there. I do want to give credit to the India team, who have a good working idea of what the problem is. Palo Alto and Damien get mega points for their impressive display. Three of us in-house were blown away by their efforts and felt VERY good in their hands. They tried many, many ways around the evil that infected our servers, so their knowledge set was inspiring.

Anyway, we are still on the old VirtualCenter. What we think happened is this: when we moved our hosts to the new VC, hostd somehow lost the ability to communicate with the host's internal database (vmdb). What I didn't know much about is that every ESX host has its own internal database, which makes sense. We think the VC agent (vpxa) created a new connection between them, and once that new connection manifested itself, it created a perfect firewall of isolation on the ESX boxes; everything stopped working except the VMs! No VMotion, no control, no monitoring; PuTTY was the only thing that seemed to work.
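For anyone following along, you can at least see whether those two agents are alive from the service console (the same place PuTTY drops you), using the standard ESX 3.x tools:

    # Check that the management agent (hostd) and the VC agent (vpxa) are running
    ps auxwww | grep -E 'hostd|vpxa' | grep -v grep
    # The management agent also answers through its init script
    service mgmt-vmware status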

Error message in logs:

throw vim fault not authenticated

same path directory - different name

We even tried starting with a brand new VC database - no go. We tried uninstalling VC and reinstalling - no go. Restarting services - no go. Nothing worked except a reboot of the ESX boxes.
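"Restarting services" here means bouncing the management agents from each host's service console, along these lines on ESX 3.x:

    # Restart the host management agent (hostd)
    service mgmt-vmware restart
    # Restart the VirtualCenter agent (vpxa)
    service vmware-vpxa restart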

Solution: Reboot the boxes. Everything came back to the playground to play nicely after the rain.

Good times!

Matthew

P.S. Anyone want to take a guess at how much time will pass before we are willing to tackle this migration again?

Kaizen!
juchestyle
Commander

Hey all, the saga continues. Here is what we are currently working on. I think we have a better-than-average approach, but we are still running into some issues.

1. Turn off HA on all clusters

2. Remove all hosts from old VC

3. Stop services on old VC

4. Stop services on new VC

5. Copy the database to the new SQL instance (remote versus the lite version); a rough sketch of these commands follows the list

6. Change the IP settings in the new VC to reflect the new IP addresses - very important!

7. Start the services on new VC

8. Good luck - try to bring in your hosts
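Here is that sketch of steps 3 through 7 on the Windows side. A sketch only: vpxd is the VC 2.5 service name, but OLDSQL, NEWSQL, VCDB, and the backup path are placeholders, and the RESTORE may need WITH MOVE if file paths differ between the instances:

    rem 3/4. Stop VirtualCenter on the old and the new server
    net stop vpxd

    rem 5. Copy the database: back it up on the old instance...
    osql -E -S OLDSQL -Q "BACKUP DATABASE VCDB TO DISK = 'C:\backup\vcdb.bak'"
    rem ...move the .bak file over, then restore it on the new instance
    osql -E -S NEWSQL -Q "RESTORE DATABASE VCDB FROM DISK = 'C:\backup\vcdb.bak'"

    rem 7. Start the services on the new VC
    net start vpxd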

Will update with more info if I find it.

Respectfully,

Matthew

Kaizen!