ESX 3.5, VC 2.5, 5 identical AMD-based servers (Slammer, Ramper, Raptie, Blipper1, Blipper2)
I did the following:
1. Migrated ~15 VMs from Slammer to Blipper1.
2. Verified that the VMs correctly showed up under Blipper1.
3. Put Slammer into Maintenance mode.
4. Removed Slammer from the cluster.
Now every VM gives me a "The request refers to an object that no longer exists or never existed." error when I try to start it. However, I can still edit all of the VMs' settings, and when I SSH into the server everything looks fine on the iSCSI volume.
I even created a new VM and pointed its disk at one of the offending VMs' hard disks, and it cranked right up. I then scp'ed the vmx and vmxf files from both the fuxored VM and the newly created one that worked and compared them with WinMerge... nothing really stood out, but then again I really don't know enough about the GUIDs and UUIDs to tell, so I figure it has something to do with those.
Attached are the two vmx files
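In case it helps, here's roughly how I narrowed the comparison down to just the identity fields instead of eyeballing the whole files. The fragments below are made up for illustration; the real files live in the VM folders on the datastore:

```shell
# Toy vmx fragments standing in for the real files under /vmfs/volumes/
cat > /tmp/broken.vmx <<'EOF'
uuid.bios = "56 4d 11 11"
uuid.location = "56 4d 11 11"
EOF
cat > /tmp/working.vmx <<'EOF'
uuid.bios = "56 4d 22 22"
uuid.location = "56 4d 22 22"
EOF

# Pull out just the identity lines and compare those
grep '^uuid\.' /tmp/broken.vmx > /tmp/broken.ids
grep '^uuid\.' /tmp/working.vmx > /tmp/working.ids
if diff /tmp/broken.ids /tmp/working.ids > /dev/null; then
    echo "identity fields match"
else
    echo "identity fields differ"
fi
```

The UUIDs will of course differ between any two VMs; the question is whether anything else odd shows up once the noise is filtered out.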
Two dumb questions.
Did you try more than once to start them? I've gotten that error before, and just clicked Power on again and it started fine.
If you were able to create a new VM using the same disks, isn't that good enough? I mean, besides just wanting to understand what happened in the first place.
Shoot man, those aren't dumb questions at all... especially after the kind of crap I've seen AND been known to do.
But yeah, I tried to start them more than once. I also closed VC all the way and went back in, and made a change to the settings, which saved just fine, but still no go.
There are many reasons why just creating a new VM doesn't work, for me at least... here are a few:
1. DRS is based on migrations, so the very essence of VMotion is defeated... say a server went down and it migrated the running VMs, but I or someone else had to go in and create a new VM for them to come back online.
2. The production system I'm designing is going to have upwards of a thousand VMs, which would be impossible to manage if someone had to recreate a VM every time DRS migrated one.
3. It should work. VMware is the leader in virtualization, and these features need to work flawlessly for them to retain that... Microsoft is coming full steam and with both barrels, and VMware's got no chance if things like VMotion don't work.
4. I want to know what the problem is.
Anyway, thanks for the response Wimo, this community is certainly one of the best aspects of VMware.
Okay - From your initial description it sounded like this was a one-shot deal, you were just trying to move some VMs to a new host.
It does kind of sound like you are mixing up DRS and HA - DRS just recommends VMotions (or does them automatically). HA is for restarting a VM on another host if the first host goes down (or just loses its Service Console connection, which can be a gotcha and is a good reason to have redundant SC network connections).
DRS and VMotion migrate VMs while they are up and running. Again going back to your initial description, that didn't sound like what you were doing.
I have received this error many times. VMware support was unable to find the cause. The temporary fix for me has been to simply go into the cluster settings and re-apply them. Then my VMs start right up without that error. Hope it works for you!
If you shut down the VMs, then you're doing a cold migrate, and DRS/VMotion is not involved. VMotion is only used in live moves, and HA will use DRS to figure out the best server to start the VMs on, but a simple power-on doesn't have that functionality, as far as I know. Try another power-on, let it fail, and if it does, attach the vmware.log file that is located in the VM folder to your post, so we can take a look and see if we can spot any problems.
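If you're hunting for it on the service console, the log sits next to the vmx in the VM's folder. The datastore and VM names below are placeholders, not your actual paths:

```shell
# Per-VM logs live in the VM's directory on the datastore
ls -l /vmfs/volumes/datastore1/myvm/vmware*.log

# The tail usually covers the most recent (failed) power-on attempt
tail -n 100 /vmfs/volumes/datastore1/myvm/vmware.log
```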
Well, I reapplied the cluster settings and the problem still exists. Funny thing is that the log files are no longer updating; the last update was from when I migrated them to the new host... ugh, this is starting to get frustrating.
WOW!!!! Well I don't think ANYONE here is going to want to hear this!!
Good news: it's fixed, and I have a clue as to what caused the problem.
Bad news: as if ripped clean from page 1 of the Windows ULTIMATE TROUBLESHOOTING BIBLE... yep, you guessed it, a simple reboot of the migrated-to host solved the problem.
Before the reboot, the hostd.log file showed the VMs that were failing to boot as unregistered. I looked around and all I could find was an old ESX 2.1 article on registering VMs, so I thought I should try a reboot.
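For anyone hitting this later, on ESX 3.x the service-console equivalents of registering look something like this. The path is a placeholder, and I never actually ran these against the broken VMs, so I can't swear they would have cleared this exact state:

```shell
# List the VMs that hostd currently has registered
vmware-cmd -l

# If a VM is missing from that list, re-register its .vmx by path
# (the datastore/VM path below is a placeholder for your own)
vmware-cmd -s register /vmfs/volumes/datastore1/myvm/myvm.vmx
```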
I DO NOT LIKE THAT AT ALL!
What service could I have restarted? Or what else could I have done to fix this... REBOOTING IS NO FIX, and it's probably the ONE MAJOR distinguishing factor between UNIX and Windows.
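For what it's worth, on ESX 3.x the management agent (hostd) can be restarted on its own from the service console, which should make it re-read its registered-VM state without bouncing the whole host. I can't promise it would have cleared this exact problem, so treat it as something to try before a reboot, not a guaranteed fix:

```shell
# Restart just the ESX 3.x management agent (hostd), not the host itself.
# This bounces the management plane only; check VMware's docs for caveats
# before running it on a host with VMs powered on.
service mgmt-vmware restart
```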