Here is the state of the Virtual Machine.
I believe that the issue is related to the vCenter server not being able to be contacted, but I can't get the vCenter server to attach to a network.
[Attachment: VM Missing BackingDevice.png]
We have the exact same problem. Has there been a solution found for this yet?
There is no resolution as of vSphere 4.1.
The issue is that if you run vCenter as a VM within the cluster, even with automatic startup enabled at the highest priority, the vSphere hosts finish booting before the vCenter guest VM does. Without vCenter, the hosts lack the connection needed to bring up the dvSwitch and its port groups, so the guest VMs (including the vCenter server itself) are left without a backing device.
My favorite part is the roughly 45-minute recovery process:
1) Search the cluster to find which physical host the vCenter guest VM was last running on
2) Remove one of that vSphere host's physical NICs from its invalid dvSwitch0 device
3) Create a new vSwitch0
4) Add the unassigned NIC to this switch
5) Create the port group with the VLAN tagging necessary for the vCenter server
6) Boot the vCenter server and attach it to a valid switch backing device
7) Reboot all the vSphere hosts (an easier process than getting them to reconnect to the vCenter in their current orphaned-backing-device state)
8) Go into every VM that was set to automatically boot, manually edit it, and connect it to the correct backing device / port group under the dvSwitch (I have about 12 VLANs in the dvSwitch collection, so I have to reference a table of the original backing port group devices to make sure I put each VM back on the correct VLAN)
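For anyone hitting this for the first time, steps 1 through 5 can be sketched from the ESXi console roughly as below. This is a sketch under assumptions, not a definitive procedure: the NIC name (vmnic1), dvPort ID (256), port group name (Mgmt-VLAN10), and VLAN ID (10) are all placeholders you must replace with your host's actual values from `esxcfg-vswitch -l`.

```shell
# Step 1: on each host's console, check whether it last ran the vCenter VM
vim-cmd vmsvc/getallvms | grep -i vcenter

# Step 2: unlink a physical NIC from the orphaned dvSwitch
# (-Q unlinks an uplink; -V gives its dvPort ID, shown by esxcfg-vswitch -l)
esxcfg-vswitch -l                                # note the vmnic and dvPort IDs
esxcfg-vswitch -Q vmnic1 -V 256 dvSwitch0        # placeholder NIC / dvPort ID

# Steps 3-5: build a standard vSwitch with a VLAN-tagged port group
esxcfg-vswitch -a vSwitch0                       # create the standard switch
esxcfg-vswitch -L vmnic1 vSwitch0                # attach the freed NIC
esxcfg-vswitch -A "Mgmt-VLAN10" vSwitch0         # add the port group (placeholder name)
esxcfg-vswitch -v 10 -p "Mgmt-VLAN10" vSwitch0   # tag it with VLAN 10 (placeholder ID)
```

After that, the vCenter VM can be edited (step 6) to attach its vNIC to the new standard port group.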
I would love to have someone from VMware help me find a resolution to this.
When I design solutions for my customers (presales), I now specify the vCenter server as a separate physical server outside the cluster... like the old-school 2.x-days recommendation.
dvSwitches are very nice and offer some cool features, but there are still issues to be worked out.
Why they can't change the code to fall back to a "last saved version" configuration file of the defined port groups, stored on each vSphere host, I don't know... maybe there are other reasons.
2) Automatic startup settings for VMs are set per server. I have a few very important servers that I want to start up no matter what physical server they have been moved to. Examples: domain controllers, the vCenter server, the mail server, etc. How do I get a cluster-wide boot priority setting so that these VMs start up no matter which physical host they were last VMotioned to?
As a workaround, I would use a DRS affinity rule (VMs to host) to keep these important VMs on a specific host, then set the right boot priorities on that host.
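With the affinity rule pinning the VM to one host, the per-host autostart entry can then be set from that host's console with `vim-cmd`. A rough sketch under assumptions: the VM ID (42) is a placeholder you must look up first, and the delay/priority values are examples, not recommendations.

```shell
# Find the VM ID of the vCenter VM on this host (placeholder: 42)
vim-cmd vmsvc/getallvms | grep -i vcenter

# Turn on the host's autostart manager
vim-cmd hostsvc/autostartmanager/enable_autostart true

# Give the VM the first autostart slot.
# Args: VMid StartAction StartDelay StartOrder StopAction StopDelay WaitForHeartbeat
vim-cmd hostsvc/autostartmanager/update_autostartentry 42 powerOn 120 1 guestShutdown 120 systemDefault
```

This only configures the one host, which is exactly why the affinity-rule workaround is needed: there is no cluster-wide autostart setting, so the entry has to live on the host the VM will actually boot on.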
What port binding option did you select for your dvSwitch port groups? Make sure you understand the difference between static, dynamic, and ephemeral bindings. I'm a big fan of using ephemeral bindings in dvSwitches: with ephemeral binding the host itself creates the port on demand, so a VM can still be connected to the port group while vCenter is down.