ArrowSIVAC
Enthusiast

vSphere dVSwitch Restart Missing Backing Devices

I have a lab environment with fourteen servers. All the hosts are in a single vSphere 4.1 cluster under a virtualized vCenter server. There is a single dVSwitch for the cluster, with each physical server having both of its ports as externally managed adapters in the dVSwitch.

The issue is that when we lose power and the servers power-cycle, the cluster does not restart successfully.

Issues

1) The VMs all start up as defined in the automatic startup routine, but show the error: "Invalid Backing" -> "This host does not have any virtual machine networks, or you don't have the permission to access them." The console for the VMware hosts is working, so I know the dVSwitch is up and functional, since the console IP rides over these links as well. But none of the VMs get attached to the dVSwitch for IP communication (including the vCenter server). How do I get the dVSwitch to attach the physical NICs to the environment when the vCenter server is not up? Is there some kind of dependency on the vCenter server being available for the "backing device" to work?

2) Automatic startup settings for VMs are set per server. I have a few very important servers that I want to start up no matter which physical server they have been moved to, for example: domain controllers, the vCenter server, the mail server, etc. How do I get a cluster-wide setting for boot priority, so that these servers start up regardless of the last physical host they were VMotioned to?
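For context, the per-host setting I mean is the host's AutoStartManager. Scripted against a single host it looks roughly like this pyVmomi-style sketch (untested; the host name, credentials, and VM lookup are placeholders for my lab):

```python
# Sketch: set automatic-startup priority for one VM on ONE specific host.
# This is exactly the problem -- the setting lives per host, not per cluster.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esx01.lab.local", user="root", pwd="***",  # placeholders
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

host = content.searchIndex.FindByDnsName(dnsName="esx01.lab.local", vmSearch=False)
vm = content.searchIndex.FindByDnsName(dnsName="vcenter01.lab.local", vmSearch=True)

# One autostart entry: power this VM on first after the host boots.
entry = vim.host.AutoStartManager.AutoPowerInfo(
    key=vm,
    startOrder=1,                      # lowest number boots first
    startDelay=-1,                     # -1 = use the host's default delay
    startAction="powerOn",
    waitForHeartbeat="systemDefault",
    stopDelay=-1,
    stopAction="systemDefault")

host.configManager.autoStartManager.ReconfigureAutostart(
    spec=vim.host.AutoStartManager.Config(powerInfo=[entry]))
Disconnect(si)
```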

What is the best way to implement dVSwitches while avoiding this chicken-and-egg restart issue?

ArrowSIVAC
Enthusiast

Here is the state of the Virtual Machine.

I believe the issue is related to the vCenter server not being reachable, but I can't get the vCenter server to attach to a network.

Emsdad
Contributor

We have the exact same problem.  Has there been a solution found for this yet?

Thanks!

ArrowSIVAC
Enthusiast

There is no resolution as of vSphere 4.1.x.

The issue is that if you run vCenter as a VM within the cluster, then even though it is set to start automatically at the highest priority, the vSphere hosts finish booting before the vCenter guest VM does. Without vCenter, the hosts lack the connection needed to bring up the dvSwitch and its port groups, so the guest VMs (including the vCenter server itself) have no backing device.

My favorite part is the roughly 45-minute recovery process (a rough scripted version of steps 2-6 follows the list):

1) Search the cluster to find the last physical server the vCenter guest VM was running on

2) Remove one of the physical NICs of that vSphere host from its invalid dvSwitch0 device

3) Create a new vSwitch0

4) Add the unassigned NIC to this switch

5) Create the port group with the VLAN tagging necessary for the vCenter server

6) Boot the vCenter server and attach it to a valid switch backing device

7) Reboot all the vSphere hosts (an easier process than getting them to reconnect to vCenter in their current orphaned backing-device state)

8) Go into every VM that was set to boot automatically, manually edit it, and connect it to the correct backing device / port group under the dvSwitch. (I have about 12 VLANs in the dvSwitch collection, so I have to reference a table of the original backing port group devices to make sure I put each VM back on the correct VLAN.)
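If it helps anyone, here is roughly what steps 2-6 look like when scripted straight against the orphaned host (a pyVmomi-style sketch, untested; the host name, NIC, VLAN ID, and VM name are placeholders for my lab, and it assumes the NIC from step 2 has already been freed from dvSwitch0):

```python
# Sketch: rebuild standard networking on a host whose only connectivity
# was an orphaned dvSwitch, then reattach and boot the vCenter VM.
# Connects straight to the ESXi host -- no vCenter needed.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esx07.lab.local", user="root", pwd="***",  # placeholders
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# A direct host connection exposes a single-host inventory.
host = content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
net = host.configManager.networkSystem

# Steps 3+4: new standard vSwitch0 bonded to the freed NIC (vmnic1 assumed).
net.AddVirtualSwitch(
    vswitchName="vSwitch0",
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic1"])))

# Step 5: tagged port group for the vCenter server (VLAN 101 assumed).
net.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
    name="recovery-vcenter", vlanId=101, vswitchName="vSwitch0",
    policy=vim.host.NetworkPolicy()))

# Step 6: point the vCenter VM's NIC at the new port group, then boot it.
vm = next(v for v in host.vm if v.name == "vcenter01")
nic = next(d for d in vm.config.hardware.device
           if isinstance(d, vim.vm.device.VirtualEthernetCard))
nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
    deviceName="recovery-vcenter")
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(
    deviceChange=[vim.vm.device.VirtualDeviceSpec(operation="edit", device=nic)]))
# (in real use, wait for the reconfigure task to finish before powering on)
vm.PowerOnVM_Task()
Disconnect(si)
```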

I would love to have someone from VMware help me find a resolution to this.

When I design solutions for my customers (presales), I now specify the vCenter server as a separate physical server outside the cluster... like the old-school 2.x-days recommendation.

dvSwitches are very nice and offer some cool features, but there are still issues to get worked out.

Why they can't change the code to revert to a "last saved version" configuration file of defined port groups saved on each vSphere host, I don't know... maybe there are other reasons.
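In the meantime, one workaround for the step-8 table problem would be to dump every VM NIC's dvPortgroup to a file while vCenter is healthy, something like this pyVmomi-style sketch (untested; names are placeholders for my lab):

```python
# Sketch: record which dvPortgroup each VM NIC uses while vCenter is up,
# so post-outage cleanup doesn't depend on a hand-maintained VLAN table.
import json
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter01.lab.local", user="administrator", pwd="***",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Map dvPortgroup keys to names for readable output.
pg_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
pg_names = {pg.key: pg.name for pg in pg_view.view}
pg_view.Destroy()

DVPortBacking = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo

mapping = {}
vm_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in vm_view.view:
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualEthernetCard) and \
                isinstance(dev.backing, DVPortBacking):
            key = dev.backing.port.portgroupKey
            mapping.setdefault(vm.name, []).append(pg_names.get(key, key))
vm_view.Destroy()

with open("vm-portgroup-map.json", "w") as f:
    json.dump(mapping, f, indent=2)
Disconnect(si)
```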

NinjaHideout
Enthusiast

ArrowSIVAC wrote:

...

2) Automatic startup settings for VMs are set per server. I have a few very important servers that I want to start up no matter which physical server they have been moved to, for example: domain controllers, the vCenter server, the mail server, etc. How do I get a cluster-wide setting for boot priority, so that these servers start up regardless of the last physical host they were VMotioned to?

As a workaround, I would use a DRS affinity rule (VMs to host) to keep these important VMs on a specific host, then set the right boot priorities on that host.
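Via the API, that would look roughly like the sketch below (pyVmomi-style, untested; the cluster path, host name, VM names, and group/rule names are placeholders, and note that VM-to-host rules require 4.1 or later):

```python
# Sketch: pin the critical VMs to one host with a DRS VM-to-host rule, so the
# per-host automatic-startup order configured on that host always applies.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter01.lab.local", user="administrator", pwd="***",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

cluster = content.searchIndex.FindByInventoryPath("LabDC/host/LabCluster")
boot_host = next(h for h in cluster.host if h.name == "esx01.lab.local")

critical = {"dc01", "vcenter01", "mail01"}
vm_view = content.viewManager.CreateContainerView(
    cluster, [vim.VirtualMachine], True)
vms = [v for v in vm_view.view if v.name in critical]
vm_view.Destroy()

spec = vim.cluster.ConfigSpecEx(
    groupSpec=[
        vim.cluster.GroupSpec(operation="add",
            info=vim.cluster.VmGroup(name="critical-vms", vm=vms)),
        vim.cluster.GroupSpec(operation="add",
            info=vim.cluster.HostGroup(name="boot-host", host=[boot_host]))],
    rulesSpec=[
        vim.cluster.RuleSpec(operation="add",
            info=vim.cluster.VmHostRuleInfo(
                name="critical-vms-on-boot-host",
                enabled=True,
                mandatory=False,       # "should" rule: HA/DRS may still move them
                vmGroupName="critical-vms",
                affineHostGroupName="boot-host"))])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
Disconnect(si)
```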

JPK871
Enthusiast

What port binding option did you select for your dvSwitch port groups? Make sure you understand the difference between static, dynamic, and ephemeral binding. I'm a big fan of using ephemeral binding in dvSwitches.
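For the curious, adding an ephemeral port group through the API looks something like the sketch below (pyVmomi-style, untested; the switch name and VLAN are placeholders). The point of ephemeral binding is that the host itself can assign the port, so a VM can be connected to the port group even while vCenter is down:

```python
# Sketch: add an ephemeral-binding port group to an existing dvSwitch.
# Hosts can bind ports in ephemeral groups on their own, which sidesteps
# the vCenter-down chicken-and-egg problem described above.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter01.lab.local", user="administrator", pwd="***",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

dvs_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(s for s in dvs_view.view if s.name == "dvSwitch0")
dvs_view.Destroy()

pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    name="mgmt-ephemeral",
    type="ephemeral",   # vs. "earlyBinding" (static) / "lateBinding" (dynamic)
    defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        vlan=vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
            vlanId=101, inherited=False)))
dvs.AddDVPortgroup_Task(spec=[pg_spec])
Disconnect(si)
```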
