Hello all,
I have a question about an intermittent VDM connection error, specifically the error message below:
"The desktop has failed to open. This desktop is currently not available. Please try connecting to this desktop again later or contact your administrator"
I intermittently get this error message when adding a new virtual desktop to the VDM Connection Broker server and entitling a new user to it.
One method I use to fix the connection issue is to log in locally to the virtual desktop, launch "services.msc" from Run, restart the VMware VDM Agent service, and log off.
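For reference, that same workaround can be scripted from a command prompt instead of going through services.msc. This is only a sketch: the service's display name is assumed to be "VMware VDM Agent" as described above, and the exact name should be confirmed first with `sc query`.

```bat
rem Confirm the agent's actual service name (display names and key names can differ)
sc query state= all | findstr /i "VDM"

rem Restart the agent by its display name (assumed here; adjust to match your system)
net stop "VMware VDM Agent"
net start "VMware VDM Agent"
```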
After doing this, I can access the virtual desktop.
I have tested with both the VDM Client 2.1 on a workstation and a Wyse V10L thin-client terminal running the latest BIOS. If the VDM Client cannot connect, the Wyse V10L will not connect either, and vice-versa.
This is my current VMware infrastructure:
(1) ESX 3.5.0 build 98103 host with all baseline updates
(1) Virtual Center 2.5.0 build 84767
(1) VDM Connection Broker 2.1.0 build 596
(1) Virtual desktop running WinXP Pro w/ VDM Agent 2.1 & VMware Tools - static IP
Are there any patches, fixes, or tweaks that I can apply to stabilize the connection broker server for a solid connection every time?
I read in another discussion that the issue may be DNS related, and someone suggested turning on DHCP but specifying the DNS server manually.
Could it be that the connection broker server's connection to the DNS server times out after a while, or that there is a refresh delay after adding a new virtual desktop to the connection broker?
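One quick way to test the DNS hypothesis is to resolve the new desktop's name from the broker, and the broker's name from the desktop, right after entitling the desktop. The hostnames below are placeholders for your own names:

```bat
rem Run on the connection broker: can it resolve the new desktop by name?
nslookup newdesktop01.example.local

rem Run on the virtual desktop: can it resolve and reach the broker?
nslookup vdmbroker.example.local
ping -n 2 vdmbroker.example.local
```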
Thx for reading...
If you want to somewhat rule out DNS, you can add a hosts file entry on your broker.
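For example, an entry for the new desktop could be appended to the broker's hosts file like this; the IP address and hostname are placeholders for your own values:

```bat
rem Append a hosts entry for the new desktop on the broker
rem (placeholder IP and name; the file lives under drivers\etc)
echo 192.168.1.50    newdesktop01.example.local>>%SystemRoot%\system32\drivers\etc\hosts
```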
Also, look in the registry at what name the VDM Agent is using to reach the broker, and make sure you can ping it by name.
I can tell you that what you are experiencing is probably a bug. We run into this issue all the time when deploying VMs through our broker. For some reason the VDM Agent isn't fully initialized. The workaround we use is to add a run-once command to the customization specifications to allow for a final reboot after everything is installed.
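As a concrete sketch of that workaround: the Run Once commands section of the VirtualCenter customization specification can schedule a delayed reboot after guest customization completes. The 60-second delay here is an arbitrary choice, not a value from the thread:

```bat
rem Run Once command added to the customization specification:
rem force a final reboot 60 seconds after customization finishes
shutdown -r -f -t 60
```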
This seems to fix the problem.
Hi Troy. I believe it is a bug.
I did not deploy the VM from the broker.
I actually went through the painful process of "Add New Virtual Machine" to make a new virtual desktop.
After installing the VDM Agent, I would then add the virtual desktop to the VDM Connection Broker, and entitle it to a user.
It is here that I am unable to connect to the newly added virtual desktop, unless I log in as local administrator, restart the VDM Agent service, log out, and restart the virtual desktop.
Then, when I attempt to reconnect via the Wyse V10L terminal or the VDM client from a workstation, it works.
Seems to me like the VDM Agent service needs to be restarted every so often in order to reconnect.
I agree, I don't think it matters how you deploy the VM; it just seems to require a final reboot in order for the agent to "wake up". We are planning on opening an SR for it, but since we have a workaround we haven't opened a case as of yet.
Good luck!
I'm interested in this, I've not seen the agent fail to start up properly until after a reboot myself, could one of you please do a couple of things to grab some debug logs for me? Before installation or on the template VM, add the following to the registry:
>[HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware VDM]
>"TraceEnabled"="true"
Then do what you normally do and leave the VM sitting in the error state for five minutes or so before attaching the resulting log file in c:\documents and settings\all users\application data\vmware\vdm\logs.
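If it's easier to script, the same trace setting can be applied from a command prompt rather than editing the registry by hand, using the key path and value given above:

```bat
rem Enable VDM trace logging (same key/value as the registry snippet above)
reg add "HKLM\SOFTWARE\VMware, Inc.\VMware VDM" /v TraceEnabled /t REG_SZ /d true /f
```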
I was just attempting to do that; however, the issue, for me at least, does not happen all the time. I was going to open an SR, but so far, after creating two pools, I have not been able to generate the error... I'll keep plugging away.
Well, after 4 deployments without adding a secondary reboot, I cannot get the error to come back. Here's what I think it may be: if you deploy off a template that does not have the VDM Agent already installed, then install it post-deployment and do not reboot, that may be when I was seeing the error.
So far, my tests with the agent installed on the template are good. I am able to connect to all of the vPCs without rebooting.
If you install the 2.0/2.1 agent on the VM itself, it's important to remember to log off after install - any sessions that were connected/disconnected before the product was installed are not reported back to the connection server with full information. This may not be your problem but it's worth mentioning for other users that may find this thread.
Here you go... I deployed two vPCs through VDM using a Persistent Pool; when complete, I cannot log in to them using the VDM client. I've checked that the VDM Agent service is running.
EDIT: never mind, this was probably caused because someone rebooted the VDM server...
Mike,
I do have an SR open (1125248481) on this, and from what I was told, it is a known issue. We have been deploying multiple pools and the issue is still present. The tech I spoke to said that, for now, the workaround is to add a final reboot after the machine is in its ready state and the VM is sitting at the login prompt. That works, but I would like to get a definitive answer as to whether this is a bug or not.
Thanks for the update, Troy. I've not yet seen a need to reboot the VM after a successful customization. There is a known issue with leftover registry settings for the bootrun service (http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1005991), but a reboot wouldn't resolve that, and the point I've already mentioned concerning a local user being left logged in obviously isn't the case from your previous posts. The only other thing I can think of off the top of my head is that the NIC is disconnected for some reason on that boot, but I'm sure you would have spotted something like that. Perhaps you could send me a direct message with the details of who you've spoken with at VMware who said it's a known issue, so I can get more info from them? I'll also keep my eyes open for those trace-level logs in the SR when you manage to get them, as I'd certainly like to get to the cause.
I am running into the same issue. I'll notice that after deployment the VDM Administrator web page will show all the VMs in the "Customizing" state until I go in and either reboot, or stop and restart the agent service. Then everything is fine. I am going to employ the "run once reboot" fix for now until VMware comes out with a fix for this.
On that note, how would I go about adding the "run once reboot" fix to my customization specification? I see a lot of people doing it, but no one has posted how they did it. Thanks!