I don't know if they ever got improvements, but I believe waiting events have always been potentially problematic in clustered vRO deployments, since each node operates from its own independent RabbitMQ queue.
I think you'd be better off recording the VMs/dates somewhere (in vRO or externally) and running a process on a fixed interval (e.g., once per hour or once per day) that walks the list and destroys whatever is due.
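A minimal sketch of that "record and sweep" idea, in plain JavaScript so the logic is clear. The record shape and the `destroyVm` callback are hypothetical stand-ins for whatever vRO action or external call actually does the removal:

```javascript
// Sweep a list of {vmName, destroyDate} records: destroy anything
// whose date has passed, and return the records that remain.
// destroyVm is a caller-supplied callback (hypothetical here).
function sweep(records, now, destroyVm) {
    return records.filter(function (rec) {
        if (now.getTime() >= Date.parse(rec.destroyDate)) {
            destroyVm(rec.vmName); // due: destroy and drop from the list
            return false;
        }
        return true; // not due yet: keep for the next run
    });
}
```

Running this once per interval from a scheduled task means no long-lived workflow token has to survive between runs.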
Thank you for the suggestion. I ended up going with a much shorter timer instead: the workflow pauses in 4-hour increments, checks whether we're past the desired date, and resumes execution once we get there. That's a lot of pause/resume cycles for what I wanted, but it seems to be working well so far. The benefit is that I don't need an active token for the entire duration like I would with a sleep, and so far it doesn't seem to lose track of the token with a 4-hour wait timer.

I didn't want to externalize the process, because it's basically the entirety of the workflow; there are a lot more steps than just destroying the VM: remove from AD, remove from DNS, remove from monitoring, etc. But the explanation of the RabbitMQ queues makes sense if that's not clustered.
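The 4-hour-increment check described above can be sketched in plain JavaScript (in vRO this would sit in a scriptable task between Waiting Timer elements; the function name and inputs are my own invention):

```javascript
var FOUR_HOURS_MS = 4 * 60 * 60 * 1000;

// Decide what the next Waiting Timer value should be.
// Returns null when we're past the target date, meaning the
// workflow should stop sleeping and run its decommission steps
// (destroy VM, remove from AD/DNS/monitoring, etc.).
function nextWake(now, targetDate) {
    if (now.getTime() >= targetDate.getTime()) {
        return null; // past the desired date: resume for real
    }
    // Otherwise sleep one more increment, but never past the target.
    var wake = now.getTime() + FOUR_HOURS_MS;
    return new Date(Math.min(wake, targetDate.getTime()));
}
```

Capping the wake time at the target date means the last sleep can be shorter than 4 hours, so the workflow resumes at the desired date rather than up to 4 hours late.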