I just got a new job working overnight at a cloud provider, and while I've had plenty of experience with vCenter, I haven't had any exposure to vCloud before. I've heard of some of the many ways to break vCloud by doing things in vCenter, so I'm rather nervous about touching anything right now. I don't want to lose this new job, obviously. Most of my coworkers are networking gurus with less virtualization experience than me, so I've been asked to take on some of the more complicated virtualization tasks during off hours. Not being able to ask a coworker with vCloud experience makes some of this difficult.
So, onto the actual problem. I need to put a host in maintenance mode by Friday evening, so I have a little time to figure this one out. It is one host out of 5 identical hosts in the cluster, and there are enough overall resources that two hosts can go down and all the VMs will still run. (Host 3 is the only one that's different, and that is exactly why I need to put it in maintenance mode: a DIMM died, and we need to shut the host down to replace it.) When I put this host in maintenance mode, DRS moved 6 of the 7 VMs to host 2 instead of spreading them across the other hosts, and it seems to be refusing to move anything to host 1, even though that host is completely empty. One last VM is still on host 3 that vCloud refuses to move, and DRS also seems to be ignoring hosts 4 and 5. If I try to migrate the VM manually in vCenter, it warns me that I shouldn't because the VM is managed by vCloud. I don't see any way in vCloud to manage the hosts or cluster balancing, though.
One big restriction (might be obvious, but I think it bears mentioning): I can't power down any VMs without permission, because they belong to the customer.
So, should I just go ahead and move the VMs manually in vCenter? Will this break vCloud, or does it seem like vCloud is already broken at this point? And what about host 2? It now has many more VMs than the other hosts and is sitting at 96% memory usage, so it probably needs to have some moved off of it.
Thanks in advance.
Update: The overload on host 2 eventually evened out onto hosts 4 and 5, to the point of being almost perfectly balanced among the three at 69% memory usage each, but host 1 is still unused, and the last VM is still on host 3.
I've tried cancelling the maintenance mode task and retrying after a few minutes, three times now, and that last VM still won't move off. Any of the other hosts has enough free resources to take it; DRS just won't move it on its own.