elihuj
Enthusiast

Maintenance Mode Question

We currently do not utilize Admission Control. I know, I know, but it's not my call. Now I have a two-node cluster with each host showing over 90% memory utilization, and I'm concerned that I will not be able to put one host into maintenance mode. Is there anything I can do in a situation such as this, short of powering off select VMs?
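For reference, here is a rough pyVmomi sketch of how one might report each host's memory utilization before attempting this. The vCenter address and credentials are placeholders, so treat it as an illustration rather than a drop-in script:

```python
# Rough sketch: report memory utilization for every host vCenter knows about.
# The vCenter address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='password', sslContext=context)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    used_mb = host.summary.quickStats.overallMemoryUsage   # consumed memory, in MB
    total_mb = host.hardware.memorySize / (1024 * 1024)    # memorySize is in bytes
    print('%s: %.1f%% memory utilized' % (host.name, 100.0 * used_mb / total_mb))

view.Destroy()
Disconnect(si)
```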

4 Replies
vThinkBeyondVM
VMware Employee

I have not come across such a situation myself, but without powering down some of the VMs, the host that receives the migrated VMs will be heavily loaded and the VMs may not perform well. The vMotion operations themselves can also be affected and will not be as smooth as they could be. vCenter will not stop you from putting the host into maintenance mode because of the memory constraint (since admission control is not enabled), but if memory becomes overcommitted, a memory alarm will be raised.

If this is not a production environment, you can go ahead, put the host into maintenance mode, and post your observations (but you will have to evacuate that host manually first). I am sure that in production, admins usually do not allow a cluster to cross 90%.

You can also enable DRS on the cluster and then put the host into maintenance mode; DRS will automatically start evacuating it (a rough sketch of this is below). It would also be worth analysing whether you can power off any VMs on either host.
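For illustration, here is a minimal pyVmomi sketch of putting a host into maintenance mode and waiting for the task to finish. The vCenter address, credentials, and host name are placeholders, and it assumes DRS is in fully automated mode so powered-on VMs are migrated off as part of the task:

```python
# Minimal sketch: put one host into maintenance mode and wait on the task.
# vCenter address, credentials, and the host name are placeholders. With DRS
# in fully automated mode, powered-on VMs are migrated off as part of the task.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='password', sslContext=context)
content = si.RetrieveContent()

host = content.searchIndex.FindByDnsName(None, 'esxi01.example.com', False)

# timeout=0 means wait indefinitely; evacuatePoweredOffVms=False leaves
# powered-off and suspended VMs registered on the host
task = host.EnterMaintenanceMode_Task(timeout=0, evacuatePoweredOffVms=False)
WaitForTask(task)
print('%s entered maintenance mode' % host.name)

Disconnect(si)
```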

If this is useful, please mark the answer as correct or helpful.
----------------------------------------------------------------
Thanks & Regards
Vikas, VCP50, MCTS on AD, SCJP6.0.
http://vThinkBeyondVM.com
-----------------------------------------------------------------
Disclaimer: Any views or opinions expressed here are strictly my own. I am solely responsible for all content published here. Content published here is not read, reviewed or approved in advance by VMware and does not necessarily represent or reflect the views or opinions of VMware.

elihuj
Enthusiast

Thank you for the reply. A colleague of mine went ahead and proceeded yesterday without powering down any VMs. The memory utilization on the hosts was 88% and 95%. All of the VMs vMotioned successfully without issues.

My concern is that I had a similar situation where memory utilization was high, and as I was putting a host into MM, two hosts briefly disconnected from vCenter. I cancelled the MM operation, and the hosts did come back. I'm not sure what would have happened if I had let it continue, but the memory percentage was in the red. I've never seen that happen before.

admin
Immortal

You have to differentiate between memory consumption and active memory usage. The hypervisor does not have access to the guest OS's free list and does not know about the internal memory mapping of those guests.

It will therefore usually not reclaim memory that was once claimed by the guest until memory pressure kicks in. So while a VM might show a lot of consumed memory, its actual working set is much lower most of the time.
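To illustrate the difference, here is a rough pyVmomi sketch that compares consumed host memory with vCenter's active-memory estimate for each powered-on VM; connection details are placeholders:

```python
# Rough sketch: compare consumed host memory vs. active guest memory for each
# powered-on VM. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='password', sslContext=context)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    if vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOn:
        continue
    qs = vm.summary.quickStats
    # hostMemoryUsage = consumed machine memory (MB);
    # guestMemoryUsage = vCenter's estimate of the active working set (MB)
    print('%s: consumed %d MB, active %d MB' % (
        vm.name, qs.hostMemoryUsage, qs.guestMemoryUsage))

view.Destroy()
Disconnect(si)
```

If the active figures sit far below the consumed figures, the hosts have more real headroom than the consumed-memory percentage suggests.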

This of course needs to be evaluated on a case-by-case basis, as there is no general rule of thumb: whether high memory consumption on the host is an actual issue or merely a cosmetic one is extremely workload-dependent (especially since ESXi 5.0 uses large pages, so TPS kicks in much later than it did in 4.x).

elihuj
Enthusiast

That's a good point you bring up, Frank. I was thinking about turning large pages off to help TPS kick in sooner. What are the recommendations for this?
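For reference, disabling large-page allocation for guest memory is typically done through the Mem.AllocGuestLargePage host advanced setting (0 disables it, 1 enables it). A hedged pyVmomi sketch of making that change, with placeholder connection details and host name, might look like this:

```python
# Hedged sketch: disable guest large-page allocation on one host by setting
# the Mem.AllocGuestLargePage advanced option to 0 (1 re-enables it).
# Connection details and the host name are placeholders; test outside
# production first.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='password', sslContext=context)
content = si.RetrieveContent()

host = content.searchIndex.FindByDnsName(None, 'esxi01.example.com', False)
opt_mgr = host.configManager.advancedOption
# the option expects an integer value
opt_mgr.UpdateOptions(changedValue=[
    vim.option.OptionValue(key='Mem.AllocGuestLargePage', value=0)])
print('Mem.AllocGuestLargePage set to 0 on %s' % host.name)

Disconnect(si)
```

The usual caveat applies: any TPS savings come at the cost of the TLB benefits of large pages, so it is worth testing the performance impact before rolling this out broadly.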
