Hello, I currently have a recovery plan configured with three VMs and I'm testing the plan.
All three are in separate ESXi host clusters.
Two of the three VMs can power on successfully during the test, but the third VM cannot. I receive "Error - Module 'MonitorLoop' power on failed".
The VM with the error is configured with only 2 CPUs, 4 GB of memory, and a single 60 GB VMDK. I've verified there are no memory or CPU limits or reservations, and I've also confirmed adequate storage is available on the protected datastore, the SRM datastore, and the recovery datastore.
I don't see any of the above causing the issue.
Is there something else I can look into to resolve this error and proceed with successfully testing the recovery plan?
Thank you in advance for your replies and support!
Hi @jcha7965 ,
If the test VM is registered at the recovery site, please try to power it on manually. If that also fails with the same error, check the tasks and events on the vCenter Server to identify the cause of the failure.
One known cause is insufficient datastore space to create the VM swap file, which produces the "Module 'MonitorLoop' power on failed" error.
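The swap-file requirement behind that failure mode can be sanity-checked with simple arithmetic: at power-on, ESXi creates a .vswp file sized at the VM's configured memory minus its memory reservation, and the target datastore must have at least that much free. A minimal sketch (the sizes below are illustrative, not taken from this thread):

```python
GIB = 1024 ** 3  # bytes per GiB

def swap_file_bytes(configured_mem_gib, mem_reservation_gib=0):
    """ESXi sizes the .vswp file as configured memory minus the reservation."""
    return int((configured_mem_gib - mem_reservation_gib) * GIB)

def can_power_on(configured_mem_gib, mem_reservation_gib, datastore_free_gib):
    """Power-on fails with the MonitorLoop error when the swap file won't fit."""
    need = swap_file_bytes(configured_mem_gib, mem_reservation_gib)
    return need <= datastore_free_gib * GIB

# A 4 GiB VM with no reservation needs 4 GiB free for its swap file.
print(can_power_on(4, 0, 60))     # -> True  (fits easily)
print(can_power_on(300, 0, 100))  # -> False (swap file cannot be created)
```

With a full memory reservation, the swap file shrinks to zero, which is why reserving all guest memory is one workaround when datastore space is tight.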
Hi, I read the post on yellow-bricks.com prior to submitting this question on the VMware forum.
I validated there was enough space on the datastore for the swap file, so that doesn't pertain to the VM in question.
Per the comment on the article, I also ensured there's no VM snapshot.
I will attempt to power on manually.
Is there any other cause for the error? I see very little online while researching.
Hello, is there anything I can try to resolve the error? I recently powered off the VM and changed the swap file location to be stored with the VM. I tested the Recovery Plan a few times (also updated the power on priority from 3, then 2, and finally 1) and am still receiving the MonitorLoop error.
Any assistance is appreciated.
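Since the swap file location was just changed to be stored with the VM, one quick way to confirm where the swap file will actually land is to look for a `sched.swap.dir` entry in the VM's .vmx file; if the key is absent, the swap file defaults to the VM's own directory. A minimal sketch (the file contents and path below are illustrative, not from this thread):

```python
def swap_dir(vmx_text):
    """Return the swap directory configured via sched.swap.dir in .vmx text,
    or None if the swap file defaults to the VM's own directory."""
    for line in vmx_text.splitlines():
        key, _, value = line.partition("=")
        if key.strip() == "sched.swap.dir":
            return value.strip().strip('"')
    return None

sample = 'memSize = "4096"\nsched.swap.dir = "/vmfs/volumes/datastore1/vm1"\n'
print(swap_dir(sample))                  # -> /vmfs/volumes/datastore1/vm1
print(swap_dir('memSize = "4096"\n'))    # -> None (default: VM directory)
```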
Not sure if this helps: I ran into the same issue running the GNS3 VM on an R720 with ESXi 7.0. The R720 has 384 GB of RAM, and GNS3 was assigned 200 GB. I got the error when I changed GNS3 to 300 GB, but when I changed it back to 200 GB, it booted up fine.
I had the same error when upgrading vCenter Server 6.7 to 7.0, stage 1. After deploying the new OVA, it gave this error. The reason was that although I had freed enough disk space to deploy the OVA, the new VM failed to start because it needed even more space to create the RAM disk. Increasing the free disk space allowed me to start the VM.
I got the same error while installing pfSense as a VM. It happened because an incorrect RAM size was given to the VM during creation: 1024 GB instead of 1 GB. After changing it to 1 GB, the VM started without issues.
I had the same error. I was trying to deploy a VM on my new test server, and every time I started it, it gave the "MonitorLoop" power on failed error. I got through it by going into Edit Settings and adjusting the resources, resizing the assigned RAM and storage. It works fine now.
I experienced this too, and what I found to be the issue on my end was that the assigned memory exceeded the free storage on the datastore backing the VM. Memory "consumes" storage (for the swap file), so that space must be available to satisfy the memory claim.
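The relationship described above also suggests a quick audit across an inventory: any VM whose configured memory exceeds its datastore's free space is a candidate for this power-on failure. A minimal sketch over a hypothetical inventory list (the VM names and sizes are illustrative, not from this thread):

```python
# Hypothetical inventory rows: (vm_name, configured_memory_gib, datastore_free_gib).
inventory = [
    ("vm-ok",      4,   60),   # 4 GiB swap file fits in 60 GiB free
    ("vm-at-risk", 300, 100),  # swap file cannot be created at power-on
]

def flag_swap_risks(vms):
    """Return names of VMs whose configured memory (and thus default swap
    file size, assuming no reservation) exceeds datastore free space."""
    return [name for name, mem_gib, free_gib in vms if mem_gib > free_gib]

print(flag_swap_risks(inventory))  # -> ['vm-at-risk']
```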
@RSchuddinh @RPUL @sudheee02 @KazuXee
Hi, there's enough storage on the datastores and I don't see any memory reservations. There are only 2 CPUs and 4 GB memory installed on the production VM. The replicated placeholder VM is also configured with 2 CPUs and 4 GB (not 4096 MB). Should I try reducing the memory to 2-3 GB?
@jcha7965 Greetings - I humbly and possibly ignorantly suggest fully listing your physical available resources, your desired VM resources, and your current VM resources.
Thank you for the thread, all. It helped me save my job and get my VM back online quickly, although not as fully desired.
I fully believe it is my lack of understanding and knowledge to blame for the struggles I'm having getting to the desired server and VM configuration.
Physical Server Resources available:
2x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
24x 32GB 2400MT/s - 768GB ECC RAM
Dell BOSS card, 2x micron 480ish nvme
PERC H730 Adapter, 4x HUC101212CSS600 HGST 1.2TB
H730 controlling "backplane" (which is actually front of server, love that we can't say "frontplane" lol)
BOSS in a PCIe slot. I read it was supposed to default to RAID 1, but it did not, or at least I can't find anywhere that confirms it is operating correctly in RAID 1, which I'd like to verify.
Currently only 1 VM
256GB of nvme
32 GB of RAM
Windows Server '16
I desire VM to get to:
OS only on nvme, and BOSS verified and running in RAID1
Understand this well enough that some smaller programs that need fast operations will also run on the NVMe
Other programs will be set up to run on the disk drives in RAID 10, or I might find compatible SAS SSDs
The RAID 10 is not set up yet; I believe it is currently running in JBOD mode, as I have 4 TB of storage available to assign to the VM
I desire the ability to assign up to 128 GB of RAM to the VM. Likely only 32-64 GB will be required, but until that is known for sure (I will monitor usage), I need to allow certain programs within the VM to "stretch their legs" a little as we test some settings and hosts.
I desire the ability to get 10G networking to the server for use by all VMs. Can this be done securely? I want to understand this well enough that a financial program can run in its own VM and be completely isolated on the network from the other VMs. Is that possible, or is a 2- or 4-port 10G NIC required?
Thank you so much, I will update here and below with information as I find it elsewhere or you helpful people reply.