Hi,
I ran tests all day.
I set up a fresh new W2012 VM.
Took a lot of snapshots (quiesced and non-quiesced) from the VI Client: no issue occurred.
Ran a "continuous" backup job from Veeam: no issue after 1 hour (nearly 30 snapshots, using application-aware guest processing).
Ran a "continuous" replication job: no issue after 30 replications (using application-aware guest processing).
Since the issue happens (so far) only on my 2012 AD Domain Controllers, I promoted the test VM to a DC.
Again, no issue after running the same tests for 4 hours.
OK, not the best test protocol, but hey, I had some work to do :smileywink:
Meanwhile, roughly 1 time in 5, my "real" DCs get their EFI corrupted...
Maybe it's not related to snapshots after all...
What we know so far is that something is causing the EFI "Boot Options" (i.e. the entries visible in the EFI Boot Manager) to replicate until eventually the VM's EFI NVRAM runs out of space and the VM fails to boot. It might be a few snapshots or a few VM reboot cycles before the problem can be first seen as multiple entries in EFI Boot Manager, and it might be many more snapshots or reboot cycles before the VM fails to boot due to insufficient NVRAM space.
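One way to spot the duplication early, before the NVRAM actually fills up, is to dump the firmware boot entries from inside the guest with `bcdedit /enum firmware` and check whether several entries share the same description. A minimal sketch of that check in Python — the sample output below is made up for illustration (real `bcdedit` output has more fields per entry), and the entry names are hypothetical:

```python
from collections import Counter

# Hypothetical, trimmed sample of `bcdedit /enum firmware` output.
# Real output contains more fields per entry; only the lines starting
# with "description" matter for this check.
SAMPLE_OUTPUT = """\
Firmware Boot Manager
---------------------
identifier              {fwbootmgr}

Windows Boot Manager
--------------------
identifier              {bootmgr}
description             Windows Boot Manager

Firmware Application (101fffff)
-------------------------------
identifier              {aaaa-1111}
description             Windows Boot Manager

Firmware Application (101fffff)
-------------------------------
identifier              {bbbb-2222}
description             Windows Boot Manager
"""

def count_descriptions(bcdedit_output):
    """Count how many firmware boot entries share each description string."""
    counts = Counter()
    for line in bcdedit_output.splitlines():
        if line.startswith("description"):
            # Split off the field name; the rest is the description text.
            counts[line.split(None, 1)[1]] += 1
    return counts

# More than one entry with the same description is suspicious and worth
# watching across snapshot/backup cycles.
dupes = {d: n for d, n in count_descriptions(SAMPLE_OUTPUT).items() if n > 1}
print(dupes)
```

On an affected VM you would feed it the real command output (e.g. `subprocess.run(["bcdedit", "/enum", "firmware"], capture_output=True, text=True).stdout`) and watch whether the counts grow after each snapshot or backup run.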
On that point, I can say that reboots don't seem to be the culprit, since my corrupted VMs are my DCs and they almost never get rebooted.
For instance, one of my VMs got corrupted; I fixed the NVRAM, booted it up, and it got corrupted again in my Veeam lab after a few backups. It was also corrupted on the host: I rebooted it to see what would happen, and tada! The NVRAM was dead.