I have an SBS 2011 server running as a guest on ESXi 5.1, with everything installed on C:.
The C: drive ran full, and the nightly Veeam replication job failed (the Veeam database is also on C:).
The problem is that the guest configuration now seems to be messed up. Originally the SBS 2011 guest had two VMDKs attached as two hard drives, but now it has four hard drives, two of which point to the same VMDK. The fourth points to what should have been one of the hard drives on the replicated server...
In "This PC", the two original drives look correct (the first two), and the server seems to be running fine - although the two extra drives show up oddly: wrong sizes, and offline "because of a Windows Policy".
Besides the replication (which may be broken now), I also have a standard Windows backup of everything. But I would like to avoid going the restore route, as I would lose some data that way.
What to do? The server is running untouched right now. I am afraid that even rebooting it, or even just shutting it down, could write unwanted data to the disks and lead to corruption. Should I take the unwanted disks offline in Windows and then remove them on the ESX server?
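Before detaching anything, it may help to list exactly which VMDK file each virtual disk points to, so the duplicates and the replica's disk can be identified with certainty. A minimal PowerCLI sketch (the vCenter address and VM name below are placeholders, not from my environment):

```powershell
# Hypothetical server/VM names -- substitute your own.
Connect-VIServer vcenter.example.local

# Show every virtual disk attached to the guest and the backing VMDK path,
# so duplicate or foreign disks are easy to spot.
Get-HardDisk -VM "SBS2011" | Select-Object Name, Filename, CapacityGB

# Once the stray disks are confirmed, detach them WITHOUT deleting the
# underlying files (do NOT pass -DeletePermanently):
# $stray = Get-HardDisk -VM "SBS2011" -Name "Hard disk 3"
# Remove-HardDisk -HardDisk $stray -Confirm:$false
```

The key point is that `Remove-HardDisk` without `-DeletePermanently` only removes the disk from the VM's configuration; the VMDK file itself stays on the datastore.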
I opened a case with Veeam and was directed to the following link.
Apparently it is normal behaviour for Veeam to attach a VM's own disks to itself during replication, and when something goes wrong, the disks are left attached to the VM.
Sounds weird to me, but I followed the directions in the link and that solved the problem.
But I found another issue, probably related to the failed replication jobs. One VM is showing the warning:
"Virtual machine disks consolidation is needed."
...without any snapshots showing in Snapshot Manager. I am starting a new thread for that.
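For reference, Snapshot Manager can be empty while orphaned delta files (`-00000x.vmdk`) still sit on the datastore; the "consolidation is needed" warning means vSphere wants to merge those deltas back into the base disks. A hedged PowerCLI sketch of triggering that (placeholder names, and make sure no backup/replication job is running against the VM first):

```powershell
# Hypothetical server/VM names -- substitute your own.
Connect-VIServer vcenter.example.local

# Trigger disk consolidation via the vSphere API; this merges any
# orphaned snapshot delta disks back into the base VMDKs.
$vm = Get-VM "SBS2011"
$vm.ExtensionData.ConsolidateVMDisks()
```

The same thing is available in the vSphere client by right-clicking the VM and choosing Snapshot > Consolidate.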