VMware ESXi, 6.5.0
Microsoft Windows Server 2008 R2
LSI Logic SAS
A few seconds after expanding the system disk (via ESXi), the following errors occurred:
Physical Disk: PNP: Failed to get volume info for disk resource Disk I, status 3
Physical Disk: PNP: Failed to get volume info for disk resource Disk H, status 3
Physical Disk: PNP: Failed to get volume info for disk resource Disk D, status 3
Physical Disk: PNP: HardDiskpSetPnpUpdateTimePropertyWorker: status 0,
IsAlive sanity check failed!, pending IO completed with status 1235.
Microsoft-Windows-FailoverClustering 1038 (18)
This event is logged when ownership of a cluster disk has been unexpectedly lost by this node.
The system disk was expanded first on the first node, then 40 seconds later on the second.
Six minutes after the disk size was changed, the cluster role moved to the second node.
Is this normal cluster behavior?
IsAlive sanity check failed!,
That is what I would expect if you change the size of a VMDK while the VM is running.
I would not consider changing the size of a clustered Windows VMDK at all - not even while powered off.