0 Replies Latest reply on Nov 18, 2018 6:37 AM by Pickwick81

    Why does VMware stop writing data read using ddrescue?

    Pickwick81 Novice

      Hi all,


      I'm using VMware Workstation 15 Pro to run a VM with a fairly current rescue Linux, in order to recover data from a broken USB HDD with ddrescue. This works fine for the first few GB, then ddrescue slows down because of the damaged areas, eventually skips some blocks and, depending on where it reads next, speeds up again. That behaviour is expected. The problem is that at some point VMware seems to simply stop writing the rescued data:


      While ddrescue claims to be rescuing many GB of data, at some point the target device stops growing. Its last-written timestamp may be days old; even if it gets updated when I restart the VM, it is updated only once and never again. It makes no difference whether I restart only ddrescue, the whole VM, or even the whole host: no data is ever added to the target disk anymore. There is also no single, reproducible point at which this happens; it has already occurred after ~150 GB of rescued data, after ~500 GB, after ~385 GB, etc. Of course, I don't see any concrete write errors in VMware's logs or those of the host itself. The target device is sparse, so I suspected the rescued data might be all zeros, but that isn't the case either: I checked the bytes at ddrescue's input position with xxd and they were not all zeros, yet ddrescue claimed to rescue data and nothing was written to the target. All-zero data also wouldn't explain the differing sizes of the target device when the problem occurred in the past.
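      To illustrate what I mean by the target being sparse: the apparent size and the actually allocated size of a sparse file can diverge, and only real (non-zero) writes make the allocated size grow. A small self-contained sketch with a throw-away file (all paths are just examples, not my real setup):

```shell
# Create a 100 MiB sparse file: seek past the end without writing any data.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/sparse.img" bs=1 count=0 seek=100M 2>/dev/null

# Apparent size is 100 MiB, but almost nothing is allocated on disk.
du -h --apparent-size "$tmp/sparse.img"   # apparent size
du -h "$tmp/sparse.img"                   # allocated size

# Writing real (non-zero) data makes the allocated size grow.
dd if=/dev/urandom of="$tmp/sparse.img" bs=1M count=10 conv=notrunc 2>/dev/null
du -h "$tmp/sparse.img"

rm -r "$tmp"
```

This is the same distinction that applies to a growable/sparse VMDK on the host: if ddrescue were only writing zeros, the VMDK would not need to grow, which is why I checked the input data with xxd in the first place.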


      My setup is a bit special:


      The host is Windows 10, and the broken USB HDD contains an NTFS filesystem that can no longer be read by several versions of Windows. That's why I'm trying a Linux VM and ddrescue; mounting the disk as NTFS under Linux fails as well. Because the broken USB HDD is fairly large, I created the rescue VM on another, larger USB HDD. That VM contains one VMDK for / and a second VMDK that receives the data from the broken USB HDD. As said, this works in general: the VM runs and starts rescuing data, but stops writing it at some point.


      The interesting part is that the broken USB HDD is still being read, which I can see from its blinking LED and from ddrescue's output, and the target USB HDD containing the VM is still in use as well, e.g. to store ddrescue's map file. Because that map lives on the other VMDK, I can see changes to the map file, and that VMDK's timestamp and size keep increasing over time, so that VMDK is working. Additionally, while ddrescue is rescuing data, the LED of the VM's USB HDD blinks a lot, far too much for ddrescue's map file alone, as if GB of data were being written. But the data simply never reaches the target VMDK for some reason. I have already used Process Monitor on Windows to check whether the target VMDK gets written, and after the problem occurs it really doesn't; it is only read a little from time to time, but certainly not GB of data.
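      One more check I could run inside the guest (a sketch; the device names in my VM will differ): snapshot the kernel's own per-disk write counters before and after a stretch of rescuing. If the sectors-written counter for the target disk advances while the target VMDK on the host doesn't grow, the writes are being lost somewhere between the guest kernel and the host file.

```shell
# In /proc/diskstats, field 3 is the device name and field 10 is the number
# of 512-byte sectors written since boot. Run this before and after a rescue
# interval and diff the two outputs; the target disk's counter should advance.
awk '{ printf "%-10s %15d bytes written\n", $3, $10 * 512 }' /proc/diskstats
```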


      I'm attaching some screenshots documenting two runs of ddrescue: The first is from where I stopped yesterday evening, after which the target VMDK had already not been written to for hours; its size is ~385 GB, while ddrescue claims to have rescued ~580 GB. The second is from today, where ddrescue claimed to have rescued an additional ~15 GB, but the size of the target VMDK is still unchanged. The third shows the input position from the second screenshot, which is not all zeros, so at least some bytes should have been written to the target VMDK.
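      A related sanity check, sketched here with plain files instead of the real devices (the actual device paths in my VM would differ): write a known block at a large offset, then read it back from the same offset and compare. Doing the same with ddrescue's reported output position against the target VMDK's block device would show directly whether the bytes ever land.

```shell
# Demonstrate the read-back check with throw-away files standing in for the
# source disk and the target device (hypothetical paths, not my real setup).
tmp=$(mktemp -d)
printf 'rescued-data-block' > "$tmp/src"

# Write the block at a 1 MiB offset (512-byte blocks, seek=2048).
dd if="$tmp/src" of="$tmp/dst" bs=512 seek=2048 conv=notrunc 2>/dev/null

# Read back from the same offset and compare byte-for-byte.
dd if="$tmp/dst" bs=512 skip=2048 count=1 2>/dev/null | head -c 18 > "$tmp/back"
cmp "$tmp/src" "$tmp/back" && echo "offset data matches"

rm -r "$tmp"
```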


      Do you have any idea how something like this can happen? Or how I can debug it further, and where to look? Are there any special logs, a debugging mode, or similar?