We have 3 disks. After the update, only 2 still show their datastore; the third disk appears, but its datastore cannot be found.
Any ideas?
Thank you for your time!
Has this datastore been expanded by adding a second extent before the update?
"Failed to check fbb.sf" suggests that the VMFS metadata is corrupt.
For an in-depth answer I need a header dump - see https://communities.vmware.com/t5/VMware-vSphere-Documents/Create-a-VMFS-Header-dump-using-an-ESXi-H...
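For reference, a header dump of this kind is usually created with dd against the datastore's partition device. This is only a sketch: the device path and sizes are assumptions (on a real ESXi host the input would be a path under /vmfs/devices/disks/, and you would dump a much larger region, roughly the first 1.5 GB of the VMFS partition). The demo below runs against a scratch file so the commands can be tried anywhere:

```shell
# Create a scratch file standing in for the datastore partition.
# On a real ESXi host the input would be something like
# /vmfs/devices/disks/t10.ATA_____...:1 (hypothetical name).
dd if=/dev/zero of=/tmp/fake_partition.bin bs=1M count=4 2>/dev/null

# Dump the beginning of the device to a file. On a real VMFS volume
# you would dump a much larger region, e.g. bs=1M count=1536, so the
# metadata area near the start of the partition is captured.
dd if=/tmp/fake_partition.bin of=/tmp/header_dump.bin bs=1M count=2 2>/dev/null

# Verify the dump was written and check its size.
ls -l /tmp/header_dump.bin
```

The resulting file can then be copied off the host (e.g. via scp) and attached for analysis.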
Do these devices still exist:
Is this the IP of your ESXi? - 126.96.36.199
Maybe recoverable:
About 1 TB of data from 3CX_1-flat.vmdk may be recoverable to a state from Nov 2017.
One more TB of data contained in snapshots may be recoverable - but at first sight 3CX_1-000002-sesparse.vmdk is missing, so the snapshot chain looks corrupt / incomplete.
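For context on why one missing sesparse file breaks the chain: each snapshot delta disk has a small text descriptor (.vmdk) that names its data extent and points at its parent via parentFileNameHint. A hypothetical sketch of what the descriptor 3CX_1-000002.vmdk might contain (the CID values and extent size here are made up; only the filename pattern comes from the thread):

```
# Disk DescriptorFile (3CX_1-000002.vmdk -- hypothetical contents)
version=1
CID=fffffffe
parentCID=fffffffd
createType="seSparse"
parentFileNameHint="3CX_1-000001.vmdk"

# Extent description -- this data file is the one reported missing:
RW 4294967296 SESPARSE "3CX_1-000002-sesparse.vmdk"
```

If the data file named in the RW line is gone, every delta downstream of it (and the VM's current state) loses the blocks that delta held, which is why the chain reads as corrupt / incomplete.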
Excuse me for not answering earlier.
This is the story:
The 2 TB disk was only used for storage.
It was about 3 years old when it started showing delays.
The company that rents us the server proceeded to swap the disk, making a complete clone.
After that cloning the problems began: the disk appears in VMware, but the datastore does not.
I requested that they put the original disk back, but now what do you say about this serial:
t10.ATA_____ST2000DM0062D2DM164__________________________________Z560D09J:1
I think they never reconnected the original disk and the cloned disk is still connected.
Do you recommend that I request this disk and, if they find it, reconnect it?
Yes, this disk is the main storage:
t10.ATA_____ST10000NM00162D1TT101________________________________ZA2479AH:1
I have been using this type of HDD and ran into the same issues. I found out that the problem is likely that this is a Shingled Magnetic Recording (SMR) unit; you can google to find out more about this technology, but the practical effect is that some update operations are extremely slow on the drive. The controller perceives that slowness as a fault, and that is what triggers the alert in ESXi in the first place.
I suggest you put no more effort into these drives and replace them with Conventional Magnetic Recording (CMR) units, as I also found them to be plagued by poor reliability: the survival rate is about 75% after three years, compared to 95% for the competitors.
If you decide to keep the HDDs, please note the following workload recommendations:
CMR drives -> all workloads (frequent delete/write of data)
SMR drives -> write-once / read-many workloads (archiving)
Hope this helps.