Check with the storage vendor. I/O errors like those can often mean data corruption has occurred.
So you can still read the content from some hosts?
Then do the following:
- unmount the volume on the hosts that can no longer read it - DO THIS ASAP
- select one host that can still read the volume and unmount the volume on all other hosts
- use this one host to copy all VMs to another datastore
- rebuild the affected datastore from scratch and avoid RAID 5
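The copy step above can be sketched from the ESXi shell of the one healthy host. This is a hedged sketch, not the poster's exact procedure: the datastore names `SRC_DS`/`DST_DS` and the VM name `VMNAME` are placeholders, and the script only prints the commands (dry run) so you can review them before executing anything against a degraded datastore.

```shell
#!/bin/sh
# Dry-run sketch: copy one VM from the failing datastore to a healthy one.
# SRC_DS, DST_DS and VMNAME are placeholders - substitute your own values.
SRC="/vmfs/volumes/SRC_DS/VMNAME"
DST="/vmfs/volumes/DST_DS/VMNAME"

# run() only echoes; drop the echo to actually execute on the host
run() { echo "+ $*"; }

run mkdir -p "$DST"
# copy the small config files as-is
run cp "$SRC/VMNAME.vmx" "$DST/"
# vmkfstools -i clones a virtual disk; '-d thin' keeps the copy space-efficient
run vmkfstools -i "$SRC/VMNAME.vmdk" -d thin "$DST/VMNAME.vmdk"
```

Cloning with `vmkfstools -i` rather than `cp` on the `-flat.vmdk` lets the host handle descriptor and flat files together.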
Assume that the situation may deteriorate if you do not act soon - so do not waste too much time in the current state.
Try performing a LUN reset (see the KB below), followed by the commands below in sequence.
1. esxcfg-rescan -A
2. vmkfstools -V
3. Check the availability of the datastore in question.
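The rescan sequence above could look like this from the ESXi shell. A sketch only: it is written as a dry run that prints each command instead of executing it, and the final `esxcli` call is my suggestion for step 3 (checking datastore availability), not something quoted from the original post.

```shell
#!/bin/sh
# Dry-run sketch of the post-LUN-reset sequence on an ESXi host.
run() { echo "+ $*"; }   # remove the echo to execute for real

# 1. rescan all HBAs for LUN changes
run esxcfg-rescan -A
# 2. rescan for VMFS volumes
run vmkfstools -V
# 3. verify the datastore is mounted again (suggested check, not from the post)
run esxcli storage filesystem list
```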
For further troubleshooting, check the storage array logs if required.
> Check with the storage vendor. I/O errors like those can often mean data corruption has occurred.
From several years of VMFS recovery work I have learned that if one host in a cluster complains about I/O errors, this does not necessarily mean that the actual data is corrupted.
Often another host or a linux system can still read the data without problems.
So whenever I come across this issue I first try to read the data using another host or a Linux LiveCD.
So the surprising lesson here is that I/O errors in a vmkernel log do not immediately mean corruption, but rather that "this" host does not want to cooperate.
Yes - I know how crazy that sounds.
Likewise, when one host complains that a LUN has no partition table, it does not mean that there is no partition table.
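The Linux LiveCD approach mentioned above can be sketched with the open-source vmfs-tools package, which provides a read-only FUSE driver for VMFS. This is an assumption-laden sketch, not the poster's exact steps: the device node `/dev/sdb1` is a placeholder for whatever partition the LUN shows up as, and the script is a dry run that only prints the commands.

```shell
#!/bin/sh
# Dry-run sketch: read a VMFS volume from a Linux live system using vmfs-tools.
# /dev/sdb1 is a placeholder - identify the correct partition first.
run() { echo "+ $*"; }   # remove the echo to execute for real

# install the open-source VMFS driver (Debian/Ubuntu package name)
run apt-get install vmfs-tools
run mkdir -p /mnt/vmfs
# vmfs-fuse mounts the VMFS partition read-only via FUSE
run vmfs-fuse /dev/sdb1 /mnt/vmfs
# if this listing works, the data is readable even though ESXi complained
run ls /mnt/vmfs
```

A successful read-only mount here supports the point above: the data may be intact even when one ESXi host reports I/O errors.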
I shut down all VMs and ESXi hosts, restarted both storage array controllers, and then powered everything back up.
Thank you, everyone.