The maximum size of a VMFS 5 datastore is 64 TB, made up of either a single extent or up to 32 extents. Could the 114 TB datastore be a vSAN volume? That said, the on-disk format of vSAN is not VMFS.
That is odd; I have to say I did not think that was even possible. What is even more interesting is that it is a raw 5.00 LUN and a later version.
I do not have access to an array with that much storage to try to replicate the issue. Could you raise a service call with VMware over it?
The strange thing in this case is the behaviour of vmkfstools -i.
Until two weeks ago I considered a VMDK healthy as soon as

vmkfstools -i dubious.vmdk /vmfs/volumes/other-datastore/ok.vmdk

rebuilt the questionable item. If vmkfstools -i completed without errors, I considered the newly created VMDK on the other datastore clean and ready for production.
I guess in future I will not be so sure about the results.
In this environment I created vmkfstools -i clones that, even on a first check with Windows Explorer, showed differences: missing directories, empty files, and so on.
> Could you raise a service call with VMware over it?
Support from VMware and the storage vendor had already left the scene by the time I was called.
I still think that in this environment the unusually large number of open VMDKs and the size of the datastores have a bad effect. VMDKs with 8 TB thick allocation look really odd:
there are areas inside those VMDKs that are allocated in 1 MB fragments only; those areas grow as large as 130-140 GB, and then the next fragments show the normal behaviour again.
It looks as if the resources to allocate larger fragments were temporarily missing for some reason ...
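The 130-140 GB regions of 1 MB fragments are easier to spot if the allocation map is summarized by runs of equal fragment size. This is only a sketch: it assumes you can dump the map as (offset, length) pairs in file order, which is a hypothetical input format, not something ESXi emits directly.

```python
def summarize_fragment_runs(fragments):
    """Coalesce a VMDK allocation map into runs of equally sized
    fragments. `fragments` is a list of (offset_bytes, length_bytes)
    tuples in file order (hypothetical dump format). Returns a list of
    (fragment_size, run_total_bytes) pairs, so a long run of 1 MB
    fragments stands out as one large entry."""
    runs = []
    for _offset, length in fragments:
        if runs and runs[-1][0] == length:
            runs[-1][1] += length  # same fragment size: extend the run
        else:
            runs.append([length, length])  # new run starts here
    return [(size, total) for size, total in runs]
```

On the VMDKs described above, the output would show a ~140 GB entry at fragment size 1 MB, followed by entries with the normal, larger fragment size again.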
On other VMDKs there are holes, as if the volume had been created with several extents and the extents are out to lunch now.
Extents were never used here ...
A VMDK with missing sections should not be served to VMs by ESXi; at least that is the behaviour I usually expect.
Really strange ... I am searching for anything that is presentable to VMware, but as they had already seen this case, I would rather check which of my usual procedures need to be reviewed.
What was new for me was the size of the dd scripts required for recovery:
more than 1 million lines! It even makes sense to run such a task with Linux plus the local ESXi ... good practice for me.
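The line count makes sense once you generate such a script: one dd invocation per fragment, and an 8 TB disk split into 1 MB pieces is over 8 million fragments. A minimal generator sketch, assuming a hypothetical fragment list of block-aligned (guest_offset, lun_offset, length) triples in bytes; the device and image paths are placeholders, not taken from the actual case:

```python
def dd_recovery_script(fragments, device, out_image, block=1 << 20):
    """Emit one dd command line per fragment (hypothetical recovery
    sketch). Offsets and lengths are assumed to be multiples of
    `block`, so dd can address them with skip/seek/count in whole
    blocks; conv=notrunc keeps earlier fragments in the output image."""
    lines = []
    for guest_off, lun_off, length in fragments:
        lines.append(
            "dd if=%s of=%s bs=%d skip=%d seek=%d count=%d conv=notrunc"
            % (device, out_image, block,
               lun_off // block, guest_off // block, length // block)
        )
    return lines
```

With one line per 1 MB fragment, the million-line scripts mentioned above fall out naturally, and running them from a Linux box alongside the local ESXi keeps the long-running job off the host's busybox shell.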
Did you ever get to the bottom of this?