Can anyone explain why a thin-provisioned disk performs faster than a thick eager-zeroed disk on the same datastore?
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
Jobs: 1 (f=1): [m(1)] [100.0% done] [284.3MB/96892KB/0KB /s] [72.8K/24.3K/0 iops] [eta 00m:00s]
Thin disk:
test: (groupid=0, jobs=1): err= 0: pid=4133: Sun Jul 16 22:27:32 2017
read : io=3071.7MB, bw=289577KB/s, iops=72394, runt= 10862msec
write: io=1024.4MB, bw=96567KB/s, iops=24141, runt= 10862msec
Thick eager-zeroed disk (same fio job):
read : io=3071.7MB, bw=172067KB/s, iops=43016, runt= 18280msec
write: io=1024.4MB, bw=57381KB/s, iops=14345, runt= 18280msec
What's the mystery here?
Datastore is on 4x SSD in RAID 0, VMFS6, ESXi 6.5.
Eager-zeroed disks impose much more overhead on the backend storage.
Is your storage deduplicating?
A thin-provisioned vmdk reads allocated blocks from the physical device, but unallocated blocks are effectively served from /dev/zero.
A thick-provisioned vmdk, on the other hand, always reads from the physical device.
Of course, returning synthesized zeros is faster than reading from a real device.
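This effect is easy to reproduce outside ESXi with a sparse file as a rough stand-in for a thin vmdk (file names and sizes here are arbitrary examples):

```shell
# Sparse "thin" file: no blocks allocated, reads of the hole return zeros
truncate -s 256M thin.img

# Fully written "thick" file: every block actually exists on the device
dd if=/dev/zero of=thick.img bs=1M count=256 status=none

# Allocated size vs logical size: the sparse file occupies almost nothing
du -h thin.img thick.img

# Direct reads bypass the page cache; the sparse read barely touches the disk
# (iflag=direct needs a filesystem that supports O_DIRECT)
dd if=thin.img of=/dev/null bs=1M iflag=direct
dd if=thick.img of=/dev/null bs=1M iflag=direct

# Clean up
rm -f thin.img thick.img
```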
It is not that simple 🙂
Thin-provisioned vmdks perform well as long as they are almost empty. The more data they contain, the more fragmented they become, so performance drops again.
To get good AND predictable performance, the best choice IMHO is eager-zeroed vmdks on an unfragmented datastore.
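If a disk was already created thin, it can be converted in place from the ESXi shell with vmkfstools (the VM must be powered off; the datastore path below is just an example):

```shell
# Inflate an existing thin vmdk to eager-zeroed thick
vmkfstools --inflatedisk /vmfs/volumes/datastore1/myvm/myvm.vmdk

# Or create a new disk eager-zeroed from the start
vmkfstools --createvirtualdisk 40G --diskformat eagerzeroedthick \
    /vmfs/volumes/datastore1/myvm/newdisk.vmdk
```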
But IMHO the main reason to use eager-zeroed thick vmdks is the much higher reliability of thick provisioning.