VMware Cloud Community
CountFrackula
Contributor

Incorrect reading of drive state

I am attempting to share a drive between two VMs. When I provision a drive as Eagerly Zeroed without turning sharing on, ESXi displays it as Lazily Zeroed after the format completes, even though the format took several hours.

I was able to work around this by turning sharing on when creating the drive. Now it shows correctly on the machine it was initially set up on: [screenshot: pastedImage_0.png]

However, when I try to set it up on another machine, the same disk appears as Lazily Zeroed: [screenshot: pastedImage_1.png]

I executed a vmkfstools -D on the disk to verify the state, and this is the result:

Lock [type 10c00001 offset 112467968 v 44, hb offset 3440640

gen 85, mode 0, owner 00000000-00000000-0000-000000000000 mtime 659

num 0 gblnum 0 gblgen 0 gblbrk 0]

Addr <4, 3, 17>, gen 28, links 1, type reg, flags 0, uid 0, gid 0, mode 600

len 472, nb 0 tbz 0, cow 0, newSinceEpoch 0, zla 4305, bs 65536

tbz 0 should mean that the disk is eager-zeroed thick, but the UI is still not registering it correctly.
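For reference, here is a quick way to pull the tbz ("to be zeroed") count out of the vmkfstools -D output on the CLI. This is only a sketch: the datastore path in the comment is a made-up example, and the grep/awk field positions assume the exact output format pasted above.

```shell
# On the host you would dump the metadata for the flat extent, e.g.:
#   vmkfstools -D /vmfs/volumes/datastore1/MyVM/MyVM-flat.vmdk > dump.txt
# (the path above is an example, not from this thread)
# Here we reuse the output pasted above as sample input.
cat > dump.txt <<'EOF'
Lock [type 10c00001 offset 112467968 v 44, hb offset 3440640
gen 85, mode 0, owner 00000000-00000000-0000-000000000000 mtime 659
num 0 gblnum 0 gblgen 0 gblbrk 0]
Addr <4, 3, 17>, gen 28, links 1, type reg, flags 0, uid 0, gid 0, mode 600
len 472, nb 0 tbz 0, cow 0, newSinceEpoch 0, zla 4305, bs 65536
EOF

# Extract the tbz count; 0 means no blocks remain to be zeroed,
# i.e. the disk is eager-zeroed thick.
tbz=$(grep -o 'tbz [0-9]*' dump.txt | awk '{print $2}')
echo "tbz=$tbz"
```

If tbz came back non-zero, some blocks would still be pending zeroing, which is what you would expect from a lazily zeroed disk.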

We are on ESXi 6.5.0 Update 2 (Build 8294253); updating further is limited by compatibility with the LSI RAID controller's management package.

1 Reply
a_p_
Leadership

This looks like a UI issue/bug.

I was able to reproduce this on an ESXi 6.7 host with the latest Host Client v1.33.4 installed. While adding the existing virtual disk to another VM, it showed up as Lazily Zeroed. However, after saving the configuration and opening it again, the virtual disk showed up correctly as Eagerly Zeroed and allowed me to change the sharing mode.


André