VMware Cloud Community
ferdis
Hot Shot

Why does Inflate disk do eagerzeroedthick?

Hi,

Why does Inflate disk do eagerzeroedthick? Why not zeroedthick? Because the default format for new disks is zeroedthick, which is deployed much faster than eagerzeroedthick.

Thanks.

9 Replies
4nd7
Enthusiast

Hi,

Maybe they thought that since you are inflating, you really need the extra performance that an eagerzeroedthick disk can provide.

depping
Leadership

Why wouldn't inflate do eagerzeroedthick? There's no performance gain or anything like that in zeroedthick compared to thin, so what's the point?

Duncan (VCDX)

Available now on Amazon: vSphere 4.1 HA and DRS technical deepdive

4nd7
Enthusiast

Hi Duncan,

Shouldn't zeroedthick be a little bit faster than thin, because it is already allocated?

DCjay
Enthusiast

From a test I performed a while ago in my home lab, the performance difference is negligible.

Jay

depping
Leadership

There is a difference indeed, but it is negligible, as the main overhead is still the "zero-out" that needs to occur with both thin and zeroedthick.
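The shared zero-on-first-write overhead can be sketched with a toy cost model (illustrative Python only; the cost units and the `first_write_cost` function are invented for this sketch, not real VMkernel behaviour):

```python
# Toy model: why thin and zeroedthick pay a similar first-write penalty
# while eagerzeroedthick does not. Costs are arbitrary units, not real I/O.

def first_write_cost(fmt, block_already_zeroed):
    """Cost of a guest write to one block: 1 for the write itself,
    plus 1 if the block must be zeroed out first."""
    if fmt == "eagerzeroedthick" or block_already_zeroed:
        return 1  # pre-zeroed: just the write
    return 2      # thin and zeroedthick: zero-on-first-write, then the write

print(first_write_cost("thin", False))              # 2
print(first_write_cost("zeroedthick", False))       # 2
print(first_write_cost("eagerzeroedthick", False))  # 1
```

Subsequent writes to an already-touched block cost the same in all three formats, which is why the difference fades after the first full write pass.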

Duncan (VCDX)

Available now on Amazon: vSphere 4.1 HA and DRS technical deepdive

HMitschele
Contributor

EagerZeroedThick, as you know, pre-zeroes the VMDK. The question is: why?

This is used primarily for VMs planned for an FT configuration, to make sure that the promised space is guaranteed. If the LUN is thin provisioned, it is forced to allocate real LBNs.

EagerZeroedThick during VM creation -> zero the VMDK (write only)

After VM creation, use Inflate -> read the VMDK to find out which blocks have not been zeroed and need to be zeroed. (read/write)

Both put a heavy load on the datastore unless you have a VAAI-capable datastore.
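The two paths can be sketched as a toy model (illustrative Python, not VMkernel code; the block flags and I/O counters are invented for the sketch):

```python
# Creation-time eager zeroing only writes; a later Inflate must first read
# each block to see whether it still needs to be zeroed.

def eager_create(nblocks):
    """Zero every block at VM creation time: writes only."""
    return {"reads": 0, "writes": nblocks}

def inflate(zeroed_flags):
    """Read every block; write zeros only to blocks not yet zeroed."""
    reads = len(zeroed_flags)
    writes = sum(1 for z in zeroed_flags if not z)
    return {"reads": reads, "writes": writes}

print(eager_create(8))                      # {'reads': 0, 'writes': 8}
print(inflate([True, False, False, True]))  # {'reads': 4, 'writes': 2}
```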

Harry

depping
Leadership

Eagerzeroedthick is also often used for VMs which need maximum performance straight away. As the disk is already allocated and zeroed out before the OS writes, that overhead isn't there.

Duncan (VCDX)

Available now on Amazon: vSphere 4.1 HA and DRS technical deepdive

PUREJOY
Enthusiast

Even with a VAAI-integrated array I am seeing a long duration to inflate a 100 GB LUN and a lot of SCSI reserve/release activity.
Is this a bug?
Somehow the Inflate operation (the vmkfstools inflate command, or the right-click Inflate in the datastore browser) is not calling write zeros (or WRITE SAME).

Plan to ping VMware on this.

Architect @ Pure Storage || www.purestorage.com || http://www.purestorage.com/blog/ || http://twitter.com/#!/purestorage ||@ravivenk || VCAP-DCA5, VCP 4, VCP 5
rickardnobel
Champion

PUREJOY wrote:

Even with a VAAI-integrated array I am seeing a long duration to inflate a 100 GB LUN and a lot of SCSI reserve/release activity.
Is this a bug?

There will still have to be 100 GB of disk writes done on the SAN, so it is expected to take some time. With VAAI zeroing you just make the ESXi host transfer far fewer commands over the storage network.
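As a back-of-the-envelope sketch of that trade-off (all numbers here are assumptions for illustration, not measured values):

```python
# The array still has to write the full 100 GB of zeros either way; the
# offload only shrinks what the host pushes over the storage network.

GiB = 1024**3
disk = 100 * GiB

# Without offload: the host ships real zero payload, e.g. in 1 MiB writes.
io_size = 1 * 1024**2
plain_cmds = disk // io_size
plain_payload = disk                   # all 100 GiB of zeros cross the wire

# With WRITE SAME: a single 512-byte zero pattern per command, each command
# covering a larger extent (16 MiB here is an assumed value).
extent = 16 * 1024**2
ws_cmds = disk // extent
ws_payload = ws_cmds * 512

print(plain_cmds, ws_cmds)          # 102400 vs 6400 commands
print(plain_payload // ws_payload)  # payload shrinks by a factor of 32768
```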

Somehow the Inflate operation (the vmkfstools inflate command, or the right-click Inflate in the datastore browser) is not calling write zeros (or WRITE SAME).

You could check this with esxtop while doing the inflate; you must add some more fields to the default view, however. Then look at the ZERO counters while the inflate of the disk is executing.

My VMware blog: www.rickardnobel.se