VMware Cloud Community
Sigi13
Contributor

File copy to local disk Windows 2019/2022 Server

Hi

We are seeing very strange behaviour on our Windows Server 2019/2022 VMs; Windows Server 2016 does not seem to be affected.

We measured the copy performance for a 2 GiB file on a VM, using Microsoft's diskspd tool with these parameters:

.\diskspd -t2 -o32 -b4k -r4k -w0 -d60 -Sh -D -L -c2G D:\IO.dat > test01.txt
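For reference, here is the same command annotated flag by flag (meanings as I understand them from diskspd's documentation):

```
# Annotated version of the command above:
#   -t2    two worker threads
#   -o32   32 outstanding I/Os (queue depth) per thread
#   -b4k   4 KiB block size
#   -r4k   random I/O, offsets aligned to 4 KiB
#   -w0    0% writes, i.e. a pure read test
#   -d60   run for 60 seconds
#   -Sh    disable software caching and local hardware write caching
#   -D     capture per-interval IOPS statistics (the IopsStdDev column)
#   -L     measure per-I/O latency (the AvgLat / LatStdDev columns)
#   -c2G   create a 2 GiB test file
.\diskspd -t2 -o32 -b4k -r4k -w0 -d60 -Sh -D -L -c2G D:\IO.dat > test01.txt
```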

If the VM has no snapshot, we get this:

Total IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | IopsStdDev | LatStdDev | file
------------------------------------------------------------------------------------------------------------------
0 | 2984321024 | 728594 | 47.43 | 12141.91 | 2.635 | 723.90 | 1.350 | D:\IO.dat (2GiB)
1 | 2995494912 | 731322 | 47.61 | 12187.37 | 2.625 | 721.81 | 1.299 | D:\IO.dat (2GiB)
------------------------------------------------------------------------------------------------------------------
total: 5979815936 | 1459916 | 95.04 | 24329.28 | 2.630 | 1426.31 | 1.325

 

If the same VM has a snapshot, we get this:

Total IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | IopsStdDev | LatStdDev | file
------------------------------------------------------------------------------------------------------------------
0 | 5964926976 | 1456281 | 94.79 | 24265.03 | 1.318 | 1086.55 | 0.822 | D:\IO.dat (2GiB)
1 | 5954260992 | 1453677 | 94.62 | 24221.64 | 1.320 | 1011.55 | 0.794 | D:\IO.dat (2GiB)
------------------------------------------------------------------------------------------------------------------
total: 11919187968 | 2909958 | 189.40 | 48486.66 | 1.319 | 2034.48 | 0.808
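As a quick sanity check (a standalone snippet, not part of diskspd), the MiB/s figures and the roughly 2x ratio can be recomputed from the raw byte totals of the two 60-second runs; the results agree with the tables to within rounding.

```python
# Recompute diskspd's MiB/s column from the "total" byte counts above.
def mib_per_s(total_bytes: int, duration_s: float = 60.0) -> float:
    return total_bytes / (1024 * 1024) / duration_s

no_snapshot_bytes = 5979815936     # "total" row, run without snapshot
with_snapshot_bytes = 11919187968  # "total" row, run with snapshot

print(f"without snapshot: {mib_per_s(no_snapshot_bytes):.2f} MiB/s")
print(f"with snapshot:    {mib_per_s(with_snapshot_bytes):.2f} MiB/s")
print(f"ratio: {with_snapshot_bytes / no_snapshot_bytes:.2f}x")
```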

 

Why do we see roughly double the throughput (95.04 MiB/s without vs. 189.40 MiB/s with a snapshot) when the VM has a snapshot? Is there an explanation for that?

6 Replies
kastlr
Expert

Hi,

 

what type of VMDK did you create for the Windows Server 2019 VM: thin, lazy zeroed thick, or eager zeroed thick?

Except for the last type, the following happens when you write to a block that has never been written before.

ESXi pauses the application I/O and writes zeros to that block before it allows your application to proceed.

So when running copy tests, creating eager zeroed thick disks is a must.
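The first-write behaviour described above can be sketched as a toy model (made-up costs, not ESXi internals; real zeroing happens at VMFS block granularity and the actual cost depends on the storage array):

```python
# Toy model of the first-write penalty on thin / lazy zeroed disks:
# the first write to a block pays an extra zeroing cost, rewrites do not.
class ThinDisk:
    def __init__(self):
        self.allocated = set()  # blocks written at least once

    def write(self, block: int) -> float:
        cost = 1.0  # nominal cost of the write itself (arbitrary unit)
        if block not in self.allocated:
            cost += 1.0  # extra cost: zero the block on first write
            self.allocated.add(block)
        return cost

disk = ThinDisk()
first_pass = sum(disk.write(b) for b in range(1000))   # every block is new
second_pass = sum(disk.write(b) for b in range(1000))  # all rewrites
print(first_pass, second_pass)  # -> 2000.0 1000.0
```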

 

Regards,

Ralf


Hope this helps a bit.
Greetings from Germany. (CEST)
Sigi13
Contributor

Hi, 

All disks are thin provisioned.

I'll try it with eager zeroed thick for comparison.

Sigi13
Contributor

First, thank you for the tip 👍

 

Here are the results with a "Thick Provision Eager Zeroed" disk

with snapshot

66.97 MiB/s

without snapshot

66.44 MiB/s 

kastlr
Expert

Hi,

 

Are these numbers per thread or per test?

I might be wrong, but as I understand it, your tests (-w0) are 100% reads.


Hope this helps a bit.
Greetings from Germany. (CEST)
Sigi13
Contributor

Yes, this was 100% read.

100% write looks like this, measured with another tool. The first 4 tests are without a snapshot. Disk D: is thin provisioned and E: is eager zeroed thick.

 

disk.jpg

Sigi13
Contributor

With snapshot

With_snapshot.jpg

without

Without_snapshot.jpg
