VMware Cloud Community
IT_Architect
Enthusiast

How much data transfers while copying from VMFS thin to NFS that doesn't support thin?

1.  I copy from a VMFS volume using -thin to a 2003 Server hosted NFS volume.  However, there is no indication that the destination supports thin, and the file size becomes the declared size.  When I copy from the NFS share that doesn't support thin to a VMFS volume that does, the file again becomes thin.

  a.  When I copy from the VMFS volume to the NFS volume, how much data actually traverses the network?

  b.  When I copy from the NFS volume to the VMFS volume, how much data actually traverses the network?

2.  I have two identical servers with the same ESXi 5.1 on them.  When I copy with one of them, the copy starts immediately at 0% and counts up, while on the other it starts immediately at 10% and counts up.  I've never seen this on any other server.  Has anybody seen anything like this before?

Thanks!

rickardnobel
Champion

I do not know how much data will be copied, although it would be interesting to find out.

A very quick way to actually verify this would be to check the number of sent/received bytes before the transfer and then look again after the Storage vMotion is complete.

There are several ways to do this, but a very simple and quick method is to use: netstat -e

[screenshot: win2003-netstat.PNG]

My VMware blog: www.rickardnobel.se
IT_Architect
Enthusiast

netstat -e

Possibly.  I could shut off the public NIC and put netstat -e in the script just before and just after the backup call.  I'll give it a shot and see if it comes back with anything usable that can tell me one way or the other.  I'm trying to determine the actual throughput to the NFS share, and to do that I need to know which size is actually being transferred.

Thanks!

rickardnobel
Champion

If you could create a new thin VMDK of, say, 50 GB and then copy something like 100 MB of data into it, then transfer it back and forth while checking the network statistics, it should be possible to see whether the full size or just the (thin) data is transferred. Would that be possible in your situation?
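For reference, the test disk could be created from the ESXi shell with vmkfstools. This is a sketch only: the datastore path is a placeholder assumption, and the command is echoed as a dry run rather than executed.

```shell
# Dry-run sketch of the suggested test disk (path is a placeholder).
# vmkfstools -c creates a new VMDK; -d thin makes it thin-provisioned.
CMD='vmkfstools -c 50G -d thin /vmfs/volumes/datastore1/test/test.vmdk'
echo "$CMD"   # drop the echo to actually run it on the ESXi host
```

Dropping the `echo` and running the command on the host would create the 50 GB thin disk to experiment with.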

IT_Architect
Enthusiast

I already know the thin and thick sizes.

rickardnobel
Champion

IT_Architect wrote:

I already know the thin and thick sizes.

If the sizes are well apart from each other it would be easy to see how much is transferred.

IT_Architect
Enthusiast

If the sizes are well apart from each other it would be easy to see how much is transferred.

How so?  It's 80 gigs declared and ~55 gigs thin.  When it's on the VMFS volume it's ~55 gigs.  When I copy it to the NFS volume it shows the declared 80 gigs.

netstat -e

That doesn't work so well.  I let it run for ~15 minutes and checked it to see how it was coming along, and the new number for received bytes was smaller than the starting one.  LOL!  (Note: the bottom one is missing a digit)

[screenshot: Capture.PNG]

rickardnobel
Champion

IT_Architect wrote:

If the sizes are well apart from each other it would be easy to see how much is transferred.

How so?  It's 80 gigs declared and ~55 gigs thin.  When it's on the VMFS volume it's ~55 gigs.  When I copy it to the NFS volume it shows the declared 80 gigs.

I meant that if there was a very clear difference between them, and we could actually see the number of sent/received bytes, it would be easier to tell.

That doesn't work so well.  I let it run for ~15 minutes and checked it to see how it was coming along, and the new number for received bytes was smaller than the starting one.  LOL!  (Note: the bottom one is missing a digit)

Quite interesting! I checked on a Windows 2003 server, both with netstat and through the Task Manager network view (where you can add sent/received byte columns), did some massive file copies, and got the same result.

It appears that both netstat and taskmgr use the same internal 32-bit counter, which wraps around at 2^32 bytes, about 4.3 GB.

Even in Performance Monitor there does not seem to be any counter for the total amount of network traffic in Windows 2003.
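The wrapped counter can still be used: take a reading before and after the copy and add 2^32 back whenever the counter appears to have gone backwards. (This only works if less than ~4.3 GB crosses the wire between two readings, so for larger transfers the counter would have to be sampled periodically.) A sketch with made-up example readings:

```shell
# Recover the true byte delta from a wrapped 32-bit netstat -e counter.
# The two readings below are made-up example values, not real data.
before=4200000000
after=150000000
delta=$(awk -v b="$before" -v a="$after" \
  'BEGIN { d = a - b; if (d < 0) d += 4294967296; printf "%d", d }')
echo "$delta bytes actually transferred"
```

With these example readings the corrected delta is about 245 MB, even though the raw counter went down.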

IT_Architect
Enthusiast

About the only way I can think of is to clone a thin VM, pad one clone out with ISOs, clone another to thick, and compare the backup times of all three.

IT_Architect
Enthusiast

The situation:
We do cross-backups at night between ESXi servers using GhettoVCB.  Each server hosts a Windows 2003 Server VM running Windows Services for UNIX 3.5, with a system virtual hard drive and a data virtual hard drive that is shared as an NFS volume.  Even when I specify thin in the backup (vmkfstools) to the NFS share on the adjacent server, the space taken up by the VMs on the target NFS store becomes the full declared size.  When I restore from the NFS volume to a VMFS volume and specify thin (vmkfstools), the .vmdk is restored at the thin size.
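The clone step described above boils down to a single vmkfstools invocation. The paths below are placeholder assumptions, and the command is echoed as a dry run rather than executed:

```shell
# GhettoVCB-style thin clone to an NFS datastore (paths are placeholders).
# vmkfstools -i clones an existing VMDK; -d thin requests thin provisioning
# on the destination (honored on VMFS, but not on this NFS setup).
SRC='/vmfs/volumes/datastore1/myvm/myvm.vmdk'
DST='/vmfs/volumes/nfs-backup/myvm/myvm.vmdk'
CLONE="vmkfstools -i $SRC -d thin $DST"
echo "$CLONE"   # drop the echo to perform the clone on the ESXi host
```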

The Question:
Since the .vmdks go from the thin size on the VMFS store to the thick size on the NFS store, how much data is transferred during the backup to the NFS volume: the thin size or the thick size?

The Results:

36 GB virtual disk, thin size = 32 GB - time to back up to the NFS store = 22.88 minutes.

40 GB virtual disk, thin size = 2.6 GB - time to back up to the NFS store = 1.98 minutes.
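Computing the throughput of the two runs above, using the thin size as the amount actually transferred:

```shell
# MB/s for the two backup runs, assuming only the thin data crossed the wire.
# run 1: 32 GB thin in 22.88 minutes; run 2: 2.6 GB thin in 1.98 minutes.
rate1=$(awk 'BEGIN { printf "%.1f", 32 * 1024 / (22.88 * 60) }')
rate2=$(awk 'BEGIN { printf "%.1f", 2.6 * 1024 / (1.98 * 60) }')
echo "run 1: $rate1 MB/s, run 2: $rate2 MB/s"
```

The two rates come out nearly identical (~23.9 and ~22.4 MB/s) despite very different declared sizes, which is consistent with only the thin data traversing the network.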


What I've Observed:
1.  vmkfstools does not expand the file prior to sending.  It only sends the thin size amount of data.
2.  The cloned VM occupies the full virtual disk size on the NFS store.
3.  The NFS store is an NTFS compressed volume, so the VMs require much less room than the full file size.
4.  The inconvenient truth I discovered is that just because the volume is compressed, the VMs are thin, and Windows shows 75% free space on the volume, that doesn't mean you have any more usable space than if the volume were not compressed and the VMs were thick.
5.  Simply expanding an NFS volume to make more backup storage space is not sufficient.  One must unmount and remount the NFS volume for ESXi to see the extra space, or else the backups will fail due to insufficient disk space.
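On ESXi 5.x the remount in point 5 can be scripted with esxcli. The volume name, NFS host, and export path below are placeholder assumptions, and the commands are echoed as a dry run:

```shell
# Remount an NFS datastore so ESXi picks up a resize (names are placeholders).
VOL='nfs-backup'
NFS_HOST='192.168.1.20'
SHARE='/backup'
REMOVE="esxcli storage nfs remove -v $VOL"
ADD="esxcli storage nfs add -H $NFS_HOST -s $SHARE -v $VOL"
echo "$REMOVE"   # drop the echoes to run both on the ESXi host,
echo "$ADD"      # with no VMs running from the datastore at the time
```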

The Implications:

1.  I now have the facts necessary to accurately calculate MB/second of the backups.
2.  To use the disk space available in the compressed NFS shares effectively, the backups must be specified thin and declared much larger than the disk space actually available.  To keep this safe, I must maintain a reserve directory with large, incompressible files, so that if the volume were to run out of space I could delete them to gain the free space necessary to delete snapshots etc.  I will also need to set up triggers in Zabbix to inform me if disk space gets low on datastore1.

3.  If I ever need to downsize or shrink the backup virtual disk, I would need to copy the NFS volume contents to another volume and recreate the .vmdk used by NFS for backups, since there would not normally be enough space to create a temporary datastore of sufficient size on datastore1.

Summary:
Hopefully this information will prove helpful to others.
