VMware Cloud Community
romick1
Contributor

About Live Storage Migration in VMware ESXi 5.1

Storage vMotion uses I/O mirroring. I want to know whether, while the data is being migrated, the size of the I/O data block is compared with the size of the migrated data block to improve migration efficiency. Thanks!

1 Reply
dekoshal
Hot Shot

How Storage vMotion Works

1. First, vSphere copies over the nonvolatile files that make up a VM: the configuration file (VMX), VMkernel swap, log files, and snapshots.

2. Next, vSphere starts a ghost or shadow VM on the destination datastore. Because this ghost VM does not yet have a virtual disk (that hasn't been copied over yet), it sits idle, waiting for its virtual disk.

3. Storage vMotion first creates the destination disk. Then a mirror device, a new driver that mirrors I/Os between the source and destination, is inserted into the data path between the VM and the underlying storage. If you review the vmkernel log files on an ESXi host during and after a Storage vMotion operation, you will see log entries prefixed with SVM that show the creation of the mirror device and provide information about its operation.

4. With the I/O mirroring driver in place, vSphere makes a single-pass copy of the virtual disk(s) from the source to the destination. As changes are made to the source, the I/O mirror driver ensures that those changes are also reflected at the destination.

5. When the virtual disk copy is complete, vSphere quickly suspends and resumes the VM in order to transfer control over to the ghost VM created earlier on the destination datastore. This generally happens so quickly that there is no disruption of service, just as with vMotion.

6. The files on the source datastore are deleted.
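The copy-plus-mirror mechanism in steps 3 through 5 can be sketched in a few lines of Python. This is only an illustration of the idea, not VMware's actual driver: the `MirrorDevice` class and its `guest_write`/`copy_pass` methods are invented names, and the real mirroring happens per I/O inside the VMkernel. The key point it shows is why a single pass suffices: writes behind the copy cursor are mirrored to both disks, and writes ahead of it will be picked up when the pass reaches them.

```python
# Hedged sketch of single-pass disk copy with I/O mirroring.
# Disks are modeled as lists of blocks; names are hypothetical.

class MirrorDevice:
    def __init__(self, source, destination):
        self.source = source            # live source disk (list of blocks)
        self.destination = destination  # destination disk being populated
        self.copied_up_to = 0           # single-pass copy cursor

    def guest_write(self, block_index, data):
        """Intercept a VM write while the migration is in flight."""
        self.source[block_index] = data
        if block_index < self.copied_up_to:
            # Region already copied: mirror the write so the
            # destination never falls behind the source.
            self.destination[block_index] = data
        # Writes ahead of the cursor need no mirroring; the copy
        # pass will reach that block later and carry the new data.

    def copy_pass(self, upto=None):
        """Sequential copy; `upto` lets a caller run it incrementally."""
        end = len(self.source) if upto is None else upto
        while self.copied_up_to < end:
            i = self.copied_up_to
            self.destination[i] = self.source[i]
            self.copied_up_to = i + 1
```

Because every in-flight change is either mirrored immediately or still ahead of the cursor, the destination is an exact copy the moment the pass finishes, which is what allows the fast suspend/resume handover in step 5.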

Reference:

Book Titled "Mastering VMware vSphere 5.5" by Nick Marshall and Scott Lowe

Note: More on the Data Movers used for the disk copy

• fsdm – This is the legacy Data Mover, the most basic version. It is the slowest because the data moves all the way up the stack and then down again.

• fs3dm (software) – This is the software Data Mover, which was introduced with vSphere 4.0 and contained some substantial optimizations whereby data does not travel through all stacks.

• fs3dm (hardware) – This is the hardware offload Data Mover, introduced with vSphere 4.1. It is still fs3dm, but VAAI hardware offload is leveraged with this version. fs3dm is used in software mode when hardware offload (VAAI) capabilities are not available.
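The practical difference between a host-side copy and VAAI hardware offload can be shown with a toy model. Everything here is hypothetical (the `Array` class and its `xcopy` method are stand-ins for an XCOPY-capable storage array); the point is just that with offload, no data crosses the host's storage stack.

```python
# Illustrative contrast: host-side copy (fsdm / fs3dm software mode)
# versus VAAI-style hardware offload (fs3dm hardware mode).

class Array:
    def __init__(self, luns):
        self.luns = luns          # lun name -> bytearray
        self.host_io_bytes = 0    # bytes that crossed the host's stack

    def read(self, lun, off, n):
        self.host_io_bytes += n
        return self.luns[lun][off:off + n]

    def write(self, lun, off, data):
        self.host_io_bytes += len(data)
        self.luns[lun][off:off + len(data)] = data

    def xcopy(self, src, dst, off, n):
        # Offloaded copy: performed inside the array, so no
        # payload travels up through the host.
        self.luns[dst][off:off + n] = self.luns[src][off:off + n]

def software_copy(array, src, dst, size, block=4):
    """Host reads every block and writes it back down."""
    for off in range(0, size, block):
        array.write(dst, off, array.read(src, off, block))

def hardware_copy(array, src, dst, size, block=4):
    """Host only issues copy commands; the array moves the data."""
    for off in range(0, size, block):
        array.xcopy(src, dst, off, block)
```

Running both copies on the same data shows the software path moving every byte through the host twice (read plus write), while the offloaded path moves none, which is why fs3dm with VAAI is the preferred mode when the array supports it.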

Reference:

https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/vmware-vsphere-storage-a...

When the source filesystem uses a different block size from the destination filesystem, the legacy Data Mover (FSDM) is used. When the block sizes of source and destination are equal, the newer Data Mover (FS3DM) is used, either with VAAI hardware offload or with just its software component. In either case, null blocks are not reclaimed.
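That selection rule can be written out as a small decision function. The function name and string labels below are invented for illustration; the logic simply encodes the rule above: a block-size mismatch forces FSDM regardless of array capabilities, and only matched block sizes allow FS3DM, with or without VAAI.

```python
# Hedged sketch of Data Mover selection (illustrative, not VMware's code).

def pick_data_mover(same_block_size: bool, vaai_supported: bool) -> str:
    if not same_block_size:
        return "fsdm"              # legacy mover: data travels the full stack
    if vaai_supported:
        return "fs3dm-hardware"    # VAAI offload: the array copies the blocks
    return "fs3dm-software"       # optimized host-side copy
```

This also answers the original question indirectly: the block sizes of the source and destination filesystems are compared, but to choose the Data Mover, not to size individual I/Os during the copy.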

Reference:

Storage vMotion to thin disk does not reclaim null blocks (2004155) | VMware KB

If you found this or any other answer helpful, please consider using the Helpful button to award points.

Best Regards,

Deepak Koshal

CNE|CLA|CWMA|VCP4|VCP5|CCAH
