VMware Cloud Community
MoldRiteBud
Contributor

Migrating a huge datastore

We have vSphere 6.0 managing 3 hosts.  We have added an all-flash NAS and want to move some datastores to it.  The datastore currently used by the mail server VM is approximately 3.5 TB.  The current plan is to mount the new datastore on the hosts, attach it to the VM, create a disk, and manually copy the server's data files over.  rsync will be our friend here for minimizing downtime during the final transition.

I didn't even consider using vMotion, simply because of the sheer size of the datastore, but curiosity is getting the best of me.  Is vMotioning the VM to a different datastore a practical solution, or will it cause too much disruption to the VM?

4 Replies
ThompsG
Virtuoso

Hi MoldRiteBud and welcome to the community!

Firstly, I'm assuming you are talking about Storage vMotion ;)

Secondly, you should have little fear of moving a VM with 3.5 TB of VMDKs, assuming your infrastructure is sound.  There have been a number of changes to how Storage vMotion works, probably the biggest being the introduction of the mirror driver, which makes migrations more reliable and reduces the time they take: vSphere 5.0: Storage vMotion and the Mirror Driver - Yellow Bricks

Also, if the VM consists of multiple disks, you could move the database drives one VMDK at a time if you're really worried.  I think this actually adds more risk, but it is an option.

As with any change in our industry make sure you have a tested recovery plan.

Kind regards.

MoldRiteBud
Contributor

ThompsG

I'm assuming you are talking about Storage vMotion

Right-click the VM / Migrate / migrate storage only.  I've always assumed that's a Storage vMotion.

If I can pull this off without incurring any downtime, that would be awesome.  This is the mail server, and I've noticed that the outside sales team curls up in a fetal position if they go more than 30 minutes without looking at their phones to check for mail.  Mostly, I'm concerned about the sheer amount of time it will take; 3.5 TB is a lot of data, no matter how you slice it.

The new storage is flash based, and we upgraded to 10 Gb for the iSCSI interfaces as well.  Agreed with your advice to make sure the infrastructure is squeaky clean.  We occasionally get a 'failed to communicate with host' error partway through a (compute-only) vMotion of a VM to one particular host.  The second attempt then succeeds.  I'm trying to learn where to look in the VMware logs to see if there's something amiss internally with the networking that isn't showing up in the web client.
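For reference, these are the usual places vMotion failures leave traces (the vmk interface name and target IP below are placeholders for your environment):

```shell
# On each ESXi host (with SSH enabled), vMotion problems usually show up in:
tail -n 200 /var/log/vmkernel.log | grep -i vmotion   # low-level migration/network errors
tail -n 200 /var/log/hostd.log    | grep -i migrate   # the host agent's view of the task

# On the vCenter Server appliance, /var/log/vmware/vpxd/vpxd.log holds
# vCenter's side of the migration task.

# Quick connectivity check of the vMotion VMkernel interface from the
# source host to the destination host's vMotion IP (both are examples):
vmkping -I vmk1 192.168.10.22
```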

Thanks for the insight.

sk84
Expert

We occasionally have a 'failed to communicate with host' error when vMotioning (compute only) a VM to one particular host is partially complete.  The second attempt then succeeds.

This sounds like you are using the same vmkernel port and server NICs for management and vMotion traffic without having network I/O control enabled. vMotion is designed to utilize the maximum available physical bandwidth to make the copy process as fast and efficient as possible. If the management also runs over the same physical infrastructure, it can be slowed down so much that heartbeat timeouts occur and the hosts are displayed as disconnected.

Best practice is to either put the vMotion traffic on a separate uplink and vmkernel port or, if you use dvSwitches, to activate network I/O control and configure shares for management traffic and vMotion traffic.
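You can verify the current layout from the ESXi shell before changing anything (the vmk names below are just the typical defaults, not necessarily yours):

```shell
# List all VMkernel interfaces and their IP configuration.
esxcli network ip interface list

# Show which services are tagged on each vmknic.
esxcli network ip interface tag get -i vmk0   # typically Management
esxcli network ip interface tag get -i vmk1   # should list VMotion if tagged

# Show the standard vSwitches and which physical uplinks back them,
# so you can see whether management and vMotion share the same vmnics.
esxcli network vswitch standard list
```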

Basically, a svMotion of a 3.5 TB VMDK is no problem, especially with a 10 Gbit network infrastructure.  We copy much larger VMDKs several times a month without downtime or problems.

---
Regards, Sebastian
VCP6.5-DCV // VCP7-CMA // vSAN 2017 Specialist
Please mark this answer as 'helpful' or 'correct' if you think your question has been answered correctly.
MoldRiteBud
Contributor

I have separate VMkernel ports for vMotion and management traffic, though both are connected to the same vSwitch and have the same physical adapters attached.  This is how the other two hosts are configured, and I'm not seeing this issue on them.

I should also mention that management and vMotion are on the 1 Gb links, not the 10 Gb, so svMotion may not benefit as much.
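One quick way to confirm which links the vMotion vmknic can actually use is to check the negotiated speed of each physical NIC:

```shell
# The Speed column shows the negotiated link speed per vmnic
# (1000 = 1 Gb, 10000 = 10 Gb); svMotion throughput is capped by
# whichever uplinks the vMotion VMkernel port's vSwitch actually uses.
esxcli network nic list
```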
