VMware Cloud Community
glaviolette
Contributor

Recommended method to move VMs to temporary datastore so I can rebuild NFS pool

Our current datastore is an NFS mount on Nexenta (ZFS) backed by a pool I need to rebuild (migrating from raidz2 to a striped mirror for better performance). My initial plan was to shut down all the VMs, create a temporary pool, copy all the VMs to it from the Nexenta console (cp -r), rebuild the original pool as a striped mirror, and then copy everything back (cp -r).

I've seen several threads on this topic, and it seems like the easiest method would be to use the "Migrate" feature so the VMs don't end up fragmented. However, in some of my preliminary tests, when I copy a VM between pools from the Nexenta console it appears (from the Datastore Browser) to initially create the fully provisioned VMDK, and then after the copy finishes it shows as thin.

I'm hoping I can simply copy from the Nexenta console; using the "Migrate" option in vCenter on each of my 40 VMs seems *quite* tedious compared to issuing a single "cp -r", letting all the VMs (1.5TB) copy overnight, and then adding them back to each host.
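The overnight copy described above could be sketched as a per-VM loop. The paths and VM names below are made up for illustration (real ones would be the pool mount points on the Nexenta console); the demo uses temp directories so it can run anywhere:

```shell
#!/bin/sh
# Demo of the per-VM copy. SRC/DST stand in for the old pool's datastore
# directory and the temporary "xfer" pool; real Nexenta paths would differ.
SRC="$(mktemp -d)"
DST="$(mktemp -d)"

# Fake a couple of VM directories so the loop has something to copy.
mkdir -p "$SRC/vm01" "$SRC/vm02"
touch "$SRC/vm01/vm01.vmx" "$SRC/vm01/vm01-flat.vmdk" "$SRC/vm02/vm02.vmx"

# Copy each VM's directory as a unit so its files stay together.
for vmdir in "$SRC"/*/; do
    cp -r "$vmdir" "$DST/"
done

ls "$DST"
```

After copying back, each VM's .vmx file would still need to be re-registered on a host (Datastore Browser, "Add to Inventory"), which is the "adding them back" step mentioned above.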

Does anyone have any experience with this? Or is there a way to easily script the "Migrate" option in vCenter?

Thanks all!

1 Solution

Accepted Solutions
taylorb
Hot Shot

You can select multiple VMs in vCenter with Ctrl+click or Shift+click and migrate them all at once, as long as they have the same storage destination. You can also schedule this as a task. Either way, manually re-adding each VM by hand is no less tedious than migrating them individually.
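If multi-select still feels too manual, the per-VM migration can also be scripted (PowerCLI's Move-VM cmdlet is one route). The sketch below only prints the commands it would run, as a dry run; the `govc vm.migrate -ds` tool and flag names are an assumption to verify against your tooling, and the VM and datastore names are placeholders:

```shell
#!/bin/sh
# Dry run: print one migrate command per VM instead of executing it.
# "govc vm.migrate -ds" is an assumed CLI invocation; "TempDatastore"
# and the VM names are placeholders for this environment.
DEST_DS="TempDatastore"
for vm in vm01 vm02 vm03; do
    echo govc vm.migrate -ds "$DEST_DS" "$vm"
done
```

Dropping the `echo` (once the command is confirmed against real inventory names) would perform the migrations one after another.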

4 Replies
idle-jam
Immortal

In fact, the cp command works just fine; we've done it that way before. If you're worried, you can always test with 2-3 VMs first and see how it goes. But I doubt 40 VMs would finish copying in one night if each VM is 50GB+ going over a 1Gb link.
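That back-of-envelope estimate can be sanity-checked with quick integer math; the ~110 MB/s figure below is an assumed realistic effective rate for a 1Gb link, not a measurement:

```shell
#!/bin/sh
# Rough transfer-time estimate: 40 VMs x 50 GB each over an assumed
# ~110 MB/s effective rate on a 1 Gb link.
TOTAL_GB=$(( 40 * 50 ))
RATE_MBS=110
SECS=$(( TOTAL_GB * 1000 / RATE_MBS ))
printf '%d GB -> about %d hours %d minutes\n' \
    "$TOTAL_GB" "$(( SECS / 3600 ))" "$(( SECS % 3600 / 60 ))"
```

So roughly 2TB comes out to around five hours at that rate, which is borderline for one night once real-world overhead is included; a local array-to-array copy, as in the original post, could well be faster.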

glaviolette
Contributor

Glad to hear you've had success with this! What are you using, if I might ask?

So... "test 2-3" how? Just copy some VMs over to the other volume (using the Nexenta console), add them to a host, and make sure they start? I've done that, and they work. Or are you saying I can test them in some manner to make sure the VMs aren't heavily fragmented?

Both pools (the existing pool to be rebuilt and my "xfer" pool) are on the same Promise vTrak array connected to my Nexenta "appliance", so the copy would be local. Based on some quick copy tests, I'm hoping it would complete in about 5-6 hours.

I have a feature-creep question, if anyone happens to know. The existing NFS pool uses a 128k record size, but after some reading (and some benchmarking of my own) it looks like I should consider creating the new pool with a 64k record size for additional performance. If I create a new 64k pool and copy over VMs that were originally created on a 128k pool, will that cause any issues with VM alignment? I've read that Windows Server 2008 aligns partitions automatically, but will it re-align? I'm just learning what I can here, so please be gentle. :smileygrin:

glaviolette
Contributor

Ah!

I tried that, but I must have had a template selected in the Datacenter view, so the Migrate option wasn't available. Now I see that templates must be converted to VMs before they can be moved. Thanks!

Moving the VMs to a new datastore through vCenter will probably be slower than copying from the Nexenta console, but it's likely the safer method.

I'll mark this as answered, but if anyone has any info on my question regarding pool striping, I'd be happy to hear about it!
