VMware Cloud Community
jdees
Contributor

Planning Datastores for VMotion

I've had several questions about going with one large datastore vs. many smaller individual datastores to hold all of our VM guests. I've heard several people say that "it doesn't really matter" and that having individual datastores just means "more management."

The reasons why I don't want one large datastore are twofold:

1. Currently we only buy SAN storage in small increments. The best we could do is create a datastore with maybe 100 GB of room to grow. This is annoying but a fact of life. The repercussion is that we would be adding extents all the time whenever current servers grow or new servers are needed. Extents seem ugly because of the limit on their number. Once we reach the max (which is 32, but only if you do it right!), the only way to add space would be to create an equally large vdisk on a new LUN with whatever additional space you wanted and then copy everything over. That process also seems ugly and wastes space. (See the sketch after this list for the arithmetic.)

2. According to some SAN folks, one fat VMFS volume would perform poorly compared to individual volumes per server.
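To make point 1 concrete, here's a rough back-of-the-envelope sketch (plain Python, nothing VMware-specific; the target sizes are made-up examples) of how quickly 100 GB extent increments run into the 32-extent ceiling:

```python
# Rough planning sketch, not a VMware tool: how far 100 GB extent
# increments can take a single VMFS volume before hitting the
# 32-extent ceiling mentioned above. All numbers are assumptions
# taken from this thread, not configuration advice.

EXTENT_LIMIT = 32          # max extents per VMFS volume (per the post)
INCREMENT_GB = 100         # size of each SAN allocation we can buy

def extents_needed(target_gb, increment_gb=INCREMENT_GB):
    """Number of equal-sized extents needed to reach target_gb."""
    return -(-target_gb // increment_gb)   # ceiling division

for target in (500, 1000, 2000, 3200, 4000):
    n = extents_needed(target)
    ok = "ok" if n <= EXTENT_LIMIT else "over the 32-extent limit"
    print(f"{target:>5} GB -> {n:>2} extents ({ok})")
```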

If those two disadvantages are correct (are they?) then I would just go with individual datastores since I was told it will just mean "more management." But then I thought of another question.

Isn't it true that VM guests can only migrate with VMotion within the same datastore?

That sure seems like it would make the individual datastore option pretty worthless.

Please comment. Right now I'm wishing that extents were unlimited and that big vmfs volumes had no performance issues. Anyone else feel the same way?

8 Replies
jayolsen
Expert

The VM guests do not have to be on the same datastore. The ESX hosts just need access to whatever LUNs hold the VMs you'd want to VMotion.

jdees
Contributor

Can anyone else verify that for me?

I called in for an SR b/c I was trying to migrate a guest VM to a different datastore. I was getting a "vmotion is not configured or is misconfigured" error and was hoping the tech would help, but as soon as he saw that I was trying to migrate to a different datastore, he said I couldn't do that at all and not to bother. And then I was like "Oh..."

In this case the destination datastore was also located on the same ESX host (we only have one) so based on what you said above the LUN was visible and should have worked. Was the tech wrong then?

sheetsb
Enthusiast

It may just be a misunderstanding. VMotion does not move the datastores; it simply migrates configuration information and active state to the new ESX host. If you want to actually move the data files, that's a different problem.

VMotion allows you to "move" an active VM from one host to another without actually moving the data. Therefore any host you will VMotion a VM to or from needs to see the same datastore.
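If it helps to see that requirement spelled out, here's a minimal sketch using modern pyVmomi (not the ESX 3-era toolkit; the vCenter address, credentials, and VM name are placeholders) that lists which hosts can see each datastore backing a VM:

```python
# Sketch only: list the hosts that have each of a VM's datastores mounted.
# Every host you plan to VMotion the VM between should appear in that list.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab use only
si = SmartConnect(host="vcenter.example.com",   # placeholder vCenter
                  user="administrator", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "myvm")   # hypothetical VM name

for ds in vm.datastore:
    # ds.host is the list of host mounts for this datastore
    hosts_with_access = {mount.key.name for mount in ds.host}
    print(ds.name, "is visible to:", sorted(hosts_with_access))

Disconnect(si)
```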

Bill S.

Dave_Mishchenko
Immortal

Yes, for VMotion both ESX servers have to have access to the datastore where the VM's files are located. The idea with VMotion is to move the VM between ESX hosts while it is still running. You can, however, power down the VM and do a cold migration to another datastore on another ESX host, or to another datastore on the same ESX host. That's more likely what you want to do, and you won't have any problems with that.
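For reference, here's a sketch of that cold relocate using modern pyVmomi (the ESX 3-era workflow would be done through the VI Client; the connection details, VM name, and datastore name below are placeholders, and the VM should be powered off first):

```python
# Sketch only: move a powered-off VM's files to a different datastore.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab use only
si = SmartConnect(host="vcenter.example.com",   # placeholder vCenter
                  user="administrator", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with this name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    return next(o for o in view.view if o.name == name)

vm = find_by_name(vim.VirtualMachine, "myvm")            # hypothetical name
target_ds = find_by_name(vim.Datastore, "datastore2")    # hypothetical name

# Relocate the VM's files to the new datastore (a cold migration
# when the VM is powered off).
spec = vim.vm.RelocateSpec(datastore=target_ds)
WaitForTask(vm.RelocateVM_Task(spec))

Disconnect(si)
```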

jdees
Contributor

Wow, all that clears up a lot.

Do I have this right then?

- For VMotion to work, the ESX hosts involved have to be able to see the LUN/datastore.

- VMotion does not actually move any data from one datastore to another (since datastores aren't somehow physically tied to any particular ESX host; the hosts just share them).

- In my case, VMotion wouldn't work because I'm actually wanting to migrate to a different datastore, so I have to do this cold.

Boy, I hope I'm getting this right...

Would you say then for my scenario (described above) that I should proceed with individual datastores per VM?

Thanks to all.

Dave_Mishchenko
Immortal

- For VMotion to work, the ESX hosts involved have to be able to see the LUN/datastore.

Yes - they have to share access to the LUN, and it's important that they see the LUN with the same LUN ID.

- VMotion does not actually move any data from one datastore to another (since datastores aren't physically tied to any particular ESX host; the hosts just share them).

Yes.

- In my case, VMotion wouldn't work because I'm actually wanting to migrate to a different datastore, so I have to do this cold.

That's right.

Would you say then for my scenario (described above) that I should proceed with individual datastores per VM?

Typically you would go with LUNs in the 300 to 500 GB range and try to keep the VM count between 10 and 15 per LUN. There will be exceptions to that, but in general that would work best. Individual LUNs per VM will increase maintenance for you and won't necessarily gain you anything else.
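As a rough illustration of that guidance (the VM count and average size below are made-up inputs, not a recommendation):

```python
# Sketch only: estimate how many LUNs a given VM population needs under
# the "300-500 GB per LUN, 10-15 VMs per LUN" rule of thumb above.
import math

vm_count = 40            # hypothetical number of VMs
avg_vm_size_gb = 30      # hypothetical average footprint per VM

luns_by_count = math.ceil(vm_count / 12)                    # ~10-15 VMs per LUN
luns_by_space = math.ceil(vm_count * avg_vm_size_gb / 400)  # ~300-500 GB LUNs

luns = max(luns_by_count, luns_by_space)
print(f"Plan roughly {luns} LUNs of 300-500 GB for {vm_count} VMs")
```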

jdees
Contributor

I guess all that answers my questions. It sounds like I will still probably go with an individual datastore per VM b/c of the limitation with extents. I think that's just lame in general. Hopefully that aspect of VMFS can change in the future.

Dave_Mishchenko
Immortal

If you're limited to buying SAN storage in 100 GB increments, you could just create an individual datastore for each LUN and avoid using extents altogether. With ESX 3.0 you can connect to up to 256 LUNs.

http://www.vmware.com/pdf/vi3_301_201_config_max.pdf
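As a quick sanity check on the one-datastore-per-VM idea against that maximum (the VM count here is just a placeholder):

```python
# Sketch only: headroom under the 256-LUN-per-host maximum cited above
# if every VM gets its own 100 GB LUN.
MAX_LUNS_PER_HOST = 256
vm_count = 120        # hypothetical: one LUN per VM
other_luns = 2        # hypothetical: templates, ISO storage, etc.

total = vm_count + other_luns
headroom = MAX_LUNS_PER_HOST - total
print(f"{total} LUNs presented; headroom before the 256-LUN limit: {headroom}")
```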

PS - you can mark posts as correct and helpful to award points to the people who have helped you out.
