VMware Cloud Community
bdiringer
Contributor

Modify Concurrent svMotions Per Datastore Limit?

Hi,

Looking at the default limits on svMotion at:

http://pubs.vmware.com/vsphere-50/topic/com.vmware.vsphere.vcenterhost.doc_50/GUID-F0C0FFD7-FC60-4CF...

...it appears that the cost of an svMotion is 16, and the maximum cost of svMotions per datastore is 128.  So, the maximum concurrent svMotions per datastore is 8.

Is there a way to limit the concurrent svMotions per datastore to 1?
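Based on those numbers, here is my back-of-the-envelope reasoning for what a limit of 1 would require (my own guesses from the cost model, not from the docs):

    max concurrent svMotions = floor(maxCostPerDatastore / costPerSVmotion)
                             = floor(128 / 16)
                             = 8

    To allow only one at a time, either the datastore budget must fall
    below the cost of two operations (16 <= budget < 32, e.g. 16), or the
    per-operation cost must be raised so that two operations exceed the
    budget (e.g. cost 128 against a budget of 128).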

I tried adding the following configurations to the vCenter advanced settings (just guessing based on the syntax of limiting vMotion):

config.vpxd.ResourceManager.costPerSVmotionDatastore=128

config.vpxd.ResourceManager.maxCostPerDatastore=16

But neither of these seems to have any effect, even after restarting the vCenter service.

I am using the vCenter Server Appliance 5.0, ESXi 5.0 hosts, and iSCSI storage.

Thanks in advance for your help.

7 Replies
logiboy123
Expert

What you are asking to do is a little strange. Most people try to get as many svMotions or vMotions done as quickly as possible. Can I ask why you are trying to limit your concurrent svMotions?

bdiringer
Contributor

Running more than 1-2 concurrent svMotions seems to impact overall storage performance in my environment. In addition, total throughput on the storage array does not usually increase significantly beyond 1-2 concurrent svMotions, so I don't think the extra concurrency makes the entire operation (a datastore evacuation, for example) any faster.

logiboy123
Expert

So your storage is iSCSI?

If you only have a single 1 GbE uplink assigned to the vMotion network, then I'm pretty sure it will limit you to 2 concurrent svMotions per host.

Having said that, limiting your svMotions because your storage is underperforming is not going to help you long term. It may mask the performance impact immediately, but it would be better to find out why your storage array is having issues.

What is the model of your SAN?

How is RAID configured? RAID10, RAID5?

What is the size of your datastores/LUNs and what block size are they using?

What speed are the disks in your SAN?

Do you know what your IOPS requirements are?

Regards,

Paul

mennos
Contributor

I'm interested in the answer to the question that was actually asked: how to lower the maximum concurrent svMotions per datastore.

I can think of several reasons why someone would want to do this. In my case, I'm slowly migrating 200 VMs to another datastore located in a new physical datacentre, doing no more than 2 at a time because the link is shared with other services. This is a temporary situation.

However, I wouldn't mind being able to queue everything and have it automatically limited to just 2 at a time, so I could leave it alone until it finishes.

Is there a vpxd setting for this? Any advice would be very much appreciated.

zeki893
Contributor

I'm interested too. My setup doesn't have the IOPS to run 8 storage vMotions at once; it impacts running VMs. Has anybody found a way to do this?

netvope
Contributor

I found that if I use this:

    <ResourceManager>
      <maxCostPerHost>4</maxCostPerHost>
    </ResourceManager>

I can run one (but not two) live storage migrations at the same time.

If I use this:

    <ResourceManager>
      <maxCostPerHost>4</maxCostPerHost>
      <maxCostPerDatastore>1</maxCostPerDatastore>
      <maxCostPerDataStore>1</maxCostPerDataStore>
      <costPerSVmotionESX41>128</costPerSVmotionESX41>
      <costPerSVmotionESX5>128</costPerSVmotionESX5>
      <costPerSVmotionESX50>128</costPerSVmotionESX50>
      <costPerSVmotion>128</costPerSVmotion>
    </ResourceManager>

Live storage migration would not start at all; cold storage migration still works.

Therefore, at least one of the new parameters in the second config must be valid and can be used to limit concurrent storage migrations. The values I used were so extreme that even a single live storage migration exceeded the cost.

These parameters (other than maxCostPerHost) are not documented, so I can only guess their names. Many of the ones I tried are probably bogus. Nonetheless, this should serve as a good starting point for your own experiments.
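A less extreme fragment to try next (same caveat: the parameter name is a guess, and only one of the candidates above is likely real): leave the per-operation cost at its documented default of 16 and cap the datastore budget at 16, which should admit one svMotion (16 <= 16) but reject a second (32 > 16):

    <ResourceManager>
      <!-- guessed name; default svMotion datastore cost is 16 -->
      <!-- a budget of 16 fits one svMotion but not two -->
      <maxCostPerDatastore>16</maxCostPerDatastore>
    </ResourceManager>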

If someone figures out which parameter is real, please share! :)

Notes:

In the above tests, "storage migration" means executing "Change Datastore" in the vSphere Client. The host wasn't changed.

Software platform version: 5.0

If you don't know how to apply these config changes, see: http://www.boche.net/blog/index.php/2009/01/05/guest-blog-entry-vmotion-performance/

frankdenneman
Expert

Recently I've written an article about limiting the number of Storage vMotions, explaining the relationship between host, network and datastore cost.

http://frankdenneman.nl/2012/06/limiting-the-number-of-storage-vmotions/

Blogging: frankdenneman.nl Twitter: @frankdenneman Co-author: vSphere 4.1 HA and DRS technical Deepdive, vSphere 5x Clustering Deepdive series