I haven't seen a setting such as this per se. Anytime I've used maintenance mode, I haven't seen more than about 5 concurrent migrations at once. There were several queued, but only about 5 or so active. Are you having an issue with a large number?
I have some suspicions that SCSI timeouts seen in the past may have been caused by too many migrations occurring at the same time - with hosts 'fighting' for control of LUNs. I would like to be able to limit the number of simultaneous migrations to 2.
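In the absence of a supported setting, one workaround I've seen is to script the evacuation yourself instead of relying on maintenance mode to fire off all the migrations at once. The sketch below is generic Python, not a VMware API — `migrate()` is a hypothetical stand-in for whatever call actually moves a VM in your tooling — but it shows the throttling idea: a semaphore caps how many migrations run simultaneously at 2, and the rest wait their turn.

```python
import threading
import time

MAX_CONCURRENT = 2  # desired cap on simultaneous migrations
sem = threading.BoundedSemaphore(MAX_CONCURRENT)

# Bookkeeping so we can verify the cap is actually honoured.
active = 0
peak = 0
lock = threading.Lock()

def migrate(vm_name):
    """Hypothetical stand-in for the API call that migrates one VM."""
    global active, peak
    with sem:  # at most MAX_CONCURRENT threads get past this point at once
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)  # simulate the migration taking some time
        with lock:
            active -= 1

# Kick off one worker per VM; only 2 migrate at a time, the rest queue.
threads = [threading.Thread(target=migrate, args=("vm%d" % i,))
           for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # highest number of migrations observed running at once
```

The same pattern works with any real migration API: acquire the semaphore before starting a migration task, release it when the task completes.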
What kind of shared storage are you using? Is it FC or iSCSI? How many hosts and VMs are you running, and how many VMs do you have per LUN?
I have ~8-10 hosts per cluster, and limit my LUNs to about 15-20 VMs each. I have yet to run into SCSI timeout issues, even in extreme cases where I've put a host with about 35 VMs into maintenance mode. As posted earlier, I had about 5 active migration tasks, and the rest queued until they were able to migrate. The whole process of draining the host into maintenance mode took about 10-20 minutes, but everything completed successfully. I've done this during patch cycles when I had fewer hosts in my cluster (5), and have had to do it several times to get all hosts patched.
I use FC, with two HBAs and several array ports, for 8 total paths.
We use FC storage as well: two HBAs and 8 total paths. I limit our clusters to 8 hosts, and a typical LUN has 15 VMs.
We used to see excessive SCSI reservations and timeouts in the logs at about the same times I put hosts into maintenance mode. We also run SAN replication to our DR site, which may be causing some of the errors, but for peace of mind I was hoping to limit active migrations.
As part of a 3.5 upgrade I have been reconfiguring HBA settings, as per advice from IBM - we use an IBM storage virtualisation device. This might optimise our storage response and resolve the errors.
I have the same question and haven't been able to find an answer, so I'm giving your thread a bump. I would like to reduce the number of VMs migrated at once to ensure there is no packet loss or I/O contention during migration. I have not found any advanced configuration setting that appears to control this behavior.
vSphere 5.5 Enterprise Plus environment
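For what it's worth on 5.x: the only knob I've seen cited for this is an unsupported vpxd.cfg tweak on the vCenter server, described in a VMware KB about limiting simultaneous Storage vMotion operations. The idea is that each migration "costs" a certain number of points against a per-host budget (maxCostPerHost), so lowering the budget lowers the number of concurrent operations. The element name, paths, and default value below are from that KB as I recall them, so verify them against the KB for your exact build, back up vpxd.cfg first, and note that this can also throttle other provisioning operations that draw from the same cost pool.

```xml
<!-- vpxd.cfg lives at /etc/vmware-vpx/vpxd.cfg (vCenter appliance) or
     C:\ProgramData\VMware\VMware VirtualCenter\vpxd.cfg (Windows).
     Restart the vCenter Server service after editing. -->
<config>
  <vpxd>
    <ResourceManager>
      <!-- Default is reportedly 8; lowering it reduces how many
           migration/provisioning operations can run against one
           host at a time. -->
      <maxCostPerHost>4</maxCostPerHost>
    </ResourceManager>
  </vpxd>
</config>
```

Test in a lab first; since it's a vCenter-side setting, it won't help anyone doing host-direct migrations.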