You can set the DRS and HA settings for each VM individually in the cluster settings.
In the vSphere C# client there are separate tabs for the HA and DRS per-VM settings: HA VM Options and DRS VM Options.
In the Web Client you can find those settings under a single tab: Manage -> Settings -> VM Overrides.
Make sure you set both the DRS automation level and the HA restart priority to Disabled for those Lync 2013 VMs.
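To illustrate how the VM Overrides setting behaves, here is a minimal conceptual sketch (plain Python, not a vSphere API call): a per-VM automation-level override takes precedence over the cluster-wide default, so a VM set to "disabled" is left alone by DRS even in a fully automated cluster. The VM names are made up for the example.

```python
# Conceptual model of DRS VM Overrides: a per-VM override wins over
# the cluster-wide default automation level.

CLUSTER_DEFAULT = "fullyAutomated"

# Hypothetical per-VM overrides, as configured under VM Overrides
vm_overrides = {
    "lync-fe-01": "disabled",
    "lync-fe-02": "disabled",
}

def effective_automation_level(vm_name):
    """Return the DRS automation level that applies to a given VM."""
    return vm_overrides.get(vm_name, CLUSTER_DEFAULT)

print(effective_automation_level("lync-fe-01"))  # disabled
print(effective_automation_level("web-01"))      # fullyAutomated
```

Note that this only stops DRS-initiated moves; it does nothing against a manual migration, which is the gap discussed further down the thread.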
You can create a VM-Host affinity rule for your Lync servers. These DRS rules will also block manual vMotions.
Affinity rules documentation: VMware vSphere 5.1
There is no advanced setting to do the same.
The simple way is to create a VM-Host affinity MUST rule.
1. Create a VM group containing the VMs you do not want to vMotion. (If you want to keep some VMs on other hosts, you need to create multiple VM groups.)
2. Create a host group containing only the host on which you need to keep these VMs. (If there are multiple hosts, you need to create multiple host groups.)
3. Create a VM-Host affinity MUST rule by associating the appropriate VM group and host group. (Repeat the same for every pair of VM group and host group.)
VI Client way: http://blog.shiplett.org/vsphere-5-drs-groups-and-rules/
1. A VM-Host affinity rule is respected by DRS, HA, DPM, and manual intervention. In all these cases the VMs cannot be vMotioned to another host.
2. Even if you disable DRS on the cluster or on that VM, this rule still holds.
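The effect of a MUST rule on migration targets can be sketched with a small conceptual model (plain Python, not the real DRS implementation): any destination host outside the rule's host group is rejected, regardless of who initiates the move. The VM and host names are illustrative.

```python
# Conceptual model of a VM-Host affinity MUST rule: a pinned VM may
# only land on hosts that are members of the rule's host group.

vm_group = {"lync-fe-01", "lync-fe-02"}   # VMs pinned by the rule
host_group = {"esx-01"}                   # hosts they must run on

def vmotion_allowed(vm, destination_host):
    """A MUST rule rejects any destination outside the host group."""
    if vm in vm_group and destination_host not in host_group:
        return False
    return True

print(vmotion_allowed("lync-fe-01", "esx-02"))  # False - rule violated
print(vmotion_allowed("lync-fe-01", "esx-01"))  # True
print(vmotion_allowed("web-01", "esx-02"))      # True - not pinned
```

This also shows the maintenance-mode side effect raised later in the thread: with a single host in the group, there is no valid destination at all, so the VM cannot be evacuated while the rule is mandatory.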
Another way is to disable DRS/HA on that VM, as specified by Mustafa.
If this is useful, please award points as appropriate.
Thanks both for your answers. Unfortunately neither fully solves my problem!
I am already planning to set DRS to disabled to prevent any automated vMotions, but this doesn't prevent an administrator from performing a manual migration.
If I create a DRS VM-to-Host affinity rule it does, as you say, prevent both DRS vMotions and manual vMotions, but it has a side effect: if you need to perform maintenance on the host, you can't migrate the VM off it while powered on. If you power the VM off and migrate it to an alternate host (which you can do), the moment you try to power the VM on, a lovely message is displayed saying that it would violate the DRS rule.
So my question is: given the crazy number of advanced settings that exist, is there a virtual machine advanced setting that disables vMotion regardless of DRS rules or administrator actions? If not, then my best option is going to be a section in the operational guide that identifies these machines as not valid candidates for vMotion (in conjunction with disabling DRS for those VMs).
I'm not aware of an advanced option for a VM. However, if this is to prevent administrators from accidentally vMotioning the VM, you could work around it by creating, e.g., a new port group on this host with a name that doesn't exist on other hosts. With this setup, anyone who tries to vMotion the VM will receive an error message.
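The port-group workaround above can be modeled in a few lines (a conceptual sketch, not the real compatibility checker): vMotion requires the destination host to expose a port group matching the VM's network, so a uniquely named port group restricts the VM to one host. The host and port-group names are made up.

```python
# Conceptual model of the vMotion network compatibility check: the
# destination host must have a port group matching the VM's network.

host_port_groups = {
    "esx-01": {"VM Network", "Lync-Pinned-PG"},  # only host with the PG
    "esx-02": {"VM Network"},
    "esx-03": {"VM Network"},
}

def network_compatible(vm_port_group, destination_host):
    """True if the destination host carries the VM's port group."""
    return vm_port_group in host_port_groups[destination_host]

# A VM attached to "Lync-Pinned-PG" has exactly one valid destination
valid = [h for h in host_port_groups if network_compatible("Lync-Pinned-PG", h)]
print(valid)  # ['esx-01']
```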
Indeed André, that's another way.
In the case of a planned cold migration you'd need to reconfigure the network adapter, though. That should be about the same amount of work as disabling the DRS rule, especially if you have to do it for multiple VMs.
As per my understanding, there is no way to have the VMs moved to another available host only when the host is placed into maintenance mode.
You need to work with the options specified earlier by everyone.
In the case of a VM-Host affinity rule, you can change that particular rule from "Must" to "Should" while you put the host into maintenance mode.
Another way/trick to prevent vMotion: you can configure the VM's SCSI controller to use virtual or physical bus sharing (this requires an eager-zeroed thick disk).
Bus sharing will prevent both vMotion and Storage vMotion.
If someone tries to vMotion the VM, it will fail with a compatibility error, but this is just a trick, another way.
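A conceptual sketch of how the bus-sharing trick surfaces (plain Python; the error text is illustrative, not the exact vCenter message): the vMotion precheck rejects VMs whose SCSI controllers are configured for sharing.

```python
# Conceptual model of the vMotion precheck for SCSI bus sharing.
# The sharing mode values mirror vSphere's noSharing / virtualSharing /
# physicalSharing options; the message text is illustrative only.

def vmotion_precheck(scsi_bus_sharing):
    """Return (ok, message) for a simplified vMotion precheck."""
    if scsi_bus_sharing in ("virtualSharing", "physicalSharing"):
        return False, "VM is configured to use a device that prevents migration"
    return True, "ok"

print(vmotion_precheck("physicalSharing"))
print(vmotion_precheck("noSharing"))
```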
Bayu Wibowo | VCIX6-DCV/NV
Author of VMware NSX Cookbook http://bit.ly/NSXCookbook
https://nz.linkedin.com/in/bayupw | twitter @bayupw
To disable vMotion of a VM for a specific user or group, just create an additional role with the specific vMotion privileges disabled. Assign the role on the object and the user can't vMotion the VM anymore; however, he is still able to place the host into maintenance mode.
I wrote a more elaborate step-by-step blog post if you are interested: Disable vMotion for a single VM - frankdenneman.nl
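The permissions approach can be sketched as a simple privilege-set model (plain Python, not the vCenter authorization engine). The privilege IDs `Resource.HotMigrate`, `Resource.ColdMigrate`, and `Host.Config.Maintenance` follow vSphere's naming; the "NoVMotion" role itself is hypothetical.

```python
# Conceptual model of the permissions approach: a role stripped of
# the migration privileges denies vMotion while still permitting
# maintenance-mode operations.

ADMIN_PRIVILEGES = {
    "Resource.HotMigrate",        # migrate powered-on VM (vMotion)
    "Resource.ColdMigrate",       # migrate powered-off VM
    "Host.Config.Maintenance",    # enter/exit maintenance mode
    "VirtualMachine.Interact.PowerOff",
}

# Hypothetical "NoVMotion" role: everything except migration
no_vmotion_role = ADMIN_PRIVILEGES - {"Resource.HotMigrate", "Resource.ColdMigrate"}

def can(role, privilege):
    """True if the role grants the given privilege."""
    return privilege in role

print(can(no_vmotion_role, "Resource.HotMigrate"))      # False -> vMotion denied
print(can(no_vmotion_role, "Host.Config.Maintenance"))  # True  -> maintenance OK
```

This is why the approach is cleaner than the DRS-rule or port-group tricks: it targets who may migrate rather than making the VM technically immovable.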
Hi - for your requirement, I would recommend mounting a CD/DVD backed by host-local media, which would prevent vMotion of that particular VM during a host migration.
I feel it is a simple solution to my knowledge, and any other advice is welcome.
Pertaining to your original problem I noticed this article...Virtualizing Microsoft Lync Server – Let's Clear up the Confusion | Virtualize Business Critical Applications - VMware B…
On the fourth point, where the writers state that VM portability “breaks the inherent availability functionality in Lync Server pools”, we are unaware of the “breakage” alluded to in the document. VMware’s “portability” feature is vMotion, a feature that has long been in use for clustered critical applications like Microsoft Exchange Server (DAG) and Microsoft SQL Server (MSCS or AlwaysOn). We are not aware of any documented incidents of “breakage” attributable to vMotion operations on these workloads, or even for Lync.
In the “Host-based failover clustering and migration for Exchange“ section of its Exchange 2013 virtualization whitepaper, Microsoft defined the following strict criteria for its support of VM “portability” for Exchange workloads:
- Does Microsoft support third-party migration technology? Microsoft can’t make support statements for the integration of third party hypervisor products using these technologies with Exchange, because these technologies aren’t part of the Server Virtualization Validation Program (SVVP). The SVVP covers the other aspects of Microsoft support for third-party hypervisors. You need to ensure that your hypervisor vendor supports the combination of their migration and clustering technology with Exchange. If your hypervisor vendor supports their migration technology with Exchange, Microsoft supports Exchange with their migration technology.
- How does Microsoft define host-based failover clustering? Host-based failover clustering refers to any technology that provides the automatic ability to react to host-level failures and start affected virtual machines on alternate servers. Use of this technology is supported given that, in a failure scenario, the virtual machine is coming up from a cold boot on the alternate host. This technology helps to make sure that the virtual machine never comes up from a saved state that’s persisted on disk because it will be stale relative to the rest of the DAG members.
- What does Microsoft mean by migration support? Migration technology refers to any technology that allows a planned move of a virtual machine from one host machine to another host machine. This move could also be an automated move that occurs as part of resource load balancing, but it isn’t related to a failure in the system. Migrations are supported as long as the virtual machines never come up from a saved state that’s persisted on disk. This means that technology that moves a virtual machine by transporting the state and virtual machine memory over the network with no perceived downtime is supported for use with Exchange. A third-party hypervisor vendor must provide support for the migration technology, while Microsoft provides support for Exchange when used in this configuration.
vMotion, DRS and vSphere HA satisfy all of those requirements without exceptions.
Granted, when not properly configured, a vMotion operation can lead to a brief network packet loss which can then interfere with the relationship between/among clustered VMs. This is a known technical condition in Windows clustering which is not unique to vMotion operations. This condition is well understood within the industry and documented by Microsoft in its Tuning Failover Cluster Network Thresholds Whitepaper.
This is further helpfully documented by Microsoft in the following publication: Having a problem with nodes being removed from active Failover Cluster membership?
Backup vendors have also incorporated these considerations into their publications. See: How do I avoid failover between DAG nodes while the VSS snapshot is being used?
Like most other third-party vendors supporting Microsoft’s Windows Operating System and applications, VMware has incorporated several of the recommended tuning and optimization steps contained in this whitepaper into several of our guides and recommendations to our customers. See our Microsoft Exchange 2013 on VMware Best Practices Guide for an example.
The VMware’s Microsoft Exchange 2013 on VMware Best Practices Guide includes several other configuration prescriptions that, when adhered to, minimize the possibility of an unintended failover of clustered Microsoft application VMs, including the Lync Server nodes. We wish to stress that our “portability” features do not negate or impair the native availability features of Microsoft Lync Server workloads.
We are unaware of any technical impediments to combining vSphere’s robust and proven host-level clustering and availability features with Microsoft Lync Server’s application-level availability features and we encourage our customers to continue to confidently leverage these superior combinations when virtualizing their Lync servers on the vSphere platform. In the absence of any documented and proven incompatibility among these features, we are confident that customers virtualizing their Microsoft Lync Server infrastructure on the vSphere platform will continue to enjoy the full benefits of support to which they are contractually entitled without any inhibition.
In the unlikely event that virtualizing Lync Server workloads results in a refusal of support from Microsoft to a customer, such customers can open a support request ticket with VMware's Global Support Service and VMware will leverage the framework of support agreements among members of the TSANet "Multi Vendor Support Community" to provide the necessary support to the customers. Both Microsoft and VMware are members of the TSANet Alliance.
First, create a new DRS group for the host.
Second, create a new DRS group for the VM.
Third, create a new DRS rule to match the host group with the VM group.
1. Log in as a vCenter administrator.
2. Go to the Hosts and Clusters view.
3. Select the cluster > Manage tab > Settings.
4. Select DRS Groups > click Add > enter a name > select 'Host DRS Group' > click Add > select the target host > OK.
5. Select DRS Groups > click Add > enter a name > select 'VM DRS Group' > click Add > select the target VM > OK.
6. Select DRS Rules > enter a name > check 'Enable rule' >
Type: select 'Virtual Machines to Hosts' >
VM Group: select the VM DRS group > then select 'Must run on hosts in group'.
Host Group: select the host DRS group > then press OK.
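The three objects those steps create can be sketched as plain data structures (a conceptual model, not pyVmomi objects): a VM group, a host group, and a VM-Host rule binding them, with `mandatory=True` corresponding to "Must run on hosts in group". The group and object names are illustrative.

```python
# Conceptual model of the DRS objects created by the steps above.
from dataclasses import dataclass, field

@dataclass
class VmGroup:
    name: str
    vms: set = field(default_factory=set)

@dataclass
class HostGroup:
    name: str
    hosts: set = field(default_factory=set)

@dataclass
class VmHostRule:
    name: str
    vm_group: VmGroup
    host_group: HostGroup
    mandatory: bool = True   # "Must run on hosts in group"
    enabled: bool = True     # the 'Enable rule' checkbox

rule = VmHostRule(
    name="Pin-Lync-FE",
    vm_group=VmGroup("Lync-VMs", {"lync-fe-01"}),
    host_group=HostGroup("Lync-Hosts", {"esx-01"}),
)
print(rule.mandatory, rule.enabled)  # True True
```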
Just another 2 cents:
I'd go the permissions way as Frank describes. Most other methods mentioned (local .iso, non-existent port group, hard DRS rules) would also prevent HA, which might not always be desirable...
Bus sharing seems like it would do the trick as well; however, the "permission approach" would probably be a lot cleaner.