VMware Cloud Community
vmproteau
Enthusiast

Windows 2008 R2 SQLCluster vSphere 4.0/4.1

We have a request to quickly build an SQL cluster with a pair of VMs. I have the documentation and understand there are things to consider with this configuration; I would normally take the time to research it thoroughly, but I just won't be able to.

I was looking for a quick list of complications, cautions, and other potential pitfalls when doing this. For instance, I am aware that you have to disable DRS and HA for the clustered VMs.

I know this is a long shot, but if anyone has insight or supports clustered SQL in their vSphere environment, please let me know your experiences with such a configuration. Any potential surprises?

6 Replies
AlbertWT
Virtuoso

Hi mate,

Yes, I have been deploying multiple SQL Server clusters on VMware, using variations of 2008/2008 R2 for both the OS and the SQL Server version.

What I do is:

Hard drive: I present the LUN from my FC SAN as a physical compatibility mode RDM, which means I lose VM benefits such as snapshots, DRS, and HA.

SCSI: I use a separate Paravirtual controller for each LUN, with bus sharing set to Physical, since I am spreading the VM nodes across different ESX hosts (a rough scripting sketch of this disk/controller setup follows below).

Network: I use the same network label on the vNICs, but I assign the heartbeat IP addresses from a different subnet, so there is no need to configure anything on the physical network switch.

So in the end, the VM only consumes the hypervisor's compute resources (CPU and memory hot-add work perfectly here), and backups are handled the same way as for a normal physical box.
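If you prefer to script that layout rather than click through the vSphere Client, the rough shape of it via the vSphere API would be something like the pyVmomi sketch below. The vCenter address, credentials, VM name, and the naa LUN path are placeholders, not values from this thread.

```python
# Hypothetical sketch (placeholders throughout): the MSCS disk layout described
# above -- a dedicated SCSI controller with physical bus sharing plus a
# physical compatibility mode RDM.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_vm(name):
    """Find a VM by name with a simple container-view walk."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        return next(v for v in view.view if v.name == name)
    finally:
        view.Destroy()

vm = find_vm("SQLCLUSTERVM-1")

# New SCSI controller on bus 1; bus sharing is "physical" because the second
# cluster node runs on a different ESX host.
ctrl_spec = vim.vm.device.VirtualDeviceSpec()
ctrl_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
ctrl_spec.device = vim.vm.device.ParaVirtualSCSIController()
ctrl_spec.device.key = -101          # temporary key, only valid inside this reconfigure
ctrl_spec.device.busNumber = 1
ctrl_spec.device.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.physicalSharing

# Shared LUN mapped as a pass-through (physical compatibility) RDM.
rdm = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
rdm.deviceName = "/vmfs/devices/disks/naa.60060160..."    # placeholder LUN path
rdm.compatibilityMode = vim.vm.device.VirtualDiskOption.CompatibilityMode.physicalMode
rdm.diskMode = vim.vm.device.VirtualDiskOption.DiskMode.independent_persistent

disk_spec = vim.vm.device.VirtualDeviceSpec()
disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
disk_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
disk_spec.device = vim.vm.device.VirtualDisk()
disk_spec.device.backing = rdm
disk_spec.device.controllerKey = -101      # attach the RDM to the new controller
disk_spec.device.unitNumber = 0

vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[ctrl_spec, disk_spec]))
```

The negative key is just a temporary handle so the new disk can point at the new controller within the same reconfigure call; in practice you would also verify the LUN path and wait for the task to complete.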

HTH.

Kind Regards,

AlbertWT

/* Please feel free to provide any comments or input you may have. */
AlbertWT
Virtuoso

And this is what I'd like to try: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=100349...

That is, to use an eager-zeroed thick VMDK on top of VMFS as the disk for MSCS, in case creating a LUN for a physical RDM is not an option for you.
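On the API side the only real change for the VMFS route is the disk backing: eager-zeroed thick is a flat VMDK with eagerlyScrub on and thin provisioning off. A minimal sketch, reusing the connection and find_vm helper from my earlier sketch (the size and controller lookup are placeholders); vmkfstools -c <size> -d eagerzeroedthick does the same thing from the console.

```python
# Hypothetical sketch: the VMFS alternative from the KB -- an eager-zeroed thick
# VMDK on the shared-bus controller instead of a physical RDM.
from pyVmomi import vim

vm = find_vm("SQLCLUSTERVM-1")

# Reuse the shared-bus SCSI controller added earlier (bus 1).
ctrl = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualSCSIController) and d.busNumber == 1)

backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
backing.diskMode = vim.vm.device.VirtualDiskOption.DiskMode.persistent
backing.thinProvisioned = False
backing.eagerlyScrub = True        # eager-zeroed thick: space allocated and zeroed up front

disk_spec = vim.vm.device.VirtualDeviceSpec()
disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
disk_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
disk_spec.device = vim.vm.device.VirtualDisk()
disk_spec.device.backing = backing
disk_spec.device.capacityInKB = 100 * 1024 * 1024   # 100 GB placeholder
disk_spec.device.controllerKey = ctrl.key
disk_spec.device.unitNumber = 1

vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[disk_spec]))
```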

/* Please feel free to provide any comments or input you may have. */
vmproteau
Enthusiast

Really appreciate the posts Albert. Exactly what I was looking for.

We will be upgrading this client's cluster from vSphere 4.0 to ESXi 4.1. This will likely happen sometime after the MS cluster is up and running. Are you aware of any differences between the two ESX versions with respect to MS Clustering?

AlbertWT
Virtuoso

Hi Mate,

AFAIK, it doesn't affect the guest OS at all. The only thing was the VMware Tools upgrade, which I do on the passive node first, fail over to it once it is done, upgrade the other node, and then fail back again, so there is no outage for the DB service at all, even during the maintenance weekend 🙂
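If you want to script that, only the Tools-upgrade part goes through the vSphere API; the failover itself is done inside Windows (Failover Cluster Manager / cluster.exe). A rough sketch, using the same placeholder find_vm helper as before:

```python
# Hypothetical sketch: upgrade VMware Tools on the passive MSCS node only.
passive = find_vm("SQLCLUSTERVM-2")     # placeholder: whichever node is currently passive

if passive.guest.toolsVersionStatus != "guestToolsCurrent":
    task = passive.UpgradeTools_Task()  # in-guest Tools upgrade; expect a reboot
    # Wait for the task, confirm the node rejoins the cluster, fail the SQL
    # group over to it, then repeat on the other node and fail back.
```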

/* Please feel free to provide any comments or input you may have. */
vmproteau
Enthusiast

Another thing came up with regard to host upgrades, since the MSCS cluster VMs add complexity to that process. I was thinking it would go like this (a rough sketch of the per-node move follows the list):

  1. All hosts in the cluster are upgraded except the two running the SQL cluster VMs (call them ESXHOST-1 and ESXHOST-2).

  2. SQLCLUSTERVM-1 gets powered down on ESXHOST-1.
  3. SQLCLUSTERVM-1 gets cold migrated to another host (already upgraded).
  4. SQLCLUSTERVM-1 gets powered up.
  5. vMotion remaining ESXHOST-1 VMs to other hosts. Upgrade ESXHOST-1.
  6. SQLCLUSTERVM-2 gets powered down on ESXHOST-2.
  7. SQLCLUSTERVM-2 gets cold migrated to another host (already upgraded).
  8. SQLCLUSTERVM-2 gets powered up.
  9. vMotion remaining ESXHOST-2 VMs to other hosts. Upgrade ESXHOST-2.
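Roughly, I'd expect the per-node move (steps 2-4) to look something like this through the API; this is only a pyVmomi sketch with placeholder names, reusing the connection, content view, and find_vm helper from Albert's post above:

```python
# Hypothetical sketch of steps 2-4 for one node: power off, cold-migrate to an
# already-upgraded host, power back on.
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_host(name):
    """Find an ESX host by name with a container-view walk."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    try:
        return next(h for h in view.view if h.name == name)
    finally:
        view.Destroy()

node = find_vm("SQLCLUSTERVM-1")
target = find_host("esxhost-3.example.com")        # placeholder: an upgraded host

WaitForTask(node.PowerOffVM_Task())                # step 2 (better: guest shutdown first)
WaitForTask(node.RelocateVM_Task(                  # step 3: cold migration, same datastore
    spec=vim.vm.RelocateSpec(host=target)))
WaitForTask(node.PowerOnVM_Task())                 # step 4
```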

One of our engineers was testing and noticed that "if you migrate the secondary node, you have to re-establish the RDM symlinks (delete and re-create the drives)."

Have you experienced this, or do you know what he is referring to? I'm trying to determine precisely how the upgrades would happen. If just leaving the node powered down during the entire upgrade makes more sense, we'll have to let our client know that extended time running on one node may be required during patches or upgrades to the hosts.

vmproteau
Enthusiast

Update: Scratch that. I got clarification from him. He was referring to a Storage vMotion, not just a host-to-host cold move. Host updates won't be an issue.
