VMware Cloud Community
smKKe
Contributor

Need to Migrate SQL Cluster to VMware

Due to the Microsoft licensing changes on how they license SQL Server (2008 and on), I need to migrate a physical SQL cluster to VMware.  Currently my SQL cluster is connected to an EMC VNX5300.  I would like to either use an in-guest iSCSI initiator (yes, I know there is a little more overhead) or use a physical RDM connection.  Would one or the other be better?

Also, would anyone recommend 10 Gbps dual-port SFP+ iSCSI adapters?  Right now for the VM storage we are using Emulex HBAs.  I was also thinking of using Intel X520-DA2 NICs for the in-guest iSCSI.

If I go the in-guest iSCSI route, I would create either two standard switches or two vDS.  I would need two since each storage processor is on its own VLAN (the storage network is physically separated from the data network).
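
Roughly what I have in mind, as a pyVmomi (vSphere Python SDK) sketch; "host" is assumed to be an already-connected HostSystem object, and the switch names, uplink NICs, and VLAN IDs below are placeholders, not my real values:

from pyVmomi import vim

net_sys = host.configManager.networkSystem

# One standard switch per storage-processor VLAN, each with its own uplink.
for name, uplink, vlan in [("vSwitch-iSCSI-A", "vmnic4", 20),
                           ("vSwitch-iSCSI-B", "vmnic5", 21)]:
    net_sys.AddVirtualSwitch(
        vswitchName=name,
        spec=vim.host.VirtualSwitch.Specification(
            numPorts=128,
            bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=[uplink])))

    # Port group for iSCSI, tagged with that storage processor's VLAN.
    net_sys.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name=name.replace("vSwitch", "PG"),
        vlanId=vlan,
        vswitchName=name,
        policy=vim.host.NetworkPolicy()))

A vDS version would be the same two-VLAN idea with distributed port groups; only the object types change.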

Thanks

5 Replies
vfk
Expert

I would suggest using physical RDMs, as this would isolate storage traffic from standard network traffic, unless of course you will be using a dedicated storage network.  As for the network hardware, you always want to mitigate single points of failure; ideally you want two HBAs/CNAs.
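
A quick way to check the no-single-point-of-failure part is to list the paths behind each LUN; with two HBAs/CNAs every LUN should report at least two.  A minimal pyVmomi sketch, assuming a connected HostSystem object "host":

# Walk the host's multipath info; everything here is read from the live host.
storage = host.configManager.storageSystem
for lun in storage.storageDeviceInfo.multipathInfo.lun:
    paths = [p.name for p in lun.path]
    print(lun.id, "paths:", len(paths), paths)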

The vSwitch type you use depends on your license; in general, for iSCSI I don't think it matters, as multipathing for iSCSI happens at the protocol level and the configuration is the same between a vDS and a VSS.
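
To illustrate: for the software iSCSI adapter, the port-binding call is the same no matter which switch type the VMkernel ports live on.  A minimal pyVmomi sketch, assuming a connected HostSystem "host"; the vmhba and vmk names are placeholders:

iscsi_mgr = host.configManager.iscsiManager

# Bind one VMkernel port per storage VLAN to the software iSCSI adapter;
# this call is identical whether vmk2/vmk3 sit on a VSS or a vDS.
for vmk in ("vmk2", "vmk3"):
    iscsi_mgr.BindVnic(iScsiHbaName="vmhba33", vnicDevice=vmk)

# Verify the bindings and their path status.
for info in iscsi_mgr.QueryBoundVnics(iScsiHbaName="vmhba33"):
    print(info.vnicDevice, info.pathStatus)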

--- If you found this or any other answer helpful, please consider the use of the Helpful or Correct buttons to award points.
vfk, Systems Manager / Technical Architect
VCP5-DCV, VCAP5-DCA, vExpert, ITILv3, CCNA, MCP
smKKe
Contributor

The dedicated storage network would be the ESXi boxes connected directly to the SAN switch via SFP+ cables.  This is then connected to the VNX via shielded CAT6A cables.

vervoortjurgen
Hot Shot

Hello,

I would also go for physical RDMs (read VMware's SQL Server best-practices guide, though), and I would go for a vDS if your license allows it, of course.

But I would recommend using two single-port X520 cards; this way you can survive the failure of one card.

Are you going to use the full 10 Gb only for storage?

I think you can use the 10 Gb cards for everything: split up the bandwidth, e.g. 1 Gb for the management network, 1 Gb for vMotion, 1 Gb for Fault Tolerance, and so on, with for example 4 Gb for iSCSI.  I don't think you need two switches then if you connect both 10 Gb ports to the switch and use VLANs for everything, as in the sketch below.
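
As a sketch of that single-switch idea (pyVmomi again, with a connected HostSystem "host"; all names and VLAN IDs are made up): both 10 Gb ports become uplinks of one switch, and each traffic type gets its own VLAN-tagged port group.

from pyVmomi import vim

net_sys = host.configManager.networkSystem

# Both 10 Gb ports as uplinks of one switch; teaming gives card failover.
net_sys.AddVirtualSwitch(
    vswitchName="vSwitch1",
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=256,
        bridge=vim.host.VirtualSwitch.BondBridge(
            nicDevice=["vmnic4", "vmnic5"])))

# One VLAN-tagged port group per traffic type.
for name, vlan in [("Management", 10), ("vMotion", 11),
                   ("FT", 12), ("iSCSI", 20)]:
    net_sys.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name=name, vlanId=vlan, vswitchName="vSwitch1",
        policy=vim.host.NetworkPolicy()))

The per-traffic Gb figures above would be enforced separately, for example with Network I/O Control on a vDS; the port groups only separate the VLANs.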

Kind regards,
Vervoort Jurgen, VCP6-DCV, VCP-Cloud
http://www.vdssystems.be
kumarlakshman_k
Enthusiast

Hello,

I would suggest RDMs, as they give the VM direct access to the disk, bypassing the hypervisor's file system layer, and give better read/write performance, which is very much needed in your case.
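
For reference, adding a physical-compatibility RDM through the API looks roughly like the following pyVmomi sketch.  The assumptions: "vm" is a connected VirtualMachine object, and the LUN device path, controller key, and unit number are placeholders, not real values.

from pyVmomi import vim

# Physical-compatibility RDM backing; the naa path below is a placeholder.
backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
    deviceName="/vmfs/devices/disks/naa.600601601234abcd",  # placeholder LUN
    compatibilityMode="physicalMode",
    fileName="")  # the mapping file is created with the VM

disk = vim.vm.device.VirtualDisk(
    backing=backing,
    controllerKey=1000,  # key of an existing SCSI controller on the VM
    unitNumber=1,        # a free unit number on that controller
    key=-1)

spec = vim.vm.ConfigSpec(deviceChange=[vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=disk)])
task = vm.ReconfigVM_Task(spec=spec)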

For a CNA you can go with the Emulex OCe11 or OCe14 adapters.  These come with multi-protocol capabilities (FCoE/iSCSI) and even support UMC (Universal Multi-Channel), where each port can be partitioned into four logical functions and used for different traffic types.

Thanks & Regards,

Lakshman, VCP550

smKKe
Contributor

OK, thanks.  Initially I am going to add one virtual machine with a physical RDM; after I get the second one online, I will look at converting them to virtual RDMs to take advantage of vMotion (not Storage vMotion).

I found this:

VMware KB: Switching a raw device mapping between physical and virtual compatibility modes in ESX/ESXi...
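
Per that KB, the mode cannot be flipped in place: the mapping file has to be removed and recreated against the same LUN with the VM powered off.  A rough pyVmomi sketch of the idea, assuming "vm" is the VirtualMachine and "rdm_disk" is its current RDM VirtualDisk device:

from pyVmomi import vim

# 1) Remove the physical-mode RDM; destroy deletes only the mapping
#    file, the data on the LUN itself is untouched.
remove = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.remove,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.destroy,
    device=rdm_disk)
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[remove]))

# 2) Re-add the same LUN, this time in virtual compatibility mode (the
#    add step mirrors the earlier RDM sketch).
backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
    deviceName=rdm_disk.backing.deviceName,
    compatibilityMode="virtualMode",
    diskMode="persistent",
    fileName="")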
