tctatum
Contributor

SQL 2005 Ent on ESX 3.5 VM

I have built a Windows 2003 Enterprise VM on a VMware ESX Server 3.5 Update 3 host. Our VMs are stored on various LUNs carved from an EMC Celerra NS20 unified storage array. Does anyone have recommendations for installing SQL 2005 Enterprise on a Win2k3 VM with an eye to the best performance and reliability? For example, should SQL be installed on the main .vmdk disk with the databases stored on different LUNs, or should SQL itself be installed on a LUN separate from the OS? I plan to set up a LUN specifically for the logs to keep that traffic separated, but I am wondering about the base SQL install and the databases.

Thanks for any advice you may have.

8 Replies
kjb007
Immortal

The answer, as usual, is it depends. How large will the database be? How much I/O do you plan to do? How large are the LUNs you are adding to ESX? There are differing methods to connect storage depending on what you want the vm to be able to do.

Small databases are fine with system/data/logs on one disk. Average-sized databases can benefit from having I/O separated across multiple LUNs, keeping the system and transactional data apart from the logs. And larger ones still can benefit from having LUNs all to themselves, so they don't have to share I/O with any other vm.
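As a rough back-of-the-envelope, you can estimate what a workload will ask of the array using the usual textbook RAID write penalties (4 for RAID5, 2 for RAID1/10). The sketch below is only illustrative -- the IOPS numbers, read percentage, and per-spindle figure are placeholder assumptions, not measurements from any particular SAN:

```python
# Rough sizing sketch: convert front-end (guest) IOPS into back-end (disk)
# IOPS using textbook RAID write penalties, then into a spindle count.
# Every input here is an assumption -- replace with your own measurements.

RAID_WRITE_PENALTY = {"raid1": 2, "raid5": 4, "raid10": 2}

def backend_iops(front_end_iops, read_fraction, raid_level):
    """Back-end disk IOPS generated by a given front-end load."""
    reads = front_end_iops * read_fraction
    writes = front_end_iops * (1 - read_fraction)
    return reads + writes * RAID_WRITE_PENALTY[raid_level]

# Hypothetical example: data volume at 1200 IOPS / 70% reads on RAID5,
# log volume at 300 nearly-all-write IOPS on RAID1.
db_backend  = backend_iops(1200, 0.70, "raid5")   # 840 + 360*4 = 2280
log_backend = backend_iops(300,  0.05, "raid1")   # 15  + 285*2 = 585

PER_SPINDLE = 160  # common rule of thumb for a 15k FC disk (assumption)
print(f"data volume: ~{db_backend:.0f} back-end IOPS, "
      f"~{db_backend / PER_SPINDLE:.0f} spindles")
print(f"log volume:  ~{log_backend:.0f} back-end IOPS, "
      f"~{log_backend / PER_SPINDLE:.0f} spindles")
```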

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise
kellino (Accepted Solution)
Enthusiast

That's a great question. Here are some things I do in this case:

  • Don't install anything on C: -- not even the SQL binaries

  • Carve/use a RAID5 VMFS for SQL programs and databases on one volume

  • Carve/use a RAID1 VMFS for logs

There are two important things here. One is that you have isolated the OS, DB, and logs onto different logical drives. It's very nice to be able to do a hot grow of these drives. 🙂
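To make that layout concrete, the end result inside the guest looks roughly like the sketch below -- a hypothetical example only, assuming three VMDKs presented to Windows as C: (OS), D: (SQL binaries and data files), and L: (transaction logs); the database name and paths are made up:

```python
# Hypothetical placement of data and log files on separate drives/VMDKs.
# Assumes D: and L: are dedicated VMDKs as described above; names and
# paths are examples only.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=localhost;Trusted_Connection=yes",
    autocommit=True,  # CREATE DATABASE cannot run inside a transaction
)
conn.execute("""
    CREATE DATABASE AppDB
    ON PRIMARY
        (NAME = AppDB_data, FILENAME = 'D:\\SQLData\\AppDB.mdf',
         SIZE = 10GB, FILEGROWTH = 1GB)
    LOG ON
        (NAME = AppDB_log,  FILENAME = 'L:\\SQLLogs\\AppDB_log.ldf',
         SIZE = 2GB, FILEGROWTH = 512MB)
""")
conn.close()
```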

The other is more for performance. It depends on how transactional (how many IOPS) the system is going to be. In most cases you should be able to have other VM's share these same volumes, but changing the RAID level may provide some performance gain based on how the data is accessed.

I don't know if it would be necessary to isolate a whole LUN (VMFS) for the SQL database for one VM. That's a function of your SAN infrastructure and how hard the SQL VM is being hit.

We have multiple VM's with DB's on the same LUN and so far this hasn't appeared to cause any issues. You can always start with a more conservative approach and slowly add as you feel comfortable.

RParker
Immortal

kellino wrote: "There are two important things here. One is that you have isolated the OS, DB, and logs onto different logical drives. It's very nice to be able to do a hot grow of these drives."

That works for a physical setup, but many people don't have the option or the luxury of multiple RAID configurations on the SAN; for us that would mean setting aside drives as a 'special case', and that's not cost effective.

I agree with separating drives on LOCAL storage; however, on a SAN it won't make any difference whether they reside on RAID 5 or RAID 1. If you have a SATA RAID, then maybe put one local disk/VMDK on that, but that gets complicated, with VMDKs spread all over the place -- not a good way to manage VMs. In any case, simply adding disks to the VM would be sufficient. The primary reason for separating the drives in the first place was performance, and a properly configured SAN can pretty much handle anything you throw at it, so dividing these VMDKs up just adds more overhead for the VMs.

And now for my usual stance on SQL virtualization, for my fan base out there: -drum roll- I don't believe SQL should be virtualized in the first place, and this is yet another reason to pollute the atmosphere. SQL should reside on physical hardware, period. I realize there are some people who don't have other options (they do, but insist they don't -- it just wasn't part of their 'planning') to put SQL on physical hosts where it belongs. The performance is markedly better on a physical box 99.99% of the time. Take two machines configured the same way, one for ESX and one for SQL, and the physical SQL box will beat a VM every time. To me it's a waste of a SQL license to cripple its performance; if you are going to do that, why use SQL/Oracle at all? MySQL or some other freeware solution will work just as well....

That being said, I know the benefits of SQL on a VM. It's portable, it's easier to manage, it's simple to maintain, blah blah... I get it. But the main consideration for SQL is performance; that's why you use SQL in the FIRST place. That's the point I try to make. A VM cannot compete with physical machine performance. Period.

kellino
Enthusiast

I think we are on the same page on pretty much all points.

On most SQL setups we don't bother with different LUNs -- just different VMDKs on the same LUN.

Our SAN does block-level virtualization, but it still has "vraid5" and "vraid1" algorithms you can select. We already have a VMFS dedicated to RAID1, so any time we need a RAID1 VMDK we just add it to that volume.

I also agree that in most cases it shouldn't be necessary to dedicate a whole LUN. I'm just being cautious, since our virtualized SQL boxes don't average above 1200 IOPS and others' might.
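(If anyone wants to see where their own boxes land, that figure is easy to sample from inside the guest with Windows' typeperf -- the snippet below is just a rough sketch, and the interval and sample count are arbitrary.)

```python
# Quick-and-dirty IOPS sample from inside a Windows guest via typeperf.
# "Disk Transfers/sec" counts reads + writes; swap _Total for a specific
# instance to watch just the data or log drive.
import csv
import subprocess

COUNTER = r"\PhysicalDisk(_Total)\Disk Transfers/sec"
proc = subprocess.run(
    ["typeperf", COUNTER, "-si", "5", "-sc", "12"],  # 12 samples, 5 s apart
    capture_output=True, text=True, check=True,
)

rows = list(csv.reader(proc.stdout.strip().splitlines()))
samples = [float(r[1]) for r in rows[1:] if len(r) > 1 and r[1].strip()]
if samples:
    print(f"average IOPS over {len(samples)} samples: "
          f"{sum(samples) / len(samples):.0f}")
```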

I also agree that the most demanding SQL should not be virtualized, which is what we do as well. Our mission-critical, highly transactional SQL is physical and staying there. But 99% of the other SQL is virtualized and working fine.

tctatum
Contributor

Thank you very much for the quick replies on this; the thoughts and suggestions are very beneficial. This install will serve as our core SQL Enterprise server. I am still on the fence regarding physical vs. virtual, but I am leaning toward virtual at the moment, with each piece (databases/logs) on its own attached disk/LUN, as I/O is expected to be high while we migrate databases from miscellaneous SQL installs to the enterprise server. A great point was also made that each disk could be expanded on the fly if needed, which is a nice perk of that setup.

Again, thank you for your time and great suggestions. 🙂

jbruelasdgo
Virtuoso

Do not use disks that are "expanded on the fly"; that would add overhead you can avoid from the beginning.

Also take a look at this info:

http://viops.vmware.com/home/docs/DOC-1403

http://viops.vmware.com/home/docs/DOC-1431

regards

Jose

Jose B Ruelas http://aservir.wordpress.com
kellino
Enthusiast

Thanks Jose, but I'm curious what you mean by "disks that are 'expanded on the fly'".

This sounds like thin provisioning (which can add overhead), but I thought that wasn't available until vSphere?

What I was referring to was growing the VMDK in the GUI and then using DISKPART in Windows to extend the partition.
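For anyone who finds this thread later, the in-guest half of that is only a couple of DISKPART commands once the VMDK has been grown. A rough sketch (the volume number is hypothetical -- check "list volume" for the drive you actually grew, and on Win2k3 this applies to data volumes, not the boot volume):

```python
# Sketch: extend an NTFS data volume after its VMDK has been grown on the
# ESX side. The volume number is a placeholder -- run "list volume" first.
# Must be run from an elevated prompt inside the guest.
import os
import subprocess
import tempfile

VOLUME = 2  # hypothetical volume number for the drive that was grown

script = "\n".join([
    "rescan",                   # pick up the new disk size
    f"select volume {VOLUME}",  # the volume to grow into the free space
    "extend",
]) + "\n"

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    script_path = f.name

try:
    subprocess.run(["diskpart", "/s", script_path], check=True)
finally:
    os.remove(script_path)
```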

jbruelasdgo
Virtuoso

Just semantics -- my mistake.

Jose

Jose B Ruelas http://aservir.wordpress.com