VMware Cloud Community
Chinabee
Contributor

SQL database virtualization and iSCSI

I don't have a big budget, so I have to be creative.

I set up two iSCSI SANs using Openfiler on two IBM servers, each with 8 drives. One has 8 SSD drives and the other has SATA drives.

Now both sit on a gigabit LAN dedicated to iSCSI traffic. The VM hosts are also connected to this iSCSI LAN.

The SQL server's OS and transaction logs sit on the SSD volume, and the actual database sits on the SATA volume.

My question is this: I understand that a database is mainly an I/O-intensive workload, but the iSCSI throughput is limited to a guesstimated 80-90 MB/sec because of the gigabit LAN. Is that going to be a concern for the database? I don't think I/O is going to be an issue.

Also, what's the 9K MTU deal? Should I set all MTUs to 9K?

6 Replies
VMmatty
Virtuoso

How your SQL workload performs really depends on the type of application you're using.  My advice would be to run some SQL workload simulation tools and see how it performs.  Memory and CPU also play a role in determining SQL performance.  Similarly, jumbo frames (9000 MTU) are highly workload dependent. You need to enable them end to end for them to be useful. Test with them on and off, but I have a feeling it won't make much of a difference.
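If you do test jumbo frames, one quick end-to-end check (assuming your iSCSI vmkernel port is already up, and with 10.0.0.10 standing in for your Openfiler target's IP) is to ping the target with an unfragmented jumbo-sized packet from the host console:

    # 8972-byte payload + 28-byte ICMP/IP header = 9000 bytes; -d sets "don't fragment"
    vmkping -d -s 8972 10.0.0.10

If that fails but a normal vmkping works, something in the path is still at 1500.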

Matt | http://www.thelowercasew.com | @mattliebowitz
rickardnobel
Champion

Chinabee wrote:

My question is this: I understand that a database is mainly an I/O-intensive workload, but the iSCSI throughput is limited to a guesstimated 80-90 MB/sec because of the gigabit LAN. Is that going to be a concern for the database? I don't think I/O is going to be an issue.

It is not very often that you actually need to move 100+ MB of data per second, so it is not likely to be an issue for you. Since you have some SSD disks you should be able to get the latency down for each IO - often more important for performance than raw throughput.
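To put rough, made-up numbers on that: a fairly busy OLTP database pushing 5,000 random 8 KB IOs per second only moves about 5,000 × 8 KB ≈ 40 MB/s, well under the gigabit limit, while the latency of each of those small IOs is what the application actually feels.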

Also, what's the 9K MTU deal? Should I set all MTUs to 9K?

As noted above, you would have to check that it is supported and configured on everything: vmknic, vSwitch, physical switch and iSCSI target. Sometimes it helps performance, but sometimes it just complicates the setup without much gain.
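As a rough sketch of the host side (exact commands depend on your ESX/ESXi version; vSwitch1 and vmk1 are placeholders for your iSCSI vSwitch and vmkernel port):

    # set the vSwitch MTU to 9000
    esxcfg-vswitch -m 9000 vSwitch1

    # on ESX/ESXi 4.x the vmkernel port has to be removed and re-created with -m 9000;
    # on ESXi 5.x it can be changed in place:
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000

    # verify what is actually configured
    esxcfg-vmknic -l

The physical switch ports and the NICs on the Openfiler box need the same MTU, otherwise frames get dropped or fragmented.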

My VMware blog: www.rickardnobel.se
kcucadmin
Enthusiast

Just a side note: SSD is not always better at write operations than spreading the load over multiple physical disks. A lot depends on the SSD drives.

There are reasons why SQL logs/tempdb are recommended to be placed on RAID 10 volumes: they typically handle sequential writes better than RAID 5 or RAID 6. SSD may actually make that worse if you have your SSDs in a RAID 5 configuration, with its already degraded write performance.

Make sure you read the specs on the read/write rates of the drives. You may also want to benchmark with SQLIO or IOMeter to make sure you are getting the IOPS you need for different workloads.
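For example with SQLIO (the path and sizes are placeholders - pre-create a suitably large test file and adjust to your environment), comparing a random 8 KB write pattern with a sequential 64 KB write pattern shows how differently the same volume can behave for data files versus the transaction log:

    rem random 8 KB writes, 8 outstanding IOs, 120 seconds, report latency (-LS)
    sqlio -kW -frandom -b8 -o8 -s120 -LS T:\testfile.dat

    rem sequential 64 KB writes - closer to transaction log behaviour
    sqlio -kW -fsequential -b64 -o8 -s120 -LS T:\testfile.dat

Watch the IOs/sec and average latency numbers, not just MB/sec.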

Jumbo frames will help on the backend iSCSI by reducing the total number of packets sent/received, but you don't need to set that in your VMs - only at the vmkernel port level. Again, the type of workload will make a difference.

Flow control is also important for iSCSI.

Something you can do to achieve greater than 1 gig of total throughput is to split your I/O over datastores that use different 1 gig uplinks to iSCSI, i.e. place tempdb in one datastore, logs in another datastore, and indexes in another datastore. This way you would at least allow 1 gig for each type of traffic.

Your servers would need multiple NICs, however. Also, make sure to check your pathing so that not all datastores have the same uplink set as preferred - if you have 2-4 uplinks, spread the load around. This matters if you are using Fixed/MRU multipathing; with Round Robin it won't be as big an issue, but in your scenario I doubt Round Robin would be the best method.
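For reference, this is roughly what checking and setting the preferred path looks like from the ESXi 5.x command line (the naa.* device ID and the path name are placeholders - list your own first; 4.x uses the older esxcli nmp syntax):

    # list devices and their current path selection policy
    esxcli storage nmp device list

    # list the individual paths behind one LUN
    esxcli storage core path list --device naa.6090a01234567890abcdef

    # with the Fixed policy, make this LUN prefer a specific path/uplink
    esxcli storage nmp psp fixed deviceconfig set --device naa.6090a01234567890abcdef --path vmhba33:C0:T1:L0

Repeat for the other datastores, alternating the preferred uplink so the load is spread.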

I've been running a pretty heavy SQL load over 4 x 1 gig iSCSI uplinks for 2 years now, and it's working well. Not as good as 4 Gb FC, but a heck of a lot cheaper, that's for sure.

Also, you didn't mention where your tempdb is placed. You probably want it on a RAID 10 volume. I typically place my tempdb/logs/indexes within the same 6-disk RAID 10 pool, and my OS/data in a 5+1 RAID 5 pool.

Where your SSDs will really shine is in read ops - they will perform much faster than SATA. So if the bulk of your DB workload is reads, I'd place the DB data files on SSD. If your DB write workload is not time-sensitive or I/O-intensive, logs/tempdb will run just fine on SATA.

That's probably the opposite of what you were planning, but here are my thoughts.

Placing all your SSDs into one RAID 10 pool would waste a lot of SSD capacity on mirroring, and placing logs/indexes/tempdb on RAID 5 is not the best for write performance.

You mentioned you have 8 disks per server, so you could go this route:

IBM Server 1 (SSD):

(1+1) RAID 10: Logs/Index (NIC1 primary uplink); (5+1) RAID 5: DB data + local OS (NIC2 primary uplink)

IBM Server 2 (SATA):

(2+2) RAID 10: TempDB (NIC1 primary uplink); (3+1) RAID 5: VM OS + local OS (NIC2 primary uplink)

I'm not telling you how to do it - I just hate to see you lump 8 SSDs into a RAID 5/6 just for logs/indexes.

Chinabee
Contributor

What RAID would you suggest for the SSD drives? I'd like to get the maximum capacity out of the SSDs, and I/O of course. 🙂

Chinabee
Contributor

Are you also suggesting that I add 4 more 1 gigabit ports to my OpenFiler server, create 4 additional iSCSI targets (one for each port), and map all of them to the same volume? Then I would have a total of 6 iSCSI paths to one server (counting the 2 onboard 1 gigabit ports), right?

kcucadmin
Enthusiast

I updated my previous post with some additional info - take a look at it again.

1 gig NIC adapters are pretty cheap.

If your server has free PCI/PCI-X slots, I would try to get 4 dedicated 1 gig uplinks for storage on each device. This will allow you to create 4 paths to the iSCSI LUNs.

Personally I use a unique target for each datastore, with only one LUN behind it. On your filer you should be able to assign IPs to your targets; that will determine the number of paths: 1 IP, 1 path; 2 IPs, 2 paths; 3 IPs, 3 paths.

Typically you would want each path to be on its own subnet - it makes routing from ESX easier, without VLANs.
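Here is a rough sketch of the host side of that (vmk1/vmk2, vmhba33 and the subnets are placeholders; the binding command differs between ESX/ESXi 4.x and ESXi 5.x):

    # one vmkernel port per iSCSI subnet, e.g. vmk1 on 10.0.1.0/24 and vmk2 on 10.0.2.0/24

    # bind the vmkernel ports to the software iSCSI adapter
    #   ESX/ESXi 4.1: esxcli swiscsi nic add -n vmk1 -d vmhba33
    #   ESXi 5.x:
    esxcli iscsi networkportal add --nic vmk1 --adapter vmhba33
    esxcli iscsi networkportal add --nic vmk2 --adapter vmhba33

    # after a rescan, each reachable target IP per bound vmknic shows up as a separate path
    esxcli storage core path list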
