Hi,
I'm REALLY new to virtualization and currently setting up our new gear. We've got 3 x Dell R610 servers and a MD3000i SAN with dual Dell 6224 switches for redundancy.
One thing I have noticed is that people are pretty vocal about all sorts of configuration options, apart from RAID setup and disk configuration - so here I am asking for some advice. We're a web agency and host mainly PHP/MySQL-built apps - eCommerce, email marketing etc. At the moment we've got our RAID set up as follows:
4 Disks in a RAID 10
10 Disks in a RAID 10
We'll use the 4 disk group for OS & log files and the 10 disks for Database/Web files.
We have 1 hot spare drive (dual-controller SAN).
Is this a performance-oriented setup? Will MySQL and web files be OK on a 10-disk RAID 10?
We're not too concerned about storage economy - more about performance.
Any help welcomed and much appreciated !
Thanks
Alain
As long as you have a reasonable RAID card, which you seem to, the quality of your PHP and MySQL code will be the performance limiter long before the RAID disk layout is.
How much consolidation are you looking to achieve? If those ten disks in RAID10 end up with you running 500 VMs on one LUN, this amount of IO is unlikely to perform well in any configuration. RAID10 is generally the highest performing RAID level, so there's no reason it wouldn't be "alright" for just about any configuration.
Make sure your switches are set up according to recommended practices too; since you're using iSCSI, the networking configuration can also carry a reasonable performance hit.
Thanks Josh. The RAID is built into the SAN, so I'm assuming it's good. We're probably looking at 8-10 servers in total; we only host apps we write, so we have a lot of control over the PHP/MySQL, and it is efficient. I've got a Dell Tech Center paper on multipath iSCSI setup for the switches and servers we have, so I'll get into that once the disks have initialized.
Thanks for your response!
Alain
Seriously speaking, 4 or 10 disks is NOT good enough. I would assume a budget constraint.
You can probably do some tuning at the switch & VMkernel level, such as setting MTU 9000, and look for iSCSI-optimized switches (e.g. the PowerConnect 5424).
Regards,
Jas aka Superman
MALAYSIA VMware Communities
If you found this or any other answer useful please consider allocating points for helpful or correct answers.
The SAN has 15 x 15k SAS drives. What would you recommend as the best setup, then?
Your rig is a good build. Just remember the rules:
- Keep your iSCSI traffic in its own segment by some means - either physically if you've got the NICs/port density, or logically through VLANs.
- You can boost your performance with jumbo frames - but make sure every device in your path supports them, and make sure you set them up right. The MD3000i is easy with its front end; Cisco switches aren't too tough either (pick a port, set the MTU). Where things can get tricky is the VMkernel/iSCSI software initiator (esxcfg-vswitch -l to see the switches, -m to set MTUs, -h for help; it's all well documented). Jumbo frame = MTU of 9000. There's a lot of discussion on this, but in my experience we got a performance gain of 12% - not too shabby. It's easier to set this up now than later, once you have machines deployed.
- Know your limits, and trust but verify. Keep an eye on your performance and test, test, test - Iometer or some other tool to really put the spurs to it and see what you can do.
- Learn to use and understand esxtop. There's lots of good documentation and plenty of resources here for that. It's very useful for resolving bottlenecks in all aspects of your system.
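To make the jumbo-frame bullet concrete, the host side on classic ESX/ESXi 4.x looks roughly like this - a sketch only, and the switch name, portgroup and addresses (vSwitch1, "iSCSI", 192.168.50.x) are placeholders, not values from this thread:

```shell
# List vSwitches and their current MTU (run on the ESX/ESXi host)
esxcfg-vswitch -l

# Set MTU 9000 on the vSwitch carrying iSCSI traffic
# ("vSwitch1" is a placeholder -- use your own switch name)
esxcfg-vswitch -m 9000 vSwitch1

# The VMkernel NIC itself must also be created with a 9000 MTU;
# the IP, netmask and portgroup name here are placeholders
esxcfg-vmknic -a -i 192.168.50.10 -n 255.255.255.0 -m 9000 "iSCSI"
```

Note that an existing VMkernel NIC can't have its MTU changed in place on 4.x, which is exactly why it's easier to do this before machines are deployed.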
My ramblings. Hope this helps - these are things I wish I had known when I got started on this path.
Happy virtualizing!
- abe
Awesome, thanks Abe.
We're following most of the rules, I think (have done a LOT of reading on these forums lately):
- 2 switches are dedicated to iSCSI - no other use
- Have set jumbo frames; working on the ESXi installs today, so I'll do the "other" end and test
Is esxtop available in ESXi?
Thanks again
Alain
Alain,
Yes, esxtop is available in ESXi.
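For unattended monitoring, esxtop also has a batch mode that suits the "test, test, test" advice above - a sketch; the interval, sample count and filename are arbitrary choices, not values from this thread:

```shell
# Capture 30 samples at 10-second intervals as CSV for offline analysis
# (-b = batch mode, -d = delay between samples, -n = number of iterations)
esxtop -b -d 10 -n 30 > esxtop-baseline.csv
```

The resulting CSV can be pulled apart in a spreadsheet or perfmon to chart disk latency and CPU ready time over the test run.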
If you found my answer(s) helpful please award points.
Thanks!
I think 4 disks for the OS and logs is a bit much; the OS doesn't use much. Unless they are really, really tiny drives, you are probably going to waste 2 of them. Make them hot spares or add them to the datastore array - hot spares probably make more sense.
Seriously speaking, 4 or 10 disks is NOT good enough. I would assume a budget constraint.
I would disagree with this, depending on the need.
He's running MySQL, so it's not like we're looking at multiple terabytes of Oracle data.
Linux will run on an 8GB OS disk, typical MySQL databases are quite small, and he said he's running up to ten VMs. Assuming 300GB drives, he's already put 600GB into OS space, and those are quality, fast drives. What's seriously NOT good enough about that? Yes, I'm aware that you can get faster performance out of a higher spindle count, but how realistic is it to tell people a 15-disk SAN isn't up to the task of running VMware?
I worry far more about the people with 20 domestic SATA disks in a whitebox desktop who try to convince everyone it's a far better server because it has more disks (and they are out there).
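The back-of-the-envelope capacity math above can be sketched in a few lines of shell. The 300 GB drive size and ~175 random IOPS per 15k spindle are illustrative assumptions, not figures from this thread:

```shell
#!/bin/sh
# Rough RAID 10 sizing for the 4-disk and 10-disk groups.
# ASSUMPTIONS: 300 GB drives, ~175 random IOPS per 15k SAS spindle.
DRIVE_GB=300
SPINDLE_IOPS=175

raid10_usable_gb()  { echo $(( $1 / 2 * DRIVE_GB )); }      # half the spindles hold mirror copies
raid10_read_iops()  { echo $(( $1 * SPINDLE_IOPS )); }      # reads can be served by any spindle
raid10_write_iops() { echo $(( $1 * SPINDLE_IOPS / 2 )); }  # each write lands on both mirrors

echo "4-disk group:  $(raid10_usable_gb 4) GB usable, ~$(raid10_write_iops 4) random write IOPS"
echo "10-disk group: $(raid10_usable_gb 10) GB usable, ~$(raid10_write_iops 10) random write IOPS"
```

With those assumptions the 4-disk group works out to 600 GB usable, matching the figure above, and the 10-disk group to 1500 GB with roughly 875 random write IOPS to share among the VMs.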
A few things...
1. You can use MPIO (multipathing, for better redundancy and more throughput to your SAN) with round robin. The MD3000i can do this, but it has been reported to have a few 'issues'... Dell is working on it.
2. Jumbo frames on the MD3000i. There are known 'performance' issues right now when jumbo frames are enabled. Do some searching on here for MD3000i/performance etc. Lots of posts: some see less performance with it enabled, some see equal, some see a little gain. Again, Dell is working on it.
3. Try to use iSCSI offloading on your NICs and on your switches. Test, test, test BEFORE deployment.
4. Dell and VMware are working on the performance of jumbo frames along with MPIO. From what I've heard, they're looking at mid-October for an official firmware fix. (ESXi 4.1? aka U1, which is in beta right now, may come out around mid-to-late October to support the R2 launch of W2K8.)
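As a sketch of point 1 on vSphere 4, the round-robin path selection policy can be set per LUN with esxcli - the naa.* device identifier below is a placeholder, not a real ID from this thread:

```shell
# List devices with their current path selection policy
esxcli nmp device list

# Switch one LUN to round robin (the naa.* identifier is a placeholder --
# substitute the device ID of your own MD3000i LUN from the list above)
esxcli nmp device setpolicy --device naa.6002219000abcdef --psp VMW_PSP_RR
```

Given the MD3000i round-robin issues mentioned above, it's worth testing this on one LUN and watching latency before rolling it out everywhere.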
Your RAID disk setup sounds good; 15k Seagate Barracuda 15.5 drives are fast. Keep a close eye on your disk read/write latency. The MD3000i is a great box, with just a few little bugs with vSphere that are being worked on/tweaked.
Alainrussell, I have a similar setup from Dell. How did you configure the SAN to set up the LUNs? I am struggling with the best method. Did you set up the MD3000i software on a virtual machine or a separate host machine? I am running ESXi, so I have no host OS and would like to keep the whole system contained. Currently I have a virtual machine running on one host; this will migrate to the SAN once it's up and running.
Regards
Dave
Hi Dave,
We ended up getting a separate machine to run as our vCenter server, and loaded the SAN management software on that too. I'm new to virtualization, so I was a little wary of running the management system as a VM inside the VM cluster it was managing - I struggled to get my head around that!
So all we have now is an R410 server (lowish spec) that has vCenter, the Dell management software and a few other small things running on it. It seems to be working out pretty well so far - no performance complaints!
Hope that helps.
Alain