Hey guys...
I am in the process of planning my move from DAS to iSCSI for my next VMware project, and I need some opinions on my RAID group configuration to make sure it is correct. Below is my planned configuration:
1 x MD3000i
2 x Dell R710s w/ 24GB RAM in HA/cluster
2 x Dell PowerConnect 5324 switches
Servers being VM'd:
1 x SQL Server (core SQL box with 300 users)
1 x Web server front end (300 users)
1 x CGI server for the front end web server above
1 x front-end Exchange 2003 server (no DBs, just an SMTP front end)
1 x Exchange public folder server (just public folders, no mailboxes)
.....and room for future additions down the line, but the above are the planned servers.
I am thinking of configuring the RAID as follows (I have read SOOO many posts and searched endlessly on this!)...
15 x 300GB 15k disks, 7 in RAID 5 + 7 in RAID 5, with two separate LUNs on each group, so it would be...
Controller 1 - 7 disks, RAID 5 - two 800GB LUNs
Controller 2 - 7 disks, RAID 5 - two 800GB LUNs
I am thinking of putting the front-end SQL box on Controller 1 LUN1, and the web server and CGI server on LUN2. Then on Controller 2, put the two Exchange boxes on LUN1, leaving LUN2 open, and see how the whole package performs. If this is how the MD3000i works... I am thinking it is, but I could be wrong. I also want to do MPIO for 2Gb of throughput to my SAN; I believe the MD3000i can support this.
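For what it's worth, here is a quick back-of-the-envelope check that two 800GB LUNs actually fit in a 7-disk RAID 5 group. This is a rough sketch only; it assumes the usual decimal "marketing" gigabytes on the drive label and ignores whatever small amount of capacity the controller reserves:

```python
# Back-of-the-envelope usable capacity for the proposed layout.
# Assumption: "300GB" drives are decimal (300 * 10^9 bytes), which the
# array reports as roughly 279 GiB each.
DRIVE_GB = 300                        # marketing gigabytes per disk
GIB = 2**30
drive_gib = DRIVE_GB * 10**9 / GIB    # ~279.4 GiB per disk

def raid5_usable_gib(disks):
    """RAID 5 keeps (n - 1) disks' worth of data; 1 disk goes to parity."""
    return (disks - 1) * drive_gib

print(f"7-disk RAID 5 group: {raid5_usable_gib(7):.0f} GiB usable")
print(f"two 800GB LUNs need: {2 * 800 * 10**9 / GIB:.0f} GiB")
# -> roughly 1676 GiB available vs ~1490 GiB requested, so two 800GB
#    LUNs per group fit with a little headroom left over.
```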
Hopefully someone can chime in and let me know their thoughts on a configuration like this.
Thanks!!
Did you both factor in using a hot spare for each RAID 5 array?
No, I only configured one hot spare. I didn't think about having a second since there are two arrays. Can the hot spare be used for either array, or do you have to have a hot spare per array?
I am not sure whether that specific device has the ability to share a hot spare, but typically you cannot, and you assign a hot spare per RAID array/controller.
It is definitely something to ask your distributor or rep, as you shouldn't have a RAID 5 without a hot spare.
On the MD3000i, the hot spare disk is a global hot spare.
Andre
Andre, do you know if it is the same for the HP MSA2000i G2 arrays?
I do not know HP storage very well.
As I remember, yes (but I'm not 100% sure).
Andre
Yes, yes, that's right... global hot spare.
Hey, just want to share my config. On my first VM setup I used 3 x 2950s attached to an MD3000i. I put all the disks in one big RAID 5 group (NOT RECOMMENDED) with one hot spare. I created four 500GB LUNs for OSes, and added more OS LUNs as I needed them. You don't want too many VMs trying to get to the same LUN at the same time, so I spread them evenly across the LUNs. I had absolutely NO latency or any other kind of issues with this setup... until... we took a power hit and lost 3 drives in the RAID 5 group. After this, and prior talks with storage experts, I reconfigured the MD3000i with two RAID 5 groups of 7 disks and left the 15th as a global hot spare. And still I'm having no performance issues.
But beware: the Dell MD3000i does not have drivers for multipathing in ESX, so you will be seeing a bunch of "VD not on preferred path" errors... and these errors do cause the amber fault light to come on, even though they are not production affecting.
I'm using ESX 3.5.
In my next build I will be connecting these three 2950s, along with 4 R710s, to an EMC array via Fibre Channel.
Hope this helps someone out.
Can you be more specific about why you do not recommend RAID 5 across all 15 disks?
It seems unlikely to lose 3 disks at once. Why do you think you lost 3 disks on the power failure, or was it a power surge?
Were all 3 disks contiguous to each other or were they spread out?
Interesting information, thanks for posting this. I do want to say that in ESX 4, multipathing on the MD3000i is fully supported; there is a walkthrough on the Dell TechCenter site on how to set this up. Under 3.5 I believe it was not supported. I am glad to hear your performance was good on your setup, as I am also going to have the same setup as you, with two RAID 5 arrays of 7 disks and one hot spare.
How many VMs do you have running on this 3000i? Why are you moving to Fibre Channel?
Thanks!!
I did not say I didn't recommend it... just that it is not recommended. I was told that there is a big parity overhead when you start using more than 5-7 disks. This would cause write latency, and it seems reasonable to me. But like I said, I experienced no problems whatsoever using all the disks in one big RAID 5 group, and I had this setup for almost a year. If you spread it out into two RAID groups you lose an extra 300GB of storage capacity, which is why I didn't do it from the beginning.
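To make that 300GB figure concrete, here is the arithmetic (a rough sketch, ignoring formatted-capacity overhead; both layouts assume 15 disks with one kept as a global hot spare):

```python
# One big RAID 5 group vs two smaller ones, 300GB disks,
# 15 disks total with one reserved as the global hot spare.
one_big   = (14 - 1) * 300       # 14-disk RAID 5: 3900 GB usable
two_small = 2 * ((7 - 1) * 300)  # two 7-disk RAID 5s: 3600 GB usable
print(one_big - two_small)       # 300 GB - the second group's parity disk
```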
And it was more than just a power hit... I just didn't want to go into too much detail; it is a long story. The disks were spread out. But this brings up another point as to why it is NOT good to use big LUNs for VMFS datastores... when I had my issue, some datastores got corrupted. But because I created, say, six 500GB LUNs instead of 3TB LUNs, I was able to save a lot of the VMs... but if you have a good backup strategy you won't have this issue either.
Yes, it may be supported, but I would be wary if you have to do workarounds as opposed to having a VMware host driver as they do with Windows. But either way, it is still working fine for me. I have about 35 VMs running, and I'm not even using half my resources on the servers. I will probably run out of disk space on the MD3000i before running out of server resources.
My department is finally starting to become a little more comfortable with VMs, so they are planning to launch an app on VMs that will use 50-150 VMs. And you know how the vendors come in and tell you the bottom of the line and the top. iSCSI was always at the bottom and Fibre Channel at the top. So this is why we are switching. Everyone says Fibre Channel is the best... and to use it for best performance, blah blah. So we want to make sure we are launching the app on the best possible setup... that we can afford. But I don't expect to really see any performance gains, because I never saw latency. But this may be a big improvement for people who run SQL or any other disk-intensive apps.
Thank you for the information. It is always nice to learn from others' experiences. I had an odd experience once where I lost half of my disks because a backplane died...
I was planning on going with a large RAID 6 array across 12-15 disks so I can take a loss of 2 disks. Since RAID 6 has to do two parity writes, it can slow down performance significantly on a small array, but I figured that across a larger array the hit should be minimal. Does anyone feel this is a bad idea? I would guess that most newer controllers should be able to cope with a larger array like this without the degradation. I remember talking to some NetApp folks last year who said that after 17 disks (I think that was the number) or so, they would start to see degradation in performance.
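For a rough feel for the RAID 6 write hit, here is the standard write-penalty rule of thumb (a sketch only; the ~180 IOPS per 15k disk and the 70% read mix are assumptions, and a real controller's write cache will soften the penalty):

```python
# Classic write-penalty rule of thumb: every random write costs
# 4 back-end I/Os on RAID 5 and 6 on RAID 6 (read-modify-write of
# the data block plus one or two parity blocks).
PER_DISK_IOPS = 180   # assumed for a 15k disk
PENALTY = {"raid5": 4, "raid6": 6}

def effective_iops(disks, level, read_frac=0.7):
    raw = disks * PER_DISK_IOPS
    return raw / (read_frac + (1 - read_frac) * PENALTY[level])

print(f"14-disk RAID 6:     {effective_iops(14, 'raid6'):.0f} IOPS")
print(f"two 7-disk RAID 5s: {2 * effective_iops(7, 'raid5'):.0f} IOPS")
# -> roughly 1008 vs 1326 at a 70/30 read/write mix: a wider array
#    adds raw IOPS, but the extra parity write is a per-write cost,
#    so the penalty itself does not go away with more spindles.
```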
Raymond...
Thanks for the reply. That is my only worry about iSCSI, as I am going to have a primary SQL server running on this SAN. It's not hit super hard, but it's still a SQL box; everyone says it's fine, though.
Here is the site with the MD3000i configuration steps for vSphere 4. http://www.delltechcenter.com/page/VMwareESX4.0andPowerVault+MD3000i
Wow!!! What a coincidence. I'm purchasing the following from Dell this week.
VMware vSphere Essentials Plus
Two PowerEdge R710s (dual E5540 Xeons, 32GB RAM)
One MD3000i (15 x 450GB 15k disks)
Two PowerConnect 6224 switches
This will be used to virtualize all 11 servers in our company. Since this is my first virtualization project, I will test, break, and rebuild for 2-3 months before I go live with it.
Now...what to do with the 11 leftover physical servers? Craigslist? Ebay? I'll probably save a couple, but all are older than 3 years.
Hey Ryan - same here...
ordered this lot from Dell yesterday...
vSphere Essentials Plus
2 x PowerEdge R610 (dual E5520 Xeons & 24GB RAM)
1 x MD3000i (15 x 450GB 15k disks)
Exactly the same situation as you - we have 12 physical servers, and this is our first virtualisation project as well.
Delivery is due at the end of next week - we're planning to replicate our existing physical network as our learning process, then switch to the new virtual network before the end of the year.
One thing that I can't quite get comfortable with is how to deal with power outages - in the physical world our APC software shut everything down gracefully, but in the virtual world it looks to be a bit more complex (scripts etc.).
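For what that scripting might look like: below is a hypothetical sketch of a UPS-triggered guest shutdown, assuming ESXi hosts where the vim-cmd utility is available (classic ESX 3.5 ships vmware-cmd instead) and VMware Tools running in every guest. Your APC software would invoke something like this on a low-battery event; it is an illustration of the idea, not APC's or VMware's own tool:

```python
#!/usr/bin/env python3
# Hypothetical low-battery handler: ask every registered VM on an
# ESXi host to shut its guest OS down cleanly via vim-cmd.
import re
import subprocess

def get_vm_ids():
    """Parse VM IDs out of 'vim-cmd vmsvc/getallvms' output."""
    out = subprocess.check_output(["vim-cmd", "vmsvc/getallvms"], text=True)
    ids = []
    for line in out.splitlines()[1:]:          # skip the header row
        match = re.match(r"\s*(\d+)\s", line)
        if match:
            ids.append(match.group(1))
    return ids

def shutdown_all():
    for vmid in get_vm_ids():
        # Requests a clean in-guest shutdown (needs VMware Tools);
        # 'vmsvc/power.off' would be the hard-stop fallback.
        subprocess.call(["vim-cmd", "vmsvc/power.shutdown", vmid])

if __name__ == "__main__":
    shutdown_all()
```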
I haven't worried about the power situation yet; I'll have to test it out when the time comes. Funny thing though: I have two APC UPS3000 units. One of them powers 6 servers and the other powers 5 servers. They both generally last about 3-4 minutes when the power goes out. When virtualized, each one will only have one server connected to it, and one will have the MD3000i as well. I would imagine they'll stay online for 15-30 minutes during a power failure. Nice.
If I'm not mistaken, the 6000-series PowerConnect switch you bought does not have iSCSI optimization. Dell recommended the 5424 or 5448 for iSCSI usage, which I am currently using, and it performs very well. With iSCSI optimization and jumbo frames enabled, it will definitely help in your case.
Craig
vExpert 2009
Thanks for the heads up about the switches. I'll look into it.
I'll second the 5424 switches instead of the 6000 series; you definitely need an iSCSI-optimized switch.