VMware Cloud Community
Chris01
Contributor

New setup. CX300 8 * 10k spindles - will it be fast enough?

Hello,

There are a number of threads here about cheap SAN startups, this is about getting the most out of them.

We currently have VM servers with all the storage local. The plan is to get Enterprise and some FC switches to support VMotion; however, they didn't give me the budget for that project this year.

I can get a SAN and attach the VM servers directly to it (no switches, via FC). The SAN is a CX300 with 8 x 300 GB 10k spindles. I will be hosting two terminal servers (30 users on each), a SQL server and a file server, amongst other low-I/O app servers.

My biggest worry is that I am shooting myself in the foot by not getting a smaller (5-disk) 15k-spindle array for those high-I/O servers and leaving the slower spindles, or even SATA, for the other hosts. Will the 8 x 10k disk arrangement be sufficient?

TIA, Chris

Points will be awarded.

1 Solution

Accepted Solutions
CXSANGUY
Enthusiast

One of the more confusing CX threads I've read in a while, but:

1. The CX200 was made for direct-attached two-node clusters, and the CX300 is more or less an upgraded CX200. I haven't done any direct-attached installs in ages, not because they aren't supported, but simply because it's so darn cheap to get fibre switches these days (you don't have to get a $20K Cisco MDS; you can get a cheap QLogic for almost nothing). I would recommend finding the money for at least two of the inexpensive QLogic switches and not doing direct attach.

2. FLARE drives... More officially referred to as the vault drives, and it is not a big deal at all to use them on a CX300. On a much busier CX3-80 or something of that sort I would, more often than not, recommend against it, but it is entirely the norm on a CX300; so much so, in fact, that they are often extended to more than just the default 5. As mentioned you do lose space, but it isn't all that much. On a CX300 such as you describe, it would be the equivalent of having one low-to-moderate-I/O host already using the spindles. Given that you stack up half a dozen hosts on ESX to begin with, that is hardly a concern. It's not as if we are talking about a dedicated set of disks for Oracle DBs.

3. 15k drives... Probably a waste here. What I would do going forward: when you need more space for data, start buying 15k spindles, make a separate RAID group out of them, and migrate VMs to them as needed. If you virtualize anything sequential, like SQL or Exchange transaction logs, 15k is a waste if you dedicate a RAID 1/0 pair of two drives to the sequential stuff. Get 10k for those.

You should be fine; just understand you will have to grow the SAN as the environment grows, just like anything else.
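The random-versus-sequential point in item 3 can be sketched with quick arithmetic. The seek times and RPM figures below are typical-of-the-era assumptions for illustration, not CX300 specs:

```python
# Back-of-envelope: 15k spindles buy you seek and rotational latency,
# which matters for random I/O but adds little for a dedicated
# sequential stream like transaction logs.

def random_iops(avg_seek_ms, rpm):
    """Rough random IOPS for one drive: average seek plus half a rotation."""
    rotational_latency_ms = 0.5 * 60000 / rpm
    return 1000 / (avg_seek_ms + rotational_latency_ms)

# Assumed figures: 10k drive ~4.7 ms seek, 15k drive ~3.5 ms seek.
print(f"10k drive, random: ~{random_iops(4.7, 10000):.0f} IOPS")
print(f"15k drive, random: ~{random_iops(3.5, 15000):.0f} IOPS")

# A mirrored pair dedicated to logs writes sequentially: the heads
# barely seek, so both drive classes end up limited by media transfer
# rate instead, and the 15k premium is mostly wasted there.
```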


9 Replies
epping
Expert

What is the current I/O going to the DAS? Work out the I/O and see if you have enough; you should get about 180 IOPS off each of the 10k drives.

At a guess, I think you should be all right.

However, going direct to the array rather than through switches is an unsupported configuration.
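The "work out your I/O" step can be sketched quickly. This assumes the ~180 IOPS per 10k spindle figure above and the standard RAID 5 write penalty of 4; the 70/30 read/write mix is a made-up example, not measured data:

```python
# Rough sanity check for the 8-spindle data group.
IOPS_PER_10K_DRIVE = 180   # rule-of-thumb figure from the post above
DATA_DRIVES = 8            # 8 x 300 GB 10k spindles
RAID5_WRITE_PENALTY = 4    # read old data + old parity, write new data + new parity

def raid5_frontend_iops(drives, per_drive, read_fraction):
    """Front-end IOPS a RAID 5 group can serve for a given read/write mix."""
    backend = drives * per_drive
    write_fraction = 1.0 - read_fraction
    # Each front-end read costs 1 back-end I/O; each write costs 4.
    return backend / (read_fraction + write_fraction * RAID5_WRITE_PENALTY)

# Hypothetical 70% read / 30% write mix across the terminal servers,
# SQL and file server:
capacity = raid5_frontend_iops(DATA_DRIVES, IOPS_PER_10K_DRIVE, 0.70)
print(f"Usable front-end IOPS at 70/30 read/write: {capacity:.0f}")
```

Compare that number against the IOPS you measure on the existing DAS hosts to see whether the group has headroom.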

whynotq
Commander

My big concern would be the number of drives in the CX300; you will be running on the FLARE drives, which even in a traditional physical environment would not be advised.

The SQL could be a problem unless you create a 4+1 RAID 5 (d0-d4) and a 1+1 RAID 1 (d5 & d6), with d7 as a hot spare; this may work with SQL on the mirrored pair.
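A rough sketch of that layout on the 300 GB drives, assuming the ~6 GB per-drive FLARE/vault reservation mentioned later in the thread (all figures approximate):

```python
# Usable capacity of the proposed 4+1 / 1+1 / hot-spare layout.
DRIVE_GB = 300
VAULT_OVERHEAD_GB = 6   # assumed per-drive reservation on the vault drives

# 4+1 RAID 5 on d0-d4 (the vault drives): four drives' worth of data,
# each losing the vault reservation.
raid5_usable_gb = 4 * (DRIVE_GB - VAULT_OVERHEAD_GB)

# 1+1 RAID 1 mirror on d5-d6 for SQL: one drive's worth of data.
raid1_usable_gb = DRIVE_GB

print(f"4+1 RAID 5 usable: {raid5_usable_gb} GB")  # general VMFS space
print(f"1+1 RAID 1 usable: {raid1_usable_gb} GB")  # SQL; d7 is the hot spare
```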

Chris01
Contributor

I assume that because the disks are configured in RAID 5, and there are more spindles, it will be faster?

Sorry, it has 9 disks, the 9th being the spare.

What do you mean by FLARE? Never mind, googled the answer.

whynotq
Commander

It's already configured as 7+1?

More spindles doesn't always mean more performance; my concern is the FLARE drives. Normally I'd isolate these in a 4+1 of their own.

Two reasons for this:

1. They are the SAN array OS drives, and as such are constantly active serving requests and background monitoring.

2. Because the FLARE OS is on the first 5 drives you lose a bit of capacity, ~6 GB per drive, and when you add drives beyond the first 5 you take the same capacity hit on those too. It's only small in the context of 300 GB drives, but it adds up.
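Quick arithmetic on that capacity hit, assuming all 9 of the 300 GB drives eventually take the ~6 GB reservation:

```python
# Total FLARE reservation across the array (approximate figures).
DRIVE_GB = 300
OVERHEAD_GB = 6
DRIVES = 9

raw_gb = DRIVES * DRIVE_GB
lost_gb = DRIVES * OVERHEAD_GB
print(f"Raw: {raw_gb} GB, reserved for FLARE: {lost_gb} GB "
      f"({100 * lost_gb / raw_gb:.1f}% of raw)")
```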

epping
Expert

FLARE is the version of code running on the SPs.

More spindles give you more I/O.

The OS running on the disks will give you less I/O than if they were virgin disks; however, you need to work out your I/O requirements and see whether the array can provide them.

I run VMs on my OS disks with no problem; however, I do not run a big Oracle server.

Urbanb
Contributor

Try to get the current I/O of comparable systems in your environment.

8+1 disks in a RAID 5 will be able to generate 1,300-1,400 IOPS. The same with 15k will be about 20%-25% faster.

Don't use the system drives in your config, and don't span any productive VMFS LUNs over them. Create a new LUN (or two, which would be better) over the 8+1 RAID 5. Use the space on the system drives (FLARE code) for a backup or NFS share with less traffic.
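Those figures can be checked with per-spindle rules of thumb. The ~150 and ~185 IOPS per drive below are assumed values, not measurements:

```python
# Sanity check on the 8+1 RAID 5 numbers quoted above.
def array_read_iops(spindles, per_drive_iops):
    """Aggregate random-read IOPS: every spindle can service reads."""
    return spindles * per_drive_iops

tenk = array_read_iops(9, 150)      # 9 spindles of 10k at ~150 IOPS each
fifteenk = array_read_iops(9, 185)  # same group with 15k at ~185 IOPS each
uplift_pct = 100 * (fifteenk - tenk) / tenk
print(f"10k: ~{tenk} IOPS, 15k: ~{fifteenk} IOPS, uplift: {uplift_pct:.0f}%")
```

The 10k result lands in the quoted 1,300-1,400 range and the 15k uplift falls in the 20%-25% band.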

Chris01
Contributor

Thank you, Urbanb.

The engineers are asking me how I would like it configured. Would I be correct in assuming 9+1 with two LUNs (there are two processors in a CX; can each processor be assigned to a LUN?), with the non-LUN-allocated volume (where the FLARE code resides?) becoming an NFS share?

I am new to the whole SAN element; can you assign a volume in a SAN as an NFS share? Presumably you would mount that.

Many thanks for explaining it.

epping
Expert

You have lost me now; how many disks have you got? I thought it was 9 for the whole array?
