VMware Cloud Community
Phatsta
Enthusiast

Planning a new storage solution

I've got some budget funds to spend and would really like to upgrade my storage. Today I'm using three HP servers with 12 virtual machines stored on local RAID arrays, and I back up to a QNAP NAS over a gigabit copper connection. It's okay in terms of speed, but I'd like to go beyond okay. I've been looking at building an iSCSI environment to speed up the virtual machine datastore(s), and at getting a faster backup unit as well. The QNAP performs at about 20 MB/s, and I'd really like to at least triple that speed.

So first of all a new storage solution and second a better backup solution.

If you had about $5-6000, what would you spend it on? I'd really like to hear!
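For scale on that "triple the speed" goal: tripling throughput cuts the backup window to a third. A quick sketch (the 2 TB dataset size is a made-up example, plug in your own numbers):

```python
def backup_hours(data_gb, mb_per_s):
    """Hours needed to move data_gb gigabytes at mb_per_s MB/s."""
    return data_gb * 1024 / mb_per_s / 3600

# Hypothetical 2 TB of VM data: today's ~20 MB/s vs. the 3x target
print(f"{backup_hours(2000, 20):.1f} h at 20 MB/s")
print(f"{backup_hours(2000, 60):.1f} h at 60 MB/s")
```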

30 Replies
jwhitehv
Enthusiast

Phatsta wrote:

And by the way, another pro-external reason that I just thought of the other day... With local RAID arrays (LUNs) there's no way that I've found to automatically report the status of the array, should it fail. I have to rely on manual physical inspections today. The QNAP has features for this. Would be a nice thing. HP servers flash and make sounds, but next time I buy a new server I can go with a Supermicro server (that doesn't flash and make sounds) and save 2k :)

No, I'm not working there, I just happen to know they are cheap ;)

Every single Tier 1 server manufacturer has out-of-band management to automatically report exactly that.

I blog at vJourneyman | http://vjourneyman.com/
Josh26
Virtuoso

The things you undeniably need to review are:

VMware's HCL

Disk quality (no magical software will make SATA disks enterprise grade)

Josh26
Virtuoso

Phatsta wrote:

And by the way, another pro-external reason that I just thought of the other day... With local RAID arrays (LUNs) there's no way that I've found to automatically report the status of the array, should it fail. I have to rely on manual physical inspections today. The QNAP has features for this. Would be a nice thing. HP servers flash and make sounds, but next time I buy a new server I can go with a Supermicro server (that doesn't flash and make sounds) and save 2k :)

No, I'm not working there, I just happen to know they are cheap ;)

That's funny; I have tonnes of local storage on HP servers, and every one of them has full reporting of local storage status into the vSphere client, monitored externally by HP's SIM.

Anton_Kolomyeyt
Hot Shot

Josh26 wrote:

Disk quality (no magical software will make SATA disks enterprise grade)

This is not really true. SAS is for IOPS; SATA is for capacity and linear reads and writes. So any software that turns random I/O into sequential I/O and does some sort of overprovisioning can do magic for particular types of workload. Think about Virsto and their vLog and vSpace concepts; think about VMware VSAN, where all writes come to flash for coalescing and are dumped to spindles later in a few writes; think about A. keeping writes in RAM for as long as they can and synchronizing RAM and spindle with a huge delay, again with entirely sequential access. That's for performance. For reliability, strong checksums at the file system level (ZFS) and redundancy (LVM, RAID-Z, hardware RAID) protect against silent data corruption on SATA just fine.
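A toy illustration of the coalescing idea described above (class name and threshold are invented for the sketch, not any product's actual design): random block writes land in a memory buffer, repeat writes to the same block overwrite in place, and a full buffer is flushed as one sequential log append instead of many random disk writes.

```python
class WriteLog:
    """Minimal sketch of log-structured write coalescing."""

    def __init__(self, flush_threshold=4):
        self.buffer = {}              # block_id -> data (random writes land here)
        self.log = []                 # sequential on-disk log (simulated)
        self.flush_threshold = flush_threshold

    def write(self, block_id, data):
        self.buffer[block_id] = data  # coalesce: a later write replaces an earlier one
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # One sequential append instead of many scattered writes
        self.log.append(sorted(self.buffer.items()))
        self.buffer.clear()

wl = WriteLog()
for blk in (17, 3, 99, 3, 42):        # random block addresses; block 3 written twice
    wl.write(blk, f"v{blk}")
print(len(wl.log))                     # one flushed segment holds all four blocks
```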

Phatsta
Enthusiast

> Running 12 guests on a single host isn't going to put you in the high-end server range. That's pretty run-of-the-mill consolidation.

Like I said, I'm not even sure how much a DL360 can handle. At the moment I run four VMs on it, and trying to run a fifth was too much last time I tried. Of course it depends on the roles and functions of the guests, which makes the answer harder.

> I'm not very clear on the math you're using

Like I said, it's not my math, I'm not really sure what the guy meant. All I know is that he's got massive experience and I generally trust what he says. To be sure about those numbers I would need to do a 5 year budget plan. Haven't really got that far yet.

> rsync gives you crash-consistent copies of your workloads, which isn't very attractive

Well, that depends on what kind of data you rsync, doesn't it? Today I use rsync not for backup but for replication of certain unique data over a VPN tunnel.

I'm not saying I couldn't live without the services the QNAP provides, but they sure are handy sometimes. Of course it wouldn't be the sole reason for getting external storage; that sure would be foolish. I'm saying I want to factor in every aspect and weigh them together. The QNAP isn't 6k alone; the 10GbE switch and network cards take up a lot of that. It's around 3k with 8 disks.

Phatsta
Enthusiast

Okay, that's new to me. I googled like crazy for solutions at one point but came up empty. I'll have to try again, obviously :(

I haven't taken any fancy courses in VMware; I've taught myself through trial and error. I've got almost no one to ask, so I tend to miss a lot of stuff. Nice to have guys like you who can throw me a bone every once in a while, thanks :)

Phatsta
Enthusiast

I'm with you on this one. My general opinion is that (as long as we're talking enterprise disks) SATA drives are enough for most functions. Only databases and similar functions with high IOPS demands *need* SAS. Of course SAS and SSD drives will make overall performance way better, but SAS drives are expensive and SSDs unreliable in the long run. Therefore, SATA drives win most of the time.

I've been using enterprise SATA drives in most servers and computers of any importance for at least 10 years, and I've only had problems with a handful. Not enough to believe SATA drives have a shorter life expectancy. Consumer drives, on the other hand... that's another story.

And you can do a lot to up the performance of SATA drives. Faster disks and better disk controllers, for one thing. What RAID level you go with is another. There's a huge difference between a parity RAID and a RAID 10.

The basis for the construction always has to be demand, that's my point of view. And since I'm working in the SMB sector, price is *always* the primary issue. Why waste money on overkill if it could be your own profit?
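On the RAID-level point: the usual back-of-envelope way to see the parity-vs-RAID 10 difference is the write penalty (2 back-end writes per host write for RAID 10, 4 for RAID 5, 6 for RAID 6). A sketch with assumed numbers (8 spindles, ~75 IOPS per 7.2k drive, 70/30 read/write mix), not a sizing tool:

```python
def effective_iops(drives, drive_iops, read_frac, write_penalty):
    """Back-of-envelope host IOPS for a RAID set.

    write_penalty: 2 for RAID 10, 4 for RAID 5, 6 for RAID 6.
    """
    raw = drives * drive_iops
    return raw / (read_frac + (1 - read_frac) * write_penalty)

# 8 x 7.2k SATA spindles (~75 IOPS each), 70% reads / 30% writes
for level, penalty in (("RAID 10", 2), ("RAID 5", 4)):
    print(level, round(effective_iops(8, 75, 0.7, penalty)))
```

With these assumptions RAID 10 delivers roughly half again as many host IOPS as RAID 5 from the same spindles, which is the "huge difference" in practice.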

jwhitehv
Enthusiast

Phatsta wrote:

> Running 12 guests on a single host isn't going to put you in the high-end server range. That's pretty run-of-the-mill consolidation.

Like I said, I'm not even sure how much a DL360 can handle. At the moment I run four VMs on it, and trying to run a fifth was too much last time I tried. Of course it depends on the roles and functions of the guests, which makes the answer harder.

It's not about the model of the server. It's the CPU, memory, and IOPS requirements of the workload. You have to profile those requirements first. The DL360 might not have the drive slots to meet the IOPS requirements, but the DL380 will. I assume you know your CPU, memory, and IOPS requirements?

> I'm not very clear on the math you're using

Like I said, it's not my math, I'm not really sure what the guy meant. All I know is that he's got massive experience and I generally trust what he says. To be sure about those numbers I would need to do a 5 year budget plan. Haven't really got that far yet.

If you're using the numbers to cost-justify a decision, it's important to understand what's meant.

> rsync gives you crash-consistent copies of your workloads, which isn't very attractive

Well, that depends on what kind of data you rsync, doesn't it? Today I use rsync not for backup but for replication of certain unique data over a VPN tunnel.

In that case, you can set up a storage server with rsync as a guest. That's not a feature unique to buying QNAP.

I'm not saying I couldn't live without the services the QNAP provides, but they sure are handy sometimes. Of course it wouldn't be the sole reason for getting external storage; that sure would be foolish. I'm saying I want to factor in every aspect and weigh them together. The QNAP isn't 6k alone; the 10GbE switch and network cards take up a lot of that. It's around 3k with 8 disks.

Right, but buying into the QNAP solution package still costs $6k, right? The fact that the QNAP alone is cheaper than that isn't helpful when you need more than just the QNAP to run the solution.

I blog at vJourneyman | http://vjourneyman.com/
jwhitehv
Enthusiast

Phatsta wrote:


And you can do a lot to up the performance of SATA drives. Faster disks and better disk controllers, for one thing. What RAID level you go with is another. There's a huge difference between a parity RAID and a RAID 10.

The basis for the construction always has to be demand, that's my point of view. And since I'm working in the SMB sector, price is *always* the primary issue. Why waste money on overkill if it could be your own profit?

I agree that the basis for the storage has to be driven by the demand. You need to know the IOPS and capacity required for the solution. You haven't given us either, so it's tough to know what would be an appropriate solution.

I blog at vJourneyman | http://vjourneyman.com/
Phatsta
Enthusiast

Yes, I know some from the performance logs I've gathered, but I didn't mean for this discussion to dig that deep, analyzing everything down to IOPS. Maybe I didn't phrase my question correctly. I was simply trying to get people's opinions on what would be the best solution in that price range. I'm not building the Taj Mahal here, so I didn't think detailed info was necessary.

But I get it if you don't want to voice anything on uncertain grounds. And that's okay; thanks for the opinions you've shared thus far!

Would anyone else like to add anything?

jwhitehv
Enthusiast

Phatsta wrote:

Yes, I know some from the performance logs I've gathered, but I didn't mean for this discussion to dig that deep, analyzing everything down to IOPS. Maybe I didn't phrase my question correctly. I was simply trying to get people's opinions on what would be the best solution in that price range. I'm not building the Taj Mahal here, so I didn't think detailed info was necessary.

In that case, I think 10K drives are the sweet spot right now for a mix of capacity and IOPS. For high-capacity requirements, I still prefer 7.2k SAS drives. They're not much more expensive than 7.2k enterprise SATA (in fact, they're the same drives with a different interface).
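A rough way to compare the tiers: a spindle's random-IOPS ceiling is about 1 / (average seek + average rotational latency). The seek times below are assumed typical values for illustration, not any specific drive's datasheet figures:

```python
def drive_iops(rpm, avg_seek_ms):
    """Rough random-IOPS ceiling for one spindle."""
    rot_ms = 60_000 / rpm / 2          # avg rotational latency: half a revolution
    return 1000 / (avg_seek_ms + rot_ms)

print(round(drive_iops(7_200, 8.5)))   # 7.2k SATA/NL-SAS, assumed ~8.5 ms seek
print(round(drive_iops(10_000, 4.5)))  # 10k SAS, assumed ~4.5 ms seek
print(round(drive_iops(15_000, 3.5)))  # 15k SAS, assumed ~3.5 ms seek
```

With those assumptions you get roughly 79, 133, and 182 IOPS per spindle, which is why 10k drives land in the middle on both capacity and IOPS.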

I blog at vJourneyman | http://vjourneyman.com/