VMware Cloud Community
hutchingsp
Enthusiast

SAS vs. SATA (in FC SAN)?

I guess the simple way to find out would be to just "suck it and see", but..

We currently have all our VMs on SAS RAID 10 LUNs in a Clariion AX4 SAN. Our ESX boxes have dual 4Gb HBAs to Brocade 4Gb switches, so there is full redundancy and no bottlenecks.

Some of our VMs are low-utilization but do store files, which makes SAS a fairly expensive medium for them. I have plenty of slower RAID 5 SATA space available, and am thinking about allocating a 500GB LUN to the ESX boxes and moving a couple of hundred GB of "bulkier" VMs to this storage.

I appreciate it's a hard question to answer, but would people expect a night-and-day difference in speed, or should it only be an issue where utilization is relatively high?

For example, running esxtop I have never, ever seen any I/O queues.

I guess I'm a bit unclear on just how much the SAN will abstract the visible performance from the underlying disks.

0 Kudos
38 Replies
RParker
Immortal

OK, you just contradicted yourself. You agree you use SATA, but I don't see benchmarks to SHOW a comparison with your SAME VMs on SAS. Yet you:

proclaim 'We notice no performance problems at all,' and yet agree that SAS is faster: 'Yes SAS may be 40% faster, but for us that doesn't justify the 150% price difference.'

I don't get it. Which is it? Yes it's faster but you can't afford it? Or you don't have benchmarks that truly SHOW you have performance issues? You obviously acknowledge SAS is 40% faster (which is significant), and therefore your Oracle databases would be THAT much faster.

That makes no sense. You both criticize and agree in the same paragraph, and you don't want to PAY for the performance. That's the bottom line.

If SATA drives AND SAS drives were the SAME price, it would be a no-brainer! Therefore having SATA as a replacement for SAS is STRICTLY a cost/benefit analysis and NOT a claim that SATA is EQUAL to SAS...

Call it what it is: SATA is a CHEAP SAN solution, but NOT a replacement for SAS performance.

That's what I said from the beginning: SATA drives DO NOT equal SAS performance, PERIOD! You even admit as much.

So why bother comparing the two, when it's obvious they are very different? You use SATA because it's CHEAP. That's the issue.

0 Kudos
RParker
Immortal

PS - on a side note, I hope Matt and RParker are at VMworld next year. I think it's time for you two to throw down the gauntlet and settle everything outside the forums. I'm guessing you aren't as different as you think you are, not to mention it would be quite funny.

Secretly Matt and I are good friends; we just spar online to stir controversy! Smiley Happy Matt is the man! If we agreed on everything, what fun would that be?

0 Kudos
mcowger
Immortal

I will be at VMworld next year Smiley Happy I live and work in SF, and it will be in SF next year, so here's my challenge.

RParker and I will throw down in Moscone center. I hereby challenge him to a duel with those giant boxing glove things. Winner gets.....ummm.......a beer and 1000 forum points Smiley Happy

I'm sure one of the moderators can make that happen, right? Dave?

--Matt VCDX #52 blog.cowger.us
0 Kudos
mcowger
Immortal

I think the distinction we are making and perhaps you are missing is this:

For some uses (not all, but some), SATA is good enough. Not better, but good enough. If you have a need to support 20 IOPS and no chance of ever needing more (I do have apps like that), the ~70 IOPS from a SATA drive is plenty.

We are saying: why pay for fast SAS/FC drives (roughly 2x the price of SATA) if that level of performance isn't needed?
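To put rough numbers on that line of reasoning, here's a back-of-envelope sizing sketch. The per-spindle IOPS figures and the RAID write-penalty factors are rule-of-thumb assumptions pulled from this thread's ballpark numbers, not vendor specs:

```python
# Rough IOPS sizing, using the ballpark figures quoted in this thread
# (~70 IOPS for a 7.2k SATA spindle, ~170 for a 15k SAS/FC spindle).
# These are illustrative assumptions, not measurements from any array.

def raid_group_iops(spindles, iops_per_spindle, write_fraction, write_penalty):
    """Estimate usable host IOPS for a RAID group.

    write_penalty: backend I/Os per host write (RAID 10 = 2, RAID 5 = 4).
    """
    raw = spindles * iops_per_spindle
    # Each host write costs `write_penalty` backend I/Os; reads cost 1.
    return raw / (write_fraction * write_penalty + (1 - write_fraction))

# A 4-spindle SATA RAID 10 group at a 30% write mix:
print(round(raid_group_iops(4, 70, 0.30, 2)))  # ~215 host IOPS
```

If the app genuinely tops out at 20 IOPS, even this pessimistic estimate leaves an order of magnitude of headroom.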






--Matt VCDX #52 blog.cowger.us
0 Kudos
ExCon
Enthusiast

Wow...

Let me try this a different way. A Porsche is much faster than an F-150. Does that necessarily mean that the Porsche must be used for all transportation?

Different solutions for different goals, budgets, and environments. I don't think any of us has an unlimited budget, so we MANAGE our environments, which has nothing to do with micro-managing storage. I think I can keep track of the storage types in my environment and what should be running on them. Smiley Wink As a matter of fact, I like to use a LUN naming convention that lets me know at a glance what the characteristics of my storage are.

BTW, I have sites that use virtualized LOCAL storage to make "virtual SANs." The performance isn't Porsche-like, but the environment we're in requires us to make some concessions in some cases.

0 Kudos
jjahns
Enthusiast

I use a Compellent SAN.

Actually am quite happy with it because of ILM. The blocks of disk space least used go down to SATA while the more frequently used blocks go to FC.

Running only 15 VMs on 2 ESX hosts though. But I do throw SQL databases and Exchange into the mix. Net Pause for the win.

0 Kudos
TomHowarth
Leadership

RParker and I will throw down in Moscone center. I hereby challenge him to a duel with those giant boxing glove things. Winner gets.....ummm.......a beer and 1000 forum points Smiley Happy

well, a beer may be possible, but 1000 points, no chance; we can't award points :smileygrin:

Tom Howarth VCP / VCAP / vExpert
VMware Communities User Moderator
Blog: http://www.planetvm.net
Contributing author on VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment
Contributing author on VCP VMware Certified Professional on VSphere 4 Study Guide: Exam VCP-410
0 Kudos
Jawdat
Contributor

I can see how some of these guys racked up such high post counts; I want some too.

I don't see the argument any more. Clearly this is a case with no single clear answer, like "you can only have 4 virtual disks in a VM", only guidelines.

All parties here have concluded again and again that different applications, SAN types, and environments will benefit differently from the different disk types. What is good for someone is not necessarily good for another.

As the saying goes, "horses for courses".

Instructions provide the steps, but you still have to apply your intellect.
0 Kudos
alecprior
Enthusiast

I didn't contradict myself. What I said is that we know SAS is faster, which is why we use it for Oracle data volumes, but our other systems don't require any more performance than SATA provides. We'd be foolish to pay for SAS for everything when SATA would do the job perfectly well for our environment, much in the same way that we'd be foolish to buy 200TB of storage when we only need 100TB.

Your environment needs SAS. This doesn't mean everybody's does.

0 Kudos
SCampbell1
Enthusiast

I'm not sure I want to stick my toe in the water here. I think maybe a shark will leap out and bite me right in the points.

My experience is primarily with Clariion, and at the time I was watching a purchasing decision there was a 10% difference in price between FC SAS and FC SATA, not to mention the fixed cost of the slot. I didn't have any say, but I certainly thought that you might as well go SAS just so you don't have to think about when to use SATA and when to use SAS.

Of course, management drank the reseller's kool-aid and "saved money" by deploying a combination.

In our environment, when we have a typical large Shared$ disk load (write once, read never), we tend to pop it on a RAID 5 SATA RDM, leaving the VM and the RDM mapping files safely with the VM on our RAID 1+0 SAS storage.

0 Kudos
ChrisDearden
Expert

We use a mixture of SAS and F-ATA drives in our EVA 8xxx's. For a large volume that just holds SQL backups, F-ATA does the job just fine. 80% of our VMFS datastores are on SAS, with a few F-ATA datastores for templates and backups (using visbu). On the occasions I've had to move a VM onto an F-ATA based datastore, unless it's a very IO-intensive VM, I've not seen any performance issues.

If this post has been useful , please consider awarding points. @chrisdearden http://jfvi.co.uk http://vsoup.net
0 Kudos
khughes
Virtuoso

i will be at vmworld next year Smiley Happy I live and work in SF, and it will be in SF next year, so here's my challenge.

RParker and I will throw down in Moscone center. I hereby challenge him to a duel with those giant boxing glove things. Winner gets.....ummm.......a beer and 1000 forum points Smiley Happy

--Matt

Even if my company doesn't send me next year, I'll drive from my job here in the bay area to watch that. Richard, I bet you could make quite a bit of money if you had a "pay to fight me" type of setup where everyone could take shots at you with those giant boxing gloves Smiley Happy

-- Kyle "RParker wrote: I guess I was wrong, everything CAN be virtualized"
0 Kudos
williambishop
Expert

I hate to add fuel to the fire between Parker and Matt, but as a SAN administrator I'm going to have to agree with Matt.

I have several arrays (I maintain about 1/3 PB over 26 switches) and I can get near SAS/SCSI speeds from SATA volumes. Now, there are some gotchas involved.

1. Your SAN admin has to be decent. It takes work and research. Out of the box, with no modification, everything is a Yugo. You have to build a Ferrari.

2. Your gear has to be decent. It doesn't matter how fast the disk is if the array is crap.

3. You have to set up the array correctly (I can beat a RAID 5 SAS LUN with a RAID 10 SATA LUN, I guarantee).

4. You have to stop listening to sales vendors.

5. You have to remember that everything has its place. SATA can be 80% as fast as SAS, true, but it doesn't belong everywhere. Matt's not saying it should be everywhere; no one is. But for the layout described, I'm with Matt: SATA will work just fine. If you absolutely need the SCSI/SAS performance edge, then that's what you buy. But the price difference is enough that sometimes you have to calculate whether or not you need it.

And yes, I have about 2000 VMs on SATA. Average read time is 4ms, average write is 6ms. Matt's right, there IS more to it than just the disks.
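The arithmetic behind point 3 is the usual RAID write-penalty rule of thumb: RAID 5 costs roughly 4 backend I/Os per host write, RAID 10 costs 2. The per-spindle figures below, and the idea that a SATA budget buys roughly twice the spindles, are assumptions for illustration, not measurements from any particular array:

```python
# Worst-case, 100% random small writes: each host write consumes
# `write_penalty` backend I/Os, so divide raw spindle capability by it.
# Spindle IOPS figures (170 for 15k SAS, 70 for 7.2k SATA) are assumed.

def host_write_iops(spindles, iops_per_spindle, write_penalty):
    return spindles * iops_per_spindle / write_penalty

sas_raid5 = host_write_iops(8, 170, 4)     # 8 x 15k SAS in RAID 5    -> 340.0
sata_raid10 = host_write_iops(16, 70, 2)   # 16 x 7.2k SATA in RAID 10 -> 560.0
print(sas_raid5, sata_raid10)
```

Note that with equal spindle counts the SAS group still wins; the claim hinges on buying more SATA spindles for the same money, and on the array's write cache doing its job.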

Let the flaming begin.

--"Non Temetis Messor."
0 Kudos
williambishop
Expert

BTW, if you look at the IBM XIV you'll see an out-of-the-box SATA array that is FASTER than SCSI/SAS.

--"Non Temetis Messor."
0 Kudos
AntonyW
Contributor

Guys,

I'm using SATA disks for low-IO VMs and SAS disks for high-IO VMs, and this works great. We're not having performance issues at all!

Why waste those expensive SAS disks on VMs which barely benefit from the high-speed disks? It's like throwing away money, IMHO.

The SAN is a NetApp 2050C with all disks in RAID-DP.

I am using iSCSI though.

I'd like to quote a reply from someone earlier in this thread:

"Yes SAS may be 40% faster, but for us that doesn't justify the 150% price difference."

-> so soooo true....

0 Kudos
brad_ault
Contributor

OK... after reading all this I figure I'll throw my 2 cents in.

I agree SAS is much faster and better performing than SATA, but there can be situations where SATA will work. For example, our file and print server has the OS on SAS and the data drive on SATA. The SATA is connected via RDM (Raw Device Mapping) and seems to run faster than it did when it was a physical server.

I feel better now that I have said this :).

0 Kudos
Delo777
Contributor

I have been consulting with several companies on their SAN and VMware environments and have found that theory actually meets practice.

We had a client running over 100 VMs in a 40-50 TB SAN environment which was predominantly SATA (for historical reasons). Yes, they did have performance problems, and yes, they did invest in several shelves of additional FC disks, but the bottom line was that when you size right, SATA can do a pretty good job! It just depends on the load. They went from a storage environment capable of 9000 random IOPS to multiples of that. In the end we only moved some high-performance machines that actually required the storage performance.

I have seen physical machines bring FC arrays to their knees, whereas several meaty servers would let a SATA array sit idle.

However, if you do go the SATA way, make sure you stick with a vendor with the virtualization capabilities to create safe, large disk pools to work with (NetApp, Compellent, IBM SVC, HDS USP, etc.), and for God's sake do some benchmarks to baseline against!

0 Kudos
richard6121
Contributor

We have VMs running on a shelf of SATA disk in a CX500. We have a ticket open with EMC right now to review our logs and try to determine why this SATA drops from several thousand IOPS to about TEN when there's heavy write activity going on inside any VM or from any ESX host (when cloning a VM, perhaps).

I'm rather impressed with how snappy the SATA is as long as everything is doing reads.

I do expect SATA to underperform SAS by a wide margin. However, having IOPS plummet to nearly zero during intense writes is just weird.

0 Kudos
hutchingsp
Enthusiast

Some interesting views - thanks very much to everyone who's replied so far Smiley Happy

Seems the sensible thing is to try it. I get the feeling some of you are talking about performance in environments with dozens of VMs where even the lightest-used ones are under fairly constant load.

I'm talking about an environment with a couple dozen VMs in total; however, we split all our roles (i.e. backup server, antivirus server, print server) onto dedicated VMs because it's so easy, and for this SATA application we're talking about a VM that will be used by literally two people, plus an Accellion FTA VM that is lightly loaded.

I'm going to do some digging into the optimum LUN size and how I can monitor performance. I know of esxtop and monitoring queues and the like, but is there a way to be a bit more proactive?
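One way to be more proactive, as a sketch: esxtop has a batch mode (something like `esxtop -b -d 5 -n 120 > stats.csv`) that writes one wide CSV row per sample, which you can post-process instead of eyeballing the live screen. The counter-name substrings below (QUED, DAVG, KAVG) are assumptions; match them against the headers your esxtop version actually emits:

```python
# Scan an esxtop batch-mode CSV and report the peak value of every counter
# whose header mentions queue depth or latency, so spikes over a long run
# aren't missed. Column-name keywords are assumptions for illustration.
import csv

def peak_counters(path, keywords=("QUED", "DAVG", "KAVG")):
    peaks = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for col, val in row.items():
                if col and any(k in col for k in keywords):
                    try:
                        v = float(val)
                    except (TypeError, ValueError):
                        continue  # skip blank or non-numeric samples
                    if v > peaks.get(col, float("-inf")):
                        peaks[col] = v
    return peaks
```

Run it over a day's capture and anything with a nonzero queue peak or a latency peak in the tens of milliseconds is a candidate for moving off the SATA LUN.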

0 Kudos