VMware Cloud Community
adminatater
Contributor

Confused About Shared SAS

My company finally got some money to spend on shared storage, so now I can at last move away from storing VMs locally. I'm currently looking into Shared SAS storage, and one thing just really confuses me.

Currently I have two hypervisors. If I understand Shared SAS storage correctly, I'll need to buy a compatible SAS card for each hypervisor and then connect each hypervisor to the Shared SAS storage device. What I'm confused about is what happens at that point: does each hypervisor just read and write from the Shared SAS? It seems too simple, so I must be missing something here. If anyone could fill in the blanks I'd greatly appreciate it.

wila
Immortal

Hello,

Can you give us the names of a few of the products you are looking into? To me, "Shared SAS" sounds a lot like local storage, as in disks directly connected to SAS adapters on your hosts. That's not what shared storage means in the ESX/vSphere world (but I might be completely wrong...).

There it means storage that operates on its own and is connected via Ethernet (NFS/iSCSI) or Fibre Channel HBAs.



--
Wil
_____________________________________________________
VI-Toolkit & scripts wiki at http://www.vi-toolkit.com

Contributing author at blog www.planetvm.net

Twitter: @wilva

| Author of Vimalin. The virtual machine Backup app for VMware Fusion, VMware Workstation and Player |
| More info at vimalin.com | Twitter @wilva
FrostyatCBM
Enthusiast

Hello Wil, nice to know someone else is looking at Shared SAS as an alternative to iSCSI, FC and NFS. Yes, it's very simple (much simpler conceptually than iSCSI and Fibre Channel, because there is no switching involved): just an HBA in your host and a SAS cable running to the shared storage device.

Our organisation recently purchased a Dell MD3200 (n.b. not the MD3200i, which is the iSCSI model). It's Shared SAS. We bought the dual-controller model. We bought two (dual-port) HBAs for each host server, and we cable one HBA to controller 0 and the other to controller 1 in the MD3200. Each controller in the MD3200 has 4 ports. We run 3 hosts, so we have 1 port 'spare' for future growth, or for dragging backup data off into another server which we might purchase in due course. The cables run direct from the host to the MD3200 and in our case each cable carries 4 x 6Gbit/sec SAS connections.

I'm just in the process of configuring a test bed so that I can test IOPS, so if you wait until early next week I should be able to provide some test results (which I will hopefully compare with my existing slow SATA iSCSI SAN and with some servers using 3Gbit/sec SAS direct-attached storage).

Shared SAS is a pretty new idea. Not many people seem to know a lot about it. For small installations I think it might be a really good decision ... I hope so anyway ... as we are going this way ourselves. Fingers crossed. I will post again with some performance stats when I have them.

FrostyatCBM
Enthusiast

OK, I can share a bit more info about performance. So far I have just used a Windows application called HD Tune to test raw throughput. I installed their free version on a guest VM and gave it a run against a variety of storage.

My old iSCSI SAN (an EMC Clariion) which had 10 x 7200rpm SATA disks (yuk!) configured into 2 x 5-disk RAID5 groups, linked via 2 x 1Gb/sec NICs into HP ProCurve switches, was able to put through about 60MB/sec to 80MB/sec ... pretty poor (hence our desire to replace it).

A Dell R610 server with Windows installed directly on it, containing a small number of 10k SAS drives (direct attached), was giving me about 200MB/sec ... I think its setup was bad, incidentally, as the drives were split across multiple SCSI controllers ... looked a bit weird to me!

Compare those with the MD3200 and an MD1200 JBOD loaded with 450GB 15k SAS drives and 2TB 7.2k nearline SAS drives ... configured with a mix of 5-disk RAID5 and 6-disk RAID10 disk groups on both tiers ... HD Tune reports that all those disk groups maxed out at 1,100MB/sec. I assume that this is saturation of the 4x6Gb/sec cables, as there was no difference between any of the disk configurations.
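As a rough sanity check on that plateau, here is the back-of-envelope arithmetic for what the cabling itself should be able to carry (just a sketch in Python, assuming 6Gb/sec lanes with 8b/10b encoding and ignoring SAS protocol overhead):

# Rough usable bandwidth of a 4-lane 6Gb/sec SAS wide-port cable,
# assuming 8b/10b encoding (~80% efficiency) and no protocol overhead.
lanes_per_cable = 4
line_rate_gbps = 6.0        # raw line rate per SAS lane
encoding_efficiency = 0.8   # 8b/10b encoding

per_lane_mb = line_rate_gbps * encoding_efficiency * 1000 / 8   # ~600 MB/sec
per_cable_mb = per_lane_mb * lanes_per_cable                    # ~2400 MB/sec

print(f"per lane : ~{per_lane_mb:.0f} MB/sec")
print(f"per cable: ~{per_cable_mb:.0f} MB/sec")

By those numbers a single lane tops out around 600MB/sec and the full wide port around 2,400MB/sec, so the 1,100MB/sec ceiling may be the controller or cache rather than the cable itself; either way it dwarfs the old iSCSI numbers.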

Very encouraging early signs. I need to install the professional version of HD Tune and see what sort of performance I can get if I run different kinds of tests and workloads, rather than just a raw throughput test.

FrostyatCBM
Enthusiast

I installed the trial version of HD Tune Professional and got the following results when I tested on a 5-disk RAID5 on the MD3200 with 15k 450GB SAS drives:

wila
Immortal

Wow... that looks very impressive.

Thanks for keeping us up-to-date on this, much appreciated.



--
Wil
_____________________________________________________
VI-Toolkit & scripts wiki at http://www.vi-toolkit.com

Contributing author at blog www.planetvm.net

Twitter: @wilva

| Author of Vimalin. The virtual machine Backup app for VMware Fusion, VMware Workstation and Player |
| More info at vimalin.com | Twitter @wilva
FrostyatCBM
Enthusiast

I've been doing some more testing and I have to say that I don't trust the IOPS numbers coming out of HD Tune; they must reflect a very, very non-standard workload. I know very little about IOPS (that much I must confess) but have been doing some reading ... in the end I decided to load up IOmeter on my test VM and give it a run.

I configured 4 worker objects, each with the default settings for the access specification. I set each worker to test 2 disks ... one is the 5-disk RAID5 15k SAS ... the other my 6-disk RAID10 15k SAS ... both running on the MD3200 tier of storage (nothing on the JBOD). I then put the ESX host into tech support mode and logged in via PuTTY to run esxtop. Here are a couple of screenshots:

They show that I am getting a bit over 1,600 IOPS in total on this test, with the split roughly 50:50 between the two disk groups. Throughput on the test was a measly 3.23MB/sec, due to the nature of the workload I guess.

What this seems to indicate is that the hardware/controllers/cabling are capable of extremely high levels of throughput, but of course you are always constrained by the nature of the workload and the physical number of spindles. I guess I would summarise by saying that if you use a Shared SAS storage system like this, you have every reason to expect good performance up to the level that you invest in the quality and number of drives in your storage array.
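To put that ~1,600 IOPS in context, here is a very rough spindle-count estimate (a sketch only; the ~180 IOPS-per-15k-spindle figure is a generic rule of thumb, and it ignores controller cache and RAID write penalties):

# Crude random-I/O ceiling from spindle counts alone, assuming ~180 IOPS
# per 15k SAS spindle and ignoring cache and RAID write penalties.
iops_per_15k_spindle = 180   # rule-of-thumb figure, not a measurement

disk_groups = {
    "RAID5, 5 x 15k SAS": 5,
    "RAID10, 6 x 15k SAS": 6,
}

for name, spindles in disk_groups.items():
    print(f"{name}: ~{spindles * iops_per_15k_spindle} IOPS")

total = sum(disk_groups.values()) * iops_per_15k_spindle
print(f"combined: ~{total} IOPS")

Eleven spindles works out to roughly 2,000 IOPS by that crude measure, so ~1,600 IOPS under a mixed read/write random workload (where writes cost extra on RAID5) looks about right for the hardware.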

Interested in others' feedback on these results.
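Incidentally, if anyone wants to capture the same counters without screenshots, esxtop also has a batch mode (esxtop -b -d <seconds> -n <samples> > file.csv) whose CSV output you can post-process afterwards. A minimal Python sketch, assuming a capture file called esxtop_stats.csv; the exact header text varies between ESX versions, so it just matches on a substring:

# Pull the commands-per-second (IOPS-like) columns out of an esxtop batch capture.
import csv

with open("esxtop_stats.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    # Columns whose header mentions a commands-per-second counter.
    wanted = [i for i, name in enumerate(header) if "Commands/sec" in name]
    for row in reader:
        for i in wanted:
            print(header[i], row[i])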

FrostyatCBM
Enthusiast

Ran an IOmeter test with a 16K, 75% read, 0% random workload against the same 2 disks ... more than 11,000 IOPS and nearly 180MB/sec:

More evidence that results will be highly variable based on the workload type.
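For what it's worth, those two figures hang together: at a 16K transfer size, 11,000 IOPS works out to roughly the reported throughput (quick arithmetic only):

# Quick consistency check: IOPS x transfer size should roughly equal throughput.
iops = 11_000
transfer_size_kb = 16

throughput_mb = iops * transfer_size_kb / 1024
print(f"~{throughput_mb:.0f} MB/sec")   # ~172 MB/sec, close to the ~180 reported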

FrostyatCBM
Enthusiast

Same workload, but across 4 disks simultaneously:

1. RAID5 5-disks 450GB 15k SAS

2. RAID10 6-disks 450GB 15k SAS

3. RAID5 5-disks 2TB 7200rpm nearline SAS

4. RAID10 6-disks 2TB 7200rpm nearline SAS

adminatater
Contributor

Thank you for posting all those wonderful benchmarks FrostyatCBM!

I was looking at the Dell PowerVault MD3200 myself. Might I inquire what model/type of SAS add-on card you purchased for each hypervisor?

adminatater
Contributor

Thought of another question: did you have to create separate data containers for each hypervisor, or could they all access one single data container? In my mind I can only see it working if each hypervisor has a separate data container (so the reads/writes don't conflict).

DSTAVERT
Immortal

FrostyatCBM wrote: "Shared SAS is a pretty new idea. Not many people seem to know a lot about it. For small installations I think it might be a really good decision ... I hope so anyway ... as we are going this way ourselves. Fingers crossed. I will post again with some performance stats when I have them."

Shared storage like this, albeit not SAS, has been available for many years for clustered servers. There have been many discussions in the communities, and it is a good choice for smaller installations. If memory serves, HP and perhaps Dell had, or were bringing out, devices that could handle more than 4 connections.

-- David -- VMware Communities Moderator
FrostyatCBM
Enthusiast

Re: the HBAs ... we just took the Dell-recommended HBAs and I must confess that I didn't pay too much attention to what model they were. Initially we purchased just a single dual-port HBA for each server. Later on I decided that I wanted more redundancy, so I purchased an additional dual-port HBA for the servers. So we are using only one of the ports on each HBA, but if an HBA or a PCIe slot fails, the other HBA should be able to take over. Each HBA is a 6Gb/sec device.

Looking closer at the server now via the VI client, I can see the integrated PERC H700 controller (we are using 2 x 250GB SATA drives in RAID1 for the hypervisor install) and then a couple of "6gbps SASHBA" storage controllers. So maybe they are just known as Dell 6Gbps SAS HBAs.

FrostyatCBM
Enthusiast

Regarding your question about separate storage for each ESXi host, I'm just getting to that stage of my setup. It's my understanding that Shared SAS means exactly that ... the LUNs presented by the MD3200/MD1200 will be accessible to all my hosts ... so vMotion will work, etc. We certainly wouldn't have purchased it if that weren't the case. But since I haven't actually configured a 2nd host and tested all this myself, I can only say "it's supposed to work" and will see whether that is true in due course!

If you're looking for information on what features are supported with the Shared SAS, the best resource I was able to find was:

http://www.vmware.com/resources/compatibility/pdf/vi_san_guide.pdf

From pages 5+6:

SAS Arrays

For SAS Arrays, if listed, VMware supports the following configuration, unless footnoted otherwise:

- Basic Connectivity - The ability of ESX hosts to recognize and interoperate with the storage array. This configuration does not allow for multipathing, any type of failover, or sharing of LUNs between multiple hosts.

- Direct Connect - In this configuration, the ESX host is directly connected to the array (that is, no switch between HBA and the array). Windows Clustering is not supported in this configuration.

- LUN sharing - The ability of multiple ESX hosts to share the same LUN.

- Multipathing - The ability of ESX hosts to handle multiple paths to the same storage device.

- HBA Failover - In this configuration, the ESX host is equipped with multiple HBAs connecting directly to the array. The server is robust to HBA failure only.

- Storage Port Failover - In this configuration, the ESX host is attached to multiple storage ports on the same array and is robust to storage port failures.

- Boot from SAS - SAS boot is supported unless explicitly stated in a footnote for a specific array.

FrostyatCBM
Enthusiast

DSTAVERT wrote: "If memory serves, HP and perhaps Dell had, or were bringing out, devices that could handle more than 4 connections."

Yes, the Dell MD3200 in its dual-controller configuration can support up to 8 hosts, or 4 hosts with redundant links (each controller has four 6Gbps SAS connectors on it).
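The port arithmetic behind that, as a quick sketch using the counts above:

# Host fan-out for a dual-controller array with 4 SAS ports per controller.
ports_per_controller = 4
controllers = 2
total_ports = ports_per_controller * controllers   # 8

for paths_per_host in (1, 2):                       # single vs redundant cabling
    print(f"{paths_per_host} path(s) per host -> up to {total_ports // paths_per_host} hosts")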

FrostyatCBM
Enthusiast

One other comment I should make. In our discussions with Dell, it was repeatedly brought to my attention that I will not be able to take advantage of MPIO (multipath I/O). So whereas an MD3200i (the iSCSI model) can be configured so that a host retrieves data via multiple paths at once, the MD3200 doesn't do this. Yes, you can configure multiple paths with multiple HBAs, but they are active/standby, not active/active.

When we considered our purchase, we decided that the benefits:

-- having a single active 4x6Gb/sec miniSAS cable for data retrieval (instead of 2 x 1Gb/sec NICs for iSCSI)

-- no Ethernet switches (simpler config)

outweighed the disadvantage:

-- not being able to get MPIO, as we could have with iSCSI

I suppose that there are other possible disadvantages ... e.g. we can't configure iSCSI access from inside our VMs if we ever want to (not that I personally have ever wanted to).

But we are a very small environment (<100 users in a single location) and the simplicity of Shared SAS is hopefully a worthwhile benefit for us. N.b. we are actually retiring an old iSCSI SAN for this project, so we have some familiarity with iSCSI; however, we can configure the Shared SAS setup ourselves, whereas in the past we had to get consultants in to set up/configure the iSCSI stuff for us, so that will save us a heap of $$$ too.
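For anyone weighing the same trade-off, the raw-bandwidth arithmetic behind it looks roughly like this (a sketch only; it assumes active/standby SAS, so only one 4-lane path carries I/O at a time, reuses the ~600MB/sec-per-lane figure from earlier in the thread, and treats the 1Gb/sec NICs at raw line rate, so real iSCSI throughput would be lower still):

# Usable bandwidth: one active 4-lane 6Gb/sec SAS path vs two 1Gb/sec iSCSI NICs.
sas_lanes_per_path = 4
sas_mb_per_lane = 600       # ~6Gb/sec lane after 8b/10b encoding

iscsi_nics = 2
iscsi_mb_per_nic = 125      # raw 1Gb/sec line rate; real iSCSI throughput is lower

sas_path_mb = sas_lanes_per_path * sas_mb_per_lane      # ~2400 MB/sec
iscsi_mb = iscsi_nics * iscsi_mb_per_nic                # ~250 MB/sec

print(f"single active SAS path   : ~{sas_path_mb} MB/sec")
print(f"dual 1GbE iSCSI with MPIO: ~{iscsi_mb} MB/sec")

Even without MPIO on the SAS side, a single active path has roughly ten times the headroom of the dual-NIC iSCSI setup it replaces.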

pesospesos
Contributor

great thread - thanks for all the numbers!

We currently use an MD3000 shared SAS box with two Hyper-V hosts (Dell SAS 5/E HBAs).

We are looking at possibly moving to the MD3200 - bummer that it takes 2 fewer drives, but the performance increase would be nice...
