EricTrentMiller
Enthusiast

Virtual SAN appliances (VSAs): what is the current state of the field?

Like many SMBs, we are feeling the pressure to move away from DAS and get on some kind of SAN.  We are considering an iSCSI SAN implementation, which will likely include (at least in part) some kind of VSA to leverage our DAS capacity in the new iSCSI environment.

What are the major players in the VSA field?  My googling has turned up:

the vSphere Storage Appliance (not crazy about the poor disk space utilization)

FalconStor

StarWind VSA

HP LeftHand (P4000) VSA

Also, we would consider OpenFiler/FreeNAS... but it's not clear to me whether these can truly operate in a VSA mode (e.g., RAID across hosts).

Are there any others we should consider?

Does any of them stand out as a clear leader for any reason (price/features/usability/reliability)?

Any performance data out there?

Thanks.

15 Replies
EricTrentMiller
Enthusiast

Just added these to the list to be considered:

DataCore VSAN

StorMagic SvSAN

Nutanix 2400 VSAN

chriswahl
Virtuoso

I'm sure the Nexenta guys can chime in, but that's my poison of choice for virtual storage. It uses ZFS and some other really cool technology under the hood, offers read/write caching with SSDs, and has a very simple, easy-to-use interface.

http://nexenta.com/corp/

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
mcowger
Immortal

Nexenta, while cool, doesn't meet his need to scale out to multiple hosts in a clustered fashion like the rest do (the VSA, HP P4000, etc.).

--Matt VCDX #52 blog.cowger.us
chriswahl
Virtuoso

Ah, I totally missed the mark - was thinking just virtual storage. Thanks mcowger :)

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
CHogan
VMware Employee
(Accepted Solution)

I just wanted to add a small clarification around the vSphere Storage Appliance. Yes, it did start out with a RAID10 requirement on each host, but this was relaxed at the beginning of 2012, so that it now also supports both RAID5 & RAID6 configurations - http://blogs.vmware.com/vsphere/2012/01/raid10-requirement-for-vsphere-storage-appliance-vsa-relaxed...

The point here is that we do not want a single spindle failure to bring down one complete node in the cluster.

Just in case this was the reason you were discounting the vSphere Storage Appliance.

HTH

Cormac

http://cormachogan.com
EricTrentMiller
Enthusiast

This is great info - very helpful, guys. Thanks.

If anybody else stumbles upon this thread, please continue to add your opinions... our search is far from over.

I will say that I don't understand why any VSA vendor would not include the functionality to build volumes across physical nodes, because without that, why use it? Might as well stick with DAS and enjoy the performance of local drives.

I am very happy to hear the RAID 10 requirement was lifted... that will probably cause us to consider it most seriously, simply because it is a VMware product.

Thanks again.

EricTrentMiller
Enthusiast

@Cormac

Just to make sure I understand, re: "The point here is that we do not want a single spindle failure to bring down one complete node in the cluster."

If we have RAID5 on each physical node, and the VSA is doing RAID 1 across nodes... then wouldn't it take two drive failures to bring down one complete node, and four (two on each physical node) to bring down a two-node cluster?
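
To sanity-check my own arithmetic, here is a minimal sketch (plain Python; it just restates the failure counting above, assuming RAID5 per node and VSA RAID 1 mirroring across a two-node cluster):

```python
# Failure-counting sketch for a hypothetical two-node VSA cluster:
# each node runs RAID5 locally (survives 1 disk failure), and the
# VSA mirrors (RAID 1) each node's datastore to the other node.

NODES = 2
RAID5_FAILURES_SURVIVED = 1  # disk failures a node's array can absorb

# Drive failures needed to take down one complete node's array:
failures_to_lose_node = RAID5_FAILURES_SURVIVED + 1

# The VSA keeps a mirror on the other node, so losing data outright
# means losing every node's local array:
failures_to_lose_cluster = failures_to_lose_node * NODES

print(failures_to_lose_node)     # 2
print(failures_to_lose_cluster)  # 4 (two on each physical node)
```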

CHogan
VMware Employee

Hi Eric,

Yes - that is correct. I describe other failure scenarios and how the VSA handles them on the vSphere blog if you are interested.

Probably the best place to start is here - http://blogs.vmware.com/vsphere/2011/10/vsphere-storage-appliance-vsa-useful-links.html

If this is a longer-term project, you might want to see what new enhancements are coming for the VSA in and around the VMworld timeframe before making a decision.

Cormac

http://cormachogan.com
EricTrentMiller
Enthusiast

Thanks for pointing me in the right direction. Mind if I ask a few quick one-off questions?

-  How does the VSA perform in your experience, and is there any performance data (charts, etc.) that you guys can pass around?

-  I notice that it's NFS, not iSCSI... hrm. Not really a block-level device, but I guess it doesn't matter; just interesting.

-  Some of our DAS arrays are ripping fast: 15K 6 Gb/s SAS in RAID 0+1 (I will change these to RAID 5 if we use the VSA, which will essentially make it 5+1)... won't these fast drives saturate a gigabit NFS connection pretty quickly?

-  Can we configure the node-to-node replication (the VSA mirroring) to do its thing across a dedicated Ethernet interface, so it's not competing with the data traffic?

These are my big questions; I can dig up the rest on my own using the very helpful links you shared.

Thanks.

chriswahl
Virtuoso

It takes a ton of 4K blocks (the standard Windows partition block size) to saturate 1 GbE - something around 32,000 IOPS. So unless you're doing large-block transfers (such as SQL or file servers), this usually isn't an issue.

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
EricTrentMiller
Enthusiast

It's SQL :(

We may be looking at some hybrid solution (DAS for the SQL DBs and logs, VSA for the VMs).

Chris, could you link me to some resources that I can use to educate myself on how you arrive at your statement? My current state of knowledge is "hrm... 6 Gb/s (6 gigabits per second) is six times 1 Gbps Ethernet (1 gigabit per second), so it would easily saturate it."

However, I am quite certain my understanding is flawed, especially after reading your answer. I don't want to task you with educating me, but if you are familiar with any links or articles that explain this, I would be very appreciative.

chriswahl
Virtuoso

Just some math.

1 Gb/s = 0.125 GB/s = 131,072 KB/s

131,072 KB/s of bandwidth / 4 KB block size = 32,768 IOPS

Of course there is overhead from TCP, so it may end up being only 80% of that (roughly 26K IOPS).

If you're doing SQL blocks of... let's say 64K - that gives you about 2,000 IOPS, or ~1,600 with overhead.

Edit: Your 15K SAS drives probably get about 180 IOPS per disk, so it would take about 10 disks (no parity considerations) to provide this workload.
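
If you want to replay the same back-of-envelope math with other block sizes, here is a minimal sketch (plain Python; it mirrors the figures above and treats 1 Gb as 2^30 bits, so the results are rough approximations):

```python
# Back-of-envelope IOPS math for a 1 GbE link, using the same
# binary-unit approximation as above (1 Gb ~= 2^30 bits).

LINK_KB_PER_SEC = (1024 * 1024) / 8  # 1 Gb/s -> 131,072 KB/s

def iops_to_saturate(block_kb, tcp_efficiency=1.0):
    """IOPS needed to fill the link at a given block size."""
    return LINK_KB_PER_SEC / block_kb * tcp_efficiency

print(iops_to_saturate(4))        # 32768.0 - 4K blocks
print(iops_to_saturate(4, 0.8))   # ~26214  - with ~20% TCP overhead
print(iops_to_saturate(64))       # 2048.0  - 64K (SQL-ish) blocks
print(iops_to_saturate(64, 0.8))  # ~1638   - with overhead

# A 15K SAS drive does roughly 180 IOPS, so disks needed to supply
# the 64K workload (ignoring parity):
print(iops_to_saturate(64) / 180)  # ~11 disks, i.e. the "about 10" above
```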

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
EricTrentMiller
Enthusiast

Man, I just got a lot smarter. Thx.

Regarding block size (4K/64K/etc.): that refers to the block size used when I created the physical volume, correct? This is usually measured in KB in the RAID hardware...

Also, I know that when I add storage in ESX, I choose a block size for the volume, but that is measured in MB, not KB. I've always wondered how these two values relate to one another and what net effect they have (the physical RAID block size in KB vs. the VMFS block size in MB). I know that the VMFS block size has to be increased in order to use larger 2TB partitions, so I usually choose the largest block size.

chriswahl
Virtuoso

Regarding block size (4K/64K/etc.): that refers to the block size used when I created the physical volume, correct? This is usually measured in KB in the RAID hardware...

It refers to the K size of the blocks on the guest partition, not the RAID hardware. Windows defaults to 4K blocks. Use DISKPART to verify.

Also, I know that when I add storage in ESX, I choose a block size for the volume, but that is measured in MB, not KB. I've always wondered how these two values relate to one another and what net effect they have (the physical RAID block size in KB vs. the VMFS block size in MB). I know that the VMFS block size has to be increased in order to use larger 2TB partitions, so I usually choose the largest block size.

No relation to each other in the context of this thread. Also, in vSphere 5 the block size is now unified at 1MB; there is no 1/2/4/8 MB choice.
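
If you'd rather check the guest cluster size programmatically than through DISKPART, here is a minimal sketch (Python on Windows, calling the Win32 GetDiskFreeSpaceW API via ctypes; `fsutil fsinfo ntfsinfo C:` reports the same figure as "Bytes Per Cluster"):

```python
# Read the allocation-unit (cluster) size of a Windows volume.
import ctypes

def cluster_size(root="C:\\"):
    sectors_per_cluster = ctypes.c_ulong()
    bytes_per_sector = ctypes.c_ulong()
    free_clusters = ctypes.c_ulong()
    total_clusters = ctypes.c_ulong()
    ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
        ctypes.c_wchar_p(root),
        ctypes.byref(sectors_per_cluster),
        ctypes.byref(bytes_per_sector),
        ctypes.byref(free_clusters),
        ctypes.byref(total_clusters),
    )
    if not ok:
        raise ctypes.WinError()
    return sectors_per_cluster.value * bytes_per_sector.value

print(cluster_size())  # typically 4096 on a default-formatted NTFS volume
```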

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
SantiniStorMagi
Enthusiast

I see this thread is answered, but I thought I would share this on StorMagic's SvSAN: http://www.stormagic.com/svsan_vmware_vsa_comparison.php

It gives some great info about how SvSAN compares and highlights some key feature sets. I believe since this was done there may have been a few changes with VMware's VSA - with RAID, as mentioned before.

Thanks

Steve
