Pros - VSAN is integrated into the vmkernel; Nutanix runs as a VM.
In my view there are very few benefits to being integrated into the vmkernel. One benefit of running in a VM is that it is not dependent on the hypervisor version.
For example, you can run the latest version of Nutanix with vSphere 5.5; that is not possible with VSAN, since it is tied to the hypervisor version.
Cons - Nutanix is more flexible and works with other hypervisors (XEN and Hyper-V).
XEN is not supported; what is supported is AHV (a distribution of KVM), Hyper-V, and all supported versions of vSphere. (Possibly other hypervisors in the future.)
Pros - vSAN is compatible with a wider range of hardware, while Nutanix only runs on validated (homologated) hardware.
Nutanix is available on multiple hardware platforms, such as Supermicro, Dell, and Lenovo.
Disclaimer: I am a Nutanix employee, but I try to stay objective.
Why stick to only these two virtual SAN solutions when there are others out there, such as ScaleIO, StarWind, etc.?
Good morning. I think you have it about right. vSAN is great if you are all-in on VMware. There are some solutions that will let you share out a vSAN datastore as NFS/CIFS with AD integration. I'm not going to link to any particular one, as I have not used them personally.
I would recommend you take a look at VxRail from EMC. Thank you, Zach.
Thanks for the disclaimer, Linjo.
I am curious about the Nutanix platform and want to find more information!
If you have more information, please share!
Obviously Linjo is as biased as I am (VSAN team member here). I would argue that there are many benefits to being an integrated part of vSphere. To give some examples: enabling VSAN takes 3 clicks; installing and configuring is dead simple for people who know vSphere; it is blazing fast; management happens in the same interface you are already using (the Web Client); and it integrates with things like vROps, Log Insight, etc. When a new update comes out, you install it and BAM, you have the latest version of VSAN with it; you barely even need to think about it. If you are a vSphere customer, then VSAN is the logical choice when it comes to hyper-converged, if you ask me. 3500+ customers have already made that choice, more than anyone else in the hyper-converged market today.

On top of that, you can pick ANY server on the vSphere HCL. Want HP? IBM? SuperMicro? Dell? Fujitsu? Quanta? The list goes on and on... Want flexibility? Don't want to be tied to a specific vendor or configuration? VSAN gives you that. Start at 3 nodes (or even 2) with a couple of disks, and scale to MANY hosts in a cluster with each host having 35 capacity devices. Yes, petabytes of storage are possible. Want hybrid? Want all-flash? Want it based on NVMe, or would you rather use FusionIO PCIe? Or do you still prefer SAS or SATA? It is all possible. Anyway, the decision is yours to make... For more details on VSAN, find everything you ever wanted to know here: vmware-vsan.com

Disclaimer - VMware employee, working in the Storage and Availability Business Unit - I could be biased.
As an administrator of storage/compute/directory services/collaboration services for 15 years, I always found it a challenge to "right-size" storage for different workloads. (I've worked for a vendor since late 2010.)
Whenever I had to carve up storage for this/that/other workload, there was always a huge amount of work ensuring I met the requirements of the workload (performance, protection, capacity) while not being wasteful with that storage. It was common to have to size storage one way to meet a performance requirement, often wasting capacity, or size it another way for capacity, either wasting performance or not providing enough. What I like most about Virtual SAN is that it allows an administrator to individually assign policies per VMDK/object to provide appropriate performance, capacity, or protection. Virtual SAN is a radical departure from the traditional way of managing and consuming storage.
VMware Virtual SAN is integrated into the ESXi Hypervisor. Virtual Storage Appliances run on top of a hypervisor in a Virtual Machine.
It has been argued that the number of "hops" data takes when using a VSA can add significant overhead. That may or may not be the case; I don't 100% agree with it. ESXi itself, together with today's hardware, can handle massive IO, so presenting storage from a VSA could be entirely appropriate for the expected workload. It is important, though, to keep in mind that VSAs operate in the same space as other Virtual Machines. Things like CPU Ready Time and multiple-vCPU scheduling requirements do not affect Virtual SAN the way they do Virtual Machines. Additionally, Virtual SAN requires a small amount of host RAM, entirely dependent on the architecture (Hybrid vs All-Flash), the capacity of the cache devices, and the number of cache devices.
Requirements for Virtual SAN 6.0 can be found here: VMware Virtual SAN 6.0 Requirements (2106708) | VMware KB
From a CPU overhead perspective, we typically state Virtual SAN requires around 10% CPU overhead.
Some features in Virtual SAN 6.2 (Deduplication & Compression, Erasure Coding, Checksum) add a small overhead on top of that.
We might be giving worst-case values, as many customers report lower overall utilization.
I'd also state that KB 2106708 mentions 32GB of RAM as a minimum, but that is only needed for a full complement of disk groups. Most customer deployments are well below that requirement per host.
Another KB is a bit more indicative of Memory requirements: Understanding Virtual SAN 6.x memory consumption (2113954) | VMware KB
My home lab (all-flash), with Micron P420m drives for caching (1 disk group), requires only 7.7GB of RAM per host.
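As a rough sketch of where those per-host numbers come from, the per-disk-group formula described in KB 2113954 can be expressed in a few lines of Python. The constants below (base host overhead, per-disk-group overhead, per-GB cache overhead, and the cache sizing cap) are illustrative values for vSAN 6.x as I recall them from the KB; verify them against the current KB revision before sizing real hosts.

```python
# Sketch of the vSAN 6.x host memory estimate per KB 2113954.
# Constants are illustrative; check the current KB before real sizing.

BASE_CONSUMPTION_MB = 5426        # fixed per-host overhead
DISKGROUP_BASE_MB = 636           # fixed overhead per disk group
CAPACITY_DISK_MB = 70             # overhead per capacity device
CACHE_OVERHEAD_PER_GB = {"hybrid": 8, "all-flash": 14}
CACHE_SIZE_CAP_GB = 600           # cache capacity above 600GB is not counted

def vsan_memory_mb(arch, disk_groups, cache_gb, capacity_disks_per_group):
    """Estimate the vSAN memory footprint per host, in MB."""
    per_gb = CACHE_OVERHEAD_PER_GB[arch]
    cache = min(cache_gb, CACHE_SIZE_CAP_GB)
    per_group = DISKGROUP_BASE_MB + per_gb * cache
    capacity = disk_groups * capacity_disks_per_group * CAPACITY_DISK_MB
    return BASE_CONSUMPTION_MB + disk_groups * per_group + capacity

# Small hybrid host: 1 disk group, 200GB cache device, 4 capacity disks
small = vsan_memory_mb("hybrid", 1, 200, 4)

# Fully populated hybrid host: 5 disk groups, 600GB cache, 7 capacity disks each
full = vsan_memory_mb("hybrid", 5, 600, 7)

print(f"small host: {small / 1024:.1f} GB, full host: {full / 1024:.1f} GB")
```

Under these assumed constants, a small single-disk-group host needs well under 10GB, while a fully populated host (5 disk groups, 7 capacity devices each) lands in the low-to-mid 30s of GB, which is consistent with the "32GB for full functionality" guidance in the design documentation.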
Virtual SAN is compatible with:
- Hosts on the vSphere Hardware Compatibility list (VMware Compatibility Guide - System Search) that have Storage Controllers/Drives on the Virtual SAN Compatibility Guide (VMware Compatibility Guide - vsan)
- Virtual SAN Ready Nodes - pre-validated hardware from many OEMs (http://vsanreadynode.vmware.com/RN/RN)
- Engineered Appliances (VxRail | Hyper-Converged Infrastructure | VCE)
- And some larger Engineered Solutions EVO:SDDC (VMware EVO SDDC Hyper-Converged Infrastructure (HCI) Solution )
And yes, Virtual SAN is only compatible with VMware vSphere, the most proven and widely deployed hypervisor.
I currently work on both systems. One major con of Nutanix is the CVM: in order to run Nutanix you must have a Nutanix-run CVM on each host, which can use anywhere between 16GB and 32GB of RAM each, depending on the features you want running (e.g. dedup, compression). I find this a killer when it comes to smaller-sized environments where power is limited.
Another con is cost. Currently, Nutanix software is much more expensive than VSAN licensing. It is also another vendor's application you need to learn how to navigate, including their own command line and the services you need to get up and running before the Nutanix storage cluster is seen by all hosts. Whereas with VSAN, you pretty much license, enable, select cache and data drives, and you're done.
I don't want to turn this into a flame thread, but vSAN is not "free" either; it will use from 6-32GB of memory (depending on disk configuration) and "up to" 10% of the CPU resources.
It's not visible in the same way as the Nutanix CVM, but it's there.
Disclaimer: Nutanix employee
You are correct that VSAN is not "free"; however, it is a fraction of the price of Nutanix.
So we agree that the resource consumption is comparable between Nutanix and vSAN? Great!
(I'll leave the pricing discussion to someone else; I'm more interested in the technical side.)
That I would like to explore, because I'm not aware of VSAN taking up 20GB of memory per host to run.
It's in the vSAN Design Guide; here is a quote from the 6.6 version:
"A minimum of 32GB is required per ESXi host for full vSAN functionality (5 disk groups, 7 disks per disk group)"
That doesn't tell me anything. All that states is that there should be a minimum of 32GB to run, not that vSAN uses 32GB while running, the way Nutanix does. Nutanix CVMs actually use 16GB-32GB.
The statement "Each host should contain a minimum of 32 GB of memory to accommodate for the maximum number of 5 disk groups and maximum number of 7 capacity devices per disk group." pertains to a fully populated configuration per host (5 disk groups, each with 1 cache device and 7 capacity devices).
The amount of RAM required is completely dependent on the type of hardware (Hybrid or All-Flash), the number of disk groups, and the size of the cache device(s) being used.
In short, 32GB of RAM is NOT a minimum requirement, unless you are using 5 disk groups with 7 capacity devices each.