tleavit
Contributor

VSAN implementation

Hello Fellow VSAN users and fellow vExperts.

I am considering an all-flash vSAN implementation.

I plan on purchasing 3 new hosts and have already gone the traditional route of acquiring storage quotes from Dell (EqualLogic), and we are working on one from Pure Storage. But I would like to consider vSAN, and would love to hear people's opinions on it to date (2017), since I have not looked at it in a few years.

I'm looking for ~20 TB of all-flash capacity, and my vSAN plan would be 10 × 800 GB SAS 12Gb/s enterprise SSDs per server. It seems I can make this happen with vSAN at half the price of an all-flash EqualLogic or Pure SAN (10 GbE iSCSI).
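As a rough sanity check of that sizing (a sketch: drive counts are the ones proposed above, and the 2x factor assumes vSAN's default FTT=1 with RAID1 mirroring):

```python
# Illustrative vSAN capacity estimate for the proposed layout.
HOSTS = 3
DRIVES_PER_HOST = 10
DRIVE_TB = 0.8  # 800 GB SAS SSDs (capacity tier)

raw_tb = HOSTS * DRIVES_PER_HOST * DRIVE_TB   # total raw capacity
usable_raid1_tb = raw_tb / 2.0                # FTT=1 with RAID1 stores two copies

print(f"Raw capacity:     {raw_tb:.1f} TB")     # 24.0 TB
print(f"Usable (RAID1):   {usable_raid1_tb:.1f} TB")  # 12.0 TB
```

Note that with mirroring the 24 TB raw yields only about 12 TB of protected usable space, which is worth weighing against the ~20 TB target.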

Any advice would be appreciated. Is vSAN up to snuff today?

This would host around 100 virtual servers (mixed traditional infrastructure), with 10 GbE uplinks (Nexus 9300).

Thank you

Todd Leavitt

vExpert 2017

1 Solution

Accepted Solutions
TheBobkin
VMware Employee

Hello Todd,

First off, I would strongly advise going with a minimum of 4 nodes. This offers far more flexibility: better ability to recover from hardware faults, and the option to evacuate an entire node for maintenance/upgrades without running on a reduced number of data replicas during that window. A 4-node cluster would also allow you to use RAID5 as the FTM (Fault Tolerance Method) for FTT=1 (Failures To Tolerate), which uses 1.33x the space, as opposed to 2x with RAID1 as the FTM for FTT=1.
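The overhead difference is easy to quantify (an illustrative sketch; 2x and 4/3x are the standard vSAN FTT=1 factors for RAID1 mirroring and RAID5 3+1 erasure coding respectively):

```python
# Raw capacity needed per usable TB under vSAN FTT=1, by Fault Tolerance Method.
def raw_needed(usable_tb, ftm):
    factors = {
        "RAID1": 2.0,        # two full mirrored copies
        "RAID5": 4.0 / 3.0,  # 3+1 erasure-coding stripe (all-flash, >= 4 nodes)
    }
    return usable_tb * factors[ftm]

for ftm in ("RAID1", "RAID5"):
    print(f"{ftm}: {raw_needed(20, ftm):.1f} TB raw for 20 TB usable")
# RAID1: 40.0 TB raw; RAID5: 26.7 TB raw
```

For a 20 TB usable target, RAID5 saves roughly 13 TB of raw flash compared to RAID1, which is why the 4th node is often worth it.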

I would also advise using *slightly* larger capacity-tier drives (1.2-1.5 TB seems to be the sweet spot for All-Flash), but this mostly depends on the size of the vmdks being placed on them and the required performance profile(s).

While I am likely biased towards vSAN (as I work with it every day and genuinely love this product), its reliability and features have come on in huge leaps since I first started using it ~2 years ago (to the point that I cringe when I see someone still running a 5.5 U2 cluster!).

Also, VMware are investing a huge amount of resources into vSAN so this progress is only going to continue.

Bob


2 Replies

thomasross
Enthusiast

We started with 4 nodes and recently added three more ESXi hosts. We are using HP ProLiant 380 G9 servers with 1.5 TB and 800 GB SSDs; the 1.5 TB drives are the capacity disks.

I definitely recommend a minimum of 4 hosts.

We run vSphere 6.0 and the vCenter appliance.

This is using VMware Cloud Foundation, and so far we have had very good luck with it. We started with a few production and dev/test VMs and are adding more of both.

We have 100 TB of raw storage, and I estimate 40 TB of it is usable.

We are using RAID 1 with thin provisioning.
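That usable estimate lines up with RAID1 FTT=1 plus vSAN's recommended free "slack" space for rebalancing and rebuilds (a sketch; the 25% slack figure is an assumption, commonly cited as 25-30%):

```python
# Rough raw-to-usable estimate for RAID1 FTT=1 with reserved slack space.
raw_tb = 100.0
mirrored_tb = raw_tb / 2.0   # RAID1 FTT=1 stores two copies of every object
slack = 0.25                 # assumed ~25% free space kept for rebalancing/rebuilds

usable_tb = mirrored_tb * (1 - slack)
print(f"~{usable_tb:.1f} TB effectively usable")  # ~37.5 TB, in the ballpark of the 40 TB estimate
```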

I'd recommend keeping the firmware, drivers and VMware software up to date.

Tom Ross
