VMware Cloud Community

Direction on upgrading storage for 2 vSAN clusters being built on 12th-gen Dell hardware used for Horizon VDI

I was hoping someone who's gone down this road before might be willing to check out my plan. There are 2 specific questions at the end; however, I am also fishing for new ideas. I'm well read on vCenter, Horizon, vSphere, and vSAN, but I am a new administrator who may be overlooking some design aspects.

While there will be some non-critical production uses, this is largely a test run for a full Horizon / vSAN deployment, which would see all-new hardware purchased for the production roll-out. I'm building it out of what's available on hand, with about $4-5,000 available for a few new SSD purchases. I have Horizon Enterprise CCU licenses. The idea is to get a feel for what vSAN can do and a general understanding of what it takes to run 50 concurrent virtual desktops in our environment.

Owned hardware:  ****Environment is limited to vSAN 6.2 / ESXi 6.0 U3 due to the PERC H710 controllers.****

     3 x identical Dell T620 - ESXi 6.0 U3: 2 CPUs / 12 cores, 128 GB RAM, 10 GbE, PERC H710P (I likely need more RAM per host)

     1 x R720 - ESXi 6.0 U3: 2 CPUs / 12 cores, 128 GB RAM, PERC H710 Mini

     1 x R620 - ESXi 6.0 U3: 2 CPUs / 12 cores, 128 GB RAM, PERC H710 Mini

Storage (all Dell enterprise-class, though the drives are 3-5 years old):

     10 x 200 GB SATA SSD

     19 x 300 GB 15k SAS

     48 x 1.2 TB 7.2k SAS (will likely go unused)

     4 x 600 GB 10k SAS (will likely go unused)

I have 2 vSAN clusters planned:

Compute Cluster (3 x T620, FTT=1): virtual desktops, virtual apps, etc. Currently configured as a ~5 TB vSAN datastore using 3 disk groups per host (1 x SATA SSD cache / 2 x 15k SAS capacity). This configuration is on the vSAN HCL and passes all health checks.


     PLAN: Purchase 3 x Intel DC P3520 1.2 TB PCIe NVMe AIC cache devices and use 6 x 300 GB 15k SAS as capacity in a single disk group (DG) per host. The caching tier is straight overkill for how much capacity there is; however, the sub-1 TB versions of these PCIe cards deliver roughly half the IOPS. My concern is whether a single cache SSD / DG per host will hurt performance, despite it being a monster in comparison to my old SSDs.
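For what it's worth, here is how the raw/usable math on that layout works out. This is just a back-of-envelope sketch: the RAID-1 overhead at FTT=1 and the 70% read cache / 30% write buffer split for a hybrid vSAN cache device are standard vSAN 6.x behavior; the host and disk counts come from the plan above.

```python
# Rough sizing for the proposed 3-host hybrid compute cluster.
# Assumptions: RAID-1 mirroring at FTT=1 (FTT + 1 copies of each object),
# and vSAN's hybrid cache split of 70% read cache / 30% write buffer.

HOSTS = 3
DISKS_PER_DG = 6      # 6 x 300 GB 15k SAS capacity disks per host
DISK_GB = 300
CACHE_GB = 1200       # Intel DC P3520 1.2 TB cache device per host
FTT = 1

raw_gb = HOSTS * DISKS_PER_DG * DISK_GB
usable_gb = raw_gb / (FTT + 1)        # mirroring halves usable space
read_cache_gb = CACHE_GB * 0.70       # hybrid: 70% read cache
write_buffer_gb = CACHE_GB * 0.30     # hybrid: 30% write buffer

print(f"raw capacity:          {raw_gb} GB")        # 5400 GB
print(f"usable at FTT=1:       {usable_gb:.0f} GB")  # 2700 GB
print(f"per-host read cache:   {read_cache_gb:.0f} GB")   # 840 GB
print(f"per-host write buffer: {write_buffer_gb:.0f} GB")  # 360 GB
```

So even with "overkill" cache, the usable datastore shrinks from the current ~5 TB to ~2.7 TB once the 15k SAS disks drop from 6 to 6 per host across 3 DGs down to a single DG.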

Management Cluster (2-node: R720 + R620, FTT=1): vCA, Composer box, Horizon CM, Access Point appliance, Log Insight appliance, Orchestrator appliance, Operations Manager appliance, and copies of master image / template storage. Currently configured without vSAN.

     PLAN: Buying the 3 Intel AICs above frees up SSDs for a 2-node all-flash vSAN (noting that a 2-node vSAN also requires a witness appliance running outside these two hosts). If I purchase 2 x 400 GB SAS SSDs, I can implement 1 DG per host with 1 x 400 GB SAS cache and 5 x 200 GB SATA capacity disks. I do realize the bulk of this data will generally be cold and sitting on an all-flash vSAN... however, I can test an all-flash vSAN config. I don't love the idea that all my reads will be from old 6 Gb/s SATA flash... but what are you gonna do. Reads will be limited, and it should still be quick.
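Same back-of-envelope math for the management cluster. The assumptions here: RAID-1 at FTT=1, and in an all-flash vSAN the cache device acts as 100% write buffer (reads come straight from the capacity flash), with vSAN using at most 600 GB of the cache device per disk group.

```python
# Rough sizing for the proposed 2-node all-flash management cluster.
# Assumptions: RAID-1 mirroring at FTT=1; all-flash cache is 100%
# write buffer, capped at 600 GB usable per disk group.

HOSTS = 2
CAP_DISKS = 5         # 5 x 200 GB SATA SSDs per host
CAP_GB = 200
CACHE_GB = 400        # 1 x 400 GB SAS SSD cache per host
FTT = 1

raw_gb = HOSTS * CAP_DISKS * CAP_GB
usable_gb = raw_gb / (FTT + 1)
write_buffer_gb = min(CACHE_GB, 600)   # cap doesn't bite at 400 GB

print(f"raw: {raw_gb} GB, usable at FTT=1: {usable_gb:.0f} GB, "
      f"write buffer: {write_buffer_gb} GB")  # raw: 2000 GB, usable: 1000 GB
```

~1 TB usable should comfortably hold the management appliances plus master image copies, and a 400 GB cache device stays under the 600 GB write-buffer cap, so nothing is wasted there.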


Would anybody take a different direction in my shoes? From what I've read, this is about as fast as I can make a single disk group for a hybrid vSAN. The all-flash vSAN is a bonus in my mind, just some more experience with a different configuration.

Is separating my environment even the best idea? How about a single 5-node cluster with FTT=2? (With RAID-1 mirroring, FTT=2 needs 2x2+1 = 5 hosts, and FTT=3 would need 7, so 2 is the ceiling for five nodes.) I believe clustering my T620s gives vSAN its best shot at a stable environment. However, I could simply buy 5 of the P3520 AIC cards and put the same single DG on each host with 15k SAS capacity drives. This should almost certainly speed up vSAN; however, it would push me away from identical host configurations.
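To make the FTT trade-off concrete: with RAID-1 mirroring (the only option on hybrid vSAN 6.2, since RAID-5/6 erasure coding requires all-flash), vSAN needs 2*FTT + 1 hosts and stores FTT + 1 copies of each object. A small sketch, assuming a hypothetical 5-host layout where every host contributes 6 x 300 GB 15k SAS:

```python
# Host-count and capacity overhead of RAID-1 mirroring at each FTT level,
# to sanity-check the single 5-node cluster idea.

def raid1_requirements(ftt: int, raw_gb: float):
    """Return (hosts needed, object copies, usable GB) for RAID-1 at a given FTT."""
    hosts_needed = 2 * ftt + 1   # vSAN RAID-1 fault-domain requirement
    copies = ftt + 1             # number of full replicas per object
    usable_gb = raw_gb / copies
    return hosts_needed, copies, usable_gb

RAW_GB = 5 * 6 * 300  # hypothetical: 5 hosts x 6 x 300 GB 15k SAS = 9000 GB

for ftt in (1, 2, 3):
    hosts, copies, usable = raid1_requirements(ftt, RAW_GB)
    print(f"FTT={ftt}: needs {hosts} hosts, {copies} copies, "
          f"~{usable:.0f} GB usable of {RAW_GB} GB raw")
# FTT=1: 3 hosts, 2 copies, ~4500 GB usable
# FTT=2: 5 hosts, 3 copies, ~3000 GB usable
# FTT=3: 7 hosts, 4 copies  -> not possible with only 5 nodes
```

So a combined 5-node cluster at FTT=2 would survive two failures but pay a 3x capacity penalty, and it would mix three host models in one cluster.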
