VMware Cloud Community
DFA71
Contributor

vSan Design for a new VDI environment

Can someone help me design a new environment for VDI? It's a mixed workload: graphical VDI with Tesla GPUs for 3D CAD, plus standard use for office applications. My idea is to use vSAN with 4 all-flash nodes, 2x 800GB write-intensive SSDs for cache and 3x 3.84TB read-intensive (or mixed-use) SSDs for capacity, and 1 dual-port 25Gb SFP+ network card.

My doubts are about:

- Network: is it better to use 4x 10Gb ports or 2x 25Gb ports?

- Disks: install ESXi on a BOSS controller card with 2 SD cards in RAID1, and use 3 or 4 disks for the capacity workload?

- Management: install all management components (vCenter, PSC, Horizon, ...) in an external environment?

Any suggestions are appreciated.

@luke

TheBobkin
Champion

Hello Luke,

"2x 800GB write-intensive SSDs for cache and 3x 3.84TB read-intensive (or mixed-use) SSDs for capacity"

Do you mean 2 Disk-Groups per node, each with 1x 800GB cache-tier device and 3x ~4TB capacity-tier devices? Otherwise (e.g. if you mean 3x capacity-tier devices total), that would indicate non-homogeneous Disk-Groups (2x capacity devices in the first Disk-Group, 1x in the second), which is not optimal.

"- Network: is it better to use 4x 10Gb ports or 2x 25Gb ports?"

You are very unlikely to saturate 10Gb links in a 4-node cluster with this configuration, so the network is unlikely to be a bottleneck. 4x 10Gb links are likely the preferable option here, as they allow better physical segregation (and redundancy) of vSAN and other network traffic (e.g. Management, vMotion, VM network, FT (if used), backup traffic and iSCSI/SAN networking).

"- Disks: install ESXi on a BOSS controller card with 2 SD cards in RAID1"

Something 'inside-the-box' such as BOSS M.2 or SD-cards in RAID1 is the preferable option in my opinion, because a) it leaves more slots free for potential future expansion, b) you can do RAID1 on those devices, which you can't do with devices attached to the controller used for vSAN, and c) it avoids issues with controllers that have specific certification caveats about what can be attached to them.

"and use 3 or 4 disks for the capacity workload"

If you mean 3x ~4TB devices per Disk-Group fronted by an 800GB cache-tier device, this should be more than adequate for most workloads - having multiple Disk-Groups really helps with performance.

https://blogs.vmware.com/virtualblocks/2017/01/18/designing-vsan-disk-groups-cache-ratio-revisited/
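As a rough sanity check, the raw capacity and cache footprint of a layout like this can be tallied up. A minimal sketch, assuming 2 Disk-Groups per node with 1x 800GB cache and 3x 3.84TB capacity devices each, across 4 nodes (the device counts are my assumption of the intended layout, not a confirmed design):

```python
# Illustrative vSAN raw-capacity tally for the assumed layout.
nodes = 4
disk_groups_per_node = 2
capacity_devices_per_dg = 3
capacity_device_tb = 3.84   # TB per read-intensive SSD
cache_device_tb = 0.8       # TB per write-intensive cache SSD

raw_tb = nodes * disk_groups_per_node * capacity_devices_per_dg * capacity_device_tb
cache_tb = nodes * disk_groups_per_node * cache_device_tb

print(f"Raw capacity: {raw_tb:.2f} TB")   # 92.16 TB
print(f"Total cache:  {cache_tb:.2f} TB")
```

Note that raw capacity is before any FTT/RAID overhead or slack-space is subtracted, so the usable figure will be considerably lower.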

"- Management: install all management components (vCenter, PSC, Horizon, ...) in an external environment?"

If this is possible, I would generally advise it, as without vCenter, issues such as network connectivity/partition problems can be difficult for normal users to troubleshoot. Then again, I am pretty biased here, as I look at clusters from a support perspective and know too well what troubleshooting huge clusters with no vCenter is like (it's less of an issue in a 4-node cluster, and if you know what you are doing).

Bob

DFA71
Contributor

Hi Bob

Thanks for the quick response. For the storage part I'm not very skilled - I would like about 20TB of usable space for the VDI workload, with tolerance for one node failure (FTT=1). Which option would be best in your opinion?

Many thanks,

Luke

TheBobkin
Champion

Hello Luke,

Just to clarify: from the Guest-OS perspective there can be a big difference between 20TB usable and 20TB used. For example, 20TB used (as the Guest-OS sees it) would require ~55TB raw with FTT=1, FTM=RAID1 (allowing for the ~25% slack-space that VMware recommends), but if data is over-provisioned (Thin and only partially used), and/or RAID5 is used as the FTM, and/or Deduplication & Compression are enabled, then it could potentially fit in a smaller footprint.

While the overhead of FTM=RAID1 (2x) might seem excessive compared to RAID5 (1.33x), RAID1 would be the better choice here in my opinion: performance will be better, and data can be rebuilt back to FTT=1 in the event of a relatively long-term node failure (e.g. motherboard failure, or cache-device/controller failure if using only one of those), provided there is adequate slack-space to allow this.
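To make the sizing arithmetic above concrete, here is a minimal sketch using the multipliers Bob quotes (2x for FTT=1/RAID1, ~1.33x for FTT=1/RAID5) and the ~25% slack-space recommendation; the figures are illustrative only and the helper name is my own:

```python
def raw_needed_tb(used_tb, ftm_multiplier, slack=0.25):
    """Raw vSAN capacity needed for a given Guest-OS used capacity.

    ftm_multiplier: 2.0 for FTT=1/RAID1 mirroring, ~1.33 for FTT=1/RAID5.
    slack: fraction of raw capacity kept free for rebuilds/resyncs.
    """
    return used_tb * ftm_multiplier / (1 - slack)

print(f"RAID1: {raw_needed_tb(20, 2.0):.1f} TB raw")   # ~53.3 TB
print(f"RAID5: {raw_needed_tb(20, 1.33):.1f} TB raw")  # ~35.5 TB
```

This shows why 20TB used lands in the ~55TB-raw ballpark with RAID1; RAID5 roughly halves that, at a performance and rebuild-behaviour cost.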

If you would like to read into any of this further, our sizing guides are a good resource:

VMware® vSAN™ Design and Sizing Guide

Bob
