VMware Cloud Community
RealQuiet
Enthusiast

Optane for VDI, is it overkill?

I have a small VDI environment, only 200 Windows 10 systems with 8 Gb Fibre Channel connecting to the storage array. We are looking at buying new equipment and have a good-sized budget for an HCI solution using vSAN. Right off the bat I expect storage latency to drop just from having the storage in the rack, but I want to know whether NVMe SSDs with an Optane caching tier are worth it. Keep in mind we will be encrypting the vSAN.

I know, I know... it is workload dependent. I am just trying to get a baseline on whether Optane is overkill for VDI.


I am considering three SSD configurations:

1. All NVMe with Optane Cache

2. NVMe cache + SAS storage

3. All SAS

Also, please let me know if I am overlooking another storage technology that is better for VDI.

4 Replies
sjesse
Leadership

I have 50 or 60 people on at the same time in my environment, and I barely see 1 ms latency with 1k to 5k IOPS on a hybrid array. I don't see my users ever using that, but as you say, it's dependent on the workload. I'd get statistics on what you're currently using and compare that against what is theoretically possible. It also matters whether you are serving persistent or non-persistent desktops. Linked clones that are rebooting often need more IOPS than instant clones, and persistent desktops are probably all on at the same time.
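One way to build that baseline: take the aggregate IOPS you observe today, divide by concurrent users, and scale to the target desktop count. This is only a back-of-envelope sketch; the figures below reuse the worst-case numbers from this post (5k IOPS at 50 users), and the linear-scaling assumption is mine, ignoring boot storms and clone refreshes, which can spike far higher.

```python
# Back-of-envelope IOPS projection from an observed baseline.
# Assumption (not from the thread): aggregate IOPS scales roughly
# linearly with concurrent users; boot storms are NOT accounted for.

def projected_iops(observed_iops: int, observed_users: int, target_users: int) -> int:
    """Scale observed aggregate IOPS linearly to a larger user count."""
    per_user = observed_iops / observed_users
    return round(per_user * target_users)

# Worst case from the figures above: 5k IOPS across 50 concurrent users,
# projected to the 200-desktop environment in the original question.
print(projected_iops(5_000, 50, 200))  # 20000
```

Even that pessimistic projection (20k IOPS) is an order of magnitude below what a modern all-flash SAS vSAN can deliver, which is the point being made here.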

bryanvaneeden
Hot Shot

Honestly, I'd say just use 12 Gb/s SAS SSDs.

We've built a vSAN test environment with these (with encryption, dedup, and compression), but with a good write-intensive drive for the cache tier, and it was already way overkill at 300k read IOPS and 115k write IOPS.

The additional cost of Intel's Optane is just insane as far as I am concerned.

But yes, you are correct, it's all workload dependent. I'd say just try to create a baseline (if you already have an environment) and check whether you actually need that many IOPS.

Visit my blog at https://vcloudvision.com!
RealQuiet
Enthusiast

Yeah, looking at this more closely, it gets quite costly to run RAID 6 with a fault tolerance of two. I am currently favoring the SAS SSD configuration due to cost.

bryanvaneeden
Hot Shot

Hi RealQuiet,

I don't know if you considered this as well: something you could do is run two disk groups on each host (which was already a best practice, if I remember correctly). This gives you flexibility and resilience against failures in part of a vSAN node without having to lose the complete node.

And yeah, doing RAID 6 with FTT=2 means at least six nodes full of Optane disks, which is far too costly for almost any workload, except maybe ERP or heavy database systems.
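To put numbers on that cost difference: the standard vSAN figures (these are general vSAN characteristics, not from this thread) are a 1.5x raw-capacity overhead for RAID 6 erasure coding (4 data + 2 parity, minimum 6 hosts) versus 3x for RAID 1 with FTT=2 (three full replicas, minimum 5 hosts). A quick sketch of the raw capacity each policy needs:

```python
# Raw capacity required for a given usable capacity under two vSAN
# policies that both tolerate two failures (FTT=2). Overhead figures
# are the standard vSAN numbers, stated as an assumption here.

def raw_needed(usable_tb: float, overhead: float) -> float:
    """Raw capacity required for a usable amount at a given overhead."""
    return usable_tb * overhead

RAID6_OVERHEAD = 1.5       # 4 data + 2 parity stripes, minimum 6 hosts
RAID1_FTT2_OVERHEAD = 3.0  # 3 full replicas, minimum 5 hosts

print(raw_needed(10, RAID6_OVERHEAD))       # 15.0 TB raw for 10 TB usable
print(raw_needed(10, RAID1_FTT2_OVERHEAD))  # 30.0 TB raw for 10 TB usable
```

RAID 6 halves the raw flash you need compared to mirroring at FTT=2, but the six-host minimum is what drives the Optane bill up.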

Visit my blog at https://vcloudvision.com!