I could have sworn there were some SATA Dell disks listed on there, or I am probably thinking of the Seagate Constellation SAS. From my experience, as long as you have an HBA with a decent queue depth (minimum LSI 2308) and an over-the-top enterprise PCIe NVMe SSD, 6TB SATA 7200 RPM disks have worked out just fine. Had flying success with high-end PCIe and 4x 6TB SATA per host. Works great. However, 4TB SAS really is the way to go, IMHO.
Clarification would be great pertaining to your findings.
I have previously asked in forums about the lack of any PCIe cards on the HCL; I believe the answer was (as you suggested) a backlog in QA.
As for the lack of large-capacity SATA drives... I would agree with retting: SAS is the way to go. And I'm curious about your use case. If you take 12 x 4 TB drives = 48 TB per node x 64 nodes (max in a vSAN 6 cluster) = 3,072 TB. I mean wow! What are you scaling to 3 petabytes?
Thank you, Zach.
The funny thing is that VMware already published a whitepaper (Virtual SAN 6.0 Performance: Scalability and Best Practices) several months ago where they used Intel P3700 PCIe SSDs, but they're still not on the HCL. Customers have been waiting over a year for those things to show up on the HCL.
I'm curious about your use case. If you take 12 x 4 TB drives = 48 TB per node x 64 nodes (Max in a vSAN 6 cluster) = 3,072 TB. I mean wow! What are you scaling to 3 Petabytes?
One doesn't need to scale to 64 nodes to want high storage density. Any time someone asks about VSAN, storage density per socket becomes an issue. VSAN licensing runs about $2.5K per socket, and some customers aren't willing to drop down to a single socket per host, meaning $5K/host in VSAN licensing. Since external storage expansion doesn't seem to be supported despite GA announcements, VSAN starts looking like a capacity license.
2U 24 disk SFF host:
3 disk groups of 7 HDD + 1 SSD
The largest SFF HDD on the HCL is 1.2TB, so
21 capacity HDDs * 1.2TB = 25.2TB/host
Using 6TB LFF drives:
12 * 6TB = 72TB/host
That's nearly 2.9x the storage per VSAN socket license.
With Dell's FX2 architecture, you can get three FD332 storage modules with 5 x 7-drive SFF disk groups.
35 * 1.2TB = 42TB
6TB drives are still 1.7x the storage per VSAN socket license.
If you're willing to go larger than 2U, the HP SL4540 will scale to 60 LFF drives, so you can reach the 35-drive/host VSAN limit. The Cisco C3160 also scales to 60 LFF drives, but its controller isn't on the VSAN HCL.
HP seems to have the best capability to scale VSAN, with the SL4540, 35 LFF drives, and a controller and 6TB SAS drives on the HCL. And even then, the largest HP SSD on the VSAN HCL is 1.6TB, when you should have 2.1TB per 42TB disk group.
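For concreteness, the per-socket density math above can be sketched in a few lines of Python. This counts only capacity HDDs (treating the SSD in each disk group as cache, and assuming the 12-bay LFF host uses a PCIe cache device); the drive counts and sizes are the ones from this thread, not official sizing guidance.

```python
# Raw capacity per host for the configurations discussed in the thread.
# Assumption: the SSD in each disk group is cache and contributes no capacity.
def raw_tb(disk_groups, hdds_per_group, tb_per_hdd):
    return disk_groups * hdds_per_group * tb_per_hdd

sff_2u = raw_tb(3, 7, 1.2)    # 2U 24-bay SFF host -> 25.2 TB
lff_2u = raw_tb(1, 12, 6.0)   # 2U 12-bay LFF host (cache assumed PCIe) -> 72 TB
fx2    = raw_tb(5, 7, 1.2)    # FX2 storage sleds, 5 x 7-drive groups -> 42 TB

print(round(lff_2u / sff_2u, 1))  # ~2.9x storage per socket license
print(round(lff_2u / fx2, 1))     # ~1.7x

# Cache sizing implied above: ~2.1 TB of flash per 42 TB disk group (~5% raw).
group_tb = 7 * 6.0
print(round(0.05 * group_tb, 1))  # 2.1
```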
I smell a blog post coming on.
The funny thing is that VMware already published a whitepaper (Virtual SAN 6.0 Performance: Scalability and Best Practices) several months ago where they used Intel P3700 PCIe SSDs, but they're still not on the HCL. Customers have been waiting over a year for those things to show up on the HCL.
That is really frustrating. Scalability needs to be supported.
as long as you have an HBA with a decent queue depth (minimum LSI 2308) and an over-the-top enterprise PCIe NVMe SSD, 6TB SATA 7200 RPM disks have worked out just fine
Except if the components aren't on the HCL, the user can be denied support. That's not a great place to be.
I think you hit the nail on the head with the comment about VSAN socket licenses. Think about it.
Blog post indeed.
Let's take this example:
Using 6TB LFF drives:
12 * 6TB = 72TB/host
Let's say it's a minimum three-node cluster; that's 216 TB before overhead. Can a workload consume that much space and still perform well with only 6 sockets in the cluster? It seems like with disks that dense, you would run out of compute power before you could fill them up. Is it a few massive VMs? A lot of little VMs that don't consume much CPU? Something else I'm missing? Thank you, Zach.
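As a quick sanity check on that sizing, here is the cluster arithmetic using figures from earlier in the thread (12 x 6TB LFF per host, two sockets per host, and the ~$2.5K-per-socket VSAN license figure; all numbers are the thread's, not official pricing):

```python
# Cluster capacity and license cost per raw TB for a minimum 3-node cluster.
nodes = 3
tb_per_host = 12 * 6                    # 72 TB raw per host (12 x 6TB LFF)
sockets = nodes * 2                     # 2 sockets per host assumed

raw_tb = nodes * tb_per_host            # 216 TB raw before overhead
tb_per_socket = raw_tb / sockets        # 36 TB raw behind each licensed socket
license_per_tb = 2500 / tb_per_socket   # ~$69 of VSAN licensing per raw TB

print(raw_tb, tb_per_socket, round(license_per_tb, 2))
```

At roughly $69 of licensing per raw TB, the economics are part of why dense hosts get attractive long before the compute is fully used.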
Does NexentaConnect, SoftNAS, etc answer that question?
Correct me if I'm wrong, but NexentaConnect just utilizes a VM with VSAN-attached VMDKs, which are then offered up as SAN/iSCSI/NFS/CIFS. Along the same lines, VVols would be a separate solution from VSAN, more akin to DAS with VSAN-like benefits (virtual volumes, performance tiers, etc.). Or so is my current understanding. Best, -Jon
Correct, using 3rd-party solutions that turn VSAN into a NAS. Not much compute needed; just lots of space and IO.
Correct -- it all depends on the risk assessment. The cutting edge is never without risk. However, vSphere by definition removes a great deal of risk: if something's not working, shift things around and rebuild it. Being locked down to physical hardware and networking is the real risk. In the case where I deployed that non-HCL 6TB cluster, it was a very established, high-capacity, replicated vSphere environment. In the end it was their acknowledged risk, as I do not condone SATA in any form. That said, this doesn't make your assessment of the current HCL any less important. Best, -Jon
It does, in my mind's eye, add significant complexity, and once again you're locked into hardware types. Also, how many VVol system failures can you tolerate? In essence it seems like inverted convergence: we took what we learned from VSAN convergence, inverted the principles, and applied them to the physical SAN. Thanks, -Jon