I was doing some research into the ESA requirements for our upcoming deployment, and found a recent post by Duncan that covers HCI Mesh: Cross connecting vSAN Datastores with HCI Mesh in vSAN 8 OSA (yellow-bricks.com)
Background - We are looking to purchase some next-gen Ready Nodes for a high performance ESA vSAN Cluster that will be mounted remotely (HCI Mesh). I say "remotely", but the Ready Nodes will be in the next rack along, connected to the same Spine as the other Clusters, so not exactly remote in terms of distance or latency. The new Ready Nodes will be 100% dedicated to running vSAN; no VMs or anything else will run directly on them. All VMs will run on high performance servers in a separate Cluster; however, the VMDKs will live on the ESA Cluster (exactly as you would do it with a traditional NetApp / Pure / EMC array, etc.).
In the post, Duncan mentions that HCI Mesh is not yet available for ESA. Does anyone know when it will be available? This is currently a show stopper for us: we do not want to install vSAN on our Compute Servers, nor do we want to run OSA, so the only way we can use ESA is with HCI Mesh.
Thanks in advance
I would recommend contacting a local VMware resource; they can share what the timeline for HCI Mesh support will look like. Unfortunately, I cannot comment publicly on the roadmap for vSAN ESA and HCI Mesh.
To be honest, though, most customers also run VMs on the vSAN ESA Cluster itself. Is there any reason you don't want to do this?
Thanks, I'll reach out to VMware directly and see if I can get an indication of the release date.
Regarding the reasoning behind the HCI Mesh option:
1) If I run vSAN on my high performance servers, it changes the hardware requirements and consumes processing cycles that should go to the compute workloads. The Ready Node hardware specification is far too restrictive, and I've always built my own because of this. Now that Ready Nodes are mandatory, my deployment options are very limited, so I need a separate vSAN Cluster that does not dictate my compute specification.
2) Probably the biggest reason for HCI Mesh is that vSAN does not allow the use of DPM. With HCI Mesh, I can use DPM across my high performance Compute Cluster, as those hosts are not participating in the vSAN Cluster. Being able to power manage my Servers obviously helps in many ways.
The performance should be as good as running ESA locally, as it will be connected over either a 100 or 200 Gb network (still TBD) with sub-millisecond connectivity. I realize I can't use RDMA, but as it will be a dedicated vSAN Cluster, it doesn't matter how hard it hits the CPUs; they are Ready Nodes and will be specified to handle the requirements.
Thanks for providing those extra details; it makes sense now. If the local person can't help, feel free to point them to me 🙂
I've emailed our VMware reps and am waiting to hear back.
I'm assuming that, as the vSAN Cluster will be consumed via HCI Mesh and will not be running any Virtual Machines directly / locally, the minimum hardware requirements for ESA no longer apply?
Would you know if there are any official references for this? I don't seem to be able to find anything other than the default Ready Node requirements.
Right now the minimum is the minimum, regardless of how you use the cluster.
Thanks for confirming, I'll keep that in mind when configuring the Ready Nodes.
This may or may not change in the future. I have passed your feedback to the PM/Eng team, as it is useful to know your expectations.
Thank you, much appreciated.
I'm sure it would also be very useful for the community to know how much resource ESA actually requires when there are no VMs running on the same hardware and it is purely an NVMe storage platform (Mesh), as I can't be the only one who would really benefit from using features like DPM with vSAN in this kind of configuration. It should be fairly easy to work out, and I would test it myself, but I don't have the hardware for it at the moment, and having to be the one who validates it makes the hardware an expensive purchase for resources (CPU and RAM) that simply aren't going to be required. The networking is fine; I totally understand that and have it covered. But large DIMMs, and certainly next-gen CPUs, run to many thousands, so saving where we can and investing those savings in other areas is always beneficial.
We've been testing ESA for the last few days and I must share our thoughts:
Per node: 2x AMD EPYC 7513, 1024 GB RAM, 2x 25 Gb Mellanox ConnectX-4 Lx, 4x 7.68 TB Samsung NVMe
1. It works perfectly well without Ready Nodes :-) Yeah, some suppressed alarms, but still.
2. The performance figures we see are nothing we could possibly have achieved on any previous version of vSAN. And this is a completely out-of-the-box configuration.
Just to get some perspective: 800k+ IOPS at a 4k block size with just 6 nodes, 4 NVMe each, all at a consistent sub-1 ms latency.
3. Finally, RDMA works without any dancing around, and it really makes a difference. We see roughly a 20% increase with RDMA on, and more consistent results.
4. It scales. Going from 4 to 6 nodes gave us not +50% but 2x the performance. We will try going up to 10 nodes next week.
5. The RAID-6 policy costs only 10-15% in performance (HCIbench) compared to RAID-1.
6. Not sure how, but we did not hit any bottlenecks in the NICs, even though HCIbench peaked at 12.6 gigabytes per second during the tests. The bottleneck was always the CPUs.
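As a rough sanity check on the figures above (assuming the 800k IOPS and 12.6 GB/s numbers are cluster-wide aggregates, which the post implies but does not state outright), the per-device IOPS and the NIC headroom can be worked out quickly:

```python
# Back-of-the-envelope check of the reported ESA test figures.
# Assumptions: 800k IOPS and 12.6 GB/s are cluster-wide aggregates;
# each node has 2x 25 Gb NIC ports and 4 NVMe devices.

nodes = 6
nvme_per_node = 4
cluster_iops = 800_000
peak_throughput_gbytes = 12.6        # GB/s, HCIbench peak
nic_gbits_per_node = 2 * 25          # two 25 Gb ports per node

iops_per_device = cluster_iops / (nodes * nvme_per_node)
print(f"IOPS per NVMe device: {iops_per_device:,.0f}")          # 33,333

# Aggregate NIC capacity across the cluster, converted to GB/s
nic_gbytes_total = nodes * nic_gbits_per_node / 8
print(f"Aggregate NIC capacity: {nic_gbytes_total:.1f} GB/s")   # 37.5

# How much of that capacity the observed peak actually used
print(f"NIC utilisation at peak: {peak_throughput_gbytes / nic_gbytes_total:.0%}")  # 34%
```

At roughly a third of aggregate NIC capacity, the arithmetic is consistent with the observation that the CPUs, not the NICs, were the bottleneck.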
It is not all sugar, though.
1. It uses ALL the CPU. To deliver 800k+ IOPS it consumes 800+ GHz across the cluster, so basically nothing is left for any VMs I would have put there.
2. Given the data from #1, if you are going to push this storage hard, you have to make sure you have plenty of CPU for those IOPS.
3. Until HCI Mesh is ready for ESA, I can't really see how to use this in production. ESA must be all about performance, but you would have to build 24-32+ very dense nodes and keep in mind that 10-16 cores on each are reserved for storage. Even with 64-core nodes that is too much, so you have to go for 96-128 or even more cores per node.
I would prefer 8-12-16 separate servers for vSAN only, plus separate compute nodes.
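The figures above imply a rough CPU cost per IOPS, which can be used to sketch a per-node budget. This is a back-of-the-envelope estimate assuming CPU cost scales linearly with IOPS (a rule of thumb, not a vSAN sizing guideline), and the 2.6 GHz base clock in the example is an assumed value:

```python
# CPU budget implied by the test results: 800+ GHz consumed cluster-wide
# to deliver 800k+ IOPS. Assumption: CPU cost scales linearly with IOPS.

cluster_iops = 800_000
cpu_ghz_consumed = 800            # reported cluster-wide CPU burn

mhz_per_iops = cpu_ghz_consumed * 1000 / cluster_iops
print(f"CPU cost: ~{mhz_per_iops:.0f} MHz per IOPS")       # ~1 MHz

# Hypothetical node with 2x 32-core CPUs at an assumed 2.6 GHz base clock
cores = 2 * 32
base_ghz = 2.6
node_ghz = cores * base_ghz
iops_budget = node_ghz / (mhz_per_iops / 1000)
print(f"Per-node CPU budget: {node_ghz:.0f} GHz -> ~{iops_budget:,.0f} IOPS")
```

Put another way, at this cost a 64-core node could, in theory, drive its entire CPU complement into storage I/O alone, which supports the point that either very dense nodes or a dedicated storage cluster is needed.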
To sum up.
Great improvement. Finally looks competitive performance-wise.
Needs tons of CPU (RDMA does not seem to reduce CPU usage).
Definitely needs HCI Mesh to be usable as a performance solution.
The performance sounds very impressive so far!
I have no intention to use this as a "normal" vSAN deployment, I'm only interested in HCI Mesh. Without HCI Mesh, I cannot use vSAN at all - it simply won't work for our environment.
The Servers I will be purchasing will be 100% dedicated to vSAN, so if at some point they consume 100% of the CPUs, that's fine. We plan to use 2x 32-core CPUs, and I would be interested to know whether adding more CPU cores or increasing the GHz would further improve performance.
I would be grateful if you could keep us updated on your experiences as you progress with your testing; there seems to be little to no real-world feedback on ESA yet, so this is very insightful.
Thank you, and great work with the testing!
Sorry for the slow reply - That's a nice increase in results!
Did you make any significant changes to achieve that, or just tweak your existing configuration?