peppepalmi
Contributor

vSAN 6.7 U3 on 3x HPE XL170r Gen9 nodes - weird write latency

Hi all,

I've recently made an all flash cluster vSAN with the following components:

  • HPE Apollo r2600 Gen9 chassis with 3x XL170r Gen9 nodes, each configured with:
    • 2x Xeon E5-2680 v3 @ 2.50GHz
    • 256GB of RAM
    • 2x 1Gbps embedded NIC
    • 2x HP 533FLR-T Dual Port - 10GbE RJ45 FlexibleLOM (one of them used for vSAN traffic)
    • HPE P440 4GB Smart Controller (Firmware version 7.0, driver 2.0.44, HBA mode enabled)
    • 6x SAMSUNG SSD SATA Drives
  • NETGEAR XS728T

I've created 4 VLANs on the switch:

  1. Management
  2. vMotion
  3. vSAN
  4. Production

The ESXi version is 6.7 U3 HPE-branded, and I've applied the HPE SPP 2020.03, so the latest drivers are in use.

"Skyline healt" is all green, and everything seems perfect, but, despite the read performance that are perfect, I notice a huge write lantency that affect any kind of operation.

Attached you can find the output of the health check on the first node.

[Attachment: skyline.PNG]

[Attachment: vmware.PNG]

Any help would be greatly appreciated.

Peppe-

9 Replies
scott28tt
VMware Employee

Moderator: Please do not post multiple threads on the same topic.

The duplicate you posted today has been archived.


---

Although I am a VMware employee I contribute to VMware Communities voluntarily (ie. not in any official capacity)
VMware Training & Certification blog
peppepalmi
Contributor

Hi Scott,

sorry, I posted the discussion in the wrong place, under General/Italian. It was meant for the vSAN section :(

Peppe-
scott28tt
VMware Employee

No problem - as your post in the Italian area was in English anyway, that's the thread I archived.


---

Although I am a VMware employee I contribute to VMware Communities voluntarily (ie. not in any official capacity)
VMware Training & Certification blog
TheBobkin
Champion

Hello peppepalmi,

First off, just so that you (and anyone else reading this) are aware - vSAN Health is not intended to identify every single possible issue or misconfiguration with a host/cluster.

Fair enough, it probably covers 10x the things it did at its inception, but it still doesn't (or can't) look at everything, for various reasons.

Why I note this is that both models of SSD you are using (SAMSUNG MZ7LN512, some variant of a PM871b, and SanDisk SD6SB2M5) appear to be consumer-grade devices and are obviously not on the vSAN HCL. If this is just a homelab, that is okay (provided you don't care about the safety of the data or performance), but if this is a Production cluster (or anything you or your organisation/customer cares about) you need to replace these with supported enterprise-grade devices before even considering putting any data on this cluster.
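
If anyone reading wants to check what their own hosts report: esxcli shows the exact model/vendor strings that you would then look up on the HCL. A quick sketch below uses the Python interpreter that ships on ESXi; the plain shell command gives the same data.

```python
# Lists the Model/Vendor strings ESXi reports for each disk - the
# strings you would then look up against the vSAN HCL. Runs with the
# Python that ships on ESXi; "esxcli storage core device list" in the
# shell shows the same information.
import subprocess

out = subprocess.check_output(
    ["esxcli", "storage", "core", "device", "list"],
    universal_newlines=True,
)

for line in out.splitlines():
    stripped = line.strip()
    # One multi-line block is printed per device; these are the fields
    # that matter for an HCL lookup.
    if stripped.startswith(("Display Name:", "Vendor:", "Model:", "Is SSD:")):
        print(stripped)
```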

"I notice a huge write lantency that affect any kind of operation."

Can you please elaborate on what the actual impact is here, aside from seeing (admittedly alarming) high latency in the graph? I ask because there is literally NO workload on the host/cluster in your screenshot - it is bouncing between 0 and 1 IOPS, and that can break graphs (not just in vSAN or VMware products but in computing in general). The math behind these graphs is generally designed around 1. positive whole integers (e.g. if your average over X time is 0.5 IOPS you may see weird/wrong results) and 2. an actual workload across a reasonable number of sampling intervals.
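
To make the sampling point concrete, here is a toy model (my own sketch, not how the vSAN performance service actually computes anything) of what happens to an "average latency" graph at 0-1 IOPS:

```python
# Toy model of how a monitoring graph averages latency per sampling
# interval - not vSAN's actual pipeline, just an illustration of why
# a near-idle cluster can show alarming-looking latency numbers.

# (seconds, IOs completed, summed latency in ms) per 5-minute sample
samples = [
    (300, 1, 80.0),   # one stray 80 ms IO in 5 minutes
    (300, 0, 0.0),    # no IO at all
    (300, 2, 3.0),    # two fast IOs
]

for secs, ios, total_lat in samples:
    iops = ios / secs
    # A single slow IO defines the "average" for its whole interval,
    # and an empty interval has no defined latency at all.
    avg_lat = total_lat / ios if ios else float("nan")
    print(f"IOPS={iops:.3f}  avg write latency={avg_lat:.1f} ms")
```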

While you should definitely be replacing the disks here (unless this is a homelab), to confirm whether this is a case of the graphs being broken by the low/no IO, can you run HCIBench EasyRun on this cluster, post the results, and share a screenshot of the performance graphs while it is running?

This should always be the next step after setting up any new cluster anyway.

Initial Functional Test - HCIBench Easy Run | vSAN Performance Evaluation Checklist | VMware

Bob

peppepalmi
Contributor

Hi TheBobkin,

first of all, thanks a lot for your interest.

This is a demo environment, so we don't need data protection or maximum performance, but even using consumer-grade SSDs we expect decent performance, especially since we are using good equipment in general...

I'm saying that there is huge write latency because I've done several tests, such as VM creation and copy/paste inside the VM, and every time I got no more than 10-15 MB/s of write speed.

[Attachment: pastedImage_1.png]

I've attached the HCIBench results to this reply.

Thanks a lot,

Peppe-

TheBobkin
Champion

Hello Peppe,

So, in no way is my aim here to deride you or your cluster, but this is likely the worst EasyRun output I have ever seen (and I have seen a LOT of them). It is abundantly clear that either something is deeply wrong/broken here or these drives are simply designed for reads and not up to the task - the HCIBench output shows constant and shockingly high latency for writes of any size (even 4k ones, which vSAN is somewhat optimised for).

You should be able to get more insight into this (and rule out whether this is solely a storage issue or has a network aspect as well) by looking at the host-level vSAN performance stats from while HCIBench was running, which show per-disk and per-Disk-Group stats. If you suspect that one device/Disk-Group or the network is slowing down the whole show, check whether you get the same poor performance when the VMs/HCIBench are configured for FTT=0 and the test VMs are running on the same node as their Objects reside.
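
For the placement half of that, something along these lines lets you eyeball where a test VM's components live. This is a rough sketch only: it assumes SSH access to a host, the hostname and VM name are placeholders, and the block-matching may need adapting to what `esxcli vsan debug object list` prints on your build.

```python
# Rough sketch: dump vSAN object info for a named VM so you can check
# whether its components sit on the same host it runs on (the FTT=0
# locality test described above). HOST and VM_NAME are placeholders.
import subprocess

HOST = "esxi01.example.local"   # placeholder ESXi hostname
VM_NAME = "hci-tvm-0"           # placeholder test-VM name

out = subprocess.check_output(
    ["ssh", "root@" + HOST, "esxcli vsan debug object list"],
    universal_newlines=True,
)

# esxcli prints one multi-line block per object; show any block that
# mentions the VM's name so component placement can be eyeballed.
show = False
for line in out.splitlines():
    if VM_NAME in line:
        show = True
    if show:
        print(line)
        if not line.strip():
            show = False   # a blank line ends the object's block
```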

"we do expect good performance, even because we are using a good equipment in general..."

If you mean to say that one should expect good storage performance when everything (except the storage devices) is of okay quality, this is the IT equivalent of getting a Ferrari, taking the alloys and Pirellis off it, replacing them with old wooden cart-wheels, and then wondering why they start falling apart when it goes faster than 20 km/h.

Kind of beside the point (as it is clear the drives cannot handle write workloads), but file copy is by no means a good test of storage, on vSAN or otherwise - it is especially unsuited to vSAN as it writes to a single vmdk (which at small sizes typically has only 2 components on 2 Capacity-tier devices, and thus uses only a fraction of the cluster's storage) over a single vSCSI handle.
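
If you want a quick, controlled in-guest check outside of HCIBench, the access pattern that actually stresses these drives is small synchronous writes. fio is the proper tool for this; the sketch below (path and iteration count are arbitrary placeholders) just illustrates the idea:

```python
# Minimal in-guest write-latency probe: small writes, each followed by
# an fsync, timed individually. This is the pattern that exposes
# consumer SSDs, unlike a streaming file copy. fio is the proper tool;
# this only illustrates the idea. PATH and N are placeholders.
import os, time

PATH = "/mnt/vsan-disk/latency-probe.bin"  # put this on the vSAN-backed disk
BLOCK = os.urandom(4096)                   # 4 KiB writes
N = 200

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o600)
lat_ms = []
try:
    for _ in range(N):
        t0 = time.perf_counter()
        os.write(fd, BLOCK)
        os.fsync(fd)                       # force it through the write path
        lat_ms.append((time.perf_counter() - t0) * 1000)
finally:
    os.close(fd)
    os.unlink(PATH)

lat_ms.sort()
print("median = {:.2f} ms, p99 = {:.2f} ms".format(
    lat_ms[len(lat_ms) // 2], lat_ms[int(len(lat_ms) * 0.99)]))
```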

Bob

peppepalmi
Contributor

Hi TheBobkin,

thanks for your frank explanation.

I'm a bit frustrated with a product (vSAN) that doesn't give clear insight and hints on where to look for issues, because even you admitted that:

vSAN Health is not intended to identify every single possible issue or misconfiguration with a host/cluster.

Is it possible that a solution like this doesn't have a "config check" that helps, in a not-too-hard way, to identify bottlenecks or misconfigurations?

Or is it that if you stray even a little from the "certified" hardware, you go straight to hell?

As for our equipment: it isn't a "Ferrari", but I think that more than 60ms of write latency at a bit more than 170 IOPS is not reasonable for an all-flash disk group - even if those are consumer-class SSDs.

That said, I've run HCIBench again using just two SSDs as a disk group on the same vSAN node with FTT=0, and the results were pretty much the same.

Now I probably have the chance to exchange those disks through another project, and I can choose to replace them with:

Dell SSDSC2BX200G4R

or

Intel SSDSC2BA200G3P

Do you have any hints on that?

Best regards,

Peppe-

TheBobkin
Champion

Hello Peppe,

"It is possible that a solution like this doesn't have a "config check" that helps in a non too hard way how to identify bottlenecks or misconfigurations?"

Yes, there clearly is: the simple Day-1 step of running HCIBench and taking even a cursory 10-minute look through the vSAN Performance data has shown you here what the problems are.

This is also why VMware advises vSAN ReadyNodes, so that one can know what performance to expect and whether it fits the workload before buying - if you choose to build your own vSAN nodes with components of your choosing, then planning and assessing this is your responsibility.

Having any form of health check for disks being on the HCL is by no means a trivial task, at least not without an annoyingly large number of false positives. This is because there are literally thousands of certified devices, many of which ship with different part numbers (which ESXi can't see) or rebranded OEM IDs, are seen differently by ESXi depending on 3rd-party storage tools, and may not even have the device model exposed to ESXi/vSAN if in RAID0 mode:

https://www.vmware.com/resources/compatibility/pdf/vi_vsan_guide.pdf
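
To illustrate the matching problem, here is a sketch of what such an automated check would have to do. Caveats: the URL below is the one the vSAN Health HCL-DB updater has used, and the "data"/"ssd" JSON field names are assumptions - inspect the file and adjust.

```python
# Sketch of why automating an HCL check on raw device strings is messy:
# try to match the model string ESXi reports against the vSAN HCL
# database. URL and JSON field names are assumptions - verify both.
import json
import urllib.request

HCL_URL = "https://partnerweb.vmware.com/service/vsan/all.json"

with urllib.request.urlopen(HCL_URL) as resp:
    hcl = json.load(resp)

# Model string exactly as ESXi exposes it ("Model:" in
# esxcli storage core device list) - OEM rebrands often won't match.
reported_model = "SSDSC2BX200G4R"

matches = [
    dev for dev in hcl.get("data", {}).get("ssd", [])
    if reported_model.lower() in json.dumps(dev).lower()
]
print("{} HCL entries mention '{}'".format(len(matches), reported_model))
# Zero matches does NOT prove a drive is unsupported: rebranded part
# numbers, RAID0-masked models and vendor-specific IDs are exactly why
# this is hard to automate without false positives.
```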

"Or if you go not too far away from the "certified" hardware you go to the hell?"

It is noted in no uncertain terms in multiple Day-0/pre-purchase vSAN guides that the HCL should be adhered to if you want reliable results:

'All capacity devices, drivers, and firmware versions in your Virtual SAN configuration must be certified and listed in the Virtual SAN section of the VMware Compatibility Guide.'

Hardware Requirements for vSAN

'Only use hardware that is found in the VMware Compatibility Guide (VCG). The use of hardware not listed in the VCG can lead to undesirable results.'

Hardware | vSAN Frequently Asked Questions (FAQ) | VMware

The reason for this statement is not arbitrary; it is based on the fact that anything that makes it onto the vSAN HCL has been rigorously tested (for performance, stability and reliability) first by the hardware vendor (who decides whether to test it for vSAN at all and, provided they are happy with it, submits it), and then by VMware engineering test teams, who either reject it or certify it for specific usages (e.g. this is why not all SSD/NVMe devices are rated for All-Flash cache-tier).

"is not resonable for all flash disks group - even if those are consumer class SSDs - ."

Not all SSDs are equal, or equally good at different tasks (e.g. yours here look good at reads, judging from the HCIBench). The justification "but it's All-Flash" doesn't really hold water, as (even considering only devices on the vSAN HCL) the difference between the lowest-capability and highest-capability All-Flash Disk-Group/cluster is massive - going even further, I have seen Hybrid Disk-Groups/clusters that far outperformed lower-end All-Flash ones.

"Dell SSDSC2BX200G4R

or

Intel SSDSC2BA200G3P"

SSDSC2BX200G4R (an S3610 variant) is on the vSAN HCL, but as I mentioned regarding "specific usages", these are at the low end (SATA, Mixed-use, MLC) and thus are only certified as suitable for All-Flash capacity-tier (e.g. they are not even rated as suitable for Hybrid cache-tier):

VMware Compatibility Guide - ssd

SSDSC2BA200G3P looks to be an HPE rebrand variant of the S3700 (??) - I'm having trouble finding any decent information on it (which goes back to my point about how one could possibly automate this reliably). Are you sure they are not 'SSDSC2BA200G3'? If so, then yes, these are much better devices than the above:

VMware Compatibility Guide - ssd

Bob

peppepalmi
Contributor

Hi Bob,

at the end we managed to replace the old SSDs with the new ones I mentioned (Dell SSDSC2BX200G4R), and as you can see from the attached benchmark, the situation has now totally changed.

The performance is simply awesome.

Thanks a lot for your help in identifying the issue, even if I gave you a big head start by using "garbage-class" SSDs :)

Best regards,

Peppe-
