My understanding is that the queue depth on the drives is determined by the firmware on the drive itself, rather than by the controller.
I see SAS devices with a QD of 64 all the time, so this isn't uncommon for firmware and devices that are on the vSAN HCL.
While it may be technically feasible to increase the QD on these drives, doing so isn't necessarily going to be beneficial, and I wouldn't advise it without the hardware vendor's blessing (the fact that they set it to 64 is unlikely to be an arbitrary decision).
Thanks for the reply!
I started a thread on the Cisco forums at the same time as this, and I've posted some findings there --> https://community.cisco.com/t5/unified-computing-system/ucsb-mraid12g-he-disk-queue-length/m-p/3999741
Everything I've seen on my own systems seems to indicate that it's the controller firmware/driver reporting the queue depth of 64, not the drive firmware itself.
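For anyone wanting to check what ESXi reports for their own devices: `esxcli storage core device list` includes a "Device Max Queue Depth" field per device. Below is a sketch that filters that output down to device IDs and their queue depths; the captured sample output here is hypothetical (shortened naa IDs, invented values), and on a real host you'd pipe the live command instead.

```shell
# Hypothetical sample of "esxcli storage core device list" output
# (naa IDs and values are illustrative only).
sample='naa.55cd2e404b66aaaa
   Device Max Queue Depth: 64
naa.5000cca04e12bbbb
   Device Max Queue Depth: 64'

# On a live ESXi host, run this instead of using the sample:
#   esxcli storage core device list | grep -E '^naa\.|Device Max Queue Depth'
printf '%s\n' "$sample" | grep -E '^naa\.|Device Max Queue Depth'
```

This makes it quick to compare what the same physical drive reports behind different controllers.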
I may be wrong, but it just seems odd that multiple platforms (Dell H730P, H740P, Cisco "B" MRAID, Cisco "C" MRAID) all show the same symptom, and the main thing they have in common is the controller chipset family and driver.
I'm going to try to borrow one of my previous HP-branded HGST HUSMM SAS SSDs (which I sold when I swapped to Cisco). Those reported a depth of 256 on the HP servers I had a while back, so I'll see whether they drop to 64 when plugged into my Dell or Cisco servers.
That may help give a "final" answer.
Failing that, a depth of 64 should be OK for a lab environment, so I'll likely just live with it.
I appreciate the response anyway.
I'll update this with findings once I've had a chance to try the HP SSD.
Maybe this isn't very helpful, but I find it strange that you see a different queue length depending on which vendor's server you put your disks in. Here I can see different queue lengths for different vSAN disks: 32, 64, 254, and 2046, and it seems related to the speed of the disks (2046 is NVMe).