I’m most of the way through rebuilding my home lab: I’ve swapped to Cisco UCS M4 generation gear and am also using UCS Manager.
I’m continuing to run VMware vSAN, so I tracked down the required UCSB-MRAID12G-HE controllers for my B200M4 blades.
The environment is up and running on the UCS side, and before starting the VMware config I decided to check the controller and disk queue lengths.
The controller is listed in esxtop with a queue depth of 896, which is correct, but the Cisco SAS SSDs attached to it are only reporting a depth of 64.
I'm using the Cisco ESXi 6.7 U3 custom ISO, and my firmware versions are all on the latest 4g release.
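For anyone wanting to check the same thing on their own hosts, the depth esxtop shows as DQLEN can also be read from esxcli. This is just a sketch for the ESXi shell; the awk filter only pairs each device's display name with its reported maximum queue depth for readability:

```shell
# Print each device's reported max queue depth (the DQLEN value in
# esxtop) alongside its display name, from the ESXi shell.
esxcli storage core device list \
  | awk -F': ' '
      /Display Name:/           { name = $2 }          # remember the device name
      /Device Max Queue Depth:/ { print $2 "\t" name } # depth, then name
    '
```

On the hardware described above you'd expect the Cisco SAS SSDs to show 64 in the first column.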
The Cisco SAS SSDs are part numbers:
All the hardware is on the vSAN HCL, and with the exception of the “capacity” drive my setup essentially mirrors the B200M4 AF4 vSAN ready node configuration.
Does the LSI controller in the “HE” only allow a depth of 64 to drives (instead of the expected 256), or are these drives limited somehow?
Any suggestions welcome!
My understanding is that the queue depth on the drives is determined by the firmware on the drive itself, as opposed to the controller.
I see SAS devices with a QD of 64 all the time, so this isn't uncommon for firmware and devices that are on the vSAN HCL.
While it may be technically feasible to increase the QD on these, it isn't necessarily going to be beneficial, and I wouldn't advise it without the hardware vendor's blessing (the fact that they set it to 64 is unlikely to be an arbitrary decision).
Thanks for the reply!
I started a thread on the Cisco forums at the same time as this, and I've posted some findings there --> https://community.cisco.com/t5/unified-computing-system/ucsb-mraid12g-he-disk-queue-length/m-p/39997...
Everything I've seen on my own systems seems to indicate that it's the controller firmware/driver reporting the queue length of 64, rather than the drive firmware itself.
I may be wrong, but it just seems odd that multiple system platforms (Dell H730P, H740P, Cisco "B" MRAID, Cisco "C" MRAID) all show the same symptoms, and the main thing in common is the chipset family and driver.
I'm going to try to borrow one of my previous HP-branded HGST HUSMM SAS SSDs (which I sold when I swapped to Cisco), which reported a depth of 256 on the HP servers I had a while back, and see if it drops to 64 when plugged into my Dell or Cisco servers.
That may help give a "final" answer.
Failing that, a depth of 64 should be fine for a lab environment, so I'll most likely just live with it.
I appreciate the response anyway.
I'll update this with findings once I've had a chance to try the HP SSD.
Maybe not very helpful, but I find it strange that you see a different queue length depending on which vendor's system you put your disks in. Here I can see different queue lengths for different vSAN disks: 32, 64, 254, and 2046, and it seems related to the speed of the disks (2046 is the NVMe).