stephan87
Enthusiast

SanDisk SX350-3200 not used as cache tier


Hi,

we have an 8-node stretched cluster of Dell 730xd servers (4 per datacenter) with the witness at a third location. Config per node is as follows:

- 21 SAS HDDs

- 3 SanDisk SX350-3200 cards

- add disks to storage: automatic

The SanDisk cards are on the vSAN HCL with driver version 4.2.3, but SanDisk has revoked that driver because of a bug, so we now have 4.2.4 installed (screenshot attached). The new driver is not on the HCL at the moment, and the vSAN health check shows the devices with a warning. Maybe that is the problem? When we run an IOMeter virtual machine, vSAN Observer shows 0.0 read cache used (screenshot attached).

2016-02-26 14_35_39-AW_ VMware Support Request 16898181002 [ref__00D409hQR._50034nbgfj_ref] - Nachri.jpg

I have also run the multicast, storage performance, and VM creation tests. The multicast and VM creation tests finished OK with warnings, and the storage test showed poor performance (screenshots attached).

vm.png

storage.png

multi.png

The vSAN health check reports the cache devices with a warning. I have attached some screenshots.

2016-02-26 14_40_09-vSphere Web Client.jpg

2016-02-26 14_40_23-vSphere Web Client.jpg

2016-02-26 14_40_35-vSphere Web Client.jpg

2016-02-26 14_40_48-vSphere Web Client.jpg

Regards,

Stephan

7 Replies
zdickinson
Expert

Good morning, was everything running normally with the 4.2.3 driver, and did these issues pop up only after moving to the 4.2.4 driver?  I understand that SanDisk pulled the 4.2.3 driver, but unless you were having major issues, I would default to the HCL.  Thank you, Zach.

pfuhli
Enthusiast

The 4.2.3 version of the driver is no longer available for download, so we started from scratch with 4.2.4.

mschubi
Enthusiast

Hello,

there is some news: if we build a storage policy with 100% read cache reservation, we do see cache usage.

Rulset_reserved.png

Cache Usage is now

vsan.vmdk_stats_with_reservation.png

But after running IOMeter for a while, the results are not as good as expected 😞

IOMeterResulr_reservation.png

IOMeter runs with the following settings:

- 10 GB test size

- 4 workers

- 64 outstanding I/Os

- 4 KB block size

- 70/30% read/write

- 80% random

- 30 seconds ramp-up
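For anyone who wants to reproduce this workload without IOMeter, the profile above could be sketched with fio (my own translation, not from the thread; `/dev/sdb` and the 300 s runtime are placeholders — point `--filename` at your own test disk or file):

```shell
# Approximate fio equivalent of the IOMeter profile above:
# 10 GB working set, 4 workers, 64 outstanding I/Os each, 4 KB blocks,
# 70/30 read/write mix, 80% random access, 30 s ramp-up.
fio --name=vsan-test --filename=/dev/sdb --size=10g \
    --numjobs=4 --iodepth=64 --bs=4k \
    --rw=randrw --rwmixread=70 --percentage_random=80 \
    --ramp_time=30 --runtime=300 --time_based \
    --ioengine=libaio --direct=1 --group_reporting
```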

best regards,

Mike

zdickinson
Expert

Bummer, this does seem to pop up more often than one would like.  Catch-22: the driver on the HCL is not available for download.  Have you contacted SanDisk support to see about getting the driver?  Thank you, Zach.

zdickinson
Expert

I'm curious what you were expecting.  From one VM running IOMeter, that looks pretty good to me; 13k+ IOPS @ 4k is solid.  If you want to see more throughput, increase the block size.

If you spin up IOMeter on several different VMs, can they all push 13k+ IOPS?  Thank you, Zach.

mschubi
Enthusiast

Hello Zach,

there is nothing "good" about it.

This single VM runs with 100% read cache reservation, and more than enough write cache is available as well.

The same test VM gives us more than 138K IO/s at <0.1 s latency if we use the SX350 flash drives as a VMFS datastore.

On vSAN we see only 13K IO/s with latencies between 5 ms and 43 ms.

That's not flash-based "Enterprise" performance...
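For what it's worth (my own back-of-the-envelope check, not from the thread), the reported numbers are internally consistent with Little's law, which relates in-flight I/Os, IOPS, and average latency:

```python
# Little's law: outstanding I/Os = IOPS * average latency.
# Workload from the thread: 4 workers x 64 outstanding I/Os = 256 in flight.
workers = 4
oio_per_worker = 64
outstanding = workers * oio_per_worker  # 256

# At the ~13K IOPS observed on vSAN, the implied average latency is:
iops_vsan = 13_000
avg_latency_ms_vsan = outstanding / iops_vsan * 1000
print(f"vSAN implied avg latency: {avg_latency_ms_vsan:.1f} ms")  # ~19.7 ms

# At the ~138K IOPS seen on the local VMFS datastore:
iops_vmfs = 138_000
avg_latency_ms_vmfs = outstanding / iops_vmfs * 1000
print(f"VMFS implied avg latency: {avg_latency_ms_vmfs:.2f} ms")  # ~1.86 ms
```

So the ~19.7 ms implied average sits inside the reported 5–43 ms range, which suggests the bottleneck is genuinely in the vSAN I/O path rather than a measurement artifact.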

best regards,

Mike

stephan87
Enthusiast

We have applied the following parameter; it works, and the heap message goes away:

esxcfg-advcfg -s 2047 /LSOM/heapSize

We have also applied two values for our controller from: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=21354...

esxcfg-advcfg -s 110000 /LSOM/diskIoTimeout

esxcfg-advcfg -s 1 /LSOM/diskIoRetryFactor
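In case it helps others: the current values can be read back with the standard `-g` flag of `esxcfg-advcfg` to confirm the settings took effect (run on each host in the cluster):

```shell
# Read back the vSAN LSOM advanced settings on each ESXi host.
esxcfg-advcfg -g /LSOM/heapSize
esxcfg-advcfg -g /LSOM/diskIoTimeout
esxcfg-advcfg -g /LSOM/diskIoRetryFactor
```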
