VMware Cloud Community
970170
Contributor

SSD datastore performance

I'd like to get some info on whether there are any ESXi-specific tweaks to get better performance out of VMs sitting on an SSD datastore.

Lab Setup:

LSI 9260-8i RAID card

8 x 500GB Samsung 830 SSDs (TLC, horrible I know), RAID5 (yes I must have parity protection)

I have endlessly tweaked the 9260 and RAID settings to arrive at the best possible configuration according to my benchmarks: no read ahead, 128 KB stripe size, direct I/O, disk cache enabled, write back with BBU.
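For reference, those settings map to roughly the following in MegaCLI (a sketch only; the stripe size can only be set when the array is created, and the -L0/-a0 logical drive and adapter IDs are placeholders for mine):

MegaCli -LDSetProp NORA -L0 -a0        # no read ahead
MegaCli -LDSetProp Direct -L0 -a0      # direct I/O
MegaCli -LDSetProp WB -L0 -a0          # write back with BBU
MegaCli -LDSetProp EnDskCache -L0 -a0  # enable the drives' own cache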

I am seeing a pretty big performance hit when I compare native RAID performance in Windows against what I get from inside a VM after transplanting the same RAID card and SSDs into my lab ESXi host. I understand there is overhead from the hypervisor, and then overhead from the VM, but still.

Looking for anyone who has experience with a similar setup and can provide tweaks/suggestions.

Thanks in advance!

6 Replies
jrocket
Contributor

Hi,

I have done quite a bit of work with SSDs myself. SSDs have good use cases depending on the workload: read vs. write, and random vs. sequential. You also need to look at the long-term P/E (program/erase) endurance of the drives.

What workload are you building this for? Database?

There are technologies in the market that let you have the best of both worlds.

Take a peek at this: WWW.SanDisk.com/FlashSoft-Connect. FlashSoft turns SSD into a cache, so random read and random write workloads are handled by the SSD, while sequential reads and writes bypass it and are handled better by traditional spinning disk.

Note: you won't need as much SSD in your design, and the SSD won't have to be RAID-protected if it is used only as a read cache.

Regards

970170
Contributor

Hi,

Thanks for the reply. In my scenario I want all the VMs (various workloads) on this particular host to run on SSDs, so there is no need for selective caching. I have run extensive benchmarks, and both sequential and random reads/writes are much faster than on the platter disks they are replacing.

My issue is that there seems to be excessive ESXi and VM overhead causing a loss of IOPS/bandwidth on the SSD array. I'm wondering if there are any optimizations I can make to vSphere that would increase storage performance.
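One host-side knob I've come across is the Disk.SchedNumReqOutstanding advanced setting, which limits outstanding I/Os to a LUN when more than one VM is issuing I/O to it (this assumes an ESXi 5.0/5.1 host, and I haven't confirmed it actually helps here; the default is 32):

esxcli system settings advanced list -o /Disk/SchedNumReqOutstanding     # check the current value
esxcli system settings advanced set -o /Disk/SchedNumReqOutstanding -i 64   # raise the limit

Is that the sort of thing worth tuning, or am I looking in the wrong place?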

Thanks

jrocket
Contributor

I suspect the RAID and the I/O controller are the pain point here.

The reason I mention the caching solution is that you can save on the amount of SSD you are deploying, you don't have to worry about the P/E ratio, and you can avoid the internal RAID card entirely if the SSD is used just as a read cache. Take a look at this:

WWW.SanDisk.com/FlashSoft-Connect

Solutions Architect | SanDisk / FlashSoft Software
970170
Contributor

Hi,

I'm not concerned about TRIM; the native garbage collection on the SSDs works well, and I am going to live with a periodic secure erase of the array.
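To be clear, by secure erase I mean pulling the drives and running an ATA secure erase on a plain SATA/AHCI port rather than through the 9260. Roughly along these lines (sketch only; /dev/sdX is a placeholder and the drive must not be security-frozen):

hdparm --user-master u --security-set-pass p /dev/sdX   # set a temporary ATA security password
hdparm --user-master u --security-erase p /dev/sdX      # issue the ATA secure erase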

Thanks

vangoose
Contributor

Do you have the FastPath option? It will significantly improve I/O performance for SSDs.

I'd also create a RAID-50 out of the 8 SSDs.
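If you rebuild, a RAID-50 made of two four-drive RAID-5 spans looks roughly like this in MegaCLI (sketch only; the 252:x enclosure:slot IDs are placeholders for your drives):

MegaCli -CfgSpanAdd -r50 -Array0[252:0,252:1,252:2,252:3] -Array1[252:4,252:5,252:6,252:7] WB NORA Direct -strpsz128 -a0

That keeps your current write back / no read ahead / direct I/O policies and the 128 KB stripe.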

970170
Contributor

THANK YOU for the glorious "why didn't I think of that" moment. I'm vMotioning my VMs off now and will rebuild with RAID 50.

I have looked at FastPath but wasn't quite sold on it. Do you have firsthand experience with it in an ESXi environment with typical server workloads? Is it a software key or a hardware key? I noticed that there are different SKUs for the 9260 and 9265; is it transferable in any way later if I upgrade?

Do I need to reconfigure the array after installing FastPath, or do I just enter the key after I buy it and it works?

Thanks again!
