VMware Cloud Community
fsora
Contributor

[ESXi_5.1u1] 2xSSD_840pro + Intel_raid_RS2BL040 -> non-ssd drive type and other questions

Good morning to everyone,

We are running ESXi 5.1 U1 on a small server built with desktop-class components:

- CPU: i5-3570K

- MB: MSI board based on Intel Z77

- RAM: 16 GB

- RAID: Intel RS2BL040

- SSD: 2x Samsung 840 Pro in a RAID 1 config --> single datastore

- ESXi installed on a USB stick

Only 3 Windows-based VMs, acting as SQL Server, domain controller, and DNS & DHCP server for about 10 PCs.

We are wondering about SSD performance and reliability, since:

  1. ESXi doesn't recognize the Intel local disk as an SSD (Configuration -> Storage -> Drive Type = non-ssd)
  2. ESXi does not seem to support a real TRIM function (we have also checked with the TRIMcheck tool: http://files.thecybershadow.net/trimcheck/ )
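
A quick way to double-check both points from the ESXi Shell seems to be the following (the naa.* identifier below is only a placeholder for our local RAID volume):

# Show the device details and check the "Is SSD" field
esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx | grep -i ssd

# Show the VAAI status; "Delete Status" indicates whether UNMAP/TRIM is supported
esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx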

What do you suggest as best practice for this type of configuration?

Should we avoid SSDs for non-cache use?

Thanks in advance

vubai.com
4 Replies
jrmunday
Commander

ESXi doesn't recognize Intel Local Disk as a SSD disk (Configuration->Storage->Drive Type = non-ssd)

You can tag devices as SSD if they are not recognised as such.
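
If I recall correctly, the tagging can also be done from the ESXi Shell with a SATP claim rule, roughly along these lines (naa.* is just a placeholder for your local RAID volume):

# Add a claim rule that marks the device as SSD
esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device naa.xxxxxxxxxxxxxxxx --option "enable_ssd"

# Reclaim the device so the rule takes effect
esxcli storage core claiming reclaim -d naa.xxxxxxxxxxxxxxxx

# Verify that "Is SSD" now reports true
esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx | grep -i ssd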

Do you have a performance issue, or are you simply going through a process of understanding the capabilities and limitations of this configuration?

Cheers,

Jon

vExpert 2014 - 2022 | VCP6-DCV | http://www.jonmunday.net | @JonMunday77
fsora
Contributor

Hi Jon,

Thanks for the reply: I'll tag the device as you suggested.

Since we've got about 29 GB free and ESXi doesn't support TRIM, I've started to worry about performance degradation.

Using CrystalDiskMark in a Windows VM, the RAID 1 seems to be doing "kinda" well (without the tag):

450 MB/s sequential read

400 MB/s sequential write

I'm still working out the capabilities and limitations of this configuration and, since we are going to assemble other small servers, I would like to learn more about the different types of configurations.

vubai.com
JarryG
Expert

"...SSD: 2x Samsung 840 pro raid 1 config..."

"...ESXi seems not supporting real Trim function..."

Nope. This has nothing to do with ESXi. If there is a problem at all, it is caused by the controller. AFAIK, there are *some* hw-raid controllers that support TRIM for RAID 0 (e.g. some of the "pseudo-raid" controllers Intel builds into its motherboard chipsets), but there is not a single hw-raid controller that supports TRIM for RAID 1 (or RAID 10/5/6/etc.).

And concerning SSD performance: it is generally recommended to over-provision much more than the standard values for performance consistency. This can be done by leaving some space unpartitioned; this space is then used as "additional" (dynamic) over-provisioning. I use the rule "100 GB of partition for every 128 GB of space". So if your 840 Pro is 512 GB, create a partition of only 400 GB (for the 256 GB 840 Pro, only 200 GB).
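
Just as a rough sketch of what I mean, assuming a hypothetical 256 GB RAID-1 volume that shows up as /dev/sda on a Linux live CD (both the size and the device name are only placeholders):

# 256 GB * 100/128 = 200 GB partition, ~56 GB left unpartitioned
parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart primary 1MiB 200GiB

If I remember right, you can get the same effect without a live CD by simply choosing a smaller capacity than the maximum ("custom space setting") when you create the VMFS datastore in the vSphere Client.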

But now comes the problem: if you *did not* leave enough area unpartitioned, it is not that simple anymore. Some data has already been written to the SSD, and the controller does not know whether those cells contain real data or just garbage. Because of that, the SSD controller will not use the unpartitioned area for dynamic over-provisioning. Although some SSD controllers are filesystem-aware, this does not work with raid arrays. So what you have to do is:

1. revert the SSDs to factory-default condition. This can be done with some live CD that includes "hdparm" (e.g. Hiren's BootCD); see the sketch below these steps.

2. create the raid array

3. partition the raid array, but this time leave ~20% of the space unpartitioned

With this setup you do not have to worry about performance degradation, even when running 2x SSDs in RAID 1 without TRIM.
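
For step 1, the usual hdparm secure-erase sequence from a live CD looks roughly like this (sdX and the password "p" are placeholders; the drive must not be in a "frozen" state, and the command destroys all data on it):

# Check the security state; the output should contain "not frozen"
hdparm -I /dev/sdX
# Set a temporary security password (required before the erase)
hdparm --user-master u --security-set-pass p /dev/sdX
# Issue the ATA Secure Erase, which resets all cells to their factory state
hdparm --user-master u --security-erase p /dev/sdX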

_____________________________________________ If you found my answer useful please do *not* mark it as "correct" or "helpful". It is hard to pretend to be a noob with all those points! 😉
jrmunday
Commander

Hi Fsora,

Andreas covers this topic fairly comprehensively on his blog - check it out: VMware Front Experience: FAQ: Using SSDs with ESXi

I generally use Iometer for my performance tests, as I can scale it out across the infrastructure and also produce nice, user-friendly charts of the results. Here is an example that I posted in another discussion:

Test disk performance - ESXi 5.1

The controller looks fine to me, and the Samsung 840 Pro SSDs are awesome (I use these in my laptop). Perhaps you should do some baselining so that you have a reference point for future comparisons?

I would look at doing the following storage tests:

  1. Single-worker test - 1x ESXi host, 1x guest VM
  2. Scale-out test - 1x ESXi host, multiple guest VMs and workers
  3. Depending on how many hosts you have, scale this out across your environment

Once you have this data you can always repeat the same process in the future to compare results, and even compare it against other tiers of storage.

Let me know if you want some iometer access specifications to set this up.

Cheers,

Jon

vExpert 2014 - 2022 | VCP6-DCV | http://www.jonmunday.net | @JonMunday77