VMware Cloud Community
COS
Expert

vSphere 5 - SSDs

I am considering using 4-6 of these disks

http://www.newegg.com/Product/Product.aspx?Item=N82E16820233191

MS Win Server 2008 & W7 figure out that they are installing on an SSD and set the OS up accordingly. They use AHCI mode, do "Disk Alignment", and disable disk prefetch and other spindle-tuning stuff.

Does vSphere 5 know to do the same thing? The RAID 5 LUN will be where vSphere 5 is installed. I think that's also where the swapfiles are, so theoretically it should be a little faster if a VM has to page out.

Also does vSphere support TRIM if I decide to use just one SSD?

Thanks

1 Solution

Accepted Solutions
wdroush1
Hot Shot

COS wrote:

I am considering using 4-6 of these disks

http://www.newegg.com/Product/Product.aspx?Item=N82E16820233191

MS Win Server 2008 & W7 figure out that they are installing on an SSD and set the OS up accordingly. They use AHCI mode, do "Disk Alignment", and disable disk prefetch and other spindle-tuning stuff.

Does vSphere 5 know to do the same thing? The RAID 5 LUN will be where vSphere 5 is installed. I think that's also where the swapfiles are, so theoretically it should be a little faster if a VM has to page out.

Also does vSphere support TRIM if I decide to use just one SSD?

Thanks

vSphere doesn't support TRIM as far as I know (was discussed before).

Also, hypervisor on the SSDs is a huge waste, put your VMs on it.

As for vSphere 5 telling the guest OS it's on SSDs so it can tune itself, I'm pretty sure it doesn't; and if it does, it wasn't put in as a marketing piece (as it should have been!).


22 Replies
golddiggie
Champion

IMO, you'd be better off getting a fast (or good-speed) USB flash drive and installing ESXi 5 onto that.

Keep the SSDs for your client system, or use them for the datastores/LUNs where high IOPS matters.

wdroush1
Hot Shot

COS wrote:

I am considering using 4-6 of these disks

http://www.newegg.com/Product/Product.aspx?Item=N82E16820233191

MS Win Server 2008 & W7 figure out that they are installing on an SSD and set the OS up accordingly. They use AHCI mode, do "Disk Alignment", and disable disk prefetch and other spindle-tuning stuff.

Does vSphere 5 know to do the same thing? The RAID 5 LUN will be where vSphere 5 is installed. I think that's also where the swapfiles are, so theoretically it should be a little faster if a VM has to page out.

Also does vSphere support TRIM if I decide to use just one SSD?

Thanks

vSphere doesn't support TRIM as far as I know (was discussed before).

Also, hypervisor on the SSDs is a huge waste, put your VMs on it.

As for vSphere 5 telling the guest OS it's on SSDs so it can tune itself, I'm pretty sure it doesn't; and if it does, it wasn't put in as a marketing piece (as it should have been!).

COS
Expert

Thanks all!

Appreciate the input!

COS
Expert

I think I'll use it for SQL index files only. It's not big enough to hold the DBs themselves. I'll rely on the SandForce SF-2281's "Garbage Collection" feature since TRIM isn't supported.

Thanks

wdroush1
Hot Shot

COS wrote:

I think I'll use it for SQL index files only. It's not big enough to hold the DBs themselves. I'll rely on the SandForce SF-2281's "Garbage Collection" feature since TRIM isn't supported.

Thanks

Is it on a SAN? I find SSDs great for caching.

COS
Expert

On a small MSA 2000 fibre SAN.

I figure even just 6 of these in a RAID 5 will be enough to push the controllers close to their limits. But if they're just index files, they'll be small bursts of data. The only process that will probably stress the SAS/SATA bus is the re-indexing or rebuild of indexes in a weekly maintenance job.

If I can get away with just the 6, I might sneak the other 2 into my desktop PC... shhhhhh, don't tell anyone though. :)

wdroush1
Hot Shot

COS wrote:

On a small MSA 2000 fibre SAN.

I figure even just 6 of these in a RAID 5 will be enough to push the controllers close to their limits. But if they're just index files, they'll be small bursts of data. The only process that will probably stress the SAS/SATA bus is the re-indexing or rebuild of indexes in a weekly maintenance job.

If I can get away with just the 6, I might sneak the other 2 into my desktop PC... shhhhhh, don't tell anyone though. :)

Coming from the NexentaStor platform (Solaris), where using flash for the log + L2ARC cache gets you about 10x the bang for your buck compared to Dell/HP/EMC systems, I've only seen SSDs used in mirrors and striped from there. I have no clue if this is because of MTBF or what. Anyone want to chime in on this?

COS
Expert

I avoid mirroring for wear-leveling purposes.

In a mirror, if you write a 2GB file, you write 2GB onto SSD A and 2GB onto SSD B. Each SSD writes 100% of the data.

In a 3-SSD RAID 5, when you write a 2GB file, you write roughly 33% of the file data onto SSD A, 33% onto SSD B and 33% onto SSD C (plus each disk's share of parity), so no disk writes a full copy.

That spreads the wear across the SSDs, giving you longer wear-leveling life. But neither of the two can withstand a 2-drive failure. :( But that's life, and you can adjust.

Even just 3 SSDs in a RAID 5 will be enough for a greater-than-10x boost in performance. At over 500MB/s write each, you can most likely get at least 800MB/s actual write throughput.

My data here at work is write-intensive because of SQL. Reads are just a big plus that comes along with it. :)

My desktop at home gets 819MB/s write with 4 slower (270MB/s write) SSDs in RAID.
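The mirror-vs-RAID 5 arithmetic above can be sketched quickly. This is a rough model for one large sequential write only; RAID 5 read-modify-write on small writes adds extra wear that this ignores:

```python
# Rough per-disk write load for one large file write:
# mirror = every disk writes a full copy; RAID 5 = the data plus one
# disk's worth of parity, spread evenly across all members.
def per_disk_writes_gb(file_gb: float, disks: int, layout: str) -> float:
    """GB physically written to EACH disk for a single file write."""
    if layout == "mirror":
        return file_gb                          # full copy per disk
    if layout == "raid5":
        total = file_gb * disks / (disks - 1)   # data + parity overhead
        return total / disks                    # spread across members
    raise ValueError(f"unknown layout: {layout}")

print(per_disk_writes_gb(2.0, 2, "mirror"))  # 2.0 GB per SSD
print(per_disk_writes_gb(2.0, 3, "raid5"))   # 1.0 GB per SSD
```

So even with parity counted, each 3-disk RAID 5 member absorbs half the writes of a mirror member in this example.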

mcowger
Immortal

William Roush wrote:

Coming from the NexentaStor platform (Solaris) where using flash for log + L2ARC cache pretty much gets you about 10x the bang for your buck compared to Dell/HP/EMC systems,

Erm - Dell & EMC both have the ability to use flash as cache. I'm not sure what Dell calls it, but EMC calls it flash cache, and it's basically the same idea as L2ARC/Logzillas, but without the need to predefine how much of each (it's dynamically assigned).

--Matt VCDX #52 blog.cowger.us
RParker
Immortal

COS wrote:

I am considering using 4-6 of these disks

http://www.newegg.com/Product/Product.aspx?Item=N82E16820233191

MS Win server 2008 & W7 figures out that it is installing on an SSD and set's the OS accordingly. It uses AHCI mode, does "Disk Allignment", disables disk prefetch and other spindle tuning stuff.

Windows does NOT disable prefetch or other "spindle tuning stuff". I have Windows 7, I have an SSD, and both of these are STILL enabled. You have to turn them off manually. In fact, Windows doesn't know what an SSD drive is; it's just a drive like any other.

Disk alignment is only available on Windows 2008/7/Vista.  XP and 2003 do not align the disk.

AHCI mode is a BIOS function not a Windows (or any other OS) function.

ESX is not a user OS, it's a host. Windows is basically an OS with a GUI; services can do hosting, but the OS is still a user environment. One reason Windows gets a lot of flak is that people use Windows like any other desktop GUI: users throw a bunch of stuff on the OS and use it like their personal desktop. If you use a server the way a server is supposed to be used, you will have fewer problems.

ESX just hosts the VMs, that's all it does. The datastores are aligned automatically, but the VMs' disks are not; that is a GUEST function, not a host function.
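To illustrate what "aligned" means in this back-and-forth (a hypothetical sketch of the arithmetic, not how ESX or Windows actually checks it): a partition is aligned when its starting byte offset is a multiple of the storage's block/stripe granularity. XP/2003's old 63-sector start offset fails that test; the Vista/2008/W7 default of 2048 sectors (1MB) passes.

```python
# Hypothetical alignment check: start_lba is the partition's first
# 512-byte sector; granularity is the SSD page / RAID stripe size.
def is_aligned(start_lba: int, granularity_bytes: int = 4096) -> bool:
    return (start_lba * 512) % granularity_bytes == 0

print(is_aligned(63))    # False: classic XP/2003 offset (misaligned)
print(is_aligned(2048))  # True: 1MB offset used by Vista/2008/W7
```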

RParker
Immortal

golddiggie wrote:

IMO, you'd be better off getting a fast (or good speed) flash drive (USB) and install ESXi 5 onto that.

Keep the SSD's for your client system, or to use for the datastores/LUNs where high IOPS matters.

Yes, I see the part about IMO, but a USB flash drive isn't the same as an SSD; that kind of flash is still slow. SATA disks are faster than flash drives, you have no control over the speed of flash, and even the faster flash drives are not as fast as better SATA disks.

So SSD drives would be a better option..

IMO.

RParker
Immortal

Matt wrote:

William Roush wrote:

Coming from the NexentaStor platform (Solaris) where using flash for log + L2ARC cache pretty much gets you about 10x the bang for your buck compared to Dell/HP/EMC systems,

Erm - Dell & EMC both have the ability to use flash as cache. I'm not sure what Dell calls it, but EMC calls it flash cache, and it's basically the same idea as L2ARC/Logzillas, but without the need to predefine how much of each (it's dynamically assigned).

Considering the Dell is just a rebranded EMC, they still call it EMC flash cache.

wdroush1
Hot Shot

Matt wrote:

William Roush wrote:

Coming from the NexentaStor platform (Solaris) where using flash for log + L2ARC cache pretty much gets you about 10x the bang for your buck compared to Dell/HP/EMC systems,

Erm - Dell & EMC both have the ability to use flash as cache. I'm not sure what Dell calls it, but EMC calls it flash cache, and it's basically the same idea as L2ARC/Logzillas, but without the need to predefine how much of each (it's dynamically assigned).

Yeah, but NexentaStor does it for cheap. I have to buy a Compellent from Dell before I get SSD cache (AFAIK, in terms of what our Dell reps were willing to sell us), and those are absurdly expensive. EMC has fastcache, but their units still start at $10k (fairly cheap to be honest, but still not as cheap as an OpenSolaris-based box), and you have to buy expensive drives from them. On top of that, EMC wanted to steer me far away from the VNXe as soon as I mentioned fastcache (which means I'm looking at almost a $26k base before any SSD is added).

Though it will cost you if you don't know Solaris and ZFS: depending on hardware, it requires extensive work at the command line to do simple tasks (we've actually been running NexentaStor for a few months on probably an 8-9 year old box), so it is a lot more hands-on. At least at this point, Dell is the only one that makes a "hands off" ZFS-based system, as one of their Compellent lines (again massive amounts of $$$, but for some the drop-it-and-forget-it can be worth it). I wish Dell would make an SMB version.

And most of all: vendor lock-in for all hardware, absurdly expensive, lots of markup.

Nimble Storage bases its entire storage concept on SSD cache. Looks like they'll do it at a good price, but tiering beyond their simple setups is out of the question.

wdroush1
Hot Shot

COS wrote:

I avoid mirroring for wear-leveling purposes.

In a mirror, if you write a 2GB file, you write 2GB onto SSD A and 2GB onto SSD B. Each SSD writes 100% of the data.

In a 3-SSD RAID 5, when you write a 2GB file, you write roughly 33% of the file data onto SSD A, 33% onto SSD B and 33% onto SSD C (plus each disk's share of parity), so no disk writes a full copy.

That spreads the wear across the SSDs, giving you longer wear-leveling life. But neither of the two can withstand a 2-drive failure. :( But that's life, and you can adjust.

Even just 3 SSDs in a RAID 5 will be enough for a greater-than-10x boost in performance. At over 500MB/s write each, you can most likely get at least 800MB/s actual write throughput.

My data here at work is write-intensive because of SQL. Reads are just a big plus that comes along with it. :)

My desktop at home gets 819MB/s write with 4 slower (270MB/s write) SSDs in RAID.

Yeah, I understand. It makes massive sense numbers-wise; I'm confused why I haven't seen it before (at least in various benchmarks) and just wanted to point it out.

COS
Expert

I beg to differ. Read any posts in the following forum about "SSD Disk Alignment" and Windows 7.

http://www.overclockers.com/forums/forumdisplay.php?s=c53b7ad1be5ee3df90e579891e386721&f=69

If you upgrade your current spindle disk to an SSD and just drop/restore your image onto the SSD, then you are correct: all the spindle-tuning features stay on. One in particular is the disk defrag schedule. But yes, you are correct that AHCI is a BIOS setting.

On a native/RAW Windows 7 or 2008 build onto a new SSD, those features should not be enabled. You'll see there is no scheduled disk defrag for the SSD. That tells me W7 knows something.

But since I brought up the prefetch, I thought I'd check. Strangely, the registry settings "EnablePrefetcher" and "EnableSuperfetch" on one PC were set to 3 but on the other were set to 0. Hmmmm... that's weird.

Go to Start, Run, type in dfrgui.

On my PCs with an SSD and a native/RAW install of W7, the C: drive is listed as "Never Run" and the spindle drive D: has a schedule.

On PCs we did an Acronis Universal Restore onto, the scheduled defrag had last-run dates and is scheduled. Sounds right.

On Server 2008 R2, I get the same results.

IMHO, W7 is SSD-aware, but only to a certain extent.

Personally I don't think SSDs are ready for SAN prime time, simply because TRIM is still not supported/available in a RAID configuration. Although "Garbage Collection" is there, it only runs when the controller thinks the disk is idle; in RAID, there is little to no idle time.

I'm just tinkering with them in a small SAN. Performance, on the other hand, is kick-butt...

RParker
Immortal

COS wrote:

I beg to differ. Read any posts in the following forum about "SSD Disk Alignment" and Windows 7.

http://www.overclockers.com/forums/forumdisplay.php?s=c53b7ad1be5ee3df90e579891e386721&f=69

If you upgrade your current spindle disk to an SSD and just drop/restore your image onto the SSD, then you are correct: all the spindle-tuning features stay on. One in particular is the disk defrag schedule. But yes, you are correct that AHCI is a BIOS setting.

OK, I see that the TRIM command works *IF* the SSD drive supports it. Also, AHCI is a bad idea:

http://windows7forums.com/windows-7-hardware/18767-ahci-ssds.html

AHCI is not officially supported on OCZ SSDs and may under some circumstances affect performance, specifically during Windows installation. Enabling AHCI can result in higher performance in synthetic benchmarks for SSDs and HDDs alike, but can cause hang-ups and intermittent freezes on SSDs, since it allows multiple access requests to compete for a drive that is not made to address re-ordering of commands in the queue. We recommend AHCI is set to disabled in both Windows and in the BIOS. Native Command Queuing greatly increases the performance of standard rotational drives, but it has no bearing on SSDs.

So you are not using the right mode for those disks anyway; AHCI isn't supported on OCZ (using the same logic as the overclockers forum you posted). OCZ is pretty much the de facto standard for performance in SSDs and memory, so they must know something.

Cheap SSD drives and early versions may not be fully supported, so TRIM may not be in effect with the BIOS on the machine.

Also, I did what you said: I imaged my disk, but when I reinstalled, those settings were disabled. I had to manipulate my BIOS to get it to work; when I did it the first time these features didn't turn off, and I did lots of searching on SSD drive performance to see what needed to be turned off. Several sites give registry entries for this, so if these are supposed to be disabled by default, why are some sites giving manual entries? It must not be 100% for every SSD on every computer; it depends on hardware.

You were correct that these features are SUPPOSED to be turned off; in my experience they were not, but after manipulating my BIOS, doing all the recommendations and installing Windows 7 with SP1, I guess it's all working fine now, because I just checked and they were indeed OFF.

But since I brought up the prefetch, I thought I'd check. Strangely, the registry settings "EnablePrefetcher" and "EnableSuperfetch" on one PC were set to 3 but on the other were set to 0. Hmmmm... that's weird.

Go to Start, Run, type in dfrgui.

On my PCs with an SSD and a native/RAW install of W7, the C: drive is listed as "Never Run" and the spindle drive D: has a schedule.

On PCs we did an Acronis Universal Restore onto, the scheduled defrag had last-run dates and is scheduled. Sounds right.

On Server 2008 R2, I get the same results.

It probably depends on a pre-BETA of Windows and a pre-BETA of the Service Pack; those are "undocumented features" after all.

Personally I don't think SSDs are ready for SAN prime time, simply because TRIM is still not supported/available in a RAID configuration. Although "Garbage Collection" is there, it only runs when the controller thinks the disk is idle; in RAID, there is little to no idle time.

I'm just tinkering with them in a small SAN. Performance, on the other hand, is kick-butt...

This also depends on Enterprise-class SSD drives (not ALL SSDs are designed the same). Enterprise-class drives (those certified for SANs) don't need these features; a SAN doesn't have AHCI or any of that anyway. The LUN target depends on the OS, which is why they have so many configurations. It's RAW storage, no file system, and the OS can STILL be "tweaked" to take advantage of SSD performance. So to say SSDs are not SAN-ready would get you a LOT of feedback from the SAN vendors that are pushing them, because they are definitely up to the task.

COS
Expert

Also, consider running the following command to see if your W7 build is "aware" of your SSD.

To check whether TRIM is enabled, at the command line run:

fsutil behavior query disabledeletenotify

Results explained below:
DisableDeleteNotify = 1 (Windows TRIM commands are disabled)
DisableDeleteNotify = 0 (Windows TRIM commands are enabled)

If it's set to 0, TRIM is on. That means W7 knows it's an SSD, because TRIM is NOT a feature of spindle drives.
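As a small sketch of reading that output (a hypothetical helper; the real fsutil command only runs on Windows, so here its output is passed in as a string):

```python
# Interpret `fsutil behavior query disabledeletenotify` output.
# DisableDeleteNotify = 0 means Windows TRIM commands are enabled.
def trim_enabled(fsutil_output: str) -> bool:
    for line in fsutil_output.splitlines():
        if "DisableDeleteNotify" in line:
            value = line.split("=")[1].strip()
            return value == "0"
    raise ValueError("DisableDeleteNotify not found in output")

print(trim_enabled("DisableDeleteNotify = 0"))  # True  (TRIM enabled)
print(trim_enabled("DisableDeleteNotify = 1"))  # False (TRIM disabled)
```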

COS
Expert

"Cheap SSD drives, and early versions may be not be fully supported.. so TRIM may not be in effect with the BIOS on the machine."

AHAHAHAHAHAHAAA!!!!

That's EXACTLY what I am running at home! 4 CheapO ADATA S599 64GB thangs bought on sale from NewEgg....lol

They're not the fastest, but they support TRIM. In the end it didn't matter, though, because I configured them in a RAID set, so I have to rely on the SandForce 1200 controller's "Garbage Collection" instead.

I also tested Native/Raw builds with AHCI set/enabled in BIOS and W7 and found that AHCI when enabled gave me faster i/o than without AHCI enabled. My guess is ADATA is designed to run in that mode, just a guess though. It was almost ~1800 iops slower without it.

I get ~8,700 IOPS from the 4 SSDs using the SQLIO test. Here's the output I just ran...

sqlio v1.5.SG
using system counter for latency timings, 25000000 counts per second
parameter file used: param.txt
file c:\testfile.dat with 1 thread (0) using mask 0x0 (0)
1 thread writing for 90 secs to file c:\testfile.dat
using 64KB random IOs
enabling multiple I/Os per thread with 8 outstanding
size of file c:\testfile.dat needs to be: 2147483648 bytes
current file size: 0 bytes
need to expand by: 2147483648 bytes
expanding c:\testfile.dat ... done.
using specified size: 2048 MB for file: c:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec:  8725.49
MBs/sec:   545.34
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 0
Max_Latency(ms): 134
histogram:
ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 74 25  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
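The SQLIO numbers above are internally consistent: throughput is just IOPS times the 64KB I/O size. A quick sanity check:

```python
# Throughput = IOPS * IO size. SQLIO reported 8725.49 IOs/sec with
# 64KB random IOs, so MB/s should come out to roughly 545.
def mb_per_sec(iops: float, io_kb: float) -> float:
    return iops * io_kb / 1024.0  # KB/s -> MB/s

print(round(mb_per_sec(8725.49, 64), 2))  # 545.34, matching MBs/sec above
```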

COS
Expert

"So to say SSDs are not SAN-ready would get you a LOT of feedback from the SAN vendors that are pushing them, because they are definitely up to the task."

Yup, you're right..... I retract my comment.
