COS
Expert

vSAN 6.0 LAB - SSD Write Cache performance

Setup:

3× DL360 G6 servers, 64GB RAM, 2× X5670 CPUs, P410i RAID controller w/512MB cache

Disk 1: 900GB 10K RPM SAS as RAID 0 on SAS channel 1 (controller cache enabled)

Disk 2: 900GB 10K RPM SAS as RAID 0 on SAS channel 1 (controller cache enabled)

Disk 3: 2× 120GB SATA SSDs as RAID 0 (240GB total) on SAS channel 2 (controller cache disabled)

1 NIC for management, 1Gb/s full duplex

2 NICs for vMotion, 1Gb/s full duplex

2 NICs for vSAN traffic, 1Gb/s full duplex
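
(For anyone reproducing this setup, a quick sanity check that the vSAN vmknics are tagged correctly, run from the ESXi shell; interface names will vary per host:)

esxcli vsan network list          # shows which vmknics are carrying vSAN traffic
esxcli network ip interface list  # confirms the vmk-to-vSwitch/uplink mapping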

Almost everything is working as expected.

VMs on a failed node get respawned.

VMs can vMotion from host to host.

HA works.

Read cache works.

The "Flash read cache reservation" is set to 15% in the default vSAN policy.

Not sure where the write cache setting is.
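
(Side note: in hybrid vSAN there is no separate write cache setting to find. The cache-tier SSD is split automatically, roughly 70% read cache and 30% write buffer, and only the read side is exposed through the "Flash Read Cache Reservation" policy. To see which device vSAN claimed as the cache tier, from the ESXi shell:)

esxcli vsan storage list   # lists the disks vSAN claimed and flags the SSD cache device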

But write cache performance is really crappy -- compared to read, it's glacially slow.

When I run an SQLIO test for reads, I get ~225MB/s and 3538 IOPS. That's expected, right?

In the read test, I can see the vSAN traffic on the NICs at ~119,000KB/s.

When I run an SQLIO test for writes, I get 10.75MB/s and 172 IOPS. That's not good.

In the write test, I can see the vSAN traffic on the NICs at ~181KB/s... Booooo, not good.

I'm a little stumped, as I thought the vSAN write cache would improve IOPS... lol, or am I wrong?
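
(For reference: 3538 IOPS at ~225MB/s works out to 64KB per I/O -- 3538 × 64KB ≈ 226MB/s -- and the write numbers line up with 64KB as well, since 172 × 64KB ≈ 10.75MB/s. A sketch of SQLIO invocations that generate this pattern; the file path, duration, thread count, and queue depth below are placeholders, not necessarily the exact flags used:)

:: 64KB random reads, 2 threads, 8 outstanding I/Os, 120 seconds
sqlio -kR -frandom -b64 -t2 -o8 -s120 -BH -LS E:\testfile.dat

:: same pattern for writes
sqlio -kW -frandom -b64 -t2 -o8 -s120 -BH -LS E:\testfile.dat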

Thanks

28 Replies
jonretting
Enthusiast

Your SSDs are very slow. As I probably said somewhere else, it's practically impossible to cheap out on your VSAN performance tier. SATA is not designed for high-speed flash and fails terribly in the demanding environment of VSAN cache r/w. VSAN does and will freak out very often running SATA flash SSDs for cache: missed or aborted commands can turn into full device resets, and a queue depth of practically nothing. You could plug 4× Samsung 850 EVO/Pro drives in RAID 0 into an LSI 9260-8i and still have the same performance as one Kingston value SSD. Best of luck -Jon
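
(One way to see the symptoms Jon describes -- latency spikes and aborted commands on the cache SSD -- is to watch the device while the write test runs, from the ESXi shell. The device identifier below is a placeholder:)

esxcli storage core device stats get -d naa.xxxxxxxxxxxxxxxx   # per-device counters, including failed commands
esxtop   # then press 'u' for the device view and watch DAVG/cmd and QUED during the test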

COS
Expert

"Your SSDs are very slow."

Absolutely 100% correct, because they're running at SATA II speeds.

The production hardware will be HP Gen9 servers in a hybrid config with 12Gb/s SAS SSDs (400GB or 600GB).

Now we just need this proof of concept running on 10GbE... lol

Trying to sell that to management is like getting two root canals and a deep cleaning at the same time.

jonretting
Enthusiast

No -- your SSD is slow because it's slow. SATA is slow and not fit for the task. SATA I, II, III -- they are all slow; it doesn't matter. If you don't want to internalize this, that is your own prerogative. -Jon

jonretting
Enthusiast

Out of genuine curiosity, what are you trying to prove in your "proof of concept"?

COS
Expert

Long story short...

We currently have a massive NetApp and vCloud implementation.

We have smaller departments that want to virtualize but don't want (can't afford) a full SAN-backed architecture.

I want to show that vSAN can provide the same level of performance, if not better (especially write performance), using standard off-the-shelf HP or Dell servers with SSDs for read/write caching, for less than the typical full-SAN VMware architecture.

The idea is to show the difference in I/O performance with and without SSD caching, at a price per unit of performance.

Of course, this is with the understanding that the new hardware (along with being on 10GbE) will perform at a level far above the old hardware I'm using.

Management would not fork out any funds for this proof of concept; the hardware actually belongs to me personally.

Thanks

jonretting
Enthusiast

OK -- just keep in mind that the hardware you are throwing at it will probably be slower than what's in place, so if you are trying to demonstrate something to them, it might be very difficult. In all honesty, you might want to focus more on selling VSAN's capability; as VMware assures you, it can be fast as $%^&. I have never had such a fluid virtual experience until VSAN, and many testimonials attest to this. The caveat is that you use the right hardware. If you are learning VSAN to prove competency in delivering such a solution, then it is most important to understand the limitations of, say, a slow SATA SSD in your performance tier, and the overall bottleneck effect of the other components. Best of luck -Jon

CrashTheGooner
Enthusiast

Dear COS,

First, I want to stress that this design does not account for the write penalty of the RAID setup.
May I know which SSD is used in this setup?

Also, putting disks behind RAID is not a best practice for VSAN.

We also need to consider the percentage of reads and writes that will take place.

If we are calculating IOPS for the write buffer and read cache, then front-end IOPS and back-end IOPS will differ, as in the example below.
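
(A rough worked example of that gap, assuming the default FTT=1 policy, which mirrors every object across two hosts:

front-end writes seen by the VM: 172 IOPS
back-end writes = front-end × (FTT + 1) = 172 × 2 = 344 IOPS hitting the cache SSDs

Destaging from the SSDs to the 10K SAS disks then adds its own back-end I/O on top, so the disks absorb roughly double the write IOPS the guest actually sees.)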

To handle a higher write percentage, try deploying SAS instead of SATA disks.
Get more SSDs for the individual disk groups.

Then the design could be feasible. VSAN performance is only good with a good design :)

If you find this or any other answer useful please mark the answer as correct or helpful. RGS
sadiq1435
Enthusiast

Hi

I have used Transcend SSDs and they give better performance in my ThinkPad E431 laptop. I have built a VSAN lab on 5.5 and it works successfully.

darcidinovmw
VMware Employee

I know this thread is old and likely dead, but I wanted to add a few things into the mix just for the sake of putting a nice bow on it. You will ultimately suffer from two issues in this deployment. I had to spend money to figure this one out the hard way, and I am trying to share my experience with others before they fall victim to it.

The first issue is that while enterprise SATA SSD drives may be fine for the capacity layer, consumer-grade SSD drives will never provide even decent POC testing. This goes doubly for any test utilizing older HP Smart Array controllers. The P410 series, which was the standard for G6 and G7 HPE servers, maxes out at 3Gbps, so a 6Gbps drive used as your cache layer will be throttled and operate no better than a SATA II consumer drive. Your best bet for this test to be accurate is to go to any popular online auction site and locate some used or new-old-stock 6Gb SAS SSD drives. I had SanDisk consumer SATA III drives as the cache layer in my lab; I swapped all six of them out for SanDisk Optimus SAS 6Gbps drives at ~$100 each and cut my latency down 90%. VM clones used to take over an hour for a 60GB VM, and now they take 6-10 minutes depending on what I am doing.
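
(If anyone wants to verify the negotiated link speed behind a Smart Array controller, the HP CLI on ESXi reports it per physical drive. A sketch only -- the tool is hpacucli on older HP images and hpssacli on later ones, and the install path may differ on your build:)

/opt/hp/hpssacli/bin/hpssacli ctrl all show config detail   # look for the transfer rate reported for each physical drive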

The other thing to consider about consumer-grade SSDs is that they generally don't handle power failure well, since they use volatile cache memory, and you can end up with orphaned or damaged writes after a power loss.

I have tested this stuff so many different ways and always came to the same conclusion: spend the money up front so your POC impresses, and contact some sales guys to see if they will send you a demo host or two in the hopes that you will buy some stuff if the POC goes well.

Doug Arcidino, VCP-DCV 4/5/6, VCP-DTM 5/6/7, VCAP-DCV Deploy/Design 6
If this answer was helpful, please mark it as answer. I work for VMware.
Disclaimer: Any views or opinions expressed here are strictly my own. I am solely responsible for all content published here. Content published here is not read, reviewed or approved in advance by VMware and does not necessarily represent or reflect the views or opinions of VMware.