I found the problem.
My Kingston V300 SSDs are GARBAGE!
I forgot I replaced my Corsair SATA II 60GB SSDs with these Kingston 120s (there was a good sale) and they suck big, BIG time!
Never again will I buy Kingston! EVER!
So I did a sanity check and plugged one into a PC, and performance was still the same garbage. Bad Kingston! Bad BAD Kingston!
I swapped my old, smaller SSDs back in and it's back to expected SATA II performance. Now to rebuild back to VSAN......lol
That's good to hear, but I haven't had any SATA SSD perform correctly in VSAN: not enterprise Intel SATA drives, not Samsung 850 Pros, not even slightly better-performing M.2 PCIe Samsungs. This was done under many conditions, with tons of testing. It is practically impossible to cheap out on VSAN flash. The only viable option right now is the Intel 750 Series (NVMe). Its performance nearly matches its enterprise sibling at $1 per GB.
Please take a hard look at my previous post; I fear your expectations won't match up with reality.
What is your opinion of using SATA SSD for the capacity tier in all flash vSAN 6? Along with a PCIe SSD for the performance tier. Thank you, Zach.
Unfortunately I have yet to have that pleasure. The announcement of Samsung's 2TB SSD was really cool, and I was hoping to move my lab's 1TB SAS drives to All-Flash. Yet I find myself troubled by the consumer side still not making SATA legacy. Just as the Intel 750 Series cut things in half and made proper NVMe lab-friendly, I was expecting the same thing in the large-capacity realm. All in all, toward the end of the year I might gear up for some 1TB SATA SSD All-Flash. Funny enough, I am more interested in how All-Flash will perform when paired with consumer-end performance flash, not NVMe; specifically, how an ultrafast M.2 PCIe SSD performs in All-Flash on 1GB/10GB networks.
After going over my notes on this topic, I made special mention of the VDP Perf Test always failing on a 1GB network, noting that a possible middle ground would be All-Flash, though it would still exhibit high latencies. What doesn't work in VSAN is a path to understanding.
All-Flash VSAN is just so new, and deploying an All-Flash production system is perfectly suited for VDI. IMHO, real availability of 2TB+ SAS SSD drives is key to its success outside VDI and the extreme.
Sorry my reply is so askew.
EDIT: Using four Intel 750 NVMe drives per host, with their enterprise siblings as the performance tier, would be neat. It would need 3U DP servers + 40GB InfiniBand, putting the solution way outside a lab's reach. If someone at VMware isn't playing around with that type of solution, I would be very sad.
Because this is all LAB so far, my expectation is just to see a significant improvement with SSD caching as opposed to no caching.
My metrics numbers are not the best. It's not the actual SSDs that are slow (except for the Kingstons); it's the lame onboard controller of the DL360 G6 host. These use HP P410i controllers, and when you put in a SATA III SSD, it dumbs it down to SATA II.
I'll probably upgrade to LSI controllers later. I'm looking at the 9260-8i.
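One quick way to confirm the downgraded link is the "SATA Version is:" line that `smartctl -a` prints for a drive: the "current" value shows the negotiated speed. A minimal sketch of pulling that number out; the sample line below is an assumed example, not captured from this host:

```python
import re

# Assumed sample line from `smartctl -a /dev/sda`; on a SATA III drive
# behind a P410i, the "current" value would show the SATA II downgrade.
line = "SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)"

m = re.search(r"current:\s*([\d.]+)\s*Gb/s", line)
if m:
    print(f"negotiated link speed: {m.group(1)} Gb/s")  # 3.0 = SATA II
```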
Your SSDs are very slow. As I said, probably somewhere else, it's practically impossible to cheap out on your VSAN performance tier. SATA is not designed for high-speed flash and fails terribly in the demanding environment of VSAN cache r/w. VSAN does and will freak out very often running SATA flash SSDs for cache. Missed or aborted commands can turn into entire resets, and a QD of practically nothing. You could plug 4x RAID0 Samsung 850 Pros into an LSI 9260-8i and still have the same performance as one Kingston Value SSD. Best of luck -Jon
"Your SSDs are very slow."
Absolutely 100% correct because they are at SATA II speeds.
The production hardware will use HP Gen 9 in a hybrid config with SAS 12Gb/s 400GB or 600GB SSDs.
Now we just need this proof of concept running on 10GbE.........lol
Trying to sell that to management is like getting 2 root canals and a deep cleaning at the same time.
No -- your SSD is slow because it's slow. SATA is slow and not fit for the task. SATA I, II, III: they are all slow, it doesn't matter. If you don't want to internalize this, that is your own prerogative. -Jon
Out of genuine curiosity what are you trying to prove in your "proof of concept"?
Long story short.....
We currently have a massive implementation of NetApp and vCloud.
We have smaller departments that want to virtualize but don't want (can't afford) a full SAN-implemented architecture.
I want to show that vSAN can provide the same level of performance, if not better (especially write), using standard off-the-shelf HP or Dell servers in a vSAN setup with SSDs for R/W caching, for less than the typical full-SAN-implemented VMware architecture.
Show the difference in I/O performance with no SSD caching versus with caching, at a price per performance.
Of course it's with the understanding that the new hardware (along with being on 10GbE) will perform far better than the old hardware I am using.
Management would not fork out any funds for this proof of concept. The hardware actually belongs to me personally.
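The price-per-performance comparison the POC is after boils down to a trivial calculation. A sketch, where both the hardware costs and the IOPS figures are made-up placeholders rather than measured results:

```python
# Hypothetical configs: hardware cost vs. measured random-I/O throughput.
# All numbers are placeholders to illustrate the $/IOPS comparison only.
configs = {
    "no SSD cache": {"cost_usd": 4000.0, "iops": 1500},
    "SSD cache":    {"cost_usd": 5200.0, "iops": 12000},
}

for name, c in configs.items():
    dollars_per_iops = c["cost_usd"] / c["iops"]
    print(f"{name}: ${dollars_per_iops:.2f} per IOPS")
```

Even if the cached config costs more up front, the cost per delivered IOPS is what makes the case to management.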
Ok -- just keep in mind the hardware you are throwing at it will probably be slower than what's in place, so if you are trying to demonstrate something to them it might be very difficult. In all honesty, you might want to focus more on selling VSAN's ability; as VMware assures you, it can be fast as $%^&. I have never had such a fluid virtual experience till VSAN, and many testimonials attest to this. The caveat is that you use the right hardware. If you are learning VSAN to prove competency in delivering such a solution, then it is most important to understand the limitations of, say, slow SATA SSDs for your performance tier, and the overall bottleneck effect of the other components. Best of luck -Jon
Firstly, I want to stress the point that this design does not consider the penalty for the RAID setup.
May I know what SSD is used in this setup?
Also, setting up RAID is not a best practice for VSAN.
We may also need to consider the % of reads and writes that will take place.
If we are calculating IOPS for the write buffer and read cache, then front-end IOPS and back-end IOPS will vary.
Kindly plan for a higher write %; that is, try deploying SAS instead of SATA disks.
Get more SSDs for individual disk groups.
Then the design could be feasible; VSAN performance is good only with a good design.
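To illustrate the front-end vs. back-end IOPS point: with a RAID write penalty in play, the back-end IOPS the disks must supply grows with the write percentage. A sketch using the common sizing formula; the 70/30 workload split and RAID-1 penalty of 2 are assumptions for the example:

```python
# Back-end IOPS = (front-end reads) + (front-end writes * write penalty).
# RAID-1 has a write penalty of 2; the 70/30 read/write split is assumed.
def backend_iops(frontend_iops, write_pct, write_penalty):
    reads = frontend_iops * (1 - write_pct)
    writes = frontend_iops * write_pct * write_penalty
    return reads + writes

print(backend_iops(5000, 0.30, 2))  # 5000 front-end IOPS -> 6500.0 back-end
```

This is why a higher write % demands faster disks: the back-end requirement outpaces the front-end number the VMs actually see.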
I have used Transcend SSDs and they give better performance in my ThinkPad E431 laptop. I have built a VSAN lab on 5.5 and it works successfully.
I know this thread is old and likely dead, but I wanted to add a few things into the mix just for the sake of putting a nice bow on it. You will ultimately suffer from two issues in this deployment. I had to spend money to figure this one out the hard way, and I am trying to share my experience with others before they fall victim to it.

The first issue is that while enterprise SATA SSD drives may be fine for the capacity layer, consumer-grade SSD drives will never provide even decent POC testing. This goes doubly for any test utilizing older HP Smart Array controllers. The P410 series, which was the standard for G6 and G7 HPE servers, maxes out at 3Gbps, so a 6Gbps drive as your cache layer will operate less effectively than a SATA II consumer drive. Your best bet for this test to be accurate is to go to any popular online auction site and locate some used or new-old-stock 6Gb SAS SSD drives. I had SanDisk consumer SATA III drives as the cache layer in my lab. I swapped out all six of them with SanDisk Optimus SAS 6Gbps drives for ~$100 each and cut my latency down 90%. VM clones used to take over an hour for a 60GB VM, and now they take 6-10 minutes depending on what I am doing.
The other thing to consider about consumer-grade SSDs is that they generally don't handle power failure well, due to using volatile memory, and you will end up with orphaned or damaged writes after a power failure.
I have tested this stuff so many different ways and always came to the same conclusion: spend the money up front so your POC impresses, and contact some sales guys to see if they will send you a demo host or two in the hopes you will buy some stuff if the POC goes well.