VMware Cloud Community
smonborgne
Enthusiast

PCIe card vs SAS SSD

Hi,

I would like some advice on a configuration I’m building.

We have the choice of using a Fusion-io card or a SAS SSD for a hybrid configuration.

But for the same price we can have a 400GB Fusion-io card or a 1.6TB SSD.

With the PCIe card we would already reach the ~10% flash ratio, but for the same price I was wondering what the best choice would be?

I know it can be an "it depends" answer because the working set size or the latency needs could differ, but I'm looking for general advice here because I can't measure the real impact it would have.

Or maybe I can even just stick with a 400GB SSD and save some money?
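To put the ~10% flash ratio in numbers, here is a rough sizing sketch (assuming the usual rule of thumb that the cache tier should be at least 10% of anticipated consumed capacity; the figures are illustrative only, not from our environment):

# Rough vSAN cache sizing sketch. Assumption: the common rule of thumb that
# the cache tier should be >= 10% of anticipated consumed capacity (before
# replication overhead). Figures are illustrative only.

def required_cache_gb(consumed_capacity_gb, ratio=0.10):
    """Recommended cache size for a given consumed capacity."""
    return consumed_capacity_gb * ratio

# How much consumed capacity each option covers at the 10% ratio:
for cache_gb in (400, 1600):
    covered_tb = cache_gb / 0.10 / 1000.0
    print("%d GB cache covers roughly %.1f TB of consumed capacity" % (cache_gb, covered_tb))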

Thanks for your help

Sylvain

5 Replies
zdickinson
Expert

I like the PCIe card route; it keeps the I/O off the storage controller.  Thank you, Zach.

saurav116
Contributor

As you mentioned, it depends. If you compare PCIe and SAS SSD purely from a performance point of view, I would suggest going with PCIe. If you are looking for a balance between capacity, performance, and cost, a SAS SSD would be a good option. Again, as you mentioned, your total data size also matters; if you are working with a large data set, I would recommend going with an enterprise-grade SSD option.

smonborgne
Enthusiast

Thanks for your answers

I asked the same question to a consultant who has done some testing on this kind of configuration, and his point was that the network latency and bandwidth over a 10Gb link almost negate the gains of a PCIe card.

He saw better performance in cases where the VMs ran on the nodes storing their files, but since vSAN is data-locality agnostic by design, that situation is neither likely nor predictable.
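As a rough sanity check of that argument, the device latency advantage only matters relative to the network round trip added for remote reads. A minimal sketch, with purely hypothetical latency figures (not measurements from our hardware):

# Back-of-the-envelope check: does the cache-device latency gap survive the
# network hop? All figures below are hypothetical assumptions, not
# measurements from any specific hardware.

network_rtt_us = 50.0                                     # assumed 10GbE round trip incl. stack overhead
device_read_us = {"SAS SSD": 150.0, "PCIe flash": 80.0}   # assumed device read latencies

for name, dev_us in device_read_us.items():
    total_us = dev_us + network_rtt_us
    print("%s: ~%.0f us end-to-end for a remote cache read" % (name, total_us))

# The absolute gap between the devices stays the same, but as a fraction of
# the end-to-end latency it shrinks once the network round trip is added.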

Have any of you done comparisons or testing on this?

depping
Leadership

Personally I prefer regular SSDs over PCIe-based devices, primarily because in my experience it typically takes longer for PCIe-based solutions to be certified for a new version; they require specific drivers that need to align with specific firmware, which just makes management a bit more challenging. On top of that, the price per GB is usually significantly higher. As you state, for the same amount you can get a decent 1.6TB drive instead, which means you have 4x the cache capacity. Not only will this increase your write buffer, it will also increase the read cache.
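To make the 4x figure concrete, a hybrid vSAN disk group splits the cache device roughly 70% read cache / 30% write buffer, so a quick sketch of what each option would give per disk group (assuming that split):

# Hybrid vSAN splits the cache device 70% read cache / 30% write buffer per
# disk group; sketch of what each option yields under that split.

def hybrid_cache_split(cache_gb):
    """Return (read_cache_gb, write_buffer_gb) for a hybrid disk group."""
    return cache_gb * 0.70, cache_gb * 0.30

for cache_gb in (400, 1600):
    read_cache_gb, write_buffer_gb = hybrid_cache_split(cache_gb)
    print("%d GB device -> %.0f GB read cache, %.0f GB write buffer" % (cache_gb, read_cache_gb, write_buffer_gb))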

That said, I have customers who prefer PCIe for performance reasons, but it all depends on what your requirements are in terms of latency / IOPS. My experience is that most workloads typically won't need to hit those ultra-low numbers.

jmadpalw
Enthusiast

Hi

Below is a comparison:
