
My two most recent books, The Green and Virtual Data Center and Cloud and Virtual Data Storage Networking, both published by CRC Press/Taylor and Francis, have been added to the Intel Recommended Reading List for Developers.

 

Intel Recommended Reading

 

If you are not familiar with the Intel Recommended Reading List for Developers, it is a comprehensive list of books across various technology domains, covering hardware, software, servers, storage, networking, facilities, management, development and more.

 

Intel Recommended Reading List: http://noggin.intel.com/rr
Cloud and Virtual Data Storage Networking and The Green and Virtual Data Center: http://storageio.com/book2.html

 

So what are you waiting for? Check out the Intel Recommended Reading List for Developers, where you can find a diverse lineup of books; I'm honored to have two of mine join the esteemed list. Here is a link to a free chapter download from Cloud and Virtual Data Storage Networking.

 

Ok, nuff said for now.

 

Cheers gs

This is the second of a two-part post about why storage arrays and appliances with SSD drives can be a good idea; here is a link to the first post.

 

So again, why would putting drive form factor SSDs into existing storage systems, arrays and appliances be a bad idea?

 

Benefits of SSD drives in storage systems, arrays and appliances:

  • Familiarity for customers who buy and use these devices
  • Reduces time to market, enabling customers to innovate via deployment
  • Establishes comfort and confidence with SSD technology for customers
  • Investment protection of currently installed technology (hardware and software)
  • Interoperability with existing interfaces, infrastructure, tools and policies
  • Reliability, availability and serviceability (RAS), depending on vendor implementation
  • Features and functionality (replication, snapshots, policies, tiering, application integration)
  • Known entity in terms of hardware, software, firmware and microcode (good or bad)
  • Shares SSD technology across more servers or accessing applications
  • Good performance, assuming no controller, hardware or software bottlenecks
  • Wear leveling and other SSD flash management, if implemented
  • Can end performance bottlenecks if backend (drives) are the problem
  • Can coexist with or complement server-based SSD caching
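The wear leveling mentioned above can be sketched in a few lines. This is a hypothetical, highly simplified illustration (a real FTL also handles logical-to-physical mapping, garbage collection and bad blocks): each write is steered to the block with the fewest program/erase (P/E) cycles so no single block wears out prematurely.

```python
# Minimal wear-leveling sketch (illustrative only, not a real FTL):
# always write to the least-erased flash block.

class WearLeveler:
    def __init__(self, num_blocks):
        # per-block erase counts; all blocks start unworn
        self.erase_counts = [0] * num_blocks

    def pick_block(self):
        # choose the block with the lowest P/E count for the next write
        block = self.erase_counts.index(min(self.erase_counts))
        self.erase_counts[block] += 1
        return block

wl = WearLeveler(4)
picks = [wl.pick_block() for _ in range(8)]
# after 8 writes, wear is spread evenly across all 4 blocks
print(wl.erase_counts)  # [2, 2, 2, 2]
```

The point of the sketch is simply that leveling is a software/firmware function layered on top of the raw flash, which is why its presence and quality vary by vendor implementation.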

 

Note that the mere presence of SSD drives in a storage system, array or appliance does not guarantee the above benefits, nor their full potential. Different vendors and products implement SSD drive support to varying degrees, so look beyond the feature checkbox: dig in and understand how extensive and robust the SSD implementation is and whether it meets your specific requirements.

 

Caveats of SSD drives in storage systems, arrays and appliances:

  • May not use the full performance potential of SLC nand flash technology
  • Latency can be an issue for those who need extreme speed or performance
  • May not be the most innovative, newest technology on the block
  • Fun for startup vendors, marketers and their fans to poke fun at
  • Not all vendors add value or optimize for SSD drive endurance
  • Can be seen as not technologically advanced, i.e., as legacy or mature systems

 

Note that different vendors will have various performance characteristics: some products are good for IOPS, others for bandwidth or throughput, and others for latency or capacity. Look at different products to see how they vary in meeting your particular needs.

 

Cost comparisons are tricky. SSDs in HDD form factors certainly cost more than raw flash dies; however, PCIe cards with FTL (flash translation layer) controllers also cost more than flash chips by themselves. In other words, apples-to-apples comparisons are needed. In the future, ideally baseboard or motherboard vendors will revise their layouts to support nand flash (or its replacement) in DRAM DIMM type modules, along with the associated FTL and BIOS to handle flash program/erase (P/E) cycles and wear leveling management, something DRAM does not have to contend with. While that provides great location or locality of reference (figure 1), it is also a more complex approach that takes time and industry cooperation.
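As a sketch of what an apples-to-apples comparison involves, the following uses made-up illustration prices (not real market data) to show how the FTL controller, enclosure and interface costs that ride along with the flash change the effective cost per GB of each packaging option:

```python
# Hypothetical cost-per-GB comparison. All dollar figures are invented
# for illustration; the point is that a usable device = flash + packaging.

def cost_per_gb(flash_cost, packaging_cost, capacity_gb):
    """Total device cost (flash dies plus FTL/packaging) per GB."""
    return (flash_cost + packaging_cost) / capacity_gb

# same 100 GB of flash, three different packaging overheads (assumed)
raw_die   = cost_per_gb(flash_cost=80.0, packaging_cost=0.0,   capacity_gb=100)
drive_ssd = cost_per_gb(flash_cost=80.0, packaging_cost=40.0,  capacity_gb=100)
pcie_card = cost_per_gb(flash_cost=80.0, packaging_cost=120.0, capacity_gb=100)

print(raw_die, drive_ssd, pcie_card)  # 0.8 1.2 2.0
```

Comparing `drive_ssd` against `raw_die` alone overstates the drive's premium; the fair comparison is against other complete, usable devices such as `pcie_card`.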

 

Locality of reference for memory and storage
Figure 1: Locality of reference for memory and storage

 

Certainly, for best performance, just as in realty, location matters, and thus locality of reference comes into play. That is, put the data as close to the server as possible; however, when sharing is needed, a different approach or a companion technique is required.
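The locality point can be quantified with the classic effective access time formula: average latency is the hit-ratio-weighted blend of the fast local tier and the slower shared tier. The latency figures below are illustrative assumptions only, not measurements of any product.

```python
# Effective access time sketch: a server-local flash tier serves some
# fraction of reads (the hit ratio); the rest go to the shared array.

def effective_latency_us(hit_ratio, local_us, remote_us):
    """Average access latency given the fraction of hits served locally."""
    return hit_ratio * local_us + (1.0 - hit_ratio) * remote_us

# assumed 100 microsecond local flash vs 1000 microsecond shared access:
# a 90 percent hit ratio yields roughly 190 microseconds on average
print(effective_latency_us(0.9, 100.0, 1000.0))
```

Even a modest local tier moves the average sharply toward the fast medium, which is why server-side caching can complement, rather than replace, a shared array.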

 

Here are some general thoughts about SSD:

  • Some customers and organizations get the value and role of SSD
  • Some see where SSD can replace HDDs, others see where it complements them
  • Yet others are seeing the potential, however are moving cautiously
  • For many environments better than current performance is good  enough
  • Environments with the need for speed need every bit of  performance they can get
  • Storage systems and arrays or appliances continue to evolve  including the media they use
  • Simply looking at how some storage arrays, systems and appliances have evolved, you can get an idea of how they might look in the future, which could include not only SAS as a backend or target, but also PCIe. After all, it was not that long ago that backend drive connections went from proprietary to open parallel SCSI or SSA, to Fibre Channel loop (or switched), to SAS.
  • Engineers and marketers tend to gravitate to newer products and nand technology, which is good, as we need continued innovation on that front.
  • Customers and business people tend to gravitate towards deriving  greatest value out of what is there for as long as possible.
  • Of course, the latter two points are not always the case and can be flip-flopped.
  • Ultrahigh end environments and corner case applications will  continue to push the limits and are target markets for some of the newer  products and vendors.
  • Likewise, enterprise, mid market and other mainstream environments  (outside of their corner case scenarios) will continue to push known technology  to its limits as long as they can derive some business benefit value.

 

While not perfect, SSDs in an HDD form factor with a SAS or SATA interface, properly integrated by vendors into storage systems (or arrays or appliances), are a good fit for many environments today. Likewise, for some environments, new ground-up SSD based solutions that leverage flash DIMMs, daughter cards or PCIe flash cards are a fit. So too are PCIe flash cards, either as a target or as cache to complement a storage system (array or appliance). Certainly, SSDs take up drive slots in arrays; however, so too does occupying PCIe slots, particularly in high density servers that need every available socket and slot for compute and DRAM memory. Thus, there are pros and cons, features and benefits to the various approaches, and which is best will depend on your needs and perhaps preferences, which may or may not be binary.

 

I agree that for some applications and solutions, non drive form factor SSDs make sense, while in others, compatibility has its benefits. Yet in other situations, nand flash such as SLC tightly integrated with HDD and DRAM, as in my Momentus XT HHDD, is good for laptops but probably not a good fit for the enterprise yet. Thus, SSD options and placements are not binary; of course, sometimes opinions and perspectives will be.

 

For some situations, PCIe based cards in servers or appliances make sense, either as a target or as cache. Likewise, for other scenarios, drive format SSDs make sense in servers, storage systems, appliances, arrays or other solutions. Thus, while all of those approaches are used for storing binary digital data, the decision of what to use when and where often will not be binary, unless your approach is to use one tool or technique for everything.

 

Here are some related links to  learn more about SSD, where and when to use what:
  Why SSD based arrays and storage appliances can be a good idea (Part I)
  IT and storage economics 101,  supply and demand
  Researchers and marketers don't agree on future of nand flash SSD
  Speaking of speeding up business  with SSD storage
  EMC VFCache respinning SSD and  intelligent caching (Part I)
  EMC VFCache respinning SSD and intelligent caching (Part II)
  SSD options for Virtual (and Physical) Environments: Part I  Spinning up to speed on SSD
  SSD options for Virtual (and Physical) Environments, Part  II: The call to duty, SSD endurance
  SSD options for Virtual (and Physical) Environments Part  III: What type of SSD is best for you?

 

Ok, nuff said for now.

 

Cheers gs

This is the first of a two-part series; you can read part II here.

 

Robin Harris (aka @storagemojo) recently asked in a blog post whether solid state devices (SSDs) using SAS or SATA interfaces in traditional hard disk drive (HDD) form factors are a bad idea in storage arrays (e.g. storage systems or appliances). My opinion is that, as with many things about storing, processing or moving binary digital data (e.g. 1s and 0s), the answer is not always clear. That is, there may not be a right or wrong answer; instead, it depends on the situation, use or perhaps abuse scenario. For some applications or vendors, adding SSDs packaged in HDD form factors to existing storage systems, arrays and appliances makes perfect sense; likewise, for others it does not. Thus, it depends (more on that in a bit). While we are talking about SSD, Ed Haletky (aka @texiwill) recently asked a related question of Fix the App or Add Hardware, which could easily be morphed into a discussion of Fix the SSD or Add Hardware. Hmm, maybe a future post idea exists there.

 

Let's take a step back for a moment and look at the bigger picture of what prompts the question of which type of SSD to use where and when, as well as why various vendors want you to look at things a particular way. There are many options for using SSD packaged in various ways to meet diverse needs, including here and here (see figure 1).

 

Various SSD packaging options
Figure 1: Various packaging and  deployment options for SSD

 

The growing number of startup and established vendors with SSD enabled storage solutions vying to win your hearts, minds and budget is looking like the annual NCAA basketball tournament (aka March Madness and march metrics, here and here). Some vendors have added or are adding SSDs with SAS or SATA interfaces that plug into existing enclosures (drive slots). These SSDs have the same form factor as a 2.5 inch small form factor (SFF) or 3.5 inch HDD, with a SAS or SATA interface for physical and connectivity interoperability. Other vendors have added PCIe based SSD cards to their storage systems or appliances, as a cache (read, or read and write) or as a target device, similar to how these cards are installed in servers.

 

Simply adding SSD, either in a drive form factor or as a PCIe card, to a storage system or appliance is only part of a solution. Sure, the hardware should be faster than a traditional spinning HDD based solution. However, what differentiates the various approaches and solutions is what is done with the storage system or appliance software (aka operating system, storage applications, management, firmware or microcode).

 

So are SSD based storage systems, arrays and appliances a bad  idea?

 

If you are a startup or established vendor able to start from scratch with a clean sheet design, not having to worry about interoperability and customer investment protection (technology, people skills, software tools, etc.), then you would want to do something different. For example, leverage off the shelf components such as a PCIe flash SSD card in an industry standard server, combined with your software, for a solution. You could also use extra DRAM memory in those servers combined with PCIe flash SSD cards, perhaps even with embedded HDDs as a backing or preservation medium.

 

Other approaches might use a mix of DRAM and PCIe flash cards, as either a cache or a target, combined with some drive form factor SSDs. In other words, there is no right or wrong approach; sure, there are different technical merits that have advantages for various applications or environments. Likewise, people have preferences, particularly the technology focused, who tend to like one approach vs. another. Thus, we have many options to leverage, use or abuse.

 

In his post, Robin asks a good question: if nand flash SSD were being put into a new storage system, why not use the PCIe backplane vs. nand flash on DIMMs vs. drive formats, all of which are different packaging options (Figure 1)? Some startups have gone the all backplane approach, some have gone with the drive form factor, some have gone with a mix, and some even use HDDs in the background. Likewise, some traditional storage system and array vendors who support a mix of SSD and HDD drive form factor devices also leverage PCIe cards, either as a server-based cache (e.g. EMC VFCache) or installed as a performance accelerator module (e.g. NetApp PAM) in their appliances.

 

While most vendors who put SSD drive form factor devices into their storage systems or appliances (or servers for that matter) use them as data targets for creating LUNs or file systems, others use them for internal functionality. By internal functionality I mean that instead of the SSDs appearing as another drive or target, they are used exclusively by the storage system or appliance for caching or similar purposes. On storage systems, this can be to increase the size of persistent cache, such as EMC on the CLARiiON and VNX (e.g. FAST Cache). Another use is on backup or dedupe target appliances, where SSDs store dictionary, index or metadata repositories as opposed to being a general data pool.
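To illustrate the internal caching role (as opposed to exposing the SSD as a LUN), here is a hypothetical, highly simplified sketch: the SSD sits invisibly in front of the HDD pool and serves repeat reads. Real implementations such as FAST Cache are far more sophisticated (write handling, promotion policies, persistence), and all names here are made up for illustration.

```python
# Simplified SSD-as-read-cache sketch: the cache is internal to the
# array, not a visible target. FIFO discard stands in for a real
# eviction policy.

from collections import OrderedDict

class SSDReadCache:
    def __init__(self, backend, capacity):
        self.backend = backend          # dict standing in for the HDD pool
        self.capacity = capacity        # number of blocks the SSD can hold
        self.cache = OrderedDict()      # block id -> data, insertion ordered
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1              # served from SSD, HDDs untouched
            return self.cache[block]
        self.misses += 1
        data = self.backend[block]      # slow path: fetch from HDD pool
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # discard oldest cached block
        self.cache[block] = data        # promote the block into the SSD
        return data

hdd = {n: f"data-{n}" for n in range(10)}
cache = SSDReadCache(hdd, capacity=2)
for block in (1, 1, 2, 3, 1):
    cache.read(block)
print(cache.hits, cache.misses)  # 1 4
```

From the host's point of view nothing changed except latency, which is exactly the "invisible" value-add distinction drawn above between SSD as a target and SSD as internal functionality.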

 

Part two of this post looks at the benefits and caveats of SSD in storage arrays.

 

Here are some related links to  learn more about SSD, where and when to use what:
  Why SSD based arrays and storage appliances can be a good idea (Part II)
  IT and storage economics 101,  supply and demand
  Researchers and marketers don't agree on future of nand flash  SSD
  Speaking of speeding up business  with SSD storage
  EMC VFCache respinning SSD and  intelligent caching (Part I)
  EMC VFCache respinning SSD and intelligent caching (Part II)
  SSD options for Virtual (and Physical) Environments: Part I  Spinning up to speed on SSD
  SSD options for Virtual (and Physical) Environments, Part  II: The call to duty, SSD endurance
  SSD options for Virtual (and Physical) Environments Part  III: What type of SSD is best for you?

 

Ok, nuff said for now, check part II.

 

Cheers gs