
With March 31st as World Backup Day, hopefully some will keep recovery and restoration in mind so as not to be fooled on April 1st.

Lost data


When it comes to protecting data, the threat may not be a headline-news disaster such as an earthquake, fire, flood, hurricane or act of man; it may be something as simple as accidentally overwriting a file, not to mention a virus or other more likely problems. Depending upon who you ask, some will say that backup or saving data is more important, while others will stand by the position that recovery or restoration is what matters. Without one, the other is not practical; they need each other, and both need to be done as well as tested to make sure they work.


Just the other day I needed to restore a file that I accidentally overwrote, and as luck would have it, my bad local copy had also just overwritten my local backup. However, I was able to pull an earlier version from my cloud provider, which gave me a good opportunity to test and try some different things. In the course of testing, I found some things that have since been updated, as well as some things to optimize for the future.
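Restores are only trustworthy if they are verified. As a minimal sketch (not from the original post), here is one way to confirm that a restored file matches a checksum recorded at backup time, using only Python's standard library:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks
    so large files do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(recorded_checksum: str, restored: Path) -> bool:
    """A restore test passes only if the restored copy matches the
    checksum recorded when the backup was taken."""
    return sha256_of(restored) == recorded_checksum
```

Recording checksums alongside backups makes the "test your restores" advice above mechanical rather than a judgment call: a silently corrupted or stale copy fails the comparison immediately.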


Destroyed data


My opinion is that if not used properly, including ignoring best practices, any form of data storage medium or media as well as software could result in, or be blamed for, data loss. Some people have lost data as a result of using cloud storage services, just as other people have lost data or access to information on other storage mediums and solutions. For example, data has been lost on cloud, tape, Hard Disk Drives (HDDs), Solid State Devices (SSDs), Hybrid HDDs (HHDDs), RAID and non-RAID, local and remote, and even optical-based storage systems large and small. In some cases there have been errors or problems with the medium or media; in other cases storage systems have lost access to, or lost, data due to hardware, firmware, software, or configuration issues, including human error, among other causes.


Now is the time to start thinking about modernizing data protection, and that means more than simply swapping out media. Data protection modernization over the past several years has focused on treating the symptoms of downstream problems at the target or destination. This has involved swapping out or moving media around and applying data footprint reduction (DFR) techniques downstream to give near-term tactical relief, as has been the case with backup, restore, BC and DR for many years. The focus is starting to expand to addressing the source of the problem, which is an expanding data footprint upstream, using different data footprint reduction tools and techniques. This also means using different metrics, including keeping performance and response time in perspective as part of reduction rates vs. ratios, while leveraging different techniques and tools from the data footprint reduction toolbox. In other words, it's time to stop swapping out media like changing tires that keep going flat on a car: find and fix the problem, and change the way (and when) data is protected to cut the impact downstream.
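As a hedged illustration of the rates vs. ratios point: a reduction ratio (e.g. 10:1) and a reduction rate (e.g. 90%) describe the same change, and confusing the two overstates incremental gains. A small sketch:

```python
def reduction_ratio(before_bytes: int, after_bytes: int) -> float:
    """Reduction ratio, e.g. 10.0 means 10:1."""
    return before_bytes / after_bytes

def reduction_rate_percent(before_bytes: int, after_bytes: int) -> float:
    """Percentage of the original footprint eliminated."""
    return 100.0 * (before_bytes - after_bytes) / before_bytes

# 1000GB reduced to 100GB is a 10:1 ratio, which is a 90% rate;
# doubling the ratio to 20:1 only improves the rate to 95%.
```

The diminishing returns past the first few turns of the ratio are one reason to weigh performance and response time alongside headline reduction numbers.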


Here is a link to a free download of chapter 5 (Data Protection: Backup/Restore and Business Continuance / Disaster Recovery) from my new book Cloud and Virtual Data Storage Networking (CRC Press).




Additional related links and sources of information:

Choosing the Right Local/Cloud Hybrid Backup for SMBs
E2E Awareness and insight for IT environments
Poll: What Do You Think of IT Clouds?
Convergence: People, Processes, Policies and Products
What do VARs and Clouds as well as MSPs have in common?
Industry adoption vs. industry deployment, is there a difference?
Cloud conversations: Loss of data access vs. data loss
Clouds and Data Loss: Time for CDP (Commonsense Data Protection)?
Clouds are like Electricity: Don't be scared
Wit and wisdom for BC and DR
Criteria for choosing the right business continuity or disaster recovery consultant
Local and Cloud Hybrid Backup for SMBs
Is cloud disaster recovery appropriate for SMBs?
Laptop data protection: A major headache with many cures
Disaster recovery in the cloud explained
Backup in the cloud: Large enterprises wary, others climbing on board
Cloud and Virtual Data Storage Networking (CRC Press, 2011)
Enterprise Systems Backup and Recovery: A Corporate Insurance Policy


Take a few minutes out of your busy schedule and check to see if your backups and data protection are working, and make sure to test restoration and recovery to avoid an April Fools' type surprise. One last thing: you might want to check out the data storage prayer while you are at it.


Ok, nuff said for now.


Cheers gs

A news story about the school board in Marshall, Missouri approving data storage plans, in addition to getting good news on health insurance rates, just came into my inbox.


I do not live in or anywhere near Marshall, Missouri, as I live about 420 miles north in the Stillwater, Minnesota area.


What caught my eye about the story is the dollar amount ($52,503) and capacity amount (14.4TByte) for the new Marshall school district data storage solution to replace their old, almost full 4.8TByte system.


That prompted me to wonder if the school district is getting a really good deal (if so, congratulations), paying too much, or paying about right.


Industry Trends and Perspectives


Not knowing what type of storage system they are getting, it is difficult to know what type of value the Marshall school district is getting with their new solution. For example, what type of performance and availability in addition to capacity? What type of system and features, such as snapshots, replication, data footprint reduction (aka DFR) capabilities (archive, compression, dedupe, thin provisioning), backup, cloud access, redundancy for availability, application agents or integration, virtualization support, and tiering? Is the 14.4TByte total (raw) or usable storage capacity, and does it include two storage systems for replication? What type of drives (SSD, fast SAS HDDs, or high-capacity SAS or SATA HDDs), block (iSCSI, SAS or FC), NAS (CIFS and NFS) or unified access, and what management software and reporting tools, not to mention service and warranty?
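For a rough sense of scale, the article's numbers work out to about $3,646 per raw TByte; how that changes once data protection overhead is factored in depends on unknowns such as RAID level. The protection schemes below are hypothetical, purely for illustration:

```python
def cost_per_tb(total_cost: float, capacity_tb: float) -> float:
    """Simple cost-per-capacity metric: dollars per TByte."""
    return total_cost / capacity_tb

price_usd = 52503.0   # from the news story
raw_tb = 14.4         # from the news story

# Hypothetical protection overheads -- the story does not say which applies.
per_raw_tb = cost_per_tb(price_usd, raw_tb)            # if 14.4TB is raw
per_tb_raid5 = cost_per_tb(price_usd, raw_tb * 8 / 9)  # if 8+1 parity applies
per_tb_mirror = cost_per_tb(price_usd, raw_tb / 2)     # if mirrored/replicated
```

Note how the same purchase looks roughly twice as expensive per usable TByte under mirroring as under a raw-capacity comparison, which is exactly why cost per capacity alone is a weak basis for judging value.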


Sure, there are less expensive solutions that might work; however, since I do not know what their needs and wants are, saying they paid too much would not be responsible. Likewise, not knowing their needs vs. wants, requirements, growth and application concerns, and given that there are solutions that cost a lot more with extensive capabilities, saying that they got the deal of the century would also not be fair. Maybe somewhere down the road we will hear some vendor or VAR make a press release announcement about their win in taking out a competitor from the Marshall school district, or perhaps that they upgraded a system they previously sold, so we can all learn more.

With school districts across the country trying to stretch their budgets to go further while supporting growth, it would be interesting to hear more about what type of value the Marshall school district is getting from their new storage solution. Likewise, it would also be interesting to hear what alternatives they looked at that were more expensive, as well as cheaper but with less functionality. I'm guessing some of the cloud crowd cheerleaders will also want to know why the school district is going the route they are vs. going to the cloud.


IMHO, value is not the same thing as lower cost or cheaper; instead, it's the benefit derived vs. what you pay. This means that something might cost more than a cheaper alternative; however, if I get more benefit from what might be more expensive, then it has more value.




If you are a school district of similar size, what criteria or requirements would you want as opposed to need, and then what would you do or have you done?


What if you are a commercial or SMB environment? Again, not knowing the feature, functionality and benefit being obtained, what requirements would you have, including want-to-have (e.g. nice to have) vs. must-have (e.g. what you are willing to pay more for), and what would you do or have done?

How about if you were a cloud or managed service provider (MSP) or a VAR representing one of the many services, what would your pitch and approach be beyond simply competing on a cost per TByte basis?

Or if you are a vendor or VAR facing a similar opportunity, again not knowing the requirements, what would you recommend a school district or SMB environment do, why, and how would you cost justify it?


What this all means to me is the importance of looking beyond lowest cost or cost per capacity (e.g. cost per GByte or TByte), and also factoring in value: the feature, functionality and benefit obtained.


Ok, nuff said for now, I need to get my homework assignments done.


Cheers gs

My two most recent books, The Green and Virtual Data Center and Cloud and Virtual Data Storage Networking, both published by CRC Press/Taylor and Francis, have been added to the Intel Recommended Reading List for Developers.


Intel Recommended Reading


If you are not familiar with the Intel Recommended Reading List for Developers, it is a leading comprehensive list of books across various technology domains covering hardware, software, servers, storage, networking, facilities, management, development and more.




So what are you waiting for? Check out the Intel Recommended Reading List for Developers, where you can find a diverse lineup of books, of which I'm honored to have two of mine join the esteemed list. Here is a link to a free chapter download from Cloud and Virtual Data Storage Networking.


Ok, nuff said for now.


cheers gs

This is the second of a two-part post about why storage arrays and appliances with SSD drives can be a good idea; here is a link to the first post.


So again, why would putting drive form factor SSDs into existing storage systems, arrays and appliances be a bad idea?


Benefits of SSD drives in storage systems, arrays and appliances:

  • Familiarity with customers who buy and use these devices
  • Reduces time to market, enabling customers to innovate via deployment
  • Establish comfort and confidence with SSD technology for customers
  • Investment protection of currently installed technology (hardware and software)
  • Interoperability with existing interfaces, infrastructure, tools and policies
  • Reliability, availability and serviceability (RAS), depending on vendor implementation
  • Features and functionality (replicate, snapshot, policy, tiering, application integration)
  • Known entity in terms of hardware, software, firmware and microcode (good or bad)
  • Share SSD technology across more servers or accessing applications
  • Good performance, assuming no controller, hardware or software bottlenecks
  • Wear leveling and other SSD flash management, if implemented
  • Can end performance bottlenecks if backend (drives) are a problem
  • Can coexist with or complement server-based SSD caching


Note that the mere presence of SSD drives in a storage system, array or appliance will not guarantee the above items, nor that they will be enabled to their full potential. Different vendors and products implement SSD drive support to various degrees of extensibility, so look beyond the checkbox of features and functionality. Dig in and understand how extensive and robust the SSD implementation is to meet your specific requirements.


Caveats of SSD drives in storage systems, arrays and appliances:

  • May not use the full performance potential of nand flash SLC technology
  • Latency can be an issue for those who need extreme speed or performance
  • May not be the most innovative, newest technology on the block
  • Fun for startup vendors, marketers and their fans to poke fun at
  • Not all vendors add value or optimization for SSD drive endurance
  • Seen as not being technologically advanced vs. legacy or mature systems


Note that different vendors will have various performance characteristics: some good for IOPS, others for bandwidth or throughput, and others for latency or capacity. Look at different products to see how they vary to meet your particular needs.


Cost comparisons are tricky. SSDs in HDD form factors certainly cost more than raw flash dies; however, PCIe cards and FTL (flash translation layer) controllers also cost more than flash chips by themselves. In other words, apples-to-apples comparisons are needed. In the future, ideally the baseboard or motherboard vendors will revise the layout to support nand flash (or its replacement) with DRAM DIMM type modules, along with the associated FTL and BIOS to handle the flash program/erase (P/E) cycles and wear leveling management, something that DRAM does not have to encounter. While that provides great location or locality of reference (figure 1), it is also a more complex approach that takes time and industry cooperation.
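As a sketch of the apples-to-apples point: normalizing price by usable capacity after controller/FTL overhead (over-provisioning, spare area) rather than by raw flash puts different packaging options on the same footing. All prices and overhead fractions below are made up for illustration:

```python
def dollars_per_usable_gb(price_usd: float, raw_gb: float,
                          overhead_fraction: float) -> float:
    """Normalize price by the capacity left after over-provisioning
    and spare area, so packaging options compare apples to apples."""
    usable_gb = raw_gb * (1.0 - overhead_fraction)
    return price_usd / usable_gb

# Illustrative numbers only -- not actual product pricing or overheads.
drive_ssd = dollars_per_usable_gb(400.0, 200.0, 0.07)   # SAS/SATA drive form factor
pcie_card = dollars_per_usable_gb(2400.0, 768.0, 0.28)  # PCIe card, heavier over-provisioning
```

Comparing raw-flash prices against finished-device prices (or raw capacity against usable capacity) mixes these bases and is where many cost arguments go wrong.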


Figure 1: Locality of reference for memory and storage


Certainly, for best performance, just like in realty, location matters, and thus locality of reference comes into play. That is, put the data as close to the server as possible; however, when sharing is needed, a different approach or a companion technique is required.


Here are some general thoughts about SSD:

  • Some customers and organizations get the value and role of SSD
  • Some see where SSD can replace HDD, others see where it complements
  • Yet others are seeing the potential, however are moving cautiously
  • For many environments, better than current performance is good enough
  • Environments with the need for speed need every bit of performance they can get
  • Storage systems and arrays or appliances continue to evolve, including the media they use
  • Simply looking at how some storage arrays, systems and appliances have evolved, you can get an idea of how they might look in the future, which could include not only SAS as a backend or target, but also PCIe. After all, it was not that long ago that backend drive connections went from proprietary to open parallel SCSI or SSA to Fibre Channel loop (or switched) to SAS.
  • Engineers and marketers tend to gravitate to newer products and nand technology, which is good, as we need continued innovation on that front.
  • Customers and business people tend to gravitate toward deriving the greatest value out of what is there for as long as possible.
  • Of course, both of the latter two points are not always the case and can be flip-flopped.
  • Ultrahigh-end environments and corner case applications will continue to push the limits and are target markets for some of the newer products and vendors.
  • Likewise, enterprise, midmarket and other mainstream environments (outside of their corner case scenarios) will continue to push known technology to its limits as long as they can derive some business benefit and value.


While not perfect, SSDs in an HDD form factor with a SAS or SATA interface, properly integrated by vendors into storage systems (or arrays or appliances), are a good fit for many environments today. Likewise, for some environments, new from-the-ground-up SSD based solutions that leverage flash DIMMs or daughter cards or PCIe flash cards are a fit. So too are PCIe flash cards, either as a target, or as cache to complement storage systems (arrays and appliances). Certainly, drive slots in arrays take up space for SSD; however, so too does occupying PCIe space, particularly in high-density servers that require every available socket and slot for compute and DRAM memory. Thus, there are pros and cons, features and benefits of various approaches, and which is best will depend on your needs and perhaps preferences, which may or may not be binary.


I agree that for some applications and solutions, non-drive form factor SSDs make sense, while in others, compatibility has its benefits. Yet in other situations, nand flash such as SLC combined with HDD and DRAM, tightly integrated as in my Momentus XT HHDD, is good for laptops, however probably not a good fit for the enterprise yet. Thus, SSD options and placements are not binary; of course, sometimes opinions and perspectives will be.


For some situations, PCIe-based cards in servers or appliances make sense, either as a target or as cache. Likewise, for other scenarios, drive format SSDs make sense in servers and storage systems, appliances, arrays or other solutions. Thus, while all of those approaches are used for storing binary digital data, the decision of what to use when and where often will not be binary, that is, unless your approach is to use one tool or technique for everything.


Here are some related links to learn more about SSD, where and when to use what:
  Why SSD based arrays and storage appliances can be a good idea (Part I)
  IT and storage economics 101, supply and demand
  Researchers and marketers don't agree on future of nand flash SSD
  Speaking of speeding up business with SSD storage
  EMC VFCache respinning SSD and intelligent caching (Part I)
  EMC VFCache respinning SSD and intelligent caching (Part II)
  SSD options for Virtual (and Physical) Environments: Part I Spinning up to speed on SSD
  SSD options for Virtual (and Physical) Environments, Part II: The call to duty, SSD endurance
  SSD options for Virtual (and Physical) Environments Part III: What type of SSD is best for you?


Ok, nuff said for now.


Cheers gs

This is the first of a two-part series, you can read part II here.


Robin Harris (aka @storagemojo) recently asked a question in a blog post, arguing that solid state devices (SSDs) using a SAS or SATA interface in traditional hard disk drive (HDD) form factors are a bad idea in storage arrays (e.g. storage systems or appliances). My opinion is that, as with many things about storing, processing or moving binary digital data (e.g. 1s and 0s), the answer is not always clear. That is, there may not be a right or wrong answer; instead it depends on the situation, use or perhaps abuse scenario. For some applications or vendors, adding SSDs packaged in HDD form factors to existing storage systems, arrays and appliances makes perfect sense; likewise, for others it does not, thus it depends (more on that in a bit). While we are talking about SSD, Ed Haletky (aka @texiwill) recently asked a related question of Fix the App or Add Hardware, which could easily be morphed into a discussion of Fix the SSD, or Add Hardware. Hmmm, maybe a future post idea exists there.


Let's take a step back for a moment and look at the bigger picture of what prompts the question of what type of SSD to use where and when, as well as why various vendors want you to look at things a particular way. There are many options for using SSD that is packaged in various ways to meet diverse needs, including here and here (see figure 1).


Figure 1: Various packaging and  deployment options for SSD


The growing number of startup and established vendors with SSD enabled storage solutions vying to win your hearts, minds and budget is looking like the annual NCAA basketball tournament (aka March Madness and march metrics here and here). Some vendors have added, or are adding, SSDs with SAS or SATA interfaces that plug into existing enclosures (drive slots). These SSDs have the same form factor as a 2.5 inch small form factor (SFF) or 3.5 inch HDD, with a SAS or SATA interface for physical and connectivity interoperability. Other vendors have added PCIe based SSD cards to their storage systems or appliances as a cache (read, or read and write) or a target device, similar to how these cards are installed in servers.


Simply adding SSD, either in a drive form factor or as a PCIe card, to a storage system or appliance is only part of a solution. Sure, the hardware should be faster than a traditional spinning HDD based solution. However, what differentiates the various approaches and solutions is what is done with the storage system's or appliance's software (aka operating system, storage applications, management, firmware or microcode).


So are SSD based storage systems, arrays and appliances a bad  idea?


If you are a startup or established vendor able to start from scratch with a clean sheet design, not having to worry about interoperability and customer investment protection (technology, people skills, software tools, etc.), then you would want to do something different. For example, leverage off-the-shelf components such as a PCIe flash SSD card in an industry standard server combined with your software for a solution. You could also use extra DRAM memory in those servers combined with PCIe flash SSD cards, perhaps even with embedded HDDs as a backing or preservation medium.


Other approaches might use a mix of DRAM and PCIe flash cards, as either a cache or target, combined with some drive form factor SSDs. In other words, there is no right or wrong approach; sure, there are different technical merits that have advantages for various applications or environments. Likewise, people have preferences, particularly the technology focused, who tend to like one approach vs. another. Thus, we have many options to leverage, use or abuse.


In his post, Robin asks a good question: if nand flash SSD were being put into a new storage system, why not use the PCIe backplane vs. nand flash on DIMMs vs. drive formats, all of which are different packaging options (Figure 1)? Some startups have gone the all-backplane approach, some have gone with the drive form factor, some have gone with a mix, and some even use HDDs in the background. Likewise, some traditional storage system and array vendors who support a mix of SSD and HDD drive form factor devices also leverage PCIe cards, either as a server-based cache (e.g. EMC VFCache) or installed as a performance accelerator module (e.g. NetApp PAM) in their appliances.


While most vendors who put SSD drive form factor drives into their storage systems or appliances (or servers for that matter) use them as data targets for creating LUNs or file systems, others use them for internal functionality. By internal functionality I mean that instead of the SSDs appearing as another drive or target, they are used exclusively by the storage system or appliance for caching or similar purposes. On storage systems, this can be to increase the size of persistent cache, such as EMC does on the CLARiiON and VNX (e.g. FAST Cache). Another use is on backup or dedupe target appliances, where SSDs are used to store dictionary, index or metadata repositories as opposed to being a general data pool.


Part two of this post looks at the benefits and caveats of SSD in storage arrays.


Here are some related links to learn more about SSD, where and when to use what:
  Why SSD based arrays and storage appliances can be a good idea (Part II)
  IT and storage economics 101, supply and demand
  Researchers and marketers don't agree on future of nand flash SSD
  Speaking of speeding up business with SSD storage
  EMC VFCache respinning SSD and intelligent caching (Part I)
  EMC VFCache respinning SSD and intelligent caching (Part II)
  SSD options for Virtual (and Physical) Environments: Part I Spinning up to speed on SSD
  SSD options for Virtual (and Physical) Environments, Part II: The call to duty, SSD endurance
  SSD options for Virtual (and Physical) Environments Part III: What type of SSD is best for you?


Ok, nuff said for now, check part II.


Cheers gs