
Doing more with less, doing more with what you have, or simply reducing cost has been the mantra for the past several years now.


Does that mean these approaches are being adopted as the new way of doing business, or are they simply a cycle or temporary situation?


The reality is that many if not most IT organizations are, and will remain, under pressure to stretch their budgets further for the immediate future. Over the past year or two some organizations saw increases in their budgets along with increased demand, while others saw budgets held flat or reduced while still having to support growth. Meanwhile, there is no such thing as an information recession: more data is being generated, moved, processed, stored and retained for longer periods of time.


Industry trend: No such thing as a data recession


Something has to give, as shown in the following figure: one curve shows continued demand and growth, another shows the need to reduce costs, and a third reflects the importance of maintaining or enhancing service level objectives (SLOs) and quality of service (QoS).


Enable growth while removing complexity and cost without compromising service levels


One way to reduce costs is to inhibit growth; another is to support growth by sacrificing QoS, including performance, response time or availability, as a result of over consolidation, excessive utilization or instability from stretching resources too far. Where innovation comes into play is finding and fixing problems vs. moving or masking them, or treating symptoms vs. the real issue and challenge. Innovation also comes into play in identifying both near term tactical as well as longer term strategic means of taking complexity and cost out of service delivery and the resources needed to support it.


For example, determine the different resources and processes involved in delivering an email box of a given size and reliability. Another example is supporting a virtual machine (VM) with a given performance and capacity capability. Yet another scenario is a file share or home directory of a specific size and availability. By streamlining workflows, leveraging automation and other tools to enforce policies, and adopting new best practices, complexity and thereby costs can be reduced. The net result is a lower cost to provide a given service at a specific level, which, when multiplied out over many users or instances, results in cost savings as well as productivity gains.


The above is all well and good for the longer term strategic picture and where you want to get to, however what can be done right now, today?


Here are a few tips for doing more with what you have while supporting growth demands:

If you have service level agreements (SLAs) and SLOs as part of your service catalog, review with your users what they need vs. what they would like to have. What you may find is that your users expect a given level of service, yet would be happy moving to a cloud service with lower SLO and SLA expectations if it costs less. That scenario would be an indicator that you are giving users a higher level of service than they actually require. On the other hand, if you do not have SLOs and SLAs aligned with the cost of your services, set them up and review customer or client expectations, needs vs. wants, on a regular basis. You might find that you can stretch your budget by delivering a lower (or higher) class of service to meet different user requirements than what was assumed to be the case. In the case of supporting a better class of service, if you can use an SSD enabled solution to reduce latency or wait times and boost productivity, more transactions, page views or revenue per hour, that could prompt a client to request that capability to meet their business needs.


Reduce your data footprint impact in order to support growth using the ABCDs of data footprint reduction (DFR): Archive (email, file, database), Backup modernization, Compression and consolidation, Data management and dedupe, along with storage tiering among other techniques.


Pursue storage and server virtualization and optimization, using capacity consolidation where practical and I/O consolidation to fast storage and SSD where possible. Also review storage configuration, including RAID and allocation, to identify whether any relatively easy changes can improve performance, availability, capacity and energy impact.


Investigate available upgrades and enhancements to your existing hardware, software and services that can be applied to provide breathing room within current budgets while evaluating new technologies.


Find and fix problems vs. chasing false positives that provide near term relief only to have the real issue reappear. Maximize your budget by identifying where people time and other resources are being spent due to processes, workflows, technology configuration complexity or bottlenecks, and address those.


Enhance and leverage existing management measurements to gain more insight, along with implementing new metrics for end to end (E2E) situational awareness of your environment, which will enable effective decision making. For example, you may be told to move some function to the cloud because it will be cheaper, yet if you do not have metrics to indicate one way or the other, how can that be an informed decision? If you have metrics that show your cost for the same service being moved to a cloud or managed service provider, as well as QoS, SLO, SLA, RTO, RPO and other TLAs, then you can make informed decisions. That decision may still be to move functions to a cloud or other service even if it is in fact more expensive than what you can provide it for, so that your resources can be directed to supporting other important internal functions.
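As a sketch of the kind of metric that supports such a decision, here is a minimal cost-per-service-unit comparison. All the dollar figures and the mailbox example are illustrative placeholders, not real pricing:

```python
# Hypothetical sketch: compare the fully loaded cost of delivering a service
# in house vs. via a cloud or managed service provider. All numbers below
# are illustrative placeholders, not actual pricing.

def cost_per_unit(fixed_monthly, variable_per_unit, units):
    """Fully loaded monthly cost per service unit (e.g. per mailbox or per GB)."""
    return fixed_monthly / units + variable_per_unit

# In house: hardware, software, power, admin time spread over 5,000 mailboxes
internal = cost_per_unit(fixed_monthly=12_000, variable_per_unit=0.40, units=5_000)

# Cloud provider: mostly a flat per mailbox fee, little fixed cost to you
cloud = cost_per_unit(fixed_monthly=0, variable_per_unit=3.10, units=5_000)

print(f"internal: ${internal:.2f}/mailbox  cloud: ${cloud:.2f}/mailbox")
```

With numbers like these in hand, the decision (stay, move, or move anyway to free up staff for other internal work) becomes informed rather than assumed; the same comparison should also factor in the QoS, SLO and SLA differences discussed above.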


Look for ways to reduce the cost of a service delivered as opposed to simply cutting costs. They sound like one and the same; however, if you have metrics and measurements providing the situational awareness to know what the cost of a service is, you can then look at how to streamline those services, remove complexity, reduce workflow and leverage automation, thereby removing cost. The goal is the same, however how you go about removing cost can have an impact on your return on innovation, not to mention customer satisfaction.


Also be an informed shopper: have a forecast or plan for what you will need and when, along with what you must have (core requirements) vs. what you would like to have or want. When looking at options, balance what is needed, and then see whether you can get what you want for little or no extra cost where it adds value or enables other initiatives. Part of being an informed shopper is having the support of the business to be able to procure what you want or need, which means aligning technology resources and their cost to the delivery of business functions and services.


What you need vs. what you want
In a recent interview with the Associated Press (AP), the reporter wanted to know my comments about spending vs. saving during economically tough times (you can read the story here). Basically my comments were to spend within your means by identifying what you need vs. what you want, what is required to keep the business running or improve productivity and remove cost, as opposed to acquiring nice to have things that can wait. Sure, I would like to have a new 85 to 120" 3D monitor for my workstation that could double as a TV, however I do not need or require it.


On the other hand, I recently upgraded an existing workstation, adding a Hybrid Hard Disk Drive (HHDD) and some additional memory, about a $200 USD investment that is already paying for itself via increased productivity. That is, instead of enjoying a cup of Dunkin' Donuts coffee while waiting for some tasks to complete on that system, I'm able to get more done in a given amount of time, boosting productivity.


For IT environments this means looking at expenditures to determine what is needed or required to keep things running while supporting near term strategic and tactical initiatives or pet projects.


For vendors and VARs, if things have not been a challenge yet, they will now need to refine their messages to show more value and return on innovation (ROI) in terms of how to help their customers or prospects stretch resources (budgets, people, skill sets, products, services, licenses, power and cooling, floor space) further to support growth while removing costs without compromising on service delivery. This also means a shift in thinking from short term or tactical cost cutting to longer term strategic approaches of reducing the cost to deliver a service or resource.


Here are some related links pertaining to stretching your resources, doing more with what you have, increasing productivity and maximizing your budget to support growth without compromising on customer service:


Saving Money with Green IT: Time To Invest In Information Factories
Storage Efficiency and Optimization – The Other Green
Shifting from energy avoidance to energy efficiency
Saving Money with Green Data Storage Technology
Green IT Confusion Continues, Opportunities Missed!
PUE, Are you Managing Power, Energy or Productivity?
Cloud and Virtual Data Storage Networking
Is There a Data and I/O Activity Recession?
More Data Footprint Reduction (DFR) Material


What is your take?


Are you and your company going into spending freeze mode, or are you still spending, however with constraints placed on discretionary spending?


How are you stretching your IT budget to go further?


Ok, nuff said for now.


Cheers gs

Here is a link to a recent guest post that I was invited to do over at The Virtualization Practice (TVP) pertaining to the recent VMware vSphere 5.0 announcement.


A theme of the vSphere 5.0 launch is reducing complexity, enabling automation, and supporting scaling with confidence for cloud and virtual environments. As a key component for supporting cloud, virtual and dynamic infrastructure environments, vSphere V5.0 includes many storage related enhancements and new features including Storage Distributed Resource Scheduler (SDRS).


Read more here.


Ok, nuff said for now.


Cheers gs

Here is a link to a recent guest post that I was invited to do over at The Virtualization Practice (TVP) pertaining to Getting SASsy, the other shared server to storage interconnect for disk and SSD systems. Serial Attached SCSI (SAS) is better known as an interface for connecting hard disk drives (HDD) to servers and storage systems; however it is also widely used for attaching storage systems to physical as well as virtual servers.


An important storage requirement for virtual machine (VM) environments with more than one physical machine (PM) server is shared storage. SAS has become a viable interconnect along with other Storage Area Network (SAN) interfaces including Fibre Channel (FC), Fibre Channel over Ethernet (FCoE) and iSCSI for block access.


Read more here.


Ok, nuff said for now.


Cheers gs

A couple of weeks ago I traveled down to Orlando, Florida for a few days to attend the fall 2011 SNW (Storage Networking World), produced jointly by IDG Computerworld and the Storage Networking Industry Association (SNIA).




While at the Orlando event, SNIA executive director Leo Legar asked me how many SNWs I had attended, and my response was: on which continent?

My answer was partly in fun however also serious, as I have been attending SNWs (in addition to other SNIA events) for over ten years in both North and South America as well as in Europe, including presenting SNIA tutorials and SNW sessions.


SNW is always good for meeting up with old friends and acquaintances along with making new ones, including twitter tweeps (hashtag #snwusa #snw2011 @sniacloud @snwusa), and the recent event was no exception. Granted, SNW is smaller than it was during its peak in the mid 2000s, however it was great to go for a couple of days of meetings, checking out the expo hall and some sessions, as well as getting out and about meeting people involved with servers, storage, networking, virtualization, cloud, hardware, software and services.


SNW remains, as its name implies (Storage Networking World), an event about networking as in conversations, learning, knowledge exchange, information gathering and meetings, not to mention the hands on lab. I found the two days I was there adequate for the meetings and other activities I had planned, along with time for impromptu ones. Another observation: during the peak of the large mega SNW events, while there were more meetings, they were also much shorter, along the lines of speed dating, vs. those a couple of weeks ago where there was time to have quality conversations.


SNIA Emerald Program


Some of the news at the recent SNW event involved SNIA and their Green Storage Initiative (GSI) announcing the availability of the Emerald program Green IT storage energy metrics that have been in the works for several years. The SNIA Emerald program consists of specifications, taxonomies, metrics and measurement standards to gauge various types of storage power or energy usage and its effectiveness. In other words, yes, Green IT and Green storage are still alive; they just are not as trendy to talk about as they were a few years ago, with a shift in focus towards productivity, effective use and supporting growth to help close the green gap and the missed IT as well as business opportunities.


Also during the recent SNW event, I did a book signing sponsored by SNIA. If you have not done so, check out the SNIA Cloud Storage Initiative (CSI), who arranged for several copies of my new book Cloud and Virtual Data Storage Networking to be given away. Book signings are fun in that I get to meet lots of people and hear what they are doing, encountering, looking for, have done, or are concerned or excited about. It was handy having SNIA CSI material available at the table as I was signing books and visiting with people, to be able to give them information about things such as CDMI, not to mention hearing what they were doing or looking for. Note to SNIA: if we do this again, let's make sure to have someone from the CSI at the table to join in the fun and conversations, as there were some good ones. Learn more about the activities of the SNIA CSI, including their Cloud Data Management Interface (CDMI), here.


SNIA Cloud Storage Initiative (CSI)


Thanks again to SNIA for arranging the book signing event. For those who were not able to get a copy of my new book before they ran out, my publisher CRC Press Taylor and Francis has arranged a special SNIA and SNW discount code. To take advantage of it, go to the CRC Press web site (here) and apply the discount code KVK01 during checkout for catalog item K12375 (ISBN: 9781439851739).


30 percent discount code for Cloud and Virtual Data Storage Networking Book


Thanks again to Wayne Adams (@wma01606), Leo Legar and Michael Meleedy among others who arranged a fantastic fall 2011 SNW event, along with everyone who participated in the book signing and other conversations while in Orlando, and to those who were involved virtually via twitter.


Ok, nuff said for now


Cheers gs

It has been a busy fall 2011 which started out with VMworld 2011 in Las Vegas just before the labor day weekend.


At the CXI party in Vegas during VMworld, standing with the NEXUS vMonster; the Las Vegas Strip from the CXI party during VMworld with Karen of Arcola
Scenes from the CXI party (@cxi) at VMworld 2011


Besides activity in support of the launch of my new book Cloud and Virtual Data Storage Networking (CRC Press), I have been busy with various client research, consulting and advisory projects. In addition to Las Vegas for VMworld, out and about travel for attending conferences and presenting seminars has included visits to Minneapolis (local), Nijkerk Holland and Denver (in the same week) and Orlando (SNW). Upcoming out and about events are scheduled for Los Angeles, Atlanta, Chicago, Seattle and a couple of trips to the San Jose area before the brief Thanksgiving holiday break.


My Sunday virtual office in Nijkerk before a busy week
Beer and bitterballen on the left, coffee machine in Nijkerk on the right


Brouwer Storage Consultancy seminar
Day one of the two day seminar in Nijkerk


Instead of automobiles lined up at a train station, it's bicycles in Nijkerk; waiting in Nijkerk for the 6:30AM train to Schiphol and on to Denver
Bicycles lined up at the Nijkerk train station, waiting for the 6:30 train to Schiphol


Changing trains in Amersfoort on the way to Schiphol; boarding a Delta A333 from AMS to MSP, then on to DEN
Changing trains on the way to Schiphol to board a flight to MSP and then on to DEN


Climbing out of Denver on the way back to MSP; it was a long yet fun week. Evening clouds enroute from DEN to MSP
After Denver, back to MSP for a few days before SNW in Orlando


While being out and about I have had the chance to meet and visit with many different people. Here are some questions and comments that I have heard while out and about:


  • What comes after cloud?
  • Are there standards for clouds and virtualization?
  • Should cost savings be the justification for going to cloud, virtual or dynamic environments?
  • How is big data different than traditional stream and flat file analytics and processing using tools such as SAS (Statistical Analysis Software)?
  • Is big data only about map reduce and hadoop?
  • Are clouds any less secure or safe for storage and applications?
  • Do clouds and virtualization remove complexity and simplify infrastructures?
  • Are cloud storage services cheaper than buying and managing your own?
  • Is object based storage a requirement for public or private cloud?
  • Do solution bundles such as EMC vBlock and NetApp FlexPods reduce complexity?
  • Why is FCoE taking so long to be adopted and is it dead?
  • Should cost savings be the basis for deciding to do a VDI or virtualization project?
  • What is the best benchmark or comparison for making storage decisions?


In addition, there continues to be plenty of cloud confusion, FUD and hype around public, private and hybrid, along with AaaS, SaaS, PaaS and IaaS among other XaaS. The myth that virtualization of servers, storage and workstations is only for consolidation continues. However, more people are beginning to see the next wave of life beyond consolidation, where the focus expands to flexibility, agility and speed of deployment for non-aggregated workloads and applications. Another popular myth that is changing is that data footprint reduction (DFR) is only about dedupe and backup. What is changing is an awareness that DFR spans all types of storage and data, from primary to secondary, leveraging different techniques including archive, backup modernization, compression, consolidation, data management and dedupe, along with thin provisioning among other techniques.


Archiving for email, database and file systems needs to be rescued from being perceived as only for compliance purposes. If you want or need to reduce your data footprint impact, optimize your storage for performance or capacity, enable backup, BC and DR to be performed faster, or achieve Green IT and efficiency objectives, expand your awareness around archiving. While discussing archiving, the focus is often on the target or data storage medium such as disk, tape, optical or cloud, along with DFR techniques such as compression and dedupe, or functionality including ediscovery and WORM. The other aspects of archive that need to be looked at include policies, retention, and application and software plugins for Exchange, SQL, Sharepoint, Sybase, Oracle, SAP, VMware and others.


Boot storms continue to be a common theme for applying solid state devices (SSD) in support of virtual desktop initiatives (VDI). There is however a growing awareness of, and discussion around, shutdown storms and day to day maintenance, including virus scans, in addition to applications that increase the number of writes. Consequently the discussions around VDI are expanding to include both reads and writes as well as reduced latency for storage and networks.


Some other general observations, thoughts and comments:


  • Getting into Holland as a visitor is easier than returning to the U.S. as a citizen
  • Airport security screening is more thorough and professional in Europe than in the U.S.
  • Hops add latency to beer (when you drink it) and to networks (time delay)
  • Fast tape drives need disk storage to enable streaming for reads and writes
  • SSD is keeping HDDs alive, HDDs are keeping tape alive, and all their roles are evolving while the technologies continue to evolve.
  • Hybrid Hard Disk Drives (HHDDs) are gaining in awareness and deployments in workstations as well as laptops.
  • Confusion exists around what flat layer 2 networks are for LANs and SANs
  • Click here to view additional comments and perspectives


Ok, nuff said for now


Cheers gs

Over the past several years I have done an annual post about IBM and their XIV storage system, and this is the fourth in what has become a series. You can read the first one here, the second one here, and last year's here and here after the announcement of the IBM V7000.


IBM recently announced the generation 3 (Gen3) version of XIV, along with releasing for the first time public performance comparison benchmarks using the Storage Performance Council (SPC) SPC2 throughput workload.


The XIV Gen3 is positioned by IBM as having up to four (4) times the performance of earlier generations of the storage system. In terms of speeds and feeds, the Gen3 XIV supports up to 180 2TB SAS hard disk drives (HDD), providing up to 161TB of usable storage capacity. For connectivity, the Gen3 XIV supports up to 24 8Gb Fibre Channel (8GFC) ports, or for iSCSI, 22 1Gb Ethernet (1GbE) ports, with a total of up to 360GBytes of system cache. In addition to the large cache to boost performance, other enhancements include leveraging multi core processors along with an internal InfiniBand network to connect nodes, replacing the former 1GbE interconnect. Note that InfiniBand is only used to interconnect the various nodes in the XIV cluster and is not used for attachment to application servers, which is handled via iSCSI and Fibre Channel.
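As a back of the envelope check on how 180 x 2TB drives become 161TB usable: XIV mirrors all data across its nodes, which halves the raw capacity, and the remaining gap is spare and metadata overhead. The overhead fraction below is my assumption to reconcile with the published 161TB figure, not an IBM number:

```python
# Rough sketch of XIV Gen3 usable capacity. XIV mirrors all data across
# nodes, so raw capacity is halved; the remaining gap down to the published
# 161TB usable figure is assumed here to be spare/metadata overhead.

drives = 180
drive_tb = 2.0

raw_tb = drives * drive_tb          # 360 TB raw
mirrored_tb = raw_tb / 2            # 180 TB after mirroring
overhead = 0.105                    # assumed spare/metadata fraction (not from IBM)
usable_tb = mirrored_tb * (1 - overhead)

print(f"raw={raw_tb:.0f}TB mirrored={mirrored_tb:.0f}TB usable~{usable_tb:.0f}TB")
```

The point of the arithmetic is simply that mirrored protection costs roughly half the raw capacity up front, which matters when comparing the XIV price per usable (vs. raw) GByte against RAID 5 based systems.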


IBM and SPC storage performance history
IBM has a strong history, if not a leading position in the industry, with benchmarking and workload simulation of their storage systems, including Storage Performance Council (SPC) among others. The exception for IBM over the past couple of years has been the lack of SPC benchmarks for XIV. Last year when IBM released their new V7000 storage system, benchmarks including SPC were available close to if not at the product launch. I have in the past commented about IBM's lack of SPC benchmarks for XIV to confirm their marketing claims, given their history of publishing results for all of their other storage systems. Now that IBM has released SPC2 results for the XIV, it is only fitting that I compliment them for doing so.


Benchmark brouhaha
Performance workload simulation results can often lead to apples and oranges comparisons, benchmark brouhaha battles or storage performance games. For example, a few years back NetApp submitted an SPC performance result on behalf of their competitor EMC. Now to be clear on something: I'm not saying that SPC is the best or definitive benchmark or comparison tool for storage or any other purpose, as it is not. However it is representative, and most storage vendors have released some SPC results for their storage systems, in addition to TPC and Microsoft ESRP among others. SPC2 is focused on streaming such as video, backup or other throughput centric applications, where SPC1 is centered on IOPS or transactional activity. The metrics for SPC2 are megabytes per second (MBps) for large file processing (LFP), large database query (LDQ) and video on demand (VOD) delivery, for a given price and protection level.
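To make such comparisons concrete, SPC2 results are commonly reduced to a price-performance figure: total tested price divided by the reported aggregate MBps. The systems and numbers below are made up for illustration, not taken from any actual submission:

```python
# Illustrative SPC2-style price-performance calculation. The prices and
# throughput figures are hypothetical, not from any real SPC2 submission.

def price_performance(total_price_usd, spc2_mbps):
    """Dollars per MBps of SPC2 throughput: lower is better."""
    return total_price_usd / spc2_mbps

# Two hypothetical systems at different price and throughput points
system_a = price_performance(total_price_usd=500_000, spc2_mbps=5_000)
system_b = price_performance(total_price_usd=180_000, spc2_mbps=1_200)

print(f"A: ${system_a:.0f}/MBps  B: ${system_b:.0f}/MBps")
```

Here system A costs more outright yet delivers better throughput per dollar; whether that matters depends on whether your workload is actually throughput centric, which is exactly the apples and oranges caveat above.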


What is the best benchmark?
Simple: your own application with as close to actual workload activity as possible. If that is not possible, then a simulation or benchmark workload that most closely resembles your needs.


Does this mean that XIV is still relevant?


Does this mean that XIV G3 should be used for every environment?
Generally speaking, no. However, its performance enhancements should allow it to be considered for more applications than in the past. Plus, with the public comparisons now available, that should help to silence questions (including those from me) about what the system can really do vs. marketing claims.


How does XIV compare to some other IBM storage systems using SPC2 comparisons?

[Comparison table: cost per SPC2 MBps, storage GBytes, and price as tested for the systems discussed below]


In the above comparisons, the DS5300 (NetApp/Engenio based) is a dual controller system (4GB of cache per controller) with 128 x 146.8GB 15K HDDs configured as RAID 5, with no discount applied to the price submitted. The V7000, which is based on the IBM SVC along with other enhancements, consists of dual controllers each with 8GB of cache and 120 x 10K 300GB HDDs configured as RAID 5, with just under a 40% discount off list price for the system tested. For the XIV Gen3 system tested, the discount off list price for the submission is about 63%, with 15 nodes, a total of 360GB of cache, and 180 2TB 7.2K SAS HDDs configured as mirrors. The DS8800 system with dual controllers has 256GB of cache and 768 x 146GB 15K HDDs configured as RAID 5, with a discount of between 40 and 50% off list.
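Since the submissions disclose the discount applied, a smart shopper can back out the approximate list price and see what starting point a vendor has already shown it will move from. A small sketch (the dollar figure is a hypothetical placeholder; the 63% discount is the one cited above for the XIV Gen3 submission):

```python
# Recover an approximate list price from a benchmark submission's tested
# (discounted) price. The tested price below is a hypothetical placeholder;
# the 63% discount matches the XIV Gen3 submission discussed above.

def list_price(tested_price_usd, discount_fraction):
    """tested = list * (1 - discount), so list = tested / (1 - discount)."""
    return tested_price_usd / (1 - discount_fraction)

tested = 1_000_000                      # hypothetical price as tested
approx_list = list_price(tested, 0.63)  # back out the pre-discount price

print(f"~${approx_list:,.0f} list before the 63% discount")
```

A submission discounted that deeply is itself a data point: it hints at where negotiations can begin, which is the point made below about discounted prices being a clue to the smart shopper.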


What the various metrics do not show is the benefit of various features and functionality, which should be weighed against your particular needs. Likewise, if your applications are not centered on bandwidth or throughput, then the above performance comparisons would not be relevant. Also note that the systems above have various discounted prices as submitted, which can be a hint to a smart shopper as to where to begin negotiations. You can also do some analysis of the various systems based on their performance, configuration, physical footprint, functionality and cost; the links below take you to the complete reports with more information.


DS8800 SPC2 executive summary and full disclosure report

XIV SPC2 executive summary and full disclosure report

DS5300 SPC2 executive summary and full disclosure report

V7000 SPC2 executive summary and full disclosure report


Bottom line: benchmarks and performance comparisons are just that, comparisons that may or may not be relevant to your particular needs. Consequently they should be used as a tool, combined with other information, to see how a particular solution might fit your specific needs. The best benchmark, however, is your own application running as close to a realistic workload as possible, to get a representative perspective of a system's capabilities.


Ok, nuff said

Cheers gs

Converged and dynamic infrastructures, cloud and virtual environments are popular themes and industry trends, with different levels of adoption and deployment occurring. However, are you focusing on products, or on the other Ps, that is, people, processes and policies (or more here)?


Industry Trend: Data growth and demand


The reason I bring this up is that quite often I hear discussions centered on the products (or services) providing various benefits, return on investment or cost saving opportunities.


Very little discussion is heard about what is being done or enabled by vendors and service providers, or what is being adopted by customers, to tie in people, process and policy convergence.


Industry Trend: Removing organizational barriers to enable convergence technology


Put another way, the discussions focus on the new technology or service while forgetting or assuming that the people, process and policies will naturally fall into place.


Will customer policies, processes or procedures, along with internal organizational (e.g. political) issues around how people leverage those converged products, also evolve?


I assert that while there are benefits to be obtained from leveraging new enabling technologies (hardware, software, networks, services), their full potential will not be realized until policies, process, people skill sets and, even more important, organizational or interdepartmental turf wars and boundaries are also addressed.


Industry Trend: SANta's converged management team and family
Converged family team


This does not mean consolidating different groups; rather it can mean thawing out relations between groups where there are challenges, and establishing an abstraction or virtual layer, a virtual team that cuts across different technology domains, combining various skill sets, new best practices, policies and procedures in order to streamline management of physical and virtual resources.


Chuck Hollis (aka twitter @ChuckHollis) of EMC has an interesting blog post (here) that ties in the theme of different IT groups working together, or lacking situational awareness, that is worth a read. You can also read the Industry Trends and Perspective solution brief that I did earlier this year on the topic of Removing Organizational Barriers for Leveraging Technology Convergence.


Here are some additional related posts:


What is your organization doing (or what have you done) to enable convergence, factoring in people, processes, policies and products, or is it a non issue for you?


Ok, nuff said for now


Cheers gs

Given that it is Halloween season, time for some fun.


Over the past couple of weeks various product and solution services announcements have been made, resulting in various articles, columns, blogs and commentary in support of them.


Ever wonder which, if any, of those products could actually be stitched together to work in a production environment without increasing the overall cost and complexity they sometimes promote as their individual value proposition? Granted, many can and do work quite well when introduced into heterogeneous or existing environments with good interoperability. However, what about those that look good on paper, or in a WebEx or YouTube video, on their own, yet may be challenged to be pieced together to work with others?


Reading product announcements


Hence, in the spirit of Halloween, the vision of a Frankenstack appeared.


A Frankenstack is a fictional environment where you piece together various technologies from announcements, or what you see or hear about in different venues, into a solution.


Part of being a Frankenstack is that while the various pieces may look interesting on their own, good luck trying to put them together on paper, let alone in a real environment.


While I have not attempted to piece together any Frankenstacks lately, I can visualize various ones.

Stacking or combining different technologies, will they work together?

A Frankenstack could be based on what a vendor, VAR or solution provider proposes or talks about.


A Frankenstack could also be what an analyst, blogger, consultant, editor, pundit or writer pieces together in a story or recommendation.

Some Frankenstacks may be more synergistic and interoperable than others, perhaps even working in a real customer environment.


Of course, even if the pieces could be deployed, would you be able to afford them, let alone support them (interoperability aside), without adding complexity?


You see, a Frankenstack might look good on paper, in a slide deck, a WebEx or some other venue, however will it actually work or apply to your environment, or is it just fun to talk about?


Don't get me wrong, I like hearing about new technology and products as much as anyone else; however, let's have some fun with Frankenstacks while keeping in perspective whether they help or add complexity to your environment.


Ok, enough fun for now, let me know what you see or can put together in terms of Frankenstacks.


Keep in mind they don't actually have to work, as that is what qualifies them for trick or treat and Frankenstack status.


Enjoy your Halloween season, do not be afraid, however be ready for some tricks and treats, it's that time of the year.


Cheers gs

Warning: Do not be scared, however be ready for some trick and treat fun, it is, after all, the Halloween season.


I like new and emerging technologies and trends along with Zombie technologies, you know, those technologies that have been declared dead yet are still being enhanced, sold and used.


Zombie technologies as a name may be new for some, while others will recognize the experience from the past: technologies being declared deceased yet still alive and in use. Zombie technologies are those that have been declared dead, yet are still alive, enabling productivity for the customers that use them and often profits for the vendors who sell them.


Zombie technologies

Some people consider a technology or trend dead once it hits the  peak of hype as that can signal a time to jump to the next bandwagon or shiny  new technology (or toy).


Others will see a technology as being dead when it is  on the down slope of the hype curve towards the trough of disillusionment  citing that as enough cause for being deceased.


Yet others will declare  something dead while it matures working its way through the trough of disillusionment  evolving from market adoption to customer deployment eventually onto the plateau  of productivity (or profitability).


Then there are those who see something as being dead once it is finally retired from productive use, or is no longer profitable to sell.


Of course, then there are those who just like to call anything new, or anything other than what they like or that is outside their comfort zone, dead. In other words, if your focus or area of interest is tied to new products, technology trends and their promotion, rest assured you had better be where the resources are being applied and view other things as being dead, and thus you are probably not a fan of Zombie technologies (at least publicly).


Zombie technologies and hype cycles


On the other hand, if your area of focus is on leveraging technologies and products in a productive way, including selling things that are profitable without a lot of marketing effort, your view of what is dead or not will be different. For example, if you are risk averse, letting someone else be on the leading bleeding edge (unless you have a dual redundant HA blood bank attached to your environment), your view of what is dead or not will be much different from those promoting the newest trend.


Funny thing about being declared dead: often it is not the technology, implementation, research and development or customer acquisition, rather simply a lack of promotion, marketing and general awareness. Take tape, for example, which has been a multi-decade member of the Zombie technology list. Recently vendors banded together, investing in marketing awareness and reaching out to say tape is alive. Guess what, lo and behold, there was a flurry of tape activity in venues that normally might not be talking about tape. Funny how marketing resources can bring something back from the dead, including Zombie technologies, to become popular or cool to discuss again.


With the 2011 Halloween season upon us, it is time to take a look at this year's list of Zombie technologies. Keep in mind that being named a Zombie technology is actually an honor, in that it usually means someone wants to see it dead so that his or her preferred product or technology can take its place.


Here are the 2011 Zombie technologies.


Backup: Far from being dead, its focus is changing and evolving with a broader emphasis on data protection. While many technologies associated with backup have been declared dead along with some backup software tools, the reality is that it is time to modernize how backups and data protection are performed. Thus, backup is on the Zombie technology list and will live on, like it or not, until it is exorcised from your environment and replaced with a modern, resilient and flexible protected data infrastructure.


Big Data: While not declared dead yet, it will be soon by some  creative marketer trying to come up with something new. On the other hand,  there are those who have done big data analytics across different Zombie  platforms for decades which of course is a badge of honor. As for some of the  other newer or shiny technologies, they will have to wait to join the big data  Zombies.


Cloud: Granted clouds are still on the hype cycle; some argue that cloud has reached its peak in terms of hype and is now heading down into the trough of disillusionment, which of course some see as meaning dead. In my opinion, cloud hype has peaked or is close to peaking, and real work is occurring, which means a gradual shift from industry adoption to customer deployment. Put a different way, clouds will be on the Zombie technology list for a couple of decades or more. Also, keep in mind that being on the Zombie technology list is an honor, indicating a shift towards adoption and less emphasis on promotion or awareness fanfare.


Data centers: With the advent of the cloud, data centers or habitats for technology have been declared dead, yet there is continued activity in expanding or building new ones all the time. Even the cloud relies on data centers for housing the physical resources including servers, storage, networks and other components that make up a Green and Virtual Data Center or Cloud environment. Needless to say, data centers will stay on the Zombie list for some time.


Disk Drives: Hard disk drives (HDD) have been declared dead for many years, and more recently, due to the popularity of SSDs, have lost their sex appeal. Ironically, if tape is dead at the hands of HDDs, then how can HDDs be dead, unless of course they are on the Zombie technology list? What is happening is that, like tape, the role of HDDs is changing as the technology continues to evolve, and they will be around for another decade or so.


Fibre Channel (FC): This is a perennial favorite, having been declared dead on a consistent basis for two decades now, going back to the early 90s. While there are challengers, as there have been in the past, FC is far from dead as a technology, with 16 Gb (16GFC) now rolling out and a transition path to Fibre Channel over Ethernet (FCoE). My take is that FC will be on the Zombie list for several more years until finally retired.


Fibre Channel over Ethernet (FCoE): This is a new entrant and one  uniquely qualified for being declared dead as it is still in its infancy. Like  its peer FC which was also declared dead a couple of decades ago, FCoE is just  getting started and looks to be on the Zombie list for a couple of decades into  the future.


Green IT: I have heard that Green IT is dead; after all, it was hyped before the cloud era, which has also been declared dead by some, yet there remains a Green gap or disconnect between messaging and issues, and thus missed opportunities. For a dead trend, SNIA recently released its Emerald program, which consists of various metrics and measurements (remember, zombies like metrics to munch on) for gauging energy effectiveness for data storage. The hype cycle of Green IT and Green storage may be dead, however Green IT in the context of a shift in focus to increased productivity using the same or less energy is underway. Thus Green IT and Green storage are on the Zombie list.


iPhone: With the advent of Droid and other smart phones, I have heard iPhones declared dead; granted, some older versions are. However, while Apple cofounder Steve Jobs has passed on (RIP), I suspect we will be seeing and hearing more about the iPhone for a few years more if not longer.


IBM Mainframe: When it comes to information technology (IT), the king of the Zombie list is the venerable IBM mainframe, aka zSeries. The IBM mainframe has been declared dead for over 30 years, if not longer, and will be on the Zombie list for another decade or so. After all, IBM keeps investing in the technology as people keep buying them, not to mention that IBM built a new factory to assemble them in.


NAS: Congratulations to Network Attached Storage (NAS) including  Network File System (NFS) and Windows Common Internet File System (CIFS) aka  Samba or SMB for making the Zombie technology list. This means of course that  NAS in general is no longer considered an upstart or immature technology;  rather it is being used and enhanced in many different directions.


PC: The personal computer was touted as killing off some of its Zombie technology list members including the IBM mainframe. With the advent of tablets, smart phones and virtual desktop infrastructures (VDI), the PC has been declared dead. My take is that while the IBM mainframe may eventually drop off the Zombie list in another decade or two if it finds something to do in retirement, the PC will be on the list for many years to come. Granted, the PC could live on even longer in the form of a virtual server, where the majority of guest virtual machines (VMs) are in support of Windows-based PC systems.


Printers: How long have we heard that printers are dead? The day  that printers are dead is the day that the HP board of directors should really  consider selling off that division.


RAID: It's been over twenty years since the first RAID white paper and early products appeared. Back in the 90s, RAID was a popular buzzword and bandwagon topic; however, people have moved on to new things. RAID has been on the Zombie technology list for several years now, while it continues to find itself being deployed from the high end of the market down into consumer products. The technology continues to evolve in both hardware and software implementations on a local and distributed basis. Look for RAID to be on the Zombie list for at least the next couple of decades while it continues to evolve; after all, there is still room for RAID 7, RAID 8 and RAID 9, not to mention moving into hexadecimal or double-digit variants.


SAN: Storage Area Networks (SANs) have been declared dead and thus  on the Zombie technology list before, and will be mentioned again well into the  next decade. While the various technologies will continue to evolve, networking  your servers to storage will also expand into different directions.


Tape: Magnetic tape has been on the Zombie technology list almost as long as the IBM mainframe, and it is hard to predict which one will last longer. My opinion is that tape will outlast the IBM mainframe, as it will be needed to retrieve the instructions on how to de-install those Zombie monsters. Tape has seen a resurgence in vendors spending some marketing resources, and to no surprise, there has been an increase in coverage about it being alive, even at Google. Rest assured, tape is very safe on the Zombie technology list for another decade or more.


Windows: Similar to the PC, Microsoft Windows has been touted in the past as causing other platforms to be dead, however it has been on the Zombie list for many years now. Given that Windows is the most commonly virtualized platform or guest VM, I think we will be hearing about Windows on the Zombie list for a few decades more. There are particular versions of Windows, as with any technology, that have gone into maintenance or sustainment mode or even been discontinued.


Poll: What are the most popular Zombie technologies?

Keep in mind that a Zombie technology is one that is still in use, being developed or enhanced, sold usually at a profit and used typically in a productive way. In some cases, a declared dead or Zombie technology may only be in its infancy, getting started, having either just climbed over the peak of hype or come out of the trough of disillusionment. In other instances, the Zombie technology has been around for a long time yet continues to be used (or abused).

Click here to cast your vote on Zombie technologies and see results


Note: Zombie voting rules apply, which means vote early, vote often, and of course vote for those who cannot, including those that are dead (real or virtual).


Ok, nuff said, enough fun, let's get back to work, at least for now


Cheers gs

I recently came across a piece by Carl Brooks over at IT Tech News Daily that caught my eye; the title was Cloud Storage Often Results in Data Loss. The piece has an effective title (good for search engine optimization: SEO) as it stood out from many others I saw on that particular day.


Industry Trend: Cloud storage


What caught my eye in Carl's piece is that it reads as if the facts, based on a quick survey, point to clouds resulting in data loss, as opposed to being an opinion that some cloud usage can result in data loss.


Data loss


My opinion is that if not used properly, including ignoring best practices, any form of data storage medium or media could result in, or be blamed for, data loss. Some people have lost data as a result of using cloud storage services, just as other people have lost data or access to information on other storage mediums and solutions. For example, data has been lost on tape, Hard Disk Drives (HDDs), Solid State Devices (SSD), Hybrid HDDs (HHDD), RAID and non-RAID, local and remote, and even optical-based storage systems large and small. In some cases, there have been errors or problems with the medium or media; in other cases, storage systems have lost access to, or lost, data due to hardware, firmware, software or configuration issues, including human error, among others.


Data loss


Technology failure: Not if,  rather when and how to decrease impact
Any technology, regardless of what it is or who it is from, along with its architecture, design and implementation, can fail. It is not if, rather when and how gracefully, and what safeguards are in place to decrease the impact, in addition to containing or isolating faults, that differentiates various products or solutions. How they automatically repair and self-heal to keep running, support accessibility and maintain data integrity are important, as is how those options are used. Granted, a failure may not be technology related per se, rather something associated with human intervention, configuration, change management (or lack thereof) along with accidental or intentional activities.


Walking the talk
I have used public cloud storage services for several years, including SaaS and AaaS as well as IaaS (see more XaaS here) and, knock on wood, have not lost any data yet; loss of access, sure, however no data has been lost.


I follow my advice and best practices when selecting cloud  providers looking for good value, service level agreements (SLAs) and service level objectives (SLOs) over low cost or for free services.


In the several years of using cloud based storage and services there  has been some loss of access, however no loss of data. Those service disruptions  or loss of access to data and services ranged from a few minutes to a little  over an hour. In those scenarios, if I could not have waited for cloud storage  to become accessible, I could have accessed a local copy if it were available.


Had a major disruption occurred where it would have been several days before I could gain access to that information, or if it were actually lost, I have a data insurance policy. That data insurance policy is part of my business continuance (BC) and disaster recovery (DR) strategy. My BC and DR strategy is a multi-layered approach combining local, offline and offsite along with online cloud data protection and archiving.


Assuming my cloud storage service could get data back to a given point (RPO) in a given amount of time (RTO), I have some options. One option is to wait for the service or information to become available again, assuming a local copy is no longer valid or available. Another option is to start restoration from a master gold copy and then roll forward changes from the cloud services as that information becomes available. In other words, I am using cloud storage as another resource that is both protecting what is local, as well as complementing how I locally protect things.
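Those options boil down to simple decision logic. Here is a minimal sketch of that logic in Python; the function, the plan names and the thresholds are hypothetical illustrations of the approach, not any particular vendor's tooling:

```python
from datetime import timedelta

def choose_restore(estimated_outage: timedelta, rto: timedelta,
                   local_copy_valid: bool) -> str:
    """Pick a restore path: use a valid local copy, wait out the cloud
    outage, or restore a gold master and roll changes forward.
    Hypothetical logic for illustration only."""
    if local_copy_valid:
        return "use local copy"          # fastest path, no cloud dependency
    if estimated_outage < rto:
        return "wait for cloud service"  # outage expected to end within RTO
    return "restore gold copy, roll forward from cloud when available"

# Example: a two-hour estimated outage against a one-hour RTO, no local copy
plan = choose_restore(timedelta(hours=2), timedelta(hours=1), False)
print(plan)  # restore gold copy, roll forward from cloud when available
```

The point of the sketch is that the cloud copy is one input among several, weighed against your RTO, rather than the only recovery path.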


Minimize or cut  data loss or loss of access
Anything important should be protected locally and remotely meaning  leveraging cloud and a master or gold backup copy.


To cut the cost of protecting information, I also leverage archives, which means not all data gets protected the same. Important data is protected more often, reducing RPO exposure and speeding up RTO during restoration. Other data that is not as important is still protected, however on a different frequency with other retention cycles; in other words, tiered data protection. By implementing tiered data protection, best practices and various technologies, including data footprint reduction (DFR) such as archive, compression and dedupe, in addition to local disk to disk (D2D), disk to disk to cloud (D2D2C), along with routine copies to offline media (removable HDDs or RHDDs) that go offsite, I'm able to stretch my data protection budget further. Not only is my data protection budget stretched further, I have more options to speed up RTO along with better detail for recovery and enhanced RPOs.
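Tiered data protection like this can be expressed as a simple policy table mapping each tier to a protection frequency, retention period and set of targets. The tier names, numbers and target labels below are made-up examples to show the shape of the idea, not a recommendation:

```python
# Hypothetical tiered data protection policy: more important data is
# protected more often (tighter RPO) and copied to more targets.
PROTECTION_TIERS = {
    "critical": {"frequency_hours": 4,   "retention_days": 365,
                 "targets": ["local_d2d", "cloud_d2d2c", "offsite_rhdd"]},
    "standard": {"frequency_hours": 24,  "retention_days": 90,
                 "targets": ["local_d2d", "cloud_d2d2c"]},
    "archive":  {"frequency_hours": 168, "retention_days": 2555,
                 "targets": ["offsite_rhdd"]},
}

def protection_policy(tier: str) -> dict:
    """Look up the protection settings for a given data tier."""
    return PROTECTION_TIERS[tier]

print(protection_policy("critical")["frequency_hours"])  # 4
```

A table like this makes the trade-off explicit: critical data pays for frequent copies to several targets, while archive data trades frequency for long retention at lower cost.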


If you are looking to avoid losing data, or loss of access, it is a  simple equation in no particular order:

  • Strategy and design
  • Best practices and processes
  • Various technologies
  • Quality products
  • Robust service delivery
  • Configuration and implementation
  • SLO and SLA management metrics
  • People skill set and knowledge
  • Usage guidelines or terms of service (ToS)


Unfortunately, clouds, like other technologies or solutions, get a bad reputation or get blamed when something goes wrong. Sometimes it is the technology or service that fails; other times it is a combination of errors that results in loss of access or lost data. With clouds, as has been the case with other storage mediums and systems in the past, when something goes wrong, and if it has been hyped, chances are it will become a target for blame or finger pointing vs. determining what went wrong so that it does not occur again. For example, cloud storage has been hyped as easy to use: don’t worry, just put your data there, you can get out of the business of managing storage as the cloud will do that magically for you behind the scenes.


The reality is that while cloud storage solutions can offload functions, someone is still responsible for making decisions on their usage and configuration that impact availability. What separates various providers is their ability to design in best practices, isolate and contain faults quickly, and integrate resiliency as part of a solution, along with various SLAs aligned to the service level you are expecting, in an easy to use manner.


Does that mean the more you pay the more reliable and resilient a  solution should be?
No, not necessarily, as there can still be risks including how the  solution is used.


Does that mean low cost or for free solutions have the most risk?
No, not necessarily as it comes down to how you use or design  around those options. In other words, while cloud storage services remove or  mask complexity, it still comes down to how you are going to use a given  service.


Shared responsibility for  cloud (and non cloud) storage data protection
Anything important enough that you cannot afford to lose, or that you need quick access to, should be protected in different locations and on various mediums. In other words, balance your risk. Cloud storage service providers need to take responsibility for meeting service expectations for a given SLA and the SLOs that you agree to pay for (unless free).


As the customer, you have the responsibility of following best practices supplied by the service provider, including reading the ToS. Part of the responsibility as a customer or consumer is to understand the ToS, SLA and SLOs for the given level of service that you are using. As a customer or consumer, this means doing your homework to be ready as a smart, educated buyer or consumer of cloud storage services.


If you are a vendor or value added reseller (VAR), your opportunity is to help customers with the acquisition process to make informed decisions. For VARs and solution providers, this can mean up-selling customers to a higher level of service by making them aware of the risk and reward benefits, as opposed to focusing on cost. After all, if an order taker at McDonalds can ask Would you like to super size your order, why can't you as a vendor or solution provider also have a value-oriented up-sell message?


Additional related links to  read more and sources of information:

Choosing  the Right Local/Cloud Hybrid Backup for SMBs
E2E  Awareness and insight for IT environments
Poll: What  Do You Think of IT Clouds?
Convergence:  People, Processes, Policies and Products
What do  VARs and Clouds as well as MSPs have in common?
Industry  adoption vs. industry deployment, is there a difference?
Cloud conversations: Loss of data access vs. data loss
Clouds and Data Loss: Time for CDP (Commonsense Data Protection)?
Clouds are like Electricity: Dont be scared
Wit and wisdom for BC and DR
Criteria for choosing the right business continuity or disaster  recovery consultant
Local and Cloud Hybrid Backup for SMBs
Is cloud disaster recovery appropriate for SMBs?
Laptop data protection: A major headache with many cures
Disaster recovery in the cloud explained
Backup in the cloud: Large enterprises wary, others climbing on  board
Cloud and Virtual Data Storage Networking (CRC Press, 2011)
Enterprise Systems Backup and Recovery: A Corporate Insurance Policy


Poll: Who is responsible for cloud storage data  loss?

Click here to cast your vote on who is responsible for cloud storage loss and view results


Taking action, what you  should (or not) do
Don't be scared of clouds, however do your homework, be ready, look before you leap and follow best practices. Look into the service level agreements (SLAs) associated with a given cloud storage product or service. Follow best practices for how you or someone else will protect what data is put into the cloud.


For critical data or information, consider having a copy of that data in the cloud as well as in another place, which could be a different cloud, or local, or offsite and offline. Keep in mind that the theme for critical information and data is not if, rather when, so consider what can be done to decrease the risk or impact of something happening; in other words, be ready.


Data put into the cloud can be lost, or loss of access to it can occur for some amount of time, just as happens with non-cloud storage such as tape, disk or SSD. What impacts or minimizes your risk of using traditional local or remote as well as cloud storage are best practices and how things are configured, protected, secured and managed. Another consideration is that the type and quality of the storage product or cloud service can have a big impact. Sure, a quality product or service can fail; however, you can also design and configure to decrease those impacts.


Wrap up
Bottom line: do not be scared of cloud storage, however be ready, do your homework, review best practices, understand benefits and caveats, risk and reward. For those who want to learn more about cloud storage (public, private and hybrid) along with data protection, data management, data footprint reduction among other related topics and best practices, I happen to know of some good resources. Those resources, in addition to the links provided above, include Cloud and Virtual Data Storage Networking (CRC Press), which you can learn more about here as well as find at Amazon among other venues. Also, check out Enterprise Systems Backup and Recovery: A Corporate Insurance Policy by Preston De Guise (aka twitter @backupbear), which is a great resource for protecting data.


Ok, nuff said for now


Cheers gs

Rather than doing a bunch of separate posts, here is a collection of different perspectives and commentary on various IT and data storage industry activity.

Various comments and perspectives

In this link are comments and perspectives regarding thin provisioning including how it works as well as when to use it for optimizing storage space capacity. Speaking of server and storage capacity, here in this link are comments on what server and storage would be needed to support an SMB office of 50 people (or more, or less) along with how to back it up.


For those interested or in need of managing data and other records in this link are comments on preparing yourself for regulatory scrutiny.


Storage networking interface or protocol debates (battles) can be interesting; in this link, see the role of iSCSI SANs for data storage environments. Let's not forget about Fibre Channel over Ethernet (FCoE), which is discussed in this link and here in this link. Here in this link are comments about how integrated rackem, stackem and package bundles stack up. To support continued demand for managed service providers (MSP), cloud and hosted services providers are continuing to invest in their infrastructures, so read some comments here. While technology plays a role, particularly as it matures, there is another barrier to leveraging converged solutions, and that is organizational; read some perspectives and thoughts here.


Storage optimization including data footprint reduction (DFR) can be used to cut costs as well as support growth. In this link see tips on reducing storage costs, and additional perspectives in this link on doing more with what you have. Here in this link are some wit and wisdom comments on the world of disaster recovery solutions. Meanwhile, in this link are perspectives on choosing the right business continuity (BC) and disaster recovery (DR) consultant. In this link are comments on BC and DR, including planning for virtualization and life beyond consolidation. Are disk-based dedupe and virtual tape libraries a holdover for old backup, or a gateway to the future? See some perspectives on those topics and technologies in this link.


Here are some more comments on DR and BC leveraging the cloud, while perspectives on various size organizations looking at clouds for backup are in this piece here. What is the right local, cloud or hybrid backup for SMBs? Check out some commentary here while viewing some perspectives on cloud disaster recovery here. Not to be forgotten, laptop data protection can also be a major headache; however, there are also many cures, discussed in this piece here.


The Storage Networking Industry Association (SNIA) Green Storage Initiative (GSI) debuted its Emerald power efficiency measurement specification recently; read some perspectives and comments in this link here. While we are on the topic of data center efficiency and effectiveness, here in this link are perspectives on micro servers or mini blade systems. Solution bundles, also known as data center in a box or SAN in a CAN, have been popular with solutions from EMC (vBlocks) and NetApp (FlexPods) among others; read perspectives on them in this link.


Buzzword bingo


What would a conversation involving data storage and IT (particularly buzzword bingo) be without comments about Big Data and Big Bandwidth which you can read here.


Want to watch some videos? From Spring 2011 SNW, check out around the 15:00 to 55:00 mark in this video from the Cube, where various topics are discussed. Interested in how to scale data storage with clustered or scale-up and scale-out solutions? Check out this video here, or if you want to see some perspectives on data de-duplication, watch this clip.


Various comments and perspectives


Here is a video discussing SMBs as the current sweet spot for server virtualization with comments on the SMB virtualization dark side also discussed here. Meanwhile here are comments regarding EMC Flashy announcements from earlier this year on the Cube. Check out this video where I was a guest of Cali Lewis and John MacArthur on the Cube from the Dell Storage Forum discussing a range of topics as well as having some fun. Check out these videos and perspectives from VMworld 2011.


What's your take on choosing the best SMB NAS? Here are some of my perspectives on choosing an SMB NAS storage system. Meanwhile, here are some perspectives on enterprise-class storage features finding their way into SMB NAS storage systems.


Meanwhile, industry leaders EMC and NetApp have been busy enhancing their NAS storage solutions, which you can read comments about here.


Are you familiar with the Open Virtualization Alliance (OVA)? Here are some comments about OVA and other server virtualization topics.


Various videos


What's your take on Thunderbolt, the new interconnect Apple is using in place of USB? Here are my thoughts. Meanwhile, various other tips, Ask the Expert (AtE) answers and discussions can be found here.


Check out the above links, as well view more perspectives, comments and news here, here, here, here and here.


Ok, nuff said for now

Cheers gs