
Does your organization have or do you work with a revenue prevention department or revenue prevention team?


For those not familiar, a revenue prevention team or department is an expression that refers to those who get in the way of selling, closing and generating revenue for an organization.


In a sales dominated organization, the revenue prevention team or department might be referred to as those who do not do what sales wants, when and how they want it. Anything other than what sales wants is seen as getting in the way of revenue. Sometimes sales will see marketing, engineering, manufacturing, quality control, human resources, finance and accounting, or legal as revenue prevention departments. In other instances, the revenue prevention team or department of some sales organizations will refer to those in customer or prospect organizations who get in the way of or slow down the process of closing the deal. Yet another example can be outsiders or third parties such as consultants, analysts, advisors or others who are brought into the sales process by a customer or prospect and seen by a sales organization as a revenue prevention obstacle.


On the other hand, sales can also be seen as a revenue prevention department when, as a whole or on a smaller or personal basis, they get in the way of actually bringing in the deals. For example, a sales based revenue prevention department, team or person may be spending too much time selling yet not enough time actually closing or getting the real deal. This can be due to different reasons, such as the sales rep trying to sell the wrong solution for a particular customer or prospect's needs, or simply not being able to close the deal.


If you have never seen the movie Glengarry Glen Ross, take a few minutes and check out at least the highlights, including the classic lines such as ABC: Always Be Closing or Coffee is for Closers.


Now let's get back to revenue prevention in the context of this post with some revenue prevention scenarios.


Should you feel sorry for the vendor or VAR who misses their revenue or sales forecast while they were busy trying to sell something new and forgetting to take the order on the existing items?


Should you feel sorry for the vendor or VAR whose disk or storage sales are down because their customers and prospects heeded their advice from the last couple of years to dedupe everything?


Well, if their dedupe sales are not making up for the shortfall, no, you should not feel sorry for them or their investors.


Should you feel sorry for the vendor or VAR whose server sales and associated software, including hypervisors and tools, are down because their customers and prospects heeded their advice from the last couple of years to virtualize everything?


Well, if the corresponding increase in services, new tools, and engagements for data protection and other modernization does not make up for it, then no.

Should you feel sorry for the vendor or VAR whose laptop, desktop and workstation sales, along with the corresponding pull of other items, have resulted in business going elsewhere while they have sold VDI?


How about: should you feel sorry for the vendor or VAR whose customers or prospects are no longer buying as much hardware, software or services because they heeded the advice and went to Google, Amazon or some other cloud?


Ok, do you get my point?


In the quest to increase opportunity, boost revenue, and expand into adjacent markets and technologies, there is a balancing act of generating awareness and moving customers and prospects into new areas while keeping the revenue prevention team on the bench to avoid disrupting annuities or current revenue streams.


In other words, embrace the new, avoid clinging to the old with a death grip, and be careful leading bleeding edge without a dual redundant blood bank (or at least a backup plan). Put another way, find a balance between taking orders for what your customers want and selling them on where you want them to go.


Ok, nuff said for now


Cheers gs


Cloud and travel fun

Posted by gregschulz Jan 30, 2012

Warning: if you are a cloud purist who does not take lightly to fun in and around all types of clouds, well, try to have some fun; otherwise, enjoy this fun in and around clouds post.


On a recent trip to a video recording studio in the Boston (BOS) area, I took a few photos with my iPhone of traveling above, in and around clouds. In addition, during the trip I also used cloud based services from the airplane (e.g. Gogo WiFi) for cloud backup and other functions.


Above the clouds, the engine (a GE/CFM56) enables this journey to and above the clouds
View of a GE CFM56 powering a Delta A320 journey to the clouds

Easy to understand Disaster Recovery (DR) plan for planes traveling through and above clouds
Easy to understand cloud emergency and contingency procedures

On board above the cloud marketing
Example of cloud marketing and value add services

Nearing Boston
Clouds are clearing nearing destination Boston aka IATA: BOS

Easy to understand above the cloud networking
Example of easy to understand converged cloud networking

A GE/CFM56 jet engine flying over the GE Lynn MA jet engine facility
GE Aviation plant in Lynn MA below GE CFM56 jet engine

On ramp or waiting area to return back above the clouds
Back at Logan, long day of travel, video shoot, time for a nap.

Clear sky at sunset as moon rises over Cloud Expo 2011 in Santa Clara
From a different trip, wrapping up a cloud focused day, at Cloud Expo in Santa Clara CA in November.


Here are some additional links about out and about, clouds, travel, technology, trends and fun:
Commentary on Clouds, Storage, Networking, Green IT and other topics
Cloud, virtualization and storage networking conversations
What am I hearing and seeing while out and about


Oh, what was recorded in the video studios on that day?


Why, something about IT clouds, virtualization, storage, networking and other related topics of course, which will be appearing at some venue in the not so distant future.

Ok, nuff fun for now, let's get back to work.


Cheers gs

Amazon Web Services (AWS) announced the beta of their new storage gateway functionality that enables access to Amazon S3 (Simple Storage Service) from your different applications using an appliance installed at your data center site. With this beta launch, Amazon joins other startup vendors who provide standalone gateway appliance products (e.g. Nasuni, Ctera, etc.) along with those who have disappeared from the market (e.g. Cirtas). In addition to gateway vendors, there are also those with cloud access added to their software tools (e.g. Jungle Disk, which accesses both Rackspace and Amazon S3, along with the Commvault Simpana cloud connector among others). There are also vendors that have added cloud access gateways as part of their storage systems, such as TwinStrata among others. Even EMC (and here) has gotten into the game, adding qualified cloud access support to some of their products.


What is a cloud storage gateway?

Before going further, let's take a step back and address what for some may be a fundamental question: what is a cloud storage gateway?

Cloud services such as storage are accessed via some type of network, either the public Internet or a private connection. The type of cloud service being accessed (Figure 1) will determine what is needed. For example, some services can be accessed using a standard Web browser, while others require plug-in or add-on modules. Some cloud services may require downloading an application, agent, or other tool for accessing the cloud service or resources, while others provide an on-site or on-premises appliance or gateway.

Generic cloud access example via Cloud and Virtual Data Storage Networking (CRC Press)
Figure 1: Accessing and using clouds (From Cloud and Virtual Data Storage Networking (CRC Press))


Cloud access software and gateways or appliances are used for making cloud storage accessible to local applications. The gateways, as well as enabling cloud access, provide replication, snapshots, and other storage services functionality. Cloud access gateways or server-based software include tools from BAE, Citrix, Gladinet, Mezeo, Nasuni, OpenStack, TwinStrata and Zadara among others. In addition to cloud gateway appliances or cloud points of presence (cpops), access to public services is also supported via various software tools. Many data protection tools, including backup/restore, archiving, replication, and other applications, have added (or are planning to add) support for access to various public services such as Amazon, Google, Iron Mountain, Microsoft, Nirvanix, or Rackspace among several others.


Some of the tools have added native support for one or more of the cloud services leveraging various application programming interfaces (APIs), while other tools or applications rely on third-party access gateway appliances or a combination of native support and appliances. Another option for accessing cloud resources is to use tools (Figure 2) supplied by the service provider, which may be their own, from a third-party partner, or open source, as well as using their APIs to customize your own tools.


Generic cloud access example via Cloud and Virtual Data Storage Networking (CRC Press)
Figure 2: Cloud access tools (From Cloud and Virtual Data Storage Networking (CRC Press))


For example, I can use my Amazon S3 or Rackspace storage accounts using their web and other provided tools for basic functionality. However, for doing backups and restores, I use the tools provided by the service provider, which then deal with two different cloud storage services. The tool presents an interface for defining what to back up, protect, and restore, as well as enabling shared (public or private) storage devices and network drives. In addition to providing an interface (Figure 2), the tool also speaks the specific APIs and protocols of the different services, including PUT (create or update a container), POST (update header or metadata), LIST (retrieve information), HEAD (metadata information access), GET (retrieve data from a container), and DELETE (remove container) functions.


Note that the real behavior and API functionality will vary by service provider. The importance of mentioning the above example is that when you look at some cloud storage service providers, you will see mention of PUT, POST, LIST, HEAD, GET, and DELETE operations as well as services such as capacity and availability. Some services will include an unlimited number of operations, while others will have fees for doing updates, listing, or retrieving your data in addition to basic storage fees. By being aware of cloud primitive functions such as PUT or POST and GET or LIST, you can have a better idea of what they are used for as well as how they play into evaluating different services, pricing, and service plans.
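To make the primitives above concrete, here is a minimal sketch in Python. The operation names follow the generic REST-style primitives described above, and the everyday backup-tool actions mapped to them are hypothetical examples of my own, not any one provider's actual API:

```python
# Illustrative lookup table for the generic REST-style cloud storage
# primitives described above; descriptions are generic, not any one
# provider's actual API semantics.
CLOUD_PRIMITIVES = {
    "PUT": "create or update a container or object",
    "POST": "update header or metadata",
    "LIST": "retrieve information about containers or objects",
    "HEAD": "access metadata without retrieving the data itself",
    "GET": "retrieve data from a container",
    "DELETE": "remove a container or object",
}

# Hypothetical mapping of everyday backup-tool actions to those primitives.
ACTION_TO_PRIMITIVE = {
    "backup_file": "PUT",
    "update_tags": "POST",
    "browse_folder": "LIST",
    "check_metadata": "HEAD",
    "restore_file": "GET",
    "expire_old_copy": "DELETE",
}

for action, primitive in ACTION_TO_PRIMITIVE.items():
    print(f"{action:>15} -> {primitive}: {CLOUD_PRIMITIVES[primitive]}")
```

A table like this is also handy when comparing services, since providers that meter per operation will bill PUT, LIST or GET activity differently.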


Depending on the type of cloud service, various protocols or interfaces may be used, including iSCSI, NAS (NFS), HTTP or HTTPS, FTP, REST, SOAP, and BitTorrent, along with APIs and PaaS mechanisms including .NET or SQL database commands, in addition to XML, JSON, or other formatted data. VMs can be moved to a cloud service using file transfer tools or upload capabilities of the provider. For example, a VM such as a VMDK or VHD is prepared locally in your environment and then uploaded to a cloud provider for execution. Cloud services may provide an access program or utility that allows you to configure when, where, and how data will be protected, similar to other backup or archive tools.


Some traditional backup or archive tools have added support, directly or via third parties, for accessing IaaS cloud storage services such as Amazon, Rackspace, and others. Third-party access appliances or gateways enable existing tools to read and write data to a cloud environment by presenting a standard interface such as NAS (NFS and/or CIFS) or iSCSI (block) that gets mapped to the back-end cloud service format. For example, if you subscribe to Amazon S3, storage is allocated as objects and various tools are used to access it. The cloud access software or appliance understands how to communicate with the IaaS storage APIs and abstracts them from how they are used. Access software tools or gateways, in addition to translating or mapping between cloud APIs, provide functionality for your applications including security with encryption, bandwidth optimization, and data footprint reduction such as compression and de-duplication. Other functionality includes reporting and management tools that support various interfaces, protocols and standards including SNMP, the SNIA Storage Management Initiative Specification (SMI-S), and the Cloud Data Management Interface (CDMI).
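To make that translation role concrete, here is a purely illustrative toy sketch (not any vendor's actual implementation) of a gateway-style write path: identical blocks are deduplicated by content hash, and new blocks are compressed before they would be uploaded as objects to a back-end store:

```python
import hashlib
import zlib


class ToyCloudGateway:
    """Illustrative gateway write path: block-level dedupe via content
    hashing plus compression, before data would be uploaded as objects.
    A dict stands in for the back-end object store."""

    def __init__(self):
        self.object_store = {}  # fingerprint -> compressed block ("the cloud")
        self.volume_map = []    # ordered fingerprints making up the volume

    def write_block(self, block: bytes) -> bool:
        """Store a block; True if new (would be uploaded), False if deduped."""
        fingerprint = hashlib.sha256(block).hexdigest()
        self.volume_map.append(fingerprint)
        if fingerprint in self.object_store:
            return False  # duplicate: only a reference is recorded
        self.object_store[fingerprint] = zlib.compress(block)
        return True

    def read_block(self, index: int) -> bytes:
        """Read back block number `index` from the simulated object store."""
        return zlib.decompress(self.object_store[self.volume_map[index]])


gw = ToyCloudGateway()
blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096]  # third repeats the first
uploaded = [gw.write_block(b) for b in blocks]
print(uploaded)                        # [True, True, False]
print(gw.read_block(2) == blocks[2])   # round trip is lossless
```

Real gateways layer encryption, caching and provider-specific API calls on top of this basic idea, but the dedupe-then-compress-then-upload flow is the essence of the data footprint reduction mentioned above.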


First impression: Interesting, good move Amazon, I was ready to install and start testing it today

The good news here is that Amazon is taking steps to make it easier for your existing applications and IT environments to use and leverage clouds for private and hybrid adoption models, with Amazon branded and managed services, technology and associated tools.


This means leveraging your existing Amazon accounts to simplify procurement, management and ongoing billing, as well as leveraging their infrastructure. As a standalone gateway appliance (e.g. it does not have to be bundled as part of a specific backup, archive, replication or other data management tool), the idea is that you can insert the technology into your existing data center between your servers and storage to begin sending a copy of data off to Amazon S3. In addition to sending data to S3, the integrated functionality with other AWS services should make it easier to integrate with Elastic Compute Cloud (EC2) and Elastic Block Store (EBS) capabilities, including snapshots for data protection.


Thus my first impression of the AWS storage gateway at a high level is good and interesting, prompting a deeper look and a second impression.


Second impression: Hmm, what does it really do and require? Time to slow down and do more homework

Digging deeper and going through the various publicly available material (note: I can only comment on or discuss what is announced or publicly available) results in a second impression of wanting and needing to dig deeper based on some of the caveats. Now granted, and in fairness to Amazon, this is of course a beta release, and hence while on first impression it can be easy to miss the notice that it is in fact a beta, keep in mind things can and hopefully will change.


Pricing aside, as with any cloud or managed storage service, you will want to do a cost analysis model just as you would for procuring physical storage, looking into the cost of the monthly gateway fee along with the associated physical server running the VMware ESXi configuration that you will need to supply. Chances are that if you are an average sized SMB, you have a physical machine (PM) laying around that you can throw a copy of ESXi onto, if you don't already have room for some more VMs on an existing one.


You will also need to assess the costs for using the S3 storage, including space capacity charges, access and other fees, as well as charges for doing snapshots or using other functionality. Again, these are not unique to Amazon or their cloud gateway and should be best practices for any service or solution that you are considering. Amazon, by the way, makes it easy to see their base pricing for different tiers of availability, geographic locations and optional fees.
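As a sketch of such a cost model, the calculation below combines capacity, request and egress charges into a monthly estimate. The rates are made-up placeholders, not Amazon's actual pricing; plug in the current published figures for whatever service you are evaluating:

```python
def monthly_cloud_storage_cost(stored_gb, put_requests, get_requests,
                               egress_gb, gb_month_rate=0.10,
                               per_1k_put=0.01, per_10k_get=0.01,
                               egress_per_gb=0.12):
    """Estimate a monthly cloud storage bill.

    All rates are illustrative placeholders, not any provider's real
    pricing; substitute published figures when doing a real analysis.
    """
    storage = stored_gb * gb_month_rate
    requests = (put_requests / 1000) * per_1k_put \
        + (get_requests / 10000) * per_10k_get
    egress = egress_gb * egress_per_gb
    return round(storage + requests + egress, 2)


# Example: 500 GB stored, 100k PUTs, 50k GETs, 20 GB retrieved in a month
print(monthly_cloud_storage_cost(500, 100_000, 50_000, 20))
```

The point is less the specific numbers than the habit: a service that looks cheap on capacity alone can look different once per-operation and retrieval fees are folded in.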


Speaking of accessing the cloud, and cloud conversations, you will also want to keep in mind what your network bandwidth service requirements will be to move data to Amazon if you are not already doing so.


Another thing to consider with the AWS storage gateway is that it does not replace your local storage (that is, unless you move your applications to Amazon EC2 and EBS); rather, it makes a copy of whatever you save locally to a remote Amazon S3 storage pool. This can be good for high availability (HA), business continuance (BC), disaster recovery (DR) and compliance among other data management needs. However, in your cost model you also need to keep in mind that you are not replacing your local storage; you are adding to it via the cloud, which should be seen as complementing and enhancing your private, now to be hybrid, environment.


Walking the cloud data protection talk

FWIW, I leverage a similar model where I use a service (Jungle Disk) to which critical copies of my data get sent, which in turn places copies at Rackspace (Jungle Disk's parent) and Amazon S3. What data goes where depends on different policies that I have established. I also have local backup copies as well as a master gold disaster copy stored in a secure offsite location. The idea is that when needed, I can get a good copy restored from my cloud providers quickly, regardless of where I am, if the local copy is not good. On the other hand, experience has already demonstrated that without sufficient network bandwidth services, if I need to bring back 100s of GBytes or TBytes of data quickly, I'm going to be better off bringing back onsite my master gold copy, then applying fewer, smaller updates from the cloud service. In other words, the technologies complement each other.
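A policy-driven placement scheme like the one just described can be sketched as a simple lookup: each data classification maps to the set of destinations that should receive a copy. The policy names and destinations below are hypothetical, loosely modeled on the multi-provider setup above:

```python
# Hypothetical policy table: which destinations receive a copy of data
# tagged with a given classification. Names are made up for illustration,
# loosely modeled on a Jungle Disk-style multi-provider setup.
PLACEMENT_POLICIES = {
    "critical": ["local_disk", "rackspace", "amazon_s3", "offsite_gold"],
    "important": ["local_disk", "amazon_s3"],
    "scratch": ["local_disk"],
}


def destinations_for(policy: str) -> list:
    """Return the copy destinations for a classification; anything
    unrecognized falls back to a local-only copy."""
    return PLACEMENT_POLICIES.get(policy, ["local_disk"])


print(destinations_for("critical"))
print(destinations_for("scratch"))
```

The value of making the policy explicit is that you can see at a glance how many independent copies each class of data gets, and where a single-provider outage would leave you.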


By the way, a lesson learned here is that once my first copy is made, which has data footprint reduction (DFR) techniques applied (e.g. compress, de-dupe, optimize, etc.), later copies occur very fast. However, subsequent restores of those large files or volumes also take longer to retrieve from the cloud vs. sending up changed versions. Thus be aware of backup vs. restore times, something which will apply to any cloud provider and can be mitigated by appliances that do local caching. However, also keep in mind that if a disaster occurs, will your local appliance be affected and its cache rendered useless?
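The backup-vs-restore asymmetry comes down to simple arithmetic: a full restore moves the entire data set while incrementals move only the changes. A quick calculation, assuming an idealized, fully utilized link with no protocol overhead (the sizes and link speed are illustrative):

```python
def transfer_hours(data_gb: float, link_mbps: float) -> float:
    """Hours to move data_gb over a link of link_mbps.

    Idealized: assumes full link utilization and no protocol overhead,
    using decimal GB (1 GB = 10^9 bytes)."""
    bits = data_gb * 8 * 1e9              # GB -> bits
    seconds = bits / (link_mbps * 1e6)    # Mbps -> bits per second
    return round(seconds / 3600, 1)


# Restoring 1 TB (1000 GB) from the cloud over a 20 Mbps link,
# vs. shipping the gold copy and applying 10 GB of changes:
print(transfer_hours(1000, 20))
print(transfer_hours(10, 20))
```

Roughly 111 hours for the full pull vs. about an hour for the deltas in this scenario, which is exactly why the gold copy plus cloud-hosted updates complement each other.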


Getting back to the AWS storage gateway, my second impression is that at first it sounded great.


However, then I realized it only supports iSCSI, and FWIW, nothing wrong with iSCSI; I like it and recommend using it where applicable, even though I'm not using it. I would like to have seen NAS (either NFS and/or CIFS) support for the gateway, making it easier in my scenario for different applications, servers and systems to use and leverage the AWS services, something that I can do with my other gateways provided via different software tools. Granted, for those environments that are already using iSCSI on the servers that will be using the AWS storage gateway, this is a non issue, while for others it is a consideration, including the cost (time) to factor in to prepare your environment for using the capability.


Depending on the amount of storage you have in your environment, the next item that caught my eye may or may not be an issue: the iSCSI gateway supports volumes of up to 1TB and up to 12 of them, hence a maximum capacity of 12TB under management. This can be gotten around by using multiple gateways; however, the increased complexity balanced against the benefit of the functionality is something to consider.


Third impression: Dig deeper, learn more, address various questions

This leads up to my third impression: the need to dig deeper into what the AWS storage gateway can and cannot do for various environments. I can see where it can be a fit for some environments, while for others, at least in its beta version, it will be a non starter. In the meantime, do your homework and look around at other options; ironically, by launching a gateway service, Amazon may reinvigorate the marketplace of some of the standalone or embedded cloud gateway solution providers.


What is needed for using AWS storage gateway

In addition to having an S3 account, you will need to acquire, for a monthly fee, the storage gateway appliance, which is software installed into a VMware ESXi hypervisor virtual machine (VM). The requirements are a VMware ESXi hypervisor (v4.1) on a physical machine (PM) with at least 7.5GB of RAM and four (4) virtual processors assigned to the appliance VM, along with 75GB of disk space for the Open Virtual Appliance (OVA) image installation and data. You will also need a properly sized network connection to Amazon, plus iSCSI initiators on Windows Server 2008, Windows 7 or Red Hat Enterprise Linux.


Note that the AWS storage gateway beta is optimized for block write sizes greater than 4KBytes and warns that smaller IO sizes can cause overhead resulting in lost storage space. This is a consideration for systems that have not yet changed their file systems and volumes to use the larger allocation sizes.
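To see why small writes waste space under a larger allocation unit, consider this back-of-the-envelope sketch. It is a simplified model (every write assumed to consume whole allocation units), not the gateway's actual allocation behavior:

```python
import math


def allocation_overhead(write_size_bytes: int, unit_bytes: int = 4096) -> float:
    """Fraction of allocated space wasted when a write of
    write_size_bytes consumes whole allocation units (simplified model)."""
    used_units = math.ceil(write_size_bytes / unit_bytes)
    allocated = used_units * unit_bytes
    wasted = allocated - write_size_bytes
    return wasted / allocated


print(allocation_overhead(512))    # 512-byte writes waste 87.5% of each 4 KB unit
print(allocation_overhead(8192))   # writes aligned to the unit waste nothing
```

In other words, a file system still issuing 512-byte writes could burn several times the logical data size in allocated space, which is the overhead the beta documentation warns about.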


Some closing thoughts, tips and comments:

  • Congratulations to Amazon for introducing and launching an AWS branded storage gateway.
  • Amazon brings the value of trust to a cloud relationship.
  • Initially I was excited about the idea of using a gateway with which any of my systems could use my S3 storage pools, vs. using gateway access functions that are part of different tools such as my backup software or via Amazon web tools. Likewise, I was excited by the idea of having an easy to install and use gateway that would allow me to grow in a cost effective way.
  • Keep in mind that this solution, at least in its beta version, DOES NOT replace your existing iSCSI based storage needs; instead it complements what you already have.
  • I hope Amazon listens carefully to what their customers and prospects want vs. need to evolve the functionality.
  • This announcement should reinvigorate some of the cloud appliance vendors as well as those who have embedded functionality to Amazon and other providers.
  • Keep bandwidth services and optimization in mind both for sending data as well as for when retrieving during a disaster or small file restore.
  • In concept, the AWS storage gateway is not all that different from appliances that do snapshots and other local and remote data protection, such as those from Actifio, EMC (RecoverPoint), FalconStor, or dedicated gateways such as those from Nasuni among others.
  • Here is a link to additional AWS storage gateway frequently asked questions (FAQs).
  • If the AWS gateway were available with a NAS interface, I would probably be activating it this afternoon, even with some of their other requirements and costs aside.
  • I'm still formulating my fourth impression, which is going to take some time; perhaps if I can get Amazon to help sell more of my books so that I can afford to test the entire solution leveraging my existing S3, EC2 and EBS accounts, I might do so in the future. Otherwise, for now, I will continue to research.
  • To learn more about the AWS storage gateway beta, check out this free Amazon web cast on February 23, 2012.


To learn more about cloud based data protection, data footprint reduction, cloud gateways, access and management, check out my book Cloud and Virtual Data Storage Networking (CRC Press), which is of course available on Amazon Kindle as well as via hard cover print copy also available at


Ok, nuff said for now, I need to get back to some other things while thinking about this all some more.


Cheers gs

I'm in the process of wrapping up 2011 and getting ready for 2012; here is a list of the top 25 new posts from this past year at StorageIOblog.


Looking back, here is a post about industry trends, thoughts and perspective predictions for 2010 and 2011 (preview 2012 and 2013 thoughts and perspectives here).


Here are the top 25 new blog posts from 2011


Check out the companion posts of the top 25 all time posts here as well as 2012 and 2013 predictions preview here.


Ok, nuff said for now


Cheers gs

2011 is almost over, so it's wrap up time for the year as well as getting ready for 2012.


Here is a link to a post of the top 25 new posts that appeared on StorageIOblog in 2011.


As a companion to the above,  here is a link to the all time top 25 posts from StorageIOblog.


Looking back, here is a post about industry trends, thoughts and perspective predictions for 2010 and 2011 (preview 2012 and 2013 thoughts and perspectives here).


I'm still finalizing my 2012 and 2013 predictions and perspectives, which are a work in progress; however, here is a synopsis:


  • Addressing storage woes at the source: Time to start treating the source of data management and protection including backup challenges instead of or in addition to addressing downstream target destination topics. 
  • Big data and big bandwidth meet big backup: 2011 was abuzz with big data and big bandwidth, so 2012 will see the realization that big backup needs to be addressed. Also in 2012 there will be continued realization that many have been doing big data and big bandwidth, and thus also big backups, for many years if not decades before the current big buzzword became popular.
  • Little data does not get left out of the discussion even though younger brother big data gets all of the press and praise. Little data may not be the shining diva it once was; however, the revenue annuity stream will keep many software, tools, server and storage vendors afloat while customers continue to rely on the little data darling to run their business.


  • Cloud confusion finds clarity on the horizon: Granted, there will be plenty more cloud FUD and hype, cloud washing and cleaning going around; however, 2012 and beyond will also find organizations realizing where and how to use different types of clouds (public, private, hybrid) to meet various needs, from SaaS and AaaS to PaaS to IaaS and other variations of XaaS. Part of the clarification that will help remove the confusion will be that there are many different types of cloud architectures, products, stacks, solutions and services to address various needs. Another part of the clarification will be discussion of what needs to be added to clouds to make them more viable for both new as well as old or existing applications. This means organizations will determine what they need to do to move their existing applications to some form of a cloud model while understanding how clouds coexist with and complement what they are currently doing. Cloud conversations will also shift from a low cost or free focus, expanding to discussions around value, trust, quality of service (QoS), SLOs, SLAs, security, reliability and related themes.


Industry Trends and Perspectives


  • Cloud and virtualization stack battles: The golden rule of virtualization and clouds is that whoever controls the management and software stacks controls the gold. Hence, watch for more positioning around management and enablement stacks as well as solutions to see who gains control of the gold.


  • Data protection modernization: Building off the first point above, data protection modernization over the past several years has been focused on treating the symptoms of downstream problems at the target or destination. This has involved swapping out or moving media around and applying data footprint reduction (DFR) techniques downstream to give near term tactical relief, as has been the case with backup, restore, BC and DR for many years. Now the focus will start to expand to addressing the source of the problem, which is an expanding data footprint upstream, or at the source, using different data footprint reduction tools and techniques. This also means using different metrics, including keeping performance and response time in perspective as part of reduction rates vs. ratios, while leveraging different techniques and tools from the data footprint reduction tool box. In other words, it's time to stop swapping out media like changing tires that keep going flat on a car: find and fix the problem, change the way data is protected (and when) to cut the impact downstream. This will not happen overnight; however, with virtualization and cloud activities underway, now is a good time to start modernizing data protection.


  • End to End (E2E) management tools: Continued focus around E2E tools and capabilities to gain situational awareness across different technology layers.


  • FCoE and Fibre Channel continue to mature: One sure sign that Fibre Channel over Ethernet (FCoE) is continuing to evolve, mature and gain initial traction is the increase in activity declaring it dead or dumb or similar things. FCoE is still in its infancy, while Fibre Channel (FC) is in the process of transitioning to 16Gb with a roadmap that will enable it to continue for many more years. As FCoE continues to ramp up over the next several years (remember, FC took several years to get where it is today), continued FC enhancements will give options for those wishing to stick with it while gaining confidence with FCoE, iSCSI, SAS and NAS.


  • Hard drive shortages drive revenues and profits: Some have declared that the recent HDD shortages due to the Thailand flooding will cause solid state devices (SSD) using flash memory to dramatically grow in adoption and deployment. I think that both single level cell (SLC) and multi level cell (MLC) flash SSDs will continue to grow in deployments counted in units shipped as well as revenues, and hopefully also margin or profits. However, I also think that with the HDD shortage and continued demand, vendors will use the opportunity to stabilize some of their pricing, meaning less discounting while managing the inventory, which should mean more margin or profits in a quarter or two. What will be interesting to watch will be whether SSD vendors drop their margin in an effort to increase units shipped and deployed to show market revenue and adoption growth while HDD margins rise.


Industry Trends and Perspectives


  • QoS, SLA/SLOs part of cloud conversations: Low cost or cost avoidance will continue to be the focus of some cloud conversations. However, with metrics and measurements to make informed decisions, discussions will expand as QoS, SLOs, SLAs, security, mean time to restore or return information, privacy, trust and value also enter the picture. In other words, clouds are growing up and maturing for some, while their existing capabilities become discovered by others.


  • Clouds are a shared responsibility model: The cloud blame game when something goes wrong will continue; however, there will also be a realization that as with any technology or tool, there is a shared responsibility. This means that customers accept responsibility for how they will use a tool, technology or service, the provider assumes responsibility, and both parties have a collective responsibility.


  • Return on innovation is the new ROI: For years, no, make that decades, a popular buzz term has been return on investment, the companion of total cost of ownership. Both ROI and TCO as you know and like (or hate) them will continue to be used; however, for situations that are difficult to monetize, a new variation exists. That new variation is return on innovation, which is the measure of intangible benefits derived from how hard products are used to derive value for or of soft products and services delivered.

  • Solid State Devices (SSD) confidence: One of the barriers to flash SSD adoption has been cost per capacity, with another being confidence in reliability and data consistency over time (aka duty cycle wear and tear). Many enterprise class solutions have used single level cell (SLC) flash SSD, which has better endurance, duty cycle or wear handling capabilities; however, that benefit comes at the cost of a higher price per capacity. Consequently, vendors are pushing multi level cell (MLC) flash SSD, which reduces the cost per capacity but needs extra controller and firmware functionality to manage the wear leveling and duty cycle. In some ways, MLC flash is to SSD memory what SATA high-capacity desktop drives were to HDDs in the enterprise storage space about 8 to 9 years ago. What I mean by that is that more costly high performance disk drives were the norm, then lower cost, higher capacity SATA drives appeared, resulting in enhancements to make them more enterprise capable while boosting the confidence of customers to use the technology. The same thing is happening with flash SSD, in that SLC is more expensive and for many has higher confidence, while MLC is lower cost, higher capacity and gaining the enhancements to take on a role for flash SSD similar to what high-capacity SATA did in the HDD space. In addition to confidence with SSD, new packaging variations will continue to evolve as well.
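Wear leveling, the extra controller work MLC flash needs, can be illustrated with a toy simulation: instead of rewriting the same physical block, the controller steers each write to the least-worn block so erase counts stay even across the device. This is a conceptual sketch of the idea, not a real flash translation layer:

```python
def simulate_wear_leveling(num_blocks: int, num_writes: int) -> list:
    """Toy flash translation sketch: each incoming write is directed to
    the block with the lowest erase count, keeping wear even across
    the device (a conceptual model, not a real controller)."""
    erase_counts = [0] * num_blocks
    for _ in range(num_writes):
        victim = erase_counts.index(min(erase_counts))
        erase_counts[victim] += 1
    return erase_counts


counts = simulate_wear_leveling(num_blocks=8, num_writes=100)
print(counts)
print(max(counts) - min(counts))  # spread stays at most 1 with perfect leveling
```

Without leveling, a hot block would absorb all 100 erases and wear out early; with it, no block is more than one erase ahead of any other, which is how controllers stretch MLC's lower endurance across the whole device.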


  • Virtualization beyond consolidation: The current wave of consolidation of desktops using VDI, along with server and storage aggregation, will continue. However, a trend that has been growing for a couple of years and will take more prominence in 2012 and 2013 is the realization that not everything can be consolidated, yet many things can be virtualized. This means for some applications the focus will not be how many VMs to run per physical machine (PM), but rather how a PM can be more effectively used to boost performance and agility for some applications during part of the day, while being used for other things at other times. For example, a high performance database that normally would not be consolidated can be virtualized to enable agility for maintenance, BC, DR and load balancing, and placed on a fast PM with lots of fast memory, CPU and IO capability dedicated to it. During off hours when little to no database activity is occurring, other VMs can be moved onto that PM and then moved off before the next busy cycle.
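The time-based placement idea above can be sketched as a simple scheduling rule: the fast PM is dedicated to the database VM during the busy window and shared otherwise. The hours, VM names and window are all hypothetical assumptions for illustration:

```python
# Sketch of time-window VM placement: a dedicated fast physical machine (PM)
# hosts only the database VM during busy hours and accepts other VMs off hours.
# Busy window and VM names are hypothetical.

def vms_for_pm(hour, other_vms, busy_start=8, busy_end=20):
    """Return which VMs belong on the fast PM for a given hour (0-23)."""
    if busy_start <= hour < busy_end:
        return ["db-vm"]                 # dedicated during the busy window
    return ["db-vm"] + list(other_vms)   # share the PM during off hours

print(vms_for_pm(10, ["batch-vm", "report-vm"]))  # ['db-vm']
print(vms_for_pm(2,  ["batch-vm", "report-vm"]))  # ['db-vm', 'batch-vm', 'report-vm']
```

In practice a hypervisor scheduler or live-migration policy would enforce this, but the decision logic is essentially this time-window test.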


Industry Trends and Perspectives


  • Will applications be ready to leverage cloud: Some applications and functionality can be moved to cloud environments more easily than others. A question that organizations will start to ask is what prevents their applications or business functionality from going to or using cloud resources, in addition to asking cloud providers what new capabilities they will extend to support old environments. 


  • Zombie list grows: More items will be declared dead, meaning that they are either still very much alive, or have reached stability to the point where some want to see them dead so that their preferred technology or topic can take root. 


  • Some other topics and trends include continued growing awareness that metrics and measurements matter for cloud, virtualization, data and storage networking. This also means a growing awareness that there are more metrics that matter for storage than cost per GByte or TByte, including IOPS, latency or response time, bandwidth, IO size, random and sequential access, along with availability. 2012 and 2013 will see continued respect being given to NAS at both the high end and the low end of the market, from enterprise down to consumer space. Speaking of consumer and SOHO (Small Office Home Office), now that SMB has generally been given respect or at least attention by many vendors, the new frontier will be to move further down market to the lower end of the SMB, which is SOHO, just above the consumer space. Of course some vendors have already closed the gap (or at least on paper, PowerPoint, WebEx or YouTube video) from consumer to enterprise. And of course, buzzword bingo will continue to be a popular game.
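Those storage metrics relate to each other: bandwidth is roughly IOPS times IO size, and per-IO response time bounds the achievable IOPS for a given number of outstanding requests. A quick sketch with purely illustrative numbers:

```python
# How the storage metrics named above fit together. Bandwidth = IOPS x IO size;
# latency and queue depth bound achievable IOPS. Numbers are illustrative only.

def bandwidth_mb_s(iops, io_size_kb):
    """Throughput in MB/s for a given IOPS rate and IO size."""
    return iops * io_size_kb / 1024

def max_iops(latency_ms, queue_depth=1):
    """Upper bound on IOPS given per-IO latency and outstanding IOs."""
    return queue_depth * 1000 / latency_ms

print(bandwidth_mb_s(iops=10_000, io_size_kb=8))   # ~78 MB/s: small random IO
print(bandwidth_mb_s(iops=200, io_size_kb=1024))   # 200 MB/s: large sequential IO
print(max_iops(latency_ms=5, queue_depth=32))      # 6400 IOPS ceiling
```

This is why cost per GByte alone says little: two systems with the same capacity price can differ by orders of magnitude on IOPS, latency and bandwidth.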


  • Oh, btw, DevOps will also appear in your vocabulary if it has not already.


Watch for more on these and other topics in the weeks and months to come, and if you want to read more now, then get a copy of Cloud and Virtual Data Storage Networking. Also check out the top 25 new posts of 2011 as well as some of the all time most popular posts, which can also be seen on various other venues that pick up the full RSS feed or archive feed. Also check out the StorageIO newsletter for more industry trends perspectives and commentary.

Ok, nuff said for now


Cheers gs

I'm in the process of wrapping up 2011 and getting ready for 2012. Here is a list of the top 25 all time posts from StorageIOblog covering cloud, virtualization, servers, storage, green IT, networking and data protection. Looking back, here are 2010 and 2011 industry trends, thoughts and perspective predictions, and looking forward, a 2012 preview here.


Top 25 all time posts about storage, cloud, virtualization, networking, green IT and data protection


Check out the companion post to this, which is the top 25 2011 posts located here, as well as the 2012 and 2013 predictions preview here.


Ok, nuff said for now


Cheers gs

Here (.qt) and here (.wmv) is a video from an interview that I did with Jenny Hamel (@jennyhamelsd6) during the Fall 2011 SNW event in Orlando, Florida.




Topics covered during the discussion include:

  • Importance of metrics that matter for gaining and maintaining IT situational awareness
  • The continued journey of IT to improve customer service delivery in a cost-effective manner
  • Reducing cost and complexity without negatively impacting customer service experience
  • Participating in SNW and SNIA for over ten years on three different continents


Industry Trends and Perspectives

  • Industry trends, buzzword bingo (SSD, cloud, big data, virtualization), adoption vs. deployment
  • Increasing efficiency along with effectiveness and productivity
  • Stretching budgets to do more without degrading performance or availability
  • How customers can navigate their way around various options, products and services
  • Importance of networking at events such as SNW along with information exchange and learning
  • Why data footprint reduction is similar to packing smartly when going on a journey
  • Cloud and Virtual Data Storage Networking (now available on Kindle and other epub formats)


View the video from SNW fall 2011 here (.qt) or here (.wmv).



Check out other videos and podcasts here or at


Speaking of industry trends, check out the top 25 new posts from 2011, along with the top 25 all time posts and my comments (predictions) for 2012 and 2013.


Ok, nuff said for now

Cheers gs