
Greg's Blog

October 31, 2011

Doing more with less, doing more with what you have, or reducing cost has been the mantra for the past several years now.


Does that mean these approaches are being adopted as the new way of doing business, or are they simply a cycle or a temporary situation?


The reality is that many if not most IT organizations are, and will remain, under pressure to stretch their budgets further for the immediate future. Over the past year or two some organizations saw budget increases along with increased demand, while others saw budgets held flat or reduced while still having to support growth. Meanwhile, there is no such thing as an information recession: more data is being generated, moved, processed, stored and retained for longer periods of time.


Industry trend: No such thing as a data recession


Something has to give, as shown in the following figure: one curve shows continued demand and growth, another shows the need to reduce costs, and a third reflects the importance of maintaining or enhancing service level objectives (SLOs) and quality of service (QoS).


Enable growth while removing complexity and cost without compromising service levels


One way to reduce costs is to inhibit growth; another is to support growth while sacrificing QoS, including performance, response time or availability, as a result of over consolidation, excessive utilization or instability from stretching resources too far. Where innovation comes into play is finding and fixing problems vs. moving or masking them, or treating symptoms vs. the real issue and challenge. Innovation also comes into play in identifying both near term tactical as well as longer term strategic means of taking complexity and cost out of service delivery and the resources needed to support it.


For example, determine the different resources and processes involved in delivering an email box of a given size and reliability. Another example is supporting a virtual machine (VM) with a given performance and capacity capability. Yet another scenario is a file share or home directory of a specific size and availability. By streamlining workflows, leveraging automation and other tools to enforce policies, and adopting new best practices, complexity and thereby cost can be reduced. The net result is a lower cost to provide a given service at a specific level which, when multiplied out over many users or instances, yields cost savings as well as productivity gains.


The above is all well and good for the longer term strategic picture of where you want to get to, however what can be done right now, today?


Here are a few tips to do more with what you have while supporting growth demands:

If you have service level agreements (SLAs) and SLOs as part of your service catalog, review with your users what they need vs. what they would like to have. What you may find is that users want or expect a given level of service, yet would be happy moving to a cloud service with lower SLO and SLA expectations if it costs less. That scenario would indicate you are giving users a higher level of service than their actual requirements call for. On the other hand, if you do not have SLOs and SLAs aligned with the cost of services, set them up and review customer or client expectations, needs vs. wants, on a regular basis. You might find you can stretch your budget by delivering a lower (or higher) class of service to meet different users' requirements than what was assumed to be the case. In the case of supporting a better class of service, if an SSD enabled solution can reduce latency or wait times and boost productivity, transactions, page views or revenue per hour, that could prompt a client to request that capability to meet their business needs.


Reduce your data footprint impact in order to support growth using the ABCDs of data footprint reduction (DFR): Archive (email, file, database), Backup modernization, Compression and consolidation, Data management and dedupe, and storage tiering, among other techniques.
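As a rough sketch of how those DFR techniques can compound, here is some back of the envelope arithmetic (the function name and the ratios are my own illustrative assumptions, not measurements from any product; real reduction varies widely by data type and workload):

```python
def effective_footprint(raw_tb, archive_fraction=0.3,
                        compression_ratio=2.0, dedupe_ratio=4.0):
    """Return (reduced active TB, archived TB) after applying DFR.

    Illustrative only: archive a fraction to a cheaper tier, then
    compress and dedupe what remains on primary storage.
    """
    archived = raw_tb * archive_fraction        # moved to a cheaper tier
    active = raw_tb - archived                  # stays on primary storage
    reduced_active = active / (compression_ratio * dedupe_ratio)
    return reduced_active, archived

active, archived = effective_footprint(100.0)
print(f"primary tier shrinks to {active:.2f} TB; {archived:.1f} TB archived")
```

Even with conservative ratios the point stands: a combination of techniques applied across tiers stretches capacity much further than any single technique alone.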


Storage and server virtualization and optimization: use capacity consolidation where practical and IO consolidation to fast storage and SSD where possible. Also review storage configuration including RAID and allocation to identify whether any relatively easy changes can improve performance, availability, capacity and energy impact.
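To show the kind of easy review math involved, here is a quick usable-capacity sketch across common RAID levels (the helper is hypothetical and covers a single group with no hot spares or formatting overhead):

```python
def raid_usable_tb(drives, drive_tb, level):
    """Usable TB for a single RAID group at a few common levels."""
    if level == "raid5":        # one drive's worth of capacity goes to parity
        return (drives - 1) * drive_tb
    if level == "raid6":        # two drives' worth of parity
        return (drives - 2) * drive_tb
    if level == "raid10":       # mirrored pairs: half the raw capacity
        return drives * drive_tb / 2
    raise ValueError(f"unknown RAID level: {level}")

for level in ("raid5", "raid6", "raid10"):
    print(level, raid_usable_tb(8, 2.0, level), "TB usable from 8 x 2TB")
```

The same few lines of arithmetic, applied to your actual configurations, quickly surface where a RAID level change trades capacity for availability or performance.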


Investigate available upgrades and enhancements to your existing hardware, software and services that can be applied to provide breathing room within current budgets while evaluating new technologies.


Find and fix problems vs. chasing false positives that provide near term relief only to have the real issue reappear. Maximize your budgets by identifying where people time and other resources are being spent due to processes, work flows, technology configuration complexity or bottlenecks and address those.


Enhance and leverage existing management measurements to gain more insight, along with implementing new metrics for end to end (E2E) situational awareness of your environment, to enable effective decision making. For example, you may be told to move some function to the cloud because it will be cheaper, yet if you do not have metrics to indicate one way or the other, how can that be an informed decision? If you have metrics that show your cost for the same service being moved to a cloud or managed service provider, along with QoS, SLO, SLA, RTO, RPO and other TLAs, then you can make informed decisions. That decision may still be to move functions to a cloud or other service even if it is in fact more expensive than what you can provide it for, so that your resources can be directed to supporting other important internal functions.
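As a sketch of the kind of metric that enables such an informed decision, consider a simple per-unit cost comparison (the dollar figures and mailbox counts below are made up for illustration; the point is having the number at all, not these specific values):

```python
def cost_per_unit(monthly_cost_usd, units):
    """Monthly cost divided by units delivered (mailboxes, VMs, TB...)."""
    return monthly_cost_usd / units

# Hypothetical: 400 mailboxes delivered in-house vs. by a provider.
internal = cost_per_unit(12_000, 400)
cloud = cost_per_unit(9_600, 400)
print(f"internal ${internal:.2f}/mailbox vs cloud ${cloud:.2f}/mailbox")
```

With a per-unit number in hand, the cloud conversation shifts from opinion to comparison, and as noted above, you may still choose the more expensive option for other reasons, but knowingly.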


Look for ways to reduce the cost of a service delivered as opposed to simply cutting costs. They sound like one and the same; however, if you have metrics and measurements providing situational awareness of what a service costs, you can also look at how to streamline those services, remove complexity, reduce workflow and leverage automation, thereby removing cost. The goal is the same, however how you go about removing cost can have an impact on your return on innovation, not to mention customer satisfaction.


Also be an informed shopper: have a forecast or plan for what you will need and when, along with what you must have (core requirements) vs. what you would like to have or want. When looking at options, balance what is needed, then see whether you can get what you want for little or no extra cost where it adds value or enables other initiatives. Part of being an informed shopper is having the support of the business to procure what you want or need, which means aligning technology resources and their cost to the delivery of business functions and services.


What you need vs. what you want
In a recent interview with the Associated Press (AP), the reporter wanted my comments about spending vs. saving during tough economic times (you can read the story here). Basically my comments were to spend within your means by identifying what you need vs. what you want: what is required to keep the business running, improve productivity and remove cost, as opposed to acquiring nice to have things that can wait. Sure, I would like to have a new 85 to 120" 3D monitor for my workstation that could double as a TV, however I do not need or require it.


On the other hand, I recently upgraded an existing workstation, adding a Hybrid Hard Disk Drive (HHDD) and some additional memory, about a $200 USD investment that is already paying for itself via increased productivity. That is, instead of enjoying a cup of Dunkin' Donuts coffee while waiting for tasks to complete on that system, I'm able to get more done in a given amount of time, boosting productivity.


For IT environments this means looking at expenditures to determine what is needed or required to keep things running while supporting near term strategic and tactical initiatives or pet projects.


For vendors and VARs, if things have not been a challenge yet, they will now need to refine their messages to show more value and return on innovation (ROI): how to help their customers or prospects stretch resources (budgets, people, skill sets, products, services, licenses, power and cooling, floor space) further to support growth while removing costs, without compromising on service delivery. This also means a shift in thinking from short term or tactical cost cutting to longer term strategic approaches for reducing the cost to deliver a service or resource.


Related links pertaining to stretching your resources, doing more with what you have, increasing productivity and maximizing your budget to support growth without compromising on customer service:


Saving Money with Green IT: Time To Invest In Information Factories
Storage Efficiency and Optimization – The Other Green
Shifting from energy avoidance to energy efficiency
Saving Money with Green Data Storage Technology
Green IT Confusion Continues, Opportunities Missed!
PUE, Are you Managing Power, Energy or Productivity?
Cloud and Virtual Data Storage Networking
Is There a Data and I/O Activity Recession?
More Data Footprint Reduction (DFR) Material


What is your take?


Are you and your company going into a spending freeze, or are you still spending, however with constraints placed on discretionary spending?


How are you stretching your IT budget to go further?


Ok, nuff said for now.


Cheers gs

Here is a link to a recent guest post that I was invited to do over at The Virtualization Practice (TVP) pertaining to the recent VMware vSphere 5.0 announcement.


A theme of the vSphere 5.0 launch is reducing complexity, enabling automation, and supporting scaling with confidence for cloud and virtual environments. As a key component for supporting cloud, virtual and dynamic infrastructure environments, vSphere V5.0 includes many storage related enhancements and new features including Storage Distributed Resource Scheduler (SDRS).


Read more here.


Ok, nuff said for now.


Cheers gs

Here is a link to a recent guest post that I was invited to do over at The Virtualization Practice (TVP) pertaining to Getting SASsy, the other shared server to storage interconnect for disk and SSD systems. Serial Attached SCSI (SAS) is better known as an interface for connecting hard disk drives (HDD) to servers and storage systems; however, it is also widely used for attaching storage systems to physical as well as virtual servers.


An important storage requirement for virtual machine (VM) environments with more than one physical machine (PM) server is shared storage. SAS has become a viable interconnect along with other Storage Area Network (SAN) interfaces including Fibre Channel (FC), Fibre Channel over Ethernet (FCoE) and iSCSI for block access.


Read more here.


Ok, nuff said for now.


Cheers gs

A couple of weeks ago I traveled down to Orlando, Florida for a few days to attend the fall 2011 SNW (Storage Networking World), produced in conjunction by IDG Computerworld and the Storage Networking Industry Association (SNIA).




While at the Orlando event, SNIA executive director Leo Legar asked me how many SNWs I had attended, and my response was: on which continent?

My answer was partly in fun however also serious, as I have been attending SNWs (in addition to other SNIA events) for over ten years in both North and South America as well as in Europe, including presenting SNIA tutorials and SNW sessions.


SNW is always good for meeting up with old friends and acquaintances along with meeting new ones including twitter tweeps (hashtag #snwusa #snw2011 @sniacloud @snwusa) and the recent event was no exception. Granted SNW is smaller than it was during its peak in the mid 2000s however it was great to go for a couple of days of meetings, checking out the expo hall and some sessions as well as getting out and about meeting people involved with servers, storage, networking, virtualization, cloud, hardware, software and services.


SNW remains, as its name implies (Storage Networking World), an event about networking: conversations, learning, knowledge exchange, information gathering and meetings, not to mention the hands on lab. I found the two days I was there adequate for the meetings and other activities I had planned, along with time for impromptu meetings. Another observation: during the peak of the large mega SNW events, while there were more meetings, they were also much shorter, along the lines of speed dating, vs. those a couple of weeks ago where there was time for quality conversations.


SNIA Emerald Program


Some of the news at the recent SNW event involved SNIA and their Green Storage Initiative (GSI) announcing the availability of the Emerald program Green IT storage energy metrics that have been in the works for several years. The SNIA Emerald program consists of specifications, taxonomies, metrics and measurement standards to gauge various types of storage power or energy usage and its effectiveness. In other words, yes, Green IT and Green storage are still alive; they are just not as trendy to talk about as they were a few years ago, with a shift in focus towards productivity, effective use and supporting growth to help close the green gap and the missed IT as well as business opportunities.


Also during the recent SNW event, I did a book signing sponsored by SNIA. If you have not done so, check out the SNIA Cloud Storage Initiative (CSI), who arranged for several copies of my new book Cloud and Virtual Data Storage Networking to be given away. Book signings are fun in that I get to meet lots of people and hear what they are doing, encountering, looking for, have done, or are concerned or excited about. It was handy having SNIA CSI material available at the table as I was signing books and visiting with people, to be able to give them information about things such as CDMI, not to mention hearing what they were doing or looking for. Note to SNIA: if we do this again, let's make sure to have someone from the CSI at the table to join in the fun and conversations, as there were some good ones. Learn more about the activities of the SNIA CSI, including their Cloud Data Management Interface (CDMI), here.


SNIA Cloud Storage Initiative (CSI)


Thanks again to SNIA for arranging the book signing event, and for those who were not able to get a copy of my new book before they ran out, my publisher CRC Press (Taylor and Francis) has arranged a special SNIA and SNW discount code. To take advantage of it, go to the CRC Press web site (here) and apply the discount code KVK01 during checkout for catalog item K12375 (ISBN: 9781439851739).


30 percent discount code for Cloud and Virtual Data Storage Networking Book


Thanks again to Wayne Adams (@wma01606), Leo Legar and Michael Meleedy among others who arranged a fantastic fall 2011 SNW event, along with everyone who participated in the book signing and other conversations while in Orlando, and to those who were involved virtually via twitter.


Ok, nuff said for now


Cheers gs

It has been a busy fall 2011, which started out with VMworld 2011 in Las Vegas just before the Labor Day weekend.


At the CXI party in Vegas during VMworld, standing with the NEXUS vMonster; the Las Vegas Strip from the CXI party during VMworld, with Karen of Arcola
Scenes from the CXI party (@cxi) at VMworld 2011


Besides activity in support of the launch of my new book Cloud and Virtual Data Storage Networking (CRC Press), I have been busy with various client research, consulting and advisory projects. In addition to Las Vegas for VMworld, out and about travel for attending conferences and presenting seminars has included visits to Minneapolis (local), Nijkerk Holland and Denver (in the same week) and Orlando (SNW). Upcoming out and about events are scheduled for Los Angeles, Atlanta, Chicago, Seattle and a couple of trips to the San Jose area before the brief Thanksgiving holiday break.


My Sunday virtual office in Nijkerk before a busy week
Beer and bitterballen on the left, coffee machine in Nijkerk on the right


Brouwer Storage Consultancy seminar
Day one of two day seminar in Nijkerk


Instead of automobiles lined up at a train station, it's bicycles in Nijkerk; waiting for the 6:30AM train to Schiphol and on to Denver
Bicycles lined up at the Nijkerk train station, waiting for the 6:30 train to Schiphol


Changing trains in Amersfoort on the way to Schiphol; boarding a Delta A333 from AMS to MSP, then on to DEN
Changing trains on the way to Schiphol to board a flight to MSP and then to DEN


Climbing out of Denver on the way back to MSP; it was a long yet fun week. Evening clouds en route from DEN to MSP
After Denver, back to MSP for a few days before SNW in Orlando


While being out and about I have had the chance to meet and visit with many different people. Here are some questions and comments that I have heard while out and about:


  • What comes after cloud?
  • Are there standards for clouds and virtualization?
  • Should cost savings be the justification for going to cloud, virtual or dynamic environments?
  • How is big data different from traditional stream and flat file analytics and processing using tools such as SAS (Statistical Analysis System)?
  • Is big data only about map reduce and hadoop?
  • Are clouds any less secure or safe for storage and applications?
  • Do clouds and virtualization remove complexity and simplify infrastructures?
  • Are cloud storage services cheaper than buying and managing your own?
  • Is object based storage a requirement for public or private cloud?
  • Do solution bundles such as EMC vBlock and NetApp FlexPods reduce complexity?
  • Why is FCoE taking so long to be adopted and is it dead?
  • Should cost savings be the basis for deciding to do a VDI or virtualization project?
  • What is the best benchmark or comparison for making storage decisions?


In addition, there continues to be plenty of cloud confusion, FUD and hype around public, private, hybrid along with AaaS, SaaS, PaaS and IaaS among other XaaS. The myth that virtualization of servers, storage and workstations is only for consolidation continues. However there are more people beginning to see the next wave of life beyond consolidation where the focus expands to flexibility, agility and speed of deployment for non aggregated workloads and applications. Another popular myth that is changing is that data footprint reduction (DFR) is only about dedupe and backup. What is changing is an awareness that DFR spans all types of storage and data from primary to secondary leveraging different techniques including archive, backup modernization, compression, consolidation, data management and dedupe along with thin provisioning among other techniques.


Archiving for email, database and file systems needs to be rescued from being perceived as only for compliance purposes. If you want or need to reduce your data footprint impact, optimize your storage for performance or capacity, enable backup, BC and DR to be performed faster, or achieve Green IT and efficiency objectives, expand your awareness around archiving. While discussing archiving, the focus is often on the target or data storage medium, such as disk, tape, optical or cloud, along with DFR techniques such as compression and dedupe, or functionality including ediscovery and WORM. Other aspects of archiving that need to be looked at include policies, retention, and application and software plugins for Exchange, SQL, Sharepoint, Sybase, Oracle, SAP, VMware and others.


Boot storms continue to be a common theme for applying solid state devices (SSD) in support of virtual desktop initiatives (VDI). There is, however, a growing awareness and discussion around shutdown storms and day to day maintenance, including virus scans, in addition to applications that increase the number of writes. Consequently the discussions around VDI are expanding to include both reads and writes as well as reduced latency for storage and networks.


Some other general observations, thoughts and comments:


  • Getting into Holland as a visitor is easier than returning to the U.S. as a citizen
  • Airport security screening is more thorough and professional in Europe than in the U.S.
  • Hops add latency to beer (when you drink it) and to networks (time delay)
  • Fast tape drives need disk storage to enable streaming for reads and writes
  • SSD is keeping HDDs alive, HDDs are keeping tape alive, and all their roles are evolving while the technologies continue to evolve.
  • Hybrid Hard Disk Drives (HHDDs) are gaining in awareness and deployments in workstations as well as laptops.
  • Confusion exists around what flat layer 2 networks are for LANs and SANs
  • Click here to view additional comments and perspectives


Ok, nuff said for now


Cheers gs

Over the past several years I have done an annual post about IBM and their XIV storage system, and this is the fourth in what has become a series. You can read the first one here, the second one here, and last year's here and here after the announcement of the IBM V7000.


IBM recently announced the generation 3 (Gen3) version of XIV, along with releasing for the first time public performance comparison benchmarks using the Storage Performance Council (SPC) throughput-oriented SPC2 workload.


The XIV Gen3 is positioned by IBM as having up to four (4) times the performance of earlier generations of the storage system. In terms of speeds and feeds, the Gen3 XIV supports up to 180 2TB SAS hard disk drives (HDD), which provide up to 161TB of usable storage capacity. For connectivity, the Gen3 XIV supports up to 24 8Gb Fibre Channel (8GFC) ports, or for iSCSI, 22 1Gb Ethernet (1 GbE) ports, with a total of up to 360GBytes of system cache. In addition to the large cache to boost performance, other enhancements include leveraging multi core processors along with an internal InfiniBand network to connect nodes, replacing the former 1 GbE interconnect. Note that InfiniBand is only used to interconnect the various nodes in the XIV cluster and is not used for attachment to application servers, which is handled via iSCSI and Fibre Channel.
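A quick arithmetic sanity check on those speeds and feeds (my own back of the envelope math, not from IBM documentation) shows where the usable figure comes from, given that XIV stores data as mirrored copies:

```python
# XIV Gen3 capacity check based on the quoted drive count and size.
drives, drive_tb = 180, 2.0
raw_tb = drives * drive_tb        # raw capacity across all drives
mirrored_tb = raw_tb / 2          # half remains after mirrored data layout
usable_tb = 161                   # IBM's quoted usable capacity
overhead_tb = mirrored_tb - usable_tb   # spare space and system metadata
print(raw_tb, mirrored_tb, overhead_tb)
```

In other words, 360TB raw becomes 180TB after mirroring, and the roughly 19TB gap to the quoted 161TB usable is plausibly consumed by spare capacity and metadata.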


IBM and SPC storage performance history
IBM has a strong history of, if not leads the industry in, benchmarking and workload simulation of their storage systems, including with the Storage Performance Council (SPC) among others. The exception for IBM over the past couple of years has been the lack of SPC benchmarks for XIV. Last year when IBM released their new V7000 storage system, benchmarks including SPC were available close to, if not at, the product launch. I have in the past commented about IBM's lack of SPC benchmarks for XIV to confirm their marketing claims, given their history of publishing results for all of their other storage systems. Now that IBM has released SPC2 results for the XIV, it is only fitting that I compliment them for doing so.


Benchmark brouhaha
Performance workload simulation results can often lead to apples and oranges comparisons, benchmark brouhaha battles, or storage performance games. For example, a few years back NetApp submitted an SPC performance result on behalf of their competitor EMC. Now to be clear on something: I'm not saying that SPC is the best or definitive benchmark or comparison tool for storage or any other purpose, as it is not. However it is representative, and most storage vendors have released some SPC results for their storage systems, in addition to TPC and Microsoft ESRP among others. SPC2 is focused on streaming such as video, backup or other throughput centric applications, whereas SPC1 is centered around IOPS or transactional activity. The metrics for SPC2 are Megabytes per second (MBps) for large file processing (LFP), large database query (LDQ) and video on demand delivery (VOD), for a given price and protection level.
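For a feel of how SPC2 price-performance works out, here is the basic division (the price and throughput figures below are hypothetical, for illustration only, and not from any actual SPC filing):

```python
def price_per_mbps(total_price_usd, spc2_mbps):
    """SPC2 price-performance: dollars per MBps of aggregate throughput."""
    return total_price_usd / spc2_mbps

# Hypothetical system: $500,000 as tested delivering 7,000 SPC2 MBps.
print(f"${price_per_mbps(500_000, 7_000):.2f} per MBps")
```

Since submitted prices include vendor-chosen discounts, comparing this ratio across systems only makes sense alongside the configuration and discount details in each full disclosure report.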


What is the best benchmark?
Simple: your own application with as close to actual workload activity as possible. If that is not possible, then a simulation or workload that most closely resembles your needs.


Does this mean that XIV is still relevant?


Does this mean that XIV G3 should be used for every environment?
Generally speaking no. However its performance enhancements should allow it to be considered for more applications than in the past. Plus with the public comparisons now available, that should help to silence questions (including those from me) about what the systems can really do vs. marketing claims.


How does XIV compare to some other IBM storage systems using SPC2 comparisons?

[Comparison table: cost per SPC2 MBps, storage GBytes, and price as tested for each system]


In the above comparisons, the DS5300 (NetApp/Engenio based) is a dual controller system (4GB of cache per controller) with 128 x 146.8GB 15K HDDs configured as RAID 5, with no discount applied to the price submitted. The V7000, which is based on the IBM SVC along with other enhancements, consists of dual controllers each with 8GB of cache and 120 x 10K 300GB HDDs configured as RAID 5, with just under a 40% discount off list price for the system tested. For the XIV Gen3 system tested, the discount off list price for the submission is about 63%, with 15 nodes, a total of 360GB of cache and 180 2TB 7.2K SAS HDDs configured as mirrors. The DS8800 system with dual controllers has 256GB of cache and 768 x 146GB 15K HDDs configured in RAID 5, with a discount between 40 and 50% off list.


What the various metrics do not show is the benefit of various features and functionality, which should be considered relative to your particular needs. Likewise, if your applications are not centered around bandwidth or throughput, then the above performance comparisons would not be relevant. Also note that the systems above have various discounted prices as submitted, which can be a hint to a smart shopper about where to begin negotiations. You can also do some analysis of the various systems based on their performance, configuration, physical footprint, functionality and cost; the links below take you to the complete reports with more information.


DS8800 SPC2 executive summary and full disclosure report

XIV SPC2 executive summary and full disclosure report

DS5300 SPC2 executive summary and full disclosure report

V7000 SPC2 executive summary and full disclosure report


Bottom line: benchmarks and performance comparisons are just that, comparisons that may or may not be relevant to your particular needs. Consequently they should be used as a tool, combined with other information, to see how a particular solution might fit your specific needs. The best benchmark, however, is your own application running against as realistic a workload as possible, to get a representative perspective of a system's capabilities.


Ok, nuff said

Cheers gs