
Storage I/O industry trends image

Amazon Web Services (AWS) recently announced global availability of Elastic Block Store (EBS) optimized support for four additional Elastic Compute Cloud (EC2) instance types. This support enables optimized performance between Standard or Provisioned IOPS EBS volumes and EC2 instances to meet different bandwidth or throughput needs (learn more about AWS EBS, EC2, S3 and Glacier here).


AWS image


The four EBS-optimized instance types are m3.xlarge, m3.2xlarge, m2.2xlarge and c1.xlarge, which provide dedicated bandwidth or throughput between the EC2 instances and EBS volumes. The performance or bandwidth ranges from 500 Mbits (500 / 8 = 62.5 MBytes) per second to 1,000 Mbits (1,000 / 8 = 125 MBytes) per second depending on the type of instance. As a refresher, EC2 instances (which by the time you read this could change) vary in size and functionality with different amounts of EC2 Compute Units (ECU), number of virtual cores, amount of storage space included, 32 or 64 bit, storage and networking I/O performance, and EBS-optimized or not. In addition to instances, different operating system images can be installed, either licensed from AWS such as various Windows and Unix variants, or you can supply your own.
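The Mbit-to-MByte arithmetic above can be sketched as a quick helper (a minimal illustration; the function name is mine, not an AWS tool):

```python
# Quick sanity check of the Mbit/s to MByte/s conversions above.
# Divide megabits per second by 8 (bits per byte) to get megabytes per second.

def mbits_to_mbytes(mbits_per_sec):
    """Convert a link rate in Mbit/s to MByte/s."""
    return mbits_per_sec / 8.0

for rate in (500, 1000):
    print(f"{rate} Mbit/s = {mbits_to_mbytes(rate)} MByte/s")
    # 500 Mbit/s = 62.5 MByte/s, 1000 Mbit/s = 125.0 MByte/s
```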


Image of AWS EC2 instance


There are also different generations of instances such as M1 (first generation, where one ECU = 1.0 to 1.2 GHz of a 2007-era Opteron or Xeon processor) and M3 (second generation with faster processors), along with low-cost Micro options. There are also other optimized instances, including those for high or large amounts of memory, high CPU or compute processing, clustered compute, high-memory clusters, clustered GPU (e.g. using Nvidia Tesla GPUs), high I/O, and high storage space capacity needs.


Here is the announcement from AWS:



Dear Amazon Web Services Customer,    


We are delighted to announce the global availability of EBS-optimized support for four additional instance types: m3.xlarge, m3.2xlarge, m2.2xlarge, and c1.xlarge. EBS-optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options between 500 Megabits per second and 1,000 Megabits per second depending on the instance type used. The dedicated throughput minimizes contention between EBS I/O and other traffic from your Amazon EC2 instance, providing the best performance for your EBS volumes.    


EBS-optimized instances are designed for use with both Standard and Provisioned IOPS EBS volumes. Standard volumes deliver 100 IOPS on average with a best effort ability to burst to hundreds of IOPS, making them well-suited for workloads with moderate and bursty I/O needs. When attached to an EBS-optimized instance, Provisioned IOPS volumes are designed to consistently deliver up to 2000 IOPS from a single volume, making them ideal for I/O intensive workloads such as databases. You can attach multiple Amazon EBS volumes to a single instance and stripe your data across them for increased I/O and throughput performance.    


Amazon EBS-optimized support is now available for m3.xlarge, m3.2xlarge, m2.2xlarge, m2.4xlarge, m1.large, m1.xlarge, and c1.xlarge instance types, and is currently supported in the US-East (N. Virginia), US-West (N. California), US-West (Oregon), EU-West (Ireland), Asia Pacific (Singapore), Asia Pacific (Japan), Asia Pacific (Sydney), and South America (São Paulo) Regions.    


You can learn more by visiting the Amazon EC2 detail page.    




The Amazon EC2 Team


What this means is that AWS is enabling customers to size their compute instances and storage volumes with more flexibility to meet different needs. For example, EC2 instances can be selected for various compute processing capabilities, amounts of memory, and network and storage I/O performance to volumes. In addition, storage volumes can be chosen based on space capacity, Standard or Provisioned IOPS, bandwidth or throughput performance between the instance and volume, along with data protection such as snapshots.
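The announcement above notes that you can stripe multiple EBS volumes behind one instance for more I/O, but the instance's dedicated EBS throughput eventually becomes the limit. A back-of-envelope sketch (the 1,000 Mbit/s and 2000 IOPS figures come from the announcement; the 16 KB average I/O size is my assumption):

```python
# Back-of-envelope check of whether the instance's dedicated EBS bandwidth,
# rather than the volumes themselves, becomes the bottleneck when striping.
# 1,000 Mbit/s and 2000 IOPS per volume are from the AWS announcement;
# the 16 KB average I/O size is an assumed workload parameter.

EBS_OPTIMIZED_MBITS = 1000   # dedicated throughput for the larger instance types
IOPS_PER_VOLUME = 2000       # max per Provisioned IOPS volume (per announcement)
IO_SIZE_KBYTES = 16          # assumed average I/O size

def achievable_iops(num_volumes):
    """Lesser of aggregate volume IOPS and what the dedicated link can carry."""
    volume_limit = num_volumes * IOPS_PER_VOLUME
    # Mbit/s -> MByte/s -> KByte/s -> I/Os per second at the assumed I/O size
    link_limit = (EBS_OPTIMIZED_MBITS / 8.0) * 1024 / IO_SIZE_KBYTES
    return min(volume_limit, int(link_limit))

print(achievable_iops(1))  # one volume: volume-limited at 2000 IOPS
print(achievable_iops(8))  # eight volumes: link-limited at 8000 IOPS
```

In other words, past a few striped volumes the dedicated 1,000 Mbit/s pipe, not the volumes, caps small-block IOPS for this assumed workload.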

Amazon Web Services (AWS) image


This means that the cost per space capacity of an EBS volume varies based on which AWS Availability Zone it is in, whether it is Standard (lower IOPS performance) or Provisioned IOPS (faster), along with the instance type. In other words, cloud storage is not just about the cost per GByte; it's also about the cost for IOPS, the bandwidth to use it, where it is located (e.g. with AWS, which Availability Zone), type of service, and level of availability and durability, among other attributes.
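The point that cost is more than $/GByte can be made concrete with a simple model. Note the rates below are hypothetical placeholders, not actual AWS pricing; check the AWS pricing page for your region and volume type:

```python
# Illustrative only: cloud block storage cost is space PLUS provisioned IOPS.
# The rates below are hypothetical placeholders, not actual AWS pricing.

GB_MONTH_RATE = 0.10     # $ per GB-month of provisioned space (assumed)
PIOPS_MONTH_RATE = 0.10  # $ per provisioned IOPS-month (assumed)

def monthly_volume_cost(size_gb, provisioned_iops=0):
    """Space cost plus (optional) provisioned IOPS cost for one volume."""
    return size_gb * GB_MONTH_RATE + provisioned_iops * PIOPS_MONTH_RATE

print(monthly_volume_cost(100))        # Standard volume: space only
print(monthly_volume_cost(100, 2000))  # Provisioned IOPS volume: space + IOPS
```

With these placeholder rates, the same 100 GB of capacity costs many times more once performance is provisioned, which is exactly why comparing clouds on $/GByte alone is misleading.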


Additional reading and related items:

  • Cloud conversations: AWS EBS, Glacier and S3 overview (Part I)
  • Cloud conversations: AWS EBS, Glacier and S3 overview (Part II)
  • Cloud conversations: AWS EBS, Glacier and S3 overview (Part III)
  • Cloud conversations: AWS Government Cloud (GovCloud)
  • Cloud conversations: Gaining cloud confidence from insights into AWS outages
  • AWS (Amazon) storage gateway, first, second and third impressions
  • Cloud conversations: Public, Private, Hybrid what about Community Clouds?
  • Amazon cloud storage options enhanced with Glacier
  • Amazon Web Services (AWS) and the NetFlix Fix?
  • Cloud conversation, Thanks Gartner for saying what has been said
  • Cloud and Virtual Data Storage Networking
  • Seven Databases in Seven Weeks

    Continue reading part I (closer look at EBS) here, part II (closer look at S3) here and part III (tying it all together) here.


    Ok, nuff said (for now)

    Cheers Gs

    Storage I/O cloud virtual and big data perspectives


    If your organization, like StorageIO, is a member of the Open Data Center Alliance (ODCA), you may be aware of the resources they make available about cloud, virtualization, security and more. Unlike so many other industry associations or trade groups dominated by vendors, the ODCA has an IT or customer focus, including member-developed best practices, strategies and templates.


    Open Data Center Alliance (ODCA) image


    A good example is the recently released ODCA member BMW Group private cloud strategy document.


    This 24-page document (PDF found here) covers BMW Group's private cloud strategy, which sets the stage for a phased future hybrid cloud. By taking a phased approach, BMW is leveraging and transitioning toward the future while maintaining support for its current environment (including Windows-based systems) as part of a paradigm shift. It is refreshing and good to see how organizations are looking to use cloud as part of a paradigm or IT service delivery model, and not just as a new technology or platform focus.


    ODCA BMW private cloud strategy image


    Topics covered include IaaS along with PaaS for DB, Web, SAP and CSaaS (Corporate Software as a Service) based on the NIST cloud model. Also included are the roles and integration of CMDB, ITSM, ITIL and orchestration in a business-driven vs. technology-driven model. Being business driven means there is a mission statement for the BMW cloud strategy, with objectives aligned to support organization enablement rather than to promote particular tools, technologies or trends, along with design criteria.


    What I like about the BMW strategy is that it is aligned to support the business, as opposed to finding ways to use technology to support the business or to justify why a cloud is needed. In other words, it is something different from those needing a technology, tool, product, standard or service to be adopted.

    Thus, while having been a vendor, the ODCA's customer-focused angle appeals to me from when I was on that side of the table working in IT organizations. On the other hand, for some of you, reading through the BMW document might result in déjà vu from experiences with web-based, client-server, information utilities and other IT service delivery models or paradigms.


    Learn more at the ODCA newsroom
    ODCA BMW cloud strategy document
    ODCA video featuring highlights of the BMW cloud implementation
    Additional ODCA usage models and resources


    If you have not done so, check out and join the ODCA.


    Ok nuff said

    Cheers gs


    A couple of years ago I did a post asking if FCoE was struggling to gain traction, or on a normal adoption course.


    Fast-forward to today: has anybody else noticed that there seems to be less hype and FUD around Fibre Channel over Ethernet (FCoE) than a year or two or three ago?


    Does this mean that FCoE, as the FUD and detractors were predicting, is in fact stillborn with no adoption, no deployment, and dead on arrival?


    Does this mean that FCoE, as its proponents have said, is still maturing, quietly finding adoption and deployment where it fits?


    Does this mean that FCoE, like its predecessors Fibre Channel and Ethernet, is still evolving, expanding from early adopter to a mature technology?


    Does this mean that FCoE is simply forgotten, with software defined networking (SDN) having overshadowed it?


    Does this mean that FCoE has finally lost out and that iSCSI has finally stepped up and is living up to what it was hyped to do ten years ago?


    Does this mean that FC itself, at either 8GFC or 16GFC, is holding its own for now?


    Does this mean that InfiniBand is on the rebound?


    Does this mean that FCoE is simply not fun or interesting, or no longer a shiny new technology, with vendors not spending marketing money and thus people not talking, tweeting or blogging about it?


    Does this mean that those who were either proponents pitching it or detractors despising it have found other things to talk about, from SDN to OpenFlow to IOV to Software Defined Storage (whatever, or whoever's, definition you subscribe to) to cloud, big or little data, and the list goes on?


    I continue to hear of or talk with customer organizations deploying FCoE in addition to iSCSI, FC, NAS and other means of accessing storage for cloud, virtual and physical environments.


    Likewise I see some vendor discussions occurring, not to mention what gets picked up via Google Alerts.


    However, in general the rhetoric both pro and con, hype and FUD, seems to have subsided, at least for now.


    So what gives, what's your take on FCoE hype and FUD?


    Cast your vote and see results here.


    Ok, nuff said

    Cheers gs