
Keeping in mind that there is no such thing as a data or information recession, not to mention that people and data are living longer, there is a need to discuss expanding data footprints. When researching his new article over on SearchSolidstateStorage.com, John Hilliard reached out to ask about SSD, Green IT, and energy efficiency and effectiveness trends and perspectives (you can read the article and my comments here).

 

In the past when Green IT and green storage were mentioned, discussions focused on energy avoidance along with space capacity reduction. While storage efficiency and optimization in the context of space savings and capacity consolidation are part of green storage, so too is storage IO consolidation with SSD. For inactive or less frequently accessed data, storage optimization and efficiency can focus on various data footprint reduction techniques including archiving, backup and data protection modernization, compression, dedupe, data management and deletion, along with storage tiering and thin provisioning, among others.
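To make the dedupe part of data footprint reduction concrete, here is a minimal sketch (my own illustration, not any vendor's implementation) of fixed-block deduplication: identical blocks are detected by hashing, and only the unique blocks would need to be stored.

```python
import hashlib

def dedupe_stats(data: bytes, block_size: int = 4096):
    """Return (logical_blocks, unique_blocks) for fixed-size block dedupe."""
    seen = set()
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    for block in blocks:
        # Identical blocks hash to the same digest, so only one copy is kept.
        seen.add(hashlib.sha256(block).hexdigest())
    return len(blocks), len(seen)

# A repetitive workload, e.g. VM images cloned from one template
data = b"A" * 4096 * 8 + b"B" * 4096 * 2
logical, unique = dedupe_stats(data)
print(f"{logical} logical blocks, {unique} unique: {logical / unique:.0f}:1 reduction")
```

Real products use variable block sizes, stronger metadata and inline vs. post-process policies, but the space-saving principle is the same.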

 

SSD and IO consolidation for Green IT and productivity

 

On the other hand, for active data where performance is important, the focus expands to being more effective and boosting productivity with IO consolidation using SSD and other technologies.
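As a back-of-envelope illustration of IO consolidation (the performance and power figures below are assumptions for illustration, not vendor specifications), compare how many devices a given IOPS workload requires:

```python
import math

def drives_needed(target_iops: float, iops_per_drive: float) -> int:
    """Number of devices required to satisfy an IOPS workload (capacity ignored)."""
    return math.ceil(target_iops / iops_per_drive)

# Illustrative assumptions, not vendor specs: a 15K RPM HDD at ~180 IOPS
# vs. an SSD at ~20,000 IOPS, drawing roughly 15 W and 9 W each.
workload_iops = 36_000
hdds = drives_needed(workload_iops, 180)      # 200 drives
ssds = drives_needed(workload_iops, 20_000)   # 2 drives
print(f"HDDs: {hdds} (~{hdds * 15} W) vs. SSDs: {ssds} (~{ssds * 9} W)")
```

The point is the productivity angle: fewer devices doing more work per watt, rather than energy avoidance alone.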

 

Note that if your data center infrastructure is not efficient, then it is possible that for every watt of energy consumed, a watt (or more) of energy is needed for cooling. However, if your data center cooling is effective, with a resulting low or good PUE, you may not be seeing the 1:1 ratio of storage energy to cooling energy that was more common a few years ago.
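The arithmetic behind that statement can be sketched as follows, using PUE (Power Usage Effectiveness: total facility power divided by IT equipment power) with illustrative numbers of my own choosing:

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_kw

def overhead_per_it_watt(pue_value: float) -> float:
    """Watts of cooling and other overhead per watt of IT load implied by a PUE."""
    return pue_value - 1.0

# A watt of cooling for every IT watt corresponds to a PUE of 2.0
print(overhead_per_it_watt(pue(200.0, 100.0)))            # 1.0 W per IT watt
# A more effective facility: 130 kW total for 100 kW of IT load
print(round(overhead_per_it_watt(pue(130.0, 100.0)), 2))  # 0.3 W per IT watt
```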

 

The Green and Virtual Data Center book

 

IMHO, while reducing carbon footprints is a noble and good thing, if that is your only focus or value proposition for a solution such as SSD or other green technologies and techniques including data footprint reduction, you are missing many opportunities.

 

Have a read of John's article, which includes some of my comments on energy efficiency and effectiveness to support enhanced productivity, along with the other aspect of Green IT: being an economic enabler to avoid missed opportunities.

 

Related and more reading:
Green IT Confusion Continues, Opportunities Missed!
Green IT deferral blamed on economic recession might be result of green gap
Supporting IT growth demand during economic uncertain times
Industry trend: People plus data are aging and living longer
Are large storage arrays dead at the hands of SSD?
EPA Energy Star for data center storage draft 3 specification
How much SSD do you need vs. want?
More storage and IO metrics that matter
What is the best kind of IO? The one you do not have to do
Speaking of speeding up business with SSD storage

 

Ok, nuff said for now

 

Cheers Gs

I received the following press release in my inbox today from the National Advertising Division (NAD) recommending that Oracle stop making certain performance claims about Exadata after a complaint from IBM.

 

Oracle Exadata

 

In case you are not familiar with Exadata, it is a database machine or storage appliance that only supports Oracle database systems (learn more here). Oracle, having bought Sun Microsystems a few years back, moved from being a software vendor that competed with other vendors' software solutions, including those from IBM, while running on hardware from Dell, HP and IBM among others. Now that Oracle is in the hardware business, while you will still find Oracle software products running on their competitors' hardware (servers and storage), Oracle is also more aggressively competing with those same partners, particularly IBM.

 

Hmm, to quote Scooby Doo: Rut Roh!

 

Looks like IBM complained to the Better Business Bureau (BBB) National Advertising Division (NAD), which resulted in the Advertising Self-Regulatory Council (ASRC) making the recommendation below (more about NAD and ASRC can be found here). Based on a billboard that I saw while riding from JFK airport into New York City last week, I would not be surprised if a company with two initials that start with an H and end with a P were to file a similar complaint.

I wonder if the large wall-size Oracle advertisement that used to be in the entryway of the White Plains (IATA: HPN) airport (e.g. in IBM's backyard), welcoming you to the terminal as you get off the airplanes, is still there?

The following is the press release that I received:

 

National Advertising Division (NAD) and ASRC

 

 

For Immediate Release
    Contact: Linda Bean
    212.705.0129

 

NAD Finds Oracle Took Necessary Action in Discontinuing Comparative Performance Claims for Exadata; Oracle to Appeal NAD Decision

New York, NY – July 24, 2012 – The National Advertising Division has recommended that Oracle Corporation discontinue certain comparative product-performance claims for the company's Exadata database machines, following a challenge by International Business Machines Corporation. Oracle said it would voluntarily discontinue the challenged claims, but noted that it would appeal NAD's decision to the National Advertising Review Board.

The advertising claims at issue appeared in a full-page advertisement in the Wall Street Journal and included the following:

  • "Exadata 20x Faster … Replaces IBM Again"
  • "Giant European Retailer Moves Databases from IBM Power to Exadata … Runs 20 Times Faster"

NAD also considered whether the advertising implied that all Oracle Exadata systems are twenty times faster than all IBM Power systems.

The advertisement featured the image of an Oracle Exadata system, along with the statement: "Giant European Retailer Moves Databases from IBM Power to Exadata Runs 20 Times Faster." The advertisement also offered a link to the Oracle website: "For more details oracle.com/EuroRetailer."

IBM argued that the "20x Faster" claim makes overly broad references to "Exadata" and "IBM Power," resulting in a misleading claim, which the advertiser's evidence does not support. In particular, the challenger argued that by referring to the brand name "IBM Power" without qualification, Oracle was making a broad claim about the entire IBM Power systems line of products.

The advertiser, on the other hand, argued that the advertisement represented a case study, not a line claim, and noted that the sophisticated target audience would understand that the advertisement is based on the experience of one customer – the "Giant European Retailer" referenced in the advertisement.

In a NAD proceeding, the advertiser is obligated to support all reasonable interpretations of its advertising claims, not just the message it intended to convey. In the absence of reliable consumer perception evidence, NAD uses its experienced judgment to determine what implied messages, if any, are conveyed by an advertisement. When evaluating the message communicated by an advertising claim, NAD will examine the claims at issue in the context of the entire advertisement in which they appear.

In this case, NAD concluded that while the advertiser may have intended to convey the message that in one case study a particular Exadata system was up to 20 times faster when performing two particular functions than a particular IBM Power system, Oracle's general references to "Exadata" and "IBM Power," along with the bold unqualified headline "Exadata 20x Faster Replaces IBM Again," conveyed a much broader message.

NAD determined that at least one reasonable interpretation of the challenged advertisement is that all – or a vast majority – of Exadata systems consistently perform 20 times faster in all or many respects than all – or a vast majority – of IBM Power systems. NAD found that the message was not supported by the evidence in the record, which consisted of one particular comparison of one consumer's specific IBM Power system to a specific Exadata system.

NAD further determined that the disclosure provided on the advertiser's website was not sufficient to limit the broad message conveyed by the "20x Faster" claim. More importantly, NAD noted that even if Oracle's website disclosure was acceptable – and had appeared clearly and conspicuously in the challenged advertisement – it would still be insufficient because an advertiser cannot use a disclosure to cure an otherwise false claim.

NAD noted that Oracle's decision to permanently discontinue the claims at issue was necessary and proper.

Oracle, in its advertiser's statement, said it was "disappointed with the NAD's decision in this matter, which it believes is unduly broad and will severely limit the ability to run truthful comparative advertising, not only for Oracle but for others in the commercial hardware and software industry."

Oracle noted that it would appeal all of NAD's findings in the matter.

 

###

NAD's inquiry was conducted under NAD/CARU/NARB Procedures for the Voluntary Self-Regulation of National Advertising. Details of the initial inquiry, NAD's decision, and the advertiser's response will be included in the next NAD/CARU Case Report.

About Advertising Industry Self-Regulation: The Advertising Self-Regulatory Council establishes the policies and procedures for advertising industry self-regulation, including the National Advertising Division (NAD), Children's Advertising Review Unit (CARU), National Advertising Review Board (NARB), Electronic Retailing Self-Regulation Program (ERSP) and Online Interest-Based Advertising Accountability Program (Accountability Program). The self-regulatory system is administered by the Council of Better Business Bureaus.

Self-regulation is good for consumers. The self-regulatory system monitors the marketplace, holds advertisers responsible for their claims and practices and tracks emerging issues and trends. Self-regulation is good for advertisers. Rigorous review serves to encourage consumer trust; the self-regulatory system offers an expert, cost-efficient, meaningful alternative to litigation and provides a framework for a self-regulatory response to emerging issues.

To learn more about supporting advertising industry self-regulation, please visit us at: www.asrcreviews.org.

 

 

Linda Bean | Director, Communications,
    Advertising Self-Regulatory Council

Tel: 212.705.0129
  Cell: 908.812.8175
  lbean@asrc.bbb.org

112 Madison Ave.
  3rd Fl.
  New York, NY
  10016

 

http://storageioblog.com/?p=2304

Ok, Oracle is no stranger to benchmark and performance claims controversy, having amassed several decades of experience. Anybody remember the silver bullet database test from the late 80s and early 90s, when Oracle set a record performance except that they never committed the writes to disk?

 

Oracle image

 

Something tells me that Oracle and Uncle Larry (e.g. Larry Ellison, who is not really my uncle) will treat this as any press or media coverage being good coverage, and will probably issue something like "IBM must be worried if they have to go to the BBB."

 

Will a complaint, which I'm sure is not the first to be lodged with the BBB against Oracle, deter customers, or be of more use to IBM sales and their partners in deals vs. Oracle?

 

What's your take?

 

Is this much ado about nothing, a filler for a slow news or discussion day, a break from talking about VMware's acquisition of Nicira or VMware CEO management changes? Perhaps this is an alternative to talking about the CEO of SSD vendor STEC being charged with insider trading, or something other than Larry Ellison buying a Hawaiian island (IMHO he could have gotten a better deal buying Greece), or is this something that Oracle will need to take seriously?

 

Ok, nuff said for now

 

Cheers Gs

Speaking of and about modernizing data protection, back in June I was invited to be a keynote presenter on industry trends and perspectives at a series of five dinner events (Boston, Chicago, Palo Alto, Houston and New York City) sponsored by Quantum (that is a disclosure btw).

 

Industry trends and perspective data protection modernization

 

The theme of the dinner events was an engaging discussion around modernizing data protection with certainty, along with clouds, virtualization and related topics. Quantum and one of their business partner resellers started each event with introductions, followed by an interactive discussion led by me, and then David Chappa (@davidchappa), who tied the various themes to what Quantum is doing along with some of their customer success stories.

 

Themes and examples for these events build on my book Cloud and Virtual Data Storage Networking including:

 

  • Rethinking how, when, where and why data is being protected
  • Big data, little data and big backup issues and techniques
  • Archive, backup modernization, compression, dedupe and storage tiering
  • Service level agreements (SLA) and service level objectives (SLO)
  • Recovery time objective (RTO) and recovery point objective (RPO)
  • Service alignment and balancing needs vs. wants, cost vs. risk
  • Protecting virtual, cloud and physical environments
  • Stretching your available budget to do more without compromise
  • People, processes, products and procedures

 

Quantum is among the industry leaders with multiple technology and solution offerings addressing different aspects of data footprint reduction and data protection modernization. These include solutions for physical, virtual and cloud environments, along with traditional tape, disk-based, compression, dedupe, archive, big data, hardware, software and management tools. A diverse group of attendees has been at the different events, including enterprise and SMB, public, private and government across different sectors.

 

Following are links to some blog posts that covered the first series of events, along with some of the specific themes and discussion points from different cities:

Via ITKE: The New Realities of Data Protection
Via ITKE: Looking For Certainty In The Cloud
Via ITKE: Success Stories in Data Protection: Cloud virtualization
Via ITKE: Practical Solutions for Data Protection Challenges
Via David Chappa's blog

 

If you missed attending any of the above events, more dates are being added in August and September, including stops in Cleveland, Raleigh, Atlanta, Washington DC, San Diego, Connecticut and Philadelphia, with more details here.

 

Ok, nuff said for now, hope to see you at one of the upcoming events.

Cheers Gs

Industry trends and perspective data protection modernization

 

Have you modernized your data protection strategy and environment?

 

If not, are you thinking about updating your strategy and environment?

 

Why modernize your data protection strategy and environment, including backup and restore, business continuance (BC), high availability (HA) and disaster recovery (DR)?

 

Data protection, Modernize data protection, BC, DR, HA, Cloud

 

Is it to leverage new technology such as disk-to-disk (D2D) backups, cloud, virtualization, or data footprint reduction (DFR) including compression or dedupe?

 

Perhaps you have done, or are considering, data protection modernization because somebody told you to, or you read about it or watched a video or webcast? Or perhaps your backup and restore are broken, so it's time to change media or try something different.

 

Let's take a step back for a moment and ask the question: what is your view of data protection modernization?

 

Perhaps it is modernizing backup by replacing tape with disk, or disk with clouds?

Maybe it is leveraging data footprint reduction (DFR) techniques including compression and dedupe?

 

Data protection, data footprint reduction, dfr, dedupe, compress

 

How about, instead of swapping out media, changing backup software?

Or what about virtualizing servers, moving from physical machines to virtual machines?

 

On the other hand, maybe your view of modernizing data protection is around using a different product, ranging from backup software to a data protection appliance, or snapshots and replication.

 

The above and others certainly fall under the broad group of data protection modernization; however, there is another area which is not as much technology as it is techniques, best practices, processes and procedures. That is, revisiting why data and applications are being protected, against what applicable threats, and with what associated business risks.

 

Lost or destroyed data, data protection

 

This means reviewing service needs and wants, including SLAs, SLOs, RTOs and RPOs, which in turn drive what data and applications to protect, how often, how many copies, where those copies are located, and how long they will be retained.
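One way to capture the outcome of such a review is as explicit protection tiers. The following sketch uses hypothetical tier names and numbers (all illustrative, not recommendations) to show how the RPO drives protection frequency per tier, rather than treating everything the same:

```python
from dataclasses import dataclass

@dataclass
class ProtectionTier:
    name: str
    rpo_minutes: int    # maximum tolerable data loss
    rto_minutes: int    # maximum tolerable downtime
    copies: int         # how many copies to keep
    retention_days: int # how long to keep them

# Hypothetical tiers; the numbers are illustrative only.
tiers = [
    ProtectionTier("critical OLTP", rpo_minutes=5, rto_minutes=15, copies=3, retention_days=90),
    ProtectionTier("file shares", rpo_minutes=240, rto_minutes=480, copies=2, retention_days=30),
    ProtectionTier("archive", rpo_minutes=1440, rto_minutes=2880, copies=1, retention_days=2555),
]

def protection_interval(tier: ProtectionTier) -> int:
    """Snapshot or replication interval must be no longer than the RPO."""
    return tier.rpo_minutes

for t in tiers:
    print(f"{t.name}: protect every {protection_interval(t)} min, keep {t.copies} copies")
```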

 

Data protection, Modernize data protection, BC, DR, HA, Cloud, RTO, RPO

 

Modernizing data protection is more than simply swapping out old or broken media like flat tires on a vehicle.

 

To be effective, data protection modernization involves taking a step back from the technology, tools and buzzword-bingo topics to review what is being protected and why. It also means revisiting service level expectations and clarifying wants vs. needs: a want is what you would take if it were free, while a need is what you would still pay for.

 

Data protection, Modernize data protection, BC, DR, HA, Cloud, SLA, SLO, RTO, RPO

 

Certainly technologies and tools play a role; however, simply using new tools and techniques without revisiting data protection challenges at the source will result in new problems that resemble old problems.

 

Data protection, Modernize data protection, BC, DR, HA, SLO, SLA, Cloud

 

Hence, to support growth with a constrained or shrinking budget while maintaining or enhancing service levels, the trick is to remove complexity and costs.

 

Tiered Data protection, Modernize data protection, BC, DR, HA, SLO, SLA, Cloud

 

This means not treating all data and applications the same; stretching your available resources to be more effective without compromising on service is the mantra of modernizing data protection.

 

Ok, nuff said for now, plenty more to discuss later.

 

Cheers Gs

There is a new (free) book that I'm a co-author of, along with Bruce Grieshaber and Larry Jacob (both of LSI), with a foreword by Harry Mason of LSI, President of the SCSI Trade Association, titled SAS SANs for Dummies, compliments of LSI.

 

SAS SANs for Dummies, LSI Edition

 

This new book (ebook and print hard copy) looks at Serial Attached SCSI (SAS) and how it can be used beyond traditional direct attached storage (DAS) configurations to support various types of storage media including SSD, HDD and tape. These configuration options include an entry-level SAN with SAS switches for small clusters or server virtualization, shared DAS, as well as a scale-out back-end solution for NAS, object, cloud and big data storage solutions.

 

Here is the table of contents (TOC) of SAS SANs for Dummies

 

Chapter 1: Data storage challenges

  • Storage Growth Demand Drivers
  • Recognizing Challenges
  • Solutions and Opportunities


Chapter 2: Storage Area Networks

  • Introducing Storage Area Networks
  • Moving from Dedicated Internal to Shared Storage


Chapter 3: SAS Basics

  • Introducing the Basics of SAS
  • How SAS Functions
  • Components of SAS
  • SAS Target Devices
  • SAS for SANs


Chapter 4: SAS Usage Scenarios

  • Understanding SAS SANs Usage
  • Shared SAS SANs Scenarios including:
  • SAS in HPC environments
  • Big data and big bandwidth
  • Database, e-mail, back-office
  • NAS and object storage servers
  • Cloud, web and high-density
  • Server virtualization

 

Chapter 5: Advanced SAS Topics

  • The SAS Physical Layer
  • Choosing SAS Cabling
  • Using SAS Switch Zoning
  • SAS HBA Target Mode


Chapter 6: Nine Common Questions

  • Can You Interconnect Switches?
  • What Is SAS Cable Distance?
  • How Many Servers Can Be In a SAS SAN?
  • How Do You Manage SAS Zones?
  • How Do You Configure SAS for HA?
  • How Does SAS Zoning Compare to LUN Mapping?
  • Who Has SAS Solutions?
  • How Do SAS SANs Compare?
  • Where Can You Learn More?


Chapter 7: Next Steps

  • SAS Going Forward
  • Next Steps
  • Great Takeaways

 

Regardless of whether you are looking to use SAS as a primary SAN interface, leverage it for DAS, or implement back-end storage for big data, NAS, object, cloud or other types of scalable storage solutions, check out and get your free copy of SAS SANs for Dummies here, compliments of LSI.

 

SAS SANs for Dummies, LSI Edition

 

Click here to ask for your free copy of SAS SANs for Dummies compliments of LSI, tell them Greg or StorageIO sent you, and enjoy the book.

 

Ok, nuff said.

 

Cheers Gs

Server and StorageIO industry trends and perspective DAS

 

Following up from my last post over at InfoStor about metrics that matter, here is a link to a new piece that I did on storage vendor benchmarking and related topics. This new post looked at a Storage Performance Council (SPC-1) benchmark that HP did with their P10000 (e.g. 3PAR) storage system, amid assertions by some in the industry that they were short stroking to achieve better performance.
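For readers unfamiliar with the term, short stroking confines IO to a fraction of each HDD's capacity (typically the outer tracks) so head movement is shorter, trading usable capacity for IOPS. A back-of-envelope sketch with assumed, illustrative numbers (not HP's or SPC's figures):

```python
def short_stroke(capacity_gb: float, used_fraction: float,
                 full_stroke_iops: float, short_stroke_gain: float):
    """
    Illustrate the short-stroking trade-off: provision only a fraction of
    each drive's capacity (the outer tracks) to shorten seeks and boost IOPS.
    The gain factor is an assumed illustrative number, not a measurement.
    """
    usable_gb = capacity_gb * used_fraction
    effective_iops = full_stroke_iops * short_stroke_gain
    return usable_gb, effective_iops

usable, iops = short_stroke(600.0, used_fraction=0.25,
                            full_stroke_iops=200.0, short_stroke_gain=1.5)
print(f"{usable:.0f} GB usable per 600 GB drive at ~{iops:.0f} IOPS")
```

This is also why per-drive and per-capacity metrics both matter when reading benchmark results: the IOPS go up while the usable capacity per drive goes down.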

 

HP P10000 (3PAR) storage system

 

I'm surprised some creative technical marketer, blogger or prankster has yet to rework Clarence Carter's (e.g. Dr. CC) iconic song into something about storage performance and capacity short strokin.

 

From the creator of Strokin, check out Clarence Carter's new album

 

Ok, nuff said before I get a visit from the HP truth squads; in the meantime, give HP a hug and some love if so inclined.

 

Cheers Gs

Kudos to Lenovo, who I called yesterday to get a replacement key for my X1 laptop keypad.

 

Lenovo X1 laptop

 

After spending time on their website, including finding the part number, SKU and other information, I could not figure out how to actually order the part. Concerned about calling and getting routed between different call centers, as is too often the case, I finally decided to give the phone route a try.

 

I was surprised, no, shocked at how quick and easy it was once I got routed to the Atlanta Lenovo support center to get what I needed.

 

When I called late in the afternoon, the Atlanta Lenovo agent was able to take my laptop serial number, make and model, and a description of the part needed, all without transferring me to anyone else. They then made arrangements for not just a new replacement key, but an entire new keyboard, with total phone time probably less than 15 minutes.

 

The next morning by 10:30AM CT a box with the new replacement keyboard arrived. In between calls and other work, in a matter of minutes the old keyboard was removed and the new one installed and tested, and I now get to type normally instead of dealing with a broken Y key.

 

In less than 24 hours from making the call, UPS arrived back to pick up the old keyboard and return it to the depot.

 

Here are some photos for you propeller heads (tech heads or geeks), beginning with the X1 keyboard and broken key before the replacement.

 

Lenovo X1 keyboard replacement

 

The following shows the keyboard removed, looking toward the screen, with the keyboard flat cables still installed. Note that the small black connectors (two of them) flip up and the cables slide out (or in for installation).


Lenovo X1 keyboard replacement

 

In this photo, you can see one of the two keyboard connectors, plus where the Samsung SSD I installed replaces the HDD that the X1 shipped with. Also shown is the Sierra Wireless 4G card that I use while traveling, which provides an alternative when others are trying to figure out how to use the available public WiFi.


Lenovo X1 keyboard replacement

In this image, you can see the DRAM (e.g. memory) along with the two connectors where the keyboard cables attach, before the cables have been reconnected.


Lenovo X1 keyboard replacement

 

With the new cables connected and the keyboard reinstalled and tested, the old keyboard was boxed up, the return shipping sticker applied, UPS called, and the box picked up, on its way back to Lenovo.


Lenovo X1 keyboard replacement

 

For that, kudos to Lenovo for delivering on what in the past was taken for granted as good customer service and support, however these days is all too often the exception.

 

Next time somebody asks why I use Lenovo ThinkPads, guess which story I will tell them.

 

Ok, nuff said for now

 

Cheers Gs


 

For those not familiar with Quest, they are a software company, not to be confused with the telephone communications company formerly known as Qwest (now known as CenturyLink).

 

Both Dell and Quest have been on software-related acquisition initiatives the past few years, with Quest having purchased vKernel, Vizioncore (vRanger virtualization backup), and BakBone (who had acquired Alavarii and Asempra) for traditional backup and data protection, among others. Not to be outdone, as well as purchasing Quest, Dell has also more recently bought AppAssure (Disclosure: StorageIOblog site sponsor) for data protection, SonicWALL and Wyse, in addition to some other recent purchases (ASAP, Boomi, Compellent, Exanet, EqualLogic, Force10, InsightOne, KACE, Ocarina, Perot, RNA and Scalent among others).

 

What does this mean?

Dell is expanding the scope of their business with more products (hardware, software), solution bundles, services and channel partnering opportunities. Some of the software tools and focus areas that Quest brings to the Dell table or portfolio include:

 

  • Database management (Oracle, SQL Server)
  • Data protection (virtual and physical backup, replication, BC, DR)
  • Performance monitoring (DCIM and IRM) of applications and infrastructure
  • User workspace management (application delivery)
  • Windows server management (migrate and manage, AD, Exchange, SharePoint)
  • Identity and access management (security, compliance, privacy)

 

What does Dell get by spending over $2B USD on Quest?

  • Additional software titles or products
  • More software developers for their software group
  • Sales people to help promote, partner and sell software solutions
  • Create demand pull for other Dell products and services via software
  • Increase its partner reach via existing Quest VARs and business partners
  • Extend the size of the Dell software and intellectual property (IP) portfolio
  • New revenue streams that complement existing products and lines of business
  • Potential for a better rate of return on some of its $12B USD in cash or equivalents

Is this a good move for Dell?
Yes, for the above reasons.

 

Is there a warning in this for Dell?
Yes. They need to execute, keeping the Quest team, along with their other teams, focused on their respective partners, products and market opportunities while expanding into new areas. Dell also needs to leverage Quest to further its cause in creating trust, confidence and strategic relationships with channel partners to reach new markets in different geographies. In addition, Dell needs to articulate its strategy and positioning of the various solutions to avoid products being perceived as competing vs. complementing each other.

 

Additional Dell related links:
Dell Storage Customer Advisory Panel (CAP)
Dell Storage Forum 2011 revisited
Dude, is Dell doing a disk deal again with Compellent?
Data footprint reduction (Part 2): Dell, IBM, Ocarina and Storwize
Post Holiday IT Shopping Bargains, Dell Buying Exanet?
Dell Will Buy Someone, However Not Brocade (At least for now)

 

Ok, nuff said for now

 

Cheers Gs

Dell Storage Customer Advisory Panel (CAP)

 

Recently I was asked by Dell to moderate and host their North America storage customer advisory panel (CAP) session (twitter #storagecap) that followed their 2012 storage forum (see comments about 2011 storage forum here) event in Boston (Disclosure Dell covered my trip to Boston).

 

This was an interesting event in many ways, because it was a diverse group, some of whom were long-time EqualLogic and Compellent customers of various sizes (both before and after the acquisitions), and some of whom were Dell customers who have yet to buy storage from them.

 

Dell Storage Customer Advisory Panel (CAP)
Click on above image for video feed

 

Beyond the diversity of the types of customers and their relationships with Dell, what also made this event interesting was that it was live streamed with professionally produced video and audio, in addition to twitter and other social media coverage. However, what made the event even more interesting IMHO was that, being a live event (watch the replay here) on video with audio as well as on twitter, the attendees were urged to speak freely, with conversation among themselves providing feedback and commentary for Dell.

 

Sure, there were songs of praise when and where deserved; however, unlike some made-for-social-media vendor events that tend to be closer to sales pitches, this event also included some tough love feedback and comments for Dell, their products, services and event planners.

 

Dell Storage Customer Advisory Panel (CAP)
Dell Storage CAP illustrators aka @ThinkLink

 

Oh, did I mention that, other than some members of the Dell social media team (@dell_storage) who were in the room to help facilitate and coordinate the event itself, the real discussions were free and independent of Dell employees (other than reminders to avoid going into NDA land while live on the video and audio feed)? Dell had @ThinkLink doing live illustrations, capturing as images the discussion themes, topics and points of interest during the events, examples of which you can see in the following images.

 

Dell Storage Customer Advisory Panel (CAP)
Dell Flickr images from the Storage CAP session

 

Kudos to Dell for having the courage, conviction and confidence to have a customer advisory panel event live streamed that also allowed the attendees to speak their minds free of a script or talking points guide. The session included having each participant take a turn putting themselves in the general manager's chair and saying what they would do, why, and how they would address customers and prospects. After all, it's one thing to sit in the cheap seats playing armchair quarterback saying what you want; it's another saying why you need it, what the priority and impact are or would be, and how to get the message to the customer. Some of the topics covered included AppAssure for data protection, Compellent, EqualLogic and other recent acquisitions, products, service, support and community forums.

 

Thanks to all who participated including @ThinkLink (illustrators), Dell Storage social media team (@dell_storage), Alison Krause (@AlisonDell), Gina Rosenthal (@gminks), Michelle Richard (@meesh_says) and particularly the participants Pete Koehler (@petergavink), Roger Lund (@rogerlund), Luigi Danakos (@nerdblurt), Dan Marbes (@danmarbes), Jeff Hengesbach (@jeffhengesbach), Steve Mickeler (@shmick), Ed Aractingi (@earactingi) and Dennis Heinle (@dheinle).

 

Ok, nuff said for now

 

Cheers Gs

US EPA Energy Star for Data Center Storage
Uncle Sam wants you to be energy efficient and effective with optimized data center storage

 

The U.S. EPA is ready to release DRAFT 3 of the Energy Star for data center storage specification and has an upcoming web session that you can sign up for if you are not on their contact list of interested stakeholders. If you are not familiar with the EPA Energy Star for data center storage program, here is some background information.

 

Thus if you are interested, see the email and information below, sign up and take part if so inclined, as opposed to later saying that you did not have a chance to comment.

    
                                                                                                              
                                                                          

Dear ENERGY STAR® Data Center Storage Manufacturer or Other Interested Party:

The U.S. Environmental Protection Agency (EPA) would like to announce the release of the Draft 3 Version 1.0 ENERGY STAR Specification for Data Center Storage. The draft is attached and is accompanied by a cover letter and Draft Test Method. Stakeholders are invited to review these documents and submit comments to EPA via email to storage@energystar.gov by Friday, July 27, 2012.

EPA will host a webinar on Wednesday, July 11, 2012, tentatively starting at 1:00PM EST. The agenda will be focused on elements from Draft 3, Product Families, and other key topics. Please RSVP to storage@energystar.gov no later than Tuesday, July 3, 2012 with the subject "RSVP – Storage Draft 3 specification meeting."

If you have any questions, please contact Robert Meyers, EPA, at Meyers.Robert@epa.gov or (202) 343-9923; or John Clinger, ICF International, at John.Clinger@icfi.com or (202) 572-9432.

Thank you for your continued support of the ENERGY STAR program.

For more information, visit: www.energystar.gov

This message was sent to you on behalf of ENERGY STAR. Each ENERGY STAR partner organization must have at least one primary contact receiving e-mail to maintain partnership. If you are no longer working on ENERGY STAR, and wish to be removed as a contact, please update your contact status in your MESA account. If you are not a partner organization and wish to opt out of receiving e-mails, you may call the ENERGY STAR Hotline at 1-888-782-7937 and request to have your mass mail settings changed. Unsubscribing means that you will no longer receive program-wide or product-specific e-mails from ENERGY STAR.

    

 

  

 

Ok, you have been advised, nuff said for now

 

Cheers Gs

Microsoft Windows 7 and TechNet
Image courtesy of Microsoft.com

 

Recently I added a new thin laptop to the fleet of Windows 7 laptops and workstations that I have in active use. The other devices run Windows 7 Ultimate 32 bit with BitLocker security encryption enabled. However I ran into a problem getting BitLocker to work on the 64 bit version of Windows 7 Professional.

 

Yes I know I should not be using Windows, and I also have plenty of iDevices and other Apple products lying around. Likewise, to the security pros and security arm-chair quarterbacks, I know I should not be using BitLocker and should instead be using TrueCrypt, of which I have done some testing and may migrate to in the future, along with self-encrypting devices (SED). However let's stay on track here ;).

 

Lenovo Thinkpad X1
Image courtesy of Lenovo.com

 

The problem that I ran into with my new Lenovo X1 was that it came with Windows 7 Professional 64 bit, which has a few surprises when trying to turn on BitLocker drive encryption. Initializing and turning on the Trusted Platform Module (TPM) was not a problem; however, for those needing to figure out how to do that, check out this Microsoft TechNet piece.

 

The problem was as simple as there not being a tab or easy way to enable BitLocker Drive Encryption with Windows 7 Professional 64 bit. After spending some time searching around various Microsoft and other sites to figure out how to hack, patch, script and do other things that would take time (and time is money), it dawned on me. Could the solution to the problem be as simple as upgrading from Windows 7 Professional to Windows 7 Ultimate?

 

Microsoft Windows 7 Ultimate
Windows 7 image courtesy of Amazon.com

 

The answer was to go to the Microsoft store (or Amazon among other venues) and purchase the upgrade for $139.21 USD (with tax).

 

Once the transaction was complete, the update was applied automatically, and within minutes I had BitLocker activated on the Lenovo X1 (TPM was previously initialized and turned on), a new key protected and saved elsewhere, and the internal Samsung 830 256GB Solid State Device (SSD) initializing and encrypting. Oh, fwiw, yes the encryption of the 256GB SSD took much less time than on a comparable Hard Disk Drive (HDD) or even an HHDD (Hybrid HDD).

 

Could I have saved the $139.21 and spent some time on a workaround? Probably, however I did not have the time or interest to go that route, and IMHO for my situation the upgrade was a bargain.

 

Sometimes spending a little money, particularly if you are short on (or place a value on) your time, can be a bargain, as opposed to when you are short on money however long on time.

 

I found the same to be true when I replaced the internal HDD that came with the Lenovo X1 with a Samsung 256GB SSD in that it improved my productivity for writing and saving data. For example, in the first month of use I estimate easily two to three minutes of time saved per day waiting on things to be written to HDDs. In other words, two to three minutes times five days (10 to 15 minutes) times four weeks (40 to 60 minutes) starts to add up (e.g. small amounts or percentages spread over a large interval add up); more on using and justifying SSD in a different post.
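The arithmetic above can be sketched in a few lines; the inputs are the rough figures from the text (two to three minutes per day, five days per week, four weeks), so adjust them to your own environment.

```python
# Back-of-envelope sketch of the SSD time savings described above.

def monthly_minutes_saved(minutes_per_day, days_per_week=5, weeks=4):
    """Minutes saved per month from reduced write/wait time."""
    return minutes_per_day * days_per_week * weeks

low = monthly_minutes_saved(2)   # 2 minutes/day
high = monthly_minutes_saved(3)  # 3 minutes/day
print(f"Estimated monthly savings: {low} to {high} minutes")
```

Multiply the monthly minutes by what an hour of your time is worth and you have a simple way to weigh the savings against the cost of the SSD.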

 

Samsung SSD
Samsung SSD image courtesy of Amazon.com

 

If your time is not of value or you have a lot of it, then the savings may not be as valuable. On the other hand, if you are short on time or have a value on your time, you can figure out what the benefits are quite quickly (e.g. return on investment or traditional ROI).

 

The reason I bring the topic of time and money into this discussion about BitLocker is to make a point that there are situations where spending some time has value, such as for learning, the experience, fun or the simple entertainment aspect, not to mention a shortage of money. On the other hand, sometimes it is actually cheaper to spend some money to get to the solution or result as part of being productive or effective. For example, other than spending some time browsing various sites to figure out that there was an issue with Windows 7 Professional and BitLocker, time that was educational and interesting, the money spent on the simple upgrade was worth it in my situation.

 

Ok, nuff said for now

 

Cheers Gs

Storage I/O Industry Trends and Perspectives

 

Recently while at EMCworld in Las Vegas (thanks btw to EMC who covered coach airfare and 3 nights hotel) I had the opportunity, along with a group of other industry analysts and advisors, to have a series of small group meeting sessions with key EMC leadership.

 

EMC world

 

These sessions included time with Chairman of the Board of Directors and Chief Executive Officer Joe Tucci, Chairman of VCE Michael Capellas (who is also on the Cisco Board of Directors), President and Chief Operating Officer, EMC Information Infrastructure and Cloud Services Howard Elias, President and Chief Operating Officer, EMC Information Infrastructure Products Pat Gelsinger, and Executive Vice President and Chief Marketing Officer (CMO) Jeremy Burton.

 

Joe Tucci is always fun to listen to and engage with in small groups and conveys a cordial confidence when you meet face to face. Howard Elias, who is now heading up the services business, talked about walking the talk with services, public and private cloud, including what EMC is doing internally. Michael Capellas had some good insight into what he is doing with VCE, along with his role on the Cisco BOD. Pat Gelsinger had some interesting points, however he seemed a bit more reserved than in earlier sessions. Jeremy Burton, who is normally associated with the effective marketing and movie-style campaigns at EMC, did not use any backdrops, visual aids, theatrics or Vegas style entertainment during his session.

 

Of the above-mentioned executives, the one that impressed me the most (and in talking with other analysts/advisors, they had similar perspectives) was Jeremy Burton. I have seen and heard him talk before in live and virtual venues, along with seeing what he is doing to focus EMC messaging and themes.

 

A common comment and theme in talking with other analysts and advisors was that in just five minutes, Jeremy advanced, clarified, articulated and explained who EMC is, what they are doing now, and where they are going in the future.

 

Jeremy Burton EMC CMO, image via emc.com
Image courtesy of EMC.com

 

Trust was one of the themes of the EMCworld event as it pertains to collaborating with vendors and service providers as well as consultants, advisors and others. Trust is also important for going to the cloud on a public or private basis. It is easy to talk about trust; however, it is also something that is earned and is important to maintain and protect. Normally, given some of the stigma associated with marketing and or sales, trust too often becomes a punch line or a term tossed around with skepticism, cynicism or empty promises. The reason I bring trust up in this discussion is that in Jeremy's interaction with those in the room, whether others realized it or not, he was working on planting the seeds and establishing the basis for trust.

 

Does that mean there is automatic trust now in anything that EMC or their marketing organization says, or more so than what is heard from other organizations? Perhaps some will automatically take what is heard and go with that as gospel; however, they may be doing that already. Others who are skeptical by default and do their homework, analysis, research and other related tasks may be more likely to give the benefit of the doubt vs. automatically questioning everything while looking for multiple confirmations and added fact checking.

 

As for me, I generally take what any vendor or their pundits say with a grain of salt, giving the benefit of the doubt where applicable unless trust has been previously impacted. In the case of EMC, I generally take what they say with a grain of salt. However, a level of trust and confidence can make validating what they say easier than with others. This is in part due to knowing where to go internally for details and information, including NDA based material, and the good job their analyst relations team and other groups do in building and maintaining relationships.

 

Does this mean I like EMC more or less than other vendors? It means there is a level of trust, communication, relationship, contact, interaction and access to resources with EMC that might be more or less than with other vendors.  Disclosure EMC along with some companies they have acquired have been past clients.

 

Now back to Jeremy.

 

What impressed me the most was while other executives were engaging to different degrees, when I asked Jeremy how he and EMC balances entertainment (videos and movies, theatrics), education (expanding knowledge of EMC solutions, technology advancement) and being engaging (not just sales calls, social media, golfing or other in person activities) to drive business economics his response included all three of those aspects.

 

Storage I/O Industry Trends and Perspectives

 

Ok, I know, some of you will be saying that it is the job and role of a marketing person to be an effective communicator, with which I would agree; however, why don't more marketers do a more effective job of what they do?

 

In other words, Jeremy educated by sharing what they are doing and why; Jeremy engaged with the entire audience while answering my question rather than responding singularly to me; and he also entertained with some of his answers while keeping them to the point, not rambling on. Afterwards I had a few minutes to talk one on one with Jeremy without the handlers or others, and I can say it was refreshing; as is too seldom the case with marketers, there is trust.

That does not mean I will take anything verbatim or follow the scripts or other things the truth squads want preached or handed out from EMC, Jeremy or any other vendor for that matter.

 

I can say that in the few minutes up close and in a smaller setting, EMC showed they have a secret weapon who can do much to build and convey trust, and that is Jeremy Burton; hope I am not wrong ;).

 

Ok, nuff said for now

 

Cheers Gs

Storage I/O Industry Trends and Perspectives

 

In case you missed it, NetApp announced their most recent quarterly earnings a few weeks ago, which in themselves were not bad. However, what has some of their competition jumping up and down for joy while others are scratching their heads is the forward-looking guidance given by NetApp.

 

NetApp can be seen as being on rough ground given their forward-looking guidance for the next year, which could be seen either as very conservative, or as an admission that they are not growing as fast as some of the competitors challenging them.

 

Reading between the lines, looking at various financial and other resources in addition to factoring in technology items, there is more to NetApp than meets the eye, the current stock price, or the product portfolio.

 

For example, NetApp is sitting on over $4 billion USD in cash that they could use for an acquisition, buying back stock, launching a major sales and marketing initiative to expand into new or adjacent markets, or other activities. Speaking of acquisitions, NetApp has done some in the past including Spinnaker, which is now integrated with Ontap (e.g. clustering), Topio, Decru (security encryption) and Onaro (DCIM and IRM management software tools). More recently, NetApp has acquired Bycast (archiving and policy storage management software), Akorri (capacity management and DCIM and IRM software) and Engenio. NetApp is also maintaining good margins via direct, channel and OEM activities while launching new products such as the channel and SMB focused FAS 2220.

 

It's arguable, depending upon your point of view (or who you work for or are a fan of), whether NetApp has all the right product pieces now, in the works, or on their radar for acquisitions. Assuming that NetApp has the pieces, they also need to move beyond selling simply what is on the truck or what is safe, comfortable or perhaps easy to sell. This is not to say that NetApp is not being effective in selling what they have and pushing the envelope; however, keeping in mind who their main competitor is, the old sales saying of being able to sell ice to an Eskimo comes to mind.

 

Two companies on parallel tracks offset by time: EMC and NetApp

 

In the case of NetApp, when the competition makes an issue about scalability or performance of their flagship FAS storage systems and Ontap storage software, they can change the playing field by leveraging all the tools in their portfolio. NetApp, like EMC before them, is figuring out how to sell their complete portfolio via different channels or venues with a mix of direct, channel and OEM. After all, it seems like only yesterday that EMC was trying to figure out where and when to sell CLARiiON (e.g. now VNX) as opposed to avoiding competing with the Symmetrix (aka now the VMAX), not to mention expanding from a direct to a channel and OEM model. Perhaps NetApp can continue to figure out how to leverage the Engenio E series more effectively for big bandwidth beyond their current OEMs. NetApp can also leverage their existing partners who have embraced Bycast (aka StorageGrid) while finding new ones.

 

The reality is that NetApp is being challenged by EMC who is moving down market into some of NetApp's traditional accounts along with in the scale-out NAS and big data sectors. This is where NetApp can leverage their technical capabilities including people combined with some effective sales and marketing execution to change the playing field vs. responding to EMC and others.

 

NetApp has many of the pieces, parts, products, people, programs and partners, so now the question is how they can leverage those to expand their revenues as well as sustain the margins to grow the business, unless they are looking to be acquired.

 

I still subscribe to the notion that NetApp and EMC are two similar companies on parallel tracks offset by time, by about a decade or a decade and a half.

 

Storage I/O Industry Trends and Perspectives

 

Thus, IMHO NetApp is a diamond in the rough, granted I am guessing EMC and some others do not see it that way. However, there was a time when EMC was seen as a diamond in the rough while others discounted that notion, particularly an Itty Bitty Manufacturing company from New York who is now focusing on services among other things.

 

Keep in mind however, diamonds can also be lost or taken as well as there can be fake gems.

 

Here are some related links:
Unified storage systems showdown: NetApp FAS vs. EMC VNX
Two companies on parallel tracks moving like trains offset by time: EMC and NetApp
NetApp buying LSI's Engenio Storage Business Unit

 

Ok, nuff said for now

 

Cheers Gs

Storage I/O Industry Trends and Perspectives

 

I recently saw a comment somewhere that talked about Green IT being deferred or set aside due to lack of funding because of ongoing global economic turmoil. For those who see Green IT in the context of green washing efforts that require spending to gain some benefits, that I can understand. After all, if your goal is simply to be, or be seen as being, green, there is a cost to doing that.

 

With tight or shrinking IT budgets, there are other realities, and while organizations may want to do the right thing in helping the environment, that is often seen as overhead by financially conscious management.

 

On the other hand, turn the green washing messaging off, or at least dial it back a bit, as has been the case the past couple of years.

 

Expand the Green IT discussion, or change it around a bit, from being seen or perceived as green via energy efficiency or avoidance to one of effectiveness, enhanced productivity, and doing more with what you have or with less, and there is a different opportunity.

 

That opportunity is to meet the financial and business goals or requirements that as a by-product help the environment. In other words, expand the focus of Green IT to that of economics and improving on resource effectiveness and the environment gets a free ride, or, Green gets self-funded.

 

The Green and Virtual Data Center Book addressing optimization, effectiveness, productivity and economics

 

The challenge is what I refer to as the Green Gap, which is the disconnect between what is talked about (e.g. messaging) and thus perceived to be Green IT and where common IT opportunities exist (or missed opportunities have occurred).

 

Green IT, or at least the tenets of driving efficiency and effectiveness to use energy more effectively, addressing recycling and waste, removal of hazardous substances and other items, continues to thrive. However, the green washing is subsiding, and over time organizations will not be as dismissive of Green IT in the context of improving productivity, reducing complexity and costs, optimization and related themes tied to economics where the environment gets a free ride.

 

Here are some related links:
Closing the Green Gap
Energy efficient technology sales depend on the pitch
EPA Energy Star for Data Center Storage Update
Green IT Confusion Continues, Opportunities Missed!
How to reduce your Data Footprint impact (Podcast)
Optimizing storage capacity and performance to reduce your data footprint
Performance metrics: Evaluating your data storage efficiency
PUE, Are you Managing Power, Energy or Productivity?
Saving Money with Green Data Storage Technology
Saving Money with Green IT: Time To Invest In Information Factories
Shifting from energy avoidance to energy efficiency
Storage Efficiency and Optimization: The Other Green
Supporting IT growth demand during economic uncertain times
The new Green IT: Efficient, Effective, Smart and Productive
The other Green Storage: Efficiency and Optimization
The Green and Virtual Data Center Book (CRC Press, Intel Recommended Reading)

 

Ok, nuff said for now

 

Cheers Gs

Storage I/O Industry Trends and Perspectives

 

Have you noticed how Fibre Channel (FC) and FC over Ethernet (FCoE) switch and adapter vendors and their followers focus on bandwidth vs. response time, latency or other performance activity? For example, 8Gb FC (e.g. 8GFC) or 10Gb, as opposed to latency and response time, or IOPS and other activity indicators.

 

When you look at your own environment, or that of a customer or prospect, or hear of a conversation involving storage networks, is the focus on bandwidth, or lack of it, or perhaps throughput being a non-issue? For example, a customer says why go to 16GFC when they are barely using 8Gb with their current FC environment.

 

This is not a new phenomenon and is something I saw when working for a storage-networking vendor who had SAN, MAN and WAN solutions (e.g. INRANGE). Those with networking backgrounds tended to focus on bandwidth when discussing storage networks, while those with storage, server or applications backgrounds also look at latency or IO completion time (response time), queuing, message size, IOPS, or frames and packets per second. Thus there are different storage and networking metrics that matter, which are also discussed further in my first book Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures.

 

When I hear a storage networking vendor talk about their latest 16GFC based product, I like to ask them what the biggest benefit is vs. 8GFC, and not surprisingly, the usual response is along the lines of twice the bandwidth. When I ask them what that means in terms of more IOPS in a given amount of time, reduced IO completion time or lower latency, I often get a response along the lines of Yeah, that too, however it has twice the bandwidth.

 

Ok, I get it, yes, bandwidth is important for some applications; however, so too is activity measured in IOPS, transactions, packets, frames, pages, sequences and exchanges among other units of measure, along with response time and latency (e.g. different storage and networking metrics that matter).

 

What many storage networking vendors actually get, however do not talk about for various reasons (perhaps because they are not being asked about it or engaged in the conversation), is that there is an improvement in response time in going from, say, 8GFC to 16GFC. In other words, faster links can improve response time and activity rates in addition to the more commonly discussed bandwidth.
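To see why a faster link helps more than just large transfers, here is a rough sketch relating link bandwidth to activity (IOPS) and per-IO serialization time. The usable throughput figures are approximations (8GFC roughly 800 MB/s, 16GFC roughly 1,600 MB/s after encoding overhead); real results depend on protocol overhead, queuing and the devices on each end.

```python
# Rough relation between link bandwidth, IOPS ceiling and serialization time.

def max_iops(usable_mb_per_sec, io_size_kb):
    """Theoretical ceiling on IOs per second for a given IO size."""
    return (usable_mb_per_sec * 1024) / io_size_kb

def serialization_us(usable_mb_per_sec, io_size_kb):
    """Microseconds to put one IO of this size on the wire."""
    return (io_size_kb / (usable_mb_per_sec * 1024)) * 1_000_000

for name, mbs in (("8GFC", 800), ("16GFC", 1600)):
    print(name, f"{max_iops(mbs, 4):,.0f} 4KB IOPS ceiling,",
          f"{serialization_us(mbs, 4):.2f} us per 4KB transfer")
```

Doubling the link speed halves the wire time per IO and doubles the small-block activity ceiling, which is exactly the response time and IOPS angle the bandwidth-only pitch leaves out.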

 

If you are a storage networking switch, adapter or other component vendor, VAR or channel partner, expand your conversation to include activity and response time as part of your value proposition. Likewise, if you are a customer, ask your technology providers to expand on the conversation of how new technologies help in areas other than bandwidth.

 

Ok, nuff said for now

 

Cheers Gs

Storage I/O Industry Trends and Perspectives

 

I have been getting asked by IT customers, VARs and even vendors how much solid state device (SSD) storage is needed or should be installed to address IO performance needs, to which my standard answer is: it depends.

 

I am also being asked if there is a rule of thumb (RUT) for how much SSD you should have, either in terms of the number of devices or a percentage; IMHO, the answer is it depends. Sure, there are different RUTs floating around based on different environments, applications and workloads; however, are they applicable to your needs?

 

What I would recommend is, instead of focusing on percentages, RUTs, or other SWAG estimates or PIROMA calculations, look at your current environment and decide where the activity or issues are. If you know how many fast hard disk drives (HDD) are needed to get to a certain performance level and amount of used capacity, that is a good starting point.
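The HDD baseline approach above can be sketched as a small calculation: if you know how many fast HDDs currently deliver your workload, you can estimate the number of SSDs needed to match or exceed that activity level. The per-device IOPS figures below are illustrative assumptions, not vendor specifications; substitute measured numbers from your own environment.

```python
# Hypothetical SSD sizing sketch from a known fast-HDD baseline.

def devices_needed(target_iops, iops_per_device):
    """Whole devices required to reach a target activity level."""
    return -(-target_iops // iops_per_device)  # ceiling division

hdd_15k_iops = 180        # assumed small-block IOPS per 15K HDD
ssd_iops = 20_000         # assumed small-block IOPS per SSD

current_hdds = 24                       # known baseline from your environment
target = current_hdds * hdd_15k_iops    # activity the HDDs are delivering
print(f"~{target} IOPS today -> {devices_needed(target, ssd_iops)} SSD(s) "
      "(before protection copies)")
```

Note this only sizes for activity; you still need to check that the SSD space capacity covers the hot data, and add devices for mirroring or other protection as discussed below.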

 

If you do not have that information, use tools from your server, storage or third-party provider to gain insight into your activity to help size SSD. Also, if you have a database environment and are not familiar with the tools, talk with your DBAs and have them run some reports that show performance information the two of you can discuss to zero in on hot spots or opportunities for SSD.

 

Keep in mind when looking at SSD what it is that you are trying to address by installing SSD. For example, is there a specific or known performance bottleneck resulting in poor response time or latency, or is there a general problem or perceived opportunity?

 

Storage I/O Industry Trends and Perspectives

 

Is there a lack of bandwidth for large data transfers, or is there a constraint on how many IO operations per second (e.g. IOPS), transactions or other activity can be done in a given amount of time? In other words, the more you know about where or what the bottleneck is, including whether you can trace it back to a single file, object, database, database table or other item, the closer you are to answering how much SSD you will need.

 

As an example, if using third-party tools or those provided by SSD vendors or via other sources you decide that your IO bottlenecks are database transaction logs and system paging files, then having enough SSD space capacity to fit those is part of the solution. However, what happens when you remove the first set of bottlenecks? What new ones will appear, and will you have enough space capacity on your SSD to accommodate the next in line hot spot?

 

Keep in mind that while you may want more SSD, the question is what you can get budget approval to buy now without more proof and a business case. Get some extra SSD space capacity to use for what you are confident can address other bottlenecks, or enable new capabilities.

 

On the other hand, if you can only afford enough SSD to get started, make sure you also protect it. If you decide that two SSD devices (PCIe cache or target cards, drives or appliances) will take care of your performance and capacity needs, make sure to keep availability in mind. This means having extra SSD devices for RAID 1 mirroring, replication or another form of data protection and availability. Keep in mind that while traditional hard disk drive (HDD) storage is often gauged on cost per capacity, or dollars per GByte or dollars per TByte, with SSD measure its value on cost per performance. For example, how many IOPS, how much response time improvement or how much bandwidth is obtained to meet your specific needs per dollar spent.
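The cost-per-capacity vs. cost-per-performance contrast above is easy to put in numbers. All prices and per-device figures below are made-up assumptions for the sketch, not quotes; plug in your own.

```python
# Illustrative $/GB vs $/IOPS comparison between a fast HDD and an SSD.

def cost_per_gb(price, capacity_gb):
    return price / capacity_gb

def cost_per_iops(price, iops):
    return price / iops

hdd = {"price": 250.0, "gb": 600, "iops": 180}      # assumed fast HDD
ssd = {"price": 500.0, "gb": 256, "iops": 20_000}   # assumed SSD

print(f"HDD: ${cost_per_gb(hdd['price'], hdd['gb']):.2f}/GB, "
      f"${cost_per_iops(hdd['price'], hdd['iops']):.3f}/IOPS")
print(f"SSD: ${cost_per_gb(ssd['price'], ssd['gb']):.2f}/GB, "
      f"${cost_per_iops(ssd['price'], ssd['iops']):.3f}/IOPS")
```

With numbers like these the HDD wins decisively on $/GB while the SSD wins decisively on $/IOPS, which is why judging SSD on capacity cost alone misses its value.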

 

Related links
What is the best kind of IO? The one you do not have to do
Is SSD dead? No, however some vendors might be
Speaking of speeding up business with SSD storage
Has SSD put Hard Disk Drives (HDD's) On Endangered Species List?
Why SSD based arrays and storage appliances can be a good idea (Part I)
EMC VFCache respinning SSD and intelligent caching (Part I)
SSD options for Virtual (and Physical) Environments Part I: Spinning up to speed on SSD

 

Ok, nuff said for now

 

Cheers Gs

Some of you might remember the saying from Smokey the Bear, only you can prevent forest fires; for those who do not know about that, click on the image below.

 

Only you can prevent forest fires

 

The reason I bring this up is that while cloud providers are responsible (see the cloud blame game), it is also up to the user or consumer to take some ownership and responsibility.

 

Similar to vendor lock-in: the only one who can allow vendor lock-in is the customer, granted a vendor can help influence the customer.

 

The same theme applies to public clouds and cloud storage providers in that providers, along with government and industry regulations, have a responsibility to help protect consumers or users. However, there is also the shared responsibility of the user and consumer to make informed decisions.

 

What is your perspective on who is responsible for cloud data protection?

Click here to cast your vote and view results of who is responsible for cloud data protection.

 

Ok, nuff said for now

 

Cheers Gs

StorageIO Newsletter Image Spring (May) 2012 Newsletter

Welcome to the Spring (May) 2012 edition of the Server and StorageIO Group (StorageIO) newsletter. This follows the Fall (December) 2011 edition.

You can get access to this newsletter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions.

Click on the following links to view the Spring May 2012 edition as an HTML or PDF or, to go to the newsletter page to view previous editions.

You can subscribe to the newsletter by clicking here. Enjoy this edition of the StorageIO newsletter, and let me know your comments and feedback.

 

Nuff said for now

 

Cheers
Gs

Server and StorageIO industry trend and perspective DAS

 

Have you seen or heard the theme that Direct Attached Storage (DAS), either dedicated or shared, internal or external, is making a comeback?

Wait, if something did not go away, how can it make a  comeback?

 

IMHO it is as simple as this: for the past decade or so, DAS has been overshadowed by shared networked storage, including switched SAS, iSCSI, Fibre Channel (FC) and FC over Ethernet (FCoE) based block storage area networks (SAN) and file based (NFS and Windows SMB/CIFS) network attached storage (NAS) using IP and Ethernet networks. This has been particularly true of most of the independent storage vendors who have become focused on networked storage (SAN or NAS) solutions.

 

However some of the server vendors have also jumped into the deep end of the storage pool with their enthusiasm for networked storage, even though they still sell a lot of DAS, including internal dedicated along with external dedicated and shared storage.

 

Server and StorageIO industry trend and perspective DAS

 

The trend for DAS storage has evolved with the interfaces and storage mediums, including from parallel SCSI and IDE to SATA and more recently 3Gbs and 6Gbs SAS (with 12Gbs in first lab trials). Similarly, the storage mediums include a mix of fast 10K and 15K hard disk drives (HDD) along with high-capacity HDDs and ultra-high performance solid state devices (SSD) moving from 3.5 to 2.5 inch form factors.

 

While there has been a lot of industry and vendor marketing effort around networked storage (e.g. SAN and NAS), DAS based storage was overshadowed, so it should not be a surprise that those focused on SAN and NAS are surprised to hear DAS is alive and well. Not only is DAS alive and well, it is also becoming an important scaling and convergence topic for adding extra storage to appliances as well as servers, including those for scale out, big data, cloud and high density, not to mention high performance and high productivity computing.

 

Server and StorageIO industry trend and perspective DAS

 

Consequently it's becoming ok to talk about DAS again. Granted, you might get some peer pressure from your trend setting or trend following friends to get back on the networked storage bandwagon. Keep this in mind: take a look at some of the cool trend setting big data and little data (database) appliances, backup, dedupe and archive appliances, cloud and scale out NAS and object storage systems among others, and you will likely find DAS on the back-end. On a smaller scale, or in high-density rack deployments in large cloud or similar environments, you may also find DAS including switched shared SAS.

Does that mean SANs are dead?


No, not IMHO, despite what some vendors' marketers and their followers will claim, which is ironic given how some of them were leading the DAS is dead campaign in favor of iSCSI or FC or NAS a few years ago. However, simply comparing DAS to SAN or NAS in a competing way is like comparing apples to oranges; instead, look at how and where they can complement and enable each other. In other words, different tools for various tasks, various storage and interfaces for different needs.

 

Thus IMHO DAS never left or went anywhere per se; it just was not fashionable or cool to talk about until now, as it is cool and trendy to discuss it again.

 

Ok, nuff said for now.

 

Cheers Gs

Amazon Web Services (AWS)

I received the following note from Amazon Web Services (AWS) about an enhancement to their Elastic Compute Cloud (EC2) service, which can be seen by some as an enhancement to the service, or perhaps by others, after last week's outages, as a fix addressing a gap in their services. Note for those not aware, you can view the current AWS service status portal here.

The following is the note I received from AWS.

 

Announcing Multiple IP Addresses for Amazon EC2 Instances in Amazon VPC

Dear Amazon EC2 Customer,

We are excited to introduce multiple IP addresses for Amazon EC2 instances in Amazon VPC. Instances in a VPC can be assigned one or more private IP addresses, each of which can be associated with its own Elastic IP address. With this feature you can host multiple websites, including SSL websites and certificates, on a single instance where each site has its own IP address. Private IP addresses and their associated Elastic IP addresses can be moved to other network interfaces or instances, assisting with application portability across instances.

 

The number of IP addresses that you can assign varies by instance type. Small instances can accommodate up to 8 IP addresses (across 2 elastic network interfaces) whereas High-Memory Quadruple Extra Large and Cluster Compute Eight Extra Large instances can be assigned up to 240 IP addresses (across 8 elastic network interfaces). For more information about IP address and elastic network interface limits, go to Instance Families and Types in the Amazon EC2 User Guide.

 

You can have one Elastic IP (EIP) address associated with a running instance at no charge. If you associate additional EIPs with that instance, you will be charged $0.005 per hour for each additional EIP associated with that instance, on a pro rata basis.

With this  release we are also lowering the charge for EIP addresses not associated with  running instances, from $0.01 per hour to $0.005 per hour on a pro rata basis.  This price reduction is applicable to EIP addresses in both Amazon EC2 and  Amazon VPC and will be applied to EIP charges incurred since July 1, 2012.


To learn  more about multiple IP addresses, visit the Amazon  VPC User Guide. For more information about pricing for additional Elastic  IP addresses on an instance, please see Amazon  EC2 Pricing.

Sincerely,

The Amazon EC2 Team


End of AWS message
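The instance limits quoted in the note work out to a simple multiplication. A minimal Python sketch (the per-interface counts below are inferred from the totals AWS quotes, not taken from an official table):

```python
# Total private IPs = elastic network interfaces (ENIs) x private IPs per ENI.
# Per-ENI counts are inferred from the totals in the AWS note above.
def max_ips(enis: int, ips_per_eni: int) -> int:
    """Total private IP addresses an instance can hold."""
    return enis * ips_per_eni

print(max_ips(2, 4))   # Small instance: 2 ENIs x 4 IPs each = 8
print(max_ips(8, 30))  # Cluster Compute Eight Extra Large: 8 ENIs x 30 IPs each = 240
```

Each of those private IPs can in turn carry its own Elastic IP, which is what makes hosting many SSL sites on one instance practical.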

 

Server and StorageIO industry trends and perspective

Either way you look at it, AWS (disclosure: I'm a paying EC2 and S3 customer) is taking responsibility on their part to do what is needed to enable a resilient, flexible, scalable data infrastructure. What I mean is that protecting data and access to it in cloud environments is a shared responsibility, including discussing what went wrong, how to fix and prevent it, as well as communicating best practices. That is, both the provider of a service and those who use its capabilities have to take some ownership of and responsibility for how they get used.
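To put the EIP pricing in the AWS note into perspective, here is a small, hypothetical Python sketch of the charge for extra Elastic IPs on a running instance (the $0.005 per hour rate comes from the note; the function name and scenario are illustrative):

```python
def extra_eip_charge(eips_on_instance: int, hours: float, rate: float = 0.005) -> float:
    """Charge for EIPs beyond the first free one on a running instance."""
    billable = max(eips_on_instance - 1, 0)  # first associated EIP is free
    return billable * rate * hours

# An instance holding 3 EIPs for a 720-hour month:
# 2 billable EIPs x $0.005/hr x 720 hrs = $7.20
print(extra_eip_charge(3, 720))
```

In other words, the multi-IP convenience is close to free at small scale, which matters if you are consolidating several SSL sites onto one instance.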

 

For example, last week major thunderstorms rolled across the U.S., causing large-scale power outages along the eastern seaboard, in particular in the Virginia area where one of Amazon's regions (US East-1) has its data centers located. Keep in mind that Amazon availability zones are made up of a collection of different physical data centers to cut or decrease the chances of a single point of failure. However on June 30, 2012, during the major storms on the East coast of the U.S., something did go wrong, and as is usually the case, a chain of events resulted in or near a disaster (you can read the AWS post-mortem here).

 

The result is that AWS services based in the Virginia (US East-1) region were knocked offline for a period, which impacted EC2, Elastic Block Storage (EBS), Relational Database Service (RDS) and Elastic Load Balancer (ELB) capabilities for the affected zone. This is not the first time the Virginia region has been affected, having experienced a disruption about a year ago. What was different about this most recent outage is that a year ago one of the marquee AWS customers, Netflix, was not affected due to how they use multiple availability zones for HA. In last week's AWS outage Netflix customers and services were affected, however not due to loss of data or systems, but rather loss of access (which to a user or consumer is the same thing). The loss of access was due to a failure of elastic load balancing, which was unable to route users to other availability zones.
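The design lesson reduces to a tiny sketch: clients or load balancers need a way to route around a failed zone, otherwise the surviving zones do you no good. The endpoint names below are hypothetical, purely for illustration:

```python
# Minimal sketch of client-side failover across availability-zone endpoints.
# Endpoint names are made up; the point is that access, not just data,
# must survive the loss of one zone.
def first_healthy(endpoints, is_healthy):
    """Return the first endpoint that passes a health check, else None."""
    for ep in endpoints:
        if is_healthy(ep):
            return ep
    return None

zones = ["us-east-1a.example.internal", "us-east-1b.example.internal"]
down = {"us-east-1a.example.internal"}  # simulate one zone being offline
print(first_healthy(zones, lambda ep: ep not in down))
```

In the June 30 incident it was this routing layer (ELB) that failed, so even environments with healthy alternate zones lost user access.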

 

Server and StorageIO industry trends and perspective

 

Consequently, if you choose to read between the lines of the above email note I received from AWS, you can look at the new service capabilities either as an enhancement, or as AWS learning and improving their capabilities. Also reading between the lines, you can see how some environments such as Netflix take responsibility in how they use cloud services, designing for availability, resiliency and scale with stability, as opposed to simply using the cloud as a cost-cutting tool.

Thus when both the provider and the consumer take some responsibility for ensuring data protection and accessibility to services, there is less chance of service disruptions. Likewise when both parties learn from incidents or mistakes and leverage those experiences, it makes for a more robust solution on a go-forward basis. For those who have been around the block (or file) a few times and think that clouds are not reliable or are still immature, you may have a point; however, think back to when your favorite or preferred platform (e.g. Mainframe, Mini, PC, client-server, iProduct, Web or other) initially appeared and the teething problems or associated headaches it had.

 

IMHO, AWS along with other vendors or service providers who take responsibility to publish post-mortems of incidents, find and fix issues, and address and enhance capabilities are part of the solution, laying the groundwork for the future vs. simply playing to a near-term trend theme. Likewise, vendors and service providers who reach out and help educate their customers to take some responsibility in how they use services, removing complexity (and cost) to enhance services as opposed to simply cutting cost and introducing risk, will do better over the long run.

 

As I discuss in my book Cloud and Virtual Data Storage Networking (CRC Press), do not be scared of clouds; however, be ready, do your homework, and learn and understand what needs to be done or done differently. This means taking on a shared responsibility, one that the service provider should also be taking with you, not to mention identifying new best practices and tools to be used, along with conducting proof of concepts (POCs) to learn what to do and what not to do.

 

Some related information:
Only you can prevent cloud data loss
The blame game: Does cloud storage result in data loss?
Cloud conversations: Loss of data access vs. data loss
Clouds are like Electricity: Don't be Scared
AWS (Amazon) storage gateway, first, second and third impressions
Poll: What Do You Think of IT Clouds? (Cast your vote and see results)

 

Ok, nuff said for now.

Cheers Gs