
This has been a busy week, as on Monday Western Digital (WD) announced that it is buying the disk drive business of Hitachi Ltd. (i.e. HGST) for about $4.3 billion USD. The deal includes about $3.5B in cash and 25 million WD common shares (about $750M USD), which will give Hitachi Ltd. roughly ten (10) percent ownership in WD along with two Hitachi representatives being added to the WD board of directors. WD moves into the number one hard disk drive (HDD) spot above Seagate (note that Hitachi is not selling HDS), in addition to gaining competitive positioning in both the enterprise HDD and emerging SSD markets.




Today NetApp announced that they have agreed to purchase portions of the LSI storage business known as Engenio for $480M USD.

The business and technology that LSI is selling to NetApp (aka Engenio) is the external storage system business that accounted for about $705M of LSI's approximately $900M+ storage business in 2010. This piece of the business represents external (outside of the server) shared RAID storage systems that support Serial Attached SCSI (SAS), iSCSI, Fibre Channel (FC) and emerging FCoE (Fibre Channel over Ethernet) with SSDs, SAS and FC high performance HDDs as well as high capacity HDDs. NetApp has block storage, however their strong suit (sorry NetApp guys) is file, while Engenio's strong suit is block storage that attaches to gateways from NetApp as well as others, in addition to servers for scale out NAS and cloud.




What NetApp is getting from LSI is the business that sells storage systems or their components to OEMs including Dell, IBM (here and here), Oracle, SGI and Teradata (a former NCR spin off) among others.


What LSI is retaining are their custom storage silicon, ICs, PCI RAID adapter and host bus adapter (HBA) cards including MegaRAID, 3ware along with SAS chips, SAS switches, PCI SSD card and the Onstor NAS product they acquired about a year ago. Other parts of the LSI business which makes chips for storage, networking and communications vendors is also not affected by this deal.
In other words, the sign in front of the Wichita LSI facility that used to say NCR will now probably include a NetApp logo once the deal closes.

For those not familiar, Tom Georgens, current CEO of NetApp, is very familiar with Engenio and LSI as he used to work there (after a career at EMC). In fact, Mr. Georgens was part of the most recent attempt to spin the external storage business out of LSI back in the mid 2000s, when it received the Engenio name and branding. In addition to Tom Georgens, Vic Mahadevan, the current NetApp Chief Strategy Officer, recently worked at LSI and before that at BMC, Compaq and Maxxan among others.


What do I mean by the most recent attempt to spin the storage business out of LSI? Simple, the Engenio storage business traces its lineage back to NCR and what became known as Symbios Logic, which LSI acquired back in the late 90s.


Going back to the late 90s, there was word on the street that the then LSI management was not sure what to do with the storage business, as their core business was, and still is, making high volume chips and related technologies. Current LSI CEO Abhi Talwalkar is a chip guy (nothing wrong with that) who honed his skills at Intel. Thus it should not be a surprise that the focus is on the LSI core business model of making silicon (not the implant stuff) for their own products as well as for IT and consumer electronics vendors (read their annual report).


As part of the acquisition, LSI has already indicated that they will use all or some of the cash to buy back their stock. However, I also wonder if this does not open the door for Abhi and his team to do some other acquisitions more synergistic with their core business.


What does NetApp get:

  • Expanded OEM and channel distribution capabilities
  • Block based products to coexist with their NAS gateways
  • Business with an established revenue base
  • Footprint into new or different markets
  • Opportunity to sell different product set to existing customers


NetApp gets an OEM channel distribution model to complement what they already have (mainly IBM) in addition to their mainly direct and VAR based sales. Note that Engenio went to an all OEM/distribution model several years ago while maintaining direct touch support for their partners.


Note that NetApp is providing financial guidance that the deal could add $750M of revenue in FY12, which is based on retaining some portion of the existing OEM business while moving into new markets as well as increasing product diversity with existing direct customers, VARs or channel partners.


NetApp also gets to address storage market fragmentation and enable OEM as well as channel diversification including selling to other server vendors besides IBM. The Engenio model in addition to supporting Dell, IBM, Oracle, SGI and other server vendors also involves working with vertical solution integrator OEMs in the video, entertainment, High Performance Compute (HPC), cloud and MSP markets. This means that NetApp can enter new markets where bandwidth performance is needed including scale out NAS (beyond what NetApp has been doing). This also means that NetApp gets a product to sell into markets where back end storage for big data, bulk storage, media and entertainment, cloud and MSP as well as other applications leverage SAS, iSCSI or FC and FCoE beyond what their current lineup offers. Who sells into those spaces? Dell, HP, IBM, Oracle, SGI and Supermicro among others.


What does LSI get:

  • $480M USD in cash, some of which can buy back stock to keep investors happy
  • A streamlined business, or an open door for new ones
  • Perhaps increased OEM sales to new or existing customers
  • Perhaps the ability to make some acquisitions, or be acquired


What does Engenio get:
A new parent that hopefully invests in the technology and marketing of the solution sets as well as leverages and takes care of the installed base of customers.


What do the combined Engenio and NetApp OEMs and partners get:
With the combination of the two organizations comes, hopefully, streamlined support, service and marketing, plus product enhancements to address new or different needs. There is also possibly comfort in knowing that Engenio now has a home and its future is somewhat known.


What about the Engenio employees?
The reason I bring this up is to wonder what happens to those who have many years invested along with their LSI stock, which I presume they keep, hoping that the sale gives them a future return on their investment or efforts. Having been in similar acquisitions in the past, it can be a rough go; however, if the acquirer has a bright future, then enough said.


Some random thoughts:


Is this one of those industry trendy, sexy, cool everybody drooling type deals with new and upcoming technology and marketing buzz?


Is this one of those industry deals that has good upside potential if executed upon and leveraged?


NetApp already has a storage offering, why do they need Engenio?
No offense to NetApp, however they have needed a robust block storage offering to complement their NAS file serving and extensive software functionality in order to move into different markets. This is not all that different from what EMC needed to do in the late 90s, extending beyond their sole cash cow platform Symmetrix by acquiring Data General (DG) to gain a midrange offering.


NetApp is risking $480M on a business with technologies that some see or say is on the decline, so why would they do such a thing?
Ok, lets set the technology topics aside and, from a pure numbers perspective, take a few scenarios (I'm not a financial person so go easy on me please). What some financial people have told me with other deals is that it is sometimes about getting a return on cash vs. it not doing anything. So with that and other things in mind, say NetApp just lets the $480M sit in the bank: can they get 12 percent or better interest? Probably not, and if they can, I want the name of that bank. Even at that rate of return (12 percent compounded over five years), they would only make about $846M-$480M=$366M on the investment (I know, there are tax and other financial considerations, however lets keep it simple). Now lets take another scenario and assume that NetApp simply rides a decline of the business at, say, a 20 percent per year rate (how many businesses, in storage or otherwise, are declining at 20 percent per year?) for five years. Starting from the $705M run rate, that still works out to roughly $1.9B in cumulative revenue. Lets take a different scenario and assume that NetApp can simply maintain an annual run rate of $700-750M for those five years; that works out to around $3.6B-$480M=$3.1B in revenue over the purchase price. In other words, even with some decline, over a five year period the OEM business pays for the deal alone and perhaps helps fund investment in technology improvement, with the balance of the business being positive upside.
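For those who want to check the arithmetic, here is a quick sketch of the three scenarios. The 12 percent rate, 20 percent annual decline and $705M starting run rate are the scenario assumptions from the discussion above, not actual NetApp guidance:

```python
# Back-of-envelope scenarios for the $480M Engenio deal (all figures in $M).

def compound(principal, rate, years):
    """Value of cash left to compound at a fixed annual rate."""
    return principal * (1 + rate) ** years

def declining_revenue(base, decline, years):
    """Cumulative revenue when the run rate shrinks by `decline` each year."""
    total, run_rate = 0.0, base
    for _ in range(years):
        run_rate *= (1 - decline)
        total += run_rate
    return total

bank_gain = compound(480, 0.12, 5) - 480       # cash in the bank at 12%
decline_total = declining_revenue(705, 0.20, 5)  # business shrinking 20%/yr
flat_total = 725 * 5                             # flat $700-750M run rate

print(round(bank_gain), round(decline_total), round(flat_total))
# roughly 366, 1896, 3625 ($M)
```

Even the pessimistic 20 percent decline scenario yields several times what the cash would earn sitting in a bank, which is the point of the comparison.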


Now both of those are extreme scenarios, so lets take something more likely, such as NetApp being able to simply maintain a $700-750M run rate by keeping some of the OEM business, finding new channel, OEM and direct markets, and expanding their footprint. Now the math gets even more interesting. Having said all of that, NetApp needs to keep investing in the business and products to get those returns, which might help explain the relatively low price compared to the run rate.


Is this a good deal for NetApp?
IMHO yes, as long as NetApp does not screw it up. If NetApp can manage the business, invest in it and grow into new markets instead of simply cannibalizing existing ones, they will have made a good deal, similar to what EMC did with DG back in the late 90s. However, NetApp needs to execute, leverage what they are buying, invest in it and pick up new business to make up for declining business with some of the OEMs.


With several hundred thousand systems or controllers having been sold over the years (granted, how many are actually running, your guess is as good as mine), NetApp has a footprint to leverage with their other products. For example, should IBM, Dell or Oracle completely walk away from those installed footprints, NetApp can move in with firmware or other upgrades to support them, plus up-sell their NAS gateways to add value with compression, dedupe, etc.


What about NetApp's acquisition track record?
Fair question, although I'm sure the NetApp faithful won't like it. NetApp has had their ups and downs with acquisitions (Topio, Decru, Spinnaker, Onaro, etc). Perhaps with this one, like EMC in the late 90s, which bought DG to overcome some rough up and down acquisitions, they can also get their mojo on (see this post). While we are on the topic of acquisitions, NetApp recently bought Akorri and last year Bycast, which they now call StorageGrid and which has been OEM'd in the past by IBM. Guess what storage was commonly used under the IBM servers running the Bycast software? If you guessed XIV, you might want to take a mulligan or a do over. Btw, HP has also OEM'd the Bycast software. If you are not familiar with Bycast and are interested in automated movement, tiering, policy management, objects and other buzzwords, ping your favorite NetApp person, as it is a diamond in the rough if leveraged beyond its healthcare capabilities.


What does this mean for Xyratex and Dot Hill, who are NetApp partners?
My guess is that for now, the general purpose enclosures would stay the same (e.g. Xyratex) until there is a business case to do something different. For the high density enclosures, that could be a different scenario. As for others, we will have to wait and see.


Will NetApp port Data ONTAP onto Engenio?
The easiest and fastest thing is to do what NetApp and Engenio OEM customers have already been doing, that is, place the Engenio arrays behind NetApp V-Series gateways. Note that Engenio has storage systems that speak SAS to HDDs and SSDs and can speak SAS, iSCSI and FC to hosts or gateways. NetApp has also embraced SAS for back end storage, so maybe we will see them leverage a SAS connection out of their filers in the future to SAS storage systems or shelves instead of FC loop?


Speaking of SAS host or server attached storage, guess what many cloud, MSP, high performance and other environments are using for storage on the back end of their clusters or scale out NAS systems?
Yup, SAS.


Guess what gap NetApp gets to fill, joining Dell, HP, IBM and Oracle who can now offer a choice of SAS, iSCSI or FC in addition to NAS?
Yup, SAS.


Care to guess what storage vendor we can expect to hear downplay SAS as a storage system to server or gateway technology?


Is this all about SAS?


Will this move scare EMC?
No, EMC does not get scared, or at least that is what they tell me.


Will LSI buy Fusion-io (which has filed or is filing its documents to IPO) or someone else?
Your guess or speculation is better than mine. However, LSI already has, and is retaining, its own PCIe SSD card.


Why only $480M for a business that did $705M in 2010?
Good question. There is risk in that if NetApp does not invest in the product, marketing and relationships, they will not see the previous annual run rate, so it is not a straight annuity. Consequently NetApp is taking on risk with the business and thus should get the reward if they can run with it. Another reason is that there probably were no investment bankers or brokers running up the price.


Why didn't Dell buy Engenio for $480M?
Good question. If they had the chance, they should have; however, it probably would not have been a good fit, as Dell needs direct sales vs. OEM sales.


Ok, nuff said (for now).


Cheers gs


Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC)


twitter @storageio

Winter 2011 Newsletter

Welcome to the Winter 2011 edition of the Server and StorageIO Group (StorageIO) newsletter. This follows the Fall 2010 edition.


You can access this newsletter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions. Click on the following links to view the Winter 2011 edition as HTML or PDF, or go to the newsletter page to view previous editions.


Enjoy this edition of the StorageIO newsletter, let me know your comments and feedback.

Cheers gs

What do VARs and Clouds as well as MSPs have in common?


Several things it turns out:

  • Some Value Added Resellers (VARs) (links to VAR related content and comments here, here and here) sell cloud services or solutions
  • Some VARs are also cloud or managed service providers (MSPs) themselves, thus some clouds or MSPs are VARs
  • Some VARs, cloud and MSPs compete on lowest or cheapest price
  • Some VARs, cloud and MSPs have diverse product offering portfolios
  • Some VARs, cloud and MSPs compete on value (e.g. not price)
  • Some VARs, cloud and MSPs value is in the trust, security and peace of mind that they provide to their client


For some, the value of a given VAR, cloud or MSP is the ability to shop around for a resource to get the lowest price.


For others, the value of a given VAR, cloud or MSP is the ability to get the best value, which may not be the lowest price but rather the most effective overall cost per service, with trust, security, experience and peace of mind provided.


Value too often is confused with being cheap or lowest cost.


Value can also mean a higher price that includes more thus providing a better effective option (e.g. super size it).


On the other hand, higher priced should not be confused with always being a better product, service or solution.


You may find that an initial low cost requires other add on fees or activation charges, surcharges for use or activity, along with optional services needed to make the solution useful, all resulting in an overall higher amount to be paid.
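As a toy illustration of that point (all prices here are hypothetical, just to show the arithmetic), a low sticker price can end up costing more over a year once activation fees and recurring surcharges are added:

```python
# Hypothetical one-year cost comparison: cheap sticker price vs. all-inclusive.

def total_cost(base_monthly, months, activation=0.0, surcharge_monthly=0.0):
    """Total paid over `months`, including one-time and recurring add-ons."""
    return activation + months * (base_monthly + surcharge_monthly)

# "Cheap" plan: $5/mo, plus a $50 activation fee and $4/mo in add-on fees
cheap = total_cost(5, 12, activation=50, surcharge_monthly=4)
# "Value" plan: $10/mo, all inclusive
value = total_cost(10, 12)

print(cheap, value)  # 158.0 vs 120
```

The higher base price turns out cheaper overall, which is the difference between price and value.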


Lowest cost may result in a bargain now and then if that fits your needs.


Value can also mean a better option providing an improved return on investment if a solution or service meets and exceeds your needs and expectations.


As an example, I recently switched from a cloud backup MSP (Mozy), not due to cost (my costs would have gone down with their recently announced service plan), but rather because I needed more value and functionality. With my new cloud backup MSP I get more functionality and capability that I can continue to grow into, even though the price per GByte is higher than with my previous provider. What made the change positive is what I get for the higher fee per GByte, which in the end actually makes it more affordable: not cheaper, just better value and return on investment.


For some, low cost is value, while for others value is more than the lowest cost, including what you get for a given fee: trust, security, service and experience among other items. Different people will have different requirements or needs for what is or is not value.


If you do not like the term value, then try price performer.


Bottom line for now: with VARs, MSPs and clouds (public or private), don't be scared, however look before you leap!



Here is a link to a recent guest post that I was invited to do over at The Virtualization Practice (TVP) pertaining to Self Encrypting Drives (SEDs).


Based on the Trusted Computing Group (TCG) DriveTrust and Opal drive security specifications, SEDs offload encryption to the disk drive while complementing other encryption solutions that protect against theft or loss of storage devices. There is another benefit to SEDs, however, which is simplifying the process of decommissioning a storage device safely and quickly.


If you are not familiar with them, SEDs perform encryption within the hard disk drive (HDD) itself using the onboard processor and resident firmware. Since SEDs only protect data at rest, other forms of encryption should be combined to protect data in flight or on the move.


There is also another benefit of SEDs: for those of you concerned about how to digitally destroy, shred or erase large capacity disks in the future, you may have a new option. While intended for protecting data, a byproduct is that when a SED is removed from the system, server or controller it has established an affinity with, its contents are effectively unreadable until reattached. If the encryption key for a SED is changed, the data is instantly rendered useless, at least for most environments.
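The idea behind that instant erase (often called crypto-erase) can be sketched in a few lines. This is a toy model only: real SEDs use hardware AES per the TCG specifications, not the throwaway hash-based cipher below, but the principle is the same, the ciphertext on the platters never changes, and discarding the media key makes it unrecoverable:

```python
import hashlib

def keystream_xor(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher (NOT real crypto): XOR data with a key-derived stream."""
    stream, counter = bytearray(), 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

media_key = b"drive-internal-media-encryption-key"
plaintext = b"sensitive records on the platters"
on_platter = keystream_xor(plaintext, media_key)   # what is physically stored

# Normal operation: the drive holds the key, so reads decrypt transparently.
assert keystream_xor(on_platter, media_key) == plaintext

# Crypto-erase: the drive replaces the media key. The bits on the platters
# are untouched, yet the data is now effectively gone.
new_key = b"freshly-generated-replacement-key"
assert keystream_xor(on_platter, new_key) != plaintext
```

Changing one small key is near instant, which is why it beats overwriting multi-terabyte drives for decommissioning.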


Learn more about SEDs here and via the following links:


Nuff said for now


Cheers gs


Tape talk time

Posted by gregschulz Mar 3, 2011

For being a declared dead or zombie technology (here, here or here), tape remains very much alive, however its role is changing. There is no disputing that hard disk drives (HDDs) continue to expand their role in data protection, including backup/restore, BC and DR, where tape has been used for decades.


Even Google, who relies on disk to disk (D2D) data protection across multiple sites, still relies on tape as a last resort for data protection (see this link pertaining to the recent Gmail outage).


What is also occurring is that tape's role is changing from day to day backup to longer term data preservation including archiving, with more data stored on tape today than ever before, and at a lower cost. In fact, the continued reduced cost per tape and improved capacity as well as utilization have worked against tape from a marketing competitive standpoint. For example, if you look at a chart showing tape (media and drive) revenues you see a decline, similar to what was seen a couple of years ago for HDDs.


What is not shown on some charts is how many units (drives or media) shipped with more capacity for a given price (again, what was reported for HDDs a few years ago) when net capacity had actually increased. Vendors of tape technology have also kept a rather low profile, particularly those with other technologies that receive more marketing resources (people, time, money). After all, if a product is on a plateau of productivity and profitability, why spend time or effort on extensive marketing or promotion vs. directing resources to get new items into the market?


As a result, those looking to make a case that tape is on the decline based on revenues, in order to convince customers to move away from the technology, have a marketing freebie. Recently Oracle announced a new large capacity tape drive and media, following on previous announcements of an enhanced LTO roadmap and the future 35TByte tape capabilities announced in January 2010 by Fujifilm and IBM.


For those who are interested following are some links to various topics including how SSD, HDD and tape can coexist complementing each other for different roles or functions. As to those who do not like tape, feel free to read if you like as there is also material on SSD, HDD, dedupe, cloud, data protection and other topics.


Some previous blog posts:


Here are some additional articles, commentary and reports pertaining to tape related topics:


Something tells me we will be hearing, reading or watching more about tape being alive in the months to come.


Nuff said for now

Cheers gs

With networking, care should be taken to understand if a given speed or performance capacity is being specified in bits or bytes as well as in base 2 (binary) or base 10 (decimal).


Another consideration and potential point of confusion are line rates (GBaud) and link speeds, which can vary based on encoding and low level frame or packet size. For example, 1GbE along with 1, 2, 4 and 8Gb Fibre Channel as well as Serial Attached SCSI (SAS) use an 8b/10b encoding scheme. This means that at the lowest physical layer, 8 bits of data are placed into 10 bits for transmission, with the extra 2 bits used to maintain signal integrity (DC balance and clock recovery).


With an 8Gb link using 8b/10b encoding, 2 out of every 10 bits are overhead. The actual data throughput (bandwidth), or the number of IOPS, frames or packets per second, is a function of the link speed, encoding and baud rate. For example, 1Gb FC signals at 1.0625 Gb per second, which is multiplied by the generation, so 8Gb FC or 8GFC would be 8 x 1.0625 = 8.5Gb per second.


Remember to factor in that encoding overhead (e.g. 8 of 10 bits carry data with 8b/10b): usable bandwidth on the 8GFC link is about 6.8Gb per second, or about 850MBytes (6.8Gb / 8 bits) per second. 10GbE uses 64b/66b encoding, which means that for every 64 bits of data only 2 additional bits are transmitted, thus much less overhead.
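The arithmetic above can be captured in one small helper (the 1.0625 Gbaud base rate and the 8b/10b ratio are from the text):

```python
# Usable bandwidth from line rate and encoding overhead.

def usable_gbps(baud_gbps, data_bits, total_bits):
    """Payload bandwidth after encoding: line rate x (data bits / total bits)."""
    return baud_gbps * data_bits / total_bits

# 8GFC: 8 x 1.0625 Gbaud with 8b/10b encoding
fc8 = usable_gbps(8 * 1.0625, 8, 10)
print(fc8)            # ~6.8 Gb/s of payload
print(fc8 / 8 * 1000) # ~850 MBytes/s (8 bits per byte)
```

The same function handles 64b/66b links by passing `data_bits=64, total_bits=66`.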


What do all of these bits and bytes have to do with clouds and virtual data storage networks?


Quite a bit, when you consider the need to support more information processing, movement and storage in a denser footprint.


In order to support higher densities, faster servers, storage and networks are not enough; various approaches to reducing the data footprint impact are also required.


What this means is that for fast networks to be effective, they also need low overhead, so that link capacity is spent moving productive data rather than extra encoding bits in the same amount of time.


PCIe leverages multiple serial unidirectional point to point links, known as lanes, compared to traditional PCI, which used a parallel bus based design. With traditional PCI the bus width varied from 32 to 64 bits, while with PCIe the number of lanes combined with the PCIe version and signaling rate determines performance. PCIe interfaces can have one, two, four, eight, sixteen or thirty two lanes for data movement depending on card or adapter format and form factor. For example, PCI and PCI-X performance can be up to 528 MBytes per second with a 64 bit 66 MHz signaling rate.


                               PCIe Gen 1    PCIe Gen 2    PCIe Gen 3
Giga transfers per second      2.5 GT/s      5 GT/s        8 GT/s
Encoding scheme                8b/10b        8b/10b        128b/130b
Data rate per lane per second  ~250 MBytes   ~500 MBytes   ~985 MBytes
x32 lanes                      ~8 GBytes     ~16 GBytes    ~32 GBytes

Table 1: PCIe generation comparisons


Table 1 shows performance characteristics of the various PCIe generations. With PCIe Gen 3 the effective performance essentially doubles, however the underlying transfer speed does not double as it did in the past. Instead, the improved performance is a combination of a roughly 60 percent faster signaling rate (5GT/s to 8GT/s) and the more efficient 128b/130b encoding scheme (about 98.5 percent efficient vs. 80 percent for 8b/10b) among other optimizations.
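A quick sketch of the per-lane math, using the published PCIe transfer rates and encoding schemes from Table 1, shows how the two factors combine to roughly double Gen 2 throughput:

```python
# Per-lane PCIe payload throughput: GT/s x encoding efficiency / 8 bits per byte.

def lane_mbps(gt_per_s, data_bits, total_bits):
    """Payload MBytes/s for a single lane."""
    return gt_per_s * 1e3 * data_bits / total_bits / 8

gen1 = lane_mbps(2.5, 8, 10)     # 8b/10b  -> ~250 MBytes/s
gen2 = lane_mbps(5.0, 8, 10)     # 8b/10b  -> ~500 MBytes/s
gen3 = lane_mbps(8.0, 128, 130)  # 128b/130b -> ~985 MBytes/s, ~2x Gen 2

print(round(gen1), round(gen2), round(gen3))
```

Note that the Gen 2 to Gen 3 doubling falls out of a 1.6x signaling bump multiplied by a ~1.23x encoding efficiency gain (0.8 to ~0.985).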


Serial interface             Encoding scheme
PCIe Gen 1                   8b/10b
PCIe Gen 2                   8b/10b
PCIe Gen 3                   128b/130b
Ethernet 1Gb                 8b/10b
Ethernet 10Gb                64b/66b
Fibre Channel 1/2/4/8 Gb     8b/10b

Table 2: Common encoding schemes


Bringing this all together: in order to support cloud and virtual computing environments, data networks need to become faster as well as more efficient, otherwise you will be paying for more overhead per second vs. productive work being done. For example, with 64b/66b encoding on a 10GbE or FCoE link, about 97 percent (64/66) of the overall bandwidth, or roughly 9.7Gb per second of a 10Gb link, is available for useful work.


By comparison, if 8b/10b encoding were used, only 80 percent of the bandwidth would be available for useful data movement. For bandwidth-oriented environments this means better throughput, while for applications that require lower response time or latency it means more IOPS, frames or packets per second.


The above is an example of where a small change such as the encoding scheme can have large benefit when applied to high volume or large environments.


Learn more in The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC).


Nuff said for now


Cheers gs

You have been told by someone or determined on your own that it is time for a new server, however what to get?


A blade server, rack mount, floor model, physical or virtual perhaps cloud?


How about one that is fully configured and accessorized to meet your specific environment's needs?

There are several considerations involving what type of server or computer is needed to meet your specific needs or application requirements.


Options include price, packaging, vendor preferences, blade center, freestanding, 1U rack mount, virtual and cloud support, with or without storage and networking, performance as well as power and cooling among other considerations.


Here is a link (PDF version here, may require registration) to an article that I put together to help determine your needs as well as consider various options for your next server.


Hope you find the information useful!


Nuff said for now


Cheers gs

Have you hugged your cloud or MSP lately?


Why give a cloud a hug and what does it have to do with loss of data access vs. loss of data?


First there is a difference between actually losing data and losing access to it.


Losing data means that you have no backup or copy of the information thus it is gone.


This means there are no good valid backups, snapshots, copies or archives that can be used to restore or recover the information.


Losing access to data means that there is a copy of it somewhere, however it will take time to make it usable (no data was actually lost). How long you have to wait until the data is restored or recovered will vary, and during that time it may seem like data was lost.


Second, industry hype for and against clouds serves as a lightning rod for when things happen.


Lightning recently struck (at least virtually) with some outages (see links below), including at Google Gmail.

Here is a link to more info from Google about the Gmail disruption.


Cloud crowd cheerleaders may need a hug to feel good while they or their technology get tossed about a bit. Google announced that they had a service disruption recently, however data was not lost, only access to it for a period of time.


Let's take a step back before going forward.


With the Google Gmail disruption, following on previous incidents, true cynics and naysayers will probably jump on the anti cloud FUD feeding frenzy.


The true cloud cynics will tell the skeptics all about cloud challenges perhaps never having had actually used any service or technology themselves.


Cloud crowd cheerleaders are generally a happy go lucky bunch with virtual beliefs and physical or real emotions. They have a strong passion for their technology or paradigm, taking it quite seriously, in some instances perceiving attacks or FUD against cloud as an attack on them or their beliefs. Btw, some cheerleaders will see this post as snarky or cynical (ok, get over it already).


Then there are the skeptics or interested audience who are not complete cynics or cheerleaders, the large majority in the middle.


Generally speaking, they want to learn more and understand issues in order to work around them or take appropriate steps and institute best practices. They see a place for MSP or cloud services for some things to complement what they are currently doing, and they tend to be the majority of audiences outside of special interest, vendor or industry trade groups.


Some additional thoughts, comments and perspectives:

  • Loss of data means you cannot get it back to a specific RPO (Recovery Point Objective, or how much data you can afford to lose). Loss of access to data means that you cannot get to your data until a specific RTO (Recovery Time Objective) is met.
Tiered data protection: align technique and technology to RTO, RPO and SLO needs


  • RAID and replication provide accessibility to data, not data protection. The good news with RAID and replication or mirroring is that if you make a change to the data, it is copied or protected. The bad news is that if data is deleted or corrupted, that error or problem is also replicated.


  • Backup, snapshots, CDP or other time interval based techniques protect data against loss, however they may require time to restore, recover or refresh from. A combination of data availability and accessibility along with time interval based protection is needed (e.g. the two previous items should be combined). CDP should also mean complete, consistent, coherent or comprehensive data protection, including data in application or VM buffers.


  • Any technology will fail, either on its own or via human intervention or lack of proper configuration. It is not if, rather when, as well as how gracefully a failure along with fault isolation occurs and is remediated (corrected). Generally speaking, there is no such thing as a bad technology, rather poor or inappropriate use, configuration or deployment of it.


  • Protect onsite data with offsite mediums including MSP or cloud backup services while keeping a local onsite copy. Why keep an onsite local copy when using a cloud? Simple: if you lose access to the cloud or MSP for an extended period of time, you still have a copy of the data to work with (assuming it is still valid). On the other hand, important data that is onsite needs to be kept offsite. Hence cloud and MSP should complement what is done for data protection and vice versa. That's what I do; is that what you do?


  • The technology golden rule, which applies to cloud and virtualization: whoever controls the management of the technology controls the gold. Leverage CDP, which here means Commonsense Data Protection or Cloud Data Protection. Hops are great in beer (as well as some other foods), however they add latency, including with networks. Aggregation can cause aggravation; not everything can be consolidated, however much can be virtualized.


Here are some related blog posts:


Additional links to related articles and commentary:


Closing thoughts and comments (for now) regarding clouds.


It's not if, rather when, where, why, how and with what you will leverage cloud or MSP technologies, products, services, solutions or architectures to complement your environment.


How will the cloud or MSP work for you vs. you working for it (unless you actually do work for one of them)?


Don't be scared of clouds or virtualization, however look before you leap!


BTW, for those in the Minneapolis St. Paul area (aka the other MSP), check out this event on March 15, 2011, where I have been invited to talk about optimizing your data storage and virtual environments and being prepared to take advantage of cloud computing opportunities as they mature.

Nuff said for now


Cheers gs

twitter @storageio