
Storage I/O trends

Revisiting RAID: why it remains relevant, plus resources

If RAID really were no longer relevant (e.g. actually dead), why do some people spend so much time trying to convince others that it is dead, or to use a different RAID level, enhanced RAID, or related advanced approaches that go beyond RAID?


When you hear RAID, what comes to mind?


A legacy monolithic storage system that supports narrow 4, 5 or 6 drive wide stripe sets, or a modern system supporting dozens of drives in a RAID group with different options?


RAID means many things, and likewise there are different implementations (hardware, software, systems, adapters, operating systems) with various functionality, some better than others.


For example, which of the items in the following figure come to mind, or perhaps are new to your RAID vocabulary?


RAID questions


There are many variations of RAID storage: some for the enterprise, some for SMB, SOHO or consumer. Some have better performance than others; some have poor performance, for example causing extra writes that lead to the perception that all parity-based RAID does extra writes (some implementations actually do write gathering and optimization).


Some hardware and software implementations use a write-back cache (WBC), mirrored or battery backed (BBU), along with the ability to group writes together in memory (cache) to do full-stripe writes. The result can be fewer back-end writes compared to other systems. Hence, not all RAID implementations in either hardware or software are the same. Likewise, just because a RAID definition shows a particular theoretical implementation approach does not mean all vendors have implemented it that way.
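To illustrate the full-stripe write idea: parity-based RAID (e.g. RAID 5) stores the XOR of the data chunks in each stripe, so when all of a stripe's data is sitting in cache, parity can be computed once without first re-reading old data and parity from disk. A minimal Python sketch of the parity math (illustrative only, not any vendor's implementation):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together to produce a parity block."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Full-stripe write: parity is computed once from data already in cache,
# so no read-modify-write of existing data and parity is needed.
data_chunks = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
parity = xor_blocks(data_chunks)

# If any one chunk is lost, it can be rebuilt from the survivors plus parity.
rebuilt = xor_blocks([data_chunks[0], data_chunks[2], parity])
assert rebuilt == data_chunks[1]
```

The same XOR property is what makes a partial-stripe update expensive: without the whole stripe in cache, old data and old parity must be read back first.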


RAID is not a replacement for backup; rather, it is part of an overall approach to providing data availability and accessibility.

data protection and durability

What's the best RAID level? The one that meets YOUR needs


There are different RAID levels and implementations (hardware, software, controller, storage system, operating system, adapter among others) for various environments (enterprise, SME, SMB, SOHO, consumer) supporting primary, secondary, tertiary (backup/data protection, archiving).

RAID comparison
General RAID comparisons


Thus one size or approach does not fit all solutions; likewise, RAID rules of thumb or guides need context. Context means that a RAID rule or guide for consumer, SOHO or SMB might be different for enterprise and vice versa, not to mention varying with the type of storage system, number of drives, drive type and capacity among other factors.

RAID comparison
General basic RAID comparisons


Thus the best RAID level is the one that meets your specific needs in your environment. What is best for one environment and application may be different from what is applicable to your needs.
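One input to that decision is capacity overhead. As a rough, idealized comparison (ignoring hot spares, formatting and vendor-specific overhead), usable capacity by RAID level can be sketched as:

```python
def usable_capacity(drives, drive_tb, level):
    """Idealized usable capacity in TB for common RAID levels."""
    if level == "raid0":
        return drives * drive_tb        # striping only, no protection
    if level in ("raid1", "raid10"):
        return drives * drive_tb / 2    # mirrored: half the raw capacity
    if level == "raid5":
        return (drives - 1) * drive_tb  # one drive's worth of parity
    if level == "raid6":
        return (drives - 2) * drive_tb  # two drives' worth of parity
    raise ValueError(level)

# Eight 4TB drives compared across levels:
for lvl in ("raid0", "raid10", "raid5", "raid6"):
    print(lvl, usable_capacity(8, 4, lvl), "TB")
```

The point is the trade: mirroring costs the most capacity but avoids parity math, while RAID 5/6 trade capacity efficiency for rebuild and write-penalty considerations.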


Key points and RAID considerations include:


· Not all RAID implementations are the same; some are very much alive and evolving while others are in need of a rest or rewrite. So it is often not the technology or techniques that are the problem, rather how they are implemented and then deployed.


· It may not be RAID that is dead, rather the solution that uses it. Hence if you think a particular storage system, appliance, product or software is old and dead along with its RAID implementation, then just say that product or vendor's solution is dead.


· RAID can be implemented in hardware controllers, adapters or storage systems and appliances as well as via software and those have different features, capabilities or constraints.


· Long or slow drive rebuilds are a reality with larger disk drives and parity-based approaches; however, you have options on how to balance performance, availability, capacity, and economics.


· RAID can be single, dual or multiple parity or mirroring-based.


· Erasure codes and other coding schemes leverage parity techniques, and guess which umbrella parity schemes fall under.


· RAID may not be cool, sexy or a fun topic and technology to talk about, however many trendy tools, solutions and services actually use some form or variation of RAID as part of their basic building blocks. This is an example of using new and old things in new ways to help each other do more without increasing complexity.


·  Even if you are not a fan of RAID and think it is old and dead, at least take a few minutes to learn more about what it is that you do not like to update your dead FUD.
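On the rebuild-time point above, a rough back-of-the-envelope estimate is drive capacity divided by the effective rebuild rate, which shows why larger drives stress parity-based rebuilds. The rates below are purely illustrative assumptions, not measurements:

```python
def rebuild_hours(capacity_tb, rate_mb_s):
    """Rough single-drive rebuild time: capacity / sustained rebuild rate."""
    seconds = (capacity_tb * 1_000_000) / rate_mb_s  # TB expressed as MB
    return seconds / 3600

# A 4TB drive at an assumed 50 MB/s effective rebuild rate under
# production load vs. an assumed 150 MB/s when the array is idle:
print(round(rebuild_hours(4, 50), 1), "hours (busy)")   # 22.2
print(round(rebuild_hours(4, 150), 1), "hours (idle)")  # 7.4
```

That near-day-long window under load is the exposure that dual parity, declustered rebuilds and erasure coding aim to mitigate.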


Wait, Isn't RAID dead?

There is some "RAID is dead" marketing that paints a broad picture of RAID being dead in order to prop up something new, which in some cases may be a derivative variation of parity RAID.

data dispersal
Data dispersal and durability


RAID rebuild improving
RAID continues to evolve with rapid rebuilds for some systems


On the other hand, there are some specific products, technologies and implementations that may be end of life or actually dead. Likewise, what might be dead, dying or simply not in vogue are specific RAID implementations or packaging. Certainly there is a lot of buzz around object storage, cloud storage, forward error correction (FEC) and erasure coding, including messages of how they displace RAID. The catch is that some object storage solutions are overlaid on top of lower-level file systems that do things such as RAID 6, granted out of sight, out of mind.


RAID comparison
General RAID parity and erasure code/FEC comparisons


Then there are advanced parity protection schemes, including FEC and erasure codes, that while not your traditional RAID levels, have characteristics including chunking or sharding data and spreading it out over multiple devices with multiple parity (or derivatives of parity) protection.
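As a simple illustration of the chunking/sharding arithmetic: a k+m erasure code splits data into k data shards plus m parity (or parity-derived) shards, tolerating the loss of any m shards, with a raw-to-usable overhead of (k+m)/k. A sketch of just the overhead math (not a real encoder):

```python
def ec_overhead(k, m):
    """Raw-to-usable storage multiplier for a k data + m parity erasure code."""
    return (k + m) / k

# Compare a RAID 6-like 8+2 layout with a wider 10+4 dispersal:
print("8+2 overhead:", ec_overhead(8, 2))    # 1.25x raw per usable, survives 2 losses
print("10+4 overhead:", ec_overhead(10, 4))  # 1.4x raw per usable, survives 4 losses
```

Compare that with mirroring's 2x overhead and it is clear why wide dispersal schemes appeal for large-capacity systems, at the cost of more compute per write and wider rebuild reads.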


Bottom line is that for some environments, different RAID levels may be more applicable and alive than for others.


  Via BizTech - How to Turn Storage Networks into Better Performers
  • Maintain Situational Awareness
  • Design for Performance and Availability
  • Determine Networked Server and Storage Patterns
  • Make Use of Applicable Technologies and Techniques


If RAID is alive, what to do with it?

If you are new to RAID, learn more about the past, present and future, keeping context in mind. That means there are different RAID levels and implementations for various environments. Not all RAID 0, 1, 1/0, 10, 2, 3, 4, 5, 6 or other variations (past, present and emerging) are the same for consumer vs. SOHO vs. SMB vs. SME vs. enterprise, nor are the usage cases. Some need performance for reads, others for writes; some need high capacity with low performance, using hardware or software. RAID rules of thumb are OK and useful, however keep them in context to what you are doing as well as using.

What to do next?

Take some time to learn and ask questions, including what to use when, where, why and how, as well as whether an approach or recommendation is applicable to your needs. Check out the following links to read some extra perspectives about RAID and keep in mind: what might apply to enterprise may not be relevant for consumer or SMB, and vice versa.


Some advise needed on SSD's and Raid (Via Spiceworks)
Double drive failures in a RAID-10 configuration (Via SearchStorage)
Industry Trends and Perspectives: RAID Rebuild Rates (Via StorageIOblog)
RAID, IOPS and IO observations (Via StorageIOBlog)
RAID Relevance Revisited (Via StorageIOBlog)
HDDs Are Still Spinning (Rust Never Sleeps) (Via InfoStor)
When and Where to Use NAND Flash SSD for Virtual Servers (Via TheVirtualizationPractice)
What's the best way to learn about RAID storage? (Via Spiceworks)
Design considerations for the host local FVP architecture (Via Frank Denneman)
Some basic RAID fundamentals and definitions (Via SearchStorage)
Can RAID extend nand flash SSD life? (Via StorageIOBlog)
I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)
The original RAID white paper (PDF) by Patterson, Gibson and Katz, which, while over 20 years old, provides a basis, foundation and some history
Storage Interview Series (Via Infortrend)
Different RAID methods (Via RAID Recovery Guide)
A good RAID tutorial (Via TheGeekStuff)
Basics of RAID explained (Via ZDNet)
RAID and IOPs (Via VMware Communities)

Also check out the links above for related material.



What is my favorite or preferred RAID level?

That depends: for some things it's RAID 1, for others RAID 10, yet for others RAID 4, 5, 6 or DP, and yet other situations could be a fit for RAID 0 or erasure codes and FEC. Instead of being focused on just one or two RAID levels as the solution for different problems, I prefer to look at the environment (consumer, SOHO, small or large SMB, SME, enterprise), type of usage (primary, secondary or data protection), performance characteristics, reads, writes, type and number of drives among other factors. What might be a fit for one environment would not be a fit for others; thus my preferred RAID level, along with where it is implemented, is the one that meets the given situation. However, also keep in mind tying RAID into part of an overall data protection strategy. Remember, RAID is not a replacement for backup.

What this all means

Like other technologies that have been declared dead for years or decades, aka the zombie technologies (e.g. dead yet still alive), RAID continues to be used while the technology evolves. There are specific products, implementations or even RAID levels that have faded away, or are declining in some environments, yet alive in others. RAID and its variations are still alive; however, how it is used or deployed in conjunction with other technologies is also evolving.


Ok, nuff said for now.



Attention DIY Converged Server Storage Bargain Shoppers

Software defined storage on a budget with Lenovo TS140


server storage I/O trends


Recently I put together a two-part series of some server storage I/O items to get a geek for a gift (read part I here and part II here) that also contains items that can be used for accessorizing servers such as the Lenovo ThinkServer TS140.


Lenovo thinkserver ts140
Image via


Likewise I have done reviews of the Lenovo ThinkServer TS140 in the past which included me liking them and buying some (read the reviews here and here), along with a review of the larger TD340 here.

Why is this of interest

Do you need or want to do a Do It Yourself (DIY) build of a small server compute cluster, or a software defined storage cluster (e.g. scale-out), or perhaps converged storage for VMware VSAN, Microsoft SOFS or something else?


Do you need a new server, a second or third server, or to expand a cluster, create a lab or similar, and want the ability to tailor your system without shopping for a motherboard, enclosure, power supply and so forth?


Are you a virtualization or software defined person looking to create a small VMware Virtual SAN (VSAN), needing three or more servers to build a proof of concept or personal lab system?


Then the TS140 could be a fit for you.


storage I/O Lenovo TS140
Image via StorageIOlabs, click to see review

Why the Lenovo TS140 now?

Recently I have seen a lot of traffic on my site from people viewing my reviews of the Lenovo TS140, of which I have a few. In addition, I have received questions from people via the comments section as well as elsewhere about the TS140, and while shopping for some other things, noticed that there were some good value deals on different TS140 models.


I tend to buy the TS140 models that are bare bones, having enclosure, CD/DVD, USB ports, power supply and fan, processor and a minimal amount of DRAM memory. For processors, mine have the Intel E3-1225 v3, which is quad-core and has various virtualization assist features (e.g. good for VMware and other hypervisors).


What I saw on Amazon the other day (also elsewhere) were some Intel i3-4130 dual-core based systems (these do not have all the virtualization features, just the basics) in a bare configuration (e.g. no Hard Disk Drive (HDD); 4GB DRAM, processor, motherboard, power supply and fan, LAN port and USB) with a price of around $220 USD (your price may vary depending on timing, venue, prime or other membership and other factors). Not bad for a system that you can tailor to your needs. However, what also caught my eye were the TS140 models that have the Intel E3-1225 v3 (e.g. quad-core, 3.2GHz) processor matching the others I have, with a price of around $330 USD including shipping (your price will vary depending on venue and other factors).

What are some things to be aware of?

Some caveats of this solution approach include:

  • There are probably other similar types of servers, either by price, performance, or similar
  • Compare apples to apples, e.g. same or better processor, memory, OS, PCIe speed and type of slots, LAN ports
  • Not as robust of a solution as those you can find costing  tens of thousands of dollars (or more)
  • A DIY system which means you select the other hardware  pieces and handle the service and support of them
  • Hardware platform approach where you choose and supply  your software of choice
  • For entry-level environments that have floor space to accommodate towers vs. rack-space or other alternatives
  • Software agnostic: based on basically an empty server chassis (with power supplies, motherboard, PCIe slots and other things)
  • Possible candidate  for smaller SMB (Small Medium Business), ROBO (Remote Office Branch Office), SOHO (Small Office Home Office) or labs that are looking for DIY
  • A starting place  and stimulus for thinking about doing different things

What could you do with this building block (e.g. server)

Create a single or multi-server based system for


  • Virtual Server Infrastructure (VSI) including KVM, Microsoft Hyper-V, VMware ESXi, Xen among others
  • Object storage
  • Software Defined Storage including Datacore, Microsoft SOFS, Openstack, Starwind, VMware VSAN, various XFS and ZFS among others
  • Private or hybrid cloud including using Openstack among other software tools
  • Create a hadoop big data analytics cluster or grid
  • Establish a video or media server, use for gaming or a backup (data protection) server
  • Update or expand your lab and test environment
  • General purpose SMB, ROBO or SOHO single or clustered server

  VMware VSAN server storageIO example

What you need to know


Like some other servers in this class, you need to pay attention to what it is that you are ordering; check out the various reviews, comments and questions, as well as verify the make, model and configuration. For example, what is included and what is not: warranty, return policy, among other things. In the case of some of the TS140 models, they do not have a HDD, OS, keyboard, monitor or mouse, along with having different types of processors and memory. Not all the processors are the same; pay attention, and visit the Intel Ark site to look up a specific processor configuration to see if it fits your needs, as well as the hardware compatibility list (HCL) for the software that you are planning to use. Note that these should be best practices regardless of make, model, type or vendor for server, storage and I/O networking hardware and software.

What you will need

This list assumes that you have obtained a model without a HDD, keyboard, video, mouse or operating system (OS) installed.


  • Update your BIOS if applicable, check the Lenovo site
  • Enable virtualization and other advanced features via your BIOS
  • Software such as an Operating System (OS), hypervisor or other distribution (load via USB or CD/DVD if present)
  • SSD, SSHD/HHDD, HDD or USB flash drive for installing OS or other software
  • Keyboard, video, mouse (or a KVM switch)


What you might want to add (have it your way)


  • Keyboard, video mouse or a KVM switch (See gifts for a geek here)
  • Additional memory
  • Graphics card, GPU or PCIe riser
  • Additional SSD, SSHD/HHDD or HDD for storage
  • Extra storage I/O and networking ports

Extra networking ports

You can easily add some GbE (or faster) ports, including using the PCIe x1 slot, or use one of the other slots for a quad-port GbE (or faster) card, not to mention some InfiniBand single or dual port cards such as the Mellanox ConnectX-2 or ConnectX-3 that support QDR and can run in IBA or 10GbE modes. If you only have two or three servers in a cluster, grid or ring configuration, you can run point-to-point topologies using InfiniBand (and some other network interfaces) without using a switch; however, you decide if you need or want switched or non-switched (I have a switch). Note that with VMware (and perhaps other hypervisors or OSs) you may need to update the drivers for the Realtek GbE LAN on Motherboard port (see links below).
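A note on the switchless point-to-point option: a full mesh of n servers needs n(n-1)/2 links, which is why it only makes sense for a handful of nodes. A quick sketch:

```python
def mesh_links(n):
    """Point-to-point links needed for a full mesh of n nodes (no switch)."""
    return n * (n - 1) // 2

for n in (2, 3, 4):
    print(n, "servers ->", mesh_links(n), "links")
# 2 servers -> 1 link, 3 -> 3, 4 -> 6
```

With dual-port cards, three servers (one link to each peer per server) is the practical sweet spot; beyond that, a switch quickly becomes the simpler option.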

Extra storage ports

For extra storage space capacity (and performance) you can easily add PCIe G2 or G3 HBAs (SAS, SATA, FC, FCoE, CNA, UTA, IBA for SRP, etc) or RAID cards among others. Depending on your choice of cards, you can then attach to more internal storage, external storage or some combination with different adapters, cables, interposers and connectivity options. For example I have used TS140s with PCIe Gen 3 12Gbs SAS HBAs attached to 12Gbs SAS SSDs (and HDDs) with the ability to drive performance to see what those devices are capable of doing.

TS140 Hardware Defined My Way

As an example of how a TS140 can be configured: using one of the base E3-1224 v3 models with 4GB RAM and no HDD (e.g. around $330 USD, your price will vary), add a 4TB Seagate HDD (or two or three) for around $140 USD each (your price will vary), and add a 480GB SATA SSD for around $340 USD (your price will vary), with those attached to the internal SATA ports. To bump up network performance, how about a Mellanox ConnectX-2 dual-port QDR IBA/10GbE card for around $140 USD (your price will vary), plus around $65 USD for a QSFP cable (your price will vary), and some extra memory (use what you have or shop around), and you have a platform ready to go for around or under $1,000 USD. Add some more internal or external disks, bump up the memory, put in some extra network adapters and your price will go up a bit; however, think about what you can have for a robust not-so-little system. For you VMware vgeeks, think about the proof of concept VSAN that you can put together, granted you will have to do some DIY items.
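Tallying that example build (prices as quoted above at the time; yours will vary):

```python
# Hypothetical TS140 build tally using the example prices quoted above (USD).
parts = {
    "TS140 base (E3, 4GB RAM, no HDD)": 330,
    "4TB Seagate HDD": 140,
    "480GB SATA SSD": 340,
    "Mellanox ConnectX-2 dual-port QDR/10GbE": 140,
    "QSFP cable": 65,
}
total = sum(parts.values())
print(f"Build total: ${total}")  # Build total: $1015
```

That lands right at the "around or under $1,000 USD" mark before any extra memory or additional drives.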

Some TS140 resources

Lenovo TS140 resources include

Lenovo thinkserver ts140
Image via

What this all means

Like many servers in its category (price, capabilities, packaging) you can do a lot of different things with them, as well as hardware define them with accessories, or use your own software. Depending on how you end up hardware defining the TS140 with extra memory, HDDs, SSDs, adapters or other accessories and software, your cost will vary. However, you can also put together a pretty robust system without breaking your budget while meeting different needs.


Is this for everybody? Nope


Is this for more than a lab, experimental, hobbyist or gamer? Sure, with some caveats. Is this an apples to apples comparison vs. some other solutions including VSANs? Nope, not even close, maybe apples to oranges.


Do I like the TS140? Yup, starting with a review I did about a year ago, I liked it so much I bought one, then another, then some more.


Are these the only servers I have, use or like? Nope, I also have systems from HP and Dell as well as test drive and review others


Why do I like the TS140? It's a value for some things, which means that while affordable (not to be confused with cheap) it has features, scalability and the ability to be both hardware defined for what I want or need to use them as, along with software defined to be different things. Key for me is the PCIe Gen 3 support with multiple slots (and types of slots), a reasonable amount of memory, internal housing for 3.5" and 2.5" drives that can attach to on-board SATA ports, and a media device (CD/DVD) if needed, or remove it to make room for more HDDs and SSDs. In other words, it's a platform where instead of shopping for a motherboard, an enclosure, power supply, processor and related things, I get the basics, then configure and reconfigure as needed.


Another reason I like the TS140 is that I get to have the server basically my way, in that I do not have to order it with a minimum number of HDDs, or have it come with an OS, more memory than needed or other things that I may or may not be able to use. Granted, I need to supply the extra memory, HDDs, SSDs, PCIe adapters and network ports along with software; however for me that's not too much of an issue.

What don't I like about the TS140? You can read more about my thoughts on the TS140 in my review here, or its bigger sibling the TD340 here, however I would like to see more memory slots for scaling up. Granted for what these cost, it's just as easy to scale-out and after all, that's what a lot of software defined storage prefers these days (e.g. scale-out).


The TS140 is a good platform for many things, granted not for everything; that's why, like storage, networking and other technologies, there are different server options for various needs. Exercise caution when doing apples to oranges comparisons on price alone; compare what you are getting in terms of processor type (and its functionality), expandable memory, PCIe speed, type and number of slots, LAN connectivity and other features to meet your needs or requirements. Also keep in mind that some systems that cost more might include a keyboard, or a HDD with an OS installed; if you can use those components, then they have value and should be factored into your cost, benefit and return on investment.


And yes, I just added a few more TS140s that join other recent additions to the server storageIO lab resources...


Anybody want to guess what I will be playing with, among other things, during the upcoming holiday season?


Ok, nuff said, for now...

Cheers gs

Part II 2014 Server Storage I/O Geek Gift ideas


This is part two of a two part series for what to get a geek for a gift, read part one here.


KVM switch

Not to be confused with a software defined network (SDN) switch for the KVM virtualization hypervisor, how about the other KVM switch?


kvm switch
My KVM switch in use, looks like five servers are powered on.


If you have several servers or devices that need a Keyboard Video Mouse connection, or are using A/B boxes or other devices, why not combine them using a KVM switch. I bought the Startech shown above from Amazon, which works out to under $40 a port (connection), meaning I do not have to have a keyboard, video monitor or mouse for each of those systems.


With my KVM shown above, I have used the easy setup to name each of the ports via the management software so that when a button is pressed, not only does the applicable screen appear, a graphic text message overlay also tells me which server is being displayed. This is handy, for example, as I have some servers that are identical (e.g. Lenovo TS140s) running VMware, so a quick glance can help me verify I'm on the right one (e.g. without looking at the VMware host name or IP). This feature is also handy during power-on self test (POST), before the server's physical or logical (e.g. VMware, Windows, Hyper-V, Ubuntu, Openstack, etc.) identity is visible. Another thing I like about these is that on the KVM switch there is a single VGA-type connector, while on the server end there is a VGA connector for attaching to the monitor port of the device, and a breakout cable with USB for attaching to the server to get keyboard and mouse.

Single drive shoe box

Usually things are in larger server or storage systems enclosures, however now and then there is the need to supply power to a HDD or SSD along with a USB or eSATA interface for attaching to a system. These are handy and versatile little aluminum enclosures.


single drive sata enclosuredisk enclosure

Note that you can now also find cables that do the same or a similar function for an inside-the-server connection (check out this cable among others at Amazon).

USB-SATA cable

It would be easy to assume that everybody would have these by now, particularly since everybody (depending on who you listen to or what you read) has probably converted from a HDD to SSD. However, for those who have not done an HDD to SSD conversion, or simply a HDD to newer HDD conversion, or who have an older HDD (or SSD) lying around, these cables come in very handy. Attach one end (e.g. the SATA end) to a HDD or SSD and the other to a USB port on a laptop, tablet or server. The caveat with these is that they generally only have power (via USB) for a 2.5" type drive, so for a larger, more power-hungry 3.5" device you would need a different powered cable, or a small shoe box type enclosure.

USB to SATAeSATA cable
(Left) USB to SATA and (Right) eSATA to SATA cables

Mophie USB charger

There are many different types of mobile device chargers available along with multi-purpose cables. I like the Mophie which I received at an event from NetApp (Thanks NetApp) and the flexible connector I received from Dyn while at AWS re:Invent 2014 (Thanks Dyn, I'm also a Dyn customer fwiw).
  power chargerpower cable
  (Left) Mophie Power station and (Right) multi-connector cable


The Mophie has a USB connector so that you can charge it via a charging station or via a computer, as well as attach a USB to Apple or other device connector. There is also a small connector for attaching to other devices. This is where the dandy Dyn device comes into play, as it has a USB as well as Apple and many other common connectors as shown in the figure below. Google around and I'm sure you can find both for sale, or as giveaways or something similar.

SAS SATA Interposer


sas interposerserver storage power
    (Left) SAS to SATA interposer (Right) Molex power with SATA connector to SAS


Note that the above are intended for passing a SAS signal from a device such as an HDD or SSD to a SAS-based controller that happens to have SATA mechanical or keyed interfaces, such as with some servers. This means that the real controller needs to be SAS, and the attached drives can be SATA or SAS, keeping in mind that a SATA device can plug into a SAS controller however not vice versa. You can find the above at Amazon among other venues. Need a dual-lane SAS connector as an alternative to the one shown above on the right? Then check this one out at Amazon.


Need to learn more about the many different facets of SAS and related technologies including how it coexists with iSCSI, Fibre Channel (FC), FCoE, InfiniBand and other interfaces, how about getting a free copy of SAS SANs for Dummies?

SAS SANS for dummies

There are also these for doing board level connections

esata connectorsata to esata cablesata male to male gender changer
Some additional SAS and SATA drive connectors


In the above, on the left is a female to female SATA cable with a male to male SATA gender changer attached, to be used for example between a storage device and the SATA connector port on a server's motherboard, HBA or RAID controller. In the middle are shown some SATA female to female cables, as well as a SATA to eSATA (external SATA) cable, and on the right are some SATA male to SATA male gender changers, also shown being used on the left in the above figures.

Internal Power cable / connectors

If you or your geek are doing things in the lab or other environment adding and reconfiguring devices such as some of those mentioned above (or below), sooner or later there will be the need to do something with power cables and connectors.

power meter
Various cables, adapters and extender


In the above figure are shown (top to bottom) a SATA male to Molex, a SATA female to SATA male and, to its right, a SATA female to Molex. Below that are two SATA females to Molex, below that is a SATA male to dual Molex, and on the bottom is a single SATA to dual SATA. Needless to say there are many other combinations of connectors as well as different genders (e.g. male or female) along with extenders. As mentioned above, pay attention to manufacturers' recommended power draw and safety notices to prevent accidental electric shock or fire.

Intel Edison kit for IoT and IoD

Are you or your geek into the Internet of Things (IoT) or Internet of Devices (IoD) or other similar things and gadgets? Have you heard about Intel's Edison breakout board for doing software development and attachment of various hardware things? Looking for something to move beyond a Raspberry Pi system?

Intel Edison boardIntel Edison kits
Images via

Over the hills, through the woods WiFi

This past year I found Nanostation extended WiFi devices that solved a challenge (problem), which was how to get a secure WiFi signal a couple hundred yards through a thick forest between some hills.

nanostation long range wifi
Image via, check out their other models as well as resources for different deployments


The problem was that it was too far, with too many trees with leaves in the way, to use a regular WiFi connection, and too far to run cable if I did not need to. I found the solution by getting a pair of Nanostation M2s, putting them into bridge mode, then doing some alignment with their narrow-beam antennas to bounce a signal through the woods. For those who simply need to go a long distance, these devices can be configured to go several kilometers line of sight. Click on the image above to see other models of the Nanostation as well as links to various resources on how they can be used for other things or deployments.

How about some software

  • UpDraft Backup - This is a WordPress blog plugin that I use to back up my entire web site including the templates, plug-ins, MySQL database and all other related components. While my dedicated private server gets backed up by my service provider (Bluehost), I wanted an extra layer of protection along with a copy placed in a different place (e.g. at my AWS account). Updraft is an example of an emerging class of tools for backing up and protecting cloud-based and cloud-born data. For example, EMC recently acquired cloud backup startup Spanning, which has the ability to protect Salesforce, Google and other cloud-based data.
  • Visual ESXtop - This is a great free tool that provides a nice interface and remote access for doing ESXtop functions normally accomplished from the ESXi console.
  • Microsoft Diskspd - If you or your geek is into server storage I/O performance and benchmark that has a Windows environment and looking for something besides Iometer, have them download the Microsoft Diskspd free utility.
  • Futuremark PCmark - Speaking of server storage I/O performance, check out Futuremark PCmark which will give your computer a great workout from graphics and video to compute, storage I/O and other common tasks.
  • RV Tools - Need to know more about your VMware virtual environment, take a quick inventory or something else, then your geek should have a copy of RV Tools from Robware.
  • iVMControl - For that vgeek who wants to be able to do simple VMware tasks from an iPhone, check out the iVMControl tool. It's great; I don't use it a lot, however there are times when I don't need or want to use a tablet or PC to reach my VMware environment, and that's when this virtual gadget comes into play.

Livescribe Digital Pen and Paper

How about a Livescribe digital pen and paper? Sure you can use a PC, Apple or other tablet, however some things are still easier done with traditional paper and a digital pen. I got one of these about a year ago and use it for note taking, mocking up slides for presentations, and in some cases have used it for creating figures and other things. It would be easy to see and position the Livescribe and a Windows or other tablet as an either/or competitive choice; however for me, I still see where they are better together addressing different things, at least for now.

livescribe digital penlivescribe digital pen
(Left) using my Livescribe and Echo digital pen (Right) resulting exported .Png


Tip: If you noticed in the above left image (e.g. the original), the lines in the top figure are different from the lines in the figure on the right. If your Livescribe causes lines to run on or into each other, it is because your digital pen tip is sticking. It's easy to check: look at the tip of your digital pen and see if the small red light is on or off, or if it stays on when you press the pen tip. If it stays on, reset the pen tip. Also when you write, make sure to lift up on the pen tip so that it releases, otherwise you will get results like those shown on the right.

livescribe digital pen
(Left) Livescribe Digital Desktop (Middle) Imported Digital Document (Right) Exported PNG


Also check out this optional application that turns a Livescribe Echo pen like mine into a digital tablet allowing you to draw on-screen with certain applications and webinar tools.

Some books for the geek

Speaking of reading, for those who are not up on NoSQL and alternative SQL based databases including Mongo, Hbase, Riak, Cassandra and MySQL, add Seven Databases in Seven Weeks to your list. Click on the image to read my book review of it as well as links to order it from Amazon. Seven Databases in Seven Weeks (A Guide to Modern Databases and the NoSQL Movement) is a book written by Eric Redmond (@coderoshi) and Jim Wilson (@hexlib), part of The Pragmatic Programmers (@pragprog) series, that takes a look at several non SQL based database systems.

seven database nosql

Where to get the above items

  • Ebay for new and used
  • Amazon for new and used
  • Newegg
  • PC Pit stop
  • And many other venues

What this all means

Note: Some of the above can be found at your favorite  trade show or conference so keep that in mind for future gift giving.


What interesting geek gift ideas or wish list items do  you have?


Of course if you have anything interesting to mention, feel free to add it to the comments (keep it clean though), or send it to me for a future mention.


In the meantime have a safe and happy holiday season, for whatever holiday you enjoy celebrating any time of the year.


Ok, nuff said, for now...

Cheers gs

Server Storage I/O Cables Connectors Chargers & other Geek Gifts

server storage I/O trends


This is part one of a two part series for what to get a geek for a gift, read part two here.


It is that time of the year when annual predictions are made for the upcoming year, including those that will be repeated next year or that were also made last year.


It's also the time of the year to get various projects wrapped up, line up new activities, get the book-keeping things ready for year-end processing and taxes, as well as other things.


It's also that time of the year to do some budget and project planning including upgrades, replacements, enhancements while balancing an over-subscribed holiday party schedule some of you may have.


Let's not forget getting ready for vacations, perhaps time off from work, with some time upgrading your home lab or other projects.


Then there are the gift lists, or trying to figure out what to get that difficult-to-shop-for person, particularly geeks who may have everything, want the latest and greatest that others have, or want something their peers don't have yet.


Sure I have a DJI Phantom II on my wish list, however I also have other things on my needs list (e.g. what I really need and want vs. what would be fun to wish for).

DJI Phantom helicopter drone
Image via, click on image to learn more and compare models


So here are some things for the geek who may have everything or is up on having the latest and greatest, yet forgot or didn't know about some of these things.


Not to mention some of these might seem really simple and low-cost; think of them like a Lego block or erector set part where your imagination is the only boundary on how to use them. Also, most if not all of these are budget friendly, particularly if you shop around.

Replace a CD/DVD with 4 x 2.5" HDD's or SSD's

So you need to add some 2.5" SAS or SATA HDD's, SSD's, or HHDD's/SSHD's to your server supporting your VMware ESXi, Microsoft Hyper-V, KVM, Xen, OpenStack, Hadoop or legacy *nix or Windows environment, or perhaps a gaming system. The challenge is that you are out of disk drive bay slots and you want things neatly organized vs. a rat's nest of cables hanging out of your system. No worries, assuming your server has an empty media bay (e.g. those 5.25" slots where CDs/DVDs or really old HDD's go), or if you can give up the CD/DVD, then use that bay and its power connector to add one of these. This is a 4 x 2.5" SAS and SATA drive bay that has a common power connector (male Molex), with each drive bay having its own SATA drive connection. Because each drive has its own SATA connection, you can map the drives to an available on-board SATA port attached to a SAS or SATA controller, or attach an available port on a RAID adapter to the ports using a cable such as small form factor (SFF) 8087 to SATA.

sas sata storage enclosure
(Left) Rear view with Molex power and SATA cables (Right) front view


I have a few of these in different systems and what I like about them is that they support different drive speeds, plus they will accept a SAS drive where many enclosures in this category only support SATA. Once you mount your 2.5" HDD or SSD using screws, you can hot swap the drives (requires controller and OS support) and move them between other similar enclosures as needed. The other things I like are the front indicator lights, and since each drive has its own separate connection, you can attach some of the drives to a RAID adapter while others connect to on-board SATA ports. Oh, and you can also mix different speeds of drives as well.

Power connections

Depending on the type of your server, you may have Molex, SATA or some other type of power connections. You can use different power connection cables to go from one type (e.g. Molex) to another, create a connection for two devices, or create an extension to reach hard-to-access mounting locations.


Warning and disclosure note: keep in mind how much power you are drawing when attaching devices so as not to cause an electrical or fire hazard; follow the manufacturer's instructions and specifications, doing so at your own risk! After all, just like Clark Griswold in National Lampoon's Christmas Vacation, who found you could attach extension cords to splitters to splitters and fan-out to have many lights attached, you don't want to cause a fire or blackout when you plug too many drives in.

National Lampoon Christmas Vacation

Measuring Power

Ok, so you do not want to do a Clark Griswold (see above) and overload a power circuit, or perhaps you simply want to know how many watts or amps you are drawing, or what the quality of your voltage is.


There are many types of power meters along with various prices; some even have interfaces where you can grab event data to correlate with server storage I/O networking performance to derive metrics such as IOPS per watt. Speaking of IOPS per watt, check out the SNIA Emerald site where they have some good tools including a benchmark script that uses Vdbench to drive a hot band workload (e.g. basically kick the crap out of a storage system).
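As a rough illustration of the IOPS per watt metric mentioned above, here is a minimal sketch. The numbers and function name are made up for illustration; real figures would come from a benchmark tool (such as Vdbench) and a power meter (such as a Kill A Watt).

```python
# Hypothetical example: computing IOPS per watt from measured values.

def iops_per_watt(iops, watts):
    """Return the IOPS-per-watt efficiency metric for a measured workload."""
    if watts <= 0:
        raise ValueError("watts must be positive")
    return iops / watts

# A storage system doing 25,000 IOPS while drawing 500 watts:
print(iops_per_watt(25_000, 500))  # -> 50.0 IOPS per watt
```

Comparing this figure across systems under the same workload is what makes the metric useful; the raw number alone says little.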


Back to power meters, I like the Kill A Watt series of meters as they give good info about amps, volts, power quality. I have these plugged into outlets so I can see how much power is being used by the battery backup units (BBU) aka UPS that also serve as power surge filters. If needed I can move these further downstream to watch the power intake of a specific server, storage, network or other device.

Kill A Watt Power meter

Standby and backup power

Electrical power surge strips should be a given or considered common sense, however what is or should be common sense bears repeating so that it remains common sense: you should be using power surge strips or other protective devices.

Standby, UPS and BBU

For most situations a good surge suppressor will cover short power transients.

APC power strips and battery backup
Image via APC and model similar to those that I have


For slightly longer power outages of a few seconds to minutes, that's where battery backup (BBU) units that also have surge suppression come into play. There are many types and sizes with various features to meet your needs and budget. I have several of these in a couple of different sizes, not only for servers, storage and networking equipment (including some WiFi access points, routers, etc.), I also have them for home things such as satellite DVR's. However not everything needs to stay on, while other equipment simply needs to stay on long enough to shut down manually or via automated power-off sequences.

Alternate Power Generation

Generators are not just for the rich and famous or large data center, like other technologies they are available in different sizes, power capacity, fuel sources, manual or automated among other things.

kohler residential generator
Image via Kohler Power, similar to model that I have


Note that even with a typical generator there will be a time gap from the time power goes off until the generator starts, stabilizes and you have good power. That's where the BBU and UPS mentioned above come into play to bridge those time gaps, which in my case is about 25-30 seconds. Btw, knowing how much power your technology is drawing using tools such as the Kill A Watt is part of the planning process to avoid surprises.
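As a back-of-the-envelope planning sketch, you can estimate whether a UPS can bridge a generator start-up gap. The formula and numbers here are illustrative assumptions, not from any specific UPS model; real runtime also depends on inverter efficiency, battery age and load profile, so treat this as a rough estimate only.

```python
# Rough estimate of UPS battery runtime for a measured load.

def ups_runtime_seconds(battery_watt_hours, load_watts, efficiency=0.85):
    """Estimated runtime in seconds; efficiency is an assumed inverter factor."""
    return battery_watt_hours * efficiency / load_watts * 3600

# A small UPS with a 100 Wh battery feeding a 300 W load (measured with a
# power meter) versus a 30 second generator start-up gap:
runtime = ups_runtime_seconds(100, 300)
print(round(runtime))  # roughly 1020 seconds of estimated runtime
print(runtime > 30)    # True: comfortably bridges the gap
```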

What about Solar Power

Yup, whether it is to fit in and be green, or simply to get some electrical power when or where it is needed to charge a battery or power some device, these small solar power devices are very handy.

solar charger
Image via
solar battery charger
Image via


For example you can get or easily make an adapter to charge laptops and cell phones, or even power them for normal use (check the manufacturer's information on power usage, amp and voltage draws, among other warnings, to prevent fire and other hazards). Btw, not only are these handy for computer related things, they also work great for keeping the batteries on my fishing boat charged so that I have my fish finder and other electronics, just saying.

Fire suppression

How about a new or updated smoke and fire detection alarm monitor, as well as fire extinguisher for the geek's software defined hardware that runs on power (electrical or battery)?


The following is from the site Fire Extinguisher 101 where you can learn more about different types of suppression technologies.


    fire extinguisher 101 example (Image via Fire Extinguisher 101)
  • Class A extinguishers are for ordinary combustible materials such as paper, wood, cardboard, and most plastics. The numerical rating on these types of extinguishers indicates the amount of water it holds and the amount of fire it can extinguish. Geometric symbol (green triangle)
  • Class B fires involve flammable or combustible liquids such as gasoline, kerosene, grease and oil. The numerical rating for class B extinguishers indicates the approximate number of square feet of fire it can extinguish. Geometric symbol (red square)
  • Class C fires involve electrical equipment, such as appliances, wiring, circuit breakers and outlets. Never use water to extinguish class C fires - the risk of electrical shock is far too great! Class C extinguishers do not have a numerical rating. The C classification means the extinguishing agent is non-conductive. Geometric symbol (blue circle)
  • Class D fire extinguishers are commonly found in a chemical laboratory. They are for fires that involve combustible metals, such as magnesium, titanium, potassium and sodium. These types of extinguishers also have no numerical rating, nor are they given a multi-purpose rating - they are designed for class D fires only. Geometric symbol (Yellow Decagon)
  • Class K fire extinguishers are for fires that involve cooking oils, trans-fats, or fats in cooking appliances and are typically found in restaurant and cafeteria kitchens. Geometric symbol (black hexagon)

Wrap up for part I

This wraps up part I of what to get a geek V2014, continue reading part II here.


Ok, nuff said, for now...


Cheers gs

Data Storage Tape Update V2014, It's Still Alive

server storage I/O trends

A year or so ago I did a piece on how tape is still alive, or at least still alive in conversations and discussions. Despite being declared dead for decades, and despite the fact it will probably keep being declared dead for years to come, magnetic tape is in fact still alive and being used by some organizations; granted, its role is changing while the technology still evolves.


Here is the memo I received today from the PR folks of the Tape Storage Council (e.g. the tape vendors' marketing consortium) and for simplicity (mine), I'm posting it here for you to read in its entirety vs. possibly in pieces elsewhere. Note that this is basically a tape status update and a collection of marketing and press release talking points, however you can get an idea of the current messaging, who is using tape, and technology updates.


Tape Data Storage in 2014 and looking towards 2015

True to the nature of magnetic tape as a data storage medium, this is not a low latency small post, rather a large high-capacity bulk post or perhaps all you need to know about tape for now, or until next year. Otoh, if you are a tape fan, you can certainly take the memo from the tape folks, as well as visit their site for more info.


From the tape storage council industry trade group:


Today the Tape Storage Council issued its  annual memo to highlight the current trends, usages and technology  innovations occurring within the tape storage industry. The Tape Storage  Council includes representatives of BDT, Crossroads Systems, FUJIFILM, HP,  IBM, Imation, Iron Mountain, Oracle, Overland Storage, Qualstar, Quantum, REB  Storage Systems, Recall, Spectra Logic, Tandberg Data and  XpresspaX. 
  Data Growth and Technology Innovations Fuel  Tape’s Future
  Tape Addresses New Markets as Capacity,  Performance, and Functionality Reach New Levels

  For the past  decade, the tape industry has been re-architecting itself and the renaissance  is well underway. Several  new and important technologies for both LTO (Linear Tape Open) and  enterprise tape products have yielded unprecedented cartridge capacity  increases, much longer media life, improved bit error rates, and vastly  superior economics compared to any previous tape or disk technology. This  progress has enabled tape to effectively address many new data intensive market  opportunities in addition to its traditional role as a backup device such as  archive, Big Data, compliance, entertainment and surveillance. Clearly disk  technology has been advancing, but the progress in tape has been even greater  over the past 10 years. Today’s modern tape technology is nothing like the tape  of the past.

The  Growth in Tape 
  Demand for  tape is being fueled by unrelenting data growth, significant technological  advancements, tape’s highly favorable economics, the growing requirements to  maintain access to data “forever” emanating from regulatory, compliance or  governance requirements, and the big data demand for large amounts of data to  be analyzed and monetized in the future. The  Digital Universe study suggests that the world’s information is doubling  every two years and much of this data is most cost-effectively stored on tape.

Enterprise  tape has reached an unprecedented 10 TB native capacity with data rates  reaching 360 MB/sec. Enterprise tape libraries can scale beyond one exabyte.  Enterprise tape manufacturers IBM and Oracle StorageTek have signaled future  cartridge capacities far beyond 10 TBs with no limitations in sight.  Open  systems users can now store more than 300 Blu-ray quality movies with the LTO-6  2.5 TB cartridge. In the future, an LTO-10 cartridge will hold over 14,400  Blu-ray movies. Nearly 250 million LTO tape cartridges have been shipped since  the format’s inception. This equals over 100,000 PB of data protected and  retained using LTO Technology. The innovative active archive solution combining tape  with low-cost NAS storage and LTFS is gaining momentum for open systems users.
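As a quick sanity check of the movie-count figures quoted above, the per-movie size implied by the LTO-6 claim can be derived and applied to the LTO-10 claim (assuming decimal units, 1 TB = 1000 GB; the "Blu-ray quality" size is inferred from the press release's own numbers, not an official figure):

```python
# Back-of-the-envelope check of the cartridge/movie figures.

lto6_capacity_tb = 2.5          # LTO-6 native cartridge capacity
movies_per_lto6 = 300           # "more than 300 Blu-ray quality movies"
movie_size_gb = lto6_capacity_tb * 1000 / movies_per_lto6
print(round(movie_size_gb, 1))  # ~8.3 GB per movie implied

# Implied capacity behind the "14,400 Blu-ray movies" LTO-10 claim:
lto10_tb = 14_400 * movie_size_gb / 1000
print(round(lto10_tb))          # ~120 TB, matching the LTO-10 roadmap figure
```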

Recent  Announcements and Milestones
  Tape storage  is addressing many new applications in today’s modern data centers while  offering welcome relief from constant IT budget pressures. Tape is also  extending its reach to the cloud as a cost-effective deep archive service. In  addition, numerous analyst studies confirm the TCO for tape is much lower than  disk when it comes to backup and data  archiving applications. See TCO Studies section below.

  • On       Sept. 16, 2013 Oracle       Corp announced the StorageTek T10000D enterprise tape drive. Features       of the T10000D include an 8.5 TB native capacity and data rate of 252 MB/s       native. The T10000D is backward read compatible with all three previous       generations of T10000 tape drives.
  • On       Jan. 16, 2014 Fujifilm       Recording Media USA, Inc. reported it has manufactured over 100       million LTO Ultrium data cartridges since its release of the first       generation of LTO in 2000. This       equates to over 53 thousand petabytes (53 exabytes) of storage and more       than 41 million miles of tape, enough to wrap around the       globe 1,653 times.
  • On April 30, 2014, Sony Corporation independently developed a soft magnetic under layer with a smooth interface using sputter deposition, and created a nano-grained magnetic layer with fine magnetic particles and uniform crystalline orientation. This layer enabled Sony to successfully demonstrate the world's highest areal recording density for tape storage media of 148 Gb/in^2. This areal density would make it possible to record more than 185 TB of data per data cartridge.
  • On May 19, 2014 Fujifilm in conjunction with IBM successfully demonstrated a record areal data density of 85.9 Gb/in^2 on linear magnetic particulate tape using Fujifilm's proprietary NANOCUBIC™ and Barium Ferrite (BaFe) particle technologies. This breakthrough in recording density equates to a standard LTO cartridge capable of storing up to 154 terabytes of uncompressed data, making it 62 times greater than today's current LTO-6 cartridge capacity, and projects a long and promising future for tape growth.
  • On Sept. 9, 2014 IBM announced LTFS LE version 2.1.4, extending LTFS (Linear Tape File System) tape library support.
  • On       Sept. 10, 2014 the LTO Program Technology Provider Companies (TPCs), HP,       IBM and Quantum, announced an extended roadmap which now includes LTO generations 9       and 10. The new generation guidelines call for compressed capacities of       62.5 TB for LTO-9 and 120 TB for generation LTO-10 and include compressed       transfer rates of up to 1,770 MB/second for LTO-9 and a 2,750 MB/second       for LTO-10. Each new generation will include read-and-write backwards compatibility       with the prior generation as well as read compatibility with cartridges       from two generations prior to protect investments and ease tape conversion       and implementation.
  • On       Oct. 6, 2014 IBM announced the TS1150 enterprise drive. Features of the TS1150 include a       native data rate of up to 360 MB/sec versus the 250 MB/sec native data       rate of the predecessor TS1140 and a native cartridge capacity of 10 TB       compared to 4 TB on the TS1140. LTFS support was included.
  • On       Nov. 6, 2014, HP announced a new       release of StoreOpen Automation that delivers a solution for using LTFS in automation environments with       Windows OS, available as a free download. This version complements their       already existing support for Mac and Linux versions to help simplify       integration of tape libraries to archiving solutions.

Significant  Technology Innovations Fuel Tape’s Future
  Development  and manufacturing investment in tape library, drive, media and management  software has effectively addressed the constant demand for improved  reliability, higher capacity, power efficiency, ease of use and the lowest cost  per GB of any storage solution. Below is a summary of tape’s value proposition  followed by key metrics for each:

  • Tape       drive reliability has surpassed disk drive reliability
  • Tape       cartridge capacity (native) growth is on an unprecedented trajectory
  • Tape       has a faster device data rate than disk
  • Tape       has a much longer media life than any other digital storage medium
  • Tape’s       functionality and ease of use is now greatly enhanced with LTFS
  • Tape       requires significantly less energy consumption than any other digital       storage technology
  • Tape       storage has  a much lower acquisition cost and TCO than disk

Reliability. Tape reliability levels have surpassed HDDs. Reliability levels for tape exceed those of the most reliable disk drives by one to three orders of magnitude. The BER (Bit Error Rate - bits read per hard error) is rated at 1x10^19 for enterprise tape and 1x10^17 for LTO tape. This compares to 1x10^16 for the most reliable enterprise Fibre Channel disk drive.
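To put those bit error rates in perspective, here is an illustrative calculation of the expected number of hard read errors when reading 1 PB (8e15 bits) of data, assuming errors are independent and the quoted BER figures mean bits read per hard error:

```python
# Expected hard errors for a given amount of data read at a given BER.

def expected_errors(bits_read, bits_per_error):
    """Expected number of hard errors, assuming independent errors."""
    return bits_read / bits_per_error

one_pb_bits = 8e15  # 1 PB = 1e15 bytes = 8e15 bits (decimal units)
print(expected_errors(one_pb_bits, 1e19))  # enterprise tape: ~0.0008
print(expected_errors(one_pb_bits, 1e17))  # LTO tape:        ~0.08
print(expected_errors(one_pb_bits, 1e16))  # enterprise disk: ~0.8
```

In other words, at the quoted rates you would expect almost one hard error per petabyte read from the most reliable disk, versus roughly one per thousand petabytes from enterprise tape.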

Capacity  and Data Rate. LTO-6 cartridges provide 2.5 TB capacity and more than double the compressed  capacity of the preceding LTO-5 drive with a 14% data rate performance boost to  160 MB/sec. Enterprise tape has reached 8.5 TB native capacity and 252 MB/sec  on the Oracle StorageTek T10000D and 10 TB native capacity and 360 MB/sec on the  IBM TS1150. Tape cartridge capacities are expected to grow at unprecedented  rates for the foreseeable future.
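A quick worked example from the figures above shows how long it takes to stream a full cartridge at the native data rate (assuming decimal units and that the drive sustains its rated native rate with no compression; real jobs will vary):

```python
# Time to write or read a full cartridge at the native data rate.

def hours_to_fill(capacity_tb, rate_mb_per_sec):
    """Hours to stream a full cartridge; 1 TB = 1e12 bytes, 1 MB = 1e6 bytes."""
    return capacity_tb * 1e12 / (rate_mb_per_sec * 1e6) / 3600

print(round(hours_to_fill(2.5, 160), 1))  # LTO-6 at 160 MB/sec:  ~4.3 hours
print(round(hours_to_fill(10, 360), 1))   # TS1150 at 360 MB/sec: ~7.7 hours
```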

Media  Life. Manufacturers specifications indicate that enterprise and LTO tape media has a  life span of 30 years or more while the average tape drive will be deployed 7  to 10 years before replacement. By comparison, the average disk drive is  operational 3 to 5 years before replacement.

LTFS  Changes Rules for Tape Access. Compared to previous proprietary solutions,  LTFS is an open tape format that stores files in application-independent,  self-describing fashion, enabling the simple interchange of content across  multiple platforms and workflows. LTFS is also being deployed in several  innovative “Tape as NAS” active archive solutions that combine the cost  benefits of tape with the ease of use and fast access times of NAS. The SNIA  LTFS Technical Working Group has been formed to broaden cross–industry  collaboration and continued technical development of the LTFS specification.

TCOStudies. Tape’s widening cost  advantage compared to other storage mediums makes it the most cost-effective  technology for long-term data retention. The favorable economics (TCO, low  energy consumption, reduced raised floor) and massive scalability have made  tape the preferred medium for managing vast volumes of data. Several tape TCO  studies are publicly available and the results consistently confirm a  significant TCO advantage for tape compared to disk solutions.

According to the Brad Johns Consulting Group, a TCO study for an LTFS-based 'Tape as NAS' solution totaled $1.1M compared with $7.0M for a disk-based unified storage solution. This equates to a savings of over $5.9M over a 10-year period, which is more than 84 percent less than the equivalent amount for a storage system built on 4 TB hard disk drives. From a slightly different perspective, this is a TCO savings of over $2,900/TB of data. Source: Johns, B., "A New Approach to Lowering the Cost of Storing File Archive Information."

Another comprehensive TCO study by ESG (Enterprise Strategy Group) comparing an LTO-5 tape library system with a low-cost SATA disk system for backup using de-duplication (the best case for disk) shows that disk deduplication has a 2-4x higher TCO than the tape system for backup over a 5-year period. The study also revealed that disk has a TCO 15x higher than tape for long-term data archiving.

Select  Case Studies Highlight Tape and Active Archive Solutions
CyArk is a non-profit foundation focused on the digital preservation of cultural heritage sites, including places such as Mt. Rushmore and Pompeii. CyArk predicted that their data archive would grow by 30 percent each year for the foreseeable future, reaching one to two petabytes in five years. They needed a storage solution that was secure, scalable, and cost-effective to provide the longevity required for these important historical assets. To meet this challenge CyArk implemented an active archive solution featuring LTO and LTFS technologies.

DreamWorks Animation, a global computer graphics (CG) animation studio, has implemented a reliable, cost-effective and scalable active archive solution to safeguard a 2 PB portfolio of finished movies and graphics, supporting a long-term asset preservation strategy. The studio's comprehensive, tiered and converged active archive architecture, which spans software, disk and tape, saves the company time and money and reduces risk.

LA Kings of the NHL rely extensively on digital video assets for marketing activities with team partners and for their broadcast affiliation with Fox Sports. Today, the Kings save about 200 GB of video per game for an 82-game regular season and are on pace to generate about 32-35 TB of new data per season. The Kings chose to implement Fujifilm's Dternity NAS active archive appliance, an open LTFS based architecture. The Kings wanted an open archiving solution which could outlast its original hardware while maintaining data integrity. Today with Dternity and LTFS, the Kings don't have to decide what data to keep because they are able to cost-effectively save everything they might need in the future.

McDonald’s primary challenge was to create a digital video workflow that streamlines the  management and distribution of their global video assets for their video  production and post-production environment. McDonald’s implemented the Spectra T200 tape library with LTO-6  providing 250 TB of McDonald’s video production storage. Nightly, incremental  backup jobs store their media assets into separate disk and LTO- 6 storage  pools for easy backup, tracking and fast retrieval. This system design allows  McDonald’s to effectively separate and manage their assets through the use of  customized automation and data service policies.

NCSA employs  an Active Archive solution providing 100 percent of the nearline storage for  the NCSA Blue Waters supercomputer, which is one of the world’s largest active file repositories  stored on high capacity, highly reliable enterprise tape media. Using an active  archive system along with enterprise tape and RAIT (Redundant Arrays of Inexpensive Tape) eliminates the need to duplicate tape  data, which has led to dramatic cost savings.

Queensland Brain Institute (QBI) is a leading center for neuroscience research. QBI's research focuses on the cellular and molecular mechanisms that regulate brain function to help develop new treatments for neurological and mental disorders. QBI's storage system has to scale extensively to store, protect, and access tens of terabytes of data daily to support cutting-edge research. QBI chose an Oracle solution consisting of Oracle's StorageTek SL3000 modular tape libraries with StorageTek T10000 enterprise tape drives. The Oracle solution improved QBI's ability to grow, attract world-leading scientists and meet stringent funding conditions.

Looking  Ahead to 2015 and Beyond
  The role tape  serves in today’s modern data centers is expanding as IT executives and cloud  service providers address new applications for tape that leverage its  significant operational and cost advantages. This recognition is driving  investment in new tape technologies and innovations with extended roadmaps, and  it is expanding tape’s profile from its historical role in data backup to one  that includes long-term archiving requiring cost-effective access to enormous  quantities of stored data. Given the current and future trajectory of tape  technology, data intensive markets such as big data, broadcast and  entertainment, archive, scientific research, oil and gas exploration,  surveillance, cloud, and HPC are expected to become significant beneficiaries  of tape’s continued progress. Clearly the tremendous innovation, compelling  value proposition and development activities demonstrate tape technology is not  sitting still; expect this promising trend to continue in 2015 and  beyond.

Visit the Tape  Storage Council at

What this means and summary

Like it or not, tape is still alive, being used while the technology evolves with new enhancements as outlined above.


Good to see the tape folks doing some marketing to get their story told and heard for those who are still interested.


Does that mean I still use tape?


Nope, I stopped using tape for local backups and archives well over a decade ago using disk to disk and disk to cloud.


Does that mean I believe that tape is dead?


Nope, I still believe that for some organizations and some usage scenarios it makes good sense, however like with most data storage related technologies, it's not a one-size-fits-all value proposition.


On a related note for cloud and object storage, visit


Ok, nuff said, for now...

Cheers gs

server storage I/O trends

Part II: Revisiting re:Invent 2014 and other AWS updates

This is part two of a two-part series about Amazon Web Services (AWS) re:Invent 2014 and other recent cloud updates, read part one here.

  AWS re:Invent 2014

AWS re:Invent announcements


Announcements and enhancements made by AWS during re:Invent include:

  • Key  Management Service (KMS)
  • Amazon RDS  for Aurora
  • Amazon EC2 Container Service
  • AWS Lambda
  • Amazon EBS Enhancements
  • Application development, deployment and life-cycle management tools
  • AWS Service  Catalog
  • AWS CodeDeploy
  • AWS CodeCommit
  • AWS CodePipeline

AWS Lambda

In addition to announcing new higher performance Elastic Compute Cloud (EC2) instances along with a container service, another new service is AWS Lambda. Lambda is a service that automatically and quickly runs your application code in response to events, activities, or other triggers. In addition to running your code, the Lambda service is billed in 100 millisecond increments along with corresponding memory use, vs. standard EC2 per-hour billing. What this means is that instead of paying for an hour of time for your code to run, you can choose to use the Lambda service with more fine-grained consumption billing.


The Lambda service can be used to have your code functions staged, ready to execute. AWS Lambda can run your code in response to S3 bucket content (e.g. object) changes, messages arriving via Kinesis streams, or table updates in databases. Some examples include responding to events such as a web-site click, responding to a data upload (photo, image, audio, file or other object), indexing, streaming or analyzing data, receiving output from a connected device (think Internet of Things IoT or Internet of Devices IoD), or triggering from an in-app event among others. The basic idea with Lambda is to be able to pay for only the amount of time needed to do a particular function without having to have an AWS EC2 instance dedicated to your application. Initially Lambda supports Node.js (JavaScript) based code that runs in its own isolated environment.

AWS cloud example
Various application code deployment models


The Lambda service is pay for what you consume; charges are based on the number of requests for your code function (e.g. application), the amount of memory and the execution time. There is a free tier for Lambda that includes 1 million requests and 400,000 GByte seconds of time per month. A GByte second is the amount of memory (e.g. DRAM vs. storage) consumed during a second. An example: if your application runs 100,000 times for 1 second each while consuming 128 MB (0.125 GB) of memory, that is 100,000 x 1 x 0.125 = 12,500 GByte seconds. View various pricing models here on the AWS Lambda site that show examples for different memory sizes, times a function runs and run time.
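The GByte-second arithmetic can be sketched as follows, assuming 1 GB = 1024 MB for Lambda memory accounting (check current AWS documentation for the exact rounding and accounting rules):

```python
# GByte-seconds consumed by a set of Lambda invocations.

def gb_seconds(invocations, seconds_per_run, memory_mb):
    """Total GB-seconds, assuming 1 GB = 1024 MB."""
    return invocations * seconds_per_run * (memory_mb / 1024)

# 100,000 invocations of 1 second each at 128 MB of memory:
usage = gb_seconds(100_000, 1, 128)
print(usage)             # 12500.0 GB-seconds
print(usage <= 400_000)  # True: well inside the monthly free tier
```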


How much memory you select for your application code determines how much of it can run within the AWS free tier, which is available to both existing and new customers. Lambda fees are based on the total across all of your functions, starting from when the code runs. Note that you could have from one to thousands or more different functions running in the Lambda service. As of this time, AWS is showing Lambda pricing as free for the first 1 million requests, and beyond that, $0.20 per 1 million requests ($0.0000002 per request) plus a duration charge. Duration is measured from when your code starts running until it ends or otherwise terminates, rounded up to the nearest 100ms. The Lambda price also depends on the amount of memory you allocated for your code. Once past the 400,000 GByte-second per month free tier, the fee is $0.00001667 for every GByte-second used.
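The billing rules above can be sketched as a small calculation using the rates quoted in this post ($0.20 per million requests beyond the first million, $0.00001667 per GByte-second beyond 400,000, duration rounded up to 100ms). This is a simplified model for illustration, not an official AWS pricing calculator:

```javascript
// Sketch of AWS Lambda billing math using the rates quoted above.
var REQUEST_RATE = 0.20 / 1e6;    // dollars per request beyond the free tier
var GB_SECOND_RATE = 0.00001667;  // dollars per GB-second beyond the free tier
var FREE_REQUESTS = 1e6;          // free tier: 1 million requests per month
var FREE_GB_SECONDS = 400000;     // free tier: 400,000 GB-seconds per month

function lambdaMonthlyCost(requests, durationMs, memoryMB) {
  // Duration is rounded up to the nearest 100 ms per invocation
  var billedSeconds = Math.ceil(durationMs / 100) * 100 / 1000;
  var gbSeconds = requests * billedSeconds * (memoryMB / 1024);
  var requestCost = Math.max(0, requests - FREE_REQUESTS) * REQUEST_RATE;
  var computeCost = Math.max(0, gbSeconds - FREE_GB_SECONDS) * GB_SECOND_RATE;
  return requestCost + computeCost;
}

// 1 million 1-second invocations at 128MB stay inside the free tier:
// 1,000,000 x 1 s x 128/1024 GB = 125,000 GB-seconds, so the cost is zero
console.log(lambdaMonthlyCost(1e6, 1000, 128));
```

Raising the memory allocation or invocation count past the free-tier thresholds makes both terms start to accrue, which is why memory selection matters for staying within the free tier.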

Why use AWS Lambda vs. an EC2 instance

Why would you use AWS Lambda vs. provisioning a container, an EC2 instance or running your application code function on a traditional or virtual machine?

If you need control and can leverage an entire physical server with its operating system (O.S.), application and support tools for your piece of code (e.g. JavaScript), that could be an option. If you simply need an isolated image instance (O.S., applications and tools) for your code on a shared virtual on-premises environment, that can be an option. Likewise, if you need to move your application to an isolated cloud machine (CM) that hosts an O.S. along with your application, paying for those resources such as on an hourly basis, that could be your option. If you simply need a lighter-weight container to drop your application into, that is where Docker and containers come into play to off-load some of the traditional application dependency overhead.

However, if all you want to do is add some code logic to support a processing activity, for example when an object, file or image is uploaded to AWS S3, without having to stand up an EC2 instance along with the associated server, O.S. and complete application activity, that is where AWS Lambda comes into play. Simply create your code (initially JavaScript), specify how much memory it needs, define what events or activities will trigger or invoke it, and you have a solution.

View AWS Lambda pricing along with free tier information here.

Amazon EBS Enhancements

AWS is increasing the performance and size of General Purpose SSD and Provisioned IOPS SSD volumes. This means that you can create volumes of up to 16TB and 10,000 IOPS for AWS EBS General Purpose SSD volumes. For EBS Provisioned IOPS SSD volumes you can create up to 16TB with 20,000 IOPS. General Purpose SSD volumes deliver a maximum throughput (bandwidth) of 160 MBps, and Provisioned IOPS SSD volumes have been specified by AWS at 320 MBps when attached to EBS-optimized instances. Learn more about EBS capabilities here. Verify your IO size against AWS sizing information to avoid surprises, as all IO sizes are not considered to be the same. Learn more about Provisioned IOPS, optimized instances, EBS and EC2 fundamentals in this StorageIO AWS primer here.
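The caution about IO size matters because IOPS and throughput are linked: the same IOPS figure yields very different bandwidth depending on the size of each IO. A quick sketch of that relationship, using round numbers for illustration (not an official AWS sizing formula):

```javascript
// Throughput (MB/s) implied by an IOPS rate at a given IO size.
// Shows why "all IO sizes are not considered to be the same":
// identical IOPS figures can mean very different bandwidth.
function throughputMBps(iops, ioSizeKB) {
  return (iops * ioSizeKB) / 1000; // using 1,000 KB per MB for simplicity
}

console.log(throughputMBps(10000, 16)); // 10,000 IOPS at 16 KB = 160 MB/s
console.log(throughputMBps(10000, 4));  // same IOPS at 4 KB = only 40 MB/s
```

Note that 10,000 IOPS at a 16 KB IO size lands right at the 160 MBps ceiling quoted above for General Purpose SSD volumes; with smaller IOs the volume can hit its IOPS limit long before its bandwidth limit, and with larger IOs the reverse.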

Application development, deployment and life-cycle management tools

In addition to compute and storage resource enhancements, AWS has also announced several tools to support application development and configuration along with deployment (life-cycle management). These include tools that AWS uses itself as part of building and maintaining the AWS platform services.

AWS Config  (Preview e.g. early access prior to full release)

Management, reporting and monitoring capabilities, including data center infrastructure management (DCIM) for monitoring your AWS resources, configuration (including history), governance, change management and notifications. AWS Config enables similar capabilities to support DCIM, Change Management Database (CMDB), troubleshooting and diagnostics, auditing, and resource and configuration analysis among other activities. Learn more about AWS Config here.

AWS Service  Catalog

AWS announced a new service catalog that will be available in early 2015. This new service capability will enable administrators to create and manage catalogs of approved resources that users access via their personalized portal. Learn more about the AWS Service Catalog here.

AWS CodeDeploy

To support rapid code deployment automation for EC2 instances, AWS has released CodeDeploy. CodeDeploy masks the complexity associated with deployment when adding new features to your applications while reducing error-prone manual operations. As part of the announcement, AWS mentioned that they are using CodeDeploy as part of their own application development, maintenance, change-management and deployment operations. While suited for at-scale deployments across many instances, CodeDeploy works with as little as a single EC2 instance. Learn more about AWS CodeDeploy here.

AWS CodeCommit


For application code management, AWS will be making available in early 2015 a new service called CodeCommit. CodeCommit is a highly scalable, secure source control service that hosts private Git repositories. Supporting the standard functionality of Git, including collaboration, you can store things from source code to binaries while working with your existing tools. Learn more about AWS CodeCommit here.

AWS CodePipeline


To support application delivery and release automation along with associated management tools, AWS is making available CodePipeline. CodePipeline is a tool (service) that supports build, workflow checking, code staging, testing and release to production, including support for 3rd party tool integration. CodePipeline will be available in early 2015; learn more here.

Additional reading and related items

Learn more about the above and other AWS services by actually trying them hands-on using their free tier (AWS Free Tier). View AWS re:Invent produced breakout session videos here, audio podcasts here, and session slides here (all sessions may not yet be uploaded by AWS re:Invent).

What this all means

AWS amazon web services


AWS continues to invest as well as re-invest in its environment, both adding new feature functionality and expanding the extensibility of those features. This means that AWS, like other vendors or service providers, adds new check-box features; however, like some, they also increase the depth and extensibility of those capabilities. Besides adding new features and increasing the extensibility of existing capabilities, AWS is addressing both the data and information infrastructure, including compute (server), storage and database, and networking along with associated management tools, while also adding extra developer tools. Developer tools include life-cycle management supporting code creation, testing, tracking and change management among other management activities.


Another observation is that while AWS continues to promote the public cloud, such as the services they offer, as the present and future, they are also talking hybrid cloud. Granted, you have to listen carefully, as you may not simply hear "hybrid cloud" used the way some toss it around; however, listen for and look into AWS Virtual Private Cloud (VPC), along with what you can do using various technologies via the AWS Marketplace. AWS is also speaking the language of enterprise and traditional IT, from applications and development to data and information infrastructure, while also walking the cloud talk. What this means is that AWS realizes they need to help existing environments evolve and make the transition to the cloud, which means speaking their language rather than trying to convert them to cloud conversations before migrating them. These steps should make AWS practical for many enterprise environments looking to make the transition to public and hybrid cloud at their own pace, some faster than others. More on these and some related themes in future posts.


The AWS re:Invent event continues to grow year over year. I heard a figure of over 12,000 people, however it was not clear if that included exhibiting vendors, AWS people, attendees, analysts, bloggers and media among others. A simple validation is that the keynotes, like the expo space, were in the larger rooms used by events such as EMCworld and VMworld when they were hosted in Las Vegas, vs. what I saw last year at re:Invent. Unlike some large events such as VMworld, where at best there is a waiting queue or line to get into sessions or the hands-on lab (HOL), AWS re:Invent, while becoming more crowded, is still easy to get around, with time to spend using the HOL, which is of course powered by AWS, meaning you can later resume what you started while at re:Invent. Overall a good event and a nice series of enhancements by AWS; looking forward to next year's AWS re:Invent.


Ok, nuff said (for now)

Cheers gs

server storage I/O trends

This is part one of a two-part series about Amazon Web Services (AWS) re:Invent 2014 and other recent cloud updates, read part two here.

Revisiting re:Invent 2014 and other AWS updates

AWS re:Invent 2014

A few weeks ago I attended the Amazon Web Services (AWS) re:Invent 2014 event in Las Vegas for a few days. For those of you who have not yet attended this event, I recommend adding it to your agenda. If you have an interest in compute servers, networking, storage, development tools or management of cloud (public, private, hybrid), virtualization and related topic themes, you should check out AWS re:Invent.

AWS made several announcements at re:Invent, including many around development tools, compute and data storage services. One of those to keep an eye on is the cloud-based Aurora relational database service that complements existing RDS tools. Aurora is positioned as an alternative to traditional SQL-based transactional databases commonly found in enterprise environments (e.g. SQL Server among others).

Some recent AWS announcements prior to re:Invent include

AWS vCenter Portal

Using the AWS Management Portal for vCenter adds a plug-in within your VMware vCenter to manage your AWS infrastructure. The vCenter plug-in for AWS includes support for AWS EC2 and Virtual Machine (VM) import to migrate your VMware VMs to AWS EC2, and for creating VPCs (Virtual Private Clouds) along with subnets. There is no cost for the plug-in; you simply pay for the underlying AWS resources consumed (e.g. EC2, EBS, S3). Learn more about the AWS Management Portal for vCenter here, and download the OVA plug-in for vCenter here.

AWS re:invent content

AWS re:invent 2014 day 1 keynote
    AWS Andy Jassy (Image via AWS)


November 12, 2014 (Day 1) Keynote (highlight video, full keynote). This is the session where AWS SVP Andy Jassy made several announcements, including the Aurora relational database that complements the existing RDS (Relational Database Service). In addition to Andy, the keynote sessions also included various special guests, ranging from AWS customers and partners to internal people, in support of the various initiatives and announcements.


AWS re:invent 2014 day 2 keynote CTO Werner Vogels (Image via AWS)


November 13, 2014 (Day 2) Keynote (highlight video, full keynote). In this session, CTO Werner Vogels made announcements about the new Container and Lambda services.


AWS re:Invent announcements


Announcements and enhancements made by AWS during re:Invent include:

  • Key  Management Service (KMS)
  • Amazon RDS  for Aurora
  • Amazon EC2 Container Service
  • AWS Lambda
  • Amazon EBS Enhancements
  • Application development, deployment and life-cycle management tools
  • AWS Service  Catalog
  • AWS CodeDeploy
  • AWS CodeCommit
  • AWS CodePipeline

Key  Management Service (KMS)

A hardware security module (HSM) based key management service for creating and controlling the encryption keys that protect the security of digital assets. It integrates with AWS EBS and other services including S3 and Redshift, along with CloudTrail logs for regulatory, compliance and management purposes. Learn more about AWS KMS here.

AWS Database

For those who are not familiar, AWS has a suite of database related services, including SQL and NoSQL based offerings, from simple to transactional to Petabyte (PB) scale data warehouses for big data and analytics. AWS offers the Relational Database Service (RDS), which is a suite of different database types, instances and services. RDS instance and engine types include MySQL, PostgreSQL, Oracle, SQL Server and the new AWS Aurora offering (read more below). Other little-data database and big data repository related offerings include SimpleDB and DynamoDB (non-SQL databases), ElastiCache (an in-memory cache repository) and Redshift (a large-scale data warehouse and big data repository).

In addition to the database services offered by AWS, you can also combine various AWS resources, including EC2 compute, EBS and other storage offerings, to create your own solution. For example, there are various Amazon Machine Images (AMIs), or pre-built operating systems and database tools, available with EC2 as well as via the AWS Marketplace, such as MongoDB and Couchbase among others. For those not familiar with MongoDB, Couchbase, Cassandra, Riak and other NoSQL or alternative databases and key-value repositories, check out Seven Databases in Seven Weeks in my book review of it here.

Seven Databases book review
Seven Databases in Seven Weeks and the NoSQL movement

Amazon RDS  for Aurora

Aurora is a new relational database offering that is part of the AWS RDS suite of services. Positioned as an alternative to commercial high-end databases, Aurora is a cost-effective database engine compatible with MySQL. AWS is claiming 5x better performance than standard MySQL with Aurora while being resilient and durable. Learn more about Aurora, which will be available in early 2015, and its current preview here.

Amazon EC2 C4 instances

AWS will be adding a new C4 instance type as a next generation of EC2 compute instances based on Intel Xeon E5-2666 v3 (Haswell) processors. The Intel Xeon E5-2666 v3 processors run at a clock speed of 2.9 GHz, providing the highest level of EC2 performance. AWS is targeting traditional High Performance Computing (HPC) along with other compute-intensive workloads including analytics, gaming, and transcoding among others. Learn more about AWS EC2 instances here, and view this Server and StorageIO EC2, EBS and associated AWS primer here.

Amazon EC2 Container Service

Containers such as those via Docker have become popular with developers for rapidly building as well as deploying scalable applications. AWS has added a new feature called EC2 Container Service that supports Docker using simple APIs. In addition to supporting Docker, EC2 Container Service is a high performance, scalable container management service for distributed applications deployed on a cluster of EC2 instances. Similar to other EC2 services, EC2 Container Service leverages security groups, EBS volumes and Identity and Access Management (IAM) roles, along with scheduling placement of containers to meet your needs. Note that AWS is not alone in adding container and Docker support, with Microsoft Azure also having recently made some announcements; learn more about Azure and Docker here. Learn more about the EC2 Container Service here and more about Docker here.

Docker for smarties

Continue reading about re:Invent 2014 and other recent AWS enhancements here in part two of this two-part series.

Ok, nuff said (for now)

Cheers gs