
Greg's Blog


Here is the 2018 Hot Popular New Trending Data Infrastructure Vendors To Watch list, which includes startups as well as established vendors doing new things. This piece follows last year’s hot favorite trending data infrastructure vendors to watch list (here), as well as the who will be at the top of the storage world in a decade piece (here).

2018 Hot Popular New Trending Data Infrastructure Vendors to Watch
Data Infrastructures Support Information Systems Applications and Their Data


Data infrastructures are what exist inside physical data centers and cloud availability zones (AZs) to provide traditional as well as cloud services. Cloud and legacy data infrastructures combine hardware (server, storage, I/O network) and software, along with management tools, policies, tradecraft techniques (skills), and best practices, to support applications and their data. There are different types of data infrastructures to meet the needs of various environments that range in size, scope, focus, application workloads, along with performance and capacity.


Another important aspect of data infrastructures is that they exist to protect, preserve, secure and serve applications that transform data into information. This means that availability and data protection, including archive, backup, business continuance (BC), business resiliency (BR), disaster recovery (DR), privacy and security among other related topics, technologies, techniques, and trends, are essential data infrastructure topics.


2018 Hot Popular New Trending Data Infrastructure Vendors to Watch
Different timelines of adoption and deployment for various audiences



Some of those on this year’s list are focused on particular technology areas, while others reflect the size or type of vendor, supplier, or service provider. Whether a company counts as new, startup, evolving, or established varies depending on whether you are an industry insider or an IT customer. The list is a mix: some names you may not have heard of, for those who want or need the most current roster of startups to rattle off for industry adoption (and deployment), as well as some established players doing new things that might lead to customer deployment (and adoption).

AMD – The AMD EPYC family of processors is opening up new opportunities for AMD to challenge Intel among others for a more significant share of the general-purpose compute market in support of data center and data infrastructure markets. An advantage that AMD has, and is playing to in the industry speeds, feeds, slots and watts price-performance game, is the ability to support more memory and PCIe lanes per socket than others, including Intel. Keep in mind that PCIe lanes will become even more critical as NVMe deployment increases, along with the use of GPUs and faster Ethernet among other devices. Name-brand vendors including Dell and HPE among others have announced or are shipping AMD EPYC based servers.

Aperion – Cloud and managed service provider with diverse capabilities.

Amazon Web Services (AWS) – Continues to expand its footprint regarding regions and availability zones (AZs), also known as data centers in regions, as well as services along with the breadth of those capabilities. AWS recently announced a new Snowball Edge (SBE), which in the past was a data migration appliance, now enhanced with on-prem Elastic Compute Cloud (EC2) capabilities. What this means is that AWS can put on-prem compute capabilities into a storage appliance for short-term data movement, migration, conversion, and importing of virtual machines among other items.


On the other hand, AWS can also be seen as using SBE as a first entry to placing equipment on-prem for hybrid clouds, or, converged infrastructure (CI), hyper-converged infrastructure (HCI), cloud in a box similar to Microsoft Azure Stack, as well as CI/HCI solutions from others.

My prediction near term, however, is that CI/HCI vendors will either ignore SBE, downplay it, create some new marketing on why it is not CI/HCI, or spread FUD about vendor lock-in. In other words, make some popcorn, sit back, and watch the show.


Backblaze – Low-cost, high-capacity cloud storage provider for backup and archiving, known for their quarterly disk drive reliability (or failure) reports. They have been around for a while and have a good reputation among those who use their services as a low-cost alternative to the larger providers.


Barefoot Networks – Those in the networking space may already be aware of or following Barefoot Networks, while others may not have heard of them. They have some impressive capabilities and are new to many, thus an excellent addition to this list.

Cloudian – Continuing to evolve and no longer just another object storage solution, Cloudian has been expanding via organic technology development as well as acquisitions, giving them a broad portfolio of software-defined storage and tiering from on-prem to the cloud, with block, file and object access.


Cloudflare – Not exactly a startup; some of you may know or use Cloudflare, while for others their role as a provider of web cache, DNS, and other services is transparent. I have been using Cloudflare on my various sites for over a year, and as a customer like the security, DNS, cache and analytics tools they provide.


Cobalt Iron – For some they might be new; software-defined data protection and management is the name of the game at Cobalt Iron, which has been around a few years under the radar compared to more popular players. If you have or are involved with IBM Tivoli (aka TSM) based backup and data protection among others, check out the exciting capabilities that Cobalt Iron can bring to the table.


CTERA – Having been around for a while, to some they might not be a startup; on the other hand, they may be new to others, offering new data and file management options.


DataCore – You might know of DataCore for their software-defined storage and past storage hypervisor activity. However, they have a new piece of software, MaxParallel, that boosts server storage I/O performance. The software installs on your Windows Server instance (bare metal, VM, or cloud instance) and shows you performance with and without acceleration, which you can dynamically turn on and off.


DataDirect Networks (DDN) - Recently acquired the Lustre assets from Intel, and is now picking up the pieces of storage startup Tintri after it ceased operations. What this means is that while beefing up their traditional High-Performance Compute (HPC) and Super Compute (SC) focus, DDN is also expanding into broader markets.


Dell Technologies – At its recent Dell Technology World event in Las Vegas during late April and early May 2018, several announcements were made, including some tied to the emerging Gen-Z along with composability. More recently, Dell Technologies along with VMware announced business structure and finance changes. Changes include VMware declaring a dividend; Dell Technologies, being its largest shareholder, will use proceeds to fund restructuring and debt service. Read more about VMware and Dell Technology business and financial changes here.


Densify – With a name like Densify no surprise they propose to drive densification and automation with AI-powered deep learning to optimize application resource use across on-prem software-defined virtual as well as cloud instances and containers.


FlureDB – If you are into databases (SQL or NoSQL), as well as blockchain or distributed ledgers, check out FlureDB.

Innovium – When it comes to data infrastructure and data center networking, Innovium is probably not on your radar; however, keep an eye on these folks and their TERALYNX switching silicon to see where it ends up given their performance claims.


Komprise – File and data management solutions including tiering, along with partners such as IBM.


Kubernetes – A few years ago OpenStack, then Docker containers, was the favorite and trending discussion topic, then Mesos, and along comes Kubernetes. It's safe to say, at least for now, Kubernetes is settling in as a preferred open source industry and customer de facto choice (I want to say standard; however, I will hold off on that for now) for container and related orchestration management. Besides do-it-yourself (DiY) leveraging open source, there are also managed offerings: AWS Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), and VMware Pivotal Container Service (PKS) among others. Besides Azure, Microsoft also includes Kubernetes support (along with Docker and Windows containers) as part of Windows Server.
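Since several managed flavors get mentioned above, a minimal example helps show what the orchestration actually manages: a Deployment manifest declares a desired state, and Kubernetes keeps the running containers matching it. This is a hypothetical sketch; the names and image tag are placeholders, not anything from a specific vendor offering.

```yaml
# Hypothetical manifest: ask Kubernetes to keep three replicas of a
# stateless web container running, restarting or rescheduling as needed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo            # placeholder name
spec:
  replicas: 3               # desired state; the orchestrator reconciles toward it
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
      - name: web
        image: nginx:1.15   # placeholder image
        ports:
        - containerPort: 80
```

The same manifest, applied with `kubectl apply -f web-demo.yaml`, works against a DiY cluster as well as managed offerings such as EKS, AKS, GKE, or PKS, which is a big part of the appeal of Kubernetes as a de facto choice.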

ManageEngine (part of Zoho) - Has data infrastructure monitoring technology called OpManager for keeping an eye on networking.

Marvell – Marvell may not be a familiar name (don’t confuse it with the comics publisher); however, it has been a critical component supplier to partners whose server or storage technology you may be familiar with or have yourself. The server, storage, and I/O networking chip maker has closed its acquisition of Cavium (who previously bought QLogic among others). The combined company is well positioned as a key data infrastructure component supplier to various partners spanning servers, storage, and I/O networking including Fibre Channel (FC), Ethernet, InfiniBand, and NVMe (and NVMeoF) among others.

Mellanox – Known for their InfiniBand adapters, switches, and associated software, along with a growing presence in RDMA over Converged Ethernet (RoCE), they are also well positioned for NVMe over Fabrics among other growth opportunities following recent boardroom updates, along with technology roadmaps.

Microsoft – The Azure public cloud continues to evolve similarly to AWS with more region locations, availability zone (AZ) data centers, as well as features and extensions. Microsoft also introduced about a year ago its hybrid on-prem CI/HCI cloud-in-a-box platform appliance Azure Stack (read about my test drive here). However, there is more to Microsoft than just their current cloud-first focus, which means Windows (desktop) as well as Server are also evolving. Currently in public preview, the Windows Server 2019 insiders build is available to try out many new capabilities, some of which were covered in the recent free Microsoft Virtual Summit held in June. Key themes of Windows Server 2019 include security, performance, hybrid cloud, containers, software-defined storage and much more.


Microsemi – Has been around for a while and is the combination of some vendors you may not have heard of, or heard about in some time, including PMC-Sierra (which acquired Adaptec) and Vitesse among others. The reason I have Microsemi on this list is a combination of their acquisitions, which might be an indicator of whom they pick up next. Another reason is that their components span data infrastructure topics from servers, storage, I/O and networking, PCIe and many more.

NVIDIA – GPU high-performance compute and related compute offload technologies have been accessible for over a decade. More recently, with new graphics and computational demands, GPUs such as those from NVIDIA are in demand. Demand includes traditional graphics acceleration for physical and virtual environments, augmented and virtual reality, as well as cloud, along with compute-intensive analytics, AI, ML, DL and other cognitive workloads.


NGD Systems (NGD) – Similar to what NVIDIA and other GPU vendors do for enabling compute offload for specific applications and workloads, NGD is working on a variation. That variation is to move offload compute capabilities for server I/O storage-intensive workloads closer to, in fact into, storage system components such as SSDs and emerging SCMs and PMEMs. Unlike GPU-based applications or workloads that tend to be more memory and compute intensive, NGD is positioned for applications that are server I/O and storage intensive.


The premise of NGD is that they move the compute and application closer to where the data is, eliminating extra I/O as well as reducing the amount of main server memory and compute cycles used. If you are familiar with other server storage I/O offload engines and systems such as the Oracle Exadata database appliance, NGD is working at a tighter integration granularity. How it works: your application gets ported to run on the NGD storage platform, which is SSD based and has a general-purpose processor. Your application is initiated from a host server and then runs on the NGD device, meaning I/Os are kept local to the storage system. Keep in mind that the best I/O is the one that you do not have to do; the second best is the one with the least resource or user impact.
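The premise above can be illustrated with a toy back-of-the-envelope model (all numbers are hypothetical, not NGD measurements): if a query only needs a small fraction of the records, filtering inside the storage device means only the matches cross the interconnect, instead of the full data set.

```python
# Toy model contrasting host-side processing with compute-in-storage.
# All sizes and rates below are made-up illustration values.

RECORD_SIZE = 4096          # bytes per record (assumption)
NUM_RECORDS = 1_000_000     # records stored on the device (assumption)
MATCH_RATE = 0.001          # fraction of records a query actually needs

def host_side_bytes_moved():
    """Conventional model: the host reads everything, then filters."""
    return NUM_RECORDS * RECORD_SIZE

def in_storage_bytes_moved():
    """Compute-in-storage model: the device filters, ships only matches."""
    return int(NUM_RECORDS * MATCH_RATE) * RECORD_SIZE

if __name__ == "__main__":
    host = host_side_bytes_moved()
    device = in_storage_bytes_moved()
    print(f"host-side filter moves  {host:,} bytes")
    print(f"in-storage filter moves {device:,} bytes")
    print(f"reduction factor: {host / device:,.0f}x")
```

With these made-up numbers the interconnect traffic drops by a factor of 1,000, which is the whole point of not doing the I/O in the first place.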


Opvisor – Performance activity and capacity monitoring tools including for VMware environments.

Pavilion – Startup with an interesting NVMe-based hardware appliance.


Quest – Having regained their independence as a free-standing company since the divestiture from Dell Technologies (Dell had previously acquired Quest before the EMC acquisition), Quest continues to make their data infrastructure related management tools available. Besides now being a standalone company again, keep an eye on Quest to see how they evolve their existing data protection and data infrastructure resource management tools portfolio via growth or acquisition; or perhaps Quest will be on somebody else’s future growth list.


Retrospect – Far from being a startup, after regaining their independence from EMC, which bought them several years ago, they have continued to enhance their data protection technology. Disclosure: I have been a Retrospect customer since 2001, using it for on-site as well as cloud data protection backups.

Rubrik – Becoming more of a data infrastructure household name given their expanding technology portfolio and marketing efforts. More commonly known in smaller customer environments, as well as broadly within industry insider circles, Rubrik has potential with continued technology evolution to move further upmarket similar to how Commvault did back in the late 90s, just saying.

SkyScale – Cloud service provider that offers dedicated bare metal, as well as private and hybrid cloud instances, along with GPUs to support AI, ML, DL and other high-performance compute workloads.

Snowflake – The name does not describe well what they do or who they are. However, they have interesting cloud data warehouse (old school) and large-scale data lake (new school) technologies.


Strongbox – Not to be confused with technology such as that from ioSafe (e.g., waterproof, fireproof), Strongbox is a data protection storage solution for storing archives, backups, and BC/BR/DR data, as well as cloud tiering. For those who are into buzzword bingo, think cloud tiering, object, and cold storage among others. The technology evolved out of Crossroads and, with David Cerf at the helm, has branched out into a private company worth keeping an eye on.


Storbyte – With longtime industry insider sales and marketing pro Diamond Lauffin (formerly of Nexsan) involved as Chief Evangelist, this is worth keeping an eye on and could be entertaining as well as exciting. In some ways it could be seen as a bit of a Nexsan meets NVMe meets NAND flash meets cost-effective value storage déjà vu play.

Talon – Enterprise storage and management solutions for file sharing across organizations, ROBO and cloud environments.


Ubiquiti – Also known as UBNT, a data infrastructure networking vendor whose technologies span from WiFi access points (APs), high-performance antennas, routing, switching and related hardware, to software solutions. UBNT is not as well known in larger environments as Cisco and others. However, they are making a name for themselves moving from the edge to the core. That is, working from the edge with APs, routers, firewalls, and gateways for SMB, ROBO, and SOHO as well as consumer environments (I have several of their APs, switches, routers and high-performance antennas along with management software), these technologies are also finding their way into larger environments.

My first use of UBNT was several years ago when I needed to get an IP network connection to a remote building separated by several hundred yards of forest. The solution I found was to get a pair of UBNT NANO APs and put them in secure bridge mode; now I have a high-performance WiFi service through a forest of trees. Since then I have replaced an older Cisco router, several Cisco and other APs, as well as done a phased migration of switches.


UpdraftPlus – If you have a WordPress web or blog site, you should also have an UpdraftPlus plugin (go premium btw) for data protection. I have been using UpdraftPlus for several years on my various sites to back up and protect the MySQL databases and all other content. For those of you who are familiar with Spanning (e.g., acquired by EMC, then divested by Dell) and what they do for cloud applications, UpdraftPlus does similar for lower-end, smaller cloud-based applications.


Vexata – Startup with a scale-out NVMe storage solution.


VMware – Expanding their cloud foundation from on-prem to in and on clouds including AWS among others. Their data infrastructure focus continues to expand from core to edge across server, storage, I/O and networking. With the recent Dell Technologies business changes and VMware declaring a dividend, it should be interesting to see what lies ahead for both entities.

What About Those Not Mentioned?

By the way, if you were wondering why others are not in the above list, simple: check out last year’s list, which includes Apcera, Blue Medora, Broadcom, Chelsio, Commvault, Compuverde, Datadog, Datrium, Docker, E8 Storage, Elastifile, Enmotus, Everspin, Excelero, Hedvig, Huawei, Intel, Kubernetes, Liqid, Maxta, Micron, Minio, NetApp, Neuvector, Noobaa, NVIDIA, Pivot3, Pluribus Networks, Portworx, Rozo Systems, ScaleMP, Storpool, Stratoscale, SUSE, Tidalscale, Turbonomic, Ubuntu, Veeam, Virtuozzo and WekaIO. Note that many of the above have expanded their capabilities in the past year and remain, or have become, even more interesting to watch, while some might be on the future where-are-they-now list sometime down the road. View additional vendors and service providers via our industry links and resources page here.

What About New, Emerging, Trending and Trendy Technologies

Bitcoin and blockchain storage startups have been popping up lately, some of which claim or would like to replace cloud storage, taking on giants such as AWS S3 in the not so distant future. Some of these have good and exciting stories if they can deliver on the hype along with the premise. A few names to drop include, among others, Filecoin, Maidsafe, Sia, and Storj, along with services from AWS, Azure, Google and a long list of others.


Besides blockchain distributed ledgers, other technologies and trends to keep an eye on include compute processors from ARM and SoC to GPU, FPGA, and ASIC for offload and specialized processing. GPUs, ASICs, and FPGAs are appearing in new deployments across cloud providers as they look to offload processing from their general servers to derive total effective productivity out of them. In other words, innovating by offloading to boost their effective return on investment (the old ROI), as well as increase their return on innovation (the new ROI).

Other data infrastructure server I/O trends to watch, which also tie into storage and networking, include Gen-Z, which some may claim as the successor to PCIe, Ethernet, and InfiniBand among others (hint: get ready for a new round of “something is dead” hype). Near term, the objective of Gen-Z is to coexist with and complement PCIe, Ethernet, and CPU-to-memory interconnects, while enabling more granular allocation of data infrastructure resources (e.g., composability). Besides watching who is part of the Gen-Z movement, keep an eye on who is not part of it yet, specifically Intel.


NVMe and its many variations, from server-internal to networked NVMe over Fabrics (NVMeoF) along with its derivatives, continue to gain both industry adoption as well as customer deployment. There are some early NVMeoF based server storage deployments (along with marketing dollars). However, server-side NVMe is where customer adoption, and thus the dollars, are moving to vendors. In other words, it's still early in the bigger, broader NVMe and NVMeoF game.

Where to learn more

Learn more about data infrastructures and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Let's see how those mentioned last year as well as this year, along with some new and emerging vendors and service providers who did not get mentioned, end up next year, as well as the years after that.



Keep in mind that there is a difference between industry adoption and customer deployment, granted they are related. Likewise let’s see who will be at the top in three, five and ten years, which means some of the current top or favorite vendors may or may not be on the list, same with some of the established vendors. Meanwhile, check out the 2018 Hot Popular New Trending Data Infrastructure Vendors to Watch.


Ok, nuff said, for now.

Cheers Gs

Dell Technology World 2018 Part I Announcement Summary

Dell Technology World 2018 Announcement Summary


This is part one of a five-part series summarizing the Dell Technology World 2018 announcements. Last week (April 30-May 3) I traveled to Las Vegas, Nevada (LAS) to attend Dell Technology World 2018 (e.g., DTW 2018) as a guest of Dell (that is a disclosure btw). There were several announcements along with plenty of other activity from sessions, meetings, hallway and event networking taking place at Dell Technology World 2018.


Major  data infrastructure technology announcements include:

  • PowerMax all-flash array (AFA)  solid state device (SSD)  NVMe storage system
  • PowerEdge four-socket 2U and 4U rack servers
  • XtremIO X2 AFA SSD storage system updates
  • PowerEdge MX preview of future composable servers
  • Desktop and thin client along with other VDI updates
  • Cloud and networking enhancements
  • VMware Cloud and related updates


Besides the above, additional data infrastructure related announcements were made in association with Dell Technology family members including VMware along with other partners, as well as customer awards. Other updates and announcements were tied to business updates from Dell Technology, Dell Technical Capital (venture capital), and, Dell Financial Services.


Dell Technology World Buzzword Bingo Lineup

Some of the buzzword bingo terms, topics, acronyms from Dell Technology World 2018 included AFA, AI, Autonomous, Azure, Bare Metal, Big Data, Blockchain, CI, Cloud, Composable, Compression, Containers, Core, Data Analytics, Dedupe, Dell, DFS (Dell Financial Services), DFR (Data Footprint Reduction), Distributed Ledger, DL, Durability, Fabric, FPGA, GDPR, Gen-Z, GPU, HCI, HDD, HPC, Hybrid, IOP, Kubernetes, Latency, MaaS (Metal as a Service), ML, NFV, NSX, NVMe, NVMeoF, PACE (Performance Availability Capacity Economics), PCIe, Pivotal, PMEM, RAID, RPO, RTO, SAS, SATA, SC, SCM, SDDC, SDS, Socket, SSD, Stamp, TBW (Terabytes Written per day), VDI, venture capital, VMware and VR among others.


Dell Technology World 2018 Venue
Dell Technology World DTW 2018 Event and Venue


Dell Technology World 2018 was located at the combined Palazzo and Venetian hotels along with the adjacent Sands Expo center, kicking off Monday, April 30th and wrapping up May 4th.

The theme for Dell Technology World 2018 was "make it real", which in some ways was interesting given the focus on the virtual, including virtual reality (VR), software-defined data center (SDDC) virtualization and data infrastructure topics, along with artificial intelligence (AI).


Virtual Sky Dell Technology World 2018
Make it real – Venetian Palazzo St. Mark’s Square on the way to Sands Expo Center


There was plenty of AI, VR, SDDC along with other technologies, tools as well as some fun stuff to do including VR games.


Dell Technology World 2018 Commons Area
Dell Technology World Village Area near Key Note and Expo Halls


Dell Technology World 2018 Commons Area Drones
Dell Technology World Drone Flying Area


During a break from meetings, I used a few minutes to fly a drone using VR, which was interesting. I have been operating drones (see some videos here) visually for several years, without depending on first-person view (FPV) or extensive autonomous operations, instead flying heads up by hand. Needless to say, the VR was interesting, granted I encountered a bit of vertigo that I had to get used to.


Dell Technology World 2018 Commons Area Virtual Village
More views of the Dell Technology World Village and Commons Area with VR activity


Dell Technology World 2018 Commons Area Virtual Village
Dell Technology World Village and VR area


Dell Technology World 2018 Commons Area Virtual Village
Dell Technology World Bean Bag Area


Dell Technology World 2018 Announcement Summary

Ok, nuff with the AI, ML, DL, VR fun, time to move on to the business and technology topics of Dell Technologies World 2018.


What was announced at Dell Technology World 2018 included among others:


Dell Technology World 2018 PowerMax
Dell PowerMax Front View


Subsequent posts in this series take a deeper look at the various announcements as well as what they mean.


Where to learn more

Learn more about Dell Technology World 2018 and related topics via the following links:


Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


  Software Defined Data Infrastructure Essentials Book SDDC

What this all means

On the surface it may appear that there was not much announced at Dell Technology World 2018, particularly compared to some of the recent Dell EMC Worlds and EMC Worlds. However, it turns out that there was a lot announced, granted without some of the entertainment and circus-like atmosphere of previous events. Continue reading Part II Dell Technology World 2018 Modern Data Center Announcement Details in this series, along with Part III here, Part IV here (including PowerEdge MX composable infrastructure leveraging Gen-Z) and Part V (servers and converged) here.


Ok, nuff said, for now.


Cheers Gs

Microsoft Windows Server 2019 Insiders Preview



Microsoft Windows Server 2019 Insiders Preview has been announced. Windows Server 2019 in the past might have been named 2016 R2; it is also known as a Long-Term Servicing Channel (LTSC) release. Microsoft recommends LTSC Windows Server for workloads such as Microsoft SQL Server, SharePoint and SDDC. The focus of the Microsoft Windows Server 2019 Insiders Preview is around hybrid cloud, security, application development as well as deployment including containers, software-defined data center (SDDC) and software-defined data infrastructure, as well as converged along with hyper-converged infrastructure (HCI) management.


Windows Server 2019 Preview Features

Features and enhancements in the Microsoft Windows Server 2019 Insiders Preview span HCI management, security, hybrid cloud among others.

  • Hybrid cloud - Extending Active Directory, file server synchronization, cloud backup, applications spanning on-premises and cloud, and management.
  • Security - Protect, detect and respond including shielded VMs, attested guarded fabric of host guarded machines, Windows and Linux VM (shielded), VMConnect for Windows and Linux troubleshooting of Shielded VM and encrypted networks, Windows Defender Advanced Threat Protection (ATP) among other enhancements.
  • Application platform - Developer and deployment tools for Windows Server containers and the Windows Subsystem for Linux (WSL). Note that Microsoft has also been reducing the size of the Server image while extending feature functionality; the smaller images take up less storage space, plus load faster. As part of continued serverless and container support (Windows and Linux along with Docker), there are options for deployment orchestration including Kubernetes (in beta), along with extensions to the previous WSL support.


Other enhancements in the Microsoft Windows Server 2019 Insiders Preview include cluster sets in support of the software-defined data center (SDDC). Cluster sets expand SDDC clusters as loosely coupled groupings of multiple failover clusters, including compute, storage, as well as hyper-converged configurations. Virtual machines have fluidity across member clusters within a cluster set and a unified storage namespace. The existing failover cluster management experience is preserved for member clusters, along with a new cluster set instance of the aggregate resources.


Management enhancements include S2D software-defined storage performance history, Project Honolulu support for storage updates, along with PowerShell cmdlet updates, as well as System Center 2019. Learn more about Project Honolulu hybrid management here and here.

Microsoft and Windows LTSC and SAC

As a refresher, Microsoft Windows (along with other software) is now being released on two paths: the more frequent Semi-Annual Channel (SAC), and less frequent LTSC releases. Some other things to keep in mind are that SAC releases are focused on Server Core and Nano Server as container images, while LTSC includes Server with Desktop Experience as well as Server Core. For example, Windows Server 2016, released in the fall of 2016, is an LTSC release, while the 1709 release was a SAC with specific enhancements for container related environments.

There was some confusion in the fall of 2017 when 1709 was released, as it was optimized for container and serverless environments and thus lacked Storage Spaces Direct (S2D), leading some to speculate S2D was dead. S2D, among other items that were not in the 1709 SAC, is very much alive and enhanced in the LTSC preview for Windows Server 2019. Learn more about Microsoft LTSC and SAC here.

Test Driving Installing The Bits

One of the enhancements in the LTSC preview candidate Server 2019 is improved upgrades of existing environments. Granted, not everybody will choose the in-place upgrade keeping existing files; however, some may find the capability useful. I chose the upgrade keeping current files in place to see how it worked. For the upgrade I used a clean and up-to-date Windows Server 2016 Datacenter edition with desktop. This test system is a VMware ESXi 6.5 guest running on flash SSD storage. Before the upgrade to Windows Server 2019, I made a VMware vSphere snapshot so I could quickly and easily restore the system to a good state should something not work.

To get the bits, go to Windows Insiders Preview Downloads (you will need to register)

Windows Server 2019 LTSC build 17623 is available in 18 languages in ISO format and requires a key.

The keys for the pre-release unlimited activations are:
Datacenter Edition         6XBNX-4JQGW-QX6QG-74P76-72V67
Standard Edition             MFY9F-XBN2F-TYFMP-CCV49-RMYVH

First step is downloading the bits from the Windows insiders preview page including select language for the image to use.

Getting the windows server 2019 preview bits
Select the language for the image to download

windows server 2019 select language
Starting the download

Once you have downloaded the image, apply it to your bare metal server or hypervisor guest. In this example, I copied the Windows Server 2019 image to a VMware ESXi server for a Windows Server 2016 guest machine to access via its virtual CD/DVD.

pre upgrade check windows server version
Verify the Windows Server version before upgrade

After downloading, access the image; in this case, I attached the image to the virtual machine CD, then accessed it and ran the setup application.

Microsoft Windows Server 2019 Insiders Preview download
Download updates now or later

license key
Entering license key for pre-release windows server 2019

Microsoft Windows Server 2019 Insiders Preview datacenter desktop version
Selecting Windows Server Datacenter with Desktop

Microsoft Windows Server 2019 Insiders Preview license
Accepting Software License for pre-release version.

Next up is determining whether to do a new install (keep nothing) or an in-place upgrade. I wanted to see how smooth the in-place upgrade was, so I selected that option.

Microsoft Windows Server 2019 Insiders Preview inplace upgrade
What to keep, nothing, or existing files and data

Microsoft Windows Server 2019 Insiders Preview confirm selections
Confirming your selections

Microsoft Windows Server 2019 Insiders Preview install start
Ready to start the installation process

Microsoft Windows Server 2019 Insiders Preview upgrade in progress
Installation underway of Windows Server 2019 preview

Once the installation is complete, verify that Windows Server 2019 is now installed.

Microsoft Windows Server 2019 Insiders Preview upgrade completed
Completed upgrade from Windows Server 2016 to Microsoft Windows Server 2019 Insiders Preview

The above shows verifying the system build using PowerShell, as well as the message in the lower right corner of the display. Granted, the above does not show the new functionality; however, you should get an idea of how quickly a Windows Server 2019 preview can be deployed to explore and try out the new features.
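The post verified the build with PowerShell; as an illustrative cross-platform alternative (not from the original post), Python's standard `platform` module can report the same kind of OS name, release and version/build information:

```python
import platform

def os_build_info() -> dict:
    """Collect the OS name, release and version string as reported by the
    platform module (on Windows, the version string includes the build,
    e.g. a string such as "10.0.17623" on this preview build)."""
    return {
        "system": platform.system(),    # e.g. "Windows"
        "release": platform.release(),  # e.g. "2019Server"
        "version": platform.version(),  # includes the build number on Windows
    }

print(os_build_info())
```

The exact strings returned depend on the operating system the script runs on; the keys shown are just a convenient grouping for a quick sanity check after an upgrade.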


Where to learn more

Learn more about Microsoft Windows Server 2019 Insiders Preview, Windows Server Storage Spaces Direct (S2D), Azure and related software defined data center (SDDC), software defined data infrastructure (SDDI) topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

Microsoft Windows Server 2019 Insiders Preview gives a glimpse of some of the new features that are part of the next evolution of Windows Server as part of supporting hybrid IT environments. In addition to new features and functionality that support not only hybrid cloud but also hybrid application development, deployment, DevOps and workloads, Microsoft is showing flexibility in management, ease of use, scalability, along with security and scale-out stability. If you have not looked at Windows Server for a while, or are involved with serverless, containers, Kubernetes among other initiatives, now is a good time to check out the Microsoft Windows Server 2019 Insiders Preview.

Ok, nuff said, for now.


Data Protection Recovery Life Post World Backup Day Pre GDPR

Data Protection Recovery Life Post World Backup Day Pre GDPR trends


It's time for Data Protection Recovery Life Post World Backup Day Pre GDPR Start Date.


The annual March 31 world backup day focus has come and gone once again.


However, that does not mean data protection including backup as well as recovery along with security gets a 364-day vacation until March 31, 2019 (or the days leading up to it).


Granted, for some environments, public relations, editors, influencers and other industry folks, backup day will take some time off, while others jump on the ramp-up to GDPR, which goes into effect May 25, 2018.


Expanding Focus Data Protection and GDPR

As I mentioned in this post here, world backup day should be expanded to include increased focus not just on backup, but also on recovery as well as other forms of data protection. Likewise, May 25, 2018 is not the deadline, finish line or destination for GDPR (the General Data Protection Regulation); rather, it is the starting point for an evolving journey, one that has global impact as well as applicability. Recently I participated in a fireside chat discussion with Danny Allan of Veeam, who shared his GDPR expertise as well as experiences, lessons learned, and tips from Veeam as they started their journey; check it out here.


Expanding Focus Data Protection Recovery and other Things that start with R

As part of expanding the focus on Data Protection Recovery Life Post World Backup Day Pre GDPR, that also means looking at and discussing things that start with R (like Recovery). Some examples besides recovery include restoration, reassess, review, rethink protection, recovery point, RPO, RTO, reconstruction, resiliency, ransomware, RAID, repair, remediation, restart, resume, rollback, and regulations among others.

Data Protection Tips, Reminders and Recommendations

    • There are no blue participation ribbons for failed recovery. However, there can be pink slips.
    • Only you can prevent on-premise or cloud data loss. However, it is also a shared responsibility with vendors and service providers
    • You can’t go forward in the future when there is a disaster or loss of data if you can’t go back in time for recovery
    • GDPR applies to organizations around the world of all sizes and across all sectors including nonprofit
    • Keep new school 4 3 2 1 data protection in mind while evolving from old school 3 2 1 backup rules

4 3 2 1 backup data protection rule

  • A fundamental premise of data infrastructures is to enable applications and their data; protect, preserve, secure and serve
  • Remember to protect your applications as well as data, including metadata, settings and configurations
  • Test your restores, including whether you can use the data along with security settings
  • Don’t cause a disaster in the course of testing your data protection, backups or recovery
  • Expand (or refresh) your data protection and data infrastructure education tradecraft skills and experiences

Where to learn more

Learn more about data protection, world backup day, recovery, restoration, GDPR along with related data infrastructure topics for cloud, legacy and other software defined environments via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

Data protection including business continuance (BC), business resiliency (BR), disaster recovery (DR), availability, accessibility, backup, snapshots, encryption, security, and privacy among others is a 7 x 24 x 365 day a year focus. The focus of data protection also needs to evolve from an after-the-fact cost overhead to a proactive business enabler. Meanwhile, welcome to Data Protection Recovery Life Post World Backup Day Pre GDPR Start Date.


Ok, nuff said, for now.


This is part three of a five-part mini-series looking at Application Data Value Characteristics Everything Is Not The Same, as a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at various global venues. In this post, we continue looking at application and data characteristics with a focus on different types of data. There is more to data than simply being big data, fast data, big fast, or unstructured, structured or semi-structured, some of which has been touched on in this series, with more to follow. Note that there is also data in terms of the programs, applications, code, rules, policies as well as configuration settings, metadata along with other items stored.


Application Data Value Software Defined Data Infrastructure Essentials Book SDDC


Various Types of Data

Data types along with characteristics include big data, little data, fast data, and old as well as new data with different value, life cycle, volume and velocity. There is data in files and objects, large and small, representing images, figures, text, binary, structured or unstructured content that is software defined by the applications that create, modify and use it.


There are many different types of data and applications to meet various business, organization, or functional needs. Keep in mind that applications are based on programs which consist of algorithms and data structures that define the data, how to use it, as well as how and when to store it. Those data structures define data that will get transformed into information by programs while also being stored in memory and on data storage in various formats.


Just as various applications have different algorithms, they also have different types of data. Even though everything is not the same in all environments, or even in how the same applications get used across various organizations, there are some similarities and general characteristics across different types of applications and data. Keep in mind that information is the result of programs (applications and their algorithms) that process data into something useful or of value.


Data typically has a basic life cycle of:

  • Creation and some activity, including being protected
  • Dormant, followed by either  continued activity or going inactive
  • Disposition (delete or remove)


In general, data can be:

  • Temporary, ephemeral or transient
  • Dynamic or changing (“hot data”)
  • Active static on-line, near-line,  or off-line (“warm-data”)
  • In-active static on-line or  off-line (“cold data”)


Data is organized as:

  • Structured
  • Semi-structured
  • Unstructured


General data characteristics include:

  • Value = From no value to unknown to some or high value
  • Volume = Amount of data, files, objects of a given size
  • Variety = Various types of data (small, big, fast, structured, unstructured)
  • Velocity = Data streams, flows, rates, load, process, access, active or static


The following figure shows how different data has various values over time. Data that has no value today or in the future can be deleted, while data with unknown value can be retained.

Different data with various values over time

Application Data Value across sddc
Data Value Known, Unknown and No Value


General characteristics include the value of the data, which in turn determines its performance, availability, capacity, and economic considerations. Also, data can be ephemeral (temporary) or kept for longer periods of time on persistent, non-volatile storage (you do not lose the data when power is turned off). Examples of temporary data include work and scratch areas, such as where data gets imported into, or exported out of, an application or database.


Data can also be little, big, or big and fast, terms which describe in part the size as well as volume along with the speed or velocity of being created, accessed, and processed. The importance of understanding the characteristics of data and how their associated applications use them is to enable effective decision-making about performance, availability, capacity, and economics of data infrastructure resources.

Data Value

There is more to data storage than how much space capacity you get per cost.


All data has one of three basic values:

  • No value = ephemeral/temp/scratch = Why keep it?
  • Some value = current or emerging future value, which can be low or high = Keep
  • Unknown value = protect until value is unlocked, or no remaining value


In addition to the above basic three, data with some value can also be further subdivided into little value, some value, or high value. Of course, you can keep subdividing into as many more or different categories as needed; after all, everything is not always the same across environments.


Besides data having some value, that value can also change, increasing or decreasing over time, or even going from unknown to a known value, known to unknown, or to no value. Data with no value can be discarded; if in doubt, make and keep a copy of that data somewhere safe until its value (or lack of value) is fully known and understood.


The importance of understanding the value of data is to enable effective decision-making on where and how to protect, preserve, and cost-effectively store the data. Note that cost-effective does not necessarily mean the cheapest or lowest-cost approach, rather it means the way that aligns with the value and importance of the data at a given point in time.
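To make the decision-making concrete, here is a small illustrative sketch (the function and category names are hypothetical, not from the book) of mapping the basic data value categories above to a disposition policy:

```python
def disposition(value: str) -> str:
    """Map a basic data value category to a policy action.

    Categories follow the text: no value, some value (low or high),
    or unknown value."""
    policy = {
        "none": "discard (or keep a safety copy until lack of value is confirmed)",
        "some": "keep and protect per its importance",
        "high": "keep with stronger protection and faster recovery",
        "unknown": "retain and protect until value is unlocked",
    }
    try:
        return policy[value]
    except KeyError:
        raise ValueError(f"unknown data value category: {value}")

print(disposition("unknown"))
```

In practice the categories (and the actions behind them) would be far richer; the point is simply that value drives where and how data gets protected and stored.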

Where to learn more

Learn more about Application Data Value, application characteristics, PACE along with data protection, software-defined data center (SDDC), software-defined data infrastructures (SDDI)  and related topics via the following links:

SDDC Data Infrastructure


Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

Data has different value at various times, and that value is also evolving. Everything Is Not The Same across various organizations, data centers, and data infrastructures spanning legacy, cloud and other software defined data center (SDDC) environments. Continue reading the next post (Part IV Application Data Volume Velocity Variety Everything Not The Same) in this series here.


Ok, nuff said, for now.


This is part two of a five-part mini-series looking at Application Data Value Characteristics Everything Is Not The Same, as a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at various global venues. In this post, we continue looking at application performance, availability, capacity, economic (PACE) attributes that have an impact on data value as well as availability.


4 3 2 1 data protection  Book SDDC

Availability (Accessibility, Durability, Consistency)

Just as there are many different aspects and focus areas for performance, there are also several facets to availability. Note that application performance requires availability, and availability relies on some level of performance.


Availability is a broad and encompassing area that includes data protection to protect, preserve, and serve (backup/restore, archive, BC, BR, DR, HA) data and applications. There are logical and physical aspects of availability including data protection as well as security, including key management (manage your keys or authentication and certificates) and permissions, among other things.


Availability = accessibility (can you get to your application and data) + durability (is the data intact and consistent). This includes basic Reliability, Availability, Serviceability (RAS), as well as high availability, accessibility, and durability. “Durable” has multiple meanings, so context is important. One context is how data infrastructure resources hold up to, survive, and tolerate wear and tear from use (i.e., endurance), for example, flash SSDs or mechanical devices such as Hard Disk Drives (HDDs). Another context for durable refers to data, meaning how many copies exist in various places.


Server, storage, and I/O network availability topics include:

  • Resiliency and self-healing to tolerate failure or disruption
  • Hardware, software, and services configured for resiliency
  • Accessibility to reach or be reached for handling work
  • Durability and consistency of data to be available for access
  • Protection of data, applications, and assets including security


Additional server I/O and data infrastructure along with storage topics include:

  • Backup/restore, replication, snapshots, sync, and copies
  • Basic Reliability, Availability, Serviceability (RAS), HA, failover, BC, BR, and DR
  • Alternative paths, redundant components, and associated software
  • Applications that are fault-tolerant, resilient, and self-healing
  • Non-disruptive upgrades, code (application or software) loads, and activation
  • Immediate data consistency and integrity vs. eventual consistency
  • Virus, malware, and other data corruption or loss prevention


From a data protection standpoint, the fundamental rule or guideline is 4 3 2 1, which means having at least four copies consisting of at least three versions (different points in time), at least two of which are on different systems or storage devices, and at least one of those off-site (on-line, off-line, cloud, or other). There are many variations of the 4 3 2 1 rule, shown in the following figure, along with approaches on how to manage the technology to use. We will go deeper into this subject in later chapters. For now, remember the following.


large version application server storage I/O
4 3 2 1 data protection (via Software Defined Data Infrastructure  Essentials)


1.  At least four copies of data (or more): enables durability in case a copy goes bad, is deleted, corrupted, or a device or site fails.
2.  At least three versions of the data retained (different points in time): enables various recovery points to restore, resume, or restart from.
3.  Data located on two or more systems (devices or media): enables protection against device, system, server, file system, or other fault/failure.
4.  At least one of those copies off-premises and not live (isolated from the active primary copy): enables resiliency across sites, as well as a space, time, and distance gap for protection.
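As a hedged illustration (not from the book), the 4 3 2 1 guideline can be expressed as a simple check on a protection scheme:

```python
from dataclasses import dataclass

@dataclass
class ProtectionScheme:
    copies: int    # total copies of the data
    versions: int  # distinct points in time retained
    systems: int   # distinct systems or devices holding copies
    offsite: int   # copies off-site and isolated from the primary

def meets_4321(s: ProtectionScheme) -> bool:
    """Check the 4 3 2 1 rule: at least 4 copies, 3 versions,
    on 2 or more systems, with 1 or more isolated off-site copies."""
    return (s.copies >= 4 and s.versions >= 3
            and s.systems >= 2 and s.offsite >= 1)

print(meets_4321(ProtectionScheme(copies=4, versions=3, systems=2, offsite=1)))  # True
```

A scheme with only three copies, for example, would fail the check, which matches the spirit of evolving from old school 3 2 1 backup toward new school 4 3 2 1 data protection.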

Capacity and Space (What Gets Consumed and Occupied)

In addition to being available and accessible in a timely manner (performance), data (and applications) occupy space. That space is memory in servers, along with consumable processor (CPU) time and I/O (performance), including over networks.


Data and applications also consume storage space where they are stored. In addition to basic data space, there is also space consumed for metadata as well as protection copies (and overhead), application settings, logs, and other items. Another aspect of capacity includes network IP ports and addresses, software licenses, and server, storage, and network bandwidth or service time.


Server, storage, and I/O network capacity topics include:

  • Consumable time-expiring resources (processor time, I/O, network bandwidth)
  • Network IP and other addresses
  • Physical resources of servers, storage, and I/O networking devices
  • Software licenses based on consumption or number of users
  • Primary and protection copies of data and applications
  • Active and standby data infrastructure resources and sites
  • Data footprint reduction (DFR) tools and techniques for space optimization
  • Policies, quotas, thresholds, limits, and capacity QoS
  • Application and database optimization


DFR includes various techniques, technologies, and tools to reduce the impact or overhead of protecting, preserving, and serving more data for longer periods of time. There are many different approaches to implementing a DFR strategy, since there are various applications and data.


Common DFR techniques and technologies include archiving, backup modernization, copy data management (CDM), cleanup, compression, consolidation, data management, deletion and dedupe, storage tiering, RAID (including parity-based, erasure codes, local reconstruction codes [LRC], Reed-Solomon, and Ceph Shingled Erasure Code [SHEC], among others), along with protection configurations and thin provisioning.


DFR can be implemented in various complementary locations, from row-level compression in databases or email to normalized databases, to file systems, operating systems, appliances, and storage systems using various techniques.


Also, keep in mind that not all data is the same; some is sparse, some is dense, and some can be compressed or deduped while other data cannot. However, identical copies can be identified, with links created to a common copy.
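That last point, identifying identical copies and linking them to a single common copy, is the essence of dedupe. A minimal sketch (illustrative only, using content hashing) of the idea:

```python
import hashlib

def dedupe(blobs: dict[str, bytes]) -> tuple[dict[str, str], dict[str, bytes]]:
    """Link each named blob to one stored copy of its content.

    Returns (links, store): links maps name -> content digest,
    and store keeps exactly one copy of each unique content."""
    links, store = {}, {}
    for name, data in blobs.items():
        digest = hashlib.sha256(data).hexdigest()
        store.setdefault(digest, data)  # keep one common copy per unique content
        links[name] = digest            # link the name to that common copy
    return links, store

links, store = dedupe({"a.txt": b"same", "b.txt": b"same", "c.txt": b"diff"})
print(len(store))  # 2 unique contents backing 3 names
```

Real dedupe engines work at block or chunk granularity with far more machinery (fingerprint indexes, reference counting, garbage collection), but the space savings come from exactly this name-to-common-copy linking.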

Economics (People, Budgets, Energy and other Constraints)

If the one constant in life and technology is change, then the other constant is concern about economics or costs. There is a cost to enable and maintain a data infrastructure on premise or in the cloud, which exists to protect, preserve, and serve data and information applications.


However, there should also be a benefit to having the data infrastructure to house data and support applications that provide information to users of the services. A common economic focus is what something costs, either as an up-front capital expenditure (CapEx) or as an operating expenditure (OpEx), along with recurring fees.


In general, economic considerations include:

  • Budgets (CapEx and OpEx), both up front and in recurring fees
  • Whether you buy, lease, rent, subscribe, or use free and open sources
  • People time needed to integrate and support even free open-source software
  • Costs including hardware, software, services, power, cooling, facilities, tools
  • People time including base salary, benefits, training and education

Where to learn more

Learn more about Application Data Value, application characteristics, PACE along with data protection, software defined data center (SDDC), software defined data infrastructures (SDDI)  and related topics via the following links:

SDDC Data Infrastructure


Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

Keep in mind that with Application Data Value Characteristics, Everything Is Not The Same across various organizations, data centers, and data infrastructures spanning legacy, cloud and other software defined data center (SDDC) environments. All applications have some element of performance, availability, capacity, economic (PACE) needs as well as resource demands. There is often a focus around data storage on storage efficiency and utilization, which is where data footprint reduction (DFR) techniques, tools, trends as well as technologies address capacity requirements. However, with data storage there is also an expanding focus around storage effectiveness, also known as productivity, tied to performance, along with availability including 4 3 2 1 data protection. Continue reading the next post (Part III Application Data Characteristics Types Everything Is Not The Same) in this series here.


Ok, nuff said, for now.



Everything Is Not The Same Application Data Value Characteristics


This is part one of a five-part mini-series looking at Application Data Value Characteristics Everything Is Not The Same, as a companion excerpt from chapter 2 of my new book Software Defined Data Infrastructure Essentials – Cloud, Converged and Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017), available at various global venues. In this post, we start things off by looking at general application server storage I/O characteristics that have an impact on data value as well as access.


Application Data Value Software Defined Data Infrastructure Essentials Book SDDC


Everything is not the same across different organizations, including Information Technology (IT) data centers, data infrastructures, along with the applications as well as the data they support. For example, there is so-called big data that can be many small files, objects, blobs or data and bit streams representing telemetry, click stream analytics, and logs, among other information.

Keep in mind that applications impact how data is accessed, used, processed, moved and stored. What this means is that a focus on data value, access patterns, and other related topics also needs to consider application performance, availability, capacity, economic (PACE) attributes.


If everything is not the same, why is so much data along with many applications treated the same from a PACE perspective?


Data infrastructure resources, including servers, storage, and networks, might be cheap or inexpensive; however, there is a cost to managing them along with the data.


Managing includes data protection (backup, restore, BC, DR, HA, security) along with other activities. Likewise, there is a cost to the software along with cloud services among others. By understanding how applications use and interact with data, smarter, more informed data management decisions can be made.


IT Applications and Data Infrastructure Layers
IT Applications and Data Infrastructure Layers


Keep in mind that everything is not the same across various organizations, data centers, data infrastructures, data and the applications that use them. Also keep in mind that programs (e.g. applications) = algorithms (code) + data structures (how data is defined and organized, structured or unstructured).


There are traditional applications, along with those tied to the Internet of Things (IoT), Artificial Intelligence (AI) and Machine Learning (ML), Big Data and other analytics including real-time click stream, media and entertainment, security and surveillance, and log and telemetry processing, among many others.


What this means is that there are many different applications with various characteristics and attributes, along with resource (server compute, I/O network and memory, storage) requirements and service requirements.


Common Applications Characteristics

Different applications will have various attributes, in general, as well as in how they are used; for example, database transaction activity vs. reporting or analytics, logs and journals vs. redo logs, indices, tables, import/export, scratch and temp space. Performance, availability, capacity, and economics (PACE) describe the application and data characteristics and needs shown in the following figure.


Application and data PACE attributes
Application PACE attributes (via Software Defined Data Infrastructure  Essentials)


All applications have PACE attributes; however:

  • PACE attributes vary by application and usage
  • Some applications and their data are more active than others
  • PACE characteristics may vary within different parts of an application


Think of an application along with its associated data PACE as its personality: how it behaves, what it does, how it does it, and when, along with value, benefit, or cost as well as quality-of-service (QoS) attributes.


Understanding applications in different environments, including data values and associated PACE attributes, is essential for making informed server, storage, I/O, and data infrastructure decisions. Data infrastructure decisions range from configuration to acquisitions or upgrades; when, where, why, and how to protect; and how to optimize performance including capacity planning, reporting, and troubleshooting, not to mention addressing budget concerns.


Primary PACE attributes for active and inactive applications and data are:

P - Performance and activity (how things get used)
A - Availability and durability (resiliency and data protection)
C - Capacity and space (what things use or occupy)
E - Economics and energy (people, budgets, and other barriers)


Some applications need more performance (server compute, or storage and network I/O), while others need space capacity (storage, memory, network, or I/O connectivity). Likewise, some applications have different availability needs (data protection, durability, security, resiliency, backup, business continuity, disaster recovery) that determine the tools, technologies, and techniques to use.


Budgets are also nearly always a concern, which for some applications means enabling more performance per cost, while others are focused on maximizing space capacity and protection level per cost. PACE attributes also define or influence policies for QoS (performance, availability, capacity), as well as thresholds, limits, quotas, retention, and disposition, among others.


Performance and Activity (How Resources Get Used)

Some applications or components that comprise a larger solution will have more performance demands than others. Likewise, the performance characteristics of applications along with their associated data will also vary. Performance applies to the server, storage, and I/O networking hardware along with associated software and applications.


For servers, performance is focused on how much CPU or processor time is used, along with memory and I/O operations. I/O operations to create, read, update, or delete (CRUD) data include the activity rate (frequency or data velocity) of I/O operations (IOPS). Other considerations include the volume or amount of data being moved (bandwidth, throughput, transfer), response time or latency, along with queue depths.


Activity is the amount of work to do or being done in a given amount of time (seconds, minutes, hours, days, weeks), which can be transactions, rates, IOPS. Additional performance considerations include latency, bandwidth, throughput, response time, queues, reads or writes, gets or puts, updates, lists, directories, searches, page views, files opened, videos viewed, or downloads.

Server, storage, and I/O network performance considerations include:

  • Processor CPU usage time and queues (user and system overhead)
  • Memory usage effectiveness including page and swap
  • I/O activity including between servers and storage
  • Errors, retransmissions, retries, and rebuilds


The following figure shows a generic performance example of data being accessed (mixed reads, writes, random, sequential, big, small, low and high latency) on a local and a remote basis. The example shows how, for a given time interval (see lower right), applications are accessing and working with data via different data streams, shown in the larger image left center. Also shown are queues and I/O handling along with end-to-end (E2E) response time.


fundamental server storage I/O
Server I/O performance  fundamentals (via Software Defined  Data Infrastructure Essentials)


Click here to view a larger version of the above figure.


Also shown on the left in the above figure is an example of E2E response time from the application through the various data infrastructure layers, as well as, lower center, the response time from the server to the memory or storage devices.


Various queues are shown in the middle of the above figure; these are indicators of how much work is occurring and whether the processing is keeping up with the work or causing backlogs. Context is needed for queues, as they exist in the server, I/O networking devices, and software drivers, as well as in storage among other locations.


Some basic server, storage, and I/O metrics that matter include:

  • Queue depth of I/Os waiting to be processed, and concurrency
  • CPU and memory usage to process I/Os
  • I/O size, or how much data can be moved in a given operation
  • I/O activity rate or IOPS = amount of data moved / I/O size per unit of time
  • Bandwidth = data moved per unit of time = I/O size × I/O rate
  • Latency usually increases with larger I/O sizes, decreases with smaller requests
  • I/O rates usually increase with smaller I/O sizes and vice versa
  • Bandwidth increases with larger I/O sizes and vice versa
  • Sequential stream access data may have better performance than some random access data
  • Not all data is conducive to being sequential stream, or random
  • Lower response time is better; higher activity rates and bandwidth are better


Queues with high latency and small I/O sizes or low I/O rates could indicate a performance bottleneck. Queues with low latency and high I/O rates with good bandwidth or data being moved could be a good thing. An important note is to look at several metrics, not just IOPS, activity, bandwidth, queues, or response time alone. Also, keep in mind that the metrics that matter for your environment may be different from those for somebody else.
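The relationship between these metrics (bandwidth = I/O size × I/O rate, and its rearrangements) can be sketched as a quick calculation; the workload numbers below are made up purely for illustration:

```python
def bandwidth_bytes_per_sec(io_size_bytes: int, iops: float) -> float:
    """Bandwidth = I/O size x I/O rate."""
    return io_size_bytes * iops

def iops_from_bandwidth(bandwidth_bps: float, io_size_bytes: int) -> float:
    """I/O rate = amount of data moved per unit of time / I/O size."""
    return bandwidth_bps / io_size_bytes

# Example: 8 KB I/Os at 1,000 IOPS move 8,192,000 bytes/sec (about 8 MB/s)
bw = bandwidth_bytes_per_sec(8 * 1024, 1000)
print(bw)  # 8192000
```

This is also why the list above notes that I/O rates usually rise as I/O sizes shrink and vice versa: for a fixed bandwidth, rate and size trade off against each other.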


Something to keep in perspective is that there can be a large amount of data with low performance, or a small amount of data with high performance, not to mention many other variations. The important concept is that as space capacity scales, that does not mean performance also improves, or vice versa; after all, everything is not the same.
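The metric relationships above (bandwidth as the product of I/O rate and I/O size, and queue depth as a function of rate and latency) can be sketched in a few lines of code. This is a hedged illustration with made-up numbers, not measurements from any particular device:

```python
# Illustration of the server storage I/O metric relationships described
# above; the workload numbers are hypothetical, for the example only.

def bandwidth(iops, io_size_kb):
    """Bandwidth (KB/s) = I/O rate (IOPS) x I/O size (KB)."""
    return iops * io_size_kb

def avg_queue_depth(iops, response_time_s):
    """Little's Law: average outstanding I/Os = arrival rate x latency."""
    return iops * response_time_s

# Small random I/Os: high activity rate, modest bandwidth
small = bandwidth(iops=20000, io_size_kb=4)      # 80,000 KB/s (~80 MB/s)
# Large sequential I/Os: lower rate, higher bandwidth
large = bandwidth(iops=2000, io_size_kb=256)     # 512,000 KB/s (~512 MB/s)

print(small, large)
print(avg_queue_depth(20000, 0.0005))            # 10 I/Os in flight on average
```

Note how the large sequential workload moves far more data per second despite a tenth of the IOPS, which is why looking at a single metric in isolation can mislead.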

Where to learn more

Learn more about Application Data Value, application characteristics, PACE along with data protection, software defined data center (SDDC), software defined data infrastructures (SDDI) and related topics via the following links:


SDDC Data Infrastructure


Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.


Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

Keep in mind that with Application Data Value Characteristics, Everything Is Not The Same across various organizations, data centers, and data infrastructures spanning legacy, cloud and other software defined data center (SDDC) environments. However, all applications have some element (high or low) of performance, availability, capacity, economics (PACE) along with various similarities. Likewise, data has different value at various times. Continue reading the next post (Part II Application Data Availability Everything Is Not The Same) in this five-part mini-series here.


Ok, nuff said, for now.


VMware continues cloud construction with March announcements

VMware continues cloud construction sddc


VMware continues cloud  construction with March announcements of new features and other enhancements.


VMware continues cloud construction SDDC data infrastructure strategy big picture
VMware Cloud Provides Consistent Operations and Infrastructure Via:


With its recent announcements, VMware continues cloud construction adding new features, enhancements, partnerships along with services.


VMware continues cloud construction. Like other vendors and service providers who tested the waters of having their own public cloud, VMware has moved beyond its vCloud Air initiative, selling it to OVH. VMware, while a publicly traded company (VMW), is by way of majority ownership part of the Dell Technologies family of companies via the 2016 acquisition of EMC by Dell. What this means is that, like Dell Technologies, VMware is focused on providing solutions and services to its cloud provider partners instead of building, deploying and running its own cloud in competition with those partners.


VMware continues cloud construction SDDC data infrastructure strategy layers
VMware Cloud Data Infrastructure and SDDC layers Via:


The VMware Cloud message and strategy is focused on providing software solutions to cloud and other data infrastructure partners (and customers) instead of competing with them (e.g. divesting of vCloud Air, partnering with AWS and IBM SoftLayer). Part of the VMware Cloud message and strategy is to provide consistent operations and management across clouds, containers, virtual machines (VM) as well as other software defined data center (SDDC) and software defined data infrastructures.


In other words, what this means is VMware providing consistent management to leverage common experiences of data infrastructure staff along with resources in a hybrid, cross-cloud and software defined environment in support of existing as well as cloud native applications.


VMware continues cloud construction on AWS SDDC
VMware Cloud on AWS Image via:


Note that VMware Cloud services run on top of AWS EC2 bare metal (BM) server instances, as well as on BM instances at IBM SoftLayer and OVH. Learn more about AWS EC2 BM compute instances aka Metal as a Service (MaaS) here. In addition to AWS, IBM and OVH, VMware claims over 4,000 regional cloud and managed service providers who have built their data infrastructures out using VMware based technologies.


VMware continues cloud construction updates

Building off of previous announcements, VMware continues cloud construction with enhancements to their Amazon Web Services (AWS) partnership along with services for the IBM SoftLayer cloud as well as OVH. As a refresher, OVH now operates what was formerly known as VMware vCloud Air before it was sold off.


Besides expanding on existing cloud partner solution offerings, VMware also announced additional cloud, software defined data center (SDDC) and other software defined data infrastructure environment management capabilities. SDDC and data infrastructure management tools include leveraging VMware's acquisition of Wavefront among others.


VMware Cloud Updates and New Features

  • VMware Cloud on AWS European regions (now in London, adding Frankfurt, Germany)
  • Stretched clusters with synchronous replication for cross-geography location resiliency
  • Support for data intensive workloads including data footprint reduction (DFR) with vSAN based compression and data de-duplication
  • Fujitsu services offering relationships
  • Expanded VMware Cloud Services enhancements


VMware Cloud Services enhancements include:

  • Hybrid Cloud Extension
  • Log intelligence
  • Cost insight
  • Wavefront

VMware Cloud in additional AWS Regions

As part of service expansion, VMware Cloud on AWS has been extended into the European region (London), with plans to expand into Frankfurt and an Asia Pacific location. Previously, VMware Cloud on AWS was available in the US West (Oregon) and US East (Northern Virginia) regions. Learn more about AWS Regions and availability zones (AZ) here.


VMware Cloud Stretch Cluster
VMware Cloud on AWS Stretch Clusters Source:


VMware Cloud on AWS Stretch Clusters

In addition to expanding into additional regions, VMware Cloud on AWS is also being extended with stretched clusters for geographically dispersed protection. Stretched clusters provide protection against an AZ failure (e.g. a data center site) for mission critical applications. Built on vSphere HA and DRS automated host failure technology, stretched clusters provide a recovery point objective of zero (RPO 0) for continuous protection and high availability across AZs at the data infrastructure layer.


The benefit of data infrastructure layer based HA and resiliency is not having to re-architect or modify upper-level, higher layered applications or software. Synchronous replication between AZs enables RPO 0; if one AZ goes down, it is treated as a vSphere HA event with VMs restarted in another AZ.
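The reason synchronous replication yields RPO 0 can be shown with a small conceptual sketch. This is an illustration of the general semantics only, not VMware's actual implementation; the AZ names and data layout are made up:

```python
# Conceptual sketch (not VMware's implementation) of why synchronous
# replication across availability zones yields RPO 0: a write is only
# acknowledged after both AZs have stored it, so any acknowledged data
# survives the loss of one AZ.

class AZ:
    def __init__(self, name):
        self.name = name
        self.data = {}

def synchronous_write(primary, secondary, key, value):
    primary.data[key] = value       # write locally in the primary AZ
    secondary.data[key] = value     # replicate before acknowledging
    return "ack"                    # both copies exist -> RPO 0

az1, az2 = AZ("zone-a"), AZ("zone-b")
synchronous_write(az1, az2, "vm-disk-block-42", b"payload")

# If zone-a fails now, zone-b already holds every acknowledged write,
# so a vSphere HA style restart in zone-b loses no committed data.
assert az2.data == az1.data
```

The trade-off, not shown here, is that every write pays the inter-AZ round-trip latency before it can be acknowledged, which is why synchronous replication is typically limited to metro-scale distances.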


vSAN based Data Footprint Reduction (DFR) aka Compression and De-duplication

To support applications that leverage large amounts of data (aka data intensive applications in marketing speak), VMware is leveraging vSAN based data footprint reduction (DFR) techniques including compression as well as de-duplication (dedupe). Leveraging DFR technologies like compression and dedupe integrated into vSAN, VMware Clouds have the ability to store more data in a given cubic density. Storing more data in a given cubic density improves storage efficiency (e.g. space-saving utilization) and, along with performance acceleration, also facilitates storage effectiveness and productivity.


With VMware vSAN technology as one of the core underlying technologies for enabling VMware Cloud on AWS (among other deployments), applications with large data needs can store more data at a lower cost point. Note that VMware Cloud can support 10 clusters per SDDC deployment, with each cluster having 32 nodes and cluster-wide dedupe awareness. Also note that for performance, VMware Cloud on AWS leverages NVMe attached Solid State Devices (SSD) to boost effectiveness and productivity.
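The two DFR techniques named above can be demonstrated in miniature: dedupe stores each unique chunk once (keyed by a content hash), and compression shrinks what remains. This is a toy sketch of the concepts using Python's standard library, not how vSAN itself implements DFR at the storage layer:

```python
# Toy illustration of the two DFR techniques discussed above:
# de-duplication (store each unique chunk once, by content hash)
# plus compression. Not vSAN's implementation; concept only.
import hashlib
import zlib

def dedupe_and_compress(chunks):
    store = {}                          # content hash -> compressed chunk
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:         # only store unique chunks once
            store[digest] = zlib.compress(chunk)
    return store

# Four 4 KB chunks, three of them identical (highly dedupe-friendly data)
chunks = [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]
store = dedupe_and_compress(chunks)

logical = sum(len(c) for c in chunks)              # 16,384 bytes written
physical = sum(len(c) for c in store.values())     # far fewer bytes stored
print(f"{len(store)} unique chunks, reduction ratio {logical / physical:.0f}:1")
```

Real-world reduction ratios depend heavily on the data; repeated VM images and databases tend to dedupe well, while already-compressed or encrypted data does not.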


VMware Hybrid Cloud Extension
Extending VMware vSphere any to any migration across clouds  Source:


VMware Hybrid Cloud Extension

VMware Hybrid Cloud Extension enables common management of common underlying data infrastructure as well as software defined environments across public, private and hybrid clouds. Some of the capabilities include enabling warm VM migration across various software defined environments, from local on-premises and private clouds to public clouds.


New enhancements leverage previously available technology, now as a service for enterprises besides service providers, to support data center to data center, cloud-centric AZ to AZ, as well as region to region migrations. Some of the use cases include small to large bulk migrations of hundreds to thousands of VMs, covering both the scheduling as well as the actual moves. Moves and migrations can span hybrid deployments with a mix of on-premises as well as various cloud services.


VMware Cloud Cost Insight

VMware Cost Insight enables analysis and comparison of cloud costs across public (AWS, Azure) and private VMware clouds, to avoid flying blind in and among clouds. VMware Cloud Cost Insight enables awareness of how resources are used, their cost, and their benefit to applications as well as IT budget impacts. It integrates the vSAN sizer tool along with AWS metrics for improved situational awareness, cost modeling, analysis and what-if comparisons.


With integration with Network Insight, VMware Cloud Cost Insight also provides awareness of networking costs in support of migrations. What this means is that using VMware Cloud Cost Insight you can take the guesswork out of what your expenses for public, private on-premises or hybrid cloud will be, having deeper insight and awareness into your SDDC environment. Learn more about VMware Cost Insight here.


VMware Log Intelligence

Log Intelligence is a new VMware cloud service that provides real-time data infrastructure insight along with application visibility from private, on-premise, to public along with hybrid clouds. As its name implies, Log Intelligence provides syslog and other log insight, analysis and intelligence with real-time visibility into VMware as well as AWS among other resources for faster troubleshooting, diagnostics, event correlation and other data infrastructure management tasks.


Log and telemetry input sources for VMware Log Intelligence include data infrastructure resources such as operating systems, servers, system statistics, security, applications among other syslog events. For those familiar with VMware Log Insight, this capability is an extension of that known experience expanding it to be a cloud based service.


VMware Wavefront SaaS analytics
Wavefront by VMware Source:


VMware Wavefront

VMware Wavefront enables monitoring of cloud native high scale environments with custom metrics and analytics. As a reminder Wavefront was acquired by VMware to enable deep metrics and analytics for developers, DevOps, data infrastructure operations as well as SaaS application developers among others. Wavefront integrates with VMware vRealize along with enabling monitoring of AWS data infrastructure resources and services. With the ability to ingest, process, analyze various data feeds, the Wavefront engine enables the predictive understanding of mixed application, cloud native data and data infrastructure platforms including big data based.


Where to learn more

Learn more about VMware, vSphere, vRealize, VMware Cloud, AWS (and other clouds), along with data protection, software defined data center (SDDC), software defined data infrastructures (SDDI) and related topics via the following links:

SDDC Data Infrastructure


Additional learning experiences along with common questions (and answers), as well as tips can be found in the Software Defined Data Infrastructure Essentials book.


Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

VMware continues cloud construction. For now, it appears that VMware, like Dell Technologies, is content to be a technology provider partner to large as well as small public, private and hybrid cloud environments instead of building its own and competing. With this series of announcements, VMware continues cloud construction, enabling its partners and customers on their various software defined data center (SDDC) and related data infrastructure journeys. Overall, this is a good set of enhancements, updates, new and evolving features for the partners as well as customers who leverage VMware based technologies.


Ok, nuff said, for now.


Use NVMe U.2 SFF 8639 disk drive form factor SSD in PCIe slot

server storage I/O data infrastructure trends

Need to install or use an Intel Optane NVMe 900P or other Nonvolatile Memory (NVM) Express (NVMe) based U.2 SFF 8639 disk drive form factor Solid State Device (SSD) in a PCIe slot?


For example, I needed to connect an Intel Optane NVMe 900P U.2 SFF 8639 drive form factor SSD into one of my servers using an available PCIe slot.


The solution I used was a carrier adapter card such as those from Ableconn (PEXU2-132 NVMe 2.5-inch U.2 [SFF-8639]), available via various global venues.


Top Intel 750 NVMe PCIe AiC SSD, bottom Intel Optane NVMe 900P U.2 SSD with Ableconn carrier


The above image shows, at top, an Intel 750 NVMe PCIe Add in Card (AiC) SSD and, at bottom, an Intel Optane NVMe 900P 280GB U.2 (SFF 8639) drive form factor SSD mounted on an Ableconn carrier adapter.


NVMe server storage I/O sddc

NVMe Tradecraft Refresher

NVMe is the protocol that is implemented with different topologies including local via PCIe using U.2 aka SFF-8639 (aka disk drive form factor), M.2 aka Next Generation Form Factor (NGFF), also known as "gum stick", along with PCIe Add in Card (AiC). NVMe accessed devices can be installed in laptops, ultrabooks, workstations, servers and storage systems using the various form factors. U.2 drives are also referred to by some as PCIe drives in that the NVMe command set protocol is implemented over a PCIe x4 physical connection to the devices. Jump ahead if you want to skip over the NVMe primer refresh material to learn more about U.2 8639 devices.


data infrastructure nvme u.2 8639 ssd
Various SSD device form factors and interfaces


In addition to form factor, NVMe devices can be direct attached and dedicated, rack and shared, as well as accessed via networks also known as fabrics such as NVMe over Fabrics.


The many facets of NVMe as a front-end, back-end, direct attach and fabric


Context is important with NVMe in that fabric can mean NVMe over Fibre Channel (FC-NVMe) where the NVMe command set protocol is used in place of SCSI Fibre Channel Protocol (e.g. SCSI_FCP) aka FCP or what many simply know and refer to as Fibre Channel. NVMe over Fabric can also mean NVMe command set implemented over an RDMA over Converged Ethernet (RoCE) based network.


NVM and NVMe accessed flash SCM SSD storage


Another point of context is not to confuse Nonvolatile Memory (NVM), which is the storage or memory media, with NVMe, which is the interface for accessing storage (e.g. similar to SAS, SATA and others). As a refresher, NVM, or the media, are the various persistent memories (PM) including NVRAM, NAND Flash, 3D XPoint along with other storage class memories (SCM) used in SSDs (in various packaging).


Learn more about 3D XPoint with the following resources:


Learn more about (or refresh) your NVMe server storage I/O knowledge, experience and tradecraft skill set with this post here. View this piece here looking at NVM vs. NVMe and how one is the media where data is stored, while the other is an access protocol (e.g. NVMe). Also visit to view additional NVMe tips, tools, technologies, and related resources.

NVMe U.2 SFF-8639 aka 8639 SSD

On quick glance, an NVMe U.2 SFF-8639 SSD may look like a SAS small form factor (SFF) 2.5" HDD or SSD. Also, keep in mind that HDD and SSD with SAS interface have a small tab to prevent inserting them into a SATA port. As a reminder, SATA devices can plug into SAS ports, however not the other way around which is what the key tab function does (prevents accidental insertion of SAS into SATA). Looking at the left-hand side of the following image you will see an NVMe SFF 8639 aka U.2 backplane connector which looks similar to a SAS port.


Note that depending on how it is implemented, including its internal controller, flash translation layer (FTL), firmware and other considerations, an NVMe U.2 or 8639 x4 SSD should have similar performance to a comparable NVMe x4 PCIe AiC (e.g. card) device. By comparable device, I mean the same type of NVM media (e.g. flash or 3D XPoint), FTL and controller. Likewise, generally a PCIe x8 should be faster than an x4; however, more PCIe lanes do not mean more performance, it's what's inside and how those lanes are actually used that matters.


NVMe U.2 8639 2.5" 1.8" SSD driveNVMe U.2 8639 2.5 1.8 SSD drive slot pin
NVMe U.2 SFF 8639 Drive (Software Defined Data Infrastructure Essentials CRC Press)


With U.2 devices the key tab that prevents SAS drives from inserting into a SATA port is where four pins that support PCIe x4 are located. What this all means is that a U.2 8639 port or socket can accept an NVMe, SAS or SATA device depending on how the port is configured. Note that the U.2 8639 port is either connected to a SAS controller for SAS and SATA devices or a PCIe port, riser or adapter.


On the left of the above figure is a view towards the backplane of a storage enclosure in a server that supports SAS, SATA, and NVMe (e.g. 8639). On the right of the above figure is the connector end of an 8639 NVM SSD showing additional pin connectors compared to a SAS or SATA device. Those extra pins give PCIe x4 connectivity to the NVMe devices. The 8639 drive connectors enable a device such as an NVM, or NAND flash, SSD to share a common physical storage enclosure with SAS and SATA devices, including optional dual-pathing.


More PCIe lanes may not mean faster performance; verify whether those lanes (e.g. x4, x8, x16, etc.) are present merely mechanically (e.g. physically) or also electrically (actually usable), and whether they are actually being used. Also, note that some PCIe storage devices or adapters might be, for example, an x8 supporting two channels or devices each at x4. Likewise, some devices might be x16 yet only support four x4 devices.
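The lane math behind the x4 vs x8 discussion can be sketched as follows. The per-lane figure is an approximation of usable PCIe Gen3 bandwidth after 128b/130b encoding (roughly 0.985 GB/s per lane); treat these as theoretical ceilings, since actual device performance depends on the controller, NVM media and firmware rather than lane count alone:

```python
# Rough PCIe lane bandwidth math for the x4 vs x8 discussion above.
# ~0.985 GB/s per lane is the approximate usable PCIe Gen3 rate after
# 128b/130b encoding; these are theoretical ceilings, not device specs.
GEN3_GBPS_PER_LANE = 0.985

def pcie_bandwidth_gbps(lanes, per_lane=GEN3_GBPS_PER_LANE):
    """Approximate one-direction bandwidth ceiling for a PCIe link."""
    return lanes * per_lane

x4 = pcie_bandwidth_gbps(4)    # ceiling for a U.2 x4 NVMe device
x8 = pcie_bandwidth_gbps(8)    # ceiling for an x8 AiC
print(f"x4 ~{x4:.1f} GB/s, x8 ~{x8:.1f} GB/s")
```

This also shows why an x8 adapter carrying two x4 devices is not "faster" per device: each device still sees only its own x4 worth of lanes.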


NVMe U.2 SFF 8639 PCIe Drive SSD FAQ

Some common questions pertaining to NVMe U.2 aka SFF 8639 interface and form factor based SSDs include:


Why use U.2 type devices?


Compatibility with what's available for server storage I/O slots in a server, appliance or storage enclosure. The ability to mix and match SAS, SATA and NVMe, with some caveats, in the same enclosure. Support for higher density storage configurations, maximizing available PCIe slots and enclosure density.


Is PCIe x4 with NVMe U.2 devices fast enough?


While not as fast as a PCIe AiC that fully supports x8 or x16 or higher, an x4 U.2 NVMe accessed SSD should be plenty fast for many applications. If you need more performance, then go with a faster AiC card.


Why not go with all PCIe AiC?


If you need the speed and simplicity, and have available PCIe card slots, then put as many of those in your systems or appliances as possible. On the other hand, some servers or appliances are PCIe slot constrained, so U.2 devices can be used to increase the number of devices attached to a PCIe backplane while also supporting SAS and SATA based SSDs or HDDs.


Why not use M.2 devices?


If your system or appliance supports NVMe M.2, those are good options. Some systems even support a combination: M.2 for local boot, staging, logs, work and other storage space, while PCIe AiC along with U.2 devices are for performance.


Why not use NVMeoF?


Good question: why not indeed. If your shared storage system supports NVMeoF or FC-NVMe, go ahead and use that; however, you might also need some local NVMe devices. Likewise, if yours is a software-defined storage platform that needs local storage, then NVMe U.2, M.2 and AiC or custom cards are an option. On the other hand, a shared fabric NVMe based solution may support a mixed pool of SAS, SATA along with NVMe U.2, M.2, AiC or custom cards as its back-end storage resources.


When not to use U.2?


If your system, appliance or enclosure does not support U.2 and you do not have a need for it. Or, if you need more performance such as from an x8 or x16 based AiC, or you need shared storage. Granted a shared storage system may have U.2 based SSD drives as back-end storage among other options.

How does the U.2 backplane connector attach to PCIe?


Via the enclosure's backplane: there is either a direct hardwire connection to the PCIe backplane, or a connector cable to a riser card or similar mechanism.


Does NVMe replace SAS, SATA or Fibre Channel as an interface?


The NVMe command set is an alternative to the traditional SCSI command set used in SAS and Fibre Channel. That means it can replace them, or co-exist with them, depending on your needs and preferences for accessing various storage devices.


Who supports U.2 devices?


Dell has supported U.2 aka PCIe drives in some of their servers for many years, as have Intel and many others. Likewise, U.2 8639 SSD drives, including 3D XPoint and NAND flash based, are available from Intel among others.


Can you have AiC, U.2 and M.2 devices in the same system?


If your server, appliance or storage system supports them, then yes. Likewise, there are M.2 to PCIe AiC, M.2 to SATA, along with other adapters available for your servers, workstations or software-defined storage system platform.

NVMe U.2 carrier to PCIe adapter

The following images show examples of mounting an Intel Optane NVMe 900P accessed U.2 8639 SSD on an Ableconn PCIe AiC carrier. Once the U.2 SSD is mounted, the Ableconn adapter inserts into an available PCIe slot similar to other AiC devices. From a server or storage appliance's software perspective, the Ableconn is a pass-through device, so your normal device drivers are used; for example, VMware vSphere ESXi 6.5 recognizes the Intel Optane device, similar with Windows and other operating systems.


intel optane 900p u.2 8639 nvme drive bottom view
  Intel Optane NVMe 900P U.2 SSD and Ableconn PCIe AiC carrier


The above image shows the Ableconn adapter carrier card along with NVMe U.2 8639 pins on the Intel Optane NVMe 900P.


intel optane 900p u.2 8639 nvme drive end view
Views of Intel Optane NVMe 900P U.2 8639 and Ableconn carrier connectors


The above image shows an edge view of the NVMe U.2 SFF 8639 Intel Optane NVMe 900P SSD along with those on the Ableconn adapter carrier. The following images show an Intel Optane NVMe 900P SSD installed in a PCIe AiC slot using an Ableconn carrier, along with how VMware vSphere ESXi 6.5 sees the device using plug and play NVMe device drivers.


NVMe U.2 8639 installed in PCIe AiC Slot
Intel Optane NVMe 900P U.2 SSD installed in PCIe AiC Slot


NVMe U.2 8639 and VMware vSphere ESXi
How VMware vSphere ESXi 6.5 sees NVMe U.2 device


Intel NVMe Optane NVMe 3D XPoint based and other SSDs

Here are some links to various Intel Optane NVMe 3D XPoint based SSDs in different packaging form factors:


Here are some links to various Intel and other vendor NAND flash based NVMe accessed SSDs including U.2, M.2 and AiC form factors:

Note in addition to carriers to adapt U.2 8639 devices to PCIe AiC form factor and interfaces, there are also M.2 NGFF to PCIe AiC among others. An example is the Ableconn M.2 NGFF PCIe SSD to PCI Express 3.0 x4 Host Adapter Card.


Ebay and many other venues carry NVMe related technologies. The Intel Optane NVMe 900P is newer; however, the Intel 750 Series along with other Intel NAND flash based SSDs are still good price performers and provide value. I have accumulated several Intel 750 NVMe devices over the past few years as they are great price performers. Check out this related post Get in the NVMe SSD game (if you are not already).

Where To Learn More

View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.


Additional learning experiences along with common questions (and answers), as well as tips can be found in the Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

NVMe accessed storage is in your future, however there are various questions to address including exploring your options for type of devices, form factors, configurations among other topics. Some NVMe accessed storage is direct attached and dedicated in laptops, ultrabooks, workstations and servers including PCIe AiC, M.2 and U.2 SSDs, while others are shared networked aka fabric based. NVMe over fabric (e.g. NVMeoF) includes RDMA over converged Ethernet (RoCE) as well as NVMe over Fibre Channel (e.g. FC-NVMe). Networked fabric accessed NVMe access of pooled shared storage systems and appliances can also include internal NVMe attached devices (e.g. as part of back-end storage) as well as other SSDs (e.g. SAS, SATA).


General wrap-up (for now) NVMe U.2 8639 and related tips include:

  • Verify the performance of the device vs. how many PCIe lanes exist
  • Update any applicable BIOS/UEFI, device drivers and other software
  • Check the form factor and interface needed (e.g. U.2, M.2 / NGFF, AiC) for a given scenario
  • Look carefully at the NVMe devices being ordered for proper form factor and interface
  • With M.2 verify that it is an NVMe enabled device vs. SATA


Learn more about NVMe, including how to use Intel Optane NVMe 900P U.2 SFF 8639 disk drive form factor SSDs in PCIe slots, as well as for fabrics among other scenarios.


Ok, nuff said, for now.


World Backup Day 2018 Data Protection Readiness Reminder

server storage I/O trends

It's that time of year again, World Backup Day 2018 Data Protection Readiness Reminder.


In case you have forgotten, or were not aware, this coming Saturday, March 31, is World Backup (and Recovery) Day. The annual day is a reminder to make sure you are protecting your applications, data, information, configuration settings as well as data infrastructures. While the emphasis is on backup, that also means recovery, as well as testing to make sure everything is working properly.


data infrastructure data protection


It's time the focus of World Backup Day expanded beyond backup alone to broader data protection and things that start with R. Some data protection (and backup) related things, tools, tradecraft techniques, technologies and trends that start with R include readiness, recovery, reconstruct, restore, restart, resume, replication, rollback, roll forward, RAID and erasure codes, resiliency, recovery time objective (RTO) and recovery point objective (RPO), among others.


data protection threats ransomware software defined


Keep in mind that Data Protection is a broader focus than just backup and recovery. Data protection includes disaster recovery (DR), business continuance (BC), business resiliency (BR), security (logical and physical), standard and high availability (HA), as well as durability, archiving, data footprint reduction, and copy data management (CDM), along with various technologies, tradecraft techniques and tools.


data protection 4 3 2 1 rule and 3 2 1 rule

Quick Data Protection, Backup and Recovery Checklist

  • Keep the 4 3 2 1 (or the shorter, older 3 2 1) data protection rules in mind
  • Do you know what data, applications, configuration settings, metadata, keys and certificates are being protected?
  • Do you know how many versions and copies exist, where they are stored, and what is on or off-site, on or off-line?
  • Implement data protection at different intervals and with coverage of various layers (application, transaction, database, file system, operating system, hypervisor, device or volume among others)


data infrastructure backup data protection


  • Have you protected your data protection environment, including software, configuration, catalogs, indexes and databases along with management tools?
  • Verify that data protection point-in-time copies (backups, snapshots, consistency points, checkpoints, versions, replicas) are working as intended
  • Make sure not only that the point-in-time protection copies run when scheduled, but also that they protect what's intended


data infrastructure backup data protection


  • Test whether the protection copies can actually be used; this means restoring as well as accessing the data via applications
  • Watch out to prevent a disaster in the course of testing: plan, prepare, practice, learn, refine, improve
  • In addition to verifying your data protection (backup, BC, DR) at work, also take time to see how your home or personal data is protected
  • View additional tips, techniques and checklist items in this Data Protection fundamentals series of posts here.

storageio data protection toolbox
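The 4 3 2 1 rule in the checklist above lends itself to a simple automated check. The sketch below assumes one common reading of the rule (at least 4 copies or versions, across at least 3 systems or media, in at least 2 locations, with at least 1 off-site); both that interpretation and the `Copy` record shape are assumptions for illustration, not a standard definition:

```python
# Minimal sketch of checking backup copies against the 4 3 2 1 rule
# mentioned above. The rule reading (4 copies, 3 systems/media,
# 2 locations, 1+ off-site) and the Copy record are assumptions
# for illustration only.
from dataclasses import dataclass

@dataclass
class Copy:
    system: str      # device or service holding the copy
    location: str    # site or region
    offsite: bool    # stored away from the primary site?

def meets_4321(copies):
    return (len(copies) >= 4                                # 4+ copies/versions
            and len({c.system for c in copies}) >= 3        # 3+ systems/media
            and len({c.location for c in copies}) >= 2      # 2+ locations
            and any(c.offsite for c in copies))             # 1+ off-site

copies = [
    Copy("primary-ssd", "office", False),
    Copy("nas-share", "office", False),
    Copy("usb-drive", "home", True),
    Copy("cloud-backup", "cloud-region-1", True),
]
print(meets_4321(copies))
```

A check like this only confirms that copies exist where you think they do; it says nothing about whether they restore cleanly, which is why the test-your-restores items in the checklist still matter.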

Where To Learn More

View additional Data Infrastructure Data Protection and related tools, trends, technology and tradecraft skills topics via the following links.


data protection rto rpo

Additional learning experiences along with common questions (and answers), as well as tips can be found in the Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

You can not go forward if you can not go back to a particular point in time (e.g. recovery point objective or RPO). Likewise, if you can not go back to a given RPO, how can you go forward with your business as well as meet your recovery time objective (RTO)?


data protection restore rto rpo


Backup is as important as restore; without a good backup or data protection point in time copy, how can you restore? Some will say backup is more important than recovery; however, it's the enablement that matters, in other words being able to provide data protection and recover, restart, resume or do other things that start with R. World Backup Day should be a reminder to think about broader data protection, which also means recovery, restore and verifying that your copies and versions are good. Keep the above in mind, and consider this your World Backup Day 2018 Data Protection Readiness Reminder.


Ok, nuff said, for now.


              server storage I/O data infrastructure trends


              Microsoft and Azure September 2017 Software Defined Data infrastructure Updates


September was a busy month for data infrastructure topics as well as for Microsoft in terms of new and enhanced technologies. Wrapping up September was Microsoft Ignite, where Azure, Azure Stack, Windows, O365, AI, IoT and development tools announcements occurred, along with others from earlier in the month. As part of the September announcements, Microsoft released a new version of Windows Server (e.g. 1709) with a focus on enhanced container support. Note that if you have deployed Storage Spaces Direct (S2D) and are looking to upgrade to 1709, do your homework, as there are some caveats that will cause you to wait for the next release. Note that there had been new storage related enhancements slated for the September update; however, those were announced at Ignite as being pushed to the next semi-annual release. Learn more here and also here.

              Azure Files and NFS

Microsoft made several Azure file storage related announcements and public previews during September, including native NFS based file sharing as a companion to existing Azure Files, along with a public preview of the new Azure File Sync service. Native NFS based file sharing (public preview announced; the service is slated to be available in 2018) is a software defined storage deployment of NetApp ONTAP running on top of Azure data infrastructure, including virtual machines, and leveraging Azure underlying storage.


              Note that the new native NFS is in addition to the earlier native Azure Files accessed via HTTP REST and SMB3 enabling sharing of files inside Azure public cloud, as well as accessible externally from Windows based and Linux platforms including on premises. Learn more about Azure Storage and Azure Files here.

              Azure File Sync (AFS)

              Azure File Sync AFS

Azure File Sync (AFS) has now entered public preview. While users of Windows-based systems have been able to access and share Azure Files in the past, AFS is something different. I have used AFS for some time now through several private preview iterations, seeing how it has evolved, along with how Microsoft listens and incorporates feedback into the solution.


Let's take a look at what AFS is, what it does, how it works, and where and when to use it, among other considerations. With AFS, different and independent systems can now synchronize file shares through Azure. Currently in the AFS preview, Windows Server 2012 and 2016 are supported, including bare metal, virtual, and cloud based. For example, I have had bare metal, virtual (VMware), and cloud (Azure and AWS) servers participating in file sync activities using AFS.


Not to be confused with other storage related AFS acronyms, including the Andrew File System among others, the new Microsoft Azure File Sync service enables files to be synchronized across different servers via Azure. This is different from the previously available Azure File Share service that enables files stored in Azure cloud storage to be accessed via Windows and Linux systems within Azure, as well as natively by Windows platforms outside of Azure. Likewise, this is different from the recently announced Microsoft Azure native NFS file sharing service in partnership with NetApp (e.g. powered by ONTAP Cloud).


AFS can be used to synchronize across different on-premises as well as cloud servers that can also function as a cache. What this means is that for Windows work folders served via different on-premises servers, those files can be synchronized across Azure to other locations. Besides providing cache, cloud tiering, and enterprise file sync and share (EFSS) capabilities, AFS also has robust optimization for data movement to and from the cloud and across sites, along with management tools, including diagnostics, performance and activity monitoring among others.
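As a toy model of the sync behavior described above (my simplification, not Microsoft's actual implementation), think of each server endpoint pushing its files to a hub share in Azure, with the merged view then fanned back out to every endpoint:

```python
def sync_endpoints(endpoints):
    """Toy hub-and-spoke sync: merge every endpoint's files into a hub
    share, then fan the merged view back out to each endpoint.
    `endpoints` maps server name -> {file name: version}."""
    hub = {}
    for files in endpoints.values():
        hub.update(files)  # simplification: last writer wins on conflicts
    return {name: dict(hub) for name in endpoints}
```

After a sync pass, a file created on an on-premises server becomes visible on a cloud VM endpoint as well; the real service adds conflict handling, tiering and change detection on top of this basic idea.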

              Check out the AFS preview including planning for an Azure File Sync (preview) deployment (Docs Microsoft), and for those who have Yammer accounts, here is the AFS preview group link.

              Microsoft Azure Blob Events via Microsoft

              Azure Blob Storage Tiering and Event Triggers

Two other Azure storage features in public preview include blob tiering (for cold archiving) and event triggers. As their names imply, blob tiering enables automatic migration of dormant data from active to cold, inactive storage. Event triggers are policy rules (code) that get executed when a blob is stored, to perform various functions or tasks. Here is an overview of blob events and a quick start from Microsoft here.
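Conceptually, a tiering rule is just a predicate over blob metadata. Here is an illustrative sketch (the day thresholds are hypothetical examples, not Azure defaults):

```python
from datetime import datetime, timedelta

def pick_tier(last_modified: datetime, now: datetime,
              cool_after_days: int = 30, archive_after_days: int = 180) -> str:
    """Illustrative tiering policy: the longer data sits dormant,
    the colder (and cheaper) the tier it is moved to."""
    age = now - last_modified
    if age >= timedelta(days=archive_after_days):
        return "archive"
    if age >= timedelta(days=cool_after_days):
        return "cool"
    return "hot"
```

The actual service evaluates rules like this automatically; the point is that dormant data migrates to colder tiers without application changes.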


Keep in mind that not all blob and object storage are the same; a good example is Microsoft Azure, which has page, block and append blobs. Append blobs are similar to the objects you may be familiar with from other services. Here is a Microsoft overview of various Azure blobs, including what to use when.

              Project Honolulu and Windows Server Enhancements

Microsoft has evolved from the command prompt (e.g. early MSDOS) to a GUI with Windows, to a command line extended into PowerShell, which left some thinking there is no longer a need for a GUI. Even though Microsoft has extended its CLI with PowerShell spanning Windows platforms and Azure, along with adding a Linux command shell, there are those who still want or need a GUI. Project Honolulu is the effort to bring GUI based management back to Windows in a simplified way for what had been headless, desktop-less deployments (e.g. Nano, Server Core). Microsoft had Server Management Tools (SMT) accessible via the Azure Portal, which has been discontinued.


              Microsoft Project Honolulu management via
              Project Honolulu Image via


This is where Project Honolulu comes into play for managing Windows Server platforms. What this means is that those who don't want to rely on, or have a dependency on, PowerShell have an alternative option. Learn more about Project Honolulu here and here, including downloading the public preview here.

              Storage Spaces Direct (S2D) Kepler Appliance

Data infrastructure provider DataOn has announced a new turnkey Windows Server 2016 Storage Spaces Direct (S2D) powered hyper-converged infrastructure solution (e.g. a productization of project Kepler-47) with two node small form factor servers (in partnership with MSI). How small? Think suitcase or airplane carry-on roller-board luggage size.


What this means is that you can get into the converged, hyper-converged software defined storage game with Windows-based servers supporting Hyper-V virtual machines (Windows and Linux), including hardware, for around $10,000 USD (varies by configuration and other options).

              Azure and Microsoft Networking News

Speaking of the Microsoft Azure public cloud, if you ever wonder what the network that enables the service looks like, and what some of the software defined networking (SDN) along with network function virtualization (NFV) objectives are, have a look at this piece over at Data Center Knowledge.


In related Windows, Azure and other focus areas, Microsoft, Facebook and Telxius have completed the installation of a high-capacity subsea cable (network) crossing the Atlantic Ocean. What's so interesting from a data infrastructure, cloud or legacy server storage I/O and data center perspective? The new network was built by the combined companies vs. in the past by a telco provider consortium, with the subsequent bandwidth sold or leased to others.


This new network is also 4,000 miles long, reaching depths of 11,000 feet, and with current optics supports 160 terabits (e.g. 20 terabytes) per second, capable of supporting 71 million HD videos streamed simultaneously. To put things into perspective, some residential fiber optic services can operate best case up to 1 gigabit per second (line speed) and in an asymmetrical fashion (faster downloads than uploads). Granted, there are some 10 Gbit based services out there, more common with commercial than residential. Simply put, there is a large amount of bandwidth added across the Atlantic for Microsoft and Facebook to support growing demands.
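The arithmetic behind those numbers can be checked directly (the per-stream bitrate is derived from the figures above, not quoted anywhere):

```python
cable_bps = 160e12             # 160 terabits per second, as stated
bytes_per_sec = cable_bps / 8  # = 20e12 bytes, i.e. 20 terabytes per second
hd_streams = 71e6              # 71 million simultaneous HD streams
per_stream_bps = cable_bps / hd_streams  # about 2.25 Mbit/s per HD stream
residential_bps = 1e9          # 1 Gbit/s residential fiber, best case
speedup = cable_bps / residential_bps    # the cable = 160,000 such links
```

So the implied per-stream budget of roughly 2.25 Mbit/s is a plausible HD bitrate, and the cable carries the equivalent of 160,000 best-case residential fiber connections.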

              Where To Learn More

              Learn more about related technology, trends, tools, techniques, and tips with the following links.

              What This All Means

              Microsoft announced a new release of Windows Server at Ignite as part of its new semi-annual release cycle. This latest version of Windows server is optimized for containers. In addition to Windows server enhancements, Microsoft continues to extend Azure and related technologies for public, private and hybrid cloud as well as software defined data infrastructures.


By the way, if you have not heard, it's Blogtober; check out some of the other blogs and posts occurring during October here.


              Ok, nuff said, for now.

              server storage I/O data infrastructure trends


              Dell EMC VMware September 2017 Software Defined Data Infrastructure Updates


              vmworld 2017


September was a busy month, including VMworld in Las Vegas, which featured many Dell EMC and VMware (among other) software defined data infrastructure updates and announcements.


              A summary of September VMware (and partner) related announcements include:


              VMware on AWS via
              VMware and AWS via Amazon Web Services


              VMware and AWS

Some of you might recall VMware's earlier attempt at a public cloud with the vCloud Air service (see the Server StorageIO lab test drive here), which has since been deprecated (e.g. retired). This new approach by VMware leverages the large global presence of AWS, enabling customers to set up public or hybrid vSphere, vSAN and NSX based clouds, as well as software defined data centers (SDDC) and software defined data infrastructures (SDDI).


VMware Cloud on AWS runs on dedicated, single-tenant hardware (unlike multi-tenant Elastic Cloud Compute (EC2) instances or VMs) that supports from 4 to 16 underlying hosts per cluster. Unlike EC2 virtual machine instances, VMware Cloud on AWS is delivered on elastic bare metal (e.g. dedicated private servers, aka DPS). Note that while EC2 is the most commonly known, AWS also has other options for server compute, including Lambda serverless microservice containers, as well as Lightsail virtual private servers (VPS).


Besides servers with storage optimized I/O featuring low latency NVMe accessed SSDs and applicable underlying server I/O networking, VMware Cloud on AWS runs the VMware software stack directly on the underlying host servers (e.g. there is no virtualization nesting taking place). This means robust performance should be expected, like in your on-premises VMware environment. VM workloads can move between your on-site VMware systems and VMware Cloud on AWS using various tools. VMware Cloud on AWS is delivered and managed by VMware, including pricing. Learn more about VMware Cloud on AWS here, here (VMware PDF) and here (VMware Hands On Lab, aka HOL).


              Read more about AWS September news and related updates here in this StorageIOblog post.


              VMware PKS
              VMware and Pivotal PKS via

              Pivotal Container Service (PKS) and Google Kubernetes Partnership

During VMworld, VMware, Pivotal and Google announced a partnership for enabling Kubernetes container management called PKS (Pivotal Container Service). Kubernetes is evolving as a popular open source container and microservice orchestration management platform that has roots within Google. What this means is that what is good for Google and others for managing containers is now good for VMware and Pivotal. In related news, VMware has become a platinum sponsor of the Cloud Native Computing Foundation (CNCF). If you are not familiar with CNCF, add it to your vocabulary and learn more here.

              Other VMworld and September VMware related announcements

Hyper-converged data infrastructure provider Maxta has announced a VMware vSphere Escape Pod (parachute not included) to facilitate migration from ESXi based to Red Hat Linux hypervisor environments. IBM and VMware announced a cloud partnership, along with Dell EMC, IBM and VMware joint cloud solutions. White listing of VMware vSphere VMs for enhanced security combines with earlier announced capabilities.


Note that both VMware with vSphere ESXi and Microsoft with Hyper-V (Windows and Azure based) are supporting various approaches for securing virtual machines (VMs) and the hosts they run on. These enhancements move beyond simply encrypting the VMDK or VHDX virtual disks the VMs reside in or use, as well as beyond password, ssh and other security measures. For example, Microsoft is adding support for guarded fabrics (and guarded hosts) as well as shielded VMs. Keep an eye on how both VMware and Microsoft extend the data protection and security capabilities of their software defined data infrastructure solutions and services.

              Dell EMC Announcements

              At VMworld in September Dell EMC announcements included:

              • Hyper Converged Infrastructure (HCI) and Hybrid Cloud enhancements
• Data Protection, Governance and Management suite updates
              • XtremIO X2 all flash array (AFA) availability optimized for vSphere and VDI


              HCI and Hybrid Cloud enhancements include VxRail Appliance, VxRack SDDC (vSphere 6.5, vSAN 6.6, NSX 6.3) along with hybrid cloud platforms (Enterprise Hybrid Cloud and Native Hybrid Cloud) along with vSAN Ready Nodes (vSAN 6.6 and encryption) and VMware Ready System. Note that Dell EMC in addition to supporting VMware hybrid clouds also previously announced solutions for Microsoft Azure Stack back in May.


              Software Defined Data Infrastructure Essentials at VMworld Bookstore

              Software Defined Data Infrastructure Essentials (CRC Press) at VMworld bookstore


              My new book Software Defined Data Infrastructure Essentials (CRC Press) made its public debut in the VMware book store where I did a book signing event. You can get your copy of Software Defined Data Infrastructure Essentials which includes Software Defined Data Centers (SDDC) along with hybrid, multi-cloud, serverless, converged and related topics at Amazon among other venues. Learn more here.


              Where To Learn More

              Learn more about related technology, trends, tools, techniques, and tips with the following links.

              What This All Means

A year ago at VMworld the initial conversations were started around what would become the VMware Cloud on AWS solution. Also a year ago, besides VMware Integrated Containers (VIC) and some other pieces, the overall container story, and in particular the related management story, was a bit cloudy (pun intended). However, now the fog and clouds seem to be clearing with the PKS solution, along with details of VMware Cloud on AWS. Likewise vSphere, vSAN and NSX along with associated vRealize tools continue to evolve, as well as customer deployments growing. All in all, VMware continues to evolve; let's see how things progress over the year until the next VMworld.


By the way, if you have not heard, it's Blogtober; check out some of the other blogs and posts occurring during October here.


              Ok, nuff said, for now.
Cheers GS

              server storage I/O data infrastructure trends

Amazon Web Services AWS September 2017 Software Defined Data Infrastructure Updates


September was a busy month pertaining to software defined data infrastructure, including cloud and related AWS announcements. One of the announcements included VMware partnering to deliver vSphere, vSAN and NSX data infrastructure components for creating software defined data centers (SDDC), also known as multi cloud and hybrid cloud, leveraging AWS elastic bare metal servers (read more here in a companion post). Unlike traditional partner software defined solutions that relied on AWS Elastic Cloud Compute (EC2) instances, VMware is being deployed using private bare metal AWS elastic servers.


              What this means is that VMware vSphere (e.g. ESXi) hypervisor, vCenter, software defined storage (vSAN), storage defined network (NSX) and associated vRealize tools are deployed on AWS data infrastructure that can be used for deploying hybrid software defined data centers (e.g. connecting to your existing VMware environment). Learn more about VMware on AWS here or click on the following image.


              VMware on AWS via

              Additional AWS Updates

Amazon Web Services (AWS) updates include, coinciding with VMworld, the initial availability of VMware on AWS (using dedicated private servers, e.g. think along the lines of Lightsail, not EC2 instances). AWS continues its expansion into database and table services with the Relational Database Service (RDS), including various engines (Amazon Aurora, MariaDB, MySQL, Oracle, PostgreSQL, and SQL Server) along with the Database Migration Service (DMS). Note that these RDS offerings are in addition to what you can install and run yourself on Elastic Cloud Compute (EC2) virtual machine instances, Lambda serverless containers, or Lightsail virtual private servers (VPS).


AWS has published a guide to database testing on Amazon RDS for Oracle, plotting latency and IOPS for OLTP workloads, here, using SLOB. If you are not familiar with SLOB (Silly Little Oracle Benchmark), here is a podcast with its creator Kevin Closson discussing database performance and related topics. Learn more about SLOB, including step by step installation for AWS RDS Oracle, here; and for those who are concerned, or think that you cannot run workloads to evaluate Oracle platforms, have a look at this here.
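When reading latency versus IOPS plots such as those in the RDS guide, Little's Law ties the two metrics together: concurrency = throughput x latency. A quick sanity check of that relationship:

```python
def iops(concurrency: float, latency_s: float) -> float:
    """Little's Law rearranged: throughput (IOPS) = concurrency / latency.
    At a fixed queue depth, rising latency directly caps achievable IOPS."""
    return concurrency / latency_s

# 32 outstanding I/Os at 1 ms average latency sustain about 32,000 IOPS;
# if latency grows to 4 ms at the same queue depth, IOPS drops to ~8,000.
```

This is why a benchmark curve that shows IOPS flattening while latency climbs indicates the system has hit saturation, not that the workload generator stopped pushing.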


EC2 enhancements include charging by the second (previously by the hour) for some EC2 instances (see details here, including what is or is not currently available), a growing trend among cloud vendors aligning with how serverless containers have been billed. New large memory EC2 instances that, for example, support up to 3,904GB of DDR4 RAM have been added by AWS. Other EC2 enhancements include updated network performance for some instances, an OpenCL development environment to leverage AWS F1 FPGA enabled instances, along with new Elastic GPU enabled instances. Other server and network enhancements include the Network Load Balancer for Elastic Load Balancing, as well as the application load balancer now supporting load balancing to IP addresses as targets for AWS and on-premises (e.g. hybrid) resources.
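Per-second billing matters most for short-lived instances. A sketch comparing the two schemes (the hourly rate and the 60 second minimum charge are illustrative assumptions, not published AWS pricing):

```python
import math

def hourly_billed(runtime_s: float, rate_per_hour: float) -> float:
    """Old scheme: runtime is rounded up to whole hours."""
    return math.ceil(runtime_s / 3600) * rate_per_hour

def per_second_billed(runtime_s: float, rate_per_hour: float,
                      minimum_s: float = 60) -> float:
    """New scheme: bill by the second, with a minimum charge."""
    return max(runtime_s, minimum_s) * rate_per_hour / 3600
```

For a hypothetical 5 minute batch job at $0.10 per hour, hourly billing charges the full $0.10 while per-second billing charges well under a cent, which is why the change aligns nicely with bursty, container-like workloads.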


Other updates and announcements include data protection backups to AWS via Commvault and the AWS Storage Gateway VTL. IBM has announced a quick start for its Spectrum Scale (e.g. formerly known as SONAS, aka GPFS) scale out storage solution for high performance compute (HPC) on AWS. Additional AWS enhancements include a new edge location in Boston and a third Seattle site, while Direct Connect sites have been added in Boston and Houston along with Canberra, Australia. View more AWS announcements and enhancements here.

              Where To Learn More

              Learn more about related technology, trends, tools, techniques, and tips with the following links.

              What This All Means

AWS continues to grow and expand, both in terms of the number of services as well as their extensiveness. Likewise AWS continues to add more regions and data center availability zones, enhanced connectivity, along with the earlier mentioned service features. The partnership with VMware should enable enterprise organizations to move towards hybrid cloud data infrastructures, while giving AWS additional reach into those data centers. Overall a good set of enhancements by AWS, which continues to evolve its cloud and software defined data infrastructure portfolio of solution offerings.


By the way, if you have not heard, it's Blogtober; check out some of the other blogs and posts occurring during October here.


              Ok, nuff said, for now.

              server storage I/O data infrastructure trends


Microsoft has created an Azure and Amazon Web Services (AWS) service map (corresponding services from both providers).

              Azure AWS service map via
              Image via


Note that this is an evolving work in progress from Microsoft; use it as a tool to help position the different services from Azure and AWS.


Also note that not all features or services are available in all regions; visit the Azure and AWS sites to see current availability.


As with any comparison, these are often dated the day they are posted, hence this is a work in progress. If you are looking for another Microsoft created "why Azure vs. AWS" comparison, then check out this here. If you are looking for an AWS vs. Azure comparison, do a simple Google (or Bing) search and watch all the various items appear, some sponsored, some not so sponsored, among others.

What's In the Service Map

              The following AWS and Azure services are mapped:

              • Marketplace (e.g. where you select service offerings)
              • Compute (Virtual Machines instances, Containers, Virtual Private Servers, Serverless Microservices and Management)
              • Storage (Primary, Secondary, Archive, Premium SSD and HDD, Block, File, Object/Blobs, Tables, Queues,  Import/Export, Bulk transfer, Backup, Data Protection, Disaster Recovery, Gateways)
              • Network & Content Delivery (Virtual networking, virtual private networks and virtual private cloud, domain name services (DNS), content delivery network (CDN), load balancing, direct connect, edge, alerts)
              • Database (Relational, SQL and NoSQL document and key value, caching, database migration)
              • Analytics and Big Data (data warehouse, data lake, data processing, real-time and batch, data orchestration, data platforms, analytics)
              • Intelligence and IoT (IoT hub and gateways, speech recognition, visualization, search, machine learning, AI)
              • Management and Monitoring (management, monitoring, advisor, DevOps)
              • Mobile Services (management, monitoring, administration)
• Security, Identity and Access (security, directory services, compliance, authorization, authentication, encryption, firewall)
• Developer Tools (workflow, messaging, email, API management, media transcoding, development tools, testing, DevOps)
              • Enterprise Integration (application integration, content management)


Download a PDF version of the service map from Microsoft here.
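A few of the pairings can also be expressed as a simple lookup table. This is an illustrative subset as of late 2017, not the full Microsoft map:

```python
# Illustrative subset of the AWS-to-Azure service pairings (circa 2017)
AWS_TO_AZURE = {
    "EC2": "Virtual Machines",
    "Lambda": "Azure Functions",
    "S3": "Blob Storage",
    "DynamoDB": "Cosmos DB",
    "RDS": "SQL Database",
    "CloudFront": "Azure CDN",
    "Route 53": "Azure DNS",
    "VPC": "Virtual Network",
}

def azure_equivalent(aws_service: str) -> str:
    """Return the roughly corresponding Azure service, if one is listed."""
    return AWS_TO_AZURE.get(aws_service, "no direct mapping listed")
```

Keep in mind these are rough equivalences, not feature-for-feature matches; always verify capabilities and regional availability on the providers' own sites.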

              Where To Learn More


              Learn more about related technology, trends, tools, techniques, and tips with the following links.


What This All Means

On one hand this can and likely will be used as a comparison; however, use caution, as both Azure and AWS services are rapidly evolving, adding new features and extending others. Likewise the service regions and data center sites also continue to evolve, thus use the above as a general guide or tool to help map which service offerings are similar between AWS and Azure.


By the way, if you have not heard, it's Blogtober; check out some of the other blogs and posts occurring during October here.


              Ok, nuff said, for now.

              Server StorageIO Industry Resources and Links

              Volume 17, Issue IX (September 2017)

              Hello and welcome to the September 2017 issue of the Server StorageIO update newsletter.

              With September being generally known as back to school month, the two September event bookends were VMware VMworld and Microsoft Ignite with many other things in between.


Needless to say, a lot has happened in and around data infrastructure topic areas since the August newsletter (here if you missed it). Here is a post covering some of the things I participated in during September, including presentations at events in Las Vegas (VMworld), New York City (Wipro SDx Summit), SNIA SDC in Santa Clara, the Fujifilm Executive Summit in Seattle, and the Minneapolis/St. Paul CMG, along with other activities.


              Software-Defined Data Infrastructure Essentials SDDI SDDC


One of the activities I participated in while at VMworld in Las Vegas was a book signing event at the VMware bookstore for my new book Software Defined Data Infrastructure Essentials (CRC Press), available at Amazon and other global venues.


September has been a busy month pertaining to data infrastructure, including server storage I/O related trends, activities, news, perspectives and related topics, so let's have a look at them.

              In This Issue

              Enjoy this edition of the Server StorageIO data infrastructure update newsletter.

              Cheers GS

              Data Infrastructure and IT Industry Activity Trends

              Some recent Industry Activities, Trends, News and Announcements include:

The month started out with VMworld in Las Vegas (e.g. one of the event bookends for the month). Rather than a long list of announcements in this newsletter, check out this StorageIOblog post covering VMworld, VMware and Dell EMC and related news. As part of VMworld, VMware and Amazon Web Services (AWS) announced news about their partnership. AWS also had several other enhancements and new product announcements during September that can be found in this StorageIOblog post here.


AWS, Dell EMC and VMware were not the only ones making news or announcements during September. NVMe based storage startup Apeiron has announced a Splunk appliance to boost log and analytics processing performance. Gigamon has extended its public cloud monitoring, insight, awareness and analytics capabilities, including support for Microsoft Azure.

For those looking for the latest new emerging data infrastructure vendors to watch, add Vexata to your list of NVMe based storage systems. Vexata talks a lot about NVMe, in particular for their backend (e.g. where data is stored on NVM based devices accessed via NVMe), while access to their storage system is via traditional Fibre Channel (FC) or emerging NVMe over fabric.


Longtime data infrastructure server and storage vendor HDS (Hitachi Data Systems) is no more (at least in name), having rebranded as Vantara, focusing on IoT and cloud analytics besides their traditional data center focus. Vantara combines what was HDS, the Hitachi Insight Group and Pentaho into a single, repackaged, refocused business unit effectively based on what was HDS.


Another longtime data infrastructure solution and service provider, IBM, announced a new Linux only zSeries (ZED) mainframe solution. Some might think the mainframe is dead; others that it can only run Linux as a guest in a virtual machine. On the other hand, some might recall that there are native Linux implementations on the ZED, including Ubuntu among others.


Also note that while IBM zOS mainframe operating systems use FICON for storage access, native ZED Linux systems can use open systems based Fibre Channel (FC), e.g. SCSI command set protocols. Is ZED based Linux for everybody or every environment? Probably not; however, for those who have large-scale Linux needs, it might be worth a look to do a total cost of ownership analysis. If nothing else, do your homework and play your cards right, and you might have some negotiating leverage with the x86 based server crowd.
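Such a TCO analysis boils down to totaling acquisition plus operating costs per workload over a planning horizon. A minimal sketch (the dollar figures in the usage example would be entirely hypothetical; plug in your own quotes):

```python
def tco_per_workload(acquisition: float, annual_opex: float,
                     years: int, workloads: int) -> float:
    """Total cost of ownership per hosted workload over a planning horizon.
    acquisition: up-front hardware/software cost
    annual_opex: yearly power, space, licensing and admin cost
    """
    return (acquisition + annual_opex * years) / workloads
```

Comparing a single large consolidated system against a fleet of smaller servers is then just two calls with your own numbers, which keeps the negotiation grounded in totals rather than list prices.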


Cloud storage gateway vendor Nasuni has landed another $38 million USD in funding; hopefully that will enable them to start landing some new and larger customer revenues, growing their business. Meanwhile storage startup Qumulo has announced extending their global file fabric namespace to span AWS.


Attala Systems has announced next generation software defined storage for telco data infrastructure environments. Percona has added an experimental release of their MySQL engine, enhancing performance for high volume, write intensive workloads along with improved cost effectiveness.
Software defined storage vendor DataCore announced enhancements to support fast databases for online transaction processing (OLTP) along with analytics. Meanwhile Linux provider SUSE continues to expand its software defined storage story based around Ceph. Panasas has enhanced its scale out, high performance cluster file system global namespace for HPC environments with 20 PByte support. Another longtime storage vendor, X-IO (formerly known as Xiotech), announced the 4th generation of their Intelligent Storage Element (ISE).


September wrapped up with the Microsoft Ignite conference along with many updates, enhancements and new features for Azure, Azure Stack and Windows among others. Read more about those and other Microsoft September announcements here in this StorageIOblog post.

Check out other industry news, comments, and trends perspectives here.

              Server StorageIO Commentary in the news

              Recent Server StorageIO industry trends perspectives commentary in the news.

              Via CDW: Comments on Is Your Network About To Fail?
              Via EnterpriseStorageForum: Comments on Data Storage and Big Data Analytics
                  Via InfoGoto: Comments on Cloud FOMO (Fear of missing out)
                  Via InfoGoto: Comments on Building a Modern Data Strategy
                  Via InfoGoto: Comments on the future of Multi-Cloud Computing
                  Via InfoGoto: Comments on AI, Machine Learning and Data management
                  Via InfoGoto: Comments on Your riskiest data might be in plain sight
                  Via InfoGoto: Comments on Data Management Too Much To Handle
                Via InfoGoto: Comments on Google Cloud Platform Gaining Data Storage Momentum
                Via InfoGoto: Comments on Singapore High Rise Data Centers
                Via InfoGoto: Comments on New Tape Storage Capacity
                Via EnterpriseStorageForum: Comments on 8 ways to save on cloud storage
                Via EnterpriseStorageForum: Comments on Google Cloud Platform and Storage

              View more Server, Storage and I/O trends and perspectives comments here

              Server StorageIOblog Posts

              Recent and popular Server StorageIOblog posts include:

              In Case You Missed It #ICYMI

              View other recent as well as past StorageIOblog posts here

              Server StorageIO Data Infrastructure Tips and Articles

              Recent Server StorageIO industry trends perspectives commentary in the news.

              Via EnterpriseStorageForum: Comments on Who Will Rule the Storage World?

              View more Server, Storage and I/O trends and perspectives comments here

              Server StorageIO Recommended Reading (Watching and Listening) List

              In addition to my own books including Software Defined Data Infrastructure Essentials (CRC Press 2017), the following are Server StorageIO recommended reading, watching and listening list items. The list includes various IT, Data Infrastructure and related topics.


              Intel Recommended Reading List (IRRL) for developers is a good resource to check out.

It's October, which means it is also Blogtober; check out some of the blogs and posts occurring during October here.


Preston de Guise, aka @backupbear, author of several books, has an interesting new site that looks at topics including ethics in IT among others. Check out his new book Data Protection: Ensuring Data Availability (CRC Press 2017).


              Brendan Gregg has a great site for Linux performance related topics here.


Greg Knieriemen has a must read weekly blog post and column collection of what's going on in and around IT and data infrastructure related industries. Check it out here.


Interested in file systems, CIFS, SMB, Samba and related topics? Then check out Chris Hertel's book on implementing CIFS here.


For those involved with VMware, check out Frank Denneman's VMware vSphere 6.5 host resources deep-dive book here.


I often mention in presentations a must have for anybody involved with software defined anything, or programming for that matter: the Niklaus Wirth classic Algorithms + Data Structures = Programs, which you can get here.


Another great book to have is Seven Databases in Seven Weeks, which not only provides an overview of popular NoSQL databases such as Cassandra, MongoDB and HBase among others, but also includes lots of good examples and hands on guides. Get your copy here.


Watch for more items to be added to the bookshelf soon.

              Events and Activities

              Recent and upcoming event activities.

              Nov. 2, 2017 - Webinar - Modern Data Protection for Hyper-Convergence
              Sep. 21, 2017 - MSP CMG - Minneapolis MN
              Sep. 20, 2017 - Webinar - BC, DR and Business Resiliency (BR) tips
              Sep. 14, 2017 - Fujifilm IT Executive Summit - Seattle WA
              Sep. 12, 2017 - SNIA Software Developers Conference (SDC) - Santa Clara CA
              Sep. 7, 2017 - Wipro SDX - Enabling, Planning Your Software Defined Journey
              August 28-30, 2017 - VMworld - Las Vegas

              See more webinars and activities on the Server StorageIO Events page here.

              Useful links and pages:
              Microsoft TechNet - Various Microsoft related from Azure to Docker to Windows
     - Various industry links (over 1,000 with more to be added soon)
     - Cloud and object storage topics, tips and news items
     - Various OpenStack related items
     - Various presentations and other download material
     - Various data protection items and topics
     - Focus on NVMe trends and technologies
     - NVM and Solid State Disk topics, tips and techniques
     - Various CI, HCI and related SDS topics
     - Various server, storage and I/O  benchmark and tools
              VMware Technical Network - Various VMware related items

              Ok, nuff said, for now.