
Edit the registry key HKLM\System\CurrentControlSet\Services\ADAM_VMwareVCMSDS\Parameters: the SSL Port value is created as a REG_SZ instead of a REG_DWORD, and the value is empty.

 

You need to restart the Active Directory Web Services service as well as the VMwareVCMSDS service.
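A minimal sketch of the fix from an elevated command prompt, assuming the value name shown above, the default LDAPS port 636, and the usual service short names (verify all three in your environment before applying):

C:\> rem delete the empty REG_SZ value and recreate it as a REG_DWORD
C:\> reg delete "HKLM\System\CurrentControlSet\Services\ADAM_VMwareVCMSDS\Parameters" /v "SSL Port" /f
C:\> reg add "HKLM\System\CurrentControlSet\Services\ADAM_VMwareVCMSDS\Parameters" /v "SSL Port" /t REG_DWORD /d 636
C:\> rem restart the two services mentioned above
C:\> net stop ADWS & net start ADWS
C:\> net stop VMwareVCMSDS & net start VMwareVCMSDS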

 

Find more information in the link:

Active Directory Web Services encountered an error while reading the settings for the specified Active Directory Lightwe…


Get your BINGO on! in CloudCred

Posted by CloudCredManager Aug 24, 2015

Want to get in on a good game of BINGO?

Like to win a nice, new Mophie Charger?

 

Then get to VMworld 2015!

Join the game in the Hands-On Labs venue, with CloudCredibility.com hosting and awarding the prizes.

 

Stop by the CloudCred booth in the Hands-On Lab Connect Area (the pre-lab area, Moscone South) and pick up your BINGO card.

 


 

Complete a row - up & down, side to side, or diagonally, by completing the corresponding CloudCred Tasks.

Turn in your BINGO card no later than Wednesday, 5pm.

5 cards with verified BINGOs will be selected Wednesday evening - and winners will be notified via email.

 

Winners can pick up their Mophie Charger any time Thursday morning!

You have to be a CloudCred member to play, so sign up now at CloudCredibility.com.

 


Employees of VMware are not eligible for contest prizes. Players must be present to win.

Awarding multiple prizes to a single player is at the discretion of CloudCred staff.

Updating the Docker RPM on Photon Linux

In a previous post, I covered how Photon Linux manages RPMs with tdnf, a yum-like tool:

RPM package management on Photon Linux (yum / tdnf)

 

This time, I will update the docker package using tdnf.

 

VMware Photon Linux 1.0 TP1 ships with Docker 1.5 by default.

root [ ~ ]# cat /etc/photon-release

VMware Photon Linux 1.0 TP1

root [ ~ ]# docker -v

Docker version 1.5.0, build a8a31ef

root [ ~ ]# docker version

Client version: 1.5.0

Client API version: 1.17

Go version (client): go1.4.1

Git commit (client): a8a31ef

OS/Arch (client): linux/amd64

Server version: 1.5.0

Server API version: 1.17

Go version (server): go1.4.1

Git commit (server): a8a31ef

 

First, check the RPM repositories that tdnf references.

The repository definitions are shown below.

The configuration format is the same as Yum's (the Yum repository files are used as-is).

root [ ~ ]# cat /etc/yum.repos.d/*.repo | grep -E "^\[|baseurl|enable"

[lightwave]

baseurl=https://dl.bintray.com/vmware/lightwave

enabled=0

[photon-extras]

baseurl=https://dl.bintray.com/vmware/photon_extras

enabled=1

[photon-iso]

baseurl=file:///media/cdrom/usr/src/photon/RPMS

enabled=1

[photon-updates]

baseurl=https://dl.bintray.com/vmware/photon_updates_1.0_TP1_x86_64

enabled=1

[photon]

baseurl=https://dl.bintray.com/vmware/photon_release_1.0_TP1_x86_64

enabled=1

 

Check which versions of the Docker package are available.

The currently installed version is docker 1.5.0-3 (@System).

The newest available version is docker 1.7.0-1 from the photon-extras repository.

root [ ~ ]# tdnf list docker

docker.x86_64                                1.5.0-3                     @System

docker.x86_64                                1.5.0-3                  photon-iso

docker.x86_64                                1.6.0-2              photon-updates

docker.x86_64                                1.7.0-1               photon-extras

docker.x86_64                                1.5.0-3                      photon

 

Update the Docker package.

root [ ~ ]# tdnf update docker

Upgrading:

docker x86_64  1.7.0-1

Is this ok [y/N]:y

Downloading 4450014.00 of 4450014.00

Testing transaction

Running transaction

 

The Docker RPM has now been updated.

However, updating the RPM by itself does not restart the Docker service (daemon).

As a result, the Client and Server versions no longer match and an error is returned.

(The Docker client and daemon are at mixed versions, 1.5 / 1.7, so their API versions do not match either.)

root [ ~ ]# docker -v

Docker version 1.7.0, build 0baf609

root [ ~ ]# docker version

Client version: 1.7.0

Client API version: 1.19

Go version (client): go1.4.2

Git commit (client): 0baf609

OS/Arch (client): linux/amd64

Error response from daemon: client and server don't have same version (client : 1.19, server: 1.17)

 

Restart the Docker service with systemd.

Because the systemd unit file has changed on disk,

a systemctl daemon-reload is recommended first, as the warning below shows.

root [ ~ ]# systemctl restart docker

Warning: Unit file of docker.service changed on disk, 'systemctl daemon-reload' recommended.

 

Reload the unit files and restart Docker.

root [ ~ ]# systemctl daemon-reload

root [ ~ ]# systemctl restart docker

 

The Client and Server versions now match.

root [ ~ ]# docker version

Client version: 1.7.0

Client API version: 1.19

Go version (client): go1.4.2

Git commit (client): 0baf609

OS/Arch (client): linux/amd64

Server version: 1.7.0

Server API version: 1.19

Go version (server): go1.4.2

Git commit (server): 0baf609

OS/Arch (server): linux/amd64

 

That's all for this look at updating Docker on Photon.

Intro

As you may have found, the existing official documentation on vROPS remote collectors is pretty thin. As I was involved in a project to get this all going in an Enterprise setting, I thought I would share some documentation with you.

 

First of all here is the official doco: http://pubs.vmware.com/vrealizeoperationsmanager-6/index.jsp#com.vmware.vcom.core.doc/GUID-83164C8C-45FA-41C2-B4E0-F0BE86CF4B34.html

 

And here is a good post about some questions you may have: http://virtsanity.com/2015/05/vrealize-operations-manager-6-remote-collector-information/

 

If you have not worked with vROPS 6 yet there is a good book I would recommend: https://www.packtpub.com/virtualization-and-cloud/mastering-vcenter-operations-manager

Architecture background

The deployment we are looking at is vROPS 6.0.2 with vRIN 5.8.4 (vRealize Infrastructure Navigator, formerly vCenter Infrastructure Navigator, VIN) and with SRM integration of vRIN.

The idea is to have a central vROPS cluster and then use remote collectors to get data from other vCenters that are dispersed throughout the world.

The main site consists of a vROPS Master, a Replica and a Data node. vROPS is connected to the local vCenter (Protected site) as well as to the vRIN instance that is paired with the same vCenter. vRIN is configured to collect information from the VMs as well as from SRM.

Each remote site has a remote collector that is paired with the remote vCenter (Protected site) as well as with the local vRIN instance. vRIN is configured to collect information from the VMs as well as from SRM.

 

 

vROPS/vRIN and SRM

Using vROPS and SRM together is something that needs to be discussed. Some people assume that they can monitor the VMs that fail over from the protected vCenter to the recovery vCenter and that it will all magically work. This is not the case.

Each VM (in fact, every object) in vCenter has its own unique MoRef (managed object reference), so even if a VM has the same name in the Protected vCenter as in the Recovery vCenter, it is a different object. When SRM protects a VM, it creates a placeholder VM on the recovery site. This placeholder VM is basically only the VMX file and has no VMDKs attached to it; SRM will furnish the VM with VMDKs at the time of recovery.
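To make the MoRef point concrete, here is a hedged PowerCLI sketch (the vCenter names and VM name are hypothetical) showing that a same-named VM is a different object in each vCenter:

PowerCLI C:\> Connect-VIServer vcenter-protected.example.com
PowerCLI C:\> Get-VM -Name "app01" | Select-Object Name, Id
# e.g. app01  VirtualMachine-vm-42
PowerCLI C:\> Connect-VIServer vcenter-recovery.example.com
PowerCLI C:\> Get-VM -Name "app01" -Server vcenter-recovery.example.com | Select-Object Name, Id
# e.g. app01  VirtualMachine-vm-1071 (a different MoRef, hence a different object to vROPS)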

So if you connect a vROPS instance to both the Protected and the Recovery site, vROPS will see two different VMs (each with the same name). One will be actively monitored and the other one is powered off. However, you have just wasted a vROPS VM license on an essentially dead VM. The placeholder VM on the recovery site shouldn't be powered on; if it is... you are in a DR scenario.

 

So what would be the benefit of a vROPS in DR?

The only benefit would be the ability to use the data that is collected from the point at which the placeholder VM is started. You could use the troubleshooting options as well as some of the views, etc. But all forecasts will be unusable. Remember that vROPS needs at least 3 weeks of data collection to make accurate future predictions.

 

In my personal opinion, vROPS in DR is just a waste of licensing and space. I cannot see any real benefits. Please feel free to correct me.

 

vRIN - SRM integration

For vRIN to be integrated with SRM, the user must have permissions on the PAIRED SRM instance, meaning the user needs permissions on vCenter as well as on the SRM instance that this vCenter is paired with. As for the role, that's a bit tricky as there isn't really any great doco about it. For vCenter I successfully used the read rights plus Virtual machine | Interaction | Console interaction | Guest operating system management by VIX API. For SRM I haven't really tested it that much and used the Admin role; probably there is a better solution.

 

Open Ports

The following figures show all the network ports that need to be in place for vROPS and vRIN with regard to the above scenario.

 

 

 

Deployment

There are heaps of posts about how to deploy and configure vROPS and vRIN, so I will not cover this.

We will focus on deploying and configuring the remote collectors.

 

Deploy the vROPS Remote Collector

  • Deploy the vROPS OVA using the vSphere Web Client (the fat client can be used, but you shouldn't)
  • Fill out the deployment tool as usual
  • Choose the Remote Collector (Standard or Large) for deployment
  • Choose the time zone where the remote collector is placed
  • Deploy and power on the VM
  • Wait for the VM to be ready (the VM console shows the IP etc.)

 

Adding Remote collector to Cluster

  • Open a web browser and connect to the IP or FQDN of the remote collector
  • Click on Expand Existing Installation
  • Enter the node's name (maybe create a naming standard!)
  • Select Remote Collector
  • Enter the FQDN of the master node and click on Validate
  • Accept the certificate
  • Enter the vROPS admin password
  • Wait until the config is done; this may take some time (10 minutes plus).
  • Click on "Finish Adding Nodes"; the remote collector should now show Online and Powered On.
    You can also do this step through the /admin interface on the Master node.
  • Logout

 

Add remote sources to Solutions

  • Login to the vROPS UI
  • Go to Solutions, mark VMware vSphere, then click on the gears icon (Configure)
  • Mark vCenter Adapter and then select the green + to add a new instance
  • Give it a display name and description. Make sure that you have a good naming standard, as it's important that you can identify which instance is connected to what, using which remote collector.
  • Enter the FQDN of the vCenter as: https://[vCenter FQDN]/sdk
  • We need to select how we connect to this instance. Expand Advanced Settings and select the remote collector that you want to use to connect to this instance of vCenter.
  • You may want to create a new credential for this connection
  • Click on Test. If that works, click on Save Settings
  • Accept the SSL certs
  • Repeat the same for the Python Adapter (also using the remote collector in the advanced settings)
  • Repeat the above for the vRealize Infrastructure Navigator solution (also using the remote collector in the advanced settings)

Storage I/O trends

Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates

I attended the Flash Memory Summit in Santa Clara, CA last week, and not surprisingly there were many announcements about Non-Volatile Memory (NVM) along with related enabling technologies. Some of these announcements were component based, intended for original equipment manufacturers (OEMs) ranging from startups to established companies, systems integrators (SIs) and value added resellers (VARs), while others were more customer solution focused. From a customer solution focus, some of the technologies were consumer oriented while others were for business, and some for cloud scale service providers.

Recent NVM, NVMe and Flash SSD news

A sampling of some recent NVM, NVMe and Flash related news includes among others:

  • PMC Announces Flashtec NVMe SSD NVMe2106, NVMe2032 Controllers  (Via TomsITpro)
  • New SATA SSD powers elastic cloud agility for CSPs (Via Cbronline)
  • Toshiba Solid-State Drive Family Features PCIe Technology (Via Eweek)
  • SanDisk aims CloudSpeed Ultra SSD at cloud providers (Via ITwire)
  • Everspin & Aupera show all-MRAM Storage Module in M.2 Form Factor (Via BusinessWire)
  • Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) for servers and storage (part I, part II and part III)
  • PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
  • Seagate Grows Its Nytro Enterprise Flash Storage Line (Via InfoStor)
  • New SAS Solid State Drive First Product From Seagate Micron Alliance (Via Seagate)
  • Wow, Samsung's New 16 Terabyte SSD Is the World's Largest Hard Drive (Via Gizmodo)
  • Samsung ups the SSD ante with faster, higher capacity drives (Via ITworld)

NVMe primer

Via Intel: history of memory (see the Intel site)

NVM includes technologies such as NAND flash, commonly used in Solid State Devices (SSDs) today, as well as in USB thumb drives, mobile and hand-held devices among many other uses. NVM spans servers, storage and I/O devices, along with mobile and handheld among many other technologies. In addition to NAND flash, other forms of NVM include Non-Volatile Random Access Memory (NVRAM) and Read Only Memory (ROM), along with some emerging new technologies including the recently announced Intel and Micron 3D XPoint among others.

Server Storage I/O access and NVM
        Server Storage I/O memory (and storage) hierarchy

Keep in mind that memory is storage and storage is persistent memory, and that there are different classes, categories and tiers of memory and storage as shown above to meet various performance, availability, capacity and economic requirements. Besides NVM, ranging from flash to NVRAM to emerging 3D XPoint among others, another topic that is gaining momentum is NVM Express (NVMe). NVMe (more material at www.thenvmeplace.com) is a new server storage I/O access method and protocol for fast access to NVM based products. NVMe is an alternative to existing block based server storage I/O access protocols such as AHCI/SATA and SCSI/SAS, commonly used for accessing Hard Disk Drives (HDDs) along with SSDs among other things.

Server Storage I/O NVMe PCIe SAS SATA AHCI
  Comparing AHCI/SATA, SCSI/SAS and NVMe all of which can coexist to address different needs.

Leveraging the common PCIe hardware interface, NVMe based devices (those that have an NVMe controller) can be accessed via various operating systems (and hypervisors such as VMware ESXi) with either in-box drivers or optional third-party device drivers. Devices that support NVMe can be packaged in the 2.5" drive form factor using a converged 8637/8639 connector (e.g. PCIe x4), coexisting with SAS and SATA devices, or as add-in card (AIC) PCIe cards supporting x4, x8 and other implementations. Initially NVMe is being positioned as a back-end interface in servers (or storage systems) for accessing fast flash and other NVM based devices.
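As a quick illustration, on a Linux host with an in-box NVMe driver you can check whether such a device is visible (a minimal sketch; device numbering varies):

# lspci | grep -i "non-volatile"
02:00.0 Non-Volatile memory controller: (vendor and model appear here)
# ls /dev/nvme*
/dev/nvme0  /dev/nvme0n1  (the controller and its namespace block device)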

NVMe as back-end storage
        NVMe as a "back-end"  I/O interface in a server or storage system accessing NVM storage/media devices

NVMe as front-end server storage I/O interface
NVMe as a "front-end" interface for servers (or storage systems/appliances) to use NVMe based storage systems

NVMe has also been shown to work over low latency, high-speed RDMA based network interfaces including RoCE (RDMA over Converged Ethernet) and InfiniBand (read more here, here and here involving Mangstor, Mellanox and PMC among others). What this means is that, like SCSI based SAS, which can be both a back-end drive (HDD, SSD, etc.) access protocol and interface, NVMe can, in addition to being used for the back-end, also be used as a front-end server-to-storage interface, much like how Fibre Channel SCSI_Protocol (aka FCP), SCSI based iSCSI, and SCSI RDMA Protocol via InfiniBand (among others) are used.

Shared external PCIe using NVMe
NVMe and shared PCIe

NVMe features

Main features of NVMe include among others:

  • Lower latency due to improved drivers and increased queues (and queue sizes)
  • Lower CPU overhead to handle larger numbers of I/Os (more CPU available for useful work)
  • Higher I/O activity rates (IOPS) to boost productivity and unlock the value of fast flash and NVM
  • Bandwidth improvements leveraging fast PCIe interfaces and available lanes
  • Dual-pathing of devices, similar to what is available with dual-path SAS devices
  • Unlocking the value of more cores per processor socket and software threads (productivity)
  • Various packaging options, deployment scenarios and configuration options
  • Appears as a standard storage device on most operating systems
  • Plug-and-play with in-box drivers on many popular operating systems and hypervisors

Watch for more about NVMe as it continues to gain in both industry adoption and deployment as well as customer adoption and deployment.

Where to read, watch and learn more

  • NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
  • What should I consider when using SSD cloud? (Via SearchCloudStorage)
  • MSP CMG, September 2014 Presentation (Flash back to reality: Myths and Realities, Flash and SSD industry trends perspectives plus benchmarking tips) - PDF
  • Selecting Storage: Start With Requirements (Via NetworkComputing)
  • Spot The Newest & Best Server Trends (Via Processor)
  • Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) for servers and storage (part I, part II and part III)
  • Market ripe for embedded flash storage as prices drop (Via Powermore (Dell))
  • Continue reading more about NVM, NVMe, NAND flash, SSD, server and storage I/O related topics at www.thessdplace.com, as well as about I/O performance, monitoring and benchmarking tools at www.storageperformance.us.

Storage I/O trends

What this all means and wrap up

The question is not whether NVM is in your future, it is! Instead the questions are: what type of NVM (including NAND flash among other mediums) will be deployed where, using what type of packaging or solutions (drives, cards, systems, appliances, cloud), for what role (as storage, primary memory, persistent cache), and how much, among others. For some environments the solution is already, or will be, All NVM Arrays (ANA), All Flash Arrays (AFA) or All SSD Arrays (ASA), while for others the home run will be hybrid based solutions that work for you, fitting in and adapting to your environment as it changes.

 

Also keep in mind that a little bit of fast memory, including NVM based flash among others, in the right place can have a big benefit. My experience using flash enabled NVMe devices on Windows and Linux systems is that you can see lower response times at higher IOPS, yet also with lower CPU consumption, particularly when compared to 6Gbps SATA. Likewise, bandwidth can easily be pushed to the limits of the NVMe device, as well as of the PCIe interface being used, such as x4 or x8, depending on implementation. That is also a warning and something to watch out for when comparing apples to oranges: while NVMe uses PCIe, understand when looking at different results whether those are for x4, x8 or faster PCIe, as the mere presence of PCIe does not mean you are running at full potential.

 

Keep an eye on NVMe as a new high-speed, low-latency server storage I/O access protocol for unlocking the full performance capabilities of fast NVM based storage, as well as leveraging the multiple cores in today's fast processors. Does this mean AHCI/SATA or SCSI/SAS are now dead? Some will claim that; however, at least near-term, for the next few years (if not longer), those interfaces will continue to be used where they make sense, as well as where they can save dollars, specifically for cost sensitive, high-capacity environments that do not need the full performance of NVMe just yet.

 

As for the Flash Memory Summit event in Santa Clara, it was a good day with time well spent in briefings, meetings, demos and ad hoc discussions on the expo floor.

Ok, nuff said

Cheers
Gs

Storage I/O trends

Some August 2015 Amazon Web Services (AWS) and Microsoft Azure Cloud Updates

Cloud service providers continue to extend their features, functions and capabilities, and the following are two examples. Being a customer of both Amazon Web Services (AWS) and Microsoft Azure (among others), I receive monthly news updates about service improvements along with new features. Here are a couple of examples involving recent updates from AWS and Azure.

Azure enhancements

Microsoft Azure customer update

Azure Premium Storage generally available in Japan East

Solid State Device (SSD) based Azure Premium Storage is now available in the Japan East region. Add up to 32 TB and more than 64,000 IOPS (read operations) per virtual machine with Azure Premium Storage. Learn more about Azure storage and pricing here.

Azure Data Factory generally available

Data Factory is a cloud based data integration service for the automated management, movement and transformation of data. Learn more and view pricing options at http://azure.microsoft.com/en-us/pricing/details/data-factory.

AWS Partner Updates

A recent Amazon Web Services (AWS) customer update included the following items pertaining to partner storage solutions.

AWS partner updates

AWS Partner Network APN

Learn more about the AWS Partner Network (APN) here.

AWS APN competency programs include:

  • Storage
  • Healthcare
  • Life Sciences
  • SAP Solutions
  • Microsoft Solutions
  • Oracle Solutions
  • Marketing and Commerce
  • Big Data
  • Security
  • Digital Media

AWS Partner Network (APN) Solutions for Storage include:

Archiving to AWS Glacier

  • Commvault

 

  • NetApp (AltaVault)

Backup to AWS using S3

  • CloudBerry Lab
  • Commvault
  • Ctera
  • Druva
  • NetApp (AltaVault)

Primary cloud file and NAS storage complementing on-premises (e.g. your local) storage

  • Avere
  • Ctera
  • NetApp (Cloud OnTap)
  • Panzura
  • SoftNAS
  • Zadara

Secure File Transfer

  • Aspera
  • Signiant

Note that the above are the solutions listed on the AWS Storage Partner Page as of publication and are subject to change. Likewise, other solutions that are not part of the AWS partner program may not be listed.

Where to read, watch and learn more

Storage I/O trends

What this all means and wrap up

Cloud Service Providers (CSPs) continue to enhance their capabilities, as well as their footprints, as part of growth. In addition to technology, tools and the number of regions, sites and data centers, the CSPs are also expanding their partner networks, both in how many partners they have and in the scope of those partnerships. Some of these partnerships treat the cloud as a destination; others enable hybrid, where public clouds become an extension complementing traditional IT. Everything is not the same in most environments, and one type of cloud approach does not have to suit or fit all needs, hence the value of hybrid cloud deployment and usage.

Ok, nuff said, for now...

Cheers
Gs

Storage I/O trends

Supermicro CSE-M14TQC Use your media bay to add 12 Gbps SAS SSD drives to your server

Do you have a computer server, workstation or mini-tower PC that needs more 2.5" form factor hard disk drives (HDDs), solid state devices (SSDs) or hybrid flash drives added, yet has no expansion space?

Do you also want or need the HDD or SSD expansion slots to be hot swappable, supporting 6 Gbps SATA3 along with up to 12 Gbps SAS devices?

Do you have an available 5.25" media bay slot (e.g. where you can add an optional CD or DVD drive), or can you remove your existing CD or DVD drive and use USB for software loading?

Do you need to carry out the above without swapping out your existing server or workstation, on a reasonable budget, say around $100 USD plus tax, handling and shipping (your prices may vary)?

If you need to implement the above, then here is a possible solution, or in my case, a real solution.

Via StorageIOblog Supermicro 4 x 2.5 12Gbps SAS enclosure CSE-M14TQC
Supermicro CSE-M14TQC with hot swap canister before installing in one of my servers

In the past I have used a solution from StarTech that supports up to 4 x 2.5" 6 Gbps SAS and SATA drives in a 5.25" media bay form factor, installing these in my various HP, Dell and Lenovo servers to increase internal storage bays (slots).

Via Amazon.com StarTech SAS and SATA expansion
Via Amazon.com StarTech 4 x 2.5" SAS and SATA internal enclosure

I still use the StarTech device shown above (read earlier reviews and experiences here, here and here) in some of my servers, and it continues to be great for 6Gbps SAS and SATA 2.5" HDDs and SSDs. However, for 12 Gbps SAS devices I have used other approaches, including external 12 Gbps SAS enclosures.

Recently, while talking with the folks over at Servers Direct, I mentioned how I was using the StarTech 4 x 2.5" 6Gbps SAS/SATA media bay enclosure as a means of boosting the number of internal drives that could be put into some smaller servers. The Servers Direct folks told me about the Supermicro CSE-M14TQC which, after doing some research, I decided to buy to complement the StarTech 6Gbps enclosures, as well as external 12 Gbps SAS enclosures and other internal options.

What is the Supermicro CSE-M14TQC?

The CSE-M14TQC is a 5.25" form factor enclosure that enables four (4) 2.5" hot swappable (if your adapter and OS support hot swap) 12 Gbps SAS or 6 Gbps SATA devices (HDD and SSD) to fit into the media bay slot normally used by CD/DVD devices in servers or workstations. There is a single Molex male power connector on the rear of the enclosure that can be used to attach to your server's available power using applicable connector adapters. In addition, there are four separate drive connectors (e.g. SATA type connectors) that support up to 12 Gbps SAS per drive, which you can attach to your server motherboard (note that SAS devices need a SAS controller), HBA or RAID adapter internal ports.

Cooling is provided via a rear mounted 12,500 RPM 16 cubic feet per minute fan. Each of the four drives is hot swappable (requires operating system or hypervisor support) and contained in a small canister (provided with the enclosure). Drives easily mount to the canister via screws that are also supplied as part of the enclosure kit. There is also a drive activity and failure notification LED for the devices. If you do not have any available SAS or SATA ports on your server motherboard, you can use an available PCIe slot and add an HBA or RAID card for attaching the CSE-M14TQC to the drives, for example a 12 Gbps SAS (6 Gbps SATA) Avago/LSI RAID card, or a 6 Gbps SAS/SATA RAID card.
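Once cabled to an HBA, RAID card or motherboard ports, a Linux host can typically see a newly inserted drive after a SCSI bus rescan, without rebooting (a hedged sketch; the host number varies by system, and hot swap must be supported as noted above):

# echo "- - -" > /sys/class/scsi_host/host0/scan
# lsblk
(the new drive should appear as an additional block device, e.g. sdb)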

Via Supermicro CSE-M14TQC internal 12gbps SAS enclosure rear view
Via Supermicro CSE-M14TQC rear details (4 x SATA and 1 Molex power connector)

Via StorageIOblog Supermicro 4 x 2.5 rear view CSE-M14TQC 12Gbps SAS enclosure
CSE-M14TQC rear view before installation

Via StorageIOblog Supermicro CSE-M14TQC 12Gbps SAS enclosure cabling
CSE-M14TQC ready for installation with 4 x SATA (12 Gbps SAS) drive connectors and Molex power connector

Tip: In the case of the Lenovo TS140 that I initially installed the CSE-M14TQC into, there is not a lot of space for installing the drive connectors or Molex power connector to the enclosure. Instead, attach the cables to the CSE-M14TQC as shown above before installing the enclosure into the media bay slot. Simply attach the connectors as shown and feed them through the media bay opening as you install the CSE-M14TQC enclosure. Then attach the drive connectors to your HBA, RAID card or server motherboard and the power connector to your power source inside the server.

Note and disclaimer: pay attention to your server manufacturer's power loading and specifications, along with how much power will be used by the HDDs or SSDs to be installed, to avoid electrical power or fire issues due to overloading!

Via StorageIOblog Supermicro CSE-M14TQC enclosure Lenovo TS140
CSE-M14TQC installed into Lenovo TS140 empty media bay

Via StorageIOblog Supermicro CSE-M14TQC drive enclosure Lenovo TS140
CSE-M14TQC with front face plate installed on Lenovo TS140

Where to read, watch and learn more

Storage I/O trends

What this all means and wrap up

If you have a server that simply needs some extra storage capacity by adding some 2.5" HDDs, or a performance boost with fast SSDs, yet does not have any more internal drive slots or expansion bays, leverage your media bay. This applies to smaller environments where you might have one or two servers, as well as to environments where you want or need to create a scale-out software defined storage or hyper-converged platform using your own hardware. Another option: if you have a lab or test environment for VMware vSphere ESXi, Windows, Linux, OpenStack or other things, this can be a cost-effective approach to adding both storage space capacity and performance, while leveraging newer 12Gbps SAS technologies.

For example, create a VMware VSAN cluster using smaller servers such as the Lenovo TS140 or equivalent, where you can install a couple of 6TB or 8TB higher capacity 3.5" drives in the internal drive bays, then add a couple of 12 Gbps SAS SSDs along with a couple of 2.5" 2TB (or larger) HDDs, a RAID card, and a high-speed networking card. If VMware VSAN is not your thing, how about setting up a Windows Server 2012 R2 failover cluster including Scale Out File Server (SOFS) with Hyper-V, or perhaps OpenStack or one of many other virtual storage appliances (VSAs) or software defined storage, networking or other solutions? Perhaps you need to deploy more storage for a big data Hadoop based analytics system, or a cloud or object storage solution? On the other hand, if you simply need to add some storage to your storage, media, gaming or general purpose server, the CSE-M14TQC can be an option along with various external solutions.

Ok, nuff said

Cheers
Gs

Storage I/O trends

Breaking the VMware ESXi 5.5 ACPI boot loop on Lenovo TD350

Do you have a Lenovo TD350, or for that matter many other servers, that when trying to load or run VMware vSphere ESXi 5.5 U2 (or other versions) runs into a boot loop at the "Initializing ACPI" point?

Lenovo TD350 server

VMware ACPI boot loop

The symptoms are that you see ESXi start its boot process, loading drivers and modules (e.g. black screen), then you see the yellow boot screen with Timer and Scheduler initialized, and at the "Initializing ACPI" point, ka-boom, a boot loop starts (e.g. the above process repeats after the system boots).

The fix is actually pretty quick and simple; finding it took a bit of time, trial and error.

There were of course the usual suspects, such as:

  • Checking the BIOS and firmware version of the motherboard on the Lenovo TD350 (checked this, however did not upgrade)
  • Making sure that the proper VMware ESXi patches and updates were installed (they were; this was a pre-built image from another working server)
  • Having the latest installation media if this was a new install (tried this as part of troubleshooting to make sure the pre-built image was ok)
  • Removing any conflicting devices (small diversion hint: if you have cloned a working VMware image to an internal drive, make sure it is removed to avoid same file system UUID errors; see the sketch after this list)
  • Booting into the BIOS and making sure that VT is enabled for the processor, that AHCI (as opposed to IDE or RAID) is enabled for any SATA drives, and that boot is set to Legacy vs. Auto (e.g. disable UEFI support), as well as verifying the boot order. Having been in auto mode for UEFI support for some other activity, this was easy to change; however it was not the magic silver bullet I was looking for.
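On the cloned-image hint above, here is a hedged sketch from the ESXi shell showing how you might spot a duplicate VMFS volume UUID while a cloned boot image is still attached:

~ # esxcli storage filesystem list
(compare the UUID column across the VMFS volumes listed; two volumes reporting the same UUID point to the cloned copy that should be removed)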

Breaking the VMware ACPI boot loop on Lenovo TD350

After doing some searching and coming up with some interesting and false leads, as well as trying several boots, BIOS configuration changes, and even cloning the good VMware ESXi boot image to an internal drive in case there was a USB boot issue, the solution was rather simple once found (or remembered).

Lenovo TD350 Basic BIOS settings
Lenovo TD350 BIOS basic settings

Lenovo TD350 processor BIOS settings
Lenovo TD350 processor settings

Make sure that in your BIOS setup under PCIE you disable "Above 4GB decoding".

Turns out that I had enabled "Above 4GB decoding" for some other things I had done.

Lenovo TD350 fix VMware ACPI error
Lenovo TD350 disabling above 4GB decoding on PCIE under advanced settings

Once I made the above change and pressed F10 to save the BIOS settings and boot, VMware ESXi had no issues getting past the ACPI initialization and the boot loop was broken.

Where to read, watch and learn more

  • Lenovo TS140 Server and Storage I/O lab Review
  • Lenovo ThinkServer TD340 Server and StorageIO lab Review
  • Part II: Lenovo TS140 Server and Storage I/O lab Review
  • Software defined storage on a budget with Lenovo TS140

 

Storage I/O trends

What this all means and wrap up

In this day and age of software defined focus, remember to double-check how your hardware BIOS (e.g. software) is configured for supporting various software defined server, storage, I/O and networking software for cloud, virtual, container and legacy environments. Watch for future posts with my experiences using the Lenovo TD350, including with Windows 2012 R2 (bare metal and virtual) and Ubuntu (bare metal and virtual) with various application workloads, among other things.

Ok, nuff said (for now)

Cheers
Gs

Storage I/O trends

Intel and Micron unveil new 3D XPoint Non Volatile Memory (NVM) for servers and storage

Intel Micron announce new 3D XPoint NVM memory

This is the first of a three-part series on the recent Intel and Micron 3D XPoint server storage memory announcement. Read Part II here and  Part III here.

In a webcast the other day, Intel and Micron announced new 3D XPoint non-volatile memory (NVM) that can be used both for primary main memory (e.g. what's in computers, servers, laptops, tablets and many other things) in place of Dynamic Random Access Memory (DRAM), and for persistent storage faster than today's NAND flash based solid state devices (SSDs), not to mention future hybrid usage scenarios. Note that while this announcement has the common term 3D in it, it is different from the earlier Intel and Micron announcement about 3D NAND flash (read more about that here).

Twitter hash tag  #3DXpoint

The big picture, why this type of NVM technology is needed

Server and Storage I/O trends

  • Memory is storage and storage is persistent memory
  • There is no such thing as a data or information recession; more data is being created, processed and stored
  • Increased demand is also driving density along with convergence across server storage I/O resources
  • Larger amounts of data need to be processed faster (large amounts of little data and big fast data)
  • Fast applications need more and faster processors, memory and I/O interfaces
  • The best server or storage I/O is the one you do not need to do
  • The second best I/O is the one with the least impact or overhead
  • Data needs to be close to processing, and processing needs to be close to the data (locality of reference)


Server Storage I/O memory hardware and software hierarchy along with technology tiers

What did Intel and Micron announce?

Intel SVP and General Manager of the Non-Volatile Memory Solutions Group Robert Crooke (left) and Micron CEO D. Mark Durcan did the joint announcement presentation of 3D XPoint (webinar here). What was announced is the 3D XPoint technology jointly developed and manufactured by Intel and Micron, a new form or category of NVM that can be used both for primary memory in servers, laptops and other computers among other uses, as well as for persistent data storage.


Robert Crooke (Left) and Mark Durcan (Right)

Summary of 3D XPoint announcement

  • New category of NVM for servers and storage
  • Joint development and manufacturing by Intel and Micron in Utah
  • Non-volatile, so it can be used for storage or persistent server main memory
  • Allows NVM to scale with data, storage and processor performance
  • Leverages capabilities of both Intel and Micron, who have collaborated in the past
  • Performance: Intel and Micron claim up to 1000x faster vs. NAND flash
  • Availability: persistent NVM (unlike DRAM) with better durability (life span) vs. NAND flash
  • Capacity: densities about 10x better vs. traditional DRAM
  • Economics: cost per bit between DRAM and NAND (depending on packaging of resulting products)

What applications and products is 3D XPoint suited for?

In general, 3D XPoint should be able to be used for many of the same applications and associated products that current DRAM and NAND flash based storage memories are used for. These range from IT and cloud or managed service provider data center based applications and services to consumer focused uses, among many others.


3D XPoint enabling various applications

In general, applications or usage scenarios (along with supporting products) that can benefit from 3D XPoint include, among others: applications that need larger amounts of main memory in a denser footprint, such as in-memory databases, little and big data analytics, gaming, wave form analysis for security, copyright or other detection analysis, life sciences, high performance compute and high-productivity compute, energy, video and content serving, among many others.

In addition, there are applications that need persistent main memory for resiliency, or to cut the delays and impacts of planned or unplanned maintenance, or of having to wait for memories and caches to be warmed or re-populated after a server boot (or re-boot). 3D XPoint will also be useful for those applications that need faster read and write performance compared to current generation NAND flash for data storage. This means both existing and emerging applications, as well as some that do not yet exist, will benefit from 3D XPoint over time, much like how today's applications and others have benefited from DRAM in Dual Inline Memory Modules (DIMMs) and from NAND flash advances over the past several decades.

Where to read, watch and learn more

Storage I/O trends

What this all means and wrap up

First, keep in mind that this is very early in the 3D XPoint technology evolution life-cycle, and both DRAM and NAND flash will not be dead, at least near term. Keep in mind that NAND flash appeared back in 1989 and only over the past several years has finally hit its mainstream adoption stride, with plenty of market upside left. Continue reading Part II here and Part III here of this three-part series on Intel and Micron 3D XPoint, along with more analysis and commentary.

Disclosure: Micron and Intel have been direct and/or indirect clients in the past via third parties and partners; also, I have bought and used some of their technologies directly and/or indirectly via their partners.

Ok, nuff said (for now)

Cheers
Gs

Server Storage I/O trends

EMCworld 2015 How Do You Want Your Storage Wrapped?

Back in early May I was invited by EMC to attend EMCworld 2015, which included both the public sessions as well as several NDA based discussions. Keep in mind that there is the known, there is the unknown (or assumed or speculated), and in between there are NDAs, nuff said on that. EMC covered my hotel and registration costs to attend the event in Las Vegas (thanks EMC, that's a disclosure btw ;) and here is a synopsis of various EMCworld 2015 announcements.

What EMC announced

  • VMAX3 enhancements to the EMC enterprise flagship storage platform to keep it relevant for traditional legacy workloads as well as for in a converged, scale-out, cloud, virtual and software defined environment.
  • VNX 3200 entry-level All Flash Array (AFA) flash SSD system starting at $25,000 USD for a 3TB unified platform with full data services found in other VNX products.
  • vVNX aka Virtual VNX aka "project liberty", a community (e.g. free) software version of the VNX. vVNX is a Virtual Storage Appliance (VSA) that you download and run on a VMware platform. Learn more and download here. Note that the installer does a CPU type check, so forget about trying to run it on an Intel NUC or similar; I tried just because I could, and the installer will protect you from doing such things.
  • Various data protection related items including new Datadomain platforms as well as software updates and integration with other EMC platforms (storage systems).
  • All Flash Array (AFA) XtremIO 4.0 enhancements, including larger clusters and larger nodes to boost performance, capacity and availability, along with copy service updates among other improvements.
  • Preview of DSSD, a shared (inside a rack) external flash Solid State Device (SSD), including more details. While much of DSSD is still under NDA, EMC did provide more public details at EMCworld. Between what was displayed and announced publicly at EMCworld, as well as what can be found via Google (or other searches), you can piece together more of the DSSD story. What is known publicly today is that DSSD leverages the new Non-Volatile Memory express (NVMe) access protocol built upon underlying PCIe technology. More on DSSD in future discussions; if you have not done so, get an NDA deep dive briefing on it from EMC.
  • ScaleIO is now available via a free download here including both Windows and Linux clients as well as instructions for those operating systems as well as VMware.
  • ViPR can also be downloaded for free here (it has previously been available), and it has been placed into open source by EMC.

What EMC announced since EMCworld 2015

  • Acquisition of cloud services (and software tools) vendor Virtustream for $1.2B, adding to the federation cloud services portfolio (a companion to VMware vCloud Air).
  • Release of ECS 2.0, including a free download here. This new version of ECS (Elastic Cloud Storage) can be used independent of the ViPR controller, or in conjunction with ViPR. In addition, ECS now has about 80% of the functionality of the Centera object storage platform. The remaining 20% of Centera functionality (mainly regulatory compliance governance) will be added to ECS in the future, providing a migration path for Centera customers. In case you are wondering what EMC does with Centera, Atmos, ViPR and now ECS: ECS can work with or without ViPR, and the functionality of Centera and Atmos is being rolled into ECS. ECS, as a refresher, is software that transforms general purpose industry standard servers with direct storage into a scale-out HDFS and object storage solution.
  • Check out EMCcode, including S3motion which I use and have reviewed here. Also check out EMCcode Rex-Ray; if you are into Docker containers it should be of interest, I know I'm interested in it.

Server Storage I/O trends

What this all means and wrap-up

There were no single major explosive announcements; however, the sum of all the announcements together should not be overshadowed by the made for TV (or web) big tent productions and entertainment. What EMC announced was effectively: how would you like, and how do you want and need, your storage and associated data services along with management wrapped?

tin wrapped software

By being wrapped: do you want your software defined storage management and storage wrapped in a legacy turnkey solution such as VMAX3, VNX or Isilon? Do you want or need it to be hybrid or all flash, converged and unified, block, file or object?

software wrapped storage

Or do you need or want the software defined storage management and storage to be "shrink wrapped" as a download, so you can deploy it on your own hardware ("tin wrapped"), as a VSA ("virtual wrapped"), or "cloud wrapped"? Do you need or want the software defined storage management and storage to leverage anybody's hardware while being open source?

server storage software wrapping

How do you need or want your storage to be wrapped to fit your specific needs? That, IMHO, was the essence of what EMC announced at EMCworld 2015; granted, the motorcycles and other production entertainment were engaging as well as educational.

Ok, nuff said for now

Cheers
Gs

Server Storage I/O trends

VMware vCloud Air Server StorageIOlab Test Drive with videos

Recently I was invited by VMware vCloud Air to do a free hands-on test drive of their actual production environment. Some of you may already be using VMware vSphere, vRealize and other software defined data center (SDDC), aka Virtual Server Infrastructure (VSI) or Virtual Desktop Infrastructure (VDI), tools among others. Likewise, some of you may already be using one of the many cloud compute or Infrastructure as a Service (IaaS) offerings such as Amazon Web Services (AWS) Elastic Cloud Compute (EC2), Centurylink, Google Cloud, IBM Softlayer, Microsoft Azure, Rackspace or Virtustream (being bought by EMC) among many others.

VMware vCloud Air provides a platform, similar to those just mentioned among others, where your applications and their underlying resource needs (compute, memory, storage, networking) can be fulfilled. In addition, it should not be a surprise that VMware vCloud Air shares many common themes, philosophies and user experiences with the traditional on-premises VMware solutions you may be familiar with.

VMware vCloud Air overview

You can give VMware vCloud Air a trial for free while the offer lasts by clicking here (service details here). Basically, if you click on the link and register a new account for using VMware vCloud Air, they will give you up to $500 USD in service credits to use in the real production environment while the offer lasts, which iirc is through the end of June 2015.

Server StorageIO test drive VMware vCloud Air video I
Click on above image to view video part I

Give VMware vCloud Air a test drive to see what it can do for you, as opposed to what you can do for it...

Ok, nuff said for now

Cheers
Gs

Server Storage I/O trends

Modernizing Data Protection = Using new and old things in new ways

This is part of an ongoing series of posts, part of www.dataprotectiondiaries.com, on data protection including archiving, backup/restore, business continuance (BC), business resiliency (BR), data footprint reduction (DFR), disaster recovery (DR) and High Availability (HA), along with related themes, tools, technologies, techniques, trends and strategies.

data protection trends

Keep in mind that a fundamental goal of an Information Technology (IT) organization is to protect, preserve and serve data and information in a cost-effective as well as productive way when needed. There is no such thing as an information recession; more data is being generated and processed. In addition to there being more of it, data is also getting larger, has more dependencies on it being available, and is living longer (e.g. retention).

Proof Points, No Data or Information Recession

A quick, easy proof point of there being more data, and of it getting larger, is your cell phone and the pictures it takes. Compare the size of those photos today to what you had in your previous generation of smart phone or even digital camera, as the megapixels (e.g. resolution and size of data) increased, along with the size of media (e.g. storage) needed to save them. Another proof point: look at your presentations, documents, web sites and other mediums and how much rich or unstructured content (e.g. photos, videos) exists in those now vs. a few years ago. Yet another proof point: look at your structured little data databases, how there are more rows and columns, and how some of those columns have gotten larger or point to external "blobs" or "objects" that have also gotten larger.

Industry trends and challenges

There has been industry buzz the past several years around data protection modernization, modernizing data protection, or simply modernizing backup, along with modernizing your data and information infrastructure. Many of these conversations focus on swapping out an older technology in favor of whatever the new industry buzzword trend is (e.g. swap tape for disk, disk for cloud), or perhaps on swapping one data protection, backup, archive or copy tool for another. Some of these conversations also focus on swapping legacy for virtual, cloud or some other variation of software defined marketing.

Data protection strategy

The opportunity to do new things

What is common with all the above is basically swapping out one technology, tool, medium or technique for another new one, yet using it in old ways. For example, tape gets swapped for disk, yet the same approach to when, where, why, how often and what gets copied or protected is left the same. Sure, some new tools and technologies get introduced. However, when was the last time you put the tools down, took a step back and revisited the fundamental questions of how and why you are doing data protection the way it is being done? When was the last time you thought about data protection as an asset or business enabler as opposed to a cost center, overhead or afterthought?

Data protection tool box
What's in your data protection toolbox, do you know what to use when?

What about modernizing beyond the tools

One of the challenges with modernizing is that there is a cost involved, including people time and staff skills as well as budgets, not to mention keeping things running. So how do you go about paying for any improvements? Sure, you can go get a data infrastructure or habitat-for-technology aka data home improvement loan; however, there are costs associated with that too.

Big data garbage in = big data garbage out

What about reducing data protection costs?

So why not self-fund the improvements and modernization activities by finding and removing costs, eliminating complexity vs. moving and masking issues? Part of this can be accomplished by simply revisiting whether you are treating all your applications and data the same from a data protection perspective. Are you providing a data protection service to your organization that is based on business wants or on business needs? For example, does the business want recovery time objective (RTO) 0 and recovery point objective (RPO) 0 for all applications, while it actually needs RTO 4 hours and RPO 15 minutes for application-a, while application-b requires RTO 12 hours and RPO of 2 hours, and application-c must have RTO 24 hours with RPO of 12 hours?

As a reminder, RTO is how much time you have, or how quickly you need your applications and data to be restored and made ready for use. RPO is the point in time to which data needs to be protected, or the amount of data (or time frame of data) that could be lost or missing. Thus RTO = 0 means instant recovery with no downtime, and RPO = 0 means no loss of data. An RTO of one day and RPO of ten (10) minutes means applications and their data are ready for use within 24 hours and no more than 10 minutes of data can be lost (e.g. the granularity of protection coverage). Also keep in mind that you can have various RTO and RPO combinations to meet your specific application along with business needs as part of a tiered data protection strategy implementation.
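For instance, an RPO of 15 minutes implies taking a protection copy at least every 15 minutes. Here is a minimal sketch using cron, where the snapshot script name is hypothetical and stands in for your tool of choice:

# crontab entry: snapshot application-a every 15 minutes to meet its RPO of 15 minutes
*/15 * * * * /usr/local/bin/snapshot-app-a.sh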

With RTO and RPO in mind, when was the last time you sat down with the business and applications people to revisit what they want vs. what they must have? From these conversations you can easily transition into how long to keep data and how many copies in what places, among other things, which in turn allows you to review data protection as well as start using both old and new technologies, tools and techniques in new ways.

Where to learn more

Learn more about data protection and related topics, themes, trends, tools and technologies  via the following links:

Server Storage I/O trends

What this all means and wrap-up

Data protection is a broad topic that spans from logical and physical security to HA, BC, BR, DR and archiving (including life beyond compliance), along with various tools, technologies and techniques. Key is aligning those to the needs of the business or organization for today's as well as tomorrow's requirements. Instead of doing things the way they have been done in the past, which may have been based on what was known or possible given the technology capabilities of the time, why not start using new and old things in new ways? Let's start using all the tools in the data protection toolbox, regardless of whether they are new or old, cloud, virtual, physical, software defined, product or service, in new ways, while keeping the requirements of the business in focus.

Keeping with the theme of protect, preserve and serve: for data protection to be modernized, it needs to become, and be seen as, a business asset or enabler vs. an afterthought or cost-overhead topic. Also keep in mind that only you can prevent data loss. Are your restores ready for when you need them? One of the fundamental goals of IT is to protect, preserve and serve information, including its applications as well as data, when and where needed, in a cost-effective way.

What say you?

    

Ok, nuff said for now

Cheers
Gs

Storage I/O trends

Data Protection Diaries: Are your restores ready for World Backup Day 2015?

This is part of an ongoing data protection diaries series of posts about, well, data protection, and what I'm doing pertaining to World Backup Day 2015.

In case you forgot or did not know, World Backup Day is March 31 2015 (@worldbackupday), so now is a good time to be ready. The only challenge that I have with World Backup Day (view their site here), which has gone on for a few years now, is that while it is a good way to call out the importance of backing up or protecting data, it is time to also put more emphasis and focus on making sure those backups or protection copies actually work.

By this I mean doing more than making sure that your data can be read from tape, disk, SSD or a cloud service: actually going a step further and verifying that restored data can actually be used (read, written, etc.).

The Problem, Issue, Challenge, Opportunity and Need

The problem, issue and challenge are simple: are your applications, systems and data protected, and can you use those protection copies (e.g. backups, snapshots, replicas or archives) when as well as where needed?

storage I/O data protection

The opportunity is simple: avoid downtime or impact to your business or organization by being proactive.

Understanding the challenge and designing a strategy

The following is my preparation checklist for World Backup Day 2015 (e.g. March 31 2015), which includes what I need or want to protect, as well as some other things to be done, including testing, verification, and addressing (remediating or fixing) known issues while identifying other areas for future enhancements. Thus, perhaps like yours, data protection for my environment, which includes physical, virtual and cloud spanning servers to mobile devices, is constantly evolving.

My data protection preparation, checklist and to do list

Finding a solution

While I already have a strategy, plan and solution that encompasses different tools, technologies and techniques, they are also evolving. Part of the evolution is to improve, while also exploring options to use new and old things in new ways, as well as to eat my own dog food and walk the talk vs. just talking the talk. The following figure provides a representation of my environment, which spans physical, virtual and clouds (more than one), and how different applications along with systems are protected against various threats or risks. Key is that not all applications and data are the same, thus enabling them to be protected in different ways as well as at various intervals. Needless to say, there is more to how, when, where and with what different applications and systems are protected in my environment than shown; perhaps more on that in the future.

server storageio and unlimitedio data protection
Some of what my data protection involves for Server StorageIO

Taking action

What I'm doing is going through my checklist to verify and confirm the various items on it, as well as to find areas for improvement, which is actually an ongoing process.

Do I find things that need to be corrected?

Yup; in fact I found something that, while it was not a problem, identified a way to improve a process that, once fully implemented, will enable more flexibility both if a restoration is needed and for general everyday use, not to mention removing some complexity and cost.

Speaking of lessons learned, check this out, which ties into why you want a 4 3 2 1 based data protection strategy.


Where to learn more

Here are some extra links to have a look at:

Data Protection Diaries
Cloud conversations: If focused on cost you might miss other cloud storage benefits
5 Tips for Factoring Software into Disaster Recovery Plans
Remote office backup, archiving and disaster recovery for networking pros
Cloud conversations: Gaining cloud confidence from insights into AWS outages (Part II)
Given outages, are you concerned with the security of the cloud?
Data Archiving: Life Beyond Compliance
My copies were corrupted: The 3-2-1 rule
Take a 4-3-2-1 approach to backing up data
Cloud and Virtual Data Storage Networks - Chapter 8 (CRC/Taylor and Francis)

What this all means and wrap-up

Be prepared and be proactive when it comes to data protection and business resiliency vs. simply reacting and recovering, hoping that all will be ok (or work).

Take a few minutes (or longer) and test your data protection, including backups, to make sure that you can do the following (a simple verification sketch follows this list):

a) Verify that they are in fact working, protecting applications and data in the way expected

b) Restore data to an alternate place (verify functionality as well as prevent a problem)

c) Actually use the data, meaning it is decrypted, inflated (un-compressed, un-deduped) and security certificates along with ownership properties are properly applied

d) Look at different versions or generations of protection copies if you need to go back further in time

e) Identify areas of improvement or find and isolate problem issues in advance vs. finding out after the fact
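As a minimal sketch of items a) through c), here is one way to spot-check a restore by comparing checksums of the live data against a copy restored to an alternate location. The paths shown are hypothetical examples, and the restore step itself is tool-specific:

# Record checksums of the live data set (path is a made-up example)
cd /data/projects
find . -type f -exec sha256sum {} \; | sort > /tmp/source.sha256

# Restore the protection copy to an alternate location with your backup tool
# (tool-specific step; restore to /restore-test rather than in place)

# Verify the restored files are readable and match the originals
cd /restore-test
sha256sum -c /tmp/source.sha256

A clean run of that final check confirms the restored copies are intact and readable; opening a sample of files with their native applications covers the rest of the "actually usable" test.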

Time to get back to work checking and verifying things as well as attending to some other items.

Ok, nuff said, for now...

Cheers gs


S3motion: Buckets, Containers, Objects, AWS S3, Cloud and EMCcode

It's springtime in Kentucky, and recently I had the opportunity to have a conversation with Kendrick Coleman to talk about S3motion, buckets, containers, objects, AWS S3, cloud and object storage, node.js, EMCcode and open source among other related topics. The conversation is available as a podcast here, or video here, and at StorageIO.tv.

In this Server StorageIO industry trends perspective podcast episode, @EMCcode (Part of EMC) developer advocate Kendrick Coleman (@KendrickColeman) joins me for a conversation. Our conversation spans spring-time in Kentucky (where Kendrick lives) which means Bourbon and horse racing as well as his blog (www.kendrickcoleman.com).

Btw, in the podcast I refer to Captain Obvious and Kendrick's beard; for those not familiar with who or what @Captainobvious is, click here to learn more.

@Kendrickcoleman & @Captainobvious

What about Clouds, Object Storage, Programming and other technical stuff?

Of course we also talk some tech including what is EMCcode, EMC Federation, Cloud Foundry, clouds, object storage, buckets, containers, objects, node.js, Docker, Openstack, AWS S3, micro services, and the S3motion tool that Kendrick developed.

Cloud and Object Storage Access (click to view video)

Kendrick explains the motivation behind S3motion along with trends in and around objects (including GET, PUT vs. traditional Read, Write) as well as programming among related topic themes and how context matters.
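To make the GET/PUT vs. read/write distinction concrete, here is a small sketch using the generic AWS CLI (the bucket and key names are made up for illustration): objects are written and retrieved whole via PUT and GET, rather than opened and read or written in place like a file.

# Object access: whole-object PUT and GET (names are hypothetical)
aws s3api put-object --bucket demo-bucket --key photos/cat.jpg --body cat.jpg
aws s3api get-object --bucket demo-bucket --key photos/cat.jpg cat-copy.jpg

# File access, by contrast, can read or update just part of a file in place
dd if=/mnt/nas/cat.jpg of=/dev/null bs=4k count=1    # read only the first 4KB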

S3motion for AWS S3, Google and object storage (click to listen to podcast)

I have used S3motion for moving buckets, containers and objects around, including between AWS S3, Google Cloud Storage (GCS) and Microsoft Azure, as well as to/from local storage. S3motion is a good tool to have in your server storage I/O tool box for working with cloud and object storage, along with others such as Cloudberry, S3fs, Cyberduck and S3 browser among many others.

You can get S3motion free from GitHub here.
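S3motion is a node.js tool with its own command syntax, which the GitHub page above documents. As a conceptual illustration of what it does, here is the equivalent bucket-to-bucket copy using the generic AWS CLI; the bucket and object names are hypothetical:

# Copy a single object between buckets (names are made-up examples)
aws s3 cp s3://source-bucket/backups/archive.tar s3://dest-bucket/backups/archive.tar

# Or sync an entire bucket or prefix
aws s3 sync s3://source-bucket s3://dest-bucket

The difference is that S3motion can also use non-AWS endpoints such as Google Cloud Storage or a local directory as the source or destination, which is what makes it handy for cloud-to-cloud moves.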


Where to learn more

Here are some links to learn more about AWS S3, cloud and object storage along with related topics

Also listen to Server StorageIO podcasts on iTunes.

What this all means and wrap-up

Context matters when it comes to many things, particularly objects, as they can mean different things. Tools such as S3motion make it easy to move your buckets or containers along with objects from one cloud storage system, solution or service to another. Also check out EMCcode to see what they are doing on different fronts, from supporting new and greenfield development with Cloud Foundry and PaaS to Openstack to bridging current environments to the next generation of platforms. Also check out Kendrick's blog site, as he has a lot of good technical content as well as some other fun stuff to learn about. I look forward to having Kendrick on as a guest again soon to continue our conversations. In the meantime, check out S3motion to see how it can fit into your server storage I/O tool box.

Ok, nuff said, for now..

Cheers gs


Cloud Conversations: AWS EFS Elastic File System (Cloud NAS) First Preview Look

Amazon Web Services (AWS) recently announced a preview of its new Elastic File System (EFS), providing Network File System (NFS) NAS (Network Attached Storage) capabilities for AWS Elastic Compute Cloud (EC2) instances. EFS complements other AWS storage offerings including Simple Storage Service (S3) along with Elastic Block Storage (EBS), Glacier and Relational Database Service (RDS) among others.

Ok, that's a lot of buzzwords and acronyms, so let's break this down a bit.

 


AWS EFS and Cloud Storage, Beyond Buzzword Bingo

  • EC2 - Compute instances with various operating systems including Windows and Ubuntu among others, which can also be pre-configured with applications such as SQL Server or web services. Instances exist in various Availability Zones (AZ's) in different AWS Regions and vary from low-cost to high-performance compute, memory, GPU, storage or general-purpose optimized. For example, some EC2 instances rely solely on EBS, S3, RDS or other AWS storage offerings, while others include on-board Solid State Disk (SSD) like the DAS SSD found on traditional servers. EBS volumes used by EC2 instances can be snapshot to S3 storage, which in turn can be replicated to another region (a snapshot sketch follows this list).
  • EBS - Scalable block-accessible storage for EC2 instances that can be configured for performance or bulk storage, as well as for persistent images for EC2 instances (if you choose to configure your instance to be persistent)
  • EFS - New file (aka NAS) accessible storage service accessible from EC2 instances in various AZ's in a given AWS region
  • Glacier - Cloud-based near-line (or by some comparisons off-line) cold-storage archives.
  • RDS - Relational Database Service for SQL and other data repositories
  • S3 - Provides durable, scalable, low-cost bulk (aka object) storage accessible from inside AWS as well as externally. S3 can be used by EC2 instances for bulk durable storage as well as being a target for EBS snapshots.
  • Learn more about EC2, EBS, S3, Glacier, Regions, AZ's and other AWS topics in this primer here
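As an example of that EC2/EBS point, here is a minimal sketch of snapshotting an EBS volume (snapshots land in S3-backed storage) and copying the snapshot to another region with the AWS CLI. The volume ID, snapshot ID and regions are hypothetical placeholders:

# Snapshot an EBS volume (IDs shown are made-up placeholders)
aws ec2 create-snapshot --volume-id vol-1a2b3c4d --description "nightly protection copy"

# Copy the resulting snapshot to a second region for added protection
aws ec2 copy-snapshot --source-region us-east-1 --source-snapshot-id snap-1a2b3c4d --region us-west-2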


What is EFS

EFS implements NFS v4 (SNIA NFS V4 primer), providing network attached storage (NAS), meaning data sharing. AWS is indicating initial pricing for EFS at $0.30 per GByte per month. EFS is designed for storage and data sharing from multiple EC2 instances in different AZ's in the same AWS region, with scalability into the PBs.
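Since EFS presents standard NFS v4, mounting it from an EC2 instance should look like any other NFS mount. A minimal sketch follows; note that EFS is still in preview as of this writing, so the endpoint naming and the file system ID shown are assumptions for illustration only:

# Mount an EFS endpoint via NFS v4 (file system ID and region are placeholders)
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 fs-12345678.efs.us-west-2.amazonaws.com:/ /mnt/efs
df -h /mnt/efs    # verify the mount and check capacity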

What EFS is not

Currently it seems that EFS has an end-point inside AWS accessible via an EC2 instance. This appears to be like EBS in that the storage service is accessible only to AWS EC2 instances, unlike S3, which can be accessed from the outside world as well as via EC2 instances.

 

Note however that, depending on how you configure your EC2 instance with different software, as well as how you configure a Virtual Private Cloud (VPC) and other settings, it is possible to have an application, software tool or operating system running on EC2 be accessible from the outside world. For example, NAS software such as that from SoftNAS and NetApp among many others can be installed on an EC2 instance and, with proper configuration and security settings, be accessible both to other EC2 instances and from outside of AWS.

AWS EFS at this time is NFS version 4 based; however, it does not support Windows SMB/CIFS, HDFS or other NAS access protocols. In addition, AWS EFS is accessible from multiple AZ's within a region; to share NAS data across regions, some other software would be required.

 

EFS is not yet released as of this writing, and AWS is currently accepting requests to join the EFS preview here.

 


Where to learn more

Here are some links to learn more about AWS S3 and related topics

What this all means and wrap-up

AWS continues to extend its cloud platform, including both compute and storage offerings. EFS complements EBS along with S3, Glacier and RDS. For many environments NFS support will be welcome, while for others CIFS/SMB would be appreciated, and others are starting to find value in HDFS-accessible NAS.

 

Overall I like this announcement and look forward to moving beyond the preview.

 

Ok, nuff said, for now..

Cheers gs
