2017

server storage I/O trends

Broadcom aka Avago aka LSI announces SAS SATA NVMe Adapters with RAID

In case you missed it, Broadcom, formerly known as Avago (which bought the LSI adapter and RAID card business), announced that new SAS, SATA and NVMe devices are shipping.

 

While SAS and SATA are well established and continue to be deployed for both HDD as well as flash SSD, NVMe continues to evolve with a bright future. Likewise, while there is a focus on software-defined storage (SDS), software-defined data centers (SDDC) and software-defined data infrastructures (SDDI), along with advanced parity RAID including erasure codes and object storage among other technologies, there is still a need for adapter cards including traditional RAID.

 

Keep in mind that, while probably not meeting the definition of some software-defined aficionados, the many different variations, permutations and derivatives of RAID, from mirror and replication to basic parity to advanced erasure codes (some based on Reed-Solomon, aka RAID 2), rely on software. Granted, some of that software runs on regular primary server processors, while some is packaged in silicon via ASICs, FPGAs, System on Chip (SoC) or RAID on Chip (RoC) devices, as well as BIOS, firmware, drivers and management tools.
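Since all of these RAID variations ultimately reduce to software somewhere, a minimal sketch (illustrative only, in Python, not any vendor's implementation) of the simplest parity scheme shows the idea: parity is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors.

```python
# Minimal RAID 5-style parity sketch: parity = XOR of all data blocks.
# Real controllers do this in silicon (RoC/ASIC) with striping and rotation.
def xor_parity(blocks):
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
parity = xor_parity(data)

# "Lose" the second block, then rebuild it from the survivors plus parity.
rebuilt = xor_parity([data[0], data[2], parity])
assert rebuilt == data[1]
```

The same XOR property is the degenerate single-parity case; erasure codes generalize it to survive multiple failures at the cost of more math.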

SAS, SATA and NVMe adapters

 

For some environments, cards such as those announced by Broadcom are used in passthru mode, effectively as adapters for attaching SAS, SATA and NVMe storage devices to servers. Those servers may be deployed as converged infrastructures (CI), hyper-converged infrastructures (HCI), or Cluster or Cloud in Box (CiB) among other variations. To name names, you might find the above (now or in the not so distant future) in VMware vSAN or regular vSphere based environments, Microsoft Windows Server, Storage Spaces Direct (S2D) or Azure Stack, and OpenStack among other deployments (check your vendor's Hardware Compatibility Lists aka HCLs). In some cases these cards may be used as adapters in passthru mode, or using their RAID (support varies by software stack). Meanwhile in other environments, the more traditional RAID features are still used, spanning Windows to Linux among others.

Who Is Broadcom?

Some of you may know Broadcom as having been around for many years with a focus on networking related technologies. However, some may not realize that Avago bought Broadcom and changed its own name to Broadcom. Here is a history that includes more recent acquisitions such as Brocade, PLX and Emulex as well as LSI. Some of you may recall Avago buying the LSI SAS, SATA, PCIe HBA, RAID and components business (the part not sold to NetApp as part of Engenio). Also recall that Avago sold the LSI flash SSD business unit to Seagate a couple of years ago as part of its streamlining. That's how we get to where we are today, with Broadcom, formerly known as Avago, who bought the LSI adapter and RAID business, announcing new SAS, SATA and NVMe cards.

What Was Announced?

Broadcom has announced cards that are multi-protocol, supporting Serial Attached SCSI (SAS), SATA/AHCI as well as NVM Express (NVMe) as basic adapters for attaching storage (HDD, SSD, storage systems), along with optional RAID as well as cache support. These cards can be used in application servers for traditional as well as virtualized SDDC environments, and in storage systems or appliances for software-defined storage among other uses. The basic functionality of these cards is to provide high performance (IOPs and other activity, as well as bandwidth) and low latency combined with data protection and dense connectivity.

 

Specific features include:

  • Broadcom's Tri-Mode SerDes technology enables the operation of NVMe, SAS or SATA devices in a single drive bay, allowing for endless design flexibility.
  • Management software including LSI Storage Authority (LSA), StorCLI, HII (UEFI)
  • Optional CacheVault(R) flash cache protection
  • Physical dimensions: low profile, 6.127" x 2.712"
  • Host bus type: x8 lane PCI Express 3.1
  • Data transfer rates: SAS-3 12 Gb/s; NVMe up to 8 GT/s PCIe Gen 3
  • Various OS and hypervisor host platform support
  • Warranty: 3 yrs, free 5x8 phone support, advanced replacement option
  • RAID levels 0, 1, 5, 6, 10, 50, and 60
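As a rough sanity check on the transfer rates above (my arithmetic, not a vendor figure): PCIe Gen 3 runs 8 GT/s per lane with 128b/130b encoding, while SAS-3 runs 12 Gb/s per lane with 8b/10b encoding, which gives per-direction raw bandwidth (before protocol overhead) of roughly:

```python
# Per-lane payload bandwidth after line encoding, in bytes/sec.
pcie_gen3_lane = 8e9 * (128 / 130) / 8   # PCIe Gen3: 8 GT/s, 128b/130b
sas3_lane = 12e9 * (8 / 10) / 8          # SAS-3: 12 Gb/s, 8b/10b

print(round(pcie_gen3_lane * 4 / 1e9, 2))  # x4 NVMe device: ~3.94 GB/s
print(round(sas3_lane * 4 / 1e9, 2))       # x4 wide SAS port: ~4.8 GB/s
```

Actual throughput will be lower once transaction-layer and SCSI/NVMe protocol overheads are counted.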

 

Note that some of the specific feature functionality may be available at a later date; check with your preferred vendor's HCL.


Specification | 9480-8i8e | 9440-8i | 9460-8i | 9460-16i
--- | --- | --- | --- | ---
Internal Ports | 8 | 8 | 8 | 16
Internal Connectors | 2 x Mini-SAS HD x4 SFF-8643 | 2 x Mini-SAS HD x4 SFF-8643 | 2 x Mini-SAS HD x4 SFF-8643 | 4 x Mini-SAS HD x4 SFF-8643
External Ports | 8 | – | – | –
External Connectors | 2 x Mini-SAS HD SFF-8644 | – | – | –
Cache Protection | CacheVault CVPM05 | – | CacheVault CVPM05 | CacheVault CVPM05
Cache Memory | 2GB 2133 MHz DDR4 SDRAM | – | 2GB 2133 MHz DDR4 SDRAM | 4GB 2133 MHz DDR4 SDRAM
Devices Supported | SAS/SATA: 255; NVMe: 4 x4, up to 24 x2 or x4* | SAS/SATA: 63; NVMe: 4 x4, up to 24 x2 or x4* | SAS/SATA: 255; NVMe: 4 x4, up to 24 x2 or x4* | SAS/SATA: 255; NVMe: 4 x4, up to 24 x2 or x4*
I/O Processor (SAS Controller) | SAS3516 dual-core RAID-on-Chip (ROC) | SAS3408 I/O controller (IOC) | SAS3508 dual-core RAID-on-Chip (ROC) | SAS3516 dual-core RAID-on-Chip (ROC)

Specification pages:

  • 9480-8i8e: https://www.broadcom.com/products/storage/raid-controllers/megaraid-9480-8i8e#specifications
  • 9440-8i: https://www.broadcom.com/products/storage/raid-controllers/megaraid-9440-8i
  • 9460-8i: https://www.broadcom.com/products/storage/raid-controllers/megaraid-9460-8i

 

In case you need a refresher on SFF cable types, click on the following two images which take you to Amazon.com where you can learn more, as well as order various cable options. PC Pit Stop has a good selection of cables (See other SFF types), connectors and other accessories that I have used, along with those from Amazon.com and others.

 

Left: SFF 8644 Mini SAS HD (External), Right SFF-8643 Mini SAS HD (internal) Image via Amazon.com

Left: SFF 8643 Mini SAS HD (Internal), Right SFF-8642 SATA with power (internal) Image via Amazon.com

Wait, Doesn't NVMe Use PCIe?

For those who are not familiar with NVMe, and in particular U.2 aka SFF-8639 based devices, physically they look (almost) the same as a SAS device connector. The slight variation is that if you look at a SAS drive, there is a small tab to prevent plugging it into a SATA port (recall you can plug SATA into SAS). On SAS drives that tab is blank; however, on NVMe 8639 aka U.2 drives (below left) that tab has several connectors which carry PCIe x4 (single or dual path).

 

What this means is that the PCIe x4 bus electrical signals are transferred via a connector, to backplane chassis to 8639 drive slot to the drive. Those same 8639 drive slots can also have a SAS SATA connection using their traditional connectors enabling a converged or hybrid drive slot so to speak. Learn more about NVMe here (If the Answer is NVMe, then what were and are the questions?) as well as at www.thenvmeplace.com.

 

Left NVMe U.2 drive showing PCIe x4 connectors, right, NVMe U.2 8639 connector

Who Is This For?

These cards are applicable for general purpose IT and other data infrastructure environments, in traditional servers among other uses. They are also applicable for systems builders, integrators and OEMs from whom you may be buying your current or future systems.

 

Where to Learn More

The following are additional resources to learn more about these adapters and related server storage I/O technologies.

What this all means

Even as the industry continues to talk about and move toward a more software-defined focus, even for environments that are serverless, there is still a need for hardware somewhere. These adapters are a good sign of the continued maturing cycle of NVMe, well positioned for the next decade and beyond while also being relevant today. Likewise, even though the future involves NVMe, there is still a place for SAS along with SATA to coexist in many environments. For some environments there is a need for traditional RAID, while for others there is simply the need for attachment of SAS, SATA and NVMe devices. Overall, a good set of updates, enhancements and new technology for today and tomorrow; now, when do I get some to play with? ;).

 

Ok, nuff said (for now...).

 

Cheers
Gs


Dell EMC Azure Stack Hybrid Cloud Solution

Dell EMC have announced their Microsoft Azure Stack hybrid cloud platform solutions. This announcement builds upon earlier statements of support and intention by Dell EMC to be part of the Microsoft Azure Stack community. For those of you who are not familiar, Azure Stack is an on-premise extension of the Microsoft Azure public cloud.

 

What this means is that essentially you can have the Microsoft Azure experience (or a subset of it) in your own data center or data infrastructure, enabling cloud experiences and abilities at your own pace, your own way, with control. Learn more about Microsoft Azure Stack, including my experiences with installing Technical Preview 3 (TP3), here.

 

software defined data infrastructures SDDI and SDDC

What Is Azure Stack

Microsoft Azure Stack is an on-premise (e.g. in your own data center) private (or hybrid when connected to Azure) cloud platform. Currently Azure Stack is in Technical Preview 3 (e.g. TP3) and available as a proof of concept (POC) download from Microsoft. You can use Azure Stack TP3 as a POC for learning, demonstrating and trying features among other activities. Here is a link to a Microsoft video providing an overview of Azure Stack, and here is a good summary of roadmap, licensing and related items.

 

In summary, Microsoft Azure Stack and this announcement is about:

  • An onsite, on-premise, in your data center extension of Microsoft Azure public cloud
  • Enabling private and hybrid cloud with good integration along with shared experiences with Azure
  • Adopt, deploy, leverage cloud on your terms and timeline, choosing what works best for you
  • Common processes, tools, interfaces, management and user experiences
  • Leverage speed of deployment and configuration with a purpose-built integrated solution
  • Support existing and cloud-native Windows, Linux, Container and other services
  • Available as a public preview via software download, as well as from vendors offering solutions

What Did Dell EMC Announce

Dell EMC announced their initial products, platform solutions, and services for Azure Stack. This includes a Proof of Concept (PoC) starter kit (PE R630) for doing evaluations, prototypes, training, development test, DevOps and other initial activities with Azure Stack. Dell EMC also announced a larger turnkey solution for production deployment or large-scale development, test and DevOps activity. The initial production solution scales from 4 to 12 nodes, or from 80 to 336 cores, and includes hardware (server compute, memory, I/O and networking, top of rack (TOR) switches, management) and Azure Stack software along with services. Other aspects of the announcement include initial services in support of Microsoft Azure Stack and Azure cloud offerings.


Image via Dell EMC

 

The announcement builds on joint Dell EMC Microsoft  experience, partnerships, technologies and services spanning hardware,  software, on site data center and public cloud.

Image via Dell EMC

 

Dell EMC along with Microsoft have engineered a hybrid cloud platform for organizations to modernize their data infrastructures, enabling faster innovation and accelerated deployment of resources. The solution includes hardware (server compute, memory, I/O networking, storage devices), software, services, and support.
Image via Dell EMC

 

The value proposition of Dell EMC hybrid cloud for Microsoft Azure Stack includes a consistent experience for developers and IT data infrastructure professionals, common across Azure public cloud and Azure Stack on-premise in your data center for private or hybrid. This includes a common portal, PowerShell, DevOps tools, Azure Resource Manager (ARM), Azure Infrastructure as a Service (IaaS) and Platform as a Service (PaaS), cloud infrastructure and associated experiences (management, provisioning, services).
Image via Dell EMC

 

Secure, protect, preserve and serve applications and VMs hosted on Azure Stack with Dell EMC services along with Microsoft technologies. Dell EMC data protection includes backup and restore, Encryption as a Service, Host Guardian and shielded (protected) VMs, and AD integration among other features.
Image via Dell EMC

 

Dell EMC services for Microsoft Azure Stack include single contact support for prepare, assessment and planning; deploy with rack integration, delivery and configuration; and extend the platform with applicable migration, integration with Office 365 and other applications, and building new services.
Image via Dell EMC

 

Dell EMC hyper-converged scale-out solutions range from a minimum of 4 x PowerEdge R730XD (total raw specs include 80 cores (4 x 20), 1TB RAM (4 x 256GB), 12.8TB SSD cache, 192TB storage), plus two top of rack (TOR) network switches (Dell EMC) and a 1U management server node. The initial maximum configuration raw specification includes 12 x R730XD (total 336 cores), 6TB memory, 86TB SSD cache, 900TB storage along with TOR network switch and management server.

 

The above configurations initially enable HCI nodes of small (low) 20 cores, 256GB memory, 5.7TB SSD cache, 40TB storage; mid size 24 cores, 384GB memory, 11.5TB cache and 60TB storage; high-capacity with 28 cores, 512GB memory, 11.5TB cache and 80TB storage per node.
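A quick arithmetic check (mine, not Dell EMC's) that the per-node figures above line up with the quoted 4-to-12-node solution totals:

```python
# Per-node raw specs from the low and high-capacity HCI configurations.
low_node = {"cores": 20, "memory_gb": 256}
high_node = {"cores": 28, "memory_gb": 512}

print(4 * low_node["cores"], 4 * low_node["memory_gb"])     # 80 cores, 1024 GB (~1TB)
print(12 * high_node["cores"], 12 * high_node["memory_gb"]) # 336 cores, 6144 GB (~6TB)
```

Which matches the stated range of 80 to 336 cores, 1TB to 6TB of memory.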
Image via Dell EMC

 

The Dell EMC Evaluator program for Microsoft Azure Stack includes the PE R630 for PoCs, development, test and training environments. The solution combines Microsoft Azure Stack software and a Dell EMC server with Intel E5-2630 (10 cores, 20 threads / logical processors or LPs) or Intel E5-2650 (12 cores, 24 threads / LPs) processors. Memory is 128GB or 256GB; storage includes flash SSD (2 x 480GB SAS) and HDD (6 x 1TB SAS), plus networking.
Image via Dell EMC

 

Collaborative support provides a single contact spanning Microsoft and Dell EMC.

Who Is This For

This announcement is for any organization that is looking for an on-premise, in your data center, private or hybrid cloud turnkey solution stack. This initial set of announcements can be for those looking to do a proof of concept (PoC), advanced prototype, support development test, DevOps, or gain cloud-like elasticity, ease of use, rapid procurement and other experiences of public cloud, on your terms and timeline. Naturally, there is a strong affinity and seamless experience for those already using, or planning to use, Azure public cloud for Windows, Linux, Containers and other workloads, applications, and services.

What Does This Cost

Check with your Dell EMC representative or partner for exact pricing, which varies by size and configuration. There are also various licensing models to take into consideration if you have Microsoft Enterprise License Agreements (ELAs), which your Dell EMC representative or business partner can address for you. Likewise, being cloud based, there are also time and usage-based options to explore.

Where to Learn More

What this all means

The dust is starting to settle on last fall's Dell EMC integration; both have long histories of working with, and partnering with, Microsoft on legacy as well as virtual software-defined data centers (SDDC), software-defined data infrastructures (SDDI), native, and hybrid clouds. Some may view the Dell EMC VMware relationship as a primary focus; however, keep in mind that both Dell and EMC had worked with Microsoft long before VMware came into being. Likewise, Microsoft remains one of the most commonly deployed operating systems in VMware-based environments. Granted, Dell EMC have a significant focus on VMware, but they also sell, service and support many services for Microsoft-based solutions.

 

What about Cisco, HPE, Lenovo among others who have yet to announce or discuss their Microsoft Azure Stack intentions? Good question; until we hear more about what those and others are doing or planning, there is not much more to do or discuss beyond speculating for now. Another common question is whether there is demand for private and hybrid cloud. In fact, some industry expert pundits have even said private or hybrid are dead, which is interesting: how can something be dead if it is just getting started? Likewise, it is early to tell if Azure Stack will gain traction with various organizations, some of whom may have tried or struggled with OpenStack among others.

 

Given the large number of Microsoft Windows-based servers on VMware, OpenStack, and public cloud services as well as other platforms, along with the continued growing popularity of Azure, having a solution such as Azure Stack provides an attractive option for many environments. That leads to the question of whether Azure Stack is essentially a replacement for Windows Server or Hyper-V, and whether it is only for Windows guest operating systems. At this point, indeed, Windows would be an attractive and comfortable option; however, given the large number of Linux-based guests running on Hyper-V as well as Azure public, those are also primary candidates, as are containers and other services.

 

Overall, this is an excellent and exciting move for Microsoft, extending their public cloud software stack to be deployed within data centers in a hybrid way, something those customers are familiar with doing. This is a good example of hybrid spanning public and private clouds, remote and on-premise, as well as combining the familiarity and control of traditional procurement with the flexibility and elasticity experience of clouds.


 

Some will say that if OpenStack, being free open source, is struggling in many organizations, how can Microsoft have success with Azure Stack? The answer could be that some organizations have struggled with OpenStack while others have not, due to the lack of commercial services and turnkey support. Having installed both OpenStack and Azure Stack (as well as VMware among others), Azure Stack, at least the TP3 PoC, is easy to install; granted, it is limited to one node, unlike the production versions. Likewise, there are easy to use appliance versions of OpenStack that are limited in scale, as well as more involved installs that unlock full functionality.

 

OpenStack, Azure Stack, VMware and others have their places, alongside or supporting containers along with other tools. In some cases, those technologies may exist in the same environment supporting different workloads, as well as accessing various public clouds; after all, hybrid is the home run for many if not most legacy IT environments.

 

Overall this is a good announcement from Dell EMC for those who are interested in, or should become more aware of, Microsoft Azure Stack and cloud along with hybrid clouds. Likewise, I look forward to hearing more about the solutions from others who will be supporting Azure Stack as well as other hybrid (and virtual private) clouds.

 

Ok, nuff said (for now...).

Cheers
Gs


Azure Stack TP3 Overview Preview (Part II) Install Review

This is part two of a two-part series looking at Microsoft Azure Stack with a focus on my experiences installing Microsoft Azure Stack Technical Preview 3 (TP3) including into a nested VMware vSphere ESXi environment. Read part one here that provides a general overview of Azure Stack.

 

Azure Stack Review and Install

Being familiar with the Microsoft Azure public cloud, having used it for a few years now, I wanted to gain some closer insight and experience, and expand my tradecraft, with Azure Stack by installing TP3. This is similar to what I have done in the past with OpenStack, Hadoop, Ceph, VMware, Hyper-V and many others, some of which I need to get around to writing about sometime. As a refresher from part one of this series, the following is an image via Microsoft showing the Azure Stack TP3 architecture; click here or on the image to learn more, including the names and functions of the various virtual machines (VMs) that make up Azure Stack.

 

Microsoft Azure Stack architecture
  Click here or on the above image to view list of VMs and other services  (Image via Microsoft.com)

What's Involved Installing Azure Stack TP3?

 

The basic steps are as follows:

  • Read this Azure Stack blog post (Azure Stack)
  • Download the bits (e.g. the Azure Stack software) from here, where you access the Azure Stack Downloader tool.
  • Planning your deployment making decisions on Active Directory and other items.
  • Prepare the target server (physical machine aka PM, or virtual machine VM) that will be the Azure Stack destination.
  • Copy Azure Stack software and installer to target server and run pre-install scripts.
  • Modify PowerShell script file if using a VM instead of a PM
  • Run the Azure Stack CloudBuilder setup, configure unattend.xml if needed or answer prompts.
  • Server reboots, select Azure Stack from two boot options.
  • Prepare your Azure Stack base system (time, network NICs in static or DHCP, if running on VMware install VMtools)
  • Determine if you will be running with Azure Active Directory (AAD) or standalone Active Directory Federated Services (ADFS).
  • Update any applicable installation scripts (see notes that follow)
  • Run the deployment script, then extend the Azure Stack TP3 PoC as needed

 

Note that this is a large download of about 16GB (23GB with the optional Windows Server 2016 demo ISO).

 

Use the AzureStackDownloader tool to download the bits (about 16GB, or 23GB with the optional Windows Server 2016 base image), which will either be several separate files that you stitch back together with the MicrosoftAzureStackPOC tool, or a large VHDX file and a smaller 6.8GB ISO (Windows Server 2016). Prepare your target server system for installation once you have all the software pieces downloaded (or do the preparations while waiting for the download).

 

Once you have the software downloaded, if it is a series of eight .bin files (seven about 2GB, one around 1.5GB), it is a good idea to verify their checksums, then stitch them together on your target system, or on a staging storage device or file share. Note that for the actual deployment first phase, the large resulting cloudbuilder.vhdx file will need to reside in the C:\ root location of the server where you are installing Azure Stack.
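As an illustration of that checksum step (file names and published hashes are placeholders; substitute the values from the actual download page), a streaming SHA-256 avoids loading multi-GB .bin files into memory:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in 1MB chunks and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage; names and digests below are placeholders, not real values.
# expected = {"AzureStackPOC-1.bin": "ab12...", "AzureStackPOC-2.bin": "cd34..."}
# for name, digest in expected.items():
#     assert sha256_of(name) == digest, f"{name} failed verification"
```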

 

server storageio nested azure stack tp3 vmware

 

Azure Stack deployment prerequisites (Microsoft) include:

  • At least 12 cores (or more), dual socket processor if possible
  • As much DRAM as possible (I used 100GB)
  • Put the operating system disk on flash SSD (SAS, SATA, NVMe) if possible, allocate at least 200GB (more is better)
  • Four x 140GB or larger (I went with 250GB) drives (HDD or SSD) for data deployment drives
  • A single NIC or adapter (I put mine into static instead of DHCP mode)
  • Verify your physical or virtual server BIOS has VT enabled

 

The above image helps to set the story of what is being done. On the left is a bare metal (BM) or physical machine (PM) install of Azure Stack TP3; on the right, a nested VMware (vSphere ESXi 6.5) virtual machine (VM hardware version 11) approach. Note that you could also do a Hyper-V nested install among other approaches. Shown in the image above, common to both a BM or VM install is a staging area (which could be space on your system drive) where the Azure Stack download occurs. If you use a separate staging area, then simply copy the individual .bin files and stitch them together into the larger .VHDX, or copy the larger .VHDX; whichever is better is up to your preference.

 

Note that if you use the nested approach, there are a couple of configuration (PowerShell) scripts that need to be updated. These changes are to trick the installer into thinking that it is on a PM when it checks to see if on physical or virtual environments.

 

Also note that if using nested, make sure you have your VMware vSphere ESXi host along with specific VM properly configured (e.g. that virtualization and other features are presented to the VM). With vSphere ESXi 6.5 virtual machine type 11 nesting is night and day easier vs. earlier generations.

 

Something else to explain here is that you will initially start the Azure Stack install preparation using a standard Windows Server (I used a 2016 version) where the .VHDX is copied into its C:\ root. From there you will execute some PowerShell scripts to setup some configuration files, one of which needs to be modified for nesting.

 

Once those prep steps are done, there is a Cloudbuilder deploy script that gets run that can be done with an unattend.xml file or manual input. This step will cause a dual-boot option to be added to your server where you can select Azure Stack or your base prep Windows Server instance, followed by reboot.

 

After the reboot occurs and you choose to boot into Azure Stack, this is the server instance that will actually run the deployment script, as well as build and launch all the VMs for the Azure Stack TP3 PoC. This is where I recommend having a rough sketch like the one above to annotate layers as you go, to remember which layer you are working at. Don't worry, it becomes much easier once all is said and done.

 

Speaking of preparing your server, refer to Microsoft specs, however in general give the server as much RAM and cores as possible. Also if possible place the system disk on a flash SSD (SAS, SATA, NVMe) and make sure that it has at least 200GB, however 250 or even 300GB is better (just in case you need more space).

 

Additional configuration tips include allocating four data disks for Azure; if possible make these SSDs as well, however it is more important IMHO to have at least the system on fast flash SSD.

Another tip is to enable only one network card or NIC and put it into static vs. DHCP address mode to make things easier later.

 

Tip: If running nested, vSphere 6.5 worked the smoothest, as I had various issues or inconsistencies with earlier VMware versions, even with VMs that otherwise ran nested just fine.

 

Tip: Why run nested? Simple: I wanted to be able to use VMware tools, do snapshots to go back in time, plus share the server with some other activities until ready to give Azure Stack TP3 its own PM.

 

Tip: Do not connect the POC machine to the following subnets (192.168.200.0/24, 192.168.100.0/27, 192.168.101.0/26, 192.168.102.0/24, 192.168.103.0/25, 192.168.104.0/25) as Azure Stack TP3 uses those.
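A small illustrative check (not part of the Azure Stack installer) that a candidate host address does not land in one of those reserved subnets:

```python
import ipaddress

# The subnets Azure Stack TP3 reserves, per the tip above.
RESERVED = [ipaddress.ip_network(n) for n in (
    "192.168.200.0/24", "192.168.100.0/27", "192.168.101.0/26",
    "192.168.102.0/24", "192.168.103.0/25", "192.168.104.0/25")]

def conflicts(host_ip):
    """Return True if host_ip falls inside any TP3-reserved subnet."""
    ip = ipaddress.ip_address(host_ip)
    return any(ip in net for net in RESERVED)

print(conflicts("192.168.200.5"))  # True: inside a reserved range
print(conflicts("192.168.1.50"))   # False: safe to use
```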

 

storageio azure stack tp3 vmware configuration

 

Since I decided to use a nested VM deploying using VMware, there were a few extra steps needed that I have included as tips and notes. Following is view via vSphere client of the ESXi host and VM configuration.

 

The following image combines a couple of different things including:

A: Showing the contents of C:\Azurestack_Supportfiles directory

B: Modifying the PrepareBootFromVHD.ps1 file if deploying on virtual machine (See tips and notes)

C: Showing contents of staging area including individual .bin files along with large CloudBuilder.vhdx

D: Running the PowerShell script commands to prepare the PrepareBootFromVHD.ps1 and related items

 

preparing azure stack tp3 cloudbuilder for nested vmware deployment

 

From PowerShell (administrator):

# Variables
$Uri = 'https://raw.githubusercontent.com/Azure/AzureStack/master/Deployment/'
$LocalPath = 'c:\AzureStack_SupportFiles'

# Create folder
New-Item $LocalPath -type directory

# Download files
( 'BootMenuNoKVM.ps1', 'PrepareBootFromVHD.ps1', 'Unattend.xml', 'unattend_NoKVM.xml') | foreach { Invoke-WebRequest ($uri + $_) -OutFile ($LocalPath + '\' + $_) }

After you do the above, decide if you will be using an Unattend.xml or manual entry of items for building the Azure Stack deployment server (e.g. a Windows Server). Note that the above PowerShell script created the C:\azurestack_supportfiles folder and downloads the script files for building the cloud image using the previously downloaded Azure Stack CloudBuilder.vhdx (which should be in C:\).

 

Note and tip: if you are doing a VMware or virtual machine based deployment of the TP3 PoC, you will need to change the PrepareBootFromVHD.ps1 file in the Azure Stack support files folder. Here is a good resource via GitHub on what gets changed, showing an edit on or about line 87 of PrepareBootFromVHD.ps1. If you run the PrepareBootFromVHD.ps1 script on a virtual machine you will get an error message; the fix is relatively easy (after I found this post).

 

Look in PrepareBootFromVHD.ps1 for something like the following around line 87:

if ((Get-Disk | Where {$_.IsBoot -eq $true}).Model -match 'Virtual Disk') {
    Write-Host "The server is currently already booted from a virtual hard disk, to boot the server from the CloudBuilder.vhdx you will need to run this script on an Operating System that is installed on the physical disk of this server."
    Exit
}

You can either remove the "exit" command, or, change the test for "Virtual Disk" to something like "X", for fun I did both (and it worked).

 

Note that you only have to make the above and another change in a later step if you are deploying Azure Stack TP3 as a virtual machine.

 

Once you are ready, go ahead and launch the PrepareBootFromVHD.ps1 script which will set the BCDBoot entry (more info here).

 

azure stack tp3 cloudbuilder nested vmware deployment

 

You will see a reboot and install, this is installing what will be  called the physical instance. Note that this is really being installed on the  VM system drive as a secondary boot option (e.g. azure stack).

 

azure stack tp3 dual boot option

 

After the reboot, login to the new Azure Stack base system and complete any configuration, including adding VMware Tools if using VMware nested. Some other things to do include making sure you have your single network adapter set to static (makes things easier), and any other updates or customizations. Before you run the next steps, you need to decide if you are going to use Azure Active Directory (AAD) or local ADFS.

 

Note that if you are not running on a virtual machine, simply open a PowerShell (administrator) session, and run the deploy script. Refer to here for more guidance on the various options available including discussion on using AAD or ADFS.

 

Note that if you run the deployment script on a virtual machine, you will get an error which is addressed in the next section; otherwise, sit back and watch the progress.

CloudBuilder Deployment Time

Once you have your Azure Stack deployment system and environment ready, including a snapshot if on a virtual machine, launch the PowerShell deployment script. Note that you will need to have decided if deploying with Azure Active Directory (AAD) or Active Directory Federated Services (ADFS) for standalone aka submarine mode. There are also other options you can select as part of the deployment, discussed in the Azure Stack tips here (a must read) and here. I chose to do a submarine mode (e.g. not connected to Public Azure and AAD) deployment.

 

From PowerShell (administrator):

cd C:\CloudDeployment\Setup
$adminpass = ConvertTo-SecureString "youradminpass" -AsPlainText -Force
.\InstallAzureStackPOC.ps1 -AdminPassword $adminpass -UseADFS

Deploying on VMware Virtual Machines Tips

Here is a good tip via Gareth Jones (@garethjones294) that I found useful for updating one of the deployment script files (BareMetal_Tests.ps1, located in the C:\CloudDeployment\Roles\PhysicalMachines\Tests folder) so that it skips the physical machine (PM) vs. VM tests. Another good resource, even though it is for TP2 and early versions of VMware, is the TP2 deployment experiences by Niklas Akerlund (@vNiklas).

 

Note that this is a bit of a chicken and egg scenario unless you are proficient at digging into script files, since the BareMetal_Tests.ps1 file does not get unpacked until you run the CloudBuilder deployment script. If you run the script and get an error, make the changes below and rerun the script as noted. Once you make the modification to the BareMetal_Tests.ps1 file, keep a copy in a safe place for future use.

 

Here are some more tips for deploying Azure Stack on VMware.

 

Per the tip mentioned above via Gareth Jones (tip: read Gareth's post rather than simply cutting and pasting the following, which is more of a guide):

 

  • Open the BareMetal_Tests.ps1 file in PowerShell ISE and navigate to line 376 (or in that area).
  • Change $false to $true, which will stop the script failing when it checks whether Azure Stack is running inside a VM.
  • Next go to line 453.
  • Change the last part of the line to read "Should Not BeLessThan 0", which will stop the script checking for the required number of cores available.
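A minimal sketch of keeping a pristine copy before editing (the path is from above; line numbers can drift between builds, so locate the checks by their content rather than trusting the numbers blindly):

```powershell
# Back up the test script, then open it for the two edits described above
$file = 'C:\CloudDeployment\Roles\PhysicalMachines\Tests\BareMetal_Tests.ps1'
Copy-Item $file "$file.bak"        # pristine copy for future reruns
powershell_ise.exe $file           # edit the VM check (~line 376) and core check (~line 453)
```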

 

After you make the above corrections, as with any error (and fix) during Azure Stack TP3 PoC deployment, simply run the following.

cd C:\CloudDeployment\Setup
.\InstallAzureStackPOC.ps1 -rerun

Refer to the extra links in the where to learn more section below, which offer various tips, tricks and insight that I found useful, particularly for deploying on VMware aka nested. Also in the links below are tips on general Azure Stack, TP2, TP3, adding services and other insight.

 

starting azure stack tp3 deployment

 

Tip: If you are deploying Azure Stack TP3 PoC on a virtual machine, once you start the script above, copy the modified BareMetal_Tests.ps1 file into place (see the modification steps above). Once the CloudBuilder deployment starts, sit back and wait; if you are using SSDs it will take a while, and if using HDDs it will take a long while (up to hours). However, check in on it now and then to see progress and whether there are any errors. Note that some of the common errors will occur very early in the deployment, such as the BareMetal_Tests.ps1 issue mentioned above.

 

azure stack tp3 deployment finished

Check in periodically to see how the deployment is progressing, as well as what is occurring. If you have the time, watch some of the scripts, as you can see some interesting things such as the software-defined data center (SDDC) aka software-defined data infrastructure (SDDI) aka Azure Stack virtual environment being created. This includes virtual machine creation and population, creating the software-defined storage using Storage Spaces Direct (S2D), and the virtual network and Active Directory along with domain controllers, among other activity.

azure stack tp3 deployment progress

After Azure Stack Deployment Completes

 

After you see the deployment complete, you can try accessing the management portal; however, there may be some background processing still running. Here is a good tip post from Microsoft on connecting to Azure Stack using Remote Desktop (RDP) access. Use RDP from the Azure Stack deployment Windows Server and connect to a virtual machine named MAS-CON01, launch Server Manager, and for Local Server disable Internet Explorer Enhanced Security (make sure you are on the right system, see the tip mentioned above). Disconnect from MAS-CON01 (refer to the Azure Stack architecture image above), then reconnect and launch Internet Explorer with a URL of (note the documentation said to use which did not work for me).
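From the Azure Stack deployment host, the RDP connection can also be launched from a command line:

```powershell
# MAS-CON01 resolves on the Azure Stack internal network from the deployment host
mstsc /v:MAS-CON01
```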

 

Note that the username for the Azure Stack system is AzureStack\AzureStackAdmin, with the password you set for the administrator during setup. If you get an error, verify the URLs, check your network connectivity, wait a few minutes, and verify what server you are trying to connect from and to. Keep in mind that even if deploying on a PM or BM (e.g. a non-virtual server or VM), the Azure Stack TP3 PoC deployment creates a "virtual" software-defined environment with servers, storage (Azure Stack uses Storage Spaces Direct [S2D]) and software-defined networking.

 

accessing azure stack tp3 management portal dashboard

 

Once able to connect to Azure Stack, you can add new services including virtual machine image instances such as Windows (use the Server 2016 ISO that is part of Azure Stack downloads), Linux or others. You can also go to these Microsoft resources for some first learning scenarios, using the management portals, configuring PowerShell and troubleshooting.

Where to learn more

The following provide more information and insight about Azure, Azure Stack, Microsoft and Windows among related topics.  

What this  all means

A common question is whether there is demand for private and hybrid cloud. In fact, some industry expert pundits have even said private or hybrid cloud is dead, which is interesting: how can something be dead if it is just getting started? Likewise, it is early to tell if Azure Stack will gain traction with various organizations, some of whom may have tried or struggled with OpenStack among others.

 

Given the large number of Microsoft Windows-based servers on VMware, OpenStack and public cloud services as well as other platforms, along with the continued growing popularity of Azure, a solution such as Azure Stack provides an attractive option for many environments. That leads to the question of whether Azure Stack is essentially a replacement for Windows Server or Hyper-V, and whether it is only for Windows guest operating systems. Indeed, Windows would be an attractive and comfortable option; however, given the large number of Linux-based guests running on Hyper-V as well as Azure public cloud, those are also primary candidates, as are containers and other services.

software defined data infrastructures SDDI and SDDC

 

Some will say that if OpenStack, being free open source, is struggling in many organizations, how can Microsoft have success with Azure Stack? The answer could be that some organizations have struggled with OpenStack due to a lack of commercial services and turnkey support, while others have not. Having installed both OpenStack and Azure Stack (as well as VMware among others), Azure Stack, at least the TP3 PoC, is easy to install, granted it is limited to one node, unlike the production versions. Likewise, there are easy-to-use appliance versions of OpenStack that are limited in scale, as well as more involved installs that unlock full functionality.

 

OpenStack, Azure Stack, VMware and others have their places, alongside or supporting containers along with other tools. In some cases, those technologies may exist in the same environment supporting different workloads, as well as accessing various public clouds. After all, hybrid is the home run for many if not most legacy IT environments.

 

Ok, nuff said (for now...).

Cheers
Gs

server storage I/O trends

Azure Stack Technical Preview 3 (TP3) Overview Preview Review

Perhaps you are aware of or use Microsoft Azure, but how about Azure Stack?

 

This is part one of a two-part series looking at Microsoft Azure Stack providing an overview, preview and review. Read part two here that looks at my experiences installing Microsoft Azure Stack Technical Preview 3 (TP3).

 

For those who are not aware, Azure Stack is a private, on-premise extension of the Azure public cloud environment. Azure Stack is now in technical preview three (e.g. TP3), or what you might also refer to as a beta (get the bits here).

 

In addition to being available via download as a preview, Microsoft is also working with vendors such as Cisco, Dell EMC, HPE, Lenovo and others who have announced Azure Stack support. Vendors such as Dell EMC have also made proof of concept kits available that you can buy, including server with storage and software. Microsoft has also indicated that once production versions launch, scaling from a few to many nodes, a single-node proof of concept or development system will also remain available.

 

software defined data infrastructure SDDI and SDDC
Software-Defined Data Infrastructures (SDDI) aka Software-defined Data Centers, Cloud, Virtual and Legacy

 

Besides being an on-premise, private cloud variant, Azure Stack is also hybrid capable, being able to work with public cloud Azure. In addition, Azure Stack services, and in particular workloads, can also work with traditional Microsoft, Linux and other environments. You can use pre-built solutions from the Azure marketplace, in addition to developing your applications using Azure services and DevOps tools. Azure Stack enables hybrid deployment into public or private cloud to balance flexibility, control and your needs.

Azure Stack Overview

Microsoft Azure Stack is an on-premise (e.g. in your own data center) private (or hybrid when connected to Azure) cloud platform. Currently Azure Stack is in Technical Preview 3 (e.g. TP3) and available as a proof of concept (POC) download from Microsoft. You can use Azure Stack TP3 as a POC for learning, demonstrating and trying features among other activities. Here is a link to a Microsoft video providing an overview of Azure Stack, and here is a good summary of roadmap, licensing and related items.

 

In summary, Microsoft Azure Stack is:

  • An onsite, on-premise, in-your-data-center extension of the Microsoft Azure public cloud
  • Enabling private and hybrid cloud with strong integration along with common experiences with Azure
  • Adopt, deploy, leverage cloud on your terms and timeline choosing what works best for you
  • Common processes, tools, interfaces, management and user experiences
  • Leverage speed of deployment and configuration with a purpose-built integrated solution
  • Support existing and cloud native Windows, Linux, Container and other services
  • Available as a public preview via software download, as well as vendors offering solutions

What is Azure Stack Technical Preview 3 (TP3)

This version of Azure Stack is a single node running on a lone physical machine (PM) aka bare metal (BM). However, it can also be installed into a virtual machine (VM) using nesting. For example, I have Azure Stack TP3 running nested on a VMware vSphere ESXi 6.5 system with a Windows Server 2016 VM as its base operating system.
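For nesting to work, the ESXi VM needs hardware-assisted virtualization exposed to the guest. In the vSphere 6.5 client this is a checkbox under the VM's CPU settings, which corresponds to the following .vmx entry (sketch per VMware's nested virtualization settings; confirm against your vSphere documentation):

```
vhv.enable = "TRUE"
```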

 

Microsoft Azure Stack architecture
    Click here or on the above image to view the list of VMs and other services (Image via Microsoft.com)

 

The TP3 POC Azure Stack is not intended for production environments, only for testing, evaluation, learning and demonstrations as part of its terms of use. This version of Azure Stack is associated with a single-node identity, such as Azure Active Directory (AAD) integrated with Azure, or Active Directory Federation Services (ADFS) for standalone mode. Note that since this is a single-server deployment, it is not intended for performance, rather for evaluating functionality, features, APIs and other activities. Learn more about Azure Stack TP3 details here (or click on the image), including the names of the various virtual machines (VMs) as well as their roles.

 

Where to learn more

 

The following provide more information and insight about Azure, Azure Stack, Microsoft and Windows among related topics.

  

What this  all means


 

Continue reading more in part two of this two-part series here including installing Microsoft Azure Stack TP3.

 

Ok, nuff said (for now...).

Cheers
Gs