
When placing nested ESXi (ESXi installed as a guest OS) on a vSAN datastore, enable /VSAN/FakeSCSIReservations on the physical ESXi side to work around virtual disk formatting errors.

 

Reference: How to run Nested ESXi on top of a VSAN datastore?

https://www.virtuallyghetto.com/2013/11/how-to-run-nested-esxi-on-top-of-vsan.html

 

This time, let's enable /VSAN/FakeSCSIReservations with PowerCLI.

 

To apply the setting only to the ESXi hosts participating in the vSAN cluster, retrieve the target cluster first and then pipe it to the configuration command.

 

The target cluster this time is infra-cluster-01.

PowerCLI> Get-Cluster infra-cluster-01 | select Name,VsanEnabled

 

Name             VsanEnabled

----             -----------

infra-cluster-01        True

 

 

These are the target ESXi hosts.

PowerCLI> Get-Cluster infra-cluster-01 | Get-VMHost | Sort-Object Name | select Name,ConnectionState,PowerState,Version,Build | ft -AutoSize

 

Name                    ConnectionState PowerState Version Build

----                    --------------- ---------- ------- -----

infra-esxi-01.go-lab.jp       Connected  PoweredOn 6.7.0   8169922

infra-esxi-02.go-lab.jp       Connected  PoweredOn 6.7.0   8169922

infra-esxi-03.go-lab.jp       Connected  PoweredOn 6.7.0   8169922

infra-esxi-04.go-lab.jp       Connected  PoweredOn 6.7.0   8169922

infra-esxi-05.go-lab.jp       Connected  PoweredOn 6.7.0   8169922

infra-esxi-06.go-lab.jp       Connected  PoweredOn 6.7.0   8169922

 

 

First, check the current setting.

VSAN.FakeSCSIReservations is still disabled ("0").

PowerCLI> Get-Cluster infra-cluster-01 | Get-VMHost | Sort-Object Name | select Name,{$_|Get-AdvancedSetting VSAN.FakeSCSIReservations}

 

Name                    $_|Get-AdvancedSetting VSAN.FakeSCSIReservations

----                    ------------------------------------------------

infra-esxi-01.go-lab.jp VSAN.FakeSCSIReservations:0

infra-esxi-02.go-lab.jp VSAN.FakeSCSIReservations:0

infra-esxi-03.go-lab.jp VSAN.FakeSCSIReservations:0

infra-esxi-04.go-lab.jp VSAN.FakeSCSIReservations:0

infra-esxi-05.go-lab.jp VSAN.FakeSCSIReservations:0

infra-esxi-06.go-lab.jp VSAN.FakeSCSIReservations:0

 

 

Now change the setting.

Set VSAN.FakeSCSIReservations to "1" (enabled).

PowerCLI> Get-Cluster infra-cluster-01 | Get-VMHost | Sort-Object Name | Get-AdvancedSetting VSAN.FakeSCSIReservations | Set-AdvancedSetting -Value 1 -Confirm:$false

 

The setting has been changed.

PowerCLI> Get-Cluster infra-cluster-01 | Get-VMHost | Sort-Object Name | select Name,{$_|Get-AdvancedSetting VSAN.FakeSCSIReservations}

 

Name                    $_|Get-AdvancedSetting VSAN.FakeSCSIReservations

----                    ------------------------------------------------

infra-esxi-01.go-lab.jp VSAN.FakeSCSIReservations:1

infra-esxi-02.go-lab.jp VSAN.FakeSCSIReservations:1

infra-esxi-03.go-lab.jp VSAN.FakeSCSIReservations:1

infra-esxi-04.go-lab.jp VSAN.FakeSCSIReservations:1

infra-esxi-05.go-lab.jp VSAN.FakeSCSIReservations:1

infra-esxi-06.go-lab.jp VSAN.FakeSCSIReservations:1

 

 

You can also adjust the column headers and other display details, as shown below.

PowerCLI> Get-Cluster infra-cluster-01 | Get-VMHost | Sort-Object Name | select Name,@{N="VSAN.FakeSCSIReservations";E={($_|Get-AdvancedSetting VSAN.FakeSCSIReservations).Value}}

 

Name                    VSAN.FakeSCSIReservations

----                    -------------------------

infra-esxi-01.go-lab.jp                         1

infra-esxi-02.go-lab.jp                         1

infra-esxi-03.go-lab.jp                         1

infra-esxi-04.go-lab.jp                         1

infra-esxi-05.go-lab.jp                         1

infra-esxi-06.go-lab.jp                         1

 

 

You can also group the results to check that the setting is consistent.

Grouping the ESXi hosts where VSAN.FakeSCSIReservations is "1" shows that all six hosts have the same setting.

PowerCLI> Get-Cluster infra-cluster-01 | Get-VMHost | Sort-Object Name | Get-AdvancedSetting VSAN.FakeSCSIReservations | Group-Object Name,Value | select Count,Name,{$_.Group.Entity}

 

Count Name                         $_.Group.Entity

----- ----                         ---------------

    6 VSAN.FakeSCSIReservations, 1 {infra-esxi-01.go-lab.jp, infra-esxi-02.go-lab.jp, infra-esxi-03.go-lab.jp, infra...

 

 

You can also display this more simply, as shown below.

PowerCLI> Get-Cluster infra-cluster-01 | Get-VMHost | Get-AdvancedSetting VSAN.FakeSCSIReservations | Group-Object Name,Value | select Count,Name

 

Count Name

----- ----

    6 VSAN.FakeSCSIReservations, 1
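
If you repeat this across several labs, the whole enable-and-verify flow can be wrapped in a small helper. This is a minimal sketch, assuming the same PowerCLI session as above; the function name is my own invention:

# Hypothetical helper: enable VSAN.FakeSCSIReservations on every host
# in the given cluster, then report the values grouped per setting.
function Enable-FakeScsiReservations {
    param([string]$ClusterName)
    Get-Cluster $ClusterName | Get-VMHost |
        Get-AdvancedSetting -Name VSAN.FakeSCSIReservations |
        Set-AdvancedSetting -Value 1 -Confirm:$false
    Get-Cluster $ClusterName | Get-VMHost |
        Get-AdvancedSetting -Name VSAN.FakeSCSIReservations |
        Group-Object Name,Value | Select-Object Count,Name
}

# Usage example:
# Enable-FakeScsiReservations -ClusterName infra-cluster-01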

 

 

That concludes this example of using PowerCLI with a nested ESXi lab on a vSAN datastore.


Azure Stack TP3 Overview Preview (Part II) Install Review

This is part two of a two-part series looking at Microsoft Azure Stack, with a focus on my experiences installing Microsoft Azure Stack Technical Preview 3 (TP3), including into a nested VMware vSphere ESXi environment. Read part one here, which provides a general overview of Azure Stack.

 

Azure Stack Review and Install

Being familiar with the Microsoft Azure public cloud, having used it for a few years now, I wanted to gain closer insight into and hands-on experience with Azure Stack by installing TP3. This is similar to what I have done in the past with OpenStack, Hadoop, Ceph, VMware, Hyper-V and many others, some of which I need to get around to writing about sometime. As a refresher from part one of this series, the following is an image via Microsoft showing the Azure Stack TP3 architecture; click here or on the image to learn more, including the names and functions of the various virtual machines (VMs) that make up Azure Stack.

 

Microsoft Azure Stack architecture
  Click here or on the above image to view list of VMs and other services  (Image via Microsoft.com)

What's Involved in Installing Azure Stack TP3?

 

The basic steps are as follows:

  • Read this Azure Stack blog post (Azure Stack).
  • Download the bits (e.g. the Azure Stack software) from here, where you access the Azure Stack Downloader tool.
  • Plan your deployment, making decisions on Active Directory and other items.
  • Prepare the target server (physical machine aka PM, or virtual machine aka VM) that will be the Azure Stack destination.
  • Copy the Azure Stack software and installer to the target server and run the pre-install scripts.
  • Modify the PowerShell script file if using a VM instead of a PM.
  • Run the Azure Stack CloudBuilder setup; configure unattend.xml if needed or answer the prompts.
  • The server reboots; select Azure Stack from the two boot options.
  • Prepare your Azure Stack base system (time, network NICs in static or DHCP, and if running on VMware install VMware Tools).
  • Determine if you will be running with Azure Active Directory (AAD) or standalone Active Directory Federation Services (ADFS).
  • Update any applicable installation scripts (see notes that follow).
  • Run the deployment script, then extend the Azure Stack TP3 PoC as needed.

 

Note that this is a large download of about 16GB (23GB with the optional Windows Server 2016 demo ISO).

 

Use the Azure Stack Downloader tool to download the bits (about 16GB, or 23GB with the optional Windows Server 2016 base image), which will either be in several separate files that you stitch back together with the MicrosoftAzureStackPOC tool, or as a large VHDX file plus a smaller 6.8GB ISO (Windows Server 2016). Prepare your target server system for installation once you have all the software pieces downloaded (or do the preparations while waiting for the download).

 

Once you have the software downloaded, if it is a series of eight .bin files (seven about 2GB each, one around 1.5GB), it is a good idea to verify their checksums, then stitch them together on your target system, or on a staging storage device or file share. Note that for the actual deployment first phase, the large resulting cloudbuilder.vhdx file will need to reside in the C:\ root location of the server where you are installing Azure Stack.
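
As a quick way to do that check, here is a minimal PowerShell sketch; the staging path and file name pattern are placeholders for wherever you downloaded the .bin files, and you would compare the output against the published checksums:

# Compute SHA256 hashes of the downloaded .bin pieces so they can be
# compared with the published checksums before stitching them together.
Get-ChildItem 'D:\Staging\AzureStackPOC*.bin' |
    Get-FileHash -Algorithm SHA256 |
    Select-Object @{N='File';E={Split-Path $_.Path -Leaf}},Hash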

 

server storageio nested azure stack tp3 vmware

 

Azure Stack deployment prerequisites (Microsoft) include:

  • At least 12 cores (or more), dual socket processor if possible
  • As much DRAM as possible (I used 100GB)
  • Put the operating system disk on flash SSD (SAS, SATA, NVMe) if possible, allocate at least 200GB (more is better)
  • Four x 140GB or larger (I went with 250GB) drives (HDD or SSD) for data deployment drives
  • A single NIC or adapter (I put mine into static instead of DHCP mode)
  • Verify your physical or virtual server BIOS has VT enabled

 

The above image helps to set the story of what is being done. On the left is a bare metal (BM) or physical machine (PM) install of Azure Stack TP3; on the right, a nested VMware (vSphere ESXi 6.5) approach with a virtual machine (VM, hardware version 11). Note that you could also do a Hyper-V nested deployment, among other approaches. Shown in the image above, common to both BM and VM, is a staging area (it could be space on your system drive) where the Azure Stack download occurs. If you use a separate staging area, then simply copy the individual .bin files and stitch them together into the larger .VHDX, or copy the larger .VHDX; which is better is up to your preferences.

 

Note that if you use the nested approach, there are a couple of configuration (PowerShell) scripts that need to be updated. These changes are to trick the installer into thinking that it is on a PM when it checks to see if on physical or virtual environments.

 

Also note that if using nested, make sure you have your VMware vSphere ESXi host, along with the specific VM, properly configured (e.g. that virtualization and other features are presented to the VM). With vSphere ESXi 6.5 and virtual machine hardware version 11, nesting is night-and-day easier vs. earlier generations.

 

Something else to explain here is that you will initially start the Azure Stack install preparation using a standard Windows Server (I used a 2016 version) where the .VHDX is copied into its C:\ root. From there you will execute some PowerShell scripts to setup some configuration files, one of which needs to be modified for nesting.

 

Once those prep steps are done, there is a Cloudbuilder deploy script that gets run that can be done with an unattend.xml file or manual input. This step will cause a dual-boot option to be added to your server where you can select Azure Stack or your base prep Windows Server instance, followed by reboot.

 

After the reboot occurs and you choose to boot into Azure Stack, this is the server instance that will actually run the deployment script, as well as build and launch all the VMs for the Azure Stack TP3 PoC. This is where I recommend having a rough sketch like the above to annotate layers as you go, to remember what layer you are working at. Don't worry, it becomes much easier once all is said and done.

 

Speaking of preparing your server, refer to the Microsoft specs; however, in general give the server as much RAM and as many cores as possible. Also, if possible, place the system disk on a flash SSD (SAS, SATA, NVMe) and make sure that it has at least 200GB, however 250 or even 300GB is better (just in case you need more space).

 

Additional configuration tips include allocating four data disks for Azure Stack; if possible make these SSDs as well, however it is more important IMHO to have at least the system disk on fast flash SSD.

Another tip is to enable only one network card or NIC and put it into static vs. DHCP address mode to make things easier later.
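
If you want to script the static addressing on the prep Windows Server, here is a minimal sketch; the adapter alias and all addresses are examples you would replace with your own:

# Assign a static IPv4 address, gateway and DNS server to the single NIC
# (interface alias and addresses are placeholders).
New-NetIPAddress -InterfaceAlias 'Ethernet0' -IPAddress 192.168.1.50 -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias 'Ethernet0' -ServerAddresses 192.168.1.10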

 

Tip: If running nested, vSphere 6.5 worked the smoothest, as I had various issues or inconsistencies with earlier VMware versions, even with VMs that otherwise ran nested just fine.

 

Tip: Why run nested? Simple, I wanted to be able to use VMware tools, do snapshots to go back in time, plus share the server with some other activities until ready to give Azure Stack TP3 its own PM.

 

Tip: Do not connect the POC machine to the following subnets (192.168.200.0/24, 192.168.100.0/27, 192.168.101.0/26, 192.168.102.0/24, 192.168.103.0/25, 192.168.104.0/25) as Azure Stack TP3 uses those.

 

storageio azure stack tp3 vmware configuration

 

Since I decided to deploy using a nested VM on VMware, there were a few extra steps needed, which I have included as tips and notes. Following is a view via the vSphere client of the ESXi host and VM configuration.

 

The following image combines a couple of different things including:

A: Showing the contents of C:\Azurestack_Supportfiles directory

B: Modifying the PrepareBootFromVHD.ps1 file if deploying on virtual machine (See tips and notes)

C: Showing contents of staging area including individual .bin files along with large CloudBuilder.vhdx

D: Running the PowerShell script commands to prepare the PrepareBootFromVHD.ps1 and related items

 

preparing azure stack tp3 cloudbuilder for nested vmware deployment

 

From PowerShell (administrator):

# Variables
$Uri = 'https://raw.githubusercontent.com/Azure/AzureStack/master/Deployment/'
$LocalPath = 'c:\AzureStack_SupportFiles'

# Create folder
New-Item $LocalPath -type directory

# Download files
( 'BootMenuNoKVM.ps1', 'PrepareBootFromVHD.ps1', 'Unattend.xml', 'unattend_NoKVM.xml') | foreach { Invoke-WebRequest ($uri + $_) -OutFile ($LocalPath + '\' + $_) }

After you do the above, decide if you will be using an Unattend.xml or manual entry of items for building the Azure Stack deployment server (e.g. a Windows Server). Note that the above PowerShell script creates the C:\AzureStack_SupportFiles folder and downloads the script files for building the cloud image using the previously downloaded Azure Stack CloudBuilder.vhdx (which should be in C:\).

 

A note and tip: if you are doing a VMware or virtual machine based deployment of the TP3 PoC, you will need to change the PrepareBootFromVHD.ps1 script in the Azure Stack support files folder. Here is a good resource on what gets changed, via GitHub, which shows an edit on or about line 87 of PrepareBootFromVHD.ps1. If you run the PrepareBootFromVHD.ps1 script on a virtual machine you will get an error message; the fix is relatively easy (after I found this post).

 

Look in PrepareBootFromVHD.ps1 for something like the following around line 87:

if ((Get-Disk | where {$_.IsBoot -eq $true}).Model -match 'Virtual Disk') {
    Write-Host "The server is currently already booted from a virtual hard disk, to boot the server from the CloudBuilder.vhdx you will need to run this script on an Operating System that is installed on the physical disk of this server."
    Exit
}

You can either remove the "Exit" command, or change the test string "Virtual Disk" to something like "X"; for fun I did both (and it worked).
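
For illustration, here is a hedged sketch of what the edited check might look like; the string "X" is arbitrary so the match never succeeds on a VM, and you should verify it against your own copy of the script:

# Modified check: the model test no longer matches 'Virtual Disk', and the
# Exit has been removed, so the script continues when booted from a VHD.
if ((Get-Disk | where {$_.IsBoot -eq $true}).Model -match 'X') {
    Write-Host "Booted from a virtual hard disk, continuing anyway for a nested lab deployment."
}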

 

Note that you only have to make the above change, and another one in a later step, if you are deploying Azure Stack TP3 as a virtual machine.

 

Once you are ready, go ahead and launch the PrepareBootFromVHD.ps1 script which will set the BCDBoot entry (more info here).

 

azure stack tp3 cloudbuilder nested vmware deployment

 

You will see a reboot and install; this is installing what will be called the physical instance. Note that this is really being installed on the VM system drive as a secondary boot option (e.g. Azure Stack).

 

azure stack tp3 dual boot option

 

After the reboot, log in to the new Azure Stack base system and complete any configuration, including adding VMware Tools if using VMware nested. Some other things to do include making sure you have your single network adapter set to static (it makes things easier), plus any other updates or customizations. Before you run the next steps, you need to decide if you are going to use Azure Active Directory (AAD) or local ADFS.

 

Note that if you are not running on a virtual machine, simply open a PowerShell (administrator) session, and run the deploy script. Refer to here for more guidance on the various options available including discussion on using AAD or ADFS.

 

Note that if you run the deployment script on a virtual machine, you will get an error, which is addressed in the next section; otherwise, sit back and watch the progress.

CloudBuilder Deployment Time

Once you have your Azure Stack deployment system and environment ready, including a snapshot if on a virtual machine, launch the PowerShell deployment script. Note that you will need to have decided if deploying with Azure Active Directory (AAD) or Active Directory Federation Services (ADFS) for standalone aka submarine mode. There are also other options you can select as part of the deployment, discussed in the Azure Stack tips here (a must read) and here. I chose to do a submarine-mode (e.g. not connected to public Azure and AAD) deployment.

 

From PowerShell (administrator):

cd C:\CloudDeployment\Setup
$adminpass = ConvertTo-SecureString "youradminpass" -AsPlainText -Force
.\InstallAzureStackPOC.ps1 -AdminPassword $adminpass -UseADFS

Tips for Deploying on VMware Virtual Machines

Here is a good tip via Gareth Jones (@garethjones294) that I found useful for updating one of the deployment script files (BareMetal_Tests.ps1, located in the C:\CloudDeployment\Roles\PhysicalMachines\Tests folder) so that it would skip the bare metal (PM) vs. VM tests. Another good resource, even though it is for TP2 and earlier versions of VMware, is the TP2 deployment experiences by Niklas Akerlund (@vNiklas).

 

Note that this is a bit of a chicken-and-egg scenario unless you are proficient at digging into script files, since the BareMetal_Tests.ps1 file does not get unpacked until you run the CloudBuilder deployment script. If you run the script and get an error, make the changes below and rerun the script as noted. Once you make the modification to the BareMetal_Tests.ps1 file, keep a copy in a safe place for future use.

 

Here are some more tips for deploying Azure Stack on VMware.

 

Per the tip mentioned above via Gareth Jones (tip: read Gareth's post vs. simply cutting and pasting the following, which is more of a guide):

 

  • Open the BareMetal_Tests.ps1 file in PowerShell ISE and navigate to line 376 (or in that area).
  • Change $false to $true, which will stop the script failing when checking to see if Azure Stack is running inside a VM.
  • Next go to line 453.
  • Change the last part of the line to read "Should Not BeLessThan 0". This will stop the script checking for the required amount of cores available. (A sketch of these edits follows below.)
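
Here is a hedged before/after sketch of those two edits; the variable names are placeholders since I am paraphrasing, and the exact line numbers vary by TP3 build, so match on content rather than position:

# Around line 376: report the deployment as physical so the VM check passes
# (the original sets this to $false; the tip is to flip it to $true).
$isPhysicalMachine = $true

# Around line 453: relax the core-count assertion so any core count passes
# (the original asserts a higher minimum).
$coreCount | Should Not BeLessThan 0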

 

After you make the above correction as with any error (and fix) during Azure Stack TP3 PoC deployment, simply run the following.

cd C:\CloudDeployment\Setup
.\InstallAzureStackPOC.ps1 -rerun

Refer to the extra links in the where to learn more section below, which offer various tips, tricks and insight that I found useful, particularly for deploying on VMware aka nested. Also in the links below are tips on general Azure Stack, TP2, TP3, adding services, among other insight.

 

starting azure stack tp3 deployment

 

Tip: If you are deploying Azure Stack TP3 PoC on a virtual machine, once you start the script above, copy the modified BareMetal_Tests.ps1 file to a safe place. Once the CloudBuilder deployment starts, sit back and wait; if you are using SSDs it will take a while, and if using HDDs it will take a long while (up to hours), however check in on it now and then to see progress or any errors. Note that some of the common errors will occur very early in the deployment, such as the BareMetal_Tests.ps1 one mentioned above.

 

azure stack tp3 deployment finished

Check in periodically to see how the deployment is progressing, as well as what is occurring. If you have the time, watch some of the scripts, as you can see some interesting things such as the software-defined data center (SDDC) aka software-defined data infrastructure (SDDI) aka Azure Stack virtual environment being created. This includes virtual machine creation and population, creating the software-defined storage using Storage Spaces Direct (S2D), virtual networks and Active Directory along with domain controllers, among other activity.

azure stack tp3 deployment progress

After Azure Stack Deployment Completes

 

After you see that the deployment has completed, you can try accessing the management portal, however there may be some background processing still running. Here is a good tip post from Microsoft on connecting to Azure Stack using Remote Desktop (RDP) access. Use RDP from the Azure Stack deployment Windows Server and connect to a virtual machine named MAS-CON01, launch Server Manager, and for Local Server disable Internet Explorer Enhanced Security (make sure you are on the right system, see the tip mentioned above). Disconnect from MAS-CON01 (refer to the Azure Stack architecture image above), then reconnect and launch Internet Explorer with the portal URL (note: the URL the documentation says to use did not work for me).

 

Note that the username for the Azure Stack system is AzureStack\AzureStackAdmin, with the password you set for the administrator during setup. If you get an error, verify the URLs, check your network connectivity, wait a few minutes, as well as verify what server you are trying to connect from and to. Keep in mind that even if deploying on a PM or BM (e.g. a non-virtual server or VM), the Azure Stack TP3 PoC deployment creates a "virtual" software-defined environment with servers, storage (Azure Stack uses Storage Spaces Direct [S2D]) and software-defined networking.

 

accessing azure stack tp3 management portal dashboard

 

Once able to connect to Azure Stack, you can add new services including virtual machine image instances such as Windows (use the Server 2016 ISO that is part of Azure Stack downloads), Linux or others. You can also go to these Microsoft resources for some first learning scenarios, using the management portals, configuring PowerShell and troubleshooting.

Where to learn more

The following provide more information and insight about Azure, Azure Stack, Microsoft and Windows among related topics.  

What this all means

A common question is whether there is demand for private and hybrid cloud; in fact, some industry expert pundits have even said private or hybrid are dead, which is interesting: how can something be dead if it is just getting started? Likewise, it is early to tell if Azure Stack will gain traction with various organizations, some of whom may have tried or struggled with OpenStack among others.

 

Given the large number of Microsoft Windows-based servers on VMware, OpenStack, public cloud services and other platforms, along with the continued growing popularity of Azure, a solution such as Azure Stack provides an attractive option for many environments. That leads to the question of whether Azure Stack is essentially a replacement for Windows Server or Hyper-V, and whether it is only for Windows guest operating systems. At this point, indeed, Windows would be an attractive and comfortable option, however, given the large number of Linux-based guests running on Hyper-V as well as Azure public, those are also primary candidates, as are containers and other services.

software defined data infrastructures SDDI and SDDC

 

Some will say that if OpenStack, being free open source, is struggling in many organizations, how can Microsoft have success with Azure Stack? The answer could be that some organizations have struggled with OpenStack while others have not, due to a lack of commercial services and turnkey support. Having installed both OpenStack and Azure Stack (as well as VMware among others), Azure Stack, at least the TP3 PoC, is easy to install, granted it is limited to one node, unlike the production versions. Likewise, there are easy-to-use appliance versions of OpenStack that are limited in scale, as well as more involved installs that unlock full functionality.

 

OpenStack, Azure Stack, VMware and others have their places, alongside, or supporting, containers along with other tools. In some cases, those technologies may exist in the same environment supporting different workloads, as well as accessing various public clouds; after all, hybrid is the home run for many if not most legacy IT environments.

 

Ok, nuff said (for now...).

Cheers
Gs


Azure Stack Technical Preview 3 (TP3) Overview Preview Review

Perhaps you are aware of or use Microsoft Azure, but how about Azure Stack?

 

This is part one of a two-part series looking at Microsoft Azure Stack providing an overview, preview and review. Read part two here that looks at my experiences installing Microsoft Azure Stack Technical Preview 3 (TP3).

 

For those who are not aware, Azure Stack is a private on-premises extension of the Azure public cloud environment. Azure Stack is now in technical preview three (e.g. TP3), or what you might also refer to as a beta (get the bits here).

 

In addition to being available via download as a preview, Microsoft is also working with vendors such as Cisco, Dell EMC, HPE, Lenovo and others who have announced Azure Stack support. Vendors such as Dell EMC have also made proof of concept kits available that you can buy, including a server with storage and software. Microsoft has also indicated that once production versions launch, scaling from a few to many nodes, a single-node proof of concept or development system will also remain available.

 

software defined data infrastructure SDDI and SDDC
Software-Defined Data Infrastructures (SDDI) aka Software-defined Data Centers, Cloud, Virtual and Legacy

 

Besides being an on-premises, private cloud variant, Azure Stack is also hybrid capable, being able to work with the public Azure cloud. In addition, Azure Stack services, and in particular workloads, can also work with traditional Microsoft, Linux and other platforms. You can use prebuilt solutions from the Azure marketplace, in addition to developing your applications using Azure services and DevOps tools. Azure Stack enables hybrid deployment into public or private cloud to balance flexibility, control and your needs.

Azure Stack Overview

Microsoft Azure Stack is an on-premises (e.g. in your own data center), private (or hybrid when connected to Azure) cloud platform. Currently Azure Stack is in Technical Preview 3 (e.g. TP3) and available as a proof of concept (POC) download from Microsoft. You can use Azure Stack TP3 as a POC for learning, demonstrating and trying features, among other activities. Here is a link to a Microsoft video providing an overview of Azure Stack, and here is a good summary of roadmap, licensing and related items.

 

In summary, Microsoft Azure Stack is:

  • An onsite, on-premises, in-your-data-center extension of the Microsoft Azure public cloud
  • Enables private and hybrid cloud with strong integration along with common experiences with Azure
  • Adopt, deploy and leverage cloud on your terms and timeline, choosing what works best for you
  • Common processes, tools, interfaces, management and user experiences
  • Leverages speed of deployment and configuration with a purpose-built integrated solution
  • Supports existing and cloud-native Windows, Linux, container and other services
  • Available as a public preview via software download, as well as from vendors offering solutions

What is Azure Stack Technical Preview 3 (TP3)

This version of Azure Stack is a single node running on a lone physical machine (PM) aka bare metal (BM). However, it can also be installed into a virtual machine (VM) using nesting. For example, I have Azure Stack TP3 running nested on a VMware vSphere ESXi 6.5 system with a Windows Server 2016 VM as its base operating system.

 

Microsoft Azure Stack architecture
    Click here or on the above image to view list of VMs and other services (Image via Microsoft.com)

 

The TP3 POC Azure Stack is not intended for production environments, only for testing, evaluation, learning and demonstrations as part of its terms of use. This version of Azure Stack is associated with a single-node identity, such as Azure Active Directory (AAD) integrated with Azure, or Active Directory Federation Services (ADFS) for standalone mode. Note that since this is a single-server deployment, it is not intended for performance, rather for evaluating functionality, features, APIs and other activities. Learn more about Azure Stack TP3 details here (or click on the image), including the names of the various virtual machines (VMs) as well as their roles.

 

Where to learn more

 

The following provide more information and insight about Azure, Azure Stack, Microsoft and Windows among related topics.

  

What this all means

A common question is whether there is demand for private and hybrid cloud; in fact, some industry expert pundits have even said private or hybrid are dead, which is interesting: how can something be dead if it is just getting started? Likewise, it is early to tell if Azure Stack will gain traction with various organizations, some of whom may have tried or struggled with OpenStack among others.

 

Given the large number of Microsoft Windows-based servers on VMware, OpenStack, public cloud services and other platforms, along with the continued growing popularity of Azure, a solution such as Azure Stack provides an attractive option for many environments. That leads to the question of whether Azure Stack is essentially a replacement for Windows Server or Hyper-V, and whether it is only for Windows guest operating systems. At this point, indeed, Windows would be an attractive and comfortable option, however, given the large number of Linux-based guests running on Hyper-V as well as Azure public, those are also primary candidates, as are containers and other services.

 

Continue reading more in part two of this two-part series here including installing Microsoft Azure Stack TP3.

 

Ok, nuff said (for now...).

Cheers
Gs

These days, ESXi itself can use the VMXNET3 adapter.

However, this only applies to nested ESXi.

nested_esxi_vmxnet3.png


Looking at the ESXi 5.1 release notes, the "Networking Issues" section contains the following entry.

 

VMware ESXi 5.1 Update 2 Release Notes

https://www.vmware.com/jp/support/support-resources/pubs/vsphere-esxi-vcenter-server-pubs/vsphere-esxi-51u2-release-notes

Virtual machines running ESX that also use VMXNET3 as a pNIC might crash

Because VMXNET3 support is experimental, virtual machines running ESX as a guest that also use VMXNET3 as a pNIC might crash. The default NIC for ESX virtual machines is e1000, and this issue occurs only when you override the default and select VMXNET3.

Workaround: Use e1000 or e1000e as the pNIC for the ESX virtual machine.

In other words, this is not an issue with VMXNET3 vNICs created for ordinary VMs, but with VMXNET3 vNICs created for a nested ESXi VM (which look like pNICs from the nested ESXi's point of view).

Using a VMXNET3 network adapter in an ordinary Windows or Linux VM is not affected by the issue above.

 

Configuration for attaching VMXNET3 to a nested ESXi

 

In the "Edit Settings" → "Options" tab of the nested ESXi VM, set the guest OS version to "VMware ESXi 5.x".

esxi-vmxnet3-1.png

 

Then create a network adapter whose adapter type is "VMXNET 3".

esxi-vmxnet3-2.png
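
If you prefer PowerCLI over the GUI, here is a minimal sketch; the VM name "nested-esxi-01" and port group "pg-nested" are placeholders:

# Add a VMXNET3 network adapter to the nested ESXi VM.
Get-VM nested-esxi-01 | New-NetworkAdapter -NetworkName 'pg-nested' -Type Vmxnet3 -StartConnected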

 

How VMXNET3 appears in the nested ESXi

 

From the vSphere Client it looks like the following: vmnic0 appears as a 10Gbps physical NIC (pNIC) called "VMware Inc. vmxnet3 Virtual Ethernet Controller".

esxi-vmxnet3-3.png

 

Logging in to the nested ESXi console directly (SSH in this example), the VMXNET3 vNIC looks like the following.

esxi-vmxnet3-4.png
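
The same check can also be done from PowerCLI without opening SSH; this is a sketch assuming the nested host is already connected to your session (the host name is a placeholder):

# List the physical NICs the nested ESXi host sees; the VMXNET3 vNIC
# should show up with the vmxnet3 driver.
$esxcli = Get-EsxCli -VMHost nested-esxi-01.example.com -V2
$esxcli.network.nic.list.Invoke() | Select-Object Name,Driver,Speed,Description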

 

In any case, nested ESXi itself is positioned as a test-use feature (unsupported in production environments), but if you want more stability, using E1000 or E1000E is probably a good idea.

Also, if you run ESXi in a home lab, you usually won't have a 10Gbps NIC, so there should be little benefit to using VMXNET3.

 

That said, if you have many nested ESXi hosts running on the same physical ESXi, trying VMXNET3 might be interesting. Just be prepared for a purple screen...


That's it for VMXNET3 with nested ESXi.


When testing in a nested ESXi environment, you may want to set a VLAN on a virtual switch (port group), for example to get closer to a real environment.


When VMs communicate over the network in a nested ESXi environment, the physical ESXi's virtual switch has to pass traffic from MAC addresses that it would normally not expect.


For that reason, allow promiscuous mode on the virtual switch or port group of the physical ESXi (the outer, non-nested ESXi).


And if you want to set a VLAN ID on a port group of the nested ESXi's virtual switch (vSS or vDS), then on the virtual switch or port group of the physical ESXi (the outer, non-nested ESXi), allow promiscuous mode and additionally set VLAN ID 4095.

 

If you do not configure VLANs on the nested ESXi


In this case, allow promiscuous mode on the physical ESXi's port group.

Nested_esxi_vlan4095_1.png

 

If you want to configure VLANs (VST) on the nested ESXi's virtual switch

  1. On the physical ESXi's port group, allow promiscuous mode and set VLAN ID 4095.
    → Traffic reaches the nested ESXi with its VLAN tags still attached.
  2. On the nested ESXi's port group, set the VLAN ID that the VMs should use.
    → The VLAN tag is removed here.

Nested_esxi_vlan4095_2.png


To actually configure this, set VLAN 4095 on the "General" tab of the physical ESXi's port group, as shown below,

vlan4095-1.png


and on the "Security" tab, set "Promiscuous Mode" to "Accept".

vlan4095-2.png


Connect the port group above to the nested ESXi. Then, on the nested ESXi's virtual switches and port groups, set VLAN IDs without paying any special attention to the fact that they are nested.
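
The outer-ESXi side can also be configured with PowerCLI; here is a minimal sketch, assuming a standard vSwitch port group named "pg-nested-trunk" on the physical host (both names are placeholders):

# On the physical ESXi: trunk all VLANs (4095) and allow promiscuous mode.
$pg = Get-VMHost physical-esxi.example.com | Get-VirtualPortGroup -Name 'pg-nested-trunk'
$pg | Set-VirtualPortGroup -VLanId 4095
$pg | Get-SecurityPolicy | Set-SecurityPolicy -AllowPromiscuous $true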


By the way, for more on VLANs in ESXi, see the following:

ESXi and vCenter Server 5.5 Documentation

vSphere Networking > Networking Overview

VLAN Configuration

http://pubs.vmware.com/vsphere-55/topic/com.vmware.vsphere.networking.doc/GUID-7225A28C-DAAB-4E90-AE8C-795A755FBE27.html


Sample configuration of virtual switch VLAN tagging (VST Mode) (2053120)

http://kb.vmware.com/kb/2053120

 

 

That's it for using VLAN 4095 with nested ESXi.

I previously posted about how to make an ordinary HDD appear as an SSD on ESXi 5.x.


However, according to the article below, there is a better way to fake an SSD for a nested ESXi.

virtuallyGhetto
Emulating an SSD Virtual Disk in a VMware Environment
http://www.virtuallyghetto.com/2013/07/emulating-ssd-virtual-disk-in-vmware.html#sthash.6UTetKIX.dpuf

 

So I tried it right away. Apply the following settings to the nested ESXi VM.

  1. Set the virtual machine hardware version to 8 or later (ESXi 5.x uses 8 or later by default).
  2. Set the following configuration parameter for the virtual disk:
    scsiX:Y.virtualSSD = 1

 

For example, to present the virtual disk attached to SCSI (0:2) as an SSD, as in the screen below,
nesxi_ssd1.png

set "scsi0:2.virtualSSD = 1".

Note: this parameter is not displayed by default, so enter it via "Add Row".
nesxi_ssd2.png
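
This can also be set with PowerCLI instead of the GUI; here is a sketch assuming the nested ESXi VM is powered off and named "nested-esxi-01" (a placeholder):

# Add the scsi0:2.virtualSSD configuration parameter to the VM's VMX.
Get-VM nested-esxi-01 | New-AdvancedSetting -Name 'scsi0:2.virtualSSD' -Value 1 -Confirm:$false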

When the nested ESXi VM boots, the device corresponding to "SCSI 0:2" is recognized as an SSD.

nesxi_ssd3.png

 

This works by rewriting part of the virtual disk's VPD (Vital Product Data, i.e. product information) to make it look like an SSD.

It seems to be achieved by setting the field that indicates the disk's rotation rate to "non-rotating storage device" (i.e. something SSD-like).

 

I compared the previous method of setting a SATP (Storage Array Type Plugin) rule with this method of setting a configuration parameter (VMX parameter).

NestedESXi_SSD.png

With the SATP-rule method, the nested ESXi first recognizes the VMDK file as a virtual disk, and then a rule that determines the storage device's characteristics makes it look like an SSD.


Command lines for the SATP-rule method

~ # esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba1:C0:T2:L0 --option=enable_ssd

~ # esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T2:L0

 

The configuration-parameter method, on the other hand, makes the nested ESXi recognize the VMDK file as an SSD from the start.

 

Comparing the two, although I cannot tell the detailed internal behavior, the scsiX:Y.virtualSSD setting seems simpler than the SATP rule, and likely looks more SSD-like from the nested ESXi's perspective.

 

That's how to present a fake SSD to a nested ESXi.

Note: This post attempts to provide information to those looking to build a simple, nested vSphere lab who may require some basic examples of iSCSI configuration. The examples are intended to be as simple as possible and do not include security configuration or performance tuning.

Building a nested vSphere lab is one of the best ways to gain hands-on experience in a virtual environment. Assuming you don't want to practice on your production environment and that you don't have access to a full hardware lab, it is actually a very good alternative.

 

There are many guides and tutorials available that walk through a complete lab setup and configuration. You can build it within ESXi, VMware Workstation, VMware Fusion and others. The most popular lab build available is the excellent AutoLab on labguides.com. The outstanding work provided by the creators will allow you to set up your binaries in predetermined locations and then literally kick off an automated install of a complete environment. It's incredible! Just think how much time this saves for anyone who needs a quick, fresh lab.

 

For beginners, however, an automated lab may not provide the greatest benefit. To get your feet wet with vSphere you really should go through the complete process and perform it manually. You haven't really installed vCenter until you have had to destroy a few databases, lost all communication because you misconfigured a port group, and locked yourself out of your own Active Directory domain. Not that it happened to me…

Your first shared storage

One of the first challenges you will encounter when attempting to build a functioning vSphere environment revolves around shared storage.
Shared storage is the basis for many of the advanced features enabled by virtualization. vMotion, HA, DRS, SvMotion cannot truly work without shared storage. So how do you accomplish this in a home lab or even on a single laptop?

There are three types of shared storage that can be used: Fibre Channel, iSCSI and NFS. The vast majority of home lab builders will not have a fibre infrastructure in their basement, which forces us to consider the remaining two. Luckily, both protocols use Ethernet, so they are not that hard to set up. Both iSCSI and NFS are great solutions and each has its benefits. In this post I will cover iSCSI and leave NFS for a future write-up.

iSCSI

iSCSI is an IP-based standard that uses the network infrastructure to transfer data. Because the commands it uses are regular SCSI commands, the connected computer believes its storage is directly attached, when in reality it is delivered over the network. iSCSI is a SAN storage protocol, since it passes blocks of data. This is different from NFS, which is file storage (NAS) and viewed as a share on the network.
Many newcomers already have experience with NFS and are familiar with the export file, mounting commands and root permission issues. iSCSI is new territory for many and requires some research. But not much.

iSCSI can be set up, configured and tweaked to provide incredible performance. With the advances in Ethernet speeds the sky is truly the limit for this protocol. But that is down the road. At the moment I just want to walk people trying to set up a home lab through the very simple process of adding some iSCSI datastores.

The Minimum You Need to Know
  • Initiator – Initiates a session by sending a SCSI command
  • Target – Listens for Initiators’ commands and provides input/output data transfers. The Target is the one that provides one or more LUNs to the initiator.
  • Naming and addressing – Each iSCSI element that uses the network has a unique and permanent iSCSI name and is assigned an address. The most common name is the IQN – iSCSI Qualified Name, which has the following format:
    iqn.yyyy-mm.naming-authority:unique-name
    Example: iqn.2004-04.com.qnap:ts-412:iscsi.qnap02.ccb19e
    - The date refers to the year and month when the naming authority was established.
    - The naming authority is usually represented as a reversed syntax of its Internet domain name.
    - The unique name is any name you want to use.
Step 1 – Locate your storage.

Determine what storage will be presented to your ESXi hosts. This is one of the parts I enjoy most when setting up a home lab. Obviously I go completely nuts and attempt to use something different each time just for the hell of it. There are a number of options:

  1. A dedicated NAS – You may have decided to invest in a small home or office NAS. These little boxes are INCREDIBLE! I have a Qnap TS-412 at home and I love it. It can do everything.
  2. Build your own NAS – Solutions like FreeNAS and Openfiler can provide you with as much and even more functionality as a store-bought NAS. All you need is an available PC, some hard disks and a 1Gb Ethernet connection. Building your NAS is another fun experience that also happens to teach you a lot during the process.
  3. Build your own Virtual NAS – If you have enough power in your virtual host, be it ESXi, workstation or anything else, you can build that same NAS and run it as a VM. It will be there right next to your vCenter, Domain Controller and Database. You can even download a vApp and avoid all the setup, but why would you do that? You want to learn, right?
  4. Use software to create an iSCSI target. If you don’t have any of the above options then this is your solution. There are several free iSCSI targets that operate completely in software. All you need to do is install the software and point it to a specific folder. This folder can even be on the same machine where the target is installed. I will show two examples.
Step 2 - Enable and configure iSCSI.

Note: Since this is an introductory tutorial I will be creating the iSCSI targets with no advanced options and I will not be enabling security. Obviously in a production or even a corporate test environment you should always enable security options.

Example 1: iSCSI on a NAS

The Qnap TS-412 offers a quick configuration wizard for iSCSI. The wizard will create both the iSCSI target and the required LUN in a single step.

 

Qnap_iscsi_01

Qnap_iscsi_02

Qnap_iscsi_03

 

Example 2 – Software iSCSI using the free Microsoft Software iSCSI Target

 

The Microsoft iSCSI target can be downloaded for free here. Please note that it requires Server 2003 or 2008/R2 to install.

You will need to dedicate space on which the software will create a virtual disk. This virtual disk will be the LUN presented to the initiator.

 

msiscsi01

msiscsi02

 

At this point you will be asked to provide the identifier of the initiator. If using the ESXi iSCSI adapter you can copy the IQN from the configuration menu and paste it in.

 

msiscsi03

 

After the target has been created you will need to add a Virtual Hard Disk to it. I added another disk to my VM and will point my target at it.

 

msiscsi04

 

Note that when choosing the location to place the VHD, you will give it a name with a .vhd extension.

 

msiscsi05

 

These are the properties of the VHD that is connected to the target.

 

msiscsi06

 

Step 3 - Connect ESXi host Initiator to iSCSI Target

Now we finally get to work with ESXi. Remember that since vSphere 5.0, the software iSCSI adapter has to be added from the Configuration > Storage Adapters menu.

After adding the adapter, go to its properties and select the "Dynamic Discovery" tab. Enter the IP of the target. If you didn't make any changes to the defaults, leave the port on the usual 3260.
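
For those who prefer scripting, here is a minimal PowerCLI sketch of the same steps; the host name and target IP are placeholders for your own environment:

# Enable the software iSCSI adapter, add a dynamic (Send Targets) entry,
# then rescan so the target's LUNs appear.
$vmhost = Get-VMHost esxi01.lab.local
Get-VMHostStorage -VMHost $vmhost | Set-VMHostStorage -SoftwareIScsiEnabled $true
$hba = Get-VMHostHba -VMHost $vmhost -Type IScsi
New-IScsiHbaTarget -IScsiHba $hba -Address 192.168.1.20 -Port 3260 -Type Send
Get-VMHostStorage -VMHost $vmhost -RescanAllHba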

 

msiscsi07

 

If all works well, ESXi will locate the target and the LUN presented. You will be prompted to perform a scan of the adapters. The new target will now appear in the bottom part of the screen.

 

msiscsi08

And there is my new Qnap target!

 

You can now proceed to create a datastore on the new target.

Well, that's it for now. I hope you found something helpful in this post, it was a good refresher for me.

If you would like to see anything else or have any questions or comments, leave a reply.

Hi,

for some time now it has been possible to run ESX(i) inside VMware Workstation; in other words, running the ESX(i) host as a virtual machine.

 

To be able to run ESX(i) virtual, the host where Workstation is running needs to meet some requirements, because ESX(i) runs as a 64-bit virtual machine.

 

If you use Workstation 7.x or later, running ESX(i) virtual is a lot easier. Just select "VMware ESX" as the guest operating system when creating the virtual machine. Nothing else is needed.

 

But if you use an older version than 7.x, some adjustments need to be made. Check these guides to learn how to do it:

 

http://www.vladan.fr/how-to-install-and-manage-esxi-server-inside-vmware-workstation-65/

http://www.vmwarevideos.com/running-vmware-esxi-4-vsphere-in-vmware-workstation-video

http://www.youtube.com/watch?v=1nNK5wFVppg

 

Some interesting notes from VMware about running ESX(i) virtual.

Here is more information about this topic.

 

Regards,
