
The vCenter Server Appliance (VCSA) can be deployed not only from the GUI but also from the CLI.

 

I have covered this in earlier posts:

Deploying VCSA 6.5 U1 from the CLI (vCenter + embedded PSC)

Installing VCSA 6.0 from the CLI (external PSC + vCenter)

 

This time, I will deploy VCSA 6.7 from the CLI.

  • The VCSA used is vCenter Server 6.7.0a (Build 8546234).
  • Two vCenter Servers are deployed in Enhanced Linked Mode (ELM).

 

Mounting the ISO image file.

Mount the VCSA ISO image file on the OS from which you run the deployment.

This time, I am running it from PowerShell on Windows 10.

The following ISO image file was used:

  • VMware-VCSA-all-6.7.0-8546234.iso

 

When the ISO is mounted as the D: drive, vcsa-deploy.exe is found in the following folder.

PS> D:

PS> cd vcsa-cli-installer/win32/

 

Creating the JSON file (first node).

Create a JSON file that specifies the deployment parameters.

Sample JSON files named "embedded_vCSA_~.json" are provided in the vcsa-cli-installer/templates/install folder of the ISO.

This time, I used the "_on_ESXi.json" template, which deploys to an ESXi host.

PS> ls D:\vcsa-cli-installer\templates\install | select Name

 

 

Name
----
PSC_first_instance_on_ESXi.json
PSC_first_instance_on_VC.json
PSC_replication_on_ESXi.json
PSC_replication_on_VC.json
embedded_vCSA_on_ESXi.json
embedded_vCSA_on_VC.json
embedded_vCSA_replication_on_ESXi.json
embedded_vCSA_replication_on_VC.json
vCSA_on_ESXi.json
vCSA_on_VC.json

 

For the first vCenter, I created the JSON as follows.

In earlier VCSA versions the parameters were dot-separated, such as deployment.option, but they are now underscore-separated, such as deployment_option.

The host names (FQDNs) specified in the JSON should be registered in the DNS server beforehand so that they resolve.

The host names, IP addresses, and other parameter values shown here are from my lab.

If you do not specify passwords in the file, you will be prompted for them interactively during the install.

 

C:\work\lab-vc-01.json

lab-vc-01.json · GitHub

{
    "__version": "2.13.0",
    "__comments": "deploy a VCSA with an embedded-PSC on an ESXi host.",
    "new_vcsa": {
        "esxi": {
            "hostname": "infra-esxi-06.go-lab.jp",
            "username": "root",
            "password": "VMware1!",
            "deployment_network": "dvpg-vc-deploy-0000",
            "datastore": "vsanDatastore"
        },
        "appliance": {
            "thin_disk_mode": true,
            "deployment_option": "tiny",
            "name": "lab-vc-01"
        },
        "network": {
            "ip_family": "ipv4",
            "mode": "static",
            "ip": "192.168.10.11",
            "dns_servers": [
                "192.168.1.101",
                "192.168.1.102"
            ],
            "prefix": "24",
            "gateway": "192.168.10.1",
            "system_name": "lab-vc-01.go-lab.jp"
        },
        "os": {
            "password": "VMware1!",
            "ntp_servers": [
                "192.168.1.101",
                "192.168.1.102"
            ],
            "ssh_enable": true
        },
        "sso": {
            "password": "VMware1!",
            "domain_name": "vsphere.local"
        }
    },
    "ceip": {
        "settings": {
            "ceip_enabled": false
        }
    }
}
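Before running the installer, it can be worth sanity-checking the parameter file with a short script. The sketch below is a hedged illustration: the required-key list is inferred from the template above, not from an official schema.

```python
import json

# Sections we expect in the deployment JSON, based on the template above
# (an assumption for illustration, not an official schema).
REQUIRED = {
    "new_vcsa": ["esxi", "appliance", "network", "os", "sso"],
    "ceip": ["settings"],
}

def check_vcsa_json(text):
    """Parse the deployment JSON and report missing expected sections."""
    data = json.loads(text)  # raises ValueError on syntax errors
    missing = []
    for section, subkeys in REQUIRED.items():
        if section not in data:
            missing.append(section)
            continue
        missing += [f"{section}.{k}" for k in subkeys if k not in data[section]]
    return missing

sample = '{"new_vcsa": {"esxi": {}, "appliance": {}, "network": {}, "os": {}, "sso": {}}, "ceip": {"settings": {}}}'
print(check_vcsa_json(sample))  # an empty list means all expected keys exist
```

This only catches syntax errors and missing sections; the installer's own --precheck-only option remains the authoritative validation.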

 

Creating the JSON file (second node).

For the second vCenter, I created the JSON as follows.

Since this is the second node in ELM, it differs from the first one not only in the VM name and network settings but also in the sso section (first_instance, replication_partner_hostname, and sso_port).

 

C:\work\lab-vc-02.json

lab-vc-02.json · GitHub

{
    "__version": "2.13.0",
    "__comments": "deploy a VCSA with an embedded-PSC as a replication partner to another embedded-VCSA, on an ESXi host.",
    "new_vcsa": {
        "esxi": {
            "hostname": "infra-esxi-06.go-lab.jp",
            "username": "root",
            "password": "VMware1!",
            "deployment_network": "dvpg-vc-deploy-0000",
            "datastore": "vsanDatastore"
        },
        "appliance": {
            "thin_disk_mode": true,
            "deployment_option": "tiny",
            "name": "lab-vc-02"
        },
        "network": {
            "ip_family": "ipv4",
            "mode": "static",
            "ip": "192.168.10.12",
            "dns_servers": [
                "192.168.1.101",
                "192.168.1.102"
            ],
            "prefix": "24",
            "gateway": "192.168.10.1",
            "system_name": "lab-vc-02.go-lab.jp"
        },
        "os": {
            "password": "VMware1!",
            "ntp_servers": [
                "192.168.1.101",
                "192.168.1.102"
            ],
            "ssh_enable": true
        },
        "sso": {
            "password": "VMware1!",
            "domain_name": "vsphere.local",
            "first_instance": false,
            "replication_partner_hostname": "lab-vc-01.go-lab.jp",
            "sso_port": 443
        }
    },
    "ceip": {
        "settings": {
            "ceip_enabled": false
        }
    }
}
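The difference between the two parameter files can also be computed mechanically. The sketch below flattens nested dicts and lists keys that differ or exist on only one side; it is a generic illustration, not part of the installer.

```python
def flatten(d, prefix=""):
    """Flatten a nested dict into {'a.b.c': value} form."""
    out = {}
    for k, v in d.items():
        key = f"{prefix}{k}"
        if isinstance(v, dict):
            out.update(flatten(v, key + "."))
        else:
            out[key] = v
    return out

def diff(a, b):
    """Return sorted keys present in only one dict or with differing values."""
    fa, fb = flatten(a), flatten(b)
    return sorted(k for k in set(fa) | set(fb) if fa.get(k) != fb.get(k))

# Only the sso sections are shown here, since that is where the two
# ELM nodes differ beyond names and addresses.
node1 = {"sso": {"password": "x", "domain_name": "vsphere.local"}}
node2 = {"sso": {"password": "x", "domain_name": "vsphere.local",
                 "first_instance": False,
                 "replication_partner_hostname": "lab-vc-01.go-lab.jp",
                 "sso_port": 443}}
print(diff(node1, node2))
# ['sso.first_instance', 'sso.replication_partner_hostname', 'sso.sso_port']
```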

 

Deploying the VCSA (first node).

Run vcsa-deploy.exe with the JSON file you created.

A few notes:

  • When deploying to an ESXi host that is already registered to a vCenter with DRS enabled, the deployment fails if DRS moves the VCSA mid-process, so either point the deployment at the vCenter rather than the ESXi host, or work around it with an affinity rule.
  • When deploying to a vDS / distributed port group, set the port binding of the distributed port group to "Ephemeral".

Run a pre-check first.

The options differ from earlier versions: verification is now done with "--precheck-only" instead of "--verify-only".

"--accept-eula" has also become required.

PS> .\vcsa-deploy.exe install --no-esx-ssl-verify --accept-eula --precheck-only C:\work\lab-vc-01.json

 

Then deploy.

PS> .\vcsa-deploy.exe install --no-esx-ssl-verify --accept-eula C:\work\lab-vc-01.json

 

Deploying the VCSA (second node).

Run a pre-check first.

At this point, the pre-check fails with an error if the first VCSA does not exist yet.

PS> .\vcsa-deploy.exe install --no-esx-ssl-verify --accept-eula --precheck-only C:\work\lab-vc-02.json

 

Then deploy.

Taking a snapshot of the first VCSA at this point makes it easier to retry if the deployment fails.

PS> .\vcsa-deploy.exe install --no-esx-ssl-verify --accept-eula C:\work\lab-vc-02.json

 

The ELM vCenter Servers are now ready to use.

If you were already logged in to the vSphere Client / vSphere Web Client of the first VCSA, logging in again will show both VCSAs.

In VCSA 6.7, ELM with an embedded PSC (PSC and vCenter combined in a single VM) became the recommended topology, so this should be handy for building test environments and the like.

 

That's it for the CLI install of VCSA 6.7.

With PowerCLI, you can retrieve information about the files that make up a VM, including the path of the .vmx file that holds the VM's definition.

Here, I will retrieve the .vmx file path and then use it to re-register the VM on an ESXi host.

 

The target is a VM named "vm01".

The VM name vm01 is unique within the inventory of the vCenter that PowerCLI is connected to.

To avoid retyping the VM name on each command line, it is kept in the variable $vm_name.

PowerCLI> $vm_name = "vm01"

 

I have already connected to vCenter with Connect-VIServer.

Connecting to multiple vCenters from PowerCLI is covered in this (older) post:

How to connect to multiple vCenters from PowerCLI.

 

The VM is powered on, on the ESXi host hv-d01.go-lab.jp.

PowerCLI> Get-VM $vm_name | select Name,ResourcePool,Folder,VMHost,PowerState | fl

 

Name         : vm01

ResourcePool : Resources

Folder       : vm

VMHost       : hv-d01.go-lab.jp

PowerState   : PoweredOn

 

 

VM files as seen from PowerCLI.

The VM's files can be inspected as shown below.

The .vmx file is the one whose Type is "config".

Note that the listing includes not only the files that make up the VM, such as .vmx and .vmdk, but also the VM's log files (vmware.log).

PowerCLI> (Get-VM $vm_name).ExtensionData.LayoutEx.File | select Key,Type,Size,Name | ft -AutoSize

 

 

Key Type                 Size Name

--- ----                 ---- ----

  0 config               2998 [ds-nfs-repo-01] vm01/vm01.vmx

  1 nvram                8684 [ds-nfs-repo-01] vm01/vm01.nvram

  2 snapshotList            0 [ds-nfs-repo-01] vm01/vm01.vmsd

  3 diskDescriptor        549 [ds-nfs-repo-01] vm01/vm01_2.vmdk

  4 diskExtent     1609957376 [ds-nfs-repo-01] vm01/vm01_2-flat.vmdk

  5 log                231426 [ds-nfs-repo-01] vm01/vmware-1.log

  6 log                233220 [ds-nfs-repo-01] vm01/vmware.log

  7 swap                    0 [ds-nfs-repo-01] vm01/vm01-a5ede08e.vswp

  8 uwswap                  0 [ds-nfs-repo-01] vm01/vmx-vm01-2783830158-1.vswp

 

 

With a little command-line work, you can also list the .vmx paths of multiple VMs.

This is useful for recording where VMs are stored, for example when handling host failures or planning vSphere migrations.

PowerCLI> Get-VM test*,vm* | select Name,@{N="vmx_path";E={($_.ExtensionData.LayoutEx.File | where {$_.Type -eq "config"}).Name}} | Sort-Object Name

 

 

Name        vmx_path

----        --------

test-web-01 [vsanDatastore] 532a455b-a477-02ce-4958-f44d3065d53c/test-web-01.vmx

test-web-02 [vsanDatastore] 8f2a455b-6c94-6930-3200-f44d3065d53c/test-web-02.vmx

vm01        [ds-nfs-repo-01] vm01/vm01.vmx

 

 

Retrieve just the .vmx path of vm01; from here on it is kept in the variable $vmx_path.

PowerCLI> $vmx_path = (Get-VM $vm_name).ExtensionData.LayoutEx.File | where {$_.Type -eq "config"} | %{$_.Name}

PowerCLI> $vmx_path

[ds-nfs-repo-01] vm01/vm01.vmx
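Paths in this "[datastore] folder/file.vmx" format are easy to split programmatically if you need the datastore name and relative path separately. A small generic sketch (an illustration, not a PowerCLI feature):

```python
import re

def parse_datastore_path(path):
    """Split a vSphere datastore path like '[ds] dir/file.vmx'
    into (datastore, relative_path)."""
    m = re.match(r"^\[(?P<ds>[^\]]+)\]\s*(?P<rel>.+)$", path)
    if m is None:
        raise ValueError(f"not a datastore path: {path!r}")
    return m.group("ds"), m.group("rel")

print(parse_datastore_path("[ds-nfs-repo-01] vm01/vm01.vmx"))
# ('ds-nfs-repo-01', 'vm01/vm01.vmx')
```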

 

Re-registering the VM with vCenter / ESXi.

By re-registering the VM, let's move it across vCenters.

  • When registering the VM on the destination ESXi host, the .vmx file path is passed to the New-VM cmdlet.
  • The source and destination ESXi hosts both mount the same NFS datastore under the name ds-nfs-repo-01.

 

The ESXi hosts mounting the ds-nfs-repo-01 datastore can be listed as follows.

  • "Name" here is the ESXi host name.
  • Verifying the NFS datastore configuration on the source and destination sides is omitted this time; I am relying on the datastore name only.
  • Only hv-d01.go-lab.jp, where vm01 lives, is an ESXi 6.5 host under a separate vCenter, "vc-sv01.go-lab.jp"; the other ESXi 6.7 hosts are under the vCenter "infra-vc-01.go-lab.jp".

PowerCLI> Get-Datastore ds-nfs-repo-01 | Get-VMHost | Sort-Object Name | select @{N="vCenter";E={$_.Uid -replace ".*@|:.*",""}},Name,ConnectionState,Version

 

 

vCenter               Name                    ConnectionState Version

-------               ----                    --------------- -------

vc-sv01.go-lab.jp     hv-d01.go-lab.jp              Connected 6.5.0

infra-vc-01.go-lab.jp infra-esxi-01.go-lab.jp       Connected 6.7.0

infra-vc-01.go-lab.jp infra-esxi-02.go-lab.jp       Connected 6.7.0

infra-vc-01.go-lab.jp infra-esxi-03.go-lab.jp       Connected 6.7.0

infra-vc-01.go-lab.jp infra-esxi-04.go-lab.jp       Connected 6.7.0

infra-vc-01.go-lab.jp infra-esxi-05.go-lab.jp       Connected 6.7.0

infra-vc-01.go-lab.jp infra-esxi-06.go-lab.jp       Connected 6.7.0

 

 

Now stop vm01 and remove it from the source vCenter inventory.

So that the VM's files themselves are not deleted, run Remove-VM without the "-DeletePermanently" switch.

PowerCLI> Get-VM $vm_name | Stop-VM -Confirm:$false

PowerCLI> Get-VM $vm_name | Remove-VM -Confirm:$false

 

Register the VM on one of the ESXi hosts in the destination vCenter inventory (infra-esxi-06.go-lab.jp).

While at it, the target resource pool is specified, along with the VM folder via "-Location".

PowerCLI> New-VM -Name $vm_name -VMFilePath $vmx_path -VMHost "infra-esxi-06.go-lab.jp" -Location "lab-vms-01" -ResourcePool "rp-02-lab"

 

Then power on the VM.

Note: if a question appears at power-on asking whether the VM was copied or moved, answer that it was moved.

PowerCLI> Get-VM $vm_name | Start-VM

 

The VM has now been moved.

PowerCLI> Get-VM $vm_name | select Name,ResourcePool,Folder,VMHost,PowerState | fl

 

Name         : vm01

ResourcePool : rp-02-lab

Folder       : lab-vms-01

VMHost       : infra-esxi-06.go-lab.jp

PowerState   : PoweredOn

 

 

vSphere 6.x also supports Cross vCenter vMotion, but when its prerequisites get in the way, moving a VM this way is an alternative.

That's it for retrieving and using .vmx file information with PowerCLI.

This configuration protects the 6.5 external Platform Services Controller using two-factor authentication.

A common LDAP identity source between vCenter Server SSO and the RSA Authentication Manager is required.

=========================================

Configure the RSA Authentication Manager 8.3

=========================================

 

1. Add the Identity Source to the Authentication Manager Operations Console

 

2. Configure the Identity Source Mapping

 


3. Test the connection to the Identity Source

 

 

4. Link the Identity Source

 

 

5. Configure the Default Security Domain Mapping for the Identity Source

 

6. Assign the Identity Source user account a SecurID Token

NOTE: Ensure that you select the Active Directory Domain from the Identity Source drop-down prior to assigning the user a SecurID Token.

7. Download the RSA Authentication Manager server certificate

 

 

8. Add the RSA Authentication Manager server certificate to the Platform Services Controller's Trusted Root Store

 

9. Import the certificate of the LDAP Identity Source to the RSA Authentication Operations Console

 

 

10. Add an Authentication Agent (the external Platform Services Controller)

 

 

 

11. Confirm that the Authentication Agent is listed as "Selected" within the Authentication Manager Contact List

 

12. Add the Hostname and IP Address of the Authentication Manager to the Agent Authentication Settings under Security Console > Setup > System Settings > Agents > "To Configure Agents using IPV6, click here"

 

 

13. Generate the Authentication Agent Configuration File (sdconf.rec)

 

 

14. Enable the RSA SecurID Authentication API

 

 

==========================================================

Configure the 6.5 External Platform Services Controller

==========================================================

 

1. Use WinSCP to import the sdconf.rec file to the external Platform Services Controller

 

2. Open an SSH to the Platform Services Controller and login as root

 

3. Change to the directory that contains the sso-config.sh script

Appliance:  /opt/vmware/bin

Windows:  C:\Program Files\VMware\vCenter Server\VMware Identity Services

 

4. Enable RSA SecurID Authentication on the tenant

# sso-config.[sh|bat] -t tenantName -set_authn_policy -securIDAuthn true

 

For example:

# sso-config.sh -t vsphere.local -set_authn_policy -securIDAuthn true

Note: After you enable RSA SecurID, the checkbox "Use RSA SecurID" will appear in the vSphere Web Client

 

5. Configure the Tenant to use the RSA Site.

# sso-config.[sh|bat] -set_rsa_site [-t tenantName] [-siteID Location] [-agentName Name] [-sdConfFile Path]

 

For Example:

# sso-config.sh -set_rsa_site -t vsphere.local -siteID fed-linpsc.fedlab.local -agentName fed-linpsc.fedlab.local -sdConfFile /tmp/sdconf.rec

6. Set the userID mapping using the attribute configured in the RSA Authentication Manager for the Identity Source

# sso-config.[sh|bat] -set_rsa_userid_attr_map [-t tenantName] [-idsName Name] [-ldapAttr AttrName] [-siteID Location]

 

For Example:

# sso-config.sh -set_rsa_userid_attr_map -t vsphere.local -idsName fedlab.local -ldapAttr userPrincipalName

 

7. Confirm that the agentName, siteID, and userID attribute mappings are correct

# sso-config.sh -t tenantName -get_rsa_config

 

For Example:

# sso-config.sh -t vsphere.local -get_rsa_config

 

8. Authenticate to vCenter Server using RSA SecurID

 

 

 

NOTE: User accounts managed by vCenter Server SSO (administrator@vsphere.local) cannot use two-factor authentication.

 

REFERENCES:

 

Set Up RSA SecurID Authentication
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.security.doc/GUID-639F8754-48E1-494B-A232-A8691447C212.html

Two-Factor Authentication for vSphere – RSA SecurID
https://blogs.vmware.com/vsphere/2016/04/two-factor-authentication-for-vsphere-rsa-securid.html
https://blogs.vmware.com/vsphere/2016/04/two-factor-authentication-for-vsphere-rsa-securid-part-2.html

RSA Setup Guide
https://community.rsa.com/docs/DOC-85959

When troubleshooting networking issues, tcpdump can be quite useful for understanding what is going on.

On the Unified Access Gateway, this tool is not available out of the box. However, it can be installed easily by running a script that ships on the appliance (when I tested this, I was running UAG version 3.3).

 

Run the following commands to install it:

 

cd /etc/vmware/gss-support/

./install.sh

 


 

After the installation is complete, you should be able to run tcpdump commands on the server.

 

--

 

The postings on this site are my own and do not represent VMware’s positions, strategies or opinions.

Installing and Configuring vRealize Suite Lifecycle Manager 1.2

 

 

 

vRealize Suite Lifecycle Manager provides a single installation and management platform for all products in the vRealize Suite.


System Requirements –

Systems that run vRealize Suite Lifecycle Manager must meet specific hardware and operating system requirements

  • vCenter Server 6.0 or 6.5
  • ESXi version 6.0 or 6.5

Minimum Hardware Requirements

Verify that the system where you run vRealize Suite Lifecycle Manager meets the following minimum hardware requirements:

  • 2 vCPUs if content lifecycle management is disabled.
  • 4 vCPUs, if content lifecycle management is enabled.
  • 16 GB memory
  • 127 GB storage

Supported vRealize Suite Products

vRealize Suite Lifecycle Manager supports the following vRealize Suite products and product versions

 

  • vRealize Automation 7.3.1 and 7.4
  • vRealize Orchestrator 7.3.0 and 7.4.0 (all versions embedded with supported vRealize Automation versions are supported)
  • vRealize Business for Cloud 7.3.1 and 7.4
  • vRealize Operations Manager 6.6.1 and 6.7.0
  • vRealize Log Insight 4.5.1 and 4.6.0

 

Deploy the vRealize Suite Lifecycle Manager Appliance

 

Deploy the vRealize Suite Lifecycle Manager appliance to begin using vRealize Suite Lifecycle Manager. To create the appliance, you use the vSphere Client to download and deploy a partially configured virtual machine from a template.

 

Prerequisites

  • Log in to the vSphere Client with an account that has permission to deploy OVF templates to the inventory.
  • Download vRealize Suite Lifecycle Manager .ovf or .ova file from My VMware to a location accessible to the vSphere Client

Procedure

 

Select Deploy OVF Template in the vSphere Client.

 

 

 

 

 

Enter the path to the vRealize Suite Lifecycle Manager appliance .ovf or .ova file

 

 

 

 

 

 

Read and accept the end-user license agreement

 

 

 

 

  

 

 

Enter an appliance name and inventory location. When you deploy appliances, use a different name for each one, and do not include nonalphanumeric characters such as underscores ( _ ) in names.

 

 

 

Select the host, cluster, and resource pool in which the appliance will reside.

 

 

 

  

 

 

 

Select a deployment configuration.

 

 

 

Note: Enable the content management option if you want to use content lifecycle management; the appliance is then deployed with 4 CPUs.

Typically, there is an option to include or exclude content management. Select a configuration and note the corresponding change in the number of CPUs required.

 

 

Select the storage that will host the appliance

 

 

 

 

 

Select thick or thin as the disk format.

The format does not affect the appliance disk size. If the appliance needs more space for data, increase the disk size using vSphere after deployment.

 

 

 

From the drop-down menu, select a Destination Network

 

 

 

Complete the appliance properties.

a For Hostname, enter the appliance FQDN.

b (Optional) Enter the certificate properties.

c In Network Properties, when using static IP addresses, enter the values for gateway, netmask, and DNS servers. You must also enter the IP address, FQDN, and domain for the appliance itself.

 

 

 

Depending on your deployment, vCenter Server, and DNS configuration, choose the appropriate way to finish the deployment and power on the appliance.

 

 

Verify that the vRealize Suite Lifecycle Manager appliance is deployed by pinging its FQDN.

 

 

NOTE: Please check with your VMware rep to ensure the apps you test are supported, as there are per-app prerequisites for certificate authentication.

 

Workspace One UEM setup

Integrate UEM Console with VMware Identity Manager

This guide assumes the UEM Console integration with VMware Identity Manager has been completed.

 

Configure and deploy certificate through Workspace One UEM

Integrate with your CA and certificate template, and make sure you meet the guidelines below.

Steps:

Subject Name

  • CN={DeviceUid}

Add SAN Type:

  • Email Address : {EmailAddress}
  • User Principal Name: {UserPrincipalName}

 

 

VMware Identity Manager setup

Configure “Certificate (Cloud Deployment)” as Authentication Method

Configure Certificate auth as the authentication method.

Steps:

In the VMware Identity Manager Console:

  1. Identity & Access Management > Authentication Methods > Certificate (Cloud Deployment).
  2. Enable Certificate Adapter.
  3. Upload Root and intermediate CA certificates – must match the CA integration from Workspace One UEM.
  4. Set User Identifier Search Order: email | upn | subject.
    1. Tip: You can troubleshoot which identifier to use by setting the search order to each value individually, testing authentication, and viewing what is pulled from the certificate in the Audit Report in the vIDM Console, under Dashboards > Reports.
  5. I recommend unchecking all the other boxes for troubleshooting purposes.
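The search-order behavior amounts to a simple fallback over the certificate's attributes. The sketch below is a generic illustration of that idea; the attribute names are hypothetical and not the vIDM implementation.

```python
def resolve_user(cert_attrs, search_order=("email", "upn", "subject")):
    """Return the first non-empty identifier found, following the
    configured search order, or None if the certificate has none."""
    for attr in search_order:
        value = cert_attrs.get(attr)
        if value:
            return value
    return None

# Hypothetical certificate attributes: no email, but a UPN in the SAN.
cert = {"email": None, "upn": "jdoe@fedlab.local", "subject": "CN=device-1234"}
print(resolve_user(cert))  # 'jdoe@fedlab.local' — email is empty, upn matches
```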

 

 

Enable Built-in Identity Provider to use Certificate (Cloud Deployment)

Steps:

In the VMware Identity Manager Console:

  1. Identity & Access Management > Identity Providers
  2. Open the “Built-In” provider.
  3. Enable “Certificate (Cloud Deployment)” as one of the authentication methods.

 

 

Set Policy Rule for Android to use Certificate & Device Compliance as authentication

Configure authentication policy for Android to Certificate (Cloud Deployment) & Device Compliance.

Steps:

In the VMware Identity Manager Console:

  1. Identity & Access Management > Policies > Create a New Policy.
  2. Set the policy to apply to the relevant application you are testing.
  3. Configure a Policy Rule for Android, and set authenticate using:
    1. Certificate (Cloud Deployment), and
    2. Device Compliance (with AirWatch).

 

 

 

Troubleshooting

Validate correct certificate is on the device

Validate correct certificate is on the device.

The Subject Name of certificate should be CN={DeviceUid}.

The SAN should match the Email or UPN in VMware Identity Manager, and should match the User Identifier Search set in the Authentication Method setup.

  • Tip: iOS devices show the certificate’s full SAN attributes. You can enroll an iOS device, receive the Certificate from UEM, and validate the SAN values are correct.

 

Ensure correct Policy Rule is being activated

Check that other Policy Rules, including the default Policy, are not interfering with the authentication process.

You can also edit the error messages that are shown.

For troubleshooting purposes, remove all other authentication methods from the policy, so that you are only testing Certificate auth.

 

Set correct User Identifier Search Order (email | upn | subject)

You can troubleshoot which identifier to use by setting the search order to each value individually, testing authentication, and viewing what is pulled from the certificate in the Audit Report in the vIDM Console, under Dashboards > Reports.

 

Review Audit Events

In VMware Identity Manager, under Dashboard > Reports > Audit Events > Show, you can view recent authentication attempts. Look through the events for entries similar to:

  • LOGIN_ERROR failed
  • LOGIN (Certificate (Cloud Deployment))
  • LOGIN (Certificate (Cloud Deployment), Device Compliance (with AirWatch))
  • LOGIN failed

The details of the events should show whether VMware Identity Manager was able to pull the user from the certificate, whether the correct policy rule was used, and whether the login failed or succeeded.

This post is about a convenient way to periodically bulk-delete unneeded VMs, for example in a lab environment.

A list file of VMs that must not be deleted is prepared in advance, and PowerCLI deletes every VM that is not on the list.

 

I am using PowerCLI 10.1 this time.

Connect to vCenter in advance:

PowerCLI> Connect-VIServer <vCenter address>

 

First, create the list file of VMs you do not want to delete.

PowerCLI> cat .\VM-List_HomeLab-Infra.txt

infra-backup-01

infra-dns-01

infra-dns-02

infra-jbox-02

infra-ldap-02s

infra-nsxctl-01-NSX-controller-1

infra-nsxdlr-01-0

infra-nsxesg-01-0

infra-nsxmgr-01

infra-pxe-01

infra-repo-01

infra-sddc-01

infra-vrli-01

infra-vrni-01

infra-vrni-proxy-01

infra-vrops-01

ol75-min-01

 

This list file is less error-prone if you generate its base from the actual vCenter inventory, like this:

PowerCLI> Get-VM | Sort-Object Name | %{$_.Name} | Out-File -Encoding utf8 -FilePath .\VM-List_HomeLab-Infra.txt

 

This time, I created a script like the following.

cleanup_list_vm.ps1 · GitHub

# Cleanup VMs
# Usage:
#   PowerCLI> ./cleanup_list_vm.ps1 <VM_List.txt>

$vm_list_file = $args[0]
if($vm_list_file.Length -lt 1){"Please specify a list file."; exit}
if((Test-Path $vm_list_file) -ne $true){"List file not found."; exit}
$vm_name_list = gc $vm_list_file |
    where {$_ -notmatch "^$|^#"} | Sort-Object | select -Unique

function step_mark {
    param (
        [String]$step_no,
        [String]$step_message
    )
    ""
    "=" * 60
    "Step $step_no $step_message"
    ""
}

$vms = Get-VM | sort Name
$delete_vms = @()
$vms | % {
    $vm = $_
    if($vm_name_list -notcontains $vm.Name){
        $delete_vms += $vm
    }
}

step_mark 1 "VMs to delete"
$delete_vms | ft -AutoSize Name,PowerState,Folder,ResourcePool

$check = Read-Host "Delete the VMs above? yes/No"
if($check -ne "yes"){"Exiting without deleting."; exit}

step_mark 2 "Deleting VMs"
$delete_vms | % {
    $vm = $_
    if($vm.PowerState -eq "PoweredOn"){
        "Stop VM:" + $vm.Name
        $vm = $vm | Stop-VM -Confirm:$false
    }
    "Delete VM:" + $vm.Name
    $vm | Remove-VM -DeletePermanently -Confirm:$false
}

 

Run the script, passing the do-not-delete list file:

PowerCLI> .\cleanup_list_vm.ps1 .\VM-List_HomeLab-Infra.txt

 

============================================================
Step 1 VMs to delete

Name               PowerState Folder     ResourcePool
----               ---------- ------     ------------
infra-ldap-02s_old PoweredOff 01-Infra   rp-01-infra
test-ldap-01m      PoweredOff test-ldap  rp-02-lab
test-ldap-01s      PoweredOff test-ldap  rp-02-lab
test-vm-01          PoweredOn lab-vms-01 rp-02-lab
test-vm-02          PoweredOn lab-vms-01 rp-02-lab
test-vm-31          PoweredOn 02-Lab     rp-02-lab

Delete the VMs above? yes/No: yes

============================================================
Step 2 Deleting VMs

Delete VM:infra-ldap-02s_old
Delete VM:test-ldap-01m
Delete VM:test-ldap-01s
Stop VM:test-vm-01
Delete VM:test-vm-01
Stop VM:test-vm-02
Delete VM:test-vm-02
Stop VM:test-vm-31
Delete VM:test-vm-31

PowerCLI>

 

This should make periodic lab cleanup easier.

That said, a mistaken VM deletion can be disastrous, so a script like this needs careful error handling and thorough testing.

That's it for this trick for deleting VMs with PowerCLI.

When placing nested ESXi (ESXi installed as a guest OS) on a vSAN datastore, enable /VSAN/FakeSCSIReservations on the physical ESXi hosts, for example to work around virtual disk format errors.

Reference: How to run Nested ESXi on top of a VSAN datastore?

https://www.virtuallyghetto.com/2013/11/how-to-run-nested-esxi-on-top-of-vsan.html

This time, let's enable /VSAN/FakeSCSIReservations with PowerCLI.

 

To apply the setting only to the ESXi hosts participating in the vSAN cluster, retrieve the target cluster first and pipe it into the configuration command.

The target cluster this time is infra-cluster-01.

PowerCLI> Get-Cluster infra-cluster-01 | select Name,VsanEnabled

 

Name             VsanEnabled

----             -----------

infra-cluster-01        True

 

 

These are the target ESXi hosts.

PowerCLI> Get-Cluster infra-cluster-01 | Get-VMHost | Sort-Object Name | select Name,ConnectionState,PowerState,Version,Build | ft -AutoSize

 

Name                    ConnectionState PowerState Version Build

----                    --------------- ---------- ------- -----

infra-esxi-01.go-lab.jp       Connected  PoweredOn 6.7.0   8169922

infra-esxi-02.go-lab.jp       Connected  PoweredOn 6.7.0   8169922

infra-esxi-03.go-lab.jp       Connected  PoweredOn 6.7.0   8169922

infra-esxi-04.go-lab.jp       Connected  PoweredOn 6.7.0   8169922

infra-esxi-05.go-lab.jp       Connected  PoweredOn 6.7.0   8169922

infra-esxi-06.go-lab.jp       Connected  PoweredOn 6.7.0   8169922

 

 

Check the current setting first.

VSAN.FakeSCSIReservations is still "0" (disabled).

PowerCLI> Get-Cluster infra-cluster-01 | Get-VMHost | Sort-Object Name | select Name,{$_|Get-AdvancedSetting VSAN.FakeSCSIReservations}

 

Name                    $_|Get-AdvancedSetting VSAN.FakeSCSIReservations

----                    ------------------------------------------------

infra-esxi-01.go-lab.jp VSAN.FakeSCSIReservations:0

infra-esxi-02.go-lab.jp VSAN.FakeSCSIReservations:0

infra-esxi-03.go-lab.jp VSAN.FakeSCSIReservations:0

infra-esxi-04.go-lab.jp VSAN.FakeSCSIReservations:0

infra-esxi-05.go-lab.jp VSAN.FakeSCSIReservations:0

infra-esxi-06.go-lab.jp VSAN.FakeSCSIReservations:0

 

 

Change the setting.

Set VSAN.FakeSCSIReservations to "1" (enabled).

PowerCLI> Get-Cluster infra-cluster-01 | Get-VMHost | Sort-Object Name | Get-AdvancedSetting VSAN.FakeSCSIReservations | Set-AdvancedSetting -Value 1 -Confirm:$false

 

The setting has been changed.

PowerCLI> Get-Cluster infra-cluster-01 | Get-VMHost | Sort-Object Name | select Name,{$_|Get-AdvancedSetting VSAN.FakeSCSIReservations}

 

Name                    $_|Get-AdvancedSetting VSAN.FakeSCSIReservations

----                    ------------------------------------------------

infra-esxi-01.go-lab.jp VSAN.FakeSCSIReservations:1

infra-esxi-02.go-lab.jp VSAN.FakeSCSIReservations:1

infra-esxi-03.go-lab.jp VSAN.FakeSCSIReservations:1

infra-esxi-04.go-lab.jp VSAN.FakeSCSIReservations:1

infra-esxi-05.go-lab.jp VSAN.FakeSCSIReservations:1

infra-esxi-06.go-lab.jp VSAN.FakeSCSIReservations:1

 

 

You can also adjust things like the column headers, as shown below.

PowerCLI> Get-Cluster infra-cluster-01 | Get-VMHost | Sort-Object Name | select Name,@{N="VSAN.FakeSCSIReservations";E={($_|Get-AdvancedSetting VSAN.FakeSCSIReservations).Value}}

 

Name                    VSAN.FakeSCSIReservations

----                    -------------------------

infra-esxi-01.go-lab.jp                         1

infra-esxi-02.go-lab.jp                         1

infra-esxi-03.go-lab.jp                         1

infra-esxi-04.go-lab.jp                         1

infra-esxi-05.go-lab.jp                         1

infra-esxi-06.go-lab.jp                         1

 

 

You can also group the hosts to confirm that the setting is uniform.

Grouping the ESXi hosts where VSAN.FakeSCSIReservations is "1" shows that all six hosts share the same setting.

PowerCLI> Get-Cluster infra-cluster-01 | Get-VMHost | Sort-Object Name | Get-AdvancedSetting VSAN.FakeSCSIReservations | Group-Object Name,Value | select Count,Name,{$_.Group.Entity}

 

Count Name                         $_.Group.Entity

----- ----                         ---------------

    6 VSAN.FakeSCSIReservations, 1 {infra-esxi-01.go-lab.jp, infra-esxi-02.go-lab.jp, infra-esxi-03.go-lab.jp, infra...

 

 

It can also be displayed more simply:

PowerCLI> Get-Cluster infra-cluster-01 | Get-VMHost | Get-AdvancedSetting VSAN.FakeSCSIReservations | Group-Object Name,Value | select Count,Name

 

Count Name

----- ----

    6 VSAN.FakeSCSIReservations, 1
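The uniformity check above boils down to grouping (setting name, value) pairs and confirming there is exactly one group. A generic sketch of the same idea (illustrative Python, not PowerCLI):

```python
from collections import defaultdict

def group_settings(host_settings):
    """Group hosts by (setting name, value); a uniform cluster
    yields exactly one group."""
    groups = defaultdict(list)
    for host, name, value in host_settings:
        groups[(name, value)].append(host)
    return dict(groups)

# Hypothetical sample of (host, setting, value) rows.
settings = [
    ("infra-esxi-01", "VSAN.FakeSCSIReservations", 1),
    ("infra-esxi-02", "VSAN.FakeSCSIReservations", 1),
    ("infra-esxi-03", "VSAN.FakeSCSIReservations", 1),
]
groups = group_settings(settings)
print(len(groups) == 1)  # True when the setting is uniform across hosts
```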

 

 

That's it for this example of using PowerCLI in a nested ESXi lab on a vSAN datastore.

Here is the 2018 list of hot, popular, new, and trending data infrastructure vendors to watch, which includes startups as well as established vendors doing new things. This piece follows last year's list of hot favorite trending data infrastructure vendors to watch (here), as well as the piece on who will be at the top of the storage world in a decade (here).

2018 Hot Popular New Trending Data Infrastructure Vendors to Watch
Data Infrastructures Support Information Systems Applications and Their Data

 

Data infrastructures are what exist inside physical data centers and cloud availability zones (AZs), and they are defined to provide traditional as well as cloud services. Cloud and legacy data infrastructures combine hardware (server, storage, I/O network) and software, along with management tools, policies, tradecraft techniques (skills), and best practices, to support applications and their data. There are different types of data infrastructures to meet the needs of various environments, which range in size, scope, focus, application workloads, performance, and capacity.

 

Another important aspect of data infrastructures is that they exist to protect, preserve, secure, and serve the applications that transform data into information. This means that availability and data protection, including archive, backup, business continuance (BC), business resiliency (BR), disaster recovery (DR), privacy, and security, among other related topics, technologies, techniques, and trends, are essential data infrastructure topics.

 

Different timelines of adoption and deployment for various audiences

 

2018 Hot Popular New Trending Data Infrastructure Vendors to Watch

Some entries on this year's list are focused on particular technology areas, others on the size or type of vendor, supplier, or service provider. Whether a company reads as new, startup, evolving, or established also varies depending on whether you are an industry insider or an IT customer. The list mixes names you may not have heard of, for those who want the most current roster of startups for industry adoption (and deployment), with established players doing new things that might lead to customer deployment (and adoption).

AMD – The AMD EPYC family of processors is opening up new opportunities for AMD to challenge Intel among others for a more significant share of the general-purpose compute market in support of data center and data infrastructure markets. An advantage that AMD has, and is playing to in the industry speeds, feeds, slots and watts price-performance game, is the ability to support more memory and PCIe lanes per socket than others, including Intel. Keep in mind that PCIe lanes will become even more critical as NVMe deployment increases, along with the use of GPUs and faster Ethernet among other devices. Name-brand vendors including Dell and HPE among others have announced or are shipping servers based on AMD EPYC processors.

Aperion – Cloud and managed service provider with diverse capabilities.

Amazon Web Services (AWS) – Continues to expand its footprint regarding regions, availability zones (AZs, also known as data centers in regions), as well as services along with the breadth of those capabilities. AWS has recently announced a new Snowball Edge (SBE), which in the past has been a data migration appliance and is now enhanced with on-prem Elastic Compute Cloud (EC2) capabilities. What this means is that AWS can put on-prem compute capabilities as part of a storage appliance for short-term data movement, migration, conversion, importing of virtual machines and other items.

 

On the other hand, AWS can also be seen as using SBE as a first entry to placing equipment on-prem for hybrid clouds, or, converged infrastructure (CI), hyper-converged infrastructure (HCI), cloud in a box similar to Microsoft Azure Stack, as well as CI/HCI solutions from others.

My prediction near term, however, is that CI/HCI vendors will either ignore SBE, downplay it, create some new marketing on why it is not CI/HCI, or spread FUD about vendor lock-in. In other words, make some popcorn, sit back, and watch the show.

 

Backblaze – Low-cost, high-capacity cloud storage provider for backup and archiving, known for their quarterly disk drive reliability (or failure) reports. They have been around for a while and have a good reputation among those who use their services as a low-cost alternative to the larger providers.

 

Barefoot Networks – Some of you may already be aware of or following Barefoot Networks, while others outside the networking space may not have heard of them. They have some impressive capabilities and are new, which makes them an excellent addition to this list.

Cloudian – Continuing to evolve and no longer just another object storage solution, Cloudian has been expanding via organic technology development as well as acquisitions, giving them a broad portfolio of software-defined storage and tiering from on-prem to the cloud, with block, file and object access.

 

Cloudflare – Not exactly a startup, some of you may know or are using Cloudflare, while to others, their role as a web cache, DNS, and other service is transparent. I have been using Cloudflare on my various sites for over a year, and like the security, DNS, cache and analytics tools they provide as a customer.

 

Cobalt Iron – For some, they might be new. Software-defined data protection and management is the name of the game over at Cobalt Iron, which has been around a few years under the radar compared to more popular players. If you have or are involved with IBM Tivoli (aka TSM) based backup and data protection among others, check out the exciting capabilities that Cobalt can bring to the table.

 

CTERA – Having been around for a while, CTERA might not be a startup to some; on the other hand, they may be new to others, offering new data and file management options.

 

DataCore – You might know of DataCore for their software-defined storage and past storage hypervisor activity. However, they have a new piece of software, MaxParallel, that boosts server storage I/O performance. The software installs on your Windows Server instance (bare metal, VM, or cloud instance) and shows you performance with and without acceleration, which you can dynamically turn on and off.

 

DataDirect Networks (DDN) - Recently acquired Lustre assets from Intel, now picking up the storage startup Tintri pieces after it ceased operations. What this means is that while beefing up their traditional High-Performance Compute (HPC) and Super Compute (SC) focus, DDN is also expanding into broader markets.

 

Dell Technologies – At its recent Dell Technologies World event in Las Vegas during late April and early May 2018, several announcements were made, including some tied to the emerging Gen-Z along with composability. More recently, Dell Technologies along with VMware announced business structure and finance changes. Changes include VMware declaring a dividend; Dell Technologies, being its largest shareholder, will use the proceeds to fund restructuring and debt service. Read more about VMware and Dell Technologies business and financial changes here.

 

Densify – With a name like Densify no surprise they propose to drive densification and automation with AI-powered deep learning to optimize application resource use across on-prem software-defined virtual as well as cloud instances and containers.

 

FlureDB – If you are into databases (SQL or NoSQL), as well as Blockchain or distributed ledgers, check out FlureDB.

 

Innovium.com – When it comes to data infrastructure and data center networking, Innovium is probably not on your radar, however, keep an eye on these folks and their TERALYNX switching silicon to see where it ends up given their performance claims.

 

Komprise – File and data management solutions, including tiering, along with partners such as IBM.

 

Kubernetes – A few years ago OpenStack, then Docker containers, was the favorite and trending discussion topic, then Mesos, and along comes Kubernetes. It's safe to say, at least for now, that Kubernetes is settling in as the preferred open source industry and customer de facto choice (I want to say standard; however, I will hold off on that for now) for container and related orchestration management. Besides do-it-yourself (DiY) leveraging open source, there are also managed AWS Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), and VMware Pivotal Container Service (PKS) offerings among others. Besides Azure, Microsoft also includes Kubernetes support (along with Docker and Windows containers) as part of Windows Server.

ManageEngine (part of Zoho) - Has data infrastructure monitoring technology called OpManager for keeping an eye on networking.

Marvell – Marvell may not be a familiar name (don’t confuse it with the comics publisher); however, it has been a critical component supplier to partners whose server or storage technology you may be familiar with or have yourself. The server, storage, and I/O networking chip maker has closed its acquisition of Cavium (who previously bought QLogic among others). The combined company is well positioned as a key data infrastructure component supplier to various partners spanning servers, storage, and I/O networking, including Fibre Channel (FC), Ethernet, InfiniBand, and NVMe (and NVMeoF) among others.

Mellanox – Known for their InfiniBand adapters, switches, and associated software, along with a growing presence in RDMA over Converged Ethernet (RoCE), they are also well positioned for NVMe over Fabrics among other growth opportunities following recent boardroom updates, along with technology roadmaps.

Microsoft – The Azure public cloud continues to evolve similarly to AWS with more region locations and availability zone (AZ) data centers, as well as features and extensions. Microsoft also introduced, about a year ago, its hybrid on-prem CI/HCI cloud-in-a-box platform appliance Azure Stack (read about my test drive here). However, there is more to Microsoft than just their current cloud-first focus, which means Windows (desktop) as well as Server are also evolving. Currently in public preview, Windows Server 2019 Insider builds are available to try out many new capabilities, some of which were covered in the recent free Microsoft Virtual Summit held in June. Key themes of Windows Server 2019 include security, performance, hybrid cloud, containers, software-defined storage and much more.

 

Microsemi – Has been around for a while and is the combination of several vendors you may not have heard of, or heard about in some time, including PMC-Sierra (which acquired Adaptec) and Vitesse among others. One reason I have Microsemi on this list is their history of acquisitions, which might be an indicator of whom they pick up next. Another reason is that their components span data infrastructure topics from servers, storage, I/O and networking to PCIe and many more.

NVIDIA – GPU high-performance compute and related compute offload technologies have been accessible for over a decade. More recently, with new graphics and computational demands, GPUs such as those from NVIDIA are in demand. Demand includes traditional graphics acceleration for physical and virtual environments, augmented and virtual reality, as well as cloud, along with compute-intensive analytics, AI, ML, DL and other cognitive workloads.

 

NGDSystems (NGD) – Similar to what NVIDIA and other GPU vendors do in enabling compute offload for specific applications and workloads, NGD is working on a variation. That variation is to move offload compute capabilities for server I/O storage-intensive workloads closer to the data, in fact into storage system components such as SSDs and emerging SCMs and PMEMs. Unlike GPU-based applications or workloads that tend to be more memory and compute intensive, NGD is positioned for applications that are server I/O and storage intensive.

 

The premise of NGD is that they move the compute and application closer to where the data is, eliminating extra I/O as well as reducing the amount of main server memory and compute cycles. If you are familiar with other server storage I/O offload engines and systems such as the Oracle Exadata database appliance, NGD is working at a tighter integration granularity. How it works: your application gets ported to run on the NGD storage platform, which is SSD based and has a general-purpose processor. Your application is initiated from a host server and then runs on the NGD, meaning I/Os are kept local to the storage system. Keep in mind that the best I/O is the one that you do not have to do; the second best is the one with the least resource or user impact.

 

Opvisor – Performance activity and capacity monitoring tools, including for VMware environments.

Pavilion – Startup with an interesting NVMe based hardware appliance.

 

Quest – Having regained their independence as a free-standing company since divestiture from Dell Technologies (Dell had previously acquired Quest before the EMC acquisition), Quest continues to make their data infrastructure related management tools available. Besides now being a standalone company again, keep an eye on Quest to see how they evolve their existing data protection and data infrastructure resource management tools portfolio via growth or acquisition; or perhaps Quest will end up on somebody else’s future growth list.

 

Retrospect – Far from being a startup, after regaining their independence from EMC, who bought them several years ago, they have since continued to enhance their data protection technology. Disclosure: I have been a Retrospect customer since 2001, using it for on-site data protection as well as backups to the cloud.

Rubrik – Becoming more of a data infrastructure household name given their expanding technology portfolio and marketing efforts. More commonly known in smaller customer environments, as well as broadly within industry insider circles, Rubrik has potential with continued technology evolution to move further upmarket similar to how Commvault did back in the late 90s, just saying.

SkyScale – Cloud service provider that offers dedicated bare metal as well as private and hybrid cloud instances, along with GPUs to support AI, ML, DL and other high-performance compute workloads.

Snowflake – The name does not describe well what they do or who they are. However, they have interesting cloud data warehouse (old school) and large-scale data lake (new school) technologies.

 

Strongbox – Not to be confused with technology such as that from Iosafe (e.g., waterproof, fireproof), Strongbox is a data protection storage solution for storing archives, backups, and BC/BR/DR data, as well as cloud tiering. For those who are into buzzword bingo, think cloud tiering, object, and cold storage among others. The technology evolved out of Crossroads and, with David Cerf at the helm, has branched out into a private company worth keeping an eye on.

 

Storbyte – With longtime industry insider sales and marketing pro Diamond Lauffin (formerly Nexsan) involved as Chief Evangelist, this is one worth keeping an eye on and could be entertaining as well as exciting. In some ways it could be seen as a bit of a Nexsan meets NVMe meets NAND flash meets cost-effective value storage déjà vu play.

Talon – Enterprise storage and management solutions for file sharing across organizations, ROBO and cloud environments.

 

Ubiquiti – Also known as UBNT, is a data infrastructure networking vendor whose technologies span from WiFi access points (APs), high-performance antennas, routing, switching and related hardware to software solutions. UBNT is not as well known in larger environments as a Cisco or others. However, they are making a name for themselves moving from the edge to the core. That is, working from the edge with APs, routers, firewalls, and gateways for the SMB, ROBO, SOHO as well as consumer markets (I have several of their APs, switches, routers and high-performance antennas along with management software), these technologies are also finding their way into larger environments.

My first use of UBNT was several years ago when I needed to get an IP network connection to a remote building separated by several hundred yards of forest. The solution I found was to get a pair of UBNT NANO APs and put them in secure bridge mode; now I have a high-performance WiFi service through a forest of trees. Since then, I have replaced an older Cisco router, several Cisco and other APs, and carried out a phased migration of switches.

 

UpdraftPlus – If you have a WordPress web or blog site, you should also have the UpdraftPlus plugin (go premium, by the way) for data protection. I have been using Updraft for several years on my various sites to back up and protect the MySQL databases and all other content. For those of you who are familiar with Spanning (e.g., acquired by EMC, then divested by Dell) and what they do for cloud applications, UpdraftPlus does something similar for lower-end, smaller cloud-based applications.

 

Vexata – Startup with a scale-out NVMe storage solution.

 

VMware – Expanding their cloud foundation from on-prem to in and on clouds, including AWS among others. Their data infrastructure focus continues to expand from core to edge across server, storage, I/O, and networking. With VMware recently declaring a dividend as part of the Dell Technologies changes, it should be interesting to see what lies ahead for both entities.

What About Those Not Mentioned?

By the way, if you were wondering why others are not in the above list: simple, check out last year’s list, which includes Apcera, Blue Medora, Broadcom, Chelsio, Commvault, Compuverde, Datadog, Datrium, Docker, E8 Storage, Elastifile, Enmotus, Everspin, Excelero, Hedvig, Huawei, Intel, Kubernetes, Liqid, Maxta, Micron, Minio, NetApp, Neuvector, Noobaa, NVIDIA, Pivot3, Pluribus Networks, Portworx, Rozo Systems, ScaleMP, Storpool, Stratoscale, SUSE Technology, Tidalscale, Turbonomic, Ubuntu, Veeam, Virtuozzo and WekaIO. Note that many of the above have expanded their capabilities in the past year and remain, or have become, even more interesting to watch, while some might be on a future "where are they now" list sometime down the road. View additional vendors and service providers via our industry links and resources page here.

What About New, Emerging, Trending and Trendy Technologies

Bitcoin and blockchain storage startups, some of which claim or would like to replace cloud storage, taking on giants such as AWS S3 in the not-so-distant future, have been popping up lately. Some of these have good and exciting stories, if they can deliver on the hype along with the premise. A few names to drop include, among others, Filecoin, Maidsafe, Sia, and Storj, along with services from AWS, Azure, Google and a long list of others.

 

Besides Blockchain distributed ledgers, other technologies and trends to keep an eye on include compute processes from ARM to SoC, GPU, FPGA, ASIC for offload and specialized processing. GPU, ASIC, and FPGA are appearing in new deployments across cloud providers as they look to offload processing from their general servers to derive total effective productivity out of them. In other words, innovating by offloading to boost their effective return on investment (old ROI), as well as increase their return on innovation (the new ROI).

Other data infrastructure server I/O which also ties into storage and network trends to watch include Gen-Z that some may claim as the successor to PCIe, Ethernet, InfiniBand among others (hint, get ready for a new round of “something is dead” hype). Near-term the objective of Gen-Z is to coexist, complement PCIe, Ethernet, CPU to memory interconnect, while enabling more granular allocation of data infrastructure resources (e.g., composability). Besides watching who is part of the Gen-Z movement, keep an eye on who is not part of it yet, specifically Intel.

 

NVMe and its many variations, from server-internal to networked NVMe over Fabrics (NVMeoF) along with its derivatives, continue to gain both industry adoption and customer deployment. There are some early NVMeoF based server storage deployments (along with marketing dollars). However, server-side NVMe customer adoption is where the dollars are moving to the vendors. In other words, it's still early in the bigger, broader NVMe and NVMeoF game.

Where to learn more

Learn more about data infrastructures and related topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means

Let's see how those mentioned last year and this year, along with some new and emerging vendors and service providers who did not get mentioned, end up next year, as well as in the years after that.

2018 Hot Popular New Trending Data Infrastructure Vendors to Watch
Different timelines of adoption and deployment for various audiences

 

Keep in mind that there is a difference between industry adoption and customer deployment, granted they are related. Likewise, let's see who will be at the top in three, five and ten years, which means some of the current top or favorite vendors may or may not be on the list, same with some of the established vendors. Meanwhile, check out the 2018 Hot Popular New Trending Data Infrastructure Vendors to Watch.

 

Ok, nuff said, for now.

Cheers Gs

There are many definitions and interpretations of the term "Serverless", but if I had to put it in a few words, it would be: a software architecture which allows the (Dev)Ops team not to care about the backend infrastructure (there are still servers, they just don't care about them). Depending on the use case, there are different components that comprise a Serverless architecture:

  • Cloud data stores
  • API gateways
  • Functions as a Service

In this blog post we examine in more detail how Functions as a Service (FaaS) can be implemented by leveraging the vCloud Director (vCD) platform.

FaaS Requirements

First, let's define some basic requirements for a FaaS solution.

  • As a FaaS developer I would like to be able to create a function with the following properties
    • Name - the name of the function
    • Code - the function executable code, complying with the FaaS API
    • Trigger - the criteria which, if met, will tell the platform to run the function code. In the vCD world we can define this as two events:
      • External API call to an endpoint defined by the trigger
      • A notification event triggered as a result of an operation like creation of a VM.
  • As a FaaS developer I would like my functions to not be limited in terms of scale, or how many events they can handle.
  • As a FaaS developer I would like my functions to run in a sandbox, i.e. other tenants should not have access to my functions.
  • As a Service Provider I would like individual function calls to be limited in the amount of resources they use.
  • As a FaaS developer I would like to be able to update a function.
  • As a FaaS developer I would like to be able to remove a function.

Architecture Alternatives

There are probably many architectures that would satisfy these requirements, but I will touch on two in this blog post and discuss their pros and cons. The first part of both solution architectures is the same: when an event or an external API call is triggered, send a message to a queue. This is the OOTB vCD extensibility mechanism.

Gateway Based Alternative

This alternative relies on a FaaS gateway to handle the request for a function call by:

  1. Starting a previously created container
  2. Running the function with the request payload
  3. Stopping the container
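The three steps above can be sketched as a single gateway handler. This is a minimal illustration, not the actual implementation: the `exec` callback stands in for the real docker CLI and HTTP calls, and the container name is hypothetical.

```javascript
// Sketch of the FaaS gateway flow. `exec` is injected so the docker and
// HTTP steps can be stubbed in tests or swapped for real implementations.
function handleRequest(exec, container, payload) {
  exec("docker start " + container);      // 1. start the previously created container
  const result = exec("POST " + payload); // 2. run the function with the request payload
  exec("docker stop " + container);       // 3. stop the container
  return result;
}
```

Because step 1 happens on every call, the container start time lands directly on the request latency, which is the drawback of this alternative.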

faas_arch_gateway-2.png

This architecture has an obvious drawback: the time it takes to start the container is added to the time it takes to handle a request. On the other hand, it is:

  • Relatively simple
  • Scalable by nature

Queues Based Alternative

The second alternative replaces the FaaS gateway with a very simple router, which routes the messages to function-specific queues. Modern message queue systems can handle message routing; however, the router is shown explicitly in the architecture to clearly communicate the need for message routing, as the first queue is not tenant-aware.

The function containers would need to be "fatter", as the code running inside would need to handle messages from a message queue and translate them into requests to the function.
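The router step can be sketched as follows; the message shape and the queue naming convention here are assumptions for illustration, not the PoC's actual format.

```javascript
// Minimal router sketch: derive the tenant-specific function queue name
// from fields on the incoming message (fields `org` and `fn` are assumed).
function routeToQueue(message) {
  return "faas." + message.org + "." + message.fn;
}
```

The point is only that the router is trivial: all tenant isolation work reduces to picking the right destination queue.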

faas_arch_queue.png

This approach would deliver much faster response time, but it would require a monitoring/scaling mechanism of the containers, which is part of container orchestration solutions like Kubernetes.

Solution Implementation (PoC)

For the current PoC, I've decided to go with the simpler architecture and use vRO as a FaaS Gateway. We will cover only the external API endpoints type of trigger.

Function Definition

To create a function, we need to provide the programming language, the endpoint URI, and the function code itself.

Screen Shot 2018-07-18 at 18.26.00.png

When we hit create, it will:

  1. Store the function in a persistent store.
  2. Create a container
  3. Register the endpoint using the vCD API extensibility

Screen Shot 2018-07-18 at 18.26.10.png

There are few things to notice here:

  1. The status is initializing, because creating the container takes a minute or so. This is why we've made the process async.
  2. The route is tenant-specific. In the request form we provided "hello", but the solution generated "/api/org/pscoe/hello".
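A hypothetical helper reproducing that route generation might look like this (the helper name is illustrative; the route shape is taken from the generated route above):

```javascript
// The tenant org is embedded in the route so one tenant cannot
// reach another tenant's functions through the vCD API extension.
function tenantRoute(org, name) {
  return "/api/org/" + org + "/" + name;
}
```

For example, tenantRoute("pscoe", "hello") yields "/api/org/pscoe/hello", matching the generated route above.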

Container

Our container is relatively simple. It has:

  • Dockerfile, which describes the container

Screen Shot 2018-07-19 at 12.05.11.png

  • handler.js which contains the function code

Screen Shot 2018-07-19 at 12.06.05.png

  • package.json, which is used by the NodeJS package installer (NPM)
  • server.js, which is a very simple express web server, used to redirect requests to the function code

Screen Shot 2018-07-19 at 12.03.05.png
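As a rough, self-contained sketch of what handler.js and the dispatch step inside server.js could look like (the real PoC code is in the screenshots above; the names and payload shape here are assumptions):

```javascript
// handler.js (assumed shape): the function code the container wraps.
const handler = (payload) => ({ message: "Hello, " + payload.name + "!" });

// server.js dispatch step (assumed): parse the HTTP request body,
// hand it to the function, and serialize the result for the response.
function dispatch(body) {
  return JSON.stringify(handler(JSON.parse(body)));
}
```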

Finally, we use docker to build our image and create a container from it.

 

Screen Shot 2018-07-18 at 18.29.24.png

Function Calls

Once the function is in ready state, we can test it using the "Test Function" button, which makes a simple GET HTTP request using the function's route.

Screen Shot 2018-07-19 at 13.02.52.png

FaaS Gateway:

  1. Starts the container.
  2. Makes an HTTP POST request to the container port with the body of the original request.
  3. Stops the container.

Screen Shot 2018-07-19 at 13.08.32.png

Result

Screen Shot 2018-07-18 at 18.28.05.png

For unknown reasons, the filesystem on the NSX Manager can go bad, and recovering from this is different compared with other filesystem recovery methods.

 

Problem: The NSX Manager VM is unable to boot. You will see a screen something like the one below:

 


 

 

Recovery Steps:

  • Download and connect the Ubuntu ISO to the CD drive of the NSX Manager VM.
  • Boot from the CD drive and choose the "try now" option.
  • Run the recovery command "fsck /dev/sda2".
  • The NSX Manager VM will recover its filesystem and boot normally.

When using a pay-as-you-go type of service, you would like to see how much you have spent and how much you are going to pay at the end of the month. You might also want insight into the cost of services for the last several months.

The native UI extensibility of vCloud Director (vCD), introduced in version 9.0, allows us to build any type of custom UI and incorporate it into the product to offer seamless user experience. This includes dashboards and charts - the ultimate visual aid for statistical data.

Solution Architecture

showback.png

Our solution consists of 3 main components:

  1. UI extension, containing a dashboard with different charts presenting billing information.
  2. Storage, exposed through a vCD API extension, for retrieving billing data.
  3. Data Collectors, small scheduled processes that pull data from billing solutions (like VRBC).

You may wonder "Why are we not pulling the data directly from VRBC?". There are a couple of reasons:

  • To optimize for performance: the data is stored ready to be consumed by the UI.
  • To support different billing data sources. Often, service providers charge not only for the infrastructure but also for additional custom services they offer, e.g., API calls to a messaging queue.

Solution Implementation (PoC)

Let's try to build a simplified version of this solution as a proof of concept.

Data

For our PoC, we will prepare the data manually in a JSON format.

Screen Shot 2018-07-14 at 9.17.29.png
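For illustration only (the actual PoC data is in the screenshot above, and the field names here are hypothetical), the manually prepared billing records could have a shape like this, with the small per-month aggregation a dashboard chart would consume:

```javascript
// Hypothetical shape of the manually prepared billing data.
const billingData = [
  { month: "2018-05", service: "compute", cost: 1250.0 },
  { month: "2018-05", service: "storage", cost: 310.5 },
  { month: "2018-06", service: "compute", cost: 1190.25 }
];

// Total cost per month, as a chart series might consume it.
const totals = billingData.reduce((acc, r) => {
  acc[r.month] = (acc[r.month] || 0) + r.cost;
  return acc;
}, {});
```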

API Extension

To serve the data we need an API extension.

Screen Shot 2018-07-14 at 9.22.00.png

Dashboard

Once we have the data served, we can display it using a charting library in the vCD user interface and present the showback information to the tenant administrator.

Screen Shot 2018-07-14 at 8.15.23.png

When disconnecting media from a VM's virtual CD/DVD drive, if the media is still mounted in the Linux guest, a question message appears and the VM is paused.

Moreover, even when the media has been unmounted in the guest OS, the VM can sometimes be paused in the same way.

So let's use PowerCLI to eject the media while the VM stays powered on.

 

VM state when disconnecting media.

When you try to eject media from the virtual CD/DVD drive, the VM ends up in a state like the one below.

eject-vm-stop-01.png

 

In this state, the VM is paused until the question is answered, as shown below.

eject-vm-stop-02.png

 

To avoid this state, add parameters to the target VM as described in the following KB.

 

Linux virtual machine does not respond after a mounted CDROM is disconnected (2144053)

https://kb.vmware.com/kb/2144053?lang=ja

 

Adding the parameters and disconnecting the media with PowerCLI.

I created a PowerCLI script like the one below, which:

  • Adds the parameters from the KB to the VM.
  • Disconnects the media from the VM's virtual CD/DVD drive.
  • Removes the parameters from the VM.

 

eject_cd_no-msg.ps1 · GitHub

$vm_name = $args[0]

 

Get-VM $vm_name | % {

    $vm = $_

   

    # Add AdvancedSetting

    $vm | New-AdvancedSetting -Name cdrom.showIsoLockWarning -Value "FALSE" -Confirm:$false |

        ft -AutoSize Entity,Name,Value

    $vm | New-AdvancedSetting -Name msg.autoanswer -Value "TRUE" -Confirm:$false |

        ft -AutoSize Entity,Name,Value

   

    # Eject

    $cd_drive = $vm | Get-CDDrive |

        Set-CDDrive -NoMedia -Connected:$false -Confirm:$false

        $cd_drive | Select-Object `

        @{N="VM";E={$_.Parent.Name}},

        @{N="StartConnected";E={$_.ConnectionState.StartConnected}},

        @{N="Connected";E={$_.ConnectionState.Connected}},

        IsoPath

 

    # Remove AdvancedSetting

    $vm | Get-AdvancedSetting -Name cdrom.showIsoLockWarning | Remove-AdvancedSetting -Confirm:$false

    $vm | Get-AdvancedSetting -Name msg.autoanswer | Remove-AdvancedSetting -Confirm:$false

}

 

After connecting to vCenter with Connect-VIServer, run the script from the command line as follows.

PowerCLI> .\eject_cd_no-msg.ps1 <VM の名前>

 

The media is ejected from the virtual CD/DVD drive, as shown below.

Because the media has been ejected, the IsoPath column at the end is blank.

PowerCLI> .\eject_cd_no-msg.ps1 lab-ldap02

 

Entity     Name                     Value

------     ----                     -----

lab-ldap02 cdrom.showIsoLockWarning FALSE

 

Entity     Name           Value

------     ----           -----

lab-ldap02 msg.autoanswer TRUE

 

VM         StartConnected Connected IsoPath

--         -------------- --------- -------

lab-ldap02          False     False

 

PowerCLI>

 

By the way, this environment is vCenter 6.5 U1 / ESXi 6.5 U1 / PowerCLI 10.1.

 

That's it for ejecting media from a virtual CD/DVD drive with PowerCLI.

Let's configure the syslog forwarding destination servers for NSX with PowerNSX.

 

PowerNSX does not provide dedicated cmdlets for the syslog server settings themselves, so we configure them through the NSX API with Invoke-NsxWebRequest.

This environment is vCenter 6.7a / NSX-v 6.4.1 / PowerCLI 10.1 / PowerNSX 3.0.

 

NSX Manager syslog server settings.

The API reference is below.

Working With the Appliance Manager

VMware NSX for vSphere API documentation

 

Prepare an XML file.

Here, 192.168.1.223 is specified as the syslog server.

 

syslog-nsx-manager.xml

<syslogserver>

  <syslogServer>192.168.1.223</syslogServer>

  <port>514</port>

  <protocol>UDP</protocol>

</syslogserver>

Load the XML file and apply it via the NSX API using Invoke-NsxWebRequest.

PowerNSX> [String]$xml_text = Get-Content .\syslog-nsx-manager.xml

PowerNSX> Invoke-NsxWebRequest -method PUT -URI "/api/1.0/appliance-management/system/syslogserver" -body $xml_text

 

Confirm that the syslog server address has been set.

PowerNSX> $data = Invoke-NsxWebRequest -method GET -URI "/api/1.0/appliance-management/system/syslogserver"

PowerNSX> [xml]$data.Content | select -ExpandProperty syslogserver

 

syslogServer  port protocol

------------  ---- --------

192.168.1.223 514  UDP

 

 

NSX Controller syslog server settings.

For NSX Controllers, you specify the object ID and configure each virtual appliance individually.

 

The API reference is below.

Working With NSX Controllers

VMware NSX for vSphere API documentation

 

First, check the object ID of the NSX Controller.

NSX normally deploys three NSX Controllers, but my lab has only one due to resource constraints.

PowerNSX> Get-NsxController | select name,id

 

name            id

----            --

infra-nsxctl-01 controller-1

 

 

Prepare an XML file.

 

syslog-nsx-controller.xml

<controllerSyslogServer>

  <syslogServer>192.168.1.223</syslogServer>

  <port>514</port>

  <protocol>UDP</protocol>

  <level>INFO</level>

</controllerSyslogServer>

 

As with the Manager, configure it with Invoke-NsxWebRequest.

PowerNSX> [String]$xml_text = Get-Content .\syslog-nsx-controller.xml

PowerNSX> Invoke-NsxWebRequest -method POST -URI "/api/2.0/vdn/controller/controller-1/syslog" -body $xml_text

 

The forwarding syslog server is now configured.

PowerNSX> $data = Invoke-NsxWebRequest -method GET -URI "/api/2.0/vdn/controller/controller-1/syslog"

PowerNSX> [xml]$data.Content | select -ExpandProperty controllerSyslogServer

 

syslogServer  port protocol level

------------  ---- -------- -----

192.168.1.223 514  UDP      INFO

 

 

NSX Edge syslog server settings.

Let's configure the syslog forwarding destination for the NSX Edge Services Gateway (ESG) and the DLR Control VM.

For this syslog forwarding to work, the virtual appliances must be able to reach the syslog server (routing must be in place).

 

The API reference is below.

Working With NSX Edge

VMware NSX for vSphere API documentation

 

Prepare an XML file like the following.

 

syslog-nsx-edge.xml

<syslog>

  <protocol>udp</protocol>

  <serverAddresses>

    <ipAddress>192.168.1.223</ipAddress>

  </serverAddresses>

</syslog>

 

Check the object ID of the NSX Edge Services Gateway (ESG).

PowerNSX> Get-NsxEdge -Name infra-nsxesg-01 | select name,id

 

name            id

----            --

infra-nsxesg-01 edge-1

 

 

Apply the setting with Invoke-NsxWebRequest.

PowerNSX> [String]$xml_text = Get-Content .\syslog-nsx-edge.xml

PowerNSX> Invoke-NsxWebRequest -method PUT -URI "/api/4.0/edges/edge-1/syslog/config" -body $xml_text

 

Confirm that the setting has been applied.

The retrieved XML can also be inspected with Format-XML.

The Syslog service is also enabled automatically (enabled = true).

PowerNSX> $data = Invoke-NsxWebRequest -method GET -URI "/api/4.0/edges/edge-1/syslog/config"

PowerNSX> $data.Content | Format-XML

<?xml version="1.0" encoding="UTF-8"?>

<syslog>

  <version>12</version>

  <enabled>true</enabled>

  <protocol>udp</protocol>

  <serverAddresses>

    <ipAddress>192.168.1.223</ipAddress>

  </serverAddresses>

</syslog>

 

Check the Object ID of the DLR Control VM as well.

PowerNSX> Get-NsxLogicalRouter | select name,id

 

name            id

----            --

infra-nsxdlr-01 edge-5

 

 

Since the DLR Control VM is also an NSX Edge appliance, its Syslog forwarding target can be set with the same XML and API as the ESG.

PowerNSX> [String]$xml_text = Get-Content .\syslog-nsx-edge.xml

PowerNSX> Invoke-NsxWebRequest -method PUT -URI "/api/4.0/edges/edge-5/syslog/config" -body $xml_text
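Because the ESG and the DLR Control VM share the same syslog API path, the PUT can also be looped over every edge in the environment. A minimal sketch, assuming a PowerNSX session and $xml_text loaded from syslog-nsx-edge.xml as above:

```powershell
# Sketch: set the same Syslog forwarding target on all ESGs and DLR Control VMs.
# Assumes $xml_text contains the <syslog> body from syslog-nsx-edge.xml.
$edgeIds = @(Get-NsxEdge | ForEach-Object { $_.id }) +
           @(Get-NsxLogicalRouter | ForEach-Object { $_.id })
foreach ($edgeId in $edgeIds) {
    Invoke-NsxWebRequest -method PUT -URI "/api/4.0/edges/$edgeId/syslog/config" -body $xml_text
}
```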

 

This confirms that the forwarding-target Syslog server has been set.

PowerNSX> $data = Invoke-NsxWebRequest -method GET -URI "/api/4.0/edges/edge-5/syslog/config"

PowerNSX> $data.Content | Format-XML

<?xml version="1.0" encoding="UTF-8"?>

<syslog>

  <version>2</version>

  <enabled>true</enabled>

  <protocol>udp</protocol>

  <serverAddresses>

    <ipAddress>192.168.1.223</ipAddress>

  </serverAddresses>

</syslog>

 

If there are no problems in the network environment, each NSX component should now forward its logs to the Syslog server.

Incidentally, in this lab the logs are forwarded to vRealize Log Insight (vRLI).

 

On the receiving side, you verify by hostname (and similar fields) that logs are arriving; in vRLI, the hosts currently sending logs can be checked from around the Administration → Hosts screen.

nsx-to-vrli-01.png

 

When verifying reception by hostname in Interactive Analytics, grouping the hostname field as a unique count over time makes it easy to see, per time slot, whether each host is sending logs.

nsx-to-vrli-02.png

 

That concludes this look at configuring NSX Syslog servers with PowerNSX.

In the previous post, I configured a DHCP server on the NSX Edge Services Gateway (ESG) and a DHCP relay agent on the Distributed Logical Router (DLR).

Configuring DHCP Server / Relay on NSX ESG / DLR.

 

This time, let's build the same environment with PowerNSX.

Unlike last time, assuming a more realistic workflow, the steps that only need to be done once are skipped, leaving the following:

  • Enabling the DHCP service on the ESG → already configured
  • Creating a logical switch and connecting it to the DLR
  • Creating an IP pool on the ESG
  • Registering the DHCP relay server on the DLR → already configured
  • Specifying the DHCP relay agent on the DLR
  • Creating a VM and connecting it to the logical switch

 

nsx-edge-dhcp-powernsx.png

 

The environment is the same as in the previous post.

Connections to vCenter and NSX Manager have already been established.

How to connect to NSX with PowerNSX.

 

Creating the logical switch and connecting it to the DLR.

First, create the logical switch "ls-lab-vms-01" for the network that will use DHCP.

PowerNSX> Get-NsxTransportZone infra-tz-01 | New-NsxLogicalSwitch ls-lab-vms-01

 

Connect the logical switch to the DLR.

When doing so, also specify the gateway address (10.0.1.1).

PowerNSX> Get-NsxLogicalRouter -Name infra-nsxdlr-01 | New-NsxLogicalRouterInterface -Name if-ls-lab-vms-01 -ConnectedTo (Get-NsxLogicalSwitch ls-lab-vms-01) -PrimaryAddress 10.0.1.1 -SubnetPrefixLength 24 -Type internal

 

Check the index of the DLR interface to which the logical switch is connected.

PowerNSX> Get-NsxLogicalRouter -Name infra-nsxdlr-01 | Get-NsxLogicalRouterInterface -Name if-ls-lab-vms-01 | Format-List name,connectedToName,index

 

name            : if-ls-lab-vms-01

connectedToName : ls-lab-vms-01

index           : 10

 

 

Creating the IP pool on the ESG.

PowerNSX coverage of the ESG network services is rather thin, so the IP pool is created through the NSX API with Invoke-NsxWebRequest.

There is a similar cmdlet, Invoke-NsxRestMethod, but it has been deprecated.

 

First, create an XML file that defines the contents of the IP pool.

 

esg_dhcp_pool_10.0.1.1.xml

<ipPool>

  <ipRange>10.0.1.100-10.0.1.199</ipRange>

  <subnetMask>255.255.255.0</subnetMask>

  <defaultGateway>10.0.1.1</defaultGateway>

  <domainName>go-lab.jp</domainName>

  <primaryNameServer>192.168.1.101</primaryNameServer>

  <secondaryNameServer>192.168.1.102</secondaryNameServer>

  <leaseTime>86400</leaseTime>

  <autoConfigureDNS>false</autoConfigureDNS>

  <allowHugeRange>false</allowHugeRange>

</ipPool>

 

Check the Object ID of the ESG.

PowerNSX> Get-NsxEdge -Name infra-nsxesg-01 | Format-List name,id

 

name : infra-nsxesg-01

id   : edge-1

 

 

Read in the XML file and create the IP pool.

For the API method, see the reference below.

VMware NSX for vSphere API documentation

 

For the ESG edgeId, specify the one checked earlier.

Casting the file content to XML causes an error here, so it is read as a [String].

PowerNSX> [String]$xml_text = Get-Content ./esg_dhcp_pool_10.0.1.1.xml

PowerNSX> Invoke-NsxWebRequest -method POST -URI "/api/4.0/edges/edge-1/dhcp/config/ippools" -body $xml_text
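As an alternative to an external file, the request body can be written inline as a PowerShell here-string, which keeps the XML and the call together in one script. A sketch with the same lab values, abbreviated to a few elements (the optional elements from the file above can be added the same way):

```powershell
# Sketch: the ipPool body as a here-string instead of an external XML file.
$xml_text = @"
<ipPool>
  <ipRange>10.0.1.100-10.0.1.199</ipRange>
  <subnetMask>255.255.255.0</subnetMask>
  <defaultGateway>10.0.1.1</defaultGateway>
</ipPool>
"@
Invoke-NsxWebRequest -method POST -URI "/api/4.0/edges/edge-1/dhcp/config/ippools" -body $xml_text
```

A here-string is already a single [String], so no cast is needed.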

 

Specifying the DHCP relay agent on the DLR.

The DHCP relay agent is also configured with Invoke-NsxWebRequest.

First, the XML is created.

 

Check the edgeId of the DLR.

PowerNSX> Get-NsxLogicalRouter -Name infra-nsxdlr-01 | Format-List name,id

 

name : infra-nsxdlr-01

id   : edge-5

 

 

Retrieve the current DHCP relay configuration of the DLR.

PowerNSX> $data = Invoke-NsxWebRequest -method GET -URI "/api/4.0/edges/edge-5/dhcp/config/relay"

PowerNSX> $data.Content | Format-XML

<?xml version="1.0" encoding="UTF-8"?>

<relay>

  <relayServer>

    <ipAddress>10.0.0.1</ipAddress>

  </relayServer>

</relay>

PowerNSX>

 

Prepare the XML file.

Specify the index of the DLR interface in vnicIndex, and the gateway address configured on that DLR interface in giAddress.

If more networks are added later and additional relay agents are configured, additional relayAgent elements are simply appended.

 

dlr_dhcp_relay.xml

<?xml version="1.0" encoding="UTF-8"?>

<relay>

  <relayServer>

    <ipAddress>10.0.0.1</ipAddress>

  </relayServer>

  <relayAgents>

    <relayAgent>

      <vnicIndex>10</vnicIndex>

      <giAddress>10.0.1.1</giAddress>

    </relayAgent>

  </relayAgents>

</relay>

 

Apply the DHCP relay configuration, including the relay agent, to the DLR.

PowerNSX> [String]$xml_text = Get-Content ./dlr_dhcp_relay.xml

PowerNSX> Invoke-NsxWebRequest -method PUT -URI "/api/4.0/edges/edge-5/dhcp/config/relay" -body $xml_text

 

Creating the VM and connecting it to the logical switch.

Create a VM from an existing template.

PowerNSX> Get-Template vm-template-01 | New-VM -ResourcePool infra-cluster-01 -Datastore vsanDatastore -Name test-vm-01

 

Connect the VM to the logical switch.

PowerNSX> Get-VM test-vm-01 | Connect-NsxLogicalSwitch -LogicalSwitch (Get-NsxLogicalSwitch ls-lab-vms-01)

 

Power on the VM.

PowerNSX> Get-VM test-vm-01 | Start-VM

 

After a short wait, an IP address from the DHCP pool range is assigned.

PowerNSX> Get-VM test-vm-01 | Get-VMGuest

 

State          IPAddress            OSFullName

-----          ---------            ----------

Running        {10.0.1.100, fe80... Oracle Linux 7 (64-bit)
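To check the lease more strictly, the guest IPv4 address can be compared numerically against the pool range. A minimal sketch; the range boundaries are the ones defined in esg_dhcp_pool_10.0.1.1.xml:

```powershell
# Sketch: verify the guest IPv4 address falls inside the DHCP pool range.
function ConvertTo-UInt32 ($ip) {
    $bytes = [System.Net.IPAddress]::Parse($ip).GetAddressBytes()
    [Array]::Reverse($bytes)  # network byte order -> little-endian for BitConverter
    [BitConverter]::ToUInt32($bytes, 0)
}
# Pick the first IPv4 address reported by VMware Tools.
$ip = (Get-VM test-vm-01 | Get-VMGuest).IPAddress |
      Where-Object { $_ -match '^(\d{1,3}\.){3}\d{1,3}$' } | Select-Object -First 1
$n = ConvertTo-UInt32 $ip
($n -ge (ConvertTo-UInt32 '10.0.1.100')) -and ($n -le (ConvertTo-UInt32 '10.0.1.199'))
```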

 

 

That concludes this look at configuring NSX Edge DHCP relay with PowerNSX.
