This is the Day 1 post of the Japanese vExperts Advent Calendar 2018.

 

vExperts Advent Calendar 2018

https://adventar.org/calendars/3101

 

Since this is Day 1, I'd like to share a tip with a strong Christmas flavor.

So, following a tip from lamw, let's try changing the colors of the vSphere Client.

Add custom color to vSphere HTML5 UI Header/Footer in vSphere 6.7 Update 1 · GitHub

 

Naturally, hacking the HTML5 client is unsupported, so please use this only to set the mood in a home lab or the like.

In this post, too, I avoid touching an existing environment and deploy a fresh VCSA 6.7 U1 instead.

 

Deploying the vCenter Server Appliance (VCSA).

This time, I use the CLI to simplify the deployment procedure.

CLI deployment with vcsa-deploy.exe is a fully supported method.

Trying a CLI deployment of VCSA 6.7 (Enhanced Linked Mode with two embedded-PSC nodes)

 

About the deployment environment.

 

The VCSA used is the latest version as of this post.

  • Version: VMware vCenter Server 6.7 Update 1, build 10244745
  • Installer: VMware-VCSA-all-6.7.0-10244745.iso

 

As the deployment target, an ESXi 6.7 U1 environment has already been set up.

  • ESXi is already installed.
  • The machine needs at least the specs of a Tiny VCSA deployment.
    • 2 or more CPU cores (threads)
    • Roughly 16 GB of memory at minimum (enough to run ESXi plus the VCSA's 10 GB)
    • Disk
  • Datastore and port group names are left at their defaults.
  • Network / DNS addresses are adjusted to the deployment environment.
  • User names / passwords are my usual demo credentials.

 

The deployment configuration file passed to the CLI (a JSON text file) looks like the following.

 

lab-vcsa-67u1.json · GitHub

  • SSH access is enabled for the purposes of this hack.
  • Officially, system_name should specify the VCSA's FQDN hostname, but to skip creating DNS records in the lab it is set to the IP address.

{
    "__version": "2.13.0",
    "__comments": "deploy a VCSA with an embedded-PSC on an ESXi host.",
    "new_vcsa": {
        "esxi": {
            "hostname": "192.168.1.20",
            "username": "root",
            "password": "VMware1!",
            "deployment_network": "VM Network",
            "datastore": "datastore1"
        },
        "appliance": {
            "thin_disk_mode": true,
            "deployment_option": "tiny",
            "name": "lab-vcsa-67u1"
        },
        "network": {
            "ip_family": "ipv4",
            "mode": "static",
            "ip": "192.168.1.55",
            "dns_servers": [
                "192.168.1.101",
                "192.168.1.102"
            ],
            "prefix": "24",
            "gateway": "192.168.1.1",
            "system_name": "192.168.1.55"
        },
        "os": {
            "password": "VMware1!",
            "ntp_servers": [
                "192.168.1.101",
                "192.168.1.102"
            ],
            "ssh_enable": true
        },
        "sso": {
            "password": "VMware1!",
            "domain_name": "vsphere.local"
        }
    },
    "ceip": {
        "settings": {
            "ceip_enabled": false
        }
    }
}
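A malformed configuration file makes vcsa-deploy fail well into the run, so it can save time to validate the JSON syntax first. A minimal sketch, assuming python3 is available on the workstation (the temp path and the stand-in config below are illustrative only, not the real file):

```shell
# Write a minimal stand-in config and validate its JSON syntax before deployment
cat > /tmp/lab-vcsa-67u1.json <<'EOF'
{"__version": "2.13.0", "new_vcsa": {"appliance": {"deployment_option": "tiny"}}}
EOF
python3 -m json.tool /tmp/lab-vcsa-67u1.json > /dev/null && echo "JSON OK"
```

Run the same check against the real file before starting the pre-check step.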

 

Running the VCSA deployment.

Now let's deploy from a Windows client.

The VCSA installer ISO is mounted as the F: drive.

 

Pre-check

※ For lab-vcsa-67u1.json, specify the path where you placed the file.

PS> cd F:/vcsa-cli-installer/win32/

PS> ./vcsa-deploy.exe install --no-esx-ssl-verify --accept-eula --precheck-only ~/lab-vcsa-67u1.json

 

Run the deployment

PS> ./vcsa-deploy.exe install --no-esx-ssl-verify --accept-eula ~/lab-vcsa-67u1.json

 

Once the deployment completes, the HTML5 vSphere Client becomes accessible.

vcsa-html5-hack-01.png

 

Changing the colors of the HTML5 Client (vSphere Client).

※ Everything from here on is an unsupported method.

 

This time, I prepared a script.

Add custom color to vSphere HTML5 UI Header/Footer in vSphere 6.7 Update 1 · GitHub

 

Contents of the NotSupported_H5ClientHacks-Xmas.sh file

  • Backs up the original file, just in case.
  • Runs everything from replacing the file through restarting the service.
  • The colors are Christmas-themed.

NEW_HEADER_HEX_COLOR=006400
NEW_BOTTOM_HEX_COLOR=8b0000
BACKUP_FILE=/usr/lib/vmware-vsphere-ui/plugin-packages/root-app/plugins/h5ngc.war.bak
if [ ! -e ${BACKUP_FILE} ]; then
  cp /usr/lib/vmware-vsphere-ui/plugin-packages/root-app/plugins/h5ngc.war ${BACKUP_FILE}
fi
mkdir -p /root/work
cd /root/work
cp /usr/lib/vmware-vsphere-ui/plugin-packages/root-app/plugins/h5ngc.war .
unzip h5ngc.war
rm -f h5ngc.war
cat << EOF >> resources/css/NotSupported_H5ClientHacks.css
.main-nav HEADER{
  background-color:#${NEW_HEADER_HEX_COLOR} !important; }
bottom-panel toggle-splitter {
  background: #${NEW_BOTTOM_HEX_COLOR} !important; }
EOF
sed -i '/--%>/a \
   <link href="resources/css/NotSupported_H5ClientHacks.css" rel="stylesheet"/>' WEB-INF/views/index.jsp
zip -r /root/h5ngc.war config error.jsp locales META-INF notfound.jsp plugin.xml resources webconsole.html WEB-INF
cd /root
rm -rf /root/work
cp /root/h5ngc.war /usr/lib/vmware-vsphere-ui/plugin-packages/root-app/plugins/
service-control --stop vsphere-ui; service-control --start vsphere-ui
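The trickiest line in the script is the sed append, which inserts the stylesheet link right after the JSP comment closer "--%>". A self-contained sketch of just that edit, using a made-up stand-in for index.jsp in /tmp:

```shell
# Create a tiny stand-in for index.jsp, then append the <link> after the "--%>" line
printf '%s\n' '<%-- header comment --%>' '<head></head>' > /tmp/index.jsp
sed -i '/--%>/a \   <link href="resources/css/NotSupported_H5ClientHacks.css" rel="stylesheet"/>' /tmp/index.jsp
# Count how many times the stylesheet reference now appears (expect one)
grep -c 'NotSupported_H5ClientHacks.css' /tmp/index.jsp
```

The leading backslash in the appended text is a GNU sed extension that preserves the indentation of the inserted line.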

 

Log in to the VCSA over SSH and launch the shell.

[gowatana@infra-jbox-01 ~]$ ssh root@192.168.1.55

 

VMware vCenter Server Appliance 6.7.0.20000

 

Type: vCenter Server with an embedded Platform Services Controller

 

root@192.168.1.55's password:

Last login: Sat Dec  1 03:46:41 2018 from 192.168.1.103

Connected to service

 

    * List APIs: "help api list"

    * List Plugins: "help pi list"

    * Launch BASH: "shell"

 

Command> shell

Shell access is granted to root

root@photon-machine [ ~ ]#

 

Download the script and run it.

root@photon-machine [ ~ ]# curl -O https://gist.githubusercontent.com/gowatana/a82d8038c7994e317484d747e8edf461/raw/NotSupported_H5ClientHacks-Xmas.sh

root@photon-machine [ ~ ]# bash ./NotSupported_H5ClientHacks-Xmas.sh

 

Alternatively, the download and execution can be combined into one line.

root@photon-machine [ ~ ]# curl https://gist.githubusercontent.com/gowatana/a82d8038c7994e317484d747e8edf461/raw/NotSupported_H5ClientHacks-Xmas.sh | bash

 

After waiting for the service restart, access the HTML5 vSphere Client and it will look Christmas-themed.

vcsa-html5-hack-02.png

 

To revert, restore the backup file as shown below and restart the service.

root@photon-machine [ ~ ]# cp /usr/lib/vmware-vsphere-ui/plugin-packages/root-app/plugins/h5ngc.war.bak /usr/lib/vmware-vsphere-ui/plugin-packages/root-app/plugins/h5ngc.war

root@photon-machine [ ~ ]# service-control --stop vsphere-ui; service-control --start vsphere-ui

 

That wraps up this look at giving the vSphere Client Christmas colors.

Day 2 of the vExpert Advent Calendar is planned to be by kmassue. Over to you!

Dear readers,

this is the first blog of a series related to NSX-T. This first post provides a simple introduction to the concepts most relevant to understanding the implications of centralized services in NSX-T. A centralized service could be, for example, a load balancer or an edge firewall.

 

NSX-T can do distributed routing and supports a distributed firewall. Distributed routing means that each host prepared for NSX-T can route locally. In the logical view, this component is called the Distributed Router (DR). The DR is part of a Logical Router (LR), and an LR can be configured at the Tier-0 or Tier-1 level. Distributed routing scales well and can reduce the bandwidth utilization of each physical NIC on the host, because the routing decision is made on the local host. For example, when the source and destination VMs are on the same host but connected to different IP subnets, and therefore attached to different overlay Logical Switches, the traffic never leaves the host. All forwarding is processed on the host itself instead of in the physical network, for example on the ToR switch.

Each host that is prepared with NSX-T and attached to an NSX-T Transport Zone is called a Transport Node (TN). Transport Nodes implicitly have an N-VDS configured, which, for example, provides the GENEVE Tunnel Endpoint and is responsible for distributed firewall processing. However, services like load balancing or edge firewalling are not distributed services; VMware calls these "centralized services". A centralized service instantiates a Service Router (SR), and this SR runs on an NSX-T edge node (EN). An edge node can be a VM or a bare-metal server. Each edge node is also a Transport Node (TN).

 

Let's now look at a simple two-tier NSX-T topology with a tenant BLUE and a tenant RED. For now, neither tenant has any centralized service enabled at the Tier-1 level. For North-South connectivity to the physical world, a centralized service is already instantiated at Tier-0. We don't want to focus on this North-South routing part, but since we later want to understand what it means to configure a centralized service on a Tier-1 Logical Router, it is important to understand it as well, because North-South routing is also a centralized service. The diagram below shows the logical representation of a simple lab setup. This lab setup will later be used to instantiate a centralized service on a Tier-1 Logical Router.

Blog-Diagram-1.png

For those who would like a better understanding of the topology, I have included a diagram of the physical view below. This lab actually uses 4 ESXi hosts. For simplicity, this blog focuses on the ESXi hypervisor rather than KVM, even though a similar lab could be built with KVM too. On each of the two Transport Nodes ESX70A-TN and ESX71A-TN, one VM is installed. The two other hosts, ESX50A and ESX51A, are NOT* prepared for NSX-T, but each hosts a single edge-node VM (EN1 and EN2). These two edge nodes don't have to run on two different ESXi hosts, but doing so is recommended for redundancy.

Blog-Diagram-2.png

As shown in the next diagram, we now combine the physical and logical views. The two Transport Nodes ESX70A-TN and ESX71A-TN have only DRs instantiated at the Tier-1 and Tier-0 levels, but no Service Router; that means each Logical Router consists only of a DR. These Tier-1 DRs provide the gateway (.254) for the attached Logical Switch. Tenant BLUE uses VNI 17289 and tenant RED uses VNI 17294; NSX-T assigns these VNIs from a VNI pool (default pool: 5000 - 65535). The edge-node VMs, now shown as Edge Transport Nodes (EN1-TN and EN2-TN), have the same Tier-1 and Tier-0 DRs instantiated, but only Tier-0 includes a Service Router (SR).

Blog-Diagram-1.3.png

The two Tier-1 Logical Routers, or rather their DRs, can only talk to each other via the green Tier-0 DR. But before you can attach the two Tier-1 DRs to a Tier-0 DR, a Tier-0 Logical Router is required, and a Tier-0 Logical Router mandates the assignment of an edge cluster during its configuration. Let's assume at this point that we have already configured two edge-node VMs and assigned them to an edge cluster. A Tier-0 Logical Router always consists of a Distributed Router (DR) and, depending on the node type, a Service Router as well. A Service Router is always required for the Tier-0 Logical Router, as it is responsible for routing connectivity to the physical world, but the Service Router is only instantiated on the edge nodes. In this lab, both Service Routers are configured on the two edge nodes (Edge Transport Nodes) in active/active mode to provide ECMP to the physical world.

All the internal transit links shown in the diagram below are configured automatically by NSX-T. The only task for the network administrator is to connect the Tier-0 DR to the Tier-1 DRs.

The northbound connection to the physical world additionally requires the configuration of a VLAN-based Transport Zone (or better, two Transport Zones for routing redundancy) plus the routing peering (typically eBGP). Below is the resulting logical network topology.

One might ask why NSX-T instantiates the two Tier-1 DRs on each edge node as well. This is required for optimized forwarding. As already mentioned, routing decisions are always made on the host where the traffic is sourced. Assume vm1 in tenant BLUE wants to talk to a server in the physical world. Traffic sourced at vm1 is forwarded to its local gateway on the Tier-1 DR and then on to the Tier-0 DR on the same host. From the Tier-0 DR, the traffic is forwarded to the left Tier-0 SR on EN1-TN (let's assume traffic is hashed accordingly), and the flow then reaches the external destination. The return traffic first reaches the Tier-0 SR on EN2-TN (again, let's assume, based on the hash); it is then forwarded locally to the Tier-0 DR on the same Edge Transport Node, and on to the Tier-1 DR in tenant BLUE. The traffic never leaves EN2-TN until it reaches, locally, the Logical Switch to which vm1 is attached. This is what is called optimized forwarding, made possible by the distributed NSX-T architecture. The traffic needs to be forwarded only once over the physical data center infrastructure, and is therefore encapsulated into GENEVE only once per direction!

Blog-Diagram-1.4.png

That closes this first blog. In the second blog we will dive into the instantiation of a centralized service at Tier-1. I hope you had at least a little fun reading this first write-up.

 

 

 

 

*Today, NSX-T also supports running edge-node VMs on NSX-T-prepared hosts. This capability is important for combining compute and edge-node services on the same host.

Version 1.0 - 19.11.2018

Version 1.1 - 27.11.2018 (minor changes)

Version 1.2 - 04.12.2018 (cosmetic changes)

Version 1.3 - 10.12.2018 (link for second blog added)

Introduction

vRealize Automation relies on a blueprint concept for offering catalog services. This means that the service is largely defined in the catalog, and users just request the pre-defined services. Although there are options for parametrization and deployment type (template, unattended installation …), some users might still want to do a custom installation of a VM. To do this, they need a way to attach an ISO to a VM provisioned by vRA, and there needs to be a method to upload an ISO of their choice. Neither is available out of the box in vRA, but both can be implemented with small customizations.

 

This article describes how to

  • Leverage a central ISO storage
  • Provide upload web page for ISOs
  • Integrate mount and unmount day-2 operations
  • Create example blueprint with day-2 operations entitled

 

image002.png

 

 

Disclaimer

This article just gives an example and a starting point for how this requirement can be met. It is not intended to provide a "water-proof" solution, nor does it leverage all the capabilities PHP provides.

 

 

Preparation of central ISO storage

 

vSphere typically uses a central shared datastore as an ISO repository. This can be of any supported type, such as iSCSI, Fibre Channel or NFS. For this use case, the best way to go is an NFS share: while mounting and ISO browsing would work with any datastore, it's much easier to upload files to a file service than to block storage.

Therefore, as a first step you should provide an NFS share that grants write permissions to the web server we will discuss later and appropriate permissions to the ESX hosts.

This NFS share must be added as a datastore to all ESX hosts where VMs with mounted ISOs will be used.

 

At this point I won’t go into details on how this is done. Please refer to vSphere documentation.

 

  image003.png

 

 

 

Create upload server for ISOs

 

Now we need to set up an upload server that hosts the web page from which the user can start an image upload. vRealize Automation itself does not provide this capability. It is also hard to use vRealize Orchestrator workflows directly for this task, as Orchestrator expects the source files to be hosted on the Orchestrator VM itself rather than in a directory on the client PC.

 

The easiest way to create an upload server is to leverage a Linux web server for this task. In this example I am using a CentOS 7 based VM, but the procedure should generally work for any Linux running Apache and PHP.

 

Setting up the web server on Linux

 

The basic procedure for the upload server setup is described here: https://www.w3schools.com/php/php_file_upload.asp

 

In this example, slight modifications are used. For example, the configuration below checks whether a file of the same name has already been uploaded and only accepts ISO files.

 

The following steps must be performed:

 

Install Apache on Linux

 

yum install httpd

 

Make sure Apache starts when the VM boots

 

chkconfig httpd on

 

Install PHP (might be more packages than actually needed)

 

yum install php php-mysql php-devel php-gd php-pecl-memcache php-pspell php-snmp php-xmlrpc php-xml

 

Modify /etc/php.ini

 

The values are self-explanatory and can be adjusted to individual needs.

 

memory_limit = 64M

upload_max_filesize = 8000M

post_max_size = 8000M

file_uploads = On

 

Create file /var/www/html/index.html

 

<!DOCTYPE html>

<html>

<body>

 

<title>VMware vRealize Automation ISO Upload</title>

<h1>VMware vRealize Automation ISO Upload</h1>

 

<form action="upload.php" method="post" enctype="multipart/form-data">

    Select ISO image to upload:

    <input type="file" name="fileToUpload" id="fileToUpload">

    <input type="submit" value="Upload Image" name="submit">

</form>

<style>

body {

    background-color: #3989C7;

    background-repeat: no-repeat;

    background-position: center top;

    color: white;

    font-size: 150%;

}

</style>

</body>
</html>

 

Create file /var/www/html/upload.php

 

<?php

$target_dir = "uploads/";

$target_file = $target_dir . basename($_FILES["fileToUpload"]["name"]);

$uploadOk = 1;

$imageFileType = strtolower(pathinfo($target_file,PATHINFO_EXTENSION));

// Check if file already exists

if (file_exists($target_file)) {

    echo "Sorry, file already exists.";

    $uploadOk = 0;

}

// Allow certain file formats

if($imageFileType != "iso") {

    echo "Sorry, only ISO files are allowed.";

    $uploadOk = 0;

}

// Check if $uploadOk is set to 0 by an error

if ($uploadOk == 0) {

    echo "Sorry, your file was not uploaded.";

// if everything is ok, try to upload file

} else {

    if (move_uploaded_file($_FILES["fileToUpload"]["tmp_name"], $target_file)) {

        echo "The file ". basename( $_FILES["fileToUpload"]["name"]). " has been uploaded.";

    } else {

        echo "Sorry, there was an error uploading your file.";

    }

}

?>

 

 

Create uploads folder and set proper permissions

 

mkdir /var/www/html/uploads

chown apache:apache /var/www/html/uploads

 

 

Mount ISO NFS share to uploads folder

 

mount -t nfs <server IP or hostname>:/<path-to-iso-folder> /var/www/html/uploads
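A plain mount command does not survive a reboot of the upload server. One common option is an /etc/fstab entry; the line below is a sketch only, with a placeholder server name and export path that you would replace with your own:

```shell
# Print the /etc/fstab line you would append for a persistent NFS mount
# (server name and export path below are placeholders)
printf '%s\n' 'nfs-server:/exports/iso /var/www/html/uploads nfs defaults,_netdev 0 0'
```

The _netdev option delays the mount until the network is up, which avoids boot-time failures on NFS.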

 

 

Restart the Apache service after changes

 

service httpd restart

 

 

After all required steps have been performed, the upload page shown in the screenshot below should appear when you point your browser to the web server.

 

  image004.png

 

 

When a local file has been selected for upload, it is first loaded into the Linux OS memory and then stored in the uploads folder on the NFS server. To optimize performance, it might be necessary to tune the memory_limit parameter in /etc/php.ini.
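As a quick sanity check that the edited php.ini carries the intended limits, the relevant keys can be pulled out with awk. The sketch below writes a sample file mirroring the values used above and extracts the two upload-related settings:

```shell
# Write the sample settings, then extract the two upload-related limits
cat > /tmp/php.ini <<'EOF'
memory_limit = 64M
upload_max_filesize = 8000M
post_max_size = 8000M
file_uploads = On
EOF
awk -F' = ' '/^(upload_max_filesize|post_max_size)/ {print $1 ": " $2}' /tmp/php.ini
```

Point the awk command at the real /etc/php.ini to check the live configuration.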

 

 

 

Create Day-2 operations

 

The actual mount and unmount tasks are performed by the related Orchestrator workflows, invoked as day-2 operations.

As a prerequisite, the vCenter plugin must be configured properly in Orchestrator, allowing the workflows to scan the datastores.

 

The following steps must be done to get the workflows configured.

 

Import Orchestrator package

 

Import Orchestrator package com.vmware.custom.isomount.package (attached to this blog)

 

Modify workflow “Mount ISO”

 

Edit the workflow and select the NFS datastore added previously through the vCenter plugin.

 

  image005.png

 

Set the datastorepath attribute. If the root folder of the NFS share is used, insert the datastore name into the field.

 

  image006.png

 

Save workflow

 

Add custom resources in vRA

 

Go to Design --> XaaS --> Resource Actions and create a new one.

 

Select “Mount ISO” workflow from proper folder.

image007.png

 

 

Keep the defaults on the next page

 

image008.png

 

 

On the next page, remove the description and keep the other values.

 

image009.png

 

 

On the Form tab, drag a new field of type "Link" from left to right and place it above the "select ISO file to mount" field.

 

  image010.png

 

Modify the field constraints to define Value --> Constant --> <URL of the web server>.

Click "Apply" and "Finish".

 

Publish the new resource action

 

Do the same steps for the Unmount ISO resource action; however, no modification of the form page is required.

 

 

Create example blueprint

 

The above-mentioned day-2 operations for mount and unmount can be used with any entitled blueprint. Depending on the VM configuration, however, it might be necessary to modify the workflows, since the CD-ROM devices used to attach the ISO may be identified differently (different identifier numbers or types).

In this example we use an empty VM blueprint that expects the VM installation to happen from the mounted ISO.

 

Create empty VM blueprint

 

Create a new blueprint in vRA through the Design tab.

 

image015.png

 

Specify the "create" action and "BasicVmWorkflow" on the build information tab.

 

image016.png

 

Specify the desired VM parameters, in particular the disk size. The storage maximum must be at least the desired disk size.

 

image017.png

 

Add a disk of the proper size on the Storage tab.

 

image018.png

 

 

Specify custom property with operating system type

 

image019.png

 

In vSphere, the OS type of a VM must be specified during VM creation. To do this you have to set the mentioned custom property in vRA. A list of guest OS identifiers can be found here: http://www.fatpacket.com/blog/2016/12/vm-guestos-identifiers/

 

In this example we are using a Windows 2012 Server guest. For production use of this configuration, it might be necessary to offer the user an OS selection on the request page, or to choose a generic OS type and hard-code it into the blueprint.

 

As a last step, you need to publish the blueprint and create a proper entitlement.

 

image020.png

 

It's recommended to use the VMRC console for full VM installation. VMRC is much easier to handle for this use case than the web remote console; in particular, it offers better keyboard mapping for non-US keyboards.

 

 

Final test

 

After proper entitlement, a new catalog item should appear in the catalog of the entitled user. On request, the virtual machine is provisioned. When the provisioning process has finished, the defined day-2 operations are available on the VM object.

 

image021.png

 

The "mount ISO" operation provides a link to the upload page and shows all available ISOs in a dropdown field. On selecting an ISO and clicking the submit button, the ISO is mounted automatically to the VM.

 

image022.png

 

Users must wait until the mount process has finished before they can carry out other day-2 operations. After that, they can use the VMRC day-2 operation to manage and install the VM from the ISO. If VMRC is not installed on the client, users can use the presented link to download and install it. The VM must be power-cycled to start the ISO boot process.

 

image023.png

The script below exports the details of the existing vCenter Server roles.

 

Copy the code below into a text file, save it as a .ps1 file, and run it in PowerCLI:

#requires -Version 3   
[CmdletBinding(SupportsShouldProcess)]  
  Param(  
   [Parameter(Mandatory=$true, Position=1,  
    ValueFromPipeline=$true)]  
   [AllowNull()]  
   [alias("LiteralPath")]  
   [string]$Path = "c:\temp"   
  ) #Param  
Begin { 
   $DefaultRoles = "NoAccess", "Anonymous", "View", "ReadOnly", "Admin", "VirtualMachinePowerUser", "VirtualMachineUser", "ResourcePoolAdministrator", "VMwareConsolidatedBackupUser", "DatastoreConsumer", "NetworkConsumer" 
   $DefaultRolescount = $defaultRoles.Count 
   $CustomRoles = @() 
} #Begin 
  
Process { 
   $AllVIRoles = Get-VIRole 
  
   0..($DefaultRolescount) | ForEach-Object { 
     if ($(Get-Variable "role$_" -ErrorAction SilentlyContinue)) { 
       Remove-Variable "role$_" -Force -Confirm:$false 
     } #if ($(Get-Variable "role$_" -ErrorAction SilentlyContinue)) 
   } #0..($DefaultRolescount) | Foreach-Object 
  
   0..$DefaultRolescount | ForEach-Object { 
     $DefaultRolesnumber = $DefaultRoles[$_] 
     if ($_ -eq 0) { 
       New-Variable "role$_" -Option AllScope -Value ($AllVIRoles | Where-Object {$_.Name -ne $DefaultRolesnumber}) 
     } #if ($_ -eq 0) 
     else { 
       $vartxt = $_ - 1 
       $lastrole = 'role'+"$vartxt" 
       #Get-Variable $lastrole 
       New-Variable "role$_" -Option AllScope -Value (Get-Variable "$lastrole" | select -ExpandProperty value | Where-Object {$_.Name -ne $DefaultRolesnumber}) 
     } #else ($_ -eq 0) 
   } #0..$DefaultRolescount | ForEach-Object 
   $filteredRoles = Get-Variable "role$($DefaultRolescount-1)" | select -ExpandProperty value 
} #Process 
End { 
   $filteredRoles | ForEach-Object { 
     $completePath = Join-Path -Path $Path -ChildPath "$_.role" 
     Write-Host "Exporting Role `"$($_.Name)`" to `"$completePath`"" -ForegroundColor Yellow 
     $_ | Get-VIPrivilege | select-object -ExpandProperty Id | Out-File -FilePath $completePath 
   } #$filteredRoles | ForEach-Object 
} #End 

#To export permissions: Get-VIPermission | Export-Csv c:\temp\rights.csv

 

To check lockdown mode on all ESXi hosts at the vCenter Server or cluster level, run the below PowerCLI command:

 

get-vmhost | get-view | Select Name, @{N='LockDownActivated';E={$_.Config.AdminDisabled}}

 

If an ESXi host is dead, you can't access the VMs on it, and you need to move a VM to another host while the original host is in a disconnected state: SSH to the new host and run the below command, which re-registers the VM on that host, bypassing vCenter Server.
vim-cmd solo/registervm /vmfs/volumes/datastore_name/VMName/VMName.vmx
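If the dead host held several VMs, the registrations can be looped over the .vmx files found on the datastore. The sketch below only echoes the commands it would run, using a temp directory as a stand-in for /vmfs/volumes (the VM name and paths are illustrative); on a real ESXi shell you would drop the echo and point find at the datastore root:

```shell
# Simulate locating .vmx files under a datastore root and building register commands
mkdir -p /tmp/ds/VMName
touch /tmp/ds/VMName/VMName.vmx
find /tmp/ds -name '*.vmx' | while read -r vmx; do
  echo "vim-cmd solo/registervm $vmx"   # on a real host, run vim-cmd itself
done
```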

So far, this series has covered tips for setting up a nested vSAN environment with PowerCLI.

This time, the topic is simplifying the deployment with scripts built on the command lines shown so far.

 

For the earlier posts in this series, see:

An illustrated guide to the nested vSAN lab.

Tips for building a nested vSAN lab, Part 1. (From the physical ESXi machine to the VCSA deployment)

Tips for building a nested vSAN lab, Part 2. (Preparing the physical ESXi host and the ESXi VMs)

Tips for building a nested vSAN lab, Part 3. (Building the vSAN cluster)

 

The sample scripts for this post have been placed here:

GitHub - gowatana/deploy-1box-vsan

 

Resetting the vSAN environment.

First, the previous posts set up a vSAN environment like the one below.

vsan-1box-5-1.png

 

To try the deployment again, we first reset the environment once.

As in the previous post, PowerCLI remains connected to vCenter via Connect-VIServer.

 

The vSAN cluster name and the ESXi VM names for this post are as follows.

$cluster_name = "vSAN-Cluster"

$vm_name = "vm-esxi-??"

 

Remove all the ESXi hosts in the vSAN cluster, then delete the cluster itself.

Since this wipes the lab anyway, the ESXi hosts are not put into maintenance mode; they are simply disconnected and then removed from the inventory.

$cluster = Get-Cluster $cluster_name

$cluster | Get-VMHost | Set-VMHost -State Disconnected -Confirm:$false

$cluster | Get-VMHost | Remove-VMHost -Confirm:$false

$cluster | Remove-Cluster -Confirm:$false

 

Delete the ESXi VMs as well.

Get-VM $vm_name | Stop-VM -Confirm:$false

Get-VM $vm_name | Remove-VM -DeletePermanently -Confirm:$false

 

This removes the nested vSAN environment from the vCenter inventory.

vsan-1box-5-2.png

 

Deploying the vSAN environment. (Simplified script version)

Now let's deploy using the scripts.

 

Since I want to keep the configuration data separate from the scripts as much as possible, I created a file like the one below.

 

config_vSAN-Cluster-01.ps1

# Lab Global Setting.

$base_vc_address = "192.168.1.30"

$base_vc_user = "administrator@vsphere.local"

$base_vc_pass = "VMware1!"

 

 

 

$nest_vc_address = "192.168.1.30"

$nest_vc_user = "administrator@vsphere.local"

$nest_vc_pass = "VMware1!"

 

$domain = "go.lab.jp"

$hv_ip_prefix_vmk0 = "192.168.1."

$hv_subnetmask = "255.255.255.0" # /24

$hv_gw = "192.168.1.1"

$dns_1 = "192.168.1.101"

$dns_2 = "192.168.1.102"

$hv_user = "root"

$hv_pass = "VMware1!"

 

# Base ESXi Setting

$template_vm_name = "vm-esxi-template-01"

$hv_name = "192.168.1.20"

$base_hv_name = "192.168.1.20"

 

# Cluster setting

$vm_num_start = 1

$vm_num_end = 3

$cluster_name = "vSAN-Cluster-01"

 

# vSAN Disk setting

$vsan_cache_dev = "mpx.vmhba0:C0:T1:L0"

$vsan_capacity_dev = "mpx.vmhba0:C0:T2:L0", "mpx.vmhba0:C0:T3:L0"

 

# VM / ESXi List

$nest_hv_hostname_prefix = "esxi-"

$vm_name_prefix = "vm-esxi-"

 

$vm_name_list = $vm_num_start..$vm_num_end | % {

    $i = $_

    $vm_name_prefix + $i.toString("00")

}

 

$nest_hv_hostname_list = $vm_num_start..$vm_num_end | % {

    $i = $_

    $nest_hv_hostname_prefix + $i.toString("00")

}

 

$hv_ip_vmk0_list = $vm_num_start..$vm_num_end | % {

    $i = $_

    $hv_ip_prefix_vmk0 + (30 + $i).ToString()

}

 

$vc_hv_name_list = $vm_num_start..$vm_num_end | % {

    $i = $_

    $hv_ip_prefix_vmk0 + (30 + $i).ToString()

}
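The three list variables above all derive from the same counter, producing names vm-esxi-01..03 paired with vmk0 addresses .31..33. As a sanity check of that naming scheme, the equivalent loop in shell (for vm_num_start=1, vm_num_end=3):

```shell
# Reproduce the generated VM-name / vmk0-IP pairs for counters 1..3
for i in 1 2 3; do
  printf 'vm-esxi-%02d 192.168.1.%d\n' "$i" $((30 + i))
done
```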

 

Deployment is now a single command line, as shown below.

(Note that this covers only the steps from ESXi VM creation onward.)

PowerCLI> ./setup_vSAN-Cluster_AllFlash.ps1 ./config_vSAN-Cluster-01.ps1

 

The script's output is still rough, but...

vsan-1box-5-3.png

 

As shown below, this sets up an environment equivalent to the vSAN deployed in the previous posts.

vsan-1box-5-4.png

 

This is a nested environment running on SSDs, but you can also set up nested vSAN made to look like a hybrid configuration.

PowerCLI> ./setup_vSAN-Cluster_Hybrid.ps1 ./config_vSAN-Cluster-02.ps1

vsan-1box-5-5.png

 

Resetting the vSAN environment. (Simplified script version)

If the environment reset performed at the beginning is also scripted, rebuilding becomes just as simple.

PowerCLI> ./destroy_vSAN-Cluster.ps1 ./config_vSAN-Cluster-01.ps1

PowerCLI> ./destroy_vSAN-Cluster.ps1 ./config_vSAN-Cluster-02.ps1

 

That concludes this look at simplifying nested vSAN environment builds.

image (1) (1).png

As the internet is one of the most popular technologies these days, even young people are familiar with it and use it. Students in colleges and universities, who carry tons of work and assignments, look for shortcuts to complete their tasks, especially when running out of time. In most institutes, instead of writing assignments in notebooks, students are now expected to email their assignments to their teachers for checking. Some students write assignments wholeheartedly and try their best to prepare well-written work, but when their teachers return feedback, they are told that their work contained plagiarized material, which means they are accused of copying.

Plagiarism is found in a document that has been copied, intentionally or unintentionally, without the permission of the original author. Such accusations can ruin the reputation of students who put in the effort to write and did not copy. They can use a tool such as Plagiarism Detector to save themselves from these situations: instead of manually checking whether their assignments appear copied, they can use the tool to be sure their work will not be considered plagiarized before submitting it.

Owners of websites and companies hire writers to produce articles for their business. They pay them a high amount of money to write unique content for a better brand image, but they cannot always be sure whether their money is being used effectively or whether the writers are simply copying and pasting. If you are eager to know how honest the work of your students or employees is, you can access the free plagiarism detector.

By using a plagiarism detector, you can find out whether you have the best content writers or ones who are just wasting your time. You can then pay the ones who write creative content and dismiss the ones who take shortcuts. These actions can improve your company's image, since only unique content will be published and no customer will be able to say they have read the same article elsewhere. A plagiarism checker may also save the cost wasted on workers who just pass their time in your office and collect payment.

The free Plagiarism Detector is an application you can use by visiting the website plagiarismdetector.com. As it is a web-based tool, no particular device is required; it can be opened in any browser. The only requirement is a working internet connection on the device you use. The result you get after submitting your data tells you two things.

You will be informed what percentage of your article contains plagiarized material and what portion is written uniquely. Since the plagiarized material is highlighted, you can change those lines and make your article 100% original. The highlighted portion can be a sentence or a phrase; if a whole paragraph is plagiarized, the whole paragraph is highlighted by the tool. The results are instant, so you will not have to wait long.

 

The procedure for using the tool is easy and involves no rocket science. No training is required, and you will never be nagged to upgrade, because there is no premium version of the free plagiarism detector; all of its functions are available at no cost. After opening the tool, a box appears on your screen with two options for inserting your data: enter a URL, or upload a file from your device.

 

You can also copy and paste text directly into the box. Simply click Check Plagiarism, and in a few seconds the results are right in front of you; you can then amend your article accordingly. There is no limit on the number of articles you can check, so you can use the free plagiarism detector as many times as you want. I have been using this plagiarism detector for a long time now, and I recommend it as a life-saving tool for students and writers.

We can migrate VMs between ESXi hosts without shared storage, that is, using only their internal storage. This is possible

starting with version 5.1, with a common vCenter Server managing both ESXi hosts.

vMotion must be licensed, and if the CPUs are not compatible, the VMs will have to be migrated powered off.
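As a sketch, a shared-nothing migration like this can be driven from PowerCLI with `Move-VM`, specifying both a destination host and a destination datastore. The VM, host, and datastore names below are hypothetical placeholders for your own environment:

```powershell
# Hypothetical example: migrate a VM to another host's local datastore
# (requires vCenter 5.1+, a vMotion license, and compatible CPUs for a live move;
# otherwise power the VM off first).
$vm      = Get-VM -Name "MyVM"
$dstHost = Get-VMHost -Name "lab-esxi-02"
$dstDs   = Get-Datastore -Name "datastore1-esxi02"
Move-VM -VM $vm -Destination $dstHost -Datastore $dstDs
```

Passing both `-Destination` and `-Datastore` is what makes this a combined compute-and-storage vMotion rather than a plain host migration.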

Have you started trying Wavefront, the metric monitoring tool? There are not many articles or blogs about it on Google, so I would suggest starting with the VMware Hands-on Labs; HOL-1902-01-CMP is a good place to begin. Apart from the HOL, I like to test the Wavefront integrations. Wavefront can integrate directly with various products, and the step-by-step instructions make it very easy to collect metrics from your sources. Moreover, you can modify the pre-defined dashboards and create your own. Of course, the most important thing in the beginning is to learn the built-in Wavefront query language.

 

Once you have an account on a Wavefront instance, you can see the available Wavefront integrations.

 

[Screenshot: Screen Shot 2018-10-26 at 1.13.17 AM.png]

 

Integration with a Windows host is very straightforward. We just need to download the Wavefront proxy and set it up with our own token. The next step is to download and install the Telegraf agent. By default it points to a Wavefront proxy on localhost, so there is nothing to change. Once both the Wavefront proxy and Telegraf agent services are started, you can view the dashboard with your Windows host metrics.
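For reference, the token mentioned above lives in the proxy's wavefront.conf. A minimal excerpt might look like the following; the instance URL and token value are placeholders, not real credentials:

```ini
# Excerpt of wavefront.conf (values are placeholders)
server=https://YOUR_INSTANCE.wavefront.com/api/
hostname=my-windows-host
token=YOUR_API_TOKEN
```

After editing this file, the Wavefront proxy service needs to be restarted for the change to take effect.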

 

[Screenshot: Screen Shot 2018-10-26 at 1.17.51 AM.png]

 

In my testing, I wanted to use my colleague's token so that both of our machines would send metrics to the same Wavefront instance, as a first step toward comparing the two machines in a time series. At first we failed: I modified wavefront.conf and changed the token field, but even after the Wavefront proxy service was restarted, metrics were still sent to my own instance. It looked like a bug, and no resolution could be found on the internet. Finally, a Wavefront engineer suggested removing "C:\Program Files (x86)\Wavefront\bin\.wavefront_id" and restarting the Wavefront proxy service. It worked! It's not documented, and after days of frustration I finally got it fixed.

 

Lastly, by changing the hostname = "" setting in telegraf.conf, we can test multiple sources sending to the same Wavefront proxy instance. It's really cool!
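The hostname override sits in the [agent] section of telegraf.conf; a minimal sketch of the relevant fragment (the value is a placeholder) looks like this:

```toml
# Excerpt of telegraf.conf
[agent]
  # An empty string makes Telegraf use the OS hostname; set an explicit
  # value to distinguish multiple sources reporting through the same
  # Wavefront proxy.
  hostname = "source-a"
```

Giving each machine a distinct hostname here is what lets the sources appear separately in Wavefront even though they share one proxy.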

 

[Screenshot: Screen Shot 2018-10-26 at 1.22.50 AM.png]

Continuing on, I'll introduce more tips for setting up a nested vSAN lab environment.

This time we set up the vSAN cluster itself, again making use of PowerCLI.

 

For the story so far, see:

Nested vSAN Lab, Illustrated.

Tips for Building a Nested vSAN Lab, Part 1. (Physical ESXi Host to VCSA Deployment)

Tips for Building a Nested vSAN Lab, Part 2. (Preparing the Physical ESXi Host and the ESXi VMs)

 

In the previous posts we prepared and booted the nested ESXi hosts. From here we proceed with creating the cluster, registering the ESXi hosts, and enabling vSAN on the cluster.

[Figure: 1box-vsan-13.png]

3-1. Creating the cluster.

Create a cluster named "vSAN-Cluster" in the vCenter inventory.

PowerCLI is still connected to vCenter from the previous post.

At this point, vSAN is not yet enabled.

PowerCLI> Get-Datacenter LAB-DC | New-Cluster -Name vSAN-Cluster

 

Register the three nested ESXi hosts (Nest-ESXi).

  • Use the PowerCLI Add-VMHost cmdlet.
  • Register the ESXi hosts in the inventory by IP address (192.168.1.31 – 192.168.1.33).
  • Because the ESXi VMs are cloned from a single template VM, remove the local
    datastore (datastore1) from each host to avoid errors caused by duplicate IDs.
  • Remove-Datastore reports an error, but the datastore is detached anyway,
    so we ignore the error here.

 

Enter the following command lines at the PowerCLI prompt:

"192.168.1.31","192.168.1.32","192.168.1.33" | %{

    $hv_name = $_

    Add-VMHost -Name $hv_name -Location (Get-Cluster vSAN-Cluster) -User root -Password VMware1! -Force

    Get-VMHost $hv_name | Remove-Datastore -Datastore "datastore*" -Confirm:$false -ErrorAction:Ignore

}

 

Enable vSAN traffic on the Nest-ESXi VMkernel ports.

The target is vmk0 on every host in the "vSAN-Cluster" cluster.

In production, vSAN traffic is usually separated onto a dedicated VMkernel port (such as vmk1),

but since this is a lab, we keep it simple and carry it on vmk0.

PowerCLI> Get-Cluster vSAN-Cluster | Get-VMHost | Get-VMHostNetworkAdapter -Name vmk0 | Set-VMHostNetworkAdapter -VsanTrafficEnabled:$true -Confirm:$false

 

3-2. Setting up the vSAN cluster.

Enable vSAN on the cluster.

PowerCLI> Get-Cluster vSAN-Cluster | Set-Cluster -VsanEnabled:$true -Confirm:$false

 

Create the vSAN disk groups.

First, check the device names recognized by the Nest-ESXi hosts, for example via the "ScsiLun" property.

The devices can be identified by their capacity and the order in which the VMDKs are attached.

As long as the Nest-ESXi VMs share the same vSCSI / VMDK configuration, these device names will always be identical.

PowerCLI> Get-Cluster vSAN-Cluster | Get-VMHost | Get-VMHostDisk | select VMHost,ScsiLun,TotalSectors | Sort-Object VMHost,ScsiLun

 

VMHost       ScsiLun             TotalSectors

------       -------             ------------

192.168.1.31 mpx.vmhba0:C0:T0:L0     33554432

192.168.1.31 mpx.vmhba0:C0:T1:L0     41943040

192.168.1.31 mpx.vmhba0:C0:T2:L0    104857600

192.168.1.31 mpx.vmhba0:C0:T3:L0    104857600

192.168.1.32 mpx.vmhba0:C0:T0:L0     33554432

192.168.1.32 mpx.vmhba0:C0:T1:L0     41943040

192.168.1.32 mpx.vmhba0:C0:T2:L0    104857600

192.168.1.32 mpx.vmhba0:C0:T3:L0    104857600

192.168.1.33 mpx.vmhba0:C0:T0:L0     33554432

192.168.1.33 mpx.vmhba0:C0:T1:L0     41943040

192.168.1.33 mpx.vmhba0:C0:T2:L0    104857600

192.168.1.33 mpx.vmhba0:C0:T3:L0    104857600

 

 

Create a disk group on each host.

PowerCLI> Get-Cluster vSAN-Cluster | Get-VMHost | New-VsanDiskGroup -SsdCanonicalName mpx.vmhba0:C0:T1:L0 -DataDiskCanonicalName mpx.vmhba0:C0:T2:L0,mpx.vmhba0:C0:T3:L0

 

When the disk group creation completes, the capacity of the vSAN datastore (CapacityGB) increases accordingly.

PowerCLI> Get-Cluster vSAN-Cluster | Get-VsanSpaceUsage

 

Cluster              FreeSpaceGB     CapacityGB

-------              -----------     ----------

vSAN-Cluster         295.945         299.953

 

 

The vSAN datastore is now configured, and you can create VMs on it.

Creating port groups, VMs, and so on in the nested environment is done essentially without being aware that it is nested.

 

Incidentally, I have posted before about setting up a vSAN cluster with PowerCLI, but that post is a little old and does not cover the current Hands-on Lab environment...

Trying a vSAN Setup with PowerCLI.

 

That wraps up the nested vSAN environment setup.

 

More to come:

Tips for Building a Nested vSAN Lab, Part 4. (Simplifying with Scripts)

The Art of Creating a Community of Trusted Content

Imagine an online community where customers, partners, and VMware employees come together to create a community of trusted content that benefits all members and visitors. That would be a tremendous resource. But most efforts to unlock, capture, and document the collective wisdom of an online community fail, because they don’t establish a clear definition of trusted content and they don’t inform members how community content reaches a trusted state.

What we mean by Trusted Content

We define it as content that generally has at least one post reply selected as a Correct Answer, and one or more marks given by other members to highlight how much they liked the post or found it to be helpful.

Community is all about collaboration, and all members have a role to play. When a member finds a post or post reply valuable enough to read, we encourage them to go one step further and mark it. The more marks a post receives overall, the stronger the indication that it's valuable and can be trusted. A post's level of trust can also grow as members add marks over time.

What we’re doing to generate Trusted Content

The VMware community team is on a mission to inform and encourage our members to embrace and mirror the steps required to create trusted content in the VMTN community. This is a collaborative process that relies on mutual respect and healthy doses of give-and-take.

Trusted content starts with a correct answer being assigned by the Original Poster (OP). It’s one of the strongest and most trustworthy marks a post can receive, as members want to know if the information resolved the issue. Next, we ask community members to provide feedback by marking posts/post replies—e.g., Like, Helpful, I Have the Same Question.

Through repetition, the process of creating and/or identifying trusted content becomes the norm, rather than the exception. If all members embrace these best practice steps, the quality and effectiveness of community content will quickly increase. A great example is the Salesforce Trailblazer community, where its membership maintains a 98% or higher correct answer rate on community posts.

Marking Posts

Again, trusted content starts with members marking posts. And this means going beyond selecting a Correct Answer as most community platforms provide several ways to mark content to show how a member perceived the value of the post. To help VMTN members understand which mark to select, we offer the following guidelines:

Available marks and their definitions:

  • Correct Answer: selected by the original poster to indicate a post reply answered the question or resolved the issue.
  • I Have the Same Question: selected by members to indicate the information in a post is important and relevant to multiple members.
  • Like: selected by members to indicate they enjoyed the information in a post reply.
  • Helpful: selected by members to highlight that the information in a post reply helped explain or resolve their issue or question.

 

We ask all VMTN members to adopt and repeat these low effort behaviors that give back to the community, close the question and answer loop, and ultimately create trusted content. 

Thanks everyone and see you in the community.

The VMware Community Team


Continuing from last time, I'll introduce more tips for setting up a nested vSAN lab environment.

This time we cover the parts specific to nesting: the physical ESXi host and the ESXi VMs.

We will keep using PowerCLI as well.

 

For the story so far, see:

Nested vSAN Lab, Illustrated.

Tips for Building a Nested vSAN Lab, Part 1. (Physical ESXi Host to VCSA Deployment)

 

The ESXi VMs that become the nested hypervisors are created alongside the vCenter (VCSA) VM, as shown in the figure below, and are given settings specific to a nested environment.

First we create one ESXi VM to use as a template, then clone it into the three nested ESXi hosts.

[Figure: 1box-vsan-12.png]

2-1. Creating the port group for the ESXi VMs.

In this configuration, the "VM Network" port group created by default on ESXi is reserved for non-nested VMs.

So, on the default virtual switch "vSwitch0", we create a new port group for the nested environment, "Nested-Trunk-Network", to which the ESXi VMs will connect.

 

This port group is configured as follows:

  • Promiscuous mode allowed.
  • "Forged transmits" and "MAC address changes" set to "Accept", same as the default.
  • VLAN ID 4095.

* PowerCLI is still connected to vCenter from the previous post.

PowerCLI> $pg_name = "Nested-Trunk-Network"

PowerCLI> $pg = Get-VMHost | Get-VirtualSwitch -Name vSwitch0 | New-VirtualPortGroup -Name $pg_name -VLanId 4095

PowerCLI> $pg | Get-SecurityPolicy | Set-SecurityPolicy -AllowPromiscuous:$true -ForgedTransmits:$true -MacChanges:$true

 

2-2. Creating the ESXi VM.

Create the VM into which ESXi will be installed.

This VM will later serve as the template from which the multiple ESXi VMs are cloned.

 

This time we create the VM with the PowerCLI script below.

The last four lines enable NestedHVEnabled, which corresponds to "Expose hardware assisted virtualization to the guest OS".

Also, only a single vNIC, the bare minimum, is created.

 

create-esxi-vm.ps1

$vm_name = "vm-esxi-template-01"

$hv_name = "192.168.1.20"

 

$guest_id = "vmkernel65Guest"

$num_cpu = 2

$memory_gb = 6

$ds_name = "datastore1"

$vmdk_gb = 16

$pg_name = "Nested-Trunk-Network"

 

$vm = New-VM -Name $vm_name -VMHost $hv_name `

    -GuestId $guest_id `

    -NumCpu $num_cpu -CoresPerSocket $num_cpu `

    -MemoryGB $memory_gb `

    -DiskGB $vmdk_gb -Datastore $ds_name -StorageFormat Thin `

    -NetworkName $pg_name

 

$vm = Get-VM -Name $vm_name

$vm_config_spec = New-Object VMware.Vim.VirtualMachineConfigSpec

$vm_config_spec.NestedHVEnabled = $true

$vm.ExtensionData.ReconfigVM($vm_config_spec)

 

Run the script while connected to vCenter, as follows:

PowerCLI> .\create-esxi-vm.ps1

 

To make it easy to build a variety of vSAN configurations, the extra VMDKs are added and the ISO file is attached after cloning.

 

2-3. Installing ESXi in the ESXi VM and preparing for cloning.

First, attach the ESXi installer ISO image to the ESXi VM, boot it, and install ESXi as usual.

 

The ESXi installer ISO image is also attached to the VM with PowerCLI.

The ISO image file has been placed on the physical ESXi host's datastore in advance.

* If you install ESXi via Kickstart, attaching this ISO image is unnecessary.

PowerCLI> $iso_path = "[datastore1] iso/VMware-VMvisor-Installer-6.7.0.update01-10302608.x86_64.iso"

PowerCLI> Get-VM vm-esxi-?? | New-CDDrive -IsoPath $iso_path -StartConnected:$true

 

Enable the ESXi Shell and log in via the VM console in the vSphere Client.

No network settings are needed at this point.

Make the following two settings.

 

1) As a countermeasure against the MAC address change caused by cloning, enable /Net/FollowHardwareMac.

[root@localhost:~] esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1

 

2) So that the cloned ESXi VM regenerates its UUID on first boot, delete the "/system/uuid" line from the /etc/vmware/esx.conf file.

* You can also do this by editing the file with vi or similar.

[root@localhost:~] sed -i "/uuid/d" /etc/vmware/esx.conf

 

After these settings, shut down the ESXi VM.

If you boot the ESXi VM again before cloning, you will need to delete the /system/uuid line again.

Also, converting the VM to a VM template is optional; either way works.

 

2-4. Cloning the ESXi VM.

Clone three VMs from the "vm-esxi-template-01" VM created above.

  • With PowerCLI connected to vCenter, clone the VM on the physical ESXi host "192.168.1.20".
  • Name the virtual machines "vm-esxi-XX".

PowerCLI> 1..3 | % {New-VM -VMHost 192.168.1.20 -StorageFormat Thin -VM "vm-esxi-template-01" -Name ("vm-esxi-" + $_.toString("00"))}

 

Add the VMDKs that will be used as vSAN disks.

This time, each ESXi host gets 1 x 20 GB for cache and 2 x 50 GB for capacity.

PowerCLI> Get-VM vm-esxi-?? | New-HardDisk -SizeGB 20 -StorageFormat Thin

PowerCLI> Get-VM vm-esxi-?? | New-HardDisk -SizeGB 50 -StorageFormat Thin

PowerCLI> Get-VM vm-esxi-?? | New-HardDisk -SizeGB 50 -StorageFormat Thin

 

The VMDKs are created as follows:

PowerCLI> Get-VM vm-esxi-?? | Get-HardDisk | select CapacityGB,Parent,Name | Sort-Object Parent,Name

 

CapacityGB Parent     Name

---------- ------     ----

        16 vm-esxi-01 Hard disk 1

        20 vm-esxi-01 Hard disk 2

        50 vm-esxi-01 Hard disk 3

        50 vm-esxi-01 Hard disk 4

        16 vm-esxi-02 Hard disk 1

        20 vm-esxi-02 Hard disk 2

        50 vm-esxi-02 Hard disk 3

        50 vm-esxi-02 Hard disk 4

        16 vm-esxi-03 Hard disk 1

        20 vm-esxi-03 Hard disk 2

        50 vm-esxi-03 Hard disk 3

        50 vm-esxi-03 Hard disk 4

 

 

Power on all the ESXi VMs at once.

PowerCLI> Get-VM vm-esxi-?? | Start-VM

 

In preparation for the next steps, confirm that VMware Tools has started on the nested ESXi hosts.

PowerCLI> Get-VM vm-esxi-?? | select Name,PowerState,@{N="ToolsStatus";E={$_.Guest.ExtensionData.ToolsStatus}}

 

Name       PowerState ToolsStatus

----       ---------- -----------

vm-esxi-03  PoweredOn     toolsOk

vm-esxi-02  PoweredOn     toolsOk

vm-esxi-01  PoweredOn     toolsOk

 

 

2-5. Configuring the nested ESXi hosts.

Once the nested ESXi hosts have booted, configure their networking in preparation for registering them with vCenter, just as with ordinary ESXi hosts.

This can be done via the DCUI or esxcli through the VM console.

 

However, ESXi ships with VMware Tools pre-installed, so when ESXi runs inside a VM you can execute its commands (as guest OS commands) via vCenter.

And you can drive that from PowerCLI as well.

 

So, using the PowerCLI sample script from the post below, we apply the esxcli network settings to each of the three ESXi VMs.

Running esxcli on Nested ESXi from PowerCLI. (GuestProcessManager)

 

Host #1.

PowerCLI> ./invoke_nested-esxcli.ps1 -ESXiUser:root -ESXiPass:VMware1! -ESXiVM:vm-esxi-01 system hostname set --host esxi-01 --domain go-lab.jp

PowerCLI> ./invoke_nested-esxcli.ps1 -ESXiUser:root -ESXiPass:VMware1! -ESXiVM:vm-esxi-01 network ip interface ipv4 set --interface-name=vmk0 --type=static --ipv4=192.168.1.31 --netmask=255.255.255.0 --gateway=192.168.1.1

PowerCLI> ./invoke_nested-esxcli.ps1 -ESXiUser:root -ESXiPass:VMware1! -ESXiVM:vm-esxi-01 network ip dns server add --server=192.168.1.101

PowerCLI> ./invoke_nested-esxcli.ps1 -ESXiUser:root -ESXiPass:VMware1! -ESXiVM:vm-esxi-01 network ip dns server add --server=192.168.1.102

 

Host #2.

PowerCLI> ./invoke_nested-esxcli.ps1 -ESXiUser:root -ESXiPass:VMware1! -ESXiVM:vm-esxi-02 system hostname set --host esxi-02 --domain go-lab.jp

PowerCLI> ./invoke_nested-esxcli.ps1 -ESXiUser:root -ESXiPass:VMware1! -ESXiVM:vm-esxi-02 network ip interface ipv4 set --interface-name=vmk0 --type=static --ipv4=192.168.1.32 --netmask=255.255.255.0 --gateway=192.168.1.1

PowerCLI> ./invoke_nested-esxcli.ps1 -ESXiUser:root -ESXiPass:VMware1! -ESXiVM:vm-esxi-02 network ip dns server add --server=192.168.1.101

PowerCLI> ./invoke_nested-esxcli.ps1 -ESXiUser:root -ESXiPass:VMware1! -ESXiVM:vm-esxi-02 network ip dns server add --server=192.168.1.102

 

Host #3.

PowerCLI> ./invoke_nested-esxcli.ps1 -ESXiUser:root -ESXiPass:VMware1! -ESXiVM:vm-esxi-03 system hostname set --host esxi-03 --domain go-lab.jp

PowerCLI> ./invoke_nested-esxcli.ps1 -ESXiUser:root -ESXiPass:VMware1! -ESXiVM:vm-esxi-03 network ip interface ipv4 set --interface-name=vmk0 --type=static --ipv4=192.168.1.33 --netmask=255.255.255.0 --gateway=192.168.1.1

PowerCLI> ./invoke_nested-esxcli.ps1 -ESXiUser:root -ESXiPass:VMware1! -ESXiVM:vm-esxi-03 network ip dns server add --server=192.168.1.101

PowerCLI> ./invoke_nested-esxcli.ps1 -ESXiUser:root -ESXiPass:VMware1! -ESXiVM:vm-esxi-03 network ip dns server add --server=192.168.1.102

 

 

This leaves three nested ESXi hosts created and configured.

All that remains is to set up the vSAN cluster.

 

To be continued...

Tips for Building a Nested vSAN Lab, Part 3. (Building the vSAN Cluster)
