
Hi,

 

I'm preparing for the VCAP deployment exam and have put together a step-by-step guide using Microsoft OneNote. Some of the practicals are still pending, but about 60% is complete. It may help you with your own exam preparation.

 

VCAP-6-Deployment | VMware,Microsoft Knowledge Sharing Portal

Intel Xeon Scalable Processors SDDI and SDDC

server storage I/O data infrastructure trends

 

Recently Intel announced a new family of Xeon Scalable Processors (aka Purley) that for some workloads Intel claims to be on average 1.65x faster than their predecessors. Note that your real improvement will vary based on workload, configuration, benchmark testing, type of processor, memory, and many other server storage I/O performance considerations.

Intel Scalable Xeon Processors
Image via Intel.com

 

In general, the new Intel Xeon Scalable Processors enable legacy and software defined data infrastructures (SDDI), along with software defined data centers (SDDC), cloud and other environments to support expanding workloads more efficiently as well as effectively (e.g. boosting productivity).

 

Data Infrastructures and workloads

 

Some target application and environment workloads Intel is positioning these new processors for include, among others:

  • Machine Learning (ML), Artificial Intelligence (AI), advanced analytics, deep learning and big data
  • Networking including software defined network (SDN) and network function virtualization (NFV)
  • Cloud and virtualization including Azure Stack, Docker and Kubernetes containers, Hyper-V, KVM, OpenStack and VMware vSphere among others
  • High Performance Compute (HPC) and High Productivity Compute (e.g. the other HPC)
  • Storage including legacy and emerging software defined storage software deployed as appliances, systems or serverless deployment modes.

 

Features of the new Intel Xeon Scalable Processors include:

  • New core microarchitecture with interconnects and on-die memory controllers
  • Sockets (processors) scalable up to 28 cores
  • Improved networking performance using Quick Assist and the Data Plane Development Kit (DPDK)
  • Leverages Intel Quick Assist Technology for CPU offload of compute intensive functions including I/O networking, security, AI, ML, big data, analytics and storage functions. Functions that benefit from Quick Assist include cryptography, encryption, authentication, cipher operations, digital signatures, key exchange, lossless data compression and data footprint reduction along with data at rest encryption (DARE).
  • Optane Non-Volatile Dual Inline Memory Module (NVDIMM) for storage class memory (SCM), also referred to by some as Persistent Memory (PM), not to be confused with Physical Machine (PM)
  • Supports Advanced Vector Extensions 512 (AVX-512) for HPC and other workloads
  • Optional Omni-Path fabrics in addition to 1/10Gb Ethernet among other I/O options
  • Six memory channels supporting up to 6TB of RDIMM with multi-socket systems
  • From two to eight sockets per node (system)
  • Systems support PCIe 3.x (some supporting x4 based M.2 interconnects)
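As a back-of-the-envelope check on the memory figure above, here is a minimal sketch; the four-socket count, two DIMMs per channel and 128GB RDIMM size are illustrative assumptions, not published Intel specifications:

```python
# Rough multi-socket memory capacity estimate (illustrative assumptions).
sockets = 4                  # assumed multi-socket system
channels_per_socket = 6      # six memory channels per processor (from the list above)
dimms_per_channel = 2        # assumption
dimm_capacity_gb = 128       # assumed RDIMM size

total_gb = sockets * channels_per_socket * dimms_per_channel * dimm_capacity_gb
print(total_gb)  # 6144 GB, i.e. the ~6TB figure cited above
```

Different socket counts, DIMM sizes and populated-slot counts will of course move that total up or down.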

 

Note that exact speeds, feeds, slots and watts will vary by specific server model and vendor options. Also note that some server system solutions have two or more nodes (e.g. two or more real servers) in a single package, not to be confused with two or more sockets per node (system or motherboard). Refer to the where to learn more section below for links to Intel benchmarks and other resources.
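For a feel of what the lossless compression functions Quick Assist can offload look like in plain software, here is a minimal sketch using Python's zlib; the sample data and compression level are arbitrary examples:

```python
import zlib

# Lossless data compression of the sort Quick Assist can offload from the CPU.
data = b"server storage I/O data infrastructure " * 100
compressed = zlib.compress(data, 6)

# Lossless means the round trip is exact.
assert zlib.decompress(compressed) == data
print(len(data), "->", len(compressed), "bytes")
```

The point of hardware offload is that work like this (and crypto, hashing, etc.) stops consuming general-purpose CPU cycles.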

 

Software Defined Data Infrastructures, SDDC, SDX and SDDI

What About Speeds and Feeds

Watch for and check out the various Intel partners who have or will be announcing their new server compute platforms based on Intel Xeon Scalable Processors. Each of the different vendors will have various speeds and feeds options that build on the fundamental Intel Xeon Scalable Processor capabilities.

 

For example, Dell EMC announced their 14G server platforms at the May 2017 Dell EMC World event, with details to follow (e.g. after the Intel announcements).

 

Some things to keep in mind: the amount of DDR4 DRAM (or Optane NVDIMM) will vary by vendor server platform configuration, motherboard, number of sockets and DIMM slots. Also keep in mind the differences between registered DIMMs (e.g. buffered RDIMM) that give good capacity and great performance, and load reduced DIMMs (LRDIMM) that have great capacity and ok performance.

 

Various NVMe options

What  about NVMe

It's there, as these systems like previous Intel models support NVMe devices via PCIe 3.x slots, with some vendor solutions also supporting M.2 x4 physical interconnects.
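For context, the usable bandwidth of a PCIe 3.x x4 connection (the common NVMe attach) can be sketched from the standard PCIe 3.0 figures of 8 GT/s per lane with 128b/130b encoding:

```python
# Approximate usable bandwidth of a PCIe 3.0 x4 link, before protocol overhead.
gt_per_sec_per_lane = 8.0        # PCIe 3.0 raw signaling rate per lane
encoding_efficiency = 128 / 130  # 128b/130b line encoding
lanes = 4

gb_per_sec = gt_per_sec_per_lane * encoding_efficiency * lanes / 8  # bits -> bytes
print(round(gb_per_sec, 2))  # roughly 3.94 GB/s
```

Actual throughput will be lower once NVMe protocol and device overheads are factored in.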

 

server storageIO flash and SSD
Image via Software Defined Data Infrastructure Essentials (CRC)

 

Note that Broadcom (formerly known as Avago, which had acquired LSI) recently announced PCIe based RAID and adapter cards that support NVMe attached devices in addition to SAS and SATA.

 

server storage data infrastructure sddi

What  About Intel and Storage

In case you have not connected the dots yet, the Intel Xeon Scalable Processor based server (aka compute) systems are also a fundamental platform for storage systems, services, solutions and appliances, along with tin-wrapped software.

 

What this means is that Intel Xeon Scalable Processor based systems can be used for deploying legacy as well as new and emerging software-defined storage software solutions. This also means that the Intel platforms can be used to support SDDC, SDDI, SDX and SDI as well as other forms of legacy and software-defined data infrastructures, along with cloud, virtual, container and serverless among other modes of deployment.

Intel SSD
Image Via Intel.com

 

Moving beyond server and compute platforms, there is another tie to storage as part of this recent as well as other Intel announcements. Just a few weeks ago Intel announced 64 layer triple level cell (TLC) 3D NAND solutions positioned for the client market (laptops, workstations, tablets, thin clients). With that announcement Intel increased the traditional areal density (e.g. bits per square inch or cm) as well as boosting the number of layers (stacking more bits as well).

 

The net result is not only more bits per square inch, but also more per cubic inch or cm. This is all part of a continued evolution of NAND flash, including from 2D to 3D, MLC to TLC, and 32 to 64 layers. In other words, NAND flash-based Solid State Devices (SSDs) are very much still a relevant and continually enhanced technology, even with the emerging 3D XPoint and Optane (also available via Amazon in M.2) in the wings.
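The combined effect of the transitions just listed can be sketched with simple relative arithmetic (per-die specifics vary by product; this only shows the multipliers):

```python
# Relative per-die capacity gain from the NAND transitions described above.
layer_gain = 64 / 32   # 32-layer -> 64-layer 3D stacking: 2x cells per footprint
cell_gain = 3 / 2      # MLC (2 bits/cell) -> TLC (3 bits/cell): 1.5x bits per cell

total_gain = layer_gain * cell_gain
print(total_gain)  # 3.0x bits in roughly the same die footprint
```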

 

server memory evolution
  Via Intel and Micron (3D XPoint launch)

 

Keep in mind that NAND flash-based technologies were announced almost 20 years ago (1999), and are still evolving. 3D XPoint, announced two years ago, along with other emerging storage class memory (SCM), non-volatile memory (NVM) and persistent memory (PM) devices, is part of the future, as is 3D NAND (among others). Speaking of 3D XPoint and Optane, Intel has had announcements about those in the past as well.

 

Where To Learn More

Learn more about Intel Xeon Scalable Processors along with related technology, trends, tools, techniques and tips with the following links.

What This All Means

Some say the PC is dead, and IMHO that depends on what you mean or how you define a PC. For example, if you use PC generically to also include servers besides workstations or other devices, then they are alive. If however your view is that PCs are only workstations and client devices, then they are on the decline.

 

However, if your view is that a PC is defined by the underlying processor, such as an Intel general purpose 64 bit x86 derivative (or descendant), then they are very much alive. Just as older generations of PCs leveraging general purpose Intel x86 (and predecessor) processors were deployed for many uses, so too are today's line of Xeon (among other) processors.

 

Even with the increase of ARM, GPU and other specialized processors, as well as ASICs and FPGAs for offloads, the role of general purpose processors continues to increase, as does the technology evolution around them. Even so-called serverless architectures still need underlying compute server platforms for running software, which also includes software defined storage, software defined networks, SDDC, SDDI, SDX and IoT among others.

 

Overall this is a good set of announcements by Intel, and we can also expect a flood of enhancements from their partners who will use the new family of Intel Xeon Scalable Processors in their products to enable software defined data infrastructures (SDDI) and SDDC.

 

Ok, nuff said (for now...).

Cheers
Gs

server storage I/O data infrastructure trends
Updated 6/29/17

 

The European General Data Protection Regulation (GDPR) goes into effect in a year, on May 25, 2018. Are you ready?

 

What Is GDPR

If your initial response is that you are not in Europe and do not need to be concerned about GDPR, you might want to step back and review that thought. While it is possible that some organizations may not be affected by GDPR in Europe directly, there might be indirect considerations. For example, GDPR, while focused on Europe, has ties to other initiatives in place or being planned elsewhere in the world. Likewise, unlike earlier regulatory compliance that tended to focus on specific industries such as healthcare (HIPAA and HITECH) or financial (SARBOX and Dodd/Frank among others), these new regulations can be more far-reaching.

 

Where To Learn More

Acronis GDPR Resources

Quest GDPR Resources

Microsoft and Azure Cloud GDPR Resources

 

Do you have or know of relevant GDPR information and resources? Feel free to add them via comments or send us an email; however, please watch the spam and sales pitches as they will be moderated.

 

What This All Means

Now is the time to start planning and preparing for GDPR if you have not done so and need to, as well as becoming more generally aware of it and other initiatives. One of the key takeaways is that while the word compliance is involved, there is much more to GDPR than just compliance as we have seen in the past. With GDPR and other initiatives, data protection becomes the focus, including privacy, protect, preserve, secure and serve, as well as manage, insight, awareness and associated reporting.

 

Ok, nuff said (for now...).

 

Cheers
Gs

Who Will Be At Top Of Storage World Next Decade?

server storage I/O data infrastructure trends

 

Data storage, regardless of whether hardware, legacy, new, emerging, cloud service or various software defined storage (SDS) approaches, is a fundamental resource component of data infrastructures, along with compute server, I/O networking as well as management tools, techniques, processes and procedures.

 

fundamental Data Infrastructure resource components
Fundamental Data Infrastructure resources

 

Data infrastructures include legacy along with software defined data infrastructures (SDDI) and software defined data centers (SDDC), cloud and other environments to support expanding workloads more efficiently as well as effectively (e.g. boosting productivity).

 

Data Infrastructures and workloads
Data Infrastructure and other IT Layers (stacks and altitude levels)

 

Various data infrastructure resource components spanning server, storage, I/O networks and tools, along with hardware, software and services, get defined as well as composed into solutions or services, which may in turn be further aggregated into more extensive higher altitude offerings (e.g. further up the stack).

IT and Data Infrastructure Stack Layers
Various IT and Data Infrastructure Stack Layers (Altitude Levels)

 

Focus on Data Storage Present and Future Predictions

Drew Robb (@Robbdrew) has a good piece over at Enterprise Storage Forum looking at the past, present and future of who will rule the data storage world that includes several perspective and prediction comments from myself as well as others. Some of the perspectives and predictions by others are more generic, technology trend and buzzword bingo focused, which should not be a surprise, for example including the usual performance, Cloud and Object Storage, DPDK, RDMA/RoCE, Software-Defined, NVM/Flash/SSD, CI/HCI and NVMe among others.

 

Here are some excerpts from Drew's piece along with my perspective and prediction comments on who may rule the data storage roost in a decade:

Amazon Web Services (AWS) – AWS includes cloud and object storage in the form of S3. However, there is more to storage than object and S3, with AWS also having Elastic File System (EFS), Elastic Block Storage (EBS), database, message queue and on-instance storage, among others, for traditional, emerging and Internet of Things (IoT) storage needs.

 

It is difficult to think of AWS not being a major player in a decade unless they totally screw up their execution in the future. Granted, some of their competitors might be working overtime putting pins and needles into Voodoo Dolls (perhaps bought via Amazon.com) while wishing for the demise of Amazon Web Services, just saying.

 

Voodoo Dolls via Amazon.com
  Voodoo Dolls and image via Amazon.com

 

Of course, Amazon and AWS could follow the likes of Sears (e.g. some may remember their catalog) and ignore the future, ending up on the where are they now list. While talking about Amazon and AWS, one has to wonder where Walmart will end up in a decade, with or without a cloud of their own?

 

Microsoft – With Windows, Hyper-V and Azure (including Azure Stack), if there is any company in the industry outside of AWS or VMware that has quietly expanded its reach and positioning into storage, it is Microsoft, said Schulz.

 

Microsoft IMHO has many offerings and capabilities across different dimensions as well as playing fields. There is the installed base of Windows Servers (and desktops) that have the ability to leverage Software Defined Storage including Storage Spaces Direct (S2D), ReFS, cache and tiering among other features. In some ways I'm surprised by the number of people in the industry who are not aware of Microsoft's capabilities, from S2D and the ability to configure CI as well as HCI (Hyper Converged Infrastructure) deployments, to Hyper-V abilities, to Azure Stack and Azure among others. On the other hand, I run into Microsoft people who are not aware of the full portfolio offerings or are just focused on Azure. Needless to say, there is a lot in the Microsoft storage related portfolio as well as bigger, broader data infrastructure offerings.

NetApp – Schulz thinks NetApp has the staying power to stay among the leading lights of data storage. Assuming it remains as a freestanding company and does not get acquired, he said, NetApp has the potential of expanding its portfolio with some new acquisitions. “NetApp can continue their transformation from a company with a strong focus on selling one or two products to learning how to sell the complete portfolio with diversity,” said Schulz.

 

NetApp has been around and survived up to now, including via various acquisitions, some of which have had mixed results vs. others. However, assuming NetApp can continue to reinvent themselves, focusing on selling the entire solution portfolio vs. focusing on specific products, along with good execution and some more acquisitions, they have the potential to be a top player through the next decade.

 

Dell EMC – Dell EMC is another stalwart Schulz thinks will manage to stay on top. “Given their size and focus, Dell EMC should continue to grow, assuming execution goes well,” he said.

There are some who I hear have predicted the demise of Dell EMC; granted, some of those predicted the demise of Dell and/or EMC years ago as well. Top companies can and have faded away over time, and while it is possible Dell EMC could be added to the where are they now list in the future, my bet is that at least while Michael Dell is still involved, they will be a top player through the next decade, unless they mess up on execution.

 

Cloud and software defined storage data infrastructure
Various Data Infrastructures and Resources involving Data Storage

 

Huawei – Huawei is one of the emerging giants from China that are steadily gobbling up market share. It is now a top provider in many categories of storage, and its rapid ascendancy is unlikely to stop anytime soon. “Keep an eye on Huawei, particularly outside of the U.S. where they are starting to hit their stride,” said Schulz.

In the US, you have to look or pay attention to see or hear what Huawei is doing involving data storage, however that is different in other parts of the world. For example, I see and hear more about them in Europe than in the US. Will Huawei do more in the US in the future? Good question, keep an eye on them.

 

VMware – A decade ago, Storage Networking World (SNW) was by far the biggest event in data storage. Everyone who was anyone attended this twice yearly event. And then suddenly, it lost its luster. A new forum known as VMworld had emerged and took precedence. That was just one of the indicators of the disruption caused by VMware. And Schulz expects the company to continue to be a major force in storage. “VMware will remain a dominant player, expanding its role with software-defined storage,” said Schulz.

VMware has a dominant role in data storage not just because of the relationship with Dell EMC, or because of VSAN which continues to gain in popularity, or the soon to be released VMware on AWS solution options among others. Sure, all of those matter; however, keep in mind that VMware solutions also tie into and work with other legacy as well as software-defined storage solutions, services and tools spanning block, file and object for virtual machines as well as containers.

 

"Someday soon, people are going to wake up like they did with VMware and AWS," said Schulz. "That’s when they will be asking 'When did Microsoft get into storage like this in such a big way.'"

 

What the above means is that some environments may not be paying attention to what AWS, Microsoft and VMware among others are doing, perhaps discounting them as the old or existing while focusing on the new, the emerging, or whatever is trendy in the news this week. On the other hand, some environments may see the solution offerings from those mentioned as not relevant to their specific needs, or not capable of scaling to their requirements.

 

Keep in mind that it was not that long ago, just a few years, that VMware entered the market with what by today's standards (e.g. VSAN and others) was a relatively small virtual storage appliance offering, and many people discounted or ignored VMware as a practical storage solution provider. Things and technology change, and there are different needs and solution requirements for various environments. While a solution may not be applicable today, give it some time and keep an eye on it, to avoid being surprised and asking how and when a particular vendor got into storage in such a big way.

 

Is Future Data Storage World All Cloud?

Perhaps someday everything involving data storage will be in or part of the cloud.

 

Does this mean everything is going to the cloud, at least in the next ten years? IMHO the simple answer is no; even though I see more workloads, applications, and data residing in the cloud, there will also be an increase in hybrid deployments.

 

Note that those hybrids will span local and on-premise (or on-site if you prefer), as well as across different clouds or service providers. Granted some environments are or will become all in on clouds, while others are or will become a hybrid or some variation. Also, when it comes to clouds, do not be scared; be prepared. Also keep an eye on what is going on with containers, orchestration and management among other related areas involving persistent storage; a good example is Dell EMCcode RexRay among others.

Server Storage I/O resources
Various data storage focus areas along with data infrastructures.

 

What About Other Vendors, Solutions or Services?

In addition to those mentioned above, there are plenty of other existing, new and emerging vendors, solutions, and services to keep an eye on, look into, test and conduct a proof of concept (PoC) trial as part of being an informed data infrastructure and data storage shopper (or seller).

 

Keep in mind component suppliers, some of whom, like Cisco, also provide turnkey solutions that are part of other vendors' offerings (e.g. Dell EMC VxBlock, NetApp FlexPod among others); Broadcom (which includes Avago/LSI and Brocade Fibre Channel among others); Intel (servers, I/O adapters, memory and SSDs); Mellanox, Micron, Samsung, Seagate and many others.

Others include E8, Excelero and Elastifile (software defined storage), Enmotus (micro-tiering, read the Server StorageIOlab report here), Everspin (persistent and storage class memories including NVDIMM), Hedvig (software defined storage), NooBaa, Nutanix, Pivot3, Rozo (software defined storage) and WekaIO (scale out elastic software defined storage, read the Server StorageIO report here).

 

Some other software defined management tools, services, solutions and components I'm keeping an eye on, exploring, or digging deeper into (or plan to) include Blue Medora, Datadog, Dell EMCcode and RexRay docker container storage volume management, Google, HPE, IBM Bluemix Cloud (aka IBM Softlayer), Kubernetes, Mangstor, OpenStack, Oracle, Retrospect, Rubrik, Quest, Starwind, Solarwinds, Storpool, Turbonomic and Virtuozzo (software defined storage) among many others.

 

What about those not mentioned? Good question; some of those I have mentioned in earlier Server StorageIO Update newsletters, and many others are mentioned in my new book "Software Defined Data Infrastructure Essentials" (CRC Press). Then there are those that, once I hear something interesting from them on a regular basis, will get more frequent mentions as well. Of course, there is also a list to be done someday that is basically where are they now, e.g. those that have disappeared or never lived up to their full hype and marketing (or technology) promises; let's leave that for another day.

 

Where To Learn More

Learn more about related technology, trends, tools, techniques, and tips with the following links.

Data Infrastructures and workloads
Data Infrastructures Resources (Servers, Storage, I/O Networks) enabling various services

 

What This All Means

It is safe to say that each new year will bring new trends, techniques, technologies, tools, features, functionality as well as solutions involving data storage and data infrastructures. This means a usually safe bet is to say that the current year is the most exciting and has more new things than the past when it comes to data infrastructures along with resources such as data storage. Keep in mind that there are many aspects to data infrastructures as well as storage, all of which are evolving. Who will be at the top of the storage world next decade? What say you?

 

Ok, nuff said (for now...).

 

Cheers

Gs

Configure Cisco Meraki Federation Connection

 

Log into your VMware Identity Manager admin interface and navigate to:

Catalog => Settings  => SAML Metadata  => Identity Provider (IdP) metadata

 

Use this information to configure VMware Identity Manager as the IdP in Meraki as shown below.

 

Screen Shot 2017-07-18 at 10.14.29 AM_cen.jpg

 

 

Configure VMware Identity Manager IDP Federation Connection

 

In VMware Identity Manager administrative console, navigate to:

Catalog  => Application Catalog  => Add Application  => Create a new one

 

In “Add Application” wizard, configure as shown below.

Screen Shot 2017-07-18 at 12.53.22 PM.png

Screen Shot 2017-07-18 at 12.53.48 PM.png

In this post I'd like to show how I automated (or rather, streamlined) the configuration of an NSX for vSphere network environment using PowerNSX / PowerCLI scripts.

 

This time, I will create a tenant using the scripts built in the previous post.

Automating tenant creation with PowerNSX: Part 4

 

In the environment below, I will create and delete NSX logical switches and VMs in a tenant-like fashion.

powernsx-auto-5.png

 

The script files from the previous post are already in place.

PowerNSX> ls | select Name

 

Name

----

config

add_tenant_nw.ps1

add_tenant_vm.ps1

delete_tenant.ps1

get_tenant_summary.ps1

 

 

PowerNSX> ls .\config | select Name

 

Name

----

nw_tenant-04.ps1

nw_tenant-05.ps1

vm_vm41.ps1

vm_vm42.ps1

vm_vm51.ps1

 

 

Connect to vCenter and NSX Manager. Specifying the vCenter also connects to NSX Manager automatically.

PowerNSX> Connect-NsxServer -vCenterServer <vCenter address>

Creating and deleting the first tenant.

Now, let's create the tenant network.

PowerNSX> .\add_tenant_nw.ps1 .\config\nw_tenant-04.ps1

Logical Switch: ls-tenant-04 => objectId: virtualwire-235

DLR Interface: if-tenant-04 => index: 13

SNAT Source Address: 10.1.40.0/24 => ruleId 196624

DFW Section: dfw-section-tenant-04 => id 1024

DFW Rule: allow jBox to tenant-ls SSH => id 1129

DFW Rule: allow Any to tenant-ls HTTP => id 1130

VM Folder: tenant-04 => id group-v459

 

Next, let's create a VM.

PowerNSX> .\add_tenant_vm.ps1 .\config\nw_tenant-04.ps1 .\config\vm_vm41.ps1

Create Guest OS Customization Spec: osspec-vm41

Edit Guest OS Customization Spec:

New VM: vm41 => id vm-460

Delete Guest OS Customization Spec: osspec-vm41

Connect vNIC: vm41/Network adapter 1 to ls-tenant-04

Start VM: vm41

 

Now let's create a second VM.

PowerNSX> .\add_tenant_vm.ps1 .\config\nw_tenant-04.ps1 .\config\vm_vm42.ps1

Create Guest OS Customization Spec: osspec-vm42

Edit Guest OS Customization Spec:

New VM: vm42 => id vm-461

Delete Guest OS Customization Spec: osspec-vm42

Connect vNIC: vm42/Network adapter 1 to ls-tenant-04

Start VM: vm42

 

Let's retrieve information about the tenant we created. The logical switch, SNAT rule and firewall rules were created, and the VMs were created as well.

PowerNSX> .\get_tenant_summary.ps1 .\config\nw_tenant-04.ps1

############################################################

Tenant: tenant-04

Time: 2017/07/13 0:06:22

 

============================================================

Tenant Network

 

name           : ls-tenant-04

objectId       : virtualwire-235

VDSwitch       : vds02

dvPortgroup    : vxw-dvs-36-virtualwire-235-sid-10005-ls-tenant-04

DlrIfIPAddress : 10.1.40.1/24

 

============================================================

ESG SNAT rule info

 

translatedAddress : 192.168.1.144

originalAddress   : 10.1.40.0/24

 

============================================================

DFW section dfw-section-tenant-04 rule info

 

id   name                        Src           Dst          Service action appliedTo    logged

--   ----                        ---           ---          ------- ------ ---------    ------

1130 allow Any to tenant-ls HTTP Any           ls-tenant-04 HTTP    allow  ls-tenant-04 false

1129 allow jBox to tenant-ls SSH 192.168.1.223 ls-tenant-04 SSH     allow  ls-tenant-04 false

 

============================================================

VM / guest network info

 

VM   HostName   State IPAddress      Gateway   GuestFullName

--   --------   ----- ---------      -------   -------------

vm41 vm41     Running 10.1.40.101/24 10.1.40.1 Other 3.x or later Linux (64-bit)

vm42 vm42     Running 10.1.40.102/24 10.1.40.1 Other 3.x or later Linux (64-bit)

 

PowerNSX>

 

Now let's delete the tenant.

PowerNSX> .\delete_tenant.ps1 .\config\nw_tenant-04.ps1

Remove VM

Remove DFW Rule

Remove SNAT Rule

Remove NsxLogical Switch

Remove VM Folder

 

The tenant's network, VMs and so on are gone.

PowerNSX> .\get_tenant_summary.ps1 .\config\nw_tenant-04.ps1

############################################################

Tenant: tenant-04

Time: 2017/07/13 0:08:36

 

 

 

============================================================

Tenant Network

 

 

 

============================================================

ESG SNAT rule info

 

 

 

============================================================

DFW section rule info

 

 

 

============================================================

VM / guest network info

 

 

 

PowerNSX>

 

Creating the second tenant.

Let's add a new tenant, tenant-05.

Create the config files. The network configuration is as follows; the differences from the first tenant are the tenant-specific values (shown in red in the original post).

 

File name: nw_tenant-05.ps1

# Tenant-specific variables

$tenant_name = "tenant-05"

$gw_addr = "10.1.50.1"

$nw_addr = "10.1.50.0"

$nw_msak_length = 24

 

# Common variables

$tz_name = "tz01"

$dlr_id = "edge-5"

$esg_id = "edge-1"

$esg_ext_addr = "192.168.1.144"

$jbox_ip = "192.168.1.223"

 

$dlr_if_name = "if-" + $tenant_name

$dfw_section_name = "dfw-section-" + $tenant_name

$ls_name = "ls-" + $tenant_name

 

The VM configuration is as follows.

 

File name: vm_vm51.ps1

# VM-specific variables

$vm_name = "vm51"

$ip_addr = "10.1.50.101"

$vnic_name = "Network adapter 1"

$template_name = "photon-1.0-rev2"

 

# Tenant-wide common variables

$nw_msak = "255.255.255.0"

$tenant_dns = "10.1.1.1"

$domain_name = "go-lab.jp"

$cluster_name = "nsx-cluster-01"

$datastore_name = "ds_nfs_lab02"

 

Now, let's create the second tenant, starting with the tenant network.

PowerNSX> .\add_tenant_nw.ps1 .\config\nw_tenant-05.ps1

Logical Switch: ls-tenant-05 => objectId: virtualwire-236

DLR Interface: if-tenant-05 => index: 13

SNAT Source Address: 10.1.50.0/24 => ruleId 196625

DFW Section: dfw-section-tenant-05 => id 1025

DFW Rule: allow jBox to tenant-ls SSH => id 1131

DFW Rule: allow Any to tenant-ls HTTP => id 1132

VM Folder: tenant-05 => id group-v463

 

Let's create a VM.

PowerNSX> .\add_tenant_vm.ps1 .\config\nw_tenant-05.ps1 .\config\vm_vm51.ps1

Create Guest OS Customization Spec:

Edit Guest OS Customization Spec:

New VM: vm51 => id vm-464

Delete Guest OS Customization Spec: osspec-vm51

Connect vNIC: vm51/Network adapter 1 to ls-tenant-05

Start VM: vm51

 

The second tenant's network and one VM have been created.

PowerNSX> .\get_tenant_summary.ps1 .\config\nw_tenant-05.ps1

############################################################

Tenant: tenant-05

Time: 2017/07/13 0:20:38

 

============================================================

Tenant Network

 

name           : ls-tenant-05

objectId       : virtualwire-236

VDSwitch       : vds02

dvPortgroup    : vxw-dvs-36-virtualwire-236-sid-10005-ls-tenant-05

DlrIfIPAddress : 10.1.50.1/24

 

============================================================

ESG SNAT rule info

 

translatedAddress : 192.168.1.144

originalAddress   : 10.1.50.0/24

 

============================================================

DFW section dfw-section-tenant-05 rule info

 

id   name                        Src           Dst          Service action appliedTo    logged

--   ----                        ---           ---          ------- ------ ---------    ------

1132 allow Any to tenant-ls HTTP Any           ls-tenant-05 HTTP    allow  ls-tenant-05 false

1131 allow jBox to tenant-ls SSH 192.168.1.223 ls-tenant-05 SSH     allow  ls-tenant-05 false

 

============================================================

VM / guest network info

 

VM   HostName   State IPAddress      Gateway   GuestFullName

--   --------   ----- ---------      -------   -------------

vm51 vm51     Running 10.1.50.101/24 10.1.50.1 Other 3.x or later Linux (64-bit)

 

PowerNSX>

 

NSX has many features, and which ones you use will vary by environment, but the operations and checks you would otherwise do from the vSphere Web Client Network and Security screens are also possible with PowerNSX. Used this way, PowerNSX can streamline network configuration work.

 

This series may continue further.

Here I will turn the PowerNSX tenant creation and deletion covered so far into simple scripts to streamline the work.

 

The series so far:

Automating tenant creation with PowerNSX: Part 1

Automating tenant creation with PowerNSX: Part 2

Automating tenant creation with PowerNSX: Part 3

 

Based on what was done in those posts, let's turn it into scripts. To avoid overcomplicating things, this environment is deliberately kept simple. For example:

  • Each tenant has only one logical switch (VXLAN).
  • Firewall rules specify only one source / destination and one service.
  • Each VM has only one vNIC.

In a real environment you would need object validation, error handling and so on, but here they are omitted as much as possible.

 

About the configuration of the environment to be created.

The configuration values are split into files separate from the scripts: one file for the tenant network settings and another for the settings of the VMs within the tenant. For the NSX Edges (Edge Services Gateway and DLR Control VM), the object IDs are looked up in advance and hard-coded.

 

The configuration is as follows.

 

Tenant network configuration file. The string tenant-04 is used as the tenant name.

 

File name: nw_tenant-04.ps1

# Tenant-specific variables

$tenant_name = "tenant-04"

$gw_addr = "10.1.40.1"

$nw_addr = "10.1.40.0"

$nw_msak_length = 24

 

# Common variables

$tz_name = "tz01"

$dlr_id = "edge-5"

$esg_id = "edge-1"

$esg_ext_addr = "192.168.1.144"

$jbox_ip = "192.168.1.223"

 

$dlr_if_name = "if-" + $tenant_name

$dfw_section_name = "dfw-section-" + $tenant_name

$ls_name = "ls-" + $tenant_name

 

Configuration file for the first VM (vm41).

File name: vm_vm41.ps1

# VM-specific variables

$vm_name = "vm41"

$ip_addr = "10.1.40.101"

$vnic_name = "Network adapter 1"

$template_name = "photon-1.0-rev2"

 

# Variables common within the tenant

$nw_msak = "255.255.255.0"

$tenant_dns = "10.1.1.1"

$domain_name = "go-lab.jp"

$cluster_name = "nsx-cluster-01"

$datastore_name = "ds_nfs_lab02"

 

Configuration file for the second VM (vm42).

The only differences from the first VM are the VM name and the IP address.

File name: vm_vm42.ps1

# VM-specific variables

$vm_name = "vm42"

$ip_addr = "10.1.40.102"

$vnic_name = "Network adapter 1"

$template_name = "photon-1.0-rev2"

 

# Variables common within the tenant

$nw_msak = "255.255.255.0"

$tenant_dns = "10.1.1.1"

$domain_name = "go-lab.jp"

$cluster_name = "nsx-cluster-01"

$datastore_name = "ds_nfs_lab02"

 

Example scripts for creating a tenant and adding VMs.

As in Automating tenant addition with PowerNSX. Part.1,

we create the network environment and the VMs, and connect the VMs to the network (logical switch).

 

Contents of the tenant creation script.

The first argument specifies the tenant network configuration file.

 

File name: add_tenant_nw.ps1

$tenant_nw_config =  $args[0]

. $tenant_nw_config

 

# Create the tenant logical switch

$tz = Get-NsxTransportZone -name $tz_name

$ls = New-NsxLogicalSwitch -TransportZone $tz -Name $ls_name

$ls | % {"Logical Switch: " + $_.name + " => objectId: " + $_.objectId}

 

# Connect to the DLR

$dlr = Get-NsxLogicalRouter -objectId $dlr_id

$dlr_if = $dlr | New-NsxLogicalRouterInterface `

    -Type internal -PrimaryAddress $gw_addr -SubnetPrefixLength $nw_msak_length `

    -ConnectedTo $ls -Name $dlr_if_name

$dlr_if | select -ExpandProperty interface |

    % {"DLR Interface: " + $_.name + " => index: " + $_.index}

 

# Add the SNAT rule

$nat_original_addr = $nw_addr + "/" + $nw_msak_length

$esg = Get-NsxEdge -objectId $esg_id

$snat_rule = $esg | Get-NsxEdgeNat | New-NsxEdgeNatRule `

    -Vnic 0 -action snat `

    -OriginalAddress $nat_original_addr -TranslatedAddress $esg_ext_addr

$snat_rule | % {"SNAT Source Address: " + $_.originalAddress + " => ruleId " + $_.ruleId}

 

# Add DFW rules

$dfw_section = New-NsxFirewallSection -Name $dfw_section_name

$dfw_section | % {"DFW Section: " + $_.name + " => id " + $_.id}

 

$dfw_rule_name = "allow jBox to tenant-ls SSH"

$dfw_section = Get-NsxFirewallSection -objectId $dfw_section.id

$svc = Get-NsxService -Name SSH | where {$_.isUniversal -eq $false}

$dfw_rule = $dfw_section | New-NsxFirewallRule -Name $dfw_rule_name `

    -Action allow -Source $jbox_ip -Destination $ls -Service $svc -AppliedTo $ls

$dfw_rule | % {"DFW Rule: " + $_.name + " => id " + $_.id}

 

$dfw_rule_name = "allow Any to tenant-ls HTTP"

$dfw_section = Get-NsxFirewallSection -objectId $dfw_section.id

$svc = Get-NsxService -Name HTTP | where {$_.isUniversal -eq $false}

$dfw_rule = $dfw_section | New-NsxFirewallRule -Name $dfw_rule_name `

    -Action allow -Destination $ls -Service $svc -AppliedTo $ls

$dfw_rule | % {"DFW Rule: " + $_.name + " => id " + $_.id}

 

$vm_folder = Get-Folder -Type VM vm | New-Folder -Name $tenant_name

$vm_folder | % {"VM Folder: " + $_.Name + " => id " + $_.ExtensionData.MoRef.Value}

 

The script is a little long, but running it produces output like the following.

get_tenant_summary.png

 

Contents of the VM addition script.

The first argument specifies the tenant network configuration file,

and the second argument specifies the VM configuration file.

 

File name: add_tenant_vm.ps1

$tenant_nw_config =  $args[0]

. $tenant_nw_config

$tenant_vm_config = $args[1]

. $tenant_vm_config

 

$ls = Get-NsxTransportZone -name $tz_name | Get-NsxLogicalSwitch -Name $ls_name

 

$os_spec_name = "osspec-" + $vm_name

"Create Guest OS Customization Spec: " + $os_spec_name

$spec = New-OSCustomizationSpec -Name $os_spec_name `

    -OSType Linux -DnsServer $tenant_dns -Domain $domain_name

 

"Edit Guest OS Customization Spec: " + $spec.Name

$spec | Get-OSCustomizationNicMapping |

    Set-OSCustomizationNicMapping -IpMode UseStaticIP `

    -IpAddress $ip_addr -SubnetMask $nw_msak -DefaultGateway $gw_addr | Out-Null

 

$vm = Get-Template -Name $template_name |

    New-VM -Name $vm_name -Location (Get-Folder -Type VM $tenant_name) `

    -ResourcePool $cluster_name -Datastore $datastore_name -OSCustomizationSpec $spec

$vm | % {"New VM: " + $_.Name + " => id " + $_.ExtensionData.MoRef.Value}

 

"Delete Guest OS Customization Spec: " + $spec.Name

$spec | Remove-OSCustomizationSpec -Confirm:$false

 

"Connect vNIC: " + ($vm.Name + "/" + $vnic_name + " to " + $ls.Name)

$vm | Get-NetworkAdapter -Name $vnic_name | Connect-NsxLogicalSwitch $ls

$vm | Start-VM | % {"Start VM: " + $_.Name}

 

Example tenant information retrieval script.

It retrieves the kind of information checked in Automating tenant addition with PowerNSX. Part.2,

though rather than going into full detail, it displays a summary of the key points.

 

https://gist.github.com/gowatana/0dfab57feb0598452bccf2448c45f4d9

Contents of the tenant information retrieval script.

The first argument specifies the tenant network configuration file.

 

File name: get_tenant_summary.ps1

$tenant_nw_config =  $args[0]

. $tenant_nw_config

 

function format_output ($title, $object) {

    "=" * 60

    $title

    ""

    ($object | Out-String).Trim()

    ""

}

 

"#" * 60

"Tenant: " + $tenant_name

"Run time: " + (Get-Date).DateTime

""

 

$ls =  Get-NsxTransportZone $tz_name | Get-NsxLogicalSwitch -Name $ls_name

$ls_id = $ls.objectId

$dvpg = $ls | Get-NsxBackingPortGroup

$dlr = Get-NsxLogicalRouter -objectId $dlr_id

$dlr_if = $dlr | Get-NsxLogicalRouterInterface | where {$_.connectedToId -eq $ls_id}

$dlr_if_addr = $dlr_if.addressGroups.addressGroup | %{$_.primaryAddress + "/" + $_.subnetPrefixLength}

$ls_info = $ls | fl `

    name,

    objectId,

    @{N="VDSwitch";E={$dvpg.VDSwitch.Name}},

    @{N="dvPortgroup";E={$dvpg.Name}},

    @{N="DlrIfIPAddress";E={$dlr_if_addr}}

format_output "Tenant Network" $ls_info

 

$esg = Get-NsxEdge -objectId $esg_id

$nat_original_addr = $nw_addr + "/" + $nw_msak_length

$snat_rule = $esg | Get-NsxEdgeNat | Get-NsxEdgeNatRule |

    where {$_.originalAddress -eq $nat_original_addr}

$snat_rule_info = $snat_rule | fl translatedAddress,originalAddress

format_output "ESG SNAT rule information" $snat_rule_info

 

$dfw_section = Get-NsxFirewallSection -Name $dfw_section_name

$dfw_rules = $dfw_section.rule

$dfw_rules_info = $dfw_rules | select `

    id,

    name,

    @{N="Src";E={

            $member = $_ | Get-NsxFirewallRuleMember |

                where {$_.MemberType -eq "Source"} |

                % {if($_.Name -eq $null){$_.Value}else{$_.Name}

            }

            if(($member).Count -eq 0){$member = "Any"}

            $member

        }

    },

    @{N="Dst";E={

            $member = $_ | Get-NsxFirewallRuleMember |

                where {$_.MemberType -eq "Destination"} |

                % {if($_.Name -eq $null){$_.Value}else{$_.Name}

            }

            if(($member).Count -eq 0){$member = "Any"}

            $member

        }

    },

    @{N="Service";E={$_.services.service.name}},

    action,

    @{N="appliedTo";E={$_.appliedToList.appliedTo.name}},

    logged | ft -AutoSize

format_output ("DFW section " + $dfw_section.name + " rule information") $dfw_rules_info

 

# Information on VMs connected to the logical switch

$vms = $ls | Get-NsxBackingPortGroup | Get-VM | sort Name

$vm_info = $vms | % {

    $vm = $_

    $guest = $_ | Get-VmGuest

    $vm | select `

        @{N="VM";E={$_.Name}},

        @{N="HostName";E={$_.Guest.ExtensionData.HostName}},

        @{N="State";E={$_.Guest.State}},

        @{N="IPAddress";E={

                $_.Guest.ExtensionData.Net.IpConfig.IpAddress |

                    where {$_.PrefixLength -le 32} |

                    % {$_.IpAddress + "/" + $_.PrefixLength}

            }

        },

        @{N="Gateway";E={

                $guest_dgw = $_.Guest.ExtensionData.IpStack.IpRouteConfig.IpRoute |

                    where {$_.Network -eq "0.0.0.0"}

                $guest_dgw.Gateway.IpAddress

            }

        },

        @{N="GuestFullName";E={$_.Guest.ExtensionData.GuestFullName}}

} | ft -AutoSize

format_output "VM / guest network information" $vm_info

 

Example tenant deletion script.

As in Automating tenant addition with PowerNSX. Part.3,

it deletes the VMs and then the logical switch as well.

There are no interactive confirmation prompts; everything is deleted in one go.

 

Contents of the tenant deletion script.

The first argument specifies the tenant network configuration file.

 

File name: delete_tenant.ps1

$tenant_nw_config =  $args[0]

. $tenant_nw_config

 

$ls = Get-NsxTransportZone $tz_name | Get-NsxLogicalSwitch -Name $ls_name

 

"Remove VM"

$ls | Get-NsxBackingPortGroup | Get-VM | Stop-VM -Confirm:$false | Out-Null

$ls | Get-NsxBackingPortGroup | Get-VM | Remove-VM -DeletePermanently -Confirm:$false

 

"Remove DFW Rule"

Get-NsxFirewallSection -Name $dfw_section_name |

    Remove-NsxFirewallSection -force -Confirm:$false

 

"Remove SNAT Rule"

$nat_original_addr = $nw_addr + "/" + $nw_msak_length

Get-NsxEdge -objectId $esg_id | Get-NsxEdgeNat | Get-NsxEdgeNatRule |

    where {$_.originalAddress -eq $nat_original_addr} |

    Remove-NsxEdgeNatRule -Confirm:$false

 

"Remove NSX Logical Switch"

Get-NsxLogicalRouter -objectId $dlr_id | Get-NsxLogicalRouterInterface |

    where {$_.connectedToId -eq $ls.objectId} |

    Remove-NsxLogicalRouterInterface -Confirm:$false

$ls | Remove-NsxLogicalSwitch -Confirm:$false

 

"Remove VM Folder"

Get-Folder -Type VM $tenant_name | Remove-Folder -Confirm:$false

 

To be continued.

Automating tenant addition with PowerNSX. Part.5

So far we've created logical switches and VMs with PowerCLI / PowerNSX,

but just creating objects endlessly isn't very realistic.

So this time, let's delete the tenant created earlier (in the post below) with PowerCLI / PowerNSX.

Automating tenant addition with PowerNSX. Part.2

 

NSX has no feature for configuring multi-tenancy directly,

but we'll delete the tenant-like objects below, in the reverse of the order they were created:

  1. VMs
  2. DFW rules
  3. SNAT rule
  4. Logical switch

 

1. Deleting the VMs.

Before deleting the logical switch and other objects, we first delete the VMs.

This time, we delete all VMs attached to the logical switch "ls-tenant-04" at once.

 

First, the VMs below are the ones to be deleted.

Two VMs (vm41 and vm42) were created and connected to the logical switch.

The VMs are retrieved from the logical switch's backing port group.

In this environment the logical switch "ls-tenant-04" is also assigned to an internal interface of the DLR,

but the DLR Control VM does not actually get the port group assigned, so it isn't included.

PowerNSX> Get-NsxLogicalSwitch -Name ls-tenant-04 | Get-NsxBackingPortGroup | Get-VM

 

Name                 PowerState Num CPUs MemoryGB

----                 ---------- -------- --------

vm41                 PoweredOn  1        2.000

vm42                 PoweredOn  1        2.000

 

 

Stop the VMs.

PowerNSX> Get-NsxLogicalSwitch -Name ls-tenant-04 | Get-NsxBackingPortGroup | Get-VM | Stop-VM -Confirm:$false

 

Name                 PowerState Num CPUs MemoryGB

----                 ---------- -------- --------

vm42                 PoweredOff 1        2.000

vm41                 PoweredOff 1        2.000

 

 

Delete the VMs.

Now that the VMs are deleted, the command that previously returned two VMs no longer displays anything.

PowerNSX> Get-NsxLogicalSwitch -Name ls-tenant-04 | Get-NsxBackingPortGroup | Get-VM | Remove-VM -DeletePermanently -Confirm:$false

PowerNSX> Get-NsxLogicalSwitch -Name ls-tenant-04 | Get-NsxBackingPortGroup | Get-VM

PowerNSX>

 

2. Deleting the DFW rules.

Since a DFW section was created for each tenant,

here we delete the whole section rather than individual rules.

Because the DFW section still contains rules, we delete it with the "-force" option.

PowerNSX> Get-NsxFirewallSection -Name dfw-section-04

 

 

id               : 1009

name             : dfw-section-04

generationNumber : 1499559304897

timestamp        : 1499559304897

type             : LAYER3

rule             : {allow any to t04 http, allow jbox to t04 ssh}

 

 

PowerNSX> Get-NsxFirewallSection -Name dfw-section-04 | Remove-NsxFirewallSection -force -Confirm:$false

PowerNSX> Get-NsxFirewallSection -Name dfw-section-04

PowerNSX>

 

3. Deleting the SNAT rule.

Delete the SNAT rule that was created in the NAT service of the Edge Service Gateway (ESG).

PowerNSX> Get-NsxEdge -objectId edge-1 | Get-NsxEdgeNat | Get-NsxEdgeNatRule | where {$_.originalAddress -eq "10.1.40.0/24"}

 

 

ruleId                      : 196611

ruleTag                     : 196611

ruleType                    : user

action                      : snat

vnic                        : 0

originalAddress             : 10.1.40.0/24

translatedAddress           : 192.168.1.144

snatMatchDestinationAddress : any

loggingEnabled              : false

enabled                     : true

protocol                    : any

originalPort                : any

translatedPort              : any

snatMatchDestinationPort    : any

edgeId                      : edge-1

 

 

PowerNSX> Get-NsxEdge -objectId edge-1 | Get-NsxEdgeNat | Get-NsxEdgeNatRule | where {$_.originalAddress -eq "10.1.40.0/24"} | Remove-NsxEdgeNatRule -Confirm:$false

PowerNSX> Get-NsxEdge -objectId edge-1 | Get-NsxEdgeNat | Get-NsxEdgeNatRule | where {$_.originalAddress -eq "10.1.40.0/24"}

PowerNSX>

 

4. Deleting the logical switch.

The Object ID of the logical switch to delete is "virtualwire-222".

PowerNSX> Get-NsxTransportZone tz01 | Get-NsxLogicalSwitch -Name ls-tenant-04 | fl name,objectId

 

name     : ls-tenant-04

objectId : virtualwire-222

 

 

First, delete the DLR interface connected to the logical switch "ls-tenant-04".

PowerNSX> Get-NsxLogicalRouter -objectId edge-5 | Get-NsxLogicalRouterInterface | where {$_.connectedToId -eq "virtualwire-222"}

 

 

label           : 27110000000d

name            : if-tenant-04

addressGroups   : addressGroups

mtu             : 1500

type            : internal

isConnected     : true

isSharedNetwork : false

index           : 13

connectedToId   : virtualwire-222

connectedToName : ls-tenant-04

logicalRouterId : edge-5

 

 

PowerNSX> Get-NsxLogicalRouter -objectId edge-5 | Get-NsxLogicalRouterInterface | where {$_.connectedToId -eq "virtualwire-222"} | Remove-NsxLogicalRouterInterface -Confirm:$false

PowerNSX> Get-NsxLogicalRouter -objectId edge-5 | Get-NsxLogicalRouterInterface | where {$_.connectedToId -eq "virtualwire-222"}

PowerNSX>

 

The logical switch still exists at this point,

and its backing distributed port group exists as well.

PowerNSX> Get-NsxLogicalSwitch -Name ls-tenant-04 | Get-NsxBackingPortGroup | fl VDSwitch,Name,Id

 

 

VDSwitch : vds02

Name     : vxw-dvs-36-virtualwire-222-sid-10005-ls-tenant-04

Id       : DistributedVirtualPortgroup-dvportgroup-412

 

 

Delete the logical switch "ls-tenant-04".

PowerNSX> Get-NsxLogicalSwitch -Name ls-tenant-04 | Remove-NsxLogicalSwitch -Confirm:$false

PowerNSX> Get-NsxLogicalSwitch -Name ls-tenant-04

PowerNSX>

 

Incidentally, the backing distributed port group created on the vDS

was also deleted automatically and no longer exists.

PowerNSX> Get-VDPortgroup vxw-dvs-36-virtualwire-222-sid-10005-ls-tenant-04

Get-VDPortgroup : 2017/07/09 12:38:12   Get-VDPortgroup         VDPortgroup with name 'vxw-dvs-36-virtualwire-222-sid-10005-ls-tenant-04' was not found using the specified filter(s).

At line:1 char:1

+ Get-VDPortgroup vxw-dvs-36-virtualwire-222-sid-10005-ls-tenant-04

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    + CategoryInfo          : ObjectNotFound: (:) [Get-VDPortgroup], VimException

    + FullyQualifiedErrorId : Core_OutputHelper_WriteNotFoundError,VMware.VimAutomation.Vds.Commands.Cmdlets.GetVDPort

   group

PowerNSX>

 

With this, the objects created in the previous parts have all been deleted.

 

To be continued...

Automating tenant addition with PowerNSX. Part.4

I'm going to try automating tenant addition to NSX for vSphere using PowerNSX.

This time, we check the state of the tenant environment created last time (the logical switch, VMs, and so on),

combining PowerCLI and PowerNSX.

 

The previous part is here.

Automating tenant addition with PowerNSX. Part.1

 

Checking the tenant environment.

This is the logical switch "ls-tenant-04" that was created.

Its Object ID turned out to be "virtualwire-222".

PowerNSX> Get-NsxTransportZone tz01 | Get-NsxLogicalSwitch -Name ls-tenant-04 | fl name,objectId

 

 

name     : ls-tenant-04

objectId : virtualwire-222

 

 

We can see that the logical switch's backing distributed port group is

"vxw-dvs-36-virtualwire-222-sid-10005-ls-tenant-04".

* Since there is only one Transport Zone, it is omitted from the command.

PowerNSX> Get-NsxLogicalSwitch -Name ls-tenant-04 | select -ExpandProperty vdsContextWithBacking

 

 

switch          : switch

mtu             : 1600

promiscuousMode : false

backingType     : portgroup

backingValue    : dvportgroup-412

missingOnVc     : false

 

 

PowerNSX> Get-NsxLogicalSwitch -Name ls-tenant-04 | Get-NsxBackingPortGroup | fl VDSwitch,Name,Id

 

 

VDSwitch : vds02

Name     : vxw-dvs-36-virtualwire-222-sid-10005-ls-tenant-04

Id       : DistributedVirtualPortgroup-dvportgroup-412

 

 

The backing distributed port group exists as well.

PowerNSX> Get-VDPortgroup | where {$_.Name -eq "vxw-dvs-36-virtualwire-222-sid-10005-ls-tenant-04"} | select Id

 

 

Id

--

DistributedVirtualPortgroup-dvportgroup-412

 

 

Let's look at the DLR instance and the interface connected to the logical switch (virtualwire-222).

PowerNSX> Get-NsxLogicalRouter -objectId edge-5 | Get-NsxLogicalRouterInterface | where {$_.connectedToId -eq "virtualwire-222"}

 

 

label           : 27110000000d

name            : if-tenant-04

addressGroups   : addressGroups

mtu             : 1500

type            : internal

isConnected     : true

isSharedNetwork : false

index           : 13

connectedToId   : virtualwire-222

connectedToName : ls-tenant-04

logicalRouterId : edge-5

 

 

We can also see that "10.1.40.1/24" has been set on the interface.

PowerNSX> Get-NsxLogicalRouter -objectId edge-5 | Get-NsxLogicalRouterInterface | where {$_.connectedToId -eq "virtualwire-222"} | select -ExpandProperty addressGroups | select -ExpandProperty addressGroup | fl

 

primaryAddress     : 10.1.40.1

subnetMask         : 255.255.255.0

subnetPrefixLength : 24

 

 

Information retrieved with PowerNSX comes back as deeply nested XmlElement objects, so from here on there would be a lot of "select -ExpandProperty".

Alternatively, it can be output as XML with Format-XML, like this:

PowerNSX> Get-NsxLogicalRouter -objectId edge-5 | Get-NsxLogicalRouterInterface | where {$_.connectedToId -eq "virtualwire-222"} | select -ExpandProperty addressGroups | Format-XML

<addressGroups>

  <addressGroup>

    <primaryAddress>10.1.40.1</primaryAddress>

    <subnetMask>255.255.255.0</subnetMask>

    <subnetPrefixLength>24</subnetPrefixLength>

  </addressGroup>

</addressGroups>

PowerNSX>

 

This is the SNAT rule created on the ESG.

Source addresses in 10.1.40.0/24 are translated to 192.168.1.144.

PowerNSX> Get-NsxEdge -objectId edge-1 | Get-NsxEdgeNat | Get-NsxEdgeNatRule | where {$_.originalAddress -eq "10.1.40.0/24"}

 

ruleId                      : 196611

ruleTag                     : 196611

ruleType                    : user

action                      : snat

vnic                        : 0

originalAddress             : 10.1.40.0/24

translatedAddress           : 192.168.1.144

snatMatchDestinationAddress : any

loggingEnabled              : false

enabled                     : true

protocol                    : any

originalPort                : any

translatedPort              : any

snatMatchDestinationPort    : any

edgeId                      : edge-1

 

 

Let's look at the DFW rules that were created.

The rules were created in the "dfw-section-04" section.

We can see that there are two rules.

PowerNSX> Get-NsxFirewallSection -Name dfw-section-04

 

 

id               : 1009

name             : dfw-section-04

generationNumber : 1499213132983

timestamp        : 1499213132983

type             : LAYER3

rule             : {allow any to t04 http, allow jbox to t04 ssh}

 

 

Both are allow rules (action: allow).

PowerNSX> Get-NsxFirewallSection -Name dfw-section-04 | select -ExpandProperty rule

 

 

id            : 1105

disabled      : false

logged        : false

name          : allow any to t04 http

action        : allow

appliedToList : appliedToList

sectionId     : 1009

destinations  : destinations

services      : services

direction     : inout

packetType    : any

 

id            : 1104

disabled      : false

logged        : false

name          : allow jbox to t04 ssh

action        : allow

appliedToList : appliedToList

sectionId     : 1009

sources       : sources

destinations  : destinations

services      : services

direction     : inout

packetType    : any

 

 

The sources and destinations of the rules can also be checked.

The rule allowing SSH from the jump box to the logical switch (RuleId: 1104) looks like this.

PowerNSX> Get-NsxFirewallRule -RuleId 1104 | Get-NsxFirewallRuleMember

 

 

RuleId     : 1104

SectionId  : 1009

MemberType : Source

Name       :

Value      : 192.168.1.223

Type       : Ipv4Address

isValid    : true

 

RuleId     : 1104

SectionId  : 1009

MemberType : Destination

Name       : ls-tenant-04

Value      : virtualwire-222

Type       : VirtualWire

isValid    : true

 

 

The service of rule ID 1104 is SSH.

PowerNSX> Get-NsxFirewallRule -RuleId 1104 | select -ExpandProperty services | select -ExpandProperty service | fl

 

 

name    : SSH

value   : application-368

type    : Application

isValid : true

 

 

The rule allowing HTTP from anywhere to the logical switch (RuleId: 1105)

has no source; only the destination (Destination) is specified.

PowerNSX> Get-NsxFirewallRule -RuleId 1105 | Get-NsxFirewallRuleMember

 

 

RuleId     : 1105

SectionId  : 1009

MemberType : Destination

Name       : ls-tenant-04

Value      : virtualwire-222

Type       : VirtualWire

isValid    : true

 

 

The service of rule ID 1105 is HTTP.

PowerNSX> Get-NsxFirewallRule -RuleId 1105 | select -ExpandProperty services | select -ExpandProperty service | fl

 

 

name    : HTTP

value   : application-253

type    : Application

isValid : true

 

 

Checking the created VM.

The VM information is checked mainly with PowerCLI.

The created VM "vm41" has been placed on the specified datastore, "ds_nfs_lab02".

PowerNSX> Get-VM vm41 | fl Name,PowerState,GuestId,@{N="Datastore";E={$_|Get-Datastore}}

 

 

Name       : vm41

PowerState : PoweredOn

GuestId    : other3xLinux64Guest

Datastore  : ds_nfs_lab02

 

 

The vNIC has the logical switch's backing port group set on it.

PowerNSX> Get-VM vm41 | Get-NetworkAdapter | fl Parent,Name,NetworkName,@{N="Connected";E={$_.ConnectionState.Connected}}

 

 

Parent      : vm41

Name        : Network adapter 1

NetworkName : vxw-dvs-36-virtualwire-222-sid-10005-ls-tenant-04

Connected   : True

 

 

Guest OS information for vm41.
VMware Tools (open-vm-tools, since this is Photon OS) is installed.

PowerNSX> (Get-VM vm41 | Get-VMGuest).ExtensionData | select HostName,GuestState,ToolsStatus,IpAddress,GuestFullName

 

 

HostName      : vm41

GuestState    : running

ToolsStatus   : toolsOk

IpAddress     : 10.1.40.101

GuestFullName : Other 3.x or later Linux (64-bit)

 

 

The guest OS has the address "10.1.40.101" configured.

PowerNSX> (Get-VM vm41 | Get-VMGuest).ExtensionData.Net.IpConfig | select -ExpandProperty IpAddress | select IpAddress,PrefixLength

 

 

IpAddress                PrefixLength

---------                ------------

10.1.40.101                        24

fe80::250:56ff:feaf:d89a           64

 

 

The default gateway is configured as well.

PowerNSX> (Get-VM vm41 | Get-VMGuest).ExtensionData.IpStack | select -ExpandProperty IpRouteConfig | select -ExpandProperty IpRoute | where {$_.Network -eq "0.0.0.0"} | select @{N="Gateway";E={$_.Gateway.IpAddress}}

 

 

Gateway

-------

10.1.40.1

 

 

We can also see that the created VM is connected to the logical switch.

PowerNSX> Get-NsxLogicalSwitch -Name ls-tenant-04 | Get-NsxBackingPortGroup | Get-VM

 

 

 

Name                 PowerState Num CPUs MemoryGB

----                 ---------- -------- --------

vm41                 PoweredOn  1        2.000

 

 

 

To be continued.

Automating tenant addition with PowerNSX. Part.3

Why do we need SIOC?

 

1. Without SIOC, a datastore's I/O resources are effectively divided on a per-host basis, regardless of how many VMs each host runs. If host A runs more VMs than host B, each VM on host B ends up with a larger share of the datastore than each VM on host A.

 

2. This creates a situation where one VM can monopolize the datastore by flooding it with I/O requests, depriving the other VMs of that resource and degrading their performance. This behavior is called the noisy neighbour problem.

 

3. Storage I/O Control was introduced in vSphere 4.1 and initially worked only on block-based VMFS datastores (iSCSI and Fibre Channel). In vSphere 5, SIOC was enhanced with support for NFS datastores.

 

SIOC stats collection mechanism

 

 

1. SIOC keeps track of a performance metric called normalized latency, compares it with the congestion latency threshold every four seconds, and only engages when the normalized latency rises above that threshold.

SIOC calculates the normalized latency by factoring in the latency values of all hosts and all VMs accessing the datastore, taking different I/O sizes into account (larger I/Os mean higher observed latency than smaller I/Os).
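As a rough illustration of that threshold check (this is not VMware's actual algorithm; the function names and the simple I/O-count weighting are assumptions made for the sketch):

```python
def normalized_latency_ms(samples):
    """samples: (avg_latency_ms, io_count) pairs, one per host/VM
    accessing the datastore. Returns the I/O-count-weighted average
    latency, a crude stand-in for SIOC's normalized latency."""
    total_ios = sum(n for _, n in samples)
    if total_ios == 0:
        return 0.0
    return sum(lat * n for lat, n in samples) / total_ios

def sioc_engages(samples, threshold_ms=30.0):
    """SIOC only kicks in when the normalized latency exceeds the
    congestion threshold (30 ms by default)."""
    return normalized_latency_ms(samples) > threshold_ms

# One busy host drags the datastore-wide figure above 30 ms:
samples = [(10.0, 500), (45.0, 1500)]
print(normalized_latency_ms(samples))  # 36.25
print(sioc_engages(samples))           # True
```

The weighting matters: a host issuing three times as many I/Os pulls the datastore-wide figure toward its own latency.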

 

 

2. From vSphere 5.1, a "stats only" mode is available that gathers performance statistics without throttling, because Storage DRS uses the last 24 hours' averages to make its recommendations. Stats-only mode is sometimes disabled because of the (significant) increase in log data written to the vCenter database, but from ESXi 6.0 VMware enables I/O metric collection for every datastore by default. This can cause odd latency spikes on your storage system that are very hard to isolate if you don't know what you're looking for or what might be causing them.

 

When does SIOC trigger, and what does it do?

 

1. When the latency (between host and datastore, as well as between VM and datastore) is equal to or higher than 30 ms, with the statistics computed every 4 seconds, the "datastore-wide disk scheduler" determines which action to take to reduce the overall/average latency and increase fairness. (Before vSphere 5.1, this latency threshold could be set manually, from 5 ms to 100 ms.) Relying on a purely manual threshold was dropped in 5.1 because a threshold set too low makes SIOC start throttling IOPS too early and decreases the throughput of the storage.

 

 

2. A major enhancement in vSphere 5.1 replaced the manual congestion threshold with a percentage of peak throughput, determined automatically for each device by injecting I/Os and observing both latency and throughput. When the throughput of the LUN starts to suffer, it has reached its maximum, and the latency corresponding to 90% of that maximum throughput is used as the threshold.

 

3. Disk shares, like most fairness mechanisms, do not kick in unless there is congestion on the resource. As long as there is enough for everybody, nothing prevents any VM from consuming what it wants, unless you specify a hard limit on its maximum number of IOPS.

SIOC throttles VM IOPS by modifying the device queue depth on each host, forcing the VMs to adjust their IOPS to match their configured disk shares relative to the other VMs sharing the same LUN across all hosts.
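A minimal sketch of that throttling idea (illustrative only; the real datastore-wide scheduler is considerably more involved, and `host_queue_depth` is an invented name): each host's slice of the device queue is proportional to the disk shares its VMs hold on the LUN.

```python
def host_queue_depth(host_shares, total_shares, device_queue_depth):
    """Give a host a share of the device queue depth proportional to
    the disk shares of its VMs relative to all VMs on the LUN.
    At least one slot is always kept so the host can make progress."""
    return max(1, round(device_queue_depth * host_shares / total_shares))

# Two hosts share a LUN whose device queue has 64 slots.
# VMs on host A hold 3000 shares, VMs on host B hold 1000 shares.
print(host_queue_depth(3000, 4000, 64))  # 48
print(host_queue_depth(1000, 4000, 64))  # 16
```

Shrinking a host's queue depth caps how many I/Os it can have in flight, which is what forces its VMs down to their fair share.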

 

How to configure SIOC?

 

 

Configuring Storage I/O Control is a two-step process:

1. Enable Storage I/O Control for the datastore.

2. Set the number of storage I/O shares and the upper limit of I/O operations per second (IOPS) allowed for each virtual machine.

By default, all virtual machine shares are set to Normal (1000) with unlimited IOPS.

 

If a virtual machine has more than one virtual disk, you must set the limit on all of its virtual disks; otherwise, the limit is not enforced for the virtual machine. In that case, the limit on the virtual machine is the aggregate of the limits of all its virtual disks.

 

For example: if you want to restrict a snapshot or backup application issuing 128 KB I/Os to 5 MB per second, set a limit of 40 IOPS.
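The arithmetic behind that example is simply the throughput cap divided by the I/O size (a quick check, assuming 1 MB = 1024 KB):

```python
def iops_for_throughput(throughput_kb_per_s, io_size_kb):
    """IOPS limit that caps a workload of fixed-size I/Os at the
    given throughput."""
    return throughput_kb_per_s // io_size_kb

# 5 MB/s of 128 KB I/Os: 5120 KB/s / 128 KB = 40 IOPS
print(iops_for_throughput(5 * 1024, 128))  # 40
```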

 

 

SIOC in ESXi 6.5

 

VMware has introduced SIOC v2, which is supported from vSphere 6.5.

 

1. SIOC v2 in this release is based on the vSphere APIs for I/O Filtering. The SIOC v1 implementation is still supported in vSphere 6.5, so the settings you have already configured for your VMs keep working if you choose not to use SIOC v2.

 

2. The user specifies a policy for SIOC, and the policy determines how many shares the VM is configured with. SIOC v2 is only supported for VMs that run on VMFS and NFS datastores.

Since SIOC v2 is now an I/O filter, it gains the ability to leverage Storage Policy Based Management (SPBM) to apply storage I/O control in this release.

 

In vSphere 6.5, Storage I/O Control has been reimplemented on top of the VAIO framework.

 

What is the VAIO framework?

 

VAIO stands for vSphere APIs for IO Filtering.

 

VMware provides certain categories of I/O filters that are installed on your ESXi hosts and vCenter Server.

The supported types of filters include the following:

  • Replication: Replicates all write I/O operations to an external target location, such as another host or cluster.
  • Encryption: Offered by VMware. Provides encryption mechanisms for virtual machines. For more information, see the vSphere 6.5 Security new features.
  • Caching: Implements a cache for virtual disk data. The filter can use a local flash storage device to cache the data and increase the IOPS and hardware utilization rates for the virtual disk. If you use the caching filter, you might need to configure a Virtual Flash Resource.
  • Storage I/O control

 

 

The VAIO framework enables policy-based management. This applies to caching, replication, and QoS alike: instead of configuring disks or VMs individually, you can specify the configuration details in a VM Storage Policy and assign that policy to a VM or VMDK.

 

 

Should the end user enable or disable SIOC?

 

It depends on the environment in which you are using the feature.

 

1. Enable it where hosts run VMs for multiple tenants (as in public clouds), since it helps reduce the effect of a noisy neighbour trying to hog storage resources at the expense of the well-behaved VMs.

 

2. Disable SIOC on auto-tiering storage configurations, especially if the tiering happens at the block level. It is also recommended to disable I/O metric collection there, because the metrics will differ depending on where the blocks of the datastore physically reside.



Points to be Noted about SIOC

  • SIOC is disabled by default
  • SIOC needs to be enabled on a per Datastore level
  • SIOC only engages when a specific level of latency has been reached
  • SIOC has a default latency threshold of 30 ms
  • SIOC uses an average latency across hosts
  • SIOC uses disk shares to assign I/O queue slots
  • SIOC does not use vCenter, except for enabling the feature
  • SIOC is included only in the Enterprise Plus edition of vSphere

 

Although SIOC is enabled using a vCenter Server, the latency values are actually stored on the datastore itself. This allows SIOC to keep functioning even if vCenter goes down or becomes unavailable after SIOC is enabled.

I'm going to try automating tenant addition to NSX for vSphere using PowerNSX.

That said, the current NSX has no built-in multi-tenancy feature,

so we add a new logical switch and VMs, tenant-style, to an already built DLR + VXLAN environment.

 

This time, we assume the DLR and the networks above it are already configured.

We use a simple network environment like the one in the post below.

Looking at NSX Edge static route settings with PowerNSX.

 

The flow of this tenant addition.

We create the network first, then create a VM and connect it to the network.

The created VM is expected to end up with an IP address on the tenant network.

 

Create the tenant network.

  • 1. Create a logical switch.
  • 2. Connect the logical switch to the DLR.
  • 3. Add a SNAT rule to the ESG.
  • 4. Create DFW allow rules.

 

Create a VM and connect it to the tenant network.

  • 5. Create a VM from a template.
  • 6. Connect the VM to the logical switch.

 

We create the additional tenant (the part inside the red frame) as shown in the image below.

powernsx-automation-img.png

First, let's run PowerNSX interactively and create a whole tenant.

Connect to the NSX environment in advance with Connect-NsxServer.

On how to connect to NSX with PowerNSX:

PowerNSX> Connect-NsxServer -vCenterServer <vCenter address>

 

1. Create a logical switch.

Create the logical switch "ls-tenant-04" to be used as the tenant network.

The Transport Zone in this environment is named "tz01".

PowerNSX> $tz = Get-NsxTransportZone -name tz01

PowerNSX> $ls = New-NsxLogicalSwitch -TransportZone $tz -Name ls-tenant-04

PowerNSX> $ls | select Name,objectId

 

 

name         objectId

----         --------

ls-tenant-04 virtualwire-222

 

 

2. Connect the logical switch to the DLR.

Connect the created logical switch to the DLR instance (edge-5 in this case).

Also set the IP address that will be the tenant's gateway (10.1.40.1 in this case).

PowerNSX> Get-NsxLogicalRouter -objectId edge-5 | New-NsxLogicalRouterInterface -Type internal -PrimaryAddress 10.1.40.1 -SubnetPrefixLength 24 -ConnectedTo $ls -Name if-tenant-04

 

3. Add a SNAT rule to the ESG.

Add a SNAT rule to the Edge Service Gateway (ESG) for the network the tenant will use.

  • The ESG uplink address is 192.168.1.144.
  • The network address used by the tenant is 10.1.40.0/24.

 

For more on using SNAT on the ESG, see also:

A look at NSX ESG SNAT configuration.

 

PowerNSX> Get-NsxEdge -objectId edge-1 | Get-NsxEdgeNat | New-NsxEdgeNatRule -Vnic 0 -action snat -OriginalAddress "10.1.40.0/24" -TranslatedAddress 192.168.1.144

 

4. DFW の許可ルールを作成。

分散ファイアウォール(DFW)に、今回は下記のように通信許可ルールを追加します。

  • デフォルトのままでは、通信が拒否されるようにしてあります。
    別テナントからは、許可ルールを入れていないので、つながらない状態のままになります。
  • テナント向けに、DFW ルールのセクション(dfw-section-04)を作成します。
  • 踏み台サーバ(192.168.1.223)からテナントへの SSH を許可します。
  • 任意のネットワーク(Any)からテナントへの HTTP も許可します。

 

The DFW rules are applied to the logical switch so that they do not need changing every time a VM is added.

Also, since this is a quick lab build, some of the rules may be a bit rough.

PowerNSX> $dfw_section = New-NsxFirewallSection -Name dfw-section-04

PowerNSX> $svc = Get-NsxService -Name SSH | where {$_.isUniversal -eq $false}

PowerNSX> $dfw_section | New-NsxFirewallRule -Name "allow jbox to t04 ssh"   -Action allow -Source 192.168.1.223 -Destination $ls -Service $svc -AppliedTo $ls

PowerNSX> $svc = Get-NsxService -Name HTTP | where {$_.isUniversal -eq $false}

PowerNSX> $dfw_section | New-NsxFirewallRule -Name "allow any to t04 http" -Action allow -Destination $ls -Service $svc -AppliedTo $ls

 

5. Create a VM from a template.

Create a VM from a VM template prepared in advance.

The IP address and other settings are applied when the VM is created (cloned).

Incidentally, the VM uses the Photon OS 1.0 rev-2 OVA.

 

First, create an OS customization spec.

PowerNSX> New-OSCustomizationSpec -Name spec-vm41 -OSType Linux -DnsServer 10.1.1.1 -Domain go-lab.jp

PowerNSX> Get-OSCustomizationSpec -Name spec-vm41 | Get-OSCustomizationNicMapping | Set-OSCustomizationNicMapping -IpMode UseStaticIP -IpAddress 10.1.40.101 -SubnetMask 255.255.255.0 -DefaultGateway 10.1.40.1

 

Then create the VM, specifying the customization spec.

The customization spec is deleted right after the clone finishes.

PowerNSX> $spec = Get-OSCustomizationSpec -Name spec-vm41

PowerNSX> Get-Template -Name photon-1.0-rev2 | New-VM -Name vm41 -ResourcePool nsx-cluster-01 -Datastore ds_nfs_lab02 -OSCustomizationSpec $spec
PowerNSX> $spec | Remove-OSCustomizationSpec -Confirm:$false

 

6. Connect the VM to the logical switch.

Once the VM clone completes, connect its vNIC to the logical switch.

 

It is also possible to specify the backing distributed port group with New-VM, but this time I deliberately used Connect-NsxLogicalSwitch.

PowerNSX> Get-VM vm41 | Get-NetworkAdapter -Name "Network adapter 1" | Connect-NsxLogicalSwitch $ls

 

When the VM is started, it comes up connected to the logical switch.

PowerNSX> Start-VM vm41

 

It connects to the network with its IP address already configured.

PowerNSX> Get-VM vm41 | Get-VMGuest | select VM,State,IPAddress,OSFullName

 

 

VM     State IPAddress                               OSFullName

--     ----- ---------                               ----------

vm41 Running {10.1.40.101, fe80::250:56ff:feaf:d89a} Other 3.x or later Linux (64-bit)

 

 

To be continued...

Automating tenant addition with PowerNSX: Part 2

This time, I'll show what the NAT service of the NSX Edge Services Gateway (ESG) looks like.

I use SNAT when, for example, traffic from my home-lab NSX environment goes out to the Internet.

esg-snat.png

 

The ESG uplink in this environment is configured with the address 192.168.1.144.

esg-snat-01.png

 

An SNAT rule is then configured so that when the source address is in the segment of VMs that should reach the Internet (10.1.30.0/24 here), it is translated to the ESG uplink address.

esg-snat-02.png
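The same rule could also be created with PowerNSX instead of the UI, using the New-NsxEdgeNatRule cmdlet already shown in the tenant walkthrough. This is a sketch assuming the same edge ID (edge-1) and uplink vNIC (0) as this environment:

```powershell
# Sketch: create the SNAT rule from the screenshot above with PowerNSX
# (same cmdlet as in the tenant post; assumes edge-1 / vNIC 0).
Get-NsxEdge -objectId edge-1 | Get-NsxEdgeNat |
    New-NsxEdgeNatRule -Vnic 0 -action snat `
        -OriginalAddress "10.1.30.0/24" -TranslatedAddress "192.168.1.144"
```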

 

The SNAT rule configuration screen looks like this:

esg-snat-03.png

 

By the way, you can also check the NAT rules with PowerNSX.

PowerCLI C:\> Get-NsxEdge -objectId edge-1 | Get-NsxEdgeNat | Get-NsxEdgeNatRule

 

 

ruleId                      : 196609

ruleTag                     : 196609

ruleType                    : user

action                      : snat

vnic                        : 0

originalAddress             : 10.1.30.0/24

translatedAddress           : 192.168.1.144

snatMatchDestinationAddress : any

loggingEnabled              : false

enabled                     : true

description                 :

protocol                    : any

originalPort                : any

translatedPort              : any

snatMatchDestinationPort    : any

edgeId                      : edge-1

 

 

 

That's a look at NAT on my home-lab NSX ESG, posted as reference material for some recent posts.
