All TKB Articles in Networking

Proxy ARP on the NSX-T Edge node has been supported since NSX-T 2.4. Proxy ARP is enabled automatically when a NAT rule or a load balancer VIP uses an IP address from the subnet of the Tier-0 gateway uplink. Proxy ARP can be considered in environments where IP subnets are limited and where introducing new subnets (via static routes or BGP) is difficult to do quickly. For production environments, VMware recommends implementing proper routing between the physical fabric and the NSX-T Tier-0 gateway, using either static routes or Border Gateway Protocol with BFD; with BFD's sub-second timers, routing converges faster after a failure. The following table summarizes the design options and support for the Proxy ARP feature on the NSX-T Edge:

Tier-0 High Availability Mode | NSX-T Proxy ARP Support
Active / Standby              | Supported
Active / Active               | Not supported

For more details regarding Proxy ARP on NSX-T, please refer to the document below.
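Since proxy ARP is triggered only when the NAT or VIP address falls inside the Tier-0 uplink subnet, the trigger condition is a simple subnet-containment check. A minimal sketch in Python (the addresses and the helper name are illustrative, not NSX API calls):

```python
import ipaddress

def proxy_arp_applies(service_ip: str, uplink_network: str) -> bool:
    """Return True when an NSX-T edge would answer ARP on behalf of
    service_ip, i.e. the NAT/VIP address falls inside the Tier-0
    uplink subnet (illustrative check only)."""
    return ipaddress.ip_address(service_ip) in ipaddress.ip_network(uplink_network)

# With an uplink subnet of 192.0.2.0/24 (example range): a NAT rule
# using 192.0.2.50 triggers proxy ARP, while 198.51.100.7 would need
# a static route or BGP advertisement instead.
print(proxy_arp_applies("192.0.2.50", "192.0.2.0/24"))    # True
print(proxy_arp_applies("198.51.100.7", "192.0.2.0/24"))  # False
```

This is also why Active/Active Tier-0 is unsupported: two edges answering ARP for the same address in the same subnet would conflict.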
Recently completed this technical paper. Here's a synopsis right from the doc: The purpose of this document is to analyze HCX service behavior (as it relates to planned and unplanned service-impacting events), and to explore configuration and architecture practices that maximize availability during those types of events. The document gives a synopsis of HCX operations, and the details of those operations as they relate to service or workload availability. The following topics are covered:
- Understanding Service Impact During Planned HCX Migration Operations
- Understanding Service Impact During Planned HCX Network Extension Operations
- Understanding Service Impact During Planned HCX Service Updates
- Understanding Unplanned HCX Service Impact Scenarios
- Considerations and Best Practices for HCX Service Availability
VMware HCX Service Availability & Resiliency v1.0
This is an informal document that walks through the step-by-step deployment and configuration workflow for the NSX-T Edge Single N-VDS Multi-TEP design. It uses the NSX-T 3.0 UI to show the workflow, which is broken down into the following three sub-workflows:
1. Deploy and configure the Edge node (VM & bare metal) with Single N-VDS Multi-TEP.
2. Prepare NSX-T for Layer 2 external (North-South) connectivity.
3. Prepare NSX-T for Layer 3 external (North-South) connectivity.
Hope this helps.
--The VMware NSX Product Management
This is the VMware® NSX-T 3.0 Security Configuration Guide. This guide provides prescriptive guidance for customers on how to deploy and operate VMware® NSX-T in a secure manner. The guide is provided in an easy-to-consume spreadsheet format with rich metadata (similar to the existing NSX for vSphere and VMware vSphere Security Configuration Guides) to allow for guideline classification and risk assessment. Feedback and comments for the authors and the NSX Solution Team can be posted as comments to this community post (note: users must log in to VMware Communities before posting a comment). Another related NSX security guide can be found at https://communities.vmware.com/docs/DOC-37726
--The VMware NSX PM/TPM Team
This procedure is based on KB 57844. When the Segment ID Pool range in one environment (first vCenter Server) overlaps with another environment (second vCenter Server), the following is the full process for migrating the current working objects (VMs, NSX Edges, Logical Routers, Logical Switches) to a new Segment ID Pool:

I. Prerequisites
0. Put monitoring suppression in place for the vCD cells, the vCenter Server and NSX Manager.
1. Upgrade all components to 6.4.4: NSX Manager, NSX Controllers, host agents, Edge Gateways.
2. Stop the backups (if they use the vCenter Server API).
3. Set up Postman:
3.1. Download and start Postman.
3.2. Create a request.
3.3. Headers > Key "Content-Type" > Value "application/xml"
3.4. Authorization > Basic Auth > username "admin" > password
3.5. File > Settings > Turn off "SSL Certificate Verification"
4. Stop the vCenter Server operations from vCD: log in to vCD (https://yourowncloud.com): Manage & Monitor > vCenters > Right-click the vCenter Server > Disable
5. Change the cluster DRS configuration from Fully Automated to Manual.
6. Gather information (with PowerShell) into a CSV file about all NSX objects that will be migrated: Logical Switches, Logical Routers, Edges, VMs, etc. (the script below only collects data):

[CmdletBinding(PositionalBinding=$false)]
Param (
    [parameter(Position=0, Mandatory=$false)]
    [string]$VIServer = "YOURCLOUDVCR01.local",
    [string]$PathExportNsxLogicalSwitch = "C:\Support\Scripts\NSX\NSXReport.csv",
    [string]$PathExportNsxLogicalRouter = "C:\Support\Scripts\NSX\NSXLogicalRouter.csv"
)
begin {
    If ( ! (Get-Module PowerNSX) ) {
        Import-Module PowerNSX
    }
    # Connect to the NSX server
    $connection = Connect-NsxServer -vCenterServer $VIServer
    $defaultNsxConnection = $connection
    $defaultViServer = $connection.viConnection
}
process {
    # Get NSX Edge information
    $getEdge = Get-NsxEdge | Get-NsxEdgeInterface
    $edge = $getEdge | select name, edgeId, portgroupName
    $edgeEdgeSub = $getEdge | Get-NsxEdgeSubInterface
    # Get NSX Logical Router information
    $getNsxLogicalRouter = Get-NsxLogicalRouter | Get-NsxLogicalRouterInterface | select connectedToId, logicalRouterId, connectedToName, type
    $output = foreach ( $ls in Get-NsxLogicalSwitch ) {
        $pg = $ls | Get-NsxBackingPortGroup
        foreach ( $portgroup in $pg ) {
            $vm = $portgroup | Get-VM
            foreach ( $virtualmachine in $vm ) {
                $vlookup = $edge | where { $_.portgroupName -like $ls.name }
                $vlookupEdgeSub = $edgeEdgeSub | where { $_.logicalSwitchName -like $ls.name }
                $VMdetails = ( Get-VM $virtualmachine.name | Get-NetworkAdapter | where { $_.NetworkName -like $portgroup.name } )
                [pscustomobject]@{
                    "vCenter" = $defaultViServer.name
                    "NSX" = $defaultNsxConnection.server
                    "LS_ObjectID" = $ls.objectId
                    "LS_Name" = $ls.name
                    "LS_vdnId" = $ls.vdnId
                    "EdgeID" = $vlookup.edgeId
                    "EdgeVNIC" = $vlookup.name
                    "EdgeTrunk_LS_ID" = $vlookupEdgeSub.logicalSwitchId
                    "EdgeTrunk_LS_Name" = $vlookupEdgeSub.logicalSwitchName
                    "EdgeTrunk_LS_isConnected" = $vlookupEdgeSub.isConnected
                    "LS_tenantId" = $ls.tenantId
                    "BackingPortGroup" = $portgroup.name
                    "VirtualMachine" = $virtualmachine.name
                    "VirtualMachineNICname" = $VMdetails.name
                    "VirtualMachineNICmac" = $VMdetails.MacAddress
                } # END pscustomobject
            }
        }
    }
    $getNsxLogicalRouter | Export-Csv $PathExportNsxLogicalRouter -NoTypeInformation
    $output | Export-Csv $PathExportNsxLogicalSwitch -NoTypeInformation
}
end {
    Disconnect-NsxServer
}

II. Migration
1. Create a new, non-overlapping segment range using Postman (Body > raw):
POST https://10.10.10.40/api/2.0/vdn/config/segments
<segmentRange>
  <name>DATACENTER</name>
  <begin>10001</begin>
  <end>20000</end>
</segmentRange>
Note the segment range "id" (let's call it newRangeId) returned in the response payload.
2. GET segments will also return the segment range "id":
GET https://10.10.10.40/api/2.0/vdn/config/segments
Example output (<newRangeId> is the <id> value):
<segmentRanges>
  <segmentRange>
    <id>1</id>   <- this is the ID to use in step 4
    <name>5000-5999</name>
    <begin>5000</begin>
    <end>5999</end>
    <isUniversal>false</isUniversal>
    <universalRevision>0</universalRevision>
  </segmentRange>
</segmentRanges>
3. Disconnect Edges, VMs and vNICs from the dvPortgroup (Logical Switch) by following the steps below:
3.1. Before any deletion, write down every logical switch connection (VMs, Edges):
3.1.1. Home > Network and Security > Logical Switches > take a screenshot of the Logical Switch ID, Segment ID and Name > click the logical switch > Related Objects > take a screenshot of the Edge tab and the VMs tab
3.1.2. Home > Network and Security > Edge Gateways > click the Edge (or Logical Router) > Manage > Settings > Interfaces (take a screenshot and write down the information inside the edit menu)
3.1.3. Based on the logical switch ID, go to the network port group and take a screenshot of the VMs: Home > Networking > portgroup name > VMs
3.2. Remove and disconnect the related objects:
3.2.1.
Home > Network and Security > Logical Switches > select each logical switch > Related Objects > Actions > Remove VM > select all the VMs in the list > Remove. Alternatively, disconnect the VM vNICs with PowerShell (the DisconnectNic function is taken from the PowerNSX module):

function DisconnectNic {
    param (
        $nic,
        $WaitTimeout = 90
    )
    # See the NSX API guide, 'Attach or Detach a Virtual Machine from a
    # Logical Switch', for how to construct the NIC id.
    $vmUuid = ($nic.parent | Get-View).config.instanceuuid
    $vnicUuid = "$vmUuid.$($nic.id.substring($nic.id.length-3))"
    # Construct the XML body
    $xmldoc = New-Object System.Xml.XmlDocument
    $xmlroot = $xmldoc.CreateElement("com.vmware.vshield.vsm.inventory.dto.VnicDto")
    $null = $xmldoc.AppendChild($xmlroot)
    Add-XmlElement -xmlRoot $xmlroot -xmlElementName "objectId" -xmlElementText $vnicUuid
    Add-XmlElement -xmlRoot $xmlroot -xmlElementName "vnicUuid" -xmlElementText $vnicUuid
    Add-XmlElement -xmlRoot $xmlroot -xmlElementName "portgroupId" -xmlElementText ""
    # Do the POST
    $body = $xmlroot.OuterXml
    $URI = "/api/2.0/vdn/virtualwires/vm/vnic"
    if ( $confirm ) {
        $message  = "Disconnecting $($nic.Parent.Name)'s network adapter from a logical switch will cause network connectivity loss."
        $question = "Proceed with disconnection?"
        $choices = New-Object Collections.ObjectModel.Collection[Management.Automation.Host.ChoiceDescription]
        $choices.Add((New-Object Management.Automation.Host.ChoiceDescription -ArgumentList '&Yes'))
        $choices.Add((New-Object Management.Automation.Host.ChoiceDescription -ArgumentList '&No'))
        $decision = $Host.UI.PromptForChoice($message, $question, $choices, 1)
    }
    else { $decision = 0 }
    if ($decision -eq 0) {
        Write-Progress -Activity "Processing" -Status "Disconnecting $vnicUuid from logical switch"
        $response = Invoke-NsxWebRequest -method "post" -uri $URI -body $body -connection $connection
        Write-Progress -Activity "Processing" -Status "Disconnecting $vnicUuid from logical switch" -Completed
        $job = [xml]$response.content
        $jobId = $job."com.vmware.vshield.vsm.vdn.dto.ui.ReconfigureVMTaskResultDto".jobId
        Wait-NsxGenericJob -Jobid $jobId -Connection $Connection -WaitTimeout $WaitTimeout -FailOnTimeout:$FailOnTimeout
    }
}

# vCenter connection and path to file
$VIServer = "YOURCLOUDVCR01.local"
$connection = Connect-NsxServer -vCenterServer $VIServer
$defaultNsxConnection = $connection
$defaultViServer = $connection.viConnection
# Point to the CSV file generated by the collection script above
$Import = Import-Csv C:\Support\Scripts\NSX\NSXReport.csv
# Set the virtual wire you are currently working on
$virtualwire = "virtualwire-01"
$pathToVMList = $Import | where { ($_.LS_ObjectID -eq $virtualwire) -and ($_.VirtualMachine -notlike "vse-*") }
# Disconnect each VM from the logical switch (100-second timeout)
foreach ($vm in $pathToVMList) {
    $VirtualMachineNic = Get-VM $vm.VirtualMachine | Get-NetworkAdapter | where { $_.NetworkName -eq $vm.BackingPortGroup }
    DisconnectNic -nic $VirtualMachineNic -WaitTimeout 100
}
3.2.2.
Home > Network and Security > NSX Edges > double-click the Edge (or Logical Router) > Manage > Settings > Interfaces (note the name of the logical switch; usually vNIC 1) > select it (radio button) > Disconnect > confirm "Yes" > wait until the pending job finishes. When disconnecting edges with High Availability configured, check that HA is not also configured on a logical switch (if the HA configuration uses vNic "Any", nothing needs to change). Note: if only one interface is connected, connect a temporary one first and then disconnect the original interface that is being migrated. After the migration, reconnect the original and delete the temporary one.
4. Move each logical switch from the old segment range to the new segment range. This API needs the virtualwire-id and rangeId as inputs, which can be taken from the get-NSXinfo report. The payload is empty (on success the status code of the request will be "200 OK"):
PUT https://10.10.10.40/api/2.0/vdn/virtualwires/virtualwire-100/segmentreconfig/<newRangeId>
In case of an error, try:
POST https://10.10.10.40/api/2.0/vdn/virtualwires/virtualwire-40/backing?action=remediate
5. ONLY for Logical Routers:
5.1. POST https://10.10.10.40/api/4.0/edges/{edge-id}?action=vdridreconfig&vdnRangeId=<newRangeId>
Output: 204 (No Content)
6. Go to Home > Network and Security > NSX Edges > double-click the Edge > Manage > Settings > Interfaces, and reconnect the interface that was disconnected (wait until the pending job finishes).
7. Redeploy the migrated Edge/Logical Router.
8. Check that the new configuration for each logical router was pushed to the hosts with net-vdr: "net-vdr -L -l edge-113 | more". The script below runs the command remotely on the ESXi hosts (C:\Support\plink.exe is needed; see http://www.enterprisedaddy.com/2018/04/how-to-execute-script-remotely-on-esxi-hosts/):

# Fill in the environment details
$root = "root"
$Passwd = "  add password here   "
$esxlist = " add servers here", "add servers here"
$edge = "edge-123" # "edge-100"
$cmd = "net-vdr -L -l $edge"
$plink = "echo y | C:\Support\plink.exe"
$remoteCommand = '"' + $cmd + '"'
$outResult = foreach ($esx in $esxlist) {
    Connect-VIServer -Server $esx -User $root -Password $Passwd > $null
    # Start the SSH service on the host if needed
    $sshstatus = Get-VMHostService -VMHost $esx | Where-Object { $PSItem.key -eq "tsm-ssh" }
    if ($sshstatus.Running -eq $False) {
        Get-VMHostService | Where-Object { $PSItem.key -eq "tsm-ssh" } | Start-VMHostService
    }
    # Execute the command on the host via plink
    $output = $plink + " -batch -ssh " + $root + "@" + $esx + " -pw " + $Passwd + " " + $remoteCommand
    $message = Invoke-Expression -Command $output
    [PSCustomObject]@{
        Name = $esx
        Vxlan = ($message | Select-String -Pattern "Vxlan:").ToString().split("Vxlan:")[-1]
    }
    Disconnect-VIServer -Server $esx -Confirm:$false
}
$outResult

9. Home > Network and Security > Logical Switches > select each logical switch > Related Objects > Actions > Add VM > search for the name of the VM > select the VM > click the right arrow > Next > select the appropriate network adapter > Next > Finish. Or reconnect with PowerShell:

foreach ($vm in $pathToVMList) {
    $VirtualMachineNic = Get-VM $vm.VirtualMachine | Get-NetworkAdapter | where { ($_.MacAddress -eq $vm.VirtualMachineNICmac) -and ($_.Name -eq $vm.VirtualMachineNICname) }
    Connect-NsxLogicalSwitch -NetworkAdapter $VirtualMachineNic -LogicalSwitch (Get-NsxLogicalSwitch -Name $vm.LS_Name) -WaitTimeout 100
}

10. After all Logical Switches and Routers are migrated, delete the old segment range (on success the status code of the request will be "200 OK"):
DELETE https://10.10.10.40/api/2.0/vdn/config/segments/<oldRangeId>
11.
Enable the integration between vCD and vCenter: log in to vCD > Manage & Monitor > vCenters > right-click the vCenter Server > Enable
12. Change the cluster DRS from "Manual" to "Fully Automated".

===========================================
Backout plan:
1. Log in to vCD (https://yourowncloud.com): Manage & Monitor > vCenters > right-click the vCenter Server > Disable
2. Log in to https://YOURCLOUDVCR01.local
3. Disconnect Edges, VMs and vNICs from the dvPortgroup (Logical Switch) by following the steps below:
3.1. Before any deletion, write down every logical switch connection (VMs, Edges):
3.1.1. Home > Network and Security > Logical Switches > take a screenshot of the Logical Switch ID, Segment ID and Name > click the logical switch > Related Objects > take a screenshot of the Edge tab and the VMs tab
3.1.2. Home > Network and Security > Edge Gateways > click the Edge (or Logical Router) > Manage > Settings > Interfaces (take a screenshot and write down the information inside the edit menu)
3.1.3. Based on the logical switch ID, go to the network port group and take a screenshot of the VMs: Home > Networking > portgroup name > VMs
3.2. Remove and disconnect the related objects:
3.2.1. Home > Network and Security > Logical Switches > select each logical switch > Related Objects > Actions > Remove VM > select all the VMs in the list > Remove. Alternatively, use the same DisconnectNic function (taken from the PowerNSX module) and disconnect script shown in step 3.2.1 of the migration section, setting $virtualwire to the virtual wire being rolled back (e.g. "virtualwire-60").
3.2.2. Home > Network and Security > NSX Edges > double-click the Edge (or Logical Router) > Manage > Settings > Interfaces (note the name of the logical switch, e.g. dvs.....; usually vNIC 1) > select it (radio button) > Disconnect > confirm "Yes" > wait until the pending job finishes. When disconnecting edges with High Availability configured, check that HA is not also configured on a logical switch (if the HA configuration uses vNic "Any", nothing needs to change).
4. Move each logical switch from the old segment range to the new segment range. This API needs the virtualwire-id and rangeId as inputs, which can be taken from the get-NSXinfo report. The payload is empty (on success the status code of the request will be "200 OK"):
PUT https://10.10.10.40/api/2.0/vdn/virtualwires/virtualwire-100/segmentreconfig/<newRangeId>
In case of an error, try:
POST https://10.10.10.40/api/2.0/vdn/virtualwires/virtualwire-40/backing?action=remediate
5. ONLY for Logical Routers:
5.1. POST https://10.10.10.40/api/4.0/edges/{edge-id}?action=vdridreconfig&vdnRangeId=<newRangeId>
6. Go to Home > Network and Security > NSX Edges > double-click the Edge > Manage > Settings > Interfaces, and reconnect the interface that was disconnected (wait until the pending job finishes).
7.
Redeploy the migrated Edge/Logical Router.
8. Check that the new configuration for each logical router was pushed to the hosts with "net-vdr -L -l edge-113 | more", using the same plink script shown in step 8 of the migration section (C:\Support\plink.exe is needed; see http://www.enterprisedaddy.com/2018/04/how-to-execute-script-remotely-on-esxi-hosts/).
9. Home > Network and Security > Logical Switches > select each logical switch > Related Objects > Actions > Add VM > search for the name of the VM > select the VM > click the right arrow > Next > select the appropriate network adapter > Next > Finish. Or reconnect with PowerShell:

foreach ($vm in $pathToVMList) {
    $VirtualMachineNic = Get-VM $vm.VirtualMachine | Get-NetworkAdapter | where { ($_.MacAddress -eq $vm.VirtualMachineNICmac) -and ($_.Name -eq $vm.VirtualMachineNICname) }
    Connect-NsxLogicalSwitch -NetworkAdapter $VirtualMachineNic -LogicalSwitch (Get-NsxLogicalSwitch -Name $vm.LS_Name) -WaitTimeout 100
}

10.
Enable the integration between vCD and vCenter: log in to vCD > Manage & Monitor > vCenters > right-click the vCenter Server > Enable
11. Change the cluster DRS from "Manual" to "Fully Automated".

===========================================
Impact:
During the migration of each logical switch there will be a short (5-10 minute) network disconnection for all attached Edges, Logical Routers and VMs. All related networks still in the current Segment ID Pool will lose connection to the migrated logical switch in the new Segment ID Pool. For VMs and Edges this is equivalent to unplugging the network cable from a physical server.

===========================================
Test details:
1. Log in to https://YOURCLOUDVCR01.local
2. Go to Network & Security.
3. Check the status of the logical switch (Logical Switches section).
4. Check the status of the Edges connected to the logical switch (Edge section).
5. Based on the information extracted before the change, check the status of the VMs connected to the logical switch.
6. Check that the options are in place after refreshing the vSphere Web Client.
7. Go to vCD (https://yourowncloud.com) and check the status of the Orgs.
8. Go to vCD and check the logs under System.
9. In vCD, check Stranded Items, Switches & Port Groups, Storage Policies, Datastores, Hosts, Resource Pools, vCenters, Network Pools, External Networks, Edge Gateways, Organization VDCs, Provider VDCs, Cloud Cells and Organizations.
10. Check whether there are errors/warnings at the cluster level for the tenant that was migrated.
11. Check each host in the tenant cluster for errors in /var/log/vmkernel.log (use Log Insight).
12. Manually move several VMs in the vCenter Server and check for warnings/errors in the tenant cluster.
13. Wait for DRS to automatically move some VMs from one host to another and check for warnings/errors in the tenant cluster.
14. Check again the status of the VMs and the Edges inside vCD.
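For anyone scripting the REST calls above outside Postman, the endpoint strings and IDs can be assembled programmatically. A sketch in Python (the manager address and example IDs are the placeholder values from the procedure; the helper names are mine, and no network call is made here):

```python
import xml.etree.ElementTree as ET

BASE = "https://10.10.10.40/api"  # NSX Manager address from the examples above

def segment_reconfig_url(virtualwire_id: str, new_range_id: str) -> str:
    # Step 4: move a logical switch into the new segment range (PUT, empty body)
    return f"{BASE}/2.0/vdn/virtualwires/{virtualwire_id}/segmentreconfig/{new_range_id}"

def vdr_reconfig_url(edge_id: str, new_range_id: str) -> str:
    # Step 5: logical routers only (POST)
    return f"{BASE}/4.0/edges/{edge_id}?action=vdridreconfig&vdnRangeId={new_range_id}"

def parse_range_id(segments_xml: str) -> str:
    # Extract the first <id> from the GET /2.0/vdn/config/segments response (step 2)
    return ET.fromstring(segments_xml).find("./segmentRange/id").text

def vnic_object_id(vm_instance_uuid: str, nic_device_id: str) -> str:
    # The NSX API vnic id is the VM instance UUID plus '.' plus the last
    # three characters of the device id, as built in DisconnectNic above
    return f"{vm_instance_uuid}.{nic_device_id[-3:]}"

print(segment_reconfig_url("virtualwire-100", "2"))
print(vnic_object_id("502e71fa-1a00-759b-e40f-ce778e915f16", "4000"))
```

Issue the PUT/POST against these URLs with basic auth, exactly as the Postman steps describe; a "200 OK" (or "204 No Content" for the vdridreconfig call) indicates success.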
Getting Started Guide for NSX Policy APIs
VMware NSX – DMZ Anywhere Detailed Design Guide

DMZ Anywhere takes DMZ security principles and decouples them from the traditional physical network and compute infrastructure to maximize security and visibility in a manner that is more scalable and efficient. With traditional designs, customers are forced to host separate hardware for the DMZ because of the dependency on physical security and hardware. With NSX this dependency is removed, as routing, switching and firewalling can be done at the kernel level or at the virtual machine vNIC level. This post addresses a common DMZ Anywhere design: hosting production and DMZ workloads on the same underlying hardware while making use of all the SDDC features NSX offers. It is meant to give a complete view of an SDDC and its requirements, with detailed physical and connectivity designs. Please note that to keep things simple, I am covering one site only in this design. This design can be used as a low-level design for an SDDC to save you time and effort.

Contents of the post:
- Network Virtualization Architecture
- NSX for vSphere Requirements
- Physical Design
- vCenter Design & Cluster Design
- VXLAN VTEP Design
- Production Cluster VTEP Design
- Transport Zone Design
- Logical Switch Design
- Distributed Switch Design
- Physical Production vDS Design
- Control Plane and Routing Design
- DMZ Anywhere Routing Design
- Edge Uplink Design
- Micro-Segmentation Design
- Deployment Flow and Implementation Guides
- Backup and Recovery

Network Virtualization Architecture
This is the high-level network logical design, with one cluster hosting the shared production workload, the NSX components and the DMZ workload. Don't be scared by looking at it; have a look at all the design diagrams and decisions to get the complete view.

NSX data plane: The data plane handles the workload data only. The data is carried over designated transport networks in the physical network.
NSX logical switches, distributed routing and the distributed firewall are also implemented in the data plane.

NSX control plane: The control plane handles network virtualization control messages. Control messages are used to set up networking attributes on NSX logical switch instances, and to configure and manage disaster recovery and distributed firewall components on each ESXi host. Control plane communication is carried over secure physical networks (VLANs) that are isolated from the transport networks used for the data plane.

NSX management plane: Network virtualization orchestration occurs in the management plane. In this layer, cloud management platforms such as vRealize Automation can request, consume and destroy networking resources for virtual workloads. The cloud management platform directs requests to vCenter Server to create and manage virtual machines, and to NSX Manager to consume networking resources.

NSX for vSphere Requirements
Below are the components and their compute requirements.
Server Component                   | Quantity | Location                | CPU | RAM    | Storage
Platform Services Controllers      | 2        | Production-Mgmt Cluster | 4   | 12 GB  | 290 GB
vCenter Server with Update Manager | 1        | Production-Mgmt Cluster | 4   | 16 GB  | 290 GB
NSX Manager                        | 1        | Production-Mgmt Cluster | 4   | 16 GB  | 60 GB
Controllers                        | 3        | Production-Mgmt Cluster | 4   | 4 GB   | 20 GB
Edge Gateway for Production        | 4        | Production-Mgmt Cluster | 2   | 2 GB   | 512 MB
Production DLR Control VM (A/S)    | 2        | Production-Mgmt Cluster | 1   | 512 MB | 512 MB
Edge Gateway for DMZ               | 2        | DMZ Cluster             | 2   | 2 GB   | 512 MB
DMZ DLR Control VM (A/S)           | 2        | DMZ Cluster             | 1   | 512 MB | 512 MB

IP Subnet Requirements
The VLANs below for management and VTEPs will be created on the physical L3 device in the data center:
10.20.10.0/24 – vCenter, NSX and Controllers
10.20.20.0/24 – Production & DMZ ESXi Mgmt
10.20.30.0/24 – Production & DMZ vMotion
10.20.40.0/24 – Production VTEP VLAN
The VXLAN subnets below will be created on NSX, and the NSX DLR will act as the gateway:
172.16.0.0/16 – Production VXLANs for Logical Switches
172.17.0.0/16 – DMZ VXLANs for Logical Switches

ESXi Host Requirements:
- Hardware compatible with the targeted vSphere version (check the VMware Compatibility Guide here).
- A minimum of 2 CPUs with 12 or more cores each (even 8 cores works, but 22-core CPUs are now available on the market).
- A minimum of 4 x 10 GbE NICs; if vSAN is also part of the design, a minimum of 6 x 10 GbE NICs (if possible, use 25 GbE or 40 GbE links).
- A minimum of 128 GB RAM in each host (nowadays hosts ship with up to 2.5 TB RAM).

Physical Design
Below is the physical ESXi host design. It is not mandatory to keep all Prod and DMZ hosts in separate racks; it depends on the requirements and network connectivity. A minimum of 7 hosts is needed to support the shared management, edge, DMZ and production workloads in a single cluster. Some of the major physical design considerations are:
- Configure redundant physical switches to enhance availability.
- Configure the ToR switches to provide all necessary VLANs via an 802.1Q trunk.
- NSX ECMP Edge devices establish Layer 3 routing adjacency with the first upstream Layer 3 device to provide equal-cost routing for management and workload traffic.
- The upstream Layer 3 devices terminate each VLAN and provide default gateway functionality.
- NSX does not need anything fancy at the network level; basic L2 or L3 functionality from any hardware vendor will do.
- Configure jumbo frames on all switch ports with a 9000 MTU, although 1600 is enough for NSX.
- The management vDS uplinks for both the Production and DMZ clusters can be connected to the same ToR switches, but use separate VLANs as shown in the requirements. Only the edge uplinks need to be separate for Production and DMZ, as they are what decides the packet flow.

vCenter Design & Cluster Design
It is recommended to have one vCenter Single Sign-On domain with two PSCs load-balanced with NSX or an external load balancer, with the vCenter Server using the load-balanced VIP of the PSCs. vCenter design considerations:
- For this design only one vCenter Server license is enough, but it is recommended to have separate vCenter Servers for the management and NSX workload clusters if you have separate clusters.
- One Single Sign-On domain with two PSCs load-balanced with the NSX load balancer or an external load balancer. The NSX load balancer configuration guide is here.
- A one-to-one mapping between NSX Manager instances and vCenter Server instances exists.
- If you are looking for the vCenter design and implementation steps, please click here for that post.
One cluster hosts management, edge, compute and DMZ workloads as well as the DMZ edges. This collapsed cluster hosts vCenter Server, vSphere Update Manager, NSX Manager and the NSX Controllers. It also runs the required NSX services to enable North-South routing between the SDDC tenant virtual machines and the external network, and East-West routing inside the SDDC. The same cluster hosts the compute workload for the SDDC tenants, along with the DMZ workload, the DMZ edges and the DLR Control VM.
VXLAN VTEP Design
The VXLAN network is used for Layer 2 logical switching across hosts, spanning multiple underlying Layer 3 domains. You configure VXLAN on a per-cluster basis, mapping each cluster that is to participate in NSX to a vSphere Distributed Switch (vDS). When you map a cluster to a distributed switch, each host in that cluster is enabled for logical switches. The settings chosen here are used when creating the VMkernel interfaces. If you need logical routing and switching, all clusters that have NSX VIBs installed on their hosts should also have VXLAN transport parameters configured. If you plan to deploy the distributed firewall only, you do not need to configure VXLAN transport parameters. When you configure VXLAN networking, you must provide a vSphere Distributed Switch, a VLAN ID, an MTU size, an IP addressing mechanism (DHCP or IP pool), and a NIC teaming policy. The MTU for each switch must be set to 1550 or higher; by default, it is set to 1600. If the vSphere Distributed Switch MTU is larger than the VXLAN MTU, it is not adjusted down. If it is set to a lower value, it is adjusted up to match the VXLAN MTU.
Design decisions for VTEPs:
- Configure jumbo frames (9000 MTU) on the network switches and on the VXLAN network as well.
- Use a minimum of two VTEPs per server to balance the VTEP load: some VMs' traffic will use one VTEP, other VMs' traffic the other.
- Use separate VLANs for the Production VTEP IP pool and the DMZ VTEP IP pool.
- The unicast replication model is sufficient for small and medium deployments. For large-scale deployments with multiple PODs, hybrid replication is recommended. No IGMP or other multicast configuration is needed in the physical network for the unicast replication model.
- Select "Route based on originating virtual port" (source ID) as the load balancing mechanism, which creates two or more VTEPs based on the number of physical uplinks on the vDS.
Production Cluster VTEP Design
As shown above, each host will have two VTEPs configured.
This is configured automatically based on the policy selected while configuring the VTEPs.
Transport Zone Design
A transport zone defines the scope of a VXLAN overlay network and can span one or more clusters within one vCenter Server domain. One or more transport zones can be configured in an NSX for vSphere solution. A transport zone is not meant to delineate a security boundary. One transport zone will be used for both the Production and DMZ workloads. This helps if you are planning for DR or a secondary site, as only one universal transport zone is supported: when moving to a secondary site you can have one universal transport zone and two universal DLRs, one for Production and one for DMZ.
Logical Switch Design
NSX logical switches create logically abstracted segments to which tenant virtual machines can connect. A single logical switch is mapped to a unique VXLAN segment ID and is distributed across the ESXi hypervisors within a transport zone. This logical switch configuration provides support for line-rate switching in the hypervisor without creating the constraints of VLAN sprawl or spanning-tree issues.

Logical Switch Names | DLR | Transport Zone
WEB Tier, APP Tier, DB Tier, Services Tier, and Transit Logical Switches | Production DLR | Local Transport Zone
DMZ WEB, DMZ Services, and DMZ Transit Logical Switches | DMZ DLR | Local Transport Zone

Distributed Switch Design
vSphere Distributed Switch supports several NIC teaming options. Load-based NIC teaming supports optimal use of available bandwidth and redundancy in case of a link failure. Use two 10 GbE connections for each server in combination with a pair of top-of-rack switches. 802.1Q network trunks can support a small number of VLANs.
For example: management, storage, VXLAN, vSphere Replication, and vSphere vMotion traffic. Configure the MTU size to at least 9000 bytes (jumbo frames) on the physical switch ports and distributed switch port groups that support the following traffic types:
- vSAN
- vMotion
- VXLAN
- vSphere Replication
- NFS
Two types of QoS configuration are supported in the physical switching infrastructure:
- Layer 2 QoS, also called class of service (CoS) marking.
- Layer 3 QoS, also called Differentiated Services Code Point (DSCP) marking.
A vSphere Distributed Switch supports both CoS and DSCP marking. Users can mark the traffic based on the traffic type or packet classification. When virtual machines are connected to VXLAN-based logical switches or networks, the QoS values from the internal packet headers are copied to the VXLAN-encapsulated header. This enables the external physical network to prioritize the traffic based on the tags in the outer header.
Physical Production vDS Design
The Production cluster will have three vDS. Detailed port group information is given below.
- vDS-MGMT-PROD: hosts management VLAN traffic, VTEP traffic, and vMotion traffic.
- vDS-PROD-EDGE: used for edge uplinks carrying production north-south traffic.
- vDS-DMZ-EDGE: used for DMZ edge uplinks carrying north-south traffic.
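The jumbo-frame guidance follows directly from the VXLAN encapsulation overhead: roughly 50 bytes of outer headers per frame, which is why 1550 is the documented floor, 1600 the NSX default, and why a 9000-byte underlay comfortably carries jumbo guest frames. A quick sketch of the arithmetic (the header sizes come from the standard VXLAN encapsulation, not from this document):

```python
# Per-packet VXLAN encapsulation overhead, relative to the guest's IP MTU:
INNER_ETHERNET = 14   # the guest's original Ethernet header travels inside
INNER_VLAN_TAG = 4    # optional 802.1Q tag on the inner frame
VXLAN_HEADER = 8      # VXLAN header carrying the 24-bit VNI
OUTER_UDP = 8         # outer UDP header
OUTER_IPV4 = 20       # outer IPv4 header

def required_underlay_mtu(guest_mtu=1500, inner_tagged=False):
    """Smallest physical-network MTU that avoids fragmenting VXLAN traffic."""
    overhead = INNER_ETHERNET + VXLAN_HEADER + OUTER_UDP + OUTER_IPV4
    if inner_tagged:
        overhead += INNER_VLAN_TAG
    return guest_mtu + overhead

print(required_underlay_mtu())      # 1550 -> matches the documented minimum
print(required_underlay_mtu(8900))  # jumbo guest frames still fit under 9000
```

Configuring 9000 end to end leaves ample headroom for the overlay overhead on every traffic type listed above.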
(If you don't have extra 10 GbE NICs, you can use 1 GbE for the edge port groups as well, but there will be a performance impact.)
Port group design decisions:

vDS-MGMT-PROD
Port Group Name | LB Policy | Uplinks | MTU
ESXi Mgmt | Route based on physical NIC load | vmnic0, vmnic1 | 1500 (default)
Management | Route based on physical NIC load | vmnic0, vmnic1 | 1500 (default)
vMotion | Route based on physical NIC load | vmnic0, vmnic1 | 9000
VTEP | Route based on SRC-ID | vmnic0, vmnic1 | 9000

vDS-PROD-EDGE
Port Group Name | LB Policy | Uplinks | MTU
ESG-Uplink-1-vlan-xx | Route based on originating virtual port | vmnic2 | 1500 (default)
ESG-Uplink-2-vlan-yy | Route based on originating virtual port | vmnic3 | 1500 (default)

vDS-DMZ-EDGE
The number of port groups in the DMZ depends on the next-hop Layer 3 device. If the next hop is a firewall, only one port group is needed, since firewalls typically operate active/passive, which is what we find most of the time. If the DMZ has a separate Layer 3 device rather than a firewall, you will have two uplinks, as in Production.
Port Group Name | LB Policy | Uplinks | MTU
ESG-Uplink-1-vlan-xx | Route based on originating virtual port | vmnic4 | 1500 (default)

Control Plane and Routing Design
The control plane decouples NSX for vSphere from the physical network and handles the broadcast, unknown unicast, and multicast (BUM) traffic within the logical switches. The control plane sits on top of the transport zone and is inherited by all logical switches created within it.
Distributed Logical Router: The distributed logical router (DLR) in NSX for vSphere performs routing operations in the virtualized space (between VMs, on VXLAN-backed port groups). A DLR is limited to 1,000 logical interfaces; if that limit is reached, you must deploy a new DLR.
Designated Instance: The designated instance is responsible for resolving ARP on a VLAN LIF. There is one designated instance per VLAN LIF.
The selection of an ESXi host as the designated instance is performed automatically by the NSX Controller cluster, and that information is pushed to all other ESXi hosts. Any ARP requests sent by the distributed logical router on the same subnet are handled by the same ESXi host. In case of an ESXi host failure, the controller selects a new ESXi host as the designated instance and makes that information available to the other ESXi hosts.
User World Agent: The User World Agent (UWA) is a TCP and SSL client that enables communication between the ESXi hosts and NSX Controller nodes, and the retrieval of information from NSX Manager through interaction with the message bus agent.
Edge Services Gateway: While the DLR provides VM-to-VM (east-west) routing, the NSX Edge Services Gateway provides north-south connectivity by peering with upstream top-of-rack switches, enabling tenants to access public networks.
Some important design considerations for the ESG and DLR:
- ESGs that provide ECMP services require the firewall to be disabled.
- Deploy a minimum of two NSX Edge Services Gateways (ESGs) in an ECMP configuration for north-south routing.
- Create one or more static routes on ECMP-enabled edges for subnets behind the UDLR and DLR, with a higher admin cost than the dynamically learned routes. Hint: if any new subnets are added behind the UDLR or DLR, the routes must be updated on the ECMP edges.
- Graceful Restart maintains the forwarding table, which in turn forwards packets to a down neighbor even after the BGP/OSPF timers have expired, causing loss of traffic. Fix: disable Graceful Restart on all ECMP edges.
- Note: Graceful Restart should be selected on the DLR Control VM, as it helps maintain the data path even when the Control VM is down. Note that the DLR Control VM is not in the data path, but the ESG does sit in the data path.
- If the active Logical Router Control VM and an ECMP edge reside on the same host and that host fails, a dead path appears in the routing table until the standby Logical Router Control VM starts its routing process and updates the routing tables. Fix: create anti-affinity rules and make sure you have enough hosts to tolerate failures of the active/passive Control VMs.

DMZ Anywhere Routing Design
Production design details:
- The DLR acts as the gateway for the Production web, app, and DB tier VXLANs.
- The DLR peers with the edge gateways using OSPF, normal area ID 10.
- On the DLR, IP .2 is used as the packet forwarding address and protocol address .3 is used for route peering with the edges.
- All four edges are configured with ECMP, so they all pass traffic to the upstream router and the downstream DLR.
- Two SVIs are configured on the ToR / nearest Layer 3 device; in this case both switches act as active, with vPC and HSRP configured across them.
- Each edge gateway has two uplinks, one toward each SVI, one from each VLAN.
- A static route is created on the edges for the subnets hosted on the DLR, with a higher admin distance. This protects against any issues with the Control VM.
DMZ design details:
- The DLR acts as the gateway for the DMZ web and services tier VXLANs.
- The DLR peers with the edge gateways using OSPF, normal area ID 20. (Note: all OSPF areas should connect to area 0.)
- On the DLR, IP .2 is used as the packet forwarding address and protocol address .3 is used for route peering with the edges.
- Both edges are configured with ECMP, so they pass traffic to the upstream firewall and the downstream DLR.
- As firewalls can only act active/passive, a single virtual IP is configured, so only one VLAN is used.
- Each edge gateway has one uplink connecting to the firewall.
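The floating static route described above works because route selection prefers the longest prefix first and then the lowest administrative distance, so the higher-distance static route only takes over when the OSPF route learned via the DLR Control VM disappears. A simplified sketch (the admin distance values are illustrative defaults, not taken from this design):

```python
import ipaddress

def best_route(rib, destination):
    """Pick a route: longest prefix match first, then lowest admin distance."""
    dst = ipaddress.ip_address(destination)
    candidates = [r for r in rib if dst in ipaddress.ip_network(r["prefix"])]
    if not candidates:
        return None
    return min(candidates,
               key=lambda r: (-ipaddress.ip_network(r["prefix"]).prefixlen,
                              r["admin_distance"]))

rib = [
    # OSPF route learned from the DLR control VM (typical distance 110)...
    {"prefix": "172.16.0.0/16", "next_hop": "dlr-ospf", "admin_distance": 110},
    # ...backed up by a floating static route with a worse distance (240).
    {"prefix": "172.16.0.0/16", "next_hop": "dlr-static", "admin_distance": 240},
]

print(best_route(rib, "172.16.10.5")["next_hop"])  # "dlr-ospf" while OSPF is up
rib = [r for r in rib if r["next_hop"] != "dlr-ospf"]  # control VM fails
print(best_route(rib, "172.16.10.5")["next_hop"])  # "dlr-static" takes over
```

This is why the static routes on the ECMP edges keep traffic flowing during a Control VM outage without ever overriding the dynamic routes in normal operation.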
Packet Walkthrough
Even though Production and DMZ are in the same transport zone, a packet has to exit the DMZ and route over the physical network to reach Production VMs, because the DLRs and edges are different for Production and DMZ.
Step 1: Outside users access the DMZ VM through the perimeter firewall and load balancer.
Step 2: The packet is sent from the DMZ VM to the DMZ DLR.
Step 3: It is then sent to the DMZ edge.
Step 4: The edge passes it to the firewall, which is its next hop.
Step 5: The DMZ firewall forwards it to the data center core and then to the ToR switch.
Step 6: The Layer 3 device peering with the Production edge forwards it to that edge, which forwards it to the DLR.
Step 7: The DLR, acting as the gateway for the Production VM, forwards the packet to the VM.
Step 8: The internal VM receives the packet from the DMZ server.
Edge Uplink Design
Design details:
- Each edge has two uplinks, one from each port group; each uplink port group has only one physical uplink configured, and no passive uplinks.
- Each uplink port group is tagged with a separate VLAN.
- Note: the DMZ has a similar design but with only one port group.
Micro-Segmentation Design
The NSX Distributed Firewall is used to protect all management applications attached to application virtual networks. To secure the SDDC, only other solutions in the SDDC and approved administration IPs can communicate directly with individual components. NSX micro-segmentation helps manage all firewall policies from a single pane of glass.
Deployment Flow and Implementation Guides
The NSX deployment flow is given below. If you are looking for a detailed VMware NSX installation and configuration guide, please follow this post of mine.
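The micro-segmentation model described above (only approved administration IPs may reach management components) behaves like a first-match rule list evaluated top to bottom, which is how the Distributed Firewall processes its rules. A toy sketch; the rule set and addresses are invented for illustration and are not NSX DFW syntax:

```python
import ipaddress

# First-match rule table, evaluated top to bottom like the NSX DFW.
# Addresses are illustrative: admins in 10.20.50.0/24, management in 10.20.10.0/24.
RULES = [
    {"src": "10.20.50.0/24", "dst": "10.20.10.0/24", "action": "allow"},  # admin IPs
    {"src": "10.20.10.0/24", "dst": "10.20.10.0/24", "action": "allow"},  # SDDC internal
    {"src": "0.0.0.0/0",     "dst": "10.20.10.0/24", "action": "drop"},   # default deny mgmt
]

def evaluate(src, dst, rules=RULES):
    """Return the action of the first rule matching (src, dst)."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    for rule in rules:
        if s in ipaddress.ip_network(rule["src"]) and d in ipaddress.ip_network(rule["dst"]):
            return rule["action"]
    return "allow"  # no rule matched; traffic is outside this policy's scope

print(evaluate("10.20.50.7", "10.20.10.5"))   # admin -> vCenter: allow
print(evaluate("172.16.10.9", "10.20.10.5"))  # workload VM -> vCenter: drop
```

Because rules are enforced at each VM's vNIC rather than at a perimeter choke point, the same single-pane policy applies no matter where in the clusters the workload runs.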
After presenting a session introducing the N-VDS (CNET1582US/BE) at VMworld this year, I was approached by some attendees who could not believe that the demo showing an example of NSX-T installation was real: they had struggled installing NSX and this looked too simple! NSX installation on ESXi can be relatively straightforward and fast. However, I admit there are quite a few concepts to understand, and quite a few ways of doing this incorrectly. This document describes the "happy path" for installing NSX-T on ESXi hosts. It also covers uninstallation, as well as some recovery steps should something go wrong. I hope this will encourage you to try NSX-T; you don't need much to get started.
This is the first iteration of the NSX-T Data Center and EUC Design Guide.
Authors: VMware NSBU and EUC Technical Product Management Teams
This document is targeted toward end-user computing, virtualization, network, and security architects interested in deploying VMware® NSX in a Horizon 7 virtual desktop infrastructure environment. Version 1.0 provides guidance around the following:
1. NSX Network Virtualization for Horizon
2. NSX Micro-segmentation for Horizon
3. NSX Load-balancing for Horizon
4. NSX and Horizon Example Architectures for Small, Medium, and Large Deployments
Feedback and Comments to the Authors and the NSX Solution Team are highly appreciated.
--The VMware NSX Solution Team
This is one document to learn everything about NSX-T native LB capabilities: the NSX-T LB ToI. This document highlights NSX-T LB capabilities and its latest NSX-T 3.1 enhancements. You can find another great document to learn everything about NSX-T LB Configuration and Management: the NSX-T LB Encyclopedia. That document goes over all the Configuration and Management questions you may have on NSX-T LB and much more! It lists all NSX-T LB capabilities (LB Deployment, Monitor, Server Pool, L4 VIP, L7-HTTP VIP, L7-HTTPS VIP, LB Rules, and Troubleshooting) with detailed examples for each. Document available on NSX-T LB Encyclopedia.
This is one document to learn everything about NSX-T LB Configuration and Management: the NSX-T LB Encyclopedia. This document goes over all the Configuration and Management questions you may have on NSX-T LB and much more! It lists all NSX-T LB capabilities (LB Deployment, Monitor, Server Pool, L4 VIP, L7-HTTP VIP, L7-HTTPS VIP, LB Rules, and Troubleshooting) with detailed examples for each. Note: Deck updated with NSX-T 3.1. You can find another great document to learn everything about NSX-T native LB capabilities: the NSX-T LB ToI. That document highlights NSX-T native LB capabilities and its latest NSX-T 3.1 enhancements. Document available on: NSX-T LB ToI.
Attached is the Service-defined Firewall Benchmark document from Coalfire. The Service-defined Firewall is the industry’s first purpose-built internal firewall. It delivers intrinsic stateful layer 7 firewall protection to prevent lateral movement and other attack vectors specific to the internal network of on-prem, hybrid, and multi-cloud environments. Coalfire’s examination and testing of the Service-defined Firewall solution utilized simulated real-world exploits. The methodology used simulated attacks that begin with the successful compromise of a vulnerable and exploitable machine within the network and then follow with attack propagation to other machines that share network access with the exploited VM.
Attached is the Service-defined Firewall Solution Architecture document from the networking and security group at VMware. The Service-defined Firewall is the industry’s first purpose-built internal firewall. It delivers intrinsic stateful layer 7 firewall protection to prevent lateral movement and other attack vectors specific to the internal network of on-prem, hybrid, and multi-cloud environments.
This is the VMware® NSX-T 2.4 & 2.5 Security Configuration Guide. This guide provides prescriptive guidance for customers on how to deploy and operate VMware® NSX-T in a secure manner. The guide is provided in an easy-to-consume spreadsheet format, with rich metadata (similar to the existing NSX for vSphere & VMware vSphere Security Configuration Guides) to allow for guideline classification and risk assessment. Feedback and Comments to the Authors and the NSX Solution Team can be posted as comments to this community post (note: users must log in to VMware Communities before posting a comment). Other related NSX Security Guides can be found @ https://communities.vmware.com/docs/DOC-37726
--The VMware NSX PM/TPM Team
NSX-T 2.4 enhanced its management plane with support for the NSX-T Manager cluster. The NSX-T Manager cluster offers a built-in VIP for high availability, but the use of an external load balancer offers the following benefits:
- Load spread across all NSX-T Managers
- NSX-T Managers can be in different subnets
- Faster failover
NSX-T supports a load balancing service, and the same NSX-T platform can be configured to load balance its own NSX-T Manager cluster. This document describes that configuration and is valid for NSX-T 2.4 and more recent releases (such as NSX-T 2.5 and NSX-T 3.0).
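The load-spreading and failover benefits can be pictured as a health-checked round robin over the manager nodes, which is essentially what an external load balancer's monitor does for you. A toy sketch; the node names and the health map are hypothetical, and a real deployment relies on the load balancer's own health monitor rather than application code:

```python
from itertools import count

MANAGERS = ["nsxmgr-01", "nsxmgr-02", "nsxmgr-03"]  # hypothetical node names

def make_picker(managers, healthy):
    """Round-robin over managers, skipping nodes the health monitor marked down."""
    counter = count()
    def pick():
        up = [m for m in managers if healthy[m]]
        if not up:
            raise RuntimeError("no NSX Manager nodes are healthy")
        return up[next(counter) % len(up)]
    return pick

health = {"nsxmgr-01": True, "nsxmgr-02": True, "nsxmgr-03": True}
pick = make_picker(MANAGERS, health)
print([pick() for _ in range(3)])  # requests spread across all three managers

health["nsxmgr-02"] = False        # a node fails; the monitor reacts
print([pick() for _ in range(2)])  # traffic keeps flowing to the survivors
```

By contrast, the built-in cluster VIP pins all traffic to a single active node until failover, which is why the external load balancer spreads load better and fails over faster.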
This document highlights NSX Multisite capabilities, including:
- Latest enhancements
- What is NSX Multisite
- NSX Multisite capabilities
- Recorded demos
For deeper information, we also offer the "NSX Federation Multi-Location Design Guide (Federation + Multisite)" here. Also FYI, VMworld 2019 had a public session presenting NSX Multisite: "NSX-T Design for Multi-Site [CNET1334BU]" (Recording + Deck).
Note 1: This ToI may be updated in the future, so always check that you have the latest version.
- NSX 4.0-4.1 Multisite 101 ToI version 1.0, done on 03/02/2023.
- NSX-T 3.2 Multisite 101 ToI version 1.1, done on 01/10/2023.
Note 2: The NSX Multisite solution is perfect for customers who want a smaller NSX management footprint (with only 3 NSX Manager VMs for all their locations) and accept a DR recovery procedure with a few more requirements and steps. For other use cases, NSX-T 3.0 introduced a second multi-location solution: NSX Federation. The NSX Federation solution is based on a new component, the NSX Global Manager cluster (GM). The GM offers central global configuration of multiple (local) NSX Manager clusters, each offering network and security services for a location. NSX Federation addresses specific site management, GDPR, and policy requirements, and offers simplified DR.
The following document covers logging options and configuration for NSX-T Data Center and its components. Logging options include:
- API
- cURL
- CLI
- vRealize Log Insight
- Splunk
Current version: NSX-T Data Center 2.3
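As an illustration of the API option, a client can add a remote syslog exporter by POSTing a small JSON body to the NSX-T node API. The sketch below only builds the payload; the field names and the endpoint path shown in the comment are assumptions from memory and should be verified against the NSX-T API reference for your version:

```python
import json

def syslog_exporter_payload(name, server, port=514, protocol="UDP", level="INFO"):
    """Build a JSON body for adding a syslog exporter to an NSX-T node.
    Field names are assumptions and must be checked against the API reference."""
    return {
        "exporter_name": name,
        "server": server,
        "port": port,
        "protocol": protocol,
        "level": level,
    }

body = syslog_exporter_payload("vrli", "10.20.10.50", protocol="TCP")
print(json.dumps(body, indent=2))
# A client would then POST this body to the node API, e.g. (hypothetical path):
# POST https://<nsx-manager>/api/v1/node/services/syslog/exporters
```

The same payload shape works for pointing logs at vRealize Log Insight or a Splunk syslog input; only the server address and protocol change.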
Here is a list of technical sessions and hands-on labs focused on NSX-T at VMworld 2018. See you all in Vegas next week!
Authors: Samuel Kommu & NSBU/CNABU with the Pivotal Customer Success Team
This is the first comprehensive reference design covering Pivotal Application Services (PAS) and Pivotal Container Services (PKS). This design guide is intended to serve two different domains of enterprise customers: first, to help and offer a baseline design for application developers and DevOps professionals; second, to provide infrastructure professionals with guidance on how to rapidly adopt cloud native applications into their infrastructure. Readers will appreciate the NSX-T Data Center Container Plugin (NCP), which enables programmatic provisioning of cloud native workloads with the necessary enterprise-level homogenized connectivity, security, monitoring, and consistency.
This design guide starts with a brief introduction to the cloud native world, PAS, and the challenges around networking and security in Chapters 1-4. This is followed, in Chapter 5, by a deep dive into NSX-T Data Center's integration with PAS using NCP, and how it helps address the networking and security challenges in an agile, automated fashion without compromising on security, while providing the same monitoring capabilities at the container level as are available with a virtual machine port. Chapters 6-8 focus on considerations around routing, NAT, LB, and security when deploying PAS on NSX-T Data Center. Chapters 9-11 focus on similar aspects on the PKS front. Finally, Chapter 12 ends with options on how to run PAS and PKS in a single NSX-T Data Center domain.
This design guide is written in alignment with the NSX-T Data Center reference architecture recommendations; however, certain design choices that relate to PAS and PKS workloads are differentiated where necessary. For general NSX-T Data Center design guidance, please refer to the NSX-T Reference Architecture: https://communities.vmware.com/docs/DOC-37591
Feedback and Comments to the Authors and the NSX Solution Team are highly appreciated. Happy reading!
--The VMware NSX Product Management