
Nawals's Blog


I created this script to automate configuring the vSwitch, uplink, and bulk virtual port groups across a cluster. When you need to create hundreds of port groups, just append additional lines under the section marked # This configures vSwitch0 and the VPG.

 

The intention of this script is to reduce manual work.

To run it, follow the procedure below:

  • Install PowerCLI 5.5/6.x on a jumpbox or the vCenter server
  • Open VMware PowerCLI
  • Go to the path where you kept the script
  • Run the script: .\AddStandardSwitch.PS1
  • It will prompt for credentials to connect to the vCenter server
  • You can now see the progress in PowerCLI

 

==============================================================================================================================

    <#
    .SYNOPSIS
        Host configuration for a cluster.

    .DESCRIPTION
        This script sets the configuration of vSwitch0 on the cluster's ESXi servers and creates the vSwitch0 data networking for migration.

    .NOTES
        Author: Nawal Singh

    .PARAMETER VMCluster
        Name of the cluster whose ESXi hosts will be configured.

    .EXAMPLE
        # Reminder: open a fresh PowerShell window and then run this script.
    #>

#Connection to vCenter

 

 

$mycred = Get-Credential

Connect-VIServer "VCSA.local.com" -Credential $mycred

 

Write-Progress -Activity "Configuring Hosts" -Status "Working" ;

 

 

# Change this setting for the Cluster that will be configured

$VMCluster = "CL_MGMT01"

 

# This configures vSwitch0 and the VPG .

Get-Cluster $VMCluster | Get-VMHost | New-VirtualSwitch -Name vSwitch0 -Nic vmnic2

Get-Cluster $VMCluster | Get-VMHost  | Get-VirtualSwitch -Name vSwitch0 | New-VirtualPortGroup -Name "PG_100" -vLanid 100

Get-Cluster $VMCluster | Get-VMHost  | Get-VirtualSwitch -Name vSwitch0 | New-VirtualPortGroup -Name "PG_200" -vLanid 200

Get-Cluster $VMCluster | Get-VMHost  | Get-VirtualSwitch -Name vSwitch0 | New-VirtualPortGroup -Name "PG_201" -vLanid 201

Get-Cluster $VMCluster | Get-VMHost  | Get-VirtualSwitch -Name vSwitch0 | New-VirtualPortGroup -Name "PG_MGMT_2010" -vLanid 2010

Get-Cluster $VMCluster | Get-VMHost  | Get-VirtualSwitch -Name vSwitch0 | New-VirtualPortGroup -Name "PG_vMotion_1010" -vLanid 1010

Get-Cluster $VMCluster | Get-VMHost  | Get-VirtualSwitch -Name vSwitch0 | New-VirtualPortGroup -Name "PG_FT_2023" -vLanid 2023

Get-Cluster $VMCluster | Get-VMHost  | Get-VirtualSwitch -Name vSwitch0 | New-VirtualPortGroup -Name "PG_Data_121" -vLanid 121

Get-Cluster $VMCluster | Get-VMHost  | Get-VirtualSwitch -Name vSwitch0 | New-VirtualPortGroup -Name "PG_OOB_Mgmt_105" -vLanid 105

Get-Cluster $VMCluster | Get-VMHost  | Get-VirtualSwitch -Name vSwitch0 | New-VirtualPortGroup -Name "PG_VMwareTEST_180" -vLanid 180

 

Disconnect-VIServer -Server *  -Force -Confirm:$false

===============================================================================================================================

Note: This has been tested in test, dev, and prod environments and worked as expected.
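When you need hundreds of port groups, a loop is easier than appending one line per VLAN. A minimal sketch, assuming the same cluster and vSwitch names as the script above and an active Connect-VIServer session; the VLAN range and naming pattern here are examples:

```powershell
# Bulk-create port groups named PG_<vlan> on vSwitch0 of every host in the cluster.
$VMCluster = "CL_MGMT01"
$vlanIds   = 100..199          # example range; replace with your own VLAN list

$switches = Get-Cluster $VMCluster | Get-VMHost | Get-VirtualSwitch -Name vSwitch0
foreach ($vlan in $vlanIds) {
    $switches | New-VirtualPortGroup -Name ("PG_{0}" -f $vlan) -VLanId $vlan
}
```

The same pattern works with a CSV of names and VLAN IDs if your port groups do not follow a uniform naming scheme.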

 


Using vSphere Auto Deploy

Posted by Nawals Mar 16, 2020

This post covers the following recipes:

  • Enabling vSphere's auto deploy service
  • Configuring a TFTP server with the files required to PXE boot servers
  • Configuring a DHCP server to work with auto deploy
  • Preparing the vSphere environment – creating a host profile, configuring the deploy rules, and activating them
  • Enabling stateless caching
  • Enabling stateful install

 

Introduction

 

In a large environment, deploying and upgrading ESXi hosts is an activity that requires a lot of planning and manual work. For instance, if you were to deploy a set of 50 ESXi hosts, you might need more than one engineer assigned to the task. The same would be the case if you were to upgrade or patch ESXi hosts: the upgrade or patching operation must be performed on each host. Of course, vSphere Update Manager can be configured to schedule, stage, and remediate hosts, but the remediation process still consumes a considerable amount of time, depending on the type and size of the patch. VMware has found a way to reduce the amount of manual work and time required for deploying, patching, and upgrading ESXi hosts: vSphere auto deploy. In this chapter, you will learn not only to design, activate, and configure vSphere auto deploy, but also to provision ESXi hosts using it.

 

vSphere auto deploy architecture

 

vSphere auto deploy is a web server component that, once configured, can be used to quickly provision a large number of ESXi hosts without the need to run the ESXi installation image on each physical machine. It can also be used to upgrade or patch ESXi hosts without the need for vSphere Update Manager. Now, how is this achieved? vSphere auto deploy is a centralized web server component that lets you define rules governing how the ESXi servers are provisioned. It cannot, however, work on its own. A few other components play a supporting role for auto deploy to do its magic:

  • The auto deploy service
  • A DHCP server with scope options 66 and 67 configured
  • A TFTP server hosting files for a PXE boot
  • Servers with PXE (network boot) enabled in their BIOS
  • Host profiles configured at the vCenter server

SC1.png

 

The ESXi host first begins a network boot by requesting an IP address from the DHCP server. The DHCP server responds with an IP address and the DHCP scope options providing the details of the TFTP server. The ESXi host then loads the PXE boot image from the TFTP server to bootstrap the machine, and subsequently sends an HTTP boot request to the auto deploy server to load an ESXi image into the host's memory. The image is chosen based on the rules created at the auto deploy server. The workflow is shown here:

 

SC2.png

 

Enabling vSphere auto deploy service

Auto deploy services, by default, are left disabled and need to be enabled explicitly. Understandably so, unless the environment warrants having specific features, they are left disabled to keep the resource consumption optimal. There are two specific services that need to be enabled to ensure that auto deploy functions as desired. In this recipe, we shall walk through the process of enabling the auto deploy service and image builder service on the vCenter Server Appliance.

 

The following procedure steps through enabling the appropriate services to activate Auto Deploy:

1. Log in to vCenter Server Appliance.

2. Navigate to Home | Administration | System Configuration as illustrated in the following screenshot:

SC3.png

 

3. Click on Nodes, select the intended vCenter instance, and click Related Objects as shown here:

SC4.png

 

4. Highlight Auto Deploy service and click on Start.

5. Click on Settings and set the startup type to Automatic as shown here:

SC5.png

 

6. Highlight ImageBuilder Service and click on Start.

7. Click on Settings and set the startup type to Automatic.

8. Confirm that services are started from the Recent Tasks pane:

 

How it works...

Auto deploy services are, by default, left to start manually although integrated with vCSA. Hence, if the environment warrants having the feature, the administrator has to enable the service and set it to start automatically with vCenter.

 

Configuring TFTP server with the files required to PXE boot

 

Trivial File Transfer Protocol (TFTP) enables a client to retrieve a file from, or transmit a file to, a remote host. This workflow is leveraged in the auto deploy process; neither the protocol nor the workflow is proprietary to VMware. In this recipe, we shall use an open-source utility as the TFTP server; other variants can be used for the same purpose.

 

The following procedure would step you through configuring the TFTP server to be PXE boot ready:

1. Log in to vCenter Server Appliance.

2. Navigate to Home | vCenter | Configure | Auto Deploy

3. Click on Download TFTP Boot Zip as depicted here:

SC7.png

 

4. Extract the files to the TFTP server folder (TFTP-Root) as demonstrated in the following screenshot:

SC8.png

 

5. Start the TFTP service as shown here:

 

SC9.png

 

How it works...

TFTP is primarily used to exchange configuration or boot files between machines in an environment. It is relatively simple and provides no authentication mechanism. The TFTP server component can be installed and configured on a Windows or Linux machine. In this recipe, we have leveraged a third-party TFTP server and configured it to serve the relevant PXE files on demand. The TFTP server, with the specific PXE file downloaded from vCenter, enables the host to send an HTTP boot request to the auto deploy server.

 

Configuring the DHCP server to work with auto deploy

Once the auto deploy services and TFTP server are enabled, the next most important step in the process is to set up the DHCP server. The DHCP server responds to servers in its scope with an IP address and specifically redirects the server to the intended TFTP server and boot filename. In this recipe, we shall walk through setting up a Windows-based DHCP server with the TFTP server details along with the PXE file that needs to be streamed to the soon-to-be ESXi host. Similar steps can be repeated on a Unix variant of DHCP as well.

 

Getting ready

Ensure that the TFTP server has been set up as per the previous recipe. In addition, the steps in the following recipe would require access to the DHCP server that is leveraged in the environment with the appropriate privileges, to configure the DHCP scope options.

 

How to do it...

The following procedure would step through the process of configuring DHCP to enable PXE boot:

1. Log in to the server with the DHCP service enabled.

2. Run dhcpmgmt.msc.

3. Traverse to the scope created for the ESXi IP range intended for PXE boot.

4. Right click on Scope Options and click on Configure Options... as shown in the following screenshot:

SC10.png

 

5. Set the value for scope option 066 Boot Server Host Name to that of the TFTP server.

6. Set the value for scope option 067 Bootfile Name to the PXE file undionly.kpxe.vmw-hardwired as demonstrated here:

SC11.png
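On a Windows DHCP server, the same scope options can also be set from PowerShell instead of the MMC. A sketch assuming the DhcpServer module (Windows Server 2012 or later); the scope ID and TFTP server name are placeholders for your environment:

```powershell
# Set PXE-boot scope options 66 (TFTP server) and 67 (boot file) on a scope.
Import-Module DhcpServer

$scope = "192.168.10.0"        # example scope ID
Set-DhcpServerv4OptionValue -ScopeId $scope -OptionId 66 -Value "tftp01.example.com"
Set-DhcpServerv4OptionValue -ScopeId $scope -OptionId 67 -Value "undionly.kpxe.vmw-hardwired"
```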

 

How it works...

 

When a machine chosen to be provisioned with ESXi is powered on, it performs a PXE boot by fetching an IP address from the DHCP server. DHCP scope options 66 and 67 direct the server to contact the TFTP server and load the bootable PXE image and an accompanying configuration file. There are three different ways in which you can configure the DHCP server for the auto deployed hosts:

1. Create a DHCP scope for the subnet to which the ESXi hosts will be connected, and configure scope options 66 and 67.

2. If there is already an existing DHCP scope for the subnet, edit scope options 66 and 67 accordingly.

3. Create a reservation under an existing or a newly created DHCP scope using the MAC address of the ESXi host.

Large-scale deployments avoid creating reservations based on MAC addresses because that adds a lot of manual work; using a DHCP scope without reservations is much preferred.

 

Preparing vSphere environment – create host profile, configure the deploy rules and activate them

Thus far, we have ensured that the auto deploy services are enabled and that the environmental setup is complete in terms of DHCP and TFTP configuration. Next, we need to prepare the vSphere environment to associate the appropriate ESXi image with the servers that are booting on the network. In this recipe, we will walk through the final steps of configuring auto deploy: creating a software depot with the correct image, then creating auto deploy rules and activating them.

 

How to do it...

The following procedure prepares the vSphere environment to work with auto deploy:

1. Log in to vCenter Server.

2. Navigate to Home | Host Profiles as shown here:

3. Click on Extract Profile from host as shown:

 

4. Choose a reference host based on which new hosts can be deployed and click on Finish:

5. Navigate to Home | Auto Deploy.

6. Click on Software Depots | Import Software Depot, provide a suitable name and browse to the downloaded offline bundle as shown here:

7. Click on the Deploy Rules tab and then click on New Deploy Rule.

8. Provide a name for the rule and choose the pattern that should be used to identify the target host. In this example we have chosen the IP range defined in the DHCP scope; multiple patterns can also be nested for further validation:

9. Choose an image profile from the list available in the software depot as shown here:

10. (Optional) Choose a host profile as shown here:

11. (Optional) In the Select host location screen, select the inventory and click on OK to complete:

12. Click on Activate/Deactivate rules.

13. Choose the newly created rule and click on Activate as shown here:

14. Confirm that the rule is Active as shown here:
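The depot and rule workflow above can also be scripted with PowerCLI's Auto Deploy cmdlets. A sketch, assuming an active Connect-VIServer session; the depot path, image selection, host profile name, rule name, and IP range are examples:

```powershell
# Add an offline bundle as a software depot, build a deploy rule, and activate it.
Add-EsxSoftwareDepot "C:\Depot\ESXi-offline-bundle.zip"

$image       = Get-EsxImageProfile | Select-Object -First 1   # pick your image profile
$hostProfile = Get-VMHostProfile -Name "Reference-Host-Profile"

$rule = New-DeployRule -Name "MgmtClusterRule" -Item $image, $hostProfile `
                       -Pattern "ipv4=192.168.10.50-192.168.10.60"
Add-DeployRule $rule    # adds the rule to the active rule set
```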

How it works...

To prepare the vSphere environment for auto deploy, we perform the following steps:

1. Create a host profile from a reference host. A host profile saves the effort of replicating much of the commonly used configuration in the environment, and the feature has a natural cohesion with auto deploy.

2. Create a software depot to store image profiles, typically more than one depending on the environment needs.

3. Create deploy rules to match specific hosts to specific images.

 

In a complex and large infrastructure, there could be heterogeneous versions of products in terms of software, hardware, drivers, and so on. Hence, the auto deploy feature enables the creation of multiple image profiles and a set of rules through which targeted deployments can be performed. In addition, auto deploy use cases stretch beyond typical deployments to managing the life cycle of the hosts, accommodating updates/upgrades as well.

There are two primary modes of auto deploy:

Stateless caching: On every reboot, the host continues to use the vSphere auto deploy infrastructure to retrieve its image. However, if the auto deploy server is inaccessible, it falls back to a cached image.

Stateful install: In this mode, an installation is performed on the disk and subsequent reboots boot off the disk. This is controlled through the host profile setting System Image Cache Configuration.

 

Enabling stateless caching

In continuation of the previous recipe, an administrator can control whether the ESXi host boots from auto deploy on every reboot, or performs an installation through auto deploy with subsequent reboots loading the image from disk. Toggling between stateless and stateful is done by amending the host profile settings. In this recipe, we shall walk through the steps to enable stateless caching.

How to do it...

1. Log in to vCenter Server.

2. Navigate to Home | Host Profiles.

3. Select the host profile and click on Edit host profile.

4. Expand Advanced Configuration Settings and navigate to System Image Cache Configuration as shown here:

5. Select Enable stateless caching on the host or Enable stateless caching to a USB disk on the host.

6. Provide input for Arguments for first disk or leave it at the default; this is the order of preference of the disks to be used for caching. By default, it will detect and overwrite an existing ESXi installation; if the user indicates a specific disk make, model, or driver, the matching disk is chosen for caching:

7. For the option Check to overwrite any VMFS volumes on the selected disk, leave it unchecked. This would ensure that if there were any VMs on the local VMFS volume, they are retained.

8. For the option Check to ignore any SSD devices connected to the host, leave it unchecked. You may need to enable this setting only if you have SSD for specific use cases for the local SSD, such as using vFlash Read Cache (vFRC).

 

How it works...

The host profile directs the installation mode in an auto deploy-based deployment. In data centers where blade architectures are prevalent, local storage is rather limited and data is more often stored on external storage, with the exception of hyperconverged infrastructures. The stateless caching feature specifically aids in such scenarios by limiting the dependency on local storage. In addition, users may also choose the option to enable stateless caching to a USB disk.

 

Enabling stateful install

While the stateless caching feature is predominantly built to tackle disk-specific limitations on server hardware, the stateful install mode is closer to a legacy installation through the PXE mechanism. Apart from the installation procedure being built to scale, it mimics the attributes of a standard manual installation. In this recipe, we shall walk through the steps to enable stateful install.

 

How to do it...

1. Log in to vCenter Server.

2. Navigate to Home | Host Profiles.

3. Select the host profile and click on Edit host profile.

4. Expand Advanced Configuration Settings and navigate to System Image Cache Configuration as shown here.

5. Click on Enable stateful install on the host or Enable stateful install to a USB disk on the host:

6. Provide input for Arguments for first disk or leave it at the default; this is the order of preference of the disks to be used for installation. The administrator may also indicate a specific disk make, model, or driver; the matching disk is then chosen for installation.

7. For the option Check to overwrite any VMFS volumes on the selected disk, leave it unchecked. This would ensure that if there were any VMs on local VMFS volume, they are retained.

8. For the option Check to ignore any SSD devices connected to the host, leave it unchecked; you may need to enable this setting only if you have SSD for specific use cases for the local SSD such as using vFRC.

 


Issue: A B-series blade fails to power on in UCSM with the following critical fault:

Severity: Critical

Code: F0868

Last Transition Time: 2020-02-25T03:34:17Z

ID: 11500997

Status: None

Description: Motherboard of server 1/7 (service profile: org-root/ls-IN-ESX-01) power: failed

Affected Object: sys/chassis-1/blade-7/board

Name: Compute Board Power Fail

Cause: Power Problem

Type: Environmental

Acknowledged: No

Occurrences: 2

Creation Time: 2020-02-25T03:33:07Z

Original Severity: Critical

Previous Severity: Critical

Highest Severity: Critical

 

The following entries are seen in the server's SEL log:

671 | 02/25/2020 04:23:06 EST | CIMC | Module/Board SUPER_CAP_FLT #0x8f | Predictive Failure deasserted | Asserted

672 | 02/25/2020 04:23:40 EST | CIMC | Processor WILL_BOOT_FAULT #0x90 | Predictive Failure deasserted | Asserted

673 | 02/25/2020 04:23:55 EST | CIMC | Platform alert POWER_ON_FAIL #0x8c | Predictive Failure asserted | Asserted

674 | 02/25/2020 04:24:06 EST | CIMC | Platform alert POWER_ON_FAIL #0x8c | Predictive Failure deasserted | Asserted

675 | 02/25/2020 04:24:58 EST | CIMC | Platform alert POWER_ON_FAIL #0x8c | Predictive Failure asserted | Asserted

The following entries are seen in the server's OBFL log:

Feb 25 04:23:55 EST:4.1(30b):IPMI:1708: Pilot3SrvPower.c:483:  -> Power State On: LPC RESET is     IN RESET; powerOnLPCOff[2b]

Feb 25 04:23:55 EST:4.1(30b):IPMI:1708: Pilot3SrvPower.c:483:  -> Power State On: LPC RESET is     IN RESET; powerOnLPCOff[2c]

Feb 25 04:23:55 EST:4.1(30b):IPMI:1708: Pilot3SrvPower.c:483:  -> Power State On: LPC RESET is     IN RESET; powerOnLPCOff[2d]

Feb 25 04:23:55 EST:4.1(30b):IPMI:1708: Pilot3SrvPower.c:483:  -> Power State On: LPC RESET is     IN RESET; powerOnLPCOff[2e]

Feb 25 04:23:55 EST:4.1(30b):IPMI:1708: Pilot3SrvPower.c:483:  -> Power State On: LPC RESET is     IN RESET; powerOnLPCOff[2f]

Feb 25 04:23:55 EST:4.1(30b):IPMI:1708: Pilot3SrvPower.c:483:  -> Power State On: LPC RESET is     IN RESET; powerOnLPCOff[30]

Feb 25 04:23:55 EST:4.1(30b):IPMI:1708: Pilot3SrvPower.c:483:  -> Power State On: LPC RESET is     IN RESET; powerOnLPCOff[31]

Feb 25 04:23:55 EST:4.1(30b):IPMI:1708: Pilot3SrvPower.c:483:  -> Power State On: LPC RESET is     IN RESET; powerOnLPCOff[32]

SEL log location:

Chassis log bundle -> CIMCx_TechSupport.tar.gz -> var -> log -> sel

OBFL log location:

Chassis log bundle -> CIMCx_TechSupport.tar.gz -> obfl

 

Cause: This fault is caused by a hardware failure on the motherboard.

 

Solution:

Attempt the following troubleshooting steps:

1. Reset the CIMC.

2. Physically reseat the server in the chassis.

Note: If the above actions do not resolve the issue and the blade still does not power on, there is likely a hardware failure.

3. Contact Cisco TAC for part replacement.

 


Issue: Failed to retrieve version information from the remote Platform Services Controller.

 

  • Migrating a vCenter Server 6.0 installed on Windows Server 2008 R2 to vCenter Server Appliance 6.7U2 using an external PSC fails with the error: failed to retrieve version information from remote platform service controller.
  • In the migration-assistant.log, you see entries similar to:

 

2020-02-17 05:11:35.759Z| migration-assistant-13843380| I: Entering function: ValidateExportDir

2020-02-17 05:11:35.759Z| migration-assistant-13843380| I: DirectoryPermissionCheck: Will check export dir: "C:\Users\Administrator\AppData\Local\VMware\Migration-Assistant\"

2020-02-17 05:11:35.760Z| migration-assistant-13843380| I: GetPathPermissions: Longest existing path for "C:\Users\Administrator\AppData\Local\VMware\Migration-Assistant\" is "C:\Users\Administrator\AppData\Local\VMware\Migration-Assistant\"

2020-02-17 05:11:35.761Z| migration-assistant-13843380| I: GetPathPermissions: Dir  perms for "C:\Users\Administrator\AppData\Local\VMware\Migration-Assistant\": MigrationAssistant R1 W1 E1 D1 Dc1 ACL1 service R69716481 W16843009 E16843009 D16843009 Dc188 ACL48957392 anyone R1228 W48562176 E2753176 D0 Dc7 ACL69009408

2020-02-17 05:11:35.761Z| migration-assistant-13843380| I: GetPathPermissions: Longest existing path for "C:\Users\Administrator\AppData\Local\VMware\Migration-Assistant\" is "C:\Users\Administrator\AppData\Local\VMware\Migration-Assistant\"

2020-02-17 05:11:35.761Z| migration-assistant-13843380| I: GetPathPermissions: File perms for "C:\Users\Administrator\AppData\Local\VMware\Migration-Assistant\": MigrationAssistant R1 W1 E1 D1 Dc1 ACL1 service R69716481 W16843009 E16843009 D16843009 Dc188 ACL48956560 anyone R1228 W48562176 E2753176 D0 Dc7 ACL69009408

2020-02-17 05:11:35.761Z| migration-assistant-13843380| I: ValidateExportDir: Required core space: 5260; core, events and tasks space: 10174; All space: 13452; FreeSpace: 28471;

2020-02-17 05:11:35.761Z| migration-assistant-13843380| I: Leaving function: ValidateExportDir

2020-02-17 05:11:35.761Z| migration-assistant-13843380| I: ConnectToLdapServer: Connecting to ldap server [PSC01.internal.local] on port [636]

2020-02-17 05:11:35.764Z| migration-assistant-13843380| E: ConnectToLdapServer: Failed to connect to the LDAP server. Error code: 81

2020-02-17 05:11:35.764Z| migration-assistant-13843380| W: RetrievePSCMajorMinorVersion: Failed to connect to server [PSC01.internal.local]] to validate PSC version using Platform Services Conntroller LDAPs port [636].

2020-02-17 05:11:35.764Z| migration-assistant-13843380| I: ConnectToLdapServer: Connecting to ldap server [PSC01.internal.local]] on port [11712]

2020-02-17 05:11:56.765Z| migration-assistant-13843380| E: ConnectToLdapServer: Failed to connect to the LDAP server. Error code: 81

2020-02-17 05:11:56.765Z| migration-assistant-13843380| E: RetrievePSCMajorMinorVersion: Failed to connect to server [PSC01.internal.local] on legacy LDAPs port [11712].

PSC.png

Cause: Transport Layer Security (TLS) 1.2 is the default protocol for Platform Services Controller 6.7, but TLS 1.2 is not enabled by default on Windows Server 2008 R2.

 

Resolution:

Enable TLS 1.2 on Windows Server 2008 R2.
Note: This procedure modifies the Windows registry. Before making any registry modifications, ensure that you have a current and valid backup of the registry and the virtual machine.

  1. Navigate to the registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols
  2. Create a new key and name it TLS 1.2.
  3. Create two new keys within the TLS 1.2 key, and name them Client and Server.
  4. Under the Client key, create two DWORD (32-bit) values, and name them DisabledByDefault and Enabled.
  5. Under the Server key, create two DWORD (32-bit) values, and name them DisabledByDefault and Enabled.
  6. Ensure that the Value field is set to 0 and that the Base is Hexadecimal for DisabledByDefault.
  7. Ensure that the Value field is set to 1 and that the Base is Hexadecimal for Enabled.
  8. Reboot the Windows Server 2008 R2 machine.
  9. Re-initiate the Migration Assistant; the migration now proceeds as expected.
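The registry steps above can also be scripted from an elevated PowerShell prompt. A sketch; take a registry backup first, as noted above:

```powershell
# Create the SCHANNEL TLS 1.2 Client/Server keys and enable the protocol.
$base = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2"

foreach ($sub in "Client", "Server") {
    $key = Join-Path $base $sub
    New-Item -Path $key -Force | Out-Null
    New-ItemProperty -Path $key -Name DisabledByDefault -Value 0 -PropertyType DWord -Force | Out-Null
    New-ItemProperty -Path $key -Name Enabled           -Value 1 -PropertyType DWord -Force | Out-Null
}

Restart-Computer    # a reboot is required for the change to take effect
```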

Here is a brief description of how the Converge Tool works.

 

The Converge Tool can migrate an external-deployment PSC to an embedded-deployment PSC; it also has the ability to decommission the external PSCs after the migration.

 

Note: Your vCenter components must be upgraded to the 6.7 Update 1 appliance.

 

diagram.png

 

 

Prerequisites

 

1. Disable VCHA if it is enabled on 6.5.

2. Reduce/disable the DRS automation level.

3. Take VMware snapshots of all vCenter components; if possible, take a VCSA backup as well.

4. Remove the secondary NIC if one is assigned, before the upgrade.
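Prerequisites 2 and 3 can be scripted with PowerCLI. A sketch, with example cluster and VM name patterns; adjust them to your environment:

```powershell
# Lower DRS automation and snapshot the vCenter components before converging.
Connect-VIServer "VCSA.local.com"

Set-Cluster -Cluster "CL_MGMT01" -DrsAutomationLevel Manual -Confirm:$false
Get-VM -Name "VCSA*", "PSC*" | New-Snapshot -Name "Pre-Converge" -Quiesce:$false
```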

 

 

Using the Tool:

 

Step 1 – The VCSA 6.7 Update 1 ISO includes the vcsa-converge-cli tool, so mount the ISO on any Windows/Linux machine.

 

   diagram1.png

Here, the converge folder contains the converge JSON template and the decommission folder contains the decommission JSON template.

 

Step 2- Expand the vcsa-converge-cli directory and go to templates. Open the converge directory and copy the converge.json file to your local machine.

 

Step 3 – Open the converge JSON file in your favorite editor; it looks as below:

  diagram2.png

 

Step 4 – Fill out the fields as below and save the file.

  1. Information about the Managing vCenter or ESXi Host.
  2. Information about the vCenter Server you wish to Converge to Embedded.
  3. (Optional) Active Directory Information if you wish to join the Embedded vCenter to AD.
  4. (Optional) Mention any other external PSCs you have.

Step 5 – From a CMD prompt, run: vcsa-converge-cli\win32> vcsa-util.exe converge --no-ssl-certificate --verbose <path to your converge.json>

diagram3.png

 

Step 6 – The VCSA is converged to an embedded PSC successfully.

diagram4.png

Verify that the vCenter now has an embedded Platform Services Controller from the VAMI page.

Note: Before we head on to decommissioning, please make sure there are no VCSAs associated with the external PSCs that you are going to decommission.

Decommissioning Steps:

Step 1: Copy the decommission JSON file to the local machine and fill out the required fields:

  1. Information about the Managing vCenter or ESXi Host of the External PSC.
  2. Information about the Platform Services Controller you wish to Decommission.
  3. Information about the Managing vCenter or ESXi Host of an Embedded vCenter in the SSO Domain.
  4. Information about the Embedded vCenter in the SSO Domain.

 

Here is the screenshot for reference.

diagram5.png

 

Repeat the above steps for the second PSC if you have one.

 

This completes the VCSA convergence.

Symptoms: In vCenter, you see the following message being reported for one or more hosts.

 

Problem: This message can appear if the host has not synced with the vCenter database in over 5 minutes, or if vCenter Server or an ESXi host has recently been restarted.

 

Solution:

In the Inventory section, click Home > Networking.

Select the vDS displaying the alert and then click the Hosts tab.

Right-click the host displaying the Out of sync warning and then click Rectify vNetwork Distributed Switch Host.

ESXTOP is a VMware command that I use pretty frequently when troubleshooting a virtual workload and/or ESXi host issue.

 

ESXTOP really can be one of the best and under-utilized tools in the environment to gain insight into your workload in real time.

 

I have attached the ESXTOP commands that will be very helpful.

Facts: One of our customers uses a vBlock 540 with Windows vCenter 6.0 U2 with an embedded PSC and an external VMware Update Manager. The requirement was to upgrade the existing Windows vCenter 6.0 U2 to VCSA 6.7 U1, and the migration to VCSA 6.7 U1 completed successfully. Post-migration, we observed that VMware Update Manager was not working, and we got the error below when going to the VMware Update tab. We restarted the vCenter services and the Update Manager services; however, there was no luck.

 

Problem: VMware Update Manager is not working, and the error "Class not found 'com.vmware.vum.views.compliance.ComplianceSplitView' in module /vsphere-client/updatemanager-ui/UpdateManager.swf" appears when clicking the VMware Update tab.

 

Error in the log file:

 

Error attempting Vcintegrity Export file does not exist or is corrupted, abort import

Resolution

Please check vcIntegrity migration logs for details.

 

Solution:

Re-register VUM with vCenter using the steps below to fix it. First, back up the existing configuration files:

cd /lib/vmware-updatemgr/bin

mkdir backup

cp -p extension.xml backup/

cp -p vci-integrity.xml backup/

cp -p jetty-vum* backup/

 

Now go ahead and finish the failed registration with below command:

 

/lib/vmware-updatemgr/bin/vmware-vciInstallUtils -C /lib/vmware-updatemgr/bin/ -L /var/log/vmware/vmware-updatemgr/ -I /lib/vmware-updatemgr/bin/ -v <your vCenter FQDN> -p 80 -U administrator@<your SSO domain> -P <password> -S /lib/vmware-updatemgr/bin/extension.xml -O extupdate

 

chown updatemgr:updatemgr vci-integrity.xml

 

service-control --start vmware-updatemgr

 

After following the above steps, VMware Update Manager works fine and all options are visible in the VMware Update Manager tab.

One of our customer environments uses a vBlock 540 with Windows vCenter 6.0 with an embedded PSC. We migrated the Windows vCenter 6.0 to VCSA 6.7 and came across an interesting error every time we log in to the vSphere Web Client: "An internal error has occurred - Error#1069".

 

Problem: The vSphere Web Client displays "An internal error has occurred - Error#1069" after migrating from Windows vCenter 6.0 to VCSA 6.7.

Error-1069.PNG

 

Solution:

1. From Home, Select Administration

2. Under Solutions, select Client Plug-Ins.

3. In the Client Plug-ins window check for any plugins that might be causing an issue.

4. Select the plugin "vRealize Operations Manager", right-click and select Disable.

5. Click Yes to disable the selected plug-in

6. Click OK to re-load the vSphere Web Client

 

That's it; once re-loaded, you should no longer experience the error.

 

It appears the old version of the vRealize Operations Manager plugin was causing the issue, so after disabling it the vSphere Web Client now runs perfectly.

For more information, please refer to the VMware KB below:

https://kb.vmware.com/s/article/52387

Problem: How to stop, start, or restart vCenter Server 6.x services

VMware vCenter Server 6.0.x

VMware vCenter Server 6.5.x

 

 

Solution:

In VMware vCenter Server 6.x, VMware recommends using the vSphere Web Client or the service-control command-line tool to stop, start, or restart vCenter Server and/or Platform Services Controller services. This process differs from previous versions of vCenter Server, which used the Microsoft Windows Services snap-in.

 

Starting vCenter Server and/or Platform Services Controller services

To start a vCenter Server and/or Platform Services Controller service that has stopped, using the vSphere Web Client:

 

1. Log in to the vSphere Web Client with a vCenter Single Sign-on administrator account.

2. Navigate to Administration > Deployment > System Configuration.

3. Click Services to view the list of all services within the vCenter Server system.

4. To view a list of services for a specific node, click Nodes, select the node in question, and click the Related Objects tab.

5. Right-click on the service you would like to start and select Start.

 

 

Notes:

 

  • If you restart the Inventory Service using the preceding method, the Inventory Service starts, but the services that depend on it fail to start.
  • This issue will not be fixed, as the Inventory Service will be removed in a future release of vCenter Server.
  • To work around this issue, log in to the Windows machine and restart the Inventory Service.
  • This issue does not occur in the vCenter Server Appliance.

 

To start a vCenter Server and/or Platform Services Controller service that has stopped, using the command line:

1. Log in as an administrator to the server that is running vCenter Server and/or Platform Services Controller.

2. Open an administrative command prompt.

3. Run this command to change to the vCenter Server and/or Platform Services Controller installation directory:

 

cd C:\Program Files\VMware\vCenter Server\bin

 

 

Note: This command uses the default installation path. If you have installed vCenter Server and/or Platform Services controller to another location, modify this command to reflect the correct install location.

 

4. Run this command to list the vCenter Server and/or Platform Services Controller services:

 

 

service-control --list

 

5. Run this command to start a specific service:

 

 

service-control --start servicename

 

6. Run this command to start all services:

 

 

service-control --start --all

 

7. To perform a dry run of the command, add the --dry-run option. This displays the actions the command would perform without executing them.

 

For example:

 

service-control --start --all --dry-run

 

Stopping vCenter Server and/or Platform Services Controller services

 

To stop a vCenter Server and/or Platform Services Controller service that has started, using the vSphere Web Client:

 

 

1. Log in to the vSphere Web Client with a vCenter Single Sign-on administrator account.

2. Navigate to Administration > Deployment > System Configuration.

3. Click Services to view the list of all services within the vCenter Server system.

4. To view the services for a specific node, click Nodes, select the node in question, and click the Related Objects tab.

5. Right-click on the service you would like to stop and select Stop.

 

To stop a vCenter Server and/or Platform Services Controller service that has started, using the command line:

 

1. Log in as an administrator to the server that is running vCenter Server and/or Platform Services Controller.

2. Open an administrative command prompt.

3. Run this command to change to the vCenter Server and/or Platform Services Controller installation directory:

 

cd C:\Program Files\VMware\vCenter Server\bin

 

Note: This command uses the default installation path. If you have installed vCenter Server and/or Platform Services controller to another location, modify this command to reflect the correct install location.

 

4. Run this command to list the vCenter Server and/or Platform Services Controller services:

 

 

service-control --list

 

5. Run this command to stop a specific service:

 

service-control --stop servicename

 

6. Run this command to stop all services:

 

service-control --stop --all

 

7. To perform a dry run of the command, add the --dry-run option. This displays the actions the command would perform without executing them.

 

For example:

 

service-control --stop --all --dry-run

 

Restarting vCenter Server and/or Platform Services Controller services

 

 

To restart a vCenter Server and/or Platform Services Controller service using the vSphere Web Client:

 

 

1. Log in to the vSphere Web Client with a vCenter Single Sign-on administrator account.

2. Navigate to Administration > Deployment > System Configuration.

3. Click on Services to view the list of all services within the vCenter Server system.

4. To view the services for a specific node, click Nodes, select the node in question and click the Related Objects tab.

5. Right-click on the service you would like to restart and select Restart.

 

To restart a vCenter Server and/or Platform Services Controller service using the command line:

1. Log in as an administrator to the server that is running vCenter Server and/or Platform Services Controller.

2. Open an administrative command prompt.

3. Run this command to change to the vCenter Server and/or Platform Services Controller installation directory:

 

cd C:\Program Files\VMware\vCenter Server\bin

 

Note: This command uses the default installation path. If you have installed vCenter Server and/or Platform Services controller to another location, modify this command to reflect the correct install location.

 

4. Run this command to list the vCenter Server and/or Platform Services Controller services:

 

 

service-control --list

 

5. Run this command to stop a specific service:

 

 

service-control --stop servicename

 

6. Run this command to stop all services:

 

service-control --stop --all

 

7. Run this command to start a specific service:

 

service-control --start servicename

 

8. Run this command to start all services:

 

service-control --start --all
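
The stop and start steps above can be combined into a single restart sequence with a state check before and after. This is a hedged sketch: vmware-vpxd is just an example service name (substitute one from `service-control --list`), and the `--status` option is assumed to be available in your vCenter 6.x build.

```shell
# Hedged sketch: restart a single vCenter service and confirm its state.
# vmware-vpxd is an example; pick a real name from "service-control --list".
service-control --status vmware-vpxd   # check the current state
service-control --stop vmware-vpxd
service-control --start vmware-vpxd
service-control --status vmware-vpxd   # confirm the service is running again
```

Run this from the vCenter Server bin directory (the default path shown earlier in this article).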

 

Related Articles

VMware KB 2109881

Problem:

 

  • The vSphere 6.5 migration requires using the Migration Assistant, which provides validation checks for the upgrade.
  • The Migration Assistant checks report remediation items in the current vCenter infrastructure that must be addressed before proceeding with the upgrade.
  • Recurring issues include:
    • vCenter error 3010: "Installation of component VMware vCenter Server failed with error code 3010"
    • IPv6 must be disabled in order to use the Migration Assistant. If IPv6 is enabled, the progress bar does not complete and the upgrade fails with "Upgrade EXPORT failed".
    • Upgrade failed with error code 1603. This is a generic error related to possible GPO, OU permissions, user access, file system security, file locks, etc.
    • VUM issue: Upgrading the vCenter Server Appliance 6.0 to 6.5 fails with the error: "Unable to retrieve the migration assistant extension on source vCenter Server. Make sure migration assistant is running on the VUM server."

Solutions:

 

vCenter error 3010: "Installation of component VMware vCenter Server failed with error code 3010"

Multiple resolution paths are listed in VMware KB Article 2149266.

 

  • Verify that antivirus and backup software are disabled.
  • If antivirus and backup software are not causing the issue, the next most common cause that requires resolution is the JRE, as shown in steps 6 and 7 of the KB.
    • Take a backup of the jre folder located at C:\Program Files\VMware\vCenter Server\
    • Empty the contents of the jre folder except for these files:

C:\Program Files\VMware\vCenter Server\jre\bin\libvecsjni.dll
  C:\Program Files\VMware\vCenter Server\jre\lib\ext\vmware-endpoint-certificate-store.jar

  • If antivirus and backup software are disabled and the JRE is not the issue, see the additional remediation steps in VMware KB Article 2149266.

 

IPv6 must be disabled in order to use the Migration Assistant. If IPv6 is enabled, the progress bar does not complete and the upgrade fails with "Upgrade EXPORT failed".

See the IPv6 EXTERNAL blog for more information.

  • The workaround is to remove IPv6 from the appliance via the admin portal:
    • Log in to the portal at https://VCSA_IP:5480
    • Go to Network | Address
    • Delete the IPv6 default gateway
    • Set the IPv6 address type to Auto
    • Save the settings and reboot the appliance
    • Try the upgrade process again.

Upgrade failed with error code 1603. This is a generic error related to possible GPO, OU permissions, User Access, File System Security, File locks, etc.

  • Validate that vCenter is not using Group Policy in AD (GPO).
  • Ensure you are using an administrator account with full rights and permissions (a local administrator is preferred).
  • See VMware KB Article 2127284 for additional remediation items.

 

VUM issue: Upgrading the vCenter Server Appliance 6.0 to 6.5 fails with the error: "Unable to retrieve the migration assistant extension on source vCenter Server. Make sure migration assistant is running on the VUM server."

To resolve this issue, make sure that the Migration Assistant is running on the source VMware Update Manager machine. See VMware KB Article 2148400.


Problem: Attempting to join an Appliance-based Platform Services Controller or vCenter Server to a vSphere domain fails with the error: ERROR_TOO_MANY_NAMES (68)


 

To resolve this issue, unregister the failed machine using the cmsso-util command.

 

To unregister the failed machine:

 

1. Log in as root to the appliance shell of one of the available Platform Services Controller appliances within the vSphere Domain.

2. To enable the Bash shell, run the shell.set --enabled true command.

3. Run the shell command to start the Bash shell and log in.

4. Run the cmsso-util unregister command to unregister the failed Platform Services Controller or vCenter Server:

 

cmsso-util unregister --node-pnid FQDN_of_failed_PSC_or_vCenter --username administrator@your_domain_name --passwd vCenter-Single-Sign-On-password

 

Where FQDN_of_failed_PSC_or_vCenter is the FQDN or IP address of the Platform Services Controller or vCenter Server that failed to install. Ensure that this is the correct FQDN or IP address before executing.

 

Note: After executing the command, the removal process is not recoverable. You must run this command only on one of the Platform Services Controller replication partners, as the synchronization removes the entries from all other Platform Services Controller replication partners.

 

After the preceding steps are executed, try installing the Platform Services Controller or vCenter Server again.

 

 

Run the command below to show the list of registered PSC/VCSA nodes and extensions.

 

/usr/lib/vmware-vmafd/bin/dir-cli service list

 

For example, it will show a list like the one below:

 

1. machine-50163016-ccba-42a0-9ab4-27a605873c2b

2. vsphere-webclient-50163016-ccba-42a0-9ab4-27a605873c2b

3. machine-a3881909-f496-4dc0-88d4-8d33700efbf5

4. vsphere-webclient-a3881909-f496-4dc0-88d4-8d33700efbf5

5. machine-cdb8fbc0-bc40-11e6-a6c5-0050569d2101

6. vsphere-webclient-cdb8fbc0-bc40-11e6-a6c5-0050569d2101

7. vpxd-cdb8fbc0-bc40-11e6-a6c5-0050569d2101

8. vpxd-extension-cdb8fbc0-bc40-11e6-a6c5-0050569d2101

9. SRM-remote-24f1330b-507d-4802-b896-9c73b352e4f6

10. SRM-remote-115c3d71-e85f-4fb5-af2d-8c21cd7be1b2

11. SRM-a52c6186-c829-452a-b97a-0fdf110e7336

12. SRM-remote-f196f1de-352b-496e-9495-1dd64f0f3fbe

13. com.vmware.vr-d673abc3-0fe6-446d-b18f-da5856d37292

14. com.vmware.vr-e9b25284-b7eb-431a-94f9-e73a66cd98cd

15. com.vmware.vr-de2f42b5-8114-468b-a889-62690f654793

 

After unregistering the failed PSC/VCSA, the list should look like the one below:

 

1. machine-50163016-ccba-42a0-9ab4-27a605873c2b

2. vsphere-webclient-50163016-ccba-42a0-9ab4-27a605873c2b

3. machine-cdb8fbc0-bc40-11e6-a6c5-0050569d2101

4. vsphere-webclient-cdb8fbc0-bc40-11e6-a6c5-0050569d2101

5. vpxd-cdb8fbc0-bc40-11e6-a6c5-0050569d2101

6. vpxd-extension-cdb8fbc0-bc40-11e6-a6c5-0050569d2101

7. SRM-remote-24f1330b-507d-4802-b896-9c73b352e4f6

8. SRM-remote-115c3d71-e85f-4fb5-af2d-8c21cd7be1b2

9. SRM-a52c6186-c829-452a-b97a-0fdf110e7336

10. SRM-remote-f196f1de-352b-496e-9495-1dd64f0f3fbe

11. com.vmware.vr-d673abc3-0fe6-446d-b18f-da5856d37292

12. com.vmware.vr-e9b25284-b7eb-431a-94f9-e73a66cd98cd

13. com.vmware.vr-de2f42b5-8114-468b-a889-62690f654793
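
One way to confirm the cleanup, rather than eyeballing the two lists above, is to capture the service list before and after the unregister and diff them. This is a hedged sketch; the /tmp file paths are arbitrary examples.

```shell
# Hedged sketch: compare the registered-service list before and after
# running cmsso-util unregister. Run on a PSC appliance in the domain.
/usr/lib/vmware-vmafd/bin/dir-cli service list > /tmp/services-before.txt

# ... run the cmsso-util unregister command described above ...

/usr/lib/vmware-vmafd/bin/dir-cli service list > /tmp/services-after.txt

# Entries for the failed node should appear only in the "before" file.
diff /tmp/services-before.txt /tmp/services-after.txt
```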

Problem: vMotion failed with Error "The source detected that the destination failed to resume"

 

Error code: “The source detected that the destination failed to resume.”

Heap dvfilter may only grow by 33091584 bytes (105325400/138416984), which is not enough for allocation of 105119744 bytes

vMotion migration [-1407975167:1527835473000584] failed to get DVFilter state from the source host <xxx.xxx.xxx.xxx>

vMotion migration [-1407975167:1527835473000584] failed to asynchronously receive and apply state from the remote host: Out of memory.

Failed waiting for data. Error 195887124. Out of memory

 

Workaround:

Configure a larger heap size on a suitable target host. The change requires a reboot to take effect.

 

 

Run the following commands on the target host to increase the heap size:

1. Log in to the ESXi host via SSH (PuTTY).

2. Run the command below to change the heap size:

esxcfg-module -s DVFILTER_HEAP_MAX_SIZE=276834000 dvfilter

3. Reboot the host for the change to take effect.

4. Once the ESXi host is back online, try the vMotion again.
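
After the reboot, it can be worth confirming the module option was applied. This is a hedged sketch using esxcfg-module's -g option, which prints the configured option string for a module.

```shell
# Hedged sketch: confirm the dvfilter heap option persisted after reboot.
# The output should include DVFILTER_HEAP_MAX_SIZE=276834000.
esxcfg-module -g dvfilter
```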

Problem: vSphere ESXi 6.7 host not responding in vCenter after upgrade from vSphere ESXi 6.0 to 6.7

This is a known issue in vSphere 6.7 EP07 and EP09 that will be fixed in release 6.7 U3.

 

Errors in the vmkernel log:

 

2019-06-08T11:02:03.118Z cpu2:2136226)MemSchedAdmit: 470: Admission failure in path: sioc/storageRM.2136226/uw.2136226

2019-06-08T11:02:03.118Z cpu2:2136226)MemSchedAdmit: 477: uw.2136226 (185844) extraMin/extraFromParent: 256/256, sioc (808) childEmin/eMinLimit: 14066/14080

2019-06-08T11:02:03.118Z cpu2:2136226)MemSchedAdmit: 470: Admission failure in path: sioc/storageRM.2136226/uw.2136226

2019-06-08T11:02:03.118Z cpu2:2136226)MemSchedAdmit: 477: uw.2136226 (185844) extraMin/extraFromParent: 33/33, sioc (808) childEmin/eMinLimit: 14066/14080

2019-06-08T11:02:03.118Z cpu2:2136226)MemSchedAdmit: 470: Admission failure in path: sioc/storageRM.2136226/uw.2136226

2019-06-08T11:02:03.118Z cpu2:2136226)MemSchedAdmit: 477: uw.2136226 (185844) extraMin/extraFromParent: 256/256, sioc (808) childEmin/eMinLimit: 14066/14080

2019-06-08T11:02:03.118Z cpu2:2136226)MemSchedAdmit: 470: Admission failure in path: sioc/storageRM.2136226/uw.2136226

 

The root cause is SIOC running out of memory.

 

Resolution: N/A

Note: This known issue will be fixed in 6.7 U3, which may be released in July/August 2019.

Workaround:

 

You can work around the issue by restarting the SIOC services using the following commands on the affected ESXi hosts:

1. Check the status of storageRM and sdrsInjector:

/etc/init.d/storageRM status
/etc/init.d/sdrsInjector status


2. Stop the services:

/etc/init.d/storageRM stop
/etc/init.d/sdrsInjector stop


3. Start the services:

/etc/init.d/storageRM start
/etc/init.d/sdrsInjector start
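
Since both services go through the same status/stop/start cycle, the three steps above can be collapsed into a single loop on the ESXi host:

```shell
# Hedged sketch: restart both SIOC-related services in one pass.
for svc in storageRM sdrsInjector; do
  /etc/init.d/$svc status
  /etc/init.d/$svc stop
  /etc/init.d/$svc start
done
```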

 

If the issue persists even after the SIOC services are restarted, you can temporarily disable SIOC by turning off the feature from vCenter Server.

 

Refer to the VMware Knowledge Base for more information.