
Alex_Hunt's Blog


By default the vSphere Client starts in the locale defined at your system level. You can very easily change the locale for your vSphere Client. Follow the steps below to do so:

1: Open Command Prompt and navigate to the directory where VpxClient resides (default is “C:\Program Files\VMware\Infrastructure\Virtual Infrastructure Client\Launcher”).

Following locales are supported:

  • en_us – English
  • de_de – German
  • fr_fr – French
  • ja – Japanese
  • ko – Korean
  • zh_cn – Simplified Chinese

2: Start vpxclient.exe -locale xx, where xx is the locale you want to set.

3: For example, to start your vSphere Client in Japanese, type the command:

  vpxclient.exe -locale ja

Now launch your vSphere Client. It will start in the Japanese language.
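If you always want a particular language, a small batch file can carry the flag for you. This is just a sketch: the path assumes the default install location from step 1, and de_de is simply an example locale from the table above:

  cd /d "C:\Program Files\VMware\Infrastructure\Virtual Infrastructure Client\Launcher"
  vpxclient.exe -locale de_de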



What is Content Library?

One of the new features of vSphere 6 is the Content Library. The Content Library provides simple and effective management of VM templates, vApps, ISO images and scripts (collectively called “content”) for vSphere administrators.

Sometimes ISO images and other files needed for VM creation are spread across datastores, because multiple administrators manage the vSphere infrastructure. This can lead to duplication of content. To mitigate this issue, the concept of a content library was introduced in vSphere 6.0, providing a centralized place for storing your content.


Advantages of the Content Library

The Content Library can be synchronized across sites and vCenter Servers. Sharing consistent templates and files across multiple vCenter Servers in the same or different locations brings consistency, compliance, efficiency and automation to deploying workloads at scale.

Following are some of the features of the content library:

  • Store and manage content – One central location to manage all content such as VM templates, vApps, ISOs and scripts. This release has a maximum of 10 libraries and 250 items per library, and it is a built-in feature of vCenter Server, not a plug-in that you have to install separately.
  • Share content – Once the content is published on one vCenter Server, you can subscribe to it from other vCenter Servers. This is similar to the catalog option in vCloud Director.
  • Templates – VMs are stored as OVF packages rather than templates, which affects the template creation process. If you want to make changes to an OVF template in the Content Library, you have to deploy a VM from it first, make the changes, and then export it back to an OVF template and into the library.
  • Network – The Content Library communicates over port 443, and there is an option to limit the sync bandwidth.
  • Storage – The Content Library can be stored on datastores, NFS, CIFS, local disks etc., as long as the path to the library is accessible locally from the vCenter Server and the vCenter Server has read/write permissions.

Changes in HA in vSphere 6.0

Posted by mjha Mar 14, 2015

We are all aware of DRS affinity and anti-affinity rules, and we know that HA doesn’t respect VM-Host “should” rules: if a host fails, HA can restart the VMs anywhere, not necessarily on the hosts to which the VMs are tied by affinity rules.

 

However, on the next invocation of DRS (by default every 5 minutes) this violation is corrected by DRS itself and the VMs return to the hosts dictated by the VM-Host affinity rules.

With vSphere 6.0 VMware has introduced a new advanced configurable option in HA called “das.respectVmHostSoftAffinityRules”. As the name suggests, this setting lets HA respect VM-Host affinity rules when it can, and violate them when it has no other choice.
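The option is entered as a key/value pair under the cluster's vSphere HA advanced options (in the vSphere 6.0 Web Client this is roughly cluster > Manage > Settings > vSphere HA > Edit > Advanced Options; the exact navigation is from memory, so verify it in your build). The option takes a true/false value; the example below enables it explicitly rather than relying on the default:

  das.respectVmHostSoftAffinityRules = true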

 

To make things clear, let’s consider the scenario below.

 

We have 4 ESXi hosts and each host has 10 running VMs. Three VMs (VM-a, VM-b, VM-c) on host 2 have affinity rules configured to run only on host 2 and host 3. Now suppose host 2 fails and HA starts the failover. Since we have defined “das.respectVmHostSoftAffinityRules”, HA will ideally restart VM-a, VM-b and VM-c on host 3, in accordance with the VM-Host affinity rules.

But if host 3 is already heavily loaded, running at nearly full capacity, and cannot accommodate any new incoming VMs, then HA has no choice but to restart the VMs elsewhere (maybe on host 1 or host 4).

 

The second scenario is that none of the hosts to which the failed VMs are tied by affinity rules is available at the time of failover. In this case HA behaves just as it did up to version 5.5 and can restart the VMs anywhere on the remaining nodes of the cluster.


Interview Questions SDRS

Posted by mjha Dec 13, 2014

Ques 1: Can 2 datastores that are part of two different datacenters be added to a datastore cluster?

Ans: No, we can’t add datastores to a datastore cluster if they are part of 2 different datacenters.

 

Ques 2: Can we add new datastores to a datastore cluster without incurring any downtime?

Ans: Yes, we can add new datastores to a datastore cluster without any downtime.


Ques 3: If a datastore’s space utilization is above the configured threshold, is initial placement of a new VM still possible on that datastore?

Ans: Yes, initial placement is possible on a datastore which has already crossed the utilization threshold, provided it is capable of storing the new VM. Initial placement is always a manual process: you will be prompted to select a datastore out of the datastore cluster while creating a new VM, or while migrating a VM from a datastore outside the datastore cluster onto one inside it.


Ques 4: What are pre-requisite migrations in terms of SDRS?

Ans: The set of migration recommendations generated by SDRS for existing VMs, to make room before the initial placement of a new VM, are called pre-requisite migrations.


Ques 5: What is meant by Datastore cluster defragmentation?

Ans: When there is enough free space available at the datastore cluster level, but not enough space available on any single datastore to accommodate a new incoming VM, the datastore cluster is said to be fragmented. To place the new VM, SDRS migrates existing VMs from one datastore to another to free up enough space on a single datastore to hold the newly created VM.


Ques 6: What is space utilization ratio difference and what is its default value? What is the purpose of defining space utilization ratio difference?

Ans: To avoid unnecessary migrations from an overloaded datastore to a datastore which is already near the configured threshold, SDRS uses the space utilization ratio difference to determine which datastores should be considered as destinations for virtual machine migration.

By default the value is set to 5%. This means a VM will only be migrated from a heavily loaded datastore to a less loaded one when there is at least a 5% difference in space utilization between the 2 datastores. For example, a datastore at 84% utilization will not offload VMs to a datastore at 81%, because the 3% difference is below the threshold.

 

Ques 7: Why are migrations of powered-off VMs preferred by SDRS for load balancing a datastore cluster?

Ans: Migration of a powered-off VM is preferred by SDRS because no changes are occurring inside the VM at the time of migration, so SDRS doesn’t have to track which blocks changed inside the VM’s VMDKs while the migration was in progress.

Note: If the swap file of a VM is stored at a user-defined location and not inside the VM directory, SDRS leaves the swap file untouched during migration of that VM.


Ques 8: What is VM.Observed.Latency in terms of SDRS?

Ans: It is the time elapsed between a VM sending an I/O request and the response coming back from the datastore, as observed by the ESXi host.

Note: In vSphere 5.0 SDRS only considered the time elapsed between an I/O request leaving the ESXi host and the response coming back from the datastore; in vSphere 5.1 the measurement starts right after an I/O request is generated by the VM and leaves it.


Ques 9: What is meant by “performance correlated datastores”? How does it affect SDRS in generating migration recommendations?

Ans: Performance correlated datastores are datastores that share the same backend resources, such as the same disk group or RAID group on the storage array. By default SDRS avoids migration recommendations for a VM between 2 performance correlated datastores, because if one datastore is experiencing high latency, chances are the other datastore carved out of the same disk group or RAID group is experiencing the same latency.

Note: In vSphere 5.0 SDRS depended on VASA to identify performance correlated datastores; in vSphere 5.1 SDRS leverages SIOC for the same.


Ques 10: What is the default invocation period for SDRS, and why is it not invoked at the default interval the first time SDRS is enabled on a datastore cluster?

Ans: The default invocation period for SDRS is 8 hours. However, SDRS will not be invoked within 8 hours of being enabled on a datastore cluster for the first time, because it requires at least 16 hours of historical data to make any space- or I/O-related migration recommendations. Once 16 hours have elapsed and SDRS has some data, it is invoked every 8 hours from then on, but always analyzes the data from the last 16 hours.


Ques 11: What are the different conditions under which SDRS is invoked?

Ans: Following are the situations in which SDRS will be invoked:

  1. A datastore entering maintenance mode.
  2. A new datastore is added to the datastore cluster.
  3. A datastore exceeds its configured threshold.
  4. During initial placement of a VM.
  5. When SDRS is invoked manually by an administrator.
  6. The datastore cluster configuration is updated.


Ques 12: How do the ESXi hosts in a cluster learn what latency is observed by the other ESXi hosts on a given datastore?

Ans: On each datastore a file named “iormstats.sf” is created and shared among all ESXi hosts connected to that datastore. Every ESXi host periodically writes its average latency and number of I/Os for that datastore into this file. Each ESXi host reads this file and calculates a datastore-wide average latency.
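As a quick sanity check from the ESXi shell, you can confirm the file exists at the root of the datastore (on some builds it is created as a hidden dot-file, which is why the grep below is deliberately loose):

  ls -la /vmfs/volumes/<datastore>/ | grep iormstats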


Ques 13: How do we enable SIOC logging, and how can we monitor SIOC logs?

Ans: SIOC logging can be enabled by editing the advanced settings of the vCenter Server. You have to set the value of the Misc.SIOCControlLogLevel parameter to 7.


Note: SIOC needs to be restarted for the log level change to take effect. It can be restarted by logging into the ESXi host and running the command /etc/init.d/storageRM restart.
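The same change can also be made from the ESXi shell. This is only a sketch; the exact advanced-option name and capitalization have varied between builds, so list the options first rather than trusting the path shown here:

  esxcli system settings advanced list | grep -i sio
  esxcli system settings advanced set -o /Misc/SIOControlLoglevel -i 7
  /etc/init.d/storageRM restart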


Ques 14: If someone has changed the SIOC log level, which file would you consult to find out?

Ans: When the SIOC log level is changed, the event is logged in the /var/log/vmkernel log file.


Ques 15: Why is it not considered a best practice to group datastores coming from different storage arrays together in a single datastore cluster?

Ans: When datastores from different types of storage arrays are grouped together in a datastore cluster, the performance of a VM varies across these datastores. Also, SDRS will be unable to leverage VAAI offloading during VM migration between 2 datastores that belong to different storage arrays.


Ques 16: How is SDRS affected if extent-based datastores are used in a datastore cluster?

Ans: Extents are used to grow a datastore’s size, but we should not use extent-based datastores in a datastore cluster because SDRS disables I/O load balancing for such datastores. SIOC will also be disabled on those datastores.


Ques 17: Can we migrate VMs with independent disks using SDRS? If yes then how, and if no then why?

Ans: By default SDRS doesn’t migrate VMs with independent disks. This behavior can be changed by adding an entry “sdrs.disableSDRSonIndependentDisks” and setting its value to false.


Note: This only works for non-shared independent disks. Moving shared independent disks is not supported by SDRS.


Ques 18: How does SDRS compute the space requirement for thin provisioned VMs?

Ans: For a thin provisioned VM, SDRS considers the allocated disk size instead of the provisioned size when generating migration recommendations. When determining placement of a virtual machine, Storage DRS verifies the disk usage of the files stored on the datastore. To avoid being caught out by instant data growth of the existing thin-disk VMDKs, Storage DRS adds a buffer space to each thin disk. This buffer zone is determined by the advanced setting “PercentIdleMBinSpaceDemand”.

This setting controls how conservative Storage DRS is in determining the available space on the datastore for load balancing and initial placement operations of virtual machines.

SDRS also analyzes the data growth rate inside a thin provisioned VM; if it is very high, SDRS attempts to avoid migrating such a VM to a datastore where it could cause the space utilization threshold of that datastore to be exceeded in the near future.
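A worked example, using the 25% default for PercentIdleMBinSpaceDemand described in the article linked below: for a thin disk provisioned at 100 GB with 40 GB allocated, the idle space is 100 - 40 = 60 GB, the buffer is 25% of 60 GB = 15 GB, and the space demand SDRS uses for that disk is 40 + 15 = 55 GB.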


For more info follow the link

http://frankdenneman.nl/2012/10/01/avoiding-vmdk-level-over-commitment-while-using-thin-disks-and-storage-drs/


Ques 19: What is the mirror driver and how does it work?

Ans: The mirror driver is used to track block changes in the VMDKs of a VM while storage migration of that VM is in progress. If write operations are issued during the migration, the mirror driver commits those disk writes to both the source and the destination disk.

The mirror driver works at the VMkernel level and uses the datamover to migrate VM disks from one datastore to another. Before the mirror driver is enabled for a VM, the VM is briefly stunned, and it is unstunned after the mirror driver is enabled. The datamover uses a single-pass block copy of the disks from the source to the destination datastore.


Ques 20: What are the types of datamovers which can be used by SDRS?

Ans: There are 3 types of datamovers which can be used by SDRS:

  1. fsdm: This is the legacy 3.0 datamover present in the ESXi host. It is the slowest of all.
  2. fs3dm: This datamover was introduced in vSphere 4.0. It is faster than the legacy 3.0 datamover.
  3. fs3dm – hardware offload: Introduced in vSphere 4.1, this is the fastest of the three. It leverages VAAI to offload the disk migration task between 2 datastores.

 

Ques 21: Why is it recommended to avoid mixing datastores with different block sizes in a datastore cluster?

Ans: When the destination datastore is hosted on a different storage array, or has a different block size than the source datastore, SDRS is forced to use the “fsdm” datamover, which is the slowest one.

 

Note: When the source and destination datastores are on the same storage array and have the same block size, SDRS utilizes the “fs3dm” datamover.

When the storage array has VAAI functionality and the source and destination datastores have the same block size and are hosted on the same array, SDRS uses the “fs3dm – hardware offload” datamover.


Ques 22: What enhancements were made to SvMotion in vSphere 5.1 as compared to vSphere 5.0?

Ans: vSphere 5.1 allows 4 parallel disk copies per SvMotion process. Prior to vSphere 5.1, disks were copied serially. In 5.1, if a VM has 5 disks, the first four disks are copied in parallel, and as soon as one of those 4 copies completes, the 5th disk starts copying.


Ques 23: What is the maximum number of simultaneous SvMotion processes associated with a datastore? How do you change this value?

Ans: The maximum number of simultaneous SvMotions on a datastore is 8. This can be throttled by editing vpxd.cfg or from the advanced settings of the vCenter Server. In the vpxd.cfg file, modify the parameter “MaxCostPerEsx41DS”.
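As a sketch, the entry sits inside the <vpxd> tags of vpxd.cfg, just like the other vpxd.cfg edits shown later in this blog. The <ResourceManager> placement and the cost model (each SvMotion is commonly described as costing 16 points against a per-datastore limit of 128, which is where the 8 comes from) are taken from community documentation rather than this post, so treat them as assumptions:

<vpxd>
  <ResourceManager>
    <MaxCostPerEsx41DS>64</MaxCostPerEsx41DS>
  </ResourceManager>
</vpxd>

With a value of 64 and a per-operation cost of 16, at most 4 simultaneous SvMotions would be admitted per datastore.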


Ques 24: Why should partially connected datastores not be used in a datastore cluster?

Ans: When a datastore cluster contains partially connected datastores, I/O load balancing is disabled by SDRS on that datastore cluster. In that case SDRS does load balancing based on space utilization only.


Multipathing and its techniques

Posted by mjha Dec 4, 2014

Multipathing: Multipathing means having more than one path from your server to the storage devices. At a given time, more than one path can be used to connect to the LUNs on the storage device. Multipathing provides:

  1. Redundancy
  2. Path management (failover)
  3. Bandwidth aggregation


Native Multipathing Plugin (NMP)

This is the default multipathing plugin, which is provided by VMware and included in the ESXi image. NMP has 2 sub-plugins:

  A) Storage Array Type Plugins (SATP): This plugin keeps information about all the available paths.
  B) Path Selection Policy (PSP): The PSP defines which path will be selected, based on the multipathing technique in use.


VMware SATPs

Storage Array Type Plug-Ins (SATPs) run in conjunction with the VMware NMP and are responsible for array specific operations. ESX/ESXi offers a SATP for every type of array that VMware supports. It also provides default SATPs that support non-specific active-active and ALUA storage arrays, and the local SATP for direct-attached devices.

Each SATP accommodates special characteristics of a certain class of storage arrays and can perform the array specific operations required to detect path state and to activate an inactive path. As a result, the NMP module itself can work with multiple storage arrays without having to be aware of the storage device specifics.

After the NMP determines which SATP to use for a specific storage device and associates the SATP with the physical paths for that storage device, the SATP implements the tasks that include the following:

  • Monitors the health of each physical path.
  • Reports changes in the state of each physical path.
  • Performs array-specific actions necessary for storage fail-over. For example, for active-passive devices, it can activate passive paths.


Important Note: When NMP is used, the ESXi host identifies the type of array by checking it against the /etc/vmware/esx.conf file and then associates the SATP with that array based on the make and model of the array.
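You can inspect the result of this claiming from the ESXi 5.x shell (output columns vary a little by build):

  esxcli storage nmp satp list
  esxcli storage nmp satp rule list
  esxcli storage nmp device list

The first two commands show the available SATPs with their default PSPs and the claim rules; the last shows, per device, which SATP and PSP were actually associated.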


What does NMP do?

  • Manages physical path claiming and unclaiming.
  • Registers and de-registers logical devices.
  • Associates physical paths with logical devices.
  • Processes I/O requests to logical devices:
    • Selects an optimal physical path for the request (load balance)
    • Performs actions necessary to handle failures and request retries.
  • Supports management tasks such as abort or reset of logical devices.

 

We can also use 3rd-party multipathing plugins (MPPs) provided by storage vendors. Multiple third-party MPPs can run in parallel with the VMware NMP. When installed, the third-party MPPs replace the behavior of the NMP and take complete control of the path failover and load-balancing operations for the specified storage devices.

Pluggable Storage Architecture (PSA): The PSA is a special VMkernel module which gives the ESXi host the ability to use 3rd-party multipathing software. Storage vendors provide their own multipathing plugins (MPPs) which, when installed on ESXi, work together with the NMP so that failover and load balancing for that storage array can be optimized.

When coordinating the VMware NMP and any installed third-party MPPs, the PSA performs the following tasks:

  • Loads and unloads multipathing plug-ins.
  • Hides virtual machine specifics from a particular plug-in.
  • Routes I/O requests for a specific logical device to the MPP managing that device.
  • Handles I/O queuing to the logical devices.
  • Implements logical device bandwidth sharing between virtual machines.
  • Handles I/O queueing to the physical storage HBAs.
  • Handles physical path discovery and removal.
  • Provides logical device and physical path I/O statistics.


Multipathing Techniques: There are 3 main multipathing techniques, listed below:


1: Most Recently Used (MRU): MRU selects the first working path discovered at boot time. If the original path fails, the ESXi host switches to an alternative path and continues to use it until that one fails. If the original path discovered during boot comes back online, the ESXi host does not fail back to it. MRU is used when LUNs are presented from an Active/Passive array.


2: Fixed: In this technique the preferred path (defined by the administrator) is chosen at boot time. If the preferred path becomes unavailable or fails, the ESXi host switches to another available path, but as soon as the preferred path comes back online, the ESXi host immediately fails back to it. This technique is mostly used when LUNs are presented from an Active/Active storage array.


3: Round Robin: In the Round Robin technique, the ESXi host can use all available paths to connect to the LUNs, which enables load distribution among the configured paths. This technique can be used for both Active/Active and Active/Passive storage arrays.

  • For Active/Active arrays, all paths are used.
  • For Active/Passive arrays, only the paths connecting to the active controller are used.
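The technique (path selection policy) is applied per device. A minimal ESXi shell sketch, where the naa identifier is a placeholder for one of your own devices:

  esxcli storage nmp device list
  esxcli storage nmp device set -d naa.xxxxxxxxxxxxxxxx -P VMW_PSP_RR

VMW_PSP_RR is Round Robin; VMW_PSP_MRU and VMW_PSP_FIXED are the policy names for the other two techniques described above.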


Apart from these 3 techniques, there is one more multipathing concept, discussed below:


ALUA: Asymmetric arrays can process I/O requests via both controllers at the same time, but each individual LUN is managed by a particular controller. If an I/O request for a LUN is received via a controller other than its managing controller, the traffic is proxied to the managing controller.

The ALUA SATP plugin is used for asymmetric arrays. When an ESXi host is connected to an ALUA-capable array, the host knows the array has multiple storage processors and which paths are direct, which allows it to make better load balancing and failover decisions. There are 2 ALUA transition modes that an array can advertise:

  1. Implicit: The array itself can assign and change the managing controller for each LUN.
  2. Explicit: Here the ESXi host can change a LUN’s managing controller.

vCenter Server Logs

Posted by mjha Nov 9, 2014

Here in this post, we will learn about the vCenter Server log files and how to view them.

 

The default log file location for vCenter Server differs with the version of vCenter Server installed, and also depends on the operating system you chose to install it on. Here is the list of log file locations for different vCenter Server versions on different operating systems.


When vCenter Server 5.x and earlier versions are installed on operating systems like Windows 2003, the default log file location is:

%ALLUSERSPROFILE%\Application Data\VMware\VMware VirtualCenter\Logs\

When vCenter Server 5.x and earlier versions are installed on Windows 2008, the default log file location is:

C:\ProgramData\VMware\VMware VirtualCenter\Logs\

If you look at the vCenter log file location you will find a long list of logs, each with its own significance, including vpxd.log, vpxd-profiler.log, profiler.log, scoreboard.log, vpxd-alert.log, cim-diag.log, ls.log, stats.log, sms.log, eam.log, catalina.<date>.log, localhost.<date>.log, jointool.log, manager.<date>.log and host-manager.<date>.log.

Of the log files mentioned above, the two most critical are vpxd.log and vpxd-profiler.log, which are useful for troubleshooting configuration- and performance-related issues.

What is vpxd.log?

vpxd is the vCenter service which runs on the Windows server where vCenter Server is installed. The logs for the vpxd service are stored in the default log file location. When you open the log folder you will see many vpxd log files with a number appended to the file name. This happens because of log rotation: whenever the log file reaches 5 MB, or when the vpxd service is restarted, the current log file is automatically archived.

By default 10 vpxd-###.log files are kept in the vCenter log file directory; this count can be altered.

What is vpxd-profiler.log file?

The vpxd-profiler log is mainly used to gather performance-related information, which is useful for troubleshooting performance issues. Just like vpxd, the logs are automatically archived when they reach 5 MB or when the services are restarted. You can view the active logs from the vSphere Client, and about 10 old logs are kept by default in the vCenter Server log file location.


What are the different log files you find on the vCenter Server, and what are they used for?

  • vpxd-alert.log: Non-fatal information logged about the vpxd process.
  • cim-diag.log and vws.log: Common Information Model monitoring information, including communication between vCenter Server and managed hosts' CIM interface.

 

  • drmdump: Actions proposed and taken by VMware Distributed Resource Scheduler (DRS), grouped by the DRS-enabled cluster managed by vCenter Server. These logs are compressed.
  • ls.log: Health reports for the Licensing Services extension, connectivity logs to vCenter Server.

 

  • vimtool.log: Dump of string used during the installation of vCenter Server with hashed information for DNS, username and output for JDBC creation.
  • stats.log: Provides information about the historical performance data collection from the ESXi/ESX hosts

 

  • sms.log: Health reports for the Storage Monitoring Service extension, connectivity logs to vCenter Server, the vCenter Server database and the xDB for vCenter Inventory Service.
  • eam.log: Health reports for the ESX Agent Monitor extension, connectivity logs to vCenter Server.
  • catalina.<date>.log and localhost.<date>.log: Connectivity information and status of the VMware Webmanagement Services.

 

  • jointool.log: Health status of the VMwareVCMSDS service and individual ADAM database objects, internal tasks and events, and replication logs between linked-mode vCenter Servers.

 

 

How do you Change the default log location for VMware vCenter Server?

  • Stop the VMware VirtualCenter Server service and the VMware VirtualCenter Management Webservices service.
  • Back up the vpxd.cfg file. By default, the file is located at:


Windows Server 2008/Windows Server 2008 R2 – C:\ProgramData\VMware\VMware VirtualCenter\

  • Open the vpxd.cfg file using a text editor.
  • Add this entry within the <log> and </log> tags:

<!-- vpxd log directory -->
<directory>[Preferred directory]</directory>

  • Where [Preferred directory] is the directory within which you want to save the logs.


For example:

<!-- vpxd log directory -->
<directory>D:\VCenterLogs</directory>

  • Save the vpxd.cfg file.
  • Restart the VMware VirtualCenter Server service and the VMware VirtualCenter Management Webservices service.


How do you view the vCenter Server logs?

 

  • The vSphere Client connected to vCenter Server 4.0 and higher – Click Home > Administration > System Logs.
  • From the vSphere 5.1 and 5.5 Web Client – Click Home > Log Browser, then from the Log Browser, click Select object now, choose an ESXi host or vCenter Server object, and click OK.


How do you export the Logs?

  • Select File > Export > Export System Logs.
  • If you are connected to vCenter Server, select Include information from vCenter Server and vSphere Client to download vCenter Server, vSphere Client and host log files, and click Next.
  • Select Gather performance data to include performance data information in the log files. Click Next.
  • Click Browse and specify the location to which to save the log files.
  • The host or vCenter Server generates .zip bundles containing the log files.
  • The Recent Tasks panel shows the Generate diagnostic bundles task in progress.

Problem: Unable to convert a VM with Paravirtual SCSI Disk using VMware Converter 4.3

When we try to convert a VM with a Paravirtual SCSI controller, VMware Converter 4.3 throws an error and the task fails with status:

FAILED: An error occurred during the conversion:


When you start analyzing the log files, you will find that an “Unknown controller” error is detected.


Reason: VMware Converter 4.3 doesn’t recognize the Paravirtual controller!


Resolution: Perform the following steps to resolve this issue

  1. Power off the VM.
  2. Change all Paravirtual controllers to LSI Logic.
  3. Perform the conversion on the powered-off VM, and make sure the VM is not powered on automatically after conversion.
  4. Change the SCSI controller back to Paravirtual.
  5. Power on the VM.


How to Change SCSI Controller

Select the VM and click Edit Settings. Select the SCSI controller from the devices list and click the Change Type button.


Note: This is a known bug in VMware Converter 4.3

Problem:


A virtual machine stops responding and cannot be powered on. The machine reports an alarm with the message “Virtual machine consolidation needed status”.




Resolution:

  1. Open the Snapshot Manager to identify the active snapshot.
  2. Right-click the virtual machine, choose Edit Settings, and verify which VMDK disk the virtual machine's hard disk points to.
  3. Browse the datastore and navigate to this virtual machine's folder. There you may see an orphaned snapshot (in this case the size shown was 28.18 KB).



4. Now right-click the virtual machine, navigate to Snapshot, and choose Consolidate.



After completion of this task, the virtual machine can be powered on and is accessible.


Note: Before proceeding, consider these points related to snapshot consolidation:

  • The remove snapshot process can take a long time to complete if the snapshots are large.
  • If the consolidation process is stopped before completing, it may result in data corruption.
  • Virtual machine performance may be degraded during the snapshot consolidation process.

There is an authentication issue with vSphere Single Sign-On version 5.5 when both the Active Directory (AD) domain controller and the vCenter Single Sign-On server run on Windows Server 2012.


When your AD domain controller and your vCenter Single Sign-On server are both running on Windows Server 2012, Single Sign-On is unable to authenticate AD users. You get a “Cannot parse group information” error.


Symptoms

  • Users cannot authenticate with a vCenter Single Sign-On (SSO) 5.5 system that is installed on Windows Server 2012 when this system is joined to an Active Directory domain whose domain controller also runs on Windows Server 2012.
  • Users receive this error message when trying to log in through the vSphere Web Client:
    Cannot Parse Group Information

 

Reason for this problem

  • This issue occurs only in environments where BOTH of these conditions apply:
    • vCenter SSO 5.5 is running on Windows Server 2012, and
    • vCenter SSO 5.5 joined an Active Directory Domain with a Domain Controller that is running on Windows Server 2012


Resolution

  This is a known issue affecting vCenter Server 5.5.

To resolve this issue, replace the %WINDIR%\System32\idm.dll file on all systems running vCenter SSO 5.5 with the idm.dll file which you can download from http://sdrv.ms/1a6WER8



Note: The attached idm.dll file is provided by VMware.


To replace the idm.dll file on the Windows Server 2012 running SSO 5.5:

  1. Log in as an administrator.
  2. Stop the VMware Identity Management Service on the vCenter SSO server. This also stops the VMware Secure Token Service.
  3. Back up the existing idm.dll by copying %WINDIR%\System32\idm.dll to %WINDIR%\System32\idm.dll.orig.
  4. Download the idm_patch09252013.zip attachment that contains the replacement idm.dll file, extract it, and copy idm.dll to %WINDIR%\System32\.


Start the VMware Secure Token Service on the vCenter SSO server. After replacing the DLL and restarting the services, the initial AD login may take longer than normal to authenticate.


Virtual Machine File Types

Posted by mjha Nov 9, 2014

When you browse a VM's folder on a datastore you find different types of files listed there. Each file has a specific role in the VM's functioning. Here we will learn which files are created for which purpose. Typically you will find the following types of files in a VM directory:

VMDK files – VMDK (VMware Virtual Disk) files are the actual virtual hard drives for the guest operating system. You can create dynamic or fixed virtual disks. With dynamic disks, the disk starts small and grows as the disk inside the guest OS fills up. With fixed disks, the virtual disk is allocated at the full size chosen when the VM was created.

Log files – Log files are just that: a log of VM activity for a single virtual machine. Log files are useful when troubleshooting a virtual machine. There can be several of them, and they are written to each time the virtual machine is started, suspended or rebooted.

VMX files – A VMX file is the primary configuration file for a virtual machine. When you create a new virtual machine and answer questions about the operating system, disk sizes, and networking, those answers are stored in this file. A VMX file is actually a simple text file that can be edited with Notepad.
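For illustration, a few typical lines from a .vmx file look like this (an invented example VM, so the values are placeholders; the keys themselves are standard):

displayName = "ExampleVM"
guestOS = "windows7srv-64"
memSize = "4096"
scsi0:0.fileName = "ExampleVM.vmdk"
ethernet0.virtualDev = "vmxnet3"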

VMEM – A VMEM file is a backup of the virtual machine’s paging file. It will only appear if the virtual machine is running, or if it has crashed.

VMSN files – these files are used for snapshots. A VMSN file is used to store the exact state of the virtual machine when the snapshot was taken. Using this snapshot, you can then restore your machine to the previous state as when the snapshot was taken.

 

VMSD files – A VMSD file stores snapshot metadata, such as the names and relationships of the VM's snapshots.


NVRAM files – These files are the BIOS for the virtual machine. The VM must know how many hard drives it has and other common BIOS settings. The NVRAM file is where that BIOS information is stored.

VMSS files – A VMSS file stores the suspended state of a virtual machine; it is created when you suspend the VM. It preserves the state of the running applications at the moment the machine was suspended.

VMXF files – A VMXF file is a supplemental configuration file (historically used for VMs in a team), created along with the virtual machine, which holds metadata attributes such as the VMID.

Remove Old Hidden Devices from a VM after P2V Conversion

 

After you have converted a Windows server to a virtual machine, some redundant hardware devices may not be removed in the process.

 

This issue is very common with network devices. When setting the IP in a VM you may get an error saying a network card already has that IP assigned, but there are no other network devices listed in your Network Connections window.


Well, there might be: they're hidden, and you just need to reveal them and remove them. It's quite easy, and this is how you go about doing it:

 

1. Open a command prompt on the Windows VM (Start --> Run --> cmd).

2. At the command prompt, type:

set devmgr_show_nonpresent_devices=1

3. From the same command prompt (so the variable is still set for the child process), launch Device Manager:

start devmgmt.msc

4. In the Device Manager console, select View --> Show Hidden Devices.

5. Uninstall the devices that are no longer required, such as old network devices.

vSphere Replication gives you the option of performing host-based replication of a virtual machine from one datacenter to another over a network link. This feature was introduced with vCenter Site Recovery Manager 5.x. It allows customers to use vSphere Replication as the primary replication engine to replicate data from one site to another, and then use the SRM engine to automate the entire DR process.

 

Since you have the option to set replication per virtual machine, you can also pre-seed the VMDK files of a virtual machine on a LUN in the target datastore (by restoring a full image from a backup). This saves time and replication bandwidth, since you do not have to replicate all the data over the WAN: you only replicate the changes from the primary site to the DR site by syncing the two images.

 

Most customers who use the pre-seeding method register the restored VMs on the DR site and power them on, to check that the backup was good and that they can pre-seed that image. Once such a VM is registered and powered on, you will be asked whether this VM "was copied" or "was moved". If you proceed with the default option of "was copied", the UUIDs of the VMDKs change to random values.

 

Now when you try to set up the first-time sync using the vSphere Replication configuration wizard, the configuration fails with the error "Target disk UUID validation failed".

 

This error comes up because when the replication engine compares the VMDK descriptor files of the source and destination disks, the two have different UUIDs. This causes the replication configuration and the first-time sync to fail.

 

To solve this issue, you can simply use the ESXi shell or an SSH (PuTTY) session to get the UUID from the descriptor VMDK of the primary-site VM. Note this UUID down, as you will need to replace the UUID in the target VMDK descriptor with this source UUID. Once done, you will be able to set up the replication again using the same seed VMDK without any issues.

 

Here is how a UUID looks in a VMDK descriptor:

 

ddb.uuid = "60 00 C2 94 dd 43 63 90-18 77 3f 23 6d 8e f0 22"
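A minimal shell sketch of the fix, reusing the example UUID above (the paths are placeholders; sed -i works in the ESXi shell, but copying the descriptor and editing it with vi is just as valid):

# On the primary site: read the source disk's UUID from its descriptor
grep ddb.uuid /vmfs/volumes/<datastore>/<VMNAME>/<VMNAME>.vmdk

# On the DR site: overwrite the target descriptor's ddb.uuid line with that value
sed -i 's/^ddb.uuid.*/ddb.uuid = "60 00 C2 94 dd 43 63 90-18 77 3f 23 6d 8e f0 22"/' /vmfs/volumes/<datastore>/<VMNAME>/<VMNAME>.vmdk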

 

Please ensure you do this for all the disks (VMDKs) attached to the virtual machine in question, and please ensure you have a backup available before you play around with this, in case you do not have hands-on experience.

vCenter Server 5.5 displays a yellow warning in the Summary tab of hosts: Quick stats on hostname is not up-to-date


Problem:

  • When connecting to VMware vCenter Server 5.5 using the VMware vSphere Client or VMware vSphere Web Client, the Summary tab of the ESXi 5.5 host shows a yellow warning.


You see the error:

Configuration issues. "Quick stats on hostname is not up-to-date"


This issue does not occur if you connect directly to the ESXi host.  


Resolution

This is a known issue in vCenter Server 5.5. To work around this issue, you have to add these quickStats parameters to the Advanced Settings of the vCenter Server:

  • vpxd.quickStats.HostStatsCheck
  • vpxd.quickStats.ConfigIssues

 

To add the quickStats parameters to the Advanced Settings of the vCenter Server: 


  1. In the vSphere Web Client, navigate to the vCenter Server instance.
  2. Select the Manage tab.
  3. Select Settings > Advanced Settings.
  4. Click Edit.
  5. In the Key field, enter this key: vpxd.quickStats.HostStatsCheck
  6. In the Value field, enter: False
  7. Click Add.
  8. In the Key field, enter this key: vpxd.quickStats.ConfigIssues
  9. In the Value field, enter: False
  10. Click Add.
  11. Click OK.
  12. Restart the vCenter Server services.

 
To work around this issue in the event that vSphere Web Client is inaccessible, add these quickStats parameters to the vpxd.cfg file: 

  • <HostStatsCheck>false</HostStatsCheck>
  • <ConfigIssues>false</ConfigIssues>

 

To add the quickStats parameters to the vpxd.cfg file: 

  1. Back up the existing vpxd.cfg file. Do not skip this step.
Open the vpxd.cfg file using a text editor.
  3. By default, the vpxd.cfg file is located at:

 

For Windows-based vCenter Server – C:\ProgramData\VMware\VMware VirtualCenter\  


For vCenter Server Appliance – /etc/vmware-vpx/


  4. Add these entries between the <vpxd>...</vpxd> tags:

    <vpxd>
    ...
    <quickStats>
    <HostStatsCheck>false</HostStatsCheck>
    <ConfigIssues>false</ConfigIssues>
    </quickStats>
    ...
    </vpxd>


5. Save and close the vpxd.cfg file.


6. Restart the vCenter Server services.


Now when you connect to the vCenter Server, this warning message is no longer displayed.

Problem 1:

There are 2 main problems which come up during the installation of vCenter 5.5 on Windows Server 2012 R2. The first is that when installing the vCenter Server component, it does not recognize any administrator user (or group) from your domain that you try to add as an administrator.


Resolution:

This issue can be resolved by logging into the Web Client with the SSO administrator user and adding your AD domain as the default identity source under the SSO configuration.


Problem 2:

The second problem that you will face is that the installer hangs partway through, at the Directory Services (ADAM) installation step.



On checking the vminst.log file in %temp%, you will find the following:

VMware VirtualCenter-build-1312298: 09/22/13 18:33:46 Begin Logging
VMware VirtualCenter-build-1312298: 09/22/13 18:33:46 --- CA exec: VMAdamInstall
VMware VirtualCenter-build-1312298: 09/22/13 18:33:46 Getting Property CustomActionData = 603;603;C:\Users\mike\AppData\Local\Temp\{E1F05550-4238-4378-87F0-105147A251D9};C:\Windows\SysWOW64\;C:\Windows\system32\;C:\Windows\ADAM\
VMware VirtualCenter-build-1312298: 09/22/13 18:33:46 setupApp = [C:\Windows\system32\ocsetup.exe]
VMware VirtualCenter-build-1312298: 09/22/13 18:33:46 --- function: SetupComponentOnWindows
VMware VirtualCenter-build-1312298: 09/22/13 18:33:46 [C:\Windows\system32\cmd.exe /c start /w C:\Windows\system32\ocsetup.exe DirectoryServices-ADAM /passive /norestart]
VMware VirtualCenter-build-1312298: 09/22/13 18:33:46 Util_Launch::Wait: 1 Hide: 1 TimeOut: -1
VMware VirtualCenter-build-1312298: 09/22/13 18:33:46 Found "C:\Windows\system32\cmd.exe"
VMware VirtualCenter-build-1312298: 09/22/13 18:33:46 Attempting to launch ["C:\Windows\system32\cmd.exe" /c start /w C:\Windows\system32\ocsetup.exe DirectoryServices-ADAM /passive /norestart]

 

Resolution

On examining the log files, it turns out that ocsetup.exe is not included in Server 2012 R2 as it was in previous versions. To resolve the issue, copy the ocsetup.exe file from a Server 2008 R2 DVD into the *system32* directory on your Server 2012 R2 machine; after re-running setup, vCenter Server installs successfully.


Note: If, after putting the ocsetup.exe file in place, you still face problems running the setup, reboot your machine and try again. This time you will be able to run the setup without any difficulty.


Issue:


  • Linux based virtual machines fail to take quiesced snapshots after upgrading ESXi and VMware Tools to 5.1.
  • Using the vSphere Client to take a quiesced snapshot fails immediately with the error:

  The guest OS has reported an error during quiescing. The error code was: 3 The error message was: Error when enabling the sync provider. 

  • The hostd.log of the host running the virtual machine contains errors similar to:

[3CB9EB90 verbose 'vm:/vmfs/volumes/<datastore>/<VMNAME>/<VMNAME>.vmx'] Handling message _vmx##: The guestOS has reported an error during quiescing.

--> The error code was: 3

--> The error message was: Error when enabling the sync provider.


Cause

This issue occurs due to a problem with the FIFREEZE/FITHAW ioctl feature within the guest that is used to quiesce the Linux filesystem, affecting kernels 2.6.32-24 and lower.


Resolution

To resolve this issue, update the kernel of your Linux guest virtual machine to 2.6.35-22 or higher.
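You can check the running kernel version inside the guest before deciding which route to take:

  uname -r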

To work around this issue without updating the kernel:


  1. Open the tools.conf file, located in the /etc/vmware-tools directory, using a text editor.

  Note: If the tools.conf file does not exist, create a new, empty file.

  2. Add these lines to the file:

[vmbackup]
enableSyncDriver = false