
Rajeev's Blog


This issue occurs when upgrading VCSA 6.0 to VCSA 6.5 if vPostgres has a customized DB.


Connect to the VCDB:


VMware Knowledge Base


Run this command:


VCDB=# \dv


Check how many view items are there, then drop them with CASCADE.


Test manually dropping the VPXV_VMS view in VCDB:



ERROR:  cannot drop view vpxv_vms because other objects depend on it

DETAIL:  view "DCS_BV_VIEW3" depends on view vpxv_vms

HINT:  Use DROP ... CASCADE to drop the dependent objects too.


NOTICE:  drop cascades to view "DCS_BV_VIEW3"



ERROR:  view "vpxv_vms" does not exist

VCDB=# \q

root@VC [ /var/log/vmware/vpxd ]#
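The failed and successful attempts above can be sketched as a single psql session. This is a hedged reconstruction: the DROP sequence is inferred from the errors shown, so verify the dependent views and take a VCDB backup before dropping anything.

```sql
-- List the views, then drop the customized one together with its dependents.
\dv
DROP VIEW vpxv_vms;           -- fails: "DCS_BV_VIEW3" depends on it
DROP VIEW vpxv_vms CASCADE;   -- also drops the dependent view "DCS_BV_VIEW3"
DROP VIEW vpxv_vms;           -- now errors with "does not exist", confirming the drop
\q
```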





The following error appears while starting the vCenter services.

vmware-vpxd: Waiting for vpxd to start listening for requests on 8089

Waiting for vpxd to initialize: ..........................................................Fri Aug 17 15:00:05 EDT 2018 Captured live core: /var/core/live_core.vpxd.2804.08-17-2018-15-00-05

[INFO] writing vpxd process dump retry:2 Time(Y-M-D H:M:S):2018-08-17 19:00:03

.Fri Aug 17 15:00:16 EDT 2018 Captured live core: /var/core/live_core.vpxd.2804.08-17-2018-15-00-16

[INFO] writing vpxd process dump retry:1 Time(Y-M-D H:M:S):2018-08-17 19:00:15



vmware-vpxd: vpxd failed to initialize in time.

vpxd is already starting up. Aborting the request.


Stderr =

2018-08-17T19:00:26.608Z {

"resolution": null,

"detail": [


"args": [

"Command: ['/sbin/service', u'vmware-vpxd', 'start']\nStderr: "


"id": "install.ciscommon.command.errinvoke",

"localized": "An error occurred while invoking external command : 'Command: ['/sbin/service', u'vmware-vpxd', 'start']\nStderr: '",

"translatable": "An error occurred while invoking external command : '%(0)s'"



    "componentKey": null,

"problemId": null


ERROR:root:Unable to start service vmware-vpxd, Exception: {

"resolution": null,

"detail": [


"args": [



"id": "install.ciscommon.service.failstart",

"localized": "An error occurred while starting service 'vmware-vpxd'",

  "translatable": "An error occurred while starting service '%(0)


Solution: check the domain controller connectivity between the VC and PSC.
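A few quick checks from the vCenter shell can rule out basic PSC reachability first. This is a hedged sketch; psc.example.com is a placeholder for your PSC's FQDN:

```shell
# Name resolution and basic reachability of the PSC
nslookup psc.example.com
ping -c 3 psc.example.com

# The PSC's HTTPS/SSO endpoint should accept connections
curl -kv https://psc.example.com:443/websso/ -o /dev/null
```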


cpu2:32999)0x4390c119b660:[0x4180163128c3]VTDQISync@vmkernel#nover+0xf7 stack: 0x1

cpu2:32999)0x4390c119b6a0:[0x4180163137b2]VTDIRWriteIRTE@vmkernel#nover+0x8e stack: 0x2e

cpu2:32999)0x4390c119b6d0:[0x418016313895]VTDIRSteerVector@vmkernel#nover+0x61 stack: 0x43004d129f10

cpu2:32999)0x4390c119b700:[0x4180162e96c9]IOAPICSteerVector@vmkernel#nover+0x59 stack: 0x1c00

cpu2:32999)0x4390c119b740:[0x418016057514]IntrCookie_SetDestination@vmkernel#nover+0x174 stack: 0x4


VMware Knowledge Base

In the vpxd.log file, you see entries similar to:


2012-04-02T13:07:49.438+02:00 [02248 info 'Default' opID=66183d64] [VpxLRO] -- BEGIN task-internal-252 -- -- vim.SessionManager.acquireSessionTicket -- 52fa8682-47e0-2566-fb05-6192cb2c22f9(5298e245-ffb6-f7f8-e8a0-dedfbe369255)
2012-04-02T13:07:49.579+02:00 [02068 info 'Default'] [VpxLRO] -- BEGIN task-internal-253 -- host-94 -- VpxdInvtHostSyncHostLRO.Synchronize --
2012-04-02T13:07:49.579+02:00 [02068 warning 'Default'] [VpxdInvtHostSyncHostLRO] Connection not alive for host host-94
2012-04-02T13:07:49.579+02:00 [02068 warning 'Default'] [VpxdInvtHost::FixNotRespondingHost] Returning false since host is already fixed!
2012-04-02T13:07:49.579+02:00 [02068 warning 'Default'] [VpxdInvtHostSyncHostLRO] Failed to fix not responding host host-94
2012-04-02T13:07:49.579+02:00 [02068 warning 'Default'] [VpxdInvtHostSyncHostLRO] Connection not alive for host host-94
2012-04-02T13:07:49.579+02:00 [02068 error 'Default'] [VpxdInvtHostSyncHostLRO] FixNotRespondingHost failed for host host-94, marking host as notResponding
2012-04-02T13:07:49.579+02:00 [02068 warning 'Default'] [VpxdMoHost] host connection state changed to [NO_RESPONSE] for host-94
2012-04-02T13:07:49.610+02:00 [02248 info 'Default' opID=66183d64] [VpxLRO] -- FINISH task-internal-252 -- -- vim.SessionManager.acquireSessionTicket -- 52fa8682-47e0-2566-fb05-6192cb2c22f9(5298e245-ffb6-f7f8-e8a0-dedfbe369255)
2012-04-02T13:07:49.719+02:00 [02068 info 'Default'] [VpxdMoHost::SetComputeCompatibilityDirty] Marked host-94 as dirty.


This issue may occur if heartbeat packets are not received from the host before the one minute timeout period expires. These heartbeat packets are UDP packets sent over port 902.


This issue may also occur when the Windows firewall is enabled and the ports are not configured.



To resolve this issue, check the Windows Firewall on the vCenter Server machine. If ports are not configured, disable the Windows Firewall.


If ports are configured, verify if network traffic is allowed to pass from the ESXi/ESX host to the vCenter Server system, and that it is not blocking UDP port 902.


To perform a basic verification from the guest operating system perspective:
  1. Click Start > Run, type wf.msc, and click OK. The Windows Firewall with Advanced Security Management console appears.
  2. In the left pane, click Inbound Rules.
  3. Right-click the VMware vCenter Server -host heartbeat rule and click Properties.
  4. In the Properties dialog, click the Advanced tab.
  5. Under Profiles, ensure that the Domain option is selected.

VMware Knowledge Base



In the /var/log/hostd.log file, you see entries similar to:

Failed to get physical location of SCSI disk: Failed to get location information for naa.600c0ff00025d308b29de55501000000lsu-hpsa-plugin Unknown error


  • The /var/log/vpxa.log file contains errors similar to:
YYYY-MM-DDT<time> warning vpxa[7DD7FB70] [Originator@6876 sub=hostdcnx] [VpxaHalCnxHostagent] Could not resolve version for authenticating to host agent
YYYY-MM-DDT<time> verbose vpxa[FFD40AC0] [Originator@6876 sub=hostdcnx] [VpxaHalCnxHostagent] Creating temporary connect spec: localhost:443
YYYY-MM-DDT<time> verbose vpxa[FFD40AC0] [Originator@6876 sub=vpxXml] [VpxXml] Error fetching /sdk/vimServiceVersions.xml: 503 (Service Unavailable)
YYYY-MM-DDT<time> warning vpxa[FFD40AC0] [Originator@6876 sub=Default] Closing Response processing in unexpected state: 3
This issue occurs when the upgrade replaces esx-base but keeps the higher version of the lsu plugins.
This issue is resolved in VMware ESXi 6.0 Update 3.
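To confirm which lsu plugin versions survived the upgrade, the installed VIBs can be listed and compared against esx-base (a hedged sketch):

```shell
# Compare the lsu-* plugin versions against the esx-base version
esxcli software vib list | grep -iE 'lsu|esx-base'
```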
VMware Knowledge Base

This issue occurred during an upgrade from 6.0 to 6.5.

It happens because the root password has expired.


Solution:- Reset the root password of the PSC/VC.
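On the appliance, the expiry status can be checked and the password reset from the root shell. A hedged sketch using standard Linux commands available on the VCSA/PSC:

```shell
chage -l root   # shows whether the root password has expired
passwd root     # sets a new root password
```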


For more information, see these KBs:


VMware Knowledge Base


VMware Knowledge Base

"/storage/db/vmware-vmdir/data.mdb', '[Errno 28] No space left on device"


/var/log/firstboot/vmafd-firstboot.py_XXXX_stderr.log contains the following error message:


Error: [('/storage/db/cis-export-folder/vmafd/data/vmdir/data.mdb', '/storage/db/vmware-vmdir/data.mdb', '[Errno 28] No space left on device')]
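Before retrying the firstboot, it is worth confirming which partition is actually full (a hedged sketch; the listed mount points are the usual VCSA storage volumes):

```shell
# Errno 28 points at a full filesystem; check the appliance storage volumes
df -h /storage/db /storage/core /storage/log
```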


Multiple issues can occur if a Platform Services Controller has more than 100,000 tombstone entries; below this threshold, the symptoms in this article are likely unrelated.


To determine the number of tombstone entries on a Platform Services Controller Appliance, run this command:


/opt/likewise/bin/ldapsearch -H ldap://PSC_FQDN -x -D "cn=administrator,cn=users,dc=vsphere,dc=local" -w 'password' -b "cn=Deleted Objects,dc=vsphere,dc=local" -s sub "" -e 1.2.840.113556.1.4.417 dn| perl -p00e 's/\r?\n //g' | grep '^dn' | wc -l


For more information, see KB 52387.


VMware Knowledge Base

The ESXi host shows event warnings matching the event below:

   Agent can't send heartbeats: No route to host.


There may be no other symptoms relating to this issue.


Workaround:- Enable the failback option in the port group configuration (vSwitch settings).
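From the CLI, failback can be re-enabled with esxcli at the vSwitch or port group level. A hedged sketch; the vSwitch and port group names are examples:

```shell
# Re-enable failback at the vSwitch level...
esxcli network vswitch standard policy failover set -v vSwitch0 --failback true
# ...or on the specific port group
esxcli network vswitch standard portgroup policy failover set -p "Management Network" --failback true
```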



Unable to vMotion a virtual machine from one host to another. The vMotion activity fails with the following errors:

Error code: "The source detected that the destination failed to resume."
Heap dvfilter may only grow by 33091584 bytes (105325400/138416984), which is not enough for allocation of 105119744 bytes
vMotion migration [-1407975167:1527835473000584] failed to get DVFilter state from the source host <>
vMotion migration [-1407975167:1527835473000584] failed to asynchronously receive and apply state from the remote host: Out of memory.
Failed waiting for data. Error 195887124. Out of memory


As a workaround, configure a larger heap size on a suitable target host (one that can be rebooted after making the change).

- To increase the heap size, run the following command on the target host:
esxcfg-module -s DVFILTER_HEAP_MAX_SIZE=276834000 dvfilter
- This requires a reboot of the ESXi host to take effect.
- Once the target host is back up, try vMotioning the affected VM to it again and see if it succeeds.
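The configured module option can be verified before and after the reboot (a hedged sketch):

```shell
esxcfg-module -s DVFILTER_HEAP_MAX_SIZE=276834000 dvfilter   # set the larger heap, as above
esxcfg-module -g dvfilter                                    # show the configured module options
```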


This is a known issue that the NSX team has been working on for a while. According to VMware, the default heap size is increased in ESXi 6.7 to resolve this issue.

In vSphere 6.0, you may sometimes see the inventory through the thick client while the Web Client shows it empty.


This is because of the heap size of the Web Client service.


Increase heap size


In Windows, locate the file C:\Program Files\VMware\vCenter Server\visl-integration\usr\sbin\cloudvm-ram-size.bat and run:

cloudvm-ram-size.bat -C XXX vspherewebclientsvc (where XXX is the desired heap size in MB).


Virtual machine hung

Posted by RajeevVCP4 Mar 26, 2018

If you want to power off a hung virtual machine, try these commands.


Method-1:- Take the process ID and kill it:


ps | grep vmx

kill -9 <pid>   (for example: kill -9 300731)



Method-2:- Take the VMID and use these commands:



vim-cmd vmsvc/getallvms



vim-cmd vmsvc/power.off <vmid>



Method-3 :- By world ID



esxcli vm process list   (note the World ID)



esxcli vm process kill --type=soft --world-id=<worldID>
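The kill types escalate in severity; a common approach (an assumption, not stated in the original) is to try them in order, giving each a minute to take effect. The World ID comes from esxcli vm process list:

```shell
esxcli vm process kill --type=soft --world-id=<worldID>    # polite shutdown request first
esxcli vm process kill --type=hard --world-id=<worldID>    # immediate kill
esxcli vm process kill --type=force --world-id=<worldID>   # last resort
```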

From the OBFL logs:


5:2018 Mar 17 13:13:12 MST:3.1(21d):selparser:1688: selparser.c:706: # 2D 02 00 00 01 02 00 00 D7 76 AD 5A 33 00 04 07 00 00 00 00 6F A5 92 11 # 22d | 03/17/2018 13:13:11 | BIOS | Processor #0x00 | Configuration Error |  | Asserted

5:2018 Mar 17 22:25:39 MST:3.1(21d):avct_server:1589: callback_http: new client connected, wsi 0x40e03778

5:2018 Mar 17 22:25:39 MST:3.1(21d):avct_server:13538: Client supports encrypted/unencrypted kbd/mouse.


From SEL


22d | 03/17/2018 13:13:11 | BIOS | Processor #0x00 | Configuration Error |  | Asserted


This is a Cisco bug (CSCuz55148); see VCE KB 000004971.


Solution:- Proactively plan to replace both CPUs.


MCA error detected via CMCI

Posted by RajeevVCP4 Mar 23, 2018

Condition:- Cisco UCS B200 M3



cpu13:5344825)MCE: 1020: cpu13: MCA error detected via CMCI (Gbl status=0x0): Restart IP: invalid, Error IP: invalid, MCE in progress: no.

cpu13:5344825)MCE: 190: cpu13: bank7: status=0x8c00004000010091: (VAL=1, OVFLW=0, UC=0, EN=0, PCC=0, S=0, AR=0), Addr:0x1421652080 (valid), Misc:0x42389a00 (valid)

2018-03-16T12:36:38.906Z cpu13:5344825)MCE: 199: cpu13: bank7: MCA recoverable error (CE): "Memory Controller Read Error on Channel 1."


CIMC | Processor IERR #0x99 | Predictive Failure asserted | Asserted

5:2018 Mar 16 23:13:53 GMT:3.1(21d):selparser:1724: selparser.c:706: # DA 01 00 00 01 02 00 00 B1 4F AC 5A 20 00 04 24 95 00 00 00 7F 05 FF FF # 1da | 03/16/2018 23:13:53 | CIMC | Platform alert LED_BLADE_STATUS #0x95 | LED color is amber | Asserted



CPU 1 : MCA_ERR_SRC_LOG : 0xc0000000

CPU 2 : READ MCA_ERR_SRC_LOG Register : FAILED : RetVal = 0 : CC = 0x81



Solution:- Replace the faulty system board along with the TPM.

Scenario:- When using Cisco N3K/N9K switches, the teaming policy mostly used is IP hash; but after a NIC/IP change, the vSwitch reverted to the originating-port-ID policy.


Try changing it back with these commands.


To set the NIC teaming policy on a virtual switch on an ESXi 5.x host:

  • To list the current NIC teaming policy of a vSwitch, use the command:

    # esxcli network vswitch standard policy failover get -v vSwitch0
  • To set the NIC teaming policy of a vSwitch, use this command:
# esxcli network vswitch standard policy failover set -l policy -v vSwitchX

For example, to set the NIC teaming policy of a vSwitch to IP hash:


# esxcli network vswitch standard policy failover set -l iphash -v vSwitch0

Note: Available policy options:
  • explicit = Use explicit failover order
  • portid = Route based on originating port ID (this is the default setting)
  • mac = Route based on source MAC hash
  • iphash = Route based on IP hash (only to be used with an EtherChannel/port channel)

To Set the NIC teaming policy on a Port Group

  1. To list the current NIC teaming policy of a port group, run this command:

    esxcli network vswitch standard portgroup policy failover get -p "Management Network"
  2. To set the NIC teaming policy of a port group, run this command:

    esxcli network vswitch standard portgroup policy failover set -p "Management Network" -l "Policy Options"

Here, the policy option to use is iphash.


VMware Knowledge Base

In vCenter Server 6.0 U3b, from the vpxd logs:



--> Panic: Win32 exception: Access Violation (0xc0000005)

--> Read (0) at address 0000000000000058

--> rip: 00007ffde6a18cc0 rsp: 0000000018ede918 rbp: 00000000113f4a40


This issue occurs due to Microsoft SQL databases not supporting shared connections.


Solution:- Upgrade vCenter Server/PSC to 6.0 U3c or U3d.


For more information, refer to:


VMware Knowledge Base