
    Yet another "Failed - Transport (VMDB) error -45" issue after updating ESXi host. No (ramdisk) issues

    supremedalek Novice

      Like others (Transport (VMDB) error -45: Failed to connect to peer process, Re: "Failed - Transport (VMDB) error -45: Failed to connect to peer process." after reinstall, Transport (VMDB) error -45: Failed to connect to peer process), I have upgraded an ESXi host from 6.0 to 6.5 and now cannot power on any guest.

      [root@vmhost:~] vim-cmd vmsvc/power.on 30
      Powering on VM:
      Power on failed
      [root@vmhost:~]
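
      (For reference, the VM ID 30 above comes from vim-cmd's inventory listing; here is a minimal sketch of how I got it and confirmed the power state first. The getallvms output is trimmed to the relevant columns and the datastore label is elided:)

      [root@vmhost:~] vim-cmd vmsvc/getallvms        # lists Vmid, Name, .vmx path, guest OS
      Vmid   Name   File                  ...
      30     RHCE   [...] RHCE/RHCE.vmx   ...
      [root@vmhost:~] vim-cmd vmsvc/power.getstate 30
      Retrieved runtime info
      Powered off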

      Here is the relevant excerpt from the hostd log:

      2018-06-22T14:50:42.910Z info hostd[E5C2B70] [Originator@6876 sub=vm:VigorExecVMXExCommon: Exec()'ing /bin/vmx /vmfs/volumes/52a08b50-984b4bf0-219f-d067e51ce7b7/RHCE/RHCE.vmx opID=vim-cmd-03-1411 user=root]
      2018-06-22T14:50:42.914Z info hostd[E5C2B70] [Originator@6876 sub=Libs opID=vim-cmd-03-1411 user=root] Vigor: VMKernel_ForkExec(/bin/vmx, detached=1): status=0 pid=75385
      2018-06-22T14:50:42.916Z info hostd[C2F8B70] [Originator@6876 sub=Libs] SOCKET 20 (37)
      2018-06-22T14:50:42.916Z info hostd[C2F8B70] [Originator@6876 sub=Libs] recv detected client closed connection
      2018-06-22T14:50:42.916Z info hostd[C2F8B70] [Originator@6876 sub=Libs] VigorTransportClientProcessError: Remote disconnected
      2018-06-22T14:50:42.916Z info hostd[C2F8B70] [Originator@6876 sub=vm:/vmfs/volumes/52a08b50-984b4bf0-219f-d067e51ce7b7/RHCE/RHCE.vmx] VMX did not report err via stderr
      2018-06-22T14:50:42.918Z info hostd[E9AAB70] [Originator@6876 sub=Hostsvc] Decremented SIOC Injector Flag2
      2018-06-22T14:50:42.932Z warning hostd[E9AAB70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/52a08b50-984b4bf0-219f-d067e51ce7b7/RHCE/RHCE.vmx] Failed operation
      2018-06-22T14:50:42.932Z verbose hostd[E9AAB70] [Originator@6876 sub=PropertyProvider] RecordOp ASSIGN: latestEvent, ha-eventmgr. Applied change to temp map.
      # The error message
      2018-06-22T14:50:42.932Z info hostd[E9AAB70] [Originator@6876 sub=Vimsvc.ha-eventmgr] Event 244 : Cannot power on RHCE on vmhost.domain.com. in ha-datacenter. A general system error occurred:
      2018-06-22T14:50:42.932Z info hostd[E9AAB70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/52a08b50-984b4bf0-219f-d067e51ce7b7/RHCE/RHCE.vmx] State Transition (VM_STATE_POWERING_ON -> VM_STATE_OFF)
      2018-06-22T14:50:42.932Z verbose hostd[E9AAB70] [Originator@6876 sub=PropertyProvider] RecordOp ASSIGN: disabledMethod, 30. Sent notification immediately.
      2018-06-22T14:50:42.933Z info hostd[E9AAB70] [Originator@6876 sub=Vimsvc.TaskManager] Task Completed : haTask-30-vim.VirtualMachine.powerOn-189254357 Status error
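
      Since hostd says the vmx process exited without reporting anything on stderr, I assume the next place to look is the VM's own log next to the .vmx (file name assumed to be the usual vmware.log) and the vmx binary itself:

      [root@vmhost:~] ls -l /bin/vmx                                                  # the binary hostd tried to exec
      [root@vmhost:~] ls -l /vmfs/volumes/52a08b50-984b4bf0-219f-d067e51ce7b7/RHCE/   # is a vmware.log even being written?
      [root@vmhost:~] tail -n 50 /vmfs/volumes/52a08b50-984b4bf0-219f-d067e51ce7b7/RHCE/vmware.log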

       

      Following the replies in the threads mentioned above, I checked whether I was running out of disk space:

      [root@vmhost:~] df -h 
      Filesystem   Size   Used Available Use% Mounted on
      NFS        147.6G 110.7G     36.9G  75% /vmfs/volumes/public
      VMFS-5     414.2G 288.5G    125.7G  70% /vmfs/volumes/datastore1
      VMFS-5      39.8G  10.0G     29.8G  25% /vmfs/volumes/Test
      vfat       285.8M 208.1M     77.7M  73% /vmfs/volumes/52a08b49-9fa64986-10bf-d067e51ce7b7
      vfat       249.7M 162.4M     87.3M  65% /vmfs/volumes/0e7ac47f-85c20205-e914-af5132fe5673
      vfat       249.7M 150.4M     99.3M  60% /vmfs/volumes/01cdaf72-2d3f8757-77fe-274ef8565255
      vfat         4.0G 110.3M      3.9G   3% /vmfs/volumes/56306a5f-4068abd1-ddcb-00224d98ad4f
      [root@vmhost:~]
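
      (The VM sits on volume 52a08b50-984b4bf0-219f-d067e51ce7b7, which df only lists by its friendly name; the label-to-UUID mapping can be read off the symlinks in /vmfs/volumes. A sketch, with the label and details in the output line being just an example rather than actual output:)

      [root@vmhost:~] ls -l /vmfs/volumes/ | grep 52a08b50      # which label points at the VM's volume UUID?
      lrwxr-xr-x    1 root  root  35 Jun 22 14:50 datastore1 -> 52a08b50-984b4bf0-219f-d067e51ce7b7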

      What about the ramdisk? Even though the log file never says the ramdisk is full (which is what the VMware Knowledge Base article points to), I still ran vdf -h to check the ramdisks:

      ----- 
      Ramdisk                   Size      Used Available Use% Mounted on
      root                       32M        1M       30M   6% --
      etc                        28M      960K       27M   3% --
      opt                        32M        0B       32M   0% --
      var                        48M      412K       47M   0% --
      tmp                       256M       16K      255M   0% --
      iofilters                  32M        0B       32M   0% --
      hostdstats                204M        2M      201M   1% --
      snmptraps                   1M        0B        1M   0% --
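
      (As a cross-check on vdf -h, esxcli has a ramdisk listing that, if I read its output right, also shows per-ramdisk limits and usage; output omitted here:)

      [root@vmhost:~] esxcli system visorfs ramdisk list
      ...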

      What about running out of inodes? Following another suggestion, I ran:

      [root@vmhost:~] stat -f /vmfs/volumes/52a08b49-9fa64986-10bf-d067e51ce7b7
        File: "/vmfs/volumes/52a08b49-9fa64986-10bf-d067e51ce7b7"
          ID: 236a5549191e0c58 Namelen: 127     Type: vfat
      Block size: 8192
      Blocks: Total: 36586      Free: 9943       Available: 9943
      Inodes: Total: 0          Free: 0
      [root@vmhost:~]

      But I honestly do not know whether "Inodes: Total: 0   Free: 0" means I am out of inodes. What else should I be looking for?
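
      (My working assumption is that vfat simply has no inode table, so stat -f on the FAT bootbank volumes will always report 0/0 regardless of how full they are; the visorfs ramdisks seem like the more meaningful place to check, e.g.:)

      [root@vmhost:~] stat -f /          # visorfs root; expecting real Inodes Total/Free here (assumption)
      [root@vmhost:~] stat -f /tmp       # same check for the individual ramdisks listed by vdf -h above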

      I did read in another thread about simply reinstalling the old ESXi version, but that does not explain why this is happening in the first place.