vSphere 7: unable to stop the syslog service


Has anyone else experienced this issue?

I was trying to stop syslog from both the vCenter GUI and the ESXi host client, but I get the error below. I can stop/start the NTP service just fine, so it really appears to be specific to the syslog service.

This occurs on all my hosts running vCenter / ESXi 7.

vCenter 7.0.1, Build 16858589

I SSHed into the host and checked the service...

[root@ESX1:~] ps -Cc|grep vmsyslogd

17375454  17375454  grep                  grep vmsyslogd

17275879  17275879  vmsyslogd             /bin/python /usr/lib/vmware/vmsyslog/bin/vmsyslogd.pyc start

17275878  17275878  wdog-17275879         /bin/python /usr/lib/vmware/vmsyslog/bin/vmsyslogd.pyc start
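If you want to grab both PIDs in one go, `grep '[v]msyslogd'` keeps grep itself out of the match. A minimal sketch of pulling the two PIDs out of that output; it parses the captured sample from above, since `ps -Cc` only exists on an ESXi host, where you would pipe the live output instead:

```shell
#!/bin/sh
# Sample of the `ps -Cc` output shown above. On a real ESXi host you would
# use:   ps -Cc | grep '[v]msyslogd'
# (the [v] bracket trick stops grep from matching its own process)
sample='17275879  17275879  vmsyslogd             /bin/python /usr/lib/vmware/vmsyslog/bin/vmsyslogd.pyc start
17275878  17275878  wdog-17275879         /bin/python /usr/lib/vmware/vmsyslog/bin/vmsyslogd.pyc start'

# Column 1 is the PID, column 3 is the process name
syslog_pid=$(printf '%s\n' "$sample" | awk '$3 == "vmsyslogd" {print $1}')
wdog_pid=$(printf '%s\n' "$sample" | awk '$3 ~ /^wdog-/ {print $1}')

echo "vmsyslogd PID: $syslog_pid"
echo "watchdog PID:  $wdog_pid"
```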

Looking at the vCenter logs

2020-10-16T10:56:45.956333+00:00 vcsa vpxd-main - - - 2020-10-16T10:56:45.956Z info vpxd[19461] [Originator@6876 sub=Default opID=kg76l8h1-24611-auto-izp-h5:70009269-56] [VpxLRO] -- ERROR task-14509 -- serviceSystem-15827 -- vmodl.fault.InvalidArgument:
--> Result:
--> (vmodl.fault.InvalidArgument) {
-->    faultCause = (vmodl.MethodFault) null,
-->    faultMessage = <unset>,
-->    invalidProperty = "id"
-->    msg = "Received SOAP response fault from [<cs p:00007f3d705d2e40, TCP:>]: stop
--> Received SOAP response fault from [<cs p:000000e3585b26a0, TCP:localhost:8307>]: stop
--> A specified parameter was not correct: id"
--> }
--> Args:
-->
--> Arg id:
--> "vmsyslogd"

[root@ESX1:~] /usr/lib/vmware/vmsyslog/bin/vmsyslogd status

Watchdog fork failed 28 (No space left on device)

[root@ESX1:~] df -l

Filesystem         Bytes          Used    Available Use% Mounted on

VMFS-5      525059751936   64702382080 460357369856  12% /vmfs/volumes/ESX1-SSD1

VMFS-L        6442450944    1715470336   4726980608  27% /vmfs/volumes/LOCKER-5f12e0b8-3d2dfc46-023e-ac1f6b781582

vfat           524009472     189685760    334323712  36% /vmfs/volumes/BOOTBANK2

vfat           524009472     189874176    334135296  36% /vmfs/volumes/BOOTBANK1

vsan       3000628174848 2257722143988 742906030860  75% /vmfs/volumes/vSAN_HZ-CL01

[root@ESX1:~] vdf -h

Tardisk                  Space      Used

vmx.v00                   116M      116M

vim.v00                   139M      139M

tpm.v00                    24K       22K

sb.v00                    171M      171M

s.v00                      68M       68M

bnxtnet.v00               688K      685K

bnxtroce.v00              324K      323K

brcmfcoe.v00                2M        2M

brcmnvme.v00              124K      123K

elxiscsi.v00              548K      546K

elxnet.v00                636K      635K

i40en.v00                 604K      602K

i40iwn.v00                484K      480K

iavmd.v00                 196K      195K

igbn.v00                  320K      319K

iser.v00                  260K      259K

ixgben.v00                524K      520K

lpfc.v00                    2M        2M

lpnic.v00                 636K      635K

lsi_mr3.v00               348K      344K

lsi_msgp.v00              484K      482K

lsi_msgp.v01              552K      549K

lsi_msgp.v02              512K      511K

mtip32xx.v00              256K      252K

ne1000.v00                636K      633K

nenic.v00                 264K      261K

nfnic.v00                 576K      573K

nhpsa.v00                 612K      611K

nmlx4_co.v00              784K      781K

nmlx4_en.v00              732K      730K

nmlx4_rd.v00              340K      338K

nmlx5_co.v00                1M        1M

nmlx5_rd.v00              292K      288K

ntg3.v00                  116K      115K

nvme_pci.v00              120K      117K

nvmerdma.v00              168K      164K

nvmxnet3.v00              196K      193K

nvmxnet3.v01              172K      168K

pvscsi.v00                124K      121K

qcnic.v00                 300K      297K

qedentv.v00                 3M        3M

qedrntv.v00                 2M        2M

qfle3.v00                   2M        2M

qfle3f.v00                  1M        1M

qfle3i.v00                368K      367K

qflge.v00                 500K      498K

rste.v00                  828K      825K

sfvmk.v00                 648K      647K

smartpqi.v00              364K      362K

vmkata.v00                204K      202K

vmkfcoe.v00              1008K     1006K

vmkusb.v00                  1M        1M

vmw_ahci.v00              236K      234K

crx.v00                    12M       12M

elx_esx_.v00                2M        2M

btldr.v00                   1M        1M

esx_dvfi.v00              488K      484K

esx_nsxv.v00               35M       35M

esx_ui.v00                 14M       14M

esxupdt.v00                 1M        1M

tpmesxup.v00               12K       11K

weaselin.v00                2M        2M

loadesx.v00                56K       53K

lsuv2_hp.v00               72K       70K

lsuv2_in.v00               28K       26K

lsuv2_ls.v00                1M        1M

lsuv2_nv.v00               16K       13K

lsuv2_oe.v00               16K       13K

lsuv2_oe.v01               16K       13K

lsuv2_oe.v02               16K       13K

lsuv2_sm.v00               56K       54K

native_m.v00                2M        2M

qlnative.v00                2M        2M

vdfs.v00                   12M       12M

vmware_e.v00              188K      187K

vsan.v00                   46M       46M

vsanheal.v00                7M        7M

vsanmgmt.v00               21M       21M

xorg.v00                    3M        3M

state.tgz                  64K       60K

vmware_f.v00               31M       31M

imgdb.tgz                   1M        1M


Ramdisk                   Size      Used Available Use% Mounted on

root                       32M        2M       29M   8% --

etc                        28M      808K       27M   2% --

opt                        32M        0B       32M   0% --

var                        48M      828K       47M   1% --

tmp                       256M        9M      246M   3% --

iofilters                  32M        0B       32M   0% --

shm                      1024M        0B     1024M   0% --

crx                      1024M        0B     1024M   0% --

configstore                32M       64K       31M   0% --

configstorebkp             32M       64K       31M   0% --

vsantraces                300M      217M       82M  72% --

hostdstats                553M        7M      545M   1% --

[root@ESX1:~] stat -f /

  File: "/"

    ID: 100000000 Namelen: 127     Type: visorfs

Block size: 4096

Blocks: Total: 1055831    Free: 807322     Available: 807322

Inodes: Total: 655360     Free: 647863

Very strange. Anyone else running 7 who doesn't mind trying this?


Accepted Solutions
VMware Employee

Hi all, this is not an issue; the behaviour is by design, as syslog is a required service.

The service can be restarted but cannot be stopped.

Going forward, VMware will address the messaging of the error.

Again, nicholas1982 and carvaled, thank you for reporting this; I hope this has made things clearer.



13 Replies

Hey, hope you are doing fine. Please try this:

  1. Check the watchdog and vmsyslogd process IDs (PIDs) by running the command:

    ps -Cc|grep vmsyslogd

    You see output similar to:

    5462 5462 vmsyslogd
    4523 4523 wdog-4523
  2. Using the PIDs from Step 1, run the kill command for the watchdog process first and then the vmsyslogd process:

    For example:

    kill -9 4523
    kill -9 5462

  3. Restart the vmsyslogd process by running the command (the path shown earlier in this thread):

    /usr/lib/vmware/vmsyslog/bin/vmsyslogd


Personally, I'd never run an X.0 version, since they tend to ship with bugs. If possible, update to vSphere 7.0 Update 1.
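For reference, the three steps above can be sketched as one script. This is a dry-run sketch: it parses the sample `ps -Cc` output from the original post and only prints the commands it would issue, since actually killing vmsyslogd makes sense only on an ESXi host; remove the leading `echo`s to run it for real there:

```shell
#!/bin/sh
# Dry-run of the kill-and-restart sequence. On a real ESXi host, replace
# the sample below with:   ps -Cc | grep '[v]msyslogd'
ps_out='17275879  17275879  vmsyslogd             /bin/python /usr/lib/vmware/vmsyslog/bin/vmsyslogd.pyc start
17275878  17275878  wdog-17275879         /bin/python /usr/lib/vmware/vmsyslog/bin/vmsyslogd.pyc start'

# PID is column 1, process name is column 3
wdog_pid=$(printf '%s\n' "$ps_out" | awk '$3 ~ /^wdog-/ {print $1}')
syslog_pid=$(printf '%s\n' "$ps_out" | awk '$3 == "vmsyslogd" {print $1}')

# Kill the watchdog first so it cannot immediately respawn the daemon
echo kill -9 "$wdog_pid"
echo kill -9 "$syslog_pid"
# Then start vmsyslogd again (path as shown in the thread)
echo /usr/lib/vmware/vmsyslog/bin/vmsyslogd
```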


Thanks for the reply.

I should have pointed out that I can (and have) killed the process...

My question is more about determining whether this is a bug in vSphere 7. I am just hoping someone running it can try to stop syslog and also check the output of /usr/lib/vmware/vmsyslog/bin/vmsyslogd status

...It's not the end of the world, it's just my homelab :)

Hot Shot

Hey vMan, I can confirm I have the same issue on ESXi 7.0b Build 16324942. I will now test on 7.0 Update 1 and report back.

Nicholas VCP6
Hot Shot

I can confirm this is not just an issue with the X.0 version; I'm seeing the same issue on 7.0 Update 1.


Nicholas VCP6

Thanks for confirming, nicholas1982.

Attention VMware!

VMware Employee

vManCH, an SR has been logged on your behalf, with you and @nicholas1982 as additional contacts.


Thank you @Siddiqui_au

VMware Employee

Attention VMware!

Please note that VMTN is not intended to be an official support forum - while some users are employees, most of those are helping in their own time in an unofficial capacity.

The majority of contributors are end-users, VMTN is a "community" forum.


I am unable to raise any cases with my account, as I am just a blogger; past experience has shown that cases raised from my account never get any attention.

As a VMware enthusiast who stumbled onto a bug in a production product, I am just trying to help the company, so I thought it was worth posting here in the hope that a VMware employee would raise an internal bug / inform the right team.



VMware Employee

I wasn't referring to you creating a thread or adding your other comments (and you have had replies from other users); it was the specific "Attention VMware!" which prompted my note to you.

Hot Shot

I've now tested this with ESXi 6.7 U3 and can confirm it also exhibits the same issue; I've updated the support ticket with VMware to reflect this.


Nicholas VCP6


Thanks @siddiqui_au for looking into it.

Yes, that's the behaviour I observed, furry muff...
