VMware Cloud Community
UofMVSI
Contributor

Disable vSAN observer

This might seem like a noob question, but I can't seem to find an answer anywhere. I attempted to upgrade one of our ESXi hosts in the test environment and it would not update. I contacted VMware and they found our vSAN Observer had been run with the --forever switch, and the ESXi drive had filled up with vSAN Observer logs. We deleted them and I was able to upgrade the host. I also worked with VMware on disabling the Observer so it didn't generate logs anymore. Well, it is still generating logs. I'll be upgrading the prod hosts in a few weeks and don't want to run into the same issue. Can someone tell me how to disable this if someone before me ran vSAN Observer with the --forever switch? Or am I misunderstanding this whole concept?

2 Replies
TheBobkin
Champion

Hello UofMVSI,

Firstly, Observer saves collected data to vCenter, not to the ESXi hosts.

Second, Observer stops generating data once you quit RVC, stop the Observer (Ctrl+C), or reboot the vCenter, so this doesn't sound like the cause of the files you speak of.
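
On that note, if someone does need to run it again, bounding the capture from RVC is a safer habit than --forever. The option names below are from memory, so do check vsan.observer --help on your RVC build before relying on them:

# From RVC on the vCenter, run against the target cluster;
# --max-runtime caps the capture (in hours) instead of letting it run indefinitely
vsan.observer <path-to-cluster> --run-webserver --force --max-runtime 2

# or simply Ctrl+C a running Observer and quit RVC once you have what you need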

What you are likely referring to are the observer.gz files that are part of vsantraces. In older versions of vSAN (e.g. 6.0-6.2), vsantraces could in some instances run quite hot, in that it would not start removing older files until its available capacity was reached or nearly reached. This was by design: traces are so verbose that in a busy cluster they might only hold minutes' worth of logging data, so retaining as much as possible is beneficial.
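
If you want to see where a host is writing its vSAN traces and how many files it retains, esxcli should show you - this assumes your build has the vsan trace namespace, so verify it on your version first:

# On the ESXi host: show the trace directory, rotation count and max trace file size
esxcli vsan trace get

# then check the sizes of the files in whatever directory it reports
ls -lh <trace directory reported above>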

How this log data is managed in 6.5 and later appears to be a non-issue, as I cannot recall the last time I had to troubleshoot it, nor do I know of a colleague who has brought it up in the last year (I am GSS-vSAN-EMEA).

This may be relevant to the previous issue you spoke of:

VMware Knowledge Base

If you are concerned, you can of course check the usage of your active RAMdisks using vdf and/or, if you are storing traces somewhere else, the datastores using df. Additionally, looking at /var/log/vobd.log for messages about anything running out of space or being unable to write due to storage being full may help ease your concerns.
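
Concretely, from an SSH session on the host that would look something like this (standard busybox tools on ESXi, nothing vSAN-specific assumed):

# RAMdisk usage - vsantraces often lives on one of these
vdf -h

# datastore/volume usage, if traces have been redirected to a datastore
df -h

# any out-of-space complaints the host has logged
grep -iE "out of space|no space|full" /var/log/vobd.log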

Bob

UofMVSI
Contributor

Thanks for the detailed response, Bob! It was my understanding from the VMware vSAN tech I spoke with that the generation of these vSANObserver.gz files was the result of the Observer running with the --forever switch, so it sounds like I was misinformed. We are on vSAN 6.2, as you mentioned, so once we can upgrade the environment it will be nice to not have to deal with this issue any longer.
