VMware Modern Apps Community
bradleyka93
Enthusiast

How does alert fire time affect mcount()

I'm having difficulty understanding how the "alert fires if condition true for x minutes" field affects mcount(). Say I have a condition as follows:

     mcount(30m, ts(My_awesome_timeseries)) < 25

From what I understand, this evaluates to true (1) whenever fewer than 25 data points have been reported in the last 30 minutes.
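To check my understanding, here's a rough Python sketch of what I think mcount() over a 30m window is doing (just my mental model, not how Wavefront actually evaluates it; the timestamps and point counts are made up):

    from datetime import datetime, timedelta

    def mcount_30m(report_times, now):
        # Count how many points were reported in the trailing 30-minute window.
        window_start = now - timedelta(minutes=30)
        return sum(1 for t in report_times if window_start < t <= now)

    # Example: a series that has only been reporting once a minute for 10 minutes.
    now = datetime(2024, 1, 1, 12, 0)
    report_times = [now - timedelta(minutes=m) for m in range(10)]

    print(mcount_30m(report_times, now))        # 10
    print(mcount_30m(report_times, now) < 25)   # True -> alert condition is met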

Now if I set the 'alert fires if condition is true for' to the following:

1.) 5 mins

2.) 10 mins

3.) 30 mins

4.) 60 mins

When does the alert actually go off in each case? Here are my guesses:

1.) 35 mins

2.) 40 mins

3.) 60 mins

4.) 90 mins

I think this is what would happen because, first, you need 30 data points in order to do mcount() over 30m. So if you're sending data to Wavefront at one-minute intervals, that would take 30 minutes. Once you have that, each additional minute brings a new data point, so you can calculate mcount() over a new 30m range. With the alert firing time set to 5 mins, you would need 5 of those ranges, which would take 5 additional minutes. This would be so much easier to explain on a whiteboard. Is this making any sense?

1 Reply

parag_s
Level 2

Hi Vijay,

The alert is checked every minute. Say the series starts reporting at time t at a frequency of one point per minute, and the alert engine starts checking at t+1. It will evaluate mcount(30m, <>) < 25: it looks back 30 minutes, counts how many points were reported, and finds the count is less than 25, since the series just started reporting and might only have 1-2 points.

For the case where 'alert fires if condition is true for 5 minutes':

(5 min after the series started reporting) - The count of points at (t+1)+5 is still less than 25, so the alert fires.

(10 min after the series started reporting) - Points continue to come in every minute; at (t+1)+5+5 the count reported is still less than 25 and the alert keeps firing.

(15 min after the series started reporting) - Points continue to come in every minute; at (t+1)+5+5+5 the count reported is still less than 25 and the alert keeps firing.

(20 min after the series started reporting) - Points continue to come in every minute; at (t+1)+5+5+5+5 the count reported is still less than 25 and the alert keeps firing.

(25 min after the series started reporting) - Points continue to come in every minute; at (t+1)+5+5+5+5+5 there are now 25 points reported, so the query no longer meets the alert condition of < 25 and the alert does not fire.

So at t+26 minutes, provided the reporting frequency is one point per minute, mcount(30m, ts(My_awesome_timeseries)) < 25 should see 25 points reported, the alert condition will not be met, and the alert does not fire.
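If it helps, here is a rough Python sketch of the minute-by-minute checks described above. It is a simplified model rather than the actual Wavefront alert engine, and it assumes the check at t+m sees m-1 points in the trailing 30-minute window (so the check at t+26 sees 25 points, as in the walkthrough):

    THRESHOLD = 25        # the "< 25" part of the alert condition
    TRUE_FOR = 5          # 'alert fires if condition is true for 5 minutes'

    firing = False
    first_true = None     # check minute at which the condition first became true
    for m in range(1, 31):                    # alert checks at t+1 .. t+30
        points_in_window = m - 1              # assumed point count in the last 30m
        condition_met = points_in_window < THRESHOLD

        if condition_met:
            if first_true is None:
                first_true = m
            if not firing and m - first_true >= TRUE_FOR:
                firing = True
                print(f"t+{m}: alert fires ({points_in_window} points in the last 30m)")
        else:
            if firing:
                print(f"t+{m}: condition no longer met ({points_in_window} points in the last 30m)")
            firing = False
            first_true = None

Running this prints a fire at t+6 (5 minutes after the condition first evaluated true at t+1) and shows the condition no longer met at t+26, which lines up with the steps above.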

Take a look at the alerting FAQ as well: Alert States and Lifecycle.

Let me know if you have any questions on this.

Parag
