
Reducing On-call Alert Fatigue with Deduplication

Alert noise is one of the most common on-call complaints, leading to fatigue and burnout. This article aims to help teams address the problem.

What is alert fatigue?

Most organizations today run an expansive set of tools to monitor their applications and services, tracking system metrics, events, logs, and more to keep tabs on how their systems are doing. But it is humanly impossible to supervise all of these dashboards constantly. So when a tool detects anything even remotely important, it sends the team a notification. This, in turn, lets engineering teams know how reliable their systems are and be proactive in avoiding downtime.

But issues arise when engineers start to get flooded with alerts from their monitoring setup. The volume of alerts that are merely informational and not actionable is far higher than the volume of actual incidents that need immediate action.

So a typical day in the life of an on-call engineer involves wading through an ocean of alerts on their incident management platform of choice. Engineers who have experienced this know how overwhelming it can get: the really important incidents get lost in the superfluous alert noise. This is alert fatigue.

Alert noise can kill on-call productivity

Alert fatigue has become an increasingly painful and widespread problem for DevOps and SRE teams, given the amount of data available to them. The whole point of having monitoring tools send alerts is to build a culture of proactive incident management, but unchecked noise slowly undermines that very objective.

You know you have a problem to fix when the volume of low-priority warning alerts exceeds the number of actionable alerts to such an extent that the real, high-severity incidents get detected much later, or not at all.

It follows that on-call engineers who respond to these incidents must not be overloaded with alert noise.

The problem then becomes finding a way to capture all the data while ensuring you are notified only for the actionable alerts; in essence, finding a tool that can distinguish between alerts and incidents.
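One way to picture this alert-versus-incident distinction is a triage layer that stores every alert for later analysis but surfaces only the actionable ones. Here is a minimal Python sketch; the severity labels and alert names are illustrative, not any particular tool's API:

```python
# Severities considered actionable enough to page the on-call engineer.
# (Illustrative threshold; real tools let you tune this per service.)
ACTIONABLE_SEVERITIES = {"critical", "high"}

def triage(alerts):
    """Store everything, but return only the alerts worth paging for."""
    stored = list(alerts)  # nothing is dropped: all data is kept for analysis
    pageworthy = [a for a in stored if a["severity"] in ACTIONABLE_SEVERITIES]
    return stored, pageworthy

alerts = [
    {"name": "disk_warning", "severity": "warning"},
    {"name": "api_down", "severity": "critical"},
    {"name": "cpu_info", "severity": "info"},
]
stored, pageworthy = triage(alerts)
print(len(stored), len(pageworthy))  # 3 1
```

All three alerts are captured, but only the critical one would wake anyone up.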

No Engineer wants to be woken up at 3AM only to find out that it is a false alarm.

How Kevin Loses His Sanity Because of Alert Fatigue: An On-call Story

Let’s take a look at this in an illustrative way.

This is Kevin, and he is an SRE (crowd cheers? Hahaha). He deals with services and makes sure they are healthy. And to top it all off, he needs to do this without losing his sanity.

An alert woke him up. Another one woke him up even more.

And this is a Herculean task when he is being woken up by a production alert at 1AM.

He looks like a zombie himself, and the King of Pop’s Thriller ringing on his phone keeps up with the theme of this unfortunate series of events.

Don't judge him. ('Cause this is THRILLER 🧟 on loop.)

So Kevin sees that the service sent a warning message for CPU usage. It will probably take a week to move into the critical stage. He takes steps to fix this by reaching out to his team, but the service keeps sending him notifications, disrupting his sleep.

While he understands that the alerting tool is just doing its job by pinging him ruthlessly until he wakes up to his responsibilities, he sees no reason to lose his sleep or sanity unless there's a serious production issue (he secretly prays that this isn't the case every time the phone rings).

Here's how he lost his sanity in just about an hour. I'm pretty sure he's a little sick of Thriller by now.

Timeline of D-Day:

  • 12:58:59AM Thriller
  • 01:00:22AM Sleep deprived, yet slapping himself in the face to stay awake and check audit logs
  • 01:21:31AM Woke up from an unexpected snooze; found the spacebar no longer works due to a salivary short circuit
  • 01:30:01AM Copies spaces from websites with the mouse and pastes them into grep to filter logs
  • 01:36:03AM Eureka moment, followed by the thought: "Oh shoot, I'm desperate now"
  • 01:40:40AM Food delivery arrives. The high point of this incident so far.
  • 01:40:41AM Thriller
  • 01:47:12AM BURP
  • 01:52:15AM Coffee Refill.
  • 01:52:34AM Thriller
  • 02:00:44AM Thriller
  • 02:12:49AM Thriller
  • 02:33:52AM Thriller
  • 02:45:53AM Thriller
  • 02:52:53AM Thriller Thriller Thriller
  • 02:56:54AM Thriller Thriller Thriller Thriller Thriller
  • 03:03:00AM Plays dunk-the-phone-in-coffee. Sparks.
  • 03:08:17AM Wakes up the duck. The duck is not so thrilled.
  • 03:10:29AM Hot air to the face... either from the duck or the CPU exhaust
  • 03:27:05AM Manages to find the fix
  • 03:29:30AM Figures out that his phone survived the 6 inch dunk
  • 03:37:15AM Face hits the pillow as he contemplates throwing his phone out of the window

Kevin Configures De-duplication in Squadcast

Kevin sees that his alerts are pouring in from Prometheus. He realises he can't keep dunking his phone in coffee every time alerts flood in.

He decides to deal with the alert noise once and for all after resolving the prod issue.

He manages to configure deduplication rules on his platform.

Prometheus was complaining about deployment rolling updates and some completely unrelated CPU usage issues every 10 seconds or so. He executes a runbook and fixes both issues (apparently this happens about once a month).

Now he rolls up his sleeves and decides to configure de-duplication for his alerts.

  • For deployment issues, he decides to group and de-duplicate alerts based on the impacted services.
  • For CPU usage related issues, he decides to group and de-duplicate alerts based on the impacted services, but create a new alert if the same event has already occurred 50 times.

He sees that the alert payload for one specific alert relates to the deployment of that service.

He writes a rule to de-duplicate the incident for deployment errors.

He writes a similar rule for the CPU usage alerts and adds another one to fire the incident again only after it has occurred 50 times in a row.
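The grouping and re-fire behaviour described above can be sketched in plain Python. This is a hypothetical illustration of the logic, not Squadcast's actual rule syntax; the `Deduplicator` class, service names, and alert types are invented for the example:

```python
from collections import defaultdict

REFIRE_THRESHOLD = 50  # re-alert after this many duplicate CPU-usage events

class Deduplicator:
    """Group alerts by (service, alert type) and suppress duplicates."""

    def __init__(self):
        self.counts = defaultdict(int)  # (service, alert_type) -> occurrences

    def should_notify(self, service: str, alert_type: str) -> bool:
        key = (service, alert_type)
        self.counts[key] += 1
        count = self.counts[key]
        if count == 1:
            return True  # first occurrence: open a new incident
        if alert_type == "cpu_usage" and count % REFIRE_THRESHOLD == 0:
            return True  # CPU alerts re-fire after every 50 duplicates
        return False  # duplicate: merge into the existing incident

dedup = Deduplicator()
notified = sum(dedup.should_notify("checkout", "cpu_usage") for _ in range(100))
print(notified)  # first alert + re-fires at 50 and 100 -> 3
```

Deployment alerts collapse into a single incident per service, while CPU-usage alerts still resurface periodically so a slow-burning problem is not silenced forever.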

Rule that Kevin used for this (screenshot not reproduced here).

At least he won't hate Thriller now + No phone dunking + No coffee wastage + most importantly, No more alert noise!!!

TL;DR

Kevin finally manages to configure de-duplication rules for his Prometheus alerts and sets severities for incidents to get woken up for just the really _really_ important ones.

Kevin is smart. Be like Kevin.

Squadcast is an incident management tool that’s purpose-built for SRE. Create a blameless culture by reducing the need for physical war rooms, centralize SLO dashboards, unify internal and external SLIs, automate incident resolution with Squadcast Actions, and build a knowledge base to handle incidents effectively.

January 8, 2020
Prakya Vasudevan
Akilan Elango
