August 29, 2024

On-Call

🚨 Alert Grouping

Alert grouping reduces noise and alert fatigue by consolidating related alerts into a single notification. This improves response efficiency, enhances prioritization, simplifies communication, and ultimately leads to faster incident resolution and better overall system reliability.

Alert grouping is especially helpful for organizations with more robust observability stacks that use separate monitors for different aspects of a given service: for example, a monitor for error rates, a monitor for latency, a monitor for CPU, and so on, possibly spread across multiple monitoring tools as alert sources. When something goes wrong with that service, several or all of those monitors may fire at once. Without automatic alert grouping, it's up to the responder to work out whether the alerts are related and what to do about them.

With alert grouping enabled, the responder in this example is only paged for the first alert that comes in. Because Rootly identifies that subsequent alerts are related to the initial one, the responder is not paged for each monitor that triggers; instead, the additional alerts appear as Alert Group Members nested under the original alert (referred to in Rootly as the Alert Group Leader).
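Conceptually, the behavior looks something like the sketch below. This is a simplified illustration, not Rootly's actual implementation: an incoming alert either joins an open group whose leader it matches within the time window, or becomes the leader of a new group and pages the responder. The `service` and `urgency` fields stand in for whatever content-matching criteria you configure.

```python
# Simplified illustration of the grouping behavior described above
# (not Rootly's actual implementation).
from dataclasses import dataclass, field
from datetime import datetime, timedelta

TIME_WINDOW = timedelta(minutes=10)      # how long a group stays open
MATCH_FIELDS = ("service", "urgency")    # example content-matching criteria

@dataclass
class AlertGroup:
    leader: dict                          # first alert in the group
    opened_at: datetime
    members: list = field(default_factory=list)

open_groups: list[AlertGroup] = []

def handle_alert(alert: dict, now: datetime) -> str:
    """Join an open, matching group if one exists; otherwise start a new one."""
    for group in open_groups:
        in_window = now - group.opened_at <= TIME_WINDOW
        matches = all(alert.get(f) == group.leader.get(f) for f in MATCH_FIELDS)
        if in_window and matches:
            group.members.append(alert)   # grouped: responder is not paged again
            return "added as Alert Group Member"
    open_groups.append(AlertGroup(leader=alert, opened_at=now))
    return "new Alert Group Leader: responder paged"
```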

Alert Groups are shown together along with the services, teams, and incident they're related to.

Grouped alerts also appear together in a nested view on the Alert Index:

Alert group members are nested under the alert group leader in the Alert Index page.

To set up alert grouping, head to Alerts from the left-hand navigation bar in Rootly web, then click the Grouping tab.

Click + New Alert Group and enter a name (required) and a description (optional).

Under Destinations, you'll select the Services, Teams, and/or Escalation Policies you'd like to include in this group.

Under Time Window, specify a time period during which the Alert Group should stay open and accept new alerts. Our default recommendation is 10 minutes.

Optionally, you can define additional requirements for the Alert Group under Content Matching by requiring alerts in the group to have the same title, urgency, or any other payload field associated with your alerts.
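If you manage Rootly configuration programmatically, the same setup can also be sketched as an API call. The example below is illustrative only: the endpoint path (`/v1/alert_groups`) and attribute names (`time_window`, `targets`, `attribute_matching`) are assumptions, so consult the Rootly API reference for the exact schema.

```python
# Hypothetical sketch of creating an Alert Group via Rootly's REST API.
# Endpoint path and attribute names are assumptions for illustration;
# check the Rootly API reference for the real schema.
import os
import requests

payload = {
    "data": {
        "type": "alert_groups",
        "attributes": {
            "name": "Checkout service alerts",                         # required
            "description": "Groups alerts for the checkout service",   # optional
            "time_window": 10,                  # minutes the group accepts new alerts
            "targets": {                        # destinations (assumed field names)
                "service_ids": ["<service-uuid>"],
                "team_ids": [],
                "escalation_policy_ids": [],
            },
            "attribute_matching": ["title", "urgency"],  # content matching (assumed field name)
        },
    }
}

response = requests.post(
    "https://api.rootly.com/v1/alert_groups",
    json=payload,
    headers={"Authorization": f"Bearer {os.environ['ROOTLY_API_KEY']}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```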

Configuring a new Alert Group is quick and easy.

Here's Alex to take you on a video tour of Alert Grouping!

๐ŸŒ New & Improved

🆕 Added a checkbox to automatically resolve related alerts when an incident is canceled. Teams no longer have to manually resolve each related alert when canceling an incident.

💅 Custom timestamp fields now align to each user's preferred timezone setting. Previously, only out-of-the-box timestamp fields were displayed in the user's preferred timezone.

💅 Increased the volume of Rootly mobile app push notifications on iOS devices to help responders better hear incoming pages.

💅 Optimized backend logic for fetching incident subscribers to improve overall platform responsiveness.

๐Ÿ› Fixed text wrapping issue on web UI for displaying action items that contain long names.

๐Ÿ› Alerts details page now refreshes automatically when the status is updated.

๐Ÿ› Fixed intermittent issue with action item events failing to trigger workflows.
