
Alert Check

Verifies the Splunk Alert state.
Targets:
Splunk Alerts
Steadybit's "Alert Check" allows you to verify the behaviour of your Splunk Alerts.Steadybit's "Alert Check" allows you to verify the behaviour of your Splunk Alerts.

Introduction

The Alert Check step can be dragged and dropped into the experiment editor. Once added, it verifies whether an Alert has fired, or not, during the experiment's execution.

Experiments can be aborted and marked as failed when the Alert Check's actual state diverges from the expected one. This helps implement pre-/post-conditions and invariants, for example, only starting an experiment when the system is healthy.

Finally, to help you understand the Alert states and how they evolved, the run view also contains a state visualization. Through this visualization, you can see during which time an Alert was firing.

Use Cases

  • Pre-/postcondition or invariant for any experiment.
  • Verify that alerts are triggered during incidents.

Parameters

Parameter | Description | Default
Duration | How long the state of the Alert should be checked | 30s
New Alerts Only | Whether to consider only Alerts fired after the start of the check, or also pre-existing ones | false
Expected State | Whether the Alert is expected to have fired or not | Alert fired
State Check Mode | Whether the expected state must hold for the entire duration of the check or only at least once | At least once
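
Conceptually, the check polls Splunk for fired alert instances during the configured duration and compares the observed state with the expected one. The Python sketch below illustrates how the Duration, New Alerts Only, Expected State, and State Check Mode parameters interact; it is not the extension's implementation, and the Splunk URL, token, poll interval, and field names are assumptions to adapt to your environment.

```python
"""Conceptual sketch of what an Alert Check evaluates, using Splunk's
/services/alerts/fired_alerts REST resource. Not the extension's code."""
import time
import requests

SPLUNK_URL = "https://splunk.example.com:8089"  # assumption: management port
TOKEN = "<splunk-api-token>"                    # assumption: token-based auth


def alert_fired(alert_name: str, since: float) -> bool:
    """Return True if the alert has a fired instance triggered at or after `since`."""
    resp = requests.get(
        f"{SPLUNK_URL}/services/alerts/fired_alerts/{alert_name}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"output_mode": "json"},
        verify=False,  # lab setup only; use proper certificates in production
        timeout=10,
    )
    if resp.status_code == 404:  # no fired instances recorded for this alert
        return False
    resp.raise_for_status()
    entries = resp.json().get("entry", [])
    # 'trigger_time' is the epoch time the alert fired; verify the field name
    # in your Splunk deployment.
    return any(float(e["content"]["trigger_time"]) >= since for e in entries)


def run_check(alert_name: str, duration_s: int = 30, new_alerts_only: bool = False,
              expect_fired: bool = True, mode: str = "at_least_once") -> bool:
    """Poll for `duration_s` seconds and evaluate the expected state.

    mode: "at_least_once"   -> the expected state must be observed in any poll
          "entire_duration" -> the expected state must hold in every poll
    """
    start = time.time() if new_alerts_only else 0.0
    deadline = time.time() + duration_s
    observations = []
    while time.time() < deadline:
        observations.append(alert_fired(alert_name, since=start) == expect_fired)
        time.sleep(5)  # poll interval; the extension's interval may differ
    return any(observations) if mode == "at_least_once" else all(observations)
```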

Useful Templates

Splunk platform alerts when a Kubernetes pod is in crash loop

Verify that the Splunk platform fires an alert when pods are not ready to accept traffic for a certain time.

Motivation

Kubernetes features a readiness probe to determine whether your pod is ready to accept traffic. If the pod doesn't become ready, Kubernetes tries to resolve this by restarting the underlying container, hoping it eventually becomes ready. If that doesn't work, Kubernetes eventually backs off on restarting the container, and the Kubernetes resource remains non-functional.

Structure

First, check that the Splunk platform alert responsible for tracking non-ready containers is not in a firing state. As soon as one of the containers starts crash looping, caused by the crash loop attack, the Splunk platform alert should fire and escalate to your on-call team (see the sketch below).
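
For illustration only, this pre-/post-condition pattern could be expressed with the run_check() sketch from the Parameters section; "K8s pod not ready" is a placeholder for your own saved-search alert name.

```python
# Illustrative only; the crash-loop attack itself is triggered by Steadybit,
# not by this script, and "K8s pod not ready" is a placeholder alert name.

# Pre-condition: the alert must not be firing before the crash loop is injected.
assert run_check("K8s pod not ready", duration_s=30,
                 expect_fired=False, mode="entire_duration")

# ... crash-loop attack on the target pod runs here ...

# Post-condition: once pods stop becoming ready, the alert should fire at least once.
assert run_check("K8s pod not ready", duration_s=300, new_alerts_only=True,
                 expect_fired=True, mode="at_least_once")
```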

Solution Sketch

  • Kubernetes liveness, readiness, and startup probes

Tags
Splunk
Splunk Cloud Platform
Splunk Enterprise
Kubernetes
Check
Observability
Monitoring
Homepage
hub.steadybit.com/extension/com.steadybit.extension_splunk_platform
License
MIT
Maintainer
Steadybit