Kubernetes Cluster

Target
Install now

Supported Actions

See all
Kubernetes Event Logs
Collect event logs from a Kubernetes cluster.
Other
Kubernetes cluster
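
For a sense of what this action collects, here is a minimal sketch that reads recent cluster events with the official Kubernetes Python client. It assumes local kubeconfig access and approximates, rather than reproduces, the extension's behavior:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (assumes cluster access).
config.load_kube_config()
v1 = client.CoreV1Api()

# Fetch the most recent events across all namespaces.
for event in v1.list_event_for_all_namespaces(limit=20).items:
    obj = event.involved_object
    print(f"{event.last_timestamp} {event.type:8} {event.reason:20} "
          f"{obj.kind}/{obj.name}: {event.message}")
```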

Useful Templates (4 of 40)

See all
AppDynamics alerts when a Kubernetes pod is in crash loop

Verify that an AppDynamics health violation alerts you when pods are not ready to accept traffic for a certain time.

Motivation

Kubernetes uses a readiness probe to determine whether your pod is ready to accept traffic. If a pod doesn't become ready, Kubernetes tries to resolve this by restarting the underlying container, hoping it eventually reaches readiness. If that doesn't help, Kubernetes backs off between restarts (the pod enters CrashLoopBackOff), and the Kubernetes resource remains non-functional.
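
A pod stuck in this state is visible in its container statuses. As a minimal sketch (assuming cluster access via a local kubeconfig and the official Kubernetes Python client), you could list pods whose containers are waiting in CrashLoopBackOff:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Report every container currently waiting in CrashLoopBackOff.
for pod in v1.list_pod_for_all_namespaces().items:
    for status in pod.status.container_statuses or []:
        waiting = status.state.waiting
        if waiting and waiting.reason == "CrashLoopBackOff":
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: "
                  f"container '{status.name}' restarted "
                  f"{status.restart_count} times")
```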

Structure

First, check that the AppDynamics health rule responsible for tracking non-ready containers is in a non-violating state. As soon as one of the containers crash-loops, caused by the crash loop attack, the resulting AppDynamics health violation should notify and escalate to your on-call team.

Solution Sketch

  • Kubernetes liveness, readiness, and startup probes
  • AppDynamics
Crash loop
Harden Observability
Restart
Kubernetes
AppDynamics applications
AppDynamics health rules
Kubernetes cluster
Kubernetes pods
Kubernetes deployment survives Redis latency

Verify that your application handles an increased latency in a Redis cache properly, allowing for increased processing time while maintaining throughput.

Motivation

Latency issues in Redis can lead to degraded system performance, longer response times, and potentially lost or delayed data. By testing your system's resilience to Redis latency, you can ensure that it can handle increased processing time and maintain its throughput during increased latency. Additionally, you can identify any potential bottlenecks or inefficiencies in your system and take appropriate measures to optimize its performance and reliability.

Structure

We first verify that a load-balanced, user-facing endpoint works fully while all pods are ready. We then introduce delays in Redis operations to simulate latency, expecting the system to maintain its throughput and indicate unavailability appropriately. The experiment aims to ensure that your system can handle the increased processing time while maintaining throughput, and that performance returns to normal after the latency has ended.
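
One common way to keep throughput up when the cache slows down is a short client-side timeout with a fallback. A minimal sketch, assuming the redis-py client; the host, the 200 ms budget, and the load_from_database fallback are hypothetical:

```python
import redis

# Short socket timeout so a slow cache cannot stall request handling.
cache = redis.Redis(host="redis.example.internal", port=6379,
                    socket_timeout=0.2)

def get_user(user_id: str) -> bytes:
    try:
        value = cache.get(f"user:{user_id}")
        if value is not None:
            return value
    except redis.exceptions.TimeoutError:
        # The cache is too slow: degrade gracefully instead of waiting.
        pass
    # Hypothetical fallback to the primary data store.
    return load_from_database(user_id)
```

With this pattern in place, the latency attack should show request times rising by at most the timeout budget rather than by the full injected delay.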

Redis
Recoverability
Datadog
Containers
Datadog monitors
Kubernetes cluster
Kubernetes deployments
Kubernetes deployment survives Redis downtime

Check that your application gracefully handles Redis cache downtime and continues to deliver its intended functionality. The downtime may be caused by a single unavailable Redis instance or by an entire unavailable cluster.

Motivation

Redis downtime can lead to degraded system performance, lost data, and potentially long system recovery times. By testing your system's resilience to Redis downtime, you can ensure that it can handle the outage gracefully and continue to deliver its intended functionality. Additionally, you can identify any potential weaknesses in your system and take appropriate measures to improve its performance and resilience.

Structure

We first verify that a load-balanced, user-facing endpoint works fully while all pods are ready. We then block traffic to the Redis instance to simulate downtime, expecting the system to indicate unavailability appropriately while maintaining its throughput. The experiment aims to ensure that your system gracefully handles the outage and continues delivering its intended functionality, and that performance returns to normal once the Redis instance is available again.
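
On the client side, surviving an outage usually means treating connection failures like a cache miss. A minimal sketch in the same style as above (redis-py; host, timeouts, and the load_from_database fallback are hypothetical):

```python
import redis

cache = redis.Redis(host="redis.example.internal", port=6379,
                    socket_connect_timeout=0.2, socket_timeout=0.2)

def get_user(user_id: str) -> bytes:
    try:
        value = cache.get(f"user:{user_id}")
        if value is not None:
            return value
    except (redis.exceptions.ConnectionError,
            redis.exceptions.TimeoutError):
        # Redis is unreachable (blocked traffic often surfaces as a
        # connect timeout): treat it as a cache miss.
        pass
    # Hypothetical fallback to the primary data store.
    return load_from_database(user_id)
```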

Redis
Recoverability
Datadog
Containers
Datadog monitors
Kubernetes cluster
Kubernetes deployments
Certificate TLS/SSL expiry for Kubernetes deployment

Turn time forward and check whether your TLS/SSL certificates are valid.

Motivation

Noticing TLS/SSL certificate expiry too late is a problem you can easily avoid by checking your expiry dates frequently. While observability tools already handle this job nicely, you can't know whether they actually work in your environment. With this experiment, you can turn time forward to check whether your HTTPS endpoint still works at a given date in the future. Additionally, you can configure one of the observability integrations to validate your observability tool's alerting.

Structure

First, we validate that the given HTTPS endpoint works today. Next, we shift the host's clock forward to validate that the HTTPS endpoint continues to work at a given future date. If the TLS/SSL certificate has expired by that date, the HTTP check will fail.
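
The experiment achieves this by shifting the host clock; as a lightweight client-side complement, you can also read a certificate's notAfter date directly and compare it with the future date you care about. A sketch using only the Python standard library (the host and the 30-day threshold are placeholders):

```python
import socket
import ssl
from datetime import datetime, timedelta, timezone

def cert_expiry(host: str, port: int = 443) -> datetime:
    """Fetch the server certificate and return its notAfter timestamp."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as ssock:
            not_after = ssock.getpeercert()["notAfter"]
    # notAfter looks like 'Jun  1 12:00:00 2026 GMT'.
    return datetime.strptime(
        not_after, "%b %d %H:%M:%S %Y %Z").replace(tzinfo=timezone.utc)

expiry = cert_expiry("example.com")
deadline = datetime.now(timezone.utc) + timedelta(days=30)
print(f"certificate expires {expiry:%Y-%m-%d}:",
      "OK" if expiry > deadline else "EXPIRING SOON")
```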

Warning

Please be aware that we will manipulate the time on a given Kubernetes node. Containers running on that node may struggle to handle the clock change correctly, and you may experience other side effects.

Certificate Expiry
Hosts
Kubernetes cluster
Start Using Steadybit Today

Get started with Steadybit and get access to all of its features to discover its full power. Available for SaaS and on-prem!

Are you unsure where to begin?

No worries, our reliability experts are here to help: book a demo with them!

Statistics
– Stars
Tags
Kubernetes
Container
AWS
Azure
GCP
Advice
Homepage
hub.steadybit.com/extension/com.steadybit.extension_kubernetes
License
MIT
Maintainer
Steadybit