Steadybit Resilience Hub

Deployment Pod Count

Check

Verifies Kubernetes Deployment pod counts


Introduction

Check whether the number of ready pods matches your expectation.

The check counts the pods that are in the ready state and compares that number with the desired replica count. It succeeds if the selected mode (see below) becomes true within the specified timeout.

Use Cases

  • Check if the ready count is equal to the desired count
  • Check if the ready count is below the desired count
  • Check if there is at least one ready pod
  • Check if the pod count increases when you add load
  • Check if the pod count decreases when there is no more load

Parameters

Parameter | Description | Default
Timeout | How long should the check wait for the specified pod count? | 10s
Pod Count | How should the pod count change? (See values below) |

Pod Count

You can use the pod count check in one of the following modes:

  • ready count = desired count: Ensures that the number of ready pods equals the desired replica count. Helpful, e.g., to check after an attack whether every pod has recovered.
  • ready count > 0: Ensures that at least one pod is ready to serve traffic.
  • ready count < desired count: Ensures that the number of ready pods stays below the desired replica count. Helpful, e.g., to verify that exhausting memory causes pods to restart.
  • actual count increases: Checks whether the pod count rises above the number of pods observed when the action started.
  • actual count decreases: Checks whether the pod count falls below the number of pods observed when the action started.
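The five modes reduce to simple comparisons against the desired replica count and the pod count observed at the start of the action. A minimal Python sketch of that decision logic (the `Mode` enum and function names are illustrative, not part of the extension's API):

```python
from enum import Enum


class Mode(Enum):
    READY_EQUALS_DESIRED = "ready count = desired count"
    READY_GREATER_ZERO = "ready count > 0"
    READY_BELOW_DESIRED = "ready count < desired count"
    COUNT_INCREASES = "actual count increases"
    COUNT_DECREASES = "actual count decreases"


def pod_count_check(mode: Mode, ready: int, desired: int, initial: int = 0) -> bool:
    """Return True once the condition for the selected mode holds.

    `initial` is the pod count observed when the check started; it only
    matters for the increase/decrease modes.
    """
    if mode is Mode.READY_EQUALS_DESIRED:
        return ready == desired
    if mode is Mode.READY_GREATER_ZERO:
        return ready > 0
    if mode is Mode.READY_BELOW_DESIRED:
        return ready < desired
    if mode is Mode.COUNT_INCREASES:
        return ready > initial
    return ready < initial  # Mode.COUNT_DECREASES
```

In the real check, this comparison would be re-evaluated against the live cluster state until it holds or the configured timeout expires.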
Statistics
Stars: –
Tags
Kubernetes
Homepage
hub.steadybit.com/extension/com.steadybit.extension_kubernetes
License
MIT
Maintainer
Steadybit

Useful Templates (4 of 26)

Kubernetes deployment survives Redis latency

Verify that your application handles an increased latency in a Redis cache properly, allowing for increased processing time while maintaining throughput.

Motivation

Latency issues in Redis can lead to degraded system performance, longer response times, and potentially lost or delayed data. By testing your system's resilience to Redis latency, you can ensure that it can handle increased processing time and maintain its throughput during increased latency. Additionally, you can identify any potential bottlenecks or inefficiencies in your system and take appropriate measures to optimize its performance and reliability.

Structure

We will verify that a load-balanced user-facing endpoint fully works while having all pods ready. As soon as we simulate Redis latency, we expect the system to maintain its throughput and indicate unavailability appropriately. We can introduce delays in Redis operations to simulate latency. The experiment aims to ensure that your system can handle increased processing time and maintain its throughput during increased latency. The performance should return to normal after the latency has ended.
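One way an application keeps its throughput under cache latency is to bound every cache read with a time budget and fall back to a default when the cache answers too late. A minimal sketch of that pattern (the function names are illustrative; a real service would wire this to its Redis client):

```python
import concurrent.futures

# Small worker pool for the sketch; a real service would size this properly.
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)


def read_with_budget(fetch, budget_s, fallback):
    """Run `fetch` (e.g. a Redis GET) but never wait longer than `budget_s` seconds.

    If the cache answers too late, serve `fallback` so the endpoint keeps its
    throughput instead of stalling on the slow cache.
    """
    future = _pool.submit(fetch)
    try:
        return future.result(timeout=budget_s)
    except concurrent.futures.TimeoutError:
        return fallback
```

During the experiment, a service built this way should show slightly degraded answers (fallbacks) but stable response times while the latency attack runs.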

Tags: Redis, Recoverability, Datadog
Targets: Containers, Datadog monitors, Kubernetes cluster, Kubernetes deployments

Kubernetes deployment survives Redis downtime

Check that your application gracefully handles a Redis cache downtime and continues to deliver its intended functionality. The cache downtime may be caused by an unavailable Redis instance or a complete cluster.

Motivation

Redis downtime can lead to degraded system performance, lost data, and potentially long system recovery times. By testing your system's resilience to Redis downtime, you can ensure that it can handle the outage gracefully and continue to deliver its intended functionality. Additionally, you can identify any potential weaknesses in your system and take appropriate measures to improve its performance and resilience.

Structure

We will verify that a load-balanced user-facing endpoint fully works while having all pods ready. As soon as we simulate Redis downtime, we expect the system to indicate unavailability appropriately and maintain its throughput. We can block the traffic to the Redis instance to simulate downtime. The experiment aims to ensure that your system can gracefully handle the outage and continue delivering its intended functionality. The performance should return to normal after the Redis instance is available again.
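Graceful handling of a cache outage usually means treating cache errors as a miss and serving from the source of truth. A minimal read-through sketch, assuming hypothetical `cache_get`/`cache_set`/`db_get` callables rather than any specific Redis client:

```python
def get_user(user_id, cache_get, db_get, cache_set=None):
    """Read-through lookup that degrades gracefully when the cache is down.

    Any cache error (connection refused, timeout, ...) falls back to the
    database, so a Redis outage degrades latency, not functionality.
    """
    try:
        cached = cache_get(user_id)
        if cached is not None:
            return cached
    except Exception:
        # Cache unavailable: skip it entirely and serve from the database.
        return db_get(user_id)
    value = db_get(user_id)
    if cache_set is not None:
        try:
            cache_set(user_id, value)
        except Exception:
            pass  # Best-effort repopulation; never fail the request over it.
    return value
```

With this shape, the experiment should show the endpoint staying functional (at database latency) while traffic to Redis is blocked, and cache hit rates recovering once Redis is back.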

Tags: Redis, Recoverability, Datadog
Targets: Containers, Datadog monitors, Kubernetes cluster, Kubernetes deployments

Network outage for Kubernetes nodes in an availability zone

Achieve high availability of your Kubernetes cluster via redundancy across different Availability Zones. Check what happens to your Kubernetes cluster when one of the zones is down.

Motivation

Cloud providers host your deployments and services across multiple locations worldwide. From a reliability standpoint, regions and availability zones are most interesting. While the former refers to separate geographic areas spread worldwide, the latter refers to an isolated location within a region. For most use cases, applying deployments across availability zones is sufficient. Given that failures may happen at this level quite frequently, you should verify that your applications are still working in case of an outage.

Structure

We leverage the block traffic attack to simulate a full network loss in an availability zone. While the zone outage happens, we observe changes in the Kubernetes cluster with Steadybit's built-in visibility. Once the zone outage is over, we expect that all deployments will recover again within a specified time.
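The final recovery expectation is essentially a bounded polling loop: keep sampling the ready-pod count until it reaches the desired count or the time budget runs out. A minimal sketch of that loop, with a generic `ready_count` callable standing in for a query against the cluster:

```python
import time


def wait_until_recovered(ready_count, desired, timeout_s, interval_s=0.01):
    """Poll `ready_count()` until it reaches `desired` or `timeout_s` elapses.

    Returns True on recovery within the budget, False otherwise.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        if ready_count() >= desired:
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval_s)
```

This mirrors what the pod count check does after the zone outage ends: the experiment fails if the deployments do not report the desired number of ready pods before the timeout.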

Solution Sketch

  • AWS Regions and Zones
  • Azure Regions and Zones
  • GCP Regions and Zones
  • Kubernetes liveness, readiness, and startup probes
Tags: Azure, GCP, Redundancy, AWS, Availability Zone
Targets: Hosts, Kubernetes cluster, Kubernetes deployments

Network loss for Kubernetes node's outgoing traffic in an availability zone

Achieve high availability of your Kubernetes cluster via redundancy across different Availability Zones. Check what happens to your Kubernetes cluster when one of the zones suffers from a network loss.

Motivation

Cloud providers host your deployments and services across multiple locations worldwide. From a reliability standpoint, regions and availability zones are most interesting. While the former refers to separate geographic areas spread worldwide, the latter refers to an isolated location within a region. For most use cases, applying deployments across availability zones is sufficient. Given that failures may happen at this level quite frequently, you should verify that your applications are still working in case of an outage.

Structure

We leverage the drop outgoing traffic attack to simulate network loss in an availability zone. If you want to test a full outage of the zone, configure it to 100% loss. While the network loss happens, we observe changes in the Kubernetes cluster with Steadybit's built-in visibility. Once the network loss is over, we expect that all deployments will recover again within a specified time.

Solution Sketch

  • AWS Regions and Zones
  • Azure Regions and Zones
  • GCP Regions and Zones
  • Kubernetes liveness, readiness, and startup probes
Tags: AWS, Azure, GCP, Redundancy, Kubernetes, Availability Zone
Targets: Hosts, Kubernetes cluster, Kubernetes deployments


© 2024 Steadybit GmbH. All rights reserved.