Steadybit Resilience Hub

Scale Deployment

Attack
Up-/Downscale a Kubernetes Deployment
Targets:
Kubernetes deployments


Introduction

Use this action to scale a Kubernetes deployment up or down.
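Outside of an experiment, the equivalent manual operation is the standard kubectl scale command. A minimal sketch, assuming a live cluster and a hypothetical deployment named "checkout" in the namespace "shop":

```shell
# Scale the (hypothetical) deployment "checkout" to 3 replicas
kubectl scale deployment/checkout --namespace shop --replicas=3

# Confirm the new desired and ready replica counts
kubectl get deployment/checkout --namespace shop
```

The action automates this change and, unlike the raw command, restores the previous scale afterwards (see Rollback below).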

Use Cases

  • Verify that your deployment is successfully scaled up and that the new pods are taken into account by your load balancer
  • Check how long it takes to scale your deployment to the desired number of pods (in combination with a pod count check)

Parameters

Name          | Required | Description                            | Default
------------- | -------- | -------------------------------------- | -------
Duration      | true     | How long should the new scale be used? | 180s
Replica Count | true     | The desired replica count              | 1

Rollback

At the end of the attack or in case of an error, the old desired replica count will be restored.
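A manual equivalent of this rollback behavior, sketched with kubectl (deployment name, namespace, and durations are placeholder assumptions), captures the current desired replica count before the change and restores it afterwards:

```shell
# Remember the current desired replica count of the (hypothetical) deployment "checkout"
ORIGINAL=$(kubectl get deployment/checkout --namespace shop \
  --output jsonpath='{.spec.replicas}')

# Apply the temporary scale for the attack duration (180s by default)
kubectl scale deployment/checkout --namespace shop --replicas=1
sleep 180

# Restore the original desired replica count
kubectl scale deployment/checkout --namespace shop --replicas="$ORIGINAL"
```

The action performs this restoration automatically, including when the experiment fails mid-run.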

Statistics
Stars: -
Tags: Kubernetes
Homepage: hub.steadybit.com/extension/com.steadybit.extension_kubernetes
License: MIT
Maintainer: Steadybit

Useful Templates

Keep Deployment's pods down

Check what happens when all pods of a Kubernetes deployment aren't coming up again.

Motivation

Typically, Kubernetes tries to keep as many pods running as desired for a Kubernetes deployment. However, some circumstances may prevent Kubernetes from achieving this, like missing resources in the cluster, problems with the deployment's probes, or a CrashLoopBackOff. You should validate what happens to your overall provided service when a given deployment is directly affected by this or one of the upstream services used by your deployment.

Structure

To keep the pods down for a given deployment, we first kill all the pods in the deployment. Simultaneously, we scale down the Kubernetes deployment to 0 to simulate that these pods can't be scheduled again. At the end of the experiment, we automatically roll back the deployment's scale to the initial value.
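The steps above could be approximated manually with kubectl (deployment name, namespace, label selector, and the original replica count are placeholder assumptions):

```shell
# Scale the deployment to 0 so no replacement pods are scheduled
kubectl scale deployment/checkout --namespace shop --replicas=0

# Delete the currently running pods of the deployment
kubectl delete pods --namespace shop --selector app=checkout

# ...observe the behavior of dependent services while the pods stay down...

# Roll back to the original scale (3 is used here as an example)
kubectl scale deployment/checkout --namespace shop --replicas=3
```

The template wires these steps together and restores the initial scale automatically, so a failed run does not leave the deployment at 0 replicas.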

Tags: Deployment, Upstream Service, Kubernetes
Targets: Kubernetes cluster, Kubernetes deployments, Kubernetes pods
Faultless scaling of Kubernetes Deployment

Ensure that you can scale your deployment in a reasonable time without noticeable errors.

Motivation

For an elastic and resilient cloud infrastructure, ensure that you can scale your deployments without user-visible errors and within a reasonable amount of time. Long startup times, hiccups in the load balancer, or resource misallocation are undesirable but can easily go unnoticed.

Structure

For the duration of the experiment and the deployment's upscaling, verify that a user-visible endpoint responds within the expected success rate and that no monitors are alerting. As soon as the deployment is scaled up, the newly scheduled pod should be ready to receive traffic within a reasonable time, e.g., 60 seconds.
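A simple manual version of this readiness and endpoint check could look like the following kubectl/curl sketch (deployment name, namespace, replica count, and the health endpoint URL are assumptions for illustration):

```shell
# Scale up and wait at most 60s until the rollout reports all pods ready
kubectl scale deployment/checkout --namespace shop --replicas=4
kubectl rollout status deployment/checkout --namespace shop --timeout=60s

# Probe the (hypothetical) user-visible endpoint; --fail makes curl
# exit non-zero on an HTTP error status
curl --fail --silent --output /dev/null https://shop.example.com/health \
  && echo "endpoint healthy"
```

In the template, the endpoint check and monitor check run continuously for the whole experiment rather than once at the end.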

Tags: Scalability, Elasticity, Kubernetes
Targets: Datadog monitors, Kubernetes cluster, Kubernetes deployments

More Kubernetes Deployment Actions

Start Using Steadybit Today

Get started with Steadybit and gain access to all of our features to discover its full power. Available for SaaS and on-prem!

Are you unsure where to begin?

No worries, our reliability experts are here to help: book a demo with them!

© 2025 Steadybit GmbH. All rights reserved.