
Delay Outgoing Traffic

Attack
Inject latency into egress network traffic.
Targets: Containers
Install now

Wireshark showing the effects of the attack.

Introduction

Inject latency into all matching egress traffic.

Details

The network delay operates at the IP level and affects individual packets (network layer, OSI layer 3). It therefore works for both UDP and TCP traffic, and a single HTTP request may end up delayed by a multiple of the specified delay.

In this example, the traffic is delayed by 500ms. If you tap the wire (using tcpdump) and feed the capture into Wireshark, it looks like the image above.

  1. The first incoming packet initiates the TCP connection and is answered by the second packet (the SYN-ACK), which is delayed by exactly 500ms.

  2. With the fourth packet, we receive an HTTP request in the payload. It is acknowledged and answered with an HTTP response in packets five to seven, which are also delayed by 500ms, so the total latency for the HTTP request adds up to 1 second.
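To see this compounding effect from the client side, you can time the TCP handshake and a full HTTP round trip separately. The sketch below is an illustration only, not part of the extension: it assumes a 500ms delay is active on the attacked service and uses a placeholder target address.

```go
// Rough sketch: time the TCP handshake and a full HTTP round trip separately
// to see how a 500ms packet delay compounds. The target address is a placeholder.
package main

import (
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	target := "10.0.0.42:8080" // placeholder address of the attacked service

	// 1. TCP handshake: the SYN-ACK is delayed, so connecting takes roughly one delay (~500ms).
	start := time.Now()
	conn, err := net.Dial("tcp", target)
	if err != nil {
		panic(err)
	}
	fmt.Printf("TCP connect took %v\n", time.Since(start))
	conn.Close()

	// 2. Full HTTP request on a fresh connection: handshake plus request/response,
	//    so roughly two delays (~1s in total) for a 500ms configured delay.
	start = time.Now()
	resp, err := http.Get("http://" + target + "/health")
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Printf("HTTP request took %v\n", time.Since(start))
}
```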

Note: If you attack containers using network attacks, all containers in the target's Linux network namespace (e.g. all containers belonging to the same Kubernetes pod) will be affected. If you want to target the traffic of a single container in that namespace, you can, for example, use the port parameter to limit the blast radius.

Parameters

Parameter | Description | Default
Network Delay | How much should the traffic be delayed? | 500ms
Jitter | Random ±30% jitter added to the network delay | true
Fail on Host Network | Emit a failure when the targeted container uses the host network | true
Duration | How long should the traffic be affected? | 30s
Hostname | Restrict to which hosts the traffic is delayed | -
IP Address | Restrict to which IP addresses the traffic is delayed | -
Network Interface | Network interfaces to attack; all if none specified | -
Ports | Restrict to which ports the traffic is delayed | -
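For orientation, the delay and jitter parameters map roughly onto Linux traffic control (tc/netem), the usual mechanism for this kind of attack. The sketch below is an assumption about that mechanism, not the extension's actual implementation; the interface name and values are placeholders, and it needs root privileges inside the target's network namespace.

```go
// Illustrative only: applies a netem delay roughly matching the parameters above.
// This is an assumption about the underlying mechanism, not the extension's code.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	iface := "eth0"                   // Network Interface parameter (placeholder)
	delay, jitter := "500ms", "150ms" // Network Delay plus ~30% jitter

	// Add a netem qdisc that delays every outgoing packet on the interface.
	cmd := exec.Command("tc", "qdisc", "add", "dev", iface, "root",
		"netem", "delay", delay, jitter)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("tc failed: %v\n%s", err, out)
		return
	}
	fmt.Println("delay applied; remove with: tc qdisc del dev", iface, "root")
}
```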

Useful Templates (4 of 8)

See all
Kubernetes deployment survives Redis latency

Verify that your application properly handles increased latency in a Redis cache, allowing for longer processing time while maintaining throughput.

Motivation

Latency issues in Redis can lead to degraded system performance, longer response times, and potentially lost or delayed data. By testing your system's resilience to Redis latency, you can ensure that it can handle increased processing time and maintain its throughput during increased latency. Additionally, you can identify any potential bottlenecks or inefficiencies in your system and take appropriate measures to optimize its performance and reliability.

Structure

We first verify that a load-balanced, user-facing endpoint works fully while all pods are ready. We then introduce delays in Redis operations to simulate latency and expect the system to maintain its throughput and indicate unavailability appropriately, handling the increased processing time. Performance should return to normal after the latency has ended.
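A common client-side pattern that makes this experiment pass is bounding every cache call with a timeout and falling back to the source of truth. The sketch below only illustrates the behavior the experiment verifies; it assumes the github.com/redis/go-redis/v9 client, and the addresses, keys, and fallback are placeholders.

```go
// Minimal sketch: Redis lookups run with a hard timeout and fall back to the
// primary data source, so added cache latency slows requests but does not fail them.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

var rdb = redis.NewClient(&redis.Options{Addr: "redis:6379"}) // placeholder address

func lookup(ctx context.Context, key string) (string, error) {
	// Bound the cache call so a slow Redis cannot stall the whole request.
	cctx, cancel := context.WithTimeout(ctx, 200*time.Millisecond)
	defer cancel()

	val, err := rdb.Get(cctx, key).Result()
	if err == nil {
		return val, nil
	}
	if errors.Is(err, redis.Nil) || errors.Is(err, context.DeadlineExceeded) {
		// Cache miss or cache too slow: fall back to the slower source of truth.
		return loadFromDatabase(ctx, key)
	}
	return "", err
}

func loadFromDatabase(ctx context.Context, key string) (string, error) {
	return "value-from-db", nil // placeholder for the real database lookup
}

func main() {
	v, err := lookup(context.Background(), "user:42")
	fmt.Println(v, err)
}
```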

Redis
Recoverability
Datadog
Containers
Datadog monitors
Kubernetes cluster
Kubernetes deployments
Graceful degradation and Datadog alerts when Postgres suffers latency

Your application should continue functioning properly and indicate unavailability appropriately in case of increased connection latency to PostgreSQL. Additionally, this experiment can highlight requests that need optimization of timeouts to prevent dropped requests.

Motivation

Latencies in shared or overloaded databases are common and can significantly impact the performance of your application. By conducting this experiment, you can gain insights into the robustness of your application and identify areas for improvement.

Structure

To conduct this experiment, we will ensure that all pods are ready and that the load-balanced user-facing endpoint is fully functional. We will then simulate a latency attack on the PostgreSQL database by adding a delay of 100 milliseconds to all traffic to the database hostname. During the attack, we will monitor the system's behavior to ensure the service remains operational and can deliver its purpose. We will also analyze the performance metrics to identify any request types most affected by the latency and optimize them accordingly. Finally, we will end the attack and monitor the system's recovery time to ensure it returns to its normal state promptly. By conducting this experiment, we can gain valuable insights into our application's resilience to database latencies and make informed decisions to optimize its performance under stress.
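One concrete outcome of this experiment is often tightening per-request query deadlines so that added database latency degrades response times gracefully instead of piling up dropped requests. A minimal sketch of that pattern, assuming the github.com/lib/pq driver and placeholder connection details:

```go
// Sketch: run each query under a per-request deadline so a 100ms latency
// increase slows the request predictably rather than hanging it indefinitely.
package main

import (
	"context"
	"database/sql"
	"fmt"
	"time"

	_ "github.com/lib/pq" // assumed Postgres driver; DSN below is a placeholder
)

func main() {
	db, err := sql.Open("postgres", "postgres://app:secret@postgres:5432/app?sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Deadline generous enough to absorb ~100ms of extra network latency,
	// but tight enough that callers are not kept waiting indefinitely.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	var name string
	err = db.QueryRowContext(ctx, "SELECT name FROM users WHERE id = $1", 42).Scan(&name)
	if err != nil {
		fmt.Println("query failed (timed out or unavailable):", err)
		return
	}
	fmt.Println("user:", name)
}
```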

RDS
Postgres
Recoverability
Datadog
Database
Containers
Datadog monitors
Kubernetes cluster
Kubernetes deployments
Latency progressively increases for Kubernetes DaemonSet

Latency of a Kubernetes DaemonSet progressively increases to analyse at which point the communication breaks.

Structure

We start by adding a 250ms latency on the Kubernetes DaemonSet's outgoing traffic for 30 seconds. Next, we increase the latency stepwise to 500ms, 750ms, and 1s, each for 30 seconds. In between, short wait steps make it easier to analyse each phase in external observability tools.
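To pinpoint the step at which communication breaks, you can run a simple probe alongside the experiment that calls the affected endpoint with a fixed client timeout and logs when requests start failing. The sketch below is not part of the template; the URL and the 1-second latency budget are placeholders.

```go
// Rough sketch of a side-by-side probe: call the endpoint once per second with
// a fixed timeout and log at which latency step requests begin to fail.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 1 * time.Second} // tolerated latency budget (placeholder)
	for i := 0; i < 150; i++ {                       // covers the ~2.5 minute experiment
		start := time.Now()
		resp, err := client.Get("http://my-daemonset.monitoring:9100/metrics") // placeholder URL
		if err != nil {
			fmt.Printf("%s FAILED after %v: %v\n", time.Now().Format("15:04:05"), time.Since(start), err)
		} else {
			resp.Body.Close()
			fmt.Printf("%s ok in %v\n", time.Now().Format("15:04:05"), time.Since(start))
		}
		time.Sleep(1 * time.Second)
	}
}
```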

Progressive
DaemonSet
Snippet
Kubernetes
Latency
Containers
Latency progressively increases for Kubernetes Deployment

Latency of a Kubernetes Deployment progressively increases to analyse at which point the communication breaks.

Structure

We start by adding a 250ms latency on the Kubernetes Deployment's outgoing traffic for 30 seconds. Next, we increase the latency stepwise to 500ms, 750ms, and 1s, each for 30 seconds. In between, short wait steps make it easier to analyse each phase in external observability tools.

Progressive
Deployment
Snippet
Kubernetes
Latency
Containers

More Container Actions

See all
Start Using Steadybit Today

Get started with Steadybit, and you’ll get access to all of our features to discover the full power of Steadybit. Available for SaaS and on-prem!

Are you unsure where to begin?

No worries, our reliability experts are here to help: book a demo with them!

Statistics
Stars: -
Tags
Container
Kubernetes
Latency
Homepage
hub.steadybit.com/extension/com.steadybit.extension_container
License
MIT
Maintainer: Steadybit
Install now
© 2025 Steadybit GmbH. All rights reserved.