Docker Images And EKS Deployment

What this page helps you do

This page explains the container build and deployment path for LoadStrike on EKS. Use it when the team wants repeatable images and repeatable cluster runs.

Who this is for

Teams moving from single-machine runs to coordinator-and-agent execution, or teams that need tighter control over workload targeting.

Prerequisites

  • A scenario that already works in a single-node run

By the end

A clear picture of the cluster topology and of the fields that must line up across nodes.

Use this page when

Reach for it when execution topology, partitioning, or targeting changes how the run should be distributed.

Visual guide

Cluster topology diagram showing the coordinator, agents, and merged result.
Cluster settings matter because the coordinator, the agents, and the merged result all belong to one workload definition, so values such as the cluster ID and agent group must match across nodes.

Guide

Goal

The goal of this workflow is to standardize image creation and distributed execution so every run uses the same runtime, dependencies, and test binaries.

Image Strategy

Package the test harness into one immutable image. Then use runtime settings such as NodeType, targets, group, and cluster ID to switch between coordinator and agent behavior without rebuilding that image.

Build Prerequisites

You need Docker, AWS CLI, kubectl, and access to ECR and EKS. Build against .NET 8 and keep image tags tied to a commit SHA or release version so the run is traceable.
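A minimal multi-stage Dockerfile is one way to produce the immutable image; this is a sketch, and the project file name LoadStrike.Tests.csproj is an assumption to replace with your own layout.

```dockerfile
# Build stage: publish the test harness against .NET 8.
# LoadStrike.Tests.csproj is a placeholder project name.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish LoadStrike.Tests.csproj -c Release -o /app

# Runtime stage: ship only the published output on the slim runtime image.
FROM mcr.microsoft.com/dotnet/runtime:8.0 AS runtime
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "LoadStrike.Tests.dll"]
```

Tag the resulting image with the commit SHA or release version rather than latest, as described above.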

Registry Flow

Build locally or in CI, tag for ECR, push the image, and deploy coordinator and agent manifests that all reference the same image tag.
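One way to wire that flow into CI is sketched below with GitHub Actions; the role ARN, region, and repository name are placeholders, and the OIDC role is an assumption about how your CI authenticates to AWS.

```yaml
# Sketch: build the image, tag it with the short commit SHA, push to ECR.
name: build-and-push
on: { push: { branches: [main] } }
jobs:
  image:
    runs-on: ubuntu-latest
    permissions: { id-token: write, contents: read }
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::<aws-account-id>:role/loadstrike-ci
          aws-region: <region>
      - uses: aws-actions/amazon-ecr-login@v2
        id: ecr
      - run: |
          IMAGE="${{ steps.ecr.outputs.registry }}/loadstrike-tests:${GITHUB_SHA::7}"
          docker build -t "$IMAGE" .
          docker push "$IMAGE"
```

The coordinator and agent manifests then reference that single SHA-tagged image.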

Runtime Inputs

Configure NodeType, ClusterId, AgentGroup, AgentsCount, NatsServerUrl, AgentTargetScenarios, CoordinatorTargetScenarios, and ClusterCommandTimeoutMs through environment variables or appsettings.
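The same inputs can come from appsettings.json instead of environment variables. This sketch assumes the configuration keys mirror the LoadStrike__ environment-variable names (in .NET configuration, the double underscore maps to a nested section); the values shown are the sample values used on this page.

```json
{
  "LoadStrike": {
    "NodeType": "Agent",
    "ClusterId": "orders-cluster",
    "AgentGroup": "perf-agents",
    "AgentsCount": 4,
    "NatsServerUrl": "nats://nats.loadstrike.svc.cluster.local:4222",
    "AgentTargetScenarios": "wait-for-completion",
    "CoordinatorTargetScenarios": "submit-orders",
    "ClusterCommandTimeoutMs": 180000
  }
}
```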

Kubernetes Objects

Typical objects include Namespace, ConfigMap, Secret, an optional ServiceAccount plus IRSA setup, the agent Deployment, the coordinator Job, and an optional PVC for report output.

Rolling And Scale

Increase agent replicas gradually and keep coordinator AgentsCount aligned with the number of agents you expect to be active. Use versioned image tags to avoid mixed binary versions during one run.

Security Controls

Use imagePullSecrets or node IAM permissions for ECR pull, IRSA for AWS API access, and namespace-scoped RBAC and NetworkPolicy for least-privilege operation.
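As one example of the namespace-scoped controls, a NetworkPolicy can restrict agent pods to NATS and DNS traffic only. This is a sketch with assumed labels: app: loadstrike-agent matches the agent Deployment on this page, while app: nats is an assumption about how your NATS pods are labeled.

```yaml
# Sketch: agents may talk to NATS on 4222 and resolve DNS, nothing else.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: loadstrike-agent-egress
  namespace: loadstrike
spec:
  podSelector:
    matchLabels:
      app: loadstrike-agent
  policyTypes: [Egress]
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: nats        # assumed NATS pod label
    ports:
    - protocol: TCP
      port: 4222
  - ports:                 # allow DNS lookups
    - protocol: UDP
      port: 53
```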

Artifacts

Store the final coordinator reports in /reports and then copy them to S3 or PVC storage. Keep metadata such as cluster ID, image tag, and commit SHA next to the artifacts for traceability.

Troubleshooting

If agents are not discovered, first check NATS DNS and service reachability, matching ClusterId and AgentGroup values, agent readiness, and coordinator timeout settings.

Cluster setup samples

This sample uses environment variables so the same deployment artifact can boot either as the coordinator or as an agent while still joining the same ClusterId.

If you run these examples locally, add a valid runner key before execution starts. Set it with WithRunnerKey("...") or the config key LoadStrike:RunnerKey.

Docker Build, Push, and Runtime Settings

using LoadStrike;

var clusterId = Environment.GetEnvironmentVariable("LOADSTRIKE_CLUSTER_ID") ?? "orders-cluster";
var nodeRole = Environment.GetEnvironmentVariable("LOADSTRIKE_ROLE") ?? "agent";
var natsUrl = Environment.GetEnvironmentVariable("LOADSTRIKE_NATS_URL")
    ?? "nats://nats.loadstrike.svc.cluster.local:4222";
var agentGroup = "perf-agents";

var submitOrdersScenario = LoadStrikeScenario.Create(
        "submit-orders",
        _ => Task.FromResult(LoadStrikeResponse.Ok(statusCode: "202")))
    .WithLoadSimulations(LoadStrikeSimulation.IterationsForConstant(1, 2));

var waitForCompletionScenario = LoadStrikeScenario.Create(
        "wait-for-completion",
        _ => Task.FromResult(LoadStrikeResponse.Ok(statusCode: "200")))
    .WithLoadSimulations(LoadStrikeSimulation.IterationsForConstant(1, 2));

var runner = LoadStrikeRunner
    .RegisterScenarios(submitOrdersScenario, waitForCompletionScenario)
    .WithClusterId(clusterId)
    .WithAgentGroup(agentGroup)
    .WithNatsServerUrl(natsUrl);

if (string.Equals(nodeRole, "coordinator", StringComparison.OrdinalIgnoreCase))
{
    runner = runner
        .WithNodeType(LoadStrikeNodeType.Coordinator)
        .WithAgentsCount(3)
        .WithCoordinatorTargetScenarios("submit-orders")
        .WithAgentTargetScenarios("wait-for-completion")
        .WithReportFolder("/reports")
        .WithRunnerKey("rkr_your_remote_runner_key");
}
else
{
    runner = runner
        .WithNodeType(LoadStrikeNodeType.Agent)
        .WithAgentTargetScenarios("wait-for-completion");
}

runner.Run();

Kubernetes Manifests (Image Pull + Agents + Coordinator)

  • Single image strategy: build one immutable test image and switch behavior at runtime with NodeType and related settings.
  • Registry flow: build, tag, and push the same image to ECR so the coordinator and agents run the same binaries.
  • Image pull settings: configure pull credentials or IAM so the cluster can retrieve the image.
  • Coordinator and agent manifests: deploy the coordinator as a Job and the agents as a scalable Deployment.
  • Versioned tags: use commit- or release-based tags so one distributed run does not mix binaries from different builds.

apiVersion: v1
kind: Secret
metadata:
  name: ecr-cred
  namespace: loadstrike
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-docker-config>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: loadstrike-runner
  namespace: loadstrike
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<aws-account-id>:role/loadstrike-eks-runner
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: loadstrike-runtime
  namespace: loadstrike
data:
  LoadStrike__ClusterId: orders-cluster
  LoadStrike__AgentGroup: perf-agents
  LoadStrike__NatsServerUrl: nats://nats.loadstrike.svc.cluster.local:4222
  LoadStrike__AgentTargetScenarios: wait-for-completion
  LoadStrike__CoordinatorTargetScenarios: submit-orders
  LoadStrike__ClusterCommandTimeoutMs: "180000"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: loadstrike-agents
  namespace: loadstrike
spec:
  replicas: 4
  selector:
    matchLabels:
      app: loadstrike-agent
  template:
    metadata:
      labels:
        app: loadstrike-agent
    spec:
      serviceAccountName: loadstrike-runner
      imagePullSecrets:
      - name: ecr-cred
      containers:
      - name: agent
        image: <aws-account-id>.dkr.ecr.<region>.amazonaws.com/loadstrike-tests:v1.0.0
        envFrom:
        - configMapRef:
            name: loadstrike-runtime
        env:
        - name: LoadStrike__NodeType
          value: Agent
        resources:
          requests:
            cpu: "750m"
            memory: "768Mi"
          limits:
            cpu: "2"
            memory: "2Gi"
---
apiVersion: batch/v1
kind: Job
metadata:
  name: loadstrike-coordinator
  namespace: loadstrike
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      serviceAccountName: loadstrike-runner
      imagePullSecrets:
      - name: ecr-cred
      containers:
      - name: coordinator
        image: <aws-account-id>.dkr.ecr.<region>.amazonaws.com/loadstrike-tests:v1.0.0
        envFrom:
        - configMapRef:
            name: loadstrike-runtime
        env:
        - name: LoadStrike__NodeType
          value: Coordinator
        - name: LoadStrike__AgentsCount
          value: "4"
        - name: LoadStrike__RunnerKey
          value: rkr_your_remote_runner_key
        - name: LoadStrike__ReportFolder
          value: /reports
        volumeMounts:
        - name: reports
          mountPath: /reports
      volumes:
      - name: reports
        persistentVolumeClaim:
          claimName: loadstrike-reports-pvc
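The coordinator Job mounts a claim named loadstrike-reports-pvc that the manifests above do not define. A minimal claim might look like the following; the 1Gi size and the gp3 storage class are assumptions (gp3 requires the EBS CSI driver to be installed in the cluster).

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: loadstrike-reports-pvc
  namespace: loadstrike
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: gp3   # assumes the EBS CSI driver is installed
  resources:
    requests:
      storage: 1Gi
```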