Use case

Event-driven load testing

See how LoadStrike approaches event-driven load testing across Kafka, queues, streams, and downstream service completion.

Event-driven load testing diagram: how LoadStrike models async workloads that complete after the first request or published message.
Direct answer

What does event-driven load testing require?

Event-driven load testing must measure whether the downstream work completed, not only whether the source request or published message was accepted. That means tracking a workflow across queues or streams, along with the timeout windows, duplicates, and service stages that make the final outcome visible.

LoadStrike is built for that shape of workload. It keeps source and destination endpoints in one scenario, correlates the events that belong together, and reports how the full transaction behaved under sustained load.
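The correlation idea can be sketched independently of any tool: pair each source action with the downstream event that proves completion, then classify arrivals that come back late or more than once. This is a minimal illustrative sketch, not LoadStrike's API; all names here are assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Correlator:
    """Matches source requests to downstream completion events by ID."""
    timeout_s: float
    sent: dict = field(default_factory=dict)      # id -> send time
    outcomes: dict = field(default_factory=dict)  # id -> "ok" | "late" | "duplicate"

    def on_request(self, event_id: str, t: float) -> None:
        self.sent[event_id] = t

    def on_completion(self, event_id: str, t: float) -> None:
        if event_id in self.outcomes:
            self.outcomes[event_id] = "duplicate"  # completed more than once
        elif event_id in self.sent:
            late = (t - self.sent[event_id]) > self.timeout_s
            self.outcomes[event_id] = "late" if late else "ok"

c = Correlator(timeout_s=2.0)
c.on_request("a", t=0.0)
c.on_request("b", t=0.0)
c.on_completion("a", t=1.0)   # within the window -> ok
c.on_completion("b", t=5.0)   # past the window  -> late
c.on_completion("a", t=6.0)   # second arrival   -> duplicate
print(c.outcomes)             # {'a': 'duplicate', 'b': 'late'}
```

A request-only view would have called all of these successes; it is the correlation step that surfaces the late and duplicate outcomes.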

Who this is for

Teams testing async pipelines, queue-backed services, and stream-driven systems where user-visible completion happens after the initial edge call.

Why endpoint-only testing breaks down here

Request-only metrics do not explain late arrivals, duplicate completions, or consumer lag that appears after the message leaves the first producer. Those are often the failures that actually matter in event-driven systems.
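The gap between the two views is easy to see with paired timestamps: the edge response stays fast while downstream completion drifts as backlog builds. The timings below are invented for illustration.

```python
# Illustrative timings (seconds): the edge response returns quickly,
# while downstream completion drifts as consumer lag builds.
samples = [
    # (request_sent, response_received, downstream_completed)
    (0.0, 0.05, 0.4),
    (1.0, 1.04, 1.6),
    (2.0, 2.05, 3.9),
    (3.0, 3.06, 7.2),   # backlog growing: completion lags far behind
]

request_latency = [resp - sent for sent, resp, _ in samples]
end_to_end = [done - sent for sent, _, done in samples]

print(round(max(request_latency), 2))  # -> 0.06: looks healthy
print(round(max(end_to_end), 2))       # -> 4.2: the user-visible delay
```

Worst-case request latency stays near 60 ms while worst-case end-to-end time grows past four seconds, which is exactly the failure mode request-only dashboards hide.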

How LoadStrike fits

LoadStrike supports HTTP, Kafka, RabbitMQ, Azure Event Hubs, NATS, Redis Streams, Push Diffusion, and delegate-based custom stream endpoints, letting the team keep the event path inside the same transaction and report surface.

What to expect

Verified LoadStrike fit points

  • One scenario can include the source action and the downstream event that proves completion.
  • Grouped correlation helps teams inspect outcomes by tenant, branch, region, or another field.
  • Timeout and duplicate behavior stay visible in the final run artifacts.
  • Self-hosted runtime works across public SDKs and clustered execution patterns.
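Grouped correlation, the second point above, amounts to tallying outcomes per business field so a regression in one tenant or region is not averaged away. A hedged sketch of the idea in plain Python, with hypothetical event data:

```python
from collections import Counter, defaultdict

# Hypothetical per-event outcomes tagged with a grouping field ("region").
events = [
    {"region": "eu", "outcome": "ok"},
    {"region": "eu", "outcome": "late"},
    {"region": "us", "outcome": "ok"},
    {"region": "us", "outcome": "ok"},
    {"region": "us", "outcome": "duplicate"},
]

by_region = defaultdict(Counter)
for e in events:
    by_region[e["region"]][e["outcome"]] += 1

for region, counts in sorted(by_region.items()):
    print(region, dict(counts))
# eu {'ok': 1, 'late': 1}
# us {'ok': 2, 'duplicate': 1}
```

The grouping key could just as well be tenant or branch; the point is that the summary is computed over correlated transactions, not raw requests.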
Resources

Docs and examples

Start with the protocol and endpoint pages that already document the event-driven parts of the public product surface.

Kafka protocol guide

Frame Kafka as part of the workflow instead of as an isolated producer benchmark.

Common questions

Why is event-driven load testing hard to read from request metrics alone?

Because the workflow often finishes after the first request has already returned. Problems such as backlog growth, consumer slowdowns, late completion, and duplicates appear downstream and need transaction-level correlation to stay visible.

Does LoadStrike only support Kafka for event-driven testing?

No. The public docs also cover RabbitMQ, Azure Event Hubs, NATS, Redis Streams, Push Diffusion, and delegate-based custom stream endpoints for event-driven workflows.

Can event-driven tests still produce standard run artifacts?

Yes. LoadStrike still produces the same HTML, CSV, TXT, and Markdown report formats while keeping event-driven correlation data and grouped summaries inside the run artifact.

Related

Related documentation

Keep moving from positioning into concrete product detail.

Kafka Protocol Guide

Use this guide when Kafka is part of the business transaction and you need to measure the downstream path, not just publish speed.

Kafka Endpoint

Use the Kafka endpoint when LoadStrike needs to publish to or consume from Kafka and correlate the downstream workflow.

What Is A Transaction?

A transaction in LoadStrike is the full workflow you care about, not just one request. Read this page first if your workload crosses systems.

Related comparisons

Use these routes when the next question is tool choice rather than implementation detail.

LoadStrike vs k6

Compare LoadStrike and k6 across code ergonomics, protocol scope, downstream correlation, reporting depth, browser workflows, and distributed self-hosted execution.

LoadStrike vs Gatling

Compare LoadStrike and Gatling across scenario discipline, request modeling, downstream visibility, transport breadth, reporting depth, and self-hosted operations.

Related integrations

These reporting pages connect the transaction model to the observability systems already documented publicly.

LoadStrike and Datadog

See how the LoadStrike Datadog sink fits into transaction-aware, self-hosted load testing workflows.

LoadStrike and Grafana Loki

See how the LoadStrike Grafana Loki sink fits into transaction-aware reporting and public Grafana starter assets.

Next best pages

Every published route should help you move to the next concrete question instead of ending in a dead end.

Next step

Open the quick start, map the transaction you already care about, and keep the workflow explicit from source action to downstream completion.