Kafka Protocol Guide

Use this guide when Kafka is part of the business transaction and you need to measure the downstream path, not just publish speed.

What this page helps you do

Model Kafka as part of one transaction so you can measure what happens after publish, not only producer speed.

Who this is for

Teams testing event-driven workflows that start, pass through, or complete on Kafka.

Prerequisites

  • A stable tracking field shared between producer and consumer
  • A real topic and consumer group strategy for the workload

By the end

A Kafka test shape that keeps publish, consumer lag, and downstream completion inside the same report.

Choose this path when

Use Kafka guidance when the question is about business completion across the broker path, not only the producer-side throughput number.

Visual guide

Transaction diagram showing source publish, Kafka handoff, downstream service processing, and completion.
Kafka load testing is most useful when you follow the transaction from publish through the downstream completion that proves the work actually finished.

Guide

What the Kafka guide covers

This guide explains how to use Kafka as a tracked source or destination so the same workflow can be measured from the first produced message to the downstream completion.

When Kafka works best in LoadStrike

Use Kafka tracking when the business workflow publishes into Kafka, completes through Kafka, or depends on topic-level fan-out and consumer lag behavior under load.

What must stay stable

Keep the tracking field in one stable header or JSON path, and make sure the producer and consumer endpoints resolve the same value. Use a real ConsumerGroupId in consume mode so offsets are managed correctly.

Security and broker tuning

If the cluster uses TLS or SASL, populate SecurityProtocol and the Sasl object. Use ConfluentSettings only for advanced client properties that are not already represented by the dedicated fields.
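As a hedged sketch, a secured endpoint might look like the fragment below. Only the SecurityProtocol and Sasl field names come from this page; the enum value, the Sasl option type, and its nested properties (Mechanism, Username, Password) are illustrative assumptions, not confirmed API.

```csharp
// Sketch only: SecurityProtocol and Sasl are named on this page, but the
// enum and the nested property names below are illustrative assumptions.
var secured = new KafkaEndpointDefinition
{
    Name = "in",
    Mode = TrafficEndpointMode.Produce,
    TrackingField = TrackingFieldSelector.Parse("json:$.trackingId"),
    BootstrapServers = "broker-1.internal:9093",
    Topic = "orders.in",
    SecurityProtocol = KafkaSecurityProtocol.SaslSsl, // hypothetical enum
    Sasl = new KafkaSaslOptions                        // hypothetical type
    {
        Mechanism = "PLAIN",
        Username = "svc-loadtest",
        Password = Environment.GetEnvironmentVariable("KAFKA_SASL_PASSWORD")
    }
    // Reserve ConfluentSettings for advanced client properties that have
    // no dedicated field, per the guidance above.
};
```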

Protocol setup samples

Use these samples to compare how Kafka publish and consume paths are wired into one correlated transaction across the supported SDKs.

If you run these examples locally, supply a valid runner key before starting a run. Set it with WithRunnerKey("...") or via the config key LoadStrike:RunnerKey.
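A minimal sketch of both ways to supply the key; the builder type name is an assumption, and only WithRunnerKey and the LoadStrike:RunnerKey config key are named on this page.

```csharp
// Option 1: fluent configuration (the builder type name is illustrative).
var runner = new LoadStrikeRunnerBuilder()
    .WithRunnerKey("...") // substitute your real runner key
    .Build();

// Option 2: configuration, e.g. in appsettings.json:
// { "LoadStrike": { "RunnerKey": "..." } }
```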

Kafka Protocol Setup

using LoadStrike;

// Produce side: generated source traffic into orders.in.
var source = new KafkaEndpointDefinition
{
    Name = "in",
    Mode = TrafficEndpointMode.Produce,
    TrackingField = TrackingFieldSelector.Parse("json:$.trackingId"),
    BootstrapServers = "localhost:9092",
    Topic = "orders.in"
};

// Consume side: observed downstream traffic from orders.out.
// ConsumerGroupId is required in consume mode so offsets are managed correctly.
var destination = new KafkaEndpointDefinition
{
    Name = "out",
    Mode = TrafficEndpointMode.Consume,
    TrackingField = TrackingFieldSelector.Parse("json:$.trackingId"),
    BootstrapServers = "localhost:9092",
    Topic = "orders.out",
    ConsumerGroupId = "orders-tests"
};
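The sample above stops at defining the two endpoints; how they are registered into one correlated transaction is not shown on this page. A hedged sketch, assuming a test-definition builder exists (every name below except KafkaEndpointDefinition is an illustrative assumption):

```csharp
// Illustrative only: the builder type and method names are assumptions;
// source and destination are the endpoint definitions from the sample above.
var test = new LoadStrikeTestBuilder()
    .WithRunnerKey("...")     // runner key, as noted earlier on this page
    .AddEndpoint(source)      // Produce side: publishes into orders.in
    .AddEndpoint(destination) // Consume side: observes orders.out
    .Build();
```

Whatever the exact wiring, the point is that both endpoints share one tracking field, so publish, consumer lag, and downstream completion land in the same report.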

Kafka protocol choices a beginner should make

Producer versus consumer role

Choose Produce for generated source traffic and Consume for observed downstream traffic.

Stable tracking field

Keep the correlation id in one stable header or JSON location across producer and consumer messages.
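The setup sample above uses the "json:" selector form; a parallel header-based form is sketched below as an assumption, not confirmed syntax.

```csharp
// "json:" appears in this page's setup sample; "header:" is an assumed
// parallel syntax for header-carried correlation ids.
var fromBody   = TrackingFieldSelector.Parse("json:$.trackingId");
var fromHeader = TrackingFieldSelector.Parse("header:x-tracking-id");
```

Whichever form you pick, use the same selector on the produce and consume endpoints so both resolve the same value.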

ConsumerGroupId

Always set this for consume mode so offsets are tracked predictably.

SecurityProtocol and Sasl

Only set these when the broker actually requires TLS or SASL auth.