Kafka Endpoint

Use the Kafka endpoint when LoadStrike needs to publish to or consume from Kafka and correlate the downstream workflow.

Who this is for

Teams defining the transport-specific source or destination side of a correlated transaction.

Prerequisites

  • A stable tracking field shared between the producer side and the consumer or completion side

By the end

A transport definition that matches the transaction you need to measure.

Use this page when

Use this page when Kafka Endpoint is the source or destination side of the transaction and you need the documented endpoint fields before wiring the scenario.

Visual guide

Transaction diagram showing source action, handoff, downstream processing, and completion.
The settings on this page only make sense when the workflow is treated as one transaction from source action to downstream completion.

Guide

SASL Mechanisms

Kafka endpoint configuration supports the common SASL mechanisms: Plain, SCRAM (SHA-256 and SHA-512), OAuth bearer, and GSSAPI. For non-OAuth SASL modes, a username is required; the password may be an empty string, but null is rejected.
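As a sketch of the rule above, a SCRAM configuration might look like the following. The enum type names (`KafkaSecurityProtocol`, `KafkaSaslMechanism`) are assumptions for illustration; the field names and accepted values come from this page.

```csharp
// Hedged sketch: enum type names are assumed, field names are documented on this page.
var endpoint = new KafkaEndpointDefinition
{
    Name = "kafka-sasl",
    Mode = TrafficEndpointMode.Produce,
    BootstrapServers = "broker:9093",
    Topic = "orders.events",
    SecurityProtocol = KafkaSecurityProtocol.SaslSsl,
    Sasl = new KafkaSaslOptions
    {
        Mechanism = KafkaSaslMechanism.ScramSha256,
        Username = "svc-loadstrike",
        Password = ""   // empty string is accepted; null would be rejected
    }
};
```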

OAuth Bearer Mode

Use SASL OAuth bearer mode when the Kafka broker expects bearer tokens instead of SCRAM or Plain credentials. In that branch, OAuthBearerTokenEndpointUrl is required so the client knows where to obtain the bearer token, and AdditionalSettings or ConfluentSettings can still add mechanism-specific Kafka client properties when the broker needs more than the standard fields.
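A minimal sketch of the OAuth bearer branch, assuming the same `KafkaSaslMechanism` enum name as above and a string-to-string `AdditionalSettings` dictionary; the token endpoint URL and the extra property key are placeholder examples, not documented values.

```csharp
// Hedged sketch of the OAuth bearer branch; enum and dictionary shapes are assumed.
var sasl = new KafkaSaslOptions
{
    Mechanism = KafkaSaslMechanism.OAuthBearer,
    // Required in this branch: where the client obtains the bearer token.
    OAuthBearerTokenEndpointUrl = "https://idp.example.com/oauth2/token",
    AdditionalSettings = new Dictionary<string, string>
    {
        // Hypothetical mechanism-specific client property the broker might need.
        ["sasl.oauthbearer.scope"] = "kafka"
    }
};
```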

GSSAPI And Kerberos Config

For Kafka GSSAPI runs, keep the Kerberos client config available on the test node. The Go SDK reads `KRB5_CONFIG` first and otherwise falls back to the default OS Kerberos file locations.

Topics and Groups

Set the topic and consumer group so LoadStrike can publish or consume from the correct Kafka path and correlate destination messages predictably.

Confluent Settings Map

Use the ConfluentSettings dictionary when the standard endpoint fields are not enough and extra Kafka client properties must be passed directly to the producer or consumer configuration.
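A sketch of the pass-through pattern, assuming `ConfluentSettings` is a string-to-string dictionary; the property names shown are standard Kafka client keys used here only as examples.

```csharp
// Hedged sketch: assumes ConfluentSettings is a string-to-string dictionary.
var endpoint = new KafkaEndpointDefinition
{
    Name = "kafka-tuned",
    Mode = TrafficEndpointMode.Consume,
    BootstrapServers = "localhost:9092",
    Topic = "orders.events",
    ConsumerGroupId = "orders-tests",
    ConfluentSettings = new Dictionary<string, string>
    {
        // Extra Kafka client properties not covered by the dedicated fields.
        ["fetch.min.bytes"] = "1",
        ["session.timeout.ms"] = "45000"
    }
};
```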

SASL Advanced Fields

KafkaSaslOptions also includes OAuthBearerTokenEndpointUrl for OAuth bearer mode and AdditionalSettings for mechanism-specific values.

Endpoint definition samples

Use these samples to see how Kafka Endpoint is represented as a source or destination endpoint before you attach it to a correlated scenario.

If you run these examples locally, add a valid runner key before execution starts. Set it with WithRunnerKey("...") or the config key LoadStrike:RunnerKey.

Kafka Endpoint

using LoadStrike;

// Consume-mode endpoint: LoadStrike listens on the topic and correlates
// destination messages by the X-Correlation-Id header.
var endpoint = new KafkaEndpointDefinition
{
    Name = "kafka-in",
    Mode = TrafficEndpointMode.Consume,
    TrackingField = TrackingFieldSelector.Parse("header:X-Correlation-Id"),
    BootstrapServers = "localhost:9092",
    Topic = "orders.events",
    ConsumerGroupId = "orders-tests"
};

Kafka endpoint fields and parameters

Name

Required endpoint identifier. It appears in correlation tables, sink exports, and troubleshooting messages, so choose a stable descriptive name.

Mode

Choose Produce when LoadStrike should create traffic, or Consume when it should listen for downstream traffic. Run mode validation checks that the selected mode matches the source or destination role.

TrackingField

Selector that extracts the correlation id from a header or JSON body. It is normally required, but can be omitted when UseLoadStrikeTraceIdHeader is true so LoadStrike uses header:loadstrike-trace-id for generated source traffic. Selector prefixes such as header: and json: are parsed case-insensitively, but the header name or JSON path segments after the prefix must match exact casing. The extracted value is matched case-sensitively by default unless TrackingFieldValueCaseSensitive is turned off on the tracking configuration.
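The casing rules above can be read as follows; these lines restate the documented rule and add no new API guarantees.

```csharp
// Prefix is case-insensitive: both parse to the same header selector.
var a = TrackingFieldSelector.Parse("header:X-Correlation-Id");
var b = TrackingFieldSelector.Parse("HEADER:X-Correlation-Id");

// The part after the prefix is exact-cased: this targets a *different*
// header name than the two selectors above.
var c = TrackingFieldSelector.Parse("header:x-correlation-id");

// JSON body selector; path segments after "json:" are also exact-cased.
var d = TrackingFieldSelector.Parse("json:trackingId");
```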

GatherByField

Optional destination-only selector used for grouped correlation reports. It follows the same selector-casing rules as TrackingField. Group values are grouped case-sensitively by default unless GatherByFieldValueCaseSensitive is turned off on the tracking configuration.

AutoGenerateTrackingIdWhenMissing

Defaults to true. When the source payload does not already contain the tracked id, LoadStrike can inject one so the generated traffic still produces a correlation key.

UseLoadStrikeTraceIdHeader

Defaults to false. When true and TrackingField is omitted, produced source messages receive a loadstrike-trace-id header with a GUID value. Consume-mode source endpoints and CorrelateExistingTraffic runs do not inject this header; they only observe it if the existing traffic already contains it.
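A sketch of the trace-id-header shortcut described above: a produce-mode source with no `TrackingField`, so each produced message receives the generated `loadstrike-trace-id` header.

```csharp
// Produce-mode source with no TrackingField: LoadStrike injects a
// loadstrike-trace-id header with a GUID value into each produced message.
var source = new KafkaEndpointDefinition
{
    Name = "kafka-source",
    Mode = TrafficEndpointMode.Produce,
    UseLoadStrikeTraceIdHeader = true,
    BootstrapServers = "localhost:9092",
    Topic = "orders.commands"   // topic name is an illustrative placeholder
};
```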

PollInterval

Controls how often a consumer-style endpoint polls for new messages. The value must stay greater than zero whenever you set it explicitly.

MessageHeaders

Optional headers that are written with produced traffic and also influence tracking extraction when the selector targets headers. Header names are preserved exactly as you set them, and header selectors later match using that same exact casing.

MessagePayload

Optional object or body value sent by producer-style endpoints. This is the payload your scenario is actually placing on the wire.
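Headers and payload are typically set together on a producer-style endpoint. The sketch below assumes `MessageHeaders` is a string dictionary and `MessagePayload` accepts an arbitrary object; only the field names come from this page.

```csharp
// Hedged sketch: assumes MessageHeaders is a string dictionary and
// MessagePayload accepts an arbitrary object.
var producer = new KafkaEndpointDefinition
{
    Name = "kafka-producer",
    Mode = TrafficEndpointMode.Produce,
    BootstrapServers = "localhost:9092",
    Topic = "orders.events",
    TrackingField = TrackingFieldSelector.Parse("header:X-Correlation-Id"),
    MessageHeaders = new Dictionary<string, string>
    {
        // Header names are preserved exactly as written here, and the
        // header: selector above matches this exact casing.
        ["X-Correlation-Id"] = Guid.NewGuid().ToString()
    },
    MessagePayload = new { orderId = "ord-1", status = "created" }
};
```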

MessagePayloadType

Optional type hint used when JSON selectors need typed parsing. Leave it unset when dynamic JSON parsing is enough.

JsonSettings / JsonConvertSettings

Optional serializer settings for System.Text.Json or Newtonsoft.Json. Use them only when the payload shape or naming strategy requires custom parsing behavior.

ContentType

Optional explicit content type for custom payload handling. This is most helpful for delegate-style transports or non-default HTTP body shapes.

BootstrapServers

Required broker list. This is the first connection point the producer or consumer uses to reach the Kafka cluster.

Topic

Required topic name for the produce or consume side of the workflow.

ConsumerGroupId

Required for consume mode so Kafka can manage offsets for the LoadStrike consumer group. It is not required for produce mode.

SecurityProtocol

Supported values are Plaintext, Ssl, SaslPlaintext, and SaslSsl. SASL protocols require a populated Sasl object.

Sasl.Mechanism

Supported values are Plain, ScramSha256, ScramSha512, Gssapi, and OAuthBearer.

Sasl.Username / Sasl.Password

Required for non-OAuth SASL mechanisms. A username with a null password is rejected, although an empty-string password is accepted.

Sasl.Gssapi.ServiceName / Realm / Username / Password

Use these fields when the Kafka broker expects Kerberos-backed SASL GSSAPI. In the Go SDK, keep the Kerberos config discoverable through `KRB5_CONFIG` or the default OS Kerberos config file paths.

Sasl.OAuthBearerTokenEndpointUrl

Required when the SASL mechanism is OAuthBearer. This is the token endpoint used for the OAuth bearer branch instead of the username/password SASL branch.

Sasl.AdditionalSettings

Optional extra client settings passed to the underlying Kafka SASL configuration when the broker needs custom tuning.

ConfluentSettings

Optional free-form dictionary for additional Kafka client properties that are not covered by the dedicated fields.

StartFromEarliest

Defaults to true for consume mode. Enable it when the test should read from the earliest retained offset instead of only new traffic.
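A consume-side sketch combining the two polling-related fields: reading only new traffic and polling twice a second. `PollInterval` is assumed here to be a `TimeSpan`; the field names come from this page.

```csharp
// Consume only new traffic (skip retained offsets) and poll twice a second.
// PollInterval's type is assumed to be TimeSpan.
var consumer = new KafkaEndpointDefinition
{
    Name = "kafka-completion",
    Mode = TrafficEndpointMode.Consume,
    BootstrapServers = "localhost:9092",
    Topic = "orders.completed",
    ConsumerGroupId = "orders-tests",
    TrackingField = TrackingFieldSelector.Parse("json:trackingId"),
    StartFromEarliest = false,
    PollInterval = TimeSpan.FromMilliseconds(500)
};
```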

Example destination payload matched by a json:trackingId selector:

{ "trackingId": "trk-1", "status": "completed" }