Kafka Endpoint
Use the Kafka endpoint when LoadStrike needs to publish to or consume from Kafka and correlate the downstream workflow.
Who this is for
Teams defining the transport-specific source or destination side of a correlated transaction.
Prerequisites
- A stable tracking field shared between the producer side and the consumer or completion side
By the end
A transport definition that matches the transaction you need to measure.
Use this page when
Reach for this endpoint when Kafka is the source or destination side of the transaction and you need the documented endpoint fields before wiring the scenario.
SASL Mechanisms
Kafka endpoint configuration supports the common SASL mechanisms: Plain, SCRAM, OAuth bearer, and GSSAPI. For non-OAuth SASL modes, a username is required; the password may be an empty string, but a null password is rejected.
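As an illustration, here is a SCRAM-SHA-512 variant of the source block, following the Python dictionary shape used in the samples on this page. The nested `Sasl` key name is assumed from the `KafkaSaslOptions` type; the broker address and credentials are placeholders.

```python
# Hypothetical SCRAM source endpoint in the Python dictionary shape.
# Broker address and credentials are placeholders, not real values.
scram_source = {
    "Kind": "Kafka",
    "Name": "orders-in",
    "Mode": "Produce",
    "TrackingField": "header:X-Correlation-Id",
    "BootstrapServers": "broker.internal:9093",  # placeholder broker
    "Topic": "orders.in",
    "SecurityProtocol": "SaslSsl",
    "Sasl": {
        "Mechanism": "ScramSha512",
        "Username": "orders-user",      # required for non-OAuth SASL modes
        "Password": "orders-password",  # may be empty, but never null/None
    },
}
```

The same shape applies to the SCRAM-SHA-256 and Plain mechanisms; only the `Mechanism` value changes.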
OAuth Bearer Mode
Use SASL OAuth bearer mode when the Kafka broker expects bearer tokens instead of SCRAM or Plain credentials. In that branch, OAuthBearerTokenEndpointUrl is required so the client knows where to obtain the bearer token, and AdditionalSettings or ConfluentSettings can still add mechanism-specific Kafka client properties when the broker needs more than the standard fields.
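A minimal sketch of that branch in the same dictionary shape. The token endpoint URL is a placeholder, and the `sasl.oauthbearer.scope` property name inside `AdditionalSettings` is an assumed example of a mechanism-specific client setting, not a required field.

```python
# Hypothetical OAuth bearer SASL block: OAuthBearerTokenEndpointUrl tells
# the client where to fetch bearer tokens instead of using username/password.
oauth_sasl = {
    "Mechanism": "OAuthBearer",
    "OAuthBearerTokenEndpointUrl": "https://idp.example.com/oauth2/token",  # placeholder
    # AdditionalSettings carries mechanism-specific client properties when
    # the broker needs more than the standard fields (property name assumed).
    "AdditionalSettings": {
        "sasl.oauthbearer.scope": "kafka-produce",
    },
}
```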
GSSAPI And Kerberos Config
For Kafka GSSAPI runs, keep the Kerberos client config available on the test node. The Go SDK reads `KRB5_CONFIG` first and otherwise falls back to the default OS Kerberos file locations.
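For example, a launcher script can pin the Kerberos config path explicitly before the run starts; the path below is a placeholder.

```python
import os

# Point the Kerberos client at an explicit config before the run starts.
# If KRB5_CONFIG is unset, the Go SDK falls back to the default OS
# locations (e.g. /etc/krb5.conf on Linux).
os.environ.setdefault("KRB5_CONFIG", "/etc/loadstrike/krb5.conf")  # placeholder path
```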
Topics and Groups
Set the topic and consumer group so LoadStrike can publish or consume from the correct Kafka path and correlate destination messages predictably.
Confluent Settings Map
Use the ConfluentSettings dictionary when the standard endpoint fields are not enough and extra Kafka client properties must be passed directly to the producer or consumer configuration.
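As a sketch, the extra properties ride along in that dictionary and are handed straight to the client configuration. The two property names below are standard Kafka client settings used purely as examples, not fields this endpoint requires.

```python
# Hypothetical endpoint fragment: ConfluentSettings passes raw Kafka client
# properties through to the underlying producer or consumer configuration.
kafka_options = {
    "BootstrapServers": "localhost:9092",
    "Topic": "orders.events",
    "ConfluentSettings": {
        "fetch.max.bytes": "52428800",      # raise the per-fetch size cap
        "socket.keepalive.enable": "true",  # keep idle broker connections alive
    },
}
```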
SASL Advanced Fields
KafkaSaslOptions also includes OAuthBearerTokenEndpointUrl for OAuth bearer mode and AdditionalSettings for mechanism-specific values.
Endpoint definition samples
Use these samples to see how Kafka Endpoint is represented as a source or destination endpoint before you attach it to a correlated scenario.
If you run these examples locally, add a valid runner key before execution starts. Set it with WithRunnerKey("...") or the config key LoadStrike:RunnerKey.
Kafka Endpoint
using LoadStrike;

var endpoint = new KafkaEndpointDefinition
{
    Name = "kafka-out",
    Mode = TrafficEndpointMode.Consume,
    TrackingField = TrackingFieldSelector.Parse("header:X-Correlation-Id"),
    BootstrapServers = "localhost:9092",
    Topic = "orders.events",
    ConsumerGroupId = "orders-tests"
};
package main

import loadstrike "loadstrike.com/sdk/go"

func main() {
	tracking := &loadstrike.TrackingConfigurationSpec{
		RunMode: "GenerateAndCorrelate",
		Source: &loadstrike.EndpointSpec{
			Kind:          "Kafka",
			Name:          "orders-in",
			Mode:          "Produce",
			TrackingField: "header:X-Correlation-Id",
			Kafka: &loadstrike.KafkaEndpointOptions{
				BootstrapServers: "localhost:9092",
				Topic:            "orders.in",
				SecurityProtocol: "SaslSsl",
				SASL: &loadstrike.KafkaSASLOptions{
					Mechanism: "Plain",
					Username:  "orders-user",
					Password:  "orders-password",
				},
			},
		},
		Destination: &loadstrike.EndpointSpec{
			Kind:          "Kafka",
			Name:          "orders-out",
			Mode:          "Consume",
			TrackingField: "header:X-Correlation-Id",
			Kafka: &loadstrike.KafkaEndpointOptions{
				BootstrapServers: "localhost:9092",
				Topic:            "orders.completed",
				ConsumerGroupID:  "orders-tests",
			},
		},
	}

	loadstrike.RegisterScenarios(
		loadstrike.Empty("kafka-tracking").WithTrackingConfiguration(tracking),
	).Run()
}
import com.loadstrike.runtime.CrossPlatformScenarioConfigurator;
import com.loadstrike.runtime.CrossPlatformTrackingConfiguration;
import com.loadstrike.runtime.KafkaEndpointDefinition;
import com.loadstrike.runtime.LoadStrikeCorrelation.TrackingFieldSelector;
import com.loadstrike.runtime.LoadStrikeRuntime.LoadStrikeRunner;
import com.loadstrike.runtime.LoadStrikeRuntime.LoadStrikeScenario;
import com.loadstrike.runtime.LoadStrikeRuntime.LoadStrikeSimulation;
import com.loadstrike.runtime.LoadStrikeTransports;

var source = new KafkaEndpointDefinition();
source.name = "orders-in";
source.mode = LoadStrikeTransports.TrafficEndpointMode.Produce;
source.trackingField = TrackingFieldSelector.parse("json:$.trackingId");
source.bootstrapServers = "localhost:9092";
source.topic = "orders.in";

var destination = new KafkaEndpointDefinition();
destination.name = "orders-out";
destination.mode = LoadStrikeTransports.TrafficEndpointMode.Consume;
destination.trackingField = TrackingFieldSelector.parse("json:$.trackingId");
destination.bootstrapServers = "localhost:9092";
destination.topic = "orders.out";
destination.consumerGroupId = "orders-tests";

var tracking = new CrossPlatformTrackingConfiguration();
tracking.source = source;
tracking.destination = destination;
tracking.runMode = LoadStrikeTransports.TrackingRunMode.GenerateAndCorrelate;

var scenario = CrossPlatformScenarioConfigurator.Configure(
        LoadStrikeScenario.empty("orders-kafka-to-kafka"),
        tracking
).withLoadSimulations(LoadStrikeSimulation.inject(10, 1d, 20d));

LoadStrikeRunner
        .registerScenarios(scenario)
        .withRunnerKey("rkl_your_local_runner_key")
        .run();
from loadstrike_sdk import CrossPlatformScenarioConfigurator, LoadStrikeRunner, LoadStrikeScenario, LoadStrikeSimulation

tracking = {
    "RunMode": "GenerateAndCorrelate",
    "Source": {
        "Kind": "Kafka",
        "Name": "orders-in",
        "Mode": "Produce",
        "TrackingField": "json:$.trackingId",
        "BootstrapServers": "localhost:9092",
        "Topic": "orders.in",
    },
    "Destination": {
        "Kind": "Kafka",
        "Name": "orders-out",
        "Mode": "Consume",
        "TrackingField": "json:$.trackingId",
        "BootstrapServers": "localhost:9092",
        "Topic": "orders.out",
        "ConsumerGroupId": "orders-tests",
    },
}

scenario = (
    CrossPlatformScenarioConfigurator.Configure(
        LoadStrikeScenario.empty("orders-kafka-to-kafka"),
        tracking,
    )
    .with_load_simulations(LoadStrikeSimulation.inject(10, 1, 20))
)

LoadStrikeRunner.register_scenarios(scenario) \
    .with_runner_key("rkl_your_local_runner_key") \
    .run()
import {
  CrossPlatformScenarioConfigurator,
  LoadStrikeRunner,
  LoadStrikeScenario,
  LoadStrikeSimulation
} from "@loadstrike/loadstrike-sdk";

const tracking = {
  RunMode: "GenerateAndCorrelate",
  Source: {
    Kind: "Kafka",
    Name: "orders-in",
    Mode: "Produce",
    TrackingField: "json:$.trackingId",
    BootstrapServers: "localhost:9092",
    Topic: "orders.in"
  },
  Destination: {
    Kind: "Kafka",
    Name: "orders-out",
    Mode: "Consume",
    TrackingField: "json:$.trackingId",
    BootstrapServers: "localhost:9092",
    Topic: "orders.out",
    ConsumerGroupId: "orders-tests"
  }
};

const scenario = CrossPlatformScenarioConfigurator
  .Configure(LoadStrikeScenario.empty("orders-kafka-to-kafka"), tracking)
  .withLoadSimulations(LoadStrikeSimulation.inject(10, 1, 20));

await LoadStrikeRunner
  .registerScenarios(scenario)
  .withRunnerKey("rkl_your_local_runner_key")
  .run();
const {
  CrossPlatformScenarioConfigurator,
  LoadStrikeRunner,
  LoadStrikeScenario,
  LoadStrikeSimulation
} = require("@loadstrike/loadstrike-sdk");

(async () => {
  const tracking = {
    RunMode: "GenerateAndCorrelate",
    Source: {
      Kind: "Kafka",
      Name: "orders-in",
      Mode: "Produce",
      TrackingField: "json:$.trackingId",
      BootstrapServers: "localhost:9092",
      Topic: "orders.in"
    },
    Destination: {
      Kind: "Kafka",
      Name: "orders-out",
      Mode: "Consume",
      TrackingField: "json:$.trackingId",
      BootstrapServers: "localhost:9092",
      Topic: "orders.out",
      ConsumerGroupId: "orders-tests"
    }
  };

  const scenario = CrossPlatformScenarioConfigurator
    .Configure(LoadStrikeScenario.empty("orders-kafka-to-kafka"), tracking)
    .withLoadSimulations(LoadStrikeSimulation.inject(10, 1, 20));

  await LoadStrikeRunner
    .registerScenarios(scenario)
    .withRunnerKey("rkl_your_local_runner_key")
    .run();
})();
Kafka endpoint fields and parameters
Required endpoint identifier. It appears in correlation tables, sink exports, and troubleshooting messages, so choose a stable descriptive name.
Choose Produce when LoadStrike should create traffic, or Consume when it should listen for downstream traffic. Run mode validation checks that the selected mode matches the source or destination role.
Selector that extracts the correlation id from a header or JSON body. It is normally required, but can be omitted when UseLoadStrikeTraceIdHeader is true so LoadStrike uses header:loadstrike-trace-id for generated source traffic. Selector prefixes such as header: and json: are parsed case-insensitively, but the header name or JSON path segments after the prefix must match exact casing. The extracted value is matched case-sensitively by default unless TrackingFieldValueCaseSensitive is turned off on the tracking configuration.
Optional destination-only selector used for grouped correlation reports. It follows the same selector-casing rules as TrackingField. Group values are grouped case-sensitively by default unless GatherByFieldValueCaseSensitive is turned off on the tracking configuration.
Defaults to true. When the source payload does not already contain the tracked id, LoadStrike can inject one so the generated traffic still produces a correlation key.
Defaults to false. When true and TrackingField is omitted, produced source messages receive a loadstrike-trace-id header with a GUID value. Consume-mode source endpoints and CorrelateExistingTraffic runs do not inject this header; they only observe it if the existing traffic already contains it.
Controls how often a consumer-style endpoint polls for new messages. The value must stay greater than zero whenever you set it explicitly.
Optional headers that are written with produced traffic and also influence tracking extraction when the selector targets headers. Header names are preserved exactly as you set them, and header selectors later match using that same exact casing.
Optional object or body value sent by producer-style endpoints. This is the payload your scenario is actually placing on the wire.
Optional type hint used when JSON selectors need typed parsing. Leave it unset when dynamic JSON parsing is enough.
Optional serializer settings for System.Text.Json or Newtonsoft.Json. Use them only when the payload shape or naming strategy requires custom parsing behavior.
Optional explicit content type for custom payload handling. This is most helpful for delegate-style transports or non-default HTTP body shapes.
Required broker list. This is the first connection point the producer or consumer uses to reach the Kafka cluster.
Required topic name for the produce or consume side of the workflow.
Required for consume mode so Kafka can manage offsets for the LoadStrike consumer group. It is not required for produce mode.
Supported values are Plaintext, Ssl, SaslPlaintext, and SaslSsl. SASL protocols require a populated Sasl object.
Supported values are Plain, ScramSha256, ScramSha512, Gssapi, and OAuthBearer.
Required for non-OAuth SASL mechanisms. A username with a null password is rejected; an empty-string password is allowed.
Use these fields when the Kafka broker expects Kerberos-backed SASL GSSAPI. In the Go SDK, keep the Kerberos config discoverable through `KRB5_CONFIG` or the default OS Kerberos config file paths.
Required when the SASL mechanism is OAuthBearer. This is the token endpoint used for the OAuth bearer branch instead of the username/password SASL branch.
Optional extra client settings passed to the underlying Kafka SASL configuration when the broker needs custom tuning.
Optional free-form dictionary for additional Kafka client properties that are not covered by the dedicated fields.
Defaults to true for consume mode. When enabled, the test reads from the earliest retained offset instead of only new traffic; disable it when only messages produced during the run should be observed.
Sample message body matched by the json:$.trackingId selector used in the examples above:
{ "trackingId": "trk-1", "status": "completed" }
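To make the selector rules concrete, here is a toy extractor (not the SDK's parser) applied to the sample payload above. It illustrates the documented behavior: the `header:`/`json:` prefix is parsed case-insensitively, while the header name or JSON path after the prefix must match exact casing.

```python
import json

def extract(selector: str, headers: dict, body: str):
    """Toy illustration of header:/json: selectors; not the SDK's parser."""
    prefix, _, rest = selector.partition(":")
    if prefix.lower() == "header":        # prefix is parsed case-insensitively
        return headers.get(rest)          # header name must match exact casing
    if prefix.lower() == "json" and rest.startswith("$."):
        return json.loads(body).get(rest[2:])  # single-level path, demo only
    return None

body = '{ "trackingId": "trk-1", "status": "completed" }'
print(extract("json:$.trackingId", {}, body))  # trk-1
print(extract("HEADER:X-Correlation-Id",
              {"X-Correlation-Id": "abc-123"}, body))  # abc-123
```

Note that `extract("header:x-correlation-id", ...)` against the same headers returns nothing, because the name after the prefix is matched with exact casing.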