Report Overview

This page explains how to read a LoadStrike report. Use it when you want to know what each section means and where to look first.

What this page helps you do

Read a LoadStrike report faster by knowing where to start, what each tab means, and where failures show up.

Who this is for

Anyone reviewing a run result for the first time, including QA, platform, SRE, and engineering teams.

Prerequisites

  • A completed run or sample report output

By the end

A repeatable way to move from summary to detailed diagnosis without guessing which tab matters first.

Choose this path when

Use report overview when you already have a run result and need to understand which sections answer summary, failure, threshold, and correlation questions fastest.

Visual guide

Annotated report layout showing the summary, scenario tabs, failed rows, thresholds, and grouped correlation sections. LoadStrike reports are easiest to read when you start with Summary and then move into the scenario, failure, and correlation sections that answer your next question.

Illustration of LoadStrike report analytics and charts. The same run result drives local reports, summary charts, failed rows, and grouped correlation analysis.

Sample Report Data Rows

Scenario      demo      OK Count=120  P50=18ms  P95=45ms
FailureRate   demo      1.67%
Bytes         demo      524288
StatusClass   2xx       98.3%
Grouped       tenant-a  Success=58  Failure=2
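
Treating the Count value as the scenario's total requests, the 1.67% failure-rate row follows directly from the raw counts. A minimal sanity-check sketch in plain Python (not the SDK; reading Count as the total rather than the OK-only count is an assumption):

```python
def failure_rate(total: int, failed: int) -> float:
    """Percentage of failed requests, rounded to two decimals."""
    if total == 0:
        return 0.0
    return round(100.0 * failed / total, 2)

# 2 failures out of 120 requests reproduces the 1.67% sample row.
print(failure_rate(120, 2))
```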

Guide

Report Navigation

The HTML report uses fixed left-side tabs so you can move between sections without losing context. It also includes an icon-only Light/Dark theme toggle at the top right.

Brand Header

Every HTML report includes the theme-specific LoadStrike logo. The logo switches with the selected theme while keeping the same position and visual size.

Summary and Scenario Sections

Start with Summary for the quickest run-level view. It shows totals, and renders latency tables, latency graphs, failure-rate charts, bytes charts, and status-class charts only when those datasets are populated. Scenarios and Scenario Measurements appear only when scenario rows exist, and then show count, RPS, and latency percentiles by scenario. Empty OK or FAIL measurement rows are skipped, so the measurement table only shows results that have recorded request, latency, byte, or status-code data.
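
The skip-empty rule can be sketched as a simple filter: an OK or FAIL measurement row is rendered only when at least one of its datasets is populated. A hypothetical illustration in plain Python (the field names are assumptions, not actual SDK fields):

```python
def has_data(row: dict) -> bool:
    """A measurement row counts as populated when any tracked dataset has values."""
    return any(row.get(k) for k in ("request_count", "latencies", "bytes", "status_codes"))

rows = [
    {"kind": "OK", "request_count": 120, "latencies": [18, 45]},
    {"kind": "FAIL", "request_count": 0, "latencies": [], "bytes": 0, "status_codes": []},
]
visible = [r for r in rows if has_data(r)]
# Only the populated OK row survives; the all-zero FAIL row is skipped.
```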

Step Sections

Steps and Step Measurements break the run down to named step level so it is easier to see where time and failures happened inside the workflow. These tabs are omitted when the run does not produce step-level rows, and each step measurement row is shown only when that specific OK or FAIL measurement contains data.

Status Codes and Failed Responses

Status Codes shows the Percent column when status rows exist. Failed Responses appears only when there are failed status rows, or failed and timed-out detail rows, worth investigating.

Correlation Sections

Ungrouped Correlation Summary shows one row per correlated ID and adds a combined P50, P80, P85, P90, P95, and P99 graph only when percentile points exist. Grouped Correlation Summary aggregates by GatherByField and renders one chart per GatherBy value, only for groups that contain percentile data. Empty correlation tabs are omitted.
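
The grouped rendering boils down to bucketing correlated rows by GatherByField, computing percentiles per bucket, and emitting a chart only for buckets that actually have latency points. A hypothetical sketch in plain Python (field names and nearest-rank percentiles are assumptions, not the SDK's implementation):

```python
from collections import defaultdict

def percentile(sorted_vals, p):
    """Nearest-rank percentile over a pre-sorted list."""
    idx = max(0, -(-p * len(sorted_vals) // 100) - 1)  # ceil(p * n / 100) - 1
    return sorted_vals[idx]

def grouped_percentiles(rows, percentiles=(50, 80, 85, 90, 95, 99)):
    groups = defaultdict(list)
    for row in rows:
        groups[row["gather_by"]].append(row["latency_ms"])
    out = {}
    for key, vals in groups.items():
        if not vals:  # groups without percentile data get no chart
            continue
        vals.sort()
        out[key] = {f"P{p}": percentile(vals, p) for p in percentiles}
    return out

rows = [{"gather_by": "tenant-a", "latency_ms": v} for v in (12, 18, 25, 44, 90)]
print(grouped_percentiles(rows)["tenant-a"]["P50"])
```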

Thresholds and Metrics

Thresholds shows pass or fail quality gates when threshold evaluations were recorded. Metrics shows runtime counters and gauges, such as matched, timeout, duplicate, and inflight counts, when those metrics are present. SDK result objects also expose FindScenarioStats or GetScenarioStats, plus scenario-level FindStepStats or GetStepStats helper lookups, for navigating source-derived stats. In the Go parity wrapper surface, detailed step stats now follow the .NET names OkCount and FailCount on the detailed-step wrapper rather than exposing extra AllOkCount or AllFailCount helper names.
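
Conceptually, a threshold gate is a predicate over a recorded stat, and the quick aggregate failed count falls out of the per-threshold rows. A minimal illustrative sketch in plain Python (not the SDK's evaluator; the stat names and upper-bound semantics are assumptions):

```python
def evaluate_thresholds(stats: dict, thresholds: list):
    """Return per-threshold result rows plus the failed-threshold count."""
    rows = []
    for name, limit in thresholds:
        value = stats[name]
        rows.append({"name": name, "value": value, "limit": limit, "ok": value <= limit})
    failed = sum(1 for r in rows if not r["ok"])
    return rows, failed

stats = {"p95_ms": 45, "failure_rate_pct": 1.67}
rows, failed = evaluate_thresholds(stats, [("p95_ms", 100), ("failure_rate_pct", 1.0)])
# p95 passes (45 <= 100); the 1.67% failure rate breaches the 1.0% gate.
```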

Automatic Result Flow

Run() returns the full LoadStrikeRunResult directly. Local reports and final sink callbacks are generated automatically from that completed result.

Report and output samples

Use these samples to compare report-format setup and output handling across the supported SDKs before you choose what the team will review.

If you run these examples locally, add a valid runner key before execution starts. Set it with WithRunnerKey("...") or the config key LoadStrike:RunnerKey.

HTML reports also include the top-right Light/Dark theme toggle. Light is the default report theme.

Report Generation

using LoadStrike;

// `scenario` is a previously defined scenario registration.
LoadStrikeRunner.RegisterScenarios(scenario)
    .WithReportFolder("./reports")              // output directory for report files
    .WithReportFormats(                         // generate all four report formats
        LoadStrikeReportFormat.Html,
        LoadStrikeReportFormat.Csv,
        LoadStrikeReportFormat.Txt,
        LoadStrikeReportFormat.Md)
    .WithRunnerKey("rkl_your_local_runner_key") // see the runner-key note above
    .Run();

Tabs

Summary

Shows the top-level run totals, latency views, failure rate, bytes, and the main scenario charts when those datasets are populated for the current run.

Measurement rows

Scenario Measurements and Step Measurements only show OK or FAIL rows that have recorded request, latency, byte, or status-code data, so empty result buckets do not create all-zero rows.

Status Codes

Breaks results down by status code and overall percent share.
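
Status-class shares like the 2xx 98.3% sample row are per-class counts over the total. A small sketch in plain Python (assuming, as is conventional, that the status class is the leading digit of the code):

```python
from collections import Counter

def status_class_share(status_codes: list) -> dict:
    """Percent share per status class (2xx, 4xx, 5xx, ...)."""
    classes = Counter(f"{code // 100}xx" for code in status_codes)
    total = len(status_codes)
    return {cls: round(100.0 * n / total, 1) for cls, n in classes.items()}

codes = [200] * 118 + [500, 503]
print(status_class_share(codes))  # {'2xx': 98.3, '5xx': 1.7}
```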

Failed Responses

Combines failed status-code summaries with the detailed failed and timed out row table, and the tab is omitted when there are no failed rows to show.

Ungrouped Correlation Summary

Shows one combined percentile graph and table for the raw matched correlation rows, but only appears when correlation rows exist.

Grouped Correlation Summary

Shows grouped correlation tables and one percentile chart per GatherBy value for populated groups only.

Thresholds

Shows the detailed threshold evaluation rows when thresholdResults were recorded; failedThresholds keeps the quick aggregate count.

Plugin tables

Plugin-produced hints and named tables remain available through pluginsData or PluginsData and also feed the HTML plugin tabs when those tables have rows to show.
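
Hiding empty plugin tabs follows the same populated-only rule as the other sections: a named table feeds an HTML tab only when it has rows. An illustrative sketch in plain Python (the shape of pluginsData here is an assumption):

```python
def visible_plugin_tabs(plugins_data: dict) -> list:
    """Plugin tables with at least one row become HTML tabs; empty ones are hidden."""
    return [name for name, rows in plugins_data.items() if rows]

tabs = visible_plugin_tabs({"Hints": [{"msg": "slow DNS"}], "Empty": []})
# Only the populated "Hints" table gets a tab.
```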

Graph compatibility aliases

SDK result objects can expose compatibility aliases such as ThresholdResults, MetricStats, ScenarioDurationsMs, and PluginsData over the same underlying final artifact.
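
One common way such aliases are implemented is a read-only property that forwards to the underlying field, so both spellings read the same final artifact. A hypothetical Python sketch, not the actual SDK classes:

```python
class RunResult:
    """Same underlying data exposed under snake_case and PascalCase names."""

    def __init__(self, threshold_results):
        self.threshold_results = threshold_results

    @property
    def ThresholdResults(self):
        # Compatibility alias over the very same list, not a copy.
        return self.threshold_results

r = RunResult([{"name": "p95_ms", "ok": True}])
assert r.ThresholdResults is r.threshold_results
```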

Summary charts

Java and Python HTML output also includes the same failure-rate, bytes, and status-code-class summary chart surfaces, and those chart cards are hidden when their datasets are empty.

Automatic result export

Local reports and final sink callbacks are generated from the full LoadStrikeRunResult artifact automatically. Run() returns that same payload directly in code.