Benchmark methodology

Read the LoadStrike benchmark methodology and the data model used to describe workloads, topologies, and artifacts on future result pages, without inventing benchmark claims.

The aim is to publish a useful benchmark methodology page now while keeping future result pages grounded in real datasets only.
Direct answer

What does the benchmark methodology page cover?

This page explains how future LoadStrike benchmark pages should describe workload shape, system topology, runtime surface, cluster topology, and downloadable artifacts. It does not publish benchmark claims by itself and it does not invent result numbers that are not present in the repo.

Use it to understand how a benchmark page should be read and what data fields need to exist before a result page can move from draft to indexable publication.

Who this is for

Teams evaluating how LoadStrike benchmark pages will be structured and what evidence must exist before performance results are treated as publishable.

Why endpoint-only testing breaks down here

Benchmark content becomes misleading when it jumps straight to headline numbers without describing the transaction shape, cluster topology, downstream services, report artifacts, or the difference between runtime output and exported observability data.

How LoadStrike fits

LoadStrike already exposes report formats, cluster modes, transports, browser runtimes, and sink outputs publicly. This methodology page ties those verified building blocks into one benchmark-reading contract without claiming results that do not yet exist.

What to expect

Verified LoadStrike fit points

  • Defines the minimum metadata fields required for future benchmark result pages.
  • Keeps benchmark publication tied to real downloadable artifacts instead of headline-only claims.
  • Explains how runtime topology, scenario shape, and report artifacts should be documented together.
  • Keeps future Dataset and DataDownload schema grounded in visible files only, as sketched below.
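
As an illustration, here is a minimal TypeScript sketch of how a future result page could express that schema.org Dataset and DataDownload markup. Every name, date, and URL below is a hypothetical placeholder; a real page must only reference files that are actually visible in the repo.

    // Minimal sketch of schema.org Dataset + DataDownload structured data for a
    // future benchmark result page. All names, dates, and URLs are hypothetical
    // placeholders, not published LoadStrike results.
    const benchmarkDatasetJsonLd = {
      "@context": "https://schema.org",
      "@type": "Dataset",
      name: "Checkout journey benchmark (hypothetical)",
      description: "Transaction-level results for a hypothetical checkout workload.",
      datePublished: "2025-01-01",
      dateModified: "2025-01-01",
      distribution: [
        {
          "@type": "DataDownload",
          encodingFormat: "text/csv",
          contentUrl: "https://example.com/benchmarks/checkout/results.csv",
        },
        {
          "@type": "DataDownload",
          encodingFormat: "application/json",
          contentUrl: "https://example.com/benchmarks/checkout/results.json",
        },
      ],
    };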
Future dataset contract

Fields required before a benchmark result can be published

  • datasetKey
  • title
  • summary
  • workloadDefinition
  • systemShape
  • scenarioShape
  • loadShape
  • runtimeSurface
  • clusterTopology
  • reportArtifacts
  • downloadArtifacts
  • datePublished
  • dateModified
  • publishState

Future downloadable artifact types: csv, json, html, markdown.
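
To make the contract concrete, here is a minimal TypeScript sketch of it. The field names mirror the list above; the types, comments, and publishState values are assumptions, because this page only fixes which fields must exist.

    // Sketch of the future dataset contract. Field names mirror the list above;
    // the types and union values are assumptions, not a published schema.
    type ArtifactFormat = "csv" | "json" | "html" | "markdown";

    interface BenchmarkDataset {
      datasetKey: string;
      title: string;
      summary: string;
      workloadDefinition: string;
      systemShape: string;
      scenarioShape: string;
      loadShape: string;
      runtimeSurface: string;
      clusterTopology: string;
      reportArtifacts: string[];           // repo paths of report files
      downloadArtifacts: ArtifactFormat[]; // formats that are actually downloadable
      datePublished: string;               // ISO 8601 date
      dateModified: string;                // ISO 8601 date
      publishState: "draft" | "published"; // assumed values; drafts stay noindex
    }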

Resources

Benchmark inputs that should be visible

These are the public LoadStrike surfaces that future benchmark pages should reference directly.

Cluster overview

See how coordinator and agent execution is documented publicly.

Playwright docs

Include browser runtime details when the benchmark uses browser journeys.

Common questions

These questions are rendered on the page and mirrored in the matching FAQ structured data when the route is indexable.
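
For example, the first question below could be mirrored like this; the exact markup shape used by the site is an assumption, but FAQPage, Question, and Answer are standard schema.org types.

    // Hypothetical sketch of the FAQ structured data for one question on this
    // page; the surrounding markup shape is an assumption.
    const faqJsonLd = {
      "@context": "https://schema.org",
      "@type": "FAQPage",
      mainEntity: [
        {
          "@type": "Question",
          name: "Does this page publish benchmark results?",
          acceptedAnswer: {
            "@type": "Answer",
            text: "No. It publishes methodology only. Result pages should stay draft or noindex until real datasets and downloadable artifacts exist in the repo.",
          },
        },
      ],
    };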

Does this page publish benchmark results?

No. It publishes methodology only. Result pages should stay draft or noindex until real datasets and downloadable artifacts exist in the repo.
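
A minimal sketch of that gate, assuming the BenchmarkDataset shape from the contract section and a hypothetical fileExistsInRepo helper (neither is a real LoadStrike API):

    // Hypothetical publication gate: a result page is only indexable when it is
    // marked published and every referenced report artifact exists in the repo.
    // fileExistsInRepo is a placeholder helper, not a real LoadStrike API.
    function isIndexable(
      dataset: BenchmarkDataset,
      fileExistsInRepo: (path: string) => boolean,
    ): boolean {
      if (dataset.publishState !== "published") return false;
      if (dataset.reportArtifacts.length === 0) return false;
      return dataset.reportArtifacts.every(fileExistsInRepo);
    }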

What should a future benchmark page include?

At minimum it should include workload definition, system shape, scenario shape, load shape, runtime surface, cluster topology, report artifacts, download artifacts, and publication dates.

Why does the methodology page matter before results exist?

It prevents thin or misleading benchmark content by setting a clear standard for what evidence must be visible before a result page is indexed.

Related documentation

Keep moving from positioning into concrete product detail.

Report Overview

This page explains how to read a LoadStrike report. Use it when you want to know what each section means and where to look first.

Cluster Overview

Cluster mode lets one LoadStrike run spread across multiple nodes. Use it when a single machine is not enough or when topology matters.

Quick Start

Build one simple transaction, attach correlation, and run it. Use this page when you want the shortest path to a working LoadStrike test.

Related comparisons

Use these routes when the next question is tool choice rather than implementation detail.

LoadStrike vs k6

Compare LoadStrike and k6 across code ergonomics, protocol scope, downstream correlation, reporting depth, browser workflows, and distributed self-hosted execution.

LoadStrike vs Gatling

Compare LoadStrike and Gatling across scenario discipline, request modeling, downstream visibility, transport breadth, reporting depth, and self-hosted operations.

Related integrations

These reporting pages connect the transaction model to the observability systems already documented publicly.

LoadStrike and InfluxDB

See how the LoadStrike InfluxDB sink fits into transaction-aware reporting workflows and public Grafana starter assets.

Next best pages

Every published route should help you move to the next concrete question instead of ending in a dead end.

Examples

See the public examples that can feed into future benchmark scenarios.

Editorial policy

Read the publication standards behind benchmark and comparison content.

Next step

Use this page as the checklist for future benchmark publication, then keep result pages in draft until real artifacts and datasets are ready to link.