Reproducible WebGL GPU Benchmarking: Presets, Exports, and Share Links

Last updated: 2025-09-13 · By Benchmark Team · 11 min read

Browser‑based GPU benchmarking is only useful if it is reproducible. This guide explains how our volume shader benchmark achieves fair comparisons using WebGL ray marching, parameter presets, CSV exports, and share links that lock in test settings. You will learn how to choose a workload, capture meaningful metrics, and share a URL that lets anyone rerun the exact same test for apples‑to‑apples results.

Why reproducibility is the hard part

Raw frame rate numbers are easy to generate but notoriously hard to compare. Different browsers use different GPU backends. Operating systems schedule work differently. A subtle driver update can change shader code generation. If the scene, resolution, and shader parameters vary from run to run, your “benchmark” becomes a vibe check rather than an objective measurement.

Our approach is to simplify the pipeline and make the workload explicit. We use a volume shader that ray marches a fixed Mandelbulb fractal. The camera follows a deterministic orbit. All adjustable parameters are exposed and can be encoded into the URL. That means you can reproduce a run weeks later—on the same machine or a different one—with high confidence.
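
To picture how deterministic the camera path is, here is a minimal sketch of an orbit keyed only to elapsed time; the radius, height, and angular speed are illustrative values, not the ones the benchmark actually uses.

  // Sketch: a camera orbit that depends only on elapsed time, so every run
  // renders the same sequence of views. Constants here are illustrative.
  function orbitCamera(elapsedSec: number): { eye: [number, number, number]; target: [number, number, number] } {
    const angularSpeed = 0.3;  // radians per second, fixed
    const radius = 2.5;
    const height = 0.8;
    const angle = elapsedSec * angularSpeed;
    return {
      eye: [radius * Math.cos(angle), height, radius * Math.sin(angle)],
      target: [0, 0, 0],       // fractal centered at the origin
    };
  }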


Case study: sharing a benchmark run

Suppose you tune a Balanced preset to iterate 10 times, choose a 0.0025 step size, and keep resolution scale at 1.0. On your laptop, you observe ~14 ms frame time with tight min/max bands. Click Share to generate a URL that embeds those parameters. Send that link to a teammate with a desktop GPU. When they open it, the page auto‑applies your exact settings before rendering begins. If their frame time lands around 8–9 ms with similar stability, you can attribute the difference to hardware rather than setup.
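
For illustration, the link for that run could be built along these lines; the query keys (iterations, step, scale) are placeholders for whatever the tool actually encodes.

  // Sketch: encoding the case-study settings into a share URL.
  // Query keys are hypothetical, not the benchmark's real parameter names.
  const params = new URLSearchParams({
    iterations: "10",  // kernel iterations (the tuned Balanced value above)
    step: "0.0025",    // ray march step size
    scale: "1.0",      // resolution scale
  });
  const shareUrl = `${location.origin}${location.pathname}?${params.toString()}`;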

Now change only one variable—resolution scale to 1.3—and share again. Both devices should show a similar relative increase in frame time, because pixel count grows by roughly 1.3² ≈ 1.7×. These controlled experiments are the core of fair, reproducible comparisons in a WebGL volume shader benchmark.


Keep results comparable over time

Hardware evolves, browsers ship new GPU backends, and drivers change. To compare against your own history, always include the environment in your notes: GPU model, OS, browser version, and driver version. Pair each CSV with its share link so you can rerun the exact configuration later. If a regression appears, the link helps isolate whether the cause is a workload change or a platform change.
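
A small helper can capture most of that environment automatically (the driver version is the exception; browsers do not expose it, so note it by hand). This is a generic sketch, not the tool's own logging.

  // Sketch: record the environment next to each CSV/share-link pair.
  // Generic browser APIs only; driver version still has to be noted manually.
  function environmentNote(gl: WebGLRenderingContext): Record<string, string> {
    const dbg = gl.getExtension("WEBGL_debug_renderer_info");
    return {
      gpu: dbg ? String(gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL)) : "unavailable",
      userAgent: navigator.userAgent,      // encodes browser and OS version
      timestamp: new Date().toISOString(),
    };
  }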


Choose a workload with presets

The first step is picking a workload that fits your device. We ship five presets from Ultra Low through Very High. Each preset sets three variables: kernel iterations (fractal detail), step size (ray march stride), and resolution scale (render resolution multiplier). These knobs directly control how much math the GPU does per pixel and how many pixels it must shade.

Start with Ultra Low. If your frame time stays well under 10 ms, step up to Low or Balanced. If a preset pushes frame time above ~25 ms, step down so you can observe performance in a more interactive window. The goal is to land in a range where differences between hardware and settings are visible but the UI remains responsive.
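
If you script your own runs, a preset is just a record of the three knobs. The numbers below are illustrative placeholders; only the Balanced row mirrors the case study above, and the shipped presets may differ.

  // Sketch: presets as plain records of the three knobs.
  // Values are illustrative; only "balanced" echoes the case study in this post.
  interface Preset {
    kernelIterations: number;  // fractal detail
    stepSize: number;          // ray march stride
    resolutionScale: number;   // render resolution multiplier
  }

  const PRESETS: Record<string, Preset> = {
    ultraLow: { kernelIterations: 4,  stepSize: 0.01,   resolutionScale: 0.5 },
    low:      { kernelIterations: 6,  stepSize: 0.005,  resolutionScale: 0.75 },
    balanced: { kernelIterations: 10, stepSize: 0.0025, resolutionScale: 1.0 },
    high:     { kernelIterations: 14, stepSize: 0.0015, resolutionScale: 1.15 },
    veryHigh: { kernelIterations: 18, stepSize: 0.001,  resolutionScale: 1.3 },
  };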


Understand the parameters

  • Kernel Iterations: The number of Mandelbulb iterations per kernel evaluation. More iterations add fractal detail but raise the arithmetic cost of every sample, amplifying compute pressure.
  • Step Size: The distance advanced along the ray each sample. Smaller steps improve surface accuracy but require more samples, raising the cost per pixel.
  • Resolution Scale: A multiplier on canvas resolution. It directly affects pixel count and therefore total shaded work per frame.

These parameters are independent. You can keep resolution fixed and sweep iterations to isolate compute behavior, or keep iterations fixed and sweep resolution to isolate fill‑rate and memory bandwidth. Step size controls sampling density along each ray and tends to affect both quality and cost. For fair comparisons, keep all three the same across devices.
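
A controlled sweep can be scripted along these lines, assuming a hypothetical runBenchmark helper that applies the settings, renders for a fixed window, and returns the average frame time:

  // Sketch: sweep kernel iterations while pinning step size and resolution.
  // runBenchmark is a hypothetical helper, not part of the tool's public API.
  declare function runBenchmark(settings: {
    kernelIterations: number;
    stepSize: number;
    resolutionScale: number;
  }): Promise<number>;  // resolves to average frame time in ms

  async function sweepIterations(): Promise<void> {
    const fixed = { stepSize: 0.0025, resolutionScale: 1.0 };
    for (const kernelIterations of [4, 6, 8, 10, 12, 14]) {
      const avgMs = await runBenchmark({ kernelIterations, ...fixed });
      console.log(`iterations=${kernelIterations} avgFrameTime=${avgMs.toFixed(2)} ms`);
    }
  }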

Measure with FPS and frame time

Average FPS is easy to read, but frame time is the more actionable metric. A consistent 16.67 ms means 60 FPS; 33.3 ms means 30 FPS. Our UI reports average frame time and a short‑window min/max for FPS to indicate stability. Spiky min/max values usually mean a system‑level bottleneck, background processes, or a thermal state shift.
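
If you want to double-check those numbers independently, a requestAnimationFrame sampler over a short window is enough; this is a generic sketch, not the benchmark's own instrumentation. Dividing 1000 by the min and max frame times gives the equivalent FPS band.

  // Sketch: rolling frame-time stats (ms) from requestAnimationFrame timestamps.
  function sampleFrameTimes(windowMs = 2000): Promise<{ avg: number; min: number; max: number }> {
    return new Promise((resolve) => {
      const deltas: number[] = [];
      let last = performance.now();
      const start = last;
      const tick = (now: number) => {
        deltas.push(now - last);
        last = now;
        if (now - start < windowMs) {
          requestAnimationFrame(tick);
        } else {
          const avg = deltas.reduce((a, b) => a + b, 0) / deltas.length;
          resolve({ avg, min: Math.min(...deltas), max: Math.max(...deltas) });
        }
      };
      requestAnimationFrame(tick);
    });
  }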

When you tune presets or parameters, look for a flat frame‑time trace. A device that averages 100 FPS but stutters to 20 FPS every few seconds may feel worse than a steady 70 FPS. The goal of a benchmark is not just a big number—it’s a realistic picture of how the GPU behaves under load.

Export results for your records

Click Export Result to download a CSV containing the metrics currently displayed on screen: average FPS, frame time, min/max, and the GPU name reported by WebGL. Log multiple runs while you tweak drivers or settings. CSVs are easy to track in a spreadsheet and provide a durable record of your tuning process.
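
Mechanically, an export like this is just a string handed to a Blob download; the column names below are illustrative, and gpuName is the same string the environment sketch above pulls from WebGL.

  // Sketch: turn the on-screen metrics into a one-row CSV and download it.
  // Column names are illustrative, not the tool's exact export format.
  function exportCsv(row: { avgFps: number; avgFrameMs: number; minFps: number; maxFps: number; gpuName: string }): void {
    const header = "avg_fps,avg_frame_ms,min_fps,max_fps,gpu_name";
    const line = [row.avgFps, row.avgFrameMs, row.minFps, row.maxFps, `"${row.gpuName}"`].join(",");
    const blob = new Blob([`${header}\n${line}\n`], { type: "text/csv" });
    const a = document.createElement("a");
    a.href = URL.createObjectURL(blob);
    a.download = "benchmark-result.csv";
    a.click();
    URL.revokeObjectURL(a.href);
  }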

Share a link that locks the settings

Every share link includes the exact parameters: kernel iterations, step size, and resolution scale. When someone opens your link, the benchmark auto‑applies those values before rendering starts. That makes your results reproducible and your comparisons fair—two people are running the same workload in the same way.
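
Reading a link back is plain query-string parsing before the first frame is rendered; as before, the keys here are the hypothetical ones from the earlier sketch, not necessarily the tool's real names.

  // Sketch: apply settings from the URL (falling back to defaults) before rendering starts.
  function settingsFromUrl(defaults: { kernelIterations: number; stepSize: number; resolutionScale: number }) {
    const q = new URLSearchParams(location.search);
    const num = (key: string, fallback: number): number => {
      if (!q.has(key)) return fallback;
      const v = Number(q.get(key));
      return Number.isFinite(v) ? v : fallback;
    };
    return {
      kernelIterations: num("iterations", defaults.kernelIterations),
      stepSize: num("step", defaults.stepSize),
      resolutionScale: num("scale", defaults.resolutionScale),
    };
  }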

Sharing is not only social—it’s diagnostic. If a friend’s device underperforms with your settings, you can be confident you are looking at a hardware, driver, or thermal difference rather than a different workload. That clarity is the difference between guessing and knowing.

Practical checklist for clean runs

  • Close background apps, streaming tabs, and overlays.
  • Use the same browser and version across tests; even Chromium‑based browsers can differ in GPU backends and defaults.
  • Update drivers and reboot before large comparison sessions.
  • Warm up the GPU for one minute at your chosen preset to avoid turbo transients.
  • Keep room temperature and power profiles consistent.

Under the hood: WebGL and ray marching

The renderer is purposely straightforward: a full‑screen pass that computes a camera basis, marches along a ray, evaluates the Mandelbulb kernel, and shades the surface normal. This design minimizes CPU overhead and keeps the focus on fragment math throughput. Because it is WebGL‑based, it runs everywhere without native installs and reveals genuine GPU behavior in a portable way.
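
To make that structure concrete, here is a CPU-side TypeScript sketch of the per-pixel loop: fixed-step marching with the sign-change refinement mentioned below. It mirrors the shape of the fragment shader rather than the actual GLSL, and insideFractal is a stand-in for the Mandelbulb kernel's inside/outside test.

  // Sketch of the per-pixel work, written CPU-side for clarity (the real version is GLSL).
  type Vec3 = [number, number, number];
  declare function insideFractal(p: Vec3, iterations: number): boolean;  // stand-in for the kernel

  function marchRay(origin: Vec3, dir: Vec3, stepSize: number, iterations: number, maxDist = 4.0): number | null {
    const at = (t: number): Vec3 => [origin[0] + dir[0] * t, origin[1] + dir[1] * t, origin[2] + dir[2] * t];
    let prevInside = false;
    for (let t = stepSize; t < maxDist; t += stepSize) {
      const inside = insideFractal(at(t), iterations);
      if (inside && !prevInside) {
        // Sign change between the last two samples: bisect to refine the hit.
        let lo = t - stepSize, hi = t;
        for (let i = 0; i < 8; i++) {
          const mid = (lo + hi) / 2;
          if (insideFractal(at(mid), iterations)) hi = mid; else lo = mid;
        }
        return hi;  // distance along the ray to the refined surface
      }
      prevInside = inside;
    }
    return null;  // ray missed the fractal
  }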

While our baseline uses a simple sign‑change refinement strategy, more advanced distance‑estimator sphere tracing can accelerate convergence. If you are experimenting locally, try swapping in a distance estimator and compare frame time improvements under the same presets. You will see just how strongly algorithmic changes interact with GPU architecture.
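
If you try that swap, the standard power-8 Mandelbulb distance estimator plus sphere tracing looks roughly like the sketch below (again CPU-side and illustrative; the power, bailout, and epsilon are common defaults, not the benchmark's).

  // Sketch: distance-estimator sphere tracing for a power-8 Mandelbulb.
  type V3 = [number, number, number];  // same tuple shape as the sketch above

  function mandelbulbDE(p: V3, iterations: number, power = 8): number {
    let [zx, zy, zz] = p;
    let dr = 1.0;
    let r = 0.0;
    for (let i = 0; i < iterations; i++) {
      r = Math.hypot(zx, zy, zz);
      if (r > 2.0) break;                        // bailout radius
      const theta = Math.acos(zz / r) * power;
      const phi = Math.atan2(zy, zx) * power;
      dr = Math.pow(r, power - 1) * power * dr + 1.0;
      const zr = Math.pow(r, power);
      zx = zr * Math.sin(theta) * Math.cos(phi) + p[0];
      zy = zr * Math.sin(theta) * Math.sin(phi) + p[1];
      zz = zr * Math.cos(theta) + p[2];
    }
    return 0.5 * Math.log(r) * r / dr;           // conservative distance bound
  }

  function sphereTrace(origin: V3, dir: V3, iterations: number, maxDist = 4.0, epsilon = 1e-4): number | null {
    let t = 0;
    while (t < maxDist) {
      const p: V3 = [origin[0] + dir[0] * t, origin[1] + dir[1] * t, origin[2] + dir[2] * t];
      const d = mandelbulbDE(p, iterations);
      if (d < epsilon) return t;                 // close enough: treat as a surface hit
      t += d;                                    // safe to advance by the estimated distance
    }
    return null;
  }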

The bottom line

WebGL makes GPU benchmarking accessible; presets make it practical; CSV export makes it trackable; and share links make it reproducible. Together, they turn a pretty fractal demo into a dependable volume shader benchmark that you can use for real comparisons and decision‑making. Whether you are evaluating a laptop upgrade, comparing browsers, or tuning drivers, reproducibility is your greatest asset—and it is built into the workflow.