vg
tools for working with variation graphs

vg::BenchmarkResult Struct Reference

#include <benchmark.hpp>
Public Member Functions

double score () const
    How many control-standardized "points" do we score?

double score_error () const
    What is the uncertainty on the score?
Public Attributes

size_t runs
    Number of benchmark runs performed.

benchtime test_mean
    Mean runtime across the test runs.

benchtime test_stddev
    Standard deviation of test run times.

benchtime control_mean
    Mean runtime across the control runs.

benchtime control_stddev
    Standard deviation of control run times.

string name
    Name of the test being run.
Detailed Description

Represents the results of a benchmark run. Tracks the mean and standard deviation of a number of runs of a function under test, interleaved with runs of a standard control function.
Member Function Documentation

double vg::BenchmarkResult::score () const

How many control-standardized "points" do we score?

double vg::BenchmarkResult::score_error () const

What is the uncertainty on the score?
Member Data Documentation

benchtime vg::BenchmarkResult::control_mean

Mean runtime across the control runs.

benchtime vg::BenchmarkResult::control_stddev

Standard deviation of control run times.

string vg::BenchmarkResult::name

Name of the test being run.

size_t vg::BenchmarkResult::runs

Number of benchmark runs performed.

benchtime vg::BenchmarkResult::test_mean

Mean runtime across the test runs.

benchtime vg::BenchmarkResult::test_stddev

Standard deviation of test run times.