MicroBenchmark Reference Manual


MicroBenchmark

This manual is for MicroBenchmark—a library to measure the performance of code fragments—version 0.0, released on 5 July 2023. MicroBenchmark is distributed under the GNU Lesser General Public License, Version 3 or any later version.

Copyright © 2023 Miguel Ángel Arruga Vivas <rosen644835@gmail.com>

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled “GNU Free Documentation License”.

The examples from this manual are additionally licensed under the GNU General Public License, Version 3 or any later version published by the Free Software Foundation. A copy of the license is included in the section entitled “GNU General Public License”.



Acknowledgments

MicroBenchmark has to thank the GNU project for its effort to provide the operating system on which this document was written.

MicroBenchmark has taken many ideas from Google Benchmark, a Free Software library for C++ and Python.

MicroBenchmark has also taken its structural concepts from Kent Beck’s paper Simple Smalltalk Testing: With Patterns.

And also, thank you for reading this.



Library Information


1 Introduction

MicroBenchmark makes it easy to measure the performance of code fragments and/or functions by repeatedly sampling their execution time. This type of performance measurement is the basis of what is usually called a micro-benchmark, hence the library name.


1.1 What Is A Benchmark

The term benchmark commonly names software used to measure the performance of certain hardware and/or software. Sometimes these values cannot be measured directly, e.g. in an unknown or dynamic environment, and proxy measures are used to estimate them.

The performance of a processing unit is usually represented as operations per unit of time. Software throughput can be measured in size units per unit of time, and software performance can be measured in time units too. These quantities are compared across different environments, where their values produce an ordering between the compared items1.

The need for more efficient programs has been around for a long time, so tools to aid with this task can be found on almost any operating system. For example, the shell keyword time (see Pipelines in GNU Bash Reference Manual) can be used to measure the execution time of a process.

To aid with finer-grained optimization, modern operating systems implement a measurement process called profiling (see GNU profiler gprof.) A profiler usually works by taking samples of the call stack of the executing process at a certain frequency.

Modern processors incorporate performance counters, whose values can be accessed through utilities such as perf (see perf wiki), which can be used as a profiler too.


1.2 MicroBenchmark Concepts

A complementary approach to the analysis of the final code is to extract a fragment of the code and measure its execution performance separately from the whole process. These measurements are usually compared between different implementations of the fragment, reference implementations and/or baselines. This process is usually called a micro-benchmark.

The main unit of the MicroBenchmark framework is the test case (see Test Reference.) A test case encapsulates the desired measurement process, including its preparation and finalization (see Execution Stages.)

The execution environment of a test case can be constrained (see Test Constraints.) Size constraints produce a combinatorial number of executions to be measured (see Test Dimensions.) The concrete values are provided to the test through the state object (see State Reference.)

A collection of test cases is called a suite (see Suite Reference.) Its execution stores the collected data into a report (see Report Reference.)


1.2.1 Execution Stages

The execution flow of a test case is composed of three sequential stages:

  1. Preparation Stage

    The Set Up stage receives the execution state object as a parameter, allocates the fixture for the test code, and prepares the environment for its execution.

  2. Test Stage
    • Automatic Test

      An automatic test receives the allocated fixture directly and does not have direct access to the execution state. It is invoked repeatedly by the framework, and measurements are sampled after a certain number of iterations.

    • Directed Test

      A directed test receives the framework execution state object (see State Reference) as its parameter, where the fixture allocated by the previous stage is stored. This is the concept used by Google Benchmark, where the code must manually loop on the state.

  3. Finalization Stage

    The Tear Down stage releases the resources allocated for the fixture and performs any other cleanup needed.
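
As an illustration, here is a minimal C sketch of the three stages wired together through the MICRO_BENCHMARK_REGISTER_AUTO_TEST macro used later in this manual (see Test Dimensions); the fixture contents and the sleep are placeholders:

#include <mbenchmark/all.h>
#include <unistd.h>

/* Preparation stage: allocate the fixture and prepare the
   environment.  */
static int *
set_up (micro_benchmark_test_state state)
{
  (void) state;
  static int fixture;
  fixture = 1000;              /* Microseconds to sleep below.  */
  return &fixture;
}

/* Test stage (automatic): receives the fixture directly and is
   invoked repeatedly by the framework.  */
static void
test (int *fixture)
{
  usleep (*fixture);
}

/* Finalization stage: release the fixture resources.  */
static void
tear_down (micro_benchmark_test_state state, int *fixture)
{
  (void) state;
  (void) fixture;              /* Nothing to release here.  */
}

MICRO_BENCHMARK_REGISTER_AUTO_TEST (set_up, test, tear_down);
MICRO_BENCHMARK_MAIN ();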


1.3 Predefined Calculations

MicroBenchmark samples the elapsed time and the iterations performed by the test function at certain intervals. Some iterations and samples, such as the warm-up iterations, are discarded from the final statistical values calculated from the collected data.

From these samples the following values are calculated:

  • Iteration time

    The time of each iteration, as common statistical values:

    • μ -> Mean time per iteration.
    • σ² -> Variance of the iteration time.
    • σ -> Standard deviation of the iteration time.
  • Iterations per sample

    The iterations performed on each sample, as the common statistical values:

    • μ -> Mean iterations per sample.
    • σ² -> Variance of the iterations per sample.
    • σ -> Standard deviation of iterations per sample.
  • Sample time

    The time of each sample.

    • μ -> Mean sample time.
    • σ² -> Variance of the sample time.
    • σ -> Standard deviation of sample time.
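
These are the usual population statistics. As an illustration only (this code is not part of the library), the following standalone C sketch computes μ, σ² and σ over some hypothetical sample times:

#include <math.h>
#include <stdio.h>

int
main (void)
{
  /* Hypothetical sample times, in nanoseconds.  */
  double samples[] = { 22.7, 23.1, 29.6, 32.7, 34.8 };
  size_t n = sizeof (samples) / sizeof (*samples);

  double mean = 0.0;            /* μ  */
  for (size_t i = 0; i < n; ++i)
    mean += samples[i];
  mean /= n;

  double variance = 0.0;        /* σ² (population variance)  */
  for (size_t i = 0; i < n; ++i)
    variance += (samples[i] - mean) * (samples[i] - mean);
  variance /= n;

  double deviation = sqrt (variance);   /* σ  */
  printf ("μ=%g σ²=%g σ=%g\n", mean, variance, deviation);
  return 0;
}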

MicroBenchmark also measures the total time elapsed during the test case execution and the total number of iterations performed. The following sections describe the meters, clocks, chronometers and timers predefined by the library.


1.3.1 Predefined Meters

A meter is a device used to take measures (see Meter Reference for a detailed description.)

  • None yet.

1.3.2 Predefined Clocks

The clock type controls the type of measurement performed by a chronometer or a timer. The library implements the following clock type definitions (see Time Reference), provided they were available at build time:

  • realtime:

    Wall clock time provided by the operating system. CLOCK_TAI is preferred to CLOCK_REALTIME when it is available.

  • monotonic:

    Non-decreasing clock time, such as CLOCK_MONOTONIC.

  • process:

    Process time, such as CLOCK_PROCESS_CPUTIME_ID.

  • thread:

    Execution thread time.
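
These clock types map onto POSIX clocks such as the ones named above. As an illustration independent of the library, this standalone sketch reads two of them directly with clock_gettime:

#include <stdio.h>
#include <time.h>

int
main (void)
{
  struct timespec wall, cpu;

  /* Wall clock: the realtime clock type.  */
  clock_gettime (CLOCK_REALTIME, &wall);
  /* CPU time consumed by this process: the process clock type.  */
  clock_gettime (CLOCK_PROCESS_CPUTIME_ID, &cpu);

  printf ("realtime: %ld.%09ld\n", (long) wall.tv_sec, wall.tv_nsec);
  printf ("process:  %ld.%09ld\n", (long) cpu.tv_sec, cpu.tv_nsec);
  return 0;
}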


1.3.3 Predefined Chronometers

A chronometer is a device used to measure time. It is a specialization of the micro_benchmark_meter type used to calculate the sample time and the total time (see Chronometer Reference for a detailed description.)

The following chronometers are implemented by the library:

  • time_t (Always)

    Realtime chronometer based on the C standard type time_t, retrieved from the time function. It represents seconds, so its minimum resolution is one second.

  • clock_t (Always)

    Process time chronometer based on the C standard type clock_t, retrieved from the clock function. It overflows easily; this is not checked by the framework.

  • gettimeofday (Optional)

    Realtime chronometer based on the function gettimeofday. Its values are represented by struct timeval.

  • clock_gettime (Optional)

    Chronometer based on the function clock_gettime. Its minimum theoretical resolution is calculated based on the clock type selected (see Time Reference.) Its values are represented by struct timespec.
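
The two always-available chronometers build on standard C facilities. As an illustration independent of the library, this standalone sketch shows the underlying calls, including the clock_t overflow concern mentioned above:

#include <stdio.h>
#include <time.h>

int
main (void)
{
  /* time_t chronometer: one-second minimum resolution.  */
  time_t t0 = time (NULL);

  /* clock_t chronometer: process time in CLOCKS_PER_SEC units.
     With a signed 32-bit clock_t and the usual CLOCKS_PER_SEC of
     1000000, it wraps after roughly 36 minutes -- the overflow
     mentioned above.  */
  clock_t c0 = clock ();

  /* ... the code under measurement would run here ...  */

  printf ("elapsed: %.0f s, process: %f s\n",
          difftime (time (NULL), t0),
          (double) (clock () - c0) / CLOCKS_PER_SEC);
  return 0;
}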


1.3.4 Predefined Timers

A timer is a device that is triggered after a predefined amount of time. Timers are used to calculate the sample time and the total time (see Timer Reference for a detailed description.)

The following timers are implemented by the library:

  • chrono-adapter:

    Timer adapter from a provided chronometer (see Timer Provider Reference.)

  • itimer:

    Timer based on the setitimer API, which only supports one timer per type.

  • timer-t:

    Timer based on the timer_t API.

  • timerfd:

    Timer based on the timerfd API, currently only supported on GNU systems based on the Linux kernel.
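
As an illustration independent of the library, this standalone sketch arms a one-shot timer through setitimer, the POSIX API behind the itimer implementation:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t fired;

static void
on_alarm (int sig)
{
  (void) sig;
  fired = 1;                   /* The timer was triggered.  */
}

int
main (void)
{
  struct sigaction sa;
  memset (&sa, 0, sizeof sa);
  sa.sa_handler = on_alarm;
  sigaction (SIGALRM, &sa, NULL);

  /* One shot 100ms from now; a zero it_interval means no reload.  */
  struct itimerval tv = { { 0, 0 }, { 0, 100000 } };
  setitimer (ITIMER_REAL, &tv, NULL);

  while (!fired)
    pause ();                  /* Wait for the signal to arrive.  */

  puts ("timer fired");
  return 0;
}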


1.4 Basic Usage

Once the library has been installed (see Installation), Guile users might want to dive directly into a REPL, where the binding should be readily usable. On the other hand, C and C++ code has to be compiled against the installed library before it can be executed. MicroBenchmark comes with support for autoconf based projects (see Autoconf Macros) and pkg-config modules (see Other Build Systems.)

The following sections (see Simple Tests onwards) showcase some code examples of the library usage.


1.4.1 Autoconf Macros

Common autoconf macros can be used to find the library, such as:

AC_SEARCH_LIBS([micro_benchmark_init], [mbenchmark], [...])

MicroBenchmark comes with some (experimental) macros too:

  • Experimental: MICRO_BENCHMARK_CHECK_C

    Searches for the MicroBenchmark library in some predefined paths and calls AC_SUBST on the following variables:

    • MICRO_BENCHMARK_CPPFLAGS:

      Contains the compiler flags needed to find the headers of the local installation of MicroBenchmark.

    • MICRO_BENCHMARK_CFLAGS:

      Contains the compiler flags needed to compile and link a program with MicroBenchmark.

    • MICRO_BENCHMARK_LIBS:

      Contains the linker flags needed to link a program with MicroBenchmark.

    • MICRO_BENCHMARK_LTLIBS:

      Contains a reference to the MicroBenchmark libtool file, when available.

  • Experimental: MICRO_BENCHMARK_CHECK_CXX

    Searches for the MicroBenchmark C++ library in some predefined paths and calls AC_SUBST on the following variables:

    • MICRO_BENCHMARK_CPPFLAGS:

      Contains the compiler flags needed to find the headers of the local installation of MicroBenchmark.

    • MICRO_BENCHMARK_CXXFLAGS:

      Contains the compiler flags needed to compile and link a C++ program with MicroBenchmark.

    • MICRO_BENCHMARK_CXXLIBS:

      Contains the linker flags needed to link a program against the MicroBenchmark C++ binding.

    • MICRO_BENCHMARK_CXXLTLIBS:

      Contains a reference to the MicroBenchmark C++ libtool file, when available.

  • Experimental: MICRO_BENCHMARK_CHECK_GUILE

    Checks the availability of the modules and the library version.

This could be a basic example of their usage:

configure.ac:
---
…
AS_IF([…],
 [dnl To substitute CFLAGS/LIBS/LTLIBS
  MICRO_BENCHMARK_CHECK_C
  …
  dnl To substitute CXXFLAGS/CXXLIBS/CXXLTLIBS
  MICRO_BENCHMARK_CHECK_CXX
  …
  dnl For Guile, only checks are performed
  MICRO_BENCHMARK_CHECK_GUILE
 ])
…
---

Makefile.am:
---
…
ctest_CFLAGS += $(MICRO_BENCHMARK_CFLAGS)
ctest_LIBS += $(MICRO_BENCHMARK_LIBS)
…
# For the C++ library
cpptest_CXXFLAGS += $(MICRO_BENCHMARK_CXXFLAGS)
cpptest_LIBS += $(MICRO_BENCHMARK_CXXLIBS)
…
# Or with libtool
ltctest_LDADD += $(MICRO_BENCHMARK_LTLIBS)
ltcxxlttest_LDADD += $(MICRO_BENCHMARK_CXXLTLIBS)
…
---


1.4.2 Other Build Systems

The following pkg-config modules are provided by MicroBenchmark.

  • Experimental: mbenchmark.pc

    pkg-config [--cflags|--libs] mbenchmark

  • Experimental: mbenchmark-c++.pc

    pkg-config [--cflags|--libs] mbenchmark-c++

For example, to compile a benchmark executable from a source file test.c (or test.cpp) using pkg-config, you could use the following command line:

$ gcc $(pkg-config --cflags --libs mbenchmark) -o test test.c
# Or for C++
$ g++ $(pkg-config --cflags --libs mbenchmark-c++) -o test-c++ test.cpp

1.4.3 Simple Tests

This section shows some basic code examples of MicroBenchmark usage.

This code generates a minimal (and not very useful) C benchmark:

#include <mbenchmark/all.h>
#include <unistd.h>

static void
test (void *unused)
{
  (void) unused;
  usleep (1000);
}

MICRO_BENCHMARK_REGISTER_SIMPLE_TEST (test);
MICRO_BENCHMARK_MAIN ();

This file can be found in the source code tree at doc/examples/basic.c. Let’s go through its parts:

  • The line #include <mbenchmark/all.h> includes all the needed files to use the library.
  • The function void test (void *unused) is the code under test...
  • ... which is registered by the macro MICRO_BENCHMARK_REGISTER_SIMPLE_TEST (test).
  • To generate a main function, MICRO_BENCHMARK_MAIN() can be placed anywhere at file scope.

This code could be compiled with a C++ compiler too. Nonetheless, C++14 or later code might use the features available through the C++ binding. This would be the equivalent example in C++:

#include <mbenchmark/all.hpp>
#include <chrono>
#include <thread>

namespace
{
  void
  test (std::vector<std::size_t> const&)
  {
    std::this_thread::sleep_for (std::chrono::milliseconds (1));
  }

  auto rtest = micro_benchmark::register_test ("test", test);
}

MICRO_BENCHMARK_MAIN ();

This file can be found in the source code tree at doc/examples/basic.cxx. These are the main differences from the C interface:

  • Instead of including #include <mbenchmark/all.h>, #include <mbenchmark/all.hpp> is included. These two files cannot be included together, as the macro names exported by them collide.
  • The default test receives a reference to a std::vector<std::size_t> as its parameter, the one returned by sizes method on the state object (see C++ State Reference).
  • The token returned by the register_test function (rtest) has to be declared at namespace level.

This would be its Guile equivalent:

(use-modules (mbenchmark))

(define (test . args)
  (usleep 1000))

(register-test! "test" #:test test)
(main (command-line))

This file can be found in the source code tree at doc/examples/basic.scm. Let’s see its parts:

  • The module (mbenchmark) includes all the functionality needed.
  • register-test! uses keywords for its arguments (see Guile Test Case Reference.)
  • main could be invoked through Guile’s -e main option, but here it is called explicitly to allow direct execution of the script.

The execution of these examples would print something like this:

basic --brief --log-level=warn
Suite: basic (1 test execution)
====================================
Test Name | Iterations | It.Time (μ)
====================================
     test |       4026 |      1.22ms
====================================

1.4.4 Directed Test Cases

TODO


1.4.5 Test Constraints

The examples shown in the previous section use the library defaults, which may not be appropriate for your specific use case. MicroBenchmark includes several options to customize the environment where the test is executed (see Test Constraints Reference.)

The following examples show basic constraint usage in the supported languages:

#include <mbenchmark/all.h>
#include <unistd.h>

static void
limit_iterations (micro_benchmark_test_case test)
{
  micro_benchmark_test_case_limit_iterations (test, 300, 400);
  micro_benchmark_test_case_limit_samples (test, 2, 5);
}

static void
test_1 (void *ptr)
{
  (void) ptr;
  /* This test code will run for 300 to 400 iterations, taking
     a measurement every 2 to 5 iterations. */
  usleep (10000);
}

MICRO_BENCHMARK_REGISTER_SIMPLE_TEST (test_1);
MICRO_BENCHMARK_CONSTRAINT_TEST ("test_1", limit_iterations);

static void
limit_time (micro_benchmark_test_case test)
{
  micro_benchmark_clock_time max = { 1, 0 };
  micro_benchmark_test_case_set_max_time (test, max);
}

static void
test_2 (void *ptr)
{
  (void) ptr;
  /* This test code will run for at most 1 second. */
  usleep (10000);
}

MICRO_BENCHMARK_REGISTER_SIMPLE_TEST (test_2);
MICRO_BENCHMARK_CONSTRAINT_TEST ("test_2", limit_time);

MICRO_BENCHMARK_MAIN ();

This is its C++ version:

#include <mbenchmark/all.hpp>
#include <chrono>
#include <thread>

namespace
{
  using micro_benchmark::with_constraints;

  void
  test_1 (std::vector<std::size_t> const&)
  {
    std::this_thread::sleep_for (std::chrono::milliseconds(10));
  }

  void
  limit_iterations (micro_benchmark::test_case& test)
  {
    test.limit_iterations (300, 400);
    test.limit_samples (2, 5);
  }

  auto rt1 = micro_benchmark::register_test (with_constraints,
                                             "test_1", limit_iterations,
                                             test_1);
  void
  test_2 (std::vector<std::size_t> const&)
  {
    std::this_thread::sleep_for (std::chrono::milliseconds(10));
  }

  void
  limit_time (micro_benchmark::test_case& test)
  {
    test.max_time (std::chrono::seconds (1));
  }

  auto rt2 = micro_benchmark::register_test (with_constraints,
                                             "test_2", limit_time,
                                             test_2);
}

MICRO_BENCHMARK_MAIN ();

This is its Guile version:

(define (test-1)
  (usleep 10000))

(register-test! "test-1" #:test test-1
                ;; This test will run for 300 to 400 iterations...
                #:min-iterations 300
                #:max-iterations 400
                ;; ... taking a measurement every 2 to 5 iterations.
                #:min-sample-iterations 2
                #:max-sample-iterations 5)

(register-test! "test-2" #:test test-1
                ;; This test will run for at most 1 second.
                #:max-time 1)

(main (command-line))

Their output could look like this:

constraints --brief --log-level=warn
Suite: constraints (2 test executions)
====================================
Test Name | Iterations | It.Time (μ)
====================================
   test_1 |        400 |      10.3ms
   test_2 |         97 |      10.3ms
====================================

1.4.6 Test Dimensions

Usually, we want to provide different inputs to the test code in order to compare its behavior. To this end, MicroBenchmark allows you to control the sizes provided to the test (see Test Constraints Reference), which can be used to produce tailored input.

The following examples show how to control the input of the test code:

#include <mbenchmark/all.h>
#include <unistd.h>

static void
one_dimension (micro_benchmark_test_case test)
{
  /* This test will have one dimension with four different values,
     so four executions will be performed.  */
  const size_t sizes[] = { 1, 2, 3, 4 };
  const size_t ssizes = sizeof (sizes) / sizeof (*sizes);
  micro_benchmark_test_case_add_dimension (test, ssizes, sizes);
}

static size_t *
set_up_1d (micro_benchmark_test_state state)
{
  /* Prepare the data */
  static size_t s;
  s = micro_benchmark_state_get_size (state, 0) * 100000;
  return &s;
}

static void
test_1d (size_t *sz)
{
  /* test code */
  usleep (*sz);
}

MICRO_BENCHMARK_REGISTER_AUTO_TEST (set_up_1d, test_1d, 0);
MICRO_BENCHMARK_CONSTRAINT_TEST ("test_1d", one_dimension);

static void
three_dimensions (micro_benchmark_test_case test)
{
  /* This test will have three dimensions, but no variation on them,
     so only one execution will be performed.  */
  const size_t sizes[] = { 1, 2, 3 };
  micro_benchmark_test_case_add_dimension (test, 1, sizes + 0);
  micro_benchmark_test_case_add_dimension (test, 1, sizes + 1);
  micro_benchmark_test_case_add_dimension (test, 1, sizes + 2);
}

static size_t *
set_up_3d (micro_benchmark_test_state state)
{
  /* Prepare the data */
  static size_t s[3];
  s[0] = micro_benchmark_state_get_size (state, 0) * 10000;
  s[1] = micro_benchmark_state_get_size (state, 1) * 100000;
  s[2] = micro_benchmark_state_get_size (state, 2) * 1000000;
  return s;
}

static void
test_3d (size_t *sz)
{
  /* test code */
  usleep (sz[0] + sz[1] + sz[2]);
}

MICRO_BENCHMARK_REGISTER_AUTO_TEST (set_up_3d, test_3d, 0);
MICRO_BENCHMARK_CONSTRAINT_TEST ("test_3d", three_dimensions);

MICRO_BENCHMARK_MAIN ();

This is its C++ version:

#include <mbenchmark/all.hpp>
#include <chrono>
#include <thread>

namespace
{
  using micro_benchmark::with_constraints;

  void
  one_dimension (micro_benchmark::test_case& test)
  {
    test.add_dimension ({ 1, 2, 3 });
  }

  void
  test_1d (std::vector<std::size_t> const& v)
  {
    std::this_thread::sleep_for (std::chrono::milliseconds (v[0] * 100));
  }

  auto t1d = micro_benchmark::register_test (with_constraints, "test1d",
                                             one_dimension, test_1d);

  void
  three_dimensions (micro_benchmark::test_case& test)
  {
    test.add_dimension ({ 1 });
    test.add_dimension ({ 10 });
    test.add_dimension ({ 100 });
  }

  void
  test_3d (std::vector<std::size_t> const& v)
  {
    std::this_thread::sleep_for (std::chrono::milliseconds (v[0] + v[1] + v[2]));
  }

  auto t3d = micro_benchmark::register_test (with_constraints, "test3d",
                                             three_dimensions, test_3d);

}

MICRO_BENCHMARK_MAIN ();

This is its Guile version:

(define (test1d n)
  (usleep (* n 100000)))

(register-test! "test1d" #:test test1d
                #:dimensions '((1 2 3)))

(define (test3d x y z)
  (usleep (* 100000 (+ x y z))))

(register-test! "test3d" #:test test3d
                #:dimensions '((1) (2) (3)))

(main (command-line))

This could be their output:

dimensions --brief --log-level=warn
Suite: dimensions (5 test executions)
========================================
    Test Name | Iterations | It.Time (μ)
========================================
      test_1d |         -- |          --
    test_1d/1 |         50 |       100ms
    test_1d/2 |         25 |       200ms
    test_1d/3 |         17 |       300ms
    test_1d/4 |         13 |       400ms
----------------------------------------
test_3d/1/2/3 |          2 |       3.21s
========================================

2 Installation

The library can be installed from source by following the common steps explained in INSTALL:

$ ./configure --prefix=<prefix>
$ make -j
$ make check -j
$ make install

The following sections show in detail the requirements for the build process and the specific available configuration options.


2.1 Requirements

The following sections detail the software needed to build and run MicroBenchmark.


2.1.1 Runtime Requirements

MicroBenchmark uses the following components at runtime:

  • C library

    Several functions from the library are used at runtime. These include mathematical functions for the statistical calculations. Some configurations may use realtime facilities from the system.

  • GNU libunistring

    Output facilities use GNU libunistring for string manipulation.

  • C++ Standard Library (Optional)

    The C++ binding may use templates such as std::basic_string and std::function as implementation details, even when the public interface does not depend on them.

  • GNU Guile (Optional)

    The Guile binding uses the modern foreign function interface, which was introduced with Guile 2.2.


2.1.2 Build Requirements

The generic build process for MicroBenchmark from a released source tarball has a reduced set of dependencies. These are the required packages:

  • Bourne Shell and core utilities
  • GNU sed or a compatible implementation
  • GNU awk or a compatible implementation
  • GNU grep or a compatible implementation
  • GNU tar or a compatible implementation
  • Make

    For the build recipes and test suite.

  • C99 toolchain

    This release has passed all the tests with the following toolchains:

    • x86_64-linux-gnu
      • GNU GCC 4.9.4 - 12.3.0
      • LLVM/clang 6.0.1 - 15.0.7

        Tests run on GNU Guix: GNU Binutils 2.38 and GNU libc 2.35.

  • C++ toolchain (Optional)

    This release has passed all the tests with the following toolchains:

    • x86_64-linux-gnu
      • GNU G++ 5.5.0 - 12.3.0
      • LLVM/clang++ 6.0.1 - 15.0.7

        Tests run on GNU Guix: GNU Binutils 2.38 and GNU libc 2.35.

  • GNU Guile (Optional)

    This release has passed all the tests with the following toolchain and GNU Guile combinations:

    • x86_64-linux-gnu
      • Guile 2.2.7 (GNU GCC 4.9.4 - 12.3.0; LLVM/clang 6.0.1 - 15.0.7)
      • Guile 3.0.9 (GNU GCC 4.9.4 - 12.3.0; LLVM/clang 6.0.1 - 15.0.7)

        Tests run on GNU Guix: GNU Binutils 2.38 and GNU libc 2.35.

  • GNU gettext (Optional)

    MicroBenchmark uses GNU gettext (see GNU gettext) for its localization (see Configuration.)

  • GNU Gcov (Optional)

    The coverage report generation (see Configuration) uses gcov (see gcov in GNU Gcov) or a compatible interface2.

  • GNU Texinfo (Optional)

    Info files can be found in the distributed tarballs, but the generation of the manual pages in HTML format requires a functional GNU Texinfo installation (see GNU Texinfo manual).

  • TeX Distribution (Optional)

    The manual pages can be generated in DVI and PDF formats, which requires a functional TeX distribution.

These are common package names for these dependencies:

  • gcc
  • libunistring-dev, libunistring-devel
  • make
  • gettext (Optional)
  • g++ or gcc-c++ (Optional)
  • texinfo (Optional)
  • texlive (Optional)
  • guile-3.0-dev, guile30-devel (Optional)
  • guile-2.2-dev, guile22-devel (Optional)

2.1.3 Build From VCS Requirements

Some of the optional dependencies might be mandatory to build the software from VCS:

  • GNU texinfo is not an optional requirement, as info files are generated from the sources.
  • A TeX distribution is required for the make target distcheck.

There are some additional dependencies, besides git itself, needed to build MicroBenchmark from the git repository. The following software is needed to compile MicroBenchmark after modifying its sources:

  • GNU wget (Optional)

    The external dependencies build-aux/gitlog-to-changelog and build-aux/test-driver.scm will be downloaded automatically when they aren’t found in the source directory. You can avoid this by placing them manually in the source tree.

  • GNU Autotools

    MicroBenchmark uses GNU Autoconf (see GNU Autoconf) for the generation of configure. GNU Automake (see GNU Automake) is used for the generation of Makefile.in. GNU libtool (see GNU libtool) is needed for the library generation scripts. GNU autopoint (see autopoint in GNU autopoint) is needed for the internationalization support.

  • GNU Autoconf Archive

    configure.ac uses the macro AX_CXX_COMPILE_STDCXX (see ax_cxx_compile_stdcxx in GNU Autoconf Archive.)

  • GNU indent (Optional)

    The target make indent-code and the script build-aux/indent.sh use GNU indent (see GNU indent) to format C source code.

These are common names for the packages providing these dependencies:

  • git
  • wget
  • autoconf
  • autoconf-archive
  • automake
  • autopoint
  • libtool
  • indent


2.2 Configuration

MicroBenchmark provides several options to be used at configure time. In addition to generic parameters such as --prefix, --libdir and so on, the generated configure script accepts these options:

  • --disable-assert

    Disable assertions on the code.

  • --enable-clock-gettime
  • --disable-clock-gettime

    Enable or disable clock_gettime chronometer implementation (see Predefined Chronometers.) If this parameter is not provided, this chronometer will be compiled if clock_gettime is available.

  • --enable-gettimeofday
  • --disable-gettimeofday

    Enable or disable gettimeofday chronometer implementation (see Predefined Chronometers.) If this parameter is not provided, this chronometer will be compiled if gettimeofday is available.

  • --enable-itimer
  • --disable-itimer

    Enable or disable itimer timer implementation (see Predefined Timers.) If this parameter is not provided, this timer will be compiled if setitimer is available.

  • --enable-timer-t
  • --disable-timer-t

    Enable or disable timer-t timer implementation (see Predefined Timers.) If this parameter is not provided, this timer will be compiled if timer_create is available.

  • --enable-timerfd
  • --disable-timerfd

    Enable or disable timerfd timer implementation (see Predefined Timers.) If this parameter is not provided, the timer will be compiled if timerfd_create is available.

  • --disable-guile

    Disable Guile bindings.

  • --enable-coverage
  • --enable-coverage=auto

    Enable the generation of a coverage report with make check. The generated report can be found at build-aux/mbenchmark-gcov.tar.gz. If --enable-coverage is used, any error while checking for the tools needed to generate the coverage report will result in a failed execution of the configure script. If --enable-coverage=auto is used, any error while finding or checking those tools will disable the generation of the report.

    The coverage report is disabled by default.

  • --disable-traces

    Do not emit trace level log calls on the code.

  • --with-log-level=LEVEL

    Select the default log level for the library. The following levels are available. Each level includes the previous ones.

    • error: Only error messages are shown.
    • warn: Warning messages are emitted to the log output.
    • info: Messages about the current execution status are emitted to the log output.
    • debug: Debugging information is emitted to the log output.
    • trace: Detailed information at each step of the suite execution is emitted to the log output.
  • --with-libunistring=DIR
  • --with-libunistring-include=INCLUDEDIR
  • --with-libunistring-libs=LIBDIR

    Select the libunistring installation used.


3 Advanced Usage

The following sections show some more advanced features offered by MicroBenchmark, such as different output formats (see Report Output) and helper functionality to tell the compiler that we really want to perform the desired computations (see Optimizer Tips.)

Contact the mailing list <bug-mbenchmark@nongnu.org> if you have ideas or more information about possible complex use cases of MicroBenchmark.


3.1 Comparing Implementations

The following example shows a more realistic use case in C:

#include <mbenchmark/all.h>
#include <stdio.h>
#include <string.h>

/* Main can be placed anywhere.  */
MICRO_BENCHMARK_MAIN ();

/* Recursive implementation -- overflows with small values!  */
long
fibrec (int x)
{
  if (x > 1)
    return fibrec (x - 1) + fibrec (x - 2);
  if (x < 0)
    return fibrec (x + 2) - fibrec (x + 1);
  return x;
}

/* Iterative implementation -- overflows with small values!  */
long
fibit (int x)
{
  long curr, last;
#define impl(c, l, start, sign) \
  last = l; \
  curr = c; \
  for (int i = start; i < sign x; ++i) \
    { \
      long tmp = last sign curr; \
      last = curr; \
      curr = tmp; \
    } \
  return curr

  if (x > 0)
    {
      impl (1, 0, 1, +);
    }
  impl (0, 1, 0, -);
#undef impl
}

/* Prepare the pointer and its value.  */
static int *
set_up (micro_benchmark_test_state s)
{
  static int value = 0;

  value = micro_benchmark_state_get_size (s, 0);
  value *= micro_benchmark_state_get_size (s, 1) ? 1 : -1;
  return &value;
}

static void
tear_down (micro_benchmark_test_state s, int *ptr)
{
  char buf[100];
  const char *name = micro_benchmark_state_get_name (s);
  if (strcmp (name, "test_fibit") == 0)
    snprintf (buf, sizeof (buf) - 1, "fibit/%d", *ptr);
  else
    snprintf (buf, sizeof (buf) - 1, "fibrec/%d", *ptr);
  buf[sizeof (buf) - 1] = '\0';
  micro_benchmark_state_set_name (s, buf);
}

/* Recursive implementation test.  */
static void
test_fibrec (int *r)
{
  long fib = fibrec (*r);
  MICRO_BENCHMARK_DO_NOT_OPTIMIZE (fib);
}

MICRO_BENCHMARK_REGISTER_AUTO_TEST (set_up, test_fibrec, tear_down);

/* Iterative implementation test.  */
static void
test_fibit (int *r)
{
  long fib = fibit (*r);
  MICRO_BENCHMARK_DO_NOT_OPTIMIZE (fib);
}

MICRO_BENCHMARK_REGISTER_AUTO_TEST (set_up, test_fibit, tear_down);

/* Constraints on the recursive test.  */
static void
rec_constraints (micro_benchmark_test_case test)
{
  const size_t sizes[] = { 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30 };
  const size_t ssizes = sizeof (sizes) / sizeof (*sizes);
  micro_benchmark_test_case_add_dimension (test, ssizes, sizes);

  const size_t sign[] = { 0, 1 };
  const size_t ssign = sizeof (sign) / sizeof (*sign);
  micro_benchmark_test_case_add_dimension (test, ssign, sign);

  micro_benchmark_clock_time mt = { 3, 0 };
  micro_benchmark_test_case_set_max_time (test, mt);
  micro_benchmark_test_case_limit_iterations (test, 0, 1000000);
}

MICRO_BENCHMARK_CONSTRAINT_TEST ("test_fibrec", rec_constraints);

/* Constraints on the iterative test, this is enough.  */
static void
it_constraints (micro_benchmark_test_case test)
{
  const size_t sizes[] = { 10, 20, 30, 40 };
  const size_t ssizes = sizeof (sizes) / sizeof (*sizes);
  micro_benchmark_test_case_add_dimension (test, ssizes, sizes);

  const size_t sign[] = { 0, 1 };
  const size_t ssign = sizeof (sign) / sizeof (*sign);
  micro_benchmark_test_case_add_dimension (test, ssign, sign);

  micro_benchmark_clock_time mt = { 3, 0 };
  micro_benchmark_test_case_set_max_time (test, mt);
  micro_benchmark_test_case_limit_iterations (test, 0, 1000000);
}

MICRO_BENCHMARK_CONSTRAINT_TEST ("test_fibit", it_constraints);

This is its C++ version:

#include <mbenchmark/all.hpp>
#include <functional> /*  std::plus and std::minus  */
#include <string>

/* Main can be placed anywhere.  */
MICRO_BENCHMARK_MAIN ();

namespace
{
  /* Recursive implementation.  */
  template <typename T, typename I = int>
  T
  fibrec (I x)
  {
    if (x > 1)
      return fibrec<T> (x - 1) + fibrec<T> (x - 2);
    if (x < 0)
      return fibrec<T> (x + 2) - fibrec<T> (x + 1);
    return x;
  }

  /* Iterative implementation.  */
  template <typename T, typename I = int>
  T
  fibit (I x)
  {
    auto doit = [x] (T curr, T last, I start, auto&& op)
    {
      for (I i = start; i < op (0, x); ++i)
        {
          T tmp = op (last, curr);
          last = curr;
          curr = tmp;
        }
      return curr;
    };
    if (x > 0)
      return doit (T{1}, T{0}, 1, std::plus<T>{});
    return doit (T{0}, T{1}, 0, std::minus<T>{});
  }

  /* Prepare the pointer and its value.  */
  int
  set_up (micro_benchmark::state const& s)
  {
    auto sizes = s.sizes ();
    int value = sizes.at (0);
    value *= sizes.at (1) ? 1 : -1;
    return value;
  }

  void
  tear_down (micro_benchmark::state& s, int v)
  {
    constexpr auto fibit_base = "fibit/";
    constexpr auto fibrec_base = "fibrec/";
    std::string name = s.get_name ();
    if (name == "test_fibit")
      s.set_name (fibit_base + std::to_string (v));
    else
      s.set_name (fibrec_base + std::to_string (v));
  }

  /* Constraints on the recursive test.  */
  void
  rec_constraints (micro_benchmark::test_case& test)
  {
    test.add_dimension ({ 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30 });
    test.add_dimension ({ 0, 1 });

    test.limit_iterations (0, 1000000);
  }

  /* Register the recursive test with its constraints.  */
  void
  rtestfun (int v)
  {
    auto r = fibrec<long> (v);
    micro_benchmark::do_not_optimize (r);
  }

  auto rtest =
    micro_benchmark::register_test (micro_benchmark::with_constraints,
                                    "test_fibrec", rec_constraints,
                                    rtestfun, set_up, tear_down);

  /* Constraints on the iterative test, this is enough.  */
  void
  it_constraints (micro_benchmark::test_case& test)
  {
    test.add_dimension ({ 10, 20, 30, 40 });
    test.add_dimension ({ 0, 1 });

    test.limit_iterations (0, 1000000);
  }

  /* Register the iterative test with its constraints.  */
  void
  itestfun (int v)
  {
    auto r = fibit<long> (v);
    micro_benchmark::do_not_optimize (r);
  }

  auto itest =
    micro_benchmark::register_test (micro_benchmark::with_constraints,
                                    "test_fibit", it_constraints,
                                    itestfun, set_up, tear_down);
}

This is its Guile version:

(define (fibrec n)
  (cond ((> n 1) (+ (fibrec (- n 2)) (fibrec (- n 1))))
        ((< n 0) (- (fibrec (+ n 2)) (fibrec (+ n 1))))
        (else n)))

(define (fibit n)
  (define (up curr last p)
    (if (> p 0)
        (up (+ curr last) curr (- p 1))
        curr))
  (define (down curr last p)
    (if (< p 0)
        (down (- last curr) curr (+ p 1))
        curr))
  (cond ((> n 1) (up 1 0 (- n 1)))
        ((< n 0) (down 0 1 n))
        (else n)))

(define (set-up state)
  (define (doit size sign)
    (if (eqv? sign 0)
	(list size)
	(list (- size))))
  (apply doit (state-sizes state)))

(define (tear-down state n)
  (let ((base (if (equal? (state-name state) "fibit") "fibit/" "fibrec/")))
    (set-state-name! state (string-append base (number->string n)))))

(register-test! "fibit"
                #:test fibit
                #:set-up set-up
                #:tear-down tear-down
                #:dimensions '((10 50 100 500 1000 5000 10000 50000 100000)
                               (0 1))
                #:max-iterations 1000000)
(register-test! "fibrec"
                #:test fibrec
                #:set-up set-up
                #:tear-down tear-down
                #:dimensions '((10 12 14 16 18 20 22 24 26 28 30)
                               (0 1))
                #:max-iterations 1000000)

(main (command-line))

These files can be found in the source code tree as doc/examples/fib.c, doc/examples/fib.cxx and doc/examples/fib.scm, respectively.

Their output would be something like this:

fib --brief --log-level=warn
Suite: fib (40 test executions)
========================================
    Test Name | Iterations | It.Time (μ)
========================================
        fibit |         -- |          --
     fibit/10 |    1000000 |       252ns
     fibit/50 |    1000000 |       429ns
    fibit/100 |    1000000 |      1.64μs
    fibit/500 |      92253 |      53.3μs
   fibit/1000 |      32308 |       159μs
   fibit/5000 |       2509 |      2.00ms
  fibit/10000 |        639 |      7.84ms
  fibit/50000 |         32 |       157ms
 fibit/100000 |          9 |       582ms
    fibit/-10 |    1000000 |       308ns
    fibit/-50 |    1000000 |       503ns
   fibit/-100 |    1000000 |      1.94μs
   fibit/-500 |      80266 |      62.7μs
  fibit/-1000 |      29633 |       169μs
  fibit/-5000 |       2153 |      2.33ms
 fibit/-10000 |        576 |      8.78ms
 fibit/-50000 |         29 |       173ms
fibit/-100000 |          9 |       621ms
----------------------------------------
       fibrec |         -- |          --
    fibrec/10 |    1000000 |      2.07μs
    fibrec/12 |     884087 |      4.53μs
    fibrec/14 |     388238 |      11.7μs
    fibrec/16 |     155112 |      31.1μs
    fibrec/18 |      60585 |      81.3μs
    fibrec/20 |      23411 |       213μs
    fibrec/22 |       8991 |       555μs
    fibrec/24 |       3441 |      1.45ms
    fibrec/26 |       1316 |      3.80ms
    fibrec/28 |        510 |      9.84ms
    fibrec/30 |        195 |      25.7ms
   fibrec/-10 |    1000000 |      2.81μs
   fibrec/-12 |     593652 |      7.29μs
   fibrec/-14 |     243955 |      19.3μs
   fibrec/-16 |      95423 |      51.0μs
   fibrec/-18 |      36732 |       135μs
   fibrec/-20 |      13949 |       357μs
   fibrec/-22 |       5257 |     0.945ms
   fibrec/-24 |       1998 |      2.51ms
   fibrec/-26 |        770 |      6.53ms
   fibrec/-28 |        295 |      17.0ms
   fibrec/-30 |        113 |      44.5ms
----------------------------------------
========================================

The speed of the recursive implementation decreases very quickly as the magnitude of its input grows. The iterative version, on the other hand, can handle inputs several orders of magnitude larger in a comparable amount of time.


3.2 Report Output

MicroBenchmark provides several predefined formats for its output. The default output format is a tabulated view of the results (see Console Output), but the library can also produce S-expressions (see Lisp Output) and a record format (see Text Output) directly.


3.2.1 Console Output

NOTE: This format is under development and may change.

The usual output looks like this:

Suite: console (1 test execution)
=========================================================
     Test Name | Iterations | It.Time (μ/σ)  | Total Time
=========================================================
     Self-Test |         -- |             -- |         --
         empty |      65536 |  22.7ns/2.89ns |    0.3484s
       barrier |      65536 |  23.1ns/2.19ns |    0.3508s
    read+write |      65536 |  29.6ns/1.13ns |    0.3521s
      addition |      65536 |  32.7ns/1.71ns |    0.3529s
multiplication |      65536 |  34.8ns/1.16ns |    0.3513s
         empty |      65536 |  23.6ns/2.88ns |    0.3536s
---------------------------------------------------------
          test |       3994 | 1.25ms/0.074ms |    5.0912s
=========================================================

And here are all the possible values:

Suite: console (1 test execution)
======================================================================================================================================================
     Test Name | Total It. | Iterations | It.Time (μ/σ²/σ)        | Total S. | Samples | S.Time (μ/σ²/σ)       | S.Iter. (μ/σ²/σ) | Total Time | Sizes
======================================================================================================================================================
     Self-Test |        -- |         -- |                      -- |       -- |      -- |                    -- |               -- |         -- |    --
         empty |     65536 |      65536 |   21.1ns/8.06ns²/2.84ns |       58 |      58 |  24.1μs/189μs²/13.7μs |  1130/373162/611 |    0.3273s |   {0}
       barrier |     65536 |      65536 |   22.3ns/2.00ns²/1.41ns |       59 |      59 |  24.7μs/195μs²/14.0μs |  1111/393171/627 |    0.3430s |   {1}
    read+write |     65536 |      65536 |   29.3ns/8.21ns²/2.87ns |       60 |      60 |  31.6μs/334μs²/18.3μs |  1092/398707/631 |    0.3459s |   {2}
      addition |     65536 |      65536 |   32.2ns/6.57ns²/2.56ns |       60 |      60 |  34.9μs/398μs²/19.9μs |  1092/390244/625 |    0.3470s |   {3}
multiplication |     65536 |      65536 |   34.6ns/2.04ns²/1.43ns |       60 |      60 |  37.7μs/460μs²/21.4μs |  1092/383910/620 |    0.3485s |   {4}
         empty |     65536 |      65536 |   22.6ns/3.05ns²/1.75ns |       60 |      60 |  24.5μs/198μs²/14.1μs |  1092/386938/622 |    0.3484s |   {5}
------------------------------------------------------------------------------------------------------------------------------------------------------
          test |      3860 |       3860 | 1.29ms/0.002ms²/0.045ms |      311 |     311 | 16.0ms/78.1ms²/8.83ms |   12.4/47.5/6.89 |    5.0819s |     ∅
======================================================================================================================================================
------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------


3.2.2 Lisp Output

This format is compatible with Lisp’s read procedure, and can be used for further processing of the collected data.

NOTE: The format is under development and may change.

The usual output looks like this:

(suite (name "lisp")
  (test (name "Self-Test")
    (execution (number 0)
      (name "empty")
      (total-time 0.326749 "s")
      (iterations 65536)
      (iteration-time
        (mean 22.072614 "ns")
        (std-deviation 4.105545 "ns")))
    (execution (number 1)
      (name "barrier")
      (total-time 0.326562 "s")
      (iterations 65536)
      (iteration-time
        (mean 21.374931 "ns")
        (std-deviation 1.093298 "ns")))
    (execution (number 2)
      (name "read+write")
      (total-time 0.333746 "s")
      (iterations 65536)
      (iteration-time
        (mean 28.694221 "ns")
        (std-deviation 3.253493 "ns")))
    (execution (number 3)
      (name "addition")
      (total-time 0.345400 "s")
      (iterations 65536)
      (iteration-time
        (mean 38.210674 "ns")
        (std-deviation 25.125504 "ns")))
    (execution (number 4)
      (name "multiplication")
      (total-time 0.339907 "s")
      (iterations 65536)
      (iteration-time
        (mean 35.148271 "ns")
        (std-deviation 5.568858 "ns")))
    (execution (number 5)
      (name "empty")
      (total-time 0.332254 "s")
      (iterations 65536)
      (iteration-time
        (mean 21.904489 "ns")
        (std-deviation 1.905702 "ns")))
    (number-of-runs 6))
  (test (name "test")
    (execution (number 0)
      (name "test")
      (total-time 5.087626 "s")
      (iterations 3981)
      (iteration-time
        (mean 1.240041 "ms")
        (std-deviation 0.074364 "ms")))
    (number-of-runs 1))
  (number-of-tests 2))

And here are all the possible values:

(suite (name "lisp")
  (test (name "Self-Test")
    (execution (number 0)
      (name "empty")
      (dimensions (0))
      (total-time 0.314982 "s")
      (total-iterations 65536)
      (iterations 65536)
      (total-samples 57)
      (used-samples 57)
      (iteration-time
        (mean 20.941117 "ns")
        (std-deviation 5.240370 "ns")
        (variance 27.461475 "ns²"))
      (sample-time
        (mean 24.478947 "μs")
        (std-deviation 16.031848 "μs")
        (variance 257.020146 "μs²"))
      (sample-iterations
        (mean 1149.754386)
        (std-deviation 629.857787)
        (variance 396720.831454))
      (custom-meters (total 0)))
    (execution (number 1)
      (name "barrier")
      (dimensions (1))
      (total-time 0.323575 "s")
      (total-iterations 65536)
      (iterations 65536)
      (total-samples 57)
      (used-samples 57)
      (iteration-time
        (mean 21.163393 "ns")
        (std-deviation 0.844636 "ns")
        (variance 0.713410 "ns²"))
      (sample-time
        (mean 24.239070 "μs")
        (std-deviation 13.764559 "μs")
        (variance 189.463080 "μs²"))
      (sample-iterations
        (mean 1149.754386)
        (std-deviation 652.575592)
        (variance 425854.902882))
      (custom-meters (total 0)))
    (execution (number 2)
      (name "read+write")
      (dimensions (2))
      (total-time 0.322978 "s")
      (total-iterations 65536)
      (iterations 65536)
      (total-samples 57)
      (used-samples 57)
      (iteration-time
        (mean 27.766774 "ns")
        (std-deviation 2.136562 "ns")
        (variance 4.564897 "ns²"))
      (sample-time
        (mean 31.624877 "μs")
        (std-deviation 17.903620 "μs")
        (variance 320.539599 "μs²"))
      (sample-iterations
        (mean 1149.754386)
        (std-deviation 652.000419)
        (variance 425104.545739))
      (custom-meters (total 0)))
    (execution (number 3)
      (name "addition")
      (dimensions (3))
      (total-time 0.324984 "s")
      (total-iterations 65536)
      (iterations 65536)
      (total-samples 58)
      (used-samples 58)
      (iteration-time
        (mean 31.652676 "ns")
        (std-deviation 7.132115 "ns")
        (variance 50.867060 "ns²"))
      (sample-time
        (mean 37.170897 "μs")
        (std-deviation 27.209318 "μs")
        (variance 740.347009 "μs²"))
      (sample-iterations
        (mean 1129.931034)
        (std-deviation 656.896073)
        (variance 431512.451301))
      (custom-meters (total 0)))
    (execution (number 4)
      (name "multiplication")
      (dimensions (4))
      (total-time 0.323793 "s")
      (total-iterations 65536)
      (iterations 65536)
      (total-samples 57)
      (used-samples 57)
      (iteration-time
        (mean 32.703045 "ns")
        (std-deviation 0.937458 "ns")
        (variance 0.878827 "ns²"))
      (sample-time
        (mean 37.474596 "μs")
        (std-deviation 21.326054 "μs")
        (variance 454.800575 "μs²"))
      (sample-iterations
        (mean 1149.754386)
        (std-deviation 656.408880)
        (variance 430872.617168))
      (custom-meters (total 0)))
    (execution (number 5)
      (name "empty")
      (dimensions (5))
      (total-time 0.326234 "s")
      (total-iterations 65536)
      (iterations 65536)
      (total-samples 58)
      (used-samples 58)
      (iteration-time
        (mean 21.576884 "ns")
        (std-deviation 2.229484 "ns")
        (variance 4.970598 "ns²"))
      (sample-time
        (mean 24.337345 "μs")
        (std-deviation 14.608577 "μs")
        (variance 213.410528 "μs²"))
      (sample-iterations
        (mean 1129.931034)
        (std-deviation 660.495265)
        (variance 436253.995160))
      (custom-meters (total 0)))
    (number-of-runs 6))
  (test (name "test")
    (execution (number 0)
      (name "test")
      (total-time 5.082051 "s")
      (total-iterations 3938)
      (iterations 3938)
      (total-samples 311)
      (used-samples 311)
      (iteration-time
        (mean 1.265120 "ms")
        (std-deviation 0.067238 "ms")
        (variance 0.004521 "ms²"))
      (sample-time
        (mean 15.978674 "ms")
        (std-deviation 8.834413 "ms")
        (variance 78.046855 "ms²"))
      (sample-iterations
        (mean 12.662379)
        (std-deviation 7.027033)
        (variance 49.379193))
      (custom-meters (total 0)))
    (number-of-runs 1))
  (number-of-tests 2))


3.2.3 Text Output

This format is compatible with GNU recutils (see GNU recutils), and can be used for further processing of the collected data.

NOTE: The format is under development and may change.

The usual output looks like this:

# Suite Name: recutils
# Number of tests: 2

Suite: recutils
Test: Self-Test
Test-Case: empty
Total-Time: 0.330962 seconds
Iterations: 65536
Iteration-Time: 21.889642 ns
Iteration-Time-Dev: 3.465406 ns


Suite: recutils
Test: Self-Test
Test-Case: barrier
Total-Time: 0.336391 seconds
Iterations: 65536
Iteration-Time: 22.075144 ns
Iteration-Time-Dev: 1.480519 ns


Suite: recutils
Test: Self-Test
Test-Case: read+write
Total-Time: 0.338262 seconds
Iterations: 65536
Iteration-Time: 28.806907 ns
Iteration-Time-Dev: 1.788922 ns


Suite: recutils
Test: Self-Test
Test-Case: addition
Total-Time: 0.335605 seconds
Iterations: 65536
Iteration-Time: 31.240347 ns
Iteration-Time-Dev: 1.614777 ns


Suite: recutils
Test: Self-Test
Test-Case: multiplication
Total-Time: 0.336565 seconds
Iterations: 65536
Iteration-Time: 33.552166 ns
Iteration-Time-Dev: 0.872344 ns


Suite: recutils
Test: Self-Test
Test-Case: empty
Total-Time: 0.336429 seconds
Iterations: 65536
Iteration-Time: 22.165482 ns
Iteration-Time-Dev: 1.831835 ns


Suite: recutils
Test: test
Test-Case: test
Total-Time: 5.087685 seconds
Iterations: 4019
Iteration-Time: 1.244888 ms
Iteration-Time-Dev: 0.070221 ms


And here are all the possible values:

# Suite Name: recutils
# Number of tests: 2

Suite: recutils
Test: Self-Test
Test-Case: empty
Test-Dimensions: (0)
Total-Time: 0.313433 seconds
Total-Iterations: 65536
Total-Samples: 57
Iterations: 65536
Samples: 57
Iteration-Time: 21.266609 ns
Iteration-Time-Dev: 3.854903 ns
Iteration-Time-Var: 14.860275 ns²
Sample-Time: 24.024211 μs
Sample-Time-Dev: 14.308473 μs
Sample-Time-Var: 204.732392 μs²
Iterations-Sample: 1149.754386 iterations
Iterations-Sample-Dev: 658.869331 iterations
Iterations-Sample-Var: 434108.795739 iterations


Suite: recutils
Test: Self-Test
Test-Case: barrier
Test-Dimensions: (1)
Total-Time: 0.324264 seconds
Total-Iterations: 65536
Total-Samples: 57
Iterations: 65536
Samples: 57
Iteration-Time: 22.271273 ns
Iteration-Time-Dev: 1.055291 ns
Iteration-Time-Var: 1.113640 ns²
Sample-Time: 25.427140 μs
Sample-Time-Dev: 14.430901 μs
Sample-Time-Var: 208.250895 μs²
Iterations-Sample: 1149.754386 iterations
Iterations-Sample-Dev: 653.540940 iterations
Iterations-Sample-Var: 427115.760025 iterations


Suite: recutils
Test: Self-Test
Test-Case: read+write
Test-Dimensions: (2)
Total-Time: 0.324357 seconds
Total-Iterations: 65536
Total-Samples: 57
Iterations: 65536
Samples: 57
Iteration-Time: 29.853592 ns
Iteration-Time-Dev: 0.998944 ns
Iteration-Time-Var: 0.997890 ns²
Sample-Time: 34.188228 μs
Sample-Time-Dev: 19.447461 μs
Sample-Time-Var: 378.203752 μs²
Iterations-Sample: 1149.754386 iterations
Iterations-Sample-Dev: 655.631110 iterations
Iterations-Sample-Var: 429852.152882 iterations


Suite: recutils
Test: Self-Test
Test-Case: addition
Test-Dimensions: (3)
Total-Time: 0.327395 seconds
Total-Iterations: 65536
Total-Samples: 58
Iterations: 65536
Samples: 58
Iteration-Time: 32.130466 ns
Iteration-Time-Dev: 1.723760 ns
Iteration-Time-Var: 2.971349 ns²
Sample-Time: 36.404397 μs
Sample-Time-Dev: 21.188594 μs
Sample-Time-Var: 448.956513 μs²
Iterations-Sample: 1129.931034 iterations
Iterations-Sample-Dev: 638.763055 iterations
Iterations-Sample-Var: 408018.240774 iterations


Suite: recutils
Test: Self-Test
Test-Case: multiplication
Test-Dimensions: (4)
Total-Time: 0.326852 seconds
Total-Iterations: 65536
Total-Samples: 58
Iterations: 65536
Samples: 58
Iteration-Time: 35.174917 ns
Iteration-Time-Dev: 1.703684 ns
Iteration-Time-Var: 2.902540 ns²
Sample-Time: 39.356707 μs
Sample-Time-Dev: 22.986242 μs
Sample-Time-Var: 528.367301 μs²
Iterations-Sample: 1129.931034 iterations
Iterations-Sample-Dev: 664.098383 iterations
Iterations-Sample-Var: 441026.661827 iterations


Suite: recutils
Test: Self-Test
Test-Case: empty
Test-Dimensions: (5)
Total-Time: 0.328522 seconds
Total-Iterations: 65536
Total-Samples: 58
Iterations: 65536
Samples: 58
Iteration-Time: 22.131524 ns
Iteration-Time-Dev: 0.876206 ns
Iteration-Time-Var: 0.767737 ns²
Sample-Time: 24.892259 μs
Sample-Time-Dev: 14.260445 μs
Sample-Time-Var: 203.360286 μs²
Iterations-Sample: 1129.931034 iterations
Iterations-Sample-Dev: 649.394100 iterations
Iterations-Sample-Var: 421712.696915 iterations


Suite: recutils
Test: test
Test-Case: test
Test-Dimensions: ()
Total-Time: 5.089305 seconds
Total-Iterations: 3935
Total-Samples: 311
Iterations: 3935
Samples: 311
Iteration-Time: 1.269683 ms
Iteration-Time-Dev: 0.063774 ms
Iteration-Time-Var: 0.004067 ms²
Sample-Time: 15.970704 ms
Sample-Time-Dev: 8.820956 ms
Sample-Time-Var: 77.809264 ms²
Iterations-Sample: 12.652733 iterations
Iterations-Sample-Dev: 7.028627 iterations
Iterations-Sample-Var: 49.401597 iterations



3.3 Optimizer Tips

Compilers are usually allowed to throw away any computation whose effects are not visible. Micro-benchmarks usually throw away the result of the computation at each iteration, and the compiler might be really nice to you and move or throw away the code under test along with it. At the end of the day, the fastest execution is achieved by executing no code at all.

If you are experiencing this kind of problem, see Optimization Utilities Reference for useful functions to tell the compiler that we need the values produced by the code under test. Users of the C++ binding can check C++ Optimization Utilities Reference for their equivalents. The Guile binding does not implement these kinds of helpers yet.

These tools must be used carefully, as the compiler might well find ways to comply with our requirements with less work than we expected (really, big kudos to the compiler writers). Your mileage may vary.
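
As a minimal sketch of the pattern already used by the fib examples above (using only macros shown in this manual):

#include <mbenchmark/all.h>

static long
work (long x)
{
  return x * x + 1;            /* Cheap computation, easy to elide.  */
}

static void
test (void *unused)
{
  (void) unused;
  long r = work (42);
  /* Without this, r is dead and the whole call may be optimized
     away, measuring an empty loop instead of work.  */
  MICRO_BENCHMARK_DO_NOT_OPTIMIZE (r);
}

MICRO_BENCHMARK_REGISTER_SIMPLE_TEST (test);
MICRO_BENCHMARK_MAIN ();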



API Reference


Common API Concepts

The functions of this reference are specified using Floyd-Hoare logic, as P{Q}R, where P stands for the preconditions, Q for the call being specified, and R for the postconditions.

The following order is imposed between these elements:

  1. Preconditions must be met at the call point.
  2. Effects take place.
  3. Postconditions are guaranteed at the return point.
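
For instance, in this notation the classic triple x = 1 { x := x + 1 } x = 2 reads: if x = 1 holds at the call point, then after the assignment x := x + 1 executes, x = 2 is guaranteed at the return point.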

The following examples showcase the format used by the API Reference:

[Lang] Example Function: return type function_example (ParamType value, …) extra_qualifiers

Preconditions:
  1. Conditions to be met at the call point.
    
  2. <-- This number is for reference purposes, all preconditions must be
    met at the call point.
    

Effects:
  1. Effect produced by the call.
    
  2. Another effect produced by the call when value has the special
    value foobar.
    
  3. <-- This number is for reference purposes.  It does not impose an
    order on the effects unless specified otherwise.
    

Postconditions:
  1. Condition provided after the call.
    
  2. Another condition provided after the call when ParamType is
    something.
    
  3. <-- This number is for reference purposes, all postconditions must be
    provided at the return point.
    

General comments regarding the function come after the formal
definition.


[Lang] Example Function: dynamic_example param

Preconditions:
…


Note:


4 C API Reference

The top level object is called a suite (see Suite Reference) which is composed by one or more tests.

Tests (see Test Reference) are defined by the struct micro_benchmark_test_definition. These definitions can be registered onto the suite.

Each test execution is controlled through a state object (see State Reference), where statistics are collected.

A report with the results (see Report Reference) is available once the execution has finished.

In addition to these interfaces, utility macros and functions are provided (see Utilities Reference) to reduce the boilerplate. Also, compilers sometimes need hints not to optimize away the code under test (see Optimization Utilities Reference.)

MicroBenchmark is at an early stage of development. The API might change in future versions.
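
The following minimal sketch shows this whole life cycle using only the functions documented in this chapter. The header name micro_benchmark.h, the assumption that micro_benchmark_test is a pointer to a constant micro_benchmark_test_definition, and do_work are illustrative, not taken from this manual:

  #include <micro_benchmark.h>          /* assumed header name */

  static void
  run_work (micro_benchmark_test_state state)
  {
    while (micro_benchmark_state_keep_running (state))
      do_work ();                       /* hypothetical code under test */
  }

  static const micro_benchmark_test_definition work_definition =
    { .is_auto = false, .test = run_work };

  int
  main (void)
  {
    micro_benchmark_suite suite = micro_benchmark_suite_create ("example");
    micro_benchmark_suite_register_test (suite, "work", &work_definition);
    micro_benchmark_suite_run (suite);
    micro_benchmark_print_report (micro_benchmark_suite_get_report (suite));
    micro_benchmark_suite_release (suite);
    return 0;
  }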


4.1 Suite Reference

The suite is represented by the following type:

C Data Type: micro_benchmark_suite

Opaque pointer type. This object controls the lifetime of its derived objects, so all references obtained using an object of this type are bound to the lifetime of that object.


C Suite Function: micro_benchmark_suite micro_benchmark_suite_create (const char *name)

Preconditions:

  1. name must point to a valid, zero-ended array of characters.

Effects:

  1. Creation of a new suite with name as its name.

Postconditions:

  1. The returned value is not NULL.


C Suite Function: void micro_benchmark_suite_release (micro_benchmark_suite suite)

Preconditions:

  1. suite has been obtained through micro_benchmark_suite_create.

Effects:

  1. The resources associated to suite are released.

Postconditions:

  1. Any object pointer obtained through suite, including suite, is invalidated.


C Suite Function: const char * micro_benchmark_suite_get_name (micro_benchmark_suite suite)

Preconditions:

  1. suite has been obtained through micro_benchmark_suite_create.

Effects:

  1. None.

Postconditions:

  1. The returned value is equivalent to the one provided to the call of micro_benchmark_suite_create that originated this suite.


C Suite Function: micro_benchmark_test_case micro_benchmark_suite_register_test (micro_benchmark_suite suite, const char *name, micro_benchmark_test test)

Preconditions:

  1. suite has been obtained through micro_benchmark_suite_create.
  2. name must point to a valid, zero-ended array of characters.
  3. test must point to a valid object.

Effects:

  1. test is added, under name, to the set of tests that suite will run.

Postconditions:

  1. The returned value is not NULL.
  2. The returned value points to the registered test.
  3. The registered test is enabled.
  4. The registered test contains a disjoint copy of test.


C Suite Function: size_t micro_benchmark_suite_get_number_of_tests (micro_benchmark_suite suite)

Preconditions:

  1. suite has been obtained through micro_benchmark_suite_create.

Effects:

  1. None.

Postconditions:

  1. The suite is not modified.
  2. The returned value is the number of tests registered on suite, including the internal test used to calculate the overhead of the framework.


Note that no interface to retrieve tests by name is provided, as several tests might be registered under the same name.

C Suite Function: micro_benchmark_test_case micro_benchmark_suite_get_test (micro_benchmark_suite suite, size_t pos)

Preconditions:

  1. suite has been obtained through micro_benchmark_suite_create.
  2. pos is less than the value returned by micro_benchmark_suite_get_number_of_tests.

Effects:

  1. None.

Postconditions:

  1. The suite is not modified.
  2. The return value is a reference to the test case at pos from suite.


C Suite Function: void micro_benchmark_suite_run (micro_benchmark_suite suite)

Preconditions:

  1. suite has been obtained through micro_benchmark_suite_create.
  2. micro_benchmark_suite_run (suite) has not been called.

Effects:

  1. The tests registered on suite whose status is enabled are run.
  2. The tests registered on suite whose status is not enabled are not run.

Postconditions:

  1. A report is available through micro_benchmark_suite_get_report.


C Suite Function: micro_benchmark_report micro_benchmark_suite_get_report (micro_benchmark_suite suite)

Preconditions:

  1. suite has been obtained through micro_benchmark_suite_create.
  2. micro_benchmark_suite_run (suite) has been called.

Effects:

  1. None.

Postconditions:

  1. The returned value is not NULL.
  2. The returned value is the report stored on suite.



4.2 Test Reference

This section shows the types and functions related to the test functionality.

The main type is:

C Data Type: micro_benchmark_test_case

Representation of a registered test case.


The following sections explain in depth the definition of test cases and how to associate constraints to their execution.


4.2.1 Test Definition Reference

C Function Type: micro_benchmark_set_up_fun

Its prototype is void *(*) (micro_benchmark_test_state).


C Function Type: micro_benchmark_tear_down_fun

Its prototype is void (*) (micro_benchmark_test_state, void *).


C Function Type: micro_benchmark_auto_test_fun

Its prototype is void (*) (void *).


C Function Type: micro_benchmark_test_fun

Its prototype is void (*) (micro_benchmark_test_state).


C Data Type: micro_benchmark_test_definition { bool is_auto, micro_benchmark_set_up_fun set_up, micro_benchmark_tear_down_fun tear_down, micro_benchmark_auto_test_fun auto_test, micro_benchmark_test_fun test }

Definition of a test case.

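As a hedged sketch, a test case definition with is_auto set to true could be filled as follows, assuming that the automatic path calls set_up once per measurement run, passes its result to every auto_test call, and hands it back to tear_down:

  #include <stdlib.h>
  #include <string.h>

  static void *
  set_up (micro_benchmark_test_state state)
  {
    (void) state;
    return malloc (4096);               /* buffer handed to auto_test */
  }

  static void
  tear_down (micro_benchmark_test_state state, void *data)
  {
    (void) state;
    free (data);
  }

  static void
  touch_buffer (void *data)
  {
    memset (data, 0, 4096);             /* code under test */
  }

  static const micro_benchmark_test_definition auto_definition =
    { .is_auto = true, .set_up = set_up, .tear_down = tear_down,
      .auto_test = touch_buffer };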

C Data Type: micro_benchmark_test_case_definition

Opaque type of a registered test case.


C Test Function: const char * micro_benchmark_test_case_get_name (micro_benchmark_test_case test)

Preconditions:

  1. Either:
    • test has been obtained through micro_benchmark_suite_register_test.
    • test has been obtained through micro_benchmark_suite_get_test.
    • test has been received as a parameter on a constraint definition and the scope of that function call is still active.
  2. The suite associated to test is still valid.

Effects:

  1. None.

Postconditions:

  1. The returned value is not NULL.
  2. The returned value is the name used to register the test case.


C Test Function: micro_benchmark_test micro_benchmark_test_case_get_definition (micro_benchmark_test_case test)

Preconditions:

  1. Either:
    • test has been obtained through micro_benchmark_suite_register_test.
    • test has been obtained through micro_benchmark_suite_get_test.
    • test has been received as a parameter on a constraint definition and the scope of that function call is still active.
  2. The suite associated to test is still valid.

Effects:

  1. None.

Postconditions:

  1. The returned value is not NULL.
  2. The micro_benchmark_test_definition pointed by the returned value contains the same values as the one provided to micro_benchmark_suite_register_test or to the MICRO_BENCHMARK_REGISTER_ macro family.


C Test Function: void micro_benchmark_test_case_set_enabled (micro_benchmark_test_case test, bool enabled)

Preconditions:

  1. Either:
    • test has been obtained through micro_benchmark_suite_register_test.
    • test has been obtained through micro_benchmark_suite_get_test.
    • test has been received as a parameter on a constraint definition and the scope of that function call is still active.
  2. The suite associated to test is still valid.

Effects:

  1. Any previous value provided to this function stops taking effect.
  2. Activate the execution of test if enabled is true.
  3. Deactivate the execution of test otherwise.

Postconditions:

  1. The test will run if enabled is true.


C Test Function: bool micro_benchmark_test_case_is_enabled (micro_benchmark_test_case test)

Preconditions:

  1. Either:
    • test has been obtained through micro_benchmark_suite_register_test.
    • test has been obtained through micro_benchmark_suite_get_test.
    • test has been received as a parameter on a constraint definition and the scope of that function call is still active.
  2. The suite associated to test is still valid.

Effects:

  1. None.

Postconditions:

  1. If micro_benchmark_test_case_set_enabled was called with test as its parameter, the returned value is the last value provided as its enabled parameter.


C Test Function: void micro_benchmark_test_case_set_data (micro_benchmark_test_case test, void *data, void (*cleanup) (void *))

Preconditions:

  1. Either:
    • test has been obtained through micro_benchmark_suite_register_test.
    • test has been obtained through micro_benchmark_suite_get_test.
    • test has been received as a parameter on a constraint definition and the scope of that function call is still active.
  2. The suite associated to test is still valid.
  3. cleanup points to a valid function for this signature.

Effects:

  1. data is stored on the test case.
  2. cleanup is stored on the test case.

Postconditions:

  1. cleanup will be called with data as its parameter at the end of the lifetime of test.


C Test Function: void * micro_benchmark_test_case_get_data (micro_benchmark_test_case test)

Preconditions:

  1. Either:
    • test has been obtained through micro_benchmark_suite_register_test.
    • test has been obtained through micro_benchmark_suite_get_test.
    • test has been received as a parameter on a constraint definition and the scope of that function call is still active.
  2. The suite associated to test is still valid.

Effects:

  1. None.

Postconditions:

  1. The returned value is the object pointer provided to micro_benchmark_test_case_set_data or NULL if micro_benchmark_test_case_set_data has not been called.

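A short sketch of the intended usage, assuming a hypothetical struct config as the per-case data; free is used as the cleanup function, so the library releases the object at the end of the case's lifetime:

  #include <stdlib.h>

  struct config { size_t block_size; };         /* hypothetical */

  static void
  attach_config (micro_benchmark_test_case tc)
  {
    struct config *cfg = malloc (sizeof *cfg);
    cfg->block_size = 4096;
    /* free (cfg) will be called at the end of the lifetime of tc.  */
    micro_benchmark_test_case_set_data (tc, cfg, free);
  }

The stored pointer can be retrieved later with micro_benchmark_test_case_get_data, or from inside the test through micro_benchmark_state_get_data (see State Reference.)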


4.2.2 Test Constraints Reference

This subsection contains the different static constraints that can be applied to a test case.

The following functions modify the constraints applied to the test case execution:

C Test Function: void micro_benchmark_test_case_add_dimension (micro_benchmark_test_case test, size_t num_sizes, const size_t *sizes)

Preconditions:

  1. Either:
    • test has been obtained through micro_benchmark_suite_register_test.
    • test has been obtained through micro_benchmark_suite_get_test.
    • test has been received as a parameter on a constraint definition and the scope of that function call is still active.
  2. The suite associated to test is still valid.
  3. sizes points to a valid object or an array of them.
  4. num_sizes is greater than 0.
  5. num_sizes is less than or equal to the number of elements pointed by sizes.

Effects:

  1. The test will be executed as many times as needed to explore all the possible combinations of the values pointed by sizes and all the previously stored combinations.

Postconditions:

  1. test contains a disjoint copy of the values pointed by sizes.

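For instance, the following sketch registers two dimensions on a freshly registered test case; the test will then be executed once per combination, that is, 3 × 2 = 6 times (register_copy_test and the def parameter are illustrative names):

  static void
  register_copy_test (micro_benchmark_suite suite, micro_benchmark_test def)
  {
    static const size_t lengths[] = { 16, 256, 4096 };
    static const size_t strides[] = { 1, 8 };
    micro_benchmark_test_case tc =
      micro_benchmark_suite_register_test (suite, "copy", def);
    micro_benchmark_test_case_add_dimension (tc, 3, lengths);
    micro_benchmark_test_case_add_dimension (tc, 2, strides);
  }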

C Test Function: size_t micro_benchmark_test_case_dimensions (micro_benchmark_test_case test)

Preconditions:

  1. Either:
    • test has been obtained through micro_benchmark_suite_register_test.
    • test has been obtained through micro_benchmark_suite_get_test.
    • test has been received as a parameter on a constraint definition and the scope of that function call is still active.
  2. The suite associated to test is still valid.

Effects:

  1. None.

Postconditions:

  1. The returned value is the number of dimensions provided to test.


C Test Function: void micro_benchmark_test_case_get_dimension (micro_benchmark_test_case test, size_t dimension, size_t *n_sizes, const size_t **sizes)

Preconditions:

  1. Either:
    • test has been obtained through micro_benchmark_suite_register_test.
    • test has been obtained through micro_benchmark_suite_get_test.
    • test has been received as a parameter on a constraint definition and the scope of that function call is still active.
  2. The suite associated to test is still valid.
  3. dimension is less than the value returned by micro_benchmark_test_case_dimensions.
  4. n_sizes points to a valid object.
  5. sizes points to a valid object.

Effects:

  1. Write on the object pointed by n_sizes.
  2. Write on the object pointed by sizes.

Postconditions:

  1. The object pointed by sizes points to a valid array.
  2. The object pointed by n_sizes contains the size of the array pointed by *sizes.


C Test Function: void micro_benchmark_test_case_skip_iterations (micro_benchmark_test_case test, size_t number)

Preconditions:

  1. Either:
    • test has been obtained through micro_benchmark_suite_register_test.
    • test has been obtained through micro_benchmark_suite_get_test.
    • test has been received as a parameter on a constraint definition and the scope of that function call is still active.
  2. The suite associated to test is still valid.

Effects:

  1. test is modified.

Postconditions:

  1. test execution will perform number iterations before starting to take performance measurements.


C Test Function: size_t micro_benchmark_test_case_iterations_to_skip (micro_benchmark_test_case test)

Preconditions:

  1. Either:
    • test has been obtained through micro_benchmark_suite_register_test.
    • test has been obtained through micro_benchmark_suite_get_test.
    • test has been received as a parameter on a constraint definition and the scope of that function call is still active.
  2. The suite associated to test is still valid.

Effects:

  1. None.

Postconditions:

  1. The returned value is the number of iterations that test will perform before starting to take performance measurements of its execution.
  2. If micro_benchmark_test_case_skip_iterations has been called with test as its parameter, the returned value is the one provided to micro_benchmark_test_case_skip_iterations.


The following functions control the number of iterations performed by the code under test.

C Test Function: void micro_benchmark_test_case_limit_iterations (micro_benchmark_test_case test, size_t min_iterations, size_t max_iterations)

Preconditions:

  1. Either:
    • test has been obtained through micro_benchmark_suite_register_test.
    • test has been obtained through micro_benchmark_suite_get_test.
    • test has been received as a parameter on a constraint definition and the scope of that function call is still active.
  2. The suite associated to test is still valid.
  3. Either:
    • max_iterations is zero.
    • max_iterations is greater than or equal to min_iterations.

Effects:

  1. test is modified.

Postconditions:

  1. The total number of iterations performed on each execution of test, without taking into account the iterations skipped, will be greater than or equal to min_iterations.
  2. If max_iterations is greater than 0, the total number of iterations performed on each execution of test, without taking into account the iterations skipped, will be less than or equal to max_iterations.

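A small sketch combining the iteration constraints seen so far; the values are arbitrary:

  static void
  tune_case (micro_benchmark_test_case tc)
  {
    /* Warm up: perform 100 iterations before measuring.  */
    micro_benchmark_test_case_skip_iterations (tc, 100);
    /* Measure at least 1000 iterations; 0 leaves the maximum open.  */
    micro_benchmark_test_case_limit_iterations (tc, 1000, 0);
  }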

C Test Function: size_t micro_benchmark_test_case_max_iterations (micro_benchmark_test_case test)

Preconditions:

  1. Either:
    • test has been obtained through micro_benchmark_suite_register_test.
    • test has been obtained through micro_benchmark_suite_get_test.
    • test has been received as a parameter on a constraint definition and the scope of that function call is still active.
  2. The suite associated to test is still valid.

Effects:

  1. None.

Postconditions:

  1. The returned value is the maximum iterations performed on each execution of test, without taking into account the iterations skipped. A value of 0 means no limit on the number of iterations.
  2. If micro_benchmark_test_case_limit_iterations was called and the parameter max_iterations was not zero, the returned value is the parameter provided to that call.


C Test Function: size_t micro_benchmark_test_case_min_iterations (micro_benchmark_test_case test)

Preconditions:

  1. Either:
    • test has been obtained through micro_benchmark_suite_register_test.
    • test has been obtained through micro_benchmark_suite_get_test.
    • test has been received as a parameter on a constraint definition and the scope of that function call is still active.
  2. The suite associated to test is still valid.

Effects:

  1. None.

Postconditions:

  1. The returned value is the minimum iterations performed on each execution of test, without taking into account the iterations skipped.
  2. If micro_benchmark_test_case_limit_iterations was called, the returned value is the value provided by the parameter min_iterations to that call.


The following functions limit the number of iterations performed on each measurement sample taken, or query these limits.

C Test Function: void micro_benchmark_test_case_limit_samples (micro_benchmark_test_case test, size_t min_iterations, size_t max_iterations)

Preconditions:

  1. Either:
    • test has been obtained through micro_benchmark_suite_register_test.
    • test has been obtained through micro_benchmark_suite_get_test.
    • test has been received as a parameter on a constraint definition and the scope of that function call is still active.
  2. The suite associated to test is still valid.
  3. Either:
    • max_iterations is zero.
    • max_iterations is greater than or equal to min_iterations.

Effects:

  1. test sample limits are modified; previous values provided to this function stop taking effect.

Postconditions:

  1. The total number of iterations performed on each measurement sample interval during the next execution of test will be greater than or equal to min_iterations.
  2. The total number of iterations performed on each measurement sample interval during the next execution of test will be less than or equal to max_iterations, if max_iterations is a positive, non-zero value.


C Test Function: size_t micro_benchmark_test_case_min_sample_iterations (micro_benchmark_test_case test)

Preconditions:

  1. Either:
    • test has been obtained through micro_benchmark_suite_register_test.
    • test has been obtained through micro_benchmark_suite_get_test.
    • test has been received as a parameter on a constraint definition and the scope of that function call is still active.
  2. The suite associated to test is still valid.

Effects:

  1. None.

Postconditions:

  1. The returned value is the minimum number of iterations performed on each measurement sample taken during test execution.
  2. If micro_benchmark_test_case_limit_samples was called, the returned value is the value provided by the parameter min_iterations to that call.


C Test Function: size_t micro_benchmark_test_case_max_sample_iterations (micro_benchmark_test_case test)

Preconditions:

  1. Either:
    • test has been obtained through micro_benchmark_suite_register_test.
    • test has been obtained through micro_benchmark_suite_get_test.
    • test has been received as a parameter on a constraint definition and the scope of that function call is still active.
  2. The suite associated to test is still valid.

Effects:

  1. None.

Postconditions:

  1. The returned value is the maximum number of iterations performed on each measurement sample taken during test execution.
  2. If micro_benchmark_test_case_limit_samples was called, the returned value is the value provided by the parameter max_iterations to that call.


The following functions control the maximum time spent executing test code.

C Test Function: void micro_benchmark_test_case_set_max_time (micro_benchmark_test_case test, micro_benchmark_clock_time deadline)

Preconditions:

  1. Either:
    • test has been obtained through micro_benchmark_suite_register_test.
    • test has been obtained through micro_benchmark_suite_get_test.
    • test has been received as a parameter on a constraint definition and the scope of that function call is still active.
  2. The suite associated to test is still valid.

Effects:

  1. test is modified.

Postconditions:

  1. If deadline is non-zero, the test state will not request more iterations once the time spent on the test execution is greater or equal to the provided deadline and the minimum number of iterations has been reached.


C Test Function: micro_benchmark_clock_time micro_benchmark_test_case_get_max_time (micro_benchmark_test_case test)

Preconditions:

  1. Either:
    • test has been obtained through micro_benchmark_suite_register_test.
    • test has been obtained through micro_benchmark_suite_get_test.
    • test has been received as a parameter on a constraint definition and the scope of that function call is still active.
  2. The suite associated to test is still valid.

Effects:

  1. None.

Postconditions:

  1. If micro_benchmark_test_case_set_max_time was called, the returned value is equivalent to the last value provided to that function.


C Test Function: void micro_benchmark_test_case_set_chrono (micro_benchmark_test_case test, micro_benchmark_clock_type type, micro_benchmark_chronometer_provider provider)

Preconditions:

  1. Either:
    • test has been obtained through micro_benchmark_suite_register_test.
    • test has been obtained through micro_benchmark_suite_get_test.
    • test has been received as a parameter on a constraint definition and the scope of that function call is still active.
  2. The suite associated to test is still valid.
  3. provider is not MICRO_BENCHMARK_CHRONO_PROVIDER_USER_PROVIDED.
  4. provider is available for the system.
  5. type is available for provider.

Effects:

  1. test is modified.

Postconditions:

  1. The clock represented by type/provider will be used to perform time measurements during test execution.


C Test Function: void micro_benchmark_test_case_set_custom_chrono (micro_benchmark_test_case test, micro_benchmark_clock_type type, micro_benchmark_meter meter)

Preconditions:

  1. Either:
    • test has been obtained through micro_benchmark_suite_register_test.
    • test has been obtained through micro_benchmark_suite_get_test.
    • test has been received as a parameter on a constraint definition and the scope of that function call is still active.
  2. The suite associated to test is still valid.
  3. meter is a chronometer.
  4. meter implements type.

Effects:

  1. A disjoint copy of the object pointed by meter is stored into test and initialized with type.

Postconditions:

  1. An instance similar to meter will be used to perform time measurements during test execution.


C Test Function: void micro_benchmark_test_case_set_calculator (micro_benchmark_test_case test, micro_benchmark_custom_time_calculator calculator)

Set a custom statistical calculator for test.



C Data Type: micro_benchmark_custom_meter { micro_benchmark_meter meter, micro_benchmark_custom_sample_collector collector }

Custom meter definition.


C Test Function: void micro_benchmark_test_case_add_meter (micro_benchmark_test_case test, micro_benchmark_custom_meter meter)

Add a custom meter to test. The data pointer is provided by the user for test.




4.3 State Reference

Note: the lifetime of test state objects is semantically bound to the call where it is received as parameter. Once the scope of the function call ends, the object is not considered valid anymore3 and any usage of its value is undefined.

C Data Type: micro_benchmark_test_state

Opaque type, representation of the internal state of the test execution.


The following function is the main driver of the benchmark. It performs the data collection and determines if the test code must be executed again.

C Test State Function: bool micro_benchmark_state_keep_running (micro_benchmark_test_state state)

Preconditions:

  1. state is a valid object.

Effects:

  1. Performance measurements are performed and stored on state.

Postconditions:

  1. The returned value is true if the test should perform at least one iteration more according to its constraints.

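A test function is therefore a loop driven by this predicate. The following sketch also reads the sizes of the current execution (see micro_benchmark_state_get_size below); copy_with_stride is hypothetical:

  static void
  copy_test (micro_benchmark_test_state state)
  {
    size_t length = micro_benchmark_state_get_size (state, 0);
    size_t stride = micro_benchmark_state_get_size (state, 1);
    while (micro_benchmark_state_keep_running (state))
      copy_with_stride (length, stride);        /* hypothetical */
  }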

C Test State Function: size_t micro_benchmark_state_get_dimensions (micro_benchmark_test_state state)

Preconditions:

  1. state is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value is the number of dimensions provided to the test associated to state.


C Test State Function: size_t micro_benchmark_state_get_size (micro_benchmark_test_state state, size_t dimension)

Preconditions:

  1. state is a valid object.

Effects:

  1. None.

Postconditions:

  1. If micro_benchmark_test_case_add_dimension was called at least dimension+1 times, the returned value is one of the values provided to call number dimension, counting from zero, of micro_benchmark_test_case_add_dimension on the test associated to state.
  2. The value returned is zero when micro_benchmark_test_case_add_dimension was called less than dimension+1 times.


C Test State Function: void * micro_benchmark_state_get_data (micro_benchmark_test_state state)

Preconditions:

  1. state is a valid object.

Effects:

  1. None.

Postconditions:

  1. During the set up stage, the value returned is NULL unless micro_benchmark_test_case_set_data was called. In the latter case, the value of the object pointer provided to micro_benchmark_test_case_set_data is returned.
  2. During the tear down and test stages, the value returned is undefined unless micro_benchmark_test_case_set_data was called. In the latter case, the value of the object pointer provided to micro_benchmark_test_case_set_data is returned.


C Test State Function: void micro_benchmark_state_set_name (micro_benchmark_test_state state, const char *name)

Preconditions:

  1. state is a valid object.
  2. name must point to a valid, zero-ended array of characters.

Effects:

  1. A disjoint copy of the characters pointed by name is stored by state.

Postconditions:

  1. The last value provided to this function is used as the name in the test execution report of the execution associated to state.

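As the stored name is a disjoint copy, an automatic buffer can be used to build it. A sketch naming each execution after its first size (do_work is hypothetical):

  #include <stdio.h>

  static void
  sized_test (micro_benchmark_test_state state)
  {
    char name[32];
    snprintf (name, sizeof name, "copy-%zu",
              micro_benchmark_state_get_size (state, 0));
    micro_benchmark_state_set_name (state, name);
    while (micro_benchmark_state_keep_running (state))
      do_work ();                               /* hypothetical */
  }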

C Test State Function: const char * micro_benchmark_state_get_name (micro_benchmark_test_state state)

Preconditions:

  1. state is a valid object.

Effects:

  1. None.

Postconditions:

  1. The value returned points to a valid, zero-ended array of characters.
  2. If micro_benchmark_state_set_name was called, the value returned is equivalent to the value provided to micro_benchmark_state_set_name.
  3. If micro_benchmark_state_set_name was not called, the value returned is equivalent to the value provided to micro_benchmark_suite_register_test.



4.4 Report Reference

This section documents the report generated by a suite execution (see Suite Report Reference), the report of each test contained on the suite (see Test Report Reference) and the report of each execution of a test case (see Test Execution Report.) The last section contains useful functionality to print the data stored into reports with predefined formats (see Output Reference).


4.4.1 Suite Report Reference

C Data Type: micro_benchmark_report

Opaque representation of the information collected by the suite execution.

The lifetime of these objects is bound to the suite that generated them. Calling micro_benchmark_suite_release ends the lifetime of all reports associated to that suite.


C Report Function: const char * micro_benchmark_report_get_name (micro_benchmark_report report)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_suite_get_report.

Effects:

  1. None.

Postconditions:

  1. The value returned points to a valid, zero-ended array of characters.
  2. The value returned is equivalent to the value provided to the call of micro_benchmark_suite_create that returned the suite associated to report.


C Report Function: size_t micro_benchmark_report_get_number_of_tests (micro_benchmark_report report)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_suite_get_report.

Effects:

  1. None.

Postconditions:

  1. The returned value is the number of test cases executed by the suite associated to report.


C Report Function: micro_benchmark_test_report micro_benchmark_report_get_test_report (micro_benchmark_report report, size_t number)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_suite_get_report.
  2. number is less than the value returned by micro_benchmark_report_get_number_of_tests (report).

Effects:

  1. None.

Postconditions:

  1. The returned value is a valid object.
  2. The returned value contains the results of the test case number stored on report, counting from zero.



4.4.2 Test Report Reference

This section documents the reported data from a provided test case.

C Data Type: micro_benchmark_test_report

Opaque representation of the information collected from a test case.

The lifetime of these objects is bound to the suite that generated them. Calling micro_benchmark_suite_release ends the lifetime of all reports associated to that suite.


C Report Function: const char * micro_benchmark_test_report_get_name (micro_benchmark_test_report report)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_report_get_test_report.

Effects:

  1. None.

Postconditions:

  1. The value returned points to a valid, zero-ended array of characters.
  2. The value returned is equivalent to the value provided at the micro_benchmark_suite_register_test call.


C Report Function: size_t micro_benchmark_test_report_get_num_executions (micro_benchmark_test_report report)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_report_get_test_report.

Effects:

  1. None.

Postconditions:

  1. The returned value is the number of executions performed of the test case associated to report.


C Report Function: micro_benchmark_exec_report micro_benchmark_test_report_get_exec_report (micro_benchmark_test_report report, size_t number)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_report_get_test_report.
  2. number is less than the value returned by micro_benchmark_test_report_get_num_executions (report).

Effects:

  1. None.

Postconditions:

  1. The returned value is a valid object.
  2. The returned value contains the results of the test execution number stored on report, counting from zero.



4.4.3 Test Execution Report

This section documents the reported data from an execution of a provided test.

C Data Type: micro_benchmark_exec_report

Opaque representation of the information collected from a test case execution. Certain constraints, such as size dimensions, require several executions of the same test case.

The lifetime of these objects is bound to the suite that generated them. Calling micro_benchmark_suite_release ends the lifetime of all reports associated to that suite.


C Report Function: const char * micro_benchmark_exec_report_get_name (micro_benchmark_exec_report report)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_test_report_get_exec_report.

Effects:

  1. None.

Postconditions:

  1. The value returned points to a valid, zero-ended array of characters, representing the name of this test execution.


C Report Function: void micro_benchmark_exec_report_get_sizes (micro_benchmark_exec_report report, const size_t **sizes, size_t *dimensions)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_test_report_get_exec_report.
  2. sizes must point to a valid object.
  3. dimensions must point to a valid object.

Effects:

  1. None.

Postconditions:

  1. The object pointed by dimensions contains the size of the array pointed by sizes, or zero.
  2. If the object pointed by dimensions is zero, the value of the object pointed by sizes is not changed. When it is greater than zero, the object pointed by sizes points to an array of objects whose values are equivalent to the ones provided for the test execution associated to report.


C Report Function: size_t micro_benchmark_exec_report_num_time_samples (micro_benchmark_exec_report report)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_test_report_get_exec_report.

Effects:

  1. None.

Postconditions:

  1. The returned value is the number of time samples stored on report.


C Report Function: micro_benchmark_time_sample micro_benchmark_exec_report_get_time_sample (micro_benchmark_exec_report report, size_t n)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_test_report_get_exec_report.
  2. n is less than the value returned by micro_benchmark_exec_report_num_time_samples (report).

Effects:

  1. None.

Postconditions:

  1. The value returned is the time sample n collected by the test case execution associated to report.


The following functions access the aggregated statistical data of the test execution report.

C Report Function: size_t micro_benchmark_exec_report_get_iterations (micro_benchmark_exec_report report)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_test_report_get_exec_report.

Effects:

  1. None.

Postconditions:

  1. The value returned is the number of iterations sampled by the test case execution associated to report.


C Report Function: size_t micro_benchmark_exec_report_total_iterations (micro_benchmark_exec_report report)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_test_report_get_exec_report.

Effects:

  1. None.

Postconditions:

  1. The value returned is the total number of iterations performed by the test case execution associated to report.


C Report Function: micro_benchmark_clock_time micro_benchmark_exec_report_total_time (micro_benchmark_exec_report report)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_test_report_get_exec_report.

Effects:

  1. None.

Postconditions:

  1. The value returned is the total time, including library overhead, spent on the test case execution associated to report.


C Report Function: size_t micro_benchmark_exec_report_total_samples (micro_benchmark_exec_report report)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_test_report_get_exec_report.

Effects:

  1. None.

Postconditions:

  1. The value returned is the total number of samples collected by the test case execution associated to report.


C Report Function: size_t micro_benchmark_exec_report_used_samples (micro_benchmark_exec_report report)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_test_report_get_exec_report.

Effects:

  1. None.

Postconditions:

  1. The value returned is the number of samples used for the statistical data aggregation of the test case execution associated to report.


C Report Function: micro_benchmark_stats_value micro_benchmark_exec_report_iteration_time (micro_benchmark_exec_report report)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_test_report_get_exec_report.

Effects:

  1. None.

Postconditions:

  1. The value returned contains the statistical values calculated with regard to the time elapsed per iteration of the test case execution associated to report.


C Report Function: micro_benchmark_stats_value micro_benchmark_exec_report_sample_time (micro_benchmark_exec_report report)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_test_report_get_exec_report.

Effects:

  1. None.

Postconditions:

  1. The value returned contains the statistical values calculated with regard to the sample time of the test case execution associated to report.


C Report Function: micro_benchmark_stats_value micro_benchmark_exec_report_sample_iterations (micro_benchmark_exec_report report)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_test_report_get_exec_report.

Effects:

  1. None.

Postconditions:

  1. The value returned contains the statistical values calculated with regard to the iterations per sample of the test case execution associated to report.

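Putting the accessors together, the following sketch walks a whole suite report and prints the mean iteration time of every execution. The mean and std_deviation fields of micro_benchmark_stats_value are documented in Data Collection Reference:

  #include <stdio.h>

  static void
  print_iteration_means (micro_benchmark_report report)
  {
    size_t tests = micro_benchmark_report_get_number_of_tests (report);
    for (size_t i = 0; i < tests; ++i)
      {
        micro_benchmark_test_report tr =
          micro_benchmark_report_get_test_report (report, i);
        size_t execs = micro_benchmark_test_report_get_num_executions (tr);
        for (size_t j = 0; j < execs; ++j)
          {
            micro_benchmark_exec_report er =
              micro_benchmark_test_report_get_exec_report (tr, j);
            micro_benchmark_stats_value value =
              micro_benchmark_exec_report_iteration_time (er);
            printf ("%s/%s: mean %f (dev %f)\n",
                    micro_benchmark_test_report_get_name (tr),
                    micro_benchmark_exec_report_get_name (er),
                    value.mean, value.std_deviation);
          }
      }
  }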

The report can contain additional data collected during the test execution (see Custom Data Collection Reference.)

Note: This API is still on a very early stage, expect changes.

C Data Type: micro_benchmark_report_extra_data { const char *name, size_t size, const char **keys, const char **values }

Additional data collected. name identifies the data. The lengths of keys and values must be greater than or equal to size.


C Function Type: micro_benchmark_report_extractor_fun

Extract a printable representation of the collected data. Its prototype is micro_benchmark_report_extra_data (*) (void *).


C Report Function: size_t micro_benchmark_exec_report_number_of_meters (micro_benchmark_exec_report report)

Note: Not specified yet.

Return the number of custom meters whose data has been collected onto report.



C Report Function: size_t micro_benchmark_exec_report_meter_num_samples (micro_benchmark_exec_report report, size_t meter)

Note: Not specified yet.

Return the number of samples collected by meter onto report. meter must be less than the value returned by micro_benchmark_exec_report_number_of_meters.



C Report Function: micro_benchmark_stats_sample_type micro_benchmark_exec_report_meter_sample_type (micro_benchmark_exec_report report, size_t meter)

Note: Not specified yet.

Return the type of samples collected by meter onto report. meter must be less than the value returned by micro_benchmark_exec_report_number_of_meters.



C Report Function: micro_benchmark_stats_generic_sample micro_benchmark_exec_report_meter_get_sample (micro_benchmark_exec_report report, size_t meter, size_t pos)

Note: Not specified yet.

Return the sample pos collected by meter onto report. meter must be less than the value returned by micro_benchmark_exec_report_number_of_meters (report) and pos must be less than the value returned by micro_benchmark_exec_report_meter_num_samples.



C Report Function: micro_benchmark_report_extra_data micro_benchmark_exec_report_get_extra_data (micro_benchmark_exec_report report, size_t pos)

Note: Not specified yet.

Extract the custom data stored on report from meter pos.




4.4.4 Output Reference

This section documents the output of the reported data.

C Data Type: micro_benchmark_output_type { MICRO_BENCHMARK_CONSOLE_OUTPUT, MICRO_BENCHMARK_TEXT_OUTPUT, MICRO_BENCHMARK_LISP_OUTPUT }

Predefined output formats:

  • See Console Output for a more detailed description of the format produced by MICRO_BENCHMARK_CONSOLE_OUTPUT.
  • See Text Output for a more detailed description of the format produced by MICRO_BENCHMARK_TEXT_OUTPUT.
  • See Lisp Output for a more detailed description of the format produced by MICRO_BENCHMARK_LISP_OUTPUT.

C Data Type: micro_benchmark_output_stat { MICRO_BENCHMARK_STAT_NONE, MICRO_BENCHMARK_STAT_MEAN, MICRO_BENCHMARK_STAT_VARIANCE, MICRO_BENCHMARK_STAT_STD_DEVIATION, MICRO_BENCHMARK_STAT_BASIC, MICRO_BENCHMARK_STAT_ALL }

Provided statistical values.

  • The values can be joined with | to select several of them.
  • The value BASIC is equivalent to MEAN | STD_DEVIATION.
  • The value ALL is equivalent to MEAN | VARIANCE | STD_DEVIATION.

C Data Type: micro_benchmark_output_values { bool self_test, bool size_constraints, bool total_time, bool total_iterations, bool iterations, bool total_samples, bool samples, bool extra_data, micro_benchmark_output_stat iteration_times, micro_benchmark_output_stat sample_times, micro_benchmark_output_stat batch_stats }

Each field determines the output of a value of the report.


C Output Function: micro_benchmark_output_values micro_benchmark_get_default_output_values ()

Preconditions:

  1. No additional precondition.

Effects:

  1. None.

Postconditions:

  1. If micro_benchmark_set_default_output_values was called, the returned value is equivalent to the parameter provided to the last call in effect of micro_benchmark_set_default_output_values.


C Output Function: void micro_benchmark_set_default_output_values (micro_benchmark_output_values new_values)

Preconditions:

  1. No additional precondition.

Effects:

  1. Change default output values.

Postconditions:

  1. The output functions that do not accept a parameter of type micro_benchmark_output_values will use the values provided as new_values in the last call of this function.


C Output Function: void micro_benchmark_write_report (micro_benchmark_report report, FILE *out, micro_benchmark_output_type type)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_suite_get_report.
  2. out is ready to receive an indeterminate number of characters.

Effects:

  1. Write the values from report determined by micro_benchmark_get_default_output_values on out with output format type.

Postconditions:

  1. out has successfully received the formatted data stored on report.


C Output Function: void micro_benchmark_write_custom_report (micro_benchmark_report report, FILE *out, micro_benchmark_output_type type, const micro_benchmark_output_values *values)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_suite_get_report.
  2. out is ready to receive an indeterminate number of characters.

Effects:

  1. Write the values from report determined by values on out with the output format type.

Postconditions:

  1. out has successfully received the formatted data stored on report.

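For instance, a sketch selecting all the statistics for the iteration times and writing the report in the text format; the field names follow micro_benchmark_output_values as declared above:

  #include <stdio.h>

  static void
  write_text_report (micro_benchmark_report report, FILE *out)
  {
    micro_benchmark_output_values values =
      micro_benchmark_get_default_output_values ();
    values.iteration_times = MICRO_BENCHMARK_STAT_ALL;
    values.total_time = true;
    micro_benchmark_write_custom_report (report, out,
                                         MICRO_BENCHMARK_TEXT_OUTPUT,
                                         &values);
  }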

C Output Function: void micro_benchmark_print_report (micro_benchmark_report report)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_suite_get_report.
  2. The standard output is ready to receive an indeterminate number of characters.

Effects:

  1. Write the values from report determined by micro_benchmark_get_default_output_values on the standard output with the console format.

Postconditions:

  1. The standard output has successfully received the formatted data stored on report.


C Output Function: void micro_benchmark_print_custom_report (micro_benchmark_report report, const micro_benchmark_output_values *values)

Preconditions:

  1. report must have been obtained as the return value of a call to micro_benchmark_suite_get_report.
  2. The standard output is ready to receive an indeterminate number of characters.

Effects:

  1. Write the values from report determined by values on the standard output with the console output format.

Postconditions:

  1. The standard output has successfully received the formatted data stored on report.



4.5 Data Collection Reference

This section details the data collected and reported by MicroBenchmark.

C Data Type: micro_benchmark_stats_sample_type { MICRO_BENCHMARK_SAMPLE_INT64, MICRO_BENCHMARK_SAMPLE_UINT64, MICRO_BENCHMARK_SAMPLE_DOUBLE, MICRO_BENCHMARK_SAMPLE_TIME, MICRO_BENCHMARK_SAMPLE_SMALL_BUFFER, MICRO_BENCHMARK_SAMPLE_USER_PROVIDED }

Type of data collected.


C Data Type: micro_benchmark_stats_meter_sample { int64_t integer, uint64_t modular, double floating_point, micro_benchmark_clock_time time, unsigned char buffer[MICRO_BENCHMARK_SAMPLE_SB_SIZE], void *user_provided }

Union of the data collection types.


C Data Type: micro_benchmark_stats_generic_sample_data { bool discarded, size_t iterations, micro_benchmark_stats_meter_sample value }

Generic data sampled on a test execution.


C Data Type: micro_benchmark_stats_generic_sample

Pointer to a collected sample.


C Data Type: micro_benchmark_stats_generic_samples

Pointer to an array of samples. It is effectively the same type as micro_benchmark_stats_generic_sample and it’s meant for documentation purposes only.



4.5.1 Custom Data Collection Reference

C Function Type: micro_benchmark_custom_sample_create_data_fun

Its prototype is void *(*) (void).


C Function Type: micro_benchmark_custom_sample_release_data_fun

Its prototype is void (*) (void *).


C Function Type: micro_benchmark_custom_sample_collector_fun

Its prototype is void (*) (void *, size_t, micro_benchmark_stats_generic_samples).


C Data Type: micro_benchmark_custom_sample_collector { micro_benchmark_custom_sample_create_data_fun create_data, micro_benchmark_custom_sample_release_data_fun release_data, micro_benchmark_custom_sample_collector_fun parse_samples, micro_benchmark_report_extractor_fun get_data }

Custom data collection.


C Function Type: micro_benchmark_custom_time_calculator

Its prototype is micro_benchmark_time_stats_values (*) (size_t, micro_benchmark_time_samples).



4.5.2 Meter Reference

A meter is a device that produces measurement samples.

Currently, the only meters provided by the library are chronometers (see Chronometer Reference.)

C Data Type: micro_benchmark_meter_data { void *ptr, const char *name, micro_benchmark_stats_sample_type sample_type, micro_benchmark_stats_meter_sample min_resolution, micro_benchmark_stats_meter_sample max_resolution }

Data of a meter implementation.

ptr is the instance data that is provided to the meter functions. sample_type represents the active union field of the samples produced by the meter, as well as of the min_resolution and max_resolution fields.


C Function Type: micro_benchmark_meter_init_fun

Function to initialize a meter instance. Its prototype is void (*) (micro_benchmark_meter_data *, const void *).


C Function Type: micro_benchmark_meter_cleanup_fun

Function to release a meter instance. Its prototype is void (*) (const micro_benchmark_meter_data *).


C Function Type: micro_benchmark_meter_start_fun

Start the measuring process of a meter instance. Its prototype is void (*) (void *).


C Function Type: micro_benchmark_meter_stop_fun

Stop the measuring process of a meter instance. Its prototype is void (*) (void *).


C Function Type: micro_benchmark_meter_restart_fun

Restart the measuring process of a meter instance. Its prototype is void (*) (void *).


C Function Type: micro_benchmark_meter_get_sample_fun

Extract the measure performed by the meter. Its prototype is micro_benchmark_stats_meter_sample (*) (void *).


C Data Type: micro_benchmark_meter_definition { micro_benchmark_meter_data data, micro_benchmark_meter_init_fun init, micro_benchmark_meter_cleanup_fun cleanup, micro_benchmark_meter_start_fun start, micro_benchmark_meter_stop_fun stop, micro_benchmark_meter_restart_fun restart, micro_benchmark_meter_get_sample_fun get_sample }

Meter definition.


C Data Type: micro_benchmark_meter

Pointer to a constant micro_benchmark_meter_definition object.

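The following sketch defines a complete meter on top of the standard clock function, producing MICRO_BENCHMARK_SAMPLE_UINT64 samples; it uses a single static instance, which is enough for an illustration. The start, stop, restart and get_sample semantics follow the specifications below:

  #include <stdint.h>
  #include <time.h>

  struct clock_meter_state
  {
    clock_t begin;
    uint64_t accumulated;
  };

  static struct clock_meter_state instance;

  static void
  clock_meter_init (micro_benchmark_meter_data *data, const void *extra)
  {
    (void) extra;
    instance.accumulated = 0;
    data->ptr = &instance;
  }

  static void
  clock_meter_cleanup (const micro_benchmark_meter_data *data)
  {
    (void) data;                        /* nothing to release */
  }

  static void
  clock_meter_start (void *ptr)
  {
    struct clock_meter_state *s = ptr;
    s->accumulated = 0;                 /* fresh starting point */
    s->begin = clock ();
  }

  static void
  clock_meter_stop (void *ptr)
  {
    struct clock_meter_state *s = ptr;
    s->accumulated += (uint64_t) (clock () - s->begin);
  }

  static void
  clock_meter_restart (void *ptr)
  {
    struct clock_meter_state *s = ptr;
    s->begin = clock ();                /* resume, keep the accumulated value */
  }

  static micro_benchmark_stats_meter_sample
  clock_meter_get_sample (void *ptr)
  {
    struct clock_meter_state *s = ptr;
    micro_benchmark_stats_meter_sample sample = { .modular = s->accumulated };
    s->accumulated = 0;                 /* get_sample resets the measurement */
    return sample;
  }

  static micro_benchmark_meter_definition clock_meter =
    {
      .data = { .name = "clock-ticks",
                .sample_type = MICRO_BENCHMARK_SAMPLE_UINT64 },
      .init = clock_meter_init, .cleanup = clock_meter_cleanup,
      .start = clock_meter_start, .stop = clock_meter_stop,
      .restart = clock_meter_restart, .get_sample = clock_meter_get_sample,
    };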

C Meter Function: void micro_benchmark_stats_meter_init (micro_benchmark_meter_definition *meter)

Preconditions:

  1. meter points to a valid object.
  2. meter->init points to a valid function.
  3. The value meter is not valid. This means one of the following conditions:
    • No value equivalent to meter has been provided as a parameter to micro_benchmark_stats_meter_init or micro_benchmark_stats_meter_init_with_data.
    • Each call to micro_benchmark_stats_meter_init or micro_benchmark_stats_meter_init_with_data with a value equivalent to meter as its parameter has been followed by a call to micro_benchmark_stats_meter_cleanup with a value equivalent to meter as its parameter.

Effects:

  1. Call meter->init (meter->data, NULL).

Postconditions:

  1. The value meter is valid.


C Meter Function: void micro_benchmark_stats_meter_init_with_data (micro_benchmark_meter_definition *meter, const void *extra_data)

Preconditions:

  1. meter points to a valid object.
  2. meter->init points to a valid function.
  3. The value meter is not valid. This means one of the following conditions:
    • The value meter has not been provided as a parameter to micro_benchmark_stats_meter_init or micro_benchmark_stats_meter_init_with_data.
    • Each call to micro_benchmark_stats_meter_init or micro_benchmark_stats_meter_init_with_data with the value of meter as its parameter has been followed by a call to micro_benchmark_stats_meter_cleanup with the value of meter as its parameter.

Effects:

  1. Call meter->init (meter->data, extra_data).

Postconditions:

  1. The value meter is valid.


C Meter Function: void micro_benchmark_stats_meter_cleanup (micro_benchmark_meter meter)

Preconditions:

  1. meter points to a valid object.
  2. meter->cleanup points to a valid function.
  3. The value meter is valid.

Effects:

  1. Call meter->cleanup.

Postconditions:

  1. The value meter is not valid.


C Meter Function: void micro_benchmark_stats_meter_start (micro_benchmark_meter meter)

Preconditions:

  1. meter points to a valid object.
  2. meter->start points to a valid function.
  3. The value meter is valid.
  4. meter is not measuring.

Effects:

  1. Call meter->start.

Postconditions:

  1. meter is running.
  2. meter is measuring.
  3. The measurements performed by meter take this call as its starting point.


C Meter Function: void micro_benchmark_stats_meter_stop (micro_benchmark_meter meter)

Preconditions:

  1. meter points to a valid object.
  2. meter->stop points to a valid function.
  3. The value meter is valid.
  4. meter is running.
  5. meter is measuring.

Effects:

  1. Call meter->stop.

Postconditions:

  1. meter is not measuring.
  2. The measurements performed by meter take this call as its end point.


C Meter Function: void micro_benchmark_stats_meter_restart (micro_benchmark_meter meter)

Preconditions:

  1. meter points to a valid object.
  2. meter->restart points to a valid function.
  3. The value meter is valid.
  4. meter is running.
  5. meter is not measuring.

Effects:

  1. Call meter->restart.

Postconditions:

  1. meter is measuring.
  2. The measurements performed by meter take this call as another starting point.


C Meter Function: micro_benchmark_stats_meter_sample micro_benchmark_stats_meter_get_sample (micro_benchmark_meter meter)

Preconditions:

  1. meter points to a valid object.
  2. The value meter is valid.
  3. meter is running.
  4. meter is not measuring.

Effects:

  1. Call meter->get_sample.

Postconditions:

  1. The returned value contains the accumulated measurement performed by the value meter.
  2. The accumulated measurement of meter is reset to its zero point, equivalent to a call to micro_benchmark_stats_meter_start.

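Driving the clock_meter instance from the previous sketch by hand (work is hypothetical):

  #include <stdio.h>

  static void
  drive_meter (void)
  {
    micro_benchmark_stats_meter_init (&clock_meter);
    micro_benchmark_stats_meter_start (&clock_meter);
    work ();                            /* hypothetical code to measure */
    micro_benchmark_stats_meter_stop (&clock_meter);
    micro_benchmark_stats_meter_sample sample =
      micro_benchmark_stats_meter_get_sample (&clock_meter);
    printf ("%llu ticks\n", (unsigned long long) sample.modular);
    micro_benchmark_stats_meter_cleanup (&clock_meter);
  }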

C Meter Function: const char * micro_benchmark_stats_meter_get_name (micro_benchmark_meter meter)

Preconditions:

  1. meter points to a valid object.
  2. The value meter is valid.

Effects:

  1. None.

Postconditions:

  1. The returned value is meter->data.name;


C Meter Function: micro_benchmark_stats_sample_type micro_benchmark_stats_meter_get_sample_type (micro_benchmark_meter meter)

Preconditions:

  1. meter points to a valid object.
  2. The value meter is valid.

Effects:

  1. None.

Postconditions:

  1. The returned value is meter->data.sample_type;


C Meter Function: micro_benchmark_stats_meter_sample micro_benchmark_stats_meter_get_min_resolution (micro_benchmark_meter meter)

Preconditions:

  1. meter points to a valid object.
  2. The value meter is valid.

Effects:

  1. None.

Postconditions:

  1. The returned value is meter->data.min_resolution;


C Meter Function: micro_benchmark_stats_meter_sample micro_benchmark_stats_meter_get_max_resolution (micro_benchmark_meter meter)

Preconditions:

  1. meter points to a valid object.
  2. meter->stop points to a valid function.
  3. The value meter is valid.

Effects:

  1. None.

Postconditions:

  1. The returned value is meter->data.max_resolution;



4.5.3 Time Reference

C Data Type: micro_benchmark_clock_time { int32_t seconds, int32_t nanoseconds }

Structure used by the framework for time measuring.

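A trivial helper converting this structure to seconds, expressed as a double:

  static double
  clock_time_to_seconds (micro_benchmark_clock_time t)
  {
    return (double) t.seconds + (double) t.nanoseconds * 1e-9;
  }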

C Data Type: micro_benchmark_clock_type { MICRO_BENCHMARK_CLOCK_REALTIME, MICRO_BENCHMARK_CLOCK_MONOTONIC, MICRO_BENCHMARK_CLOCK_PROCESS, MICRO_BENCHMARK_CLOCK_THREAD }

Clock type enumeration (see Predefined Clocks.)


C Data Type: micro_benchmark_time_sample_ { bool discarded, size_t iterations, micro_benchmark_clock_time elapsed }

Time sample value. This data type contains, at least, the specified values discarded, iterations and elapsed.


C Data Type: micro_benchmark_time_sample

Pointer to a constant value of type micro_benchmark_time_sample_.


C Data Type: micro_benchmark_time_samples

Pointer to an array of constant micro_benchmark_time_sample_ values. This type is equivalent to micro_benchmark_time_sample and it is used only for documentation purposes.


C Data Type: micro_benchmark_stats_unit { MICRO_BENCHMARK_STATS_UNIT_NONE, MICRO_BENCHMARK_STATS_ITERATIONS, MICRO_BENCHMARK_STATS_TIME_MIN, MICRO_BENCHMARK_STATS_TIME_S, MICRO_BENCHMARK_STATS_TIME_MS, MICRO_BENCHMARK_STATS_TIME_US, MICRO_BENCHMARK_STATS_TIME_NS }

Unit of measure.


C Data Type: micro_benchmark_stats_value { micro_benchmark_stats_unit unit, double mean, double variance, double std_deviation }

Aggregated statistical values.


C Data Type: micro_benchmark_time_stats_values { size_t total_samples, size_t samples, size_t iterations, micro_benchmark_stats_value iteration_time, micro_benchmark_stats_value sample_time, micro_benchmark_stats_value sample_iterations }

Information sampled during a test case execution.

Note: this type is experimental.



4.5.4 Chronometer Reference

A chronometer is a specialization of a meter that measures time duration.

C Data Type: micro_benchmark_chronometer_provider { MICRO_BENCHMARK_CHRONO_PROVIDER_DEFAULT, MICRO_BENCHMARK_CHRONO_PROVIDER_TIME_T, MICRO_BENCHMARK_CHRONO_PROVIDER_CLOCK_T, MICRO_BENCHMARK_CHRONO_PROVIDER_GETTIMEOFDAY, MICRO_BENCHMARK_CHRONO_PROVIDER_CLOCK_GETTIME, MICRO_BENCHMARK_CHRONO_PROVIDER_USER_PROVIDED }

Chronometer type (see Predefined Chronometers.)


C Chronometer Function: micro_benchmark_meter micro_benchmark_chronometer_create_default ()

Preconditions:

  1. None.

Effects:

  1. Allocate and initialize a new chronometer.

Postconditions:

  1. The returned value is a valid chronometer.
  2. If micro_benchmark_chronometer_set_default was called, the type of the returned value is equivalent to the last value provided to that call.


C Chronometer Function: micro_benchmark_meter micro_benchmark_chronometer_create (micro_benchmark_clock_type clock_type, micro_benchmark_chronometer_provider provider)

Preconditions:

  1. clock_type is implemented by this system.
  2. provider is not equal to MICRO_BENCHMARK_CHRONO_PROVIDER_USER_PROVIDED.
  3. provider support is implemented by the library.
  4. clock_type is implemented by provider.

Effects:

  1. Allocate and initialize a new chronometer of type provider using the clock type clock_type.

Postconditions:

  1. The returned value is a valid chronometer.

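A sketch measuring one run of a hypothetical work function, assuming the monotonic clock and the clock_gettime provider are available on the system, and that chronometers produce MICRO_BENCHMARK_SAMPLE_TIME samples:

  #include <stdio.h>

  static void
  time_once (void)
  {
    micro_benchmark_meter chrono =
      micro_benchmark_chronometer_create
      (MICRO_BENCHMARK_CLOCK_MONOTONIC,
       MICRO_BENCHMARK_CHRONO_PROVIDER_CLOCK_GETTIME);
    micro_benchmark_stats_meter_start (chrono);
    work ();                            /* hypothetical */
    micro_benchmark_stats_meter_stop (chrono);
    micro_benchmark_stats_meter_sample sample =
      micro_benchmark_stats_meter_get_sample (chrono);
    printf ("%ld.%09ld s\n",
            (long) sample.time.seconds, (long) sample.time.nanoseconds);
    micro_benchmark_chronometer_release (chrono);
  }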

C Chronometer Function: micro_benchmark_meter micro_benchmark_chronometer_from_meter (micro_benchmark_clock_type clock_type, micro_benchmark_meter base)

Preconditions:

  1. clock_type is implemented by this system.
  2. Either:
    • base points to a valid chronometer.
    • base points to an object of type micro_benchmark_meter_definition where at least all the function fields have been filled with a valid function.
  3. clock_type is implemented by base.

Effects:

  1. Allocate a new chronometer ptr.
  2. Copy the data pointed by base to ptr.
  3. Initialize the allocated chronometer with micro_benchmark_stats_meter_init_with_data (ptr, clock_type).

Postconditions:

  1. The returned value is a valid chronometer.


C Chronometer Function: void micro_benchmark_chronometer_release (micro_benchmark_meter meter)

Preconditions:

  1. meter is a valid chronometer.

Effects:

  1. Cleanup and release meter.

Postconditions:

  1. The value meter is not valid.


C Chronometer Function: void micro_benchmark_chronometer_set_default (micro_benchmark_meter meter)

Preconditions:

  1. Either:
    • meter points to a valid chronometer.
    • meter points to an object of type micro_benchmark_meter_definition where at least all the function fields have been filled with a valid function.

Effects:

  1. A copy of meter is stored.

Postconditions:

  1. The chronometers created providing the enumeration constant MICRO_BENCHMARK_CHRONO_PROVIDER_DEFAULT to micro_benchmark_chronometer_create will have the same effective type as the meter provided to the last call of this function.


C Chronometer Function: micro_benchmark_meter micro_benchmark_chronometer_get_default ()

Preconditions:

  1. None.

Effects:

  1. None.

Postconditions:

  1. The returned value is the effective type of the chronometers created providing the enumeration constant MICRO_BENCHMARK_CHRONO_PROVIDER_DEFAULT.
  2. If micro_benchmark_chronometer_set_default was called, the returned value is equivalent to the last value provided to that call.



4.6 Timer Reference

Timers are devices triggered after some time.

C Data Type: micro_benchmark_timer_data { void *ptr, const char *name, micro_benchmark_clock_type type, micro_benchmark_clock_time resolution }

Timer instance data.


C Function Type: micro_benchmark_timer_init_fun

Its prototype is void (*) (micro_benchmark_timer_data *, micro_benchmark_clock_type, const void *).


C Function Type: micro_benchmark_timer_cleanup_fun

Its prototype is void (*) (const micro_benchmark_timer_data *).


C Function Type: micro_benchmark_timer_start_fun

Its prototype is void (*) (void *, micro_benchmark_clock_time).


C Function Type: micro_benchmark_timer_stop_fun

Its prototype is void (*) (void *).


C Function Type: micro_benchmark_timer_restart_fun

Its prototype is void (*) (void *).


C Function Type: micro_benchmark_timer_running_fun

Its prototype is bool (*) (void *).


C Function Type: micro_benchmark_timer_elapsed_fun

Its prototype is micro_benchmark_clock_time (*) (void *).


C Data Type: micro_benchmark_timer_definition { micro_benchmark_timer_data data, micro_benchmark_timer_init_fun init, micro_benchmark_timer_cleanup_fun release, micro_benchmark_timer_start_fun start, micro_benchmark_timer_stop_fun stop, micro_benchmark_timer_restart_fun restart, micro_benchmark_timer_running_fun is_running, micro_benchmark_timer_elapsed_fun elapsed }

Timer definition.


C Data Type: micro_benchmark_timer

Pointer to a timer instance.


C Timer Function: void micro_benchmark_timer_init (micro_benchmark_timer_definition *timer, micro_benchmark_clock_type type)

Preconditions:

  1. timer points to a valid object.
  2. timer->init points to a valid function.
  3. The value timer is not valid. This means one of the following conditions:
    • No value equivalent to timer has been provided as a parameter to micro_benchmark_timer_init or micro_benchmark_timer_init_with_extra.
    • Each call to micro_benchmark_timer_init or micro_benchmark_timer_init_with_extra with a value equivalent to timer as its parameter has been followed by a call to micro_benchmark_timer_cleanup with a value equivalent to timer as its parameter.

Effects:

  1. Call timer->init (timer->data, type, NULL).

Postconditions:

  1. The value timer is valid.
  2. The value timer is not counting.


C Timer Function: void micro_benchmark_timer_init_with_extra (micro_benchmark_timer_definition *timer, micro_benchmark_clock_type type, const void *extra)

Preconditions:

  1. timer points to a valid object.
  2. timer->init points to a valid function.
  3. The value timer is not valid. This means one of the following conditions:
    • No value equivalent to timer has been provided as a parameter to micro_benchmark_timer_init or micro_benchmark_timer_init_with_extra.
    • Each call to micro_benchmark_timer_init or micro_benchmark_timer_init_with_extra with a value equivalent to timer as its parameter has been followed by a call to micro_benchmark_timer_cleanup with a value equivalent to timer as its parameter.

Effects:

  1. Call timer->init (timer->data, type, extra).

Postconditions:

  1. The value timer is valid.
  2. The value timer is not counting.


C Timer Function: void micro_benchmark_timer_cleanup (micro_benchmark_timer timer)

Preconditions:

  1. timer points to a valid object.
  2. timer->cleanup points to a valid function.
  3. The value timer is valid.

Effects:

  1. Call timer->cleanup (timer->data).

Postconditions:

  1. The value timer is not valid.


C Timer Function: micro_benchmark_clock_type micro_benchmark_timer_get_type (micro_benchmark_timer timer)

Preconditions:

  1. timer points to a valid object.
  2. The value timer is valid.

Effects:

  1. None.

Postconditions:

  1. The returned value is timer->data.type.


C Timer Function: const char * micro_benchmark_timer_get_name (micro_benchmark_timer timer)

Preconditions:

  1. timer points to a valid object.
  2. The value timer is valid.

Effects:

  1. None.

Postconditions:

  1. The returned value is timer->data.name.


C Timer Function: micro_benchmark_clock_time micro_benchmark_timer_get_resolution (micro_benchmark_timer timer)

Preconditions:

  1. timer points to a valid object.
  2. The value timer is valid.

Effects:

  1. None.

Postconditions:

  1. The returned value is timer->data.resolution.


C Timer Function: void micro_benchmark_timer_start (micro_benchmark_timer timer, micro_benchmark_clock_time deadline)

Preconditions:

  1. timer points to a valid object.
  2. timer->start points to a valid function.
  3. The value timer is valid.
  4. deadline is non-zero.

Effects:

  1. The timer starts counting time up to deadline.

Postconditions:

  1. timer is running.


C Timer Function: void micro_benchmark_timer_stop (micro_benchmark_timer timer)

Preconditions:

  1. timer points to a valid object.
  2. timer->stop points to a valid function.
  3. The value timer is valid.
  4. timer has been started.

Effects:

  1. timer stops counting time.

Postconditions:

  1. The running status of timer is fixed up to the next call to micro_benchmark_timer_restart or micro_benchmark_timer_start.


C Timer Function: void micro_benchmark_timer_restart (micro_benchmark_timer timer)

Preconditions:

  1. timer points to a valid object.
  2. timer->restart points to a valid function.
  3. The value timer is valid.
  4. timer has been started.

Effects:

  1. The timer starts counting time again up to the deadline provided by micro_benchmark_timer_start.

Postconditions:

  1. timer is running if it was running when the last call to micro_benchmark_timer_stop was performed.


C Timer Function: bool micro_benchmark_timer_is_running (micro_benchmark_timer timer)

Preconditions:

  1. timer points to a valid object.
  2. timer->is_running points to a valid function.
  3. The value timer is valid.
  4. timer was started.

Effects:

  1. None.

Postconditions:

  1. The returned value is false if the deadline provided to this timer by micro_benchmark_timer_start has not expired.
  2. The returned value is true otherwise.


C Timer Function: micro_benchmark_clock_time micro_benchmark_timer_elapsed (micro_benchmark_timer timer)

Preconditions:

  1. timer points to a valid object.
  2. timer->elapsed points to a valid function.
  3. The value timer is valid.
  4. timer was started.

Effects:

  1. None.

Postconditions:

  1. The returned value is the accumulated time elapsed between the calls to micro_benchmark_timer_start and micro_benchmark_timer_stop, plus the time elapsed between any following pairs of calls to micro_benchmark_timer_restart and micro_benchmark_timer_stop, up to the deadline.
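
As an example, the following sketch drives a timer through a complete cycle. The header name is an assumption; the deadline is taken from the timer resolution only to avoid building a micro_benchmark_clock_time value by hand:

  #include <micro_benchmark.h>  /* Assumed header name.  */

  void
  example (void)
  {
    micro_benchmark_timer t = micro_benchmark_timer_create_default ();
    /* Any non-zero deadline works; the timer resolution is used here
       so no micro_benchmark_clock_time has to be constructed.  */
    micro_benchmark_clock_time deadline
      = micro_benchmark_timer_get_resolution (t);
    micro_benchmark_timer_start (t, deadline);
    /* Per its postconditions, is_running returns true once the
       deadline provided to micro_benchmark_timer_start has expired.  */
    while (!micro_benchmark_timer_is_running (t))
      ;
    micro_benchmark_timer_stop (t);
    micro_benchmark_clock_time elapsed = micro_benchmark_timer_elapsed (t);
    (void) elapsed;  /* Accumulated time, up to the deadline.  */
    micro_benchmark_timer_release (t);
  }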


C Timer Function: void micro_benchmark_timer_set_default (micro_benchmark_timer timer)

Preconditions:

  1. Either:
    • timer points to a valid timer.
    • timer points to an object of type micro_benchmark_timer_definition where at least all the function fields have been filled with a valid function.

Effects:

  1. A copy of timer is stored.

Postconditions:

  1. The timers used by the library will have a type equivalent to timer.


C Timer Function: void micro_benchmark_timer_set_default_provider (micro_benchmark_timer_provider provider)

Preconditions:

  1. provider is implemented by the library.
  2. provider is not MICRO_BENCHMARK_TIMER_PROVIDER_USER_PROVIDED.

Effects:

  1. The default timer provider is changed.

Postconditions:

  1. The timers used by the library will have the type provider.


C Timer Function: micro_benchmark_timer micro_benchmark_timer_get_default ()

Preconditions:

  1. None.

Effects:

  1. None.

Postconditions:

  1. The returned value is the effective type of the timers created by micro_benchmark_timer_create_default and the ones created by providing the type MICRO_BENCHMARK_TIMER_PROVIDER_DEFAULT on micro_benchmark_timer_create.
  2. If micro_benchmark_timer_set_default was called, the returned value is equivalent to the last value provided to that call.



4.6.1 Timer Provider Reference

C Data Type: micro_benchmark_timer_provider { MICRO_BENCHMARK_TIMER_PROVIDER_DEFAULT, MICRO_BENCHMARK_TIMER_PROVIDER_ITIMER, MICRO_BENCHMARK_TIMER_PROVIDER_TIMER_T, MICRO_BENCHMARK_TIMER_PROVIDER_TIMERFD, MICRO_BENCHMARK_TIMER_PROVIDER_FROM_METER, MICRO_BENCHMARK_TIMER_PROVIDER_USER_PROVIDED }

Timer type (see Predefined Timers.)


C Timer Function: micro_benchmark_timer micro_benchmark_timer_create_default ()

Preconditions:

  1. None.

Effects:

  1. Allocate and initialize a new timer of the default type.

Postconditions:

  1. The returned value is a valid timer.
  2. If micro_benchmark_timer_set_default was called, the type of the returned value is equivalent to the last value provided to that call.


C Timer Function: micro_benchmark_timer micro_benchmark_timer_create (micro_benchmark_clock_type clock_type, micro_benchmark_timer_provider provider)

Preconditions:

  1. clock_type is implemented by this system.
  2. provider is not equal to MICRO_BENCHMARK_TIMER_PROVIDER_USER_PROVIDED.
  3. provider is not equal to MICRO_BENCHMARK_TIMER_PROVIDER_FROM_METER.
  4. provider support is implemented by the library.
  5. clock_type is implemented by provider.

Effects:

  1. Allocate and initialize a new timer of type provider using the clock type clock_type.

Postconditions:

  1. The returned value is a valid timer.


C Timer Function: micro_benchmark_timer micro_benchmark_timer_from_meter (micro_benchmark_clock_type clock_type, micro_benchmark_chronometer_provider chrono_provider)

Preconditions:

  1. clock_type is implemented by this system.
  2. chrono_provider is implemented by this system.

Effects:

  1. Allocate a new chronometer adapter ptr of type chrono_provider.
  2. Initialize the allocated timer.

Postconditions:

  1. The returned value is a valid timer.


C Timer Function: micro_benchmark_timer micro_benchmark_timer_from_provided_meter (micro_benchmark_clock_type clock_type, micro_benchmark_meter chrono_base)

Preconditions:

  1. clock_type is implemented by this system.
  2. Either:
    • chrono_base points to a valid chronometer.
    • chrono_base points to an object of type micro_benchmark_meter_definition where at least all the function fields have been filled with a valid function.
    • clock_type is implemented by chrono_base.

Effects:

  1. Allocate a new chronometer adapter ptr with the same type as chrono_base.
  2. Initialize the allocated timer.

Postconditions:

  1. The returned value is a valid timer.


C Timer Function: micro_benchmark_timer micro_benchmark_timer_from_template (micro_benchmark_clock_type clock_type, micro_benchmark_meter base)

Preconditions:

  1. clock_type is implemented by this system.
  2. Either:
    • base points to a valid timer.
    • base points to an object of type micro_benchmark_timer_definition where at least all the function fields have been filled with a valid function.
    • clock_type is implemented by base.

Effects:

  1. Allocate a new timer ptr.
  2. Copy base contents to ptr.
  3. Initialize ptr.

Postconditions:

  1. The returned value is a valid timer.


C Timer Function: void micro_benchmark_timer_release (micro_benchmark_timer timer)

Preconditions:

  1. timer is a valid timer.

Effects:

  1. Cleanup and release timer.

Postconditions:

  1. The value timer is not valid.



4.7 Utilities Reference

The following function can be used to provide a custom error handler, executed by the library whenever a postcondition, implicit or explicit, cannot be met:

C Error Function: void micro_benchmark_set_error_handler (void (*handler) (void))

Preconditions:

  1. The function pointed by handler does not return, neither through a return statement nor by reaching the end of its body, nor non-locally, e.g. through a longjmp call.

Effects:

  1. Store handler, forgetting any previous value provided.

Postconditions:

  1. handler will be called if a postcondition could not be met.
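
For example, a handler that terminates the process can be installed as follows; the header name is an assumption of this sketch:

  #include <stdlib.h>           /* abort  */
  #include <micro_benchmark.h>  /* Assumed header name.  */

  /* The handler must not return, so terminate the process.  */
  static void
  fail_fast (void)
  {
    abort ();
  }

  void
  install_handler (void)
  {
    micro_benchmark_set_error_handler (fail_fast);
  }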


The following utilities make it quick and easy to write a test with MicroBenchmark.

C Utility Macro: MICRO_BENCHMARK_REGISTER_TEST (fun)

Preconditions:

  1. File scope macro.
  2. fun type is (or decays to) micro_benchmark_test_fun.

Effects:

  1. Register the directed test fun.
  2. Use #fun as the test registration name.

Postconditions:

  1. The test will be executed by micro_benchmark_main unless its execution is disabled by its parameters or the provided test constraints.


C Utility Macro: MICRO_BENCHMARK_REGISTER_FULL_TEST (setup, fun, teardown)

Preconditions:

  1. File scope macro.
  2. setup type is (or decays to) micro_benchmark_set_up_fun.
  3. fun type is (or decays to) micro_benchmark_test_fun.
  4. teardown type is (or decays to) micro_benchmark_set_up_fun.

Effects:

  1. Register the directed test fun, calling setup before each run of fun, and calling teardown after each run.
  2. Use #fun as the test registration name.

Postconditions:

  1. The test will be executed by micro_benchmark_main unless its execution is disabled by its parameters or the provided test constraints.


C Utility Macro: MICRO_BENCHMARK_REGISTER_NAMED_TEST (name, setup, test, teardown)

Preconditions:

  1. File scope macro.
  2. The type of name is (or decays to) const char *, and points to a valid, zero-ended array of characters.
  3. setup type is (or decays to) micro_benchmark_set_up_fun.
  4. test type is (or decays to) micro_benchmark_test_fun.
  5. teardown type is (or decays to) micro_benchmark_set_up_fun.

Effects:

  1. Register the directed test test, calling setup before each run of test, and calling teardown after each run.
  2. Use name as the test registration name.

Postconditions:

  1. The test will be executed by micro_benchmark_main unless its execution is disabled by its parameters or the provided test constraints.


C Utility Macro: MICRO_BENCHMARK_REGISTER_SIMPLE_TEST (fun)

Preconditions:

  1. File scope macro.
  2. fun type is (or decays to) micro_benchmark_auto_test_fun.

Effects:

  1. Register the automatic test fun.
  2. Use #fun as the test registration name.

Postconditions:

  1. The test will be executed by micro_benchmark_main unless its execution is disabled by its parameters or the provided test constraints.


C Utility Macro: MICRO_BENCHMARK_REGISTER_AUTO_TEST (setup, fun, teardown)

Preconditions:

  1. File scope macro.
  2. setup type is (or decays to) micro_benchmark_set_up_fun.
  3. fun type is (or decays to) micro_benchmark_auto_test_fun.
  4. teardown type is (or decays to) micro_benchmark_set_up_fun.

Effects:

  1. Register the automatic test fun, calling setup before each run of fun, and calling teardown after each run.
  2. Use #fun as the test registration name.

Postconditions:

  1. The test will be executed by micro_benchmark_main unless its execution is disabled by its parameters or the provided test constraints.


C Utility Macro: MICRO_BENCHMARK_REGISTER_NAMED_AUTO_TEST (name, setup, test, teardown)

Preconditions:

  1. File scope macro.
  2. The type of name is (or decays to) const char *, and points to a valid, zero-ended array of characters.
  3. setup type is (or decays to) micro_benchmark_set_up_fun.
  4. test type is (or decays to) micro_benchmark_auto_test_fun.
  5. teardown type is (or decays to) micro_benchmark_set_up_fun.

Effects:

  1. Register the automatic test test, calling setup before each run of test, and calling teardown after each run.
  2. Use name as the test registration name.

Postconditions:

  1. The test will be executed by micro_benchmark_main unless its execution is disabled by its parameters or the provided test constraints.


C Function Type: micro_benchmark_static_constraint

Its prototype is void (*) (micro_benchmark_test_case).


C Utility Macro: MICRO_BENCHMARK_CONSTRAINT_TEST (name, fun)

Preconditions:

  1. File scope macro.
  2. The type of name is (or decays to) const char *, and points to a valid, zero-ended array of characters.
  3. fun type is (or decays to) micro_benchmark_static_constraint.

Effects:

  1. Register the constraint fun.

Postconditions:

  1. fun will be called with the test named name as its parameter by micro_benchmark_main, if a test with that name has been registered.


C Utility Macro: MICRO_BENCHMARK_MAIN ()

Preconditions:

  1. File scope macro.
  2. int main (int, char**) is not defined elsewhere in the binary.

Effects:

  1. Define int main (int, char **).

Postconditions:

  1. The resulting binary will call micro_benchmark_main from its main function with the provided command line arguments.
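
A minimal benchmark binary built from these macros could look as follows. The header name and the prototype of the test function (here assumed to receive a state handle offering micro_benchmark_state_keep_running, see Test Reference) are assumptions of this sketch:

  #include <string.h>
  #include <micro_benchmark.h>  /* Assumed header name.  */

  /* Assumed micro_benchmark_test_fun prototype; see Test Reference.  */
  static void
  measure_memcpy (micro_benchmark_test_state state)
  {
    static char src[256], dst[256];
    while (micro_benchmark_state_keep_running (state))  /* Assumed API.  */
      memcpy (dst, src, sizeof dst);
  }

  MICRO_BENCHMARK_REGISTER_TEST (measure_memcpy);
  MICRO_BENCHMARK_MAIN ()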


C Utility Function: void micro_benchmark_main (int argc, char **argv)

Preconditions:

  1. argc is a non-negative integer.
  2. argv points to an array of char *.
  3. argc is less or equal to the number of elements pointed by argv.
  4. Each value pointed by argv either is NULL or points to a valid, readable, zero-ended array of characters.

Effects:

  1. Parse the command line.
  2. If argv contains the parameters --help, --version or an unrecognized parameter, call exit with the corresponding value.
  3. Execute the registered tests.
  4. Print the results.

Postconditions:

  1. The standard output, or the output file provided through the command line parameters, was flushed.
  2. The value returned is EXIT_SUCCESS.


The following functions are used to implement the registration macros:

C Utility Function: void micro_benchmark_register_static_test (const char *name, micro_benchmark_test test)

Preconditions:

  1. name must point to a valid, zero-ended array of characters.
  2. test must point to a valid test definition.

Effects:

  1. Register test as name.

Postconditions:

  1. micro_benchmark_main will execute test unless it is disabled through its parameters or the constraints associated to name.


C Utility Function: void micro_benchmark_register_static_constraint (const char *name, micro_benchmark_static_constraint constraint)

Preconditions:

  1. name must point to a valid, zero-ended array of characters.
  2. constraint must point to a valid function.

Effects:

  1. Register constraints for name.

Postconditions:

  1. micro_benchmark_main will call constraint with the test case registered as name as its parameter, if a test with that name has been registered.


NOTE: The following functions are executed automatically by the linker on all known configurations. Do not call them manually.

C Utility Function: void micro_benchmark_init ()

Preconditions:

  1. Either:
    • micro_benchmark_init has not been called before.
    • Each call to micro_benchmark_init has been followed by a corresponding call to micro_benchmark_cleanup.

Effects:

  1. Configure message translations when the library was built with Native Language Support (see Configuration).
  2. Configure logging output.

Postconditions:

  1. Messages are printed in the user's locale, when a translation is available.


C Utility Function: void micro_benchmark_cleanup ()

Preconditions:

  1. Either:
    • micro_benchmark_init has been called once.
    • Every call to micro_benchmark_init except the last one has been followed by a corresponding call to micro_benchmark_cleanup.

Effects:

  1. Cleanup internal data.

Postconditions:

  1. The registered tests are no longer executed by a call to micro_benchmark_main.



4.7.1 Library Log Reference

The library produces messages at different levels of verbosity.

C Data Type: micro_benchmark_log_level { MICRO_BENCHMARK_ERROR_LEVEL, MICRO_BENCHMARK_WARNING_LEVEL, MICRO_BENCHMARK_INFO_LEVEL, MICRO_BENCHMARK_DEBUG_LEVEL, MICRO_BENCHMARK_TRACE_LEVEL }

Levels used by the library (see Configuration).


The following functions control the logging output generated by the library:

C Log Function: void micro_benchmark_log_set_output (FILE *out)

Preconditions:

  1. out points to a valid FILE.

Effects:

  1. Use out to emit log messages.

Postconditions:

  1. out will receive the log messages emitted by the library up to the next call to micro_benchmark_log_set_output or micro_benchmark_log_reset_output.


C Log Function: void micro_benchmark_log_reset_output ()

Preconditions:

  1. None.

Effects:

  1. If micro_benchmark_log_set_output was called, stop using the provided FILE *.
  2. Use the default output to emit log messages.

Postconditions:

  1. The default output will receive the log messages emitted by the library up to the next call to micro_benchmark_log_set_output.


C Log Function: void micro_benchmark_set_log_level (micro_benchmark_log_level level)

Preconditions:

  1. level is valid.

Effects:

  1. Set the log level to level.

Postconditions:

  1. The library won’t emit messages with less severity than level up to the next call to micro_benchmark_set_log_level.


C Log Function: void micro_benchmark_set_module_log_level (const char *module, micro_benchmark_log_level level)

Preconditions:

  1. level is valid.
  2. module points to a valid, zero-ended array of characters.

Effects:

  1. Set the log level for module to level.

Postconditions:

  1. The library won’t emit messages of module with less severity than level up to the next call to micro_benchmark_set_module_log_level for module.
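
For example, all messages can be redirected to the standard error stream and the verbosity raised as follows; the header name and the module name "sampler" are assumptions of this sketch:

  #include <stdio.h>
  #include <micro_benchmark.h>  /* Assumed header name.  */

  void
  configure_logging (void)
  {
    micro_benchmark_log_set_output (stderr);
    /* Emit everything up to debug messages...  */
    micro_benchmark_set_log_level (MICRO_BENCHMARK_DEBUG_LEVEL);
    /* ... but keep a hypothetical "sampler" module quieter.  */
    micro_benchmark_set_module_log_level ("sampler",
                                          MICRO_BENCHMARK_WARNING_LEVEL);
  }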



4.7.2 Optimization Utilities Reference

The following macros are meant to avoid certain optimizations the compiler might perform on your test code.

C Optimization Macro: MICRO_BENCHMARK_DO_NOT_OPTIMIZE (lvalue)

Preconditions:

  1. Function scope macro.

Effects:

  1. Simulate read+write on lvalue.

Postconditions:

  1. The compiler should not optimize lvalue away.


C Optimization Macro: MICRO_BENCHMARK_COMPILER_BARRIER ()

Preconditions:

  1. Function scope macro.

Effects:

  1. Memory fence.

Postconditions:

  1. The compiler should treat all memory as clobbered.


C Utility Function: void micro_benchmark_do_not_optimize (const void *dummy)

Preconditions:

  1. None.

Effects:

  1. “Compatibility” implementation of the previous macros.

Postconditions:

  1. A wish of hope and not much more.
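
For instance, inside a test function these macros can be used as follows; compute is a hypothetical function under measurement:

  #include <stddef.h>

  extern int compute (void);  /* Hypothetical code under test.  */

  int
  run_fragment (size_t iterations)
  {
    int acc = 0;
    for (size_t i = 0; i < iterations; ++i)
      {
        acc += compute ();
        /* Keep acc alive so the loop body is not removed.  */
        MICRO_BENCHMARK_DO_NOT_OPTIMIZE (acc);
      }
    /* Make the compiler assume all memory was touched.  */
    MICRO_BENCHMARK_COMPILER_BARRIER ();
    return acc;
  }
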

5 C++ API Reference

The library namespace is micro_benchmark. The following definitions omit this namespace for brevity. MicroBenchmark uses east const for its C++ code and places all the type specifiers together.

All functions require that the library has been loaded, properly initialized and stays loaded during all calls. This initialization is usually performed by the static initialization sequence of the binary.


5.1 C++ Suite Reference

C++ Type: suite

Main driver of the execution, C++ representation of the C type micro_benchmark_suite (see Suite Reference.)


C++ Constructor: suite::suite (char const* name)
C++ Constructor: suite::suite (SL const& name)

Preconditions:

  1. name points to a valid C string.

Effects:

  1. Creation of a new suite.

Postconditions:

  1. The created suite can be used.
  2. The name of the created suite is equivalent to name.


C++ Method: char const* suite::name () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value is equivalent to the name provided when the suite was created.


C++ Type: <with-constraints>

Internal type for the registration of a constraint function together with the test. The provided value micro_benchmark::with_constraints has this type.


Note: The following interface is going to change, to allow member function pointers and current register_class functionality.

C++ Method Template: test_case suite::register_test (char const* name, T&& test, SU&& set_up = <default_set_up>, TD&& tear_down = <default_tear_down>)
C++ Method Template: test_case suite::register_test (SL const& name, T&& test, SU&& set_up = <default_set_up>, TD&& tear_down = <default_tear_down>)

Preconditions:

  1. *this is a valid object.
  2. T is a valid test type.
  3. SU is a valid set up type.
  4. TD is a valid tear down type.
  5. name points to a C string.
  6. test is a valid object or a pointer to a valid function.
  7. set_up is a valid object or a pointer to a valid function.
  8. tear_down is a valid object or a pointer to a valid function.

Effects:

  1. Register with name the test defined by test.
  2. Register set_up as its set up function if it is provided. Otherwise, the default provided by the library is used.
  3. Register tear_down as its tear down function if it is provided. Otherwise, the default provided by the library is used.

Postconditions:

  1. The returned value is a handle to the created test case.
  2. The returned value test_case::name is equivalent to name.
  3. The registered test will be executed by suite::run unless its execution is disabled.
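
For example, a directed test can be registered and run as follows. This sketch assumes that a directed test callable receives a state reference (see C++ State Reference) and that the header is named micro_benchmark.hpp; do_work is hypothetical:

  #include <micro_benchmark.hpp>  // Assumed header name.

  void do_work ();  // Hypothetical code under test.

  int
  example ()
  {
    micro_benchmark::suite s ("example-suite");
    s.register_test ("work", [] (micro_benchmark::state& st)
      {
        while (st.keep_running ())
          do_work ();
      });
    s.run ();
    s.stored_report ().print ();
    return 0;
  }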


C++ Method Template: test_case suite::register_test (<with-constraints>, C&& constraints, char const* name, T&& test, SU&& set_up = <default_set_up>, TD&& tear_down = <default_tear_down>)
C++ Method Template: test_case suite::register_test (<with-constraints>, C&& constraints, SL const& name, T&& test, SU&& set_up = <default_set_up>, TD&& tear_down = <default_tear_down>)

Preconditions:

  1. *this is a valid object.
  2. C is a valid constraint type.
  3. T is a valid test type.
  4. SU is a valid set up type.
  5. TD is a valid tear down type.
  6. name points to a valid C string.
  7. constraints is a valid object.
  8. test is a valid object or a pointer to a valid function.
  9. set_up is a valid object or a pointer to a valid function.
  10. tear_down is a valid object or a pointer to a valid function.

Effects:

  1. Register with name the test defined by test.
  2. Register set_up as its set up function if it is provided. Otherwise, the default provided by the library is used.
  3. Register tear_down as its tear down function if it is provided. Otherwise, the default provided by the library is used.
  4. constraints is called with the defined test as its parameter.

Postconditions:

  1. The returned value is a handle to the created test case.
  2. The returned value test_case::name is equivalent to name.
  3. The registered test will be executed by suite::run unless its execution is disabled by the constraints function or later modifications of the test_case handle.


C++ Method Template: test_case suite::register_class (char const* name, T&& test)
C++ Method Template: test_case suite::register_class (SL const& name, T&& test)

Preconditions:

  1. *this is a valid object.
  2. T defines an instance method operator() or operator() const with the right signature.
  3. name points to a valid C string.

Effects:

  1. Register the test defined by test in the suite under name.
  2. If T has a valid static member constraints, it is called with the defined test as its parameter.

Postconditions:

  1. The returned value is a handle to the created test case.
  2. The returned value test_case::name is equivalent to name.
  3. The registered test will be executed by suite::run unless its execution is disabled by the constraints function or later modifications of the test_case handle.
  4. If T has a valid instance member set_up, it is called before the test execution.
  5. If T has a valid instance member tear_down, it is called after the test execution.


C++ Method: std::vector<test_case> suite::tests () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value contains handles of the tests registered up to this point.


C++ Method: void suite::run ()

Preconditions:

  1. *this is a valid object.
  2. this->run has not been called.

Effects:

  1. Run the registered and enabled tests.

Postconditions:

  1. The suite contains a report of the executions.


C++ Method: report suite::stored_report () const

Preconditions:

  1. *this is a valid object.
  2. this->run has been called.

Effects:

  1. None.

Postconditions:

  1. The returned value contains the report generated by the execution of the suite.



5.2 C++ Test Reference

C++ Type: test_case

Test case handle.

The objects returned by suite::register_test and suite::register_class are in a valid state, as well as the objects provided by the library to a constraint function through its parameter.


C++ Constructor: test_case::test_case ()

Preconditions:

  1. None.

Effects:

  1. None.

Postconditions:

  1. The created object state is not valid. The only operations defined on the created object are assignment from another object and destruction.


C++ Constructor: test_case::test_case (test_case&& other)

Preconditions:

  1. None.

Effects:

  1. *this is initialized with the state of other.

Postconditions:

  1. The state of other is invalid.


C++ Assignment Operator: test_case& test_case::operator= (test_case&& other)

Preconditions:

  1. None.

Effects:

  1. The state of *this is the state of other.

Postconditions:

  1. The state of other is invalid.


The following methods modify the constraints applied to the test execution.

C++ Method Template: void test_case::add_dimension (Range values)

Preconditions:

  1. *this is in a valid state.
  2. values is a valid range.

Effects:

  1. A new dimension with values is added to the test constraints.

Postconditions:

  1. The number of dimensions in *this is increased by one.


C++ Method Template: void test_case::add_dimension (std::initializer_list<T> values)

Preconditions:

  1. *this is in a valid state.

Effects:

  1. A new dimension with values is added to the test constraints.

Postconditions:

  1. The number of dimensions in *this is increased by one.


C++ Method Template: void test_case::add_dimension (Iterator begin, Iterator end)

Preconditions:

  1. *this is in a valid state.
  2. begin is a valid iterator.
  3. end is a valid iterator.
  4. [begin, end) is a valid range.

Effects:

  1. A new dimension with the values pointed by [begin, end) is added to the test constraints.

Postconditions:

  1. The number of dimensions in *this is increased by one.


C++ Method: void test_case::add_dimension (std::vector<std::size_t> const& values)

Preconditions:

  1. *this is in a valid state.
  2. values is in a valid state.

Effects:

  1. A new dimension with the values pointed by values is added to the test constraints.

Postconditions:

  1. The number of dimensions in *this is increased by one.


C++ Method: std::size_t test_case::number_of_dimensions () const

Preconditions:

  1. *this is in a valid state.

Effects:

  1. None.

Postconditions:

  1. The returned value is the number of dimensions stored in the test represented by *this.


C++ Method: std::vector<std::size_t> test_case::get_dimension (std::size_t number) const

Preconditions:

  1. *this is in a valid state.
  2. number must be less than the number of dimensions stored on *this.

Effects:

  1. None.

Postconditions:

  1. The returned value is the values of the dimension number stored in the test represented by *this.


C++ Method: void test_case::skip_iterations (std::size_t it)

Preconditions:

  1. *this is in a valid state.

Effects:

  1. Any value previously provided to this function stops taking effect.
  2. The value it is stored for future usage.

Postconditions:

  1. The test represented by *this will perform it iterations before it starts taking measurements.


C++ Method: std::size_t test_case::iterations_to_skip () const

Preconditions:

  1. *this is in a valid state.

Effects:

  1. None.

Postconditions:

  1. The returned value represents the iterations that *this will perform before it starts taking measurements.


Iteration Limits

The following functions control the number of iterations performed by the code under test.

C++ Method: void test_case::limit_iterations (std::size_t min, std::size_t max)

Preconditions:

  1. *this is in a valid state.
  2. Either:
    • The range [min, max] is valid.
    • max is zero.

Effects:

  1. Any value previously provided to this function stops taking effect.
  2. The provided values min and max are stored for future usage.

Postconditions:

  1. The test represented by *this will perform at least min iterations during the measurements.
  2. The test represented by *this will perform at most max iterations during the measurements.


C++ Method: std::size_t test_case::min_iterations () const

Preconditions:

  1. *this is in a valid state.

Effects:

  1. None

Postconditions:

  1. The returned value represents the minimum number of iterations that the test represented by *this will perform while taking measurements.


C++ Method: std::size_t test_case::max_iterations () const

Preconditions:

  1. *this is in a valid state.

Effects:

  1. None

Postconditions:

  1. The returned value represents the maximum number of iterations that the test represented by *this will perform while taking measurements.


The following functions control the number of iterations performed on each measurement sample taken.

C++ Method: void test_case::limit_samples (std::size_t min, std::size_t max)

Preconditions:

  1. *this is in a valid state.
  2. Either:
    • The range [min, max] is valid.
    • max is zero.

Effects:

  1. Any value previously provided to this function stops taking effect.
  2. The provided values min and max are stored for future usage.

Postconditions:

  1. The test represented by *this will perform at least min iterations on each measurement sample.
  2. The test represented by *this will perform at most max iterations on each measurement sample.


C++ Method: std::size_t test_case::min_sample_iterations () const

Preconditions:

  1. *this is in a valid state.

Effects:

  1. None

Postconditions:

  1. The returned value represents the minimum number of iterations performed on each measurement sample taken during the execution of the test represented by *this.


C++ Method: std::size_t test_case::max_sample_iterations () const

Preconditions:

  1. *this is in a valid state.

Effects:

  1. None

Postconditions:

  1. The returned value represents the maximum number of iterations performed on each measurement sample taken during the execution of the test represented by *this.


The following functions limit the test time:

C++ Method: void test_case::max_time (std::chrono::seconds s)
C++ Method: void test_case::max_time (std::chrono::nanoseconds ns)

Preconditions:

  1. *this is in a valid state.

Effects:

  1. Any value previously provided to this function stops taking effect.
  2. The provided value will be used as the time limit for the test.

Postconditions:

  1. The test represented by *this will have the provided time as its time limit.


C++ Method: void test_case::max_time (std::chrono::seconds s, std::chrono::nanoseconds ns)

Preconditions:

  1. *this is in a valid state.
  2. ns is less than 1 second.

Effects:

  1. Any value previously provided to this function stops taking effect.
  2. The provided value will be used as the time limit for the test.

Postconditions:

  1. The test represented by *this will have the provided time as its time limit.


C++ Method: std::pair<std::chrono::seconds, std::chrono::nanoseconds> test_case::max_time () const

Preconditions:

  1. *this is in a valid state.

Effects:

  1. None

Postconditions:

  1. The returned value represents the time limit stored on *this.
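
Taken together, these methods configure a registered test as in the following sketch, where s is a suite and my_test a test callable, both hypothetical:

  micro_benchmark::test_case tc = s.register_test ("work", my_test);
  tc.add_dimension ({64, 256, 1024});     // Sizes the test will receive.
  tc.skip_iterations (10);                // Warm-up iterations.
  tc.limit_iterations (100, 10000);       // Bounds while measuring.
  tc.limit_samples (10, 100);             // Iterations per sample.
  tc.max_time (std::chrono::seconds (5)); // Overall time limit.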



5.3 C++ State Reference

The following type represents the state of a test execution.

An instance of the following type is provided to set up and tear down functions when they are provided to the test. An instance of this type is provided to directed tests as well.

C++ Type: state

Handle of the execution state. The lifetime of these objects is bounded to the function call that provided it. No public constructor is available for this type.


C++ Method: char const* state::get_name () const

Preconditions:

  1. *this is in a valid state.

Effects:

  1. None

Postconditions:

  1. The returned value represents the name of the execution controlled by *this.
  2. If this->set_name was called, the returned value is equivalent to the value provided to the last such call.


C++ Method: void state::set_name (char const* name)
C++ Method Template: void state::set_name (SL name)

Preconditions:

  1. *this is in a valid state.
  2. name points to a valid C string.

Effects:

  1. Any value previously provided to this function stops taking effect.
  2. Store a copy of the provided name for later usage.

Postconditions:

  1. The value represented by name will be used for the execution report represented by *this.


C++ Method: bool state::keep_running ()

Preconditions:

  1. *this is in a valid state.

Effects:

  1. Perform housekeeping.

Postconditions:

  1. The value returned is true if the test should run at least one more iteration.


C++ Method: std::vector<std::size_t> state::sizes () const

Preconditions:

  1. *this is in a valid state.

Effects:

  1. None.

Postconditions:

  1. The value returned contains the sizes provided for this test execution.
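
A directed test typically combines these methods as in the following sketch; process is a hypothetical function and the header name is an assumption:

  #include <cstddef>
  #include <micro_benchmark.hpp>  // Assumed header name.

  void process (std::size_t size);  // Hypothetical code under test.

  void
  my_test (micro_benchmark::state& st)
  {
    auto sizes = st.sizes ();
    // Use the first dimension, when present, to size the work.
    std::size_t n = sizes.empty () ? 1 : sizes.front ();
    st.set_name ("process");  // Name reported for this execution.
    while (st.keep_running ())
      process (n);
  }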



5.4 C++ Report Reference

The following sections describe the reports generated by the execution of the tests (see C++ Test Reference) registered at a suite object (see C++ Suite Reference) after running that suite.


5.4.1 C++ Suite Report Reference

C++ Type: report

Valid objects of this type represent the report generated by the execution of a suite object.


C++ Constructor: report::report ()

Preconditions:

  1. None.

Effects:

  1. None.

Postconditions:

  1. The only operations defined on *this are the assignment and the destruction.


C++ Constructor: report::report (report&& other) noexcept
C++ Assignment Operator: report& report::operator= (report&& other) noexcept

Preconditions:

  1. None.

Effects:

  1. The validity from other is transferred to *this.
  2. If other was valid, the state of other is transferred to *this.

Postconditions:

  1. other is not a valid object. Only destruction and assignment are defined on it.


C++ Method: char const* report::name () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value is equivalent to the name of the suite that generated the report represented by *this.


The following functions allow the iteration over the reports. You may write in your code:

  report rep = …;

  for (auto const& test: rep)
    do_something_with_test (test);

  for (auto const& e: rep.executions ())
    do_something_with_execution (e);

C++ Method: <iterator> report::begin () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value points to the start of the sequence of test reports associated to *this.


C++ Method: <iterator> report::end () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value points to the end of the sequence of test reports associated to *this.


C++ Method: <iterator> report::rbegin () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value points to the last element of the sequence of test reports associated to *this.


C++ Method: <iterator> report::rend () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value points to the reverse end of the sequence of test reports associated to *this.


C++ Method: std::size_t report::size () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value is the number of tests associated to *this.


C++ Method: bool report::empty () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The return value is true if the number of executed tests was zero.


C++ Method: report::range report::tests () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value is a valid range to iterate over the tests associated to *this in order of execution.


C++ Method: report::reverse_range report::reverse_tests () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value is a valid range to iterate over the tests associated to *this in reverse order of execution.


C++ Method: <range> report::executions () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value is a valid range to iterate over the test executions associated to *this in the order of execution.


C++ Method: <range> report::reverse_executions () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value is a valid range to iterate over the test executions associated to *this in reverse order of execution.


The following functions print the report with the default format (see Report Output.)

C++ Method: void report::print () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. The report associated to *this is printed to the standard output.
  2. The default values, as returned by a call to io::default_output_values, are emitted.
  3. The default format is used to print the values.

Postconditions:

  1. The standard output has been flushed.


C++ Method: void report::print (std::FILE* out) const
C++ Method: void report::print (std::ostream& out) const

Preconditions:

  1. *this is a valid object.
  2. out is associated to a valid stream.
  3. out can receive an indeterminate amount of characters.

Effects:

  1. The report associated to *this is printed to the stream associated to out.
  2. The default values, as returned by a call to io::default_output_values, are emitted.
  3. The default format is used to print the values.

Postconditions:

  1. out has been flushed.


C++ Function: std::ostream& operator<< (std::ostream& out, report const& rep)

Preconditions:

  1. rep is a valid object.
  2. out is associated to a valid stream.
  3. out can receive an indeterminate amount of characters.

Effects:

  1. The report associated to rep is printed to the stream associated to out.
  2. The default values, as returned by a call to io::default_output_values, are emitted.
  3. The default format is used to print the values.

Postconditions:

  1. out has been flushed.

Note: this function can only be called through argument-dependent lookup.



The following overloads emit customized output.

C++ Method: void report::print (io::output_values const& values) const

Preconditions:

  1. *this is a valid object.

Effects:

  1. The report associated to *this is printed to the standard output.
  2. The provided values are emitted.
  3. The default format is used to print the values.

Postconditions:

  1. The standard output has been flushed.


C++ Method: void report::print (std::FILE* out, io::output_values const& values) const
C++ Method: void report::print (std::ostream& out, io::output_values const& values) const

Preconditions:

  1. *this is a valid object.
  2. out is associated to a valid stream.
  3. out can receive an indeterminate amount of characters.

Effects:

  1. The report associated to *this is printed to the stream associated to out.
  2. The provided values are emitted.
  3. The default format is used to print the values.

Postconditions:

  1. out has been flushed.


The following functions print the report using the format provided as its parameter (see Report Output.)

C++ Method: void report::print (io::output_type format) const

Preconditions:

  1. *this is a valid object.
  2. format is a valid object.

Effects:

  1. The report associated to *this is printed to the standard output.
  2. The default values, as returned by a call to io::default_output_values, are emitted.
  3. The provided format is used to print the values.

Postconditions:

  1. The standard output has been flushed.


C++ Method: void report::print (std::FILE* out, io::output_type format) const
C++ Method: void report::print (std::ostream& out, io::output_type format) const

Preconditions:

  1. *this is a valid object.
  2. out is associated to a valid stream.
  3. out can receive an indeterminate amount of characters.
  4. format is a valid object.

Effects:

  1. The report associated to *this is printed to the stream associated to out.
  2. The default values, as returned by a call to io::default_output_values, are emitted.
  3. The provided format is used to print the values.

Postconditions:

  1. out has been flushed.


C++ Method: void report::print (std::FILE* out, io::output_type type, io::output_values const& values) const
C++ Method: void report::print (std::ostream& out, io::output_type type, io::output_values const& values) const

Preconditions:

  1. *this is a valid object.
  2. out is associated to a valid stream.
  3. out can receive an indeterminate amount of characters.
  4. type is a valid object.
  5. values is a valid object.

Effects:

  1. The report associated to *this is printed to the stream associated to out.
  2. The provided values are emitted.
  3. The provided format is used to print the values.

Postconditions:

  1. out has been flushed.


Extra Arguments For operator<<

The following method allows the customization of the output obtained by operator<<. It can be used as in the following example:

    using micro_benchmark::io::output_type;
    report rep = …;
    // Print the lisp report to std::cout
    std::cout << rep.with_args (output_type::lisp);
    …

C++ Method Template: <with-args-ret> report::with_args (Args&&... args) const

Preconditions:

  1. *this is a valid object.
  2. this->print (std::declval<std::ostream&> (), args...) is a valid expression.

Effects:

  1. The parameters are stored into the returned object for later usage.

Postconditions:

  1. The returned object contains a reference to this. Its validity is bounded to the validity of this.


C++ Function: std::ostream& operator<< (std::ostream& out, <with-args-ret>&& report)

Preconditions:

  1. report was obtained by a report::with_args call.
  2. The report that generated report is still valid.
  3. out is associated to a valid stream.
  4. out can receive an indeterminate amount of characters.

Effects:

  1. Call ptr->print (out, args...) where ptr is the report referenced by report and args... were the arguments provided to the call report::with_args.

Postconditions:

  1. out was flushed.



5.4.2 C++ Test Report Reference

C++ Type: test_report

Valid objects of this type represent the report generated by all executions of a test_case object.


C++ Constructor: test_report::test_report ()

Preconditions:

  1. None.

Effects:

  1. None.

Postconditions:

  1. The only operations defined on *this are the assignment and the destruction.


C++ Constructor: test_report::test_report (test_report&& other) noexcept
C++ Assignment Operator: test_report& test_report::operator= (test_report&& other) noexcept

Preconditions:

  1. None.

Effects:

  1. The validity from other is transferred to *this.
  2. The state of other is transferred to *this if other is valid.

Postconditions:

  1. other is not a valid object. Only destruction and assignment are defined on it.


C++ Method: char const* test_report::name () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value is equivalent to the name of the test that generated the report represented by *this.


Test Report Iteration

The following functions allow the iteration over the test reports. You may write in your code, where rep is an object of type test_report:

  test_report rep = …;

  for (auto const& e: rep) do_something_with_execution (e);


C++ Method: test_report::iterator test_report::begin () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value points to the start of the sequence of execution reports associated to *this.


C++ Method: test_report::iterator test_report::end () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value points to the end of the sequence of execution reports associated to *this.


C++ Method: test_report::reverse_iterator test_report::rbegin () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value points to the last element of the sequence of execution reports associated to *this.


C++ Method: test_report::reverse_iterator test_report::rend () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value points to the reverse end of the sequence of execution reports associated to *this.


C++ Method: std::size_t test_report::size () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value is the number of executions associated to *this.


C++ Method: bool test_report::empty () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value is true if the number of executions associated to *this was zero.


C++ Method: test_report::range test_report::executions () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value is a valid range to iterate over the executions associated to *this in the order of execution.


C++ Method: test_report::reverse_range test_report::reverse_executions () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The returned value is a valid range to iterate over the executions associated to *this in reverse order of execution.



5.4.3 C++ Exec Report Reference

C++ Type: exec_report

Valid objects of this type represent the report generated by a single execution of a test_case object.


C++ Constructor: exec_report::exec_report ()

Preconditions:

  1. None.

Effects:

  1. None.

Postconditions:

  1. The only operations defined on *this are the assignment and the destruction.


C++ Constructor: exec_report::exec_report (exec_report&& other) noexcept
C++ Assignment Operator: exec_report& exec_report::operator= (exec_report&& other) noexcept

Preconditions:

  1. None.

Effects:

  1. The validity from other is transferred to *this.
  2. The state of other is transferred to *this if other is valid.

Postconditions:

  1. other is not a valid object. Only destruction and assignment are defined on it.


Other helper types are:

C++ Type: total_time_type

This structure contains two public members—seconds and nanoseconds—and a function template to<T>() to convert the result using std::chrono::duration_cast.


C++ Type: <time-unit>

Internal template. Has two members: to_unit which returns its value in std::chrono::nanoseconds and value which returns the underlying double value.


C++ Type: <iteration-unit>

Internal template. Has two members: to_unit which returns its value in std::size_t and value which returns the underlying double value.


C++ Type: <iteration-stat>
C++ Type: <time-stat>

Internal template. Has three <iteration-unit> or <time-unit> fields: mean, std_deviation and variance.


C++ Type: <time-samples>

Internal alias of a collection of time samples.


C++ Method: char const* exec_report::name () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. If state::set_name was called during the execution associated to *this, the returned value is equivalent to the last value provided during that execution.


C++ Method: std::vector<std::size_t> exec_report::sizes () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The return value is equivalent to the value returned by state::sizes during the execution associated to *this.


C++ Method: std::size_t exec_report::iterations () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The return value represents the sampled iterations performed by the execution associated to *this.


C++ Method: std::size_t exec_report::total_iterations () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The return value represents the total iterations performed by the execution associated to *this.


C++ Method: total_time_type exec_report::total_time () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The return value represents the total time elapsed during the execution associated to *this.


C++ Method: <time-samples> exec_report::time_samples () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The return value represents the time samples collected during the execution associated to *this.


C++ Method: std::size_t exec_report::total_samples () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The return value represents the number of samples collected during the execution associated to *this.


C++ Method: std::size_t exec_report::used_samples () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The return value represents the number of samples used for the statistical data generation of the test execution associated to *this.


C++ Method: <time-stat> exec_report::iteration_time () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The return value represents the aggregated time per iteration data of the execution associated to *this.


C++ Method: <time-stat> exec_report::sample_time () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The return value represents the aggregated time per sample data of the execution associated to *this.


C++ Method: <iteration-stat> exec_report::sample_iterations () const

Preconditions:

  1. *this is a valid object.

Effects:

  1. None.

Postconditions:

  1. The return value represents the aggregated iterations per sample data of the execution associated to *this.
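
For example, the collected execution data can be traversed and printed as follows; this sketch assumes the header name and converts the total time with the documented to<T>() template:

  #include <chrono>
  #include <cstdio>
  #include <micro_benchmark.hpp>  // Assumed header name.

  void
  dump (micro_benchmark::report const& rep)
  {
    for (auto const& test : rep)     // test_report values.
      for (auto const& exec : test)  // exec_report values.
        {
          auto ns = exec.total_time ().to<std::chrono::nanoseconds> ();
          std::printf ("%s: %zu iterations, %lld ns\n",
                       exec.name (), exec.iterations (),
                       (long long) ns.count ());
        }
  }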



5.4.4 C++ Report Output Reference

C++ Type: io::output_type

This enumeration represents the output format (see Report Output.) The following values are defined by this enumeration class:

  • console
  • lisp
  • text

C++ Type: io::output_stat

This enumeration represents the print option for statistical values (see Report Output.) The following values are defined by this enumeration class:

  • none
  • mean
  • variance
  • std_deviation

C++ Struct: io::output_values

Customized output.


C++ Function: io::output_values io::default_output_values ()

Default output values.



C++ Function: void io::default_output_values (io::output_values const& values)

Preconditions:

  1. values is a valid object.

Effects:

  1. Any value previously provided to this function stops taking effect.
  2. Store the contents of values for its later usage.

Postconditions:

  1. The default print operations will use the provided values.



5.5 C++ Utilities Reference

C++ Macro: MICRO_BENCHMARK_MAIN ()

Define int main (int, char **), which calls micro_benchmark::main and returns its value.



C++ Function: int main (int argc, char* argv[])

Preconditions:

  1. argc is a non-negative integer.
  2. argv points to an array of char *.
  3. argc is less or equal to the number of elements pointed by argv.
  4. Each value pointed by argv points to a valid C string.

Effects:

  1. Parse the command line.
  2. If argv contains the parameters --help, --version or an unrecognized parameter, call exit with the corresponding value.
  3. Execute the registered tests.
  4. Print the results.

Postconditions:

  1. The value returned is EXIT_SUCCESS.



5.5.1 C++ Optimization Utilities Reference

The following functions provide optimization-defeating functionality (see Optimizer Tips).

C++ Function Template: void do_not_optimize (T& lvalue)

Preconditions:

  1. lvalue is a valid object.

Effects:

  1. Simulate read and write on lvalue.

Postconditions:

  1. The compiler should not optimize lvalue away.


C++ Function: void compiler_barrier ()

Preconditions:

  1. None.

Effects:

  1. Memory fence.

Postconditions:

  1. The compiler should treat all memory as clobbered.



6 Guile API Reference

To use MicroBenchmark, you only need to import the main module, (mbenchmark). It exports all the library functionality. The following sections specify the different parts that compose it.


6.1 Guile Suite Reference

The suite functionality is provided by the module (mbenchmark suite). The following names are provided:

Guile Foreign Object: <suite>

This type represents a test suite. Its starting state is empty.

It encapsulates micro_benchmark_suite underneath and controls its lifetime. Note that this name is not exported by the module; the name <suite> is only for reference.


Guile Constructor: make-suite name

Preconditions:

  1. (string? name) is #t.

Effects:

  1. Creation of a new suite with name as its name.

Postconditions:

  1. The predicate suite? returns #t on the returned value.

Guile Predicate: suite? object

Preconditions:

  1. None.

Effects:

  1. Check if object type is <suite>.

Postconditions:

  1. The value returned is #t if object is of type <suite>.

Guile Function: suite-name suite

Preconditions:

  1. suite was created by make-suite.

Effects:

  1. None.

Postconditions:

  1. The value returned is the name provided to make-suite.

Guile Function: suite-number-of-tests suite

Preconditions:

  1. suite was created by make-suite.

Effects:

  1. None.

Postconditions:

  1. The value returned is the number of test cases registered on suite.

Guile Function: run-suite! suite

Preconditions:

  1. suite was created by make-suite.
  2. run-suite! has not been called on suite.

Effects:

  1. The test cases registered onto the suite are executed.

Postconditions:

  1. The suite contains a report.

Guile Function: get-report suite

Preconditions:

  1. suite was created by make-suite.
  2. run-suite! has been called on suite.

Effects:

  1. None.

Postconditions:

  1. The returned value contains the information gathered during the suite execution.
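
The following sketch ties these procedures together; it is only an illustration, and the suite name and comments are arbitrary:

  (use-modules (mbenchmark suite))

  (define s (make-suite "string-benchmarks"))
  (suite? s)                  ; => #t
  (suite-name s)              ; => "string-benchmarks"
  ;; ... register test cases here (see Guile Test Reference) ...
  (run-suite! s)              ; execute the registered tests
  (define r (get-report s))   ; gather the results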


6.2 Guile Test Reference

In Scheme, the test registration functions are the constructors of test cases. This organization eases the implementation and seems to make sense too. Let us know your opinion at <bug-mbenchmark@nongnu.org>.


6.2.1 Guile Test Case Reference

Guile Internal Record: <test-case>

This type represents a test case.

It encapsulates micro_benchmark_test_case underneath and a reference to the suite to control its lifetime. Note that this name is not exported by the module, but objects of this type are returned by the following function.


Guile Constructor: add-test! suite name #:key test direct dimensions skip-iterations min-iterations max-iterations min-sample-iterations max-sample-iterations max-time set-up tear-down

Preconditions:

  1. suite was returned by make-suite.
  2. (string? name) is #t.
  3. If #:set-up is provided, (set-up <state-object>) is a valid call. If it isn’t provided, the value returned by the default set-up function is (get-sizes <state-object>).
  4. If #:tear-down is provided, (tear-down <state-object> (set-up <state-object>)) is a valid call.
  5. If #:direct is provided and it isn’t #f, (apply test <state-object2> (set-up <state-object1>)), where test is the value provided through #:test, must be a valid call.
  6. If #:direct is not provided or it is #f, (apply test (set-up <state-object>)), where test is the value provided through #:test, must be a valid call.
  7. If #:dimensions is provided, its value is a list of lists of non-negative integers.
  8. If #:skip-iterations, #:min-iterations, #:max-iterations, #:min-sample-iterations or #:max-sample-iterations is provided, its value is a non-negative integer.
  9. If #:max-time is provided, its value is either a non-negative integer or a pair of non-negative integers.

Effects:

  1. Register the provided test definition and constraints on suite as name.

Postconditions:

  1. The predicate test-case? returns #t on the returned value.
  2. The registered test is prepared to be executed by suite.
  3. If #:dimensions was provided, the returned test has the provided value as its size constraints.
  4. If #:skip-iterations was provided, the returned test will perform this number of iterations before it starts taking measurements.
  5. If #:min-iterations or #:max-iterations was provided, the returned test will perform a number of iterations in the range [min, max], where min is the value provided to #:min-iterations or zero, and max is the value provided to #:max-iterations or an implementation upper limit.
  6. If #:min-sample-iterations or #:max-sample-iterations was provided, the returned test will perform a number of iterations per measurement sample in the range [min, max], where min is the value provided to #:min-sample-iterations or zero, and max is the value provided to #:max-sample-iterations or an implementation upper limit.
  7. If #:max-time was provided, its value will be used as the time limit of the registered test.
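
As an illustration of these keywords, the following sketch registers a direct-style test case on the suite s created above; the measured expression, the empty #:set-up result and the chosen limits are arbitrary assumptions (keep-running? is described in Guile Test State Reference):

  (define t
    (add-test! s "vector-fill"
               #:direct #t
               #:set-up (lambda (state) '())  ; no extra test arguments
               #:test (lambda (state)
                        (let loop ()
                          (when (keep-running? state)
                            (make-vector 1024 0)  ; code under measurement
                            (loop))))
               #:min-iterations 100
               #:max-time 2))                 ; at most two seconds

  (test-case? t)   ; => #t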

Guile Predicate: test-case? object

Preconditions:

  1. None.

Effects:

  1. Check if object type is <test-case>.

Postconditions:

  1. The value returned is #t if object is of type <test-case>.

Guile Function: test-set-constraints! test #:key skip-iterations! min-iterations max-iterations min-sample-iterations max-sample-iterations

Preconditions:

  1. test was returned by add-test!.
  2. If #:skip-iterations!, #:min-iterations, #:max-iterations, #:min-sample-iterations or #:max-sample-iterations is provided, its value is a non-negative integer.

Effects:

  1. If #:skip-iterations! was provided, any previous value of iterations to skip stops taking effect.
  2. If #:min-iterations or #:max-iterations was provided, any previous iteration limit value stops taking effect.
  3. If #:min-sample-iterations or #:max-sample-iterations was provided, any previous sample iteration limit value stops taking effect.

Postconditions:

  1. If #:skip-iterations! was provided, this will be the number of iterations performed on the test code represented by test before taking measurements.
  2. If #:min-iterations or #:max-iterations was provided, the number of iterations performed by test will be in the range [min, max], where min is the value provided to #:min-iterations or zero, and max is the value provided to #:max-iterations or an implementation upper limit.
  3. If #:min-sample-iterations or #:max-sample-iterations was provided, the number of iterations per measurement sample performed by test will be in the range [min, max], where min is the value provided to #:min-sample-iterations or zero, and max is the value provided to #:max-sample-iterations or an implementation upper limit.

Guile Function: test-add-dimension! test dim

Preconditions:

  1. test was returned by add-test!.
  2. dim is a list of non-negative integers.

Effects:

  1. Add a new dimension to test with dim as its space of values.

Postconditions:

  1. test has another dimension added to its constraints.

Guile Function: test-dimensions test

Preconditions:

  1. test was returned by add-test!.

Effects:

  1. None.

Postconditions:

  1. The value returned is the, possibly empty, list of dimensions provided for test.

Guile Function: test-skip-iterations! test iterations

Preconditions:

  1. test was returned by add-test!.

Effects:

  1. Any previous value of iterations to skip stops taking effect.

Postconditions:

  1. test will perform iterations before taking measurements.

Guile Function: test-iterations-to-skip test

Preconditions:

  1. test was returned by add-test!.

Effects:

  1. None.

Postconditions:

  1. The number of iterations that test will perform before taking measurements.

Guile Function: test-limit-iterations! test #:key min max

Preconditions:

  1. test was returned by add-test!.
  2. If #:min or #:max is provided, its value must be a non-negative integer.

Effects:

  1. Any previous iteration limit stops taking effect.

Postconditions:

  1. The number of iterations performed by test will be in the range [min, max], where min is the value provided to #:min or zero, and max is the value provided to #:max or an implementation upper limit.

Guile Function: test-min-iterations test

Preconditions:

  1. test was returned by add-test!.

Effects:

  1. None.

Postconditions:

  1. The minimum number of iterations that test will perform while taking measurements.

Guile Function: test-max-iterations test

Preconditions:

  1. test was returned by add-test!.

Effects:

  1. None.

Postconditions:

  1. The maximum number of iterations that test will perform while taking measurements, or zero if no such limit has been provided by the user.

Guile Function: test-limit-samples! test #:key min max

Preconditions:

  1. test was returned by add-test!.
  2. If #:min or #:max is provided, its value must be a non-negative integer.

Effects:

  1. Any previous sample iteration limit stops taking effect.

Postconditions:

  1. The number of iterations per measurement sample performed by test will be in the range [min, max], where min is the value provided to #:min or zero, and max is the value provided to #:max or an implementation upper limit.

Guile Function: test-max-sample-iterations test

Preconditions:

  1. test was returned by add-test!.

Effects:

  1. None.

Postconditions:

  1. The maximum number of iterations per measurement sample that test will perform, or zero if no such limit has been provided by the user.

Guile Function: test-min-sample-iterations test

Preconditions:

  1. test was returned by add-test!.

Effects:

  1. None.

Postconditions:

  1. The minimum number of iterations per measurement sample that test will perform, or zero if no such limit has been provided by the user.

Guile Function: test-set-max-time! test time

Preconditions:

  1. test was returned by add-test!.
  2. time is a positive integer or a pair of non-negative integers.

Effects:

  1. Any previous time limit value stops taking effect.
  2. If time is a pair, use its car value as the number of seconds of the time limit and its cdr value as the number of milliseconds.
  3. Otherwise, use time as the number of seconds.

Postconditions:

  1. The time limit provided will be used for test execution.

Guile Function: test-max-time! test

Preconditions:

  1. test was returned by add-test!.

Effects:

  1. None.

Postconditions:

  1. The returned value is a pair whose contents are the number of seconds and the number of milliseconds stored as time limit for test.
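
A sketch of adjusting and querying the constraints of the test t registered in the earlier example; the concrete values are arbitrary:

  (test-add-dimension! t '(16 256 4096))   ; one dimension, three sizes
  (test-skip-iterations! t 10)             ; warm-up iterations
  (test-limit-iterations! t #:min 100 #:max 10000)
  (test-limit-samples! t #:min 1 #:max 50)
  (test-set-max-time! t '(1 . 500))        ; 1 second and 500 milliseconds

  (test-iterations-to-skip t)   ; => 10
  (test-max-time! t)            ; => (1 . 500)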

The following function does not return a <test-case> object. The tests registered by this function are executed by the main function (see Guile Utilities Reference.)

Guile Function: register-test! name #:key test direct dimensions skip-iterations min-iterations max-iterations min-sample-iterations max-sample-iterations set-up tear-down max-time

Preconditions:

  1. (string? name) is #t.
  2. If #:set-up is provided, (set-up <state-object>) is a valid call. If it isn’t provided, the value returned by the default set-up function is (get-sizes <state-object>).
  3. If #:tear-down is provided, (tear-down <state-object> (set-up <state-object>)) is a valid call.
  4. If #:direct is provided and it isn’t #f, (apply test <state-object2> (set-up <state-object1>)), where test is the value provided through #:test, must be a valid call.
  5. If #:direct is not provided or it is #f, (apply test (set-up <state-object>)), where test is the value provided through #:test, must be a valid call.
  6. If #:dimensions is provided, it must be a list of lists of non-negative integers.
  7. If #:skip-iterations, #:min-iterations, #:max-iterations, #:min-sample-iterations or #:max-sample-iterations is provided, its value is a non-negative integer.
  8. If #:max-time is provided, its value is either a non-negative integer or a pair of non-negative integers.

Effects:

  1. Register the provided test definition and constraints as name.

Postconditions:

  1. The registered test is prepared to be executed by main.


6.2.2 Guile Test State Reference

Guile Foreign Object: <state>

This type represents a test execution state.

It encapsulates micro_benchmark_test_state underneath. Note that this name is not exported by the module; the name <state> is only for reference.

Objects of this type are only provided through the test case functions. They must not escape their calling scope; otherwise they are no longer considered valid.


These objects provide two predicates:

Guile Predicate: state? object

Preconditions:

  1. None.

Effects:

  1. Check if object type is <state>.

Postconditions:

  1. The value returned is #t if object is a state object provided through a test related function.

Guile Predicate: keep-running? state

Preconditions:

  1. state is a valid object, provided through a test related function.

Effects:

  1. Perform housekeeping.
  2. Check constraints.

Postconditions:

  1. The value returned is #t if the test should perform at least one more iteration.

The following functions are available for objects of this type.

Guile Function: state-sizes state

Preconditions:

  1. state is a valid object, provided through a test related function.

Effects:

  1. None.

Postconditions:

  1. The returned value is a combination of the dimensional constraints provided.

Guile Function: state-name state

Preconditions:

  1. state is a valid object, provided through a test related function.

Effects:

  1. None.

Postconditions:

  1. The returned value is the name provided at the registration of the test associated to state.

Guile Function: set-state-name! state name

Preconditions:

  1. state is a valid object, provided through a test related function.

Effects:

  1. Any previous name for this test execution stops taking effect.

Postconditions:

  1. The test execution associated to state will report its name as name.
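
The following sketch shows a direct-style test procedure using these accessors; it assumes at least one dimension was registered for the test, so that state-sizes returns a non-empty list:

  (define (measured-test state)
    ;; Rename this execution after the first size constraint.
    (let ((size (car (state-sizes state))))
      (set-state-name! state
                       (string-append (state-name state) "/"
                                      (number->string size)))
      (let loop ()
        (when (keep-running? state)
          (make-vector size 0)   ; code under measurement
          (loop)))))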


6.3 Guile Report Reference

The following sections describe the report generated by a test suite (see Guile Suite Report Reference), its output functionality (see Guile Report Output Reference) and the detailed report provided for each execution (see Guile Exec Report Reference).

Note there is no equivalent to the test case report of C4.


6.3.1 Guile Suite Report Reference

Guile Internal Record: <report>

This type represents a suite report.

It encapsulates micro_benchmark_report underneath and a reference to the suite to control its lifetime. Note that this name is not exported by the module, but objects of this type are returned by the following function.


Guile Constructor: get-report suite

Preconditions:

  1. suite was returned by make-suite.
  2. suite contains a report.

Effects:

  1. None.

Postconditions:

  1. The report contained on suite is returned.

Guile Predicate: report? object

Preconditions:

  1. None.

Effects:

  1. Check if object type is <report>.

Postconditions:

  1. The value returned is #t if object was returned by get-report.

Guile Function: report-name report

Preconditions:

  1. report was returned by get-report.

Effects:

  1. None.

Postconditions:

  1. The returned value is the name of the suite associated to report.

The following functions iterate over a report object, traversing the test execution reports contained in it:

Guile Function: for-each-report report fun #:key self-test

Preconditions:

  1. report was returned by get-report.

Effects:

  1. If #:self-test was provided and true, or not provided at all, call fun once for each test execution report contained in report, including the self test of the suite.
  2. If #:self-test was provided as false, call fun once for each test execution report contained in report, excluding the self test of the suite.
  3. Each call of fun will be performed as (fun exr), where exr is an <exec-report> object.
  4. The order of calls is the order in which the test executions were performed.

Postconditions:

  1. None.

Guile Function: fold-reports report fun init #:key self-test

Preconditions:

  1. report was returned by get-report.

Effects:

  1. If #:self-test was provided and true, or not provided at all, call fun once for each test execution report contained in report, including the self test of the suite.
  2. If #:self-test was provided as false, call fun once for each test execution report contained in report, excluding the self test of the suite.
  3. Each call of fun will be performed as (fun exr ip), where exr is an <exec-report> object and ip is init on the first call and, on successive calls, the value returned by the previous call to fun.
  4. The order of calls is the order in which the test executions were performed.

Postconditions:

  1. The value returned is the result of the last call to fun, or init if no test execution report was visited.
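
For instance, the following sketch collects the names of all test executions of a report, excluding the suite self test (exec-report-name is described in the next section):

  (define (execution-names report)
    (reverse
     (fold-reports report
                   (lambda (exr acc)
                     (cons (exec-report-name exr) acc))
                   '()
                   #:self-test #f)))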


6.3.2 Guile Exec Report Reference

Guile Internal Record: <exec-report>

This type represents a test execution report.

It encapsulates micro_benchmark_exec_report underneath and a reference to the suite to control its lifetime. Note that this name is not exported by the module, but objects of this type can be accessed through a <report>.


Guile Predicate: exec-report? object

Preconditions:

  1. None.

Effects:

  1. Check if object type is <exec-report>.

Postconditions:

  1. The value returned is #t if object was obtained from a <report> object.

Guile Function: exec-report-name report

Preconditions:

  1. report was obtained through a valid call to for-each-report or fold-reports.

Effects:

  1. None.

Postconditions:

  1. The returned value is the name of the test execution associated to report.

Guile Function: exec-report-test-name report

Preconditions:

  1. report was obtained through a valid call to for-each-report or fold-reports.

Effects:

  1. None.

Postconditions:

  1. The returned value is the name used to register the test case associated to report.

Guile Function: exec-report-sizes report

Preconditions:

  1. report was obtained through a valid call to for-each-report or fold-reports.

Effects:

  1. None.

Postconditions:

  1. The returned value is the combination of dimensions used for the test execution associated to report.

Guile Function: exec-report-iterations report

Preconditions:

  1. report was obtained through a valid call to for-each-report or fold-reports.

Effects:

  1. None.

Postconditions:

  1. The returned value is the number of measured iterations performed by the test execution associated to report.

Guile Function: exec-report-total-iterations report

Preconditions:

  1. report was obtained through a valid call to for-each-report or fold-reports.

Effects:

  1. None.

Postconditions:

  1. The returned value is the total number of iterations performed by the test execution associated to report.

Guile Function: exec-report-total-samples report

Preconditions:

  1. report was obtained through a valid call to for-each-report or fold-reports.

Effects:

  1. None.

Postconditions:

  1. The returned value is the total number of samples gathered during the test execution associated to report.

Guile Function: exec-report-used-samples report

Preconditions:

  1. report was obtained through a valid call to for-each-report or fold-reports.

Effects:

  1. None.

Postconditions:

  1. The returned value is the number of samples used for the aggregated data calculation of the test execution associated to report.

Guile Function: exec-report-iteration-time report

Preconditions:

  1. report was obtained through a valid call to for-each-report or fold-reports.

Effects:

  1. None.

Postconditions:

  1. The returned value is the calculated aggregated data (see Guile Stats Utilities Reference) of the test execution associated to report regarding its iteration time.

Guile Function: exec-report-sample-time report

Preconditions:

  1. report was obtained through a valid call to for-each-report or fold-reports.

Effects:

  1. None.

Postconditions:

  1. The returned value is the calculated aggregated data (see Guile Stats Utilities Reference) of the test execution associated to report regarding its time per sample.

Guile Function: exec-report-sample-iterations report

Preconditions:

  1. report was obtained through a valid call to for-each-report or fold-reports.

Effects:

  1. None.

Postconditions:

  1. The returned value is the calculated aggregated data (see Guile Stats Utilities Reference) of the test execution associated to report regarding its iterations per sample.

Guile Function: exec-report-total-time report

Preconditions:

  1. report was obtained through a valid call to for-each-report or fold-reports.

Effects:

  1. None.

Postconditions:

  1. The returned value is the total time (see Guile Time Utilities Reference) of the test execution associated to report.

Guile Function: exec-report-time-samples report

Preconditions:

  1. report was obtained through a valid call to for-each-report or fold-reports.

Effects:

  1. None.

Postconditions:

  1. The returned value is the list of <time-sample> objects (see Guile Time Utilities Reference) which contain the data gathered during the test execution associated to report.
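
Combining these accessors with the iteration functions of the previous section, the following sketch prints the mean iteration time of every execution of the report r obtained earlier:

  (for-each-report r
                   (lambda (exr)
                     (let ((stat (exec-report-iteration-time exr)))
                       (format #t "~a: ~a ~a (mean)~%"
                               (exec-report-name exr)
                               (stat-value-mean stat)   ; see Guile Stats Utilities Reference
                               (stat-value-unit stat)))))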


6.3.3 Guile Report Output Reference

Variable: Guile Value output/console

Representation of the console format.


Variable: Guile Value output/lisp

Representation of the S-expression format.


Variable: Guile Value output/text

Representation of the text format.


Variable: Guile Value stats/mean

Print only the mean value.


Variable: Guile Value stats/variance

Print only the variance value.


Variable: Guile Value stats/std-deviation

Print only the standard deviation value.


Variable: Guile Value stats/basic

Print the mean and standard deviation values.


Variable: Guile Value stats/all

Print the mean, variance and standard deviation values.


Guile Function: print-report report #:key type port self-test size-constraints total-time total-iterations samples extra-data iteration-time sample-time sample-iterations

Preconditions:

  1. report was returned by get-report.
  2. If #:type is provided, its value is one of the output/ values defined in this section.
  3. If #:iteration-time, #:sample-time or #:sample-iterations is provided, its value is one of the stats/ values defined in this section.
  4. If #:self-test, #:size-constraints, #:total-time, #:total-iterations, #:samples or #:extra-data is provided, its value is either #t or #f.

Effects:

  1. If #:port is provided, the output is written to its value, unless its value is #f, in which case the output is returned as a string. Otherwise, the output is printed to the standard output.
  2. If #:type is provided, the report is printed with the specified format. Otherwise, the output is printed with the console format.
  3. If #:self-test is provided, its value determines if the self test results are printed. Otherwise, this is determined by the default value of the library.
  4. If #:size-constraints is provided, its value determines if the size constraints of each test execution are printed. Otherwise, this is determined by the default value of the library.
  5. If #:total-time is provided, its value determines if the total time of each test execution is printed. Otherwise, this is determined by the default value of the library.
  6. If #:total-iterations is provided, its value determines if the total number of iterations of each test execution is printed. Otherwise, this is determined by the default value of the library.
  7. If #:samples is provided, its value determines if the number of samples of each test execution is printed. Otherwise, this is determined by the default value of the library.
  8. If #:extra-data is provided, its value determines if the extra data of each test execution is printed. Otherwise, this is determined by the default value of the library.
  9. If #:iteration-time is provided, its value determines how the iteration time data aggregated from each test execution is printed. Otherwise, this is determined by the default value of the library.
  10. If #:sample-time is provided, its value determines how the sample time data aggregated from each test execution is printed. Otherwise, this is determined by the default value of the library.
  11. If #:sample-iterations is provided, its value determines how the iterations per sample data aggregated from each test execution is printed. Otherwise, this is determined by the default value of the library.

Postconditions:

  1. The standard output, or the output provided through #:port, has received the full printed report.
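
Two illustrative invocations, using the report r obtained earlier:

  ;; Console format to the standard output, hiding the self test.
  (print-report r #:self-test #f)

  ;; S-expression format returned as a string.
  (define report-sexp
    (print-report r #:type output/lisp #:port #f))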


6.4 Guile Utilities Reference

The following function makes writing benchmark scripts in Guile very easy, as you only have to use the module (mbenchmark), register your tests and call main:

Guile Function: main args

Preconditions:

  1. args is a list of strings.

Effects:

  1. Parse the command line.
  2. If args contains the parameters --help, --version or an unrecognized parameter, call exit with the corresponding value.
  3. Execute the registered tests.
  4. Print the results.

Postconditions:

  1. The standard output, or the output file provided through the command line parameters, was flushed.
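
A complete minimal script following that pattern; the measured expression is an arbitrary example:

  (use-modules (mbenchmark))

  (register-test! "list-sort"
                  #:direct #t
                  #:set-up (lambda (state) '())
                  #:test (lambda (state)
                           (let loop ()
                             (when (keep-running? state)
                               (sort (list 3 1 2) <)   ; code under measurement
                               (loop)))))

  (main (command-line))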

The following functions control the log level of the library:

Guile Function: set-log-level! level

Preconditions:

  1. level is the symbol eq? to 'error, 'warn, 'info, 'debug or 'trace.

Effects:

  1. Change the library log level.

Postconditions:

  1. Only messages from the level provided or higher will be printed.

Guile Function: set-module-log-level! module level

Preconditions:

  1. level is the symbol eq? to 'error, 'warn, 'info, 'debug or 'trace.

Effects:

  1. Change the log level of module.

Postconditions:

  1. The module of the library whose name was provided as module only emits log messages of the provided level or higher.
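
For example (the module name 'timer is only an assumption for this illustration; use the name of an actual library module):

  (set-log-level! 'warn)                  ; library-wide threshold
  (set-module-log-level! 'timer 'debug)   ; one module only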


6.4.1 Guile Time Utilities Reference

Guile Record: <elapsed-time>

This type represents the time elapsed on a test.


Guile Predicate: elapsed-time? object

Preconditions:

  1. None.

Effects:

  1. Check if object type is <elapsed-time>.

Postconditions:

  1. The returned value is #f if object type is not <elapsed-time>.

Guile Function: elapsed-time-seconds et

Preconditions:

  1. (elapsed-time? et) is #t.

Effects:

  1. None.

Postconditions:

  1. The returned value is the number of seconds stored on et.

Guile Function: elapsed-time-nanoseconds et

Preconditions:

  1. (elapsed-time? et) is #t.

Effects:

  1. None.

Postconditions:

  1. The returned value is the number of nanoseconds stored on et.

Guile Record: <time-sample>

This type represents a time sample taken during the execution of a test case.


Guile Predicate: time-sample? object

Preconditions:

  1. None.

Effects:

  1. Check if object type is <time-sample>.

Postconditions:

  1. The returned value is #f if object type is not <time-sample>.

Guile Function: time-sample-iterations sample

Preconditions:

  1. (time-sample? sample) is #t.

Effects:

  1. None.

Postconditions:

  1. The returned value is the number of iterations stored on sample.

Guile Function: time-sample-elapsed-time sample

Preconditions:

  1. (time-sample? sample) is #t.

Effects:

  1. None.

Postconditions:

  1. The returned value is the elapsed time stored on sample.
  2. The returned value has type <elapsed-time>.
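
A small helper sketch that converts an <elapsed-time> into a single inexact number of seconds, using the accessors above:

  (define (elapsed-time->seconds et)
    (+ (elapsed-time-seconds et)
       (/ (elapsed-time-nanoseconds et) 1e9)))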


6.4.2 Guile Stats Utilities Reference

Guile Record: <stat-value>

This type represents an aggregated statistical value calculated from the measurements of a test.


Guile Predicate: stat-value? object

Preconditions:

  1. None.

Effects:

  1. Check if object type is <stat-value>.

Postconditions:

  1. The returned value is #f if object type is not <stat-value>.

Guile Function: stat-value-unit stat

Preconditions:

  1. (stat-value? stat) is #t.

Effects:

  1. None.

Postconditions:

  1. The returned value is the unit stored on stat.

Guile Function: stat-value-mean stat

Preconditions:

  1. (stat-value? stat) is #t.

Effects:

  1. None.

Postconditions:

  1. The returned value is the mean stored on stat.


Next: , Previous: , Up: MicroBenchmark   [Contents][Index]

How To Contribute


7 Contributing

MicroBenchmark is Free Software5: you can copy, distribute and/or modify it under the terms of the GNU LGPLv3, or any later version (see GNU Lesser General Public License.) That freedom makes collaboration on equal grounds possible and leverages our collective effort to make better software available for everyone.

As such, contributions to the project are appreciated and encouraged.

There are plenty of things to be done and everybody should be able to contribute. Currently there is not much collective activity, but contact <bug-mbenchmark@nongnu.org> if you are interested in contributing. Your help is very welcome!

The following sections show how to report a bug in MicroBenchmark (see Reporting Bugs), how to send a patch for its inclusion into the repository (see Sending Patches), the coding guidelines followed by the project (see Coding Guidelines) and how the source code of the project is physically organized (see Source Code Organization.)


7.1 Reporting Bugs

Please send MicroBenchmark bug reports to <bug-mbenchmark@nongnu.org>.


7.2 Sending Patches

Steps to send a patch:

  1. Compile the patched code.
  2. Run the test suite.
  3. (optional) Run make distcheck.
  4. Extract the patches with git format-patch HEAD~<NUM_COMMITS>.
  5. Send the patches to the mailing list for review.

Steps 1 to 3 can be performed automatically with the provided script build-aux/build.sh.


7.3 Coding Guidelines

MicroBenchmark follows the GNU Coding Standards (see GNU Coding Standards) for its source code and build process. The command make indent-code can be used to format the source code with the script build-aux/indent-code.sh6.

In addition to those, here are some common conventions of the code base:

  • Function names
    Functions with _create on their suffix allocate the object memory, which must be released through the corresponding _release function. Functions with _init and _cleanup work on objects already allocated and must go in pairs too.
  • Optimization targets
    The abstract interfaces (meter, timer) used for the data collection probably need some work for more extended usage.

The empty self test is a good baseline for checking introduced overheads.

    On the other hand, optimizations outside the critical path (i.e. the test state code paths) are only worth the effort if they do not affect readability.


7.4 Source Code Organization

The source code tree is organized into the following directories:

  • . (Project Directory)
    Contains common files such as README, NEWS and ChangeLog7. It also contains the main project definitions: Makefile.am and configure.ac.
  • build-aux/
    Utilities needed for the build process8.
  • doc/
    Documentation of the project, such as this document.
  • doc/examples/
    Examples of the library usage. They can be executed with make run-examples. The output included in this manual can be regenerated with make regen-doc-examples.
  • etc/
    Exported m4 macros and pkg-config files.
  • m4/
    Macros for configure and Makefile generation. Most files are copied in by autoreconf. mbenchmark.m4 contains macros used by MicroBenchmark itself.
  • include/internal/
    Header files for internal usage.
  • include/mbenchmark/
    Header files installed by the library.
  • guile/
    Guile modules.
  • lib/
    Helper and language binding libraries.
  • src/
    Core modules.
  • tests/
    Test suite.

Next: , Previous: , Up: MicroBenchmark   [Contents][Index]

Licenses


Appendix A GNU Free Documentation License

Version 1.3, 3 November 2008
Copyright © 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc.
https://fsf.org/

Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
  1. PREAMBLE

    The purpose of this License is to make a manual, textbook, or other functional and useful document free in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.

    This License is a kind of “copyleft”, which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.

    We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.

  2. APPLICABILITY AND DEFINITIONS

    This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The “Document”, below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as “you”. You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.

    A “Modified Version” of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.

    A “Secondary Section” is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document’s overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.

    The “Invariant Sections” are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.

    The “Cover Texts” are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.

    A “Transparent” copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not “Transparent” is called “Opaque”.

    Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.

    The “Title Page” means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, “Title Page” means the text near the most prominent appearance of the work’s title, preceding the beginning of the body of the text.

    The “publisher” means any person or entity that distributes copies of the Document to the public.

    A section “Entitled XYZ” means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as “Acknowledgements”, “Dedications”, “Endorsements”, or “History”.) To “Preserve the Title” of such a section when you modify the Document means that it remains a section “Entitled XYZ” according to this definition.

    The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.

  3. VERBATIM COPYING

    You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.

    You may also lend copies, under the same conditions stated above, and you may publicly display copies.

  4. COPYING IN QUANTITY

    If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document’s license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.

    If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.

    If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.

    It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.

  5. MODIFICATIONS

    You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:

    1. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.
    2. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
    3. State on the Title page the name of the publisher of the Modified Version, as the publisher.
    4. Preserve all the copyright notices of the Document.
    5. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
    6. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
    7. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document’s license notice.
    8. Include an unaltered copy of this License.
    9. Preserve the section Entitled “History”, Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled “History” in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
    10. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the “History” section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
    11. For any section Entitled “Acknowledgements” or “Dedications”, Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.
    12. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
    13. Delete any section Entitled “Endorsements”. Such a section may not be included in the Modified Version.
    14. Do not retitle any existing section to be Entitled “Endorsements” or to conflict in title with any Invariant Section.
    15. Preserve any Warranty Disclaimers.

    If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version’s license notice. These titles must be distinct from any other section titles.

    You may add a section Entitled “Endorsements”, provided it contains nothing but endorsements of your Modified Version by various parties—for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.

    You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.

    The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.

  6. COMBINING DOCUMENTS

    You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.

    The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.

    In the combination, you must combine any sections Entitled “History” in the various original documents, forming one section Entitled “History”; likewise combine any sections Entitled “Acknowledgements”, and any sections Entitled “Dedications”. You must delete all sections Entitled “Endorsements.”

  7. COLLECTIONS OF DOCUMENTS

    You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.

    You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.

  8. AGGREGATION WITH INDEPENDENT WORKS

    A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an “aggregate” if the copyright resulting from the compilation is not used to limit the legal rights of the compilation’s users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.

    If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document’s Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.

  9. TRANSLATION

    Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.

    If a section in the Document is Entitled “Acknowledgements”, “Dedications”, or “History”, the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.

  10. TERMINATION

    You may not copy, modify, sublicense, or distribute the Document except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, or distribute it is void, and will automatically terminate your rights under this License.

    However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

    Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.

    Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, receipt of a copy of some or all of the same material does not give you any rights to use it.

  11. FUTURE REVISIONS OF THIS LICENSE

    The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See https://www.gnu.org/copyleft/.

    Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License “or any later version” applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. If the Document specifies that a proxy can decide which future versions of this License can be used, that proxy’s public statement of acceptance of a version permanently authorizes you to choose that version for the Document.

  12. RELICENSING

    “Massive Multiauthor Collaboration Site” (or “MMC Site”) means any World Wide Web server that publishes copyrightable works and also provides prominent facilities for anybody to edit those works. A public wiki that anybody can edit is an example of such a server. A “Massive Multiauthor Collaboration” (or “MMC”) contained in the site means any set of copyrightable works thus published on the MMC site.

    “CC-BY-SA” means the Creative Commons Attribution-Share Alike 3.0 license published by Creative Commons Corporation, a not-for-profit corporation with a principal place of business in San Francisco, California, as well as future copyleft versions of that license published by that same organization.

    “Incorporate” means to publish or republish a Document, in whole or in part, as part of another Document.

    An MMC is “eligible for relicensing” if it is licensed under this License, and if all works that were first published under this License somewhere other than this MMC, and subsequently incorporated in whole or in part into the MMC, (1) had no cover texts or invariant sections, and (2) were thus incorporated prior to November 1, 2008.

    The operator of an MMC Site may republish an MMC contained in the site under CC-BY-SA on the same site at any time before August 1, 2009, provided the MMC is eligible for relicensing.

ADDENDUM: How to use this License for your documents

To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page:

  Copyright (C)  year  your name.
  Permission is granted to copy, distribute and/or modify this document
  under the terms of the GNU Free Documentation License, Version 1.3
  or any later version published by the Free Software Foundation;
  with no Invariant Sections, no Front-Cover Texts, and no Back-Cover
  Texts.  A copy of the license is included in the section entitled ``GNU
  Free Documentation License''.

If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with…Texts.” line with this:

    with the Invariant Sections being list their titles, with
    the Front-Cover Texts being list, and with the Back-Cover Texts
    being list.

If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.

If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.


Appendix B GNU General Public License

Version 3, 29 June 2007
Copyright © 2007 Free Software Foundation, Inc. https://fsf.org/

Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.

Preamble

The GNU General Public License is a free, copyleft license for software and other kinds of works.

The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program—to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.

To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.

For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.

Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.

For the developers’ and authors’ protection, the GPL clearly explains that there is no warranty for this free software. For both users’ and authors’ sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.

Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users’ freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.

Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free.

The precise terms and conditions for copying, distribution and modification follow.

TERMS AND CONDITIONS

  0. Definitions.

    “This License” refers to version 3 of the GNU General Public License.

    “Copyright” also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.

    “The Program” refers to any copyrightable work licensed under this License. Each licensee is addressed as “you”. “Licensees” and “recipients” may be individuals or organizations.

    To “modify” a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a “modified version” of the earlier work or a work “based on” the earlier work.

    A “covered work” means either the unmodified Program or a work based on the Program.

    To “propagate” a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.

    To “convey” a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.

    An interactive user interface displays “Appropriate Legal Notices” to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.

  1. Source Code.

    The “source code” for a work means the preferred form of the work for making modifications to it. “Object code” means any non-source form of a work.

    A “Standard Interface” means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.

    The “System Libraries” of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A “Major Component”, in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.

    The “Corresponding Source” for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work’s System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.

    The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.

    The Corresponding Source for a work in source code form is that same work.

  2. Basic Permissions.

    All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.

    You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.

    Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.

  3. Protecting Users’ Legal Rights From Anti-Circumvention Law.

    No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.

    When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work’s users, your or third parties’ legal rights to forbid circumvention of technological measures.

  4. Conveying Verbatim Copies.

    You may convey verbatim copies of the Program’s source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.

    You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.

  5. Conveying Modified Source Versions.

    You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:

    a. The work must carry prominent notices stating that you modified it, and giving a relevant date.
    b. The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to “keep intact all notices”.
    c. You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.
    d. If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.

    A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an “aggregate” if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation’s users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.

  6. Conveying Non-Source Forms.

    You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:

    a. Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.
    b. Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.
    c. Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.
    d. Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.
    e. Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.

    A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.

    A “User Product” is either (1) a “consumer product”, which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, “normally used” refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.

    “Installation Information” for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.

    If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).

    The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.

    Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.

  7. Additional Terms.

    “Additional permissions” are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.

    When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.

    Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:

    a. Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or
    b. Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or
    c. Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or
    d. Limiting the use for publicity purposes of names of licensors or authors of the material; or
    e. Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or
    f. Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.

    All other non-permissive additional terms are considered “further restrictions” within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.

    If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.

    Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.

  8. Termination.

    You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).

    However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

    Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.

    Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.

  9. Acceptance Not Required for Having Copies.

    You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.

  10. Automatic Licensing of Downstream Recipients.

    Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.

    An “entity transaction” is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party’s predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.

    You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.

  11. Patents.

    A “contributor” is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor’s “contributor version”.

    A contributor’s “essential patent claims” are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, “control” includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.

    Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor’s essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.

    In the following three paragraphs, a “patent license” is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To “grant” such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.

    If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. “Knowingly relying” means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient’s use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.

    If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.

    A patent license is “discriminatory” if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.

    Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.

  12. No Surrender of Others’ Freedom.

    If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.

  13. Use with the GNU Affero General Public License.

    Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such.

  14. Revised Versions of this License.

    The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

    Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License “or any later version” applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation.

    If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy’s public statement of acceptance of a version permanently authorizes you to choose that version for the Program.

    Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.

  15. Disclaimer of Warranty.

    THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

  16. Limitation of Liability.

    IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

  17. Interpretation of Sections 15 and 16.

    If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.

END OF TERMS AND CONDITIONS

How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the “copyright” line and a pointer to where the full notice is found.

<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year>  <name of author>

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or (at
your option) any later version.

This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program.  If not, see https://www.gnu.org/licenses/.

Also add information on how to contact you by electronic and paper mail.
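
For instance, the opening comment of a C source file in a hypothetical benchmark program might look like this (the file name, description, year, and author are invented placeholders):

/* bench-strcpy.c -- measure the performance of a string-copy routine.
   Copyright (C) 2023 J. Random Hacker <jrh@example.org>

   This program is free software: you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation, either version 3 of the License, or (at
   your option) any later version.

   This program is distributed in the hope that it will be useful, but
   WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program.  If not, see <https://www.gnu.org/licenses/>.  */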

If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:

<program>  Copyright (C) <year>  <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type ‘show w’.
This is free software, and you are welcome to redistribute it
under certain conditions; type ‘show c’ for details.

The hypothetical commands ‘show w’ and ‘show c’ should show the appropriate parts of the General Public License. Of course, your program’s commands might be different; for a GUI interface, you would use an “about box”.
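
A minimal sketch of such a startup notice in C follows; the program name, author, and the print_license_notice helper are invented placeholders, and ‘show w’ and ‘show c’ are only the license’s suggested command names:

#include <stdio.h>

/* Print the interactive-mode license notice.  A real program would
   call this once at startup when running in an interactive mode.  */
static void
print_license_notice (void)
{
  puts ("bench Copyright (C) 2023 J. Random Hacker");
  puts ("This program comes with ABSOLUTELY NO WARRANTY; "
        "for details type 'show w'.");
  puts ("This is free software, and you are welcome to redistribute it");
  puts ("under certain conditions; type 'show c' for details.");
}

int
main (void)
{
  print_license_notice ();
  /* An interactive command loop would follow here.  */
  return 0;
}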

You should also get your employer (if you work as a programmer) or school, if any, to sign a “copyright disclaimer” for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see https://www.gnu.org/licenses/.

The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read https://www.gnu.org/licenses/why-not-lgpl.html.


Appendix C GNU Lesser General Public License

Version 3, 29 June 2007
Copyright © 2007 Free Software Foundation, Inc. https://fsf.org/

Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.

This version of the GNU Lesser General Public License incorporates the terms and conditions of version 3 of the GNU General Public License, supplemented by the additional permissions listed below.

  0. Additional Definitions.

    As used herein, “this License” refers to version 3 of the GNU Lesser General Public License, and the “GNU GPL” refers to version 3 of the GNU General Public License.

    “The Library” refers to a covered work governed by this License, other than an Application or a Combined Work as defined below.

    An “Application” is any work that makes use of an interface provided by the Library, but which is not otherwise based on the Library. Defining a subclass of a class defined by the Library is deemed a mode of using an interface provided by the Library.

    A “Combined Work” is a work produced by combining or linking an Application with the Library. The particular version of the Library with which the Combined Work was made is also called the “Linked Version”.

    The “Minimal Corresponding Source” for a Combined Work means the Corresponding Source for the Combined Work, excluding any source code for portions of the Combined Work that, considered in isolation, are based on the Application, and not on the Linked Version.

    The “Corresponding Application Code” for a Combined Work means the object code and/or source code for the Application, including any data and utility programs needed for reproducing the Combined Work from the Application, but excluding the System Libraries of the Combined Work.

  1. Exception to Section 3 of the GNU GPL.

    You may convey a covered work under sections 3 and 4 of this License without being bound by section 3 of the GNU GPL.

  2. Conveying Modified Versions.

    If you modify a copy of the Library, and, in your modifications, a facility refers to a function or data to be supplied by an Application that uses the facility (other than as an argument passed when the facility is invoked), then you may convey a copy of the modified version:

    a. under this License, provided that you make a good faith effort to ensure that, in the event an Application does not supply the function or data, the facility still operates, and performs whatever part of its purpose remains meaningful, or
    b. under the GNU GPL, with none of the additional permissions of this License applicable to that copy.
  3. Object Code Incorporating Material from Library Header Files.

    The object code form of an Application may incorporate material from a header file that is part of the Library. You may convey such object code under terms of your choice, provided that, if the incorporated material is not limited to numerical parameters, data structure layouts and accessors, or small macros, inline functions and templates (ten or fewer lines in length), you do both of the following:

    a. Give prominent notice with each copy of the object code that the Library is used in it and that the Library and its use are covered by this License.
    b. Accompany the object code with a copy of the GNU GPL and this license document.
  4. Combined Works.

    You may convey a Combined Work under terms of your choice that, taken together, effectively do not restrict modification of the portions of the Library contained in the Combined Work and reverse engineering for debugging such modifications, if you also do each of the following:

    a. Give prominent notice with each copy of the Combined Work that the Library is used in it and that the Library and its use are covered by this License.
    b. Accompany the Combined Work with a copy of the GNU GPL and this license document.
    c. For a Combined Work that displays copyright notices during execution, include the copyright notice for the Library among these notices, as well as a reference directing the user to the copies of the GNU GPL and this license document.
    d. Do one of the following:
      0. Convey the Minimal Corresponding Source under the terms of this License, and the Corresponding Application Code in a form suitable for, and under terms that permit, the user to recombine or relink the Application with a modified version of the Linked Version to produce a modified Combined Work, in the manner specified by section 6 of the GNU GPL for conveying Corresponding Source.
      1. Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (a) uses at run time a copy of the Library already present on the user’s computer system, and (b) will operate properly with a modified version of the Library that is interface-compatible with the Linked Version.
    e. Provide Installation Information, but only if you would otherwise be required to provide such information under section 6 of the GNU GPL, and only to the extent that such information is necessary to install and execute a modified version of the Combined Work produced by recombining or relinking the Application with a modified version of the Linked Version. (If you use option 4d0, the Installation Information must accompany the Minimal Corresponding Source and Corresponding Application Code. If you use option 4d1, you must provide the Installation Information in the manner specified by section 6 of the GNU GPL for conveying Corresponding Source.)
  5. Combined Libraries.

    You may place library facilities that are a work based on the Library side by side in a single library together with other library facilities that are not Applications and are not covered by this License, and convey such a combined library under terms of your choice, if you do both of the following:

    a. Accompany the combined library with a copy of the same work based on the Library, uncombined with any other library facilities, conveyed under the terms of this License.
    b. Give prominent notice with the combined library that part of it is a work based on the Library, and explaining where to find the accompanying uncombined form of the same work.
  6. Revised Versions of the GNU Lesser General Public License.

    The Free Software Foundation may publish revised and/or new versions of the GNU Lesser General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

    Each version is given a distinguishing version number. If the Library as you received it specifies that a certain numbered version of the GNU Lesser General Public License “or any later version” applies to it, you have the option of following the terms and conditions either of that published version or of any later version published by the Free Software Foundation. If the Library as you received it does not specify a version number of the GNU Lesser General Public License, you may choose any version of the GNU Lesser General Public License ever published by the Free Software Foundation.

    If the Library as you received it specifies that a proxy can decide whether future versions of the GNU Lesser General Public License shall apply, that proxy’s public statement of acceptance of any version is permanent authorization for you to choose that version for the Library.



Indices


Appendix D Concept Index

Index Entry  Section

A
additional data: Test Execution Report
aggregated data: Test Execution Report
automatic test: Execution Stages

B
benchmark: What Is A Benchmark

C
C main function: Utilities Reference
chronometer: Chronometer Reference
chronometer: Predefined Chronometers
chronometer, time_t: Predefined Chronometers
chronometer, clock_t: Predefined Chronometers
chronometer, gettimeofday: Predefined Chronometers
chronometer, clock_gettime: Predefined Chronometers
clock: Predefined Clocks
clock, CLOCK_REALTIME: Predefined Clocks
clock, CLOCK_TAI: Predefined Clocks
clock, CLOCK_MONOTONIC: Predefined Clocks
clock, CLOCK_PROCESS_CPUTIME_ID: Predefined Clocks
clock, CLOCK_THREAD_CPUTIME_ID: Predefined Clocks
clock (): Predefined Chronometers
clock_gettime (): Predefined Chronometers
custom data collection: Custom Data Collection Reference

D
data collection: Data Collection Reference
data collection, custom: Custom Data Collection Reference
directed test: Execution Stages

E
elapsed time: Time Reference
error handling: Utilities Reference

G
gettimeofday (): Predefined Chronometers
GNU Free Documentation License: GNU Free Documentation License
GNU General Public License: GNU General Public License
GNU Lesser General Public License: GNU Lesser General Public License

K
keep running?: State Reference
keep running?: C++ State Reference

L
license, GNU Free Documentation License: GNU Free Documentation License
license, GNU General Public License: GNU General Public License
license, GNU Lesser General Public License: GNU Lesser General Public License
log: Library Log Reference

M
macros, registration: Utilities Reference
macros, constraints: Utilities Reference
macros, main: Utilities Reference
meter: Predefined Meters
meter: Meter Reference
micro-benchmark: MicroBenchmark Concepts

P
performance: What Is A Benchmark
performance counters: What Is A Benchmark
profiling: What Is A Benchmark

R
registration: Utilities Reference
registration, macros: Utilities Reference
registration, constraints: Utilities Reference
registration, functions: Utilities Reference
registration, helpers: Utilities Reference
registration constraints: Utilities Reference
registration functions: Utilities Reference
report, suite: Suite Report Reference
report, test: Test Report Reference
report, test execution: Test Execution Report
report, output: Output Reference
report, statistics: Data Collection Reference
report, suite: C++ Suite Report Reference
report, test: C++ Test Report Reference
report, test execution: C++ Exec Report Reference
report output: Output Reference

S
set up: Execution Stages
setitimer: Predefined Timers
state: MicroBenchmark Concepts
statistical values: Predefined Calculations
statistics: Data Collection Reference
stats, values: Predefined Calculations
stats, mean: Predefined Calculations
stats, μ: Predefined Calculations
stats, variance: Predefined Calculations
stats, σ²: Predefined Calculations
stats, standard deviation: Predefined Calculations
stats, σ: Predefined Calculations
stats, meter: Predefined Meters
stats, chronometer: Predefined Chronometers
stats, custom data collection: Custom Data Collection Reference
suite: MicroBenchmark Concepts
suite output: Output Reference
suite report: Suite Report Reference
suite report: C++ Suite Report Reference

T
tear down: Execution Stages
test case: MicroBenchmark Concepts
test case, set up: Execution Stages
test case, automatic test: Execution Stages
test case, directed test: Execution Stages
test case, tear down: Execution Stages
test case, constraints: Test Constraints Reference
test case, iteration limits: Test Constraints Reference
test case, sample limits: Test Constraints Reference
test case, time limits: Test Constraints Reference
test case, constraints: C++ Test Reference
test case, iteration limits: C++ Test Reference
test case, sample limits: C++ Test Reference
test execution report: Test Execution Report
test execution report, aggregated data: Test Execution Report
test execution report, additional data: Test Execution Report
test execution report: C++ Exec Report Reference
test report: Test Report Reference
test report: C++ Test Report Reference
time, elapsed: Time Reference
time, from clock: Time Reference
time (): Predefined Chronometers
time utility: What Is A Benchmark
timer: Predefined Timers
timer, chrono-adapter: Predefined Timers
timer, itimer: Predefined Timers
timer, timer_t: Predefined Timers
timer, timerfd: Predefined Timers
timerfd_create (): Predefined Timers
timer_create (): Predefined Timers


Appendix E C Programming Index

Index Entry  Section

M
micro_benchmark_auto_test_fun: Test Definition Reference
micro_benchmark_chronometer_create: Chronometer Reference
micro_benchmark_chronometer_create_default: Chronometer Reference
micro_benchmark_chronometer_from_meter: Chronometer Reference
micro_benchmark_chronometer_get_default: Chronometer Reference
micro_benchmark_chronometer_provider: Chronometer Reference
micro_benchmark_chronometer_release: Chronometer Reference
micro_benchmark_chronometer_set_default: Chronometer Reference
micro_benchmark_cleanup: Utilities Reference
micro_benchmark_clock_time: Time Reference
micro_benchmark_clock_type: Time Reference
MICRO_BENCHMARK_COMPILER_BARRIER: Optimization Utilities Reference
MICRO_BENCHMARK_CONSTRAINT_TEST: Utilities Reference
micro_benchmark_custom_meter: Test Constraints Reference
micro_benchmark_custom_sample_collector: Custom Data Collection Reference
micro_benchmark_custom_sample_collector_fun: Custom Data Collection Reference
micro_benchmark_custom_sample_create_data_fun: Custom Data Collection Reference
micro_benchmark_custom_sample_release_data_fun: Custom Data Collection Reference
micro_benchmark_custom_time_calculator: Custom Data Collection Reference
MICRO_BENCHMARK_DO_NOT_OPTIMIZE: Optimization Utilities Reference
micro_benchmark_do_not_optimize: Optimization Utilities Reference
micro_benchmark_exec_report: Test Execution Report
micro_benchmark_exec_report_get_extra_data: Test Execution Report
micro_benchmark_exec_report_get_iterations: Test Execution Report
micro_benchmark_exec_report_get_name: Test Execution Report
micro_benchmark_exec_report_get_sizes: Test Execution Report
micro_benchmark_exec_report_get_time_sample: Test Execution Report
micro_benchmark_exec_report_iteration_time: Test Execution Report
micro_benchmark_exec_report_meter_get_sample: Test Execution Report
micro_benchmark_exec_report_meter_num_samples: Test Execution Report
micro_benchmark_exec_report_meter_sample_type: Test Execution Report
micro_benchmark_exec_report_number_of_meters: Test Execution Report
micro_benchmark_exec_report_num_time_samples: Test Execution Report
micro_benchmark_exec_report_sample_iterations: Test Execution Report
micro_benchmark_exec_report_sample_time: Test Execution Report
micro_benchmark_exec_report_total_iterations: Test Execution Report
micro_benchmark_exec_report_total_samples: Test Execution Report
micro_benchmark_exec_report_total_time: Test Execution Report
micro_benchmark_exec_report_used_samples: Test Execution Report
micro_benchmark_get_default_output_values: Output Reference
micro_benchmark_init: Utilities Reference
micro_benchmark_log_level: Library Log Reference
micro_benchmark_log_reset_output: Library Log Reference
micro_benchmark_log_set_output: Library Log Reference
MICRO_BENCHMARK_MAIN: Utilities Reference
micro_benchmark_main: Utilities Reference
micro_benchmark_meter: Meter Reference
micro_benchmark_meter_cleanup_fun: Meter Reference
micro_benchmark_meter_data: Meter Reference
micro_benchmark_meter_definition: Meter Reference
micro_benchmark_meter_get_sample_fun: Meter Reference
micro_benchmark_meter_init_fun: Meter Reference
micro_benchmark_meter_restart_fun: Meter Reference
micro_benchmark_meter_start_fun: Meter Reference
micro_benchmark_meter_stop_fun: Meter Reference
micro_benchmark_output_stat: Output Reference
micro_benchmark_output_type: Output Reference
micro_benchmark_output_values: Output Reference
micro_benchmark_print_custom_report: Output Reference
micro_benchmark_print_report: Output Reference
MICRO_BENCHMARK_REGISTER_AUTO_TEST: Utilities Reference
MICRO_BENCHMARK_REGISTER_FULL_TEST: Utilities Reference
MICRO_BENCHMARK_REGISTER_NAMED_AUTO_TEST: Utilities Reference
MICRO_BENCHMARK_REGISTER_NAMED_TEST: Utilities Reference
MICRO_BENCHMARK_REGISTER_SIMPLE_TEST: Utilities Reference
micro_benchmark_register_static_constraint: Utilities Reference
micro_benchmark_register_static_test: Utilities Reference
MICRO_BENCHMARK_REGISTER_TEST: Utilities Reference
micro_benchmark_report: Suite Report Reference
micro_benchmark_report_extractor_fun: Test Execution Report
micro_benchmark_report_extra_data: Test Execution Report
micro_benchmark_report_get_name: Suite Report Reference
micro_benchmark_report_get_number_of_tests: Suite Report Reference
micro_benchmark_report_get_test_report: Suite Report Reference
micro_benchmark_set_default_output_values: Output Reference
micro_benchmark_set_error_handler: Utilities Reference
micro_benchmark_set_log_level: Library Log Reference
micro_benchmark_set_module_log_level: Library Log Reference
micro_benchmark_set_up_fun: Test Definition Reference
micro_benchmark_state_get_data: State Reference
micro_benchmark_state_get_dimensions: State Reference
micro_benchmark_state_get_name: State Reference
micro_benchmark_state_get_size: State Reference
micro_benchmark_state_keep_running: State Reference
micro_benchmark_state_set_name: State Reference
micro_benchmark_static_constraint: Utilities Reference
micro_benchmark_stats_generic_sample: Data Collection Reference
micro_benchmark_stats_generic_samples: Data Collection Reference
micro_benchmark_stats_generic_sample_data: Data Collection Reference
micro_benchmark_stats_meter_cleanup: Meter Reference
micro_benchmark_stats_meter_get_max_resolution: Meter Reference
micro_benchmark_stats_meter_get_min_resolution: Meter Reference
micro_benchmark_stats_meter_get_name: Meter Reference
micro_benchmark_stats_meter_get_sample: Meter Reference
micro_benchmark_stats_meter_get_sample_type: Meter Reference
micro_benchmark_stats_meter_init: Meter Reference
micro_benchmark_stats_meter_init_with_data: Meter Reference
micro_benchmark_stats_meter_restart: Meter Reference
micro_benchmark_stats_meter_sample: Data Collection Reference
micro_benchmark_stats_meter_start: Meter Reference
micro_benchmark_stats_meter_stop: Meter Reference
micro_benchmark_stats_sample_type: Data Collection Reference
micro_benchmark_stats_unit: Time Reference
micro_benchmark_stats_value: Time Reference
micro_benchmark_suite: Suite Reference
micro_benchmark_suite_create: Suite Reference
micro_benchmark_suite_get_name: Suite Reference
micro_benchmark_suite_get_number_of_tests: Suite Reference
micro_benchmark_suite_get_report: Suite Reference
micro_benchmark_suite_get_test: Suite Reference
micro_benchmark_suite_register_test: Suite Reference
micro_benchmark_suite_release: Suite Reference
micro_benchmark_suite_run: Suite Reference
micro_benchmark_tear_down_fun: Test Definition Reference
micro_benchmark_test_case: Test Reference
micro_benchmark_test_case_add_dimension: Test Constraints Reference
micro_benchmark_test_case_add_meter: Test Constraints Reference
micro_benchmark_test_case_definition: Test Definition Reference
micro_benchmark_test_case_dimensions: Test Constraints Reference
micro_benchmark_test_case_get_data: Test Definition Reference
micro_benchmark_test_case_get_definition: Test Definition Reference
micro_benchmark_test_case_get_dimension: Test Constraints Reference
micro_benchmark_test_case_get_max_time: Test Constraints Reference
micro_benchmark_test_case_get_name: Test Definition Reference
micro_benchmark_test_case_is_enabled: Test Definition Reference
micro_benchmark_test_case_iterations_to_skip: Test Constraints Reference
micro_benchmark_test_case_limit_iterations: Test Constraints Reference
micro_benchmark_test_case_limit_samples: Test Constraints Reference
micro_benchmark_test_case_max_iterations: Test Constraints Reference
micro_benchmark_test_case_max_sample_iterations: Test Constraints Reference
micro_benchmark_test_case_min_iterations: Test Constraints Reference
micro_benchmark_test_case_min_sample_iterations: Test Constraints Reference
micro_benchmark_test_case_set_calculator: Test Constraints Reference
micro_benchmark_test_case_set_chrono: Test Constraints Reference
micro_benchmark_test_case_set_custom_chrono: Test Constraints Reference
micro_benchmark_test_case_set_data: Test Definition Reference
micro_benchmark_test_case_set_enabled: Test Definition Reference
micro_benchmark_test_case_set_max_time: Test Constraints Reference
micro_benchmark_test_case_skip_iterations: Test Constraints Reference
micro_benchmark_test_definition: Test Definition Reference
micro_benchmark_test_fun: Test Definition Reference
micro_benchmark_test_report: Test Report Reference
micro_benchmark_test_report_get_exec_report: Test Report Reference
micro_benchmark_test_report_get_name: Test Report Reference
micro_benchmark_test_report_get_num_executions: Test Report Reference
micro_benchmark_test_state: State Reference
micro_benchmark_timer: Timer Reference
micro_benchmark_timer_cleanup: Timer Reference
micro_benchmark_timer_cleanup_fun: Timer Reference
micro_benchmark_timer_create: Timer Provider Reference
micro_benchmark_timer_create_default: Timer Provider Reference
micro_benchmark_timer_data: Timer Reference
micro_benchmark_timer_definition: Timer Reference
micro_benchmark_timer_elapsed: Timer Reference
micro_benchmark_timer_elapsed_fun: Timer Reference
micro_benchmark_timer_from_meter: Timer Provider Reference
micro_benchmark_timer_from_provided_meter: Timer Provider Reference
micro_benchmark_timer_from_template: Timer Provider Reference
micro_benchmark_timer_get_default: Timer Reference
micro_benchmark_timer_get_name: Timer Reference
micro_benchmark_timer_get_resolution: Timer Reference
micro_benchmark_timer_get_type: Timer Reference
micro_benchmark_timer_init: Timer Reference
micro_benchmark_timer_init_fun: Timer Reference
micro_benchmark_timer_init_with_extra: Timer Reference
micro_benchmark_timer_is_running: Timer Reference
micro_benchmark_timer_provider: Timer Provider Reference
micro_benchmark_timer_release: Timer Provider Reference
micro_benchmark_timer_restart: Timer Reference
micro_benchmark_timer_restart_fun: Timer Reference
micro_benchmark_timer_running_fun: Timer Reference
micro_benchmark_timer_set_default: Timer Reference
micro_benchmark_timer_set_default_provider: Timer Reference
micro_benchmark_timer_start: Timer Reference
micro_benchmark_timer_start_fun: Timer Reference
micro_benchmark_timer_stop: Timer Reference
micro_benchmark_timer_stop_fun: Timer Reference
micro_benchmark_time_sample: Time Reference
micro_benchmark_time_samples: Time Reference
micro_benchmark_time_sample_: Time Reference
micro_benchmark_time_stats_values: Time Reference
micro_benchmark_write_custom_report: Output Reference
micro_benchmark_write_report: Output Reference


Appendix F C++ Programming Index

Index Entry  Section

<
<iteration-stat>: C++ Exec Report Reference
<iteration-unit>: C++ Exec Report Reference
<time-samples>: C++ Exec Report Reference
<time-stat>: C++ Exec Report Reference
<time-unit>: C++ Exec Report Reference
<with-constraints>: C++ Suite Reference

C
compiler_barrier: C++ Optimization Utilities Reference

D
do_not_optimize: C++ Optimization Utilities Reference

E
exec_report: C++ Exec Report Reference
exec_report::exec_report: C++ Exec Report Reference
exec_report::exec_report: C++ Exec Report Reference
exec_report::iterations: C++ Exec Report Reference
exec_report::iteration_time: C++ Exec Report Reference
exec_report::name: C++ Exec Report Reference
exec_report::operator=: C++ Exec Report Reference
exec_report::sample_iterations: C++ Exec Report Reference
exec_report::sample_time: C++ Exec Report Reference
exec_report::sizes: C++ Exec Report Reference
exec_report::time_samples: C++ Exec Report Reference
exec_report::total_iterations: C++ Exec Report Reference
exec_report::total_samples: C++ Exec Report Reference
exec_report::total_time: C++ Exec Report Reference
exec_report::used_samples: C++ Exec Report Reference

I
io::default_output_values: C++ Report Output Reference
io::default_output_values: C++ Report Output Reference
io::output_stat: C++ Report Output Reference
io::output_type: C++ Report Output Reference
io::output_values: C++ Report Output Reference

M
main: C++ Utilities Reference
MICRO_BENCHMARK_MAIN: C++ Utilities Reference

O
operator<<: C++ Suite Report Reference
operator<<: C++ Suite Report Reference

R
report: C++ Suite Report Reference
report::begin: C++ Suite Report Reference
report::empty: C++ Suite Report Reference
report::end: C++ Suite Report Reference
report::executions: C++ Suite Report Reference
report::name: C++ Suite Report Reference
report::print: C++ Suite Report Reference
report::print: C++ Suite Report Reference
report::print: C++ Suite Report Reference
report::print: C++ Suite Report Reference
report::print: C++ Suite Report Reference
report::print: C++ Suite Report Reference
report::print: C++ Suite Report Reference
report::print: C++ Suite Report Reference
report::print: C++ Suite Report Reference
report::print: C++ Suite Report Reference
report::print: C++ Suite Report Reference
report::rbegin: C++ Suite Report Reference
report::rend: C++ Suite Report Reference
report::report: C++ Suite Report Reference
report::report: C++ Suite Report Reference
report::report: C++ Suite Report Reference
report::reverse_executions: C++ Suite Report Reference
report::reverse_tests: C++ Suite Report Reference
report::size: C++ Suite Report Reference
report::tests: C++ Suite Report Reference
report::with_args: C++ Suite Report Reference

S
state: C++ State Reference
state::get_name: C++ State Reference
state::keep_running: C++ State Reference
state::set_name: C++ State Reference
state::set_name: C++ State Reference
state::sizes: C++ State Reference
suite: C++ Suite Reference
suite::name: C++ Suite Reference
suite::register_class: C++ Suite Reference
suite::register_class: C++ Suite Reference
suite::register_test: C++ Suite Reference
suite::register_test: C++ Suite Reference
suite::register_test: C++ Suite Reference
suite::register_test: C++ Suite Reference
suite::run: C++ Suite Reference
suite::stored_report: C++ Suite Reference
suite::suite: C++ Suite Reference
suite::suite: C++ Suite Reference
suite::tests: C++ Suite Reference

T
test_case: C++ Test Reference
test_case::add_dimension: C++ Test Reference
test_case::add_dimension: C++ Test Reference
test_case::add_dimension: C++ Test Reference
test_case::add_dimension: C++ Test Reference
test_case::get_dimension: C++ Test Reference
test_case::iterations_to_skip: C++ Test Reference
test_case::limit_iterations: C++ Test Reference
test_case::limit_samples: C++ Test Reference
test_case::max_iterations: C++ Test Reference
test_case::max_sample_iterations: C++ Test Reference
test_case::max_time: C++ Test Reference
test_case::max_time: C++ Test Reference
test_case::max_time: C++ Test Reference
test_case::max_time: C++ Test Reference
test_case::min_iterations: C++ Test Reference
test_case::min_sample_iterations: C++ Test Reference
test_case::number_of_dimensions: C++ Test Reference
test_case::operator=: C++ Test Reference
test_case::skip_iterations: C++ Test Reference
test_case::test_case: C++ Test Reference
test_case::test_case: C++ Test Reference
test_report: C++ Test Report Reference
test_report::begin: C++ Test Report Reference
test_report::empty: C++ Test Report Reference
test_report::end: C++ Test Report Reference
test_report::executions: C++ Test Report Reference
test_report::name: C++ Test Report Reference
test_report::operator=: C++ Test Report Reference
test_report::rbegin: C++ Test Report Reference
test_report::rend: C++ Test Report Reference
test_report::reverse_executions: C++ Test Report Reference
test_report::size: C++ Test Report Reference
test_report::test_report: C++ Test Report Reference
test_report::test_report: C++ Test Report Reference
total_time_type: C++ Exec Report Reference

Appendix G Guile Programming Index

Index Entry  Section

<
<elapsed-time>: Guile Time Utilities Reference
<exec-report>: Guile Exec Report Reference
<report>: Guile Suite Report Reference
<stat-value>: Guile Stats Utilities Reference
<state>: Guile Test State Reference
<suite>: Guile Suite Reference
<test-case>: Guile Test Case Reference
<time-sample>: Guile Time Utilities Reference

A
add-test!: Guile Test Case Reference

E
elapsed-time-nanoseconds: Guile Time Utilities Reference
elapsed-time-seconds: Guile Time Utilities Reference
elapsed-time?: Guile Time Utilities Reference
exec-report-iteration-time: Guile Exec Report Reference
exec-report-iterations: Guile Exec Report Reference
exec-report-name: Guile Exec Report Reference
exec-report-sample-iterations: Guile Exec Report Reference
exec-report-sample-time: Guile Exec Report Reference
exec-report-sizes: Guile Exec Report Reference
exec-report-test-name: Guile Exec Report Reference
exec-report-time-samples: Guile Exec Report Reference
exec-report-total-iterations: Guile Exec Report Reference
exec-report-total-samples: Guile Exec Report Reference
exec-report-total-time: Guile Exec Report Reference
exec-report-used-samples: Guile Exec Report Reference
exec-report?: Guile Exec Report Reference

F
fold-reports: Guile Suite Report Reference
for-each-report: Guile Suite Report Reference

G
get-report: Guile Suite Reference
get-report: Guile Suite Report Reference

K
keep-running?: Guile Test State Reference

M
main: Guile Utilities Reference
make-suite: Guile Suite Reference

O
output/console: Guile Report Output Reference
output/lisp: Guile Report Output Reference
output/text: Guile Report Output Reference

P
print-report: Guile Report Output Reference

R
register-test!: Guile Test Case Reference
report-name: Guile Suite Report Reference
report?: Guile Suite Report Reference
run-suite!: Guile Suite Reference

S
set-log-level!: Guile Utilities Reference
set-module-log-level!: Guile Utilities Reference
set-state-name!: Guile Test State Reference
stat-value-mean: Guile Stats Utilities Reference
stat-value-unit: Guile Time Utilities Reference
stat-value-unit: Guile Stats Utilities Reference
stat-value?: Guile Stats Utilities Reference
state-name: Guile Test State Reference
state-sizes: Guile Test State Reference
state?: Guile Test State Reference
stats/all: Guile Report Output Reference
stats/basic: Guile Report Output Reference
stats/mean: Guile Report Output Reference
stats/std-deviation: Guile Report Output Reference
stats/variance: Guile Report Output Reference
suite-name: Guile Suite Reference
suite-number-of-tests: Guile Suite Reference
suite?: Guile Suite Reference

T
test-add-dimension!: Guile Test Case Reference
test-case?: Guile Test Case Reference
test-dimensions: Guile Test Case Reference
test-iterations-to-skip: Guile Test Case Reference
test-limit-iterations!: Guile Test Case Reference
test-limit-samples!: Guile Test Case Reference
test-max-iterations: Guile Test Case Reference
test-max-sample-iterations: Guile Test Case Reference
test-max-time!: Guile Test Case Reference
test-min-iterations: Guile Test Case Reference
test-min-sample-iterations: Guile Test Case Reference
test-set-constraints!: Guile Test Case Reference
test-set-max-time!: Guile Test Case Reference
test-skip-iterations!: Guile Test Case Reference
time-sample-elapsed-time: Guile Time Utilities Reference
time-sample?: Guile Time Utilities Reference

Appendix H Programming Index

Index Entry  Section

<
<elapsed-time>: Guile Time Utilities Reference
<exec-report>: Guile Exec Report Reference
<iteration-stat>: C++ Exec Report Reference
<iteration-unit>: C++ Exec Report Reference
<report>: Guile Suite Report Reference
<stat-value>: Guile Stats Utilities Reference
<state>: Guile Test State Reference
<suite>: Guile Suite Reference
<test-case>: Guile Test Case Reference
<time-sample>: Guile Time Utilities Reference
<time-samples>: C++ Exec Report Reference
<time-stat>: C++ Exec Report Reference
<time-unit>: C++ Exec Report Reference
<with-constraints>: C++ Suite Reference

A
add-test!: Guile Test Case Reference

C
compiler_barrier: C++ Optimization Utilities Reference

D
do_not_optimize: C++ Optimization Utilities Reference
dynamic_example: Common API Concepts

E
elapsed-time-nanoseconds: Guile Time Utilities Reference
elapsed-time-seconds: Guile Time Utilities Reference
elapsed-time?: Guile Time Utilities Reference
exec-report-iteration-time: Guile Exec Report Reference
exec-report-iterations: Guile Exec Report Reference
exec-report-name: Guile Exec Report Reference
exec-report-sample-iterations: Guile Exec Report Reference
exec-report-sample-time: Guile Exec Report Reference
exec-report-sizes: Guile Exec Report Reference
exec-report-test-name: Guile Exec Report Reference
exec-report-time-samples: Guile Exec Report Reference
exec-report-total-iterations: Guile Exec Report Reference
exec-report-total-samples: Guile Exec Report Reference
exec-report-total-time: Guile Exec Report Reference
exec-report-used-samples: Guile Exec Report Reference
exec-report?: Guile Exec Report Reference
exec_report: C++ Exec Report Reference
exec_report::exec_report: C++ Exec Report Reference
exec_report::exec_report: C++ Exec Report Reference
exec_report::iterations: C++ Exec Report Reference
exec_report::iteration_time: C++ Exec Report Reference
exec_report::name: C++ Exec Report Reference
exec_report::operator=: C++ Exec Report Reference
exec_report::sample_iterations: C++ Exec Report Reference
exec_report::sample_time: C++ Exec Report Reference
exec_report::sizes: C++ Exec Report Reference
exec_report::time_samples: C++ Exec Report Reference
exec_report::total_iterations: C++ Exec Report Reference
exec_report::total_samples: C++ Exec Report Reference
exec_report::total_time: C++ Exec Report Reference
exec_report::used_samples: C++ Exec Report Reference

F
fold-reports: Guile Suite Report Reference
for-each-report: Guile Suite Report Reference
function_example: Common API Concepts

G
get-report: Guile Suite Reference
get-report: Guile Suite Report Reference
Guile Value: Guile Report Output Reference

I
io::default_output_values: C++ Report Output Reference
io::default_output_values: C++ Report Output Reference
io::output_stat: C++ Report Output Reference
io::output_type: C++ Report Output Reference
io::output_values: C++ Report Output Reference

K
keep-running?: Guile Test State Reference

M
main: C++ Utilities Reference
main: Guile Utilities Reference
make-suite: Guile Suite Reference
micro_benchmark_auto_test_fun: Test Definition Reference
micro_benchmark_chronometer_create: Chronometer Reference
micro_benchmark_chronometer_create_default: Chronometer Reference
micro_benchmark_chronometer_from_meter: Chronometer Reference
micro_benchmark_chronometer_get_default: Chronometer Reference
micro_benchmark_chronometer_provider: Chronometer Reference
micro_benchmark_chronometer_release: Chronometer Reference
micro_benchmark_chronometer_set_default: Chronometer Reference
micro_benchmark_cleanup: Utilities Reference
micro_benchmark_clock_time: Time Reference
micro_benchmark_clock_type: Time Reference
MICRO_BENCHMARK_COMPILER_BARRIER: Optimization Utilities Reference
MICRO_BENCHMARK_CONSTRAINT_TEST: Utilities Reference
micro_benchmark_custom_meter: Test Constraints Reference
micro_benchmark_custom_sample_collector: Custom Data Collection Reference
micro_benchmark_custom_sample_collector_fun: Custom Data Collection Reference
micro_benchmark_custom_sample_create_data_fun: Custom Data Collection Reference
micro_benchmark_custom_sample_release_data_fun: Custom Data Collection Reference
micro_benchmark_custom_time_calculator: Custom Data Collection Reference
MICRO_BENCHMARK_DO_NOT_OPTIMIZE: Optimization Utilities Reference
micro_benchmark_do_not_optimize: Optimization Utilities Reference
micro_benchmark_exec_report: Test Execution Report
micro_benchmark_exec_report_get_extra_data: Test Execution Report
micro_benchmark_exec_report_get_iterations: Test Execution Report
micro_benchmark_exec_report_get_name: Test Execution Report
micro_benchmark_exec_report_get_sizes: Test Execution Report
micro_benchmark_exec_report_get_time_sample: Test Execution Report
micro_benchmark_exec_report_iteration_time: Test Execution Report
micro_benchmark_exec_report_meter_get_sample: Test Execution Report
micro_benchmark_exec_report_meter_num_samples: Test Execution Report
micro_benchmark_exec_report_meter_sample_type: Test Execution Report
micro_benchmark_exec_report_number_of_meters: Test Execution Report
micro_benchmark_exec_report_num_time_samples: Test Execution Report
micro_benchmark_exec_report_sample_iterations: Test Execution Report
micro_benchmark_exec_report_sample_time: Test Execution Report
micro_benchmark_exec_report_total_iterations: Test Execution Report
micro_benchmark_exec_report_total_samples: Test Execution Report
micro_benchmark_exec_report_total_time: Test Execution Report
micro_benchmark_exec_report_used_samples: Test Execution Report
micro_benchmark_get_default_output_values: Output Reference
micro_benchmark_init: Utilities Reference
micro_benchmark_log_level: Library Log Reference
micro_benchmark_log_reset_output: Library Log Reference
micro_benchmark_log_set_output: Library Log Reference
MICRO_BENCHMARK_MAIN: Utilities Reference
micro_benchmark_main: Utilities Reference
MICRO_BENCHMARK_MAIN: C++ Utilities Reference
micro_benchmark_meter: Meter Reference
micro_benchmark_meter_cleanup_fun: Meter Reference
micro_benchmark_meter_data: Meter Reference
micro_benchmark_meter_definition: Meter Reference
micro_benchmark_meter_get_sample_fun: Meter Reference
micro_benchmark_meter_init_fun: Meter Reference
micro_benchmark_meter_restart_fun: Meter Reference
micro_benchmark_meter_start_fun: Meter Reference
micro_benchmark_meter_stop_fun: Meter Reference
micro_benchmark_output_stat: Output Reference
micro_benchmark_output_type: Output Reference
micro_benchmark_output_values: Output Reference
micro_benchmark_print_custom_report: Output Reference
micro_benchmark_print_report: Output Reference
MICRO_BENCHMARK_REGISTER_AUTO_TEST: Utilities Reference
MICRO_BENCHMARK_REGISTER_FULL_TEST: Utilities Reference
MICRO_BENCHMARK_REGISTER_NAMED_AUTO_TEST: Utilities Reference
MICRO_BENCHMARK_REGISTER_NAMED_TEST: Utilities Reference
MICRO_BENCHMARK_REGISTER_SIMPLE_TEST: Utilities Reference
micro_benchmark_register_static_constraint: Utilities Reference
micro_benchmark_register_static_test: Utilities Reference
MICRO_BENCHMARK_REGISTER_TEST: Utilities Reference
micro_benchmark_report: Suite Report Reference
micro_benchmark_report_extractor_fun: Test Execution Report
micro_benchmark_report_extra_data: Test Execution Report
micro_benchmark_report_get_name: Suite Report Reference
micro_benchmark_report_get_number_of_tests: Suite Report Reference
micro_benchmark_report_get_test_report: Suite Report Reference
micro_benchmark_set_default_output_values: Output Reference
micro_benchmark_set_error_handler: Utilities Reference
micro_benchmark_set_log_level: Library Log Reference
micro_benchmark_set_module_log_level: Library Log Reference
micro_benchmark_set_up_fun: Test Definition Reference
micro_benchmark_state_get_data: State Reference
micro_benchmark_state_get_dimensions: State Reference
micro_benchmark_state_get_name: State Reference
micro_benchmark_state_get_size: State Reference
micro_benchmark_state_keep_running: State Reference
micro_benchmark_state_set_name: State Reference
micro_benchmark_static_constraint: Utilities Reference
micro_benchmark_stats_generic_sample: Data Collection Reference
micro_benchmark_stats_generic_samples: Data Collection Reference
micro_benchmark_stats_generic_sample_data: Data Collection Reference
micro_benchmark_stats_meter_cleanup: Meter Reference
micro_benchmark_stats_meter_get_max_resolution: Meter Reference
micro_benchmark_stats_meter_get_min_resolution: Meter Reference
micro_benchmark_stats_meter_get_name: Meter Reference
micro_benchmark_stats_meter_get_sample: Meter Reference
micro_benchmark_stats_meter_get_sample_type: Meter Reference
micro_benchmark_stats_meter_init: Meter Reference
micro_benchmark_stats_meter_init_with_data: Meter Reference
micro_benchmark_stats_meter_restart: Meter Reference
micro_benchmark_stats_meter_sample: Data Collection Reference
micro_benchmark_stats_meter_start: Meter Reference
micro_benchmark_stats_meter_stop: Meter Reference
micro_benchmark_stats_sample_type: Data Collection Reference
micro_benchmark_stats_unit: Time Reference
micro_benchmark_stats_value: Time Reference
micro_benchmark_suite: Suite Reference
micro_benchmark_suite_create: Suite Reference
micro_benchmark_suite_get_name: Suite Reference
micro_benchmark_suite_get_number_of_tests: Suite Reference
micro_benchmark_suite_get_report: Suite Reference
micro_benchmark_suite_get_test: Suite Reference
micro_benchmark_suite_register_test: Suite Reference
micro_benchmark_suite_release: Suite Reference
micro_benchmark_suite_run: Suite Reference
micro_benchmark_tear_down_fun: Test Definition Reference
micro_benchmark_test_case: Test Reference
micro_benchmark_test_case_add_dimension: Test Constraints Reference
micro_benchmark_test_case_add_meter: Test Constraints Reference
micro_benchmark_test_case_definition: Test Definition Reference
micro_benchmark_test_case_dimensions: Test Constraints Reference
micro_benchmark_test_case_get_data: Test Definition Reference
micro_benchmark_test_case_get_definition: Test Definition Reference
micro_benchmark_test_case_get_dimension: Test Constraints Reference
micro_benchmark_test_case_get_max_time: Test Constraints Reference
micro_benchmark_test_case_get_name: Test Definition Reference
micro_benchmark_test_case_is_enabled: Test Definition Reference
micro_benchmark_test_case_iterations_to_skip: Test Constraints Reference
micro_benchmark_test_case_limit_iterations: Test Constraints Reference
micro_benchmark_test_case_limit_samples: Test Constraints Reference
micro_benchmark_test_case_max_iterations: Test Constraints Reference
micro_benchmark_test_case_max_sample_iterations: Test Constraints Reference
micro_benchmark_test_case_min_iterations: Test Constraints Reference
micro_benchmark_test_case_min_sample_iterations: Test Constraints Reference
micro_benchmark_test_case_set_calculator: Test Constraints Reference
micro_benchmark_test_case_set_chrono: Test Constraints Reference
micro_benchmark_test_case_set_custom_chrono: Test Constraints Reference
micro_benchmark_test_case_set_data: Test Definition Reference
micro_benchmark_test_case_set_enabled: Test Definition Reference
micro_benchmark_test_case_set_max_time: Test Constraints Reference
micro_benchmark_test_case_skip_iterations: Test Constraints Reference
micro_benchmark_test_definition: Test Definition Reference
micro_benchmark_test_fun: Test Definition Reference
micro_benchmark_test_report: Test Report Reference
micro_benchmark_test_report_get_exec_report: Test Report Reference
micro_benchmark_test_report_get_name: Test Report Reference
micro_benchmark_test_report_get_num_executions: Test Report Reference
micro_benchmark_test_state: State Reference
micro_benchmark_timer: Timer Reference
micro_benchmark_timer_cleanup: Timer Reference
micro_benchmark_timer_cleanup_fun: Timer Reference
micro_benchmark_timer_create: Timer Provider Reference
micro_benchmark_timer_create_default: Timer Provider Reference
micro_benchmark_timer_data: Timer Reference
micro_benchmark_timer_definition: Timer Reference
micro_benchmark_timer_elapsed: Timer Reference
micro_benchmark_timer_elapsed_fun: Timer Reference
micro_benchmark_timer_from_meter: Timer Provider Reference
micro_benchmark_timer_from_provided_meter: Timer Provider Reference
micro_benchmark_timer_from_template: Timer Provider Reference
micro_benchmark_timer_get_default: Timer Reference
micro_benchmark_timer_get_name: Timer Reference
micro_benchmark_timer_get_resolution: Timer Reference
micro_benchmark_timer_get_type: Timer Reference
micro_benchmark_timer_init: Timer Reference
micro_benchmark_timer_init_fun: Timer Reference
micro_benchmark_timer_init_with_extra: Timer Reference
micro_benchmark_timer_is_running: Timer Reference
micro_benchmark_timer_provider: Timer Provider Reference
micro_benchmark_timer_release: Timer Provider Reference
micro_benchmark_timer_restart: Timer Reference
micro_benchmark_timer_restart_fun: Timer Reference
micro_benchmark_timer_running_fun: Timer Reference
micro_benchmark_timer_set_default: Timer Reference
micro_benchmark_timer_set_default_provider: Timer Reference
micro_benchmark_timer_start: Timer Reference
micro_benchmark_timer_start_fun: Timer Reference
micro_benchmark_timer_stop: Timer Reference
micro_benchmark_timer_stop_fun: Timer Reference
micro_benchmark_time_sample: Time Reference
micro_benchmark_time_samples: Time Reference
micro_benchmark_time_sample_: Time Reference
micro_benchmark_time_stats_values: Time Reference
micro_benchmark_write_custom_report: Output Reference
micro_benchmark_write_report: Output Reference

O
operator<<: C++ Suite Report Reference
operator<<: C++ Suite Report Reference

P
print-report: Guile Report Output Reference

R
register-test!: Guile Test Case Reference
report: C++ Suite Report Reference
report-name: Guile Suite Report Reference
report::begin: C++ Suite Report Reference
report::empty: C++ Suite Report Reference
report::end: C++ Suite Report Reference
report::executions: C++ Suite Report Reference
report::name: C++ Suite Report Reference
report::print: C++ Suite Report Reference
report::rbegin: C++ Suite Report Reference
report::rend: C++ Suite Report Reference
report::report: C++ Suite Report Reference
report::report: C++ Suite Report Reference
report::report: C++ Suite Report Reference
report::reverse_executions: C++ Suite Report Reference
report::reverse_tests: C++ Suite Report Reference
report::size: C++ Suite Report Reference
report::tests: C++ Suite Report Reference
report::with_args: C++ Suite Report Reference
report?: Guile Suite Report Reference
run-suite!: Guile Suite Reference

S
set-log-level!: Guile Utilities Reference
set-module-log-level!: Guile Utilities Reference
set-state-name!: Guile Test State Reference
stat-value-mean: Guile Stats Utilities Reference
stat-value-unit: Guile Time Utilities Reference
stat-value-unit: Guile Stats Utilities Reference
stat-value?: Guile Stats Utilities Reference
state: C++ State Reference
state-name: Guile Test State Reference
state-sizes: Guile Test State Reference
state::get_name: C++ State Reference
state::keep_running: C++ State Reference
state::set_name: C++ State Reference
state::set_name: C++ State Reference
state::sizes: C++ State Reference
state?: Guile Test State Reference
suite: C++ Suite Reference
suite-name: Guile Suite Reference
suite-number-of-tests: Guile Suite Reference
suite::name: C++ Suite Reference
suite::register_class: C++ Suite Reference
suite::register_class: C++ Suite Reference
suite::register_test: C++ Suite Reference
suite::register_test: C++ Suite Reference
suite::register_test: C++ Suite Reference
suite::register_test: C++ Suite Reference
suite::run: C++ Suite Reference
suite::stored_report: C++ Suite Reference
suite::suite: C++ Suite Reference
suite::suite: C++ Suite Reference
suite::tests: C++ Suite Reference
suite?: Guile Suite Reference

T
test-add-dimension!: Guile Test Case Reference
test-case?: Guile Test Case Reference
test-dimensions: Guile Test Case Reference
test-iterations-to-skip: Guile Test Case Reference
test-limit-iterations!: Guile Test Case Reference
test-limit-samples!: Guile Test Case Reference
test-max-iterations: Guile Test Case Reference
test-max-sample-iterations: Guile Test Case Reference
test-max-time!: Guile Test Case Reference
test-min-iterations: Guile Test Case Reference
test-min-sample-iterations: Guile Test Case Reference
test-set-constraints!: Guile Test Case Reference
test-set-max-time!: Guile Test Case Reference
test-skip-iterations!: Guile Test Case Reference
test_case: C++ Test Reference
test_case::add_dimension: C++ Test Reference
test_case::add_dimension: C++ Test Reference
test_case::add_dimension: C++ Test Reference
test_case::add_dimension: C++ Test Reference
test_case::get_dimension: C++ Test Reference
test_case::iterations_to_skip: C++ Test Reference
test_case::limit_iterations: C++ Test Reference
test_case::limit_samples: C++ Test Reference
test_case::max_iterations: C++ Test Reference
test_case::max_sample_iterations: C++ Test Reference
test_case::max_time: C++ Test Reference
test_case::max_time: C++ Test Reference
test_case::max_time: C++ Test Reference
test_case::max_time: C++ Test Reference
test_case::min_iterations: C++ Test Reference
test_case::min_sample_iterations: C++ Test Reference
test_case::number_of_dimensions: C++ Test Reference
test_case::operator=: C++ Test Reference
test_case::skip_iterations: C++ Test Reference
test_case::test_case: C++ Test Reference
test_case::test_case: C++ Test Reference
test_report: C++ Test Report Reference
test_report::begin: C++ Test Report Reference
test_report::empty: C++ Test Report Reference
test_report::end: C++ Test Report Reference
test_report::executions: C++ Test Report Reference
test_report::name: C++ Test Report Reference
test_report::operator=: C++ Test Report Reference
test_report::rbegin: C++ Test Report Reference
test_report::rend: C++ Test Report Reference
test_report::reverse_executions: C++ Test Report Reference
test_report::size: C++ Test Report Reference
test_report::test_report: C++ Test Report Reference
test_report::test_report: C++ Test Report Reference
time-sample-elapsed-time: Guile Time Utilities Reference
time-sample?: Guile Time Utilities Reference
total_time_type: C++ Exec Report Reference

Footnotes

(1)

The comparison with an established reference is the main idea of a benchmark.

(2)

Currently only llvm-cov version 15.0.7 works with the options provided by the build script. Versions 13.0.1 and 14.0.6 work after removing one --source-prefix option.

(3)

This includes using a reference obtained during the set up stage within the test or tear down stages, or a reference obtained during the test stage within the tear down stage.

(4)

Contact <bug-mbenchmark@nongnu.org> if you deem this feature useful.

(5)

The following links contain more information about Free Software, if you are interested:

(6)

indent produces an undesired format for certain C constructions, such as guard macros. The problematic source lines are surrounded by /* *INDENT-OFF* */ and /* *INDENT-ON* */ comments to prevent indent from modifying them. Patches for indent-code.sh that teach indent how to handle these cases are very welcome.
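
As an illustration, a minimal sketch of such a protected region follows. The marker comments are the standard GNU indent control comments; the guard macro name is hypothetical and is not taken from the MicroBenchmark sources:

  /* *INDENT-OFF* */  /* indent leaves the lines below untouched.  */
  #ifndef MB_EXAMPLE_H
  # define MB_EXAMPLE_H
  /* Declarations whose layout indent would otherwise rewrite.  */
  #endif /* MB_EXAMPLE_H */
  /* *INDENT-ON* */   /* Normal formatting resumes here.  */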

(7)

On git checkouts, the build process generates ChangeLog with gnulib’s gitlog-to-changelog.

(8)

gitlog-to-changelog and test-driver.scm are downloaded at build time when they aren’t present.