Next: Acknowledgments [Contents][Index]
This manual is for MicroBenchmark—a library to measure the performance of code fragments—version 0.0, released on 5 July 2023. MicroBenchmark is distributed under the GNU Lesser General Public License, Version 3 or any later version.
Copyright © 2023 Miguel Ángel Arruga Vivas <rosen644835@gmail.com>
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled “GNU Free Documentation License”.
The examples from this manual are additionally licensed under the GNU General Public License, Version 3 or any later version published by the Free Software Foundation. A copy of the license is included in the section entitled “GNU General Public License”.
Next: Introduction, Previous: MicroBenchmark, Up: MicroBenchmark [Contents][Index]
MicroBenchmark thanks the GNU project for its effort to provide the operating system on which this document was written.
MicroBenchmark has taken many ideas from Google Benchmark, a Free Software library for C++ and Python.
MicroBenchmark has also taken its structural concepts from Kent Beck’s paper Simple Smalltalk Testing: With Patterns.
And also thank you, who are reading this.
Next: Installation, Previous: Acknowledgments, Up: MicroBenchmark [Contents][Index]
MicroBenchmark makes it easy to measure the performance of code fragments and/or functions by repeatedly sampling their execution time. This type of performance measurement is the basis of what is usually called a micro-benchmark, hence the library name.
Next: MicroBenchmark Concepts, Up: Introduction [Contents][Index]
Benchmark commonly refers to software used to measure the performance of certain hardware and/or software. Sometimes these values cannot be measured directly, e.g. in an unknown or dynamic environment, and proxy measures are used to estimate them.
The performance of a processing unit is usually represented in operations per time unit. Software throughput can be measured in size units per time unit. Software performance can be measured in time units too. These quantities are compared across different environments, where their values produce an ordering among the compared items1.
The need for more efficient programs has been present for a long time, so tools to aid with this task can be found on almost any operating system. For example, the shell keyword time (see Pipelines in GNU Bash Reference Manual) can be used to measure the execution time of a process.
To aid with finer-grained optimization, modern operating systems implement a measurement process called profiling (see GNU profiler gprof.) A profiler usually works by taking samples of the call stack of the executing process at a certain frequency.
Modern processors incorporate performance counters, whose values can be accessed through utilities such as perf (see the perf wiki), which can be used as a profiler too.
Next: Predefined Calculations, Previous: What Is A Benchmark, Up: Introduction [Contents][Index]
A complementary approach to analyzing the final code is to extract a fragment of it and measure its execution performance separately from the whole process. These measurements are usually compared among different implementations of the fragment, reference implementations, and/or baselines. This process is usually called micro-benchmarking.
The main unit of the MicroBenchmark framework is the test case (see Test Reference.) A test case encapsulates the desired measurement process, including its preparation and finalization (see Execution Stages.)
The execution environment of a test case can be constrained (see Test Constraints.) Size constraints produce a combinatorial number of executions to be measured (see Test Dimensions.) The concrete values are provided to the test through the state object (see State Reference.)
A collection of test cases is called a suite (see Suite Reference.) Its execution stores the collected data into a report (see Report Reference.)
Up: MicroBenchmark Concepts [Contents][Index]
The execution flow of a test case is composed of three sequential stages:
The Set Up stage receives the execution state object as a parameter, allocates the fixture for the test code, and prepares the environment for its execution.
An automatic test receives the allocated fixture directly and does not have direct access to the execution state. It is invoked repeatedly by the framework, and measurements are sampled after a certain number of iterations.
A directed test receives the framework execution state object (see State Reference) as its parameter, where the fixture allocated by the previous stage is stored. This is the concept used by Google Benchmark, where code must manually loop on the state.
The Tear Down stage releases the resources allocated for the fixture and performs any other cleanup needed.
Next: Basic Usage, Previous: MicroBenchmark Concepts, Up: Introduction [Contents][Index]
MicroBenchmark samples the elapsed time and the iterations performed by the test function at certain intervals. Some iterations and samples, such as the warm-up iterations, are discarded from the final statistical values calculated from the collected data.
From these samples the following values are calculated:
The time of each iteration, as common statistical values:
The iterations performed on each sample, as the common statistical values:
The time of each sample.
MicroBenchmark also measures the total time elapsed during the test case execution and the total number of iterations performed. The following sections describe the chronometers and timers predefined by the library.
Next: Predefined Clocks, Up: Predefined Calculations [Contents][Index]
A meter is a device used to take measurements (see Meter Reference for a detailed description.)
Next: Predefined Chronometers, Previous: Predefined Meters, Up: Predefined Calculations [Contents][Index]
The clock type controls the type of measurement performed by a chronometer or a timer. The library implements the following clock type definitions (see Time Reference) when they are available at build time:
Wall clock time provided by the operating system. CLOCK_TAI is preferred to CLOCK_REALTIME when it is available.
Non-decreasing clock time, such as CLOCK_MONOTONIC.
Process time, such as CLOCK_PROCESS_CPUTIME_ID.
Execution thread time.
Next: Predefined Timers, Previous: Predefined Clocks, Up: Predefined Calculations [Contents][Index]
A chronometer is a device used to measure time. It is a specialization of the micro_benchmark_meter type used to calculate the sample time and the total time (see Chronometer Reference for a detailed description.)
The following chronometers are implemented by the library:
Realtime chronometer based on the C standard type time_t, retrieved from the time function. It represents seconds, so its minimum resolution is one second.
Process time chronometer based on the C standard type clock_t, retrieved from the clock function. It overflows easily; this is not checked by the framework.
Realtime chronometer based on the function gettimeofday. Its values are represented by struct timeval.
Chronometer based on the function clock_gettime. Its minimum theoretical resolution is calculated based on the clock type selected (see Time Reference.) Its values are represented by struct timespec.
Previous: Predefined Chronometers, Up: Predefined Calculations [Contents][Index]
A timer is a device that is triggered after a predefined amount of time. Timers are used to calculate the sample time and the total time (see Timer Reference for a detailed description.)
The following timers are implemented by the library:
Timer adapter from a provided chronometer (see Timer Provider Reference.)
Timer based on the setitimer API. This timer only supports one timer per type.
Timer based on the timer_t API.
Timer based on the timerfd API, currently only supported on GNU systems based on the Linux kernel.
Previous: Predefined Calculations, Up: Introduction [Contents][Index]
Once the library has been installed (see Installation), Guile users might want to dive directly into a REPL, where the library should be usable. On the other hand, C and C++ code has to be compiled against the installed library before executing it. MicroBenchmark comes with support for autoconf based projects (see Autoconf Macros) and pkg-config modules (see Other Build Systems.)
The following sections (see Simple Tests onwards) showcase some code examples of the library usage.
Next: Other Build Systems, Up: Basic Usage [Contents][Index]
Common autoconf macros can be used to find the library, such as:
AC_SEARCH_LIBS([micro_benchmark_init], [mbenchmark], [...])
MicroBenchmark comes with some (experimental) macros too:
MICRO_BENCHMARK_CHECK_C
Searches for the MicroBenchmark library in some predefined paths and calls AC_SUBST on the following variables:
Contains the compiler flags needed to find the headers of the local installation of MicroBenchmark.
Contains the compiler flags needed to compile and link a program with MicroBenchmark.
Contains the linker flags needed to link a program with MicroBenchmark.
Contains a reference to MicroBenchmark libtool file, when available.
MICRO_BENCHMARK_CHECK_CXX
Searches for the MicroBenchmark C++ library in some predefined paths and calls AC_SUBST on the following variables:
Contains the compiler flags needed to find the headers of the local installation of MicroBenchmark.
Contains the compiler flags needed to compile and link a C++ program with MicroBenchmark.
Contains the linker flags needed to link a program against MicroBenchmark C++ binding.
Contains the reference to MicroBenchmark C++ libtool file, when available.
MICRO_BENCHMARK_CHECK_GUILE
Checks the modules' availability and the library version.
This could be a basic example of their usage:
configure.ac: ---
…
AS_IF([…], [dnl To substitute CFLAGS/LIBS/LTLIBS
  MICRO_BENCHMARK_CHECK_C
  …
  dnl To substitute CXXFLAGS/CXXLIBS/CXXLTLIBS
  MICRO_BENCHMARK_CHECK_CXX
  …
  dnl For Guile, only checks are performed
  MICRO_BENCHMARK_CHECK_GUILE
])
…
--- Makefile.am: ---
…
ctest_CFLAGS += $(MICRO_BENCHMARK_CFLAGS)
ctest_LIBS += $(MICRO_BENCHMARK_LIBS)
…
# For the C++ library
cpptest_CXXFLAGS += $(MICRO_BENCHMARK_CXXFLAGS)
cpptest_LIBS += $(MICRO_BENCHMARK_CXXLIBS)
…
# Or with libtool
ltctest_LDADD += $(MICRO_BENCHMARK_LTLIBS)
ltcxxlttest_LDADD += $(MICRO_BENCHMARK_CXXLTLIBS)
…
---
Next: Simple Tests, Previous: Autoconf Macros, Up: Basic Usage [Contents][Index]
The following pkg-config modules are provided by MicroBenchmark:
pkg-config [--cflags|--libs] mbenchmark
pkg-config [--cflags|--libs] mbenchmark-c++
For example, to compile a benchmark executable from a source file test.c (or test.cpp) using pkg-config, you could use the following command line:
$ gcc $(pkg-config --cflags --libs mbenchmark) -o test test.c
# Or for C++
$ g++ $(pkg-config --cflags --libs mbenchmark-c++) -o test-c++ test.cpp
Next: Directed Test Cases, Previous: Other Build Systems, Up: Basic Usage [Contents][Index]
This section shows some basic code examples of MicroBenchmark usage.
This code generates a minimal (and not very useful) C benchmark:
#include <mbenchmark/all.h>
#include <unistd.h>

static void
test (void *unused)
{
  (void) unused;
  usleep (1000);
}

MICRO_BENCHMARK_REGISTER_SIMPLE_TEST (test);
MICRO_BENCHMARK_MAIN ();
This file can be found in the source code tree at doc/examples/basic.c. Let's go through its parts:
#include <mbenchmark/all.h> includes all the files needed to use the library.
void test (void *unused) is the code under test, registered with MICRO_BENCHMARK_REGISTER_SIMPLE_TEST (test).
MICRO_BENCHMARK_MAIN (), which provides the main function, can be placed anywhere at file scope.
This code could be compiled with a C++ compiler too. Nonetheless, C++14 or later code might use the features available through the C++ binding. This would be the equivalent example in C++:
#include <mbenchmark/all.hpp>
#include <chrono>
#include <thread>

namespace
{
  void test (std::vector<std::size_t> const&)
  {
    std::this_thread::sleep_for (std::chrono::milliseconds (1));
  }

  auto rtest = micro_benchmark::register_test ("test", test);
}

MICRO_BENCHMARK_MAIN ();
This file can be found in the source code tree at doc/examples/basic.cxx. These are the main differences from the C interface:
Instead of #include <mbenchmark/all.h>, #include <mbenchmark/all.hpp> is included. These two files cannot be included together, as the macro names exported by them collide.
The test function receives a std::vector<std::size_t> as its parameter, the one returned by the sizes method on the state object (see C++ State Reference).
The register_test function call (rtest) has to be declared at namespace level.
This would be its Guile equivalent:
(use-modules (mbenchmark))

(define (test . args)
  (usleep 1000))

(register-test! "test" #:test test)

(main (command-line))
This file can be found in the source code tree at doc/examples/basic.scm. Let's see its parts:
The module (mbenchmark) includes all the functionality needed.
register-test! uses keywords for its arguments (see Guile Test Case Reference.)
main could be provided with the -e main command line option, but here it is explicit to allow direct execution of the script.
The execution of these examples would print something like this:
basic --brief --log-level=warn
⇒
Suite: basic (1 test execution)
====================================
Test Name | Iterations | It.Time (μ)
====================================
test      |       4026 |      1.22ms
====================================
Next: Test Constraints, Previous: Simple Tests, Up: Basic Usage [Contents][Index]
TODO
Next: Test Dimensions, Previous: Directed Test Cases, Up: Basic Usage [Contents][Index]
The examples shown in the previous section use the library defaults, which might not be appropriate for your specific use case. MicroBenchmark includes several options to customize the environment where the test is executed (see Test Constraints Reference.)
The following examples show basic constraint usage in each of the supported languages:
#include <mbenchmark/all.h>
#include <unistd.h>

static void
limit_iterations (micro_benchmark_test_case test)
{
  micro_benchmark_test_case_limit_iterations (test, 300, 400);
  micro_benchmark_test_case_limit_samples (test, 2, 5);
}

static void
test_1 (void *ptr)
{
  (void) ptr;
  /* This test code will run for 300 to 400 iterations, taking
     measurements each 2 to 5 iterations.  */
  usleep (10000);
}

MICRO_BENCHMARK_REGISTER_SIMPLE_TEST (test_1);
MICRO_BENCHMARK_CONSTRAINT_TEST ("test_1", limit_iterations);

static void
limit_time (micro_benchmark_test_case test)
{
  micro_benchmark_clock_time max = { 1, 0 };
  micro_benchmark_test_case_set_max_time (test, max);
}

static void
test_2 (void *ptr)
{
  (void) ptr;
  /* This test code will run for 1 second.  */
  usleep (10000);
}

MICRO_BENCHMARK_REGISTER_SIMPLE_TEST (test_2);
MICRO_BENCHMARK_CONSTRAINT_TEST ("test_2", limit_time);

MICRO_BENCHMARK_MAIN ();
#include <mbenchmark/all.hpp>
#include <chrono>
#include <thread>

namespace
{
  using micro_benchmark::with_constraints;

  void test_1 (std::vector<std::size_t> const&)
  {
    std::this_thread::sleep_for (std::chrono::milliseconds (10));
  }

  void limit_iterations (micro_benchmark::test_case& test)
  {
    test.limit_iterations (300, 400);
    test.limit_samples (2, 5);
  }

  auto rt1 = micro_benchmark::register_test (with_constraints, "test_1",
                                             limit_iterations, test_1);

  void test_2 (std::vector<std::size_t> const&)
  {
    std::this_thread::sleep_for (std::chrono::milliseconds (10));
  }

  void limit_time (micro_benchmark::test_case& test)
  {
    test.max_time (std::chrono::seconds (1));
  }

  auto rt2 = micro_benchmark::register_test (with_constraints, "test_2",
                                             limit_time, test_2);
}

MICRO_BENCHMARK_MAIN ();
(define (test-1)
  (usleep 10000))

(register-test! "test-1" #:test test-1
                ;; This test will run for 300 to 400 iterations...
                #:min-iterations 300 #:max-iterations 400
                ;; ... taking measurements each 2 to 5 iterations.
                #:min-sample-iterations 2 #:max-sample-iterations 5)

(register-test! "test-2" #:test test-1
                ;; This test will run for 1 second.
                #:max-time 1)

(main (command-line))
Their output could look like this:
constraints --brief --log-level=warn
⇒
Suite: constraints (2 test executions)
====================================
Test Name | Iterations | It.Time (μ)
====================================
test_1    |        400 |      10.3ms
test_2    |         97 |      10.3ms
====================================
Previous: Test Constraints, Up: Basic Usage [Contents][Index]
Usually, we want to provide different inputs to the test code in order to compare its behavior. To this end, MicroBenchmark allows controlling the sizes provided to the test (see Test Constraints Reference), which can be used to produce tailored input.
The following examples show how to control the input of the test code:
#include <mbenchmark/all.h>
#include <unistd.h>

static void
one_dimension (micro_benchmark_test_case test)
{
  /* This test will have one dimension with four different values,
     so four executions will be performed.  */
  const size_t sizes[] = { 1, 2, 3, 4 };
  const size_t ssizes = sizeof (sizes) / sizeof (*sizes);
  micro_benchmark_test_case_add_dimension (test, ssizes, sizes);
}

static size_t *
set_up_1d (micro_benchmark_test_state state)
{
  /* Prepare the data.  */
  static size_t s;
  s = micro_benchmark_state_get_size (state, 0) * 100000;
  return &s;
}

static void
test_1d (size_t *sz)
{
  /* Test code.  */
  usleep (*sz);
}

MICRO_BENCHMARK_REGISTER_AUTO_TEST (set_up_1d, test_1d, 0);
MICRO_BENCHMARK_CONSTRAINT_TEST ("test_1d", one_dimension);

static void
three_dimensions (micro_benchmark_test_case test)
{
  /* This test will have three dimensions, but no variation on them,
     so only one execution will be performed.  */
  const size_t sizes[] = { 1, 2, 3 };
  micro_benchmark_test_case_add_dimension (test, 1, sizes + 0);
  micro_benchmark_test_case_add_dimension (test, 1, sizes + 1);
  micro_benchmark_test_case_add_dimension (test, 1, sizes + 2);
}

static size_t *
set_up_3d (micro_benchmark_test_state state)
{
  /* Prepare the data.  */
  static size_t s[3];
  s[0] = micro_benchmark_state_get_size (state, 0) * 10000;
  s[1] = micro_benchmark_state_get_size (state, 1) * 100000;
  s[2] = micro_benchmark_state_get_size (state, 2) * 1000000;
  return s;
}

static void
test_3d (size_t *sz)
{
  /* Test code.  */
  usleep (sz[0] + sz[1] + sz[2]);
}

MICRO_BENCHMARK_REGISTER_AUTO_TEST (set_up_3d, test_3d, 0);
MICRO_BENCHMARK_CONSTRAINT_TEST ("test_3d", three_dimensions);

MICRO_BENCHMARK_MAIN ();
#include <mbenchmark/all.hpp>
#include <chrono>
#include <thread>

namespace
{
  using micro_benchmark::with_constraints;

  void one_dimension (micro_benchmark::test_case& test)
  {
    test.add_dimension ({ 1, 2, 3 });
  }

  void test_1d (std::vector<std::size_t> const& v)
  {
    std::this_thread::sleep_for (std::chrono::milliseconds (v[0] * 100));
  }

  auto t1d = micro_benchmark::register_test (with_constraints, "test1d",
                                             one_dimension, test_1d);

  void three_dimensions (micro_benchmark::test_case& test)
  {
    test.add_dimension ({ 1 });
    test.add_dimension ({ 10 });
    test.add_dimension ({ 100 });
  }

  void test_3d (std::vector<std::size_t> const& v)
  {
    std::this_thread::sleep_for (std::chrono::milliseconds (v[0] + v[1]
                                                            + v[2]));
  }

  auto t3d = micro_benchmark::register_test (with_constraints, "test3d",
                                             three_dimensions, test_3d);
}

MICRO_BENCHMARK_MAIN ();
(define (test1d n)
  (usleep (* n 100000)))

(register-test! "test1d" #:test test1d
                #:dimensions '((1 2 3)))

(define (test3d x y z)
  (usleep (* 100000 (+ x y z))))

(register-test! "test3d" #:test test3d
                #:dimensions '((1) (2) (3)))

(main (command-line))
This could be their output:
dimensions --brief --log-level=warn
⇒
Suite: dimensions (5 test executions)
========================================
Test Name     | Iterations | It.Time (μ)
========================================
test_1d       |         -- |          --
test_1d/1     |         50 |       100ms
test_1d/2     |         25 |       200ms
test_1d/3     |         17 |       300ms
test_1d/4     |         13 |       400ms
----------------------------------------
test_3d/1/2/3 |          2 |       3.21s
========================================
Next: Advanced Usage, Previous: Introduction, Up: MicroBenchmark [Contents][Index]
The library can be installed from source following the common steps explained in INSTALL:
$ ./configure --prefix=<prefix>
$ make -j
$ make check -j
$ make install
The following sections show in detail the requirements for the build process and the specific available configuration options.
Next: Configuration, Up: Installation [Contents][Index]
The following sections detail the software needed to build and run MicroBenchmark.
Next: Build Requirements, Up: Requirements [Contents][Index]
MicroBenchmark uses the following components at runtime:
Several functions from the C library are used at runtime. These include mathematical functions for the statistical calculations. Some configurations may use realtime facilities from the system.
Output facilities use GNU libunistring for string manipulation.
The C++ binding may use templates such as std::basic_string and std::function as implementation details, even when the public interface does not depend on them.
The Guile binding uses the modern foreign function interface, which was introduced with Guile 2.2.
Next: Build From VCS Requirements, Previous: Runtime Requirements, Up: Requirements [Contents][Index]
The generic build process of MicroBenchmark from a released source tarball has a reduced set of dependencies. These are the packages required:
For the build recipes and test suite.
This release has passed all the tests with the following toolchains:
Tests run on GNU Guix: GNU Binutils 2.38 and GNU libc 2.35.
This release has passed all the tests with the following toolchains:
Tests run on GNU Guix: GNU Binutils 2.38 and GNU libc 2.35.
This release has passed all the tests with the following toolchain and GNU Guile combinations:
Tests run on GNU Guix: GNU Binutils 2.38 and GNU libc 2.35.
MicroBenchmark uses GNU gettext (see GNU gettext) for its localization (see Configuration.)
The coverage report generation (see Configuration) uses gcov (see gcov in GNU Gcov) or a compatible interface2.
Info files can be found in the distributed tarballs, but the generation of the manual pages in HTML format requires a functional GNU Texinfo installation (see GNU Texinfo manual).
The manual pages can be generated in DVI and PDF formats, which needs a functional TeX distribution.
These are common package names for these dependencies:
gcc
libunistring-dev, libunistring-devel
make
gettext (Optional)
g++ or gcc-c++ (Optional)
texinfo (Optional)
texlive (Optional)
guile-3.0-dev, guile30-devel (Optional)
guile-2.2-dev, guile22-devel (Optional)
Previous: Build Requirements, Up: Requirements [Contents][Index]
Some of the optional dependencies might be mandatory to build the software from VCS, e.g. make for the target distcheck. There are some additional dependencies to build MicroBenchmark from the git repository, in addition to git itself. The following software is needed to compile MicroBenchmark after modifying its sources:
The external dependencies build-aux/gitlog-to-changelog and build-aux/test-driver.scm will be downloaded automatically when they aren’t found in the source directory. You can avoid this by placing them manually on the source tree.
MicroBenchmark uses GNU Autoconf (see GNU Autoconf) for the generation of configure. GNU Automake (see GNU Automake) is used for the generation of Makefile.in. GNU libtool (see GNU libtool) is needed for the library generation scripts. GNU autopoint (see autopoint in GNU autopoint) is needed for the internationalization support.
configure.ac uses the macro AX_CXX_COMPILE_STDCXX (see ax_cxx_compile_stdcxx in GNU Autoconf Archive.)
The target make indent-code and the script build-aux/indent.sh use GNU indent (see GNU indent) to format C source code.
These are common names for the packages providing these dependencies:
git
wget
autoconf
autoconf-archive
automake
autopoint
libtool
indent
Previous: Requirements, Up: Installation [Contents][Index]
MicroBenchmark provides several options to be used at configure time. In addition to the generic parameters, such as --prefix, --libdir and so on, the generated configure script accepts these options:
--disable-assert
Disable assertions on the code.
--enable-clock-gettime
--disable-clock-gettime
Enable or disable the clock_gettime chronometer implementation (see Predefined Chronometers.) If this parameter is not provided, this chronometer will be compiled if clock_gettime is available.
--enable-gettimeofday
--disable-gettimeofday
Enable or disable the gettimeofday chronometer implementation (see Predefined Chronometers.) If this parameter is not provided, this chronometer will be compiled if gettimeofday is available.
--enable-itimer
--disable-itimer
Enable or disable the itimer timer implementation (see Predefined Timers.) If this parameter is not provided, this timer will be compiled if setitimer is available.
--enable-timer-t
--disable-timer-t
Enable or disable the timer-t timer implementation (see Predefined Timers.) If this parameter is not provided, this timer will be compiled if timer_create is available.
--enable-timerfd
--disable-timerfd
Enable or disable the timerfd timer implementation (see Predefined Timers.) If this parameter is not provided, the timer will be compiled if timerfd_create is available.
--disable-guile
Disable Guile bindings.
--enable-coverage
--enable-coverage=auto
Enable the generation of a coverage report with make check. The generated report can be found at build-aux/mbenchmark-gcov.tar.gz. If --enable-coverage is used, any error while checking for the tools needed for the coverage report generation will result in a failed execution of the configure script. If --enable-coverage=auto is used, any error while finding or checking those tools will disable the generation of the report.
The coverage report is disabled by default.
--disable-traces
Do not emit trace level log calls on the code.
--with-log-level=LEVEL
Select the default log level for the library. The following levels are available; each level includes the previous ones.
error: Only error messages are shown.
warn: Warning messages are emitted to the log output.
info: Messages about the current execution status are emitted to the log output.
debug: Debugging information is emitted to the log output.
trace: Detailed information at each step of the suite execution is emitted to the log output.
--with-libunistring=DIR
--with-libunistring-include=INCLUDEDIR
--with-libunistring-libs=LIBDIR
Select the libunistring installation used.
Next: Common API Concepts, Previous: Installation, Up: MicroBenchmark [Contents][Index]
The following sections show some more advanced features offered by MicroBenchmark, such as different output formats (see Report Output) and helper functionality to tell the compiler that we really want to perform the desired computations (see Optimizer Tips.)
Contact the mailing list <bug-mbenchmark@nongnu.org> if you have ideas or more information about possible complex use cases of MicroBenchmark.
Next: Report Output, Up: Advanced Usage [Contents][Index]
The following example shows a more realistic use case in C:
#include <mbenchmark/all.h>
#include <stdio.h>
#include <string.h>

/* Main can be placed anywhere.  */
MICRO_BENCHMARK_MAIN ();

/* Recursive implementation -- overflows with small values!  */
long
fibrec (int x)
{
  if (x > 1)
    return fibrec (x - 1) + fibrec (x - 2);
  if (x < 0)
    return fibrec (x + 2) - fibrec (x + 1);
  return x;
}

/* Iterative implementation -- overflows with small values!  */
long
fibit (int x)
{
  long curr, last;
#define impl(c, l, start, sign)                 \
  last = l;                                     \
  curr = c;                                     \
  for (int i = start; i < sign x; ++i)          \
    {                                           \
      long tmp = last sign curr;                \
      last = curr;                              \
      curr = tmp;                               \
    }                                           \
  return curr

  if (x > 0)
    {
      impl (1, 0, 1, +);
    }
  impl (0, 1, 0, -);
#undef impl
}

/* Prepare the pointer and its value.  */
static int *
set_up (micro_benchmark_test_state s)
{
  static int value = 0;
  value = micro_benchmark_state_get_size (s, 0);
  value *= micro_benchmark_state_get_size (s, 1) ? 1 : -1;
  return &value;
}

static void
tear_down (micro_benchmark_test_state s, int *ptr)
{
  char buf[100];
  const char *name = micro_benchmark_state_get_name (s);
  if (strcmp (name, "test_fibit") == 0)
    snprintf (buf, sizeof (buf) - 1, "fibit/%d", *ptr);
  else
    snprintf (buf, sizeof (buf) - 1, "fibrec/%d", *ptr);
  buf[sizeof (buf) - 1] = '\0';
  micro_benchmark_state_set_name (s, buf);
}

/* Recursive implementation test.  */
static void
test_fibrec (int *r)
{
  long fib = fibrec (*r);
  MICRO_BENCHMARK_DO_NOT_OPTIMIZE (fib);
}

MICRO_BENCHMARK_REGISTER_AUTO_TEST (set_up, test_fibrec, tear_down);

/* Iterative implementation test.  */
static void
test_fibit (int *r)
{
  long fib = fibit (*r);
  MICRO_BENCHMARK_DO_NOT_OPTIMIZE (fib);
}

MICRO_BENCHMARK_REGISTER_AUTO_TEST (set_up, test_fibit, tear_down);

/* Constraints on the recursive test.  */
static void
rec_constraints (micro_benchmark_test_case test)
{
  const size_t sizes[] = { 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30 };
  const size_t ssizes = sizeof (sizes) / sizeof (*sizes);
  micro_benchmark_test_case_add_dimension (test, ssizes, sizes);
  const size_t sign[] = { 0, 1 };
  const size_t ssign = sizeof (sign) / sizeof (*sign);
  micro_benchmark_test_case_add_dimension (test, ssign, sign);
  micro_benchmark_clock_time mt = { 3, 0 };
  micro_benchmark_test_case_set_max_time (test, mt);
  micro_benchmark_test_case_limit_iterations (test, 0, 1000000);
}

MICRO_BENCHMARK_CONSTRAINT_TEST ("test_fibrec", rec_constraints);

/* Constraints on the iterative test; this is enough.  */
static void
it_constraints (micro_benchmark_test_case test)
{
  const size_t sizes[] = { 10, 20, 30, 40 };
  const size_t ssizes = sizeof (sizes) / sizeof (*sizes);
  micro_benchmark_test_case_add_dimension (test, ssizes, sizes);
  const size_t sign[] = { 0, 1 };
  const size_t ssign = sizeof (sign) / sizeof (*sign);
  micro_benchmark_test_case_add_dimension (test, ssign, sign);
  micro_benchmark_clock_time mt = { 3, 0 };
  micro_benchmark_test_case_set_max_time (test, mt);
  micro_benchmark_test_case_limit_iterations (test, 0, 1000000);
}

MICRO_BENCHMARK_CONSTRAINT_TEST ("test_fibit", it_constraints);
This is its C++ version:
#include <mbenchmark/all.hpp>
#include <functional>           /* std::plus and std::minus */
#include <string>

/* Main can be placed anywhere.  */
MICRO_BENCHMARK_MAIN ();

namespace
{
  /* Recursive implementation.  */
  template <typename T, typename I = int>
  T fibrec (I x)
  {
    if (x > 1)
      return fibrec<T> (x - 1) + fibrec<T> (x - 2);
    if (x < 0)
      return fibrec<T> (x + 2) - fibrec<T> (x + 1);
    return x;
  }

  /* Iterative implementation.  */
  template <typename T, typename I = int>
  T fibit (I x)
  {
    auto doit = [x] (T curr, T last, I start, auto&& op) {
      for (I i = start; i < op (0, x); ++i)
        {
          T tmp = op (last, curr);
          last = curr;
          curr = tmp;
        }
      return curr;
    };
    if (x > 0)
      return doit (T{1}, T{0}, 1, std::plus<T>{});
    return doit (T{0}, T{1}, 0, std::minus<T>{});
  }

  /* Prepare the pointer and its value.  */
  int set_up (micro_benchmark::state const& s)
  {
    auto sizes = s.sizes ();
    int value = sizes.at (0);
    value *= sizes.at (1) ? 1 : -1;
    return value;
  }

  void tear_down (micro_benchmark::state& s, int v)
  {
    constexpr auto fibit_base = "fibit/";
    constexpr auto fibrec_base = "fibrec/";
    std::string name = s.get_name ();
    if (name == "test_fibit")
      s.set_name (fibit_base + std::to_string (v));
    else
      s.set_name (fibrec_base + std::to_string (v));
  }

  /* Constraints on the recursive test.  */
  void rec_constraints (micro_benchmark::test_case& test)
  {
    test.add_dimension ({ 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30 });
    test.add_dimension ({ 0, 1 });
    test.limit_iterations (0, 1000000);
  }

  /* Register the recursive test with its constraints.  */
  void rtestfun (int v)
  {
    auto r = fibrec<long> (v);
    micro_benchmark::do_not_optimize (r);
  }

  auto rtest =
    micro_benchmark::register_test (micro_benchmark::with_constraints,
                                    "test_fibrec", rec_constraints,
                                    rtestfun, set_up, tear_down);

  /* Constraints on the iterative test; this is enough.  */
  void it_constraints (micro_benchmark::test_case& test)
  {
    test.add_dimension ({ 10, 20, 30, 40 });
    test.add_dimension ({ 0, 1 });
    test.limit_iterations (0, 1000000);
  }

  /* Register the iterative test with its constraints.  */
  void itestfun (int v)
  {
    auto r = fibit<long> (v);
    micro_benchmark::do_not_optimize (r);
  }

  auto itest =
    micro_benchmark::register_test (micro_benchmark::with_constraints,
                                    "test_fibit", it_constraints,
                                    itestfun, set_up, tear_down);
}
This is its Guile version:
(define (fibrec n)
  (cond ((> n 1) (+ (fibrec (- n 2)) (fibrec (- n 1))))
        ((< n 0) (- (fibrec (+ n 2)) (fibrec (+ n 1))))
        (else n)))

(define (fibit n)
  (define (up curr last p)
    (if (> p 0) (up (+ curr last) curr (- p 1)) curr))
  (define (down curr last p)
    (if (< p 0) (down (- last curr) curr (+ p 1)) curr))
  (cond ((> n 1) (up 1 0 (- n 1)))
        ((< n 0) (down 0 1 n))
        (else n)))

(define (set-up state)
  (define (doit size sign)
    (if (eqv? sign 0) (list size) (list (- size))))
  (apply doit (state-sizes state)))

(define (tear-down state n)
  (let ((base (if (equal? (state-name state) "fibit")
                  "fibit/"
                  "fibrec/")))
    (set-state-name! state (string-append base (number->string n)))))

(register-test! "fibit"
                #:test fibit
                #:set-up set-up
                #:tear-down tear-down
                #:dimensions '((10 50 100 500 1000 5000 10000 50000 100000)
                               (0 1))
                #:max-iterations 1000000)

(register-test! "fibrec"
                #:test fibrec
                #:set-up set-up
                #:tear-down tear-down
                #:dimensions '((10 12 14 16 18 20 22 24 26 28 30) (0 1))
                #:max-iterations 1000000)

(main (command-line))
These files can be found in the source tree as doc/examples/fib.c, doc/examples/fib.cxx and doc/examples/fib.scm, respectively.
Their output would be something like this:
fib --brief --log-level=warn
⇒
Suite: fib (40 test executions)
========================================
Test Name      | Iterations | It.Time (μ)
========================================
fibit          |         -- |          --
fibit/10       |    1000000 |       252ns
fibit/50       |    1000000 |       429ns
fibit/100      |    1000000 |      1.64μs
fibit/500      |      92253 |      53.3μs
fibit/1000     |      32308 |       159μs
fibit/5000     |       2509 |      2.00ms
fibit/10000    |        639 |      7.84ms
fibit/50000    |         32 |       157ms
fibit/100000   |          9 |       582ms
fibit/-10      |    1000000 |       308ns
fibit/-50      |    1000000 |       503ns
fibit/-100     |    1000000 |      1.94μs
fibit/-500     |      80266 |      62.7μs
fibit/-1000    |      29633 |       169μs
fibit/-5000    |       2153 |      2.33ms
fibit/-10000   |        576 |      8.78ms
fibit/-50000   |         29 |       173ms
fibit/-100000  |          9 |       621ms
----------------------------------------
fibrec         |         -- |          --
fibrec/10      |    1000000 |      2.07μs
fibrec/12      |     884087 |      4.53μs
fibrec/14      |     388238 |      11.7μs
fibrec/16      |     155112 |      31.1μs
fibrec/18      |      60585 |      81.3μs
fibrec/20      |      23411 |       213μs
fibrec/22      |       8991 |       555μs
fibrec/24      |       3441 |      1.45ms
fibrec/26      |       1316 |      3.80ms
fibrec/28      |        510 |      9.84ms
fibrec/30      |        195 |      25.7ms
fibrec/-10     |    1000000 |      2.81μs
fibrec/-12     |     593652 |      7.29μs
fibrec/-14     |     243955 |      19.3μs
fibrec/-16     |      95423 |      51.0μs
fibrec/-18     |      36732 |       135μs
fibrec/-20     |      13949 |       357μs
fibrec/-22     |       5257 |     0.945ms
fibrec/-24     |       1998 |      2.51ms
fibrec/-26     |        770 |      6.53ms
fibrec/-28     |        295 |      17.0ms
fibrec/-30     |        113 |      44.5ms
----------------------------------------
========================================
The speed of the recursive implementation degrades very quickly as the magnitude of the argument increases, while the iterative version can handle arguments several orders of magnitude larger in comparable time.
Next: Optimizer Tips, Previous: Comparing Implementations, Up: Advanced Usage [Contents][Index]
MicroBenchmark provides several predefined formats for its output. The default is a tabulated presentation of the results (see Console Output), but the library can also emit S-expressions (see Lisp Output) and a record-based text format (see Text Output).
Next: Lisp Output, Up: Report Output [Contents][Index]
NOTE: This format is under development and may change.
The usual output looks like this:
Suite: console (1 test execution)
=========================================================
Test Name      | Iterations | It.Time (μ/σ)  | Total Time
=========================================================
Self-Test      |         -- | --             |         --
empty          |      65536 | 22.7ns/2.89ns  |    0.3484s
barrier        |      65536 | 23.1ns/2.19ns  |    0.3508s
read+write     |      65536 | 29.6ns/1.13ns  |    0.3521s
addition       |      65536 | 32.7ns/1.71ns  |    0.3529s
multiplication |      65536 | 34.8ns/1.16ns  |    0.3513s
empty          |      65536 | 23.6ns/2.88ns  |    0.3536s
---------------------------------------------------------
test           |       3994 | 1.25ms/0.074ms |    5.0912s
=========================================================
And here are all the possible values:
Suite: console (1 test execution)
======================================================================================================================================================
Test Name      | Total It. | Iterations | It.Time (μ/σ²/σ)        | Total S. | Samples | S.Time (μ/σ²/σ)       | S.Iter. (μ/σ²/σ) | Total Time | Sizes
======================================================================================================================================================
Self-Test      |        -- |         -- | --                      |       -- |      -- | --                    | --               |         -- |    --
empty          |     65536 |      65536 | 21.1ns/8.06ns²/2.84ns   |       58 |      58 | 24.1μs/189μs²/13.7μs  | 1130/373162/611  |    0.3273s |   {0}
barrier        |     65536 |      65536 | 22.3ns/2.00ns²/1.41ns   |       59 |      59 | 24.7μs/195μs²/14.0μs  | 1111/393171/627  |    0.3430s |   {1}
read+write     |     65536 |      65536 | 29.3ns/8.21ns²/2.87ns   |       60 |      60 | 31.6μs/334μs²/18.3μs  | 1092/398707/631  |    0.3459s |   {2}
addition       |     65536 |      65536 | 32.2ns/6.57ns²/2.56ns   |       60 |      60 | 34.9μs/398μs²/19.9μs  | 1092/390244/625  |    0.3470s |   {3}
multiplication |     65536 |      65536 | 34.6ns/2.04ns²/1.43ns   |       60 |      60 | 37.7μs/460μs²/21.4μs  | 1092/383910/620  |    0.3485s |   {4}
empty          |     65536 |      65536 | 22.6ns/3.05ns²/1.75ns   |       60 |      60 | 24.5μs/198μs²/14.1μs  | 1092/386938/622  |    0.3484s |   {5}
------------------------------------------------------------------------------------------------------------------------------------------------------
test           |      3860 |       3860 | 1.29ms/0.002ms²/0.045ms |      311 |     311 | 16.0ms/78.1ms²/8.83ms | 12.4/47.5/6.89   |    5.0819s |     ∅
======================================================================================================================================================
Next: Text Output, Previous: Console Output, Up: Report Output [Contents][Index]
This format is compatible with Lisp’s read procedure, and can be used for further processing of the collected data.
NOTE: The format is under development and may change.
The usual output looks like this:
(suite
 (name "lisp")
 (test
  (name "Self-Test")
  (execution (number 0) (name "empty")
             (total-time 0.326749 "s") (iterations 65536)
             (iteration-time (mean 22.072614 "ns")
                             (std-deviation 4.105545 "ns")))
  (execution (number 1) (name "barrier")
             (total-time 0.326562 "s") (iterations 65536)
             (iteration-time (mean 21.374931 "ns")
                             (std-deviation 1.093298 "ns")))
  (execution (number 2) (name "read+write")
             (total-time 0.333746 "s") (iterations 65536)
             (iteration-time (mean 28.694221 "ns")
                             (std-deviation 3.253493 "ns")))
  (execution (number 3) (name "addition")
             (total-time 0.345400 "s") (iterations 65536)
             (iteration-time (mean 38.210674 "ns")
                             (std-deviation 25.125504 "ns")))
  (execution (number 4) (name "multiplication")
             (total-time 0.339907 "s") (iterations 65536)
             (iteration-time (mean 35.148271 "ns")
                             (std-deviation 5.568858 "ns")))
  (execution (number 5) (name "empty")
             (total-time 0.332254 "s") (iterations 65536)
             (iteration-time (mean 21.904489 "ns")
                             (std-deviation 1.905702 "ns")))
  (number-of-runs 6))
 (test
  (name "test")
  (execution (number 0) (name "test")
             (total-time 5.087626 "s") (iterations 3981)
             (iteration-time (mean 1.240041 "ms")
                             (std-deviation 0.074364 "ms")))
  (number-of-runs 1))
 (number-of-tests 2))
And here are all the possible values:
(suite
 (name "lisp")
 (test
  (name "Self-Test")
  (execution (number 0) (name "empty") (dimensions (0))
             (total-time 0.314982 "s")
             (total-iterations 65536) (iterations 65536)
             (total-samples 57) (used-samples 57)
             (iteration-time (mean 20.941117 "ns")
                             (std-deviation 5.240370 "ns")
                             (variance 27.461475 "ns²"))
             (sample-time (mean 24.478947 "μs")
                          (std-deviation 16.031848 "μs")
                          (variance 257.020146 "μs²"))
             (sample-iterations (mean 1149.754386)
                                (std-deviation 629.857787)
                                (variance 396720.831454))
             (custom-meters (total 0)))
  (execution (number 1) (name "barrier") (dimensions (1))
             (total-time 0.323575 "s")
             (total-iterations 65536) (iterations 65536)
             (total-samples 57) (used-samples 57)
             (iteration-time (mean 21.163393 "ns")
                             (std-deviation 0.844636 "ns")
                             (variance 0.713410 "ns²"))
             (sample-time (mean 24.239070 "μs")
                          (std-deviation 13.764559 "μs")
                          (variance 189.463080 "μs²"))
             (sample-iterations (mean 1149.754386)
                                (std-deviation 652.575592)
                                (variance 425854.902882))
             (custom-meters (total 0)))
  (execution (number 2) (name "read+write") (dimensions (2))
             (total-time 0.322978 "s")
             (total-iterations 65536) (iterations 65536)
             (total-samples 57) (used-samples 57)
             (iteration-time (mean 27.766774 "ns")
                             (std-deviation 2.136562 "ns")
                             (variance 4.564897 "ns²"))
             (sample-time (mean 31.624877 "μs")
                          (std-deviation 17.903620 "μs")
                          (variance 320.539599 "μs²"))
             (sample-iterations (mean 1149.754386)
                                (std-deviation 652.000419)
                                (variance 425104.545739))
             (custom-meters (total 0)))
  (execution (number 3) (name "addition") (dimensions (3))
             (total-time 0.324984 "s")
             (total-iterations 65536) (iterations 65536)
             (total-samples 58) (used-samples 58)
             (iteration-time (mean 31.652676 "ns")
                             (std-deviation 7.132115 "ns")
                             (variance 50.867060 "ns²"))
             (sample-time (mean 37.170897 "μs")
                          (std-deviation 27.209318 "μs")
                          (variance 740.347009 "μs²"))
             (sample-iterations (mean 1129.931034)
                                (std-deviation 656.896073)
                                (variance 431512.451301))
             (custom-meters (total 0)))
  (execution (number 4) (name "multiplication") (dimensions (4))
             (total-time 0.323793 "s")
             (total-iterations 65536) (iterations 65536)
             (total-samples 57) (used-samples 57)
             (iteration-time (mean 32.703045 "ns")
                             (std-deviation 0.937458 "ns")
                             (variance 0.878827 "ns²"))
             (sample-time (mean 37.474596 "μs")
                          (std-deviation 21.326054 "μs")
                          (variance 454.800575 "μs²"))
             (sample-iterations (mean 1149.754386)
                                (std-deviation 656.408880)
                                (variance 430872.617168))
             (custom-meters (total 0)))
  (execution (number 5) (name "empty") (dimensions (5))
             (total-time 0.326234 "s")
             (total-iterations 65536) (iterations 65536)
             (total-samples 58) (used-samples 58)
             (iteration-time (mean 21.576884 "ns")
                             (std-deviation 2.229484 "ns")
                             (variance 4.970598 "ns²"))
             (sample-time (mean 24.337345 "μs")
                          (std-deviation 14.608577 "μs")
                          (variance 213.410528 "μs²"))
             (sample-iterations (mean 1129.931034)
                                (std-deviation 660.495265)
                                (variance 436253.995160))
             (custom-meters (total 0)))
  (number-of-runs 6))
 (test
  (name "test")
  (execution (number 0) (name "test")
             (total-time 5.082051 "s")
             (total-iterations 3938) (iterations 3938)
             (total-samples 311) (used-samples 311)
             (iteration-time (mean 1.265120 "ms")
                             (std-deviation 0.067238 "ms")
                             (variance 0.004521 "ms²"))
             (sample-time (mean 15.978674 "ms")
                          (std-deviation 8.834413 "ms")
                          (variance 78.046855 "ms²"))
             (sample-iterations (mean 12.662379)
                                (std-deviation 7.027033)
                                (variance 49.379193))
             (custom-meters (total 0)))
  (number-of-runs 1))
 (number-of-tests 2))
Previous: Lisp Output, Up: Report Output [Contents][Index]
This format is compatible with GNU recutils (see GNU recutils), and can be used for further processing of the collected data.
NOTE: The format is under development and may change.
The usual output looks like this:
# Suite Name: recutils
# Number of tests: 2

Suite: recutils
Test: Self-Test
Test-Case: empty
Total-Time: 0.330962 seconds
Iterations: 65536
Iteration-Time: 21.889642 ns
Iteration-Time-Dev: 3.465406 ns

Suite: recutils
Test: Self-Test
Test-Case: barrier
Total-Time: 0.336391 seconds
Iterations: 65536
Iteration-Time: 22.075144 ns
Iteration-Time-Dev: 1.480519 ns

Suite: recutils
Test: Self-Test
Test-Case: read+write
Total-Time: 0.338262 seconds
Iterations: 65536
Iteration-Time: 28.806907 ns
Iteration-Time-Dev: 1.788922 ns

Suite: recutils
Test: Self-Test
Test-Case: addition
Total-Time: 0.335605 seconds
Iterations: 65536
Iteration-Time: 31.240347 ns
Iteration-Time-Dev: 1.614777 ns

Suite: recutils
Test: Self-Test
Test-Case: multiplication
Total-Time: 0.336565 seconds
Iterations: 65536
Iteration-Time: 33.552166 ns
Iteration-Time-Dev: 0.872344 ns

Suite: recutils
Test: Self-Test
Test-Case: empty
Total-Time: 0.336429 seconds
Iterations: 65536
Iteration-Time: 22.165482 ns
Iteration-Time-Dev: 1.831835 ns

Suite: recutils
Test: test
Test-Case: test
Total-Time: 5.087685 seconds
Iterations: 4019
Iteration-Time: 1.244888 ms
Iteration-Time-Dev: 0.070221 ms
And here are all the possible values:
# Suite Name: recutils
# Number of tests: 2

Suite: recutils
Test: Self-Test
Test-Case: empty
Test-Dimensions: (0)
Total-Time: 0.313433 seconds
Total-Iterations: 65536
Total-Samples: 57
Iterations: 65536
Samples: 57
Iteration-Time: 21.266609 ns
Iteration-Time-Dev: 3.854903 ns
Iteration-Time-Var: 14.860275 ns²
Sample-Time: 24.024211 μs
Sample-Time-Dev: 14.308473 μs
Sample-Time-Var: 204.732392 μs²
Iterations-Sample: 1149.754386 iterations
Iterations-Sample-Dev: 658.869331 iterations
Iterations-Sample-Var: 434108.795739 iterations

Suite: recutils
Test: Self-Test
Test-Case: barrier
Test-Dimensions: (1)
Total-Time: 0.324264 seconds
Total-Iterations: 65536
Total-Samples: 57
Iterations: 65536
Samples: 57
Iteration-Time: 22.271273 ns
Iteration-Time-Dev: 1.055291 ns
Iteration-Time-Var: 1.113640 ns²
Sample-Time: 25.427140 μs
Sample-Time-Dev: 14.430901 μs
Sample-Time-Var: 208.250895 μs²
Iterations-Sample: 1149.754386 iterations
Iterations-Sample-Dev: 653.540940 iterations
Iterations-Sample-Var: 427115.760025 iterations

Suite: recutils
Test: Self-Test
Test-Case: read+write
Test-Dimensions: (2)
Total-Time: 0.324357 seconds
Total-Iterations: 65536
Total-Samples: 57
Iterations: 65536
Samples: 57
Iteration-Time: 29.853592 ns
Iteration-Time-Dev: 0.998944 ns
Iteration-Time-Var: 0.997890 ns²
Sample-Time: 34.188228 μs
Sample-Time-Dev: 19.447461 μs
Sample-Time-Var: 378.203752 μs²
Iterations-Sample: 1149.754386 iterations
Iterations-Sample-Dev: 655.631110 iterations
Iterations-Sample-Var: 429852.152882 iterations

Suite: recutils
Test: Self-Test
Test-Case: addition
Test-Dimensions: (3)
Total-Time: 0.327395 seconds
Total-Iterations: 65536
Total-Samples: 58
Iterations: 65536
Samples: 58
Iteration-Time: 32.130466 ns
Iteration-Time-Dev: 1.723760 ns
Iteration-Time-Var: 2.971349 ns²
Sample-Time: 36.404397 μs
Sample-Time-Dev: 21.188594 μs
Sample-Time-Var: 448.956513 μs²
Iterations-Sample: 1129.931034 iterations
Iterations-Sample-Dev: 638.763055 iterations
Iterations-Sample-Var: 408018.240774 iterations

Suite: recutils
Test: Self-Test
Test-Case: multiplication
Test-Dimensions: (4)
Total-Time: 0.326852 seconds
Total-Iterations: 65536
Total-Samples: 58
Iterations: 65536
Samples: 58
Iteration-Time: 35.174917 ns
Iteration-Time-Dev: 1.703684 ns
Iteration-Time-Var: 2.902540 ns²
Sample-Time: 39.356707 μs
Sample-Time-Dev: 22.986242 μs
Sample-Time-Var: 528.367301 μs²
Iterations-Sample: 1129.931034 iterations
Iterations-Sample-Dev: 664.098383 iterations
Iterations-Sample-Var: 441026.661827 iterations

Suite: recutils
Test: Self-Test
Test-Case: empty
Test-Dimensions: (5)
Total-Time: 0.328522 seconds
Total-Iterations: 65536
Total-Samples: 58
Iterations: 65536
Samples: 58
Iteration-Time: 22.131524 ns
Iteration-Time-Dev: 0.876206 ns
Iteration-Time-Var: 0.767737 ns²
Sample-Time: 24.892259 μs
Sample-Time-Dev: 14.260445 μs
Sample-Time-Var: 203.360286 μs²
Iterations-Sample: 1129.931034 iterations
Iterations-Sample-Dev: 649.394100 iterations
Iterations-Sample-Var: 421712.696915 iterations

Suite: recutils
Test: test
Test-Case: test
Test-Dimensions: ()
Total-Time: 5.089305 seconds
Total-Iterations: 3935
Total-Samples: 311
Iterations: 3935
Samples: 311
Iteration-Time: 1.269683 ms
Iteration-Time-Dev: 0.063774 ms
Iteration-Time-Var: 0.004067 ms²
Sample-Time: 15.970704 ms
Sample-Time-Dev: 8.820956 ms
Sample-Time-Var: 77.809264 ms²
Iterations-Sample: 12.652733 iterations
Iterations-Sample-Dev: 7.028627 iterations
Iterations-Sample-Var: 49.401597 iterations
Previous: Report Output, Up: Advanced Usage [Contents][Index]
Compilers are usually allowed to throw away any computation whose effects are not visible. Micro-benchmarks usually discard the result of the computation at each iteration, and the compiler might be really nice to you and move or remove the code under test along with it. At the end of the day, the fastest execution is the one that executes no code at all.
If you are experiencing this kind of problem, see Optimization Utilities Reference for functions that tell the compiler the values produced by the code under test are needed. Users of the C++ binding will find their equivalents in C++ Optimization Utilities Reference. The Guile binding does not implement these helpers yet.
These tools must be used carefully, as the compiler might still find ways to comply with our requirements with less work than we expected (a big kudos to compiler writers). Your mileage may vary.
Next: C API Reference, Previous: Advanced Usage, Up: MicroBenchmark [Contents][Index]
The functions of this reference are specified using Floyd-Hoare logic, as P{Q}R, where:
The following order is imposed between these elements:
The following examples showcase the format used by the API Reference:
extra_qualifiers
¶Preconditions:
Conditions to be met at the call point.
<-- This number is for reference purposes, all preconditions must be met at the call point.
Effects:
Effect produced by the call.
Another effect produced by the call when value has the special
value foobar
.
<-- This number is for reference purposes. It does not impose an order on the effects unless specified otherwise.
Postconditions:
Condition provided after the call.
Another condition provided after the call when ParamType is something.
<-- This number is for reference purposes, all postconditions must be provided at the return point.
General comments regarding the function come after the formal definition.
Preconditions: …
Note:
NULL
doesn’t point to a valid
object in C.
Next: C++ API Reference, Previous: Common API Concepts, Up: MicroBenchmark [Contents][Index]
The top level object is called a suite (see Suite Reference), which is composed of one or more tests.
Tests (see Test Reference) are defined by the struct micro_benchmark_test_definition. These definitions can be registered onto the suite.
Each test execution is controlled through a state object (see State Reference), where statistics are collected.
A report with the results (see Report Reference) is available once the execution has finished.
In addition to these interfaces, utility macros and functions are provided (see Utilities Reference) to reduce the boilerplate. Also, compilers sometimes need hints to not optimize away code under test (see Optimization Utilities Reference.)
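The lifecycle described in this section can be sketched as follows. The function names appear in this reference, but the argument lists shown here are assumptions for illustration; see the individual entries for the exact signatures.

```c
/* Sketch of the suite lifecycle; argument lists are assumptions. */
micro_benchmark_suite suite = micro_benchmark_suite_create ("demo");

/* Register one or more tests, by definition or convenience macro. */
micro_benchmark_suite_register_test (suite, "my-test", &my_definition);

micro_benchmark_suite_run (suite);

/* Reports stay valid until the suite is released. */
micro_benchmark_report report = micro_benchmark_suite_get_report (suite);
/* ... format or inspect the report ... */

micro_benchmark_suite_release (suite);
```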
MicroBenchmark is at an early stage of development. The API might change in future versions.
Next: Test Reference, Up: C API Reference [Contents][Index]
The suite is represented by the following type:
Opaque pointer type. This object controls the lifetime of its derived objects, so all references obtained through an object of this type are bound to the lifetime of that object.
Preconditions:
Effects:
Postconditions:
NULL
.
Preconditions:
micro_benchmark_suite_create
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_suite_create
.
Effects:
Postconditions:
micro_benchmark_suite_create
that originated this suite.
Preconditions:
micro_benchmark_suite_create
.
Effects:
Postconditions:
NULL
.
Preconditions:
micro_benchmark_suite_create
.
Effects:
Postconditions:
Note that no interface to retrieve tests by name is provided, as several tests might be registered under the same name.
Preconditions:
micro_benchmark_suite_create
.
micro_benchmark_suite_get_number_of_tests
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_suite_create
.
micro_benchmark_suite_run (suite)
has not been called.
Effects:
Postconditions:
micro_benchmark_suite_get_report
.
Preconditions:
micro_benchmark_suite_create
.
micro_benchmark_suite_run (suite)
has been called.
Effects:
Postconditions:
NULL
.
Next: State Reference, Previous: Suite Reference, Up: C API Reference [Contents][Index]
This section shows the types and functions related to the test functionality.
The main type is:
Representation of a registered test case.
The following sections explain in depth the definition of test cases and how to associate constraints with their execution.
Next: Test Constraints Reference, Up: Test Reference [Contents][Index]
Its prototype is void *(*) (micro_benchmark_test_state)
.
Its prototype is void (*) (micro_benchmark_test_state, void *)
.
Its prototype is void (*) (void *)
.
Its prototype is void (*) (micro_benchmark_test_state)
.
bool is_auto, micro_benchmark_set_up_fun set_up, micro_benchmark_tear_down_fun tear_down, micro_benchmark_auto_test_fun auto_test, micro_benchmark_test_fun test } ¶Definition of a test case.
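Putting the callback types and fields above together, a manual test definition might be initialized as below. This is only a sketch: the mapping of callbacks to fields follows the declaration above, and the fixture helpers are hypothetical names, not part of the library.

```c
/* Sketch: a manual (non-auto) test case definition.  make_fixture,
   free_fixture and use_fixture are hypothetical helpers. */
static void *
my_set_up (micro_benchmark_test_state state)
{
  return make_fixture ();
}

static void
my_tear_down (micro_benchmark_test_state state, void *data)
{
  free_fixture (data);
}

static void
my_test (void *data)
{
  use_fixture (data);            /* code under measurement */
}

static const micro_benchmark_test_definition my_definition = {
  .is_auto = false,              /* drive iterations with test */
  .set_up = my_set_up,
  .tear_down = my_tear_down,
  .auto_test = NULL,
  .test = my_test,
};
```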
Opaque type of a registered test case.
Preconditions:
micro_benchmark_suite_register_test
.
micro_benchmark_suite_get_test
.
Effects:
Postconditions:
NULL
.
Preconditions:
micro_benchmark_suite_register_test
.
micro_benchmark_suite_get_test
.
Effects:
Postconditions:
NULL
.
The micro_benchmark_test_definition pointed to by the returned value contains the same values as the one provided to micro_benchmark_suite_register_test or to the MICRO_BENCHMARK_REGISTER_ macro family.
Preconditions:
micro_benchmark_suite_register_test
.
micro_benchmark_suite_get_test
.
Effects:
true
.
Postconditions:
true
.
Preconditions:
micro_benchmark_suite_register_test
.
micro_benchmark_suite_get_test
.
Effects:
Postconditions:
micro_benchmark_test_case_set_enabled
was called with
test as its parameter, the returned value is the value
provided as its parameter.
Preconditions:
micro_benchmark_suite_register_test
.
micro_benchmark_suite_get_test
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_suite_register_test
.
micro_benchmark_suite_get_test
.
Effects:
Postconditions:
micro_benchmark_test_case_set_data
or NULL
if
micro_benchmark_test_case_set_data
has not been called.
Previous: Test Definition Reference, Up: Test Reference [Contents][Index]
This subsection contains the different static constraints that can be applied to a test case.
The following functions modify the constraints applied to the test case execution:
Preconditions:
micro_benchmark_suite_register_test
.
micro_benchmark_suite_get_test
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_suite_register_test
,
micro_benchmark_suite_get_test
and the associated
micro_benchmark_suite
is still valid.
Effects:
Postconditions:
Preconditions:
micro_benchmark_suite_register_test
.
micro_benchmark_suite_get_test
.
micro_benchmark_test_case_dimensions
.
Effects:
Postconditions:
*
sizes.
Preconditions:
micro_benchmark_suite_register_test
.
micro_benchmark_suite_get_test
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_suite_register_test
.
micro_benchmark_suite_get_test
.
Effects:
Postconditions:
micro_benchmark_test_case_skip_iterations
has been called
with test as its parameter, the returned value is the one
provided to micro_benchmark_test_case_skip_iterations
.
The following functions control the number of iterations performed by the code under test.
Preconditions:
micro_benchmark_suite_register_test
.
micro_benchmark_suite_get_test
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_suite_register_test
.
micro_benchmark_suite_get_test
.
Effects:
Postconditions:
micro_benchmark_test_case_limit_iterations
was called and
the parameter max_iterations was not zero, the returned value is
the parameter provided to that call.
Preconditions:
micro_benchmark_suite_register_test
.
micro_benchmark_suite_get_test
.
Effects:
Postconditions:
micro_benchmark_test_case_limit_iterations
was called, the
returned value is the value provided by the parameter
min_iterations to that call.
The following functions limit the number of iterations performed on each measurement sample taken, or query these limits.
Preconditions:
micro_benchmark_suite_register_test
.
micro_benchmark_suite_get_test
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_suite_register_test
.
micro_benchmark_suite_get_test
.
Effects:
Postconditions:
micro_benchmark_test_case_limit_sample_iterations
was
called, the returned value is the value provided by the parameter
min_iterations to that call.
Preconditions:
micro_benchmark_suite_register_test
.
micro_benchmark_suite_get_test
.
Effects:
Postconditions:
micro_benchmark_test_case_limit_sample_iterations
was
called, the returned value is the value provided by the parameter
max_iterations to that call.
The following functions control the maximum time spent executing the test code.
Preconditions:
micro_benchmark_suite_register_test
.
micro_benchmark_suite_get_test
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_suite_register_test
.
micro_benchmark_suite_get_test
.
Effects:
Postconditions:
micro_benchmark_test_case_set_max_time
was called, the
returned value is equivalent to the last value provided to that
function.
Preconditions:
micro_benchmark_suite_register_test
.
micro_benchmark_suite_get_test
.
MICRO_BENCHMARK_CHRONO_PROVIDER_USER_PROVIDED
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_suite_register_test
.
micro_benchmark_suite_get_test
.
Effects:
Postconditions:
Set a custom statistical calculator for test.
micro_benchmark_meter meter, micro_benchmark_custom_sample_collector collector } ¶Custom meter definition.
Add a custom meter to test. Data pointer provided by the user for test.
Next: Report Reference, Previous: Test Reference, Up: C API Reference [Contents][Index]
Note: the lifetime of test state objects is semantically bound to the call where the state is received as a parameter. Once the scope of the function call ends, the object is no longer considered valid and any use of its value is undefined.
Opaque type, representation of the internal state of the test execution.
The following function is the main driver of the benchmark. It performs the data collection and determines if the test code must be executed again.
Preconditions:
Effects:
Postconditions:
true
if the test should perform at least
one iteration more according to its constraints.
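For an auto test, the driver described above is typically used as the loop condition. The sketch below is illustrative only: keep_running stands for the driver function described here (a hypothetical name, not a verbatim declaration), and do_work is a placeholder for the code under measurement.

```c
/* Sketch of an auto test body.  keep_running stands for the driver
   function described above; do_work is a hypothetical placeholder. */
static void
my_auto_test (micro_benchmark_test_state state)
{
  while (keep_running (state))  /* false once the constraints are met */
    do_work ();                 /* one measured iteration */
}
```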
Preconditions:
Effects:
Postconditions:
Preconditions:
Effects:
Postconditions:
micro_benchmark_test_case_add_dimension
was called at least
dimension+1 times, the returned value is one of the values
provided call number dimension, starting from 0, to
micro_benchmark_test_case_add_dimension
to the test associated
to state.
micro_benchmark_test_case_add_dimension
was called less than
dimension+1 times.
Preconditions:
Effects:
Postconditions:
NULL
unless
micro_benchmark_test_case_set_data
was called. In the latter
case, the value of the object pointer provided to
micro_benchmark_test_case_set_data
is returned.
micro_benchmark_test_case_set_data
was called. In the
latter case, the value of the object pointer provided to
micro_benchmark_test_case_set_data
is returned.
Preconditions:
Effects:
Postconditions:
Preconditions:
Effects:
Postconditions:
micro_benchmark_state_set_name
was called, the value
returned is equivalent to the value provided to
micro_benchmark_state_set_name
.
micro_benchmark_state_set_name
was not called, the value
returned is equivalent to the value provided to
micro_benchmark_suite_register_test
.
Next: Data Collection reference, Previous: State Reference, Up: C API Reference [Contents][Index]
This section documents the report generated by a suite execution (see Suite Report Reference), the report of each test contained in the suite (see Test Report Reference), and the report of each execution of a test case (see Test Execution Report.) The last section describes functionality to print the data stored in reports with predefined formats (see Output Reference).
Next: Test Report Reference, Up: Report Reference [Contents][Index]
Opaque representation of the information collected by the suite execution.
The lifetime of these objects is bound to the suite that generated them. Calling micro_benchmark_suite_release ends the lifetime of all reports associated with that suite.
Preconditions:
micro_benchmark_suite_get_report
.
Effects:
Postconditions:
micro_benchmark_suite_create
that returned the suite associated
to report.
Preconditions:
micro_benchmark_suite_get_report
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_suite_get_report
.
micro_benchmark_report_get_number_of_tests (report)
.
Effects:
Postconditions:
Next: Test Execution Report, Previous: Suite Report Reference, Up: Report Reference [Contents][Index]
This section documents the reported data from a provided test case.
Opaque representation of the information collected from a test case.
The lifetime of these objects is bound to the suite that generated them. Calling micro_benchmark_suite_release ends the lifetime of all reports associated with that suite.
Preconditions:
micro_benchmark_report_get_test_report
.
Effects:
Postconditions:
micro_benchmark_suite_register_test
call.
Preconditions:
micro_benchmark_suite_get_report
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_report_get_test_report
.
micro_benchmark_test_report_get_num_executions (report)
.
Effects:
Postconditions:
Next: Output Reference, Previous: Test Report Reference, Up: Report Reference [Contents][Index]
This section documents the reported data from an execution of a provided test.
Opaque representation of the information collected from a test case execution. Certain constraints, such as size dimensions, require several executions of the same test case.
The lifetime of these objects is bound to the suite that generated them. Calling micro_benchmark_suite_release ends the lifetime of all reports associated with that suite.
Preconditions:
micro_benchmark_test_report_get_exec_report
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_test_report_get_exec_report
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_test_report_get_exec_report
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_test_report_get_exec_report
.
micro_benchmark_exec_report_get_num_time_samples
(report)
.
Effects:
Postconditions:
The following functions access the aggregated statistical data of the test execution report.
Preconditions:
micro_benchmark_test_report_get_exec_report
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_test_report_get_exec_report
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_test_report_get_exec_report
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_test_report_get_exec_report
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_test_report_get_exec_report
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_test_report_get_exec_report
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_test_report_get_exec_report
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_test_report_get_exec_report
.
Effects:
Postconditions:
The report can contain additional data collected during the test execution (see Custom Data Collection Reference.)
Note: This API is still at a very early stage; expect changes.
const char *name, size_t size, const char **keys, const char **values } ¶Additional data collected. name identifies the data. The lengths of keys and values must be greater than or equal to size.
Extract a printable representation of the collected data. Its prototype is micro_benchmark_report_extra_data (*) (void *).
Note: Not specified yet.
Return the number of custom meters whose data has been collected in report.
Note: Not specified yet.
Return the number of samples collected by meter in report. meter must be less than the value returned by micro_benchmark_exec_report_number_of_meters.
Note: Not specified yet.
Return the type of samples collected by meter in report. meter must be less than the value returned by micro_benchmark_exec_report_number_of_meters.
Note: Not specified yet.
Return the sample pos collected by meter in report. meter must be less than the value returned by micro_benchmark_exec_report_number_of_meters (report), and pos must be less than the value returned by micro_benchmark_exec_report_meter_num_samples.
Note: Not specified yet.
Extract the custom data stored on report from meter pos.
Previous: Test Execution Report, Up: Report Reference [Contents][Index]
This section documents the output of the reported data.
MICRO_BENCHMARK_CONSOLE_OUTPUT
, MICRO_BENCHMARK_TEXT_OUTPUT
, MICRO_BENCHMARK_LISP_OUTPUT
} ¶Predefined output formats:
MICRO_BENCHMARK_CONSOLE_OUTPUT
.
MICRO_BENCHMARK_TEXT_OUTPUT
.
MICRO_BENCHMARK_LISP_OUTPUT
.
MICRO_BENCHMARK_STAT_NONE
, MICRO_BENCHMARK_STAT_MEAN
, MICRO_BENCHMARK_STAT_VARIANCE
, MICRO_BENCHMARK_STAT_STD_DEVIATION
, MICRO_BENCHMARK_STAT_BASIC
, MICRO_BENCHMARK_STAT_ALL
} ¶Provided statistical values.
Use the bitwise OR operator | to select both values.
BASIC
is equivalent to MEAN | STD_DEVIATION
.
ALL
is equivalent to MEAN | VARIANCE |
STD_DEVIATION
.
bool
self_test, bool
size_constraints, bool
total_time, bool
total_iterations, bool
iterations, bool
total_samples, bool
samples, bool
extra_data, micro_benchmark_output_stat
iteration_times, micro_benchmark_output_stat
sample_times, micro_benchmark_output_stat
batch_stats } ¶Each field determines the output of a value of the report.
Preconditions:
Effects:
Postconditions:
micro_benchmark_set_default_output_values
was called, the
returned value is equivalent to the parameter provided to the last
call in effect of micro_benchmark_set_default_output_values
.
Preconditions:
Effects:
Postconditions:
micro_benchmark_output_values
will use the values provided by
new_values to the last call of this function.
Preconditions:
micro_benchmark_test_report_get_exec_report
.
Effects:
micro_benchmark_get_default_output_values
on out with
output format type.
Postconditions:
Preconditions:
micro_benchmark_test_report_get_exec_report
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_test_report_get_exec_report
.
Effects:
micro_benchmark_get_default_output_values
on the standard
output with the console format.
Postconditions:
Preconditions:
micro_benchmark_test_report_get_exec_report
.
Effects:
Postconditions:
Next: Timer Reference, Previous: Report Reference, Up: C API Reference [Contents][Index]
This section details the data collected and reported by MicroBenchmark.
MICRO_BENCHMARK_SAMPLE_INT64
, MICRO_BENCHMARK_SAMPLE_UINT64
, MICRO_BENCHMARK_SAMPLE_DOUBLE
, MICRO_BENCHMARK_SAMPLE_TIME
, MICRO_BENCHMARK_SAMPLE_SMALL_BUFFER
, MICRO_BENCHMARK_SAMPLE_USER_PROVIDED
} ¶Type of data collected.
int64_t
integer, uint64_t
modular, double
floating_point, micro_benchmark_clock_time
time, unsigned char
buffer[MICRO_BENCHMARK_SAMPLE_SB_SIZE]
, void *
user_provided } ¶Union of the data collection types.
bool
discarded, size_t
iterations, micro_benchmark_stats_meter_sample
value } ¶Generic data sampled on a test execution.
Pointer to a collected sample.
Pointer to an array of samples. It is effectively the same type as
micro_benchmark_stats_generic_sample
and it’s meant for
documentation purposes only.
Next: Meter Reference, Up: Data Collection reference [Contents][Index]
Its prototype is void *(*) (void)
.
Its prototype is void (*) (void *)
.
Its prototype is void (*) (void *, size_t,
micro_benchmark_stats_generic_samples)
.
micro_benchmark_custom_sample_create_data_fun
create_data, micro_benchmark_custom_sample_release_data_fun
release_data, micro_benchmark_custom_sample_collector_fun
parse_samples, micro_benchmark_report_extractor_fun
get_data } ¶Custom data collection.
Its prototype is micro_benchmark_time_stats_values (*) (size_t,
micro_benchmark_time_samples)
Next: Time Reference, Previous: Custom Data Collection Reference, Up: Data Collection reference [Contents][Index]
A meter is a device that produces measurement samples.
Currently, the only meters provided by the library are chronometers (see Chronometer Reference.)
void *
ptr, const char *
name, micro_benchmark_stats_sample_type
sample_type, micro_benchmark_stats_meter_sample
min_resolution, micro_benchmark_stats_meter_sample
max_resolution } ¶Data of a meter implementation.
ptr
is the instance data that is provided to the meter
functions. sample_type
represents the active union field from
the samples produced by the meter, as well as min_resolution
and max_resolution
fields.
Function to initialize a meter instance. Its prototype is void
(*) (micro_benchmark_meter_data *, const void *)
.
Function to release a meter instance. Its prototype is void (*)
(const micro_benchmark_meter_data *)
.
Start the measuring process of a meter instance. Its prototype is
void (*) (void *)
.
Stop the measuring process of a meter instance. Its prototype is
void (*) (void *)
.
Restart the measuring process of a meter instance. Its prototype is
void (*) (void *)
.
Extract the measure performed by the meter. Its prototype is
micro_benchmark_stats_meter_sample (*) (void *)
.
micro_benchmark_meter_data
data, micro_benchmark_meter_init_fun
init, micro_benchmark_meter_cleanup_fun
cleanup, micro_benchmark_meter_start_fun
start, micro_benchmark_meter_stop_fun
stop, micro_benchmark_meter_restart_fun
restart, micro_benchmark_meter_get_sample_fun
get_sample } ¶Meter definition.
Pointer to a constant micro_benchmark_meter_definition object.
Preconditions:
micro_benchmark_stats_meter_init
or
micro_benchmark_stats_meter_init_with_data
.
micro_benchmark_stats_meter_init
or
micro_benchmark_stats_meter_init_with_data
with a value
equivalent to meter as its parameter has been followed by a call
to micro_benchmark_stats_meter_cleanup
with a value equivalent
to meter as its parameter.
Effects:
meter->init (meter->data, NULL)
.
Postconditions:
Preconditions:
micro_benchmark_stats_meter_init
or
micro_benchmark_stats_meter_init_with_data
.
micro_benchmark_stats_meter_init
or
micro_benchmark_stats_meter_init_with_data
with the value of
meter as its parameter has been followed by a call to
micro_benchmark_stats_meter_cleanup
with the value of
meter as its parameter.
Effects:
meter->init (meter->data, extra_data)
.
Postconditions:
Preconditions:
Effects:
meter->cleanup
.
Postconditions:
Preconditions:
Effects:
meter->start
.
Postconditions:
Preconditions:
Effects:
meter->stop
.
Postconditions:
Preconditions:
Effects:
meter->restart
.
Postconditions:
Preconditions:
Effects:
meter->get_sample
.
Postconditions:
micro_benchmark_meter_start
.
Preconditions:
Effects:
Postconditions:
Preconditions:
Effects:
Postconditions:
Preconditions:
Effects:
Postconditions:
Preconditions:
Effects:
Postconditions:
Next: Chronometer Reference, Previous: Meter Reference, Up: Data Collection reference [Contents][Index]
int32_t
seconds, ¶int32_t
nanoseconds }
Structure used by the framework for time measuring.
MICRO_BENCHMARK_CLOCK_REALTIME
, MICRO_BENCHMARK_CLOCK_MONOTONIC
, MICRO_BENCHMARK_CLOCK_PROCESS
, MICRO_BENCHMARK_CLOCK_THREAD
} ¶Clock type enumeration (see Predefined Clocks.)
bool
discarded, size_t
iterations, micro_benchmark_clock_time
elapsed } ¶Time sample value. This data type contains, at least, the specified
values discarded
, iterations
and elapsed
.
Pointer to a constant value of type
micro_benchmark_time_sample_
.
Pointer to an array of constant values of type
micro_benchmark_time_sample_
values. This type is equivalent
to micro_benchmark_time_sample
and it is used only for
documentation purposes.
MICRO_BENCHMARK_STATS_UNIT_NONE
, MICRO_BENCHMARK_STATS_ITERATIONS
, MICRO_BENCHMARK_STATS_TIME_MIN
, MICRO_BENCHMARK_STATS_TIME_S
, MICRO_BENCHMARK_STATS_TIME_MS
, MICRO_BENCHMARK_STATS_TIME_US
, MICRO_BENCHMARK_STATS_TIME_NS
} ¶Unit of measure.
micro_benchmark_stats_unit
unit, double
mean, double
variance, double
std_deviation } ¶Aggregated statistical values.
size_t
total_samples, size_t
samples, size_t
iterations, micro_benchmark_stats_value
iteration_time, micro_benchmark_stats_value
sample_time, micro_benchmark_stats_value
sample_iterations } ¶Information sampled during a test case execution.
Note: this type is experimental.
Previous: Time Reference, Up: Data Collection reference [Contents][Index]
A chronometer is a specialization of a meter that measures time duration.
MICRO_BENCHMARK_CHRONO_PROVIDER_DEFAULT
, MICRO_BENCHMARK_CHRONO_PROVIDER_TIME_T
, MICRO_BENCHMARK_CHRONO_PROVIDER_CLOCK_T
, MICRO_BENCHMARK_CHRONO_PROVIDER_GETTIMEOFDAY
, MICRO_BENCHMARK_CHRONO_PROVIDER_CLOCK_GETTIME
, MICRO_BENCHMARK_CHRONO_PROVIDER_USER_PROVIDED
} ¶Chronometer type (see Predefined Chronometers.)
Preconditions:
Effects:
Postconditions:
micro_benchmark_chronometer_set_default
was called, the type
of the returned value is equivalent to the last value provided to that
call.
Preconditions:
MICRO_BENCHMARK_CHRONO_PROVIDER_USER_PROVIDED
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_meter_definition
where at least all the
function fields have been filled with a valid function.
Effects:
micro_benchmark_stats_meter_init_with_data (ptr,
clock_type)
Postconditions:
Preconditions:
Effects:
Postconditions:
Preconditions:
micro_benchmark_meter_definition
where at least all the
function fields have been filled with a valid function.
Effects:
Postconditions:
MICRO_BENCHMARK_CHRONO_PROVIDER_DEFAULT
to
micro_benchmark_chronometer_create
will have the same effective
type as the meter provided to the last call of this function.
Preconditions:
Effects:
Postconditions:
MICRO_BENCHMARK_CHRONO_PROVIDER_DEFAULT
.
micro_benchmark_chronometer_set_default
was called, the
returned value is equivalent to the last value provided to that call.
Next: Utilities Reference, Previous: Data Collection reference, Up: C API Reference [Contents][Index]
Timers are devices that trigger after a specified amount of time.
Timer instance data.
Its prototype is void (*) (micro_benchmark_timer_data *,
micro_benchmark_clock_type, const void *)
.
Its prototype is void (*) (const micro_benchmark_timer_data *)
.
Its prototype is void (*) (void *, micro_benchmark_clock_time)
.
Its prototype is void (*) (void *)
.
Its prototype is void (*) (void *)
.
Its prototype is bool (*) (void *)
.
Its prototype is micro_benchmark_clock_time (*) (void *)
.
micro_benchmark_timer_data
data, micro_benchmark_timer_init_fun
init, micro_benchmark_timer_cleanup_fun
release, micro_benchmark_timer_start_fun
start, micro_benchmark_timer_stop_fun
stop, micro_benchmark_timer_restart_fun
restart, micro_benchmark_timer_running_fun
is_running, micro_benchmark_timer_elapsed_fun
elapsed } ¶Timer definition.
Pointer to a timer instance.
Preconditions:
micro_benchmark_timer_init
or
micro_benchmark_timer_init_with_extra
.
micro_benchmark_timer_init
or
micro_benchmark_timer_init_with_extra
with a value equivalent
to timer as its parameter has been followed by a call to
micro_benchmark_timer_cleanup
with a value equivalent to
timer as its parameter.
Effects:
timer->init (timer->data, type, NULL)
.
Postconditions:
Preconditions:
micro_benchmark_timer_init
or
micro_benchmark_timer_init_with_extra
.
micro_benchmark_timer_init
or
micro_benchmark_timer_init_with_extra
with a value equivalent
to timer as its parameter has been followed by a call to
micro_benchmark_timer_cleanup
with a value equivalent to
timer as its parameter.
Effects:
Postconditions:
Preconditions:
Effects:
Postconditions:
Preconditions:
Effects:
Postconditions:
Preconditions:
Effects:
Postconditions:
timer->data.name
.
Preconditions:
Effects:
Postconditions:
timer->data.resolution
.
Preconditions:
Effects:
Postconditions:
Preconditions:
Effects:
Postconditions:
micro_benchmark_timer_restart
or
micro_benchmark_timer_stop
.
Preconditions:
Effects:
micro_benchmark_timer_start
.
Postconditions:
micro_benchmark_timer_stop
was performed.
Preconditions:
Effects:
Postconditions:
false
if the deadline provided to this
timer by micro_benchmark_timer_start
has not expired.
true
otherwise.
Preconditions:
Effects:
Postconditions:
micro_benchmark_timer_start
and
micro_benchmark_timer_stop
, and the following pairs of calls to
micro_benchmark_timer_restart
and
micro_benchmark_timer_stop
, up to deadline.
Preconditions:
micro_benchmark_timer_definition
where at least all the
function fields have been filled with a valid function.
Effects:
Postconditions:
Preconditions:
MICRO_BENCHMARK_TIMER_PROVIDER_USER_PROVIDED
.
Effects:
Postconditions:
Preconditions:
Effects:
Postconditions:
micro_benchmark_timer_create_default
and the ones created by
providing the type MICRO_BENCHMARK_TIMER_PROVIDER_DEFAULT
on
micro_benchmark_timer_create
.
micro_benchmark_timer_set_default
was called, the returned
value is equivalent to the last value provided to that call.
Up: Timer Reference [Contents][Index]
MICRO_BENCHMARK_TIMER_PROVIDER_DEFAULT
, MICRO_BENCHMARK_TIMER_PROVIDER_ITIMER
, MICRO_BENCHMARK_TIMER_PROVIDER_TIMER_T
, MICRO_BENCHMARK_TIMER_PROVIDER_TIMERFD
, MICRO_BENCHMARK_TIMER_PROVIDER_FROM_METER
, MICRO_BENCHMARK_TIMER_PROVIDER_USER_PROVIDED
} ¶Timer type (see Predefined Timers.)
Preconditions:
Effects:
Postconditions:
micro_benchmark_timer_set_default
was called, the type of
the returned value is equivalent to the last value provided to that
call.
Preconditions:
MICRO_BENCHMARK_TIMER_PROVIDER_USER_PROVIDED
.
MICRO_BENCHMARK_TIMER_PROVIDER_FROM_METER
.
Effects:
Postconditions:
Preconditions:
Effects:
Postconditions:
Preconditions:
micro_benchmark_meter_definition
where at least all the
function fields have been filled with a valid function.
Effects:
Postconditions:
Preconditions:
micro_benchmark_timer_definition
where at least all the
function fields have been filled with a valid function.
Effects:
Postconditions:
Preconditions:
Effects:
Postconditions:
Previous: Timer Reference, Up: C API Reference [Contents][Index]
The following function can be used to provide a custom error handler, executed by the library whenever any postcondition, implicit or explicit, cannot be satisfied:
Preconditions:
Effects:
Postconditions:
The following utilities make it quick and easy to write a test with MicroBenchmark.
Preconditions:
micro_benchmark_test_fun
.
Effects:
#fun
as the test registration name.
Postconditions:
micro_benchmark_main
unless its
execution is disabled by its parameters or the provided test
constraints.
Preconditions:
micro_benchmark_set_up_fun
.
micro_benchmark_test_fun
.
micro_benchmark_set_up_fun
.
Effects:
#fun
as the test registration name.
Postconditions:
micro_benchmark_main
unless its
execution is disabled by its parameters or the provided test
constraints.
Preconditions:
const char *
, and
points to a valid, zero-ended array of characters.
micro_benchmark_set_up_fun
.
micro_benchmark_test_fun
.
micro_benchmark_set_up_fun
.
Effects:
Postconditions:
micro_benchmark_main
unless its
execution is disabled by its parameters or the provided test
constraints.
Preconditions:
micro_benchmark_auto_test_fun
.
Effects:
#fun
as the test registration name.
Postconditions:
micro_benchmark_main
unless its
execution is disabled by its parameters or the provided test
constraints.
Preconditions:
micro_benchmark_set_up_fun
.
micro_benchmark_auto_test_fun
.
micro_benchmark_set_up_fun
.
Effects:
#fun
as the test registration name.
Postconditions:
micro_benchmark_main
unless its
execution is disabled by its parameters or the provided test
constraints.
Preconditions:
const char *
, and
points to a valid, zero-ended array of characters.
micro_benchmark_set_up_fun
.
micro_benchmark_auto_test_fun
.
micro_benchmark_set_up_fun
.
Effects:
Postconditions:
micro_benchmark_main
unless its
execution is disabled by its parameters or the provided test
constraints.
Its prototype is void (*) (micro_benchmark_test_case)
.
Preconditions:
const char *
, and
points to a valid, zero-ended array of characters.
micro_benchmark_static_constraint
.
Effects:
Postconditions:
micro_benchmark_main
, if a test with that name has
been registered.
Preconditions:
int main (int, char**)
is not defined elsewhere on the binary.
Effects:
int main (int, char **)
.
Postconditions:
micro_benchmark_main
is called with the provided command line arguments and its value is returned from the main function.
Preconditions:
char *
.
NULL
or points to a
valid, readable, zero-ended array of characters.
Effects:
--help
, --version
or an unrecognized parameter, call exit
with the corresponding
value.
Postconditions:
EXIT_SUCCESS
.
The following functions are used to implement the registration macros:
Preconditions:
Effects:
Postconditions:
micro_benchmark_main
will execute test unless it is
disabled through its parameters or the constraints associated to
name.
Preconditions:
Effects:
Postconditions:
micro_benchmark_main
will call constraint if a test was
registered as name as its parameter, with the test case as its
parameter.
NOTE: The following functions are executed automatically by the linker on all known configurations. Do not call them manually.
Preconditions:
micro_benchmark_init
has not been called before.
micro_benchmark_init
has been followed by a
corresponding call to micro_benchmark_cleanup
.
Effects:
Postconditions:
Preconditions:
micro_benchmark_init
has been called once.
micro_benchmark_init
except the last one has been
followed by a corresponding call to micro_benchmark_cleanup
.
Effects:
Postconditions:
micro_benchmark_main
anymore.
Next: Optimization Utilities Reference, Up: Utilities Reference [Contents][Index]
The library produces messages at different levels of verbosity.
MICRO_BENCHMARK_ERROR_LEVEL
, MICRO_BENCHMARK_WARNING_LEVEL
, MICRO_BENCHMARK_INFO_LEVEL
, MICRO_BENCHMARK_DEBUG_LEVEL
, MICRO_BENCHMARK_TRACE_LEVEL
} ¶Levels used by the library (see Configuration).
The following functions control the logging output generated by the library:
Preconditions:
FILE
.
Effects:
Postconditions:
micro_benchmark_log_set_output
or
micro_benchmark_log_reset_output
.
Preconditions:
Effects:
micro_benchmark_log_set_output
was called, stop using the
provided FILE *
.
Postconditions:
micro_benchmark_log_set_output
.
Preconditions:
Effects:
Postconditions:
Preconditions:
Effects:
Postconditions:
Previous: Library Log Reference, Up: Utilities Reference [Contents][Index]
The following macros are meant to avoid certain optimizations the compiler might perform on your test code.
Preconditions:
Effects:
Postconditions:
Preconditions:
Effects:
Postconditions:
Preconditions:
Effects:
Postconditions:
Next: Guile API Reference, Previous: C API Reference, Up: MicroBenchmark [Contents][Index]
The library namespace is micro_benchmark
. The following
definitions omit this namespace for brevity. MicroBenchmark uses
east const for its C++ code and places all the type specifiers
together.
All functions require that the library has been loaded, properly initialized, and stays loaded for the duration of all calls. This initialization is usually performed by the static initialization sequence of the binary.
Next: C++ Test Reference, Up: C++ API Reference [Contents][Index]
Main driver of the execution, C++ representation of the C type
micro_benchmark_suite
(see Suite Reference.)
char const*
name) ¶Preconditions:
Effects:
Postconditions:
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
Internal type for the registration of a constraint function together
with the test. The provided value
micro_benchmark::with_constraints
has this type.
Note: The following interface is going to change, to allow
member function pointers and current register_class
functionality.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
test_case::name
is equivalent to name.
suite::run
unless its
execution is disabled.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
test_case::name
is equivalent to name.
suite::run
unless its
execution is disabled by the constraints function or later
modifications of the test_case handle.
Preconditions:
*this
is a valid object.
operator()
or
operator() const
with the right signature.
Effects:
test
in the suite with name.
T
has a valid static member constraints
, it is
called with the defined test as its parameter.
Postconditions:
test_case::name
is equivalent to name.
suite::run
unless its
execution is disabled by the constraints function or later
modifications of the test_case handle.
T
has a valid instance member set_up
, it is
called before the test execution.
T
has a valid instance member tear_down
, it is
called after the test execution.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
Preconditions:
*this
is a valid object.
this
->run
has not been called.
Effects:
Postconditions:
Preconditions:
*this
is a valid object.
this
->run
has been called.
Effects:
Postconditions:
Next: C++ State Reference, Previous: C++ Suite Reference, Up: C++ API Reference [Contents][Index]
Test case handle.
The objects returned by suite::register_test
,
suite::register_class
are in a valid state, as well as
the objects provided by the library to a constraint function through
its parameter.
Preconditions:
Effects:
Postconditions:
Preconditions:
Effects:
*this
is initialized with the state of other.
Postconditions:
Preconditions:
Effects:
*this
is the state of other.
Postconditions:
The following methods modify the constraints applied to the test execution.
Preconditions:
*this
is in a valid state.
Effects:
Postconditions:
*this
is increased by one.
Preconditions:
*this
is in a valid state.
Effects:
Postconditions:
*this
is increased by one.
Preconditions:
*this
is in a valid state.
Effects:
Postconditions:
*this
is increased by one.
Preconditions:
*this
is in a valid state.
Effects:
Postconditions:
*this
is increased by one.
Preconditions:
*this
is in a valid state.
Effects:
Postconditions:
this
.
Preconditions:
*this
is in a valid state.
*this
.
Effects:
Postconditions:
this
.
Preconditions:
*this
is in a valid state.
Effects:
Postconditions:
*this
will perform it iterations before it starts taking measurements.
Preconditions:
*this
is in a valid state.
Effects:
Postconditions:
*this
will perform before starting to take measurements.
The following functions control the number of iterations performed by the code under test.
Preconditions:
*this
is in a valid state.
Effects:
Postconditions:
*this
will perform at least min
iterations during the measurements.
*this
will perform at most max
iterations during the measurements.
Preconditions:
*this
is in a valid state.
Effects:
Postconditions:
*this
will perform while taking
measurements.
Preconditions:
*this
is in a valid state.
Effects:
Postconditions:
*this
will perform while taking
measurements.
The following functions control the iterations performed on each measurement sample taken.
Preconditions:
*this
is in a valid state.
Effects:
Postconditions:
*this
will perform at least min
iterations each measurement sample.
*this
will perform at most max
iterations each measurement sample.
Preconditions:
*this
is in a valid state.
Effects:
Postconditions:
*this
.
Preconditions:
*this
is in a valid state.
Effects:
Postconditions:
*this
.
The following functions limit the test time:
Preconditions:
*this
is in a valid state.
Effects:
Postconditions:
*this
will have the provided time as
its time limit.
Preconditions:
*this
is in a valid state.
Effects:
Postconditions:
*this
will have the provided time as
its time limit.
Preconditions:
*this
is in a valid state.
Effects:
Postconditions:
*this
.
Next: C++ Report Reference, Previous: C++ Test Reference, Up: C++ API Reference [Contents][Index]
The following type represents the state of a test execution.
An instance of the following type is provided to the set-up and tear-down functions when they are provided to the test. An instance of this type is provided to directed tests as well.
Handle of the execution state. The lifetime of these objects is bounded to the function call that provided it. No public constructor is available for this type.
Preconditions:
*this
is in a valid state.
Effects:
Postconditions:
*this
.
this->set_name
was called, the returned value is
equivalent to the value provided to the function call in effect.
Preconditions:
*this
is in a valid state.
Effects:
Postconditions:
*this
.
Preconditions:
*this
is in a valid state.
Effects:
Postconditions:
true
if the test should run at least one
iteration more.
Preconditions:
*this
is in a valid state.
Effects:
Postconditions:
Next: C++ Utilities Reference, Previous: C++ State Reference, Up: C++ API Reference [Contents][Index]
The following sections describe the reports generated by the execution of the tests (see C++ Test Reference) registered at a suite object (see C++ Suite Reference) after running that suite.
Next: C++ Test Report Reference, Up: C++ Report Reference [Contents][Index]
Valid objects of this type represent the report generated by the
execution of a suite
object.
Preconditions:
Effects:
Postconditions:
*this
are the assignment and the
destruction.
noexcept
¶noexcept
¶Preconditions:
Effects:
*this
.
*this
.
Postconditions:
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
.
The following functions allow iteration over the reports. You may write in your code:
report rep = ...;
for (auto const& test : rep)
  do_something_with_test (test);
for (auto const& e : rep.executions ())
  do_something_with_execution (e);
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
true
if the number of executed tests was
zero.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
in order of execution.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
in reverse order of execution.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
in the order of execution.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
in reverse order of execution.
The following functions print the report with the default format (see Report Output.)
Preconditions:
*this
is a valid object.
Effects:
*this
is printed to the standard
output.
io::default_output_values
, are emitted.
Postconditions:
Preconditions:
*this
is a valid object.
Effects:
*this
is printed to the stream
associated to out.
io::default_output_values
, are emitted.
Postconditions:
Preconditions:
*this
is a valid object.
Effects:
*this
is printed to the stream
associated to out.
Postconditions:
Note: this function can only be called through argument-dependent lookup.
The following overloads emit customized output.
Preconditions:
*this
is a valid object.
Effects:
*this
is printed to the standard
output.
Postconditions:
Preconditions:
*this
is a valid object.
Effects:
*this
is printed to the stream
associated to out.
Postconditions:
The following functions print the report using the format provided as its parameter (see Report Output.)
Preconditions:
*this
is a valid object.
Effects:
*this
is printed to the standard
output.
io::default_output_values
, are emitted.
Postconditions:
Preconditions:
*this
is a valid object.
Effects:
*this
is printed to the stream
associated to out.
io::default_output_values
, are emitted.
Postconditions:
Preconditions:
*this
is a valid object.
Effects:
*this
is printed to the stream
associated to out.
Postconditions:
The following method allows customization of the output obtained by
operator<<
. It can be used as in the following example:
using micro_benchmark::io::output_type;
report rep = ...;
// Print the lisp report to std::cout
std::cout << rep.with_args (output_type::lisp);
...
Preconditions:
*this
is a valid object.
this->print (std::declval<std::ostream>, args...)
is a valid
expression.
Effects:
Postconditions:
this
. Its validity
is bounded to the validity of this
.
Preconditions:
report::with_args
call.
Effects:
ptr->print (out, args...)
where ptr
is the
report referenced by report and args...
were the
arguments provided to the call report::with_args
.
Postconditions:
Next: C++ Exec Report Reference, Previous: C++ Suite Report Reference, Up: C++ Report Reference [Contents][Index]
Valid objects of this type represent the report generated by all
executions of a test_case
object.
Preconditions:
Effects:
Postconditions:
*this
are the assignment and the
destruction.
noexcept
¶noexcept
¶Preconditions:
Effects:
*this
.
*this
if other
is valid.
Postconditions:
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
.
The following functions allow iteration over the test reports. You may write in your code, rep being an object of type test_report:
test_report rep = ...;
for (auto const& e : rep)
  do_something_with_execution (e);
test_report::begin
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
true
if the number of executions
associated to *this
was zero.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
in the order of execution.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
in reverse order of execution.
Next: C++ Report Output Reference, Previous: C++ Test Report Reference, Up: C++ Report Reference [Contents][Index]
Valid objects of this type represent the report generated by a single
execution of a test_case
object.
Preconditions:
Effects:
Postconditions:
*this
are the assignment and the
destruction.
noexcept
¶noexcept
¶Preconditions:
Effects:
*this
.
*this
if other
is valid.
Postconditions:
Other helper types are:
This structure contains two public members—seconds
and
nanoseconds
—and a function template to<T>()
to convert
the result using std::chrono::duration_cast
.
Valid objects of this type represent the report generated by a single
execution of a test_case
object.
Internal template. Has two members: to_unit
which returns its
value in std::chrono::nanoseconds
and value
which
returns the underlying double
value.
Internal template. Has two members: to_unit
which returns its
value in std::size_t
and value
which returns the
underlying double
value.
Internal template. Has three <iteration-unit>
or
<time-unit>
fields: mean
,
std_deviation
and variance
.
Internal alias of a collection of time samples.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
state::set_name
was called during the execution associated
to *this
, the returned value is equivalent to the last
value provided during that execution.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
state::sizes
during the execution associated to *this
.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
.
Preconditions:
*this
is a valid object.
Effects:
Postconditions:
*this
.
Previous: C++ Exec Report Reference, Up: C++ Report Reference [Contents][Index]
This enumeration represents the output format (see Report Output.) The following values are defined by this enumeration class:
console
lisp
text
This enumeration represents the print options for statistical values (see Report Output.) The following values are defined by this enumeration class:
none
mean
variance
std_deviation
Customized output.
Default output values.
Preconditions:
Effects:
Postconditions:
Previous: C++ Report Reference, Up: C++ API Reference [Contents][Index]
Declare int main (int, char **), call micro_benchmark::main and return its value.
Preconditions: char *.
Effects: On --help, --version or an unrecognized parameter, call exit with the corresponding value.
Postconditions: EXIT_SUCCESS.
Up: C++ Utilities Reference [Contents][Index]
The following functions detail the optimization-defeating functionality (see Optimizer Tips).
Preconditions:
Effects:
Postconditions:
Preconditions:
Effects:
Postconditions:
Next: Contributing, Previous: C++ API Reference, Up: MicroBenchmark [Contents][Index]
To use MicroBenchmark, you only need to import the main module, (mbenchmark). It exports all the library functionality. The following sections specify the different parts that compose it.
Next: Guile Test Reference, Up: Guile API Reference [Contents][Index]
The suite functionality is provided by the module (mbenchmark suite). The following names are provided:
This type represents a test suite. Its starting state is empty. It encapsulates micro_benchmark_suite underneath and controls its lifetime. Note that this name is not exported by the module; the name <suite> is only for reference.
Preconditions: (string? name) is #t.
Effects:
Postconditions: suite? returns #t on the returned value.
Preconditions:
Effects: check whether object is of type <suite>.
Postconditions: #t if object is of type <suite>.
Preconditions: suite was returned by make-suite.
Effects:
Postconditions:
Preconditions: suite was returned by make-suite.
Effects:
Postconditions:
Preconditions: suite was returned by make-suite. run-suite has not been called on suite.
Effects:
Postconditions:
Preconditions: suite was returned by make-suite. run-suite has been called on suite.
Effects:
Postconditions:
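Taken together, the entries above suggest the following minimal suite lifecycle. This is only a sketch built from the names documented in this section (make-suite, suite?, run-suite, get-report); the exact calling conventions should be checked against the full entries.

```scheme
;; Minimal suite lifecycle sketch; calling conventions are assumptions.
(use-modules (mbenchmark suite))

(define suite (make-suite "my-suite"))  ; (string? "my-suite") is #t
(suite? suite)                          ; => #t
;; ... register test cases with add-test! here ...
(run-suite suite)                       ; run-suite must not have been
                                        ; called on suite before
(define report (get-report suite))      ; meaningful once run-suite ran
```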
Next: Guile Report Reference, Previous: Guile Suite Reference, Up: Guile API Reference [Contents][Index]
In Scheme, the test registration functions are the constructors of test cases. This organization eases the implementation and seems to make sense too. Let us know your opinion at <bug-mbenchmark@nongnu.org>.
Next: Guile Test State Reference, Up: Guile Test Reference [Contents][Index]
This type represents a test case. It encapsulates micro_benchmark_test_case underneath and a reference to the suite to control its lifetime. Note that this name is not exported by the module, but objects of this type are returned by the following function.
Preconditions: suite was returned by make-suite. (string? name) is #t.
If #:set-up is provided, (set-up <state-object>) is a valid call. If it isn't provided, the value returned by the default set-up function is (get-sizes <state-object>).
If #:tear-down is provided, (tear-down <state-object> (set-up <state-object>)) is a valid call.
If #:direct is provided and it isn't #f, (apply test <state-object2> (set-up <state-object1>)), being test the value provided through #:test, must be a valid call.
If #:direct is not provided or it is #f, (apply test (set-up <state-object>)), being test the value provided through #:test, is a valid call.
If #:dimensions is provided, its value is a list of lists of non-negative integers.
If #:skip-iterations!, #:min-iterations, #:max-iterations, #:min-sample-iterations or #:max-sample-iterations is provided, its value is a non-negative integer.
If #:max-time is provided, its value is either a non-negative integer or a pair of non-negative integers.
Effects: registers a test case named name.
Postconditions: test-case? returns #t on the returned value.
If #:dimensions was provided, the returned test has the value provided as its size constraints.
If #:skip-iterations! was provided, the returned test will perform this number of iterations before it starts taking measurements.
If #:min-iterations or #:max-iterations was provided, the returned test will perform a number of iterations in the range [min, max], being min the value provided to #:min-iterations or zero, and max the value provided to #:max-iterations or an implementation upper limit.
If #:min-sample-iterations or #:max-sample-iterations was provided, the returned test will perform a number of iterations per measurement sample in the range [min, max], being min the value provided to #:min-sample-iterations or zero, and max the value provided to #:max-sample-iterations or an implementation upper limit.
If #:max-time was provided, its value will be used as the time limit of the registered test.
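A registration call following the preconditions above might look as follows. The keyword names come from this entry; the positional argument order (suite, then name) is an assumption.

```scheme
;; Hypothetical add-test! call; argument order is an assumption.
(define suite (make-suite "containers"))
(define test
  (add-test! suite "vector-ref"
             #:test (lambda args          ; applied to the set-up result
                      (vector-ref (make-vector 1024 0) 512))
             #:min-iterations 100
             #:max-iterations 10000
             #:max-time 2))               ; or a (seconds . milliseconds) pair
(test-case? test)                         ; => #t
```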
Preconditions:
Effects: check whether object is of type <test-case>.
Postconditions: #t if object is of type <test-case>.
Preconditions: test was returned by add-test!.
If #:skip-iterations!, #:min-iterations, #:max-iterations, #:min-sample-iterations or #:max-sample-iterations is provided, its value is a non-negative integer.
Effects:
If #:skip-iterations! was provided, any previous value of iterations to skip stops taking effect.
If #:min-iterations or #:max-iterations was provided, any previous iteration limit value stops taking effect.
If #:min-sample-iterations or #:max-sample-iterations was provided, any previous sample iteration limit value stops taking effect.
Postconditions:
If #:skip-iterations! was provided, this will be the number of iterations performed on the test code represented by test before taking measurements.
If #:min-iterations or #:max-iterations was provided, the number of iterations performed by test will be in [min, max], being min the value provided to #:min-iterations or zero, and max the value provided to #:max-iterations or an implementation upper limit.
If #:min-sample-iterations or #:max-sample-iterations was provided, the number of iterations per measurement sample performed by test will be in [min, max], being min the value provided to #:min-sample-iterations or zero, and max the value provided to #:max-sample-iterations or an implementation upper limit.
Preconditions: test was returned by add-test!.
Effects:
Postconditions:
Preconditions: test was returned by add-test!.
Effects:
Postconditions:
Preconditions: test was returned by add-test!.
Effects:
Postconditions:
Preconditions: test was returned by add-test!.
Effects:
Postconditions:
Preconditions: test was returned by add-test!. If #:min or #:max is provided, its value must be a non-negative integer.
Effects:
Postconditions: min is the value provided to #:min or zero, and max the value provided to #:max or an implementation upper limit.
Preconditions: test was returned by add-test!.
Effects:
Postconditions:
Preconditions: test was returned by add-test!.
Effects:
Postconditions:
Preconditions: test was returned by add-test!. If #:min or #:max is provided, its value must be a non-negative integer.
Effects:
Postconditions: min is the value provided to #:min or zero, and max the value provided to #:max or an implementation upper limit.
Preconditions: test was returned by add-test!.
Effects:
Postconditions:
Preconditions: test was returned by add-test!.
Effects:
Postconditions:
Preconditions: test was returned by add-test!.
Effects: the car value is used as the number of seconds of the time limit and the cdr value as the number of milliseconds.
Postconditions:
Preconditions: test was returned by add-test!.
Effects:
Postconditions:
The following function does not return a <test-case> object. The tests registered by this function are executed by the main function (see Guile Utilities Reference.)
Preconditions: (string? name) is #t.
If #:set-up is provided, (set-up <state-object>) is a valid call. If it isn't provided, the value returned by the default set-up function is (get-sizes <state-object>).
If #:tear-down is provided, (tear-down <state-object> (set-up <state-object>)) is a valid call.
If #:direct is provided and it isn't #f, (apply test <state-object2> (set-up <state-object1>)), being test the value provided through #:test, must be a valid call.
If #:direct is not provided or it is #f, (apply test (set-up <state-object>)), being test the value provided through #:test, is a valid call.
If #:dimensions is provided, it must be a list of lists of non-negative integers.
If #:skip-iterations!, #:min-iterations, #:max-iterations, #:min-sample-iterations or #:max-sample-iterations is provided, its value is a non-negative integer.
If #:max-time is provided, its value is either a non-negative integer or a pair of non-negative integers.
Effects: registers a test named name.
Postconditions: the registered test will be executed by main.
Previous: Guile Test Case Reference, Up: Guile Test Reference [Contents][Index]
This type represents a test execution state. It encapsulates micro_benchmark_test_state underneath. Note that this name is not exported by the module; the name <state> is only for reference.
Objects of this type are only provided through the test case functions. These objects must not escape their calling scope; otherwise they are not considered valid.
These objects provide two predicates:
Preconditions:
Effects: check whether object is of type <state>.
Postconditions: #t if object is a state object provided to a test-related function.
Preconditions:
Effects:
Postconditions: #t if the test should perform at least one more iteration.
The following functions are available for objects of this type.
Preconditions:
Effects:
Postconditions:
Preconditions:
Effects:
Postconditions:
Preconditions:
Effects:
Postconditions:
Next: Guile Utilities Reference, Previous: Guile Test Reference, Up: Guile API Reference [Contents][Index]
The following sections describe the report generated by a test suite (see Guile Suite Report Reference), its output functionality (see Guile Report Output Reference) and the detailed report provided for each execution (see Guile Exec Report Reference).
Note there is no equivalent to the test case report of C.
Next: Guile Exec Report Reference, Up: Guile Report Reference [Contents][Index]
This type represents a suite report. It encapsulates micro_benchmark_report underneath and a reference to the suite to control its lifetime. Note that this name is not exported by the module, but objects of this type are returned by the following function.
Preconditions: suite was returned by make-suite.
Effects:
Postconditions:
Preconditions:
Effects: check whether object was returned by get-report.
Postconditions: #t if object was returned by get-report.
Preconditions: report was returned by get-report.
Effects:
Postconditions:
The following functions allow iterating over a report object, traversing the test execution reports contained in it:
Preconditions: report was returned by get-report.
Effects:
If #:self-test was provided and true, or not provided at all, call fun once for each test execution report contained in report, including the self test of the suite.
If #:self-test was provided as false, call fun once for each test execution report contained in report, excluding the self test of the suite.
Each call has the form (fun exr), where exr is an <exec-report> object.
Postconditions:
Preconditions: report was returned by get-report.
Effects:
If #:self-test was provided and true, or not provided at all, call fun once for each test execution report contained in report, including the self test of the suite.
If #:self-test was provided as false, call fun once for each test execution report contained in report, excluding the self test of the suite.
Each call has the form (fun exr ip), where exr is an <exec-report> object and ip is init in the first call and the value returned by the last fun call the successive times.
Postconditions:
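A traversal of a finished suite's report might be sketched as follows. The names for-each-report and fold-reports come from the entries above; the argument order shown here is an assumption to be checked against the full definitions.

```scheme
;; Sketch: traverse the execution reports of a finished suite.
(define suite (make-suite "demo"))
;; ... add-test! calls here, then:
(run-suite suite)
(define report (get-report suite))

;; Print every execution report, skipping the suite's self test.
(for-each-report (lambda (exr) (display exr) (newline))
                 report #:self-test #f)

;; Count all execution reports, self test included.
(define n-reports
  (fold-reports (lambda (exr ip) (+ ip 1)) 0 report))
```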
Next: Guile Report Output Reference, Previous: Guile Suite Report Reference, Up: Guile Report Reference [Contents][Index]
This type represents a test execution report. It encapsulates micro_benchmark_exec_report underneath and a reference to the suite to control its lifetime. Note that this name is not exported by the module, but objects of this type can be accessed through a <report>.
Preconditions:
Effects: check whether object is of type <exec-report>.
Postconditions: #t if object was obtained from a <report> object.
Preconditions: the object was provided by for-each-report or fold-reports.
Effects:
Postconditions:
Preconditions: the object was provided by for-each-report or fold-reports.
Effects:
Postconditions:
Preconditions: the object was provided by for-each-report or fold-reports.
Effects:
Postconditions:
Preconditions: the object was provided by for-each-report or fold-reports.
Effects:
Postconditions:
Preconditions: the object was provided by for-each-report or fold-reports.
Effects:
Postconditions:
Preconditions: the object was provided by for-each-report or fold-reports.
Effects:
Postconditions:
Preconditions: the object was provided by for-each-report or fold-reports.
Effects:
Postconditions:
Preconditions: the object was provided by for-each-report or fold-reports.
Effects:
Postconditions:
Preconditions: the object was provided by for-each-report or fold-reports.
Effects:
Postconditions:
Preconditions: the object was provided by for-each-report or fold-reports.
Effects:
Postconditions:
Preconditions: the object was provided by for-each-report or fold-reports.
Effects:
Postconditions:
Preconditions: the object was provided by for-each-report or fold-reports.
Effects:
Postconditions: <time-sample> objects (see Guile Time Utilities Reference) which contain the data gathered during the test execution associated to report.
Previous: Guile Exec Report Reference, Up: Guile Report Reference [Contents][Index]
Representation of the console format.
Representation of the S-expression format.
Representation of the text format.
Print only the mean value.
Print only the variance value.
Print only the standard deviation value.
Print the mean and standard deviation values.
Print the mean, variance and standard deviation values.
Preconditions: report was returned by get-report.
If #:type is provided, its value is one of the output/ values defined in this section.
If #:iteration-time, #:sample-time or #:sample-iterations is provided, its value is one of the stats/ values defined in this section.
If #:self-test, #:size-constraints, #:total-time, #:total-iterations, #:samples or #:extra-data is provided, its value is either #t or #f.
Effects: if #:port is #f, the output is returned as a string. Otherwise, the output is printed to the standard output.
Postconditions: the port provided through #:port has received the full printed report.
Previous: Guile Report Reference, Up: Guile API Reference [Contents][Index]
The following function makes it very easy to write benchmark scripts in Guile, as you only have to use the module (mbenchmark), register your tests and call main:
Preconditions:
Effects: On --help, --version or an unrecognized parameter, call exit with the corresponding value.
Postconditions:
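A benchmark script in the style just described could look like the sketch below. The registration function's real name is elided in this excerpt, so define-test is a placeholder, and the exact signature of main is an assumption.

```scheme
;; Sketch of a complete benchmark script: import (mbenchmark),
;; register tests, and delegate to main.
(use-modules (mbenchmark))

(define-test "string-append"          ; define-test is a hypothetical
  #:test (lambda _                    ; name for the registration
           (string-append "a" "b")))  ; function described above

(main (command-line))                 ; handles --help, --version, etc.
```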
The following functions control the log level of the library:
Preconditions: the value provided is eq? to 'error, 'warn, 'info, 'debug or 'trace.
Effects:
Postconditions:
Preconditions: the value provided is eq? to 'error, 'warn, 'info, 'debug or 'trace.
Effects:
Postconditions:
This type represents the time elapsed on a test.
Preconditions:
Effects: check whether object is of type <elapsed-time>.
Postconditions: #f if object type is not <elapsed-time>.
Preconditions: (elapsed-time? et) is #t.
Effects:
Postconditions:
Preconditions: (elapsed-time? et) is #t.
Effects:
Postconditions:
This type represents a time sample taken during the execution of a test case.
Preconditions:
Effects: check whether object is of type <time-sample>.
Postconditions: #f if object type is not <time-sample>.
Preconditions: (stat-value? stat) is #t.
Effects:
Postconditions:
Preconditions: (time-sample? sample) is #t.
Effects:
Postconditions: <elapsed-time>.
Previous: Guile Time Utilities Reference, Up: Guile Utilities Reference [Contents][Index]
This type represents a statistical value.
Preconditions:
Effects: check whether object is of type <stat-value>.
Postconditions: #f if object type is not <stat-value>.
Preconditions: (stat-value? stat) is #t.
Effects:
Postconditions:
Preconditions: (stat-value? stat) is #t.
Effects:
Postconditions:
Next: GNU Free Documentation License, Previous: Guile API Reference, Up: MicroBenchmark [Contents][Index]
MicroBenchmark is Free Software: you can copy, distribute and/or modify it under the terms of the GNU LGPLv3, or any later version (see GNU Lesser General Public License.) That freedom makes collaboration on equal grounds possible and leverages our collective effort to make better software available for everyone.
As such, contributions to the project are appreciated and encouraged.
There are plenty of things to be done and everybody should be able to contribute. Currently there is not much collective activity, but contact <bug-mbenchmark@nongnu.org> if you are interested in contributing. Your help is very welcome!
The following sections show how to report a bug of MicroBenchmark (see Reporting Bugs), how to send a patch for its inclusion into the repository (see Sending Patches), the coding guidelines followed by the project (see Coding Guidelines) and how the source code is physically structured on the project (see Source Code Organization.)
Next: Sending Patches, Up: Contributing [Contents][Index]
Please send MicroBenchmark bug reports to <bug-mbenchmark@nongnu.org>.
Next: Coding Guidelines, Previous: Reporting Bugs, Up: Contributing [Contents][Index]
Steps to send a patch
The steps 1 to 3 can be performed automatically with the provided script build-aux/build.sh.
Next: Source Code Organization, Previous: Sending Patches, Up: Contributing [Contents][Index]
MicroBenchmark follows the GNU Coding Standards (see GNU Coding Standards) for its source code and build process. The command make indent-code can be used to format the source code with the script build-aux/indent-code.sh.
In addition to them, these are some common ideas of the code base:
Functions with _create as their suffix allocate the object memory, which must be released through the corresponding _release function. Functions with _init and _cleanup work on objects already allocated and must go in pairs too.
The self test empty is a good baseline for checking introduced overheads.
On the other hand, optimizations outside the critical path (i.e. test state code paths) are only worth the effort if the readability is not affected.
Previous: Coding Guidelines, Up: Contributing [Contents][Index]
The source code tree is organized into the following directories:
make run-examples. The output included in this manual can be regenerated by make regen-doc-examples.
m4 macros and pkg-config files.
autoreconf. mbenchmark.m4 contains macros used by MicroBenchmark itself.
Next: GNU General Public License, Previous: Contributing, Up: MicroBenchmark [Contents][Index]
Copyright © 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc. https://fsf.org/ Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
The purpose of this License is to make a manual, textbook, or other functional and useful document free in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.
This License is a kind of “copyleft”, which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.
This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The “Document”, below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as “you”. You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.
A “Modified Version” of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
A “Secondary Section” is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document’s overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.
The “Invariant Sections” are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.
The “Cover Texts” are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.
A “Transparent” copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not “Transparent” is called “Opaque”.
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.
The “Title Page” means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, “Title Page” means the text near the most prominent appearance of the work’s title, preceding the beginning of the body of the text.
The “publisher” means any person or entity that distributes copies of the Document to the public.
A section “Entitled XYZ” means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as “Acknowledgements”, “Dedications”, “Endorsements”, or “History”.) To “Preserve the Title” of such a section when you modify the Document means that it remains a section “Entitled XYZ” according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.
You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document’s license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version’s license notice. These titles must be distinct from any other section titles.
You may add a section Entitled “Endorsements”, provided it contains nothing but endorsements of your Modified Version by various parties—for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled “History” in the various original documents, forming one section Entitled “History”; likewise combine any sections Entitled “Acknowledgements”, and any sections Entitled “Dedications”. You must delete all sections Entitled “Endorsements.”
You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.
A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an “aggregate” if the copyright resulting from the compilation is not used to limit the legal rights of the compilation’s users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document’s Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled “Acknowledgements”, “Dedications”, or “History”, the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.
You may not copy, modify, sublicense, or distribute the Document except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, or distribute it is void, and will automatically terminate your rights under this License.
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, receipt of a copy of some or all of the same material does not give you any rights to use it.
The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See https://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License “or any later version” applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. If the Document specifies that a proxy can decide which future versions of this License can be used, that proxy’s public statement of acceptance of a version permanently authorizes you to choose that version for the Document.
“Massive Multiauthor Collaboration Site” (or “MMC Site”) means any World Wide Web server that publishes copyrightable works and also provides prominent facilities for anybody to edit those works. A public wiki that anybody can edit is an example of such a server. A “Massive Multiauthor Collaboration” (or “MMC”) contained in the site means any set of copyrightable works thus published on the MMC site.
“CC-BY-SA” means the Creative Commons Attribution-Share Alike 3.0 license published by Creative Commons Corporation, a not-for-profit corporation with a principal place of business in San Francisco, California, as well as future copyleft versions of that license published by that same organization.
“Incorporate” means to publish or republish a Document, in whole or in part, as part of another Document.
An MMC is “eligible for relicensing” if it is licensed under this License, and if all works that were first published under this License somewhere other than this MMC, and subsequently incorporated in whole or in part into the MMC, (1) had no cover texts or invariant sections, and (2) were thus incorporated prior to November 1, 2008.
The operator of an MMC Site may republish an MMC contained in the site under CC-BY-SA on the same site at any time before August 1, 2009, provided the MMC is eligible for relicensing.
To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page:
  Copyright (C)  YEAR  YOUR NAME.
  Permission is granted to copy, distribute and/or modify this document
  under the terms of the GNU Free Documentation License, Version 1.3
  or any later version published by the Free Software Foundation;
  with no Invariant Sections, no Front-Cover Texts, and no Back-Cover
  Texts.  A copy of the license is included in the section entitled ``GNU
  Free Documentation License''.
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with…Texts.” line with this:
with the Invariant Sections being list their titles, with the Front-Cover Texts being list, and with the Back-Cover Texts being list.
If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.
Copyright © 2007 Free Software Foundation, Inc. https://fsf.org/ Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
The GNU General Public License is a free, copyleft license for software and other kinds of works.
The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program—to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.
Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.
For the developers’ and authors’ protection, the GPL clearly explains that there is no warranty for this free software. For both users’ and authors’ sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.
Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users’ freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and modification follow.
“This License” refers to version 3 of the GNU General Public License.
“Copyright” also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.
“The Program” refers to any copyrightable work licensed under this License. Each licensee is addressed as “you”. “Licensees” and “recipients” may be individuals or organizations.
To “modify” a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a “modified version” of the earlier work or a work “based on” the earlier work.
A “covered work” means either the unmodified Program or a work based on the Program.
To “propagate” a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.
To “convey” a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays “Appropriate Legal Notices” to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.
The “source code” for a work means the preferred form of the work for making modifications to it. “Object code” means any non-source form of a work.
A “Standard Interface” means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.
The “System Libraries” of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A “Major Component”, in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.
The “Corresponding Source” for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work’s System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.
The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.
The Corresponding Source for a work in source code form is that same work.
All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.
No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.
When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work’s users, your or third parties’ legal rights to forbid circumvention of technological measures.
You may convey verbatim copies of the Program’s source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.
You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:
A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an “aggregate” if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation’s users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.
You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.
A “User Product” is either (1) a “consumer product”, which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, “normally used” refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.
“Installation Information” for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.
If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).
The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.
“Additional permissions” are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:
All other non-permissive additional terms are considered “further restrictions” within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.
You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.
You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.
Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.
An “entity transaction” is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party’s predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.
A “contributor” is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor’s “contributor version”.
A contributor’s “essential patent claims” are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, “control” includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor’s essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.
In the following three paragraphs, a “patent license” is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To “grant” such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. “Knowingly relying” means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient’s use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.
A patent license is “discriminatory” if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.
Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such.
The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License “or any later version” applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation.
If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy’s public statement of acceptance of a version permanently authorizes you to choose that version for the Program.
Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.
If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the “copyright” line and a pointer to where the full notice is found.
  one line to give the program's name and a brief idea of what it does.
  Copyright (C) year name of author

  This program is free software: you can redistribute it and/or modify
  it under the terms of the GNU General Public License as published by
  the Free Software Foundation, either version 3 of the License, or
  (at your option) any later version.

  This program is distributed in the hope that it will be useful,
  but WITHOUT ANY WARRANTY; without even the implied warranty of
  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
  GNU General Public License for more details.

  You should have received a copy of the GNU General Public License
  along with this program.  If not, see https://www.gnu.org/licenses/.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:
program Copyright (C) year name of author This program comes with ABSOLUTELY NO WARRANTY; for details type ‘show w’. This is free software, and you are welcome to redistribute it under certain conditions; type ‘show c’ for details.
The hypothetical commands ‘show w’ and ‘show c’ should show the appropriate parts of the General Public License. Of course, your program’s commands might be different; for a GUI interface, you would use an “about box”.
You should also get your employer (if you work as a programmer) or school, if any, to sign a “copyright disclaimer” for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see https://www.gnu.org/licenses/.
The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read https://www.gnu.org/licenses/why-not-lgpl.html.
Next: Concept Index, Previous: GNU General Public License, Up: MicroBenchmark [Contents][Index]
Copyright © 2007 Free Software Foundation, Inc. https://fsf.org/ Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates the terms and conditions of version 3 of the GNU General Public License, supplemented by the additional permissions listed below.
As used herein, “this License” refers to version 3 of the GNU Lesser General Public License, and the “GNU GPL” refers to version 3 of the GNU General Public License.
“The Library” refers to a covered work governed by this License, other than an Application or a Combined Work as defined below.
An “Application” is any work that makes use of an interface provided by the Library, but which is not otherwise based on the Library. Defining a subclass of a class defined by the Library is deemed a mode of using an interface provided by the Library.
A “Combined Work” is a work produced by combining or linking an Application with the Library. The particular version of the Library with which the Combined Work was made is also called the “Linked Version”.
The “Minimal Corresponding Source” for a Combined Work means the Corresponding Source for the Combined Work, excluding any source code for portions of the Combined Work that, considered in isolation, are based on the Application, and not on the Linked Version.
The “Corresponding Application Code” for a Combined Work means the object code and/or source code for the Application, including any data and utility programs needed for reproducing the Combined Work from the Application, but excluding the System Libraries of the Combined Work.
You may convey a covered work under sections 3 and 4 of this License without being bound by section 3 of the GNU GPL.
If you modify a copy of the Library, and, in your modifications, a facility refers to a function or data to be supplied by an Application that uses the facility (other than as an argument passed when the facility is invoked), then you may convey a copy of the modified version:
The object code form of an Application may incorporate material from a header file that is part of the Library. You may convey such object code under terms of your choice, provided that, if the incorporated material is not limited to numerical parameters, data structure layouts and accessors, or small macros, inline functions and templates (ten or fewer lines in length), you do both of the following:
You may convey a Combined Work under terms of your choice that, taken together, effectively do not restrict modification of the portions of the Library contained in the Combined Work and reverse engineering for debugging such modifications, if you also do each of the following:
You may place library facilities that are a work based on the Library side by side in a single library together with other library facilities that are not Applications and are not covered by this License, and convey such a combined library under terms of your choice, if you do both of the following:
The Free Software Foundation may publish revised and/or new versions of the GNU Lesser General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Library as you received it specifies that a certain numbered version of the GNU Lesser General Public License “or any later version” applies to it, you have the option of following the terms and conditions either of that published version or of any later version published by the Free Software Foundation. If the Library as you received it does not specify a version number of the GNU Lesser General Public License, you may choose any version of the GNU Lesser General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide whether future versions of the GNU Lesser General Public License shall apply, that proxy’s public statement of acceptance of any version is permanent authorization for you to choose that version for the Library.
Next: C Programming Index, Previous: GNU Lesser General Public License, Up: MicroBenchmark [Contents][Index]
Jump to: | A B C D E G K L M P R S T |
Next: C++ Programming Index, Previous: Concept Index, Up: MicroBenchmark [Contents][Index]
Jump to: | M |
Next: Guile Programming Index, Previous: C Programming Index, Up: MicroBenchmark [Contents][Index]
Jump to: | < C D E I M O R S T |
Next: Programming Index, Previous: C++ Programming Index, Up: MicroBenchmark [Contents][Index]
Jump to: | < A E F G K M O P R S T |
Previous: Guile Programming Index, Up: MicroBenchmark [Contents][Index]
Jump to: | < A C D E F G I K M O P R S T |
The comparison with an established reference is the main idea of a benchmark.
Currently only llvm-cov version 15.0.7 works with the options provided by the build script. Versions 13.0.1 and 14.0.6 work after removing one --source-prefix option.
This includes using the reference obtained at the set up stage during the test or tear down stages, or the reference obtained at the test stage during tear down.
Contact <bug-mbenchmark@nongnu.org> if you deem this feature useful.
If you are interested, the following links contain more information about Free Software:
indent outputs an undesired format for certain C constructions, such as guard macros. The problematic source lines are surrounded by /* *INDENT-OFF* */ and /* *INDENT-ON* */ comments to prevent their modification. Patches for indent-code.sh that teach indent how to handle these cases are very welcome.
ChangeLog is generated by the build process on git checkouts using gnulib’s gitlog-to-changelog. gitlog-to-changelog and test-driver.scm are downloaded at build time when they are not present.