Benchmark Tests

There are benchmark tests for all relevant API functions, located in the benchmark directory.

The benchmark application is similar to that of the tuning system, but it measures the throughput of the RAPP API functions instead of the Compute layer functions. The build system runs all the benchmark tests and creates the output HTML plot in a similar way to how the tuned configurations are built, but without the re-entrancy. The following steps are carried out at the end of, for example, make all:

  1. Check if the file benchmark/benchmarkplot.html exists. If so, we are done.
  2. Compile the benchmark application and pack it, together with the library to be benchmarked, into a self-extracting archive rappbenchmark.run.
  3. If we are cross-compiling, the user must manually run rappbenchmark.run on the target platform. Otherwise it is executed automatically. When finished, it produces the data file benchmarkdata.py.
  4. Run the plotdata.py script to generate the output HTML plot benchmarkplot.html.
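The four steps above can be sketched as shell logic. The file names (benchmarkplot.html, rappbenchmark.run, benchmarkdata.py, plotdata.py) come from the text; the function itself is purely illustrative and not part of the RAPP build system.

```shell
# Illustrative sketch of the end-of-build benchmark steps; not RAPP code.
benchmark_step() {
    builddir=$1
    cross=$2   # "yes" when cross-compiling

    # Step 1: if the plot already exists, there is nothing to do.
    if [ -f "$builddir/benchmark/benchmarkplot.html" ]; then
        echo "plot exists: done"
        return 0
    fi

    # Step 2 happens in the build system: compile the benchmark
    # application and pack it into the self-extracting rappbenchmark.run.

    # Step 3: run locally, or defer to a manual run on the target.
    if [ "$cross" = yes ]; then
        echo "run rappbenchmark.run on the target to get benchmarkdata.py"
        return 0
    fi
    echo "running rappbenchmark.run locally"

    # Step 4: plotdata.py turns benchmarkdata.py into benchmarkplot.html.
    echo "running plotdata.py"
}
```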

After benchmarking, the generated plot file is located in the benchmark directory of the build tree. To make the benchmark results for this platform available to everyone, the plot must be copied to the source directory and/or added to the distribution. A tarball containing the necessary files, suitable for sending to the maintainers, can be created with the make target export-new-archfiles. There is also a make target update-benchmarks (or update-archfiles, which also includes the tune file) that puts the generated files in the local source directory under the correct names.
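As a quick summary of which make target fits which purpose, the mapping below may help. The helper function is hypothetical; only the target names (export-new-archfiles, update-benchmarks, update-archfiles) come from the RAPP build system.

```shell
# Hypothetical helper mapping an intent to the RAPP make target names.
archfiles_target() {
    case $1 in
        export) echo "export-new-archfiles" ;;  # tarball for the maintainers
        bench)  echo "update-benchmarks"    ;;  # benchmark files only
        all)    echo "update-archfiles"     ;;  # benchmark and tune files
        *)      return 1 ;;
    esac
}
```

Usage would then be, for example, make "$(archfiles_target bench)" from the build tree.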



Generated on 1 Jun 2016 for RAPP Compute by doxygen 1.6.1