API Reference

Warning

This package is under active development. API changes are very likely.

This package provides an easy way to benchmark several functions for different inputs and to visualize the benchmark results.

To use the full feature set (visualization and post-processing) you need to install the optional dependencies:

  • NumPy
  • pandas
  • matplotlib
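
Example

A minimal sketch of how the package is typically used (the second function is purely illustrative):

    import matplotlib.pyplot as plt
    from simple_benchmark import benchmark

    def sum_loop(lst):
        """A hand-written alternative to the built-in sum()."""
        total = 0
        for item in lst:
            total += item
        return total

    # Keys are the reported values (list sizes), values are the arguments.
    arguments = {size: list(range(size)) for size in [10, 100, 1000, 10000]}

    result = benchmark([sum, sum_loop], arguments, argument_name='list size')
    print(result.to_pandas_dataframe())  # requires pandas
    result.plot()                        # requires matplotlib
    plt.show()
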
simple_benchmark.assert_same_results(funcs, arguments, equality_func)[source]

Asserts that all functions return the same result.

New in version 0.1.0.

Parameters:
  • funcs (iterable of callables) – The functions to check.
  • arguments (dict) – A dictionary whose keys are the reported values (for example an integer representing the list size) and whose values are the arguments for the functions (for example the list itself). In case you want to plot the result it should be sorted and ordered (e.g. a collections.OrderedDict, or a plain dict if you are using Python 3.7 or later).
  • equality_func (callable) – The function that determines if the results are equal. This function should accept two arguments and return a boolean (True if the results should be considered equal, False if not).
Raises:

AssertionError – In case any two results are not equal.
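
Example

A sketch using operator.eq as the equality check (the loop-based function is illustrative):

    import operator
    from simple_benchmark import assert_same_results

    def sum_loop(lst):
        total = 0
        for item in lst:
            total += item
        return total

    arguments = {size: list(range(size)) for size in [10, 100, 1000]}

    # Raises AssertionError if the functions disagree for any argument.
    assert_same_results([sum, sum_loop], arguments, equality_func=operator.eq)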

simple_benchmark.assert_not_mutating_input(funcs, arguments, equality_func, copy_func=<function deepcopy>)[source]

Asserts that none of the functions mutate the arguments.

New in version 0.1.0.

Parameters:
  • funcs (iterable of callables) – The functions to check.
  • arguments (dict) – A dictionary whose keys are the reported values (for example an integer representing the list size) and whose values are the arguments for the functions (for example the list itself). In case you want to plot the result it should be sorted and ordered (e.g. a collections.OrderedDict, or a plain dict if you are using Python 3.7 or later).
  • equality_func (callable) – The function that determines whether the argument was left unchanged. It should accept two arguments (the copy made before the call and the argument after the call) and return a boolean (True if they should be considered equal, i.e. the input was not mutated, False if not).
  • copy_func (callable, optional) – The function that is used to copy the original argument. Default is copy.deepcopy().
Raises:

AssertionError – In case any function mutates its input.

Notes

In case the arguments are MultiArgument instances then the copy_func and the equality_func receive the whole MultiArgument as a single argument and need to handle it appropriately.
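
Example

A sketch using operator.eq as the equality check and the default copy.deepcopy() (the two sort functions are illustrative):

    import operator
    from simple_benchmark import assert_not_mutating_input

    def sort_copy(lst):
        return sorted(lst)   # leaves the input untouched

    def sort_inplace(lst):
        lst.sort()           # mutates the input
        return lst

    arguments = {size: list(range(size, 0, -1)) for size in [10, 100]}

    assert_not_mutating_input([sort_copy], arguments, equality_func=operator.eq)
    # The following would raise an AssertionError because the input is mutated:
    # assert_not_mutating_input([sort_inplace], arguments, equality_func=operator.eq)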

simple_benchmark.benchmark(funcs, arguments, argument_name='', warmups=None, time_per_benchmark=datetime.timedelta(microseconds=100000), function_aliases=None, estimator=<built-in function min>, maximum_time=None)[source]

Create a benchmark suite for different functions and for different arguments.

Parameters:
  • funcs (iterable of callables) – The functions to benchmark.
  • arguments (dict) – A dictionary whose keys are the reported values (for example an integer representing the list size) and whose values are the arguments for the functions (for example the list itself). In case you want to plot the result it should be sorted and ordered (e.g. a collections.OrderedDict, or a plain dict if you are using Python 3.7 or later).
  • argument_name (str, optional) – The name of the reported value. For example if the arguments represent list sizes this could be “size of the list”. Default is an empty string.
  • warmups (None or iterable of callables, optional) – If not None it specifies the callables that need a warmup call before being timed, for example so that caches can be filled or JIT compilers can kick in. Default is None.
  • time_per_benchmark (datetime.timedelta, optional) –

    Each benchmark should take approximately this time. The value is ignored for functions that are either very fast or very slow. Default is 0.1 seconds.

    Changed in version 0.1.0: Now requires a datetime.timedelta instead of a float.

  • function_aliases (None or dict, optional) – If not None it should be a dictionary containing the function as key and the name of the function as value. The value will be used in the final reports and plots. Default is None.
  • estimator (callable, optional) – Each function is called with each argument multiple times and each timing is recorded. The estimator (by default min()) is used to reduce this list of timings to one final value. The minimum is generally a good way to estimate how fast a function can run (see also the discussion in timeit.Timer.repeat()). Default is min().
  • maximum_time (datetime.timedelta or None, optional) –

    If not None it represents the maximum time the first call of a function may take. If that time is exceeded, the benchmark stops evaluating that function from then on. Default is None.

    New in version 0.1.0.

Returns:

benchmark – The result of the benchmarks.

Return type:

BenchmarkResult
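
Example

A sketch that exercises some of the optional parameters (the aliases, sizes and time limits are illustrative; NumPy is only needed for np.sum):

    import datetime
    import numpy as np
    from simple_benchmark import benchmark

    arguments = {size: list(range(size)) for size in [100, 1000, 10000]}

    result = benchmark(
        [sum, np.sum],
        arguments,
        argument_name='list size',
        function_aliases={sum: 'built-in sum', np.sum: 'numpy.sum'},
        time_per_benchmark=datetime.timedelta(seconds=0.2),
        maximum_time=datetime.timedelta(seconds=1),
    )
    print(result.to_pandas_dataframe())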

class simple_benchmark.BenchmarkBuilder(time_per_benchmark=datetime.timedelta(microseconds=100000), estimator=<built-in function min>, maximum_time=None)[source]

A class for building benchmarks by applying decorators to the functions instead of collecting them in an iterable and passing them to benchmark() afterwards.

Parameters:
  • time_per_benchmark (datetime.timedelta, optional) –

    Each benchmark should take approximately this time. The value is ignored for functions that are either very fast or very slow. Default is 0.1 seconds.

    Changed in version 0.1.0: Now requires a datetime.timedelta instead of a float.

  • estimator (callable, optional) – Each function is called with each argument multiple times and each timing is recorded. The estimator (by default min()) is used to reduce this list of timings to one final value. The minimum is generally a good way to estimate how fast a function can run (see also the discussion in timeit.Timer.repeat()). Default is min().
  • maximum_time (datetime.timedelta or None, optional) –

    If not None it represents the maximum time the first call of a function may take. If that time is exceeded, the benchmark stops evaluating that function from then on. Default is None.

    New in version 0.1.0.

See also

benchmark
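
Example

A sketch of creating a builder with non-default settings (the values are illustrative):

    import datetime
    import statistics
    from simple_benchmark import BenchmarkBuilder

    bench = BenchmarkBuilder(
        time_per_benchmark=datetime.timedelta(seconds=0.2),
        estimator=statistics.mean,   # use the mean instead of the default min()
        maximum_time=datetime.timedelta(seconds=2),
    )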

add_arguments(name='')[source]

A decorator factory that returns a decorator that can be used to add a function that produces the x-axis values and the associated test data for the benchmark.

Parameters: name (str, optional) – The label for the x-axis.
Returns: decorator – The decorator that adds the function that produces the x-axis values and the test data to the benchmark.
Return type: callable
Raises: TypeError – In case name is a callable (this typically happens when the decorator factory is applied directly as a decorator instead of being called first).
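
Example

A sketch of an argument provider; the decorated function yields (reported value, argument) pairs (the sizes are illustrative):

    from simple_benchmark import BenchmarkBuilder

    bench = BenchmarkBuilder()

    @bench.add_arguments('list size')
    def provide_lists():
        for exponent in range(2, 10):
            size = 2 ** exponent
            yield size, list(range(size))
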
add_function(warmups=False, alias=None)[source]

A decorator factory that returns a decorator that can be used to add a function to the benchmark.

Parameters:
  • warmups (bool, optional) – If True the function is called once before each benchmark run. Default is False.
  • alias (str or None, optional) – If None then the displayed function name is the name of the function, otherwise the string is used when the function is referred to. Default is None.
Returns:

decorator – The decorator that adds the function to the benchmark.

Return type:

callable

Raises:

TypeError – In case warmups is a callable (this typically happens when the decorator factory is applied directly as a decorator instead of being called first).
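
Example

A sketch of registering two functions, one with a warmup call and an alias (the function bodies are illustrative):

    from simple_benchmark import BenchmarkBuilder

    bench = BenchmarkBuilder()

    @bench.add_function()
    def sum_loop(lst):
        total = 0
        for item in lst:
            total += item
        return total

    @bench.add_function(warmups=True, alias='built-in sum')
    def sum_builtin(lst):
        return sum(lst)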

add_functions(functions)[source]

Add multiple functions to the benchmark.

Parameters: functions (iterable of callables) – The functions to add to the benchmark.
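
Example

A sketch of adding existing callables in one call instead of decorating them:

    from simple_benchmark import BenchmarkBuilder

    bench = BenchmarkBuilder()
    bench.add_functions([sum, min, max])
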
assert_not_mutating_input(equality_func, copy_func=<function deepcopy>)[source]

Asserts that none of the stored functions mutate the arguments.

New in version 0.1.0.

Parameters:
  • equality_func (callable) – The function that determines whether the argument was left unchanged. It should accept two arguments (the copy made before the call and the argument after the call) and return a boolean (True if they should be considered equal, i.e. the input was not mutated, False if not).
  • copy_func (callable, optional) – The function that is used to copy the original argument. Default is copy.deepcopy().
Warns:

UserWarning – In case the instance has no arguments for the functions.

Raises:

AssertionError – In case any function mutates its input.

Notes

In case the arguments are MultiArgument instances then the copy_func and the equality_func receive the whole MultiArgument as a single argument and need to handle it appropriately.

assert_same_results(equality_func)[source]

Asserts that all stored functions return the same result.

New in version 0.1.0.

Parameters: equality_func (callable) – The function that determines if the results are equal. This function should accept two arguments and return a boolean (True if the results should be considered equal, False if not).
Warns: UserWarning – In case the instance has no arguments for the functions.
Raises: AssertionError – In case any two results are not equal.
run()[source]

Starts the benchmark.

Returns:

result – The result of the benchmark.

Return type:

BenchmarkResult

Warns:

UserWarning – In case the instance has no arguments for the functions.

New in version 0.1.0.
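
Example

Continuing the sketches for add_function() and add_arguments() above, assuming functions and arguments have been registered on bench:

    result = bench.run()                 # returns a BenchmarkResult
    print(result.to_pandas_dataframe())  # requires pandas
    result.plot()                        # requires matplotlib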

use_random_arrays_as_arguments(sizes)[source]

Alternative to add_arguments() that provides random arrays of the specified sizes as arguments for the benchmark.

Parameters: sizes (iterable of int) – An iterable containing the sizes for the arrays (should be sorted).
Raises: ImportError – If NumPy isn’t installed.
use_random_lists_as_arguments(sizes)[source]

Alternative to add_arguments() that provides random lists of the specified sizes as arguments for the benchmark.

Parameters: sizes (iterable of int) – An iterable containing the sizes for the lists (should be sorted).
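
Example

A sketch using randomly generated lists as benchmark arguments (the functions and sizes are illustrative):

    from simple_benchmark import BenchmarkBuilder

    bench = BenchmarkBuilder()
    bench.add_functions([sum, min, max])
    # One random list per size is generated and used as the argument for the functions.
    bench.use_random_lists_as_arguments([2 ** exponent for exponent in range(2, 12)])
    result = bench.run()
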
class simple_benchmark.BenchmarkResult(timings, function_aliases, arguments, argument_name)[source]

A class holding a benchmarking result that provides additional printing and plotting functions.

plot(relative_to=None, ax=None)[source]

Plot the benchmarks, either relative or absolute.

Parameters:
  • relative_to (callable or None, optional) – If None it will plot the absolute timings, otherwise it will use the given relative_to function as reference for the timings.
  • ax (matplotlib.axes.Axes or None, optional) – The axes on which to plot. If None plots on the currently active axes.
Raises:

ImportError – If matplotlib isn’t installed.
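
Example

A sketch plotting absolute and relative timings side by side (assumes result is a BenchmarkResult and sum_loop is one of the benchmarked functions):

    import matplotlib.pyplot as plt

    fig, (ax_abs, ax_rel) = plt.subplots(1, 2, figsize=(10, 4))
    result.plot(ax=ax_abs)                        # absolute timings
    result.plot(relative_to=sum_loop, ax=ax_rel)  # timings relative to sum_loop
    plt.show()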

plot_both(relative_to)[source]

Plot both the absolute and the relative timings.

Parameters: relative_to (callable or None) – If None it will plot the absolute timings, otherwise it will use the given relative_to function as reference for the timings.
Raises: ImportError – If matplotlib isn’t installed.
plot_difference_percentage(relative_to, ax=None)[source]

Plot the benchmarks relative to one of the benchmarks with percentages on the y-axis.

Parameters:
  • relative_to (callable) – The benchmarks are plotted relative to the timings of the given function.
  • ax (matplotlib.axes.Axes or None, optional) – The axes on which to plot. If None plots on the currently active axes.
Raises:

ImportError – If matplotlib isn’t installed.

to_pandas_dataframe()[source]

Return the timing results as a pandas DataFrame. This is the preferred way of accessing the timings in tabular form.

Returns: results – The timings as a DataFrame.
Return type: pandas.DataFrame
Warns: UserWarning – In case multiple functions have the same name.
Raises: ImportError – If pandas isn’t installed.
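
Example

A sketch assuming result is a BenchmarkResult from one of the examples above:

    df = result.to_pandas_dataframe()
    print(df)                            # inspect the timings as a table
    df.to_csv('benchmark_timings.csv')   # or post-process them with pandas
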
class simple_benchmark.MultiArgument[source]

A class that behaves like a tuple but signals to the benchmark that its contents should be passed as separate arguments to the function being benchmarked.
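
Example

A sketch of benchmarking a function that takes two arguments; this assumes MultiArgument can be constructed from an iterable just like a plain tuple (the function is illustrative):

    from simple_benchmark import MultiArgument, benchmark

    def zip_sum(a, b):
        return [x + y for x, y in zip(a, b)]

    arguments = {
        size: MultiArgument([list(range(size)), list(range(size))])
        for size in [10, 100, 1000]
    }

    result = benchmark([zip_sum], arguments, argument_name='list size')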