..  -*- rst -*-

================
SciPy benchmarks
================

Benchmarking SciPy with Airspeed Velocity.


Usage
-----

Airspeed Velocity manages building SciPy and creating Python virtualenvs by
itself, unless told otherwise. Some of the benchmarking features in
``runtests.py`` also tell ASV to use the SciPy compiled by
``runtests.py``. To run the benchmarks, you therefore do not need to install a
development version of SciPy into your current Python environment.

Run a benchmark against the currently checked-out SciPy version (without
recording the result)::

    python runtests.py --bench sparse.Arithmetic

Compare benchmark results with those of another branch::

    python runtests.py --bench-compare master sparse.Arithmetic

Run benchmarks against the system-installed SciPy instead of rebuilding it::

    python runtests.py -n --bench sparse.Arithmetic

Run ASV commands directly (note that, unlike ``runtests.py``, this does not
set the environment variables for ``ccache`` and for disabling BLAS/LAPACK
multi-threading)::

    cd benchmarks
    asv run --skip-existing-commits --steps 10 ALL
    asv publish
    asv preview

More on how to use ``asv`` can be found in the `ASV documentation`_.
Command-line help is available as usual via ``asv --help`` and
``asv run --help``.

.. _ASV documentation: https://asv.readthedocs.io/


Writing benchmarks
------------------

See the `ASV documentation`_ for the basics on how to write benchmarks.

Some things to consider:

- When importing things from SciPy at the top of the benchmark files, do it
  as::

      try:
          from scipy.sparse.linalg import onenormest
      except ImportError:
          pass

  The benchmark files need to be importable also when benchmarking old
  versions of SciPy. The benchmarks themselves don't need any guarding
  against missing features --- only the top-level imports.

- Try to keep the runtime of the benchmark reasonable.

- Use ASV's ``time_`` methods for benchmarking times rather than cooking up
  time measurements via ``time.clock``, even if it requires some juggling when
  writing the benchmark.

- Preparing arrays and other test data should generally be done in the
  ``setup`` method rather than in the ``time_`` methods, so that the
  preparation time is not counted together with the time of the benchmarked
  operation.

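  For example, a minimal sketch of such a benchmark (the class name, array
  sizes, and the ``solve_triangular`` call are illustrative, not an existing
  SciPy benchmark)::

      import numpy as np

      try:
          from scipy.linalg import solve_triangular
      except ImportError:
          pass


      class SolveTriangular:
          def setup(self):
              # Input construction happens here, so it is not included
              # in the measured time.
              rng = np.random.default_rng(1234)
              self.a = np.tril(rng.random((500, 500))) + 500 * np.eye(500)
              self.b = rng.random(500)

          def time_solve_triangular(self):
              # Only the solve itself is timed by ASV.
              solve_triangular(self.a, self.b, lower=True)
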
- Use ``run_monitored`` from ``common.py`` if you need to measure memory usage.

- Benchmark versioning: by default, ``asv`` invalidates old results
  when there is any code change in the benchmark routine or in
  ``setup``/``setup_cache``.

  This can be controlled manually by setting a fixed benchmark version
  number, using the ``version`` attribute. See the `ASV documentation`_
  for details.

  If set manually, the value then needs to be changed by hand whenever old
  results should be invalidated. If you want to preserve previous
  benchmark results for a benchmark that did not previously have a
  manual ``version`` attribute, the automatically computed default
  values can be found in ``results/benchmark.json``.

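  A minimal sketch of pinning the version manually (the class name and the
  value are illustrative only)::

      class SomeBenchmark:
          # Fixed manually; bump this number whenever old results
          # for this benchmark should be invalidated.
          version = 2

          def time_something(self):
              pass
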
- Benchmark attributes such as ``params`` and ``param_names`` must be
  the same regardless of whether some optional features are available or
  whether e.g. ``SCIPY_XSLOW=1`` is set.

  Instead, benchmarks that should not be run can be skipped by raising
  ``NotImplementedError`` in ``setup()``.

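  A sketch of this pattern (the class name, the parameter values, and the
  placeholder workload are illustrative)::

      import os


      class ExpensiveProblem:
          # params/param_names stay the same whether or not
          # SCIPY_XSLOW=1 is set.
          params = [[100, 10000]]
          param_names = ['size']

          def setup(self, size):
              xslow = os.environ.get('SCIPY_XSLOW', '0') == '1'
              if size == 10000 and not xslow:
                  # Skip this case instead of changing ``params``.
                  raise NotImplementedError("set SCIPY_XSLOW=1 to run")

          def time_run(self, size):
              # Placeholder workload; a real benchmark would exercise
              # the SciPy routine of interest here.
              sum(range(size))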