.. _`cache_provider`:
.. _cache:


Cache: working with cross-testrun state
=======================================



Usage
---------

The plugin provides two command line options to rerun failures from the
last ``pytest`` invocation:

* ``--lf``, ``--last-failed`` - to only re-run the failures.
* ``--ff``, ``--failed-first`` - to run the failures first and then the rest of
  the tests.

For cleanup (usually not needed), the ``--cache-clear`` option removes all
cross-session cache contents ahead of a test run.
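
For example:

.. code-block:: bash

    pytest --lf            # re-run only the failures from the last run
    pytest --ff            # run failures first, then the remaining tests
    pytest --cache-clear   # clear cached state before the run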

Other plugins may access the `config.cache`_ object to set/get
**json encodable** values between ``pytest`` invocations.
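
For instance, a plugin hook could persist a small piece of state from one run
to the next. The sketch below is illustrative only: the
``example_plugin/run_count`` key is a made-up name, though
``config.cache.get``/``config.cache.set`` are the actual API:

.. code-block:: python

    # content of conftest.py - an illustrative sketch, not part of pytest
    def pytest_configure(config):
        # value stored by a previous run; defaults to 0 on a cold cache
        runs = config.cache.get("example_plugin/run_count", 0)
        # store an updated, json-encodable value for the next invocation
        config.cache.set("example_plugin/run_count", runs + 1)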

.. note::

    This plugin is enabled by default, but can be disabled if needed: see
    :ref:`cmdunregister` (the internal name for this plugin is
    ``cacheprovider``).


Rerunning only failures or failures first
-----------------------------------------------

First, let's create 50 test invocations of which only 2 fail:

.. code-block:: python

    # content of test_50.py
    import pytest


    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
            pytest.fail("bad luck")

If you run this for the first time you will see two failures:

.. code-block:: pytest

    $ pytest -q
    .................F.......F........................                   [100%]
    ================================= FAILURES =================================
    _______________________________ test_num[17] _______________________________

    i = 17

        @pytest.mark.parametrize("i", range(50))
        def test_num(i):
            if i in (17, 25):
    >           pytest.fail("bad luck")
    E           Failed: bad luck

    test_50.py:7: Failed
    _______________________________ test_num[25] _______________________________

    i = 25

        @pytest.mark.parametrize("i", range(50))
        def test_num(i):
            if i in (17, 25):
    >           pytest.fail("bad luck")
    E           Failed: bad luck

    test_50.py:7: Failed
    ========================= short test summary info ==========================
    FAILED test_50.py::test_num[17] - Failed: bad luck
    FAILED test_50.py::test_num[25] - Failed: bad luck
    2 failed, 48 passed in 0.12s

If you then run it with ``--lf``:

.. code-block:: pytest

    $ pytest --lf
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
    cachedir: $PYTHON_PREFIX/.pytest_cache
    rootdir: $REGENDOC_TMPDIR
    collected 2 items
    run-last-failure: rerun previous 2 failures

    test_50.py FF                                                        [100%]

    ================================= FAILURES =================================
    _______________________________ test_num[17] _______________________________

    i = 17

        @pytest.mark.parametrize("i", range(50))
        def test_num(i):
            if i in (17, 25):
    >           pytest.fail("bad luck")
    E           Failed: bad luck

    test_50.py:7: Failed
    _______________________________ test_num[25] _______________________________

    i = 25

        @pytest.mark.parametrize("i", range(50))
        def test_num(i):
            if i in (17, 25):
    >           pytest.fail("bad luck")
    E           Failed: bad luck

    test_50.py:7: Failed
    ========================= short test summary info ==========================
    FAILED test_50.py::test_num[17] - Failed: bad luck
    FAILED test_50.py::test_num[25] - Failed: bad luck
    ============================ 2 failed in 0.12s =============================

You have run only the two failing tests from the last run, while the 48 passing
tests have not been run ("deselected").

Now, if you run with the ``--ff`` option, all tests will be run, but the
previous failures will be executed first (as can be seen from the series
of ``FF`` and dots):

.. code-block:: pytest

    $ pytest --ff
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
    cachedir: $PYTHON_PREFIX/.pytest_cache
    rootdir: $REGENDOC_TMPDIR
    collected 50 items
    run-last-failure: rerun previous 2 failures first

    test_50.py FF................................................        [100%]

    ================================= FAILURES =================================
    _______________________________ test_num[17] _______________________________

    i = 17

        @pytest.mark.parametrize("i", range(50))
        def test_num(i):
            if i in (17, 25):
    >           pytest.fail("bad luck")
    E           Failed: bad luck

    test_50.py:7: Failed
    _______________________________ test_num[25] _______________________________

    i = 25

        @pytest.mark.parametrize("i", range(50))
        def test_num(i):
            if i in (17, 25):
    >           pytest.fail("bad luck")
    E           Failed: bad luck

    test_50.py:7: Failed
    ========================= short test summary info ==========================
    FAILED test_50.py::test_num[17] - Failed: bad luck
    FAILED test_50.py::test_num[25] - Failed: bad luck
    ======================= 2 failed, 48 passed in 0.12s =======================

The ``--nf``, ``--new-first`` options run new tests first, followed by the
rest of the tests. In both groups, tests are also sorted by file modification
time, with more recent files coming first.
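
For example:

.. code-block:: bash

    pytest --nf   # run new tests first, then the remaining tests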

Behavior when no tests failed in the last run
---------------------------------------------

When no tests failed in the last run, or when no cached ``lastfailed`` data was
found, ``pytest`` can be configured either to run all of the tests or no tests,
using the ``--last-failed-no-failures`` option, which takes one of the following values:

.. code-block:: bash

    pytest --last-failed --last-failed-no-failures all    # run all tests (default behavior)
    pytest --last-failed --last-failed-no-failures none   # run no tests and exit

.. _`config.cache`:

The new config.cache object
--------------------------------

.. regendoc:wipe

Plugins or conftest.py support code can get a cached value using the
pytest ``config`` object. Here is a basic example plugin that
implements a :ref:`fixture <fixture>` which re-uses previously created state
across pytest invocations:

.. code-block:: python

    # content of test_caching.py
    import pytest


    def expensive_computation():
        print("running expensive computation...")


    @pytest.fixture
    def mydata(request):
        val = request.config.cache.get("example/value", None)
        if val is None:
            expensive_computation()
            val = 42
            request.config.cache.set("example/value", val)
        return val


    def test_function(mydata):
        assert mydata == 23

If you run this command for the first time, you can see the print statement:

.. code-block:: pytest

    $ pytest -q
    F                                                                    [100%]
    ================================= FAILURES =================================
    ______________________________ test_function _______________________________

    mydata = 42

        def test_function(mydata):
    >       assert mydata == 23
    E       assert 42 == 23

    test_caching.py:20: AssertionError
    -------------------------- Captured stdout setup ---------------------------
    running expensive computation...
    ========================= short test summary info ==========================
    FAILED test_caching.py::test_function - assert 42 == 23
    1 failed in 0.12s

If you run it a second time, the value will be retrieved from
the cache and nothing will be printed:

.. code-block:: pytest

    $ pytest -q
    F                                                                    [100%]
    ================================= FAILURES =================================
    ______________________________ test_function _______________________________

    mydata = 42

        def test_function(mydata):
    >       assert mydata == 23
    E       assert 42 == 23

    test_caching.py:20: AssertionError
    ========================= short test summary info ==========================
    FAILED test_caching.py::test_function - assert 42 == 23
    1 failed in 0.12s
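
Each cached key is stored as a small JSON file under the cache directory, so
you can also inspect the stored value directly (this assumes the default
``.pytest_cache`` location and the ``v/`` sub-directory layout the cache
plugin currently uses):

.. code-block:: bash

    cat .pytest_cache/v/example/value   # prints: 42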

See the :fixture:`config.cache fixture <cache>` for more details.
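
The built-in ``cache`` fixture exposes the same object directly to test
functions, so state can also be kept without going through ``request.config``
(a minimal sketch; the ``example/counter`` key is just illustrative):

.. code-block:: python

    # content of test_cache_fixture.py - illustrative sketch
    def test_counter(cache):
        # read the value stored by a previous run; 0 on a cold cache
        count = cache.get("example/counter", 0)
        cache.set("example/counter", count + 1)
        assert count >= 0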


Inspecting Cache content
------------------------

You can always peek at the content of the cache using the
``--cache-show`` command line option:

.. code-block:: pytest

    $ pytest --cache-show
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
    cachedir: $PYTHON_PREFIX/.pytest_cache
    rootdir: $REGENDOC_TMPDIR
    cachedir: $PYTHON_PREFIX/.pytest_cache
    --------------------------- cache values for '*' ---------------------------
    cache/lastfailed contains:
      {'test_50.py::test_num[17]': True,
       'test_50.py::test_num[25]': True,
       'test_assert1.py::test_function': True,
       'test_assert2.py::test_set_comparison': True,
       'test_caching.py::test_function': True,
       'test_foocompare.py::test_compare': True}
    cache/nodeids contains:
      ['test_50.py::test_num[0]',
       'test_50.py::test_num[10]',
       'test_50.py::test_num[11]',
       'test_50.py::test_num[12]',
       'test_50.py::test_num[13]',
       'test_50.py::test_num[14]',
       'test_50.py::test_num[15]',
       'test_50.py::test_num[16]',
       'test_50.py::test_num[17]',
       'test_50.py::test_num[18]',
       'test_50.py::test_num[19]',
       'test_50.py::test_num[1]',
       'test_50.py::test_num[20]',
       'test_50.py::test_num[21]',
       'test_50.py::test_num[22]',
       'test_50.py::test_num[23]',
       'test_50.py::test_num[24]',
       'test_50.py::test_num[25]',
       'test_50.py::test_num[26]',
       'test_50.py::test_num[27]',
       'test_50.py::test_num[28]',
       'test_50.py::test_num[29]',
       'test_50.py::test_num[2]',
       'test_50.py::test_num[30]',
       'test_50.py::test_num[31]',
       'test_50.py::test_num[32]',
       'test_50.py::test_num[33]',
       'test_50.py::test_num[34]',
       'test_50.py::test_num[35]',
       'test_50.py::test_num[36]',
       'test_50.py::test_num[37]',
       'test_50.py::test_num[38]',
       'test_50.py::test_num[39]',
       'test_50.py::test_num[3]',
       'test_50.py::test_num[40]',
       'test_50.py::test_num[41]',
       'test_50.py::test_num[42]',
       'test_50.py::test_num[43]',
       'test_50.py::test_num[44]',
       'test_50.py::test_num[45]',
       'test_50.py::test_num[46]',
       'test_50.py::test_num[47]',
       'test_50.py::test_num[48]',
       'test_50.py::test_num[49]',
       'test_50.py::test_num[4]',
       'test_50.py::test_num[5]',
       'test_50.py::test_num[6]',
       'test_50.py::test_num[7]',
       'test_50.py::test_num[8]',
       'test_50.py::test_num[9]',
       'test_assert1.py::test_function',
       'test_assert2.py::test_set_comparison',
       'test_caching.py::test_function',
       'test_foocompare.py::test_compare']
    cache/stepwise contains:
      []
    example/value contains:
      42

    ========================== no tests ran in 0.12s ===========================

``--cache-show`` takes an optional argument to specify a glob pattern for
filtering:

.. code-block:: pytest

    $ pytest --cache-show example/*
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
    cachedir: $PYTHON_PREFIX/.pytest_cache
    rootdir: $REGENDOC_TMPDIR
    cachedir: $PYTHON_PREFIX/.pytest_cache
    ----------------------- cache values for 'example/*' -----------------------
    example/value contains:
      42

    ========================== no tests ran in 0.12s ===========================


Clearing Cache content
----------------------

You can instruct pytest to clear all cache files and values
by adding the ``--cache-clear`` option like this:

.. code-block:: bash

    pytest --cache-clear

This is recommended for invocations from Continuous Integration
servers where isolation and correctness are more important
than speed.


Stepwise
--------

As an alternative to ``--lf -x``, especially for cases where you expect a
large part of the test suite to fail, ``--sw``, ``--stepwise`` allows you to
fix failures one at a time. The test suite will run until the first failure
and then stop. At the next invocation, tests will continue from the last
failing test and then run until the next failing test. You may use the
``--stepwise-skip`` option to ignore one failing test and stop the test
execution on the second failing test instead. This is useful if you get stuck
on a failing test and just want to ignore it until later.
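
A typical session might look like this (a sketch of the invocations only):

.. code-block:: bash

    pytest --sw                   # run until the first failure, then stop
    # ...fix the failure, then re-run:
    pytest --sw                   # continue from the last failing test
    pytest --sw --stepwise-skip   # ignore one failure, stop at the next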