This directory contains integration tests that test bitcoind and its
utilities in their entirety. It does not contain unit tests, which
can be found in [/src/test](/src/test), [/src/wallet/test](/src/wallet/test),
etc.

This directory contains the following sets of tests:

- [functional](/test/functional), which tests the functionality of
bitcoind and bitcoin-qt by interacting with them through the RPC and P2P
interfaces.
- [util](/test/util), which tests the bitcoin utilities, currently only
bitcoin-tx.
- [lint](/test/lint/), which performs various static analysis checks.

The util tests are run as part of the `make check` target. The functional
tests and lint scripts can be run as explained in the sections below.

# Running tests locally

Before tests can be run locally, Bitcoin Core must be built. See the [building instructions](/doc#building) for help.

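For reference, a typical out-of-the-box autotools build on a Unix-like system looks roughly like the sketch below; consult the linked building instructions for the dependencies and options your platform actually needs.

```bash
# Rough sketch of a default build; install your platform's build
# dependencies first (see doc/build-*.md).
./autogen.sh
./configure
make          # use "make -j N" to build with N parallel jobs
make check    # optional: runs the unit and util tests
```
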
### Functional tests

#### Dependencies

The ZMQ functional test requires a Python ZMQ library. To install it:

- on Unix, run `sudo apt-get install python3-zmq`
- on macOS, run `pip3 install pyzmq`

#### Running the tests

Individual tests can be run by directly calling the test script, e.g.:

```
test/functional/feature_rbf.py
```

or can be run through the test_runner harness, e.g.:

```
test/functional/test_runner.py feature_rbf.py
```

You can run any combination (including duplicates) of tests by calling:

```
test/functional/test_runner.py <testname1> <testname2> <testname3> ...
```

Wildcard test names can be passed, if the paths are coherent and the test runner
is called from a `bash` shell or similar that does the globbing. For example,
to run all the wallet tests:

```
test/functional/test_runner.py test/functional/wallet*
functional/test_runner.py functional/wallet* (called from the test/ directory)
test_runner.py wallet* (called from the test/functional/ directory)
```

but not

```
test/functional/test_runner.py wallet*
```

Combinations of wildcards can be passed:

```
test/functional/test_runner.py ./test/functional/tool* test/functional/mempool*
test_runner.py tool* mempool*
```

Run the regression test suite with:

```
test/functional/test_runner.py
```

Run all possible tests with:

```
test/functional/test_runner.py --extended
```

By default, up to 4 tests will be run in parallel by test_runner. To specify
how many jobs to run, append `--jobs=n`.

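For example, to let the harness run up to eight tests at once (the number here is only an illustration; pick what your machine can handle):

```
test/functional/test_runner.py --jobs=8
```
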
The individual tests and the test_runner harness have many command-line
options. Run `test/functional/test_runner.py -h` to see them all.

#### Troubleshooting and debugging test failures

##### Resource contention

The P2P and RPC ports used by the bitcoind nodes-under-test are chosen to make
conflicts with other processes unlikely. However, if there is another bitcoind
process running on the system (perhaps from a previous test which hasn't successfully
killed all its bitcoind nodes), then there may be a port conflict which will
cause the test to fail. It is recommended that you run the tests on a system
where no other bitcoind processes are running.

On Linux, the test framework will warn if there is another
bitcoind process running when the tests are started.

If there are zombie bitcoind processes after test failure, you can kill them
by running the following commands. **Note that these commands will kill all
bitcoind processes running on the system, so should not be used if any non-test
bitcoind processes are being run.**

```bash
killall bitcoind
```

or

```bash
pkill -9 bitcoind
```

##### Data directory cache

A pre-mined blockchain with 200 blocks is generated the first time a
functional test is run and is stored in test/cache. This speeds up
test startup times since new blockchains don't need to be generated for
each test. However, the cache may get into a bad state, in which case
tests will fail. If this happens, remove the cache directory (and make
sure bitcoind processes are stopped as above):

```bash
rm -rf test/cache
killall bitcoind
```

##### Test logging

The tests contain logging at five different levels (DEBUG, INFO, WARNING, ERROR
and CRITICAL). From within your functional tests you can log to these different
levels using the logger included in the test_framework, e.g.
`self.log.debug(object)` (a short sketch follows the list below). By default:

- when run through the test_runner harness, *all* logs are written to
  `test_framework.log` and no logs are output to the console.
- when run directly, *all* logs are written to `test_framework.log` and INFO
  level and above are output to the console.
- when run by [our CI (Continuous Integration)](/ci/README.md), no logs are output to the console. However, if a test
  fails, the `test_framework.log` and bitcoind `debug.log`s will all be dumped
  to the console to help troubleshooting.

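As a sketch of what in-test logging looks like (the messages and values are made up for illustration), inside a test's `run_test()` you might write:

```py
# Illustrative only: assumes this runs inside a functional test's
# run_test() method, where self.log is the framework's logger.
self.log.debug("generated premine, checking balances")            # hidden at the default console level
self.log.info("node0 balance: %s", self.nodes[0].getbalance())    # shown when the test is run directly
self.log.warning("this stands out in test_framework.log")
```
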
These log files can be located under the test data directory (which is always
printed in the first line of test output):
  - `<test data directory>/test_framework.log`
  - `<test data directory>/node<node number>/regtest/debug.log`.

The node number identifies the relevant test node, starting from `node0`, which
corresponds to its position in the nodes list of the specific test,
e.g. `self.nodes[0]`.

To change the level of logs output to the console, use the `-l` command line
argument.

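For example, to run a single test directly with DEBUG-level output on the console (the test name is just an example):

```
test/functional/feature_rbf.py -l DEBUG
```
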
`test_framework.log` and bitcoind `debug.log`s can be combined into a single
aggregate log by running the `combine_logs.py` script. The output can be plain
text, colorized text or html. For example:

```
test/functional/combine_logs.py -c <test data directory> | less -r
```

will pipe the colorized logs from the test into less.

Use `--tracerpc` to trace out all the RPC calls and responses to the console. For
some tests (e.g. any that use `submitblock` to submit a full block over RPC),
this can result in a lot of screen output.

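For instance, to watch the RPC traffic of a single test as it runs (again using `feature_rbf.py` only as an example):

```
test/functional/feature_rbf.py --tracerpc
```
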
By default, the test data directory will be deleted after a successful run.
Use `--nocleanup` to leave the test data directory intact. The test data
directory is never deleted after a failed test.

##### Attaching a debugger

A Python debugger can be attached to tests at any point. Just add the line:

```py
import pdb; pdb.set_trace()
```

anywhere in the test. You will then be able to inspect variables, as well as
call methods that interact with the bitcoind nodes-under-test.

If further introspection of the bitcoind instances themselves becomes
necessary, this can be accomplished by first setting a pdb breakpoint
at an appropriate location, running the test to that point, then using
`gdb` (or `lldb` on macOS) to attach to the process and debug.

For instance, to attach to `self.nodes[1]` during a run, you can get
the pid of the node from within `pdb`:

```
(pdb) self.nodes[1].process.pid
```

Alternatively, you can find the pid by inspecting the temp folder for the specific test
you are running. The path to that folder is printed at the beginning of every
test run:

```bash
2017-06-27 14:13:56.686000 TestFramework (INFO): Initializing test directory /tmp/user/1000/testo9vsdjo3
```

Use the path to find the pid file in the temp folder:

```bash
cat /tmp/user/1000/testo9vsdjo3/node1/regtest/bitcoind.pid
```

Then you can use the pid to start `gdb`:

```bash
gdb /home/example/bitcoind <pid>
```

Note: The gdb attach step may require `ptrace_scope` to be modified, or `sudo` preceding the `gdb` command.
See this link for considerations: https://www.kernel.org/doc/Documentation/security/Yama.txt

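On systems where the Yama LSM is enabled, one way to allow non-root attaches until the next reboot is to relax the ptrace scope; treat this as a security trade-off and prefer `sudo gdb` if you are unsure:

```bash
# Temporarily allow ptrace attaches to non-child processes (reverts on reboot).
sudo sysctl -w kernel.yama.ptrace_scope=0
```
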
##### Profiling

An easy way to profile node performance during functional tests is provided
for Linux platforms using `perf`.

Perf will sample the running node and will generate profile data in the node's
datadir. The profile data can then be presented using `perf report` or a graphical
tool like [hotspot](https://github.com/KDAB/hotspot).

To generate a profile during test suite runs, use the `--perf` flag.

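For example, a profiled run of a single test might look roughly like this (the test name is illustrative):

```
test/functional/test_runner.py --perf feature_rbf.py
```
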
To render the output to text, run:

```sh
perf report -i /path/to/datadir/send-big-msgs.perf.data.xxxx --stdio | c++filt | less
```

For ways to generate more granular profiles, see the README in
[test/functional](/test/functional).

### Util tests

Util tests can be run locally by running `test/util/bitcoin-util-test.py`.
Use the `-v` option for verbose output.

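For example, a verbose local run looks like:

```
test/util/bitcoin-util-test.py -v
```
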
### Lint tests

#### Dependencies

| Lint test | Dependency | Version [used by CI](../ci/lint/04_install.sh) | Installation
|-----------|:----------:|:-------------------------------------------:|--------------
| [`lint-python.sh`](lint/lint-python.sh) | [flake8](https://gitlab.com/pycqa/flake8) | [3.7.8](https://github.com/bitcoin/bitcoin/pull/15257) | `pip3 install flake8==3.7.8`
| [`lint-shell.sh`](lint/lint-shell.sh) | [ShellCheck](https://github.com/koalaman/shellcheck) | [0.6.0](https://github.com/bitcoin/bitcoin/pull/15166) | [details...](https://github.com/koalaman/shellcheck#installing)
| [`lint-shell.sh`](lint/lint-shell.sh) | [yq](https://github.com/kislyuk/yq) | default | `pip3 install yq`
| [`lint-spelling.sh`](lint/lint-spelling.sh) | [codespell](https://github.com/codespell-project/codespell) | [1.15.0](https://github.com/bitcoin/bitcoin/pull/16186) | `pip3 install codespell==1.15.0`

Please be aware that on Linux distributions all dependencies are usually available as packages, but could be outdated.

#### Running the tests

Individual tests can be run by directly calling the test script, e.g.:

```
test/lint/lint-filenames.sh
```

You can run all the shell-based lint tests by running:

```
test/lint/lint-all.sh
```

# Writing functional tests

You are encouraged to write functional tests for new or existing features.
Further information about the functional test framework and individual
tests can be found in [test/functional](/test/functional).

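To give a feel for the shape of a functional test, here is a minimal sketch; the class name and log message are illustrative only, and the example test and style guide in [test/functional](/test/functional) remain the authoritative template.

```py
#!/usr/bin/env python3
"""Illustrative, minimal functional test skeleton."""
from test_framework.test_framework import BitcoinTestFramework


class ExampleSketchTest(BitcoinTestFramework):
    def set_test_params(self):
        # One regtest node is enough for this sketch.
        self.num_nodes = 1

    def run_test(self):
        # Talk to the node over RPC and log the result.
        self.log.info("Best block hash: %s", self.nodes[0].getbestblockhash())


if __name__ == '__main__':
    ExampleSketchTest().main()
```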