# Automated regression tests for librdkafka


## Supported test environments

While the standard test suite works well on OSX and Windows,
the full test suite (which must be run for PRs and releases) will
only run on recent Linux distros due to its use of ASAN, Kerberos, etc.


## Automated broker cluster setup using trivup

A local broker cluster can be set up using
[trivup](https://github.com/edenhill/trivup), which is a Python package
available on PyPI.
These self-contained clusters are used to run the librdkafka test suite
on a number of different broker versions or with specific broker configs.

trivup will download the specified Kafka version into its root directory.
The root directory is also used for cluster instances, where Kafka will
write messages, logs, etc.
The trivup root directory is `tmp` in the current directory by default, but
may be changed by setting the `TRIVUP_ROOT` environment variable
to an alternate directory, e.g., `TRIVUP_ROOT=$HOME/trivup make full`.

First install trivup:

    $ pip3 install trivup

Bring up a Kafka cluster (with the specified version) and start an
interactive shell; when the shell is exited the cluster is brought down
and deleted.

    $ ./interactive_broker_version.py 2.3.0   # Broker version

In the trivup shell, run the test suite:

    $ make


If you'd rather use an existing cluster, you may omit trivup and
provide a `test.conf` file that specifies the brokers and possibly other
librdkafka configuration properties:

    $ cp test.conf.example test.conf
    $ $EDITOR test.conf

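For reference, a minimal `test.conf` could look like the following. The broker address is just an example to adjust for your cluster; any librdkafka configuration property may be set, one `key=value` per line:

```ini
# Example broker list, adjust to your cluster:
bootstrap.servers=localhost:9092

# Any librdkafka configuration property may be set here, e.g.:
#debug=broker,protocol
```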



## Run specific tests

To run tests:

    # Run tests in parallel (quicker, but harder to troubleshoot)
    $ make

    # Run a condensed test suite (quickest)
    # This is what is run on CI builds.
    $ make quick

    # Run tests in sequence
    $ make run_seq

    # Run a specific test
    $ TESTS=0004 make

    # Run test(s) with helgrind, valgrind or gdb
    $ TESTS=0009 ./run-test.sh valgrind|helgrind|gdb


All tests in the 0000-0999 series are run automatically with `make`.

Tests 1000-1999 require specific non-standard setups or broker
configuration; these tests are run with `TESTS=1nnn make`.
See the comments in each test's source file for specific requirements.

To insert test results into an SQLite database, make sure the `sqlite3`
utility is installed, then add this to `test.conf`:

    test.sql.command=sqlite3 rdktests



## Adding a new test

The simplest way to add a new test is to copy one of the recent
(higher `0nnn-..` number) tests to the next free
`0nnn-<what-is-tested>` file.

If possible and practical, try to use the C++ API in your test, as that will
cover both the C and C++ APIs and thus provide better test coverage.
Do note that the C++ test framework is not as feature-rich as the C one,
so if you need message verification, etc., you're better off with a C test.

After creating your test file it needs to be added in a couple of places:

 * Add it to [tests/CMakeLists.txt](tests/CMakeLists.txt)
 * Add it to [win32/tests/tests.vcxproj](win32/tests/tests.vcxproj)
 * Add it to both locations in [tests/test.c](tests/test.c) - search for an
   existing test number to see what needs to be done.

You don't need to add the test to the Makefile; it is picked up automatically.

Some additional guidelines:
 * If your test depends on a minimum broker version, make sure to specify it
   in test.c using `TEST_BRKVER()` (see 0091 as an example).
 * If your test can run without an active cluster, flag the test
   with `TEST_F_LOCAL`.
 * If your test runs for a long time or produces/consumes a lot of messages
   it might not be suitable for running on CI (which should run quickly
   and is bound by both time and resources). In this case it is preferred
   that you modify your test to run quicker and/or with fewer messages
   when the `test_quick` variable is true.
 * There are plenty of helper wrappers in test.c for common librdkafka
   functions that make tests easier to write by not having to deal with
   errors, etc.
 * Fail fast, using `TEST_ASSERT()` et al.; the sooner an error is detected
   the better, since it makes troubleshooting easier.
 * Use `TEST_SAY()` et al. to inform the developer what your test is doing,
   making it easier to troubleshoot upon failure. But try to keep output
   down to reasonable levels. There is a `TEST_LEVEL` environment variable
   that can be used with `TEST_SAYL()` to only emit certain printouts
   if the test level is increased. The default test level is 2.
 * The test runner will automatically adjust the timeouts it knows about
   if running under valgrind, on CI, or in similar environments where
   execution speed may be slower.
   To make sure your test remains sturdy in these types of environments,
   use the `tmout_multip(milliseconds)` macro when passing timeout
   values to non-test functions, e.g., `rd_kafka_poll(rk, tmout_multip(3000))`.
 * If your test file contains multiple separate sub-tests, use
   `SUB_TEST()`, `SUB_TEST_QUICK()` and `SUB_TEST_PASS()` from inside
   the test functions to help differentiate test failures.


## Test scenarios

A test scenario defines the cluster configuration used by tests.
The majority of tests use the "default" scenario, which matches the
Apache Kafka default broker configuration (topic auto creation enabled, etc).

If a test relies on cluster configuration that is mutually exclusive with
the default configuration, an alternate scenario must be defined in
`scenarios/<scenario>.json`; this is a configuration object that
is passed to [trivup](https://github.com/edenhill/trivup).

Try to reuse an existing test scenario as far as possible to speed up
test times, since each new scenario will require a new cluster incarnation.
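As a sketch, a scenario that disables topic auto-creation might look like the following. The key name is an assumption about what trivup accepts; check the existing files under `scenarios/` for the exact schema:

```json
{
    "auto_create_topics": "false"
}
```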


## A guide to testing, verifying, and troubleshooting librdkafka


### Creating a development build

The [dev-conf.sh](../dev-conf.sh) script configures and builds librdkafka and
the test suite for development use, enabling extra runtime
checks (`ENABLE_DEVEL`, `rd_dassert()`, etc), disabling optimization
(to get accurate stack traces and line numbers), enabling ASAN, etc.

    # Reconfigure librdkafka for development use and rebuild.
    $ ./dev-conf.sh

**NOTE**: Performance tests and benchmarks should not use a development build.


### Controlling the test framework

A test run may be dynamically set up using a number of environment variables.
These environment variables work for all the different ways of invoking the
tests, be it `make`, `run-test.sh`, `until-fail.sh`, etc.

 * `TESTS=0nnn` - only run a single test identified by its full number, e.g.
                  `TESTS=0102 make`. (Yes, the var should have been called TEST.)
 * `SUBTESTS=...` - only run sub-tests (tests that are using `SUB_TEST()`)
                      that contain this string.
 * `TESTS_SKIP=...` - skip these tests.
 * `TEST_DEBUG=...` - this will automatically set the `debug` config property
                      of all instantiated clients to the given value.
                      E.g., `TEST_DEBUG=broker,protocol TESTS=0001 make`
 * `TEST_LEVEL=n` - controls the `TEST_SAY()` output level; a higher number
                      yields more test output. The default level is 2.
 * `RD_UT_TEST=name` - only run unit tests containing `name`; should be used
                          with `TESTS=0000`.
                          See [../src/rdunittest.c](../src/rdunittest.c) for
                          unit test names.


Let's say that you run the full test suite and get a failure in test 0061,
which is a consumer test. You want to quickly reproduce the issue
and figure out what is wrong, so limit the tests to just 0061, and provide
the relevant debug options (typically `cgrp,fetch` for consumers):

    $ TESTS=0061 TEST_DEBUG=cgrp,fetch make

If the test did not fail you've found an intermittent issue; this is where
[until-fail.sh](until-fail.sh) comes into play, so run the test until it fails:

    # bare means to run the test without valgrind
    $ TESTS=0061 TEST_DEBUG=cgrp,fetch ./until-fail.sh bare


### How to run tests

The standard way to run the test suite is firing up a trivup cluster
in an interactive shell:

    $ ./interactive_broker_version.py 2.3.0   # Broker version


And then running the test suite in parallel:

    $ make


Run one test at a time:

    $ make run_seq


Run a single test:

    $ TESTS=0034 make


Run the test suite with valgrind (see instructions below):

    $ ./run-test.sh valgrind   # memory checking

or with helgrind (the valgrind thread checker):

    $ ./run-test.sh helgrind   # thread checking


To run the tests in gdb:

**NOTE**: gdb support is flaky on OSX due to signing issues.

    $ ./run-test.sh gdb
    (gdb) run

    # wait for test to crash, or interrupt with Ctrl-C

    # backtrace of current thread
    (gdb) bt
    # move up or down a stack frame
    (gdb) up
    (gdb) down
    # select specific stack frame
    (gdb) frame 3
    # show code at location
    (gdb) list

    # print variable content
    (gdb) p rk.rk_conf.group_id
    (gdb) p *rkb

    # continue execution (if interrupted)
    (gdb) cont

    # single-step one instruction
    (gdb) step

    # restart
    (gdb) run

    # see all threads
    (gdb) info threads

    # see backtraces of all threads
    (gdb) thread apply all bt

    # exit gdb
    (gdb) exit


If a test crashes and produces a core file (make sure your shell has
`ulimit -c unlimited` set!), do:

    # On Linux
    $ LD_LIBRARY_PATH=../src:../src-cpp gdb ./test-runner <core-file>
    (gdb) bt

    # On OSX
    $ DYLD_LIBRARY_PATH=../src:../src-cpp gdb ./test-runner /cores/core.<pid>
    (gdb) bt


To run all tests repeatedly until one fails (a good way of finding
intermittent failures, race conditions, etc):

    $ ./until-fail.sh bare  # bare is to run the test without valgrind,
                            # may also be one or more of the modes supported
                            # by run-test.sh:
                            #  bare valgrind helgrind gdb, etc..

To run a single test repeatedly with valgrind until failure:

    $ TESTS=0103 ./until-fail.sh valgrind



### Finding memory leaks, memory corruption, etc.

There are two ways to verify that there are no memory leaks, out-of-bounds
memory accesses, use after free, etc.: ASAN or valgrind.

#### ASAN - AddressSanitizer

The first option is AddressSanitizer, build-time instrumentation
provided by clang and gcc that inserts memory checks into the built library.

To enable AddressSanitizer (ASAN), run `./dev-conf.sh asan` from the
librdkafka root directory.
This script will rebuild librdkafka and the test suite with ASAN enabled.

Then run tests as usual. Memory access issues will be reported on stderr
in real time as they happen (and the test will eventually fail), while
memory leaks will be reported on stderr when the test run exits successfully,
i.e., no tests failed.

Test failures will typically cause the current test to exit hard without
cleaning up, in which case there will be a large number of reported memory
leaks; these shall be ignored. The memory leak report is only relevant
when the test suite passes.

**NOTE**: The OSX version of ASAN does not provide memory leak detection;
          you will need to run the test suite on Linux (native or in Docker).

**NOTE**: ASAN, TSAN and valgrind are mutually exclusive.


#### Valgrind - memory checker

Valgrind is a powerful virtual machine that intercepts all memory accesses
of an unmodified program, reporting memory access violations, use after free,
memory leaks, etc.

Valgrind provides additional checks over ASAN and is mostly useful
for troubleshooting crashes, memory issues and leaks when ASAN falls short.

To use valgrind, make sure librdkafka and the test suite are built without
ASAN or TSAN; it must be a clean build without any other instrumentation.
Then simply run:

    $ ./run-test.sh valgrind

Valgrind will report to stderr, just like ASAN.


**NOTE**: Valgrind only runs on Linux.

**NOTE**: ASAN, TSAN and valgrind are mutually exclusive.



### TSAN - Thread and locking issues

librdkafka uses a number of internal threads which communicate and share state
through op queues, condition variables, mutexes and atomics.

While the docstrings in the librdkafka source code specify what locking is
required, it is very hard to manually verify that the correct locks
are acquired, and in the correct order (to avoid deadlocks).

TSAN, ThreadSanitizer, is of great help here. As with ASAN, TSAN is a
build-time option: run `./dev-conf.sh tsan` to rebuild with TSAN.

Run the test suite as usual, preferably in parallel. TSAN will output
thread errors to stderr and eventually fail the test run.

If you're having threading issues and TSAN does not provide enough information
to sort them out, you can also try running the test with helgrind, which
is valgrind's thread checker (`./run-test.sh helgrind`).


**NOTE**: ASAN, TSAN and valgrind are mutually exclusive.


### Resource usage thresholds (experimental)

**NOTE**: This is an experimental feature; some form of system-specific
          calibration will be needed.

If the `-R` option is passed to the `test-runner`, or the `make rusage`
target is used, the test framework will monitor each test's resource usage
and fail the test if the default or test-specific thresholds are exceeded.

Per-test thresholds are specified in test.c using the `_THRES()` macro.

Currently monitored resources are:
 * `utime` - User CPU time in seconds (default 1.0s).
 * `stime` - System/kernel CPU time in seconds (default 0.5s).
 * `rss` - RSS (memory) usage (default 10.0 MB).
 * `ctxsw` - Number of voluntary context switches, e.g. syscalls (default 10000).

Upon successful test completion a log line will be emitted with a resource
usage summary, e.g.:

    Test resource usage summary: 20.161s (32.3%) User CPU time, 12.976s (20.8%) Sys CPU time, 0.000MB RSS memory increase, 4980 Voluntary context switches

The User and Sys CPU thresholds are based on observations from running the
test suite on an Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz (8 cores),
which defines the baseline system.

Since no two development environments are identical, a manual CPU calibration
value can be passed as `-R<C>`, where `C` is the CPU calibration for
the local system compared to the baseline system.
The CPU thresholds will be multiplied by the CPU calibration value
(default 1.0); thus a value less than 1.0 means the local system is faster
than the baseline system, and a value larger than 1.0 means the local system
is slower than the baseline system.
For example, if you are on a slower i5 system, pass `-R2.0` to allow higher
CPU usage, or `-R0.8` if your system is faster than the baseline system.
The CPU calibration value may also be set with the
`TEST_CPU_CALIBRATION=1.5` environment variable.

In an ideal future, the test suite would be able to auto-calibrate.


**NOTE**: The resource usage threshold checks will run tests in sequence,
          not in parallel, to be able to effectively measure per-test usage.


# PR and release verification

Prior to pushing your PR you must verify that your code change has not
introduced any regressions or new issues, which requires running the test
suite in multiple different modes:

 * PLAINTEXT, SSL transports
 * All SASL mechanisms (PLAIN, GSSAPI, SCRAM, OAUTHBEARER)
 * Idempotence enabled for all tests
 * With memory checking
 * With thread checking
 * Compatibility with older broker versions

These tests must also be run for each release candidate that is created.

    $ make release-test

This will take approximately 30 minutes.

**NOTE**: Run this on Linux (for ASAN and Kerberos tests to work properly), not OSX.


# Test mode specifics

The following sections rely on trivup being installed.


### Compatibility tests with multiple broker versions

To ensure compatibility across all supported broker versions, the entire
test suite is run in a trivup-based cluster, with one test run for each
relevant broker version.

    $ ./broker_version_tests.py


### SASL tests

Testing SASL requires a bit of configuration on the brokers; to automate
this, the entire test suite is run on trivup-based clusters.

    $ ./sasl_test.py



### Full test suite(s) run

To run all tests, including the broker version and SASL tests, etc., use

    $ make full

**NOTE**: `make full` is a subset of the more complete `make release-test` target.


### Idempotent Producer tests

To run the entire test suite with `enable.idempotence=true`, use
`make idempotent_seq` or `make idempotent_par` for sequential or
parallel testing.
Some tests are skipped or slightly modified when idempotence is enabled.


## Manual testing notes

The following tests are currently performed manually; they should be
implemented as automated tests.

### LZ4 interop

    $ ./interactive_broker_version.py -c ./lz4_manual_test.py 0.8.2.2 0.9.0.1 2.3.0

Check the output and follow the instructions.




## Test numbers

Automated tests: 0000-0999
Manual tests:    8000-8999
501