# Automated regression tests for librdkafka


## Supported test environments

While the standard test suite works well on OSX and Windows,
the full test suite (which must be run for PRs and releases) will
only run on recent Linux distros due to its use of ASAN, Kerberos, etc.


## Automated broker cluster setup using trivup

A local broker cluster can be set up using
[trivup](https://github.com/edenhill/trivup), which is a Python package
available on PyPI.
These self-contained clusters are used to run the librdkafka test suite
on a number of different broker versions or with specific broker configs.

trivup will download the specified Kafka version into its root directory.
The root directory is also used for cluster instances, where Kafka will
write messages, logs, etc.
The trivup root directory is by default `tmp` in the current directory but
may be specified by setting the `TRIVUP_ROOT` environment variable
to an alternate directory, e.g., `TRIVUP_ROOT=$HOME/trivup make full`.

First install trivup:

    $ pip3 install trivup

Bring up a Kafka cluster (with the specified version) and start an interactive
shell; when the shell is exited the cluster is brought down and deleted.

    $ ./interactive_broker_version.py 2.3.0   # Broker version

In the trivup shell, run the test suite:

    $ make


If you'd rather use an existing cluster, you may omit trivup and
provide a `test.conf` file that specifies the brokers and possibly other
librdkafka configuration properties:

    $ cp test.conf.example test.conf
    $ $EDITOR test.conf

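As a sketch, a minimal `test.conf` usually only needs the broker list; the
address below is a placeholder for your own cluster, and any other librdkafka
configuration property can be added in the same `key=value` format:

    # Brokers to connect to (placeholder address).
    metadata.broker.list=localhost:9092
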
## Run specific tests

To run tests:

    # Run tests in parallel (quicker, but harder to troubleshoot)
    $ make

    # Run a condensed test suite (quickest)
    # This is what is run on CI builds.
    $ make quick

    # Run tests in sequence
    $ make run_seq

    # Run specific test
    $ TESTS=0004 make

    # Run test(s) with helgrind, valgrind, gdb
    $ TESTS=0009 ./run-test.sh valgrind|helgrind|gdb

All tests in the 0000-0999 series are run automatically with `make`.

Tests 1000-1999 require specific non-standard setups or broker
configuration; these tests are run with `TESTS=1nnn make`.
See comments in the test's source file for specific requirements.

To insert test results into an SQLite database, make sure the `sqlite3`
utility is installed, then add this to `test.conf`:

    test.sql.command=sqlite3 rdktests


## Adding a new test

The simplest way to add a new test is to copy one of the recent
(higher `0nnn-..` number) tests to the next free
`0nnn-<what-is-tested>` file.

If possible and practical, try to use the C++ API in your test as that will
cover both the C and C++ APIs and thus provide better test coverage.
Do note that the C++ test framework is not as feature rich as the C one,
so if you need message verification, etc., you're better off with a C test.

After creating your test file it needs to be added in a couple of places:

 * Add to [tests/CMakeLists.txt](tests/CMakeLists.txt)
 * Add to [win32/tests/tests.vcxproj](win32/tests/tests.vcxproj)
 * Add to both locations in [tests/test.c](tests/test.c) - search for an
   existing test number to see what needs to be done.

You don't need to add the test to the Makefile; it is picked up automatically.

Some additional guidelines:
 * If your test depends on a minimum broker version, make sure to specify it
   in test.c using `TEST_BRKVER()` (see 0091 as an example).
 * If your test can run without an active cluster, flag the test
   with `TEST_F_LOCAL`.
 * If your test runs for a long time or produces/consumes a lot of messages
   it might not be suitable for running on CI (which should run quickly
   and is bound by both time and resources). In this case it is preferred
   that you modify your test to run quicker and/or with fewer messages
   when the `test_quick` variable is true.
 * There are plenty of helper wrappers in test.c for common librdkafka
   functions that make tests easier to write by not having to deal with
   errors, etc.
 * Fail fast: use `TEST_ASSERT()` et al., the sooner an error is detected
   the better since it makes troubleshooting easier.
 * Use `TEST_SAY()` et al. to inform the developer what your test is doing,
   making it easier to troubleshoot upon failure. But try to keep output
   down to reasonable levels. There is a `TEST_LEVEL` environment variable
   that can be used with `TEST_SAYL()` to only emit certain printouts
   if the test level is increased. The default test level is 2.
 * The test runner will automatically adjust the timeouts it knows about
   if running under valgrind, on CI, or similar environments where the
   execution speed may be slower.
   To make sure your test remains sturdy in these types of environments, make
   sure to use the `tmout_multip(milliseconds)` macro when passing timeout
   values to non-test functions, e.g., `rd_kafka_poll(rk, tmout_multip(3000))`.

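As a rough illustration only, a minimal local (no cluster required) test could
look something like the sketch below. The test number is hypothetical and the
helper names (`test_conf_init()`, `test_create_handle()`) are taken from the
existing tests, so copy a recent test for the authoritative layout and
registration details:

    /* 0108-example.c (hypothetical number: pick the next free one).
     * A minimal sketch of a local test using the C test framework. */
    #include "test.h"

    int main_0108_example(int argc, char **argv) {
            rd_kafka_conf_t *conf;
            rd_kafka_t *rk;

            /* Apply test.conf and TEST_DEBUG, with a 30s default timeout. */
            test_conf_init(&conf, NULL, 30);

            /* Create a producer instance using the test helpers. */
            rk = test_create_handle(RD_KAFKA_PRODUCER, conf);
            TEST_ASSERT(rk != NULL, "expected a client instance");

            TEST_SAY("Created %s\n", rd_kafka_name(rk));

            rd_kafka_destroy(rk);

            return 0;
    }
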
## Test scenarios

A test scenario defines the cluster configuration used by tests.
The majority of tests use the "default" scenario which matches the
Apache Kafka default broker configuration (topic auto creation enabled, etc).

If a test relies on cluster configuration that is mutually exclusive with
the default configuration, an alternate scenario must be defined in
`scenarios/<scenario>.json`, which is a configuration object that
is passed to [trivup](https://github.com/edenhill/trivup).

Try to reuse an existing test scenario as far as possible to speed up
test times, since each new scenario will require a new cluster incarnation.

## A guide to testing, verifying, and troubleshooting librdkafka


### Creating a development build

The [dev-conf.sh](../dev-conf.sh) script configures and builds librdkafka and
the test suite for development use, enabling extra runtime
checks (`ENABLE_DEVEL`, `rd_dassert()`, etc), disabling optimization
(to get accurate stack traces and line numbers), enabling ASAN, etc.

    # Reconfigure librdkafka for development use and rebuild.
    $ ./dev-conf.sh

**NOTE**: Performance tests and benchmarks should not use a development build.

### Controlling the test framework

A test run may be dynamically set up using a number of environment variables.
These environment variables work for all different ways of invoking the tests,
be it `make`, `run-test.sh`, `until-fail.sh`, etc.

 * `TESTS=0nnn` - only run a single test identified by its full number, e.g.
                  `TESTS=0102 make`. (Yes, the var should have been called TEST)
 * `TEST_DEBUG=...` - this will automatically set the `debug` config property
                      of all instantiated clients to the value.
                      E.g. `TEST_DEBUG=broker,protocol TESTS=0001 make`
 * `TEST_LEVEL=n` - controls the `TEST_SAY()` output level, a higher number
                      yields more test output. Default level is 2.
 * `RD_UT_TEST=name` - only run the specific unittest, should be used with
                          `TESTS=0000`.
                          See [../src/rdunittest.c](../src/rdunittest.c) for
                          unit test names.

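For example, to run only the unit tests (0000) restricted to a single unit
test, with more verbose output (the unit test name below is just a
placeholder, see rdunittest.c for the actual names):

    $ TESTS=0000 RD_UT_TEST=rdbuf TEST_LEVEL=3 make
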
Let's say that you run the full test suite and get a failure in test 0061,
which is a consumer test. You want to quickly reproduce the issue
and figure out what is wrong, so limit the tests to just 0061, and provide
the relevant debug options (which is typically `cgrp,fetch` for consumers):

    $ TESTS=0061 TEST_DEBUG=cgrp,fetch make

If the test did not fail, you've found an intermittent issue; this is where
[until-fail.sh](until-fail.sh) comes into play: run the test until it fails:

    # bare means to run the test without valgrind
    $ TESTS=0061 TEST_DEBUG=cgrp,fetch ./until-fail.sh bare

### How to run tests

The standard way to run the test suite is to fire up a trivup cluster
in an interactive shell:

    $ ./interactive_broker_version.py 2.3.0   # Broker version


And then run the test suite in parallel:

    $ make


Run one test at a time:

    $ make run_seq


Run a single test:

    $ TESTS=0034 make


Run the test suite with valgrind (see instructions below):

    $ ./run-test.sh valgrind   # memory checking

or with helgrind (the valgrind thread checker):

    $ ./run-test.sh helgrind   # thread checking


**NOTE**: gdb support is flaky on OSX due to signing issues.

To run the tests in gdb:

    $ ./run-test.sh gdb
    (gdb) run

    # wait for test to crash, or interrupt with Ctrl-C

    # backtrace of current thread
    (gdb) bt
    # move up or down a stack frame
    (gdb) up
    (gdb) down
    # select specific stack frame
    (gdb) frame 3
    # show code at location
    (gdb) list

    # print variable content
    (gdb) p rk.rk_conf.group_id
    (gdb) p *rkb

    # continue execution (if interrupted)
    (gdb) cont

    # single-step one source line
    (gdb) step

    # restart
    (gdb) run

    # see all threads
    (gdb) info threads

    # see backtraces of all threads
    (gdb) thread apply all bt

    # exit gdb
    (gdb) quit

If a test crashes and produces a core file (make sure your shell has
`ulimit -c unlimited` set!), do:

    # On Linux
    $ LD_LIBRARY_PATH=../src:../src-cpp gdb ./test-runner <core-file>
    (gdb) bt

    # On OSX
    $ DYLD_LIBRARY_PATH=../src:../src-cpp gdb ./test-runner /cores/core.<pid>
    (gdb) bt


To run all tests repeatedly until one fails (a good way of finding
intermittent failures, race conditions, etc.):

    $ ./until-fail.sh bare  # bare is to run the test without valgrind,
                            # may also be one or more of the modes supported
                            # by run-test.sh:
                            #  bare valgrind helgrind gdb, etc..

To run a single test repeatedly with valgrind until failure:

    $ TESTS=0103 ./until-fail.sh valgrind

### Finding memory leaks, memory corruption, etc.

There are two ways of verifying that there are no memory leaks, out-of-bounds
memory accesses, use-after-free, etc.: ASAN or valgrind.

#### ASAN - AddressSanitizer

The first option is AddressSanitizer; this is build-time instrumentation
provided by clang and gcc that inserts memory checks into the built library.

To enable AddressSanitizer (ASAN), run `./dev-conf.sh asan` from the
librdkafka root directory.
This script will rebuild librdkafka and the test suite with ASAN enabled.

Then run tests as usual. Memory access issues will be reported on stderr
in real time as they happen (and the test will fail eventually), while
memory leaks will be reported on stderr when the test run exits successfully,
i.e., no tests failed.

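For example, a typical workflow starting from the librdkafka root directory:

    # Rebuild librdkafka and the test suite with ASAN enabled,
    # then run the quick test suite.
    $ ./dev-conf.sh asan
    $ cd tests
    $ make quick
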
Test failures will typically cause the current test to exit hard without
cleaning up, in which case there will be a large number of reported memory
leaks; these should be ignored. The memory leak report is only relevant
when the test suite passes.

**NOTE**: The OSX version of ASAN does not provide memory leak detection;
          you will need to run the test suite on Linux (native or in Docker).

**NOTE**: ASAN, TSAN and valgrind are mutually exclusive.

#### Valgrind - memory checker

Valgrind is a powerful virtual machine that intercepts all memory accesses
of an unmodified program, reporting memory access violations, use-after-free,
memory leaks, etc.

Valgrind provides additional checks over ASAN and is mostly useful
for troubleshooting crashes, memory issues and leaks when ASAN falls short.

To use valgrind, make sure librdkafka and the test suite are built without
ASAN or TSAN (it must be a clean build without any other instrumentation),
then simply run:

    $ ./run-test.sh valgrind

Valgrind will report to stderr, just like ASAN.


**NOTE**: Valgrind only runs on Linux.

**NOTE**: ASAN, TSAN and valgrind are mutually exclusive.

### TSAN - Thread and locking issues

librdkafka uses a number of internal threads which communicate and share state
through op queues, condition variables, mutexes and atomics.

While the docstrings in the librdkafka source code specify what locking is
required, it is very hard to manually verify that the correct locks
are acquired, and in the correct order (to avoid deadlocks).

TSAN, ThreadSanitizer, is of great help here. As with ASAN, TSAN is a
build-time option: run `./dev-conf.sh tsan` to rebuild with TSAN.

Run the test suite as usual, preferably in parallel. TSAN will output
thread errors to stderr and eventually fail the test run.

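For example, starting from the librdkafka root directory:

    # Rebuild with ThreadSanitizer, then run the test suite in parallel.
    $ ./dev-conf.sh tsan
    $ cd tests
    $ make
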
If you're having threading issues and TSAN does not provide enough information
to sort it out, you can also try running the test with helgrind, which
is valgrind's thread checker (`./run-test.sh helgrind`).


**NOTE**: ASAN, TSAN and valgrind are mutually exclusive.

### Resource usage thresholds (experimental)

**NOTE**: This is an experimental feature; some form of system-specific
          calibration will be needed.

If the `-R` option is passed to the `test-runner`, or the `make rusage`
target is used, the test framework will monitor each test's resource usage
and fail the test if the default or test-specific thresholds are exceeded.

Per-test thresholds are specified in test.c using the `_THRES()` macro.

Currently monitored resources are:
 * `utime` - User CPU time in seconds (default 1.0s)
 * `stime` - System/Kernel CPU time in seconds (default 0.5s).
 * `rss` - RSS (memory) usage (default 10.0 MB)
 * `ctxsw` - Number of voluntary context switches, e.g. due to blocking
   syscalls (default 10000).

Upon successful test completion a log line will be emitted with a resource
usage summary, e.g.:

    Test resource usage summary: 20.161s (32.3%) User CPU time, 12.976s (20.8%) Sys CPU time, 0.000MB RSS memory increase, 4980 Voluntary context switches

The User and Sys CPU thresholds are based on observations from running the
test suite on an Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz (8 cores),
which defines the baseline system.

Since no two development environments are identical, a manual CPU calibration
value can be passed as `-R<C>`, where `C` is the CPU calibration for
the local system compared to the baseline system.
The CPU threshold will be multiplied by the CPU calibration value (default 1.0),
thus a value less than 1.0 means the local system is faster than the
baseline system, and a value larger than 1.0 means the local system is
slower than the baseline system.
I.e., if you are on an i5 system, pass `-R2.0` to allow higher CPU usage,
or `-R0.8` if your system is faster than the baseline system.
The CPU calibration value may also be set with the
`TEST_CPU_CALIBRATION=1.5` environment variable.

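For example, a sketch of both ways of enabling the checks:

    # Run the test binary directly with resource usage checks and a
    # CPU calibration of 2.0 (this machine is slower than the baseline).
    $ ./test-runner -R2.0

    # Or use the make target and set the calibration via the environment.
    $ TEST_CPU_CALIBRATION=2.0 make rusage
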
In an ideal future, the test suite would be able to auto-calibrate.


**NOTE**: The resource usage threshold checks will run tests in sequence,
          not in parallel, to be able to effectively measure per-test usage.

# PR and release verification

Prior to pushing your PR you must verify that your code change has not
introduced any regressions or new issues; this requires running the test
suite in multiple different modes:

 * PLAINTEXT, SSL transports
 * All SASL mechanisms (PLAIN, GSSAPI, SCRAM, OAUTHBEARER)
 * Idempotence enabled for all tests
 * With memory checking
 * With thread checking
 * Compatibility with older broker versions

These tests must also be run for each release candidate that is created.

    $ make release-test

This will take approximately 30 minutes.

**NOTE**: Run this on Linux (for ASAN and Kerberos tests to work properly), not OSX.

# Test mode specifics

The following sections rely on trivup being installed.


### Compatibility tests with multiple broker versions

To ensure compatibility across all supported broker versions, the entire
test suite is run in a trivup based cluster, one test run for each
relevant broker version.

    $ ./broker_version_tests.py


### SASL tests

Testing SASL requires a bit of configuration on the brokers; to automate
this the entire test suite is run on trivup based clusters.

    $ ./sasl_test.py


### Full test suite(s) run

To run all tests, including the broker version and SASL tests, etc., use

    $ make full

**NOTE**: `make full` is a subset of the more complete `make release-test` target.

### Idempotent Producer tests

To run the entire test suite with `enable.idempotence=true` enabled, use
`make idempotent_seq` or `make idempotent_par` for sequential or
parallel testing.
Some tests are skipped or slightly modified when idempotence is enabled.

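For example:

    # Run the whole suite in parallel with enable.idempotence=true.
    $ make idempotent_par
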
## Manual testing notes

The following tests are currently performed manually; they should eventually
be implemented as automated tests.

### LZ4 interop

    $ ./interactive_broker_version.py -c ./lz4_manual_test.sh 0.8.2.2 0.9.0.1 2.3.0

Check the output and follow the instructions.


## Test numbers

Automated tests: 0000-0999
Manual tests:    8000-8999