README

Copyright (C) Internet Systems Consortium, Inc. ("ISC")

See COPYRIGHT in the source root or https://isc.org/copyright.html for terms.

Introduction
===
This directory holds a simple test environment for running bind9 system tests
involving multiple name servers.

With the exception of "common" (which holds configuration information common to
multiple tests), each directory holds a set of scripts and configuration
files to test different parts of BIND.  The directories are named for the
aspect of BIND they test, for example:

  dnssec/       DNSSEC tests
  forward/      Forwarding tests
  glue/         Glue handling tests

etc.

Typically each set of tests sets up 2-5 name servers and then performs one or
more tests against them.  Within the test subdirectory, each name server has a
separate subdirectory containing its configuration data.  These subdirectories
are named "nsN" or "ansN" (where N is a number between 1 and 8, e.g. ns1, ans2
etc.)

The tests are completely self-contained and do not require access to the real
DNS.  Generally, one of the test servers (usually ns1) is set up as a root
nameserver and is listed in the hints file of the others.


Preparing to Run the Tests
===
To enable all servers to run on the same machine, they bind to separate virtual
IP addresses on the loopback interface.  ns1 runs on 10.53.0.1, ns2 on
10.53.0.2, etc.  Before running any tests, you must set up these addresses by
running the command

    sh ifconfig.sh up

as root.  The interfaces can be removed by executing the command:

    sh ifconfig.sh down

... also as root.

The servers use unprivileged ports (above 1024) instead of the usual port 53,
so they can be run without root privileges once the interfaces have been set
up.


Note for MacOS Users
---
If you wish to make the interfaces survive across reboots, copy
org.isc.bind.system and org.isc.bind.system.plist to /Library/LaunchDaemons
then run

    launchctl load /Library/LaunchDaemons/org.isc.bind.system.plist

... as root.


Running the System Tests
===

Running an Individual Test
---
The tests can be run individually using the following command:

    sh run.sh [flags] <test-name> [<test-arguments>]

e.g.

    sh run.sh [flags] notify

Optional flags are:

    -k              Keep servers running after the test completes.  Each test
                    usually starts a number of nameservers, either instances
                    of the "named" being tested, or custom servers (written in
                    Python or Perl) that feature test-specific behavior.  The
                    servers are automatically started before the test is run
                    and stopped after it ends.  This flag leaves them running
                    at the end of the test, so that additional queries can be
                    sent by hand.  To stop the servers afterwards, use the
                    command "sh stop.sh <test-name>".

    -n              Noclean - do not remove the output files if the test
                    completes successfully.  By default, files created by the
                    test are deleted if it passes; they are not deleted if the
                    test fails.

    -p <number>     Sets the range of ports used by the test.  A block of 100
                    ports is available for each test, the number given to the
                    "-p" switch being the number of the start of that block
                    (e.g. "-p 7900" will mean that the test is able to use
                    ports 7900 through 7999).  If not specified, the test will
                    have ports 5000 to 5099 available to it.

Arguments are:

    test-name       Mandatory.  The name of the test, which is the name of the
                    subdirectory in bin/tests/system holding the test files.

    test-arguments  Optional arguments that are passed to each of the test's
                    scripts.


Running All The System Tests
---
To run all the system tests, enter the command:

    sh runall.sh [-c] [-n] [numproc]

The optional flag "-c" forces colored output (by default system test output is
not printed in color because run.sh is piped through "tee").

The optional flag "-n" has the same effect as it does for "run.sh" - it causes
the retention of all output files from all tests.

The optional "numproc" argument specifies the maximum number of tests that can
run in parallel.  The default is 1, which means that all of the tests run
sequentially.  If greater than 1, up to "numproc" tests will run simultaneously,
new tests being started as tests finish.  Each test gets a unique set of ports,
so there is no danger of tests interfering with one another.  Parallel running
reduces the total time taken to run the BIND system tests, but the output from
the individual tests will be interleaved on the screen.  However, the
systests.output file produced at the end of the run (in the bin/tests/system
directory) will contain the output from each test in sequential order.

Note that it is not possible to pass arguments to tests through the "runall.sh"
script.

A run of all the system tests can also be initiated via make:

    make [-j numproc] test

In this case, retention of the output files after a test completes successfully
is specified by setting the environment variable SYSTEMTEST_NO_CLEAN to 1 prior
to running make, e.g.

    SYSTEMTEST_NO_CLEAN=1 make [-j numproc] test

while setting the environment variable SYSTEMTEST_FORCE_COLOR to 1 forces
system test output to be printed in color.


Running Multiple System Test Suites Simultaneously
---
In some cases it may be desirable to have multiple instances of the system test
suite running simultaneously (e.g. from different terminal windows).  To do
this:

1. Each installation must have its own directory tree.  The system tests create
files in the test directories, so separate directory trees are required to
avoid interference between the same test running in different installations.

2. For one of the test suites, the starting port number must be specified by
setting the environment variable STARTPORT before starting the test suite.
Each test suite comprises about 100 tests, each being allocated a set of 100
ports.  The port ranges for each test are allocated sequentially, so each test
suite requires about 10,000 ports to itself.  By default, the port allocation
starts at 5,000.  So the following set of commands:

    Terminal Window 1:
        cd <installation-1>/bin/tests/system
        sh runall.sh 4

    Terminal Window 2:
        cd <installation-2>/bin/tests/system
        STARTPORT=20000 sh runall.sh 4

... will start the test suite for installation-1 using the default base port
of 5,000, so the test suite will use ports 5,000 through 15,000 (or
thereabouts).  The use of "STARTPORT=20000" to prefix the run of the test suite
for installation-2 will mean the test suite uses ports 20,000 through 30,000 or
so.


Format of Test Output
---
All output from the system tests is in the form of lines with the following
structure:

    <letter>:<test-name>:<message> [(<number>)]

e.g.

    I:catz:checking that dom1.example is not served by master (1)

The meanings of the fields are as follows:

<letter>
This indicates the type of message.  This is one of:

    S   Start of the test
    A   Start of test (retained for backwards compatibility)
    T   Start of test (retained for backwards compatibility)
    E   End of the test
    I   Information.  A test will typically output many of these messages
        during its run, indicating test progress.  Note that such a message may
        be of the form "I:testname:failed", indicating that a sub-test has
        failed.
    R   Result.  Each test will result in one such message, which is of the
        form:

                R:<test-name>:<result>

        where <result> is one of:

            PASS        The test passed
            FAIL        The test failed
            SKIPPED     The test was not run, usually because some
                        prerequisites required to run the test are missing.

<test-name>
This is the name of the test from which the message emanated, which is also the
name of the subdirectory holding the test files.

<message>
This is text output by the test during its execution.

(<number>)
If present, this will correlate with a file created by the test.  The tests
execute commands and route the output of each command to a file.  The name of
this file depends on the command and the test, but will usually be of the form:

    <command>.out.<suffix><number>

e.g. nsupdate.out.test28, dig.out.q3.  This aids diagnosis of problems by
allowing the output that caused the problem message to be identified.


Re-Running the Tests
---
If there is a requirement to re-run a test (or the entire test suite), the
files produced by the tests should be deleted first.  Normally, these files are
deleted if the test succeeds but are retained on error.  The run.sh script
automatically calls a given test's clean.sh script before invoking its setup.sh
script.

Deletion of the files produced by the set of tests (e.g. after the execution
of "runall.sh") can be carried out using the command:

    sh cleanall.sh

or

    make testclean

(Note that the Makefile has two other targets for cleaning up files: "clean"
will delete all the files produced by the tests, as well as the object and
executable files used by the tests.  "distclean" does all the work of "clean"
as well as deleting configuration files produced by "configure".)


Developer Notes
===
This section is intended for developers writing new tests.


Overview
---
As noted above, each test is in a separate directory.  To interact with the
test framework, the directories contain the following standard files:

prereq.sh   Run at the beginning to determine whether the test can be run at
            all; if not, the test ends with an R:SKIPPED result.  This file is
            optional: if not present, the test is assumed to have all its
            prerequisites met.

setup.sh    Run after prereq.sh, this sets up the preconditions for the tests.
            Although optional, virtually all tests will require such a file to
            set up the ports they should use for the test.

tests.sh    Runs the actual tests.  This file is mandatory.

clean.sh    Run at the end to clean up temporary files, but only if the test
            completed successfully and cleanup was not inhibited by the "-n"
            switch being passed to "run.sh".  Otherwise the temporary files are
            left in place for inspection.

ns<N>       These subdirectories contain test name servers that can be queried
            or can interact with each other.  The value of N indicates the
            address the server listens on: for example, ns2 listens on
            10.53.0.2, and ns4 on 10.53.0.4.  All test servers use an
            unprivileged port, so they don't need to run as root.  These
            servers log at the highest debug level and the log is captured in
            the file "named.run".

ans<N>      Like ns<N>, but these are simple mock name servers implemented in
            Perl or Python.  They are generally programmed to misbehave in ways
            named would not, so as to exercise named's ability to interoperate
            with badly behaved name servers.


Port Usage
---
In order for the tests to run in parallel, each test requires a unique set of
ports.  These are specified by the "-p" option passed to "run.sh", which sets
environment variables that the scripts listed above can reference.

The convention used in the system tests is that the number passed is the start
of a range of 100 ports.  The test is free to use the ports as required,
although the first ten ports in the block have designated names, and tests
generally use those ports for their intended purposes.  The names of the
environment variables are:

    PORT                     Number to be used for the query port.
    CONTROLPORT              Number to be used as the RNDC control port.
    EXTRAPORT1 - EXTRAPORT8  Eight port numbers that can be used as needed.

Two other environment variables are defined:

    LOWPORT                  The lowest port number in the range.
    HIGHPORT                 The highest port number in the range.

Since port ranges usually start on a boundary of 10, the variables are set such
that the last digit of the port number corresponds to the number of the
EXTRAPORTn variable.  For example, if the port range were to start at 5200, the
port assignments would be:

    PORT = 5200
    EXTRAPORT1 = 5201
        :
    EXTRAPORT8 = 5208
    CONTROLPORT = 5209
    LOWPORT = 5200
    HIGHPORT = 5299

When running tests in parallel (i.e. giving a value of "numproc" greater than 1
in the "make" or "runall.sh" commands listed above), it is guaranteed that each
test will get a set of unique port numbers.


Writing a Test
---
The test framework requires up to four shell scripts (listed above) as well as
a number of nameserver instances to run.  Certain expectations are put on each
script:


General
---
1. Each of the four scripts will be invoked with the command

    (cd <test-directory> ; sh <script> [<arguments>] )

... so that the working directory when the script starts executing is the test
directory.

2. Arguments can only be passed to the scripts if the test is being run as a
one-off with "run.sh".  In this case, everything on the command line after the
name of the test is passed to each script.  For example, the command:

    sh run.sh -p 12300 mytest -D xyz

... will run "mytest" with a port range of 12300 to 12399.  Each of the
framework scripts provided by the test will be invoked using the remaining
arguments, e.g.:

    (cd mytest ; sh prereq.sh -D xyz)
    (cd mytest ; sh setup.sh -D xyz)
    (cd mytest ; sh tests.sh -D xyz)
    (cd mytest ; sh clean.sh -D xyz)

No arguments will be passed to the test scripts if the test is run as part of
a run of the full test suite (e.g. when the tests are started with "runall.sh").

3. Each script should start with the following lines:

    . ../conf.sh

"conf.sh" defines a series of environment variables together with functions
useful for the test scripts.


prereq.sh
---
As noted above, this is optional.  If present, it should check whether specific
software needed to run the test is available and/or whether BIND has been
configured with the appropriate options required.

    * If the software required to run the test is present and the BIND
      configure options are correct, prereq.sh should return with a status code
      of 0.

    * If the software required to run the test is not available and/or BIND
      has not been configured with the appropriate options, prereq.sh should
      return with a status code of 1.

    * If there is some other problem (e.g. prerequisite software is available
      but is not properly configured), a status code of 255 should be returned.
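
For illustration only, a minimal prereq.sh might look like the following
sketch.  It assumes a hypothetical test that needs the Net::DNS Perl module,
and that conf.sh defines the PERL variable and the echo_i function used below:

    #!/bin/sh
    # Sketch of a prereq.sh: skip the test if Net::DNS is not installed.
    . ../conf.sh

    if ! $PERL -e 'use Net::DNS;' 2>/dev/null; then
        echo_i "This test requires the Net::DNS Perl module."
        exit 1
    fi
    exit 0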


setup.sh
---
This is responsible for setting up the configuration files used in the test.

To cope with the varying port number, ports are not hard-coded into
configuration files (or, for that matter, scripts that emulate nameservers).
Instead, setup.sh is responsible for editing the configuration files to set the
port numbers.

To do this, configuration files should be supplied in the form of templates
containing tokens identifying ports.  The tokens have the same name as the
environment variables listed above, but are prefixed and suffixed by the "@"
symbol.  For example, a fragment of a configuration file template might look
like:

    controls {
        inet 10.53.0.1 port @CONTROLPORT@ allow { any; } keys { rndc_key; };
    };

    options {
        query-source address 10.53.0.1;
        notify-source 10.53.0.1;
        transfer-source 10.53.0.1;
        port @PORT@;
        allow-new-zones yes;
    };

setup.sh should copy the template to the desired filename using the
"copy_setports" shell function defined in "conf.sh", i.e.

    copy_setports ns1/named.conf.in ns1/named.conf

This replaces the tokens @PORT@, @CONTROLPORT@, @EXTRAPORT1@ through
@EXTRAPORT8@ with the contents of the environment variables listed above.
setup.sh should do this for all configuration files required when the test
starts.

("setup.sh" should also use this method for replacing the tokens in any Perl or
Python name servers used in the test.)
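
Putting this together, a complete setup.sh is often only a few lines long.  The
following sketch (with hypothetical file names) copies the templates for two
servers:

    #!/bin/sh
    # Sketch of a setup.sh: substitute the port tokens into the configuration
    # templates before the servers are started.
    . ../conf.sh

    copy_setports ns1/named.conf.in ns1/named.conf
    copy_setports ns2/named.conf.in ns2/named.conf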


tests.sh
---
This is the main test file and the contents depend on the test.  The contents
are completely up to the developer, although most test scripts have a form
similar to the following for each sub-test:

    1. n=`expr $n + 1`
    2. echo_i "prime cache nodata.example ($n)"
    3. ret=0
    4. $DIG -p ${PORT} @10.53.0.1 nodata.example TXT > dig.out.test$n
    5. grep "status: NOERROR" dig.out.test$n > /dev/null || ret=1
    6. grep "ANSWER: 0," dig.out.test$n > /dev/null || ret=1
    7. if [ $ret != 0 ]; then echo_i "failed"; fi
    8. status=`expr $status + $ret`

1.  Increment the test number "n" (initialized to zero at the start of the
    script).

2.  Indicate that the sub-test is about to begin.  Note that "echo_i" instead
    of "echo" is used.  echo_i is a function defined in "conf.sh" which will
    prefix the message with "I:<testname>:", so allowing the output from each
    test to be identified within the output.  The test number is included in
    the message in order to tie the sub-test with its output.

3.  Initialize return status.

4 - 6. Carry out the sub-test.  In this case, a nameserver is queried (note
    that the port used is given by the PORT environment variable, which was set
    by the inclusion of the file "conf.sh" at the start of the script).  The
    output is routed to a file whose suffix includes the test number.  The
    response from the server is examined and, in this case, if the required
    string is not found, an error is indicated by setting "ret" to 1.

7.  If the sub-test failed, a message is printed. "echo_i" is used to print
    the message to add the prefix "I:<test-name>:" before it is output.

8.  "status", used to track how many of the sub-tests have failed, is
    incremented accordingly.  The value of "status" determines the status
    returned by "tests.sh", which in turn determines whether the framework
    prints the PASS or FAIL message.

Regardless of this, rules that should be followed are:

a.  Use the environment variables set by conf.sh to determine the ports to use
    for sending and receiving queries.

b.  Use a counter to tag messages and to associate the messages with the output
    files.

c.  Store all output produced by queries/commands into files.  These files
    should be named according to the command that produced them, e.g. "dig"
    output should be stored in a file "dig.out.<suffix>", the suffix being
    related to the value of the counter.

d.  Use "echo_i" to output informational messages.

e.  Retain a count of test failures and return this as the exit status from
    the script.
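
The boilerplate around the sub-tests is small.  As a sketch (not taken from any
particular test), the skeleton that these rules imply is:

    #!/bin/sh
    # Skeleton of a tests.sh (illustrative sketch only).
    . ../conf.sh

    status=0
    n=0

    # ... sub-tests of the form shown above go here ...

    echo_i "exit status: $status"
    exit $status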


clean.sh
---
The inverse of "setup.sh", this is invoked by the framework to clean up the
test directory.  It should delete all files that have been created by the test
during its run.
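
A minimal clean.sh is usually just a handful of "rm" commands.  For example (a
sketch with hypothetical file names):

    #!/bin/sh
    # Sketch of a clean.sh: remove everything the test created.
    rm -f dig.out.*
    rm -f ns*/named.conf ns*/named.run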


Starting Nameservers
---
As noted earlier, a system test will involve a number of nameservers.  These
will be either instances of named, or special servers written in a language
such as Perl or Python.

For the former, the version of "named" being run is that in the "bin/named"
directory in the tree holding the tests (i.e. if "make test" is being run
immediately after "make", the version of "named" used is that just built).  The
configuration files, zone files etc. for these servers are located in
subdirectories of the test directory named "nsN", where N is a small integer.
The latter are special nameservers, mostly used for generating deliberately bad
responses, located in subdirectories named "ansN" (again, N is an integer).
In addition to configuration files, these directories should hold the
appropriate script files as well.

Note that the "N" for a particular test forms a single number space, e.g. if
there is an "ns2" directory, there cannot be an "ans2" directory as well.
Ideally, the directory numbers should start at 1 and work upwards.

When running a test, the servers are started using "start.sh" (which is nothing
more than a wrapper for start.pl).  The options for "start.pl" are documented
in the header for that file, so will not be repeated here.  In summary, when
invoked by "run.sh", start.pl looks for directories named "nsN" or "ansN" in
the test directory and starts the servers it finds there.
532"named" Command-Line Options
533---
534By default, start.pl starts a "named" server with the following options:
535
536    -c named.conf   Specifies the configuration file to use (so by implication,
537                    each "nsN" nameserver's configuration file must be called
538                    named.conf).
539
540    -d 99           Sets the maximum debugging level.
541
542    -D <name>       The "-D" option sets a string used to identify the
543                    nameserver in a process listing.  In this case, the string
544                    is the name of the subdirectory.
545
546    -g              Runs the server in the foreground and logs everything to
547                    stderr.
548
549    -m record
550                    Turns on these memory usage debugging flags.
551
552    -U 4            Uses four listeners.
553
554    -X named.lock   Acquires a lock on this file in the "nsN" directory, so
555                    preventing multiple instances of this named running in this
556                    directory (which could possibly interfere with the test).
557
558All output is sent to a file called "named.run" in the nameserver directory.
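
As a rough illustration (not a verbatim reproduction of the command that
start.pl builds), the defaults above amount to a command line along these lines
for a server in a hypothetical "ns2" subdirectory:

    named -c named.conf -d 99 -D ns2 -g -m record -U 4 -X named.lock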
559
560The options used to start named can be altered.  There are three ways of doing
561this.  "start.pl" checks the methods in a specific order: if a check succeeds,
562the options are set and any other specification is ignored.  In order, these
563are:
564
5651. Specifying options to "start.sh"/"start.pl" after the name of the test
566directory, e.g.
567
568    sh start.sh reclimit ns1 -- "-c n.conf -d 43"
569
570(This is only really useful when running tests interactively.)
571
5722. Including a file called "named.args" in the "nsN" directory.  If present,
573the contents of the first non-commented, non-blank line of the file are used as
574the named command-line arguments.  The rest of the file is ignored.
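
For example, an "nsN/named.args" file containing the single (hypothetical) line
below replaces the default options entirely, so any default flags that are
still wanted, such as "-c" and "-g", have to be repeated:

    -c named.conf -d 3 -D ns2 -g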

3. Tweaking the default command line arguments with "-T" options.  This flag is
used to alter the behavior of BIND for testing and is not documented in the
ARM.  The presence of certain files in the "nsN" directory adds flags to
the default command line (the content of the files is irrelevant - it
is only the presence that counts):

    named.noaa       Appends "-T noaa" to the command line, which causes
                     "named" to never set the AA bit in an answer.

    named.dropedns   Adds "-T dropedns" to the command line, which causes
                     "named" to recognise EDNS options in messages, but drop
                     messages containing them.

    named.maxudp1460 Adds "-T maxudp1460" to the command line, setting the
                     maximum UDP size handled by named to 1460.

    named.maxudp512  Adds "-T maxudp512" to the command line, setting the
                     maximum UDP size handled by named to 512.

    named.noedns     Appends "-T noedns" to the command line, which disables
                     recognition of EDNS options in messages.

    named.notcp      Adds "-T notcp", which disables TCP in "named".

    named.soa        Appends "-T nosoa" to the command line, which disables
                     the addition of SOA records to negative responses (or to
                     the additional section if the response is triggered by RPZ
                     rewriting).
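
Because only the presence of the file matters, setup.sh can enable one of these
behaviors simply by creating the trigger file, e.g. (a sketch for a
hypothetical server directory):

    touch ns3/named.dropedns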

Starting Other Nameservers
---
In contrast to "named", nameservers written in Perl or Python (whose script
file should have the name "ans.pl" or "ans.py" respectively) are started with a
fixed command line.  In essence, the server is given the address and nothing
else.

(This is not strictly true: Python servers are provided with the number of the
query port to use.  Altering the port used by Perl servers currently requires
creating a template file containing the "@PORT@" token, and having "setup.sh"
substitute the actual port being used before the test starts.)


Stopping Nameservers
---
As might be expected, the test system stops nameservers with the script
"stop.sh", which is little more than a wrapper for "stop.pl".  Like "start.pl",
the options available are listed in the file's header and will not be repeated
here.

In summary though, the nameservers for a given test, if left running by
specifying the "-k" flag to "run.sh" when the test is started, can be stopped
by the command:

    sh stop.sh <test-name> [server]

... where if the server (e.g. "ns1", "ans3") is not specified, all servers
associated with the test are stopped.


Adding a Test to the System Test Suite
---
Once a test has been created, the following files should be edited:

* conf.sh.in   The name of the test should be added to the PARALLELDIRS or
SEQUENTIALDIRS variables as appropriate.  The former is used for tests that
can run in parallel with other tests, the latter for tests that are unable to
do so.

* Makefile.in  The name of the test should be added to one of the PARALLEL
or SEQUENTIAL variables.

(It is likely that a future iteration of the system test suite will remove the
need to edit multiple files to add a test.)


Valgrind
---
When running system tests, named can be run under Valgrind.  The output from
Valgrind is sent to per-process files that can be reviewed after the test has
completed.  To enable this, set the USE_VALGRIND environment variable to
"helgrind" to run the Helgrind tool, or any other value to run the Memcheck
tool.  To use "helgrind" effectively, build BIND with --disable-atomic.
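
For example, to run a single test with named under the Memcheck tool (a sketch;
any value other than "helgrind" selects Memcheck):

    USE_VALGRIND=1 sh run.sh dnssec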


Maintenance Notes
===
This section is aimed at developers maintaining BIND's system test framework.

Notes on Parallel Execution
---
Although execution of an individual test is controlled by "run.sh", which
executes the above shell scripts (and starts the relevant servers) for each
test, the running of all tests in the test suite is controlled by the Makefile.
("runall.sh" does little more than invoke "make" on the Makefile.)

All system tests are capable of being run in parallel.  For this to work, each
test needs to use a unique set of ports.  To avoid the need to define which
tests use which ports (and so risk port clashes as further tests are added),
the ports are assigned when the tests are run.  This is achieved by having the
"test" target in the Makefile depend on "parallel.mk".  That file is created
when "make check" is run, and contains a target for each test of the form:

    <test-name>:
        @$(SHELL) run.sh -p <baseport> <test-name>

The <baseport> is unique and the values of <baseport> for each test are
separated by at least 100 ports.


Cleaning Up From Tests
---
When a test is run, up to three different types of files are created:

1. Files generated by the test itself, e.g. output from "dig" and "rndc", are
stored in the test directory.

2. Files produced by named which may not be cleaned up if named exits
abnormally, e.g. core files, PID files etc., are stored in the test directory.

3. A file "test.output.<test-name>" containing the text written to stdout by
the test is written to bin/tests/system/.  This file is only produced when the
test is run as part of the entire test suite (e.g. via "runall.sh").

If the test fails, all these files are retained.  But if the test succeeds,
they are cleaned up at different times:

1. Files generated by the test itself are cleaned up by the test's own
"clean.sh", which is called from "run.sh".

2. Files that may not be cleaned up if named exits abnormally can be removed
using the "cleanall.sh" script.

3. "test.output.*" files are deleted when the test suite ends.  At this point,
the script "testsummary.sh" is run, which concatenates all the "test.output.*"
files into a single "systests.output" file before deleting them.
711