README

Copyright (C) Internet Systems Consortium, Inc. ("ISC")

See COPYRIGHT in the source root or http://isc.org/copyright.html for terms.

Introduction
===
This directory holds a simple test environment for running BIND 9 system tests
involving multiple name servers.

With the exception of "common" (which holds configuration information common to
multiple tests) and "win32" (which holds files needed to run the tests in a
Windows environment), each directory holds a set of scripts and configuration
files to test different parts of BIND.  The directories are named for the
aspect of BIND they test, for example:

  dnssec/       DNSSEC tests
  forward/      Forwarding tests
  glue/         Glue handling tests

etc.

Typically each set of tests sets up 2-5 name servers and then performs one or
more tests against them.  Within the test subdirectory, each name server has a
separate subdirectory containing its configuration data.  These subdirectories
are named "nsN" or "ansN" (where N is a number between 1 and 8, e.g. ns1,
ans2, etc.).

The tests are completely self-contained and do not require access to the real
DNS.  Generally, one of the test servers (usually ns1) is set up as a root
nameserver and is listed in the hints file of the others.


Preparing to Run the Tests
===
To enable all servers to run on the same machine, they bind to separate virtual
IP addresses on the loopback interface.  ns1 runs on 10.53.0.1, ns2 on
10.53.0.2, etc.  Before running any tests, you must set up these addresses by
running the command

    sh ifconfig.sh up

as root.  The interfaces can be removed by executing the command:

    sh ifconfig.sh down

... also as root.

The servers use unprivileged ports (above 1024) instead of the usual port 53,
so they can be run without root privileges once the interfaces have been set
up.


Note for macOS Users
---
If you wish to make the interfaces survive across reboots, copy
org.isc.bind.system and org.isc.bind.system.plist to /Library/LaunchDaemons,
then run

    launchctl load /Library/LaunchDaemons/org.isc.bind.system.plist

... as root.


Running the System Tests
===

Running an Individual Test
---
The tests can be run individually using the following command:

    sh run.sh [flags] <test-name> [<test-arguments>]

e.g.

    sh run.sh [flags] notify

Optional flags are:

    -k              Keep servers running after the test completes.  Each test
                    usually starts a number of nameservers, either instances
                    of the "named" being tested, or custom servers (written in
                    Python or Perl) that exhibit test-specific behavior.  The
                    servers are automatically started before the test is run
                    and stopped after it ends.  This flag leaves them running
                    at the end of the test, so that additional queries can be
                    sent by hand.  To stop the servers afterwards, use the
                    command "sh stop.sh <test-name>".

    -n              Noclean - do not remove the output files if the test
                    completes successfully.  By default, files created by the
                    test are deleted if it passes; they are not deleted if the
                    test fails.

    -p <number>     Sets the range of ports used by the test.  A block of 100
                    ports is available for each test, the number given to the
                    "-p" switch being the start of that block (e.g. "-p 7900"
                    means that the test can use ports 7900 through 7999).  If
                    not specified, the test has ports 5000 through 5099
                    available to it.

Arguments are:

    test-name       Mandatory.  The name of the test, which is the name of the
                    subdirectory in bin/tests/system holding the test files.

    test-arguments  Optional arguments that are passed to each of the test's
                    scripts.


Running All The System Tests
---
To run all the system tests, enter the command:

    sh runall.sh [-c] [-n] [numproc]

The optional flag "-c" forces colored output (by default, system test output is
not printed in color because run.sh is piped through "tee").

The optional flag "-n" has the same effect as it does for "run.sh" - it causes
the retention of all output files from all tests.

The optional "numproc" argument specifies the maximum number of tests that can
run in parallel.  The default is 1, which means that all of the tests run
sequentially.  If greater than 1, up to "numproc" tests run simultaneously,
with new tests being started as others finish.  Each test gets a unique set of
ports, so there is no danger of tests interfering with one another.  Parallel
running reduces the total time taken to run the BIND system tests, but the
output from the tests is interleaved on the screen.  However, the
systests.output file produced at the end of the run (in the bin/tests/system
directory) contains the output from each test in sequential order.

Note that it is not possible to pass arguments to tests through the
"runall.sh" script.

A run of all the system tests can also be initiated via make:

    make [-j numproc] test

In this case, retention of the output files after a test completes successfully
is specified by setting the environment variable SYSTEMTEST_NO_CLEAN to 1 prior
to running make, e.g.

    SYSTEMTEST_NO_CLEAN=1 make [-j numproc] test

while setting the environment variable SYSTEMTEST_FORCE_COLOR to 1 forces
system test output to be printed in color.


Running Multiple System Test Suites Simultaneously
---
In some cases it may be desirable to have multiple instances of the system test
suite running simultaneously (e.g. from different terminal windows).  To do
this:

1. Each installation must have its own directory tree.  The system tests create
files in the test directories, so separate directory trees are required to
avoid interference between the same test running in the different
installations.

2. For one of the test suites, the starting port number must be specified by
setting the environment variable STARTPORT before starting the test suite.
Each test suite comprises about 100 tests, each being allocated a block of 100
ports.  The port ranges for each test are allocated sequentially, so each test
suite requires about 10,000 ports to itself.  By default, the port allocation
starts at 5,000.  So the following set of commands:

    Terminal Window 1:
        cd <installation-1>/bin/tests/system
        sh runall.sh 4

    Terminal Window 2:
        cd <installation-2>/bin/tests/system
        STARTPORT=20000 sh runall.sh 4

... will start the test suite for installation-1 using the default base port
of 5,000, so that suite will use ports 5,000 through 15,000 (or thereabouts).
The "STARTPORT=20000" prefix on the run for installation-2 means that suite
uses ports 20,000 through 30,000 or so.


Format of Test Output
---
All output from the system tests is in the form of lines with the following
structure:

    <letter>:<test-name>:<message> [(<number>)]

e.g.

    I:catz:checking that dom1.example is not served by master (1)

The meanings of the fields are as follows:

<letter>
This indicates the type of message.  It is one of:

    S   Start of the test
    A   Start of test (retained for backwards compatibility)
    T   Start of test (retained for backwards compatibility)
    E   End of the test
    I   Information.  A test will typically output many of these messages
        during its run, indicating test progress.  Note that such a message may
        be of the form "I:testname:failed", indicating that a sub-test has
        failed.
    R   Result.  Each test produces exactly one such message, which is of the
        form:

                R:<test-name>:<result>

        where <result> is one of:

            PASS        The test passed
            FAIL        The test failed
            SKIPPED     The test was not run, usually because some
                        prerequisites required to run the test are missing.
            UNTESTED    The test was not run for some other reason, e.g. a
                        prerequisite is available but is not compatible with
                        the platform on which the test is run.

<test-name>
This is the name of the test from which the message emanated, which is also the
name of the subdirectory holding the test files.

<message>
This is text output by the test during its execution.

(<number>)
If present, this correlates with a file created by the test.  The tests
execute commands and route the output of each command to a file.  The name of
this file depends on the command and the test, but will usually be of the form:

    <command>.out.<suffix><number>

e.g. nsupdate.out.test28, dig.out.q3.  This aids diagnosis of problems by
allowing the output that caused the problem message to be identified.


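Because results lines follow this fixed "R:<test-name>:<result>" shape, a
captured run can be summarized with standard tools.  The following sketch
(using a locally created sample file, not real test output) counts passes and
failures:

```shell
# Build a small sample of system-test output (hypothetical results).
cat > sample.output <<'EOF'
I:catz:checking that dom1.example is not served by master (1)
R:catz:PASS
I:dnssec:failed
R:dnssec:FAIL
R:rndc:SKIPPED
EOF

# Count results by filtering the "R:" lines.
passes=$(grep -c '^R:.*:PASS$' sample.output)
failures=$(grep -c '^R:.*:FAIL$' sample.output)
echo "passed=$passes failed=$failures"
```

testsummary.sh performs a similar scan over the collected output when the full
suite finishes.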
Re-Running the Tests
---
If there is a requirement to re-run a test (or the entire test suite), the
files produced by the tests should be deleted first.  Normally, these files are
deleted if the test succeeds but are retained on error.  The run.sh script
automatically calls a given test's clean.sh script before invoking its setup.sh
script.

Deletion of the files produced by the set of tests (e.g. after the execution
of "runall.sh") can be carried out using the command:

    sh cleanall.sh

or

    make testclean

(Note that the Makefile has two other targets for cleaning up files: "clean"
will delete all the files produced by the tests, as well as the object and
executable files used by the tests.  "distclean" does all the work of "clean"
as well as deleting configuration files produced by "configure".)


Developer Notes
===
This section is intended for developers writing new tests.


Overview
---
As noted above, each test is in a separate directory.  To interact with the
test framework, the directories contain the following standard files:

prereq.sh   Run at the beginning to determine whether the test can be run at
            all; if not, we see a result of R:SKIPPED or R:UNTESTED.  This file
            is optional: if not present, the test is assumed to have all its
            prerequisites met.

setup.sh    Run after prereq.sh, this sets up the preconditions for the tests.
            Although optional, virtually all tests will require such a file to
            set up the ports they should use for the test.

tests.sh    Runs the actual tests.  This file is mandatory.

clean.sh    Run at the end to clean up temporary files, but only if the test
            completed successfully and its running was not inhibited by the
            "-n" switch being passed to "run.sh".  Otherwise the temporary
            files are left in place for inspection.

ns<N>       These subdirectories contain test name servers that can be queried
            or can interact with each other.  The value of N indicates the
            address the server listens on: for example, ns2 listens on
            10.53.0.2, and ns4 on 10.53.0.4.  All test servers use an
            unprivileged port, so they don't need to run as root.  These
            servers log at the highest debug level and the log is captured in
            the file "named.run".

ans<N>      Like ns<N>, but these are simple mock name servers implemented in
            Perl or Python.  They are generally programmed to misbehave in
            ways named would not, so as to exercise named's ability to
            interoperate with badly behaved name servers.


Port Usage
---
In order for the tests to run in parallel, each test requires a unique set of
ports.  These are specified by the "-p" option passed to "run.sh", which sets
environment variables that the scripts listed above can reference.

The convention used in the system tests is that the number passed is the start
of a range of 100 ports.  The test is free to use the ports as required,
although the first ten ports in the block are named, and tests generally use
the named ports for their intended purpose.  The names of the environment
variables are:

    PORT                     Number to be used for the query port.
    CONTROLPORT              Number to be used as the RNDC control port.
    EXTRAPORT1 - EXTRAPORT8  Eight port numbers that can be used as needed.

Two other environment variables are defined:

    LOWPORT                  The lowest port number in the range.
    HIGHPORT                 The highest port number in the range.

Since port ranges usually start on a boundary of 10, the variables are set such
that the last digit of the port number corresponds to the number of the
EXTRAPORTn variable.  For example, if the port range were to start at 5200, the
port assignments would be:

    PORT = 5200
    EXTRAPORT1 = 5201
        :
    EXTRAPORT8 = 5208
    CONTROLPORT = 5209
    LOWPORT = 5200
    HIGHPORT = 5299

When running tests in parallel (i.e. giving a value of "numproc" greater than 1
in the "make" or "runall.sh" commands listed above), it is guaranteed that each
test will get a set of unique port numbers.


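The layout above can be derived mechanically from the base port.  The
following is an illustrative sketch of that arithmetic, not the actual logic
in run.sh:

```shell
# Sketch: derive the per-test port variables from a base port,
# following the layout described above (base 5200 as in the example).
base=5200
PORT=$base
i=1
while [ $i -le 8 ]; do
    # EXTRAPORTn ends in the digit n when the base is a multiple of 10.
    eval "EXTRAPORT$i=$((base + i))"
    i=$((i + 1))
done
CONTROLPORT=$((base + 9))
LOWPORT=$base
HIGHPORT=$((base + 99))
```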
Writing a Test
---
The test framework requires up to four shell scripts (listed above) as well as
a number of nameserver instances to run.  Certain expectations are placed on
each script:


General
---
1. Each of the four scripts will be invoked with the command

    (cd <test-directory> ; sh <script> [<arguments>] )

... so that the working directory when the script starts executing is the test
directory.

2. Arguments can only be passed to the scripts if the test is being run as a
one-off with "run.sh".  In this case, everything on the command line after the
name of the test is passed to each script.  For example, the command:

    sh run.sh -p 12300 mytest -D xyz

... will run "mytest" with a port range of 12300 to 12399.  Each of the
framework scripts provided by the test will be invoked with the remaining
arguments, e.g.:

    (cd mytest ; sh prereq.sh -D xyz)
    (cd mytest ; sh setup.sh -D xyz)
    (cd mytest ; sh tests.sh -D xyz)
    (cd mytest ; sh clean.sh -D xyz)

No arguments will be passed to the test scripts if the test is run as part of
a run of the full test suite (e.g. the tests are started with "runall.sh").

3. Each script should start with the following lines:

    SYSTEMTESTTOP=..
    . $SYSTEMTESTTOP/conf.sh

"conf.sh" defines a series of environment variables together with functions
useful for the test scripts.  (conf.sh.win32 is the Windows equivalent of this
file.)


prereq.sh
---
As noted above, this is optional.  If present, it should check whether specific
software needed to run the test is available and/or whether BIND has been
configured with the appropriate options required.

    * If the software required to run the test is present and the BIND
      configure options are correct, prereq.sh should return with a status code
      of 0.

    * If the software required to run the test is not available and/or BIND
      has not been configured with the appropriate options, prereq.sh should
      return with a status code of 1.

    * If there is some other problem (e.g. prerequisite software is available
      but is not properly configured), a status code of 255 should be returned.


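A minimal prereq.sh following this convention might look like the sketch
below.  The helper function and the tool being checked for are purely
illustrative; a real prereq.sh checks for whatever its test actually needs:

```shell
# Hypothetical prereq.sh sketch: return 0 if a prerequisite is
# available, 1 if it is missing (-> R:SKIPPED).
check_prereq() {
    # $1 is the name of a required command; "command -v" finds it in PATH.
    if command -v "$1" > /dev/null 2>&1; then
        return 0    # prerequisite present
    else
        return 1    # prerequisite missing
    fi
}

# "sh" is always present, so this illustrative check succeeds.
check_prereq sh && result=found || result=missing
```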
setup.sh
---
This is responsible for setting up the configuration files used in the test.

To cope with the varying port numbers, ports are not hard-coded into
configuration files (or, for that matter, into scripts that emulate
nameservers).  Instead, setup.sh is responsible for editing the configuration
files to set the port numbers.

To do this, configuration files should be supplied in the form of templates
containing tokens identifying ports.  The tokens have the same names as the
environment variables listed above, but are prefixed and suffixed by the "@"
symbol.  For example, a fragment of a configuration file template might look
like:

    controls {
        inet 10.53.0.1 port @CONTROLPORT@ allow { any; } keys { rndc_key; };
    };

    options {
        query-source address 10.53.0.1;
        notify-source 10.53.0.1;
        transfer-source 10.53.0.1;
        port @PORT@;
        allow-new-zones yes;
    };

setup.sh should copy the template to the desired filename using the
"copy_setports" shell function defined in "conf.sh", i.e.

    copy_setports ns1/named.conf.in ns1/named.conf

This replaces the tokens @PORT@, @CONTROLPORT@, and @EXTRAPORT1@ through
@EXTRAPORT8@ with the contents of the environment variables listed above.
setup.sh should do this for all configuration files required when the test
starts.

("setup.sh" should also use this method for replacing the tokens in any Perl or
Python name servers used in the test.)


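The substitution "copy_setports" performs can be pictured as a sed pass over
the template.  This stand-alone sketch shows the idea; it is not the actual
conf.sh implementation, and the port values are assumed for illustration:

```shell
# Sketch of @TOKEN@ substitution, with PORT and CONTROLPORT set the
# way run.sh would set them.  Not the real copy_setports.
PORT=5300
CONTROLPORT=5309

cat > named.conf.in <<'EOF'
controls {
    inet 10.53.0.1 port @CONTROLPORT@ allow { any; } keys { rndc_key; };
};
options {
    port @PORT@;
};
EOF

# Replace each token with the corresponding environment variable.
sed -e "s/@PORT@/$PORT/g" -e "s/@CONTROLPORT@/$CONTROLPORT/g" \
    named.conf.in > named.conf
```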
tests.sh
---
This is the main test file and its contents depend on the test.  The contents
are completely up to the developer, although most test scripts have a form
similar to the following for each sub-test:

    1. n=`expr $n + 1`
    2. echo_i "prime cache nodata.example ($n)"
    3. ret=0
    4. $DIG -p ${PORT} @10.53.0.1 nodata.example TXT > dig.out.test$n
    5. grep "status: NOERROR" dig.out.test$n > /dev/null || ret=1
    6. grep "ANSWER: 0," dig.out.test$n > /dev/null || ret=1
    7. if [ $ret != 0 ]; then echo_i "failed"; fi
    8. status=`expr $status + $ret`

1.  Increment the test number "n" (initialized to zero at the start of the
    script).

2.  Indicate that the sub-test is about to begin.  Note that "echo_i" is used
    instead of "echo": echo_i is a function defined in "conf.sh" which
    prefixes the message with "I:<test-name>:", allowing the output from each
    test to be identified within the output.  The test number is included in
    the message in order to tie the sub-test to its output.

3.  Initialize the return status.

4 - 6. Carry out the sub-test.  In this case, a nameserver is queried (note
    that the port used is given by the PORT environment variable, which was set
    by the inclusion of the file "conf.sh" at the start of the script).  The
    output is routed to a file whose suffix includes the test number.  The
    response from the server is examined and, in this case, if the required
    string is not found, an error is indicated by setting "ret" to 1.

7.  If the sub-test failed, a message is printed.  "echo_i" is used to add
    the prefix "I:<test-name>:" to the message before it is output.

8.  "status", used to track how many of the sub-tests have failed, is
    incremented accordingly.  The value of "status" determines the status
    returned by "tests.sh", which in turn determines whether the framework
    prints the PASS or FAIL message.

Regardless of this, the rules that should be followed are:

a.  Use the environment variables set by conf.sh to determine the ports to use
    for sending and receiving queries.

b.  Use a counter to tag messages and to associate the messages with the output
    files.

c.  Store all output produced by queries/commands in files.  These files
    should be named according to the command that produced them, e.g. "dig"
    output should be stored in a file "dig.out.<suffix>", the suffix being
    related to the value of the counter.

d.  Use "echo_i" to output informational messages.

e.  Retain a count of test failures and return this as the exit status from
    the script.


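The eight-step pattern can be exercised stand-alone.  In the sketch below, a
locally created file stands in for real dig output and plain echo stands in
for conf.sh's echo_i, so nothing here depends on the test framework:

```shell
# Stand-alone sketch of the sub-test pattern shown above.
n=0
status=0

# Fake "dig" output for the sub-test to examine.
printf 'status: NOERROR\nANSWER: 0,\n' > dig.out.stub

n=`expr $n + 1`
echo "I:example:checking stubbed response ($n)"
ret=0
grep "status: NOERROR" dig.out.stub > /dev/null || ret=1
grep "ANSWER: 0," dig.out.stub > /dev/null || ret=1
if [ $ret != 0 ]; then echo "I:example:failed"; fi
status=`expr $status + $ret`
```

In a real tests.sh, "status" is returned as the script's exit code so the
framework can print PASS or FAIL.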
clean.sh
---
The inverse of "setup.sh", this is invoked by the framework to clean up the
test directory.  It should delete all files that have been created by the test
during its run.


Starting Nameservers
---
As noted earlier, a system test will involve a number of nameservers.  These
will be either instances of named, or special servers written in a language
such as Perl or Python.

For the former, the version of "named" being run is the one in the "bin/named"
directory of the tree holding the tests (i.e. if "make test" is run
immediately after "make", the version of "named" used is the one just built).
The configuration files, zone files, etc. for these servers are located in
subdirectories of the test directory named "nsN", where N is a small integer.
The latter are special nameservers, mostly used for generating deliberately bad
responses, located in subdirectories named "ansN" (again, N is an integer).
In addition to configuration files, these directories should hold the
appropriate script files as well.

Note that the "N" values for a particular test form a single number space,
e.g. if there is an "ns2" directory, there cannot be an "ans2" directory as
well.  Ideally, the directory numbers should start at 1 and work upwards.

When running a test, the servers are started using "start.sh" (which is nothing
more than a wrapper for start.pl).  The options for "start.pl" are documented
in the header of that file, so they are not repeated here.  In summary, when
invoked by "run.sh", start.pl looks for directories named "nsN" or "ansN" in
the test directory and starts the servers it finds there.


"named" Command-Line Options
---
By default, start.pl starts a "named" server with the following options:

    -c named.conf   Specifies the configuration file to use (so by implication,
                    each "nsN" nameserver's configuration file must be called
                    named.conf).

    -d 99           Sets the maximum debugging level.

    -D <name>       Sets a string used to identify the nameserver in a process
                    listing.  In this case, the string is the name of the
                    subdirectory.

    -g              Runs the server in the foreground and logs everything to
                    stderr.

    -m record,size,mctx
                    Turns on these memory usage debugging flags.

    -U 4            Uses four listeners.

    -X named.lock   Acquires a lock on this file in the "nsN" directory,
                    preventing multiple instances of this named running in
                    this directory (which could interfere with the test).

All output is sent to a file called "named.run" in the nameserver directory.

The options used to start named can be altered.  There are three ways of doing
this.  "start.pl" checks the methods in a specific order: if a check succeeds,
the options are set and any other specification is ignored.  In order, these
are:

1. Specifying options to "start.sh"/"start.pl" after the name of the test
directory, e.g.

    sh start.sh reclimit ns1 -- "-c n.conf -d 43"

(This is only really useful when running tests interactively.)

2. Including a file called "named.args" in the "nsN" directory.  If present,
the contents of the first non-commented, non-blank line of the file are used as
the named command-line arguments.  The rest of the file is ignored.

3. Tweaking the default command-line arguments with "-T" options.  This flag is
used to alter the behavior of BIND for testing and is not documented in the
ARM.  The presence of certain files in the "nsN" directory adds flags to
the default command line (the content of the files is irrelevant - only their
presence counts):

    named.noaa       Appends "-T noaa" to the command line, which causes
                     "named" to never set the AA bit in an answer.

    named.dropedns   Adds "-T dropedns" to the command line, which causes
                     "named" to recognize EDNS options in messages, but drop
                     messages containing them.

    named.maxudp1460 Adds "-T maxudp1460" to the command line, setting the
                     maximum UDP size handled by named to 1460.

    named.maxudp512  Adds "-T maxudp512" to the command line, setting the
                     maximum UDP size handled by named to 512.

    named.noedns     Appends "-T noedns" to the command line, which disables
                     recognition of EDNS options in messages.

    named.notcp      Adds "-T notcp", which disables TCP in "named".

    named.soa        Appends "-T nosoa" to the command line, which disables
                     the addition of SOA records to negative responses (or to
                     the additional section if the response is triggered by RPZ
                     rewriting).

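start.pl implements this marker-file check in Perl; the idea can be sketched
in shell as follows.  The server directory name is hypothetical and the sketch
is an illustration, not the actual start.pl logic:

```shell
# Sketch: turn the presence of named.* marker files into extra
# "-T" options for the named command line.
dir=ns1.example    # hypothetical server directory
mkdir -p "$dir"
touch "$dir/named.noaa" "$dir/named.notcp"

extra=""
for marker in noaa dropedns maxudp1460 maxudp512 noedns notcp; do
    if [ -f "$dir/named.$marker" ]; then
        extra="$extra -T $marker"
    fi
done
# named.soa is the one marker whose flag name differs from the file name.
if [ -f "$dir/named.soa" ]; then
    extra="$extra -T nosoa"
fi
echo "extra options:$extra"
rm -r "$dir"
```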
Starting Other Nameservers
---
In contrast to "named", nameservers written in Perl or Python (whose script
files should be named "ans.pl" or "ans.py" respectively) are started with a
fixed command line.  In essence, the server is given the address and nothing
else.

(This is not strictly true: Python servers are provided with the number of the
query port to use.  Altering the port used by Perl servers currently requires
creating a template file containing the "@PORT@" token and having "setup.sh"
substitute the actual port being used before the test starts.)


Stopping Nameservers
---
As might be expected, the test system stops nameservers with the script
"stop.sh", which is little more than a wrapper for "stop.pl".  Like "start.pl",
the options available are listed in the file's header and are not repeated
here.

In summary, the nameservers for a given test, if left running by specifying
the "-k" flag to "run.sh" when the test is started, can be stopped with the
command:

    sh stop.sh <test-name> [server]

... where, if the server (e.g. "ns1", "ans3") is not specified, all servers
associated with the test are stopped.


Adding a Test to the System Test Suite
---
Once a test has been created, the following files should be edited:

* conf.sh.in  The name of the test should be added to the PARALLELDIRS or
SEQUENTIALDIRS variables as appropriate.  The former is used for tests that
can run in parallel with other tests, the latter for tests that are unable to
do so.

* conf.sh.win32 This is the Windows equivalent of conf.sh.in.  The name of the
test should be added to the PARALLELDIRS or SEQUENTIALDIRS variables as
appropriate.

* Makefile.in The name of the test should be added to either the PARALLEL
or the SEQUENTIAL variable.

(It is likely that a future iteration of the system test suite will remove the
need to edit multiple files to add a test.)


Valgrind
---
When running system tests, named can be run under Valgrind.  The output from
Valgrind is sent to per-process files that can be reviewed after the test has
completed.  To enable this, set the USE_VALGRIND environment variable to
"helgrind" to run the Helgrind tool, or to any other value to run the Memcheck
tool.  To use "helgrind" effectively, build BIND with --disable-atomic.


Maintenance Notes
===
This section is aimed at developers maintaining BIND's system test framework.

Notes on Parallel Execution
---
Although execution of an individual test is controlled by "run.sh", which
executes the above shell scripts (and starts the relevant servers) for each
test, the running of all tests in the test suite is controlled by the Makefile.
("runall.sh" does little more than invoke "make" on the Makefile.)

All system tests are capable of being run in parallel.  For this to work, each
test needs to use a unique set of ports.  To avoid the need to define which
tests use which ports (and so risk port clashes as further tests are added),
the ports are assigned when the tests are run.  This is achieved by having the
"test" target in the Makefile depend on "parallel.mk".  That file is created
when "make check" is run, and contains a target for each test of the form:

    <test-name>:
        @$(SHELL) run.sh -p <baseport> <test-name>

The <baseport> is unique and the values of <baseport> for each test are
separated by at least 100 ports.


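The generation of such targets can be sketched as a small shell loop.  The
test names and output file name below are hypothetical; this is not the
generator actually used by the build system:

```shell
# Sketch: emit a parallel.mk-style target per test, with base ports
# spaced 100 apart.  Test names here are hypothetical.
baseport=5000
: > parallel.mk.sketch
for test in dnssec forward glue; do
    # "$(SHELL)" is left literal so make expands it, as in parallel.mk.
    printf '%s:\n\t@$(SHELL) run.sh -p %d %s\n\n' \
        "$test" "$baseport" "$test" >> parallel.mk.sketch
    baseport=$((baseport + 100))
done
```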
Cleaning Up From Tests
---
When a test is run, up to three different types of files are created:

1. Files generated by the test itself, e.g. output from "dig" and "rndc", are
stored in the test directory.

2. Files produced by named that may not be cleaned up if named exits
abnormally, e.g. core files, PID files, etc., are stored in the test directory.

3. A file "test.output.<test-name>" containing the text written to stdout by
the test is written to bin/tests/system/.  This file is only produced when the
test is run as part of the entire test suite (e.g. via "runall.sh").

If the test fails, all these files are retained, but if the test succeeds,
they are cleaned up at different times:

1. Files generated by the test itself are cleaned up by the test's own
"clean.sh", which is called from "run.sh".

2. Files that may not be cleaned up if named exits abnormally can be removed
using the "cleanall.sh" script.

3. "test.output.*" files are deleted when the test suite ends.  At this point,
the script "testsummary.sh" is called, which concatenates all the
"test.output.*" files into a single "systests.output" file before deleting
them.
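That final collation step can be pictured as follows.  This is a simplified
sketch with fabricated sample files, not the actual testsummary.sh:

```shell
# Simplified sketch of the final collation step: concatenate the
# per-test output files into systests.output, then delete them.
printf 'R:alpha:PASS\n' > test.output.alpha
printf 'R:beta:FAIL\n'  > test.output.beta

cat test.output.* > systests.output
rm -f test.output.*
```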
721