RCS: @(#) $Id: README.developer,v 1.6 2009/06/02 22:49:55 andreas_kupries Exp $

Welcome to tcllib, the Tcl Standard Library.
============================================

Introduction
------------

This README is intended to be a guide to the tools available to a
developer working on Tcllib, to make their tasks easier to perform.
It is our hope that this will improve the quality of even non-released
revisions of Tcllib, and make the work of the release manager easier
as well.

Audience
--------

The intended audience is, first and foremost, developers beginning to
work on Tcllib. To an experienced developer this document will be less
of a guide and more of a reference. Anybody else interested in working
on Tcllib is invited as well.


Directory hierarchy and file basics
-----------------------------------

The main directories under the tcllib top directory are

    modules/
    examples/
and apps/

Each directory FOO under modules/ represents one package, sometimes
more. In the latter case the packages are usually related in some way.
Examples are the base64, math, and struct modules, with loose (base64)
to strong (math) relations between the packages.

Examples associated with a module FOO, if there are any, are placed
into the directory

    examples/FOO

Any type of distributable application can be found under apps/,
together with its documentation, if any. Note that the apps/ directory
is currently not split into sub-directories.

Regarding the files in Tcllib, the most common types found are

    .tcl    Tcl code for a package.

    .man    Documentation for a package, in doctools format.

    .test   Test suite for a package, or part of one. Based on
            tcltest.

    .bench  Performance benchmarks for a package, or part of one.
            Based on modules/bench.

    .pcx    Syntax rules for TclDevKit's tclchecker.
            Using these rules allows tclchecker to check the use of
            commands of a Tcllib package X without having to scan the
            implementation of X, i.e. its .tcl files.


Adding a new module
-------------------

Assuming that FOO is the name of the new module, and T is the toplevel
directory of the Tcllib sources:

(1) Create the directory T/modules/FOO and put all the files of the
    module into it. Note:

    * The file 'pkgIndex.tcl' is required.

    * Implementation files should have the extension '.tcl',
      naturally.

    * If available, documentation should be in doctools format, and
      the files should have the extension '.man' for SAK to recognize
      them.

    * If available, the testsuite(s) should use 'tcltest' and the
      general format used by the other modules in Tcllib (declaration
      of the minimally needed Tcl, tcltest, supporting packages,
      etc.). The file(s) should have the extension '.test' for SAK to
      recognize them.

      Note that an empty testsuite, or a testsuite which does not
      perform any tests, is less than useful and will not be
      accepted.

    * If available, the benchmark(s) should use 'bench' and the
      general format used by the other modules in Tcllib. The file(s)
      should have the extension '.bench' for SAK to recognize them.

    * Other files can be named and placed as the module sees fit.

(2) If the new module has an example application A which is polished
    enough for general use, put this application into the file
    "T/apps/A.tcl", and its documentation into the file
    "T/apps/A.man". While documentation for the application is
    optional, it is preferred.

    For examples which are not full-fledged applications, are only
    skeletons, or are not really polished for use, etc., create the
    directory T/examples/FOO/ and put them there.

    A key difference is what happens to them on installation, and
    what the target audience is.
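As a concrete illustration of the required 'pkgIndex.tcl', here is a
minimal sketch for a hypothetical module and package named 'foo'. The
package name, version, and file name are placeholders, not taken from
an actual Tcllib module:

```tcl
# T/modules/foo/pkgIndex.tcl
#
# Tells the Tcl package system how to load the (hypothetical) package
# 'foo' on demand. The guard skips registration on Tcl versions older
# than what the package needs.
if {![package vsatisfies [package provide Tcl] 8.4]} {return}
package ifneeded foo 0.1 [list source [file join $dir foo.tcl]]
```

With such an index in place, 'package require foo' sources foo.tcl
from the module directory the first time the package is requested.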
    The examples are for developers using packages in Tcllib, whereas
    the applications are also for users of Tcllib who do not have an
    interest in developing for and with it. As such, applications are
    installed as regular commands, accessible through the PATH, and
    example files are not installed.

(3) To make Tcllib's installer aware of FOO, edit the file

        T/support/installation/modules.tcl

    Add a line 'Module FOO $impaction $docaction $exaction'. The
    various actions describe to the installer how to install the
    implementation files, the documentation, and the examples.

    Add a line 'Application A' for any application A which was added
    to T/apps for FOO.

    The following actions are available:

    Implementation

    _tcl - Copy all .tcl files in T/modules/FOO into the installation.
    _tcr - As above, but for .tcl files in subdirectories as well.
    _tci - _tcl + copying of a tclIndex - special to modules 'math', 'control'.
    _msg - _tcl + copying of subdir 'msgs' - special to modules 'dns', 'log'.
    _doc - _tcl + copying of subdir 'mpformats' - special to module 'doctools'.
    _tex - _tcl + copying of .tex files - special to module 'textutil'.

    The _null action, see below, is available in principle too, but a
    module without an implementation does not make sense.

    Documentation

    _null - Module has no documentation, do nothing.
    _man  - Process the .man files in T/modules/FOO and install the
            results (nroff and/or HTML) in the proper location, as
            given to the installer.

    Examples

    _null - Module has no examples, do nothing.
    _exa  - Copy the directory T/examples/FOO (recursively) to the
            install location for examples.
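As an illustration, the modules.tcl entries for a hypothetical module
'foo' with a plain .tcl implementation, .man documentation, an
examples directory, and one application 'fooapp' might look like the
following. The names are placeholders, not an actual Tcllib module:

```tcl
# In T/support/installation/modules.tcl (hypothetical entries):
Module      foo    _tcl _man _exa
Application fooapp
```

A module without documentation or examples would use _null in the
corresponding position instead.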
Testing modules
---------------

To run the testsuite of a module FOO in tcllib use the 'test run'
argument of sak.tcl, like so:

   % pwd
   /the/tcllib/toplevel/directory

   % ./sak.tcl test run FOO
or % ./sak.tcl test run modules/FOO

To run the testsuites of all modules, either invoke 'test run'
without a module name, or use 'make test'. The latter assumes that
configure was run for Tcllib before, i.e.:

   % ./sak.tcl test run
or % make test

In all of the above cases the result is a combination of a progress
display and the testsuite log, showing for each module the tests that
passed or failed, and how many of each, in a summary at the end.

To get a detailed log it is necessary to invoke 'test run' with
additional options.

First example:

   % ./sak.tcl test run -l LOG FOO

This shows the same short log on the terminal, writes a detailed log
to the file LOG.log, and writes excerpts to other files (LOG.summary,
LOG.failures, etc.).

Second example:

   % ./sak.tcl test run -v FOO
   % make test > LOG

This writes the detailed log to stdout, or to the file LOG, instead
of the short log. In all cases, the detailed log contains a list of
all test cases executed, which of them failed, and how they failed
(expected versus actual results).

Note:
The commands

    % make test
and % make test > LOG

are able to generate different output (short vs. long log) because the
Makefile target contains code which detects that stdout has been
redirected to a file and acts accordingly.

Non-developers should report problems in Tcllib's bug tracker.
Information about its location and the relevant category can be found
in the section 'BUGS, IDEAS, FEEDBACK' of the manpage of the module
and/or package.
Module documentation
--------------------

The main format used for the documentation of packages in Tcllib is
'doctools', the support packages of which are part of Tcllib; see the
module 'doctools'.

To convert this documentation to HTML, nroff manpages, or some other
format, use the 'doc' argument of sak.tcl, like so:

   % pwd
   /the/tcllib/toplevel/directory

   % ./sak.tcl doc html FOO
or % ./sak.tcl doc html modules/FOO

The result of the conversion can be found in the newly-created 'doc'
directory in the current working directory.

The set of formats the documentation can be converted into can be
queried via

   % ./sak.tcl help doc

To convert the documentation of all modules, either invoke 'doc'
without a module name, or use 'make html-doc', etc. The latter
assumes that configure was run for Tcllib before, i.e.:

   % ./sak.tcl doc html
or % make html-doc

Note the special format 'validate'. Using this format does not convert
the documentation to anything (and the sub-directory 'doc' will not be
created); it just checks that the documentation is syntactically
correct. I.e.:

   % ./sak.tcl doc validate modules/FOO
or % ./sak.tcl doc validate


Validating modules
------------------

Running the testsuite of a module, or checking the syntax of its
documentation (see the previous sections), are two forms of
validation.

The 'validate' command of sak.tcl provides a few more. The online
documentation of this command is available via

   % ./sak.tcl help validate

The validated parts are man pages, testsuites, version information,
and syntax - the latter only if various static syntax checkers are
available on the PATH, like TclDevKit's tclchecker.

Note that testsuite validation does not execute the testsuites; it
only checks whether a package has a testsuite or not.
It is strongly recommended to validate a module before committing any
type of change made to it.

It is recommended to validate all modules before committing any type
of change made to one of them. We have inter-dependencies between
packages in Tcllib, thus changing one package may break others, and
just validating the changed package will not catch such problems.


Writing Tests
-------------

While a previous section talked about running the testsuite for a
module and the packages therein, this has no meaning if the module in
question has no testsuites at all.

This section gives a very basic overview of methodologies for writing
tests and testsuites.

First there are "drudgery" tests, written to check absolutely basic
assumptions which should never fail.

Example:

    For a command FOO taking two arguments, three tests calling it
    with zero, one, and three arguments. These are the basic checks
    that the command fails if it has not enough arguments, or too
    many.

After that come the tests checking things based on our knowledge of
the command, its properties and assumptions. Some examples, based on
the graph operations added during Google's Summer of Code 2009:

**  The BellmanFord command in struct::graph::ops takes a _startnode_
    as argument, and this node should be a node of the graph. This
    equals one test case checking the behavior when the specified
    node is not a node of the graph.

    This often gives rise to code in the implementation which
    explicitly checks the assumption and throws a nice error, instead
    of letting the algorithm fail later in some weird
    non-deterministic way.

    Such checks cannot always be done. The graph argument, for
    example, is just a command in itself, and while we expect it to
    exhibit a certain interface, i.e.
a set of sub-commands, aka
    methods, we cannot check that it has them, except by actually
    trying to use them. That is done by the algorithm anyway, so an
    explicit check is just overhead we can get by without.

**  IIRC one of the distinguishing characteristics of either
    BellmanFord and/or Johnson is that they are able to handle
    negative weights, whereas Dijkstra requires positive weights.

    This induces (at least) three testcases: a graph with all
    positive weights, one with all negative weights, and one with a
    mix of positive and negative weights.

    Thinking further: does the algorithm handle the weight '0' as
    well? Another test case, or several, if we mix zero with positive
    and negative weights.

**  The two algorithms we are currently thinking about deal with
    distances between nodes, and distance can be 'Inf'inity,
    i.e. nodes may not be connected. This means that good test cases
    are

    (1) a strongly connected graph,
    (2) a connected graph,
    (3) a disconnected graph.

    At the extremes of (1) and (3) we have the fully connected graphs
    and the graphs without edges, only nodes, i.e. completely
    disconnected.

**  IIRC both of the algorithms take weighted arcs, and fill in a
    default if arcs are left unweighted in the input graph.

    This also induces three test cases:

    (1) a graph with all arcs carrying explicit weights,
    (2) a graph without weights at all,
    (3) a graph with a mixture of weighted and unweighted arcs.


What was described above via examples is called 'black-box' testing.
Test cases are designed and written based on our knowledge of the
properties of the algorithm and its inputs, without referencing a
particular implementation.

Going further, the complement of 'black-box' testing is 'white-box'.
For this we know the implementation of the algorithm, we look at it,
and we design our test cases so that they force the code through all
possible paths in the implementation. Wherever a decision is made we
have a test case forcing a specific direction of the decision, for
all possible directions.

In practice I often hope that the black-box tests I have made are
enough to cover all the paths, obviating the need for white-box
tests.

So, if you, dear reader, now believe that writing tests for an
algorithm takes at least as much time as coding the algorithm, and
often more time, then you are completely right. It does. Much more
time. See for example also http://sqlite.org/testing.html, a writeup
on how the SQLite database engine is tested.

An interesting connection is to documentation. In one direction, the
properties you are checking with black-box testing are properties
which should be documented in the algorithm's man page. And
conversely, if you have documentation of the properties of an
algorithm, then this is a good reference on which to base black-box
tests.

In practice, test cases and documentation often get written together,
cross-influencing each other. And the actual writing of test cases is
a mix of black-box and white-box, possibly influencing the
implementation while the tests are written. For example, writing a
test for 'startnode not in input graph' serves as a reminder to put a
check for this into the code.
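To connect the methodology back to concrete files, the "drudgery"
wrong-number-of-arguments tests described at the start of this
section might, in the tcltest-based format Tcllib testsuites use,
look like the following sketch. The command ::foo::bar, the test
names, and the exact error message are hypothetical placeholders, not
taken from an actual module:

```tcl
# Hypothetical drudgery tests for a command ::foo::bar which takes
# exactly two arguments. Each test expects an error when the command
# is called with the wrong number of arguments.
package require tcltest
namespace import ::tcltest::*

test foo-bar-1.0 {bar, wrong#args, none} -body {
    ::foo::bar
} -returnCodes error -match glob -result {wrong # args: *}

test foo-bar-1.1 {bar, wrong#args, not enough} -body {
    ::foo::bar onearg
} -returnCodes error -match glob -result {wrong # args: *}

test foo-bar-1.2 {bar, wrong#args, too many} -body {
    ::foo::bar a b c
} -returnCodes error -match glob -result {wrong # args: *}

cleanupTests
```

Saved with a '.test' extension in the module's directory, such a file
would be picked up automatically by 'sak.tcl test run' as described in
the section 'Testing modules'.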