TestFramework
=============

This testsuite is a general framework for testing the core GNUstep
libraries. Since, in part, we are testing the very basic level of
an Objective-C runtime, we support testing plain C or C++ as well
as Objective-C, and aim for better flexibility, ease-of-use, and
portability than older frameworks such as OCUnit.

The aim of this framework is to provide a very simple, yet reasonably
comprehensive regression test mechanism, primarily for Objective-C
development.

Please run the GNUstep testsuite (using this framework) often, when
adding new features, fixing bugs and running on new platforms.

When working on features common to both Apple's Cocoa/iOS APIs and
to GNUstep, please try creating and running test cases in the Apple
environment before implementing/changing GNUstep code, so that you
are sure the behavior is the same in both cases.


License
-------

The testing framework and many of the test cases in the testsuite are
copyright by the FSF and distributed under the GPL.  However, some tests
may not be copyright by the FSF, but retain the copyright of the original
owner (eg. tests submitted as bug reports). You should feel free to add
tests that are not copyright by the FSF. The copyright of these tests
should be clearly stated, however, and they should still be distributed
under the GPL version 3 or later.


Running Tests
-------------

To run a testsuite, use the gnustep-tests script along with the name of
the project testsuite (directory) you wish to test:

gnustep-tests base

or where a group of tests within a project is to be run:

gnustep-tests base/NSArray

You may run an individual test file by using the gnustep-tests script
with the name of the Objective-C test source file:

gnustep-tests mytest.m
gnustep-tests base/NSDate/general.m

Alternatively, you may run tests from within a project/directory. eg.

cd base
gnustep-tests .

If you supply no arguments, the gnustep-tests script will examine all the
subdirectories of the current directory, and attempt to run tests in each.

During testing, temporary files are created and removed for each test,
but any temporary files from the most recent test in each directory
are left (to help you debug), as are the log files.
The summary log of a test run is left in the tests.sum file.
The detailed log of a test run is left in the tests.log file.
The log from any previous run is left in oldtests.sum (and oldtests.log).
You can run 'gnustep-tests --clean' to remove left-over temporary files
and all log files.


Interpreting the output
-----------------------

The summary output lists all test failures ... there should not be any.
If a test fails then either there is a problem in the software being
tested, or a problem in the test itself. Either way, you should try to
fix the problem and provide a patch, or at least report it at:
https://savannah.gnu.org/bugs/?group=gnustep

After the listing of any failures is a summary of counts of events as follows.

Passed tests:
  The number of individual tests which passed.

Failed tests:
  The number of individual tests which failed ... this should really not appear.

Failed sets:
  The number of sets of tests which have been abandoned part way through
  because of some individual test failure or an exception in support code
  between tests.

Failed builds:
  The number of separate test files which did not even build/compile.

Failed files:
  The number of separate test files which failed while running.

Dashed hopes:
  The number of hopeful tests which did not pass, but which were not
  expected to pass (new code being worked on etc).

Skipped sets:
  The number of sets of tests which were skipped entirely ...
  eg. those for features which work on some platforms, but not on yours.

The binary executable of the most recently executed test file in each test
directory is left in the obj subdirectory.  So you can easily debug a failed
test by:
1. running gnustep-tests with the single test file as its argument,
2. running gdb using obj/filename as an argument,
3. setting a breakpoint at the exact test which failed and running to there,
4. then stepping through slowly to see exactly what is going wrong.
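
For instance, steps 1 and 2 of that recipe might look like this, using the
test file base/NSDate/general.m from the earlier example (the executable
name obj/general is an assumption based on step 2 above):

gnustep-tests base/NSDate/general.m
cd base/NSDate
gdb obj/general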

You can use the --failfast option with gnustep-tests to tell it to abandon
testing after the first failure ... in which case you know that the
executable of the failed test will be available (unless the test file
failed to even compile, of course).  In this case, any core dump file will
also be left available.

As a convenience for debugging, the tests use a testStart() function in
which they set the line number of the test, so you can stop in that
function and, upon stepping out of it, the debugger will be examining the
specified test. eg.
(gdb) break testStart if line == 42
(gdb) run

If you use the --debug command line option, and any tests fail, then the
debugger (gdb) will automatically be started for you with breakpoints set
at the testStart() function of the failed tests.

You can use the --debug command line option in conjunction with the
--failfast option to have testing stopped at the first failure and the gdb
debugger automatically launched to debug the failed testcase, with a
breakpoint set in the testStart() function for that testcase.

You can also use the --developer command line option to define the TESTDEV
preprocessor macro (to turn on developer-only test cases, and to have
all 'hopes' treated as actual 'tests' with pass/fail results).

Writing Tests
-------------

The test framework may be used for testing Objective-C, Objective-C++, C,
and C++ code.  The test source files must have a .m (for Objective-C and C)
or a .mm (for Objective-C++ and C++) file extension in order to be recognised.

A minimal test should be a file importing the header "Testing.h"
(which defines global variables, functions, and standard test macros)
and containing a main() function implementation which executes the
actual test code.
Groups of tests should be placed between calls to the START_SET() and
END_SET() macros.

You should look at the example test files in the
$GNUSTEP_MAKEFILES/TestFramework directory to see how to write test
cases, and you should examine Testing.h in the same directory for full
documentation of the range of macros provided.
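
As an illustration, a minimal test file might look like the following
(a sketch only ... see the example files for authoritative usage):

#import "Testing.h"

int main()
{
  START_SET("arithmetic")
  PASS(2 + 2 == 4, "integers add correctly")
  PASS(1 < 2, "integer comparison works")
  END_SET("arithmetic")
  return 0;
}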

The main workhorse of the test framework is the pass() function, which has
a variable number of arguments ... first an integer expression, and second
a printf style format string describing what is being tested.
If you are calling the function directly you should use "%s,%d" at the
start of the format string and pass __FILE__ and __LINE__ as the next two
parameters (for consistency with the test macros).
The function uses the global variable 'testHopeful' to decide whether a
test which did not pass is a 'FAIL' (when testHopeful==NO) or a 'DASHED'
hope (when testHopeful==YES).
The function sets the global variable 'testPassed' to a BOOL reflecting
the result of the test (YES if the test passed, NO otherwise).
The only other functions are for occasional use to report sections of
the testsuite as not having run for some reason.
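
For example, a direct call (a sketch, assuming an integer variable
'result' computed by earlier code) might look like:

pass(result == 42, "%s,%d ... computation gives the expected result",
  __FILE__, __LINE__);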

There are just four basic test macros.
All have uppercase names beginning with 'PASS'.
All wrap test code and a call to the pass() function in exception handlers.
All provide file name and line number information in the description string.
All are code blocks and do not need a semicolon terminator.
Code fragments must be enclosed in round brackets if they contain commas.

PASS		passes if an expression resulting in an integer value is
		non-zero
PASS_EQUAL	passes if an expression resulting in an object is identical
		to or -isEqual: to another object (if the expected object
		implements the -isEqualForTestcase: method, that is used
		instead of -isEqual:)
PASS_EXCEPTION	passes if a code fragment raises an exception
PASS_RUNS	passes if a code fragment runs without raising an exception
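
For instance (hypothetical fragments, assumed to appear inside a set;
'arr' is an empty array created for the purpose of the illustration):

NSArray *arr = [NSArray array];

PASS([arr count] == 0, "a new array is empty")
PASS_EQUAL([NSString stringWithFormat: @"%d", 42], @"42",
  "stringWithFormat: gives the expected result")
PASS_EXCEPTION([arr objectAtIndex: 1];, NSRangeException,
  "out of range access raises an exception")
PASS_RUNS([arr description];, "description runs without an exception")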

There is a boolean variable called 'testHopeful' which, if set to YES,
means that tests which do not pass are considered to be 'Dashed hopes'
rather than failed tests.  You should set this for tests of code which
is under development (or which is testing a feature which may be
unsupported in the package under test) ... to indicate that the test is
not to be considered a failure if it doesn't pass.

Tests are grouped together, along with any associated non-test code, between
paired calls to the START_SET and END_SET macros. Any setting of testHopeful
within a set is automatically restored at the end of a set, so it makes sense
to group hopes together in a set.
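
For example, grouping hopes in their own set (a sketch ... the function
unfinishedFeatureWorks() is hypothetical):

START_SET("experimental")
  testHopeful = YES;
  PASS(unfinishedFeatureWorks(), "feature under development behaves")
END_SET("experimental")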

You can skip an entire set by calling the SKIP() macro just after the start,
in which case the entire set will be reported as being Skipped.
It is appropriate to skip sets of tests if you have checked and found that
some feature you are testing is not available in the version of the package
under test.
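
eg. a sketch, where 'haveFeature' is a hypothetical check you have made:

START_SET("optional feature")
  if (NO == haveFeature)
    SKIP("feature not available on this platform")
  PASS(YES == haveFeature, "feature behaves as expected")
END_SET("optional feature")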

Any uncaught exception (ie one which occurs outside one of the four test
macros and is not caught by an exception handler you write yourself) will
cause the remaining tests in a set to be omitted.  In this case the set
will be reported as Failed.

You may also arrange to jump to the end of the set if a test fails by wrapping
the test in a NEED macro.  Doing this also causes the set to be reported as
Failed if the needed test does not pass.
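
eg. a sketch, where openTestDatabase() is a hypothetical setup function:

START_SET("database")
  NEED(PASS(openTestDatabase(), "test database opens"))
  /* later tests in the set run only if the needed test passed */
  PASS(1 == 1, "subsequent tests run")
END_SET("database")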

It's likely that you are writing new tests for a library or framework ...
and those tests will need to link with that framework.  You should add the
instructions for that to a GNUmakefile.preamble in the directory containing
your tests, or a GNUmakefile.super in the directory above in the case where
you have multiple test directories.
eg.
ADDITIONAL_OBJC_LIBS=-lmyLibrary

When contributing to a test suite, please bracket your new test code using
#if defined(TESTDEV)
...
#endif /* TESTDEV */
so that it is only built when gnustep-tests is invoked with the --developer
command line argument.
This ensures that the new code won't break any existing test code when
people are simply running the testsuite, and once you are sure that the
new testcases are correct (and portable to all operating systems), the
check for TESTDEV can be removed.

Ignoring failed test files
--------------------------

When a test file crashes during running, or terminates with some sort of
failure status (eg. the main() function returns a non-zero value) the
framework treats the test file as having failed ... it assumes that the
program crashed during the tests and the tests did not complete.

On rare occasions you might actually want a test program to abort this way
and have it treated as normal completion.  In order to do this you simply
create an additional file with the same name as the test program and a
file extension of '.abort'.
eg. If myTest.m is expected to crash, you would create myTest.abort to have
that crash treated as a normal test completion.


Advanced building
-----------------

In most cases, all you need to do is write an Objective-C file as described
above, and the test framework will build it and run it for you automatically,
but occasionally you may need to use your own build process.

Where tests must make use of external resources or ensure that other tests
have already been run before they are run, you can make use of the
gnustep-make package facilities to control dependencies etc.

Normally the tests in a directory are built and run using a makefile
generated in the directory.  This makefile uses the standard conventions of
including GNUmakefile.preamble before test-tool.make and including
GNUmakefile.postamble after test-tool.make, which gives you a high degree
of control over how the tests in the directory are built.

In addition to the preamble/postamble mechanism, the file ../GNUmakefile.super
is included at the start of the generated makefile (if it exists).  This
allows all the test directories in a suite to use a common makefile fragment
to provide information for the whole testsuite.
You can also use the GSTESTROOT environment variable to locate resources
common to the whole testsuite ... it is set automatically by gnustep-tests
to be the absolute path to the topmost directory in the testsuite.
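
eg. a hypothetical ../GNUmakefile.super shared by every test directory:

ADDITIONAL_OBJC_LIBS += -lmyLibrary
ADDITIONAL_INCLUDE_DIRS += -I$(GSTESTROOT)/include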

Your system should not make any assumption about the order in which test
files are built ... the test framework may build many test files in parallel
in order to make effective use of multiple processors.  In fact the make
program will normally build up to four tests at a time, but you can change
that by setting the MAKEFLAGS environment variable to '-j N' where N is the
number of simultaneous builds you want to be permitted (or you can simply
use 'gnustep-tests --sequential' to force building of one test at a time).
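
eg. permitting eight simultaneous builds:

MAKEFLAGS='-j 8' gnustep-tests base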

For total control, the framework checks to see if a 'GNUmakefile.tests' file
exists in the directory, and if it does it uses that file as a template to
create the GNUmakefile rather than using its own makefile.
This template makefile may use @TESTNAMES@ where it wants a list of the
tests to be run, and @TESTRULES@ where it wants the rules to build the
tests to be included.
It should also use @TESTOPTS@ near the start of the file to permit the
necessary makefile control saying where the executables should be stored.
The GNUmakefile.tests template should build each individual test when it is
invoked with that test name as a target, it should build all tests when
invoked without a target, and it should have a 'clean' target to clean up
before and after all tests.
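
eg. a hypothetical skeleton (a real template will need rules appropriate
to your own build process):

@TESTOPTS@

all: @TESTNAMES@

@TESTRULES@

clean:
	rm -rf obj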


Directory layout
----------------

A test suite is considered to be a collection of individual test files in
a single directory or a collection of directories in a hierarchy.
All directories which contain test files must also contain a TestInfo file
to mark them as containing files used by the framework, and the root of
the test suite is considered to be the topmost directory in the hierarchy
which contains a TestInfo file.  The test framework sets the GSTESTROOT
environment variable to the absolute path of the root of the test suite
being executed, so scripts and makefiles can use this to locate resources.

The test framework ignores any directory which does not contain a TestInfo
file.  This feature prevents accidental attempts to treat a project source
code directory as a testsuite.  This is also useful in conjunction with the
various makefile options listed above ... the makefiles may be used to
build resources for tests in subdirectories which are ignored by the
test framework itself.

In addition to being a marker, the TestInfo file is a shell script which
is sourced before execution of each test program in its directory;
typically it is used to set up environment variables (eg. LD_LIBRARY_PATH
to tell the program where to find dynamic libraries the tests use).
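
eg. a hypothetical TestInfo setting up the library search path (the
BuiltLibraries directory name is an assumption for the illustration):

LD_LIBRARY_PATH=$GSTESTROOT/BuiltLibraries:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH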


Providing extra control and information
---------------------------------------

If a Start.sh script is present in a test directory, it will be run
immediately before tests are performed in that directory.  It is able
to append information to the log of the test run using the GSTESTLOG
variable.

If an End.sh file is present in a test directory, it will be run immediately
after the tests in that directory are performed.  It is able to append
information to the log of the test run using the GSTESTLOG variable.

In both cases, you must make sure that the file does not do anything
which would confuse the test framework at the point when it analyses the
log ... so you need to avoid starting a line in the log with any of the
special phrases generated to mark a passed test or a particular type of
failure.
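
eg. a hypothetical Start.sh appending a note to the log:

echo "Building fixtures for this directory" >> "$GSTESTLOG"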

If a Summary.sh file is present in a test directory and gnustep-tests is
used to run just those tests in that directory, the shell script will be
executed in order to provide the summary of the test results.  In all other
cases the summary is done by the Summary.sh script provided in the test
framework.