Name                                             Date         Size      #Lines     LOC
00_basics/                                       25-Nov-2021         -  97,180  94,471
01_vars/                                         25-Nov-2021         -  44,386  37,226
02_classes/                                      25-Nov-2021         -   6,684   5,380
04_examples/                                     25-Nov-2021         -     817     731
05_processes/                                    25-Nov-2021         -   1,651   1,361
06_storage/01_local/                             25-Nov-2021         -     127     103
07_packages/                                     25-Nov-2021         -   3,773   3,079
08_commands/                                     25-Nov-2021         -   1,438   1,127
09_services/                                     25-Nov-2021         -     618     486
10_files/                                        25-Nov-2021         -  69,171  56,908
11_databases/00_syntax/                          25-Nov-2021         -      41      35
14_reports/                                      25-Nov-2021         -     786     631
15_control/                                      25-Nov-2021         -   2,332   1,823
16_cf-serverd/serial/                            25-Nov-2021         -   4,045   3,188
17_users/unsafe/                                 25-Nov-2021         -   6,198   5,106
19_security/other_writeable/                     25-Nov-2021         -     150     117
20_meta/                                         25-Nov-2021         -     145     116
21_methods/                                      03-May-2022         -     658     538
22_cf-runagent/serial/                           25-Nov-2021         -   2,872   2,298
23_failsafe/                                     25-Nov-2021         -     110      88
24_cmd_line_arguments/                           25-Nov-2021         -     337     314
25_cf-execd/                                     25-Nov-2021         -   2,498   2,058
26_cf-net/serial/                                25-Nov-2021         -     387     332
27_cf-secret/                                    03-May-2022         -     208     179
28_inform_testing/                               25-Nov-2021         -   2,712   2,220
29_simulate_mode/                                25-Nov-2021         -     866     771
30_custom_promise_types/                         25-Nov-2021         -   1,176     982
dummy_etc/                                       25-Nov-2021         -     134     130
Makefile.am                                      25-Nov-2021   3.7 KiB     113      61
Makefile.in                                      25-Nov-2021  40.1 KiB   1,115     980
README                                           25-Nov-2021  14.8 KiB     374     289
dcs.cf.sub                                       25-Nov-2021  34.1 KiB     981     846
default.cf.sub                                   25-Nov-2021   1.2 KiB      41      33
mock_package_manager.c                           25-Nov-2021  12.4 KiB     457     313
no_fds.c                                         25-Nov-2021   2.6 KiB     102      67
plucked.cf.sub                                   25-Nov-2021  56.5 KiB   1,965   1,713
root-MD5=617eb383deffef843ea856b129d0a423.priv   25-Nov-2021   1.7 KiB      31      29
root-MD5=617eb383deffef843ea856b129d0a423.pub    25-Nov-2021       426       9       8
run_with_server.cf.sub                           25-Nov-2021   2.8 KiB     108      86
testall                                          25-Nov-2021  47.8 KiB   1,389   1,130
write_args.sh                                    25-Nov-2021       355      23      13
xml-c14nize.c                                    25-Nov-2021     1 KiB      51      40

README

==============================================================================
CFEngine acceptance testsuite
==============================================================================

CFEngine has an extensive testsuite covering a lot of functionality that can
be tested as a series of cf-agent runs.

You are encouraged to run this testsuite on any new software/hardware
configuration, in order to
 * verify that CFEngine functions correctly
 * provide developers with a reproducible way to fix any problems encountered
   in new configurations/environments

If you find a bug, you are encouraged to write a test in the testsuite format
which demonstrates the bug, so that the test can be added to this testsuite
and checked for in the future.

Note that the testall script generates JUnit-style XML output for parsing by
CI systems:

https://llg.cubic.org/docs/junit/

------------------------------------------------------------------------------
Preparing for running tests
------------------------------------------------------------------------------

* Compile CFEngine.
  - Tokyo Cabinet is recommended over Berkeley DB, as it gives much better
    performance in the test suite.

* Install fakeroot(1). If this tool is not available for your operating system,
  you may use any other "fake root" environment or even sudo(1). Alternative
  tools can be selected with the --gainroot option of the `testall' script.
  Note that if you use the --unsafe option (which can damage your system), you
  may have to use --gainroot=sudo in order to get correct results.

* If you want output in color, set CFENGINE_COLOR to 1. If you want diff
  output colorized, you also need to install the colordiff utility.

* If you plan to use `git bisect` to hunt for a bug, try
  tests/acceptance/bisect-acceptance.pl, contributed by the indomitable
  Chris Dituri.

------------------------------------------------------------------------------
Running the testsuite
------------------------------------------------------------------------------

All tests ought only to create files and directories in /tmp, and ought not to
modify other files. The exception to this rule is the so-called unsafe tests,
which reside in a special directory. More on unsafe tests below.

Run

  ./testall --agent=$workdir/bin/cf-agent

e.g.

  ./testall --agent=/var/cfengine/bin/cf-agent

Testing will start. For every test case, the name and result (failed/passed)
will be printed. At the end, a summary of the whole run will be provided.

The test runner creates the following log files:

 * test.log contains detailed information about each test case (name, standard
   output/error contents, return code, and test status).
 * summary.log contains summary information, like that displayed during the
   testsuite run.

A directory .succeeded will also be created, containing a stamp for each test
case that passed; test cases which passed before but fail in a subsequent run
will be additionally marked in the output as "(UNEXPECTED FAILURE)".

You may run a subset of tests by passing either filenames:

  ./testall --agent=$workdir/bin/cf-agent 01_vars/01_basic/sysvars.cf

or directories to 'testall':

  ./testall --agent=$workdir/bin/cf-agent 01_vars

------------------------------------------------------------------------------
Creating/editing test cases
------------------------------------------------------------------------------

Each test should be 100% standalone. If you include "default.cf.sub" in
inputs, then the bundlesequence is automatically defined, and your file
must contain at least one of the main bundles:

* init - setup; create the initial and hoped-for final states
* test - the actual test code
* check - the comparison of expected and actual results
* destroy - anything you want to do at the end, like killing processes

However, if the test is really simple, you might want to skip including
"default.cf.sub" in inputs. Then you should define your own
bundlesequence, e.g. {"test","check"}. It is recommended to avoid
including "default.cf.sub" when not needed, since the test will run much
faster.
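
As a minimal sketch, a standalone test that skips "default.cf.sub" might look
like this (the variable and its value are purely illustrative):

  body common control
  {
        bundlesequence => { "test", "check" };
  }

  bundle agent test
  {
    vars:
        # Illustrative stand-in for the behaviour under test
        "greeting" string => "hello";
  }

  bundle agent check
  {
    classes:
        "ok" expression => strcmp("$(test.greeting)", "hello");

    reports:
      ok::
        "$(this.promise_filename) Pass";
      !ok::
        "$(this.promise_filename) FAIL";
  }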

Look in default.cf for some standard check bundles (for example, to compare
files when testing file edits, or for cleaning up temporary files).

For a test named XYZ, if the test runner finds the file XYZ.def.json,
that file will be copied to the input directory so it will be loaded
on startup. That lets you set hard classes and def variables and add
inputs before the test ever runs.
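
As a sketch, such a file might contain something like the following, in the
standard CFEngine augments format (the class and variable names here are
purely illustrative):

  {
    "classes": {
      "my_feature_enabled": [ "any" ]
    },
    "vars": {
      "my_test_input": "some value"
    }
  }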

Tests should be named with short names describing what they test, using lower-
case letters and underscores only. If a test is expected to generate an error
(that is, if it contains syntax errors or other faults), it should have an
additional '.x' suffix before '.cf' (e.g. 'string_vars.x.cf'). A crash will
still cause a failure.

Tests which are not expected to pass yet (e.g. because a bug in the code
prevents them from passing) should be placed in the 'staging' subdirectory of
the test directory where they belong. Such test cases will only be run if the
--staging argument is passed to ./testall.
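
For example, to include staging tests in a run:

  ./testall --agent=$workdir/bin/cf-agent --staging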

Tests which modify the system outside of /tmp are so-called unsafe tests, and
should be placed in the 'unsafe' subdirectory of the directory where they
belong. Such test cases have the potential to damage the host system and will
only be run if the --unsafe argument is given to ./testall. For the user's
protection, this option is needed even if the test file name is specified
directly on the command line.
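
For example, to run the unsafe user tests from this tree, combining --unsafe
with the --gainroot option described in the preparation section:

  ./testall --agent=$workdir/bin/cf-agent --gainroot=sudo --unsafe 17_users/unsafe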

Tests which need network connectivity should be placed in 'network'
subdirectories. Those tests may be disabled by passing the --no-network option
to 'testall'.

Tests which cannot be run in parallel with other tests should be put in a
'serial' subdirectory. This is necessary if the test depends on, for example,
a certain number of cf-agent processes running, or needs to open a network
port that may be shared with other tests. Unsafe tests are always carried out
serially, and do not need to be in a 'serial' subdirectory. You can also give
individual tests a name that contains "serial" as a word.

Serial tests enforce strictly sorted order when they are encountered, so you
can use one to initialize a precondition for a set of tests by sorting it
lexicographically before the others and giving it a name containing "serial".
The other tests can then be executed in parallel, unless they need to be
serial for other reasons. For example, you can use two serial tests, one to
set up cf-serverd and one to tear it down, and in between you put normal
tests. Note that timed tests (see below) have one small exception: if a test
that is both timed and serial is executed in the same "block" as other serial
tests (that is, when one or more serial tests follow another), it will always
be executed first. So put it in a 'timed' directory as well, if you want to be
absolutely sure.

NOTE: Since the class 'ok' is used in most tests, never create a persistent
class called 'ok' in any test. Persistent classes are cleaned up between test
runs, but better safe than sorry.

All tests should contain three bundles: init, test and check. In "body common
control", default.cf.sub should be included in inputs, and the bundlesequence
should be set to default("$(this.promise_filename)").

Output "$(this.promise_filename) Pass" for passing
   and "$(this.promise_filename) FAIL" for failing.
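
Putting these conventions together, the top of a typical test looks like this
(the relative path to "default.cf.sub" depends on how deep the test sits in
the tree):

  body common control
  {
        inputs => { "../default.cf.sub" };
        bundlesequence => { default("$(this.promise_filename)") };
  }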

If you want to use tools like grep, diff, touch, etc., please use the
$(G.grep) form so that the correct path to the tool is chosen by the test
template. If a tool is missing, you can add it to dcs.cf.sub.
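
For instance, a check bundle might call grep through the resolved path like
this (a sketch; the output file path is purely illustrative):

  bundle agent check
  {
    classes:
        # True if the resolved grep binary finds the expected line
        "ok" expression => returnszero("$(G.grep) -q expected_line /tmp/my_test_output.txt", "noshell");

    reports:
      ok::
        "$(this.promise_filename) Pass";
      !ok::
        "$(this.promise_filename) FAIL";
  }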

------------------------------------------------------------------------------
Waiting in tests
------------------------------------------------------------------------------

If your test needs to wait for a significant amount of time, for example in
order for locks to expire, you should use the wait functionality in the test
suite. It requires three parts:

  1. Your test needs to be put in a "timed" directory.
  2. Whenever you want to wait, use the
     "dcs_wait($(this.promise_filename), <seconds>)" method to wait the
     specified number of seconds.
  3. Each test invocation will have a predefined class set,
     "test_pass_<number>", where <number> is the current pass number starting
     from one. This means you can wait several times (see the sketch below).
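
As a sketch, a two-pass timed test might look like this (the state file path
is purely illustrative):

  bundle agent test
  {
    files:
      test_pass_1::
        # First pass: create some state that should survive the wait
        "/tmp/timed_test_state"
          create => "true";

    methods:
      test_pass_1::
        # Ask the test runner to resume this test after 60 seconds
        "wait" usebundle => dcs_wait($(this.promise_filename), 60);
  }

  bundle agent check
  {
    classes:
      test_pass_2::
        # Second pass: verify the state survived the wait
        "ok" expression => fileexists("/tmp/timed_test_state");

    reports:
      ok::
        "$(this.promise_filename) Pass";
      test_pass_2.!ok::
        "$(this.promise_filename) FAIL";
  }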

The test suite will keep track of time, and run other tests while your test is
waiting. Some things to look out for, though:

  - During the wait time, your test is no longer running, so you cannot, for
    example, do polling.
  - You cannot leave daemons running while waiting, because they may interfere
    with other tests. If you need that, you will have to wait the traditional
    way, by introducing sleeps in the policy itself.
  - The timing is not guaranteed to be accurate to the second. The test will be
    resumed as soon as the currently running test has finished, but if that
    test takes a long time, the extra time adds to the wait.

------------------------------------------------------------------------------
Handling different platforms
------------------------------------------------------------------------------

For tests that need to be skipped on certain platforms, you can add
special meta variables to the *test* bundle. These are the
possible variable names:

  - test_skip_unsupported
      Skips a test because it makes no sense on that platform (e.g.
      symbolic links on Windows).

  - test_skip_needs_work
      Skips a test because the test itself is not adapted to the
      platform (even if the functionality exists).

  - test_soft_fail
  - test_suppress_fail
  - test_flakey_fail
      Runs the test, but accepts failure. Use these when there is a
      real failure, but it is acceptable for the time being. These
      variables require a meta tag on the variable set to
      "redmine<number>", where <number> is a Redmine issue number.
      There is a subtle difference between the three. Soft failures
      will not be reported as failures in the XML output, and are
      appropriate for test cases that document incoming bug reports.
      Suppressed failures count as failures, but won't block the
      build; they are appropriate for regressions or bad test
      failures.
      Flakey failures count in a category by themselves and won't block
      the build. If any are present, a different exit code will be
      produced from the test run so that CI runners can react accordingly.

Additionally, a *description* meta variable can be added to the test to
describe its function:

  meta:
      "description" -> { "CFE-2971" }
        string => "Test that class expressions can be used to define classes via augments";

The rule of thumb is:

* If you are writing an acceptance test for a (not yet fixed) bug in
  Redmine, use test_soft_fail.

* If you need the build to work, but the bug you are suppressing cannot
  stay unfixed for long, use test_suppress_fail.

In all cases, the variable is expected to be set to a class expression
suitable for ifvarclass, where a positive match means that the test
will be skipped. So the expression '"test_skip_needs_work" string =>
"hpux|aix";' will skip the test on HP-UX and AIX, and nowhere else.

Example:

  bundle agent test
  {
    meta:

      # Indicate that this test should be skipped on hpux because the
      # functionality is unsupported on that platform.
      "test_skip_unsupported" string => "hpux";
  }

  bundle agent test
  {
    meta:

      # Indicate that the test should be skipped on hpux because the
      # test needs to be adapted for the platform.
      "test_skip_needs_work" string => "hpux";
  }

  bundle agent test
  {
    meta:

      # Indicate that the test is expected to fail on hpux and aix, but
      # should not cause the build to change from green. This is
      # appropriate for test cases that document incoming bug reports.
      "test_soft_fail"
        string => "hpux|aix",
        meta => { "redmine1234" };
  }

  bundle agent test
  {
    meta:

      # Indicate that the test is expected to fail on hpux but will not
      # block the build. This is appropriate for regressions or very
      # bad test failures.
      "test_suppress_fail"
        string => "hpux",
        meta => { "redmine1234" };
  }

------------------------------------------------------------------------------
Glossary
------------------------------------------------------------------------------

For the purposes of testing, here is what our terms mean:

Pass: the test did what we expected (whether that was setting a variable,
editing a file, killing or starting a process, or correctly failing to do
these actions in the light of existing conditions or attributes). Note that
for tests whose names end in '.x', a Pass is generated when the test
terminates abnormally, as we wanted it to.

FAIL: the test did not do what we wanted: either the test finished and
returned "FAIL" from its check bundle, or something went wrong - cf-agent
might have dropped core, cf-promises may have denied execution of the
promises, etc.

Soft fail: the test failed as expected by a "test_soft_fail" promise.

Skipped: the test was skipped because it was either explicitly disabled, or
is Nova-specific and was run on a Community cf-agent.

------------------------------------------------------------------------------
Example Test Skeleton
------------------------------------------------------------------------------
body file control
{
      # Feel free to avoid including "default.cf.sub" and define your
      # own bundlesequence for simple tests

      inputs => { "../default.cf.sub" };
}

#######################################################

bundle agent init
# Initialize the environment and prepare for the test
{

}

bundle agent test
# Activate policy for behaviour you wish to inspect
{
  meta:
    "description" -> { "CFE-1234" }
      string => "What does this test?";

    "test_soft_fail"
      string => "Class_Expression|here",
      meta => { "CFE-1234" };
}

bundle agent check
# Check the result of the test
{

  # Pass/Fail requires a specific format:
  # reports: "$(this.promise_filename) Pass";
  # reports: "$(this.promise_filename) FAIL";

  # Consider using one of the dcs bundles
  # methods: "Pass/Fail" usebundle => dcs_passif( "Namespace_Scoped_Class_Indicating_Test_Passed", $(this.promise_filename) );

}

bundle agent __main__
# @brief The test entry point.
# Note: The testall script runs the agent with the acceptance test as the
# entry point, for example:
##+begin_src sh :results output :exports both :dir ~/CFEngine/core/tests/acceptance
# pwd | sed 's/\/.*CFEngine\///'
# find . -name "*.cf*" | xargs chmod 600
# cf-agent -Kf ./01_vars/02_functions/basename_1.cf --define AUTO
##+end_src sh
#
##+RESULTS:
#: core/tests/acceptance
#: R: /home/nickanderson/Northern.Tech/CFEngine/core/tests/acceptance/./01_vars/02_functions/basename_1.cf Pass
{
  methods:
      "Run the default test bundle sequence (init,test,check,cleanup) if defined"
        usebundle => default( $(this.promise_filename) );
}
------------------------------------------------------------------------------