# ZFS Test Suite README

### 1) Building and installing the ZFS Test Suite

The ZFS Test Suite runs under the test-runner framework.  This framework
is built alongside the standard ZFS utilities and is included as part of
the zfs-test package.  The zfs-test package can be built from source as follows:

    $ ./configure
    $ make pkg-utils

The resulting packages can be installed using the rpm or dpkg command as
appropriate for your distribution.  Alternately, if you have installed
ZFS from a distribution's repository (not from source), the zfs-test package
may be provided for your distribution.

    - Installed from source
    $ rpm -ivh ./zfs-test*.rpm
    $ dpkg -i ./zfs-test*.deb

    - Installed from package repository
    $ yum install zfs-test
    $ apt-get install zfs-test

### 2) Running the ZFS Test Suite

The prerequisites for running the ZFS Test Suite are:

  * Three scratch disks
    * Specify the disks you wish to use in the $DISKS variable, as a
      space delimited list like this: DISKS='vdb vdc vdd'.  By default
      the zfs-tests.sh script will construct three loopback devices to
      be used for testing: DISKS='loop0 loop1 loop2'.
  * A non-root user with a full set of basic privileges and the ability
    to sudo(8) to root without a password to run the test.
  * Specify any pools you wish to preserve as a space delimited list in
    the $KEEP variable. All pools detected at the start of testing are
    added automatically.  (An example of setting $DISKS and $KEEP is
    shown after this list.)
  * The ZFS Test Suite will add users and groups to the test machine to
    verify functionality.  Therefore it is strongly advised that a
    dedicated test machine, which can be a VM, be used for testing.
  * On FreeBSD, mountd(8) must use `/etc/zfs/exports`
    as one of its export files – by default this can be done by setting
    `zfs_enable=yes` in `/etc/rc.conf`.
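
Both $DISKS and $KEEP are read from the environment, so they can simply be
exported before invoking the wrapper script.  For example (the disk names
and pool name below are placeholders for your own):

    $ export DISKS='vdb vdc vdd'
    $ export KEEP='tank'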

Once the prerequisites are satisfied, simply run the zfs-tests.sh script:

    $ /usr/share/zfs/zfs-tests.sh

Alternately, the zfs-tests.sh script can be run from the source tree to allow
developers to rapidly validate their work.  In this mode the ZFS utilities and
modules from the source tree will be used (rather than those installed on the
system).  In order to avoid certain types of failures you will need to ensure
the ZFS udev rules are installed.  This can be done manually or by ensuring
some version of ZFS is installed on the system.

    $ ./scripts/zfs-tests.sh

The following zfs-tests.sh options are supported:

    -v          Verbose zfs-tests.sh output.  When specified, additional
                information describing the test environment will be logged
                prior to invoking test-runner.  This includes the runfile
                being used, the DISKS targeted, pools to keep, etc.

    -q          Quiet test-runner output.  When specified it is passed to
                test-runner(1) which causes output to be written to the
                console only for tests that do not pass and the results
                summary.

    -x          Remove all testpools, dm, lo, and files (unsafe).  When
                specified the script will attempt to remove any leftover
                configuration from a previous test run.  This includes
                destroying any pools named testpool, unused DM devices,
                and loopback devices backed by file-vdevs.  This operation
                can be DANGEROUS because it is possible that the script
                will mistakenly remove a resource not related to the testing.

    -k          Disable cleanup after test failure.  When specified the
                zfs-tests.sh script will not perform any additional cleanup
                when test-runner exits.  This is useful when the results of
                a specific test need to be preserved for further analysis.

    -f          Use sparse files directly instead of loopback devices for
                the testing.  When running in this mode certain tests which
                depend on real block devices will be skipped.

    -c          Only create and populate the constrained path.

    -I NUM      Number of iterations.

    -d DIR      Create sparse files for vdevs in the DIR directory.  By
                default these files are created under /var/tmp/.
                This directory must be world-writable.

    -s SIZE     Use vdevs of SIZE (default: 4G).

    -r RUNFILES Run tests in RUNFILES (default: common.run,linux.run).

    -t PATH     Run a single test at PATH relative to the test suite.

    -T TAGS     Comma separated list of tags (default: 'functional').

    -u USER     Run a single test as USER (default: root).

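For example, a developer iterating on a single test case from the source
tree might combine several of these options (the test path below is purely
illustrative):

    $ ./scripts/zfs-tests.sh -vx -s 3G \
        -t tests/functional/cli_root/zpool_create/zpool_create_001_pos
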
The ZFS Test Suite allows the user to specify a subset of the tests via a
runfile or a list of tags.

The format of the runfile is explained in test-runner(1), and
the files that zfs-tests.sh uses are available for reference under
/usr/share/zfs/runfiles. To specify a custom runfile, use the -r option:

    $ /usr/share/zfs/zfs-tests.sh -r my_tests.run

Otherwise, the user can set the needed tags to run only specific tests.
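
For example, to run only the test groups carrying a particular tag (the tag
name below is illustrative; the valid tags are defined in the runfiles
themselves):

    $ /usr/share/zfs/zfs-tests.sh -T zvol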

### 3) Test results

While the ZFS Test Suite is running, one informational line is printed at the
end of each test, and a results summary is printed at the end of the run. The
results summary includes the location of the complete logs, which is logged in
the form `/var/tmp/test_results/[ISO 8601 date]`.  A normal test run launched
with the `zfs-tests.sh` wrapper script will look something like this:

    $ /usr/share/zfs/zfs-tests.sh -v -d /tmp/test

    --- Configuration ---
    Runfile:         /usr/share/zfs/runfiles/linux.run
    STF_TOOLS:       /usr/share/zfs/test-runner
    STF_SUITE:       /usr/share/zfs/zfs-tests
    STF_PATH:        /var/tmp/constrained_path.G0Sf
    FILEDIR:         /tmp/test
    FILES:           /tmp/test/file-vdev0 /tmp/test/file-vdev1 /tmp/test/file-vdev2
    LOOPBACKS:       /dev/loop0 /dev/loop1 /dev/loop2
    DISKS:           loop0 loop1 loop2
    NUM_DISKS:       3
    FILESIZE:        4G
    ITERATIONS:      1
    TAGS:            functional
    Keep pool(s):    rpool


    /usr/share/zfs/test-runner/bin/test-runner.py  -c /usr/share/zfs/runfiles/linux.run \
        -T functional -i /usr/share/zfs/zfs-tests -I 1
    Test: /usr/share/zfs/zfs-tests/tests/functional/arc/setup (run as root) [00:00] [PASS]
    ...more than 1100 additional tests...
    Test: /usr/share/zfs/zfs-tests/tests/functional/zvol/zvol_swap/cleanup (run as root) [00:00] [PASS]

    Results Summary
    SKIP	  52
    PASS	 1129

    Running Time:	02:35:33
    Percent passed:	95.6%
    Log directory:	/var/tmp/test_results/20180515T054509

### 4) Example of adding and running a test case (zpool_example)

  This broadly boils down to 5 steps:
  1. Set up password-less sudo for the user running the test case.
  2. Edit configure.ac and Makefile.am appropriately.
  3. Create/modify .run files.
  4. Create the actual test scripts.
  5. Run the test case.

  We will look at each of these steps in depth.

  * Set up password-less sudo for the test user, as the test scripts cannot
    be run directly as root.
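    A minimal sketch of such a sudoers entry (the user name 'test' and the
    drop-in file path are assumptions; adjust them to your environment):
    ~~~~
      # /etc/sudoers.d/zfs-tests: allow the test user to sudo to root
      # without a password, as the test suite requires.
      test ALL=(ALL) NOPASSWD: ALL
    ~~~~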
  * Edit the file **configure.ac** and add the following line under the
    AC_CONFIG_FILES section:
    ~~~~
      tests/zfs-tests/tests/functional/cli_root/zpool_example/Makefile
    ~~~~
  * Edit the file **tests/runfiles/Makefile.am** and add *zpool_example.run*
    to the list:
    ~~~~
      pkgdatadir = $(datadir)/@PACKAGE@/runfiles
      dist_pkgdata_DATA = \
        zpool_example.run \
        common.run \
        freebsd.run \
        linux.run \
        longevity.run \
        perf-regression.run \
        sanity.run \
        sunos.run
    ~~~~
  * Create the file **tests/runfiles/zpool_example.run**. This defines the
    most common properties used when the tests are run with test-runner.py
    or zfs-tests.sh.
    ~~~~
      [DEFAULT]
      timeout = 600
      outputdir = /var/tmp/test_results
      tags = ['functional']

      tests = ['zpool_example_001_pos']
    ~~~~
    If adding a test case to an already existing suite, the runfile will
    already be present and only needs to be updated. For example, adding
    **zpool_example_002_pos** to the above runfile only requires updating
    the **"tests ="** section of the runfile, as shown below:
    ~~~~
      [DEFAULT]
      timeout = 600
      outputdir = /var/tmp/test_results
      tags = ['functional']

      tests = ['zpool_example_001_pos', 'zpool_example_002_pos']
    ~~~~

  * Edit **tests/zfs-tests/tests/functional/cli_root/Makefile.am** and add a
    line under SUBDIRS (make sure to escape the line end, as other folder
    names will follow):
    ~~~~
      zpool_example \
    ~~~~
  * Create the new file **tests/zfs-tests/tests/functional/cli_root/zpool_example/Makefile.am**;
    its contents could be as below. This declares that we now have a test
    case *zpool_example_001_pos.ksh*:
    ~~~~
      pkgdatadir = $(datadir)/@PACKAGE@/zfs-tests/tests/functional/cli_root/zpool_example
      dist_pkgdata_SCRIPTS = \
        zpool_example_001_pos.ksh
    ~~~~
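    After editing configure.ac and the Makefile.am files, regenerate and
    re-run the build so the new test case is picked up; from the top of the
    source tree this is typically:
    ~~~~
      $ ./autogen.sh && ./configure
      $ make
    ~~~~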
  * We can now create our test case zpool_example_001_pos.ksh under
    **tests/zfs-tests/tests/functional/cli_root/zpool_example/**.
    ~~~~
	#!/bin/ksh -p

	# Pull in the common test suite helper functions
	# (log_must, destroy_pool, log_assert, log_onexit, log_pass, ...)
	. $STF_SUITE/include/libtest.shlib

	# DESCRIPTION:
	#	zpool_example Test
	#
	# STRATEGY:
	#	1. Demo a very basic test case
	#

	DISKS_DEV1="/dev/loop0"
	DISKS_DEV2="/dev/loop1"
	TESTPOOL=EXAMPLE_POOL

	function cleanup
	{
		# Cleanup
		destroy_pool $TESTPOOL
		log_must rm -f $DISKS_DEV1
		log_must rm -f $DISKS_DEV2
	}

	log_assert "zpool_example"
	# Run function "cleanup" on exit
	log_onexit cleanup

	# Prep backend devices
	log_must dd if=/dev/zero of=$DISKS_DEV1 bs=512 count=140000
	log_must dd if=/dev/zero of=$DISKS_DEV2 bs=512 count=140000

	# Create pool
	log_must zpool create $TESTPOOL $DISKS_DEV1 $DISKS_DEV2

	log_pass "zpool_example"
    ~~~~
  * Run the test case. This can be done in two ways, both described in
    detail in section 2 above; example invocations follow this list.
    * test-runner.py (this takes a runfile as input; see *zpool_example.run*)
    * zfs-tests.sh, which can execute the runfile or individual tests
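
    For example, from the source tree (the -r form assumes the
    zpool_example.run runfile has been installed alongside the default
    runfiles or is otherwise visible to zfs-tests.sh):
    ~~~~
      # Run everything listed in the runfile
      $ ./scripts/zfs-tests.sh -r zpool_example.run

      # Or run just the single new test
      $ ./scripts/zfs-tests.sh -t tests/functional/cli_root/zpool_example/zpool_example_001_pos
    ~~~~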
262