This page describes the available test types and the requirements for
authoring that apply to all test types. There is also a supplementary
[guide to writing good testcases](test-style-guidelines.html).

## Test Locations

Each top level directory in the repository corresponds to tests for a
single specification. For W3C specs, these directories are named after
the shortname of the spec (i.e. the name used for snapshot
publications under `/TR/`).

Within the specification-specific directory there are two common ways
of laying out tests. The first is a flat structure which is sometimes
adopted for very short specifications. The alternative is a nested
structure with each subdirectory corresponding to the id of a heading
in the specification. This layout provides some implicit metadata
about the part of a specification being tested according to its
location in the filesystem, and is preferred for larger
specifications.
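
For illustration, the two layouts might look something like the
following (all directory and file names here are purely hypothetical):

    short-spec/
      feature-one.html
      feature-two.html

    larger-spec/
      section-heading-id/
        behaviour-001.html
      other-heading-id/
        behaviour-001.html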

When adding new tests to existing specifications, try to follow the
structure of existing tests.

Because of path length limitations on Windows, test paths must be less
than 150 characters relative to the test root directory (this gives
vendors just over 100 characters for their own paths when running in
automation).

## Choosing the Test Type

Tests should be written using the mechanism that is most conducive to
running in automation. In general the following order of preference holds:

* [idlharness.js](testharness-idlharness.html) tests - for testing
  anything in a WebIDL block.

* [testharness.js](testharness.html) tests - for any test that can be
  written using script alone.

* [Reftests][reftests] - for most tests of rendering.

* WebDriver tests - for testing the webdriver protocol itself or (in
  the future) for certain tests that require access to privileged APIs.

* [Manual tests][manual-tests] - as a last resort for anything that can't be tested
  using one of the above techniques.

Some scenarios demand certain test types. For example:

* Tests for layout will generally be reftests. In some cases it will
  not be possible to construct a reference and a test that will always
  render the same, in which case a manual test, accompanied by
  testharness tests that inspect the layout via the DOM, must be
  written.

* Features that require human interaction for security reasons
  (e.g. to pick a file from the local filesystem) typically have to be
  manual tests.

## General Test Design Requirements

### Short

Tests should be as short as possible. For reftests in particular,
scrollbars at 800×600px window size must be avoided unless scrolling
behaviour is specifically being tested. For all tests, extraneous
elements on the page should be avoided so it is clear what is part of
the test (for a typical testharness test, the only content on the page
will be rendered by the harness itself).

### Minimal

Tests should generally avoid depending on edge case behaviour of
features that they don't explicitly intend to test. For example,
except where testing parsing, tests should contain no
[parse errors][validator]. Of course tests which intentionally address
the interactions between multiple platform features are not only
acceptable but encouraged.

### Cross-platform

Tests should be as cross-platform as reasonably possible, working
across different devices, screen resolutions, paper sizes, etc.
Exceptions should document their assumptions.

### Self-Contained

Tests must not depend on external network resources, including
w3c-test.org. When these tests are run on CI systems they are
typically configured with access to external resources disabled, so
tests that try to access them will fail. Where tests want to use
multiple hosts this is possible through a known set of subdomains and
features of wptserve (see
["Tests Involving Multiple Origins"](#tests-involving-multiple-origins)).

## File Names

Generally file names should be somewhat descriptive of what is being
tested; very generic names like `001.html` are discouraged. A common
format, required by CSS tests, is described in
[CSS Naming Conventions](css-naming.html).

## File Formats

Tests must be HTML, XHTML or SVG files.

Note: For CSS tests, the test source will be parsed and
re-serialized. This re-serialization will cause minor changes to the
test file, notably: attribute values will always be quoted, whitespace
between attributes will be collapsed to a single space, duplicate
attributes will be removed, optional closing tags will be inserted,
and invalid markup will be normalized. If these changes should make
the test inoperable, for example if the test is testing markup error
recovery, add the [flag][requirement-flags] `asis` to prevent
re-serialization. This flag will also prevent format conversions so it
may be necessary to provide alternate versions of the test in other
formats (XHTML, HTML, etc.)

## Character Encoding

Except when specifically testing encoding, tests must be encoded in
UTF-8, marked through the use of e.g. `<meta charset=utf-8>`, or in
pure ASCII.
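
For instance, a minimal UTF-8 encoded test file might begin like this
(the title here is purely illustrative):

    <!doctype html>
    <meta charset=utf-8>
    <title>Example of a UTF-8 encoded test</title>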

## Support files

Various support files are available in the `/common/` and `/media/`
directories (web-platform-tests) and `/support/` (CSS). Reusing
existing resources is encouraged where possible, as is adding
generally useful files to these common areas rather than to specific
testsuites.

For CSS tests the following standard images are available in the
support directory:

 * 1x1 color swatches
 * 15x15 color swatches
 * 15x15 bordered color swatches
 * assorted rulers and red/green grids
 * a cat
 * a 4-part picture

## Tools

Sometimes you may want to add a script to the repository that's meant
to be used from the command line, not from a browser (e.g., a script
for generating test files). If you want to ensure (e.g., for security
reasons) that such scripts won't be handled by the HTTP server, but
will instead only be usable from the command line, then place them
in either:

* the `tools` subdir at the root of the repository, or
* the `tools` subdir at the root of any top-level directory in the
  repo which contains the tests the script is meant to be used with

Any files in those `tools` directories won't be handled by the HTTP
server; instead the server will return a 404 if a user navigates to
the URL for a file within them.

If you want to add a script for use with a particular set of tests
but there isn't yet any `tools` subdir at the root of a top-level
directory in the repository containing those tests, you can create
a `tools` subdir at the root of that top-level directory and place
your scripts there.

For example, if you wanted to add a script for use with tests in the
`notifications` directory, create the `notifications/tools` subdir
and put your script there.

## Style Rules

A number of style rules should be applied to the test file. These are
not uniformly enforced throughout the existing tests, but will be for
new tests. Any of these rules may be broken if the test demands it:

 * No trailing whitespace

 * Use spaces rather than tabs for indentation

 * Use UNIX-style line endings (i.e. no CR characters at EOL).

## Advanced Testing Features

Certain test scenarios require more than just static HTML
generation. This is supported through the
[wptserve](http://github.com/w3c/wptserve) server. Several scenarios
in particular are common:

### Standalone workers tests

Tests that only require assertions in a dedicated worker scope can use
standalone workers tests. In this case, the test is a JavaScript file
with extension `.worker.js` that imports `testharness.js`. The test can
then use all the usual APIs, and can be run from the path to the
JavaScript file with the `.js` removed.

For example, one could write a test for the `FileReaderSync` API by
creating a `FileAPI/FileReaderSync.worker.js` as follows:

    importScripts("/resources/testharness.js");
    test(function () {
      var blob = new Blob(["Hello"]);
      var fr = new FileReaderSync();
      assert_equals(fr.readAsText(blob), "Hello");
    }, "FileReaderSync#readAsText.");
    done();

This test could then be run from `FileAPI/FileReaderSync.worker`.

### Multi-global tests

Tests for features that exist in multiple global scopes can be written in a way
that they are automatically run in a window scope as well as a dedicated worker
scope.
In this case, the test is a JavaScript file with extension `.any.js`.
The test can then use all the usual APIs, and can be run from the path to the
JavaScript file with the `.js` replaced by `.worker` or `.html`.

For example, one could write a test for the `Blob` constructor by
creating a `FileAPI/Blob-constructor.any.js` as follows:

    test(function () {
      var blob = new Blob();
      assert_equals(blob.size, 0);
      assert_equals(blob.type, "");
      assert_false(blob.isClosed);
    }, "The Blob constructor.");

This test could then be run from `FileAPI/Blob-constructor.any.worker` as well
as `FileAPI/Blob-constructor.any.html`.

### Tests Involving Multiple Origins

In the test environment, five subdomains are available: `www`, `www1`,
`www2`, `天気の良い日` and `élève`. These must be used for
cross-origin tests. In addition two ports are available for http and
one for websockets. Tests must not hardcode the hostname of the server
that they expect to be running on or the port numbers, as these are
not guaranteed by the test environment. Instead tests can get this
information in one of two ways:

* From script, using the `location` API.

* By using a textual substitution feature of the server.

In order for the latter to work, a file must either have a name of the
form `{name}.sub.{ext}` e.g. `example-test.sub.html` or be referenced
through a URL containing `pipe=sub` in the query string
e.g. `example-test.html?pipe=sub`. The substitution syntax uses `{{ }}`
to delimit items for substitution. For example to substitute in
the host name on which the tests are running, one would write:

    {{host}}

As well as the host, one can get full domains, including subdomains
using the `domains` dictionary. For example:

    {{domains[www]}}

would be replaced by the fully qualified domain name of the `www`
subdomain. Ports are also available on a per-protocol basis e.g.

    {{ports[ws][0]}}

is replaced with the first (and only) websockets port, whilst

    {{ports[http][1]}}

is replaced with the second HTTP port.

The request URL itself can be used as part of the substitution using
the `location` dictionary, which has entries matching the
`window.location` API. For example

    {{location[host]}}

is replaced by `hostname:port` for the current request.
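
As a combined, purely illustrative example, a hypothetical
`example-cross-origin.sub.html` test could build a cross-origin URL
from these substitutions and compare it with the document's own origin
obtained via the `location` API:

    <!doctype html>
    <meta charset=utf-8>
    <title>Cross-origin substitution example</title>
    <script src="/resources/testharness.js"></script>
    <script src="/resources/testharnessreport.js"></script>
    <script>
    // The server rewrites this placeholder before serving the file;
    // the exact host and port depend on the local configuration.
    var crossOrigin = "http://{{domains[www]}}:{{ports[http][0]}}";
    test(function() {
      assert_not_equals(crossOrigin,
                        location.protocol + "//" + location.host,
                        "the www subdomain should be a distinct origin");
    }, "Substitutions produce a usable cross-origin URL.");
    </script>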

### Tests Requiring Special Headers

For tests requiring that a certain HTTP header is set to some static
value, a file with the same path as the test file except for an
additional `.headers` suffix may be created. For example for
`/example/test.html`, the headers file would be
`/example/test.html.headers`. This file consists of lines of the form

    header-name: header-value

For example

    Content-Type: text/html; charset=big5

To apply the same headers to all files in a directory use a
`__dir__.headers` file. This will only apply to the immediate
directory and not subdirectories.

Headers files may be used in combination with substitutions by naming
the file e.g. `test.html.sub.headers`.
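
For instance, a hypothetical `test.html.sub.headers` file could use the
substitution syntax described above to emit a header naming one of the
test subdomains (the particular header chosen here is only an
illustration):

    Access-Control-Allow-Origin: http://{{domains[www]}}:{{ports[http][0]}}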

### Tests Requiring Full Control Over The HTTP Response

For full control over the request and response the server provides the
ability to write `.asis` files; these are served as literal HTTP
responses. It also provides the ability to write python scripts that
have access to request data and can manipulate the content and timing
of the response. For details see the
[wptserve documentation](http://wptserve.readthedocs.org).
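
As a brief sketch, an `.asis` file contains the raw bytes of the
response, status line and headers included. A hypothetical
`/example/not-found.asis` might contain:

    HTTP/1.1 404 Not Found
    Content-Type: text/plain
    Content-Length: 12

    Not found :(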

## CSS-Specific Requirements

Tests for CSS specs have some additional requirements that have to be
met in order to be included in an official specification testsuite.

* [Naming conventions](css-naming.html)

* [User style sheets](css-user-styles.html)

* [Metadata](css-metadata.html)

## Lint tool

We have a lint tool for catching common mistakes in test files. You can run
it manually by starting the `lint` executable from the root of your local
web-platform-tests working directory like this:

```
./lint
```

The lint tool is also run automatically for every submitted pull request,
and reviewers will not merge branches with tests that have lint errors, so
you must fix any errors the lint tool reports. For details on doing that,
see the [lint-tool documentation][lint-tool].

In the unusual case that an error is reported for something essential
to a particular test, or that for other exceptional reasons shouldn't
prevent the test being merged, update and commit the `lint.whitelist`
file in the web-platform-tests root directory to suppress the error
reports. For details on doing that, see the
[lint-tool documentation][lint-tool].

[lint-tool]: ./lint-tool.html
[reftests]: ./reftests.html
[manual-tests]: ./manual-test.html
[test-templates]: ./test-templates.html
[requirement-flags]: ./test-templates.html#requirement-flags
[testharness-documentation]: ./testharness-documentation.html
[validator]: http://validator.w3.org
347