Annotation-model: Tests for the Web Annotation Data Model
=========================================================

The [Web Annotation Data Model](https://www.w3.org/TR/annotation-model)
specification presents a JSON-oriented collection of terms and structures that
permits the sharing of annotations about other content.

The purpose of these tests is to help validate that each of the structural
requirements expressed in the Data Model specification is properly supported
by implementations.

The general approach for this testing is to enable both manual and automated
testing.  However, since the specification has no actual user interface
requirements, there is no general automation mechanism that can be provided
for clients.  Instead, client implementers can take advantage of the plumbing
we provide here to push their data into the tests and collect the results of
the testing.  This assumes knowledge of the requirements of each test or
collection of tests so that the input data is relevant; each test or test
collection contains sufficient information for that task.

Running Tests
-------------

For this test collection we will initially create manual tests.  These
automatically determine pass or fail and generate output for the main WPT
window.  The plan is to minimize the number of such tests to ease the burden
on testers while still exercising all the features.

The workflow for running these tests is roughly as follows:

1. Start up the test driver window, select the annotation-model tests, and
   click "Start".
2. A window pops up that shows a test, whose description tells the tester what
   input is expected.  The window contains a textarea into which the input can
   be typed or pasted, along with a button to click to start testing that
   input.
3. The tester (presumably in another window) brings up their annotation client
   and uses it to generate an annotation that supplies the requested structure
   (see the example after this list).  They then copy and paste that annotation
   into the aforementioned textarea and select the button.
4. The test runs.  Success or failure is determined and reported to the test
   driver window, which then cycles to the next test in the sequence.
5. Repeat steps 2-4 until done.
6. Download the JSON-format report of test results, which can then be visually
   inspected, reported on using various tools, or passed on to W3C via GitHub
   for evaluation and collection in the Implementation Report.

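As a concrete illustration, the payload pasted into the textarea in step 3 is
a Web Annotation serialized as JSON-LD.  The values below are purely
illustrative; the structure each test actually expects is described in the
test itself.

```json
{
  "@context": "http://www.w3.org/ns/anno.jsonld",
  "id": "http://example.org/anno1",
  "type": "Annotation",
  "body": "http://example.org/post1",
  "target": "http://example.com/page1"
}
```
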
**Remember that while these tests are written to help exercise implementations,
their other (important) purpose is to increase confidence that there are
interoperable implementations.** So, implementers are our audience, but these
tests are not meant to be a comprehensive test suite for a client that
implements the Recommendation.  The bulk of the tests are manual because there
are no UI requirements in the Recommendation that would make it possible to
drive every client portably and effectively.

Having said that, because the structure of these "manual" tests is very rigid,
an implementer who understands test automation can use an open-source tool such
as [Selenium](http://www.seleniumhq.org/) to run them against their
implementation: use content they provide to create annotations, feed the
resulting data into our test input field, and run the test.

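For instance, a minimal sketch of that approach using Selenium's Python
bindings might look like the following.  The test URL and element locators are
illustrative assumptions only; the actual ids and page structure must be taken
from the test pages themselves.

```python
# Hedged sketch: drive one annotation-model "manual" test with Selenium.
# The URL and locators below are assumptions for illustration, not the
# actual markup of the WPT test pages.
from selenium import webdriver
from selenium.webdriver.common.by import By

# The annotation produced by the implementation under test (illustrative).
annotation_json = """{
  "@context": "http://www.w3.org/ns/anno.jsonld",
  "id": "http://example.org/anno1",
  "type": "Annotation",
  "body": "http://example.org/post1",
  "target": "http://example.com/page1"
}"""

driver = webdriver.Chrome()
try:
    # Hypothetical URL of a single test page served by the local WPT server.
    driver.get("http://web-platform.test:8000/annotation-model/examples/example-manual.html")

    # Paste the annotation into the test's input textarea and start the test.
    driver.find_element(By.TAG_NAME, "textarea").send_keys(annotation_json)
    driver.find_element(By.TAG_NAME, "button").click()

    # The harness reports pass/fail to the test driver window; collect the
    # results there (or from the downloaded JSON report) as usual.
finally:
    driver.quit()
```
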
Capturing and Reporting Results
-------------------------------

As tests are run against implementations, if the results of testing are
submitted to [test-results](https://github.com/w3c/test-results/) then they will
be automatically included in documents generated by
[wptreport](https://www.github.com/w3c/wptreport). The same tool can be used
locally to view reports about recorded results.

Automating Test Execution
-------------------------

Writing Tests
-------------

If you are interested in writing tests for this environment, see the
associated [CONTRIBUTING](CONTRIBUTING.md) document.
82