.. role:: javascript(code)
  :language: javascript

==============
Change Streams
==============

.. contents::

--------

Introduction
============

The YAML and JSON files in this directory are platform-independent tests that
drivers can use to prove their conformance to the Change Streams Spec.

Several prose tests, which are not easily expressed in YAML, are also presented
in this file. Those tests will need to be manually implemented by each driver.

Spec Test Format
================

Each YAML file has the following keys:

- ``database_name``: The default database
- ``collection_name``: The default collection
- ``database2_name``: Another database
- ``collection2_name``: Another collection
- ``tests``: An array of tests that are to be run independently of each other.
  Each test will have some of the following fields:

  - ``description``: The name of the test.
  - ``minServerVersion``: The minimum server version to run this test against. If not present, assume there is no minimum server version.
  - ``maxServerVersion``: Reserved for later use
  - ``failPoint``: Optional configureFailPoint command document to run to configure a fail point on the primary server.
  - ``target``: The entity on which to run the change stream. Valid values are:

    - ``collection``: Watch changes on collection ``database_name.collection_name``
    - ``database``: Watch changes on database ``database_name``
    - ``client``: Watch changes on the entire cluster
  - ``topology``: An array of server topologies against which to run the test.
    Valid topologies are ``single``, ``replicaset``, and ``sharded``.
  - ``changeStreamPipeline``: An array of additional aggregation pipeline stages to add to the change stream
  - ``changeStreamOptions``: Additional options to add to the changeStream
  - ``operations``: Array of documents, each describing an operation. Each document has the following fields:

    - ``database``: Database against which to run the operation
    - ``collection``: Collection against which to run the operation
    - ``name``: Name of the command to run
    - ``arguments`` (optional): Object of arguments for the command (ex: document to insert)

  - ``expectations``: Optional list of command-started events in Extended JSON format
  - ``result``: Document with ONE of the following fields:

    - ``error``: Describes an error received during the test
    - ``success``: An Extended JSON array of documents expected to be received from the changeStream
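
As an illustration, a minimal file following this schema might look like the
sketch below. All names and values here are hypothetical and chosen for
brevity; they are not taken from an actual file in this directory.

```yaml
database_name: "change-stream-tests"
collection_name: "test"
database2_name: "change-stream-tests-2"
collection2_name: "test2"
tests:
  - description: "Insert on the watched collection produces a change event"
    minServerVersion: "3.6.0"
    target: collection
    topology: [replicaset]
    changeStreamPipeline: []
    changeStreamOptions: {}
    operations:
      - database: "change-stream-tests"
        collection: "test"
        name: insertOne
        arguments:
          document: { x: 1 }
    expectations: []
    result:
      success:
        - operationType: insert
          ns: { db: "change-stream-tests", coll: "test" }
          fullDocument: { x: { $numberInt: "1" } }
```

Because result matching is subset-based (see the match function below), the
expected change event need not list every field the server returns.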

Spec Test Match Function
========================

The definition of MATCH or MATCHES in the Spec Test Runner is as follows:

- MATCH takes two values, ``expected`` and ``actual``
- Notation is "Assert [actual] MATCHES [expected]"
- Assertion passes if ``expected`` is a subset of ``actual``, with the value ``42`` acting as a placeholder for "any value"

Pseudocode implementation of ``actual`` MATCHES ``expected``:

::

  If expected is "42" or 42:
    Assert that actual exists (is not null or undefined)
  Else:
    Assert that actual is of the same JSON type as expected
    If expected is a JSON array:
      For every idx/value in expected:
        Assert that actual[idx] MATCHES value
    Else if expected is a JSON object:
      For every key/value in expected:
        Assert that actual[key] MATCHES value
    Else:
      Assert that expected equals actual
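
For concreteness, the pseudocode above can be sketched in Python. The function
name ``matches`` and the representation of JSON values as plain dicts, lists,
and scalars are assumptions of this sketch, not part of the spec:

```python
def matches(actual, expected):
    """Return True if `actual` MATCHES `expected` per the rules above."""
    if expected == 42 or expected == "42":
        # The "any value" placeholder: actual only needs to exist.
        return actual is not None
    if isinstance(expected, dict):
        # Subset semantics: every expected key must be present and MATCH.
        return (isinstance(actual, dict) and
                all(k in actual and matches(actual[k], v)
                    for k, v in expected.items()))
    if isinstance(expected, list):
        # Every expected element must MATCH the element at the same index.
        return (isinstance(actual, list) and
                len(actual) >= len(expected) and
                all(matches(actual[i], v) for i, v in enumerate(expected)))
    # Scalars must agree in both JSON type and value.
    return type(actual) is type(expected) and actual == expected
```

A real runner would raise an assertion error with a diff instead of returning
a boolean, but the traversal is the same.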

The expected values for ``result.success`` and ``expectations`` are written in Extended JSON. Drivers may adopt any of the following approaches to comparisons, as long as they are consistent:

- Convert ``actual`` to Extended JSON and compare to ``expected``
- Convert ``expected`` and ``actual`` to BSON, and compare them
- Convert ``expected`` and ``actual`` to native equivalents of JSON, and compare them

Spec Test Runner
================

Before running the tests:

- Create a MongoClient ``globalClient``, and connect to the server. When
  executing tests against a sharded cluster, ``globalClient`` must only connect
  to one mongos. This is because tests that set failpoints will only work
  consistently if both the ``configureFailPoint`` and failing commands are sent
  to the same mongos.

For each YAML file, for each element in ``tests``:

- If ``topology`` does not include the topology of the server instance(s), skip this test.
- Use ``globalClient`` to:

  - Drop the database ``database_name``
  - Drop the database ``database2_name``
  - Create the database ``database_name`` and the collection ``database_name.collection_name``
  - Create the database ``database2_name`` and the collection ``database2_name.collection2_name``
  - If the ``failPoint`` field is present, configure the fail point on the primary server. See
    `Server Fail Point <../../transactions/tests#server-fail-point>`_ in the
    Transactions spec test documentation for more information.

- Create a new MongoClient ``client``
- Begin monitoring all APM events for ``client``. (If the driver uses global listeners, filter out all events that do not originate with ``client``). Filter out any "internal" commands (e.g. ``isMaster``)
- Using ``client``, create a changeStream ``changeStream`` against the specified ``target``. Use ``changeStreamPipeline`` and ``changeStreamOptions`` if they are non-empty. Capture any error.
- If there was no error, use ``globalClient`` and run every operation in ``operations`` in serial against the server until all operations have been executed or an error is thrown. Capture any error.
- If there was no error and ``result.error`` is set, iterate ``changeStream`` once and capture any error.
- If there was no error and ``result.success`` is non-empty, iterate ``changeStream`` until it returns as many changes as there are elements in the ``result.success`` array or an error is thrown. Capture any error.
- Close ``changeStream``
- If there was an error:

  - Assert that an error was expected for the test.
  - Assert that the error MATCHES ``result.error``

- Else:

  - Assert that no error was expected for the test
  - Assert that the changes received from ``changeStream`` MATCH the results in ``result.success``

- If there are any ``expectations``:
  - For each (``expected``, ``idx``) in ``expectations``:

    - If ``actual[idx]`` is a ``killCursors`` event, skip it and move to ``actual[idx+1]``.
    - Else assert that ``actual[idx]`` MATCHES ``expected``

- Close the MongoClient ``client``
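
The expectations step above (matching captured APM events while skipping any
interleaved ``killCursors`` events) can be sketched in Python. The event
representation (dicts with a ``command_name`` key) and the ``matches``
predicate passed in are assumptions of this sketch, not a driver API:

```python
def check_expectations(actual_events, expected_events, matches):
    """Assert each expected event MATCHES the next non-killCursors
    actual event, in order."""
    idx = 0
    for expected in expected_events:
        # Skip killCursors events, which may be interleaved when the
        # change stream is closed.
        while (idx < len(actual_events)
               and actual_events[idx].get("command_name") == "killCursors"):
            idx += 1
        assert idx < len(actual_events), "ran out of actual events"
        assert matches(actual_events[idx], expected)
        idx += 1
```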

After running all tests:

- Close the MongoClient ``globalClient``
- Drop database ``database_name``
- Drop database ``database2_name``

Iterating the Change Stream
---------------------------

Although synchronous drivers must provide a `non-blocking mode of iteration <../change-streams.rst#not-blocking-on-iteration>`_, asynchronous drivers may not have such a mechanism. Those drivers with only a blocking mode of iteration should be careful not to iterate the change stream unnecessarily, as doing so could cause the test runner to block indefinitely. For this reason, the test runner procedure above advises drivers to take a conservative approach to iteration.

If the test expects an error and one was not thrown by either creating the change stream or executing the test's operations, iterating the change stream once allows for an error to be thrown by a ``getMore`` command. If the test does not expect any error, the change stream should be iterated only until it returns as many result documents as are expected by the test.
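
The conservative iteration described above can be sketched in Python.
``next()`` stands in for a single (possibly blocking) iteration of a driver's
change stream cursor, and the function name is hypothetical:

```python
def iterate_until(change_stream, expected_count):
    """Iterate `change_stream` only as many times as needed, capturing
    any error instead of raising it."""
    changes, error = [], None
    try:
        while len(changes) < expected_count:
            # May block, but the number of iterations is bounded by the
            # number of expected results (or one, for error tests).
            changes.append(next(change_stream))
    except Exception as exc:
        error = exc
    return changes, error
```

The captured error (or its absence) is then asserted against ``result.error``
or ``result.success`` as described in the runner procedure.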

Testing on Sharded Clusters
---------------------------

When writing data on sharded clusters, majority-committed data does not always show up in the response of the first
``getMore`` command after the data is written. This is because in sharded clusters, no data from shard A may be returned
until all other shards report an entry that sorts after the change in shard A.

To account for this, drivers MUST NOT rely on change stream documents in certain batches. For example, if expecting two
documents in a change stream, these may not be part of the same ``getMore`` response, or even be produced in two
subsequent ``getMore`` responses. Drivers MUST allow for a ``getMore`` to produce empty batches when testing on a
sharded cluster. By default, this can take up to 10 seconds, but it can be controlled by enabling the
``writePeriodicNoops`` server parameter and configuring the ``periodicNoopIntervalSecs`` parameter. Choosing lower
values allows for running change stream tests with smaller timeouts.
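
For example, a test cluster might be started with more frequent no-ops. The
parameter names below follow the paragraph above; verify the exact spelling,
and whether they are startup-only, against your server version's
documentation:

```shell
# Start each shard/replica-set member with a periodic no-op every second
# so change stream batches surface quickly in tests.
mongod --replSet rs0 \
  --setParameter writePeriodicNoops=true \
  --setParameter periodicNoopIntervalSecs=1
```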

Prose Tests
===========

The following tests have not yet been automated, but MUST still be tested. All tests SHOULD be run on both replica sets and sharded clusters unless otherwise specified:

#. ``ChangeStream`` must continuously track the last seen ``resumeToken``
#. ``ChangeStream`` will throw an exception if the server response is missing the resume token (if wire version is < 8, this is a driver-side error; for 8+, this is a server-side error)
#. After receiving a ``resumeToken``, ``ChangeStream`` will automatically resume one time on a resumable error with the initial pipeline and options, except for the addition/update of a ``resumeToken``.
#. ``ChangeStream`` will not attempt to resume on any error encountered while executing an ``aggregate`` command. Note that retryable reads may retry ``aggregate`` commands. Drivers should be careful to distinguish retries from resume attempts. Alternatively, drivers may specify ``retryReads=false`` or avoid using a `retryable error <../../retryable-reads/retryable-reads.rst#retryable-error>`_ for this test.
#. **Removed**
#. ``ChangeStream`` will perform server selection before attempting to resume, using initial ``readPreference``
#. Ensure that a cursor returned from an aggregate command with a cursor id and an initial empty batch is not closed on the driver side.
#. The ``killCursors`` command sent during the "Resume Process" must not be allowed to throw an exception.
#. ``$changeStream`` stage for ``ChangeStream`` against a server ``>=4.0`` and ``<4.0.7`` that has not received any results yet MUST include a ``startAtOperationTime`` option when resuming a change stream.
#. **Removed**
#. For a ``ChangeStream`` under these conditions:

   - Running against a server ``>=4.0.7``.
   - The batch is empty or has been iterated to the last document.

   Expected result:

   - ``getResumeToken`` must return the ``postBatchResumeToken`` from the current command response.

#. For a ``ChangeStream`` under these conditions:

   - Running against a server ``<4.0.7``.
   - The batch is empty or has been iterated to the last document.

   Expected result:

   - ``getResumeToken`` must return the ``_id`` of the last document returned if one exists.
   - ``getResumeToken`` must return ``resumeAfter`` from the initial aggregate if the option was specified.
   - If ``resumeAfter`` was not specified, the ``getResumeToken`` result must be empty.

#. For a ``ChangeStream`` under these conditions:

   - The batch is not empty.
   - The batch has been iterated up to but not including the last element.

   Expected result:

   - ``getResumeToken`` must return the ``_id`` of the previous document returned.

#. For a ``ChangeStream`` under these conditions:

   - The batch is not empty.
   - The batch hasn’t been iterated at all.
   - Only the initial ``aggregate`` command has been executed.

   Expected result:

   - ``getResumeToken`` must return ``startAfter`` from the initial aggregate if the option was specified.
   - ``getResumeToken`` must return ``resumeAfter`` from the initial aggregate if the option was specified.
   - If neither the ``startAfter`` nor ``resumeAfter`` options were specified, the ``getResumeToken`` result must be empty.

   Note that this test cannot be run against sharded topologies because in that case the initial ``aggregate`` command only establishes cursors on the shards and always returns an empty ``firstBatch``.

#. **Removed**
#. **Removed**
#. ``$changeStream`` stage for ``ChangeStream`` started with ``startAfter`` against a server ``>=4.1.1`` that has not received any results yet MUST include a ``startAfter`` option and MUST NOT include a ``resumeAfter`` option when resuming a change stream.
#. ``$changeStream`` stage for ``ChangeStream`` started with ``startAfter`` against a server ``>=4.1.1`` that has received at least one result MUST include a ``resumeAfter`` option and MUST NOT include a ``startAfter`` option when resuming a change stream.
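
Taken together, the ``getResumeToken`` prose tests above imply a precedence
order that can be sketched as a pure function in Python. All argument names
here are hypothetical, not a mandated driver API; ``post_batch_resume_token``
is only available on servers ``>=4.0.7``:

```python
def resolve_resume_token(post_batch_resume_token, last_returned_id,
                         start_after, resume_after, batch_exhausted):
    """Sketch of the resume-token precedence implied by the prose tests."""
    # 4.0.7+: an empty or fully iterated batch yields postBatchResumeToken.
    if batch_exhausted and post_batch_resume_token is not None:
        return post_batch_resume_token
    # Otherwise, fall back to the _id of the last document returned.
    if last_returned_id is not None:
        return last_returned_id
    # Before any document has been returned, the original option wins.
    if start_after is not None:
        return start_after
    return resume_after  # may be None, in which case the result is empty
```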
228