Metadata-Version: 2.1
Name: taskcluster
Version: 44.2.2
Summary: Python client for Taskcluster
Home-page: https://github.com/taskcluster/taskcluster
Author: Mozilla Taskcluster and Release Engineering
Author-email: release+python@mozilla.com
License: UNKNOWN
Platform: UNKNOWN
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: requests (>=2.4.3)
Requires-Dist: mohawk (>=0.3.4)
Requires-Dist: slugid (>=2)
Requires-Dist: taskcluster-urls (>=12.1.0)
Requires-Dist: six (>=1.10.0)
Requires-Dist: aiohttp (>=3.7.4) ; python_version >= "3.6"
Requires-Dist: async-timeout (>=2.0.0) ; python_version >= "3.6"
Provides-Extra: test
Requires-Dist: pytest ; extra == 'test'
Requires-Dist: pytest-cov ; extra == 'test'
Requires-Dist: pytest-mock ; extra == 'test'
Requires-Dist: httmock ; extra == 'test'
Requires-Dist: mock ; extra == 'test'
Requires-Dist: setuptools-lint ; extra == 'test'
Requires-Dist: flake8 ; extra == 'test'
Requires-Dist: psutil ; extra == 'test'
Requires-Dist: hypothesis ; extra == 'test'
Requires-Dist: tox ; extra == 'test'
Requires-Dist: coverage ; extra == 'test'
Requires-Dist: python-dateutil ; extra == 'test'
Requires-Dist: subprocess32 ; (python_version == "2.7") and extra == 'test'
Requires-Dist: pytest-asyncio ; (python_version >= "3.6") and extra == 'test'
Requires-Dist: aiofiles ; (python_version >= "3.6") and extra == 'test'
Requires-Dist: httptest ; (python_version >= "3.6") and extra == 'test'

# Taskcluster Client for Python

[![Download](https://img.shields.io/badge/pypi-taskcluster-brightgreen)](https://pypi.python.org/pypi/taskcluster)
[![License](https://img.shields.io/badge/license-MPL%202.0-orange.svg)](http://mozilla.org/MPL/2.0)

**A Taskcluster client library for Python.**

This library is a complete interface to Taskcluster in Python.  It provides
both synchronous and asynchronous interfaces for all Taskcluster API methods,
in both Python-2 and Python-3 variants.
## Usage

For a general guide to using Taskcluster clients, see [Calling Taskcluster APIs](https://docs.taskcluster.net/docs/manual/using/api).

### Setup

Before calling an API end-point, you'll need to create a client instance.
There is a class for each service, e.g., `Queue` and `Auth`.  Each takes the
same options, described below.  Note that only `rootUrl` is
required, and it's unusual to configure any other options aside from
`credentials`.

For each service, there are sync and async variants.  The classes under
`taskcluster` (e.g., `taskcluster.Queue`) are Python-2 compatible and operate
synchronously.  The classes under `taskcluster.aio` (e.g.,
`taskcluster.aio.Queue`) require Python >= 3.6.
#### Authentication Options

Here is a simple set-up of an Index client:

```python
import taskcluster
index = taskcluster.Index({
  'rootUrl': 'https://tc.example.com',
  'credentials': {'clientId': 'id', 'accessToken': 'accessToken'},
})
```

The `rootUrl` option is required as it gives the Taskcluster deployment to
which API requests should be sent.  Credentials are only required if the
request is to be authenticated -- many Taskcluster API methods do not require
authentication.

In most cases, the root URL and Taskcluster credentials should be provided in [standard environment variables](https://docs.taskcluster.net/docs/manual/design/env-vars).  Use `taskcluster.optionsFromEnvironment()` to read these variables automatically:

```python
auth = taskcluster.Auth(taskcluster.optionsFromEnvironment())
```

Note that this function does not respect `TASKCLUSTER_PROXY_URL`.  To use the Taskcluster Proxy from within a task:

```python
import os

auth = taskcluster.Auth({'rootUrl': os.environ['TASKCLUSTER_PROXY_URL']})
```

#### Authorized Scopes

If you wish to perform requests on behalf of a third party that has a smaller
set of scopes than you do, you can specify [which scopes your request should be
allowed to
use](https://docs.taskcluster.net/docs/manual/design/apis/hawk/authorized-scopes)
in the `authorizedScopes` option.

```python
opts = taskcluster.optionsFromEnvironment()
opts['authorizedScopes'] = ['queue:create-task:highest:my-provisioner/my-worker-type']
queue = taskcluster.Queue(opts)
```

#### Other Options

The following additional options are accepted when constructing a client object:

* `signedUrlExpiration` - default value for the `expiration` argument to `buildSignedUrl`
* `maxRetries` - maximum number of times to retry a failed request

### Calling API Methods

API methods are available as methods on the corresponding client object.  For
sync clients, these are sync methods, and for async clients they are async
methods; the calling convention is the same in either case.

There are four calling conventions for methods:

```python
client.method(v1, v2, payload)
client.method(payload, k1=v1, k2=v2)
client.method(payload=payload, query=query, params={k1: v1, k2: v2})
client.method(v1, v2, payload=payload, query=query)
```

Here, `v1` and `v2` are URL parameters (named `k1` and `k2`), `payload` is the
request payload, and `query` is a dictionary of query arguments.

For example, in order to call an API method with query-string arguments:

```python
await queue.listTaskGroup('JzTGxwxhQ76_Tt1dxkaG5g',
  query={'continuationToken': previousResponse.get('continuationToken')})
```

### Generating URLs

It is often necessary to generate the URL for an API method without actually calling the method.
To do so, use `buildUrl` or, for an API method that requires authentication, `buildSignedUrl`.

```python
import taskcluster

index = taskcluster.Index(taskcluster.optionsFromEnvironment())
print(index.buildUrl('findTask', 'builds.v1.latest'))
secrets = taskcluster.Secrets(taskcluster.optionsFromEnvironment())
print(secrets.buildSignedUrl('get', 'my-secret'))
```

Note that signed URLs are time-limited; the expiration can be set with the `signedUrlExpiration` option to the client constructor, or with the `expiration` keyword argument to `buildSignedUrl`, both given in seconds.

### Generating Temporary Credentials

If you have non-temporary Taskcluster credentials, you can generate a set of
[temporary credentials](https://docs.taskcluster.net/docs/manual/design/apis/hawk/temporary-credentials) as follows. Note that the credentials cannot last more
than 31 days, and you can only revoke them by revoking the credentials that were
used to issue them (this takes up to one hour).

It is not the responsibility of the caller to apply any clock drift adjustment
to the start or expiry time - this is handled by the auth service directly.

```python
import datetime

import taskcluster

# Illustrative placeholder credentials
clientId = 'my-client-id'
accessToken = 'my-access-token'

start = datetime.datetime.now()
expiry = start + datetime.timedelta(0, 60)
scopes = ['ScopeA', 'ScopeB']
name = 'foo'

credentials = taskcluster.createTemporaryCredentials(
    # issuing clientId
    clientId,
    # issuing accessToken
    accessToken,
    # Validity of temporary credentials starts here, as a datetime
    start,
    # Expiration of temporary credentials, as a datetime
    expiry,
    # Scopes to grant the temporary credentials
    scopes,
    # credential name (optional)
    name
)
```

You cannot use temporary credentials to issue new temporary credentials.  You
must have `auth:create-client:<name>` to create a named temporary credential,
but unnamed temporary credentials can be created regardless of your scopes.

### Handling Timestamps

Many Taskcluster APIs require ISO 8601 timestamps offset into the future
as a way of providing expirations, deadlines, etc. These can be created
using `datetime.datetime.isoformat()`; however, it can be rather error-prone
and tedious to offset `datetime.datetime` objects into the future. Therefore
this library comes with two utility functions for this purpose.

```python
dateObject = taskcluster.fromNow("2 days 3 hours 1 minute")
  # -> datetime.datetime(2017, 1, 21, 17, 8, 1, 607929)
dateString = taskcluster.fromNowJSON("2 days 3 hours 1 minute")
  # -> '2017-01-21T17:09:23.240178Z'
```

By default the date is offset into the future; if the offset string is
prefixed with minus (`-`), the date object is offset into the past. This is
useful in some corner cases.

```python
dateObject = taskcluster.fromNow("- 1 year 2 months 3 weeks 5 seconds")
  # -> datetime.datetime(2015, 10, 30, 18, 16, 50, 931161)
```

The offset string ignores whitespace and is case-insensitive. It may also
optionally be prefixed with plus (`+`) instead of minus; any `+` prefix is
simply ignored. However, entries in the offset string must be given in order from
high to low, i.e., `2 years 1 day`. Additionally, various shorthands may be
employed, as illustrated below.

```
  years,    year,   yr,   y
  months,   month,  mo
  weeks,    week,         w
  days,     day,          d
  hours,    hour,         h
  minutes,  minute, min
  seconds,  second, sec,  s
```

The `fromNow` function may also be given a date to be relative to, as a second
argument. This is useful when offsetting the task expiration relative to the task
deadline, or similar.  This argument can also be passed as the
kwarg `dateObj`.

```python
dateObject1 = taskcluster.fromNow("2 days 3 hours")
dateObject2 = taskcluster.fromNow("1 year", dateObject1)
taskcluster.fromNow("1 year", dateObj=dateObject1)
  # -> datetime.datetime(2018, 1, 21, 17, 59, 0, 328934)
```

### Generating SlugIDs

To generate slugIds (Taskcluster's client-generated unique IDs), use
`taskcluster.slugId()`, which will return a unique slugId on each call.

In some cases it is useful to be able to create a mapping from names to
slugIds, with the ability to generate the same slugId multiple times.
The `taskcluster.stableSlugId()` function returns a callable that does
just this.

```python
gen = taskcluster.stableSlugId()
sometask = gen('sometask')
assert gen('sometask') == sometask  # same input generates same output
assert gen('sometask') != gen('othertask')

gen2 = taskcluster.stableSlugId()
sometask2 = gen2('sometask')
assert sometask2 != sometask  # but different slugId generators produce
                              # different output
```

### Scope Analysis

The `scopeMatch(assumedScopes, requiredScopeSets)` function determines
whether one or more of a set of required scopes are satisfied by the assumed
scopes, taking `*`-expansion into account.  This is useful for making local
decisions on scope satisfaction, but note that `assumedScopes` must be the
*expanded* scopes, as this function cannot perform expansion.

It takes a list of assumed scopes and a list of required scope sets in
disjunctive normal form, and checks whether any of the required scope sets is
satisfied.

Example:

```python
requiredScopeSets = [
    ["scopeA", "scopeB"],
    ["scopeC:*"]
]
assert scopeMatch(['scopeA', 'scopeB'], requiredScopeSets)
assert scopeMatch(['scopeC:xyz'], requiredScopeSets)
assert not scopeMatch(['scopeA'], requiredScopeSets)
assert not scopeMatch(['scopeC'], requiredScopeSets)
```

### Pagination

Many Taskcluster API methods are paginated.  There are two ways to handle
pagination easily with the Python client.  The first is to implement pagination
in your code:

```python
import taskcluster
queue = taskcluster.Queue({'rootUrl': 'https://tc.example.com'})
i = 0
tasks = 0
outcome = queue.listTaskGroup('JzTGxwxhQ76_Tt1dxkaG5g')
while True:
    print('Response %d gave us %d more tasks' % (i, len(outcome['tasks'])))
    tasks += len(outcome.get('tasks', []))
    i += 1
    continuationToken = outcome.get('continuationToken')
    if not continuationToken:
        break
    outcome = queue.listTaskGroup('JzTGxwxhQ76_Tt1dxkaG5g',
        query={'continuationToken': continuationToken})
print('Task Group %s has %d tasks' % (outcome['taskGroupId'], tasks))
```

There's also an experimental feature to support built-in automatic pagination
in the sync client.  This feature allows passing a callback as the
`paginationHandler` keyword argument.  This function will be passed the
response body of the API method as its sole positional argument.

This example of the built-in pagination shows how a list of tasks could be
built and then counted:

```python
import taskcluster
queue = taskcluster.Queue({'rootUrl': 'https://tc.example.com'})

responses = []

def handle_page(y):
    print("%d tasks fetched" % len(y.get('tasks', [])))
    responses.append(y)

queue.listTaskGroup('JzTGxwxhQ76_Tt1dxkaG5g', paginationHandler=handle_page)

tasks = 0
for response in responses:
    tasks += len(response.get('tasks', []))

print("%d requests fetched %d tasks" % (len(responses), tasks))
```

### Pulse Events

This library can generate exchange patterns for Pulse messages based on the
Exchanges definitions provided by each service.  This is done by instantiating a
`<service>Events` class and calling a method with the name of the event.
Options for the topic exchange methods can be in the form of either a single
dictionary argument or keyword arguments.  Only one form is allowed.

```python
from taskcluster import client
qEvt = client.QueueEvents({'rootUrl': 'https://tc.example.com'})
# The following calls are equivalent
print(qEvt.taskCompleted({'taskId': 'atask'}))
print(qEvt.taskCompleted(taskId='atask'))
```

Note that the client library does *not* provide support for interfacing with a Pulse server.

### Logging

Logging is set up in `taskcluster/__init__.py`.  If the special
`DEBUG_TASKCLUSTER_CLIENT` environment variable is set, the `__init__.py`
module will set the `logging` module's level for its logger to `logging.DEBUG`
and, if there are no existing handlers, add a `logging.StreamHandler()`
instance.  This is meant to assist those who do not wish to bother figuring out
how to configure the Python logging module but do want debug messages.
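
If you prefer to configure this explicitly rather than via the environment variable, the equivalent set-up with the standard `logging` module looks like this (assuming the library logs under the `taskcluster` logger name):

```python
import logging

logger = logging.getLogger('taskcluster')
logger.setLevel(logging.DEBUG)
if not logger.handlers:
    # Mirror what DEBUG_TASKCLUSTER_CLIENT does: add a stream handler
    # only if none is configured yet.
    logger.addHandler(logging.StreamHandler())
```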

## Uploading and Downloading Objects

The Object service provides an API for reliable uploads and downloads of large objects.
This library provides convenience methods to implement the client portion of those APIs, providing well-tested, resilient upload and download functionality.
These methods will negotiate the appropriate method with the object service and perform the required steps to transfer the data.

All methods are available in both sync and async versions, with identical APIs except for the `async`/`await` keywords.
These methods are not available for Python 2.7.

In either case, you will need to provide a configured `Object` instance with appropriate credentials for the operation.

NOTE: There is a helper function to upload `s3` artifacts, `taskcluster.helper.upload_artifact`, but it is deprecated as it only supports the `s3` artifact type.

### Uploads

To upload, use any of the following:

* `await taskcluster.aio.upload.uploadFromBuf(projectId=.., name=.., contentType=.., contentLength=.., uploadId=.., expires=.., maxRetries=.., objectService=.., data=..)` - asynchronously upload data from a buffer full of bytes.
* `await taskcluster.aio.upload.uploadFromFile(projectId=.., name=.., contentType=.., contentLength=.., uploadId=.., expires=.., maxRetries=.., objectService=.., file=..)` - asynchronously upload data from a standard Python file.
  Note that this is [probably what you want](https://github.com/python/asyncio/wiki/ThirdParty#filesystem), even in an async context.
* `await taskcluster.aio.upload(projectId=.., name=.., contentType=.., contentLength=.., expires=.., uploadId=.., maxRetries=.., objectService=.., readerFactory=..)` - asynchronously upload data from an async reader factory.
* `taskcluster.upload.uploadFromBuf(projectId=.., name=.., contentType=.., contentLength=.., expires=.., uploadId=.., maxRetries=.., objectService=.., data=..)` - upload data from a buffer full of bytes.
* `taskcluster.upload.uploadFromFile(projectId=.., name=.., contentType=.., contentLength=.., expires=.., uploadId=.., maxRetries=.., objectService=.., file=..)` - upload data from a standard Python file.
* `taskcluster.upload(projectId=.., name=.., contentType=.., contentLength=.., expires=.., uploadId=.., maxRetries=.., objectService=.., readerFactory=..)` - upload data from a sync reader factory.

A "reader" is an object with a `read(max_size=-1)` method which reads and returns a chunk of 1 .. `max_size` bytes, or returns an empty string at EOF, async for the async functions and sync for the remainder.
A "reader factory" is a callable (async or sync, to match the upload function) which returns a fresh reader, ready to read the first byte of the object.
When uploads are retried, the reader factory may be called more than once.

The `uploadId` parameter may be omitted, in which case a new slugId will be generated.

### Downloads

To download, use any of the following:

* `await taskcluster.aio.download.downloadToBuf(name=.., maxRetries=.., objectService=..)` - asynchronously download an object to an in-memory buffer, returning a tuple (buffer, content-type).
  If the file is larger than available memory, this will crash.
* `await taskcluster.aio.download.downloadToFile(name=.., maxRetries=.., objectService=.., file=..)` - asynchronously download an object to a standard Python file, returning the content type.
* `await taskcluster.aio.download.download(name=.., maxRetries=.., objectService=.., writerFactory=..)` - asynchronously download an object to an async writer factory, returning the content type.
* `taskcluster.download.downloadToBuf(name=.., maxRetries=.., objectService=..)` - download an object to an in-memory buffer, returning a tuple (buffer, content-type).
  If the file is larger than available memory, this will crash.
* `taskcluster.download.downloadToFile(name=.., maxRetries=.., objectService=.., file=..)` - download an object to a standard Python file, returning the content type.
* `taskcluster.download.download(name=.., maxRetries=.., objectService=.., writerFactory=..)` - download an object to a sync writer factory, returning the content type.

A "writer" is an object with a `write(data)` method which writes the given data, async for the async functions and sync for the remainder.
A "writer factory" is a callable (again either async or sync) which returns a fresh writer, ready to write the first byte of the object.
When downloads are retried, the writer factory may be called more than once.

### Artifact Downloads

Artifacts can be downloaded from the queue service with functions similar to those above.
These functions support all of the queue's storage types, raising an error for `error` artifacts.
In each case, if `runId` is omitted then the most recent run will be used.

* `await taskcluster.aio.download.downloadArtifactToBuf(taskId=.., runId=.., name=.., maxRetries=.., queueService=..)` - asynchronously download an artifact to an in-memory buffer, returning a tuple (buffer, content-type).
  If the file is larger than available memory, this will crash.
* `await taskcluster.aio.download.downloadArtifactToFile(taskId=.., runId=.., name=.., maxRetries=.., queueService=.., file=..)` - asynchronously download an artifact to a standard Python file, returning the content type.
* `await taskcluster.aio.download.downloadArtifact(taskId=.., runId=.., name=.., maxRetries=.., queueService=.., writerFactory=..)` - asynchronously download an artifact to an async writer factory, returning the content type.
* `taskcluster.download.downloadArtifactToBuf(taskId=.., runId=.., name=.., maxRetries=.., queueService=..)` - download an artifact to an in-memory buffer, returning a tuple (buffer, content-type).
  If the file is larger than available memory, this will crash.
* `taskcluster.download.downloadArtifactToFile(taskId=.., runId=.., name=.., maxRetries=.., queueService=.., file=..)` - download an artifact to a standard Python file, returning the content type.
* `taskcluster.download.downloadArtifact(taskId=.., runId=.., name=.., maxRetries=.., queueService=.., writerFactory=..)` - download an artifact to a sync writer factory, returning the content type.

## Integration Helpers

The Python Taskcluster client has a module `taskcluster.helper` with utilities that allow you to easily share authentication options across multiple services in your project.

Generally a project using this library will face different use cases and authentication options:

* No authentication for a new contributor without Taskcluster access,
* Specific client credentials through environment variables on a developer's computer,
* Taskcluster Proxy when running inside a task.

### Shared authentication

The class `taskcluster.helper.TaskclusterConfig` is meant to be instantiated once in your project, usually in a top-level module. That singleton is then accessed by different parts of your project, whenever a Taskcluster service is needed.

Here is a sample usage:

1. in `project/__init__.py`, no call to Taskcluster is made at that point:

```python
from taskcluster.helper import TaskclusterConfig

tc = TaskclusterConfig('https://community-tc.services.mozilla.com')
```

2. in `project/boot.py`, we authenticate on Taskcluster with provided credentials, or environment variables, or the Taskcluster Proxy (in that order):

```python
from project import tc

tc.auth(client_id='XXX', access_token='YYY')
```

3. at that point, you can load any service using the authenticated wrapper from anywhere in your code:

```python
from project import tc

def sync_usage():
    queue = tc.get_service('queue')
    queue.ping()

async def async_usage():
    hooks = tc.get_service('hooks', use_async=True)  # Asynchronous service class
    await hooks.ping()
```

Supported environment variables are:
- `TASKCLUSTER_ROOT_URL` to specify your Taskcluster instance base URL. You can either use that variable or instantiate `TaskclusterConfig` with the base URL.
- `TASKCLUSTER_CLIENT_ID` & `TASKCLUSTER_ACCESS_TOKEN` to specify your client credentials instead of providing them to `TaskclusterConfig.auth`
- `TASKCLUSTER_PROXY_URL` to specify the proxy address used to reach Taskcluster in a task. It defaults to `http://taskcluster` when not specified.

For more details on Taskcluster environment variables, [here is the documentation](https://docs.taskcluster.net/docs/manual/design/env-vars).

### Loading secrets across multiple authentications

Another available utility is `taskcluster.helper.load_secrets`, which allows you to retrieve a secret using an authenticated `taskcluster.Secrets` instance (using `TaskclusterConfig.get_service` or the synchronous class directly).

This utility loads a secret, but allows you to:
1. share a secret across multiple projects, by using key prefixes inside the secret,
2. check that some required keys are present in the secret,
3. provide some default values,
4. provide a local secret source instead of using the Taskcluster service (useful for local development or sharing _secrets_ with contributors)

Let's say you have a secret on a Taskcluster instance named `project/foo/prod-config`, which is needed by a backend and some tasks. Here is its content:

```yaml
common:
  environment: production
  remote_log: https://log.xx.com/payload

backend:
  bugzilla_token: XXXX

task:
  backend_url: https://backend.foo.mozilla.com
```

In your backend, you would do:

```python
from taskcluster import Secrets
from taskcluster.helper import load_secrets

prod_config = load_secrets(
  Secrets({...}),
  'project/foo/prod-config',

  # We only need the common & backend parts
  prefixes=['common', 'backend'],

  # We absolutely need a bugzilla token to run
  required=['bugzilla_token'],

  # Let's provide some default value for the environment
  existing={
    'environment': 'dev',
  }
)
  # -> prod_config == {
  #     "environment": "production",
  #     "remote_log": "https://log.xx.com/payload",
  #     "bugzilla_token": "XXXX",
  #   }
```

In your task, you could do the following using the `TaskclusterConfig` mentioned above (the class has a shortcut to use an authenticated `Secrets` service automatically):

```python
from project import tc

prod_config = tc.load_secrets(
  'project/foo/prod-config',

  # We only need the common & task parts
  prefixes=['common', 'task'],

  # Let's provide some default value for the environment and backend_url
  existing={
    'environment': 'dev',
    'backend_url': 'http://localhost:8000',
  }
)
  # -> prod_config == {
  #     "environment": "production",
  #     "remote_log": "https://log.xx.com/payload",
  #     "backend_url": "https://backend.foo.mozilla.com",
  #   }
```

To provide local secret values, you first need to load these values as a dictionary (usually by reading a local file in your format of choice: YAML, JSON, ...) and provide the dictionary to `load_secrets` using the `local_secrets` parameter:

```python
import os
import yaml

from taskcluster import Secrets
from taskcluster.helper import load_secrets

local_path = 'path/to/file.yml'

prod_config = load_secrets(
  Secrets({...}),
  'project/foo/prod-config',

  # We support an optional local file to provide some configuration without reaching Taskcluster
  local_secrets=yaml.safe_load(open(local_path)) if os.path.exists(local_path) else None,
)
```

## Compatibility

This library is co-versioned with Taskcluster itself.
That is, a client with version x.y.z contains API methods corresponding to Taskcluster version x.y.z.
Taskcluster is careful to maintain API compatibility, and guarantees it within a major version.
That means that any client with version x.* will work against any Taskcluster services at version x.*, and is very likely to work for many other major versions of the Taskcluster services.
Any incompatibilities are noted in the [Changelog](https://github.com/taskcluster/taskcluster/blob/main/CHANGELOG.md).