Taskcluster Client Library in Python
======================================

[![Build Status](https://travis-ci.org/taskcluster/taskcluster-client.py.svg?branch=master)](https://travis-ci.org/taskcluster/taskcluster-client.py)

This is a library used to interact with Taskcluster within Python programs.  It
exposes the entire REST API to consumers and can generate URLs signed with
Hawk credentials.  It can also generate routing keys for listening to pulse
messages from Taskcluster.

The library builds the REST API methods from the same [API Reference
format](/docs/manual/design/apis/reference-format) as the
JavaScript client library.

## Generating Temporary Credentials
If you have non-temporary Taskcluster credentials you can generate a set of
temporary credentials as follows. Note that the temporary credentials cannot
last more than 31 days, and you can only revoke them by revoking the
credentials that were used to issue them (this takes up to one hour).

It is not the responsibility of the caller to apply any clock drift adjustment
to the start or expiry time - this is handled by the auth service directly.

```python
import datetime

import taskcluster

start = datetime.datetime.utcnow()
expiry = start + datetime.timedelta(minutes=1)
scopes = ['ScopeA', 'ScopeB']
name = 'foo'

credentials = taskcluster.createTemporaryCredentials(
    # issuing clientId
    clientId,
    # issuing accessToken
    accessToken,
    # validity of temporary credentials starts here, as a datetime
    start,
    # expiration of temporary credentials, as a datetime
    expiry,
    # scopes to grant the temporary credentials
    scopes,
    # credential name (optional)
    name
)
```

You cannot use temporary credentials to issue new temporary credentials.  You
must have `auth:create-client:<name>` to create a named temporary credential,
but unnamed temporary credentials can be created regardless of your scopes.
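
The 31-day limit described above can be illustrated with a small stand-alone check. This is a sketch for illustration only, not the library's own validation code:

```python
import datetime

# Illustrative constant; the 31-day limit itself is enforced by the
# library/service, not by user code.
MAX_TEMP_CREDENTIALS_VALIDITY = datetime.timedelta(days=31)

def check_temporary_validity(start, expiry):
    """Reject validity windows longer than the 31-day limit."""
    if expiry - start > MAX_TEMP_CREDENTIALS_VALIDITY:
        raise ValueError('temporary credentials cannot last more than 31 days')
```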

## API Documentation

The REST API methods are documented in the [reference docs](/docs/reference).

## Query-String arguments
Query-string arguments are supported.  To use them, call a method like this:

```python
queue.listTaskGroup('JzTGxwxhQ76_Tt1dxkaG5g',
                    query={'continuationToken': outcome.get('continuationToken')})
```

These query-string arguments are only supported using this calling convention.

## Sync vs Async

The objects under `taskcluster` (e.g., `taskcluster.Queue`) are
python2-compatible and operate synchronously.

The objects under `taskcluster.aio` (e.g., `taskcluster.aio.Queue`) require
`python>=3.6`. The async objects use asyncio coroutines for concurrency; this
allows us to put I/O operations in the background, so operations that require
the CPU can happen sooner. Given dozens of operations that can run concurrently
(e.g., cancelling a medium-to-large task graph), this can result in significant
performance improvements. The code would look something like this:

```python
#!/usr/bin/env python
import asyncio

import aiohttp

from taskcluster.aio import Auth

async def do_ping():
    # ClientSession must be entered with `async with` inside a coroutine
    async with aiohttp.ClientSession() as session:
        a = Auth(session=session)
        print(await a.ping())

loop = asyncio.get_event_loop()
loop.run_until_complete(do_ping())
```

Other async code examples are available [here](#methods-contained-in-the-client-library).

Here's a slide deck for an [introduction to async python](https://gitpitch.com/escapewindow/slides-sf-2017/async-python).

## Usage

* Here's a simple command:

    ```python
    import taskcluster
    index = taskcluster.Index({
      'rootUrl': 'https://tc.example.com',
      'credentials': {'clientId': 'id', 'accessToken': 'accessToken'},
    })
    index.ping()
    ```

* There are four calling conventions for methods:

    ```python
    client.method(v1, v2, payload)
    client.method(payload, k1=v1, k2=v2)
    client.method(payload=payload, query=query, params={k1: v1, k2: v2})
    client.method(v1, v2, payload=payload, query=query)
    ```

* Options for the topic exchange methods can be in the form of either a single
  dictionary argument or keyword arguments.  Only one form is allowed.

    ```python
    from taskcluster import client
    qEvt = client.QueueEvents({'rootUrl': 'https://tc.example.com'})
    # The following calls are equivalent
    qEvt.taskCompleted({'taskId': 'atask'})
    qEvt.taskCompleted(taskId='atask')
    ```
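
Under the hood, each exchange method turns those options into a pulse routing key, with `*` standing in for any field you don't constrain. The following sketch shows the idea; the field names in the example schema are assumptions for illustration, as the real client derives the schema from the exchange reference:

```python
def routing_key_pattern(schema, **provided):
    """Join the routing-key fields in schema order, substituting '*'
    (match one word) for any field not provided. Illustrative only."""
    return '.'.join(str(provided.get(key, '*')) for key in schema)

# Hypothetical leading fields of a task exchange's routing key:
pattern = routing_key_pattern(['routingKeyKind', 'taskId', 'runId'],
                              taskId='atask')
# pattern == '*.atask.*'
```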

## Root URL

This client requires a `rootUrl` argument to identify the Taskcluster
deployment to talk to.  As of this writing, the production cluster has the
rootUrl `https://taskcluster.net`.

## Environment Variables

As of version 6.0.0, the client does not read the standard `TASKCLUSTER_…`
environment variables automatically.  To fetch their values explicitly, use
`taskcluster.optionsFromEnvironment()`:

```python
auth = taskcluster.Auth(taskcluster.optionsFromEnvironment())
```
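
Roughly speaking, the helper builds an options dictionary from the standard variables (`TASKCLUSTER_ROOT_URL`, `TASKCLUSTER_CLIENT_ID`, `TASKCLUSTER_ACCESS_TOKEN`, `TASKCLUSTER_CERTIFICATE`). The sketch below is a stand-in for illustration; consult the library itself for the exact behavior:

```python
import os

def options_from_environment():
    """Illustrative stand-in for taskcluster.optionsFromEnvironment():
    map the standard environment variables into a client options dict."""
    options = {}
    if 'TASKCLUSTER_ROOT_URL' in os.environ:
        options['rootUrl'] = os.environ['TASKCLUSTER_ROOT_URL']
    credentials = {}
    if 'TASKCLUSTER_CLIENT_ID' in os.environ:
        credentials['clientId'] = os.environ['TASKCLUSTER_CLIENT_ID']
    if 'TASKCLUSTER_ACCESS_TOKEN' in os.environ:
        credentials['accessToken'] = os.environ['TASKCLUSTER_ACCESS_TOKEN']
    if 'TASKCLUSTER_CERTIFICATE' in os.environ:
        credentials['certificate'] = os.environ['TASKCLUSTER_CERTIFICATE']
    if credentials:
        options['credentials'] = credentials
    return options
```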

## Pagination
There are two ways to accomplish pagination easily with the python client.  The
first is to implement pagination in your code:

```python
import taskcluster
queue = taskcluster.Queue({'rootUrl': 'https://tc.example.com'})
i = 0
tasks = 0
outcome = queue.listTaskGroup('JzTGxwxhQ76_Tt1dxkaG5g')
while True:
    print('Response %d gave us %d more tasks' % (i, len(outcome['tasks'])))
    tasks += len(outcome.get('tasks', []))
    i += 1
    token = outcome.get('continuationToken')
    if not token:
        break
    outcome = queue.listTaskGroup(
        'JzTGxwxhQ76_Tt1dxkaG5g',
        query={'continuationToken': token})
print('Task Group %s has %d tasks' % (outcome['taskGroupId'], tasks))
```

There's also an experimental feature to support built-in automatic pagination
in the sync client.  This feature allows passing a callback as the
`paginationHandler` keyword argument.  This function will be passed the
response body of the API method as its sole positional argument.

This example of the built-in pagination shows how a list of responses could be
collected and the tasks counted:

```python
import taskcluster
queue = taskcluster.Queue({'rootUrl': 'https://tc.example.com'})

responses = []

def handle_page(y):
    print("%d tasks fetched" % len(y.get('tasks', [])))
    responses.append(y)

queue.listTaskGroup('JzTGxwxhQ76_Tt1dxkaG5g', paginationHandler=handle_page)

tasks = 0
for response in responses:
    tasks += len(response.get('tasks', []))

print("%d requests fetched %d tasks" % (len(responses), tasks))
```
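
If you find yourself writing the token-threading loop often, it can be factored into a small generator. This helper is hypothetical (not part of the library); it should work with any list-style method that accepts a `query` keyword and returns a `continuationToken`:

```python
def fetch_all_pages(list_method, *args, **kwargs):
    """Yield each page of results, threading continuationToken through
    the query until the service stops returning one."""
    query = kwargs.pop('query', {})
    while True:
        outcome = list_method(*args, query=query, **kwargs)
        yield outcome
        token = outcome.get('continuationToken')
        if not token:
            return
        query = dict(query, continuationToken=token)
```

Usage would look like `for page in fetch_all_pages(queue.listTaskGroup, 'JzTGxwxhQ76_Tt1dxkaG5g'): ...`.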

## Logging
Logging is set up in `taskcluster/__init__.py`.  If the special
`DEBUG_TASKCLUSTER_CLIENT` environment variable is set, the `__init__.py`
module will set the `logging` module's level for its logger to `logging.DEBUG`
and, if there are no existing handlers, add a `logging.StreamHandler()`
instance.  This is meant to assist those who do not wish to bother figuring out
how to configure the python logging module but do want debug messages.


## Scopes
The `scopeMatch(assumedScopes, requiredScopeSets)` function determines
whether one or more of a set of required scopes are satisfied by the assumed
scopes, taking `*`-expansion into account.  This is useful for making local
decisions on scope satisfaction, but note that `assumedScopes` must be the
*expanded* scopes, as this function cannot perform expansion.

It takes a list of assumed scopes and a list of required scope sets in
disjunctive normal form, and checks whether any of the required scope sets is
satisfied.

Example:

```python
requiredScopeSets = [
    ["scopeA", "scopeB"],
    ["scopeC:*"]
]
assert taskcluster.scopeMatch(['scopeA', 'scopeB'], requiredScopeSets)
assert taskcluster.scopeMatch(['scopeC:xyz'], requiredScopeSets)
assert not taskcluster.scopeMatch(['scopeA'], requiredScopeSets)
assert not taskcluster.scopeMatch(['scopeC'], requiredScopeSets)
```
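
The satisfaction rule itself is simple enough to sketch: a required scope is satisfied either by an identical assumed scope, or by an assumed scope ending in `*` whose prefix matches. This stand-in is for illustration only; use `taskcluster.scopeMatch` in real code:

```python
def scope_match(assumed_scopes, required_scope_sets):
    """Return True if any required scope set is fully satisfied by the
    (already expanded) assumed scopes. Illustrative only."""
    def satisfied(required):
        return any(
            scope == required or
            (scope.endswith('*') and required.startswith(scope[:-1]))
            for scope in assumed_scopes)
    return any(all(satisfied(r) for r in scope_set)
               for scope_set in required_scope_sets)
```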

## Relative Date-time Utilities
Many Taskcluster APIs require ISO 8601 timestamps offset into the future as a
way of providing expiration, deadlines, etc. These can be created using
`datetime.datetime.isoformat()`; however, it can be rather error-prone and
tedious to offset `datetime.datetime` objects into the future. Therefore this
library comes with two utility functions for this purpose.

```python
dateObject = taskcluster.fromNow("2 days 3 hours 1 minute")
# datetime.datetime(2017, 1, 21, 17, 8, 1, 607929)
dateString = taskcluster.fromNowJSON("2 days 3 hours 1 minute")
# '2017-01-21T17:09:23.240178Z'
```

By default it will offset the datetime into the future; if the offset string
is prefixed with a minus (`-`), the date object will be offset into the past.
This is useful in some corner cases.

```python
dateObject = taskcluster.fromNow("- 1 year 2 months 3 weeks 5 seconds")
# datetime.datetime(2015, 10, 30, 18, 16, 50, 931161)
```

The offset string ignores whitespace and is case-insensitive. It may also
optionally be prefixed with a plus (`+`) when not prefixed with a minus; any
`+` prefix is ignored. However, entries in the offset string must be given in
order from high to low, e.g. `2 years 1 day`. Additionally, various shorthands
may be employed, as illustrated below.

```
  years,    year,   yr,   y
  months,   month,  mo
  weeks,    week,         w
  days,     day,          d
  hours,    hour,         h
  minutes,  minute, min
  seconds,  second, sec,  s
```
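
To make the grammar concrete, here is an illustrative re-implementation of the offset parsing. This is a sketch only; in particular, years and months are approximated as 365 and 30 days, which may not match the library exactly:

```python
import datetime
import re

# seconds per unit (years/months approximated)
SECONDS = {'y': 31536000, 'mo': 2592000, 'w': 604800,
           'd': 86400, 'h': 3600, 'min': 60, 's': 1}
# map every accepted shorthand to its canonical unit
ALIASES = {'years': 'y', 'year': 'y', 'yr': 'y', 'y': 'y',
           'months': 'mo', 'month': 'mo', 'mo': 'mo',
           'weeks': 'w', 'week': 'w', 'w': 'w',
           'days': 'd', 'day': 'd', 'd': 'd',
           'hours': 'h', 'hour': 'h', 'h': 'h',
           'minutes': 'min', 'minute': 'min', 'min': 'min',
           'seconds': 's', 'second': 's', 'sec': 's', 's': 's'}

def parse_offset(text):
    """Turn an offset string like '2 days 3 hours' into a timedelta.
    A leading '-' negates the offset; a leading '+' is ignored."""
    sign = -1 if text.lstrip().startswith('-') else 1
    total = 0
    for amount, unit in re.findall(r'(\d+)\s*([a-z]+)', text.lower()):
        total += int(amount) * SECONDS[ALIASES[unit]]
    return datetime.timedelta(seconds=sign * total)
```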

The `fromNow` method may also be given a date to be relative to, as a second
argument. This is useful when offsetting the task expiration relative to the
task deadline or doing something similar.  This argument can also be passed as
the kwarg `dateObj`.

```python
dateObject1 = taskcluster.fromNow("2 days 3 hours")
dateObject2 = taskcluster.fromNow("1 year", dateObject1)
taskcluster.fromNow("1 year", dateObj=dateObject1)
# datetime.datetime(2018, 1, 21, 17, 59, 0, 328934)
```

## Methods contained in the client library

<!-- START OF GENERATED DOCS -->

### Methods in `taskcluster.Auth`
```python
import taskcluster
import taskcluster.aio
import asyncio  # only needed for async

# Create Auth client instance
auth = taskcluster.Auth(options)
# Below only for async instances; assume we are already in a coroutine
loop = asyncio.get_event_loop()
session = taskcluster.aio.createSession(loop=loop)
asyncAuth = taskcluster.aio.Auth(options, session=session)
```
Authentication related API end-points for Taskcluster and related
services. These API end-points are of interest if you wish to:
  * Authorize a request signed with Taskcluster credentials,
  * Manage clients and roles,
  * Inspect or audit clients and roles,
  * Gain access to various services guarded by this API.

Note that in this service "authentication" refers to validating the
correctness of the supplied credentials (that the caller possesses the
appropriate access token). This service does not provide any kind of user
authentication (identifying a particular person).

### Clients
The authentication service manages _clients_. At a high level, each client
consists of a `clientId`, an `accessToken`, scopes, and some metadata.
The `clientId` and `accessToken` can be used for authentication when
calling Taskcluster APIs.

The client's scopes control the client's access to Taskcluster resources.
The scopes are *expanded* by substituting roles, as defined below.

### Roles
A _role_ consists of a `roleId`, a set of scopes, and a description.
Each role constitutes a simple _expansion rule_ that says if you have
the scope `assume:<roleId>` you get the set of scopes the role has.
Think of `assume:<roleId>` as a scope that allows a client to assume
a role.

As with scopes, the `*` Kleene star also has special meaning if it is
located at the end of a `roleId`. If you have a role with the `roleId`
`my-prefix*`, then any client which has a scope starting with
`assume:my-prefix` will be allowed to assume the role.

### Guarded Services
The authentication service also has API end-points for delegating access
to some guarded service such as AWS S3, or Azure Table Storage.
Generally, we add API end-points to this server when we wish to use
Taskcluster credentials to grant access to a third-party service used
by many Taskcluster components.
#### Ping Server
Respond without doing anything.
This endpoint is used to check that the service is up.


```python
# Sync calls
auth.ping() # -> None
# Async call
await asyncAuth.ping() # -> None
```

#### List Clients
Get a list of all clients.  With `prefix`, only clients for which
it is a prefix of the clientId are returned.

By default this end-point will try to return up to 1000 clients in one
request. But it **may return fewer, even none**.
It may also return a `continuationToken` even though there are no more
results. However, you can only be sure to have seen all results if you
keep calling `listClients` with the last `continuationToken` until you
get a result without a `continuationToken`.


Required [output schema](v1/list-clients-response.json#)

```python
# Sync calls
auth.listClients() # -> result
# Async call
await asyncAuth.listClients() # -> result
```

#### Get Client
Get information about a single client.



Takes the following arguments:

  * `clientId`

Required [output schema](v1/get-client-response.json#)

```python
# Sync calls
auth.client(clientId) # -> result
auth.client(clientId='value') # -> result
# Async call
await asyncAuth.client(clientId) # -> result
await asyncAuth.client(clientId='value') # -> result
```

#### Create Client
Create a new client and get the `accessToken` for this client.
You should store the `accessToken` from this API call as there is no
other way to retrieve it.

If you lose the `accessToken` you can call `resetAccessToken` to reset
it, and a new `accessToken` will be returned, but you cannot retrieve the
current `accessToken`.

If a client with the same `clientId` already exists this operation will
fail. Use `updateClient` if you wish to update an existing client.

The caller's scopes must satisfy `scopes`.



Takes the following arguments:

  * `clientId`

Required [input schema](v1/create-client-request.json#)

Required [output schema](v1/create-client-response.json#)

```python
# Sync calls
auth.createClient(clientId, payload) # -> result
auth.createClient(payload, clientId='value') # -> result
# Async call
await asyncAuth.createClient(clientId, payload) # -> result
await asyncAuth.createClient(payload, clientId='value') # -> result
```

#### Reset `accessToken`
Reset a client's `accessToken`. This will revoke the existing
`accessToken`, generate a new `accessToken`, and return it from this
call.

There is no way to retrieve an existing `accessToken`, so if you lose it
you must reset the accessToken to acquire it again.



Takes the following arguments:

  * `clientId`

Required [output schema](v1/create-client-response.json#)

```python
# Sync calls
auth.resetAccessToken(clientId) # -> result
auth.resetAccessToken(clientId='value') # -> result
# Async call
await asyncAuth.resetAccessToken(clientId) # -> result
await asyncAuth.resetAccessToken(clientId='value') # -> result
```

#### Update Client
Update an existing client. The `clientId` and `accessToken` cannot be
updated, but `scopes` can be modified.  The caller's scopes must
satisfy all scopes being added to the client in the update operation.
If no scopes are given in the request, the client's scopes remain
unchanged.



Takes the following arguments:

  * `clientId`

Required [input schema](v1/create-client-request.json#)

Required [output schema](v1/get-client-response.json#)

```python
# Sync calls
auth.updateClient(clientId, payload) # -> result
auth.updateClient(payload, clientId='value') # -> result
# Async call
await asyncAuth.updateClient(clientId, payload) # -> result
await asyncAuth.updateClient(payload, clientId='value') # -> result
```

#### Enable Client
Enable a client that was disabled with `disableClient`.  If the client
is already enabled, this does nothing.

This is typically used by identity providers to re-enable clients that
had been disabled when the corresponding identity's scopes changed.



Takes the following arguments:

  * `clientId`

Required [output schema](v1/get-client-response.json#)

```python
# Sync calls
auth.enableClient(clientId) # -> result
auth.enableClient(clientId='value') # -> result
# Async call
await asyncAuth.enableClient(clientId) # -> result
await asyncAuth.enableClient(clientId='value') # -> result
```

#### Disable Client
Disable a client.  If the client is already disabled, this does nothing.

This is typically used by identity providers to disable clients when the
corresponding identity's scopes no longer satisfy the client's scopes.



Takes the following arguments:

  * `clientId`

Required [output schema](v1/get-client-response.json#)

```python
# Sync calls
auth.disableClient(clientId) # -> result
auth.disableClient(clientId='value') # -> result
# Async call
await asyncAuth.disableClient(clientId) # -> result
await asyncAuth.disableClient(clientId='value') # -> result
```

#### Delete Client
Delete a client; note that any roles related to this client must
be deleted independently.



Takes the following arguments:

  * `clientId`

```python
# Sync calls
auth.deleteClient(clientId) # -> None
auth.deleteClient(clientId='value') # -> None
# Async call
await asyncAuth.deleteClient(clientId) # -> None
await asyncAuth.deleteClient(clientId='value') # -> None
```

#### List Roles
Get a list of all roles. Each role object also includes the list of
scopes it expands to.


Required [output schema](v1/list-roles-response.json#)

```python
# Sync calls
auth.listRoles() # -> result
# Async call
await asyncAuth.listRoles() # -> result
```

#### Get Role
Get information about a single role, including the set of scopes that the
role expands to.



Takes the following arguments:

  * `roleId`

Required [output schema](v1/get-role-response.json#)

```python
# Sync calls
auth.role(roleId) # -> result
auth.role(roleId='value') # -> result
# Async call
await asyncAuth.role(roleId) # -> result
await asyncAuth.role(roleId='value') # -> result
```

#### Create Role
Create a new role.

The caller's scopes must satisfy the new role's scopes.

If there already exists a role with the same `roleId` this operation
will fail. Use `updateRole` to modify an existing role.

Creation of a role that will generate an infinite expansion will result
in an error response.



Takes the following arguments:

  * `roleId`

Required [input schema](v1/create-role-request.json#)

Required [output schema](v1/get-role-response.json#)

```python
# Sync calls
auth.createRole(roleId, payload) # -> result
auth.createRole(payload, roleId='value') # -> result
# Async call
await asyncAuth.createRole(roleId, payload) # -> result
await asyncAuth.createRole(payload, roleId='value') # -> result
```

#### Update Role
Update an existing role.

The caller's scopes must satisfy all of the new scopes being added, but
need not satisfy all of the role's existing scopes.

An update of a role that will generate an infinite expansion will result
in an error response.



Takes the following arguments:

  * `roleId`

Required [input schema](v1/create-role-request.json#)

Required [output schema](v1/get-role-response.json#)

```python
# Sync calls
auth.updateRole(roleId, payload) # -> result
auth.updateRole(payload, roleId='value') # -> result
# Async call
await asyncAuth.updateRole(roleId, payload) # -> result
await asyncAuth.updateRole(payload, roleId='value') # -> result
```

#### Delete Role
Delete a role. This operation will succeed regardless of whether or not
the role exists.



Takes the following arguments:

  * `roleId`

```python
# Sync calls
auth.deleteRole(roleId) # -> None
auth.deleteRole(roleId='value') # -> None
# Async call
await asyncAuth.deleteRole(roleId) # -> None
await asyncAuth.deleteRole(roleId='value') # -> None
```

#### Expand Scopes
Return an expanded copy of the given scopeset, with scopes implied by any
roles included.

This call uses the GET method with an HTTP body.  It remains only for
backward compatibility.


Required [input schema](v1/scopeset.json#)

Required [output schema](v1/scopeset.json#)

```python
# Sync calls
auth.expandScopesGet(payload) # -> result
# Async call
await asyncAuth.expandScopesGet(payload) # -> result
```

#### Expand Scopes
Return an expanded copy of the given scopeset, with scopes implied by any
roles included.


Required [input schema](v1/scopeset.json#)

Required [output schema](v1/scopeset.json#)

```python
# Sync calls
auth.expandScopes(payload) # -> result
# Async call
await asyncAuth.expandScopes(payload) # -> result
```

#### Get Current Scopes
Return the expanded scopes available in the request, taking into account all
sources of scopes and scope restrictions (temporary credentials,
assumeScopes, client scopes, and roles).


Required [output schema](v1/scopeset.json#)

```python
# Sync calls
auth.currentScopes() # -> result
# Async call
await asyncAuth.currentScopes() # -> result
```

#### Get Temporary Read/Write Credentials S3
Get temporary AWS credentials for `read-write` or `read-only` access to
a given `bucket` and `prefix` within that bucket.
The `level` parameter can be `read-write` or `read-only` and determines
which type of credentials are returned. Please note that the `level`
parameter is required in the scope guarding access.  The bucket name must
not contain `.`, as recommended by Amazon.

This method can only allow access to a whitelisted set of buckets.  To add
a bucket to that whitelist, contact the Taskcluster team, who will add it to
the appropriate IAM policy.  If the bucket is in a different AWS account, you
will also need to add a bucket policy allowing access from the Taskcluster
account.  That policy should look like this:

```js
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "allow-taskcluster-auth-to-delegate-access",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::692406183521:root"
      },
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket>",
        "arn:aws:s3:::<bucket>/*"
      ]
    }
  ]
}
```

The credentials are set to expire after an hour, but this behavior is
subject to change. Hence, you should always read the `expires` property
from the response, if you intend to maintain active credentials in your
application.

Please note that your `prefix` may not start with slash `/`. Such a prefix
is allowed on S3, but we forbid it here to discourage bad behavior.

Also note that if your `prefix` doesn't end in a slash `/`, the STS
credentials may allow access to unexpected keys, as S3 does not treat
slashes specially.  For example, a prefix of `my-folder` will allow
access to `my-folder/file.txt` as expected, but also to `my-folder.txt`,
which may not be intended.

Finally, note that the `PutObjectAcl` call is not allowed.  Passing a canned
ACL other than `private` to `PutObject` is treated as a `PutObjectAcl` call,
and will result in an access-denied error from AWS.  This limitation is due
to a security flaw in Amazon S3 which might otherwise allow indefinite access
to uploaded objects.

**EC2 metadata compatibility**: if the query-string parameter
`?format=iam-role-compat` is given, the response will be compatible
with the JSON exposed by the EC2 metadata service. This aims to ease
compatibility for libraries and tools built to auto-refresh credentials.
For details on the format returned by the EC2 metadata service see the
[EC2 User Guide](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#instance-metadata-security-credentials).



Takes the following arguments:

  * `level`
  * `bucket`
  * `prefix`

Required [output schema](v1/aws-s3-credentials-response.json#)

```python
# Sync calls
auth.awsS3Credentials(level, bucket, prefix) # -> result
auth.awsS3Credentials(level='value', bucket='value', prefix='value') # -> result
# Async call
await asyncAuth.awsS3Credentials(level, bucket, prefix) # -> result
await asyncAuth.awsS3Credentials(level='value', bucket='value', prefix='value') # -> result
```

#### List Accounts Managed by Auth
Retrieve a list of all Azure accounts managed by Taskcluster Auth.


Required [output schema](v1/azure-account-list-response.json#)

```python
# Sync calls
auth.azureAccounts() # -> result
# Async call
await asyncAuth.azureAccounts() # -> result
```

#### List Tables in an Account Managed by Auth
Retrieve a list of all tables in an account.



Takes the following arguments:

  * `account`

Required [output schema](v1/azure-table-list-response.json#)

```python
# Sync calls
auth.azureTables(account) # -> result
auth.azureTables(account='value') # -> result
# Async call
await asyncAuth.azureTables(account) # -> result
await asyncAuth.azureTables(account='value') # -> result
```

#### Get Shared-Access-Signature for Azure Table
Get a shared access signature (SAS) string for use with a specific Azure
Table Storage table.

The `level` parameter can be `read-write` or `read-only` and determines
which type of credentials are returned.  If level is `read-write`, the
table will be created if it doesn't already exist.



Takes the following arguments:

  * `account`
  * `table`
  * `level`

Required [output schema](v1/azure-table-access-response.json#)

```python
# Sync calls
auth.azureTableSAS(account, table, level) # -> result
auth.azureTableSAS(account='value', table='value', level='value') # -> result
# Async call
await asyncAuth.azureTableSAS(account, table, level) # -> result
await asyncAuth.azureTableSAS(account='value', table='value', level='value') # -> result
```

#### List Containers in an Account Managed by Auth
Retrieve a list of all containers in an account.



Takes the following arguments:

  * `account`

Required [output schema](v1/azure-container-list-response.json#)

```python
# Sync calls
auth.azureContainers(account) # -> result
auth.azureContainers(account='value') # -> result
# Async call
await asyncAuth.azureContainers(account) # -> result
await asyncAuth.azureContainers(account='value') # -> result
```

#### Get Shared-Access-Signature for Azure Container
Get a shared access signature (SAS) string for use with a specific Azure
Blob Storage container.

The `level` parameter can be `read-write` or `read-only` and determines
which type of credentials are returned.  If level is `read-write`, the
container will be created if it doesn't already exist.



Takes the following arguments:

  * `account`
  * `container`
  * `level`

Required [output schema](v1/azure-container-response.json#)

```python
# Sync calls
auth.azureContainerSAS(account, container, level) # -> result
auth.azureContainerSAS(account='value', container='value', level='value') # -> result
# Async call
await asyncAuth.azureContainerSAS(account, container, level) # -> result
await asyncAuth.azureContainerSAS(account='value', container='value', level='value') # -> result
```

#### Get DSN for Sentry Project
Get a temporary DSN (access credentials) for a Sentry project.
The credentials returned can be used with any Sentry client for up to
24 hours, after which the credentials will be automatically disabled.

If the project doesn't exist it will be created, and assigned to the
initial team configured for this component. Contact a Sentry admin
to have the project transferred to a team you have access to if needed.



Takes the following arguments:

  * `project`

Required [output schema](v1/sentry-dsn-response.json#)

```python
# Sync calls
auth.sentryDSN(project) # -> result
auth.sentryDSN(project='value') # -> result
# Async call
await asyncAuth.sentryDSN(project) # -> result
await asyncAuth.sentryDSN(project='value') # -> result
```

#### Get Token for Statsum Project
Get a temporary `token` and `baseUrl` for sending metrics to statsum.

The token is valid for 24 hours; clients should refresh after expiration.



Takes the following arguments:

  * `project`

Required [output schema](v1/statsum-token-response.json#)

```python
# Sync calls
auth.statsumToken(project) # -> result
auth.statsumToken(project='value') # -> result
# Async call
await asyncAuth.statsumToken(project) # -> result
await asyncAuth.statsumToken(project='value') # -> result
```

#### Get Token for Webhooktunnel Proxy
Get a temporary `token` and `id` for connecting to webhooktunnel.
The token is valid for 96 hours; clients should refresh after expiration.


Required [output schema](v1/webhooktunnel-token-response.json#)

```python
# Sync calls
auth.webhooktunnelToken() # -> result
# Async call
await asyncAuth.webhooktunnelToken() # -> result
```

#### Authenticate Hawk Request
Validate the request signature given on input and return the list of scopes
that the authenticating client has.

This method is used by other services that wish to rely on Taskcluster
credentials for authentication. This way we can use Hawk without having
the secret credentials leave this service.


Required [input schema](v1/authenticate-hawk-request.json#)

Required [output schema](v1/authenticate-hawk-response.json#)

```python
# Sync calls
auth.authenticateHawk(payload) # -> result
# Async call
await asyncAuth.authenticateHawk(payload) # -> result
```
971
972#### Test Authentication
973Utility method to test client implementations of Taskcluster
974authentication.
975
976Rather than using real credentials, this endpoint accepts requests with
977clientId `tester` and accessToken `no-secret`. That client's scopes are
978based on `clientScopes` in the request body.
979
980The request is validated, with any certificate, authorizedScopes, etc.
981applied, and the resulting scopes are checked against `requiredScopes`
982from the request body. On success, the response contains the clientId
983and scopes as seen by the API method.
984
985
986Required [input schema](v1/test-authenticate-request.json#)
987
988Required [output schema](v1/test-authenticate-response.json#)
989
```python
# Sync calls
auth.testAuthenticate(payload) # -> result
# Async call
await asyncAuth.testAuthenticate(payload) # -> result
```
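For illustration, a payload for this endpoint might look like the following; the scope values are made up, and only the `clientScopes`/`requiredScopes` field names come from the description above:

```python
# Illustrative testAuthenticate payload (scope values are invented).
# The fixed `tester`/`no-secret` credentials are configured on the client
# itself, not in this payload.
payload = {
    'clientScopes': ['test:scope:*'],     # scopes granted to the fake client
    'requiredScopes': ['test:scope:ok'],  # scopes the call must satisfy
}
# auth.testAuthenticate(payload) would succeed here, since 'test:scope:*'
# satisfies 'test:scope:ok' by scope expansion.
```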
996
997#### Test Authentication (GET)
998Utility method similar to `testAuthenticate`, but with the GET method,
999so it can be used with signed URLs (bewits).
1000
1001Rather than using real credentials, this endpoint accepts requests with
1002clientId `tester` and accessToken `no-secret`. That client's scopes are
1003`['test:*', 'auth:create-client:test:*']`.  The call fails if the
1004`test:authenticate-get` scope is not available.
1005
1006The request is validated, with any certificate, authorizedScopes, etc.
1007applied, and the resulting scopes are checked, just like any API call.
1008On success, the response contains the clientId and scopes as seen by
1009the API method.
1010
1011This method may later be extended to allow specification of client and
1012required scopes via query arguments.
1013
1014
1015Required [output schema](v1/test-authenticate-response.json#)
1016
```python
# Sync calls
auth.testAuthenticateGet() # -> result
# Async call
await asyncAuth.testAuthenticateGet() # -> result
```
1023
1024
1025
1026
1027### Exchanges in `taskcluster.AuthEvents`
```python
# Create AuthEvents client instance
import taskcluster
authEvents = taskcluster.AuthEvents(options)
```
1033The auth service is responsible for storing credentials, managing
1034assignment of scopes, and validation of request signatures from other
1035services.
1036
These exchanges provide notifications when credentials or roles are
updated. This is mostly so that multiple instances of the auth service
can purge their caches and synchronize state. But you are of course
welcome to use these for other purposes, such as monitoring changes.
1041#### Client Created Messages
1042 * `authEvents.clientCreated(routingKeyPattern) -> routingKey`
   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as our tooling does automatically if not specified.
1044
1045#### Client Updated Messages
1046 * `authEvents.clientUpdated(routingKeyPattern) -> routingKey`
   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as our tooling does automatically if not specified.
1048
1049#### Client Deleted Messages
1050 * `authEvents.clientDeleted(routingKeyPattern) -> routingKey`
   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as our tooling does automatically if not specified.
1052
1053#### Role Created Messages
1054 * `authEvents.roleCreated(routingKeyPattern) -> routingKey`
   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as our tooling does automatically if not specified.
1056
1057#### Role Updated Messages
1058 * `authEvents.roleUpdated(routingKeyPattern) -> routingKey`
   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as our tooling does automatically if not specified.
1060
1061#### Role Deleted Messages
1062 * `authEvents.roleDeleted(routingKeyPattern) -> routingKey`
   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as our tooling does automatically if not specified.
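As a sketch of how these exchanges are typically consumed, a pulse listener binds each exchange with the catch-all `#` pattern. The exchange names below are assumptions following the usual `exchange/taskcluster-auth/v1/...` naming convention; take the real names from the client's generated reference:

```python
# Hypothetical pulse bindings for the AuthEvents exchanges above.
# '#' matches the `reserved` routing-key entry, as the tooling does by default.
bindings = [
    {'exchange': 'exchange/taskcluster-auth/v1/client-created', 'routingKeyPattern': '#'},
    {'exchange': 'exchange/taskcluster-auth/v1/client-updated', 'routingKeyPattern': '#'},
    {'exchange': 'exchange/taskcluster-auth/v1/role-deleted',   'routingKeyPattern': '#'},
]
```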
1064
1065
1066
1067
1068### Methods in `taskcluster.AwsProvisioner`
```python
import asyncio # Only for async
# Create AwsProvisioner client instance
import taskcluster
import taskcluster.aio

awsProvisioner = taskcluster.AwsProvisioner(options)
# Below only for async instances, assume already in coroutine
loop = asyncio.get_event_loop()
session = taskcluster.aio.createSession(loop=loop)
asyncAwsProvisioner = taskcluster.aio.AwsProvisioner(options, session=session)
```
1081The AWS Provisioner is responsible for provisioning instances on EC2 for use in
1082Taskcluster.  The provisioner maintains a set of worker configurations which
1083can be managed with an API that is typically available at
1084aws-provisioner.taskcluster.net/v1.  This API can also perform basic instance
1085management tasks in addition to maintaining the internal state of worker type
1086configuration information.
1087
The Provisioner runs at a configurable interval.  Each iteration of the
provisioner fetches a current copy of the state that the AWS EC2 API reports.
In each iteration, we ask the Queue how many tasks are pending for each worker
type.  Based on the number of tasks pending and the scaling ratio, we may
submit requests for new instances.  We use pricing information, capacity and
utility factor information to decide which instance type in which region would
be the optimal configuration.
1095
1096Each EC2 instance type will declare a capacity and utility factor.  Capacity is
1097the number of tasks that a given machine is capable of running concurrently.
1098Utility factor is a relative measure of performance between two instance types.
1099We multiply the utility factor by the spot price to compare instance types and
1100regions when making the bidding choices.
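To make the comparison concrete, here is a small sketch with invented numbers; the price-divided-by-utility formulation (cost per unit of work) is an assumption about how the candidates are ranked:

```python
def price_per_utility(spot_price, utility_factor):
    """Effective cost of one unit of work on an instance type
    (assumption: lower is better when comparing bids)."""
    return spot_price / utility_factor

candidates = [
    # (instance type, spot price in USD/hour, utility factor) -- made-up numbers
    ('m3.large',  0.10, 1.0),
    ('m3.xlarge', 0.17, 2.0),
]

best = min(candidates, key=lambda c: price_per_utility(c[1], c[2]))
# m3.xlarge ranks best: 0.17 / 2.0 = 0.085 beats 0.10 / 1.0 = 0.10
```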
1101
1102When a new EC2 instance is instantiated, its user data contains a token in
1103`securityToken` that can be used with the `getSecret` method to retrieve
1104the worker's credentials and any needed passwords or other restricted
1105information.  The worker is responsible for deleting the secret after
retrieving it, to prevent dissemination of the secret to other processes
1107which can read the instance user data.
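The retrieve-then-delete flow can be sketched as follows. `fetch_and_burn_secret` is a hypothetical helper, not part of this library; `getSecret` and `removeSecret` are the provisioner client methods documented later in this section:

```python
def fetch_and_burn_secret(provisioner, token):
    """Retrieve the worker's secret once, then delete it so that other
    processes able to read the instance user data cannot replay the token.

    `provisioner` is an AwsProvisioner client; `token` comes from the
    `securityToken` field of the EC2 user data."""
    secret = provisioner.getSecret(token)   # credentials, passwords, etc.
    provisioner.removeSecret(token)         # burn before running untrusted code
    return secret
```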
1108
1109#### List worker types with details
1110Return a list of worker types, including some summary information about
1111current capacity for each.  While this list includes all defined worker types,
1112there may be running EC2 instances for deleted worker types that are not
1113included here.  The list is unordered.
1114
1115
1116Required [output schema](http://schemas.taskcluster.net/aws-provisioner/v1/list-worker-types-summaries-response.json#)
1117
```python
# Sync calls
awsProvisioner.listWorkerTypeSummaries() # -> result
# Async call
await asyncAwsProvisioner.listWorkerTypeSummaries() # -> result
```
1124
1125#### Create new Worker Type
1126Create a worker type.  A worker type contains all the configuration
1127needed for the provisioner to manage the instances.  Each worker type
1128knows which regions and which instance types are allowed for that
1129worker type.  Remember that Capacity is the number of concurrent tasks
1130that can be run on a given EC2 resource and that Utility is the relative
1131performance rate between different instance types.  There is no way to
1132configure different regions to have different sets of instance types
1133so ensure that all instance types are available in all regions.
1134This function is idempotent.
1135
Once a worker type is in the provisioner, a background process will
1137begin creating instances for it based on its capacity bounds and its
1138pending task count from the Queue.  It is the worker's responsibility
to shut itself down.  The provisioner has a limit (currently 96 hours)
1140for all instances to prevent zombie instances from running indefinitely.
1141
1142The provisioner will ensure that all instances created are tagged with
AWS resource tags containing the provisioner id and the worker type.
1144
1145If provided, the secrets in the global, region and instance type sections
1146are available using the secrets api.  If specified, the scopes provided
1147will be used to generate a set of temporary credentials available with
1148the other secrets.
1149
1150
1151
1152Takes the following arguments:
1153
1154  * `workerType`
1155
1156Required [input schema](http://schemas.taskcluster.net/aws-provisioner/v1/create-worker-type-request.json#)
1157
1158Required [output schema](http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#)
1159
```python
# Sync calls
awsProvisioner.createWorkerType(workerType, payload) # -> result
awsProvisioner.createWorkerType(payload, workerType='value') # -> result
# Async calls
await asyncAwsProvisioner.createWorkerType(workerType, payload) # -> result
await asyncAwsProvisioner.createWorkerType(payload, workerType='value') # -> result
```
1168
1169#### Update Worker Type
1170Provide a new copy of a worker type to replace the existing one.
1171This will overwrite the existing worker type definition if there
is already a worker type of that name.  This method will return a
200 response along with a copy of the worker type definition created.
Note that if you are using the result of a GET on the worker-type
end point, you will need to delete the `lastModified` and `workerType`
keys from the object returned, since those fields are not allowed in
the request body for this method.
1178
1179Otherwise, all input requirements and actions are the same as the
1180create method.
1181
1182
1183
1184Takes the following arguments:
1185
1186  * `workerType`
1187
1188Required [input schema](http://schemas.taskcluster.net/aws-provisioner/v1/create-worker-type-request.json#)
1189
1190Required [output schema](http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#)
1191
```python
# Sync calls
awsProvisioner.updateWorkerType(workerType, payload) # -> result
awsProvisioner.updateWorkerType(payload, workerType='value') # -> result
# Async calls
await asyncAwsProvisioner.updateWorkerType(workerType, payload) # -> result
await asyncAwsProvisioner.updateWorkerType(payload, workerType='value') # -> result
```
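The caveat above (the GET result must lose its `lastModified` and `workerType` keys before being resubmitted) can be captured in a small hypothetical helper:

```python
def as_update_payload(worker_type_definition):
    """Copy a workerType() result and strip the fields that
    updateWorkerType/createWorkerType reject in the request body."""
    payload = dict(worker_type_definition)  # leave the original untouched
    payload.pop('lastModified', None)
    payload.pop('workerType', None)
    return payload
```

A caller would then do, for example, `awsProvisioner.updateWorkerType(name, as_update_payload(current))`.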
1200
1201#### Get Worker Type Last Modified Time
1202This method is provided to allow workers to see when they were
last modified.  The value provided through UserData can be
compared against this value to see if changes have been made.
If the worker type definition has not been changed, the date
should be identical, as it is the same stored value.
1207
1208
1209
1210Takes the following arguments:
1211
1212  * `workerType`
1213
1214Required [output schema](http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-last-modified.json#)
1215
```python
# Sync calls
awsProvisioner.workerTypeLastModified(workerType) # -> result
awsProvisioner.workerTypeLastModified(workerType='value') # -> result
# Async calls
await asyncAwsProvisioner.workerTypeLastModified(workerType) # -> result
await asyncAwsProvisioner.workerTypeLastModified(workerType='value') # -> result
```
1224
1225#### Get Worker Type
1226Retrieve a copy of the requested worker type definition.
1227This copy contains a lastModified field as well as the worker
1228type name.  As such, it will require manipulation to be able to
use the results of this method to submit data to the update
1230method.
1231
1232
1233
1234Takes the following arguments:
1235
1236  * `workerType`
1237
1238Required [output schema](http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#)
1239
```python
# Sync calls
awsProvisioner.workerType(workerType) # -> result
awsProvisioner.workerType(workerType='value') # -> result
# Async calls
await asyncAwsProvisioner.workerType(workerType) # -> result
await asyncAwsProvisioner.workerType(workerType='value') # -> result
```
1248
1249#### Delete Worker Type
1250Delete a worker type definition.  This method will only delete
1251the worker type definition from the storage table.  The actual
1252deletion will be handled by a background worker.  As soon as this
1253method is called for a worker type, the background worker will
immediately submit requests to cancel all spot requests for this
worker type and to kill all instances regardless of their
state.  If you want to gracefully remove a worker type, you must
either ensure that no tasks are created with that worker type name,
or you could theoretically set maxCapacity to 0, though this is
not a supported or tested action.
1260
1261
1262
1263Takes the following arguments:
1264
1265  * `workerType`
1266
```python
# Sync calls
awsProvisioner.removeWorkerType(workerType) # -> None
awsProvisioner.removeWorkerType(workerType='value') # -> None
# Async calls
await asyncAwsProvisioner.removeWorkerType(workerType) # -> None
await asyncAwsProvisioner.removeWorkerType(workerType='value') # -> None
```
1275
1276#### List Worker Types
1277Return a list of string worker type names.  These are the names
1278of all managed worker types known to the provisioner.  This does
not include worker types which are left over from a deleted worker
1280type definition but are still running in AWS.
1281
1282
1283Required [output schema](http://schemas.taskcluster.net/aws-provisioner/v1/list-worker-types-response.json#)
1284
```python
# Sync calls
awsProvisioner.listWorkerTypes() # -> result
# Async call
await asyncAwsProvisioner.listWorkerTypes() # -> result
```
1291
1292#### Create new Secret
1293Insert a secret into the secret storage.  The supplied secrets will
be provided verbatim via `getSecret`, while the supplied scopes will
1295be converted into credentials by `getSecret`.
1296
1297This method is not ordinarily used in production; instead, the provisioner
1298creates a new secret directly for each spot bid.
1299
1300
1301
1302Takes the following arguments:
1303
1304  * `token`
1305
1306Required [input schema](http://schemas.taskcluster.net/aws-provisioner/v1/create-secret-request.json#)
1307
```python
# Sync calls
awsProvisioner.createSecret(token, payload) # -> None
awsProvisioner.createSecret(payload, token='value') # -> None
# Async calls
await asyncAwsProvisioner.createSecret(token, payload) # -> None
await asyncAwsProvisioner.createSecret(payload, token='value') # -> None
```
1316
1317#### Get a Secret
1318Retrieve a secret from storage.  The result contains any passwords or
1319other restricted information verbatim as well as a temporary credential
1320based on the scopes specified when the secret was created.
1321
1322It is important that this secret is deleted by the consumer (`removeSecret`),
1323or else the secrets will be visible to any process which can access the
1324user data associated with the instance.
1325
1326
1327
1328Takes the following arguments:
1329
1330  * `token`
1331
1332Required [output schema](http://schemas.taskcluster.net/aws-provisioner/v1/get-secret-response.json#)
1333
```python
# Sync calls
awsProvisioner.getSecret(token) # -> result
awsProvisioner.getSecret(token='value') # -> result
# Async calls
await asyncAwsProvisioner.getSecret(token) # -> result
await asyncAwsProvisioner.getSecret(token='value') # -> result
```
1342
1343#### Report an instance starting
An instance will report in by giving its instance id as well
as its security token.  The token is checked to ensure that it
matches a real token that exists, so that random machines cannot
check in.  We could generate a different token, but that seems
like overkill.
1349
1350
1351
1352Takes the following arguments:
1353
1354  * `instanceId`
1355  * `token`
1356
```python
# Sync calls
awsProvisioner.instanceStarted(instanceId, token) # -> None
awsProvisioner.instanceStarted(instanceId='value', token='value') # -> None
# Async calls
await asyncAwsProvisioner.instanceStarted(instanceId, token) # -> None
await asyncAwsProvisioner.instanceStarted(instanceId='value', token='value') # -> None
```
1365
1366#### Remove a Secret
1367Remove a secret.  After this call, a call to `getSecret` with the given
1368token will return no information.
1369
1370It is very important that the consumer of a
1371secret delete the secret from storage before handing over control
1372to untrusted processes to prevent credential and/or secret leakage.
1373
1374
1375
1376Takes the following arguments:
1377
1378  * `token`
1379
```python
# Sync calls
awsProvisioner.removeSecret(token) # -> None
awsProvisioner.removeSecret(token='value') # -> None
# Async calls
await asyncAwsProvisioner.removeSecret(token) # -> None
await asyncAwsProvisioner.removeSecret(token='value') # -> None
```
1388
1389#### Get All Launch Specifications for WorkerType
1390This method returns a preview of all possible launch specifications
1391that this worker type definition could submit to EC2.  It is used to
test worker types, nothing more.
1393
1394**This API end-point is experimental and may be subject to change without warning.**
1395
1396
1397
1398Takes the following arguments:
1399
1400  * `workerType`
1401
1402Required [output schema](http://schemas.taskcluster.net/aws-provisioner/v1/get-launch-specs-response.json#)
1403
```python
# Sync calls
awsProvisioner.getLaunchSpecs(workerType) # -> result
awsProvisioner.getLaunchSpecs(workerType='value') # -> result
# Async calls
await asyncAwsProvisioner.getLaunchSpecs(workerType) # -> result
await asyncAwsProvisioner.getLaunchSpecs(workerType='value') # -> result
```
1412
1413#### Get AWS State for a worker type
Return the state of a given worker type as stored by the provisioner.
This state is stored as two lists: one for running instances and one for
pending requests.  The `summary` property contains an updated summary
similar to that returned from `listWorkerTypeSummaries`.
1418
1419
1420
1421Takes the following arguments:
1422
1423  * `workerType`
1424
```python
# Sync calls
awsProvisioner.state(workerType) # -> None
awsProvisioner.state(workerType='value') # -> None
# Async calls
await asyncAwsProvisioner.state(workerType) # -> None
await asyncAwsProvisioner.state(workerType='value') # -> None
```
1433
1434#### Backend Status
This endpoint shows when the provisioner last checked in.  A
check-in is done through the Dead Man's Snitch API at the
conclusion of a provisioning iteration, and is used to tell
whether the background provisioning process is still running.

**Warning**: this API end-point is **not stable**.
1442
1443
1444Required [output schema](http://schemas.taskcluster.net/aws-provisioner/v1/backend-status-response.json#)
1445
```python
# Sync calls
awsProvisioner.backendStatus() # -> result
# Async call
await asyncAwsProvisioner.backendStatus() # -> result
```
1452
1453#### Ping Server
1454Respond without doing anything.
1455This endpoint is used to check that the service is up.
1456
1457
```python
# Sync calls
awsProvisioner.ping() # -> None
# Async call
await asyncAwsProvisioner.ping() # -> None
```
1464
1465
1466
1467
1468### Exchanges in `taskcluster.AwsProvisionerEvents`
```python
# Create AwsProvisionerEvents client instance
import taskcluster
awsProvisionerEvents = taskcluster.AwsProvisionerEvents(options)
```
1474Exchanges from the provisioner... more docs later
1475#### WorkerType Created Message
1476 * `awsProvisionerEvents.workerTypeCreated(routingKeyPattern) -> routingKey`
   * `routingKeyKind` is required; its value is always the constant `primary`.  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
   * `workerType` is required.  Description: WorkerType that this message concerns.
   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as our tooling does automatically if not specified.
1480
1481#### WorkerType Updated Message
1482 * `awsProvisionerEvents.workerTypeUpdated(routingKeyPattern) -> routingKey`
   * `routingKeyKind` is required; its value is always the constant `primary`.  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
   * `workerType` is required.  Description: WorkerType that this message concerns.
   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as our tooling does automatically if not specified.
1486
1487#### WorkerType Removed Message
1488 * `awsProvisionerEvents.workerTypeRemoved(routingKeyPattern) -> routingKey`
   * `routingKeyKind` is required; its value is always the constant `primary`.  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
   * `workerType` is required.  Description: WorkerType that this message concerns.
   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as our tooling does automatically if not specified.
1492
1493
1494
1495
1496### Methods in `taskcluster.EC2Manager`
```python
import asyncio # Only for async
# Create EC2Manager client instance
import taskcluster
import taskcluster.aio

eC2Manager = taskcluster.EC2Manager(options)
# Below only for async instances, assume already in coroutine
loop = asyncio.get_event_loop()
session = taskcluster.aio.createSession(loop=loop)
asyncEC2Manager = taskcluster.aio.EC2Manager(options, session=session)
```
A taskcluster service which manages EC2 instances.  This service does not understand any taskcluster concepts intrinsically other than using the name `workerType` to refer to a group of associated instances.  Unless you are working on building a provisioner for AWS, you almost certainly do not want to use this service.
1510#### Ping Server
1511Respond without doing anything.
1512This endpoint is used to check that the service is up.
1513
1514
```python
# Sync calls
eC2Manager.ping() # -> None
# Async call
await asyncEC2Manager.ping() # -> None
```
1521
1522#### See the list of worker types which are known to be managed
This method is only for debugging the ec2-manager.
1524
1525
1526Required [output schema](v1/list-worker-types.json#)
1527
```python
# Sync calls
eC2Manager.listWorkerTypes() # -> result
# Async call
await asyncEC2Manager.listWorkerTypes() # -> result
```
1534
1535#### Run an instance
Request an instance of a worker type.
1537
1538
1539
1540Takes the following arguments:
1541
1542  * `workerType`
1543
1544Required [input schema](v1/run-instance-request.json#)
1545
```python
# Sync calls
eC2Manager.runInstance(workerType, payload) # -> None
eC2Manager.runInstance(payload, workerType='value') # -> None
# Async calls
await asyncEC2Manager.runInstance(workerType, payload) # -> None
await asyncEC2Manager.runInstance(payload, workerType='value') # -> None
```
1554
1555#### Terminate all resources from a worker type
Terminate all instances for this worker type.
1557
1558
1559
1560Takes the following arguments:
1561
1562  * `workerType`
1563
```python
# Sync calls
eC2Manager.terminateWorkerType(workerType) # -> None
eC2Manager.terminateWorkerType(workerType='value') # -> None
# Async calls
await asyncEC2Manager.terminateWorkerType(workerType) # -> None
await asyncEC2Manager.terminateWorkerType(workerType='value') # -> None
```
1572
1573#### Look up the resource stats for a workerType
Return an object which has a generic state description.  This only contains counts of instances.
1575
1576
1577
1578Takes the following arguments:
1579
1580  * `workerType`
1581
1582Required [output schema](v1/worker-type-resources.json#)
1583
```python
# Sync calls
eC2Manager.workerTypeStats(workerType) # -> result
eC2Manager.workerTypeStats(workerType='value') # -> result
# Async calls
await asyncEC2Manager.workerTypeStats(workerType) # -> result
await asyncEC2Manager.workerTypeStats(workerType='value') # -> result
```
1592
1593#### Look up the resource health for a workerType
Return a view of the health of a given worker type.
1595
1596
1597
1598Takes the following arguments:
1599
1600  * `workerType`
1601
1602Required [output schema](v1/health.json#)
1603
```python
# Sync calls
eC2Manager.workerTypeHealth(workerType) # -> result
eC2Manager.workerTypeHealth(workerType='value') # -> result
# Async calls
await asyncEC2Manager.workerTypeHealth(workerType) # -> result
await asyncEC2Manager.workerTypeHealth(workerType='value') # -> result
```
1612
1613#### Look up the most recent errors of a workerType
Return a list of the most recent errors encountered by a worker type.
1615
1616
1617
1618Takes the following arguments:
1619
1620  * `workerType`
1621
1622Required [output schema](v1/errors.json#)
1623
```python
# Sync calls
eC2Manager.workerTypeErrors(workerType) # -> result
eC2Manager.workerTypeErrors(workerType='value') # -> result
# Async calls
await asyncEC2Manager.workerTypeErrors(workerType) # -> result
await asyncEC2Manager.workerTypeErrors(workerType='value') # -> result
```
1632
1633#### Look up the resource state for a workerType
Return state information for a given worker type.
1635
1636
1637
1638Takes the following arguments:
1639
1640  * `workerType`
1641
1642Required [output schema](v1/worker-type-state.json#)
1643
```python
# Sync calls
eC2Manager.workerTypeState(workerType) # -> result
eC2Manager.workerTypeState(workerType='value') # -> result
# Async calls
await asyncEC2Manager.workerTypeState(workerType) # -> result
await asyncEC2Manager.workerTypeState(workerType='value') # -> result
```
1652
1653#### Ensure a KeyPair for a given worker type exists
Idempotently ensure that a keypair of a given name exists.
1655
1656
1657
1658Takes the following arguments:
1659
1660  * `name`
1661
1662Required [input schema](v1/create-key-pair.json#)
1663
```python
# Sync calls
eC2Manager.ensureKeyPair(name, payload) # -> None
eC2Manager.ensureKeyPair(payload, name='value') # -> None
# Async calls
await asyncEC2Manager.ensureKeyPair(name, payload) # -> None
await asyncEC2Manager.ensureKeyPair(payload, name='value') # -> None
```
1672
1673#### Ensure a KeyPair for a given worker type does not exist
1674Ensure that a keypair of a given name does not exist.
1675
1676
1677
1678Takes the following arguments:
1679
1680  * `name`
1681
```python
# Sync calls
eC2Manager.removeKeyPair(name) # -> None
eC2Manager.removeKeyPair(name='value') # -> None
# Async calls
await asyncEC2Manager.removeKeyPair(name) # -> None
await asyncEC2Manager.removeKeyPair(name='value') # -> None
```
1690
1691#### Terminate an instance
Terminate an instance in a specified region.
1693
1694
1695
1696Takes the following arguments:
1697
1698  * `region`
1699  * `instanceId`
1700
```python
# Sync calls
eC2Manager.terminateInstance(region, instanceId) # -> None
eC2Manager.terminateInstance(region='value', instanceId='value') # -> None
# Async calls
await asyncEC2Manager.terminateInstance(region, instanceId) # -> None
await asyncEC2Manager.terminateInstance(region='value', instanceId='value') # -> None
```
1709
1710#### Request prices for EC2
Return a list of possible prices for EC2.
1712
1713
1714Required [output schema](v1/prices.json#)
1715
```python
# Sync calls
eC2Manager.getPrices() # -> result
# Async call
await asyncEC2Manager.getPrices() # -> result
```
1722
#### Request specific prices for EC2
Return a list of possible prices for EC2 matching the given request.
1725
1726
1727Required [input schema](v1/prices-request.json#)
1728
1729Required [output schema](v1/prices.json#)
1730
```python
# Sync calls
eC2Manager.getSpecificPrices(payload) # -> result
# Async call
await asyncEC2Manager.getSpecificPrices(payload) # -> result
```
1737
1738#### Get EC2 account health metrics
Give some basic stats on the health of our EC2 account.
1740
1741
1742Required [output schema](v1/health.json#)
1743
```python
# Sync calls
eC2Manager.getHealth() # -> result
# Async call
await asyncEC2Manager.getHealth() # -> result
```
1750
1751#### Look up the most recent errors in the provisioner across all worker types
Return a list of recent errors encountered.
1753
1754
1755Required [output schema](v1/errors.json#)
1756
```python
# Sync calls
eC2Manager.getRecentErrors() # -> result
# Async call
await asyncEC2Manager.getRecentErrors() # -> result
```
1763
1764#### See the list of regions managed by this ec2-manager
This method is only for debugging the ec2-manager.
1766
1767
```python
# Sync calls
eC2Manager.regions() # -> None
# Async call
await asyncEC2Manager.regions() # -> None
```
1774
1775#### See the list of AMIs and their usage
List AMIs and their usage by returning a list of objects in the form:

{
  region: string,
  volumetype: string,
  lastused: timestamp
}
1782
1783
```python
# Sync calls
eC2Manager.amiUsage() # -> None
# Async call
await asyncEC2Manager.amiUsage() # -> None
```
1790
1791#### See the current EBS volume usage list
Lists current EBS volume usage by returning a list of objects
that are uniquely defined by {region, volumetype, state} in the form:

{
  region: string,
  volumetype: string,
  state: string,
  totalcount: integer,
  totalgb: integer,
  touched: timestamp (last time that information was updated)
}
1802
1803
```python
# Sync calls
eC2Manager.ebsUsage() # -> None
# Async call
await asyncEC2Manager.ebsUsage() # -> None
```
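Given the object shape above, a consumer might aggregate usage per region as follows (a hypothetical helper; the field names are taken from the form shown above):

```python
from collections import defaultdict

def total_gb_by_region(usage_entries):
    """Sum `totalgb` across the {region, volumetype, state} entries,
    grouped by region."""
    totals = defaultdict(int)
    for entry in usage_entries:
        totals[entry['region']] += entry['totalgb']
    return dict(totals)
```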
1810
1811#### Statistics on the Database client pool
This method is only for debugging the ec2-manager.
1813
1814
```python
# Sync calls
eC2Manager.dbpoolStats() # -> None
# Async call
await asyncEC2Manager.dbpoolStats() # -> None
```
1821
1822#### List out the entire internal state
This method is only for debugging the ec2-manager.
1824
1825
```python
# Sync calls
eC2Manager.allState() # -> None
# Async call
await asyncEC2Manager.allState() # -> None
```
1832
1833#### Statistics on the sqs queues
This method is only for debugging the ec2-manager.
1835
1836
```python
# Sync calls
eC2Manager.sqsStats() # -> None
# Async call
await asyncEC2Manager.sqsStats() # -> None
```
1843
1844#### Purge the SQS queues
This method is only for debugging the ec2-manager.
1846
1847
```python
# Sync calls
eC2Manager.purgeQueues() # -> None
# Async call
await asyncEC2Manager.purgeQueues() # -> None
```
1854
1855
1856
1857
1858### Methods in `taskcluster.Github`
```python
import asyncio # Only for async
# Create Github client instance
import taskcluster
import taskcluster.aio

github = taskcluster.Github(options)
# Below only for async instances, assume already in coroutine
loop = asyncio.get_event_loop()
session = taskcluster.aio.createSession(loop=loop)
asyncGithub = taskcluster.aio.Github(options, session=session)
```
1871The github service is responsible for creating tasks in reposnse
1872to GitHub events, and posting results to the GitHub UI.
1873
1874This document describes the API end-point for consuming GitHub
1875web hooks, as well as some useful consumer APIs.
1876
1877When Github forbids an action, this service returns an HTTP 403
1878with code ForbiddenByGithub.
#### Ping Server
Respond without doing anything.
This endpoint is used to check that the service is up.


```python
# Sync calls
github.ping() # -> None
# Async call
await asyncGithub.ping() # -> None
```

#### Consume GitHub WebHook
Capture a GitHub event and publish it via pulse, if it's a push,
release or pull request.


```python
# Sync calls
github.githubWebHookConsumer() # -> None
# Async call
await asyncGithub.githubWebHookConsumer() # -> None
```

#### List of Builds
A paginated list of builds that have been run in
Taskcluster. Can be filtered on various git-specific
fields.


Required [output schema](v1/build-list.json#)

```python
# Sync calls
github.builds() # -> result
# Async call
await asyncGithub.builds() # -> result
```

#### Latest Build Status Badge
Checks the status of the latest build of a given branch
and returns the corresponding badge SVG.



Takes the following arguments:

  * `owner`
  * `repo`
  * `branch`

```python
# Sync calls
github.badge(owner, repo, branch) # -> None
github.badge(owner='value', repo='value', branch='value') # -> None
# Async call
await asyncGithub.badge(owner, repo, branch) # -> None
await asyncGithub.badge(owner='value', repo='value', branch='value') # -> None
```

#### Get Repository Info
Returns any repository metadata that is
useful within Taskcluster related services.



Takes the following arguments:

  * `owner`
  * `repo`

Required [output schema](v1/repository.json#)

```python
# Sync calls
github.repository(owner, repo) # -> result
github.repository(owner='value', repo='value') # -> result
# Async call
await asyncGithub.repository(owner, repo) # -> result
await asyncGithub.repository(owner='value', repo='value') # -> result
```

#### Latest Status for Branch
For a given branch of a repository, this will always point
to a status page for the most recent task triggered by that
branch.

Note: This is a redirect rather than a direct link.



Takes the following arguments:

  * `owner`
  * `repo`
  * `branch`

```python
# Sync calls
github.latest(owner, repo, branch) # -> None
github.latest(owner='value', repo='value', branch='value') # -> None
# Async call
await asyncGithub.latest(owner, repo, branch) # -> None
await asyncGithub.latest(owner='value', repo='value', branch='value') # -> None
```

#### Post a status against a given changeset
For a given changeset (SHA) of a repository, this will attach a "commit status"
on GitHub. These statuses are links displayed next to each revision.
The status is either OK (green check) or FAILURE (red cross),
made of a custom title and link.



Takes the following arguments:

  * `owner`
  * `repo`
  * `sha`

Required [input schema](v1/create-status.json#)

```python
# Sync calls
github.createStatus(owner, repo, sha, payload) # -> None
github.createStatus(payload, owner='value', repo='value', sha='value') # -> None
# Async call
await asyncGithub.createStatus(owner, repo, sha, payload) # -> None
await asyncGithub.createStatus(payload, owner='value', repo='value', sha='value') # -> None
```

#### Post a comment on a given GitHub Issue or Pull Request
For a given Issue or Pull Request of a repository, this will write a new message.



Takes the following arguments:

  * `owner`
  * `repo`
  * `number`

Required [input schema](v1/create-comment.json#)

```python
# Sync calls
github.createComment(owner, repo, number, payload) # -> None
github.createComment(payload, owner='value', repo='value', number='value') # -> None
# Async call
await asyncGithub.createComment(owner, repo, number, payload) # -> None
await asyncGithub.createComment(payload, owner='value', repo='value', number='value') # -> None
```




### Exchanges in `taskcluster.GithubEvents`
```python
# Create GithubEvents client instance
import taskcluster
githubEvents = taskcluster.GithubEvents(options)
```
The github service publishes a pulse
message for supported github events, translating GitHub webhook
events into pulse messages.

This document describes the exchanges offered by the taskcluster
github service.
#### GitHub Pull Request Event
 * `githubEvents.pullRequest(routingKeyPattern) -> routingKey`
   * `routingKeyKind` (required, constant `primary`): Identifier for the routing-key kind. This is always `"primary"` for the formalized routing key.
   * `organization` (required): The GitHub `organization` which had an event. All periods have been replaced by `%` (so `foo.bar` becomes `foo%bar`) and all other special characters aside from `-` and `_` have been stripped.
   * `repository` (required): The GitHub `repository` which had an event. All periods have been replaced by `%` (so `foo.bar` becomes `foo%bar`) and all other special characters aside from `-` and `_` have been stripped.
   * `action` (required): The GitHub `action` which triggered the event. See the payload's `action` property for possible values.

#### GitHub push Event
 * `githubEvents.push(routingKeyPattern) -> routingKey`
   * `routingKeyKind` (required, constant `primary`): Identifier for the routing-key kind. This is always `"primary"` for the formalized routing key.
   * `organization` (required): The GitHub `organization` which had an event. All periods have been replaced by `%` (so `foo.bar` becomes `foo%bar`) and all other special characters aside from `-` and `_` have been stripped.
   * `repository` (required): The GitHub `repository` which had an event. All periods have been replaced by `%` (so `foo.bar` becomes `foo%bar`) and all other special characters aside from `-` and `_` have been stripped.

#### GitHub release Event
 * `githubEvents.release(routingKeyPattern) -> routingKey`
   * `routingKeyKind` (required, constant `primary`): Identifier for the routing-key kind. This is always `"primary"` for the formalized routing key.
   * `organization` (required): The GitHub `organization` which had an event. All periods have been replaced by `%` (so `foo.bar` becomes `foo%bar`) and all other special characters aside from `-` and `_` have been stripped.
   * `repository` (required): The GitHub `repository` which had an event. All periods have been replaced by `%` (so `foo.bar` becomes `foo%bar`) and all other special characters aside from `-` and `_` have been stripped.

#### GitHub task group defined Event
 * `githubEvents.taskGroupDefined(routingKeyPattern) -> routingKey`
   * `routingKeyKind` (required, constant `primary`): Identifier for the routing-key kind. This is always `"primary"` for the formalized routing key.
   * `organization` (required): The GitHub `organization` which had an event. All periods have been replaced by `%` (so `foo.bar` becomes `foo%bar`) and all other special characters aside from `-` and `_` have been stripped.
   * `repository` (required): The GitHub `repository` which had an event. All periods have been replaced by `%` (so `foo.bar` becomes `foo%bar`) and all other special characters aside from `-` and `_` have been stripped.




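The escaping rule described in the fields above (periods become `%`; other special characters aside from `-` and `_` are stripped) can be sketched as a small helper for building routing-key patterns. The helper name is hypothetical, and the exact key layout simply follows the fields listed above, with `#` as the usual AMQP wildcard for any remaining fields:

```python
import re

def escape_github_name(name):
    """Escape a GitHub org/repo name the way the github service does for
    routing keys: '.' -> '%', drop other specials except '-' and '_'."""
    escaped = name.replace('.', '%')
    # keep alphanumerics, '-', '_' and the '%' placeholders
    return re.sub(r'[^A-Za-z0-9_%-]', '', escaped)

# e.g. a pattern matching push events for one repository:
pattern = 'primary.{}.{}.#'.format(
    escape_github_name('mozilla'),
    escape_github_name('build.buildbot'),  # -> 'build%buildbot'
)
```

A pattern built this way can then be passed as the `routingKeyPattern` when binding to one of the exchanges above.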
### Methods in `taskcluster.Hooks`
```python
import asyncio # Only for async
# Create Hooks client instance
import taskcluster
import taskcluster.aio

hooks = taskcluster.Hooks(options)
# Below only for async instances, assume already in coroutine
loop = asyncio.get_event_loop()
session = taskcluster.aio.createSession(loop=loop)
asyncHooks = taskcluster.aio.Hooks(options, session=session)
```
Hooks are a mechanism for creating tasks in response to events.

Hooks are identified with a `hookGroupId` and a `hookId`.

When an event occurs, the resulting task is automatically created.  The
task is created using the scope `assume:hook-id:<hookGroupId>/<hookId>`,
which must have scopes to make the createTask call, including satisfying all
scopes in `task.scopes`.  The new task has a `taskGroupId` equal to its
`taskId`, as is the convention for decision tasks.

Hooks can have a "schedule" indicating specific times that new tasks should
be created.  Each schedule is in a simple cron format, per
https://www.npmjs.com/package/cron-parser.  For example:
 * `['0 0 1 * * *']` -- daily at 1:00 UTC
 * `['0 0 9,21 * * 1-5', '0 0 12 * * 0,6']` -- weekdays at 9:00 and 21:00 UTC, weekends at noon

The task definition is used as a JSON-e template, with a context depending on how the hook is fired.  See
[firing-hooks](/docs/reference/core/taskcluster-hooks/docs/firing-hooks)
for more information.
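Putting the pieces above together, a scheduled hook definition might look roughly like this. The field names follow the create-hook-request schema as I understand it; treat them as illustrative and verify against `v1/create-hook-request.json#`:

```python
# A sketch of a hook definition with a cron schedule (field names are
# illustrative -- check the create-hook-request schema before relying on them).
hook_definition = {
    'metadata': {
        'name': 'Nightly build',
        'description': 'Runs the nightly build task',
        'owner': 'someone@example.com',  # hypothetical owner address
        'emailOnError': True,
    },
    # Fire every day at 1:00 UTC (6-field cron format, per cron-parser)
    'schedule': ['0 0 1 * * *'],
    'task': {
        # JSON-e template rendered when the hook fires; the resulting task
        # gets taskGroupId == taskId, as described above.
    },
}

# hooks.createHook('my-group', 'nightly-build', hook_definition)
```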
#### Ping Server
Respond without doing anything.
This endpoint is used to check that the service is up.


```python
# Sync calls
hooks.ping() # -> None
# Async call
await asyncHooks.ping() # -> None
```

#### List hook groups
This endpoint will return a list of all hook groups with at least one hook.


Required [output schema](v1/list-hook-groups-response.json#)

```python
# Sync calls
hooks.listHookGroups() # -> result
# Async call
await asyncHooks.listHookGroups() # -> result
```

#### List hooks in a given group
This endpoint will return a list of all the hook definitions within a
given hook group.



Takes the following arguments:

  * `hookGroupId`

Required [output schema](v1/list-hooks-response.json#)

```python
# Sync calls
hooks.listHooks(hookGroupId) # -> result
hooks.listHooks(hookGroupId='value') # -> result
# Async call
await asyncHooks.listHooks(hookGroupId) # -> result
await asyncHooks.listHooks(hookGroupId='value') # -> result
```

#### Get hook definition
This endpoint will return the hook definition for the given `hookGroupId`
and `hookId`.



Takes the following arguments:

  * `hookGroupId`
  * `hookId`

Required [output schema](v1/hook-definition.json#)

```python
# Sync calls
hooks.hook(hookGroupId, hookId) # -> result
hooks.hook(hookGroupId='value', hookId='value') # -> result
# Async call
await asyncHooks.hook(hookGroupId, hookId) # -> result
await asyncHooks.hook(hookGroupId='value', hookId='value') # -> result
```

#### Get hook status
This endpoint will return the current status of the hook.  This represents a
snapshot in time and may vary from one call to the next.



Takes the following arguments:

  * `hookGroupId`
  * `hookId`

Required [output schema](v1/hook-status.json#)

```python
# Sync calls
hooks.getHookStatus(hookGroupId, hookId) # -> result
hooks.getHookStatus(hookGroupId='value', hookId='value') # -> result
# Async call
await asyncHooks.getHookStatus(hookGroupId, hookId) # -> result
await asyncHooks.getHookStatus(hookGroupId='value', hookId='value') # -> result
```

#### Create a hook
This endpoint will create a new hook.

The caller's credentials must include the role that will be used to
create the task.  That role must satisfy `task.scopes` as well as the
necessary scopes to add the task to the queue.




Takes the following arguments:

  * `hookGroupId`
  * `hookId`

Required [input schema](v1/create-hook-request.json#)

Required [output schema](v1/hook-definition.json#)

```python
# Sync calls
hooks.createHook(hookGroupId, hookId, payload) # -> result
hooks.createHook(payload, hookGroupId='value', hookId='value') # -> result
# Async call
await asyncHooks.createHook(hookGroupId, hookId, payload) # -> result
await asyncHooks.createHook(payload, hookGroupId='value', hookId='value') # -> result
```

#### Update a hook
This endpoint will update an existing hook.  All fields except
`hookGroupId` and `hookId` can be modified.



Takes the following arguments:

  * `hookGroupId`
  * `hookId`

Required [input schema](v1/create-hook-request.json#)

Required [output schema](v1/hook-definition.json#)

```python
# Sync calls
hooks.updateHook(hookGroupId, hookId, payload) # -> result
hooks.updateHook(payload, hookGroupId='value', hookId='value') # -> result
# Async call
await asyncHooks.updateHook(hookGroupId, hookId, payload) # -> result
await asyncHooks.updateHook(payload, hookGroupId='value', hookId='value') # -> result
```

#### Delete a hook
This endpoint will remove a hook definition.



Takes the following arguments:

  * `hookGroupId`
  * `hookId`

```python
# Sync calls
hooks.removeHook(hookGroupId, hookId) # -> None
hooks.removeHook(hookGroupId='value', hookId='value') # -> None
# Async call
await asyncHooks.removeHook(hookGroupId, hookId) # -> None
await asyncHooks.removeHook(hookGroupId='value', hookId='value') # -> None
```

#### Trigger a hook
This endpoint will trigger the creation of a task from a hook definition.

The HTTP payload must match the hook's `triggerSchema`.  If it does, it is
provided as the `payload` property of the JSON-e context used to render the
task template.



Takes the following arguments:

  * `hookGroupId`
  * `hookId`

Required [input schema](v1/trigger-hook.json#)

Required [output schema](v1/task-status.json#)

```python
# Sync calls
hooks.triggerHook(hookGroupId, hookId, payload) # -> result
hooks.triggerHook(payload, hookGroupId='value', hookId='value') # -> result
# Async call
await asyncHooks.triggerHook(hookGroupId, hookId, payload) # -> result
await asyncHooks.triggerHook(payload, hookGroupId='value', hookId='value') # -> result
```

#### Get a trigger token
Retrieve a unique secret token for triggering the specified hook. This
token can be deactivated with `resetTriggerToken`.



Takes the following arguments:

  * `hookGroupId`
  * `hookId`

Required [output schema](v1/trigger-token-response.json#)

```python
# Sync calls
hooks.getTriggerToken(hookGroupId, hookId) # -> result
hooks.getTriggerToken(hookGroupId='value', hookId='value') # -> result
# Async call
await asyncHooks.getTriggerToken(hookGroupId, hookId) # -> result
await asyncHooks.getTriggerToken(hookGroupId='value', hookId='value') # -> result
```

#### Reset a trigger token
Reset the token for triggering a given hook. This invalidates any token
previously issued via `getTriggerToken` and replaces it with a new token.



Takes the following arguments:

  * `hookGroupId`
  * `hookId`

Required [output schema](v1/trigger-token-response.json#)

```python
# Sync calls
hooks.resetTriggerToken(hookGroupId, hookId) # -> result
hooks.resetTriggerToken(hookGroupId='value', hookId='value') # -> result
# Async call
await asyncHooks.resetTriggerToken(hookGroupId, hookId) # -> result
await asyncHooks.resetTriggerToken(hookGroupId='value', hookId='value') # -> result
```

#### Trigger a hook with a token
This endpoint triggers a defined hook with a valid token.

The HTTP payload must match the hook's `triggerSchema`.  If it does, it is
provided as the `payload` property of the JSON-e context used to render the
task template.



Takes the following arguments:

  * `hookGroupId`
  * `hookId`
  * `token`

Required [input schema](v1/trigger-hook.json#)

Required [output schema](v1/task-status.json#)

```python
# Sync calls
hooks.triggerHookWithToken(hookGroupId, hookId, token, payload) # -> result
hooks.triggerHookWithToken(payload, hookGroupId='value', hookId='value', token='value') # -> result
# Async call
await asyncHooks.triggerHookWithToken(hookGroupId, hookId, token, payload) # -> result
await asyncHooks.triggerHookWithToken(payload, hookGroupId='value', hookId='value', token='value') # -> result
```

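Because the token itself authorizes the call, a token-triggered hook can be fired by systems that hold no Taskcluster credentials at all, e.g. via a plain webhook URL. The base URL and route layout below are assumptions inferred from the method's arguments, not something this document specifies:

```python
# Sketch: build a webhook-style URL for triggerHookWithToken so an external
# system can fire the hook. Base URL and path layout are assumptions.
BASE = 'https://hooks.taskcluster.net/v1'

def trigger_url(hook_group_id, hook_id, token):
    return '{}/hooks/{}/{}/trigger/{}'.format(BASE, hook_group_id, hook_id, token)

url = trigger_url('my-group', 'my-hook', 'SECRET-TOKEN')
```

If the token leaks, `resetTriggerToken` (above) invalidates it without touching the hook definition.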



### Methods in `taskcluster.Index`
```python
import asyncio # Only for async
# Create Index client instance
import taskcluster
import taskcluster.aio

index = taskcluster.Index(options)
# Below only for async instances, assume already in coroutine
loop = asyncio.get_event_loop()
session = taskcluster.aio.createSession(loop=loop)
asyncIndex = taskcluster.aio.Index(options, session=session)
```
The task index, typically available at `index.taskcluster.net`, is
responsible for indexing tasks. The service ensures that tasks can be
located by recency and/or arbitrary strings. Common use-cases include:

 * Locating tasks by git or mercurial `<revision>`, or
 * Locating the latest task from a given `<branch>`, such as a release.

**Index hierarchy**, tasks are indexed in a dot (`.`) separated hierarchy
called a namespace. For example a task could be indexed with the index path
`some-app.<revision>.linux-64.release-build`. In this case the following
namespaces are created:

 1. `some-app`,
 2. `some-app.<revision>`, and
 3. `some-app.<revision>.linux-64`

Inside the namespace `some-app.<revision>` you can find the namespace
`some-app.<revision>.linux-64` inside which you can find the indexed task
`some-app.<revision>.linux-64.release-build`. This is an example of indexing
builds for a given platform and revision.

**Task Rank**, when a task is indexed, it is assigned a `rank` (defaults
to `0`). If another task is already indexed in the same namespace with
lower or equal `rank`, the index for that task will be overwritten. For example
consider index path `mozilla-central.linux-64.release-build`. In
this case one might choose to use a UNIX timestamp or mercurial revision
number as `rank`. This way the latest completed linux 64 bit release
build is always available at `mozilla-central.linux-64.release-build`.

Note that this does mean index paths are not immutable: the same path may
point to a different task now than it did a moment ago.

**Indexed Data**, when a task is retrieved from the index the result includes
a `taskId` and an additional user-defined JSON blob that was indexed with
the task.

**Entry Expiration**, all indexed entries must have an expiration date.
Typically this defaults to one year, if not specified. If you are
indexing tasks to make it easy to find artifacts, consider using the
artifact's expiration date.

**Valid Characters**, all keys in a namespace `<key1>.<key2>` must be
in the form `/[a-zA-Z0-9_!~*'()%-]+/`. Observe that this is URL-safe and
that if you strictly want to put another character you can URL encode it.

**Indexing Routes**, tasks can be indexed using the API below, but the
most common way to index tasks is adding a custom route to `task.routes` of the
form `index.<namespace>`. In order to add this route to a task you'll
need the scope `queue:route:index.<namespace>`. When a task has
this route, it will be indexed when the task is **completed successfully**.
The task will be indexed with `rank`, `data` and `expires` as specified
in `task.extra.index`. See the example below:

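The hierarchy above is entirely determined by the dots in the index path. A small sketch of how the ancestor namespaces fall out of a path (the function name is ours, not part of the client):

```python
def ancestor_namespaces(index_path):
    """Namespaces implied by indexing a task at `index_path`:
    every dot-separated prefix above the final, task-bearing component."""
    parts = index_path.split('.')
    return ['.'.join(parts[:i]) for i in range(1, len(parts))]

chain = ancestor_namespaces('some-app.rev123.linux-64.release-build')
# -> ['some-app', 'some-app.rev123', 'some-app.rev123.linux-64']
```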
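The overwrite rule above is easy to get backwards, so here it is as a one-line predicate (our own sketch, not client code): an incoming task replaces the indexed entry exactly when the existing entry's rank is lower or equal.

```python
def should_replace(existing_rank, new_rank):
    """Per the rank rule above: a newly indexed task overwrites the
    existing entry when existing rank <= new rank."""
    return existing_rank <= new_rank

# With a UNIX timestamp as rank, a later build (larger rank) always wins,
# and a re-index at the same rank also overwrites.
```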
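Since the valid-character set is URL-safe, percent-encoding is a natural way to smuggle other characters into a key. A sketch using the standard library (the helper name is ours):

```python
import re
from urllib.parse import quote

# Character class for a single namespace key, per the rule above
VALID_KEY = re.compile(r"^[a-zA-Z0-9_!~*'()%-]+$")

def make_key(raw):
    """Percent-encode a raw string so it only uses characters valid in an
    index namespace key; '%' itself is in the allowed set."""
    key = quote(raw, safe="_!~*'()-")
    assert VALID_KEY.match(key)
    return key

make_key('foo bar')  # -> 'foo%20bar'
```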
```js
{
  payload:  { /* ... */ },
  routes: [
    // index.<namespace> prefixed routes; tasks CC'ed to such a route will
    // be indexed under the given <namespace>
    "index.mozilla-central.linux-64.release-build",
    "index.<revision>.linux-64.release-build"
  ],
  extra: {
    // Optional details for the indexing service
    index: {
      // Ordering: this taskId will overwrite anything that has
      // rank <= 4000 (rank defaults to zero)
      rank:       4000,

      // Specify when the entries expire (defaults to 1 year)
      expires:          new Date().toJSON(),

      // A little informal data to store along with the taskId
      // (less than 16 kB when encoded as JSON)
      data: {
        hgRevision:   "...",
        commitMessage: "...",
        // ...anything else worth keeping
      }
    },
    // Extra properties for other services...
  }
  // Other task properties...
}
```

**Remark**, when indexing tasks using custom routes, it's also possible
to listen for messages about these tasks. For
example one could bind to `route.index.some-app.*.release-build`,
and pick up all messages about release builds. Hence, it is a
good idea to document task index hierarchies, as these make up extension
points in their own right.
#### Ping Server
Respond without doing anything.
This endpoint is used to check that the service is up.


```python
# Sync calls
index.ping() # -> None
# Async call
await asyncIndex.ping() # -> None
```

#### Find Indexed Task
Find a task by index path, returning the highest-rank task with that path. If no
task exists for the given path, this API end-point will respond with a 404 status.



Takes the following arguments:

  * `indexPath`

Required [output schema](v1/indexed-task-response.json#)

```python
# Sync calls
index.findTask(indexPath) # -> result
index.findTask(indexPath='value') # -> result
# Async call
await asyncIndex.findTask(indexPath) # -> result
await asyncIndex.findTask(indexPath='value') # -> result
```

#### List Namespaces
List the namespaces immediately under a given namespace.

This endpoint
lists up to 1000 namespaces. If more namespaces are present, a
`continuationToken` will be returned, which can be given in the next
request. For the initial request, the payload should be an empty JSON
object.



Takes the following arguments:

  * `namespace`

Required [output schema](v1/list-namespaces-response.json#)

```python
# Sync calls
index.listNamespaces(namespace) # -> result
index.listNamespaces(namespace='value') # -> result
# Async call
await asyncIndex.listNamespaces(namespace) # -> result
await asyncIndex.listNamespaces(namespace='value') # -> result
```

#### List Tasks
List the tasks immediately under a given namespace.

This endpoint
lists up to 1000 tasks. If more tasks are present, a
`continuationToken` will be returned, which can be given in the next
request. For the initial request, the payload should be an empty JSON
object.

**Remark**, this end-point is designed for humans browsing for tasks, not
services, as that makes little sense.



Takes the following arguments:

  * `namespace`

Required [output schema](v1/list-tasks-response.json#)

```python
# Sync calls
index.listTasks(namespace) # -> result
index.listTasks(namespace='value') # -> result
# Async call
await asyncIndex.listTasks(namespace) # -> result
await asyncIndex.listTasks(namespace='value') # -> result
```

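Draining a paginated listing with `continuationToken` follows the usual loop. In the sketch below, `fetch` stands in for a call like `index.listNamespaces`; how the token is actually passed to the client (e.g. as a query parameter) is an assumption to verify against the client's documentation:

```python
# Generic continuation-token pagination loop; `fetch` is any callable that
# returns {'namespaces': [...], 'continuationToken': <maybe>}.
def list_all(fetch):
    results, token = [], None
    while True:
        page = fetch(continuationToken=token)
        results.extend(page['namespaces'])
        token = page.get('continuationToken')
        if not token:
            return results

# Demo with a stub standing in for the service:
def _fake(continuationToken=None):
    if continuationToken is None:
        return {'namespaces': ['a'], 'continuationToken': 't1'}
    return {'namespaces': ['b']}

names = list_all(_fake)  # -> ['a', 'b']
```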
#### Insert Task into Index
Insert a task into the index.  If the new rank is less than the existing rank
at the given index path, the task is not indexed but the response is still 200 OK.

Please see the introduction above for information
about indexing successfully completed tasks automatically using custom routes.



Takes the following arguments:

  * `namespace`

Required [input schema](v1/insert-task-request.json#)

Required [output schema](v1/indexed-task-response.json#)

```python
# Sync calls
index.insertTask(namespace, payload) # -> result
index.insertTask(payload, namespace='value') # -> result
# Async call
await asyncIndex.insertTask(namespace, payload) # -> result
await asyncIndex.insertTask(payload, namespace='value') # -> result
```

#### Get Artifact From Indexed Task
Find a task by index path and redirect to the artifact on the most recent
run with the given `name`.

Note that multiple calls to this endpoint may return artifacts from different tasks
if a new task is inserted into the index between calls. Avoid using this method as
a stable link to multiple, connected files if the index path does not contain a
unique identifier.  For example, the following two links may return unrelated files:
* `https://index.taskcluster.net/task/some-app.win64.latest.installer/artifacts/public/installer.exe`
* `https://index.taskcluster.net/task/some-app.win64.latest.installer/artifacts/public/debug-symbols.zip`

This problem can be remedied by including the revision in the index path or by bundling both
installer and debug symbols into a single artifact.

If no task exists for the given index path, this API end-point responds with 404.



Takes the following arguments:

  * `indexPath`
  * `name`

```python
# Sync calls
index.findArtifactFromTask(indexPath, name) # -> None
index.findArtifactFromTask(indexPath='value', name='value') # -> None
# Async call
await asyncIndex.findArtifactFromTask(indexPath, name) # -> None
await asyncIndex.findArtifactFromTask(indexPath='value', name='value') # -> None
```




### Methods in `taskcluster.Login`
```python
import asyncio # Only for async
# Create Login client instance
import taskcluster
import taskcluster.aio

login = taskcluster.Login(options)
# Below only for async instances, assume already in coroutine
loop = asyncio.get_event_loop()
session = taskcluster.aio.createSession(loop=loop)
asyncLogin = taskcluster.aio.Login(options, session=session)
```
The Login service serves as the interface between external authentication
systems and Taskcluster credentials.
#### Ping Server
Respond without doing anything.
This endpoint is used to check that the service is up.


```python
# Sync calls
login.ping() # -> None
# Async call
await asyncLogin.ping() # -> None
```

#### Get Taskcluster credentials given a suitable `access_token`
Given an OIDC `access_token` from a trusted OpenID provider, return a
set of Taskcluster credentials for use on behalf of the identified
user.

This method is typically not called with a Taskcluster client library
and does not accept Hawk credentials. The `access_token` should be
given in an `Authorization` header:
```
Authorization: Bearer abc.xyz
```

The `access_token` is first verified against the named
`provider`, then passed to the provider's API to retrieve a user
profile. That profile is then used to generate Taskcluster credentials
appropriate to the user. Note that the resulting credentials may or may
not include a `certificate` property. Callers should be prepared for either
alternative.

The given credentials will expire in a relatively short time. Callers should
monitor this expiration and, once the credentials have expired, refresh them
by calling this endpoint again.



Takes the following arguments:

  * `provider`

Required [output schema](v1/oidc-credentials-response.json#)

```python
# Sync calls
login.oidcCredentials(provider) # -> result
login.oidcCredentials(provider='value') # -> result
# Async call
await asyncLogin.oidcCredentials(provider) # -> result
await asyncLogin.oidcCredentials(provider='value') # -> result
```

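Since the call uses a plain Bearer token rather than Hawk signing, the header can be built directly; the token value below is just the document's placeholder:

```python
# Build the Authorization header for oidcCredentials; no Hawk signing is
# involved, only the raw OIDC access_token as a Bearer token.
access_token = 'abc.xyz'  # placeholder; obtained from the OIDC provider
headers = {'Authorization': 'Bearer {}'.format(access_token)}
# e.g. GET <login service>/v1/oidc-credentials/<provider> with these headers
```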



### Methods in `taskcluster.Notify`
```python
import asyncio # Only for async
# Create Notify client instance
import taskcluster
import taskcluster.aio

notify = taskcluster.Notify(options)
# Below only for async instances, assume already in coroutine
loop = asyncio.get_event_loop()
session = taskcluster.aio.createSession(loop=loop)
asyncNotify = taskcluster.aio.Notify(options, session=session)
```
The notification service, typically available at `notify.taskcluster.net`,
listens for tasks with associated notifications and handles requests to
send emails and post pulse messages.
#### Ping Server
Respond without doing anything.
This endpoint is used to check that the service is up.


```python
# Sync calls
notify.ping() # -> None
# Async call
await asyncNotify.ping() # -> None
```

#### Send an Email
Send an email to `address`. The content is markdown and will be rendered
to HTML, but both the HTML and raw markdown text will be sent in the
email. If a link is included, it will be rendered as a nice button in the
HTML version of the email.


Required [input schema](v1/email-request.json#)

```python
# Sync calls
notify.email(payload) # -> None
# Async call
await asyncNotify.email(payload) # -> None
```

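A hypothetical email payload might look like the sketch below. The field names (`address`, `subject`, `content`, `link`) are assumptions based on the description above; verify them against `v1/email-request.json#` before use:

```python
# Illustrative payload for notify.email -- field names are assumptions,
# check the email-request schema.
email_payload = {
    'address': 'someone@example.com',
    'subject': 'Nightly build finished',
    'content': 'The **nightly build** completed successfully.',  # markdown
    # Rendered as a button in the HTML version of the email:
    'link': {'text': 'Inspect Task', 'href': 'https://example.com/task'},
}

# notify.email(email_payload)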
2738#### Publish a Pulse Message
2739Publish a message on pulse with the given `routingKey`.
2740
2741
2742Required [input schema](v1/pulse-request.json#)
2743
2744```python
2745# Sync calls
2746notify.pulse(payload) # -> None`
2747# Async call
2748await asyncNotify.pulse(payload) # -> None
2749```
2750
2751#### Post IRC Message
2752Post a message on IRC to a specific channel or user, or a specific user
2753on a specific channel.
2754
2755Success of this API method does not imply the message was successfully
2756posted. This API method merely inserts the IRC message into a queue
2757that will be processed by a background process.
2758This allows us to re-send the message in face of connection issues.
2759
2760However, if the user isn't online the message will be dropped without
2761error. We maybe improve this behavior in the future. For now just keep
2762in mind that IRC is a best-effort service.
2763
2764
2765Required [input schema](v1/irc-request.json#)
2766
2767```python
2768# Sync calls
2769notify.irc(payload) # -> None`
2770# Async call
2771await asyncNotify.irc(payload) # -> None
2772```
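
A sketch of an IRC payload. The assumption here, drawn from `v1/irc-request.json`, is that you supply a `message` plus either a `channel` (e.g. `#taskcluster`) or a `user`:

```python
def post_to_channel(notify, channel, text):
    # Field names assumed from v1/irc-request.json; give `channel` or `user`.
    payload = {'channel': channel, 'message': text}
    notify.irc(payload)
    return payload
```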
2773
2774
2775
2776
2777### Methods in `taskcluster.Pulse`
2778```python
2779import asyncio # Only for async
# Create Pulse client instance
2781import taskcluster
2782import taskcluster.aio
2783
2784pulse = taskcluster.Pulse(options)
2785# Below only for async instances, assume already in coroutine
2786loop = asyncio.get_event_loop()
2787session = taskcluster.aio.createSession(loop=loop)
2788asyncPulse = taskcluster.aio.Pulse(options, session=session)
2789```
The taskcluster-pulse service, typically available at `pulse.taskcluster.net`,
manages pulse credentials for taskcluster users.
2792
2793A service to manage Pulse credentials for anything using
2794Taskcluster credentials. This allows for self-service pulse
2795access and greater control within the Taskcluster project.
2796#### Ping Server
2797Respond without doing anything.
2798This endpoint is used to check that the service is up.
2799
2800
2801```python
2802# Sync calls
pulse.ping() # -> None
2804# Async call
2805await asyncPulse.ping() # -> None
2806```
2807
2808#### List Namespaces
2809List the namespaces managed by this service.
2810
This will list up to 1000 namespaces. If more namespaces are present, a
`continuationToken` will be returned, which can be given in the next
request. For the initial request, do not provide a continuation token.
2814
2815
2816Required [output schema](v1/list-namespaces-response.json#)
2817
2818```python
2819# Sync calls
pulse.listNamespaces() # -> result
2821# Async call
2822await asyncPulse.listNamespaces() # -> result
2823```
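
The continuation-token loop described above can be sketched as follows. This assumes the client accepts extra query-string options via a `query` keyword argument and that the response lists entries under a `namespaces` key (per `v1/list-namespaces-response.json`); verify both against your client version.

```python
def all_namespaces(pulse):
    """Collect every namespace, following continuationTokens until exhausted."""
    namespaces, token = [], None
    while True:
        # Omit the token entirely on the initial request.
        query = {'continuationToken': token} if token else {}
        result = pulse.listNamespaces(query=query)
        namespaces.extend(result.get('namespaces', []))
        token = result.get('continuationToken')
        if not token:
            return namespaces
```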
2824
2825#### Get a namespace
2826Get public information about a single namespace. This is the same information
2827as returned by `listNamespaces`.
2828
2829
2830
2831Takes the following arguments:
2832
2833  * `namespace`
2834
2835Required [output schema](v1/namespace.json#)
2836
2837```python
2838# Sync calls
pulse.namespace(namespace) # -> result
2840pulse.namespace(namespace='value') # -> result
2841# Async call
2842await asyncPulse.namespace(namespace) # -> result
2843await asyncPulse.namespace(namespace='value') # -> result
2844```
2845
2846#### Claim a namespace
2847Claim a namespace, returning a connection string with access to that namespace
2848good for use until the `reclaimAt` time in the response body. The connection
2849string can be used as many times as desired during this period, but must not
2850be used after `reclaimAt`.
2851
Connections made with this connection string may persist beyond `reclaimAt`,
although they should not persist forever.  24 hours is a good maximum, and this
service will terminate connections after 72 hours (although this value is
configurable).
2856
2857The specified `expires` time updates any existing expiration times.  Connections
2858for expired namespaces will be terminated.
2859
2860
2861
2862Takes the following arguments:
2863
2864  * `namespace`
2865
2866Required [input schema](v1/namespace-request.json#)
2867
2868Required [output schema](v1/namespace-response.json#)
2869
2870```python
2871# Sync calls
pulse.claimNamespace(namespace, payload) # -> result
2873pulse.claimNamespace(payload, namespace='value') # -> result
2874# Async call
2875await asyncPulse.claimNamespace(namespace, payload) # -> result
2876await asyncPulse.claimNamespace(payload, namespace='value') # -> result
2877```
2878
2879
2880
2881
2882### Methods in `taskcluster.PurgeCache`
2883```python
2884import asyncio # Only for async
# Create PurgeCache client instance
2886import taskcluster
2887import taskcluster.aio
2888
2889purgeCache = taskcluster.PurgeCache(options)
2890# Below only for async instances, assume already in coroutine
2891loop = asyncio.get_event_loop()
2892session = taskcluster.aio.createSession(loop=loop)
2893asyncPurgeCache = taskcluster.aio.PurgeCache(options, session=session)
2894```
2895The purge-cache service is responsible for publishing a pulse
2896message for workers, so they can purge cache upon request.
2897
2898This document describes the API end-point for publishing the pulse
2899message. This is mainly intended to be used by tools.
2900#### Ping Server
2901Respond without doing anything.
2902This endpoint is used to check that the service is up.
2903
2904
2905```python
2906# Sync calls
purgeCache.ping() # -> None
2908# Async call
2909await asyncPurgeCache.ping() # -> None
2910```
2911
2912#### Purge Worker Cache
2913Publish a purge-cache message to purge caches named `cacheName` with
2914`provisionerId` and `workerType` in the routing-key. Workers should
2915be listening for this message and purge caches when they see it.
2916
2917
2918
2919Takes the following arguments:
2920
2921  * `provisionerId`
2922  * `workerType`
2923
2924Required [input schema](v1/purge-cache-request.json#)
2925
2926```python
2927# Sync calls
purgeCache.purgeCache(provisionerId, workerType, payload) # -> None
2929purgeCache.purgeCache(payload, provisionerId='value', workerType='value') # -> None
2930# Async call
2931await asyncPurgeCache.purgeCache(provisionerId, workerType, payload) # -> None
2932await asyncPurgeCache.purgeCache(payload, provisionerId='value', workerType='value') # -> None
2933```
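
A minimal sketch of issuing a purge request. The `cacheName` payload field is an assumption drawn from `v1/purge-cache-request.json`:

```python
def purge(purge_cache, provisioner_id, worker_type, cache_name):
    """Ask workers of the given provisionerId/workerType to drop a named cache."""
    # Payload field assumed from v1/purge-cache-request.json.
    payload = {'cacheName': cache_name}
    purge_cache.purgeCache(payload, provisionerId=provisioner_id,
                           workerType=worker_type)
    return payload
```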
2934
2935#### All Open Purge Requests
This is useful mostly for administrators to view
the set of open purge requests. It should not
be used by workers; they should use the `purgeRequests`
endpoint that is specific to their `workerType` and
`provisionerId`.
2941
2942
2943Required [output schema](v1/all-purge-cache-request-list.json#)
2944
2945```python
2946# Sync calls
purgeCache.allPurgeRequests() # -> result
2948# Async call
2949await asyncPurgeCache.allPurgeRequests() # -> result
2950```
2951
2952#### Open Purge Requests for a provisionerId/workerType pair
2953List of caches that need to be purged if they are from before
2954a certain time. This is safe to be used in automation from
2955workers.
2956
2957
2958
2959Takes the following arguments:
2960
2961  * `provisionerId`
2962  * `workerType`
2963
2964Required [output schema](v1/purge-cache-request-list.json#)
2965
2966```python
2967# Sync calls
purgeCache.purgeRequests(provisionerId, workerType) # -> result
2969purgeCache.purgeRequests(provisionerId='value', workerType='value') # -> result
2970# Async call
2971await asyncPurgeCache.purgeRequests(provisionerId, workerType) # -> result
2972await asyncPurgeCache.purgeRequests(provisionerId='value', workerType='value') # -> result
2973```
2974
2975
2976
2977
2978### Exchanges in `taskcluster.PurgeCacheEvents`
2979```python
# Create PurgeCacheEvents client instance
2981import taskcluster
2982purgeCacheEvents = taskcluster.PurgeCacheEvents(options)
2983```
2984The purge-cache service, typically available at
2985`purge-cache.taskcluster.net`, is responsible for publishing a pulse
2986message for workers, so they can purge cache upon request.
2987
2988This document describes the exchange offered for workers by the
2989cache-purge service.
2990#### Purge Cache Messages
2991 * `purgeCacheEvents.purgeCache(routingKeyPattern) -> routingKey`
   * `routingKeyKind` is required; it is always the constant `primary`, the identifier for the formalized routing-key kind.
   * `provisionerId` is required; the `provisionerId` under which to purge the cache.
   * `workerType` is required; the `workerType` for which to purge the cache.
2995
2996
2997
2998
2999### Methods in `taskcluster.Queue`
3000```python
3001import asyncio # Only for async
# Create Queue client instance
3003import taskcluster
3004import taskcluster.aio
3005
3006queue = taskcluster.Queue(options)
3007# Below only for async instances, assume already in coroutine
3008loop = asyncio.get_event_loop()
3009session = taskcluster.aio.createSession(loop=loop)
3010asyncQueue = taskcluster.aio.Queue(options, session=session)
3011```
The queue, typically available at `queue.taskcluster.net`, is responsible
for accepting tasks and tracking their state as they are executed by
workers, in order to ensure they are eventually resolved.
3015
This document describes the API end-points offered by the queue. These
end-points target the following audiences:
 * Schedulers, who create tasks to be executed,
 * Workers, who execute tasks, and
 * Tools that want to inspect the state of a task.
3021#### Ping Server
3022Respond without doing anything.
3023This endpoint is used to check that the service is up.
3024
3025
3026```python
3027# Sync calls
queue.ping() # -> None
3029# Async call
3030await asyncQueue.ping() # -> None
3031```
3032
3033#### Get Task Definition
This end-point will return the task definition. Notice that the task
definition may have been modified by the queue: if an optional property is
not specified, the queue may provide a default value.
3037
3038
3039
3040Takes the following arguments:
3041
3042  * `taskId`
3043
3044Required [output schema](v1/task.json#)
3045
3046```python
3047# Sync calls
queue.task(taskId) # -> result
3049queue.task(taskId='value') # -> result
3050# Async call
3051await asyncQueue.task(taskId) # -> result
3052await asyncQueue.task(taskId='value') # -> result
3053```
3054
3055#### Get task status
Get the task status structure for `taskId`.
3057
3058
3059
3060Takes the following arguments:
3061
3062  * `taskId`
3063
3064Required [output schema](v1/task-status-response.json#)
3065
3066```python
3067# Sync calls
queue.status(taskId) # -> result
3069queue.status(taskId='value') # -> result
3070# Async call
3071await asyncQueue.status(taskId) # -> result
3072await asyncQueue.status(taskId='value') # -> result
3073```
3074
3075#### List Task Group
3076List tasks sharing the same `taskGroupId`.
3077
3078As a task-group may contain an unbounded number of tasks, this end-point
3079may return a `continuationToken`. To continue listing tasks you must call
3080the `listTaskGroup` again with the `continuationToken` as the
3081query-string option `continuationToken`.
3082
3083By default this end-point will try to return up to 1000 members in one
3084request. But it **may return less**, even if more tasks are available.
3085It may also return a `continuationToken` even though there are no more
3086results. However, you can only be sure to have seen all results if you
3087keep calling `listTaskGroup` with the last `continuationToken` until you
3088get a result without a `continuationToken`.
3089
3090If you are not interested in listing all the members at once, you may
3091use the query-string option `limit` to return fewer.
3092
3093
3094
3095Takes the following arguments:
3096
3097  * `taskGroupId`
3098
3099Required [output schema](v1/list-task-group-response.json#)
3100
3101```python
3102# Sync calls
queue.listTaskGroup(taskGroupId) # -> result
3104queue.listTaskGroup(taskGroupId='value') # -> result
3105# Async call
3106await asyncQueue.listTaskGroup(taskGroupId) # -> result
3107await asyncQueue.listTaskGroup(taskGroupId='value') # -> result
3108```
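
The "keep calling until there is no `continuationToken`" rule above can be sketched as follows. This assumes the client forwards extra query-string options via a `query` keyword argument; verify that against your client version.

```python
def all_group_members(queue, task_group_id):
    """List every task in a task group, following continuationTokens."""
    tasks, token = [], None
    while True:
        # Omit the token entirely on the initial request.
        query = {'continuationToken': token} if token else {}
        result = queue.listTaskGroup(task_group_id, query=query)
        tasks.extend(result.get('tasks', []))
        token = result.get('continuationToken')
        if not token:
            return tasks
```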
3109
3110#### List Dependent Tasks
3111List tasks that depend on the given `taskId`.
3112
As many tasks from different task-groups may depend on a single task,
this end-point may return a `continuationToken`. To continue listing
tasks you must call `listDependentTasks` again with the
`continuationToken` as the query-string option `continuationToken`.
3117
3118By default this end-point will try to return up to 1000 tasks in one
3119request. But it **may return less**, even if more tasks are available.
3120It may also return a `continuationToken` even though there are no more
3121results. However, you can only be sure to have seen all results if you
3122keep calling `listDependentTasks` with the last `continuationToken` until
3123you get a result without a `continuationToken`.
3124
3125If you are not interested in listing all the tasks at once, you may
3126use the query-string option `limit` to return fewer.
3127
3128
3129
3130Takes the following arguments:
3131
3132  * `taskId`
3133
3134Required [output schema](v1/list-dependent-tasks-response.json#)
3135
3136```python
3137# Sync calls
queue.listDependentTasks(taskId) # -> result
3139queue.listDependentTasks(taskId='value') # -> result
3140# Async call
3141await asyncQueue.listDependentTasks(taskId) # -> result
3142await asyncQueue.listDependentTasks(taskId='value') # -> result
3143```
3144
3145#### Create New Task
Create a new task. This is an **idempotent** operation, so repeat it if
you get an internal server error or the network connection is dropped.
3148
3149**Task `deadline`**: the deadline property can be no more than 5 days
3150into the future. This is to limit the amount of pending tasks not being
3151taken care of. Ideally, you should use a much shorter deadline.
3152
**Task expiration**: the `expires` property must be greater than the
task `deadline`. If not provided it will default to `deadline` + one
year. Notice that artifacts created by the task must expire before the task does.
3156
3157**Task specific routing-keys**: using the `task.routes` property you may
3158define task specific routing-keys. If a task has a task specific
3159routing-key: `<route>`, then when the AMQP message about the task is
3160published, the message will be CC'ed with the routing-key:
3161`route.<route>`. This is useful if you want another component to listen
3162for completed tasks you have posted.  The caller must have scope
3163`queue:route:<route>` for each route.
3164
3165**Dependencies**: any tasks referenced in `task.dependencies` must have
3166already been created at the time of this call.
3167
3168**Scopes**: Note that the scopes required to complete this API call depend
3169on the content of the `scopes`, `routes`, `schedulerId`, `priority`,
3170`provisionerId`, and `workerType` properties of the task definition.
3171
3172**Legacy Scopes**: The `queue:create-task:..` scope without a priority and
3173the `queue:define-task:..` and `queue:task-group-id:..` scopes are considered
3174legacy and should not be used. Note that the new, non-legacy scopes require
3175a `queue:scheduler-id:..` scope as well as scopes for the proper priority.
3176
3177
3178
3179Takes the following arguments:
3180
3181  * `taskId`
3182
3183Required [input schema](v1/create-task-request.json#)
3184
3185Required [output schema](v1/task-status-response.json#)
3186
3187```python
3188# Sync calls
queue.createTask(taskId, payload) # -> result
3190queue.createTask(payload, taskId='value') # -> result
3191# Async call
3192await asyncQueue.createTask(taskId, payload) # -> result
3193await asyncQueue.createTask(payload, taskId='value') # -> result
3194```
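
A hedged sketch of building a minimal task definition. Field names are assumptions drawn from `v1/create-task-request.json`; in practice you would use `taskcluster.slugId()` and the library's date helpers, which the stand-ins below imitate with the standard library.

```python
import base64
import datetime
import uuid

def slug_id():
    # URL-safe base64 of a random UUID with padding stripped: the same
    # 22-character shape taskcluster.slugId() produces.
    return base64.urlsafe_b64encode(uuid.uuid4().bytes).rstrip(b'=').decode()

def iso(dt):
    # ISO 8601 timestamp with milliseconds, as the queue expects.
    return dt.strftime('%Y-%m-%dT%H:%M:%S.%fZ')

def create_minimal_task(queue, provisioner_id, worker_type):
    """Create a task with an empty payload; returns (taskId, status response)."""
    now = datetime.datetime.utcnow()
    task_id = slug_id()
    task = {
        'provisionerId': provisioner_id,
        'workerType': worker_type,
        'created': iso(now),
        # Keep the deadline short; it may be at most 5 days out.
        'deadline': iso(now + datetime.timedelta(hours=2)),
        'payload': {},  # worker-specific payload
        'metadata': {
            'name': 'example-task',
            'description': 'A minimal example task',
            'owner': 'user@example.com',
            'source': 'https://example.com/example-task',
        },
    }
    return task_id, queue.createTask(task_id, task)
```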
3195
3196#### Define Task
**Deprecated**. This is the same as `createTask` with a **self-dependency**.
It is only present for legacy reasons.
3199
3200
3201
3202Takes the following arguments:
3203
3204  * `taskId`
3205
3206Required [input schema](v1/create-task-request.json#)
3207
3208Required [output schema](v1/task-status-response.json#)
3209
3210```python
3211# Sync calls
queue.defineTask(taskId, payload) # -> result
3213queue.defineTask(payload, taskId='value') # -> result
3214# Async call
3215await asyncQueue.defineTask(taskId, payload) # -> result
3216await asyncQueue.defineTask(payload, taskId='value') # -> result
3217```
3218
3219#### Schedule Defined Task
3220scheduleTask will schedule a task to be executed, even if it has
3221unresolved dependencies. A task would otherwise only be scheduled if
3222its dependencies were resolved.
3223
3224This is useful if you have defined a task that depends on itself or on
3225some other task that has not been resolved, but you wish the task to be
3226scheduled immediately.
3227
3228This will announce the task as pending and workers will be allowed to
3229claim it and resolve the task.
3230
3231**Note** this operation is **idempotent** and will not fail or complain
3232if called with a `taskId` that is already scheduled, or even resolved.
3233To reschedule a task previously resolved, use `rerunTask`.
3234
3235
3236
3237Takes the following arguments:
3238
3239  * `taskId`
3240
3241Required [output schema](v1/task-status-response.json#)
3242
3243```python
3244# Sync calls
queue.scheduleTask(taskId) # -> result
3246queue.scheduleTask(taskId='value') # -> result
3247# Async call
3248await asyncQueue.scheduleTask(taskId) # -> result
3249await asyncQueue.scheduleTask(taskId='value') # -> result
3250```
3251
3252#### Rerun a Resolved Task
3253This method _reruns_ a previously resolved task, even if it was
3254_completed_. This is useful if your task completes unsuccessfully, and
3255you just want to run it from scratch again. This will also reset the
3256number of `retries` allowed.
3257
Remember that `retries` in the task status counts the number of runs that
the queue has started because the worker stopped responding, for example
because a spot node died.
3261
**Remark**: this operation is idempotent; if you try to rerun a task that
is not either `failed` or `completed`, this operation will just return
the current task status.
3265
3266
3267
3268Takes the following arguments:
3269
3270  * `taskId`
3271
3272Required [output schema](v1/task-status-response.json#)
3273
3274```python
3275# Sync calls
queue.rerunTask(taskId) # -> result
3277queue.rerunTask(taskId='value') # -> result
3278# Async call
3279await asyncQueue.rerunTask(taskId) # -> result
3280await asyncQueue.rerunTask(taskId='value') # -> result
3281```
3282
3283#### Cancel Task
This method will cancel a task that is either `unscheduled`, `pending` or
`running`. It will resolve the current run as `exception` with
`reasonResolved` set to `canceled`. If the task isn't scheduled yet, i.e.
it doesn't have any runs, an initial run will be added and resolved as
described above. Hence, after canceling a task, it cannot be scheduled
with `queue.scheduleTask`, but a new run can be created with
`queue.rerunTask`. These semantics are equivalent to calling
`queue.scheduleTask` immediately followed by `queue.cancelTask`.
3292
**Remark**: this operation is idempotent; if you try to cancel a task that
isn't `unscheduled`, `pending` or `running`, this operation will just
return the current task status.
3296
3297
3298
3299Takes the following arguments:
3300
3301  * `taskId`
3302
3303Required [output schema](v1/task-status-response.json#)
3304
3305```python
3306# Sync calls
queue.cancelTask(taskId) # -> result
3308queue.cancelTask(taskId='value') # -> result
3309# Async call
3310await asyncQueue.cancelTask(taskId) # -> result
3311await asyncQueue.cancelTask(taskId='value') # -> result
3312```
3313
3314#### Claim Work
3315Claim pending task(s) for the given `provisionerId`/`workerType` queue.
3316
If any work is available (even if fewer than the requested number of
tasks), this will return immediately. Otherwise, it will block for tens of
seconds waiting for work.  If no work appears, it will return an empty
list of tasks.  Callers should sleep a short while (to avoid denial of
service in an error condition) and call the endpoint again.  This is a
simple implementation of "long polling".
3323
3324
3325
3326Takes the following arguments:
3327
3328  * `provisionerId`
3329  * `workerType`
3330
3331Required [input schema](v1/claim-work-request.json#)
3332
3333Required [output schema](v1/claim-work-response.json#)
3334
3335```python
3336# Sync calls
queue.claimWork(provisionerId, workerType, payload) # -> result
3338queue.claimWork(payload, provisionerId='value', workerType='value') # -> result
3339# Async call
3340await asyncQueue.claimWork(provisionerId, workerType, payload) # -> result
3341await asyncQueue.claimWork(payload, provisionerId='value', workerType='value') # -> result
3342```
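
The long-polling loop described above can be sketched as follows. The request fields (`workerGroup`, `workerId`, `tasks`) are assumptions drawn from `v1/claim-work-request.json`:

```python
import time

def work_loop(queue, provisioner_id, worker_type, worker_group, worker_id,
              handle, max_polls=None, idle_sleep=5):
    """Repeatedly long-poll claimWork, handing each claimed task to handle()."""
    polls = 0
    while max_polls is None or polls < max_polls:
        polls += 1
        result = queue.claimWork(
            {'workerGroup': worker_group, 'workerId': worker_id, 'tasks': 1},
            provisionerId=provisioner_id, workerType=worker_type)
        if not result.get('tasks'):
            # Sleep briefly so an error condition cannot turn into a
            # denial of service, then long-poll again.
            time.sleep(idle_sleep)
            continue
        for claim in result['tasks']:
            handle(claim)
```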
3343
3344#### Claim Task
Claim a task (never documented).
3346
3347
3348
3349Takes the following arguments:
3350
3351  * `taskId`
3352  * `runId`
3353
3354Required [input schema](v1/task-claim-request.json#)
3355
3356Required [output schema](v1/task-claim-response.json#)
3357
3358```python
3359# Sync calls
queue.claimTask(taskId, runId, payload) # -> result
3361queue.claimTask(payload, taskId='value', runId='value') # -> result
3362# Async call
3363await asyncQueue.claimTask(taskId, runId, payload) # -> result
3364await asyncQueue.claimTask(payload, taskId='value', runId='value') # -> result
3365```
3366
3367#### Reclaim task
3368Refresh the claim for a specific `runId` for given `taskId`. This updates
3369the `takenUntil` property and returns a new set of temporary credentials
3370for performing requests on behalf of the task. These credentials should
3371be used in-place of the credentials returned by `claimWork`.
3372
The `reclaimTask` request serves to:
 * Postpone `takenUntil`, preventing the queue from resolving
   `claim-expired`,
 * Refresh the temporary credentials used for processing the task, and
 * Abort execution if the task/run has been resolved.
3378
If the `takenUntil` timestamp is exceeded, the queue will resolve the run
as _exception_ with reason `claim-expired`, and proceed to retry the
task. This ensures that tasks are retried, even if workers disappear
without warning.
3383
If the task is resolved, this end-point will return `409` reporting
`RequestConflict`. This typically happens if the task has been canceled
or the `task.deadline` has been exceeded. If reclaiming fails, workers
should abort the task and forget about the given `runId`. There is no
need to resolve the run or upload artifacts.
3389
3390
3391
3392Takes the following arguments:
3393
3394  * `taskId`
3395  * `runId`
3396
3397Required [output schema](v1/task-reclaim-response.json#)
3398
3399```python
3400# Sync calls
queue.reclaimTask(taskId, runId) # -> result
3402queue.reclaimTask(taskId='value', runId='value') # -> result
3403# Async call
3404await asyncQueue.reclaimTask(taskId, runId) # -> result
3405await asyncQueue.reclaimTask(taskId='value', runId='value') # -> result
3406```
3407
3408#### Report Run Completed
3409Report a task completed, resolving the run as `completed`.
3410
3411
3412
3413Takes the following arguments:
3414
3415  * `taskId`
3416  * `runId`
3417
3418Required [output schema](v1/task-status-response.json#)
3419
3420```python
3421# Sync calls
queue.reportCompleted(taskId, runId) # -> result
3423queue.reportCompleted(taskId='value', runId='value') # -> result
3424# Async call
3425await asyncQueue.reportCompleted(taskId, runId) # -> result
3426await asyncQueue.reportCompleted(taskId='value', runId='value') # -> result
3427```
3428
3429#### Report Run Failed
Report a run failed, resolving the run as `failed`. Use this to resolve
a run that failed because the task-specific code behaved unexpectedly.
For example, the task exited non-zero, or didn't produce the expected output.

Do not use this if the task couldn't be run because of a malformed
payload, or some other unexpected condition. In these cases we have a task
exception, which should be reported with `reportException`.
3437
3438
3439
3440Takes the following arguments:
3441
3442  * `taskId`
3443  * `runId`
3444
3445Required [output schema](v1/task-status-response.json#)
3446
3447```python
3448# Sync calls
queue.reportFailed(taskId, runId) # -> result
3450queue.reportFailed(taskId='value', runId='value') # -> result
3451# Async call
3452await asyncQueue.reportFailed(taskId, runId) # -> result
3453await asyncQueue.reportFailed(taskId='value', runId='value') # -> result
3454```
3455
3456#### Report Task Exception
Resolve a run as _exception_. Generally, you will want to report tasks as
_failed_ instead of _exception_. You should use `reportException` if:
3459
3460  * The `task.payload` is invalid,
3461  * Non-existent resources are referenced,
3462  * Declared actions cannot be executed due to unavailable resources,
3463  * The worker had to shutdown prematurely,
3464  * The worker experienced an unknown error, or,
3465  * The task explicitly requested a retry.
3466
Do not use this to signal that some user-specified code crashed for any
reason specific to that code. If user-specified code hits a resource that
is temporarily unavailable, the worker should report the task as _failed_.
3470
3471
3472
3473Takes the following arguments:
3474
3475  * `taskId`
3476  * `runId`
3477
3478Required [input schema](v1/task-exception-request.json#)
3479
3480Required [output schema](v1/task-status-response.json#)
3481
3482```python
3483# Sync calls
queue.reportException(taskId, runId, payload) # -> result
3485queue.reportException(payload, taskId='value', runId='value') # -> result
3486# Async call
3487await asyncQueue.reportException(taskId, runId, payload) # -> result
3488await asyncQueue.reportException(payload, taskId='value', runId='value') # -> result
3489```
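
A minimal sketch of the "worker had to shutdown prematurely" case from the list above. The `reason` payload field and the `worker-shutdown` value are assumptions drawn from `v1/task-exception-request.json`; check the schema for the full set of reasons.

```python
def report_shutdown(queue, task_id, run_id):
    """Resolve the run as an exception because the worker is shutting down."""
    # Reason value assumed from v1/task-exception-request.json;
    # 'worker-shutdown' asks the queue to retry the run.
    payload = {'reason': 'worker-shutdown'}
    return queue.reportException(payload, taskId=task_id, runId=run_id)
```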
3490
3491#### Create Artifact
This API end-point creates an artifact for a specific run of a task. This
should **only** be used by a worker currently operating on this task, or
from a process running within the task (i.e. on the worker).
3495
All artifacts must specify when they `expires`; the queue will
automatically take care of deleting artifacts past their
expiration point. This feature makes it feasible to upload large
intermediate artifacts from data processing applications, as the
artifacts can be set to expire a few days later.
3501
We currently support 3 different `storageType`s; each storage type has
slightly different features and, in some cases, different semantics.
We also have 2 deprecated `storageType`s which are only maintained for
backwards compatibility and should not be used in new implementations.
3506
**Blob artifacts** are useful for storing large files.  Currently, these
are all stored in S3, but there are facilities for adding support for other
backends in future.  A call for this type of artifact must provide information
about the file which will be uploaded, including sha256 sums and sizes.
This method will return a list of general-form HTTP requests which are signed
by AWS S3 credentials managed by the Queue.  Once these requests are completed,
the list of `ETag` values returned by the requests must be passed to the
queue's `completeArtifact` method.
3515
**S3 artifacts** (DEPRECATED) are useful for static files which will be
stored on S3. When creating an S3 artifact the queue will return a
pre-signed URL to which you can do a `PUT` request to upload your
artifact. Note that the `PUT` request **must** specify the `content-length`
header and **must** give the `content-type` header the same value as in
the request to `createArtifact`.
3522
**Azure artifacts** (DEPRECATED) are stored in the _Azure Blob Storage_ service,
which, given the consistency guarantees and API interface offered by Azure,
is more suitable for artifacts that will be modified during the execution
of the task. For example, docker-worker has a feature that persists the
task log to Azure Blob Storage every few seconds, creating a somewhat
live log. A request to create an Azure artifact will return a URL
featuring a [Shared-Access-Signature](http://msdn.microsoft.com/en-us/library/azure/dn140256.aspx);
refer to MSDN for further information on how to use these.
**Warning: the azure artifact type is currently an experimental feature subject
to changes and data-drops.**
3533
**Reference artifacts** consist only of meta-data which the queue will
store for you. These artifacts really only have a `url` property, and
when the artifact is requested the client will be redirected to the
provided URL with a `303` (See Other) redirect. Please note that we cannot
delete artifacts you upload to other services; we can only delete the
reference to the artifact when it expires.
3540
**Error artifacts** consist only of meta-data which the queue will
store for you. These artifacts are only meant to indicate that the
worker or the task failed to generate a specific artifact that it
would otherwise have uploaded. For example, docker-worker will upload an
error artifact if the file it was supposed to upload doesn't exist or
turns out to be a directory. Clients requesting an error artifact will
get a `424` (Failed Dependency) response. This is mainly designed to
ensure that dependent tasks can distinguish between artifacts that were
supposed to be generated and artifacts for which the name is misspelled.
3550
**Artifact immutability**: generally speaking, you cannot overwrite an
artifact once it is created. But if you repeat the request with the same
properties, the request will succeed, as the operation is idempotent.
This is useful if you need to refresh a signed URL while uploading.
Do not abuse this to overwrite artifacts created by another entity,
such as worker-host overwriting an artifact created by worker-code!
3557
3558As a special case the `url` property on _reference artifacts_ can be
3559updated. You should only use this to update the `url` property for
3560reference artifacts your process has created.
3561
3562
3563
3564Takes the following arguments:
3565
3566  * `taskId`
3567  * `runId`
3568  * `name`
3569
3570Required [input schema](v1/post-artifact-request.json#)
3571
3572Required [output schema](v1/post-artifact-response.json#)
3573
3574```python
3575# Sync calls
queue.createArtifact(taskId, runId, name, payload) # -> result
3577queue.createArtifact(payload, taskId='value', runId='value', name='value') # -> result
3578# Async call
3579await asyncQueue.createArtifact(taskId, runId, name, payload) # -> result
3580await asyncQueue.createArtifact(payload, taskId='value', runId='value', name='value') # -> result
3581```
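
For illustration, a sketch of creating a _reference_ artifact, the simplest of the storage types described above. Payload field names are assumptions drawn from `v1/post-artifact-request.json`, and `expires` is an ISO 8601 timestamp:

```python
def link_external_log(queue, task_id, run_id, url, expires):
    """Register a 303-redirect artifact pointing at an externally hosted log."""
    # Field names assumed from v1/post-artifact-request.json.
    payload = {
        'storageType': 'reference',
        'url': url,                  # where the artifact actually lives
        'expires': expires,          # e.g. '2049-01-01T00:00:00.000Z'
        'contentType': 'text/plain',
    }
    return queue.createArtifact(payload, taskId=task_id, runId=run_id,
                                name='public/logs/external.log')
```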
3582
3583#### Complete Artifact
This endpoint finalises an upload done through the blob `storageType`.
The queue will ensure that the task/run is still allowing artifacts
to be uploaded.  For single-part S3 blob artifacts, this endpoint
will simply ensure the artifact is present in S3.  For multipart S3
artifacts, the endpoint will perform the commit step of the multipart
upload flow.  As the final step for both multi- and single-part artifacts,
the `present` entity field will be set to `true` to reflect that the
artifact is now present, and a message will be published to pulse.  NOTE: this
endpoint *must* be called for all artifacts of storageType `blob`.
3593
3594
3595
3596Takes the following arguments:
3597
3598  * `taskId`
3599  * `runId`
3600  * `name`
3601
3602Required [input schema](v1/put-artifact-request.json#)
3603
3604```python
3605# Sync calls
queue.completeArtifact(taskId, runId, name, payload) # -> None
3607queue.completeArtifact(payload, taskId='value', runId='value', name='value') # -> None
3608# Async call
3609await asyncQueue.completeArtifact(taskId, runId, name, payload) # -> None
3610await asyncQueue.completeArtifact(payload, taskId='value', runId='value', name='value') # -> None
3611```
3612
3613#### Get Artifact from Run
3614Get artifact by `<name>` from a specific run.
3615
**Public Artifacts**: in order to get an artifact you need the scope
`queue:get-artifact:<name>`, where `<name>` is the name of the artifact.
But if the artifact `name` starts with `public/`, authentication and
authorization are not necessary to fetch the artifact.
3620
**API Clients**: this method will redirect you to the artifact if it is
stored externally. Either way, the response may not be JSON, so API
client users might want to generate a signed URL for this end-point and
use that URL with an HTTP client that can handle responses correctly.
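
With this library, generating such a URL can be sketched using the client's `buildUrl`/`buildSignedUrl` helpers. The exact signatures are assumptions; check the client documentation before relying on them:

```python
def artifact_url(queue, task_id, run_id, name):
    """Return a fetchable URL for an artifact, signing only when required."""
    # Artifacts under public/ need no authentication; everything else
    # requires a Hawk-signed URL carrying queue:get-artifact:<name>.
    if name.startswith('public/'):
        return queue.buildUrl('getArtifact', task_id, run_id, name)
    return queue.buildSignedUrl('getArtifact', task_id, run_id, name)
```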
3625
**Downloading artifacts**
There are some special considerations for HTTP clients which download
artifacts.  This API endpoint is designed to be compatible with an HTTP 1.1
compliant client, but has extra features to ensure the download is valid.
It is strongly recommended that consumers use either taskcluster-lib-artifact (JS),
taskcluster-lib-artifact-go (Go) or the CLI written in Go to interact with
artifacts.
3633
3634In order to download an artifact the following must be done:
3635
36361. Obtain queue url.  Building a signed url with a taskcluster client is
3637recommended
36381. Make a GET request which does not follow redirects
36391. In all cases, if specified, the
3640x-taskcluster-location-{content,transfer}-{sha256,length} values must be
3641validated to be equal to the Content-Length and Sha256 checksum of the
3642final artifact downloaded. as well as any intermediate redirects
36431. If this response is a 500-series error, retry using an exponential
3644backoff.  No more than 5 retries should be attempted
36451. If this response is a 400-series error, treat it appropriately for
3646your context.  This might be an error in responding to this request or
3647an Error storage type body.  This request should not be retried.
36481. If this response is a 200-series response, the response body is the artifact.
3649If the x-taskcluster-location-{content,transfer}-{sha256,length} and
3650x-taskcluster-location-content-encoding are specified, they should match
3651this response body
36521. If the response type is a 300-series redirect, the artifact will be at the
3653location specified by the `Location` header.  There are multiple artifact storage
3654types which use a 300-series redirect.
36551. For all redirects followed, the user must verify that the content-sha256, content-length,
3656transfer-sha256, transfer-length and content-encoding match every further request.  The final
3657artifact must also be validated against the values specified in the original queue response
36581. Caching of requests with an x-taskcluster-artifact-storage-type value of `reference`
3659must not occur
36601. A request which has x-taskcluster-artifact-storage-type value of `blob` and does not
3661have x-taskcluster-location-content-sha256 or x-taskcluster-location-content-length
3662must be treated as an error
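The validation in the steps above can be sketched as a small helper.  This is an illustrative sketch only, not part of the client; real consumers should use taskcluster-lib-artifact.  The header names are taken from this document:

```python
import hashlib

def validate_artifact(body, headers):
    """Validate a downloaded artifact body against the
    x-taskcluster-location-* headers from the queue response."""
    expected_length = headers.get('x-taskcluster-location-content-length')
    if expected_length is not None and len(body) != int(expected_length):
        raise ValueError('artifact content-length mismatch')
    expected_sha256 = headers.get('x-taskcluster-location-content-sha256')
    if expected_sha256 is not None:
        actual = hashlib.sha256(body).hexdigest()
        if actual != expected_sha256.lower():
            raise ValueError('artifact sha256 mismatch')
    # A blob artifact missing these headers must be treated as an error.
    if (headers.get('x-taskcluster-artifact-storage-type') == 'blob'
            and (expected_sha256 is None or expected_length is None)):
        raise ValueError('blob artifact missing validation headers')
    return True
```

A real client would run this check against the original queue response headers and again against every redirect followed.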

**Headers**
The following important headers are set on the response to this method:

* location: the URL of the artifact if a redirect is to be performed
* x-taskcluster-artifact-storage-type: the storage type.  Example: blob, s3, error

The following important headers are set on responses to this method for Blob artifacts:

* x-taskcluster-location-content-sha256: the SHA256 of the artifact
*after* any content-encoding is undone.  The SHA256 is hex encoded (e.g. [0-9A-Fa-f]{64})
* x-taskcluster-location-content-length: the number of bytes *after* any content-encoding
is undone
* x-taskcluster-location-transfer-sha256: the SHA256 of the artifact
*before* any content-encoding is undone.  This is the SHA256 of what is sent over
the wire.  The SHA256 is hex encoded (e.g. [0-9A-Fa-f]{64})
* x-taskcluster-location-transfer-length: the number of bytes *before* any content-encoding
is undone
* x-taskcluster-location-content-encoding: the content-encoding used.  It will either
be `gzip` or `identity` right now.  This is hardcoded to a value set when the artifact
was created and no content-negotiation occurs
* x-taskcluster-location-content-type: the content-type of the artifact

**Caching**, artifacts may be cached in data centers closer to the
workers in order to reduce bandwidth costs. This can lead to longer
response times. Caching can be skipped by setting the header
`x-taskcluster-skip-cache: true`; this should only be used for resources
where request volume is known to be low, and caching is not useful.
(This feature may be disabled in the future; use it sparingly!)



Takes the following arguments:

  * `taskId`
  * `runId`
  * `name`

```python
# Sync calls
queue.getArtifact(taskId, runId, name) # -> None
queue.getArtifact(taskId='value', runId='value', name='value') # -> None
# Async call
await asyncQueue.getArtifact(taskId, runId, name) # -> None
await asyncQueue.getArtifact(taskId='value', runId='value', name='value') # -> None
```

#### Get Artifact from Latest Run
Get artifact by `<name>` from the last run of a task.

**Public Artifacts**, in order to get an artifact you need the scope
`queue:get-artifact:<name>`, where `<name>` is the name of the artifact.
But if the artifact `name` starts with `public/`, authentication and
authorization are not necessary to fetch the artifact.

**API Clients**, this method will redirect you to the artifact if it is
stored externally. Either way, the response may not be JSON. So API
client users might want to generate a signed URL for this end-point and
use that URL with a normal HTTP client.

**Remark**, this end-point is slightly slower than
`queue.getArtifact`, so consider using that if you already know the `runId` of
the latest run. Otherwise, just use the most convenient API end-point.



Takes the following arguments:

  * `taskId`
  * `name`

```python
# Sync calls
queue.getLatestArtifact(taskId, name) # -> None
queue.getLatestArtifact(taskId='value', name='value') # -> None
# Async call
await asyncQueue.getLatestArtifact(taskId, name) # -> None
await asyncQueue.getLatestArtifact(taskId='value', name='value') # -> None
```

#### Get Artifacts from Run
Returns a list of artifacts and associated meta-data for a given run.

As a task may have many artifacts, paging may be necessary. If this
end-point returns a `continuationToken`, you should call the end-point
again with the `continuationToken` as the query-string option
`continuationToken`.

By default this end-point will list up to 1000 artifacts in a single page;
you may limit this with the query-string parameter `limit`.



Takes the following arguments:

  * `taskId`
  * `runId`

Required [output schema](v1/list-artifacts-response.json#)

```python
# Sync calls
queue.listArtifacts(taskId, runId) # -> result
queue.listArtifacts(taskId='value', runId='value') # -> result
# Async call
await asyncQueue.listArtifacts(taskId, runId) # -> result
await asyncQueue.listArtifacts(taskId='value', runId='value') # -> result
```
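The `continuationToken` loop can be sketched as follows; `queue` is assumed to be an already-configured `taskcluster.Queue` instance, and query-string options are passed via the client's `query` keyword:

```python
def list_all_artifacts(queue, taskId, runId):
    """Page through queue.listArtifacts until no continuationToken remains."""
    artifacts = []
    query = {}
    while True:
        result = queue.listArtifacts(taskId, runId, query=query)
        artifacts.extend(result['artifacts'])
        token = result.get('continuationToken')
        if token is None:
            return artifacts
        # Pass the token back as a query-string option for the next page.
        query = {'continuationToken': token}
```

The same loop works for `listLatestArtifacts`, `listProvisioners`, and the other paged end-points below.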

#### Get Artifacts from Latest Run
Returns a list of artifacts and associated meta-data for the latest run
from the given task.

As a task may have many artifacts, paging may be necessary. If this
end-point returns a `continuationToken`, you should call the end-point
again with the `continuationToken` as the query-string option
`continuationToken`.

By default this end-point will list up to 1000 artifacts in a single page;
you may limit this with the query-string parameter `limit`.



Takes the following arguments:

  * `taskId`

Required [output schema](v1/list-artifacts-response.json#)

```python
# Sync calls
queue.listLatestArtifacts(taskId) # -> result
queue.listLatestArtifacts(taskId='value') # -> result
# Async call
await asyncQueue.listLatestArtifacts(taskId) # -> result
await asyncQueue.listLatestArtifacts(taskId='value') # -> result
```

#### Get a list of all active provisioners
Get all active provisioners.

The term "provisioner" is taken broadly to mean anything with a provisionerId.
This does not necessarily mean there is an associated service performing any
provisioning activity.

The response is paged. If this end-point returns a `continuationToken`, you
should call the end-point again with the `continuationToken` as a query-string
option. By default this end-point will list up to 1000 provisioners in a single
page. You may limit this with the query-string parameter `limit`.


Required [output schema](v1/list-provisioners-response.json#)

```python
# Sync calls
queue.listProvisioners() # -> result
# Async call
await asyncQueue.listProvisioners() # -> result
```

#### Get an active provisioner
Get an active provisioner.

The term "provisioner" is taken broadly to mean anything with a provisionerId.
This does not necessarily mean there is an associated service performing any
provisioning activity.



Takes the following arguments:

  * `provisionerId`

Required [output schema](v1/provisioner-response.json#)

```python
# Sync calls
queue.getProvisioner(provisionerId) # -> result
queue.getProvisioner(provisionerId='value') # -> result
# Async call
await asyncQueue.getProvisioner(provisionerId) # -> result
await asyncQueue.getProvisioner(provisionerId='value') # -> result
```

#### Update a provisioner
Declare a provisioner, supplying some details about it.

`declareProvisioner` allows updating one or more properties of a provisioner as long as the required scopes are
possessed. For example, a request to update the `aws-provisioner-v1`
provisioner with a body `{description: 'This provisioner is great'}` would require you to have the scope
`queue:declare-provisioner:aws-provisioner-v1#description`.

The term "provisioner" is taken broadly to mean anything with a provisionerId.
This does not necessarily mean there is an associated service performing any
provisioning activity.



Takes the following arguments:

  * `provisionerId`

Required [input schema](v1/update-provisioner-request.json#)

Required [output schema](v1/provisioner-response.json#)

```python
# Sync calls
queue.declareProvisioner(provisionerId, payload) # -> result
queue.declareProvisioner(payload, provisionerId='value') # -> result
# Async call
await asyncQueue.declareProvisioner(provisionerId, payload) # -> result
await asyncQueue.declareProvisioner(payload, provisionerId='value') # -> result
```

#### Get Number of Pending Tasks
Get an approximate number of pending tasks for the given `provisionerId`
and `workerType`.

The underlying Azure Storage Queues only promise to give us an estimate.
Furthermore, we cache the result in memory for 20 seconds. So consumers
should by no means expect this to be an accurate number.
It is, however, a solid estimate of the number of pending tasks.



Takes the following arguments:

  * `provisionerId`
  * `workerType`

Required [output schema](v1/pending-tasks-response.json#)

```python
# Sync calls
queue.pendingTasks(provisionerId, workerType) # -> result
queue.pendingTasks(provisionerId='value', workerType='value') # -> result
# Async call
await asyncQueue.pendingTasks(provisionerId, workerType) # -> result
await asyncQueue.pendingTasks(provisionerId='value', workerType='value') # -> result
```

#### Get a list of all active worker-types
Get all active worker-types for the given provisioner.

The response is paged. If this end-point returns a `continuationToken`, you
should call the end-point again with the `continuationToken` as a query-string
option. By default this end-point will list up to 1000 worker-types in a single
page. You may limit this with the query-string parameter `limit`.



Takes the following arguments:

  * `provisionerId`

Required [output schema](v1/list-workertypes-response.json#)

```python
# Sync calls
queue.listWorkerTypes(provisionerId) # -> result
queue.listWorkerTypes(provisionerId='value') # -> result
# Async call
await asyncQueue.listWorkerTypes(provisionerId) # -> result
await asyncQueue.listWorkerTypes(provisionerId='value') # -> result
```

#### Get a worker-type
Get a worker-type from a provisioner.



Takes the following arguments:

  * `provisionerId`
  * `workerType`

Required [output schema](v1/workertype-response.json#)

```python
# Sync calls
queue.getWorkerType(provisionerId, workerType) # -> result
queue.getWorkerType(provisionerId='value', workerType='value') # -> result
# Async call
await asyncQueue.getWorkerType(provisionerId, workerType) # -> result
await asyncQueue.getWorkerType(provisionerId='value', workerType='value') # -> result
```

#### Update a worker-type
Declare a workerType, supplying some details about it.

`declareWorkerType` allows updating one or more properties of a worker-type as long as the required scopes are
possessed. For example, a request to update the `gecko-b-1-w2008` worker-type within the `aws-provisioner-v1`
provisioner with a body `{description: 'This worker type is great'}` would require you to have the scope
`queue:declare-worker-type:aws-provisioner-v1/gecko-b-1-w2008#description`.



Takes the following arguments:

  * `provisionerId`
  * `workerType`

Required [input schema](v1/update-workertype-request.json#)

Required [output schema](v1/workertype-response.json#)

```python
# Sync calls
queue.declareWorkerType(provisionerId, workerType, payload) # -> result
queue.declareWorkerType(payload, provisionerId='value', workerType='value') # -> result
# Async call
await asyncQueue.declareWorkerType(provisionerId, workerType, payload) # -> result
await asyncQueue.declareWorkerType(payload, provisionerId='value', workerType='value') # -> result
```

#### Get a list of all active workers of a workerType
Get a list of all active workers of a workerType.

`listWorkers` allows the response to be filtered by quarantined and non-quarantined workers.
To filter the query, call the end-point with `quarantined` as a query-string option with a
true or false value.

The response is paged. If this end-point returns a `continuationToken`, you
should call the end-point again with the `continuationToken` as a query-string
option. By default this end-point will list up to 1000 workers in a single
page. You may limit this with the query-string parameter `limit`.



Takes the following arguments:

  * `provisionerId`
  * `workerType`

Required [output schema](v1/list-workers-response.json#)

```python
# Sync calls
queue.listWorkers(provisionerId, workerType) # -> result
queue.listWorkers(provisionerId='value', workerType='value') # -> result
# Async call
await asyncQueue.listWorkers(provisionerId, workerType) # -> result
await asyncQueue.listWorkers(provisionerId='value', workerType='value') # -> result
```
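For example, the `quarantined` filter can be applied like this; `queue` is assumed to be a configured `taskcluster.Queue` instance, and query-string options are passed via the client's `query` keyword:

```python
def quarantined_worker_ids(queue, provisionerId, workerType):
    """Return the workerIds of currently quarantined workers
    (first page only; combine with a continuationToken loop for more)."""
    result = queue.listWorkers(provisionerId, workerType,
                               query={'quarantined': 'true'})
    return [worker['workerId'] for worker in result['workers']]
```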

#### Get a worker
Get a worker from a worker-type.



Takes the following arguments:

  * `provisionerId`
  * `workerType`
  * `workerGroup`
  * `workerId`

Required [output schema](v1/worker-response.json#)

```python
# Sync calls
queue.getWorker(provisionerId, workerType, workerGroup, workerId) # -> result
queue.getWorker(provisionerId='value', workerType='value', workerGroup='value', workerId='value') # -> result
# Async call
await asyncQueue.getWorker(provisionerId, workerType, workerGroup, workerId) # -> result
await asyncQueue.getWorker(provisionerId='value', workerType='value', workerGroup='value', workerId='value') # -> result
```

#### Quarantine a worker
Quarantine a worker.



Takes the following arguments:

  * `provisionerId`
  * `workerType`
  * `workerGroup`
  * `workerId`

Required [input schema](v1/quarantine-worker-request.json#)

Required [output schema](v1/worker-response.json#)

```python
# Sync calls
queue.quarantineWorker(provisionerId, workerType, workerGroup, workerId, payload) # -> result
queue.quarantineWorker(payload, provisionerId='value', workerType='value', workerGroup='value', workerId='value') # -> result
# Async call
await asyncQueue.quarantineWorker(provisionerId, workerType, workerGroup, workerId, payload) # -> result
await asyncQueue.quarantineWorker(payload, provisionerId='value', workerType='value', workerGroup='value', workerId='value') # -> result
```

#### Declare a worker
Declare a worker, supplying some details about it.

`declareWorker` allows updating one or more properties of a worker as long as the required scopes are
possessed.



Takes the following arguments:

  * `provisionerId`
  * `workerType`
  * `workerGroup`
  * `workerId`

Required [input schema](v1/update-worker-request.json#)

Required [output schema](v1/worker-response.json#)

```python
# Sync calls
queue.declareWorker(provisionerId, workerType, workerGroup, workerId, payload) # -> result
queue.declareWorker(payload, provisionerId='value', workerType='value', workerGroup='value', workerId='value') # -> result
# Async call
await asyncQueue.declareWorker(provisionerId, workerType, workerGroup, workerId, payload) # -> result
await asyncQueue.declareWorker(payload, provisionerId='value', workerType='value', workerGroup='value', workerId='value') # -> result
```




### Exchanges in `taskcluster.QueueEvents`
```python
# Create QueueEvents client instance
import taskcluster
queueEvents = taskcluster.QueueEvents(options)
```
The queue, typically available at `queue.taskcluster.net`, is responsible
for accepting tasks and tracking their state as they are executed by
workers, in order to ensure they are eventually resolved.

This document describes the AMQP exchanges offered by the queue, which allow
third-party listeners to monitor tasks as they progress to resolution.
These exchanges target the following audiences:
 * Schedulers, who take action after tasks are completed,
 * Workers, who want to listen for new or canceled tasks (optional),
 * Tools that want to update their view as tasks progress.

You'll notice that all the exchanges in this document share the same
routing key pattern. This makes it very easy to bind to all messages
about a certain kind of task.

**Task specific routes**, a task can define task specific routes using
the `task.routes` property. See the task creation documentation for details
on the permissions required to provide task specific routes. If a task has
the entry `'notify.by-email'` as a task specific route defined in
`task.routes`, all messages about this task will be CC'ed with the
routing-key `'route.notify.by-email'`.

These routes will always be prefixed with `route.`, so they cannot interfere
with the _primary_ routing key as documented here. Notice that the
_primary_ routing key is always prefixed with `primary.`. This is ensured
in the routing key reference, so API clients will do this automatically.

Please note that, because of the way RabbitMQ works, the message will only
arrive in your queue once, even though you may have bound to the exchange
with multiple routing key patterns that match more of the CC'ed routing
keys.

**Delivery guarantees**, most operations on the queue are idempotent,
which means that if repeated with the same arguments then the requests
will ensure completion of the operation and return the same response.
This is useful if the server crashes or the TCP connection breaks, but
when re-executing an idempotent operation, the queue will also resend
any related AMQP messages. Hence, messages may be repeated.

This shouldn't be much of a problem, as the best you can achieve using
confirm messages with AMQP is at-least-once delivery semantics. Hence,
this only prevents you from obtaining at-most-once delivery semantics.

**Remark**, some messages generated by timeouts may be dropped if the
server crashes at the wrong time. Ideally, we'll address this in the
future. For now we suggest you ignore this corner case, and notify us
if this corner case is of concern to you.
#### Task Defined Messages
 * `queueEvents.taskDefined(routingKeyPattern) -> routingKey`
   * `routingKeyKind` is constant `primary` and is required  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
   * `taskId` is required  Description: `taskId` for the task this message concerns
   * `runId` Description: `runId` of the latest run for the task, or `_` if no run exists for the task.
   * `workerGroup` Description: `workerGroup` of the latest run for the task, or `_` if no run exists for the task.
   * `workerId` Description: `workerId` of the latest run for the task, or `_` if no run exists for the task.
   * `provisionerId` is required  Description: `provisionerId` this task is targeted at.
   * `workerType` is required  Description: `workerType` this task must run on.
   * `schedulerId` is required  Description: `schedulerId` this task was created by.
   * `taskGroupId` is required  Description: `taskGroupId` this task was created in.
   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as is done automatically by our tooling if not specified.
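For illustration, the primary routing key is a dot-separated string of the fields above, in order, prefixed with `primary.`, with unspecified fields matched by `*` and the reserved tail matched by `#`. A hand-rolled sketch follows; in practice the client's exchange methods build this pattern for you:

```python
def task_routing_key_pattern(**fields):
    """Build a routing-key pattern for the task exchanges, matching
    any field not explicitly given with the AMQP wildcard '*'."""
    order = ['taskId', 'runId', 'workerGroup', 'workerId',
             'provisionerId', 'workerType', 'schedulerId', 'taskGroupId']
    parts = ['primary'] + [str(fields.get(name, '*')) for name in order] + ['#']
    return '.'.join(parts)
```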

#### Task Pending Messages
 * `queueEvents.taskPending(routingKeyPattern) -> routingKey`
   * `routingKeyKind` is constant `primary` and is required  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
   * `taskId` is required  Description: `taskId` for the task this message concerns
   * `runId` is required  Description: `runId` of the latest run for the task, or `_` if no run exists for the task.
   * `workerGroup` Description: `workerGroup` of the latest run for the task, or `_` if no run exists for the task.
   * `workerId` Description: `workerId` of the latest run for the task, or `_` if no run exists for the task.
   * `provisionerId` is required  Description: `provisionerId` this task is targeted at.
   * `workerType` is required  Description: `workerType` this task must run on.
   * `schedulerId` is required  Description: `schedulerId` this task was created by.
   * `taskGroupId` is required  Description: `taskGroupId` this task was created in.
   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as is done automatically by our tooling if not specified.

#### Task Running Messages
 * `queueEvents.taskRunning(routingKeyPattern) -> routingKey`
   * `routingKeyKind` is constant `primary` and is required  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
   * `taskId` is required  Description: `taskId` for the task this message concerns
   * `runId` is required  Description: `runId` of the latest run for the task, or `_` if no run exists for the task.
   * `workerGroup` is required  Description: `workerGroup` of the latest run for the task, or `_` if no run exists for the task.
   * `workerId` is required  Description: `workerId` of the latest run for the task, or `_` if no run exists for the task.
   * `provisionerId` is required  Description: `provisionerId` this task is targeted at.
   * `workerType` is required  Description: `workerType` this task must run on.
   * `schedulerId` is required  Description: `schedulerId` this task was created by.
   * `taskGroupId` is required  Description: `taskGroupId` this task was created in.
   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as is done automatically by our tooling if not specified.

#### Artifact Creation Messages
 * `queueEvents.artifactCreated(routingKeyPattern) -> routingKey`
   * `routingKeyKind` is constant `primary` and is required  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
   * `taskId` is required  Description: `taskId` for the task this message concerns
   * `runId` is required  Description: `runId` of the latest run for the task, or `_` if no run exists for the task.
   * `workerGroup` is required  Description: `workerGroup` of the latest run for the task, or `_` if no run exists for the task.
   * `workerId` is required  Description: `workerId` of the latest run for the task, or `_` if no run exists for the task.
   * `provisionerId` is required  Description: `provisionerId` this task is targeted at.
   * `workerType` is required  Description: `workerType` this task must run on.
   * `schedulerId` is required  Description: `schedulerId` this task was created by.
   * `taskGroupId` is required  Description: `taskGroupId` this task was created in.
   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as is done automatically by our tooling if not specified.

#### Task Completed Messages
 * `queueEvents.taskCompleted(routingKeyPattern) -> routingKey`
   * `routingKeyKind` is constant `primary` and is required  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
   * `taskId` is required  Description: `taskId` for the task this message concerns
   * `runId` is required  Description: `runId` of the latest run for the task, or `_` if no run exists for the task.
   * `workerGroup` is required  Description: `workerGroup` of the latest run for the task, or `_` if no run exists for the task.
   * `workerId` is required  Description: `workerId` of the latest run for the task, or `_` if no run exists for the task.
   * `provisionerId` is required  Description: `provisionerId` this task is targeted at.
   * `workerType` is required  Description: `workerType` this task must run on.
   * `schedulerId` is required  Description: `schedulerId` this task was created by.
   * `taskGroupId` is required  Description: `taskGroupId` this task was created in.
   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as is done automatically by our tooling if not specified.

#### Task Failed Messages
 * `queueEvents.taskFailed(routingKeyPattern) -> routingKey`
   * `routingKeyKind` is constant `primary` and is required  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
   * `taskId` is required  Description: `taskId` for the task this message concerns
   * `runId` Description: `runId` of the latest run for the task, or `_` if no run exists for the task.
   * `workerGroup` Description: `workerGroup` of the latest run for the task, or `_` if no run exists for the task.
   * `workerId` Description: `workerId` of the latest run for the task, or `_` if no run exists for the task.
   * `provisionerId` is required  Description: `provisionerId` this task is targeted at.
   * `workerType` is required  Description: `workerType` this task must run on.
   * `schedulerId` is required  Description: `schedulerId` this task was created by.
   * `taskGroupId` is required  Description: `taskGroupId` this task was created in.
   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as is done automatically by our tooling if not specified.

#### Task Exception Messages
 * `queueEvents.taskException(routingKeyPattern) -> routingKey`
   * `routingKeyKind` is constant `primary` and is required  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
   * `taskId` is required  Description: `taskId` for the task this message concerns
   * `runId` Description: `runId` of the latest run for the task, or `_` if no run exists for the task.
   * `workerGroup` Description: `workerGroup` of the latest run for the task, or `_` if no run exists for the task.
   * `workerId` Description: `workerId` of the latest run for the task, or `_` if no run exists for the task.
   * `provisionerId` is required  Description: `provisionerId` this task is targeted at.
   * `workerType` is required  Description: `workerType` this task must run on.
   * `schedulerId` is required  Description: `schedulerId` this task was created by.
   * `taskGroupId` is required  Description: `taskGroupId` this task was created in.
   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as is done automatically by our tooling if not specified.

#### Task Group Resolved Messages
 * `queueEvents.taskGroupResolved(routingKeyPattern) -> routingKey`
   * `routingKeyKind` is constant `primary` and is required  Description: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key.
   * `taskGroupId` is required  Description: `taskGroupId` for the task-group this message concerns
   * `schedulerId` is required  Description: `schedulerId` for the task-group this message concerns
   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as is done automatically by our tooling if not specified.




### Methods in `taskcluster.Secrets`
```python
import asyncio # Only for async
# Create Secrets client instance
import taskcluster
import taskcluster.aio

secrets = taskcluster.Secrets(options)
# Below only for async instances, assume already in coroutine
loop = asyncio.get_event_loop()
session = taskcluster.aio.createSession(loop=loop)
asyncSecrets = taskcluster.aio.Secrets(options, session=session)
```
The secrets service provides a simple key/value store for small bits of secret
data.  Access is limited by scopes, so values can be considered secret from
those who do not have the relevant scopes.

Secrets also have an expiration date, and once a secret has expired it can no
longer be read.  This is useful for short-term secrets such as a temporary
service credential or a one-time signing key.
#### Ping Server
Respond without doing anything.
This endpoint is used to check that the service is up.


```python
# Sync calls
secrets.ping() # -> None
# Async call
await asyncSecrets.ping() # -> None
```

#### Set Secret
Set the secret associated with some key.  If the secret already exists, it is
updated instead.



Takes the following arguments:

  * `name`

Required [input schema](v1/secret.json#)

```python
# Sync calls
secrets.set(name, payload) # -> None
secrets.set(payload, name='value') # -> None
# Async call
await asyncSecrets.set(name, payload) # -> None
await asyncSecrets.set(payload, name='value') # -> None
```
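As a sketch of building the payload, assuming v1/secret.json's `secret` and `expires` fields (check the schema before relying on this shape; the helper name is our own):

```python
import datetime

def make_secret_payload(data, lifetime_hours=24):
    """Build a secrets.set payload: an arbitrary JSON-serializable
    `secret` value plus an ISO 8601 `expires` timestamp."""
    expires = (datetime.datetime.utcnow()
               + datetime.timedelta(hours=lifetime_hours))
    return {'secret': data, 'expires': expires.isoformat() + 'Z'}
```

Usage might look like `secrets.set('project/example/token', make_secret_payload({'token': 'hunter2'}))`, where the secret name shown is hypothetical.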

#### Delete Secret
Delete the secret associated with some key.



Takes the following arguments:

  * `name`

```python
# Sync calls
secrets.remove(name) # -> None
secrets.remove(name='value') # -> None
# Async call
await asyncSecrets.remove(name) # -> None
await asyncSecrets.remove(name='value') # -> None
```

#### Read Secret
Read the secret associated with some key.  If the secret has recently
expired, the response code 410 is returned.  If the caller lacks the
scope necessary to get the secret, the call will fail with a 403 code
regardless of whether the secret exists.



Takes the following arguments:

  * `name`

Required [output schema](v1/secret.json#)

```python
# Sync calls
secrets.get(name) # -> result
secrets.get(name='value') # -> result
# Async call
await asyncSecrets.get(name) # -> result
await asyncSecrets.get(name='value') # -> result
```

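The result follows the same `v1/secret.json` schema as `set`, so the stored value should live under a `secret` key; that field name is an assumption from the schema. A small sketch, with the service call commented out and an example result shape in its place:

```python
def unpack_secret(result):
    """Pull the stored value out of a get() result; the 'secret' field name
    is an assumption from the v1/secret.json schema."""
    return result['secret']

# result = secrets.get('project/my-secret')  # raises on 403/410
result = {'secret': {'api_key': 'hunter2'},  # example result shape
          'expires': '2038-01-01T00:00:00Z'}
value = unpack_secret(result)
```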
#### List Secrets
List the names of all secrets.

By default this endpoint will try to return up to 1000 secret names in one
request. But it **may return fewer**, even if more secrets are available.
It may also return a `continuationToken` even though there are no more
results. However, you can only be sure to have seen all results if you
keep calling `list` with the last `continuationToken` until you
get a result without a `continuationToken`.

If you are not interested in listing all the secrets at once, you may
use the query-string option `limit` to return fewer.


Required [output schema](v1/secret-list.json#)

```python
# Sync calls
secrets.list() # -> result
# Async call
await asyncSecrets.list() # -> result
```

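The continuation-token loop described above can be sketched as a small helper. The `query={'continuationToken': ...}` calling convention and the `secrets` field in the result are assumptions about this client version and the `v1/secret-list.json` schema; adjust as needed.

```python
def list_all_secret_names(secrets_client):
    """Keep calling list() with the last continuationToken until a result
    arrives without one, accumulating every secret name."""
    names = []
    query = {}
    while True:
        outcome = secrets_client.list(query=query)
        names.extend(outcome.get('secrets', []))
        token = outcome.get('continuationToken')
        if not token:
            return names
        query = {'continuationToken': token}
```

The same loop works for the async client by awaiting `list()` inside the `while`.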



### Exchanges in `taskcluster.TreeherderEvents`
```python
# Create TreeherderEvents client instance
import taskcluster
treeherderEvents = taskcluster.TreeherderEvents(options)
```
The taskcluster-treeherder service is responsible for processing
task events published by the Taskcluster Queue and producing job messages
that are consumable by Treeherder.

This exchange provides the job messages to be consumed by any queue
attached to the exchange.  This could be a production Treeherder instance,
a local development environment, or a custom dashboard.
#### Job Messages
 * `treeherderEvents.jobs(routingKeyPattern) -> routingKey`
   * `destination` is required  Description: destination
   * `project` is required  Description: project
   * `reserved` Description: Space reserved for future routing-key entries; you should always match this entry with `#`, as our tooling does automatically if not specified.



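To illustrate how such a routing-key pattern composes, the fields listed above join with dots in the order given, with `#` matching the reserved tail. The helper below is hypothetical and only shows the pattern shape (any exchange-specific prefix is omitted); in practice `treeherderEvents.jobs(...)` builds the binding for you.

```python
def build_jobs_pattern(destination='*', project='*'):
    """Join the documented routing-key fields with dots; '*' matches any
    single word and '#' matches the reserved tail. Illustrative only."""
    return '.'.join([destination, project, '#'])

pattern = build_jobs_pattern(project='try')  # -> '*.try.#'
```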
<!-- END OF GENERATED DOCS -->
