# Prometheus Python Client

The official Python 2 and 3 client for [Prometheus](http://prometheus.io).

## Three Step Demo

**One**: Install the client:
```
pip install prometheus-client
```

**Two**: Paste the following into a Python interpreter:
```python
from prometheus_client import start_http_server, Summary
import random
import time

# Create a metric to track time spent and requests made.
REQUEST_TIME = Summary('request_processing_seconds', 'Time spent processing request')

# Decorate function with metric.
@REQUEST_TIME.time()
def process_request(t):
    """A dummy function that takes some time."""
    time.sleep(t)

if __name__ == '__main__':
    # Start up the server to expose the metrics.
    start_http_server(8000)
    # Generate some requests.
    while True:
        process_request(random.random())
```

**Three**: Visit [http://localhost:8000/](http://localhost:8000/) to view the metrics.

From one easy-to-use decorator you get:
  * `request_processing_seconds_count`: Number of times this function was called.
  * `request_processing_seconds_sum`: Total amount of time spent in this function.

Prometheus's `rate` function allows calculation of both requests per second
and latency over time from this data.
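For example, `rate(request_processing_seconds_sum[1m]) / rate(request_processing_seconds_count[1m])` gives the average request latency over the last minute.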

In addition, if you're on Linux the `process` metrics expose CPU, memory and
other information about the process for free!

## Installation

```
pip install prometheus_client
```

This package can be found on
[PyPI](https://pypi.python.org/pypi/prometheus_client).

## Instrumenting

Four types of metric are offered: Counter, Gauge, Summary and Histogram.
See the documentation on [metric types](http://prometheus.io/docs/concepts/metric_types/)
and [instrumentation best practices](https://prometheus.io/docs/practices/instrumentation/#counter-vs-gauge-summary-vs-histogram)
for how to use them.

### Counter

Counters go up, and reset when the process restarts.

```python
from prometheus_client import Counter
c = Counter('my_failures', 'Description of counter')
c.inc()     # Increment by 1
c.inc(1.6)  # Increment by given value
```

If there is a suffix of `_total` on the metric name, it will be removed. When
exposing the time series for a counter, a `_total` suffix will be added. This is
for compatibility between OpenMetrics and the Prometheus text format, as OpenMetrics
requires the `_total` suffix.
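
As a brief illustration of this renaming (a self-contained sketch using a throwaway registry):

```python
from prometheus_client import CollectorRegistry, Counter, generate_latest

registry = CollectorRegistry()
# The `_total` suffix is stripped from the name at creation time...
c = Counter('my_failures_total', 'Description of counter', registry=registry)
c.inc()
# ...and added back when the counter is exposed in the text format,
# so the output below includes a `my_failures_total 1.0` sample.
print(generate_latest(registry).decode())
```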

There are utilities to count exceptions raised:

```python
@c.count_exceptions()
def f():
  pass

with c.count_exceptions():
  pass

# Count only one type of exception
with c.count_exceptions(ValueError):
  pass
```

### Gauge

Gauges can go up and down.

```python
from prometheus_client import Gauge
g = Gauge('my_inprogress_requests', 'Description of gauge')
g.inc()      # Increment by 1
g.dec(10)    # Decrement by given value
g.set(4.2)   # Set to a given value
```

There are utilities for common use cases:

```python
g.set_to_current_time()   # Set to current unixtime

# Increment when entered, decrement when exited.
@g.track_inprogress()
def f():
  pass

with g.track_inprogress():
  pass
```

A Gauge can also take its value from a callback:

```python
d = Gauge('data_objects', 'Number of objects')
my_dict = {}
d.set_function(lambda: len(my_dict))
```

### Summary

Summaries track the size and number of events.

```python
from prometheus_client import Summary
s = Summary('request_latency_seconds', 'Description of summary')
s.observe(4.7)    # Observe 4.7 (seconds in this case)
```

There are utilities for timing code:

```python
@s.time()
def f():
  pass

with s.time():
  pass
```

The Python client doesn't store or expose quantile information at this time.

### Histogram

Histograms track the size and number of events in buckets.
This allows for aggregatable calculation of quantiles.

```python
from prometheus_client import Histogram
h = Histogram('request_latency_seconds', 'Description of histogram')
h.observe(4.7)    # Observe 4.7 (seconds in this case)
```

The default buckets are intended to cover a typical web/RPC request from milliseconds to seconds.
They can be overridden by passing the `buckets` keyword argument to `Histogram`.
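
For example, a minimal sketch overriding the defaults with illustrative bucket boundaries:

```python
from prometheus_client import Histogram
h = Histogram('request_latency_seconds', 'Description of histogram',
              buckets=[0.01, 0.1, 0.5, 1.0, 5.0])
# A +Inf bucket is always appended if not given explicitly.
```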

There are utilities for timing code:

```python
@h.time()
def f():
  pass

with h.time():
  pass
```

### Info

Info tracks key-value information, usually about a whole target.

```python
from prometheus_client import Info
i = Info('my_build_version', 'Description of info')
i.info({'version': '1.2.3', 'buildhost': 'foo@bar'})
```

### Enum

Enum tracks which of a set of states something is currently in.

```python
from prometheus_client import Enum
e = Enum('my_task_state', 'Description of enum',
        states=['starting', 'running', 'stopped'])
e.state('running')
```

### Labels

All metrics can have labels, allowing grouping of related time series.

See the best practices on [naming](http://prometheus.io/docs/practices/naming/)
and [labels](http://prometheus.io/docs/practices/instrumentation/#use-labels).

Taking a counter as an example:

```python
from prometheus_client import Counter
c = Counter('my_requests_total', 'HTTP Failures', ['method', 'endpoint'])
c.labels('get', '/').inc()
c.labels('post', '/submit').inc()
```

Labels can also be passed as keyword arguments:

```python
from prometheus_client import Counter
c = Counter('my_requests_total', 'HTTP Failures', ['method', 'endpoint'])
c.labels(method='get', endpoint='/').inc()
c.labels(method='post', endpoint='/submit').inc()
```

Metrics with labels are not initialized when declared, because the client can't
know what values the label can have. It is recommended to initialize the label
values by calling the `.labels()` method alone:

```python
from prometheus_client import Counter
c = Counter('my_requests_total', 'HTTP Failures', ['method', 'endpoint'])
c.labels('get', '/')
c.labels('post', '/submit')
```

### Exemplars

Exemplars can be added to counter and histogram metrics, and are specified by
passing a dict of label-value pairs to be exposed as the exemplar.
For example, with a counter:

```python
from prometheus_client import Counter
c = Counter('my_requests_total', 'HTTP Failures', ['method', 'endpoint'])
c.labels('get', '/').inc(exemplar={'trace_id': 'abc123'})
c.labels('post', '/submit').inc(1.0, {'trace_id': 'def456'})
```

And with a histogram:

```python
from prometheus_client import Histogram
h = Histogram('request_latency_seconds', 'Description of histogram')
h.observe(4.7, {'trace_id': 'abc123'})
```

### Process Collector

The Python client automatically exports metrics about process CPU usage, RAM,
file descriptors and start time. These all have the prefix `process`, and
are only currently available on Linux.

The `namespace` and `pid` constructor arguments allow for exporting metrics about
other processes, for example:
```python
ProcessCollector(namespace='mydaemon', pid=lambda: open('/var/run/daemon.pid').read())
```

### Platform Collector

The client also automatically exports some metadata about Python. If using Jython,
metadata about the JVM in use is also included. This information is available as
labels on the `python_info` metric. The value of the metric is 1, since it is the
labels that carry information.
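
As a rough sketch of inspecting this metric programmatically (the exact label set varies by interpreter and platform):

```python
from prometheus_client import REGISTRY

# Find the python_info metric in the default registry and print its samples;
# the labels typically include the implementation and version.
for metric in REGISTRY.collect():
    if metric.name == 'python_info':
        for sample in metric.samples:
            print(sample)
```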

## Exporting

There are several options for exporting metrics.

### HTTP

Metrics are usually exposed over HTTP, to be read by the Prometheus server.

The easiest way to do this is via `start_http_server`, which will start an HTTP
server in a daemon thread on the given port:

```python
from prometheus_client import start_http_server

start_http_server(8000)
```

Visit [http://localhost:8000/](http://localhost:8000/) to view the metrics.

To add Prometheus exposition to an existing HTTP server, see the `MetricsHandler` class,
which provides a `BaseHTTPRequestHandler`. It also serves as a simple example of how
to write a custom endpoint.
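
As a minimal sketch, assuming the standard-library `http.server` module, `MetricsHandler` can be plugged directly into an `HTTPServer`:

```python
from http.server import HTTPServer
from prometheus_client import MetricsHandler

# Serve the default registry's metrics on port 8000; unlike start_http_server,
# this blocks the current thread.
HTTPServer(('', 8000), MetricsHandler).serve_forever()
```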

#### Twisted

To use Prometheus with [Twisted](https://twistedmatrix.com/), there is `MetricsResource`, which exposes metrics as a Twisted resource.

```python
from prometheus_client.twisted import MetricsResource
from twisted.web.server import Site
from twisted.web.resource import Resource
from twisted.internet import reactor

root = Resource()
root.putChild(b'metrics', MetricsResource())

factory = Site(root)
reactor.listenTCP(8000, factory)
reactor.run()
```

#### WSGI

To use Prometheus with [WSGI](http://wsgi.readthedocs.org/en/latest/), there is
`make_wsgi_app`, which creates a WSGI application.

```python
from prometheus_client import make_wsgi_app
from wsgiref.simple_server import make_server

app = make_wsgi_app()
httpd = make_server('', 8000, app)
httpd.serve_forever()
```

Such an application can be useful when integrating Prometheus metrics with WSGI
apps.

The method `start_wsgi_server` can be used to serve the metrics through the
WSGI reference implementation in a new thread.

```python
from prometheus_client import start_wsgi_server

start_wsgi_server(8000)
```

#### ASGI

To use Prometheus with [ASGI](http://asgi.readthedocs.org/en/latest/), there is
`make_asgi_app`, which creates an ASGI application.

```python
from prometheus_client import make_asgi_app

app = make_asgi_app()
```

Such an application can be useful when integrating Prometheus metrics with ASGI
apps.
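
For instance, here is a minimal sketch that assumes the third-party Starlette framework is installed; it mounts the metrics app next to an existing ASGI application:

```python
from starlette.applications import Starlette
from starlette.routing import Mount

from prometheus_client import make_asgi_app

# Serve the default registry's metrics under /metrics.
app = Starlette(routes=[Mount("/metrics", app=make_asgi_app())])
```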

#### Flask

To use Prometheus with [Flask](http://flask.pocoo.org/), we need to serve metrics through a Prometheus WSGI application. This can be achieved using [Flask's application dispatching](http://flask.pocoo.org/docs/latest/patterns/appdispatch/). Below is a working example.

Save the snippet below in a `myapp.py` file:

```python
from flask import Flask
from werkzeug.middleware.dispatcher import DispatcherMiddleware
from prometheus_client import make_wsgi_app

# Create my app
app = Flask(__name__)

# Add prometheus wsgi middleware to route /metrics requests
app.wsgi_app = DispatcherMiddleware(app.wsgi_app, {
    '/metrics': make_wsgi_app()
})
```

Run the example web application like this:

```bash
# Install uwsgi if you do not have it
pip install uwsgi
uwsgi --http 127.0.0.1:8000 --wsgi-file myapp.py --callable app
```

Visit http://localhost:8000/metrics to see the metrics.

### Node exporter textfile collector

The [textfile collector](https://github.com/prometheus/node_exporter#textfile-collector)
allows machine-level statistics to be exported out via the Node exporter.

This is useful for monitoring cronjobs, or for writing cronjobs to expose metrics
about a machine or system that the Node exporter does not support, or that would
not make sense to gather at every scrape (for example, anything involving subprocesses).

```python
from prometheus_client import CollectorRegistry, Gauge, write_to_textfile

registry = CollectorRegistry()
g = Gauge('raid_status', '1 if raid array is okay', registry=registry)
g.set(1)
write_to_textfile('/configured/textfile/path/raid.prom', registry)
```

A separate registry is used, as the default registry may contain other metrics
such as those from the Process Collector.

## Exporting to a Pushgateway

The [Pushgateway](https://github.com/prometheus/pushgateway)
allows ephemeral and batch jobs to expose their metrics to Prometheus.

```python
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
g = Gauge('job_last_success_unixtime', 'Last time a batch job successfully finished', registry=registry)
g.set_to_current_time()
push_to_gateway('localhost:9091', job='batchA', registry=registry)
```

A separate registry is used, as the default registry may contain other metrics
such as those from the Process Collector.

Pushgateway functions take a grouping key. `push_to_gateway` replaces metrics
with the same grouping key, `pushadd_to_gateway` only replaces metrics with the
same name and grouping key, and `delete_from_gateway` deletes metrics with the
given job and grouping key. See the
[Pushgateway documentation](https://github.com/prometheus/pushgateway/blob/master/README.md)
for more information.

`instance_ip_grouping_key` returns a grouping key with the instance label set
to the host's IP address.
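
As a minimal sketch combining these (the gateway address and job name are illustrative):

```python
from prometheus_client import CollectorRegistry, Gauge, pushadd_to_gateway
from prometheus_client.exposition import instance_ip_grouping_key

registry = CollectorRegistry()
g = Gauge('job_last_success_unixtime', 'Last time a batch job successfully finished', registry=registry)
g.set_to_current_time()
# Group this push by the instance label, set to this host's IP address.
pushadd_to_gateway('localhost:9091', job='batchA',
                   grouping_key=instance_ip_grouping_key(), registry=registry)
```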

### Handlers for authentication

If the Pushgateway you are connecting to is protected with HTTP Basic Auth,
you can use a special handler to set the Authorization header.

```python
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway
from prometheus_client.exposition import basic_auth_handler

def my_auth_handler(url, method, timeout, headers, data):
    username = 'foobar'
    password = 'secret123'
    return basic_auth_handler(url, method, timeout, headers, data, username, password)

registry = CollectorRegistry()
g = Gauge('job_last_success_unixtime', 'Last time a batch job successfully finished', registry=registry)
g.set_to_current_time()
push_to_gateway('localhost:9091', job='batchA', registry=registry, handler=my_auth_handler)
```

## Bridges

It is also possible to expose metrics to systems other than Prometheus.
This allows you to take advantage of Prometheus instrumentation even
if you are not quite ready to fully transition to Prometheus yet.

### Graphite

Metrics are pushed over TCP in the Graphite plaintext format.

```python
from prometheus_client.bridge.graphite import GraphiteBridge

gb = GraphiteBridge(('graphite.your.org', 2003))
# Push once.
gb.push()
# Push every 10 seconds in a daemon thread.
gb.start(10.0)
```

Graphite [tags](https://grafana.com/blog/2018/01/11/graphite-1.1-teaching-an-old-dog-new-tricks/) are also supported.

```python
from prometheus_client import Counter
from prometheus_client.bridge.graphite import GraphiteBridge

gb = GraphiteBridge(('graphite.your.org', 2003), tags=True)
c = Counter('my_requests_total', 'HTTP Failures', ['method', 'endpoint'])
c.labels('get', '/').inc()
gb.push()
```

## Custom Collectors

Sometimes it is not possible to directly instrument code, as it is not
in your control. This requires you to proxy metrics from other systems.

To do so you need to create a custom collector, for example:

```python
from prometheus_client.core import GaugeMetricFamily, CounterMetricFamily, REGISTRY

class CustomCollector(object):
    def collect(self):
        yield GaugeMetricFamily('my_gauge', 'Help text', value=7)
        c = CounterMetricFamily('my_counter_total', 'Help text', labels=['foo'])
        c.add_metric(['bar'], 1.7)
        c.add_metric(['baz'], 3.8)
        yield c

REGISTRY.register(CustomCollector())
```

`SummaryMetricFamily`, `HistogramMetricFamily` and `InfoMetricFamily` work similarly.

A collector may implement a `describe` method which returns metrics in the same
format as `collect` (though you don't have to include the samples). This is
used to predetermine the names of time series a `CollectorRegistry` exposes and
thus to detect collisions and duplicate registrations.

Usually custom collectors do not have to implement `describe`. If `describe` is
not implemented and the `CollectorRegistry` was created with `auto_describe=True`
(which is the case for the default registry), then `collect` will be called at
registration time instead of `describe`. If this could cause problems, either
implement a proper `describe`, or, if that's not practical, have `describe`
return an empty list.
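
For example, a minimal sketch of a collector that opts out this way:

```python
from prometheus_client.core import GaugeMetricFamily, REGISTRY

class UncheckedCollector(object):
    def collect(self):
        yield GaugeMetricFamily('my_gauge', 'Help text', value=7)

    def describe(self):
        # Returning an empty list avoids calling collect() at registration
        # time, at the cost of skipping name collision detection.
        return []

REGISTRY.register(UncheckedCollector())
```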

## Multiprocess Mode (E.g. Gunicorn)

Prometheus client libraries presume a threaded model, where metrics are shared
across workers. This doesn't work so well for languages such as Python, where
it's common to have processes rather than threads to handle large workloads.

To handle this, the client library can be put in multiprocess mode.
This comes with a number of limitations:

- Registries cannot be used as normal; all instantiated metrics are exported
  - Registering metrics to a registry later used by a `MultiProcessCollector`
    may cause duplicate metrics to be exported
- Custom collectors do not work (e.g. CPU and memory metrics)
- Info and Enum metrics do not work
- The Pushgateway cannot be used
- Gauges cannot use the `pid` label
- Exemplars are not supported

There are several steps to getting this working:

**1. Deployment**:

The `PROMETHEUS_MULTIPROC_DIR` environment variable must be set to a directory
that the client library can use for metrics. This directory must be wiped
between process/Gunicorn runs (before startup is recommended).

This environment variable should be set from a start-up shell script,
and not directly from Python (otherwise it may not propagate to child processes).

**2. Metrics collector**:

The application must initialize a new `CollectorRegistry` and store the
multi-process collector inside it. It is a best practice to create this registry
inside the context of a request to avoid metrics registering themselves to a
collector used by a `MultiProcessCollector`. If a registry with metrics
registered is used by a `MultiProcessCollector`, duplicate metrics may be
exported, one for multiprocess, and one for the process serving the request.

```python
from prometheus_client import multiprocess
from prometheus_client import generate_latest, CollectorRegistry, CONTENT_TYPE_LATEST, Counter

MY_COUNTER = Counter('my_counter', 'Description of my counter')

# Expose metrics.
def app(environ, start_response):
    registry = CollectorRegistry()
    multiprocess.MultiProcessCollector(registry)
    data = generate_latest(registry)
    status = '200 OK'
    response_headers = [
        ('Content-type', CONTENT_TYPE_LATEST),
        ('Content-Length', str(len(data)))
    ]
    start_response(status, response_headers)
    return iter([data])
```

**3. Gunicorn configuration**:

The `gunicorn` configuration file needs to include the following function:

```python
from prometheus_client import multiprocess

def child_exit(server, worker):
    multiprocess.mark_process_dead(worker.pid)
```

**4. Metrics tuning (Gauge)**:

When `Gauge` metrics are used, additional tuning needs to be performed.
Gauges have several modes they can run in, which can be selected with the `multiprocess_mode` parameter.

- 'all': Default. Return a timeseries per process alive or dead.
- 'liveall': Return a timeseries per process that is still alive.
- 'livesum': Return a single timeseries that is the sum of the values of alive processes.
- 'max': Return a single timeseries that is the maximum of the values of all processes, alive or dead.
- 'min': Return a single timeseries that is the minimum of the values of all processes, alive or dead.

```python
from prometheus_client import Gauge

# Example gauge
IN_PROGRESS = Gauge("inprogress_requests", "help", multiprocess_mode='livesum')
```

## Parser

The Python client supports parsing the Prometheus text format.
This is intended for advanced use cases where you have servers
exposing Prometheus metrics and need to get them into some other
system.

```python
from prometheus_client.parser import text_string_to_metric_families
for family in text_string_to_metric_families(u"my_gauge 1.0\n"):
  for sample in family.samples:
    print("Name: {0} Labels: {1} Value: {2}".format(*sample))
```

## Links

* [Releases](https://github.com/prometheus/client_python/releases): The releases page shows the history of the project and acts as a changelog.
* [PyPI](https://pypi.python.org/pypi/prometheus_client)