---
title: Querying examples
nav_title: Examples
sort_rank: 4
---

# Query examples

## Simple time series selection

Return all time series with the metric `http_requests_total`:

    http_requests_total

Return all time series with the metric `http_requests_total` and the given
`job` and `handler` labels:

    http_requests_total{job="apiserver", handler="/api/comments"}

Return a whole range of time (in this case 5 minutes) for the same vector,
making it a range vector:

    http_requests_total{job="apiserver", handler="/api/comments"}[5m]

Note that an expression resulting in a range vector cannot be graphed directly,
but can only be viewed in the tabular ("Console") view of the expression browser.
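
To graph such data, you can wrap the range vector in a function that returns an
instant vector, such as `rate()` (covered in more detail further below):

    rate(http_requests_total{job="apiserver", handler="/api/comments"}[5m])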

Using regular expressions, you could select time series only for jobs whose
names match a certain pattern, in this case, all jobs that end with `server`:

    http_requests_total{job=~".*server"}

All regular expressions in Prometheus use [RE2
syntax](https://github.com/google/re2/wiki/Syntax).
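
Note that regex matchers are fully anchored, so `job=~"server"` would match only
the literal value `server`; the leading `.*` above is what allows any prefix.
Multiple matchers of different types can also be combined in a single selector.
For example (the `environment` and `method` labels here are purely illustrative):

    http_requests_total{environment=~"staging|testing|development", method!="GET"}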

To select all HTTP status codes except 4xx ones, you could run:

    http_requests_total{status!~"4.."}

## Subquery

Return the 5-minute rate of the `http_requests_total` metric for the past 30 minutes, with a resolution of 1 minute:

    rate(http_requests_total[5m])[30m:1m]
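
A subquery like this is typically fed into an `*_over_time` function. For
example, to find the highest 5-minute rate observed during those 30 minutes
(a sketch building on the expression above):

    max_over_time(rate(http_requests_total[5m])[30m:1m])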

This is an example of a nested subquery. The subquery for the `deriv` function uses the default resolution. Note that using subqueries unnecessarily is unwise.

    max_over_time(deriv(rate(distance_covered_total[5s])[30s:5s])[10m:])

## Using functions, operators, etc.

Return the per-second rate for all time series with the `http_requests_total`
metric name, as measured over the last 5 minutes:

    rate(http_requests_total[5m])

Assuming that the `http_requests_total` time series all have the labels `job`
(fanout by job name) and `instance` (fanout by instance of the job), we might
want to sum over the rate of all instances, so we get fewer output time series,
but still preserve the `job` dimension:

    sum(rate(http_requests_total[5m])) by (job)
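
The grouping clause can equivalently be written before the aggregated
expression, which many find easier to read as expressions grow longer:

    sum by (job) (rate(http_requests_total[5m]))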

If we have two different metrics with the same dimensional labels, we can apply
binary operators to them and elements on both sides with the same label set
will get matched and propagated to the output. For example, this expression
returns the unused memory in MiB for every instance (on a fictional cluster
scheduler exposing these metrics about the instances it runs):

    (instance_memory_limit_bytes - instance_memory_usage_bytes) / 1024 / 1024

The same expression, but summed over all instances and grouped by application
(`app`) and process type (`proc`), could be written like this:

    sum(
      instance_memory_limit_bytes - instance_memory_usage_bytes
    ) by (app, proc) / 1024 / 1024

If the same fictional cluster scheduler exposed CPU usage metrics like the
following for every instance:

    instance_cpu_time_ns{app="lion", proc="web", rev="34d0f99", env="prod", job="cluster-manager"}
    instance_cpu_time_ns{app="elephant", proc="worker", rev="34d0f99", env="prod", job="cluster-manager"}
    instance_cpu_time_ns{app="turtle", proc="api", rev="4d3a513", env="prod", job="cluster-manager"}
    instance_cpu_time_ns{app="fox", proc="widget", rev="4d3a513", env="prod", job="cluster-manager"}
    ...

...we could get the top 3 CPU users grouped by application (`app`) and process
type (`proc`) like this:

    topk(3, sum(rate(instance_cpu_time_ns[5m])) by (app, proc))
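
The selector inside the aggregation can be narrowed further with label
matchers; for example, restricting the ranking to the `cluster-manager` job
shown above:

    topk(3, sum by (app, proc) (rate(instance_cpu_time_ns{job="cluster-manager"}[5m])))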

Assuming this metric contains one time series per running instance, you could
count the number of running instances per application like this:

    count(instance_cpu_time_ns) by (app)
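
Nesting two aggregations is a common way to count distinct label values. For
example, to count how many different applications report this metric at all
(a general PromQL pattern, not specific to this scheduler):

    count(count by (app) (instance_cpu_time_ns))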