---
title: Object Storage
type: docs
menu: thanos
slug: /storage.md
---

# Object Storage

Thanos supports any object store that can be implemented against the Thanos [objstore.Bucket interface](/pkg/objstore/objstore.go).

All clients are configured using `--objstore.config-file` to reference a configuration file or `--objstore.config` to put the YAML config directly.

## How to use `config` flags?

You can either pass the YAML file defined below in `--objstore.config-file` or pass the YAML content directly using `--objstore.config`.
We recommend the latter as it gives an explicit static view of the configuration for each component. It also saves you the fuss of creating and managing an additional file.

Don't be afraid of multiline flags!

In Kubernetes it is as easy as (Thanos sidecar example):

```yaml
      - args:
        - sidecar
        - |
          --objstore.config=type: GCS
          config:
            bucket: <bucket>
        - --prometheus.url=http://localhost:9090
        - |
          --tracing.config=type: STACKDRIVER
          config:
            service_name: ""
            project_id: <project>
            sample_factor: 16
        - --tsdb.path=/prometheus-data
```

## How to add a new client?

1. Create a new directory under `pkg/objstore/<provider>`.
2. Implement the [objstore.Bucket interface](/pkg/objstore/objstore.go).
3. Add a `NewTestBucket` constructor for testing purposes that creates and deletes a temporary bucket.
4. Use the created `NewTestBucket` in the [ForeachStore method](/pkg/objstore/objtesting/foreach.go) to ensure we can run tests against the new provider. (In PR)
5. RUN the [TestObjStoreAcceptanceTest](/pkg/objstore/objtesting/acceptance_e2e_test.go) against your provider to ensure it fits. Fix any errors until the test passes. (In PR)
6. Add the client implementation to the factory in the [factory](/pkg/objstore/client/factory.go) code. (Use as few flags as possible in every command.)
7. Add the client struct config to [bucketcfggen](/scripts/cfggen/main.go) to allow config auto generation.

At that point, anyone can use your provider by spec.

## Configuration

Current object storage client implementations:

| Provider                                         | Maturity                           | Auto-tested on CI | Maintainers              |
|--------------------------------------------------|------------------------------------|-------------------|--------------------------|
| [Google Cloud Storage](./storage.md#gcs)         | Stable (production usage)          | yes               | @bwplotka                |
| [AWS/S3](./storage.md#s3)                        | Stable (production usage)          | yes               | @bwplotka                |
| [Azure Storage Account](./storage.md#azure)      | Stable (production usage)          | no                | @vglafirov               |
| [OpenStack Swift](./storage.md#openstack-swift)  | Beta (working PoCs, testing usage) | no                | @sudhi-vm                |
| [Tencent COS](./storage.md#tencent-cos)          | Beta (testing usage)               | no                | @jojohappy               |
| [AliYun OSS](./storage.md#aliyun-oss)            | Beta (testing usage)               | no                | @shaulboozhiao, @wujinhu |
| [Local Filesystem](./storage.md#filesystem)      | Beta (testing usage)               | yes               | @bwplotka                |

NOTE: Currently Thanos requires strong consistency (write-read) for the object store implementation.

### S3

Thanos uses the [minio client](https://github.com/minio/minio-go) library to upload Prometheus data into AWS S3.

You can configure an S3 bucket as an object store with YAML, either by passing the configuration directly to the `--objstore.config` parameter, or (preferably) by passing the path to a configuration file to the `--objstore.config-file` option.

NOTE: The Minio client was designed mainly for AWS S3, but it can be configured against other S3-compatible object storages, e.g. Ceph.

[embedmd]:# (flags/config_bucket_s3.txt yaml)
```yaml
type: S3
config:
  bucket: ""
  endpoint: ""
  region: ""
  access_key: ""
  insecure: false
  signature_version2: false
  encrypt_sse: false
  secret_key: ""
  put_user_metadata: {}
  http_config:
    idle_conn_timeout: 90s
    response_header_timeout: 2m
    insecure_skip_verify: false
  trace:
    enable: false
  part_size: 134217728
```

At a minimum, you will need to provide a value for the `bucket`, `endpoint`, `access_key`, and `secret_key` keys. The rest of the keys are optional.
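
For example, a minimal configuration with only the required keys could look like this (the bucket name, endpoint, and credentials are placeholders):

```yaml
type: S3
config:
  bucket: "my-thanos-bucket"
  endpoint: "s3.us-east-1.amazonaws.com"
  access_key: "<access key>"
  secret_key: "<secret key>"
```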

The AWS region to endpoint mapping can be found in this [link](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region).

Make sure you use a correct signature version. Currently AWS requires signature v4, so it needs `signature_version2: false`. If you don't specify it, you will get an `Access Denied` error. On the other hand, several S3-compatible APIs use `signature_version2: true`.

You can configure the timeout settings for the HTTP client by setting the `http_config.idle_conn_timeout` and `http_config.response_header_timeout` keys. As a rule of thumb, if you are seeing errors like `timeout awaiting response headers` in your logs, you may want to increase the value of `http_config.response_header_timeout`.

Please refer to the documentation of [the Transport type](https://golang.org/pkg/net/http/#Transport) in the `net/http` package for detailed information on what each option does.

`part_size` is specified in bytes and refers to the minimum file size used for multipart uploads, as some custom S3 implementations may have different requirements. A value of `0` means to use a default 128 MiB size.

For debug and testing purposes you can set

* `insecure: true` to switch to plain insecure HTTP instead of HTTPS

* `http_config.insecure_skip_verify: true` to disable TLS certificate verification (if your S3-based storage is using a self-signed certificate, for example)

* `trace.enable: true` to enable the minio client's verbose logging. Each request and response will be logged into the debug logger, so debug level logging must be enabled for this functionality.
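
For instance, a sketch of a config that raises the response-header timeout and enables the debug options described above (the bucket, endpoint, and credentials are placeholders):

```yaml
type: S3
config:
  bucket: "<bucket>"
  endpoint: "<endpoint>"
  access_key: "<access key>"
  secret_key: "<secret key>"
  http_config:
    response_header_timeout: 5m    # raised from the 2m default
    insecure_skip_verify: true     # e.g. for a self-signed certificate
  trace:
    enable: true                   # requires debug level logging
```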

#### Credentials

By default Thanos will try to retrieve credentials from the following sources:

1. From the config file if BOTH `access_key` and `secret_key` are present.
1. From the standard AWS environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.
1. From `~/.aws/credentials`.
1. From IAM credentials retrieved from an instance profile.

NOTE: Getting the access key from the config file and the secret key from another method (and vice versa) is not supported.
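
For example, when relying on environment variables, `~/.aws/credentials`, or an instance profile, both key fields can simply be left empty (the bucket and endpoint are placeholders):

```yaml
type: S3
config:
  bucket: "<bucket>"
  endpoint: "<endpoint>"
  access_key: ""   # resolved from the sources listed above
  secret_key: ""   # resolved together with the access key
```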

#### AWS Policies

Example working AWS IAM policy for a user:

* For deployment (policy for Thanos services):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket>/*",
                "arn:aws:s3:::<bucket>"
            ]
        }
    ]
}
```

(No bucket policy)

To test the policy, set env vars for S3 access for an *empty, unused* bucket as well as:

```
THANOS_TEST_OBJSTORE_SKIP=GCS,AZURE,SWIFT,COS,ALIYUNOSS
THANOS_ALLOW_EXISTING_BUCKET_USE=true
```

And run: `GOCACHE=off go test -v -run TestObjStore_AcceptanceTest_e2e ./pkg/...`

* For testing (policy to run e2e tests):

We need access to `CreateBucket` and `DeleteBucket` and access to all buckets:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:PutObject",
                "s3:CreateBucket",
                "s3:DeleteBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket>/*",
                "arn:aws:s3:::<bucket>"
            ]
        }
    ]
}
```

With this policy you should be able to set `THANOS_TEST_OBJSTORE_SKIP=GCS,AZURE,SWIFT,COS,ALIYUNOSS`, unset `S3_BUCKET`, and run all tests using `make test`.

Details about AWS policies: https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html

### GCS

To configure a Google Cloud Storage bucket as an object store you need to set `bucket` to the GCS bucket name and configure Google Application credentials.

For example:

[embedmd]:# (flags/config_bucket_gcs.txt yaml)
```yaml
type: GCS
config:
  bucket: ""
  service_account: ""
```

#### Using GOOGLE_APPLICATION_CREDENTIALS

Application credentials are configured via a JSON file and only the bucket needs to be specified;
the client looks for:

1. A JSON file whose path is specified by the
   `GOOGLE_APPLICATION_CREDENTIALS` environment variable.
2. A JSON file in a location known to the gcloud command-line tool.
   On Windows, this is `%APPDATA%/gcloud/application_default_credentials.json`.
   On other systems, `$HOME/.config/gcloud/application_default_credentials.json`.
3. On Google App Engine it uses the `appengine.AccessToken` function.
4. On Google Compute Engine and Google App Engine Managed VMs, it fetches
   credentials from the metadata server.
   (In this final case any provided scopes are ignored.)

You can read more on how to get an application credentials JSON file at [https://cloud.google.com/docs/authentication/production](https://cloud.google.com/docs/authentication/production).
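
With credentials supplied through one of the sources above (e.g. `GOOGLE_APPLICATION_CREDENTIALS`), the configuration reduces to just the bucket name (a placeholder here):

```yaml
type: GCS
config:
  bucket: "<bucket>"
```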

#### Using an inline Service Account

Another possibility is to inline the service account into the Thanos configuration and only maintain one file.
This feature was added so that the Prometheus Operator only needs to take care of one secret file.

```yaml
type: GCS
config:
  bucket: "thanos"
  service_account: |-
    {
      "type": "service_account",
      "project_id": "project",
      "private_key_id": "abcdefghijklmnopqrstuvwxyz12345678906666",
      "private_key": "-----BEGIN PRIVATE KEY-----\...\n-----END PRIVATE KEY-----\n",
      "client_email": "project@thanos.iam.gserviceaccount.com",
      "client_id": "123456789012345678901",
      "auth_uri": "https://accounts.google.com/o/oauth2/auth",
      "token_uri": "https://oauth2.googleapis.com/token",
      "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
      "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/thanos%40gitpods.iam.gserviceaccount.com"
    }
```

#### GCS Policies

__Note:__ GCS Policies should be applied at the project level, not at the bucket level.

For deployment:

`Storage Object Creator` and `Storage Object Viewer`

For testing:

`Storage Object Admin` for the ability to create and delete temporary buckets.

To test that the policy is working as expected, exec into the sidecar container, e.g.:

```sh
kubectl exec -it -n <namespace> <prometheus with sidecar pod name> -c <sidecar container name> -- /bin/sh
```

Then test that you can at least list objects in the bucket, e.g.:

```sh
thanos bucket ls --objstore.config="${OBJSTORE_CONFIG}"
```

### Azure

To use Azure Storage as a Thanos object store, you need to pre-create a storage account from the Azure portal or using the Azure CLI. Follow the instructions from the Azure Storage Documentation: [https://docs.microsoft.com/en-us/azure/storage/common/storage-quickstart-create-account](https://docs.microsoft.com/en-us/azure/storage/common/storage-quickstart-create-account?tabs=portal)

To configure an Azure Storage account as an object store you need to provide a path to the Azure storage config file in the `--objstore.config-file` flag.

The config file format is the following:

[embedmd]:# (flags/config_bucket_azure.txt yaml)
```yaml
type: AZURE
config:
  storage_account: ""
  storage_account_key: ""
  container: ""
  endpoint: ""
  max_retries: 0
```
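
For instance, a minimal sketch with placeholder values (the account name, key, and container below are illustrations only):

```yaml
type: AZURE
config:
  storage_account: "<storage account name>"
  storage_account_key: "<storage account key>"
  container: "thanos"
```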

### OpenStack Swift

Thanos uses the [gophercloud](http://gophercloud.io/) client to upload Prometheus data into [OpenStack Swift](https://docs.openstack.org/swift/latest/).

Below is an example configuration file for Thanos to use an OpenStack Swift container as an object store.
Note that if the `name` of a user, project or tenant is used, one must also specify its domain by ID or name.
Various examples for OpenStack authentication can be found in the [official documentation](https://developer.openstack.org/api-ref/identity/v3/index.html?expanded=password-authentication-with-scoped-authorization-detail#password-authentication-with-unscoped-authorization).

[embedmd]:# (flags/config_bucket_swift.txt yaml)
```yaml
type: SWIFT
config:
  auth_url: ""
  username: ""
  user_domain_name: ""
  user_domain_id: ""
  user_id: ""
  password: ""
  domain_id: ""
  domain_name: ""
  project_id: ""
  project_name: ""
  project_domain_id: ""
  project_domain_name: ""
  region_name: ""
  container_name: ""
```
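
As noted above, when authenticating by user and project *name*, the corresponding domains must be given as well. A sketch with placeholder values:

```yaml
type: SWIFT
config:
  auth_url: "http://<keystone host>:5000/v3"
  username: "<user>"
  user_domain_name: "Default"
  password: "<password>"
  project_name: "<project>"
  project_domain_name: "Default"
  region_name: "<region>"
  container_name: "thanos"
```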

### Tencent COS

To use Tencent COS as an object store, you first need a Tencent Cloud account in which to create an object storage bucket. Detailed instructions can be found in the Tencent Cloud documentation: [https://cloud.tencent.com/document/product/436](https://cloud.tencent.com/document/product/436)

To configure your Tencent account to use COS as an object store you need to set these parameters in YAML format, stored in a file:

[embedmd]:# (flags/config_bucket_cos.txt yaml)
```yaml
type: COS
config:
  bucket: ""
  region: ""
  app_id: ""
  secret_key: ""
  secret_id: ""
```

Set the `--objstore.config-file` flag to reference the configuration file.
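
For example, a filled-in sketch (all values are placeholders; the region shown is just one example):

```yaml
type: COS
config:
  bucket: "<bucket>"
  region: "ap-guangzhou"   # replace with your bucket's region
  app_id: "<app id>"
  secret_key: "<secret key>"
  secret_id: "<secret id>"
```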

### AliYun OSS

In order to use AliYun OSS object storage, you should first create a bucket with the proper Storage Class and ACLs, and get the access key on the AliYun cloud. Go to [https://www.alibabacloud.com/product/oss](https://www.alibabacloud.com/product/oss) for more detail.

To use AliYun OSS object storage, please specify the following YAML configuration file in the `--objstore.config*` flags.

[embedmd]:# (flags/config_bucket_aliyunoss.txt yaml)
```yaml
type: ALIYUNOSS
config:
  endpoint: ""
  bucket: ""
  access_key_id: ""
  access_key_secret: ""
```

Use `--objstore.config-file` to reference this configuration file.
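
For example, a filled-in sketch (all values are placeholders; the endpoint shown is just one example region endpoint):

```yaml
type: ALIYUNOSS
config:
  endpoint: "oss-cn-hangzhou.aliyuncs.com"   # replace with your bucket's endpoint
  bucket: "<bucket>"
  access_key_id: "<access key id>"
  access_key_secret: "<access key secret>"
```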

### Filesystem

This storage type is used when you want to store and access the bucket in the local filesystem.
We treat the filesystem the same way we would treat object storage, so all optimizations for remote buckets apply, even though
the files may be local.

NOTE: This storage type is experimental and might be inefficient. It is NOT advised to use it as the main storage for metrics
in a production environment. In particular, there is no planned support for distributed filesystems like NFS.
This is mainly useful for testing and demos.

[embedmd]:# (flags/config_bucket_filesystem.txt yaml)
```yaml
type: FILESYSTEM
config:
  directory: ""
```
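
For example, to keep blocks under a local directory (the path is a placeholder):

```yaml
type: FILESYSTEM
config:
  directory: "/data/thanos-bucket"
```

The bucket can then be inspected like any remote one, e.g. with `thanos bucket ls` and the same `--objstore.config-file` flag.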