---
stage: Configure
group: Configure
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---

# Troubleshooting Auto DevOps **(FREE)**

This page describes common errors you might encounter when using
Auto DevOps, and any available workarounds.
## Unable to select a buildpack

Auto Build and Auto Test may fail to detect your language or framework with the
following error:

```plaintext
Step 5/11 : RUN /bin/herokuish buildpack build
 ---> Running in eb468cd46085
    -----> Unable to select a buildpack
The command '/bin/sh -c /bin/herokuish buildpack build' returned a non-zero code: 1
```

The following are possible reasons:

- Your application may be missing the key files the buildpack is looking for.
  For example, Ruby applications require a `Gemfile` to be properly detected,
  even though it's possible to write a Ruby app without a `Gemfile`.
- No buildpack may exist for your application. Try specifying a
  [custom buildpack](customize.md#custom-buildpacks).

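One way to specify a custom buildpack is through the `BUILDPACK_URL` CI/CD variable.
A minimal sketch, assuming a Ruby application (the buildpack URL is illustrative;
substitute the buildpack your application actually needs):

```yaml
# .gitlab-ci.yml
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  # Illustrative buildpack; replace with the one matching your application.
  BUILDPACK_URL: "https://github.com/heroku/heroku-buildpack-ruby"
```
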
## Pipeline that extends Auto DevOps with only / except fails

If your pipeline fails with the following message:

```plaintext
Found errors in your .gitlab-ci.yml:

  jobs:test config key may not be used with `rules`: only
```

This error appears when the included job's rules configuration has been overridden with the `only` or `except` syntax.
To fix this issue, you must either:

- Transition your `only/except` syntax to `rules`.
- (Temporarily) Pin your templates to the [GitLab 12.10 based templates](https://gitlab.com/gitlab-org/auto-devops-v12-10).

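As a sketch of the first option, a job override that used `only` can be expressed
with `rules` instead (the `test` job and branch name here are illustrative):

```yaml
# Instead of overriding the included job with `only`/`except`:
#
#   test:
#     only:
#       - master
#
# use `rules`, which is compatible with the included template:
test:
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
```
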
## Failure to create a Kubernetes namespace

Auto Deploy fails if GitLab can't create a Kubernetes namespace and
service account for your project. For help debugging this issue, see
[Troubleshooting failed deployment jobs](../../user/project/clusters/deploy_to_cluster.md#troubleshooting).

## Detected an existing PostgreSQL database

After upgrading to GitLab 13.0, you may encounter this message when deploying
with Auto DevOps:

```plaintext
Detected an existing PostgreSQL database installed on the
deprecated channel 1, but the current channel is set to 2. The default
channel changed to 2 in of GitLab 13.0.
[...]
```

Auto DevOps, by default, installs an in-cluster PostgreSQL database alongside
your application. The default installation method changed in GitLab 13.0, and
upgrading existing databases requires user involvement. The two installation
methods are:

- **channel 1 (deprecated):** Pulls in the database as a dependency of the associated
  Helm chart. Only supports Kubernetes versions up to version 1.15.
- **channel 2 (current):** Installs the database as an independent Helm chart. Required
  for using the in-cluster database feature with Kubernetes versions 1.16 and greater.

If you receive this error, you can take one of the following actions:

- You can *safely* ignore the warning and continue using the channel 1 PostgreSQL
  database by setting `AUTO_DEVOPS_POSTGRES_CHANNEL` to `1` and redeploying.

- You can delete the channel 1 PostgreSQL database and install a fresh channel 2
  database by setting `AUTO_DEVOPS_POSTGRES_DELETE_V1` to a non-empty value and
  redeploying.

  WARNING:
  Deleting the channel 1 PostgreSQL database permanently deletes the existing
  channel 1 database and all its data. See
  [Upgrading PostgreSQL](upgrading_postgresql.md)
  for more information on backing up and upgrading your database.

- If you are not using the in-cluster database, you can set
  `POSTGRES_ENABLED` to `false` and redeploy. This option is especially relevant to
  users of *custom charts without the in-chart PostgreSQL dependency*.
  Database auto-detection is based on the `postgresql.enabled` Helm value for
  your release. This value is set based on the `POSTGRES_ENABLED` CI/CD variable
  and persisted by Helm, regardless of whether or not your chart uses the
  variable.

WARNING:
Setting `POSTGRES_ENABLED` to `false` permanently deletes any existing
channel 1 database for your environment.

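For the first option, the variable can be set in `.gitlab-ci.yml` (or as a
project CI/CD variable). A minimal sketch:

```yaml
# .gitlab-ci.yml
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  # Keep using the deprecated channel 1 in-cluster PostgreSQL database.
  AUTO_DEVOPS_POSTGRES_CHANNEL: "1"
```
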
## Error: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"

After upgrading your Kubernetes cluster to [v1.16+](stages.md#kubernetes-116),
you may encounter this message when deploying with Auto DevOps:

```plaintext
UPGRADE FAILED
Error: failed decoding reader into objects: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"
```

This can occur if your current deployments on the environment namespace were deployed with a
deprecated or removed API that doesn't exist in Kubernetes v1.16+. For example,
if [your in-cluster PostgreSQL was installed in a legacy way](#detected-an-existing-postgresql-database),
the resource was created via the `extensions/v1beta1` API. However, the Deployment resource
was moved to the `apps/v1` API in v1.16.

To recover such outdated resources, you must convert the current deployments by mapping legacy APIs
to newer APIs. There is a helper tool called [`mapkubeapis`](https://github.com/hickeyma/helm-mapkubeapis)
that works for this problem. Follow these steps to use the tool in Auto DevOps:

1. Modify your `.gitlab-ci.yml` with:

   ```yaml
   include:
     - template: Auto-DevOps.gitlab-ci.yml
     - remote: https://gitlab.com/shinya.maeda/ci-templates/-/raw/master/map-deprecated-api.gitlab-ci.yml

   variables:
     HELM_VERSION_FOR_MAPKUBEAPIS: "v2" # If you're using auto-deploy-image v2 or above, specify "v3".
   ```

1. Run the job `<environment-name>:map-deprecated-api`. Ensure that this job succeeds before moving
   to the next step. You should see something like the following output:

   ```shell
   2020/10/06 07:20:49 Found deprecated or removed Kubernetes API:
   "apiVersion: extensions/v1beta1
   kind: Deployment"
   Supported API equivalent:
   "apiVersion: apps/v1
   kind: Deployment"
   ```

1. Revert your `.gitlab-ci.yml` to the previous version. You no longer need to include the
   supplemental template `map-deprecated-api`.

1. Continue the deployments as usual.

## Error: error initializing: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached

As [announced in the official CNCF blog post](https://www.cncf.io/blog/2020/10/07/important-reminder-for-all-helm-users-stable-incubator-repos-are-deprecated-and-all-images-are-changing-location/),
the stable Helm chart repository was deprecated and removed on November 13th, 2020.
You may encounter this error after that date.

Some GitLab features had dependencies on the stable chart. To mitigate the impact, we changed them
to use new official repositories or the [Helm Stable Archive repository maintained by GitLab](https://gitlab.com/gitlab-org/cluster-integration/helm-stable-archive).
Auto Deploy contains [an example fix](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/merge_requests/127).

In Auto Deploy, `v1.0.6+` of `auto-deploy-image` no longer adds the deprecated stable repository to
the `helm` command. If you use a custom chart and it relies on the deprecated stable repository,
specify an older `auto-deploy-image` like this example:

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

.auto-deploy:
  image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v1.0.5"
```

Keep in mind that this approach stops working when the stable repository is removed,
so you must eventually fix your custom chart.

To fix your custom chart:

1. In your chart directory, update the `repository` value in your `requirements.yaml` file from:

   ```yaml
   repository: "https://kubernetes-charts.storage.googleapis.com/"
   ```

   to:

   ```yaml
   repository: "https://charts.helm.sh/stable"
   ```

1. In your chart directory, run `helm dep update .` using the same Helm major version as Auto DevOps.
1. Commit the changes for the `requirements.yaml` file.
1. If you previously had a `requirements.lock` file, commit the changes to the file.
   If you did not previously have a `requirements.lock` file in your chart,
   you do not need to commit the new one. This file is optional, but when present,
   it's used to verify the integrity of the downloaded dependencies.

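For context, a complete `requirements.yaml` dependency entry after the fix might look
like the following sketch. The `postgresql` dependency name and version are illustrative;
use the values already present in your own chart:

```yaml
# requirements.yaml (Helm v2 chart layout)
dependencies:
  - name: postgresql     # illustrative dependency; keep your chart's own name
    version: "8.2.1"     # illustrative version; keep your chart's own version
    repository: "https://charts.helm.sh/stable"
```
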
You can find more information in
[issue #263778, "Migrate PostgreSQL from stable Helm repository"](https://gitlab.com/gitlab-org/gitlab/-/issues/263778).

## Error: release .... failed: timed out waiting for the condition

When getting started with Auto DevOps, you may encounter this error when first
deploying your application:

```plaintext
INSTALL FAILED
PURGING CHART
Error: release staging failed: timed out waiting for the condition
```

This is most likely caused by a failed liveness (or readiness) probe attempted
during the deployment process. By default, these probes are run against the root
page of the deployed application on port 5000. If your application isn't configured
to serve anything at the root page, or is configured to run on a specific port
*other* than 5000, this check fails.

If it fails, you should see these failures in the events for the relevant
Kubernetes namespace. These events look like the following example:

```plaintext
LAST SEEN   TYPE      REASON                   OBJECT                                            MESSAGE
3m20s       Warning   Unhealthy                pod/staging-85db88dcb6-rxd6g                      Readiness probe failed: Get http://10.192.0.6:5000/: dial tcp 10.192.0.6:5000: connect: connection refused
3m32s       Warning   Unhealthy                pod/staging-85db88dcb6-rxd6g                      Liveness probe failed: Get http://10.192.0.6:5000/: dial tcp 10.192.0.6:5000: connect: connection refused
```

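One way to inspect these events is with `kubectl`, assuming you have access to the
cluster. The namespace below is a placeholder; substitute your environment's
actual namespace:

```shell
# List warning events (such as failed probes) in the environment's namespace.
kubectl get events --namespace <environment-namespace> --field-selector type=Warning
```
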
To change the port used for the liveness checks, pass
[custom values to the Helm chart](customize.md#customize-values-for-helm-chart)
used by Auto DevOps:

1. Create a directory and file at the root of your repository named `.gitlab/auto-deploy-values.yaml`.

1. Populate the file with the following content, replacing the port values with
   the actual port number your application is configured to use:

   ```yaml
   service:
     internalPort: <port_value>
     externalPort: <port_value>
   ```

1. Commit your changes.

After committing your changes, subsequent probes should use the newly defined ports.
The page that's probed can also be changed by overriding the `livenessProbe.path`
and `readinessProbe.path` values (shown in the
[default `values.yaml`](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/blob/master/assets/auto-deploy-app/values.yaml)
file) in the same fashion.
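A sketch of overriding the probe paths in the same values file, assuming your
application serves a health endpoint at `/healthz` (a placeholder; use whatever
endpoint your application actually serves):

```yaml
# .gitlab/auto-deploy-values.yaml
# /healthz is a placeholder; use the endpoint your application actually serves.
livenessProbe:
  path: "/healthz"
readinessProbe:
  path: "/healthz"
```
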