---
stage: none
group: unassigned
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---

# Backwards compatibility across updates

GitLab deployments can be broken down into many components. Updating GitLab is not atomic. Therefore, **many components must be backwards-compatible**.

## Common gotchas

In a sense, these scenarios are all transient states. But they can often persist for several hours in a live, production environment. Therefore we must treat them with the same care as permanent states.

### When modifying a Sidekiq worker

For example, when [changing arguments](sidekiq_style_guide.md#changing-the-arguments-for-a-worker):

- Is it ok if jobs are being enqueued with the old signature but executed by the new monthly release?
- Is it ok if jobs are being enqueued with the new signature but executed by the previous monthly release?

### When adding a new Sidekiq worker

Is it ok if these jobs don't get executed for several hours because [Sidekiq nodes are not yet updated](sidekiq_style_guide.md#adding-new-workers)?

### When modifying JavaScript

Is it ok when a browser has the new JavaScript code, but the Rails code is running the previous monthly release on:

- the REST API?
- the GraphQL API?
- internal APIs in controllers?

### When adding a pre-deployment migration

Is it ok if the pre-deployment migration has executed, but the web, Sidekiq, and API nodes are running the previous release?

### When adding a post-deployment migration

Is it ok if all GitLab nodes have been updated, but the post-deployment migrations don't get executed until a couple days later?

### When adding a background migration

Is it ok if all nodes have been updated, and then the post-deployment migrations get executed a couple days later, and then the background migrations take a week to finish?

### When upgrading a dependency like Rails

Is it ok that some nodes have the new Rails version, but some nodes have the old Rails version?

## A walkthrough of an update

Backward compatibility problems during updates are often very subtle. This is why it is worth
familiarizing yourself with:

- [Update instructions](../update/index.md)
- [Reference architectures](../administration/reference_architectures/index.md)
- [GitLab.com's architecture](https://about.gitlab.com/handbook/engineering/infrastructure/production/architecture/)
- [GitLab.com's upgrade pipeline](https://gitlab.com/gitlab-org/release/docs/blob/master/general/deploy/gitlab-com-deployer.md#upgrade-pipeline-default)

To illustrate how these problems arise, take a look at this example:

- **New**: the new version
- **Old**: the old version

In this example, you can imagine that we are updating by one monthly release. But refer to [How long must code be backwards-compatible?](#how-long-must-code-be-backwards-compatible).

| Update step | PostgreSQL DB | Web nodes | API nodes | Sidekiq nodes | Compatibility concerns |
| --- | --- | --- | --- | --- | --- |
| Initial state | Old | Old | Old | Old | |
| Ran pre-deployment migrations | New, except post-deploy migrations | Old | Old | Old | Rails code in Old is making DB calls to New |
| Update web nodes | New, except post-deploy migrations | New | Old | Old | JavaScript in New is making API calls to Old. Rails code in New is enqueuing jobs that are getting run by Sidekiq nodes in Old |
| Update API and Sidekiq nodes | New, except post-deploy migrations | New | New | New | Rails code in New is making DB calls without post-deployment migrations or background migrations |
| Run post-deployment migrations | New | New | New | New | Rails code in New is making DB calls without background migrations |
| Background migrations finish | New | New | New | New | |

This example is not exhaustive. GitLab can be deployed in many different ways. Even individual update steps are not atomic. For example, with rolling deploys, nodes within a group are temporarily on different versions. You should assume that a lot of time passes between update steps. This is often true on GitLab.com.

## How long must code be backwards-compatible?

For users following [zero-downtime update instructions](../update/index.md#upgrading-without-downtime), the answer is one monthly release. For example:

- 13.11 => 13.12
- 13.12 => 14.0
- 14.0 => 14.1

For GitLab.com, there can be multiple tiny version updates per day, so GitLab.com doesn't constrain how far changes must be backwards-compatible.

Many users [skip some monthly releases](../update/index.md#upgrading-to-a-new-major-version), for example:

- 13.0 => 13.12

These users accept some downtime during the update. Unfortunately we can't ignore this case completely. For example, 13.12 may execute Sidekiq jobs from 13.0, which illustrates why [we avoid removing arguments from jobs until a major release](sidekiq_style_guide.md#deprecate-and-remove-an-argument). The main question is: Will the deployment get to a good state after the update is complete?

## What kind of components can GitLab be broken down into?

The [50,000-user reference architecture](../administration/reference_architectures/50k_users.md) runs GitLab on 48+ nodes. GitLab.com is [bigger than that](https://about.gitlab.com/handbook/engineering/infrastructure/production/architecture/), plus a portion of the [infrastructure runs on Kubernetes](https://about.gitlab.com/handbook/engineering/infrastructure/production/kubernetes/gitlab-com/), plus there is a ["canary" stage which receives updates first](https://about.gitlab.com/handbook/engineering/#sts=Canary%20Testing).

But the problem isn't just that there are many nodes. The bigger problem is that a deployment can be divided into different contexts, and GitLab.com is not the only deployment that is divided this way. Some possible divisions:

- "Canary web app nodes": Handle non-API requests from a subset of users
- "Git app nodes": Handle Git requests
- "Web app nodes": Handle web requests
- "API app nodes": Handle API requests
- "Sidekiq app nodes": Handle Sidekiq jobs
- "PostgreSQL database": Handle internal PostgreSQL calls
- "Redis database": Handle internal Redis calls
- "Gitaly nodes": Handle internal Gitaly calls

During an update, there will be [two different versions of GitLab running in different contexts](#a-walkthrough-of-an-update). For example, [a web node may enqueue jobs which get run on an old Sidekiq node](#when-modifying-a-sidekiq-worker).

## Doesn't the order of update steps matter?

Yes! We have specific instructions for [zero-downtime updates](../update/index.md#upgrading-without-downtime) because following them allows us to ignore some permutations of compatibility problems. This is why we don't worry about Rails code making DB calls to an old PostgreSQL database schema.

## I've identified a potential backwards compatibility problem, what can I do about it?

### Coordinate

For major or minor version updates of Rails or Puma:

- Engage the Quality team to thoroughly test the MR.
- Notify the `@gitlab-org/release/managers` on the MR prior to merging.

### Feature flags

[Feature flags](feature_flags/index.md) are a tool, not a strategy, for handling backward compatibility problems.

For example, it is safe to add a new feature with frontend and API changes, if both
frontend and API changes are disabled by default. This can be done with multiple
merge requests, merged in any order. After all the changes are deployed to
GitLab.com, the feature can be enabled in ChatOps and validated on GitLab.com.
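
A minimal sketch of this approach, assuming a hypothetical `new_widget_api` flag, a hypothetical `new_widget_data` field, and a simplified controller (not actual GitLab code); the point is that the new behaviour ships disabled by default and is only toggled at runtime:

```ruby
class WidgetsController < ApplicationController
  def show
    widget = { id: params[:id], name: 'example' }

    # Expose the new field only while the (disabled-by-default) flag is enabled,
    # so the change can be merged and deployed without affecting older callers.
    widget[:new_widget_data] = 'extra payload' if Feature.enabled?(:new_widget_api)

    render json: widget
  end
end
```

Because the flag is toggled at runtime (for example, through ChatOps) after all the changes are deployed, enabling it doesn't depend on the order in which individual nodes were updated.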

**However, it is not necessarily safe to enable the feature by default.** If the
feature flag is removed, or the default is flipped to enabled, in the same release
where the code was merged, then customers performing [zero-downtime updates](../update/zero_downtime.md)
will end up running the new frontend code against the previous release's API.

If you're not sure whether it's safe to enable all the changes at once, then one
option is to enable the API in the **current** release and enable the frontend
change in the **next** release. This is an example of the [Expand and contract pattern](#expand-and-contract-pattern).

Or you may be able to avoid the one-release delay entirely by modifying the frontend to
[degrade gracefully](#graceful-degradation) against the previous release's API.

### Graceful degradation

As an example, when adding a new feature with frontend and API changes, it may be possible to write the frontend such that the new feature degrades gracefully against old API responses. This may help avoid needing to spread a change over 3 releases.

### Expand and contract pattern

One way to guarantee zero-downtime updates for on-premise instances is to follow the
[expand and contract pattern](https://martinfowler.com/bliki/ParallelChange.html).

This means that every breaking change is broken down into three phases: expand, migrate, and contract.

1. **expand**: a breaking change is introduced while keeping the software backward-compatible.
1. **migrate**: all consumers are updated to make use of the new implementation.
1. **contract**: backward compatibility is removed.

Those three phases **must be part of different milestones** to allow zero-downtime updates.

Depending on the support level for the feature, the contract phase could be delayed until the next major release.

## Expand and contract examples

Route changes, changing Sidekiq worker parameters, and database migrations are all good examples of breaking changes.
Let's see how we can handle them safely.

### Route changes

When changing routing, we must make sure that a route generated by the new version can be served by the old version, and vice versa.
[As you can see](#some-links-to-issues-and-mrs-were-broken), not doing so can lead to an outage.
This type of change may look like an immediate switch between the two implementations. However,
especially with the canary stage, there is an extended period of time during which both versions of the code
coexist in production.

1. **expand**: a new route is added, pointing to the same controller as the old one. But nothing in the application generates links for the new routes.
1. **migrate**: now that every machine in the fleet can understand the new route, we can generate links with the new routing.
1. **contract**: the old route can be safely removed. (If the old route was likely to be widely shared, like the link to a repository file, we might want to add redirects and keep the old route for a longer period.)
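
A minimal sketch of the expand phase in Rails routing, with hypothetical paths and route names (not real GitLab routes):

```ruby
# config/routes.rb -- "expand" phase: both the old and the new path resolve to the
# same controller action, so links generated by either application version keep working.
Rails.application.routes.draw do
  # New route, introduced in this release. Nothing generates links to it yet.
  get 'widgets/-/overview/:id', to: 'widgets#show', as: :widget_overview

  # Old route, kept until the contract phase (or longer, behind a redirect, if the
  # URL is likely to have been shared widely).
  get 'widgets/overview/:id', to: 'widgets#show', as: :legacy_widget_overview
end
```

In the migrate phase, only the link helpers switch to the new route; the contract phase removes (or redirects) the old `get` entry.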

### Changing Sidekiq worker's parameters

This topic is explained in detail in [Sidekiq Compatibility across Updates](sidekiq_style_guide.md#sidekiq-compatibility-across-updates).

When we need to add a new parameter to a Sidekiq worker class, we can split this into the following steps:

1. **expand**: the worker class adds a new parameter with a default value.
1. **migrate**: we add the new parameter to all the invocations of the worker.
1. **contract**: we remove the default value.

At first glance, it may seem safe to bundle the expand and migrate steps into a single milestone, but this causes an outage if Puma restarts before Sidekiq:
Puma enqueues jobs with an extra parameter that the old Sidekiq cannot handle.
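
A minimal sketch of the expand phase, using a hypothetical worker and argument name (the plain `Sidekiq::Worker` include stands in for GitLab's own worker base classes):

```ruby
require 'sidekiq'

# "Expand" phase: the new argument has a default, so jobs that were enqueued by
# nodes still running the previous release (with only `project_id`) execute fine.
class ExampleWorker
  include Sidekiq::Worker

  def perform(project_id, new_option = nil)
    # Old-style jobs arrive without new_option, so fall back to the old behaviour.
    new_option ||= 'legacy-default'

    Sidekiq.logger.info("Processing project #{project_id} with option #{new_option}")
  end
end
```

Only after every node runs this version is it safe to start enqueuing jobs with the extra argument (migrate), and only a milestone later can the default value be removed (contract).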

### Database migrations

The following chart is a simplified visual representation of a deployment. It helps us understand how the expand and contract pattern maps onto our migration strategy.

There's a special consideration here: using our post-deployment migrations framework allows us to bundle all three phases into one milestone.

```mermaid
gantt
  title Deployment
  dateFormat  HH:mm

  section Deploy box
  Run migrations           :done, migr, after schemaA, 2m
  Run post-deployment migrations     :postmigr, after mcvn  , 2m

  section Database
    Schema A      :done, schemaA, 00:00  , 1h
    Schema B      :crit, schemaB, after migr, 58m
    Schema C      : schemaC, after postmigr, 1h

  section Machine A
    Version N      :done, mavn, 00:00 , 75m
    Version N+1      : after mavn, 105m

  section Machine B
    Version N      :done, mbvn, 00:00 , 105m
    Version N+1      : mbdone, after mbvn, 75m

  section Machine C
    Version N      :done, mcvn, 00:00 , 2h
    Version N+1      : mbcdone, after mcvn, 1h
```

If we look at this chart from a database point of view, we can see that two schema migrations feed into a single GitLab deployment:

1. from `Schema A` to `Schema B`
1. from `Schema B` to `Schema C`

And these schema changes align with the application deployment:

1. At the beginning, we have `Version N` on `Schema A`.
1. Then we have a _long_ transition period with both `Version N` and `Version N+1` on `Schema B`.
1. When only `Version N+1` is left on `Schema B`, the schema changes again.
1. Finally, we have `Version N+1` on `Schema C`.

With all those details in mind, let's imagine we need to replace a query, and the new query needs a different index to support it.

1. **expand**: this is the `Schema A` to `Schema B` migration. We add the new index, but the application ignores it for now.
1. **migrate**: this is the `Version N` to `Version N+1` application deployment. The new code is deployed; from this point on, only the new query runs.
1. **contract**: this is the `Schema B` to `Schema C` migration (a post-deployment migration). Nothing uses the old index anymore, so we can safely remove it.
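
A sketch of what the expand and contract migrations could look like, assuming GitLab's concurrent-index migration helpers and hypothetical table, column, and index names:

```ruby
# frozen_string_literal: true

# Expand: a regular (pre-deployment) migration adds the index the new query needs.
class AddIndexForNewQuery < ActiveRecord::Migration[6.0]
  include Gitlab::Database::MigrationHelpers

  disable_ddl_transaction!

  def up
    add_concurrent_index :issues, :closed_at, name: 'index_issues_on_closed_at'
  end

  def down
    remove_concurrent_index_by_name :issues, 'index_issues_on_closed_at'
  end
end

# Contract: a post-deployment migration drops the index only the old query used.
# Because post-deployment migrations run after the application code is deployed,
# both steps can ship in the same milestone.
class RemoveIndexForOldQuery < ActiveRecord::Migration[6.0]
  include Gitlab::Database::MigrationHelpers

  disable_ddl_transaction!

  def up
    remove_concurrent_index_by_name :issues, 'index_issues_on_state'
  end

  def down
    add_concurrent_index :issues, :state, name: 'index_issues_on_state'
  end
end
```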

This is only an example. More complex migrations, especially when background migrations are needed, may
require more than one milestone. For details, refer to our [migration style guide](migration_style_guide.md).

## Examples of previous incidents

### Some links to issues and MRs were broken

When we moved MR routes, users on the new servers were redirected to the new URLs. When these users shared the new URLs in
Markdown (or anywhere else), the links were broken for users on the old servers.

For more information, see [the relevant issue](https://gitlab.com/gitlab-org/gitlab/-/issues/118840).

### Stale cache in issue or merge request descriptions and comments

We bumped the Markdown cache version and found a bug when a user edited a description or comment which was generated from a different Markdown
cache version. The cached HTML wasn't generated properly after saving. In most cases, this wouldn't have happened, because users would have
viewed the rendered Markdown before clicking **Edit**, which refreshes the Markdown cache. But because we run mixed versions, this was
more likely to happen: another user on a different version could view the same page and refresh the cache to the other version behind the scenes.

For more information, see [the relevant issue](https://gitlab.com/gitlab-org/gitlab/-/issues/208255).

### Project service templates incorrectly copied

We changed the column which indicates whether a service is a template. When we create services, we copy attributes from the template
and set this column to `false`. The old servers were still updating the old column, but that was fine because we had a DB trigger
that updated the new column from the old one. The new servers, however, only updated the new column, so that same trigger
now worked against us and overwrote the new column with the stale value from the old one.

For more information, see [the relevant issue](https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/9176).

### Sidebar wasn't loading for some users

We changed the data type of one GraphQL field. When a user opened an issue page from the new servers and the GraphQL AJAX request went
to the old servers, a type mismatch happened, which resulted in a JavaScript error that prevented the sidebar from loading.

For more information, see [the relevant issue](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/1772).

### CI artifact uploads were failing

We added a `NOT NULL` constraint to a column and marked it as a `NOT VALID` constraint so that it is not enforced on existing rows.
But even with that, this was still a problem because the old servers were still inserting new rows with null values.

For more information, see [the relevant issue](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/1944).

### Downtime on release features between canary and production deployment

To deliver a new feature, we added a new column to an existing table with a `NOT NULL` constraint and without
specifying a default value. In other words, this requires the application to set a value for the column.

The older version of the application didn't set a value for the column, since the entity/concept didn't
exist before.

The problem started right after the canary deployment was complete. At that moment,
the database migration (to add the column) had successfully run and the canary instance had started using
the new application code, so QA passed. Unfortunately, the production
instance was still running the older code, so it started failing to insert new release entries.

For more information, see [this issue related to the Releases API](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/64151).

### Builds failing due to varying deployment times across node types

In [one production issue](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2442),
CI builds that used the `parallel` keyword and depended on the
variable `CI_NODE_TOTAL` being an integer failed. This happened because, after a user pushed a commit:

1. New code: Sidekiq created a new pipeline and new build. `build.options[:parallel]` is a `Hash`.
1. Old code: Runners requested a job from an API node that was running the previous version.
1. As a result, the [new code](https://gitlab.com/gitlab-org/gitlab/-/blob/42b82a9a3ac5a96f9152aad6cbc583c42b9fb082/app/models/concerns/ci/contextable.rb#L104)
was not run on the API server. The runner's request failed because the
older API server tried to return the `CI_NODE_TOTAL` CI/CD variable, but
instead of sending an integer value (for example, 9), it sent a serialized
`Hash` value (`{:number=>9, :total=>9}`).

If you look at the [deployment pipeline](https://ops.gitlab.net/gitlab-com/gl-infra/deployer/-/pipelines/202212),
you see all nodes were updated in parallel:

![GitLab.com deployment pipeline](img/deployment_pipeline_v13_3.png)

However, even though the updates started around the same time, the completion time varied significantly:

| Node type | Duration (min) |
|-----------|----------------|
| API       | 54             |
| Sidekiq   | 21             |
| K8S       | 8              |

Builds that used the `parallel` keyword and depended on `CI_NODE_TOTAL`
and `CI_NODE_INDEX` would fail in the window after the Sidekiq nodes were
updated but before the API nodes were. Because Kubernetes (K8S) also runs Sidekiq pods, the window could
have been as long as 46 minutes or as short as 33 minutes. Either way,
having a feature flag to turn on after the deployment finished would
have prevented this from happening.