---
stage: Verify
group: Runner
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---

# Using Graphical Processing Units (GPUs)

> Introduced in GitLab Runner 13.9.

GitLab Runner supports the use of Graphical Processing Units (GPUs).
The following sections describe the required configuration to enable GPUs
for various executors.

## Shell executor

No runner configuration is needed.

## Docker executor

Use the [`gpus` configuration option in the `runners.docker` section](advanced-configuration.md#the-runnersdocker-section).
For example:

```toml
[runners.docker]
    gpus = "all"
```
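
The value uses the same format as the `docker run --gpus` flag, so device
selectors should also work if you only want to expose specific GPUs to jobs.
A minimal sketch, assuming you want to expose only the first GPU (the
`device=0` selector is illustrative):

```toml
[runners.docker]
    # Expose only GPU 0 to jobs; the selector follows `docker run --gpus` syntax.
    gpus = "device=0"
```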

## Docker Machine executor

See the [documentation for the GitLab fork of Docker Machine](../executors/docker_machine.md#using-gpus-on-google-compute-engine).

## Kubernetes executor

No runner configuration should be needed. Be sure to check that
[the node selector chooses a node with GPU support](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/).
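
If your GPU nodes carry a distinguishing label, you can steer jobs to them with
the executor's `node_selector` option. A minimal sketch, assuming a hypothetical
`accelerator=nvidia` node label (replace it with whatever label your cluster
applies to its GPU nodes):

```toml
[runners.kubernetes]
  [runners.kubernetes.node_selector]
    # Hypothetical label; use the label that identifies your GPU nodes.
    "accelerator" = "nvidia"
```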

GitLab Runner has been [tested on Amazon Elastic Kubernetes Service](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/4355)
with [GPU-enabled instances](https://docs.aws.amazon.com/dlami/latest/devguide/gpu.html).

## Validate that GPUs are enabled

You can use runners with NVIDIA GPUs. One way to verify that a GPU is
enabled for a CI job is to run `nvidia-smi` at the beginning of the script.
For example:

```yaml
train:
  script:
    - nvidia-smi
```
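
With the Docker executor, the job image must provide `nvidia-smi` (or have it
injected by the NVIDIA container runtime); NVIDIA's CUDA base images are a
common choice. A minimal sketch, assuming a public `nvidia/cuda` image (the
tag is illustrative):

```yaml
train:
  image: nvidia/cuda:11.0-base   # illustrative tag; any CUDA base image works
  script:
    - nvidia-smi
```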

If GPUs are enabled, the output of `nvidia-smi` displays the available devices. In
the following example, a single NVIDIA Tesla P4 is enabled:

```shell
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06    Driver Version: 450.51.06    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla P4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   43C    P0    22W /  75W |      0MiB /  7611MiB |      3%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```

If the hardware does not support a GPU, `nvidia-smi` fails, either because
the tool is missing or because it cannot communicate with the driver:

```shell
modprobe: ERROR: could not insert 'nvidia': No such device
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
```