=========================
Compiling CUDA with clang
=========================

.. contents::
   :local:

Introduction
============

This document describes how to compile CUDA code with clang, and gives some
details about LLVM and clang's CUDA implementations.

This document assumes a basic familiarity with CUDA. Information about CUDA
programming can be found in the
`CUDA programming guide
<http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html>`_.

Compiling CUDA Code
===================

Prerequisites
-------------

CUDA is supported since LLVM 3.9. Clang currently supports CUDA 7.0 through
11.5. If clang detects a newer CUDA version, it will issue a warning and will
attempt to use the detected CUDA SDK as if it were CUDA 11.5.

Before you build CUDA code, you'll need to have installed the CUDA SDK.  See
`NVIDIA's CUDA installation guide
<https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html>`_ for
details.  Note that clang `may not support
<https://bugs.llvm.org/show_bug.cgi?id=26966>`_ the CUDA toolkit as installed by
some Linux package managers. Clang does attempt to deal with specific details of
CUDA installation on a handful of common Linux distributions, but in general the
most reliable way to make it work is to install CUDA in a single directory from
NVIDIA's `.run` package and specify its location via the `--cuda-path=...`
argument.

CUDA compilation is supported on Linux. Compilation on MacOS and Windows may or
may not work and currently has no maintainers.

Invoking clang
--------------

Invoking clang for CUDA compilation works similarly to compiling regular C++.
You just need to be aware of a few additional flags.

You can use `this <https://gist.github.com/855e277884eb6b388cd2f00d956c2fd4>`_
program as a toy example.  Save it as ``axpy.cu``.  (Clang detects that you're
compiling CUDA code by noticing that your filename ends with ``.cu``.
Alternatively, you can pass ``-x cuda``.)

To build and run, run the following commands, filling in the parts in angle
brackets as described below:

.. code-block:: console

  $ clang++ axpy.cu -o axpy --cuda-gpu-arch=<GPU arch> \
      -L<CUDA install path>/<lib64 or lib>             \
      -lcudart_static -ldl -lrt -pthread
  $ ./axpy
  y[0] = 2
  y[1] = 4
  y[2] = 6
  y[3] = 8

On MacOS, replace `-lcudart_static` with `-lcudart`; otherwise, you may get
"CUDA driver version is insufficient for CUDA runtime version" errors when you
run your program.

* ``<CUDA install path>`` -- the directory where you installed CUDA SDK.
  Typically, ``/usr/local/cuda``.

  Pass e.g. ``-L/usr/local/cuda/lib64`` if compiling in 64-bit mode; otherwise,
  pass e.g. ``-L/usr/local/cuda/lib``.  (In CUDA, the device code and host code
  always have the same pointer widths, so if you're compiling 64-bit code for
  the host, you're also compiling 64-bit code for the device.) Note that as of
  v10.0, the CUDA SDK `no longer supports compilation of 32-bit
  applications <https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#deprecated-features>`_.

* ``<GPU arch>`` -- the `compute capability
  <https://developer.nvidia.com/cuda-gpus>`_ of your GPU. For example, if you
  want to run your program on a GPU with compute capability of 3.5, specify
  ``--cuda-gpu-arch=sm_35``.

  Note: You cannot pass ``compute_XX`` as an argument to ``--cuda-gpu-arch``;
  only ``sm_XX`` is currently supported.  However, clang always includes PTX in
  its binaries, so e.g. a binary compiled with ``--cuda-gpu-arch=sm_30`` would be
  forwards-compatible with e.g. ``sm_35`` GPUs.

  You can pass ``--cuda-gpu-arch`` multiple times to compile for multiple archs.

The `-L` and `-l` flags only need to be passed when linking.  When compiling,
you may also need to pass ``--cuda-path=/path/to/cuda`` if you didn't install
the CUDA SDK into ``/usr/local/cuda`` or ``/usr/local/cuda-X.Y``.
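
For example, a build that targets two GPU architectures and a CUDA SDK
installed outside the default location might look like this (the paths and
architectures below are placeholders; substitute your own):

.. code-block:: console

  $ clang++ axpy.cu -o axpy \
      --cuda-path=/opt/cuda-11.5 \
      --cuda-gpu-arch=sm_52 --cuda-gpu-arch=sm_70 \
      -L/opt/cuda-11.5/lib64 -lcudart_static -ldl -lrt -pthread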

Flags that control numerical code
---------------------------------

If you're using GPUs, you probably care about making numerical code run fast.
GPU hardware allows for more control over numerical operations than most CPUs,
but this results in more compiler options for you to juggle.

Flags you may wish to tweak include:

* ``-ffp-contract={on,off,fast}`` (defaults to ``fast`` on host and device when
  compiling CUDA).  Controls whether the compiler emits fused multiply-add
  operations.

  * ``off``: never emit fma operations, and prevent ptxas from fusing multiply
    and add instructions.
  * ``on``: fuse multiplies and adds within a single statement, but never
    across statements (C11 semantics).  Prevent ptxas from fusing other
    multiplies and adds.
  * ``fast``: fuse multiplies and adds wherever profitable, even across
    statements.  Doesn't prevent ptxas from fusing additional multiplies and
    adds.

  Fused multiply-add instructions can be much faster than the unfused
  equivalents, but because the intermediate result in an fma is not rounded,
  this flag can affect numerical code (see the example after this list).

* ``-fcuda-flush-denormals-to-zero`` (default: off).  When this is enabled,
  floating point operations may flush `denormal
  <https://en.wikipedia.org/wiki/Denormal_number>`_ inputs and/or outputs to 0.
  Operations on denormal numbers are often much slower than the same operations
  on normal numbers.

* ``-fcuda-approx-transcendentals`` (default: off).  When this is enabled, the
  compiler may emit calls to faster, approximate versions of transcendental
  functions, instead of using the slower, fully IEEE-compliant versions.  For
  example, this flag allows clang to emit the ptx ``sin.approx.f32``
  instruction.

  This is implied by ``-ffast-math``.
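
As an illustration of ``-ffp-contract`` (the kernel below is hypothetical, not
taken from the toy example above), the multiply and add it contains may be
fused into a single fma instruction under the default ``-ffp-contract=fast``,
but are kept as separately-rounded operations under ``-ffp-contract=off``:

.. code-block:: c++

  // Whether a * x[i] + y[i] becomes an fma depends on -ffp-contract.
  __global__ void axpy_like(float a, const float* x, const float* y,
                            float* out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    out[i] = a * x[i] + y[i];  // candidate for fusion into a single fma
  }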

Standard library support
========================

In clang and nvcc, most of the C++ standard library is not supported on the
device side.

``<math.h>`` and ``<cmath>``
----------------------------

In clang, ``math.h`` and ``cmath`` are available and `pass
<https://github.com/llvm/llvm-test-suite/blob/main/External/CUDA/math_h.cu>`_
`tests
<https://github.com/llvm/llvm-test-suite/blob/main/External/CUDA/cmath.cu>`_
adapted from libc++'s test suite.

In nvcc, ``math.h`` and ``cmath`` are mostly available.  Versions of ``::foof``
in namespace std (e.g. ``std::sinf``) are not available, and where the standard
calls for overloads that take integral arguments, these are usually not
available.

.. code-block:: c++

  #include <math.h>
  #include <cmath>

  // clang is OK with everything in this function.
  __device__ void test() {
    std::sin(0.); // nvcc - ok
    std::sin(0);  // nvcc - error, because no std::sin(int) overload is available.
    sin(0);       // nvcc - same as above.

    sinf(0.);       // nvcc - ok
    std::sinf(0.);  // nvcc - no such function
  }

``<std::complex>``
------------------

nvcc does not officially support ``std::complex``.  It's an error to use
``std::complex`` in ``__device__`` code, but it often works in ``__host__
__device__`` code due to nvcc's interpretation of the "wrong-side rule" (see
below).  However, we have heard from implementers that it's possible to get
into situations where nvcc will omit a call to an ``std::complex`` function,
especially when compiling without optimizations.

As of 2016-11-16, clang supports ``std::complex`` without these caveats.  It is
tested with libstdc++ 4.8.5 and newer, but is known to work only with libc++
newer than 2016-11-16.
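
For example, given a standard library that satisfies the version caveats above,
a ``__device__`` function compiled with clang can use ``std::complex``
directly; a minimal sketch:

.. code-block:: c++

  #include <complex>

  // Computes |z|^2 on the device.  Relies on clang's std::complex support
  // described above, so the exact standard library version matters.
  __device__ float norm_squared(std::complex<float> z) {
    return z.real() * z.real() + z.imag() * z.imag();
  }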

``<algorithm>``
---------------

In C++14, many useful functions from ``<algorithm>`` (notably, ``std::min`` and
``std::max``) become constexpr.  You can therefore use these in device code,
when compiling with clang.
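
For instance, assuming you compile with ``-std=c++14`` or newer, a
``__device__`` function can call ``std::min`` and ``std::max`` (the function
below is just an illustration):

.. code-block:: c++

  #include <algorithm>

  // std::min and std::max are constexpr in C++14, so clang can call them
  // from device code.
  __device__ int clamp_lane(int x) {
    return std::min(std::max(x, 0), 31);
  }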

Detecting clang vs NVCC from code
=================================

Although clang's CUDA implementation is largely compatible with NVCC's, you may
still want to detect when you're compiling CUDA code specifically with clang.

This is tricky, because NVCC may invoke clang as part of its own compilation
process!  For example, NVCC uses the host compiler's preprocessor when
compiling for device code, and that host compiler may in fact be clang.

When clang is actually compiling CUDA code -- rather than being used as a
subtool of NVCC's -- it defines the ``__CUDA__`` macro.  ``__CUDA_ARCH__`` is
defined only in device mode (but will be defined if NVCC is using clang as a
preprocessor).  So you can use the following incantations to detect clang CUDA
compilation, in host and device modes:

.. code-block:: c++

  #if defined(__clang__) && defined(__CUDA__) && !defined(__CUDA_ARCH__)
  // clang compiling CUDA code, host mode.
  #endif

  #if defined(__clang__) && defined(__CUDA__) && defined(__CUDA_ARCH__)
  // clang compiling CUDA code, device mode.
  #endif

Both clang and nvcc define ``__CUDACC__`` during CUDA compilation.  You can
detect NVCC specifically by looking for ``__NVCC__``.
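
Putting these together, one way to tell the two compilers apart is:

.. code-block:: c++

  #if defined(__NVCC__)
  // Compiled by nvcc (which may still be using clang as a subtool).
  #elif defined(__clang__) && defined(__CUDA__)
  // Compiled by clang's native CUDA support.
  #endif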

Dialect Differences Between clang and nvcc
==========================================

There is no formal CUDA spec, and clang and nvcc speak slightly different
dialects of the language.  Below, we describe some of the differences.

This section is painful; hopefully you can skip this section and live your life
blissfully unaware.

Compilation Models
------------------

Most of the differences between clang and nvcc stem from the different
compilation models used by clang and nvcc.  nvcc uses *split compilation*,
which works roughly as follows:

 * Run a preprocessor over the input ``.cu`` file to split it into two source
   files: ``H``, containing source code for the host, and ``D``, containing
   source code for the device.

 * For each GPU architecture ``arch`` that we're compiling for, do:

   * Compile ``D`` using nvcc proper.  The result of this is a ``ptx`` file,
     ``P_arch``.

   * Optionally, invoke ``ptxas``, the PTX assembler, to generate a file,
     ``S_arch``, containing GPU machine code (SASS) for ``arch``.

 * Invoke ``fatbin`` to combine all ``P_arch`` and ``S_arch`` files into a
   single "fat binary" file, ``F``.

 * Compile ``H`` using an external host compiler (gcc, clang, or whatever you
   like).  ``F`` is packaged up into a header file which is force-included into
   ``H``; nvcc generates code that calls into this header to e.g. launch
   kernels.

clang uses *merged parsing*.  This is similar to split compilation, except all
of the host and device code is present and must be semantically correct in both
compilation steps.

  * For each GPU architecture ``arch`` that we're compiling for, do:

    * Compile the input ``.cu`` file for device, using clang.  ``__host__`` code
      is parsed and must be semantically correct, even though we're not
      generating code for the host at this time.

      The output of this step is a ``ptx`` file ``P_arch``.

    * Invoke ``ptxas`` to generate a SASS file, ``S_arch``.  Note that, unlike
      nvcc, clang always generates SASS code.

  * Invoke ``fatbin`` to combine all ``P_arch`` and ``S_arch`` files into a
    single fat binary file, ``F``.

  * Compile the input ``.cu`` file for host, using clang.  ``__device__`` code
    is parsed and must be semantically correct, even though we're not
    generating code for the device at this time.

    ``F`` is passed to this compilation, and clang includes it in a special ELF
    section, where it can be found by tools like ``cuobjdump``.

(You may ask at this point, why does clang need to parse the input file
multiple times?  Why not parse it just once, and then use the AST to generate
code for the host and each device architecture?

Unfortunately this can't work because we have to define different macros during
host compilation and during device compilation for each GPU architecture.)
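
For example, a function like the following preprocesses differently for the
host and for each device architecture, so no single parse could serve all
targets (the thresholds here are made up for illustration):

.. code-block:: c++

  __host__ __device__ int preferred_unroll_factor() {
  #if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 700
    return 8;   // device compilation for sm_70 and newer
  #elif defined(__CUDA_ARCH__)
    return 4;   // device compilation for older architectures
  #else
    return 1;   // host compilation
  #endif
  }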

clang's approach allows it to be highly robust to C++ edge cases, as it doesn't
need to decide at an early stage which declarations to keep and which to throw
away.  But it has some consequences you should be aware of.

Overloading Based on ``__host__`` and ``__device__`` Attributes
---------------------------------------------------------------

Let "H", "D", and "HD" stand for "``__host__`` functions", "``__device__``
functions", and "``__host__ __device__`` functions", respectively.  Functions
with no attributes behave the same as H.

nvcc does not allow you to create H and D functions with the same signature:

.. code-block:: c++

  // nvcc: error - function "foo" has already been defined
  __host__ void foo() {}
  __device__ void foo() {}

However, nvcc allows you to "overload" H and D functions with different
signatures:

.. code-block:: c++

  // nvcc: no error
  __host__ void foo(int) {}
  __device__ void foo() {}

In clang, the ``__host__`` and ``__device__`` attributes are part of a
function's signature, and so it's legal to have H and D functions with
(otherwise) the same signature:

.. code-block:: c++

  // clang: no error
  __host__ void foo() {}
  __device__ void foo() {}

HD functions cannot be overloaded by H or D functions with the same signature:

.. code-block:: c++

  // nvcc: error - function "foo" has already been defined
  // clang: error - redefinition of 'foo'
  __host__ __device__ void foo() {}
  __device__ void foo() {}

  // nvcc: no error
  // clang: no error
  __host__ __device__ void bar(int) {}
  __device__ void bar() {}

When resolving an overloaded function, clang considers the host/device
attributes of the caller and callee.  These are used as a tiebreaker during
overload resolution.  See `IdentifyCUDAPreference
<https://clang.llvm.org/doxygen/SemaCUDA_8cpp.html>`_ for the full set of rules,
but at a high level they are:

 * D functions prefer to call other Ds.  HDs are given lower priority.

 * Similarly, H functions prefer to call other Hs, or ``__global__`` functions
   (with equal priority).  HDs are given lower priority.

 * HD functions prefer to call other HDs.

   When compiling for device, HDs will call Ds with lower priority than HD, and
   will call Hs with still lower priority.  If it's forced to call an H, the
   program is malformed if we emit code for this HD function.  We call this the
   "wrong-side rule"; see the example below.

   The rules are symmetrical when compiling for host.

Some examples:

.. code-block:: c++

   __host__ void foo();
   __device__ void foo();

   __host__ void bar();
   __host__ __device__ void bar();

   __host__ void test_host() {
     foo();  // calls H overload
     bar();  // calls H overload
   }

   __device__ void test_device() {
     foo();  // calls D overload
     bar();  // calls HD overload
   }

   __host__ __device__ void test_hd() {
     foo();  // calls H overload when compiling for host, otherwise D overload
     bar();  // always calls HD overload
   }

Wrong-side rule example:

.. code-block:: c++

  __host__ void host_only();

  // We don't codegen inline functions unless they're referenced by a
  // non-inline function.  inline_hd1() is called only from the host side, so
  // does not generate an error.  inline_hd2() is called from the device side,
  // so it generates an error.
  inline __host__ __device__ void inline_hd1() { host_only(); }  // no error
  inline __host__ __device__ void inline_hd2() { host_only(); }  // error

  __host__ void host_fn() { inline_hd1(); }
  __device__ void device_fn() { inline_hd2(); }

  // This function is not inline, so it's always codegen'ed on both the host
  // and the device.  Therefore, it generates an error.
  __host__ __device__ void not_inline_hd() { host_only(); }

For the purposes of the wrong-side rule, templated functions also behave like
``inline`` functions: They aren't codegen'ed unless they're instantiated
(usually as part of the process of invoking them).
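
Continuing the example above, a templated HD function triggers the wrong-side
rule only for the sides on which it is actually instantiated (a small sketch;
``host_only`` is the same hypothetical declaration used above):

.. code-block:: c++

  __host__ void host_only();

  // Only an error if some __device__ code instantiates template_hd; when it
  // is called from host code alone, no device code is generated for it.
  template <typename T>
  __host__ __device__ void template_hd(T) { host_only(); }

  __host__ void host_caller() { template_hd(0); }  // no error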

clang's behavior with respect to the wrong-side rule matches nvcc's, except
nvcc only emits a warning for ``not_inline_hd``; device code is allowed to call
``not_inline_hd``.  In its generated code, nvcc may omit ``not_inline_hd``'s
call to ``host_only`` entirely, or it may try to generate code for
``host_only`` on the device.  What you get seems to depend on whether or not
the compiler chooses to inline ``host_only``.

Member functions, including constructors, may be overloaded using H and D
attributes.  However, destructors cannot be overloaded.
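
A brief sketch of what this permits (the class here is hypothetical):

.. code-block:: c++

  struct Buffer {
    // Constructors may be overloaded on __host__ / __device__ ...
    __host__ Buffer() { /* host-side setup */ }
    __device__ Buffer() { /* device-side setup */ }
    // ... but there can be only one destructor, shared by both sides.
    __host__ __device__ ~Buffer() {}
  };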

Using a Different Class on Host/Device
--------------------------------------

Occasionally you may want to have a class with different host/device versions.

If all of the class's members are the same on the host and device, you can just
provide overloads for the class's member functions.
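
For example, the hypothetical class below keeps the same data members on both
sides and only overloads a member function.  (This relies on clang's
attribute-based overloading described above, so it won't build with nvcc.)

.. code-block:: c++

  struct Accumulator {
    float total;
    // Same layout on host and device; only the member function differs.
    __host__ void add(float x) { total += x; }
    __device__ void add(float x) { atomicAdd(&total, x); }
  };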

However, if you want your class to have different members on host/device, you
won't be able to provide working H and D overloads in both classes. In this
case, clang is likely to be unhappy with you.

.. code-block:: c++

  #ifdef __CUDA_ARCH__
  struct S {
    __device__ void foo() { /* use device_only */ }
    int device_only;
  };
  #else
  struct S {
    __host__ void foo() { /* use host_only */ }
    double host_only;
  };

  __device__ void test() {
    S s;
    // clang generates an error here, because during host compilation, we
    // have ifdef'ed away the __device__ overload of S::foo().  The __device__
    // overload must be present *even during host compilation*.
    s.foo();
  }
  #endif

We posit that you don't really want to have classes with different members on H
and D.  For example, if you were to pass one of these as a parameter to a
kernel, it would have a different layout on H and D, so would not work
properly.

To make code like this compatible with clang, we recommend you separate it out
into two classes.  If you need to write code that works on both host and
device, consider writing an overloaded wrapper function that returns different
types on host and device.

.. code-block:: c++

  struct HostS { ... };
  struct DeviceS { ... };

  __host__ HostS MakeStruct() { return HostS(); }
  __device__ DeviceS MakeStruct() { return DeviceS(); }

  // Now host and device code can call MakeStruct().

Unfortunately, this idiom isn't compatible with nvcc, because it doesn't allow
you to overload based on the H/D attributes.  Here's an idiom that works with
both clang and nvcc:

.. code-block:: c++

  struct HostS { ... };
  struct DeviceS { ... };

  #ifdef __NVCC__
    #ifndef __CUDA_ARCH__
      __host__ HostS MakeStruct() { return HostS(); }
    #else
      __device__ DeviceS MakeStruct() { return DeviceS(); }
    #endif
  #else
    __host__ HostS MakeStruct() { return HostS(); }
    __device__ DeviceS MakeStruct() { return DeviceS(); }
  #endif

  // Now host and device code can call MakeStruct().

Hopefully you don't have to do this sort of thing often.

Optimizations
=============

Modern CPUs and GPUs are architecturally quite different, so code that's fast
on a CPU isn't necessarily fast on a GPU.  We've made a number of changes to
LLVM to make it generate good GPU code.  Among these changes are:

* `Straight-line scalar optimizations <https://goo.gl/4Rb9As>`_ -- These
  reduce redundancy within straight-line code.

* `Aggressive speculative execution
  <https://llvm.org/docs/doxygen/html/SpeculativeExecution_8cpp_source.html>`_
  -- This is mainly for promoting straight-line scalar optimizations, which are
  most effective on code along dominator paths.

* `Memory space inference
  <https://llvm.org/doxygen/NVPTXInferAddressSpaces_8cpp_source.html>`_ --
  In PTX, we can operate on pointers that are in a particular "address space"
  (global, shared, constant, or local), or we can operate on pointers in the
  "generic" address space, which can point to anything.  Operations in a
  non-generic address space are faster, but pointers in CUDA are not explicitly
  annotated with their address space, so it's up to LLVM to infer it where
  possible.

* `Bypassing 64-bit divides
  <https://llvm.org/docs/doxygen/html/BypassSlowDivision_8cpp_source.html>`_ --
  This was an existing optimization that we enabled for the PTX backend.

  64-bit integer divides are much slower than 32-bit ones on NVIDIA GPUs.
  Many of the 64-bit divides in our benchmarks have a divisor and dividend
  which fit in 32 bits at runtime. This optimization provides a fast path for
  this common case.

* Aggressive loop unrolling and function inlining -- Loop unrolling and
  function inlining need to be more aggressive for GPUs than for CPUs because
  control flow transfer on a GPU is more expensive. More aggressive unrolling and
  inlining also promote other optimizations, such as constant propagation and
  SROA, which sometimes speed up code by over 10x.

  (Programmers can force unrolling and inlining using clang's `loop unrolling pragmas
  <https://clang.llvm.org/docs/AttributeReference.html#pragma-unroll-pragma-nounroll>`_
  and ``__attribute__((always_inline))``.)
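
A small sketch of both annotations (the function itself is hypothetical):

.. code-block:: c++

  inline __device__ __attribute__((always_inline))
  float dot3(const float* a, const float* b) {
    float sum = 0.0f;
  #pragma unroll
    for (int i = 0; i < 3; ++i)
      sum += a[i] * b[i];
    return sum;
  }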

Publication
===========

The team at Google published a paper in CGO 2016 detailing the optimizations
they'd made to clang/LLVM.  Note that "gpucc" is no longer a meaningful name:
The relevant tools are now just vanilla clang/LLVM.

| `gpucc: An Open-Source GPGPU Compiler <http://dl.acm.org/citation.cfm?id=2854041>`_
| Jingyue Wu, Artem Belevich, Eli Bendersky, Mark Heffernan, Chris Leary, Jacques Pienaar, Bjarke Roune, Rob Springer, Xuetian Weng, Robert Hundt
| *Proceedings of the 2016 International Symposium on Code Generation and Optimization (CGO 2016)*
|
| `Slides from the CGO talk <http://wujingyue.github.io/docs/gpucc-talk.pdf>`_
|
| `Tutorial given at CGO <http://wujingyue.github.io/docs/gpucc-tutorial.pdf>`_

Obtaining Help
==============

To obtain help on LLVM in general and its CUDA support, see `the LLVM
community <https://llvm.org/docs/#mailing-lists>`_.