=============================================
Building a JIT: Per-function Lazy Compilation
=============================================

.. contents::
   :local:

**This tutorial is under active development. It is incomplete and details may
change frequently.** Nonetheless we invite you to try it out as it stands, and
we welcome any feedback.

Chapter 3 Introduction
======================

**Warning: This text is currently out of date due to ORC API updates.**

**The example code has been updated and can be used. The text will be updated
once the API churn dies down.**

Welcome to Chapter 3 of the "Building an ORC-based JIT in LLVM" tutorial. This
chapter discusses lazy JITing and shows you how to enable it by adding an ORC
CompileOnDemand layer to the JIT from `Chapter 2 <BuildingAJIT2.html>`_.

Lazy Compilation
================

When we add a module to the KaleidoscopeJIT class from Chapter 2 it is
immediately optimized, compiled and linked for us by the IRTransformLayer,
IRCompileLayer and RTDyldObjectLinkingLayer respectively. This scheme, where all the
work to make a Module executable is done up front, is simple to understand and
its performance characteristics are easy to reason about. However, it will lead
to very high startup times if the amount of code to be compiled is large, and
may also do a lot of unnecessary compilation if only a few compiled functions
are ever called at runtime. A truly "just-in-time" compiler should allow us to
defer the compilation of any given function until the moment that function is
first called, improving launch times and eliminating redundant work. In fact,
the ORC APIs provide us with a layer to lazily compile LLVM IR:
*CompileOnDemandLayer*.

The CompileOnDemandLayer class conforms to the layer interface described in
Chapter 2, but its addModule method behaves quite differently from the layers
we have seen so far: rather than doing any work up front, it just scans the
Modules being added and arranges for each function in them to be compiled the
first time it is called. To do this, the CompileOnDemandLayer creates two small
utilities for each function that it scans: a *stub* and a *compile
callback*. The stub is a pair of a function pointer (which will be pointed at
the function's implementation once the function has been compiled) and an
indirect jump through the pointer. By fixing the address of the indirect jump
for the lifetime of the program we can give the function a permanent "effective
address", one that can be safely used for indirection and function pointer
comparison even if the function's implementation is never compiled, or if it is
compiled more than once (due to, for example, recompiling the function at a
higher optimization level) and changes address. The second utility, the compile
callback, represents a re-entry point from the program into the compiler that
will trigger compilation and then execution of a function. By initializing the
function's stub to point at the function's compile callback, we enable lazy
compilation: The first attempted call to the function will follow the function
pointer and trigger the compile callback instead. The compile callback will
compile the function, update the function pointer for the stub, then execute
the function. On all subsequent calls to the function, the function pointer
will point at the already-compiled function, so there is no further overhead
from the compiler. We will look at this process in more detail in the next
chapter of this tutorial, but for now we'll trust the CompileOnDemandLayer to
set all the stubs and callbacks up for us. All we need to do is to add the
CompileOnDemandLayer to the top of our stack and we'll get the benefits of
lazy compilation. We just need a few changes to the source:

.. code-block:: c++

  ...
  #include "llvm/ExecutionEngine/SectionMemoryManager.h"
  #include "llvm/ExecutionEngine/Orc/CompileOnDemandLayer.h"
  #include "llvm/ExecutionEngine/Orc/CompileUtils.h"
  ...

  ...
  class KaleidoscopeJIT {
  private:
    std::unique_ptr<TargetMachine> TM;
    const DataLayout DL;
    RTDyldObjectLinkingLayer ObjectLayer;
    IRCompileLayer<decltype(ObjectLayer), SimpleCompiler> CompileLayer;

    using OptimizeFunction =
        std::function<std::shared_ptr<Module>(std::shared_ptr<Module>)>;

    IRTransformLayer<decltype(CompileLayer), OptimizeFunction> OptimizeLayer;

    std::unique_ptr<JITCompileCallbackManager> CompileCallbackManager;
    CompileOnDemandLayer<decltype(OptimizeLayer)> CODLayer;

  public:
    using ModuleHandle = decltype(CODLayer)::ModuleHandleT;

First we need to include the CompileOnDemandLayer.h header, then add two new
members to our class: a std::unique_ptr<JITCompileCallbackManager> and a
CompileOnDemandLayer. The CompileCallbackManager member is used by the
CompileOnDemandLayer to create the compile callback needed for each function.

.. code-block:: c++

  KaleidoscopeJIT()
      : TM(EngineBuilder().selectTarget()), DL(TM->createDataLayout()),
        ObjectLayer([]() { return std::make_shared<SectionMemoryManager>(); }),
        CompileLayer(ObjectLayer, SimpleCompiler(*TM)),
        OptimizeLayer(CompileLayer,
                      [this](std::shared_ptr<Module> M) {
                        return optimizeModule(std::move(M));
                      }),
        CompileCallbackManager(
            orc::createLocalCompileCallbackManager(TM->getTargetTriple(), 0)),
        CODLayer(OptimizeLayer,
                 [this](Function &F) { return std::set<Function*>({&F}); },
                 *CompileCallbackManager,
                 orc::createLocalIndirectStubsManagerBuilder(
                   TM->getTargetTriple())) {
    llvm::sys::DynamicLibrary::LoadLibraryPermanently(nullptr);
  }

Next we have to update our constructor to initialize the new members. To create
an appropriate compile callback manager we use the
createLocalCompileCallbackManager function, which takes a target triple and a
JITTargetAddress to call if it receives a request to compile an unknown
function. In our simple JIT this situation is unlikely to come up, so we'll
cheat and just pass '0' here. In a production quality JIT you could give the
address of a function that throws an exception in order to unwind the JIT'd
code's stack.

Now we can construct our CompileOnDemandLayer. Following the pattern from
previous layers we start by passing a reference to the next layer down in our
stack -- the OptimizeLayer. Next we need to supply a 'partitioning function':
when a not-yet-compiled function is called, the CompileOnDemandLayer will call
this function to ask us what we would like to compile. At a minimum we need to
compile the function being called (given by the argument to the partitioning
function), but we could also request that the CompileOnDemandLayer compile other
functions that are unconditionally called (or highly likely to be called) from
the function being called. For KaleidoscopeJIT we'll keep it simple and just
request compilation of the function that was called. Next we pass a reference to
our CompileCallbackManager. Finally, we need to supply an "indirect stubs
manager builder": a utility function that constructs IndirectStubsManagers, which
are in turn used to build the stubs for the functions in each module. The
CompileOnDemandLayer will call the indirect stubs manager builder once for each
call to addModule, and use the resulting indirect stubs manager to create
stubs for all functions in the module. If/when the module is removed from the
JIT the indirect stubs manager will be deleted, freeing any memory allocated
to the stubs. We supply this function by using the
createLocalIndirectStubsManagerBuilder utility.

.. code-block:: c++

  // ...
          if (auto Sym = CODLayer.findSymbol(Name, false))
  // ...
  return cantFail(CODLayer.addModule(std::move(Ms),
                                     std::move(Resolver)));
  // ...

  // ...
  return CODLayer.findSymbol(MangledNameStream.str(), true);
  // ...

  // ...
  CODLayer.removeModule(H);
  // ...

Finally, we need to replace the references to OptimizeLayer in our addModule,
findSymbol, and removeModule methods. With that, we're up and running.

**To be done:**

**Chapter conclusion.**

Full Code Listing
=================

Here is the complete code listing for our running example with a CompileOnDemand
layer added to enable lazy function-at-a-time compilation. To build this example, use:

.. code-block:: bash

    # Compile
    clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orcjit native` -O3 -o toy
    # Run
    ./toy

Here is the code:

.. literalinclude:: ../../examples/Kaleidoscope/BuildingAJIT/Chapter3/KaleidoscopeJIT.h
   :language: c++

`Next: Extreme Laziness -- Using Compile Callbacks to JIT directly from ASTs <BuildingAJIT4.html>`_
