1# Speculative Load Hardening
2
3### A Spectre Variant #1 Mitigation Technique
4
5Author: Chandler Carruth - [chandlerc@google.com](mailto:chandlerc@google.com)
6
7## Problem Statement
8
9Recently, Google Project Zero and other researchers have found information leak
10vulnerabilities by exploiting speculative execution in modern CPUs. These
11exploits are currently broken down into three variants:
12* GPZ Variant #1 (a.k.a. Spectre Variant #1): Bounds check (or predicate) bypass
13* GPZ Variant #2 (a.k.a. Spectre Variant #2): Branch target injection
14* GPZ Variant #3 (a.k.a. Meltdown): Rogue data cache load
15
16For more details, see the Google Project Zero blog post and the Spectre research
17paper:
18* https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html
19* https://spectreattack.com/spectre.pdf
20
21The core problem of GPZ Variant #1 is that speculative execution uses branch
22prediction to select the path of instructions speculatively executed. This path
23is speculatively executed with the available data, and may load from memory and
24leak the loaded values through various side channels that survive even when the
25speculative execution is unwound due to being incorrect. Mispredicted paths can
26cause code to be executed with data inputs that never occur in correct
27executions, making checks against malicious inputs ineffective and allowing
28attackers to use malicious data inputs to leak secret data. Here is an example,
29extracted and simplified from the Project Zero paper:
30```
31struct array {
32  unsigned long length;
33  unsigned char data[];
34};
35struct array *arr1 = ...; // small array
36struct array *arr2 = ...; // array of size 0x400
37unsigned long untrusted_offset_from_caller = ...;
38if (untrusted_offset_from_caller < arr1->length) {
39  unsigned char value = arr1->data[untrusted_offset_from_caller];
40  unsigned long index2 = ((value&1)*0x100)+0x200;
41  unsigned char value2 = arr2->data[index2];
42}
43```
44
45The key of the attack is to call this with `untrusted_offset_from_caller` that
46is far outside of the bounds when the branch predictor will predict that it
47will be in-bounds. In that case, the body of the `if` will be executed
48speculatively, and may read secret data into `value` and leak it via a
49cache-timing side channel when a dependent access is made to populate `value2`.
50
51## High Level Mitigation Approach
52
53While several approaches are being actively pursued to mitigate specific
54branches and/or loads inside especially risky software (most notably various OS
55kernels), these approaches require manual and/or static analysis aided auditing
56of code and explicit source changes to apply the mitigation. They are unlikely
57to scale well to large applications. We are proposing a comprehensive
58mitigation approach that would apply automatically across an entire program
59rather than through manual changes to the code. While this is likely to have a
60high performance cost, some applications may be in a good position to take this
61performance / security tradeoff.
62
63The specific technique we propose is to cause loads to be checked using
64branchless code to ensure that they are executing along a valid control flow
65path. Consider the following C-pseudo-code representing the core idea of a
66predicate guarding potentially invalid loads:
67```
68void leak(int data);
69void example(int* pointer1, int* pointer2) {
70  if (condition) {
71    // ... lots of code ...
72    leak(*pointer1);
73  } else {
74    // ... more code ...
75    leak(*pointer2);
76  }
77}
78```
79
80This would get transformed into something resembling the following:
81```
82uintptr_t all_ones_mask = std::numerical_limits<uintptr_t>::max();
83uintptr_t all_zeros_mask = 0;
84void leak(int data);
85void example(int* pointer1, int* pointer2) {
86  uintptr_t predicate_state = all_ones_mask;
87  if (condition) {
88    // Assuming ?: is implemented using branchless logic...
89    predicate_state = !condition ? all_zeros_mask : predicate_state;
90    // ... lots of code ...
91    //
92    // Harden the pointer so it can't be loaded
93    pointer1 &= predicate_state;
94    leak(*pointer1);
95  } else {
96    predicate_state = condition ? all_zeros_mask : predicate_state;
97    // ... more code ...
98    //
99    // Alternative: Harden the loaded value
100    int value2 = *pointer2 & predicate_state;
101    leak(value2);
102  }
103}
104```
105
106The result should be that if the `if (condition) {` branch is mis-predicted,
107there is a *data* dependency on the condition used to zero out any pointers
108prior to loading through them or to zero out all of the loaded bits. Even
109though this code pattern may still execute speculatively, *invalid* speculative
110executions are prevented from leaking secret data from memory (but note that
111this data might still be loaded in safe ways, and some regions of memory are
112required to not hold secrets, see below for detailed limitations). This
113approach only requires the underlying hardware have a way to implement a
114branchless and unpredicted conditional update of a register's value. All modern
115architectures have support for this, and in fact such support is necessary to
116correctly implement constant time cryptographic primitives.
117
118Crucial properties of this approach:
119* It is not preventing any particular side-channel from working. This is
120  important as there are an unknown number of potential side channels and we
121  expect to continue discovering more. Instead, it prevents the observation of
122  secret data in the first place.
123* It accumulates the predicate state, protecting even in the face of nested
124  *correctly* predicted control flows.
125* It passes this predicate state across function boundaries to provide
126  [interprocedural protection](#interprocedural-checking).
127* When hardening the address of a load, it uses a *destructive* or
128  *non-reversible* modification of the address to prevent an attacker from
129  reversing the check using attacker-controlled inputs.
130* It does not completely block speculative execution, and merely prevents
131  *mis*-speculated paths from leaking secrets from memory (and stalls
132  speculation until this can be determined).
133* It is completely general and makes no fundamental assumptions about the
134  underlying architecture other than the ability to do branchless conditional
135  data updates and a lack of value prediction.
136* It does not require programmers to identify all possible secret data using
137  static source code annotations or code vulnerable to a variant #1 style
138  attack.
139
140Limitations of this approach:
141* It requires re-compiling source code to insert hardening instruction
142  sequences. Only software compiled in this mode is protected.
143* The performance is heavily dependent on a particular architecture's
144  implementation strategy. We outline a potential x86 implementation below and
145  characterize its performance.
146* It does not defend against secret data already loaded from memory and
147  residing in registers or leaked through other side-channels in
148  non-speculative execution. Code dealing with this, e.g cryptographic
149  routines, already uses constant-time algorithms and code to prevent
150  side-channels. Such code should also scrub registers of secret data following
151  [these
152  guidelines](https://github.com/HACS-workshop/spectre-mitigations/blob/master/crypto_guidelines.md).
153* To achieve reasonable performance, many loads may not be checked, such as
154  those with compile-time fixed addresses. This primarily consists of accesses
155  at compile-time constant offsets of global and local variables. Code which
156  needs this protection and intentionally stores secret data must ensure the
157  memory regions used for secret data are necessarily dynamic mappings or heap
158  allocations. This is an area which can be tuned to provide more comprehensive
159  protection at the cost of performance.
160* [Hardened loads](#hardening-the-address-of-the-load) may still load data from
161  _valid_ addresses if not _attacker-controlled_ addresses. To prevent these
162  from reading secret data, the low 2gb of the address space and 2gb above and
163  below any executable pages should be protected.
164
165Credit:
166* The core idea of tracing misspeculation through data and marking pointers to
167  block misspeculated loads was developed as part of a HACS 2018 discussion
168  between Chandler Carruth, Paul Kocher, Thomas Pornin, and several other
169  individuals.
170* Core idea of masking out loaded bits was part of the original mitigation
171  suggested by Jann Horn when these attacks were reported.
172
173
174### Indirect Branches, Calls, and Returns
175
176It is possible to attack control flow other than conditional branches with
177variant #1 style mispredictions.
178* A prediction towards a hot call target of a virtual method can lead to it
179  being speculatively executed when an expected type is used (often called
180  "type confusion").
181* A hot case may be speculatively executed due to prediction instead of the
182  correct case for a switch statement implemented as a jump table.
183* A hot common return address may be predicted incorrectly when returning from
184  a function.
185
186These code patterns are also vulnerable to Spectre variant #2, and as such are
187best mitigated with a
188[retpoline](https://support.google.com/faqs/answer/7625886) on x86 platforms.
189When a mitigation technique like retpoline is used, speculation simply cannot
190proceed through an indirect control flow edge (or it cannot be mispredicted in
191the case of a filled RSB) and so it is also protected from variant #1 style
192attacks. However, some architectures, micro-architectures, or vendors do not
193employ the retpoline mitigation, and on future x86 hardware (both Intel and
194AMD) it is expected to become unnecessary due to hardware-based mitigation.
195
196When not using a retpoline, these edges will need independent protection from
197variant #1 style attacks. The analogous approach to that used for conditional
198control flow should work:
199```
200uintptr_t all_ones_mask = std::numerical_limits<uintptr_t>::max();
201uintptr_t all_zeros_mask = 0;
202void leak(int data);
203void example(int* pointer1, int* pointer2) {
204  uintptr_t predicate_state = all_ones_mask;
205  switch (condition) {
206  case 0:
207    // Assuming ?: is implemented using branchless logic...
208    predicate_state = (condition != 0) ? all_zeros_mask : predicate_state;
209    // ... lots of code ...
210    //
211    // Harden the pointer so it can't be loaded
212    pointer1 &= predicate_state;
213    leak(*pointer1);
214    break;
215
216  case 1:
217    predicate_state = (condition != 1) ? all_zeros_mask : predicate_state;
218    // ... more code ...
219    //
220    // Alternative: Harden the loaded value
221    int value2 = *pointer2 & predicate_state;
222    leak(value2);
223    break;
224
225    // ...
226  }
227}
228```
229
230The core idea remains the same: validate the control flow using data-flow and
231use that validation to check that loads cannot leak information along
232misspeculated paths. Typically this involves passing the desired target of such
233control flow across the edge and checking that it is correct afterwards. Note
234that while it is tempting to think that this mitigates variant #2 attacks, it
235does not. Those attacks go to arbitrary gadgets that don't include the checks.
236
237
238### Variant #1.1 and #1.2 attacks: "Bounds Check Bypass Store"
239
240Beyond the core variant #1 attack, there are techniques to extend this attack.
241The primary technique is known as "Bounds Check Bypass Store" and is discussed
242in this research paper: https://people.csail.mit.edu/vlk/spectre11.pdf
243
244We will analyze these two variants independently. First, variant #1.1 works by
245speculatively storing over the return address after a bounds check bypass. This
246speculative store then ends up being used by the CPU during speculative
247execution of the return, potentially directing speculative execution to
248arbitrary gadgets in the binary. Let's look at an example.
249```
250unsigned char local_buffer[4];
251unsigned char *untrusted_data_from_caller = ...;
252unsigned long untrusted_size_from_caller = ...;
253if (untrusted_size_from_caller < sizeof(local_buffer)) {
254  // Speculative execution enters here with a too-large size.
255  memcpy(local_buffer, untrusted_data_from_caller,
256         untrusted_size_from_caller);
257  // The stack has now been smashed, writing an attacker-controlled
258  // address over the return address.
259  minor_processing(local_buffer);
260  return;
261  // Control will speculate to the attacker-written address.
262}
263```
264
265However, this can be mitigated by hardening the load of the return address just
266like any other load. This is sometimes complicated because x86 for example
267*implicitly* loads the return address off the stack. However, the
268implementation technique below is specifically designed to mitigate this
269implicit load by using the stack pointer to communicate misspeculation between
270functions. This additionally causes a misspeculation to have an invalid stack
271pointer and never be able to read the speculatively stored return address. See
272the detailed discussion below.
273
274For variant #1.2, the attacker speculatively stores into the vtable or jump
275table used to implement an indirect call or indirect jump. Because this is
276speculative, this will often be possible even when these are stored in
277read-only pages. For example:
278```
279class FancyObject : public BaseObject {
280public:
281  void DoSomething() override;
282};
283void f(unsigned long attacker_offset, unsigned long attacker_data) {
284  FancyObject object = getMyObject();
285  unsigned long *arr[4] = getFourDataPointers();
286  if (attacker_offset < 4) {
287    // We have bypassed the bounds check speculatively.
288    unsigned long *data = arr[attacker_offset];
289    // Now we have computed a pointer inside of `object`, the vptr.
290    *data = attacker_data;
291    // The vptr points to the virtual table and we speculatively clobber that.
292    g(object); // Hand the object to some other routine.
293  }
294}
295// In another file, we call a method on the object.
296void g(BaseObject &object) {
297  object.DoSomething();
298  // This speculatively calls the address stored over the vtable.
299}
300```
301
302Mitigating this requires hardening loads from these locations, or mitigating
303the indirect call or indirect jump. Any of these are sufficient to block the
304call or jump from using a speculatively stored value that has been read back.
305
306For both of these, using retpolines would be equally sufficient. One possible
307hybrid approach is to use retpolines for indirect call and jump, while relying
308on SLH to mitigate returns.
309
310Another approach that is sufficient for both of these is to harden all of the
311speculative stores. However, as most stores aren't interesting and don't
312inherently leak data, this is expected to be prohibitively expensive given the
313attack it is defending against.
314
315
316## Implementation Details
317
318There are a number of complex details impacting the implementation of this
319technique, both on a particular architecture and within a particular compiler.
320We discuss proposed implementation techniques for the x86 architecture and the
321LLVM compiler. These are primarily to serve as an example, as other
322implementation techniques are very possible.
323
324
325### x86 Implementation Details
326
327On the x86 platform we break down the implementation into three core
328components: accumulating the predicate state through the control flow graph,
329checking the loads, and checking control transfers between procedures.
330
331
332#### Accumulating Predicate State
333
334Consider baseline x86 instructions like the following, which test three
335conditions and if all pass, loads data from memory and potentially leaks it
336through some side channel:
337```
338# %bb.0:                                # %entry
339        pushq   %rax
340        testl   %edi, %edi
341        jne     .LBB0_4
342# %bb.1:                                # %then1
343        testl   %esi, %esi
344        jne     .LBB0_4
345# %bb.2:                                # %then2
346        testl   %edx, %edx
347        je      .LBB0_3
348.LBB0_4:                                # %exit
349        popq    %rax
350        retq
351.LBB0_3:                                # %danger
352        movl    (%rcx), %edi
353        callq   leak
354        popq    %rax
355        retq
356```
357
358When we go to speculatively execute the load, we want to know whether any of
359the dynamically executed predicates have been misspeculated. To track that,
360along each conditional edge, we need to track the data which would allow that
361edge to be taken. On x86, this data is stored in the flags register used by the
362conditional jump instruction. Along both edges after this fork in control flow,
363the flags register remains alive and contains data that we can use to build up
364our accumulated predicate state. We accumulate it using the x86 conditional
365move instruction which also reads the flag registers where the state resides.
366These conditional move instructions are known to not be predicted on any x86
367processors, making them immune to misprediction that could reintroduce the
368vulnerability. When we insert the conditional moves, the code ends up looking
369like the following:
370```
371# %bb.0:                                # %entry
372        pushq   %rax
373        xorl    %eax, %eax              # Zero out initial predicate state.
374        movq    $-1, %r8                # Put all-ones mask into a register.
375        testl   %edi, %edi
376        jne     .LBB0_1
377# %bb.2:                                # %then1
378        cmovneq %r8, %rax               # Conditionally update predicate state.
379        testl   %esi, %esi
380        jne     .LBB0_1
381# %bb.3:                                # %then2
382        cmovneq %r8, %rax               # Conditionally update predicate state.
383        testl   %edx, %edx
384        je      .LBB0_4
385.LBB0_1:
386        cmoveq  %r8, %rax               # Conditionally update predicate state.
387        popq    %rax
388        retq
389.LBB0_4:                                # %danger
390        cmovneq %r8, %rax               # Conditionally update predicate state.
391        ...
392```
393
394Here we create the "empty" or "correct execution" predicate state by zeroing
395`%rax`, and we create a constant "incorrect execution" predicate value by
396putting `-1` into `%r8`. Then, along each edge coming out of a conditional
397branch we do a conditional move that in a correct execution will be a no-op,
398but if misspeculated, will replace the `%rax` with the value of `%r8`.
399Misspeculating any one of the three predicates will cause `%rax` to hold the
400"incorrect execution" value from `%r8` as we preserve incoming values when
401execution is correct rather than overwriting it.
402
403We now have a value in `%rax` in each basic block that indicates if at some
404point previously a predicate was mispredicted. And we have arranged for that
405value to be particularly effective when used below to harden loads.
406
407
408##### Indirect Call, Branch, and Return Predicates
409
410There is no analogous flag to use when tracing indirect calls, branches, and
411returns. The predicate state must be accumulated through some other means.
412Fundamentally, this is the reverse of the problem posed in CFI: we need to
413check where we came from rather than where we are going. For function-local
414jump tables, this is easily arranged by testing the input to the jump table
415within each destination (not yet implemented, use retpolines):
416```
417        pushq   %rax
418        xorl    %eax, %eax              # Zero out initial predicate state.
419        movq    $-1, %r8                # Put all-ones mask into a register.
420        jmpq    *.LJTI0_0(,%rdi,8)      # Indirect jump through table.
421.LBB0_2:                                # %sw.bb
422        testq   $0, %rdi                # Validate index used for jump table.
423        cmovneq %r8, %rax               # Conditionally update predicate state.
424        ...
425        jmp     _Z4leaki                # TAILCALL
426
427.LBB0_3:                                # %sw.bb1
428        testq   $1, %rdi                # Validate index used for jump table.
429        cmovneq %r8, %rax               # Conditionally update predicate state.
430        ...
431        jmp     _Z4leaki                # TAILCALL
432
433.LBB0_5:                                # %sw.bb10
434        testq   $2, %rdi                # Validate index used for jump table.
435        cmovneq %r8, %rax               # Conditionally update predicate state.
436        ...
437        jmp     _Z4leaki                # TAILCALL
438        ...
439
440        .section        .rodata,"a",@progbits
441        .p2align        3
442.LJTI0_0:
443        .quad   .LBB0_2
444        .quad   .LBB0_3
445        .quad   .LBB0_5
446        ...
447```
448
449Returns have a simple mitigation technique on x86-64 (or other ABIs which have
450what is called a "red zone" region beyond the end of the stack). This region is
451guaranteed to be preserved across interrupts and context switches, making the
452return address used in returning to the current code remain on the stack and
453valid to read. We can emit code in the caller to verify that a return edge was
454not mispredicted:
455```
456        callq   other_function
457return_addr:
458        testq   -8(%rsp), return_addr   # Validate return address.
459        cmovneq %r8, %rax               # Update predicate state.
460```
461
462For an ABI without a "red zone" (and thus unable to read the return address
463from the stack), we can compute the expected return address prior to the call
464into a register preserved across the call and use that similarly to the above.
465
466Indirect calls (and returns in the absence of a red zone ABI) pose the most
467significant challenge to propagate. The simplest technique would be to define a
468new ABI such that the intended call target is passed into the called function
469and checked in the entry. Unfortunately, new ABIs are quite expensive to deploy
470in C and C++. While the target function could be passed in TLS, we would still
471require complex logic to handle a mixture of functions compiled with and
472without this extra logic (essentially, making the ABI backwards compatible).
473Currently, we suggest using retpolines here and will continue to investigate
474ways of mitigating this.
475
476
477##### Optimizations, Alternatives, and Tradeoffs
478
479Merely accumulating predicate state involves significant cost. There are
480several key optimizations we employ to minimize this and various alternatives
481that present different tradeoffs in the generated code.
482
483First, we work to reduce the number of instructions used to track the state:
484* Rather than inserting a `cmovCC` instruction along every conditional edge in
485  the original program, we track each set of condition flags we need to capture
486  prior to entering each basic block and reuse a common `cmovCC` sequence for
487  those.
488  * We could further reuse suffixes when there are multiple `cmovCC`
489    instructions required to capture the set of flags. Currently this is
490    believed to not be worth the cost as paired flags are relatively rare and
491    suffixes of them are exceedingly rare.
492* A common pattern in x86 is to have multiple conditional jump instructions
493  that use the same flags but handle different conditions. Naively, we could
494  consider each fallthrough between them an "edge" but this causes a much more
495  complex control flow graph. Instead, we accumulate the set of conditions
496  necessary for fallthrough and use a sequence of `cmovCC` instructions in a
497  single fallthrough edge to track it.
498
499Second, we trade register pressure for simpler `cmovCC` instructions by
500allocating a register for the "bad" state. We could read that value from memory
501as part of the conditional move instruction, however, this creates more
502micro-ops and requires the load-store unit to be involved. Currently, we place
503the value into a virtual register and allow the register allocator to decide
504when the register pressure is sufficient to make it worth spilling to memory
505and reloading.
506
507
508#### Hardening Loads
509
510Once we have the predicate accumulated into a special value for correct vs.
511misspeculated, we need to apply this to loads in a way that ensures they do not
512leak secret data. There are two primary techniques for this: we can either
513harden the loaded value to prevent observation, or we can harden the address
514itself to prevent the load from occurring. These have significantly different
515performance tradeoffs.
516
517
518##### Hardening loaded values
519
520The most appealing way to harden loads is to mask out all of the bits loaded.
521The key requirement is that for each bit loaded, along the misspeculated path
522that bit is always fixed at either 0 or 1 regardless of the value of the bit
523loaded. The most obvious implementation uses either an `and` instruction with
524an all-zero mask along misspeculated paths and an all-one mask along correct
525paths, or an `or` instruction with an all-one mask along misspeculated paths
526and an all-zero mask along correct paths. Other options become less appealing
527such as multiplying by zero, or multiple shift instructions. For reasons we
528elaborate on below, we end up suggesting you use `or` with an all-ones mask,
529making the x86 instruction sequence look like the following:
530```
531        ...
532
533.LBB0_4:                                # %danger
534        cmovneq %r8, %rax               # Conditionally update predicate state.
535        movl    (%rsi), %edi            # Load potentially secret data from %rsi.
536        orl     %eax, %edi
537```
538
539Other useful patterns may be to fold the load into the `or` instruction itself
540at the cost of a register-to-register copy.
541
542There are some challenges with deploying this approach:
5431. Many loads on x86 are folded into other instructions. Separating them would
544   add very significant and costly register pressure with prohibitive
545   performance cost.
5461. Loads may not target a general purpose register requiring extra instructions
547   to map the state value into the correct register class, and potentially more
548   expensive instructions to mask the value in some way.
5491. The flags registers on x86 are very likely to be live, and challenging to
550   preserve cheaply.
5511. There are many more values loaded than pointers & indices used for loads. As
552   a consequence, hardening the result of a load requires substantially more
553   instructions than hardening the address of the load (see below).
554
555Despite these challenges, hardening the result of the load critically allows
556the load to proceed and thus has dramatically less impact on the total
557speculative / out-of-order potential of the execution. There are also several
558interesting techniques to try and mitigate these challenges and make hardening
559the results of loads viable in at least some cases. However, we generally
560expect to fall back when unprofitable from hardening the loaded value to the
561next approach of hardening the address itself.
562
563
564###### Loads folded into data-invariant operations can be hardened after the operation
565
566The first key to making this feasible is to recognize that many operations on
567x86 are "data-invariant". That is, they have no (known) observable behavior
568differences due to the particular input data. These instructions are often used
569when implementing cryptographic primitives dealing with private key data
570because they are not believed to provide any side-channels. Similarly, we can
571defer hardening until after them as they will not in-and-of-themselves
572introduce a speculative execution side-channel. This results in code sequences
573that look like:
574```
575        ...
576
577.LBB0_4:                                # %danger
578        cmovneq %r8, %rax               # Conditionally update predicate state.
579        addl    (%rsi), %edi            # Load and accumulate without leaking.
580        orl     %eax, %edi
581```
582
583While an addition happens to the loaded (potentially secret) value, that
584doesn't leak any data and we then immediately harden it.
585
586
587###### Hardening of loaded values deferred down the data-invariant expression graph
588
589We can generalize the previous idea and sink the hardening down the expression
590graph across as many data-invariant operations as desirable. This can use very
591conservative rules for whether something is data-invariant. The primary goal
592should be to handle multiple loads with a single hardening instruction:
593```
594        ...
595
596.LBB0_4:                                # %danger
597        cmovneq %r8, %rax               # Conditionally update predicate state.
598        addl    (%rsi), %edi            # Load and accumulate without leaking.
599        addl    4(%rsi), %edi           # Continue without leaking.
600        addl    8(%rsi), %edi
601        orl     %eax, %edi              # Mask out bits from all three loads.
602```
603
604
605###### Preserving the flags while hardening loaded values on Haswell, Zen, and newer processors
606
607Sadly, there are no useful instructions on x86 that apply a mask to all 64 bits
608without touching the flag registers. However, we can harden loaded values that
609are narrower than a word (fewer than 32-bits on 32-bit systems and fewer than
61064-bits on 64-bit systems) by zero-extending the value to the full word size
611and then shifting right by at least the number of original bits using the BMI2
612`shrx` instruction:
613```
614        ...
615
616.LBB0_4:                                # %danger
617        cmovneq %r8, %rax               # Conditionally update predicate state.
618        addl    (%rsi), %edi            # Load and accumulate 32 bits of data.
619        shrxq   %rax, %rdi, %rdi        # Shift out all 32 bits loaded.
620```
621
622Because on x86 the zero-extend is free, this can efficiently harden the loaded
623value.
624
625
626##### Hardening the address of the load
627
628When hardening the loaded value is inapplicable, most often because the
629instruction directly leaks information (like `cmp` or `jmpq`), we switch to
630hardening the _address_ of the load instead of the loaded value. This avoids
631increasing register pressure by unfolding the load or paying some other high
632cost.
633
634To understand how this works in practice, we need to examine the exact
635semantics of the x86 addressing modes which, in its fully general form, looks
636like `(%base,%index,scale)offset`. Here `%base` and `%index` are 64-bit
637registers that can potentially be any value, and may be attacker controlled,
638and `scale` and `offset` are fixed immediate values. `scale` must be `1`, `2`,
639`4`, or `8`, and `offset` can be any 32-bit sign extended value. The exact
640computation performed to find the address is then: `%base + (scale * %index) +
641offset` under 64-bit 2's complement modular arithmetic.
642
643One issue with this approach is that, after hardening, the  `%base + (scale *
644%index)` subexpression will compute a value near zero (`-1 + (scale * -1)`) and
645then a large, positive `offset` will index into memory within the first two
646gigabytes of address space. While these offsets are not attacker controlled,
647the attacker could chose to attack a load which happens to have the desired
648offset and then successfully read memory in that region. This significantly
649raises the burden on the attacker and limits the scope of attack but does not
650eliminate it. To fully close the attack we must work with the operating system
651to preclude mapping memory in the low two gigabytes of address space.
652
653
654###### 64-bit load checking instructions
655
656We can use the following instruction sequences to check loads. We set up `%r8`
657in these examples to hold the special value of `-1` which will be `cmov`ed over
658`%rax` in misspeculated paths.
659
660Single register addressing mode:
661```
662        ...
663
664.LBB0_4:                                # %danger
665        cmovneq %r8, %rax               # Conditionally update predicate state.
666        orq     %rax, %rsi              # Mask the pointer if misspeculating.
667        movl    (%rsi), %edi
668```
669
670Two register addressing mode:
671```
672        ...
673
674.LBB0_4:                                # %danger
675        cmovneq %r8, %rax               # Conditionally update predicate state.
676        orq     %rax, %rsi              # Mask the pointer if misspeculating.
677        orq     %rax, %rcx              # Mask the index if misspeculating.
678        movl    (%rsi,%rcx), %edi
679```
680
681This will result in a negative address near zero or in `offset` wrapping the
682address space back to a small positive address. Small, negative addresses will
683fault in user-mode for most operating systems, but targets which need the high
684address space to be user accessible may need to adjust the exact sequence used
685above. Additionally, the low addresses will need to be marked unreadable by the
686OS to fully harden the load.
687
688
689###### RIP-relative addressing is even easier to break
690
691There is a common addressing mode idiom that is substantially harder to check:
692addressing relative to the instruction pointer. We cannot change the value of
693the instruction pointer register and so we have the harder problem of forcing
694`%base + scale * %index + offset` to be an invalid address, by *only* changing
695`%index`. The only advantage we have is that the attacker also cannot modify
696`%base`. If we use the fast instruction sequence above, but only apply it to
697the index, we will always access `%rip + (scale * -1) + offset`. If the
698attacker can find a load which with this address happens to point to secret
699data, then they can reach it. However, the loader and base libraries can also
700simply refuse to map the heap, data segments, or stack within 2gb of any of the
701text in the program, much like it can reserve the low 2gb of address space.
702
703
704###### The flag registers again make everything hard
705
706Unfortunately, the technique of using `orq`-instructions has a serious flaw on
707x86. The very thing that makes it easy to accumulate state, the flag registers
708containing predicates, causes serious problems here because they may be alive
709and used by the loading instruction or subsequent instructions. On x86, the
710`orq` instruction **sets** the flags and will override anything already there.
711This makes inserting them into the instruction stream very hazardous.
712Unfortunately, unlike when hardening the loaded value, we have no fallback here
713and so we must have a fully general approach available.
714
715The first thing we must do when generating these sequences is try to analyze
716the surrounding code to prove that the flags are not in fact alive or being
717used. Typically, it has been set by some other instruction which just happens
718to set the flags register (much like ours!) with no actual dependency. In those
719cases, it is safe to directly insert these instructions. Alternatively we may
720be able to move them earlier to avoid clobbering the used value.
721
722However, this may ultimately be impossible. In that case, we need to preserve
723the flags around these instructions:
724```
725        ...
726
727.LBB0_4:                                # %danger
728        cmovneq %r8, %rax               # Conditionally update predicate state.
729        pushfq
730        orq     %rax, %rcx              # Mask the pointer if misspeculating.
731        orq     %rax, %rdx              # Mask the index if misspeculating.
732        popfq
733        movl    (%rcx,%rdx), %edi
734```
735
736Using the `pushf` and `popf` instructions saves the flags register around our
737inserted code, but comes at a high cost. First, we must store the flags to the
738stack and reload them. Second, this causes the stack pointer to be adjusted
739dynamically, requiring a frame pointer be used for referring to temporaries
740spilled to the stack, etc.
741
742On newer x86 processors we can use the `lahf` and `sahf` instructions to save
743all of the flags besides the overflow flag in a register rather than on the
744stack. We can then use `seto` and `add` to save and restore the overflow flag
745in a register. Combined, this will save and restore flags in the same manner as
746above but using two registers rather than the stack. That is still very
747expensive if slightly less expensive than `pushf` and `popf` in most cases.
748
749
750###### A flag-less alternative on Haswell, Zen and newer processors
751
752Starting with the BMI2 x86 instruction set extensions available on Haswell and
753Zen processors, there is an instruction for shifting that does not set any
754flags: `shrx`. We can use this and the `lea` instruction to implement analogous
755code sequences to the above ones. However, these are still very marginally
756slower, as there are fewer ports able to dispatch shift instructions in most
757modern x86 processors than there are for `or` instructions.
758
759Fast, single register addressing mode:
760```
761        ...
762
763.LBB0_4:                                # %danger
764        cmovneq %r8, %rax               # Conditionally update predicate state.
765        shrxq   %rax, %rsi, %rsi        # Shift away bits if misspeculating.
766        movl    (%rsi), %edi
767```
768
769This will collapse the register to zero or one, and everything but the offset
770in the addressing mode to be less than or equal to 9. This means the full
771address can only be guaranteed to be less than `(1 << 31) + 9`. The OS may wish
772to protect an extra page of the low address space to account for this
773
774
775##### Optimizations
776
777A very large portion of the cost for this approach comes from checking loads in
778this way, so it is important to work to optimize this. However, beyond making
779the instruction sequences to *apply* the checks efficient (for example by
780avoiding `pushfq` and `popfq` sequences), the only significant optimization is
781to check fewer loads without introducing a vulnerability. We apply several
782techniques to accomplish that.
783
784
785###### Don't check loads from compile-time constant stack offsets
786
787We implement this optimization on x86 by skipping the checking of loads which
788use a fixed frame pointer offset.
789
790The result of this optimization is that patterns like reloading a spilled
791register or accessing a global field don't get checked. This is a very
792significant performance win.
793
794
795###### Don't check dependent loads
796
797A core part of why this mitigation strategy works is that it establishes a
798data-flow check on the loaded address. However, this means that if the address
799itself was already loaded using a checked load, there is no need to check a
800dependent load provided it is within the same basic block as the checked load,
801and therefore has no additional predicates guarding it. Consider code like the
802following:
803```
804        ...
805
806.LBB0_4:                                # %danger
807        movq    (%rcx), %rdi
808        movl    (%rdi), %edx
809```
810
811This will get transformed into:
812```
813        ...
814
815.LBB0_4:                                # %danger
816        cmovneq %r8, %rax               # Conditionally update predicate state.
817        orq     %rax, %rcx              # Mask the pointer if misspeculating.
818        movq    (%rcx), %rdi            # Hardened load.
819        movl    (%rdi), %edx            # Unhardened load due to dependent addr.
820```
821
822This doesn't check the load through `%rdi` as that pointer is dependent on a
823checked load already.
824
825
826###### Protect large, load-heavy blocks with a single lfence
827
828It may be worth using a single `lfence` instruction at the start of a block
829which begins with a (very) large number of loads that require independent
830protection *and* which require hardening the address of the load. However, this
831is unlikely to be profitable in practice. The latency hit of the hardening
832would need to exceed that of an `lfence` when *correctly* speculatively
833executed. But in that case, the `lfence` cost is a complete loss of speculative
834execution (at a minimum). So far, the evidence we have of the performance cost
835of using `lfence` indicates few if any hot code patterns where this trade off
836would make sense.
837
838
839###### Tempting optimizations that break the security model
840
841Several optimizations were considered which didn't pan out due to failure to
842uphold the security model. One in particular is worth discussing as many others
843will reduce to it.
844
845We wondered whether only the *first* load in a basic block could be checked. If
846the check works as intended, it forms an invalid pointer that doesn't even
847virtual-address translate in the hardware. It should fault very early on in its
848processing. Maybe that would stop things in time for the misspeculated path to
849fail to leak any secrets. This doesn't end up working because the processor is
850fundamentally out-of-order, even in its speculative domain. As a consequence,
851the attacker could cause the initial address computation itself to stall and
852allow an arbitrary number of unrelated loads (including attacked loads of
853secret data) to pass through.
854
855
856#### Interprocedural Checking
857
858Modern x86 processors may speculate into called functions and out of functions
859to their return address. As a consequence, we need a way to check loads that
860occur after a misspeculated predicate but where the load and the misspeculated
861predicate are in different functions. In essence, we need some interprocedural
862generalization of the predicate state tracking. A primary challenge to passing
863the predicate state between functions is that we would like to not require a
864change to the ABI or calling convention in order to make this mitigation more
865deployable, and further would like code mitigated in this way to be easily
866mixed with code not mitigated in this way and without completely losing the
867value of the mitigation.
868
869
870##### Embed the predicate state into the high bit(s) of the stack pointer
871
872We can use the same technique that allows hardening pointers to pass the
873predicate state into and out of functions. The stack pointer is trivially
874passed between functions and we can test for it having the high bits set to
875detect when it has been marked due to misspeculation. The callsite instruction
876sequence looks like (assuming a misspeculated state value of `-1`):
877```
878        ...
879
880.LBB0_4:                                # %danger
881        cmovneq %r8, %rax               # Conditionally update predicate state.
882        shlq    $47, %rax
883        orq     %rax, %rsp
884        callq   other_function
885        movq    %rsp, %rax
886        sarq    63, %rax                # Sign extend the high bit to all bits.
887```
888
889This first puts the predicate state into the high bits of `%rsp` before calling
890the function and then reads it back out of high bits of `%rsp` afterward. When
891correctly executing (speculatively or not), these are all no-ops. When
892misspeculating, the stack pointer will end up negative. We arrange for it to
893remain a canonical address, but otherwise leave the low bits alone to allow
894stack adjustments to proceed normally without disrupting this. Within the
895called function, we can extract this predicate state and then reset it on
896return:
897```
898other_function:
899        # prolog
900        callq   other_function
901        movq    %rsp, %rax
902        sarq    63, %rax                # Sign extend the high bit to all bits.
903        # ...
904
905.LBB0_N:
906        cmovneq %r8, %rax               # Conditionally update predicate state.
907        shlq    $47, %rax
908        orq     %rax, %rsp
909        retq
910```
911
912This approach is effective when all code is mitigated in this fashion, and can
913even survive very limited reaches into unmitigated code (the state will
914round-trip in and back out of an unmitigated function, it just won't be
915updated). But it does have some limitations. There is a cost to merging the
916state into `%rsp` and it doesn't insulate mitigated code from misspeculation in
917an unmitigated caller.
918
919There is also an advantage to using this form of interprocedural mitigation: by
920forming these invalid stack pointer addresses we can prevent speculative
921returns from successfully reading speculatively written values to the actual
922stack. This works first by forming a data-dependency between computing the
923address of the return address on the stack and our predicate state. And even
924when satisfied, if a misprediction causes the state to be poisoned the
925resulting stack pointer will be invalid.
926
927
928##### Rewrite API of internal functions to directly propagate predicate state
929
930(Not yet implemented.)
931
932We have the option with internal functions to directly adjust their API to
933accept the predicate as an argument and return it. This is likely to be
934marginally cheaper than embedding into `%rsp` for entering functions.
935
936
937##### Use `lfence` to guard function transitions
938
939An `lfence` instruction can be used to prevent subsequent loads from
940speculatively executing until all prior mispredicted predicates have resolved.
941We can use this broader barrier to speculative loads executing between
942functions. We emit it in the entry block to handle calls, and prior to each
943return. This approach also has the advantage of providing the strongest degree
944of mitigation when mixed with unmitigated code by halting all misspeculation
945entering a function which is mitigated, regardless of what occurred in the
946caller. However, such a mixture is inherently more risky. Whether this kind of
947mixture is a sufficient mitigation requires careful analysis.
948
949Unfortunately, experimental results indicate that the performance overhead of
950this approach is very high for certain patterns of code. A classic example is
951any form of recursive evaluation engine. The hot, rapid call and return
952sequences exhibit dramatic performance loss when mitigated with `lfence`. This
953component alone can regress performance by 2x or more, making it an unpleasant
954tradeoff even when only used in a mixture of code.
955
956
957##### Use an internal TLS location to pass predicate state
958
959We can define a special thread-local value to hold the predicate state between
960functions. This avoids direct ABI implications by using a side channel between
961callers and callees to communicate the predicate state. It also allows implicit
962zero-initialization of the state, which allows non-checked code to be the first
963code executed.
964
965However, this requires a load from TLS in the entry block, a store to TLS
966before every call and every ret, and a load from TLS after every call. As a
967consequence it is expected to be substantially more expensive even than using
968`%rsp` and potentially `lfence` within the function entry block.
969
970
971##### Define a new ABI and/or calling convention
972
973We could define a new ABI and/or calling convention to explicitly pass the
974predicate state in and out of functions. This may be interesting if none of the
975alternatives have adequate performance, but it makes deployment and adoption
976dramatically more complex, and potentially infeasible.
977
978
979## High-Level Alternative Mitigation Strategies
980
981There are completely different alternative approaches to mitigating variant 1
982attacks. [Most](https://lwn.net/Articles/743265/)
983[discussion](https://lwn.net/Articles/744287/) so far focuses on mitigating
984specific known attackable components in the Linux kernel (or other kernels) by
985manually rewriting the code to contain an instruction sequence that is not
986vulnerable. For x86 systems this is done by either injecting an `lfence`
987instruction along the code path which would leak data if executed speculatively
988or by rewriting memory accesses to have branch-less masking to a known safe
989region. On Intel systems, `lfence` [will prevent the speculative load of secret
990data](https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/Intel-Analysis-of-Speculative-Execution-Side-Channels.pdf).
991On AMD systems `lfence` is currently a no-op, but can be made
992dispatch-serializing by setting an MSR, and thus preclude misspeculation of the
993code path ([mitigation G-2 +
994V1-1](https://developer.amd.com/wp-content/resources/Managing-Speculation-on-AMD-Processors.pdf)).
995
996However, this relies on finding and enumerating all possible points in code
997which could be attacked to leak information. While in some cases static
998analysis is effective at doing this at scale, in many cases it still relies on
999human judgement to evaluate whether code might be vulnerable. Especially for
1000software systems which receive less detailed scrutiny but remain sensitive to
1001these attacks, this seems like an impractical security model. We need an
1002automatic and systematic mitigation strategy.
1003
1004
1005### Automatic `lfence` on Conditional Edges
1006
1007A natural way to scale up the existing hand-coded mitigations is simply to
1008inject an `lfence` instruction into both the target and fallthrough
1009destinations of every conditional branch. This ensures that no predicate or
1010bounds check can be bypassed speculatively. However, the performance overhead
1011of this approach is, simply put, catastrophic. Yet it remains the only truly
1012"secure by default" approach known prior to this effort and serves as the
1013baseline for performance.
1014
1015One attempt to address the performance overhead of this and make it more
1016realistic to deploy is [MSVC's /Qspectre
1017switch](https://blogs.msdn.microsoft.com/vcblog/2018/01/15/spectre-mitigations-in-msvc/).
1018Their technique is to use static analysis within the compiler to only insert
1019`lfence` instructions into conditional edges at risk of attack. However,
1020[initial](https://arstechnica.com/gadgets/2018/02/microsofts-compiler-level-spectre-fix-shows-how-hard-this-problem-will-be-to-solve/)
1021[analysis](https://www.paulkocher.com/doc/MicrosoftCompilerSpectreMitigation.html)
1022has shown that this approach is incomplete and only catches a small and limited
1023subset of attackable patterns which happen to resemble very closely the initial
1024proofs of concept. As such, while its performance is acceptable, it does not
1025appear to be an adequate systematic mitigation.
1026
1027
1028## Performance Overhead
1029
1030The performance overhead of this style of comprehensive mitigation is very
1031high. However, it compares very favorably with previously recommended
1032approaches such as the `lfence` instruction. Just as users can restrict the
1033scope of `lfence` to control its performance impact, this mitigation technique
1034could be restricted in scope as well.
1035
1036However, it is important to understand what it would cost to get a fully
1037mitigated baseline. Here we assume targeting a Haswell (or newer) processor and
1038using all of the tricks to improve performance (so leaves the low 2gb
1039unprotected and +/- 2gb surrounding any PC in the program). We ran both
1040Google's microbenchmark suite and a large highly-tuned server built using
1041ThinLTO and PGO. All were built with `-march=haswell` to give access to BMI2
1042instructions, and benchmarks were run on large Haswell servers. We collected
1043data both with an `lfence`-based mitigation and load hardening as presented
1044here. The summary is that mitigating with load hardening is 1.77x faster than
1045mitigating with `lfence`, and the overhead of load hardening compared to a
1046normal program is likely between a 10% overhead and a 50% overhead with most
1047large applications seeing a 30% overhead or less.
1048
1049| Benchmark                              | `lfence` | Load Hardening | Mitigated Speedup |
1050| -------------------------------------- | -------: | -------------: | ----------------: |
1051| Google microbenchmark suite            |   -74.8% |         -36.4% |          **2.5x** |
1052| Large server QPS (using ThinLTO & PGO) |   -62%   |         -29%   |          **1.8x** |
1053
1054Below is a visualization of the microbenchmark suite results which helps show
1055the distribution of results that is somewhat lost in the summary. The y-axis is
1056a log-scale speedup ratio of load hardening relative to `lfence` (up -> faster
1057-> better). Each box-and-whiskers represents one microbenchmark which may have
1058many different metrics measured. The red line marks the median, the box marks
1059the first and third quartiles, and the whiskers mark the min and max.
1060
1061![Microbenchmark result visualization](speculative_load_hardening_microbenchmarks.png)
1062
1063We don't yet have benchmark data on SPEC or the LLVM test suite, but we can
1064work on getting that. Still, the above should give a pretty clear
1065characterization of the performance, and specific benchmarks are unlikely to
1066reveal especially interesting properties.
1067
1068
1069### Future Work: Fine Grained Control and API-Integration
1070
1071The performance overhead of this technique is likely to be very significant and
1072something users wish to control or reduce. There are interesting options here
1073that impact the implementation strategy used.
1074
1075One particularly appealing option is to allow both opt-in and opt-out of this
1076mitigation at reasonably fine granularity such as on a per-function basis,
1077including intelligent handling of inlining decisions -- protected code can be
1078prevented from inlining into unprotected code, and unprotected code will become
1079protected when inlined into protected code. For systems where only a limited
1080set of code is reachable by externally controlled inputs, it may be possible to
1081limit the scope of mitigation through such mechanisms without compromising the
1082application's overall security. The performance impact may also be focused in a
1083few key functions that can be hand-mitigated in ways that have lower
1084performance overhead while the remainder of the application receives automatic
1085protection.
1086
1087For both limiting the scope of mitigation or manually mitigating hot functions,
1088there needs to be some support for mixing mitigated and unmitigated code
1089without completely defeating the mitigation. For the first use case, it would
1090be particularly desirable that mitigated code remains safe when being called
1091during misspeculation from unmitigated code.
1092
1093For the second use case, it may be important to connect the automatic
1094mitigation technique to explicit mitigation APIs such as what is described in
1095http://wg21.link/p0928 (or any other eventual API) so that there is a clean way
1096to switch from automatic to manual mitigation without immediately exposing a
1097hole. However, the design for how to do this is hard to come up with until the
1098APIs are better established. We will revisit this as those APIs mature.
1099