===========================================
Control Flow Integrity Design Documentation
===========================================

This page documents the design of the :doc:`ControlFlowIntegrity` schemes
supported by Clang.

Forward-Edge CFI for Virtual Calls
==================================

This scheme works by allocating, for each static type used to make a virtual
call, a region of read-only storage in the object file holding a bit vector
that maps onto the region of storage used for those virtual tables. Each
set bit in the bit vector corresponds to the `address point`_ for a virtual
table compatible with the static type for which the bit vector is being built.

For example, consider the following three C++ classes:

.. code-block:: c++

  struct A {
    virtual void f1();
    virtual void f2();
    virtual void f3();
  };

  struct B : A {
    virtual void f1();
    virtual void f2();
    virtual void f3();
  };

  struct C : A {
    virtual void f1();
    virtual void f2();
    virtual void f3();
  };

The scheme will cause the virtual tables for A, B and C to be laid out
consecutively:

.. csv-table:: Virtual Table Layout for A, B, C
  :header: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14

  A::offset-to-top, &A::rtti, &A::f1, &A::f2, &A::f3, B::offset-to-top, &B::rtti, &B::f1, &B::f2, &B::f3, C::offset-to-top, &C::rtti, &C::f1, &C::f2, &C::f3

The bit vectors for static types A, B and C will look like this:

.. csv-table:: Bit Vectors for A, B, C
  :header: Class, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14

  A, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0
  B, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0
  C, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0

Bit vectors are represented in the object file as byte arrays. By loading
from indexed offsets into the byte array and applying a mask, a program can
test bits from the bit set with a relatively short instruction sequence. Bit
vectors may overlap so long as they use different bits. For the full details,
see the `ByteArrayBuilder`_ class.

In this case, assuming A is laid out at offset 0 in bit 0, B at offset 0 in
bit 1 and C at offset 0 in bit 2, the byte array would look like this:

.. code-block:: c++

  char bits[] = { 0, 0, 1, 0, 0, 0, 0, 3, 0, 0, 0, 0, 5, 0, 0 };

To emit a virtual call, the compiler will assemble code that checks that
the object's virtual table pointer is in-bounds and aligned and that the
relevant bit is set in the bit vector.

For example on x86 a typical virtual call may look like this:

.. code-block:: none

  ca7fbb:       48 8b 0f                mov    (%rdi),%rcx
  ca7fbe:       48 8d 15 c3 42 fb 07    lea    0x7fb42c3(%rip),%rdx
  ca7fc5:       48 89 c8                mov    %rcx,%rax
  ca7fc8:       48 29 d0                sub    %rdx,%rax
  ca7fcb:       48 c1 c0 3d             rol    $0x3d,%rax
  ca7fcf:       48 3d 7f 01 00 00       cmp    $0x17f,%rax
  ca7fd5:       0f 87 36 05 00 00       ja     ca8511
  ca7fdb:       48 8d 15 c0 0b f7 06    lea    0x6f70bc0(%rip),%rdx
  ca7fe2:       f6 04 10 10             testb  $0x10,(%rax,%rdx,1)
  ca7fe6:       0f 84 25 05 00 00       je     ca8511
  ca7fec:       ff 91 98 00 00 00       callq  *0x98(%rcx)
  [...]
  ca8511:       0f 0b                   ud2

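In C++ terms (a readable sketch only; the compiler emits this inline), the
sequence above performs the following test. ``Start``, ``kMaxIndex``,
``Bits`` and ``kMask`` are hypothetical stand-ins for the constants
materialized by the two ``lea`` instructions, the ``cmp`` immediate and the
``testb`` mask, and a 64-bit ``uintptr_t`` is assumed:

.. code-block:: c++

  #include <cstdint>

  extern const uintptr_t Start;     // start of the vtable region (first lea)
  extern const uintptr_t kMaxIndex; // last valid index (cmp $0x17f)
  extern const char Bits[];         // the byte array (second lea)
  const char kMask = 0x10;          // this type's bit in each byte (testb)

  bool cfi_check_vcall(void **Obj) {
    uintptr_t VPtr = (uintptr_t)*Obj;             // mov (%rdi),%rcx
    uintptr_t Diff = VPtr - Start;                // sub %rdx,%rax
    // rol $0x3d is a rotate right by 3: it folds the 8-byte alignment
    // check into the range check, because the low bits of a misaligned
    // pointer end up in the high bits of the index.
    uintptr_t Index = (Diff >> 3) | (Diff << 61); // rol $0x3d,%rax
    if (Index > kMaxIndex)                        // cmp $0x17f,%rax; ja
      return false;
    return (Bits[Index] & kMask) != 0;            // testb; je
  }

Where this sketch returns ``false``, the generated code instead traps with
``ud2``.
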
The compiler relies on co-operation from the linker in order to assemble
the bit vectors for the whole program. It currently does this using LLVM's
`type metadata`_ mechanism together with link-time optimization.

.. _address point: https://itanium-cxx-abi.github.io/cxx-abi/abi.html#vtable-general
.. _type metadata: https://llvm.org/docs/TypeMetadata.html
.. _ByteArrayBuilder: https://llvm.org/docs/doxygen/html/structllvm_1_1ByteArrayBuilder.html

Optimizations
-------------

The scheme as described above is the fully general variant of the scheme.
Most of the time we are able to apply one or more of the following
optimizations to improve binary size or performance.

In fact, if you try the above example with the current version of the
compiler, you will probably find that it will not use the described virtual
table layout or machine instructions. Some of the optimizations we are about
to introduce cause the compiler to use a different layout or a different
sequence of machine instructions.

Stripping Leading/Trailing Zeros in Bit Vectors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If a bit vector contains leading or trailing zeros, we can strip them from
the vector. The compiler will emit code to check if the pointer is in range
of the region covered by ones, and perform the bit vector check using a
truncated version of the bit vector. For example, the bit vectors for our
example class hierarchy will be emitted like this:

.. csv-table:: Bit Vectors for A, B, C
  :header: Class, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14

  A,  ,  , 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1,  ,
  B,  ,  ,  ,  ,  ,  ,  , 1,  ,  ,  ,  ,  ,  ,
  C,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  , 1,  ,

Short Inline Bit Vectors
~~~~~~~~~~~~~~~~~~~~~~~~

If the vector is sufficiently short, we can represent it as an inline constant
on x86. This saves us a few instructions when reading the correct element
of the bit vector.

If the bit vector fits in 32 bits, the code looks like this:

.. code-block:: none

  dc2:       48 8b 03                mov    (%rbx),%rax
  dc5:       48 8d 15 14 1e 00 00    lea    0x1e14(%rip),%rdx
  dcc:       48 89 c1                mov    %rax,%rcx
  dcf:       48 29 d1                sub    %rdx,%rcx
  dd2:       48 c1 c1 3d             rol    $0x3d,%rcx
  dd6:       48 83 f9 03             cmp    $0x3,%rcx
  dda:       77 2f                   ja     e0b <main+0x9b>
  ddc:       ba 09 00 00 00          mov    $0x9,%edx
  de1:       0f a3 ca                bt     %ecx,%edx
  de4:       73 25                   jae    e0b <main+0x9b>
  de6:       48 89 df                mov    %rbx,%rdi
  de9:       ff 10                   callq  *(%rax)
  [...]
  e0b:       0f 0b                   ud2

Or if the bit vector fits in 64 bits:

.. code-block:: none

  11a6:       48 8b 03                mov    (%rbx),%rax
  11a9:       48 8d 15 d0 28 00 00    lea    0x28d0(%rip),%rdx
  11b0:       48 89 c1                mov    %rax,%rcx
  11b3:       48 29 d1                sub    %rdx,%rcx
  11b6:       48 c1 c1 3d             rol    $0x3d,%rcx
  11ba:       48 83 f9 2a             cmp    $0x2a,%rcx
  11be:       77 35                   ja     11f5 <main+0xb5>
  11c0:       48 ba 09 00 00 00 00    movabs $0x40000000009,%rdx
  11c7:       04 00 00
  11ca:       48 0f a3 ca             bt     %rcx,%rdx
  11ce:       73 25                   jae    11f5 <main+0xb5>
  11d0:       48 89 df                mov    %rbx,%rdi
  11d3:       ff 10                   callq  *(%rax)
  [...]
  11f5:       0f 0b                   ud2

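The 64-bit variant corresponds to the following sketch, where ``Start``
again stands in for the ``lea`` operand and the bit vector is the
``movabs`` immediate (bits 0, 3 and 42 set):

.. code-block:: c++

  #include <cstdint>

  extern const uintptr_t Start; // hypothetical: start of the vtable region

  bool cfi_check_inline(void **Obj) {
    uintptr_t Diff = (uintptr_t)*Obj - Start;
    uintptr_t Index = (Diff >> 3) | (Diff << 61); // rol $0x3d,%rcx
    if (Index > 0x2a)                             // cmp $0x2a,%rcx; ja
      return false;
    return (0x40000000009ULL >> Index) & 1;       // movabs; bt %rcx,%rdx
  }
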
If the bit vector consists of a single bit, there is only one possible
virtual table, and the check can consist of a single equality comparison:

.. code-block:: none

  9a2:       48 8b 03                mov    (%rbx),%rax
  9a5:       48 8d 0d a4 13 00 00    lea    0x13a4(%rip),%rcx
  9ac:       48 39 c8                cmp    %rcx,%rax
  9af:       75 25                   jne    9d6 <main+0x86>
  9b1:       48 89 df                mov    %rbx,%rdi
  9b4:       ff 10                   callq  *(%rax)
  [...]
  9d6:       0f 0b                   ud2

Virtual Table Layout
~~~~~~~~~~~~~~~~~~~~

The compiler lays out classes of disjoint hierarchies in separate regions
of the object file. At worst, bit vectors in disjoint hierarchies only
need to cover their disjoint hierarchy. But the closer that classes in
sub-hierarchies are laid out to each other, the smaller the bit vectors for
those sub-hierarchies need to be (see "Stripping Leading/Trailing Zeros in Bit
Vectors" above). The `GlobalLayoutBuilder`_ class is responsible for laying
out the globals efficiently to minimize the sizes of the underlying bitsets.

.. _GlobalLayoutBuilder: https://github.com/llvm/llvm-project/blob/main/llvm/include/llvm/Transforms/IPO/LowerTypeTests.h

Alignment
~~~~~~~~~

If all gaps between address points in a particular bit vector are multiples
of a power of 2, the compiler can compress the bit vector by strengthening
the alignment requirements of the virtual table pointer. For example, given
this class hierarchy:

.. code-block:: c++

  struct A {
    virtual void f1();
    virtual void f2();
  };

  struct B : A {
    virtual void f1();
    virtual void f2();
    virtual void f3();
    virtual void f4();
    virtual void f5();
    virtual void f6();
  };

  struct C : A {
    virtual void f1();
    virtual void f2();
  };

The virtual tables will be laid out like this:

.. csv-table:: Virtual Table Layout for A, B, C
  :header: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15

  A::offset-to-top, &A::rtti, &A::f1, &A::f2, B::offset-to-top, &B::rtti, &B::f1, &B::f2, &B::f3, &B::f4, &B::f5, &B::f6, C::offset-to-top, &C::rtti, &C::f1, &C::f2

Notice that each address point for A is separated by 4 words. This lets us
emit a compressed bit vector for A that looks like this:

.. csv-table::
  :header: 2, 6, 10, 14

  1, 1, 0, 1

At call sites, the compiler will strengthen the alignment requirements by
using a different rotate count. For example, on a 64-bit machine where the
address points are 4-word aligned (as in A from our example), the ``rol``
instruction may look like this:

.. code-block:: none

  dd2:       48 c1 c1 3b             rol    $0x3b,%rcx

Padding to Powers of 2
~~~~~~~~~~~~~~~~~~~~~~

Of course, this alignment scheme works best if the address points are
in fact aligned correctly. To make this more likely to happen, we insert
padding between virtual tables that in many cases aligns address points to
a power of 2. Specifically, our padding aligns virtual tables to the next
highest power of 2 bytes; because address points for specific base classes
normally appear at fixed offsets within the virtual table, this normally
has the effect of aligning the address points as well.

This scheme introduces tradeoffs between decreased space overhead for
instructions and bit vectors and increased overhead in the form of padding. We
therefore limit the amount of padding so that we align to no more than 128
bytes. This number was found experimentally to provide a good tradeoff.

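As a sketch, the padding rule computes a per-table alignment as follows
(the function is our illustration, not a compiler API):

.. code-block:: c++

  #include <cstdint>

  // Alignment for a virtual table: the next power of 2 greater than or
  // equal to its size, capped at 128 bytes.
  uint64_t vtableAlignment(uint64_t VTableSizeInBytes) {
    uint64_t Align = 1;
    while (Align < VTableSizeInBytes && Align < 128)
      Align <<= 1;
    return Align;
  }
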
Eliminating Bit Vector Checks for All-Ones Bit Vectors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the bit vector is all ones, the bit vector check is redundant; we simply
need to check that the address is in range and well aligned. This is more
likely to occur if the virtual tables are padded.

Forward-Edge CFI for Virtual Calls by Interleaving Virtual Tables
------------------------------------------------------------------

Bounov et al. proposed a novel approach that interleaves virtual tables in [1]_.
This approach is more efficient in terms of space because padding and bit vectors are no longer needed.
At the same time, it is also more efficient in terms of performance because in the interleaved layout
the address points of the virtual tables are consecutive, so the validity check of a virtual
table pointer is always a simple range check.

At a high level, the interleaving scheme consists of three steps: 1) split virtual table groups into
separate virtual tables, 2) order virtual tables by a pre-order traversal of the class hierarchy
and 3) interleave virtual tables.

The interleaving scheme implemented in LLVM is inspired by [1]_ but has its own
enhancements (more in `Interleave virtual tables`_).

.. [1] `Protecting C++ Dynamic Dispatch Through VTable Interleaving <https://cseweb.ucsd.edu/~lerner/papers/ivtbl-ndss16.pdf>`_. Dimitar Bounov, Rami Gökhan Kıcı, Sorin Lerner.

Split virtual table groups into separate virtual tables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Itanium C++ ABI glues multiple individual virtual tables for a class into a combined virtual table (virtual table group).
The interleaving scheme, however, can only work with individual virtual tables, so it must split the combined virtual tables first.
In comparison, the old scheme does not require the splitting, although it is more efficient when the combined virtual tables have been split.
The `GlobalSplit`_ pass is responsible for splitting combined virtual tables into individual ones.

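For example (our illustration), a class with two dynamic bases has a
virtual table group containing two component virtual tables, each with its
own address point:

.. code-block:: c++

  struct A { virtual void f(); };
  struct B { virtual void g(); };

  // C's virtual table group combines a primary virtual table (for the A
  // base, containing f) with a secondary virtual table (for the B base,
  // containing g). GlobalSplit separates the combined global into two
  // vtable globals so that each can be laid out independently.
  struct C : A, B {
    void f() override;
    void g() override;
  };
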
.. _GlobalSplit: https://github.com/llvm/llvm-project/blob/main/llvm/lib/Transforms/IPO/GlobalSplit.cpp

Order virtual tables by a pre-order traversal of the class hierarchy
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This step is common to both the old scheme described above and the interleaving scheme.
For the interleaving scheme, since the combined virtual tables have been split in the previous step,
this step ensures that for any class all the compatible virtual tables will appear consecutively.
For the old scheme, the same property may not hold since it may work on combined virtual tables.

For example, consider the following four C++ classes:

.. code-block:: c++

  struct A {
    virtual void f1();
  };

  struct B : A {
    virtual void f1();
    virtual void f2();
  };

  struct C : A {
    virtual void f1();
    virtual void f3();
  };

  struct D : B {
    virtual void f1();
    virtual void f2();
  };

This step will arrange the virtual tables for A, B, C, and D in the order of *vtable-of-A, vtable-of-B, vtable-of-D, vtable-of-C*.

Interleave virtual tables
~~~~~~~~~~~~~~~~~~~~~~~~~

This step is where the interleaving scheme deviates from the old scheme. Instead of laying out
whole virtual tables in the previously computed order, the interleaving scheme lays out table
entries of the virtual tables strategically to ensure the following properties:

(1) offset-to-top and RTTI fields layout property

The Itanium C++ ABI specifies that the offset-to-top and RTTI fields appear at fixed offsets
immediately before the address point. Note that libraries like libcxxabi do assume this property.

(2) virtual function entry layout property

For each virtual function, the distance between a virtual table entry for this function and the corresponding
address point is always the same. This property ensures that dynamic dispatch still works with the interleaved layout.

Note that the interleaving scheme in the CFI implementation guarantees both properties above whereas the original scheme proposed
in [1]_ only guarantees the second property.

To illustrate how the interleaving algorithm works, let us continue with the running example.
The algorithm first separates all the virtual table entries into two work lists. To do so,
it starts by allocating two work lists, one initialized with all the offset-to-top entries of virtual tables in the order
computed in the last step, the other initialized with all the RTTI entries in the same order.

.. csv-table:: Work list 1 layout
  :header: 0, 1, 2, 3

  A::offset-to-top, B::offset-to-top, D::offset-to-top, C::offset-to-top

.. csv-table:: Work list 2 layout
  :header: 0, 1, 2, 3

  &A::rtti, &B::rtti, &D::rtti, &C::rtti

Then for each virtual function the algorithm goes through all the virtual tables in the previously computed order
to collect all the related entries into a virtual function list.
After this step, there are the following virtual function lists:

.. csv-table:: f1 list
  :header: 0, 1, 2, 3

  &A::f1, &B::f1, &D::f1, &C::f1

.. csv-table:: f2 list
  :header: 0, 1

  &B::f2, &D::f2

.. csv-table:: f3 list
  :header: 0

  &C::f3

Next, the algorithm picks the longest remaining virtual function list and appends the whole list to the shorter work list,
until no function lists are left; it then pads the shorter work list so that the two are of the same length.
In the example, the f1 list will first be added to work list 1, then the f2 list will be added
to work list 2, and finally the f3 list will be added to work list 2. Since work list 1 now has one more entry than
work list 2, a padding entry is added to the latter. After this step, the two work lists look like this:

.. csv-table:: Work list 1 layout
  :header: 0, 1, 2, 3, 4, 5, 6, 7

  A::offset-to-top, B::offset-to-top, D::offset-to-top, C::offset-to-top, &A::f1, &B::f1, &D::f1, &C::f1

.. csv-table:: Work list 2 layout
  :header: 0, 1, 2, 3, 4, 5, 6, 7

  &A::rtti, &B::rtti, &D::rtti, &C::rtti, &B::f2, &D::f2, &C::f3, padding

Finally, the algorithm merges the two work lists into the interleaved layout by alternately
moving the head of each list to the final layout. After this step, the final interleaved layout looks like this:

.. csv-table:: Interleaved layout
  :header: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15

  A::offset-to-top, &A::rtti, B::offset-to-top, &B::rtti, D::offset-to-top, &D::rtti, C::offset-to-top, &C::rtti, &A::f1, &B::f2, &B::f1, &D::f2, &D::f1, &C::f3, &C::f1, padding

In the above interleaved layout, each virtual table's offset-to-top and RTTI entries are always adjacent and immediately
precede its address point, which shows that the layout has the first property.
For the second property, let us look at f2 as an example. In the interleaved layout,
there are two entries for f2: &B::f2 and &D::f2. The distance between &B::f2
and B's address point (the entry immediately after &B::rtti, which happens to hold D::offset-to-top) is 5 entries,
as is the distance between &D::f2 and D's address point (the entry immediately after &D::rtti).

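To make the algorithm concrete, here is a simplified sketch (our own
illustration, with entries modeled as strings and ``"padding"`` standing for
padding entries); applied to the work lists and function lists of the
running example, it reproduces the interleaved layout above:

.. code-block:: c++

  #include <algorithm>
  #include <string>
  #include <vector>

  using Entries = std::vector<std::string>;

  Entries interleave(Entries WL1,  // offset-to-top entries, pre-ordered
                     Entries WL2,  // RTTI entries, in the same order
                     std::vector<Entries> FnLists) {
    // Append the longest remaining virtual function list to the shorter
    // work list until no function lists are left.
    std::stable_sort(FnLists.begin(), FnLists.end(),
                     [](const Entries &A, const Entries &B) {
                       return A.size() > B.size();
                     });
    for (const Entries &L : FnLists) {
      Entries &Shorter = WL1.size() <= WL2.size() ? WL1 : WL2;
      Shorter.insert(Shorter.end(), L.begin(), L.end());
    }
    // Pad the shorter work list so both have the same length.
    while (WL1.size() < WL2.size()) WL1.push_back("padding");
    while (WL2.size() < WL1.size()) WL2.push_back("padding");
    // Merge by alternately taking the head of each work list.
    Entries Layout;
    for (size_t I = 0; I != WL1.size(); ++I) {
      Layout.push_back(WL1[I]);
      Layout.push_back(WL2[I]);
    }
    return Layout;
  }
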
Forward-Edge CFI for Indirect Function Calls
============================================

Under forward-edge CFI for indirect function calls, each unique function
type has its own bit vector, and at each call site we need to check that the
function pointer is a member of the function type's bit vector. This scheme
works in a similar way to forward-edge CFI for virtual calls, the distinction
being that we need to build bit vectors of function entry points rather than
of virtual tables.

Unlike when re-arranging global variables, we cannot re-arrange functions
in a particular order and base our calculations on the layout of the
functions' entry points, as we have no idea how large a particular function
will end up being (the function sizes could even depend on how we arrange
the functions). Instead, we build a jump table, which is a block of code
consisting of one branch instruction for each of the functions in the bit
set that branches to the target function, and redirect any taken function
addresses to the corresponding jump table entry. In this way, the distance
between function entry points is predictable and controllable. In the object
file's symbol table, the symbols for the target functions also refer to the
jump table entries, so that addresses taken outside the module will pass
any verification done inside the module.

In more concrete terms, suppose we have three functions ``f``, ``g``,
``h`` which are all of the same type, and a function ``foo`` that returns
their addresses:

.. code-block:: none

  f:
  mov 0, %eax
  ret

  g:
  mov 1, %eax
  ret

  h:
  mov 2, %eax
  ret

  foo:
  mov f, %eax
  mov g, %edx
  mov h, %ecx
  ret

Our jump table will (conceptually) look like this:

.. code-block:: none

  f:
  jmp .Ltmp0 ; 5 bytes
  int3       ; 1 byte
  int3       ; 1 byte
  int3       ; 1 byte

  g:
  jmp .Ltmp1 ; 5 bytes
  int3       ; 1 byte
  int3       ; 1 byte
  int3       ; 1 byte

  h:
  jmp .Ltmp2 ; 5 bytes
  int3       ; 1 byte
  int3       ; 1 byte
  int3       ; 1 byte

  .Ltmp0:
  mov 0, %eax
  ret

  .Ltmp1:
  mov 1, %eax
  ret

  .Ltmp2:
  mov 2, %eax
  ret

  foo:
  mov f, %eax
  mov g, %edx
  mov h, %ecx
  ret

Because the addresses of ``f``, ``g``, ``h`` are evenly spaced at a power-of-2
interval, and function types do not overlap (unlike class types with base
classes), we can normally apply the `Alignment`_ and `Eliminating Bit Vector
Checks for All-Ones Bit Vectors`_ optimizations, thus simplifying the check
at each call site to a range and alignment check.

Shared library support
======================

**EXPERIMENTAL**

The basic CFI mode described above assumes that the application is a
monolithic binary; or, at least, that all possible virtual/indirect call
targets and the entire class hierarchy are known at link time. The
cross-DSO mode, enabled with **-f[no-]sanitize-cfi-cross-dso**, relaxes
this requirement by allowing virtual and indirect calls to cross the
DSO boundary.

Assume the following setup: the binary consists of several
instrumented and several uninstrumented DSOs. Some of them may be
dlopen-ed/dlclose-d periodically, even frequently.

 - Calls made from uninstrumented DSOs are not checked and just work.
 - Calls inside any instrumented DSO are fully protected.
 - Calls between different instrumented DSOs are also protected, with
   a performance penalty (in addition to the monolithic CFI
   overhead).
 - Calls from an instrumented DSO to an uninstrumented one are
   unchecked and just work, with a performance penalty.
 - Calls from an instrumented DSO outside of any known DSO are
   detected as CFI violations.

In the monolithic scheme a call site is instrumented as

.. code-block:: none

  if (!InlinedFastCheck(f))
    abort();
  call *f

In the cross-DSO scheme it becomes

.. code-block:: none

  if (!InlinedFastCheck(f))
    __cfi_slowpath(CallSiteTypeId, f);
  call *f

CallSiteTypeId
--------------

``CallSiteTypeId`` is a stable process-wide identifier of the
call-site type. For a virtual call site, the type in question is the class
type; for an indirect function call it is the function signature. The
mapping from a type to an identifier is an ABI detail. In the current,
experimental, implementation the identifier of type T is calculated as
follows:

 - Obtain the mangled name for "typeinfo name for T".
 - Calculate the MD5 hash of the name as a string.
 - Reinterpret the first 8 bytes of the hash as a little-endian
   64-bit integer.

It is possible, but unlikely, that collisions in the
``CallSiteTypeId`` hashing will result in weaker CFI checks that would
still be conservatively correct.

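A minimal sketch of this computation (``md5`` is a stand-in declaration for
any MD5 implementation producing a 16-byte digest; the real code lives in
the compiler):

.. code-block:: c++

  #include <cstdint>
  #include <cstring>

  // Stand-in for an MD5 implementation producing a 16-byte digest.
  void md5(const char *Data, size_t Len, unsigned char Digest[16]);

  // Identifier of type T, computed from the mangled name of
  // "typeinfo name for T" (e.g. "_ZTS1A" for class A).
  uint64_t callSiteTypeId(const char *MangledName) {
    unsigned char Digest[16];
    md5(MangledName, strlen(MangledName), Digest);
    // Reinterpret the first 8 bytes of the hash as a little-endian
    // 64-bit integer.
    uint64_t Id = 0;
    for (int I = 7; I >= 0; --I)
      Id = (Id << 8) | Digest[I];
    return Id;
  }
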
CFI_Check
---------

In the general case, only the target DSO knows whether the call to
function ``f`` with type ``CallSiteTypeId`` is valid or not. To
export this information, every DSO implements

.. code-block:: none

  void __cfi_check(uint64 CallSiteTypeId, void *TargetAddr, void *DiagData)

This function provides external modules with access to CFI checks for
the targets inside this DSO. For each known ``CallSiteTypeId``, this
function performs an ``llvm.type.test`` with the corresponding type
identifier. It reports an error if the type is unknown, or if the
check fails. Depending on the values of the compiler flags
``-fsanitize-trap`` and ``-fsanitize-recover``, this function may
print an error, abort and/or return to the caller. ``DiagData`` is an
opaque pointer to the diagnostic information about the error, or
``null`` if the caller does not provide this information.

The basic implementation is a large switch statement over all values
of ``CallSiteTypeId`` supported by this DSO, and each case is similar to
the ``InlinedFastCheck()`` in the basic CFI mode.

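A sketch of what a generated ``__cfi_check`` might look like; the type
identifier and region constants are hypothetical, and the error path is
shown as a plain trap:

.. code-block:: c++

  #include <cstdint>

  extern const uintptr_t kRegionBegin; // hypothetical: targets for one type
  extern const uintptr_t kRegionSize;

  void __cfi_check(uint64_t CallSiteTypeId, void *TargetAddr,
                   void *DiagData) {
    uintptr_t Addr = (uintptr_t)TargetAddr;
    switch (CallSiteTypeId) {
    case 0x1234567890abcdefULL: // hypothetical identifier
      // The same range-and-alignment test as the inlined fast check.
      if (Addr - kRegionBegin < kRegionSize && (Addr & 7) == 0)
        return;
      break;
    // ... one case per CallSiteTypeId known to this DSO ...
    default:
      break; // unknown type
    }
    // Error path: the real implementation honors -fsanitize-trap and
    // -fsanitize-recover and may report through DiagData instead.
    (void)DiagData;
    __builtin_trap();
  }
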
CFI Shadow
----------

To route CFI checks to the target DSO's ``__cfi_check`` function, a
mapping from possible virtual/indirect call targets to the
corresponding ``__cfi_check`` functions is maintained. This mapping is
implemented as a sparse array of 2 bytes for every possible page (4096
bytes) of memory. The table is kept read-only most of the time.

There are 3 types of shadow values:

 - Address in a CFI-instrumented DSO.
 - Unchecked address (a “trusted” non-instrumented DSO). Encoded as
   value 0xFFFF.
 - Invalid address (everything else). Encoded as value 0.

For a CFI-instrumented DSO, a shadow value encodes the address of the
``__cfi_check`` function for all call targets in the corresponding memory
page. If ``Addr`` is the target address, and ``V`` is the shadow value, then
the address of ``__cfi_check`` is calculated as

.. code-block:: none

  __cfi_check = AlignUpTo(Addr, 4096) - (V + 1) * 4096

This works as long as ``__cfi_check`` is aligned by 4096 bytes and located
below any call targets in its DSO, but not more than 256MB apart from
them.

CFI_SlowPath
------------

The slow path check is implemented in a runtime support library as

.. code-block:: none

  void __cfi_slowpath(uint64 CallSiteTypeId, void *TargetAddr)
  void __cfi_slowpath_diag(uint64 CallSiteTypeId, void *TargetAddr, void *DiagData)

These functions load the shadow value for ``TargetAddr``, find the
address of ``__cfi_check`` as described above and call
it. ``DiagData`` is an opaque pointer to diagnostic data which is
passed verbatim to ``__cfi_check``; ``__cfi_slowpath`` passes
``nullptr`` instead.

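A sketch of the slow path, assuming a hypothetical ``ShadowBase`` pointing
at the shadow array described above:

.. code-block:: c++

  #include <cstdint>

  extern const uint16_t *ShadowBase; // hypothetical: base of the CFI shadow

  typedef void (*CfiCheckFn)(uint64_t, void *, void *);

  void __cfi_slowpath_diag(uint64_t CallSiteTypeId, void *TargetAddr,
                           void *DiagData) {
    uintptr_t Addr = (uintptr_t)TargetAddr;
    uint16_t V = ShadowBase[Addr / 4096];
    if (V == 0xFFFF) // unchecked ("trusted") uninstrumented DSO
      return;
    if (V == 0)      // invalid target: a CFI violation
      __builtin_trap();
    // Instrumented DSO: recover the address of its __cfi_check and call it.
    uintptr_t AlignedAddr = (Addr + 4095) & ~(uintptr_t)4095; // AlignUpTo
    CfiCheckFn Check =
        (CfiCheckFn)(AlignedAddr - ((uintptr_t)V + 1) * 4096);
    Check(CallSiteTypeId, TargetAddr, DiagData);
  }
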
The compiler-rt library contains reference implementations of the slow path
functions, but they have unresolvable issues with correctness and
performance in the handling of dlopen(). It is recommended that
platforms provide their own implementations, usually as part of libc
or libdl.

Position-independent executable requirement
-------------------------------------------

Cross-DSO CFI mode requires that the main executable be built as PIE.
In non-PIE executables the address of an external function (taken from
the main executable) is the address of that function’s PLT record in
the main executable. This would break the CFI checks.

Backward-edge CFI for return statements (RCFI)
==============================================

This section is a proposal. As of March 2017 it is not implemented.

Backward-edge control flow (`RET` instructions) can be hijacked
by overwriting the return address (`RA`) on the stack.
Various mitigation techniques (e.g. `SafeStack`_, `RFG`_, `Intel CET`_)
try to detect or prevent `RA` corruption on the stack.

RCFI enforces the expected control flow in several different ways described below.
RCFI relies heavily on LTO.

Leaf Functions
--------------
If `f()` is a leaf function (i.e. it has no calls
except maybe no-return calls) it can be called using a special calling convention
that stores `RA` in a dedicated register `R` before the `CALL` instruction.
`f()` does not spill `R` and does not use the `RET` instruction;
instead it uses the value in `R` to `JMP` to `RA`.

This flavour of CFI is *precise*, i.e. the function is guaranteed to return
to the point exactly following the call.

An alternative approach is to
copy `RA` from the stack to `R` in the first instruction of `f()`,
then `JMP` to `R`.
This approach is simpler to implement (it does not require changing the caller)
but weaker (there is a small window when `RA` is actually stored on the stack).

Functions called once
---------------------
Suppose `f()` is called in just one place in the program
(assuming we can verify this in LTO mode).
In this case we can replace the `RET` instruction with a `JMP` instruction
with the immediate constant for `RA`.
This will *precisely* enforce the return control flow no matter what is stored on the stack.

Another variant is to compare `RA` on the stack with the known constant and abort
if they don't match; then `JMP` to the known constant address.

Functions called in a small number of call sites
------------------------------------------------
We may extend the above approach to cases where `f()`
is called more than once (but still a small number of times).
With LTO we know all possible values of `RA` and we check them
one-by-one (or using binary search) against the value on the stack.
If a match is found, we `JMP` to the known constant address, otherwise abort.

This protection is *near-precise*, i.e. it guarantees that the control flow will
be transferred to one of the valid return addresses for this function,
but not necessarily to the point of the most recent `CALL`.

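In source-level terms (our illustration only; RCFI would emit this in the
backend, ending with a `JMP` to the matched constant instead of a `RET`),
the check for a function with two known call sites resembles:

.. code-block:: c++

  extern void *kRA1, *kRA2; // hypothetical: the valid return addresses,
                            // known at LTO time

  void f() {
    // Compare the on-stack RA against the known constants and abort on
    // a mismatch.
    void *RA = __builtin_return_address(0);
    if (RA != kRA1 && RA != kRA2)
      __builtin_trap();
    // ... function body ...
  }
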
General case
------------
For functions called multiple times a *return jump table* is constructed
in the same manner as jump tables for indirect function calls (see above).
The correct jump table entry (or its index) is passed by `CALL` to `f()`
(as an extra argument) and then spilled to the stack.
The `RET` instruction is replaced with a load of the jump table entry,
a jump table range check, and a `JMP` to the jump table entry.

This protection is also *near-precise*.

Returns from functions called indirectly
----------------------------------------

If a function is called indirectly, the return jump table is constructed for the
equivalence class of functions instead of a single function.

Cross-DSO calls
---------------
Consider two instrumented DSOs, `A` and `B`. `A` defines `f()` and `B` calls it.

This case is handled similarly to the cross-DSO scheme, using the slow path callback.

Non-goals
---------

RCFI does not protect `RET` instructions:
 * in non-instrumented DSOs,
 * in instrumented DSOs for functions that are called from non-instrumented DSOs,
 * embedded into other instructions (e.g. `0f4fc3 cmovg %ebx,%eax`).

.. _SafeStack: https://clang.llvm.org/docs/SafeStack.html
.. _RFG: https://xlab.tencent.com/en/2016/11/02/return-flow-guard
.. _Intel CET: https://software.intel.com/en-us/blogs/2016/06/09/intel-release-new-technology-specifications-protect-rop-attacks

Hardware support
================

We believe that the above design can be efficiently implemented in hardware.
A single new instruction added to an ISA would make it possible to perform the
forward-edge CFI check with fewer bytes per check (smaller code size overhead)
and potentially more efficiently. The current software-only instrumentation
requires at least 32 bytes per check (on x86_64); a hardware instruction would
probably need fewer than ~12 bytes.
Such an instruction would check that the argument pointer is in bounds and
properly aligned; if the check fails, it would either trap (in the monolithic
scheme) or call the slow path function (in the cross-DSO scheme).
The bit vector lookup is probably too complex for a hardware implementation.

.. code-block:: none

  // This instruction checks that 'Ptr'
  //  * is aligned by (1 << kAlignment) and
  //  * is inside [kRangeBeg, kRangeBeg+(kRangeSize<<kAlignment))
  // and if the check fails it jumps to the given target (slow path).
  //
  // 'Ptr' is a register, pointing to the virtual function table
  // or to the function which we need to check. We may require an explicit
  // fixed register to be used.
  // 'kAlignment' is a 4-bit constant.
  // 'kRangeSize' is a ~20-bit constant.
  // 'kRangeBeg' is a PC-relative constant (~28 bits)
  // pointing to the beginning of the allowed range for 'Ptr'.
  // 'kFailedCheckTarget': is a PC-relative constant (~28 bits)
  // representing the target to branch to when the check fails.
  // If kFailedCheckTarget==0, the process will trap
  // (monolithic binary scheme).
  // Otherwise it will jump to a handler that implements `CFI_SlowPath`
  // (cross-DSO scheme).
  CFI_Check(Ptr, kAlignment, kRangeSize, kRangeBeg, kFailedCheckTarget) {
    if (Ptr < kRangeBeg ||
        Ptr >= kRangeBeg + (kRangeSize << kAlignment) ||
        Ptr & ((1 << kAlignment) - 1))
      Jump(kFailedCheckTarget);
  }

An alternative and more compact encoding would not use `kFailedCheckTarget`
and would trap on check failure instead.
This would allow us to fit the instruction into **8-9 bytes**.
The cross-DSO checks would then be performed by a trap handler, and
performance-critical ones would have to be blacklisted and checked using the
software-only scheme.

Note that such a hardware extension would be complementary to checks on the
callee side, such as **Intel ENDBRANCH**.
Moreover, CFI would have two benefits over ENDBRANCH: a) precision, and b) the
ability to protect against invalid casts between polymorphic types.