==============
BPF Design Q&A
==============

BPF extensibility and applicability to networking, tracing and security
in the Linux kernel, together with several user space implementations of
the BPF virtual machine, have led to a number of misunderstandings about
what BPF actually is. This short Q&A is an attempt to address that and to
outline the direction in which BPF is heading long term.

.. contents::
    :local:
    :depth: 3

Questions and Answers
=====================

Q: Is BPF a generic instruction set similar to x64 and arm64?
--------------------------------------------------------------
A: NO.

Q: Is BPF a generic virtual machine?
-------------------------------------
A: NO.

BPF is a generic instruction set *with* C calling convention.
--------------------------------------------------------------

Q: Why was the C calling convention chosen?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A: Because BPF programs are designed to run in the Linux kernel,
which is written in C. Hence BPF defines an instruction set that is
compatible with the two most used architectures, x64 and arm64 (while
taking into consideration important quirks of other architectures), and
defines a calling convention that is compatible with the C calling
convention of the Linux kernel on those architectures.

Q: Can multiple return values be supported in the future?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A: NO. BPF allows only register R0 to be used as the return value.

Q: Can more than 5 function arguments be supported in the future?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A: NO. The BPF calling convention only allows registers R1-R5 to be used
as arguments. BPF is not a standalone instruction set
(unlike the x64 ISA, which allows msft, cdecl and other conventions).
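
For reference, the fixed register roles implied by this calling convention
(as described in the kernel's own BPF instruction documentation) are
roughly::

    R0      - return value from helper functions, and exit value of the program
    R1 - R5 - arguments from the BPF program to an in-kernel helper function
    R6 - R9 - callee-saved registers, preserved across helper function calls
    R10     - read-only frame pointer used to access the BPF stack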

Q: Can BPF programs access the instruction pointer or return address?
----------------------------------------------------------------------
A: NO.

Q: Can BPF programs access the stack pointer?
----------------------------------------------
A: NO.

Only the frame pointer (register R10) is accessible.
From the compiler's point of view it's necessary to have a stack pointer.
For example, LLVM defines register R11 as the stack pointer in its
BPF backend, but it makes sure that the generated code never uses it.

Q: Does the C calling convention diminish possible use cases?
--------------------------------------------------------------
A: YES.

The BPF design forces the addition of major functionality in the form
of kernel helper functions and kernel objects like BPF maps, with
seamless interoperability between them. It lets the kernel call into
BPF programs, and lets programs call kernel helpers, with zero overhead,
as if all of them were native C code. That is particularly the case
for JITed BPF programs, which are indistinguishable from
native kernel C code.
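
As a minimal illustration of that interoperability (a sketch using common
libbpf conventions; the map and program names are made up), a program
calling the bpf_map_lookup_elem() helper reads much like ordinary kernel
C code::

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* A one-element array map shared with user space. */
    struct {
            __uint(type, BPF_MAP_TYPE_ARRAY);
            __uint(max_entries, 1);
            __type(key, __u32);
            __type(value, __u64);
    } pkt_count SEC(".maps");

    SEC("xdp")
    int count_packets(struct xdp_md *ctx)
    {
            __u32 key = 0;
            /* Helper call: R1 = &pkt_count, R2 = &key, result in R0. */
            __u64 *val = bpf_map_lookup_elem(&pkt_count, &key);

            if (val)
                    __sync_fetch_and_add(val, 1);
            return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";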

Q: Does it mean that 'innovative' extensions to BPF code are disallowed?
-------------------------------------------------------------------------
A: Soft yes.

At least for now, until the BPF core has support for
bpf-to-bpf calls, indirect calls, loops, global variables,
jump tables, read-only sections, and all other normal constructs
that C code can produce.

Q: Can loops be supported in a safe way?
-----------------------------------------
A: It's not clear yet.

BPF developers are trying to find a way to
support bounded loops.

Q: What are the verifier limits?
---------------------------------
A: The only limit known to the user space is BPF_MAXINSNS (4096).
It's the maximum number of instructions that an unprivileged bpf
program can have. The verifier has various internal limits, like the
maximum number of instructions that can be explored during program
analysis. Currently, that limit is set to 1 million, which essentially
means that the largest program can consist of 1 million NOP instructions.
There is a limit on the maximum number of subsequent branches, a limit
on the number of nested bpf-to-bpf calls, a limit on the number of
verifier states per instruction, and a limit on the number of maps used
by the program. All these limits can be hit with a sufficiently complex
program. There are also non-numerical limits that can cause the program
to be rejected. The verifier used to recognize only pointer + constant
expressions. Now it can recognize pointer + bounded_register.
bpf_map_lookup_elem(key) had a requirement that 'key' must be
a pointer to the stack. Now, 'key' can be a pointer to a map value.
The verifier is steadily getting 'smarter'. The limits are
being removed. The only way to know that the program is going to
be accepted by the verifier is to try to load it.
The bpf development process guarantees that future kernel
versions will accept all bpf programs that were accepted by
earlier versions.
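
For illustration (a sketch using libbpf conventions; the map and field
names are made up), the following access relies on 'pointer +
bounded_register' arithmetic into a map value, which older verifiers
would have rejected::

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct scratch_value {
            __u8 buf[64];
    };

    struct {
            __uint(type, BPF_MAP_TYPE_ARRAY);
            __uint(max_entries, 1);
            __type(key, __u32);
            __type(value, struct scratch_value);
    } scratch SEC(".maps");

    SEC("xdp")
    int bounded_access(struct xdp_md *ctx)
    {
            __u32 key = 0;          /* 'key' lives on the program stack */
            struct scratch_value *v = bpf_map_lookup_elem(&scratch, &key);

            if (!v)
                    return XDP_PASS;

            /* Masking lets the verifier bound the register to [0, 63],
             * so the 'pointer + bounded_register' access below is safe.
             */
            __u32 idx = ctx->ingress_ifindex & 0x3f;
            v->buf[idx] = 1;
            return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";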


Instruction-level questions
----------------------------

Q: LD_ABS and LD_IND instructions vs C code
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Q: How come the LD_ABS and LD_IND instructions are present in BPF whereas
C code cannot express them and has to use builtin intrinsics?

A: This is an artifact of compatibility with classic BPF. Modern
networking code in BPF performs better without them.
See 'direct packet access'.
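
For comparison, a minimal sketch of direct packet access (libbpf
conventions; the program is illustrative only) that needs neither LD_ABS
nor LD_IND::

    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    SEC("xdp")
    int parse_eth(struct xdp_md *ctx)
    {
            void *data     = (void *)(long)ctx->data;
            void *data_end = (void *)(long)ctx->data_end;
            struct ethhdr *eth = data;

            /* The verifier requires an explicit bounds check before any
             * packet byte is dereferenced; the loads themselves are plain
             * BPF_LDX instructions, not LD_ABS/LD_IND.
             */
            if ((void *)(eth + 1) > data_end)
                    return XDP_DROP;

            return eth->h_proto == bpf_htons(ETH_P_IP) ? XDP_PASS : XDP_DROP;
    }

    char _license[] SEC("license") = "GPL";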

Q: BPF instructions mapping not one-to-one to native CPU
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Q: It seems that not all BPF instructions map one-to-one to native CPU
instructions. For example, why are BPF_JNE and other compare-and-jump
instructions not cpu-like?

A: This was necessary to avoid introducing flags into the ISA, which are
impossible to make generic and efficient across CPU architectures.

Q: Why doesn't the BPF_DIV instruction map to x64 div?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A: Because if we had picked a one-to-one relationship to x64, it would
have made it more complicated to support on arm64 and other
architectures. Also, it needs a div-by-zero runtime check.

Q: Why is there no BPF_SDIV for signed divide operation?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A: Because it would be rarely used. LLVM errors out in such a case and
prints a suggestion to use unsigned divide instead.

Q: Why does BPF have an implicit prologue and epilogue?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A: Because architectures like sparc have register windows, and in general
there are enough subtle differences between architectures that a naive
'store the return address onto the stack' won't work. Another reason is
that BPF has to be safe from division by zero (and from the legacy
exception path of the LD_ABS insn). Those instructions need to invoke
the epilogue and return implicitly.

Q: Why were BPF_JLT and BPF_JLE instructions not introduced in the beginning?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A: Because classic BPF didn't have them, and the BPF authors felt that a
compiler workaround would be acceptable. It turned out that programs lose
performance due to the lack of these compare instructions, so they were
added. These two instructions are a perfect example of what kind of new
BPF instructions are acceptable and can be added in the future: they
already had equivalent instructions in native CPUs.
New instructions that don't have a one-to-one mapping to HW instructions
will not be accepted.

Q: BPF 32-bit subregister requirements
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Q: BPF 32-bit subregisters have a requirement to zero the upper 32 bits
of BPF registers, which makes BPF an inefficient virtual machine for
32-bit CPU architectures and 32-bit HW accelerators. Can true 32-bit
registers be added to BPF in the future?

A: NO.

But some optimizations on zeroing the upper 32 bits of BPF registers are
available, and they can be leveraged to improve the performance of JITed
BPF programs for 32-bit architectures.

Starting with version 7, LLVM is able to generate instructions that operate
on 32-bit subregisters, provided the option -mattr=+alu32 is passed when
compiling a program. Furthermore, the verifier can now mark the
instructions for which zeroing the upper bits of the destination register
is required, and insert an explicit zero-extension (zext) instruction
(a mov32 variant). This means that for architectures without zext hardware
support, the JIT back-ends do not need to clear the upper bits for
subregisters written by alu32 instructions or narrow loads. Instead, the
back-ends simply need to support code generation for that mov32 variant,
and to override bpf_jit_needs_zext() to make it return "true" (in order to
enable zext insertion in the verifier).
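
A minimal sketch of that opt-in (the weak default lives in kernel/bpf/core.c
and returns false; an arch JIT provides its own definition)::

    /* In the arch's BPF JIT: ask the verifier to insert explicit zext
     * instructions instead of having the JIT clear the upper 32 bits
     * after every alu32 write or narrow load.
     */
    bool bpf_jit_needs_zext(void)
    {
            return true;
    }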

Note that it is possible for a JIT back-end to have partial hardware
support for zext. In that case, if verifier zext insertion is enabled,
it could lead to the insertion of unnecessary zext instructions. Such
instructions could be removed by creating a simple peephole inside the JIT
back-end: if one instruction has hardware support for zext and if the next
instruction is an explicit zext, then the latter can be skipped when doing
the code generation.
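
A rough sketch of such a peephole, inside the JIT's per-instruction loop
(hw_zexts_dst() is a hypothetical per-arch predicate; insn_is_zext() is the
small helper that recognizes the verifier-inserted mov32)::

    /* Hypothetical sketch only: skip the explicit zext when the
     * instruction just emitted already zero-extended its destination.
     */
    if (hw_zexts_dst(insn) && insn_is_zext(&insn[1])) {
            i++;            /* do not emit code for the zext at insn[1] */
            continue;
    }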

Q: Does BPF have a stable ABI?
-------------------------------
A: YES. BPF instructions, arguments to BPF programs, the set of helper
functions and their arguments, and the recognized return codes are all
part of the ABI. However, there is one specific exception for tracing
programs, which use helpers like bpf_probe_read() to walk kernel internal
data structures and are compiled against kernel internal headers. Both of
these kernel internals are subject to change and can break with newer
kernels, such that the program needs to be adapted accordingly.

Q: Are tracepoints part of the stable ABI?
-------------------------------------------
A: NO. Tracepoints are tied to internal implementation details hence they are
subject to change and can break with newer kernels. BPF programs need to change
accordingly when this happens.

Q: Are places where kprobes can attach part of the stable ABI?
---------------------------------------------------------------
A: NO. The places to which kprobes can attach are internal implementation
details, which means that they are subject to change and can break with
newer kernels. BPF programs need to change accordingly when this happens.

Q: How much stack space does a BPF program use?
------------------------------------------------
A: Currently all program types are limited to 512 bytes of stack
space, but the verifier computes the actual amount of stack used,
and both the interpreter and most JITed code consume only the
necessary amount.

Q: Can BPF be offloaded to HW?
-------------------------------
A: YES. BPF HW offload is supported by the NFP driver.

Q: Does the classic BPF interpreter still exist?
-------------------------------------------------
A: NO. Classic BPF programs are converted into extended BPF instructions.

Q: Can BPF call arbitrary kernel functions?
--------------------------------------------
A: NO. BPF programs can only call a set of helper functions which
is defined for every program type.

Q: Can BPF overwrite arbitrary kernel memory?
----------------------------------------------
A: NO.

Tracing bpf programs can *read* arbitrary memory with the bpf_probe_read()
and bpf_probe_read_str() helpers. Networking programs cannot read
arbitrary memory, since they don't have access to these helpers.
Programs can never read or write arbitrary memory directly.
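
A sketch of that read-only access, loosely modeled on samples/bpf/tracex1
(it needs the samples/bpf build environment because it includes
kernel-internal headers, and the attach point is itself not stable)::

    #include <linux/skbuff.h>
    #include <linux/netdevice.h>
    #include <uapi/linux/bpf.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    SEC("kprobe/__netif_receive_skb_core")
    int trace_skb(struct pt_regs *ctx)
    {
            struct sk_buff *skb = (struct sk_buff *)PT_REGS_PARM1(ctx);
            struct net_device *dev = NULL;
            char devname[IFNAMSIZ] = {};
            unsigned int len = 0;

            /* Kernel memory is only *read*, and only through helpers. */
            bpf_probe_read(&dev, sizeof(dev), &skb->dev);
            bpf_probe_read(&len, sizeof(len), &skb->len);
            bpf_probe_read_str(devname, sizeof(devname), dev->name);

            bpf_printk("skb on %s len %u", devname, len);
            return 0;
    }

    char _license[] SEC("license") = "GPL";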

Q: Can BPF overwrite arbitrary user memory?
--------------------------------------------
A: Sort-of.

Tracing BPF programs can overwrite the user memory
of the current task with bpf_probe_write_user(). Every time such a
program is loaded, the kernel will print a warning message, so
this helper is only useful for experiments and prototypes.
Tracing BPF programs are root only.

Q: New functionality via kernel modules?
-----------------------------------------
Q: Can BPF functionality such as new program or map types, new
helpers, etc. be added out of kernel module code?

A: NO.

Q: Is directly calling a kernel function an ABI?
-------------------------------------------------
Q: Some kernel functions (e.g. tcp_slow_start) can be called
by BPF programs. Do these kernel functions become an ABI?

A: NO.

The kernel function prototypes will change, and the bpf programs will be
rejected by the verifier. Also, for example, some of the bpf-callable
kernel functions have already been used by other kernel tcp
cc (congestion-control) implementations. If any of these kernel
functions changes, both the in-tree and out-of-tree kernel tcp cc
implementations have to be changed. The same goes for the bpf
programs: they have to be adjusted accordingly.

Q: Is attaching to arbitrary kernel functions an ABI?
------------------------------------------------------
Q: BPF programs can be attached to many kernel functions. Do these
kernel functions become part of the ABI?

A: NO.

The kernel function prototypes will change, and BPF programs attaching to
them will need to change. The BPF Compile Once - Run Everywhere (CO-RE)
approach should be used in order to make it easier to adapt your BPF
programs to different versions of the kernel.
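
A minimal sketch of that approach (libbpf CO-RE conventions; the attach
point and the field walk are illustrative only)::

    /* vmlinux.h is typically generated with:
     *   bpftool btf dump file /sys/kernel/btf/vmlinux format c > vmlinux.h
     */
    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>
    #include <bpf/bpf_core_read.h>

    SEC("fentry/do_nanosleep")
    int BPF_PROG(trace_nanosleep)
    {
            struct task_struct *task = (struct task_struct *)bpf_get_current_task();

            /* BPF_CORE_READ records the field accesses as relocations, so
             * libbpf can adjust the offsets for the running kernel at load
             * time instead of baking in one kernel's layout.
             */
            pid_t ppid = BPF_CORE_READ(task, real_parent, tgid);

            bpf_printk("do_nanosleep, parent tgid %d", ppid);
            return 0;
    }

    char _license[] SEC("license") = "GPL";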

Q: Does marking a function with BTF_ID make that function an ABI?
-------------------------------------------------------------------
A: NO.

The BTF_ID macro does not cause a function to become part of the ABI
any more than the EXPORT_SYMBOL_GPL macro does.
301