History log of /linux/tools/testing/selftests/bpf/trace_helpers.c (Results 1 – 25 of 29)
Revision Date Author Comments
# 4d4992ff 10-Apr-2024 Jiri Olsa <jolsa@kernel.org>

selftests/bpf: Add read_trace_pipe_iter function

We have two printk tests that read trace_pipe in a non-blocking way
with the very same code. Move that into a new read_trace_pipe_iter
function.

The current read_trace_pipe is used from samples/bpf and needs to do
a blocking read and printf of the trace_pipe data, so use the new
read_trace_pipe_iter to implement that.

Both printk tests do early checks for the number of found messages
and can bail out earlier, but I did not find any speed difference
without that condition, so I did not complicate the change further.

Some of the samples/bpf programs use the read_trace_pipe function,
so I kept that interface untouched. I did not see any issues with
the affected samples/bpf programs other than a slight change in the
read_trace_pipe output. The current code uses puts, which adds a
newline after the printed string, so we would occasionally see an
extra newline. With this patch we read the output line by line, so
there is no need for puts and we can just use printf without the
extra newline.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/bpf/20240410140952.292261-1-jolsa@kernel.org
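
For illustration, the per-line reading that both the iterator and the
blocking read_trace_pipe build on boils down to something like the
sketch below; the function name, callback signature, and tracefs path
here are assumptions for illustration, not the actual
read_trace_pipe_iter interface.

  #include <stdio.h>

  /* Call cb for every line read from trace_pipe; stop once cb returns
   * non-zero. This sketch uses a plain blocking read, while the printk
   * tests read in a non-blocking way.
   */
  static int for_each_trace_pipe_line(int (*cb)(const char *line, void *ctx),
                                      void *ctx)
  {
          FILE *f = fopen("/sys/kernel/tracing/trace_pipe", "r");
          char line[4096];

          if (!f)
                  return -1;
          while (fgets(line, sizeof(line), f)) {
                  if (cb(line, ctx))
                          break;
          }
          fclose(f);
          return 0;
  }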


# d1f02581 26-Mar-2024 Yonghong Song <yonghong.song@linux.dev>

selftests/bpf: Add {load,search}_kallsyms_custom_local()

These two functions allow selftests to load and search kallsyms
based on their own specific compare functions.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240326041513.1199440-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
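
As a rough sketch of the custom-compare search idea (the struct layout,
function names, and the prefix comparator below are illustrative
assumptions, not the actual selftest API):

  #include <stdlib.h>
  #include <string.h>

  struct ksym { unsigned long addr; char *name; };   /* illustrative layout */

  /* search a sorted ksym array with a caller-supplied comparator */
  static struct ksym *search_kallsyms_custom(struct ksym *syms, int cnt,
                                             const void *key,
                                             int (*cmp)(const void *key,
                                                        const void *elem))
  {
          return bsearch(key, syms, cnt, sizeof(*syms), cmp);
  }

  /* example comparator: match a symbol whose name starts with the key,
   * assuming the array is sorted by name
   */
  static int ksym_name_prefix_cmp(const void *key, const void *elem)
  {
          const struct ksym *sym = elem;

          return strncmp(key, sym->name, strlen(key));
  }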


# 9475dacb 26-Mar-2024 Yonghong Song <yonghong.song@linux.dev>

selftests/bpf: Refactor trace helper func load_kallsyms_local()

Refactor trace helper function load_kallsyms_local() such that
it invokes a common function with a compare function as input.
The common function will be used later for other local functions.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240326041508.1199239-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
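
The "common function with a compare function as input" shape can be
sketched roughly as follows; the names and the parsing details are
illustrative assumptions, not the real trace_helpers.c code.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  struct ksym { unsigned long addr; char *name; };   /* illustrative layout */

  /* sort by address; a name-based comparator can be passed instead */
  static int ksym_addr_cmp(const void *a, const void *b)
  {
          const struct ksym *ka = a, *kb = b;

          return ka->addr < kb->addr ? -1 : ka->addr > kb->addr;
  }

  /* parse /proc/kallsyms and sort the result with the supplied comparator */
  static int load_kallsyms_common(struct ksym *syms, int max_syms,
                                  int (*cmp)(const void *, const void *))
  {
          FILE *f = fopen("/proc/kallsyms", "r");
          char buf[256], name[128], type;
          unsigned long addr;
          int i = 0;

          if (!f)
                  return -1;
          while (i < max_syms && fgets(buf, sizeof(buf), f)) {
                  if (sscanf(buf, "%lx %c %127s", &addr, &type, name) != 3)
                          continue;
                  syms[i].addr = addr;
                  syms[i].name = strdup(name);
                  i++;
          }
          fclose(f);
          qsort(syms, i, sizeof(*syms), cmp);
          return i;
  }

Calling load_kallsyms_common(syms, n, ksym_addr_cmp) would give the
address-sorted view; passing a name comparator gives a view suitable
for name-based searches like the one sketched in the previous entry.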


# a68b50f4 02-Feb-2024 Shung-Hsi Yu <shung-hsi.yu@suse.com>

selftests/bpf: trace_helpers.c: do not use poisoned type

After commit c698eaebdf47 ("selftests/bpf: trace_helpers.c: Optimize
kallsyms cache") trace_helpers.c now includes libbpf_internal.h, and
thus can no longer use the u32 type (among others) since such types are poisoned
in libbpf_internal.h. Replace u32 with __u32 to fix the following error
when building trace_helpers.c on powerpc:

error: attempt to use poisoned "u32"

Fixes: c698eaebdf47 ("selftests/bpf: trace_helpers.c: Optimize kallsyms cache")
Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20240202095559.12900-1-shung-hsi.yu@suse.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
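
For reference, the identifier poisoning works roughly like this (a toy
illustration, not the actual libbpf_internal.h contents):

  #include <linux/types.h>     /* __u32 and friends */

  #pragma GCC poison u32       /* any later use of the identifier u32 is an error */

  __u32 sym_cnt;               /* fine: only the bare kernel-style name is poisoned */
  /* u32 sym_cnt; */           /* would fail with: attempt to use poisoned "u32" */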


# a28b1ba2 07-Sep-2023 Rong Tao <rongtao@cestc.cn>

selftests/bpf: trace_helpers.c: Add a global ksyms initialization mutex

As Jirka said [0], we just need to make sure that global ksyms
initialization won't race.

[0] https://lore.kernel.org/lkml/ZPCbAs3ItjRd8XVh@krava/

Signed-off-by: Rong Tao <rongtao@cestc.cn>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/bpf/tencent_5D0A837E219E2CFDCB0495DAD7D5D1204407@qq.com
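
The shape of that serialization is roughly the following; the variable
and function names here are placeholders, not the actual trace_helpers.c
symbols.

  #include <pthread.h>
  #include <stddef.h>

  struct ksyms;                /* opaque symbol cache */

  static pthread_mutex_t ksyms_mutex = PTHREAD_MUTEX_INITIALIZER;
  static struct ksyms *global_ksyms;

  /* only the first caller performs the load; concurrent callers wait */
  static struct ksyms *load_global_ksyms_once(struct ksyms *(*loader)(void))
  {
          pthread_mutex_lock(&ksyms_mutex);
          if (!global_ksyms)
                  global_ksyms = loader();
          pthread_mutex_unlock(&ksyms_mutex);
          return global_ksyms;
  }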


# c698eaeb 07-Sep-2023 Rong Tao <rongtao@cestc.cn>

selftests/bpf: trace_helpers.c: Optimize kallsyms cache

The static ksyms array often runs into problems because the number of
symbols exceeds the MAX_SYMS limit. Changing MAX_SYMS from 300000 to
400000 in commit e76a014334a6 ("selftests/bpf: Bump and validate
MAX_SYMS") mitigates the problem somewhat, but it is not a complete fix.

This commit uses dynamic memory allocation, which completely solves the
problem caused by the limitation of the number of kallsyms. At the same
time, add APIs:

load_kallsyms_local()
ksym_search_local()
ksym_get_addr_local()
free_kallsyms_local()

These are used to solve the problem of selftests/bpf updating kallsyms
after attaching new symbols during testmod testing.

Signed-off-by: Rong Tao <rongtao@cestc.cn>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/bpf/tencent_C9BDA68F9221F21BE4081566A55D66A9700A@qq.com
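
The dynamic allocation amounts to growing the symbol array on demand
instead of relying on a fixed cap; a condensed sketch with illustrative
names and layout (the real code's details differ):

  #include <stdlib.h>
  #include <string.h>

  struct ksym { unsigned long addr; char *name; };
  struct ksyms { struct ksym *syms; size_t cnt, cap; };

  /* append one symbol, doubling the backing array when it is full */
  static int ksyms_add(struct ksyms *ks, unsigned long addr, const char *name)
  {
          if (ks->cnt == ks->cap) {
                  size_t new_cap = ks->cap ? ks->cap * 2 : 1024;
                  struct ksym *tmp = realloc(ks->syms, new_cap * sizeof(*tmp));

                  if (!tmp)
                          return -1;
                  ks->syms = tmp;
                  ks->cap = new_cap;
          }
          ks->syms[ks->cnt].addr = addr;
          ks->syms[ks->cnt].name = strdup(name);
          return ks->syms[ks->cnt++].name ? 0 : -1;
  }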


# e76a0143 06-Jul-2023 Björn Töpel <bjorn@rivosinc.com>

selftests/bpf: Bump and validate MAX_SYMS

BPF tests that load /proc/kallsyms, e.g. bpf_cookie, will perform a
buffer overrun if the number of syms on the system is larger than
MAX_SYMS.

Bump the MAX_SYMS to 400000, and add a runtime check that bails out if
the maximum is reached.

Signed-off-by: Björn Töpel <bjorn@rivosinc.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/bpf/20230706142228.1128452-1-bjorn@kernel.org


# 88dc8b36 31-Mar-2023 Jiri Olsa <jolsa@kernel.org>

selftests/bpf: Add read_build_id function

Add a read_build_id function that parses out the build id from a
specified binary.

It will replace extract_build_id and will also be used in the
following changes.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20230331093157.1749137-3-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>


# ab4c15fe 13-Mar-2023 Ross Zwisler <zwisler@google.com>

selftests/bpf: use canonical ftrace path

The canonical location for the tracefs filesystem is at
/sys/kernel/tracing.

But, from Documentation/trace/ftrace.rst:

Before 4.1, all ftrace tracing control files were within the debugfs
file system, which is typically located at /sys/kernel/debug/tracing.
For backward compatibility, when mounting the debugfs file system,
the tracefs file system will be automatically mounted at:

/sys/kernel/debug/tracing

Many tests in the bpf selftest code still refer to this older debugfs
path, so let's update them to avoid confusion.

Signed-off-by: Ross Zwisler <zwisler@google.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Link: https://lore.kernel.org/r/20230313205628.1058720-3-zwisler@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
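
A common way to stay compatible with both layouts is to probe the
canonical path first; a minimal sketch (the helper name is made up):

  #include <unistd.h>

  static const char *tracefs_root(void)
  {
          /* canonical tracefs mount since 4.1, debugfs alias as fallback */
          if (!access("/sys/kernel/tracing/trace", F_OK))
                  return "/sys/kernel/tracing";
          return "/sys/kernel/debug/tracing";
  }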


# 10705b2b 25-Oct-2022 Jiri Olsa <jolsa@kernel.org>

selftests/bpf: Add load_kallsyms_refresh function

Add a load_kallsyms_refresh function to re-read symbols from the
/proc/kallsyms file.

This will be needed to get proper function addresses from the
bpf_testmod.ko module, which is loaded/unloaded several times
during the test run, so the symbols might already be stale when
we need to use them.

Acked-by: Song Liu <song@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20221025134148.3300700-6-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>


# 2d0df019 05-Apr-2022 Yuntao Wang <ytcoode@gmail.com>

selftests/bpf: Fix file descriptor leak in load_kallsyms()

Currently, if sym_cnt > 0, it just returns and does not close the file; fix that.

Signed-off-by: Yuntao Wang <ytcoode@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220405145711.49543-1-ytcoode@gmail.com
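
A toy illustration of this bug class and its fix, i.e. every early
return path has to release the FILE (not the actual load_kallsyms()
code):

  #include <stdio.h>
  #include <errno.h>

  static int sym_cnt;

  static int load_syms_once(void)
  {
          FILE *f = fopen("/proc/kallsyms", "r");
          char buf[256];

          if (!f)
                  return -ENOENT;
          if (sym_cnt > 0) {
                  fclose(f);      /* the close that was missing on this path */
                  return 0;
          }
          while (fgets(buf, sizeof(buf), f))
                  sym_cnt++;
          fclose(f);
          return 0;
  }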


# f7a11eec 16-Mar-2022 Jiri Olsa <jolsa@kernel.org>

selftests/bpf: Add kprobe_multi attach test

Add a kprobe_multi attach test that uses the new fprobe interface to
attach a kprobe program to multiple functions.

The test attaches programs to the bpf_fentry_test* functions and uses
a single trampoline program with bpf_prog_test_run to trigger the
bpf_fentry_test* functions.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220316122419.933957-11-jolsa@kernel.org


# cdb5ed97 27-Jan-2022 Yonghong Song <yhs@fb.com>

selftests/bpf: fix a clang compilation error

When building selftests/bpf with clang
make -j LLVM=1
make -C tools/testing/selftests/bpf -j LLVM=1
I hit the following compilation error:

trace_helpers.c:152:9: error: variable 'found' is used uninitialized whenever 'while' loop exits because its condition is false [-Werror,-Wsometimes-uninitialized]
while (fscanf(f, "%zx-%zx %s %zx %*[^\n]\n", &start, &end, buf, &base) == 4) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
trace_helpers.c:161:7: note: uninitialized use occurs here
if (!found)
^~~~~
trace_helpers.c:152:9: note: remove the condition if it is always true
while (fscanf(f, "%zx-%zx %s %zx %*[^\n]\n", &start, &end, buf, &base) == 4) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1
trace_helpers.c:145:12: note: initialize the variable 'found' to silence this warning
bool found;
^
= false

It is possible that for sane /proc/self/maps we may never hit the above issue
in practice. But let us initialize variable 'found' properly to silence the
compilation error.

Signed-off-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/r/20220127163726.1442032-1-yhs@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>


# ff943683 26-Jan-2022 Andrii Nakryiko <andrii@kernel.org>

selftests/bpf: fix uprobe offset calculation in selftests

Fix how selftests determine the relative offset of a function that is
uprobed. Previously, there was an assumption that the uprobed function
is always in the first executable region, which is not always the case
(libbpf CI hits this case now), so the get_base_addr() approach in
isolation doesn't work anymore. Teach get_uprobe_offset() to determine
the correct memory mapping and calculate the uprobe offset correctly.

While at it, I merged together two implementations of
get_uprobe_offset() helper, moving powerpc64-specific logic inside (had
to add extra {} block to avoid unused variable error for insn).

Also ensured that uprobed functions are never inlined, but are still
static (and thus local to each selftest), by using a no-op asm volatile
block internally. I didn't want to keep them global __weak, because some
tests use uprobe's ref counter offset (to test USDT-like logic) which is
not compatible with non-refcounted uprobe. So it's nicer to have each
test uprobe target local to the file and guaranteed to not be inlined or
skipped by the compiler (which can happen with static functions,
especially if compiling selftests with -O2).

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220126193058.3390292-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
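
The corrected approach amounts to finding the executable mapping that
actually contains the target address and translating through that
mapping's file offset. A self-contained sketch of the idea (not the
selftests' get_uprobe_offset() itself):

  #include <stdio.h>
  #include <stdint.h>
  #include <sys/types.h>

  /* return the file offset backing addr in our own mappings, or -1 */
  static ssize_t uprobe_offset_of(const void *addr)
  {
          size_t start, end, file_off;
          char perm[5];
          ssize_t ret = -1;
          FILE *f = fopen("/proc/self/maps", "r");

          if (!f)
                  return -1;
          while (fscanf(f, "%zx-%zx %4s %zx %*[^\n]\n",
                        &start, &end, perm, &file_off) == 4) {
                  if (perm[2] == 'x' &&
                      (uintptr_t)addr >= start && (uintptr_t)addr < end) {
                          ret = (uintptr_t)addr - start + file_off;
                          break;
                  }
          }
          fclose(f);
          return ret;
  }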


# 20d1b54a 22-Oct-2021 Song Liu <songliubraving@fb.com>

selftests/bpf: Guess function end for test_get_branch_snapshot

Functions in modules can appear in /proc/kallsyms in random order.

ffffffffa02608a0 t bpf_testmod_loop_test
ffffffffa02600c0 t __traceiter_bpf_testmod_test_writable_bare
ffffffffa0263b60 d __tracepoint_bpf_testmod_test_write_bare
ffffffffa02608c0 T bpf_testmod_test_read
ffffffffa0260d08 t __SCT__tp_func_bpf_testmod_test_writable_bare
ffffffffa0263300 d __SCK__tp_func_bpf_testmod_test_read
ffffffffa0260680 T bpf_testmod_test_write
ffffffffa0260860 t bpf_testmod_test_mod_kfunc

Therefore, we cannot reliably use kallsyms_find_next() to find the end of
a function. Replace it with a simple guess (start + 128). This is good
enough for this test.

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211022234814.318457-1-songliubraving@fb.com


# 025bd7c7 10-Sep-2021 Song Liu <songliubraving@fb.com>

selftests/bpf: Add test for bpf_get_branch_snapshot

This test uses bpf_get_branch_snapshot from a fexit program. The test uses
a target function (bpf_testmod_loop_test) and compares the records against
kallsyms. If there aren't enough records matching kallsyms, the test fails.

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20210910183352.3151445-4-songliubraving@fb.com


# 4bd11e08 15-Aug-2021 Andrii Nakryiko <andrii@kernel.org>

selftests/bpf: Add ref_ctr_offset selftests

Extend attach_probe selftests to specify ref_ctr_offset for uprobe/uretprobe
and validate that its value is incremented from zero.

It turns out that once a uprobe is attached with ref_ctr_offset, a
uretprobe for the same location/function *has* to use ref_ctr_offset as
well, otherwise perf_event_open() fails with -EINVAL. So this test uses
ref_ctr_offset for both the uprobe and the uretprobe, even though for
the purpose of the test the uprobe alone would be enough.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210815070609.987780-17-andrii@kernel.org


# a549aaa6 15-Aug-2021 Andrii Nakryiko <andrii@kernel.org>

selftests/bpf: Extract uprobe-related helpers into trace_helpers.{c,h}

Extract two helpers used for working with uprobes into trace_helpers.{c,h} to
be re-used between multiple uprobe-using selftests. Also rename get_offset()
to the more appropriate get_uprobe_offset().

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210815070609.987780-14-andrii@kernel.org


# 2c2f6abe 29-Sep-2020 Hao Luo <haoluo@google.com>

selftests/bpf: Ksyms_btf to test typed ksyms

Add selftests for typed ksyms. Test two types of ksyms: one is a struct,
the other is a plain int. This exercises two paths in the kernel: struct
ksyms are converted into PTR_TO_BTF_ID by the verifier, while int-typed
ksyms are converted into PTR_TO_MEM.

Signed-off-by: Hao Luo <haoluo@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200929235049.2533242-4-haoluo@google.com


# 24a6034a 21-Mar-2020 Daniel T. Lee <danieltimlee@gmail.com>

samples, bpf: Move read_trace_pipe to trace_helpers

To reduce the reliance of trace samples (trace*_user) on bpf_load,
move read_trace_pipe to trace_helpers. By moving this bpf_loader helper
elsewhere, trace functions can be easily migrated to libbpf.

Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200321100424.1593964-2-danieltimlee@gmail.com


# 47da6e4d 23-Jul-2019 Andrii Nakryiko <andriin@fb.com>

selftests/bpf: remove perf buffer helpers

libbpf's perf_buffer API supersedes the helpers in trace_helpers.h.
Remove those helpers now that all existing users have already been
moved to the perf_buffer API.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>


# 92bd6820 26-May-2019 Chang-Hsien Tsai <luke.tw@gmail.com>

bpf: style fix in while(!feof()) loop

Use fgets() as the while loop condition.

Signed-off-by: Chang-Hsien Tsai <luke.tw@gmail.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
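
I.e. the preferred form looks like this (illustrative):

  #include <stdio.h>

  static void dump_file(FILE *f)
  {
          char buf[4096];

          /* fgets() returns NULL on EOF or error, so it can drive the loop */
          while (fgets(buf, sizeof(buf), f))
                  fputs(buf, stdout);
  }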


# 0979ff79 03-Apr-2019 Daniel T. Lee <danieltimlee@gmail.com>

selftests/bpf: ksym_search won't check symbols exists

Currently, ksym_search in trace_helpers does not check whether symbols
exist at all.

In ksym_search, when a symbol is not found, it returns &syms[0] (_stext).
But when the kernel symbols are not loaded, it returns NULL, which is
not the desired behavior.

This commit adds verification logic that checks whether symbols are
loaded prior to the symbol search.

Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>


# 3dca2115 21-Oct-2018 Daniel Borkmann <daniel@iogearbox.net>

bpf, libbpf: simplify and cleanup perf ring buffer walk

Simplify bpf_perf_event_read_simple() a bit and fix up some minor
things along the way: the return code in the header is not of type
int but enum bpf_perf_event_ret instead. Once the callback has
indicated to break out of the loop walking event data, the event also
needs to be consumed in data_tail since it has been processed already.

Moreover, the bpf_perf_event_print_t callback should avoid void * as
we actually get a pointer to struct perf_event_header, and thus
applications can make use of container_of() to have type checks.
The walk also doesn't have to use a modulo op since the ring size is
required to be a power of two.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
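
A heavily simplified sketch of the resulting walk; the ring struct here
is hypothetical and records wrapping past the end of the buffer are
ignored, which real code must handle.

  #include <linux/perf_event.h>
  #include <bpf/libbpf.h>

  struct ring {                  /* hypothetical bookkeeping, not libbpf's */
          void *base;            /* start of the mmap'ed data pages */
          __u64 size;            /* data size, required to be a power of two */
          __u64 head, tail;
  };

  typedef enum bpf_perf_event_ret
          (*event_fn)(struct perf_event_header *hdr, void *ctx);

  static void walk_ring(struct ring *r, event_fn fn, void *ctx)
  {
          while (r->tail != r->head) {
                  /* power-of-two size: mask instead of a modulo op */
                  struct perf_event_header *hdr =
                          (void *)((char *)r->base + (r->tail & (r->size - 1)));
                  enum bpf_perf_event_ret ret = fn(hdr, ctx);

                  /* the record reached the callback, so consume it even on break */
                  r->tail += hdr->size;
                  if (ret != LIBBPF_PERF_EVENT_CONT)
                          break;
          }
  }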


# 1bd70d2e 18-Oct-2018 Peng Hao <peng.hao2@zte.com.cn>

selftests/bpf: fix file resource leak in load_kallsyms

FILE pointer variable f is opened but never closed.

Signed-off-by: Peng Hao <peng.hao2@zte.com.cn>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
