History log of /linux/scripts/atomic/gen-atomic-fallback.sh (Results 1 – 18 of 18)
Revision Date Author Comments
# e01cc1e8 25-Sep-2023 Uros Bizjak <ubizjak@gmail.com>

locking/atomic: Add generic support for sync_try_cmpxchg() and its fallback

Provide the generic sync_try_cmpxchg() function from the
raw_ prefixed version, also adding explicit instrumentation.

The patch amends the existing scripts to generate the sync_try_cmpxchg()
locking primitive and its raw_sync_try_cmpxchg() fallback, while
leaving existing macros from the try_cmpxchg() family unchanged.

The target can define its own arch_sync_try_cmpxchg() to override the
generic version of raw_sync_try_cmpxchg(). This allows the target
to generate more optimal assembly than the generic version.
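
For illustration, the generated fallback builds the boolean-returning
form atop raw_sync_cmpxchg(), roughly as follows (a sketch in the style
of the existing try_cmpxchg() fallback macros; the exact structure of
the generated header may differ):

| #ifndef arch_sync_try_cmpxchg
| #define raw_sync_try_cmpxchg(_ptr, _oldp, _new) \
| ({ \
| 	typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
| 	___r = raw_sync_cmpxchg((_ptr), ___o, (_new)); \
| 	if (unlikely(___r != ___o)) \
| 		*___op = ___r; /* report back the value actually found */ \
| 	likely(___r == ___o); \
| })
| #endif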

Additionally, the patch renames two scripts to better reflect
what they really do.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org


# 6d2779ec 19-Sep-2023 Mark Rutland <mark.rutland@arm.com>

locking/atomic: scripts: fix fallback ifdeffery

Since commit:

9257959a6e5b4fca ("locking/atomic: scripts: restructure fallback ifdeffery")

The ordering fallbacks for atomic*_read_acquire() and
atomic*_set_release() erroneously fall back to the implicitly relaxed
atomic*_read() and atomic*_set() variants respectively, without any
additional barriers. This loses the ACQUIRE and RELEASE ordering
semantics, which can result in a wide variety of problems, even on
strongly-ordered architectures where the implementation of
atomic*_read() and/or atomic*_set() allows the compiler to reorder those
relative to other accesses.

In practice this has been observed to break bit spinlocks on arm64,
resulting in dentry cache corruption.

The fallback logic was intended to allow ACQUIRE/RELEASE/RELAXED ops to
be defined in terms of FULL ops, but where an op had RELAXED ordering by
default, this unintentionally permitted the ACQUIRE/RELEASE ops to be
defined in terms of the implicitly RELAXED default.

This patch corrects the logic to avoid falling back to implicitly
RELAXED ops, resulting in the same behaviour as prior to commit
9257959a6e5b4fca.
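
For reference, the corrected fallback for raw_atomic_read_acquire()
ends up with roughly the following shape (a sketch; the generated
header spells this out for each ordering variant):

| static __always_inline int
| raw_atomic_read_acquire(const atomic_t *v)
| {
| #if defined(arch_atomic_read_acquire)
| 	return arch_atomic_read_acquire(v);
| #else
| 	int ret;
|
| 	if (__native_word(atomic_t)) {
| 		/* a native-word read can use a load-acquire directly */
| 		ret = smp_load_acquire(&(v)->counter);
| 	} else {
| 		/* otherwise: a relaxed read plus an explicit acquire fence */
| 		ret = raw_atomic_read(v);
| 		__atomic_acquire_fence();
| 	}
|
| 	return ret;
| #endif
| }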

I've verified the resulting assembly on arm64 by generating outlined
wrappers of the atomics. Prior to this patch the compiler generates
sequences using relaxed load (LDR) and store (STR) instructions, e.g.

| <outlined_atomic64_read_acquire>:
| ldr x0, [x0]
| ret
|
| <outlined_atomic64_set_release>:
| str x1, [x0]
| ret

With this patch applied the compiler generates sequences using the
intended load-acquire (LDAR) and store-release (STLR) instructions, e.g.

| <outlined_atomic64_read_acquire>:
| ldar x0, [x0]
| ret
|
| <outlined_atomic64_set_release>:
| stlr x1, [x0]
| ret

To make sure that there were no other victims of the ifdeffery rewrite,
I generated outlined copies of all of the {atomic,atomic64,atomic_long}
atomic operations before and after commit 9257959a6e5b4fca. A diff of
the generated assembly on arm64 shows that only the read_acquire() and
set_release() operations were changed, and only lost their intended
ordering:

| [mark@lakrids:~/src/linux]% diff -u \
| <(aarch64-linux-gnu-objdump -d before-9257959a6e5b4fca.o)
| <(aarch64-linux-gnu-objdump -d after-9257959a6e5b4fca.o)
| --- /proc/self/fd/11 2023-09-19 16:51:51.114779415 +0100
| +++ /proc/self/fd/16 2023-09-19 16:51:51.114779415 +0100
| @@ -1,5 +1,5 @@
|
| -before-9257959a6e5b4fca.o: file format elf64-littleaarch64
| +after-9257959a6e5b4fca.o: file format elf64-littleaarch64
|
|
| Disassembly of section .text:
| @@ -9,7 +9,7 @@
| 4: d65f03c0 ret
|
| 0000000000000008 <outlined_atomic_read_acquire>:
| - 8: 88dffc00 ldar w0, [x0]
| + 8: b9400000 ldr w0, [x0]
| c: d65f03c0 ret
|
| 0000000000000010 <outlined_atomic_set>:
| @@ -17,7 +17,7 @@
| 14: d65f03c0 ret
|
| 0000000000000018 <outlined_atomic_set_release>:
| - 18: 889ffc01 stlr w1, [x0]
| + 18: b9000001 str w1, [x0]
| 1c: d65f03c0 ret
|
| 0000000000000020 <outlined_atomic_add>:
| @@ -1230,7 +1230,7 @@
| 1070: d65f03c0 ret
|
| 0000000000001074 <outlined_atomic64_read_acquire>:
| - 1074: c8dffc00 ldar x0, [x0]
| + 1074: f9400000 ldr x0, [x0]
| 1078: d65f03c0 ret
|
| 000000000000107c <outlined_atomic64_set>:
| @@ -1238,7 +1238,7 @@
| 1080: d65f03c0 ret
|
| 0000000000001084 <outlined_atomic64_set_release>:
| - 1084: c89ffc01 stlr x1, [x0]
| + 1084: f9000001 str x1, [x0]
| 1088: d65f03c0 ret
|
| 000000000000108c <outlined_atomic64_add>:
| @@ -2427,7 +2427,7 @@
| 207c: d65f03c0 ret
|
| 0000000000002080 <outlined_atomic_long_read_acquire>:
| - 2080: c8dffc00 ldar x0, [x0]
| + 2080: f9400000 ldr x0, [x0]
| 2084: d65f03c0 ret
|
| 0000000000002088 <outlined_atomic_long_set>:
| @@ -2435,7 +2435,7 @@
| 208c: d65f03c0 ret
|
| 0000000000002090 <outlined_atomic_long_set_release>:
| - 2090: c89ffc01 stlr x1, [x0]
| + 2090: f9000001 str x1, [x0]
| 2094: d65f03c0 ret
|
| 0000000000002098 <outlined_atomic_long_add>:

I've build-tested this with a variety of configs for alpha, arm, arm64,
csky, i386, m68k, microblaze, mips, nios2, openrisc, powerpc, riscv,
s390, sh, sparc, x86_64, and xtensa, for which I've seen no issues. I
was unable to build test for ia64 and parisc due to existing build
breakage in v6.6-rc2.

Fixes: 9257959a6e5b4fca ("locking/atomic: scripts: restructure fallback ifdeffery")
Reported-by: Ming Lei <ming.lei@redhat.com>
Reported-by: Darrick J. Wong <djwong@kernel.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Baokun Li <libaokun1@huawei.com>
Link: https://lkml.kernel.org/r/20230919171430.2697727-1-mark.rutland@arm.com


# ad811070 05-Jun-2023 Mark Rutland <mark.rutland@arm.com>

locking/atomic: scripts: generate kerneldoc comments

Currently the atomics are documented in Documentation/atomic_t.txt, and
have no kerneldoc comments. There are a sufficient number of gotchas
(e.g. semantics, noinstr-safety) that it would be nice to have comments
to call these out, and it would be nice to have kerneldoc comments such
that these can be collated.

While it's possible to derive the semantics from the code, this can be
painful given the amount of indirection we currently have (e.g. fallback
paths), and it's easy to be mislead by naming, e.g.

* The unconditional void-returning ops *only* have relaxed variants
without a _relaxed suffix, and can easily be mistaken for being fully
ordered.

It would be nice to give these a _relaxed() suffix, but this would
result in significant churn throughout the kernel.

* Our naming of conditional and unconditional+test ops is rather
inconsistent, and it can be difficult to derive the name of an
operation, or to identify where an op is conditional or
unconditional+test.

Some ops are clearly conditional:
- dec_if_positive
- add_unless
- dec_unless_positive
- inc_unless_negative

Some ops are clearly unconditional+test:
- sub_and_test
- dec_and_test
- inc_and_test

However, what exactly those test is not obvious. A _test_zero suffix
might be clearer.

Others could be read ambiguously:
- inc_not_zero // conditional
- add_negative // unconditional+test

It would probably be worth renaming these, e.g. to inc_unless_zero and
add_test_negative.

As a step towards making this more consistent and easier to understand,
this patch adds kerneldoc comments for all generated *atomic*_*()
functions. These are generated from templates, with some common text
shared, making it easy to extend these in future if necessary.

I've tried to make these as consistent and clear as possible, and I've
deliberately ensured:

* All ops have their ordering explicitly mentioned in the short and long
description.

* All test ops have "test" in their short description.

* All ops are described as an expression using their usual C operator.
For example:

andnot: "Atomically updates @v to (@v & ~@i)"
inc: "Atomically updates @v to (@v + 1)"

Which may be clearer to non-native English speakers, and allows all
the operations to be described in the same style.

* All conditional ops have their condition described as an expression
using the usual C operators. For example:

add_unless: "If (@v != @u), atomically updates @v to (@v + @i)"
cmpxchg: "If (@v == @old), atomically updates @v to @new"

Which may be clearer to non-native English speakers, and allows all
the operations to be described in the same style.

* All bitwise ops (and,andnot,or,xor) explicitly mention that they are
bitwise in their short description, so that they are not mistaken for
performing their logical equivalents.

* The noinstr safety of each op is explicitly described, with a
description of whether or not to use the raw_ form of the op.

There should be no functional change as a result of this patch.
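
For example, a comment generated in this style for atomic_add_unless()
would read roughly as follows (a sketch assembled from the rules above;
the @i parameter name follows this message's own examples, and the
actual generated names may differ):

| /**
|  * atomic_add_unless() - atomic add unless value with full ordering
|  * @v: pointer to atomic_t
|  * @i: int value to add
|  * @u: int value to compare with
|  *
|  * If (@v != @u), atomically updates @v to (@v + @i) with full ordering.
|  *
|  * Unsafe to use in noinstr code; use raw_atomic_add_unless() elsewhere.
|  *
|  * Return: @true if @v was updated, @false otherwise.
|  */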

Reported-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-26-mark.rutland@arm.com


# 1d78814d 05-Jun-2023 Mark Rutland <mark.rutland@arm.com>

locking/atomic: scripts: simplify raw_atomic*() definitions

Currently each ordering variant has several potential definitions,
with a mixture of preprocessor and C definitions, including several
copies of its C prototype, e.g.

| #if defined(arch_atomic_fetch_andnot_acquire)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
| #elif defined(arch_atomic_fetch_andnot_relaxed)
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| int ret = arch_atomic_fetch_andnot_relaxed(i, v);
| __atomic_acquire_fence();
| return ret;
| }
| #elif defined(arch_atomic_fetch_andnot)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
| #else
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| return raw_atomic_fetch_and_acquire(~i, v);
| }
| #endif

Make this a bit simpler by defining the C prototype once, and writing
the various potential definitions as plain C code guarded by ifdeffery.
For example, the above becomes:

| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| #if defined(arch_atomic_fetch_andnot_acquire)
| return arch_atomic_fetch_andnot_acquire(i, v);
| #elif defined(arch_atomic_fetch_andnot_relaxed)
| int ret = arch_atomic_fetch_andnot_relaxed(i, v);
| __atomic_acquire_fence();
| return ret;
| #elif defined(arch_atomic_fetch_andnot)
| return arch_atomic_fetch_andnot(i, v);
| #else
| return raw_atomic_fetch_and_acquire(~i, v);
| #endif
| }

Which is far easier to read. As we now always have a single copy of the
C prototype wrapping all the potential definitions, we now have an
obvious single location for kerneldoc comments.

At the same time, the fallbacks for raw_atomic*_xchg() are made to use
'new' rather than 'i' as the name of the new value. This is what the
existing fallback template used, and is more consistent with the
raw_atomic{_try,}cmpxchg() fallbacks.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-24-mark.rutland@arm.com


# 9257959a 05-Jun-2023 Mark Rutland <mark.rutland@arm.com>

locking/atomic: scripts: restructure fallback ifdeffery

Currently the various ordering variants of an atomic operation are
defined in groups of full/acquire/release/relaxed ordering variants with
some shared ifdeffery and several potential definitions of each ordering
variant in different branches of the shared ifdeffery.

As an ordering variant can have several potential definitions down
different branches of the shared ifdeffery, it can be painful for a
human to find a relevant definition, and we don't have a good location
to place anything common to all definitions of an ordering variant (e.g.
kerneldoc).

Historically the grouping of full/acquire/release/relaxed ordering
variants was necessary as we filled in the missing atomics in the same
namespace as the architecture used. It would be easy to accidentally
define one ordering fallback in terms of another ordering fallback with
redundant barriers, and avoiding that would otherwise require a lot of
baroque ifdeffery.

With recent changes we no longer need to fill in the missing atomics in
the arch_atomic*_<op>() namespace, and only need to fill in the
raw_atomic*_<op>() namespace. Due to this, there's no risk of a
namespace collision, and we can define each raw_atomic*_<op> ordering
variant with its own ifdeffery checking for the arch_atomic*_<op>
ordering variants.

Restructure the fallbacks in this way, with each ordering variant having
its own ifdeffery of the form:

| #if defined(arch_atomic_fetch_andnot_acquire)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
| #elif defined(arch_atomic_fetch_andnot_relaxed)
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| int ret = arch_atomic_fetch_andnot_relaxed(i, v);
| __atomic_acquire_fence();
| return ret;
| }
| #elif defined(arch_atomic_fetch_andnot)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
| #else
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| return raw_atomic_fetch_and_acquire(~i, v);
| }
| #endif

Note that where there's no relevant arch_atomic*_<op>() ordering
variant, we'll define the operation in terms of a distinct
raw_atomic*_<otherop>(), as this itself might have been filled in with a
fallback.

As we now generate the raw_atomic*_<op>() implementations directly, we
no longer need the trivial wrappers, so they are removed.

This makes the ifdeffery easier to follow, and will allow for further
improvements in subsequent patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-21-mark.rutland@arm.com


# 7ed7a156 05-Jun-2023 Mark Rutland <mark.rutland@arm.com>

locking/atomic: scripts: factor out order template generation

Currently gen_proto_order_variants() hard codes the path for the templates used
for order fallbacks. Factor this out into a helper so that it can be reused
elsewhere.

This results in no change to the generated headers, so there should be
no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-17-mark.rutland@arm.com


# a083ecc9 05-Jun-2023 Mark Rutland <mark.rutland@arm.com>

locking/atomic: scripts: remove bogus order parameter

At the start of gen_proto_order_variants(), the ${order} variable is not
yet defined, and will be substituted with an empty string.

Replace the current bogus use of ${order} with an empty string instead.

This results in no change to the generated headers.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-15-mark.rutland@arm.com


# 8c8b096a 31-May-2023 Peter Zijlstra <peterz@infradead.org>

instrumentation: Wire up cmpxchg128()

Wire up the cmpxchg128 family in the atomic wrapper scripts.

These provide the generic cmpxchg128 family of functions from the
arch_ prefixed version, adding explicit instrumentation where needed.
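
A sketch of the instrumented wrapper the scripts emit for the
full-ordering variant, following the existing cmpxchg64() pattern (the
exact instrumentation calls vary per ordering variant):

| #define cmpxchg128(ptr, ...) \
| ({ \
| 	typeof(ptr) __ai_ptr = (ptr); \
| 	kcsan_mb(); /* full ordering implies a KCSAN barrier */ \
| 	instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
| 	arch_cmpxchg128(__ai_ptr, __VA_ARGS__); \
| })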

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20230531132323.519237070@infradead.org


# e6ce9d74 05-Apr-2023 Uros Bizjak <ubizjak@gmail.com>

locking/atomic: Add generic try_cmpxchg{,64}_local() support

Add generic support for try_cmpxchg{,64}_local() and their fallbacks.

These provide the generic try_cmpxchg_local family of functions
from the arch_ prefixed version, also adding explicit instrumentation.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20230405141710.3551-2-ubizjak@gmail.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>


# 0aa7be05 15-May-2022 Uros Bizjak <ubizjak@gmail.com>

locking/atomic: Add generic try_cmpxchg64 support

Add generic support for try_cmpxchg64{,_acquire,_release,_relaxed}
and their fallbacks involving cmpxchg64.
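
Typical usage takes the following shape (a hypothetical sketch: on
failure, try_cmpxchg64() updates 'old' with the value it observed, so
the loop never has to re-read the location by hand):

| static void inc_counter(u64 *p)	/* hypothetical helper */
| {
| 	u64 old = READ_ONCE(*p);
|
| 	/* on failure, 'old' already holds the observed value; retry */
| 	while (!try_cmpxchg64(p, &old, old + 1))
| 		;
| }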

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20220515184205.103089-2-ubizjak@gmail.com


# f3e615b4 13-Jul-2021 Mark Rutland <mark.rutland@arm.com>

locking/atomic: remove ARCH_ATOMIC remnants

Now that gen-atomic-fallback.sh is only used to generate the arch_*
fallbacks, we don't need to also generate the non-arch_* forms, and can
remove the infrastructure this needed.

There is no change to any of the generated headers as a result of this
patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210713105253.7615-3-mark.rutland@arm.com


# 47401d94 13-Jul-2021 Mark Rutland <mark.rutland@arm.com>

locking/atomic: simplify ifdef generation

In gen-atomic-fallback.sh's gen_proto_order_variants(), we generate some
ifdeffery with:

| local basename="${arch}${atomic}_${pfx}${name}${sfx}"
| ...
| printf "#ifdef ${basename}\n"
| ...
| printf "#endif /* ${arch}${atomic}_${pfx}${name}${sfx} */\n\n"

For clarity, use ${basename} for both sides, rather than open-coding the
string generation.

There is no change to any of the generated headers as a result of this
patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210713105253.7615-2-mark.rutland@arm.com


# 29f006fd 29-Aug-2020 Peter Zijlstra <peterz@infradead.org>

asm-generic/atomic: Add try_cmpxchg() fallbacks

Only x86 provides try_cmpxchg() outside of the atomic_t interfaces,
provide generic fallbacks to create this interface from the widely
available cmpxchg() function.
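
The generated fallback builds the boolean-returning form from
cmpxchg() along these lines (a sketch of the expanded template for
atomic_t):

| static __always_inline bool
| atomic_try_cmpxchg(atomic_t *v, int *old, int new)
| {
| 	int r, o = *old;
|
| 	r = atomic_cmpxchg(v, o, new);
| 	if (unlikely(r != o))
| 		*old = r;	/* report the value actually found */
| 	return likely(r == o);
| }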

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/159870621515.1229682.15506193091065001742.stgit@devnote2


# 5faafd56 25-Jun-2020 Peter Zijlstra <peterz@infradead.org>

locking/atomics: Provide the arch_atomic_ interface to generic code

Architectures with instrumented (KASAN/KCSAN) atomic operations
natively provide arch_atomic_ variants that are not instrumented.

It turns out that some generic code also requires arch_atomic_ in
order to avoid instrumentation, so provide the arch_atomic_ interface
as a direct map into the regular atomic_ interface for
non-instrumented architectures.
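
On such architectures the mapping amounts to a set of trivial aliases,
e.g. (a sketch; the generated list covers every op):

| /* non-instrumented architectures: arch_atomic_ is the native op */
| #define arch_atomic_read	atomic_read
| #define arch_atomic_set		atomic_set
| #define arch_atomic_add_return	atomic_add_return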

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>


# 37f8173d 24-Jan-2020 Peter Zijlstra <peterz@infradead.org>

locking/atomics: Flip fallbacks and instrumentation

Currently instrumentation of atomic primitives is done at the architecture
level, while composites or fallbacks are provided at the generic level.

The result is that there are no uninstrumented variants of the
fallbacks. Since there is now a need for such variants to isolate text
poke from any form of instrumentation, invert this ordering.

Doing this means moving the instrumentation into the generic code as
well as having (for now) two variants of the fallbacks.
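
After the flip, the instrumented generic layer wraps each arch_
primitive, roughly (a sketch):

| static __always_inline int
| atomic_read(const atomic_t *v)
| {
| 	/* instrumentation now lives in generic code, atop the arch_ op */
| 	instrument_atomic_read(v, sizeof(*v));
| 	return arch_atomic_read(v);
| }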

Notes:

- the various *cond_read* primitives are not proper fallbacks
and got moved into linux/atomic.c. No arch_ variants are
generated because the base primitives smp_cond_load*()
are instrumented.

- once all architectures are moved over to arch_atomic_ one of the
fallback variants can be removed and some 2300 lines reclaimed.

- atomic_{read,set}*() are no longer double-instrumented

Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lkml.kernel.org/r/20200505134058.769149955@linutronix.de


# 765dcd20 26-Nov-2019 Marco Elver <elver@google.com>

asm-generic/atomic: Use __always_inline for fallback wrappers

Use __always_inline for atomic fallback wrappers. When building for size
(CC_OPTIMIZE_FOR_SIZE), some compilers appear to be less inclined to
inline even relatively small static inline functions that are assumed to
be inlinable such as atomic ops. This can cause problems, for example in
UACCESS regions.

While the fallback wrappers aren't pure wrappers, they are trivial
nonetheless, and the function they wrap should determine the final
inlining policy.
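
A representative fallback wrapper after this change (a sketch; the
__always_inline attribute is what the patch adds):

| static __always_inline int
| atomic_add_return_acquire(int i, atomic_t *v)
| {
| 	int ret = atomic_add_return_relaxed(i, v);
| 	__atomic_acquire_fence();
| 	return ret;
| }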

For x86 tinyconfig we observe:
- vmlinux baseline: 1315988
- vmlinux with patch: 1315928 (-60 bytes)

[ tglx: Cherry-picked from KCSAN ]

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>


# 4d8e5cd2 01-Nov-2018 Ingo Molnar <mingo@kernel.org>

locking/atomics: Fix scripts/atomic/ script permissions

Mark all these scripts executable.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: linuxdrivers@attotech.com
Cc: dvyukov@google.com
Cc: boqun.feng@gmail.com
Cc: arnd@arndb.de
Cc: aryabinin@virtuozzo.com
Cc: glider@google.com
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# ace9bad4 04-Sep-2018 Mark Rutland <mark.rutland@arm.com>

locking/atomics: Add common header generation files

To minimize repetition, to allow for future rework, and to ensure
regularity of the various atomic APIs, we'd like to automatically
generate (the bulk of) a number of headers related to atomics.

This patch adds the infrastructure to do so, leaving actual conversion
of headers to subsequent patches. This infrastructure consists of:

* atomics.tbl - a table describing the functions in the atomics API,
with names, prototypes, and metadata describing the variants that
exist (e.g. fetch/return, acquire/release/relaxed). Note that the
return type is dependent on the particular variant.

* atomic-tbl.sh - a library of routines useful for dealing with
atomics.tbl (e.g. querying which variants exist, or generating
argument/parameter lists for a given function variant).

* gen-atomic-fallback.sh - a script which generates a header of
fallbacks, covering cases where architectures omit certain functions
(e.g. omitting relaxed variants).

* gen-atomic-long.sh - a script which generates wrappers providing the
atomic_long API atop of the relevant atomic or atomic64 API,
ensuring the APIs are consistent.

* gen-atomic-instrumented.sh - a script which generates atomic* wrappers
atop of arch_atomic* functions, with automatically generated KASAN
instrumentation.

* fallbacks/* - a set of fallback implementations for atomics, which
should be used when no implementation of a given atomic is provided.
These are used by gen-atomic-fallback.sh to generate fallbacks, and
these are also used by other scripts to determine the set of optional
atomics (as required to generate preprocessor guards correctly).

Fallbacks may use the following variables:

${atomic} atomic prefix: atomic/atomic64/atomic_long, which can be
used to derive the atomic type, and to prefix functions

${int} integer type: int/s64/long

${pfx} variant prefix, e.g. fetch_

${name} base function name, e.g. add

${sfx} variant suffix, e.g. _return

${order} order suffix, e.g. _relaxed

${atomicname} full name, e.g. atomic64_fetch_add_relaxed

${ret} return type of the function, e.g. void

${retstmt} a return statement (with a trailing space), unless the
variant returns void

${params} parameter list for the function declaration, e.g.
"int i, atomic_t *v"

${args} argument list for invoking the function, e.g. "i, v"

... for clarity, ${ret}, ${retstmt}, ${params}, and ${args} are
open-coded for fallbacks where these do not vary, or are critical to
understanding the logic of the fallback.
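
For example, with ${atomic}=atomic64, ${int}=s64, ${pfx}=fetch_,
${name}=add, and ${sfx} empty, an acquire-ordering fallback template
would expand to something like (a sketch):

| static inline s64
| atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
| {
| 	s64 ret = atomic64_fetch_add_relaxed(i, v);
| 	__atomic_acquire_fence();
| 	return ret;
| }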

The MAINTAINERS entry for the atomic infrastructure is updated to cover
the new scripts.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com
Cc: Will Deacon <will.deacon@arm.com>
Cc: linuxdrivers@attotech.com
Cc: dvyukov@google.com
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: arnd@arndb.de
Cc: aryabinin@virtuozzo.com
Cc: glider@google.com
Link: http://lkml.kernel.org/r/20180904104830.2975-2-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
