#
a51749ab |
| 30-Nov-2023 |
Jann Horn <jannh@google.com> |
locking/mutex: Document that mutex_unlock() is non-atomic
I have seen several cases of attempts to use mutex_unlock() to release an object such that the object can then be freed by another task.
This is not safe because mutex_unlock(), in the MUTEX_FLAG_WAITERS && !MUTEX_FLAG_HANDOFF case, accesses the mutex structure after having marked it as unlocked; so mutex_unlock() requires its caller to ensure that the mutex stays alive until mutex_unlock() returns.
If MUTEX_FLAG_WAITERS is set and there are real waiters, those waiters have to keep the mutex alive, but we could have a spurious MUTEX_FLAG_WAITERS left if an interruptible/killable waiter bailed between the points where __mutex_unlock_slowpath() did the cmpxchg reading the flags and where it acquired the wait_lock.
( With spinlocks, that kind of code pattern is allowed and, from what I remember, used in several places in the kernel. )
Document this; such a semantic difference between mutexes and spinlocks is fairly unintuitive.
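A minimal sketch of the unsafe pattern being warned about; the structure, field and function names below are hypothetical, not from the patch:

    #include <linux/mutex.h>
    #include <linux/slab.h>

    struct foo {
        struct mutex lock;
        bool released;
    };

    /* Task A: hands the object over and expects to be done with it. */
    static void foo_release(struct foo *f)
    {
        mutex_lock(&f->lock);
        f->released = true;
        mutex_unlock(&f->lock);    /* may still touch f->lock after "unlocking"! */
    }

    /* Task B: was blocked on f->lock, now frees the object. */
    static void foo_reap(struct foo *f)
    {
        bool dead;

        mutex_lock(&f->lock);
        dead = f->released;
        mutex_unlock(&f->lock);

        if (dead)
            kfree(f);    /* can race with the tail of A's mutex_unlock() */
    }

With a spinlock in place of the mutex this handover would be fine, which is exactly the semantic difference being documented here.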
[ mingo: Made the changelog a bit more assertive, refined the comments. ]
Signed-off-by: Jann Horn <jannh@google.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20231130204817.2031407-1-jannh@google.com
|
#
957e4808 |
| 14-Aug-2023 |
Brian Foster <bfoster@redhat.com> |
locking: export contention tracepoints for bcachefs six locks
The bcachefs implementation of six locks is intended to land in generic locking code in the long term, but has been pulled into the bcachefs subsystem for internal use for the time being. This code lift breaks the bcachefs module build because six locks depend on a couple of the generic locking tracepoints. Export these tracepoint symbols for bcachefs.
Signed-off-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
#
dc1f7893 |
| 30-Mar-2022 |
Peter Zijlstra <peterz@infradead.org> |
locking/mutex: Make contention tracepoints more consistent wrt adaptive spinning
Have the trace_contention_*() tracepoints consistently include adaptive spinning. In order to differentiate between the spinning and non-spinning states add LCB_F_MUTEX and combine with LCB_F_SPIN.
The consequence is that a mutex contention can now trigger multiple _begin() tracepoints before triggering an _end().
Additionally, this fixes one path where mutex would trigger _end() without ever seeing a _begin().
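As a hedged illustration, one contended mutex acquisition can now produce an event sequence roughly like this (flag names from the changelog, call sites assumed):

    /* optimistic (adaptive) spinning phase */
    trace_contention_begin(lock, LCB_F_MUTEX | LCB_F_SPIN);
    /* spinning gives up; fall back to the sleeping wait */
    trace_contention_begin(lock, LCB_F_MUTEX);    /* second _begin(), no _end() yet */
    /* woken up and lock acquired */
    trace_contention_end(lock, 0);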
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
|
#
ee042be1 |
| 22-Mar-2022 |
Namhyung Kim <namhyung@kernel.org> |
locking: Apply contention tracepoints in the slow path
Add the lock contention tracepoints in various lock function slow paths. Note that each arch can define its spinlock differently, so I only added them to the generic qspinlock for now.
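A rough sketch of where the tracepoints sit in the generic qspinlock slow path; the exact placement and arguments are an assumption, not the verified hunk:

    void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
    {
        trace_contention_begin(lock, LCB_F_SPIN);

        /* ... existing pending-bit / MCS queue spinning until acquisition ... */

        trace_contention_end(lock, 0);
    }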
Signed-off-by: Namhyung Kim <namhyung@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Link: https://lkml.kernel.org/r/20220322185709.141236-3-namhyung@kernel.org
|
#
16edd9b5 |
| 22-Mar-2022 |
Namhyung Kim <namhyung@kernel.org> |
locking: Add lock contention tracepoints
This adds two new lock contention tracepoints like below:
* lock:contention_begin
* lock:contention_end
The lock:contention_begin takes a flags argument to classify locks. I found it useful to identify what kind of lock it's tracing, e.g. whether it's spinning or sleeping, a reader-writer lock, real-time, or per-cpu.
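A hedged sketch of what such a classification might look like; the LCB_F_* names and values below are assumptions beyond what this changelog spells out:

    /* classifier bits passed as the flags argument of lock:contention_begin */
    #define LCB_F_SPIN    (1U << 0)    /* spinning (busy-wait) lock */
    #define LCB_F_READ    (1U << 1)    /* reader side of an rwlock/rwsem */
    #define LCB_F_WRITE   (1U << 2)    /* writer side of an rwlock/rwsem */
    #define LCB_F_RT      (1U << 3)    /* real-time (rtmutex based) variant */
    #define LCB_F_PERCPU  (1U << 4)    /* per-cpu rwsem */

    trace_contention_begin(lock, LCB_F_READ | LCB_F_RT);    /* e.g. an RT reader */
    /* ... blocked ... */
    trace_contention_end(lock, ret);    /* ret: 0 on success or an error code */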
Move tracepoint definitions into mutex.c so that we can use them without lockdep.
Signed-off-by: Namhyung Kim <namhyung@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Link: https://lkml.kernel.org/r/20220322185709.141236-2-namhyung@kernel.org
|
#
c0bed69d |
| 03-Dec-2021 |
Kefeng Wang <wangkefeng.wang@huawei.com> |
locking: Make owner_on_cpu() into <linux/sched.h>
Move owner_on_cpu() from kernel/locking/rwsem.c into include/linux/sched.h under CONFIG_SMP, then use it in mutex/rwsem/rtmutex to simplify the code.
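For reference, a sketch of the helper being moved, modelled on its rwsem origin (treat the body as an approximation rather than the exact hunk):

    #ifdef CONFIG_SMP
    static inline bool owner_on_cpu(struct task_struct *owner)
    {
        /*
         * Don't bother spinning if the owner is not actually running, or if
         * its CPU has been preempted (lock-holder preemption in a guest).
         */
        return owner->on_cpu && !vcpu_is_preempted(task_cpu(owner));
    }
    #endif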
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20211203075935.136808-2-wangkefeng.wang@huawei.com
|
#
6c2787f2 |
| 13-Oct-2021 |
Yanfei Xu <yanfei.xu@windriver.com> |
locking: Remove rcu_read_{,un}lock() for preempt_{dis,en}able()
preempt_disable/enable() is equivalent to an RCU read-side critical section, and the spinning code in mutex and rwsem already ensures that preemption is disabled. So let's remove the unnecessary rcu_read_lock/unlock to save some cycles in hot code paths.
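A hedged before/after sketch of the owner-spinning pattern in question (the surrounding function and helpers are assumed, not quoted from the patch):

    /* before: explicit RCU read-side markers around the spin loop */
    rcu_read_lock();
    while (__mutex_owner(lock) == owner) {
        if (!owner->on_cpu || need_resched())
            break;
        cpu_relax();
    }
    rcu_read_unlock();

    /* after: the loop already runs with preemption disabled, which is itself
     * an RCU read-side critical section, so the explicit markers are dropped */
    while (__mutex_owner(lock) == owner) {
        if (!owner->on_cpu || need_resched())
            break;
        cpu_relax();
    }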
Signed-off-by: Yanfei Xu <yanfei.xu@windriver.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Waiman Long <longman@redhat.com> Link: https://lore.kernel.org/r/20211013134154.1085649-2-yanfei.xu@windriver.com
|
#
12235da8 |
| 09-Sep-2021 |
Maarten Lankhorst <maarten.lankhorst@linux.intel.com> |
kernel/locking: Add context to ww_mutex_trylock()
i915 will soon gain an eviction path that trylocks a whole lot of locks for eviction, getting dmesg failures like below:
    BUG: MAX_LOCK_DEPTH too low!
    turning off the locking correctness validator.
    depth: 48 max: 48!
    48 locks held by i915_selftest/5776:
     #0: ffff888101a79240 (&dev->mutex){....}-{3:3}, at: __driver_attach+0x88/0x160
     #1: ffffc900009778c0 (reservation_ww_class_acquire){+.+.}-{0:0}, at: i915_vma_pin.constprop.63+0x39/0x1b0 [i915]
     #2: ffff88800cf74de8 (reservation_ww_class_mutex){+.+.}-{3:3}, at: i915_vma_pin.constprop.63+0x5f/0x1b0 [i915]
     #3: ffff88810c7f9e38 (&vm->mutex/1){+.+.}-{3:3}, at: i915_vma_pin_ww+0x1c4/0x9d0 [i915]
     #4: ffff88810bad5768 (reservation_ww_class_mutex){+.+.}-{3:3}, at: i915_gem_evict_something+0x110/0x860 [i915]
     #5: ffff88810bad60e8 (reservation_ww_class_mutex){+.+.}-{3:3}, at: i915_gem_evict_something+0x110/0x860 [i915]
     ...
     #46: ffff88811964d768 (reservation_ww_class_mutex){+.+.}-{3:3}, at: i915_gem_evict_something+0x110/0x860 [i915]
     #47: ffff88811964e0e8 (reservation_ww_class_mutex){+.+.}-{3:3}, at: i915_gem_evict_something+0x110/0x860 [i915]
    INFO: lockdep is turned off.
Fixing eviction to nest into ww_class_acquire is a high priority, but it requires a rework of the entire driver, which can only be done one step at a time.
As an intermediate solution, add an acquire context to ww_mutex_trylock, which allows us to do proper nesting annotations on the trylocks, making the above lockdep splat disappear.
This is also useful in regulator_lock_nested, which may avoid dropping regulator_nesting_mutex in the uncontended path, so use it there.
TTM may be another user for this, where we could lock a buffer in a fastpath with list locks held, without dropping all locks we hold.
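A hedged usage sketch of the new signature; the object and class names are invented for illustration:

    struct ww_acquire_ctx ctx;

    ww_acquire_init(&ctx, &my_ww_class);

    /* the trylock is now annotated against the acquire context, so lockdep
     * accounts it to the ww class instead of as yet another held lock */
    if (ww_mutex_trylock(&obj->resv_lock, &ctx)) {
        /* ... evict the object ... */
        ww_mutex_unlock(&obj->resv_lock);
    }

    ww_acquire_fini(&ctx);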
[peterz: rework actual ww_mutex_trylock() implementations] Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/YUBGPdDDjKlxAuXJ@hirez.programming.kicks-ass.net
|
#
b857174e |
| 19-Aug-2021 |
Sebastian Andrzej Siewior <bigeasy@linutronix.de> |
locking/ww_mutex: Initialize waiter.ww_ctx properly
The consolidation of the debug code for mutex waiter initialization sets waiter::ww_ctx to a poison value unconditionally. For regular mutexes this is intended to catch the case where waiter::ww_ctx is dereferenced accidentally.
For ww_mutex the poison value has to be overwritten either with a context pointer or NULL for ww_mutexes without context.
The rework broke this as it made the store conditional on the context pointer instead of the argument which signals whether ww_mutex code should be compiled in or optimized out. As a result waiter::ww_ctx ends up with the poison pointer for contextless ww_mutexes which causes a later dereference of the poison pointer because it is != NULL.
Use the build argument instead so for ww_mutex the poison value is always overwritten.
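A minimal sketch of the broken versus fixed condition; the variable names follow the neighbouring changelog entries and are assumptions here:

    debug_mutex_lock_common(lock, &waiter);    /* poisons waiter.ww_ctx */

    /* broken: a contextless ww_mutex (ww_ctx == NULL) keeps the poison value */
    if (ww_ctx)
        waiter.ww_ctx = ww_ctx;

    /* fixed: key off the build-time argument, so for ww_mutex the poison is
     * always overwritten -- with NULL when there is no context */
    if (use_ww_ctx)
        waiter.ww_ctx = ww_ctx;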
Fixes: c0afb0ffc06e6 ("locking/ww_mutex: Gather mutex_waiter initialization") Reported-by: Guenter Roeck <linux@roeck-us.net> Suggested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20210819193030.zpwrpvvrmy7xxxiy@linutronix.de
|
#
bb630f9f |
| 15-Aug-2021 |
Thomas Gleixner <tglx@linutronix.de> |
locking/rtmutex: Add mutex variant for RT
Add the necessary defines, helpers and API functions for replacing struct mutex on a PREEMPT_RT enabled kernel with an rtmutex based variant.
No functional change when CONFIG_PREEMPT_RT=n
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20210815211305.081517417@linutronix.de
|
#
2674bd18 |
| 17-Aug-2021 |
Peter Zijlstra (Intel) <peterz@infradead.org> |
locking/ww_mutex: Split out the W/W implementation logic into kernel/locking/ww_mutex.h
Split the W/W mutex helper functions out into a separate header file, so they can be shared with a rtmutex based variant later.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20210815211304.396893399@linutronix.de
|
#
aaa77de1 |
| 17-Aug-2021 |
Peter Zijlstra (Intel) <peterz@infradead.org> |
locking/ww_mutex: Split up ww_mutex_unlock()
Split the ww related part out into a helper function so it can be reused for a rtmutex based ww_mutex implementation.
[ mingo: Fixed bisection failure. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20210815211304.340166556@linutronix.de
|
#
c0afb0ff |
| 15-Aug-2021 |
Peter Zijlstra <peterz@infradead.org> |
locking/ww_mutex: Gather mutex_waiter initialization
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20210815211304.281927514@linutronix.de
|
#
cf702edd |
| 15-Aug-2021 |
Peter Zijlstra <peterz@infradead.org> |
locking/ww_mutex: Simplify lockdep annotations
No functional change.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20210815211304.222921634@linutronix.de
|
#
ebf4c55c |
| 15-Aug-2021 |
Thomas Gleixner <tglx@linutronix.de> |
locking/mutex: Make mutex::wait_lock raw
The wait_lock of mutex is really a low level lock. Convert it to a raw_spinlock like the wait_lock of rtmutex.
[ mingo: backmerged the test_lockup.c build fix by bigeasy. ]
Co-developed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20210815211304.166863404@linutronix.de
|
#
a321fb90 |
| 17-Aug-2021 |
Thomas Gleixner <tglx@linutronix.de> |
locking/mutex: Consolidate core headers, remove kernel/locking/mutex-debug.h
Having two header files which contain just the non-debug and debug variants is mostly a waste of disk space and has no real value. Stick the debug variants into the common mutex.h file as counterpart to the stubs for the non-debug case.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20210815211303.995350521@linutronix.de
|
#
e6b4457b |
| 30-Jun-2021 |
Peter Zijlstra <peterz@infradead.org> |
locking/mutex: Add MUTEX_WARN_ON
Cleanup some #ifdef'fery.
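The likely shape of the new helper, reconstructed from the description (a hedged sketch, not the verified hunk):

    #ifdef CONFIG_DEBUG_MUTEXES
    #define MUTEX_WARN_ON(cond)    DEBUG_LOCKS_WARN_ON(cond)
    #else
    #define MUTEX_WARN_ON(cond)
    #endif

    /* callers shrink from #ifdef'd blocks to a single line, e.g.: */
    MUTEX_WARN_ON(lock->magic != lock);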
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Waiman Long <longman@redhat.com> Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com> Link: https://lore.kernel.org/r/20210630154115.020298650@infradead.org
|
#
ad90880d |
| 30-Jun-2021 |
Peter Zijlstra <peterz@infradead.org> |
locking/mutex: Introduce __mutex_trylock_or_handoff()
Yanfei reported that it is possible to lose HANDOFF when we race with mutex_unlock() and end up setting HANDOFF on an unlocked mutex. At that point anybody can steal it, losing HANDOFF in the process.
If this happens often enough, we can in fact starve the top waiter.
Solve this by folding the 'set HANDOFF' operation into the trylock operation, such that either we acquire the lock, or it gets HANDOFF set. This avoids having HANDOFF set on an unlocked mutex.
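A deliberately simplified sketch of the folded operation; the real helper also handles the PICKUP flag and returns the owner task, so treat this purely as the core idea under assumed names:

    static bool __mutex_trylock_or_handoff(struct mutex *lock, bool handoff)
    {
        unsigned long owner = atomic_long_read(&lock->owner);

        for (;;) {
            unsigned long new;

            if (owner & ~MUTEX_FLAGS) {    /* still owned by someone */
                if (!handoff)
                    return false;
                new = owner | MUTEX_FLAG_HANDOFF;    /* request handoff, owner keeps the lock */
            } else {                       /* unowned: try to take it */
                new = (unsigned long)current | (owner & MUTEX_FLAGS);
            }

            if (atomic_long_try_cmpxchg_acquire(&lock->owner, &owner, new))
                return (new & ~MUTEX_FLAGS) == (unsigned long)current;
        }
    }

Either the cmpxchg installs current as owner, or it installs HANDOFF on a still-owned word; HANDOFF can no longer land on an unlocked mutex.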
Reported-by: Yanfei Xu <yanfei.xu@windriver.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Waiman Long <longman@redhat.com> Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com> Link: https://lore.kernel.org/r/20210630154114.958507900@infradead.org
|
#
048661a1 |
| 30-Jun-2021 |
Peter Zijlstra <peterz@infradead.org> |
locking/mutex: Fix HANDOFF condition
Yanfei reported that setting HANDOFF should not depend on recomputing @first, only on @first state. Which would then give:
    if (ww_ctx || !first)
        first = __mutex_waiter_is_first(lock, &waiter);
    if (first)
        __mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
But because 'ww_ctx || !first' is basically 'always' and the test for first is relatively cheap, omit that first branch entirely.
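So the resulting shape, per the reasoning above, is roughly (a hedged reconstruction):

    first = __mutex_waiter_is_first(lock, &waiter);
    if (first)
        __mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);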
Reported-by: Yanfei Xu <yanfei.xu@windriver.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Waiman Long <longman@redhat.com> Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com> Link: https://lore.kernel.org/r/20210630154114.896786297@infradead.org
|
#
ab4e4d9f |
| 30-Jun-2021 |
Peter Zijlstra <peterz@infradead.org> |
locking/mutex: Use try_cmpxchg()
For simpler and better code.
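A hedged before/after of the generic pattern this switch enables; the flag used below is just an example, not the actual hunk:

    /* before: classic cmpxchg loop, manually re-reading the old value on failure */
    unsigned long old, new, cur;

    old = atomic_long_read(&lock->owner);
    for (;;) {
        new = old | MUTEX_FLAG_WAITERS;
        cur = atomic_long_cmpxchg(&lock->owner, old, new);
        if (cur == old)
            break;
        old = cur;
    }

    /* after: try_cmpxchg() returns a bool and updates 'old' in place on failure */
    old = atomic_long_read(&lock->owner);
    do {
        new = old | MUTEX_FLAG_WAITERS;
    } while (!atomic_long_try_cmpxchg(&lock->owner, &old, new));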
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Waiman Long <longman@redhat.com> Reviewed-by: Yanfei Xu <yanfei.xu@windriver.com> Link: https://lore.kernel.org/r/20210630154114.834438545@infradead.org
|
#
2f064a59 |
| 11-Jun-2021 |
Peter Zijlstra <peterz@infradead.org> |
sched: Change task_struct::state
Change the type and name of task_struct::state. Drop the volatile and shrink it to an 'unsigned int'. Rename it in order to find all uses such that we can use READ_ONCE/WRITE_ONCE as appropriate.
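Accesses then look roughly like this; the new field name is not spelled out in this changelog, so take it as an assumption:

    /* before */
    if (p->state == TASK_UNINTERRUPTIBLE)
        uninterruptible++;

    /* after: renamed, and every access goes through the accessors */
    if (READ_ONCE(p->__state) == TASK_UNINTERRUPTIBLE)
        uninterruptible++;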
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Daniel Bristot de Oliveira <bristot@redhat.com> Acked-by: Will Deacon <will@kernel.org> Acked-by: Daniel Thompson <daniel.thompson@linaro.org> Link: https://lore.kernel.org/r/20210611082838.550736351@infradead.org
|
#
3a010c49 |
| 17-May-2021 |
Zqiang <qiang.zhang@windriver.com> |
locking/mutex: clear MUTEX_FLAGS if wait_list is empty due to signal
When an interruptible mutex locker is interrupted by a signal without acquiring the lock, it is removed from the wait queue. If the mutex then isn't contended enough to put another waiter on the wait queue, the stale WAITER bit forces every subsequent locker into the slowpath to acquire the lock. So if the wait queue is empty, the WAITER bit needs to be cleared.
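A minimal sketch of the idea in the signal/kill error path; the helper names follow the style of the surrounding entries and are assumptions:

    err:
        __set_current_state(TASK_RUNNING);
        __mutex_remove_waiter(lock, &waiter);
        /* last waiter gone: drop the flag bits so the fast path works again */
        if (likely(list_empty(&lock->wait_list)))
            __mutex_clear_flag(lock, MUTEX_FLAGS);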
Fixes: 040a0a371005 ("mutex: Add support for wound/wait style locks") Suggested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Zqiang <qiang.zhang@windriver.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20210517034005.30828-1-qiang.zhang@windriver.com
|
#
e2db7592 |
| 22-Mar-2021 |
Ingo Molnar <mingo@kernel.org> |
locking: Fix typos in comments
Fix ~16 single-word typos in locking code comments.
Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Paul E. McKenney <paulmck@kernel.org> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
#
5de2055d |
| 16-Mar-2021 |
Waiman Long <longman@redhat.com> |
locking/ww_mutex: Simplify use_ww_ctx & ww_ctx handling
The use_ww_ctx flag is passed to mutex_optimistic_spin(), but the function doesn't use it. The frequent use of the (use_ww_ctx && ww_ctx) combination is repetitive.
In fact, ww_ctx should not be used at all if !use_ww_ctx. Simplify the ww_mutex code by dropping use_ww_ctx from mutex_optimistic_spin() and clearing ww_ctx if !use_ww_ctx. In this way, we can replace (use_ww_ctx && ww_ctx) by just (ww_ctx).
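In short, call sites change roughly like this (a hedged sketch; the callee is just an example):

    /* before: both the compile-time flag and the pointer had to be checked */
    if (use_ww_ctx && ww_ctx)
        __ww_mutex_check_waiters(lock, ww_ctx);

    /* after: ww_ctx is forced to NULL when !use_ww_ctx, so the pointer suffices */
    if (ww_ctx)
        __ww_mutex_check_waiters(lock, ww_ctx);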
Signed-off-by: Waiman Long <longman@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Davidlohr Bueso <dbueso@suse.de> Link: https://lore.kernel.org/r/20210316153119.13802-2-longman@redhat.com
|
#
0f319d49 |
| 10-Feb-2021 |
Sebastian Andrzej Siewior <bigeasy@linutronix.de> |
locking/mutex: Kill mutex_trylock_recursive()
There are no users of mutex_trylock_recursive() in-tree as of v5.11-rc7.
Remove it.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20210210085248.219210-2-bigeasy@linutronix.de
|