History log of /netbsd/sys/kern/kern_lwp.c (Results 1 – 25 of 251)
Revision Date Author Comments
# 416a8a0e 09-Apr-2023 riastradh <riastradh@NetBSD.org>

kern: KASSERT(A && B) -> KASSERT(A); KASSERT(B)
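
The motivation for the split is diagnostic precision: KASSERT() prints the
stringified expression on failure, so a combined KASSERT(A && B) can only
report that the conjunction failed, while separate assertions name the exact
condition that was violated.  A minimal sketch of the pattern (the specific
conditions are hypothetical):

    /* Before: the panic message cannot say which half failed. */
    KASSERT(l->l_stat == LSIDL && l->l_refcnt == 0);

    /* After: the failing expression is reported on its own. */
    KASSERT(l->l_stat == LSIDL);
    KASSERT(l->l_refcnt == 0);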


# 22e17a50 01-Jul-2022 riastradh <riastradh@NetBSD.org>

kern: Fix comment about precondition of lwp_update_creds.


# 2d52435a 22-May-2022 andvar <andvar@NetBSD.org>

fix various small typos, mainly in comments.


# 6ca4ac1d 07-May-2022 mrg <mrg@NetBSD.org>

bump maxthreads default.

bump the default MAXLWP to 4096 from 2048, and adjust the default
limits seen to be 2048 cur / 4096 max. remove the linkage to
maxuprc entirely.

remove cpu_maxlwp() that isn't implemented anywhere. instead,
grow the maxlwp for larger memory systems, picking 1 lwp per 1MiB
of ram, limited to 65535 like the system limit.

remove some magic numbers.


i've been having weird firefox issues for a few months now; it turns out
i was having pthread_create() failures, and since bumping the defaults
i've had none of the recent issues.



# df32c3d2 09-Apr-2022 riastradh <riastradh@NetBSD.org>

kern: Handle l_mutex with atomic_store_release, atomic_load_consume.

- Where the lock is held and known to be correct, no atomic.
- In loops to acquire the lock, use atomic_load_relaxed before we
restart with atomic_load_consume.

Nix membar_exit.

(Who knows, using atomic_load_consume here might fix bugs on Alpha!)
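
As a hedged illustration of the loop described above (a simplified sketch,
not the actual lwp_lock() code, which differs in detail): read l_mutex with
a consuming load, take the mutex, and if the LWP was moved to a different
mutex in the meantime, drop it and restart.

    #include <sys/param.h>
    #include <sys/atomic.h>
    #include <sys/lwp.h>
    #include <sys/mutex.h>

    /*
     * Simplified sketch: l->l_mutex may be changed by another CPU while
     * we spin on the old mutex, so re-check it (relaxed) after acquiring
     * and, if it moved, restart with a fresh consuming load.
     */
    static void
    example_lwp_lock(struct lwp *l)
    {
            kmutex_t *old;

            old = atomic_load_consume(&l->l_mutex);
            mutex_spin_enter(old);
            while (__predict_false(atomic_load_relaxed(&l->l_mutex) != old)) {
                    mutex_spin_exit(old);
                    old = atomic_load_consume(&l->l_mutex);
                    mutex_spin_enter(old);
            }
            /* On return, the LWP's current l_mutex is held. */
    }

The side that changes l_mutex is then expected to publish the new pointer
with atomic_store_release(), so the consuming load above observes a fully
constructed mutex.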



# 5d085195 10-Mar-2022 riastradh <riastradh@NetBSD.org>

kern: Fix synchronization of clearing LP_RUNNING and lwp_free.

1. membar_sync is not necessary here -- only a store-release is
required.

2. membar_consumer _before_ loading l->l_pflag is not enough; a
load-acquire is required.

Actually it's not really clear to me why any barriers are needed, since
the store-release and load-acquire should be implied by releasing and
acquiring the lwp lock (and maybe we could spin with the lock instead
of reading l->l_pflag unlocked). But maybe there's something subtle
about access to l->l_mutex that's not obvious here.
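
In general terms, the pairing being described looks like the following
hedged sketch (illustrative only, not the actual mi_switch()/lwp_free()
code; the flag handling is simplified and assumes only the owning CPU
modifies l_pflag at this point):

    #include <sys/atomic.h>
    #include <sys/lwp.h>

    /*
     * Publisher: all prior stores to the LWP must be visible before
     * LP_RUNNING is observed clear -- a store-release, not membar_sync.
     */
    static void
    example_clear_running(struct lwp *l)
    {
            atomic_store_release(&l->l_pflag, l->l_pflag & ~LP_RUNNING);
    }

    /*
     * Observer (e.g. the path about to free the LWP): later loads must
     * not be reordered before the flag is seen clear -- a load-acquire,
     * not membar_consumer followed by a plain load.
     */
    static void
    example_wait_not_running(struct lwp *l)
    {
            while ((atomic_load_acquire(&l->l_pflag) & LP_RUNNING) != 0)
                    continue;       /* spin; the real code does more here */
    }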



# 25ae68d5 22-Dec-2021 thorpej <thorpej@NetBSD.org>

Do the last change differently:

Instead of having a pre-destruct hook, put knowledge of passive
serialization into the pool allocator directly, enabled by PR_PSERIALIZE
when the pool / pool_cache is initialized. This will guarantee that
a passive serialization barrier will be performed before the object's
destructor is called, or before the page containing the object is freed
back to the system (in the case of no destructor). Note that the internal
allocator overhead is different when PR_PSERIALIZE is used (it implies
PR_NOTOUCH, because the objects must remain in a valid state).

In the DRM Linux API shim, this allows us to remove the custom page
allocator for SLAB_TYPESAFE_BY_RCU.
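
As a hedged sketch of what this enables for a pool_cache user (struct foo,
foo_ctor() and foo_dtor() are hypothetical names; see pool_cache(9) for the
authoritative prototype): passing PR_PSERIALIZE at initialization asks the
allocator itself to run the passive serialization barrier before the
destructor is called, or before a page of objects is returned to the system.

    #include <sys/param.h>
    #include <sys/intr.h>
    #include <sys/pool.h>

    struct foo {
            int     f_refcnt;               /* hypothetical payload */
    };

    static pool_cache_t foo_cache;

    static int
    foo_ctor(void *arg, void *obj, int flags)
    {
            struct foo *f = obj;

            f->f_refcnt = 0;
            return 0;
    }

    static void
    foo_dtor(void *arg, void *obj)
    {
            /* Only runs after a passive serialization barrier. */
    }

    void
    foo_init(void)
    {
            foo_cache = pool_cache_init(sizeof(struct foo), coherency_unit,
                0, PR_PSERIALIZE, "foocache", NULL, IPL_NONE,
                foo_ctor, foo_dtor, NULL);
    }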



# 863f56d4 21-Dec-2021 thorpej <thorpej@NetBSD.org>

Rather than calling xc_barrier() in lwp_dtor(), set a pre-destruct hook
on the lwp_cache and invoke the barrier there.


# 769cefde 28-Sep-2021 thorpej <thorpej@NetBSD.org>

futex_release_all_lwp(): No need to pass the "tid" argument separately; that
is a vestige of an older version of the code. Also, move a KASSERT() that
both futex_release_all_lwp() call sites had inside of futex_release_all_lwp()
itself.



# 383f480f 13-Jan-2021 skrll <skrll@NetBSD.org>

Improve English in comments


# 4ac5a1dd 22-Jun-2020 maxv <maxv@NetBSD.org>

Permanent node doesn't need a log, plus the log gets leaked anyway. Found
by kLSan.


# d6324141 06-Jun-2020 ad <ad@NetBSD.org>

lwp_exit(): add a warning about (l != curlwp)


# a956373d 01-Jun-2020 thorpej <thorpej@NetBSD.org>

lwp_thread_cleanup(): Remove overly-aggressive assertion.


# 14b4bbb2 23-May-2020 ad <ad@NetBSD.org>

Move proc_lock into the data segment. It was dynamically allocated because
at the time we had mutex_obj_alloc() but not __cacheline_aligned.


# 01503c25 23-May-2020 ad <ad@NetBSD.org>

- Replace pid_table_lock with a lockless lookup covered by pserialize, with
the "writer" side being pid_table expansion. The basic idea is that when
doing an LWP lookup there is usually already a lock held (p->p_lock), or a
spin mutex that needs to be taken (l->l_mutex), and either can be used to
get the found LWP stable and confidently determine that all is correct.

- For user processes LSLARVAL implies the same thing as LSIDL ("not visible
by ID"), and lookup by ID in proc0 doesn't really happen. In-tree, the new
state would need to be understood by top(1), the tty subsystem and so on,
and would attract the attention of third-party kernel grovellers in time,
so remove it and just rely on LSIDL.
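
The pserialize(9) pattern referred to here, in a generic hedged sketch (not
the actual pid_table code; the object, slot and function names are
hypothetical): the reader performs the lookup with no lock held inside a
pserialize read section, then stabilizes the result with a spin lock it
would have needed anyway, much as the commit describes for p->p_lock and
l->l_mutex; the writer unlinks, waits out all readers with
pserialize_perform(), and only then frees.

    #include <sys/param.h>
    #include <sys/atomic.h>
    #include <sys/mutex.h>
    #include <sys/pserialize.h>

    struct obj {
            kmutex_t        o_lock;         /* spin mutex */
            bool            o_valid;        /* cleared before unlinking */
            int             o_counter;      /* protected by o_lock */
    };

    static struct obj * volatile slot;      /* stand-in for the lookup table */
    static pserialize_t psz;                /* from pserialize_create() */

    /* Reader: lockless lookup, then lock and validate strictly inside
     * the read section; no sleeping is allowed here, so the lock must
     * be a spin mutex. */
    bool
    obj_lookup_and_poke(void)
    {
            struct obj *o;
            bool found = false;
            int s;

            s = pserialize_read_enter();
            o = atomic_load_consume(&slot);
            if (o != NULL) {
                    mutex_spin_enter(&o->o_lock);
                    if (o->o_valid) {
                            o->o_counter++;
                            found = true;
                    }
                    mutex_spin_exit(&o->o_lock);
            }
            pserialize_read_exit(s);
            return found;
    }

    /* Writer (thread context): unlink, drain readers, then free. */
    void
    obj_remove(struct obj *o)
    {
            mutex_spin_enter(&o->o_lock);
            o->o_valid = false;
            atomic_store_relaxed(&slot, NULL);
            mutex_spin_exit(&o->o_lock);

            pserialize_perform(psz);        /* all read sections drained */
            /* ... now safe to destroy o_lock and free o ... */
    }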



# fc62baac 29-Apr-2020 thorpej <thorpej@NetBSD.org>

- proc_find() retains traditional semantics of requiring the canonical
PID to look up a proc. Add a separate proc_find_lwpid() to look up a
proc by the ID of any of its LWPs.
- Add proc_find_lwp_acquire_proc(), which enables looking up the LWP
*and* a proc given the ID of any LWP. Returns with the proc::p_lock
held.
- Rewrite lwp_find2() in terms of proc_find_lwp_acquire_proc(), and allow
the proc to be wildcarded, rather than just curproc or a specific proc.
- lwp_find2() now subsumes the original intent of lwp_getref_lwpid(), but
in a much nicer way, so garbage-collect the remnants of that recently
added mechanism.
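
A hedged usage sketch for the new lookup function; the prototype used below
is an assumption inferred from this description rather than copied from
sys/proc.h, so verify it against the tree.  The point being illustrated is
the locking contract stated above: on success the proc comes back with
p_lock held.

    #include <sys/param.h>
    #include <sys/lwp.h>
    #include <sys/mutex.h>
    #include <sys/proc.h>

    /* Assumed: struct lwp *proc_find_lwp_acquire_proc(pid_t, struct proc **) */
    static bool
    example_thread_exists(pid_t lid)
    {
            struct proc *p;
            struct lwp *l;

            l = proc_find_lwp_acquire_proc(lid, &p);
            if (l == NULL)
                    return false;
            /* Both l and its proc p are stable here; p->p_lock is held. */
            mutex_exit(p->p_lock);
            return true;
    }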



# 9a08a752 26-Apr-2020 thorpej <thorpej@NetBSD.org>

Add a NetBSD native futex implementation, mostly written by riastradh@.
Map the COMPAT_LINUX futex calls to the native ones.


# 739430b9 24-Apr-2020 thorpej <thorpej@NetBSD.org>

Overhaul the way LWP IDs are allocated. Instead of each process having its
own LWP ID space, LWP IDs now come from the same number space as PIDs. The
lead LWP of a process gets the PID as its LID. If a multi-LWP process's
lead LWP exits, the PID persists for the process.

In addition to providing system-wide unique thread IDs, this also lets us
eliminate the per-process LWP radix tree, and some associated locks.

Remove the separate "global thread ID" map added previously; it is no longer
needed to provide this functionality.

Nudged in this direction by ad@ and chs@.



# 4c12ffd5 19-Apr-2020 ad <ad@NetBSD.org>

lwp_wait(): don't need to check for process exit, cv_wait_sig() does it.


# aa33eaa2 04-Apr-2020 thorpej <thorpej@NetBSD.org>

Add support for lazily generating a "global thread ID" for a LWP. This
identifier uniquely identifies an LWP across the entire system, and will
be used in future improvements in user-space synchronization primitives.

(Test disabled and libc stub not included intentionally so as to avoid
multiple libc version bumps.)



# cb39d4d4 04-Apr-2020 maxv <maxv@NetBSD.org>

Drop specificdata from KCOV, kMSan doesn't interact well with it. Also
reduces the overhead.


# 70e37b6f 26-Mar-2020 ad <ad@NetBSD.org>

Fix a crash with procfs reported on current-users by David Hopper: the LWP
refcnt and p_zomblwp must both reach the needed state, and LSZOMB be set,
under a single hold of p_lock.


# 6cc31eb1 26-Mar-2020 ad <ad@NetBSD.org>

softint_overlay() (slow case) gains ~nothing but creates potential headaches.
In the interests of simplicity remove it and always use the kthreads.


# 5b97fdf1 08-Mar-2020 ad <ad@NetBSD.org>

PR kern/55020: dbregs_dr?_dont_inherit_lwp test cases fail on real hardware

lwp_wait(): make the check for deadlock much more permissive.


# e9c23bf6 27-Feb-2020 ad <ad@NetBSD.org>

Remove an unneeded ifdef MULTIPROCESSOR.

