History log of /netbsd/sys/sys/lwp.h (Results 1 – 25 of 216)
Revision Date Author Comments
# 3567f89a 23-Jul-2022 mrg <mrg@NetBSD.org>

make MAXLWP a real option that triggers rebuilds properly.


# 6ca4ac1d 07-May-2022 mrg <mrg@NetBSD.org>

bump maxthreads default.

bump the default MAXLWP to 4096 from 2048, and adjust the default
limits seen to be 2048 cur / 4096 max. remove the linkage to
maxuprc entirely.

remove cpu_maxlwp() that isn't implemented anywhere. instead,
grow the maxlwp for larger memory systems, picking 1 lwp per 1MiB
of ram, limited to 65535 like the system limit.

remove some magic numbers.


i've been having weird firefox issues for a few months now and
it turns out i was having pthread_create() failures and since
bumping the defaults i've had none of the recent issues.
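
The sizing rule above is easy to restate in code; a rough sketch with
assumed constant names (the real logic lives in the kernel startup path):

    #include <stdint.h>

    #define MAXLWP_DFLT     4096    /* new default, per this commit */
    #define MAXLWP_LIMIT    65535   /* system-wide hard limit */

    static uint32_t
    scaled_maxlwp(uint64_t physmem_bytes)
    {
        /* roughly one LWP per 1MiB of RAM, clamped to the limits */
        uint64_t lwps = physmem_bytes / (1024 * 1024);

        if (lwps < MAXLWP_DFLT)
            lwps = MAXLWP_DFLT;
        if (lwps > MAXLWP_LIMIT)
            lwps = MAXLWP_LIMIT;
        return (uint32_t)lwps;
    }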


# df32c3d2 09-Apr-2022 riastradh <riastradh@NetBSD.org>

kern: Handle l_mutex with atomic_store_release, atomic_load_consume.

- Where the lock is held and known to be correct, no atomic.
- In loops to acquire the lock, use atomic_load_relaxed before we
restart with atomic_load_consume.

Nix membar_exit.

(Who knows, using atomic_load_consume here might fix bugs on Alpha!)
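
A sketch of the acquire loop described above, loosely modeled on
lwp_lock() (the real code is in kern_lwp.c):

    /* l_mutex can change while we spin on the old value, so
     * re-check after acquiring and restart if it moved */
    static kmutex_t *
    lwp_lock_sketch(struct lwp *l)
    {
        kmutex_t *old;

        for (;;) {
            old = atomic_load_consume(&l->l_mutex);
            mutex_spin_enter(old);
            if (atomic_load_relaxed(&l->l_mutex) == old)
                return old;       /* held and correct: no atomic needed now */
            mutex_spin_exit(old); /* it moved; restart */
        }
    }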


# e62aad30 09-Apr-2022 riastradh <riastradh@NetBSD.org>

sys/lwp.h: Nix trailing whitespace.


# 13348569 27-Feb-2022 gutteridge <gutteridge@NetBSD.org>

lwp.h: correct grammar in a comment


# 307e02b2 23-Oct-2020 thorpej <thorpej@NetBSD.org>

- sleepq_block(): Add a new LWP flag, LW_CATCHINTR, that is used to track
the intent to catch signals while sleeping. Initialize this flag based
on the catch_p argument to sleepq_block(), and rather than test catch_p
when awakened, test LW_CATCHINTR. This allows the intent to change
(based on whatever criteria the owner of the sleepq wishes) while the
LWP is asleep. This is separate from LW_SINTR in order to leave all
other logic around LW_SINTR unaffected.
- In sleepq_transfer(), also adjust LW_CATCHINTR based on the catch_p
argument. Also allow the new LWP lock argument to be NULL, which
will cause the lwp_setlock() call to be skipped; this allows transfer
to another sleepq that is known to be protected by the same lock.
- Add a new function, sleepq_uncatch(), that will transition an LWP
from "interruptible sleep" to "uninterruptible sleep" on its current
sleepq.
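
A minimal sketch of what sleepq_uncatch() amounts to, per the description
above (the real function is in kern_sleepq.c and may clear more state
than shown):

    /* flip an LWP from interruptible to uninterruptible sleep in
     * place; a later wakeup will no longer check for signals */
    void
    sleepq_uncatch(struct lwp *l)
    {
        l->l_flag &= ~(LW_SINTR | LW_CATCHINTR);
    }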


# 941120eb 01-Aug-2020 riastradh <riastradh@NetBSD.org>

New functions kthread_fpu_enter/exit.

The MI definitions don't do anything but maintain a flag, but MD code
can define kthread_fpu_enter/exit_md to actually enable/disable the
FPU. (These are almost pcu_load/discard on systems that use pcu(9),
except they apply to all PCUs.)

Discussed on tech-kern:
https://mail-index.netbsd.org/tech-kern/2020/06/20/msg026524.html

The proposed kthread flag KTHREAD_FPU is not included because I
couldn't find any particular need for it that would not be covered by
just calling kthread_fpu_enter/exit in the kthread function.
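
A usage sketch, assuming kthread_fpu_enter() returns a state token that
kthread_fpu_exit() restores:

    #include <sys/kthread.h>

    static void
    crunch_thread(void *arg)
    {
        int s;

        for (;;) {
            s = kthread_fpu_enter();    /* FPU usable from here on */
            /* ... floating-point or vector work ... */
            kthread_fpu_exit(s);        /* restore previous FPU state */
        }
    }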


# 01503c25 23-May-2020 ad <ad@NetBSD.org>

- Replace pid_table_lock with a lockless lookup covered by pserialize, with
the "writer" side being pid_table expansion. The basic idea is that when
doing an LWP lookup there is usually already a lock held (p->p_lock), or a
spin mutex that needs to be taken (l->l_mutex), and either can be used to
get the found LWP stable and confidently determine that all is correct.

- For user processes LSLARVAL implies the same thing as LSIDL ("not visible
by ID"), and lookup by ID in proc0 doesn't really happen. In-tree the new
state should be understood by top(1), the tty subsystem and so on, and
would attract the attention of 3rd party kernel grovellers in time, so
remove it and just rely on LSIDL.
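
The reader side of the pattern described above might look like the
following pserialize(9) sketch; lookup_pid_table() is a hypothetical
helper standing in for the lockless table walk, not the actual code:

    #include <sys/pserialize.h>
    #include <sys/mutex.h>
    #include <sys/proc.h>

    /* look up a proc locklessly, then stabilize it with p_lock */
    static struct proc *
    proc_lookup_sketch(pid_t pid)
    {
        struct proc *p;
        int s;

        s = pserialize_read_enter();
        p = lookup_pid_table(pid);      /* hypothetical helper */
        if (p != NULL)
            mutex_enter(p->p_lock);     /* get the result stable */
        pserialize_read_exit(s);
        return p;   /* returned with p_lock held, or NULL */
    }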


# fc62baac 29-Apr-2020 thorpej <thorpej@NetBSD.org>

- proc_find() retains traditional semantics of requiring the canonical
PID to look up a proc. Add a separate proc_find_lwpid() to look up a
proc by the ID of any of its LWPs.
- Add proc_find_lwp_acquire_proc(), which enables looking up the LWP
*and* a proc given the ID of any LWP. Returns with the proc::p_lock
held.
- Rewrite lwp_find2() in terms of proc_find_lwp_acquire_proc(), and allow
  the proc to be wildcarded, rather than just curproc or a specific proc.
- lwp_find2() now subsumes the original intent of lwp_getref_lwpid(), but
in a much nicer way, so garbage-collect the remnants of that recently
added mechanism.
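
A usage sketch for the new lookup; the exact signature is assumed from
the commit text:

    struct proc *p;
    struct lwp *l;

    /* look up by the ID of any LWP; on success p_lock is held */
    l = proc_find_lwp_acquire_proc(lid, &p);
    if (l != NULL) {
        /* ... use l and p ... */
        mutex_exit(p->p_lock);
    }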


# 9a08a752 26-Apr-2020 thorpej <thorpej@NetBSD.org>

Add a NetBSD native futex implementation, mostly written by riastradh@.
Map the COMPAT_LINUX futex calls to the native ones.


# 739430b9 24-Apr-2020 thorpej <thorpej@NetBSD.org>

Overhaul the way LWP IDs are allocated. Instead of each process having
its own LWP ID space, LWP IDs now come from the same number space as PIDs. The
lead LWP of a process gets the PID as its LID. If a multi-LWP process's
lead LWP exits, the PID persists for the process.

In addition to providing system-wide unique thread IDs, this also lets us
eliminate the per-process LWP radix tree, and some associated locks.

Remove the separate "global thread ID" map added previously; it is no longer
needed to provide this functionality.

Nudged in this direction by ad@ and chs@.
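
A quick userland illustration of the resulting scheme (a sketch, not
part of the commit): in a single-threaded process the lead LWP's ID now
equals the PID.

    #include <sys/types.h>
    #include <lwp.h>        /* NetBSD-specific _lwp_self() */
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* prints two identical numbers under the new scheme */
        printf("pid=%d lwp=%d\n", (int)getpid(), (int)_lwp_self());
        return 0;
    }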


# 44199b63 10-Apr-2020 ad <ad@NetBSD.org>

- Make the following sequence always work for condvars, by not touching the CV
  again after wakeup. Previously it could panic, because cv_signal() could
  be called from within cv_wait_sig() and friends:

cv_broadcast(cv);
cv_destroy(cv);

- In support of the above, if an LWP doing a timed wait is awoken by
cv_broadcast() or cv_signal(), don't return an error if the timer
fires after the fact, i.e. either succeed or fail, not both.

- Remove LOCKDEBUG code for CVs which never worked properly and is of
questionable use.
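
In other words, a waker can now tear down immediately after waking,
along these lines (sketch using the condvar(9) and mutex(9) interfaces):

    mutex_enter(&mtx);
    done = true;            /* the condition the waiters recheck */
    cv_broadcast(&cv);
    mutex_exit(&mtx);
    cv_destroy(&cv);        /* safe: no waiter touches cv after wakeup */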


# aa33eaa2 04-Apr-2020 thorpej <thorpej@NetBSD.org>

Add support for lazily generating a "global thread ID" for an LWP. This
identifier uniquely identifies an LWP across the entire system, and will
be used in future improvements in user-space synchronization primitives.

(Test disabled and libc stub not included intentionally so as to avoid
multiple libc version bumps.)


# cb39d4d4 04-Apr-2020 maxv <maxv@NetBSD.org>

Drop specificdata from KCOV; kMSan doesn't interact well with it. This
also reduces the overhead.


# bb03bb7d 26-Mar-2020 ad <ad@NetBSD.org>

Change sleepq_t from a TAILQ to a LIST and remove SOBJ_SLEEPQ_FIFO. Only
select/poll used the FIFO method and that was for collisions which rarely
occur. Shrinks sleepq_t and condvar_t.


# 9cde7993 15-Feb-2020 ad <ad@NetBSD.org>

- Move the LW_RUNNING flag back into l_pflag: updating l_flag without lock
in softint_dispatch() is risky. May help with the "softint screwup"
panic.

- Correct the memory barriers around zombies switching into oblivion.


# 906105f9 15-Feb-2020 ad <ad@NetBSD.org>

PR kern/54922: 9.99.45@20200202 panic: diagnostic assertion linux ldconfig triggers vpp != NULL in exit1()->radixtree.c line 674

Create an lwp_renumber() from the code in emulexec() and use in
linux_e_proc_exec() and linux_e_proc_fork() too.


# 1a625c58 29-Jan-2020 ad <ad@NetBSD.org>

- Track LWPs in a per-process radixtree. It uses no extra memory in the
single threaded case. Replace scans of p->p_lwps with lookups in the
tree. Find free LIDs for new LWPs in the tree. Replace the hashed sleep
queues for park/unpark with lookups in the tree under cover of a RW lock.

- lwp_wait(): if waiting on a specific LWP, find the LWP via tree lookup and
return EINVAL if it's detached, not ESRCH.

- Group the locks in struct proc at the end of the struct in their own cache
line.

- Add some comments.
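
An LID lookup in the per-process tree might look like this radixtree(9)
sketch (the p_lwptree field name is illustrative, not necessarily the
actual one):

    #include <sys/radixtree.h>

    /* find an LWP by ID in the process's radix tree */
    static struct lwp *
    find_lwp_sketch(struct proc *p, lwpid_t lid)
    {
        KASSERT(mutex_owned(p->p_lock));
        return radix_tree_lookup_node(&p->p_lwptree, (uint64_t)lid);
    }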


# 669015d4 28-Jan-2020 ad <ad@NetBSD.org>

Put pri_t back to an int. It looks like there might be a sign extension
issue somewhere but it's not worth the hassle trying to find it.


# 7f67618c 25-Jan-2020 ad <ad@NetBSD.org>

- Fix a race between the kernel and libpthread, where a new thread can start
life without its self->pt_lid being filled in.

- Fix an error path in _lwp_create(). If the new LID can't be copied out,
then get rid of the new LWP (i.e. either succeed or fail, not both).

- Mark l_dopreempt and l_nopreempt volatile in struct lwp.


# c4e1413d 21-Jan-2020 ad <ad@NetBSD.org>

ddb's "show all locks":

- Make the output easier to scan quickly.

- Show every LWP that is blocked on a lock, and the details of the lock.


# 89217999 12-Jan-2020 ad <ad@NetBSD.org>

A final set of scheduler tweaks:

- Try hard to keep vfork() parent and child on the same CPU until execve(),
failing that on the same core, but in all other cases scatter new LWPs
among the different CPU packages, round robin, to try and get the best out
of the available cache and bus bandwidth.

- Remove attempts at balancing. Replace with a rate-limited skim of other
CPU's run queues in sched_idle(), starting in the current package and
moving outwards. Add a sysctl tunable to change the interval.

- Make the cacheht_time tuneable take a milliseconds value.

- It's possible to configure things such that there's no CPU allowed to run
an LWP. Defeat this by always having a default:

Reported-by: syzbot+46968944dd9359ab93bc@syzkaller.appspotmail.com
Reported-by: syzbot+7f750a4cc230d1e831f9@syzkaller.appspotmail.com
Reported-by: syzbot+88d7675158f5cb4684db@syzkaller.appspotmail.com
Reported-by: syzbot+d409c2338150e9a8ae1e@syzkaller.appspotmail.com
Reported-by: syzbot+e152dc5bff188f67358a@syzkaller.appspotmail.com


# 70d0f751 12-Jan-2020 ad <ad@NetBSD.org>

Make pri_t a short and get back some more space in struct lwp.


# e5e7dbf1 12-Jan-2020 ad <ad@NetBSD.org>

- Shuffle some items around in struct lwp to save space. Remove an unused
item or two.

- For lockstat, get a useful callsite for vnode locks (caller to vn_lock()).


# 14f6a1f0 08-Jan-2020 ad <ad@NetBSD.org>

Hopefully fix some problems seen with MP support on non-x86, in particular
where curcpu() is defined as curlwp->l_cpu:

- mi_switch(): undo the ~2007ish optimisation to unlock curlwp before
calling cpu_switchto(). It's not safe to let other actors mess with the
LWP (in particular l->l_cpu) while it's still context switching. This
removes l->l_ctxswtch.

- Move the LP_RUNNING flag into l->l_flag and rename to LW_RUNNING since
it's now covered by the LWP's lock.

- Ditch lwp_exit_switchaway() and just call mi_switch() instead. Everything
is in cache anyway so it wasn't buying much by trying to avoid saving old
state. This means cpu_switchto() will never be called with prevlwp ==
NULL.

- Remove some KERNEL_LOCK handling which hasn't been needed for years.
