#
0d280c5f |
| 14-Aug-2022 |
jsg <jsg@openbsd.org> |
remove unneeded includes in sys/kern ok mpi@ miod@
|
#
6bbcc068 |
| 24-Nov-2021 |
visa <visa@openbsd.org> |
Fix type of count.
|
#
5c8bc909 |
| 24-Nov-2021 |
visa <visa@openbsd.org> |
Simplify arithmetic on the main path.
|
#
7720a192 |
| 24-Nov-2021 |
claudio <claudio@openbsd.org> |
Add a few dt(4) TRACEPOINTS to SMR. Should help to better understand what goes on in SMR. OK mpi@
|
#
d73de46f |
| 06-Jul-2021 |
kettenis <kettenis@openbsd.org> |
Introduce CPU_IS_RUNNING() and use it in scheduler-related code to prevent waiting on CPUs that didn't spin up. This will allow us to spin down CPUs in the future to save power as well.
ok mpi@
|
#
2c6d48bb |
| 29-Jun-2021 |
kettenis <kettenis@openbsd.org> |
Didn't intend to commit the CPU_IS_RUNNING() changes just yet, so revert those bits.
|
#
11548269 |
| 29-Jun-2021 |
kettenis <kettenis@openbsd.org> |
SMP support. Mostly works, but occasionally craps out during boot.
ok drahn@
|
#
83695439 |
| 25-Dec-2020 |
visa <visa@openbsd.org> |
Small smr_grace_wait() optimization
Make the SMR thread maintain an explicit system-wide grace period and make CPUs observe the current grace period when crossing a quiescent state. This lets the SMR thread avoid a forced context switch for CPUs that have already entered the latest grace period.
This change provides a small improvement in smr_grace_wait()'s performance in terms of context switching.
OK mpi@, anton@
|
#
fc0b7835 |
| 03-Apr-2020 |
visa <visa@openbsd.org> |
Adjust SMR_ASSERT_CRITICAL() and SMR_ASSERT_NONCRITICAL() so that the panic message shows the actual code location of the assert. Do this by moving the assert logic inside the macros.
Prompted by and OK claudio@ OK mpi@
|
#
1ab6845c |
| 25-Feb-2020 |
visa <visa@openbsd.org> |
Start the SMR thread when all CPUs are ready for scheduling. This prevents the appearance of a "smr: dispatch took N seconds" message during boot when there is an early smr_call(). Such a call can happen with mfii(4). The initial dispatch cannot make progress until smr_grace_wait() can visit all CPUs.
This fix is essentially a hack. It makes use of the fact that there is no hard guarantee on how quickly the callback of smr_call() gets invoked. It is assumed that the SMR call backlog does not grow large during boot.
An alternative fix is to make smr_grace_wait() skip secondary CPUs until they have been started. However, this could break if the spinup logic of secondary CPUs was changed.
Delayed SMR dispatch reported and fix tested by Hrvoje Popovski. Discussed with and OK kettenis@, claudio@
|
#
82fff5fa |
| 30-Dec-2019 |
jsg <jsg@openbsd.org> |
convert infinite msleep(9) to msleep_nsec(9)
ok mpi@
|
#
4bc97b15 |
| 03-Jul-2019 |
cheloha <cheloha@openbsd.org> |
Add tsleep_nsec(9), msleep_nsec(9), and rwsleep_nsec(9).
Equivalent to their unsuffixed counterparts except that (a) they take a timeout in terms of nanoseconds, and (b) INFSLP, aka UINT64_MAX (not zero) indicates that a timeout should not be set.
For now, zero nanoseconds is not a strictly valid invocation: we log a warning on DIAGNOSTIC kernels if we see such a call. We still sleep until the next tick in such a case, however. In the future this could become some sort of poll... TBD.
To facilitate conversions to these interfaces: add inline conversion functions to sys/time.h for turning your timeout into nanoseconds.
Also do a few easy conversions for warmup and to demonstrate how further conversions should be done.
Lots of input from mpi@ and ratchov@. Additional input from tedu@, deraadt@, mortimer@, millert@, and claudio@.
Partly inspired by FreeBSD r247787.
positive feedback from deraadt@, ok mpi@
|
#
9e473608 |
| 17-May-2019 |
visa <visa@openbsd.org> |
Add SMR_ASSERT_NONCRITICAL() in assertwaitok(). This eases debugging because now the error is detected before context switch.
The sleep code path eventually calls assertwaitok() in mi_switch(), so the assertwaitok() in the SMR barrier function is somewhat redundant and can be removed.
OK mpi@
|
#
5266b40f |
| 16-May-2019 |
visa <visa@openbsd.org> |
Remove incorrect optimization. The current logic for skipping idle CPUs does not establish strong enough ordering between CPUs. Consequently, smr_grace_wait() might incorrectly skip a CPU and invoke an SMR callback too early.
Prompted by haesbaert@
|
#
aa45e4b6 |
| 14-May-2019 |
visa <visa@openbsd.org> |
Add lock order checking for smr_barrier(9). This is similar to the checking done in taskq_barrier(9) and timeout_barrier(9).
OK mpi@
|
#
f2396460 |
| 26-Feb-2019 |
visa <visa@openbsd.org> |
Introduce safe memory reclamation, a mechanism for reclaiming shared objects that readers can access without locking. This provides a basis for read-copy-update operations.
Readers access SMR-protected shared objects inside SMR read-side critical section where sleeping is not allowed. To reclaim an SMR-protected object, the writer has to ensure mutual exclusion of other writers, remove the object's shared reference and wait until read-side references cannot exist any longer. As an alternative to waiting, the writer can schedule a callback that gets invoked when reclamation is safe.
The mechanism relies on CPU quiescent states to determine when an SMR-protected object is ready for reclamation.
The <sys/smr.h> header additionally provides an implementation of singly- and doubly-linked lists that can be used together with SMR. These lists allow lockless read access with a concurrent writer.
Discussed with many. OK mpi@ sashan@
|