f4e79c03 | 01-Mar-2023 | riastradh <riastradh@NetBSD.org>

arm32: Optimization: Omit needless membar when triggering softint.

When we are triggering a softint, it can't already hold any mutexes. So any path to mutex_exit(mtx) must go via mutex_enter(mtx), which is always done with atomic r/m/w, and we need not issue any explicit barrier between ci->ci_curlwp = softlwp and a potential load of mtx->mtx_owner in mutex_exit.
PR kern/57240
XXX pullup-8 XXX pullup-9 XXX pullup-10
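The reasoning in this commit message can be sketched with C11 atomics. This is an illustrative model only, not the real NetBSD mutex implementation; the `_model` names are invented. The point is that the atomic r/m/w in mutex_enter() is itself an ordering point on every path that can later reach mutex_exit(), so the ci_curlwp store needs no separate membar:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* Toy model of the argument: mutex_enter() is an atomic
 * read/modify/write (a ldrex/strex loop on ARM), and any path to
 * mutex_exit() passed through that r/m/w first, so the earlier store
 * ci->ci_curlwp = softlwp needs no explicit barrier before mutex_exit()
 * loads mtx_owner. */

struct lwp { int id; };
struct kmutex { _Atomic(struct lwp *) mtx_owner; };

static int
mutex_enter_model(struct kmutex *mtx, struct lwp *l)
{
	struct lwp *expected = NULL;
	/* Atomic r/m/w: this CAS is the ordering point. */
	return atomic_compare_exchange_strong(&mtx->mtx_owner, &expected, l);
}

static void
mutex_exit_model(struct kmutex *mtx, struct lwp *l)
{
	/* Reaching here implies the CAS above succeeded earlier. */
	assert(atomic_load(&mtx->mtx_owner) == l);
	atomic_store(&mtx->mtx_owner, NULL);
}
```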

f7bcfe7a | 23-Feb-2023 | riastradh <riastradh@NetBSD.org>

arm32: Add missing barriers in cpu_switchto.
Details in comments.
PR kern/57240
XXX pullup-8 XXX pullup-9 XXX pullup-10

71376dac | 30-May-2021 | dholland <dholland@NetBSD.org>

typo in comment

b151ecc7 | 21-Nov-2020 | skrll <skrll@NetBSD.org>

Ensure that r5 contains curlwp before DO_AST_AND_RESTORE_ALIGNMENT_FAULTS in lwp_trampoline as required by the move to make ASTs operate per-LWP rather than per-CPU.

Thanks to martin@ for bisecting the amap corruption he was seeing and testing this fix.

323a9dd8 | 15-Aug-2020 | skrll <skrll@NetBSD.org>

#ifdef _ARM_ARCH_7 the dmbs

f0286e4f | 14-Aug-2020 | skrll <skrll@NetBSD.org>

Mirror the changes to aarch64 and

- Switch to TPIDRPRW_IS_CURLWP, because curlwp is accessed much more often by MI code. It also makes curlwp preemption safe.
- Make ASTs operate per-LWP rather than per-CPU; otherwise LWPs can sometimes see spurious ASTs (which doesn't cause a problem, it just means some time may be wasted).
- Make sure ASTs are always set on the same CPU as the target LWP, and delivered via IPI if posted from a remote CPU so that they are resolved quickly.
- Add some cache line padding to struct cpu_info.
- Add a memory barrier in a couple of places where ci_curlwp is set. This is needed whenever an LWP that is resuming on the CPU could hold an adaptive mutex. The barrier needs to drain the CPU's store buffer, so that the update to ci_curlwp becomes globally visible before the LWP can resume and call mutex_exit().
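The last bullet can be sketched as a release store paired with an acquire load. On ARMv7 a release store is what a `dmb` before the plain store achieves, which is exactly the "drain the store buffer" behaviour described. Struct layout and function names here are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* Sketch: publish ci_curlwp with release semantics so the new owner is
 * globally visible before the resumed lwp can run and call
 * mutex_exit(). On ARMv7 the release store compiles to roughly
 * "dmb ish; str". */

struct lwp { int id; };
struct cpu_info { _Atomic(struct lwp *) ci_curlwp; };

static void
cpu_set_curlwp(struct cpu_info *ci, struct lwp *l)
{
	/* release store ~= dmb + plain store on ARMv7 */
	atomic_store_explicit(&ci->ci_curlwp, l, memory_order_release);
}

static struct lwp *
cpu_get_curlwp(struct cpu_info *ci)
{
	/* remote readers (e.g. a CPU spinning on the mutex owner) pair
	 * this acquire load with the release store above */
	return atomic_load_explicit(&ci->ci_curlwp, memory_order_acquire);
}
```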

078bd70b | 10-Jul-2020 | skrll <skrll@NetBSD.org>

Add support for KASAN on ARMv[67]
Thanks to maxv for many pointers and reviews.

4532d9e0 | 06-Jul-2020 | skrll <skrll@NetBSD.org>

Whitespace

97698f58 | 03-Jul-2020 | skrll <skrll@NetBSD.org>

KNF (sort #includes)

e52c0724 | 11-Feb-2020 | skrll <skrll@NetBSD.org>

G/C

2dc232a0 | 08-Jan-2020 | skrll <skrll@NetBSD.org>

oldlwp is always non-NULL in cpu_switchto so remove the test for NULL.

14f6a1f0 | 08-Jan-2020 | ad <ad@NetBSD.org>

Hopefully fix some problems seen with MP support on non-x86, in particular where curcpu() is defined as curlwp->l_cpu:

- mi_switch(): undo the ~2007ish optimisation to unlock curlwp before calling cpu_switchto(). It's not safe to let other actors mess with the LWP (in particular l->l_cpu) while it's still context switching. This removes l->l_ctxswtch.
- Move the LP_RUNNING flag into l->l_flag and rename to LW_RUNNING since it's now covered by the LWP's lock.
- Ditch lwp_exit_switchaway() and just call mi_switch() instead. Everything is in cache anyway so it wasn't buying much by trying to avoid saving old state. This means cpu_switchto() will never be called with prevlwp == NULL.
- Remove some KERNEL_LOCK handling which hasn't been needed for years.
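The LP_RUNNING to LW_RUNNING move (second bullet) can be illustrated with a toy lock-covered flag. The lock is modeled as a simple owned/unowned field rather than a real kernel lock, and all names besides LW_RUNNING are invented for this sketch:

```c
#include <assert.h>

/* Toy model: the flag now lives in l->l_flag and is only touched while
 * the LWP's lock is held, so no separate atomics or handshakes (the
 * removed l_ctxswtch) are needed to read it consistently. */

#define	LW_RUNNING	0x0001

struct lwp {
	int l_locked;	/* models lwp_lock()/lwp_unlock() */
	int l_flag;
};

static void lwp_lock(struct lwp *l)   { assert(!l->l_locked); l->l_locked = 1; }
static void lwp_unlock(struct lwp *l) { assert(l->l_locked);  l->l_locked = 0; }

static void
lwp_set_running(struct lwp *l, int on)
{
	lwp_lock(l);
	if (on)
		l->l_flag |= LW_RUNNING;
	else
		l->l_flag &= ~LW_RUNNING;
	lwp_unlock(l);
}

static int
lwp_running_p(struct lwp *l)
{
	lwp_lock(l);
	int running = (l->l_flag & LW_RUNNING) != 0;
	lwp_unlock(l);
	return running;
}
```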

0e61a937 | 29-Oct-2019 | joerg <joerg@NetBSD.org>

Explicitly annotate FPU requirements for LLVM MC.

When using GCC, these annotations change the global state, but there is no push/pop functionality for .fpu to avoid this problem. With LLVM MC, the state is local to each inline assembler block.

5de93b97 | 13-Sep-2019 | skrll <skrll@NetBSD.org>

Typo in comment

bd6172a7 | 22-Nov-2018 | skrll <skrll@NetBSD.org>

Typo in comment

b3f4e5e3 | 01-Jul-2017 | skrll <skrll@NetBSD.org>

Whitespace (align comments)

65d17c4a | 01-Jul-2017 | skrll <skrll@NetBSD.org>

Trailing whitespace

3bb865cd | 08-Apr-2015 | matt <matt@NetBSD.org>

Make TPIDRPRW_IS_CURLWP work for MULTIPROCESSOR: get curcpu() from the new lwp; don't set lwp l_cpu (already done); remove support for __HAVE_UNNESTED_INTRS; don't set curlwp until after we are done saving the oldlwp; disable interrupts when setting curlwp/kernel stack pointer. Overall, these changes simplify cpu_switchto even more.
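The ordering constraints here ("don't set curlwp until after we are done saving the oldlwp", interrupts off around the switch) might be sketched as follows. This is a toy model with invented names, and interrupt masking is reduced to a plain flag:

```c
#include <assert.h>
#include <string.h>

/* Toy model of the cpu_switchto ordering: the outgoing lwp's register
 * state must be fully saved before curlwp is redirected at the new lwp,
 * because an interrupt arriving after curlwp changes would save state
 * into the wrong pcb. */

struct pcb { unsigned regs[4]; };
struct lwp { struct pcb l_pcb; };

static struct lwp *curlwp;
static int intr_enabled = 1;

static void
cpu_switchto_model(struct lwp *olwp, struct lwp *nlwp,
    const unsigned live_regs[4])
{
	/* 1. Save outgoing state while curlwp still names the old lwp;
	 *    an interrupt here would still save into the right pcb. */
	memcpy(olwp->l_pcb.regs, live_regs, sizeof olwp->l_pcb.regs);

	/* 2. Only then, with interrupts disabled, redirect curlwp (and,
	 *    in the real code, the kernel stack pointer). */
	intr_enabled = 0;
	curlwp = nlwp;
	intr_enabled = 1;
}
```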

6630c63c | 24-Mar-2015 | skrll <skrll@NetBSD.org>

There is no need to save/restore l_private in softint_switch now that cpu_switchto is fixed

db413e41 | 24-Mar-2015 | matt <matt@NetBSD.org>

Rework register usage in cpu_switchto so curcpu() is preserved across ras_lookup. Only set vfp & tpid registers and do ras lookups if the new lwp is not LW_SYSTEM. (Tested on RPI and atf tests on BPI by skrll.)

5f6b2211 | 22-Mar-2015 | matt <matt@NetBSD.org>

Fix register usage in softint_switch: load/restore l_private across softint_dispatch.

07dd5229 | 22-Mar-2015 | matt <matt@NetBSD.org>

Make sure to save the user thread pointer in softint_switch in case it was set just before we got an interrupt. Otherwise, if the softint blocks, the old value would be restored and the change lost.
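The race described here can be modeled in a few lines. `tpidrurw` stands in for the ARM user thread-pointer register, and the function names only echo, not reproduce, the kernel's: read the live register into l_private before dispatching, so a value written just before the interrupt survives if the softint blocks and switches away:

```c
#include <assert.h>

/* Toy model: userland may write the thread pointer register just
 * before the interrupt, making the previously saved copy in l_private
 * stale. softint_switch must capture the live value first, or a
 * blocked softint would restore the stale copy on switch back. */

static unsigned tpidrurw;	/* stands in for the cp15 register */

struct lwp { unsigned l_private; };

static void
softint_dispatch(void)
{
	/* may block, switch away, and later switch back */
}

static void
softint_switch(struct lwp *pinned)
{
	pinned->l_private = tpidrurw;	/* save before it can be lost */
	softint_dispatch();
	tpidrurw = pinned->l_private;	/* restore on the way out */
}
```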

7d1220ac | 18-Oct-2014 | snj <snj@NetBSD.org>

src is too big these days to tolerate superfluous apostrophes. It's "its", people!

084b5771 | 15-Jun-2014 | ozaki-r <ozaki-r@NetBSD.org>

Fix wrong instruction; mcr => mrc

3f90f698 | 28-Mar-2014 | matt <matt@NetBSD.org>

ARM_MMU_EXTENDED support.