# fa24d106 | 16-Apr-2023 | skrll <skrll@NetBSD.org>
Rename VM_KERNEL_IO_ADDRESS to VM_KERNEL_IO_BASE to match RISC-V
It's fewer letters, matches other similar variables, and will help with sharing code between the two architectures.
NFCI.
# 9f62fa6d | 25-Jun-2022 | jmcneill <jmcneill@NetBSD.org>
Remove GIC_SPLFUNCS.
# 59d61cf6 | 30-Oct-2021 | jmcneill <jmcneill@NetBSD.org>
Add __HAVE_PREEMPTION support to gic_splfuncs asm funcs.
"looks right to me" - thorpej
# e2598d0f | 30-Oct-2021 | jmcneill <jmcneill@NetBSD.org>
Add CI_SPLX_SAVEDIPL and CI_HWPL
# a7efe0a7 | 30-Sep-2021 | skrll <skrll@NetBSD.org>
Ensure TCR_EPD0 is set on entry to pmap_activate and ensure it is set as early as possible for APs.
# 45fdbf40 | 18-Sep-2021 | jmcneill <jmcneill@NetBSD.org>
gic_splx: performance optimizations
Avoid any kind of register access (DAIF, PMR, etc), barriers, and atomic operations in the common case where no interrupt fires between spl being raised and lowered.
This introduces a per-CPU return address (ci_splx_restart) used by the vector handler to restart a sequence in splx that compares the new ipl with the per-CPU hardware priority state stored in ci_hwpl.
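To illustrate the idea, here is a minimal C sketch of a restartable splx() fast path. It is an illustration only: the real fast path is hand-written assembly in the gic_splfuncs code, and apart from the ci_hwpl/ci_splx_restart names taken from the log entry, the types and the gic_write_pmr() helper are hypothetical.

    /* Sketch of a restartable splx() fast path (illustrative, not the real code). */
    struct cpu_info {
        int   ci_cpl;           /* current software ipl */
        int   ci_hwpl;          /* ipl currently programmed into the GIC priority mask */
        void *ci_splx_restart;  /* restart address used by the vector handler */
    };

    static void gic_write_pmr(int ipl) { (void)ipl; /* hypothetical: program ICC_PMR_EL1 */ }

    static void
    splx_fast(struct cpu_info *ci, int newipl)
    {
        ci->ci_cpl = newipl;
        /*
         * Restart point: if an interrupt is taken between the store above
         * and the test below, the vector handler resumes execution here
         * (via ci_splx_restart), so the comparison re-reads ci_hwpl.
         */
        if (newipl < ci->ci_hwpl) {
            /* Slow path only: reprogram the hardware priority mask. */
            ci->ci_hwpl = newipl;
            gic_write_pmr(newipl);
        }
        /* Common case: no system-register access, barriers, or atomics. */
    }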
# 4381aa53 | 11-Dec-2020 | skrll <skrll@NetBSD.org>
s:aarch64/cpufunc.h:arm/cpufunc.h:
a baby step in the grand arm header unification challenge
# 08448190 | 10-Nov-2020 | skrll <skrll@NetBSD.org>
AA64 is not MIPS.
Change all KSEG references to directmap
# 2e80b90f | 15-Sep-2020 | ryo <ryo@NetBSD.org>
fix typo
# e94704b3 | 12-Aug-2020 | skrll <skrll@NetBSD.org>
Part II of ad's aarch64 performance improvements (cpu_switch.S bugs are all mine)
- Use tpidr_el1 to hold curlwp and not curcpu, because curlwp is accessed much more often by MI code. It also makes curlwp preemption safe and allows aarch64_curlwp() to be a const function (curcpu must be volatile).
- Make ASTs operate per-LWP rather than per-CPU, otherwise sometimes LWPs can see spurious ASTs (which doesn't cause a problem, it just means some time may be wasted).
- Use plain stores to set/clear ASTs. Make sure ASTs are always set on the same CPU as the target LWP, and delivered via IPI if posted from a remote CPU so that they are resolved quickly.
- Add some cache line padding to struct cpu_info, to match x86.
- Add a memory barrier in a couple of places where ci_curlwp is set. This is needed whenever an LWP that is resuming on the CPU could hold an adaptive mutex. The barrier needs to drain the CPU's store buffer, so that the update to ci_curlwp becomes globally visible before the LWP can resume and call mutex_exit(). By my reading of the ARM docs it looks like the instruction I used will do the right thing, but I'm not 100% sure.
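As a rough C11 model of the ordering requirement in the last point: the actual code is the store/barrier sequence in cpu_switch.S, and the declaration and fence choice below are illustrative assumptions, not the kernel's definitions.

    #include <stdatomic.h>

    struct lwp;

    struct cpu_info {
        _Atomic(struct lwp *) ci_curlwp;    /* illustrative declaration */
    };

    static inline void
    ci_set_curlwp(struct cpu_info *ci, struct lwp *l)
    {
        atomic_store_explicit(&ci->ci_curlwp, l, memory_order_relaxed);
        /*
         * Full barrier (e.g. DMB ISH on AArch64): drain the store buffer so
         * the new ci_curlwp is globally visible before the resuming LWP can
         * go on to call mutex_exit() on an adaptive mutex.
         */
        atomic_thread_fence(memory_order_seq_cst);
    }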
# 9e5aa928 | 06-Aug-2020 | ryo <ryo@NetBSD.org>
revert the changes of http://mail-index.netbsd.org/source-changes/2020/08/03/msg120183.html
This change is overengineered. bus_space_{peek,poke}_N does not have to be reentrant or available in interrupt context.
requested by skrll@
# 4521cafe | 03-Aug-2020 | ryo <ryo@NetBSD.org>
Implement MD ucas(9) (__HAVE_UCAS_FULL)
# 188f9bab | 03-Aug-2020 | ryo <ryo@NetBSD.org>
Fix a problem in which a fault that occurred in an interrupt handler during copyin/copyout was erroneously treated as having occurred in copyin.
- keep idepth in faultbuf and compare it to avoid unnecessary fault recovery
- make cpu_set_onfault() nestable so that bus_space_{peek,poke}() can be used in hardware interrupt handlers during copyin & copyout
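A hedged sketch of the nesting and idepth check described above; the struct layout and function names are simplified stand-ins, not the actual aarch64 definitions.

    /* Simplified model of an onfault record that remembers the interrupt
     * depth at which it was armed. */
    struct faultbuf {
        struct faultbuf *fb_prev;   /* previous handler, for nesting */
        int              fb_idepth; /* interrupt depth when armed */
        /* ... saved registers for longjmp-style recovery ... */
    };

    struct cpu_info {
        int              ci_idepth;
        struct faultbuf *ci_onfault;
    };

    /* Arm a fault handler; return the previous one so callers can nest. */
    static struct faultbuf *
    set_onfault(struct cpu_info *ci, struct faultbuf *fb)
    {
        struct faultbuf *prev = ci->ci_onfault;

        fb->fb_prev = prev;
        fb->fb_idepth = ci->ci_idepth;
        ci->ci_onfault = fb;
        return prev;
    }

    /* In the fault handler: only honour a record armed at the current
     * interrupt depth, so a fault inside an interrupt handler is not
     * attributed to the interrupted copyin()/copyout(). */
    static int
    onfault_applies(const struct cpu_info *ci)
    {
        return ci->ci_onfault != NULL &&
            ci->ci_onfault->fb_idepth == ci->ci_idepth;
    }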
# f3b33b34 | 28-May-2020 | ryo <ryo@NetBSD.org>
- Make the AP{IB,DA,DB}Key keys also be enabled when ARMV83_PAC.
- If there is no ARMV83_PAC, explicitly disable SCTLR_En{IA,IB,DA,DB}.
# e433820c | 23-May-2020 | ryo <ryo@NetBSD.org>
Not only the kernel thread's keys but also the userland PAC keys (APIA, APIB, APDA, APDB, APGA) are now randomly initialized at exec and switched on context switch. Userland programs are able to perform pointer authentication on ARMv8.3+PAC CPUs.
reviewed by maxv@, thanks.
# 31acfc56 | 15-May-2020 | ryo <ryo@NetBSD.org>
SCTLR_EnIA should be enabled in the caller (locore).
For some reason, gcc makes the aarch64_pac_init() function non-leaf, and it uses paciasp/autiasp.
# 06c802bd | 12-Apr-2020 | maxv <maxv@NetBSD.org>
Add support for Pointer Authentication (PAC).
We use the "pac-ret" option to sign the return instruction pointer on function entry and authenticate it on function exit. This acts as a mitigation against ROP.
The authentication uses a per-lwp (secret) I-A key stored in the 128bit APIAKey register and part of the lwp context. During lwp creation, the kernel generates a random key, and during context switches, it installs the key of the target lwp on the CPU.
Userland cannot read the APIAKey register directly. However, it can sign its pointers with it, because the register is architecturally shared between userland and the kernel. Although part of the CPU design, it is a bit of an undesired behavior, because it allows to forge valid kernel pointers from userland. To avoid that, we don't share the key with userland, and rather switch it in EL0<->EL1 transitions. This means that when userland executes, a different key is loaded in APIAKey than the one the kernel uses. For now the userland key is a fixed 128bit zero value.
The DDB stack unwinder is changed to strip the authentication code from the pointers in lr.
Two problems are known:
* Currently the idlelwps' keys are not really secret. This is because the RNG is not yet available when we spawn these lwps. Not overly important, but would be nice to fix with UEFI RNG.
* The key switching in EL0<->EL1 transitions is not the most optimized code on the planet. Instead of checking aarch64_pac_enabled, it would be better to hot-patch the code at boot time, but there currently is no hot-patch support on aarch64.
Tested on Qemu.
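A conceptual sketch of the per-lwp key handling described above. The struct layout and the cprng_rand64()/write_apiakey() helpers are assumptions made for illustration, not the kernel's actual API; the real entry/exit paths are assembly.

    #include <stdint.h>

    struct apiakey {
        uint64_t lo, hi;                    /* 128-bit APIAKey value */
    };

    struct lwp_md {
        struct apiakey md_ia_kern;          /* per-lwp kernel I-A key */
    };

    static const struct apiakey user_ia_key = { 0, 0 }; /* fixed user key, for now */

    extern uint64_t cprng_rand64(void);                  /* assumed entropy source */
    extern void write_apiakey(const struct apiakey *);   /* assumed: loads APIAKey_EL1 */

    /* lwp creation: generate a fresh secret key for this lwp */
    static void
    pac_init_lwp(struct lwp_md *md)
    {
        md->md_ia_kern.lo = cprng_rand64();
        md->md_ia_kern.hi = cprng_rand64();
    }

    /* context switch: install the incoming lwp's key on the CPU */
    static void
    pac_switch(const struct lwp_md *md)
    {
        write_apiakey(&md->md_ia_kern);
    }

    /*
     * EL1->EL0 return: APIAKey is architecturally shared with userland, so
     * the exception return path swaps in the (all-zero) user key; entry to
     * EL1 reinstalls the lwp's kernel key, keeping userland from forging
     * valid kernel pointers.
     */
    static void
    pac_return_to_user(void)
    {
        write_apiakey(&user_ia_key);
    }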
# 91a840ce | 20-Feb-2020 | skrll <skrll@NetBSD.org>
G/C
# bcd0d17a | 29-Jan-2020 | skrll <skrll@NetBSD.org>
G/C some more
# 6aae9c32 | 29-Jan-2020 | skrll <skrll@NetBSD.org>
G/C
# 42621b3d | 28-Jan-2020 | maxv <maxv@NetBSD.org>
Jazelle and T32EE are not part of ARMv8, fix the bits to their real meanings. No functional change.
# 14f6a1f0 | 08-Jan-2020 | ad <ad@NetBSD.org>
Hopefully fix some problems seen with MP support on non-x86, in particular where curcpu() is defined as curlwp->l_cpu:
- mi_switch(): undo the ~2007ish optimisation to unlock curlwp before calling cpu_switchto(). It's not safe to let other actors mess with the LWP (in particular l->l_cpu) while it's still context switching. This removes l->l_ctxswtch.
- Move the LP_RUNNING flag into l->l_flag and rename to LW_RUNNING since it's now covered by the LWP's lock.
- Ditch lwp_exit_switchaway() and just call mi_switch() instead. Everything is in cache anyway so it wasn't buying much by trying to avoid saving old state. This means cpu_switchto() will never be called with prevlwp == NULL.
- Remove some KERNEL_LOCK handling which hasn't been needed for years.
# 72ed90ed | 28-Dec-2019 | jmcneill <jmcneill@NetBSD.org>
Do not use Early Write Acknowledge for PCIe I/O and config space.
# 98f9f636 | 27-Dec-2019 | jmcneill <jmcneill@NetBSD.org>
Enable early write acknowledge for device memory mappings.
# 9a45a760 | 24-Nov-2019 | skrll <skrll@NetBSD.org>
correct #include order