History log of /openbsd/sys/kern/init_main.c (Results 1 – 25 of 326)
Revision Date Author Comments
# 30d20579 02-Apr-2024 deraadt <deraadt@openbsd.org>

Delete the msyscall mechanism entirely, since mimmutable+pinsyscalls has
replaced it with a more strict mechanism, which happens to be lockless O(1)
rather than micro-lock O(1)+O(log N). Also nop-out the sys_msyscall(2) guts,
but leave the syscall around for a bit longer so that people can build through
it, since ld.so(1) still wants to call it.


# 4d8bb338 14-Feb-2024 miod <miod@openbsd.org>

Enable the pool gc thread on m88k MULTIPROCESSOR kernels now that
pmap_unmap_direct() has been fixed; also tested by aoyama@


# 307c5b23 01-Jan-2024 jsg <jsg@openbsd.org>

copyright++;


# 551b33bf 11-Dec-2023 kettenis <kettenis@openbsd.org>

Implement per-CPU caching for the page table page (vp) pool and the PTE
descriptor (pted) pool in the arm64 pmap implementation. This
significantly reduces the side-effects of lock contention on the kernel
map lock that is (incorrectly) translated into excessive page daemon
wakeups. This is not a perfect solution but it does lead to significant
speedups on machines with many CPU cores.

This requires adding a new pmap_init_percpu() function that gets called
at the point where the kernel is ready to set up the per-CPU pool caches.
Dummy implementations of this function are added for all non-arm64
architectures. Some other architectures can probably benefit from
providing an actual implementation that sets up per-CPU caches for
pmap pools as well.

ok phessler@, claudio@, miod@, patrick@
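As a rough sketch (not the committed diff) of what the hook amounts to: pool_cache_init() is the existing call that enables the per-CPU cache layer for a pool, and the pmap pool identifiers below are assumptions.

/*
 * Illustrative sketch only.  Most architectures get an empty stub; an
 * arm64-style implementation would enable CPU-local caching for its
 * pmap pools.  The pool identifiers are assumptions.
 */
#include <sys/param.h>
#include <sys/pool.h>

extern struct pool pmap_vp_pool;	/* page table page (vp) pool */
extern struct pool pmap_pted_pool;	/* PTE descriptor (pted) pool */

void
pmap_init_percpu(void)
{
#ifdef __aarch64__
	/* give each CPU a local cache of free vp and pted entries */
	pool_cache_init(&pmap_vp_pool);
	pool_cache_init(&pmap_pted_pool);
#else
	/* dummy implementation: nothing to set up */
#endif
}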


# 94c38e45 29-Aug-2023 claudio <claudio@openbsd.org>

Remove p_rtime from struct proc and replace it by passing the timespec
as an argument to the tuagg_locked function.

- Remove incorrect use of p_rtime in other parts of the tree. p_rtime was
almost always 0 so including it in any sum did not alter the result.
- In main() the update of time can be further simplified since at that time
only the primary cpu is running.
- Add missing nanouptime() call in cpu_hatch() for hppa
- Rename tuagg_unlocked to tuagg_locked like it is done in the rest of
the tree.

OK cheloha@ dlg@


# 2298d1a8 15-Jun-2023 cheloha <cheloha@openbsd.org>

all platforms, main(): call clockqueue_init() just before sched_init_cpu()

Move the clockqueue_init() call out of clockintr_cpu_init() and up
just before the sched_init_cpu() call for a given CPU.

This will allow sched_init_cpu() to allocate clockintr handles for a
given CPU's scheduler in a later patch.

Link: https://marc.info/?l=openbsd-tech&m=168661507607622&w=2

ok kettenis@, claudio@
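A hedged sketch of the resulting ordering in main()'s loop over the secondary CPUs; the iterator macros are the usual idiom, and the clock interrupt queue member passed to clockqueue_init() is an assumption.

/*
 * Sketch, not the committed diff: initialize each secondary CPU's clock
 * interrupt queue right before its scheduler state, so that a later
 * change can let sched_init_cpu() allocate clockintr handles.
 * ci->ci_queue is an assumed member name.
 */
struct cpu_info *ci;
CPU_INFO_ITERATOR cii;

CPU_INFO_FOREACH(cii, ci) {
	if (CPU_IS_PRIMARY(ci))
		continue;
	clockqueue_init(&ci->ci_queue);	/* moved out of clockintr_cpu_init() */
	sched_init_cpu(ci);
}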


# 74c41a21 01-Jan-2023 jsg <jsg@openbsd.org>

copyright++;


# 886cb583 10-Nov-2022 jmatthew <jmatthew@openbsd.org>

Add support for per-cpu event counters, to be used for clock and IPI
counters where the event being counted occurs across all CPUs in the
system. Counter instances can be made per-cpu by calling evcount_percpu()
after the counter is attached, and this can occur before or after all system
CPUs are attached. Per-cpu counter instances should be incremented using
evcount_inc().

ok kettenis@ jca@ cheloha@
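A short usage sketch of the interface described above; the counter name and the attach/handler functions are hypothetical.

#include <sys/param.h>
#include <sys/evcount.h>

/* hypothetical IPI counter: one instance, counted per CPU */
static struct evcount example_ipi_count;

void
example_ipi_init(void)
{
	/* attach as usual, then switch the counter to per-CPU instances;
	 * per the description this may happen before or after all CPUs
	 * have attached */
	evcount_attach(&example_ipi_count, "ipi", NULL);
	evcount_percpu(&example_ipi_count);
}

int
example_ipi_handler(void *arg)
{
	/* per-CPU counter instances must be bumped with evcount_inc() */
	evcount_inc(&example_ipi_count);
	return 1;
}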


# 1b4a394f 30-Oct-2022 guenther <guenther@openbsd.org>

Simplify setregs() by passing it the ps_strings and switching
sys_execve() to return EJUSTRETURN.

setregs() is the MD routine used by sys_execve() to set up the
thread's trapframe and PCB such that, on 'return' to userspace, it
has the register values defined by the ABI and otherwise zero. It
had to set the syscall retval[] values previously because the normal
syscall return path overwrites a couple registers with the retval[]
values. By instead returning EJUSTRETURN that and some complexity
with program-counter handling on m88k and sparc64 goes away.

Also, give setregs() a 'struct ps_strings *arginfo' argument
so powerpc, powerpc64, and sh can directly get argc/argv/envp
values for registers instead of copyin()ing the one in userspace.

Improvements from miod@ and millert@
Testing assistance miod@, kettenis@, and aoyama@
ok miod@ kettenis@


# 0d280c5f 14-Aug-2022 jsg <jsg@openbsd.org>

remove unneeded includes in sys/kern
ok mpi@ miod@


# 608889eb 23-Jul-2022 cheloha <cheloha@openbsd.org>

kernel: remove global "randompid" toggle

Apparently, we used to create several kthreads before the kernel
random number generator was up and running. A toggle, "randompid",
was needed to tell allocpid() whether it made sense to attempt to
allocate random PIDs.

However, these days we get e.g. arc4random(9) into a working state
before any kthreads are spawned, so the toggle is no longer needed.

Thread: https://marc.info/?l=openbsd-tech&m=165541052614453&w=2

Very nice historical context provided by miod@.

probably ok miod@ deraadt@


# 7eb8d89d 22-Feb-2022 guenther <guenther@openbsd.org>

Delete unnecessary #includes of <sys/domain.h> and/or <sys/protosw.h>

net/if_pppx.c pointed out by jsg@
ok gnezdo@ deraadt@ jsg@ mpi@ millert@


# 2360386f 01-Jan-2022 jsg <jsg@openbsd.org>

copyright++;


# f231ff59 09-Dec-2021 guenther <guenther@openbsd.org>

We only have one syscall table: inline sysent/SYS_MAXSYSCALL and
SYS_syscall as the nosys() function into the MD syscall entry
routines and the SYSCALL_DEBUG support. Adjust alpha's syscall
check to match the other archs. Also, make sysent const to get it
into .rodata.

With that, 'struct emul' is unused: delete it and all its references

ok millert@


# 4ed6f7c2 07-Dec-2021 guenther <guenther@openbsd.org>

Delete the last emulation callbacks: we're Just ELF, so declare
exec_elf_fixup() and coredump_elf() in <sys/exec_elf.h> and call
them and the MD setregs() directly in kern_exec.c and kern_sig.c

Also delete e_name[] (only used by sysctl), e_errno (unused), and
e_syscallnames[] (only used by SYSCALL_DEBUG) and constipate
syscallnames to 'const char *const[]'

ok kettenis@


# 5a72e03e 07-Dec-2021 guenther <guenther@openbsd.org>

Continue to delete emulation support: we only have one sigcode and
sigobject. Just use the existing globals for the former and use a
global for the latter.

ok jsg@ kettenis@


# b702d795 07-Dec-2021 guenther <guenther@openbsd.org>

Continue to delete emulation support: since we're Just ELF, the size
of the auxinfo is fixed: provide ELF_AUX_WORDS in <sys/exec_elf.h>
as a replacement for emul->e_arglen

ok millert@


# 682e3c94 06-Dec-2021 guenther <guenther@openbsd.org>

Start to delete emulation support: since we're Just ELF, make
copyargs() return 0/1 and merge elf_copyargs() into it. Rename
ep_emul_arg and ep_emul_argp to have clearer meaning and type and
eliminate ep_emul_argsize as no longer necessary. Make sure
ep_auxinfo (nee ep_emul_argp) is initialized as powerpc64 always
uses it in setregs().

ok semarie@ deraadt@ kettenis@


# 7a02b0b9 30-Jun-2021 bluhm <bluhm@openbsd.org>

Remove unused variable cryptodesc_pool. Document global variables
in crypto.c and annotate locking protection. Assert kernel lock
where needed. Remove dead code from crypto_get_driverid(). Move
crypto_init() prototype into header file.
OK mpi@


# 72adf922 02-Jun-2021 visa <visa@openbsd.org>

Enable pool cache on knote pool

Use the pool cache to reduce the overhead of memory management in
function kqueue_register().

When EV_ADD is given, kqueue_register() pre-allocates a knote to avoid
potential sleeping in the middle of the critical section that spans
from knote lookup to insertion. However, the pre-allocation is useless
if the lookup finds a matching knote.

The cost of knote allocation will become significant with kqueue-based
poll(2) and select(2) because the frequency of allocation will increase.
Most of the cost appears to come from the locking inside the pool.
The pool cache amortizes it by using CPU-local caches of free knotes
as buffers.

OK dlg@ mpi@
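Enabling the cache layer for a pool is one extra call after pool_init(); a minimal sketch, with the setup function name invented and the pool arguments assumed to mirror the knote pool.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/event.h>
#include <sys/pool.h>

struct pool knote_pool;

/* hypothetical setup routine showing how the cache is enabled */
void
example_knote_pool_init(void)
{
	pool_init(&knote_pool, sizeof(struct knote), 0, IPL_MPFLOOR,
	    PR_WAITOK, "knotepl", NULL);
	/* layer CPU-local lists of free knotes on top of the pool so that
	 * most allocations and frees skip the shared pool mutex */
	pool_cache_init(&knote_pool);
}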


# 193f316c 08-Feb-2021 mpi <mpi@openbsd.org>

Revert the conversion of the per-process thread list into a SMR_TAILQ.

We did not reach a consensus about using SMR to unlock single_thread_set()
so there's no point in keeping this change.


# 868335c7 11-Jan-2021 mpi <mpi@openbsd.org>

New rw_obj_init() API providing reference-counted rwlock.

Original port from NetBSD by guenther@, required for upcoming amap & anon
locking.

ok kettenis@


# b8f16a83 01-Jan-2021 jsg <jsg@openbsd.org>

copyright++;


# 627a59d1 28-Dec-2020 mpi <mpi@openbsd.org>

Use per-CPU counters for fault and stats counters reached in uvm_fault().

ok kettenis@, dlg@


# b21c774f 07-Dec-2020 mpi <mpi@openbsd.org>

Convert the per-process thread list into a SMR_TAILQ.

Currently all iterations are done under KERNEL_LOCK() and therefore use
the *_LOCKED() variant.

From and ok claudio@
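For illustration, the _LOCKED() iteration pattern looks roughly like the sketch below; the structures are stand-ins, not the real struct process/struct proc layout.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/smr.h>

/* stand-in structures for a container and its list members */
struct thread_stub {
	SMR_TAILQ_ENTRY(thread_stub) t_entry;
};

struct group_stub {
	SMR_TAILQ_HEAD(, thread_stub) g_threads;
};

void
example_walk(struct group_stub *g)
{
	struct thread_stub *t;

	/* every current iteration runs under the kernel lock, hence the
	 * _LOCKED variant rather than an SMR read-side section */
	KERNEL_ASSERT_LOCKED();
	SMR_TAILQ_FOREACH_LOCKED(t, &g->g_threads, t_entry) {
		/* per-thread work would go here */
	}
}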

