History log of /netbsd/sys/kern/sys_sched.c (Results 1 – 25 of 50)
Revision Date Author Comments
# 416a8a0e 09-Apr-2023 riastradh <riastradh@NetBSD.org>

kern: KASSERT(A && B) -> KASSERT(A); KASSERT(B)
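
A minimal sketch of the pattern this sweep applies (the variables l and p and
the conditions are stand-ins, not taken from the actual change); splitting the
assertion means the panic message names exactly which condition failed:

        /* Before: on failure the panic reports only the combined expression. */
        KASSERT(l != NULL && mutex_owned(p->p_lock));

        /* After: each condition is asserted, and reported, on its own. */
        KASSERT(l != NULL);
        KASSERT(mutex_owned(p->p_lock));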


# 14b4bbb2 23-May-2020 ad <ad@NetBSD.org>

Move proc_lock into the data segment. It was dynamically allocated because
at the time we had mutex_obj_alloc() but not __cacheline_aligned.


# 34084a87 29-Apr-2020 thorpej <thorpej@NetBSD.org>

Sanitize the pid and lid arguments passed to do_sched_getparam()
and sys__sched_getaffinity() now that -1 as the pid argument to
lwp_find2() means "wildcard proc".


# e101e4cf 27-Jan-2020 ad <ad@NetBSD.org>

Remove a comment that is out of date and, I think, hinting at something other
than what it says (the preemption case for SCHED_FIFO).


# 2f01a2bb 30-Jul-2016 christos <christos@NetBSD.org>

Fix reversed test.


# 575a7585 07-Jul-2016 msaitoh <msaitoh@NetBSD.org>

KNF. Remove extra spaces. No functional change.


# 774674fd 03-Jul-2016 christos <christos@NetBSD.org>

GSoC 2016 Charles Cui: Implement thread priority protection based on work
by Andy Doran. Also document the get/set pshared thread calls as not
implemented, and add a skeleton implementation that is disabled.
XXX: document _sched_protect(2).


# 05fd0bf3 25-Feb-2014 pooka <pooka@NetBSD.org>

Ensure that the top level sysctl nodes (kern, vfs, net, ...) exist before
the sysctl link sets are processed, and remove redundancy.

Shaves >13kB off of an amd64 GENERIC, not to mention >1k duplicate
lines of code.


# d82c72ce 20-Apr-2012 rmind <rmind@NetBSD.org>

- Convert x86 MD code, mainly pmap(9) e.g. TLB shootdown code, to use
kcpuset(9) and thus replace hardcoded CPU bitmasks. This removes the
hardcoded limit on the maximum number of CPUs.

- Support up to 256 CPUs on the amd64 architecture by default.

Bug fixes, improvements, completion of the Xen part and testing on a 64-core
AMD Opteron(tm) Processor 6282 SE (also, as a Xen HVM domU with 128 CPUs)
by Manuel Bouyer.


# 3176eca8 13-Apr-2012 yamt <yamt@NetBSD.org>

- do_sched_getparam: release locks earlier.
- add comments


# 154d3024 19-Feb-2012 rmind <rmind@NetBSD.org>

Remove COMPAT_SA / KERN_SA. Welcome to 6.99.3!
Approved by core@.


# 2c8eede9 29-Jan-2012 rmind <rmind@NetBSD.org>

- Add mi_cpu_init() and initialise cpu_lock and kcpuset_attached/running there.
- Add kcpuset_running which gets set in idle_loop().
- Use kcpuset_running in pserialize_perform().


# f449fc41 07-Aug-2011 rmind <rmind@NetBSD.org>

- Add an argument to kcpuset_create() for zeroing.
- Add kcpuset_atomic_set(), kcpuset_atomic_clear() and kcpuset_merge().
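
A brief usage sketch of these additions, assuming the kcpuset(9) signatures of
this period (kcpuset_create() taking a kcpuset_t ** plus a bool requesting a
zeroed set, the atomic variants taking a CPU index, and kcpuset_merge() OR-ing
its second argument into its first); 'ci' and the cleanup call are assumptions
for illustration only:

        kcpuset_t *set, *extra;

        kcpuset_create(&set, true);       /* new argument: request a zeroed set */
        kcpuset_create(&extra, true);

        kcpuset_atomic_set(set, cpu_index(ci));    /* atomically mark one CPU */
        kcpuset_merge(set, extra);                 /* fold 'extra' bits into 'set' */
        kcpuset_atomic_clear(set, cpu_index(ci));

        kcpuset_destroy(extra);                    /* assumed cleanup interface */
        kcpuset_destroy(set);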


# 1a2935c6 07-Aug-2011 rmind <rmind@NetBSD.org>

Remove the LW_AFFINITY flag and fix some bugs in affinity mask handling.


# 01280861 07-Aug-2011 rmind <rmind@NetBSD.org>

Add kcpuset(9) - a reworked dynamic CPU set implementation for the kernel.
Suitable for use during early boot. MD and other implementations
should be replaced with this interface.

Discussed on: tech-kern@


# 3c507045 01-Jul-2010 rmind <rmind@NetBSD.org>

Remove pfind() and pgfind(), fix locking in various broken uses of these.
Rename real routines to proc_find() and pgrp_find(), remove PFIND_* flags
and have consistent behaviour. Provide proc_find_raw() for special cases.
Fix memory leak in sysctl_proc_corename().

COMPAT_LINUX: rework ptrace() locking, minimise differences between
different versions per-arch.

Note: while this change adds some formal cosmetics for COMPAT_DARWIN and
COMPAT_IRIX - locking there is utterly broken (for ages).

Fixes PR/43176.


# b2f37683 03-Oct-2009 elad <elad@NetBSD.org>

- Move sched_listener and co. from kern_synch.c to sys_sched.c, where it
really belongs (suggested by rmind@),

- Rename sched_init() to synch_init(), and introduce a new sched_init()
in sys_sched.c where we (a) initialize the sysctl node (no more
link-set) and (b) listen on the process scope with sched_listener.

Reviewed by and okay rmind@.


# 4f1720c3 03-Mar-2009 rmind <rmind@NetBSD.org>

lwp_create: fix the locking bugs on the affinity inheritance path (mea culpa).
pset_assign: traverse the list of LWPs safely.
sched_setaffinity: free cpuset (unused path) outside the lock.

Reviewed (with feedback) by <ad>.


# 909e7f42 20-Jan-2009 rmind <rmind@NetBSD.org>

- Make thread-affinity and processor-set interfaces mutually exclusive.
- pset_assign: when a CPU is assigned, migrate out all LWPs from it.


# 8f1873ea 31-Oct-2008 rmind <rmind@NetBSD.org>

- Avoid the race with CPU online/offline state changes, when setting the
affinity (cpu_lock protects these operations now).
- Disallow setting the state of a CPU to offline if there are bound LWPs
which have no CPU to migrate to.
- Disallow setting of affinity for the LWP(s), if all CPUs in the dynamic
CPU-set are offline.
- sched_setaffinity: fix invalid check of kcpuset_isset().
- Rename cpu_setonline() to cpu_setstate().

Should fix PR/39349.


# a8552a3a 18-Oct-2008 rmind <rmind@NetBSD.org>

Obviously the intention was to check for SCHED_OTHER, not SCHED_FIFO.


# d5ea013e 18-Oct-2008 rmind <rmind@NetBSD.org>

Disallow user priority adjustments for SCHED_OTHER policy, simplify
convert_pri(). Sync schedctl(8) with the change. Closes PR/38009.


# fc7511b0 15-Oct-2008 wrstuden <wrstuden@NetBSD.org>

Merge wrstuden-revivesa into HEAD.


# 4f91cff0 14-Jul-2008 rmind <rmind@NetBSD.org>

- Disallow setting of affinity for zombie LWPs.
- Fix a possible NULL dereference when an LWP is exiting.
- Fix the inheritance of affinity.


# 1d875fc7 22-Jun-2008 christos <christos@NetBSD.org>

Adjust to separate kcpuset_t and cpuset_t.

