#
7f19f937 |
| 10-Apr-2022 |
riastradh <riastradh@NetBSD.org> |
pthread: Nix trailing whitespace.
|
#
f2e0628f |
| 12-Feb-2022 |
riastradh <riastradh@NetBSD.org> |
libpthread: Move namespacing include to top of .c files.
Stuff like libc's namespace.h, or atomic_op_namespace.h, which does namespacing tricks like `#define atomic_cas_uint _atomic_cas_uint', has to go at the top of each .c file. If it goes in the middle, it might be too late to affect the declarations, and result in compile errors.
I tripped over this by including <sys/atomic.h> in mips <machine/lock.h>.
(Maybe we should create a new pthread_namespace.h file for the purpose, but this'll do for now.)
|
#
07889332 |
| 11-Jun-2020 |
ad <ad@NetBSD.org> |
Drop self->pt_lock before clearing TSD / malloc TSD.
|
#
0a10e321 |
| 19-Apr-2020 |
joerg <joerg@NetBSD.org> |
Improve TSD behavior
Optimistically check whether the key has been used by this thread already and avoid locking in that case. This avoids the atomic operation in the hot path. When the value is set to non-NULL for the first time, put the entry on the to-be-freed list and keep it there until destruction or thread exit. Setting the key to NULL and back is common enough, and updating the list is more expensive than the extra check on the final round.
|
#
ec36b8f4 |
| 19-Apr-2020 |
joerg <joerg@NetBSD.org> |
Reinit TSD mutex in the child to avoid issues with former waiters
|
#
f8ff9381 |
| 16-Feb-2020 |
kamil <kamil@NetBSD.org> |
Revert "Enhance the pthread(3) + malloc(3) init model"
It is reported to hang on aarch64 with gzip.
|
#
af31a025 |
| 15-Feb-2020 |
kamil <kamil@NetBSD.org> |
Enhance the pthread(3) + malloc(3) init model
Separate the pthread_atfork(3) call from pthread_tsd_init() and move it into a distinct function.
Inside pthread__init(), call the late TSD initialization routine just after "pthread_atfork(NULL, NULL, pthread__fork_callback);".
Document that malloc(3) initialization is now controlled again and called during the first pthread_atfork(3) call.
Remove #if 0 code from pthread_mutex.c as we no longer initialize malloc prematurely.
|
#
561716c2 |
| 25-Dec-2019 |
joerg <joerg@NetBSD.org> |
Since pthread_setspecific requires locks, ensure that they are acquired before fork and dropped in both parent and child. At least Python depends on TSD after fork, even though it is undefined behavior in POSIX.
|
#
03bbc080 |
| 05-Mar-2019 |
christos <christos@NetBSD.org> |
Transfer all the keys that were created in the libc stub implementation to the pthread tsd implementation when the main thread is created. This corrects a problem where a process created keys before libpthread was loaded (either from the libc constructor or because libpthread was dlopened later). This fixes a problem with jemalloc which creates keys in the constructor.
|
#
4125aa29 |
| 09-Jul-2017 |
christos <christos@NetBSD.org> |
PR/52386: Use the number of iterations we document.
|
#
0b89fc75 |
| 25-Aug-2015 |
pooka <pooka@NetBSD.org> |
Revert 1.14 now that the arduous task of fixing rumphijack to allow mmap() in early init has been completed.
|
#
86b46120 |
| 30-May-2015 |
christos <christos@NetBSD.org> |
Thanks rump for not letting us use even mmap during initialization.
|
#
68741714 |
| 29-May-2015 |
christos <christos@NetBSD.org> |
Fix previous: Can't use calloc/malloc before we complete initialization of the thread library, because malloc uses pthread_foo_specific, and it will end up initializing itself incorrectly.
|
#
75cc59e5 |
| 29-May-2015 |
manu <manu@NetBSD.org> |
Make PTHREAD_KEYS_MAX dynamically adjustable
NetBSD's PTHREAD_KEYS_MAX is set to 256, which is low compared to other systems like Linux (1024) or MacOS X (512). As a result some setups tested on Linux will exhibit problems on NetBSD because of pthread_keys usage beyond the limit. This happens for instance on Apache with various modules loaded, and in this case no particular developer can be blamed for going beyond the limit, since several modules from different sources contribute to the problem.
This patch makes the limit configurable through the PTHREAD_KEYS_MAX environment variable. If undefined, the default remains unchanged (256). In any case, the value cannot be lowered below the POSIX-mandated _POSIX_THREAD_KEYS_MAX (128).
While there: - use EXIT_FAILURE instead of 1 when calling err(3) in libpthread. - Reset _POSIX_THREAD_KEYS_MAX to POSIX mandated 128, instead of 256.
|
#
173a7764 |
| 21-Mar-2013 |
christos <christos@NetBSD.org> |
- Allow libpthread to be dlopened again, by providing libc stubs to libpthread. - Fail if the dlopened libpthread does pthread_create(). From manu@ - Discussed at length in the mailing lists; approved by core@ - This was chosen as the least intrusive patch that will provide the necessary functionality. XXX: pullup to 6
|
#
183b5db6 |
| 22-Nov-2012 |
christos <christos@NetBSD.org> |
Don't call the destructor in pthread_key_delete(), following the standard.
|
#
c15b1d22 |
| 21-Nov-2012 |
christos <christos@NetBSD.org> |
Replace the simple implementation of pthread_key_{create,destroy} and pthread_{g,s}etspecific functions, to one that invalidates values of keys in other threads when pthread_key_delete() is called. This fixes chromium, which expects pthread_key_delete() to do cleanup in all threads.
|
#
dfd5e8f6 |
| 02-Mar-2012 |
joerg <joerg@NetBSD.org> |
Fix indentation.
|
#
ce099b40 |
| 28-Apr-2008 |
martin <martin@NetBSD.org> |
Remove clause 3 and 4 from TNF licenses
|
#
8b2c109b |
| 08-Mar-2008 |
ad <ad@NetBSD.org> |
Add a cast to make lint happy.
|
#
55faac12 |
| 07-Mar-2008 |
ad <ad@NetBSD.org> |
pthread_key_create: instead of using a simple 1/0 value to record a key as allocated, use an array of pointers and save __builtin_return_address(0) so keys can be identified when doing post-mortem debugging.
|
#
989565f8 |
| 24-Dec-2007 |
ad <ad@NetBSD.org> |
- Fix pthread_rwlock_trywrlock() which was broken.
- Add new functions: pthread_mutex_held_np, mutex_owner_np, rwlock_held_np, rwlock_wrheld_np, rwlock_rdheld_np. These match the kernel's locking primitives and can be used when porting kernel code to userspace.
- Always create LWPs detached. Do join/exit sync mostly in userland. When looped on a dual core box this seems ~30% quicker than using lwp_wait(). Reduce number of lock acquire/release ops during thread exit.
|
#
b8833ff5 |
| 16-Aug-2007 |
ad <ad@NetBSD.org> |
- Reinitialize the absolute minimum when recycling user thread state. Chops another ~10% off create/join in a loop on i386. - Disable low level debugging as this is stable. Improves benchmarks across the board by a small percentage. Uncontested mutex acquire and release in a loop becomes about 8% quicker. - Minor cleanup.
|
#
37ac1db4 |
| 29-Sep-2003 |
wiz <wiz@NetBSD.org> |
available, not avaliable. From miod@openbsd.
|
#
bd9a18b7 |
| 13-Aug-2003 |
nathanw <nathanw@NetBSD.org> |
Split out pthread_{set,get}specific() into a separate file and arrange for that file to not be built with profiling. This makes it reasonable to use pthread_{set,get}specific() to implement thread-safe profiling call counts.
|