Revision tags: v6.2.1, v6.2.0, v6.3.0, v6.0.1 |
|
#
737b020b |
| 29-May-2021 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Misc adjustments to code documentation
* Misc adjustments to bring some of the pmap related code comments up-to-date.
Submitted-by: falsifian (James Cook)
|
#
d11da0c9 |
| 19-May-2021 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Clean-up comments on kmalloc_obj
* Rewrite some of the confusing comments (that were no longer applicable).
* Remove the mgt back-pointer from the slab structure. It is something I originally used to actively deal with slabs in kfree_obj(), but eventually decided to nix in favor of a top-down passive poll instead.
Reported-by: James Cook
|
Revision tags: v6.0.0, v6.0.0rc1, v6.1.0 |
|
#
3ced5137 |
| 17-Apr-2021 |
Sascha Wildner <saw@online.de> |
<sys/_malloc.h>: Use basic integer types like in the rest of the header.
|
#
e21a70fe |
| 22-Mar-2021 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Add some sysctls to kmalloc_obj
* kern.kzone_pollfreq (default 1hz) - Set polling frequency for kmalloc zone cleanups.
* kern.kzone_bretire (default 4) - Set number of zones the kmalloc poller can retire per interval. Too high a number can create noticeable system stalls due to kernel_map operations.
* Add a few more fields to the kmalloc_mgt structure. Track retirement count to help with debugging.
|
#
56c9ecc8 |
| 22-Mar-2021 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Implement type-stable EXIS semantics for kfree_obj() - step 2
* Put an exislock_t in the kmalloc_slab structure and issue exis_terminate() when the slab enters the full list.
* Do not return a full slab to the gcache until it becomes exis_freeable().
This implements type-stable operation for objects allocated via the kmalloc_obj() mechanism. Kernel code operating inside an exis_hold() / exis_drop() sequence is guaranteed type stability.
* Note that destroying a kmalloc_obj zone shreds any related slabs regardless of their EXIS state. However, this is not a problem for the zones we care about because they are global zones that will never be destroyed.
|
#
e9dbfea1 |
| 21-Mar-2021 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Add kmalloc_obj subsystem step 1
* Implement per-zone memory management for kmalloc() in the form of kmalloc_obj() and friends. Currently the subsystem uses the same malloc_type structure but is otherwise distinct from the normal kmalloc(), so to avoid programming mistakes the *_obj() subsystem post-pends '_obj' to malloc_type pointers passed into it.
This mechanism will eventually replace objcache. It is designed to greatly reduce fragmentation issues on systems with long uptimes.
Eventually the feature will be better integrated and I will be able to remove the _obj stuff.
* This is an object allocator, so the zone must be dedicated to one type of object with a fixed size. All allocations out of the zone are of that object.
The allocator is not quite type-stable yet, but will be once existential locks are integrated into the freeing mechanism.
* Implement a mini-slab allocator for management. Since the zones are single-object, similar to objcache, the fixed-size mini-slabs are a lot easier to optimize and much simpler in construction than the main kernel slab allocator.
Uses a per-zone/per-cpu active/alternate slab with an ultra-optimized allocation path, and a per-zone partial/full/empty list.
Also has a globaldata-based per-cpu cache of free slabs. The mini-slab allocator frees slabs back to the same cpu they were originally allocated from in order to retain memory locality over time.
* Implement a passive cleanup poller. This currently polls kmalloc zones very slowly, looking for excess full slabs to release back to the global slab cache or the system (if the global slab cache is full).
This code will ultimately also handle existential type-stable freeing.
* Fragmentation is greatly reduced due to the distinct zones. Slabs are dedicated to the zone and do not share allocation space with other zones. Also, when a zone is destroyed, all of its memory is cleanly disposed of and there will be no left-over fragmentation.
* Initially use the new interface for the following zones, which tend to or can become quite big:
vnodes
namecache (but not related strings)
hammer2 chains
hammer2 inodes
tmpfs nodes
tmpfs dirents (but not related strings)
|
Revision tags: v5.8.3, v5.8.2, v5.8.1, v5.8.0, v5.9.0, v5.8.0rc1, v5.6.3 |
|
#
3ab3ae18 |
| 17-Dec-2019 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Refactor malloc_type to reduce static data in image
* malloc_type was embedding a SMP_MAXCPU array of kmalloc_use structures, which winds up being 16KB a pop x 400+ MALLOC_DEFINE() declarations.
This was over 6MB of static data in the kernel binary, and it wasn't BSS because the declaration is initialized with some defaults. So this reduction is significant and directly impacts both memory use and kernel boot times.
* Change malloc_type->ks_use from an array to a pointer. Embed a single kmalloc_use structure (ks_use0) as the default.
When ncpus is probed, the kernel now goes through all malloc_type structures and dynamically allocates a properly-sized ks_use array. Any new malloc types registered after that point will also dynamically allocate ks_use.
|
#
5449304d |
| 18-Oct-2019 |
zrj <rimvydas.jasinskas@gmail.com> |
<sys/malloc.h>: Separate basic typedefs into a _malloc.h header.
This will be used to reduce <sys/globaldata.h> pollution through <sys/slaballoc.h> and will make it possible to avoid including <sys/malloc.h> in almost every kernel source file even when no memory allocations are done.
While there, move the MALLOC_DECLARE() macro too; it helps with malloc type visibility from headers that define it and will finally allow sorting most of the header includes alphabetically without side effects.
|