Revision tags: v6.2.1, v6.2.0, v6.3.0, v6.0.1, v6.0.0, v6.0.0rc1, v6.1.0, v5.8.3, v5.8.2, v5.8.1, v5.8.0, v5.9.0, v5.8.0rc1, v5.6.3, v5.6.2, v5.6.1, v5.6.0, v5.6.0rc1, v5.7.0, v5.4.3, v5.4.2, v5.4.1, v5.4.0, v5.5.0, v5.4.0rc1, v5.2.2, v5.2.1, v5.2.0, v5.3.0, v5.2.0rc, v5.0.2, v5.0.1, v5.0.0, v5.0.0rc2, v5.1.0, v5.0.0rc1, v4.8.1, v4.8.0, v4.6.2, v4.9.0, v4.8.0rc, v4.6.1, v4.6.0, v4.6.0rc2, v4.6.0rc, v4.7.0, v4.4.3, v4.4.2

2c64e990 | 25-Jan-2016 | zrj <rimvydas.jasinskas@gmail.com>
Remove advertising header from sys/
Correct BSD License clause numbering from 1-2-4 to 1-2-3.
Some less clear cases were handled the same way as in FreeBSD.
Revision tags: v4.4.1, v4.4.0, v4.5.0, v4.4.0rc, v4.2.4, v4.3.1, v4.2.3, v4.2.1, v4.2.0, v4.0.6, v4.3.0, v4.2.0rc, v4.0.5, v4.0.4, v4.0.3, v4.0.2, v4.0.1, v4.0.0, v4.0.0rc3, v4.0.0rc2, v4.0.0rc, v4.1.0, v3.8.2

5b5bf28b | 05-Jul-2014 | Matthew Dillon <dillon@apollo.backplane.com>
build - Add missing symbol for vkernel64 build
* Add symbols required to support access to the GD_OTHER_CPUS field, which vkernel64's assembly now needs.
cc694a4a | 30-Jun-2014 | Matthew Dillon <dillon@apollo.backplane.com>
kernel - Move CPUMASK_LOCK out of the cpumask_t
* Add cpulock_t (a 32-bit integer on all platforms) and implement CPULOCK_EXCL as well as space for a counter.
* Break out CPUMASK_LOCK: add a new field to the pmap (pm_active_lock) and to the process vmm (p_vmm_cpulock), and implement the mmu interlock there.
The VMM subsystem uses additional bits in cpulock_t as a mask counter for implementing its interlock.
The PMAP subsystem just uses the CPULOCK_EXCL bit in pm_active_lock for its own interlock.
* Max cpus on 64-bit systems is now 64 instead of 63.
* cpumask_t is now just a pure cpu mask and no longer requires all-or-none atomic ops, just normal bit-for-bit atomic ops. This will allow us to hopefully extend it past the 64-cpu limit soon.
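The cpulock_t idea above can be sketched in a few lines of C: a plain 32-bit word carrying an exclusive bit plus low bits usable as a counter, manipulated with ordinary word-sized atomics. This is an illustrative sketch only; the macro names mirror the commit but the bit positions and helper functions are invented here, not DragonFly's actual implementation.

```c
#include <stdint.h>

typedef uint32_t cpulock_t;

#define CPULOCK_EXCL    0x80000000u   /* exclusive-access interlock bit */
#define CPULOCK_CNTMASK 0x7fffffffu   /* remaining bits usable as a counter */

/* Try to take the exclusive interlock (PMAP-style use); 1 on success. */
static int
cpulock_try_excl(cpulock_t *lk)
{
    uint32_t v = __atomic_load_n(lk, __ATOMIC_RELAXED);

    if (v & CPULOCK_EXCL)
        return 0;
    return __atomic_compare_exchange_n(lk, &v, v | CPULOCK_EXCL, 0,
                                       __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}

static void
cpulock_rel_excl(cpulock_t *lk)
{
    __atomic_and_fetch(lk, ~CPULOCK_EXCL, __ATOMIC_RELEASE);
}

/* VMM-style use: bump the mask counter held in the low bits. */
static uint32_t
cpulock_count_incr(cpulock_t *lk)
{
    return __atomic_add_fetch(lk, 1, __ATOMIC_ACQ_REL) & CPULOCK_CNTMASK;
}
```

Because the lock bit no longer lives inside cpumask_t, the mask itself needs only plain per-bit atomics, which is what allows the 64th cpu bit to be reclaimed.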
Revision tags: v3.8.1, v3.6.3, v3.8.0, v3.8.0rc2, v3.9.0, v3.8.0rc, v3.6.2, v3.6.1, v3.6.0, v3.7.1, v3.6.0rc, v3.7.0

a86ce0cd | 20-Sep-2013 | Matthew Dillon <dillon@apollo.backplane.com>
hammer2 - Merge Mihai Carabas's VKERNEL/VMM GSOC project into the main tree
* This merge contains work primarily by Mihai Carabas, with some misc fixes also by Matthew Dillon.
* Special note on GSOC core
This is, needless to say, a huge amount of work compressed down into a few paragraphs of comments. Adds the pc64/vmm subdirectory and tons of stuff to support hardware virtualization in guest-user mode, plus the ability for programs (vkernels) running in this mode to make normal system calls to the host.
* Add system call infrastructure for VMM mode operations in kern/sys_vmm.c which vectors through a structure to machine-specific implementations.
vmm_guest_ctl_args() vmm_guest_sync_addr_args()
vmm_guest_ctl_args() - bootstrap VMM and EPT modes. Copydown the original user stack for EPT (since EPT 'physical' addresses cannot reach that far into the backing store represented by the process's original VM space). Also installs the GUEST_CR3 for the guest using parameters supplied by the guest.
vmm_guest_sync_addr_args() - A host helper function that the vkernel can use to invalidate page tables on multiple real cpus. This is a lot more efficient than having the vkernel try to do it itself with IPI signals via cpusync*().
* Add Intel VMX support to the host infrastructure. Again, tons of work compressed down into a one paragraph commit message. Intel VMX support added. AMD SVM support is not part of this GSOC and not yet supported by DragonFly.
* Remove PG_* defines for PTE's and related mmu operations. Replace with a table lookup so the same pmap code can be used for normal page tables and also EPT tables.
* Also include X86_PG_V defines specific to normal page tables for a few situations outside the pmap code.
* Adjust DDB to disassemble SVM-related (Intel) instructions.
* Add infrastructure to exit1() to deal with related structures.
* Optimize pfind() and pfindn() to remove the global token when looking up the current process's PID (Matt)
* Add support for EPT (double layer page tables). This primarily required adjusting the pmap code to use a table lookup to get the PG_* bits.
Add an indirect vector for copyin, copyout, and other user address space copy operations to support manual walks when EPT is in use.
A multitude of system calls which manually looked up user addresses via the vm_map now need a VMM layer call to translate EPT.
* Remove the MP lock from trapsignal() use cases in trap().
* (Matt) Add pthread_yield()s in most spin loops to help situations where the vkernel is running on more cpu's than the host has, and to help with scheduler edge cases on the host.
* (Matt) Add a pmap_fault_page_quick() infrastructure that vm_fault_page() uses to try to shortcut operations and avoid locks. Implement it for pc64. This function checks whether the page is already faulted in as requested by looking up the PTE. If not it returns NULL and the full blown vm_fault_page() code continues running.
* (Matt) Remove the MP lock from most of the vkernel's trap() code.
* (Matt) Use a shared spinlock when possible for certain critical paths related to the copyin/copyout path.
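The PG_* table-lookup change described above is the piece that lets one pmap implementation serve two different hardware formats. A minimal sketch of the idea, with invented indices and bit values (normal x86 PTEs and EPT entries genuinely differ in layout, but the exact tables here are illustrative, not DragonFly's):

```c
#include <stdint.h>

/* Abstract attribute indices used by shared pmap code. */
enum { PT_IDX_V, PT_IDX_RW, PT_IDX_COUNT };

typedef struct pmap {
    const uint64_t *pmap_bits;   /* abstract index -> hardware bit */
} pmap_t;

/* Normal x86-64 page-table bit layout (illustrative values). */
static const uint64_t x86_pg_bits[PT_IDX_COUNT] = {
    [PT_IDX_V]  = 0x001,   /* X86_PG_V: present/valid */
    [PT_IDX_RW] = 0x002,   /* X86_PG_RW: writable */
};

/* EPT bit layout: read permission stands in for "valid". */
static const uint64_t ept_pg_bits[PT_IDX_COUNT] = {
    [PT_IDX_V]  = 0x001,   /* EPT read permission */
    [PT_IDX_RW] = 0x002,   /* EPT write permission */
};

/* Shared pmap code: works unchanged for either table format. */
static int
pte_valid(const pmap_t *pm, uint64_t pte)
{
    return (pte & pm->pmap_bits[PT_IDX_V]) != 0;
}
```

A vkernel pmap simply points pmap_bits at the EPT table while the host pmap points at the x86 table, and functions like pte_valid() need no format-specific branches.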
Revision tags: v3.4.3, v3.4.2, v3.4.0, v3.4.1, v3.4.0rc, v3.5.0, v3.2.2, v3.2.1, v3.2.0, v3.3.0, v3.0.3, v3.0.2, v3.0.1, v3.1.0, v3.0.0

86d7f5d3 | 26-Nov-2011 | John Marino <draco@marino.st>
Initial import of binutils 2.22 on the new vendor branch
Future versions of binutils will also reside on this branch rather than continuing to create new binutils branches for each new version.
Revision tags: v2.12.0, v2.13.0, v2.10.1, v2.11.0, v2.10.0

b5d16701 | 11-Dec-2010 | Matthew Dillon <dillon@apollo.backplane.com>
kernel - Change the discrete mplock into mp_token
* Use a lwkt_token for the mp_lock. This consolidates our longer-term spinnable locks (the mplock and tokens) into just tokens, making it easier to solve performance issues.
* Some refactoring of the token code was needed to guarantee the ordering when acquiring and releasing the mp_token vs other tokens.
* The thread switch code, lwkt_switch(), is simplified by this change though not necessarily faster.
* Remove td_mpcount, mp_lock, and other related fields.
* Remove assertions related to td_mpcount and friends, generally folding them into similar assertions for tokens.
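From a caller's perspective, the consolidation means the old get_mplock()/rel_mplock() pair with its per-thread td_mpcount gives way to an ordinary token get/release. The toy model below only shows that shape; real DragonFly lwkt_gettoken() operates on curthread, spins or switches on contention, and transparently releases held tokens whenever the thread blocks, none of which is modeled here. The struct layout and the explicit thread argument are simplifications for illustration.

```c
#include <stddef.h>

struct thread { int td_id; };   /* toy stand-in for a kernel thread */

typedef struct lwkt_token {
    struct thread *t_owner;     /* NULL when unowned */
} lwkt_token_t;

static lwkt_token_t mp_token;   /* replaces the discrete mp_lock */

/* Toy acquisition: models only the uncontended case. */
static void
toy_gettoken(lwkt_token_t *tok, struct thread *td)
{
    tok->t_owner = td;
}

static void
toy_reltoken(lwkt_token_t *tok)
{
    tok->t_owner = NULL;
}
```

Because the mp_token is now just another token, lwkt_switch() can treat it with the same acquire/release ordering rules as every other token, which is what removes the special-case td_mpcount bookkeeping.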
Revision tags: v2.9.1, v2.8.2, v2.8.1, v2.8.0, v2.9.0

f9235b6d | 24-Aug-2010 | Matthew Dillon <dillon@apollo.backplane.com>
kernel - rewrite the LWKT scheduler's priority mechanism
The purpose of these changes is to begin to address the issue of cpu-bound kernel threads. For example, the crypto threads, or a HAMMER prune cycle that operates entirely out of the buffer cache. These threads tend to hiccup the system, creating temporary lockups because they never switch away due to their nature as kernel threads.
* Change the LWKT scheduler from a strict hard priority model to a fair-share with hard priority queueing model.
A kernel thread will be queued with a hard priority, giving it dibs on the cpu earlier if it has a higher priority. However, if the thread runs past its fair-share quantum it will then become limited by that quantum and other lower-priority threads will be allowed to run.
* Rewrite lwkt_yield() and lwkt_user_yield(), remove uio_yield(). Both yield functions are now very fast and can be called without further timing conditionals, simplifying numerous callers.
lwkt_user_yield() now uses the fair-share quantum to determine when to yield the cpu for a cpu-bound kernel thread.
* Implement the new yield in the crypto kernel threads, HAMMER, and other places (many of which already used the old yield functions which didn't work very well).
* lwkt_switch() now only round-robins after the fair share quantum is exhausted. It does not necessarily always round robin.
* Separate the critical section count from td_pri. Add td_critcount.
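The hybrid "hard priority queueing plus fair-share quantum" decision can be sketched as a single predicate: a higher hard priority always preempts, a thread with quantum remaining keeps the cpu, and an exhausted thread must share with peers. The names, quantum value, and exact policy below are invented for illustration and are not the actual lwkt_switch() logic.

```c
#include <stdint.h>

#define FAIRQ_QUANTUM_US 10000   /* illustrative fair-share slice */

struct toy_thread {
    int     td_pri;         /* hard priority: higher is queued first */
    int64_t td_quantum_us;  /* remaining fair-share quantum */
};

/* Must 'cur' round-robin to 'next' under the hybrid model? */
static int
should_round_robin(const struct toy_thread *cur,
                   const struct toy_thread *next)
{
    if (next->td_pri > cur->td_pri)
        return 1;                        /* hard priority always wins */
    if (cur->td_quantum_us > 0)
        return 0;                        /* quantum left: keep running */
    return next->td_pri >= cur->td_pri;  /* exhausted: yield to peers */
}
```

This is also why the rewritten lwkt_user_yield() becomes cheap: a cpu-bound kernel thread just consults its remaining quantum instead of running extra timing conditionals on every call.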
Revision tags: v2.6.3, v2.7.3, v2.6.2, v2.7.2, v2.7.1, v2.6.1, v2.7.0, v2.6.0

0e6594a8 | 21-Mar-2010 | Sascha Wildner <saw@online.de>
vkernel64: Additional adjustments (amd64 -> x86_64, recent commits etc.).