Revision tags: v6.2.1, v6.2.0, v6.3.0, v6.0.1
# 5229377c | 07-Sep-2021 | Sascha Wildner <saw@online.de>
kernel/libc: Remove the old vmm code.
Removes the kernel code and two system calls.
Bump __DragonFly_version too.
Reviewed-by: aly, dillon
Revision tags: v6.0.0, v6.0.0rc1, v6.1.0, v5.8.3, v5.8.2

# 80d831e1 | 25-Jul-2020 | Matthew Dillon <dillon@apollo.backplane.com>
kernel - Refactor in-kernel system call API to remove bcopy()
* Change the in-kernel system call prototype to take the system call arguments as a separate pointer, and make the contents read-only.
Old: int sy_call_t(void *);
New: int sy_call_t(struct sysmsg *sysmsg, const void *);
* System calls with 6 arguments or less no longer need to copy the arguments from the trapframe to a holding structure. Instead, we simply point into the trapframe.
The L1 cache footprint will be a bit smaller, but in simple tests the results are not noticeably faster... maybe 1ns or so (roughly 1%).
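The prototype change above can be sketched in a few lines. This is a minimal, hypothetical stand-in: the `trapframe`/`sysmsg` layouts and the `sys_add` syscall are illustrative, not DragonFly's actual definitions, but the dispatch shows the point of the commit, that the handler receives a read-only pointer into the trapframe instead of a bcopy()'d holding structure.

```c
#include <assert.h>

/* Illustrative stand-ins, not DragonFly's real structures. */
struct trapframe { long tf_args[6]; };   /* syscall args saved on entry */
struct sysmsg    { long sm_result; };

/* New-style handler: read-only pointer to the argument block. */
typedef int sy_call_t(struct sysmsg *sysmsg, const void *uap);

struct add_args { long a; long b; };     /* hypothetical syscall args */

static int sys_add(struct sysmsg *sysmsg, const void *uap)
{
    const struct add_args *args = uap;   /* no private copy of the args */
    sysmsg->sm_result = args->a + args->b;
    return 0;
}

/* Dispatcher: rather than bcopy()ing the trapframe into a holding
 * structure, simply point into the trapframe itself. */
static int syscall_dispatch(struct trapframe *tf, struct sysmsg *msg,
                            sy_call_t *fn)
{
    return fn(msg, tf->tf_args);         /* args alias the trapframe */
}
```

The `const` qualifier is what makes the aliasing safe: handlers can no longer scribble on the argument block, so pointing into the saved registers costs nothing.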
Revision tags: v5.8.1, v5.8.0, v5.9.0, v5.8.0rc1, v5.6.3, v5.6.2, v5.6.1, v5.6.0, v5.6.0rc1, v5.7.0

# 2a7bd4d8 | 18-May-2019 | Sascha Wildner <saw@online.de>
kernel: Don't include <sys/user.h> in kernel code.
There is really no point in doing that because its main purpose is to expose kernel structures to userland. In the majority of cases it wasn't needed at all, and the rest required only a couple of other includes.
Revision tags: v5.4.3, v5.4.2, v5.4.1, v5.4.0, v5.5.0, v5.4.0rc1, v5.2.2, v5.2.1, v5.2.0, v5.3.0, v5.2.0rc, v5.0.2, v5.0.1, v5.0.0, v5.0.0rc2, v5.1.0, v5.0.0rc1, v4.8.1, v4.8.0, v4.6.2, v4.9.0, v4.8.0rc, v4.6.1, v4.6.0, v4.6.0rc2, v4.6.0rc, v4.7.0, v4.4.3, v4.4.2, v4.4.1, v4.4.0, v4.5.0, v4.4.0rc, v4.2.4, v4.3.1, v4.2.3, v4.2.1, v4.2.0, v4.0.6, v4.3.0, v4.2.0rc, v4.0.5, v4.0.4, v4.0.3, v4.0.2, v4.0.1, v4.0.0, v4.0.0rc3, v4.0.0rc2, v4.0.0rc, v4.1.0, v3.8.2, v3.8.1, v3.6.3, v3.8.0, v3.8.0rc2, v3.9.0, v3.8.0rc, v3.6.2, v3.6.1, v3.6.0, v3.7.1, v3.6.0rc, v3.7.0

# a86ce0cd | 20-Sep-2013 | Matthew Dillon <dillon@apollo.backplane.com>
hammer2 - Merge Mihai Carabas's VKERNEL/VMM GSOC project into the main tree
* This merge contains work primarily by Mihai Carabas, with some misc fixes also by Matthew Dillon.
* Special note on GSOC core: This is, needless to say, a huge amount of work compressed down into a few paragraphs of comments. Adds the pc64/vmm subdirectory and tons of stuff to support hardware virtualization in guest-user mode, plus the ability for programs (vkernels) running in this mode to make normal system calls to the host.
* Add system call infrastructure for VMM mode operations in kern/sys_vmm.c which vectors through a structure to machine-specific implementations.
vmm_guest_ctl_args() vmm_guest_sync_addr_args()
vmm_guest_ctl_args() - bootstrap VMM and EPT modes. Copydown the original user stack for EPT (since EPT 'physical' addresses cannot reach that far into the backing store represented by the process's original VM space). Also installs the GUEST_CR3 for the guest using parameters supplied by the guest.
vmm_guest_sync_addr_args() - A host helper function that the vkernel can use to invalidate page tables on multiple real cpus. This is a lot more efficient than having the vkernel try to do it itself with IPI signals via cpusync*().
* Add Intel VMX support to the host infrastructure. Again, tons of work compressed down into a one paragraph commit message. Intel VMX support added. AMD SVM support is not part of this GSOC and not yet supported by DragonFly.
* Remove PG_* defines for PTE's and related mmu operations. Replace with a table lookup so the same pmap code can be used for normal page tables and also EPT tables.
* Also include X86_PG_V defines specific to normal page tables for a few situations outside the pmap code.
* Adjust DDB to disassemble SVM related (intel) instructions.
* Add infrastructure to exit1() to deal with related structures.
* Optimize pfind() and pfindn() to remove the global token when looking up the current process's PID (Matt).
* Add support for EPT (double layer page tables). This primarily required adjusting the pmap code to use a table lookup to get the PG_* bits.
Add an indirect vector for copyin, copyout, and other user address space copy operations to support manual walks when EPT is in use.
A multitude of system calls which manually looked up user addresses via the vm_map now need a VMM layer call to translate EPT.
* Remove the MP lock from trapsignal() use cases in trap().
* (Matt) Add pthread_yield()s in most spin loops to help situations where the vkernel is running on more cpu's than the host has, and to help with scheduler edge cases on the host.
* (Matt) Add a pmap_fault_page_quick() infrastructure that vm_fault_page() uses to try to shortcut operations and avoid locks. Implement it for pc64. This function checks whether the page is already faulted in as requested by looking up the PTE. If not it returns NULL and the full blown vm_fault_page() code continues running.
* (Matt) Remove the MP lock from most of the vkernel's trap() code.
* (Matt) Use a shared spinlock when possible for certain critical paths related to the copyin/copyout path.
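The commit describes the VMM system calls as vectoring "through a structure to machine-specific implementations". A hedged sketch of that dispatch pattern, with placeholder names and deliberately simplified signatures (the real `vmm_guest_ctl`/`vmm_guest_sync_addr` interfaces take argument structures, and the AMD SVM slot mentioned as unsupported would simply be a second backend structure):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical vector of machine-specific VMM operations. */
struct vmm_ctl {
    const char *name;
    int (*vmm_guest_ctl)(int op);                      /* bootstrap VMM/EPT */
    int (*vmm_guest_sync_addr)(long *dst, long *src);  /* cross-cpu helper */
};

/* Intel VMX backend (AMD SVM would be a second such structure). */
static int vmx_guest_ctl(int op) { return (op >= 0) ? 0 : -1; }
static int vmx_guest_sync_addr(long *dst, long *src) { *dst = *src; return 0; }

static struct vmm_ctl vmm_ctl_vmx = {
    "vmx", vmx_guest_ctl, vmx_guest_sync_addr,
};

/* Selected once, at boot, based on CPU capabilities. */
static struct vmm_ctl *vmm_cur_ctl = &vmm_ctl_vmx;

/* The machine-independent syscalls just indirect through the vector. */
static int sys_vmm_guest_ctl(int op)
{
    return vmm_cur_ctl ? vmm_cur_ctl->vmm_guest_ctl(op) : -1;
}

static int sys_vmm_guest_sync_addr(long *dst, long *src)
{
    return vmm_cur_ctl ? vmm_cur_ctl->vmm_guest_sync_addr(dst, src) : -1;
}
```

The point of the indirection is that the syscall numbers and MI entry points stay stable while VMX (and, later, other backends) plug in underneath.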
Revision tags: v3.4.3, v3.4.2, v3.4.0, v3.4.1, v3.4.0rc, v3.5.0, v3.2.2, v3.2.1, v3.2.0, v3.3.0, v3.0.3, v3.0.2, v3.0.1, v3.1.0, v3.0.0

# 86d7f5d3 | 26-Nov-2011 | John Marino <draco@marino.st>
Initial import of binutils 2.22 on the new vendor branch
Future versions of binutils will also reside on this branch rather than continuing to create new binutils branches for each new version.
# f2081646 | 12-Nov-2011 | Matthew Dillon <dillon@apollo.backplane.com>
kernel - Add syscall quick return path for x86-64
* Flag the case where a sysretq can be performed to quickly return from a system call instead of having to execute the slower doreti code.
* This about halves syscall times for simple system calls such as getuid(), and reduces longer syscalls by ~80ns or so on a fast 3.4GHz SandyBridge, but does not seem to really affect performance a whole lot.
Taken-From: FreeBSD (loosely)
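The decision this commit adds, "flag the case where a sysretq can be performed", comes down to a cheap test at syscall exit. A toy sketch of that decision logic (the flag names here are hypothetical; the real code tests trapframe/thread state in assembly):

```c
#include <assert.h>
#include <string.h>

#define QUICKRET_OK  0x01  /* frame untouched; sysretq can restore it */
#define AST_PENDING  0x02  /* signal/reschedule work before returning  */

/* sysretq restores only a small subset of state, so it is safe only
 * when nothing has forced us onto the slow (doreti) path. */
static const char *syscall_exit_path(int flags)
{
    if ((flags & QUICKRET_OK) && !(flags & AST_PENDING))
        return "sysretq";
    return "doreti";
}
```

The win comes from how often the fast test succeeds: simple syscalls like getuid() almost never have AST work pending, so they skip the full doreti iret sequence nearly every time.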
Revision tags: v2.12.0, v2.13.0, v2.10.1, v2.11.0, v2.10.0, v2.9.1, v2.8.2, v2.8.1, v2.8.0, v2.9.0, v2.6.3, v2.7.3, v2.6.2, v2.7.2, v2.7.1, v2.6.1, v2.7.0, v2.6.0

# b2b3ffcd | 04-Nov-2009 | Simon Schubert <corecode@dragonflybsd.org>
rename amd64 architecture to x86_64
The rest of the world seems to call amd64 x86_64. Bite the bullet and rename all of the architecture files and references. This will hopefully make pkgsrc builds less painful.
Discussed-with: dillon@
# c1543a89 | 04-Nov-2009 | Simon Schubert <corecode@dragonflybsd.org>
rename amd64 architecture to x86_64
The rest of the world seems to call amd64 x86_64. Bite the bullet and rename all of the architecture files and references. This will hopefully make pkgsrc builds less painful.
Discussed-with: dillon@
# 2883d2d8 | 21-Oct-2010 | Matthew Dillon <dillon@apollo.backplane.com>
kernel - (mainly x86_64) - Fix a number of rare races
* Move the MP lock from outside to inside exit1(), also fixing an issue where sigexit() was calling exit1() without it.
* Move calls to dsched_exit_thread() and biosched_done() out of the platform code and into the mainline code. This also fixes an issue where the code was improperly blocking way too late in the thread termination code, after the point where it had been descheduled permanently and tsleep decommissioned for the thread.
* Cleanup and document related code areas.
* Fix a missing proc_token release in the SIGKILL exit path.
* Fix FAKE_MCOUNT()s in the x86-64 code. These are NOPs anyway (since kernel profiling doesn't work), but fix them anyway.
* Use APIC_PUSH_FRAME() in the Xcpustop assembly code for x86-64 in order to properly acquire a working %gs. This may improve the handling of panic()s on x86_64.
* Also fix some cases of #if JG'd (ifdef'd out) code in case the code is ever used later on.
* Protect set_user_TLS() with a critical section to be safe.
* Add debug code to help track down further x86-64 seg-fault issues, and provide better kprintf()s for the debug path in question.
# 3919ced0 | 13-Dec-2009 | Matthew Dillon <dillon@apollo.backplane.com>
kernel - Move MP lock inward, plus misc other stuff
* Remove the MPSAFE flag from the syscalls.master file. All system calls are now called without the MP lock held and will acquire the MP lock if necessary.
* Shift the MP lock inward. Try to leave most copyin/copyout operations outside the MP lock. Reorder some of the copyouts in the linux emulation code to suit.
Kernel resource operations are MP safe.
Process ucred access is now outside the MP lock but not quite MP safe yet (will be fixed in a followup).
* Remove unnecessary KKASSERT(p) calls left over from the time before system calls were prefixed with sys_*.
* Fix a bunch of cases in the linux emulation code when setting groups where the ngrp range check is incorrect.
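"Shifting the MP lock inward" while leaving copyin/copyout outside it can be sketched as follows. Everything here is a toy stand-in, assuming a Giant-style recursive lock: the counter lock, `copyin_long()`, and `sys_legacy_op()` are hypothetical, but the shape matches what the commit describes, with the dispatcher no longer wrapping every syscall and each handler locking only the work that still needs it.

```c
#include <assert.h>

/* Toy stand-in for the MP (Giant) lock. */
static int mplock_depth;
static void get_mplock(void) { mplock_depth++; }
static void rel_mplock(void) { mplock_depth--; }

/* Stand-in for copyin(): user-space copies can fault and sleep, which
 * is exactly why they are kept outside the MP lock. */
static int copyin_long(long *dst, const long *uaddr)
{
    *dst = *uaddr;
    return 0;
}

/* A not-yet-MPSAFE syscall: copy the arguments in first, then take the
 * lock only around the legacy state it actually protects. */
static int sys_legacy_op(const long *uap, long *result)
{
    long arg;
    int error = copyin_long(&arg, uap);  /* outside the MP lock */
    if (error)
        return error;
    get_mplock();                        /* only the unsafe section */
    *result = arg + 1;
    rel_mplock();
    return 0;
}
```

Handlers that are fully MP safe simply never call get_mplock() at all, which is what removing the MPSAFE flag from syscalls.master makes the default assumption.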