Revision tags: v6.2.1, v6.2.0, v6.3.0, v6.0.1, v6.0.0, v6.0.0rc1, v6.1.0, v5.8.3, v5.8.2, v5.8.1, v5.8.0, v5.9.0, v5.8.0rc1, v5.6.3, v5.6.2, v5.6.1, v5.6.0, v5.6.0rc1, v5.7.0, v5.4.3, v5.4.2, v5.4.1, v5.4.0, v5.5.0, v5.4.0rc1, v5.2.2, v5.2.1, v5.2.0, v5.3.0, v5.2.0rc, v5.0.2, v5.0.1, v5.0.0, v5.0.0rc2, v5.1.0, v5.0.0rc1, v4.8.1, v4.8.0, v4.6.2, v4.9.0, v4.8.0rc |
|
#
a798ebf2 |
| 03-Dec-2016 |
Imre Vadász <imre@vdsz.com> |
kernel: Fix stop_cpus()/restart_cpus() usages when panicking.
* If we are panicking (i.e. panicstr != NULL), Debugger() should make sure that cpus are stopped when it returns. So call stop_cpus() explicitly if Debugger() does an early return (i.e. in the cons_unavail case), and don't call restart_cpus() at the end if we are panicking.
* This should make sure that Debugger()'s behaviour matches the expectations of panic() in sys/kern/kern_shutdown.c.
|
Revision tags: v4.6.1, v4.6.0, v4.6.0rc2, v4.6.0rc, v4.7.0, v4.4.3, v4.4.2 |
|
#
adf0eb4f |
| 15-Dec-2015 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Change where dump context is saved
* Save the dump context a little earlier in the panic to improve the chances that post-mortem kgdb can print the stack backtrace.
* Use a function union for variable-argument calls from the ddb> prompt.
|
Revision tags: v4.4.1, v4.4.0, v4.5.0, v4.4.0rc, v4.2.4 |
|
#
09c0e0d6 |
| 31-Jul-2015 |
Sascha Wildner <saw@online.de> |
kernel: Add prototypes for setjmp()/longjmp() to <sys/systm.h>.
Used by ddb and vinum. Remove the inclusion of the <setjmp.h> userspace header.
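The pattern that makes setjmp()/longjmp() useful to ddb-style code is bailing out of a failing operation back to a saved recovery point. A minimal userspace illustration of that pattern (plain C, not the kernel prototypes this commit adds to <sys/systm.h>):

```c
#include <assert.h>
#include <setjmp.h>

static jmp_buf db_jmpbuf;
static int recovered = 0;

static void failing_command(void)
{
    /* Something went wrong mid-command; unwind to the saved context. */
    longjmp(db_jmpbuf, 1);
}

static int run_command(void)
{
    if (setjmp(db_jmpbuf) != 0) {
        recovered = 1;    /* longjmp() landed back here */
        return -1;
    }
    failing_command();
    return 0;             /* never reached */
}
```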
|
Revision tags: v4.3.1, v4.2.3, v4.2.1, v4.2.0, v4.0.6, v4.3.0, v4.2.0rc, v4.0.5, v4.0.4, v4.0.3, v4.0.2, v4.0.1, v4.0.0, v4.0.0rc3, v4.0.0rc2, v4.0.0rc, v4.1.0, v3.8.2 |
|
#
c07315c4 |
| 04-Jul-2014 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Refactor cpumask_t to extend cpus past 64, part 1/2
* 64-bit systems only. 32-bit builds use the macros but cannot be expanded past 32 cpus.
* Change cpumask_t from __uint64_t to a structure. This commit implements one 64-bit sub-element (the next one will implement four for 256 cpus).
* Create a CPUMASK_*() macro API for non-atomic and atomic cpumask manipulation. These macros generally take lvalues as arguments, allowing for a fairly optimal implementation.
* Change all C code operating on cpumask's to use the newly created CPUMASK_*() macro API.
* Compile-test 32 and 64-bit. Run-test 64-bit.
* Adjust sbin/usched, usr.sbin/powerd. usched currently needs more work.
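The shape of the change can be sketched as follows: a struct wrapping one 64-bit sub-element, manipulated through macros that take lvalues. The macro names mimic the CPUMASK_*() API the commit describes, but these definitions are simplified guesses, not the actual DragonFly ones:

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
    uint64_t ary[1];   /* part 2/2 grows this to four words for 256 cpus */
} cpumask_t;

/* Non-atomic lvalue-taking manipulation macros (illustrative). */
#define CPUMASK_ASSZERO(mask)      ((mask).ary[0] = 0)
#define CPUMASK_ORBIT(mask, cpu)   ((mask).ary[0] |= (uint64_t)1 << (cpu))
#define CPUMASK_NANDBIT(mask, cpu) ((mask).ary[0] &= ~((uint64_t)1 << (cpu)))
#define CPUMASK_TESTBIT(mask, cpu) (((mask).ary[0] >> (cpu)) & 1)
```

Because the macros receive the mask itself rather than a pointer, the compiler can keep the single-word case as efficient as the old plain __uint64_t code while the multi-word layout stays hidden behind the API.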
|
Revision tags: v3.8.1, v3.6.3, v3.8.0, v3.8.0rc2, v3.9.0, v3.8.0rc, v3.6.2, v3.6.1, v3.6.0, v3.7.1, v3.6.0rc, v3.7.0 |
|
#
a86ce0cd |
| 20-Sep-2013 |
Matthew Dillon <dillon@apollo.backplane.com> |
hammer2 - Merge Mihai Carabas's VKERNEL/VMM GSOC project into the main tree
* This merge contains work primarily by Mihai Carabas, with some misc fixes also by Matthew Dillon.
* Special note on GSOC core
This is, needless to say, a huge amount of work compressed down into a few paragraphs of comments. Adds the pc64/vmm subdirectory and tons of stuff to support hardware virtualization in guest-user mode, plus the ability for programs (vkernels) running in this mode to make normal system calls to the host.
* Add system call infrastructure for VMM mode operations in kern/sys_vmm.c which vectors through a structure to machine-specific implementations.
vmm_guest_ctl_args() vmm_guest_sync_addr_args()
vmm_guest_ctl_args() - bootstrap VMM and EPT modes. Copydown the original user stack for EPT (since EPT 'physical' addresses cannot reach that far into the backing store represented by the process's original VM space). Also installs the GUEST_CR3 for the guest using parameters supplied by the guest.
vmm_guest_sync_addr_args() - A host helper function that the vkernel can use to invalidate page tables on multiple real cpus. This is a lot more efficient than having the vkernel try to do it itself with IPI signals via cpusync*().
* Add Intel VMX support to the host infrastructure. Again, tons of work compressed down into a one paragraph commit message. Intel VMX support added. AMD SVM support is not part of this GSOC and not yet supported by DragonFly.
* Remove PG_* defines for PTE's and related mmu operations. Replace with a table lookup so the same pmap code can be used for normal page tables and also EPT tables.
* Also include X86_PG_V defines specific to normal page tables for a few situations outside the pmap code.
* Adjust DDB to disassemble SVM related (intel) instructions.
* Add infrastructure to exit1() to deal with related structures.
* Optimize pfind() and pfindn() to remove the global token when looking up the current process's PID (Matt)
* Add support for EPT (double layer page tables). This primarily required adjusting the pmap code to use a table lookup to get the PG_* bits.
Add an indirect vector for copyin, copyout, and other user address space copy operations to support manual walks when EPT is in use.
A multitude of system calls which manually looked up user addresses via the vm_map now need a VMM layer call to translate EPT.
* Remove the MP lock from trapsignal() use cases in trap().
* (Matt) Add pthread_yield()s in most spin loops to help situations where the vkernel is running on more cpu's than the host has, and to help with scheduler edge cases on the host.
* (Matt) Add a pmap_fault_page_quick() infrastructure that vm_fault_page() uses to try to shortcut operations and avoid locks. Implement it for pc64. This function checks whether the page is already faulted in as requested by looking up the PTE. If not it returns NULL and the full blown vm_fault_page() code continues running.
* (Matt) Remove the MP lock from most the vkernel's trap() code
* (Matt) Use a shared spinlock when possible for certain critical paths related to the copyin/copyout path.
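The "table lookup instead of PG_* defines" idea can be sketched like this: normal x86 page tables and Intel EPT encode their bits at different positions, so the pmap consults a per-pmap bit table rather than compile-time constants. The index names and bit values below are illustrative, not taken from the DragonFly source:

```c
#include <assert.h>
#include <stdint.h>

enum { TYPE_IDX_V = 0, TYPE_IDX_A = 1, TYPE_IDX_COUNT = 2 };

typedef struct {
    uint64_t bits[TYPE_IDX_COUNT];  /* looked up instead of hardwired PG_* */
} pmap_bits_t;

/* Normal PTEs: valid = bit 0, accessed = bit 5 (0x020).
 * EPT-like PTEs: readable = bit 0, accessed = bit 8 (0x100). */
static const pmap_bits_t normal_bits = { { 0x001, 0x020 } };
static const pmap_bits_t ept_bits    = { { 0x001, 0x100 } };

/* The same pmap code works for both formats via the table. */
static uint64_t pte_mark_accessed(const pmap_bits_t *pb, uint64_t pte)
{
    return pte | pb->bits[TYPE_IDX_A];
}
```

One pmap implementation then serves both the host's page tables and the guest's EPT tables, which is what lets the VMM reuse the existing code.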
|
Revision tags: v3.4.3, v3.4.2, v3.4.0, v3.4.1, v3.4.0rc, v3.5.0, v3.2.2 |
|
#
1918fc5c |
| 24-Oct-2012 |
Sascha Wildner <saw@online.de> |
kernel: Make SMP support default (and non-optional).
The 'SMP' kernel option gets removed with this commit, so it has to be removed from everybody's configs.
Reviewed-by: sjg Approved-by: many
|
Revision tags: v3.2.1, v3.2.0, v3.3.0, v3.0.3, v3.0.2, v3.0.1, v3.1.0, v3.0.0 |
|
#
4090d6ff |
| 03-Jan-2012 |
Sascha Wildner <saw@online.de> |
kernel: Use NULL for pointers.
|
#
86d7f5d3 |
| 26-Nov-2011 |
John Marino <draco@marino.st> |
Initial import of binutils 2.22 on the new vendor branch
Future versions of binutils will also reside on this branch rather than continuing to create new binutils branches for each new version.
|
Revision tags: v2.12.0, v2.13.0, v2.10.1, v2.11.0, v2.10.0, v2.9.1, v2.8.2, v2.8.1, v2.8.0, v2.9.0, v2.6.3, v2.7.3, v2.6.2, v2.7.2, v2.7.1, v2.6.1, v2.7.0, v2.6.0 |
|
#
b2b3ffcd |
| 04-Nov-2009 |
Simon Schubert <corecode@dragonflybsd.org> |
rename amd64 architecture to x86_64
The rest of the world seems to call amd64 x86_64. Bite the bullet and rename all of the architecture files and references. This will hopefully make pkgsrc builds less painful.
Discussed-with: dillon@
|
#
c1543a89 |
| 04-Nov-2009 |
Simon Schubert <corecode@dragonflybsd.org> |
rename amd64 architecture to x86_64
The rest of the world seems to call amd64 x86_64. Bite the bullet and rename all of the architecture files and references. This will hopefully make pkgsrc builds less painful.
Discussed-with: dillon@
|
#
da23a592 |
| 09-Dec-2010 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Add support for up to 63 cpus & 512G of ram for 64-bit builds.
* Increase SMP_MAXCPU to 63 for 64-bit builds.
* cpumask_t is 64 bits on 64-bit builds now. It remains 32 bits on 32-bit builds.
* Add #define's for atomic_set_cpumask(), atomic_clear_cpumask(), and atomic_cmpset_cpumask(). Replace all use cases on cpu masks with these functions.
* Add CPUMASK(), BSRCPUMASK(), and BSFCPUMASK() macros. Replace all use cases on cpu masks with these functions.
In particular note that (1 << cpu) just doesn't work with a 64-bit cpumask.
Numerous bits of assembly also had to be adjusted to use e.g. btq instead of btl, etc.
* Change __uint32_t declarations that were meant to be cpu masks to use cpumask_t (most already have).
Also change other bits of code which work on cpu masks to be more agnostic. For example, poll_cpumask0 and lwp_cpumask.
* 64-bit atomic ops cannot use "iq", they must use "r", because most x86-64 do NOT have 64-bit immediate value support.
* Rearrange initial kernel memory allocations to start from KvaStart and not KERNBASE, because only 2GB of KVM is available after KERNBASE.
Certain VM allocations with > 32G of ram can exceed 2GB. For example, vm_page_array[]. 2GB was not enough.
* Remove numerous mdglobaldata fields that are not used.
* Align CPU_prvspace[] for now. Eventually it will be moved into a mapped area. Reserve sufficient space at MPPTDI now, but it is still unused.
* When pre-allocating kernel page table PD entries calculate the number of page table pages at KvaStart and at KERNBASE separately, since the KVA space starting at KERNBASE caps out at 2GB.
* Change kmem_init() and vm_page_startup() to not take memory range arguments. Instead the globals (virtual_start and virtual_end) are manipulated directly.
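The (1 << cpu) pitfall called out above is easy to demonstrate: the literal 1 is a 32-bit int, so shifting it by cpu >= 32 is undefined behavior and cannot produce the high bits of a 64-bit mask. The macro shapes below are simplified guesses at CPUMASK()/BSFCPUMASK()/BSRCPUMASK(), not the real definitions:

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t cpumask_t;   /* 64-bit builds; 32 bits on 32-bit builds */

/* Cast before shifting so cpus 32..62 work. */
#define CPUMASK(cpu)     ((cpumask_t)1 << (cpu))

/* Bit-scan-forward/reverse: lowest and highest set cpu in the mask. */
#define BSFCPUMASK(mask) ((int)__builtin_ctzll(mask))
#define BSRCPUMASK(mask) (63 - (int)__builtin_clzll(mask))
```

The same concern appears in the atomic ops note: a 64-bit immediate does not fit most x86-64 instructions, so the inline assembly must use the "r" constraint (a register) rather than "iq".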
|