Revision tags: v6.2.1, v6.2.0, v6.3.0, v6.0.1, v6.0.0, v6.0.0rc1, v6.1.0, v5.8.3, v5.8.2, v5.8.1, v5.8.0, v5.9.0, v5.8.0rc1, v5.6.3, v5.6.2 |
|
#
3206d887 |
| 25-Jun-2019 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Increase elf limits
* Increase the MAXTSIZ default from 256MB to 32GB. Certain debug executables, such as chromium, exceeded the original limit.
* Leave the default data limit at 128MB for the moment, but it will be increased as soon as we work out low-memory hinting vs heap allocation.
|
Revision tags: v5.6.1, v5.6.0, v5.6.0rc1, v5.7.0, v5.4.3 |
|
#
4837705e |
| 03-May-2019 |
Matthew Dillon <dillon@apollo.backplane.com> |
pthreads and kernel - change MAP_STACK operation
* Only allow new mmap()'s to intrude on ungrown MAP_STACK areas when MAP_TRYFIXED is specified. This was not working as intended before. Adjust the manual page to be more clear.
* Make kern.maxssiz (the maximum user stack size) visible via sysctl.
* Add kern.maxthrssiz, indicating the approximate space for placement of pthread stacks. This defaults to 128GB.
* The old libthread_xu stack code did use TRYFIXED and will work with the kernel changes, but change how it works to not assume that the user stack should suddenly be limited to the pthread stack size (~2MB).
Just use a normal mmap() now without TRYFIXED and a hint based on kern.maxthrssiz (defaults to 512GB), calculating a starting address hint that is (_usrstack - maxssiz - maxthrssiz).
* Adjust procfs to report MAP_STACK segments as 'STK'.
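The starting-address hint described above is plain address arithmetic. A minimal sketch follows; the constant values are illustrative assumptions, not DragonFly's actual defaults — on a real system `_usrstack` comes from the kernel and the limits from the `kern.maxssiz` / `kern.maxthrssiz` sysctls.

```python
# Sketch of the pthread-stack placement hint: (_usrstack - maxssiz - maxthrssiz).
# All three constants below are assumptions for illustration only.
USRSTACK   = 0x00007FFFFFFFF000        # assumed top of the user stack
MAXSSIZ    = 512 * 1024 * 1024         # assumed main-stack limit (512MB)
MAXTHRSSIZ = 512 * 1024 * 1024 * 1024  # assumed pthread stack region (512GB)

def pthread_stack_hint(usrstack, maxssiz, maxthrssiz):
    """Starting address hint passed to mmap() for pthread stacks."""
    return usrstack - maxssiz - maxthrssiz

hint = pthread_stack_hint(USRSTACK, MAXSSIZ, MAXTHRSSIZ)
print(hex(hint))
```

libthread_xu would then hand this hint to a normal `mmap()` (no MAP_TRYFIXED) and let the kernel place each stack at or below it.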
|
Revision tags: v5.4.2, v5.4.1, v5.4.0, v5.5.0, v5.4.0rc1, v5.2.2, v5.2.1, v5.2.0, v5.3.0, v5.2.0rc |
|
#
d30a28dd |
| 01-Feb-2018 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Fix kernel minidumps
* Refactor minidumps. Fix overflows due to KVM now being 8TB, fix improper pdp[] array calculations (which cropped up when we want more than one PML4e entry for the kernel), and refactor the page table entry handling code to improve efficiency and reduce the dump size.
If we had kept the original pte mapping in the minidump it would have required ~16GB of disk space JUST to hold a pte array that is mostly 0's. Now it only requires ~2MB.
Dumping performance is improved because the page table array is primarily flushed to storage in 4KB block sizes, and now only 2MB or so is written out in this manner.
* minidump now dumps the PDP array of PD entries (representing 1GB each) for the entire system VA space (user and kernel) - 256TB. This requires 512*512*8 = 2MB of storage.
PD pages and PT pages are no longer linearized into an array in the minidump. Instead, their physical addresses are included in the dump map and libkvm accesses the PTEs through the physical map.
NOTE: Only kernel memory proper is actually populated at this time, but this leaves the door open for e.g. dumping more information without having to change the minidump format again.
* Revamp the minidump header, magic string, and version to address the new reality. libkvm should still be able to recognize the old minidump format, as well as now the new one.
Reminded-by: everyone
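The size figures quoted above follow from simple arithmetic — a sketch, using the page and entry sizes named in the commit message:

```python
GB, TB = 1024**3, 1024**4
PAGE, PTE = 4096, 8  # 4KB pages, 8-byte page-table entries

# Old scheme: a flat pte array covering the 8TB KVM, one pte per 4KB page.
flat_pte_bytes = (8 * TB // PAGE) * PTE
print(flat_pte_bytes // GB)              # ~16GB, mostly zeroes

# New scheme: one 8-byte PD entry per 1GB of the full 256TB VA space.
pd_entries = 256 * TB // GB              # 512 * 512 entries
pdp_array_bytes = pd_entries * PTE
print(pdp_array_bytes // (1024 * 1024))  # 2MB
```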
|
#
8ff9866b |
| 04-Dec-2017 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Expand physical memory support to 64TB
* Make NKPML4E truly programmable and change the default from 1 PDP page to 16 PDP pages. This increases KVM from 512G to 8TB, which should be enough to accommodate a maximal 64TB configuration.
Note that e.g. 64TB of physical ram certainly requires more than one kernel PDP page, since the vm_page_array alone would require around 2TB, never mind everything else!
PDP entries in the PML4E (512 total @ 512GB per entry):
  256  user space
  112  (unused, available for NKPML4E)
  128  DMAP (64TB max physical memory)
   16  KVM, NKPML4E default (8TB) (recommend 64 max)
* Increase the DMAP from 64 PDP pages to 128 PDP pages, allowing support for up to 64TB of physical memory.
* Changes the meaning of KPML4I from being 'the index of the only PDP page in the PML4e' to 'the index of the first PDP page in the PML4e'. There are NKPML4E PDP pages starting at index KPML4I.
* NKPDPE can now exceed 512. This is calculated to be the maximum number of PD pages needed for KVM, which is now (NKPML4E*NPDPEPG-1).
We now pre-allocate and populate only enough PD pages to accommodate the page tables we are pre-installing. Those, in turn, are calculated to be sufficient for bootstrapping mainly vm_page_array and a large initial set of pv_entry structures.
* Remove nkpt, it was not being used any more.
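The PML4 split above can be checked with a few lines of arithmetic — a sketch of the layout described in this entry, with each PML4 entry mapping 512GB:

```python
PML4_ENTRIES = 512
GB_PER_PDPE = 512   # each PML4 (PDP) entry maps 512GB
GB_PER_TB = 1024

# Split of the 512 PML4 entries as listed in the commit message.
user, unused, dmap, kvm = 256, 112, 128, 16
assert user + unused + dmap + kvm == PML4_ENTRIES

print(dmap * GB_PER_PDPE // GB_PER_TB)  # 64TB max physical memory via DMAP
print(kvm * GB_PER_PDPE // GB_PER_TB)   # 8TB of KVM with the default NKPML4E
```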
|
Revision tags: v5.0.2, v5.0.1 |
|
#
f70051b1 |
| 29-Oct-2017 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Fix boot issues with > 512GB of ram
* Fix DMAP installation issues for kernels with > 512GB of ram. The page table was not being laid out properly for PML4e entries past the first one.
* Fix early panic reporting. Conditionalize the lapic access as the lapic might not exist yet.
* Tested to 1TB of ram. Theoretically DragonFlyBSD can support up to 32TB of ram (and slightly less than ~64TB with one #define change).
Reported-by: zrj
Testing-by: zrj
|
Revision tags: v5.0.0, v5.0.0rc2, v5.1.0, v5.0.0rc1 |
|
#
11ba7f73 |
| 10-Aug-2017 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Lower VM_MAX_USER_ADDRESS to finalize work-around for Ryzen bug
* Reduce VM_MAX_USER_ADDRESS by 2MB, effectively making the top 2MB of the user address space unmappable. The user stack now starts 2MB down from where it did before. Theoretically we only need to reduce the top of the user address space by 4KB, but doing it by 2MB may be more useful for future page table optimizations.
* As per AMD, Ryzen has an issue when the instruction pre-fetcher crosses from canonical to non-canonical address space. This can only occur at the top of the user stack.
In DragonFlyBSD, the signal trampoline resides at the top of the user stack and an IRETQ into it can cause a Ryzen box to lock up and destabilize due to this action. The bug case was, basically, two cpu threads on the same core, one in a cpu-bound loop of some sort while the other takes a normal UNIX signal (causing the IRETQ into the signal trampoline). The IRETQ microcode freezes until the cpu-bound loop terminates, preventing the cpu thread from being able to take any interrupt or IPI whatsoever for the duration, and the cpu may destabilize afterward as well.
* The pre-fetcher is somewhat heuristical, so just moving the trampoline down is no guarantee if the top 4KB of the user stack is mapped or mappable. It is better to make the boundary unmappable by userland.
* Bug first tracked down by myself in early 2017. AMD validated the bug and determined that unmapping the boundary page completely solves the issue.
* Also retain the code which places the signal trampoline in its own page so we can maintain separate protection settings for the code, and make it read-only (R+X).
|
Revision tags: v4.8.1, v4.8.0, v4.6.2, v4.9.0, v4.8.0rc |
|
#
77c48adb |
| 06-Jan-2017 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Refactor phys_avail[] and dump_avail[]
* Refactor phys_avail[] and dump_avail[] into a more understandable structure.
|
#
aedf5523 |
| 28-Dec-2016 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Increase KVM from 128G to 511G, further increase maximum swap
* Increase KVM (Kernel Virtual Memory) to the maximum we currently support. Up to half of it can be used for swblock structures (SWAPMETA in vmstat -z). This allows the following swap maximums.
128G of ram  - 15TB of data can be swapped out.
256G of ram  - 30TB of data can be swapped out.
512G+ of ram - 55TB - this is the maximum we can support swapped out.
* We can support > 512G of KVM in the future with only a bit of work on how KVM is reserved.
* Remove some debugging code.
|
Revision tags: v4.6.1, v4.6.0, v4.6.0rc2, v4.6.0rc, v4.7.0, v4.4.3, v4.4.2, v4.4.1, v4.4.0, v4.5.0, v4.4.0rc, v4.2.4, v4.3.1, v4.2.3, v4.2.1, v4.2.0, v4.0.6, v4.3.0, v4.2.0rc, v4.0.5, v4.0.4, v4.0.3, v4.0.2, v4.0.1 |
|
#
5e700a85 |
| 21-Nov-2014 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Increase default MAXTSIZ from 128M to 256M
* Increase the default max text size from 128m to 256m. Note that this value can also be overridden in /boot/loader.conf via kern.maxtsiz.
* Currently only chrome compiled w/ full debugging has a text size which exceeds 128M. The normally compiled chrome is hitting 93MB though so we might as well up the limit now.
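As the message notes, the limit can be overridden at boot time. A loader.conf fragment might look like the following; the value shown (512MB, in bytes) is an illustrative example, not a recommended setting:

```
# /boot/loader.conf
kern.maxtsiz="536870912"    # raise the max text size to 512MB
```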
|
Revision tags: v4.0.0, v4.0.0rc3, v4.0.0rc2, v4.0.0rc, v4.1.0, v3.8.2, v3.8.1, v3.6.3, v3.8.0, v3.8.0rc2, v3.9.0, v3.8.0rc, v3.6.2, v3.6.1, v3.6.0, v3.7.1, v3.6.0rc, v3.7.0, v3.4.3 |
|
#
8cd7f47b |
| 08-Aug-2013 |
François Tigeot <ftigeot@wolfpond.org> |
kernel: Add VM_MAX_ADDRESS and VM_MIN_ADDRESS constants
|
Revision tags: v3.4.2, v3.4.0, v3.4.1, v3.4.0rc, v3.5.0, v3.2.2, v3.2.1, v3.2.0, v3.3.0, v3.0.3, v3.0.2, v3.0.1, v3.1.0, v3.0.0 |
|
#
86d7f5d3 |
| 26-Nov-2011 |
John Marino <draco@marino.st> |
Initial import of binutils 2.22 on the new vendor branch
Future versions of binutils will also reside on this branch rather than continuing to create new binutils branches for each new version.
|
#
33fb3ba1 |
| 10-Nov-2011 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Increase maximum supported physical memory to 32TB
* Increase the maximum supported physical memory to 32TB (untested), by increasing the number of DMAP PDPs we reserve in the PML4E from 1 to 32.
|
#
701c977e |
| 26-Oct-2011 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Rewrite the x86-64 pmap code
* Use unassociated VM pages (without a VM object) for all page table pages.
* Remove kptobj and pmap->pm_pteobj.
* For the moment implement a Red-Black tree for pv_entry_t manipulation. Revamp the pindex to include all page table page levels, from terminal pages to the PML4 page. The hierarchy is now arranged via the PV system.
* As before, the kernel page tables only use PV entries for terminal pages.
* Refactor the locking to allow blocking operations during deep scans. Individual PV entries are now locked and critical PMAP operations do not require the pmap->pm_token. This should greatly improve threaded program performance.
* Fix kgdb on the live kernel (pmap_extract() was not handling short-cutted page directory pages).
|
Revision tags: v2.12.0, v2.13.0, v2.10.1, v2.11.0, v2.10.0, v2.9.1, v2.8.2, v2.8.1, v2.8.0, v2.9.0, v2.6.3, v2.7.3, v2.6.2, v2.7.2, v2.7.1, v2.6.1, v2.7.0, v2.6.0 |
|
#
b2b3ffcd |
| 04-Nov-2009 |
Simon Schubert <corecode@dragonflybsd.org> |
rename amd64 architecture to x86_64
The rest of the world seems to call amd64 x86_64. Bite the bullet and rename all of the architecture files and references. This will hopefully make pkgsrc builds less painful.
Discussed-with: dillon@
|
#
3f3709c3 |
| 07-Nov-2009 |
Jordan Gordeev <jgordeev@dir.bg> |
Revert "rename amd64 architecture to x86_64"
This reverts commit c1543a890188d397acca9fe7f76bcd982481a763.
I'm reverting it because:
1) the change didn't get properly discussed
2) it was based on false premises: "The rest of the world seems to call amd64 x86_64."
3) no pkgsrc bulk build was done to test the change
4) the original committer acted irresponsibly by committing such a big change just before going on vacation.
|
#
c1543a89 |
| 04-Nov-2009 |
Simon Schubert <corecode@dragonflybsd.org> |
rename amd64 architecture to x86_64
The rest of the world seems to call amd64 x86_64. Bite the bullet and rename all of the architecture files and references. This will hopefully make pkgsrc builds less painful.
Discussed-with: dillon@
|
Revision tags: v2.5.1, v2.4.1, v2.5.0, v2.4.0 |
|
#
bfc09ba0 |
| 25-Aug-2009 |
Matthew Dillon <dillon@apollo.backplane.com> |
AMD64 - Fix format conversions and other warnings.
|
#
a2a636cc |
| 12-Aug-2009 |
Matthew Dillon <dillon@apollo.backplane.com> |
AMD64 - Sync machine-dependent bits from smtms.
Submitted-by: Jordan Gordeev <jgordeev@dir.bg>
|
Revision tags: v2.3.2, v2.3.1, v2.2.1 |
|
#
67553e72 |
| 26-Apr-2009 |
Jordan Gordeev <jgordeev@dir.bg> |
amd64: first steps towards 64-bit pmap
remove 32-bit amd64 pmap; replace with work-in-progress 64-bit pmap
|
#
48ffc236 |
| 26-Apr-2009 |
Jordan Gordeev <jgordeev@dir.bg> |
amd64: first steps towards 64-bit pmap
remove 32-bit amd64 pmap; replace with work-in-progress 64-bit pmap
|
Revision tags: v2.2.0, v2.3.0, v2.1.1, v2.0.1 |
|
#
c8fe38ae |
| 29-Aug-2008 |
Matthew Dillon <dillon@dragonflybsd.org> |
AMD64 - Sync AMD64 support from Jordan Gordeev's svn repository and Google SOC project. This work is still continuing but represents substantial progress in the effort.
With this commit the world builds and installs, the loader is able to boot the kernel, and the kernel is able to initialize, probe devices, and exec the init program. The init program is then able to run until it hits its first fork(). For the purposes of the GSOC the project is being considered a big success!
The code has been adapted from multiple sources, most notably Peter Wemm and other peoples work from FreeBSD, with many modifications to make it work with DragonFly. Also thanks go to Simon Schubert for working on gdb and compiler issues, and to Noah Yan for a good chunk of precursor work in 2007.
While Jordan wishes to be modest on his contribution, frankly we would not have been able to make this much progress without the large number of man-hours Jordan has dedicated to his GSOC project painstakingly gluing code together, tracking down issues, and progressing the boot sequence.
Submitted-by: Jordan Gordeev <jgordeev@dir.bg>
|
#
39923942 |
| 21-Aug-2007 |
Simon Schubert <corecode@dragonflybsd.org> |
Resurrect headers for sys/platform/pc64/include from CVS Attic.
Patch and mark them as platform specific.
On-behalf-of: Noah Yan <noah.yan@gmail.com>
Submitted-by: Noah Yan <noah.yan@gmail.com>
|
#
da23a592 |
| 09-Dec-2010 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Add support for up to 63 cpus & 512G of ram for 64-bit builds.
* Increase SMP_MAXCPU to 63 for 64-bit builds.
* cpumask_t is 64 bits on 64-bit builds now. It remains 32 bits on 32-bit builds.
* Add #define's for atomic_set_cpumask(), atomic_clear_cpumask(), and atomic_cmpset_cpumask(). Replace all use cases on cpu masks with these functions.
* Add CPUMASK(), BSRCPUMASK(), and BSFCPUMASK() macros. Replace all use cases on cpu masks with these functions.
In particular note that (1 << cpu) just doesn't work with a 64-bit cpumask.
Numerous bits of assembly also had to be adjusted to use e.g. btq instead of btl, etc.
* Change __uint32_t declarations that were meant to be cpu masks to use cpumask_t (most already have).
Also change other bits of code which work on cpu masks to be more agnostic. For example, poll_cpumask0 and lwp_cpumask.
* 64-bit atomic ops cannot use "iq", they must use "r", because most x86-64 do NOT have 64-bit immediate value support.
* Rearrange initial kernel memory allocations to start from KvaStart and not KERNBASE, because only 2GB of KVM is available after KERNBASE.
Certain VM allocations with > 32G of ram can exceed 2GB. For example, vm_page_array[]. 2GB was not enough.
* Remove numerous mdglobaldata fields that are not used.
* Align CPU_prvspace[] for now. Eventually it will be moved into a mapped area. Reserve sufficient space at MPPTDI now, but it is still unused.
* When pre-allocating kernel page table PD entries calculate the number of page table pages at KvaStart and at KERNBASE separately, since the KVA space starting at KERNBASE caps out at 2GB.
* Change kmem_init() and vm_page_startup() to not take memory range arguments. Instead the globals (virtual_start and virtual_end) are manipulated directly.
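The mask-macro semantics described above can be sketched as follows. The names mirror the commit's CPUMASK()/BSFCPUMASK()/BSRCPUMASK(); the Python bodies are illustrative stand-ins for the kernel's C macros, not the actual implementation:

```python
def CPUMASK(cpu):
    """Single-cpu mask. The C version must use an explicitly 64-bit
    constant (e.g. 1UL << cpu): a plain (1 << cpu) promotes to a 32-bit
    int and silently breaks for cpu >= 32."""
    return 1 << cpu

def BSFCPUMASK(mask):
    """Index of the lowest set bit (like the bsfq instruction)."""
    return (mask & -mask).bit_length() - 1

def BSRCPUMASK(mask):
    """Index of the highest set bit (like the bsrq instruction)."""
    return mask.bit_length() - 1

mask = CPUMASK(3) | CPUMASK(40)           # cpu 40 requires a 64-bit mask
print(BSFCPUMASK(mask), BSRCPUMASK(mask))  # 3 40
```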
|
#
ad54aa11 |
| 15-Sep-2010 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Increase x86_64 & vkernel kvm, adjust vm_page_array mapping
* Change the vm_page_array and dmesg space to not use the DMAP area. The space could not be accessed by userland kvm utilities due to that issue.
TODO - reoptimize to use 2M super-pages.
* Auto-size NKPT to accommodate the above changes, as vm_page_array[] is now mapped into the kernel page tables.
* Increase NKPDPE to 128 PDPs to accommodate machines with large amounts of ram. This increases the kernel KVA space to 128G.
|
#
791c6551 |
| 06-Feb-2010 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Expand the x86_64 KVA to 8G
* Our kmem_init() was mapping out the ~6G of KVA below KERNBASE. KERNBASE is at the -2G mark and unlike i386 it does not mark the beginning of KVA.
Add two more globals, virtual2_start and virtual2_end, and adjust kmem_init() to use that space. This fixes kernel_map exhaustion issues on x86_64. Before the change only ~600M of KVA was available after a fresh boot.
* Populate the PDPs around both KERNBASE and at virtual2_start for bootstrapping purposes.
* Adjust kernel_vm_end to start iteration for growkernel purposes at VM_MIN_KERNEL_ADDRESS and no longer use it to figure out the end of KVM for the minidump.
In addition, adjust minidump to dump the entire kernel virtual address space.
* Remove numerous extraneous variables.
* Fix a bug in vm_map_insert() where vm_map->first_free was being incorrectly set when the map does not begin with reserved space.
|