Revision tags: v6.2.1, v6.2.0, v6.3.0, v6.0.1, v6.0.0, v6.0.0rc1, v6.1.0, v5.8.3, v5.8.2, v5.8.1, v5.8.0, v5.9.0, v5.8.0rc1, v5.6.3, v5.6.2, v5.6.1, v5.6.0, v5.6.0rc1, v5.7.0, v5.4.3, v5.4.2, v5.4.1, v5.4.0, v5.5.0, v5.4.0rc1, v5.2.2

6481baf4 | 06-Jun-2018 | Matthew Dillon <dillon@apollo.backplane.com>
kernel - Add fuwordadd32(), fuwordadd64()
* Add locked-bus-cycle fetchadd equivalents for kernel access to userland. Will be used by kern_umtx.c
Revision tags: v5.2.1, v5.2.0, v5.3.0, v5.2.0rc, v5.0.2

a94cabeb | 18-Nov-2017 | Matthew Dillon <dillon@apollo.backplane.com>
vkernel - Sync to recent API changes
* Add uservtophys() to the vkernel code. This is a bit of a quick hack but it should work. It won't be efficient, though.
* vkernel compiles again and appears to run ok.
Revision tags: v5.0.1, v5.0.0, v5.0.0rc2, v5.1.0, v5.0.0rc1, v4.8.1, v4.8.0, v4.6.2, v4.9.0, v4.8.0rc

95270b7e | 01-Feb-2017 | Matthew Dillon <dillon@apollo.backplane.com>
kernel - Many fixes for vkernel support, plus a few main kernel fixes
REAL KERNEL
* The big enchilada is that the main kernel's thread switch code has a small timing window where it clears the PM_ACTIVE bit for the cpu while switching between two threads. However, it *ALSO* checks and avoids loading %cr3 if the two threads have the same pmap.
This results in a situation where an invalidation of the pmap on another cpu may not be visible to the cpu doing the switch, and yet the cpu doing the switch also decides not to reload %cr3 and so does not invalidate the TLB either. The result is a stale TLB and bad things happen.
For now just unconditionally load %cr3 until I can come up with code to handle the case.
This bug is very difficult to reproduce on a normal system, it requires a multi-threaded program doing nasty things (munmap, etc) on one cpu while another thread is switching to a third thread on some other cpu.
* KNOTE after handling the vkernel trap in postsig() instead of before.
* Change the kernel's pmap_inval_smp() code to take a 64-bit npgs argument instead of a 32-bit npgs argument. This fixes situations that crop up when a process uses more than 16TB of address space.
* Add an lfence to the pmap invalidation code that I think might be needed.
* Handle some wrap/overflow cases in pmap_scan() related to the use of large address spaces.
* Fix an unnecessary invltlb in pmap_clearbit() for unmanaged PTEs.
* Test PG_RW after locking the pv_entry to handle potential races.
* Add bio_crc to struct bio. This field is only used for debugging for now but may come in useful later.
* Add some global debug variables in the pmap_inval_smp() and related paths. Refactor the npgs handling.
* Load the tsc_target field after waiting for completion of the previous invalidation op instead of before. Also add a conservative mfence() in the invalidation path before loading the info fields.
* Remove the global pmap_inval_bulk_count counter.
* Adjust swtch.s to always reload the user process %cr3, with an explanation. FIXME LATER!
* Add some test code to vm/swap_pager.c which double-checks that the page being paged out does not get corrupted during the operation. This code is #if 0'd.
* We must hold an object lock around the swp_pager_meta_ctl() call in swp_pager_async_iodone(). I think.
* Reorder when PG_SWAPINPROG is cleared. Finish the I/O before clearing the bit.
* Change the vm_map_growstack() API to pass a vm_map in instead of curproc.
* Use atomic ops for vm_object->generation counts, since objects can be locked shared.
VKERNEL
* Unconditionally save the FP state after returning from VMSPACE_CTL_RUN. This solves a severe FP corruption bug in the vkernel due to calls it makes into libc (which uses %xmm registers all over the place).
This is not a complete fix. We need a formal userspace/kernelspace FP abstraction. Right now the vkernel doesn't have a kernelspace FP abstraction so if a kernel thread switches preemptively bad things happen.
* The kernel tracks and locks pv_entry structures to interlock pte's. The vkernel never caught up, and does not really have a pv_entry or placemark mechanism. The vkernel's pmap really needs a complete re-port from the real-kernel pmap code. Until then, we use poor hacks.
* Use the vm_page's spinlock to interlock pte changes.
* Make sure that PG_WRITEABLE is set or cleared with the vm_page spinlock held.
* Have pmap_clearbit() acquire the pmobj token for the pmap in the iteration. This appears to be necessary, currently, as most of the rest of the vkernel pmap code also uses the pmobj token.
* Fix bugs in the vkernel's swapu32() and swapu64().
* Change pmap_page_lookup() and pmap_unwire_pgtable() to fully busy the page. Note however that a page table page is currently never soft-busied. Also adjust other vkernel code that busies a page table page.
* Fix some sillycode in a pmap->pm_ptphint test.
* Don't inherit e.g. PG_M from the previous pte when overwriting it with a pte of a different physical address.
* Change the vkernel's pmap_clear_modify() function to clear VPTE_RW (which also clears VPTE_M), and not just VPTE_M. Formally we want the vkernel to be notified when a page becomes modified, and it won't be unless we also clear VPTE_RW and force a fault. <--- I may change this back after testing.
* Wrap pmap_replacevm() with a critical section.
* Scrap the old grow_stack() code. vm_fault() and vm_fault_page() handle it (vm_fault_page() just now got the ability).
* Properly flag VM_FAULT_USERMODE.
00eb801e | 01-Feb-2017 | Matthew Dillon <dillon@apollo.backplane.com>
vkernel - Adjust invalidation ABI a bit
* Make some adjustments to tighten up the atomic ops the vkernel uses to modify VPTEs.
* Report unexpected VPTE_M races.
dc039ae0 | 28-Jan-2017 | Matthew Dillon <dillon@apollo.backplane.com>
kernel - Change vm_fault_page[_quick]() semantics + vkernel fixes
* vm_fault_page[_quick]() needs to leave the page busied for PROT_WRITE so modifications made by the caller do not race other operations in the kernel. Modify the API to accommodate the behavior.
* Fix procfs write race with new vm_fault_page() API.
* Fix bugs in ept_swapu32() and ept_swapu64() (vkernel + VMM)
* pmap_fault_page_quick() doesn't understand EPT page tables, have it fail for that case too. This fixes bugs in vkernel + VMM mode.
* Also do some minor normalization of variable names in pmap.c
* vkernel/pmap - Use atomic_swap_long() to modify PTEs instead of a simple (non-atomic) assignment.
* vkernel/pmap - Fix numerous bugs in the VMM and non-VMM code for pmap_kenter*(), pmap_qenter*(), etc.
* vkernel/pmap - Collapse certain pmap_qremove_*() routines into the base pmap_qremove().
7f4bfbe7 | 27-Jan-2017 | Matthew Dillon <dillon@apollo.backplane.com>
vkernel - Partial fix to EPT swapu32 and swapu64
* EPT needed swapu32/swapu64 functions, write them.
* Fix a bounds-checking bug in std_swapu32()
* Misc cleanups.
eb36cb6b | 26-Jan-2017 | Matthew Dillon <dillon@apollo.backplane.com>
vkernel - Synchronize w/master, adjust for vmm_guest_sync_addr() changes
* Synchronize suword, fuword, etc. naming conventions with master.
* Use the new vmm_guest_sync_addr() ABI to more safely make adjustments to virtual page tables.
Revision tags: v4.6.1

afd2da4d | 03-Aug-2016 | Matthew Dillon <dillon@apollo.backplane.com>
kernel - Remove PG_ZERO and zeroidle (page-zeroing) entirely
* Remove the PG_ZERO flag and remove all page-zeroing optimizations, entirely. After doing a substantial amount of testing, these optimizations, which existed all the way back to CSRG BSD, no longer provide any benefit on a modern system.
- Pre-zeroing a page only takes 80ns on a modern cpu. vm_fault overhead in general is at least ~1 microsecond.
- Pre-zeroing a page leads to a cold-cache case on use, forcing the fault source (e.g. a userland program) to actually fetch the data from main memory on its likely immediate use of the faulted page, reducing performance.
- Zeroing the page at fault-time is actually more optimal because it does not require any reading of dynamic ram and leaves the cache hot.
- Multiple synth and build tests show that active idle-time zeroing of pages actually reduces performance somewhat, and incidental allocations of already-zeroed pages (from page-table tear-downs) do not affect performance in any meaningful way.
* Remove bcopyi() and obbcopy() -> collapse into bcopy(). These other versions existed because bcopy() used to be specially-optimized and could not be used in all situations. That is no longer true.
* Remove bcopy function pointer argument to m_devget(). It is no longer used. This function existed to help support ancient drivers which might have needed a special memory copy to read and write mapped data. It has long been supplanted by BUSDMA.
Revision tags: v4.6.0, v4.6.0rc2, v4.6.0rc, v4.7.0, v4.4.3, v4.4.2, v4.4.1, v4.4.0, v4.5.0, v4.4.0rc, v4.2.4, v4.3.1, v4.2.3, v4.2.1, v4.2.0, v4.0.6, v4.3.0, v4.2.0rc, v4.0.5, v4.0.4, v4.0.3, v4.0.2, v4.0.1, v4.0.0, v4.0.0rc3, v4.0.0rc2, v4.0.0rc, v4.1.0, v3.8.2, v3.8.1, v3.6.3, v3.8.0, v3.8.0rc2, v3.9.0, v3.8.0rc, v3.6.2

629f89de | 14-Mar-2014 | Imre Vadasz <imre@vdsz.com>
Implemented casuword for vkernel64. Fix two typos in casuword for pc64.
Revision tags: v3.6.1, v3.6.0, v3.7.1, v3.6.0rc, v3.7.0, v3.4.3, v3.4.2, v3.4.0, v3.4.1, v3.4.0rc, v3.5.0

56f3779c | 27-Mar-2013 | Matthew Dillon <dillon@apollo.backplane.com>
vkernel - Fix copyin/copyout to return the correct error code
* These functions must return EFAULT on error, not a KERN_* error code.
Revision tags: v3.2.2, v3.2.1, v3.2.0, v3.3.0, v3.0.3

06bb314f | 17-May-2012 | Sascha Wildner <saw@online.de>
kernel: Remove some bogus casts of values to their own type.
Revision tags: v3.0.2, v3.0.1, v3.1.0, v3.0.0

9c793cde | 23-Dec-2011 | Sascha Wildner <saw@online.de>
vkernel/vkernel64: Add suword32() to fix build.
86d7f5d3 | 26-Nov-2011 | John Marino <draco@marino.st>
Initial import of binutils 2.22 on the new vendor branch
Future versions of binutils will also reside on this branch rather than continuing to create new binutils branches for each new version.
Revision tags: v2.12.0, v2.13.0, v2.10.1, v2.11.0, v2.10.0

7c4633ad | 31-Jan-2011 | Matthew Dillon <dillon@apollo.backplane.com>
vkernel - Fix lwbuf build error for vkernel64
* Fix a compile error that was preventing vkernel64's from building.
7a683a24 | 20-Jan-2011 | Matthew Dillon <dillon@apollo.backplane.com>
kernel - Optimize the x86-64 lwbuf API
* Change lwbuf_alloc(m) to lwbuf_alloc(m, &lwb_cache), passing a pointer to a struct lwb which lwbuf_alloc() may use if it desires.
* The x86-64 lwbuf_alloc() now just fills in the passed lwb and returns it. The i386 lwbuf_alloc() still uses the objcache w/ its kva mappings. This removes objcache calls from the critical path.
* The x86-64 lwbuf_alloc()/lwbuf_free() functions are now inlines (ALL x86-64 lwbuf functions are now inlines).
Revision tags: v2.9.1, v2.8.2, v2.8.1, v2.8.0, v2.9.0

a3f156de | 28-Aug-2010 | Matthew Dillon <dillon@apollo.backplane.com>
vkernel - Make copyin/copyout mpsafe
* copyin and copyout are mpsafe now that the VM system is locked up, so remove the get_mplock()/rel_mplock() wrapper.
573fb415 | 03-Jul-2010 | Matthew Dillon <dillon@apollo.backplane.com>
kernel - MPSAFE work - Finish tokenizing vm_page.c
* Finish tokenizing vm_page.c
* Certain global procedures, particularly vm_page_hold() and vm_page_unhold(), are best called with the vm_token already held for implied non-blocking operation.
c0a27981 | 16-May-2010 | Sascha Wildner <saw@online.de>
Go over sys/platform and remove dead initialization and unneeded variables.
No functional changes.
Revision tags: v2.6.3, v2.7.3, v2.6.2, v2.7.2, v2.7.1, v2.6.1, v2.7.0, v2.6.0

0e6594a8 | 21-Mar-2010 | Sascha Wildner <saw@online.de>
vkernel64: Additional adjustments (amd64 -> x86_64, recent commits etc.).
Revision tags: v2.5.1, v2.4.1, v2.5.0, v2.4.0

da673940 | 17-Aug-2009 | Jordan Gordeev <jgordeev@dir.bg>
Add platform vkernel64.