History log of /dragonfly/sys/sys/vkernel.h (Results 1 – 21 of 21)
Revision Date Author Comments
Revision tags: v6.2.1, v6.2.0, v6.3.0, v6.0.1, v6.0.0, v6.0.0rc1, v6.1.0, v5.8.3, v5.8.2, v5.8.1, v5.8.0, v5.9.0, v5.8.0rc1, v5.6.3, v5.6.2, v5.6.1, v5.6.0, v5.6.0rc1, v5.7.0, v5.4.3, v5.4.2, v5.4.1, v5.4.0, v5.5.0, v5.4.0rc1, v5.2.2, v5.2.1, v5.2.0, v5.3.0, v5.2.0rc, v5.0.2, v5.0.1, v5.0.0, v5.0.0rc2, v5.1.0, v5.0.0rc1, v4.8.1
# 73d64b98 03-Apr-2017 Matthew Dillon <dillon@apollo.backplane.com>

kernel - Implement NX (2)

* Flesh out NX implementation for main kernel.

* Implement NX support for the vkernel.
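
For context: on x86_64 the no-execute permission is a single page-table-entry
bit (architecturally bit 63). A minimal, hypothetical sketch of flagging a PTE
non-executable (the macro and function names are assumptions, not DragonFly's
spelling):

    #include <stdint.h>

    #define PTE_NX  (1ULL << 63)        /* architectural NX bit */

    static inline uint64_t
    pte_set_noexec(uint64_t pte)
    {
            /* Instruction fetches from pages mapped by this PTE will fault. */
            return (pte | PTE_NX);
    }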


Revision tags: v4.8.0, v4.6.2, v4.9.0, v4.8.0rc
# 7f4bfbe7 27-Jan-2017 Matthew Dillon <dillon@apollo.backplane.com>

vkernel - Partial fix to EPT swapu32 and swapu64

* EPT needed swapu32/swapu64 functions, write them.

* Fix bounds checking bug in std_swapu32()

* Misc cleanups.
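
For context: a swapu32-style helper atomically exchanges a 32-bit word at a
user virtual address and returns the old value; with EPT the host cannot
dereference the guest address directly and must walk the guest page tables
first. A minimal sketch of the semantics only (the function name and compiler
builtin are assumptions, not the kernel implementation, which must also handle
address resolution and faults):

    #include <stdint.h>

    static uint32_t
    swapu32_sketch(volatile uint32_t *uaddr, uint32_t nval)
    {
            /* Atomically exchange *uaddr with nval, returning the old value. */
            return (__atomic_exchange_n(uaddr, nval, __ATOMIC_SEQ_CST));
    }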


# 76f1911e 23-Jan-2017 Matthew Dillon <dillon@apollo.backplane.com>

kernel - pmap and vkernel work

* Remove the pmap.pm_token entirely. The pmap is currently protected
primarily by fine-grained locks and the vm_map lock. The intention
is to eventually be able to protect it without the vm_map lock at all.

* Enhance pv_entry acquisition (representing PTE locations) to include
a placemarker facility for non-existent PTEs, allowing the PTE location
to be locked whether a pv_entry exists for it or not.

* Fix dev_dmmap (struct dev_mmap) (for future use); it was returning a
page index for physical memory as a 32-bit integer instead of a 64-bit
integer.

* Use pmap_kextract() instead of pmap_extract() where appropriate.

* Put the token contention test back in kern_clock.c for real kernels
so token contention shows up as sys% instead of idle%.

* Modify the pmap_extract() API to also return a locked pv_entry,
and add pmap_extract_done() to release it. Adjust users of
pmap_extract(). (A usage sketch follows this list.)

* Change madvise/mcontrol MADV_INVAL (used primarily by the vkernel)
to use a shared vm_map lock instead of an exclusive lock. This
significantly improves the vkernel's performance and significantly
reduces stalls and glitches when typing in a vkernel under heavy load.

* The new placemarkers also have the side effect of fixing several
difficult-to-reproduce bugs in the pmap code, by ensuring that
shared and unmanaged pages are properly locked whereas before only
managed pages (with pv_entry's) were properly locked.

* Adjust the vkernel's pmap code to use atomic ops in numerous places.

* Rename the pmap_change_wiring() call to pmap_unwire(). The routine
was only being used to unwire (and could only safely be called for
unwiring anyway). Remove the unused 'wired' and the 'entry'
arguments.

Also change how pmap_unwire() works to remove a small race condition.

* Fix race conditions in the vmspace_*() system calls which could lead
to pmap corruption. Note that the vkernel did not trigger any of
these conditions; I found them while looking for another bug.

* Add missing maptypes to procfs's /proc/*/map report.
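
The pmap_extract() item above implies roughly the following usage pattern; a
hedged sketch that assumes the handle is an opaque pointer (the commit does
not spell out the exact prototypes):

    #include <sys/types.h>
    #include <vm/pmap.h>

    static vm_paddr_t
    lookup_pa_sketch(pmap_t pmap, vm_offset_t va)
    {
            void *handle;
            vm_paddr_t pa;

            pa = pmap_extract(pmap, va, &handle);   /* pv_entry/placemarker now locked */
            /* ... safe to examine the page backing 'pa' here ... */
            pmap_extract_done(handle);              /* release the lock */
            return (pa);
    }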



# d95d5e03 22-Jan-2017 Matthew Dillon <dillon@apollo.backplane.com>

vkernel - Fix vmspace_*() call bottleneck

* Remove the need to take a global token in most cases by caching ve's;
hold the token shared for lookups when a ve is not found in the cache.


Revision tags: v4.6.1, v4.6.0, v4.6.0rc2, v4.6.0rc, v4.7.0, v4.4.3, v4.4.2, v4.4.1, v4.4.0, v4.5.0, v4.4.0rc, v4.2.4, v4.3.1, v4.2.3, v4.2.1, v4.2.0, v4.0.6, v4.3.0, v4.2.0rc, v4.0.5, v4.0.4, v4.0.3, v4.0.2, v4.0.1, v4.0.0, v4.0.0rc3, v4.0.0rc2, v4.0.0rc, v4.1.0, v3.8.2, v3.8.1, v3.6.3, v3.8.0, v3.8.0rc2, v3.9.0, v3.8.0rc, v3.6.2, v3.6.1, v3.6.0, v3.7.1, v3.6.0rc, v3.7.0
# a86ce0cd 20-Sep-2013 Matthew Dillon <dillon@apollo.backplane.com>

hammer2 - Merge Mihai Carabas's VKERNEL/VMM GSOC project into the main tree

* This merge contains work primarily by Mihai Carabas, with some misc
fixes also by Matthew Dillon.

* Special note on GSOC core

This is, needless to say, a huge amount of work compressed down into a
few paragraphs of comments. Adds the pc64/vmm subdirectory and tons
of stuff to support hardware virtualization in guest-user mode, plus
the ability for programs (vkernels) running in this mode to make normal
system calls to the host.

* Add system call infrastructure for VMM mode operations in kern/sys_vmm.c
which vectors through a structure to machine-specific implementations.
(A sketch of such a dispatch structure follows this list.)

vmm_guest_ctl_args()
vmm_guest_sync_addr_args()

vmm_guest_ctl_args() - bootstrap VMM and EPT modes. Copydown the original
user stack for EPT (since EPT 'physical' addresses cannot reach that far
into the backing store represented by the process's original VM space).
Also installs the GUEST_CR3 for the guest using parameters supplied by
the guest.

vmm_guest_sync_addr_args() - A host helper function that the vkernel can
use to invalidate page tables on multiple real cpus. This is a lot more
efficient than having the vkernel try to do it itself with IPI signals
via cpusync*().

* Add Intel VMX support to the host infrastructure. Again, tons of work
compressed down into a one paragraph commit message. Intel VMX support
added. AMD SVM support is not part of this GSOC and not yet supported
by DragonFly.

* Remove PG_* defines for PTE's and related mmu operations. Replace with
a table lookup so the same pmap code can be used for normal page tables
and also EPT tables.

* Also include X86_PG_V defines specific to normal page tables for a few
situations outside the pmap code.

* Adjust DDB to disassemble SVM related (intel) instructions.

* Add infrastructure to exit1() to deal with related structures.

* Optimize pfind() and pfindn() to remove the global token when looking
up the current process's PID (Matt)

* Add support for EPT (double layer page tables). This primarily required
adjusting the pmap code to use a table lookup to get the PG_* bits.

Add an indirect vector for copyin, copyout, and other user address space
copy operations to support manual walks when EPT is in use.

A multitude of system calls which manually looked up user addresses via
the vm_map now need a VMM layer call to translate EPT.

* Remove the MP lock from trapsignal() use cases in trap().

* (Matt) Add pthread_yield()s in most spin loops to help situations where
the vkernel is running on more cpu's than the host has, and to help with
scheduler edge cases on the host.

* (Matt) Add a pmap_fault_page_quick() infrastructure that vm_fault_page()
uses to try to shortcut operations and avoid locks. Implement it for
pc64. This function checks whether the page is already faulted in as
requested by looking up the PTE. If not it returns NULL and the full
blown vm_fault_page() code continues running.

* (Matt) Remove the MP lock from most of the vkernel's trap() code.

* (Matt) Use a shared spinlock when possible for certain critical paths
related to the copyin/copyout path.
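
The "vectors through a structure" item above suggests roughly the following
dispatch layout; a sketch under assumed names (the real definitions live in
kern/sys_vmm.c and the VMM headers and may differ):

    /* Opaque here; carries the guest bootstrap parameters (GUEST_CR3,
     * stack copydown information, ...). */
    struct vmm_guest_options;

    /*
     * Hypothetical per-implementation vector: the generic vmm_guest_ctl()
     * and vmm_guest_sync_addr() system calls would call through whichever
     * instance was installed at boot (Intel VMX in this commit; AMD SVM
     * not yet supported).
     */
    struct vmm_ctl_sketch {
            const char *name;                       /* e.g. "Intel VMX" */
            int (*guest_ctl)(int op, struct vmm_guest_options *opts);
            int (*guest_sync_addr)(long *dstaddr, long *srcaddr);
    };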



Revision tags: v3.4.3, v3.4.2, v3.4.0, v3.4.1, v3.4.0rc, v3.5.0, v3.2.2, v3.2.1, v3.2.0, v3.3.0, v3.0.3, v3.0.2, v3.0.1, v3.1.0, v3.0.0
# 86d7f5d3 26-Nov-2011 John Marino <draco@marino.st>

Initial import of binutils 2.22 on the new vendor branch

Future versions of binutils will also reside on this branch rather
than continuing to create new binutils branches for each new version.


Revision tags: v2.12.0, v2.13.0, v2.10.1, v2.11.0, v2.10.0, v2.9.1, v2.8.2, v2.8.1, v2.8.0, v2.9.0, v2.6.3, v2.7.3, v2.6.2, v2.7.2, v2.7.1, v2.6.1, v2.7.0, v2.6.0, v2.5.1, v2.4.1, v2.5.0, v2.4.0, v2.3.2, v2.3.1, v2.2.1, v2.2.0, v2.3.0, v2.1.1, v2.0.1
# 39005e16 01-Jul-2007 Matthew Dillon <dillon@dragonflybsd.org>

More multi-threaded support for virtualization. Move the save context
from the process structure to the lwp structure, cleaning up the vmspace
support structures at the same time. This allows multiple LWPs in the
same process to be running a virtualization context at the same time.



# 287ebb09 29-Jun-2007 Matthew Dillon <dillon@dragonflybsd.org>

Implement struct lwp->lwp_vmspace. Leave p_vmspace intact. This allows
vkernels to run threaded and to run emulated VM spaces on a per-thread basis.
struct proc->p_vmspace is left intact, making it easy to switch into and out
of an emulated VM space. This is needed for the virtual kernel SMP work.

This also gives us the flexibility to run emulated VM spaces in their own
threads, or in a limited number of separate threads. Linux does this and
they say it improved performance. I don't think it necessarily improved
performance, but it's nice to have the flexibility to do it in the future.



# e7f2d7de 08-Jan-2007 Matthew Dillon <dillon@dragonflybsd.org>

Use CBREAK mode for the console.

Adjust code for new vm_fault_page*() semantics (it now marks the page as
referenced and dirties it automatically so the caller doesn't have to).

Fix a VPTE_W or VPTE_WIRED snafu.
Fix a PG_MANAGED vs VPTE_MANAGED snafu.

Add additional VPTE_ bit definitions.



# 4e7c41c5 08-Jan-2007 Matthew Dillon <dillon@dragonflybsd.org>

Modify the trapframe sigcontext, ucontext, etc. Add %gs to the trapframe
and xflags and an expanded floating point save area to sigcontext/ucontext
so traps can be fully specified.

Remove all the %gs hacks in the system code and signal trampoline and handle
%gs faults natively, like we do %fs faults.

Implement writebacks to the virtual page table to set VPTE_M and VPTE_A and
add checks for VPTE_R and VPTE_W.

Consolidate the TLS save area into a MD structure that can be accessed by MI
code.

Reformulate the vmspace_ctl() system call to allow an extended context to be
passed (for TLS info and soon the FP and eventually the LDT).

Adjust the GDB patches to recognize the new location of %gs.

Properly detect non-exception returns to the virtual kernel when the virtual
kernel is running an emulated user process and receives a signal.

And misc other work on the virtual kernel.
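
For the vmspace_ctl() reformulation above, the post-change call looks roughly
as follows; a sketch that assumes the VMSPACE_CTL_RUN command, the struct
vextframe extended-context argument, and the prototype carried by
sys/vkernel.h (treat all three as assumptions here):

    #include <machine/frame.h>
    #include <sys/vkernel.h>

    /*
     * Sketch: run an emulated user context.  'tf' is the register frame
     * and 'vx' the extended context (TLS now; FP and LDT planned per the
     * commit message).
     */
    static int
    run_emulated_context_sketch(void *id, struct trapframe *tf,
                                struct vextframe *vx)
    {
            return (vmspace_ctl(id, VMSPACE_CTL_RUN, tf, vx));
    }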



# c5a45196 31-Dec-2006 Matthew Dillon <dillon@dragonflybsd.org>

zbootinit() was being called with too few pv_entry's on machines with small
amounts of memory.

Move the vm.kvm_* sysctls from MD to MI source files.


# 9c059ae3 04-Dec-2006 Matthew Dillon <dillon@dragonflybsd.org>

Misc vkernel work.


# 4a22e893 20-Oct-2006 Matthew Dillon <dillon@dragonflybsd.org>

Add a ton of infrastructure for VKERNEL support. Add code for intercepting
traps and system calls, for switching to and executing a foreign VM space,
and for accessing trap frames.


# c263ae77 13-Sep-2006 Matthew Dillon <dillon@dragonflybsd.org>

Clean up some #include's that shouldn't have been in there. Unbreak
buildworld.


# afeabdca 13-Sep-2006 Matthew Dillon <dillon@dragonflybsd.org>

MAP_VPAGETABLE support part 3/3.

Implement a new system call called mcontrol() which is an extension of
madvise(), adding an additional 64 bit argument. Add two new advisories,
MADV_INVAL and MADV_SETMAP.

MADV_INVAL will invalidate the pmap for the specified virtual address
range. You need to do this for the virtual addresses affected by changes
made in a virtual page table.

MADV_SETMAP sets the top-level page table entry for the virtual page table
governing the mapped range. It only works for memory governed by a virtual
page table and strange things will happen if you only set the root
page table entry for part of the virtual range.

Further refine the virtual page table format. Keep with 32 bit VPTE's for
the moment, but properly implement VPTE_PS and VPTE_V. VPTE_PS can be
used to support 4MB linear maps in the top level page table and it can also
be used when specifying the 'root' VPTE to disable the page table entirely
and just linear map the backing store. VPTE_V is the 'valid' bit (before
it was inverted, now it is normal).
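
A hedged usage sketch of the interface described above, assuming the
prototype int mcontrol(void *, size_t, int, off_t) and the flag spellings
given in the message (not verified against the headers of that era):

    #include <sys/types.h>
    #include <sys/mman.h>

    static void *
    map_with_vpagetable_sketch(size_t len, off_t root_vpte)
    {
            void *base;

            base = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_ANON | MAP_VPAGETABLE, -1, 0);
            if (base == MAP_FAILED)
                    return (NULL);

            /* Install the top-level VPTE governing the whole mapping ... */
            mcontrol(base, len, MADV_SETMAP, root_vpte);
            /* ... and invalidate the pmap so any stale translations for the
             * range are flushed after the virtual page table changes. */
            mcontrol(base, len, MADV_INVAL, 0);
            return (base);
    }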



# 75f59a66 12-Sep-2006 Matthew Dillon <dillon@dragonflybsd.org>

MAP_VPAGETABLE support part 2/3.

Implement preliminary virtual page table handling code in vm_fault. This
code is strictly temporary so subsystem and userland interactions can be
tested, but the real code will be very similar.



# a8a94599 07-Feb-2011 Sascha Wildner <saw@online.de>

Remove useless belt and suspenders include guards in some of our headers.

For these headers:

/usr/include/machine/atomic.h
/usr/include/machine/bus_dma.h
/usr/include/machine/coredump.h
/usr/include/machine/cpufunc.h
/usr/include/machine/db_machdep.h
/usr/include/machine/elf.h
/usr/include/machine/endian.h
/usr/include/machine/frame.h
/usr/include/machine/limits.h
/usr/include/machine/npx.h
/usr/include/machine/profile.h
/usr/include/machine/segments.h
/usr/include/machine/stdarg.h
/usr/include/machine/stdint.h
/usr/include/machine/trap.h
/usr/include/machine/tss.h
/usr/include/machine/ucontext.h
/usr/include/machine/vframe.h
/usr/include/machine/vm86.h

All these headers #define _CPU_... and not _MACHINE_... even though they
are in /usr/include/machine. And the headers themselves have include
guards already. So there's little point in having them around the actual
#include additionally.
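
To illustrate the pattern being removed (a hypothetical before/after; the
commit lists the affected headers but not their contents), a consumer header
such as sys/vkernel.h would previously have done:

    /* Before: the outer guard tests _MACHINE_ATOMIC_H_, which the included
     * header never defines (it defines _CPU_ATOMIC_H_), and that header
     * already guards itself anyway, so the wrapper is useless. */
    #ifndef _MACHINE_ATOMIC_H_
    #include <machine/atomic.h>
    #endif

    /* After: the plain include is sufficient. */
    #include <machine/atomic.h>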



# af2b4857 13-Jun-2010 Matthew Dillon <dillon@apollo.backplane.com>

kernel - MPSAFE work - tokenize more vm stuff

* Tokenize the vkernel entry points. The module is fairly compact so
a per-vkernel token was used right off the bat instead of a global token.

* Also fix a couple of races in the vkernel implementation to make things
more robust.
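
A sketch of the locking pattern described above; the struct and member names
are assumptions (the per-vkernel structure lives in sys/vkernel.h):

    #include <sys/thread.h>         /* LWKT tokens */
    #include <sys/vkernel.h>        /* per-vkernel structure (assumed layout) */

    static void
    vkernel_op_sketch(struct vkernel_proc *vkp)
    {
            lwkt_gettoken(&vkp->token);     /* per-vkernel token rather than a global */
            /* ... look up or modify this vkernel's vmspace entries ... */
            lwkt_reltoken(&vkp->token);
    }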



# 8608b858 23-Mar-2010 Matthew Dillon <dillon@apollo.backplane.com>

vkernel64 - Cleanup, unbreak 32 bit

* Remove stdio cruft from init_main.c

* Change vpte_t from 64-bits to u_long, so it will be 32 bits
on 32 bit machines and 64 bits on 64 bit machines. 32 bit
machines can't handle the address space breakdown or issue
atomic ops on 64 bit quantities.

Adjust various defines in sys/vkernel.h to accommodate both
cases.

* Adjust atomic ops used by vm/vm_fault.c for virtual page
table access (int -> long).

* Adjust atomic ops and types used by the 32 bit
platform/vkernel code, primarily (int -> long) and
also some vm_offset_t's which are really vm_paddr_t's
or vpte_t's.

* Adjust src/test/vkernel/Makefile to run properly on
a 32 or 64 bit system.
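
A sketch of the type change described above (the real definition and the
accompanying VPTE_* adjustments live in sys/vkernel.h):

    #include <sys/types.h>

    /*
     * vpte_t follows the machine word size: 32-bit platforms can still
     * issue atomic ops on it, while 64-bit platforms get the larger
     * address space breakdown.
     */
    typedef u_long vpte_t;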



# 61cddc1c 01-Jan-2010 Jordan Gordeev <jgordeev@dir.bg>

amd64: Add kernel support for 64-bit virtual page tables.
WARNING: This change removes support for 32-bit vpagetables.


# bb47c072 18-Jan-2010 Matthew Dillon <dillon@apollo.backplane.com>

kernel - Fix vkernel_trap

* vkernel_trap restores the trapframe for the original vkernel call to
vmspace_ctl(), but only the syscall trap code was actually setting
up the frame for a syscall-return.

The other calls to vkernel_trap() (signal, page-fault, other traps)
were not properly adjusting the frame for a syscall-return and it
is only pure luck that it didn't bite us until now.

* Add a per-platform cpu_vkernel_trap() which does the syscall-return
fixup at the end.

Reported-by: Antonio Huete Jimenez <tuxillo@quantumachine.net>
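
A hedged sketch of what a per-platform cpu_vkernel_trap() must arrange on
x86_64, per the description above: make the saved frame look like a return
from the original vmspace_ctl() syscall. The field and flag names follow the
usual BSD x86_64 conventions and are assumptions here, not the committed code:

    #include <machine/frame.h>
    #include <machine/psl.h>

    static void
    cpu_vkernel_trap_sketch(struct trapframe *frame, int error)
    {
            /* Per the syscall ABI: %rax carries the return value and the
             * carry flag signals an error to the caller. */
            frame->tf_rax = error;
            if (error)
                    frame->tf_rflags |= PSL_C;
            else
                    frame->tf_rflags &= ~PSL_C;
    }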
