History log of /dragonfly/sys/kern/kern_xio.c (Results 1 – 25 of 25)
Revision Date Author Comments
Revision tags: v6.2.1, v6.2.0, v6.3.0, v6.0.1, v6.0.0, v6.0.0rc1, v6.1.0, v5.8.3, v5.8.2, v5.8.1, v5.8.0, v5.9.0, v5.8.0rc1, v5.6.3
# 13dd34d8 18-Oct-2019 zrj <rimvydas.jasinskas@gmail.com>

kernel: Cleanup <sys/uio.h> issues.

The iovec_free() inline greatly complicates inclusion of this header. The
NULL check is not always visible from <sys/_null.h>. Luckily only three
kernel sources need it: kern_subr.c, sys_generic.c and uipc_syscalls.c.
Also just a single dev/drm source makes use of 'struct uio'.
* Include <sys/uio.h> explicitly first in drm_fops.c to avoid kfree()
macro override in drm compat layer.
* Use <sys/_uio.h> where only the enums and struct uio are needed, but ensure
that userland will not include it, for possible later <sys/user.h> use.
* Stop using <sys/vnode.h> as a shortcut for the uiomove*() prototypes. The
uiomove*() family of functions may transfer data across the kernel/user
space boundary; the presence of this header explicitly marks sources as such.
* Prefer to add <sys/uio.h> after <sys/systm.h>, but before <sys/proc.h>
and definitely before <sys/malloc.h> (except for the three sources mentioned
above). This will allow <sys/malloc.h> to be removed from <sys/uio.h> later on.
* Adjust <sys/user.h> to use component headers instead of <sys/uio.h>.

While there, take the opportunity for a minimal whitespace cleanup.

No functional differences observed in compiler intermediates.
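
A minimal sketch of the include ordering the commit recommends (the file and
the surrounding includes are illustrative only, not taken from the diff):

    /* hypothetical kernel source that moves data to/from userland */
    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/uio.h>        /* after <sys/systm.h>; source uses uiomove*() */
    #include <sys/proc.h>       /* after <sys/uio.h> */
    #include <sys/malloc.h>     /* and definitely after <sys/uio.h> */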


Revision tags: v5.6.2, v5.6.1, v5.6.0, v5.6.0rc1, v5.7.0, v5.4.3, v5.4.2, v5.4.1, v5.4.0, v5.5.0, v5.4.0rc1, v5.2.2, v5.2.1, v5.2.0, v5.3.0, v5.2.0rc, v5.0.2, v5.0.1, v5.0.0, v5.0.0rc2, v5.1.0, v5.0.0rc1, v4.8.1, v4.8.0, v4.6.2, v4.9.0, v4.8.0rc
# a36803d2 27-Jan-2017 Matthew Dillon <dillon@apollo.backplane.com>

kernel - Remove numerous user VM page functions

* Remove vmapbuf() and vunmapbuf(), they are unsafe.

* Remove xio_init_ubuf() - It is not used and is unsafe.

* Remove vm_fault_quick_hold_pages() - It is not used and is unsafe.


Revision tags: v4.6.1, v4.6.0, v4.6.0rc2, v4.6.0rc, v4.7.0, v4.4.3, v4.4.2, v4.4.1, v4.4.0, v4.5.0, v4.4.0rc, v4.2.4, v4.3.1, v4.2.3, v4.2.1, v4.2.0, v4.0.6, v4.3.0, v4.2.0rc, v4.0.5, v4.0.4, v4.0.3, v4.0.2, v4.0.1, v4.0.0, v4.0.0rc3, v4.0.0rc2, v4.0.0rc, v4.1.0, v3.8.2, v3.8.1, v3.6.3, v3.8.0, v3.8.0rc2, v3.9.0, v3.8.0rc, v3.6.2, v3.6.1, v3.6.0, v3.7.1, v3.6.0rc, v3.7.0, v3.4.3, v3.4.2, v3.4.0, v3.4.1, v3.4.0rc, v3.5.0, v3.2.2, v3.2.1, v3.2.0, v3.3.0, v3.0.3, v3.0.2, v3.0.1, v3.1.0, v3.0.0
# 86d7f5d3 26-Nov-2011 John Marino <draco@marino.st>

Initial import of binutils 2.22 on the new vendor branch

Future versions of binutils will also reside on this branch rather
than continuing to create new binutils branches for each new version.


# b12defdc 18-Oct-2011 Matthew Dillon <dillon@apollo.backplane.com>

kernel - Major SMP performance patch / VM system, bus-fault/seg-fault fixes

This is a very large patch which reworks locking in the entire VM subsystem,
concentrated on VM objects and the x86-64 pmap code. These fixes remove
nearly all the spin lock contention for non-threaded VM faults and narrows
contention for threaded VM faults to just the threads sharing the pmap.

Multi-socket many-core machines will see a 30-50% improvement in parallel
build performance (tested on a 48-core opteron), depending on how well
the build parallelizes.

As part of this work a long-standing problem on 64-bit systems where programs
would occasionally seg-fault or bus-fault for no reason has been fixed. The
problem was related to races between vm_fault, the vm_object collapse code,
and the vm_map splitting code.

* Most uses of vm_token have been removed. All uses of vm_spin have been
removed. These have been replaced with per-object tokens and per-queue
(vm_page_queues[]) spin locks.

Note in particular that since we still have the page coloring code, the
PQ_FREE and PQ_CACHE queues are actually many queues, individually
spin-locked, resulting in excellent MP page allocation and freeing
performance.

* Reworked vm_page_lookup() and vm_object->rb_memq. All (object,pindex)
lookup operations are now covered by the vm_object hold/drop system,
which utilizes pool tokens on vm_objects. Calls now require that the
VM object be held in order to ensure a stable outcome (see the sketch at
the end of this message).

Also added vm_page_lookup_busy_wait(), vm_page_lookup_busy_try(),
vm_page_busy_wait(), vm_page_busy_try(), and other API functions
which integrate the PG_BUSY handling.

* Added OBJ_CHAINLOCK. Most vm_object operations are protected by
the vm_object_hold/drop() facility which is token-based. Certain
critical functions which must traverse backing_object chains use
a hard-locking flag and lock almost the entire chain as it is traversed
to prevent races against object deallocation, collapses, and splits.

The last object in the chain (typically a vnode) is NOT locked in
this manner, so concurrent faults which terminate at the same vnode will
still have good performance. This is important e.g. for parallel compiles
which might be running dozens of the same compiler binary concurrently.

* Created a per vm_map token and removed most uses of vmspace_token.

* Removed the mp_lock in sys_execve(). It has not been needed in a while.

* Add kmem_lim_size() which returns approximate available memory (reduced
by available KVM), in megabytes. This is now used to scale up the
slab allocator cache and the pipe buffer caches to reduce unnecessary
global kmem operations.

* Rewrote vm_page_alloc(), various bits in vm/vm_contig.c, the swapcache
scan code, and the pageout scan code. These routines were rewritten
to use the per-queue spin locks.

* Replaced the exponential backoff in the spinlock code with something
a bit less complex and cleaned it up.

* Restructured the IPIQ func/arg1/arg2 array for better cache locality.
Removed the per-queue ip_npoll and replaced it with a per-cpu gd_npoll,
which is used by other cores to determine if they need to issue an
actual hardware IPI or not. This reduces hardware IPI issuance
considerably (and the removal of the decontention code reduced it even
more).

* Temporarily removed the lwkt thread fairq code and disabled a number of
features. These will be worked back in once we track down some of the
remaining performance issues.

Temporarily removed the lwkt thread resequencer for tokens for the same
reason. This might wind up being permanent.

Added splz_check()s in a few critical places.

* Increased the number of pool tokens from 1024 to 4001 and went to a
prime-number mod algorithm to reduce overlaps.

* Removed the token decontention code. This was a bit of an eyesore and
while it did its job when we had global locks it just gets in the way now
that most of the global locks are gone.

Replaced the decontention code with a fall back which acquires the
tokens in sorted order, to guarantee that deadlocks will always be
resolved eventually in the scheduler.

* Introduced a simplified spin-for-a-little-while function
_lwkt_trytoken_spin() that the token code now uses rather than giving
up immediately.

* The vfs_bio subsystem no longer uses vm_token and now uses the
vm_object_hold/drop API for buffer cache operations, resulting
in very good concurrency.

* Gave the vnode its own spinlock instead of sharing vp->v_lock.lk_spinlock,
which fixes a deadlock.

* Adjusted all platform pmap.c's to handle the new main kernel APIs. The
i386 pmap.c is still a bit out of date but should be compatible.

* Completely rewrote very large chunks of the x86-64 pmap.c code. The
critical path no longer needs pmap_spin but pmap_spin itself is still
used heavily, particularly in the pv_entry handling code.

A per-pmap token and per-pmap object are now used to serialize pmap
access and vm_page lookup operations when needed.

The x86-64 pmap.c code now uses only vm_page->crit_count instead of
both crit_count and hold_count, which fixes races against other parts of
the kernel that use vm_page_hold().

_pmap_allocpte() mechanics have been completely rewritten to remove
potential races. Much of pmap_enter() and pmap_enter_quick() has also
been rewritten.

Many other changes.

* The following subsystems (and probably more) no longer use the vm_token
or vmobj_token in critical paths:

x The swap_pager now uses the vm_object_hold/drop API instead of vm_token.

x mmap() and vm_map/vm_mmap in general now use the vm_object_hold/drop API
instead of vm_token.

x vnode_pager

x zalloc

x vm_page handling

x vfs_bio

x umtx system calls

x vm_fault and friends

* Minor fixes to fill_kinfo_proc() to deal with process scan panics (ps)
revealed by recent global lock removals.

* lockmgr() locks no longer support LK_NOSPINWAIT. Spin locks are
unconditionally acquired.

* Replaced netif/e1000's spinlocks with lockmgr locks. The spinlocks
were not appropriate owing to the large context they were covering.

* Misc atomic ops added
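
A hedged sketch of the (object, pindex) lookup pattern this patch introduces,
as described in the vm_page_lookup() bullet above; the prototypes are
approximate and should be checked against vm/vm_object.h and vm/vm_page.h:

    vm_page_t m;

    vm_object_hold(object);                 /* stabilize the object */
    m = vm_page_lookup_busy_wait(object, pindex, TRUE, "pgwait");
    if (m) {
            /* page is looked up and busied atomically against the object */
            vm_page_wakeup(m);              /* clear the busy state when done */
    }
    vm_object_drop(object);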


Revision tags: v2.12.0, v2.13.0, v2.10.1, v2.11.0, v2.10.0, v2.9.1, v2.8.2, v2.8.1, v2.8.0, v2.9.0, v2.6.3, v2.7.3, v2.6.2, v2.7.2, v2.7.1, v2.6.1, v2.7.0, v2.6.0, v2.5.1, v2.4.1, v2.5.0, v2.4.0
# e54488bb 19-Aug-2009 Matthew Dillon <dillon@apollo.backplane.com>

AMD64 - Refactor uio_resid and size_t assumptions.

* uio_resid changed from int to size_t (size_t == unsigned long equivalent).

* size_t assumptions in most kernel code have been refactored to operate in a
64-bit environment.

* In addition, the 2G limitation for VM-related system calls such as mmap()
has been removed in 32-bit environments. Note, however, that because
read() and write() return ssize_t, these functions are still limited
to a 2G byte count in 32-bit environments.


Revision tags: v2.3.2, v2.3.1, v2.2.1, v2.2.0, v2.3.0, v2.1.1, v2.0.1
# 17cde63e 09-May-2008 Matthew Dillon <dillon@dragonflybsd.org>

Fix many bugs and issues in the VM system, particularly related to
heavy paging.

* (cleanup) PG_WRITEABLE is now set by the low level pmap code and not by
high level code. It means 'This page may contain a managed page table
mapping which is writeable', meaning that hardware can dirty the page
at any time. The page must be tested via appropriate pmap calls before
being disposed of.

* (cleanup) PG_MAPPED is now handled by the low level pmap code and only
applies to managed mappings. There is still a bit of cruft left over
related to the pmap code's page table pages but the high level code is now
clean.

* (bug) Various XIO, SFBUF, and MSFBUF routines which bypass normal paging
operations were not properly dirtying pages when the caller intended
to write to them.

* (bug) vfs_busy_pages in kern/vfs_bio.c had a busy race. Separate the code
out to ensure that we have marked all the pages as undergoing IO before we
call vm_page_protect(). vm_page_protect(... VM_PROT_NONE) can block
under very heavy paging conditions and if the pages haven't been marked
for IO that could blow up the code.

* (optimization) When busying pages for write IO, downgrade the page table
mappings to read-only instead of removing them entirely.

* (bug) In platform/pc32/i386/pmap.c fix various places where
pmap_inval_add() was being called at the wrong point. Only one was
critical, in pmap_enter(), where pmap_inval_add() was being called so far
away from the pmap entry being modified that it could wind up being flushed
out prior to the modification, breaking the cpusync required.

pmap.c also contains most of the work involved in the PG_MAPPED and
PG_WRITEABLE changes.

* (bug) Close numerous pte updating races with hardware setting the
modified bit. There is still one race left (in pmap_enter()).

* (bug) Disable pmap_copy() entirely. Fix most of the bugs anyway, but
there is still one left in the handling of the srcmpte variable.

* (cleanup) Change vm_page_dirty() from an inline to a real procedure, and
move the code which sets the object to writeable/maybedirty into
vm_page_dirty().

* (bug) Calls to vm_page_protect(... VM_PROT_NONE) can block. Fix all cases
where this call was made with a non-busied page. All such calls are
now made with a busied page, preventing blocking races from re-dirtying
or remapping the page unexpectedly.

(Such blockages could only occur during heavy paging activity where the
underlying page table pages are being actively recycled).

* (bug) Fix the pageout code to properly mark pages as undergoing I/O before
changing their protection bits.

* (bug) Busy pages while they are undergoing zeroing or partial zeroing in
the vnode pager (vm/vnode_pager.c) to avoid unexpected effects.


# 83269e7d 13-Aug-2007 Matthew Dillon <dillon@dragonflybsd.org>

Add xio_init_pages(), which builds an XIO based on an array of vm_page_t's.
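
A hedged usage sketch (not from the commit); the authoritative prototype and
the XIOF_* flag names are in sys/xio.h:

    /* wrap an existing array of vm_page_t's in an XIO, then release it */
    struct xio xio;
    int error;

    error = xio_init_pages(&xio, pages, npages, XIOF_READ);
    if (error == 0) {
            /* hand &xio to the consumer (pipe, syslink, device, ...) */
            xio_release(&xio);
    }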


# c4734fe7 29-Jun-2007 Matthew Dillon <dillon@dragonflybsd.org>

Add a new flag, XIOF_VMLINEAR, which requires that the buffer being mapped
be contiguous within a single VM object.


# 0a7648b9 29-Jun-2007 Matthew Dillon <dillon@dragonflybsd.org>

Get out-of-band DMA buffers working for user<->user syslinks. This
allows the syslink protocol to operate in a manner very similar to the
way sophisticated DMA hardware works, where you have a DMA buffer attached
to a command.

Augment the syslink protocol to implement read, write, and read-modify-write
style commands.

Obtain the MP lock in places where needed because fileops are called without
it held now. Our VM ops are not MP safe yet.

Use an XIO to map VM pages between userland processes. Add additional
XIO functions to aid in copying data to and from a userland context. This
removes an extra buffer copy from the path and allows us to manipulate pure
vm_page_t's for just about everything.


# 06c5a8d6 11-Jan-2007 Matthew Dillon <dillon@dragonflybsd.org>

Replace remaining uses of vm_fault_quick() with vm_fault_page_quick().
Do not directly access userland virtual addresses in the kernel UMTX code.


# 765f70a1 07-May-2006 Matthew Dillon <dillon@dragonflybsd.org>

We have to use pmap_extract() here. pmap_kextract() will choke on a missing
page directory and the user memory hasn't been touched yet.
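
A hedged illustration of the distinction (variable and member names are
illustrative; see the pmap headers for the real prototypes):

    vm_paddr_t pa;

    /* user VA: its page tables may not exist yet, use the full pmap lookup */
    pa = pmap_extract(&curproc->p_vmspace->vm_pmap, user_va);

    /* kernel VA: page tables are guaranteed present, the cheap path is fine */
    pa = pmap_kextract(kernel_va);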


# e43a034f 06-Jun-2005 Matthew Dillon <dillon@dragonflybsd.org>

Remove spl*() calls from kern, replacing them with critical sections.
Change the meaning of safepri from a cpl mask to a thread priority.
Make a minor adjustment to tests within one of the buffer cache's
critical sections.


# 4f1640d6 02-Mar-2005 Hiten Pandya <hmp@dragonflybsd.org>

Rename the flags for sf_buf_alloc(9) to be in line with FreeBSD:

SFBA_PCATCH -> SFB_CATCH
SFBA_QUICK -> SFB_CPUPRIVATE

Discussed-with: Matthew Dillon <dillon at apollo.backplane.com>


# 03aa69bd 01-Mar-2005 Matthew Dillon <dillon@dragonflybsd.org>

Clean up the XIO API and structure. XIO no longer tries to 'track' partial
copies into or out of an XIO. It no longer adjusts xio_offset or xio_bytes
once they have been initialized. Instead, a relative offset is now passed
to API calls to handle partial copies. This makes the API a lot less confusing
and makes the XIO structure a lot more flexible, shareable, and more suitable
for use by higher level entities (buffer cache, pipe code, upcoming MSFBUF
work, etc).
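
A hedged sketch of the new calling convention: partial copies pass a relative
offset and the XIO itself is never modified (prototypes approximated from
sys/xio.h):

    char buf[4096];
    int error;

    /* copy the first half, then the second half, of the data in the XIO */
    error = xio_copy_xtok(&xio, 0, buf, 2048);
    if (error == 0)
            error = xio_copy_xtok(&xio, 2048, buf + 2048, 2048);
    /* xio.xio_offset and xio.xio_bytes are unchanged by either call */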


# 8c10bfcf 16-Jul-2004 Matthew Dillon <dillon@dragonflybsd.org>

Update all my personal copyrights to the Dragonfly Standard Copyright.


# d6a46bb7 05-Jun-2004 Matthew Dillon <dillon@dragonflybsd.org>

Add the MSFBUF API. MSFBUFs are like SFBUFs but they manage ephemeral
multi-page mappings instead of single-page mappings. MSFBUFs have the
same caching and page invalidation optimizations that SFBUFs have and are
considered to be SMP-friendly.

Whereas XIO manages pure page lists, MSFBUFs manage KVA mappings of pure
page lists.

This initial commit just gets the basic API operational. The roadmap for
future work includes things like better interactions with third-party XIOs,
mapping user buffers into the kernel (extending the xio_init_ubuf() API into
the MSFBUF API), and allowing higher level subsystems to pass previously
released MSFBUFs as a hint to speed up regeneration. We also need to come
up with a way to overload additional sets of MSFBUFs representing smaller
chunks of memory on top of the same KVA space in order to efficiently use
our KVA reservation when dealing with subsystems like the buffer cache.

MSFBUFs will eventually replace the KVA management in the BUF/BIO, PIPE,
and other subsystems which create fake linear mappings with pbufs. The
general idea for BUF/BIO will be to use XIO and MSFBUFs to avoid KVA
mapping file data through the nominal I/O path. XIO will be the primary I/O
buffer mechanism while MSFBUFs will be used when things like UFS decide they
need a temporary mapping.

This is a collaborative work between Hiten Pandya <hmp@leaf.dragonflybsd.org>
and Matthew Dillon <dillon@backplane.com>.


# 06ecca5a 13-May-2004 Matthew Dillon <dillon@dragonflybsd.org>

Close an interrupt race between vm_page_lookup() and (typically) a
vm_page_sleep_busy() check by using the correct spl protection.
An interrupt can occur in between the two operations and unbusy/free
the page in question, causing the busy check to fail and for the code
to fall through and then operate on a page that may have been freed
and possibly even reused. Also note that vm_page_grab() had the same
issue between the lookup, busy check, and vm_page_busy() call.

Close an interrupt race when scanning a VM object's memq. Interrupts
can free pages, removing them from memq, which interferes with memq scans
and can cause a page unassociated with the object to be processed as if it
were associated with the object.

Calls to vm_page_hold() and vm_page_unhold() require spl protection.

Rename the passed socket descriptor argument in sendfile() to make the
code more readable.

Fix several serious bugs in procfs_rwmem(). In particular, force it to
block if a page is busy and then retry.

Get rid of vm_pager_map_page() and vm_pager_unmap_page(), and make the functions
that used to use these routines use SFBUF's instead.

Get rid of the (userland?) 4MB page mapping feature in pmap_object_init_pt()
for now. The code appears to not track the page directory properly and
could result in a non-zero page being freed as PG_ZERO.

This commit also includes updated code comments and some additional
non-operational code cleanups.


# 82f4c82a 03-Apr-2004 Matthew Dillon <dillon@dragonflybsd.org>

Fix bugs in xio_copy_*(). We were not using the masked offset when
calculating the number of bytes to copy from the first indexed page,
leading to a negative 'n' calculation in situations that could be
triggered with a ^C on programs using pipes (such as a buildworld).
This almost universally resulted in a panic.


# 24712b90 01-Apr-2004 Matthew Dillon <dillon@dragonflybsd.org>

Enhance the pmap_kenter*() API and friends, separating out entries which
only need invalidation on the local cpu against entries which need invalidation
across the entire system, and provide a synchronization abstraction.
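
A hedged sketch of the split (function names as added around this time; the
exact prototypes and the sync variants should be checked against the pmap
headers):

    pmap_kenter(va, pa);          /* mapping must become valid on every cpu */
    pmap_kenter_quick(va, pa);    /* caller only touches the mapping on this cpu */
    pmap_kenter_sync_quick(va);   /* re-synchronize an existing entry locally */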

Enhance sf_buf_alloc() and friends to allow the caller to specify whether the
sf_buf's kernel mapping is going to be used on just the current cpu or
whether it needs to be valid across all cpus. This is done by maintaining
a cpumask of known-synchronized cpus in the struct sf_buf.

Optimize sf_buf_alloc() and friends by removing both TAILQ operations in the
critical path. TAILQ operations to remove the sf_buf from the free queue
are now done in a lazy fashion. Most sf_buf operations allocate a buf,
work on it, and free it, so why waste time moving the sf_buf off the freelist
if we are only going to move it back onto the free list a microsecond later?

Fix a bug in sf_buf_alloc() code as it was being used by the PIPE code.
sf_buf_alloc() was unconditionally using PCATCH in its tsleep() call, which
is only correct when called from the sendfile() interface.

Optimize the PIPE code to require only local cpu_invlpg()'s when mapping
sf_buf's, greatly reducing the number of IPIs required. On a DELL-2550,
a pipe test which explicitly blows out the sf_buf caching by using huge
buffers improves from 350 to 550 MBytes/sec. However, note that buildworld
times were not found to have changed.

Replace the PIPE code's custom 'struct pipemapping' structure with a
struct xio and use the XIO API functions rather than its own.


# 5ed411ff 31-Mar-2004 Matthew Dillon <dillon@dragonflybsd.org>

Add missing sf_buf_free()'s.

Reported-by: Jonathan Lemon <jlemon@flugsvamp.com>


# 81ee925d 31-Mar-2004 Matthew Dillon <dillon@dragonflybsd.org>

Initial XIO implementation. XIOs represent data through a list of VM pages
rather than mapped KVM, allowing them to be passed between threads without
having to worry about KVM mapping overheads, TLB invalidation, and so forth.

This initial implementation supports creating XIOs from user or kernel data
and copying from an XIO to a user or kernel buffer or a uio. XIOs are intended
to be used with CAPS, PIPES, VFS, DEV, and other I/O paths.

The XIO concept is an outgrowth of Alan Cox's unique use of target-side
SF_BUF mapping to improve pipe performance.
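
A hedged usage sketch, using the XIO entry points as they appear in later
DragonFly sources (the very first version of the API may have differed):

    struct xio xio;
    char kbuf[1024];
    int error;

    error = xio_init_kbuf(&xio, kbuf, sizeof(kbuf));   /* build the page list */
    if (error == 0) {
            /* ... hand &xio to another thread or subsystem ... */
            xio_release(&xio);                         /* drop the page refs */
    }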


# 7a683a24 20-Jan-2011 Matthew Dillon <dillon@apollo.backplane.com>

kernel - Optimize the x86-64 lwbuf API

* Change lwbuf_alloc(m) to lwbuf_alloc(m, &lwb_cache), passing a pointer to
a struct lwb which lwbuf_alloc() may use if it desires.

* The x86-64 lwbuf_alloc() now just fills in the passed lwb and returns it.
The i386 lwbuf_alloc() still uses the objcache w/ its kva mappings. This
removes objcache calls from the critical path.

* The x86-64 lwbuf_alloc()/lwbuf_free() functions are now inlines (ALL x86-64
lwbuf functions are now inlines).
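
A hedged sketch of the new calling convention (lwbuf_kva() assumed from the
existing lwbuf API):

    struct lwbuf lwb_cache;
    struct lwbuf *lwb;
    vm_offset_t va;

    lwb = lwbuf_alloc(m, &lwb_cache);   /* on x86-64 this can return &lwb_cache */
    va = lwbuf_kva(lwb);                /* kernel mapping of page m */
    /* ... access the page through va ... */
    lwbuf_free(lwb);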


# 573fb415 03-Jul-2010 Matthew Dillon <dillon@apollo.backplane.com>

kernel - MPSAFE work - Finish tokenizing vm_page.c

* Finish tokenizing vm_page.c

* Certain global procedures, particularly vm_page_hold() and vm_page_unhold(),
are best called with the vm_token already held for implied non-blocking
operation.
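
A hedged sketch of the calling convention described above, using the lwkt
token API of the time:

    lwkt_gettoken(&vm_token);
    vm_page_hold(m);            /* implied non-blocking with vm_token held */
    /* ... inspect or wire down the page ... */
    vm_page_unhold(m);
    lwkt_reltoken(&vm_token);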


# 5c5185ae 09-Mar-2010 Samuel J. Greear <sjg@thesjg.com>

kernel - Introduce lightweight buffers

* Summary:
The lightweight buffer (lwbuf) subsystem is effectively a reimplementation
of the sfbuf (sendfile buffers) implementation. It was designed to be
lighter weight than the sfbuf implementation when possible; on x86_64
we use the DMAP and the implementation is -very- simple. It was also
designed to be more SMP friendly.

* Replace all consumption of sfbuf with lwbuf

* Refactor sfbuf to act as an external refcount mechanism for sendfile(2);
this will probably go away eventually as well.


# 255b6068 25-Feb-2010 Samuel J. Greear <sjg@thesjg.com>

kernel - Initialize xio->xio_bytes properly in xio_init_pages()