History log of /dragonfly/sys/platform/pc64/icu/icu_vector.s (Results 1 – 19 of 19)
Revision Date Author Comments
Revision tags: v6.2.1, v6.2.0, v6.3.0, v6.0.1, v6.0.0, v6.0.0rc1, v6.1.0, v5.8.3, v5.8.2
# bbf928c6 29-Aug-2020 Matthew Dillon <dillon@apollo.backplane.com>

kernel - Fix atomicity issue in APIC EOI and enable_icus

* Change ICU_INTREN() and ICU_INTRDIS() to only modify the ICU
whose bit is being changed, rather than always setting the mask
for both ICUs.

* If masking a level IRQ on the APIC, make sure the EOI
to the APIC is atomic with the masking operation.

* Make sure that ICU enablement is atomic with ICU masking
operations.

* Does not fix any known bugs
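
A minimal C sketch of the per-ICU masking idea, using hypothetical
helper and variable names (outb(), icu_imr[]); the real change is in
the ICU_INTREN()/ICU_INTRDIS() assembly macros:

    #include <stdint.h>

    #define ICU1_IMR 0x21           /* master 8259 interrupt mask port */
    #define ICU2_IMR 0xa1           /* slave 8259 interrupt mask port  */

    extern void outb(uint16_t port, uint8_t val);   /* hypothetical */

    static uint8_t icu_imr[2];      /* cached mask, one per ICU */

    /* mask one IRQ, rewriting only the IMR of the ICU owning the bit */
    static void
    icu_intrdis(int irq)
    {
        if (irq < 8) {
            icu_imr[0] |= 1 << irq;
            outb(ICU1_IMR, icu_imr[0]);
        } else {
            icu_imr[1] |= 1 << (irq - 8);
            outb(ICU2_IMR, icu_imr[1]);
        }
    }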



Revision tags: v5.8.1, v5.8.0, v5.9.0, v5.8.0rc1, v5.6.3, v5.6.2, v5.6.1, v5.6.0, v5.6.0rc1, v5.7.0, v5.4.3, v5.4.2, v5.4.1, v5.4.0, v5.5.0, v5.4.0rc1, v5.2.2, v5.2.1, v5.2.0, v5.3.0, v5.2.0rc
# 4611d87f 03-Jan-2018 Matthew Dillon <dillon@apollo.backplane.com>

kernel - Intel user/kernel separation MMU bug fix part 1/3

* Part 1/3 of the fix for the Intel user/kernel separation MMU bug.
It appears that it is possible to discern the contents of kernel
memory with careful timing measurements of instructions due to
speculative memory reads and speculative instruction execution
by Intel cpus. This can happen because Intel will allow both to
occur even when the memory access is later disallowed due to
privilege separation in the PTE.

Even though the execution is always aborted, the speculative
reads and speculative execution result in timing artifacts which
can be measured. A speculative compare/branch can lead to timing
artifacts that allow the actual contents of kernel memory to be
discerned.

While there are multiple speculative attacks possible, the Intel
bug is particularly bad because it allows a user program to more
or less effortlessly access kernel memory (and if a DMAP is
present, all of physical memory).

* Part 1 implements all the logic required to load an 'isolated'
version of the user process's PML4e into %cr3 on all user
transitions, and to load the 'normal' U+K version into %cr3 on
all transitions from user to kernel.

* Part 1 fully allocates, copies, and implements the %cr3 loads for
the 'isolated' version of the user process PML4e.

* Part 1 does not yet actually adjust the contents of this isolated
version to replace the kernel map with just a trampoline map in
kernel space. It does remove the DMAP as a test, though. The
full separation will be done in part 3.
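
A minimal C sketch of the %cr3 switching concept, with hypothetical
names (the real loads happen in the syscall/exception entry and exit
assembly):

    #include <stdint.h>

    /* each process pmap keeps two PML4 roots */
    struct pmap {
        uint64_t pm_cr3;        /* normal U+K page table root       */
        uint64_t pm_cr3_iso;    /* isolated root: user + trampoline */
    };

    static inline void
    load_cr3(uint64_t pa)
    {
        __asm__ __volatile__("movq %0,%%cr3" : : "r" (pa) : "memory");
    }

    /* kernel -> user transition: run on the isolated tables */
    static void
    exit_to_user(struct pmap *pm)
    {
        load_cr3(pm->pm_cr3_iso);
    }

    /* user -> kernel transition: restore the full U+K tables */
    static void
    enter_kernel(struct pmap *pm)
    {
        load_cr3(pm->pm_cr3);
    }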



Revision tags: v5.0.2, v5.0.1, v5.0.0, v5.0.0rc2, v5.1.0, v5.0.0rc1, v4.8.1, v4.8.0, v4.6.2, v4.9.0, v4.8.0rc, v4.6.1, v4.6.0, v4.6.0rc2, v4.6.0rc, v4.7.0, v4.4.3, v4.4.2, v4.4.1, v4.4.0, v4.5.0, v4.4.0rc, v4.2.4, v4.3.1, v4.2.3, v4.2.1, v4.2.0
# 7fdff911 22-Jun-2015 Sascha Wildner <saw@online.de>

kernel: Include generic headers which will take care of platforms.


Revision tags: v4.0.6, v4.3.0, v4.2.0rc, v4.0.5, v4.0.4, v4.0.3, v4.0.2, v4.0.1, v4.0.0, v4.0.0rc3, v4.0.0rc2, v4.0.0rc, v4.1.0, v3.8.2, v3.8.1, v3.6.3, v3.8.0, v3.8.0rc2, v3.9.0, v3.8.0rc, v3.6.2, v3.6.1, v3.6.0, v3.7.1, v3.6.0rc, v3.7.0, v3.4.3, v3.4.2, v3.4.0, v3.4.1, v3.4.0rc, v3.5.0, v3.2.2, v3.2.1, v3.2.0, v3.3.0, v3.0.3, v3.0.2, v3.0.1, v3.1.0, v3.0.0
# 86d7f5d3 26-Nov-2011 John Marino <draco@marino.st>

Initial import of binutils 2.22 on the new vendor branch

Future versions of binutils will also reside on this branch rather
than continuing to create new binutils branches for each new version.


Revision tags: v2.12.0, v2.13.0, v2.10.1, v2.11.0, v2.10.0, v2.9.1, v2.8.2, v2.8.1, v2.8.0, v2.9.0, v2.6.3, v2.7.3, v2.6.2, v2.7.2, v2.7.1, v2.6.1, v2.7.0, v2.6.0, v2.5.1, v2.4.1, v2.5.0, v2.4.0
# bfc09ba0 25-Aug-2009 Matthew Dillon <dillon@apollo.backplane.com>

AMD64 - Fix format conversions and other warnings.


# a2a636cc 12-Aug-2009 Matthew Dillon <dillon@apollo.backplane.com>

AMD64 - Sync machine-dependent bits from smtms.

Submitted-by: Jordan Gordeev <jgordeev@dir.bg>


Revision tags: v2.3.2
# 729e15a8 10-Jul-2009 Sepherosa Ziehau <sephe@dragonflybsd.org>

Use same interrupt vector handler for fast/slow interrupt handlers

The slow interrupt vector handler is removed. The fast interrupt vector
handler, ithread_fast_handler(), now schedules slow interrupt handlers
if necessary:
o No fast interrupt handlers are registered
o Mixed fast and slow interrupt handlers are registered
o Non-MPSAFE fast interrupt handlers could not get BGL

i386/amd64: the gd_ipending field in mdglobaldata, which was only
used by the slow interrupt vector handler, is removed.

ithread_fast_handler()'s calling convention is changed:
- ithread_fast_handler() must be called with a critical section held
- Callers of ithread_fast_handler() no longer bump gd_intr_nesting_level
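
A minimal C sketch of the resulting dispatch, with hypothetical helper
names; the conditions listed above decide whether the slow handlers
get scheduled:

    extern int  run_fast_handlers(int irq);     /* hypothetical */
    extern void sched_ithd(int irq);            /* hypothetical */

    /* single vector entry point; per the new convention the caller
     * already holds a critical section */
    void
    ithread_fast_handler(int irq)
    {
        /* returns 0 if no fast handler ran to completion: none are
         * registered, slow handlers are mixed in, or a non-MPSAFE
         * fast handler could not get the BGL */
        if (run_fast_handlers(irq) == 0)
            sched_ithd(irq);    /* schedule the slow handlers */
    }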

Discussed-with: dillon@
Reviewed-by: dillon@



Revision tags: v2.3.1, v2.2.1
# 5b9f6cc4 02-Apr-2009 Matthew Dillon <dillon@apollo.backplane.com>

AMD64 - Make signals operational, fix reg mappings, fix %fs management, trace

Adjust sigframe, trapframe, mcontext, ucontext, and regs. Add tf_xflags
to all structures. Reorder struct regs to match the register layout
in the other structures.

Implement the commented out signaling code. Signals now work, or at least
do not crash programs. Theoretically the FP state is also saved and restored.

The exec() code failed to adjust gd_user_fs and gd_user_gs when setting
the msr registers for the user %fs and %gs, causing %fs to unexpectedly
change in running user programs.

Implement trace/debug support functions to set %rip and to single-step.

Define the missing vkernel flag FP_SOFTFP.
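
A minimal C sketch of the exec() fix described above, assuming
hypothetical names (wrmsr(), set_user_fsbase()); the point is that the
MSR and the per-cpu cache must be updated together:

    #include <stdint.h>

    #define MSR_FSBASE 0xc0000100   /* x86_64 FS.base MSR */

    extern void wrmsr(uint32_t msr, uint64_t val);  /* hypothetical */

    struct globaldata {
        uint64_t gd_user_fs;        /* cached user FS.base */
    };

    /* exec() had written the MSR only, leaving gd_user_fs stale, so a
     * later context switch compared against the wrong value and %fs
     * changed unexpectedly in running user programs */
    static void
    set_user_fsbase(struct globaldata *gd, uint64_t base)
    {
        gd->gd_user_fs = base;
        wrmsr(MSR_FSBASE, base);
    }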



Revision tags: v2.2.0, v2.3.0, v2.1.1, v2.0.1
# c8fe38ae 29-Aug-2008 Matthew Dillon <dillon@dragonflybsd.org>

AMD64 - Sync AMD64 support from Jordan Gordeev's svn repository and
Google SOC project. This work is still continuing but represents
substantial progress in the effort.

With this commit the world builds and installs, the loader is able to
boot the kernel, and the kernel is able to initialize, probe devices, and
exec the init program. The init program is then able to run until it hits
its first fork(). For the purposes of the GSOC the project is being
considered a big success!

The code has been adapted from multiple sources, most notably Peter Wemm
and other peoples work from FreeBSD, with many modifications to make it
work with DragonFly. Also thanks go to Simon Schubert for working on gdb
and compiler issues, and to Noah Yan for a good chunk of precursor work
in 2007.

While Jordan wishes to be modest on his contribution, frankly we would
not have been able to make this much progress without the large number
of man-hours Jordan has dedicated to his GSOC project painstakingly gluing
code together, tracking down issues, and progressing the boot sequence.

Submitted-by: Jordan Gordeev <jgordeev@dir.bg>



# 4fb281af 17-Feb-2011 Sepherosa Ziehau <sephe@dragonflybsd.org>

x86_64: 64-bit index register should be used.

Looks like qemu does not accept a 32-bit index register, while real
boxes and VirtualBox accept a 32-bit index register.

However, according to AMD <<24593--Rev. 3.17--June 2010>> Page 25,
a 64-bit index register should be used to form the effective address.
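
A minimal C illustration, with inline assembly, of the addressing form
involved (hypothetical function; the actual fix is in the interrupt
vector assembly):

    #include <stdint.h>

    /* load table[idx]: with a 64-bit idx the assembler emits e.g.
     * "movq (%rdi,%rsi,8),%rax" -- a 64-bit index register, the form
     * the AMD manual specifies.  The 32-bit index form in the same
     * addressing mode is what qemu rejected. */
    uint64_t
    fetch(const uint64_t *table, uint64_t idx)
    {
        uint64_t v;

        __asm__("movq (%1,%2,8),%0"
                : "=r" (v)
                : "r" (table), "r" (idx));
        return v;
    }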

DragonFly-bug: http://bugs.dragonflybsd.org/issue1991



# 1c2bce94 30-Jan-2011 Sepherosa Ziehau <sephe@dragonflybsd.org>

icu: Put ICU_IMR_OFFSET into machine_base/icu/icu.h


# 9611ff20 17-Jan-2011 Sepherosa Ziehau <sephe@dragonflybsd.org>

x86_64 intr: Support up to 192 IDT entries in ipl and intr vector asm code

Most parts are same as following commit on i386:
c263294b570bc9641fe5184b066fd801803046a4
except that 64bits mask array is used.

Expressions like (1UL << $const_val) do not work in .s files; currently
"movq $1,%rcx; shlq $const_val,%rcx" is used instead.



# 49792f61 17-Jan-2011 Sepherosa Ziehau <sephe@dragonflybsd.org>

icu: Remove unused macros


# 58e8d3d8 16-Jan-2011 Sepherosa Ziehau <sephe@dragonflybsd.org>

x86_64 intr: Don't pass the vector name to INTR_HANDLER


# 35e45e47 16-Jan-2011 Sepherosa Ziehau <sephe@dragonflybsd.org>

x86_64 intr: We no longer have the fast version of intr vectors


# faaf4131 07-Nov-2010 Michael Neumann <mneumann@ntecs.de>

x86_64 - Get completely rid of APIC_IO

For SMP kernels, the compile-time APIC_IO option has been superseded
by the loader tunable hw.apic_io_enable, which defaults to APIC I/O
enabled.
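
For example, a sketch of disabling APIC I/O from the loader
configuration (the tunable name is from the commit; "0" is assumed
here to mean disabled):

    # /boot/loader.conf
    hw.apic_io_enable="0"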


# 2d910aaf 28-Oct-2010 Matthew Dillon <dillon@apollo.backplane.com>

kernel - Fix serious issue w/ smp_invltlb(), plus other issues (2)

It looks like there are a number of additional cases where kernel threads
can wind up running with interrupts physically disabled, which can create
serious problems for smp_invltlb().

It is not entirely clear how these deadlocks occur since the IPI code does
a forced "STI" if it would otherwise loop, but there are several other
hard loops that do not: lwkt_switch looping on a vm_token, spin locks,
and probably a few other places.

We want interrupts to be enabled in all cases where these sorts of loops
occur in order to be able to service Xinvltlb via smp_invltlb() as well
as to prevent the LAPIC interrupt queue from filling up and livelocking
something that it shouldn't.

* lwkt_preempt() must save, zero, and restore gd->gd_intr_nesting_level
when issuing a preemption (see the sketch after this list). Otherwise
it will improperly panic on an assertion if the preempting interrupt
thread code tries to switch out.
It is perfectly acceptable for the preempting thread to block (it just
switches back to the thread that got preempted).

Why the assertion was not occurring before I do not know, but it is
probably related to threads getting stuck in 'cli' mode. The additional
changes below appear to significantly increase the number of interrupt
thread preemptions which succeed (lwkt.preempt_{hit,miss} counters).

* STI prior to running ithread_fast_handler() from all IDTVECs related
to device interrupts.

* STI in Xcpustop, Xipiq, and Xtimer. These functions can call more
complex C code and doing so with interrupts disabled may prevent
Xinvltlb (via smp_invltlb()) from being executed, deadlocking the
system.

* Reorder a mfence(). Probably not needed but do it anyway.
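
A heavily simplified C sketch of the lwkt_preempt() change from the
first item, reusing DragonFly names (thread_t, globaldata_t, mycpu)
but with a hypothetical lwkt_switch_to() standing in for the real
switch path:

    void
    lwkt_preempt(thread_t ntd, int critcount)
    {
        globaldata_t gd = mycpu;
        int save_nest = gd->gd_intr_nesting_level;

        /* zero the nesting level so the preempting ithread may block
         * (switching back to the preempted thread) without tripping
         * the nesting assertion */
        gd->gd_intr_nesting_level = 0;
        lwkt_switch_to(ntd);            /* returns when ntd blocks */
        gd->gd_intr_nesting_level = save_nest;
    }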



# 2883d2d8 21-Oct-2010 Matthew Dillon <dillon@apollo.backplane.com>

kernel - (mainly x86_64) - Fix a number of rare races

* Move the MP lock from outside to inside exit1(), also fixing an issue
where sigexit() was calling exit1() without it.

* Move calls to dsched_exit_thread() and biosched_done() out of the
platform code and into the mainline code. This also fixes an
issue where the code was improperly blocking way too late in the
thread termination code, after the point where it had been descheduled
permanently and tsleep decommissioned for the thread.

* Cleanup and document related code areas.

* Fix a missing proc_token release in the SIGKILL exit path.

* Fix FAKE_MCOUNT()s in the x86-64 code. These are NOPs
(since kernel profiling doesn't work), but fix them anyway.

* Use APIC_PUSH_FRAME() in the Xcpustop assembly code for x86-64
in order to properly acquire a working %gs. This may improve the
handling of panic()s on x86_64.

* Also fix some cases of #if JG'd (ifdef'd out) code in case the
code is ever used later on.

* Protect set_user_TLS() with a critical section to be safe (see the
sketch below).

* Add debug code to help track down further x86-64 seg-fault issues,
and provide better kprintf()s for the debug path in question.
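
The set_user_TLS() item reduces to the usual critical-section pattern
(sketch; crit_enter()/crit_exit() are the standard DragonFly
primitives):

    crit_enter();       /* block preemption while the TLS */
    set_user_TLS();     /* descriptors are being updated  */
    crit_exit();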



# f9235b6d 24-Aug-2010 Matthew Dillon <dillon@apollo.backplane.com>

kernel - rewrite the LWKT scheduler's priority mechanism

The purpose of these changes is to begin to address the issue of cpu-bound
kernel threads. For example, the crypto threads, or a HAMMER prune cycle
that operates entirely out of the buffer cache. These threads tend to hiccup
the system, creating temporary lockups because they never switch away due
to their nature as kernel threads.

* Change the LWKT scheduler from a strict hard priority model to
a fair-share with hard priority queueing model.

A kernel thread will be queued with a hard priority, giving it dibs on
the cpu earlier if it has a higher priority. However, if the thread
runs past its fair-share quantum it will then become limited by that
quantum and other lower-priority threads will be allowed to run.

* Rewrite lwkt_yield() and lwkt_user_yield(), remove uio_yield().
Both yield functions are now very fast and can be called without
further timing conditionals, simplifying numerous callers.

lwkt_user_yield() now uses the fair-share quantum to determine when
to yield the cpu for a cpu-bound kernel thread.

* Implement the new yield in the crypto kernel threads, HAMMER, and
other places (many of which already used the old yield functions
which didn't work very well).

* lwkt_switch() now only round-robins after the fair-share quantum
is exhausted (see the sketch below). It does not necessarily always
round robin.

* Separate the critical section count from td_pri. Add td_critcount.
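
A minimal C sketch of the queueing rule, with hypothetical helper
names (td_fairq_accum follows the commit's fair-share quantum idea):

    /* pick by hard priority, but once the candidate has exhausted
     * its fair-share quantum, requeue it so equal- and lower-
     * priority threads get the cpu */
    thread_t
    lwkt_choose(globaldata_t gd)
    {
        thread_t td = highest_priority_thread(gd);

        if (td->td_fairq_accum < 0) {       /* quantum exhausted */
            requeue_tail(gd, td);           /* round-robin point */
            td = highest_priority_thread(gd);
        }
        return td;
    }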
