#
3fb0e55c |
| 13-Sep-2020 |
jsg <jsg@openbsd.org> |
add an ipi for wbinvd and a linux style wbinvd_on_all_cpus() function
ok kettenis@ deraadt@
|
#
81b71faf |
| 18-Jan-2019 |
pd <pd@openbsd.org> |
delete vmm(4) in i386
We will still be able to run i386 guests on amd64 vmm.
Reasons to delete i386 vmm:
- Been broken for a while; almost no one complained.
- Had been falling out of sync with amd64 while it worked.
- If your machine has vmx, you most probably can run amd64, so why not run that?
ok deraadt@ mlarkin@
|
#
4965d1a4 |
| 13-Jan-2018 |
mpi <mpi@openbsd.org> |
Define and use IPL_MPFLOOR in our common mutex implementation.
ok kettenis@, visa@
|
#
889d96f8 |
| 21-Oct-2016 |
mlarkin <mlarkin@openbsd.org> |
vmm(4) for i386. Userland changes forthcoming. Note that for the time being, i386 hosts are limited to running only i386 guests, even if the underlying hardware supports amd64. This is a restriction I hope to lift moving forward, but for now please don't report problems running amd64 guests on i386 hosts.
This was a straightforward port of the in-tree amd64 code plus the old rotted tree I had from last year for i386 support. Changes included converting 64-bit VMREAD/VMWRITE ops to 2x32-bit ops, and fixing treatment of the TSS, which differs on i386.
ok deraadt@
|
#
747479c5 |
| 16-May-2013 |
kettenis <kettenis@openbsd.org> |
Implement a mechanism to establish interrupt handlers that don't grab the kernel lock upon entry through a new IPL_MPSAFE flag/level.
|
#
950cab9c |
| 05-Jul-2011 |
oga <oga@openbsd.org> |
N: Thou shalt not call hardclock() with biglock held.
i386 disobeys the Nth commandment. Fix this. While here, make i386 and amd64 definitions of iplclock and statclock match.
ok art@, kettenis@
|
#
584b2611 |
| 22-May-2010 |
deraadt <deraadt@openbsd.org> |
protection should use the upper case names
|
#
0fa4144a |
| 26-Apr-2008 |
kettenis <kettenis@openbsd.org> |
Remove softast; it's no longer used.
ok krw@
|
#
0330a9d2 |
| 18-Apr-2008 |
kettenis <kettenis@openbsd.org> |
Now that i386 has a per-process astpending, we can garbage collect ipi_ast and do an ipi_nop cross-call from signotify() instead.
ok miod@
|
#
37477a60 |
| 07-Sep-2007 |
art <art@openbsd.org> |
Remove some left-overs from the TSC based microtime. We don't need to synchronize the TSC between CPUs anymore. While here, also remove the slow TLB IPI since it's been dead for a while.
noticed by mickey@, toby@ ok
|
#
5e4a5d80 |
| 21-Apr-2007 |
gwk <gwk@openbsd.org> |
Introduce an SMP-aware hw.setperf mechanism; it will scale all CPUs or cores by the same amount, i.e. if you do hw.setperf=50 both cores will be scaled to the operating state corresponding to 50%. Tested by many with est (mainly on core2duo machines like X60 thinkpads). Only enable est in the GENERIC.MP build; no one tested powernow.
ok art@
|
#
7a83af50 |
| 12-Apr-2007 |
art <art@openbsd.org> |
Faster signal delivery on i386/MP.
We need to poke the other CPU so that it processes the AST immediately and doesn't wait for the next interrupt or syscall.
Since IPIs really shouldn't process ASTs, we need to trigger a soft interrupt on the destination CPU to process the AST. But since we can't send soft interrupts to other CPUs, we send an IPI that triggers a soft interrupt, which in turn processes the AST.
Also, this marks the beginning of moving to a slightly better IPI mechanism of short and optimized IPIs instead of the large and complicated IPI infrastructure we're using now.
tested by many, ok tholo@
|
#
36f8f6b9 |
| 12-Mar-2006 |
brad <brad@openbsd.org> |
remove IPL_IMP and splimp().
|
#
8fd19177 |
| 31-May-2005 |
art <art@openbsd.org> |
IPL_SCHED should block statclock on architectures where the scheduler is clocked by the statclock.
miod@ ok
|
#
5bc652b1 |
| 29-May-2005 |
deraadt <deraadt@openbsd.org> |
sched work by niklas and art backed out; causes panics
|
#
51e884f5 |
| 25-May-2005 |
niklas <niklas@openbsd.org> |
This patch is mostly art's work and was done *a year* ago. Art wants to thank everyone for the prompt review and ok of this work ;-) Yeah, that includes me too, or maybe especially me. I am sorry.
Change the sched_lock to a mutex. This fixes, among other things, the infamous "telnet localhost &" problem. The real bug in that case was that the sched_lock, which is by design a non-recursive lock, was recursively acquired, and not enough releases made us hold the lock in the idle loop, blocking scheduling on the other processors. Some of the other processors would hold the biglock though, which made it impossible for cpu 0 to enter the kernel... A nice deadlock. Let me just say that debugging this for days just to realize it was all fixed in an old diff no one ever ok'd was somewhat of an anti-climax.
This diff also changes splsched to be correct for all our architectures.
|
#
012ea299 |
| 13-Jun-2004 |
niklas <niklas@openbsd.org> |
debranch SMP, have fun
|