Searched hist:"1fc8fad1" (Results 1 – 9 of 9) sorted by relevance

/openbsd/sys/arch/amd64/conf/
ld.script      1fc8fad1    Thu Jul 12 14:11:11 GMT 2018    guenther <guenther@openbsd.org>

Reorganize the Meltdown entry and exit trampolines for syscall and
traps so that the "mov %rax,%cr3" is followed by an infinite loop
which is avoided because the mapping of the code being executed is
changed. This means the sysretq/iretq isn't even present in that
flow of instructions in the kernel mapping, so userspace code can't
be speculatively reached on the kernel mapping. This also totally eliminates
the conditional jump over the %cr3 change that supported CPUs
without the Meltdown vulnerability. The return paths were probably
vulnerable to Spectre v1 (and v1.1/1.2) style attacks, speculatively
executing user code post-system-call with the kernel mappings, thus
creating cache/TLB/etc side-effects.

Would like to apply this technique to the interrupt stubs too, but
I'm hitting a bug in clang's assembler which misaligns the code and
symbols.

While here, when on a CPU not vulnerable to Meltdown, codepatch out
the unnecessary bits in cpu_switchto().

Inspiration from sf@, refined over dinner with theo
ok mlarkin@ deraadt@
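
To make the trampoline trick described above concrete, here is a minimal
sketch of the layout the message describes. The section and label names
below are assumptions for illustration, not the actual locore.S/ld.script
contents, and in the real kernel the placement of the two copies is
arranged by the linker script and pmap:

	/* Kernel-only text page; %rax is assumed to already hold the
	 * process's user-space CR3 value at this point. */
	.text
	.globl	tramp_syscall_ret
tramp_syscall_ret:
	movq	%rax,%cr3	/* switch to the user page tables: the    */
				/* next instruction fetch goes through    */
				/* the new mapping                        */
1:	pause
	jmp	1b		/* never reached, and harmless to         */
				/* speculate into: on the kernel mapping  */
				/* there is no sysretq and no path to     */
				/* user code after the %cr3 write         */

	/* A second page, mapped at the same virtual address in the user
	 * page tables.  In the real code the sysretq must land at the
	 * page offset immediately after the movq above (layout enforced
	 * by the linker script); here it is simply shown at the top of
	 * the section. */
	.section .kutext,"ax"
	sysretq			/* present only in the user-mapped copy */

Because the kernel-mapped instruction stream simply dead-ends after the
%cr3 write, no conditional branch is needed to skip the reload on CPUs
without Meltdown; as the message notes, those CPUs instead get the
unnecessary bits codepatched out (e.g. in cpu_switchto()).
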
/openbsd/sys/arch/amd64/include/
asm.h          1fc8fad1    Thu Jul 12 14:11:11 GMT 2018    guenther <guenther@openbsd.org>
codepatch.h    1fc8fad1    Thu Jul 12 14:11:11 GMT 2018    guenther <guenther@openbsd.org>
/openbsd/sys/arch/amd64/amd64/
vector.S       1fc8fad1    Thu Jul 12 14:11:11 GMT 2018    guenther <guenther@openbsd.org>
identcpu.c     1fc8fad1    Thu Jul 12 14:11:11 GMT 2018    guenther <guenther@openbsd.org>
locore.S       1fc8fad1    Thu Jul 12 14:11:11 GMT 2018    guenther <guenther@openbsd.org>
pmap.c         1fc8fad1    Thu Jul 12 14:11:11 GMT 2018    guenther <guenther@openbsd.org>
cpu.c          1fc8fad1    Thu Jul 12 14:11:11 GMT 2018    guenther <guenther@openbsd.org>
machdep.c      1fc8fad1    Thu Jul 12 14:11:11 GMT 2018    guenther <guenther@openbsd.org>