# d3e01796 | 25-Sep-2023 | Christophe Leroy <christophe.leroy@csgroup.eu>
powerpc: Untangle fixmap.h and pgtable.h and mmu.h
fixmap.h needs pgtable.h for [un]map_kernel_page().
pgtable.h needs fixmap.h for FIXADDR_TOP.
Untangle the two files by moving FIXADDR_TOP into pgtable.h.
Also move VIRT_IMMR_BASE to fixmap.h to avoid including fixmap.h in mmu.h.
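A minimal sketch of the cycle being broken (simplified; the actual header contents are assumptions):

    /* Before: each header needed the other. */
    /* asm/fixmap.h */
    #include <asm/pgtable.h>   /* for map_kernel_page()/unmap_kernel_page() */
    /* asm/pgtable.h */
    #include <asm/fixmap.h>    /* for FIXADDR_TOP -- the circular edge */

    /* After: FIXADDR_TOP is defined in pgtable.h itself, so fixmap.h can
     * include pgtable.h without pgtable.h including it back. */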
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/5eba12392a018be28ad0a02ed844767b132589e7.1695659959.git.christophe.leroy@csgroup.eu

# 26e04120 | 11-Jul-2023 | Christophe Leroy <christophe.leroy@csgroup.eu>
powerpc/kuap: Use MMU_FTR_KUAP on all and refactor disabling kuap
All targets but book3s/64 use a static branch key for disabling KUAP; book3s/64 uses an MMU feature.
Refactor all targets to use MMU_FTR_KUAP like book3s/64.
For PPC32 that implies updating MMU feature fixups once KUAP has been initialised.
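A hedged sketch of the before/after check styles (the static key name disable_kuap_key is an assumption for illustration):

    /* Before (most 32-bit targets): a static branch key. */
    if (static_branch_unlikely(&disable_kuap_key))
        return;

    /* After (all targets, as on book3s/64): an MMU feature bit that the
     * feature-fixup machinery can patch out when KUAP is disabled. */
    if (!mmu_has_feature(MMU_FTR_KUAP))
        return;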
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/6b3d7c977bad73378ea368bc6818e9c94ea95ab0.1689091022.git.christophe.leroy@csgroup.eu

# dfc3095c | 19-Sep-2022 | Christophe Leroy <christophe.leroy@csgroup.eu>
powerpc: Remove CONFIG_FSL_BOOKE
PPC_85xx is PPC32 only. PPC_85xx always selects E500 and is the only PPC32 that selects E500. FSL_BOOKE is selected when E500 and PPC32 are selected.
So FSL_BOOKE is redundant with PPC_85xx.
Remove FSL_BOOKE and rename four files accordingly.
cpu_setup_fsl_booke.S is not renamed because it is linked to PPC_FSL_BOOK3E and not to FSL_BOOKE as suggested by its name.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/08e3e15594e66d63b9e89c5b4f9c35153913c28f.1663606875.git.christophe.leroy@csgroup.eu

# b0e0d68b | 31-Aug-2022 | Christophe Leroy <christophe.leroy@csgroup.eu>
powerpc/32: Allow fragmented physical memory
Since commit 9e849f231c3c ("powerpc/mm/32s: use generic mmu_mapin_ram() for all blocks.") it is possible to map all blocks as RAM on any PPC32.
Remove the related restrictions.
Also remove the call to wii_memory_fixups(), which does nothing but checks since commit 160985f3025b ("powerpc/wii: remove wii_mmu_mapin_mem2()").
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/fc9a5b8ef49f6eb86e970d4c7ccfba9b407fd4eb.1661938317.git.christophe.leroy@csgroup.eu

# fc06755e | 31-Aug-2022 | Christophe Leroy <christophe.leroy@csgroup.eu>
powerpc/32: Drop a stale comment about reservation of gigantic pages
A comment about the reservation of gigantic pages was left in MMU_init() after commit 79cc38ded1e1 ("powerpc/mm/hugetlb: Add support for reserving gigantic huge pages via kernel command line").
Remove it.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/959d77be630b9b46a7458f0fbd41dc3a94ec811a.1661938317.git.christophe.leroy@csgroup.eu

# 513f5bba | 14-Jun-2022 | Christophe Leroy <christophe.leroy@csgroup.eu>
powerpc/32: Remove __map_without_ltlbs
__map_without_ltlbs is used only by 40x, and only when STRICT_KERNEL_RWX, KFENCE or DEBUG_PAGEALLOC is active.
Do the verification directly in the 40x version of mmu_mapin_ram() and remove __map_without_ltlbs from core ppc32.
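A minimal sketch of the idea, under the assumption that the check simply moves into the 40x mmu_mapin_ram() (the real code differs in detail):

    unsigned long __init mmu_mapin_ram(unsigned long base, unsigned long top)
    {
        /* The conditions that used to set __map_without_ltlbs are now
         * checked here directly: fall back to page-sized mappings. */
        if (IS_ENABLED(CONFIG_KFENCE) ||
            IS_ENABLED(CONFIG_STRICT_KERNEL_RWX) ||
            debug_pagealloc_enabled())
            return 0;

        /* ...otherwise map the linear range with large TLB entries... */
    }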
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/3422094db965d218c4c3d8580f526963a9ac897f.1655202721.git.christophe.leroy@csgroup.eu

# 56e54b4e | 14-Jun-2022 | Christophe Leroy <christophe.leroy@csgroup.eu>
powerpc/32: Remove 'noltlbs' kernel parameter
Mapping without large TLBs has no added value on the 8xx.
Mapping without large TLBs is still necessary on 40x when selecting CONFIG_KFENCE or CONFIG_DEBUG_PAGEALLOC or CONFIG_STRICT_KERNEL_RWX, but this is done automatically and doesn't require user selection.
Remove the 'noltlbs' kernel parameter; the user has no reason to use it.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/80ca17bd39cf608a8ebd0764d7064a498e131199.1655202721.git.christophe.leroy@csgroup.eu

# 1ce84497 | 14-Jun-2022 | Christophe Leroy <christophe.leroy@csgroup.eu>
powerpc/32: Remove the 'nobats' kernel parameter
Mapping without BATs doesn't bring any added value to the user.
Remove that option.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/6977314c823cfb728bc0273cea634b41807bfb64.1655202721.git.christophe.leroy@csgroup.eu

# 86c38fec | 08-Mar-2022 | Christophe Leroy <christophe.leroy@csgroup.eu>
powerpc: Remove asm/prom.h from all files that don't need it
Several files include asm/prom.h for no reason.
Clean it up.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[mpe: Drop change to prom_parse.c as reported by lkp@intel.com]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/7c9b8fda63dcf63e1b28f43e7ebdb95182cbc286.1646767214.git.christophe.leroy@csgroup.eu

# 90cbac0e | 04-Mar-2021 | Christophe Leroy <christophe.leroy@csgroup.eu>
powerpc: Enable KFENCE for PPC32
Add architecture specific implementation details for KFENCE and enable KFENCE for the ppc32 architecture. In particular, this implements the required interface in <asm/kfence.h>.
KFENCE requires that attributes for pages from its memory pool can individually be set. Therefore, force the Read/Write linear map to be mapped at page granularity.
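A hedged sketch of the central hook the <asm/kfence.h> interface has to provide, modelled on the generic KFENCE contract (details of the actual powerpc version are assumptions):

    /* Make one page of the KFENCE pool inaccessible, or accessible again.
     * This only works because the Read/Write linear map is page-mapped. */
    static inline bool kfence_protect_page(unsigned long addr, bool protect)
    {
        pte_t *kpte = virt_to_kpte(addr);

        if (protect) {
            pte_update(&init_mm, addr, kpte, _PAGE_PRESENT, 0, 0);
            flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
        } else {
            pte_update(&init_mm, addr, kpte, 0, _PAGE_PRESENT, 0);
        }
        return true;
    }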
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Marco Elver <elver@google.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8dfe1bd2abde26337c1d8c1ad0acfcc82185e0d5.1614868445.git.christophe.leroy@csgroup.eu

# ca15ca40 | 07-Aug-2020 | Mike Rapoport <rppt@linux.ibm.com>
mm: remove unneeded includes of <asm/pgalloc.h>
Patch series "mm: cleanup usage of <asm/pgalloc.h>"
Most architectures have very similar versions of pXd_alloc_one() and pXd_free_one() for intermediate levels of page table. These patches add generic versions of these functions in <asm-generic/pgalloc.h> and enable use of the generic functions where appropriate.
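A hedged sketch of what such a generic helper looks like (modelled on <asm-generic/pgalloc.h>; exact details vary by kernel version):

    static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
    {
        gfp_t gfp = GFP_PGTABLE_USER;

        if (mm == &init_mm)
            gfp = GFP_PGTABLE_KERNEL;
        return (pud_t *)get_zeroed_page(gfp);
    }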
In addition, functions declared and defined in <asm/pgalloc.h> headers are used mostly by core mm and early mm initialization in arch code, and there is no actual reason to have <asm/pgalloc.h> included all over the place. The first patch in this series removes unneeded includes of <asm/pgalloc.h>.
In the end it didn't work out as neatly as I hoped and moving pXd_alloc_track() definitions to <asm-generic/pgalloc.h> would require unnecessary changes to arches that have custom page table allocations, so I've decided to move lib/ioremap.c to mm/ and make pgalloc-track.h local to mm/.
This patch (of 8):
In most cases <asm/pgalloc.h> header is required only for allocations of page table memory. Most of the .c files that include that header do not use symbols declared in <asm/pgalloc.h> and do not require that header.
As for the other header files that used to include <asm/pgalloc.h>, it is possible to move that include into the .c file that actually uses symbols from <asm/pgalloc.h> and drop the include from the header file.
The process was somewhat automated using
    sed -i -E '/[<"]asm\/pgalloc\.h/d' \
            $(grep -L -w -f /tmp/xx \
                    $(git grep -E -l '[<"]asm/pgalloc\.h'))
where /tmp/xx contains all the symbols defined in arch/*/include/asm/pgalloc.h.
[rppt@linux.ibm.com: fix powerpc warning]
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Pekka Enberg <penberg@kernel.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
Cc: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Matthew Wilcox <willy@infradead.org>
Link: http://lkml.kernel.org/r/20200627143453.31835-1-rppt@kernel.org
Link: http://lkml.kernel.org/r/20200627143453.31835-2-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

# b506923e | 02-Jul-2020 | Christophe Leroy <christophe.leroy@csgroup.eu>
Revert "powerpc/kasan: Fix shadow pages allocation failure"
This reverts commit d2a91cef9bbdeb87b7449fdab1a6be6000930210.
This commit moved too much work in kasan_init(). The allocation of shadow p
Revert "powerpc/kasan: Fix shadow pages allocation failure"
This reverts commit d2a91cef9bbdeb87b7449fdab1a6be6000930210.
This commit moved too much work into kasan_init(). The allocation of shadow pages has to be moved for the reason explained in that patch, but the allocation of page tables still needs to be done before switching to the final hash table.
First revert the incorrect commit; the following patch redoes it properly.
Fixes: d2a91cef9bbd ("powerpc/kasan: Fix shadow pages allocation failure")
Cc: stable@vger.kernel.org
Reported-by: Erhard F. <erhard_f@mailbox.org>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://bugzilla.kernel.org/show_bug.cgi?id=208181
Link: https://lore.kernel.org/r/3667deb0911affbf999b99f87c31c77d5e870cd2.1593690707.git.christophe.leroy@csgroup.eu

# e31cf2f4 | 09-Jun-2020 | Mike Rapoport <rppt@linux.ibm.com>
mm: don't include asm/pgtable.h if linux/mm.h is already included
Patch series "mm: consolidate definitions of page table accessors", v2.
The low level page table accessors (pXY_index(), pXY_offset()) are duplicated across all architectures and sometimes more than once. For instance, we have 31 definitions of pgd_offset() for 25 supported architectures.
Most of these definitions are actually identical and typically it boils down to, e.g.
    static inline unsigned long pmd_index(unsigned long address)
    {
            return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
    }

    static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
    {
            return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
    }
These definitions can be shared among 90% of the arches provided XYZ_SHIFT, PTRS_PER_XYZ and xyz_page_vaddr() are defined.
For architectures that really need a custom version, there is always the possibility of overriding the generic version with the usual ifdef magic.
These patches introduce include/linux/pgtable.h that replaces include/asm-generic/pgtable.h and add the definitions of the page table accessors to the new header.
This patch (of 12):
The linux/mm.h header includes <asm/pgtable.h> to allow inlining of the functions involving page table manipulations, e.g. pte_alloc() and pmd_alloc(). So, there is no point in explicitly including <asm/pgtable.h> in the files that include <linux/mm.h>.
The include statements in such cases are removed with a simple loop:
    for f in $(git grep -l "include <linux/mm.h>") ; do
            sed -i -e '/include <asm\/pgtable.h>/ d' $f
    done
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vincent Chen <deanbo422@gmail.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200514170327.31389-1-rppt@kernel.org
Link: http://lkml.kernel.org/r/20200514170327.31389-2-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

# 2b279c03 | 19-May-2020 | Christophe Leroy <christophe.leroy@csgroup.eu>
powerpc/32s: Allow mapping with BATs with DEBUG_PAGEALLOC
DEBUG_PAGEALLOC only manages RW data.
Text and RO data can still be mapped with BATs.
In order to map with BATs, also enforce data alignment. Set it by default to 256M, which is a good compromise for keeping enough BATs for KASAN and IMMR as well.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/fd29c1718ee44d82115d0e835ced808eb4ccbf51.1589866984.git.christophe.leroy@csgroup.eu

# fcdafd10 | 19-May-2020 | Christophe Leroy <christophe.leroy@csgroup.eu>
powerpc/8xx: Allow large TLBs with DEBUG_PAGEALLOC
DEBUG_PAGEALLOC only manages RW data.
Text and RO data can still be mapped with hugepages and pinned TLB.
In order to map with hugepages, also enforce a 512kB data alignment minimum. That's a trade-off between size and speed, taking into account that DEBUG_PAGEALLOC is a debug option. The alignment remains tunable anyway.
We also allow tuning of alignment for book3s to limit the complexity of the test in Kconfig, which will disappear anyway in the following patches once DEBUG_PAGEALLOC is handled together with BATs.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c13256f2d356a316715da61fe089b3623ef217a5.1589866984.git.christophe.leroy@csgroup.eu

# d2a91cef | 19-May-2020 | Christophe Leroy <christophe.leroy@csgroup.eu>
powerpc/kasan: Fix shadow pages allocation failure
Doing KASAN page allocation in MMU_init() is too early: the kernel doesn't yet have access to the entire memory space, and memblock_alloc() fails when the kernel is a bit big.
Do it from kasan_init() instead.
Fixes: 2edb16efc899 ("powerpc/32: Add KASAN support")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c24163ee5d5f8cdf52fefa45055ceb35435b8f15.1589866984.git.christophe.leroy@csgroup.eu

# 4ed47dbe | 20-Sep-2019 | Jason Yan <yanaijie@huawei.com>
powerpc: move memstart_addr and kernstart_addr to init-common.c
These two variables are both defined in init_32.c and init_64.c. Move them to init-common.c and make them __ro_after_init.
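A hedged sketch of the resulting definitions in init-common.c (the exact initialisers and export macros are assumptions):

    /* arch/powerpc/mm/init-common.c */
    phys_addr_t memstart_addr __ro_after_init = (phys_addr_t)~0ull;
    EXPORT_SYMBOL_GPL(memstart_addr);

    phys_addr_t kernstart_addr __ro_after_init;
    EXPORT_SYMBOL_GPL(kernstart_addr);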
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Diana Craciun <diana.craciun@nxp.com>
Tested-by: Diana Craciun <diana.craciun@nxp.com>
Signed-off-by: Scott Wood <oss@buserror.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

# 2874c5fd | 27-May-2019 | Thomas Gleixner <tglx@linutronix.de>
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 3029 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

# 2edb16ef | 26-Apr-2019 | Christophe Leroy <christophe.leroy@c-s.fr>
powerpc/32: Add KASAN support
This patch adds KASAN support for PPC32. The following patch will add an early activation of the hash table for book3s. Until then, a warning will be raised if trying to use KASAN on a hash 6xx.
To support KASAN, this patch initialises the MMU mappings for accessing the KASAN shadow area defined in a previous patch.
An early mapping is set as soon as the kernel code has been relocated at its definitive place.
Then the definitive mapping is set once paging is initialised.
For modules, the shadow area is allocated at module_alloc().
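For context, a sketch of the generic address-to-shadow translation these mappings have to back (this is the generic KASAN scheme; the powerpc constants are defined elsewhere):

    static inline void *kasan_mem_to_shadow(const void *addr)
    {
        return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                + KASAN_SHADOW_OFFSET;
    }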
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

# 9d9f2ccc | 29-Mar-2019 | Christophe Leroy <christophe.leroy@c-s.fr>
powerpc/mm: change #include "mmu_decl.h" to <mm/mmu_decl.h>
This patch makes inclusion of mmu_decl.h independent of the location of the file including it.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

# 69795cab | 18-Apr-2019 | Christophe Leroy <christophe.leroy@c-s.fr>
powerpc: Add framework for Kernel Userspace Protection
This patch adds a skeleton for Kernel Userspace Protection functionalities like Kernel Userspace Access Protection and Kernel Userspace Execution Prevention.
The subsequent implementation of KUAP for radix makes use of a MMU feature in order to patch out assembly when KUAP is disabled or unsupported. This won't work unless there's an entry point for KUP support before the feature magic happens, so for PPC64 setup_kup() is called early in setup.
On PPC32, feature_fixup() is done too early to allow the same.
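A hedged sketch of that early entry point (the helper names follow the commit's terminology; their signatures are assumptions):

    void __init setup_kup(void)
    {
        setup_kuep(disable_kuep);   /* Kernel Userspace Execution Prevention */
        setup_kuap(disable_kuap);   /* Kernel Userspace Access Protection */
    }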
Suggested-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

# d5f17ee9 | 21-Feb-2019 | Christophe Leroy <christophe.leroy@c-s.fr>
powerpc/8xx: don't disable large TLBs with CONFIG_STRICT_KERNEL_RWX
This patch implements handling of STRICT_KERNEL_RWX with large TLBs directly in the TLB miss handlers.
To do so, etext and sinittext are aligned on 512kB boundaries, and the miss handlers use 512kB pages instead of 8MB pages for addresses close to the boundaries.
It sets RO PP flags for addresses under sinittext.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

# 63b2bc61 | 21-Feb-2019 | Christophe Leroy <christophe.leroy@c-s.fr>
powerpc/mm/32s: Use BATs for STRICT_KERNEL_RWX
Today, STRICT_KERNEL_RWX is based on the use of regular pages to map kernel pages.
On Book3s 32, it has three consequences:
- Using pages instead of BATs for mapping kernel linear memory severely impacts performance.
- Exec protection is not effective because no-execute cannot be set at page level (except on 603, which doesn't have hash tables).
- Write protection is not effective because PP bits do not provide RO mode for kernel-only pages (except on 603, which handles it in software via PAGE_DIRTY).
On the 603+, we have:
- Independent IBATs and DBATs allowing limitation of exec parts.
- NX bit can be set in segment registers to forbid execution of memory mapped by pages.
- RO mode on DBATs even for kernel-only blocks.
On the 601, there is not much we can do other than warn the user about it, because:
- BATs are common to instructions and data.
- BATs do not provide RO mode for kernel-only blocks.
- Segment registers don't have the NX bit.
In order to use IBATs for exec protection, this patch:
- Aligns _etext to BAT block sizes (128kb)
- Sets the NX bit in the kernel segment registers (except on the vmalloc area when CONFIG_MODULES is selected)
- Maps kernel text with IBATs.
In order to use DBATs for write protection, this patch:
- Aligns RW DATA to BAT block sizes (4M)
- Maps the kernel RO area with write-prohibited DBATs
- Maps remaining memory with the remaining DBATs.
Here is what we get with this patch on a 832x when activating STRICT_KERNEL_RWX:
Symbols:
c0000000 T _stext
c0680000 R __start_rodata
c0680000 R _etext
c0800000 T __init_begin
c0800000 T _sinittext

~# cat /sys/kernel/debug/block_address_translation
---[ Instruction Block Address Translation ]---
0: 0xc0000000-0xc03fffff 0x00000000 Kernel EXEC coherent
1: 0xc0400000-0xc05fffff 0x00400000 Kernel EXEC coherent
2: 0xc0600000-0xc067ffff 0x00600000 Kernel EXEC coherent
3:         -
4:         -
5:         -
6:         -
7:         -

---[ Data Block Address Translation ]---
0: 0xc0000000-0xc07fffff 0x00000000 Kernel RO coherent
1: 0xc0800000-0xc0ffffff 0x00800000 Kernel RW coherent
2: 0xc1000000-0xc1ffffff 0x01000000 Kernel RW coherent
3: 0xc2000000-0xc3ffffff 0x02000000 Kernel RW coherent
4: 0xc4000000-0xc7ffffff 0x04000000 Kernel RW coherent
5: 0xc8000000-0xcfffffff 0x08000000 Kernel RW coherent
6: 0xd0000000-0xdfffffff 0x10000000 Kernel RW coherent
7:         -

~# cat /sys/kernel/debug/segment_registers
---[ User Segments ]---
0x00000000-0x0fffffff Kern key 1 User key 1 VSID 0xa085d0
0x10000000-0x1fffffff Kern key 1 User key 1 VSID 0xa086e1
0x20000000-0x2fffffff Kern key 1 User key 1 VSID 0xa087f2
0x30000000-0x3fffffff Kern key 1 User key 1 VSID 0xa08903
0x40000000-0x4fffffff Kern key 1 User key 1 VSID 0xa08a14
0x50000000-0x5fffffff Kern key 1 User key 1 VSID 0xa08b25
0x60000000-0x6fffffff Kern key 1 User key 1 VSID 0xa08c36
0x70000000-0x7fffffff Kern key 1 User key 1 VSID 0xa08d47
0x80000000-0x8fffffff Kern key 1 User key 1 VSID 0xa08e58
0x90000000-0x9fffffff Kern key 1 User key 1 VSID 0xa08f69
0xa0000000-0xafffffff Kern key 1 User key 1 VSID 0xa0907a
0xb0000000-0xbfffffff Kern key 1 User key 1 VSID 0xa0918b

---[ Kernel Segments ]---
0xc0000000-0xcfffffff Kern key 0 User key 1 No Exec VSID 0x000ccc
0xd0000000-0xdfffffff Kern key 0 User key 1 No Exec VSID 0x000ddd
0xe0000000-0xefffffff Kern key 0 User key 1 No Exec VSID 0x000eee
0xf0000000-0xffffffff Kern key 0 User key 1 No Exec VSID 0x000fff
Aligning _etext to 128kb allows mapping up to 32Mb of text with 8 IBATs: 16Mb + 8Mb + 4Mb + 2Mb + 1Mb + 512kb + 256kb + 128kb (+ 128kb) = 32Mb (a 9th IBAT is unneeded, as 32Mb would need only a single 32Mb block).
Aligning data to 4M allows mapping up to 512Mb of data with 8 DBATs: 16Mb + 8Mb + 4Mb + 4Mb + 32Mb + 64Mb + 128Mb + 256Mb = 512Mb.
Because some processors only have 4 BATs and because some targets need DBATs for mapping other areas, the following patch will allow modifying the _etext and data alignment.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

# 28ea38b9 | 21-Feb-2019 | Christophe Leroy <christophe.leroy@c-s.fr>
powerpc/mmu: add is_strict_kernel_rwx() helper
Add a helper to know whether STRICT_KERNEL_RWX is enabled.
This is based on the rodata_enabled flag, which is defined only when CONFIG_STRICT_KERNEL_RWX is selected.
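A minimal sketch of such a helper (the exact name and location are assumptions):

    #ifdef CONFIG_STRICT_KERNEL_RWX
    static inline bool strict_kernel_rwx_enabled(void)
    {
        return rodata_enabled;
    }
    #else
    static inline bool strict_kernel_rwx_enabled(void)
    {
        return false;
    }
    #endif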
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

# 7e140591 | 28-Mar-2018 | Jonathan Neuschäfer <j.neuschaefer@gmx.net>
powerpc/mm/32: Remove the reserved memory hack
This hack, introduced in commit c5df7f775148 ("powerpc: allow ioremap within reserved memory regions") is now unnecessary.
Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>