History log of /linux/arch/parisc/mm/init.c (Results 1 – 25 of 114)
Revision Date Author Comments
# 0cc2dc49 05-May-2024 Mike Rapoport (IBM) <rppt@kernel.org>

arch: make execmem setup available regardless of CONFIG_MODULES

execmem does not depend on modules; on the contrary, modules use
execmem.

To make execmem available when CONFIG_MODULES=n, for instance for
kprobes, split execmem_params initialization out from
arch/*/kernel/module.c and compile it when CONFIG_EXECMEM=y

Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>


# bc46ef3c 11-Dec-2023 Kent Overstreet <kent.overstreet@linux.dev>

shm: Slim down dependencies

list_head is in types.h, not list.h, and the uapi header wasn't needed.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>


# e5ef93d0 07-Sep-2023 Helge Deller <deller@gmx.de>

parisc: BTLB: Initialize BTLB tables at CPU startup

Initialize the BTLB entries when starting up a CPU.
Note that BTLBs are not available on 64-bit CPUs.

Signed-off-by: Helge Deller <deller@gmx.de>


# a4c59c9a 10-Aug-2023 Helge Deller <deller@gmx.de>

parisc: dma: Add prototype for pcxl_dma_start

Signed-off-by: Helge Deller <deller@gmx.de>


# c2ff2b73 03-Aug-2023 Mike Rapoport (IBM) <rppt@kernel.org>

parisc/mm: preallocate fixmap page tables at init

Christoph Biedl reported early OOM on recent kernels:

swapper: page allocation failure: order:0, mode:0x100(__GFP_ZERO),
nodemask=(null)
CPU: 0 PID: 0 Comm: swapper Not tainted 6.3.0-rc4+ #16
Hardware name: 9000/785/C3600
Backtrace:
[<10408594>] show_stack+0x48/0x5c
[<10e152d8>] dump_stack_lvl+0x48/0x64
[<10e15318>] dump_stack+0x24/0x34
[<105cf7f8>] warn_alloc+0x10c/0x1c8
[<105d068c>] __alloc_pages+0xbbc/0xcf8
[<105d0e4c>] __get_free_pages+0x28/0x78
[<105ad10c>] __pte_alloc_kernel+0x30/0x98
[<10406934>] set_fixmap+0xec/0xf4
[<10411ad4>] patch_map.constprop.0+0xa8/0xdc
[<10411bb0>] __patch_text_multiple+0xa8/0x208
[<10411d78>] patch_text+0x30/0x48
[<1041246c>] arch_jump_label_transform+0x90/0xcc
[<1056f734>] jump_label_update+0xd4/0x184
[<1056fc9c>] static_key_enable_cpuslocked+0xc0/0x110
[<1056fd08>] static_key_enable+0x1c/0x2c
[<1011362c>] init_mem_debugging_and_hardening+0xdc/0xf8
[<1010141c>] start_kernel+0x5f0/0xa98
[<10105da8>] start_parisc+0xb8/0xe4

Mem-Info:
active_anon:0 inactive_anon:0 isolated_anon:0
active_file:0 inactive_file:0 isolated_file:0
unevictable:0 dirty:0 writeback:0
slab_reclaimable:0 slab_unreclaimable:0
mapped:0 shmem:0 pagetables:0
sec_pagetables:0 bounce:0
kernel_misc_reclaimable:0
free:0 free_pcp:0 free_cma:0
Node 0 active_anon:0kB inactive_anon:0kB active_file:0kB
inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB
mapped:0kB dirty:0kB writeback:0kB shmem:0kB
+writeback_tmp:0kB kernel_stack:0kB pagetables:0kB sec_pagetables:0kB
all_unreclaimable? no
Normal free:0kB boost:0kB min:0kB low:0kB high:0kB
reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB
inactive_file:0kB unevictable:0kB writepending:0kB
+present:1048576kB managed:1039360kB mlocked:0kB bounce:0kB free_pcp:0kB
local_pcp:0kB free_cma:0kB
lowmem_reserve[]: 0 0
Normal: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB
0*1024kB 0*2048kB 0*4096kB = 0kB
0 total pagecache pages
0 pages in swap cache
Free swap = 0kB
Total swap = 0kB
262144 pages RAM
0 pages HighMem/MovableOnly
2304 pages reserved
Backtrace:
[<10411d78>] patch_text+0x30/0x48
[<1041246c>] arch_jump_label_transform+0x90/0xcc
[<1056f734>] jump_label_update+0xd4/0x184
[<1056fc9c>] static_key_enable_cpuslocked+0xc0/0x110
[<1056fd08>] static_key_enable+0x1c/0x2c
[<1011362c>] init_mem_debugging_and_hardening+0xdc/0xf8
[<1010141c>] start_kernel+0x5f0/0xa98
[<10105da8>] start_parisc+0xb8/0xe4

Kernel Fault: Code=15 (Data TLB miss fault) at addr 0f7fe3c0
CPU: 0 PID: 0 Comm: swapper Not tainted 6.3.0-rc4+ #16
Hardware name: 9000/785/C3600

This happens because patching static key code temporarily maps it via
fixmap, and if that happens before the page allocator is initialized,
set_fixmap() cannot allocate memory using pte_alloc_kernel().

Make sure that fixmap page tables are preallocated early so that
pte_offset_kernel() in set_fixmap() never resorts to pte allocation.

Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Helge Deller <deller@gmx.de>
Tested-by: Christoph Biedl <linux-kernel.bfrz@manchmal.in-ulm.de>
Tested-by: John David Anglin <dave.anglin@bell.net>
Cc: <stable@vger.kernel.org> # v6.4+
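
The preallocation idea can be sketched roughly as below. This is an
illustrative outline, not the actual patch: FIXMAP_START and FIXMAP_SIZE
stand in for the architecture's real fixmap bounds, and the walk simply
makes sure every PMD covering that range already has a PTE page, so a
later set_fixmap() only fills in an existing entry.

static void __init fixmap_init(void)
{
	unsigned long addr = FIXMAP_START & PMD_MASK;
	unsigned long end  = FIXMAP_START + FIXMAP_SIZE;

	do {
		pgd_t *pgd = pgd_offset_k(addr);
		p4d_t *p4d = p4d_offset(pgd, addr);
		pud_t *pud = pud_offset(p4d, addr);
		pmd_t *pmd = pmd_offset(pud, addr);

		if (pmd_none(*pmd)) {
			/* one page of PTEs backs this PMD-sized chunk */
			pte_t *pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);

			if (!pte)
				panic("%s: failed to allocate pte table\n",
				      __func__);
			pmd_populate_kernel(&init_mm, pmd, pte);
		}
		addr += PMD_SIZE;
	} while (addr < end);
}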


# b62b37d6 30-Jun-2023 Helge Deller <deller@gmx.de>

parisc: init: Drop unused variable end_paddr

Signed-off-by: Helge Deller <deller@gmx.de>


# cbe263b6 23-Jul-2022 Jason Wang <wangborong@cdjrlc.com>

parisc: Drop zero variable initialisations in mm/init.c

Initialising global and static variables to 0 is always unnecessary.
Remove the unnecessary initialisations.

Signed-off-by: Jason Wang <wangborong@cdjrlc.com>
Signed-off-by: Helge Deller <deller@gmx.de>
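
For illustration, the class of change is simply the following (the
variable name is made up, not one taken from mm/init.c):

/* before: explicit zero initialisation, which only adds noise */
/* static unsigned long boot_mem_limit = 0; */

/* after: identical behaviour, since objects with static storage
 * duration are guaranteed to be zero-initialised and land in .bss */
static unsigned long boot_mem_limit;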


# 252358f1 11-Jul-2022 Anshuman Khandual <anshuman.khandual@arm.com>

parisc/mm: enable ARCH_HAS_VM_GET_PAGE_PROT

This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private and static protection_map[] array. Subsequently
all __SXXX and __PXXX macros can be dropped which are no longer needed.

Link: https://lkml.kernel.org/r/20220711070600.2378316-14-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Brian Cain <bcain@quicinc.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Dinh Nguyen <dinguyen@kernel.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Vineet Gupta <vgupta@kernel.org>
Cc: WANG Xuerui <kernel@xen0n.name>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
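
Roughly, DECLARE_VM_GET_PAGE_PROT expands to an indexed lookup like the
simplified sketch below; the real protection_map[] covers all sixteen
VM_READ/VM_WRITE/VM_EXEC/VM_SHARED combinations with the arch's own
protection values (PAGE_NONE, PAGE_READONLY, PAGE_COPY, ...), so treat
the table entries here as placeholders.

static pgprot_t protection_map[16] __ro_after_init = {
	[VM_NONE]		= PAGE_NONE,
	[VM_READ]		= PAGE_READONLY,
	[VM_WRITE]		= PAGE_COPY,
	[VM_WRITE | VM_READ]	= PAGE_COPY,
	/* ... remaining VM_EXEC/VM_SHARED combinations ... */
};

pgprot_t vm_get_page_prot(unsigned long vm_flags)
{
	return protection_map[vm_flags &
			(VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];
}
EXPORT_SYMBOL(vm_get_page_prot);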


# 44eeb9b5 17-May-2022 Helge Deller <deller@gmx.de>

parisc: Prevent ldil() to sign-extend into upper 32 bits

Add some build time checks to prevent the various usages of
"ldil L%(TMPALIAS_MAP_START), %reg"
from sign-extending into the upper 32 bits when building a 64-bit kernel.

Signed-off-by: Helge Deller <deller@gmx.de>
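
A minimal sketch of such a check, assuming the concern is bit 31 of the
constant handed to ldil; TMPALIAS_MAP_START comes from the commit text,
but the exact form of the assertion added by the patch may differ:

#include <linux/build_bug.h>

#if defined(CONFIG_64BIT)
/* if bit 31 were set, ldil's result would be sign-extended */
static_assert((TMPALIAS_MAP_START & 0x80000000UL) == 0,
	      "ldil of TMPALIAS_MAP_START would sign-extend into the upper 32 bits");
#endif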


# 9129886b 22-Jan-2022 John David Anglin <dave.anglin@bell.net>

parisc: Drop __init from map_pages declaration

With huge kernel pages, we randomly eat a SPARC in map_pages(). This
is fixed by dropping __init from the declaration.

However, map_pages references the __init routine memblock_alloc_try_nid
via memblock_alloc. Thus, it needs to be marked with __ref.

memblock_alloc is only called before the kernel text is set to readonly.

The __ref on free_initmem is no longer needed.

Comment regarding map_pages being in the init section is removed.

Signed-off-by: John David Anglin <dave.anglin@bell.net>
Cc: stable@vger.kernel.org # v5.4+
Signed-off-by: Helge Deller <deller@gmx.de>
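
The shape of the change, as a rough sketch (the real map_pages()
signature and body live in arch/parisc/mm/init.c; this only shows where
the __ref annotation goes and why it is legitimate):

/*
 * Not __init: with huge kernel pages this can be reached after boot.
 * Marked __ref because it may still call the __init-only memblock
 * allocator; that path only runs before the kernel text is made
 * read-only, so the cross-section reference is intentional.
 */
static void __ref map_pages(unsigned long start_vaddr,
			    unsigned long start_paddr, unsigned long size,
			    pgprot_t pgprot, int force)
{
	/* ... walk and populate the page tables for the range ... */
}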


# 1ae8e91e 01-Nov-2021 Yihao Han <hanyihao@vivo.com>

parisc: Use swap() to swap values in setup_bootmem()

Signed-off-by: Yihao Han <hanyihao@vivo.com>
Signed-off-by: Helge Deller <deller@gmx.de>


# 1030d681 09-Oct-2021 Sven Schnelle <svens@stackframe.org>

parisc: fix warning in flush_tlb_all

I've got the following splat after enabling preemption:

[ 3.724721] BUG: using __this_cpu_add() in preemptible [00000000] code: swapper/0/1
[ 3.734630] caller is __this_cpu_preempt_check+0x38/0x50
[ 3.740635] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 5.15.0-rc4-64bit+ #324
[ 3.744605] Hardware name: 9000/785/C8000
[ 3.744605] Backtrace:
[ 3.744605] [<00000000401d9d58>] show_stack+0x74/0xb0
[ 3.744605] [<0000000040c27bd4>] dump_stack_lvl+0x10c/0x188
[ 3.744605] [<0000000040c27c84>] dump_stack+0x34/0x48
[ 3.744605] [<0000000040c33438>] check_preemption_disabled+0x178/0x1b0
[ 3.744605] [<0000000040c334f8>] __this_cpu_preempt_check+0x38/0x50
[ 3.744605] [<00000000401d632c>] flush_tlb_all+0x58/0x2e0
[ 3.744605] [<00000000401075c0>] 0x401075c0
[ 3.744605] [<000000004010b8fc>] 0x4010b8fc
[ 3.744605] [<00000000401080fc>] 0x401080fc
[ 3.744605] [<00000000401d5224>] do_one_initcall+0x128/0x378
[ 3.744605] [<0000000040102de8>] 0x40102de8
[ 3.744605] [<0000000040c33864>] kernel_init+0x60/0x3a8
[ 3.744605] [<00000000401d1020>] ret_from_kernel_thread+0x20/0x28
[ 3.744605]

Fix this by moving the __inc_irq_stat() into the locked section.

Signed-off-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Helge Deller <deller@gmx.de>
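
A simplified sketch of the fix; the names (sid_lock, irq_tlb_count,
dirty_space_ids, RECYCLE_THRESHOLD) follow the parisc code but this is
illustrative, not the exact diff. The point is that the per-CPU
increment done by __inc_irq_stat() is only safe once the spinlock has
disabled preemption.

void flush_tlb_all(void)
{
	int do_recycle = 0;

	spin_lock(&sid_lock);
	__inc_irq_stat(irq_tlb_count);	/* moved under the lock: not preemptible */
	if (dirty_space_ids > RECYCLE_THRESHOLD) {
		/* ... prepare space-id recycling ... */
		do_recycle++;
	}
	spin_unlock(&sid_lock);

	on_each_cpu(flush_tlb_all_local, NULL, 1);

	if (do_recycle) {
		spin_lock(&sid_lock);
		/* ... return the recycled space ids to the free pool ... */
		spin_unlock(&sid_lock);
	}
}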


# 7bf82eb3 15-Jul-2021 Matthew Wilcox (Oracle) <willy@infradead.org>

parisc: Rename PMD_ORDER to PMD_TABLE_ORDER

This is the order of the page table allocation, not the order of a PMD.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Helge Deller <deller@gmx.de>


# 1f9d03c5 30-Apr-2021 Kefeng Wang <wangkefeng.wang@huawei.com>

mm: move mem_init_print_info() into mm_init()

mem_init_print_info() is called in mem_init() on each architecture,
always with a NULL argument, so make it take no argument and move the
call into mm_init().

Link: https://lkml.kernel.org/r/20210317015210.33641-1-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com> [x86]
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> [powerpc]
Acked-by: David Hildenbrand <david@redhat.com>
Tested-by: Anatoly Pugachev <matorola@gmail.com> [sparc64]
Acked-by: Russell King <rmk+kernel@armlinux.org.uk> [arm]
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Guo Ren <guoren@kernel.org>
Cc: Yoshinori Sato <ysato@users.osdn.me>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: "Peter Zijlstra" <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# b7795074 12-Feb-2021 Helge Deller <deller@gmx.de>

parisc: Optimize per-pagetable spinlocks

On parisc a spinlock is stored in the next page behind the pgd which
protects against parallel accesses to the pgd. That's why one additional
page (PGD_ALLOC_ORDER) is allocated for the pgd.

Matthew Wilcox suggested that we instead should use a pointer in the
struct page table for this spinlock and noted that the comments for the
PGD_ORDER and PMD_ORDER defines were wrong.

Both suggestions are addressed with this patch. Instead of having an own
spinlock to protect the pgd, we now switch to use the existing
page_table_lock. Additionally, beside loading the pgd into cr25 in
switch_mm_irqs_off(), the physical address of this lock is loaded into
cr28 (tr4), so that we can avoid implementing a complicated lookup in
assembly for this lock in the TLB fault handlers.

The existing Hybrid L2/L3 page table scheme (where the pmd is adjacent
to the pgd) has been dropped with this patch.

Remove the locking in set_pte() and the huge-page pte functions too.
They trigger a spinlock recursion on 32bit machines and seem unnecessary.

Suggested-by: Matthew Wilcox <willy@infradead.org>
Fixes: b37d1c1898b2 ("parisc: Use per-pagetable spinlock")
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
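
The context-switch side of this can be sketched as follows. It is a
rough outline of what switch_mm_irqs_off() does on parisc after the
change, with the lock-address computation and the space-id reload
simplified, so treat it as illustrative rather than the literal code.

static inline void switch_mm_irqs_off(struct mm_struct *prev,
				      struct mm_struct *next,
				      struct task_struct *tsk)
{
	if (prev != next) {
		/* physical address of the mm's page_table_lock goes into
		 * cr28 (tr4) so the TLB fault handlers can take the lock
		 * without any software lookup */
		mtctl(__pa(&next->page_table_lock), 28);
		/* the pgd still goes into cr25 */
		mtctl(__pa(next->pgd), 25);
		/* ... then load the space id for 'next' as before ... */
	}
}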


# 33def849 22-Oct-2020 Joe Perches <joe@perches.com>

treewide: Convert macro and uses of __section(foo) to __section("foo")

Use a more generic form for __section that requires quotes to avoid
complications with clang and gcc differences.

Remove the quote operator # from compiler_attributes.h __section macro.

Convert all unquoted __section(foo) uses to quoted __section("foo").
Also convert __attribute__((section("foo"))) uses to __section("foo")
even if the __attribute__ has multiple list entry forms.

Conversion done using the script at:

https://lore.kernel.org/lkml/75393e5ddc272dc7403de74d645e6c6e0f4e70eb.camel@perches.com/2-convert_section.pl

Signed-off-by: Joe Perches <joe@perches.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Miguel Ojeda <ojeda@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
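
For example (the variable and section names here are made up purely to
show the before/after shape of the conversion):

/* before: the old __section() stringified its argument with # */
/* static int probe_status __section(.data.probe); */

/* after: the section name is an ordinary string literal, which both
 * gcc and clang handle without macro tricks */
static int probe_status __section(".data.probe");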


# c89ab04f 07-Aug-2020 Mike Rapoport <rppt@linux.ibm.com>

mm/sparse: cleanup the code surrounding memory_present()

After removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP we have two equivalent
functions that call memory_present() for each region in memblock.memory:
sparse_memory_present_with_active_regions() and memblocks_present().

Moreover, all architectures have a call to either of these functions
preceding the call to sparse_init() and in most cases they are called
one after the other.

Mark the regions from memblock.memory as present during sparse_init() by
making sparse_init() call memblocks_present(), make memblocks_present()
and memory_present() functions static and remove redundant
sparse_memory_present_with_active_regions() function.

Also remove no longer required HAVE_MEMORY_PRESENT configuration option.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/20200712083130.22919-1-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
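
The resulting call flow described above boils down to something like
this sketch of mm/sparse.c after the cleanup (simplified; helper names
are the generic ones from the commit text):

static void __init memblocks_present(void)
{
	unsigned long start, end;
	int i, nid;

	/* mark every memblock.memory region present */
	for_each_mem_pfn_range(i, MAX_NUMNODES, &start, &end, &nid)
		memory_present(nid, start, end);
}

void __init sparse_init(void)
{
	memblocks_present();	/* arches no longer do this themselves */
	/* ... the usual section/memmap setup follows ... */
}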


# 208151bf 14-Jun-2020 Helge Deller <deller@gmx.de>

parisc: Convert to BIT_MASK() and BIT_WORD()

Drop own open-coded implementation to set bits and use the kernel
provided BIT_MASK() and BIT_WORD() macros.

Signed-off-by: Helge Deller <deller@gmx.de>
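
For reference, the helpers (as defined in include/linux/bits.h) and the
kind of open-coded arithmetic they replace; the bitmap access below is a
hypothetical example, not a line from init.c:

#define BIT_MASK(nr)	(1UL << ((nr) % BITS_PER_LONG))
#define BIT_WORD(nr)	((nr) / BITS_PER_LONG)

/* open-coded (what gets dropped):                                  */
/*	map[nr >> 5] |= 1 << (nr & 31);                             */
/* with the helpers, correct for 32- and 64-bit longs alike:        */
/*	map[BIT_WORD(nr)] |= BIT_MASK(nr);                          */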


# e31cf2f4 09-Jun-2020 Mike Rapoport <rppt@linux.ibm.com>

mm: don't include asm/pgtable.h if linux/mm.h is already included

Patch series "mm: consolidate definitions of page table accessors", v2.

The low level page table accessors (pXY_index(), pXY_offset()) are
duplicated across all architectures and sometimes more than once. For
instance, we have 31 definitions of pgd_offset() for 25 supported
architectures.

Most of these definitions are actually identical and typically it boils
down to, e.g.

static inline unsigned long pmd_index(unsigned long address)
{
	return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
}

static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
{
	return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
}

These definitions can be shared among 90% of the arches provided
XYZ_SHIFT, PTRS_PER_XYZ and xyz_page_vaddr() are defined.

For architectures that really need a custom version there is always
the possibility to override the generic version with the usual ifdef magic.

These patches introduce include/linux/pgtable.h that replaces
include/asm-generic/pgtable.h and add the definitions of the page table
accessors to the new header.

This patch (of 12):

The linux/mm.h header includes <asm/pgtable.h> to allow inlining of the
functions involving page table manipulations, e.g. pte_alloc() and
pmd_alloc(). So, there is no point in explicitly including <asm/pgtable.h>
in the files that include <linux/mm.h>.

The include statements in such cases are removed with a simple loop:

for f in $(git grep -l "include <linux/mm.h>") ; do
	sed -i -e '/include <asm\/pgtable.h>/ d' $f
done

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vincent Chen <deanbo422@gmail.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200514170327.31389-1-rppt@kernel.org
Link: http://lkml.kernel.org/r/20200514170327.31389-2-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 625bf73e 03-Jun-2020 Mike Rapoport <rppt@linux.ibm.com>

parisc: simplify detection of memory zone boundaries

free_area_init() only requires the definition of maximal PFN for each of
the supported zones rather than calculation of actual zone sizes and the
sizes of the holes between the zones.

After removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP the free_area_init() is
available to all architectures.

Using this function instead of free_area_init_node() simplifies the zone
detection.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tested-by: Hoan Tran <hoan@os.amperecomputing.com> [arm64]
Cc: Baoquan He <bhe@redhat.com>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200412194859.12663-12-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
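
The simplified zone setup amounts to something like the sketch below.
parisc has a single flat ZONE_NORMAL; the function name is made up and
the helpers used are the generic ones, not necessarily the literal lines
of the patch.

static void __init setup_zone_pfns(void)
{
	unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };

	/* only the top PFN per zone is needed; free_area_init() derives
	 * the zone sizes and the holes between them on its own */
	max_zone_pfn[ZONE_NORMAL] = memblock_end_of_DRAM() >> PAGE_SHIFT;

	free_area_init(max_zone_pfn);
}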


# bf71bc16 28-May-2020 Helge Deller <deller@gmx.de>

parisc: Fix kernel panic in mem_init()

The Debian kernel v5.6 triggers this kernel panic:

Kernel panic - not syncing: Bad Address (null pointer deref?)
Bad Address (null pointer deref?): Code=26 (Data memory access rights trap) at addr 0000000000000000
CPU: 0 PID: 0 Comm: swapper Not tainted 5.6.0-2-parisc64 #1 Debian 5.6.14-1
IAOQ[0]: mem_init+0xb0/0x150
IAOQ[1]: mem_init+0xb4/0x150
RP(r2): start_kernel+0x6c8/0x1190
Backtrace:
[<0000000040101ab4>] start_kernel+0x6c8/0x1190
[<0000000040108574>] start_parisc+0x158/0x1b8

on a HP-PARISC rp3440 machine with this memory layout:
Memory Ranges:
0) Start 0x0000000000000000 End 0x000000003fffffff Size 1024 MB
1) Start 0x0000004040000000 End 0x00000040ffdfffff Size 3070 MB

Fix the crash by avoiding virt_to_page() and similar functions in
mem_init() until the memory zones have been fully set up.

Signed-off-by: Helge Deller <deller@gmx.de>
Cc: stable@vger.kernel.org # v5.0+


# 8121fbc4 12-Jan-2020 Mike Rapoport <rppt@linux.ibm.com>

parisc: map_pages(): cleanup page table initialization

The current code uses '#if PTRS_PER_PMD == 1' to distinguish 2- vs 3-level
setup, casts the pgd to pmd to cope with page table folding, and converts
addresses of page table entries from physical to virtual and back for no
good reason.

Simplify the accesses to the page table entries using proper unfolding of
the upper layers and by replacing '#if PTRS_PER_PMD' with an explicit
'#if CONFIG_PGTABLE_LEVELS == 3'.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Helge Deller <deller@gmx.de>
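
A sketch of the "proper unfolding" pattern the commit describes. The
helper below is illustrative, not the literal code: the pmd table
allocation size is simplified and the NULL mm argument to pud_populate()
is only meant to indicate an init-time kernel mapping.

static pmd_t * __init get_init_pmd(pgd_t *pgd, unsigned long vaddr)
{
	p4d_t *p4d = p4d_offset(pgd, vaddr);
	pud_t *pud = pud_offset(p4d, vaddr);

#if CONFIG_PGTABLE_LEVELS == 3
	if (pud_none(*pud)) {
		pmd_t *new = memblock_alloc(PAGE_SIZE, PAGE_SIZE);

		if (!new)
			panic("%s: failed to allocate pmd table\n", __func__);
		pud_populate(NULL, pud, new);
	}
#endif
	return pmd_offset(pud, vaddr);
}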


# 8b7f938e 08-Jan-2020 Mike Rapoport <rppt@linux.ibm.com>

parisc: fix map_pages() to actually populate upper directory

The commit d96885e277b5 ("parisc: use pgtable-nopXd instead of
4level-fixup") converted PA-RISC to use folded page tables, but it missed
the conversion of pgd_populate() to pud_populate() in the map_pages()
function. This caused the upper page table directory to remain empty and
the system would crash as a result.

Using pud_populate() that actually populates the page table instead of
dummy pgd_populate() fixes the issue.

Fixes: d96885e277b5 ("parisc: use pgtable-nopXd instead of 4level-fixup")
Reported-by: Meelis Roos <mroos@linux.ee>
Reported-by: Jeroen Roovers <jer@gentoo.org>
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Tested-by: Jeroen Roovers <jer@gentoo.org>
Tested-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Helge Deller <deller@gmx.de>


# 4afd58e1 14-May-2019 Christoph Hellwig <hch@lst.de>

initramfs: provide a generic free_initrd_mem implementation

For most architectures free_initrd_mem just expands to the same
free_reserved_area call. Provide that as a generic implementation marked

initramfs: provide a generic free_initrd_mem implementation

For most architectures free_initrd_mem just expands to the same
free_reserved_area call. Provide that as a generic implementation marked
__weak.

Link: http://lkml.kernel.org/r/20190213174621.29297-8-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com> [arm64]
Cc: Steven Price <steven.price@arm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
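
The generic version is essentially this __weak default, which an
architecture can still override with its own definition (shown without
surrounding context from init/initramfs.c):

void __weak free_initrd_mem(unsigned long start, unsigned long end)
{
	free_reserved_area((void *)start, (void *)end, POISON_FREE_INITMEM,
			   "initrd");
}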


# 4e617c86 10-May-2019 Helge Deller <deller@gmx.de>

parisc: Use __ro_after_init in init.c

Signed-off-by: Helge Deller <deller@gmx.de>
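
__ro_after_init marks data that is written only during boot; a
hypothetical example of the pattern (the variable name is made up, not a
specific one converted in init.c):

/* writable while the kernel initialises it, then made read-only once
 * boot-time initialisation is complete */
static unsigned long boot_cache_stride __ro_after_init;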

