# 71770895 | 29-Mar-2024 | Samuel Holland <samuel.holland@sifive.com>
arm64: crypto: use CC_FLAGS_FPU for NEON CFLAGS
Now that CC_FLAGS_FPU is exported and can be used anywhere in the source tree, use it instead of duplicating the flags here.
Link: https://lkml.kernel.org/r/20240329072441.591471-6-samuel.holland@sifive.com Signed-off-by: Samuel Holland <samuel.holland@sifive.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Acked-by: Christian König <christian.koenig@amd.com> Cc: Alex Deucher <alexander.deucher@amd.com> Cc: Borislav Petkov (AMD) <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Huacai Chen <chenhuacai@kernel.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Masahiro Yamada <masahiroy@kernel.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Nathan Chancellor <nathan@kernel.org> Cc: Nicolas Schier <nicolas@fjasle.eu> Cc: Palmer Dabbelt <palmer@rivosinc.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: WANG Xuerui <git@xen0n.name> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
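For illustration, a minimal kbuild sketch of the pattern this describes, assuming the NEON object in question is xor-neon.o and that the companion CC_FLAGS_NO_FPU variable from the same series is also available:

    # arch/arm64/lib/Makefile (sketch): reuse the exported flags instead of
    # open-coding the arch-specific FPU/NEON compiler options per object
    CFLAGS_xor-neon.o        += $(CC_FLAGS_FPU)
    CFLAGS_REMOVE_xor-neon.o += $(CC_FLAGS_NO_FPU)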
# 04e85bbf | 02-Aug-2021 | Alexey Dobriyan <adobriyan@gmail.com>
isystem: delete global -isystem compile option
Further isolate the kernel from userspace and prevent accidental inclusion of undesirable headers, mainly float.h and stdatomic.h.
nds32 keeps -isystem globally due to intrinsics used in entrenched header.
-isystem is selectively reenabled for some files, again, for intrinsics.
Compile tested on:
hexagon-defconfig hexagon-allmodconfig alpha-allmodconfig alpha-allnoconfig alpha-defconfig arm64-allmodconfig arm64-allnoconfig arm64-defconfig arm-am200epdkit arm-aspeed_g4 arm-aspeed_g5 arm-assabet arm-at91_dt arm-axm55xx arm-badge4 arm-bcm2835 arm-cerfcube arm-clps711x arm-cm_x300 arm-cns3420vb arm-colibri_pxa270 arm-colibri_pxa300 arm-collie arm-corgi arm-davinci_all arm-dove arm-ep93xx arm-eseries_pxa arm-exynos arm-ezx arm-footbridge arm-gemini arm-h3600 arm-h5000 arm-hackkit arm-hisi arm-imote2 arm-imx_v4_v5 arm-imx_v6_v7 arm-integrator arm-iop32x arm-ixp4xx arm-jornada720 arm-keystone arm-lart arm-lpc18xx arm-lpc32xx arm-lpd270 arm-lubbock arm-magician arm-mainstone arm-milbeaut_m10v arm-mini2440 arm-mmp2 arm-moxart arm-mps2 arm-multi_v4t arm-multi_v5 arm-multi_v7 arm-mv78xx0 arm-mvebu_v5 arm-mvebu_v7 arm-mxs arm-neponset arm-netwinder arm-nhk8815 arm-omap1 arm-omap2plus arm-orion5x arm-oxnas_v6 arm-palmz72 arm-pcm027 arm-pleb arm-pxa arm-pxa168 arm-pxa255-idp arm-pxa3xx arm-pxa910 arm-qcom arm-realview arm-rpc arm-s3c2410 arm-s3c6400 arm-s5pv210 arm-sama5 arm-shannon arm-shmobile arm-simpad arm-socfpga arm-spear13xx arm-spear3xx arm-spear6xx arm-spitz arm-stm32 arm-sunxi arm-tct_hammer arm-tegra arm-trizeps4 arm-u8500 arm-versatile arm-vexpress arm-vf610m4 arm-viper arm-vt8500_v6_v7 arm-xcep arm-zeus csky-allmodconfig csky-allnoconfig csky-defconfig h8300-edosk2674 h8300-h8300h-sim h8300-h8s-sim i386-allmodconfig i386-allnoconfig i386-defconfig ia64-allmodconfig ia64-allnoconfig ia64-bigsur ia64-generic ia64-gensparse ia64-tiger ia64-zx1 m68k-amcore m68k-amiga m68k-apollo m68k-atari m68k-bvme6000 m68k-hp300 m68k-m5208evb m68k-m5249evb m68k-m5272c3 m68k-m5275evb m68k-m5307c3 m68k-m5407c3 m68k-m5475evb m68k-mac m68k-multi m68k-mvme147 m68k-mvme16x m68k-q40 m68k-stmark2 m68k-sun3 m68k-sun3x microblaze-allmodconfig microblaze-allnoconfig microblaze-mmu mips-ar7 mips-ath25 mips-ath79 mips-bcm47xx mips-bcm63xx mips-bigsur mips-bmips_be mips-bmips_stb mips-capcella mips-cavium_octeon mips-ci20 mips-cobalt mips-cu1000-neo mips-cu1830-neo mips-db1xxx mips-decstation mips-decstation_64 mips-decstation_r4k mips-e55 mips-fuloong2e mips-gcw0 mips-generic mips-gpr mips-ip22 mips-ip27 mips-ip28 mips-ip32 mips-jazz mips-jmr3927 mips-lemote2f mips-loongson1b mips-loongson1c mips-loongson2k mips-loongson3 mips-malta mips-maltaaprp mips-malta_kvm mips-malta_qemu_32r6 mips-maltasmvp mips-maltasmvp_eva mips-maltaup mips-maltaup_xpa mips-mpc30x mips-mtx1 mips-nlm_xlp mips-nlm_xlr mips-omega2p mips-pic32mzda mips-pistachio mips-qi_lb60 mips-rb532 mips-rbtx49xx mips-rm200 mips-rs90 mips-rt305x mips-sb1250_swarm mips-tb0219 mips-tb0226 mips-tb0287 mips-vocore2 mips-workpad mips-xway nds32-allmodconfig nds32-allnoconfig nds32-defconfig nios2-10m50 nios2-3c120 nios2-allmodconfig nios2-allnoconfig openrisc-allmodconfig openrisc-allnoconfig openrisc-or1klitex openrisc-or1ksim openrisc-simple_smp parisc-allnoconfig parisc-generic-32bit parisc-generic-64bit powerpc-acadia powerpc-adder875 powerpc-akebono powerpc-amigaone powerpc-arches powerpc-asp8347 powerpc-bamboo powerpc-bluestone powerpc-canyonlands powerpc-cell powerpc-chrp32 powerpc-cm5200 powerpc-currituck powerpc-ebony powerpc-eiger powerpc-ep8248e powerpc-ep88xc powerpc-fsp2 powerpc-g5 powerpc-gamecube powerpc-ge_imp3a powerpc-holly powerpc-icon powerpc-iss476-smp powerpc-katmai powerpc-kilauea powerpc-klondike powerpc-kmeter1 powerpc-ksi8560 powerpc-linkstation powerpc-lite5200b powerpc-makalu powerpc-maple powerpc-mgcoge powerpc-microwatt 
powerpc-motionpro powerpc-mpc512x powerpc-mpc5200 powerpc-mpc7448_hpc2 powerpc-mpc8272_ads powerpc-mpc8313_rdb powerpc-mpc8315_rdb powerpc-mpc832x_mds powerpc-mpc832x_rdb powerpc-mpc834x_itx powerpc-mpc834x_itxgp powerpc-mpc834x_mds powerpc-mpc836x_mds powerpc-mpc836x_rdk powerpc-mpc837x_mds powerpc-mpc837x_rdb powerpc-mpc83xx powerpc-mpc8540_ads powerpc-mpc8560_ads powerpc-mpc85xx_cds powerpc-mpc866_ads powerpc-mpc885_ads powerpc-mvme5100 powerpc-obs600 powerpc-pasemi powerpc-pcm030 powerpc-pmac32 powerpc-powernv powerpc-ppa8548 powerpc-ppc40x powerpc-ppc44x powerpc-ppc64 powerpc-ppc64e powerpc-ppc6xx powerpc-pq2fads powerpc-ps3 powerpc-pseries powerpc-rainier powerpc-redwood powerpc-sam440ep powerpc-sbc8548 powerpc-sequoia powerpc-skiroot powerpc-socrates powerpc-storcenter powerpc-stx_gp3 powerpc-taishan powerpc-tqm5200 powerpc-tqm8540 powerpc-tqm8541 powerpc-tqm8548 powerpc-tqm8555 powerpc-tqm8560 powerpc-tqm8xx powerpc-walnut powerpc-warp powerpc-wii powerpc-xes_mpc85xx riscv-allmodconfig riscv-allnoconfig riscv-nommu_k210 riscv-nommu_k210_sdcard riscv-nommu_virt riscv-rv32 s390-allmodconfig s390-allnoconfig s390-debug s390-zfcpdump sh-ap325rxa sh-apsh4a3a sh-apsh4ad0a sh-dreamcast sh-ecovec24 sh-ecovec24-romimage sh-edosk7705 sh-edosk7760 sh-espt sh-hp6xx sh-j2 sh-kfr2r09 sh-kfr2r09-romimage sh-landisk sh-lboxre2 sh-magicpanelr2 sh-microdev sh-migor sh-polaris sh-r7780mp sh-r7785rp sh-rsk7201 sh-rsk7203 sh-rsk7264 sh-rsk7269 sh-rts7751r2d1 sh-rts7751r2dplus sh-sdk7780 sh-sdk7786 sh-se7206 sh-se7343 sh-se7619 sh-se7705 sh-se7712 sh-se7721 sh-se7722 sh-se7724 sh-se7750 sh-se7751 sh-se7780 sh-secureedge5410 sh-sh03 sh-sh2007 sh-sh7710voipgw sh-sh7724_generic sh-sh7757lcr sh-sh7763rdp sh-sh7770_generic sh-sh7785lcr sh-sh7785lcr_32bit sh-shmin sh-shx3 sh-titan sh-ul2 sh-urquell sparc-allmodconfig sparc-allnoconfig sparc-sparc32 sparc-sparc64 um-i386-allmodconfig um-i386-allnoconfig um-i386-defconfig um-x86_64-allmodconfig um-x86_64-allnoconfig x86_64-allmodconfig x86_64-allnoconfig x86_64-defconfig xtensa-allmodconfig xtensa-allnoconfig xtensa-audio_kc705 xtensa-cadence_csp xtensa-common xtensa-generic_kc705 xtensa-iss xtensa-nommu_kc705 xtensa-smp_lx200 xtensa-virt xtensa-xip_kc705
Tested-by: Nathan Chancellor <nathan@kernel.org> # build (hexagon) Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
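As a hedged example of the per-file re-enablement mentioned above (the object name is illustrative; the idiom is the usual way kbuild points a single object back at the compiler's own include directory):

    # -isystem is gone from the global flags; opt back in only for objects
    # that use compiler intrinsics headers such as arm_neon.h
    CFLAGS_xor-neon.o += -isystem $(shell $(CC) -print-file-name=include)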
# a7a08b27 | 08-Sep-2021 | Arnd Bergmann <arnd@arndb.de>
arch: remove compat_alloc_user_space
All users of compat_alloc_user_space() and copy_in_user() have been removed from the kernel, only a few functions in sparc remain that can be changed to calling arch_copy_in_user() instead.
Link: https://lkml.kernel.org/r/20210727144859.4150043-7-arnd@kernel.org Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Christoph Hellwig <hch@infradead.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: Eric Biederman <ebiederm@xmission.com> Cc: Feng Tang <feng.tang@intel.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
# 28513304 | 27-May-2021 | Robin Murphy <robin.murphy@arm.com>
arm64: Import latest memcpy()/memmove() implementation
Import the latest implementation of memcpy(), based on the upstream code of string/aarch64/memcpy.S at commit afd6244 from https://github.com/ARM-software/optimized-routines, and subsuming memmove() in the process.
Note that for simplicity Arm have chosen to contribute this code to Linux under GPLv2 rather than the original MIT license.
Note also that the needs of the usercopy routines vs. regular memcpy() have now diverged so far that we abandon the shared template idea and the damage which that incurred to the tuning of LDP/STP loops. We'll be back to tackle those routines separately in future.
Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/3c953af43506581b2422f61952261e76949ba711.1622128527.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
# 72fd7236 | 03-Mar-2021 | Julien Thierry <jthierry@redhat.com>
arm64: Move instruction encoder/decoder under lib/
Aarch64 instruction set encoding and decoding logic can prove useful for some features/tools both part of the kernel and outside the kernel.
Isolate the function dealing only with encoding/decoding instructions, with minimal dependency on kernel utilities in order to be able to reuse that code.
Code was only moved; nothing should have been added, removed, or modified.
Signed-off-by: Julien Thierry <jthierry@redhat.com> Link: https://lore.kernel.org/r/20210303170536.1838032-5-jthierry@redhat.com Signed-off-by: Will Deacon <will@kernel.org>
# 1cbdf60b | 26-May-2021 | Peter Collingbourne <pcc@google.com>
kasan: arm64: support specialized outlined tag mismatch checks
By using outlined checks we can achieve a significant code size improvement by moving the tag-based ASAN checks into separate functions. Unlike the existing CONFIG_KASAN_OUTLINE mode these functions have a custom calling convention that preserves most registers and is specialized to the register containing the address and the type of access, and as a result we can eliminate the code size and performance overhead of a standard calling convention such as AAPCS for these functions.
This change depends on a separate series of changes to Clang [1] to support outlined checks in the kernel, although the change works fine without them (we just don't get outlined checks). This is because the flag -mllvm -hwasan-inline-all-checks=0 has no effect until the Clang changes land. The flag was introduced in the Clang 9.0 timeframe as part of the support for outlined checks in userspace and because our minimum Clang version is 10.0 we can pass it unconditionally.
Outlined checks require a new runtime function with a custom calling convention. Add this function to arch/arm64/lib.
I measured the code size of defconfig + tag-based KASAN, as well as boot time (i.e. time to init launch) on a DragonBoard 845c with an Android arm64 GKI kernel. The results are below:
                                 code size    boot time
  CONFIG_KASAN_INLINE=y before    92824064        6.18s
  CONFIG_KASAN_INLINE=y after     38822400        6.65s
  CONFIG_KASAN_OUTLINE=y          39215616       11.48s
We can see straight away that specialized outlined checks beat the existing CONFIG_KASAN_OUTLINE=y on both code size and boot time for tag-based ASAN.
As for the comparison between CONFIG_KASAN_INLINE=y before and after we saw similar performance numbers in userspace [2] and decided that since the performance overhead is minimal compared to the overhead of tag-based ASAN itself as well as compared to the code size improvements we would just replace the inlined checks with the specialized outlined checks without the option to select between them, and that is what I have implemented in this patch.
Signed-off-by: Peter Collingbourne <pcc@google.com> Acked-by: Andrey Konovalov <andreyknvl@gmail.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Link: https://linux-review.googlesource.com/id/I1a30036c70ab3c3ee78d75ed9b87ef7cdc3fdb76 Link: [1] https://reviews.llvm.org/D90426 Link: [2] https://reviews.llvm.org/D56954 Link: https://lore.kernel.org/r/20210526174927.2477847-3-pcc@google.com Signed-off-by: Will Deacon <will@kernel.org>
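A rough sketch of how the two pieces described above plug into the build; file placement and the runtime object name are assumptions, not taken from this log:

    # scripts/Makefile.kasan (sketch): ask Clang to outline the tag checks;
    # the option is accepted from Clang 9 onwards, so it can be passed
    # unconditionally even before the outlining support lands
    CFLAGS_KASAN += -mllvm -hwasan-inline-all-checks=0

    # arch/arm64/lib/Makefile (sketch): the runtime helper with the custom
    # calling convention that the outlined checks branch to
    obj-$(CONFIG_KASAN_SW_TAGS) += kasan_sw_tags.o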
# 34bfeea4 | 04-May-2020 | Catalin Marinas <catalin.marinas@arm.com>
arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE
Pages allocated by the kernel are not guaranteed to have the tags zeroed, especially as the kernel does not (yet) use MTE itself. To ensure the user can still access such pages when mapped into its address space, clear the tags via set_pte_at(). A new page flag - PG_mte_tagged (PG_arch_2) - is used to track pages with valid allocation tags.
Since the zero page is mapped as pte_special(), it won't be covered by the above set_pte_at() mechanism. Clear its tags during early MTE initialisation.
Co-developed-by: Steven Price <steven.price@arm.com> Signed-off-by: Steven Price <steven.price@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org>
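For context, a hedged sketch of the arch/arm64/lib hookup for the low-level tag helpers this relies on (object name assumed):

    # arch/arm64/lib/Makefile (sketch): assembly helpers such as a page-wide
    # tag-clearing routine used on the set_pte_at() path for PROT_MTE pages
    obj-$(CONFIG_ARM64_MTE) += mte.o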
# 5777eaed | 15-Jan-2020 | Robin Murphy <robin.murphy@arm.com>
arm64: Implement optimised checksum routine
Apparently there exist certain workloads which rely heavily on software checksumming, for which the generic do_csum() implementation becomes a significant bottleneck. Therefore let's give arm64 its own optimised version - for ease of maintenance this foregoes assembly or intrinsics, and is thus not actually arm64-specific, but does rely heavily on C idioms that translate well to the A64 ISA and the typical load/store capabilities of most ARMv8 CPU cores.
The resulting increase in checksum throughput scales nicely with buffer size, tending towards 4x for a small in-order core (Cortex-A53), and up to 6x or more for an aggressive big core (Ampere eMAG).
Reported-by: Lingyan Huang <huanglingyan2@huawei.com> Tested-by: Lingyan Huang <huanglingyan2@huawei.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
# eb3aabbf | 28-Aug-2019 | Andrew Murray <andrew.murray@arm.com>
arm64: atomics: Remove atomic_ll_sc compilation unit
We no longer fall back to out-of-line atomics on systems with CONFIG_ARM64_LSE_ATOMICS where ARM64_HAS_LSE_ATOMICS is not set.
Remove the unused compilation unit which provided these symbols.
Signed-off-by: Andrew Murray <andrew.murray@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
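In Makefile terms this amounts to deleting the out-of-line object and its special flags; a sketch of the kind of lines that become removable (flag list abridged and assumed):

    # no LL/SC fallback object is needed any more, so lines like these can go:
    #   lib-$(CONFIG_ARM64_LSE_ATOMICS) += atomic_ll_sc.o
    #   CFLAGS_atomic_ll_sc.o := -ffixed-x1 -ffixed-x2 ... -fomit-frame-pointer
    #   CFLAGS_REMOVE_atomic_ll_sc.o := $(CC_FLAGS_FTRACE)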
# 42d038c4 | 06-Aug-2019 | Leo Yan <leo.yan@linaro.org>
arm64: Add support for function error injection
Inspired by the commit 7cd01b08d35f ("powerpc: Add support for function error injection"), this patch supports function error injection for Arm64.
This patch mainly supports two functions: one is regs_set_return_value(), which is used to overwrite the return value; the other is override_function_with_return(), which overrides the probed function's return and jumps back to its caller.
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Leo Yan <leo.yan@linaro.org> Signed-off-by: Will Deacon <will@kernel.org>
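The arch/arm64/lib side of this is small; a hedged sketch (file name assumed) of how the override helper gets built in:

    # arch/arm64/lib/Makefile (sketch): the override_function_with_return()
    # implementation is compiled only when function error injection is enabled
    obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o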
# edf072d3 | 08-Feb-2019 | Torsten Duwe <duwe@lst.de>
arm64: Makefile: Replace -pg with CC_FLAGS_FTRACE
In preparation for arm64 supporting ftrace built on other compiler options, let's have the arm64 Makefiles remove the $(CC_FLAGS_FTRACE) flags, whatever these may be, rather than assuming '-pg'.
There should be no functional change as a result of this patch.
Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Torsten Duwe <duwe@suse.de> Signed-off-by: Will Deacon <will.deacon@arm.com>
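A minimal sketch of the substitution, assuming the out-of-line atomics object is one of the files whose flags are filtered:

    # before (assumed): CFLAGS_REMOVE_atomic_ll_sc.o = -pg
    # after: strip whatever the ftrace instrumentation flags happen to be
    CFLAGS_REMOVE_atomic_ll_sc.o = $(CC_FLAGS_FTRACE)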
# cc9f8349 | 04-Dec-2018 | Jackie Liu <liuyun01@kylinos.cn>
arm64: crypto: add NEON accelerated XOR implementation
This is a NEON acceleration method that can improve performance by approximately 20%. I got the following data from CentOS 7.5 on Huawei's HISI1616 chip:
[   93.837726] xor: measuring software checksum speed
[   93.874039]    8regs     :  7123.200 MB/sec
[   93.914038]    32regs    :  7180.300 MB/sec
[   93.954043]    arm64_neon:  9856.000 MB/sec
[   93.954047] xor: using function: arm64_neon (9856.000 MB/sec)
I believe this code can bring some optimization to all arm64 platforms. Thanks to Ard Biesheuvel for his suggestions.
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
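One plausible kbuild hookup for the new routine (the config symbol and the flag removal are assumptions; arm64 normally builds with -mgeneral-regs-only, which a NEON object must drop):

    # arch/arm64/lib/Makefile (sketch): build the NEON XOR helpers only when
    # the XOR block layer is enabled, and let this one object use FP/SIMD
    obj-$(CONFIG_XOR_BLOCKS) += xor-neon.o
    CFLAGS_REMOVE_xor-neon.o += -mgeneral-regs-only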
# 2a6c7c36 | 19-Sep-2018 | Tri Vo <trong@android.com>
arm64: lse: remove -fcall-used-x0 flag
x0 is not callee-saved in the PCS. So there is no need to specify -fcall-used-x0.
Clang doesn't currently support -fcall-used flags. This patch will help building the kernel with clang.
Tested-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Tri Vo <trong@android.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
# 7481cddf | 27-Aug-2018 | Ard Biesheuvel <ard.biesheuvel@linaro.org>
arm64/lib: add accelerated crc32 routines
Unlike crc32c(), which is wired up to the crypto API internally so the optimal driver is selected based on the platform's capabilities, crc32_le() is implemented as a library function using a slice-by-8 table based C implementation. Even though few of the call sites may be bottlenecks, calling a time variant implementation with a non-negligible D-cache footprint is a bit of a waste, given that ARMv8.1 and up mandates support for the CRC32 instructions that were optional in ARMv8.0, but are already widely available, even on the Cortex-A53 based Raspberry Pi.
So implement routines that use these instructions if available, and fall back to the existing generic routines otherwise. The selection is based on alternatives patching.
Note that this unconditionally selects CONFIG_CRC32 as a builtin. Since CRC32 is relied upon by core functionality such as CONFIG_OF_FLATTREE, this just codifies the status quo.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
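A hedged sketch of the build wiring this implies (the Kconfig line is paraphrased in a comment):

    # arch/arm64/Kconfig (sketch): the library CRC32 code must always be built in
    #   select CRC32
    # arch/arm64/lib/Makefile (sketch): alternatives-patched CRC32 routines
    obj-$(CONFIG_CRC32) += crc32.o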
# 7c8fc35d | 19-Jun-2018 | Will Deacon <will.deacon@arm.com>
locking/atomics/arm64: Replace our atomic/lock bitop implementations with asm-generic
The <asm-generic/bitops/{atomic,lock}.h> implementations are built around the atomic-fetch ops, which we implement efficiently for both LSE and LL/SC systems. Use that instead of our hand-rolled, out-of-line bitops.S.
Signed-off-by: Will Deacon <will.deacon@arm.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-arm-kernel@lists.infradead.org Cc: yamada.masahiro@socionext.com Link: https://lore.kernel.org/lkml/1529412794-17720-9-git-send-email-will.deacon@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
# 3789c122 | 27-Apr-2018 | Mark Rutland <mark.rutland@arm.com>
arm64: avoid instrumenting atomic_ll_sc.o
Our out-of-line atomics are built with a special calling convention, preventing pointless stack spilling, and allowing us to patch call sites with ARMv8.1 atomic instructions.
Instrumentation inserted by the compiler may result in calls to functions not following this special calling convention, resulting in registers being unexpectedly clobbered, and various problems resulting from this.
For example, if a kernel is built with KCOV and ARM64_LSE_ATOMICS, the compiler inserts calls to __sanitizer_cov_trace_pc in the prologues of the atomic functions. This has been observed to result in spurious cmpxchg failures, leading to a hang early on in the boot process.
This patch avoids such issues by preventing instrumentation of our out-of-line atomics.
Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
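The standard per-object kbuild switches cover this; a sketch of the kind of lines involved (the exact set the patch picked is not shown in this log):

    # keep compiler-inserted instrumentation out of the special-convention atomics
    GCOV_PROFILE_atomic_ll_sc.o    := n
    KASAN_SANITIZE_atomic_ll_sc.o  := n
    KCOV_INSTRUMENT_atomic_ll_sc.o := n
    UBSAN_SANITIZE_atomic_ll_sc.o  := n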
# 6b24442d | 09-Feb-2018 | Will Deacon <will.deacon@arm.com>
arm64: lse: Pass -fomit-frame-pointer to out-of-line ll/sc atomics
In cases where x30 is used as a temporary in the out-of-line ll/sc atomics (e.g. atomic_fetch_add), the compiler tends to put out a full stackframe, which includes pointing x29 at the new frame.
Since these things aren't traceable anyway, we can pass -fomit-frame-pointer to reduce the work when spilling. Since this is incompatible with -pg, we also remove that from the CFLAGS for this file.
Signed-off-by: Will Deacon <will.deacon@arm.com>
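A minimal sketch of the flag change described above:

    # the out-of-line LL/SC helpers aren't traceable anyway, so skip the
    # frame pointer and drop the (incompatible) profiling flag for this file
    CFLAGS_atomic_ll_sc.o        += -fomit-frame-pointer
    CFLAGS_REMOVE_atomic_ll_sc.o := -pg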
# fb872273 | 03-Nov-2017 | Jason A. Donenfeld <Jason@zx2c4.com>
arm64: support __int128 on gcc 5+
Versions of gcc prior to gcc 5 emitted a __multi3 function call when dealing with TI types, resulting in failures when trying to link to libgcc, and more generally, bad performance. However, since gcc 5, the compiler supports actually emitting fast instructions, which means we can at long last enable this option and receive the speedups.
The gcc commit that added proper Aarch64 support is: https://gcc.gnu.org/git/?p=gcc.git;a=commitdiff;h=d1ae7bb994f49316f6f63e6173f2931e837a351d This commit appears to be part of the gcc 5 release.
There are still a few instructions, __ashlti3 and __ashrti3, which require libgcc, which is fine. Rather than linking to libgcc, we simply provide them ourselves, since they're not that complicated.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
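Roughly, the build-system side could look like the sketch below; the cc-ifversion gating and the tishift object name are assumptions based on the description above, not taken from this log:

    # arch/arm64/Makefile (sketch): only gcc >= 5 emits fast TI-mode code
    KBUILD_CFLAGS += $(call cc-ifversion, -ge, 0500, -DCONFIG_ARCH_SUPPORTS_INT128)

    # arch/arm64/lib/Makefile (sketch): provide __ashlti3/__ashrti3 ourselves
    # instead of linking against libgcc
    lib-y += tishift.o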
# b2441318 | 01-Nov-2017 | Greg Kroah-Hartman <gregkh@linuxfoundation.org>
License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0' SPDX license identifier. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boiler plate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of the use cases:
 - file had no licensing information in it.
 - file was a */uapi/* one with no licensing information in it,
 - file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases where non-standard license headers were used, and references to license had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier to be applied to a file was done in a spreadsheet of side by side results from of the output of two independent scanners (ScanCode & Windriver) producing SPDX tag:value files created by Philippe Ombredanne. Philippe prepared the base worksheet, and did an initial spot review of a few 1000 files.
The 4.13 kernel was the starting point of the analysis with 60,537 files assessed. Kate Stewart did a file by file comparison of the scanner results in the spreadsheet to determine which SPDX license identifier(s) to be applied to the file. She confirmed any determination that was not immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging was:
 - Files considered eligible had to be source code files.
 - Make and config files were included as candidates if they contained >5 lines of source.
 - File already had some variant of a license header in it (even if <5 lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license identifiers to apply.
- when both scanners couldn't find any license traces, file was considered to have no license information in it, and the top level COPYING file license applied.
For non */uapi/* files that summary was:
  SPDX license identifier                             # files
  ---------------------------------------------------|-------
  GPL-2.0                                               11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH Linux-syscall-note" otherwise it was "GPL-2.0". Results of that was:
  SPDX license identifier                             # files
  ---------------------------------------------------|-------
  GPL-2.0 WITH Linux-syscall-note                         930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one of the */uapi/* ones, it was denoted with the Linux-syscall-note if any GPL family license was found in the file or had no licensing in it (per prior point). Results summary:
  SPDX license identifier                              # files
  ----------------------------------------------------|------
  GPL-2.0 WITH Linux-syscall-note                         270
  GPL-2.0+ WITH Linux-syscall-note                        169
  ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)      21
  ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)      17
  LGPL-2.1+ WITH Linux-syscall-note                        15
  GPL-1.0+ WITH Linux-syscall-note                         14
  ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)      5
  LGPL-2.0+ WITH Linux-syscall-note                         4
  LGPL-2.1 WITH Linux-syscall-note                          3
  ((GPL-2.0 WITH Linux-syscall-note) OR MIT)                3
  ((GPL-2.0 WITH Linux-syscall-note) AND MIT)               1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became the concluded license(s).
- when there was disagreement between the two scanners (one detected a license but the other didn't, or they both detected different licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file resulted in a clear resolution of the license that should apply (and which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier, the file was flagged for further research and to be revisited later in time.
In total, over 70 hours of logged manual review was done on the spreadsheet to determine the SPDX license identifiers to apply to the source files by Kate, Philippe, Thomas and, in some cases, confirmation by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from FOSSology, and compared selected files where the other two scanners disagreed against that SPDX file, to see if there was new insights. The Windriver scanner is based on an older version of FOSSology in part, so they are related.
Thomas did random spot checks in about 500 files from the spreadsheets for the uapi headers and agreed with SPDX license identifier in the files he inspected. For the non-uapi files Thomas did random spot checks in about 15000 files.
In initial set of patches against 4.14-rc6, 3 files were found to have copy/paste license identifier errors, and have been fixed to reflect the correct identifier.
Additionally Philippe spent 10 hours this week doing a detailed manual inspection and review of the 12,461 patched files from the initial patch version early this week with:
 - a full scancode scan run, collecting the matched texts, detected license ids and scores
 - reviewing anything where there was a license detected (about 500+ files) to ensure that the applied SPDX license was correct
 - reviewing anything where there was no detection but the patch license was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This worksheet was then exported into 3 different .csv files for the different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to parse the csv files and add the proper SPDX tag to the file, in the format that the file expected. This script was further refined by Greg based on the output to detect more types of files automatically and to distinguish between header and source .c files (which need different comment types.) Finally Greg ran the script using the .csv files to generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org> Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
# 5d7bdeb1 | 25-Jul-2017 | Robin Murphy <robin.murphy@arm.com>
arm64: uaccess: Implement *_flushcache variants
Implement the set of copy functions with guarantees of a clean cache upon completion necessary to support the pmem driver.
Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
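A one-line sketch of how the new copy routines would be built in (the object name and config symbol are assumptions matching the feature described above):

    # arch/arm64/lib/Makefile (sketch): only built when the architecture
    # advertises cache-flushing user copies for the pmem driver
    lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o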
# 5be8b70a | 25-Feb-2016 | Ard Biesheuvel <ard.biesheuvel@linaro.org>
arm64: lse: deal with clobbered IP registers after branch via PLT
The LSE atomics implementation uses runtime patching to patch in calls to out of line non-LSE atomics implementations on cores that lack hardware support for LSE. To avoid paying the overhead cost of a function call even if no call ends up being made, the bl instruction is kept invisible to the compiler, and the out of line implementations preserve all registers, not just the ones that they are required to preserve as per the AAPCS64.
However, commit fd045f6cd98e ("arm64: add support for module PLTs") added support for routing branch instructions via veneers if the branch target offset exceeds the range of the ordinary relative branch instructions. Since this deals with jump and call instructions that are exposed to ELF relocations, the PLT code uses x16 to hold the address of the branch target when it performs an indirect branch-to-register, something which is explicitly allowed by the AAPCS64 (and ordinary compiler generated code does not expect register x16 or x17 to retain their values across a bl instruction).
Since the lse runtime patched bl instructions don't adhere to the AAPCS64, they don't deal with this clobbering of registers x16 and x17. So add them to the clobber list of the asm() statements that perform the call instructions, and drop x16 and x17 from the list of registers that are callee saved in the out of line non-LSE implementations.
In addition, since we have given these functions two scratch registers, they no longer need to stack/unstack temp registers.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [will: factored clobber list into #define, updated Makefile comment] Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
# c0385b24 | 03-Feb-2015 | Will Deacon <will.deacon@arm.com>
arm64: introduce CONFIG_ARM64_LSE_ATOMICS as fallback to ll/sc atomics
In order to patch in the new atomic instructions at runtime, we need to generate wrappers around the out-of-line exclusive load/store atomics.
This patch adds a new Kconfig option, CONFIG_ARM64_LSE_ATOMICS, which causes our atomic functions to branch to the out-of-line ll/sc implementations. To avoid the register spill overhead of the PCS, the out-of-line functions are compiled with specific compiler flags to force out-of-line save/restore of any registers that are usually caller-saved.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
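To make the register-preservation trick concrete, here is an abridged, assumed sketch of the per-file flags; the real list pins and saves many more registers:

    lib-$(CONFIG_ARM64_LSE_ATOMICS) += atomic_ll_sc.o
    # make the out-of-line helpers save/restore normally caller-saved registers
    # themselves, so inline call sites don't pay the usual PCS spill cost
    CFLAGS_atomic_ll_sc.o := -fcall-used-x0 -ffixed-x1 -ffixed-x2 -ffixed-x3 \
                             -fcall-saved-x4 -fcall-saved-x5 -fcall-saved-x6 \
                             -fcall-saved-x7 -fcall-saved-x8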
# 0a42cb0a | 28-Apr-2014 | zhichang.yuan <zhichang.yuan@linaro.org>
arm64: lib: Implement optimized string length routines
This patch, based on Linaro's Cortex Strings library, adds an assembly optimized strlen() and strnlen() functions.
Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org> Signed-off-by: Deepak Saxena <dsaxena@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
# 192c4d90 | 28-Apr-2014 | zhichang.yuan <zhichang.yuan@linaro.org>
arm64: lib: Implement optimized string compare routines
This patch, based on Linaro's Cortex Strings library, adds an assembly optimized strcmp() and strncmp() functions.
Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org> Signed-off-by: Deepak Saxena <dsaxena@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
# d875c9b3 | 28-Apr-2014 | zhichang.yuan <zhichang.yuan@linaro.org>
arm64: lib: Implement optimized memcmp routine
This patch, based on Linaro's Cortex Strings library, adds an assembly optimized memcmp() function.
Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org> Signed-off-by: Deepak Saxena <dsaxena@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>