Masahiro Yamada [Sun, 7 Nov 2021 16:18:02 +0000 (01:18 +0900)]
arm64: vdso: remove -nostdlib compiler flag
The -nostdlib option requests the compiler to not use the standard
system startup files or libraries when linking. It is effective only
when $(CC) is used as a linker driver.
Since commit 691efbedc60d ("arm64: vdso: use $(LD) instead of $(CC)
to link VDSO"), $(LD) is directly used, hence -nostdlib is unneeded.
Reiji Watanabe [Mon, 1 Nov 2021 04:54:21 +0000 (21:54 -0700)]
arm64: arm64_ftr_reg->name may not be a human-readable string
The id argument of ARM64_FTR_REG_OVERRIDE() is used for two purposes:
one as the system register encoding (used for the sys_id field of
__ftr_reg_entry), and the other as the register name (stringified
and used for the name field of arm64_ftr_reg), which is debug
information. The id argument is supposed to be a macro that
indicates the encoding of the register (e.g. SYS_ID_AA64PFR0_EL1).
ARM64_FTR_REG(), which also has the same id argument,
uses ARM64_FTR_REG_OVERRIDE() and passes the id to the macro.
Since the id argument is completely macro-expanded before it is
substituted into the macro body of ARM64_FTR_REG_OVERRIDE(),
the stringified id in the body of ARM64_FTR_REG_OVERRIDE() is not
a human-readable register name, but a string of numeric bitwise
operations.
Fix this so that human-readable register names are available as
debug information.
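A minimal, self-contained illustration of the preprocessor behaviour at play
(the names below are stand-ins, not the kernel's macros); stringifying at the
outermost macro, as the fix does, preserves the name:

  #include <stdio.h>

  #define SYS_FOO_EL1   ((3 << 19) | (4 << 7))   /* stand-in register encoding */

  #define INNER(id)     #id          /* stringifies the tokens it receives */
  #define OUTER(id)     INNER(id)    /* id is fully expanded before reaching INNER */

  int main(void)
  {
          puts(OUTER(SYS_FOO_EL1));  /* prints "((3 << 19) | (4 << 7))" */
          puts(INNER(SYS_FOO_EL1));  /* prints "SYS_FOO_EL1" */
          return 0;
  }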
Fixes: 8f266a5d878a ("arm64: cpufeature: Add global feature override facility")
Signed-off-by: Reiji Watanabe <reijiw@google.com>
Reviewed-by: Oliver Upton <oupton@google.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211101045421.2215822-1-reijiw@google.com
Signed-off-by: Will Deacon <will@kernel.org>
Will Deacon [Fri, 29 Oct 2021 11:25:38 +0000 (12:25 +0100)]
Merge branch 'for-next/vdso' into for-next/core
* for-next/vdso:
arm64: vdso32: require CROSS_COMPILE_COMPAT for gcc+bfd
arm64: vdso32: suppress error message for 'make mrproper'
arm64: vdso32: drop test for -march=armv8-a
arm64: vdso32: drop the test for dmb ishld
Will Deacon [Fri, 29 Oct 2021 11:25:33 +0000 (12:25 +0100)]
Merge branch 'for-next/trbe-errata' into for-next/core
* for-next/trbe-errata:
arm64: errata: Add detection for TRBE write to out-of-range
arm64: errata: Add workaround for TSB flush failures
arm64: errata: Add detection for TRBE overwrite in FILL mode
arm64: Add Neoverse-N2, Cortex-A710 CPU part definition
Will Deacon [Fri, 29 Oct 2021 11:25:29 +0000 (12:25 +0100)]
Merge branch 'for-next/sve' into for-next/core
* for-next/sve:
arm64/sve: Fix warnings when SVE is disabled
arm64/sve: Add stub for sve_max_virtualisable_vl()
arm64/sve: Track vector lengths for tasks in an array
arm64/sve: Explicitly load vector length when restoring SVE state
arm64/sve: Put system wide vector length information into structs
arm64/sve: Use accessor functions for vector lengths in thread_struct
arm64/sve: Rename find_supported_vector_length()
arm64/sve: Make access to FFR optional
arm64/sve: Make sve_state_size() static
arm64/sve: Remove sve_load_from_fpsimd_state()
arm64/fp: Reindent fpsimd_save()
Will Deacon [Fri, 29 Oct 2021 11:24:53 +0000 (12:24 +0100)]
Merge branch 'for-next/kselftest' into for-next/core
* for-next/kselftest:
selftests: arm64: Factor out utility functions for assembly FP tests
selftests: arm64: Add coverage of ptrace flags for SVE VL inheritance
selftests: arm64: Verify that all possible vector lengths are handled
selftests: arm64: Fix and enable test for setting current VL in vec-syscfg
selftests: arm64: Remove bogus error check on writing to files
selftests: arm64: Fix printf() format mismatch in vec-syscfg
selftests: arm64: Move FPSIMD in SVE ptrace test into a function
selftests: arm64: More comprehensively test the SVE ptrace interface
selftests: arm64: Verify interoperation of SVE and FPSIMD register sets
selftests: arm64: Clarify output when verifying SVE register set
selftests: arm64: Document what the SVE ptrace test is doing
selftests: arm64: Remove extraneous register setting code
selftests: arm64: Don't log child creation as a test in SVE ptrace test
selftests: arm64: Use a define for the number of SVE ptrace tests to be run
Will Deacon [Fri, 29 Oct 2021 11:24:47 +0000 (12:24 +0100)]
Merge branch 'for-next/kexec' into for-next/core
* for-next/kexec:
arm64: trans_pgd: remove trans_pgd_map_page()
arm64: kexec: remove cpu-reset.h
arm64: kexec: remove the pre-kexec PoC maintenance
arm64: kexec: keep MMU enabled during kexec relocation
arm64: kexec: install a copy of the linear-map
arm64: kexec: use ld script for relocation function
arm64: kexec: relocate in EL1 mode
arm64: kexec: configure EL2 vectors for kexec
arm64: kexec: pass kimage as the only argument to relocation function
arm64: kexec: Use dcache ops macros instead of open-coding
arm64: kexec: skip relocation code for inplace kexec
arm64: kexec: flush image and lists during kexec load time
arm64: hibernate: abstract ttrb0 setup function
arm64: trans_pgd: hibernate: Add trans_pgd_copy_el2_vectors
arm64: kernel: add helper for booted at EL2 and not VHE
Will Deacon [Fri, 29 Oct 2021 11:20:21 +0000 (12:20 +0100)]
Merge branch 'for-next/8.6-timers' into for-next/core
* for-next/8.6-timers:
arm64: Add HWCAP for self-synchronising virtual counter
arm64: Add handling of CNTVCTSS traps
arm64: Add CNT{P,V}CTSS_EL0 alternatives to cnt{p,v}ct_el0
arm64: Add a capability for FEAT_ECV
clocksource/drivers/arch_arm_timer: Move workaround synchronisation around
clocksource/drivers/arm_arch_timer: Fix masking for high freq counters
clocksource/drivers/arm_arch_timer: Drop unnecessary ISB on CVAL programming
clocksource/drivers/arm_arch_timer: Remove any trace of the TVAL programming interface
clocksource/drivers/arm_arch_timer: Work around broken CVAL implementations
clocksource/drivers/arm_arch_timer: Advertise 56bit timer to the core code
clocksource/drivers/arm_arch_timer: Move MMIO timer programming over to CVAL
clocksource/drivers/arm_arch_timer: Fix MMIO base address vs callback ordering issue
clocksource/drivers/arm_arch_timer: Move drop _tval from erratum function names
clocksource/drivers/arm_arch_timer: Move system register timer programming over to CVAL
clocksource/drivers/arm_arch_timer: Extend write side of timer register accessors to u64
clocksource/drivers/arm_arch_timer: Drop CNT*_TVAL read accessors
clocksource/arm_arch_timer: Add build-time guards for unhandled register accesses
With 6caa5812e2d1 ("KVM: arm64: Use generic KVM xfer to guest work
function") all arm64 exit paths are properly equipped to handle the
POSIX timers' task work.
Deferring timer callbacks to thread context not only limits the amount
of time spent in hard interrupt context, but is also a safer
implementation[1], and will allow PREEMPT_RT setups to use KVM[2].
So let's enable POSIX_CPU_TIMERS_TASK_WORK on arm64.
Mark Brown [Tue, 26 Oct 2021 11:18:02 +0000 (12:18 +0100)]
arm64: Document boot requirements for FEAT_SME_FA64
The EAC1 release of the SME specification adds the FA64 feature which
requires enablement at higher ELs before lower ELs can use it. Document
what we require from higher ELs in our boot requirements.
Mark Brown [Fri, 22 Oct 2021 14:16:35 +0000 (15:16 +0100)]
arm64/sve: Fix warnings when SVE is disabled
In configurations where SVE is disabled we define but never reference the
functions for retrieving the default vector length, causing warnings. Fix
this by moving the ifdef up, marking get_default_vl() inline since it is
referenced from code guarded by an IS_ENABLED() check, and doing the same
for the other accessors for consistency.
Suzuki K Poulose [Tue, 19 Oct 2021 16:31:42 +0000 (17:31 +0100)]
arm64: errata: Add detection for TRBE write to out-of-range
Arm Neoverse-N2 and Cortex-A710 cores are affected by an erratum where
the TRBE, under some circumstances, might write up to 64 bytes to an
address after the Limit as programmed by TRBLIMITR_EL1.LIMIT.
This might:
- Corrupt a page in the ring buffer, which may corrupt trace from a
  previous session, consumed by userspace.
- Hit the guard page at the end of the vmalloc area and raise a fault.
To keep the handling simpler, we always leave out the last page of the
range that the TRBE is allowed to write to. This can be achieved by
ensuring that we always have more than a PAGE worth of space in the range
while calculating the LIMIT for the TRBE, and then adjusting the LIMIT
pointer to leave that page (TRBLIMITR.LIMIT -= PAGE_SIZE) out of the
TRBE range while enabling it. This makes sure that the TRBE will only
write to an area within its allowed limit (i.e. [head, head+size]) and
we do not have to handle address faults within the driver.
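In sketch form, the limit adjustment described above amounts to something like
the following (this is not the TRBE driver's actual code; the names are
illustrative):

  static unsigned long trbe_adjust_limit(unsigned long base, unsigned long limit)
  {
          if (limit - base <= PAGE_SIZE)
                  return 0;               /* need more than a page worth of space */

          return limit - PAGE_SIZE;       /* keep the last page out of the TRBE range */
  }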
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Mike Leach <mike.leach@linaro.org>
Cc: Leo Yan <leo.yan@linaro.org>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20211019163153.3692640-5-suzuki.poulose@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Suzuki K Poulose [Tue, 19 Oct 2021 16:31:41 +0000 (17:31 +0100)]
arm64: errata: Add workaround for TSB flush failures
Arm Neoverse-N2 (#2067961) and Cortex-A710 (#2054223) suffer
from errata where a TSB (trace synchronization barrier)
fails to flush the trace data completely when executed from
a trace prohibited region. In Linux we always execute it
after we have moved the PE to a trace prohibited region, so
we can apply the workaround every time a TSB is executed.
The workaround is to issue two TSBs consecutively.
NOTE: This erratum is defined as a LOCAL_CPU_ERRATUM, implying
that a late CPU could be blocked from booting if it is the
first CPU that requires the workaround. This is because we
do not allow setting cpu_hwcaps after SMP boot. The
alternative is to use "this_cpu_has_cap()" instead
of the faster system-wide check, which may be a bit of an
overhead, given that we may have to do this in the nVHE KVM host
before a guest entry.
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Mike Leach <mike.leach@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20211019163153.3692640-4-suzuki.poulose@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Suzuki K Poulose [Tue, 19 Oct 2021 16:31:40 +0000 (17:31 +0100)]
arm64: errata: Add detection for TRBE overwrite in FILL mode
Arm Neoverse-N2 and Cortex-A710 cores are affected
by a CPU erratum where the TRBE will overwrite the trace buffer
in FILL mode. The TRBE doesn't stop (as expected in FILL mode)
when it reaches the limit; it wraps to the base and continues
writing up to 3 cache lines, overwriting any trace that
was written previously.
Add the Neoverse-N2 erratum (#2139208) and Cortex-A710 erratum
(#2119858) to the detection logic.
This will be used by the TRBE driver in later patches to work
around the issue. The detection has been kept within the core
arm64 errata framework list to make sure:
- We don't duplicate the framework in the TRBE driver
- The errata detection is advertised like the rest
  of the CPU errata.
Note that the Kconfig entries are not fully active until the
TRBE driver implements the workaround.
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Mike Leach <mike.leach@linaro.org>
Cc: Leo Yan <leo.yan@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20211019163153.3692640-3-suzuki.poulose@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Mark Brown [Tue, 19 Oct 2021 18:18:51 +0000 (19:18 +0100)]
selftests: arm64: Factor out utility functions for assembly FP tests
The various floating point test programs written in assembly have a bunch
of helper functions and macros which are cut'n'pasted between them. Factor
them out into a separate source file which is linked into all of them.
We don't include memcmp() since it isn't as generic as it should be and
directly branches to report an error in the programs.
For inline assembly, we place exception fixups out-of-line in the
`.fixup` section such that these are out of the way of the fast path.
This has a few drawbacks:
* Since the fixup code is anonymous, backtraces will symbolize fixups as
offsets from the nearest prior symbol, currently
`__entry_tramp_text_end`. This is confusing, and painful to debug
without access to the relevant vmlinux.
* Since the exception handler adjusts the PC to execute the fixup, and
the fixup uses a direct branch back into the function it fixes,
backtraces of fixups miss the original function. This is confusing,
and violates requirements for RELIABLE_STACKTRACE (and therefore
LIVEPATCH).
* Inline assembly and associated fixups are generated from templates,
and we have many copies of logically identical fixups which only
differ in which specific registers are written to and which address is
branched to at the end of the fixup. This is potentially wasteful of
I-cache resources, and makes it hard to add additional logic to fixups
without significant bloat.
* In the case of load_unaligned_zeropad(), the logic in the fixup
requires a temporary register that we must allocate even in the
fast-path where it will not be used.
This patch addresses all four concerns for load_unaligned_zeropad() fixups
by adding a dedicated exception handler which performs the fixup logic
in exception context and then returns to the instruction after the faulting
one. For the moment, the fixup logic is identical to the old
assembly fixup logic, but in future we could enhance this by taking the
ESR and FAR into account to constrain the faults we try to fix up, or to
specialize fixups for MTE tag check faults.
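Conceptually, the fixup logic (old or new) does something like the following;
this is a hedged C rendition for the little-endian case, not the kernel's
actual handler:

  static unsigned long zeropad_fixup(unsigned long addr)
  {
          unsigned long aligned = addr & ~(sizeof(unsigned long) - 1);
          unsigned long data = *(unsigned long *)aligned;  /* aligned load stays in the mapped page */
          unsigned int shift = (addr - aligned) * 8;

          return data >> shift;   /* valid bytes in the low end, zeroes above */
  }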
Other than backtracing, there should be no functional change as a result
of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-13-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Mark Rutland [Tue, 19 Oct 2021 16:02:17 +0000 (17:02 +0100)]
arm64: extable: add a dedicated uaccess handler
For inline assembly, we place exception fixups out-of-line in the
`.fixup` section such that these are out of the way of the fast path.
This has a few drawbacks:
* Since the fixup code is anonymous, backtraces will symbolize fixups as
offsets from the nearest prior symbol, currently
`__entry_tramp_text_end`. This is confusing, and painful to debug
without access to the relevant vmlinux.
* Since the exception handler adjusts the PC to execute the fixup, and
the fixup uses a direct branch back into the function it fixes,
backtraces of fixups miss the original function. This is confusing,
and violates requirements for RELIABLE_STACKTRACE (and therefore
LIVEPATCH).
* Inline assembly and associated fixups are generated from templates,
and we have many copies of logically identical fixups which only
differ in which specific registers are written to and which address is
branched to at the end of the fixup. This is potentially wasteful of
I-cache resources, and makes it hard to add additional logic to fixups
without significant bloat.
This patch addresses all three concerns for inline uaccess fixups by
adding a dedicated exception handler which updates registers in
exception context and then returns back into the function which
faulted, removing the need for fixups specialized to each faulting
instruction.
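A hedged sketch of what such a handler can look like, based on the description
above (the field-extraction macros and exact helper names are assumptions, not
verified kernel code):

  static bool ex_handler_uaccess_err_zero(const struct exception_table_entry *ex,
                                          struct pt_regs *regs)
  {
          /* Register numbers recorded in ex->data when the extable entry was emitted. */
          int reg_err  = FIELD_GET(EX_DATA_REG_ERR, ex->data);
          int reg_zero = FIELD_GET(EX_DATA_REG_ZERO, ex->data);

          pt_regs_write_reg(regs, reg_err, -EFAULT);      /* report the fault */
          pt_regs_write_reg(regs, reg_zero, 0);           /* zero the destination */

          regs->pc = get_ex_fixup(ex);                    /* resume after the access */
          return true;
  }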
Other than backtracing, there should be no functional change as a result
of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-12-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Mark Rutland [Tue, 19 Oct 2021 16:02:16 +0000 (17:02 +0100)]
arm64: extable: add `type` and `data` fields
Subsequent patches will add specialized handlers for fixups, in addition
to the simple PC fixup and BPF handlers we have today. In preparation,
this patch adds a new `type` field to struct exception_table_entry, and
uses this to distinguish the fixup and BPF cases. A `data` field is also
added so that subsequent patches can associate data specific to each
exception site (e.g. register numbers).
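The resulting entry layout is roughly the following (a sketch based on the
sizes quoted below; the exact field types are an assumption):

  struct exception_table_entry {
          int     insn, fixup;    /* relative offsets, as before */
          short   type, data;     /* new: handler type and per-site data */
  };                              /* 4 + 4 + 2 + 2 = 12 bytes */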
Handlers are named ex_handler_*() for consistency, following the example
of x86. At the same time, get_ex_fixup() is split out into a helper so
that it can be used by other ex_handler_*() functions in subsequent
patches.
This patch will increase the size of the exception tables, which will be
remedied by subsequent patches removing redundant fixup code. There
should be no functional change as a result of this patch.
Since each entry is now 12 bytes in size, we must reduce the alignment
of each entry from `.align 3` (i.e. 8 bytes) to `.align 2` (i.e. 4
bytes), which is the natural alignment of the `insn` and `fixup` fields.
The current 8-byte alignment is a holdover from when the `insn` and
`fixup` fields were 8 bytes, and while not harmful has not been necessary
since commit:
Similarly, RO_EXCEPTION_TABLE_ALIGN is dropped to 4 bytes.
Concurrently with this patch, x86's exception table entry format is
being updated (similarly, to a 12-byte format with 32 bits of absolute
data). Once both have been merged it should be possible to unify the
sorttable logic for the two.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: James Morse <james.morse@arm.com>
Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-11-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Mark Rutland [Tue, 19 Oct 2021 16:02:15 +0000 (17:02 +0100)]
arm64: extable: use `ex` for `exception_table_entry`
Subsequent patches will extend `struct exception_table_entry` with more
fields, and the distinction between the entry and its `fixup` field will
become more important.
For clarity, let's consistently use `ex` to refer to an entire
entry. In subsequent patches we'll use `fixup` to refer to the fixup
field specifically. This matches the naming convention used today in
arch/arm64/net/bpf_jit_comp.c.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-10-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Mark Rutland [Tue, 19 Oct 2021 16:02:14 +0000 (17:02 +0100)]
arm64: extable: make fixup_exception() return bool
The return values of fixup_exception() and arm64_bpf_fixup_exception()
represent a boolean condition rather than an error code, so for clarity
it would be better to return `bool` rather than `int`.
This patch adjusts the code accordingly. While we're modifying the
prototype, we also remove the unnecessary `extern` keyword, so that this
won't look out of place when we make subsequent additions to the header.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: James Morse <james.morse@arm.com>
Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-9-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Mark Rutland [Tue, 19 Oct 2021 16:02:13 +0000 (17:02 +0100)]
arm64: extable: consolidate definitions
In subsequent patches we'll alter the structure and usage of struct
exception_table_entry. For inline assembly, we create these using the
`_ASM_EXTABLE()` CPP macro defined in <asm/uaccess.h>, and for plain
assembly code we use the `_asm_extable()` GAS macro defined in
<asm/assembler.h>, which are largely identical save for different
escaping and stringification requirements.
This patch moves the common definitions to a new <asm/asm-extable.h>
header, so that it's easier to keep the two in sync, and to remove the
implication that these are only used for uaccess helpers (as e.g.
load_unaligned_zeropad() is only used on kernel memory, and depends upon
`_ASM_EXTABLE()`).
At the same time, a few minor modifications are made for clarity and in
preparation for subsequent patches:
* The structure creation is factored out into an `__ASM_EXTABLE_RAW()`
  macro. This will make it easier to support different fixup variants in
  subsequent patches without needing to update all users of
  `_ASM_EXTABLE()`, and makes it easier to see that the CPP and GAS
  variants of the macros are structurally identical (a rough sketch
  follows after this list).
For the CPP macro, the stringification of fields is left to the
wrapper macro, `_ASM_EXTABLE()`, as in subsequent patches it will be
necessary to stringify fields in wrapper macros to safely concatenate
strings which cannot be token-pasted together in CPP.
* The fields of the structure are created separately on their own lines.
This will make it easier to add/remove/modify individual fields
clearly.
* Additional parentheses are added around the use of macro arguments in
field definitions to avoid any potential problems with evaluation due
to operator precedence, and to make errors upon misuse clearer.
* USER() is moved into <asm/asm-uaccess.h>, as it is not required by all
  assembly code, and is already referred to by comments in that file.
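A rough sketch of the factoring described in the first bullet above (the
directives and section name are illustrative, not copied from
<asm/asm-extable.h>):

  /* Raw builder: one field per line, arguments parenthesised. */
  #define __ASM_EXTABLE_RAW(insn, fixup)                  \
          ".pushsection   __ex_table, \"a\"\n"            \
          ".align         3\n"                            \
          ".long          ((" insn ") - .)\n"             \
          ".long          ((" fixup ") - .)\n"            \
          ".popsection\n"

  /* CPP wrapper for inline assembly: stringification happens here. */
  #define _ASM_EXTABLE(insn, fixup)  __ASM_EXTABLE_RAW(#insn, #fixup)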
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-8-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Mark Rutland [Tue, 19 Oct 2021 16:02:11 +0000 (17:02 +0100)]
arm64: factor out GPR numbering helpers
In <asm/sysreg.h> we have macros to convert the names of general purpose
registers (GPRs) into integer constants, which we use to manually build
the encoding for `MRS` and `MSR` instructions where we can't rely on the
assembler to do so for us.
In subsequent patches we'll need to map the same GPR names to integer
constants so that we can use this to build metadata for exception
fixups.
So that we can use the mappings elsewhere, factor out the
definitions into a new <asm/gpr-num.h> header, renaming the definitions
to align with this "GPR num" naming for clarity.
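A hedged sketch of the kind of mapping involved (the symbol prefix and the
subset shown are illustrative):

  /* Map x0..x30 and xzr onto their encodings so MRS/MSR can be built by hand. */
  asm(
  "       .irp    num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30\n"
  "       .equ    .L__gpr_num_x\\num, \\num\n"
  "       .endr\n"
  "       .equ    .L__gpr_num_xzr, 31\n"
  );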
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-6-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Mark Rutland [Tue, 19 Oct 2021 16:02:10 +0000 (17:02 +0100)]
arm64: kvm: use kvm_exception_table_entry
In subsequent patches we'll alter `struct exception_table_entry`, adding
fields that are not needed for KVM exception fixups.
In preparation for this, migrate KVM to its own `struct
kvm_exception_table_entry`, which is identical to the current format of
`struct exception_table_entry`. Comments are updated accordingly.
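Since the format is unchanged at this point, the new type is essentially just
(a sketch, not necessarily the exact kernel definition):

  /* Same two relative offsets as the current generic entry. */
  struct kvm_exception_table_entry {
          int insn, fixup;
  };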
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-5-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Mark Rutland [Tue, 19 Oct 2021 16:02:09 +0000 (17:02 +0100)]
arm64: lib: __arch_copy_to_user(): fold fixups into body
Like other functions, __arch_copy_to_user() places its exception fixups
in the `.fixup` section without any clear association with
__arch_copy_to_user() itself. If we backtrace the fixup code, it will be
symbolized as an offset from the nearest prior symbol, which happens to
be `__entry_tramp_text_end`. Further, since the PC adjustment for the
fixup is akin to a direct branch rather than a function call,
__arch_copy_to_user() itself will be missing from the backtrace.
This is confusing and hinders debugging. In general this pattern will
also be problematic for CONFIG_LIVEPATCH, since fixups often return to
their associated function, but this isn't accurately captured in the
stacktrace.
To solve these issues for assembly functions, we must move fixups into
the body of the functions themselves, after the usual fast-path returns.
This patch does so for __arch_copy_to_user().
Inline assembly will be dealt with in subsequent patches.
Other than the improved backtracing, there should be no functional
change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-4-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Mark Rutland [Tue, 19 Oct 2021 16:02:08 +0000 (17:02 +0100)]
arm64: lib: __arch_copy_from_user(): fold fixups into body
Like other functions, __arch_copy_from_user() places its exception
fixups in the `.fixup` section without any clear association with
__arch_copy_from_user() itself. If we backtrace the fixup code, it will
be symbolized as an offset from the nearest prior symbol, which happens
to be `__entry_tramp_text_end`. Further, since the PC adjustment for the
fixup is akin to a direct branch rather than a function call,
__arch_copy_from_user() itself will be missing from the backtrace.
This is confusing and hinders debugging. In general this pattern will
also be problematic for CONFIG_LIVEPATCH, since fixups often return to
their associated function, but this isn't accurately captured in the
stacktrace.
To solve these issues for assembly functions, we must move fixups into
the body of the functions themselves, after the usual fast-path returns.
This patch does so for __arch_copy_from_user().
Inline assembly will be dealt with in subsequent patches.
Other than the improved backtracing, there should be no functional
change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Mark Rutland [Tue, 19 Oct 2021 16:02:07 +0000 (17:02 +0100)]
arm64: lib: __arch_clear_user(): fold fixups into body
Like other functions, __arch_clear_user() places its exception fixups in
the `.fixup` section without any clear association with
__arch_clear_user() itself. If we backtrace the fixup code, it will be
symbolized as an offset from the nearest prior symbol, which happens to
be `__entry_tramp_text_end`. Further, since the PC adjustment for the
fixup is akin to a direct branch rather than a function call,
__arch_clear_user() itself will be missing from the backtrace.
This is confusing and hinders debugging. In general this pattern will
also be problematic for CONFIG_LIVEPATCH, since fixups often return to
their associated function, but this isn't accurately captured in the
stacktrace.
To solve these issues for assembly functions, we must move fixups into
the body of the functions themselves, after the usual fast-path returns.
This patch does so for __arch_clear_user().
Inline assembly will be dealt with in subsequent patches.
Other than the improved backtracing, there should be no functional
change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-2-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Nick Desaulniers [Tue, 19 Oct 2021 22:36:46 +0000 (15:36 -0700)]
arm64: vdso32: require CROSS_COMPILE_COMPAT for gcc+bfd
Similar to commit 231ad7f409f1 ("Makefile: infer --target from ARCH for
CC=clang"), there really is no point in setting --target based on
$CROSS_COMPILE_COMPAT for clang when the integrated assembler is being
used, since commit ef94340583ee ("arm64: vdso32: drop -no-integrated-as
flag").
This allows COMPAT_VDSO to be selected without setting $CROSS_COMPILE_COMPAT
when using clang and lld together.
Nick Desaulniers [Tue, 19 Oct 2021 22:36:45 +0000 (15:36 -0700)]
arm64: vdso32: suppress error message for 'make mrproper'
When running the following command without arm-linux-gnueabi-gcc in
one's $PATH, the following warning is observed:
$ ARCH=arm64 CROSS_COMPILE_COMPAT=arm-linux-gnueabi- make -j72 LLVM=1 mrproper
make[1]: arm-linux-gnueabi-gcc: No such file or directory
This is because KCONFIG is not run for mrproper, so CONFIG_CC_IS_CLANG
is not set, and we end up eagerly evaluating various variables that try
to invoke CC_COMPAT.
This is a similar problem to what was observed in
commit dc960bfeedb0 ("h8300: suppress error messages for 'make clean'")
Reported-by: Lucas Henneman <henneman@google.com>
Suggested-by: Masahiro Yamada <masahiroy@kernel.org>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lore.kernel.org/r/20211019223646.1146945-4-ndesaulniers@google.com
Signed-off-by: Will Deacon <will@kernel.org>
Nick Desaulniers [Tue, 19 Oct 2021 22:36:44 +0000 (15:36 -0700)]
arm64: vdso32: drop test for -march=armv8-a
As Arnd points out:
gcc-4.8 already supported -march=armv8, and we require gcc-5.1 now, so
both this #if/#else construct and the corresponding
"cc32-option,-march=armv8-a" check should be obsolete now.
According to Documentation/process/changes.rst, the minimum supported
version of binutils is 2.23. Since all supported versions of GAS support
this instruction, drop the assembler invocation, preprocessor
flags/guards, and the cross assembler macro that's now unused.
This also avoids a recursive self reference in a follow up cleanup
patch.
Cc: Christian Biesinger <cbiesinger@google.com>
Cc: Simon Marchi <simon.marchi@polymtl.ca>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lore.kernel.org/r/20211019223646.1146945-2-ndesaulniers@google.com
Signed-off-by: Will Deacon <will@kernel.org>
Mark Brown [Tue, 19 Oct 2021 17:22:14 +0000 (18:22 +0100)]
arm64/sve: Track vector lengths for tasks in an array
As for SVE, we will track a per-task SME vector length. Convert
the existing storage for the vector length into an array and update
fpsimd_flush_task() to initialise this in a function.
Mark Brown [Tue, 19 Oct 2021 17:22:13 +0000 (18:22 +0100)]
arm64/sve: Explicitly load vector length when restoring SVE state
Currently when restoring the SVE state we supply the SVE vector length
as an argument to sve_load_state() and the underlying macros. This becomes
inconvenient with the addition of SME since we may need to restore any
combination of SVE and SME vector lengths, and we already separately
restore the vector length in the KVM code. We don't need to know the vector
length during the actual register load since the SME load instructions can
index into the data array for us.
Refactor the interface so we explicitly set the vector length separately
to restoring the SVE registers in preparation for adding SME support, no
functional change should be involved.
Mark Brown [Tue, 19 Oct 2021 17:22:12 +0000 (18:22 +0100)]
arm64/sve: Put system wide vector length information into structs
With the introduction of SME we will have a second vector length in the
system, enumerated and configured in a very similar fashion to the
existing SVE vector length. While there are a few differences in how
things are handled, this is a relatively small portion of the overall
code, so in order to avoid code duplication we factor the shared handling
out into structs.
We create two structs, one vl_info for the static hardware properties
and one vl_config for the runtime configuration, with an array
instantiated for each and update all the users to reference these. Some
accessor functions are provided where helpful for readability, and the
write to set the vector length is put into a function since the system
register being updated needs to be chosen at compile time.
This is a mostly mechanical replacement, further work will be required
to actually make things generic, ensuring that we handle those places
where there are differences properly.
Mark Brown [Tue, 19 Oct 2021 17:22:11 +0000 (18:22 +0100)]
arm64/sve: Use accessor functions for vector lengths in thread_struct
In a system with SME there are parallel vector length controls for SVE and
SME vectors which function in much the same way so it is desirable to
share the code for handling them as much as possible. In order to prepare
for doing this add a layer of accessor functions for the various VL related
operations on tasks.
Since almost all current interactions are actually via task->thread rather
than directly with the thread_info the accessors use that. Accessors are
provided for both generic and SVE specific usage, the generic accessors
should be used for cases where register state is being manipulated since
the registers are shared between streaming and regular SVE so we know that
when SME support is implemented we will always have to be in the appropriate
mode already and hence can generalise now.
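The rough shape of the accessor layer, as a hedged sketch (the field, enum and
function names here are assumptions for illustration, not the exact kernel
API):

  unsigned int task_get_vl(const struct task_struct *task, enum vec_type type)
  {
          return task->thread.vl[type];            /* generic, keyed on vector type */
  }

  unsigned int task_get_sve_vl(const struct task_struct *task)
  {
          return task_get_vl(task, ARM64_VEC_SVE); /* SVE-specific convenience wrapper */
  }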
Since we are using task_struct and we don't want to cause widespread
inclusion of sched.h, the accessors are all out of line; it is hoped that
none of the uses are in a sufficiently critical path for this to be an
issue. Those that are most likely to present an issue are in the same
translation unit, so hopefully the compiler may be able to inline them anyway.
This is purely adding the layer of abstraction, additional work will be
needed to support tasks using SME.
Mark Brown [Tue, 19 Oct 2021 17:22:10 +0000 (18:22 +0100)]
arm64/sve: Rename find_supported_vector_length()
The function has SVE specific checks in it and it will be more trouble
to add conditional code for SME than it is to simply rename it to be SVE
specific.
Mark Brown [Tue, 19 Oct 2021 17:22:09 +0000 (18:22 +0100)]
arm64/sve: Make access to FFR optional
SME introduces streaming SVE mode in which FFR is not present and the
instructions for accessing it UNDEF. In preparation for handling this
update the low level SVE state access functions to take a flag specifying
if FFR should be handled. When saving the register state we store a zero
for FFR to guard against uninitialized data being read. No behaviour change
should be introduced by this patch.
Mark Brown [Tue, 19 Oct 2021 17:22:07 +0000 (18:22 +0100)]
arm64/sve: Remove sve_load_from_fpsimd_state()
Following optimisations of the SVE register handling we no longer load the
SVE state from a saved copy of the FPSIMD registers, we convert directly
in registers or from one saved state to another. Remove the function so we
don't need to update it during further refactoring.
Mark Brown [Tue, 19 Oct 2021 17:22:06 +0000 (18:22 +0100)]
arm64/fp: Reindent fpsimd_save()
Currently all the active code in fpsimd_save() is inside a check for
TIF_FOREIGN_FPSTATE. Reduce the indentation level by changing to return
from the function if TIF_FOREIGN_FPSTATE is set.
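In sketch form, this is the usual early-return refactor (not the real function
body):

  static void fpsimd_save(void)
  {
          if (test_thread_flag(TIF_FOREIGN_FPSTATE))
                  return;         /* was: one big if-block wrapping everything below */

          /* ... save FPSIMD/SVE state at one less level of indentation ... */
  }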
Marc Zyngier [Sun, 17 Oct 2021 12:42:24 +0000 (13:42 +0100)]
arm64: Add handling of CNTVCTSS traps
Since CNTVCTSS obeys the same control bits as CNTVCT, add the necessary
decoding to the hook table. Note that there is no known user of
this at the moment.
Marc Zyngier [Sun, 17 Oct 2021 12:42:23 +0000 (13:42 +0100)]
arm64: Add CNT{P,V}CTSS_EL0 alternatives to cnt{p,v}ct_el0
CNTPCTSS_EL0 and CNTVCTSS_EL0 are alternatives to the usual
CNTPCT_EL0 and CNTVCT_EL0 that do not require a previous ISB
to be synchronised (SS stands for Self-Synchronising).
Use the ARM64_HAS_ECV capability to control alternative sequences
that switch to these low(er)-cost primitives. Note that the
counter access in the VDSO is for now left alone until we decide
whether we want to allow this.
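A hedged sketch of the alternative-based counter read this describes (close to,
but not guaranteed to match, the kernel's helper in <asm/arch_timer.h>):

  static __always_inline u64 counter_get_cntvct_sketch(void)
  {
          u64 cnt;

          /* Legacy path needs an ISB; the self-synchronising form does not. */
          asm volatile(ALTERNATIVE("isb\n\tmrs %0, cntvct_el0",
                                   "nop\n\t" __mrs_s("%0", SYS_CNTVCTSS_EL0),
                                   ARM64_HAS_ECV)
                       : "=r" (cnt));

          return cnt;
  }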
Will Deacon [Tue, 19 Oct 2021 09:52:11 +0000 (10:52 +0100)]
Merge branch 'timers/drivers/armv8.6_arch_timer' of https://git.linaro.org/people/daniel.lezcano/linux into for-next/8.6-timers
Pull Arm architected timer driver rework from Marc (via Daniel) so that
we can add the Armv8.6 support on top.
Link: https://lore.kernel.org/r/d0c55386-2f7f-a940-45bb-d80ae5e0f378@linaro.org
* 'timers/drivers/armv8.6_arch_timer' of https://git.linaro.org/people/daniel.lezcano/linux:
clocksource/drivers/arch_arm_timer: Move workaround synchronisation around
clocksource/drivers/arm_arch_timer: Fix masking for high freq counters
clocksource/drivers/arm_arch_timer: Drop unnecessary ISB on CVAL programming
clocksource/drivers/arm_arch_timer: Remove any trace of the TVAL programming interface
clocksource/drivers/arm_arch_timer: Work around broken CVAL implementations
clocksource/drivers/arm_arch_timer: Advertise 56bit timer to the core code
clocksource/drivers/arm_arch_timer: Move MMIO timer programming over to CVAL
clocksource/drivers/arm_arch_timer: Fix MMIO base address vs callback ordering issue
clocksource/drivers/arm_arch_timer: Move drop _tval from erratum function names
clocksource/drivers/arm_arch_timer: Move system register timer programming over to CVAL
clocksource/drivers/arm_arch_timer: Extend write side of timer register accessors to u64
clocksource/drivers/arm_arch_timer: Drop CNT*_TVAL read accessors
clocksource/arm_arch_timer: Add build-time guards for unhandled register accesses
Oliver Upton [Sun, 17 Oct 2021 12:42:20 +0000 (13:42 +0100)]
clocksource/drivers/arm_arch_timer: Fix masking for high freq counters
Unfortunately, the architecture provides no means to determine the bit
width of the system counter. However, we do know the following from the
specification:
- the system counter is at least 56 bits wide
- Roll-over time of not less than 40 years
To date, the arch timer driver has depended on the first property,
assuming any system counter to be 56 bits wide and masking off the rest.
However, combining a narrow clocksource mask with a high frequency
counter could result in prematurely wrapping the system counter by a
significant margin. For example, a 56 bit wide, 1GHz system counter
would wrap in a mere 2.28 years!
This is a problem for two reasons: v8.6+ implementations are required to
provide a 64 bit, 1GHz system counter. Furthermore, before v8.6,
implementers may select a counter frequency of their choosing.
Fix the issue by deriving a valid clock mask based on the second
property from above. Set the floor at 56 bits, since we know no system
counter is narrower than that.
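A hedged sketch of the derivation in plain C (not the driver's code): pick the
narrowest width whose roll-over period still exceeds 40 years at the given
rate, with a 56-bit floor.

  #include <stdint.h>

  static int counter_width_from_rate(uint64_t rate_hz)
  {
          /* Cycles elapsed over the guaranteed 40-year roll-over period. */
          uint64_t min_cycles = 40ULL * 365 * 24 * 3600 * rate_hz;
          int width = 1;

          while (width < 64 && (1ULL << width) < min_cycles)
                  width++;                        /* smallest width covering min_cycles */

          return width < 56 ? 56 : width;         /* architectural floor of 56 bits */
  }

  /* e.g. a 1GHz counter accumulates ~2^60.1 cycles in 40 years, giving width 61,
   * instead of a 56-bit mask that wraps every ~2.28 years. */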
[maz: fixed width computation not to lose the last bit, added
max delta generation for the timer]
Marc Zyngier [Sun, 17 Oct 2021 12:42:19 +0000 (13:42 +0100)]
clocksource/drivers/arm_arch_timer: Drop unnecessary ISB on CVAL programming
Switching from TVAL to CVAL has a small drawback: we need an ISB
before reading the counter. We cannot get rid of it, but we can
instead remove the one that comes just after writing to CVAL.
This reduces the number of ISBs from 3 to 2 when programming
the timer.
Marc Zyngier [Sun, 17 Oct 2021 12:42:17 +0000 (13:42 +0100)]
clocksource/drivers/arm_arch_timer: Work around broken CVAL implementations
The Applied Micro XGene-1 SoC has a busted implementation of the
CVAL register: it looks like it is based on TVAL instead of the
other way around. The net effect of this implementation blunder
is that the maximum deadline you can program in the timer is
32bit wide.
Use a MIDR check to notice the broken CPU, and reduce the width
of the timer to 32bit.
Marc Zyngier [Sun, 17 Oct 2021 12:42:15 +0000 (13:42 +0100)]
clocksource/drivers/arm_arch_timer: Move MMIO timer programming over to CVAL
Similarly to the sysreg-based timer, move the MMIO timer over to using
the CVAL registers instead of TVAL. Note that there is no guarantee
that the 64bit MMIO access will be atomic, but the timer is always
disabled at the point where we program CVAL.
Marc Zyngier [Sun, 17 Oct 2021 12:42:14 +0000 (13:42 +0100)]
clocksource/drivers/arm_arch_timer: Fix MMIO base address vs callback ordering issue
The MMIO timer base address gets published after we have registered
the callbacks and the interrupt handler, which is... a bit dangerous.
Fix this by moving the base address publication to the point where
we register the timer, and expose a pointer to the timer structure
itself rather than a naked value.
Reviewed-by: Oliver Upton <oupton@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-7-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Marc Zyngier [Sun, 17 Oct 2021 12:42:13 +0000 (13:42 +0100)]
clocksource/drivers/arm_arch_timer: Move drop _tval from erratum function names
The '_tval' name in the erratum handling function names doesn't
make much sense anymore (and they were using CVAL in the first place).
Drop the _tval tag.
Reviewed-by: Oliver Upton <oupton@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-6-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Marc Zyngier [Sun, 17 Oct 2021 12:42:12 +0000 (13:42 +0100)]
clocksource/drivers/arm_arch_timer: Move system register timer programming over to CVAL
In order to cope better with high frequency counters, move the
programming of the timers from the countdown timer (TVAL) over
to the comparator (CVAL).
The programming model is slightly different, as we now need to
read the current counter value to have an absolute deadline
instead of a relative one.
There is a small overhead to this change, which we will address
in the following patches.
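In sketch form, the programming-model difference looks like this (using the
generic sysreg accessors; the driver's real code goes through its own
erratum-aware wrappers):

  /* TVAL: a 32-bit countdown relative to "now". */
  static void set_next_event_tval(u64 delta)
  {
          write_sysreg(delta, cntv_tval_el0);
  }

  /* CVAL: a 64-bit absolute comparator, so the counter must be read first. */
  static void set_next_event_cval(u64 delta)
  {
          write_sysreg(read_sysreg(cntvct_el0) + delta, cntv_cval_el0);
  }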
Reviewed-by: Oliver Upton <oupton@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-5-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Marc Zyngier [Sun, 17 Oct 2021 12:42:11 +0000 (13:42 +0100)]
clocksource/drivers/arm_arch_timer: Extend write side of timer register accessors to u64
The various accessors for the timer sysreg and MMIO registers are
currently hardwired to 32bit. However, we are about to introduce
the use of the CVAL registers, which require a 64bit access.
Upgrade the write side of the accessors to take a 64bit value
(the read side is left untouched as we don't plan to ever read
back any of these registers).
No functional change expected.
Reviewed-by: Oliver Upton <oupton@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-4-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Marc Zyngier [Sun, 17 Oct 2021 12:42:10 +0000 (13:42 +0100)]
clocksource/drivers/arm_arch_timer: Drop CNT*_TVAL read accessors
The arch timer driver never reads the various TVAL registers, only
writes to them. It is thus pointless to provide accessors
for them and to implement errata workarounds.
Drop these read-side accessors, and add a couple of BUG() statements
for the time being. These statements will be removed further down
the line.
Reviewed-by: Oliver Upton <oupton@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-3-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Marc Zyngier [Sun, 17 Oct 2021 12:42:09 +0000 (13:42 +0100)]
clocksource/arm_arch_timer: Add build-time guards for unhandled register accesses
As we are about to change the registers that are used by the driver,
start by adding build-time checks to ensure that we always handle
all registers and access modes.
Fix modpost Section mismatch error in next_platform_timer().
[...]
WARNING: modpost: vmlinux.o(.text.unlikely+0x26e60): Section mismatch in reference from the function next_platform_timer() to the variable .init.data:acpi_gtdt_desc
The function next_platform_timer() references
the variable __initdata acpi_gtdt_desc.
This is often because next_platform_timer lacks a __initdata
annotation or the annotation of acpi_gtdt_desc is wrong.
WARNING: modpost: vmlinux.o(.text.unlikely+0x26e64): Section mismatch in reference from the function next_platform_timer() to the variable .init.data:acpi_gtdt_desc
The function next_platform_timer() references
the variable __initdata acpi_gtdt_desc.
This is often because next_platform_timer lacks a __initdata
annotation or the annotation of acpi_gtdt_desc is wrong.
Sumit Garg [Mon, 11 Oct 2021 12:50:59 +0000 (18:20 +0530)]
arm64: ftrace: use function_nocfi for _mcount as well
Commit 800618f955a9 ("arm64: ftrace: use function_nocfi for ftrace_call")
only fixed address of ftrace_call but address of _mcount needs to be
fixed as well. Use function_nocfi() to get the actual address of _mcount
function as with CONFIG_CFI_CLANG, the compiler replaces function pointers
with jump table addresses which breaks dynamic ftrace as the address of
_mcount is replaced with the address of _mcount.cfi_jt.
With mainline, this won't be a problem, since
CONFIG_DYNAMIC_FTRACE_WITH_REGS=y by default with Clang >= 10 (which
supports -fpatchable-function-entry) and CFI requires Clang 12, but for
consistency we should add function_nocfi() for _mcount as well.
Anders Roxell [Thu, 7 Oct 2021 19:56:01 +0000 (21:56 +0200)]
arm64: asm: setup.h: export common variables
When building the kernel with sparse enabled 'C=1' the following
warnings can be seen:
arch/arm64/kernel/setup.c:58:13: warning: symbol '__fdt_pointer' was not declared. Should it be static?
arch/arm64/kernel/setup.c:84:25: warning: symbol 'boot_args' was not declared. Should it be static?
Rework so the variables are exported, since these two variables are
created and used in setup.c and are also used in head.S.
Mike Kravetz [Tue, 5 Oct 2021 20:25:29 +0000 (13:25 -0700)]
arm64/hugetlb: fix CMA gigantic page order for non-4K PAGE_SIZE
For non-4K PAGE_SIZE configs, the largest gigantic huge page size is
CONT_PMD_SHIFT order. On arm64 with 64K PAGE_SIZE, the gigantic page is
16G. Therefore, one should be able to specify 'hugetlb_cma=16G' on the
kernel command line so that one gigantic page can be allocated from CMA.
However, when adding such an option the following message is produced:
hugetlb_cma: cma area should be at least 8796093022208 MiB
This is because the calculation for non-4K gigantic page order is
incorrect in the arm64 specific routine arm64_hugetlb_cma_reserve().
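For reference, a worked check of the quoted figure under the usual 64K-page
geometry (the exact broken expression is inferred from the number, not quoted
from the code):

  PAGE_SHIFT     = 16   (64K pages)
  PMD_SHIFT      = 29   (512M)
  CONT_PMD_SHIFT = 34   (16G gigantic page)

  correct order = CONT_PMD_SHIFT - PAGE_SHIFT             = 18  ->  2^18 * 64K = 16G
  broken order  = CONT_PMD_SHIFT + PMD_SHIFT - PAGE_SHIFT = 47  ->  2^47 * 64K = 2^63 bytes
                                                                  = 8796093022208 MiB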
Fixes: abb7962adc80 ("arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs")
Cc: <stable@vger.kernel.org> # 5.9.x
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20211005202529.213812-1-mike.kravetz@oracle.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Architectures supported by KASAN_HW_TAGS can provide an asymmetric mode
of execution. On an MTE enabled arm64 hw for example this can be
identified with the asymmetric tagging mode of execution. In particular,
when such a mode is present, the CPU triggers a fault on a tag mismatch
during a load operation and asynchronously updates a register when a tag
mismatch is detected during a store operation.
Extend the KASAN HW execution mode kernel command line parameter to
support asymmetric mode.
MTE provides an asymmetric mode for detecting tag exceptions. In
particular, when such a mode is present, the CPU triggers a fault
on a tag mismatch during a load operation and asynchronously updates
a register when a tag mismatch is detected during a store operation.
Add support for MTE asymmetric mode.
Note: If the CPU does not support MTE asymmetric mode the kernel falls
back on synchronous mode which is the default for kasan=on.
Add the cpufeature entries to detect the presence of Asymmetric MTE.
Note: The tag checking mode is initialized via cpu_enable_mte() ->
kasan_init_hw_tags(), hence to enable it we require asymmetric mode
to be present at least on the boot CPU. If the boot CPU does not have it,
it is fine for late CPUs to have it as long as the feature is not enabled
(ARM64_CPUCAP_BOOT_CPU_FEATURE).
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <Suzuki.Poulose@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20211006154751.4463-4-vincenzo.frascino@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
John Garry [Fri, 1 Oct 2021 10:48:46 +0000 (18:48 +0800)]
drivers/perf: Improve build test coverage
Improve build test coverage by allowing some drivers to build under
COMPILE_TEST where possible.
Some notes:
- Mostly a dependency on CONFIG_ACPI is not really required for only
building (but left untouched), but is required for TX2 which uses ACPI
functions which have no stubs
- XGENE required 64b dependency as it relies on some unsigned long perf
struct fields being 64b
- I don't see why TX2 requires NUMA to build, but left untouched
- Added an explicit dependency on GENERIC_MSI_IRQ_DOMAIN for
ARM_SMMU_V3_PMU, which is required for platform MSI functions
John Garry [Fri, 1 Oct 2021 10:48:45 +0000 (18:48 +0800)]
drivers/perf: thunderx2_pmu: Change data in size tx2_uncore_event_update()
A LSL of 32 requires > 32b value to hold the result. However in
tx2_uncore_event_update(), 1UL << 32 currently only works as unsigned
long is 64b on a 64b system.
If we want to compile test for a 32b system, we need unsigned long long,
whose min size is 64b.
The PA PMU counter offset was correct in [1] and the driver has
already been verified. We want to keep the register offsets using
lower-case characters in later versions, consistent with the existing
driver. Since there was no functional change, we didn't do more testing.
However, a typo was introduced when the PA PMU counter offset was
modified, so fix this mistake.
CONFIG_SPARSEMEM_VMEMMAP is now the only available memory model on arm64
platforms and free_unused_memmap() would just return without creating any
holes in the memmap mapping. There is no need for any special handling in
pfn_valid() and HAVE_ARCH_PFN_VALID can just be dropped. This also moves
the pfn upper bits sanity check into generic pfn_valid().
[rppt: rebased on v5.15-rc3]
Link: https://lkml.kernel.org/r/1621947349-25421-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Mike Rapoport <rppt@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/20210930013039.11260-3-rppt@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Mike Rapoport [Thu, 30 Sep 2021 01:30:38 +0000 (04:30 +0300)]
dma-mapping: remove bogus test for pfn_valid from dma_map_resource
dma_map_resource() uses pfn_valid() to ensure the range is not RAM.
However, pfn_valid() only checks for availability of the memory map for a
PFN but it does not ensure that the PFN is actually backed by RAM.
As dma_map_resource() is the only method in DMA mapping APIs that has this
check, simply drop the pfn_valid() test from dma_map_resource().
The intent of trans_pgd_map_page() was to map a contiguous range of VA
memory to the memory that is getting relocated during kexec. However,
since we are now using the linear map instead of a contiguous range, this
function is not needed.
This header contains only cpu_soft_restart() which is never used directly
anymore. So, remove this header, and rename the helper to be
cpu_soft_restart().
Broadcom Stingray:
Performance data for a moderate-size kernel + initramfs (25M): the
relocation was taking 0.382s; with the MMU enabled it now takes
only 0.019s, a ~20x improvement.
The time is proportional to the size of the relocation, therefore if the
initramfs is larger (100M) it could take over a second.
To perform the kexec relocation with the MMU enabled, we need a copy
of the linear map.
Create one, and install it from the relocation code. This has to be done
from the assembly code as it will be idmapped with TTBR0. The kernel
runs in TTBR1, so it can't use the break-before-make sequence on the mapping
it is executing from.
This makes no difference yet as the relocation code runs with the MMU
disabled.
If we have an EL2 mode without VHE, the EL2 vectors are needed in order
to switch to EL2 and jump to the new world with hypervisor privileges.
In preparation for MMU-enabled relocation, configure our EL2 table now.
Kexec uses #HVC_SOFT_RESTART to branch to the new world, so extend the
el1_sync vector that is provided by trans_pgd_copy_el2_vectors() to
support this case.
arm64: kexec: pass kimage as the only argument to relocation function
Currently, kexec relocation function (arm64_relocate_new_kernel) accepts
the following arguments:
head: start of array that contains relocation information.
entry: entry point for new kernel or purgatory.
dtb_mem: first and only argument to entry.
The number of arguments cannot be easily expanded, because this
function is also called from HVC_SOFT_RESTART, which preserves only
three arguments. Also, arm64_relocate_new_kernel is written in
assembly but called without a stack, so there is no place to move extra
arguments to free registers.
Soon, we will need to pass more arguments: once we enable MMU we
will need to pass information about page tables.
Pass kimage to arm64_relocate_new_kernel, and teach it to get the
required fields from kimage.
arm64: kexec: Use dcache ops macros instead of open-coding
kexec does dcache maintenance when it re-writes all memory. Our
dcache_by_line_op macro depends on reading the sanitised DminLine
from memory. Kexec may have overwritten this, so it open-codes the
sequence.
dcache_by_line_op is a whole set of macros; it uses dcache_line_size,
which uses read_ctr for the sanitised DminLine. Reading the DminLine
is the first thing dcache_by_line_op does.
Rename dcache_by_line_op to dcache_by_myline_op and take DminLine as
an argument. Kexec can now use the slightly smaller macro.
This makes up-coming changes to the dcache maintenance easier on
the eye.
Code generated by the existing callers is unchanged.
arm64: kexec: skip relocation code for inplace kexec
In the case of kdump, or when segments are already in place, the relocation
is not needed; therefore the setup of the relocation function and the call
to it can be skipped.
arm64: kexec: flush image and lists during kexec load time
Currently, during kexec load we are copying the relocation function and
flushing it. However, we can also flush the kexec relocation buffers and,
if the new kernel image is already in place (i.e. crash kernel), we can
also flush the new kernel image itself.
Currently, only hibernate sets a custom ttbr0 with a safe idmapped function.
Kexec is also going to be using this functionality when the relocation code
is idmapped.
Move the setup sequence to a dedicated cpu_install_ttbr0() for custom
ttbr0.
Yee Lee [Thu, 30 Sep 2021 08:16:13 +0000 (16:16 +0800)]
scs: Release kasan vmalloc poison in scs_free process
Since scs allocation is moved to vmalloc region, the
shadow stack is protected by kasan_posion_vmalloc.
However, the vfree_atomic operation needs to access
its context for scs_free process and causes kasan error
as the dump info below.
This patch Adds kasan_unpoison_vmalloc() before vfree_atomic,
which aligns to the prior flow as using kmem_cache.
The vmalloc region will go back posioned in the following
vumap() operations.
==================================================================
BUG: KASAN: vmalloc-out-of-bounds in llist_add_batch+0x60/0xd4
Write of size 8 at addr ffff8000100b9000 by task kthreadd/2
Suggested-by: Kuan-Ying Lee <kuan-ying.lee@mediatek.com>
Acked-by: Will Deacon <will@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Yee Lee <yee.lee@mediatek.com>
Fixes: a2abe7cbd8fe ("scs: switch to vmapped shadow stacks")
Link: https://lore.kernel.org/r/20210930081619.30091-1-yee.lee@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>