git.proxmox.com Git - mirror_ubuntu-jammy-kernel.git/log
4 years agoMerge branch 'for-next/scs' into for-next/core
Will Deacon [Thu, 28 May 2020 17:03:40 +0000 (18:03 +0100)]
Merge branch 'for-next/scs' into for-next/core

Support for Clang's Shadow Call Stack in the kernel
(Sami Tolvanen and Will Deacon)
* for-next/scs:
  arm64: entry-ftrace.S: Update comment to indicate that x18 is live
  scs: Move DEFINE_SCS macro into core code
  scs: Remove references to asm/scs.h from core code
  scs: Move scs_overflow_check() out of architecture code
  arm64: scs: Use 'scs_sp' register alias for x18
  scs: Move accounting into alloc/free functions
  arm64: scs: Store absolute SCS stack pointer value in thread_info
  efi/libstub: Disable Shadow Call Stack
  arm64: scs: Add shadow stacks for SDEI
  arm64: Implement Shadow Call Stack
  arm64: Disable SCS for hypervisor code
  arm64: vdso: Disable Shadow Call Stack
  arm64: efi: Restore register x18 if it was corrupted
  arm64: Preserve register x18 when CPU is suspended
  arm64: Reserve register x18 from general allocation with SCS
  scs: Disable when function graph tracing is enabled
  scs: Add support for stack usage debugging
  scs: Add page accounting for shadow call stack allocations
  scs: Add support for Clang's Shadow Call Stack (SCS)

4 years agoMerge branch 'for-next/kvm/errata' into for-next/core
Will Deacon [Thu, 28 May 2020 17:02:51 +0000 (18:02 +0100)]
Merge branch 'for-next/kvm/errata' into for-next/core

KVM CPU errata rework
(Andrew Scull and Marc Zyngier)
* for-next/kvm/errata:
  KVM: arm64: Move __load_guest_stage2 to kvm_mmu.h
  arm64: Unify WORKAROUND_SPECULATIVE_AT_{NVHE,VHE}

4 years agoMerge branch 'for-next/bti' into for-next/core
Will Deacon [Thu, 28 May 2020 17:00:51 +0000 (18:00 +0100)]
Merge branch 'for-next/bti' into for-next/core

Support for Branch Target Identification (BTI) in user and kernel
(Mark Brown and others)
* for-next/bti: (39 commits)
  arm64: vdso: Fix CFI directives in sigreturn trampoline
  arm64: vdso: Don't prefix sigreturn trampoline with a BTI C instruction
  arm64: bti: Fix support for userspace only BTI
  arm64: kconfig: Update and comment GCC version check for kernel BTI
  arm64: vdso: Map the vDSO text with guarded pages when built for BTI
  arm64: vdso: Force the vDSO to be linked as BTI when built for BTI
  arm64: vdso: Annotate for BTI
  arm64: asm: Provide a mechanism for generating ELF note for BTI
  arm64: bti: Provide Kconfig for kernel mode BTI
  arm64: mm: Mark executable text as guarded pages
  arm64: bpf: Annotate JITed code for BTI
  arm64: Set GP bit in kernel page tables to enable BTI for the kernel
  arm64: asm: Override SYM_FUNC_START when building the kernel with BTI
  arm64: bti: Support building kernel C code using BTI
  arm64: Document why we enable PAC support for leaf functions
  arm64: insn: Report PAC and BTI instructions as skippable
  arm64: insn: Don't assume unrecognized HINTs are skippable
  arm64: insn: Provide a better name for aarch64_insn_is_nop()
  arm64: insn: Add constants for new HINT instruction decode
  arm64: Disable old style assembly annotations
  ...

4 years agoMerge branches 'for-next/acpi', 'for-next/bpf', 'for-next/cpufeature', 'for-next...
Will Deacon [Thu, 28 May 2020 16:47:34 +0000 (17:47 +0100)]
Merge branches 'for-next/acpi', 'for-next/bpf', 'for-next/cpufeature', 'for-next/docs', 'for-next/kconfig', 'for-next/misc', 'for-next/perf', 'for-next/ptr-auth', 'for-next/sdei', 'for-next/smccc' and 'for-next/vdso' into for-next/core

ACPI and IORT updates
(Lorenzo Pieralisi)
* for-next/acpi:
  ACPI/IORT: Remove the unused __get_pci_rid()
  ACPI/IORT: Fix PMCG node single ID mapping handling
  ACPI: IORT: Add comments for not calling acpi_put_table()
  ACPI: GTDT: Put GTDT table after parsing
  ACPI: IORT: Add extra message "applying workaround" for off-by-1 issue
  ACPI/IORT: work around num_ids ambiguity
  Revert "ACPI/IORT: Fix 'Number of IDs' handling in iort_id_map()"
  ACPI/IORT: take _DMA methods into account for named components

BPF JIT optimisations for immediate value generation
(Luke Nelson)
* for-next/bpf:
  bpf, arm64: Optimize ADD,SUB,JMP BPF_K using arm64 add/sub immediates
  bpf, arm64: Optimize AND,OR,XOR,JSET BPF_K using arm64 logical immediates
  arm64: insn: Fix two bugs in encoding 32-bit logical immediates

Addition of new CPU ID register fields and removal of some benign sanity checks
(Anshuman Khandual and others)
* for-next/cpufeature: (27 commits)
  KVM: arm64: Check advertised Stage-2 page size capability
  arm64/cpufeature: Add get_arm64_ftr_reg_nowarn()
  arm64/cpuinfo: Add ID_MMFR4_EL1 into the cpuinfo_arm64 context
  arm64/cpufeature: Add remaining feature bits in ID_AA64PFR1 register
  arm64/cpufeature: Add remaining feature bits in ID_AA64PFR0 register
  arm64/cpufeature: Add remaining feature bits in ID_AA64ISAR0 register
  arm64/cpufeature: Add remaining feature bits in ID_MMFR4 register
  arm64/cpufeature: Add remaining feature bits in ID_PFR0 register
  arm64/cpufeature: Introduce ID_MMFR5 CPU register
  arm64/cpufeature: Introduce ID_DFR1 CPU register
  arm64/cpufeature: Introduce ID_PFR2 CPU register
  arm64/cpufeature: Make doublelock a signed feature in ID_AA64DFR0
  arm64/cpufeature: Drop TraceFilt feature exposure from ID_DFR0 register
  arm64/cpufeature: Add explicit ftr_id_isar0[] for ID_ISAR0 register
  arm64/cpufeature: Drop open encodings while extracting parange
  arm64/cpufeature: Validate hypervisor capabilities during CPU hotplug
  arm64: cpufeature: Group indexed system register definitions by name
  arm64: cpufeature: Extend comment to describe absence of field info
  arm64: drop duplicate definitions of ID_AA64MMFR0_TGRAN constants
  arm64: cpufeature: Add an overview comment for the cpufeature framework
  ...

Minor documentation tweaks for silicon errata and booting requirements
(Rob Herring and Will Deacon)
* for-next/docs:
  arm64: silicon-errata.rst: Sort the Cortex-A55 entries
  arm64: docs: Mandate that the I-cache doesn't hold stale kernel text

Minor Kconfig cleanups
(Geert Uytterhoeven)
* for-next/kconfig:
  arm64: cpufeature: Add "or" to mitigations for multiple errata
  arm64: Sort vendor-specific errata

Miscellaneous updates
(Ard Biesheuvel and others)
* for-next/misc:
  arm64: mm: Add asid_gen_match() helper
  arm64: stacktrace: Factor out some common code into on_stack()
  arm64: Call debug_traps_init() from trap_init() to help early kgdb
  arm64: cacheflush: Fix KGDB trap detection
  arm64/cpuinfo: Move device_initcall() near cpuinfo_regs_init()
  arm64: kexec_file: print appropriate variable
  arm: mm: use __pfn_to_section() to get mem_section
  arm64: Reorder the macro arguments in the copy routines
  efi/libstub/arm64: align PE/COFF sections to segment alignment
  KVM: arm64: Drop PTE_S2_MEMATTR_MASK
  arm64/kernel: Fix range on invalidating dcache for boot page tables
  arm64: set TEXT_OFFSET to 0x0 in preparation for removing it entirely
  arm64: lib: Consistently enable crc32 extension
  arm64/mm: Use phys_to_page() to access pgtable memory
  arm64: smp: Make cpus_stuck_in_kernel static
  arm64: entry: remove unneeded semicolon in el1_sync_handler()
  arm64/kernel: vmlinux.lds: drop redundant discard/keep macros
  arm64: drop GZFLAGS definition and export
  arm64: kexec_file: Avoid temp buffer for RNG seed
  arm64: rename stext to primary_entry

Perf PMU driver updates
(Tang Bin and others)
* for-next/perf:
  pmu/smmuv3: Clear IRQ affinity hint on device removal
  drivers/perf: hisi: Permit modular builds of HiSilicon uncore drivers
  drivers/perf: hisi: Fix typo in events attribute array
  drivers/perf: arm_spe_pmu: Avoid duplicate printouts
  drivers/perf: arm_dsu_pmu: Avoid duplicate printouts

Pointer authentication updates and support for vmcoreinfo
(Amit Daniel Kachhap and Mark Rutland)
* for-next/ptr-auth:
  Documentation/vmcoreinfo: Add documentation for 'KERNELPACMASK'
  arm64/crash_core: Export KERNELPACMASK in vmcoreinfo
  arm64: simplify ptrauth initialization
  arm64: remove ptrauth_keys_install_kernel sync arg

SDEI cleanup and non-critical fixes
(James Morse and others)
* for-next/sdei:
  firmware: arm_sdei: Document the motivation behind these set_fs() calls
  firmware: arm_sdei: remove unused interfaces
  firmware: arm_sdei: Put the SDEI table after using it
  firmware: arm_sdei: Drop check for /firmware/ node and always register driver

SMCCC updates and refactoring
(Sudeep Holla)
* for-next/smccc:
  firmware: smccc: Fix missing prototype warning for arm_smccc_version_init
  firmware: smccc: Add function to fetch SMCCC version
  firmware: smccc: Refactor SMCCC specific bits into separate file
  firmware: smccc: Drop smccc_version enum and use ARM_SMCCC_VERSION_1_x instead
  firmware: smccc: Add the definition for SMCCCv1.2 version/error codes
  firmware: smccc: Update link to latest SMCCC specification
  firmware: smccc: Add HAVE_ARM_SMCCC_DISCOVERY to identify SMCCC v1.1 and above

vDSO cleanup and non-critical fixes
(Mark Rutland and Vincenzo Frascino)
* for-next/vdso:
  arm64: vdso: Add --eh-frame-hdr to ldflags
  arm64: vdso: use consistent 'map' nomenclature
  arm64: vdso: use consistent 'abi' nomenclature
  arm64: vdso: simplify arch_vdso_type ifdeffery
  arm64: vdso: remove aarch32_vdso_pages[]
  arm64: vdso: Add '-Bsymbolic' to ldflags

4 years agoKVM: arm64: Move __load_guest_stage2 to kvm_mmu.h
Marc Zyngier [Thu, 28 May 2020 13:12:59 +0000 (14:12 +0100)]
KVM: arm64: Move __load_guest_stage2 to kvm_mmu.h

Having __load_guest_stage2 in kvm_hyp.h is quickly going to trigger
a circular include problem. In order to avoid this, let's move
it to kvm_mmu.h, where it will be a better fit anyway.

In the process, drop the __hyp_text annotation, which doesn't help
as the function is marked as __always_inline.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoKVM: arm64: Check advertised Stage-2 page size capability
Marc Zyngier [Thu, 28 May 2020 13:12:58 +0000 (14:12 +0100)]
KVM: arm64: Check advertised Stage-2 page size capability

With ARMv8.5-GTG, the hardware (or more likely a hypervisor) can
advertise the supported Stage-2 page sizes.

Let's check this at boot time.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64/cpufeature: Add get_arm64_ftr_reg_nowarn()
Anshuman Khandual [Wed, 27 May 2020 10:04:36 +0000 (15:34 +0530)]
arm64/cpufeature: Add get_arm64_ftr_reg_nowarn()

There is no way to proceed when the requested register cannot be found in
arm64_ftr_reg[], and requesting a non-present register would be an error as
well. Hence let's just WARN_ON() when the search fails in get_arm64_ftr_reg()
rather than checking the return value and doing a BUG_ON() in some individual
callers. But there are also callers that don't error out when the register
search fails, so add a new helper, get_arm64_ftr_reg_nowarn(), for such
cases.

Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Link: https://lore.kernel.org/r/1590573876-19120-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
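For illustration, a minimal sketch of how the split could look (an assumed
shape based on the bsearch()-backed register table in cpufeature.c, not the
verbatim kernel code):

static struct arm64_ftr_reg *get_arm64_ftr_reg_nowarn(u32 sys_id)
{
        const struct __ftr_reg_entry *ret;

        ret = bsearch((const void *)(unsigned long)sys_id,
                      arm64_ftr_regs, ARRAY_SIZE(arm64_ftr_regs),
                      sizeof(arm64_ftr_regs[0]), search_cmp_ftr_reg);
        if (ret)
                return ret->reg;
        return NULL;
}

static struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id)
{
        struct arm64_ftr_reg *reg = get_arm64_ftr_reg_nowarn(sys_id);

        /* Requesting a non-present register is a programming error. */
        WARN_ON(!reg);
        return reg;
}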
4 years agoACPI/IORT: Remove the unused __get_pci_rid()
Zenghui Yu [Sat, 9 May 2020 09:34:30 +0000 (17:34 +0800)]
ACPI/IORT: Remove the unused __get_pci_rid()

Since commit bc8648d49a95 ("ACPI/IORT: Handle PCI aliases properly for
IOMMUs"), __get_pci_rid() has become actually unused and can be removed.

Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Hanjun Guo <guohanjun@huawei.com>
Link: https://lore.kernel.org/r/20200509093430.1983-1-yuzenghui@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64/cpuinfo: Add ID_MMFR4_EL1 into the cpuinfo_arm64 context
Anshuman Khandual [Tue, 19 May 2020 09:40:54 +0000 (15:10 +0530)]
arm64/cpuinfo: Add ID_MMFR4_EL1 into the cpuinfo_arm64 context

ID_MMFR4_EL1 has been missing from the CPU context (i.e. cpuinfo_arm64). This
just adds the register along with the other required changes.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-18-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64/cpufeature: Add remaining feature bits in ID_AA64PFR1 register
Anshuman Khandual [Tue, 19 May 2020 09:40:48 +0000 (15:10 +0530)]
arm64/cpufeature: Add remaining feature bits in ID_AA64PFR1 register

Enable the remaining feature bits in the ID_AA64PFR1 register as per the ARM
DDI 0487F.a specification.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-12-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64/cpufeature: Add remaining feature bits in ID_AA64PFR0 register
Anshuman Khandual [Tue, 19 May 2020 09:40:47 +0000 (15:10 +0530)]
arm64/cpufeature: Add remaining feature bits in ID_AA64PFR0 register

Enable the MPAM and SEL2 feature bits in the ID_AA64PFR0 register as per the
ARM DDI 0487F.a specification.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-11-git-send-email-anshuman.khandual@arm.com
[will: Make SEL2 a NONSTRICT feature per Suzuki]
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64/cpufeature: Add remaining feature bits in ID_AA64ISAR0 register
Anshuman Khandual [Tue, 19 May 2020 09:40:46 +0000 (15:10 +0530)]
arm64/cpufeature: Add remaining feature bits in ID_AA64ISAR0 register

Enable the TLB feature bits in the ID_AA64ISAR0 register as per the ARM DDI
0487F.a specification.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-10-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64/cpufeature: Add remaining feature bits in ID_MMFR4 register
Anshuman Khandual [Tue, 19 May 2020 09:40:45 +0000 (15:10 +0530)]
arm64/cpufeature: Add remaining feature bits in ID_MMFR4 register

Enable all the remaining feature bits (EVT, CCIDX, LSM, HPDS, CnP, XNX and
SpecSEI) in the ID_MMFR4 register per ARM DDI 0487F.a.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-9-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64/cpufeature: Add remaining feature bits in ID_PFR0 register
Anshuman Khandual [Tue, 19 May 2020 09:40:44 +0000 (15:10 +0530)]
arm64/cpufeature: Add remaining feature bits in ID_PFR0 register

Enable the DIT and CSV2 feature bits in the ID_PFR0 register as per the ARM
DDI 0487F.a specification. Except for RAS and AMU, all other feature bits are
now enabled.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-8-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64/cpufeature: Introduce ID_MMFR5 CPU register
Anshuman Khandual [Tue, 19 May 2020 09:40:43 +0000 (15:10 +0530)]
arm64/cpufeature: Introduce ID_MMFR5 CPU register

This adds the basic building blocks required for the ID_MMFR5 CPU register,
which provides information about the implemented memory model and memory
management support in AArch32 state. This is added per the ARM DDI 0487F.a
specification.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-7-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64/cpufeature: Introduce ID_DFR1 CPU register
Anshuman Khandual [Tue, 19 May 2020 09:40:42 +0000 (15:10 +0530)]
arm64/cpufeature: Introduce ID_DFR1 CPU register

This adds the basic building blocks required for the ID_DFR1 CPU register,
which provides top-level information about the debug system in AArch32 state.
We hide the register from KVM guests, as we don't emulate the 'MTPMU'
feature.

This is added per ARM DDI 0487F.a specification.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Will Deacon <will@kernel.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-6-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64/cpufeature: Introduce ID_PFR2 CPU register
Anshuman Khandual [Tue, 19 May 2020 09:40:41 +0000 (15:10 +0530)]
arm64/cpufeature: Introduce ID_PFR2 CPU register

This adds the basic building blocks required for the ID_PFR2 CPU register,
which provides information about the AArch32 programmers' model and must be
interpreted along with the ID_PFR0 and ID_PFR1 CPU registers. This is added
per the ARM DDI 0487F.a specification.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-5-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64/cpufeature: Make doublelock a signed feature in ID_AA64DFR0
Anshuman Khandual [Tue, 19 May 2020 09:40:40 +0000 (15:10 +0530)]
arm64/cpufeature: Make doublelock a signed feature in ID_AA64DFR0

The Double Lock feature can have the following possible values:

0b0000 - Double lock implemented
0b1111 - Double lock not implemented

But in case of a conflict the safe value should be 0b1111. Hence this must
be a signed feature instead. Also change FTR_EXACT to FTR_LOWER_SAFE. While
here, fix the erroneous bit width value from 28 to 4.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-4-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64/cpufeature: Drop TraceFilt feature exposure from ID_DFR0 register
Anshuman Khandual [Tue, 19 May 2020 09:40:39 +0000 (15:10 +0530)]
arm64/cpufeature: Drop TraceFilt feature exposure from ID_DFR0 register

The ID_DFR0-based TraceFilt feature should not be exposed to guests. Hence
let's drop it.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-3-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64/cpufeature: Add explicit ftr_id_isar0[] for ID_ISAR0 register
Anshuman Khandual [Tue, 19 May 2020 09:40:38 +0000 (15:10 +0530)]
arm64/cpufeature: Add explicit ftr_id_isar0[] for ID_ISAR0 register

The ID_ISAR0[31..28] bits are RES0 in ARMv8 and Reserved/UNK in ARMv7.
Currently these bits get exposed through generic_id_ftr32[], which is not
desirable. Hence define an explicit ftr_id_isar0[] array for the ID_ISAR0
register where those bits can be hidden.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/1589881254-10082-2-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: mm: Add asid_gen_match() helper
Jean-Philippe Brucker [Tue, 19 May 2020 17:54:43 +0000 (19:54 +0200)]
arm64: mm: Add asid_gen_match() helper

Add a macro to check if an ASID is from the current generation, since a
subsequent patch will introduce a third user for this test.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/r/20200519175502.2504091-6-jean-philippe@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
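A minimal sketch of such a macro, assuming the existing asid_bits /
asid_generation scheme in arch/arm64/mm/context.c (an ASID belongs to the
current generation when the bits above asid_bits match the global counter):

#define asid_gen_match(asid) \
        (!(((asid) ^ atomic64_read(&asid_generation)) >> asid_bits))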
4 years agofirmware: smccc: Fix missing prototype warning for arm_smccc_version_init
Sudeep Holla [Thu, 21 May 2020 11:08:36 +0000 (12:08 +0100)]
firmware: smccc: Fix missing prototype warning for arm_smccc_version_init

Commit f2ae97062a48 ("firmware: smccc: Refactor SMCCC specific bits into
separate file") introduced the following build warning:

drivers/firmware/smccc/smccc.c:14:13: warning: no previous prototype for
function 'arm_smccc_version_init' [-Wmissing-prototypes]
 void __init arm_smccc_version_init(u32 version, enum arm_smccc_conduit conduit)
             ^~~~~~~~~~~~~~~~~~~~~~

Fix this by adding the missing prototype in arm-smccc.h.

Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Link: https://lore.kernel.org/r/20200521110836.57252-1-sudeep.holla@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
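A sketch of the kind of declaration this implies in include/linux/arm-smccc.h
(assumed shape, with a hypothetical stub for kernels without SMCCC discovery):

#ifdef CONFIG_HAVE_ARM_SMCCC_DISCOVERY
void __init arm_smccc_version_init(u32 version, enum arm_smccc_conduit conduit);
#else
static inline void arm_smccc_version_init(u32 version,
                                           enum arm_smccc_conduit conduit) { }
#endif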
4 years agoarm64: vdso: Fix CFI directives in sigreturn trampoline
Will Deacon [Tue, 19 May 2020 11:56:05 +0000 (12:56 +0100)]
arm64: vdso: Fix CFI directives in sigreturn trampoline

Daniel reports that the .cfi_startproc is misplaced for the sigreturn
trampoline, which causes LLVM's unwinder to misbehave:

  | I run into this with LLVM’s unwinder.
  | This combination was always broken.

This prompted Dave to question our use of CFI directives more generally,
and I ended up going down a rabbit hole trying to figure out how this
very poorly documented stuff gets used.

Move the CFI directives so that the "mysterious NOP" is included in
the .cfi_{start,end}proc block and add a bunch of comments so that I
can save myself another headache in future.

Cc: Tamas Zsoldos <tamas.zsoldos@arm.com>
Reported-by: Dave Martin <dave.martin@arm.com>
Reported-by: Daniel Kiss <daniel.kiss@arm.com>
Tested-by: Daniel Kiss <daniel.kiss@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: vdso: Don't prefix sigreturn trampoline with a BTI C instruction
Will Deacon [Tue, 19 May 2020 11:38:33 +0000 (12:38 +0100)]
arm64: vdso: Don't prefix sigreturn trampoline with a BTI C instruction

For better or worse, GDB relies on the exact instruction sequence in the
VDSO sigreturn trampoline in order to unwind from signals correctly.
Commit c91db232da48 ("arm64: vdso: Convert to modern assembler annotations")
unfortunately added a BTI C instruction to the start of __kernel_rt_sigreturn,
which breaks this check. Thankfully, it's also not required, since the
trampoline is called from a RET instruction when returning from the signal
handler.

Remove the unnecessary BTI C instruction from __kernel_rt_sigreturn,
and do the same for the 32-bit VDSO as well for good measure.

Cc: Daniel Kiss <daniel.kiss@arm.com>
Cc: Tamas Zsoldos <tamas.zsoldos@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Fixes: c91db232da48 ("arm64: vdso: Convert to modern assembler annotations")
Signed-off-by: Will Deacon <will@kernel.org>
4 years agofirmware: smccc: Add function to fetch SMCCC version
Sudeep Holla [Mon, 18 May 2020 09:12:21 +0000 (10:12 +0100)]
firmware: smccc: Add function to fetch SMCCC version

For backward compatibility reasons, PSCI maintains the SMCCC version, as
SMCCC didn't provide ARM_SMCCC_VERSION_FUNC_ID until v1.1.

PSCI initialises both the SMCCC version and the conduit. Similar to the
conduit, let us provide accessors to fetch the SMCCC version as well so
that other SMCCC v1.1+ features can use it.

Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Tested-by: Etienne Carriere <etienne.carriere@st.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Reviewed-by: Etienne Carriere <etienne.carriere@st.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200518091222.27467-7-sudeep.holla@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
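A minimal sketch of what such state and accessors can look like in
drivers/firmware/smccc/smccc.c (assumed shape, not the verbatim code):

static u32 smccc_version = ARM_SMCCC_VERSION_1_0;
static enum arm_smccc_conduit smccc_conduit = SMCCC_CONDUIT_NONE;

void __init arm_smccc_version_init(u32 version, enum arm_smccc_conduit conduit)
{
        smccc_version = version;
        smccc_conduit = conduit;
}

u32 arm_smccc_get_version(void)
{
        return smccc_version;
}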
4 years agofirmware: smccc: Refactor SMCCC specific bits into separate file
Sudeep Holla [Mon, 18 May 2020 09:12:20 +0000 (10:12 +0100)]
firmware: smccc: Refactor SMCCC specific bits into separate file

In order to add newer SMCCC v1.1+ functionality and to avoid cluttering the
PSCI firmware driver with SMCCC bits, let us move the SMCCC-specific
details under drivers/firmware/smccc/smccc.c.

We can also drop conduit and smccc_version from the psci_operations structure,
as SMCCC was the sole user and it now maintains those itself.

No functionality change in this patch though.

Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Tested-by: Etienne Carriere <etienne.carriere@st.com>
Reviewed-by: Etienne Carriere <etienne.carriere@st.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200518091222.27467-6-sudeep.holla@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agofirmware: smccc: Drop smccc_version enum and use ARM_SMCCC_VERSION_1_x instead
Sudeep Holla [Mon, 18 May 2020 09:12:19 +0000 (10:12 +0100)]
firmware: smccc: Drop smccc_version enum and use ARM_SMCCC_VERSION_1_x instead

Instead of maintaining two sets of enums/macros for tracking the SMCCC
version, let us drop the smccc_version enum and use ARM_SMCCC_VERSION_1_x
directly.

This is in preparation to drop smccc_version here and move it separately
under drivers/firmware/smccc.

Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Tested-by: Etienne Carriere <etienne.carriere@st.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Reviewed-by: Etienne Carriere <etienne.carriere@st.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200518091222.27467-5-sudeep.holla@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agofirmware: smccc: Add the definition for SMCCCv1.2 version/error codes
Sudeep Holla [Mon, 18 May 2020 09:12:18 +0000 (10:12 +0100)]
firmware: smccc: Add the definition for SMCCCv1.2 version/error codes

Add the definition for the SMCCC v1.2 version and the newly added error code.
While at it, also add a note that ARM DEN 0070A is deprecated and is
now merged into the main SMCCC specification (ARM DEN 0028C).

Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Tested-by: Etienne Carriere <etienne.carriere@st.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Reviewed-by: Etienne Carriere <etienne.carriere@st.com>
Link: https://lore.kernel.org/r/20200518091222.27467-4-sudeep.holla@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agofirmware: smccc: Update link to latest SMCCC specification
Sudeep Holla [Mon, 18 May 2020 09:12:17 +0000 (10:12 +0100)]
firmware: smccc: Update link to latest SMCCC specification

The current link gets redirected to revision B, published in November
2016, though it actually points to the original revision A, published in
June 2013.

Let us update the link to point to the latest version so that it
doesn't get stale anytime soon. It now points to v1.2, published in
March 2020 (i.e. DEN0028C).

Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Tested-by: Etienne Carriere <etienne.carriere@st.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Reviewed-by: Etienne Carriere <etienne.carriere@st.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200518091222.27467-3-sudeep.holla@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agofirmware: smccc: Add HAVE_ARM_SMCCC_DISCOVERY to identify SMCCC v1.1 and above
Sudeep Holla [Mon, 18 May 2020 09:12:16 +0000 (10:12 +0100)]
firmware: smccc: Add HAVE_ARM_SMCCC_DISCOVERY to identify SMCCC v1.1 and above

SMCCC v1.0 lacked discoverability of version and features. To accelerate
adoption of mitigations and protect systems more rapidly from various
vulnerabilities, PSCI v1.0 was updated to add an SMCCC discovery mechanism
through the PSCI firmware implementation of PSCI_FEATURES(SMCCC_VERSION),
which returns success on firmware compliant with SMCCC v1.1 and above.

This in turn makes SMCCC v1.1 and above dependent on ARM_PSCI_FW for
backward compatibility. Let us introduce a new hidden config option for
this so that more features can be built on top of SMCCC v1.1 and above.

While at it, also sort the PSCI entry alphabetically.

Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Tested-by: Etienne Carriere <etienne.carriere@st.com>
Reviewed-by: Etienne Carriere <etienne.carriere@st.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200518091222.27467-2-sudeep.holla@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoACPI/IORT: Fix PMCG node single ID mapping handling
Tuan Phan [Wed, 20 May 2020 17:13:07 +0000 (10:13 -0700)]
ACPI/IORT: Fix PMCG node single ID mapping handling

An IORT PMCG node can have no ID mapping if its overflow interrupt is
wire based, so the code that parses the PMCG node cannot assume the node
will always have a single mapping present at index 0.

Fix iort_get_id_mapping_index() by checking for an overflow interrupt and
the mapping count.

Fixes: 24e516049360 ("ACPI/IORT: Add support for PMCG")
Signed-off-by: Tuan Phan <tuanphan@os.amperecomputing.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Link: https://lore.kernel.org/r/1589994787-28637-1-git-send-email-tuanphan@os.amperecomputing.com
Signed-off-by: Will Deacon <will@kernel.org>
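A sketch of the added check, written as a hypothetical helper for clarity
(field names follow struct acpi_iort_pmcg and struct acpi_iort_node):

static int iort_pmcg_mapping_index(struct acpi_iort_node *node)
{
        struct acpi_iort_pmcg *pmcg = (struct acpi_iort_pmcg *)node->node_data;

        /* A wire-based overflow interrupt means there may be no ID mapping. */
        if (pmcg->overflow_gsiv || node->mapping_count == 0)
                return -EINVAL;

        /* The single mapping, if present, lives at index 0. */
        return 0;
}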
4 years agoarm64/cpufeature: Drop open encodings while extracting parange
Anshuman Khandual [Wed, 13 May 2020 09:03:34 +0000 (14:33 +0530)]
arm64/cpufeature: Drop open encodings while extracting parange

Currently there are multiple instances of open-coded parange feature width
masks used while fetching its value. Even the width mask value (0x7) itself
is not accurate; it should be (0xf) per ID_AA64MMFR0_EL1.PARange[3:0] as in
the ARM ARM (0487F.a). Replace them with cpuid_feature_extract_unsigned_field(),
which can extract a given standard feature field (4 bits wide, i.e. a 0xf mask).

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/1589360614-1164-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
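A sketch of the before/after, wrapped in a hypothetical function for
illustration (read_sanitised_ftr_reg() and the PARANGE shift are the existing
helpers):

static unsigned int example_get_parange(void)
{
        u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);

        /* Before: open-coded shift/mask with an inaccurate 0x7 width mask. */
        /* return (mmfr0 >> ID_AA64MMFR0_PARANGE_SHIFT) & 0x7; */

        /* After: the standard 4-bit (0xf) field extractor. */
        return cpuid_feature_extract_unsigned_field(mmfr0,
                                                ID_AA64MMFR0_PARANGE_SHIFT);
}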
4 years agoarm64/cpufeature: Validate hypervisor capabilities during CPU hotplug
Anshuman Khandual [Tue, 12 May 2020 01:57:27 +0000 (07:27 +0530)]
arm64/cpufeature: Validate hypervisor capabilities during CPU hotplug

This validates hypervisor capabilities such as the VMID width and IPA range
for any hot-plugged CPU against the system-finalized values. KVM's view of
the IPA space is used while allowing a given CPU to come up. While here, it
factors out get_vmid_bits() for general use.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-kernel@vger.kernel.org
Suggested-by: Suzuki Poulose <suzuki.poulose@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/1589248647-22925-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
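A minimal sketch of the factored-out helper (assumed shape), deriving the
VMID width from ID_AA64MMFR1_EL1:

static inline unsigned int get_vmid_bits(u64 mmfr1)
{
        int vmid_bits;

        vmid_bits = cpuid_feature_extract_unsigned_field(mmfr1,
                                                ID_AA64MMFR1_VMIDBITS_SHIFT);
        if (vmid_bits == ID_AA64MMFR1_VMIDBITS_16)
                return 16;

        /* 8-bit VMIDs are the architectural minimum. */
        return 8;
}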
4 years agofirmware: arm_sdei: Document the motivation behind these set_fs() calls
James Morse [Tue, 19 May 2020 18:21:08 +0000 (19:21 +0100)]
firmware: arm_sdei: Document the motivation behind these set_fs() calls

The SDEI handler saves and restores the addr_limit using set_fs(), but it
isn't very clear why. The reason is to mirror the arch code's entry assembly:
the arch code does this because perf may access user-space, and inheriting
the addr_limit may be a problem.

Add a comment explaining why this is here.

Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: James Morse <james.morse@arm.com>
Link: https://bugs.chromium.org/p/project-zero/issues/detail?id=822
Link: https://lore.kernel.org/r/20200519182108.13693-4-james.morse@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agofirmware: arm_sdei: remove unused interfaces
Christoph Hellwig [Tue, 19 May 2020 18:21:07 +0000 (19:21 +0100)]
firmware: arm_sdei: remove unused interfaces

The exported symbols to register/unregister and enable/disable events
aren't used upstream, so remove them.

[ dropped the parts of Christoph's patch that made the API static too ]

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/linux-arm-kernel/20200504164224.2842960-1-hch@lst.de/
Link: https://lore.kernel.org/r/20200519182108.13693-3-james.morse@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agofirmware: arm_sdei: Put the SDEI table after using it
Hanjun Guo [Tue, 19 May 2020 18:21:06 +0000 (19:21 +0100)]
firmware: arm_sdei: Put the SDEI table after using it

acpi_get_table() should be coupled with acpi_put_table() to release the
table mapping if the mapped table is not used at runtime after
initialization, so put the SDEI table after using it.

Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/linux-arm-kernel/1589021566-46373-1-git-send-email-guohanjun@huawei.com/
Link: https://lore.kernel.org/r/20200519182108.13693-2-james.morse@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
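A sketch of the pattern (hypothetical function name, error handling trimmed):

static void __init sdei_table_example(void)
{
        struct acpi_table_header *table;
        acpi_status status;

        status = acpi_get_table(ACPI_SIG_SDEI, 0, &table);
        if (ACPI_FAILURE(status))
                return;

        /* ... consume the table during init only ... */

        acpi_put_table(table);  /* release the mapping once we are done */
}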
4 years agopmu/smmuv3: Clear IRQ affinity hint on device removal
Jean-Philippe Brucker [Wed, 22 Apr 2020 08:48:06 +0000 (10:48 +0200)]
pmu/smmuv3: Clear IRQ affinity hint on device removal

Currently when trying to remove the SMMUv3 PMU module we get a
WARN_ON_ONCE from free_irq(), because the affinity hint set during probe
hasn't been properly cleared.

[  238.878383] WARNING: CPU: 0 PID: 175 at kernel/irq/manage.c:1744 free_irq+0x324/0x358
...
[  238.897263] Call trace:
[  238.897998]  free_irq+0x324/0x358
[  238.898792]  devm_irq_release+0x18/0x28
[  238.899189]  release_nodes+0x1b0/0x228
[  238.899984]  devres_release_all+0x38/0x60
[  238.900779]  device_release_driver_internal+0x10c/0x1d0
[  238.901574]  driver_detach+0x50/0xe0
[  238.902368]  bus_remove_driver+0x5c/0xd8
[  238.903448]  driver_unregister+0x30/0x60
[  238.903958]  platform_driver_unregister+0x14/0x20
[  238.905075]  arm_smmu_pmu_exit+0x1c/0xecc [arm_smmuv3_pmu]
[  238.905547]  __arm64_sys_delete_module+0x14c/0x260
[  238.906342]  el0_svc_common.constprop.0+0x74/0x178
[  238.907355]  do_el0_svc+0x24/0x90
[  238.907932]  el0_sync_handler+0x11c/0x198
[  238.908979]  el0_sync+0x158/0x180

Just like the other perf drivers, clear the affinity hint before
releasing the device.

Fixes: 7d839b4b9e00 ("perf/smmuv3: Add arm64 smmuv3 pmu driver")
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Link: https://lore.kernel.org/r/20200422084805.237738-1-jean-philippe@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
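A sketch of where the fix lands in the driver's remove path (the structure
and field names here are illustrative assumptions):

static int smmu_pmu_remove(struct platform_device *pdev)
{
        struct smmu_pmu *smmu_pmu = platform_get_drvdata(pdev);

        /* ... unregister the PMU and the hotplug instance ... */

        /* Clear the hint set at probe time so free_irq() doesn't warn. */
        irq_set_affinity_hint(smmu_pmu->irq, NULL);

        return 0;
}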
4 years agodrivers/perf: hisi: Permit modular builds of HiSilicon uncore drivers
Zhou Wang [Thu, 7 May 2020 02:58:25 +0000 (10:58 +0800)]
drivers/perf: hisi: Permit modular builds of HiSilicon uncore drivers

This patch lets the HiSilicon uncore PMU drivers be built as modules:
a common module and three specific uncore PMU driver modules will be built.

Export the necessary functions in the hisi_uncore_pmu module, and change
irq_set_affinity to irq_set_affinity_hint so that the modular build passes.

Signed-off-by: Zhou Wang <wangzhou1@hisilicon.com>
Tested-by: Qi Liu <liuqi115@huawei.com>
Reviewed-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/1588820305-174479-1-git-send-email-wangzhou1@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoACPI: IORT: Add comments for not calling acpi_put_table()
Hanjun Guo [Fri, 8 May 2020 04:05:53 +0000 (12:05 +0800)]
ACPI: IORT: Add comments for not calling acpi_put_table()

The iort_table will be used at runtime after acpi_iort_init(),
so add some comments to clarify this and make it less confusing.

Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Link: https://lore.kernel.org/r/1588910753-18543-2-git-send-email-guohanjun@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoACPI: GTDT: Put GTDT table after parsing
Hanjun Guo [Fri, 8 May 2020 04:05:52 +0000 (12:05 +0800)]
ACPI: GTDT: Put GTDT table after parsing

The mapped GTDT table needs to be released after
the driver init.

Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Link: https://lore.kernel.org/r/1588910753-18543-1-git-send-email-guohanjun@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: stacktrace: Factor out some common code into on_stack()
Yunfeng Ye [Fri, 8 May 2020 03:15:45 +0000 (11:15 +0800)]
arm64: stacktrace: Factor out some common code into on_stack()

There is some common code for stack checking, so factor it out into
the function on_stack().

No functional change.

Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
Link: https://lore.kernel.org/r/07b3b0e6-3f58-4fed-07ea-7d17b7508948@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
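A minimal sketch of the common helper (assuming the existing struct
stack_info and enum stack_type):

static inline bool on_stack(unsigned long sp, unsigned long low,
                            unsigned long high, enum stack_type type,
                            struct stack_info *info)
{
        if (!low)
                return false;

        if (sp < low || sp >= high)
                return false;

        if (info) {
                info->low = low;
                info->high = high;
                info->type = type;
        }
        return true;
}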
4 years agoarm64: Call debug_traps_init() from trap_init() to help early kgdb
Douglas Anderson [Wed, 13 May 2020 23:06:37 +0000 (16:06 -0700)]
arm64: Call debug_traps_init() from trap_init() to help early kgdb

A new kgdb feature will soon land (kgdb_earlycon) that lets us run
kgdb much earlier.  In order for everything to work properly it's
important that the break hook is setup by the time we process
"kgdbwait".

Right now the break hook is setup in debug_traps_init() and that's
called from arch_initcall().  That's a bit too late since
kgdb_earlycon really needs things to be setup by the time the system
calls dbg_late_init().

We could fix this by adding call_break_hook() into early_brk64() and
that works fine.  However, it's a little ugly.  Instead, let's just
add a call to debug_traps_init() straight from trap_init().  There's
already a documented dependency between trap_init() and
debug_traps_init() and this makes the dependency more obvious rather
than just relying on a comment.

NOTE: this solution isn't early enough to let us select the
"ARCH_HAS_EARLY_DEBUG" KConfig option that is introduced by the
kgdb_earlycon patch series.  That would only be set if we could do
breakpoints when early params are parsed.  This patch only enables
"late early" breakpoints, AKA breakpoints when dbg_late_init() is
called.  It's expected that this should be fine for most people.

It should also be noted that if you crash you can still end up in kgdb
earlier than debug_traps_init().  Since you don't need breakpoints to
debug a crash that's fine.

Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200513160501.1.I0b5edf030cc6ebef6ab4829f8867cdaea42485d8@changeid
Signed-off-by: Will Deacon <will@kernel.org>
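A simplified sketch of the change in arch/arm64/kernel/traps.c (the existing
break-hook registration is elided):

void __init trap_init(void)
{
        /* ... existing kernel break-hook registration ... */

        /* Previously an arch_initcall(); too late for "kgdbwait". */
        debug_traps_init();
}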
4 years agoarm64: entry-ftrace.S: Update comment to indicate that x18 is live
Will Deacon [Mon, 18 May 2020 13:01:01 +0000 (14:01 +0100)]
arm64: entry-ftrace.S: Update comment to indicate that x18 is live

The Shadow Call Stack pointer is held in x18, so update the ftrace
entry comment to indicate that it cannot be safely clobbered.

Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoscs: Move DEFINE_SCS macro into core code
Will Deacon [Fri, 15 May 2020 15:17:12 +0000 (16:17 +0100)]
scs: Move DEFINE_SCS macro into core code

Defining static shadow call stacks is not architecture-specific, so move
the DEFINE_SCS() macro into the core header file.

Tested-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
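A sketch of the generic macro (assumed shape, in include/linux/scs.h): a
statically allocated shadow call stack is just an array of longs.

#define DEFINE_SCS(name)                                                \
        unsigned long name[SCS_SIZE / sizeof(long)]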
4 years agoscs: Remove references to asm/scs.h from core code
Will Deacon [Fri, 15 May 2020 15:15:46 +0000 (16:15 +0100)]
scs: Remove references to asm/scs.h from core code

asm/scs.h is no longer needed by the core code, so remove a redundant
header inclusion and update the stale Kconfig text.

Tested-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoscs: Move scs_overflow_check() out of architecture code
Will Deacon [Fri, 15 May 2020 13:56:05 +0000 (14:56 +0100)]
scs: Move scs_overflow_check() out of architecture code

There is nothing architecture-specific about scs_overflow_check() as
it's just a trivial wrapper around scs_corrupted().

For parity with task_stack_end_corrupted(), rename scs_corrupted() to
task_scs_end_corrupted() and call it from schedule_debug() when
CONFIG_SCHED_STACK_END_CHECK is enabled, which better reflects its
purpose as a debug feature to catch inadvertent overflow of the SCS.
Finally, remove the unused scs_overflow_check() function entirely.

This has absolutely no impact on architectures that do not support SCS
(currently arm64 only).

Tested-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
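A sketch of the resulting check in schedule_debug() in kernel/sched/core.c
(assumed shape):

static inline void schedule_debug(struct task_struct *prev, bool preempt)
{
#ifdef CONFIG_SCHED_STACK_END_CHECK
        if (task_stack_end_corrupted(prev))
                panic("corrupted stack end detected inside scheduler\n");

        if (task_scs_end_corrupted(prev))
                panic("corrupted shadow stack detected inside scheduler\n");
#endif
        /* ... */
}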
4 years agoarm64: scs: Use 'scs_sp' register alias for x18
Will Deacon [Fri, 15 May 2020 13:46:46 +0000 (14:46 +0100)]
arm64: scs: Use 'scs_sp' register alias for x18

x18 holds the SCS stack pointer value, so introduce a register alias to
make this easier to read in assembly code.

Tested-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoscs: Move accounting into alloc/free functions
Will Deacon [Fri, 15 May 2020 13:43:11 +0000 (14:43 +0100)]
scs: Move accounting into alloc/free functions

There's no need to perform the shadow stack page accounting independently
of the lifetime of the underlying allocation, so call the accounting code
from the {alloc,free}() functions and simplify the code in the process.

Tested-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: scs: Store absolute SCS stack pointer value in thread_info
Will Deacon [Fri, 15 May 2020 13:11:05 +0000 (14:11 +0100)]
arm64: scs: Store absolute SCS stack pointer value in thread_info

Storing the SCS information in thread_info as a {base,offset} pair
introduces an additional load instruction on the ret-to-user path,
since the SCS stack pointer in x18 has to be converted back to an offset
by subtracting the base.

Replace the offset with the absolute SCS stack pointer value instead
and avoid the redundant load.

Tested-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
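A sketch of the resulting arm64 thread_info fields (assumed shape; the
{base,offset} pair becomes a base plus an absolute pointer):

struct thread_info {
        /* ... */
#ifdef CONFIG_SHADOW_CALL_STACK
        void                    *scs_base;
        void                    *scs_sp;        /* absolute, mirrors x18 */
#endif
};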
4 years agoefi/libstub: Disable Shadow Call Stack
Sami Tolvanen [Mon, 27 Apr 2020 16:00:18 +0000 (09:00 -0700)]
efi/libstub: Disable Shadow Call Stack

Shadow stacks are not available in the EFI stub, filter out SCS flags.

Suggested-by: James Morse <james.morse@arm.com>
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: scs: Add shadow stacks for SDEI
Sami Tolvanen [Mon, 27 Apr 2020 16:00:17 +0000 (09:00 -0700)]
arm64: scs: Add shadow stacks for SDEI

This change adds per-CPU shadow call stacks for the SDEI handler.
Similarly to how the kernel stacks are handled, we add separate shadow
stacks for normal and critical events.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Implement Shadow Call Stack
Sami Tolvanen [Mon, 27 Apr 2020 16:00:16 +0000 (09:00 -0700)]
arm64: Implement Shadow Call Stack

This change implements shadow stack switching, initial SCS set-up,
and interrupt shadow stacks for arm64.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Disable SCS for hypervisor code
Sami Tolvanen [Mon, 27 Apr 2020 16:00:15 +0000 (09:00 -0700)]
arm64: Disable SCS for hypervisor code

Disable SCS for code that runs at a different exception level by
adding __noscs to __hyp_text.

Suggested-by: James Morse <james.morse@arm.com>
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
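A sketch of the mechanism (assumed shape): a compiler attribute that opts a
function out of SCS instrumentation, folded into the __hyp_text definition.

#ifdef CONFIG_SHADOW_CALL_STACK
#define __noscs __attribute__((__no_sanitize__("shadow-call-stack")))
#else
#define __noscs
#endif

#define __hyp_text      __section(.hyp.text) notrace __noscs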
4 years agoarm64: vdso: Disable Shadow Call Stack
Sami Tolvanen [Mon, 27 Apr 2020 16:00:14 +0000 (09:00 -0700)]
arm64: vdso: Disable Shadow Call Stack

Shadow stacks are only available in the kernel, so disable SCS
instrumentation for the vDSO.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: efi: Restore register x18 if it was corrupted
Sami Tolvanen [Mon, 27 Apr 2020 16:00:13 +0000 (09:00 -0700)]
arm64: efi: Restore register x18 if it was corrupted

If we detect a corrupted x18, restore the register before jumping back
to potentially SCS instrumented code. This is safe, because the wrapper
is called with preemption disabled and a separate shadow stack is used
for interrupt handling.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Preserve register x18 when CPU is suspended
Sami Tolvanen [Mon, 27 Apr 2020 16:00:12 +0000 (09:00 -0700)]
arm64: Preserve register x18 when CPU is suspended

Don't lose the current task's shadow stack when the CPU is suspended.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Reserve register x18 from general allocation with SCS
Sami Tolvanen [Mon, 27 Apr 2020 16:00:11 +0000 (09:00 -0700)]
arm64: Reserve register x18 from general allocation with SCS

Reserve the x18 register from general allocation when SCS is enabled,
because the compiler uses the register to store the current task's
shadow stack pointer. Note that all external kernel modules must also be
compiled with -ffixed-x18 if the kernel has SCS enabled.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoscs: Disable when function graph tracing is enabled
Sami Tolvanen [Mon, 27 Apr 2020 16:00:10 +0000 (09:00 -0700)]
scs: Disable when function graph tracing is enabled

The graph tracer hooks returns by modifying frame records on the
(regular) stack, but with SCS the return address is taken from the
shadow stack, and the value in the frame record has no effect. As we
don't currently have a mechanism to determine the corresponding slot
on the shadow stack (and to pass this through the ftrace
infrastructure), for now let's disable SCS when the graph tracer is
enabled.

With SCS the return address is taken from the shadow stack and the
value in the frame record has no effect. The mcount based graph tracer
hooks returns by modifying frame records on the (regular) stack, and
thus is not compatible. The patchable-function-entry graph tracer
used for DYNAMIC_FTRACE_WITH_REGS modifies the LR before it is saved
to the shadow stack, and is compatible.

Modifying the mcount based graph tracer to work with SCS would require
a mechanism to determine the corresponding slot on the shadow stack
(and to pass this through the ftrace infrastructure), and we expect
that everyone will eventually move to the patchable-function-entry
based graph tracer anyway, so for now let's disable SCS when the
mcount-based graph tracer is enabled.

SCS and patchable-function-entry are both supported from LLVM 10.x.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoscs: Add support for stack usage debugging
Sami Tolvanen [Mon, 27 Apr 2020 16:00:09 +0000 (09:00 -0700)]
scs: Add support for stack usage debugging

Implements CONFIG_DEBUG_STACK_USAGE for shadow stacks. When enabled,
also prints out the highest shadow stack usage per process.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
[will: rewrote most of scs_check_usage()]
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoscs: Add page accounting for shadow call stack allocations
Sami Tolvanen [Mon, 27 Apr 2020 16:00:08 +0000 (09:00 -0700)]
scs: Add page accounting for shadow call stack allocations

This change adds accounting for the memory allocated for shadow stacks.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoscs: Add support for Clang's Shadow Call Stack (SCS)
Sami Tolvanen [Mon, 27 Apr 2020 16:00:07 +0000 (09:00 -0700)]
scs: Add support for Clang's Shadow Call Stack (SCS)

This change adds generic support for Clang's Shadow Call Stack,
which uses a shadow stack to protect return addresses from being
overwritten by an attacker. Details are available here:

  https://clang.llvm.org/docs/ShadowCallStack.html

Note that security guarantees in the kernel differ from the ones
documented for user space. The kernel must store addresses of
shadow stacks in memory, which means an attacker capable of reading
and writing arbitrary memory may be able to locate them and hijack
control flow by modifying the stacks.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
[will: Numerous cosmetic changes]
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: bti: Fix support for userspace only BTI
Mark Brown [Tue, 12 May 2020 11:39:50 +0000 (12:39 +0100)]
arm64: bti: Fix support for userspace only BTI

When setting PTE_MAYBE_GP we check system_supports_bti(), but this is
true for systems where only CONFIG_BTI is set, causing us to enable BTI
on some kernel text. Add an extra check for the kernel-mode option,
using an ifdef due to line length.

Fixes: c8027285e366 ("arm64: Set GP bit in kernel page tables to enable BTI for the kernel")
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20200512113950.29996-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
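A sketch of the fix (assumed shape), making the GP bit depend on the
kernel-mode Kconfig option rather than on system_supports_bti() alone:

#ifdef CONFIG_ARM64_BTI_KERNEL
#define PTE_MAYBE_GP    (system_supports_bti() ? PTE_GP : 0)
#else
#define PTE_MAYBE_GP    (0)
#endif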
4 years agoarm64: cpufeature: Add "or" to mitigations for multiple errata
Geert Uytterhoeven [Tue, 12 May 2020 14:52:55 +0000 (16:52 +0200)]
arm64: cpufeature: Add "or" to mitigations for multiple errata

Several actions are not mitigations for a single erratum, but for
multiple errata.  However, printing a line like

    CPU features: detected: ARM errata 11655221530923

may give the false impression that all three listed errata have been
detected.  This can confuse the user, who may think his Cortex-A55 is
suddenly affected by a Cortex-A76 erratum.

Add "or" to all descriptions for mitigations for multiple errata, to
make it clear that only one or more of the errata printed are
applicable, and not necessarily all of them.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/20200512145255.5520-1-geert+renesas@glider.be
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: kconfig: Update and comment GCC version check for kernel BTI
Will Deacon [Tue, 12 May 2020 11:45:40 +0000 (12:45 +0100)]
arm64: kconfig: Update and comment GCC version check for kernel BTI

Some versions of GCC are known to suffer from a BTI code generation bug,
meaning that CONFIG_CC_HAS_BRANCH_PROT_PAC_RET_BTI cannot be used on its
own to determine whether or not we can compile the kernel with BTI enabled.

Update the BTI Kconfig entry to refer to the relevant GCC bugzilla entry
(https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94697) and update the check
now that the fix has been merged into GCC release 10.1.

Acked-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
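
For illustration, the dependency now looks roughly like the following Kconfig
fragment; GCC 10.1 corresponds to GCC_VERSION 100100. Treat this as a sketch
of the check rather than the exact entry.

  config ARM64_BTI_KERNEL
          bool "Use Branch Target Identification for kernel code"
          depends on CC_HAS_BRANCH_PROT_PAC_RET_BTI
          # GCC 9 miscompiles kernels built with BTI; fixed in 10.1:
          # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94697
          depends on !CC_IS_GCC || GCC_VERSION >= 100100
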
4 years agoACPI: IORT: Add extra message "applying workaround" for off-by-1 issue
Hanjun Guo [Fri, 8 May 2020 03:56:38 +0000 (11:56 +0800)]
ACPI: IORT: Add extra message "applying workaround" for off-by-1 issue

As we already apply a workaround for the off-by-1 issue, add an
extra "applying workaround" message so that people are less uneasy
when they see the FW_BUG message in the boot log.

Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/1588910198-8348-1-git-send-email-guohanjun@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoDocumentation/vmcoreinfo: Add documentation for 'KERNELPACMASK'
Amit Daniel Kachhap [Mon, 11 May 2020 13:01:56 +0000 (18:31 +0530)]
Documentation/vmcoreinfo: Add documentation for 'KERNELPACMASK'

Add documentation for the KERNELPACMASK variable being added to vmcoreinfo.

It indicates the mask of PAC bits used in signed kernel pointers when the
Armv8.3-A Pointer Authentication feature is present.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Link: https://lore.kernel.org/r/1589202116-18265-2-git-send-email-amit.kachhap@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64/crash_core: Export KERNELPACMASK in vmcoreinfo
Amit Daniel Kachhap [Mon, 11 May 2020 13:01:55 +0000 (18:31 +0530)]
arm64/crash_core: Export KERNELPACMASK in vmcoreinfo

Recently the arm64 Linux kernel added support for the Armv8.3-A Pointer
Authentication feature. If this feature is enabled in the kernel and the
hardware supports address authentication then return addresses are
signed and stored on the stack to prevent ROP-style attacks. Kdump tools
will now dump the kernel with signed lr values on the stack.

Any user analysis tool for this kernel dump may need the kernel PAC mask
information in vmcoreinfo to recover the correct return addresses for
stack traces as well as to resolve symbol names.

This patch is similar to commit ec6e822d1a22d0eef ("arm64: expose user PAC
bit positions via ptrace") which exposes pac mask information via ptrace
interfaces.

The config guard ARM64_PTR_AUTH is removed from asm/compiler.h so macros
like ptrauth_kernel_pac_mask can be used unguarded. This config protection
is confusing as the pointer authentication feature may be missing at
runtime even though this config is present.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/1589202116-18265-1-git-send-email-amit.kachhap@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
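
The export itself is a one-liner in arch_crash_save_vmcoreinfo(), along these
lines (helper names are those provided by the arm64 pointer authentication
support; the surrounding arm64 fields are omitted here):

          /* Expose the PAC mask so dump tools can strip pointer signatures. */
          vmcoreinfo_append_str("NUMBER(KERNELPACMASK)=0x%llx\n",
                                system_supports_address_auth() ?
                                ptrauth_kernel_pac_mask() : 0);
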
4 years agobpf, arm64: Optimize ADD,SUB,JMP BPF_K using arm64 add/sub immediates
Luke Nelson [Fri, 8 May 2020 18:15:46 +0000 (11:15 -0700)]
bpf, arm64: Optimize ADD,SUB,JMP BPF_K using arm64 add/sub immediates

The current code for BPF_{ADD,SUB} BPF_K loads the BPF immediate to a
temporary register before performing the addition/subtraction. Similarly,
BPF_JMP BPF_K cases load the immediate to a temporary register before
comparison.

This patch introduces optimizations that use arm64 immediate add, sub,
cmn, or cmp instructions when the BPF immediate fits. If the immediate
does not fit, it falls back to using a temporary register.

Example of generated code for BPF_ALU64_IMM(BPF_ADD, R0, 2):

without optimization:

  24: mov x10, #0x2
  28: add x7, x7, x10

with optimization:

  24: add x7, x7, #0x2

The code could use A64_{ADD,SUB}_I directly and check if it returns
AARCH64_BREAK_FAULT, similar to how logical immediates are handled.
However, aarch64_insn_gen_add_sub_imm from insn.c prints error messages
when the immediate does not fit, and it's simpler to check if the
immediate fits ahead of time.

Co-developed-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Luke Nelson <luke.r.nels@gmail.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/r/20200508181547.24783-4-luke.r.nels@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
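
A condensed sketch of the BPF_ADD | BPF_K path in the arm64 JIT after this
change (helper and macro names follow the mainline JIT; simplified):

          case BPF_ALU | BPF_ADD | BPF_K:
          case BPF_ALU64 | BPF_ADD | BPF_K:
                  if (is_addsub_imm(imm)) {
                          emit(A64_ADD_I(is64, dst, dst, imm), ctx);
                  } else if (is_addsub_imm(-imm)) {
                          emit(A64_SUB_I(is64, dst, dst, -imm), ctx);
                  } else {
                          /* Immediate does not fit: fall back to a temporary. */
                          emit_a64_mov_i(is64, tmp, imm, ctx);
                          emit(A64_ADD(is64, dst, dst, tmp), ctx);
                  }
                  break;
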
4 years agobpf, arm64: Optimize AND,OR,XOR,JSET BPF_K using arm64 logical immediates
Luke Nelson [Fri, 8 May 2020 18:15:45 +0000 (11:15 -0700)]
bpf, arm64: Optimize AND,OR,XOR,JSET BPF_K using arm64 logical immediates

The current code for BPF_{AND,OR,XOR,JSET} BPF_K loads the immediate to
a temporary register before use.

This patch changes the code to avoid using a temporary register
when the BPF immediate is encodable using an arm64 logical immediate
instruction. If the encoding fails (due to the immediate not being
encodable), it falls back to using a temporary register.

Example of generated code for BPF_ALU32_IMM(BPF_AND, R0, 0x80000001):

without optimization:

  24: mov  w10, #0x8000ffff
  28: movk w10, #0x1
  2c: and  w7, w7, w10

with optimization:

  24: and  w7, w7, #0x80000001

Since the encoding process is quite complex, the JIT reuses existing
functionality in arch/arm64/kernel/insn.c for encoding logical immediates
rather than duplicating it in the JIT.

Co-developed-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Luke Nelson <luke.r.nels@gmail.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/r/20200508181547.24783-3-luke.r.nels@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
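
The BPF_AND | BPF_K case follows the same pattern, except that the encoder
itself signals failure, roughly (names follow the mainline JIT; simplified):

          case BPF_ALU | BPF_AND | BPF_K:
          case BPF_ALU64 | BPF_AND | BPF_K:
                  a64_insn = A64_AND_I(is64, dst, dst, imm);
                  if (a64_insn != AARCH64_BREAK_FAULT) {
                          emit(a64_insn, ctx);
                  } else {
                          /* Not encodable as a logical immediate: use a temporary. */
                          emit_a64_mov_i(is64, tmp, imm, ctx);
                          emit(A64_AND(is64, dst, dst, tmp), ctx);
                  }
                  break;
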
4 years agoarm64: insn: Fix two bugs in encoding 32-bit logical immediates
Luke Nelson [Fri, 8 May 2020 18:15:44 +0000 (11:15 -0700)]
arm64: insn: Fix two bugs in encoding 32-bit logical immediates

This patch fixes two issues present in the current function for encoding
arm64 logical immediates when using the 32-bit variants of instructions.

First, the code does not correctly reject an all-ones 32-bit immediate,
and returns an undefined instruction encoding.

Second, the code incorrectly rejects some 32-bit immediates that are
actually encodable as logical immediates. The root cause is that the code
uses a default mask of 64-bit all-ones, even for 32-bit immediates.
This causes an issue later on when the default mask is used to fill the
top bits of the immediate with ones, shown here:

  /*
   * Pattern: 0..01..10..01..1
   *
   * Fill the unused top bits with ones, and check if
   * the result is a valid immediate (all ones with a
   * contiguous ranges of zeroes).
   */
  imm |= ~mask;
  if (!range_of_ones(~imm))
          return AARCH64_BREAK_FAULT;

To see the problem, consider an immediate of the form 0..01..10..01..1,
where the upper 32 bits are zero, such as 0x80000001. The code checks
if ~(imm | ~mask) contains a range of ones: the incorrect mask yields
1..10..01..10..0, which fails the check; the correct mask yields
0..01..10..0, which succeeds.

The fix for both issues is to generate a correct mask based on the
instruction immediate size, and use the mask to check for all-ones,
all-zeroes, and values wider than the mask.

Currently, arch/arm64/kvm/va_layout.c is the only user of this function,
which uses 64-bit immediates and therefore won't trigger these bugs.

We tested the new code against llvm-mc with all 1,302 encodable 32-bit
logical immediates and all 5,334 encodable 64-bit logical immediates.

Fixes: ef3935eeebff ("arm64: insn: Add encoder for bitwise operations using literals")
Suggested-by: Will Deacon <will@kernel.org>
Co-developed-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Luke Nelson <luke.r.nels@gmail.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20200508181547.24783-2-luke.r.nels@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
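
The core of the fix is deriving the mask from the element size and rejecting
the degenerate values up front, roughly (simplified from the patch):

          if (variant == AARCH64_INSN_VARIANT_32BIT)
                  esz = 32;

          mask = GENMASK(esz - 1, 0);

          /* Can't encode all-zeroes, all-ones, or a value wider than the mask. */
          if (!imm || imm == mask || imm & ~mask)
                  return AARCH64_BREAK_FAULT;
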
4 years agoarm64: vdso: Map the vDSO text with guarded pages when built for BTI
Mark Brown [Wed, 6 May 2020 19:51:38 +0000 (20:51 +0100)]
arm64: vdso: Map the vDSO text with guarded pages when built for BTI

The kernel is responsible for mapping the vDSO into userspace processes,
including mapping the text section as executable. Handle the mapping of
the vDSO for BTI similarly, mapping the text section as guarded pages so
the BTI annotations in the vDSO become effective when they are present.

This will mean that we can have BTI active for the vDSO in processes that
do not otherwise support BTI. This should not be an issue for any expected
use of the vDSO.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-12-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: vdso: Force the vDSO to be linked as BTI when built for BTI
Mark Brown [Wed, 6 May 2020 19:51:37 +0000 (20:51 +0100)]
arm64: vdso: Force the vDSO to be linked as BTI when built for BTI

When the kernel and hence vDSO are built with BTI enabled force the linker
to link the vDSO as BTI. This will cause the linker to warn if any of the
input files do not have the BTI annotation, ensuring that we don't silently
fail to provide a vDSO that is built and annotated for BTI when the
kernel is being built with BTI.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-11-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
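
In Makefile terms this amounts to something like the following in the vDSO
Makefile; -z force-bti is the standard linker option, while the variable
plumbing here is illustrative:

  # Warn if any input object lacks the BTI property note and mark the output as BTI.
  btildflags-$(CONFIG_ARM64_BTI_KERNEL) += -z force-bti
  ldflags-y += $(btildflags-y)
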
4 years agoarm64: vdso: Annotate for BTI
Mark Brown [Wed, 6 May 2020 19:51:36 +0000 (20:51 +0100)]
arm64: vdso: Annotate for BTI

Generate BTI annotations for all assembly files included in the 64 bit
vDSO.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-10-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: asm: Provide a mechanism for generating ELF note for BTI
Mark Brown [Wed, 6 May 2020 19:51:35 +0000 (20:51 +0100)]
arm64: asm: Provide a mechanism for generating ELF note for BTI

ELF files built for BTI should have a program property note section which
identifies them as such. The linker expects to find this note in all
object files it is linking into a BTI annotated output. The compiler will
ensure that this happens for C files, but for assembler files we need to do
it in the source, so provide a macro which can be used for this purpose.
To support likely future requirements for additional notes we split the
definition of the flags to set for BTI code from the macro that creates
the note itself.

This is mainly for use in the vDSO which should be a normal ELF shared
library and should therefore include BTI annotations when built for BTI.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20200506195138.22086-9-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: bti: Provide Kconfig for kernel mode BTI
Mark Brown [Wed, 6 May 2020 19:51:34 +0000 (20:51 +0100)]
arm64: bti: Provide Kconfig for kernel mode BTI

Now that all the code is in place provide a Kconfig option allowing users
to enable BTI for the kernel if their toolchain supports it, defaulting it
on since this has security benefits. This is a separate configuration
option since we currently don't support secondary CPUs that lack BTI if
the boot CPU supports it.

Code generation issues mean that current GCC 9 versions are not able to
produce usable BTI binaries so we disable support for building with GCC
versions prior to 10, once a fix is backported to GCC 9 the dependencies
will be updated.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-8-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: mm: Mark executable text as guarded pages
Mark Brown [Wed, 6 May 2020 19:51:33 +0000 (20:51 +0100)]
arm64: mm: Mark executable text as guarded pages

When the kernel is built for BTI and running on a system which supports
BTI, make all executable text guarded pages to ensure that loadable modules
and JITed BPF code are protected by BTI.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-7-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: bpf: Annotate JITed code for BTI
Mark Brown [Wed, 6 May 2020 19:51:32 +0000 (20:51 +0100)]
arm64: bpf: Annotate JITed code for BTI

In order to extend the protection offered by BTI to all code executing in
kernel mode we need to annotate JITed BPF code appropriately for BTI. To
do this we need to add a landing pad to the start of each BPF function and
also immediately after the function prologue if we are emitting a function
which can be tail called. Jumps within BPF functions are all to immediate
offsets and therefore do not require landing pads.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-6-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
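
A sketch of the landing pads emitted in the JIT prologue (instruction macros
as added by this series; simplified):

          /* Landing pad for indirect calls into the JITed image. */
          if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL))
                  emit(A64_BTI_C, ctx);

          /*
           * Emitted again after the prologue proper: landing pad for the
           * tail-call branch, which arrives via BR rather than BLR.
           */
          if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL))
                  emit(A64_BTI_J, ctx);
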
4 years agoarm64: Set GP bit in kernel page tables to enable BTI for the kernel
Mark Brown [Wed, 6 May 2020 19:51:31 +0000 (20:51 +0100)]
arm64: Set GP bit in kernel page tables to enable BTI for the kernel

Now that the kernel is built with BTI annotations enable the feature by
setting the GP bit in the stage 1 translation tables.  This is done
based on the features supported by the boot CPU so that we do not need
to rewrite the translation tables.

In order to avoid potential issues on big.LITTLE systems where there is a
mix of BTI and non-BTI capable CPUs, change BTI to be a
_STRICT_BOOT_CPU_FEATURE when kernel mode BTI is enabled.  This will
prevent any CPUs that don't support BTI from being started if the boot CPU
supports BTI, rather than simply not using BTI as we do when supporting
BTI only in userspace.  The main concern is
the possibility of BTYPE being preserved by a CPU that does not
implement BTI when a thread is migrated to it resulting in an incorrect
state which could generate an exception when the thread migrates back to
a CPU that does support BTI.  If we encounter practical systems which
mix BTI and non-BTI CPUs we will need to revisit this implementation.

Since we currently do not generate landing pads in the BPF JIT we only
map the base kernel text in this way.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-5-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: asm: Override SYM_FUNC_START when building the kernel with BTI
Mark Brown [Wed, 6 May 2020 19:51:30 +0000 (20:51 +0100)]
arm64: asm: Override SYM_FUNC_START when building the kernel with BTI

When the kernel is built for BTI override SYM_FUNC_START and related macros
to add a BTI landing pad to the start of all global functions, ensuring that
they are BTI safe. The ; at the end of the BTI_x macros is for the
benefit of the macro-generated functions in xen-hypercall.S.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-4-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
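
The override in the arm64 linkage header is roughly the following (only one
SYM_FUNC_START variant shown; hint #34 is 'bti c', spelled as a HINT so that
older assemblers still accept it):

  #define BTI_C hint 34 ;

  #define SYM_FUNC_START(name)                          \
          SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)    \
          BTI_C
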
4 years agoarm64: bti: Support building kernel C code using BTI
Mark Brown [Wed, 6 May 2020 19:51:29 +0000 (20:51 +0100)]
arm64: bti: Support building kernel C code using BTI

When running with BTI enabled we need to ask the compiler to enable
generation of BTI landing pads beyond those generated as a result of
pointer authentication instructions being landing pads. Since the two
features are practically speaking unlikely to be used separately we
will make kernel mode BTI depend on pointer authentication in order
to simplify the Makefile.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-3-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
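
The compiler flags end up along these lines in the arm64 Makefile (a
simplified sketch; the real logic also checks that the toolchain actually
supports the option):

  ifeq ($(CONFIG_ARM64_BTI_KERNEL),y)
  branch-prot-flags-y := -mbranch-protection=pac-ret+leaf+bti
  else ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
  branch-prot-flags-y := -mbranch-protection=pac-ret+leaf
  endif
  KBUILD_CFLAGS += $(branch-prot-flags-y)
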
4 years agoarm64: Document why we enable PAC support for leaf functions
Mark Brown [Wed, 6 May 2020 19:51:28 +0000 (20:51 +0100)]
arm64: Document why we enable PAC support for leaf functions

Document the fact that we enable pointer authentication protection for
leaf functions since there is some narrow potential for ROP protection
benefits and little overhead has been observed.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20200506195138.22086-2-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: vdso: Add --eh-frame-hdr to ldflags
Vincenzo Frascino [Thu, 7 May 2020 10:40:49 +0000 (11:40 +0100)]
arm64: vdso: Add --eh-frame-hdr to ldflags

LLVM's unwinder depends on the .eh_frame_hdr section being present for
unwinding. However, when compiling Linux with GCC the section is not
present in the vDSO library object, and when compiling with Clang it is
present but has zero length.

With GCC the problem was not spotted because the libgcc unwinder does
not require the .eh_frame_hdr section to be present.

Add --eh-frame-hdr to ldflags to correctly generate and populate
the section for both GCC and LLVM.

Fixes: 28b1a824a4f44 ("arm64: vdso: Substitute gettimeofday() with C implementation")
Reported-by: Tamas Zsoldos <tamas.zsoldos@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Tested-by: Tamas Zsoldos <tamas.zsoldos@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200507104049.47834-1-vincenzo.frascino@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: cacheflush: Fix KGDB trap detection
Daniel Thompson [Mon, 4 May 2020 17:05:18 +0000 (18:05 +0100)]
arm64: cacheflush: Fix KGDB trap detection

flush_icache_range() contains a bodge to avoid issuing IPIs when the kgdb
trap handler is running because issuing IPIs is unsafe (and not needed)
in this execution context. However the current test, based on
kgdb_connected is flawed: it both over-matches and under-matches.

The over match occurs because kgdb_connected is set when gdb attaches
to the stub and remains set during normal running. This is relatively
harmelss because in almost all cases irq_disabled() will be false.

The under match is more serious. When kdb is used instead of kgdb to access
the debugger then kgdb_connected is not set in all the places that the
debug core updates sw breakpoints (and hence flushes the icache). This
can lead to deadlock.

Fix by replacing the ad-hoc check with the proper kgdb macro. This also
allows us to drop the #ifdef wrapper.

Fixes: 3b8c9f1cdfc5 ("arm64: IPI each CPU after invalidating the I-cache for kernel mappings")
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/20200504170518.2959478-1-daniel.thompson@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
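
The replacement check ends up as a single unconditional test; in_dbg_master()
is the generic kgdb helper and evaluates to 0 when KGDB is not built in, so
the #ifdef can go (a sketch, not the verbatim diff):

          /*
           * KGDB performs cache maintenance with interrupts disabled, so we
           * will deadlock trying to IPI the secondary CPUs. Skip the IPI when
           * the current CPU is running the debugger core.
           */
          if (in_dbg_master())
                  return;

          kick_all_cpus_sync();
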
4 years agoMerge branches 'for-next/asm' and 'for-next/insn' into for-next/bti
Will Deacon [Tue, 5 May 2020 14:19:09 +0000 (15:19 +0100)]
Merge branches 'for-next/asm' and 'for-next/insn' into for-next/bti

Merge in dependencies for in-kernel Branch Target Identification support.

* for-next/asm:
  arm64: Disable old style assembly annotations
  arm64: kernel: Convert to modern annotations for assembly functions
  arm64: entry: Refactor and modernise annotation for ret_to_user
  x86/asm: Provide a Kconfig symbol for disabling old assembly annotations
  x86/32: Remove CONFIG_DOUBLEFAULT

* for-next/insn:
  arm64: insn: Report PAC and BTI instructions as skippable
  arm64: insn: Don't assume unrecognized HINTs are skippable
  arm64: insn: Provide a better name for aarch64_insn_is_nop()
  arm64: insn: Add constants for new HINT instruction decode

4 years agoMerge branch 'for-next/bti-user' into for-next/bti
Will Deacon [Tue, 5 May 2020 14:15:58 +0000 (15:15 +0100)]
Merge branch 'for-next/bti-user' into for-next/bti

Merge in user support for Branch Target Identification, which narrowly
missed the cut for 5.7 after a late ABI concern.

* for-next/bti-user:
  arm64: bti: Document behaviour for dynamically linked binaries
  arm64: elf: Fix allnoconfig kernel build with !ARCH_USE_GNU_PROPERTY
  arm64: BTI: Add Kconfig entry for userspace BTI
  mm: smaps: Report arm64 guarded pages in smaps
  arm64: mm: Display guarded pages in ptdump
  KVM: arm64: BTI: Reset BTYPE when skipping emulated instructions
  arm64: BTI: Reset BTYPE when skipping emulated instructions
  arm64: traps: Shuffle code to eliminate forward declarations
  arm64: unify native/compat instruction skipping
  arm64: BTI: Decode BYTPE bits when printing PSTATE
  arm64: elf: Enable BTI at exec based on ELF program properties
  elf: Allow arch to tweak initial mmap prot flags
  arm64: Basic Branch Target Identification support
  ELF: Add ELF program property parsing support
  ELF: UAPI and Kconfig additions for ELF program properties

4 years agoarm64: cpufeature: Group indexed system register definitions by name
Will Deacon [Tue, 5 May 2020 12:08:02 +0000 (13:08 +0100)]
arm64: cpufeature: Group indexed system register definitions by name

Some system registers contain an index in the name (e.g. ID_MMFR<n>_EL1)
and, while this index often follows the register encoding, newer additions
to the architecture are necessarily tacked on the end. Sorting these
registers by encoding therefore becomes a bit of a mess.

Group the indexed system register definitions by name so that it's easier to
read and will hopefully reduce the chance of us accidentally introducing
duplicate definitions in the future.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: cpufeature: Extend comment to describe absence of field info
Will Deacon [Tue, 5 May 2020 10:45:21 +0000 (11:45 +0100)]
arm64: cpufeature: Extend comment to describe absence of field info

When a feature register field is omitted from the description of the
register, the corresponding bits are treated as STRICT RES0, including
for KVM guests. This is subtly different to declaring the field as
HIDDEN/STRICT/EXACT/0, so update the comment to call this out.

Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Sort vendor-specific errata
Geert Uytterhoeven [Thu, 16 Apr 2020 11:56:57 +0000 (13:56 +0200)]
arm64: Sort vendor-specific errata

Sort configuration options for vendor-specific errata by vendor, to
increase uniformity.
Move ARM64_WORKAROUND_REPEAT_TLBI up, as it is also selected by
ARM64_ERRATUM_1286807.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Will Deacon <will@kernel.org>
4 years agofirmware: arm_sdei: Drop check for /firmware/ node and always register driver
Sudeep Holla [Wed, 22 Apr 2020 12:28:23 +0000 (13:28 +0100)]
firmware: arm_sdei: Drop check for /firmware/ node and always register driver

As with most drivers, let us register this driver unconditionally by
dropping the checks for the presence of firmware nodes (DT) or entries (ACPI).

Further, as mentioned in the commit acafce48b07b ("firmware: arm_sdei:
Fix DT platform device creation"), the core takes care of creation of
platform device when the appropriate device node is found and probe
is called accordingly.

Let us check only for the presence of the ACPI firmware entry before creating
the platform device, and flag a warning if we fail.

Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Cc: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20200422122823.1390-1-sudeep.holla@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64/cpuinfo: Move device_initcall() near cpuinfo_regs_init()
Anshuman Khandual [Mon, 4 May 2020 12:29:37 +0000 (17:59 +0530)]
arm64/cpuinfo: Move device_initcall() near cpuinfo_regs_init()

This moves device_initcall() near cpuinfo_regs_init(), making the calling
sequence clear. Besides, it is standard practice to have device_initcall()
(or any __initcall, for that matter) just after the function it actually calls.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/1588595377-4503-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: insn: Report PAC and BTI instructions as skippable
Mark Brown [Mon, 4 May 2020 13:13:26 +0000 (14:13 +0100)]
arm64: insn: Report PAC and BTI instructions as skippable

The PAC and BTI instructions can be safely skipped so report them as
such, allowing them to be probed.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200504131326.18290-5-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: insn: Don't assume unrecognized HINTs are skippable
Mark Brown [Mon, 4 May 2020 13:13:25 +0000 (14:13 +0100)]
arm64: insn: Don't assume unrecognized HINTs are skippable

Currently the kernel assumes that any HINT which it does not explicitly
recognise is skippable.  This is not robust as new instructions may be
added which need special handling, and in any case software should only
be using explicit NOP instructions for deliberate NOPs.

This has the effect of rendering PAC and BTI instructions unprobeable
which means that probes can't be inserted on the first instruction of
functions built with those features.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200504131326.18290-4-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: insn: Provide a better name for aarch64_insn_is_nop()
Mark Brown [Mon, 4 May 2020 13:13:24 +0000 (14:13 +0100)]
arm64: insn: Provide a better name for aarch64_insn_is_nop()

The current aarch64_insn_is_nop() has exactly one caller which uses it
solely to identify if the instruction is a HINT that can safely be stepped,
requiring us to list things that aren't NOPs and make things more confusing
than they need to be. Rename the function to reflect the actual usage and
make things clearer.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200504131326.18290-3-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: insn: Add constants for new HINT instruction decode
Mark Brown [Mon, 4 May 2020 13:13:23 +0000 (14:13 +0100)]
arm64: insn: Add constants for new HINT instruction decode

Add constants for decoding newer instructions defined in the HINT space.
Since we are now decoding both the op2 and CRm fields rename the enum as
well; this is compatible with what the existing users are doing.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200504131326.18290-2-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Unify WORKAROUND_SPECULATIVE_AT_{NVHE,VHE}
Andrew Scull [Mon, 4 May 2020 09:48:58 +0000 (10:48 +0100)]
arm64: Unify WORKAROUND_SPECULATIVE_AT_{NVHE,VHE}

Errata 1165522, 1319367 and 1530923 each allow TLB entries to be
allocated as a result of a speculative AT instruction. In order to
avoid mandating VHE on certain affected CPUs, apply the workaround to
both the nVHE and the VHE case for all affected CPUs.

Signed-off-by: Andrew Scull <ascull@google.com>
Acked-by: Will Deacon <will@kernel.org>
CC: Marc Zyngier <maz@kernel.org>
CC: James Morse <james.morse@arm.com>
CC: Suzuki K Poulose <suzuki.poulose@arm.com>
CC: Will Deacon <will@kernel.org>
CC: Steven Price <steven.price@arm.com>
Link: https://lore.kernel.org/r/20200504094858.108917-1-ascull@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Disable old style assembly annotations
Mark Brown [Fri, 1 May 2020 11:54:30 +0000 (12:54 +0100)]
arm64: Disable old style assembly annotations

Now that we have converted arm64 over to the new style SYM_ assembler
annotations select ARCH_USE_SYM_ANNOTATIONS so the old macros aren't
available and we don't regress.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200501115430.37315-4-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: kernel: Convert to modern annotations for assembly functions
Mark Brown [Fri, 1 May 2020 11:54:29 +0000 (12:54 +0100)]
arm64: kernel: Convert to modern annotations for assembly functions

In an effort to clarify and simplify the annotation of assembly functions
in the kernel new macros have been introduced. These replace ENTRY and
ENDPROC and also add a new annotation for static functions which previously
had no ENTRY equivalent. Update the annotations in the core kernel code to
the new macros.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200501115430.37315-3-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
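
By way of example, a conversion looks like this (example_func is a made-up
name; the real functions in the patch follow the same pattern):

  /* Old style */
  ENTRY(example_func)
          mov     x0, xzr
          ret
  ENDPROC(example_func)

  /* New style */
  SYM_FUNC_START(example_func)
          mov     x0, xzr
          ret
  SYM_FUNC_END(example_func)

Static functions, which previously had no ENTRY equivalent, use
SYM_FUNC_START_LOCAL() with the same SYM_FUNC_END().
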
4 years agoarm64: entry: Refactor and modernise annotation for ret_to_user
Mark Brown [Fri, 1 May 2020 11:54:28 +0000 (12:54 +0100)]
arm64: entry: Refactor and modernise annotation for ret_to_user

As part of an effort to clarify and clean up the assembler annotations
new macros have been introduced which annotate the start and end of blocks
of code in assembler files. Currently ret_to_user has an out of line slow
path work_pending placed above the main function which makes annotating the
start and end of these blocks of code awkward.

Since work_pending is only referenced from within ret_to_user try to make
things a bit clearer by moving it after the current ret_to_user and then
marking both ret_to_user and work_pending as part of a single ret_to_user
code block.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200501115430.37315-2-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoACPI/IORT: work around num_ids ambiguity
Ard Biesheuvel [Fri, 1 May 2020 16:10:14 +0000 (18:10 +0200)]
ACPI/IORT: work around num_ids ambiguity

The ID mapping table structure of the IORT table describes the size of
a range using a num_ids field carrying the number of IDs in the region
minus one. This has been misinterpreted in the past in the parsing code,
and firmware is known to have shipped for which this results in an
ambiguity, where regions that should be adjacent overlap by one value.

So let's work around this by detecting this case specifically: when
resolving an ID translation, allow one that matches right at the end of
a multi-ID region to be superseded by a subsequent one.

To prevent potential regressions on broken firmware that happened to
work before, only take the subsequent match into account if it occurs
at the start of a mapping region.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Link: https://lore.kernel.org/r/20200501161014.5935-3-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoRevert "ACPI/IORT: Fix 'Number of IDs' handling in iort_id_map()"
Ard Biesheuvel [Fri, 1 May 2020 16:10:13 +0000 (18:10 +0200)]
Revert "ACPI/IORT: Fix 'Number of IDs' handling in iort_id_map()"

This reverts commit 3c23b83a88d00383e1d498cfa515249aa2fe0238.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20200501161014.5935-2-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>