git.proxmox.com Git - mirror_ubuntu-artful-kernel.git/log
6 years agoUBUNTU: Ubuntu-4.13.0-28.31 Ubuntu-4.13.0-28.31
Seth Forshee [Thu, 11 Jan 2018 23:48:34 +0000 (17:48 -0600)]
UBUNTU: Ubuntu-4.13.0-28.31

Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoUBUNTU: SAUCE: x86/kvm: Fix stuff_RSB() for 32-bit
William Grant [Thu, 11 Jan 2018 23:05:42 +0000 (17:05 -0600)]
UBUNTU: SAUCE: x86/kvm: Fix stuff_RSB() for 32-bit

CVE-2017-5753
CVE-2017-5715

Signed-off-by: William Grant <wgrant@ubuntu.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoUBUNTU: Start new release
Seth Forshee [Thu, 11 Jan 2018 23:43:58 +0000 (17:43 -0600)]
UBUNTU: Start new release

Ignore: yes
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoUBUNTU: Ubuntu-4.13.0-27.30 Ubuntu-4.13.0-27.30
Marcelo Henrique Cerri [Thu, 11 Jan 2018 20:42:34 +0000 (18:42 -0200)]
UBUNTU: Ubuntu-4.13.0-27.30

Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
6 years agox86/microcode/AMD: Add support for fam17h microcode loading
Tom Lendacky [Thu, 30 Nov 2017 22:46:40 +0000 (16:46 -0600)]
x86/microcode/AMD: Add support for fam17h microcode loading

CVE-2017-5753
CVE-2017-5715

The size for the Microcode Patch Block (MPB) for an AMD family 17h
processor is 3200 bytes.  Add a #define for fam17h so that it does
not default to 2048 bytes and fail a microcode load/update.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@alien8.de>
Link: https://lkml.kernel.org/r/20171130224640.15391.40247.stgit@tlendack-t1.amdoffice.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit f4e9b7af0cd58dd039a0fb2cd67d57cea4889abf)
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
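
For illustration only, a minimal sketch of the per-family size override described above; the helper and macro name are assumptions, while the 3200/2048 byte sizes come from the changelog:

  /* Illustrative sketch: helper and macro name are assumed, not verbatim. */
  #define F17H_MPB_MAX_SIZE 3200

  static unsigned int amd_mpb_max_size(unsigned int family)
  {
          switch (family) {
          case 0x17:
                  return F17H_MPB_MAX_SIZE;
          default:
                  return 2048;    /* old default, too small for fam17h patches */
          }
  }
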
6 years agoUBUNTU: [Config] updateconfigs to enable GENERIC_CPU_VULNERABILITIES
Kleber Sacilotto de Souza [Thu, 11 Jan 2018 19:14:31 +0000 (20:14 +0100)]
UBUNTU: [Config] updateconfigs to enable GENERIC_CPU_VULNERABILITIES

The new kernel config option was added by commit "sysfs/cpu: Add
vulnerability folder".

CVE-2017-5754
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: [Config] Disable CONFIG_PPC_DEBUG_RFI
Marcelo Henrique Cerri [Wed, 10 Jan 2018 20:17:08 +0000 (18:17 -0200)]
UBUNTU: [Config] Disable CONFIG_PPC_DEBUG_RFI

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: rfi-flush: Make the fallback robust against memory corruption
Michael Ellerman [Tue, 9 Jan 2018 15:43:00 +0000 (21:13 +0530)]
UBUNTU: SAUCE: rfi-flush: Make the fallback robust against memory corruption

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
The load dependency we add in the fallback flush relies on the value
we loaded from the fallback area being zero. Although that should
always be the case, bugs happen, so make the code robust against any
corruption by xor'ing it with itself.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
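
The hardening can be illustrated with a hedged C sketch (the real change is powerpc assembly, and the names below are made up for illustration): xor'ing the loaded value with itself yields zero even if the load returned garbage, while the load stays in the dependency chain.

  /* Hedged C sketch of the idea; 'fallback_area' and 'base' are hypothetical. */
  static void *dependent_addr(unsigned long *fallback_area, void *base)
  {
          unsigned long v = *(volatile unsigned long *)fallback_area; /* expected 0 */

          v ^= v;          /* guaranteed zero even if the memory was corrupted */
          return base + v; /* load stays in the dependency chain, offset is 0  */
  }
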
6 years agoUBUNTU: SAUCE: rfi-flush: Fix some RFI conversions in the KVM code
Michael Ellerman [Mon, 8 Jan 2018 06:39:52 +0000 (12:09 +0530)]
UBUNTU: SAUCE: rfi-flush: Fix some RFI conversions in the KVM code

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
Spotted by Paul.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: rfi-flush: Fix the 32-bit KVM build
Michael Ellerman [Mon, 8 Jan 2018 06:39:45 +0000 (12:09 +0530)]
UBUNTU: SAUCE: rfi-flush: Fix the 32-bit KVM build

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
Spotted by Paul.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: rfi-flush: Fallback flush add load dependency
Nicholas Piggin [Mon, 8 Jan 2018 06:39:37 +0000 (12:09 +0530)]
UBUNTU: SAUCE: rfi-flush: Fallback flush add load dependency

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
Add a data dependency on loads for the fallback flush. This
reduces or eliminates instances of incomplete flushing on P8 and
P9.

Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: rfi-flush: Use rfi-flush in printks
Michael Ellerman [Sun, 7 Jan 2018 13:07:03 +0000 (00:07 +1100)]
UBUNTU: SAUCE: rfi-flush: Use rfi-flush in printks

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: rfi-flush: Add no_rfi_flush and nopti command line options
Michael Ellerman [Sun, 7 Jan 2018 12:52:42 +0000 (23:52 +1100)]
UBUNTU: SAUCE: rfi-flush: Add no_rfi_flush and nopti command line options

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
We use the x86 'nopti' option because all the documentation on earth is
going to refer to that, and we can guess what users mean when they
specify that - they want to avoid any overhead due to Meltdown
mitigations.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
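
A hedged sketch of how the two options could be wired up with early_param(); the flag and handler names are assumptions:

  static bool no_rfi_flush;

  static int __init handle_no_rfi_flush(char *p)
  {
          pr_info("rfi-flush: disabled on command line.\n");
          no_rfi_flush = true;
          return 0;
  }
  early_param("no_rfi_flush", handle_no_rfi_flush);

  /* Accept the x86 spelling too, since that is what users will look for. */
  static int __init handle_no_pti(char *p)
  {
          pr_info("rfi-flush: disabling due to 'nopti' on command line.\n");
          return handle_no_rfi_flush(p);
  }
  early_param("nopti", handle_no_pti);
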
6 years agoUBUNTU: SAUCE: rfi-flush: Refactor the macros so the nops are defined once
Michael Ellerman [Sun, 7 Jan 2018 11:02:02 +0000 (22:02 +1100)]
UBUNTU: SAUCE: rfi-flush: Refactor the macros so the nops are defined once

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
To avoid a bug like the previous commit ever happening again, put the
nops in a single place.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: rfi-flush: Fix HRFI_TO_UNKNOWN
Michael Ellerman [Sun, 7 Jan 2018 10:52:36 +0000 (21:52 +1100)]
UBUNTU: SAUCE: rfi-flush: Fix HRFI_TO_UNKNOWN

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
We forgot to expand the number of nops in HRFI_TO_UNKNOWN when we
expanded the number of nops in the other macros. The result is we
actually overwrite the rfid with a nop, which is not good. Luckily this
is only used in denorm_done, which is not hit often.

Spotted by Ram.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: rfi-flush: Fix the fallback flush to actually activate
Michael Ellerman [Sat, 6 Jan 2018 15:50:16 +0000 (21:20 +0530)]
UBUNTU: SAUCE: rfi-flush: Fix the fallback flush to actually activate

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
Since we now have three nops, we need to branch further to get over
the nops to the branch to the fallback flush.

Instead of putting the branch in slot 1 and branching by 8, put it in
0 and branch all the way to keep it simple.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: rfi-flush: Put the fallback flushes in the real trampoline section
Michael Ellerman [Sat, 6 Jan 2018 15:50:07 +0000 (21:20 +0530)]
UBUNTU: SAUCE: rfi-flush: Put the fallback flushes in the real trampoline section

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
Otherwise they end up somewhere random depending on what code precedes
them, which varies depending on CONFIG options. The HRFI version at
least needs to be below __end_interrupts so that the HMI early handler
can call it.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: rfi-flush: Rework pseries logic to be more cautious
Michael Ellerman [Sat, 6 Jan 2018 15:49:57 +0000 (21:19 +0530)]
UBUNTU: SAUCE: rfi-flush: Rework pseries logic to be more cautious

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
Rather than assuming a successful return from the hcall will tell us a
valid flush type, if the hcall doesn't select one of the known flush
types use the fallback.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: rfi-flush: Rework powernv logic to be more cautious
Michael Ellerman [Sat, 6 Jan 2018 15:49:45 +0000 (21:19 +0530)]
UBUNTU: SAUCE: rfi-flush: Rework powernv logic to be more cautious

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
Assume we need to do the fallback flush, unless firmware tells us
explicitly not to, by having the two needs-l1d-flush properties set to
disabled.

The previous logic assumed that the existence of a "fw-features"
node with no further properties was sufficient to indicate the flush
wasn't needed.

This should make no difference in practice with current firmwares,
because the "fw-features" node has only just been introduced, so there
are no machines in the wild which have an empty "fw-features" node.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
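
A hedged sketch of the decision logic described above; the helper and the property names are illustrative assumptions, not the patch's identifiers, and the flush-type enum is the one used throughout this series:

  /* Hedged sketch: assume the worst unless firmware explicitly says otherwise. */
  static enum l1d_flush_type pnv_choose_flush_type(void)
  {
          struct device_node *fw = of_find_node_by_name(NULL, "fw-features");
          enum l1d_flush_type type = L1D_FLUSH_FALLBACK;

          if (fw &&
              property_disabled(fw, "needs-l1d-flush-a") &&  /* hypothetical name */
              property_disabled(fw, "needs-l1d-flush-b"))    /* hypothetical name */
                  type = L1D_FLUSH_NONE;

          of_node_put(fw);
          return type;
  }
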
6 years agoUBUNTU: SAUCE: rfi-flush: Add barriers to the fallback L1D flushing
Balbir Singh [Fri, 5 Jan 2018 17:25:48 +0000 (22:55 +0530)]
UBUNTU: SAUCE: rfi-flush: Add barriers to the fallback L1D flushing

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
Add a hwsync after DCBT_STOP_ALL_STREAM_IDS to order loads/stores prior
to stopping prefetch with the loads and stores that are part of the
flushing. An lwsync is needed to ensure that we don't mix the flushing
of one congruence class with another.

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: rfi-flush: Add speculation barrier before ori 30,30,0 flush
Nicholas Piggin [Fri, 5 Jan 2018 13:50:48 +0000 (19:20 +0530)]
UBUNTU: SAUCE: rfi-flush: Add speculation barrier before ori 30,30,0 flush

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
Add an ori 31,31,0 speculation barrier ahead of the ori 30,30,0 flush
type, which was found necessary to completely flush out all lines.

Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: rfi-flush: Allow HV to advertise multiple flush types
Michael Ellerman [Fri, 5 Jan 2018 13:47:42 +0000 (19:17 +0530)]
UBUNTU: SAUCE: rfi-flush: Allow HV to advertise multiple flush types

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
To enable migration between machines with different flush types
enabled, allow the hypervisor to advertise more than one flush type,
and if we see that we patch both in. On any given machine only one
will be active (due to firmware configuration), but a kernel will be
able to migrate between machines with different flush instructions
enabled without modification.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: rfi-flush: Support more than one flush type at once
Michael Ellerman [Fri, 5 Jan 2018 13:47:17 +0000 (19:17 +0530)]
UBUNTU: SAUCE: rfi-flush: Support more than one flush type at once

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: rfi-flush: Expand the RFI section to two nop slots
Michael Ellerman [Fri, 5 Jan 2018 13:46:58 +0000 (19:16 +0530)]
UBUNTU: SAUCE: rfi-flush: Expand the RFI section to two nop slots

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: rfi-flush: Push the instruction selection down to the patching routine
Michael Ellerman [Fri, 5 Jan 2018 13:43:57 +0000 (19:13 +0530)]
UBUNTU: SAUCE: rfi-flush: Push the instruction selection down to the patching routine

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: rfi-flush: Make l1d_flush_type bit flags
Michael Ellerman [Fri, 5 Jan 2018 13:21:41 +0000 (18:51 +0530)]
UBUNTU: SAUCE: rfi-flush: Make l1d_flush_type bit flags

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
So we can select more than one.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
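
A hedged sketch of what a bit-flag version of the enum could look like (the values are illustrative):

  enum l1d_flush_type {
          L1D_FLUSH_NONE          = 0x1,
          L1D_FLUSH_FALLBACK      = 0x2,
          L1D_FLUSH_ORI           = 0x4,
          L1D_FLUSH_MTTRIG        = 0x8,
  };

  /* Callers can then request several at once, e.g. (hypothetical caller):
   *   setup_rfi_flush(L1D_FLUSH_ORI | L1D_FLUSH_MTTRIG, enable);
   */
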
6 years agoUBUNTU: SAUCE: rfi-flush: Implement congruence-first fallback flush
Nicholas Piggin [Fri, 5 Jan 2018 12:28:06 +0000 (17:58 +0530)]
UBUNTU: SAUCE: rfi-flush: Implement congruence-first fallback flush

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
This patch changes the fallback flush to load all ways of a set,
then move to the next set. This is the best way to flush the cache,
according to HW people.

Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: KVM: Revert the implementation of H_GET_CPU_CHARACTERISTICS
Michael Ellerman [Fri, 5 Jan 2018 12:27:24 +0000 (17:57 +0530)]
UBUNTU: SAUCE: KVM: Revert the implementation of H_GET_CPU_CHARACTERISTICS

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
After discussions this needs to be in Qemu, to deal with migration and
other complications.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: rfi-flush: kvmppc_skip_(H)interrupt returns to host kernel
Michael Ellerman [Fri, 5 Jan 2018 12:26:52 +0000 (17:56 +0530)]
UBUNTU: SAUCE: rfi-flush: kvmppc_skip_(H)interrupt returns to host kernel

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: rfi-flush: Add HRFI_TO_UNKNOWN and use it in denorm
Michael Ellerman [Fri, 5 Jan 2018 12:25:53 +0000 (17:55 +0530)]
UBUNTU: SAUCE: rfi-flush: Add HRFI_TO_UNKNOWN and use it in denorm

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: rfi-flush: Make DEBUG_RFI a CONFIG option
Michael Ellerman [Fri, 5 Jan 2018 12:23:23 +0000 (17:53 +0530)]
UBUNTU: SAUCE: rfi-flush: Make DEBUG_RFI a CONFIG option

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: powerpc: Secure memory rfi flush
Ananth N Mavinakayanahalli [Fri, 5 Jan 2018 04:20:56 +0000 (15:20 +1100)]
UBUNTU: SAUCE: powerpc: Secure memory rfi flush

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742772
This puts a nop before each rfid/hrfid and patches in an L1-D
cache flush instruction where possible.

It provides /sys/devices/system/cpu/secure_memory_protection, which can
report the current state and patch the rfi flushes at runtime.

This has some debug checking in the rfi instructions to make sure
we're returning to the context we think we are, so we can avoid
some flushes.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agos390: add ppa to kernel entry / exit
Martin Schwidefsky [Thu, 21 Dec 2017 08:17:59 +0000 (09:17 +0100)]
s390: add ppa to kernel entry / exit

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742771
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agos390: introduce CPU alternatives
Vasily Gorbik [Tue, 2 Jan 2018 10:26:25 +0000 (10:26 +0000)]
s390: introduce CPU alternatives

CVE-2017-5754

BugLink: http://bugs.launchpad.net/bugs/1742771
Implement CPU alternatives, which allow newer instructions to be
optionally patched in at runtime, based on CPU facility availability.

A new kernel boot parameter "noaltinstr" disables patching.

The current implementation is derived from the x86 alternatives,
although ideal instruction padding (when altinstr is longer than
oldinstr) is added at compile time, so no oldinstr nop optimization has
to be done at runtime. A couple of compile-time sanity checks are also done:
1. oldinstr and altinstr must be <= 254 bytes long,
2. oldinstr and altinstr must not have an odd length.

alternative(oldinstr, altinstr, facility);
alternative_2(oldinstr, altinstr1, facility1, altinstr2, facility2);

Both compile-time and runtime padding consist of either 6/4/2 byte nops
or a jump (brcl) plus a 2-byte nop filler if the padding is longer than 6 bytes.

.altinstructions and .altinstr_replacement sections are part of
__init_begin : __init_end region and are freed after initialization.

Signed-off-by: Vasily Gorbik <gor@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agos390/spinlock: add gmb memory barrier
Martin Schwidefsky [Mon, 18 Dec 2017 06:58:11 +0000 (07:58 +0100)]
s390/spinlock: add gmb memory barrier

CVE-2017-5753
CVE-2017-5715

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agopowerpc: add gmb barrier
Andy Whitcroft [Wed, 20 Dec 2017 12:12:08 +0000 (12:12 +0000)]
powerpc: add gmb barrier

CVE-2017-5753
CVE-2017-5715

Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/cpu/AMD: Remove now unused definition of MFENCE_RDTSC feature
Tom Lendacky [Wed, 20 Dec 2017 10:55:48 +0000 (10:55 +0000)]
x86/cpu/AMD: Remove now unused definition of MFENCE_RDTSC feature

CVE-2017-5753
CVE-2017-5715

With the switch to using LFENCE_RDTSC on AMD platforms there is no longer
a need for the MFENCE_RDTSC feature.  Remove its usage and definition.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/svm: Add code to clear registers on VM exit
Tom Lendacky [Wed, 20 Dec 2017 10:55:47 +0000 (10:55 +0000)]
x86/svm: Add code to clear registers on VM exit

CVE-2017-5753
CVE-2017-5715

Clear registers on VM exit to prevent speculative use of them.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/svm: Add code to clobber the RSB on VM exit
Tom Lendacky [Wed, 20 Dec 2017 10:55:47 +0000 (10:55 +0000)]
x86/svm: Add code to clobber the RSB on VM exit

CVE-2017-5753
CVE-2017-5715

Add code to overwrite the local CPU RSB entries from the previous less
privileged mode.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoKVM: x86: Add speculative control CPUID support for guests
Tom Lendacky [Wed, 20 Dec 2017 10:55:47 +0000 (10:55 +0000)]
KVM: x86: Add speculative control CPUID support for guests

CVE-2017-5753
CVE-2017-5715

Provide the guest with the speculative control CPUID related values.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/svm: Set IBPB when running a different VCPU
Tom Lendacky [Wed, 20 Dec 2017 10:55:47 +0000 (10:55 +0000)]
x86/svm: Set IBPB when running a different VCPU

CVE-2017-5753
CVE-2017-5715

Set IBPB (Indirect Branch Prediction Barrier) when the current CPU is
going to run a VCPU different from what was previously run.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
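
In MSR terms this is an IBPB write when the physical CPU last ran a different vCPU; a hedged C sketch, with the per-CPU bookkeeping and macro names invented for illustration:

  static DEFINE_PER_CPU(struct kvm_vcpu *, last_vcpu);   /* illustrative only */

  static void maybe_ibpb_on_vcpu_switch(struct kvm_vcpu *vcpu, int cpu)
  {
          if (per_cpu(last_vcpu, cpu) != vcpu) {
                  wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);  /* prediction barrier */
                  per_cpu(last_vcpu, cpu) = vcpu;
          }
  }
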
6 years agox86/svm: Set IBRS value on VM entry and exit
Tom Lendacky [Wed, 20 Dec 2017 10:55:47 +0000 (10:55 +0000)]
x86/svm: Set IBRS value on VM entry and exit

CVE-2017-5753
CVE-2017-5715

Set/restore the guest's IBRS value on VM entry. On VM exit back to the
kernel, save the guest IBRS value and then set IBRS to 1.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoKVM: SVM: Do not intercept new speculative control MSRs
Tom Lendacky [Wed, 20 Dec 2017 10:55:47 +0000 (10:55 +0000)]
KVM: SVM: Do not intercept new speculative control MSRs

CVE-2017-5753
CVE-2017-5715

Allow guest access to the speculative control MSRs without being
intercepted.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/microcode: Extend post microcode reload to support IBPB feature
Tom Lendacky [Wed, 20 Dec 2017 10:55:47 +0000 (10:55 +0000)]
x86/microcode: Extend post microcode reload to support IBPB feature

CVE-2017-5753
CVE-2017-5715

Add an IBPB feature check to the speculative control update check after
a microcode reload.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/cpu/AMD: Add speculative control support for AMD
Tom Lendacky [Wed, 20 Dec 2017 10:52:54 +0000 (10:52 +0000)]
x86/cpu/AMD: Add speculative control support for AMD

CVE-2017-5753
CVE-2017-5715

Add speculative control support for AMD processors. For AMD, speculative
control is indicated as follows:

  CPUID EAX=0x00000007, ECX=0x00: returned EDX[26] indicates support for
  both IBRS and IBPB.

  CPUID EAX=0x80000008, ECX=0x00: returned EBX[12] indicates support for
  just IBPB.

On AMD family 0x10, 0x12 and 0x16 processors where either of the above
features are not supported, IBPB can be achieved by disabling
indirect branch predictor support in MSR 0xc0011021[14] at boot.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
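
The detection boils down to the two CPUID checks quoted above; a hedged sketch, with the feature-flag names assumed (the boot-time MSR workaround for older families is not shown):

  static void amd_detect_spec_ctrl(struct cpuinfo_x86 *c)
  {
          if (cpuid_edx(0x00000007) & BIT(26))            /* IBRS and IBPB     */
                  set_cpu_cap(c, X86_FEATURE_SPEC_CTRL);  /* flag name assumed */
          else if (cpuid_ebx(0x80000008) & BIT(12))       /* IBPB only         */
                  set_cpu_cap(c, X86_FEATURE_IBPB);       /* flag name assumed */
  }
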
6 years agox86/entry: Use retpoline for syscall's indirect calls
Tim Chen [Thu, 9 Nov 2017 00:30:06 +0000 (16:30 -0800)]
x86/entry: Use retpoline for syscall's indirect calls

CVE-2017-5753
CVE-2017-5715

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/syscall: Clear unused extra registers on 32-bit compatible syscall entrance
Tim Chen [Sat, 16 Sep 2017 02:41:24 +0000 (19:41 -0700)]
x86/syscall: Clear unused extra registers on 32-bit compatible syscall entrance

CVE-2017-5753
CVE-2017-5715

To prevent the unused registers %r8-%r15 from being used speculatively,
we clear them upon syscall entrance for code hygiene in 32-bit compatible
mode.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/syscall: Clear unused extra registers on syscall entrance
Tim Chen [Tue, 19 Sep 2017 22:21:40 +0000 (15:21 -0700)]
x86/syscall: Clear unused extra registers on syscall entrance

CVE-2017-5753
CVE-2017-5715

To prevent the unused registers %r12-%r15, %rbp and %rbx from
being used speculatively, we clear them upon syscall entrance
for code hygiene.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/spec_ctrl: Add lock to serialize changes to ibrs and ibpb control
Tim Chen [Mon, 20 Nov 2017 21:47:54 +0000 (13:47 -0800)]
x86/spec_ctrl: Add lock to serialize changes to ibrs and ibpb control

CVE-2017-5753
CVE-2017-5715

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature
Tim Chen [Thu, 16 Nov 2017 12:47:48 +0000 (04:47 -0800)]
x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

CVE-2017-5753
CVE-2017-5715

There are 2 ways to control IBPB and IBRS

1. At boot time
noibrs kernel boot parameter will disable IBRS usage
noibpb kernel boot parameter will disable IBPB usage
Otherwise, the system will enable IBRS and IBPB usage if the CPU
supports them.

2. At run time
echo 0 > /proc/sys/kernel/ibrs_enabled will turn off IBRS
echo 1 > /proc/sys/kernel/ibrs_enabled will turn on IBRS in kernel
echo 2 > /proc/sys/kernel/ibrs_enabled will turn on IBRS in both userspace and kernel

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
[marcelo.cerri@canonical.com: add x86 guards to kernel/smp.c]
[marcelo.cerri@canonical.com: include asm/msr.h under x86 guard in kernel/sysctl.c]
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
6 years agox86/kvm: Pad RSB on VM transition
Tim Chen [Sat, 21 Oct 2017 00:05:54 +0000 (17:05 -0700)]
x86/kvm: Pad RSB on VM transition

CVE-2017-5753
CVE-2017-5715

Add code to pad the local CPU's RSB entries to protect against the
previous, less privileged mode.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/kvm: Toggle IBRS on VM entry and exit
Tim Chen [Sat, 21 Oct 2017 00:04:35 +0000 (17:04 -0700)]
x86/kvm: Toggle IBRS on VM entry and exit

CVE-2017-5753
CVE-2017-5715

Restore guest IBRS on VM entry and set it to 1 on VM exit
back to kernel.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/kvm: Set IBPB when switching VM
Tim Chen [Fri, 13 Oct 2017 21:31:46 +0000 (14:31 -0700)]
x86/kvm: Set IBPB when switching VM

CVE-2017-5753
CVE-2017-5715

Set IBPB (Indirect branch prediction barrier) when switching VM.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/kvm: add MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD to kvm
Wei Wang [Tue, 7 Nov 2017 08:47:53 +0000 (16:47 +0800)]
x86/kvm: add MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD to kvm

CVE-2017-5753
CVE-2017-5715

Add a field to access guest MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD state.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/entry: Stuff RSB for entry to kernel for non-SMEP platform
Tim Chen [Wed, 15 Nov 2017 01:16:30 +0000 (17:16 -0800)]
x86/entry: Stuff RSB for entry to kernel for non-SMEP platform

CVE-2017-5753
CVE-2017-5715

Stuff RSB to prevent RSB underflow on non-SMEP platform.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/mm: Only set IBPB when the new thread cannot ptrace current thread
Tim Chen [Tue, 7 Nov 2017 21:52:42 +0000 (13:52 -0800)]
x86/mm: Only set IBPB when the new thread cannot ptrace current thread

CVE-2017-5753
CVE-2017-5715

To reduce the overhead of setting IBPB, we only do that when
the new thread cannot ptrace the current one.  If the new
thread has ptrace capability on the current thread, it is safe.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
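
A hedged sketch of the check; the predicate below is a hypothetical stand-in for whatever access test the patch actually performs:

  static void maybe_ibpb_on_switch(struct task_struct *prev,
                                   struct task_struct *next)
  {
          /* If the incoming task may ptrace the outgoing one it can already
           * read that memory legitimately, so skip the costly barrier. */
          if (!new_can_ptrace_old(next, prev))    /* hypothetical helper */
                  wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
  }
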
6 years agox86/mm: Set IBPB upon context switch
Tim Chen [Fri, 20 Oct 2017 19:56:29 +0000 (12:56 -0700)]
x86/mm: Set IBPB upon context switch

CVE-2017-5753
CVE-2017-5715

Set IBPB on context switch with changing of page table.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/idle: Disable IBRS when offlining cpu and re-enable on wakeup
Tim Chen [Wed, 15 Nov 2017 20:24:19 +0000 (12:24 -0800)]
x86/idle: Disable IBRS when offlining cpu and re-enable on wakeup

CVE-2017-5753
CVE-2017-5715

Clear IBRS when a cpu is offlined and set it when bringing it back online.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/idle: Disable IBRS entering idle and enable it on wakeup
Tim Chen [Tue, 7 Nov 2017 02:19:14 +0000 (18:19 -0800)]
x86/idle: Disable IBRS entering idle and enable it on wakeup

CVE-2017-5753
CVE-2017-5715

Clear IBRS on idle entry and set it on idle exit back into the kernel on mwait.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
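
In MSR terms the idle path is a pair of writes around the mwait; a hedged sketch, with the SPEC_CTRL bit macro name assumed:

  static void mwait_idle_ibrs(unsigned long eax, unsigned long ecx)
  {
          wrmsrl(MSR_IA32_SPEC_CTRL, 0);               /* clear IBRS on idle entry */
          mwait_idle_with_hints(eax, ecx);
          wrmsrl(MSR_IA32_SPEC_CTRL, SPEC_CTRL_IBRS);  /* set it again on wakeup   */
  }
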
6 years agox86/enter: Use IBRS on syscall and interrupts
Tim Chen [Fri, 13 Oct 2017 21:25:00 +0000 (14:25 -0700)]
x86/enter: Use IBRS on syscall and interrupts

CVE-2017-5753
CVE-2017-5715

Set IBRS upon kernel entrance via syscall and interrupts. Clear it upon exit.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/enter: MACROS to set/clear IBRS and set IBPB
Tim Chen [Sat, 16 Sep 2017 01:04:53 +0000 (18:04 -0700)]
x86/enter: MACROS to set/clear IBRS and set IBPB

CVE-2017-5753
CVE-2017-5715

Set up macros to control IBRS and IBPB.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/feature: Report presence of IBPB and IBRS control
Tim Chen [Wed, 27 Sep 2017 19:09:14 +0000 (12:09 -0700)]
x86/feature: Report presence of IBPB and IBRS control

CVE-2017-5753
CVE-2017-5715

Report presence of IBPB and IBRS.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/feature: Enable the x86 feature to control Speculation
Tim Chen [Thu, 24 Aug 2017 16:34:41 +0000 (09:34 -0700)]
x86/feature: Enable the x86 feature to control Speculation

CVE-2017-5753
CVE-2017-5715

CPUID EAX=0x7: returned RDX bit 26 indicates the presence of this feature:
IA32_SPEC_CTRL (0x48) and IA32_PRED_CMD (0x49)
IA32_SPEC_CTRL, bit 0 – Indirect Branch Restricted Speculation (IBRS)
IA32_PRED_CMD,  bit 0 – Indirect Branch Prediction Barrier (IBPB)

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
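
The architectural numbers quoted above, written out as defines (the macro names are assumptions; the values come from the text):

  #define MSR_IA32_SPEC_CTRL      0x00000048
  #define SPEC_CTRL_IBRS          (1 << 0)   /* Indirect Branch Restricted Speculation */

  #define MSR_IA32_PRED_CMD       0x00000049
  #define PRED_CMD_IBPB           (1 << 0)   /* Indirect Branch Prediction Barrier */

  /* Presence is enumerated by CPUID.(EAX=7,ECX=0):EDX[26]. */
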
6 years agoudf: prevent speculative execution
Elena Reshetova [Mon, 4 Sep 2017 10:11:56 +0000 (13:11 +0300)]
udf: prevent speculative execution

CVE-2017-5753
CVE-2017-5715

Real commit text tbd

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agonet: mpls: prevent speculative execution
Elena Reshetova [Mon, 4 Sep 2017 10:11:55 +0000 (13:11 +0300)]
net: mpls: prevent speculative execution

CVE-2017-5753
CVE-2017-5715

Real commit text tbd

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agofs: prevent speculative execution
Elena Reshetova [Mon, 4 Sep 2017 10:11:54 +0000 (13:11 +0300)]
fs: prevent speculative execution

CVE-2017-5753
CVE-2017-5715

Real commit text tbd

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoipv6: prevent speculative execution
Elena Reshetova [Mon, 4 Sep 2017 10:11:53 +0000 (13:11 +0300)]
ipv6: prevent speculative execution

CVE-2017-5753
CVE-2017-5715

Real commit text tbd

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agouserns: prevent speculative execution
Elena Reshetova [Mon, 4 Sep 2017 10:11:52 +0000 (13:11 +0300)]
userns: prevent speculative execution

CVE-2017-5753
CVE-2017-5715

Real commit text tbd

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoThermal/int340x: prevent speculative execution
Elena Reshetova [Mon, 4 Sep 2017 10:11:51 +0000 (13:11 +0300)]
Thermal/int340x: prevent speculative execution

CVE-2017-5753
CVE-2017-5715

Real commit text tbd

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agocw1200: prevent speculative execution
Elena Reshetova [Mon, 4 Sep 2017 10:11:50 +0000 (13:11 +0300)]
cw1200: prevent speculative execution

CVE-2017-5753
CVE-2017-5715

Real commit text tbd

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoqla2xxx: prevent speculative execution
Elena Reshetova [Mon, 4 Sep 2017 10:11:49 +0000 (13:11 +0300)]
qla2xxx: prevent speculative execution

CVE-2017-5753
CVE-2017-5715

Real commit text tbd

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agop54: prevent speculative execution
Elena Reshetova [Mon, 4 Sep 2017 10:11:48 +0000 (13:11 +0300)]
p54: prevent speculative execution

CVE-2017-5753
CVE-2017-5715

Real commit text tbd

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agocarl9170: prevent speculative execution
Elena Reshetova [Mon, 4 Sep 2017 10:11:47 +0000 (13:11 +0300)]
carl9170: prevent speculative execution

CVE-2017-5753
CVE-2017-5715

Real commit text tbd

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agouvcvideo: prevent speculative execution
Elena Reshetova [Mon, 4 Sep 2017 10:11:46 +0000 (13:11 +0300)]
uvcvideo: prevent speculative execution

CVE-2017-5753
CVE-2017-5715

real commit text tbd

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86, bpf, jit: prevent speculative execution when JIT is enabled
Elena Reshetova [Mon, 4 Sep 2017 10:11:45 +0000 (13:11 +0300)]
x86, bpf, jit: prevent speculative execution when JIT is enabled

CVE-2017-5753
CVE-2017-5715

When constant blinding is enabled (bpf_jit_harden = 1), this adds
a generic memory barrier (lfence for Intel, mfence for AMD) before
emitting x86 jitted code for the BPF_ALU(64)_OR_X and BPF_ALU_LHS_X
(for BPF_REG_AX register) eBPF instructions. This is needed in order
to prevent speculative execution on out-of-bounds BPF_MAP array
indexes when JIT is enabled. This way arbitrary kernel memory is
not exposed through side-channel attacks.

For more details, please see this Google Project Zero report: tbd

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agobpf: prevent speculative execution in eBPF interpreter
Elena Reshetova [Mon, 4 Sep 2017 10:11:44 +0000 (13:11 +0300)]
bpf: prevent speculative execution in eBPF interpreter

CVE-2017-5753
CVE-2017-5715

This adds a generic memory barrier before LD_IMM_DW and
LDX_MEM_B/H/W/DW eBPF instructions during eBPF program
execution in order to prevent speculative execution on out
of bounds BPF_MAP array indexes. This way arbitrary kernel
memory is not exposed through side-channel attacks.

For more details, please see this Google Project Zero report: tbd

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
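
A hedged, simplified illustration of the idea (not the interpreter's actual macros): the barrier sits between the bounds check and the dependent load, using the gmb() barrier introduced elsewhere in this series.

  static u32 bounded_load(const u32 *array, u32 nelems, u32 idx)
  {
          if (idx >= nelems)
                  return 0;
          gmb();              /* stop speculation past the bounds check  */
          return array[idx];  /* the load can no longer see an OOB index */
  }
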
6 years agolocking/barriers: introduce new memory barrier gmb()
Elena Reshetova [Mon, 4 Sep 2017 10:11:43 +0000 (13:11 +0300)]
locking/barriers: introduce new memory barrier gmb()

CVE-2017-5753
CVE-2017-5715

In contrast to the existing mb() and rmb() barriers,
the gmb() barrier is arch-independent and can be used to
implement any type of memory barrier.
In the x86 case, it is either lfence or mfence, based on
processor type. ARM and others can define it according
to their needs.

Suggested-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
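
A hedged sketch of how the generic and x86 definitions could look; in the real tree these would live in separate headers, and the x86 feature flag used for the lfence/mfence choice is an assumption:

  #ifdef CONFIG_X86
  /* lfence where it is serializing, mfence otherwise; flag name assumed. */
  # define gmb()  alternative("mfence", "lfence", X86_FEATURE_LFENCE_RDTSC)
  #else
  # define gmb()  mb()    /* conservative arch-independent default */
  #endif
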
6 years agox86/pti: Make unpoison of pgd for trusted boot work for real
Dave Hansen [Wed, 10 Jan 2018 22:49:39 +0000 (14:49 -0800)]
x86/pti: Make unpoison of pgd for trusted boot work for real

CVE-2017-5754

The initial fix for trusted boot and PTI potentially misses the pgd clearing
if pud_alloc() sets a PGD.  It probably works in *practice* because for two
adjacent calls to map_tboot_page() that share a PGD entry, the first will
clear NX, *then* allocate and set the PGD (without NX clear).  The second
call will *not* allocate but will clear the NX bit.

Defer the NX clearing to a point after it is known that all top-level
allocations have occurred.  Add a comment to clarify why.

[ tglx: Massaged changelog ]

Fixes: 262b6b30087 ("x86/tboot: Unbreak tboot with PTI enabled")
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: "Tim Chen" <tim.c.chen@linux.intel.com>
Cc: gnomes@lxorguk.ukuu.org.uk
Cc: peterz@infradead.org
Cc: ning.sun@intel.com
Cc: tboot-devel@lists.sourceforge.net
Cc: andi@firstfloor.org
Cc: luto@kernel.org
Cc: law@redhat.com
Cc: pbonzini@redhat.com
Cc: torvalds@linux-foundation.org
Cc: gregkh@linux-foundation.org
Cc: dwmw@amazon.co.uk
Cc: nickc@redhat.com
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20180110224939.2695CD47@viggo.jf.intel.com
(cherry picked from commit 8a931d1e24bacf01f00a35d43bfe7917256c5c49)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/alternatives: Fix optimize_nops() checking
Borislav Petkov [Wed, 10 Jan 2018 11:28:16 +0000 (12:28 +0100)]
x86/alternatives: Fix optimize_nops() checking

CVE-2017-5754

The alternatives code checks only whether the first byte is a NOP, but
with NOPs in front of the payload and actual instructions after them
the "optimized" test breaks.

Make sure to scan all bytes before deciding to optimize the NOPs in there.

Reported-by: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Andrew Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
Cc: Paul Turner <pjt@google.com>
Link: https://lkml.kernel.org/r/20180110112815.mgciyf5acwacphkq@pd.tnic
(cherry picked from commit 612e8e9350fd19cae6900cf36ea0c6892d1a0dca)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
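
The fix amounts to checking every byte of the region instead of only the first one; a hedged sketch:

  /* Refuse to "optimize" unless every byte in the range really is a NOP. */
  static bool all_bytes_are_nops(const u8 *instr, unsigned int len)
  {
          unsigned int i;

          for (i = 0; i < len; i++)
                  if (instr[i] != 0x90)   /* 0x90 is the 1-byte NOP opcode */
                          return false;
          return true;
  }
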
6 years agosysfs/cpu: Fix typos in vulnerability documentation
David Woodhouse [Tue, 9 Jan 2018 15:02:51 +0000 (15:02 +0000)]
sysfs/cpu: Fix typos in vulnerability documentation

CVE-2017-5754

Fixes: 87590ce6e ("sysfs/cpu: Add vulnerability folder")
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
(cherry picked from commit 9ecccfaa7cb5249bd31bdceb93fcf5bedb8a24d8)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/cpu/AMD: Use LFENCE_RDTSC in preference to MFENCE_RDTSC
Tom Lendacky [Mon, 8 Jan 2018 22:09:32 +0000 (16:09 -0600)]
x86/cpu/AMD: Use LFENCE_RDTSC in preference to MFENCE_RDTSC

CVE-2017-5754

With LFENCE now a serializing instruction, use LFENCE_RDTSC in preference
to MFENCE_RDTSC.  However, since the kernel could be running under a
hypervisor that does not support writing that MSR, read the MSR back and
verify that the bit has been set successfully.  If the MSR can be read
and the bit is set, then set the LFENCE_RDTSC feature, otherwise set the
MFENCE_RDTSC feature.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Paul Turner <pjt@google.com>
Link: https://lkml.kernel.org/r/20180108220932.12580.52458.stgit@tlendack-t1.amdoffice.net
(cherry picked from commit 9c6a73c75864ad9fa49e5fa6513e4c4071c0e29f)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
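
A hedged sketch of the set-then-verify sequence described above; the MSR and feature macro names are assumptions:

  static void amd_make_lfence_serializing(struct cpuinfo_x86 *c)
  {
          u64 val;

          msr_set_bit(MSR_F10H_DECFG, MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT);

          if (!rdmsrl_safe(MSR_F10H_DECFG, &val) &&
              (val & MSR_F10H_DECFG_LFENCE_SERIALIZE))
                  set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC); /* lfence is enough   */
          else
                  set_cpu_cap(c, X86_FEATURE_MFENCE_RDTSC); /* fall back to mfence */
  }
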
6 years agox86/cpu/AMD: Make LFENCE a serializing instruction
Tom Lendacky [Mon, 8 Jan 2018 22:09:21 +0000 (16:09 -0600)]
x86/cpu/AMD: Make LFENCE a serializing instruction

CVE-2017-5754

To aid in speculation control, make LFENCE a serializing instruction
since it has less overhead than MFENCE.  This is done by setting bit 1
of MSR 0xc0011029 (DE_CFG).  Some families that support LFENCE do not
have this MSR.  For these families, the LFENCE instruction is already
serializing.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Paul Turner <pjt@google.com>
Link: https://lkml.kernel.org/r/20180108220921.12580.71694.stgit@tlendack-t1.amdoffice.net
(cherry picked from commit e4d0e84e490790798691aaa0f2e598637f1867ec)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/mm/pti: Remove dead logic in pti_user_pagetable_walk*()
Jike Song [Mon, 8 Jan 2018 16:03:41 +0000 (00:03 +0800)]
x86/mm/pti: Remove dead logic in pti_user_pagetable_walk*()

CVE-2017-5754

The following code contains dead logic:

 162 if (pgd_none(*pgd)) {
 163         unsigned long new_p4d_page = __get_free_page(gfp);
 164         if (!new_p4d_page)
 165                 return NULL;
 166
 167         if (pgd_none(*pgd)) {
 168                 set_pgd(pgd, __pgd(_KERNPG_TABLE | __pa(new_p4d_page)));
 169                 new_p4d_page = 0;
 170         }
 171         if (new_p4d_page)
 172                 free_page(new_p4d_page);
 173 }

There can't be any difference between two pgd_none(*pgd) at L162 and L167,
so it's always false at L171.

Dave Hansen explained:

 Yes, the double-test was part of an optimization where we attempted to
 avoid using a global spinlock in the fork() path.  We would check for
 unallocated mid-level page tables without the lock.  The lock was only
 taken when we needed to *make* an entry to avoid collisions.

 Now that it is all single-threaded, there is no chance of a collision,
 no need for a lock, and no need for the re-check.

As all these functions are only called during init, mark them __init as
well.

Fixes: 03f4424f348e ("x86/mm/pti: Add functions to clone kernel PMDs")
Signed-off-by: Jike Song <albcamus@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Alan Cox <gnomes@lxorguk.ukuu.org.uk>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Jiri Koshina <jikos@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Kees Cook <keescook@google.com>
Cc: Andi Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Greg KH <gregkh@linux-foundation.org>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Paul Turner <pjt@google.com>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20180108160341.3461-1-albcamus@gmail.com
(cherry picked from commit 8d56eff266f3e41a6c39926269c4c3f58f881a8e)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/tboot: Unbreak tboot with PTI enabled
Dave Hansen [Sat, 6 Jan 2018 17:41:14 +0000 (18:41 +0100)]
x86/tboot: Unbreak tboot with PTI enabled

CVE-2017-5754

This is another case similar to what EFI does: create a new set of
page tables, map some code at a low address, and jump to it.  PTI
mistakes this low address for userspace and mistakenly marks it
non-executable in an effort to make it unusable for userspace.

Undo the poison to allow execution.

Fixes: 385ce0ea4c07 ("x86/mm/pti: Add Kconfig")
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Alan Cox <gnomes@lxorguk.ukuu.org.uk>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Jeff Law <law@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Nick Clifton <nickc@redhat.com>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20180108102805.GK25546@redhat.com
(cherry picked from commit 262b6b30087246abf09d6275eb0c0dc421bcbe38)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/cpu: Implement CPU vulnerabilities sysfs functions
Thomas Gleixner [Sun, 7 Jan 2018 21:48:01 +0000 (22:48 +0100)]
x86/cpu: Implement CPU vulnerabilities sysfs functions

CVE-2017-5754

Implement the CPU vulnerability show functions for meltdown, spectre_v1 and
spectre_v2.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Linus Torvalds <torvalds@linuxfoundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Link: https://lkml.kernel.org/r/20180107214913.177414879@linutronix.de
(cherry picked from commit 61dc0f555b5c761cdafb0ba5bd41ecf22d68a4c4)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agosysfs/cpu: Add vulnerability folder
Thomas Gleixner [Sun, 7 Jan 2018 21:48:00 +0000 (22:48 +0100)]
sysfs/cpu: Add vulnerability folder

CVE-2017-5754

As the meltdown/spectre problem affects several CPU architectures, it makes
sense to have a common way to express whether a system is affected by a
particular vulnerability or not. If affected, the way to express the
mitigation should be common as well.

Create /sys/devices/system/cpu/vulnerabilities folder and files for
meltdown, spectre_v1 and spectre_v2.

Allow architectures to override the show function.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Linus Torvalds <torvalds@linuxfoundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Link: https://lkml.kernel.org/r/20180107214913.096657732@linutronix.de
(cherry picked from commit 87590ce6e373d1a5401f6539f0c59ef92dd924a9)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
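
The common files are typically backed by weak default show functions that an architecture can override; a hedged sketch for the meltdown file:

  ssize_t __weak cpu_show_meltdown(struct device *dev,
                                   struct device_attribute *attr, char *buf)
  {
          return sprintf(buf, "Not affected\n");
  }

  static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
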
6 years agox86/cpufeatures: Add X86_BUG_SPECTRE_V[12]
David Woodhouse [Sat, 6 Jan 2018 11:49:23 +0000 (11:49 +0000)]
x86/cpufeatures: Add X86_BUG_SPECTRE_V[12]

CVE-2017-5754

Add the bug bits for spectre v1/2 and force them unconditionally for all
cpus.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: gnomes@lxorguk.ukuu.org.uk
Cc: Rik van Riel <riel@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Kees Cook <keescook@google.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
Cc: Paul Turner <pjt@google.com>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/1515239374-23361-2-git-send-email-dwmw@amazon.co.uk
(cherry picked from commit 99c6fa2511d8a683e61468be91b83f85452115fa)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/Documentation: Add PTI description
Dave Hansen [Fri, 5 Jan 2018 17:44:36 +0000 (09:44 -0800)]
x86/Documentation: Add PTI description

CVE-2017-5754

Add some details about how PTI works, what some of the downsides
are, and how to debug it when things go wrong.

Also document the kernel parameter: 'pti/nopti'.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Andi Lutomirsky <luto@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20180105174436.1BC6FA2B@viggo.jf.intel.com
(cherry picked from commit 01c9b17bf673b05bb401b76ec763e9730ccf1376)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86/pti: Unbreak EFI old_memmap
Jiri Kosina [Fri, 5 Jan 2018 21:35:41 +0000 (22:35 +0100)]
x86/pti: Unbreak EFI old_memmap

CVE-2017-5754

EFI_OLD_MEMMAP's efi_call_phys_prolog() calls set_pgd() with a swapper PGD
that has PAGE_USER set, which makes PTI set NX on it, and therefore EFI
can't execute its code.

Fix that by forcefully clearing _PAGE_NX from the PGD (this can't be done
by the pgprot API).

_PAGE_NX will be automatically reintroduced in efi_call_phys_epilog(), as
_set_pgd() will again notice that this is _PAGE_USER, and set _PAGE_NX on
it.

Tested-by: Dimitri Sivanich <sivanich@hpe.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-efi@vger.kernel.org
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/nycvar.YFH.7.76.1801052215460.11852@cbobk.fhfr.pm
(cherry picked from commit de53c3786a3ce162a1c815d0c04c766c23ec9c0a)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  x86/pti: Rename BUG_CPU_INSECURE to BUG_CPU_MELTDOWN
Thomas Gleixner [Fri, 5 Jan 2018 14:27:34 +0000 (15:27 +0100)]
x86/pti: Rename BUG_CPU_INSECURE to BUG_CPU_MELTDOWN

CVE-2017-5754

Use the name associated with the particular attack which needs page table
isolation for mitigation.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: David Woodhouse <dwmw@amazon.co.uk>
Cc: Alan Cox <gnomes@lxorguk.ukuu.org.uk>
Cc: Jiri Koshina <jikos@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andi Lutomirski <luto@amacapital.net>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Turner <pjt@google.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Greg KH <gregkh@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Kees Cook <keescook@google.com>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801051525300.1724@nanos
(cherry picked from commit de791821c295cc61419a06fe5562288417d1bc58)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  x86/alternatives: Add missing '\n' at end of ALTERNATIVE inline asm
David Woodhouse [Thu, 4 Jan 2018 14:37:05 +0000 (14:37 +0000)]
x86/alternatives: Add missing '\n' at end of ALTERNATIVE inline asm

CVE-2017-5754

Where an ALTERNATIVE is used in the middle of an inline asm block, this
would otherwise lead to the following instruction being appended directly
to the trailing ".popsection", and a failed compile.
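
A sketch of the failure mode (the feature flag below is hypothetical, used
only to illustrate the concatenation):

  /* With no trailing newline in the expansion, a use such as */
  asm volatile(ALTERNATIVE("", "lfence", X86_FEATURE_HYPOTHETICAL)
               "movq %%rax, %0" : "=m" (out));
  /*
   * glues the caller's "movq" onto the macro's final ".popsection"
   * directive, producing ".popsection movq ..." on one line and a failed
   * assembly.  The fix terminates the ALTERNATIVE() expansion with
   * ".popsection\n" so the next instruction starts on its own line.
   */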

Fixes: 9cebed423c84 ("x86, alternative: Use .pushsection/.popsection")
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: gnomes@lxorguk.ukuu.org.uk
Cc: Rik van Riel <riel@redhat.com>
Cc: ak@linux.intel.com
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Turner <pjt@google.com>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Kees Cook <keescook@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20180104143710.8961-8-dwmw@amazon.co.uk
(cherry picked from commit b9e705ef7cfaf22db0daab91ad3cd33b0fa32eb9)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  x86/tlb: Drop the _GPL from the cpu_tlbstate export
Thomas Gleixner [Thu, 4 Jan 2018 21:19:04 +0000 (22:19 +0100)]
x86/tlb: Drop the _GPL from the cpu_tlbstate export

CVE-2017-5754

The recent changes for PTI touch cpu_tlbstate from various tlb_flush
inlines. cpu_tlbstate is exported as a GPL symbol, so this causes a
regression when building out-of-tree drivers for certain graphics cards.

Aside from that, the export has been wrong since it was introduced, as it
should have been EXPORT_PER_CPU_SYMBOL_GPL().

Use the correct PER_CPU export and drop the _GPL to restore the previous
state, which allows users to utilize the cards they paid for.

As always I'm really thrilled to make this kind of change to support the
#friends (or however the hot hashtag of today is spelled) from that closet
sauce graphics corp.
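
The resulting definition and export look roughly like this (a sketch; the
file and initializer details are assumptions):

  /* arch/x86/mm/init.c (sketch) - per-cpu data wants the per-cpu export macro */
  DEFINE_PER_CPU_SHARED_ALIGNED(struct tlb_state, cpu_tlbstate) = {
          .loaded_mm = &init_mm,
  };
  EXPORT_PER_CPU_SYMBOL(cpu_tlbstate);   /* previously a plain *_GPL export */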

Fixes: 1e02ce4cccdc ("x86: Store a per-cpu shadow copy of CR4")
Fixes: 6fd166aae78c ("x86/mm: Use/Fix PCID to optimize user/kernel switches")
Reported-by: Kees Cook <keescook@google.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: stable@vger.kernel.org
(cherry picked from commit 1e5476815fd7f98b888e01a0f9522b63085f96c9)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  x86/events/intel/ds: Use the proper cache flush method for mapping ds buffers
Peter Zijlstra [Thu, 4 Jan 2018 17:07:12 +0000 (18:07 +0100)]
x86/events/intel/ds: Use the proper cache flush method for mapping ds buffers

CVE-2017-5754

Thomas reported the following warning:

 BUG: using smp_processor_id() in preemptible [00000000] code: ovsdb-server/4498
 caller is native_flush_tlb_single+0x57/0xc0
 native_flush_tlb_single+0x57/0xc0
 __set_pte_vaddr+0x2d/0x40
 set_pte_vaddr+0x2f/0x40
 cea_set_pte+0x30/0x40
 ds_update_cea.constprop.4+0x4d/0x70
 reserve_ds_buffers+0x159/0x410
 x86_reserve_hardware+0x150/0x160
 x86_pmu_event_init+0x3e/0x1f0
 perf_try_init_event+0x69/0x80
 perf_event_alloc+0x652/0x740
 SyS_perf_event_open+0x3f6/0xd60
 do_syscall_64+0x5c/0x190

set_pte_vaddr is used to map the ds buffers into the cpu entry area, but
there are two problems with that:

 1) The resulting flush is not supposed to be called in preemptible context

 2) The cpu entry area is supposed to be per CPU, but the debug store
    buffers are mapped for all CPUs so these mappings need to be flushed
    globally.

Add the necessary preemption protection across the mapping code and flush
TLBs globally.
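
The shape of the fix, sketched (assuming the ds_update_cea() helper that maps
the buffers into the cpu_entry_area; not the applied hunk):

  /* arch/x86/events/intel/ds.c (sketch) */
  static void ds_update_cea(void *cea, void *addr, size_t size, pgprot_t prot)
  {
          unsigned long start = (unsigned long)cea;
          phys_addr_t pa = virt_to_phys(addr);
          size_t msz = 0;

          preempt_disable();              /* cea_set_pte() flushes the local TLB */
          for (; msz < size; msz += PAGE_SIZE, pa += PAGE_SIZE, cea += PAGE_SIZE)
                  cea_set_pte(cea, pa, prot);

          /* The DS buffers are mapped into every CPU's entry area, so stale
             translations must be shot down everywhere, not just locally. */
          flush_tlb_kernel_range(start, start + size);
          preempt_enable();
  }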

Fixes: c1961a4631da ("x86/events/intel/ds: Map debug buffers in cpu_entry_area")
Reported-by: Thomas Zeitlhofer <thomas.zeitlhofer+lkml@ze-it.at>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Thomas Zeitlhofer <thomas.zeitlhofer+lkml@ze-it.at>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20180104170712.GB3040@hirez.programming.kicks-ass.net
(cherry picked from commit 42f3bdc5dd962a5958bc024c1e1444248a6b8b4a)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  x86/kaslr: Fix the vaddr_end mess
Thomas Gleixner [Thu, 4 Jan 2018 11:32:03 +0000 (12:32 +0100)]
x86/kaslr: Fix the vaddr_end mess

CVE-2017-5754

vaddr_end for KASLR is only documented in the KASLR code itself and is
adjusted depending on config options. So it's not surprising that a change
of the memory layout causes KASLR to have the wrong vaddr_end. This can map
arbitrary stuff into other areas, causing hard-to-understand problems.

Remove the whole ifdef magic and define the start of the cpu_entry_area to
be the end of the KASLR vaddr range.

Add documentation to that effect.
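
The net effect, sketched (region names as used elsewhere in the PTI series;
an illustration rather than the applied hunk):

  /* arch/x86/mm/kaslr.c (sketch) */
  static const unsigned long vaddr_start = __PAGE_OFFSET_BASE;
  /* Randomization stops where the fixed cpu_entry_area mapping begins. */
  static const unsigned long vaddr_end   = CPU_ENTRY_AREA_BASE;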

Fixes: 92a0f81d8957 ("x86/cpu_entry_area: Move it out of the fixmap")
Reported-by: Benjamin Gilbert <benjamin.gilbert@coreos.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Benjamin Gilbert <benjamin.gilbert@coreos.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: stable <stable@vger.kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Garnier <thgarnie@google.com>,
Cc: Alexander Kuleshov <kuleshovmail@gmail.com>
Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801041320360.1771@nanos
(cherry picked from commit 1dddd25125112ba49706518ac9077a1026a18f37)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  x86/mm: Map cpu_entry_area at the same place on 4/5 level
Thomas Gleixner [Thu, 4 Jan 2018 12:01:40 +0000 (13:01 +0100)]
x86/mm: Map cpu_entry_area at the same place on 4/5 level

CVE-2017-5754

There is no reason for 4 and 5 level pagetables to have a different
layout. It just makes determining vaddr_end for KASLR harder than
necessary.

Fixes: 92a0f81d8957 ("x86/cpu_entry_area: Move it out of the fixmap")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Benjamin Gilbert <benjamin.gilbert@coreos.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: stable <stable@vger.kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Garnier <thgarnie@google.com>,
Cc: Alexander Kuleshov <kuleshovmail@gmail.com>
Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801041320360.1771@nanos
(cherry picked from commit f2078904810373211fb15f91888fba14c01a4acc)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  x86/mm: Set MODULES_END to 0xffffffffff000000
Andrey Ryabinin [Thu, 28 Dec 2017 16:06:20 +0000 (19:06 +0300)]
x86/mm: Set MODULES_END to 0xffffffffff000000

CVE-2017-5754

Since f06bdd4001c2 ("x86/mm: Adapt MODULES_END based on fixmap section size"),
kasan_mem_to_shadow(MODULES_END) may not be aligned to a page boundary.

So passing a page-unaligned address to kasan_populate_zero_shadow() has two
possible effects:

1) It may leave a one-page hole in an area that is supposed to be populated.
  After commit 21506525fb8d ("x86/kasan/64: Teach KASAN about the cpu_entry_area")
  that hole happens to be in the shadow covering the fixmap area and leads to a crash:

 BUG: unable to handle kernel paging request at fffffbffffe8ee04
 RIP: 0010:check_memory_region+0x5c/0x190

 Call Trace:
  <NMI>
  memcpy+0x1f/0x50
  ghes_copy_tofrom_phys+0xab/0x180
  ghes_read_estatus+0xfb/0x280
  ghes_notify_nmi+0x2b2/0x410
  nmi_handle+0x115/0x2c0
  default_do_nmi+0x57/0x110
  do_nmi+0xf8/0x150
  end_repeat_nmi+0x1a/0x1e

Note, the crash likely disappeared after commit 92a0f81d8957, which changed
the kasan_populate_zero_shadow() call back to the way it was before commit
21506525fb8d.

2) An attempt to load a module near MODULES_END will fail, because
   __vmalloc_node_range(), called from kasan_module_alloc(), will hit the
   WARN_ON(!pte_none(*pte)) in vmap_pte_range() and bail out with an error.

To fix this we need to make kasan_mem_to_shadow(MODULES_END) page aligned,
which means that MODULES_END should be 8*PAGE_SIZE aligned.

The whole point of commit f06bdd4001c2 was to move MODULES_END down when
NR_CPUS is big, because the cpu_entry_area then takes a lot of space.
But since 92a0f81d8957 ("x86/cpu_entry_area: Move it out of the fixmap")
the cpu_entry_area is no longer in the fixmap, so we can just set
MODULES_END to a fixed, 8*PAGE_SIZE-aligned address.
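
The resulting definition is roughly (the address comes from the subject line;
only the alignment comment is an addition):

  /* arch/x86/include/asm/pgtable_64_types.h (sketch) */
  /* Fixed end of the module area, 8*PAGE_SIZE aligned so that
     kasan_mem_to_shadow(MODULES_END) lands on a page boundary. */
  #define MODULES_END     _AC(0xffffffffff000000, UL)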

Fixes: f06bdd4001c2 ("x86/mm: Adapt MODULES_END based on fixmap section size")
Reported-by: Jakub Kicinski <kubakici@wp.pl>
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Thomas Garnier <thgarnie@google.com>
Link: https://lkml.kernel.org/r/20171228160620.23818-1-aryabinin@virtuozzo.com
(cherry picked from commit f5a40711fa58f1c109165a4fec6078bf2dfd2bdc)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  x86/process: Define cpu_tss_rw in same section as declaration
Nick Desaulniers [Wed, 3 Jan 2018 20:39:52 +0000 (12:39 -0800)]
x86/process: Define cpu_tss_rw in same section as declaration

CVE-2017-5754

cpu_tss_rw is declared with DECLARE_PER_CPU_PAGE_ALIGNED
but then defined with DEFINE_PER_CPU_SHARED_ALIGNED, leading to
section mismatch warnings.

Use DEFINE_PER_CPU_PAGE_ALIGNED consistently. This is necessary because
it's mapped to the cpu entry area and must be page aligned.
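
The declaration/definition pair then agrees (a sketch; the initializer is
elided):

  /* arch/x86/include/asm/processor.h */
  DECLARE_PER_CPU_PAGE_ALIGNED(struct tss_struct, cpu_tss_rw);

  /* arch/x86/kernel/process.c (sketch) - now uses the matching macro */
  __visible DEFINE_PER_CPU_PAGE_ALIGNED(struct tss_struct, cpu_tss_rw) = {
          /* initializer unchanged */
  };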

[ tglx: Massaged changelog a bit ]

Fixes: 1a935bc3d4ea ("x86/entry: Move SYSENTER_stack to the beginning of struct tss_struct")
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: thomas.lendacky@amd.com
Cc: Borislav Petkov <bpetkov@suse.de>
Cc: tklauser@distanz.ch
Cc: minipli@googlemail.com
Cc: me@kylehuey.com
Cc: namit@vmware.com
Cc: luto@kernel.org
Cc: jpoimboe@redhat.com
Cc: tj@kernel.org
Cc: cl@linux.com
Cc: bp@suse.de
Cc: thgarnie@google.com
Cc: kirill.shutemov@linux.intel.com
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20180103203954.183360-1-ndesaulniers@google.com
(cherry picked from commit 2fd9c41aea47f4ad071accf94b94f94f2c4d31eb)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  x86/dumpstack: Print registers for first stack frame
Josh Poimboeuf [Sun, 31 Dec 2017 16:18:07 +0000 (10:18 -0600)]
x86/dumpstack: Print registers for first stack frame

CVE-2017-5754

In the stack dump code, if the frame after the starting pt_regs is also
a regs frame, the registers don't get printed.  Fix that.

Reported-by: Andy Lutomirski <luto@amacapital.net>
Tested-by: Alexander Tsoy <alexander@tsoy.me>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Toralf Förster <toralf.foerster@gmx.de>
Cc: stable@vger.kernel.org
Fixes: 3b3fa11bc700 ("x86/dumpstack: Print any pt_regs found on the stack")
Link: http://lkml.kernel.org/r/396f84491d2f0ef64eda4217a2165f5712f6a115.1514736742.git.jpoimboe@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 3ffdeb1a02be3086f1411a15c5b9c481fa28e21f)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  x86/dumpstack: Fix partial register dumps
Josh Poimboeuf [Sun, 31 Dec 2017 16:18:06 +0000 (10:18 -0600)]
x86/dumpstack: Fix partial register dumps

CVE-2017-5754

The show_regs_safe() logic is wrong.  When there's an iret stack frame,
it prints the entire pt_regs -- most of which is random stack data --
instead of just the five registers at the end.

show_regs_safe() is also poorly named: the on_stack() checks aren't for
safety.  Rename the function to show_regs_if_on_stack() and add a
comment to explain why the checks are needed.

These issues were introduced with the "partial register dump" feature of
the following commit:

  b02fcf9ba121 ("x86/unwinder: Handle stack overflows more gracefully")

That patch had gone through a few iterations of development, and the
above issues were artifacts from a previous iteration of the patch where
'regs' pointed directly to the iret frame rather than to the (partially
empty) pt_regs.
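
In rough outline the renamed helper reads (a sketch; helper and constant
names follow the partial-dump support described above and are not the
applied hunk):

  /* arch/x86/kernel/dumpstack.c (sketch) */
  static void show_regs_if_on_stack(struct stack_info *info,
                                    struct pt_regs *regs, bool partial)
  {
          /*
           * The on_stack() checks are not for safety - the unwinder already
           * validated 'regs'.  They only prevent printing registers that
           * live on the next stack before that stack is reached.
           */
          if (!partial && on_stack(info, regs, sizeof(*regs))) {
                  __show_regs(regs, 0);
          } else if (partial && on_stack(info, (void *)regs + IRET_FRAME_OFFSET,
                                         IRET_FRAME_SIZE)) {
                  /* Only the five iret registers at the end are valid. */
                  show_iret_regs(regs);
          }
  }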

Tested-by: Alexander Tsoy <alexander@tsoy.me>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Toralf Förster <toralf.foerster@gmx.de>
Cc: stable@vger.kernel.org
Fixes: b02fcf9ba121 ("x86/unwinder: Handle stack overflows more gracefully")
Link: http://lkml.kernel.org/r/5b05b8b344f59db2d3d50dbdeba92d60f2304c54.1514736742.git.jpoimboe@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit a9cdbe72c4e8bf3b38781c317a79326e2e1a230d)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  x86/pti: Make sure the user/kernel PTEs match
Thomas Gleixner [Wed, 3 Jan 2018 14:57:59 +0000 (15:57 +0100)]
x86/pti: Make sure the user/kernel PTEs match

CVE-2017-5754

Meelis reported that his K8 Athlon64 emits MCE warnings when PTI is
enabled:

[Hardware Error]: Error Addr: 0x0000ffff81e000e0
[Hardware Error]: MC1 Error: L1 TLB multimatch.
[Hardware Error]: cache level: L1, tx: INSN

The address is in the entry area, which is mapped into kernel _AND_ user
space. That's special because we switch CR3 while we are executing
there.

User mapping:
0xffffffff81e00000-0xffffffff82000000           2M     ro         PSE     GLB x  pmd

Kernel mapping:
0xffffffff81000000-0xffffffff82000000          16M     ro         PSE         x  pmd

So the K8 is complaining that the TLB entries differ. They differ in the
GLB bit.

Drop the GLB bit when installing the user shared mapping.
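
Sketched, the change is confined to the PTI clone helpers (an illustration;
the flag plumbing is simplified):

  /* arch/x86/mm/pti.c (sketch) - strip the global bit when cloning the
     entry-text PMDs into the user page table so both mappings agree:   */
  static void __init pti_clone_entry_text(void)
  {
          pti_clone_pmds((unsigned long) __entry_text_start,
                         (unsigned long) __irqentry_text_end, _PAGE_GLOBAL);
  }

  /* ...and inside pti_clone_pmds(), the copy becomes: */
  /*         *target_pmd = pmd_clear_flags(*pmd, clear);                 */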

Fixes: 6dc72c3cbca0 ("x86/mm/pti: Share entry text PMD")
Reported-by: Meelis Roos <mroos@linux.ee>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Meelis Roos <mroos@linux.ee>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801031407180.1957@nanos
(cherry picked from commit 52994c256df36fda9a715697431cba9daecb6b11)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>