git.proxmox.com Git - mirror_ubuntu-artful-kernel.git/log
6 years ago  UBUNTU: Ubuntu-4.13.0-36.40
Khalid Elmously [Fri, 16 Feb 2018 18:24:06 +0000 (13:24 -0500)]
UBUNTU: Ubuntu-4.13.0-36.40

Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  UBUNTU: Start new release
Khalid Elmously [Fri, 16 Feb 2018 17:46:31 +0000 (12:46 -0500)]
UBUNTU: Start new release

Ignore: yes
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  UBUNTU: Ubuntu-4.13.0-35.39
Kleber Sacilotto de Souza [Mon, 12 Feb 2018 10:28:47 +0000 (11:28 +0100)]
UBUNTU: Ubuntu-4.13.0-35.39

Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  UBUNTU: [Packaging] pull in retpoline files
Andy Whitcroft [Sun, 11 Feb 2018 11:29:18 +0000 (11:29 +0000)]
UBUNTU: [Packaging] pull in retpoline files

CVE-2017-5715 (Spectre v2 Intel)

Signed-off-by: Andy Whitcroft <apw@canonical.com>
6 years ago  UBUNTU: [Packaging] retpoline files must be sorted
Andy Whitcroft [Sun, 11 Feb 2018 12:31:15 +0000 (12:31 +0000)]
UBUNTU: [Packaging] retpoline files must be sorted

CVE-2017-5715 (Spectre v2 Intel)

Signed-off-by: Andy Whitcroft <apw@canonical.com>
6 years ago  UBUNTU: SAUCE: turn off IBRS when full retpoline is present
Andy Whitcroft [Sat, 10 Feb 2018 13:01:09 +0000 (13:01 +0000)]
UBUNTU: SAUCE: turn off IBRS when full retpoline is present

CVE-2017-5715 (Spectre v2 Intel)

When we have full retpoline enabled then we do not actually need to toggle
IBRS on entering and leaving the kernel.

Signed-off-by: Andy Whitcroft <apw@canonical.com>
6 years ago  Revert "UBUNTU: SAUCE: turn off IBPB when full retpoline is present"
Andy Whitcroft [Sun, 11 Feb 2018 10:18:38 +0000 (10:18 +0000)]
Revert "UBUNTU: SAUCE: turn off IBPB when full retpoline is present"

CVE-2017-5715 (Spectre v2 Intel)

This reverts commit 4c04cc2c2dd1dd856238f53ee6f84b53f0c34caa.

Signed-off-by: Andy Whitcroft <apw@canonical.com>
6 years ago  UBUNTU: Start new release
Andy Whitcroft [Sun, 11 Feb 2018 11:43:17 +0000 (11:43 +0000)]
UBUNTU: Start new release

Ignore: yes
Signed-off-by: Andy Whitcroft <apw@canonical.com>
6 years ago  UBUNTU: Ubuntu-4.13.0-34.37
Khalid Elmously [Fri, 9 Feb 2018 19:50:17 +0000 (14:50 -0500)]
UBUNTU: Ubuntu-4.13.0-34.37

Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  libata: apply MAX_SEC_1024 to all LITEON EP1 series devices
Xinyu Lin [Thu, 1 Feb 2018 19:16:40 +0000 (14:16 -0500)]
libata: apply MAX_SEC_1024 to all LITEON EP1 series devices

BugLink: http://bugs.launchpad.net/bugs/1743053
LITEON EP1 has the same timeout issues as CX1 series devices.

Revert max_sectors to the value of 1024.

'e0edc8c54646 ("libata: apply MAX_SEC_1024 to all CX1-JB*-HP devices")'

Signed-off-by: Xinyu Lin <xinyu0123@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: stable@vger.kernel.org
(cherry picked from commit db5ff909798ef0099004ad50a0ff5fde92426fd1)
Signed-off-by: Joseph Salisbury <joseph.salisbury@canonical.com>
Acked-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Acked-by: Kleber Souza <kleber.souza@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  KVM: s390: wire up bpb feature
Christian Borntraeger [Fri, 2 Feb 2018 22:45:46 +0000 (17:45 -0500)]
KVM: s390: wire up bpb feature

BugLink: http://bugs.launchpad.net/bugs/1747090
The new firmware interfaces for branch prediction behaviour changes
are transparently available for the guest. Nevertheless, there is
new state attached that should be migrated and properly reset.
Provide a mechanism for handling reset, migration and VSIE.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
[Changed capability number to 152. - Radim]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
(back ported from commit 35b3fde6203b932b2b1a5b53b3d8808abc9c4f60)
Signed-off-by: Joseph Salisbury <joseph.salisbury@canonical.com>
Acked-by: Seth Forshee <seth.forshee@canonical.com>
Acked-by: Khalid Elmously <khalid.elmously@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  Revert "mm, memory_hotplug: do not associate hotadded memory to zones until online"
Colin Ian King [Fri, 9 Feb 2018 10:08:54 +0000 (10:08 +0000)]
Revert "mm, memory_hotplug: do not associate hotadded memory to zones until online"

BugLink: http://bugs.launchpad.net/bugs/1747069
Hotplug removal causes i386 crashes when exercised with the kernel
selftest mem-on-off-test script.  A fix exists in 4.15, however it
requires a set of changes far too large to be SRU'able, and the least
risky way forward is to revert the offending commit.

This fix reverts commit f1dd2cd13c4b ("mm, memory_hotplug: do not
associate hotadded memory to zones until online"), however the
revert required some manual fix-ups because of subsequent fixes after
this commit.

Note that running the mem-on-off-script is not always sufficient to
trigger the bug. A good reproducer is to run in a 4 CPU VM with 2GB
of memory and, after running the script, run sync and then re-install
the kernel packages to trip the issue.  This has been thoroughly
tested on i386 and amd64 and given a solid soak test using the
ADT tests too.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Acked-by: Khalid Elmously <khalid.elmously@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  UBUNTU: SAUCE: turn off IBPB when full retpoline is present
Andy Whitcroft [Thu, 8 Feb 2018 20:37:45 +0000 (20:37 +0000)]
UBUNTU: SAUCE: turn off IBPB when full retpoline is present

CVE-2017-5715 (Spectre v2 Intel)

When we have full retpoline enabled then we do not actually require IBPB
flushes when entering the kernel.  Add a new use_ibpb bit to represent
when we have retpoline enabled.  Further split the enable bit into two:
0x1 representing whether entry IBPB is enabled, and 0x10 representing
whether kernel flushes for userspace/VMs etc. are applied.
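
As a rough illustration, a minimal C sketch of the bit layout described
above; the macro and variable names here are placeholders, only the
0x1/0x10 values come from the text.

  /* ibpb_enabled stands in for the kernel's control variable. */
  #define IBPB_ENTRY_ENABLED   0x01  /* IBPB flush on kernel entry */
  #define IBPB_KERNEL_FLUSHES  0x10  /* flushes for userspace/VM switches */

  static unsigned int ibpb_enabled;

  static inline int entry_ibpb_active(void)
  {
          return ibpb_enabled & IBPB_ENTRY_ENABLED;
  }

  static inline int kernel_ibpb_active(void)
  {
          return ibpb_enabled & IBPB_KERNEL_FLUSHES;
  }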

Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  KVM: x86: Add speculative control CPUID support for guests
Tom Lendacky [Wed, 20 Dec 2017 10:55:47 +0000 (10:55 +0000)]
KVM: x86: Add speculative control CPUID support for guests

CVE-2017-5715 (Spectre v2 Intel)

Provide the guest with the speculative control CPUID related values.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  x86/svm: Set IBPB when running a different VCPU
Tom Lendacky [Wed, 20 Dec 2017 10:55:47 +0000 (10:55 +0000)]
x86/svm: Set IBPB when running a different VCPU

CVE-2017-5715 (Spectre v2 Intel)

Set IBPB (Indirect Branch Prediction Barrier) when the current CPU is
going to run a VCPU different from what was previously run.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  x86/svm: Set IBRS value on VM entry and exit
Tom Lendacky [Wed, 20 Dec 2017 10:55:47 +0000 (10:55 +0000)]
x86/svm: Set IBRS value on VM entry and exit

CVE-2017-5715 (Spectre v2 Intel)

Set/restore the guest's IBRS value on VM entry. On VM exit back to the
kernel, save the guest IBRS value and then set IBRS to 1.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  KVM: SVM: Do not intercept new speculative control MSRs
Tom Lendacky [Wed, 20 Dec 2017 10:55:47 +0000 (10:55 +0000)]
KVM: SVM: Do not intercept new speculative control MSRs

CVE-2017-5715 (Spectre v2 Intel)

Allow guest access to the speculative control MSRs without being
intercepted.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  x86/microcode: Extend post microcode reload to support IBPB feature
Tom Lendacky [Wed, 20 Dec 2017 10:55:47 +0000 (10:55 +0000)]
x86/microcode: Extend post microcode reload to support IBPB feature

CVE-2017-5715 (Spectre v2 Intel)

Add an IBPB feature check to the speculative control update check after
a microcode reload.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  x86/cpu/AMD: Add speculative control support for AMD
Tom Lendacky [Wed, 20 Dec 2017 10:52:54 +0000 (10:52 +0000)]
x86/cpu/AMD: Add speculative control support for AMD

CVE-2017-5715 (Spectre v2 Intel)

Add speculative control support for AMD processors. For AMD, speculative
control is indicated as follows:

  CPUID EAX=0x00000007, ECX=0x00 return EDX[26] indicates support for
  both IBRS and IBPB.

  CPUID EAX=0x80000008, ECX=0x00 return EBX[12] indicates support for
  just IBPB.

On AMD family 0x10, 0x12 and 0x16 processors where either of the above
features are not supported, IBPB can be achieved by disabling
indirect branch predictor support in MSR 0xc0011021[14] at boot.
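
To make the two checks concrete, a small userspace C sketch using GCC's
<cpuid.h>; it only mirrors the bits quoted in this message and is not
the kernel detection code.

  #include <stdio.h>
  #include <cpuid.h>

  int main(void)
  {
          unsigned int eax, ebx, ecx, edx;

          __cpuid_count(0x00000007, 0, eax, ebx, ecx, edx);
          printf("CPUID 7.0 EDX[26] (IBRS and IBPB): %u\n", (edx >> 26) & 1);

          __cpuid_count(0x80000008, 0, eax, ebx, ecx, edx);
          printf("CPUID 80000008 EBX[12] (IBPB only): %u\n", (ebx >> 12) & 1);

          return 0;
  }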

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  x86/spec_ctrl: Add lock to serialize changes to ibrs and ibpb control
Tim Chen [Mon, 20 Nov 2017 21:47:54 +0000 (13:47 -0800)]
x86/spec_ctrl: Add lock to serialize changes to ibrs and ibpb control

CVE-2017-5715 (Spectre v2 Intel)

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature
Tim Chen [Thu, 16 Nov 2017 12:47:48 +0000 (04:47 -0800)]
x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

CVE-2017-5715 (Spectre v2 Intel)

There are 2 ways to control IBPB and IBRS

1. At boot time
noibrs kernel boot parameter will disable IBRS usage
noibpb kernel boot parameter will disable IBPB usage
Otherwise if the above parameters are not specified, the system
will enable ibrs and ibpb usage if the cpu supports it.

2. At run time
echo 0 > /proc/sys/kernel/ibrs_enabled will turn off IBRS
echo 1 > /proc/sys/kernel/ibrs_enabled will turn on IBRS in kernel
echo 2 > /proc/sys/kernel/ibrs_enabled will turn on IBRS in both userspace and kernel
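
A minimal userspace C equivalent of the run-time knob above, assuming
only the path and 0/1/2 semantics given in this message; the function
name and error handling are a sketch, and writing the file needs root.

  #include <stdio.h>

  static int set_ibrs_enabled(int val)
  {
          FILE *f = fopen("/proc/sys/kernel/ibrs_enabled", "w");

          if (!f)
                  return -1;
          fprintf(f, "%d\n", val);
          return fclose(f);
  }

  int main(void)
  {
          return set_ibrs_enabled(1) ? 1 : 0;  /* 1 = IBRS in kernel only */
  }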

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
[marcelo.cerri@canonical.com: add x86 guards to kernel/smp.c]
[marcelo.cerri@canonical.com: include asm/msr.h under x86 guard in kernel/sysctl.c]
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  x86/kvm: Toggle IBRS on VM entry and exit
Tim Chen [Sat, 21 Oct 2017 00:04:35 +0000 (17:04 -0700)]
x86/kvm: Toggle IBRS on VM entry and exit

CVE-2017-5715 (Spectre v2 Intel)

Restore guest IBRS on VM entry and set it to 1 on VM exit
back to kernel.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  x86/kvm: Set IBPB when switching VM
Tim Chen [Fri, 13 Oct 2017 21:31:46 +0000 (14:31 -0700)]
x86/kvm: Set IBPB when switching VM

CVE-2017-5715 (Spectre v2 Intel)

Set IBPB (Indirect branch prediction barrier) when switching VM.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  x86/kvm: add MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD to kvm
Wei Wang [Tue, 7 Nov 2017 08:47:53 +0000 (16:47 +0800)]
x86/kvm: add MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD to kvm

CVE-2017-5715 (Spectre v2 Intel)

Add a field to access guest MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD state.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  x86/entry: Stuff RSB for entry to kernel for non-SMEP platform
Tim Chen [Wed, 15 Nov 2017 01:16:30 +0000 (17:16 -0800)]
x86/entry: Stuff RSB for entry to kernel for non-SMEP platform

CVE-2017-5715 (Spectre v2 Intel)

Stuff RSB to prevent RSB underflow on non-SMEP platform.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  x86/mm: Only set IBPB when the new thread cannot ptrace current thread
Tim Chen [Tue, 7 Nov 2017 21:52:42 +0000 (13:52 -0800)]
x86/mm: Only set IBPB when the new thread cannot ptrace current thread

CVE-2017-5715 (Spectre v2 Intel)

To reduce overhead of setting IBPB, we only do that when
the new thread cannot ptrace the current one.  If the new
thread has ptrace capability on current thread, it is safe.
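
A toy C model of that policy, for illustration only; ibpb() and the
ptrace flag are placeholders for the kernel's MSR write and its
ptrace_may_access()-style check, not the real code.

  #include <stdio.h>

  static void ibpb(void) { puts("IBPB issued"); }

  /* Skip the barrier when the incoming task could ptrace the outgoing
   * one anyway, since that pair is already mutually trusting. */
  static void switch_mm_ibpb(int next_can_ptrace_prev)
  {
          if (!next_can_ptrace_prev)
                  ibpb();
  }

  int main(void)
  {
          switch_mm_ibpb(0);   /* untrusting switch -> barrier */
          switch_mm_ibpb(1);   /* ptrace-capable    -> skipped */
          return 0;
  }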

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  x86/mm: Set IBPB upon context switch
Tim Chen [Fri, 20 Oct 2017 19:56:29 +0000 (12:56 -0700)]
x86/mm: Set IBPB upon context switch

CVE-2017-5715 (Spectre v2 Intel)

Set IBPB on context switch with changing of page table.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  x86/idle: Disable IBRS when offlining cpu and re-enable on wakeup
Tim Chen [Wed, 15 Nov 2017 20:24:19 +0000 (12:24 -0800)]
x86/idle: Disable IBRS when offlining cpu and re-enable on wakeup

CVE-2017-5715 (Spectre v2 Intel)

Clear IBRS when a cpu is offlined and set it when bringing it back online.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  x86/idle: Disable IBRS entering idle and enable it on wakeup
Tim Chen [Tue, 7 Nov 2017 02:19:14 +0000 (18:19 -0800)]
x86/idle: Disable IBRS entering idle and enable it on wakeup

CVE-2017-5715 (Spectre v2 Intel)

Clear IBRS on idle entry and set it on idle exit into kernel on mwait.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  x86/enter: Use IBRS on syscall and interrupts
Tim Chen [Fri, 13 Oct 2017 21:25:00 +0000 (14:25 -0700)]
x86/enter: Use IBRS on syscall and interrupts

CVE-2017-5715 (Spectre v2 Intel)

Set IBRS upon kernel entrance via syscall and interrupts. Clear it upon exit.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  x86/enter: MACROS to set/clear IBRS and set IBPB
Tim Chen [Sat, 16 Sep 2017 01:04:53 +0000 (18:04 -0700)]
x86/enter: MACROS to set/clear IBRS and set IBPB

CVE-2017-5715 (Spectre v2 Intel)

Setup macros to control IBRS and IBPB

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  x86/feature: Report presence of IBPB and IBRS control
Tim Chen [Wed, 27 Sep 2017 19:09:14 +0000 (12:09 -0700)]
x86/feature: Report presence of IBPB and IBRS control

CVE-2017-5715 (Spectre v2 Intel)

Report presence of IBPB and IBRS.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  x86/feature: Enable the x86 feature to control Speculation
Tim Chen [Thu, 24 Aug 2017 16:34:41 +0000 (09:34 -0700)]
x86/feature: Enable the x86 feature to control Speculation

CVE-2017-5715 (Spectre v2 Intel)

cpuid ax=0x7, return rdx bit 26 to indicate presence of this feature:
IA32_SPEC_CTRL (0x48) and IA32_PRED_CMD (0x49)
IA32_SPEC_CTRL, bit0 – Indirect Branch Restricted Speculation (IBRS)
IA32_PRED_CMD,  bit0 – Indirect Branch Prediction Barrier (IBPB)
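
For illustration only, a userspace sketch that reads the IA32_SPEC_CTRL
MSR named above through the msr module (needs root and 'modprobe msr');
this is just one way to observe the bit, not how the kernel patch
accesses it internally.

  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <unistd.h>

  #define MSR_IA32_SPEC_CTRL 0x48   /* value quoted in the text above */

  int main(void)
  {
          uint64_t val;
          int fd = open("/dev/cpu/0/msr", O_RDONLY);

          if (fd < 0) {
                  perror("open /dev/cpu/0/msr");
                  return 1;
          }
          if (pread(fd, &val, sizeof(val), MSR_IA32_SPEC_CTRL) != sizeof(val)) {
                  perror("read MSR 0x48");
                  return 1;
          }
          printf("IA32_SPEC_CTRL = %#llx (bit0 = IBRS)\n",
                 (unsigned long long)val);
          close(fd);
          return 0;
  }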

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  tun/tap: sanitize TUNSETSNDBUF input
Craig Gallek [Mon, 30 Oct 2017 22:50:11 +0000 (18:50 -0400)]
tun/tap: sanitize TUNSETSNDBUF input

BugLink: https://launchpad.net/bugs/1748846
Syzkaller found several variants of the lockup below by setting negative
values with the TUNSETSNDBUF ioctl.  This patch adds a sanity check
to both the tun and tap versions of this ioctl.

  watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [repro:2389]
  Modules linked in:
  irq event stamp: 329692056
  hardirqs last  enabled at (329692055): [<ffffffff824b8381>] _raw_spin_unlock_irqrestore+0x31/0x75
  hardirqs last disabled at (329692056): [<ffffffff824b9e58>] apic_timer_interrupt+0x98/0xb0
  softirqs last  enabled at (35659740): [<ffffffff824bc958>] __do_softirq+0x328/0x48c
  softirqs last disabled at (35659731): [<ffffffff811c796c>] irq_exit+0xbc/0xd0
  CPU: 0 PID: 2389 Comm: repro Not tainted 4.14.0-rc7 #23
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
  task: ffff880009452140 task.stack: ffff880006a20000
  RIP: 0010:_raw_spin_lock_irqsave+0x11/0x80
  RSP: 0018:ffff880006a27c50 EFLAGS: 00000282 ORIG_RAX: ffffffffffffff10
  RAX: ffff880009ac68d0 RBX: ffff880006a27ce0 RCX: 0000000000000000
  RDX: 0000000000000001 RSI: ffff880006a27ce0 RDI: ffff880009ac6900
  RBP: ffff880006a27c60 R08: 0000000000000000 R09: 0000000000000000
  R10: 0000000000000001 R11: 000000000063ff00 R12: ffff880009ac6900
  R13: ffff880006a27cf8 R14: 0000000000000001 R15: ffff880006a27cf8
  FS:  00007f4be4838700(0000) GS:ffff88000cc00000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 0000000020101000 CR3: 0000000009616000 CR4: 00000000000006f0
  Call Trace:
   prepare_to_wait+0x26/0xc0
   sock_alloc_send_pskb+0x14e/0x270
   ? remove_wait_queue+0x60/0x60
   tun_get_user+0x2cc/0x19d0
   ? __tun_get+0x60/0x1b0
   tun_chr_write_iter+0x57/0x86
   __vfs_write+0x156/0x1e0
   vfs_write+0xf7/0x230
   SyS_write+0x57/0xd0
   entry_SYSCALL_64_fastpath+0x1f/0xbe
  RIP: 0033:0x7f4be4356df9
  RSP: 002b:00007ffc18101c08 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
  RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f4be4356df9
  RDX: 0000000000000046 RSI: 0000000020101000 RDI: 0000000000000005
  RBP: 00007ffc18101c40 R08: 0000000000000001 R09: 0000000000000001
  R10: 0000000000000001 R11: 0000000000000293 R12: 0000559c75f64780
  R13: 00007ffc18101d30 R14: 0000000000000000 R15: 0000000000000000
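
The shape of the bound check the patch adds, shown as a stand-alone C
helper for illustration; the real fix lives in the tun/tap ioctl
handlers and this is not that code.

  #include <errno.h>

  /* Reject the negative (and zero) sizes that triggered the lockup. */
  int check_sndbuf(int sndbuf)
  {
          if (sndbuf <= 0)
                  return -EINVAL;
          return 0;
  }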

Fixes: 33dccbb050bb ("tun: Limit amount of queued packets per device")
Fixes: 20d29d7a916a ("net: macvtap driver")
Signed-off-by: Craig Gallek <kraig@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 93161922c658c714715686cd0cf69b090cb9bf1d)
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Acked-by: Brad Figg <brad.figg@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  tun: allow positive return values on dev_get_valid_name() call
Julien Gomes [Wed, 25 Oct 2017 18:50:50 +0000 (11:50 -0700)]
tun: allow positive return values on dev_get_valid_name() call

BugLink: https://launchpad.net/bugs/1748846
If the name argument of dev_get_valid_name() contains "%d", it will try
to assign it a unit number in __dev_alloc_name() and return either the
unit number (>= 0) or an error code (< 0).
Considering positive values as errors prevents tun device creation
relying on this mechanism, therefore we should only consider negative
values as errors here.
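
A toy C model of the changed check, for illustration: only negative
return values are treated as errors, so a non-negative unit number is
let through.  handle() is a placeholder, not the driver function.

  #include <stdio.h>

  static int handle(int ret)
  {
          if (ret < 0)          /* was effectively "if (ret)" before */
                  return ret;
          return 0;             /* positive unit numbers are fine */
  }

  int main(void)
  {
          printf("%d %d\n", handle(-22), handle(3));   /* prints: -22 0 */
          return 0;
  }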

Signed-off-by: Julien Gomes <julien@arista.com>
Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 5c25f65fd1e42685f7ccd80e0621829c105785d9)
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Acked-by: Brad Figg <brad.figg@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  tun: call dev_get_valid_name() before register_netdevice()
Cong Wang [Fri, 13 Oct 2017 18:58:53 +0000 (11:58 -0700)]
tun: call dev_get_valid_name() before register_netdevice()

BugLink: https://launchpad.net/bugs/1748846
register_netdevice() could fail early when we have an invalid
dev name, in which case ->ndo_uninit() is not called. For tun
device, this is a problem because a timer etc. are already
initialized and it expects ->ndo_uninit() to clean them up.

We could move these initializations into a ->ndo_init() so
that register_netdevice() knows better, however this is still
complicated due to the logic in tun_detach().

Therefore, I choose to just call dev_get_valid_name() before
register_netdevice(), which is quicker and much easier to audit.
And for this specific case, it is already enough.

Fixes: 96442e42429e ("tuntap: choose the txq based on rxq")
Reported-by: Dmitry Alexeev <avekceeb@gmail.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 0ad646c81b2182f7fa67ec0c8c825e0ee165696d)
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Acked-by: Brad Figg <brad.figg@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  UBUNTU: SAUCE: drm/amdgpu: add atpx quirk handling (v2)
Alex Deucher [Tue, 16 Jan 2018 10:48:00 +0000 (11:48 +0100)]
UBUNTU: SAUCE: drm/amdgpu: add atpx quirk handling (v2)

BugLink: https://launchpad.net/bugs/1742759
Add quirks for handling PX/HG systems.  In this case, add
a quirk for a weston dGPU that only seems to properly power
down using ATPX power control rather than HG (_PR3).

v2: append a new weston XT

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Junwei Zhang <Jerry.Zhang@amd.com> (v2)
Reviewed-and-Tested-by: Junwei Zhang <Jerry.Zhang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Timo Aaltonen <timo.aaltonen@canonical.com>
Acked-by: Seth Forshee <seth.forshee@canonical.com>
Acked-by: Khaled Elmously <khalid.elmously@canonical.com>
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
6 years ago  UBUNTU: [config] Keep ignoring retpoline
Stefan Bader [Thu, 8 Feb 2018 11:25:14 +0000 (12:25 +0100)]
UBUNTU: [config] Keep ignoring retpoline

getabis needs an update to pull retpoline info.
Needs checking whether we can drop this before doing a new spin.

Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
6 years ago  UBUNTU: Start new release
Stefan Bader [Thu, 8 Feb 2018 09:23:19 +0000 (10:23 +0100)]
UBUNTU: Start new release

Ignore: yes
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
6 years ago  UBUNTU: Ubuntu-4.13.0-33.36
Khalid Elmously [Tue, 6 Feb 2018 18:22:54 +0000 (13:22 -0500)]
UBUNTU: Ubuntu-4.13.0-33.36

Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  x86/retpoline: Simplify vmexit_fill_RSB()
Borislav Petkov [Sat, 27 Jan 2018 16:24:33 +0000 (16:24 +0000)]
x86/retpoline: Simplify vmexit_fill_RSB()

CVE-2017-5715 (Spectre v2 retpoline)
BugLink: http://bugs.launchpad.net/bugs/1747507
commit 1dde7415e99933bb7293d6b2843752cbdb43ec11

Simplify it to call an asm-function instead of pasting 41 insn bytes at
every call site. Also, add alignment to the macro as suggested here:

  https://support.google.com/faqs/answer/7625886

[dwmw2: Clean up comments, let it clobber %ebx and just tell the compiler]

Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: ak@linux.intel.com
Cc: dave.hansen@intel.com
Cc: karahmed@amazon.de
Cc: arjan@linux.intel.com
Cc: torvalds@linux-foundation.org
Cc: peterz@infradead.org
Cc: bp@alien8.de
Cc: pbonzini@redhat.com
Cc: tim.c.chen@linux.intel.com
Cc: gregkh@linux-foundation.org
Link: https://lkml.kernel.org/r/1517070274-12128-3-git-send-email-dwmw@amazon.co.uk
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 6a6a9c38986e9c4bcfdc53fba7b915a6ab834ce1)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  x86/retpoline: Remove the esp/rsp thunk
Waiman Long [Mon, 22 Jan 2018 22:09:34 +0000 (17:09 -0500)]
x86/retpoline: Remove the esp/rsp thunk

CVE-2017-5715 (Spectre v2 retpoline)
BugLink: http://bugs.launchpad.net/bugs/1747507
commit 1df37383a8aeabb9b418698f0bcdffea01f4b1b2

It doesn't make sense to have an indirect call thunk with esp/rsp as
retpoline code won't work correctly with the stack pointer register.
Removing it will help compiler writers to catch errors in case such
a thunk call is emitted incorrectly.

Fixes: 76b043848fd2 ("x86/retpoline: Add initial retpoline support")
Suggested-by: Jeff Law <law@redhat.com>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: David Woodhouse <dwmw@amazon.co.uk>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Kees Cook <keescook@google.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
Cc: Paul Turner <pjt@google.com>
Link: https://lkml.kernel.org/r/1516658974-27852-1-git-send-email-longman@redhat.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 04007c4ac40ca141e9160b33368e592394d1b6d3)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  x86/retpoline: Optimize inline assembler for vmexit_fill_RSB
Andi Kleen [Wed, 17 Jan 2018 22:53:28 +0000 (14:53 -0800)]
x86/retpoline: Optimize inline assembler for vmexit_fill_RSB

CVE-2017-5715 (Spectre v2 retpoline)
BugLink: http://bugs.launchpad.net/bugs/1747507
commit 3f7d875566d8e79c5e0b2c9a413e91b2c29e0854 upstream.

The generated assembler for the C fill RSB inline asm operations has
several issues:

- The C code sets up the loop register, which is then immediately
  overwritten in __FILL_RETURN_BUFFER with the same value again.

- The C code also passes in the iteration count in another register, which
  is not used at all.

Remove these two unnecessary operations. Just rely on the single constant
passed to the macro for the iterations.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: David Woodhouse <dwmw@amazon.co.uk>
Cc: dave.hansen@intel.com
Cc: gregkh@linuxfoundation.org
Cc: torvalds@linux-foundation.org
Cc: arjan@linux.intel.com
Link: https://lkml.kernel.org/r/20180117225328.15414-1-andi@firstfloor.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 5fa871644e6d48f5c120e8c469c96850a01c60c8)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  x86/retpoline: Add LFENCE to the retpoline/RSB filling RSB macros
Tom Lendacky [Sat, 13 Jan 2018 23:27:30 +0000 (17:27 -0600)]
x86/retpoline: Add LFENCE to the retpoline/RSB filling RSB macros

CVE-2017-5715 (Spectre v2 retpoline)
BugLink: http://bugs.launchpad.net/bugs/1747507
commit 28d437d550e1e39f805d99f9f8ac399c778827b7 upstream.

The PAUSE instruction is currently used in the retpoline and RSB filling
macros as a speculation trap.  The use of PAUSE was originally suggested
because it showed a very, very small difference in the amount of
cycles/time used to execute the retpoline as compared to LFENCE.  On AMD,
the PAUSE instruction is not a serializing instruction, so the pause/jmp
loop will use excess power as it is speculated over waiting for return
to mispredict to the correct target.

The RSB filling macro is applicable to AMD, and, if software is unable to
verify that LFENCE is serializing on AMD (possible when running under a
hypervisor), the generic retpoline support will be used and, so, is also
applicable to AMD.  Keep the current usage of PAUSE for Intel, but add an
LFENCE instruction to the speculation trap for AMD.

The same sequence has been adopted by GCC for the GCC generated retpolines.
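
A GCC inline-asm rendering of the speculation trap described above, as
a sketch only; the kernel implements this in its own asm macros, so the
function below is illustrative rather than the actual code.

  /* The loop body is never architecturally reached; it only exists to
   * pin down speculative execution.  LFENCE makes it stop speculation
   * on AMD as well, where PAUSE alone is not serializing. */
  static inline void speculation_trap(void)
  {
          asm volatile("1:\n\t"
                       "pause\n\t"
                       "lfence\n\t"
                       "jmp 1b\n\t");
  }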

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@alien8.de>
Acked-by: David Woodhouse <dwmw@amazon.co.uk>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Paul Turner <pjt@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Link: https://lkml.kernel.org/r/20180113232730.31060.36287.stgit@tlendack-t1.amdoffice.net
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 956ec9e7b59a1fb3fd4c9bc8a13a7f7700e9d7d2)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  x86/retpoline: Fill RSB on context switch for affected CPUs
David Woodhouse [Fri, 12 Jan 2018 17:49:25 +0000 (17:49 +0000)]
x86/retpoline: Fill RSB on context switch for affected CPUs

CVE-2017-5715 (Spectre v2 retpoline)
BugLink: http://bugs.launchpad.net/bugs/1747507
commit c995efd5a740d9cbafbf58bde4973e8b50b4d761 upstream.

On context switch from a shallow call stack to a deeper one, as the CPU
does 'ret' up the deeper side it may encounter RSB entries (predictions for
where the 'ret' goes to) which were populated in userspace.

This is problematic if neither SMEP nor KPTI (the latter of which marks
userspace pages as NX for the kernel) are active, as malicious code in
userspace may then be executed speculatively.

Overwrite the CPU's return prediction stack with calls which are predicted
to return to an infinite loop, to "capture" speculation if this
happens. This is required both for retpoline, and also in conjunction with
IBRS for !SMEP && !KPTI.

On Skylake+ the problem is slightly different, and an *underflow* of the
RSB may cause errant branch predictions to occur. So there it's not so much
overwrite, as *filling* the RSB to attempt to prevent it getting
empty. This is only a partial solution for Skylake+ since there are many
other conditions which may result in the RSB becoming empty. The full
solution on Skylake+ is to use IBRS, which will prevent the problem even
when the RSB becomes empty. With IBRS, the RSB-stuffing will not be
required on context switch.

[ tglx: Added missing vendor check and slightly massaged comments and
   changelog ]

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: gnomes@lxorguk.ukuu.org.uk
Cc: Rik van Riel <riel@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: thomas.lendacky@amd.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Kees Cook <keescook@google.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
Cc: Paul Turner <pjt@google.com>
Link: https://lkml.kernel.org/r/1515779365-9032-1-git-send-email-dwmw@amazon.co.uk
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 051547583bdda4b74953053a1034026c56b55c4c)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  UBUNTU: [d-i] Add qede to nic-modules udeb
dann frazier [Tue, 16 Jan 2018 20:36:00 +0000 (21:36 +0100)]
UBUNTU: [d-i] Add qede to nic-modules udeb

BugLink: https://bugs.launchpad.net/bugs/1743638
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Seth Forshee <seth.forshee@canonical.com>
Acked-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  scsi: hisi_sas: complete all tasklets prior to host reset
Xiaofei Tan [Tue, 24 Oct 2017 15:51:47 +0000 (23:51 +0800)]
scsi: hisi_sas: complete all tasklets prior to host reset

BugLink: https://bugs.launchpad.net/bugs/1739807
The CQ event is handled in tasklet context, and it could be delayed if
the system loading is high.

It is possible to run into some problems when executing a host reset
when cq_tasklet_vx_hw() is being executed.

So, prior to host reset, execute tasklet_kill() to ensure that all CQ
tasklets are complete.

Besides, as the function hisi_sas_wait_tasklets_done() is added to do
tasklet_kill(), this patch refactors some code where tasklet_kill() is
used.

Signed-off-by: Xiaofei Tan <tanxiaofei@huawei.com>
Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
(backported from commit 571295f8055c0b69c9911021ae6cf1a6973cf517)
[ dannf: dropped change in hisi_sas_v3_hw.c reset handler, which didn't
  yet exist; resolved trivial merge conflict in hisi_sas.h ]
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  scsi: hisi_sas: kill tasklet when destroying irq in v3 hw
Xiang Chen [Thu, 10 Aug 2017 16:09:38 +0000 (00:09 +0800)]
scsi: hisi_sas: kill tasklet when destroying irq in v3 hw

BugLink: https://bugs.launchpad.net/bugs/1739807
This patch adds calls to kill CQ tasklets for v3 hw during probe failure.

Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
(cherry picked from commit d499669facfdc9e8a7c479e86fd11f3d49e065ee)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  scsi: hisi_sas: fix the risk of freeing slot twice
Xiaofei Tan [Tue, 24 Oct 2017 15:51:38 +0000 (23:51 +0800)]
scsi: hisi_sas: fix the risk of freeing slot twice

BugLink: https://bugs.launchpad.net/bugs/1739807
The function hisi_sas_slot_task_free() is used to free the slot and do
tidy-up of LLDD resources. The LLDD generally should know the state of
a slot and decide when to free it, and it should only be done once.

For some scenarios, we really don't know the state, like when TMF
timeout. In this case, we check task->lldd_task before calling
hisi_sas_slot_task_free().

However, we may miss some scenarios when we should also check
task->lldd_task, and it is not SMP safe to check task->lldd_task as we
don't protect it within spin lock.

This patch is to fix this risk of freeing slot twice, as follows:

  1. Check task->lldd_task in the hisi_sas_slot_task_free(), and give
     up freeing of this time if task->lldd_task is NULL.

  2. Set slot->buf to NULL after it is freed.
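
For illustration, the general double-free guard pattern in stand-alone
C; struct slot and slot_free() below are simplified stand-ins, not the
hisi_sas types.

  #include <stdlib.h>

  struct slot { void *buf; };

  static void slot_free(struct slot *s)
  {
          if (!s || !s->buf)      /* give up if already freed */
                  return;
          free(s->buf);
          s->buf = NULL;          /* so a second call is harmless */
  }

  int main(void)
  {
          struct slot s = { .buf = malloc(16) };

          slot_free(&s);
          slot_free(&s);          /* now a no-op instead of a double free */
          return 0;
  }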

Signed-off-by: Xiaofei Tan <tanxiaofei@huawei.com>
Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
(cherry picked from commit 6ba0fbc35aa9f3bc8c12be3b4047055c9ce2ac92)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  scsi: hisi_sas: fix NULL check in SMP abort task path
Xiaofei Tan [Tue, 24 Oct 2017 15:51:37 +0000 (23:51 +0800)]
scsi: hisi_sas: fix NULL check in SMP abort task path

BugLink: https://bugs.launchpad.net/bugs/1739807
This patch adds a NULL check of task->lldd_task before freeing the
slot in SMP path.

This is to guard against the scenario of the slot being freed by the
preceding internal abort.

Signed-off-by: Xiaofei Tan <tanxiaofei@huawei.com>
Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
(cherry picked from commit 378c233bcb21dfb2d9c2548b9a1fa6a8d35c78dd)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  scsi: hisi_sas: us start_phy in PHY_FUNC_LINK_RESET
Xiang Chen [Tue, 24 Oct 2017 15:51:36 +0000 (23:51 +0800)]
scsi: hisi_sas: us start_phy in PHY_FUNC_LINK_RESET

BugLink: https://bugs.launchpad.net/bugs/1739807
When a PHY_FUNC_LINK_RESET is issued, we need to fill the transport
identify_frame to SAS controller before the PHYs are enabled.

Without this, we may find that a PHY which belonged to a wideport
before the reset may generate a new port id.

Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
(cherry picked from commit 1eb8eeac17ee808b50b422f5ef2e27f5497f82ad)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  scsi: hisi_sas: fix internal abort slot timeout bug
Xiang Chen [Tue, 24 Oct 2017 15:51:32 +0000 (23:51 +0800)]
scsi: hisi_sas: fix internal abort slot timeout bug

BugLink: https://bugs.launchpad.net/bugs/1739807
When an internal abort times out in hisi_sas_internal_task_abort(),
go to the exit label and do not go through the other task status
checks.

Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
(cherry picked from commit f692a677e2cb0ccab4ef91be97d8fb020cca246d)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  scsi: hisi_sas: service interrupt ITCT_CLR interrupt in v2 hw
Xiang Chen [Thu, 10 Aug 2017 16:09:33 +0000 (00:09 +0800)]
scsi: hisi_sas: service interrupt ITCT_CLR interrupt in v2 hw

BugLink: https://bugs.launchpad.net/bugs/1739807
This patch is a fix related to freeing a device in v2 hw driver.

Before, we polled the ITCT CLR interrupt to check if a device is free.

This was error prone: if the interrupt doesn't occur within 10us, we
miss processing it.

To avoid this situation, service this interrupt and sync the event with
a completion.

Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
(cherry picked from commit 640acc9a9693e729b7936fb4801f2dd59041a141)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  scsi: hisi_sas: add irq and tasklet cleanup in v2 hw
Xiang Chen [Thu, 10 Aug 2017 16:09:32 +0000 (00:09 +0800)]
scsi: hisi_sas: add irq and tasklet cleanup in v2 hw

BugLink: https://bugs.launchpad.net/bugs/1739807
This patch adds support to clean-up allocated IRQs and kill tasklets
when probe fails and for driver removal.

Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
(cherry picked from commit 8a253888bfc395a7a169f20d710d9289e59a8592)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  scsi: hisi_sas: add v2 hw DFX feature
Xiaofei Tan [Thu, 10 Aug 2017 16:09:29 +0000 (00:09 +0800)]
scsi: hisi_sas: add v2 hw DFX feature

BugLink: https://bugs.launchpad.net/bugs/1739807
Add DFX feature for v2 hw. We are adding support for
the following errors:
- loss_of_dword_sync_count
- invalid_dword_count
- phy_reset_problem_count
- running_disparity_error_count

Signed-off-by: Xiaofei Tan <tanxiaofei@huawei.com>
Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
(cherry picked from commit c52108c61bd3e97495858e6c7423d312093fcfba)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  scsi: hisi_sas: fix v2 hw underflow residual value
Xiang Chen [Thu, 10 Aug 2017 16:09:28 +0000 (00:09 +0800)]
scsi: hisi_sas: fix v2 hw underflow residual value

BugLink: https://bugs.launchpad.net/bugs/1739807
The value dw0 is the residual bytes when UNDERFLOW error happens, but we
filled the residual with the value of dw3 before. So change the residual
from dw3 to dw0.

Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
(cherry picked from commit 01b361fc900dd8ef7f8537c20e3a1986ab63f4d1)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  scsi: hisi_sas: avoid potential v2 hw interrupt issue
Xiang Chen [Thu, 10 Aug 2017 16:09:27 +0000 (00:09 +0800)]
scsi: hisi_sas: avoid potential v2 hw interrupt issue

BugLink: https://bugs.launchpad.net/bugs/1739807
When some interrupts happen together, we need to process every
interrupt one-by-one, and should not return immediately after one
interrupt has been processed.

Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
(cherry picked from commit c16db736653f5319d1d9b0d176f1f80394bd2614)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  scsi: hisi_sas: fix reset and port ID refresh issues
Xiaofei Tan [Thu, 10 Aug 2017 16:09:26 +0000 (00:09 +0800)]
scsi: hisi_sas: fix reset and port ID refresh issues

BugLink: https://bugs.launchpad.net/bugs/1739807
This patch provides fixes for the following issues:

1. Fix issue of controller reset required to send commands. For reset
   process, it may be required to send commands to the controller, but
   not during soft reset.  So add HISI_SAS_NOT_ACCEPT_CMD_BIT to prevent
   executing a task during this period.

2. Send a broadcast event in rescan topology to detect any topology
   changes during reset.

3. Previously it was not ensured that libsas has processed the PHY up
   and down events after reset. Potentially this could cause an issue
   that we still process the PHY event after reset. So resolve this by
   flushing shot workqueue in LLDD reset.

4. Port ID requires refresh after reset. The port ID generated after
   reset is not guaranteed to be the same as before reset, so it needs
   to be refreshed for each device's ITCT.

Signed-off-by: Xiaofei Tan <tanxiaofei@huawei.com>
Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
(cherry picked from commit 917d3bdaf8f2ab3bace2bd60b78d83a2b3096d98)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  ACPI / APEI: clear error status before acknowledging the error
Tyler Baicar [Tue, 16 Jan 2018 23:14:48 +0000 (17:14 -0600)]
ACPI / APEI: clear error status before acknowledging the error

BugLink: https://launchpad.net/bugs/1732990
Currently we acknowledge errors before clearing the error status.
This could cause a new error to be populated by firmware in-between
the error acknowledgment and the error status clearing which would
cause the second error's status to be cleared without being handled.
So, clear the error status before acknowledging the errors.

Also, make sure to acknowledge the error if the error status read
fails.

Signed-off-by: Tyler Baicar <tbaicar@codeaurora.org>
Reviewed-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
(cherry picked from commit aaf2c2fb0f51f91c699039440862b6ae9c25c10e)
Signed-off-by: Manoj Iyer <manoj.iyer@canonical.com>
Acked-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Khalid Elmously <khalid.elmously@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  ACPI: APEI: fix the wrong iteration of generic error status block
gengdongjiu [Tue, 16 Jan 2018 23:14:47 +0000 (17:14 -0600)]
ACPI: APEI: fix the wrong iteration of generic error status block

BugLink: https://launchpad.net/bugs/1732990
The revision 0x300 generic error data entry is different
from the old version, but currently iterating through the
GHES estatus blocks does not take into account this difference.
This will lead to a failure to get the right data entry if GHES
has a revision 0x300 error data entry.

Update the GHES estatus iteration macro to properly increment using
acpi_hest_get_next(), and correct the iteration termination condition
because the status block data length only includes error data
length.

Convert the CPER estatus checking and printing iteration logic
to use same macro.

Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
Tested-by: Tyler Baicar <tbaicar@codeaurora.org>
Reviewed-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
(cherry picked from commit c4335fdd38227788178953c101b77180504d7ea0)
Signed-off-by: Manoj Iyer <manoj.iyer@canonical.com>
Acked-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Khalid Elmously <khalid.elmously@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  vfio/pci: Virtualize Maximum Read Request Size
Alex Williamson [Tue, 16 Jan 2018 23:40:40 +0000 (17:40 -0600)]
vfio/pci: Virtualize Maximum Read Request Size

BugLink: https://launchpad.net/bugs/1732804
MRRS defines the maximum read request size a device is allowed to
make.  Drivers will often increase this to allow more data transfer
with a single request.  Completions to this request are bound by the
MPS setting for the bus.  Aside from device quirks (none known), it
doesn't seem to make sense to set an MRRS value less than MPS, yet
this is a likely scenario given that user drivers do not have a
system-wide view of the PCI topology.  Virtualize MRRS such that the
user can set MRRS >= MPS, but use MPS as the floor value that we'll
write to hardware.
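
The clamp described above in a few lines of C, for illustration; the
helper name and the byte values in main() are made up, only the
"MPS is the floor" rule comes from the text.

  #include <stdio.h>

  static unsigned int effective_mrrs(unsigned int user_mrrs, unsigned int mps)
  {
          return user_mrrs < mps ? mps : user_mrrs;   /* never below MPS */
  }

  int main(void)
  {
          printf("%u\n", effective_mrrs(128, 256));   /* clamped to 256 */
          printf("%u\n", effective_mrrs(512, 256));   /* user value kept */
          return 0;
  }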

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
(cherry picked from commit cf0d53ba4947aad6e471491d5b20a567cbe92e56)
Acked-by: Paolo Pisati <paolo.pisati@canonical.com>
Signed-off-by: Manoj Iyer <manoj.iyer@canonical.com>
Acked-by: Khalid Elmously <khalid.elmously@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  vfio/pci: Virtualize Maximum Payload Size
Alex Williamson [Tue, 16 Jan 2018 23:40:39 +0000 (17:40 -0600)]
vfio/pci: Virtualize Maximum Payload Size

BugLink: https://launchpad.net/bugs/1732804
With virtual PCI-Express chipsets, we now see userspace/guest drivers
trying to match the physical MPS setting to a virtual downstream port.
Of course a lone physical device surrounded by virtual interconnects
cannot make a correct decision for a proper MPS setting.  Instead,
let's virtualize the MPS control register so that writes through to
hardware are disallowed.  Userspace drivers like QEMU assume they can
write anything to the device and we'll filter out anything dangerous.
Since mismatched MPS can lead to AER and other faults, let's add it
to the kernel side rather than relying on userspace virtualization to
handle it.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
(cherry picked from commit 523184972b282cd9ca17a76f6ca4742394856818)
Signed-off-by: Manoj Iyer <manoj.iyer@canonical.com>
Acked-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Khalid Elmously <khalid.elmously@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years ago  scsi: hisi_sas: support zone management commands
Xiaofei Tan [Thu, 4 Jan 2018 17:52:00 +0000 (18:52 +0100)]
scsi: hisi_sas: support zone management commands

BugLink: https://bugs.launchpad.net/bugs/1739891
Add two ATA commands, ATA_CMD_ZAC_MGMT_IN and ATA_CMD_ZAC_MGMT_OUT in
hisi_sas_get_ata_protocol(), to support SATA SMR disk.

Signed-off-by: Xiaofei Tan <tanxiaofei@huawei.com>
Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
(cherry picked from commit c3fe8a2bbbc22bd4945ea69ab5a29913baeb35e4)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years ago  i2c: xlp9xx: Handle I2C_M_RECV_LEN in msg->flags
Kamlakant Patel [Thu, 4 Jan 2018 17:37:32 +0000 (10:37 -0700)]
i2c: xlp9xx: Handle I2C_M_RECV_LEN in msg->flags

BugLink: https://bugs.launchpad.net/bugs/1738073
The driver needs to handle the flag I2C_M_RECV_LEN during receive to
support SMBus emulation.

Update receive logic to handle the case where the length is received
as the first byte of a transaction.

Also update the code to handle I2C_CLIENT_PEC, which is set when the
client sends a packet error checking code byte.
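
A simplified C model of the length fix-up, for illustration: with
I2C_M_RECV_LEN set the first byte returned by the slave is the byte
count, so the remaining read length is only known at run time.  The
flag value is from linux/i2c.h; the helper itself is a sketch, not the
xlp9xx driver code.

  #include <stdint.h>
  #include <stdio.h>

  #define I2C_M_RECV_LEN 0x0400

  /* How many more bytes to read once the count byte has arrived. */
  static unsigned int remaining_bytes(uint16_t flags, uint8_t count, int pec)
  {
          if (!(flags & I2C_M_RECV_LEN))
                  return 0;                    /* fixed-length transfer */
          return count + (pec ? 1u : 0u);      /* data bytes (+ PEC byte) */
  }

  int main(void)
  {
          printf("%u\n", remaining_bytes(I2C_M_RECV_LEN, 4, 1));   /* 5 */
          return 0;
  }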

Signed-off-by: Jayachandran C <jnair@caviumnetworks.com>
Signed-off-by: Kamlakant Patel <kamlakant.patel@cavium.com>
Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
(cherry picked from commit 5515ae112172e20667f02b16f45fbf992923dcb0)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Seth Forshee <seth.forshee@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
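
As a generic illustration of the I2C_M_RECV_LEN pattern (a standalone
helper with invented names, not the xlp9xx driver's internals): the
first received byte carries the SMBus block length, so the expected
message length is fixed up once that byte arrives, with one extra byte
when I2C_CLIENT_PEC is set.

    #include <linux/errno.h>
    #include <linux/i2c.h>

    /* Sketch: grow the expected receive length from the first byte. */
    static int sketch_fixup_recv_len(struct i2c_msg *msg, u8 first_byte)
    {
            if (!(msg->flags & I2C_M_RECV_LEN))
                    return 0;
            if (first_byte == 0 || first_byte > I2C_SMBUS_BLOCK_MAX)
                    return -EPROTO;

            /* length byte + data, plus a PEC byte when requested */
            msg->len = 1 + first_byte + !!(msg->flags & I2C_CLIENT_PEC);
            return 0;
    }
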
6 years agoi2c: xlp9xx: Get clock frequency with clk API
Jayachandran C [Thu, 4 Jan 2018 17:37:10 +0000 (10:37 -0700)]
i2c: xlp9xx: Get clock frequency with clk API

BugLink: https://bugs.launchpad.net/bugs/1738073
Get the input clock frequency to the controller from the Linux clk
API, if it is available. This allows us to pass in the block input
frequency either from ACPI (using APD) or from the device tree.

The old hardcoded frequency is used as the default for backwards
compatibility.

Signed-off-by: Jayachandran C <jnair@caviumnetworks.com>
Signed-off-by: Kamlakant Patel <kamlakant.patel@cavium.com>
Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
(cherry picked from commit c347b8fc22b21899154cc153a4951aaf226b4e1a)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Seth Forshee <seth.forshee@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
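
A plausible shape for that probe-time logic, with the default-rate
macro name invented for the sketch: ask the clk framework first and
fall back to the historical constant only when no clock is described.

    #include <linux/clk.h>
    #include <linux/err.h>
    #include <linux/platform_device.h>

    /* Sketch: prefer the clk framework rate, else the old default. */
    static u32 sketch_get_clk_hz(struct platform_device *pdev)
    {
            struct clk *clk = devm_clk_get(&pdev->dev, NULL);

            if (IS_ERR(clk))
                    return XLP9XX_I2C_DEFAULT_CLK_HZ;  /* assumed macro name */

            clk_prepare_enable(clk);
            return clk_get_rate(clk);
    }
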
6 years agoACPI / APD: Add clock frequency for ThunderX2 I2C controller
Jayachandran C [Thu, 4 Jan 2018 17:36:50 +0000 (10:36 -0700)]
ACPI / APD: Add clock frequency for ThunderX2 I2C controller

BugLink: https://bugs.launchpad.net/bugs/1738073
Add the input frequency of 125 MHz for the ThunderX2 I2C controller block.
The ACPI ID used is "CAV9007".

Signed-off-by: Jayachandran C <jnair@caviumnetworks.com>
Signed-off-by: Kamlakant Patel <kamlakant.patel@cavium.com>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
(cherry picked from commit 1977dbefe92c0baefefb62927df6e3908af8c453)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Acked-by: Seth Forshee <seth.forshee@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
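
The APD glue for a case like this normally reduces to a fixed-rate
device description plus a match-table entry; the sketch below follows
that pattern and should be read as an approximation of the hunk, not a
quote of it.

    /* Sketch, drivers/acpi/acpi_apd.c style: 125 MHz fixed input clock. */
    static const struct apd_device_desc thunderx2_i2c_desc = {
            .setup = acpi_apd_setup,
            .fixed_clk_rate = 125000000,
    };

    /* ...plus a match-table entry along the lines of:
     *      { "CAV9007", APD_ADDR(thunderx2_i2c_desc) },
     */
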
6 years agoarm64: Add software workaround for Falkor erratum 1041
Shanker Donthineni [Thu, 4 Jan 2018 17:43:54 +0000 (10:43 -0700)]
arm64: Add software workaround for Falkor erratum 1041

BugLink: https://bugs.launchpad.net/bugs/1738497
The ARM architecture defines the memory locations that are permitted
to be accessed as the result of a speculative instruction fetch from
an exception level for which all stages of translation are disabled.
Specifically, the core is permitted to speculatively fetch from the
4KB region containing the current program counter and from the next
4KB region.

When translation is changed from enabled to disabled for the running
exception level (SCTLR_ELn[M] changed from a value of 1 to 0), the
Falkor core may errantly speculatively access memory locations outside
of the 4KB region permitted by the architecture. The errant memory
access may lead to one of the following unexpected behaviors.

1) A System Error Interrupt (SEI) being raised by the Falkor core due
   to the errant memory access attempting to access a region of memory
   that is protected by a slave-side memory protection unit.
2) Unpredictable device behavior due to a speculative read from device
   memory. This behavior may only occur if the instruction cache is
   disabled prior to or coincident with translation being changed from
   enabled to disabled.

The conditions leading to this erratum will not occur when either of the
following applies:
 1) A higher exception level disables translation of a lower exception level
    (e.g. EL2 changing SCTLR_EL1[M] from a value of 1 to 0).
 2) An exception level disables its stage-1 translation while its stage-2
    translation is enabled (e.g. EL1 changing SCTLR_EL1[M] from a value of 1
    to 0 when HCR_EL2[VM] has a value of 1).

To avoid the errant behavior, software must execute an ISB immediately
prior to executing the MSR that will change SCTLR_ELn[M] from 1 to 0.

Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(backported from commit 932b50c7c1c65e6f23002e075b97ee083c4a9e71)
[ dannf: trivial offset fix in Kconfig ]
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
[kleber.souza: solve conflicts during cherry-pick]
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
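
Schematically (not lifted from the patch), the workaround is an ISB
placed immediately before the SCTLR write that clears the M bit. A
C-with-inline-asm rendering for EL1, with the function itself being a
made-up example, might look like:

    #include <asm/sysreg.h>

    static inline void sketch_disable_mmu_el1(void)
    {
            unsigned long sctlr = read_sysreg(sctlr_el1);

            sctlr &= ~SCTLR_ELx_M;                    /* translation off */
            asm volatile("isb" ::: "memory");         /* E1041: fence
                                                         speculative fetch */
            write_sysreg(sctlr, sctlr_el1);
            asm volatile("isb" ::: "memory");
    }
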
6 years agoUBUNTU: [Config] CONFIG_QCOM_FALKOR_ERRATUM_E1041=y
dann frazier [Thu, 4 Jan 2018 17:43:42 +0000 (10:43 -0700)]
UBUNTU: [Config] CONFIG_QCOM_FALKOR_ERRATUM_E1041=y

BugLink: https://bugs.launchpad.net/bugs/1738497
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
[kleber.souza: fixed BugLink on the commit message.]
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agonet: thunderx: Fix TCP/UDP checksum offload for IPv4 pkts
Florian Westphal [Thu, 4 Jan 2018 17:35:00 +0000 (18:35 +0100)]
net: thunderx: Fix TCP/UDP checksum offload for IPv4 pkts

BugLink: https://bugs.launchpad.net/bugs/1736593
Offload IP header checksum to NIC.

This fixes a previous patch which disabled checksum offloading
for both IPv4 and IPv6 packets, so L3 checksum offload was also
getting disabled for IPv4 pkts, and the HW drops these pkts for
some reason.

Without this patch, IPv4 TSO appears to be broken: I get
~16 kbyte/s, and with the patch close to 2 Mbyte/s, when copying
files via scp from a test box to my home workstation.

Looking at tcpdump on the sender, it looks like the hardware drops
IPv4 TSO skbs. This patch restores performance for me; IPv6 looks
good too.

Fixes: fa6d7cb5d76c ("net: thunderx: Fix TCP/UDP checksum offload for IPv6 pkts")
Cc: Sunil Goutham <sgoutham@cavium.com>
Cc: Aleksey Makarov <aleksey.makarov@auriga.com>
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 134059fd2775be79e26c2dff87d25cc2f6ea5626)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agonet: thunderx: Fix TCP/UDP checksum offload for IPv6 pkts
Sunil Goutham [Thu, 4 Jan 2018 17:34:00 +0000 (18:34 +0100)]
net: thunderx: Fix TCP/UDP checksum offload for IPv6 pkts

BugLink: https://bugs.launchpad.net/bugs/1736593
Don't offload IP header checksum to NIC.

This fixes a previous patch which enabled checksum offloading
for both IPv4 and IPv6 packets, so L3 checksum offload was also
getting enabled for IPv6 pkts. The HW drops these pkts because it
assumes a pkt is IPv4 whenever IP csum offload is set in the SQ
descriptor.

Fixes: 3a9024f52c2e ("net: thunderx: Enable TSO and checksum offloads for ipv6")
Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: Aleksey Makarov <aleksey.makarov@auriga.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit fa6d7cb5d76cf0467c61420fc9238045aedfd379)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
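
A condensed sketch of the resulting policy in the send-queue header
setup (field names follow the nicvf convention but are approximate):
only genuinely IPv4 packets get the L3 checksum offload bit, so the
hardware never misinterprets an IPv6 packet and IPv4 keeps its
offload.

    /* Sketch: csum_l3 asks the HW to insert the IPv4 header checksum. */
    if (skb->protocol == htons(ETH_P_IP))
            hdr->csum_l3 = 1;       /* IPv4 only; never set for IPv6 */
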
6 years agoPCI: Apply Cavium ThunderX ACS quirk to more Root Ports
Vadim Lomovtsev [Thu, 4 Jan 2018 17:31:00 +0000 (18:31 +0100)]
PCI: Apply Cavium ThunderX ACS quirk to more Root Ports

BugLink: https://bugs.launchpad.net/bugs/1736774
Extend the Cavium ThunderX ACS quirk to cover more device IDs and restrict
it to only Root Ports.

Signed-off-by: Vadim Lomovtsev <Vadim.Lomovtsev@cavium.com>
[bhelgaas: changelog, stable tag]
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Cc: stable@vger.kernel.org # v4.12+
(cherry picked from commit f2ddaf8dfd4a5071ad09074d2f95ab85d35c8a1e)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoPCI: Set Cavium ACS capability quirk flags to assert RR/CR/SV/UF
Vadim Lomovtsev [Thu, 4 Jan 2018 17:31:00 +0000 (18:31 +0100)]
PCI: Set Cavium ACS capability quirk flags to assert RR/CR/SV/UF

BugLink: https://bugs.launchpad.net/bugs/1736774
The Cavium ThunderX (CN8XXX) family of PCIe Root Ports does not advertise
an ACS capability.  However, the RTL internally implements similar
protection as if ACS had Request Redirection, Completion Redirection,
Source Validation, and Upstream Forwarding features enabled.

Change Cavium ACS capabilities quirk flags accordingly.

Fixes: b404bcfbf035 ("PCI: Add ACS quirk for all Cavium devices")
Signed-off-by: Vadim Lomovtsev <Vadim.Lomovtsev@cavium.com>
[bhelgaas: tidy changelog, comment, stable tag]
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Cc: stable@vger.kernel.org # v4.6+: b77d537d00d0: PCI: Apply Cavium ACS quirk only to CN81xx/CN83xx/CN88xx devices
(cherry picked from commit 7f342678634f16795892677204366e835e450dda)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
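
The usual shape of such an ACS quirk, sketched from the generic PCI
quirk pattern rather than copied from the commit: report the SV, RR,
CR and UF protections as satisfied so ACS-based isolation checks treat
these Root Ports as safe.

    #include <linux/pci.h>

    /* Sketch: claim the protections the RTL provides without an ACS cap. */
    static int sketch_cavium_acs_flags(struct pci_dev *dev, u16 acs_flags)
    {
            acs_flags &= ~(PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);

            return acs_flags ? 0 : 1;   /* 1 = all requested bits satisfied */
    }
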
6 years agolocking/qrwlock: Prevent slowpath writers getting held up by fastpath
Will Deacon [Fri, 5 Jan 2018 01:48:37 +0000 (18:48 -0700)]
locking/qrwlock: Prevent slowpath writers getting held up by fastpath

BugLink: https://bugs.launchpad.net/bugs/1732238
When a prospective writer takes the qrwlock locking slowpath due to the
lock being held, it attempts to cmpxchg the wmode field from 0 to
_QW_WAITING so that concurrent lockers also take the slowpath and queue
on the spinlock accordingly, allowing the lockers to drain.

Unfortunately, this isn't fair, because a fastpath writer that comes in
after the lock is made available but before the _QW_WAITING flag is set
can effectively jump the queue. If there is a steady stream of prospective
writers, then the waiter will be held off indefinitely.

This patch restores fairness by separating _QW_WAITING and _QW_LOCKED
into two distinct fields: _QW_LOCKED continues to occupy the bottom byte
of the lockword so that it can be cleared unconditionally when unlocking,
but _QW_WAITING now occupies what used to be the bottom bit of the reader
count. This then forces the slow-path for concurrent lockers.

Tested-by: Waiman Long <longman@redhat.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Tested-by: Adam Wallis <awallis@codeaurora.org>
Tested-by: Jan Glauber <jglauber@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Jeremy.Linton@arm.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/1507810851-306-6-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit d133166146333e1f13fc81c0e6c43c8d99290a8a)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Seth Forshee <seth.forshee@canonical.com>
Acked-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
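
To make the new split concrete, the lockword ends up carved up roughly
as below (constants reproduced from memory of the upstream header):

    /*
     * bits 0..7  - writer-locked byte, cleared as a whole on unlock
     * bit  8     - _QW_WAITING, set by the queued writer
     * bits 9..31 - reader count
     */
    #define _QW_WAITING  0x100              /* a writer is waiting     */
    #define _QW_LOCKED   0x0ff              /* a writer holds the lock */
    #define _QW_WMASK    0x1ff              /* writer mask             */
    #define _QR_SHIFT    9                  /* reader count shift      */
    #define _QR_BIAS     (1U << _QR_SHIFT)
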
6 years agolocking/qrwlock, arm64: Move rwlock implementation over to qrwlocks
Will Deacon [Fri, 5 Jan 2018 01:48:36 +0000 (18:48 -0700)]
locking/qrwlock, arm64: Move rwlock implementation over to qrwlocks

BugLink: https://bugs.launchpad.net/bugs/1732238
Now that the qrwlock can make use of WFE, remove our homebrewed rwlock
code in favour of the generic queued implementation.

Tested-by: Waiman Long <longman@redhat.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Tested-by: Adam Wallis <awallis@codeaurora.org>
Tested-by: Jan Glauber <jglauber@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Jeremy.Linton@arm.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1507810851-306-5-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 087133ac90763cd339b6b67f2998f87dcc136c52)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Seth Forshee <seth.forshee@canonical.com>
Acked-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
6 years agolocking/qrwlock: Use atomic_cond_read_acquire() when spinning in qrwlock
Will Deacon [Fri, 5 Jan 2018 01:48:35 +0000 (18:48 -0700)]
locking/qrwlock: Use atomic_cond_read_acquire() when spinning in qrwlock

BugLink: https://bugs.launchpad.net/bugs/1732238
The qrwlock slowpaths involve spinning when either a prospective reader
is waiting for a concurrent writer to drain, or a prospective writer is
waiting for concurrent readers to drain. In both of these situations,
atomic_cond_read_acquire() can be used to avoid busy-waiting and make use
of any backoff functionality provided by the architecture.

This patch replaces the open-coded loops and the rspin_until_writer_unlock()
implementation with atomic_cond_read_acquire(). The write-mode transition
from zero to _QW_WAITING is left alone, since (a) it doesn't need acquire
semantics and (b) it should be fast.

Tested-by: Waiman Long <longman@redhat.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Tested-by: Adam Wallis <awallis@codeaurora.org>
Tested-by: Jan Glauber <jglauber@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Jeremy.Linton@arm.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/1507810851-306-4-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit b519b56e378ee82caf9b079b04f5db87dedc3251)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Seth Forshee <seth.forshee@canonical.com>
Acked-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
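
As an example of the resulting pattern (simplified from the slowpath,
not a verbatim quote), the waiters now spin like this, which is what
lets arm64 park in WFE instead of busy-looping:

    /* Reader: wait, with acquire semantics, until no writer holds the lock. */
    atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));

    /* Writer: wait until only its own _QW_WAITING bit remains. */
    atomic_cond_read_acquire(&lock->cnts, VAL == _QW_WAITING);
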
6 years agolocking/atomic: Add atomic_cond_read_acquire()
Will Deacon [Fri, 5 Jan 2018 01:48:34 +0000 (18:48 -0700)]
locking/atomic: Add atomic_cond_read_acquire()

BugLink: https://bugs.launchpad.net/bugs/1732238
smp_cond_load_acquire() provides a way to spin on a variable with acquire
semantics until some conditional expression involving the variable is
satisfied. Architectures such as arm64 can potentially enter a low-power
state, waking up only when the value of the variable changes, which
reduces the system impact of tight polling loops.

This patch makes the same interface available to users of atomic_t,
atomic64_t and atomic_long_t, rather than require messy accesses to the
structure internals.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Jeremy.Linton@arm.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <longman@redhat.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/1507810851-306-3-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 4df714be4dcf40bfb0d4af0f851a6e1977afa02e)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Seth Forshee <seth.forshee@canonical.com>
Acked-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
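
The interface is a thin wrapper over smp_cond_load_acquire(); from
memory, the definitions are essentially:

    #define atomic_cond_read_acquire(v, c) \
            smp_cond_load_acquire(&(v)->counter, (c))
    #define atomic64_cond_read_acquire(v, c) \
            smp_cond_load_acquire(&(v)->counter, (c))
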
6 years agolocking/qrwlock: Use 'struct qrwlock' instead of 'struct __qrwlock'
Will Deacon [Fri, 5 Jan 2018 01:48:33 +0000 (18:48 -0700)]
locking/qrwlock: Use 'struct qrwlock' instead of 'struct __qrwlock'

BugLink: https://bugs.launchpad.net/bugs/1732238
There's no good reason to keep the internal structure of struct qrwlock
hidden from qrwlock.h, particularly as it's actually needed for unlock
and ends up being abstracted independently behind the __qrwlock_write_byte()
function.

Stop pretending we can hide this stuff, and move the __qrwlock definition
into qrwlock, removing the __qrwlock_write_byte() nastiness and using the
same struct definition everywhere instead.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Jeremy.Linton@arm.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <longman@redhat.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/1507810851-306-2-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit e0d02285f16e8d5810f3d5d5e8a5886ca0015d3b)
Signed-off-by: dann frazier <dann.frazier@canonical.com>
Acked-by: Seth Forshee <seth.forshee@canonical.com>
Acked-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
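
The unified definition (little-endian layout shown; reproduced from
memory of the upstream header) exposes the writer byte directly, so
unlock can clear it without the old helper:

    typedef struct qrwlock {
            union {
                    atomic_t cnts;
                    struct {
                            u8 wlocked;      /* writer-locked byte      */
                            u8 __lstate[3];  /* waiting bit + rd count  */
                    };
            };
            arch_spinlock_t wait_lock;
    } arch_rwlock_t;
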
6 years agoscsi: libiscsi: Allow sd_shutdown on bad transport
Rafael David Tinoco [Tue, 23 Jan 2018 19:18:02 +0000 (19:18 +0000)]
scsi: libiscsi: Allow sd_shutdown on bad transport

BugLink: https://bugs.launchpad.net/bugs/1569925
If, for any reason, userland shuts down iscsi transport interfaces
before proper logouts - like when logging in to LUNs manually without
logging out on server shutdown, or when automated scripts can't
umount/logout from logged LUNs - the kernel will hang forever in its
sd_sync_cache() logic, after issuing the SYNCHRONIZE_CACHE cmd to all
still-existing paths.

PID: 1 TASK: ffff8801a69b8000 CPU: 1 COMMAND: "systemd-shutdow"
 #0 [ffff8801a69c3a30] __schedule at ffffffff8183e9ee
 #1 [ffff8801a69c3a80] schedule at ffffffff8183f0d5
 #2 [ffff8801a69c3a98] schedule_timeout at ffffffff81842199
 #3 [ffff8801a69c3b40] io_schedule_timeout at ffffffff8183e604
 #4 [ffff8801a69c3b70] wait_for_completion_io_timeout at ffffffff8183fc6c
 #5 [ffff8801a69c3bd0] blk_execute_rq at ffffffff813cfe10
 #6 [ffff8801a69c3c88] scsi_execute at ffffffff815c3fc7
 #7 [ffff8801a69c3cc8] scsi_execute_req_flags at ffffffff815c60fe
 #8 [ffff8801a69c3d30] sd_sync_cache at ffffffff815d37d7
 #9 [ffff8801a69c3da8] sd_shutdown at ffffffff815d3c3c

This happens because iscsi_eh_cmd_timed_out(), the transport layer
timeout helper, would tell the queue timeout function (scsi_times_out)
to reset the request timer over and over, until the session state is
back to logged in state. Unfortunately, during server shutdown, this
might never happen again.

Another option would be not to handle the issue in the transport
layer. That would trigger the error-handler logic, which would also
need the session state to be logged in again.

The best option, for this case, is to tell the upper layers that the
command was handled in the transport layer's error-handler helper,
marking it as DID_NO_CONNECT, which allows completion and reports the
problem.

After the session was marked as ISCSI_STATE_FAILED, due to the first
timeout during the server shutdown phase, all subsequent cmds will fail
to be queued, allowing upper logic to fail faster.

Signed-off-by: Rafael David Tinoco <rafael.tinoco@canonical.com>
Reviewed-by: Lee Duncan <lduncan@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
(cherry-picked from commit d754941225a7dbc61f6dd2173fa9498049f9a7ee next-20180117)
Signed-off-by: Rafael David Tinoco <rafael.tinoco@canonical.com>
Acked-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Acked-by: Khalid Elmously <khalid.elmously@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
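
In iscsi_eh_cmd_timed_out() terms the idea reduces to a fragment of
this shape (condition simplified here; the real check is against the
session's failed/recovery state):

    /* Session is dead and recovery cannot progress: complete the
     * command instead of rearming its timer forever. */
    if (session->state != ISCSI_STATE_LOGGED_IN) {
            sc->result = DID_NO_CONNECT << 16;
            rc = BLK_EH_HANDLED;
            goto done;
    }
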
6 years agoblk-mq-tag: check for NULL rq when iterating tags
Jens Axboe [Fri, 19 Jan 2018 12:43:12 +0000 (10:43 -0200)]
blk-mq-tag: check for NULL rq when iterating tags

BugLink: https://bugs.launchpad.net/bugs/1744300
Since we introduced blk-mq-sched, the tags->rqs[] array has been
dynamically assigned. So we need to check for NULL when iterating,
since there's a window of time where the bit is set, but we haven't
dynamically assigned the tags->rqs[] array position yet.

This is perfectly safe, since the memory backing of the request is
never going away while the device is alive.

Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry-pick from commit 7f5562d5ecc44c757599b201df928ba52fa05047)
Signed-off-by: Guilherme G. Piccoli <gpiccoli@canonical.com>
Acked-by: Kamal Mostafa <kamal@canonical.com>
Acked-by: Khalid Elmously <khalid.elmously@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
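
Schematically, inside the tag iterators (bt_iter()/bt_tags_iter()) the
fix is a guard of this shape (local names approximate):

    struct request *rq = tags->rqs[bitnr];

    /* Bit already set but rqs[] slot not assigned yet: skip it and
     * keep iterating. */
    if (!rq)
            return true;
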
6 years agoUBUNTU: SAUCE: drm: hibmc: Initialize the hibmc_bo_driver.io_mem_pfn
Michal Srb [Wed, 3 Jan 2018 23:23:00 +0000 (00:23 +0100)]
UBUNTU: SAUCE: drm: hibmc: Initialize the hibmc_bo_driver.io_mem_pfn

BugLink: https://bugs.launchpad.net/bugs/1738334
The io_mem_pfn field was added in ea642c3216cb2a60d1c0e760ae47ee85c9c16447 and
is used unconditionally. Most drivers were updated to set it to the default
implementation ttm_bo_default_io_mem_pfn, but hibmc was not.

This fixes a crash in ttm_bo_vm_fault() when the hibmc driver is used.

Signed-off-by: Michal Srb <msrb@suse.com>
(cherry-picked from https://lists.freedesktop.org/archives/dri-devel/2017-November/159002.html)
[upstream fix at
  https://lists.freedesktop.org/archives/dri-devel/2017-December/161184.html
  is quite intrusive - this is the minimal fix]
Signed-off-by: Daniel Axtens <daniel.axtens@canonical.com>
Acked-by: Paolo Pisati <paolo.pisati@canonical.com>
Acked-by: Marcelo Henrique Cerri <marcelo.cerri@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
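
The minimal fix is a one-line hook assignment of this form (the other
initializers of the structure are elided here):

    static struct ttm_bo_driver hibmc_bo_driver = {
            /* ... existing callbacks ... */
            .io_mem_pfn = ttm_bo_default_io_mem_pfn,  /* previously unset */
    };
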
6 years agox86/unwind: Fix dereference of untrusted pointer
Josh Poimboeuf [Tue, 10 Oct 2017 01:20:02 +0000 (20:20 -0500)]
x86/unwind: Fix dereference of untrusted pointer

Tetsuo Handa and Fengguang Wu reported a panic in the unwinder:

  BUG: unable to handle kernel NULL pointer dereference at 000001f2
  IP: update_stack_state+0xd4/0x340
  *pde = 00000000

  Oops: 0000 [#1] PREEMPT SMP
  CPU: 0 PID: 18728 Comm: 01-cpu-hotplug Not tainted 4.13.0-rc4-00170-gb09be67 #592
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.9.3-20161025_171302-gandalf 04/01/2014
  task: bb0b53c0 task.stack: bb3ac000
  EIP: update_stack_state+0xd4/0x340
  EFLAGS: 00010002 CPU: 0
  EAX: 0000a570 EBX: bb3adccb ECX: 0000f401 EDX: 0000a570
  ESI: 00000001 EDI: 000001ba EBP: bb3adc6b ESP: bb3adc3f
   DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
  CR0: 80050033 CR2: 000001f2 CR3: 0b3a7000 CR4: 00140690
  DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
  DR6: fffe0ff0 DR7: 00000400
  Call Trace:
   ? unwind_next_frame+0xea/0x400
   ? __unwind_start+0xf5/0x180
   ? __save_stack_trace+0x81/0x160
   ? save_stack_trace+0x20/0x30
   ? __lock_acquire+0xfa5/0x12f0
   ? lock_acquire+0x1c2/0x230
   ? tick_periodic+0x3a/0xf0
   ? _raw_spin_lock+0x42/0x50
   ? tick_periodic+0x3a/0xf0
   ? tick_periodic+0x3a/0xf0
   ? debug_smp_processor_id+0x12/0x20
   ? tick_handle_periodic+0x23/0xc0
   ? local_apic_timer_interrupt+0x63/0x70
   ? smp_trace_apic_timer_interrupt+0x235/0x6a0
   ? trace_apic_timer_interrupt+0x37/0x3c
   ? strrchr+0x23/0x50
  Code: 0f 95 c1 89 c7 89 45 e4 0f b6 c1 89 c6 89 45 dc 8b 04 85 98 cb 74 bc 88 4d e3 89 45 f0 83 c0 01 84 c9 89 04 b5 98 cb 74 bc 74 3b <8b> 47 38 8b 57 34 c6 43 1d 01 25 00 00 02 00 83 e2 03 09 d0 83
  EIP: update_stack_state+0xd4/0x340 SS:ESP: 0068:bb3adc3f
  CR2: 00000000000001f2
  ---[ end trace 0d147fd4aba8ff50 ]---
  Kernel panic - not syncing: Fatal exception in interrupt

On x86-32, after decoding a frame pointer to get a regs address,
regs_size() dereferences the regs pointer when it checks regs->cs to see
if the regs are user mode.  This is dangerous because it's possible that
what looks like a decoded frame pointer is actually a corrupt value, and
we don't want the unwinder to make things worse.

Instead of calling regs_size() on an unsafe pointer, just assume they're
kernel regs to start with.  Later, once it's safe to access the regs, we
can do the user mode check and corresponding safety check for the
remaining two regs.

Reported-and-tested-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Reported-and-tested-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Byungchul Park <byungchul.park@lge.com>
Cc: LKP <lkp@01.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 5ed8d8bb38c5 ("x86/unwind: Move common code into update_stack_state()")
Link: http://lkml.kernel.org/r/7f95b9a6993dec7674b3f3ab3dcd3294f7b9644d.1507597785.git.jpoimboe@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 62dd86ac01f9fb6386d7f8c6b389c3ea4582a50a)
BugLink: http://bugs.launchpad.net/bugs/1747263
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: claim mitigation via observable speculation barrier
Andy Whitcroft [Tue, 30 Jan 2018 14:46:27 +0000 (14:46 +0000)]
UBUNTU: SAUCE: claim mitigation via observable speculation barrier

CVE-2017-5753 (Spectre v1 Intel)

Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: s390/spinlock: add osb memory barrier
Martin Schwidefsky [Mon, 18 Dec 2017 06:58:11 +0000 (07:58 +0100)]
UBUNTU: SAUCE: s390/spinlock: add osb memory barrier

CVE-2017-5753 (Spectre v1 Intel)

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: powerpc: add osb barrier
Andy Whitcroft [Wed, 20 Dec 2017 12:12:08 +0000 (12:12 +0000)]
UBUNTU: SAUCE: powerpc: add osb barrier

CVE-2017-5753 (Spectre v1 Intel)

Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agouserns: prevent speculative execution
Elena Reshetova [Fri, 15 Dec 2017 10:29:09 +0000 (02:29 -0800)]
userns: prevent speculative execution

CVE-2017-5753 (Spectre v1 Intel)

Since the pos value in function m_start()
seems to be controllable by userspace and later on
conditionally (upon bound check) used to resolve
map->extent, insert an observable speculation
barrier before its usage. This should prevent
observable speculation on that branch and avoid
kernel memory leak.

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoudf: prevent speculative execution
Elena Reshetova [Wed, 13 Dec 2017 08:15:30 +0000 (10:15 +0200)]
udf: prevent speculative execution

CVE-2017-5753 (Spectre v1 Intel)

Since the eahd->appAttrLocation value in function
udf_add_extendedattr() seems to be controllable by
userspace and later on conditionally (upon bound check)
used in following memmove, insert an observable speculation
barrier before its usage. This should prevent
observable speculation on that branch and avoid
kernel memory leak.

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agonet: mpls: prevent speculative execution
Elena Reshetova [Wed, 30 Aug 2017 10:55:54 +0000 (13:55 +0300)]
net: mpls: prevent speculative execution

CVE-2017-5753 (Spectre v1 Intel)

Since the index value in function mpls_route_input_rcu()
seems to be controllable by userspace and later on
conditionally (upon bound check) used to resolve
platform_label, insert an observable speculation
barrier before its usage. This should prevent
observable speculation on that branch and avoid
kernel memory leak.

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agofs: prevent speculative execution
Elena Reshetova [Wed, 30 Aug 2017 10:52:22 +0000 (13:52 +0300)]
fs: prevent speculative execution

CVE-2017-5753 (Spectre v1 Intel)

Since the fd value in function __fcheck_files()
seems to be controllable by userspace and later on
conditionally (upon bound check) used to resolve
fdt->fd, insert an observable speculation
barrier before its usage. This should prevent
observable speculation on that branch and avoid
kernel memory leak.

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
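
Concretely, the pattern shared by these SAUCE patches drops the
barrier in right after the bounds check. For __fcheck_files() that is
roughly (the osb() placement is illustrative, not the exact hunk):

    static inline struct file *__fcheck_files(struct files_struct *files,
                                              unsigned int fd)
    {
            struct fdtable *fdt = rcu_dereference_raw(files->fdt);

            if (fd < fdt->max_fds) {
                    osb();  /* bound check passed: stop user-observable
                               speculation before the dependent load */
                    return rcu_dereference_raw(fdt->fd[fd]);
            }
            return NULL;
    }
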
6 years agoipv6: prevent speculative execution
Elena Reshetova [Wed, 30 Aug 2017 10:48:35 +0000 (13:48 +0300)]
ipv6: prevent speculative execution

CVE-2017-5753 (Spectre v1 Intel)

Since the offset value in function raw6_getfrag()
seems to be controllable by userspace and later on
conditionally (upon bound check) used in the
following memcpy, insert an observable speculation
barrier before its usage. This should prevent
observable speculation on that branch and avoid
kernel memory leak.

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoipv4: prevent speculative execution
Elena Reshetova [Wed, 13 Dec 2017 08:16:07 +0000 (10:16 +0200)]
ipv4: prevent speculative execution

CVE-2017-5753 (Spectre v1 Intel)

Since the offset value in function raw_getfrag()
seems to be controllable by userspace and later on
conditionally (upon bound check) used in the following
memcpy, insert an observable speculation
barrier before its usage. This should prevent
observable speculation on that branch and avoid
kernel memory leak.

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoThermal/int340x: prevent speculative execution
Elena Reshetova [Wed, 30 Aug 2017 10:47:12 +0000 (13:47 +0300)]
Thermal/int340x: prevent speculative execution

CVE-2017-5753 (Spectre v1 Intel)

Since the trip value in function int340x_thermal_get_trip_temp()
seems to be controllable by userspace and later on
conditionally (upon bound check) used to resolve
d->aux_trips, insert an observable speculation
barrier before its usage. This should prevent
observable speculation on that branch and avoid
kernel memory leak.

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agocw1200: prevent speculative execution
Elena Reshetova [Wed, 30 Aug 2017 10:46:21 +0000 (13:46 +0300)]
cw1200: prevent speculative execution

CVE-2017-5753 (Spectre v1 Intel)

Since the queue value in function cw1200_conf_tx()
seems to be controllable by userspace and later on
conditionally (upon bound check) used in
WSM_TX_QUEUE_SET, insert an observable speculation
barrier before its usage. This should prevent
observable speculation on that branch and avoid
kernel memory leak.

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoqla2xxx: prevent speculative execution
Elena Reshetova [Wed, 30 Aug 2017 10:45:35 +0000 (13:45 +0300)]
qla2xxx: prevent speculative execution

CVE-2017-5753 (Spectre v1 Intel)

Since the handle value in functions qlafx00_status_entry()
and qlafx00_multistatus_entry() seems to be controllable
by userspace and later on conditionally (upon bound check)
used to resolve req->outstanding_cmds, insert an observable
speculation barrier before its usage. This should prevent
observable speculation on that branch and avoid kernel
memory leak.

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agop54: prevent speculative execution
Elena Reshetova [Wed, 30 Aug 2017 10:44:38 +0000 (13:44 +0300)]
p54: prevent speculative execution

CVE-2017-5753 (Spectre v1 Intel)

Since the queue value in function p54_conf_tx()
seems to be controllable by userspace and later on
conditionally (upon bound check) used to resolve
priv->qos_params, insert an observable speculation
barrier before its usage. This should prevent
observable speculation on that branch and avoid
kernel memory leak.

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agocarl9170: prevent speculative execution
Elena Reshetova [Wed, 30 Aug 2017 10:43:39 +0000 (13:43 +0300)]
carl9170: prevent speculative execution

CVE-2017-5753 (Spectre v1 Intel)

Since the queue value in function carl9170_op_conf_tx()
seems to be controllable by userspace and later on
conditionally (upon bound check) used to resolve
ar9170_qmap and following ar->edcf, insert an observable
speculation barrier before its usage. This should prevent
observable speculation on that branch and avoid
kernel memory leak.

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agouvcvideo: prevent speculative execution
Elena Reshetova [Wed, 30 Aug 2017 10:41:27 +0000 (13:41 +0300)]
uvcvideo: prevent speculative execution

CVE-2017-5753 (Spectre v1 Intel)

Since the index value in function uvc_ioctl_enum_input()
seems to be controllable by userspace and later on
conditionally (upon bound check) used to resolve
selector->baSourceID, insert an observable speculation
barrier before its usage. This should prevent
observable speculation on that branch and avoid
kernel memory leak.

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agoUBUNTU: SAUCE: FIX: x86, bpf, jit: prevent speculative execution when JIT is enabled
Andy Whitcroft [Wed, 31 Jan 2018 13:23:49 +0000 (13:23 +0000)]
UBUNTU: SAUCE: FIX: x86, bpf, jit: prevent speculative execution when JIT is enabled

CVE-2017-5753 (Spectre v1 Intel)

Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agox86, bpf, jit: prevent speculative execution when JIT is enabled
Elena Reshetova [Tue, 8 Aug 2017 09:06:58 +0000 (12:06 +0300)]
x86, bpf, jit: prevent speculative execution when JIT is enabled

CVE-2017-5753 (Spectre v1 Intel)

When constant blinding is enabled (bpf_jit_harden = 1), this adds
an observable speculation barrier before emitting x86 jitted code
for the BPF_ALU(64)_OR_X and BPF_ALU_LHS_X
(for the BPF_REG_AX register) eBPF instructions. This is needed in order
to prevent speculative execution on out-of-bounds BPF_MAP array
indexes when the JIT is enabled. This way arbitrary kernel memory is
not exposed through side-channel attacks.

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
6 years agobpf: prevent speculative execution in eBPF interpreter
Elena Reshetova [Mon, 7 Aug 2017 08:10:28 +0000 (11:10 +0300)]
bpf: prevent speculative execution in eBPF interpreter

CVE-2017-5753 (Spectre v1 Intel)

This adds an observable speculation barrier before the LD_IMM_DW and
LDX_MEM_B/H/W/DW eBPF instructions during eBPF program
execution in order to prevent speculative execution on out-of-bounds
BPF_MAP array indexes. This way arbitrary kernel memory is not
exposed through side-channel attacks.

Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
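
In the interpreter's jump table this amounts to fencing just before
the dependent load; a schematic rendering of one LDX case, with the
macros expanded for readability:

    LDX_MEM_W:
            osb();  /* fence before the possibly out-of-bounds load */
            DST = *(u32 *)(unsigned long) (SRC + insn->off);
            CONT;
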
6 years agolocking/barriers: introduce new observable speculation barrier
Elena Reshetova [Mon, 7 Aug 2017 08:03:42 +0000 (11:03 +0300)]
locking/barriers: introduce new observable speculation barrier

CVE-2017-5753 (Spectre v1 Intel)

The new observable speculation barrier, osb(), ensures
that any user-observable speculation doesn't cross the boundary.

Any user-observable speculative activity on this CPU
thread before this point either completes, reaches a
state in which it can no longer cause observable activity, or
is aborted before instructions after the barrier execute.

In the x86 case, osb() resolves to an lfence if X86_FEATURE_LFENCE_RDTSC
is present. Other architectures can define their own variants.

Suggested-by: Arjan van de Ven <arjan@linux.intel.com>
Suggested-by: Alan Cox <alan.cox@intel.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
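
A plausible x86 definition, written here as an assumption consistent
with the description above rather than as the actual SAUCE hunk: use
the alternatives mechanism so the cheaper lfence is emitted only where
it is architecturally serializing.

    /* Hypothetical: mfence fallback, lfence when it is serializing. */
    #define osb()  alternative("mfence", "lfence", X86_FEATURE_LFENCE_RDTSC)
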