From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Nick Desaulniers <ndesaulniers@google.com>
Date: Wed, 3 Jan 2018 12:39:52 -0800
Subject: [PATCH] x86/process: Define cpu_tss_rw in same section as declaration
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

CVE-2017-5754

cpu_tss_rw is declared with DECLARE_PER_CPU_PAGE_ALIGNED
but then defined with DEFINE_PER_CPU_SHARED_ALIGNED
leading to section mismatch warnings.

Use DEFINE_PER_CPU_PAGE_ALIGNED consistently. This is necessary because
it's mapped to the cpu entry area and must be page aligned.

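For illustration only (not part of the upstream patch), a minimal sketch of
the declare/define mismatch, using a hypothetical type and variable name:

  #include <linux/percpu-defs.h>

  /* hypothetical example type and variable, only to show the mismatch */
  struct example_state { unsigned long val; };

  /* header: the declaration promises page alignment, so the symbol is
   * expected in the ..page_aligned per-CPU section */
  DECLARE_PER_CPU_PAGE_ALIGNED(struct example_state, example_var);

  /* .c file, before the fix: defining it with
   * DEFINE_PER_CPU_SHARED_ALIGNED() would place the symbol in the
   * ..shared_aligned section instead, triggering a section mismatch
   * warning against the declaration above */

  /* .c file, after the fix: the definition matches the declaration */
  DEFINE_PER_CPU_PAGE_ALIGNED(struct example_state, example_var) = { .val = 0 };
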
[ tglx: Massaged changelog a bit ]

Fixes: 1a935bc3d4ea ("x86/entry: Move SYSENTER_stack to the beginning of struct tss_struct")
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: thomas.lendacky@amd.com
Cc: Borislav Petkov <bpetkov@suse.de>
Cc: tklauser@distanz.ch
Cc: minipli@googlemail.com
Cc: me@kylehuey.com
Cc: namit@vmware.com
Cc: luto@kernel.org
Cc: jpoimboe@redhat.com
Cc: tj@kernel.org
Cc: cl@linux.com
Cc: bp@suse.de
Cc: thgarnie@google.com
Cc: kirill.shutemov@linux.intel.com
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20180103203954.183360-1-ndesaulniers@google.com

(cherry picked from commit 2fd9c41aea47f4ad071accf94b94f94f2c4d31eb)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
(cherry picked from commit f45e574914ae47825d2eea46abc9d6fbabe55e56)
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 arch/x86/kernel/process.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 3688a7b9d055..07e6218ad7d9 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -46,7 +46,7 @@
  * section. Since TSS's are completely CPU-local, we want them
  * on exact cacheline boundaries, to eliminate cacheline ping-pong.
  */
-__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss_rw) = {
+__visible DEFINE_PER_CPU_PAGE_ALIGNED(struct tss_struct, cpu_tss_rw) = {
 	.x86_tss = {
 		/*
 		 * .sp0 is only used when entering ring 0 from a lower
-- 
2.14.2
