From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Nick Desaulniers <ndesaulniers@google.com>
Date: Wed, 3 Jan 2018 12:39:52 -0800
Subject: [PATCH] x86/process: Define cpu_tss_rw in same section as declaration
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

CVE-2017-5754

cpu_tss_rw is declared with DECLARE_PER_CPU_PAGE_ALIGNED
but then defined with DEFINE_PER_CPU_SHARED_ALIGNED
leading to section mismatch warnings.

Use DEFINE_PER_CPU_PAGE_ALIGNED consistently. This is necessary because
it's mapped to the cpu entry area and must be page aligned.

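For context, a condensed sketch of the declare/define pairing in question
(illustrative only, not the literal kernel source; the real initializer is
elided and shown in the diff below):

    /* arch header: the declaration places cpu_tss_rw in the page-aligned
     * per-CPU section */
    DECLARE_PER_CPU_PAGE_ALIGNED(struct tss_struct, cpu_tss_rw);

    /* process.c: the definition must use the matching page-aligned macro;
     * DEFINE_PER_CPU_SHARED_ALIGNED would put the symbol in a different
     * per-CPU subsection, producing the section mismatch warning above. */
    DEFINE_PER_CPU_PAGE_ALIGNED(struct tss_struct, cpu_tss_rw) = { /* ... */ };
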
[ tglx: Massaged changelog a bit ]

Fixes: 1a935bc3d4ea ("x86/entry: Move SYSENTER_stack to the beginning of struct tss_struct")
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: thomas.lendacky@amd.com
Cc: Borislav Petkov <bpetkov@suse.de>
Cc: tklauser@distanz.ch
Cc: minipli@googlemail.com
Cc: me@kylehuey.com
Cc: namit@vmware.com
Cc: luto@kernel.org
Cc: jpoimboe@redhat.com
Cc: tj@kernel.org
Cc: cl@linux.com
Cc: bp@suse.de
Cc: thgarnie@google.com
Cc: kirill.shutemov@linux.intel.com
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20180103203954.183360-1-ndesaulniers@google.com

(cherry picked from commit 2fd9c41aea47f4ad071accf94b94f94f2c4d31eb)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
(cherry picked from commit f45e574914ae47825d2eea46abc9d6fbabe55e56)
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 arch/x86/kernel/process.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 3688a7b9d055..07e6218ad7d9 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -46,7 +46,7 @@
  * section. Since TSS's are completely CPU-local, we want them
  * on exact cacheline boundaries, to eliminate cacheline ping-pong.
  */
-__visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss_rw) = {
+__visible DEFINE_PER_CPU_PAGE_ALIGNED(struct tss_struct, cpu_tss_rw) = {
 	.x86_tss = {
 		/*
 		 * .sp0 is only used when entering ring 0 from a lower
-- 
2.14.2
