From d9170f22073657aceba14c49e8df535df4409a6c Mon Sep 17 00:00:00 2001
From: Andy Lutomirski <luto@kernel.org>
Date: Thu, 2 Nov 2017 00:59:13 -0700
Subject: [PATCH 101/241] x86/entry/64: Stop initializing TSS.sp0 at boot
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

CVE-2017-5754

In my quest to get rid of thread_struct::sp0, I want to clean up or
remove all of its readers. Two of them are in cpu_init() (32-bit and
64-bit), and they aren't needed. This is because we never enter
userspace at all on the threads that CPUs are initialized in.

Poison the initial TSS.sp0 and stop initializing it on CPU init.

The comment text mostly comes from Dave Hansen. Thanks!

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bpetkov@suse.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/ee4a00540ad28c6cff475fbcc7769a4460acc861.1509609304.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 20bb83443ea79087b5e5f8dab4e9d80bb9bf7acb)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
(cherry picked from commit 8c6b12e88bd87433087ea1f1cd5a9a4975e4623c)
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 arch/x86/kernel/cpu/common.c | 13 ++++++++++---
 arch/x86/kernel/process.c    |  8 +++++++-
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 6562acbfc4e0..121fe3570d6f 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1570,9 +1570,13 @@ void cpu_init(void)
 	BUG_ON(me->mm);
 	enter_lazy_tlb(&init_mm, me);
 
-	load_sp0(current->thread.sp0);
+	/*
+	 * Initialize the TSS. Don't bother initializing sp0, as the initial
+	 * task never enters user mode.
+	 */
 	set_tss_desc(cpu, t);
 	load_TR_desc();
+
 	load_mm_ldt(&init_mm);
 
 	clear_all_debug_regs();
@@ -1594,7 +1598,6 @@ void cpu_init(void)
 	int cpu = smp_processor_id();
 	struct task_struct *curr = current;
 	struct tss_struct *t = &per_cpu(cpu_tss, cpu);
-	struct thread_struct *thread = &curr->thread;
 
 	wait_for_master_cpu(cpu);
 
@@ -1624,9 +1627,13 @@ void cpu_init(void)
 	BUG_ON(curr->mm);
 	enter_lazy_tlb(&init_mm, curr);
 
-	load_sp0(thread->sp0);
+	/*
+	 * Initialize the TSS. Don't bother initializing sp0, as the initial
+	 * task never enters user mode.
+	 */
 	set_tss_desc(cpu, t);
 	load_TR_desc();
+
 	load_mm_ldt(&init_mm);
 
 	t->x86_tss.io_bitmap_base = offsetof(struct tss_struct, io_bitmap);
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 3ca198080ea9..ccf3a4f4ef68 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -48,7 +48,13 @@
  */
 __visible DEFINE_PER_CPU_SHARED_ALIGNED(struct tss_struct, cpu_tss) = {
 	.x86_tss = {
-		.sp0 = TOP_OF_INIT_STACK,
+		/*
+		 * .sp0 is only used when entering ring 0 from a lower
+		 * privilege level. Since the init task never runs anything
+		 * but ring 0 code, there is no need for a valid value here.
+		 * Poison it.
+		 */
+		.sp0 = (1UL << (BITS_PER_LONG-1)) + 1,
 #ifdef CONFIG_X86_32
 		.ss0 = __KERNEL_DS,
 		.ss1 = __KERNEL_CS,
-- 
2.14.2