[pve-kernel.git] / patches / kernel / 0152-x86-espfix-64-Stop-assuming-that-pt_regs-is-on-the-e.patch
From 9715c46ae2c6d48c0e34409efad8d260a67ca6d6 Mon Sep 17 00:00:00 2001
From: Andy Lutomirski <luto@kernel.org>
Date: Mon, 4 Dec 2017 15:07:22 +0100
Subject: [PATCH 152/233] x86/espfix/64: Stop assuming that pt_regs is on the
 entry stack
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

CVE-2017-5754

When we start using an entry trampoline, a #GP from userspace will
be delivered on the entry stack, not on the task stack. Fix the
espfix64 #DF fixup to set up #GP according to TSS.SP0, rather than
assuming that pt_regs + 1 == SP0. This won't change anything
without an entry stack, but it will make the code continue to work
when an entry stack is added.

While we're at it, improve the comments to explain what's actually
going on.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Borislav Petkov <bpetkov@suse.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Link: https://lkml.kernel.org/r/20171204150606.130778051@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 6d9256f0a89eaff97fca6006100bcaea8d1d8bdb)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
(cherry picked from commit f5d8df279d00c22e4c338a5891a874a59947e5f5)
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 arch/x86/kernel/traps.c | 37 ++++++++++++++++++++++++++++---------
 1 file changed, 28 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 7b1d0df624cf..b69db1ee8733 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -360,9 +360,15 @@ dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code)
 
 	/*
 	 * If IRET takes a non-IST fault on the espfix64 stack, then we
-	 * end up promoting it to a doublefault. In that case, modify
-	 * the stack to make it look like we just entered the #GP
-	 * handler from user space, similar to bad_iret.
+	 * end up promoting it to a doublefault. In that case, take
+	 * advantage of the fact that we're not using the normal (TSS.sp0)
+	 * stack right now. We can write a fake #GP(0) frame at TSS.sp0
+	 * and then modify our own IRET frame so that, when we return,
+	 * we land directly at the #GP(0) vector with the stack already
+	 * set up according to its expectations.
+	 *
+	 * The net result is that our #GP handler will think that we
+	 * entered from usermode with the bad user context.
 	 *
 	 * No need for ist_enter here because we don't use RCU.
 	 */
@@ -370,13 +376,26 @@ dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code)
 	    regs->cs == __KERNEL_CS &&
 	    regs->ip == (unsigned long)native_irq_return_iret)
 	{
-		struct pt_regs *normal_regs = task_pt_regs(current);
+		struct pt_regs *gpregs = (struct pt_regs *)this_cpu_read(cpu_tss.x86_tss.sp0) - 1;
+
+		/*
+		 * regs->sp points to the failing IRET frame on the
+		 * ESPFIX64 stack. Copy it to the entry stack. This fills
+		 * in gpregs->ss through gpregs->ip.
+		 *
+		 */
+		memmove(&gpregs->ip, (void *)regs->sp, 5*8);
+		gpregs->orig_ax = 0;  /* Missing (lost) #GP error code */
 
-		/* Fake a #GP(0) from userspace. */
-		memmove(&normal_regs->ip, (void *)regs->sp, 5*8);
-		normal_regs->orig_ax = 0;  /* Missing (lost) #GP error code */
+		/*
+		 * Adjust our frame so that we return straight to the #GP
+		 * vector with the expected RSP value. This is safe because
+		 * we won't enable interrupts or schedule before we invoke
+		 * general_protection, so nothing will clobber the stack
+		 * frame we just set up.
+		 */
 		regs->ip = (unsigned long)general_protection;
-		regs->sp = (unsigned long)&normal_regs->orig_ax;
+		regs->sp = (unsigned long)&gpregs->orig_ax;
 
 		return;
 	}
@@ -401,7 +420,7 @@ dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code)
 	 *
 	 * Processors update CR2 whenever a page fault is detected. If a
 	 * second page fault occurs while an earlier page fault is being
-	 * deliv- ered, the faulting linear address of the second fault will
+	 * delivered, the faulting linear address of the second fault will
 	 * overwrite the contents of CR2 (replacing the previous
 	 * address). These updates to CR2 occur even if the page fault
 	 * results in a double fault or occurs during the delivery of a
-- 
2.14.2
