From 63463bcffe420067411ad3d4d01b79c872fffc3a Mon Sep 17 00:00:00 2001
From: Andy Lutomirski <luto@kernel.org>
Date: Tue, 11 Jul 2017 10:33:39 -0500
Subject: [PATCH 019/241] x86/entry/64: Initialize the top of the IRQ stack
 before switching stacks
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

CVE-2017-5754

The OOPS unwinder wants the word at the top of the IRQ stack to
point back to the previous stack at all times when the IRQ stack
is in use. There's currently a one-instruction window in ENTER_IRQ_STACK
during which this isn't the case. Fix it by writing the old RSP to the
top of the IRQ stack before jumping.

This currently writes the pointer to the stack twice, which is a bit
ugly. We could get rid of this by replacing irq_stack_ptr with
irq_stack_ptr_minus_eight (better name welcome). OTOH, there may be
all kinds of odd microarchitectural considerations in play that
affect performance by a few cycles here.

Reported-by: Mike Galbraith <efault@gmx.de>
Reported-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: live-patching@vger.kernel.org
Link: http://lkml.kernel.org/r/aae7e79e49914808440ad5310ace138ced2179ca.1499786555.git.jpoimboe@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 2995590964da93e1fd9a91550f9c9d9fab28f160)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
(cherry picked from commit a753ff654dfd07a7f8d6f39a27126589eac7e55f)
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
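Editorial note (not part of the commit message or the diff below): the
following is a simplified sketch of the one-instruction window and its fix,
condensed from the hunks below. It relies on the surrounding ENTER_IRQ_STACK
macro context (PER_CPU_VAR, \old_rsp, the \@ label suffix) and is not meant
to assemble on its own.

	# Before: RSP already points at the IRQ stack after the cmovzq,
	# but the back-link to the previous stack is only written by the
	# pushq one instruction later.
	incl	PER_CPU_VAR(irq_count)
	cmovzq	PER_CPU_VAR(irq_stack_ptr), %rsp
	pushq	\old_rsp

	# After: the back-link is stored in the top slot of the IRQ stack
	# before RSP is switched, so the unwinder's invariant holds at
	# every instruction; nested entries (irq_count did not hit zero)
	# skip the switch and just push the old RSP.
	incl	PER_CPU_VAR(irq_count)
	jnz	.Lirq_stack_push_old_rsp_\@
	movq	\old_rsp, PER_CPU_VAR(irq_stack_union + IRQ_STACK_SIZE - 8)
	movq	PER_CPU_VAR(irq_stack_ptr), %rsp
.Lirq_stack_push_old_rsp_\@:
	pushq	\old_rsp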
 arch/x86/entry/entry_64.S | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

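Editorial note (not part of the commit message or the diff below): the
"writes the pointer to the stack twice" remark refers to the top slot of the
IRQ stack. Assuming irq_stack_ptr points at irq_stack_union + IRQ_STACK_SIZE,
which is effectively what the CONFIG_DEBUG_ENTRY assertion below
(cmpq -8(%rsp), \old_rsp) verifies, the same slot receives \old_rsp twice:

	movq	\old_rsp, PER_CPU_VAR(irq_stack_union + IRQ_STACK_SIZE - 8)	# first write
	movq	PER_CPU_VAR(irq_stack_ptr), %rsp	# RSP = top of the IRQ stack
	# ... (CONFIG_DEBUG_ENTRY assertion elided) ...
	pushq	\old_rsp	# second write: RSP - 8 is the same top slot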
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 07b4056af8a8..184b70712545 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -469,6 +469,7 @@ END(irq_entries_start)
 	DEBUG_ENTRY_ASSERT_IRQS_OFF
 	movq	%rsp, \old_rsp
 	incl	PER_CPU_VAR(irq_count)
+	jnz	.Lirq_stack_push_old_rsp_\@
 
 	/*
 	 * Right now, if we just incremented irq_count to zero, we've
@@ -478,9 +479,30 @@ END(irq_entries_start)
 	 * it must be *extremely* careful to limit its stack usage. This
 	 * could include kprobes and a hypothetical future IST-less #DB
 	 * handler.
+	 *
+	 * The OOPS unwinder relies on the word at the top of the IRQ
+	 * stack linking back to the previous RSP for the entire time we're
+	 * on the IRQ stack. For this to work reliably, we need to write
+	 * it before we actually move ourselves to the IRQ stack.
+	 */
+
+	movq	\old_rsp, PER_CPU_VAR(irq_stack_union + IRQ_STACK_SIZE - 8)
+	movq	PER_CPU_VAR(irq_stack_ptr), %rsp
+
+#ifdef CONFIG_DEBUG_ENTRY
+	/*
+	 * If the first movq above becomes wrong due to IRQ stack layout
+	 * changes, the only way we'll notice is if we try to unwind right
+	 * here. Assert that we set up the stack right to catch this type
+	 * of bug quickly.
+	 */
+	cmpq	-8(%rsp), \old_rsp
+	je	.Lirq_stack_okay\@
+	ud2
+	.Lirq_stack_okay\@:
+#endif
 
-	cmovzq	PER_CPU_VAR(irq_stack_ptr), %rsp
+.Lirq_stack_push_old_rsp_\@:
 	pushq	\old_rsp
 .endm
 
-- 
2.14.2
