[pve-kernel.git] / patches / kernel / 0188-x86-mm-pti-Disable-global-pages-if-PAGE_TABLE_ISOLAT.patch
From 22da8888c8168530496ddfc0867181a8910089b3 Mon Sep 17 00:00:00 2001
From: Dave Hansen <dave.hansen@linux.intel.com>
Date: Mon, 4 Dec 2017 15:07:34 +0100
Subject: [PATCH 188/241] x86/mm/pti: Disable global pages if
 PAGE_TABLE_ISOLATION=y
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

CVE-2017-5754

Global pages stay in the TLB across context switches. Since all contexts
share the same kernel mapping, these mappings are marked as global pages
so kernel entries in the TLB are not flushed out on a context switch.

But even having these entries in the TLB opens up something that an
attacker can use, such as the double-page-fault attack:

 http://www.ieee-security.org/TC/SP2013/papers/4977a191.pdf

That means that even when PAGE_TABLE_ISOLATION switches page tables
on return to user space the global pages would stay in the TLB cache.

Disable global pages so that kernel TLB entries can be flushed before
returning to user space. This way, all accesses to kernel addresses from
userspace result in a TLB miss independent of the existence of a kernel
mapping.

Suppress global pages via the __supported_pte_mask. The user space
mappings set PAGE_GLOBAL for the minimal kernel mappings which are
required for entry/exit. These mappings are set up manually so the
filtering does not take place.

[ The __supported_pte_mask simplification was written by Thomas Gleixner. ]
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: linux-mm@kvack.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit c313ec66317d421fb5768d78c56abed2dc862264)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
(cherry picked from commit ace78e99d765da1e59f6b151adac6c360c67af7d)
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 arch/x86/mm/init.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index a22c2b95e513..020223420308 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -161,6 +161,12 @@ struct map_range {
 
 static int page_size_mask;
 
+static void enable_global_pages(void)
+{
+	if (!static_cpu_has(X86_FEATURE_PTI))
+		__supported_pte_mask |= _PAGE_GLOBAL;
+}
+
 static void __init probe_page_size_mask(void)
 {
 	/*
@@ -179,11 +185,11 @@ static void __init probe_page_size_mask(void)
 		cr4_set_bits_and_update_boot(X86_CR4_PSE);
 
 	/* Enable PGE if available */
+	__supported_pte_mask &= ~_PAGE_GLOBAL;
 	if (boot_cpu_has(X86_FEATURE_PGE)) {
 		cr4_set_bits_and_update_boot(X86_CR4_PGE);
-		__supported_pte_mask |= _PAGE_GLOBAL;
-	} else
-		__supported_pte_mask &= ~_PAGE_GLOBAL;
+		enable_global_pages();
+	}
 
 	/* Enable 1 GB linear kernel mappings if available: */
 	if (direct_gbpages && boot_cpu_has(X86_FEATURE_GBPAGES)) {
-- 
2.14.2
