patches/kernel/0046-x86-mm-64-Stop-using-CR3.PCID-0-in-ASID-aware-code.patch

From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Andy Lutomirski <luto@kernel.org>
Date: Sun, 17 Sep 2017 09:03:49 -0700
Subject: [PATCH] x86/mm/64: Stop using CR3.PCID == 0 in ASID-aware code
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

CVE-2017-5754

Putting the logical ASID into CR3's PCID bits directly means that we
have two cases to consider separately: ASID == 0 and ASID != 0.
This means that bugs that only hit in one of these cases trigger
nondeterministically.

There were some bugs like this in the past, and I think there's
still one in current kernels. In particular, we have a number of
ASID-unaware code paths that save CR3, write some special value, and
then restore CR3. This includes suspend/resume, hibernate, kexec,
EFI, and maybe other things I've missed. This is currently
dangerous: if ASID != 0, then this code sequence will leave garbage
in the TLB tagged for ASID 0. We could potentially see corruption
when switching back to ASID 0. In principle, an
initialize_tlbstate_and_flush() call after these sequences would
solve the problem, but EFI, at least, does not call this. (And it
probably shouldn't -- initialize_tlbstate_and_flush() is rather
expensive.)
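
As a minimal user-space sketch of the encoding change described above
(illustrative only, not part of the patch below; the helper names,
PCID_MASK and the example pgd address are stand-ins rather than kernel
identifiers):

  #include <stdint.h>
  #include <stdio.h>

  #define PCID_MASK 0xfffull              /* low 12 bits of CR3 hold the PCID */

  /* Old scheme: the logical ASID is written into CR3 unchanged. */
  static uint64_t old_build_cr3(uint64_t pgd_pa, uint16_t asid)
  {
          return pgd_pa | asid;           /* ASID 0 aliases the "no PCID" value 0 */
  }

  /* New scheme: ASID + 1, so every logical ASID maps to a nonzero PCID. */
  static uint64_t new_build_cr3(uint64_t pgd_pa, uint16_t asid)
  {
          return pgd_pa | (asid + 1ull);
  }

  int main(void)
  {
          uint64_t pgd_pa = 0x1234000ull; /* made-up, page-aligned pgd address */

          printf("old asid 0 -> PCID %llu\n",
                 (unsigned long long)(old_build_cr3(pgd_pa, 0) & PCID_MASK));
          printf("new asid 0 -> PCID %llu\n",
                 (unsigned long long)(new_build_cr3(pgd_pa, 0) & PCID_MASK));
          return 0;
  }

Run, the old scheme reports PCID 0 for ASID 0 (indistinguishable from a
CR3 value written by PCID-unaware code), while the new scheme reports
PCID 1.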

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bpetkov@suse.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/cdc14bbe5d3c3ef2a562be09a6368ffe9bd947a6.1505663533.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 52a2af400c1075219b3f0ce5c96fc961da44018a)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
(cherry picked from commit 15e474753e66e44da1365049f465427053a453ba)
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 arch/x86/include/asm/mmu_context.h | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index a999ba6b721f..c120b5db178a 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -286,14 +286,31 @@ static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
 	return __pkru_allows_pkey(vma_pkey(vma), write);
 }
 
+/*
+ * If PCID is on, ASID-aware code paths put the ASID+1 into the PCID
+ * bits. This serves two purposes. It prevents a nasty situation in
+ * which PCID-unaware code saves CR3, loads some other value (with PCID
+ * == 0), and then restores CR3, thus corrupting the TLB for ASID 0 if
+ * the saved ASID was nonzero. It also means that any bugs involving
+ * loading a PCID-enabled CR3 with CR4.PCIDE off will trigger
+ * deterministically.
+ */
+
 static inline unsigned long build_cr3(struct mm_struct *mm, u16 asid)
 {
-	return __sme_pa(mm->pgd) | asid;
+	if (static_cpu_has(X86_FEATURE_PCID)) {
+		VM_WARN_ON_ONCE(asid > 4094);
+		return __sme_pa(mm->pgd) | (asid + 1);
+	} else {
+		VM_WARN_ON_ONCE(asid != 0);
+		return __sme_pa(mm->pgd);
+	}
 }
 
 static inline unsigned long build_cr3_noflush(struct mm_struct *mm, u16 asid)
 {
-	return __sme_pa(mm->pgd) | asid | CR3_NOFLUSH;
+	VM_WARN_ON_ONCE(asid > 4094);
+	return __sme_pa(mm->pgd) | (asid + 1) | CR3_NOFLUSH;
 }
 
 /*
-- 
2.14.2