From 650929f1bdce50bab031b0886ae91d459edcd18e Mon Sep 17 00:00:00 2001
From: Andrey Ryabinin <aryabinin@virtuozzo.com>
Date: Thu, 28 Dec 2017 19:06:20 +0300
Subject: [PATCH 236/241] x86/mm: Set MODULES_END to 0xffffffffff000000
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

commit f5a40711fa58f1c109165a4fec6078bf2dfd2bdc upstream.

Since f06bdd4001c2 ("x86/mm: Adapt MODULES_END based on fixmap section size")
kasan_mem_to_shadow(MODULES_END) may not be aligned to a page boundary.

Passing a page-unaligned address to kasan_populate_zero_shadow() has two
possible effects:

1) It may leave a one-page hole in an area that is supposed to be populated.
After commit 21506525fb8d ("x86/kasan/64: Teach KASAN about the
cpu_entry_area") that hole happens to be in the shadow covering the fixmap
area and leads to a crash:

BUG: unable to handle kernel paging request at fffffbffffe8ee04
RIP: 0010:check_memory_region+0x5c/0x190

Call Trace:
 <NMI>
 memcpy+0x1f/0x50
 ghes_copy_tofrom_phys+0xab/0x180
 ghes_read_estatus+0xfb/0x280
 ghes_notify_nmi+0x2b2/0x410
 nmi_handle+0x115/0x2c0
 default_do_nmi+0x57/0x110
 do_nmi+0xf8/0x150
 end_repeat_nmi+0x1a/0x1e

Note: the crash likely disappeared after commit 92a0f81d8957, which changed
the kasan_populate_zero_shadow() call back to the way it was before
commit 21506525fb8d.
2) An attempt to load a module near MODULES_END will fail, because
__vmalloc_node_range(), called from kasan_module_alloc(), will hit the
WARN_ON(!pte_none(*pte)) in vmap_pte_range() and bail out with an error.

To fix this we need to make kasan_mem_to_shadow(MODULES_END) page aligned,
which means that MODULES_END itself must be 8*PAGE_SIZE aligned.

The whole point of commit f06bdd4001c2 was to move MODULES_END down when
NR_CPUS is big and the cpu_entry_area therefore takes a lot of space.
But since 92a0f81d8957 ("x86/cpu_entry_area: Move it out of the fixmap")
the cpu_entry_area is no longer in the fixmap, so we can simply set
MODULES_END to a fixed, 8*PAGE_SIZE-aligned address.

Fixes: f06bdd4001c2 ("x86/mm: Adapt MODULES_END based on fixmap section size")
Reported-by: Jakub Kicinski <kubakici@wp.pl>
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Thomas Garnier <thgarnie@google.com>
Link: https://lkml.kernel.org/r/20171228160620.23818-1-aryabinin@virtuozzo.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 Documentation/x86/x86_64/mm.txt         | 5 +----
 arch/x86/include/asm/pgtable_64_types.h | 2 +-
 2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/Documentation/x86/x86_64/mm.txt b/Documentation/x86/x86_64/mm.txt
index ad41b3813f0a..ddd5ffd31bd0 100644
--- a/Documentation/x86/x86_64/mm.txt
+++ b/Documentation/x86/x86_64/mm.txt
@@ -43,7 +43,7 @@ ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
 ffffffef00000000 - fffffffeffffffff (=64 GB) EFI region mapping space
 ... unused hole ...
 ffffffff80000000 - ffffffff9fffffff (=512 MB) kernel text mapping, from phys 0
-ffffffffa0000000 - [fixmap start] (~1526 MB) module mapping space
+ffffffffa0000000 - fffffffffeffffff (1520 MB) module mapping space
 [fixmap start] - ffffffffff5fffff kernel-internal fixmap range
 ffffffffff600000 - ffffffffff600fff (=4 kB) legacy vsyscall ABI
 ffffffffffe00000 - ffffffffffffffff (=2 MB) unused hole
@@ -67,9 +67,6 @@ memory window (this size is arbitrary, it can be raised later if needed).
 The mappings are not part of any other kernel PGD and are only available
 during EFI runtime calls.
 
-The module mapping space size changes based on the CONFIG requirements for the
-following fixmap section.
-
 Note that if CONFIG_RANDOMIZE_MEMORY is enabled, the direct mapping of all
 physical memory, vmalloc/ioremap space and virtual memory map are randomized.
 Their order is preserved but their base will be offset early at boot time.
diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index e8a809ee0bb6..c92bd73b1e46 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -103,7 +103,7 @@ typedef struct { pteval_t pte; } pte_t;
 
 #define MODULES_VADDR (__START_KERNEL_map + KERNEL_IMAGE_SIZE)
 /* The module sections ends with the start of the fixmap */
-#define MODULES_END __fix_to_virt(__end_of_fixed_addresses + 1)
+#define MODULES_END _AC(0xffffffffff000000, UL)
 #define MODULES_LEN (MODULES_END - MODULES_VADDR)
 
 #define ESPFIX_PGD_ENTRY _AC(-2, UL)
-- 
2.14.2