arm64: kaslr: Reserve size of ARM64_MEMSTART_ALIGN in linear region
author     Yueyi Li <liyueyi@live.com>
           Mon, 24 Dec 2018 07:40:07 +0000 (07:40 +0000)
committer  Kleber Sacilotto de Souza <kleber.souza@canonical.com>
           Wed, 14 Aug 2019 09:18:49 +0000 (11:18 +0200)
BugLink: https://bugs.launchpad.net/bugs/1838116
[ Upstream commit c8a43c18a97845e7f94ed7d181c11f41964976a2 ]

When KASLR is enabled (CONFIG_RANDOMIZE_BASE=y), the top 4K of kernel
virtual address space may be mapped to physical addresses despite being
reserved for ERR_PTR values.

Fix the randomization of the linear region so that we avoid mapping the
last page of the virtual address space.
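A minimal userspace sketch of the arithmetic (assuming, for illustration only,
a 1 GiB ARM64_MEMSTART_ALIGN and 4 GiB of slack in the linear region) shows the
worst case, memstart_offset_seed == 0xffff: with the old
"range = range / ARM64_MEMSTART_ALIGN + 1" the random offset can consume the
entire slack, so the linear map can reach the last page of the VA space,
whereas "range /= ARM64_MEMSTART_ALIGN" always leaves at least one
ARM64_MEMSTART_ALIGN-sized chunk unused:

/*
 * Worst-case randomization offset with the old and new formulas; the
 * alignment and slack values are illustrative, not a real configuration.
 */
#include <stdio.h>

#define ARM64_MEMSTART_ALIGN (1ULL << 30)   /* assumed 1 GiB for this example */

int main(void)
{
	unsigned long long slack = 4 * ARM64_MEMSTART_ALIGN; /* VA slack available for randomization */
	unsigned long long seed  = 0xffff;                   /* maximum 16-bit memstart_offset_seed */

	/* Old: range = range / ARM64_MEMSTART_ALIGN + 1; */
	unsigned long long old_range  = slack / ARM64_MEMSTART_ALIGN + 1;
	unsigned long long old_offset = ARM64_MEMSTART_ALIGN * ((old_range * seed) >> 16);

	/* New: range /= ARM64_MEMSTART_ALIGN; */
	unsigned long long new_range  = slack / ARM64_MEMSTART_ALIGN;
	unsigned long long new_offset = ARM64_MEMSTART_ALIGN * ((new_range * seed) >> 16);

	printf("slack      = %#llx\n", slack);
	printf("old offset = %#llx (whole slack consumed)\n", old_offset);
	printf("new offset = %#llx (one ARM64_MEMSTART_ALIGN chunk left unused)\n", new_offset);
	return 0;
}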

Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: liyueyi <liyueyi@live.com>
[will: rewrote commit message; merged in suggestion from Ard]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin (Microsoft) <sashal@kernel.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index caa295cd5d09aede3219042ae200fe2ef0b9e164..9e6c822d458dd825c1cadfb8083f403a4292970d 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -447,7 +447,7 @@ void __init arm64_memblock_init(void)
                 * memory spans, randomize the linear region as well.
                 */
                if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
-                       range = range / ARM64_MEMSTART_ALIGN + 1;
+                       range /= ARM64_MEMSTART_ALIGN;
                        memstart_addr -= ARM64_MEMSTART_ALIGN *
                                         ((range * memstart_offset_seed) >> 16);
                }