git.proxmox.com Git - mirror_ubuntu-hirsute-kernel.git/log
5 years agopowerpc/32: Don't add dummy frames when calling trace_hardirqs_on/off
Christophe Leroy [Tue, 30 Apr 2019 12:39:05 +0000 (12:39 +0000)]
powerpc/32: Don't add dummy frames when calling trace_hardirqs_on/off

No need to add dummy frames when calling trace_hardirqs_on or
trace_hardirqs_off. GCC properly handles empty stacks.

In addition, powerpc doesn't set CONFIG_FRAME_POINTER, therefore
__builtin_return_address(1..) returns NULL at all times. So the
dummy frames are definitely unneeded here.

While at it, avoid reading memory to load r1 with a value
we already know.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/32: don't do syscall stuff in transfer_to_handler
Christophe Leroy [Tue, 30 Apr 2019 12:39:04 +0000 (12:39 +0000)]
powerpc/32: don't do syscall stuff in transfer_to_handler

As syscalls are now handled via a fast entry path, syscall related
actions can be removed from the generic transfer_to_handler path.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/32: implement fast entry for syscalls on BOOKE
Christophe Leroy [Tue, 30 Apr 2019 12:39:03 +0000 (12:39 +0000)]
powerpc/32: implement fast entry for syscalls on BOOKE

This patch implements a fast entry for syscalls.

Syscalls don't have to preserve non-volatile registers except LR.

The fast entry therefore lets volatile registers be clobbered.

As this entry is dedicated to syscalls, it always sets MSR_EE and
warns in case MSR_EE was previously off.

It also assumes that the call is always from user mode; system calls
from the kernel are unexpected.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/32: implement fast entry for syscalls on non BOOKE
Christophe Leroy [Tue, 30 Apr 2019 12:39:02 +0000 (12:39 +0000)]
powerpc/32: implement fast entry for syscalls on non BOOKE

This patch implements a fast entry for syscalls.

Syscalls don't have to preserve non-volatile registers except LR.

The fast entry therefore lets volatile registers be clobbered.

As this entry is dedicated to syscalls, it always sets MSR_EE and
warns in case MSR_EE was previously off.

It also assumes that the call is always from user mode; system calls
from the kernel are unexpected.

The overall series improves the null_syscall selftest by 12.5% on an 83xx
and by 17% on an 8xx.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc: Fix 32-bit handling of MSR_EE on exceptions
Christophe Leroy [Tue, 30 Apr 2019 12:39:01 +0000 (12:39 +0000)]
powerpc: Fix 32-bit handling of MSR_EE on exceptions

[text mostly copied from benh's RFC/WIP]

On ppc32 we are still doing something rather gothic and wrong,
which we stopped doing on 64-bit a while ago.

We have that thing where some handlers "copy" the EE value from the
original stack frame into the new MSR before transferring to the
handler.

Thus for a number of exceptions, we enter the handlers with interrupts
enabled.

This is rather fishy: some of the stuff that handlers might do early
on, such as irq_enter/exit, user_exit or context tracking, should be
run with interrupts off.

Generally our handlers know when to re-enable interrupts if needed.

The problem we were having is that we assumed these interrupts would
return with interrupts enabled. However that isn't the case.

Instead, this patch changes things so that we always enter exception
handlers with interrupts *off* with the notable exception of syscalls
which are special (and get a fast path).

Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/32: get rid of COPY_EE in exception entry
Christophe Leroy [Tue, 30 Apr 2019 12:39:00 +0000 (12:39 +0000)]
powerpc/32: get rid of COPY_EE in exception entry

EXC_XFER_TEMPLATE() is not called with COPY_EE anymore, so we can
get rid of the copyee parameter and the related COPY_EE and NOCOPY
macros.

Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[split out from benh's RFC patch]

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/32: Enter exceptions with MSR_EE unset
Christophe Leroy [Tue, 30 Apr 2019 12:38:59 +0000 (12:38 +0000)]
powerpc/32: Enter exceptions with MSR_EE unset

All exception handlers know when to re-enable interrupts, so
it is safer to enter all of them with MSR_EE unset, except
for syscalls.

Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[split out from benh's RFC patch]

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/32: enter syscall with MSR_EE inconditionaly set
Christophe Leroy [Tue, 30 Apr 2019 12:38:58 +0000 (12:38 +0000)]
powerpc/32: enter syscall with MSR_EE inconditionaly set

Syscalls are expected to be entered with MSR_EE set. Let's
make that unconditional by forcing MSR_EE on syscall entry.

This patch adds EXC_XFER_SYS for that.

Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[split out from benh's RFC patch]

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/fsl_booke: ensure SPEFloatingPointException() reenables interrupts
Christophe Leroy [Tue, 30 Apr 2019 12:38:57 +0000 (12:38 +0000)]
powerpc/fsl_booke: ensure SPEFloatingPointException() reenables interrupts

SPEFloatingPointException() is the only exception handler which 'forgets' to
re-enable interrupts. This patch makes sure it does.

Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/40x: Refactor exception entry macros by using head_32.h
Christophe Leroy [Tue, 30 Apr 2019 12:38:56 +0000 (12:38 +0000)]
powerpc/40x: Refactor exception entry macros by using head_32.h

Refactor exception entry macros by using the ones defined in head_32.h

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/40x: Split and rename NORMAL_EXCEPTION_PROLOG
Christophe Leroy [Tue, 30 Apr 2019 12:38:55 +0000 (12:38 +0000)]
powerpc/40x: Split and rename NORMAL_EXCEPTION_PROLOG

This patch splits NORMAL_EXCEPTION_PROLOG in the same way as in
head_8xx.S and head_32.S, and renames it EXCEPTION_PROLOG() to
match head_32.h.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/40x: add exception frame marker
Christophe Leroy [Tue, 30 Apr 2019 12:38:54 +0000 (12:38 +0000)]
powerpc/40x: add exception frame marker

This patch adds STACK_FRAME_REGS_MARKER in the stack at exception entry
in order to see interrupts in call traces as below:

[    0.013964] Call Trace:
[    0.014014] [c0745db0] [c007a9d4] tick_periodic.constprop.5+0xd8/0x104 (unreliable)
[    0.014086] [c0745dc0] [c007aa20] tick_handle_periodic+0x20/0x9c
[    0.014181] [c0745de0] [c0009cd0] timer_interrupt+0xa0/0x264
[    0.014258] [c0745e10] [c000e484] ret_from_except+0x0/0x14
[    0.014390] --- interrupt: 901 at console_unlock.part.7+0x3f4/0x528
[    0.014390]     LR = console_unlock.part.7+0x3f0/0x528
[    0.014455] [c0745ee0] [c0050334] console_unlock.part.7+0x114/0x528 (unreliable)
[    0.014542] [c0745f30] [c00524e0] register_console+0x3d8/0x44c
[    0.014625] [c0745f60] [c0675aac] cpm_uart_console_init+0x18/0x2c
[    0.014709] [c0745f70] [c06614f4] console_init+0x114/0x1cc
[    0.014795] [c0745fb0] [c0658b68] start_kernel+0x300/0x3d8
[    0.014864] [c0745ff0] [c00022cc] start_here+0x44/0x98

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/40x: Don't use SPRN_SPRG_SCRATCH2 in EXCEPTION_PROLOG
Christophe Leroy [Tue, 30 Apr 2019 12:38:53 +0000 (12:38 +0000)]
powerpc/40x: Don't use SPRN_SPRG_SCRATCH2 in EXCEPTION_PROLOG

Contrary to what the comment says, r1 is not reused by the critical
exception handler, as it uses a dedicated critirq_ctx stack.
Decrementing r1 early is therefore unneeded.

And even if the comment were right, the code would be buggy anyway, as
r1 takes some intermediate values that would jeopardise the
whole process (for instance after mfspr r1,SPRN_SPRG_THREAD).

Using SPRN_SPRG_SCRATCH2 to save r1 is therefore not needed; r11 can be
used instead. This avoids one mtspr and one mfspr and makes the
prolog closer to what's done on 6xx and 8xx.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/32: make the 6xx/8xx EXC_XFER_TEMPLATE() similar to the 40x/booke one
Christophe Leroy [Tue, 30 Apr 2019 12:38:52 +0000 (12:38 +0000)]
powerpc/32: make the 6xx/8xx EXC_XFER_TEMPLATE() similar to the 40x/booke one

The 6xx/8xx EXC_XFER_TEMPLATE() macro adds an i##n symbol which is
unused and can be removed.
The 40x and booke EXC_XFER_TEMPLATE() macros take the MSR value from
the caller, while the 6xx/8xx version always uses MSR_KERNEL.

This patch modifies the 6xx/8xx version to make it similar to the
40x and booke versions.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/32: move LOAD_MSR_KERNEL() into head_32.h and use it
Christophe Leroy [Tue, 30 Apr 2019 12:38:51 +0000 (12:38 +0000)]
powerpc/32: move LOAD_MSR_KERNEL() into head_32.h and use it

In preparation for using head_32.h for head_40x.S, move
LOAD_MSR_KERNEL() there and use it to load r10 with the MSR_KERNEL value.

While at it, this patch modifies it so that it takes into account the
size of the passed value to determine whether 'li' can be used or
whether 'lis/ori' is needed, instead of relying on the size of
MSR_KERNEL. This is done by using a GAS macro.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/32: Refactor EXCEPTION entry macros for head_8xx.S and head_32.S
Christophe Leroy [Tue, 30 Apr 2019 12:38:50 +0000 (12:38 +0000)]
powerpc/32: Refactor EXCEPTION entry macros for head_8xx.S and head_32.S

EXCEPTION_PROLOG is similar in head_8xx.S and head_32.S.

This patch creates head_32.h and moves the EXCEPTION_PROLOG macro
into it. It also converts it from a C preprocessor macro to a GAS
macro in order to ease the refactoring with 40x later, since GAS
macros allow the use of #ifdef/#else/#endif inside them. It also has
the advantage of not requiring the ugly "; \"
at the end of each line.

This patch also moves the EXCEPTION() and EXC_XFER_XXXX() macros, which
are also similar, and splits START_EXCEPTION() out of EXCEPTION().

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: print hash info in a helper
Christophe Leroy [Fri, 26 Apr 2019 16:36:39 +0000 (16:36 +0000)]
powerpc/mm: print hash info in a helper

Reduce the #ifdef mess by defining a helper to print
hash info at startup.

In the meantime, stop displaying the hash table address
to avoid leaking unnecessary information.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/32s: don't try to print hash table address.
Christophe Leroy [Fri, 26 Apr 2019 16:36:37 +0000 (16:36 +0000)]
powerpc/32s: don't try to print hash table address.

Due to %p, "(ptrval)" is printed in lieu of the hash table address.

Showing the hash table address isn't an operational need, so just
don't print it.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/32s: drop Hash_end
Christophe Leroy [Fri, 26 Apr 2019 16:36:36 +0000 (16:36 +0000)]
powerpc/32s: drop Hash_end

Hash_end has never been used, drop it.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/32s: map kasan zero shadow with PAGE_READONLY instead of PAGE_KERNEL_RO
Christophe Leroy [Fri, 26 Apr 2019 16:23:37 +0000 (16:23 +0000)]
powerpc/32s: map kasan zero shadow with PAGE_READONLY instead of PAGE_KERNEL_RO

For hash32, the zero shadow page gets mapped with PAGE_READONLY instead
of PAGE_KERNEL_RO, because the PP bits don't provide a RO kernel, so
PAGE_KERNEL_RO is equivalent to PAGE_KERNEL. By using PAGE_READONLY,
the page is RO for both kernel and user, but this is not a security issue
as it contains only zeroes.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/32s: set up an early static hash table for KASAN.
Christophe Leroy [Fri, 26 Apr 2019 16:23:36 +0000 (16:23 +0000)]
powerpc/32s: set up an early static hash table for KASAN.

KASAN requires early activation of the hash table, before memblock
functions are available.

This patch implements an early hash_table statically defined in
__initdata.

During early boot, a single page table is used.

For hash32, when doing the final init, one page table is allocated
for each PGD entry because of the _PAGE_HASHPTE flag which can't be
common to several virt pages. This is done after memblock becomes
available but before switching to the final hash table, otherwise
there are issues with TLB flushing due to the shared entries.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/32s: move hash code patching out of MMU_init_hw()
Christophe Leroy [Fri, 26 Apr 2019 16:23:35 +0000 (16:23 +0000)]
powerpc/32s: move hash code patching out of MMU_init_hw()

For KASAN, hash table handling will be activated early for
accessing to KASAN shadow areas.

In order to avoid any modification of the hash functions while
they are still used with the early hash table, the code patching
is moved out of MMU_init_hw() and put close to the big-bang switch
to the final hash table.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/32: Add KASAN support
Christophe Leroy [Fri, 26 Apr 2019 16:23:34 +0000 (16:23 +0000)]
powerpc/32: Add KASAN support

This patch adds KASAN support for PPC32. The following patch
will add an early activation of the hash table for book3s. Until
then, a warning will be raised if trying to use KASAN on a
hash 6xx.

To support KASAN, this patch initialises the MMU mappings for
accessing the KASAN shadow area defined in a previous patch.

An early mapping is set up as soon as the kernel code has been
relocated to its definitive place.

Then the definitive mapping is set up once paging is initialised.

For modules, the shadow area is allocated at module_alloc() time.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc: disable KASAN instrumentation on early/critical files.
Christophe Leroy [Fri, 26 Apr 2019 16:23:33 +0000 (16:23 +0000)]
powerpc: disable KASAN instrumentation on early/critical files.

All files containing functions run before kasan_early_init() is called
must have KASAN instrumentation disabled.

For those files, branch profiling also has to be disabled, otherwise
each if () generates a call to ftrace_likely_update().

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/32: prepare shadow area for KASAN
Christophe Leroy [Fri, 26 Apr 2019 16:23:32 +0000 (16:23 +0000)]
powerpc/32: prepare shadow area for KASAN

This patch prepares a shadow area for KASAN.

The shadow area will be at the top of the kernel virtual
memory space above the fixmap area and will occupy one
eighth of the total kernel virtual memory space.
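
For reference, the one-eighth ratio comes from the generic KASAN scheme
where one shadow byte covers eight bytes of memory. A simplified sketch
of the usual address translation (the offset value below is purely
illustrative, not the actual powerpc constant):

  #define KASAN_SHADOW_SCALE_SHIFT    3             /* 1 shadow byte per 8 bytes */
  #define KASAN_SHADOW_OFFSET_EXAMPLE 0xf8000000UL  /* illustrative value only */

  static inline void *kasan_mem_to_shadow_example(const void *addr)
  {
          return (void *)(((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                          + KASAN_SHADOW_OFFSET_EXAMPLE);
  }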

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/32: make KVIRT_TOP dependent on FIXMAP_START
Christophe Leroy [Fri, 26 Apr 2019 16:23:31 +0000 (16:23 +0000)]
powerpc/32: make KVIRT_TOP dependent on FIXMAP_START

When we add the KASAN shadow area, KVIRT_TOP can no longer be fixed
at 0xfe000000.

This patch uses FIXADDR_START to define KVIRT_TOP.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/32: use memset() instead of memset_io() to zero BSS
Christophe Leroy [Fri, 26 Apr 2019 16:23:30 +0000 (16:23 +0000)]
powerpc/32: use memset() instead of memset_io() to zero BSS

Since commit 400c47d81ca38 ("powerpc32: memset: only use dcbz once cache is
enabled"), memset() can be used before activation of the cache,
so no need to use memset_io() for zeroing the BSS.

Acked-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc: don't use direct assignation during early boot.
Christophe Leroy [Fri, 26 Apr 2019 16:23:29 +0000 (16:23 +0000)]
powerpc: don't use direct assignation during early boot.

In kernel/cputable.c, explicitly use memcpy() instead of *y = *x;
This will allow GCC to replace it with __memcpy() when KASAN is
selected.
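
As a rough illustration (the structure and function names below are made
up, not the actual cputable.c code), the change is of this form:

  #include <linux/string.h>

  struct spec_example { int a; int b; };

  static void copy_spec(struct spec_example *y, const struct spec_example *x)
  {
          /*
           * Before: "*y = *x;", a direct assignment which GCC may lower
           * to an instrumented memcpy() call behind our back.
           * After: an explicit call, which early-boot files built with
           * KASAN disabled can redirect to the uninstrumented __memcpy().
           */
          memcpy(y, x, sizeof(*y));
  }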

Acked-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/prom_init: don't use string functions from lib/
Christophe Leroy [Fri, 26 Apr 2019 16:23:28 +0000 (16:23 +0000)]
powerpc/prom_init: don't use string functions from lib/

When KASAN is active, the string functions in lib/ are doing the
KASAN checks. This is too early for prom_init.

This patch implements dedicated string functions for prom_init,
which will be compiled in with KASAN disabled.

Size of prom_init before the patch:
   text    data     bss     dec     hex filename
  12060     488    6960   19508    4c34 arch/powerpc/kernel/prom_init.o

Size of prom_init after the patch:
   text    data     bss     dec     hex filename
  12460     488    6960   19908    4dc4 arch/powerpc/kernel/prom_init.o

This increases the size of prom_init a bit, but as prom_init is
in __init section, it is freed after boot anyway.
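
For illustration, a dedicated helper of the kind this patch adds could
look like the sketch below; the exact names and set of helpers in
prom_init.c may differ:

  /* Built in a file with KASAN instrumentation disabled, so it never
   * touches shadow memory this early in boot. */
  static size_t __init prom_strlen(const char *s)
  {
          const char *p = s;

          while (*p)
                  p++;
          return p - s;
  }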

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc: remove CONFIG_CMDLINE #ifdef mess
Christophe Leroy [Fri, 26 Apr 2019 16:23:27 +0000 (16:23 +0000)]
powerpc: remove CONFIG_CMDLINE #ifdef mess

This patch makes CONFIG_CMDLINE defined at all times. It avoids
having to enclose related code inside #ifdef CONFIG_CMDLINE.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc: prepare string/mem functions for KASAN
Christophe Leroy [Fri, 26 Apr 2019 16:23:26 +0000 (16:23 +0000)]
powerpc: prepare string/mem functions for KASAN

CONFIG_KASAN implements wrappers for memcpy(), memmove() and memset().
Those wrappers do the verification and then call respectively
__memcpy(), __memmove() and __memset(). The arches are therefore
expected to rename their optimised functions that way.

For files on which KASAN is inhibited, #defines are used to allow
them to directly call the optimised versions of the functions without
going through the KASAN wrappers.

See commit 393f203f5fd5 ("x86_64: kasan: add interceptors for
memset/memmove/memcpy functions") for details.

Other string/mem functions do not (yet) have KASAN wrappers, so
we have to fall back to the generic versions when KASAN is active,
otherwise KASAN checks would be skipped.
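
A minimal sketch of the pattern (simplified; the real mm/kasan and
powerpc string code differ in detail):

  /* KASAN side: the exported memcpy() checks both buffers, then defers
   * to the arch's renamed, uninstrumented implementation. */
  void *memcpy(void *dst, const void *src, size_t len)
  {
          check_memory_region((unsigned long)src, len, false, _RET_IP_);
          check_memory_region((unsigned long)dst, len, true, _RET_IP_);
          return __memcpy(dst, src, len);
  }

  /* Files where KASAN is inhibited skip the wrappers entirely: */
  #if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
  #define memcpy(dst, src, len) __memcpy(dst, src, len)
  #endif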

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Fixups to keep selftests working]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/32: Move early_init() in a separate file
Christophe Leroy [Fri, 26 Apr 2019 16:23:25 +0000 (16:23 +0000)]
powerpc/32: Move early_init() in a separate file

In preparation for KASAN, move early_init() into a separate
file in order to allow deactivation of KASAN for that function.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: refactor pgd_alloc() and pgd_free() on nohash
Christophe Leroy [Fri, 26 Apr 2019 15:58:13 +0000 (15:58 +0000)]
powerpc/mm: refactor pgd_alloc() and pgd_free() on nohash

pgd_alloc() and pgd_free() are identical on nohash 32 and 64.

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: refactor pmd_pgtable()
Christophe Leroy [Fri, 26 Apr 2019 15:58:12 +0000 (15:58 +0000)]
powerpc/mm: refactor pmd_pgtable()

pmd_pgtable() is identical on the 4 subarches, refactor it.

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: refactor pgtable freeing functions on nohash
Christophe Leroy [Fri, 26 Apr 2019 15:58:11 +0000 (15:58 +0000)]
powerpc/mm: refactor pgtable freeing functions on nohash

pgtable_free() and others are identical on nohash/32 and 64,
so move them into asm/nohash/pgalloc.h

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: Only keep one version of pmd_populate() functions on nohash/32
Christophe Leroy [Fri, 26 Apr 2019 15:58:10 +0000 (15:58 +0000)]
powerpc/mm: Only keep one version of pmd_populate() functions on nohash/32

Use IS_ENABLED(CONFIG_BOOKE) to make single versions of
pmd_populate() and pmd_populate_kernel()

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: refactor definition of pgtable_cache[]
Christophe Leroy [Fri, 26 Apr 2019 15:58:09 +0000 (15:58 +0000)]
powerpc/mm: refactor definition of pgtable_cache[]

pgtable_cache[] is the same for the 4 subarches, let's make it common.

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: refactor pte_alloc_one() and pte_free() families definition.
Christophe Leroy [Fri, 26 Apr 2019 15:58:08 +0000 (15:58 +0000)]
powerpc/mm: refactor pte_alloc_one() and pte_free() families definition.

Functions pte_alloc_one(), pte_alloc_one_kernel(), pte_free(),
pte_free_kernel() are identical for the four subarches.

This patch moves their definition in a common place.

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: inline pte_alloc_one_kernel() and pte_alloc_one() on PPC32
Christophe Leroy [Fri, 26 Apr 2019 15:58:07 +0000 (15:58 +0000)]
powerpc/mm: inline pte_alloc_one_kernel() and pte_alloc_one() on PPC32

pte_alloc_one_kernel() and pte_alloc_one() are simple calls to
pte_fragment_alloc(), so they are good candidates for inlining as
already done on PPC64.

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: don't use pte_alloc_kernel() until slab is available on PPC32
Christophe Leroy [Fri, 26 Apr 2019 15:58:06 +0000 (15:58 +0000)]
powerpc/mm: don't use pte_alloc_kernel() until slab is available on PPC32

In the same way as PPC64, implement early allocation functions and
avoid calling pte_alloc_kernel() before slab is available.
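
A rough sketch of such an early allocation helper, assuming the usual
memblock API (the actual PPC32 helper and its constants may differ):

  static pte_t __init *early_pte_alloc_kernel(pmd_t *pmdp, unsigned long va)
  {
          if (pmd_none(*pmdp)) {
                  pte_t *ptep = memblock_alloc(PTE_FRAG_SIZE, PTE_FRAG_SIZE);

                  if (!ptep)
                          panic("%s: failed to allocate page table\n", __func__);
                  pmd_populate_kernel(&init_mm, pmdp, ptep);
          }
          return pte_offset_kernel(pmdp, va);
  }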

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/book3e: move early_alloc_pgtable() to init section
Christophe Leroy [Fri, 26 Apr 2019 15:58:05 +0000 (15:58 +0000)]
powerpc/book3e: move early_alloc_pgtable() to init section

early_alloc_pgtable() is only used during init.

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/Kconfig: select PPC_MM_SLICES from subarch type
Christophe Leroy [Fri, 26 Apr 2019 15:58:04 +0000 (15:58 +0000)]
powerpc/Kconfig: select PPC_MM_SLICES from subarch type

Let's select PPC_MM_SLICES from the subarch config item instead of
doing it via default declarations in the PPC_MM_SLICES item itself.

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: get rid of nohash/32/mmu.h and nohash/64/mmu.h
Christophe Leroy [Fri, 26 Apr 2019 15:58:03 +0000 (15:58 +0000)]
powerpc/mm: get rid of nohash/32/mmu.h and nohash/64/mmu.h

Those files have no real added value, especially the 64-bit one,
which only includes the common book3e mmu.h that is also
included from the 32-bit side.

So let's do the final inclusion directly from nohash/mmu.h.

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: move pgtable_t in asm/mmu.h
Christophe Leroy [Fri, 26 Apr 2019 15:58:02 +0000 (15:58 +0000)]
powerpc/mm: move pgtable_t in asm/mmu.h

pgtable_t is now identical for all subarches, move it to the
top level asm/mmu.h

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: convert Book3E 64 to pte_fragment
Christophe Leroy [Fri, 26 Apr 2019 15:58:01 +0000 (15:58 +0000)]
powerpc/mm: convert Book3E 64 to pte_fragment

Book3E 64 is the only subarch not using pte_fragment. In order
to allow refactorisation, this patch converts it to pte_fragment.

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: drop __bad_pte()
Christophe Leroy [Fri, 26 Apr 2019 15:57:59 +0000 (15:57 +0000)]
powerpc/mm: drop __bad_pte()

This has never been called (at least since the kernel has been in git),
so drop it.

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: flatten function __find_linux_pte() step 3
Christophe Leroy [Fri, 26 Apr 2019 05:59:53 +0000 (05:59 +0000)]
powerpc/mm: flatten function __find_linux_pte() step 3

__find_linux_pte() is full of if/else which is hard to
follow although the handling is pretty simple.

Previous patches left a { } block. This patch removes it.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: flatten function __find_linux_pte() step 2
Christophe Leroy [Fri, 26 Apr 2019 05:59:52 +0000 (05:59 +0000)]
powerpc/mm: flatten function __find_linux_pte() step 2

__find_linux_pte() is full of if/else which is hard to
follow although the handling is pretty simple.

Previous patch left { } blocks. This patch removes the first one
by shifting its content to the left.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: flatten function __find_linux_pte() step 1
Christophe Leroy [Fri, 26 Apr 2019 05:59:51 +0000 (05:59 +0000)]
powerpc/mm: flatten function __find_linux_pte() step 1

__find_linux_pte() is full of if/else which is hard to
follow although the handling is pretty simple.

This patch flattens the function by getting rid of as much if/else
as possible. In order to ease the review, this is done in three steps.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: cleanup remaining ifdef mess in hugetlbpage.c
Christophe Leroy [Fri, 26 Apr 2019 05:59:49 +0000 (05:59 +0000)]
powerpc/mm: cleanup remaining ifdef mess in hugetlbpage.c

Only 3 subarches support huge pages. So when it is either of 2 of them,
it is not the third one.

And mmu_has_feature() is known by all subarches, so IS_ENABLED() can
be used instead of #ifdef.
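
As a generic illustration of the pattern (the helper name below is made
up, not an actual hugetlbpage.c hunk):

  /* do_fsl_book3e_setup() is a made-up helper for illustration only */
  static void do_fsl_book3e_setup(void) { }

  static void hugepage_setup_example(void)
  {
          /*
           * Before:
           *   #ifdef CONFIG_PPC_FSL_BOOK3E
           *           do_fsl_book3e_setup();
           *   #endif
           *
           * After: the branch stays visible to the compiler on every
           * config (so it is always compile-tested) and is optimised
           * away when the option is off.
           */
          if (IS_ENABLED(CONFIG_PPC_FSL_BOOK3E))
                  do_fsl_book3e_setup();
  }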

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: cleanup HPAGE_SHIFT setup
Christophe Leroy [Fri, 26 Apr 2019 05:59:48 +0000 (05:59 +0000)]
powerpc/mm: cleanup HPAGE_SHIFT setup

Only book3s/64 may select the default among several HPAGE_SHIFT values
at runtime.
8xx always defines 512K pages as the default.
FSL_BOOK3E always defines 4M pages as the default.

This patch limits HUGETLB_PAGE_SIZE_VARIABLE to book3s/64 and
moves the definitions into the subarch files.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: move hugetlb_disabled into asm/hugetlb.h
Christophe Leroy [Fri, 26 Apr 2019 05:59:47 +0000 (05:59 +0000)]
powerpc/mm: move hugetlb_disabled into asm/hugetlb.h

No need to have this in asm/page.h, move it into asm/hugetlb.h

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: cleanup ifdef mess in add_huge_page_size()
Christophe Leroy [Fri, 26 Apr 2019 05:59:46 +0000 (05:59 +0000)]
powerpc/mm: cleanup ifdef mess in add_huge_page_size()

Introduce a subarch specific helper check_and_get_huge_psize()
to check the huge page sizes and cleanup the ifdef mess in
add_huge_page_size()

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: add a helper to populate hugepd
Christophe Leroy [Fri, 26 Apr 2019 05:59:45 +0000 (05:59 +0000)]
powerpc/mm: add a helper to populate hugepd

This patch adds a subarch helper to populate hugepd.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: split asm/hugetlb.h into dedicated subarch files
Christophe Leroy [Fri, 26 Apr 2019 05:59:44 +0000 (05:59 +0000)]
powerpc/mm: split asm/hugetlb.h into dedicated subarch files

Three subarches support hugepages:
  - fsl book3e
  - book3s/64
  - 8xx

This patch splits asm/hugetlb.h to reduce the #ifdef mess.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: make gup_hugepte() static
Christophe Leroy [Fri, 26 Apr 2019 05:59:43 +0000 (05:59 +0000)]
powerpc/mm: make gup_hugepte() static

gup_huge_pd() is the only user of gup_hugepte() and it is
located in the same file. This patch moves gup_huge_pd()
after gup_hugepte() and makes gup_hugepte() static.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: make hugetlbpage.c depend on CONFIG_HUGETLB_PAGE
Christophe Leroy [Fri, 26 Apr 2019 05:59:42 +0000 (05:59 +0000)]
powerpc/mm: make hugetlbpage.c depend on CONFIG_HUGETLB_PAGE

The only function in hugetlbpage.c which doesn't depend on
CONFIG_HUGETLB_PAGE is gup_hugepte(), and this function is
only called from gup_huge_pd(), which depends on
CONFIG_HUGETLB_PAGE, so all the content of hugetlbpage.c
depends on CONFIG_HUGETLB_PAGE.

This patch modifies Makefile to only compile hugetlbpage.c
when CONFIG_HUGETLB_PAGE is set.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: move __find_linux_pte() out of hugetlbpage.c
Christophe Leroy [Fri, 26 Apr 2019 05:59:41 +0000 (05:59 +0000)]
powerpc/mm: move __find_linux_pte() out of hugetlbpage.c

__find_linux_pte() is the only function in hugetlbpage.c
which is compiled in regardless of CONFIG_HUGETLB_PAGE.

This patch moves it into pgtable.c.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/book3e: hugetlbpage is only for CONFIG_PPC_FSL_BOOK3E
Christophe Leroy [Fri, 26 Apr 2019 05:59:40 +0000 (05:59 +0000)]
powerpc/book3e: hugetlbpage is only for CONFIG_PPC_FSL_BOOK3E

As per Kconfig.cputype, only CONFIG_PPC_FSL_BOOK3E gets to
select SYS_SUPPORTS_HUGETLBFS so simplify accordingly.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/64: only book3s/64 supports CONFIG_PPC_64K_PAGES
Christophe Leroy [Fri, 26 Apr 2019 05:59:39 +0000 (05:59 +0000)]
powerpc/64: only book3s/64 supports CONFIG_PPC_64K_PAGES

CONFIG_PPC_64K_PAGES cannot be selected by nohash/64.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/book3e: drop mmu_get_tsize()
Christophe Leroy [Fri, 26 Apr 2019 05:59:38 +0000 (05:59 +0000)]
powerpc/book3e: drop mmu_get_tsize()

This function is not used anymore, drop it.

Fixes: b42279f0165c ("powerpc/mm/nohash: MM_SLICE is only used by book3s 64")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: define subarch SLB_ADDR_LIMIT_DEFAULT
Christophe Leroy [Thu, 25 Apr 2019 14:29:36 +0000 (14:29 +0000)]
powerpc/mm: define subarch SLB_ADDR_LIMIT_DEFAULT

This patch defines a subarch specific SLB_ADDR_LIMIT_DEFAULT
to remove the #ifdefs around the setup of mm->context.slb_addr_limit.

It also generalises the use of the mm_ctx_set_slb_addr_limit() helper.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: define get_slice_psize() all the time
Christophe Leroy [Thu, 25 Apr 2019 14:29:35 +0000 (14:29 +0000)]
powerpc/mm: define get_slice_psize() all the time

get_slice_psize() can be defined regardless of CONFIG_PPC_MM_SLICES
to avoid ifdefs

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/8xx: get rid of #ifdef CONFIG_HUGETLB_PAGE for slices
Christophe Leroy [Thu, 25 Apr 2019 14:29:34 +0000 (14:29 +0000)]
powerpc/8xx: get rid of #ifdef CONFIG_HUGETLB_PAGE for slices

The 8xx only selects CONFIG_PPC_MM_SLICES when CONFIG_HUGETLB_PAGE
is set.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: remove a couple of #ifdef CONFIG_PPC_64K_PAGES in mm/slice.c
Christophe Leroy [Thu, 25 Apr 2019 14:29:33 +0000 (14:29 +0000)]
powerpc/mm: remove a couple of #ifdef CONFIG_PPC_64K_PAGES in mm/slice.c

This patch replaces a couple of #ifdef CONFIG_PPC_64K_PAGES
by IS_ENABLED(CONFIG_PPC_64K_PAGES) to improve code maintainability.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: remove unnecessary #ifdef CONFIG_PPC64
Christophe Leroy [Thu, 25 Apr 2019 14:29:32 +0000 (14:29 +0000)]
powerpc/mm: remove unnecessary #ifdef CONFIG_PPC64

For PPC32 that's a noop; GCC should be smart enough to ignore it.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: get rid of mm_ctx_slice_mask_xxx()
Christophe Leroy [Thu, 25 Apr 2019 14:29:31 +0000 (14:29 +0000)]
powerpc/mm: get rid of mm_ctx_slice_mask_xxx()

Now that slice_mask_for_size() is in mmu.h, the mm_ctx_slice_mask_xxx()
helpers are not needed anymore, so drop them. Note that the 8xx ones were
not used anyway.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: move slice_mask_for_size() into mmu.h
Christophe Leroy [Thu, 25 Apr 2019 14:29:30 +0000 (14:29 +0000)]
powerpc/mm: move slice_mask_for_size() into mmu.h

Move slice_mask_for_size() into subarch mmu.h

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
[mpe: Retain the BUG_ON()s, rather than converting to VM_BUG_ON()]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: hand a context_t over to slice_mask_for_size() instead of mm_struct
Christophe Leroy [Thu, 25 Apr 2019 14:29:29 +0000 (14:29 +0000)]
powerpc/mm: hand a context_t over to slice_mask_for_size() instead of mm_struct

slice_mask_for_size() only uses mm->context, so hand it a pointer to
the context directly. This will help move the function into the
subarch mmu.h in the next patch by avoiding the need to include
the definition of struct mm_struct.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: no slice for nohash/64
Christophe Leroy [Thu, 25 Apr 2019 14:29:28 +0000 (14:29 +0000)]
powerpc/mm: no slice for nohash/64

Only nohash/32 and book3s/64 support mm slices.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: fix erroneous duplicate slb_addr_limit init
Christophe Leroy [Thu, 25 Apr 2019 14:29:27 +0000 (14:29 +0000)]
powerpc/mm: fix erroneous duplicate slb_addr_limit init

Commit 67fda38f0d68 ("powerpc/mm: Move slb_addr_linit to
early_init_mmu") moved slb_addr_limit init out of setup_arch().

Commit 701101865f5d ("powerpc/mm: Reduce memory usage for mm_context_t
for radix") brought it back into setup_arch() by mistake.

This patch reverts that erroneous regression.

Fixes: 701101865f5d ("powerpc/mm: Reduce memory usage for mm_context_t for radix")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: Move nohash specifics in subdirectory mm/nohash
Christophe Leroy [Fri, 29 Mar 2019 10:00:02 +0000 (10:00 +0000)]
powerpc/mm: Move nohash specifics in subdirectory mm/nohash

Many files in arch/powerpc/mm are only for nohash. This patch
creates a subdirectory for them.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Shorten new filenames]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: Move book3s32 specifics in subdirectory mm/book3s64
Christophe Leroy [Fri, 29 Mar 2019 10:00:01 +0000 (10:00 +0000)]
powerpc/mm: Move book3s32 specifics in subdirectory mm/book3s64

Several files in arch/powerpc/mm are only for book3S32. This patch
creates a subdirectory for them.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Shorten new filenames]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: Move book3s64 specifics in subdirectory mm/book3s64
Christophe Leroy [Fri, 29 Mar 2019 10:00:00 +0000 (10:00 +0000)]
powerpc/mm: Move book3s64 specifics in subdirectory mm/book3s64

Many files in arch/powerpc/mm are only for book3S64. This patch
creates a subdirectory for them.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Update the selftest sym links, shorten new filenames, cleanup some
      whitespace and formatting in the new files.]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: change #include "mmu_decl.h" to <mm/mmu_decl.h>
Christophe Leroy [Fri, 29 Mar 2019 09:59:59 +0000 (09:59 +0000)]
powerpc/mm: change #include "mmu_decl.h" to <mm/mmu_decl.h>

This patch makes the inclusion of mmu_decl.h independent of the location
of the file including it.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/nohash64: clean pgtable.h
Christophe Leroy [Thu, 28 Mar 2019 13:19:47 +0000 (13:19 +0000)]
powerpc/nohash64: clean pgtable.h

TRANSPARENT_HUGEPAGE is only supported by book3s

VMEMMAP_REGION_ID is never used

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/book3e: drop BUG_ON() in map_kernel_page()
Christophe Leroy [Thu, 28 Mar 2019 13:03:45 +0000 (13:03 +0000)]
powerpc/book3e: drop BUG_ON() in map_kernel_page()

early_alloc_pgtable() never returns NULL as it panics on failure.

This patch drops the three BUG_ON() checks on the non-nullity of the
value returned by early_alloc_pgtable().
of early_alloc_pgtable() returned value.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/tm: Avoid machine crash on rt_sigreturn()
Breno Leitao [Wed, 16 Jan 2019 16:47:44 +0000 (14:47 -0200)]
powerpc/tm: Avoid machine crash on rt_sigreturn()

There is a kernel crash that happens if rt_sigreturn() is called inside
a transactional block.

This crash happens if the kernel hits an in-kernel page fault when
accessing userspace memory, usually through copy_ckvsx_to_user(). A
major page fault calls the might_sleep() function, which can cause a
task reschedule. A task reschedule (switch_to()) reclaims and
recheckpoints the TM state, but, in the signal return path, the
checkpointed memory was already reclaimed, thus the exception stack
has an MSR with MSR[TS]=0.

When the code returns from might_sleep() and a task reschedule
happened, the task comes back with the memory recheckpointed and
CPU MSR[TS] = suspended.

This means that there is a side effect of might_sleep() if it is
called with CPU MSR[TS] = 0 while the task has regs->msr[TS] != 0.

This side effect can cause a TM Bad Thing: at the exception
entrance the stack saves MSR[TS]=0, and this is what will be used at
RFID, but the processor has MSR[TS] = Suspended. This transition
is invalid, so a TM Bad Thing is raised, causing the
following crash:

  Unexpected TM Bad Thing exception at c00000000000e9ec (msr 0x8000000302a03031) tm_scratch=800000010280b033
  cpu 0xc: Vector: 700 (Program Check) at [c00000003ff1fd70]
      pc: c00000000000e9ec: fast_exception_return+0x100/0x1bc
      lr: c000000000032948: handle_rt_signal64+0xb8/0xaf0
      sp: c0000004263ebc40
     msr: 8000000302a03031
    current = 0xc000000415050300
    paca    = 0xc00000003ffc4080  irqmask: 0x03  irq_happened: 0x01
      pid   = 25006, comm = sigfuz
  Linux version 5.0.0-rc1-00001-g3bd6e94bec12 (breno@debian) (gcc version 8.2.0 (Debian 8.2.0-3)) #899 SMP Mon Jan 7 11:30:07 EST 2019
  WARNING: exception is not recoverable, can't continue
  enter ? for help
  [c0000004263ebc40] c000000000032948 handle_rt_signal64+0xb8/0xaf0 (unreliable)
  [c0000004263ebd30] c000000000022780 do_notify_resume+0x2f0/0x430
  [c0000004263ebe20] c00000000000e844 ret_from_except_lite+0x70/0x74
  --- Exception: c00 (System Call) at 00007fffbaac400c
  SP (7fffeca90f40) is in userspace

The solution for this problem is running the sigreturn code with
regs->msr[TS] disabled, thus avoiding hitting the side effect above.
This does not seem to be a problem since regs->msr will be replaced by
the ucontext value anyway, so it is being flushed already. In this
case, it is simply flushed earlier.

Signed-off-by: Breno Leitao <leitao@debian.org>
Acked-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm/radix: Fix kernel crash when running subpage protect test
Aneesh Kumar K.V [Tue, 30 Apr 2019 07:59:07 +0000 (13:29 +0530)]
powerpc/mm/radix: Fix kernel crash when running subpage protect test

This patch fixes the below crash by making sure we touch the subpage
protection related structures only if we know they are allocated on
the platform. With radix translation we don't allocate hash context at
all and trying to access subpage_prot_table results in:

  Faulting instruction address: 0xc00000000008bdb4
  Oops: Kernel access of bad area, sig: 11 [#1]
  LE PAGE_SIZE=64K MMU=Radix MMU=Hash SMP NR_CPUS=2048 NUMA PowerNV
  ....
  NIP [c00000000008bdb4] sys_subpage_prot+0x74/0x590
  LR [c00000000000b688] system_call+0x5c/0x70
  Call Trace:
  [c00020002c6b7d30] [c00020002c6b7d90] 0xc00020002c6b7d90 (unreliable)
  [c00020002c6b7e20] [c00000000000b688] system_call+0x5c/0x70
  Instruction dump:
  fb61ffd8 fb81ffe0 fba1ffe8 fbc1fff0 fbe1fff8 f821ff11 e92d1178 f9210068
  39200000 e92d0968 ebe90630 e93f03e8 <eb891038> 60000000 3860fffe e9410068

We also move the subpage_prot_table update to be done with mmap_sem held,
to avoid a race between two parallel subpage_prot syscalls.

Fixes: 701101865f5d ("powerpc/mm: Reduce memory usage for mm_context_t for radix")
Reported-by: Sachin Sant <sachinp@linux.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Tested-by: Sachin Sant <sachinp@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/powernv/mce: Print additional information about MCE error.
Mahesh Salgaonkar [Mon, 29 Apr 2019 18:16:02 +0000 (23:46 +0530)]
powerpc/powernv/mce: Print additional information about MCE error.

Print more information about the MCE error: whether it is a hardware or
software error.

Some of the MCE errors can be easily categorized as hardware or
software errors, e.g. UEs are due to hardware errors, whereas an error
triggered due to invalid usage of tlbie is a pure software bug. But
not all MCE errors can be easily categorized as either software
or hardware. There are errors like multihit errors which are usually the
result of a software bug, but in some rare cases a hardware failure
can cause a multihit error. In the past, we have seen a case where, after
replacing a faulty chip, multihit errors stopped occurring. Same with
parity errors, which are usually due to faulty hardware, but there is a
chance that a multihit can also cause a parity error. Such errors are
difficult to attribute to a definite cause. Hence this patch
classifies MCE errors into the following four categories:

  1. Hardware error:
   UE and Link timeout failure errors.
  2. Probable hardware error (some chance of software cause)
   SLB/ERAT/TLB Parity errors.
  3. Software error
   Invalid tlbie form.
  4. Probable software error (some chance of hardware cause)
   SLB/ERAT/TLB Multihit errors.

Sample output:

  MCE: CPU80: machine check (Warning) Guest SLB Multihit DAR: 000001001b6e0320 [Recovered]
  MCE: CPU80: PID: 24765 Comm: qemu-system-ppc Guest NIP: [00007fffa309dc60]
  MCE: CPU80: Probable Software error (some chance of hardware cause)

Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/powernv/mce: Print correct severity for MCE error.
Mahesh Salgaonkar [Mon, 29 Apr 2019 18:15:55 +0000 (23:45 +0530)]
powerpc/powernv/mce: Print correct severity for MCE error.

Currently all machine check errors are printed as severe errors, which
isn't correct. Print soft errors as warnings instead of severe errors.

Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/powernv/mce: Reduce MCE console logs to lesser lines.
Mahesh Salgaonkar [Mon, 29 Apr 2019 18:15:48 +0000 (23:45 +0530)]
powerpc/powernv/mce: Reduce MCE console logs to lesser lines.

Also add the CPU number when displaying the MCE log. This will help keep
the logs cleaner when an MCE hits on multiple CPUs simultaneously.

Before the changes the MCE output was:

  Severe Machine check interrupt [Recovered]
    NIP [d00000000ba80280]: insert_slb_entry.constprop.0+0x278/0x2c0 [mcetest_slb]
    Initiator: CPU
    Error type: SLB [Multihit]
      Effective address: d00000000ba80280

After this patch series changes the MCE output will be:

  MCE: CPU80: machine check (Warning) Host SLB Multihit [Recovered]
  MCE: CPU80: NIP: [d00000000b550280] insert_slb_entry.constprop.0+0x278/0x2c0 [mcetest_slb]
  MCE: CPU80: Probable software error (some chance of hardware cause)

UE in host application:

  MCE: CPU48: machine check (Severe) Host UE Load/Store DAR: 00007fffc6079a80 paddr: 0000000f8e260000 [Not recovered]
  MCE: CPU48: PID: 4584 Comm: find NIP: [0000000010023368]
  MCE: CPU48: Hardware error

and for MCE in Guest:

  MCE: CPU80: machine check (Warning) Guest SLB Multihit DAR: 000001001b6e0320 [Recovered]
  MCE: CPU80: PID: 24765 Comm: qemu-system-ppc Guest NIP: [00007fffa309dc60]
  MCE: CPU80: Probable software error (some chance of hardware cause)

Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc: Add doorbell tracepoints
Anton Blanchard [Thu, 4 Oct 2018 06:23:37 +0000 (16:23 +1000)]
powerpc: Add doorbell tracepoints

When analysing sources of OS jitter, I noticed that doorbells cannot be
traced.

Signed-off-by: Anton Blanchard <anton@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoocxl: remove set but not used variables 'tid' and 'lpid'
YueHaibing [Fri, 29 Mar 2019 15:44:56 +0000 (23:44 +0800)]
ocxl: remove set but not used variables 'tid' and 'lpid'

Fixes gcc '-Wunused-but-set-variable' warning:

  drivers/misc/ocxl/link.c: In function 'xsl_fault_handler':
  drivers/misc/ocxl/link.c:187:17: warning: variable 'tid' set but not used
  drivers/misc/ocxl/link.c:187:6: warning: variable 'lpid' set but not used

They are never used and can be removed.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Mukesh Ojha <mojha@codeaurora.org>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/64s: Remove 'dummy_copy_buffer'
Mathieu Malaterre [Wed, 13 Mar 2019 20:00:30 +0000 (21:00 +0100)]
powerpc/64s: Remove 'dummy_copy_buffer'

In commit 2bf1071a8d50 ("powerpc/64s: Remove POWER9 DD1 support") the
function __switch_to() removed its usage of 'dummy_copy_buffer'. Since it
is not used anywhere else, remove it completely.

This removes the following warning:
  arch/powerpc/kernel/process.c:1156:17: error: 'dummy_copy_buffer' defined but not used

Suggested-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/cacheinfo: Fix kobject memleak
Tobin C. Harding [Tue, 30 Apr 2019 01:09:23 +0000 (11:09 +1000)]
powerpc/cacheinfo: Fix kobject memleak

Currently an error return from kobject_init_and_add() is not followed by
a call to kobject_put(). This means there is a memory leak.

Add a call to kobject_put() in the error path of kobject_init_and_add().
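
A minimal sketch of the pattern (generic, not the exact cacheinfo code;
'example_entry' is a hypothetical structure):

  #include <linux/kobject.h>

  struct example_entry {
          struct kobject kobj;
  };

  static int example_add(struct example_entry *entry, struct kobject *parent,
                         struct kobj_type *ktype, const char *name)
  {
          int ret;

          ret = kobject_init_and_add(&entry->kobj, ktype, parent, "%s", name);
          if (ret) {
                  /* kobject_init_and_add() took a reference even on failure:
                   * drop it so ->release() runs and nothing is leaked. */
                  kobject_put(&entry->kobj);
                  return ret;
          }
          return 0;
  }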

Signed-off-by: Tobin C. Harding <tobin@kernel.org>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Tyrel Datwyler <tyreld@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/vdso: Drop unnecessary cc-ldoption
Nick Desaulniers [Tue, 23 Apr 2019 21:11:14 +0000 (14:11 -0700)]
powerpc/vdso: Drop unnecessary cc-ldoption

Towards the goal of removing cc-ldoption, it seems that --hash-style=
was added to binutils 2.17.50.0.2 in 2006. The minimal required
version of binutils for the kernel according to
Documentation/process/changes.rst is 2.20.

Suggested-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/powernv/ioda: Handle failures correctly in pnv_pci_ioda_iommu_bypass_supported()
Alexey Kardashevskiy [Wed, 10 Apr 2019 06:48:00 +0000 (16:48 +1000)]
powerpc/powernv/ioda: Handle failures correctly in pnv_pci_ioda_iommu_bypass_supported()

When the return value type was changed from int to bool, a few places
were left unchanged; this fixes them. We did not hit these failures as
the first one does not happen at all, and the second one is a little
more likely to happen if the user switches a 33..58bit DMA capable
device between the VFIO and vendor drivers, and there are not so many
of these.
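
For illustration only (hypothetical function name, not the actual
pnv_pci_ioda code), this is the kind of bug a leftover error-code
return creates once the function returns bool:

  #include <linux/errno.h>
  #include <linux/types.h>

  static bool iommu_bypass_supported_example(bool table_group_ok)
  {
          if (!table_group_ok)
                  return -ENODEV; /* BUG: non-zero, so callers see 'true' */

          return true;
  }

  /* Fixed version: report failure as false. */
  static bool iommu_bypass_supported_fixed(bool table_group_ok)
  {
          if (!table_group_ok)
                  return false;

          return true;
  }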

Fixes: 2d6ad41b2c21 ("powerpc/powernv: use the generic iommu bypass code")
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agoMerge branch 'topic/ppc-kvm' into next
Michael Ellerman [Tue, 30 Apr 2019 12:52:03 +0000 (22:52 +1000)]
Merge branch 'topic/ppc-kvm' into next

Merge our topic branch shared with KVM. In particular this includes the
rewrite of the idle code into C.

5 years agopowerpc/powernv/idle: Restore AMR/UAMOR/AMOR/IAMR after idle
Michael Ellerman [Tue, 30 Apr 2019 04:28:17 +0000 (14:28 +1000)]
powerpc/powernv/idle: Restore AMR/UAMOR/AMOR/IAMR after idle

This is an implementation of commits 53a712bae5dd
("powerpc/powernv/idle: Restore AMR/UAMOR/AMOR after idle") and
a3f3072db6ca ("powerpc/powernv/idle: Restore IAMR after idle") using
the new C-based idle code.
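
Conceptually the save/restore looks like the simplified sketch below
(illustrative only, not the actual idle code paths):

  #include <asm/reg.h>

  /* Hypothetical container for the SPRs that deep stop states can lose */
  struct kup_sprs {
          unsigned long amr, iamr, uamor, amor;
  };

  static void save_kup_sprs(struct kup_sprs *s)
  {
          s->amr   = mfspr(SPRN_AMR);
          s->iamr  = mfspr(SPRN_IAMR);
          s->uamor = mfspr(SPRN_UAMOR);
          s->amor  = mfspr(SPRN_AMOR);
  }

  static void restore_kup_sprs(const struct kup_sprs *s)
  {
          mtspr(SPRN_AMR, s->amr);
          mtspr(SPRN_IAMR, s->iamr);
          mtspr(SPRN_UAMOR, s->uamor);
          mtspr(SPRN_AMOR, s->amor);
  }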

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Extract from Nick's patch]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/64s: Reimplement book3s idle code in C
Nicholas Piggin [Fri, 12 Apr 2019 14:30:52 +0000 (00:30 +1000)]
powerpc/64s: Reimplement book3s idle code in C

Reimplement the Book3S idle code in C, moving the POWER7/8/9
implementation specific HV idle code to the powernv platform code.

Book3S assembly stubs are kept in common code and used only to save
the stack frame and non-volatile GPRs before executing architected
idle instructions, and restoring the stack and reloading GPRs then
returning to C after waking from idle.

The complex logic dealing with threads and subcores, locking, SPRs,
HMIs, timebase resync, etc., is all done in C which makes it more
maintainable.

This is not a strict translation to C code, there are some
significant differences:

- Idle wakeup no longer uses the ->cpu_restore call to reinit SPRs,
  but saves and restores them itself.

- The optimisation where EC=ESL=0 idle modes did not have to save GPRs
  or change MSR is restored, because it's now simple to do. ESL=1
  sleeps that do not lose GPRs can use this optimization too.

- KVM secondary entry and cede is now more of a call/return style
  rather than branchy. nap_state_lost is not required because KVM
  always returns via NVGPR restoring path.

- KVM secondary wakeup from offline sequence is moved entirely into
  the offline wakeup, which avoids a hwsync in the normal idle wakeup
  path.

Performance, measured with context-switch ping-pong on different
threads or cores, is possibly improved by a small amount, 1-3% depending
on stop state and core vs thread test for shallow states. For deep states
it's in the noise compared with other latencies.

KVM improvements:

- Idle sleepers now always return to caller rather than branch out
  to KVM first.

- This allows optimisations like very fast return to caller when no
  state has been lost.

- KVM no longer requires nap_state_lost because it controls NVGPR
  save/restore itself on the way in and out.

- The heavy idle wakeup KVM request check can be moved out of the
  normal host idle code and into the not-performance-critical offline
  code.

- KVM nap code now returns from where it is called, which makes the
  flow a bit easier to follow.

Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Squash the KVM changes in]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/watchdog: Use hrtimers for per-CPU heartbeat
Nicholas Piggin [Tue, 9 Apr 2019 04:40:05 +0000 (14:40 +1000)]
powerpc/watchdog: Use hrtimers for per-CPU heartbeat

Using a jiffies timer creates a dependency on the tick_do_timer_cpu
incrementing jiffies. If that CPU has locked up and jiffies is not
incrementing, the watchdog heartbeat timer for all CPUs stops and
creates false positives and confusing warnings on local CPUs, and
also causes the SMP detector to stop, so the root cause is never
detected.

Fix this by using hrtimer based timers for the watchdog heartbeat,
like the generic kernel hardlockup detector.
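
A rough sketch of the hrtimer pattern involved (simplified; the real
watchdog code keeps per-CPU state and more logic in the handler):

  #include <linux/hrtimer.h>
  #include <linux/ktime.h>

  static struct hrtimer wd_hrtimer;
  static unsigned long wd_heartbeat_ms = 1000;  /* illustrative period */

  static enum hrtimer_restart wd_heartbeat_fn(struct hrtimer *hrtimer)
  {
          /* ... touch this CPU's heartbeat here ... */

          hrtimer_forward_now(hrtimer, ms_to_ktime(wd_heartbeat_ms));
          return HRTIMER_RESTART;  /* re-arms without depending on jiffies */
  }

  static void wd_start_heartbeat(void)
  {
          hrtimer_init(&wd_hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
          wd_hrtimer.function = wd_heartbeat_fn;
          hrtimer_start(&wd_hrtimer, ms_to_ktime(wd_heartbeat_ms),
                        HRTIMER_MODE_REL_PINNED);
  }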

Cc: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Reported-by: Ravikumar Bangoria <ravi.bangoria@in.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Tested-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Reported-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/pseries: Track LMB nid instead of using device tree
Nathan Fontenot [Tue, 2 Oct 2018 15:35:59 +0000 (10:35 -0500)]
powerpc/pseries: Track LMB nid instead of using device tree

When removing memory we need to remove the memory from the node
it was added to, instead of looking up in the device tree the node
it should be in.

During testing we have seen scenarios where the affinity for a
LMB changes due to a partition migration or PRRN event. In these
cases the node the LMB exists in may not match the node the device
tree indicates it belongs in. This can lead to a system crash
when trying to DLPAR remove the LMB after a migration or PRRN
event. The current code looks up the node in the device tree to
remove the LMB from, the crash occurs when we try to offline this
node and it does not have any data, i.e. node_data[nid] == NULL.

36:mon> e
cpu 0x36: Vector: 300 (Data Access) at [c0000001828b7810]
    pc: c00000000036d08c: try_offline_node+0x2c/0x1b0
    lr: c0000000003a14ec: remove_memory+0xbc/0x110
    sp: c0000001828b7a90
   msr: 800000000280b033
   dar: 9a28
 dsisr: 40000000
  current = 0xc0000006329c4c80
  paca    = 0xc000000007a55200   softe: 0        irq_happened: 0x01
    pid   = 76926, comm = kworker/u320:3

36:mon> t
[link register   ] c0000000003a14ec remove_memory+0xbc/0x110
[c0000001828b7a90] c00000000006a1cc arch_remove_memory+0x9c/0xd0 (unreliable)
[c0000001828b7ad0] c0000000003a14e0 remove_memory+0xb0/0x110
[c0000001828b7b20] c0000000000c7db4 dlpar_remove_lmb+0x94/0x160
[c0000001828b7b60] c0000000000c8ef8 dlpar_memory+0x7e8/0xd10
[c0000001828b7bf0] c0000000000bf828 handle_dlpar_errorlog+0xf8/0x160
[c0000001828b7c60] c0000000000bf8cc pseries_hp_work_fn+0x3c/0xa0
[c0000001828b7c90] c000000000128cd8 process_one_work+0x298/0x5a0
[c0000001828b7d20] c000000000129068 worker_thread+0x88/0x620
[c0000001828b7dc0] c00000000013223c kthread+0x1ac/0x1c0
[c0000001828b7e30] c00000000000b45c ret_from_kernel_thread+0x5c/0x80

To resolve this we need to track the node a LMB belongs to when
it is added to the system so we can remove it from that node instead
of the node that the device tree indicates it should belong to.

Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: fix spelling mistake "Outisde" -> "Outside"
Colin Ian King [Tue, 23 Apr 2019 15:10:17 +0000 (16:10 +0100)]
powerpc/mm: fix spelling mistake "Outisde" -> "Outside"

There are several identical spelling mistakes in warning messages,
fix these.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: Fix section mismatch warning
Aneesh Kumar K.V [Sat, 30 Mar 2019 05:43:45 +0000 (11:13 +0530)]
powerpc/mm: Fix section mismatch warning

This patch fixes the below section mismatch warnings.

WARNING: vmlinux.o(.text+0x2d1f44): Section mismatch in reference from the function devm_memremap_pages_release() to the function .meminit.text:arch_remove_memory()
WARNING: vmlinux.o(.text+0x2d265c): Section mismatch in reference from the function devm_memremap_pages() to the function .meminit.text:arch_add_memory()

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm/hash: Rename KERNEL_REGION_ID to LINEAR_MAP_REGION_ID
Aneesh Kumar K.V [Wed, 17 Apr 2019 12:59:19 +0000 (18:29 +0530)]
powerpc/mm/hash: Rename KERNEL_REGION_ID to LINEAR_MAP_REGION_ID

The region actually points to the linear map. Rename the #define to
clarify that.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: Print kernel map details to dmesg
Aneesh Kumar K.V [Wed, 17 Apr 2019 12:59:18 +0000 (18:29 +0530)]
powerpc/mm: Print kernel map details to dmesg

This helps with debugging. We can look at dmesg to find out the
different kernel mapping details.

On 4K config this shows

 kernel vmalloc start   = 0xc000100000000000
 kernel IO start        = 0xc000200000000000
 kernel vmemmap start   = 0xc000300000000000

On 64K config:

 kernel vmalloc start   = 0xc008000000000000
 kernel IO start        = 0xc00a000000000000
 kernel vmemmap start   = 0xc00c000000000000

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm/hash: Simplify the region id calculation.
Aneesh Kumar K.V [Wed, 17 Apr 2019 12:59:17 +0000 (18:29 +0530)]
powerpc/mm/hash: Simplify the region id calculation.

This reduces multiple comparisons in get_region_id to a bit shift operation.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: Drop the unnecessary region check
Aneesh Kumar K.V [Wed, 17 Apr 2019 12:59:16 +0000 (18:29 +0530)]
powerpc/mm: Drop the unnecessary region check

All the regions are now mapped with top nibble 0xc. Hence the region id
check is not needed for virt_addr_valid()

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
5 years agopowerpc/mm: Validate address values against different region limits
Aneesh Kumar K.V [Wed, 17 Apr 2019 12:59:15 +0000 (18:29 +0530)]
powerpc/mm: Validate address values against different region limits

This adds an explicit check in various functions.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>