git.proxmox.com Git - mirror_ubuntu-bionic-kernel.git/log
6 years ago  crypto: cryptd - pass through absence of ->setkey()
Eric Biggers [Wed, 3 Jan 2018 19:16:23 +0000 (11:16 -0800)]
crypto: cryptd - pass through absence of ->setkey()

BugLink: http://bugs.launchpad.net/bugs/1751064
commit 841a3ff329713f796a63356fef6e2f72e4a3f6a3 upstream.

When the cryptd template is used to wrap an unkeyed hash algorithm,
don't install a ->setkey() method to the cryptd instance.  This change
is necessary for cryptd to keep working with unkeyed hash algorithms
once we start enforcing that ->setkey() is called when present.
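
A minimal sketch of the idea (based on the shash path of
cryptd_create_hash(); variable names are assumed from the cryptd code):

/* only install ->setkey() when the wrapped algorithm actually has one */
if (crypto_shash_alg_has_setkey(salg))
        inst->alg.setkey = cryptd_hash_setkey;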

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  crypto: hash - introduce crypto_hash_alg_has_setkey()
Eric Biggers [Wed, 3 Jan 2018 19:16:22 +0000 (11:16 -0800)]
crypto: hash - introduce crypto_hash_alg_has_setkey()

BugLink: http://bugs.launchpad.net/bugs/1751064
commit cd6ed77ad5d223dc6299fb58f62e0f5267f7e2ba upstream.

Templates that use an shash spawn can use crypto_shash_alg_has_setkey()
to determine whether the underlying algorithm requires a key or not.
But there was no corresponding function for ahash spawns.  Add it.

Note that the new function actually has to support both shash and ahash
algorithms, since the ahash API can be used with either.
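
A sketch of the new helper, handling both cases the message describes
(implementation details are assumed):

/* an "ahash" algorithm may be a native ahash or a wrapped shash */
bool crypto_hash_alg_has_setkey(struct ahash_alg *halg)
{
        struct crypto_alg *alg = &halg->halg.base;

        if (alg->cra_type != &crypto_ahash_type)
                return crypto_shash_alg_has_setkey(__crypto_shash_alg(alg));

        return __crypto_ahash_alg(alg)->setkey != NULL;
}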

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  ahci: Add Intel Cannon Lake PCH-H PCI ID
Mika Westerberg [Thu, 11 Jan 2018 12:55:50 +0000 (15:55 +0300)]
ahci: Add Intel Cannon Lake PCH-H PCI ID

BugLink: http://bugs.launchpad.net/bugs/1751064
commit f919dde0772a894c693a1eeabc77df69d6a9b937 upstream.

Add Intel Cannon Lake PCH-H PCI ID to the list of supported controllers.
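
The change is a single table entry; a sketch (the 0xa352 device ID is our
reading of the upstream commit and should be treated as illustrative):

/* drivers/ata/ahci.c, ahci_pci_tbl[] */
{ PCI_VDEVICE(INTEL, 0xa352), board_ahci }, /* Cannon Lake PCH-H AHCI */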

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  ahci: Add PCI ids for Intel Bay Trail, Cherry Trail and Apollo Lake AHCI
Hans de Goede [Wed, 6 Dec 2017 15:41:09 +0000 (16:41 +0100)]
ahci: Add PCI ids for Intel Bay Trail, Cherry Trail and Apollo Lake AHCI

BugLink: http://bugs.launchpad.net/bugs/1751064
commit 998008b779e424bd7513c434d0ab9c1268459009 upstream.

Add PCI ids for Intel Bay Trail, Cherry Trail and Apollo Lake AHCI
SATA controllers. This commit is a preparation patch for allowing a
different default SATA link power management policy for mobile chipsets.

Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  ahci: Annotate PCI ids for mobile Intel chipsets as such
Hans de Goede [Wed, 6 Dec 2017 15:41:08 +0000 (16:41 +0100)]
ahci: Annotate PCI ids for mobile Intel chipsets as such

BugLink: http://bugs.launchpad.net/bugs/1751064
commit ca1b4974bd237f2373b0e980b11957aac3499b56 upstream.

Intel uses different SATA PCI ids for the Desktop and Mobile SKUs of their
chipsets. For older models, the comment describing which chipset the PCI id
is for also indicates when we're dealing with a mobile SKU. Extend the
comments for recent chipsets to also indicate mobile SKUs.

The information this commit adds comes from Intel's chipset datasheets.

This commit is a preparation patch for allowing a different default
SATA link power management policy for mobile chipsets.

Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  kernfs: fix regression in kernfs_fop_write caused by wrong type
Ivan Vecera [Fri, 19 Jan 2018 08:18:54 +0000 (09:18 +0100)]
kernfs: fix regression in kernfs_fop_write caused by wrong type

BugLink: http://bugs.launchpad.net/bugs/1751064
commit ba87977a49913129962af8ac35b0e13e0fa4382d upstream.

Commit b7ce40cff0b9 ("kernfs: cache atomic_write_len in
kernfs_open_file") changed the type of the local variable 'len' from
ssize_t to size_t. As a result, the *ppos value is updated even when
the previous write callback failed.

Mentioned snippet:
...
len = ops->write(...); <- return value can be negative
...
if (len > 0)           <- true here in this case
        *ppos += len;
...
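
Because 'len' is unsigned, a negative error return is stored as a huge
positive value and the 'len > 0' check passes. A sketch of the fix,
restoring the signed type:

ssize_t len;                   /* signed again, as before b7ce40cff0b9 */
...
len = ops->write(...);
if (len > 0)                   /* now false when ->write() failed */
        *ppos += len;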

Fixes: b7ce40cff0b9 ("kernfs: cache atomic_write_len in kernfs_open_file")
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  nfsd: Detect unhashed stids in nfsd4_verify_open_stid()
Trond Myklebust [Fri, 12 Jan 2018 22:42:30 +0000 (17:42 -0500)]
nfsd: Detect unhashed stids in nfsd4_verify_open_stid()

BugLink: http://bugs.launchpad.net/bugs/1751064
commit 4f1764172a0aa7395d12b96cae640ca1438c5085 upstream.

The state of the stid is guaranteed by 2 locks:
- The nfs4_client 'cl_lock' spinlock
- The nfs4_ol_stateid 'st_mutex' mutex

so it is quite possible for the stid to be unhashed after lookup,
but before calling nfsd4_lock_ol_stateid(). So we do need to check
for a zero value for 'sc_type' in nfsd4_verify_open_stid().
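
A sketch of the extra check (the exact switch layout in
nfsd4_verify_open_stid() is assumed):

switch (s->sc_type) {
case 0:                         /* unhashed between lookup and lock */
case NFS4_CLOSED_STID:
case NFS4_CLOSED_DELEG_STID:
        ret = nfserr_bad_stateid;
        break;
...
}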

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Tested-by: Chuck Lever <chuck.lever@oracle.com>
Fixes: 659aefb68eca ("nfsd: Ensure we don't recognise lock stateids...")
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  NFS: Fix a race between mmap() and O_DIRECT
Trond Myklebust [Sun, 28 Jan 2018 14:29:41 +0000 (09:29 -0500)]
NFS: Fix a race between mmap() and O_DIRECT

BugLink: http://bugs.launchpad.net/bugs/1751064
commit e231c6879cfd44e4fffd384bb6dd7d313249a523 upstream.

When locking the file in order to do O_DIRECT on it, we must unmap
any mmapped ranges on the pagecache so that we can flush out the
dirty data.

Fixes: a5864c999de67 ("NFS: Do not serialise O_DIRECT reads and writes")
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  NFS: reject request for id_legacy key without auxdata
Eric Biggers [Fri, 19 Jan 2018 23:15:34 +0000 (15:15 -0800)]
NFS: reject request for id_legacy key without auxdata

BugLink: http://bugs.launchpad.net/bugs/1751064
commit 49686cbbb3ebafe42e63868222f269d8053ead00 upstream.

nfs_idmap_legacy_upcall() is supposed to be called with 'aux' pointing
to a 'struct idmap', via the call to request_key_with_auxdata() in
nfs_idmap_request_key().

However it can also be reached via the request_key() system call in
which case 'aux' will be NULL, causing a NULL pointer dereference in
nfs_idmap_prepare_pipe_upcall(), assuming that the key description is
valid enough to get that far.

Fix this by making nfs_idmap_legacy_upcall() negate the key if no
auxdata is provided.
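
A sketch of the guard in nfs_idmap_legacy_upcall() (the label name is an
assumption; the error path is what negates the key):

ret = -EINVAL;
if (!aux)                       /* reached via bare request_key() */
        goto out1;              /* aborts the upcall and negates the key */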

As usual, this bug was found by syzkaller.  A simple reproducer using
the command-line keyctl program is:

    keyctl request2 id_legacy uid:0 '' @s

Fixes: 57e62324e469 ("NFS: Store the legacy idmapper result in the keyring")
Reported-by: syzbot+5dfdbcf7b3eb5912abbb@syzkaller.appspotmail.com
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Trond Myklebust <trondmy@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  NFS: commit direct writes even if they fail partially
J. Bruce Fields [Tue, 16 Jan 2018 15:08:00 +0000 (10:08 -0500)]
NFS: commit direct writes even if they fail partially

BugLink: http://bugs.launchpad.net/bugs/1751064
commit 1b8d97b0a837beaf48a8449955b52c650a7114b4 upstream.

If some of the WRITE calls making up an O_DIRECT write syscall fail,
we neglect to commit, even if some of the WRITEs succeed.

We also depend on the commit code to free the reference count on the
nfs_page taken in the "if (request_commit)" case at the end of
nfs_direct_write_completion().  The problem was originally noticed
because ENOSPC's encountered partway through a write would result in a
closed file being sillyrenamed when it should have been unlinked.

Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  NFS: Fix nfsstat breakage due to LOOKUPP
Trond Myklebust [Sat, 6 Jan 2018 14:53:49 +0000 (09:53 -0500)]
NFS: Fix nfsstat breakage due to LOOKUPP

BugLink: http://bugs.launchpad.net/bugs/1751064
commit 8634ef5e05311f32d7f2aee06f6b27a8834a3bd6 upstream.

The LOOKUPP operation was inserted into the nfs4_procedures array
rather than being appended, which put /proc/net/rpc/nfs out of
whack, and broke the nfsstat utility.
Fix by moving the LOOKUPP operation to the end of the array, and
by ensuring that it keeps the same length whether or not NFSV4.1
and NFSv4.2 are compiled in.
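
A sketch of the idea (macro names assumed): LOOKUPP becomes the last
entry and is stubbed out when NFSv4.1+ support is not compiled in, so
the array length, and hence the /proc/net/rpc/nfs layout, never changes:

/* nfs4_procedures[], last entry */
#if defined(CONFIG_NFS_V4_1)
        PROC(LOOKUPP, enc_lookupp, dec_lookupp),
#else
        STUB(LOOKUPP),
#endif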

Fixes: 5b5faaf6df734 ("nfs4: add NFSv4 LOOKUPP handlers")
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  NFS: Add a cond_resched() to nfs_commit_release_pages()
Trond Myklebust [Mon, 18 Dec 2017 19:39:13 +0000 (14:39 -0500)]
NFS: Add a cond_resched() to nfs_commit_release_pages()

BugLink: http://bugs.launchpad.net/bugs/1751064
commit 7f1bda447c9bd48b415acedba6b830f61591601f upstream.

The commit list can get very large, and so we need a cond_resched()
in nfs_commit_release_pages() in order to ensure we don't hog the CPU
for excessive periods of time.
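
A sketch of where the call lands (loop details assumed):

/* nfs_commit_release_pages(), sketched */
while (!list_empty(&data->pages)) {
        req = nfs_list_entry(data->pages.next);
        nfs_list_remove_request(req);
        ...
        cond_resched();         /* don't hog the CPU on a long commit list */
}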

Reported-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  nfs41: do not return ENOMEM on LAYOUTUNAVAILABLE
Tigran Mkrtchyan [Tue, 16 Jan 2018 21:38:50 +0000 (22:38 +0100)]
nfs41: do not return ENOMEM on LAYOUTUNAVAILABLE

BugLink: http://bugs.launchpad.net/bugs/1751064
commit 7ff4cff637aa0bd2abbd81f53b2a6206c50afd95 upstream.

A pNFS server may return the LAYOUTUNAVAILABLE error on LAYOUTGET for
files which don't have any layout. In this situation pnfs_update_layout
currently returns NULL. As this NULL is converted into ENOMEM, IO
requests fail instead of falling back to the MDS.

Do not return ENOMEM on LAYOUTUNAVAILABLE and let client retry through
MDS.
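
A sketch of a fixed caller:

pgio->pg_lseg = pnfs_update_layout(...);
if (IS_ERR_OR_NULL(pgio->pg_lseg)) {
        /* NULL now means "no layout": fall back to the MDS rather
         * than turning it into -ENOMEM */
        pgio->pg_lseg = NULL;
        nfs_pageio_reset_read_mds(pgio);
}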

Fixes 8d40b0f14846f. I suggest backporting this fix to the affected
stable branches.

Signed-off-by: Tigran Mkrtchyan <tigran.mkrtchyan@desy.de>
[trondmy: Use IS_ERR_OR_NULL()]
Fixes: 8d40b0f14846 ("NFS filelayout:call GETDEVICEINFO after...")
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  nfs/pnfs: fix nfs_direct_req ref leak when i/o falls back to the mds
Scott Mayhew [Fri, 15 Dec 2017 21:12:32 +0000 (16:12 -0500)]
nfs/pnfs: fix nfs_direct_req ref leak when i/o falls back to the mds

BugLink: http://bugs.launchpad.net/bugs/1751064
commit ba4a76f703ab7eb72941fdaac848502073d6e9ee upstream.

Currently when falling back to doing I/O through the MDS (via
pnfs_{read|write}_through_mds), the client frees the nfs_pgio_header
without releasing the reference taken on the dreq
via pnfs_generic_pg_{read|write}pages -> nfs_pgheader_init ->
nfs_direct_pgio_init.  It then takes another reference on the dreq via
nfs_generic_pg_pgios -> nfs_pgheader_init -> nfs_direct_pgio_init and
as a result the requester will become stuck in inode_dio_wait.  Once
that happens, other processes accessing the inode will become stuck as
well.

Ensure that pnfs_read_through_mds() and pnfs_write_through_mds() clean
up correctly by calling hdr->completion_ops->completion() instead of
calling hdr->release() directly.
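
A sketch of the error path after the fix:

/* pnfs_read_through_mds()/pnfs_write_through_mds(), sketched: the
 * completion hook drops the dreq reference that hdr->release() leaked */
hdr->completion_ops->completion(hdr);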

This can be reproduced (sometimes) by performing "storage failover
takeover" commands on NetApp filer while doing direct I/O from a client.

This can also be reproduced using SystemTap to simulate a failure while
doing direct I/O from a client (from Dave Wysochanski
<dwysocha@redhat.com>):

stap -v -g -e 'probe module("nfs_layout_nfsv41_files").function("nfs4_fl_prepare_ds").return { $return=NULL; exit(); }'

Suggested-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Scott Mayhew <smayhew@redhat.com>
Fixes: 1ca018d28d ("pNFS: Fix a memory leak when attempted pnfs fails")
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  ubifs: free the encrypted symlink target
Eric Biggers [Fri, 12 Jan 2018 04:27:00 +0000 (23:27 -0500)]
ubifs: free the encrypted symlink target

BugLink: http://bugs.launchpad.net/bugs/1751064
commit 6b46d444146eb8d0b99562795cea8086639d7282 upstream.

ubifs_symlink() forgot to free the kmalloc()'ed buffer holding the
encrypted symlink target, creating a memory leak.  Fix it.

(UBIFS could actually encrypt directly into ui->data, removing the
temporary buffer, but that is left for the patch that switches to use
the symlink helper functions.)
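
A sketch of the fix (the buffer variable name is an assumption):

/* ubifs_symlink() exit paths */
kfree(sd);              /* free the kmalloc()'ed encrypted target */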

Fixes: ca7f85be8d6c ("ubifs: Add support for encrypted symlinks")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  ubi: block: Fix locking for idr_alloc/idr_remove
Bradley Bolen [Thu, 18 Jan 2018 13:55:20 +0000 (08:55 -0500)]
ubi: block: Fix locking for idr_alloc/idr_remove

BugLink: http://bugs.launchpad.net/bugs/1751064
commit 7f29ae9f977bcdc3654e68bc36d170223c52fd48 upstream.

This fixes a race with idr_alloc where gd->first_minor can be set to the
same value for two simultaneous calls to ubiblock_create.  Each instance
calls device_add_disk with the same first_minor.  device_add_disk calls
bdi_register_owner which generates several warnings.

WARNING: CPU: 1 PID: 179 at kernel-source/fs/sysfs/dir.c:31
sysfs_warn_dup+0x68/0x88
sysfs: cannot create duplicate filename '/devices/virtual/bdi/252:2'

WARNING: CPU: 1 PID: 179 at kernel-source/lib/kobject.c:240
kobject_add_internal+0x1ec/0x2f8
kobject_add_internal failed for 252:2 with -EEXIST, don't try to
register things with the same name in the same directory

WARNING: CPU: 1 PID: 179 at kernel-source/fs/sysfs/dir.c:31
sysfs_warn_dup+0x68/0x88
sysfs: cannot create duplicate filename '/dev/block/252:2'

However, device_add_disk does not error out when bdi_register_owner
returns an error.  Control continues until reaching blk_register_queue.
It then BUGs.

kernel BUG at kernel-source/fs/sysfs/group.c:113!
[<c01e26cc>] (internal_create_group) from [<c01e2950>]
(sysfs_create_group+0x20/0x24)
[<c01e2950>] (sysfs_create_group) from [<c00e3d38>]
(blk_trace_init_sysfs+0x18/0x20)
[<c00e3d38>] (blk_trace_init_sysfs) from [<c02bdfbc>]
(blk_register_queue+0xd8/0x154)
[<c02bdfbc>] (blk_register_queue) from [<c02cec84>]
(device_add_disk+0x194/0x44c)
[<c02cec84>] (device_add_disk) from [<c0436ec8>]
(ubiblock_create+0x284/0x2e0)
[<c0436ec8>] (ubiblock_create) from [<c0427bb8>]
(vol_cdev_ioctl+0x450/0x554)
[<c0427bb8>] (vol_cdev_ioctl) from [<c0189110>] (vfs_ioctl+0x30/0x44)
[<c0189110>] (vfs_ioctl) from [<c01892e0>] (do_vfs_ioctl+0xa0/0x790)
[<c01892e0>] (do_vfs_ioctl) from [<c0189a14>] (SyS_ioctl+0x44/0x68)
[<c0189a14>] (SyS_ioctl) from [<c0010640>] (ret_fast_syscall+0x0/0x34)

Locking idr_alloc/idr_remove removes the race and keeps gd->first_minor
unique.
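
A sketch of the locking (the mutex and idr names are assumptions based
on drivers/mtd/ubi/block.c):

/* ubiblock_create(), sketched: allocate the minor under the mutex so
 * two concurrent creates can't get the same gd->first_minor */
mutex_lock(&devices_mutex);
ret = idr_alloc(&ubiblock_minor_idr, dev, 0, 0, GFP_KERNEL);
mutex_unlock(&devices_mutex);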

Fixes: 2bf50d42f3a4 ("UBI: block: Dynamically allocate minor numbers")
Signed-off-by: Bradley Bolen <bradleybolen@gmail.com>
Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  ubi: fastmap: Erase outdated anchor PEBs during attach
Sascha Hauer [Tue, 5 Dec 2017 15:01:20 +0000 (16:01 +0100)]
ubi: fastmap: Erase outdated anchor PEBs during attach

BugLink: http://bugs.launchpad.net/bugs/1751064
commit f78e5623f45bab2b726eec29dc5cefbbab2d0b1c upstream.

The fastmap update code might erase the current fastmap anchor PEB
in case it doesn't find any new free PEB. When a power cut happens
in this situation we must not have any outdated fastmap anchor PEB
on the device, because that would be used to attach during next
boot.
The easiest way to ensure that is to erase all outdated fastmap
anchor PEBs synchronously during attach.

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Richard Weinberger <richard@nod.at>
Fixes: dbb7d2a88d2a ("UBI: Add fastmap core")
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  ubi: Fix race condition between ubi volume creation and udev
Clay McClure [Fri, 22 Sep 2017 02:01:34 +0000 (19:01 -0700)]
ubi: Fix race condition between ubi volume creation and udev

BugLink: http://bugs.launchpad.net/bugs/1751064
commit a51a0c8d213594bc094cb8e54aad0cb6d7f7b9a6 upstream.

Similar to commit 714fb87e8bc0 ("ubi: Fix race condition between ubi
device creation and udev"), we should make the volume active before
registering it.

Signed-off-by: Clay McClure <clay@daemons.net>
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  mtd: nand: sunxi: Fix ECC strength choice
Miquel Raynal [Wed, 24 Jan 2018 22:49:31 +0000 (23:49 +0100)]
mtd: nand: sunxi: Fix ECC strength choice

BugLink: http://bugs.launchpad.net/bugs/1751064
commit f4c6cd1a7f2275d5bc0e494b21fff26f8dde80f0 upstream.

When the requested ECC strength does not exactly match the strengths
supported by the ECC engine, the driver selects the closest
strength meeting the 'selected_strength > requested_strength'
constraint. Fix the fact that, in this particular case, the ecc->strength
value was not updated to match the 'selected_strength'.

For instance, one can encounter this issue when no ECC requirement is
filled in the device tree while the NAND chip minimum requirement is not
a strength/step_size combo natively supported by the ECC engine.

Fixes: 1fef62c1423b ("mtd: nand: add sunxi NAND flash controller support")
Suggested-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Miquel Raynal <miquel.raynal@free-electrons.com>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  mtd: nand: Fix nand_do_read_oob() return value
Miquel Raynal [Fri, 12 Jan 2018 09:13:36 +0000 (10:13 +0100)]
mtd: nand: Fix nand_do_read_oob() return value

BugLink: http://bugs.launchpad.net/bugs/1751064
commit 87e89ce8d0d14f573c068c61bec2117751fb5103 upstream.

Starting from commit 041e4575f034 ("mtd: nand: handle ECC errors in
OOB"), nand_do_read_oob() (from the NAND core) returned 0 or a
negative error, and the MTD layer expected that.

However, the trend for the NAND layer is now to return an error or a
positive number of bitflips. Deciding which status to return to the user
belongs to the MTD layer.

Commit e47f68587b82 ("mtd: check for max_bitflips in mtd_read_oob()")
brought this logic to the mtd_read_oob() function while the return value
coming from nand_do_read_oob() (called by the ->_read_oob() hook) was
left unchanged.
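
A sketch of the new return:

/* nand_do_read_oob(), sketched: report bitflips and let mtd_read_oob()
 * decide what status to return to the user */
return max_bitflips;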

Fixes: e47f68587b82 ("mtd: check for max_bitflips in mtd_read_oob()")
Signed-off-by: Miquel Raynal <miquel.raynal@free-electrons.com>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  mtd: nand: brcmnand: Disable prefetch by default
Kamal Dasu [Mon, 8 Jan 2018 20:36:48 +0000 (15:36 -0500)]
mtd: nand: brcmnand: Disable prefetch by default

BugLink: http://bugs.launchpad.net/bugs/1751064
commit f953f0f89663c39f08f4baaa8a4a881401b65654 upstream.

The Broadcom NAND controller's prefetch feature needs to be disabled
by default. Enabling it hurts performance on random reads as
well as DMA reads.

Signed-off-by: Kamal Dasu <kdasu.kdev@gmail.com>
Fixes: 27c5b17cd1b1 ("mtd: nand: add NAND driver "library" for Broadcom STB NAND controller")
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  mtd: cfi: convert inline functions to macros
Arnd Bergmann [Wed, 11 Oct 2017 13:54:10 +0000 (15:54 +0200)]
mtd: cfi: convert inline functions to macros

BugLink: http://bugs.launchpad.net/bugs/1751064
commit 9e343e87d2c4c707ef8fae2844864d4dde3a2d13 upstream.

The map_word_() functions, dating back to linux-2.6.8, try to perform
bitwise operations on a 'map_word' structure. This may have worked
with compilers that were current then (gcc-3.4 or earlier), but ends
up being rather inefficient on any version I could try now (gcc-4.4 or
higher). Specifically, we hit a problem analyzed in gcc PR81715 where we
fail to reuse the stack space for local variables.

This can be seen immediately in the stack consumption for
cfi_staa_erase_varsize() and other functions that (with CONFIG_KASAN)
can be up to 2200 bytes. Changing the inline functions into macros brings
this down to 1280 bytes.  Without KASAN, the same problem exists, but
the stack consumption is lower to start with; my patch shrinks it from
920 to 496 bytes with arm-linux-gnueabi-gcc-5.4, and saves around
1KB in .text size for cfi_cmdset_0020.c, as it avoids copying map_word
structures for each call to one of these helpers.

With the latest gcc-8 snapshot, the problem is fixed in upstream gcc,
but nobody uses that yet, so we should still work around it in mainline
kernels and probably backport the workaround to stable kernels as well.
We had a couple of other functions that suffered from the same gcc bug,
and all of those had a simpler workaround involving dummy variables
in the inline function. Unfortunately that did not work here, the
macro hack was the best I could come up with.

It would also be helpful to have someone do a little performance testing
on the patch, to see how much it helps in terms of CPU utilization.
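
A sketch of the conversion, using an equality helper as the example
(statement-expression form assumed from the commit's approach):

#define map_word_equal(map, val1, val2)                                 \
({                                                                      \
        int i, ret = 1;                                                 \
        for (i = 0; i < map_words(map); i++)                            \
                if ((val1).x[i] != (val2).x[i]) {                       \
                        ret = 0;                                        \
                        break;                                          \
                }                                                       \
        ret;                                                            \
})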

Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: Kill PSCI_GET_VERSION as a variant-2 workaround
Marc Zyngier [Tue, 6 Feb 2018 17:56:21 +0000 (17:56 +0000)]
arm64: Kill PSCI_GET_VERSION as a variant-2 workaround

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 3a0a397ff5ff upstream.

Now that we've standardised on SMCCC v1.1 to perform the branch
prediction invalidation, let's drop the previous band-aid.
If vendors haven't updated their firmware to do SMCCC 1.1, they
haven't updated PSCI either, so we don't lose anything.

Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: Add ARM_SMCCC_ARCH_WORKAROUND_1 BP hardening support
Marc Zyngier [Tue, 6 Feb 2018 17:56:20 +0000 (17:56 +0000)]
arm64: Add ARM_SMCCC_ARCH_WORKAROUND_1 BP hardening support

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit b092201e0020 upstream.

Add the detection and runtime code for ARM_SMCCC_ARCH_WORKAROUND_1.
It is lovely. Really.

Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm/arm64: smccc: Implement SMCCC v1.1 inline primitive
Marc Zyngier [Tue, 6 Feb 2018 17:56:19 +0000 (17:56 +0000)]
arm/arm64: smccc: Implement SMCCC v1.1 inline primitive

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit f2d3b2e8759a upstream.

One of the major improvement of SMCCC v1.1 is that it only clobbers
the first 4 registers, both on 32 and 64bit. This means that it
becomes very easy to provide an inline version of the SMC call
primitive, and avoid performing a function call to stash the
registers that would otherwise be clobbered by SMCCC v1.0.
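
A simplified sketch of why this works (the real macros are variadic;
this hardcodes one call on arm64):

register unsigned long r0 asm("x0") = ARM_SMCCC_ARCH_WORKAROUND_1;
register unsigned long r1 asm("x1") = 0;
register unsigned long r2 asm("x2") = 0;
register unsigned long r3 asm("x3") = 0;

/* only x0-x3 are clobbered under SMCCC v1.1, so no stashing is needed */
asm volatile("smc       #0"
             : "+r" (r0), "+r" (r1), "+r" (r2), "+r" (r3));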

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm/arm64: smccc: Make function identifiers an unsigned quantity
Marc Zyngier [Tue, 6 Feb 2018 17:56:18 +0000 (17:56 +0000)]
arm/arm64: smccc: Make function identifiers an unsigned quantity

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit ded4c39e93f3 upstream.

Function identifiers are a 32-bit, unsigned quantity. But we never
tell the compiler so, resulting in the following:

 4ac:   b26187e0        mov     x0, #0xffffffff80000001

We thus rely on the firmware narrowing it for us, which is not
always a reasonable expectation.
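
A sketch of the fix: declare the call-type constants unsigned so the
compiler never sign-extends the resulting identifier:

#define ARM_SMCCC_STD_CALL      _AC(0,U)
#define ARM_SMCCC_FAST_CALL     _AC(1,U)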

Cc: stable@vger.kernel.org
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  firmware/psci: Expose SMCCC version through psci_ops
Marc Zyngier [Tue, 6 Feb 2018 17:56:17 +0000 (17:56 +0000)]
firmware/psci: Expose SMCCC version through psci_ops

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit e78eef554a91 upstream.

Since PSCI 1.0 allows the SMCCC version to be (indirectly) probed,
let's do that at boot time, and expose the version of the calling
convention as part of the psci_ops structure.

Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  firmware/psci: Expose PSCI conduit
Marc Zyngier [Tue, 6 Feb 2018 17:56:16 +0000 (17:56 +0000)]
firmware/psci: Expose PSCI conduit

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 09a8d6d48499 upstream.

In order to call into the firmware to apply workarounds, it is
useful to find out whether we're using HVC or SMC. Let's expose
this through the psci_ops.
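
A sketch of how a caller would use it (conduit enum names assumed):

switch (psci_ops.conduit) {
case PSCI_CONDUIT_HVC:
        arm_smccc_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, 0, 0, 0, 0, 0, 0, 0, &res);
        break;
case PSCI_CONDUIT_SMC:
        arm_smccc_smc(ARM_SMCCC_ARCH_WORKAROUND_1, 0, 0, 0, 0, 0, 0, 0, &res);
        break;
default:
        break;                  /* PSCI_CONDUIT_NONE: nothing to call */
}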

Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: KVM: Add SMCCC_ARCH_WORKAROUND_1 fast handling
Marc Zyngier [Tue, 6 Feb 2018 17:56:15 +0000 (17:56 +0000)]
arm64: KVM: Add SMCCC_ARCH_WORKAROUND_1 fast handling

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit f72af90c3783 upstream.

We want SMCCC_ARCH_WORKAROUND_1 to be fast. As fast as possible.
So let's intercept it as early as we can by testing for the
function call number as soon as we've identified a HVC call
coming from the guest.

Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: KVM: Report SMCCC_ARCH_WORKAROUND_1 BP hardening support
Marc Zyngier [Tue, 6 Feb 2018 17:56:14 +0000 (17:56 +0000)]
arm64: KVM: Report SMCCC_ARCH_WORKAROUND_1 BP hardening support

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 6167ec5c9145 upstream.

A new feature of SMCCC 1.1 is that it offers firmware-based CPU
workarounds. In particular, SMCCC_ARCH_WORKAROUND_1 provides
BP hardening for CVE-2017-5715.

If the host has some mitigation for this issue, report that
we deal with it using SMCCC_ARCH_WORKAROUND_1, as we apply the
host workaround on every guest exit.

Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Conflicts:
arch/arm/include/asm/kvm_host.h
arch/arm64/include/asm/kvm_host.h
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm/arm64: KVM: Turn kvm_psci_version into a static inline
Marc Zyngier [Tue, 6 Feb 2018 17:56:13 +0000 (17:56 +0000)]
arm/arm64: KVM: Turn kvm_psci_version into a static inline

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit a4097b351118 upstream.

We're about to need kvm_psci_version in HYP too. So let's turn it
into a static inline, and pass the kvm structure as a second
parameter (so that HYP can do a kern_hyp_va on it).

Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: KVM: Make PSCI_VERSION a fast path
Marc Zyngier [Wed, 3 Jan 2018 16:38:37 +0000 (16:38 +0000)]
arm64: KVM: Make PSCI_VERSION a fast path

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 90348689d500 upstream.

For those CPUs that require PSCI to perform a BP invalidation,
going all the way to the PSCI code for not much is a waste of
precious cycles. Let's terminate that call as early as possible.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm/arm64: KVM: Advertise SMCCC v1.1
Marc Zyngier [Tue, 6 Feb 2018 17:56:12 +0000 (17:56 +0000)]
arm/arm64: KVM: Advertise SMCCC v1.1

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 09e6be12effd upstream.

The new SMC Calling Convention (v1.1) allows for a reduced overhead
when calling into the firmware, and provides a new feature discovery
mechanism.

Make it visible to KVM guests.

Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm/arm64: KVM: Implement PSCI 1.0 support
Marc Zyngier [Tue, 6 Feb 2018 17:56:11 +0000 (17:56 +0000)]
arm/arm64: KVM: Implement PSCI 1.0 support

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 58e0b2239a4d upstream.

PSCI 1.0 can be trivially implemented by providing the FEATURES
call on top of PSCI 0.2 and returning 1.0 as the PSCI version.

We happily ignore everything else, as they are either optional or
are clarifications that do not require any additional change.

PSCI 1.0 is now the default until we decide to add a userspace
selection API.
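
A sketch of the FEATURES handling (simplified; kvm_psci_implemented()
is a hypothetical helper standing in for the real per-function check):

case PSCI_1_0_FN_PSCI_FEATURES:
        feature = smccc_get_arg1(vcpu);
        /* report "supported, no flags" for the functions we implement */
        val = kvm_psci_implemented(feature) ? 0 : PSCI_RET_NOT_SUPPORTED;
        break;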

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm/arm64: KVM: Add smccc accessors to PSCI code
Marc Zyngier [Tue, 6 Feb 2018 17:56:10 +0000 (17:56 +0000)]
arm/arm64: KVM: Add smccc accessors to PSCI code

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 84684fecd7ea upstream.

Instead of open coding the accesses to the various registers,
let's add explicit SMCCC accessors.
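
A sketch of the accessors (SMCCC passes the function ID in r0/x0 and
returns results in r0-r3):

static unsigned long smccc_get_function(struct kvm_vcpu *vcpu)
{
        return vcpu_get_reg(vcpu, 0);
}

static void smccc_set_retval(struct kvm_vcpu *vcpu, unsigned long a0,
                             unsigned long a1, unsigned long a2,
                             unsigned long a3)
{
        vcpu_set_reg(vcpu, 0, a0);
        vcpu_set_reg(vcpu, 1, a1);
        vcpu_set_reg(vcpu, 2, a2);
        vcpu_set_reg(vcpu, 3, a3);
}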

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm/arm64: KVM: Add PSCI_VERSION helper
Marc Zyngier [Tue, 6 Feb 2018 17:56:09 +0000 (17:56 +0000)]
arm/arm64: KVM: Add PSCI_VERSION helper

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit d0a144f12a7c upstream.

As we're about to trigger a PSCI version explosion, it doesn't
hurt to introduce a PSCI_VERSION helper that is going to be
used everywhere.
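
A sketch of the helper (shift/mask macro names assumed from the PSCI
uapi header):

#define PSCI_VERSION(maj, min)                                          \
        ((((maj) << PSCI_VERSION_MAJOR_SHIFT) & PSCI_VERSION_MAJOR_MASK) | \
         ((min) & PSCI_VERSION_MINOR_MASK))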

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm/arm64: KVM: Consolidate the PSCI include files
Marc Zyngier [Tue, 6 Feb 2018 17:56:08 +0000 (17:56 +0000)]
arm/arm64: KVM: Consolidate the PSCI include files

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 1a2fb94e6a77 upstream.

As we're about to update the PSCI support, and because I'm lazy,
let's move the PSCI include file to include/kvm so that both
ARM architectures can find it.

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: KVM: Increment PC after handling an SMC trap
Marc Zyngier [Tue, 6 Feb 2018 17:56:07 +0000 (17:56 +0000)]
arm64: KVM: Increment PC after handling an SMC trap

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit f5115e8869e1 upstream.

When handling an SMC trap, the "preferred return address" is set
to that of the SMC, and not the next PC (which is a departure from
the behaviour of an SMC that isn't trapped).

Increment PC in the handler, as the guest is otherwise forever
stuck...
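
A sketch of the handler after the fix:

static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
        /* "not supported", per the existing SMCCC handling */
        vcpu_set_reg(vcpu, 0, ~0UL);
        /* advance past the SMC so the guest doesn't re-execute it */
        kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
        return 1;
}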

Cc: stable@vger.kernel.org
Fixes: acfb3b883f6d ("arm64: KVM: Fix SMCCC handling of unimplemented SMC/HVC calls")
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: Branch predictor hardening for Cavium ThunderX2
Jayachandran C [Fri, 19 Jan 2018 12:22:47 +0000 (04:22 -0800)]
arm64: Branch predictor hardening for Cavium ThunderX2

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit f3d795d9b360 upstream.

Use PSCI-based mitigation for speculative execution attacks targeting
the branch predictor. We use the same mechanism as the one used for
Cortex-A CPUs; we expect the PSCI version call to have the side effect
of clearing the BTBs.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jayachandran C <jnair@caviumnetworks.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: Implement branch predictor hardening for Falkor
Shanker Donthineni [Fri, 5 Jan 2018 20:28:59 +0000 (14:28 -0600)]
arm64: Implement branch predictor hardening for Falkor

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit ec82b567a74f upstream.

Falkor is susceptible to branch predictor aliasing and can
theoretically be attacked by malicious code. This patch
implements a mitigation for these attacks, preventing any
malicious entries from affecting other victim contexts.

Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
[will: fix label name when !CONFIG_KVM and remove references to MIDR_FALKOR]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: Implement branch predictor hardening for affected Cortex-A CPUs
Will Deacon [Wed, 3 Jan 2018 12:46:21 +0000 (12:46 +0000)]
arm64: Implement branch predictor hardening for affected Cortex-A CPUs

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit aa6acde65e03 upstream.

Cortex-A57, A72, A73 and A75 are susceptible to branch predictor aliasing
and can theoretically be attacked by malicious code.

This patch implements a PSCI-based mitigation for these CPUs when available.
The call into firmware will invalidate the branch predictor state, preventing
any malicious entries from affecting other victim contexts.

Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: cputype: Add missing MIDR values for Cortex-A72 and Cortex-A75
Will Deacon [Wed, 3 Jan 2018 11:19:34 +0000 (11:19 +0000)]
arm64: cputype: Add missing MIDR values for Cortex-A72 and Cortex-A75

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit a65d219fe5dc upstream.

Hook up MIDR values for the Cortex-A72 and Cortex-A75 CPUs, since they
will soon need MIDR matches for hardening the branch predictor.
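
A sketch of the additions (the part numbers are our reading of the ARM
documentation and should be treated as assumptions):

#define ARM_CPU_PART_CORTEX_A72         0xD08
#define ARM_CPU_PART_CORTEX_A75         0xD0A

#define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
#define MIDR_CORTEX_A75 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A75)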

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: entry: Apply BP hardening for suspicious interrupts from EL0
Will Deacon [Fri, 2 Feb 2018 17:31:40 +0000 (17:31 +0000)]
arm64: entry: Apply BP hardening for suspicious interrupts from EL0

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 30d88c0e3ace upstream.

It is possible to take an IRQ from EL0 following a branch to a kernel
address in such a way that the IRQ is prioritised over the instruction
abort. Whilst an attacker would need to get the stars to align here,
it might be sufficient with enough calibration, so perform BP hardening
in the rare case that we see a kernel address in the ELR when handling
an IRQ from EL0.

Reported-by: Dan Hettena <dhettena@nvidia.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: entry: Apply BP hardening for high-priority synchronous exceptions
Will Deacon [Fri, 2 Feb 2018 17:31:39 +0000 (17:31 +0000)]
arm64: entry: Apply BP hardening for high-priority synchronous exceptions

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 5dfc6ed27710 upstream.

Software-step and PC alignment fault exceptions have higher priority than
instruction abort exceptions, so apply the BP hardening hooks there too
if the user PC appears to reside in kernel space.

Reported-by: Dan Hettena <dhettena@nvidia.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: KVM: Use per-CPU vector when BP hardening is enabled
Marc Zyngier [Wed, 3 Jan 2018 16:38:35 +0000 (16:38 +0000)]
arm64: KVM: Use per-CPU vector when BP hardening is enabled

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 6840bdd73d07 upstream.

Now that we have per-CPU vectors, let's plug them into the KVM/arm64 code.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Conflicts:
arch/arm/include/asm/kvm_mmu.h
arch/arm64/include/asm/kvm_mmu.h
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: Move BP hardening to check_and_switch_context
Marc Zyngier [Fri, 19 Jan 2018 15:42:09 +0000 (15:42 +0000)]
arm64: Move BP hardening to check_and_switch_context

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit a8e4c0a919ae upstream.

We call arm64_apply_bp_hardening() from post_ttbr_update_workaround,
which has the unexpected consequence of being triggered on every
exception return to userspace when ARM64_SW_TTBR0_PAN is selected,
even if no context switch actually occurred.

This is a bit suboptimal, and it would be more logical to only
invalidate the branch predictor when we actually switch to
a different mm.

In order to solve this, move the call to arm64_apply_bp_hardening()
into check_and_switch_context(), where we're guaranteed to pick
a different mm context.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: Add skeleton to harden the branch predictor against aliasing attacks
Will Deacon [Wed, 3 Jan 2018 11:17:58 +0000 (11:17 +0000)]
arm64: Add skeleton to harden the branch predictor against aliasing attacks

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 0f15adbb2861 upstream.

Aliasing attacks against CPU branch predictors can allow an attacker to
redirect speculative control flow on some CPUs and potentially divulge
information from one context to another.

This patch adds initial skeleton code behind a new Kconfig option to
enable implementation-specific mitigations against these attacks for
CPUs that are affected.

Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Conflicts:
arch/arm64/kernel/cpufeature.c
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: Move post_ttbr_update_workaround to C code
Marc Zyngier [Tue, 2 Jan 2018 18:19:39 +0000 (18:19 +0000)]
arm64: Move post_ttbr_update_workaround to C code

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 95e3de3590e3 upstream.

We will soon need to invoke a CPU-specific function pointer after changing
page tables, so move post_ttbr_update_workaround out into C code to make
this possible.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Conflicts:
arch/arm64/include/asm/assembler.h
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  drivers/firmware: Expose psci_get_version through psci_ops structure
Will Deacon [Tue, 2 Jan 2018 21:45:41 +0000 (21:45 +0000)]
drivers/firmware: Expose psci_get_version through psci_ops structure

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit d68e3ba5303f upstream.

Entry into recent versions of ARM Trusted Firmware will invalidate the CPU
branch predictor state in order to protect against aliasing attacks.

This patch exposes the PSCI "VERSION" function via psci_ops, so that it
can be invoked outside of the PSCI driver where necessary.
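
A sketch of the hookup during PSCI probe (exact location assumed):

psci_ops.get_version = psci_get_version;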

Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: cpufeature: Pass capability structure to ->enable callback
Will Deacon [Tue, 2 Jan 2018 21:37:25 +0000 (21:37 +0000)]
arm64: cpufeature: Pass capability structure to ->enable callback

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 0a0d111d40fd upstream.

In order to invoke the CPU capability ->matches callback from the ->enable
callback for applying local-CPU workarounds, we need a handle on the
capability structure.

This patch passes a pointer to the capability structure to the ->enable
callback.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: Run enable method for errata work arounds on late CPUs
Suzuki K Poulose [Wed, 17 Jan 2018 17:42:20 +0000 (17:42 +0000)]
arm64: Run enable method for errata work arounds on late CPUs

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 55b35d070c25 upstream.

When a CPU is brought up after we have finalised the system
wide capabilities (i.e., features and errata), we make sure the
new CPU doesn't need a new errata work around which has not been
detected already. However, we don't run the enable() method on the new
CPU for the errata work arounds already detected. This could leave
the new CPU running without potential work arounds.
It is up to the "enable()" method to decide if this CPU should
do something about the errata.

Fixes: commit 6a6efbb45b7d95c84 ("arm64: Verify CPU errata work arounds on hotplugged CPU")
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: cpufeature: __this_cpu_has_cap() shouldn't stop early
James Morse [Mon, 15 Jan 2018 19:38:54 +0000 (19:38 +0000)]
arm64: cpufeature: __this_cpu_has_cap() shouldn't stop early

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit edf298cfce47 upstream.

this_cpu_has_cap() tests caps->desc not caps->matches, so it stops
walking the list when it finds a 'silent' feature, instead of
walking to the end of the list.

Prior to v4.6's 644c2ae198412 ("arm64: cpufeature: Test 'matches' pointer
to find the end of the list") we always tested desc to find the end of
a capability list. This was changed for dubious things like PAN_NOT_UAO.
v4.7's e3661b128e53e ("arm64: Allow a capability to be checked on
single CPU") added this_cpu_has_cap() using the old desc style test.

CC: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: futex: Mask __user pointers prior to dereference
Will Deacon [Mon, 5 Feb 2018 15:34:24 +0000 (15:34 +0000)]
arm64: futex: Mask __user pointers prior to dereference

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 91b2d3442f6a upstream.

The arm64 futex code has some explicit dereferencing of user pointers
when performing atomic operations in response to a futex command. This
patch uses masking to limit any speculative futex operations to within
the user address space.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: uaccess: Mask __user pointers for __arch_{clear, copy_*}_user
Will Deacon [Mon, 5 Feb 2018 15:34:23 +0000 (15:34 +0000)]
arm64: uaccess: Mask __user pointers for __arch_{clear, copy_*}_user

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit f71c2ffcb20d upstream.

Like we've done for get_user and put_user, ensure that user pointers
are masked before invoking the underlying __arch_{clear,copy_*}_user
operations.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: uaccess: Don't bother eliding access_ok checks in __{get, put}_user
Will Deacon [Mon, 5 Feb 2018 15:34:22 +0000 (15:34 +0000)]
arm64: uaccess: Don't bother eliding access_ok checks in __{get, put}_user

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 84624087dd7e upstream.

access_ok isn't an expensive operation once the addr_limit for the current
thread has been loaded into the cache. Given that the initial access_ok
check preceding a sequence of __{get,put}_user operations will take
the brunt of the miss, we can make the __* variants identical to the
full-fat versions, which brings with it the benefits of address masking.

The likely cost in these sequences will be from toggling PAN/UAO, which
we can address later by implementing the *_unsafe versions.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: uaccess: Prevent speculative use of the current addr_limit
Will Deacon [Mon, 5 Feb 2018 15:34:21 +0000 (15:34 +0000)]
arm64: uaccess: Prevent speculative use of the current addr_limit

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit c2f0ad4fc089 upstream.

A mispredicted conditional call to set_fs could result in the wrong
addr_limit being forwarded under speculation to a subsequent access_ok
check, potentially forming part of a spectre-v1 attack using uaccess
routines.

This patch prevents this forwarding from taking place by putting heavy
barriers in set_fs after writing the addr_limit.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: entry: Ensure branch through syscall table is bounded under speculation
Will Deacon [Mon, 5 Feb 2018 15:34:20 +0000 (15:34 +0000)]
arm64: entry: Ensure branch through syscall table is bounded under speculation

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 6314d90e6493 upstream.

In a similar manner to array_index_mask_nospec, this patch introduces an
assembly macro (mask_nospec64) which can be used to bound a value under
speculation. This macro is then used to ensure that the indirect branch
through the syscall table is bounded under speculation, with out-of-range
addresses speculating as calls to sys_io_setup (0).

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: Use pointer masking to limit uaccess speculation
Robin Murphy [Mon, 5 Feb 2018 15:34:19 +0000 (15:34 +0000)]
arm64: Use pointer masking to limit uaccess speculation

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 4d8efc2d5ee4 upstream.

Similarly to x86, mitigate speculation past an access_ok() check by
masking the pointer against the address limit before use.

Even if we don't expect speculative writes per se, it is plausible that
a CPU may still speculate at least as far as fetching a cache line for
writing, hence we also harden put_user() and clear_user() for peace of
mind.
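
A sketch of the masking helper (details assumed; the flag dance relies
on USER_DS being an inclusive, mask-shaped limit, see the commit below):

static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
{
        void __user *safe_ptr;

        asm volatile(
        "       bics    xzr, %1, %2\n"     /* any bits above addr_limit? */
        "       csel    %0, %1, xzr, eq\n" /* if so, squash ptr to 0 */
        : "=&r" (safe_ptr)
        : "r" (ptr), "r" (current_thread_info()->addr_limit)
        : "cc");

        csdb();                 /* resolve outstanding predictions */
        return safe_ptr;
}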

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: Make USER_DS an inclusive limit
Robin Murphy [Mon, 5 Feb 2018 15:34:18 +0000 (15:34 +0000)]
arm64: Make USER_DS an inclusive limit

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 51369e398d0d upstream.

Currently, USER_DS represents an exclusive limit while KERNEL_DS is
inclusive. In order to do some clever trickery for speculation-safe
masking, we need them both to behave equivalently - there aren't enough
bits to make KERNEL_DS exclusive, so we have precisely one option. This
also happens to correct a longstanding false negative for a range
ending on the very top byte of kernel memory.

Mark Rutland points out that we've actually got the semantics of
addresses vs. segments muddled up in most of the places we need to
amend, so shuffle the {USER,KERNEL}_DS definitions around such that we
can correct those properly instead of just pasting "-1"s everywhere.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years ago  arm64: Implement array_index_mask_nospec()
Robin Murphy [Mon, 5 Feb 2018 15:34:17 +0000 (15:34 +0000)]
arm64: Implement array_index_mask_nospec()

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 022620eed3d0 upstream.

Provide an optimised, assembly implementation of array_index_mask_nospec()
for arm64 so that the compiler is not in a position to transform the code
in ways which affect its ability to inhibit speculation (e.g. by introducing
conditional branches).

This is similar to the sequence used by x86, modulo architectural differences
in the carry/borrow flags.
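
A hedged reconstruction of the sequence (operand constraints may differ from
the actual patch): CMP sets the carry flag when there is no borrow, i.e. when
idx >= sz, and SBC then turns the flag into an all-ones or all-zeroes mask
without any branch.

  static inline unsigned long array_index_mask_nospec(unsigned long idx,
                                                      unsigned long sz)
  {
          unsigned long mask;

          asm volatile(
          "       cmp     %1, %2\n"          /* C set iff idx >= sz */
          "       sbc     %0, xzr, xzr\n"    /* 0 - 0 - !C: ~0 iff idx < sz */
          : "=r" (mask)
          : "r" (idx), "r" (sz)
          : "cc");

          csdb();        /* see the CSDB barrier entry below */
          return mask;
  }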

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: barrier: Add CSDB macros to control data-value prediction
Will Deacon [Mon, 5 Feb 2018 15:34:16 +0000 (15:34 +0000)]
arm64: barrier: Add CSDB macros to control data-value prediction

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 669474e772b9 upstream.

For CPUs capable of data value prediction, CSDB waits for any outstanding
predictions to architecturally resolve before allowing speculative execution
to continue. Provide macros to expose it to the arch code.
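
CSDB is allocated in the hint space, so a plausible shape for the two macros
is the following hedged sketch (one for C, one for assembly):

  /* barrier.h: C side */
  #define csdb()  asm volatile("hint #20" : : : "memory")

  /* assembler.h: assembly side */
          .macro  csdb
          hint    #20
          .endm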

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Conflicts:
arch/arm64/include/asm/assembler.h
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoperf: arm_spe: Fail device probe when arm64_kernel_unmapped_at_el0()
Will Deacon [Mon, 27 Nov 2017 15:49:53 +0000 (15:49 +0000)]
perf: arm_spe: Fail device probe when arm64_kernel_unmapped_at_el0()

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 7a4a0c1555b8 upstream.

When running with the kernel unmapped whilst at EL0, the virtually-addressed
SPE buffer is also unmapped, which can lead to buffer faults if userspace
profiling is enabled and potentially also when writing back kernel samples
unless an expensive drain operation is performed on exception return.

For now, fail the SPE driver probe when arm64_kernel_unmapped_at_el0().
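
A hedged sketch of the probe-time check (the exact warning text may differ
from the patch):

  /* in the SPE driver's probe path */
  if (arm64_kernel_unmapped_at_el0()) {
          dev_warn_once(&pdev->dev,
                        "profiling buffer inaccessible. Try passing \"kpti=off\" on the command line\n");
          return -EPERM;
  }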

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: idmap: Use "awx" flags for .idmap.text .pushsection directives
Will Deacon [Mon, 29 Jan 2018 12:00:00 +0000 (12:00 +0000)]
arm64: idmap: Use "awx" flags for .idmap.text .pushsection directives

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 439e70e27a51 upstream.

The identity map is mapped as both writeable and executable by the
SWAPPER_MM_MMUFLAGS and this is relied upon by the kpti code to manage
a synchronisation flag. Update the .pushsection flags to reflect the
actual mapping attributes.
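
That is, sections previously pushed as "ax" now also declare the writeable
attribute, per the subject line:

          .pushsection ".idmap.text", "awx"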

Reported-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: entry: Reword comment about post_ttbr_update_workaround
Will Deacon [Mon, 29 Jan 2018 11:59:58 +0000 (11:59 +0000)]
arm64: entry: Reword comment about post_ttbr_update_workaround

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit f167211a93ac upstream.

We don't fully understand the Cavium ThunderX erratum, but it appears
that mapping the kernel as nG can lead to horrible consequences such as
attempting to execute userspace from kernel context. Since kpti isn't
enabled for these CPUs anyway, simplify the comment justifying the lack
of post_ttbr_update_workaround in the exception trampoline.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: Force KPTI to be disabled on Cavium ThunderX
Marc Zyngier [Mon, 29 Jan 2018 11:59:56 +0000 (11:59 +0000)]
arm64: Force KPTI to be disabled on Cavium ThunderX

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 6dc52b15c4a4 upstream.

Cavium ThunderX's erratum 27456 results in a corruption of icache
entries that are loaded from memory that is mapped as non-global
(i.e. ASID-tagged).

As KPTI is based on memory being mapped non-global, let's prevent
it from kicking in if this erratum is detected.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[will: Update comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: kpti: Add ->enable callback to remap swapper using nG mappings
Will Deacon [Tue, 6 Feb 2018 22:22:50 +0000 (22:22 +0000)]
arm64: kpti: Add ->enable callback to remap swapper using nG mappings

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit f992b4dfd58b upstream.

Defaulting to global mappings for kernel space is generally good for
performance and appears to be necessary for Cavium ThunderX. If we
subsequently decide that we need to enable kpti, then we need to rewrite
our existing page table entries to be non-global. This is fiddly, and
made worse by the possible use of contiguous mappings, which require
a strict break-before-make sequence.

Since the enable callback runs on each online CPU from stop_machine
context, we can have all CPUs enter the idmap, where secondaries can
wait for the primary CPU to rewrite swapper with its MMU off. It's all
fairly horrible, but at least it only runs once.

Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Conflicts:
arch/arm64/mm/proc.S
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: mm: Permit transitioning from Global to Non-Global without BBM
Will Deacon [Mon, 29 Jan 2018 11:59:54 +0000 (11:59 +0000)]
arm64: mm: Permit transitioning from Global to Non-Global without BBM

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 4e6020565596 upstream.

Break-before-make is not needed when transitioning from Global to
Non-Global mappings, provided that the contiguous hint is not being used.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: kpti: Make use of nG dependent on arm64_kernel_unmapped_at_el0()
Will Deacon [Mon, 29 Jan 2018 11:59:53 +0000 (11:59 +0000)]
arm64: kpti: Make use of nG dependent on arm64_kernel_unmapped_at_el0()

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 41acec624087 upstream.

To allow systems which do not require kpti to continue running with
global kernel mappings (which appears to be a requirement for Cavium
ThunderX due to a CPU erratum), make the use of nG in the kernel page
tables dependent on arm64_kernel_unmapped_at_el0(), which is resolved
at runtime.
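
A hedged sketch of the mechanism: the nG attribute becomes a runtime-resolved
macro rather than a build-time constant, roughly

  #define PTE_MAYBE_NG    (arm64_kernel_unmapped_at_el0() ? PTE_NG : 0)
  #define PMD_MAYBE_NG    (arm64_kernel_unmapped_at_el0() ? PMD_SECT_NG : 0)

which the default page-protection definitions can then include in place of an
unconditional PTE_NG.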

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: Turn on KPTI only on CPUs that need it
Jayachandran C [Fri, 19 Jan 2018 12:22:48 +0000 (04:22 -0800)]
arm64: Turn on KPTI only on CPUs that need it

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 0ba2e29c7fc1 upstream.

Whitelist Broadcom Vulcan/Cavium ThunderX2 processors in
unmap_kernel_at_el0(). These CPUs are not vulnerable to
CVE-2017-5754 and do not need KPTI when KASLR is off.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jayachandran C <jnair@caviumnetworks.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs
Jayachandran C [Mon, 8 Jan 2018 06:53:35 +0000 (22:53 -0800)]
arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 0d90718871fe upstream.

Add the older Broadcom ID as well as the new Cavium ID for ThunderX2
CPUs.
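
A hedged sketch of the additions (the part numbers are our reading of the
upstream change and should be checked against cputype.h):

  #define BRCM_CPU_PART_VULCAN            0x516
  #define CAVIUM_CPU_PART_THUNDERX2       0x0AF

  #define MIDR_BRCM_VULCAN        MIDR_CPU_MODEL(ARM_CPU_IMP_BRCM, BRCM_CPU_PART_VULCAN)
  #define MIDR_CAVIUM_THUNDERX2   MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX2)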

Signed-off-by: Jayachandran C <jnair@caviumnetworks.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: kpti: Fix the interaction between ASID switching and software PAN
Catalin Marinas [Wed, 10 Jan 2018 13:18:30 +0000 (13:18 +0000)]
arm64: kpti: Fix the interaction between ASID switching and software PAN

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 6b88a32c7af6 upstream.

With ARM64_SW_TTBR0_PAN enabled, the exception entry code checks the
active ASID to decide whether user access was enabled (non-zero ASID)
when the exception was taken. On return from exception, if user access
was previously disabled, it re-instates TTBR0_EL1 from the per-thread
saved value (updated in switch_mm() or efi_set_pgd()).

Commit 7655abb95386 ("arm64: mm: Move ASID from TTBR0 to TTBR1") makes a
TTBR0_EL1 + ASID switching non-atomic. Subsequently, commit 27a921e75711
("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN") changes the
__uaccess_ttbr0_disable() function and asm macro to first write the
reserved TTBR0_EL1 followed by the ASID=0 update in TTBR1_EL1. If an
exception occurs between these two, the exception return code will
re-instate a valid TTBR0_EL1. Similar scenario can happen in
cpu_switch_mm() between setting the reserved TTBR0_EL1 and the ASID
update in cpu_do_switch_mm().

This patch reverts the entry.S check for ASID == 0 to TTBR0_EL1 and
disables the interrupts around the TTBR0_EL1 and ASID switching code in
__uaccess_ttbr0_disable(). It also ensures that, when returning from the
EFI runtime services, efi_set_pgd() doesn't leave a non-zero ASID in
TTBR1_EL1 by using uaccess_ttbr0_{enable,disable}.

The accesses to current_thread_info()->ttbr0 are updated to use
READ_ONCE/WRITE_ONCE.

As a safety measure, __uaccess_ttbr0_enable() always masks out any
existing non-zero ASID TTBR1_EL1 before writing in the new ASID.

Fixes: 27a921e75711 ("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN")
Acked-by: Will Deacon <will.deacon@arm.com>
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Conflicts:
arch/arm64/include/asm/asm-uaccess.h
arch/arm64/include/asm/uaccess.h
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: mm: Introduce TTBR_ASID_MASK for getting at the ASID in the TTBR
Will Deacon [Fri, 1 Dec 2017 17:33:48 +0000 (17:33 +0000)]
arm64: mm: Introduce TTBR_ASID_MASK for getting at the ASID in the TTBR

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit b519538dfefc upstream.

There are now a handful of open-coded masks to extract the ASID from a
TTBR value, so introduce a TTBR_ASID_MASK and use that instead.
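
On arm64 the ASID lives in TTBRx_EL1[63:48], so the new mask is presumably
just (hedged sketch):

  #define TTBR_ASID_MASK  (UL(0xffff) << 48)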

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: capabilities: Handle duplicate entries for a capability
Suzuki K Poulose [Tue, 9 Jan 2018 16:12:18 +0000 (16:12 +0000)]
arm64: capabilities: Handle duplicate entries for a capability

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 67948af41f2e upstream.

Sometimes a single capability could be listed multiple times with
differing matches(), e.g., CPU errata for different MIDR versions.
This breaks verify_local_cpu_feature() and this_cpu_has_cap(), as
we stop checking for a capability on a CPU at the first entry with
a matching capability in the given table, which is not sufficient.
Make sure we run the checks for all entries of the same capability.
We do this by fixing __this_cpu_has_cap() to run through all the
entries in the given table for a match and reuse it for
verify_local_cpu_feature().
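
A hedged sketch of the fixed lookup: walk every entry in the table and stop
only on an entry that both names the capability and matches this CPU.

  static bool __this_cpu_has_cap(const struct arm64_cpu_capabilities *caps,
                                 unsigned int cap)
  {
          for (; caps->matches; caps++)
                  if (caps->capability == cap &&
                      caps->matches(caps, SCOPE_LOCAL_CPU))
                          return true;

          return false;
  }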

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: Take into account ID_AA64PFR0_EL1.CSV3
Will Deacon [Mon, 27 Nov 2017 18:29:30 +0000 (18:29 +0000)]
arm64: Take into account ID_AA64PFR0_EL1.CSV3

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 179a56f6f9fb upstream.

For non-KASLR kernels where the KPTI behaviour has not been overridden
on the command line we can use ID_AA64PFR0_EL1.CSV3 to determine whether
or not we should unmap the kernel whilst running at EL0.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Conflicts:
arch/arm64/kernel/cpufeature.c
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry
Will Deacon [Tue, 14 Nov 2017 16:19:39 +0000 (16:19 +0000)]
arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 0617052ddde3 upstream.

Although CONFIG_UNMAP_KERNEL_AT_EL0 does make KASLR more robust, it's
actually more useful as a mitigation against speculation attacks that
can leak arbitrary kernel data to userspace through speculation.

Reword the Kconfig help message to reflect this, and make the option
depend on EXPERT so that it is on by default for the majority of users.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0
Will Deacon [Tue, 14 Nov 2017 14:41:01 +0000 (14:41 +0000)]
arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 084eb77cd3a8 upstream.

Add a Kconfig entry to control use of the entry trampoline, which allows
us to unmap the kernel whilst running in userspace and improve the
robustness of KASLR.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: use RET instruction for exiting the trampoline
Will Deacon [Tue, 14 Nov 2017 16:15:59 +0000 (16:15 +0000)]
arm64: use RET instruction for exiting the trampoline

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit be04a6d1126b upstream.

Speculation attacks against the entry trampoline can potentially resteer
the speculative instruction stream through the indirect branch and into
arbitrary gadgets within the kernel.

This patch defends against these attacks by forcing a misprediction
through the return stack: a dummy BL instruction loads an entry into
the stack, so that the predicted program flow of the subsequent RET
instruction is to a branch-to-self instruction which is finally resolved
as a branch to the kernel vectors with speculation suppressed.
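
Reduced to its essentials, the sequence reads something like this hedged
sketch (labels illustrative):

          bl      2f              // push a dummy entry (2f) onto the return stack
          b       .               // predicted RET target: harmless branch-to-self
  2:
          ldr     x30, =vectors   // the real destination: the kernel vectors
          ret                     // predicted via the return stack, resolved to x30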

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: kaslr: Put kernel vectors address in separate data page
Will Deacon [Wed, 6 Dec 2017 11:24:02 +0000 (11:24 +0000)]
arm64: kaslr: Put kernel vectors address in separate data page

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 6c27c4082f4f upstream.

The literal pool entry for identifying the vectors base is the only piece
of information in the trampoline page that identifies the true location
of the kernel.

This patch moves it into a page-aligned region of the .rodata section
and maps this adjacent to the trampoline text via an additional fixmap
entry, which protects against any accidental leakage of the trampoline
contents.

Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: entry: Add fake CPU feature for unmapping the kernel at EL0
Will Deacon [Tue, 14 Nov 2017 14:38:19 +0000 (14:38 +0000)]
arm64: entry: Add fake CPU feature for unmapping the kernel at EL0

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit ea1e3de85e94 upstream.

Allow explicit disabling of the entry trampoline on the kernel command
line (kpti=off) by adding a fake CPU feature (ARM64_UNMAP_KERNEL_AT_EL0)
that can be used to toggle the alternative sequences in our entry code and
avoid use of the trampoline altogether if desired. This also allows us to
make use of a static key in arm64_kernel_unmapped_at_el0().

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native tasks
Will Deacon [Tue, 14 Nov 2017 14:33:28 +0000 (14:33 +0000)]
arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native tasks

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 18011eac28c7 upstream.

When unmapping the kernel at EL0, we use tpidrro_el0 as a scratch register
during exception entry from native tasks and subsequently zero it in
the kernel_ventry macro. We can therefore avoid zeroing tpidrro_el0
in the context-switch path for native tasks using the entry trampoline.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: cpu_errata: Add Kryo to Falkor 1003 errata
Stephen Boyd [Wed, 13 Dec 2017 22:19:37 +0000 (14:19 -0800)]
arm64: cpu_errata: Add Kryo to Falkor 1003 errata

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit bb48711800e6 upstream.

The Kryo CPUs are also affected by the Falkor 1003 erratum, so
we need to do the same workaround on Kryo CPUs. The MIDR is
slightly more complicated here: the PART number is not always the
same when looking at all the bits from 15 to 4. Drop the lower 8
bits and just look at the top 4 to see if they are '2', and then
consider those CPUs as Kryo. This covers all the combinations
without having to list them all out.
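
Encoded directly from that description, the matching rule looks roughly like
this hedged, self-contained sketch (macro names illustrative):

  #include <stdbool.h>
  #include <stdint.h>

  #define MIDR_IMPLEMENTOR(midr)  (((midr) >> 24) & 0xff)
  #define IMPLEMENTOR_QCOM        0x51    /* 'Q' */

  /* PART occupies MIDR[15:4]; ignore PART[11:4] and treat any Qualcomm
   * part whose top PART nibble (MIDR[15:12]) is 2 as a Kryo core. */
  static bool midr_is_kryo(uint32_t midr)
  {
          return MIDR_IMPLEMENTOR(midr) == IMPLEMENTOR_QCOM &&
                 ((midr >> 12) & 0xf) == 2;
  }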

Fixes: 38fd94b0275c ("arm64: Work around Falkor erratum 1003")
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Conflicts:
arch/arm64/include/asm/cputype.h
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: erratum: Work around Falkor erratum #E1003 in trampoline code
Will Deacon [Tue, 14 Nov 2017 14:29:19 +0000 (14:29 +0000)]
arm64: erratum: Work around Falkor erratum #E1003 in trampoline code

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit d1777e686ad1 upstream.

We rely on an atomic swizzling of TTBR1 when transitioning from the entry
trampoline to the kernel proper on an exception. We can't rely on this
atomicity in the face of Falkor erratum #E1003, so on affected cores we
can issue a TLB invalidation to invalidate the walk cache prior to
jumping into the kernel. There is still the possibility of a TLB conflict
here due to conflicting walk cache entries prior to the invalidation, but
this doesn't appear to be the case on these CPUs in practice.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: entry: Hook up entry trampoline to exception vectors
Will Deacon [Tue, 14 Nov 2017 14:24:29 +0000 (14:24 +0000)]
arm64: entry: Hook up entry trampoline to exception vectors

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 4bf3286d29f3 upstream.

Hook up the entry trampoline to our exception vectors so that all
exceptions from and returns to EL0 go via the trampoline, which swizzles
the vector base register accordingly. Transitioning to and from the
kernel clobbers x30, so we use tpidrro_el0 and far_el1 as scratch
registers for native tasks.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: entry: Explicitly pass exception level to kernel_ventry macro
Will Deacon [Tue, 14 Nov 2017 14:20:21 +0000 (14:20 +0000)]
arm64: entry: Explicitly pass exception level to kernel_ventry macro

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 5b1f7fe41909 upstream.

We will need to treat exceptions from EL0 differently in kernel_ventry,
so rework the macro to take the exception level as an argument and
construct the branch target using that.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: mm: Map entry trampoline into trampoline and kernel page tables
Will Deacon [Tue, 14 Nov 2017 14:14:17 +0000 (14:14 +0000)]
arm64: mm: Map entry trampoline into trampoline and kernel page tables

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 51a0048beb44 upstream.

The exception entry trampoline needs to be mapped at the same virtual
address in both the trampoline page table (which maps nothing else)
and also the kernel page table, so that we can swizzle TTBR1_EL1 on
exceptions from and return to EL0.

This patch maps the trampoline at a fixed virtual address in the fixmap
area of the kernel virtual address space, which allows the kernel proper
to be randomized with respect to the trampoline when KASLR is enabled.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: entry: Add exception trampoline page for exceptions from EL0
Will Deacon [Tue, 14 Nov 2017 14:07:40 +0000 (14:07 +0000)]
arm64: entry: Add exception trampoline page for exceptions from EL0

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit c7b9adaf85f8 upstream.

To allow unmapping of the kernel whilst running at EL0, we need to
point the exception vectors at an entry trampoline that can map/unmap
the kernel on entry/exit respectively.

This patch adds the trampoline page, although it is not yet plugged
into the vector table and is therefore unused.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: mm: Invalidate both kernel and user ASIDs when performing TLBI
Will Deacon [Thu, 10 Aug 2017 13:13:33 +0000 (14:13 +0100)]
arm64: mm: Invalidate both kernel and user ASIDs when performing TLBI

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 9b0de864b5bc upstream.

Since an mm has both a kernel and a user ASID, we need to ensure that
broadcast TLB maintenance targets both address spaces so that things
like CoW continue to work with the uaccess primitives in the kernel.
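
A hedged sketch of the shape this takes: each user-address TLBI is repeated
for the ASID's userspace twin (bottom bit set) whenever the kernel is
unmapped at EL0.

  #define USER_ASID_FLAG  (UL(1) << 48)   /* ASID bit 0 in the TLBI argument */

  #define __tlbi_user(op, arg) do {                               \
          if (arm64_kernel_unmapped_at_el0())                     \
                  __tlbi(op, (arg) | USER_ASID_FLAG);             \
  } while (0)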

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: mm: Add arm64_kernel_unmapped_at_el0 helper
Will Deacon [Tue, 14 Nov 2017 13:58:08 +0000 (13:58 +0000)]
arm64: mm: Add arm64_kernel_unmapped_at_el0 helper

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit fc0e1299da54 upstream.

In order for code such as TLB invalidation to operate efficiently when
the decision to map the kernel at EL0 is determined at runtime, this
patch introduces a helper function, arm64_kernel_unmapped_at_el0, to
determine whether or not the kernel is mapped whilst running in userspace.

Currently, this just reports the value of CONFIG_UNMAP_KERNEL_AT_EL0,
but will later be hooked up to a fake CPU capability using a static key.
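
Which suggests the initial helper is essentially (hedged sketch):

  static inline bool arm64_kernel_unmapped_at_el0(void)
  {
          return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
  }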

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: mm: Allocate ASIDs in pairs
Will Deacon [Thu, 10 Aug 2017 13:10:28 +0000 (14:10 +0100)]
arm64: mm: Allocate ASIDs in pairs

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 0c8ea531b774 upstream.

In preparation for separate kernel/user ASIDs, allocate them in pairs
for each mm_struct. The bottom bit distinguishes the two: if it is set,
then the ASID will map only userspace.
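
A hedged sketch of the even/odd pairing, with the allocator handing out one
index per pair (macro forms are our reconstruction):

  /* allocator index <-> hardware ASID; bit 0 selects the user half */
  #define asid2idx(asid)  (((asid) & ~ASID_MASK) >> 1)
  #define idx2asid(idx)   (((idx) << 1) & ~ASID_MASK)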

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN
Will Deacon [Thu, 10 Aug 2017 12:58:16 +0000 (13:58 +0100)]
arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 27a921e75711 upstream.

With the ASID now installed in TTBR1, we can re-enable ARM64_SW_TTBR0_PAN
by ensuring that we switch to a reserved ASID of zero when disabling
user access and restore the active user ASID on the uaccess enable path.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: mm: Rename post_ttbr0_update_workaround
Will Deacon [Thu, 10 Aug 2017 12:34:30 +0000 (13:34 +0100)]
arm64: mm: Rename post_ttbr0_update_workaround

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 158d495899ce upstream.

The post_ttbr0_update_workaround hook applies to any change to TTBRx_EL1.
Since we're using TTBR1 for the ASID, rename the hook to make it clearer
as to what it's doing.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: mm: Remove pre_ttbr0_update_workaround for Falkor erratum #E1003
Will Deacon [Thu, 10 Aug 2017 12:29:06 +0000 (13:29 +0100)]
arm64: mm: Remove pre_ttbr0_update_workaround for Falkor erratum #E1003

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 85d13c001497 upstream.

The pre_ttbr0_update_workaround hook is called prior to context-switching
TTBR0 because Falkor erratum E1003 can cause TLB allocation with the wrong
ASID if both the ASID and the base address of the TTBR are updated at
the same time.

With the ASID sitting safely in TTBR1, we no longer update things
atomically, so we can remove the pre_ttbr0_update_workaround macro as
it's no longer required. The erratum infrastructure and documentation
is left around for #E1003, as it will be required by the entry
trampoline code in a future patch.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: mm: Move ASID from TTBR0 to TTBR1
Will Deacon [Thu, 10 Aug 2017 12:19:09 +0000 (13:19 +0100)]
arm64: mm: Move ASID from TTBR0 to TTBR1

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 7655abb95386 upstream.

In preparation for mapping kernelspace and userspace with different
ASIDs, move the ASID to TTBR1 and update switch_mm to context-switch
TTBR0 via an invalid mapping (the zero page).

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: mm: Temporarily disable ARM64_SW_TTBR0_PAN
Will Deacon [Thu, 10 Aug 2017 12:04:48 +0000 (13:04 +0100)]
arm64: mm: Temporarily disable ARM64_SW_TTBR0_PAN

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit 376133b7edc2 upstream.

We're about to rework the way ASIDs are allocated, switch_mm is
implemented and low-level kernel entry/exit is handled, so keep the
ARM64_SW_TTBR0_PAN code out of the way whilst we do the heavy lifting.

It will be re-enabled in a subsequent patch.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agoarm64: mm: Use non-global mappings for kernel space
Will Deacon [Thu, 10 Aug 2017 11:56:18 +0000 (12:56 +0100)]
arm64: mm: Use non-global mappings for kernel space

BugLink: http://bugs.launchpad.net/bugs/1751064
Commit e046eb0c9bf2 upstream.

In preparation for unmapping the kernel whilst running in userspace,
make the kernel mappings non-global so we can avoid expensive TLB
invalidation on kernel exit to userspace.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agomedia: hdpvr: Fix an error handling path in hdpvr_probe()
Arvind Yadav [Fri, 22 Sep 2017 13:07:06 +0000 (09:07 -0400)]
media: hdpvr: Fix an error handling path in hdpvr_probe()

BugLink: http://bugs.launchpad.net/bugs/1751064
commit c0f71bbb810237a38734607ca4599632f7f5d47f upstream.

Here, hdpvr_register_videodev() is responsible for setting up and
registering a video device, and also defines and initializes a worker.
It is the last thing hdpvr_probe() calls, so there is no need to flush
any work here. Instead, unregister v4l2 and free buffers and memory if
hdpvr_probe() fails.

Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Cc: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agomedia: dvb-usb-v2: lmedm04: move ts2020 attach to dm04_lme2510_tuner
Malcolm Priestley [Tue, 26 Sep 2017 21:10:21 +0000 (17:10 -0400)]
media: dvb-usb-v2: lmedm04: move ts2020 attach to dm04_lme2510_tuner

BugLink: http://bugs.launchpad.net/bugs/1751064
commit 7bf7a7116ed313c601307f7e585419369926ab05 upstream.

When the tuner was split from m88rs2000, the attach function was left in
the wrong place.

Move it to dm04_lme2510_tuner to trap errors on failure, removing a call
to lme_coldreset.

This prevents the driver from starting up without any tuner connected,
and traps the following ts2020 probe failure:
LME2510(C): FE Found M88RS2000
ts2020: probe of 0-0060 failed with error -11
...
LME2510(C): TUN Found RS2000 tuner
kasan: CONFIG_KASAN_INLINE enabled
kasan: GPF could be caused by NULL-ptr deref or user memory access
general protection fault: 0000 [#1] PREEMPT SMP KASAN

Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Cc: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agomedia: dvb-usb-v2: lmedm04: Improve logic checking of warm start
Malcolm Priestley [Tue, 26 Sep 2017 21:10:20 +0000 (17:10 -0400)]
media: dvb-usb-v2: lmedm04: Improve logic checking of warm start

BugLink: http://bugs.launchpad.net/bugs/1751064
commit 3d932ee27e852e4904647f15b64dedca51187ad7 upstream.

Warm start does not check whether a genuine device is connected, and
proceeds to the next execution path regardless.

Check that the device returns 0x47 at offset 2 of the USB descriptor
read, and that the full requested amount of 6 bytes was received.

Fixes:
kasan: CONFIG_KASAN_INLINE enabled
kasan: GPF could be caused by NULL-ptr deref or user memory access

Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Cc: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agosched/rt: Up the root domain ref count when passing it around via IPIs
Steven Rostedt (VMware) [Wed, 24 Jan 2018 01:45:38 +0000 (20:45 -0500)]
sched/rt: Up the root domain ref count when passing it around via IPIs

BugLink: http://bugs.launchpad.net/bugs/1751064
commit 364f56653708ba8bcdefd4f0da2a42904baa8eeb upstream.

When issuing an IPI RT push, where an IPI is sent to each CPU that has more
than one RT task scheduled on it, it references the root domain's rto_mask,
which contains all the CPUs within the root domain that have more than one RT
task in the runnable state. The problem is that, after the IPIs are initiated,
the rq->lock is released. This means that the root domain associated with the
run queue could be freed while the IPIs are going around.

Add a sched_get_rd() and a sched_put_rd() that will increment and decrement
the root domain's ref count respectively. This way when initiating the IPIs,
the scheduler will up the root domain's ref count before releasing the
rq->lock, ensuring that the root domain does not go away until the IPI round
is complete.
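
A hedged sketch of the pair (kernel context assumed; free_rootdomain is the
existing RCU release path):

  void sched_get_rd(struct root_domain *rd)
  {
          atomic_inc(&rd->refcount);
  }

  void sched_put_rd(struct root_domain *rd)
  {
          if (!atomic_dec_and_test(&rd->refcount))
                  return;

          call_rcu_sched(&rd->rcu, free_rootdomain);
  }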

Reported-by: Pavan Kondeti <pkondeti@codeaurora.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 4bdced5c9a292 ("sched/rt: Simplify the IPI based RT balancing logic")
Link: http://lkml.kernel.org/r/CAEU1=PkiHO35Dzna8EQqNSKW1fr1y1zRQ5y66X117MG06sQtNA@mail.gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
6 years agosched/rt: Use container_of() to get root domain in rto_push_irq_work_func()
Steven Rostedt (VMware) [Wed, 24 Jan 2018 01:45:37 +0000 (20:45 -0500)]
sched/rt: Use container_of() to get root domain in rto_push_irq_work_func()

BugLink: http://bugs.launchpad.net/bugs/1751064
commit ad0f1d9d65938aec72a698116cd73a980916895e upstream.

When the rto_push_irq_work_func() is called, it looks at the RT overloaded
bitmask in the root domain via the runqueue (rq->rd). The problem is that
during CPU up and down, nothing here stops rq->rd from changing between
taking the rq->rd->rto_lock and releasing it. That means the lock that is
released is not the same lock that was taken.

Instead of using this_rq()->rd to get the root domain, as the irq work is
part of the root domain, we can simply get the root domain from the irq work
that is passed to the routine:

 container_of(work, struct root_domain, rto_push_work)

This keeps the root domain consistent.
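
A hedged sketch of where that expression lands (push/IPI details elided):

  void rto_push_irq_work_func(struct irq_work *work)
  {
          struct root_domain *rd =
                  container_of(work, struct root_domain, rto_push_work);

          /* rd is now stable even if this_rq()->rd changes under us */
          raw_spin_lock(&rd->rto_lock);
          /* ... push the next overloaded CPU or re-queue the IPI ... */
          raw_spin_unlock(&rd->rto_lock);
  }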

Reported-by: Pavan Kondeti <pkondeti@codeaurora.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 4bdced5c9a292 ("sched/rt: Simplify the IPI based RT balancing logic")
Link: http://lkml.kernel.org/r/CAEU1=PkiHO35Dzna8EQqNSKW1fr1y1zRQ5y66X117MG06sQtNA@mail.gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>