git.proxmox.com Git - mirror_ubuntu-zesty-kernel.git/log
7 years ago  block: deal with stale req count of plug list
Ming Lei [Wed, 16 Nov 2016 10:07:05 +0000 (18:07 +0800)]
block: deal with stale req count of plug list

In both the legacy and mq paths, the request count of the plug list is
computed before allocating the request, so the number can be stale when
we fall back to a sleeping allocation; the newly introduced wbt can
sleep too.

This patch deals with the case by checking whether the plug list has
become empty, and fixes the 'BUG: KASAN: stack-out-of-bounds' KASAN
report introduced by Shaohua's patches that dispatch big requests
immediately.

Fixes: 600271d900002 ("blk-mq: immediately dispatch big size request")
Fixes: 50d24c34403c6 ("block: immediately dispatch big size request")
Cc: Shaohua Li <shli@fb.com>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  scsi_lib: untangle 0 and BLK_MQ_RQ_QUEUE_OK
Omar Sandoval [Tue, 15 Nov 2016 19:11:59 +0000 (11:11 -0800)]
scsi_lib: untangle 0 and BLK_MQ_RQ_QUEUE_OK

Let's not depend on any of the BLK_MQ_RQ_QUEUE_* constants having
specific values. No functional change.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  nvme: untangle 0 and BLK_MQ_RQ_QUEUE_OK
Omar Sandoval [Tue, 15 Nov 2016 19:11:58 +0000 (11:11 -0800)]
nvme: untangle 0 and BLK_MQ_RQ_QUEUE_OK

Let's not depend on any of the BLK_MQ_RQ_QUEUE_* constants having
specific values. No functional change.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  loop: return proper error from loop_queue_rq()
Omar Sandoval [Mon, 14 Nov 2016 22:56:17 +0000 (14:56 -0800)]
loop: return proper error from loop_queue_rq()

->queue_rq() should return one of the BLK_MQ_RQ_QUEUE_* constants, not
an errno.
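
A hedged sketch of the pattern (illustrative helper, not the actual loop.c hunk):

    /* ->queue_rq() must return a BLK_MQ_RQ_QUEUE_* value, never a negative errno */
    static int demo_queue_rq_status(int err)
    {
            return err ? BLK_MQ_RQ_QUEUE_ERROR : BLK_MQ_RQ_QUEUE_OK;
    }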

Fixes: f4aa4c7bbac6 ("block: loop: convert to per-device workqueue")
Signed-off-by: Omar Sandoval <osandov@fb.com>
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  sd_zbc: Force use of READ16/WRITE16
Damien Le Moal [Fri, 11 Nov 2016 05:53:26 +0000 (14:53 +0900)]
sd_zbc: Force use of READ16/WRITE16

Normally, sd_read_capacity sets sdp->use_16_for_rw to 1 based on the
disk capacity so that READ16/WRITE16 are used for large drives.
However, for a zoned disk with RC_BASIS set to 0, the capacity reported
through READ_CAPACITY may be very small, leading to use_16_for_rw not being
set and READ10/WRITE10 commands being used, even after the actual zoned disk
capacity is corrected in sd_zbc_read_zones. This causes LBA offset overflow for
accesses beyond 2TB.

As the ZBC standard makes it mandatory for ZBC drives to support
the READ16/WRITE16 commands anyway, make sure that use_16_for_rw is set.
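
A hedged sketch of the kind of change described (fields as named in the message;
the exact sd_zbc.c hunk may differ):

    /* ZBC drives must support READ16/WRITE16, so make sure they get used */
    sdp->use_16_for_rw = 1;
    sdp->use_10_for_rw = 0;    /* assumption: the 10-byte variants are disabled too */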

Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  bsg: Add sparse annotations to bsg_request_fn()
Bart Van Assche [Mon, 26 Sep 2016 02:54:38 +0000 (19:54 -0700)]
bsg: Add sparse annotations to bsg_request_fn()

Avoid that sparse complains about unbalanced lock actions.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  blk-wbt: use BLK_STAT_{READ,WRITE} instead of 0/1
Jens Axboe [Fri, 11 Nov 2016 15:23:53 +0000 (08:23 -0700)]
blk-wbt: use BLK_STAT_{READ,WRITE} instead of 0/1

Since we have proper enums for the stats directions, use them.

Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  blk-wbt: remove stat ops
Jens Axboe [Fri, 11 Nov 2016 04:50:51 +0000 (21:50 -0700)]
blk-wbt: remove stat ops

Again a leftover from when the throttling code was generic. Now that we
just have the block user, get rid of the stat ops and indirections.

Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  blk-wbt: store queue instead of bdi
Jens Axboe [Fri, 11 Nov 2016 04:52:53 +0000 (21:52 -0700)]
blk-wbt: store queue instead of bdi

The bdi was a leftover from when the code was block layer agnostic.
Now that we just support a block layer user, store the queue directly.

Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: move poll code to blk-mq
Jens Axboe [Fri, 4 Nov 2016 15:34:34 +0000 (09:34 -0600)]
block: move poll code to blk-mq

The poll code is blk-mq specific, let's move it to blk-mq.c. This
is a prep patch for improving the polling code.

Signed-off-by: Jens Axboe <axboe@fb.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
7 years ago  Block: mtip32xx: Improvement in code readability when memdup_user() fails.
Sachin Shukla [Fri, 11 Nov 2016 09:04:51 +0000 (14:34 +0530)]
Block: mtip32xx: Improvement in code readability when memdup_user() fails.

There is no need to call kfree() if memdup_user() fails, as no memory
was allocated and the error in the error-valued pointer should be returned.
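
The pattern in question, as a short sketch (not the mtip32xx hunk itself):

    buf = memdup_user(ubuf, len);
    if (IS_ERR(buf))
            return PTR_ERR(buf);    /* nothing was allocated, so no kfree() here */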

Signed-off-by: Sachin Shukla <sachin.s5@samsung.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  blk-mq: blk_mq_try_issue_directly() should lookup hardware queue
Jens Axboe [Fri, 11 Nov 2016 19:24:46 +0000 (12:24 -0700)]
blk-mq: blk_mq_try_issue_directly() should lookup hardware queue

A previous commit changed this to pass in the hardware queue, but
it was using the wrong hardware queue. Hence a request that was
allocated on one hardware queue ended up being issued on another
one, and that caused IO timeouts and oopses on some drivers. Since
the request holds hardware queue private resources, like a tag,
we can't just issue it on a different hardware queue.

Fixes: 2253efc850c4 ("blk-mq: Move more code into blk_mq_direct_issue_request()")
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: hook up writeback throttling
Jens Axboe [Wed, 9 Nov 2016 19:38:14 +0000 (12:38 -0700)]
block: hook up writeback throttling

Enable throttling of buffered writeback to make it a lot
smoother and have far less impact on other system activity.
Background writeback should be, by definition, background
activity. The fact that we flush huge bundles of it at a time
means that it can have a heavy impact on foreground workloads,
which isn't ideal. We can't easily limit the sizes of writes that
we do, since that would impact file system layout in the presence
of delayed allocation. So just throttle back buffered writeback,
unless someone is waiting for it.

The algorithm for when to throttle takes its inspiration from the
CoDel network scheduling algorithm. Like CoDel, blk-wb monitors
the minimum latencies of requests over a window of time. In that
window of time, if the minimum latency of any request exceeds a
given target, then a scale count is incremented and the queue depth
is shrunk. The next monitoring window is shrunk accordingly. Unlike
CoDel, if we hit a window that exhibits good behavior, then we
simply increment the scale count and re-calculate the limits for that
scale value. This prevents us from oscillating between a
close-to-ideal value and max all the time, instead remaining in the
windows where we get good behavior.

Unlike CoDel, blk-wb allows the scale count to go negative. This
happens if we primarily have writes going on. Unlike positive
scale counts, this doesn't change the size of the monitoring window.
When the heavy writers finish, blk-wb quickly snaps back to its
stable state of a zero scale count.
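
A toy model of the scaling step (illustrative only; the real limit calculation
in block/blk-wbt.c differs in detail):

    #include <stdio.h>

    /* Positive scale counts shrink the allowed write depth, negative ones grow it. */
    static unsigned int demo_depth_for_scale(unsigned int max_depth, int scale)
    {
            unsigned int depth = max_depth;

            if (scale > 0)
                    depth = max_depth >> scale;     /* throttling: fewer writes in flight */
            else if (scale < 0)
                    depth = max_depth << -scale;    /* write-mostly phase: allow more */
            return depth ? depth : 1;
    }

    int main(void)
    {
            int scale;

            for (scale = -1; scale <= 3; scale++)
                    printf("scale %2d -> depth %u\n", scale, demo_depth_for_scale(32, scale));
            return 0;
    }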

The patch registers a sysfs entry, 'wb_lat_usec'. This sets the latency
target to be met. It defaults to 2 msec for non-rotational storage, and
75 msec for rotational storage. Setting this value to '0' disables
blk-wb. Generally, a user would not have to touch this setting.

We don't enable WBT on devices that are managed with CFQ, and have
a non-root block cgroup attached. If we have a proportional share setup
on this particular disk, then the wbt throttling will interfere with
that. We don't have a strong need for wbt for that case, since we will
rely on CFQ doing that for us.

Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  blk-wbt: add general throttling mechanism
Jens Axboe [Wed, 9 Nov 2016 19:36:15 +0000 (12:36 -0700)]
blk-wbt: add general throttling mechanism

We can hook this up to the block layer, to help throttle buffered
writes.

wbt registers a few trace points that can be used to track what is
happening in the system:

wbt_lat: 259:0: latency 2446318
wbt_stat: 259:0: rmean=2446318, rmin=2446318, rmax=2446318, rsamples=1,
               wmean=518866, wmin=15522, wmax=5330353, wsamples=57
wbt_step: 259:0: step down: step=1, window=72727272, background=8, normal=16, max=32

This shows a sync issue event (wbt_lat) that exceeded its time. wbt_stat
dumps the current read/write stats for that window, and wbt_step shows a
step down event where we now scale back writes. Each trace includes the
device, 259:0 in this case.

Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: add scalable completion tracking of requests
Jens Axboe [Tue, 8 Nov 2016 04:32:37 +0000 (21:32 -0700)]
block: add scalable completion tracking of requests

For legacy block, we simply track them in the request queue. For
blk-mq, we track them on a per-sw queue basis, which we can then
sum up through the hardware queues and finally to a per device
state.

The stats are tracked in, roughly, 0.1s interval windows.

Add sysfs files to display the stats.

The feature is off by default, to avoid any extra overhead. In-kernel
users of it can turn it on by setting QUEUE_FLAG_STATS in the queue
flags. We currently don't turn it on if someone just reads any of
the stats files, that is something we could add as well.

Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: cfq_cpd_alloc() should use @gfp
Tejun Heo [Thu, 10 Nov 2016 16:16:37 +0000 (11:16 -0500)]
block: cfq_cpd_alloc() should use @gfp

cfq_cpd_alloc() which is the cpd_alloc_fn implementation for cfq was
incorrectly hard coding GFP_KERNEL instead of using the mask specified
through the @gfp parameter.  This currently doesn't cause any actual
issues because all current callers specify GFP_KERNEL.  Fix it.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: e4a9bde9589f ("blkcg: replace blkcg_policy->cpd_size with ->cpd_alloc/free_fn() methods")
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  nvme: don't pass the full CQE to nvme_complete_async_event
Christoph Hellwig [Thu, 10 Nov 2016 15:32:34 +0000 (07:32 -0800)]
nvme: don't pass the full CQE to nvme_complete_async_event

We only need the status and result fields, and passing them explicitly
makes life a lot easier for the Fibre Channel transport which doesn't
have a full CQE for the fast path case.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  nvme: introduce struct nvme_request
Christoph Hellwig [Thu, 10 Nov 2016 15:32:33 +0000 (07:32 -0800)]
nvme: introduce struct nvme_request

This adds a shared per-request structure for all NVMe I/O.  This structure
is embedded as the first member in all NVMe transport drivers' request
private data and allows implementing common functionality between the
drivers.

The first use is to replace the current abuse of the SCSI command
passthrough fields in struct request for the NVMe command passthrough,
but it will grow more fields to allow implementing things
like common abort handlers in the future.

The passthrough commands are handled by having a pointer to the SQE
(struct nvme_command) in struct nvme_request, and the union of the
possible result fields, which had to be turned from an anonymous
into a named union for that purpose.  This avoids having to pass
a reference to a full CQE around and thus makes checking the result
a lot more lightweight.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  skd: fix function prototype
Arnd Bergmann [Wed, 9 Nov 2016 12:55:35 +0000 (13:55 +0100)]
skd: fix function prototype

Building with W=1 shows a harmless warning for the skd driver:

drivers/block/skd_main.c:2959:1: error: ‘static’ is not at beginning of declaration [-Werror=old-style-declaration]

This changes the prototype to the expected formatting.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  skd: fix msix error handling
Arnd Bergmann [Wed, 9 Nov 2016 12:55:34 +0000 (13:55 +0100)]
skd: fix msix error handling

As reported by gcc -Wmaybe-uninitialized, the cleanup path for
skd_acquire_msix tries to free the already allocated msi-x vectors
in reverse order, but the index variable may not have been
used yet:

drivers/block/skd_main.c: In function ‘skd_acquire_irq’:
drivers/block/skd_main.c:3890:8: error: ‘i’ may be used uninitialized in this function [-Werror=maybe-uninitialized]

This changes the failure path to skip releasing the interrupts
if we have not started requesting them yet.

Fixes: 180b0ae77d49 ("skd: use pci_alloc_irq_vectors")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: set REQ_SYNC if we clear REQ_FUA|REQ_PREFLUSH
Jens Axboe [Wed, 9 Nov 2016 02:39:28 +0000 (19:39 -0700)]
block: set REQ_SYNC if we clear REQ_FUA|REQ_PREFLUSH

If we insert a flush request, we clear REQ_PREFLUSH and/or REQ_FUA,
depending on flush settings. Since op_is_sync() factors those flags
in for deciding whether this request is sync or not, we should
set REQ_SYNC to avoid screwing up this accounting.

This should be less fragile.
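
A self-contained sketch of the flag handling (bit values here are illustrative;
the real flags live in include/linux/blk_types.h):

    #include <stdio.h>

    #define REQ_SYNC      (1u << 3)
    #define REQ_FUA       (1u << 4)
    #define REQ_PREFLUSH  (1u << 5)

    /* When flush handling strips FUA/PREFLUSH, keep the request marked as sync. */
    static unsigned int strip_flush_flags(unsigned int cmd_flags)
    {
            if (cmd_flags & (REQ_FUA | REQ_PREFLUSH))
                    cmd_flags |= REQ_SYNC;
            return cmd_flags & ~(REQ_FUA | REQ_PREFLUSH);
    }

    int main(void)
    {
            unsigned int flags = strip_flush_flags(REQ_PREFLUSH);

            printf("still sync: %d\n", !!(flags & REQ_SYNC));    /* prints 1 */
            return 0;
    }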

Reported-by: Logan Gunthorpe <logang@deltatee.com>
Fixes: b685d3d65ac ("block: treat REQ_FUA and REQ_PREFLUSH as synchronous")
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  writeback: track if we're sleeping on progress in balance_dirty_pages()
Jens Axboe [Thu, 1 Sep 2016 16:20:33 +0000 (10:20 -0600)]
writeback: track if we're sleeping on progress in balance_dirty_pages()

Note in the bdi_writeback structure whenever a task ends up sleeping
waiting for progress. We can use that information in the lower layers
to increase the priority of writes.

Signed-off-by: Jens Axboe <axboe@fb.com>
Reviewed-by: Jan Kara <jack@suse.cz>
7 years ago  skd_main: use %*ph to dump small buffers
Andy Shevchenko [Sat, 22 Oct 2016 16:52:01 +0000 (19:52 +0300)]
skd_main: use %*ph to dump small buffers

Replace the custom approach with the %*ph specifier to dump small buffers in hex format.

Unfortunately we can't use print_hex_dump_bytes() here since the gap is
present, though someone familiar with the code may change this.
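
For reference, %*ph takes a length and a buffer and prints the bytes in hex,
e.g. (a hedged one-liner, not the skd hunk):

    pr_debug("cdb: %*ph\n", 16, cdb);    /* dumps 16 bytes of cdb[] as hex */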

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  skd: use pci_alloc_irq_vectors
Christoph Hellwig [Mon, 7 Nov 2016 19:14:07 +0000 (11:14 -0800)]
skd: use pci_alloc_irq_vectors

Switch the skd driver to use pci_alloc_irq_vectors.  We need two calls to
pci_alloc_irq_vectors as skd only supports multiple MSI-X vectors, but not
multiple MSI vectors.

Otherwise this cleans up a lot of cruft and allows for a lot more common code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  pktcdvd: don't scribble over the bvec array
Christoph Hellwig [Thu, 27 Oct 2016 07:04:44 +0000 (09:04 +0200)]
pktcdvd: don't scribble over the bvec array

Hi Peter, hi Jens,

I've been looking over the multi page bio vec work again recently, and
one of the stumbling blocks is raw biovec access in the pktcdvd.

The first issue is that it directly sets up the page and offset pointers
in the biovec just before calling bio_add_page.  As bio_add_page already
does the setup it's trivial to just switch it to stack variables for the
arguments.

The second issue is the copy code in pkt_make_local_copy, which
effectively is an opencoded version of bio_copy_data except that it
skips pages that already are the same in the source and destination.
But looking at the only caller, we just set up the bio using bio_add_page
to point exactly to the page array that pkt_make_local_copy compares,
so the pages will always be the same and we can just remove this function.

Note that all this is done based on code inspection, I don't have any
packet writing hardware myself.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  blk-mq: Always schedule hctx->next_cpu
Gabriel Krisman Bertazi [Wed, 28 Sep 2016 03:24:24 +0000 (00:24 -0300)]
blk-mq: Always schedule hctx->next_cpu

Commit 0e87e58bf60e ("blk-mq: improve warning for running a queue on the
wrong CPU") attempts to avoid triggering the WARN_ON in
__blk_mq_run_hw_queue when the expected CPU is dead.  Problem is, in the
last batch execution before round robin, blk_mq_hctx_next_cpu can
schedule a dead CPU and also update next_cpu to the next alive CPU in
the mask, which will trigger the WARN_ON despite the previous
workaround.

The following patch fixes this scenario by always scheduling the value
in hctx->next_cpu.  This changes the moment when we round-robin the CPU
running the hctx, but it really doesn't matter, since it still executes
BLK_MQ_CPU_WORK_BATCH times in a row before switching to another CPU.

Fixes: 0e87e58bf60e ("blk-mq: improve warning for running a queue on the wrong CPU")
Signed-off-by: Gabriel Krisman Bertazi <krisman@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: add code to track actual device queue depth
Jens Axboe [Wed, 30 Mar 2016 16:21:08 +0000 (10:21 -0600)]
block: add code to track actual device queue depth

For blk-mq, ->nr_requests does track queue depth, at least at init
time. But for the older queue paths, it's simply a soft setting.
On top of that, it's generally larger than the hardware setting
on purpose, to allow backup of requests for merging.

Fill a hole in struct request with a 'queue_depth' member, which
drivers can set to more closely inform the block layer of the
real queue depth.

Signed-off-by: Jens Axboe <axboe@fb.com>
Reviewed-by: Jan Kara <jack@suse.cz>
7 years ago  blk-mq: immediately dispatch big size request
Shaohua Li [Fri, 4 Nov 2016 00:03:54 +0000 (17:03 -0700)]
blk-mq: immediately dispatch big size request

This is the corresponding part for blk-mq. A disk with multiple hardware
queues doesn't need this as we only hold 1 request at most.

Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: immediately dispatch big size request
Shaohua Li [Fri, 4 Nov 2016 00:03:53 +0000 (17:03 -0700)]
block: immediately dispatch big size request

Currently the block plug holds up to 16 non-mergeable requests. This makes
sense if the request size is small, e.g. to reduce lock contention. But if
the request size is big enough, we don't need to worry about lock
contention. Holding such requests makes no sense and it lowers disk
utilization.

In practice, this improves 10% throughput for my raid5 sequential write
workload.

The size (128k) is arbitrary right now, but it makes sure lock
contention stays small. This probably could be more intelligent, e.g. by
checking the average size of the requests held. Since this is mainly for
sequential IO, it's probably not worth it.

V2: check the last request instead of the first request, so as long as
there is one big size request we flush the plug.
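
A minimal user-space sketch of the plugging decision (the constants are
stand-ins for the values used by the patch, and the list handling is elided):

    #include <stdbool.h>
    #include <stdio.h>

    #define PLUG_MAX_RQ      16                /* plug list holds at most 16 requests */
    #define PLUG_FLUSH_SIZE  (128 * 1024)      /* 128k threshold from the patch */

    /* Decide whether queueing one more request should flush the plug list. */
    static bool plug_should_flush(unsigned int nr_queued, unsigned int last_rq_bytes)
    {
            return nr_queued >= PLUG_MAX_RQ || last_rq_bytes >= PLUG_FLUSH_SIZE;
    }

    int main(void)
    {
            printf("%d\n", plug_should_flush(3, 4096));          /* 0: keep plugging */
            printf("%d\n", plug_should_flush(3, 256 * 1024));    /* 1: big request, flush */
            printf("%d\n", plug_should_flush(16, 4096));         /* 1: plug full, flush */
            return 0;
    }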

Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: drop q argument from bsg_validate_sgv4_hdr
Johannes Thumshirn [Thu, 3 Nov 2016 09:38:54 +0000 (03:38 -0600)]
block: drop q argument from bsg_validate_sgv4_hdr

bsg_validate_sgv4_hdr() doesn't care about the request_queue, so drop it
from its arguments.

Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  nvme: Use BLK_MQ_S_STOPPED instead of QUEUE_FLAG_STOPPED in blk-mq code
Bart Van Assche [Sat, 29 Oct 2016 00:23:40 +0000 (17:23 -0700)]
nvme: Use BLK_MQ_S_STOPPED instead of QUEUE_FLAG_STOPPED in blk-mq code

Make nvme_requeue_req() check BLK_MQ_S_STOPPED instead of
QUEUE_FLAG_STOPPED. Remove the QUEUE_FLAG_STOPPED manipulations
that became superfluous because of this change. Change
blk_queue_stopped() tests into blk_mq_queue_stopped().

This patch fixes a race condition: using queue_flag_clear_unlocked()
is not safe if any other function that manipulates the queue flags
can be called concurrently, e.g. blk_cleanup_queue().

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  nvme: Fix a race condition related to stopping queues
Bart Van Assche [Sat, 29 Oct 2016 00:23:19 +0000 (17:23 -0700)]
nvme: Fix a race condition related to stopping queues

Avoid that nvme_queue_rq() is still running when nvme_stop_queues()
returns.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  dm: Fix a race condition related to stopping and starting queues
Bart Van Assche [Sat, 29 Oct 2016 00:22:16 +0000 (17:22 -0700)]
dm: Fix a race condition related to stopping and starting queues

Ensure that all ongoing dm_mq_queue_rq() and dm_mq_requeue_request()
calls have stopped before setting the "queue stopped" flag. This
allows removing the "queue stopped" test from dm_mq_queue_rq() and
dm_mq_requeue_request(). This patch fixes a race condition because
dm_mq_queue_rq() is called without holding the queue lock and hence
BLK_MQ_S_STOPPED can be set at any time while dm_mq_queue_rq() is
in progress. This patch prevents the following hang from occurring
sporadically when using dm-mq:

INFO: task systemd-udevd:10111 blocked for more than 480 seconds.
Call Trace:
 [<ffffffff8161f397>] schedule+0x37/0x90
 [<ffffffff816239ef>] schedule_timeout+0x27f/0x470
 [<ffffffff8161e76f>] io_schedule_timeout+0x9f/0x110
 [<ffffffff8161fb36>] bit_wait_io+0x16/0x60
 [<ffffffff8161f929>] __wait_on_bit_lock+0x49/0xa0
 [<ffffffff8114fe69>] __lock_page+0xb9/0xc0
 [<ffffffff81165d90>] truncate_inode_pages_range+0x3e0/0x760
 [<ffffffff81166120>] truncate_inode_pages+0x10/0x20
 [<ffffffff81212a20>] kill_bdev+0x30/0x40
 [<ffffffff81213d41>] __blkdev_put+0x71/0x360
 [<ffffffff81214079>] blkdev_put+0x49/0x170
 [<ffffffff812141c0>] blkdev_close+0x20/0x30
 [<ffffffff811d48e8>] __fput+0xe8/0x1f0
 [<ffffffff811d4a29>] ____fput+0x9/0x10
 [<ffffffff810842d3>] task_work_run+0x83/0xb0
 [<ffffffff8106606e>] do_exit+0x3ee/0xc40
 [<ffffffff8106694b>] do_group_exit+0x4b/0xc0
 [<ffffffff81073d9a>] get_signal+0x2ca/0x940
 [<ffffffff8101bf43>] do_signal+0x23/0x660
 [<ffffffff810022b3>] exit_to_usermode_loop+0x73/0xb0
 [<ffffffff81002cb0>] syscall_return_slowpath+0xb0/0xc0
 [<ffffffff81624e33>] entry_SYSCALL_64_fastpath+0xa6/0xa8

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Acked-by: Mike Snitzer <snitzer@redhat.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  dm: Use BLK_MQ_S_STOPPED instead of QUEUE_FLAG_STOPPED in blk-mq code
Bart Van Assche [Sat, 29 Oct 2016 00:22:00 +0000 (17:22 -0700)]
dm: Use BLK_MQ_S_STOPPED instead of QUEUE_FLAG_STOPPED in blk-mq code

Instead of manipulating both QUEUE_FLAG_STOPPED and BLK_MQ_S_STOPPED
in the dm start and stop queue functions, only manipulate the latter
flag. Change blk_queue_stopped() tests into blk_mq_queue_stopped().

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Acked-by: Mike Snitzer <snitzer@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  blk-mq: Add a kick_requeue_list argument to blk_mq_requeue_request()
Bart Van Assche [Sat, 29 Oct 2016 00:21:41 +0000 (17:21 -0700)]
blk-mq: Add a kick_requeue_list argument to blk_mq_requeue_request()

Most blk_mq_requeue_request() and blk_mq_add_to_requeue_list() calls
are followed by kicking the requeue list. Hence add an argument to
these two functions that allows the caller to kick the requeue list. This was
proposed by Christoph Hellwig.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  blk-mq: Introduce blk_mq_quiesce_queue()
Bart Van Assche [Wed, 2 Nov 2016 16:09:51 +0000 (10:09 -0600)]
blk-mq: Introduce blk_mq_quiesce_queue()

blk_mq_quiesce_queue() waits until ongoing .queue_rq() invocations
have finished. This function does *not* wait until all outstanding
requests have finished (this means invocation of request.end_io()).
The algorithm used by blk_mq_quiesce_queue() is as follows:
* Hold either an RCU read lock or an SRCU read lock around
  .queue_rq() calls. The former is used if .queue_rq() does not
  block and the latter if .queue_rq() may block.
* blk_mq_quiesce_queue() first calls blk_mq_stop_hw_queues()
  followed by synchronize_srcu() or synchronize_rcu(). The latter
  call waits for .queue_rq() invocations that started before
  blk_mq_quiesce_queue() was called.
* The blk_mq_hctx_stopped() calls that control whether or not
  .queue_rq() will be called are called with the (S)RCU read lock
  held. This is necessary to avoid race conditions against
  blk_mq_quiesce_queue().

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Ming Lei <tom.leiming@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  blk-mq: Remove blk_mq_cancel_requeue_work()
Bart Van Assche [Sat, 29 Oct 2016 00:20:49 +0000 (17:20 -0700)]
blk-mq: Remove blk_mq_cancel_requeue_work()

Since blk_mq_requeue_work() no longer restarts stopped queues,
canceling requeue work is no longer needed to prevent a stopped
queue from being restarted. Hence remove this function.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Mike Snitzer <snitzer@redhat.com>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  blk-mq: Avoid that requeueing starts stopped queues
Bart Van Assche [Sat, 29 Oct 2016 00:20:32 +0000 (17:20 -0700)]
blk-mq: Avoid that requeueing starts stopped queues

Since blk_mq_requeue_work() starts stopped queues and since
execution of this function can be scheduled after a queue has
been stopped, it is not possible to stop queues without using
an additional state variable to track whether or not the queue
has been stopped. Hence modify blk_mq_requeue_work() such that it
does not start stopped queues. My conclusion after a review of
the blk_mq_stop_hw_queues() and blk_mq_{delay_,}kick_requeue_list()
callers is as follows:
* In the dm driver starting and stopping queues should only happen
  if __dm_suspend() or __dm_resume() is called and not if the
  requeue list is processed.
* In the SCSI core queue stopping and starting should only be
  performed by the scsi_internal_device_block() and
  scsi_internal_device_unblock() functions but not by any other
  function. Although the blk_mq_stop_hw_queue() call in
  scsi_queue_rq() may help to reduce CPU load if a LLD queue is
  full, figuring out whether or not a queue should be restarted
  when requeueing a command would require introducing additional
  locking in scsi_mq_requeue_cmd() to avoid a race with
  scsi_internal_device_block(). Avoid this complexity by removing
  the blk_mq_stop_hw_queue() call from scsi_queue_rq().
* In the NVMe core only the functions that call
  blk_mq_start_stopped_hw_queues() explicitly should start stopped
  queues.
* A blk_mq_start_stopped_hw_queues() call must be added in the
  xen-blkfront driver in its blkif_recover() function.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Roger Pau Monné <roger.pau@citrix.com>
Cc: Mike Snitzer <snitzer@redhat.com>
Cc: James Bottomley <jejb@linux.vnet.ibm.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  blk-mq: Move more code into blk_mq_direct_issue_request()
Bart Van Assche [Sat, 29 Oct 2016 00:20:02 +0000 (17:20 -0700)]
blk-mq: Move more code into blk_mq_direct_issue_request()

Move the "hctx stopped" test and the insert request calls into
blk_mq_direct_issue_request(). Rename that function into
blk_mq_try_issue_directly() to reflect its new semantics. Pass
the hctx pointer to that function instead of looking it up a
second time. These changes avoid having to duplicate code
in the next patch.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  blk-mq: Introduce blk_mq_queue_stopped()
Bart Van Assche [Sat, 29 Oct 2016 00:19:37 +0000 (17:19 -0700)]
blk-mq: Introduce blk_mq_queue_stopped()

The function blk_queue_stopped() allows testing whether or not a
traditional request queue has been stopped. Introduce a helper
function that allows block drivers to query easily whether or not
one or more hardware contexts of a blk-mq queue have been stopped.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  blk-mq: Introduce blk_mq_hctx_stopped()
Bart Van Assche [Sat, 29 Oct 2016 00:19:15 +0000 (17:19 -0700)]
blk-mq: Introduce blk_mq_hctx_stopped()

Multiple functions test the BLK_MQ_S_STOPPED bit so introduce
a helper function that performs this test.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Ming Lei <tom.leiming@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  blk-mq: Do not invoke .queue_rq() for a stopped queue
Bart Van Assche [Sat, 29 Oct 2016 00:18:48 +0000 (17:18 -0700)]
blk-mq: Do not invoke .queue_rq() for a stopped queue

The meaning of the BLK_MQ_S_STOPPED flag is "do not call
.queue_rq()". Hence modify blk_mq_make_request() such that requests
are queued instead of issued if a queue has been stopped.

Reported-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ming Lei <tom.leiming@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Cc: <stable@vger.kernel.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: add bio_iov_iter_get_pages()
Kent Overstreet [Mon, 31 Oct 2016 17:59:24 +0000 (11:59 -0600)]
block: add bio_iov_iter_get_pages()

This is a helper that pins down a range from an iov_iter and adds it to
a bio without requiring a separate memory allocation for the page array.
It will be used for upcoming direct I/O implementations for block devices
and iomap based file systems.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
[hch: ported to the iov_iter interface, renamed and added comments.
      All blame should be directed to me and all fame should go to Kent
      after this!]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  writeback: mark background writeback as such
Jens Axboe [Tue, 1 Nov 2016 16:01:35 +0000 (10:01 -0600)]
writeback: mark background writeback as such

If we're doing background type writes, then use the appropriate
background write flags for that.

Signed-off-by: Jens Axboe <axboe@fb.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
7 years ago  writeback: add wbc_to_write_flags()
Jens Axboe [Tue, 1 Nov 2016 16:00:38 +0000 (10:00 -0600)]
writeback: add wbc_to_write_flags()

Add wbc_to_write_flags(), which returns the write modifier flags to use,
based on a struct writeback_control. No functional changes in this
patch, but it prepares us for factoring other wbc fields for write type.
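
An illustrative stand-alone model of the helper (demo_* names and flag values
are made up; the real definitions are in the writeback and block headers):

    #include <stdio.h>

    #define REQ_SYNC        (1u << 3)
    #define REQ_BACKGROUND  (1u << 8)

    struct demo_wbc {
            int sync_all;           /* models wbc->sync_mode == WB_SYNC_ALL */
            int for_background;     /* models background/kupdate writeback */
    };

    static unsigned int demo_wbc_to_write_flags(const struct demo_wbc *wbc)
    {
            if (wbc->sync_all)
                    return REQ_SYNC;           /* data integrity writeback is sync */
            if (wbc->for_background)
                    return REQ_BACKGROUND;     /* non-urgent, see REQ_BACKGROUND below */
            return 0;
    }

    int main(void)
    {
            struct demo_wbc bg = { .sync_all = 0, .for_background = 1 };

            printf("flags: %#x\n", demo_wbc_to_write_flags(&bg));    /* 0x100 */
            return 0;
    }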

Signed-off-by: Jens Axboe <axboe@fb.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Christoph Hellwig <hch@lst.de>
7 years ago  block: add REQ_BACKGROUND
Jens Axboe [Tue, 1 Nov 2016 15:52:57 +0000 (09:52 -0600)]
block: add REQ_BACKGROUND

This adds a new request flag, REQ_BACKGROUND, that callers can use to
tell the block layer that this is background (non-urgent) IO.

Signed-off-by: Jens Axboe <axboe@fb.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
7 years ago  block: remove the CONFIG_BLOCK ifdef in blk_types.h
Christoph Hellwig [Tue, 1 Nov 2016 13:40:17 +0000 (07:40 -0600)]
block: remove the CONFIG_BLOCK ifdef in blk_types.h

Now that we have a separate header for struct bio_vec there is absolutely
no excuse for including this header from non-block I/O code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  mm: only include blk_types in swap.h if CONFIG_SWAP is enabled
Christoph Hellwig [Tue, 1 Nov 2016 13:40:16 +0000 (07:40 -0600)]
mm: only include blk_types in swap.h if CONFIG_SWAP is enabled

It's only needed for the CONFIG_SWAP-only use of bio_end_io_t.

Because CONFIG_SWAP implies CONFIG_BLOCK this will allow dropping some
ifdefs in blk_types.h.

Instead we'll need to add a few explicit includes that were implicit
before, though.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  ceph: don't include blk_types.h in messenger.h
Christoph Hellwig [Tue, 1 Nov 2016 13:40:15 +0000 (07:40 -0600)]
ceph: don't include blk_types.h in messenger.h

The file only needs the struct bvec_iter declaration, which is available
from bvec.h.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  arm, arm64: don't include blk_types.h in <asm/io.h>
Christoph Hellwig [Tue, 1 Nov 2016 13:40:14 +0000 (07:40 -0600)]
arm, arm64: don't include blk_types.h in <asm/io.h>

No need for it - we only use struct bio_vec in prototypes and already have
forward declarations for it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block,fs: untangle fs.h and blk_types.h
Christoph Hellwig [Tue, 1 Nov 2016 13:40:13 +0000 (07:40 -0600)]
block,fs: untangle fs.h and blk_types.h

Nothing in fs.h should require blk_types.h to be included.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block, fs: move submit_bio to bio.h
Christoph Hellwig [Tue, 1 Nov 2016 13:40:12 +0000 (07:40 -0600)]
block, fs: move submit_bio to bio.h

This is where all the other bio operations live, so users must include
bio.h anyway.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  fs: decouple READ and WRITE from the block layer ops
Christoph Hellwig [Tue, 1 Nov 2016 13:40:11 +0000 (07:40 -0600)]
fs: decouple READ and WRITE from the block layer ops

Move READ and WRITE to kernel.h and don't define them in terms of block
layer ops; they are our generic data direction indicators these days
and no longer bear any resemblance to the block layer ops.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block,fs: use REQ_* flags directly
Christoph Hellwig [Tue, 1 Nov 2016 13:40:10 +0000 (07:40 -0600)]
block,fs: use REQ_* flags directly

Remove the WRITE_* and READ_SYNC wrappers, and just use the flags
directly.  Where applicable this also drops usage of the
bio_set_op_attrs wrapper.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: replace REQ_NOIDLE with REQ_IDLE
Christoph Hellwig [Tue, 1 Nov 2016 13:40:09 +0000 (07:40 -0600)]
block: replace REQ_NOIDLE with REQ_IDLE

Noidle should be the default for writes as seen by all the compound
definitions in fs.h using it.  In fact only direct I/O really should
be using NOIDLE, so turn the whole flag around to get the defaults
right, which will make our life much easier especially once the
WRITE_* defines go away.

This assumes all the existing "raw" users of REQ_SYNC for writes
want noidle behavior, which seems to be spot on from a quick audit.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: treat REQ_FUA and REQ_PREFLUSH as synchronous
Christoph Hellwig [Tue, 1 Nov 2016 13:40:08 +0000 (07:40 -0600)]
block: treat REQ_FUA and REQ_PREFLUSH as synchronous

Instead of requiring everyone to specify the REQ_SYNC flag as well.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: don't use REQ_SYNC in the READ_SYNC definition
Christoph Hellwig [Tue, 1 Nov 2016 13:40:07 +0000 (07:40 -0600)]
block: don't use REQ_SYNC in the READ_SYNC definition

Reads are synchronous by definition, so don't add another flag for it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  btrfs: use op_is_sync to check for synchronous requests
Christoph Hellwig [Tue, 1 Nov 2016 13:40:06 +0000 (07:40 -0600)]
btrfs: use op_is_sync to check for synchronous requests

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  bcache: use op_is_sync to check for synchronous requests
Christoph Hellwig [Tue, 1 Nov 2016 13:40:05 +0000 (07:40 -0600)]
bcache: use op_is_sync to check for synchronous requests

(and remove one layer of masking for the op_is_write call next to it).

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  umem: use op_is_sync to check for synchronous requests
Christoph Hellwig [Tue, 1 Nov 2016 13:40:04 +0000 (07:40 -0600)]
umem: use op_is_sync to check for synchronous requests

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  blk-cgroup: use op_is_sync to check for synchronous requests
Christoph Hellwig [Tue, 1 Nov 2016 13:40:03 +0000 (07:40 -0600)]
blk-cgroup: use op_is_sync to check for synchronous requests

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  cfq-iosched: use op_is_sync instead of opencoding it
Christoph Hellwig [Tue, 1 Nov 2016 13:40:02 +0000 (07:40 -0600)]
cfq-iosched: use op_is_sync instead of opencoding it

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: add a proper block layer data direction encoding
Christoph Hellwig [Thu, 20 Oct 2016 13:12:15 +0000 (15:12 +0200)]
block: add a proper block layer data direction encoding

Currently the block layer op_is_write, bio_data_dir and rq_data_dir
helpers treat every operation that is not a READ as a data out operation.
This worked for surprisingly long, but the new REQ_OP_ZONE_REPORT operation
actually adds a second operation that reads data from the device.
Surprisingly nothing critical relied on this direction, but this might
be a good opportunity to properly fix this issue up.

We take a little inspiration and use the least significant bit of the
operation number to encode the data direction, which just requires us
to renumber the operations to fix this scheme.
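
A small stand-alone illustration of the scheme (the DEMO_* numbers are made up;
the real REQ_OP_* values are in include/linux/blk_types.h):

    #include <stdio.h>

    /* Even op numbers read data from the device, odd op numbers write to it. */
    enum demo_req_op {
            DEMO_OP_READ        = 0,
            DEMO_OP_WRITE       = 1,
            DEMO_OP_ZONE_REPORT = 4,    /* a non-READ op that still reads data */
            DEMO_OP_WRITE_SAME  = 7,
    };

    static int demo_op_is_write(unsigned int op)
    {
            return op & 1;    /* the least significant bit encodes the direction */
    }

    int main(void)
    {
            printf("%d %d %d\n",
                   demo_op_is_write(DEMO_OP_READ),           /* 0 */
                   demo_op_is_write(DEMO_OP_ZONE_REPORT),    /* 0 */
                   demo_op_is_write(DEMO_OP_WRITE_SAME));    /* 1 */
            return 0;
    }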

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: better op and flags encoding
Christoph Hellwig [Fri, 28 Oct 2016 14:48:16 +0000 (08:48 -0600)]
block: better op and flags encoding

Now that we don't need the common flags to overflow outside the range
of a 32-bit type we can encode them the same way for both the bio and
request fields.  This in addition allows us to place the operation
first (and make some room for more ops while we're at it) and to
stop having to shift around the operation values.

In addition this allows passing around only one value in the block layer
instead of two (and eventually also in the file systems, but we can do
that later) and thus clean up a lot of code.

Last but not least this allows decreasing the size of the cmd_flags
field in struct request to 32-bits.  Various functions passing this
value could also be updated, but I'd like to avoid the churn for now.
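
A small illustration of the packing described above (the DEMO_* widths and bit
positions are made up; the real encoding is in include/linux/blk_types.h):

    #include <stdio.h>

    #define DEMO_OP_BITS   8                              /* op lives in the low bits */
    #define DEMO_OP_MASK   ((1u << DEMO_OP_BITS) - 1)
    #define DEMO_OP_WRITE  1u
    #define DEMO_REQ_SYNC  (1u << (DEMO_OP_BITS + 3))     /* flags live above the op */

    int main(void)
    {
            unsigned int cmd_flags = DEMO_OP_WRITE | DEMO_REQ_SYNC;

            printf("op=%u sync=%d\n",
                   cmd_flags & DEMO_OP_MASK,               /* extract the operation */
                   !!(cmd_flags & DEMO_REQ_SYNC));         /* test a flag bit */
            return 0;
    }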

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: split out request-only flags into a new namespace
Christoph Hellwig [Thu, 20 Oct 2016 13:12:13 +0000 (15:12 +0200)]
block: split out request-only flags into a new namespace

A lot of the REQ_* flags are only used on struct requests, and only of
use to the block layer and a few drivers that dig into struct request
internals.

This patch adds a new req_flags_t rq_flags field to struct request for
them, and thus dramatically shrinks the number of common request flags.  It
also removes the unfortunate situation where we have to fit the fields
from the same enum into 32 bits for struct bio and 64 bits for
struct request.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: replace REQ_THROTTLED with a bio flag
Christoph Hellwig [Thu, 20 Oct 2016 13:12:12 +0000 (15:12 +0200)]
block: replace REQ_THROTTLED with a bio flag

It's the last bio-only REQ_* flag, and we have space for it in the bio
bi_flags field.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: move REQ_RAHEAD to common flags
Christoph Hellwig [Thu, 20 Oct 2016 13:12:11 +0000 (15:12 +0200)]
block: move REQ_RAHEAD to common flags

The information that an I/O is a read-ahead can be useful for drivers.
In fact the NVMe driver already checks it, even if it won't ever be set
at the moment.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: REQ_NOMERGE is common to the bio and request
Christoph Hellwig [Thu, 20 Oct 2016 13:12:10 +0000 (15:12 +0200)]
block: REQ_NOMERGE is common to the bio and request

So move it into the common section of the request flags.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: remove bio_is_rw
Christoph Hellwig [Thu, 20 Oct 2016 13:12:09 +0000 (15:12 +0200)]
block: remove bio_is_rw

With the addition of the zoned operations the tests in this function
became incorrect.  But I think it's much better to just open code the
allowed operations in the only caller anyway.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  blk-mq: get rid of confusing blk_map_ctx structure
Jens Axboe [Fri, 28 Oct 2016 01:03:55 +0000 (19:03 -0600)]
blk-mq: get rid of confusing blk_map_ctx structure

We can just use struct blk_mq_alloc_data - it has a few more
members, but we allocate it further down the stack anyway. So
this cleans up the code, and reduces the stack overhead a bit.

Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  blk-mq: update hardware and software queues for sleeping alloc
Jens Axboe [Thu, 27 Oct 2016 15:49:19 +0000 (09:49 -0600)]
blk-mq: update hardware and software queues for sleeping alloc

If we end up sleeping due to running out of requests, we should
update the hardware and software queues in the map ctx structure.
Otherwise we could end up having rq->mq_ctx point to the pre-sleep
context, and risk corrupting ctx->rq_list since we'll be
grabbing the wrong lock when inserting the request.

Reported-by: Dave Jones <davej@codemonkey.org.uk>
Reported-by: Chris Mason <clm@fb.com>
Tested-by: Chris Mason <clm@fb.com>
Fixes: 63581af3f31e ("blk-mq: remove non-blocking pass in blk_mq_map_request")
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  brd: remove support for BLKFLSBUF
Mike Snitzer [Tue, 25 Oct 2016 14:46:23 +0000 (08:46 -0600)]
brd: remove support for BLKFLSBUF

Discontinue having the brd driver destructively free all pages in the
ramdisk in response to the BLKFLSBUF ioctl.  Doing so allows a BLKFLSBUF
ioctl issued to a logical partition to destroy pages of the parent brd
device (and all other partitions of that brd device).

This change breaks compatibility - but in this case the compatibility
breaks more than it helps.

Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Suggested-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  brd: Switch rd_size to unsigned long
Jan Kara [Tue, 25 Oct 2016 11:53:27 +0000 (13:53 +0200)]
brd: Switch rd_size to unsigned long

Currently rd_size is an int, which leads to overflow and a bogus device
size once the requested ramdisk size is 1 TB or more. Although these days
ramdisks with 1 TB size are mostly a mistake, the days when they are
useful are not far.

Reported-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  sd: fix uninitialized variable access in error handling
Arnd Bergmann [Fri, 21 Oct 2016 15:32:24 +0000 (17:32 +0200)]
sd: fix uninitialized variable access in error handling

If sd_zbc_report_zones fails, the check for 'zone_blocks == 0'
later in the function accesses uninitialized data:

drivers/scsi/sd_zbc.c: In function ‘sd_zbc_read_zones’:
drivers/scsi/sd_zbc.c:520:7: error: ‘zone_blocks’ may be used uninitialized in this function [-Werror=maybe-uninitialized]

This sets it to zero, which has the desired effect of making
sd_zbc_read_zones return successfully with sdkp->zone_blocks = 0.

Fixes: 89d947561077 ("sd: Implement support for ZBC devices")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: zoned: fix harmless maybe-uninitialized warning
Arnd Bergmann [Fri, 21 Oct 2016 15:42:33 +0000 (17:42 +0200)]
block: zoned: fix harmless maybe-uninitialized warning

The blkdev_report_zones function produces a harmless warning when
-Wmaybe-uninitialized is set, after gcc gets a little confused
about the multiple 'goto' here:

block/blk-zoned.c: In function 'blkdev_report_zones':
block/blk-zoned.c:188:13: error: 'nz' may be used uninitialized in this function [-Werror=maybe-uninitialized]

Moving the assignment to nr_zones makes this a little simpler
while also avoiding the warning reliably. I'm removing the
extraneous initialization of 'int ret' in the same patch, as
that is semi-related and could cause an uninitialized use of
that variable to not produce a warning.

Fixes: 6a0cb1bc106f ("block: Implement support for zoned block devices")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  sd: Implement support for ZBC devices
Hannes Reinecke [Tue, 18 Oct 2016 06:40:34 +0000 (15:40 +0900)]
sd: Implement support for ZBC devices

Implement ZBC support functions to set up zoned disks, both
host-managed and host-aware models. Only zoned disks that satisfy
the following conditions are supported:
1) All zones are the same size, with the exception of an eventual
   last smaller runt zone.
2) For host-managed disks, reads are unrestricted (reads are not
   failed due to zone or write pointer alignment constraints).
Zoned disks that do not satisfy these 2 conditions are setup with
a capacity of 0 to prevent their use.

The function sd_zbc_read_zones, called from sd_revalidate_disk,
checks that the device satisfies the above two constraints. This
function may also change the disk capacity previously set by
sd_read_capacity for devices reporting only the capacity of
conventional zones at the beginning of the LBA range (i.e. devices
reporting rc_basis set to 0).

The capacity message output was moved out of sd_read_capacity into
a new function sd_print_capacity to include this eventual capacity
change by sd_zbc_read_zones. This new function also includes a call
to sd_zbc_print_zones to display the number of zones and zone size
of the device.

Signed-off-by: Hannes Reinecke <hare@suse.de>
[Damien: * Removed zone cache support
         * Removed mapping of discard to reset write pointer command
         * Modified sd_zbc_read_zones to include checks that the
           device satisfies the kernel constraints
         * Implemented REPORT ZONES setup and post-processing based
           on code from Shaun Tancheff <shaun.tancheff@seagate.com>
         * Removed confusing use of 512B sector units in function
           interfaces]
Signed-off-by: Damien Le Moal <damien.lemoal@hgst.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Tested-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  blk-zoned: implement ioctls
Shaun Tancheff [Tue, 18 Oct 2016 06:40:35 +0000 (15:40 +0900)]
blk-zoned: implement ioctls

Adds the new BLKREPORTZONE and BLKRESETZONE ioctls for respectively
obtaining the zone configuration of a zoned block device and resetting
the write pointer of sequential zones of a zoned block device.

The BLKREPORTZONE ioctl maps directly to a single call of the function
blkdev_report_zones. The zone information result is passed as an array
of struct blk_zone identical to the structure used internally for
processing the REQ_OP_ZONE_REPORT operation.  The BLKRESETZONE ioctl
maps to a single call of the blkdev_reset_zones function.
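
A hedged user-space sketch of the BLKRESETZONE side (assumes the UAPI
definitions exported in <linux/blkzoned.h>; the device path is a placeholder):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/blkzoned.h>

    int main(void)
    {
            int fd = open("/dev/sdX", O_WRONLY);    /* placeholder zoned device */
            struct blk_zone_range range = {
                    .sector     = 0,                /* start of the first zone, in 512B sectors */
                    .nr_sectors = 524288,           /* e.g. one 256MB zone */
            };

            if (fd < 0 || ioctl(fd, BLKRESETZONE, &range) < 0)
                    perror("BLKRESETZONE");
            return 0;
    }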

Signed-off-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Signed-off-by: Damien Le Moal <damien.lemoal@hgst.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: Implement support for zoned block devices
Hannes Reinecke [Tue, 18 Oct 2016 06:40:33 +0000 (15:40 +0900)]
block: Implement support for zoned block devices

Implement zoned block device zone information reporting and reset.
Zone information is reported as struct blk_zone. This implementation
does not differentiate between host-aware and host-managed device
models and is valid for both. Two functions are provided:
blkdev_report_zones for discovering the zone configuration of a
zoned block device, and blkdev_reset_zones for resetting the write
pointer of sequential zones. The helper functions blk_queue_zone_size
and bdev_zone_size are also provided for, as the names suggest,
obtaining the zone size (in 512B sectors) of the zones of the device.

Signed-off-by: Hannes Reinecke <hare@suse.de>
[Damien: * Removed the zone cache
         * Implement report zones operation based on earlier proposal
           by Shaun Tancheff <shaun.tancheff@seagate.com>]
Signed-off-by: Damien Le Moal <damien.lemoal@hgst.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Tested-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: Define zoned block device operations
Shaun Tancheff [Tue, 18 Oct 2016 06:40:32 +0000 (15:40 +0900)]
block: Define zoned block device operations

Define REQ_OP_ZONE_REPORT and REQ_OP_ZONE_RESET for handling zones of
host-managed and host-aware zoned block devices. With these two
new operations, the total number of operations defined reaches 8 and
still fits within the 3-bit definition of REQ_OP_BITS.

Signed-off-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Signed-off-by: Damien Le Moal <damien.lemoal@hgst.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: update chunk_sectors in blk_stack_limits()
Hannes Reinecke [Tue, 18 Oct 2016 06:40:31 +0000 (15:40 +0900)]
block: update chunk_sectors in blk_stack_limits()

Signed-off-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Damien Le Moal <damien.lemoal@hgst.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Tested-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  blk-sysfs: Add 'chunk_sectors' to sysfs attributes
Hannes Reinecke [Tue, 18 Oct 2016 06:40:30 +0000 (15:40 +0900)]
blk-sysfs: Add 'chunk_sectors' to sysfs attributes

The queue limits already have a 'chunk_sectors' setting, so
we should be presenting it via sysfs.

Signed-off-by: Hannes Reinecke <hare@suse.de>
[Damien: Updated Documentation/ABI/testing/sysfs-block]

Signed-off-by: Damien Le Moal <damien.lemoal@hgst.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Tested-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  block: Add 'zoned' queue limit
Damien Le Moal [Tue, 18 Oct 2016 06:40:29 +0000 (15:40 +0900)]
block: Add 'zoned' queue limit

Add the zoned queue limit to indicate the zoning model of a block device.
Defined values are 0 (BLK_ZONED_NONE) for regular block devices,
1 (BLK_ZONED_HA) for host-aware zone block devices and 2 (BLK_ZONED_HM)
for host-managed zone block devices. The standards-defined drive-managed
model is not defined here since these block devices do not provide any
command for accessing zone information. Drive-managed devices will
be reported as BLK_ZONED_NONE.

The helper functions blk_queue_zoned_model and bdev_zoned_model return
the zoned limit and the functions blk_queue_is_zoned and bdev_is_zoned
return a boolean for callers to test if a block device is zoned.

The zoned attribute is also exported as a string to applications via
sysfs. BLK_ZONED_NONE shows as "none", BLK_ZONED_HA as "host-aware" and
BLK_ZONED_HM as "host-managed".
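
A tiny stand-alone mapping of the values and strings described above:

    #include <stdio.h>

    enum blk_zoned_model {
            BLK_ZONED_NONE = 0,    /* regular block device */
            BLK_ZONED_HA   = 1,    /* host-aware */
            BLK_ZONED_HM   = 2,    /* host-managed */
    };

    static const char *zoned_model_name(enum blk_zoned_model m)
    {
            switch (m) {
            case BLK_ZONED_HA: return "host-aware";
            case BLK_ZONED_HM: return "host-managed";
            default:           return "none";
            }
    }

    int main(void)
    {
            printf("%s\n", zoned_model_name(BLK_ZONED_HM));    /* host-managed */
            return 0;
    }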

Signed-off-by: Damien Le Moal <damien.lemoal@hgst.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Tested-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
7 years ago  btrfs: assign error values to the correct bio structs
Junjie Mao [Mon, 17 Oct 2016 01:20:25 +0000 (09:20 +0800)]
btrfs: assign error values to the correct bio structs

Fixes: 4246a0b63bd8 ("block: add a bi_error field to struct bio")
Signed-off-by: Junjie Mao <junjie.mao@enight.me>
Acked-by: David Sterba <dsterba@suse.cz>
Cc: stable@vger.kernel.org # 4.3+
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
7 years ago  x86, pkeys: remove cruft from never-merged syscalls
Dave Hansen [Mon, 17 Oct 2016 20:57:09 +0000 (13:57 -0700)]
x86, pkeys: remove cruft from never-merged syscalls

pkey_set() and pkey_get() were syscalls present in older versions
of the protection keys patches.  The syscall number definitions
were inadvertently left in place.  This patch removes them.

I did a git grep and verified that these are the last places in
the tree that these appear, save for the protection_keys.c tests
and Documentation.  Those spots talk about functions called
pkey_get/set() which are wrappers for the direct PKRU
instructions, not the syscalls.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: mgorman@techsingularity.net
Cc: arnd@arndb.de
Cc: linux-api@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: luto@kernel.org
Cc: akpm@linux-foundation.org
Fixes: f9afc6197e9bb ("x86: Wire up protection keys system calls")
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
7 years agogeneric syscalls: kill cruft from removed pkey syscalls
Dave Hansen [Mon, 17 Oct 2016 15:18:15 +0000 (08:18 -0700)]
generic syscalls: kill cruft from removed pkey syscalls

pkey_set() and pkey_get() were syscalls present in older versions
of the protection keys patches.  They were fully excised from the
x86 code, but some cruft was left in the generic syscall code.  The
C++-style comments were intended to make that leftover cruft glaringly
obvious so I would fix it before actually submitting the patches.  That
technique worked, but later than I would have liked.

I test-compiled this for arm64.

Fixes: a60f7b69d92c0 ("generic syscalls: Wire up memory protection keys syscalls")
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Cc: linux-arch@vger.kernel.org
Cc: mgorman@techsingularity.net
Cc: linux-api@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: luto@kernel.org
Cc: akpm@linux-foundation.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
7 years agoLinux 4.9-rc1
Linus Torvalds [Sat, 15 Oct 2016 19:17:50 +0000 (12:17 -0700)]
Linux 4.9-rc1

7 years agoMerge tag 'befs-v4.9-rc1' of git://github.com/luisbg/linux-befs
Linus Torvalds [Sat, 15 Oct 2016 19:09:13 +0000 (12:09 -0700)]
Merge tag 'befs-v4.9-rc1' of git://github.com/luisbg/linux-befs

Pull befs fixes from Luis de Bethencourt:
 "I recently took maintainership of the befs file system [0]. This is
  the first time I have sent you a git pull request, so please let me know if
  all the below is OK.

  Salah Triki and I have been cleaning the code and fixing a few
  small bugs.

  Sorry I couldn't send this sooner in the merge window; I was waiting
  to have my GPG key signed by kernel members at ELCE in Berlin a few
  days ago."

[0] https://lkml.org/lkml/2016/7/27/502

* tag 'befs-v4.9-rc1' of git://github.com/luisbg/linux-befs: (39 commits)
  befs: befs: fix style issues in datastream.c
  befs: improve documentation in datastream.c
  befs: fix typos in datastream.c
  befs: fix typos in btree.c
  befs: fix style issues in super.c
  befs: fix comment style
  befs: add check for ag_shift in superblock
  befs: dump inode_size superblock information
  befs: remove unnecessary initialization
  befs: fix typo in befs_sb_info
  befs: add flags field to validate superblock state
  befs: fix typo in befs_find_key
  befs: remove unused BEFS_BT_PARMATCH
  fs: befs: remove ret variable
  fs: befs: remove in vain variable assignment
  fs: befs: remove unnecessary *befs_sb variable
  fs: befs: remove useless initialization to zero
  fs: befs: remove in vain variable assignment
  fs: befs: Insert NULL inode to dentry
  fs: befs: Remove useless calls to brelse in befs_find_brun_dblindirect
  ...

7 years agoMerge tag 'gcc-plugins-v4.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sat, 15 Oct 2016 17:03:15 +0000 (10:03 -0700)]
Merge tag 'gcc-plugins-v4.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux

Pull gcc plugins update from Kees Cook:
 "This adds a new gcc plugin named "latent_entropy". It is designed to
  extract as much uncertainty as possible from a running system at boot
  time, hoping to capitalize on any possible variation in
  CPU operation (due to runtime data differences, hardware differences,
  SMP ordering, thermal timing variation, cache behavior, etc).

  At the very least, this plugin is a much more comprehensive example
  for how to manipulate kernel code using the gcc plugin internals"

* tag 'gcc-plugins-v4.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  latent_entropy: Mark functions with __latent_entropy
  gcc-plugins: Add latent_entropy plugin

7 years agoMerge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus
Linus Torvalds [Sat, 15 Oct 2016 16:26:12 +0000 (09:26 -0700)]
Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus

Pull MIPS updates from Ralf Baechle:
 "This is the main MIPS pull request for 4.9:

  MIPS core arch code:
   - traps: 64bit kernels should read CP0_EBase 64bit
   - traps: Convert ebase to KSEG0
   - c-r4k: Drop bc_wback_inv() from icache flush
   - c-r4k: Split user/kernel flush_icache_range()
   - cacheflush: Use __flush_icache_user_range()
   - uprobes: Flush icache via kernel address
   - KVM: Use __local_flush_icache_user_range()
   - c-r4k: Fix flush_icache_range() for EVA
   - Fix -mabi=64 build of vdso.lds
   - VDSO: Drop duplicated -I*/-E* aflags
   - tracing: move insn_has_delay_slot to a shared header
   - tracing: disable uprobe/kprobe on compact branch instructions
   - ptrace: Fix regs_return_value for kernel context
   - Squash lines for simple wrapper functions
   - Move identification of VP(E) into proc.c from smp-mt.c
   - Add definitions of SYNC barrierstype values
   - traps: Ensure full EBase is written
   - tlb-r4k: If there are wired entries, don't use TLBINVF
   - Sanitise coherentio semantics
   - dma-default: Don't check hw_coherentio if device is non-coherent
   - Support per-device DMA coherence
   - Adjust MIPS64 CAC_BASE to reflect Config.K0
   - Support generating Flattened Image Trees (.itb)
   - generic: Introduce generic DT-based board support
   - generic: Convert SEAD-3 to a generic board
   - Enable hardened usercopy
   - Don't specify STACKPROTECTOR in defconfigs

  Octeon:
   - Delete dead code and files across the platform.
   - Change to bring all memory into use by default.
   - Rename upper case variables in setup code to lowercase.
   - Delete legacy hack for broken bootloaders.
   - Leave maintaining the link state to the actual ethernet/PHY drivers.
   - Add DTS for D-Link DSR-500N.
   - Fix PCI interrupt routing on D-Link DSR-500N.

  Pistachio:
   - Remove ANDROID_TIMED_OUTPUT from defconfig

  TX39xx:
   - Move GPIO setup from .mem_setup() to .arch_init()
   - Convert to Common Clock Framework

  TX49xx:
   - Move GPIO setup from .mem_setup() to .arch_init()
   - Convert to Common Clock Framework

  txx9wdt:
   - Add missing clock (un)prepare calls for CCF

  BMIPS:
   - Add PW, GPIO SDHCI and NAND device node names
   - Support APPENDED_DTB
   - Add missing bcm97435svmb to DT_NONE
   - Rename bcm96358nb4ser to bcm6358-neufbox4-sercom
   - Add DT examples for BCM63268, BCM3368 and BCM6362
   - Add support for BCM3368 and BCM6362

  PCI
   - Reduce stack frame usage
   - Use struct list_head lists
   - Support for CONFIG_PCI_DOMAINS_GENERIC
   - Make pcibios_set_cache_line_size an initcall
   - Inline pcibios_assign_all_busses
   - Split pci.c into pci.c & pci-legacy.c
   - Introduce CONFIG_PCI_DRIVERS_LEGACY
   - Support generic drivers

  CPC
   - Convert bare 'unsigned' to 'unsigned int'
   - Avoid lock when MIPS CM >= 3 is present

  GIC:
   - Delete unused file smp-gic.c

  mt7620:
   - Delete unnecessary assignment for the field "owner" from PCI

  BCM63xx:
   - Let clk_disable() return immediately if clk is NULL

  pm-cps:
   - Change FSB workaround to CPU blacklist
   - Update comments on barrier instructions
   - Use MIPS standard lightweight ordering barrier
   - Use MIPS standard completion barrier
   - Remove selection of sync types
   - Add MIPSr6 CPU support
   - Support CM3 changes to Coherence Enable Register

  SMP:
   - Wrap call to mips_cpc_lock_other in mips_cm_lock_other
   - Introduce mechanism for freeing and allocating IPIs

  cpuidle:
   - cpuidle-cps: Enable use with MIPSr6 CPUs.

  SEAD3:
   - Rewrite to use DT and generic kernel feature.

  USB:
   - host: ehci-sead3: Remove SEAD-3 EHCI code

  FBDEV:
   - cobalt_lcdfb: Drop SEAD3 support

  dt-bindings:
   - Document a binding for simple ASCII LCDs

  auxdisplay:
   - img-ascii-lcd: driver for simple ASCII LCD displays

  irqchip i8259:
   - i8259: Add domain before mapping parent irq
   - i8259: Allow platforms to override poll function
   - i8259: Remove unused i8259A_irq_pending

  Malta:
   - Rewrite to use DT

  of/platform:
   - Probe "isa" busses by default

  CM:
   - Print CM error reports upon bus errors

  Module:
   - Migrate exception table users off module.h and onto extable.h
   - Make various drivers explicitly non-modular:
   - Audit and remove any unnecessary uses of module.h

  mailmap:
   - Canonicalize to Qais' current email address.

  Documentation:
   - MIPS supports HAVE_REGS_AND_STACK_ACCESS_API

  Loongson1C:
   - Add CPU support for Loongson1C
   - Add board support
   - Add defconfig
   - Add RTC support for Loongson1C board

  All this except one Documentation fix has sat in linux-next and has
  survived Imagination's automated build test system"

* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (127 commits)
  Documentation: MIPS supports HAVE_REGS_AND_STACK_ACCESS_API
  MIPS: ptrace: Fix regs_return_value for kernel context
  MIPS: VDSO: Drop duplicated -I*/-E* aflags
  MIPS: Fix -mabi=64 build of vdso.lds
  MIPS: Enable hardened usercopy
  MIPS: generic: Convert SEAD-3 to a generic board
  MIPS: generic: Introduce generic DT-based board support
  MIPS: Support generating Flattened Image Trees (.itb)
  MIPS: Adjust MIPS64 CAC_BASE to reflect Config.K0
  MIPS: Print CM error reports upon bus errors
  MIPS: Support per-device DMA coherence
  MIPS: dma-default: Don't check hw_coherentio if device is non-coherent
  MIPS: Sanitise coherentio semantics
  MIPS: PCI: Support generic drivers
  MIPS: PCI: Introduce CONFIG_PCI_DRIVERS_LEGACY
  MIPS: PCI: Split pci.c into pci.c & pci-legacy.c
  MIPS: PCI: Inline pcibios_assign_all_busses
  MIPS: PCI: Make pcibios_set_cache_line_size an initcall
  MIPS: PCI: Support for CONFIG_PCI_DOMAINS_GENERIC
  MIPS: PCI: Use struct list_head lists
  ...

7 years agoMerge tag 'sound-fix-4.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai...
Linus Torvalds [Sat, 15 Oct 2016 16:20:54 +0000 (09:20 -0700)]
Merge tag 'sound-fix-4.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound

Pull sound fixes from Takashi Iwai:
 "Just a few trivial small fixes"

* tag 'sound-fix-4.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound:
  ALSA: line6: fix a crash in line6_hwdep_write()
  ALSA: seq: fix passing wrong pointer in function call of compatibility layer
  ALSA: hda - Fix a failure of micmute led when having multi adcs
  ALSA: line6: Fix POD X3 Live audio input

7 years agoMerge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Linus Torvalds [Sat, 15 Oct 2016 01:19:05 +0000 (18:19 -0700)]
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs

Pull more misc uaccess and vfs updates from Al Viro:
 "The rest of the stuff from -next (more uaccess work) + assorted fixes"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  score: traps: Add missing include file to fix build error
  fs/super.c: don't fool lockdep in freeze_super() and thaw_super() paths
  fs/super.c: fix race between freeze_super() and thaw_super()
  overlayfs: Fix setting IOP_XATTR flag
  iov_iter: kernel-doc import_iovec() and rw_copy_check_uvector()
  blackfin: no access_ok() for __copy_{to,from}_user()
  arm64: don't zero in __copy_from_user{,_inatomic}
  arm: don't zero in __copy_from_user_inatomic()/__copy_from_user()
  arc: don't leak bits of kernel stack into coredump
  alpha: get rid of tail-zeroing in __copy_user()

7 years agoMerge branch 'for-next' of git://git.samba.org/sfrench/cifs-2.6
Linus Torvalds [Sat, 15 Oct 2016 00:47:31 +0000 (17:47 -0700)]
Merge branch 'for-next' of git://git.samba.org/sfrench/cifs-2.6

Pull cifs fixes from Steve French:
 "Including:

   - nine bug fixes for stable. Some of these we found during the recent
     two weeks of SMB3 test events/plugfests.

   - significant improvements in reconnection (e.g. if server or network
     crashes) especially when mounted with "persistenthandles" or to
     server which advertises Continuous Availability on the share.

   - a new mount option "idsfromsid" which improves POSIX compatibility
     in some cases (e.g. when winbind is not configured) by better (and
     faster) fetching of the uid/gid from the ACL (when the "cifsacl"
     mount option is enabled). NB: we have almost completed the "cifsacl"
     work (querying mode/uid/gid from the ACL) for SMB3, but SMB3 support
     for cifsacl is not included in this set.

   - improved handling for SMB3 "credits" (even if server is buggy)

  Still working on two sets of changes:

   - cifsacl enablement for SMB3

   - cleanup of RFC1001 length calculation (so we can handle encryption
     and multichannel and RDMA)

  And a couple of new bugs were reported recently (unrelated to above)
  so will probably have another merge request next week"

* 'for-next' of git://git.samba.org/sfrench/cifs-2.6: (21 commits)
  CIFS: Retrieve uid and gid from special sid if enabled
  CIFS: Add new mount option to set owner uid and gid from special sids in acl
  CIFS: Reset read oplock to NONE if we have mandatory locks after reopen
  CIFS: Fix persistent handles re-opening on reconnect
  SMB2: Separate RawNTLMSSP authentication from SMB2_sess_setup
  SMB2: Separate Kerberos authentication from SMB2_sess_setup
  Expose cifs module parameters in sysfs
  Cleanup missing frees on some ioctls
  Enable previous version support
  Do not send SMB3 SET_INFO request if nothing is changing
  SMB3: Add mount parameter to allow user to override max credits
  fs/cifs: reopen persistent handles on reconnect
  Clarify locking of cifs file and tcon structures and make more granular
  Fix regression which breaks DFS mounting
  fs/cifs: keep guid when assigning fid to fileinfo
  SMB3: GUIDs should be constructed as random but valid uuids
  Set previous session id correctly on SMB3 reconnect
  cifs: Limit the overall credit acquired
  Display number of credits available
  Add way to query creation time of file via cifs xattr
  ...

7 years agoMerge branch 'for-linus-4.9' of git://git.kernel.org/pub/scm/linux/kernel/git/mason...
Linus Torvalds [Sat, 15 Oct 2016 00:44:56 +0000 (17:44 -0700)]
Merge branch 'for-linus-4.9' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs

Pull btrfs fixes from Chris Mason:
 "Some fixes from Omar and Dave Sterba for our new free space tree.

  This isn't heavily used yet, but as we move toward making it the new
  default we wanted to nail down an endian bug"

* 'for-linus-4.9' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs:
  btrfs: tests: uninline member definitions in free_space_extent
  btrfs: tests: constify free space extent specs
  Btrfs: expand free space tree sanity tests to catch endianness bug
  Btrfs: fix extent buffer bitmap tests on big-endian systems
  Btrfs: catch invalid free space trees
  Btrfs: fix mount -o clear_cache,space_cache=v2
  Btrfs: fix free space tree bitmaps on big-endian systems

7 years agoMerge branch 'work.uaccess' into for-linus
Al Viro [Sat, 15 Oct 2016 00:42:44 +0000 (20:42 -0400)]
Merge branch 'work.uaccess' into for-linus

7 years agoscore: traps: Add missing include file to fix build error
Guenter Roeck [Wed, 12 Oct 2016 20:42:23 +0000 (13:42 -0700)]
score: traps: Add missing include file to fix build error

score images fail to build as follows.

arch/score/kernel/traps.c: In function 'show_stack':
arch/score/kernel/traps.c:55:3: error:
implicit declaration of function '__get_user'

__get_user() is declared in asm/uaccess.h, which was previously included
through asm/module.h.
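
An illustrative sketch of the kind of one-line fix this implies (the
exact header chosen by the actual patch may differ):

/* arch/score/kernel/traps.c */
#include <asm/uaccess.h>        /* for __get_user(); no longer pulled in
                                   indirectly through asm/module.h */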

Cc: Al Viro <viro@zeniv.linux.org.uk>
Fixes: 88dd4a748da7 ("score: separate extable.h, switch module.h to it")
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
7 years agofs/super.c: don't fool lockdep in freeze_super() and thaw_super() paths
Oleg Nesterov [Mon, 26 Sep 2016 16:55:25 +0000 (18:55 +0200)]
fs/super.c: don't fool lockdep in freeze_super() and thaw_super() paths

sb_wait_write()->percpu_rwsem_release() fools lockdep to avoid
false positives. Now that xfs has been fixed by Dave's commit dbad7c993053
("xfs: stop holding ILOCK over filldir callbacks") we can remove it and
change freeze_super() and thaw_super() to run with s_writers.rw_sem locks
held; we add two trivial helpers for that, lockdep_sb_freeze_release()
and lockdep_sb_freeze_acquire().

xfstests-dev/check `grep -il freeze tests/*/???` does not trigger any
warning from lockdep.
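
A rough sketch of what the two helpers could look like, assuming
sb->s_writers.rw_sem is the per-level array of percpu rw-semaphores;
this is illustrative, not a verbatim copy of the patch:

static void lockdep_sb_freeze_release(struct super_block *sb)
{
        int level;

        /* Tell lockdep the freeze locks are released (they are still held). */
        for (level = SB_FREEZE_LEVELS - 1; level >= 0; level--)
                percpu_rwsem_release(sb->s_writers.rw_sem + level, 0,
                                     _THIS_IP_);
}

static void lockdep_sb_freeze_acquire(struct super_block *sb)
{
        int level;

        /* Tell lockdep the locks are held again, e.g. in thaw_super(). */
        for (level = 0; level < SB_FREEZE_LEVELS; ++level)
                percpu_rwsem_acquire(sb->s_writers.rw_sem + level, 0,
                                     _THIS_IP_);
}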

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
7 years agoMerge branch 'overlayfs-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mszer...
Linus Torvalds [Sat, 15 Oct 2016 00:23:33 +0000 (17:23 -0700)]
Merge branch 'overlayfs-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs

Pull overlayfs updates from Miklos Szeredi:
 "This update contains fixes to the "use mounter's permission to access
  underlying layers" area, and miscellaneous other fixes and cleanups.

  No new features this time"

* 'overlayfs-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs:
  ovl: use vfs_get_link()
  vfs: add vfs_get_link() helper
  ovl: use generic_readlink
  ovl: explain error values when removing acl from workdir
  ovl: Fix info leak in ovl_lookup_temp()
  ovl: during copy up, switch to mounter's creds early
  ovl: lookup: do getxattr with mounter's permission
  ovl: copy_up_xattr(): use strnlen

7 years agofs/super.c: fix race between freeze_super() and thaw_super()
Oleg Nesterov [Mon, 26 Sep 2016 16:07:48 +0000 (18:07 +0200)]
fs/super.c: fix race between freeze_super() and thaw_super()

Change thaw_super() to check frozen != SB_FREEZE_COMPLETE rather than
frozen == SB_UNFROZEN, otherwise it can race with freeze_super() which
drops sb->s_umount after SB_FREEZE_WRITE to preserve the lock ordering.

In this case thaw_super() will wrongly call s_op->unfreeze_fs() before
the filesystem was actually frozen, and call sb_freeze_unlock(), which
leads to an unbalanced percpu_up_write(). Unfortunately lockdep can't
detect this, so it triggers misc BUG_ON()'s in kernel/rcu/sync.c.
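
An illustrative sketch of the check after this change (not the verbatim
function body):

int thaw_super(struct super_block *sb)
{
        down_write(&sb->s_umount);
        /* Only a fully completed freeze may be thawed. */
        if (sb->s_writers.frozen != SB_FREEZE_COMPLETE) {
                up_write(&sb->s_umount);
                return -EINVAL;
        }
        /* ... s_op->unfreeze_fs(), sb_freeze_unlock(), wake up waiters ... */
        return 0;
}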

Reported-and-tested-by: Nikolay Borisov <kernel@kyup.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
7 years agooverlayfs: Fix setting IOP_XATTR flag
Vivek Goyal [Fri, 14 Oct 2016 01:03:36 +0000 (03:03 +0200)]
overlayfs: Fix setting IOP_XATTR flag

ovl_fill_super calls ovl_new_inode to create a root inode for the new
superblock before initializing sb->s_xattr.  This wrongly causes
IOP_XATTR to be cleared in i_opflags of the new inode, causing SELinux
to log the following message:

  SELinux: (dev overlay, type overlay) has no xattr support

Fix this by initializing sb->s_xattr and similar fields before calling
ovl_new_inode.
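
Illustrative ordering only; the field and handler names below are what
one would expect in the 4.9-era overlayfs code, and the ovl_new_inode()
argument list is shown schematically rather than verbatim:

/* In ovl_fill_super(), before allocating the root inode: */
sb->s_op = &ovl_super_operations;
sb->s_xattr = ovl_xattr_handlers;       /* must be set before ovl_new_inode() */

/* ... later the root inode is created with xattr support intact: */
root_dentry = d_make_root(ovl_new_inode(sb, S_IFDIR /* , ... */));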

Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
7 years agoiov_iter: kernel-doc import_iovec() and rw_copy_check_uvector()
Vegard Nossum [Sat, 8 Oct 2016 09:18:07 +0000 (11:18 +0200)]
iov_iter: kernel-doc import_iovec() and rw_copy_check_uvector()

Both import_iovec() and rw_copy_check_uvector() take an array
(typically small and on-stack) which is used to hold an iovec array copy
from userspace. This is to avoid an expensive memory allocation in the
fast path (i.e. few iovec elements).

The caller may have to check whether these functions actually used
the provided buffer or allocated a new one -- but this differs between
the two. Let's just add kernel-doc to clarify what the semantics are
for each function.
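
A sketch of the typical import_iovec() calling pattern that the new
kernel-doc describes; the surrounding function name is made up for
illustration:

static ssize_t example_rw(int type, const struct iovec __user *uvec,
                          unsigned long nr_segs)
{
        struct iovec iovstack[UIO_FASTIOV];
        struct iovec *iov = iovstack;
        struct iov_iter iter;
        ssize_t ret;

        ret = import_iovec(type, uvec, nr_segs, ARRAY_SIZE(iovstack),
                           &iov, &iter);
        if (ret < 0)
                return ret;

        /* ... consume the iterator ... */

        /*
         * import_iovec() sets 'iov' to NULL when the on-stack array was
         * large enough, so an unconditional kfree() is safe either way.
         */
        kfree(iov);
        return ret;
}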

Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>