Shenghui Wang [Mon, 8 Oct 2018 12:41:19 +0000 (20:41 +0800)]
bcache: use MAX_CACHES_PER_SET instead of magic number 8 in __bch_bucket_alloc_set
A cache_set has at most MAX_CACHES_PER_SET caches, and the macro is used for
"
struct cache *cache_by_alloc[MAX_CACHES_PER_SET];
"
in the definition of struct cache_set.
Use MAX_CACHES_PER_SET instead of magic number 8 in
__bch_bucket_alloc_set.
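As a rough, hedged illustration of the change (not the upstream diff; the type definitions below are simplified stand-ins), the point is to size the on-stack array in the allocator with the same macro that sizes the member in struct cache_set:
    /* Illustrative sketch only: simplified stand-ins for the bcache types. */
    #define MAX_CACHES_PER_SET 8

    struct cache;                                   /* opaque in this sketch */

    struct cache_set {
            struct cache *cache_by_alloc[MAX_CACHES_PER_SET];
    };

    static void bucket_alloc_set_sketch(void)
    {
            struct cache *caches[MAX_CACHES_PER_SET];   /* was: caches[8] */

            (void)caches;
    }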
Signed-off-by: Shenghui Wang <shhuiw@foxmail.com> Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Coly Li [Mon, 8 Oct 2018 12:41:18 +0000 (20:41 +0800)]
bcache: replace hard coded number with BUCKET_GC_GEN_MAX
In extents.c:bch_extent_bad(), the number 96 is used as a parameter to call
btree_bug_on(). The purpose is to check whether the stale gen value exceeds
BUCKET_GC_GEN_MAX, so it is better to use the macro BUCKET_GC_GEN_MAX to
make the code more understandable.
Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Tang Junhui [Mon, 8 Oct 2018 12:41:14 +0000 (20:41 +0800)]
bcache: fix miss key refill->end in writeback
refill->end records the last key of the writeback. For example, the first
time, keys (1,128K) to (1,1024K) are flushed to the backing device, but
the end key (1,1024K) is not included, because of the code below:
if (bkey_cmp(k, refill->end) >= 0) {
ret = MAP_DONE;
goto out;
}
The next time we refill the writeback keybuf, we search for keys starting
from (1,1024K) and get a key bigger than it, so the key (1,1024K) is
missed.
This patch modifies the above code so that the end key is included in the
writeback key buffer.
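A minimal, self-contained sketch of the off-by-one (plain integers stand in for bcache bkeys and cmp() for bkey_cmp(); this is not the bcache code): with the exclusive end check the end key is never collected, and since the next refill starts past it, it is lost.
    #include <stdio.h>

    /* cmp() stands in for bkey_cmp(): returns <0, 0 or >0 like strcmp(). */
    static int cmp(int a, int b) { return (a > b) - (a < b); }

    static void refill(const int *keys, int n, int start, int end, int inclusive)
    {
            for (int i = 0; i < n; i++) {
                    if (keys[i] < start)
                            continue;
                    /* old behaviour stops when cmp(k, end) >= 0; the fix stops only when > 0 */
                    if (inclusive ? cmp(keys[i], end) > 0 : cmp(keys[i], end) >= 0)
                            break;
                    printf("  collected key %d\n", keys[i]);
            }
    }

    int main(void)
    {
            int keys[] = { 128, 256, 512, 1024 };

            printf("exclusive end (old):\n");
            refill(keys, 4, 128, 1024, 0);      /* key 1024 is skipped */
            printf("inclusive end (fixed):\n");
            refill(keys, 4, 128, 1024, 1);      /* key 1024 is collected */
            return 0;
    }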
Ben Peddell [Mon, 8 Oct 2018 12:41:13 +0000 (20:41 +0800)]
bcache: Populate writeback_rate_minimum attribute
Forgot to include the maintainers with my first email.
Somewhere between Michael Lyle's original
"bcache: PI controller for writeback rate V2" patch dated 07 Sep 2017
and 1d316e6 bcache: implement PI controller for writeback rate,
the mapping of the writeback_rate_minimum attribute was dropped.
Re-add the missing sysfs writeback_rate_minimum attribute mapping to
"allow the user to specify a minimum rate at which dirty blocks are
retired."
Fixes: 1d316e6 ("bcache: implement PI controller for writeback rate") Signed-off-by: Ben Peddell <klightspeed@killerwolves.net> Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Tang Junhui [Mon, 8 Oct 2018 12:41:12 +0000 (20:41 +0800)]
bcache: correct dirty data statistics
When a bcache device is clean, dirty keys may still exist after
journal replay, so we need to count these dirty keys even when the
device is in a clean state; otherwise, after writeback, the amount
of dirty data would be incorrect.
Coly Li [Mon, 8 Oct 2018 12:41:11 +0000 (20:41 +0800)]
bcache: fix typo in code comments of closure_return_with_destructor()
The code comments of closure_return_with_destructor() in closure.h mark the
function name as closure_return(). This patch fixes the typo with the
correct name - closure_return_with_destructor.
Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Tang Junhui [Mon, 8 Oct 2018 12:41:10 +0000 (20:41 +0800)]
bcache: fix ioctl in flash device
When doing an ioctl on a flash device, ioctl_dev() in super.c is called;
we should not try to get the cached device there, since a flash-only
device has no backing device. This patch moves the dc->io_disable
judgement into cached_dev_ioctl() so that ioctls on flash devices work
correctly.
Fixes: 0f0709e6bfc3c ("bcache: stop bcache device when backing device is offline") Signed-off-by: Tang Junhui <tang.junhui.linux@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Coly Li [Mon, 8 Oct 2018 12:41:09 +0000 (20:41 +0800)]
bcache: use REQ_PRIO to indicate bio for metadata
In cached_dev_cache_miss() and check_should_bypass(), REQ_META is used
to check whether a bio is a metadata request. But REQ_META is meant for
blktrace; the correct REQ_ flag is REQ_PRIO. This flag means the bio
should take priority over other bios, and it is frequently used to
indicate metadata I/O in filesystem code.
This patch replaces REQ_META with the correct flag, REQ_PRIO.
Cc Adam Manzanares, who explained what REQ_PRIO is for.
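A hedged sketch of the kind of check being changed (not the upstream diff; the helper name is invented for illustration):
    #include <linux/blk_types.h>

    /* REQ_PRIO marks a bio that should be prioritized, typically filesystem
     * metadata; REQ_META is primarily an annotation used by blktrace. */
    static inline bool bio_is_metadata_sketch(struct bio *bio)
    {
            return bio->bi_opf & REQ_PRIO;      /* was: bio->bi_opf & REQ_META */
    }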
Signed-off-by: Coly Li <colyli@suse.de> Cc: Adam Manzanares <adam.manzanares@wdc.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Tang Junhui [Mon, 8 Oct 2018 12:41:08 +0000 (20:41 +0800)]
bcache: trace missed reading by cache_missed
Missed read IOs are identified by s->cache_missed, not s->cache_miss,
so use s->cache_missed in trace_bcache_read() to identify whether the IO
missed or not.
Shenghui Wang [Mon, 8 Oct 2018 12:41:07 +0000 (20:41 +0800)]
bcache: account size of buckets used in uuid write to ca->meta_sectors_written
UUIDs are considered metadata. __uuid_write should add the number
of buckets (in sectors) written to disk to ca->meta_sectors_written.
Currently only 1 bucket is used in uuid write.
Steps to test:
1) create a fresh backing device and a fresh cache device separately.
The backing device is not attached to any cache set.
2) cd /sys/block/<cache device>/bcache
cat metadata_written // record the output value
cat bucket_size
3) attach the backing device to cache set
4) cat metadata_written
Before the change, the output value is almost the same as the value
recorded in step 2.
After the change, the value is larger by about one bucket size.
Signed-off-by: Shenghui Wang <shhuiw@foxmail.com> Reviewed-by: Tang Junhui <tang.junhui.linux@gmail.com> Signed-off-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Bart Van Assche [Thu, 4 Oct 2018 17:35:24 +0000 (10:35 -0700)]
blk-mq-debugfs: Also show requests that have not yet been started
When debugging e.g. the SCSI timeout handler it is important that
requests that have not yet been started or that already have
completed are also reported through debugfs.
Cc: Christoph Hellwig <hch@lst.de> Cc: Ming Lei <ming.lei@redhat.com> Cc: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Fri, 5 Oct 2018 14:15:12 +0000 (08:15 -0600)]
Merge branch 'nvme-4.20' of git://git.infradead.org/nvme into for-4.20/block
Pull NVMe updates from Christoph:
"A relatively boring merge window:
- better AEN tracing (Chaitanya)
- NUMA aware PCIe multipathing (me)
- RDMA workqueue fixes (Sagi)
- better bio usage in the target (Sagi)
- FC rework for target removal (James)
- better multipath handling of ->queue_rq failures (James)
- various cleanups (Milan)"
* 'nvme-4.20' of git://git.infradead.org/nvme:
nvmet-rdma: use a private workqueue for delete
nvme: take node locality into account when selecting a path
nvmet: don't split large I/Os unconditionally
nvme: call nvme_complete_rq when nvmf_check_ready fails for mpath I/O
nvme-core: add async event trace helper
nvme_fc: add 'nvme_discovery' sysfs attribute to fc transport device
nvmet_fc: support target port removal with nvmet layer
nvme-fc: fix for a minor typos
nvmet: remove redundant module prefix
nvme: fix typo in nvme_identify_ns_descs
Queue deletion is done asynchronously when the last reference on the queue
is dropped. Thus, in order to make sure we don't over-allocate under a
connect/disconnect storm, we let queue deletion complete before making
forward progress.
However, given that we flush the system_wq from rdma_cm context, which
itself runs from a workqueue context, we can get a circular locking
complaint [1]. Fix that by using a private workqueue for queue deletion.
[1]:
======================================================
WARNING: possible circular locking dependency detected
4.19.0-rc4-dbg+ #3 Not tainted
------------------------------------------------------
kworker/5:0/39 is trying to acquire lock: 00000000a10b6db9 (&id_priv->handler_mutex){+.+.}, at: rdma_destroy_id+0x6f/0x440 [rdma_cm]
but task is already holding lock: 00000000331b4e2c ((work_completion)(&queue->release_work)){+.+.}, at: process_one_work+0x3ed/0xa20
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
Bart Van Assche [Wed, 3 Oct 2018 20:56:25 +0000 (13:56 -0700)]
block: Finish renaming REQ_DISCARD into REQ_OP_DISCARD
Some time ago REQ_DISCARD was renamed into REQ_OP_DISCARD. Some comments
and documentation files were not updated however. Update these comments
and documentation files. See also commit 4e1b2d52a80d ("block, fs,
drivers: remove REQ_OP compat defs and related code").
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com> Cc: Mike Christie <mchristi@redhat.com> Cc: Martin K. Petersen <martin.petersen@oracle.com> Cc: Philipp Reisner <philipp.reisner@linbit.com> Cc: Lars Ellenberg <lars.ellenberg@linbit.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Young_X [Wed, 3 Oct 2018 12:54:29 +0000 (12:54 +0000)]
cdrom: fix improper type cast, which can lead to information leak.
There is another cast from unsigned long to int which causes
a bounds check to fail with specially crafted input. The value is
then used as an index in the slot array in cdrom_slot_status().
This issue is similar to CVE-2018-16658 and CVE-2018-10940.
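A self-contained illustration of this bug class (generic C, not the actual cdrom code; the names and the slot array are invented): casting an attacker-controlled unsigned long to int before the bounds check lets a large value slip past the check, after which the array is indexed with the original value.
    #include <stdio.h>

    #define CAPACITY 4
    static int slot_info[CAPACITY] = { 1, 2, 3, 4 };

    static int slot_status_buggy(unsigned long arg)
    {
            if ((int)arg >= CAPACITY)   /* 0x80000000 typically casts to a negative int: check passes */
                    return -1;
            return slot_info[arg];      /* indexed with the full value: out-of-bounds read */
    }

    static int slot_status_fixed(unsigned long arg)
    {
            if (arg >= CAPACITY)        /* compare in the original unsigned type */
                    return -1;
            return slot_info[arg];
    }

    int main(void)
    {
            unsigned long evil = 0x80000000UL;

            printf("fixed: %d\n", slot_status_fixed(evil));     /* rejected: -1 */
            (void)slot_status_buggy;    /* calling it with 'evil' would read out of bounds */
            return 0;
    }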
nvme: take node locality into account when selecting a path
Make current_path an array with an entry for every possible node, and
cache the best path on a per-node basis. Take the node distance into
account when selecting it. This is primarily useful for dual-ported PCIe
devices which are connected to PCIe root ports on different sockets.
Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Keith Busch <keith.busch@intel.com> Reviewed-by: Hannes Reinecke <hare@suse.com>
If we know that the I/O size exceeds our inline bio vec, there is no
point using it; split the rest to begin with. We could in theory reuse
the inline bio and only allocate the bio_vec, but it's really not worth
optimizing for.
Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Christoph Hellwig <hch@lst.de>
James Smart [Thu, 27 Sep 2018 23:58:54 +0000 (16:58 -0700)]
nvme: call nvme_complete_rq when nvmf_check_ready fails for mpath I/O
When an io is rejected by nvmf_check_ready() due to validation of the
controller state, the nvmf_fail_nonready_command() will normally return
BLK_STS_RESOURCE to requeue and retry. However, if the controller is
dying or the I/O is marked for NVMe multipath, the I/O is failed so that
the controller can terminate or so that the io can be issued on a
different path. Unfortunately, since this reject point is before the
transport has accepted the command, blk-mq ends up completing the I/O
and never calls nvme_complete_rq(), which is where multipath may preserve
or re-route the I/O. The end result is that the device user sees an
EIO error.
Example: single path connectivity, controller is under load, and a reset
is induced. An I/O is received:
a) while the reset state has been set but the queues have yet to be
stopped; or
b) after queues are started (at end of reset) but before the reconnect
has completed.
The I/O finishes with an EIO status.
This patch makes the following changes:
- Adds the HOST_PATH_ERROR pathing status from TP4028
- Modifies the reject point such that it appears to queue successfully,
but actually completes the io with the new pathing status and calls
nvme_complete_rq().
- nvme_complete_rq() recognizes the new status, avoids resetting the
controller (likely was already done in order to get this new status),
and calls the multipather to clear the current path that errored.
This allows the next command (retry or new command) to select a new
path if there is one.
Signed-off-by: James Smart <jsmart2021@gmail.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Christoph Hellwig <hch@lst.de>
This patch adds a new trace event for NVMe async event notification.
We print the async event in decoded format when we recognize the event;
otherwise we just dump the result.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
James Smart [Thu, 13 Sep 2018 23:17:38 +0000 (16:17 -0700)]
nvme_fc: add 'nvme_discovery' sysfs attribute to fc transport device
The fc transport device should allow for a rediscovery, as userspace
might have lost the events. Example is udev events not handled during
system startup.
This patch adds a sysfs entry 'nvme_discovery' on the fc class to
have it replay all udev discovery events for all local port/remote
port address pairs.
Signed-off-by: James Smart <jsmart2021@gmail.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
James Smart [Thu, 9 Aug 2018 23:48:14 +0000 (16:48 -0700)]
nvmet_fc: support target port removal with nvmet layer
Currently, if a targetport has been connected to via the nvmet config
(in other words, the add_port() transport routine called, and the nvmet
port pointer stored for using in upcalls on new io), and if the
targetport is then removed (say the lldd driver decides to unload or
fully reset its hardware) and then re-added (the lldd driver reloads or
reinits its hardware), the port pointer has been lost so there's no way
to continue to post commands up to nvmet via the transport port.
Correct by allocating a small "port context" structure that will be
linked to by the targetport. The context will save the targetport WWN's
and the nvmet port pointer to use for it. Initial allocation will occur
when the targetport is bound to via add_port. The context will be
deallocated when remove_port() is called. If a targetport is removed
while nvmet has the active port context, the targetport will be unlinked
from the port context before removal. If a new targetport is registered,
the port contexts without a binding are looked through and if the WWN's
match (so it's the same as nvmet's port context) the port context is
linked to the new target port. Thus new io can be received on the new
targetport and operation resumes with nvmet.
Additionally, this also resolves nvmet configuration changing out from
underneath of the nvme-fc target port (for example: a nvmetcli clear).
Signed-off-by: James Smart <james.smart@broadcom.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
This patch removes the redundant module prefix used in the pr_err() when
nvmet_get_smart_log_nsid() fails to find the namespace provided as part
of the smart-log command.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
Milan P. Gandhi [Thu, 9 Aug 2018 16:19:24 +0000 (21:49 +0530)]
nvme: fix typo in nvme_identify_ns_descs
Signed-off-by: Milan P. Gandhi <mgandhi@redhat.com> Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
Jens Axboe [Mon, 1 Oct 2018 14:58:57 +0000 (08:58 -0600)]
Merge tag 'v4.19-rc6' into for-4.20/block
Merge -rc6 in, for two reasons:
1) Resolve a trivial conflict in the blk-mq-tag.c documentation
2) A few important regression fixes went into upstream directly, so
they aren't in the 4.20 branch.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
* tag 'v4.19-rc6': (780 commits)
Linux 4.19-rc6
MAINTAINERS: fix reference to moved drivers/{misc => auxdisplay}/panel.c
cpufreq: qcom-kryo: Fix section annotations
perf/core: Add sanity check to deal with pinned event failure
xen/blkfront: correct purging of persistent grants
Revert "xen/blkfront: When purging persistent grants, keep them in the buffer"
selftests/powerpc: Fix Makefiles for headers_install change
blk-mq: I/O and timer unplugs are inverted in blktrace
dax: Fix deadlock in dax_lock_mapping_entry()
x86/boot: Fix kexec booting failure in the SEV bit detection code
bcache: add separate workqueue for journal_write to avoid deadlock
drm/amd/display: Fix Edid emulation for linux
drm/amd/display: Fix Vega10 lightup on S3 resume
drm/amdgpu: Fix vce work queue was not cancelled when suspend
Revert "drm/panel: Add device_link from panel device to DRM device"
xen/blkfront: When purging persistent grants, keep them in the buffer
clocksource/drivers/timer-atmel-pit: Properly handle error cases
block: fix deadline elevator drain for zoned block devices
ACPI / hotplug / PCI: Don't scan for non-hotplug bridges if slot is not bridge
drm/syncobj: Don't leak fences when WAIT_FOR_SUBMIT is set
...
Merge tag 'for-linus-20180929' of git://git.kernel.dk/linux-block
Jens writes:
"Block fixes for 4.19-rc6
A set of fixes that should go into this release. This pull request
contains:
- A fix (hopefully) for the persistent grants for xen-blkfront. A
previous fix from this series wasn't complete, hence reverted, and
this one should hopefully be it. (Boris Ostrovsky)
- Fix for an elevator drain warning with SMR devices, which is
triggered when you switch schedulers (Damien)
- bcache deadlock fix (Guoju Fang)
- Fix for the block unplug tracepoint, which has had the
timer/explicit flag reverted since 4.11 (Ilya)
- Fix a regression in this series where the blk-mq timeout hook is
invoked with the RCU read lock held, hence preventing it from
blocking (Keith)
- NVMe pull from Christoph, with a single multipath fix (Susobhan Dey)"
* tag 'for-linus-20180929' of git://git.kernel.dk/linux-block:
xen/blkfront: correct purging of persistent grants
Revert "xen/blkfront: When purging persistent grants, keep them in the buffer"
blk-mq: I/O and timer unplugs are inverted in blktrace
bcache: add separate workqueue for journal_write to avoid deadlock
xen/blkfront: When purging persistent grants, keep them in the buffer
block: fix deadline elevator drain for zoned block devices
blk-mq: Allow blocking queue tag iter callbacks
nvme: properly propagate errors in nvme_mpath_init
Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Thomas writes:
"A single fix for the AMD memory encryption boot code so it does not
read random garbage instead of the cached encryption bit when a kexec
kernel is allocated above the 32bit address limit."
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/boot: Fix kexec booting failure in the SEV bit detection code
Merge branch 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Thomas writes:
"Three small fixes for clocksource drivers:
- Proper error handling in the Atmel PIT driver
- Add CLOCK_SOURCE_SUSPEND_NONSTOP for TI SoCs so suspend works again
- Fix the next event function for Facebook Backpack-CMM BMC chips so
usleep(100) doesn't sleep for several milliseconds"
* 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
clocksource/drivers/timer-atmel-pit: Properly handle error cases
clocksource/drivers/fttmr010: Fix set_next_event handler
clocksource/drivers/ti-32k: Add CLOCK_SOURCE_SUSPEND_NONSTOP flag for non-am43 SoCs
Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Thomas writes:
"A single fix for a missing sanity check when a pinned event is tried
to be read on the wrong CPU due to a legit event scheduling failure."
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf/core: Add sanity check to deal with pinned event failure
There is currently a warning when building the Kryo cpufreq driver into
the kernel image:
WARNING: vmlinux.o(.text+0x8aa424): Section mismatch in reference from
the function qcom_cpufreq_kryo_probe() to the function
.init.text:qcom_cpufreq_kryo_get_msm_id()
The function qcom_cpufreq_kryo_probe() references
the function __init qcom_cpufreq_kryo_get_msm_id().
This is often because qcom_cpufreq_kryo_probe lacks a __init
annotation or the annotation of qcom_cpufreq_kryo_get_msm_id is wrong.
Remove the '__init' annotation from qcom_cpufreq_kryo_get_msm_id
so that there is no more mismatch warning.
Additionally, Nick noticed that the remove function was marked as
'__init' when it should really be marked as '__exit'.
Fixes: 46e2856b8e18 (cpufreq: Add Kryo CPU scaling driver) Fixes: 5ad7346b4ae2 (cpufreq: kryo: Add module remove and exit) Reported-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Cc: 4.18+ <stable@vger.kernel.org> # 4.18+ Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input
Dmitry writes:
"Input updates for v4.19-rc5
Just a few driver fixes"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input:
Input: uinput - allow for max == min during input_absinfo validation
Input: elantech - enable middle button of touchpad on ThinkPad P72
Input: atakbd - fix Atari CapsLock behaviour
Input: atakbd - fix Atari keymap
Input: egalax_ts - add system wakeup support
Input: gpio-keys - fix a documentation index issue
Merge tag 'spi-fix-v4.19-rc5' of https://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi
Mark writes:
"spi: Fixes for v4.19
Quite a few fixes for the Renesas drivers in here, plus a fix for the
Tegra driver and some documentation fixes for the recently added
spi-mem code. The Tegra fix is relatively large but fairly
straightforward and mechanical, it runs on probe so it's been
reasonably well covered in -next testing."
* tag 'spi-fix-v4.19-rc5' of https://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi:
spi: spi-mem: Move the DMA-able constraint doc to the kerneldoc header
spi: spi-mem: Add missing description for data.nbytes field
spi: rspi: Fix interrupted DMA transfers
spi: rspi: Fix invalid SPI use during system suspend
spi: sh-msiof: Fix handling of write value for SISTR register
spi: sh-msiof: Fix invalid SPI use during system suspend
spi: gpio: Fix copy-and-paste error
spi: tegra20-slink: explicitly enable/disable clock
Merge tag 'regulator-v4.19-rc5' of https://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator
Mark writes:
"regulator: Fixes for 4.19
A collection of fairly minor bug fixes here, a couple of driver
specific ones plus two core fixes. There's one fix for the new
suspend state code which fixes some confusion with constant values
that are supposed to indicate noop operation and another fixing a
race condition with the creation of sysfs files on new regulators."
* tag 'regulator-v4.19-rc5' of https://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator:
regulator: fix crash caused by null driver data
regulator: Fix 'do-nothing' value for regulators without suspend state
regulator: da9063: fix DT probing with constraints
regulator: bd71837: Disable voltage monitoring for LDO3/4
Merge tag 'pinctrl-v4.19-4' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl
Linus writes:
"Pin control fixes for v4.19:
- Fixes to x86 hardware:
- AMD interrupt debounce issues
- Faulty Intel cannonlake register offset
- Revert pin translation IRQ locking"
* tag 'pinctrl-v4.19-4' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl:
Revert "pinctrl: intel: Do pin translation when lock IRQ"
pinctrl: cannonlake: Fix HOSTSW_OWN register offset of H variant
pinctrl/amd: poll InterruptEnable bits in amd_gpio_irq_set_type
perf/core: Add sanity check to deal with pinned event failure
It is possible that a failure can occur during the scheduling of a
pinned event. The initial portion of perf_event_read_local() contains
the various error checks an event should pass before it can be
considered valid. Ensure that the potential scheduling failure
of a pinned event is checked for, and return a credible error.
Josef Bacik [Fri, 28 Sep 2018 17:45:43 +0000 (13:45 -0400)]
blk-iolatency: keep track of previous windows stats
We apply a smoothing to the scale changes in order to keep sawtoothy
behavior from occurring. However our window for checking if we've
missed our target can sometimes be lower than the smoothing interval
(500ms), especially on faster drives like ssd's. In order to deal with
this keep track of the running tally of the previous intervals that we
threw away because we had already done a scale event recently.
This is needed for the ssd case, as these low latency drives will have
bursts of latency, and if latency happens to be acceptable in the window
that directly follows the opening of the scale window, we could unthrottle
even though we were missing our target in previous windows.
Signed-off-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Josef Bacik [Fri, 28 Sep 2018 17:45:42 +0000 (13:45 -0400)]
blk-iolatency: use a percentile approach for ssd's
We use an average latency approach for determining if we're missing our
latency target. This works well for rotational storage where we have
generally consistent latencies, but for ssd's and other low latency
devices you have more of a spiky behavior, which means we often won't
throttle misbehaving groups because a lot of IO completes at drastically
faster times than our latency target. Instead keep track of how many
IO's miss our target and how many IO's are done in our time window. If
the p(90) latency is above our target then we know we need to throttle.
With this change in place we are seeing the same throttling behavior
with our testcase on ssd's as we see with rotational drives.
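A hedged sketch of the idea (not the blk-iolatency implementation; the struct and helper below are invented for illustration): count how many I/Os in the window missed the target and throttle when more than 10% did, which is equivalent to the p90 latency being above the target.
    #include <stdbool.h>

    struct lat_window {
            unsigned long total;    /* I/Os completed in this window */
            unsigned long missed;   /* I/Os whose latency exceeded the target */
    };

    static bool should_throttle(const struct lat_window *w)
    {
            if (!w->total)
                    return false;
            /* p90 > target  <=>  more than 10% of the samples missed the target */
            return w->missed * 10 > w->total;
    }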
Signed-off-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Josef Bacik [Fri, 28 Sep 2018 17:45:41 +0000 (13:45 -0400)]
blk-iolatency: deal with small samples
There is logic to keep cgroups that haven't done a lot of IO in the most
recent scale window from being punished for over-active higher priority
groups. However for things like ssd's where the windows are pretty
short we'll end up with small numbers of samples, so 5% of samples will
come out to 0 if there aren't enough. Make the floor 1 sample to keep
us from improperly bailing out of scaling down.
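A tiny hedged sketch of that floor (the helper name is invented): 5% of a handful of samples rounds down to zero, so require at least one sample before making a scale-down decision.
    static unsigned long min_samples_needed(unsigned long window_samples)
    {
            unsigned long thresh = window_samples / 20;     /* 5% of the window */

            return thresh ? thresh : 1;                     /* floor of one sample */
    }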
Signed-off-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Josef Bacik [Fri, 28 Sep 2018 17:45:40 +0000 (13:45 -0400)]
blk-iolatency: deal with nr_requests == 1
Hitting the case where blk_queue_depth() returned 1 uncovered the fact
that iolatency doesn't actually handle this case properly, it simply
doesn't scale down anybody. For this case we should go straight into
applying the time delay, which we weren't doing. Since we already limit
the floor at 1 request this if statement is not needed, and this allows
us to set our depth to 1 which allows us to apply the delay if needed.
Signed-off-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Josef Bacik [Fri, 28 Sep 2018 17:45:39 +0000 (13:45 -0400)]
blk-iolatency: use q->nr_requests directly
We were using blk_queue_depth() assuming that it would return
nr_requests, but we hit a case in production on drives that had to have
NCQ turned off in order for them to not misbehave, which resulted in a
queue depth of 1 even though nr_requests was much larger. iolatency really
only cares about requests we are allowed to queue up, as any io that
gets onto the request list is going to be serviced soonish, so we want
to be throttling before the bio gets onto the request list. To make
iolatency work as expected, simply use q->nr_requests instead of
blk_queue_depth() as that is what we actually care about.
Signed-off-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
* tag 'drm-fixes-2018-09-28' of git://anongit.freedesktop.org/drm/drm:
drm/amd/display: Fix Edid emulation for linux
drm/amd/display: Fix Vega10 lightup on S3 resume
drm/amdgpu: Fix vce work queue was not cancelled when suspend
Revert "drm/panel: Add device_link from panel device to DRM device"
drm/syncobj: Don't leak fences when WAIT_FOR_SUBMIT is set
drm/malidp: Fix writeback in NV12
drm: mali-dp: Call drm_crtc_vblank_reset on device init
drm/etnaviv: add DMA configuration for etnaviv platform device
Merge tag 'riscv-for-linus-4.19-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linux
Palmer writes:
"A Single RISC-V Update for 4.19-rc6
The Debian guys have been pushing on our port and found some
unversioned symbols leaking into modules. This PR contains a single
fix for that issue."
* tag 'riscv-for-linus-4.19-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linux:
RISC-V: include linux/ftrace.h in asm-prototypes.h
kyber: fix integer overflow of latency targets on 32-bit
NSEC_PER_SEC has type long, so 5 * NSEC_PER_SEC is calculated as a long.
However, 5 seconds is 5,000,000,000 nanoseconds, which overflows a
32-bit long. Make sure all of the targets are calculated as 64-bit
values.
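A self-contained demonstration of the overflow (generic C, not the Kyber code): when long is 32 bits wide, 5 * NSEC_PER_SEC is computed in 32-bit arithmetic and overflows before it is ever widened; forcing 64-bit arithmetic fixes it.
    #include <stdio.h>
    #include <stdint.h>

    #define NSEC_PER_SEC 1000000000L

    int main(void)
    {
            /* Overflows (undefined behaviour) on builds where long is 32 bits. */
            long    broken = 5 * NSEC_PER_SEC;
            /* Widen one operand first so the multiplication happens in 64 bits. */
            int64_t fixed  = 5 * (int64_t)NSEC_PER_SEC;

            printf("broken=%ld fixed=%lld\n", broken, (long long)fixed);
            return 0;
    }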
* tag 'pci-v4.19-fixes-2' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/helgaas/pci:
ACPI / hotplug / PCI: Don't scan for non-hotplug bridges if slot is not bridge
PCI: dwc: Fix scheduling while atomic issues
MAINTAINERS: Move mobiveil PCI driver entry where it belongs
MAINTAINERS: Update PPC contacts for PCI core error handling
xen/blkfront: correct purging of persistent grants
Commit a46b53672b2c2e3770b38a4abf90d16364d2584b ("xen/blkfront: cleanup
stale persistent grants") introduced a regression, as purged persistent
grants were not put into the list of free grants again. Correct that.
Hannes Reinecke [Fri, 28 Sep 2018 06:17:23 +0000 (08:17 +0200)]
virtio-blk: modernize sysfs attribute creation
Use new-style DEVICE_ATTR_RO/DEVICE_ATTR_RW to create the sysfs attributes
and register the disk with default sysfs attribute groups.
Signed-off-by: Hannes Reinecke <hare@suse.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Acked-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Hannes Reinecke [Fri, 28 Sep 2018 06:17:21 +0000 (08:17 +0200)]
aoe: register default groups with device_add_disk()
Register default sysfs groups during device_add_disk() to avoid a
race condition with udev during startup.
Signed-off-by: Hannes Reinecke <hare@suse.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Acked-by: Ed L. Cachin <ed.cashin@acm.org> Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Hannes Reinecke [Fri, 28 Sep 2018 06:17:20 +0000 (08:17 +0200)]
nvme: register ns_id attributes as default sysfs groups
We should be registering the ns_id attribute as default sysfs
attribute groups, otherwise we have a race condition between
the uevent and the attributes appearing in sysfs.
Suggested-by: Bart van Assche <bvanassche@acm.org> Reviewed-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Hannes Reinecke [Fri, 28 Sep 2018 06:17:19 +0000 (08:17 +0200)]
block: genhd: add 'groups' argument to device_add_disk
Update device_add_disk() to take a 'groups' argument so that
individual drivers can register a device with additional sysfs
attributes.
This avoids the race condition the driver would otherwise have if these
groups were created with sysfs_add_groups().
Signed-off-by: Martin Wilck <martin.wilck@suse.com> Signed-off-by: Hannes Reinecke <hare@suse.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Michael Ellerman [Fri, 28 Sep 2018 04:53:18 +0000 (14:53 +1000)]
selftests/powerpc: Fix Makefiles for headers_install change
Commit b2d35fa5fc80 ("selftests: add headers_install to lib.mk")
introduced a requirement that Makefiles more than one level below the
selftests directory need to define top_srcdir, but it didn't update
any of the powerpc Makefiles.
This broke building all the powerpc selftests with eg:
make[1]: Entering directory '/src/linux/tools/testing/selftests/powerpc'
BUILD_TARGET=/src/linux/tools/testing/selftests/powerpc/alignment; mkdir -p $BUILD_TARGET; make OUTPUT=$BUILD_TARGET -k -C alignment all
make[2]: Entering directory '/src/linux/tools/testing/selftests/powerpc/alignment'
../../lib.mk:20: ../../../../scripts/subarch.include: No such file or directory
make[2]: *** No rule to make target '../../../../scripts/subarch.include'.
make[2]: Failed to remake makefile '../../../../scripts/subarch.include'.
Makefile:38: recipe for target 'alignment' failed
Fix it by setting top_srcdir in the affected Makefiles.
Fixes: b2d35fa5fc80 ("selftests: add headers_install to lib.mk") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
When debugging Kyber, it's really useful to know what latencies we've
been having, how the domain depths have been adjusted, and if we've
actually been throttling. Add three tracepoints, kyber_latency,
kyber_adjust, and kyber_throttled, to record that.
The statistics Kyber currently bases its decisions on have a few
shortcomings:
- It's based on the mean latency, but p99 latency tends to be more
meaningful to anyone who cares about latency. The mean can also be
skewed by rare outliers that the scheduler can't do anything about.
- The statistics calculations are purely time-based with a short window.
This works for steady, high load, but is more sensitive to outliers
with bursty workloads.
- It only considers the latency once an I/O has been submitted to the
device, but the user cares about the time spent in the kernel, as
well.
These are shortcomings of the generic blk-stat code which doesn't quite
fit the ideal use case for Kyber. So, this replaces the statistics with
a histogram used to calculate percentiles of total latency and I/O
latency, which we then use to adjust depths in a slightly more
intelligent manner:
- Sync and async writes are now the same domain.
- Discards are a separate domain.
- Domain queue depths are scaled by the ratio of the p99 total latency
to the target latency (e.g., if the p99 latency is double the target
latency, we will double the queue depth; if the p99 latency is half of
the target latency, we can halve the queue depth).
- We use the I/O latency to determine whether we should scale queue
depths down: we will only scale down if any domain's I/O latency
exceeds the target latency, which is an indicator of congestion in the
device.
These new heuristics are just as scalable as the heuristics they
replace.
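A hedged sketch of the scaling rule described above (not Kyber's actual code; the function and its parameters are invented for illustration): scale the depth by the ratio of the target to the p99 total latency, but only allow scaling down when the device-side I/O latency also exceeds the target.
    static unsigned int scale_domain_depth(unsigned int depth,
                                           unsigned long long p99_total_ns,
                                           unsigned long long p99_io_ns,
                                           unsigned long long target_ns)
    {
            unsigned long long new_depth;

            if (!p99_total_ns)
                    return depth;

            /* p99 at twice the target halves the depth; at half the target it doubles. */
            new_depth = (unsigned long long)depth * target_ns / p99_total_ns;

            /* Only scale down if the device itself looks congested. */
            if (new_depth < depth && p99_io_ns <= target_ns)
                    return depth;

            return new_depth ? (unsigned int)new_depth : 1;
    }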
kyber: don't make domain token sbitmap larger than necessary
The domain token sbitmaps are currently initialized to the device queue
depth or 256, whichever is larger, and immediately resized to the
maximum depth for that domain (256, 128, or 64 for read, write, and
other, respectively). The sbitmap is never resized larger than that, so
it's unnecessary to allocate a bitmap larger than the maximum depth.
Let's just allocate it to the maximum depth to begin with. This will use
marginally less memory, and more importantly, give us a more appropriate
number of bits per sbitmap word.
block: move call of scheduler's ->completed_request() hook
Commit 4bc6339a583c ("block: move blk_stat_add() to
__blk_mq_end_request()") consolidated some calls using ktime_get() so
we'd only need to call it once. Kyber's ->completed_request() hook also
calls ktime_get(), so let's move it to the same place, too.
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma
Jason writes:
"Second RDMA rc pull request
- Fix a long standing race bug when destroying comp_event file descriptors
- srp, hfi1, bnxt_re: Various driver crashes from missing validation
and other cases
- Fixes for regressions in patches merged this window in the gid
cache, devx, ucma and uapi."
* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma:
RDMA/core: Set right entry state before releasing reference
IB/mlx5: Destroy the DEVX object upon error flow
IB/uverbs: Free uapi on destroy
RDMA/bnxt_re: Fix system crash during RDMA resource initialization
IB/hfi1: Fix destroy_qp hang after a link down
IB/hfi1: Fix context recovery when PBC has an UnsupportedVL
IB/hfi1: Invalid user input can result in crash
IB/hfi1: Fix SL array bounds check
RDMA/uverbs: Fix validity check for modify QP
IB/srp: Avoid that sg_reset -d ${srp_device} triggers an infinite loop
ucma: fix a use-after-free in ucma_resolve_ip()
RDMA/uverbs: Atomically flush and mark closed the comp event queue
cxgb4: fix abort_req_rss6 struct
blk-mq: I/O and timer unplugs are inverted in blktrace
trace_block_unplug() takes true for explicit unplugs and false for
implicit unplugs. schedule() unplugs are implicit and should be
reported as timer unplugs. While correct in the legacy code, this has
been inverted in blk-mq since 4.11.
Jan Kara [Thu, 27 Sep 2018 11:23:32 +0000 (13:23 +0200)]
dax: Fix deadlock in dax_lock_mapping_entry()
When dax_lock_mapping_entry() has to sleep to obtain entry lock, it will
fail to unlock mapping->i_pages spinlock and thus immediately deadlock
against itself when retrying to grab the entry lock again. Fix the
problem by unlocking mapping->i_pages before retrying.
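A generic, self-contained sketch of the pattern and the fix (pthreads stand in for the kernel locks; this is not the dax code): the retry path must drop the lock it already holds before sleeping and looping back, otherwise the second lock attempt blocks against the caller itself.
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_spinlock_t pages_lock;       /* stands in for mapping->i_pages */
    static bool entry_locked = true;            /* simulates a contended entry lock */

    static void wait_for_entry(void)
    {
            /* Stands in for a sleeping wait; must not hold pages_lock here. */
            entry_locked = false;
    }

    static void lock_entry(void)
    {
            for (;;) {
                    pthread_spin_lock(&pages_lock);
                    if (!entry_locked) {
                            pthread_spin_unlock(&pages_lock);
                            return;
                    }
                    /* The fix: unlock before sleeping, then retry from the top. */
                    pthread_spin_unlock(&pages_lock);
                    wait_for_entry();
            }
    }

    int main(void)
    {
            pthread_spin_init(&pages_lock, PTHREAD_PROCESS_PRIVATE);
            lock_entry();
            puts("entry locked without deadlocking");
            return 0;
    }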
Fixes: c2a7d2a11552 ("filesystem-dax: Introduce dax_lock_mapping_entry()") Reported-by: Barret Rhoden <brho@google.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Kairui Song [Thu, 27 Sep 2018 12:38:45 +0000 (20:38 +0800)]
x86/boot: Fix kexec booting failure in the SEV bit detection code
Commit
1958b5fc4010 ("x86/boot: Add early boot support when running with SEV active")
can occasionally cause system resets when kexec-ing a second kernel even
if SEV is not active.
That's because get_sev_encryption_bit() uses 32-bit rIP-relative
addressing to read the value of enc_bit - a variable which caches a
previously detected encryption bit position - but kexec may allocate
the early boot code to a higher location, beyond the 32-bit addressing
limit.
In this case, garbage will be read and get_sev_encryption_bit() will
return the wrong value, leading to accessing memory with the wrong
encryption setting.
Therefore, remove enc_bit, and thus get rid of the need to do 32-bit
rIP-relative addressing in the first place.
[ bp: massage commit message heavily. ]
Fixes: 1958b5fc4010 ("x86/boot: Add early boot support when running with SEV active") Suggested-by: Borislav Petkov <bp@suse.de> Signed-off-by: Kairui Song <kasong@redhat.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Cc: linux-kernel@vger.kernel.org Cc: tglx@linutronix.de Cc: mingo@redhat.com Cc: hpa@zytor.com Cc: brijesh.singh@amd.com Cc: kexec@lists.infradead.org Cc: dyoung@redhat.com Cc: bhe@redhat.com Cc: ghook@redhat.com Link: https://lkml.kernel.org/r/20180927123845.32052-1-kasong@redhat.com
bcache: add separate workqueue for journal_write to avoid deadlock
After a write to the SSD completes, bcache schedules journal_write work on
system_wq, which is a public workqueue in the system without the
WQ_MEM_RECLAIM flag. system_wq is also a bound workqueue, and there may be
no idle kworker on the current processor. Creating a new kworker may
unfortunately need to reclaim memory first, by shrinking the cache and slab
used by the VFS, which depends on the bcache device. That's a deadlock.
This patch creates a new workqueue for journal_write with the
WQ_MEM_RECLAIM flag. Its rescuer thread will work to avoid the deadlock.
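A hedged kernel-style sketch of the pattern (not the bcache patch itself; the names are illustrative): work sitting in the writeback/reclaim path should be queued on the driver's own WQ_MEM_RECLAIM workqueue rather than on system_wq, so its rescuer thread can make progress even when no new kworker can be created.
    #include <linux/errno.h>
    #include <linux/workqueue.h>

    static struct workqueue_struct *journal_wq;

    static int journal_wq_init(void)
    {
            journal_wq = alloc_workqueue("bcache_journal", WQ_MEM_RECLAIM, 0);
            return journal_wq ? 0 : -ENOMEM;
    }

    static void journal_write_schedule(struct work_struct *work)
    {
            /* was: schedule_work(work), which uses system_wq */
            queue_work(journal_wq, work);
    }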
[Why]
EDID emulation didn't work properly on Linux, as we stop programming
if nothing is connected physically.
[How]
We get a flag from DRM when we want to do EDID emulation. If this
flag is true and nothing is connected physically, we only
program the front end using VIRTUAL_SIGNAL.
Signed-off-by: Bhawanpreet Lakha <Bhawanpreet.Lakha@amd.com> Reviewed-by: Harry Wentland <Harry.Wentland@amd.com> Acked-by: Leo Li <sunpeng.li@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Roman Li [Wed, 26 Sep 2018 17:42:16 +0000 (13:42 -0400)]
drm/amd/display: Fix Vega10 lightup on S3 resume
[Why]
There have been a few reports of Vega10 display remaining blank
after S3 resume. The regression is caused by workaround for mode
change on Vega10 - skip set_bandwidth if stream count is 0.
As a result we skipped dispclk reset on suspend, thus on resume
we may skip the clock update assuming it hasn't been changed.
On some systems it causes display blank or 'out of range'.
[How]
Revert "drm/amd/display: Fix Vega10 black screen after mode change"
Verified that it doesn't cause a mode change regression.
Signed-off-by: Roman Li <Roman.Li@amd.com> Reviewed-by: Sun peng Li <Sunpeng.Li@amd.com> Acked-by: Leo Li <sunpeng.li@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Rex Zhu [Thu, 27 Sep 2018 12:48:39 +0000 (20:48 +0800)]
drm/amdgpu: Fix vce work queue was not cancelled when suspend
The VCE cancel_delayed_work_sync was never called; the driver only called
it in the error path. This caused an A+A suspend hang when runtime PM is
enabled: the idle work queue visits the SMU, which hangs because the dGPU
has already been suspended, yet the dGPU is also woken up. Since the SMU
hangs, the dGPU resume then fails.
Reviewed-by: Christian König <christian.koenig@amd.com> Reviewed-by: Feifei Xu <Feifei.Xu@amd.com> Signed-off-by: Rex Zhu <Rex.Zhu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
commit 0c08754b59da
("drm/panel: Add device_link from panel device to DRM device")
creates a circular dependency under these circumstances:
1. The panel depends on dsi-host because it is MIPI-DSI child
device.
2. dsi-host depends on the drm parent device (connector->dev->dev)
this should be allowed.
3. drm parent dev (connector->dev->dev) depends on the panel
after this patch.
This makes the dependency circular, and while it appears it
does not affect any in-tree drivers (they do not seem to have
DSI hosts depending on the same parent device), this does not
seem right.
As noted in a response from Andrzej Hajda, the intent is
likely to make the panel dependent on the DRM device
(connector->dev) not its parent. But we have no way of
doing that since the DRM device doesn't contain any
struct device on its own (arguably it should).
Revert this until a proper approach is figured out.
Boris Ostrovsky [Sat, 22 Sep 2018 19:55:49 +0000 (15:55 -0400)]
xen/blkfront: When purging persistent grants, keep them in the buffer
Commit a46b53672b2c ("xen/blkfront: cleanup stale persistent grants")
added support for purging persistent grants when they are not in use. As
part of the purge, the grants were removed from the grant buffer. This
eventually causes the buffer to become empty, with BUG_ON triggered in
get_free_grant(). This can be observed even on an idle system, within
20-30 minutes.
We should keep the grants in the buffer when purging, and only free the
grant ref.
Fixes: a46b53672b2c ("xen/blkfront: cleanup stale persistent grants") Reviewed-by: Juergen Gross <jgross@suse.com> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Damien Le Moal [Thu, 27 Sep 2018 01:55:13 +0000 (10:55 +0900)]
block: fix deadline elevator drain for zoned block devices
When the deadline scheduler is used with a zoned block device, writes
to a zone will be dispatched one at a time. This causes the warning
message:
deadline: forced dispatching is broken (nr_sorted=X), please report this
to be displayed when switching to another elevator with the legacy I/O
path while write requests to a zone are being retained in the scheduler
queue.
Prevent this message from being displayed when executing
elv_drain_elevator() for a zoned block device. __blk_drain_queue() will
loop until all writes are dispatched and completed, resulting in the
desired elevator queue drain without extensive modifications to the
deadline code itself to handle forced-dispatch calls.
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> Fixes: 8dc8146f9c92 ("deadline-iosched: Introduce zone locking support") Cc: stable@vger.kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
Dave Airlie [Thu, 27 Sep 2018 00:19:26 +0000 (10:19 +1000)]
Merge branch 'etnaviv/fixes' of https://git.pengutronix.de/git/lst/linux into drm-fixes
One fix to get a proper DMA configuration in place for the etnaviv
virtual device. I'm sending this as a fix because a dma-mapping change on
the ARC architecture side during the 4.19 cycle broke etnaviv on that
platform; this patch remedies that and also enables ARM64.
blk-mq: Enable support for runtime power management
Now that the blk-mq core processes power management requests
(marked with RQF_PREEMPT) in other states than RPM_ACTIVE, enable
runtime power management for blk-mq.
Signed-off-by: Bart Van Assche <bvanassche@acm.org> Reviewed-by: Ming Lei <ming.lei@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: Jianchao Wang <jianchao.w.wang@oracle.com> Cc: Hannes Reinecke <hare@suse.com> Cc: Johannes Thumshirn <jthumshirn@suse.de> Cc: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Jens Axboe <axboe@kernel.dk>
block: Make blk_get_request() block for non-PM requests while suspended
Instead of allowing requests that are not power management requests
to enter the queue in runtime suspended status (RPM_SUSPENDED), make
the blk_get_request() caller block. This change fixes a starvation
issue: it is now guaranteed that power management requests will be
executed no matter how many blk_get_request() callers are waiting.
For blk-mq, instead of maintaining the q->nr_pending counter, rely
on q->q_usage_counter. Call pm_runtime_mark_last_busy() every time a
request finishes instead of only if the queue depth drops to zero.
Signed-off-by: Bart Van Assche <bvanassche@acm.org> Reviewed-by: Ming Lei <ming.lei@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: Jianchao Wang <jianchao.w.wang@oracle.com> Cc: Hannes Reinecke <hare@suse.com> Cc: Johannes Thumshirn <jthumshirn@suse.de> Cc: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Jens Axboe <axboe@kernel.dk>
block: Allow unfreezing of a queue while requests are in progress
A later patch will call blk_freeze_queue_start() followed by
blk_mq_unfreeze_queue() without waiting for q_usage_counter to drop
to zero. Make sure that this doesn't cause a kernel warning to appear
by switching from percpu_ref_reinit() to percpu_ref_resurrect(). The
former namely requires that the refcount it operates on is zero.
Signed-off-by: Bart Van Assche <bvanassche@acm.org> Reviewed-by: Ming Lei <ming.lei@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: Jianchao Wang <jianchao.w.wang@oracle.com> Cc: Hannes Reinecke <hare@suse.com> Cc: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
This function will be used in a later patch to switch the struct
request_queue q_usage_counter from killed back to live. In contrast
to percpu_ref_reinit(), this new function does not require that the
refcount is zero.
Signed-off-by: Bart Van Assche <bvanassche@acm.org> Acked-by: Tejun Heo <tj@kernel.org> Reviewed-by: Ming Lei <ming.lei@redhat.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Jianchao Wang <jianchao.w.wang@oracle.com> Cc: Hannes Reinecke <hare@suse.com> Cc: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Instead of scheduling runtime resume of a request queue after a
request has been queued, schedule asynchronous resume during request
allocation. The new pm_request_resume() calls occur after
blk_queue_enter() has increased the q_usage_counter request queue
member. This change is needed for a later patch that will make request
allocation block while the queue status is not RPM_ACTIVE.
Signed-off-by: Bart Van Assche <bvanassche@acm.org> Reviewed-by: Ming Lei <ming.lei@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: Jianchao Wang <jianchao.w.wang@oracle.com> Cc: Hannes Reinecke <hare@suse.com> Cc: Johannes Thumshirn <jthumshirn@suse.de> Cc: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Jens Axboe <axboe@kernel.dk>
block: Split blk_pm_add_request() and blk_pm_put_request()
Move the pm_request_resume() and pm_runtime_mark_last_busy() calls into
two new functions and thereby separate legacy block layer code from code
that works for both the legacy block layer and blk-mq. A later patch will
add calls to the new functions in the blk-mq code.
Signed-off-by: Bart Van Assche <bvanassche@acm.org> Reviewed-by: Ming Lei <ming.lei@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: Martin K. Petersen <martin.petersen@oracle.com> Cc: Jianchao Wang <jianchao.w.wang@oracle.com> Cc: Hannes Reinecke <hare@suse.com> Cc: Johannes Thumshirn <jthumshirn@suse.de> Cc: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Jens Axboe <axboe@kernel.dk>
block, scsi: Change the preempt-only flag into a counter
The RQF_PREEMPT flag is used for three purposes:
- In the SCSI core, for making sure that power management requests
are executed even if a device is in the "quiesced" state.
- For domain validation by SCSI drivers that use the parallel port.
- In the IDE driver, for IDE preempt requests.
Rename "preempt-only" into "pm-only" because the primary purpose of
this mode is power management. Since the power management core may
but does not have to resume a runtime suspended device before
performing system-wide suspend and since a later patch will set
"pm-only" mode as long as a block device is runtime suspended, make
it possible to set "pm-only" mode from more than one context. Since
with this change scsi_device_quiesce() is no longer idempotent, make
that function return early if it is called for a quiesced queue.
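A hedged sketch of the flag-to-counter conversion described above (generic C11 atomics, not the upstream block/SCSI code; the names are invented): each context entering pm-only mode increments the counter and decrements it on exit, and non-PM submitters only proceed while the counter is zero.
    #include <stdatomic.h>
    #include <stdbool.h>

    struct queue_pm_state {
            atomic_int pm_only;     /* number of contexts that set pm-only mode */
    };

    static void queue_set_pm_only(struct queue_pm_state *q)
    {
            atomic_fetch_add(&q->pm_only, 1);
    }

    static void queue_clear_pm_only(struct queue_pm_state *q)
    {
            atomic_fetch_sub(&q->pm_only, 1);
    }

    static bool queue_allows_normal_requests(struct queue_pm_state *q)
    {
            return atomic_load(&q->pm_only) == 0;
    }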
Signed-off-by: Bart Van Assche <bvanassche@acm.org> Acked-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Ming Lei <ming.lei@redhat.com> Cc: Jianchao Wang <jianchao.w.wang@oracle.com> Cc: Johannes Thumshirn <jthumshirn@suse.de> Cc: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Jens Axboe <axboe@kernel.dk>
block: Move power management code into a new source file
Move the code for runtime power management from blk-core.c into the
new source file blk-pm.c. Move the corresponding declarations from
<linux/blkdev.h> into <linux/blk-pm.h>. For CONFIG_PM=n, leave out
the declarations of the functions that are not used in that mode.
This patch not only reduces the number of #ifdefs in the block layer
core code but also reduces the size of header file <linux/blkdev.h>
and hence should help to reduce the build time of the Linux kernel
if CONFIG_PM is not defined.
Signed-off-by: Bart Van Assche <bvanassche@acm.org> Reviewed-by: Ming Lei <ming.lei@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: Jianchao Wang <jianchao.w.wang@oracle.com> Cc: Hannes Reinecke <hare@suse.com> Cc: Johannes Thumshirn <jthumshirn@suse.de> Cc: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Jens Axboe <axboe@kernel.dk>
Mika Westerberg [Wed, 26 Sep 2018 20:39:28 +0000 (15:39 -0500)]
ACPI / hotplug / PCI: Don't scan for non-hotplug bridges if slot is not bridge
HP 6730b laptop has an ethernet NIC connected to one of the PCIe root
ports. The root ports themselves are native PCIe hotplug capable. Now,
during boot after PCI devices are scanned the BIOS triggers ACPI bus check
directly to the NIC:
ACPI: \_SB_.PCI0.RP06.NIC_: Bus check in hotplug_event()
It is not clear why it is sending bus check but regardless the ACPI hotplug
notify handler calls enable_slot() directly (instead of going through
acpiphp_check_bridge() as there is no bridge), which ends up handling
special case for non-hotplug bridges with native PCIe hotplug. This
results in a crash of some kind, but the reporter only sees a black screen,
so it is hard to figure out the exact spot and what actually happens. Based on
a few fix proposals it was tracked to crash somewhere inside
pci_assign_unassigned_bridge_resources().
In any case we should not really be in that special branch at all because
the ACPI notify happened to a slot that is not a PCI bridge (it is just a
regular PCI device).
Fix this so that we only go to that special branch if we are calling
enable_slot() for a bridge (e.g., the ACPI notification was for the
bridge).
Link: https://bugzilla.kernel.org/show_bug.cgi?id=201127 Fixes: 84c8b58ed3ad ("ACPI / hotplug / PCI: Don't scan bridges managed by native hotplug") Reported-by: Peter Anemone <peter.anemone@gmail.com> Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> CC: stable@vger.kernel.org # v4.18+
xen: don't include <xen/xen.h> from <asm/io.h> and <asm/dma-mapping.h>
Nothing Xen specific in these headers, which get included from a lot
of code in the kernel. So prune the includes and move them to the
Xen-specific files that actually use them instead.
Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>