git.proxmox.com Git - mirror_ubuntu-kernels.git/log
io_uring: unify files and task cancel
Pavel Begunkov [Sun, 11 Apr 2021 00:46:27 +0000 (01:46 +0100)]
io_uring: unify files and task cancel

Now __io_uring_cancel() and __io_uring_files_cancel() are very similar,
differing mostly in how we count requests; merge them and let
tctx_inflight() handle the counting.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/1a5986a97df4dc1378f3fe0ca1eb483dbcf42112.1618101759.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: track inflight requests through counter
Pavel Begunkov [Sun, 11 Apr 2021 00:46:26 +0000 (01:46 +0100)]
io_uring: track inflight requests through counter

Instead of keeping requests in an inflight_list, just track them with a
per-tctx atomic counter. Apart from being much easier and more
consistent with task cancel, it frees ->inflight_entry from being shared
between iopoll and cancel tracking, so less headache for us.
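
A minimal sketch of the counting scheme, with illustrative names (the
exact fields and helpers in the patch may differ):

    /* per-task io_uring context, sketched */
    struct io_uring_task_sketch {
            atomic_t inflight_tracked;  /* replaces the old inflight_list */
    };

    static void io_req_track_inflight(struct io_uring_task_sketch *tctx)
    {
            atomic_inc(&tctx->inflight_tracked);
    }

    static void io_req_untrack_inflight(struct io_uring_task_sketch *tctx)
    {
            atomic_dec(&tctx->inflight_tracked);
    }

    /* cancellation only needs the count, not the requests themselves */
    static unsigned int tctx_inflight(struct io_uring_task_sketch *tctx)
    {
            return atomic_read(&tctx->inflight_tracked);
    }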

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/3c2ee0863cd7eeefa605f3eaff4c1c461a6f1157.1618101759.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: unify task and files cancel loops
Pavel Begunkov [Sun, 11 Apr 2021 00:46:25 +0000 (01:46 +0100)]
io_uring: unify task and files cancel loops

Move tracked inflight number check up the stack into
__io_uring_files_cancel() so it's similar to task cancel. Will be used
for further cleaning.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/dca5a395efebd1e3e0f3bbc6b9640c5e8aa7e468.1618101759.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: simplify apoll hash removal
Pavel Begunkov [Fri, 9 Apr 2021 08:13:21 +0000 (09:13 +0100)]
io_uring: simplify apoll hash removal

hash_del() works well with non-hashed nodes, so there's no need to check
whether a node is hashed first.
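
For illustration, this is the kind of check that becomes redundant
(hash_del() resolves to hlist_del_init(), which is a no-op for unhashed
nodes):

    /* before: hashed state checked by hand */
    if (hash_hashed(&req->hash_node))
            hash_del(&req->hash_node);

    /* after: safe either way */
    hash_del(&req->hash_node);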

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: refactor io_poll_complete()
Pavel Begunkov [Fri, 9 Apr 2021 08:13:20 +0000 (09:13 +0100)]
io_uring: refactor io_poll_complete()

Remove error parameter from io_poll_complete(), 0 is always passed,
and do a bit of cleaning on top.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: clean up io_poll_task_func()
Pavel Begunkov [Fri, 9 Apr 2021 08:13:19 +0000 (09:13 +0100)]
io_uring: clean up io_poll_task_func()

io_poll_complete() always fills an event (even an overflowed one), so we
should always do io_cqring_ev_posted() afterwards. And that's what
currently happens, because the second EPOLLONESHOT check is always true;
it can't return !done for oneshots.

Remove that branching, it's much easier to read.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io-wq: Fix io_wq_worker_affinity()
Peter Zijlstra [Thu, 8 Apr 2021 09:44:50 +0000 (11:44 +0200)]
io-wq: Fix io_wq_worker_affinity()

Do not include private headers and do not frob in internals.

On top of that, while the previous code restores the affinity, it
doesn't ensure the task actually moves there if it was running,
leading to the fun situation that it can be observed running outside
of its allowed mask for potentially significant time.

Use the proper API instead.
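
A sketch of the fixed callback, assuming the io_wqe/worker layout of the
time; the point is going through the scheduler API, which also migrates
a running task into the new mask:

    static bool io_wq_worker_affinity(struct io_worker *worker, void *data)
    {
            struct io_wqe *wqe = data;

            /* instead of copying into task internals by hand */
            set_cpus_allowed_ptr(worker->task, cpumask_of_node(wqe->node));
            return false;
    }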

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/YG7QkiUzlEbW85TU@hirez.programming.kicks-ass.net
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: don't attempt re-add of multishot poll request if racing
Jens Axboe [Tue, 6 Apr 2021 15:49:31 +0000 (09:49 -0600)]
io_uring: don't attempt re-add of multishot poll request if racing

We currently allow racy updates to multishot requests, but we can end up
double adding the poll request if both completion and update do it.
Ensure that we skip the re-add on the update side if someone else is
completing it.

Fixes: b69de288e913 ("io_uring: allow events and user_data update of running poll requests")
Reported-by: Joakim Hassila <joj@mac.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io-wq: simplify code in __io_worker_busy()
Hao Xu [Tue, 6 Apr 2021 03:08:45 +0000 (11:08 +0800)]
io-wq: simplify code in __io_worker_busy()

Leverage XOR to simplify the code in __io_worker_busy.
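
The idea, illustratively: since the branch is only reached when the
bound state must flip, XOR replaces the two symmetric assignments:

    /* before */
    if (work_bound)
            worker->flags |= IO_WORKER_F_BOUND;
    else
            worker->flags &= ~IO_WORKER_F_BOUND;

    /* after: only reached when worker_bound != work_bound */
    worker->flags ^= IO_WORKER_F_BOUND;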

Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
Link: https://lore.kernel.org/r/1617678525-3129-1-git-send-email-haoxu@linux.alibaba.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: kill outdated comment about splice punt
Pavel Begunkov [Thu, 1 Apr 2021 14:44:05 +0000 (15:44 +0100)]
io_uring: kill outdated comment about splice punt

The splice/tee comment in io_prep_async_work() isn't relevant anymore
since the section was moved; delete it.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/892a549c89c3d422b679677b8e68ffd3fcb736b6.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: encapsulate fixed files into struct
Pavel Begunkov [Thu, 1 Apr 2021 14:44:04 +0000 (15:44 +0100)]
io_uring: encapsulate fixed files into struct

Add struct io_fixed_file representing a single registered file, first to
hide the ugly struct file **, which may be misleading, and secondly to
retype it to unsigned long, as conversions to and from file * for
handling and masking FFS_* flags are getting nasty.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/78669731a605a7614c577c3de552631cfaf0869a.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: refactor file tables alloc/free
Pavel Begunkov [Thu, 1 Apr 2021 14:44:03 +0000 (15:44 +0100)]
io_uring: refactor file tables alloc/free

Introduce a helper io_free_file_tables() that does all the cleaning;
there are several places where it's hand-coded. Also move all
allocations into io_sqe_alloc_file_tables() and rename it, so all of it
is in one place.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/502a84ebf41ff119b095e59661e678eacb752bf8.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: don't quiesce initial files register
Pavel Begunkov [Thu, 1 Apr 2021 14:44:02 +0000 (15:44 +0100)]
io_uring: don't quiesce initial files register

There is no reason why we would want to fully quiesce the ring on
IORING_REGISTER_FILES; if files are already registered, we fail.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/563bb8060bb2d3efbc32fce6101678281c574d2a.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: set proper FFS* flags on reg file update
Pavel Begunkov [Thu, 1 Apr 2021 14:44:01 +0000 (15:44 +0100)]
io_uring: set proper FFS* flags on reg file update

Set FFS_* flags (e.g. FFS_ASYNC_READ) not only at initial registration
but also on registered-files update. Not a bug, but we may miss getting
profit out of the feature.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/df29a841a2d3d3695b509cdffce5070777d9d942.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: deduplicate NOSIGNAL setting
Pavel Begunkov [Thu, 1 Apr 2021 14:44:00 +0000 (15:44 +0100)]
io_uring: deduplicate NOSIGNAL setting

Set MSG_NOSIGNAL and REQ_F_NOWAIT in send/recv prep routines and don't
duplicate it in all four send/recv handlers.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/e1133a3ed1c0e192975b7341ea4b0bf91f63b132.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: put link timeout req consistently
Pavel Begunkov [Thu, 1 Apr 2021 14:43:59 +0000 (15:43 +0100)]
io_uring: put link timeout req consistently

Don't put the linked timeout req in io_async_find_and_cancel() but do it
in io_link_timeout_fn(), so we have only one point for that and won't
have to do it differently as is the case now (put vs put_deferred). Btw,
improve io_async_find_and_cancel()'s locking a bit.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/d75b70957f245275ab7cba83e0ac9c1b86aae78a.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: simplify overflow handling
Pavel Begunkov [Thu, 1 Apr 2021 14:43:58 +0000 (15:43 +0100)]
io_uring: simplify overflow handling

Overflowed CQEs don't lock requests anymore, so we don't care so much
about cancelling them; kill cq_overflow_flushed and simplify the code.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/5799867aeba9e713c32f49aef78e5e1aef9fbc43.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: lock annotate timeouts and poll
Pavel Begunkov [Thu, 1 Apr 2021 14:43:57 +0000 (15:43 +0100)]
io_uring: lock annotate timeouts and poll

Add timeout and poll ->completion_lock annotations for Sparse, makes
life easier while looking at the functions.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/2345325643093d41543383ba985a735aeb899eac.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: kill unused forward decls
Pavel Begunkov [Thu, 1 Apr 2021 14:43:56 +0000 (15:43 +0100)]
io_uring: kill unused forward decls

Kill unused forward declarations for io_ring_file_put() and
io_queue_next(). Also btw rename the first one.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/64aa27c3f9662e14615cc119189f5eaf12989671.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: store reg buffer end instead of length
Pavel Begunkov [Thu, 1 Apr 2021 14:43:55 +0000 (15:43 +0100)]
io_uring: store reg buffer end instead of length

It's a bit more convenient for us to store a registered buffer's end
address instead of its length, see struct io_mapped_ubuf, as it allows
us to not recompute it every time.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/39164403fe92f1dc437af134adeec2423cdf9395.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: improve import_fixed overflow checks
Pavel Begunkov [Thu, 1 Apr 2021 14:43:54 +0000 (15:43 +0100)]
io_uring: improve import_fixed overflow checks

Replace a hand-coded overflow check with a specialised function. Even
though compilers are smart enough to generate identical binary code
(i.e. check the carry bit), it's more foolproof and conveys the
intention better.
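
Illustratively, with check_add_overflow() from <linux/overflow.h> (the
variable names here are assumptions):

    u64 buf_end;

    /* before: hand-coded carry check */
    if (buf_addr + len < buf_addr)
            return -EFAULT;

    /* after: explicit and type-checked */
    if (unlikely(check_add_overflow(buf_addr, (u64)len, &buf_end)))
            return -EFAULT;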

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/e437dcdc929bacbb6f11a4824ecbbf17225cb82a.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: refactor io_async_cancel()
Pavel Begunkov [Thu, 1 Apr 2021 14:43:53 +0000 (15:43 +0100)]
io_uring: refactor io_async_cancel()

Remove extra tctx==NULL checks that are already done by
io_async_cancel_one().

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/70c2a8b958d942e86958a28af0452966ce1095b0.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: remove unused hash_wait
Pavel Begunkov [Thu, 1 Apr 2021 14:43:52 +0000 (15:43 +0100)]
io_uring: remove unused hash_wait

No users of io_uring_ctx::hash_wait left, kill it.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/e25cb83c233a5f75f15275596b49fbafbea606fa.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: better ref handling in poll_remove_one
Pavel Begunkov [Thu, 1 Apr 2021 14:43:51 +0000 (15:43 +0100)]
io_uring: better ref handling in poll_remove_one

Instead of using io_put_req() to drop a non-final ref, use
req_ref_put(), which is slimmer and will also check the invariant.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/85b5774ce13ae55cc2e705abdc8cbafe1212f1bd.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: combine lock/unlock sections on exit
Pavel Begunkov [Thu, 1 Apr 2021 14:43:50 +0000 (15:43 +0100)]
io_uring: combine lock/unlock sections on exit

io_ring_exit_work() already does uring_lock lock/unlock, no need to
repeat it for the lock-waiting trick in io_ring_ctx_free(). Move the
waiting with comments and spinlocking into io_ring_exit_work().

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/a8ae0589b0ea64ad4791e2c282e4e9b713dd7024.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: remove useless is_dying check on quiesce
Pavel Begunkov [Thu, 1 Apr 2021 14:43:48 +0000 (15:43 +0100)]
io_uring: remove useless is_dying check on quiesce

rsrc_data refs should always be valid for potential submitters, and
io_rsrc_ref_quiesce() restores them before unlocking, so the
percpu_ref_is_dying() check in io_sqe_files_unregister() does nothing
and is misleading. Concurrent quiesce is prevented with
struct io_rsrc_data::quiesce.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/bf97055e1748ee3a382e66daf384a469eb90b931.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: reuse io_rsrc_node_destroy()
Pavel Begunkov [Thu, 1 Apr 2021 14:43:47 +0000 (15:43 +0100)]
io_uring: reuse io_rsrc_node_destroy()

Reuse io_rsrc_node_destroy() in __io_rsrc_put_work(). Also move it to a
more appropriate place -- next to the other node routines -- and remove
the forward declaration.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/cccafba41aee1e5bb59988704885b1340aef3a27.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: ctx-wide rsrc nodes
Pavel Begunkov [Thu, 1 Apr 2021 14:43:46 +0000 (15:43 +0100)]
io_uring: ctx-wide rsrc nodes

If we're going to ever support multiple types of resources we need
shared rsrc nodes to not bloat requests, that is implemented in this
patch. It also gives a nicer API and saves one pointer dereference
in io_req_set_rsrc_node().

We may say that all requests bound to a resource belong to one and only
one rsrc node, and considering that nodes are removed and recycled
strictly in order, this separates requests into generations, where the
generation changes on each node switch (i.e. io_rsrc_node_switch()).

The API is simple: io_rsrc_node_switch() switches to a new generation if
needed, and also optionally kills a passed-in io_rsrc_data. Each call to
io_rsrc_node_switch() has to be preceded by
io_rsrc_node_switch_start(). The start function is idempotent and need
not necessarily be followed by a switch.

One difference is that once a node was set it will always retain a valid
rsrc node, even on unregister. It may be a nuisance at the moment, but
makes much sense for multiple types of resources. Another thing changed
is that nodes are bound to/associated with an io_rsrc_data later, just
before killing (i.e. switching).

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/7e9c693b4b9a2f47aa784b616ce29843021bb65a.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: refactor io_queue_rsrc_removal()
Pavel Begunkov [Thu, 1 Apr 2021 14:43:45 +0000 (15:43 +0100)]
io_uring: refactor io_queue_rsrc_removal()

Pass rsrc_node into io_queue_rsrc_removal() explicitly. Just a
simple preparation patch, makes following changes nicer.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/002889ce4de7baf287f2b010eef86ffe889174c6.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: move rsrc_put callback into io_rsrc_data
Pavel Begunkov [Thu, 1 Apr 2021 14:43:44 +0000 (15:43 +0100)]
io_uring: move rsrc_put callback into io_rsrc_data

io_rsrc_node's callback operates only on a single io_rsrc_data and only
with its resources, so the rsrc_put() callback is actually a property of
io_rsrc_data. Move it there, it makes the code much nicer.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/9417c2fba3c09e8668f05747006a603d416d34b4.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: encapsulate rsrc node manipulations
Pavel Begunkov [Thu, 1 Apr 2021 14:43:43 +0000 (15:43 +0100)]
io_uring: encapsulate rsrc node manipulations

io_rsrc_node_get() and io_rsrc_node_set() are always used together,
merge them into one so most users don't even see io_rsrc_node and don't
need to care about it.

It helped to catch io_sqe_files_register() inferring the rsrc data
argument for get and set differently -- not a problem, but a good sign.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/0827b080b2e61b3dec795380f7e1a1995595d41f.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: use rsrc prealloc infra for files reg
Pavel Begunkov [Thu, 1 Apr 2021 14:43:42 +0000 (15:43 +0100)]
io_uring: use rsrc prealloc infra for files reg

Keep it consistent with update and use io_rsrc_node_prealloc() +
io_rsrc_node_get() in io_sqe_files_register() as well. That will be used
in future patches, is not as error prone, and allows deduplicating the
rsrc_node init.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/cf87321e6be5e38f4dc7fe5079d2aa6945b1ace0.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: simplify io_rsrc_node_ref_zero
Pavel Begunkov [Thu, 1 Apr 2021 14:43:41 +0000 (15:43 +0100)]
io_uring: simplify io_rsrc_node_ref_zero

Replace queue_delayed_work() with mod_delayed_work() in
io_rsrc_node_ref_zero(), as the latter can also schedule a new work
item, and clean it up further for better readability.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/3b2b23e3a1ea4bbf789cd61815d33e05d9ff945e.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: name rsrc bits consistently
Pavel Begunkov [Thu, 1 Apr 2021 14:43:40 +0000 (15:43 +0100)]
io_uring: name rsrc bits consistently

Keep resource related structs' and functions' naming consistent, in
particular use "io_rsrc" prefix for everything.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/962f5acdf810f3a62831e65da3932cde24f6d9df.1617287883.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io-wq: cancel task_work on exit only targeting the current 'wq'
Jens Axboe [Fri, 2 Apr 2021 01:57:07 +0000 (19:57 -0600)]
io-wq: cancel task_work on exit only targeting the current 'wq'

By using task_work_cancel(), we're potentially canceling task_work
that isn't related to this specific io_wq. Use the newly added
task_work_cancel_match() to ensure that we only remove and cancel work
items that are specific to this io_wq.

Fixes: 685fe7feedb9 ("io-wq: eliminate the need for a manager thread")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
task_work: add helper for more targeted task_work canceling
Jens Axboe [Fri, 2 Apr 2021 01:53:29 +0000 (19:53 -0600)]
task_work: add helper for more targeted task_work canceling

The only exported helper we have right now is task_work_cancel(), which
cancels any task_work from a given task where func matches the queued
work item. This is a bit too coarse for some use cases. Add a
task_work_cancel_match() that allows more specific targeting of
individual work items, beyond purely the callback function used.

task_work_cancel() can be trivially implemented on top of that, hence do
so.
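
Roughly the resulting shape, sketched from the description above:

    static bool task_work_func_match(struct callback_head *cb, void *data)
    {
            return cb->func == data;
    }

    struct callback_head *task_work_cancel(struct task_struct *task,
                                           task_work_func_t func)
    {
            /* the old behaviour as a special case of the new helper */
            return task_work_cancel_match(task, task_work_func_match, func);
    }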

Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: fix race around poll update and poll triggering
Jens Axboe [Wed, 31 Mar 2021 15:03:03 +0000 (09:03 -0600)]
io_uring: fix race around poll update and poll triggering

Joakim reports that in some conditions he sees a multishot poll request
being canceled, and that it coincides with getting -EALREADY on
modification. As part of the poll update procedure, there's a small window
where the request is marked as canceled, and if this coincides with the
event actually triggering, then we can get a spurious -ECANCELED and
termination of the multishot request.

Don't mark the poll request as being canceled for update. We also don't
care if we race on removal unless it's a one-shot request, as we can
safely update for either case.

Fixes: b69de288e913 ("io_uring: allow events and user_data update of running poll requests")
Reported-by: Joakim Hassila <joj@mac.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: reg buffer overflow checks hardening
Pavel Begunkov [Wed, 24 Mar 2021 22:59:01 +0000 (22:59 +0000)]
io_uring: reg buffer overflow checks hardening

We are safe with overflows in io_sqe_buffer_register() because they will
just yield an alloc failure, but it's nicer to check explicitly.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/2b0625551be3d97b80a5fd21c8cd79dc1c91f0b5.1616624589.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: allow SQPOLL without CAP_SYS_ADMIN or CAP_SYS_NICE
Jens Axboe [Thu, 25 Mar 2021 16:21:35 +0000 (10:21 -0600)]
io_uring: allow SQPOLL without CAP_SYS_ADMIN or CAP_SYS_NICE

Now that any worker is attached to the original task as a thread,
accounting of CPU time is directly attributed to the original task as
well. This means that we no longer have to restrict SQPOLL to needing
elevated privileges, as it's really no different from just having the
task spawn a busy-looping thread in userspace.

Reported-by: Stefano Garzarella <sgarzare@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io-wq: eliminate the need for a manager thread
Jens Axboe [Mon, 8 Mar 2021 16:37:51 +0000 (09:37 -0700)]
io-wq: eliminate the need for a manager thread

io-wq relies on a manager thread to create/fork new workers, as needed.
But there's really no strong need for it anymore. We have the following
cases that fork a new worker:

1) Work queue. This is done from the task itself always, and it's trivial
   to create a worker off that path, if needed.

2) All workers have gone to sleep, and we have more work. This is called
   off the sched-out path. For this case, use a task_work item to queue
   a fork-worker operation.

3) Hashed work completion. Don't think we need to do anything off this
   case. If need be, it could just use approach 2 as well.

Part of this change is incrementing the running worker count before the
fork, to avoid cases where we observe that we need a worker and queue
creation of one, then new work comes in and we fork yet another one. That
last queue operation should have waited for the previous worker to come
up; it's quite possible we don't even need it. Hence move marking the
worker as running to before we fork it off, to more efficiently handle
that case.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
kernel: allow fork with TIF_NOTIFY_SIGNAL pending
Jens Axboe [Mon, 22 Mar 2021 15:39:12 +0000 (09:39 -0600)]
kernel: allow fork with TIF_NOTIFY_SIGNAL pending

fork() fails if signal_pending() is true, but there are two conditions
that can lead to that:

1) An actual signal is pending. We want fork to fail for that one, like
   we always have.

2) TIF_NOTIFY_SIGNAL is pending, because the task has pending task_work.
   We don't need to make it fail for that case.

Allow fork() to proceed if just task_work is pending, by changing the
signal_pending() check to task_sigpending().
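
A sketch of the change in copy_process(); the surrounding context here
is approximate, only the check itself is the point:

    recalc_sigpending();
    retval = -ERESTARTNOINTR;
    /* was: if (signal_pending(current)) -- which is also true with only
     * TIF_NOTIFY_SIGNAL (task_work) pending */
    if (task_sigpending(current))
            goto bad_fork_cancel_cgroup;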

Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: allow events and user_data update of running poll requests
Jens Axboe [Wed, 17 Mar 2021 14:37:41 +0000 (08:37 -0600)]
io_uring: allow events and user_data update of running poll requests

This adds two new POLL_ADD flags, IORING_POLL_UPDATE_EVENTS and
IORING_POLL_UPDATE_USER_DATA. As with the other POLL_ADD flag, these are
masked into sqe->len. If set, the POLL_ADD will have the following
behavior:

- sqe->addr must contain the user_data of the poll request that
  needs to be modified. This field is otherwise invalid for a POLL_ADD
  command.

- If IORING_POLL_UPDATE_EVENTS is set, sqe->poll_events must contain the
  new mask for the existing poll request. There are no checks for whether
  these are identical or not, if a matching poll request is found, then it
  is re-armed with the new mask.

- If IORING_POLL_UPDATE_USER_DATA is set, sqe->off must contain the new
  user_data for the existing poll request.

A POLL_ADD with any of these flags set may complete with any of the
following results:

1) 0, which means that we successfully found the existing poll request
   specified, and performed the re-arm procedure. Any error from that
   re-arm will be exposed as a completion event for that original poll
   request, not for the update request.
2) -ENOENT, if no existing poll request was found with the given
   user_data.
3) -EALREADY, if the existing poll request was already in the process of
   being removed/canceled/completing.
4) -EACCES, if an attempt was made to modify an internal poll request
   (e.g. not one originally issued as IORING_OP_POLL_ADD).

The usual -EINVAL cases apply as well, if any invalid fields are set
in the sqe for this command type.
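
A hedged userspace sketch of an update SQE, following the field layout
described above (the user_data values are made up; liburing plumbing is
omitted):

    struct io_uring_sqe sqe = { 0 };
    __u64 old_ud = 0x100, new_ud = 0x200;

    sqe.opcode      = IORING_OP_POLL_ADD;
    sqe.len         = IORING_POLL_UPDATE_EVENTS | IORING_POLL_UPDATE_USER_DATA;
    sqe.addr        = old_ud;       /* which poll request to modify */
    sqe.off         = new_ud;       /* its new user_data */
    sqe.poll_events = POLLIN;       /* its new event mask */
    sqe.user_data   = 0xdead;       /* completion for the update itself */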

Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: abstract out an io_poll_find_helper()
Jens Axboe [Wed, 17 Mar 2021 14:17:19 +0000 (08:17 -0600)]
io_uring: abstract out an io_poll_find_helper()

We'll need this helper for another purpose, for now just abstract it
out and have io_poll_cancel() use it for lookups.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: terminate multishot poll for CQ ring overflow
Jens Axboe [Tue, 23 Feb 2021 16:02:26 +0000 (09:02 -0700)]
io_uring: terminate multishot poll for CQ ring overflow

If we hit overflow and fail to allocate an overflow entry for the
completion, terminate the multishot poll mode.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: abstract out helper for removing poll waitqs/hashes
Jens Axboe [Tue, 23 Feb 2021 15:58:04 +0000 (08:58 -0700)]
io_uring: abstract out helper for removing poll waitqs/hashes

No functional changes in this patch, just preparation for killing
multishot poll on CQ overflow.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: add multishot mode for IORING_OP_POLL_ADD
Jens Axboe [Tue, 23 Feb 2021 05:08:01 +0000 (22:08 -0700)]
io_uring: add multishot mode for IORING_OP_POLL_ADD

The default io_uring poll mode is one-shot, where once the event triggers,
the poll command is completed and won't trigger any further events. If
we're doing repeated polling on the same file or socket, then it can be
more efficient to do multishot, where we keep triggering whenever the
event becomes true.

This deviates from the usual norm of having one CQE per SQE submitted. Add
a CQE flag, IORING_CQE_F_MORE, which tells the application to expect
further completion events from the submitted SQE. Right now the only user
of this is POLL_ADD in multishot mode.

Since sqe->poll_events is using the space that we normally use for adding
flags to commands, use sqe->len for the flag space for POLL_ADD. Multishot
mode is selected by setting IORING_POLL_ADD_MULTI in sqe->len. An
application should expect more CQEs for the specified SQE if the CQE is
flagged with IORING_CQE_F_MORE. In multishot mode, only cancelation or an
error will terminate the poll request, in which case the flag will be
cleared.
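
A hedged userspace sketch (sqe/cqe acquisition via liburing is omitted;
sockfd is an assumed file descriptor):

    /* arm a multishot poll */
    sqe->opcode      = IORING_OP_POLL_ADD;
    sqe->fd          = sockfd;
    sqe->poll_events = POLLIN;
    sqe->len         = IORING_POLL_ADD_MULTI;

    /* on each completion */
    if (!(cqe->flags & IORING_CQE_F_MORE)) {
            /* terminated (error/cancelation); re-arm if still needed */
    }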

Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: include cflags in completion trace event
Jens Axboe [Tue, 23 Feb 2021 05:05:00 +0000 (22:05 -0700)]
io_uring: include cflags in completion trace event

We should be including the completion flags for better introspection on
exactly what completion event was logged.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: allocate memory for overflowed CQEs
Pavel Begunkov [Tue, 23 Feb 2021 12:40:22 +0000 (12:40 +0000)]
io_uring: allocate memory for overflowed CQEs

Instead of using a request itself for overflowed-CQE stashing, allocate a
separate entry. The disadvantage is that the allocation may fail, and it
will be accounted as lost (see rings->cq_overflow), so we lose
reliability in case of memory pressure if the application is driving the
CQ ring into overflow. However, it opens a way for multiple CQEs per SQE
and even generating SQE-less CQEs.
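
A sketch of the new overflow path; the struct and GFP flags follow the
patch, the helper name and locking are illustrative:

    struct io_overflow_cqe {
            struct io_uring_cqe cqe;
            struct list_head list;
    };

    static bool io_stash_overflow_cqe(struct io_ring_ctx *ctx, u64 user_data,
                                      s32 res, u32 cflags)
    {
            struct io_overflow_cqe *ocqe;

            ocqe = kmalloc(sizeof(*ocqe), GFP_ATOMIC | __GFP_ACCOUNT);
            if (!ocqe)
                    return false;   /* accounted as lost in rings->cq_overflow */

            ocqe->cqe.user_data = user_data;
            ocqe->cqe.res = res;
            ocqe->cqe.flags = cflags;
            list_add_tail(&ocqe->list, &ctx->cq_overflow_list);
            return true;
    }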

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
[axboe: use GFP_ATOMIC | __GFP_ACCOUNT]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: mask in error/nval/hangup consistently for poll
Jens Axboe [Fri, 19 Mar 2021 20:06:24 +0000 (14:06 -0600)]
io_uring: mask in error/nval/hangup consistently for poll

Instead of masking these in as part of regular POLL_ADD prep, do it in
io_init_poll_iocb(), and include NVAL as that's generally unmaskable,
and RDHUP alongside the HUP that is already set.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: optimise rw complete error handling
Pavel Begunkov [Mon, 22 Mar 2021 01:58:34 +0000 (01:58 +0000)]
io_uring: optimise rw complete error handling

Expect read/write to succeed and create a hot path for this case, in
particular hide all error handling with resubmission under a single
check with the desired result.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: hide iter revert in resubmit_prep
Pavel Begunkov [Mon, 22 Mar 2021 01:58:33 +0000 (01:58 +0000)]
io_uring: hide iter revert in resubmit_prep

Move the iov_iter_revert() that resets the iterator in case of
-EIOCBQUEUED into io_resubmit_prep(), so we don't do the heavy revert in
the hot path; this also saves a couple of checks.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: don't alter iopoll reissue fail ret code
Pavel Begunkov [Mon, 22 Mar 2021 01:58:32 +0000 (01:58 +0000)]
io_uring: don't alter iopoll reissue fail ret code

When reissue_prep fails in io_complete_rw_iopoll(), we change the return
code to -EIO to prevent io_iopoll_complete() from doing resubmission.
Mark requests with a new flag (i.e. REQ_F_DONT_REISSUE) instead and
retain the original return value.

It also removes io_rw_reissue() from io_iopoll_complete(); that will be
used later.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: optimise kiocb_end_write for !ISREG
Pavel Begunkov [Mon, 22 Mar 2021 01:58:31 +0000 (01:58 +0000)]
io_uring: optimise kiocb_end_write for !ISREG

file_end_write() is only for regular files, so the function does a
couple of dereferences to get the inode and check for it. However, we
already have REQ_F_ISREG at hand, so just use it and inline
file_end_write().

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: kill unused REQ_F_NO_FILE_TABLE
Pavel Begunkov [Mon, 22 Mar 2021 01:58:30 +0000 (01:58 +0000)]
io_uring: kill unused REQ_F_NO_FILE_TABLE

current->files is always valid now, even for io-wq threads, so kill the
no-longer-used REQ_F_NO_FILE_TABLE.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: don't init req->work fully in advance
Pavel Begunkov [Mon, 22 Mar 2021 01:58:29 +0000 (01:58 +0000)]
io_uring: don't init req->work fully in advance

req->work is mostly unused unless it's punted, and io_init_req() is too
hot for fully initialising it. Fortunately, we can skip init of
work.next as it's controlled by io-wq, and we can avoid touching
work.flags by moving everything related into io_prep_async_work(). The
only field left is req->work.creds, but there is nothing that can be
done about it; keep maintaining it.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io-wq: refactor *_get_acct()
Pavel Begunkov [Mon, 22 Mar 2021 01:58:28 +0000 (01:58 +0000)]
io-wq: refactor *_get_acct()

Extract a helper for io_work_get_acct() and io_wqe_get_acct() to avoid
duplication.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: remove tctx->sqpoll
Pavel Begunkov [Mon, 22 Mar 2021 01:58:27 +0000 (01:58 +0000)]
io_uring: remove tctx->sqpoll

struct io_uring_task::sqpoll is not used anymore; kill it.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: don't do extra EXITING cancellations
Pavel Begunkov [Mon, 22 Mar 2021 01:58:25 +0000 (01:58 +0000)]
io_uring: don't do extra EXITING cancellations

io_match_task() matches all requests with PF_EXITING task, even though
those may be valid requests. It was necessary for SQPOLL cancellation,
but now it kills all requests before exiting via
io_uring_cancel_sqpoll(), so it's not needed.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: don't clear REQ_F_LINK_TIMEOUT
Pavel Begunkov [Mon, 22 Mar 2021 01:58:24 +0000 (01:58 +0000)]
io_uring: don't clear REQ_F_LINK_TIMEOUT

REQ_F_LINK_TIMEOUT is a hint to look for linked timeouts to cancel; we
leave it set even when the timeout has already fired. Hence don't bother
clearing it in io_kill_linked_timeout(); it's safe, and the function is
called only once.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: optimise io_req_task_work_add()
Pavel Begunkov [Fri, 19 Mar 2021 17:22:44 +0000 (17:22 +0000)]
io_uring: optimise io_req_task_work_add()

Inline io_task_work_add() into io_req_task_work_add(). They both work
with a request, so keeping them separate doesn't make things much
clearer, but merging allows optimising it. Apart from small wins like
not reading req->ctx or not calculating @notify in the hot path, i.e.
with tctx->task_state set, it avoids doing wake_up_process() for every
single add, doing it only after task_work_add() has actually been done.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: abolish old io_put_file()
Pavel Begunkov [Fri, 19 Mar 2021 17:22:43 +0000 (17:22 +0000)]
io_uring: abolish old io_put_file()

io_put_file() doesn't do a good job at generating good code. Inline it,
so we can check REQ_F_FIXED_FILE first, prioritising the FIXED_FILE case
over requests without files and saving a memory load in that case.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: optimise io_dismantle_req() fast path
Pavel Begunkov [Fri, 19 Mar 2021 17:22:42 +0000 (17:22 +0000)]
io_uring: optimise io_dismantle_req() fast path

Reshuffle io_dismantle_req() checks to put most of slow path stuff under
a single if.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: inline io_clean_op()'s fast path
Pavel Begunkov [Fri, 19 Mar 2021 17:22:41 +0000 (17:22 +0000)]
io_uring: inline io_clean_op()'s fast path

Inline io_clean_op(), leaving __io_clean_op() but renaming it. This will
be used in following patches.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: remove __io_req_task_cancel()
Pavel Begunkov [Fri, 19 Mar 2021 17:22:40 +0000 (17:22 +0000)]
io_uring: remove __io_req_task_cancel()

Both io_req_complete_failed() and __io_req_task_cancel() do the same
thing: set the failure flag, put both req refs and emit a CQE. The
former is a bit more advanced as it puts the req back into a req cache,
so make it take over __io_req_task_cancel() and remove the latter.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: add helper flushing locked_free_list
Pavel Begunkov [Fri, 19 Mar 2021 17:22:39 +0000 (17:22 +0000)]
io_uring: add helper flushing locked_free_list

Add a new helper io_flush_cached_locked_reqs() that splices
locked_free_list to free_list, doing all the required sync and invariant
reinit correctly.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: refactor io_free_req_deferred()
Pavel Begunkov [Fri, 19 Mar 2021 17:22:38 +0000 (17:22 +0000)]
io_uring: refactor io_free_req_deferred()

We don't care about ret value in io_free_req_deferred(), make the code a
bit more concise.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: inline io_put_req and friends
Pavel Begunkov [Fri, 19 Mar 2021 17:22:37 +0000 (17:22 +0000)]
io_uring: inline io_put_req and friends

One big omission is that io_put_req() hasn't been marked inline, and at
least gcc 9 doesn't inline it; not to mention that it's really hot and
an extra function call is intolerable, especially when it doesn't put a
final ref.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: refactor rsrc refnode allocation
Pavel Begunkov [Fri, 19 Mar 2021 17:22:36 +0000 (17:22 +0000)]
io_uring: refactor rsrc refnode allocation

There are two problems:
1) we always allocate refnodes in advance and free them if those
haven't been used. It's expensive, takes two allocations, where one of
them is percpu. And it may be pretty common to not actually use them.

2) The current API of allocating a refnode and setting some of the
fields is error prone -- we don't ever want to have a file node running
a fixed buffer callback...

Solve both with a pre-init/get API. Pre-init just leaves the node for
later if not used, and for get (i.e. io_rsrc_refnode_get()) you need to
explicitly pass all arguments setting callbacks/etc., so it's more
resilient.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: refactor io_flush_cached_reqs()
Pavel Begunkov [Fri, 19 Mar 2021 17:22:35 +0000 (17:22 +0000)]
io_uring: refactor io_flush_cached_reqs()

Emphasize that the return value of io_flush_cached_reqs() depends on the
number of requests in the cache. It looks nicer and might save tools
from false-negative analyses.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: optimise success case of __io_queue_sqe
Pavel Begunkov [Fri, 19 Mar 2021 17:22:34 +0000 (17:22 +0000)]
io_uring: optimise success case of __io_queue_sqe

Handle the case of a successfully issued request by doing that check
first. It's not much of a difference, just generates slightly better
code for me.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: inline __io_queue_linked_timeout()
Pavel Begunkov [Fri, 19 Mar 2021 17:22:33 +0000 (17:22 +0000)]
io_uring: inline __io_queue_linked_timeout()

Inline __io_queue_linked_timeout(); we don't need it.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: keep io_req_free_batch() call locality
Pavel Begunkov [Fri, 19 Mar 2021 17:22:32 +0000 (17:22 +0000)]
io_uring: keep io_req_free_batch() call locality

Don't do a function call (io_dismantle_req()) in the middle; place it
near the other function calls, as otherwise it may lead to excessive
register spilling.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: optimise tctx node checks/alloc
Pavel Begunkov [Fri, 19 Mar 2021 17:22:31 +0000 (17:22 +0000)]
io_uring: optimise tctx node checks/alloc

First of all, we need to set tctx->sqpoll only when we add a new entry
into ->xa, so move it from the hot path. Also extract a hot path for
io_uring_add_task_file() as an inline helper.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: optimise io_uring_enter()
Pavel Begunkov [Fri, 19 Mar 2021 17:22:30 +0000 (17:22 +0000)]
io_uring: optimise io_uring_enter()

Add unlikely annotations, because my compiler pretty much mispredicts
every first check, and apart from jumping around in the fast path, it
also generates extra instructions, like setting the ret value in
advance.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: don't take ctx refs in task_work handler
Pavel Begunkov [Fri, 19 Mar 2021 17:22:29 +0000 (17:22 +0000)]
io_uring: don't take ctx refs in task_work handler

__tctx_task_work() guarantees that ctx won't be killed while running
task_works, so we can remove the now-unnecessary ctx pinning for
internally armed polling.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: transform ret == 0 for poll cancelation completions
Jens Axboe [Tue, 23 Feb 2021 15:19:33 +0000 (08:19 -0700)]
io_uring: transform ret == 0 for poll cancelation completions

We can set canceled == true and complete out-of-line, ensure that we catch
that and correctly return -ECANCELED if the poll operation got canceled.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: correct comment on poll vs iopoll
Jens Axboe [Tue, 23 Feb 2021 15:18:36 +0000 (08:18 -0700)]
io_uring: correct comment on poll vs iopoll

The correct function is io_iopoll_complete(), which deals with completions
of IOPOLL requests, not io_poll_complete().

Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: cache async and regular file state for fixed files
Jens Axboe [Fri, 12 Mar 2021 15:30:14 +0000 (08:30 -0700)]
io_uring: cache async and regular file state for fixed files

We have to dig quite deep to check, in particular, whether or not a
file supports a fast-path nonblock attempt. For fixed files, we can do
this lookup once and cache the state instead.

This adds two new bits to track whether we support an async read/write
attempt, and lines up the REQ_F_ISREG bit with those two. The file slot
re-uses the last 3 (or 2, for 32-bit) bits of the file pointer to cache
that state, and then we mask it in when we go and use a fixed file.
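
A sketch of the pointer-tagging scheme; the FFS_* values follow the
description above, the helper is illustrative:

    #define FFS_ASYNC_READ  0x1UL
    #define FFS_ASYNC_WRITE 0x2UL
    #ifdef CONFIG_64BIT
    #define FFS_ISREG       0x4UL
    #else
    #define FFS_ISREG       0x0UL
    #endif
    #define FFS_MASK        ~(FFS_ASYNC_READ | FFS_ASYNC_WRITE | FFS_ISREG)

    /* a file slot stores the pointer and the cached bits together */
    static struct file *io_slot_file(unsigned long file_ptr,
                                     unsigned long *state)
    {
            *state = file_ptr & ~FFS_MASK;  /* cached capability bits */
            return (struct file *)(file_ptr & FFS_MASK);
    }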

Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: don't check for io_uring_fops for fixed files
Jens Axboe [Fri, 12 Mar 2021 15:27:05 +0000 (08:27 -0700)]
io_uring: don't check for io_uring_fops for fixed files

We don't allow them at registration time, so limit the check for needing
inflight tracking in io_file_get() to the non-fixed path.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: simplify io_sqd_update_thread_idle()
Pavel Begunkov [Wed, 10 Mar 2021 13:13:55 +0000 (13:13 +0000)]
io_uring: simplify io_sqd_update_thread_idle()

Use the more comprehensible max() instead of hand-coding it with ifs in
io_sqd_update_thread_idle().
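
Roughly what the simplified function looks like:

    static void io_sqd_update_thread_idle(struct io_sq_data *sqd)
    {
            struct io_ring_ctx *ctx;
            unsigned sq_thread_idle = 0;

            list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
                    sq_thread_idle = max(sq_thread_idle, ctx->sq_thread_idle);
            sqd->sq_thread_idle = sq_thread_idle;
    }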

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: switch to atomic_t for io_kiocb reference count
Jens Axboe [Wed, 24 Feb 2021 20:32:30 +0000 (13:32 -0700)]
io_uring: switch to atomic_t for io_kiocb reference count

io_uring manipulates references twice for each request, and hence is very
sensitive to performance of the reference count. This commit borrows a
trick from:

commit f958d7b528b1b40c44cfda5eabe2d82760d868c3
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Thu Apr 11 10:06:20 2019 -0700

    mm: make page ref count overflow check tighter and more explicit

and switches to atomic_t for references, while still retaining overflow
and underflow checks.

This is good for a 2-3% increase in peak IOPS on a single core. Before:

IOPS=2970879, IOS/call=31/31, inflight=128 (128)
IOPS=2952597, IOS/call=31/31, inflight=128 (128)
IOPS=2943904, IOS/call=31/31, inflight=128 (128)
IOPS=2930006, IOS/call=31/31, inflight=96 (96)

and after:

IOPS=3054354, IOS/call=31/31, inflight=128 (128)
IOPS=3059038, IOS/call=31/31, inflight=128 (128)
IOPS=3060320, IOS/call=31/31, inflight=128 (128)
IOPS=3068256, IOS/call=31/31, inflight=96 (96)
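
The borrowed trick as applied here, sketched (the macro follows the
page-ref pattern; the helper bodies are approximate):

    /* true if refs are zero or have run far enough to risk overflow */
    #define req_ref_zero_or_close_to_overflow(req)  \
            ((unsigned int) atomic_read(&(req->refs)) + 127u <= 127u)

    static inline void req_ref_get(struct io_kiocb *req)
    {
            WARN_ON_ONCE(req_ref_zero_or_close_to_overflow(req));
            atomic_inc(&req->refs);
    }

    static inline bool req_ref_put_and_test(struct io_kiocb *req)
    {
            WARN_ON_ONCE(req_ref_zero_or_close_to_overflow(req));
            return atomic_dec_and_test(&req->refs);
    }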

Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: wrap io_kiocb reference count manipulation in helpers
Jens Axboe [Wed, 24 Feb 2021 20:28:27 +0000 (13:28 -0700)]
io_uring: wrap io_kiocb reference count manipulation in helpers

No functional changes in this patch, just in preparation for handling the
references a bit more efficiently.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: simplify io_resubmit_prep()
Pavel Begunkov [Sun, 28 Feb 2021 22:35:20 +0000 (22:35 +0000)]
io_uring: simplify io_resubmit_prep()

If not for the async_data NULL check, io_resubmit_prep() is already an
rw-specific version of io_req_prep_async(), but slower because 1) it
always goes through io_import_iovec() even if io_setup_async_rw()
follows with the result, and 2) instead of initialising iovec/iter in
place it does it on-stack and then copies with io_setup_async_rw().

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: merge defer_prep() and prep_async()
Pavel Begunkov [Sun, 28 Feb 2021 22:35:19 +0000 (22:35 +0000)]
io_uring: merge defer_prep() and prep_async()

Merge the two functions and rename in favour of the second one; it
relays the meaning better.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: rethink def->needs_async_data
Pavel Begunkov [Sun, 28 Feb 2021 22:35:18 +0000 (22:35 +0000)]
io_uring: rethink def->needs_async_data

needs_async_data controls allocation of async_data, and is used in two
cases: 1) when async setup requires it (by io_req_prep_async() or the
handlers themselves), and 2) when an op always needs additional space to
operate, like timeouts do.

Opcode preps already don't bother about the second case and do the
allocation unconditionally; restrict needs_async_data to the first case
only and rename it to needs_async_setup.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
[axboe: update for IOPOLL fix]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: untie alloc_async_data and needs_async_data
Pavel Begunkov [Sun, 28 Feb 2021 22:35:17 +0000 (22:35 +0000)]
io_uring: untie alloc_async_data and needs_async_data

All opcode handlers pretty well know whether they need async data or
not, and can skip testing for needs_async_data. The exception is the
generic rw path, but that tests the flag by hand anyway. So, check the
flag there and make io_alloc_async_data() allocate unconditionally.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: refactor out send/recv async setup
Pavel Begunkov [Sun, 28 Feb 2021 22:35:16 +0000 (22:35 +0000)]
io_uring: refactor out send/recv async setup

IORING_OP_[SEND,RECV] don't need async setup, nor will they get into
io_req_prep_async(). Remove them from io_req_prep_async() and remove the
needs_async_data checks from the related setup functions.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: use better types for cflags
Pavel Begunkov [Sun, 28 Feb 2021 22:35:15 +0000 (22:35 +0000)]
io_uring: use better types for cflags

__io_cqring_fill_event() takes cflags as a long to squeeze it into a u32
in a CQE, while all users pass int or unsigned. Replace it with unsigned
int and store it as u32 in struct io_completion to match the CQE.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: refactor provide/remove buffer locking
Pavel Begunkov [Sun, 28 Feb 2021 22:35:13 +0000 (22:35 +0000)]
io_uring: refactor provide/remove buffer locking

Always complete the request holding the mutex instead of doing that
strange dance with conditional ordering.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: add a helper for failing not-issued requests
Pavel Begunkov [Sun, 28 Feb 2021 22:35:12 +0000 (22:35 +0000)]
io_uring: add a helper for failing not-issued requests

Add a simple helper doing CQE posting, marking request for link-failure,
and putting both submission and completion references.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: further deduplicate file slot selection
Pavel Begunkov [Sun, 28 Feb 2021 22:35:11 +0000 (22:35 +0000)]
io_uring: further deduplicate file slot selection

io_fixed_file_slot() and io_file_from_index() behave pretty similarly;
DRY them up and call one from the other.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: reuse io_req_task_queue_fail()
Pavel Begunkov [Sun, 28 Feb 2021 22:35:10 +0000 (22:35 +0000)]
io_uring: reuse io_req_task_queue_fail()

Use io_req_task_queue_fail() on the fail path of io_req_task_queue().
It's unlikely to happen, so we don't care about the additional overhead,
but it allows keeping all the req->result invariants in a single
function.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring: avoid taking ctx refs for task-cancel
Pavel Begunkov [Sun, 28 Feb 2021 22:35:09 +0000 (22:35 +0000)]
io_uring: avoid taking ctx refs for task-cancel

Don't bother to take ctx->refs for io_req_task_cancel(), because it
takes the uring_lock before putting a request, and the context is
promised to stay alive until the unlock happens.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Linux 5.12-rc7
Linus Torvalds [Sun, 11 Apr 2021 22:16:13 +0000 (15:16 -0700)]
Linux 5.12-rc7

Merge tag 'for-5.12-rc6-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave...
Linus Torvalds [Sun, 11 Apr 2021 18:53:36 +0000 (11:53 -0700)]
Merge tag 'for-5.12-rc6-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux

Pull btrfs fix from David Sterba:
 "One more patch that we'd like to get to 5.12 before release.

  It's changing where and how the superblock is stored in the zoned
  mode. It is an on-disk format change but so far there are no
  implications for users as the proper mkfs support hasn't been merged
  and is waiting for the kernel side to settle.

  Until now, the superblocks were derived from the zone index, but zone
  size can differ per device. This is changed to be based on fixed
  offset values, to make it independent of the device zone size.

  The work on that got a bit delayed, we discussed the exact locations
  to support potential device sizes and usecases. (Partially delayed
  also due to my vacation.) Having that in the same release where the
  zoned mode is declared usable is highly desired, there are userspace
  projects that need to be updated to recognize the feature. Pushing
  that to the next release would make things harder to test"

* tag 'for-5.12-rc6-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
  btrfs: zoned: move superblock logging zone location

Merge tag 'locking-urgent-2021-04-11' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 11 Apr 2021 18:47:03 +0000 (11:47 -0700)]
Merge tag 'locking-urgent-2021-04-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull locking fixlets from Ingo Molnar:
 "Two minor fixes: one for a Clang warning, the other improves an
  ambiguous/confusing kernel log message"

* tag 'locking-urgent-2021-04-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  lockdep: Address clang -Wformat warning printing for %hd
  lockdep: Add a missing initialization hint to the "INFO: Trying to register non-static key" message

Merge tag 'x86_urgent_for_v5.12-rc7' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 11 Apr 2021 18:42:18 +0000 (11:42 -0700)]
Merge tag 'x86_urgent_for_v5.12-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 fixes from Borislav Petkov:

 - Fix the vDSO exception handling return path to disable interrupts
   again.

 - A fix for the CE collector to return the proper return values to its
   callers which are used to convey what the collector has done with the
   error address.

* tag 'x86_urgent_for_v5.12-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/traps: Correct exc_general_protection() and math_error() return paths
  RAS/CEC: Correct ce_add_elem()'s returned values

Merge branch 'for-5.12-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/dennis...
Linus Torvalds [Sat, 10 Apr 2021 19:51:12 +0000 (12:51 -0700)]
Merge branch 'for-5.12-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/dennis/percpu

Pull percpu fix from Dennis Zhou:
 "This contains a fix for sporadically failing atomic percpu
  allocations.

  I only caught it recently while I was reviewing a new series [1] and
  simultaneously saw reports by btrfs in xfstests [2] and [3].

  In v5.9, memcg accounting was extended to percpu done by adding a
  second type of chunk. I missed an interaction with the free page float
  count used to ensure we can support atomic allocations. If one type of
  chunk has no free pages, but the other has enough to satisfy the free
  page float requirement, we will not repopulate the free pages for the
  former type of chunk. This led to the sporadically failing atomic
  allocations"

Link: https://lore.kernel.org/linux-mm/20210324190626.564297-1-guro@fb.com/
Link: https://lore.kernel.org/linux-mm/20210401185158.3275.409509F4@e16-tech.com/
Link: https://lore.kernel.org/linux-mm/CAL3q7H5RNBjCi708GH7jnczAOe0BLnacT9C+OBgA-Dx9jhB6SQ@mail.gmail.com/

* 'for-5.12-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/dennis/percpu:
  percpu: make pcpu_nr_empty_pop_pages per chunk type

Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Linus Torvalds [Sat, 10 Apr 2021 19:29:19 +0000 (12:29 -0700)]
Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI fixes from James Bottomley:
 "Seven fixes, all in drivers.

  The hpsa three are the most extensive and the most problematic: it's a
  packed structure misalignment that oopses on ia64 but looks like it
  would also oops on quite a few non-x86 architectures.

  The pm80xx is a regression and the rest are bug fixes for patches in
  the misc tree"

* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
  scsi: scsi_transport_srp: Don't block target in SRP_PORT_LOST state
  scsi: target: iscsi: Fix zero tag inside a trace event
  scsi: pm80xx: Fix chip initialization failure
  scsi: ufs: core: Fix wrong Task Tag used in task management request UPIUs
  scsi: ufs: core: Fix task management request completion timeout
  scsi: hpsa: Add an assert to prevent __packed reintroduction
  scsi: hpsa: Fix boot on ia64 (atomic_t alignment)
  scsi: hpsa: Use __packed on individual structs, not header-wide

Merge tag 'powerpc-5.12-6' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc...
Linus Torvalds [Sat, 10 Apr 2021 16:31:52 +0000 (09:31 -0700)]
Merge tag 'powerpc-5.12-6' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc fixes from Michael Ellerman:
 "Some some more powerpc fixes for 5.12:

   - Fix an oops triggered by ptrace when CONFIG_PPC_FPU_REGS=n

   - Fix an oops on sigreturn when the VDSO is unmapped on 32-bit

   - Fix vdso_wrapper.o not being rebuilt everytime vdso.so is rebuilt

  Thanks to Christophe Leroy"

* tag 'powerpc-5.12-6' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  powerpc/vdso: Make sure vdso_wrapper.o is rebuilt everytime vdso.so is rebuilt
  powerpc/signal32: Fix Oops on sigreturn with unmapped VDSO
  powerpc/ptrace: Don't return error when getting/setting FP regs without CONFIG_PPC_FPU_REGS