  1. Mar 05, 2021
    • io_uring: make SQPOLL thread parking saner · 86e0d676
      Jens Axboe authored
      
      
      We have this weird true/false return from parking, and then some of the
      callers decide to look at that. It can lead to unbalanced parks and
      sqd locking. Have the callers check the thread status once it's parked.
      We know we have the lock at that point, so it's either valid or it's NULL.
      
      Fix a race with parking on thread exit. We need to be careful here with
      the ordering of the sqd->lock and the IO_SQ_THREAD_SHOULD_PARK bit.
      
      Rename sqd->completion to sqd->parked to reflect that this is the only
      thing this completion event does.
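
      A minimal sketch of the resulting calling convention (not the actual
      patch; io_sq_thread_stop() stands in for whatever the caller does once
      it knows the thread is alive):

      /* park no longer returns a bool; the caller checks sqd->thread */
      static void example_stop_sq_thread(struct io_sq_data *sqd)
      {
              io_sq_thread_park(sqd);
              /* we hold sqd->lock here: sqd->thread is valid or NULL */
              if (sqd->thread)
                      io_sq_thread_stop(sqd);
              io_sq_thread_unpark(sqd);
      }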
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io-wq: kill hashed waitqueue before manager exits · 09ca6c40
      Jens Axboe authored
      If we race with shutting down the io-wq context and someone queueing
      a hashed entry, then we can exit the manager with it armed. If it then
      triggers after the manager has exited, we can have a use-after-free where
      io_wqe_hash_wake() attempts to wake a now gone manager process.
      
      Move the killing of the hashed waitqueue into the manager itself, so
      that we know we've killed it before the task exits.
      
      Fixes: e941894e ("io-wq: make buffered file write hashed work map per-ctx")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: clear IOCB_WAITQ for non -EIOCBQUEUED return · b5b0ecb7
      Jens Axboe authored
      
      
      The callback can only be armed if we get -EIOCBQUEUED returned. It's
      important that we clear the WAITQ bit for other cases; otherwise we can
      queue for async retry and filemap will assume that we're armed and
      return -EAGAIN instead of just blocking for the IO.
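
      A hedged sketch of the shape of the fix; example_complete_rw() is a
      made-up name, and only the flag handling is the point:

      /* assumed context: the kiocb was submitted with IOCB_WAITQ set */
      static void example_complete_rw(struct kiocb *kiocb, ssize_t ret)
      {
              if (ret != -EIOCBQUEUED) {
                      /* callback not armed: don't let filemap think it is */
                      kiocb->ki_flags &= ~IOCB_WAITQ;
              }
      }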
      
      Cc: stable@vger.kernel.org # 5.9+
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: don't keep looping for more events if we can't flush overflow · ca0a2651
      Jens Axboe authored
      
      
      It doesn't make sense to wait for more events to come in if we can't
      even flush the overflow we already have to the ring. Return -EBUSY for
      that condition, just like we do for attempts to submit with overflow
      pending.
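
      A sketch of the guard, assuming (as in this era of fs/io_uring.c) that
      the flush helper reports whether the overflow list was fully drained:

      static int example_wait_prep(struct io_ring_ctx *ctx)
      {
              /*
               * If we can't flush what's already overflowed, waiting for
               * more events is pointless: answer like submission does.
               */
              if (!io_cqring_overflow_flush(ctx, false, NULL, NULL))
                      return -EBUSY;
              return 0;       /* proceed into the normal wait loop */
      }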
      
      Cc: stable@vger.kernel.org # 5.11
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: move to using create_io_thread() · 46fe18b1
      Jens Axboe authored
      
      
      This allows us to do task creation and setup without needing to use
      completions to try and synchronize with the starting thread. Get rid of
      the old io_wq_fork_thread() wrapper, and the 'wq' and 'worker' startup
      completion events - we can now do setup before the task is running.
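
      A sketch of the startup flow this enables; example_start() is an
      illustrative name:

      static struct task_struct *example_start(int (*fn)(void *), void *data)
      {
              struct task_struct *tsk;

              tsk = create_io_thread(fn, data, NUMA_NO_NODE);
              if (IS_ERR(tsk))
                      return tsk;
              /* tsk is not running yet: do all setup here, race-free */
              wake_up_new_task(tsk);          /* now let it run */
              return tsk;
      }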
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • kernel: provide create_io_thread() helper · cc440e87
      Jens Axboe authored
      
      
      Provide a generic helper for setting up an io_uring worker. Returns a
      task_struct so that the caller can do whatever setup is needed, then call
      wake_up_new_task() to kick it into gear.
      
      Add a kernel_clone_args member, io_thread, which tells copy_process() to
      mark the task with PF_IO_WORKER.
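
      A simplified sketch of such a helper (the real clone-flag mangling is
      more involved); the kernel_clone_args fields are as described above:

      struct task_struct *create_io_thread(int (*fn)(void *), void *arg,
                                           int node)
      {
              struct kernel_clone_args args = {
                      .flags          = CLONE_VM | CLONE_FS | CLONE_FILES |
                                        CLONE_SIGHAND | CLONE_THREAD |
                                        CLONE_IO | CLONE_UNTRACED,
                      .exit_signal    = 0,
                      .stack          = (unsigned long)fn,  /* entry point */
                      .stack_size     = (unsigned long)arg, /* its argument */
                      .io_thread      = 1,  /* copy_process() sets PF_IO_WORKER */
              };

              return copy_process(NULL, 0, node, &args);
      }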
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: reliably cancel linked timeouts · dd59a3d5
      Pavel Begunkov authored
      
      
      Linked timeouts are fired asynchronously (i.e. from soft-irq), and use
      the generic cancellation paths to do their work, including poking into
      io-wq. The problem is that it's racy to access tctx->io_wq, as
      io_uring_task_cancel() and others may be happening at this exact moment.
      Mark linked timeouts with REQ_F_INFLIGHT for now, making sure there are
      no such timeouts left before io-wq destruction.
      
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: cancel-match based on flags · b05a1bcd
      Pavel Begunkov authored
      
      
      Instead of going into request internals, like checking req->file->f_op,
      match requests based on REQ_F_INFLIGHT; it's set only when we want a
      request to be reliably cancelled.
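
      In sketch form, the match predicate reduces to roughly this (the helper
      name is illustrative):

      static bool example_match_for_cancel(struct io_kiocb *req,
                                           struct task_struct *task)
      {
              if (task && req->task != task)
                      return false;
              /* only requests flagged for reliable cancellation match */
              return req->flags & REQ_F_INFLIGHT;
      }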
      
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  2. Mar 04, 2021
    • io-wq: ensure all pending work is canceled on exit · f0127254
      Jens Axboe authored
      
      
      If we race on shutting down the io-wq, then we should ensure that any
      work queued after the workers were shut down is canceled. Harden the
      add-work check a bit too, checking for IO_WQ_BIT_EXIT and canceling if
      it's set.
      
      Add a WARN_ON() for having any work before we kill the io-wq context.
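
      The hardened add-work check then looks roughly like this (sketch;
      io_run_cancel() is the existing cancel path in io-wq.c):

      static void example_wqe_enqueue(struct io_wqe *wqe,
                                      struct io_wq_work *work)
      {
              /* exiting, or work already marked for cancel: don't queue */
              if (test_bit(IO_WQ_BIT_EXIT, &wqe->wq->state) ||
                  (work->flags & IO_WQ_WORK_CANCEL)) {
                      io_run_cancel(work, wqe);
                      return;
              }
              /* normal insertion into the pending work list follows */
      }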
      
      Reported-by: syzbot+91b4b56ead187d35c9d3@syzkaller.appspotmail.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: ensure that threads freeze on suspend · e4b4a13f
      Jens Axboe authored
      
      
      Alex reports that his system fails to suspend using 5.12-rc1, with the
      following dump:
      
      [  240.650300] PM: suspend entry (deep)
      [  240.650748] Filesystems sync: 0.000 seconds
      [  240.725605] Freezing user space processes ...
      [  260.739483] Freezing of tasks failed after 20.013 seconds (3 tasks refusing to freeze, wq_busy=0):
      [  260.739497] task:iou-mgr-446     state:S stack:    0 pid:  516 ppid:   439 flags:0x00004224
      [  260.739504] Call Trace:
      [  260.739507]  ? sysvec_apic_timer_interrupt+0xb/0x81
      [  260.739515]  ? pick_next_task_fair+0x197/0x1cde
      [  260.739519]  ? sysvec_reschedule_ipi+0x2f/0x6a
      [  260.739522]  ? asm_sysvec_reschedule_ipi+0x12/0x20
      [  260.739525]  ? __schedule+0x57/0x6d6
      [  260.739529]  ? del_timer_sync+0xb9/0x115
      [  260.739533]  ? schedule+0x63/0xd5
      [  260.739536]  ? schedule_timeout+0x219/0x356
      [  260.739540]  ? __next_timer_interrupt+0xf1/0xf1
      [  260.739544]  ? io_wq_manager+0x73/0xb1
      [  260.739549]  ? io_wq_create+0x262/0x262
      [  260.739553]  ? ret_from_fork+0x22/0x30
      [  260.739557] task:iou-mgr-517     state:S stack:    0 pid:  522 ppid:   439 flags:0x00004224
      [  260.739561] Call Trace:
      [  260.739563]  ? sysvec_apic_timer_interrupt+0xb/0x81
      [  260.739566]  ? pick_next_task_fair+0x16f/0x1cde
      [  260.739569]  ? sysvec_apic_timer_interrupt+0xb/0x81
      [  260.739571]  ? asm_sysvec_apic_timer_interrupt+0x12/0x20
      [  260.739574]  ? __schedule+0x5b7/0x6d6
      [  260.739578]  ? del_timer_sync+0x70/0x115
      [  260.739581]  ? schedule_timeout+0x211/0x356
      [  260.739585]  ? __next_timer_interrupt+0xf1/0xf1
      [  260.739588]  ? io_wq_check_workers+0x15/0x11f
      [  260.739592]  ? io_wq_manager+0x69/0xb1
      [  260.739596]  ? io_wq_create+0x262/0x262
      [  260.739600]  ? ret_from_fork+0x22/0x30
      [  260.739603] task:iou-wrk-517     state:S stack:    0 pid:  523 ppid:   439 flags:0x00004224
      [  260.739607] Call Trace:
      [  260.739609]  ? __schedule+0x5b7/0x6d6
      [  260.739614]  ? schedule+0x63/0xd5
      [  260.739617]  ? schedule_timeout+0x219/0x356
      [  260.739621]  ? __next_timer_interrupt+0xf1/0xf1
      [  260.739624]  ? task_thread.isra.0+0x148/0x3af
      [  260.739628]  ? task_thread_unbound+0xa/0xa
      [  260.739632]  ? task_thread_bound+0x7/0x7
      [  260.739636]  ? ret_from_fork+0x22/0x30
      [  260.739647] OOM killer enabled.
      [  260.739648] Restarting tasks ... done.
      [  260.740077] PM: suspend exit
      
      Play nice and ensure that any thread we create will call try_to_freeze()
      at an opportune time so that memory suspend can proceed. For the io-wq
      worker threads, mark them as PF_NOFREEZE, as they could potentially be
      blocked for a long time.
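
      Sketched out, the two halves of the fix look like this (the loop shape
      is illustrative; try_to_freeze() and PF_NOFREEZE are the real kernel
      API):

      /* manager/SQPOLL style threads: check in with the freezer when idle */
      static void example_idle_wait(void)
      {
              set_current_state(TASK_INTERRUPTIBLE);
              schedule_timeout(HZ);
              try_to_freeze();        /* lets suspend make progress */
      }

      /* io-wq workers: opt out, they may block in IO for a long time */
      static void example_worker_setup(void)
      {
              current->flags |= PF_NOFREEZE;
      }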
      
      Reported-by: Alex Xu (Hello71) <alex_y_xu@yahoo.ca>
      Tested-by: Alex Xu (Hello71) <alex_y_xu@yahoo.ca>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: remove extra in_idle wake up · b23fcf47
      Pavel Begunkov authored
      
      
      io_dismantle_req() is always followed by io_put_task(), which already
      does the proper in_idle wake ups, so we can skip waking the owner task
      in io_dismantle_req(). The rules are simpler now: do io_put_task()
      shortly after ending a request, and it will be fine.
      
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: inline __io_queue_async_work() · ebf93667
      Pavel Begunkov authored
      
      
      __io_queue_async_work() is only called from io_queue_async_work();
      inline it.
      
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: inline io_req_clean_work() · f85c310a
      Pavel Begunkov authored
      
      
      Inline io_req_clean_work(); it's less code and makes it easier to
      analyse tctx dependencies and ref usage.
      
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: choose right tctx->io_wq for try cancel · 64c72123
      Pavel Begunkov authored
      
      
      When we cancel SQPOLL, @task in io_uring_try_cancel_requests() will
      differ from current. Use the right tctx from the passed-in @task, and
      don't forget that it can be NULL when the io_uring ctx exits.
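
      In sketch form, the lookup becomes (hypothetical helper name):

      static struct io_wq *example_io_wq_to_cancel(struct task_struct *task)
      {
              /* use the tctx of the task being cancelled, not current's */
              struct io_uring_task *tctx = task ? task->io_uring : NULL;

              return tctx ? tctx->io_wq : NULL; /* NULL while the ctx exits */
      }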
      
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: fix -EAGAIN retry with IOPOLL · 3e6a0d3c
      Jens Axboe authored
      We no longer revert the iovec on -EIOCBQUEUED, see commit ab2125df, and
      this started causing issues for IOPOLL on devices that run out of
      request slots. Turns out that outside of needing a revert for those, we
      also had a bug where we didn't properly set up retry inside the
      submission path. That could cause re-import of the iovec, if any, and
      that could lead to spurious results if the application had those
      allocated on the stack.
      
      Catch -EAGAIN retry and make the iovec stable for IOPOLL, just like we do
      for !IOPOLL retries.
      
      Cc: <stable@vger.kernel.org> # 5.9+
      Reported-by: Abaci Robot <abaci@linux.alibaba.com>
      Reported-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io-wq: fix error path leak of buffered write hash map · dc7bbc9e
      Jens Axboe authored
      The 'err' path should include the hash put; we already grabbed a
      reference once we get that far.
      
      Fixes: e941894e ("io-wq: make buffered file write hashed work map per-ctx")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: remove sqo_task · 16270893
      Pavel Begunkov authored
      
      
      Now sqo_task is used only for a warning that is not interesting anymore
      since sqo_dead is gone; remove all of that, including ctx->sqo_task.
      
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: kill sqo_dead and sqo submission halting · 70aacfe6
      Pavel Begunkov authored
      
      
      As the SQPOLL task doesn't poke into ->sqo_task anymore, there is no
      need to kill the sqo when the master task exits. Before, it was
      necessary to avoid races between accessing sqo_task->files and removing
      them.
      
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      [axboe: don't forget to enable SQPOLL before exit, if started disabled]
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: ignore double poll add on the same waitqueue head · 1c3b3e65
      Jens Axboe authored
      syzbot reports a deadlock, attempting to lock the same spinlock twice:
      
      ============================================
      WARNING: possible recursive locking detected
      5.11.0-syzkaller #0 Not tainted
      --------------------------------------------
      swapper/1/0 is trying to acquire lock:
      ffff88801b2b1130 (&runtime->sleep){..-.}-{2:2}, at: spin_lock include/linux/spinlock.h:354 [inline]
      ffff88801b2b1130 (&runtime->sleep){..-.}-{2:2}, at: io_poll_double_wake+0x25f/0x6a0 fs/io_uring.c:4960
      
      but task is already holding lock:
      ffff88801b2b3130 (&runtime->sleep){..-.}-{2:2}, at: __wake_up_common_lock+0xb4/0x130 kernel/sched/wait.c:137
      
      other info that might help us debug this:
       Possible unsafe locking scenario:
      
             CPU0
             ----
        lock(&runtime->sleep);
        lock(&runtime->sleep);
      
       *** DEADLOCK ***
      
       May be due to missing lock nesting notation
      
      2 locks held by swapper/1/0:
       #0: ffff888147474908 (&group->lock){..-.}-{2:2}, at: _snd_pcm_stream_lock_irqsave+0x9f/0xd0 sound/core/pcm_native.c:170
       #1: ffff88801b2b3130 (&runtime->sleep){..-.}-{2:2}, at: __wake_up_common_lock+0xb4/0x130 kernel/sched/wait.c:137
      
      stack backtrace:
      CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.11.0-syzkaller #0
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
      Call Trace:
       <IRQ>
       __dump_stack lib/dump_stack.c:79 [inline]
       dump_stack+0xfa/0x151 lib/dump_stack.c:120
       print_deadlock_bug kernel/locking/lockdep.c:2829 [inline]
       check_deadlock kernel/locking/lockdep.c:2872 [inline]
       validate_chain kernel/locking/lockdep.c:3661 [inline]
       __lock_acquire.cold+0x14c/0x3b4 kernel/locking/lockdep.c:4900
       lock_acquire kernel/locking/lockdep.c:5510 [inline]
       lock_acquire+0x1ab/0x730 kernel/locking/lockdep.c:5475
       __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
       _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
       spin_lock include/linux/spinlock.h:354 [inline]
       io_poll_double_wake+0x25f/0x6a0 fs/io_uring.c:4960
       __wake_up_common+0x147/0x650 kernel/sched/wait.c:108
       __wake_up_common_lock+0xd0/0x130 kernel/sched/wait.c:138
       snd_pcm_update_state+0x46a/0x540 sound/core/pcm_lib.c:203
       snd_pcm_update_hw_ptr0+0xa75/0x1a50 sound/core/pcm_lib.c:464
       snd_pcm_period_elapsed+0x160/0x250 sound/core/pcm_lib.c:1805
       dummy_hrtimer_callback+0x94/0x1b0 sound/drivers/dummy.c:378
       __run_hrtimer kernel/time/hrtimer.c:1519 [inline]
       __hrtimer_run_queues+0x609/0xe40 kernel/time/hrtimer.c:1583
       hrtimer_run_softirq+0x17b/0x360 kernel/time/hrtimer.c:1600
       __do_softirq+0x29b/0x9f6 kernel/softirq.c:345
       invoke_softirq kernel/softirq.c:221 [inline]
       __irq_exit_rcu kernel/softirq.c:422 [inline]
       irq_exit_rcu+0x134/0x200 kernel/softirq.c:434
       sysvec_apic_timer_interrupt+0x93/0xc0 arch/x86/kernel/apic/apic.c:1100
       </IRQ>
       asm_sysvec_apic_timer_interrupt+0x12/0x20 arch/x86/include/asm/idtentry.h:632
      RIP: 0010:native_save_fl arch/x86/include/asm/irqflags.h:29 [inline]
      RIP: 0010:arch_local_save_flags arch/x86/include/asm/irqflags.h:70 [inline]
      RIP: 0010:arch_irqs_disabled arch/x86/include/asm/irqflags.h:137 [inline]
      RIP: 0010:acpi_safe_halt drivers/acpi/processor_idle.c:111 [inline]
      RIP: 0010:acpi_idle_do_entry+0x1c9/0x250 drivers/acpi/processor_idle.c:516
      Code: dd 38 6e f8 84 db 75 ac e8 54 32 6e f8 e8 0f 1c 74 f8 e9 0c 00 00 00 e8 45 32 6e f8 0f 00 2d 4e 4a c5 00 e8 39 32 6e f8 fb f4 <9c> 5b 81 e3 00 02 00 00 fa 31 ff 48 89 de e8 14 3a 6e f8 48 85 db
      RSP: 0018:ffffc90000d47d18 EFLAGS: 00000293
      RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
      RDX: ffff8880115c3780 RSI: ffffffff89052537 RDI: 0000000000000000
      RBP: ffff888141127064 R08: 0000000000000001 R09: 0000000000000001
      R10: ffffffff81794168 R11: 0000000000000000 R12: 0000000000000001
      R13: ffff888141127000 R14: ffff888141127064 R15: ffff888143331804
       acpi_idle_enter+0x361/0x500 drivers/acpi/processor_idle.c:647
       cpuidle_enter_state+0x1b1/0xc80 drivers/cpuidle/cpuidle.c:237
       cpuidle_enter+0x4a/0xa0 drivers/cpuidle/cpuidle.c:351
       call_cpuidle kernel/sched/idle.c:158 [inline]
       cpuidle_idle_call kernel/sched/idle.c:239 [inline]
       do_idle+0x3e1/0x590 kernel/sched/idle.c:300
       cpu_startup_entry+0x14/0x20 kernel/sched/idle.c:397
       start_secondary+0x274/0x350 arch/x86/kernel/smpboot.c:272
       secondary_startup_64_no_verify+0xb0/0xbb
      
      which is due to the driver doing poll_wait() twice on the same
      wait_queue_head. That is perfectly valid, but from checking the rest
      of the kernel tree, it's the only driver that does this.

      We can handle this just fine; we just need to ignore the second
      addition, as we'll get woken just fine by the first one.
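
      A sketch of the guard; the real check sits in the double-poll
      queue-proc path in fs/io_uring.c, but the idea is simply:

      static bool example_add_second_entry(struct io_poll_iocb *first,
                                           struct wait_queue_head *head)
      {
              /* same head twice: the first entry already wakes us */
              if (first->head == head)
                      return false;
              return true;
      }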
      
      Cc: stable@vger.kernel.org # 5.8+
      Fixes: 18bceab1 ("io_uring: allow POLL_ADD with double poll_wait() users")
      Reported-by: syzbot+28abd693db9e92c160d8@syzkaller.appspotmail.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: ensure that SQPOLL thread is started for exit · 3ebba796
      Jens Axboe authored
      
      
      If we create it in a disabled state because IORING_SETUP_R_DISABLED is
      set on ring creation, we need to ensure that we've kicked the thread if
      we're exiting before it's been explicitly enabled. Otherwise we can run
      into a deadlock where exit is waiting to park the SQPOLL thread, but the
      SQPOLL thread itself is waiting to get a signal to start.

      That results in the below trace of both tasks hung, waiting on each other:
      
      INFO: task syz-executor458:8401 blocked for more than 143 seconds.
            Not tainted 5.11.0-next-20210226-syzkaller #0
      "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      task:syz-executor458 state:D stack:27536 pid: 8401 ppid:  8400 flags:0x00004004
      Call Trace:
       context_switch kernel/sched/core.c:4324 [inline]
       __schedule+0x90c/0x21a0 kernel/sched/core.c:5075
       schedule+0xcf/0x270 kernel/sched/core.c:5154
       schedule_timeout+0x1db/0x250 kernel/time/timer.c:1868
       do_wait_for_common kernel/sched/completion.c:85 [inline]
       __wait_for_common kernel/sched/completion.c:106 [inline]
       wait_for_common kernel/sched/completion.c:117 [inline]
       wait_for_completion+0x168/0x270 kernel/sched/completion.c:138
       io_sq_thread_park fs/io_uring.c:7115 [inline]
       io_sq_thread_park+0xd5/0x130 fs/io_uring.c:7103
       io_uring_cancel_task_requests+0x24c/0xd90 fs/io_uring.c:8745
       __io_uring_files_cancel+0x110/0x230 fs/io_uring.c:8840
       io_uring_files_cancel include/linux/io_uring.h:47 [inline]
       do_exit+0x299/0x2a60 kernel/exit.c:780
       do_group_exit+0x125/0x310 kernel/exit.c:922
       __do_sys_exit_group kernel/exit.c:933 [inline]
       __se_sys_exit_group kernel/exit.c:931 [inline]
       __x64_sys_exit_group+0x3a/0x50 kernel/exit.c:931
       do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
       entry_SYSCALL_64_after_hwframe+0x44/0xae
      RIP: 0033:0x43e899
      RSP: 002b:00007ffe89376d48 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
      RAX: ffffffffffffffda RBX: 00000000004af2f0 RCX: 000000000043e899
      RDX: 000000000000003c RSI: 00000000000000e7 RDI: 0000000000000000
      RBP: 0000000000000000 R08: ffffffffffffffc0 R09: 0000000010000000
      R10: 0000000000008011 R11: 0000000000000246 R12: 00000000004af2f0
      R13: 0000000000000001 R14: 0000000000000000 R15: 0000000000000001
      INFO: task iou-sqp-8401:8402 can't die for more than 143 seconds.
      task:iou-sqp-8401    state:D stack:30272 pid: 8402 ppid:  8400 flags:0x00004004
      Call Trace:
       context_switch kernel/sched/core.c:4324 [inline]
       __schedule+0x90c/0x21a0 kernel/sched/core.c:5075
       schedule+0xcf/0x270 kernel/sched/core.c:5154
       schedule_timeout+0x1db/0x250 kernel/time/timer.c:1868
       do_wait_for_common kernel/sched/completion.c:85 [inline]
       __wait_for_common kernel/sched/completion.c:106 [inline]
       wait_for_common kernel/sched/completion.c:117 [inline]
       wait_for_completion+0x168/0x270 kernel/sched/completion.c:138
       io_sq_thread+0x27d/0x1ae0 fs/io_uring.c:6717
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
      INFO: task iou-sqp-8401:8402 blocked for more than 143 seconds.
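
      In sketch form, the exit path now does something like the following
      before parking (io_sq_offload_start() names the 5.11-era helper that
      kicks off the SQPOLL thread; treat the exact call as an assumption):

      static void example_cancel_sqpoll(struct io_ring_ctx *ctx)
      {
              /*
               * If the ring was never enabled, start the thread first so
               * that parking has a running task to handshake with.
               */
              if (ctx->flags & IORING_SETUP_R_DISABLED)
                      io_sq_offload_start(ctx);
              io_sq_thread_park(ctx->sq_data);
              /* ...cancel requests, then unpark... */
      }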
      
      Reported-by: syzbot+fb5458330b4442f2090d@syzkaller.appspotmail.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: replace cmpxchg in fallback with xchg · 28c4721b
      Pavel Begunkov authored
      
      
      io_run_ctx_fallback() can use xchg() instead of cmpxchg(). It's simpler
      and faster.
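
      A sketch of the result, assuming ctx->exit_task_work is the list of
      callback_head entries this function drains:

      static bool example_run_ctx_fallback(struct io_ring_ctx *ctx)
      {
              struct callback_head *work, *next;
              bool executed = false;

              do {
                      /* claim the whole list in one atomic swap */
                      work = xchg(&ctx->exit_task_work, NULL);
                      if (!work)
                              break;
                      do {
                              next = work->next;
                              work->func(work);
                              work = next;
                      } while (work);
                      executed = true;
              } while (1);

              return executed;
      }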
      
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: fix __tctx_task_work() ctx race · 2c32395d
      Pavel Begunkov authored
      There is an unlikely but possible race using a freed context. That's
      because req->task_work.func() can free a request, but we won't
      necessarily find a completion in submit_state.comp, and so all ctx refs
      may be put by the time we do mutex_lock(&ctx->uring_lock);

      There are several reasons why it can miss going through
      submit_state.comp: 1) req->task_work.func() didn't complete it itself,
      but punted to iowq (e.g. reissue) and it got freed later, or a similar
      situation with it overflowing and getting flushed by someone else, or
      being submitted to IRQ completion; 2) as we don't hold the uring_lock,
      someone else can do io_submit_flush_completions() and put our ref;
      3) bugs and code obscurities, e.g. failing to propagate issue_flags
      properly.
      
      One example is as follows
      
        CPU1                                  |  CPU2
      =======================================================================
      @req->task_work.func()                  |
  -> @req overflowed,                   |
     so submit_state.comp.nr == 0       |
                                              | flush overflows, and free @req
                                              | ctx refs == 0, free it
      ctx is dead, but we do                  |
      	lock + flush + unlock           |
      
      So take a ctx reference for each new ctx we see in __tctx_task_work(),
      and don't release it until we've done all our flushing.
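
      The shape of the fix, sketched with the batch flushing elided:

      static void example_switch_ctx(struct io_ring_ctx **cur,
                                     struct io_kiocb *req)
      {
              if (req->ctx == *cur)
                      return;
              if (*cur) {
                      /* flush the old ctx's batch here, then unpin it */
                      percpu_ref_put(&(*cur)->refs);
              }
              *cur = req->ctx;
              /* pin the new ctx until its batch has been flushed */
              percpu_ref_get(&(*cur)->refs);
      }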
      
      Fixes: 65453d1e ("io_uring: enable req cache for task_work items")
      Reported-by: syzbot+a157ac7c03a56397f553@syzkaller.appspotmail.com
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      [axboe: fold in my one-liner and fix ref mismatch]
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: kill io_uring_flush() · 0d30b3e7
      Jens Axboe authored
      
      
      This was always a weird work-around for file referencing, and we don't
      need it anymore. Get rid of it.
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: kill unnecessary io_run_ctx_fallback() in io_ring_exit_work() · 914390bc
      Jens Axboe authored
      
      
      We already run the fallback task_work in io_uring_try_cancel_requests(),
      no need to duplicate it at ring exit explicitly.
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: move cred assignment into io_issue_sqe() · 5730b27e
      Jens Axboe authored
      
      
      If we move it in there, then we no longer have to care about it in io-wq.
      This means we can drop the cred handling in io-wq, and we can drop the
      REQ_F_WORK_INITIALIZED flag and async init functions as that was the last
      user of it since we moved to the new workers. Then we can also drop
      io_wq_work->creds, and just hold the personality u16 in there instead.
      
      Suggested-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: kill unnecessary REQ_F_WORK_INITIALIZED checks · 1575f21a
      Jens Axboe authored
      
      
      We're no longer checking anything that requires the work item to be
      initialized, as we're not carrying any file related state there.
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: remove unused argument 'tsk' from io_req_caches_free() · 4010fec4
      Jens Axboe authored
      
      
      We prune the full cache regardless; get rid of the dead argument.
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: destroy io-wq on exec · 8452d4a6
      Pavel Begunkov authored
      
      
      Destroy current's io-wq backend and tctx on __io_uring_task_cancel(),
      aka exec(). It looks like it's not strictly necessary, because it will
      be done at some point when the task dies and changes of creds/files/etc.
      are handled, but it's better to do that earlier to free io-wq and not
      potentially pin the previous mm and other resources for the time being.

      It's safe to do because we wait for all requests of the current task to
      complete, so no request will use tctx afterwards. Note that
      io_uring_files_cancel() may leave some requests for later reaping, so it
      leaves tctx intact; that's ok as the task is dying anyway.
      
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: warn on not destroyed io-wq · ef8eaa4e
      Pavel Begunkov authored
      
      
      Make sure that we have killed the io-wq by the time a task is dead.
      
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: fix race condition in task_work add and clear · 1d5f360d
      Jens Axboe authored
      We clear the bit marking the ctx task_work as active after having run
      the queued work, but we really should be clearing it before. Otherwise
      we can hit a tiny race a la:
      
      CPU0					CPU1
      io_task_work_add()			tctx_task_work()
      					run_work
      	add_to_list
      	test_and_set_bit
      					clear_bit
      		already set
      
      and CPU0 will return thinking the task_work is queued, while in reality
      it's already being run. If we hit the condition after __tctx_task_work()
      found no more work, but before we've cleared the bit, then we'll end up
      thinking it's queued and will be run. In reality it is queued, but we
      didn't queue the ctx task_work to ensure that it gets run.
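
      The fix is then a matter of ordering in tctx_task_work(); a sketch
      against the 5.12-rc structure:

      static void tctx_task_work(struct callback_head *cb)
      {
              struct io_uring_task *tctx;

              tctx = container_of(cb, struct io_uring_task, task_work);
              /*
               * Clear *before* running, so an io_task_work_add() that saw
               * the bit set can rely on its work being picked up below.
               */
              clear_bit(0, &tctx->task_state);
              while (__tctx_task_work(tctx))
                      cond_resched();
      }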
      
      Fixes: 7cbf1722 ("io_uring: provide FIFO ordering for task_work")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io-wq: provide an io_wq_put_and_exit() helper · afcc4015
      Jens Axboe authored
      
      
      If we put the io-wq from io_uring, we really want it to exit. Provide
      a helper that does that for us. Couple that with not having the manager
      hold a reference to the 'wq', and the normal SQPOLL exit will tear down
      the io-wq context appropriately.

      On the io-wq side, our wq context is per task, so only the task itself
      is manipulating ->manager, and hence it's safe to check and clear
      without any extra locking. We just need to ensure that the manager
      task stays around, in case it exits.
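
      The helper itself is tiny; a sketch matching the description above:

      void io_wq_put_and_exit(struct io_wq *wq)
      {
              /* flag exit first, so dropping the ref really tears down */
              set_bit(IO_WQ_BIT_EXIT, &wq->state);
              io_wq_put(wq);
      }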
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: don't use complete_all() on SQPOLL thread exit · 8629397e
      Jens Axboe authored
      We want to reuse this completion, and a single complete should do just
      fine. Ensure that we park ourselves first if requested, as that is what
      led to the initial deadlock in this area. If we've got someone attempting
      to park us, then we can't proceed without having them finish first.
      
      Fixes: 37d1e2e3 ("io_uring: move SQPOLL thread io-wq forked worker")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: run fallback on cancellation · ba50a036
      Pavel Begunkov authored
      
      
      io_uring_try_cancel_requests() matches not only current's requests, but
      also those of other exiting tasks, so we need to actively cancel them
      and not just wait, especially since the function can be called on flush
      during do_exit() -> exit_files().
      Even if it's not a problem for now, it's much nicer to know that the
      function tries to cancel everything it can.
      
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: SQPOLL stop error handling fixes · e54945ae
      Jens Axboe authored
      
      
      If we fail to fork an SQPOLL worker, we can hit cancel, and hence
      attempted thread stop, with the thread already being stopped. Ensure
      we check for that.
      
      Also guard thread stop fully by the sqd mutex, just like we do for
      park.
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io-wq: fix double put of 'wq' in error path · 470ec4ed
      Jens Axboe authored
      
      
      We are already freeing the wq struct in both spots, so don't put it and
      get it freed twice.
      
      Reported-by: syzbot+7bf785eedca35ca05501@syzkaller.appspotmail.com
      Fixes: 4fb6ac32 ("io-wq: improve manager/worker handling over exec")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io-wq: wait for manager exit on wq destroy · d364d9e5
      Jens Axboe authored
      
      
      The manager waits for the workers, hence the manager is always valid if
      workers are running. Now also have wq destroy wait for the manager on
      exit, so we know everything is gone.
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io-wq: rename wq->done completion to wq->started · dbf99620
      Jens Axboe authored
      
      
      This is a leftover from a different use case; it's used to wait for
      the manager to start up. Rename it as such.
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io-wq: don't ask for a new worker if we're exiting · 613eeb60
      Jens Axboe authored
      
      
      If we're in the process of shutting down the async context, then don't
      create new workers if we already have at least the fixed one.
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io-wq: have manager wait for all workers to exit · fb3a1f6c
      Jens Axboe authored
      
      
      Instead of having to wait separately on workers and manager, just have
      the manager wait on the workers. We use an atomic_t for the reference
      here, as we need to start at 0 and allow increment from that. Since the
      number of workers is naturally capped by the allowed nr of processes,
      and that uses an int, there is no risk of overflow.
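
      In sketch form (field names are assumptions): every worker holds a
      reference, and the manager blocks until the last one is dropped:

      static void example_worker_ref_put(struct io_wq *wq)
      {
              if (atomic_dec_and_test(&wq->worker_refs))
                      complete(&wq->worker_done);
      }

      /* manager exit: drop our own ref, then wait for all workers */
      static void example_manager_wait(struct io_wq *wq)
      {
              example_worker_ref_put(wq);
              wait_for_completion(&wq->worker_done);
      }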
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>