  1. Mar 20, 2022
    • io_uring: recycle provided before arming poll · abdad709
      Jens Axboe authored
      We currently have a race where we recycle the selected buffer if poll
      returns IO_APOLL_OK. But that's too late, as the poll could already be
      triggering or have triggered. If that race happens, then we're putting a
      buffer that's already being used.
      
      Fix this by recycling before we arm poll. This does mean that we'll
      sometimes almost instantly re-select the buffer, but it's rare enough in
      testing that it should not pose a performance issue.
      
      Fixes: b1c62645 ("io_uring: recycle provided buffers if request goes async")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
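      A minimal user-space sketch of the reordering described above; the names
      struct io_req, io_recycle_buffer(), and io_arm_poll() are hypothetical
      stand-ins for the io_uring internals, not the kernel's code:

          /* Model of the fix: give the buffer back BEFORE the request can
           * be completed by a racing poll wakeup. */
          #include <stdbool.h>
          #include <stdio.h>

          struct io_req { int selected_buf; bool poll_armed; };

          static void io_recycle_buffer(struct io_req *req)
          {
              /* Return the selected buffer to its group's free list. */
              printf("recycled buffer %d\n", req->selected_buf);
              req->selected_buf = -1;
          }

          static void io_arm_poll(struct io_req *req)
          {
              /* Once armed, the poll may trigger at any time, so the
               * request must no longer own a provided buffer. */
              req->poll_armed = true;
          }

          int main(void)
          {
              struct io_req req = { .selected_buf = 7 };
              io_recycle_buffer(&req);  /* recycle first ... */
              io_arm_poll(&req);        /* ... then arm poll */
              return 0;
          }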
  2. Mar 18, 2022
    • io_uring: manage provided buffers strictly ordered · dbc7d452
      Jens Axboe authored
      
      
      Workloads using provided buffers benefit from using and returning buffers
      in the right order, and so do TLBs for that matter. Manage the internal
      buffer list as a straight list, rather than using the head buffer as the
      insertion node. Use a hashed list for the buffer group IDs instead of an
      xarray; the overhead is much lower this way. xarray provides internal
      locking and other trickery that is handy for some use cases, but
      io_uring already locks internally for the buffer manipulation and needs
      none of that.
      
      This is good for about a 2% reduction in overhead, a combination of the
      improved management and the fact that the workload has an easier time
      bundling back provided buffers.
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
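      A user-space sketch of the data-structure change, with illustrative names
      (BGID_HASH_SIZE, struct io_buffer_list) rather than the kernel's: buffer
      group IDs hash into a small bucket array, and each group keeps its
      buffers on a straight FIFO list so they are selected in provided order:

          #include <stdio.h>
          #include <stdlib.h>

          #define BGID_HASH_SIZE 16  /* small power-of-two bucket count */

          struct buf {
              struct buf *next;
              int id;
          };

          struct io_buffer_list {
              struct io_buffer_list *next;  /* hash-bucket chain */
              unsigned int bgid;            /* buffer group ID */
              struct buf *head, *tail;      /* straight FIFO list */
          };

          static struct io_buffer_list *hash[BGID_HASH_SIZE];

          static struct io_buffer_list *get_group(unsigned int bgid)
          {
              unsigned int b = bgid & (BGID_HASH_SIZE - 1);
              for (struct io_buffer_list *bl = hash[b]; bl; bl = bl->next)
                  if (bl->bgid == bgid)
                      return bl;
              struct io_buffer_list *bl = calloc(1, sizeof(*bl));
              bl->bgid = bgid;
              bl->next = hash[b];
              hash[b] = bl;
              return bl;
          }

          /* Appending keeps buffers in the order they were provided ... */
          static void provide_buf(struct io_buffer_list *bl, struct buf *b)
          {
              b->next = NULL;
              if (bl->tail)
                  bl->tail->next = b;
              else
                  bl->head = b;
              bl->tail = b;
          }

          /* ... and selection takes from the head, so buffers are used
           * and returned in order. */
          static struct buf *select_buf(struct io_buffer_list *bl)
          {
              struct buf *b = bl->head;
              if (b) {
                  bl->head = b->next;
                  if (!bl->head)
                      bl->tail = NULL;
              }
              return b;
          }

          int main(void)
          {
              struct io_buffer_list *bl = get_group(42);
              struct buf a = { .id = 1 }, b = { .id = 2 };
              provide_buf(bl, &a);
              provide_buf(bl, &b);
              /* prints "1 2": strict FIFO order */
              printf("%d %d\n", select_buf(bl)->id, select_buf(bl)->id);
              return 0;
          }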
  3. Mar 16, 2022
    • io_uring: make tracing format consistent · 052ebf1f
      Dylan Yudaken authored
      
      
      Make the tracing formatting for user_data and flags consistent.
      
      Having consistent formatting allows one, for example, to grep for a
      specific user_data or flags value and easily trace a single sqe through.
      
      Change user_data to 0x%llx and flags to 0x%x everywhere. The '0x' prefix
      is useful to disambiguate values such as "user_data 100".
      
      Additionally remove the '=' for flags in io_uring_req_failed, again for consistency.
      
      Signed-off-by: Dylan Yudaken <dylany@fb.com>
      Link: https://lore.kernel.org/r/20220316095204.2191498-1-dylany@fb.com
      
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
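      As an illustration of the resulting format (the lines below paraphrase
      the trace output; they are not copied from the kernel's TP_printk
      strings):

          #include <stdio.h>

          int main(void)
          {
              unsigned long long user_data = 100;
              unsigned int flags = 0x8001;

              /* "user_data 0x64" is unambiguous, unlike "user_data 100",
               * which could be read as either decimal or hex. */
              printf("io_uring_complete: user_data 0x%llx, flags 0x%x\n",
                     user_data, flags);
              printf("io_uring_req_failed: user_data 0x%llx, flags 0x%x\n",
                     user_data, flags);
              return 0;
          }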
    • io_uring: recycle apoll_poll entries · 4d9237e3
      Jens Axboe authored
      
      
      Particularly for networked workloads, io_uring makes intensive use of its
      poll-based backend to get a notification when data/space is available.
      Profiling such workloads, we see that 3-4% of alloc+free overhead is
      directly attributable to just the apoll allocation and free (with the
      rest being skb alloc+free).
      
      For the fast path, we have ctx->uring_lock held already for both issue
      and the inline completions, and we can utilize that to avoid any extra
      locking needed to have a basic recycling cache for the apoll entries on
      both the alloc and free side.
      
      Double poll still requires an allocation. But those are rare and not
      a fast path item.
      
      With the simple cache in place, we see a 3-4% reduction in overhead for
      the workload.
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
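      A user-space model of the recycling idea: a singly linked free list of
      apoll entries protected by the same mutex the issue and completion paths
      already hold, with a pthread mutex standing in for ctx->uring_lock. The
      names are illustrative, not the kernel's:

          #include <pthread.h>
          #include <stdlib.h>

          struct async_poll {
              struct async_poll *next;  /* free-list link while cached */
          };

          struct ctx {
              pthread_mutex_t uring_lock;  /* held on issue/completion */
              struct async_poll *apoll_cache;
          };

          /* Caller holds ctx->uring_lock, as the issue path already does,
           * so the cache itself needs no extra locking. */
          static struct async_poll *apoll_get(struct ctx *ctx)
          {
              struct async_poll *apoll = ctx->apoll_cache;
              if (apoll)
                  ctx->apoll_cache = apoll->next;
              else
                  apoll = malloc(sizeof(*apoll));  /* cache miss */
              return apoll;
          }

          /* Likewise called with ctx->uring_lock held, on inline
           * completion. */
          static void apoll_put(struct ctx *ctx, struct async_poll *apoll)
          {
              apoll->next = ctx->apoll_cache;
              ctx->apoll_cache = apoll;
          }

          int main(void)
          {
              struct ctx ctx = { .uring_lock = PTHREAD_MUTEX_INITIALIZER };
              pthread_mutex_lock(&ctx.uring_lock);
              struct async_poll *a = apoll_get(&ctx);  /* allocates */
              apoll_put(&ctx, a);                      /* recycles */
              struct async_poll *b = apoll_get(&ctx);  /* reuses entry */
              pthread_mutex_unlock(&ctx.uring_lock);
              free(b);
              return 0;
          }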