Commit d729cf9a authored by Christoph Hellwig, committed by Jens Axboe

io_uring: don't sleep when polling for I/O



There is no point in sleeping for the expected I/O completion timeout
in the io_uring async polling model as we never poll for a specific
I/O.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Mark Wunderlich <mark.wunderlich@intel.com>
Link: https://lore.kernel.org/r/20211012111226.760968-11-hch@lst.de


Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent ef99b2d3
+2 −1
@@ -4103,7 +4103,8 @@ int blk_poll(struct request_queue *q, blk_qc_t cookie, unsigned int flags)
 	if (current->plug)
 		blk_flush_plug_list(current->plug, false);
 
-	if (q->poll_nsec != BLK_MQ_POLL_CLASSIC) {
+	if (!(flags & BLK_POLL_NOSLEEP) &&
+	    q->poll_nsec != BLK_MQ_POLL_CLASSIC) {
 		if (blk_mq_poll_hybrid(q, cookie))
 			return 1;
 	}
+1 −1
@@ -2457,7 +2457,7 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
 			long min)
 {
 	struct io_kiocb *req, *tmp;
-	unsigned int poll_flags = 0;
+	unsigned int poll_flags = BLK_POLL_NOSLEEP;
 	LIST_HEAD(done);
 
 	/*
+2 −0
@@ -566,6 +566,8 @@ blk_status_t errno_to_blk_status(int errno);
 
 /* only poll the hardware once, don't continue until a completion was found */
 #define BLK_POLL_ONESHOT		(1 << 0)
+/* do not sleep to wait for the expected completion time */
+#define BLK_POLL_NOSLEEP		(1 << 1)
 int blk_poll(struct request_queue *q, blk_qc_t cookie, unsigned int flags);
 
 static inline struct request_queue *bdev_get_queue(struct block_device *bdev)