Commit 71b26083 authored by Christoph Hellwig, committed by Jens Axboe

block: set the disk capacity to 0 in blk_mark_disk_dead



nvme and xen-blkfront are already doing this to stop buffered writes from
creating dirty pages that can't be written out later.  Move it to the
common code.

This also removes the nvme comment about the ordering: not only is bd_mutex
gone entirely by now, it also had not been used to lock disk size updates
for a long time before its removal, so the ordering requirement documented
there no longer applies.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Chao Leng <lengchao@huawei.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221101150050.3510-2-hch@lst.de


Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent aa625117
block/genhd.c (+5 −0)
@@ -555,6 +555,11 @@ void blk_mark_disk_dead(struct gendisk *disk)
 {
 	set_bit(GD_DEAD, &disk->state);
 	blk_queue_start_drain(disk->queue);
+
+	/*
+	 * Stop buffered writers from dirtying pages that can't be written out.
+	 */
+	set_capacity_and_notify(disk, 0);
 }
 EXPORT_SYMBOL_GPL(blk_mark_disk_dead);

drivers/block/xen-blkfront.c (+0 −1)
@@ -2129,7 +2129,6 @@ static void blkfront_closing(struct blkfront_info *info)
 	if (info->rq && info->gd) {
 		blk_mq_stop_hw_queues(info->rq);
 		blk_mark_disk_dead(info->gd);
-		set_capacity(info->gd, 0);
 	}
 
 	for_each_rinfo(info, rinfo, i) {
drivers/nvme/host/core.c (+1 −6)
@@ -5116,10 +5116,7 @@ static void nvme_stop_ns_queue(struct nvme_ns *ns)
 /*
  * Prepare a queue for teardown.
  *
- * This must forcibly unquiesce queues to avoid blocking dispatch, and only set
- * the capacity to 0 after that to avoid blocking dispatchers that may be
- * holding bd_butex.  This will end buffered writers dirtying pages that can't
- * be synced.
+ * This must forcibly unquiesce queues to avoid blocking dispatch.
  */
 static void nvme_set_queue_dying(struct nvme_ns *ns)
 {
@@ -5128,8 +5125,6 @@ static void nvme_set_queue_dying(struct nvme_ns *ns)
 
 	blk_mark_disk_dead(ns->disk);
 	nvme_start_ns_queue(ns);
-
-	set_capacity_and_notify(ns->disk, 0);
 }
 
 /**