Unverified commit dc3216b1 authored by Christian Brauner

super: ensure valid info



For keyed filesystems that recycle superblocks based on s_fs_info, or on
information contained therein, s_fs_info must be kept alive as long as the
superblock is on the filesystem type's super list. This isn't guaranteed,
as s_fs_info is freed at the latest in sb->kill_sb().

The fix is simply to perform the notification and list removal in
kill_anon_super(). Filesystems must free s_fs_info only after they have
called the kill_*() helpers; if they don't, they already risk a
use-after-free. So notifying there guarantees that s_fs_info remains
valid.

For block-backed filesystems, notifying after sb->kill_sb() in
deactivate_locked_super() remains unproblematic and is required, because
additional block devices may be shut down after kill_block_super() has
been called from a filesystem's sb->kill_sb() handler. For example, ext4
and xfs close additional devices there. Block-based filesystems don't
depend on s_fs_info (btrfs does use s_fs_info, but it uses
kill_anon_super(), not kill_block_super()).

Sorry for that braino. The goal should obviously be to unify this
behavior during this cycle, but let's do a simple bugfix now.

Fixes: 2c18a63b ("super: wait until we passed kill super")
Fixes: syzbot+5b64180f8d9e39d3f061@syzkaller.appspotmail.com
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reported-by: syzbot+5b64180f8d9e39d3f061@syzkaller.appspotmail.com
Message-Id: <20230828-vfs-super-fixes-v1-2-b37a4a04a88f@kernel.org>
Signed-off-by: Christian Brauner <brauner@kernel.org>
parent 345a5c4a
+30 −19
@@ -434,6 +434,33 @@ void put_super(struct super_block *sb)
 	spin_unlock(&sb_lock);
 }
 
+static void kill_super_notify(struct super_block *sb)
+{
+	lockdep_assert_not_held(&sb->s_umount);
+
+	/* already notified earlier */
+	if (sb->s_flags & SB_DEAD)
+		return;
+
+	/*
+	 * Remove it from @fs_supers so it isn't found by new
+	 * sget{_fc}() walkers anymore. Any concurrent mounter still
+	 * managing to grab a temporary reference is guaranteed to
+	 * already see SB_DYING and will wait until we notify them about
+	 * SB_DEAD.
+	 */
+	spin_lock(&sb_lock);
+	hlist_del_init(&sb->s_instances);
+	spin_unlock(&sb_lock);
+
+	/*
+	 * Let concurrent mounts know that this thing is really dead.
+	 * We don't need @sb->s_umount here as every concurrent caller
+	 * will see SB_DYING and either discard the superblock or wait
+	 * for SB_DEAD.
+	 */
+	super_wake(sb, SB_DEAD);
+}
 
 /**
  *	deactivate_locked_super	-	drop an active reference to superblock
@@ -453,6 +480,8 @@ void deactivate_locked_super(struct super_block *s)
 		unregister_shrinker(&s->s_shrink);
 		fs->kill_sb(s);
 
+		kill_super_notify(s);
+
 		/*
 		 * Since list_lru_destroy() may sleep, we cannot call it from
 		 * put_super(), where we hold the sb_lock. Therefore we destroy
@@ -461,25 +490,6 @@ void deactivate_locked_super(struct super_block *s)
 		list_lru_destroy(&s->s_dentry_lru);
 		list_lru_destroy(&s->s_inode_lru);
 
-		/*
-		 * Remove it from @fs_supers so it isn't found by new
-		 * sget{_fc}() walkers anymore. Any concurrent mounter still
-		 * managing to grab a temporary reference is guaranteed to
-		 * already see SB_DYING and will wait until we notify them about
-		 * SB_DEAD.
-		 */
-		spin_lock(&sb_lock);
-		hlist_del_init(&s->s_instances);
-		spin_unlock(&sb_lock);
-
-		/*
-		 * Let concurrent mounts know that this thing is really dead.
-		 * We don't need @sb->s_umount here as every concurrent caller
-		 * will see SB_DYING and either discard the superblock or wait
-		 * for SB_DEAD.
-		 */
-		super_wake(s, SB_DEAD);
-
 		put_filesystem(fs);
 		put_super(s);
 	} else {
@@ -1260,6 +1270,7 @@ void kill_anon_super(struct super_block *sb)
 {
 	dev_t dev = sb->s_dev;
 	generic_shutdown_super(sb);
+	kill_super_notify(sb);
 	free_anon_bdev(dev);
 }
 EXPORT_SYMBOL(kill_anon_super);