Commit 905889bc authored by Kees Cook

btrfs: send: Proactively round up to kmalloc bucket size



Instead of discovering the kmalloc bucket size _after_ allocation, round
up proactively so the allocation is explicitly made for the full size,
allowing the compiler to correctly reason about the resulting size of
the buffer through the existing __alloc_size() hint.
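
The pattern the commit describes can be sketched in userspace C. The names (`kmalloc_size_roundup_sketch`, `sketch_path`, `ensure_buf`) are hypothetical stand-ins, and the round-up here is a simplified power-of-two model — the real `kmalloc_size_roundup()` consults the slab allocator's actual bucket table, which also includes sizes like 96 and 192. The point is the ordering: round the request up to the bucket size *before* allocating, so the recorded buffer length equals the requested size instead of being discovered afterwards via `ksize()`.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Simplified stand-in for the kernel's kmalloc_size_roundup(): round a
 * requested size up to the next power-of-two "bucket". (Illustrative
 * only; real kmalloc buckets are not purely powers of two.) */
static size_t kmalloc_size_roundup_sketch(size_t n)
{
	size_t bucket = 8;            /* smallest bucket in this model */
	while (bucket < n)
		bucket <<= 1;
	return bucket;
}

/* Hypothetical miniature of fs_path_ensure_buf(): grow the buffer to the
 * full bucket size *before* allocating, so buf_len reflects the size the
 * allocation was explicitly made for. */
struct sketch_path {
	char *buf;
	size_t buf_len;
};

static int ensure_buf(struct sketch_path *p, size_t len)
{
	char *tmp;

	if (len <= p->buf_len)
		return 0;     /* fast path: bucket slack already covers it */

	/* Round up proactively, as the commit does, instead of calling
	 * ksize() on the result afterwards. */
	len = kmalloc_size_roundup_sketch(len);
	tmp = realloc(p->buf, len);
	if (!tmp)
		return -1;
	p->buf = tmp;
	p->buf_len = len;     /* the size we actually asked for */
	return 0;
}
```

Because `buf_len` records the full bucket size, a later request of, say, 120 bytes against a 128-byte bucket takes the fast path with no reallocation — the same "fast path happens most of the time" behavior the original `ksize()` trick provided, but visible to the compiler's `__alloc_size()` reasoning.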

Cc: Chris Mason <clm@fb.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: linux-btrfs@vger.kernel.org
Acked-by: David Sterba <dsterba@suse.com>
Link: https://lore.kernel.org/lkml/20220922133014.GI32411@suse.cz
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20220923202822.2667581-8-keescook@chromium.org
parent cd536db0
+6 −5
@@ -438,6 +438,11 @@ static int fs_path_ensure_buf(struct fs_path *p, int len)
 	path_len = p->end - p->start;
 	old_buf_len = p->buf_len;
 
+	/*
+	 * Allocate to the next largest kmalloc bucket size, to let
+	 * the fast path happen most of the time.
+	 */
+	len = kmalloc_size_roundup(len);
 	/*
 	 * First time the inline_buf does not suffice
 	 */
@@ -451,11 +456,7 @@ static int fs_path_ensure_buf(struct fs_path *p, int len)
 	if (!tmp_buf)
 		return -ENOMEM;
 	p->buf = tmp_buf;
-	/*
-	 * The real size of the buffer is bigger, this will let the fast path
-	 * happen most of the time
-	 */
-	p->buf_len = ksize(p->buf);
+	p->buf_len = len;
 
 	if (p->reversed) {
 		tmp_buf = p->buf + old_buf_len - path_len - 1;