Commit 169b8873 authored by Jan Kara, committed by Tong Tiangen

readahead: properly shorten readahead when falling back to do_page_cache_ra()

mainline inclusion
from mainline-v6.14-rc1
commit d5ea5e5e50dffd13fccedb690a0a1f27be56191a
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IBRWBB

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=d5ea5e5e50dffd13fccedb690a0a1f27be56191a

--------------------------------

When we succeed in creating some folios in page_cache_ra_order() but then
need to fall back to single page folios, we don't shorten the amount to
read passed to do_page_cache_ra() by the amount we've already read.  This
then results in reading more and also in placing another readahead mark in
the middle of the readahead window which confuses readahead code.  Fix the
problem by properly reducing number of pages to read.  Unlike previous
attempt at this fix (commit 7c877586da31) which had to be reverted, we are
now careful to check there is indeed something to read so that we don't
submit negative-sized readahead.
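The guard matters because the page count is unsigned: subtracting the pages already read from a window that ->readahead() has since shrunk would wrap around to a huge value. A minimal userspace sketch of that fallback arithmetic (fallback_count() and its parameters are hypothetical stand-ins, not kernel API):

```c
#include <assert.h>

/* Stand-in for the kernel's pgoff_t (an unsigned page index). */
typedef unsigned long pgoff_t;

/*
 * Sketch of the fixed fallback logic: 'start' is where large-folio
 * readahead began, 'index' is how far it got before falling back,
 * 'ra_size' is the (possibly updated) readahead window in pages.
 * Returns the page count that would be handed to the single-page
 * path (do_page_cache_ra() in the kernel), or 0 if nothing is left.
 */
static pgoff_t fallback_count(pgoff_t start, pgoff_t index, pgoff_t ra_size)
{
	/*
	 * ->readahead() may have shrunk ra_size below what was already
	 * read; subtracting blindly would underflow the unsigned type
	 * and request an enormous bogus readahead.
	 */
	if (ra_size > index - start)
		return ra_size - (index - start);
	return 0;
}
```

With a 16-page window of which 4 pages were already read as large folios, the single-page path is asked for the remaining 12; if 16 or more pages were already read, it is skipped entirely.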

Link: https://lkml.kernel.org/r/20241204181016.15273-3-jack@suse.cz


Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Conflicts:
	mm/readahead.c
[context conflict, 26cfdb395eef ("readahead: allocate folios with
mapping_min_order in readahead") not merged]
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
parent 6ac40147
mm/readahead.c: +10 −3
@@ -487,7 +487,8 @@ void page_cache_ra_order(struct readahead_control *ractl,
 		struct file_ra_state *ra, unsigned int new_order)
 {
 	struct address_space *mapping = ractl->mapping;
-	pgoff_t index = readahead_index(ractl);
+	pgoff_t start = readahead_index(ractl);
+	pgoff_t index = start;
 	pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT;
 	pgoff_t mark = index + ra->size - ra->async_size;
 	unsigned int nofs;
@@ -546,12 +547,18 @@ void page_cache_ra_order(struct readahead_control *ractl,
 	/*
 	 * If there were already pages in the page cache, then we may have
 	 * left some gaps.  Let the regular readahead code take care of this
-	 * situation.
+	 * situation below.
 	 */
 	if (!err)
 		return;
 fallback:
-	do_page_cache_ra(ractl, ra->size, ra->async_size);
+	/*
+	 * ->readahead() may have updated readahead window size so we have to
+	 * check there's still something to read.
+	 */
+	if (ra->size > index - start)
+		do_page_cache_ra(ractl, ra->size - (index - start),
+				 ra->async_size);
 }

 /*