Commit a72e5e5e authored by yangge, committed by Jinjiang Tu

mm/page_alloc: Separate THP PCP into movable and non-movable categories

stable inclusion
from stable-v6.6.37
commit c5978b996260534aca5ff1b6dcbef92c2bbea947
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IAD6H2

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=c5978b996260534aca5ff1b6dcbef92c2bbea947

--------------------------------

commit bf14ed81f571f8dba31cd72ab2e50fbcc877cc31 upstream.

Since commit 5d0a661d ("mm/page_alloc: use only one PCP list for
THP-sized allocations"), the migration type of pages in the THP-sized PCP
list is no longer differentiated, so non-movable allocation requests may
get a CMA page from that list, which in some cases is not acceptable.

If a large amount of CMA memory is configured in the system (for example,
CMA memory accounts for 50% of system memory), starting a virtual machine
with device passthrough will get stuck.  While starting, the virtual
machine calls pin_user_pages_remote(..., FOLL_LONGTERM, ...) to pin
memory.  Normally, if a page is present and in a CMA area,
pin_user_pages_remote() migrates the page from the CMA area to a non-CMA
area because of the FOLL_LONGTERM flag.  But if a non-movable allocation
request returns CMA memory, migrate_longterm_unpinnable_pages() migrates a
CMA page to another CMA page, which fails the check in
check_and_migrate_movable_pages() and causes the migration to loop
endlessly (see the simplified sketch after the call trace below).

Call trace:
pin_user_pages_remote
--__gup_longterm_locked // endless loops in this function
----_get_user_pages_locked
----check_and_migrate_movable_pages
------migrate_longterm_unpinnable_pages
--------alloc_migration_target
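
For illustration only, here is a minimal user-space model of that loop.
The function names mirror the call trace above, but the bodies are
simplified assumptions, not the kernel implementation; because the buggy
allocator keeps handing back CMA pages as migration targets, the check
never passes:

	#include <stdbool.h>
	#include <stdio.h>

	/* A "page" here is just a flag recording whether it lives in a CMA area. */
	struct page { bool in_cma; };

	/* Models the bug: with a single THP PCP list shared by all migratetypes,
	 * the migration target can itself come from CMA. */
	static struct page alloc_migration_target(void)
	{
		return (struct page){ .in_cma = true };
	}

	/* Stands in for migrate_longterm_unpinnable_pages(): replace the pinned
	 * page with a freshly allocated target page. */
	static void migrate_longterm_unpinnable_pages(struct page *p)
	{
		*p = alloc_migration_target();
	}

	/* Stands in for check_and_migrate_movable_pages(): a long-term pin must
	 * not keep a CMA page, so migrate it and ask the caller to retry. */
	static int check_and_migrate_movable_pages(struct page *p)
	{
		if (!p->in_cma)
			return 0;
		migrate_longterm_unpinnable_pages(p);
		return -1;	/* retry */
	}

	int main(void)
	{
		struct page pinned = { .in_cma = true };
		int attempts = 0;

		/* Stands in for the retry loop in __gup_longterm_locked(); the demo
		 * caps the attempts, the kernel loop would never terminate. */
		while (check_and_migrate_movable_pages(&pinned) != 0) {
			if (++attempts >= 5) {
				printf("page still in CMA after %d attempts\n", attempts);
				break;
			}
		}
		return 0;
	}

The loop can only terminate once the migration target lands outside CMA,
which is what the fix below ensures for non-movable allocation requests.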

This problem also has a negative impact on CMA itself.  For example, when
CMA is borrowed by THP and we need to reclaim it through cma_alloc() or
dma_alloc_coherent(), we must move those pages out so that CMA's users can
retrieve that contiguous memory.  But once CMA's memory is occupied by
non-movable pages, we cannot relocate them, and as a result cma_alloc() is
more likely to fail.

To fix the problem above, we add one more PCP list for THP, which does not
introduce a new cacheline in struct per_cpu_pages.  THP then has two PCP
lists: one is used by MOVABLE allocations and the other by UNMOVABLE
allocations.  MOVABLE allocations cover GFP_MOVABLE, while UNMOVABLE
allocations cover GFP_UNMOVABLE and GFP_RECLAIMABLE.
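
As a rough illustration of the resulting PCP indexing, here is a minimal
user-space sketch.  The constants are simplified stand-ins with assumed
values, not the kernel's definitions, but the THP-order branch mirrors the
change to order_to_pindex() shown in the diff below:

	#include <stdbool.h>
	#include <stdio.h>

	/* Simplified stand-ins for the kernel constants; the values are assumptions
	 * chosen for the example, not copied from any particular kernel config. */
	#define PAGE_ALLOC_COSTLY_ORDER	3
	#define MIGRATE_PCPTYPES	3	/* UNMOVABLE, MOVABLE, RECLAIMABLE */
	#define NR_LOWORDER_PCP_LISTS	(MIGRATE_PCPTYPES * (PAGE_ALLOC_COSTLY_ORDER + 1))
	#define MIGRATE_UNMOVABLE	0
	#define MIGRATE_MOVABLE		1
	#define HPAGE_PMD_ORDER		9	/* assumes 2MB THP with 4KB pages */

	/* After the fix: THP-sized orders get two PCP lists, selected by
	 * migratetype.  Before the fix, both calls in main() below would have
	 * returned the same list index. */
	static unsigned int order_to_pindex(int migratetype, int order)
	{
		if (order > PAGE_ALLOC_COSTLY_ORDER) {
			bool movable = (migratetype == MIGRATE_MOVABLE);
			return NR_LOWORDER_PCP_LISTS + movable;
		}
		return (MIGRATE_PCPTYPES * order) + migratetype;
	}

	int main(void)
	{
		printf("THP movable   -> pindex %u\n",
		       order_to_pindex(MIGRATE_MOVABLE, HPAGE_PMD_ORDER));
		printf("THP unmovable -> pindex %u\n",
		       order_to_pindex(MIGRATE_UNMOVABLE, HPAGE_PMD_ORDER));
		return 0;
	}

With these assumed values the unmovable and movable THP lists land on
indexes 12 and 13; the point is simply that an UNMOVABLE request no longer
draws from the list that may hold CMA-backed movable pages.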

Link: https://lkml.kernel.org/r/1718845190-4456-1-git-send-email-yangge1116@126.com


Fixes: 5d0a661d ("mm/page_alloc: use only one PCP list for THP-sized allocations")
Signed-off-by: yangge <yangge1116@126.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Barry Song <21cnbao@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Conflicts:
	mm/page_alloc.c
	include/linux/mmzone.h
[Conflicts due to ab314175 ("mm: prepare more high-order pages to be
stored on the per-cpu lists")]
Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
include/linux/mmzone.h (+4 −5)

@@ -678,13 +678,12 @@ enum zone_watermarks {
 };
 
 /*
- * One per migratetype for each PAGE_ALLOC_COSTLY_ORDER. One additional list
- * for THP which will usually be GFP_MOVABLE. Even if it is another type,
- * it should not contribute to serious fragmentation causing THP allocation
- * failures.
+ * One per migratetype for each PAGE_ALLOC_COSTLY_ORDER. Two additional lists
+ * are added for THP. One PCP list is used by GPF_MOVABLE, and the other PCP list
+ * is used by GFP_UNMOVABLE and GFP_RECLAIMABLE.
  */
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-#define NR_PCP_THP 1
+#define NR_PCP_THP 2
 #define NR_PCP_ORDERS (PAGE_ALLOC_COSTLY_ORDER + 2)
 #else
 #define NR_PCP_THP 0
mm/page_alloc.c (+7 −2)

@@ -528,10 +528,15 @@ static void bad_page(struct page *page, const char *reason)
 
 static inline unsigned int order_to_pindex(int migratetype, int order)
 {
+	bool __maybe_unused movable;
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	if (order > PAGE_ALLOC_COSTLY_ORDER + 1) {
 		VM_BUG_ON(order != HPAGE_PMD_ORDER);
-		return NR_LOWORDER_PCP_LISTS;
+
+		movable = migratetype == MIGRATE_MOVABLE;
+
+		return NR_LOWORDER_PCP_LISTS + movable;
 	}
 #else
 	VM_BUG_ON(order > PAGE_ALLOC_COSTLY_ORDER);
@@ -545,7 +550,7 @@ static inline int pindex_to_order(unsigned int pindex)
 	int order = pindex / MIGRATE_PCPTYPES;
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	if (pindex == NR_LOWORDER_PCP_LISTS)
+	if (pindex >= NR_LOWORDER_PCP_LISTS)
 		order = HPAGE_PMD_ORDER;
 #else
 	VM_BUG_ON(order > PAGE_ALLOC_COSTLY_ORDER);