Commit 41b4dc14 authored by Joonsoo Kim, committed by Linus Torvalds

mm/gup: restrict CMA region by using allocation scope API



We have a well-defined scope API to exclude the CMA region.  Use it rather
than manipulating gfp_mask manually.  With this change, we can now restore
__GFP_MOVABLE to the gfp_mask, as for a usual migration target allocation,
so ZONE_MOVABLE is also searched by the page allocator.  For hugetlb,
gfp_mask is redefined since it has a regular allocation mask filter for
migration targets.  __GFP_NOWARN is added to the hugetlb gfp_mask filter
since its new user, gup, wants to be silent when allocation fails.
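
The scope API referred to above is the memalloc_nocma_save() /
memalloc_nocma_restore() pair from <linux/sched/mm.h>: while the scope is
open, the page allocator skips CMA pageblocks, so callers no longer have to
clear __GFP_MOVABLE themselves.  A minimal sketch of the pattern (the
alloc_non_cma() wrapper is illustrative, not part of this patch):

#include <linux/gfp.h>
#include <linux/sched/mm.h>

/* Illustrative wrapper, not from this patch. */
static struct page *alloc_non_cma(gfp_t gfp_mask, unsigned int order)
{
	unsigned int flags;
	struct page *page;

	/*
	 * Open a nocma scope: __GFP_MOVABLE may stay set (so
	 * ZONE_MOVABLE remains eligible), but the allocator will not
	 * return pages from CMA pageblocks until the scope is restored.
	 */
	flags = memalloc_nocma_save();
	page = alloc_pages(gfp_mask, order);
	memalloc_nocma_restore(flags);

	return page;
}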

Note that this can be considered a fix for commit 9a4e9f3b ("mm: update
get_user_pages_longterm to migrate pages allocated from CMA region").
However, no "Fixes" tag is added here since the old behaviour was merely
suboptimal and doesn't cause any actual problem.

Suggested-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Roman Gushchin <guro@fb.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Link: http://lkml.kernel.org/r/1596180906-8442-1-git-send-email-iamjoonsoo.kim@lge.com


Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 8b94e0b8
include/linux/hugetlb.h  +2 −0
@@ -710,6 +710,8 @@ static inline gfp_t htlb_modify_alloc_mask(struct hstate *h, gfp_t gfp_mask)
 	/* Some callers might want to enforce node */
 	modified_mask |= (gfp_mask & __GFP_THISNODE);
 
+	modified_mask |= (gfp_mask & __GFP_NOWARN);
+
 	return modified_mask;
 }
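
For context, the helper extended above reads roughly as follows after this
patch (reconstructed from the kernel sources of this era; htlb_alloc_mask()
supplies the hstate's base allocation mask):

static inline gfp_t htlb_modify_alloc_mask(struct hstate *h, gfp_t gfp_mask)
{
	gfp_t modified_mask = htlb_alloc_mask(h);

	/* Some callers might want to enforce node */
	modified_mask |= (gfp_mask & __GFP_THISNODE);

	/* Let a silent caller such as gup stay silent on failure */
	modified_mask |= (gfp_mask & __GFP_NOWARN);

	return modified_mask;
}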

mm/gup.c  +8 −9
@@ -1620,10 +1620,12 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
 	 * Trying to allocate a page for migration. Ignore allocation
 	 * failure warnings. We don't force __GFP_THISNODE here because
 	 * this node here is the node where we have CMA reservation and
-	 * in some case these nodes will have really less non movable
+	 * in some case these nodes will have really less non CMA
 	 * allocation memory.
+	 *
+	 * Note that CMA region is prohibited by allocation scope.
 	 */
-	gfp_t gfp_mask = GFP_USER | __GFP_NOWARN;
+	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN;
 
 	if (PageHighMem(page))
 		gfp_mask |= __GFP_HIGHMEM;
@@ -1631,6 +1633,8 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
 #ifdef CONFIG_HUGETLB_PAGE
 	if (PageHuge(page)) {
 		struct hstate *h = page_hstate(page);
+
+		gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
 		/*
 		 * We don't want to dequeue from the pool because pool pages will
 		 * mostly be from the CMA region.
@@ -1645,11 +1649,6 @@ static struct page *new_non_cma_page(struct page *page, unsigned long private)
 		 */
 		gfp_t thp_gfpmask = GFP_TRANSHUGE | __GFP_NOWARN;
 
-		/*
-		 * Remove the movable mask so that we don't allocate from
-		 * CMA area again.
-		 */
-		thp_gfpmask &= ~__GFP_MOVABLE;
 		thp = __alloc_pages_node(nid, thp_gfpmask, HPAGE_PMD_ORDER);
 		if (!thp)
 			return NULL;
@@ -1795,7 +1794,6 @@ static long __gup_longterm_locked(struct task_struct *tsk,
 				     vmas_tmp, NULL, gup_flags);
 
 	if (gup_flags & FOLL_LONGTERM) {
-		memalloc_nocma_restore(flags);
 		if (rc < 0)
 			goto out;
 
@@ -1808,9 +1806,10 @@ static long __gup_longterm_locked(struct task_struct *tsk,
 
 		rc = check_and_migrate_cma_pages(tsk, mm, start, rc, pages,
 						 vmas_tmp, gup_flags);
+out:
+		memalloc_nocma_restore(flags);
 	}
 
-out:
 	if (vmas_tmp != vmas)
 		kfree(vmas_tmp);
 	return rc;
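
Taken together, the FOLL_LONGTERM path of __gup_longterm_locked() now keeps
the nocma scope open across the migration step, so the migration target
allocations in new_non_cma_page() also run inside it.  A simplified sketch
of the resulting bracket, assuming the memalloc_nocma_save() side (not
visible in these hunks) is unchanged, with some error checks elided:

	if (gup_flags & FOLL_LONGTERM)
		flags = memalloc_nocma_save();

	rc = __get_user_pages_locked(tsk, mm, start, nr_pages, pages,
				     vmas_tmp, NULL, gup_flags);

	if (gup_flags & FOLL_LONGTERM) {
		if (rc < 0)
			goto out;

		rc = check_and_migrate_cma_pages(tsk, mm, start, rc, pages,
						 vmas_tmp, gup_flags);
out:
		/* Close the scope only after migration targets are allocated */
		memalloc_nocma_restore(flags);
	}

This is why __GFP_MOVABLE can be restored in new_non_cma_page(): the scope,
rather than the gfp_mask, is now what keeps allocations out of the CMA
region.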