Commit 74dd1e5c authored by Ryan Roberts, committed by Kefeng Wang

mm: non-pmd-mappable, large folios for folio_add_new_anon_rmap()

mainline inclusion
from mainline-v6.8-rc1
commit 372cbd4d5a0665bf7e181c72f5e40e1bf59b0b08
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I98AW9
CVE: NA

-------------------------------------------------

In preparation for supporting anonymous multi-size THP, improve
folio_add_new_anon_rmap() to allow a non-pmd-mappable, large folio to be
passed to it.  In this case, all contained pages are accounted using the
order-0 folio (or base page) scheme.

Link: https://lkml.kernel.org/r/20231207161211.2374093-3-ryan.roberts@arm.com


Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Yu Zhao <yuzhao@google.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Barry Song <v-songbaohua@oppo.com>
Tested-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Tested-by: John Hubbard <jhubbard@nvidia.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Rientjes <rientjes@google.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Itaru Kitayama <itaru.kitayama@gmail.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
(cherry picked from commit 372cbd4d5a0665bf7e181c72f5e40e1bf59b0b08)
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
parent 566c6ed7
+20 −8
@@ -1305,32 +1305,44 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
  * This means the inc-and-test can be bypassed.
  * The folio does not have to be locked.
  *
- * If the folio is large, it is accounted as a THP.  As the folio
+ * If the folio is pmd-mappable, it is accounted as a THP.  As the folio
  * is new, it's assumed to be mapped exclusively by a single process.
  */
 void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 		unsigned long address)
 {
-	int nr;
+	int nr = folio_nr_pages(folio);
 
-	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
+	VM_BUG_ON_VMA(address < vma->vm_start ||
+			address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
 	__folio_set_swapbacked(folio);
+	__folio_set_anon(folio, vma, address, true);
 
-	if (likely(!folio_test_pmd_mappable(folio))) {
+	if (likely(!folio_test_large(folio))) {
 		/* increment count (starts at -1) */
 		atomic_set(&folio->_mapcount, 0);
-		nr = 1;
+		SetPageAnonExclusive(&folio->page);
+	} else if (!folio_test_pmd_mappable(folio)) {
+		int i;
+
+		for (i = 0; i < nr; i++) {
+			struct page *page = folio_page(folio, i);
+
+			/* increment count (starts at -1) */
+			atomic_set(&page->_mapcount, 0);
+			SetPageAnonExclusive(page);
+		}
+
+		atomic_set(&folio->_nr_pages_mapped, nr);
 	} else {
 		/* increment count (starts at -1) */
 		atomic_set(&folio->_entire_mapcount, 0);
 		atomic_set(&folio->_nr_pages_mapped, COMPOUND_MAPPED);
-		nr = folio_nr_pages(folio);
+		SetPageAnonExclusive(&folio->page);
 		__lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
 	}
 
 	__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
-	__folio_set_anon(folio, vma, address, true);
-	SetPageAnonExclusive(&folio->page);
 }
 
 /**