Commit 5eaf35ab authored by Matthew Wilcox (Oracle), committed by Linus Torvalds

mm/rmap: fix assumptions of THP size



Ask the page what size it is instead of assuming it's PMD size.  Do this
for anon pages as well as file pages for when someone decides to support
that.  Leave the assumption alone for pages which are PMD mapped; we don't
currently grow THPs beyond PMD size, so we don't need to change this code
yet.
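The shape of the fix can be illustrated with a small userspace model. This is a hypothetical sketch, not the kernel's actual `struct page` or `thp_nr_pages()` implementation: here a "THP" is just an array of subpages with per-subpage mapcounts, and the loop mirrors the fixed `page_remove_file_rmap()` logic, asking the page its size instead of hardcoding `HPAGE_PMD_NR`.

```c
#include <assert.h>

#define HPAGE_PMD_NR 512           /* 2MB THP with 4KB base pages on x86-64 */

/* Hypothetical userspace stand-ins for kernel types */
struct subpage { int mapcount; };  /* models page[i]._mapcount */

struct thp {
	int nr_pages;              /* actual size of this compound page */
	struct subpage sub[HPAGE_PMD_NR];
};

/* Model of thp_nr_pages(): ask the page what size it is instead of
 * assuming it is always HPAGE_PMD_NR. */
static int thp_nr_pages(const struct thp *page)
{
	return page->nr_pages;
}

/* Mirrors the fixed loop: iterate over the page's actual size,
 * counting subpages whose mapcount drops below zero (the kernel
 * uses atomic_add_negative(-1, &page[i]._mapcount)). */
static int unmap_subpages(struct thp *page)
{
	int i, nr = 0;

	for (i = 0; i < thp_nr_pages(page); i++) {
		if (--page->sub[i].mapcount < 0)
			nr++;
	}
	return nr;
}
```

With the old `i < HPAGE_PMD_NR` bound, a THP smaller than PMD size would have had subpage mapcounts touched past its end; bounding the loop by the page's own size keeps the code correct if sub-PMD (or larger) THPs are ever supported.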

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: SeongJae Park <sjpark@amazon.de>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Huang Ying <ying.huang@intel.com>
Link: https://lkml.kernel.org/r/20200908195539.25896-9-willy@infradead.org


Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent e2333dad
+5 −5
@@ -1205,7 +1205,7 @@ void page_add_file_rmap(struct page *page, bool compound)
	VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
	lock_page_memcg(page);
	if (compound && PageTransHuge(page)) {
-		for (i = 0, nr = 0; i < HPAGE_PMD_NR; i++) {
+		for (i = 0, nr = 0; i < thp_nr_pages(page); i++) {
			if (atomic_inc_and_test(&page[i]._mapcount))
				nr++;
		}
@@ -1246,7 +1246,7 @@ static void page_remove_file_rmap(struct page *page, bool compound)

	/* page still mapped by someone else? */
	if (compound && PageTransHuge(page)) {
-		for (i = 0, nr = 0; i < HPAGE_PMD_NR; i++) {
+		for (i = 0, nr = 0; i < thp_nr_pages(page); i++) {
			if (atomic_add_negative(-1, &page[i]._mapcount))
				nr++;
		}
@@ -1293,7 +1293,7 @@ static void page_remove_anon_compound_rmap(struct page *page)
		 * Subpages can be mapped with PTEs too. Check how many of
		 * them are still mapped.
		 */
-		for (i = 0, nr = 0; i < HPAGE_PMD_NR; i++) {
+		for (i = 0, nr = 0; i < thp_nr_pages(page); i++) {
			if (atomic_add_negative(-1, &page[i]._mapcount))
				nr++;
		}
@@ -1303,10 +1303,10 @@ static void page_remove_anon_compound_rmap(struct page *page)
		 * page of the compound page is unmapped, but at least one
		 * small page is still mapped.
		 */
-		if (nr && nr < HPAGE_PMD_NR)
+		if (nr && nr < thp_nr_pages(page))
			deferred_split_huge_page(page);
	} else {
-		nr = HPAGE_PMD_NR;
+		nr = thp_nr_pages(page);
	}

	if (unlikely(PageMlocked(page)))