Commit 83a8441f authored by Matthew Wilcox (Oracle)

mm/huge_memory: Avoid calling pmd_page() on a non-leaf PMD

Calling try_to_unmap() with TTU_SPLIT_HUGE_PMD and a folio that's not
mapped by a PMD causes oopses on arm64 because we now call page_folio()
on an invalid page.  pmd_page() returns a valid page for non-leaf PMDs on
some architectures, so this bug escaped testing before now.  Fix this bug
by delaying the call to pmd_page() until after we know the PMD is a leaf.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=215804
Fixes: af28a988 ("mm/huge_memory: Convert __split_huge_pmd() to take a folio")
Reported-by: Zorro Lang <zlang@redhat.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Tested-by: Zorro Lang <zlang@redhat.com>
parent 3e732ebf
+5 −6
@@ -2145,15 +2145,14 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	 * pmd against. Otherwise we can end up replacing wrong folio.
 	 */
 	VM_BUG_ON(freeze && !folio);
-	if (folio) {
-		VM_WARN_ON_ONCE(!folio_test_locked(folio));
-		if (folio != page_folio(pmd_page(*pmd)))
-			goto out;
-	}
+	VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
 
 	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
-	    is_pmd_migration_entry(*pmd))
+	    is_pmd_migration_entry(*pmd)) {
+		if (folio && folio != page_folio(pmd_page(*pmd)))
+			goto out;
 		__split_huge_pmd_locked(vma, pmd, range.start, freeze);
+	}
 
 out:
 	spin_unlock(ptl);