Commit 04540755 authored by Brian Geffon, committed by Tong Tiangen

mm: fix finish_fault() handling for large folios

mainline inclusion
from mainline-v6.14-rc6
commit 34b82f33cf3f03bc39e9a205a913d790e1520ade
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/IBRQ5C

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=34b82f33cf3f03bc39e9a205a913d790e1520ade

--------------------------------

When handling faults for anon shmem, finish_fault() will attempt to install
ptes for the entire folio.  Unfortunately, if it encounters a single
non-pte_none entry in that range it will bail, even if the pte that
triggered the fault is still pte_none.  When this situation happens the
fault is retried endlessly, never making forward progress.

This patch fixes this behavior: if finish_fault() detects that a pte in the
range is not pte_none, it falls back to setting a single pte.
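
To make the failure mode and the fix concrete, below is a minimal
user-space sketch (not kernel code) of the control flow.  The names
pte_range_none, needs_fallback and the fallback label mirror the patch;
the int array, NR_PTES, fault_idx and finish_fault_model() are simplified
stand-ins invented for illustration.

#include <stdbool.h>
#include <stdio.h>

#define NR_PTES 4	/* pretend the large folio spans 4 ptes */

/* Models the kernel's pte_range_none(): true iff every entry is empty. */
static bool pte_range_none(const int *ptes, int nr)
{
	for (int i = 0; i < nr; i++)
		if (ptes[i] != 0)	/* nonzero models a non-pte_none entry */
			return false;
	return true;
}

/* Returns how many ptes get installed for a fault at fault_idx. */
static int finish_fault_model(int *ptes, int fault_idx)
{
	int nr_pages = NR_PTES;
	bool needs_fallback = false;

fallback:
	if (needs_fallback)
		nr_pages = 1;

	if (nr_pages > 1 && !pte_range_none(ptes, nr_pages)) {
		/*
		 * Pre-patch behavior: bail here (VM_FAULT_NOPAGE) even
		 * though the faulting pte itself is still none, so the
		 * same fault fires again forever.  Post-patch: retry
		 * once and map just the faulting page.
		 */
		needs_fallback = true;
		goto fallback;
	}

	if (nr_pages == 1) {
		ptes[fault_idx] = 1;	/* install only the faulting pte */
		return 1;
	}

	for (int i = 0; i < nr_pages; i++)
		ptes[i] = 1;		/* install the folio's entire range */
	return nr_pages;
}

int main(void)
{
	/* One entry in the range is already populated; the fault is at 0. */
	int ptes[NR_PTES] = { 0, 0, 1, 0 };

	printf("installed %d pte(s)\n", finish_fault_model(ptes, 0));
	return 0;
}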

[bgeffon@google.com: tweak whitespace]
  Link: https://lkml.kernel.org/r/20250227133236.1296853-1-bgeffon@google.com
Link: https://lkml.kernel.org/r/20250226162341.915535-1-bgeffon@google.com


Fixes: 43e027e41423 ("mm: memory: extend finish_fault() to support large folio")
Signed-off-by: Brian Geffon <bgeffon@google.com>
Suggested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reported-by: Marek Maslanka <mmaslanka@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
parent 010e974a
mm/memory.c: +10 −5
@@ -4846,7 +4846,11 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 	bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) &&
 		      !(vma->vm_flags & VM_SHARED);
 	int type, nr_pages;
-	unsigned long addr = vmf->address;
+	unsigned long addr;
+	bool needs_fallback = false;
+
+fallback:
+	addr = vmf->address;
 
 	/* Did we COW the page? */
 	if (is_cow)
@@ -4885,7 +4889,8 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 	 * approach also applies to non-anonymous-shmem faults to avoid
 	 * inflating the RSS of the process.
 	 */
-	if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))) {
+	if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma)) ||
+	    unlikely(needs_fallback)) {
 		nr_pages = 1;
 	} else if (nr_pages > 1) {
 		pgoff_t idx = folio_page_idx(folio, page);
@@ -4921,9 +4926,9 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 		ret = VM_FAULT_NOPAGE;
 		goto unlock;
 	} else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
-		update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
-		ret = VM_FAULT_NOPAGE;
-		goto unlock;
+		needs_fallback = true;
+		pte_unmap_unlock(vmf->pte, vmf->ptl);
+		goto fallback;
 	}
 
 	folio_ref_add(folio, nr_pages - 1);
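
Note the design choice visible in the last hunk: rather than returning
VM_FAULT_NOPAGE as before, the new code drops the page-table lock with
pte_unmap_unlock() and jumps back to the fallback label.  With
needs_fallback set, finish_fault() re-walks the path with nr_pages forced
to 1 and installs only the faulting pte, which is still pte_none, so the
fault finally makes forward progress.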