Commit a501a070 authored by Matthew Wilcox (Oracle), committed by Andrew Morton

mm: report success more often from filemap_map_folio_range()

Even though we had successfully mapped the relevant page, we would rarely
return success from filemap_map_folio_range().  That leads to falling back
from the VMA lock path to the mmap_lock path, which is a speed &
scalability issue.  Found by inspection.

Link: https://lkml.kernel.org/r/20230920035336.854212-1-willy@infradead.org


Fixes: 617c28ec ("filemap: batch PTE mappings")
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 7c315158
mm/filemap.c: +2 −2
@@ -3503,7 +3503,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 		if (count) {
 			set_pte_range(vmf, folio, page, count, addr);
 			folio_ref_add(folio, count);
-			if (in_range(vmf->address, addr, count))
+			if (in_range(vmf->address, addr, count * PAGE_SIZE))
 				ret = VM_FAULT_NOPAGE;
 		}

@@ -3517,7 +3517,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 	if (count) {
 		set_pte_range(vmf, folio, page, count, addr);
 		folio_ref_add(folio, count);
-		if (in_range(vmf->address, addr, count))
+		if (in_range(vmf->address, addr, count * PAGE_SIZE))
 			ret = VM_FAULT_NOPAGE;
 	}
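
As an aside, a minimal userspace sketch of why the fix matters: in_range(val, start, len) tests whether val lies in [start, start + len), with len in bytes, so passing the page count "count" made the check cover only a handful of bytes and the just-faulted address was almost never reported as mapped. The simplified in_range() helper, the PAGE_SIZE value, and the sample addresses below are illustrative assumptions, not the kernel's actual minmax.h macro or filemap.c code.

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Simplified stand-in for the kernel's in_range(): is val in [start, start + len)? */
static bool in_range(unsigned long val, unsigned long start, unsigned long len)
{
	return val - start < len;
}

int main(void)
{
	unsigned long addr = 0x7f0000000000UL;		/* start address of the batch just mapped */
	unsigned long count = 8;			/* number of pages mapped in the batch */
	unsigned long fault = addr + 3 * PAGE_SIZE;	/* the address that triggered the fault */

	/* Buggy check: len is 8 *bytes*, so the faulting address is missed (prints 0). */
	printf("len = count:             %d\n", in_range(fault, addr, count));
	/* Fixed check: len is 8 pages in bytes, so the address is covered (prints 1). */
	printf("len = count * PAGE_SIZE: %d\n", in_range(fault, addr, count * PAGE_SIZE));
	return 0;
}

With the byte-length check, the faulting address is recognized as covered by the batch, filemap_map_folio_range() can return VM_FAULT_NOPAGE, and the per-VMA-lock fault path no longer has to fall back to retrying under mmap_lock.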