Commit e9adcfec authored by Mike Kravetz, committed by Andrew Morton

mm: remove zap_page_range and create zap_vma_pages

zap_page_range was originally designed to unmap pages within an address
range that could span multiple vmas.  While working on [1], it was
discovered that all callers of zap_page_range pass a range entirely within
a single vma.  In addition, the mmu notification call within
zap_page_range does not correctly handle ranges that span multiple vmas:
when crossing a vma boundary, a new mmu_notifier_range_init/end call pair
with the new vma should be made.

Instead of fixing zap_page_range, do the following:
- Create a new routine zap_vma_pages() that will remove all pages within
  the passed vma.  Most users of zap_page_range pass the entire vma and
  can use this new routine.
- For callers of zap_page_range not passing the entire vma, instead call
  zap_page_range_single().
- Remove zap_page_range.

[1] https://lore.kernel.org/linux-mm/20221114235507.294320-2-mike.kravetz@oracle.com/
Link: https://lkml.kernel.org/r/20230104002732.232573-1-mike.kravetz@oracle.com


Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Suggested-by: Peter Xu <peterx@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Peter Xu <peterx@redhat.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>	[s390]
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nadav Amit <nadav.amit@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent bbc61844
+2 −4
@@ -138,13 +138,11 @@ int vdso_join_timens(struct task_struct *task, struct time_namespace *ns)
 	mmap_read_lock(mm);
 
 	for_each_vma(vmi, vma) {
-		unsigned long size = vma->vm_end - vma->vm_start;
-
 		if (vma_is_special_mapping(vma, vdso_info[VDSO_ABI_AA64].dm))
-			zap_page_range(vma, vma->vm_start, size);
+			zap_vma_pages(vma);
 #ifdef CONFIG_COMPAT_VDSO
 		if (vma_is_special_mapping(vma, vdso_info[VDSO_ABI_AA32].dm))
-			zap_page_range(vma, vma->vm_start, size);
+			zap_vma_pages(vma);
 #endif
 	}

+1 −3
@@ -120,10 +120,8 @@ int vdso_join_timens(struct task_struct *task, struct time_namespace *ns)

 	mmap_read_lock(mm);
 	for_each_vma(vmi, vma) {
-		unsigned long size = vma->vm_end - vma->vm_start;
-
 		if (vma_is_special_mapping(vma, &vvar_spec))
-			zap_page_range(vma, vma->vm_start, size);
+			zap_vma_pages(vma);
 	}
 	mmap_read_unlock(mm);

+1 −1
@@ -414,7 +414,7 @@ static vm_fault_t vas_mmap_fault(struct vm_fault *vmf)
 	/*
 	 * When the LPAR lost credits due to core removal or during
 	 * migration, invalidate the existing mapping for the current
-	 * paste addresses and set windows in-active (zap_page_range in
+	 * paste addresses and set windows in-active (zap_vma_pages in
 	 * reconfig_close_windows()).
 	 * New mapping will be done later after migration or new credits
 	 * available. So continue to receive faults if the user space
+1 −2
@@ -760,8 +760,7 @@ static int reconfig_close_windows(struct vas_caps *vcap, int excess_creds,
 		 * is done before the original mmap() and after the ioctl.
 		 */
 		if (vma)
-			zap_page_range(vma, vma->vm_start,
-					vma->vm_end - vma->vm_start);
+			zap_vma_pages(vma);
 
 		mmap_write_unlock(task_ref->mm);
 		mutex_unlock(&task_ref->mmap_mutex);
+2 −4
@@ -124,13 +124,11 @@ int vdso_join_timens(struct task_struct *task, struct time_namespace *ns)
 	mmap_read_lock(mm);
 
 	for_each_vma(vmi, vma) {
-		unsigned long size = vma->vm_end - vma->vm_start;
-
 		if (vma_is_special_mapping(vma, vdso_info.dm))
-			zap_page_range(vma, vma->vm_start, size);
+			zap_vma_pages(vma);
 #ifdef CONFIG_COMPAT
 		if (vma_is_special_mapping(vma, compat_vdso_info.dm))
-			zap_page_range(vma, vma->vm_start, size);
+			zap_vma_pages(vma);
 #endif
 	}
