Commit 13e4ad2c authored by Nadav Amit's avatar Nadav Amit Committed by Linus Torvalds

hugetlbfs: flush before unlock on move_hugetlb_page_tables()
We must flush the TLB before releasing i_mmap_rwsem to avoid the
potential reuse of an unshared PMD page.  move_hugetlb_page_tables()
does not do so: it releases the lock first, so the last reference on
the page table can be dropped before the TLB flush takes place.

Prevent it by reordering the operations and flushing the TLB before
releasing i_mmap_rwsem.

Fixes: 550a7d60 ("mm, hugepages: add mremap() support for hugepage backed vma")
Signed-off-by: Nadav Amit <namit@vmware.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mina Almasry <almasrymina@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent a4a118f2
@@ -4919,9 +4919,9 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 
 		move_huge_pte(vma, old_addr, new_addr, src_pte);
 	}
-	i_mmap_unlock_write(mapping);
 	flush_tlb_range(vma, old_end - len, old_end);
 	mmu_notifier_invalidate_range_end(&range);
+	i_mmap_unlock_write(mapping);
 
 	return len + old_addr - old_end;
 }