Commit 3a255267 authored by Matthew Wilcox (Oracle), committed by Andrew Morton

mm: add generic flush_icache_pages() and documentation

flush_icache_page() is deprecated but not yet removed, so add a range
version of it.  Change the documentation to refer to
update_mmu_cache_range() instead of update_mmu_cache().

Link: https://lkml.kernel.org/r/20230802151406.3735276-4-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent a3793220
+22 −17
@@ -88,13 +88,17 @@ changes occur:
 
 	This is used primarily during fault processing.
 
-5) ``void update_mmu_cache(struct vm_area_struct *vma,
-   unsigned long address, pte_t *ptep)``
+5) ``void update_mmu_cache_range(struct vm_fault *vmf,
+   struct vm_area_struct *vma, unsigned long address, pte_t *ptep,
+   unsigned int nr)``
 
-	At the end of every page fault, this routine is invoked to
-	tell the architecture specific code that a translation
-	now exists at virtual address "address" for address space
-	"vma->vm_mm", in the software page tables.
+	At the end of every page fault, this routine is invoked to tell
+	the architecture specific code that translations now exist
+	in the software page tables for address space "vma->vm_mm"
+	at virtual address "address" for "nr" consecutive pages.
+
+	This routine is also invoked in various other places which pass
+	a NULL "vmf".
 
 	A port may use this information in any way it so chooses.
 	For example, it could use this event to pre-load TLB
@@ -306,17 +310,18 @@ maps this page at its virtual address.
 	private".  The kernel guarantees that, for pagecache pages, it will
 	clear this bit when such a page first enters the pagecache.
 
-	This allows these interfaces to be implemented much more efficiently.
-	It allows one to "defer" (perhaps indefinitely) the actual flush if
-	there are currently no user processes mapping this page.  See sparc64's
-	flush_dcache_page and update_mmu_cache implementations for an example
-	of how to go about doing this.
+	This allows these interfaces to be implemented much more
+	efficiently.  It allows one to "defer" (perhaps indefinitely) the
+	actual flush if there are currently no user processes mapping this
+	page.  See sparc64's flush_dcache_page and update_mmu_cache_range
+	implementations for an example of how to go about doing this.
 
-	The idea is, first at flush_dcache_page() time, if page_file_mapping()
-	returns a mapping, and mapping_mapped on that mapping returns %false,
-	just mark the architecture private page flag bit.  Later, in
-	update_mmu_cache(), a check is made of this flag bit, and if set the
-	flush is done and the flag bit is cleared.
+	The idea is, first at flush_dcache_page() time, if
+	page_file_mapping() returns a mapping, and mapping_mapped on that
+	mapping returns %false, just mark the architecture private page
+	flag bit.  Later, in update_mmu_cache_range(), a check is made
+	of this flag bit, and if set the flush is done and the flag bit
+	is cleared.
 
 	.. important::
 
@@ -369,7 +374,7 @@ maps this page at its virtual address.
   ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``
 
 	All the functionality of flush_icache_page can be implemented in
-	flush_dcache_page and update_mmu_cache. In the future, the hope
+	flush_dcache_page and update_mmu_cache_range. In the future, the hope
 	is to remove this interface completely.
 
 The final category of APIs is for I/O to deliberately aliased address
+5 −0
@@ -78,6 +78,11 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
 #endif
 
 #ifndef flush_icache_page
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+				     struct page *page, unsigned int nr)
+{
+}
+
 static inline void flush_icache_page(struct vm_area_struct *vma,
 				     struct page *page)
 {