Commit f358afc5 authored by Christoph Hellwig, committed by Linus Torvalds

mm: remove flush_kernel_dcache_page

flush_kernel_dcache_page is a rather confusing interface that implements
only a subset of flush_dcache_page, because it cannot properly handle page
cache pages that are mapped into user space.

The only callers left are in the exec code, as all other previous callers
were incorrect: they could have been handed page cache pages.  Replace the
calls to flush_kernel_dcache_page with calls to flush_dcache_page, which
for all architectures either does exactly the same thing or additionally
contains one or more of the following (a minimal sketch follows the list):

 1) an optimization to defer the cache flush for page cache pages not
    mapped into userspace
 2) additional flushing for mapped page cache pages if cache aliases
    are possible
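
A minimal sketch, assuming a hypothetical architecture helper
cpu_flush_dcache_area() (modeled loosely on the ARM code removed below), of
how a flush_dcache_page() implementation can combine both behaviors:

    void my_flush_dcache_page(struct page *page)
    {
            struct address_space *mapping = page_mapping_file(page);

            /*
             * 1) Page cache page with no user mappings yet: defer the
             *    flush by marking the architecture private bit; the
             *    flush happens later, e.g. in update_mmu_cache().
             */
            if (mapping && !mapping_mapped(mapping)) {
                    set_bit(PG_arch_1, &page->flags);
                    return;
            }

            /*
             * 2) User mappings may alias the kernel mapping: flush the
             *    kernel-virtual copy now (highmem handling omitted).
             */
            cpu_flush_dcache_area(page_address(page), PAGE_SIZE);
    }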

Link: https://lkml.kernel.org/r/20210712060928.4161649-7-hch@lst.de


Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Cc: Alex Shi <alexs@kernel.org>
Cc: Geoff Levand <geoff@infradead.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Guo Ren <guoren@kernel.org>
Cc: Helge Deller <deller@gmx.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Cercueil <paul@crapouillou.net>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Vincent Chen <deanbo422@gmail.com>
Cc: Yoshinori Sato <ysato@users.osdn.me>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 0e84f5db
+37 −49
@@ -271,10 +271,15 @@ maps this page at its virtual address.

  ``void flush_dcache_page(struct page *page)``

	Any time the kernel writes to a page cache page, _OR_
	the kernel is about to read from a page cache page and
	user space shared/writable mappings of this page potentially
	exist, this routine is called.
        This routine must be called when:

	  a) the kernel did write to a page that is in the page cache
	     and / or in high memory
	  b) the kernel is about to read from a page cache page and user space
	     shared/writable mappings of this page potentially exist.  Note
	     that {get,pin}_user_pages{_fast} already call flush_dcache_page
	     on any page found in the user address space and thus driver
	     code rarely needs to take this into account.

	.. note::

@@ -284,38 +289,34 @@ maps this page at its virtual address.
	      handling vfs symlinks in the page cache need not call
	      this interface at all.

	The phrase "kernel writes to a page cache page" means,
	specifically, that the kernel executes store instructions
	that dirty data in that page at the page->virtual mapping
	of that page.  It is important to flush here to handle
	D-cache aliasing, to make sure these kernel stores are
	visible to user space mappings of that page.

	The corollary case is just as important, if there are users
	which have shared+writable mappings of this file, we must make
	sure that kernel reads of these pages will see the most recent
	stores done by the user.

	If D-cache aliasing is not an issue, this routine may
	simply be defined as a nop on that architecture.

        There is a bit set aside in page->flags (PG_arch_1) as
	"architecture private".  The kernel guarantees that,
	for pagecache pages, it will clear this bit when such
	a page first enters the pagecache.

	This allows these interfaces to be implemented much more
	efficiently.  It allows one to "defer" (perhaps indefinitely)
	the actual flush if there are currently no user processes
	mapping this page.  See sparc64's flush_dcache_page and
	update_mmu_cache implementations for an example of how to go
	about doing this.

	The idea is, first at flush_dcache_page() time, if
	page->mapping->i_mmap is an empty tree, just mark the architecture
	private page flag bit.  Later, in update_mmu_cache(), a check is
	made of this flag bit, and if set the flush is done and the flag
	bit is cleared.
	The phrase "kernel writes to a page cache page" means, specifically,
	that the kernel executes store instructions that dirty data in that
	page at the page->virtual mapping of that page.  It is important to
	flush here to handle D-cache aliasing, to make sure these kernel stores
	are visible to user space mappings of that page.

	The corollary case is just as important, if there are users which have
	shared+writable mappings of this file, we must make sure that kernel
	reads of these pages will see the most recent stores done by the user.

	If D-cache aliasing is not an issue, this routine may simply be defined
	as a nop on that architecture.

        There is a bit set aside in page->flags (PG_arch_1) as "architecture
	private".  The kernel guarantees that, for pagecache pages, it will
	clear this bit when such a page first enters the pagecache.

	This allows these interfaces to be implemented much more efficiently.
	It allows one to "defer" (perhaps indefinitely) the actual flush if
	there are currently no user processes mapping this page.  See sparc64's
	flush_dcache_page and update_mmu_cache implementations for an example
	of how to go about doing this.

	The idea is, first at flush_dcache_page() time, if page_file_mapping()
	returns a mapping, and mapping_mapped on that mapping returns %false,
	just mark the architecture private page flag bit.  Later, in
	update_mmu_cache(), a check is made of this flag bit, and if set the
	flush is done and the flag bit is cleared.
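
	As an illustration only, a minimal sketch of the update_mmu_cache()
	side of this scheme (my_update_mmu_cache and cpu_flush_dcache_area
	are hypothetical names; real implementations such as sparc64's differ
	in detail)::

	    static void my_update_mmu_cache(struct vm_area_struct *vma,
	                                    unsigned long address, pte_t *ptep)
	    {
	            unsigned long pfn = pte_pfn(*ptep);
	            struct page *page;

	            if (!pfn_valid(pfn))
	                    return;

	            page = pfn_to_page(pfn);
	            /*
	             * flush_dcache_page() deferred the flush by setting the
	             * architecture private bit; complete it now that a user
	             * mapping of the page exists.
	             */
	            if (page_mapping_file(page) &&
	                test_and_clear_bit(PG_arch_1, &page->flags))
	                    cpu_flush_dcache_area(page_address(page), PAGE_SIZE);
	    }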

	.. important::

@@ -351,19 +352,6 @@ maps this page at its virtual address.
	architectures).  For incoherent architectures, it should flush
	the cache of the page at vmaddr.

  ``void flush_kernel_dcache_page(struct page *page)``

	When the kernel needs to modify a user page it has obtained
	with kmap, it calls this function after all modifications are
	complete (but before kunmapping it) to bring the underlying
	page up to date.  It is assumed here that the user has no
	incoherent cached copies (i.e. the original page was obtained
	from a mechanism like get_user_pages()).  The default
	implementation is a nop and should remain so on all coherent
	architectures.  On incoherent architectures, this should flush
	the kernel cache for the page (using page_address(page)).
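
	For illustration only (not part of this patch; the buffer, offset and
	length below are made up), the calling pattern described above, with
	the replacement this commit makes::

	    /*
	     * page was obtained via get_user_pages() or similar, so no
	     * incoherent user-space cached copies are expected.
	     */
	    void *addr = kmap_local_page(page);

	    memcpy(addr + offset, src, len);   /* modify the page contents */
	    flush_dcache_page(page);           /* was flush_kernel_dcache_page(page) */
	    kunmap_local(addr);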


  ``void flush_icache_range(unsigned long start, unsigned long end)``

  	When the kernel stores into addresses that it will execute
+0 −9
@@ -298,15 +298,6 @@ The HyperSparc cpu is one such cpu with this property.
	called.  The default implementation is a nop (and should remain so on
	all coherent architectures).  For incoherent architectures, it should
	flush the cache of the page at vmaddr.

  ``void flush_kernel_dcache_page(struct page *page)``

	When the kernel needs to modify a user page it has obtained with kmap,
	it calls this function after all modifications are complete (but before
	kunmapping it) to bring the underlying page up to date.  It is assumed
	here that the user has no incoherent cached copies (i.e. the original
	page was obtained from a mechanism like get_user_pages()).  The default
	implementation is a nop and should remain so on all coherent
	architectures.  On incoherent architectures, this should flush the
	kernel cache for the page (using page_address(page)).


  ``void flush_icache_range(unsigned long start, unsigned long end)``

	This function is called when the kernel stores into addresses that it
	will execute, e.g. when loading a module.
+1 −3
@@ -291,6 +291,7 @@ extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr
#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
extern void flush_dcache_page(struct page *);

#define ARCH_IMPLEMENTS_FLUSH_KERNEL_VMAP_RANGE 1
static inline void flush_kernel_vmap_range(void *addr, int size)
{
	if ((cache_is_vivt() || cache_is_vipt_aliasing()))
@@ -312,9 +313,6 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
		__flush_anon_page(vma, page, vmaddr);
}

#define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
extern void flush_kernel_dcache_page(struct page *);

#define flush_dcache_mmap_lock(mapping)		xa_lock_irq(&mapping->i_pages)
#define flush_dcache_mmap_unlock(mapping)	xa_unlock_irq(&mapping->i_pages)

+0 −33
@@ -345,39 +345,6 @@ void flush_dcache_page(struct page *page)
}
EXPORT_SYMBOL(flush_dcache_page);

/*
 * Ensure cache coherency for the kernel mapping of this page. We can
 * assume that the page is pinned via kmap.
 *
 * If the page only exists in the page cache and there are no user
 * space mappings, this is a no-op since the page was already marked
 * dirty at creation.  Otherwise, we need to flush the dirty kernel
 * cache lines directly.
 */
void flush_kernel_dcache_page(struct page *page)
{
	if (cache_is_vivt() || cache_is_vipt_aliasing()) {
		struct address_space *mapping;

		mapping = page_mapping_file(page);

		if (!mapping || mapping_mapped(mapping)) {
			void *addr;

			addr = page_address(page);
			/*
			 * kmap_atomic() doesn't set the page virtual
			 * address for highmem pages, and
			 * kunmap_atomic() takes care of cache
			 * flushing already.
			 */
			if (!IS_ENABLED(CONFIG_HIGHMEM) || addr)
				__cpuc_flush_dcache_area(addr, PAGE_SIZE);
		}
	}
}
EXPORT_SYMBOL(flush_kernel_dcache_page);

/*
 * Flush an anonymous page so that users of get_user_pages()
 * can safely access the data.  The expected sequence is:
+0 −6
@@ -166,12 +166,6 @@ void flush_dcache_page(struct page *page)
}
EXPORT_SYMBOL(flush_dcache_page);

void flush_kernel_dcache_page(struct page *page)
{
	__cpuc_flush_dcache_area(page_address(page), PAGE_SIZE);
}
EXPORT_SYMBOL(flush_kernel_dcache_page);

void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
		       unsigned long uaddr, void *dst, const void *src,
		       unsigned long len)