Unverified Commit 42af3e41 authored by openeuler-ci-bot, committed by Gitee

!5199 v2 mTHP anon support

Merge Pull Request from: @ci-robot 
 
PR sync from: Kefeng Wang <wangkefeng.wang@huawei.com>
https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/UR5GOYTLQOC2ZNZDSLPIHZ3P6LZVHKML/ 
Let's support mTHP for anonymous pages.

David Hildenbrand (10):
  mm/rmap: drop stale comment in page_add_anon_rmap and
    hugepage_add_anon_rmap()
  mm/rmap: move SetPageAnonExclusive out of __page_set_anon_rmap()
  mm/rmap: move folio_test_anon() check out of __folio_set_anon()
  mm/rmap: warn on new PTE-mapped folios in page_add_anon_rmap()
  mm/rmap: simplify PageAnonExclusive sanity checks when adding anon
    rmap
  mm/rmap: pass folio to hugepage_add_anon_rmap()
  mm/rmap: move SetPageAnonExclusive() out of page_move_anon_rmap()
  mm/rmap: convert page_move_anon_rmap() to folio_move_anon_rmap()
  memory: move exclusivity detection in do_wp_page() into
    wp_can_reuse_anon_folio()
  uprobes: use pagesize-aligned virtual address when replacing pages

Ryan Roberts (12):
  mm: more ptep_get() conversion
  mm/readahead: do not allow order-1 folio
  mm: allow deferred splitting of arbitrary anon large folios
  mm: non-pmd-mappable, large folios for folio_add_new_anon_rmap()
  mm: thp: introduce multi-size THP sysfs interface
  mm: thp: support allocation of anonymous multi-size THP
  selftests/mm/khugepaged: restore thp settings at exit
  selftests/mm: factor out thp settings management
  selftests/mm: support multi-size THP interface in thp_settings
  selftests/mm/khugepaged: enlighten for multi-size THP
  selftests/mm/cow: generalize do_run_with_thp() helper
  selftests/mm/cow: add tests for anonymous multi-size THP

Zach O'Keefe (1):
  mm/thp: fix "mm: thp: kill __transhuge_page_enabled()"


-- 
2.27.0
 
https://gitee.com/openeuler/kernel/issues/I98AW9 
 
Link: https://gitee.com/openeuler/kernel/pulls/5199

 

Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
parents 080c1dc3 a31200eb
+78 −19 Documentation/admin-guide/mm/transhuge.rst
@@ -45,10 +45,25 @@ components:
   the two is using hugepages just because of the fact the TLB miss is
   going to run faster.

+Modern kernels support "multi-size THP" (mTHP), which introduces the
+ability to allocate memory in blocks that are bigger than a base page
+but smaller than traditional PMD-size (as described above), in
+increments of a power-of-2 number of pages. mTHP can back anonymous
+memory (for example 16K, 32K, 64K, etc). These THPs continue to be
+PTE-mapped, but in many cases can still provide similar benefits to
+those outlined above: Page faults are significantly reduced (by a
+factor of e.g. 4, 8, 16, etc), but latency spikes are much less
+prominent because the size of each page isn't as huge as the PMD-sized
+variant and there is less memory to clear in each page fault. Some
+architectures also employ TLB compression mechanisms to squeeze more
+entries in when a set of PTEs are virtually and physically contiguous
+and appropriately aligned. In this case, TLB misses will occur less
+often.
+
 THP can be enabled system wide or restricted to certain tasks or even
 memory ranges inside task's address space. Unless THP is completely
 disabled, there is ``khugepaged`` daemon that scans memory and
-collapses sequences of basic pages into huge pages.
+collapses sequences of basic pages into PMD-sized huge pages.

 The THP behaviour is controlled via :ref:`sysfs <thp_sysfs>`
 interface and using madvise(2) and prctl(2) system calls.
@@ -95,12 +110,40 @@ Global THP controls
 Transparent Hugepage Support for anonymous memory can be entirely disabled
 (mostly for debugging purposes) or only enabled inside MADV_HUGEPAGE
 regions (to avoid the risk of consuming more memory resources) or enabled
-system wide. This can be achieved with one of::
+system wide. This can be achieved per-supported-THP-size with one of::

+	echo always >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
+	echo madvise >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
+	echo never >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
+
+where <size> is the hugepage size being addressed, the available sizes
+for which vary by system.
+
+For example::
+
+	echo always >/sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled
+
+Alternatively it is possible to specify that a given hugepage size
+will inherit the top-level "enabled" value::
+
+	echo inherit >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
+
+For example::
+
+	echo inherit >/sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled
+
+The top-level setting (for use with "inherit") can be set by issuing
+one of the following commands::
+
 	echo always >/sys/kernel/mm/transparent_hugepage/enabled
 	echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
 	echo never >/sys/kernel/mm/transparent_hugepage/enabled

+By default, PMD-sized hugepages have enabled="inherit" and all other
+hugepage sizes have enabled="never". If enabling multiple hugepage
+sizes, the kernel will select the most appropriate enabled size for a
+given allocation.
+
 It's also possible to limit defrag efforts in the VM to generate
 anonymous hugepages in case they're not immediately free to madvise
 regions or to never try to defrag memory and simply fallback to regular
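
As an illustration of the interface documented above, a userspace program could enumerate the per-size controls; the sysfs paths are taken from the documentation, while the program itself is only a minimal sketch::

	/* Sketch: list each supported mTHP size and its current policy.
	 * Assumes only the sysfs layout documented above. */
	#include <glob.h>
	#include <stdio.h>

	int main(void)
	{
		glob_t g;
		size_t i;

		if (glob("/sys/kernel/mm/transparent_hugepage/hugepages-*kB/enabled",
			 0, NULL, &g))
			return 1;
		for (i = 0; i < g.gl_pathc; i++) {
			char buf[128] = "";
			FILE *f = fopen(g.gl_pathv[i], "r");

			if (!f)
				continue;
			if (fgets(buf, sizeof(buf), f))
				/* e.g. "always inherit madvise [never]" */
				printf("%s: %s", g.gl_pathv[i], buf);
			fclose(f);
		}
		globfree(&g);
		return 0;
	}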
@@ -146,25 +189,34 @@ madvise
 never
 	should be self-explanatory.

-By default kernel tries to use huge zero page on read page fault to
-anonymous mapping. It's possible to disable huge zero page by writing 0
-or enable it back by writing 1::
+By default kernel tries to use huge, PMD-mappable zero page on read
+page fault to anonymous mapping. It's possible to disable huge zero
+page by writing 0 or enable it back by writing 1::

 	echo 0 >/sys/kernel/mm/transparent_hugepage/use_zero_page
 	echo 1 >/sys/kernel/mm/transparent_hugepage/use_zero_page

-Some userspace (such as a test program, or an optimized memory allocation
-library) may want to know the size (in bytes) of a transparent hugepage::
+Some userspace (such as a test program, or an optimized memory
+allocation library) may want to know the size (in bytes) of a
+PMD-mappable transparent hugepage::

 	cat /sys/kernel/mm/transparent_hugepage/hpage_pmd_size

-khugepaged will be automatically started when
-transparent_hugepage/enabled is set to "always" or "madvise", and it'll
-be automatically shutdown if it's set to "never".
+khugepaged will be automatically started when one or more hugepage
+sizes are enabled (either by directly setting "always" or "madvise",
+or by setting "inherit" while the top-level enabled is set to "always"
+or "madvise"), and it'll be automatically shutdown when the last
+hugepage size is disabled (either by directly setting "never", or by
+setting "inherit" while the top-level enabled is set to "never").

 Khugepaged controls
 -------------------

+.. note::
+   khugepaged currently only searches for opportunities to collapse to
+   PMD-sized THP and no attempt is made to collapse to other THP
+   sizes.
+
 khugepaged runs usually at low frequency so while one may not want to
 invoke defrag algorithms synchronously during the page faults, it
 should be worth invoking defrag at least in khugepaged. However it's
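
A minimal sketch of reading that value from C (the sysfs path is from the documentation above; everything else is illustrative)::

	#include <stdio.h>

	int main(void)
	{
		unsigned long bytes = 0;
		FILE *f = fopen("/sys/kernel/mm/transparent_hugepage/hpage_pmd_size",
				"r");

		/* hpage_pmd_size reports the PMD-mappable THP size in bytes */
		if (f && fscanf(f, "%lu", &bytes) == 1)
			printf("PMD-sized THP: %lu bytes\n", bytes);
		if (f)
			fclose(f);
		return 0;
	}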
@@ -282,19 +334,26 @@ force
 Need of application restart
 ===========================

-The transparent_hugepage/enabled values and tmpfs mount option only affect
-future behavior. So to make them effective you need to restart any
-application that could have been using hugepages. This also applies to the
-regions registered in khugepaged.
+The transparent_hugepage/enabled and
+transparent_hugepage/hugepages-<size>kB/enabled values and tmpfs mount
+option only affect future behavior. So to make them effective you need
+to restart any application that could have been using hugepages. This
+also applies to the regions registered in khugepaged.

 Monitoring usage
 ================

-The number of anonymous transparent huge pages currently used by the
+.. note::
+   Currently the below counters only record events relating to
+   PMD-sized THP. Events relating to other THP sizes are not included.
+
+The number of PMD-sized anonymous transparent huge pages currently used by the
 system is available by reading the AnonHugePages field in ``/proc/meminfo``.
-To identify what applications are using anonymous transparent huge pages,
-it is necessary to read ``/proc/PID/smaps`` and count the AnonHugePages fields
-for each mapping.
+To identify what applications are using PMD-sized anonymous transparent huge
+pages, it is necessary to read ``/proc/PID/smaps`` and count the AnonHugePages
+fields for each mapping. (Note that AnonHugePages only applies to traditional
+PMD-sized THP for historical reasons and should have been called
+AnonHugePmdMapped).

 The number of file transparent huge pages mapped to userspace is available
 by reading ShmemPmdMapped and ShmemHugePages fields in ``/proc/meminfo``.
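
Following the procedure above, a small sketch that totals AnonHugePages across the mappings of the current process (illustrative only)::

	#include <stdio.h>

	int main(void)
	{
		char line[256];
		unsigned long kb, total = 0;
		FILE *f = fopen("/proc/self/smaps", "r");

		if (!f)
			return 1;
		/* sum the per-mapping "AnonHugePages: N kB" fields */
		while (fgets(line, sizeof(line), f))
			if (sscanf(line, "AnonHugePages: %lu kB", &kb) == 1)
				total += kb;
		fclose(f);
		printf("AnonHugePages: %lu kB total\n", total);
		return 0;
	}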
@@ -413,7 +472,7 @@ for huge pages.
 Optimizing the applications
 ===========================

-To be guaranteed that the kernel will map a 2M page immediately in any
+To be guaranteed that the kernel will map a THP immediately in any
 memory region, the mmap region has to be hugepage naturally
 aligned. posix_memalign() can provide that guarantee.
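
A minimal sketch of that approach, assuming a 2M PMD-sized THP for brevity (a robust program would read hpage_pmd_size as shown earlier)::

	#include <stdlib.h>
	#include <stdio.h>
	#include <sys/mman.h>

	int main(void)
	{
		size_t len = 2UL * 1024 * 1024;	/* assumed PMD size */
		void *buf = NULL;

		/* naturally aligned region, as the text above requires */
		if (posix_memalign(&buf, len, len))
			return 1;
		if (madvise(buf, len, MADV_HUGEPAGE))	/* opt in to THP */
			perror("madvise");
		free(buf);
		return 0;
	}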

+3 −3 Documentation/filesystems/proc.rst
@@ -532,9 +532,9 @@ replaced by copy-on-write) part of the underlying shmem object out on swap.
 does not take into account swapped out page of underlying shmem objects.
 "Locked" indicates whether the mapping is locked in memory or not.

-"THPeligible" indicates whether the mapping is eligible for allocating THP
-pages as well as the THP is PMD mappable or not - 1 if true, 0 otherwise.
-It just shows the current status.
+"THPeligible" indicates whether the mapping is eligible for allocating
+naturally aligned THP pages of any currently enabled size. 1 if true, 0
+otherwise.

 "VmFlags" field deserves a separate description. This member represents the
 kernel flags associated with the particular virtual memory area in two letter
+2 −1 fs/proc/task_mmu.c
@@ -869,7 +869,8 @@ static int show_smap(struct seq_file *m, void *v)
 	__show_smap(m, &mss, false);

 	seq_printf(m, "THPeligible:    %8u\n",
-		   hugepage_vma_check(vma, vma->vm_flags, true, false, true));
+		   !!thp_vma_allowable_orders(vma, vma->vm_flags, true, false,
+					      true, THP_ORDERS_ALL));

 	if (arch_pkeys_enabled())
 		seq_printf(m, "ProtectionKey:  %8u\n", vma_pkey(vma));
+157 −26 include/linux/huge_mm.h
@@ -67,6 +67,26 @@ extern struct kobj_attribute shmem_enabled_attr;
 #define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT-PAGE_SHIFT)
 #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)

+/*
+ * Mask of all large folio orders supported for anonymous THP; all orders up to
+ * and including PMD_ORDER, except order-0 (which is not "huge") and order-1
+ * (which is a limitation of the THP implementation).
+ */
+#define THP_ORDERS_ALL_ANON	((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1)))
+
+/*
+ * Mask of all large folio orders supported for file THP.
+ */
+#define THP_ORDERS_ALL_FILE	(BIT(PMD_ORDER) | BIT(PUD_ORDER))
+
+/*
+ * Mask of all large folio orders supported for THP.
+ */
+#define THP_ORDERS_ALL		(THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_FILE)
+
+#define thp_vma_allowable_order(vma, vm_flags, smaps, in_pf, enforce_sysfs, order) \
+	(!!thp_vma_allowable_orders(vma, vm_flags, smaps, in_pf, enforce_sysfs, BIT(order)))
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define HPAGE_PMD_SHIFT PMD_SHIFT
 #define HPAGE_PMD_SIZE	((1UL) << HPAGE_PMD_SHIFT)
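
To make the bitfield encoding concrete, here is a hedged userspace model of THP_ORDERS_ALL_ANON; the PMD_ORDER of 9 is an assumption matching 4K pages and 2M PMDs, not part of the patch::

	#include <stdio.h>

	#define BIT(nr)			(1UL << (nr))
	#define PMD_ORDER		9	/* assumption: 4K pages, 2M PMD */
	#define THP_ORDERS_ALL_ANON	((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1)))

	int main(void)
	{
		int order;

		/* prints order-2 (16 kB) through order-9 (2048 kB) */
		for (order = 0; order <= PMD_ORDER; order++)
			if (THP_ORDERS_ALL_ANON & BIT(order))
				printf("order-%d: %lu kB\n", order, 4UL << order);
		return 0;
	}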
@@ -77,45 +97,105 @@ extern struct kobj_attribute shmem_enabled_attr;
 #define HPAGE_PUD_MASK	(~(HPAGE_PUD_SIZE - 1))

 extern unsigned long transparent_hugepage_flags;
+extern unsigned long huge_anon_orders_always;
+extern unsigned long huge_anon_orders_madvise;
+extern unsigned long huge_anon_orders_inherit;

-#define hugepage_flags_enabled()					       \
-	(transparent_hugepage_flags &				       \
-	 ((1<<TRANSPARENT_HUGEPAGE_FLAG) |		       \
-	  (1<<TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG)))
-#define hugepage_flags_always()				\
-	(transparent_hugepage_flags &			\
-	 (1<<TRANSPARENT_HUGEPAGE_FLAG))
+static inline bool hugepage_global_enabled(void)
+{
+	return transparent_hugepage_flags &
+			((1<<TRANSPARENT_HUGEPAGE_FLAG) |
+			(1<<TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG));
+}
+
+static inline bool hugepage_global_always(void)
+{
+	return transparent_hugepage_flags &
+			(1<<TRANSPARENT_HUGEPAGE_FLAG);
+}
+
+static inline bool hugepage_flags_enabled(void)
+{
+	/*
+	 * We cover both the anon and the file-backed case here; we must return
+	 * true if globally enabled, even when all anon sizes are set to never.
+	 * So we don't need to look at huge_anon_orders_inherit.
+	 */
+	return hugepage_global_enabled() ||
+	       huge_anon_orders_always ||
+	       huge_anon_orders_madvise;
+}
+
+static inline int highest_order(unsigned long orders)
+{
+	return fls_long(orders) - 1;
+}
+
+static inline int next_order(unsigned long *orders, int prev)
+{
+	*orders &= ~BIT(prev);
+	return highest_order(*orders);
+}

 /*
  * Do the below checks:
  *   - For file vma, check if the linear page offset of vma is
- *     HPAGE_PMD_NR aligned within the file.  The hugepage is
- *     guaranteed to be hugepage-aligned within the file, but we must
- *     check that the PMD-aligned addresses in the VMA map to
- *     PMD-aligned offsets within the file, else the hugepage will
- *     not be PMD-mappable.
- *   - For all vmas, check if the haddr is in an aligned HPAGE_PMD_SIZE
+ *     order-aligned within the file.  The hugepage is
+ *     guaranteed to be order-aligned within the file, but we must
+ *     check that the order-aligned addresses in the VMA map to
+ *     order-aligned offsets within the file, else the hugepage will
+ *     not be mappable.
+ *   - For all vmas, check if the haddr is in an aligned hugepage
  *     area.
  */
-static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
-		unsigned long addr)
+static inline bool thp_vma_suitable_order(struct vm_area_struct *vma,
+		unsigned long addr, int order)
 {
+	unsigned long hpage_size = PAGE_SIZE << order;
 	unsigned long haddr;

 	/* Don't have to check pgoff for anonymous vma */
 	if (!vma_is_anonymous(vma)) {
 		if (!IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
-				HPAGE_PMD_NR))
+				hpage_size >> PAGE_SHIFT))
 			return false;
 	}

-	haddr = addr & HPAGE_PMD_MASK;
+	haddr = ALIGN_DOWN(addr, hpage_size);

-	if (haddr < vma->vm_start || haddr + HPAGE_PMD_SIZE > vma->vm_end)
+	if (haddr < vma->vm_start || haddr + hpage_size > vma->vm_end)
 		return false;
 	return true;
 }
+
+/*
+ * Filter the bitfield of input orders to the ones suitable for use in the vma.
+ * See thp_vma_suitable_order().
+ * All orders that pass the checks are returned as a bitfield.
+ */
+static inline unsigned long thp_vma_suitable_orders(struct vm_area_struct *vma,
+		unsigned long addr, unsigned long orders)
+{
+	int order;
+
+	/*
+	 * Iterate over orders, highest to lowest, removing orders that don't
+	 * meet alignment requirements from the set. Exit loop at first order
+	 * that meets requirements, since all lower orders must also meet
+	 * requirements.
+	 */
+
+	order = highest_order(orders);
+
+	while (orders) {
+		if (thp_vma_suitable_order(vma, addr, order))
+			break;
+		order = next_order(&orders, order);
+	}
+
+	return orders;
+}

 static inline bool file_thp_enabled(struct vm_area_struct *vma)
 {
 	struct inode *inode;
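
A hedged userspace model of the walk above, with highest_order()/next_order() driving thp_vma_suitable_order(), fls_long() emulated via the GCC __builtin_clzl builtin, and a simplified VMA; all names here are illustrative, not kernel code::

	#include <stdio.h>
	#include <stdbool.h>

	#define PAGE_SIZE 4096UL	/* assumption: 4K base pages */

	struct vma { unsigned long start, end; };

	static int highest_order(unsigned long orders)
	{
		return 8 * (int)sizeof(long) - 1 - __builtin_clzl(orders);
	}

	static int next_order(unsigned long *orders, int prev)
	{
		*orders &= ~(1UL << prev);
		return *orders ? highest_order(*orders) : -1;
	}

	/* anonymous-VMA version of the alignment/range check above */
	static bool suitable_order(struct vma *v, unsigned long addr, int order)
	{
		unsigned long size = PAGE_SIZE << order;
		unsigned long haddr = addr & ~(size - 1);	/* ALIGN_DOWN */

		return haddr >= v->start && haddr + size <= v->end;
	}

	static unsigned long suitable_orders(struct vma *v, unsigned long addr,
					     unsigned long orders)
	{
		int order = highest_order(orders);

		/* drop orders, highest first, until one fits; all lower
		 * orders then fit as well */
		while (orders) {
			if (suitable_order(v, addr, order))
				break;
			order = next_order(&orders, order);
		}
		return orders;
	}

	int main(void)
	{
		/* 192K VMA: order-9 (2M) cannot fit, order-4 (64K) can */
		struct vma v = { 0x20000, 0x50000 };
		unsigned long orders = (1UL << 9) | (1UL << 4) | (1UL << 2);

		/* prints 0x14, i.e. orders 4 and 2 survive */
		printf("suitable: %#lx\n", suitable_orders(&v, 0x30000, orders));
		return 0;
	}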
@@ -130,8 +210,52 @@ static inline bool file_thp_enabled(struct vm_area_struct *vma)
 	       !inode_is_open_for_write(inode) && S_ISREG(inode->i_mode);
 }

-bool hugepage_vma_check(struct vm_area_struct *vma, unsigned long vm_flags,
-			bool smaps, bool in_pf, bool enforce_sysfs);
+unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
+					 unsigned long vm_flags, bool smaps,
+					 bool in_pf, bool enforce_sysfs,
+					 unsigned long orders);

+/**
+ * thp_vma_allowable_orders - determine hugepage orders that are allowed for vma
+ * @vma:  the vm area to check
+ * @vm_flags: use these vm_flags instead of vma->vm_flags
+ * @smaps: whether answer will be used for smaps file
+ * @in_pf: whether answer will be used by page fault handler
+ * @enforce_sysfs: whether sysfs config should be taken into account
+ * @orders: bitfield of all orders to consider
+ *
+ * Calculates the intersection of the requested hugepage orders and the allowed
+ * hugepage orders for the provided vma. Permitted orders are encoded as a set
+ * bit at the corresponding bit position (bit-2 corresponds to order-2, bit-3
+ * corresponds to order-3, etc). Order-0 is never considered a hugepage order.
+ *
+ * Return: bitfield of orders allowed for hugepage in the vma. 0 if no hugepage
+ * orders are allowed.
+ */
+static inline
+unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
+				       unsigned long vm_flags, bool smaps,
+				       bool in_pf, bool enforce_sysfs,
+				       unsigned long orders)
+{
+	/* Optimization to check if required orders are enabled early. */
+	if (enforce_sysfs && vma_is_anonymous(vma)) {
+		unsigned long mask = READ_ONCE(huge_anon_orders_always);
+
+		if (vm_flags & VM_HUGEPAGE)
+			mask |= READ_ONCE(huge_anon_orders_madvise);
+		if (hugepage_global_always() ||
+		    ((vm_flags & VM_HUGEPAGE) && hugepage_global_enabled()))
+			mask |= READ_ONCE(huge_anon_orders_inherit);
+
+		orders &= mask;
+		if (!orders)
+			return 0;
+	}
+
+	return __thp_vma_allowable_orders(vma, vm_flags, smaps, in_pf,
+					  enforce_sysfs, orders);
+}

 #define transparent_hugepage_use_zero_page()				\
 	(transparent_hugepage_flags &					\
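
And a hedged model of the early sysfs filtering in thp_vma_allowable_orders() above; the per-order policy masks mirror the huge_anon_orders_* variables, but this is a standalone illustration, not kernel code::

	#include <stdio.h>
	#include <stdbool.h>

	/* stand-ins for the kernel's huge_anon_orders_* variables */
	static unsigned long orders_always;
	static unsigned long orders_madvise;
	static unsigned long orders_inherit;

	static unsigned long allowable(bool vm_hugepage, bool glob_always,
				       bool glob_enabled, unsigned long orders)
	{
		unsigned long mask = orders_always;

		if (vm_hugepage)
			mask |= orders_madvise;
		if (glob_always || (vm_hugepage && glob_enabled))
			mask |= orders_inherit;
		return orders & mask;	/* 0 means no THP order allowed */
	}

	int main(void)
	{
		orders_always = 1UL << 4;	/* 64K set to "always" */
		orders_inherit = 1UL << 9;	/* PMD size inherits */

		/* global "madvise" policy, VMA marked MADV_HUGEPAGE:
		 * expect bits 4 and 9, i.e. 0x210 */
		printf("%#lx\n", allowable(true, false, true, ~0UL));
		return 0;
	}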
@@ -267,17 +391,24 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)
	return false;
}

static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
		unsigned long addr)
static inline bool thp_vma_suitable_order(struct vm_area_struct *vma,
		unsigned long addr, int order)
{
	return false;
}

static inline bool hugepage_vma_check(struct vm_area_struct *vma,
static inline unsigned long thp_vma_suitable_orders(struct vm_area_struct *vma,
		unsigned long addr, unsigned long orders)
{
	return 0;
}

static inline unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
					unsigned long vm_flags, bool smaps,
				      bool in_pf, bool enforce_sysfs)
					bool in_pf, bool enforce_sysfs,
					unsigned long orders)
{
	return false;
	return 0;
}

static inline void folio_prep_large_rmappable(struct folio *folio) {}
+2 −2 include/linux/rmap.h
@@ -189,7 +189,7 @@ typedef int __bitwise rmap_t;
 /*
  * rmap interfaces called when adding or removing pte of page
  */
-void page_move_anon_rmap(struct page *, struct vm_area_struct *);
+void folio_move_anon_rmap(struct folio *, struct vm_area_struct *);
 void page_add_anon_rmap(struct page *, struct vm_area_struct *,
 		unsigned long address, rmap_t flags);
 void page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
@@ -203,7 +203,7 @@ void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
 void page_remove_rmap(struct page *, struct vm_area_struct *,
 		bool compound);

-void hugepage_add_anon_rmap(struct page *, struct vm_area_struct *,
+void hugepage_add_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address, rmap_t flags);
 void hugepage_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address);