Unverified commit e9340cc4, authored by openeuler-ci-bot, committed by Gitee

!14767 mm: shmem: control THP support through the kernel command line

Merge Pull Request from: @wedm23414 
 
Patch series "mm: add more kernel parameters to control mTHP", v5.

This series introduces four patches related to the kernel parameters
controlling mTHP and a fifth patch replacing `strcpy()` with `strscpy()` in
the file `mm/huge_memory.c`.

The first patch is a straightforward documentation update, correcting the
format of the kernel parameter ``thp_anon=``.

The second, third, and fourth patches focus on controlling THP support for
shmem via the kernel command line.  The second patch introduces a
parameter to control the global default huge page allocation policy for
the internal shmem mount.  The third patch moves a piece of code to a
shared header to ease the implementation of the fourth patch.  Finally,
the fourth patch implements a parameter similar to ``thp_anon=``, but for
shmem.

The goal of these changes is to simplify the configuration of systems that
rely on mTHP support for shmem.  For instance, a platform with a GPU that
benefits from huge pages may want to enable huge pages for shmem.  Having
these kernel parameters streamlines the configuration process and ensures
consistency across setups.


This patch (of 4):

Add a new kernel command line parameter, ``transparent_hugepage_shmem``,
to control the hugepage allocation policy for the internal shmem mount.
The parameter is similar to ``transparent_hugepage`` and has the following
format:

transparent_hugepage_shmem=<policy>

where ``<policy>`` is one of the six valid policies available for
shmem.
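As an illustration, a boot command line that selects the ``within_size`` policy for the internal shmem mount might contain the following fragment (hypothetical example; any of the valid policies may be used):

```
transparent_hugepage_shmem=within_size
```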

Configuring the default huge page allocation policy for the internal
shmem mount can be beneficial for DRM GPU drivers. Just like CPUs, GPUs
can take advantage of huge pages, but this is possible only if DRM GEM
objects are backed by huge pages.

Since GEM uses shmem to allocate anonymous pageable memory, having control
over the default huge page allocation policy allows exploring the use of
huge pages on GPUs that rely on GEM objects backed by shmem.

Maíra Canal (4):
  mm: shmem: control THP support through the kernel command line
  mm: move ``get_order_from_str()`` to internal.h
  mm: shmem: override mTHP shmem default with a kernel parameter

-- 
2.34.1

 
 
Link: https://gitee.com/openeuler/kernel/pulls/14767

 

Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Zhang Peng <zhangpeng362@huawei.com>
Signed-off-by: Zhang Peng <zhangpeng362@huawei.com>
parents 0d8e54dc 330c367c
Documentation/admin-guide/kernel-parameters.txt (+17 −0)
@@ -6537,6 +6537,16 @@
			Force threading of all interrupt handlers except those
			marked explicitly IRQF_NO_THREAD.

	thp_shmem=	[KNL]
			Format: <size>[KMG],<size>[KMG]:<policy>;<size>[KMG]-<size>[KMG]:<policy>
			Control the default policy of each hugepage size for the
			internal shmem mount. <policy> is one of the policies
			available for the shmem mount ("always", "inherit", "never",
			"within_size", and "advise").
			It can be used multiple times for multiple shmem THP sizes.
			See Documentation/admin-guide/mm/transhuge.rst for more
			details.

	topology=	[S390]
			Format: {off | on}
			Specify if the kernel should make use of the cpu
@@ -6726,6 +6736,13 @@
			See Documentation/admin-guide/mm/transhuge.rst
			for more details.

	transparent_hugepage_shmem= [KNL]
			Format: [always|within_size|advise|never|deny|force]
			Can be used to control the hugepage allocation policy for
			the internal shmem mount.
			See Documentation/admin-guide/mm/transhuge.rst
			for more details.

	trusted.source=	[KEYS]
			Format: <string>
			This parameter identifies the trust source as a backend
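To make the ``thp_shmem=`` entry above concrete, the following illustrative setting (hypothetical sizes, assuming a 4 KiB base page) applies ``within_size`` to the 64K through 256K sizes and ``advise`` to the 2M size:

```
thp_shmem=64K-256K:within_size;2M:advise
```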
Documentation/admin-guide/mm/transhuge.rst (+23 −0)
@@ -349,6 +349,29 @@ PMD_ORDER THP policy will be overridden. If the policy for PMD_ORDER
is not defined within a valid ``thp_anon``, its policy will default to
``never``.

Similarly to ``transparent_hugepage``, you can control the hugepage
allocation policy for the internal shmem mount by using the kernel parameter
``transparent_hugepage_shmem=<policy>``, where ``<policy>`` is one of the
six valid policies for shmem (``always``, ``within_size``, ``advise``,
``never``, ``deny``, and ``force``).

In the same manner as ``thp_anon`` controls each supported anonymous THP
size, ``thp_shmem`` controls each supported shmem THP size. ``thp_shmem``
has the same format as ``thp_anon``, but also supports the policy
``within_size``.

``thp_shmem=`` may be specified multiple times to configure all THP sizes
as required. If ``thp_shmem=`` is specified at least once, any shmem THP
sizes not explicitly configured on the command line are implicitly set to
``never``.

The ``transparent_hugepage_shmem`` setting only affects the global toggle. If
``thp_shmem`` is not specified, the PMD_ORDER hugepage policy will default to
``inherit``. However, if a valid ``thp_shmem`` setting is provided by the
user, the PMD_ORDER hugepage policy will be overridden. If the policy for
PMD_ORDER is not defined within a valid ``thp_shmem``, its policy will
default to ``never``.

Hugepages in tmpfs/shmem
========================

mm/huge_memory.c (+15 −23)
@@ -1067,26 +1067,6 @@ static int __init setup_transparent_hugepage(char *str)
}
__setup("transparent_hugepage=", setup_transparent_hugepage);

static inline int get_order_from_str(const char *size_str)
{
	unsigned long size;
	char *endptr;
	int order;

	size = memparse(size_str, &endptr);

	if (!is_power_of_2(size))
		goto err;
	order = get_order(size);
	if (BIT(order) & ~THP_ORDERS_ALL_ANON)
		goto err;

	return order;
err:
	pr_err("invalid size %s in thp_anon boot parameter\n", size_str);
	return -EINVAL;
}

static char str_dup[PAGE_SIZE] __initdata;
static int __init setup_thp_anon(char *str)
{
@@ -1116,10 +1096,22 @@ static int __init setup_thp_anon(char *str)
				start_size = strsep(&subtoken, "-");
				end_size = subtoken;

-				start = get_order_from_str(start_size);
-				end = get_order_from_str(end_size);
+				start = get_order_from_str(start_size, THP_ORDERS_ALL_ANON);
+				end = get_order_from_str(end_size, THP_ORDERS_ALL_ANON);
 			} else {
-				start = end = get_order_from_str(subtoken);
+				start_size = end_size = subtoken;
+				start = end = get_order_from_str(subtoken,
+								 THP_ORDERS_ALL_ANON);
			}

			if (start == -EINVAL) {
				pr_err("invalid size %s in thp_anon boot parameter\n", start_size);
				goto err;
			}

			if (end == -EINVAL) {
				pr_err("invalid size %s in thp_anon boot parameter\n", end_size);
				goto err;
			}

			if (start < 0 || end < 0 || start > end)
mm/internal.h (+22 −0)
@@ -1259,6 +1259,28 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
				   unsigned long addr, pmd_t *pmd,
				   unsigned int flags);

/*
 * Parses a string with mem suffixes into its order. Useful to parse kernel
 * parameters.
 */
static inline int get_order_from_str(const char *size_str,
				     unsigned long valid_orders)
{
	unsigned long size;
	char *endptr;
	int order;

	size = memparse(size_str, &endptr);

	if (!is_power_of_2(size))
		return -EINVAL;
	order = get_order(size);
	if (BIT(order) & ~valid_orders)
		return -EINVAL;

	return order;
}

enum {
	/* mark page accessed */
	FOLL_TOUCH = 1 << 16,
mm/shmem.c (+162 −46)
@@ -136,6 +136,7 @@ static unsigned long huge_shmem_orders_always __read_mostly;
static unsigned long huge_shmem_orders_madvise __read_mostly;
static unsigned long huge_shmem_orders_inherit __read_mostly;
static unsigned long huge_shmem_orders_within_size __read_mostly;
static bool shmem_orders_configured __initdata;
#endif

#ifdef CONFIG_TMPFS
@@ -542,17 +543,15 @@ static bool shmem_confirm_swap(struct address_space *mapping,

static int shmem_huge __read_mostly = SHMEM_HUGE_NEVER;

-static bool __shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
+static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
 				      loff_t write_end, bool shmem_huge_force,
-					struct vm_area_struct *vma,
 				      unsigned long vm_flags)
 {
-	struct mm_struct *mm = vma ? vma->vm_mm : NULL;
 	loff_t i_size;
 
-	if (!S_ISREG(inode->i_mode))
+	if (HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER)
 		return false;
-	if (mm && ((vm_flags & VM_NOHUGEPAGE) || test_bit(MMF_DISABLE_THP, &mm->flags)))
+	if (!S_ISREG(inode->i_mode))
 		return false;
 	if (shmem_huge == SHMEM_HUGE_DENY)
 		return false;
@@ -570,7 +569,7 @@ static bool __shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
			return true;
		fallthrough;
	case SHMEM_HUGE_ADVISE:
-		if (mm && (vm_flags & VM_HUGEPAGE))
+		if (vm_flags & VM_HUGEPAGE)
			return true;
		fallthrough;
	default:
@@ -578,35 +577,39 @@ static bool __shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
	}
}

static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
		   loff_t write_end, bool shmem_huge_force,
		   struct vm_area_struct *vma, unsigned long vm_flags)
static int shmem_parse_huge(const char *str)
{
	if (HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER)
		return false;
	int huge;

	return __shmem_huge_global_enabled(inode, index, write_end,
					   shmem_huge_force, vma, vm_flags);
}
	if (!str)
		return -EINVAL;

#if defined(CONFIG_SYSFS)
static int shmem_parse_huge(const char *str)
{
	if (!strcmp(str, "never"))
		return SHMEM_HUGE_NEVER;
	if (!strcmp(str, "always"))
		return SHMEM_HUGE_ALWAYS;
	if (!strcmp(str, "within_size"))
		return SHMEM_HUGE_WITHIN_SIZE;
	if (!strcmp(str, "advise"))
		return SHMEM_HUGE_ADVISE;
	if (!strcmp(str, "deny"))
		return SHMEM_HUGE_DENY;
	if (!strcmp(str, "force"))
		return SHMEM_HUGE_FORCE;
		huge = SHMEM_HUGE_NEVER;
	else if (!strcmp(str, "always"))
		huge = SHMEM_HUGE_ALWAYS;
	else if (!strcmp(str, "within_size"))
		huge = SHMEM_HUGE_WITHIN_SIZE;
	else if (!strcmp(str, "advise"))
		huge = SHMEM_HUGE_ADVISE;
	else if (!strcmp(str, "deny"))
		huge = SHMEM_HUGE_DENY;
	else if (!strcmp(str, "force"))
		huge = SHMEM_HUGE_FORCE;
	else
		return -EINVAL;

	if (!has_transparent_hugepage() &&
	    huge != SHMEM_HUGE_NEVER && huge != SHMEM_HUGE_DENY)
		return -EINVAL;

	/* Do not override huge allocation policy with non-PMD sized mTHP */
	if (huge == SHMEM_HUGE_FORCE &&
	    huge_shmem_orders_inherit != BIT(HPAGE_PMD_ORDER))
		return -EINVAL;

	return huge;
}
#endif

#if defined(CONFIG_SYSFS) || defined(CONFIG_TMPFS)
static const char *shmem_format_huge(int huge)
@@ -772,7 +775,7 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,

static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
				      loff_t write_end, bool shmem_huge_force,
-		struct vm_area_struct *vma, unsigned long vm_flags)
+				      unsigned long vm_flags)
{
	return false;
}
@@ -1169,7 +1172,7 @@ static int shmem_getattr(struct mnt_idmap *idmap,
			STATX_ATTR_NODUMP);
	generic_fillattr(idmap, request_mask, inode, stat);

-	if (shmem_huge_global_enabled(inode, 0, 0, false, NULL, 0))
+	if (shmem_huge_global_enabled(inode, 0, 0, false, 0))
		stat->blksize = HPAGE_PMD_SIZE;

	if (request_mask & STATX_BTIME) {
@@ -1690,7 +1693,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
		return 0;

	global_huge = shmem_huge_global_enabled(inode, index, write_end,
-					shmem_huge_force, vma, vm_flags);
+						shmem_huge_force, vm_flags);
	if (!vma || !vma_is_anon_shmem(vma)) {
		/*
		 * For tmpfs, we now only support PMD sized THP if huge page
@@ -4984,6 +4987,7 @@ void __init shmem_init(void)
	 * Default to setting PMD-sized THP to inherit the global setting and
	 * disable all other multi-size THPs.
	 */
	if (!shmem_orders_configured)
		huge_shmem_orders_inherit = BIT(HPAGE_PMD_ORDER);
#endif

@@ -5042,15 +5046,7 @@ static ssize_t shmem_enabled_store(struct kobject *kobj,

	huge = shmem_parse_huge(tmp);
	if (huge == -EINVAL)
-		return -EINVAL;
-	if (!has_transparent_hugepage() &&
-			huge != SHMEM_HUGE_NEVER && huge != SHMEM_HUGE_DENY)
-		return -EINVAL;
-
-	/* Do not override huge allocation policy with non-PMD sized mTHP */
-	if (huge == SHMEM_HUGE_FORCE &&
-	    huge_shmem_orders_inherit != BIT(HPAGE_PMD_ORDER))
-		return -EINVAL;
+		return huge;

	shmem_huge = huge;
	if (shmem_huge > SHMEM_HUGE_DENY)
@@ -5139,6 +5135,126 @@ struct kobj_attribute thpsize_shmem_enabled_attr =
	__ATTR(shmem_enabled, 0644, thpsize_shmem_enabled_show, thpsize_shmem_enabled_store);
#endif /* CONFIG_TRANSPARENT_HUGEPAGE && CONFIG_SYSFS */

#if defined(CONFIG_TRANSPARENT_HUGEPAGE)

static int __init setup_transparent_hugepage_shmem(char *str)
{
	int huge;

	huge = shmem_parse_huge(str);
	if (huge == -EINVAL) {
		pr_warn("transparent_hugepage_shmem= cannot parse, ignored\n");
		return huge;
	}

	shmem_huge = huge;
	return 1;
}
__setup("transparent_hugepage_shmem=", setup_transparent_hugepage_shmem);

static char str_dup[PAGE_SIZE] __initdata;
static int __init setup_thp_shmem(char *str)
{
	char *token, *range, *policy, *subtoken;
	unsigned long always, inherit, madvise, within_size;
	char *start_size, *end_size;
	int start, end, nr;
	char *p;

	if (!str || strlen(str) + 1 > PAGE_SIZE)
		goto err;
	strcpy(str_dup, str);

	always = huge_shmem_orders_always;
	inherit = huge_shmem_orders_inherit;
	madvise = huge_shmem_orders_madvise;
	within_size = huge_shmem_orders_within_size;
	p = str_dup;
	while ((token = strsep(&p, ";")) != NULL) {
		range = strsep(&token, ":");
		policy = token;

		if (!policy)
			goto err;

		while ((subtoken = strsep(&range, ",")) != NULL) {
			if (strchr(subtoken, '-')) {
				start_size = strsep(&subtoken, "-");
				end_size = subtoken;

				start = get_order_from_str(start_size,
							   THP_ORDERS_ALL_FILE_DEFAULT);
				end = get_order_from_str(end_size,
							 THP_ORDERS_ALL_FILE_DEFAULT);
			} else {
				start_size = end_size = subtoken;
				start = end = get_order_from_str(subtoken,
								 THP_ORDERS_ALL_FILE_DEFAULT);
			}

			if (start == -EINVAL) {
				pr_err("invalid size %s in thp_shmem boot parameter\n",
				       start_size);
				goto err;
			}

			if (end == -EINVAL) {
				pr_err("invalid size %s in thp_shmem boot parameter\n",
				       end_size);
				goto err;
			}

			if (start < 0 || end < 0 || start > end)
				goto err;

			nr = end - start + 1;
			if (!strcmp(policy, "always")) {
				bitmap_set(&always, start, nr);
				bitmap_clear(&inherit, start, nr);
				bitmap_clear(&madvise, start, nr);
				bitmap_clear(&within_size, start, nr);
			} else if (!strcmp(policy, "advise")) {
				bitmap_set(&madvise, start, nr);
				bitmap_clear(&inherit, start, nr);
				bitmap_clear(&always, start, nr);
				bitmap_clear(&within_size, start, nr);
			} else if (!strcmp(policy, "inherit")) {
				bitmap_set(&inherit, start, nr);
				bitmap_clear(&madvise, start, nr);
				bitmap_clear(&always, start, nr);
				bitmap_clear(&within_size, start, nr);
			} else if (!strcmp(policy, "within_size")) {
				bitmap_set(&within_size, start, nr);
				bitmap_clear(&inherit, start, nr);
				bitmap_clear(&madvise, start, nr);
				bitmap_clear(&always, start, nr);
			} else if (!strcmp(policy, "never")) {
				bitmap_clear(&inherit, start, nr);
				bitmap_clear(&madvise, start, nr);
				bitmap_clear(&always, start, nr);
				bitmap_clear(&within_size, start, nr);
			} else {
				pr_err("invalid policy %s in thp_shmem boot parameter\n", policy);
				goto err;
			}
		}
	}

	huge_shmem_orders_always = always;
	huge_shmem_orders_madvise = madvise;
	huge_shmem_orders_inherit = inherit;
	huge_shmem_orders_within_size = within_size;
	shmem_orders_configured = true;
	return 1;

err:
	pr_warn("thp_shmem=%s: error parsing string, ignoring setting\n", str);
	return 0;
}
__setup("thp_shmem=", setup_thp_shmem);

#endif /* CONFIG_TRANSPARENT_HUGEPAGE */

#else /* !CONFIG_SHMEM */

/*