Commit 4d45c3af authored by Yang Yang, committed by Linus Torvalds

mm/vmstat: add event for ksm swapping in copy

When what used to be a KSM page is faulted in from swap, and that page
had already been swapped in before, the system has to make a copy and
leave remerging the pages to a later pass of ksmd.

That is not good for performance; it is better to reduce this kind of
copying.  There are some ways to reduce it, for example lowering
swappiness or shrinking the madvise(, , MADV_MERGEABLE) range.  So add
this event to support such tuning, just like the patch "mm, THP, swap:
add THP swapping out fallback counting".
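
As a rough illustration of the MADV_MERGEABLE tuning mentioned above, a
userspace process can mark an anonymous mapping as a merge candidate for
ksmd.  This is a minimal sketch, not part of this patch; the mapping
size and fill pattern are arbitrary examples:

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 64 * 1024 * 1024;	/* 64 MiB, arbitrary size */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}

	/* Fill with identical data so ksmd has duplicate pages to merge. */
	memset(buf, 0x5a, len);

	/* Hint that this range is a KSM merge candidate. */
	if (madvise(buf, len, MADV_MERGEABLE)) {
		perror("madvise(MADV_MERGEABLE)");
		return EXIT_FAILURE;
	}

	pause();	/* keep the mapping alive while ksmd scans it */
	return 0;
}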

Link: https://lkml.kernel.org/r/20220113023839.758845-1-yang.yang29@zte.com.cn


Signed-off-by: Yang Yang <yang.yang29@zte.com.cn>
Reviewed-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
Cc: Hugh Dickins <hughd@google.com>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Saravanan D <saravanand@fb.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent d8c47cc7
include/linux/vm_event_item.h  +3 −0
@@ -129,6 +129,9 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 #ifdef CONFIG_SWAP
 		SWAP_RA,
 		SWAP_RA_HIT,
+#ifdef CONFIG_KSM
+		KSM_SWPIN_COPY,
+#endif
 #endif
 #ifdef CONFIG_X86
 		DIRECT_MAP_LEVEL2_SPLIT,
mm/ksm.c  +3 −0
@@ -2595,6 +2595,9 @@ struct page *ksm_might_need_to_copy(struct page *page,
 		SetPageDirty(new_page);
 		__SetPageUptodate(new_page);
 		__SetPageLocked(new_page);
+#ifdef CONFIG_SWAP
+		count_vm_event(KSM_SWPIN_COPY);
+#endif
 	}

 	return new_page;
mm/vmstat.c  +3 −0
@@ -1388,6 +1388,9 @@ const char * const vmstat_text[] = {
 #ifdef CONFIG_SWAP
 	"swap_ra",
 	"swap_ra_hit",
+#ifdef CONFIG_KSM
+	"ksm_swpin_copy",
+#endif
 #endif
 #ifdef CONFIG_X86
 	"direct_map_level2_splits",