Commit 96403bfe authored by Muchun Song, committed by Linus Torvalds

mm: memcontrol: fix slub memory accounting

SLUB currently accounts kmalloc() and kmalloc_node() allocations larger
than an order-1 page on a per-node basis, but it forgets to update the
per-memcg vmstats.  This leads to inaccurate "slab_unreclaimable"
statistics in memory.stat.  Fix it by using mod_lruvec_page_state()
instead of mod_node_page_state().

Link: https://lkml.kernel.org/r/20210223092423.42420-1-songmuchun@bytedance.com


Fixes: 6a486c0a ("mm, sl[ou]b: improve memory accounting")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Reviewed-by: Roman Gushchin <guro@fb.com>
Reviewed-by: Michal Koutný <mkoutny@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 1685bde6
+2 −2
@@ -898,7 +898,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_pages(flags, order);
 	if (likely(page)) {
 		ret = page_address(page);
-		mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
-				    PAGE_SIZE << order);
+		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+				      PAGE_SIZE << order);
 	}
 	ret = kasan_kmalloc_large(ret, size, flags);
+4 −4
@@ -4042,7 +4042,7 @@ static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 	page = alloc_pages_node(node, flags, order);
 	if (page) {
 		ptr = page_address(page);
-		mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
-				    PAGE_SIZE << order);
+		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+				      PAGE_SIZE << order);
 	}

@@ -4174,7 +4174,7 @@ void kfree(const void *x)

 		BUG_ON(!PageCompound(page));
 		kfree_hook(object);
-		mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
-				    -(PAGE_SIZE << order));
+		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+				      -(PAGE_SIZE << order));
 		__free_pages(page, order);
 		return;