!3064 mm: PCP high auto-tuning
Merge Pull Request from: @ci-robot

PR sync from: Ze Zuo <zuoze1@huawei.com>
https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/53JBDDN3LWQ7YXTD7DR3DIRQUP6IMN7Z/

The page allocation performance requirements of different workloads can
differ considerably, so the PCP (Per-CPU Pageset) high mark needs to be
tuned automatically on each CPU to optimize page allocation performance.

The patches in this series are as follows:

[1/9] mm, pcp: avoid to drain PCP when process exit
[2/9] cacheinfo: calculate per-CPU data cache size
[3/9] mm, pcp: reduce lock contention for draining high-order pages
[4/9] mm: restrict the pcp batch scale factor to avoid too long latency
[5/9] mm, page_alloc: scale the number of pages that are batch allocated
[6/9] mm: add framework for PCP high auto-tuning
[7/9] mm: tune PCP high automatically
[8/9] mm, pcp: decrease PCP high if free pages < high watermark
[9/9] mm, pcp: reduce detecting time of consecutive high order page freeing

Patches [1/9]-[3/9] optimize PCP draining for consecutive high-order page
freeing. Patches [4/9] and [5/9] optimize batch freeing and allocation.
Patches [6/9]-[8/9] implement and optimize a PCP high auto-tuning method.
Patch [9/9] optimizes PCP draining for consecutive high-order page freeing
based on PCP high auto-tuning.
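For illustration only, the minimal C sketch below models the general idea
behind auto-tuning a per-CPU high mark: grow it while pages are being freed
frequently on that CPU, and decay it back toward a minimum otherwise, within
bounds. The structure, function names (pcp_model, pcp_tune_high) and the
specific heuristic are hypothetical and are not the kernel code added by this
series.

/*
 * Illustrative sketch of a per-CPU "high" mark auto-tuning heuristic.
 * Hypothetical names and policy; not the actual mm/page_alloc.c code.
 */
#include <stdio.h>

struct pcp_model {
	int high;	/* current high mark (pages cached per CPU) */
	int high_min;	/* lower bound, e.g. a few allocation batches */
	int high_max;	/* upper bound, e.g. derived from data cache size */
};

/* Grow "high" while frees are frequent; decay it when they are not. */
static void pcp_tune_high(struct pcp_model *pcp, int freed_this_period)
{
	if (freed_this_period > pcp->high / 2) {
		/* Heavy freeing: allow more pages to stay cached per CPU. */
		pcp->high += freed_this_period / 4;
		if (pcp->high > pcp->high_max)
			pcp->high = pcp->high_max;
	} else {
		/* Light freeing: shrink toward the minimum to return memory. */
		pcp->high -= (pcp->high - pcp->high_min) / 8;
		if (pcp->high < pcp->high_min)
			pcp->high = pcp->high_min;
	}
}

int main(void)
{
	struct pcp_model pcp = { .high = 128, .high_min = 64, .high_max = 4096 };
	int periods[] = { 512, 512, 32, 0, 0, 1024 };

	for (unsigned int i = 0; i < sizeof(periods) / sizeof(periods[0]); i++) {
		pcp_tune_high(&pcp, periods[i]);
		printf("period %u: freed %4d -> high %d\n", i, periods[i], pcp.high);
	}
	return 0;
}

Under a free-heavy workload this keeps more pages on the per-CPU lists (fewer
zone lock acquisitions); when freeing subsides, the high mark decays so the
cached pages are returned to the buddy allocator.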
Huang Ying (9):
  mm, pcp: avoid to drain PCP when process exit
  cacheinfo: calculate size of per-CPU data cache slice
  mm, pcp: reduce lock contention for draining high-order pages
  mm: restrict the pcp batch scale factor to avoid too long latency
  mm, page_alloc: scale the number of pages that are batch allocated
  mm: add framework for PCP high auto-tuning
  mm: tune PCP high automatically
  mm, pcp: decrease PCP high if free pages < high watermark
  mm, pcp: reduce detecting time of consecutive high order page freeing

--
2.25.1

https://gitee.com/openeuler/kernel/issues/I8JXIR
Link: https://gitee.com/openeuler/kernel/pulls/3064
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>