Commit fa8596f2 authored by Guo Mengqi, committed by Zheng Zengkai

mm: sharepool: sp_alloc_mmap_populate bugfix

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5J0Z9
CVE: NA

--------------------------------

When there is only one mm in a group allocating memory and the
process is killed, the error path in sp_alloc_mmap_populate tries to
access the next spg_node->master->mm in the group's proc list.
However, in this case the next spg_node in the proc list is the list
head, whose master is NULL, which leads to the oops below:

[file:test_sp_alloc.c, func:alloc_large_repeat, line:437] start to alloc...
[  264.699086][ T1772] share pool: gonna sp_alloc_unmap...
[  264.699939][ T1772] share pool: list_next_entry(spg_node, proc_node) is ffff0004c4907028
[  264.700380][ T1772] share pool: master is 0
[  264.701240][ T1772] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000018
...
[  264.704764][ T1772] Internal error: Oops: 96000006 [#1] SMP
[  264.705166][ T1772] Modules linked in: sharepool_dev(OE)
[  264.705823][ T1772] CPU: 3 PID: 1772 Comm: test_sp_alloc Tainted: G           OE     5.10.0+ #23
...
[  264.712513][ T1772] Call trace:
[  264.713057][ T1772]  sp_alloc+0x528/0xa88
[  264.713740][ T1772]  dev_ioctl+0x6ec/0x1d00 [sharepool_dev]
[  264.714035][ T1772]  __arm64_sys_ioctl+0xb0/0xe8
...
[  264.716891][ T1772] ---[ end trace 1587677032f666c6 ]---
[  264.717457][ T1772] Kernel panic - not syncing: Oops: Fatal exception
[  264.717961][ T1772] SMP: stopping secondary CPUs
[  264.718787][ T1772] Kernel Offset: disabled
[  264.719718][ T1772] Memory Limit: none
[  264.720333][ T1772] ---[ end Kernel panic - not syncing: Oops: Fatal exception ]---

Add a list_is_last check to avoid this NULL pointer dereference.
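
For illustration only (not part of the original patch): below is a
minimal userspace sketch of the list layout involved. In a circular
list with an embedded head, taking the "next" entry of the last real
node yields the head reinterpreted as a node, so any field read
through it is bogus; a list_is_last() check stops the walk one step
earlier. The struct and field names are simplified stand-ins for the
sharepool code, and container_of()/list_next_entry() are
re-implemented here only so the sketch compiles in userspace; the
real fix uses the helpers from <linux/list.h>.

#include <stdio.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))
#define list_next_entry(pos, member) \
        container_of((pos)->member.next, typeof(*(pos)), member)

static int list_is_last(const struct list_head *node,
                        const struct list_head *head)
{
        return node->next == head;
}

struct sp_group_node {                  /* stand-in for spg_node */
        struct list_head proc_node;
        void *master;                   /* the list head has no such field */
};

int main(void)
{
        struct list_head procs = { &procs, &procs };
        struct sp_group_node only = { { &procs, &procs }, (void *)1 };

        /* single-entry list: head <-> only <-> head */
        procs.next = procs.prev = &only.proc_node;

        /*
         * Buggy walk: the "next" entry of the last node is the list head
         * reinterpreted as a node, so reading ->master through it is a
         * wild access (the NULL dereference in the kernel case above).
         */
        struct sp_group_node *next = list_next_entry(&only, proc_node);
        printf("next == list head? %s\n",
               (void *)next == (void *)&procs ? "yes" : "no");

        /* Fixed walk: stop before stepping past the last real entry. */
        if (list_is_last(&only.proc_node, &procs))
                printf("last entry reached, stop instead of touching the head\n");

        return 0;
}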

Signed-off-by: Guo Mengqi <guomengqi3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
parent f30182b3