Commit 329c17b7 authored by Nam Cao, committed by Lin Yujun

riscv: rewrite __kernel_map_pages() to fix sleeping in invalid context

stable inclusion
from stable-v6.1.95
commit 919f8626099d9909b9a9620b05e8c8ab06581876
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/IACS5F
CVE: CVE-2024-40915

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=919f8626099d9909b9a9620b05e8c8ab06581876

---------------------------

commit fb1cf0878328fe75d47f0aed0a65b30126fcefc4 upstream.

__kernel_map_pages() is a debug function which clears the valid bit in the
page table entries of deallocated pages, to detect illegal memory accesses
to freed pages.

This function sets/clears the valid bit using __set_memory(). __set_memory()
acquires init_mm's semaphore, and this operation may sleep. This is
problematic because __kernel_map_pages() can be called in atomic context,
where sleeping is illegal. An example warning that this causes:

BUG: sleeping function called from invalid context at kernel/locking/rwsem.c:1578
in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 2, name: kthreadd
preempt_count: 2, expected: 0
CPU: 0 PID: 2 Comm: kthreadd Not tainted 6.9.0-g1d4c6d784ef6 #37
Hardware name: riscv-virtio,qemu (DT)
Call Trace:
[<ffffffff800060dc>] dump_backtrace+0x1c/0x24
[<ffffffff8091ef6e>] show_stack+0x2c/0x38
[<ffffffff8092baf8>] dump_stack_lvl+0x5a/0x72
[<ffffffff8092bb24>] dump_stack+0x14/0x1c
[<ffffffff8003b7ac>] __might_resched+0x104/0x10e
[<ffffffff8003b7f4>] __might_sleep+0x3e/0x62
[<ffffffff8093276a>] down_write+0x20/0x72
[<ffffffff8000cf00>] __set_memory+0x82/0x2fa
[<ffffffff8000d324>] __kernel_map_pages+0x5a/0xd4
[<ffffffff80196cca>] __alloc_pages_bulk+0x3b2/0x43a
[<ffffffff8018ee82>] __vmalloc_node_range+0x196/0x6ba
[<ffffffff80011904>] copy_process+0x72c/0x17ec
[<ffffffff80012ab4>] kernel_clone+0x60/0x2fe
[<ffffffff80012f62>] kernel_thread+0x82/0xa0
[<ffffffff8003552c>] kthreadd+0x14a/0x1be
[<ffffffff809357de>] ret_from_fork+0xe/0x1c

Rewrite this function with apply_to_existing_page_range(). It is fine to
not have any locking, because __kernel_map_pages() works with pages being
allocated/deallocated and those pages are not changed by anyone else in the
meantime.

Fixes: 5fde3db5 ("riscv: add ARCH_SUPPORTS_DEBUG_PAGEALLOC support")
Signed-off-by: Nam Cao <namcao@linutronix.de>
Cc: stable@vger.kernel.org
Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Link: https://lore.kernel.org/r/1289ecba9606a19917bc12b6c27da8aa23e1e5ae.1715750938.git.namcao@linutronix.de

Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflicts:
	arch/riscv/mm/pageattr.c
[Because commit 5d6ad668 ("arch, mm: restore dependency of
__kernel_map_pages() on DEBUG_PAGEALLOC") is not applied to 5.10,
__kernel_map_pages() is not guarded by CONFIG_DEBUG_PAGEALLOC.
Also fix warning: ISO C90 forbids mixed declarations and code]
Signed-off-by: default avatarLin Yujun <linyujun809@huawei.com>
parent 2e754c9d
arch/riscv/mm/pageattr.c
+22 −6
@@ -184,15 +184,31 @@ int set_direct_map_default_noflush(struct page *page)
 	return ret;
 }
 
+static int debug_pagealloc_set_page(pte_t *pte, unsigned long addr, void *data)
+{
+	int enable = *(int *)data;
+
+	unsigned long val = pte_val(ptep_get(pte));
+
+	if (enable)
+		val |= _PAGE_PRESENT;
+	else
+		val &= ~_PAGE_PRESENT;
+
+	set_pte(pte, __pte(val));
+
+	return 0;
+}
+
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
+	unsigned long start = (unsigned long)page_address(page);
+	unsigned long size = PAGE_SIZE * numpages;
+
 	if (!debug_pagealloc_enabled())
 		return;
 
-	if (enable)
-		__set_memory((unsigned long)page_address(page), numpages,
-			     __pgprot(_PAGE_PRESENT), __pgprot(0));
-	else
-		__set_memory((unsigned long)page_address(page), numpages,
-			     __pgprot(0), __pgprot(_PAGE_PRESENT));
+	apply_to_existing_page_range(&init_mm, start, size, debug_pagealloc_set_page, &enable);
+
+	flush_tlb_kernel_range(start, start + size);
 }