Commit 31d02e7a authored by Lecopzer Chen, committed by Catalin Marinas

arm64: kaslr: support randomized module area with KASAN_VMALLOC



Now that KASAN_VMALLOC works on arm64, we can randomize the module
region into the vmalloc area.

Test:
	VMALLOC area ffffffc010000000 fffffffdf0000000

	before the patch:
		module_alloc_base/end ffffffc008b80000 ffffffc010000000
	after the patch:
		module_alloc_base/end ffffffdcf4bed000 ffffffc010000000

	And insmod-ing modules works fine.

Suggested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Link: https://lore.kernel.org/r/20210324040522.15548-5-lecopzer.chen@mediatek.com


Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
parent 71b613fc
arch/arm64/kernel/kaslr.c (+10 −8)
@@ -128,15 +128,17 @@ u64 __init kaslr_early_init(void)
 	/* use the top 16 bits to randomize the linear region */
 	memstart_offset_seed = seed >> 48;
 
-	if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
-	    IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+	if (!IS_ENABLED(CONFIG_KASAN_VMALLOC) &&
+	    (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
+	     IS_ENABLED(CONFIG_KASAN_SW_TAGS)))
 		/*
-		 * KASAN does not expect the module region to intersect the
-		 * vmalloc region, since shadow memory is allocated for each
-		 * module at load time, whereas the vmalloc region is shadowed
-		 * by KASAN zero pages. So keep modules out of the vmalloc
-		 * region if KASAN is enabled, and put the kernel well within
-		 * 4 GB of the module region.
+		 * KASAN without KASAN_VMALLOC does not expect the module region
+		 * to intersect the vmalloc region, since shadow memory is
+		 * allocated for each module at load time, whereas the vmalloc
+		 * region is shadowed by KASAN zero pages. So keep modules
+		 * out of the vmalloc region if KASAN is enabled without
+		 * KASAN_VMALLOC, and put the kernel well within 4 GB of the
+		 * module region.
 		 */
 		return offset % SZ_2G;

arch/arm64/kernel/module.c (+9 −7)
@@ -40,14 +40,16 @@ void *module_alloc(unsigned long size)
 				NUMA_NO_NODE, __builtin_return_address(0));
 
 	if (!p && IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
-	    !IS_ENABLED(CONFIG_KASAN_GENERIC) &&
-	    !IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+	    (IS_ENABLED(CONFIG_KASAN_VMALLOC) ||
+	     (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+	      !IS_ENABLED(CONFIG_KASAN_SW_TAGS))))
 		/*
-		 * KASAN can only deal with module allocations being served
-		 * from the reserved module region, since the remainder of
-		 * the vmalloc region is already backed by zero shadow pages,
-		 * and punching holes into it is non-trivial. Since the module
-		 * region is not randomized when KASAN is enabled, it is even
+		 * KASAN without KASAN_VMALLOC can only deal with module
+		 * allocations being served from the reserved module region,
+		 * since the remainder of the vmalloc region is already
+		 * backed by zero shadow pages, and punching holes into it
+		 * is non-trivial. Since the module region is not randomized
+		 * when KASAN is enabled without KASAN_VMALLOC, it is even
 		 * less likely that the module region gets exhausted, so we
 		 * can simply omit this fallback in that case.
 		 */