Commit e9b08012 authored by Michal Koutný, committed by Kaixiong Yu

x86/mm: Do not shuffle CPU entry areas without KASLR

mainline inclusion
from mainline-v6.3-rc4
commit a3f547ad
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/IBGU7R

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=a3f547addcaa10df5a226526bc9e2d9a94542344



--------------------------------

The commit 97e3d26b ("x86/mm: Randomize per-cpu entry area") fixed
an omission of KASLR on CPU entry areas. It doesn't take into account
KASLR switches though, which may result in unintended non-determinism
when a user wants to avoid it (e.g. debugging, benchmarking).

Generate only a single combination of CPU entry area offsets -- the
linear array that existed prior to randomization -- when KASLR is turned off.

Since we have 3f148f33 ("x86/kasan: Map shadow for percpu pages on
demand") and followups, we can use the more relaxed guard
kaslr_enabled() (in contrast to kaslr_memory_enabled()).
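The resulting offset logic can be sketched as a user-space simulation (Python, illustrative only -- the kernel operates on per-CPU variables and uses its own PRNG; the function name and the retry loop mirror the kernel code, but this is not the implementation):

```python
import random

def init_cea_offsets(nr_cpus, max_cea, kaslr_on, rng=None):
    """Simulate init_cea_offsets(): linear offsets without KASLR,
    unique random offsets below max_cea with it.
    Assumes nr_cpus <= max_cea so unique offsets exist."""
    if not kaslr_on:
        # The fix: without KASLR, fall back to the deterministic
        # linear array that existed before randomization (CPU i -> i).
        return list(range(nr_cpus))

    rng = rng or random.Random()
    offsets = []
    for _ in range(nr_cpus):
        # Retry until the offset is unique among earlier CPUs --
        # the kernel comments this nested scan "O(sodding terrible)".
        while True:
            cea = rng.randrange(max_cea)
            if cea not in offsets:
                break
        offsets.append(cea)
    return offsets
```

With `kaslr_on=False` the result is fully deterministic, which is the point of the patch: debugging and benchmarking runs with `nokaslr` no longer see shuffled entry areas.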

Fixes: 97e3d26b ("x86/mm: Randomize per-cpu entry area")
Signed-off-by: Michal Koutný <mkoutny@suse.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/all/20230306193144.24605-1-mkoutny%40suse.com


Signed-off-by: Kaixiong Yu <yukaixiong@huawei.com>
parent 6aa478cf
+7 −0
@@ -11,6 +11,7 @@
 #include <asm/fixmap.h>
 #include <asm/desc.h>
 #include <asm/kasan.h>
+#include <asm/setup.h>
 
 static DEFINE_PER_CPU_PAGE_ALIGNED(struct entry_stack_page, entry_stack_storage);
 
@@ -30,6 +31,12 @@ static __init void init_cea_offsets(void)
 	unsigned int max_cea;
 	unsigned int i, j;
 
+	if (!kaslr_enabled()) {
+		for_each_possible_cpu(i)
+			per_cpu(_cea_offset, i) = i;
+		return;
+	}
+
 	max_cea = (CPU_ENTRY_AREA_MAP_SIZE - PAGE_SIZE) / CPU_ENTRY_AREA_SIZE;
 
 	/* O(sodding terrible) */