Commit 63ffacab authored by Peter Zijlstra, committed by Yang Yingliang

kthread: Fix PF_KTHREAD vs to_kthread() race



mainline inclusion
from mainline-v5.13-rc1
commit 3a7956e2
bugzilla: 52510
CVE: NA

--------------------------------

The kthread_is_per_cpu() construct relies on only being called on
PF_KTHREAD tasks (per the WARN in to_kthread). This gives rise to the
following usage pattern:

	if ((p->flags & PF_KTHREAD) && kthread_is_per_cpu(p))

However, as reported by syzkaller, this is broken. The scenario is:

	CPU0				CPU1 (running p)

	(p->flags & PF_KTHREAD) // true

					begin_new_exec()
					  me->flags &= ~(PF_KTHREAD|...);
	kthread_is_per_cpu(p)
	  to_kthread(p)
	    WARN(!(p->flags & PF_KTHREAD)) <-- *SPLAT*

Introduce __to_kthread(), which omits the WARN and checks both
PF_KTHREAD and the struct kthread pointer before trusting either.

Use this to remove the problematic pattern for kthread_is_per_cpu()
and fix a number of other kthread_*() functions that have similar
issues but are currently not used in ways that would expose the
problem.

Notably kthread_func() is only ever called on 'current', while
kthread_probe_data() is only used for PF_WQ_WORKER, which implies the
task is from kthread_create*().

Fixes: ac687e6e ("kthread: Extract KTHREAD_IS_PER_CPU")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <Valentin.Schneider@arm.com>
Link: https://lkml.kernel.org/r/YH6WJc825C4P0FCK@hirez.programming.kicks-ass.net


Signed-off-by: Zheng Zucheng <zhengzucheng@huawei.com>

Conflicts:
	kernel/kthread.c
	kernel/sched/core.c
	kernel/sched/fair.c
Reviewed-by: Cheng Jian <cj.chengjian@huawei.com>
Reviewed-by: Chen Hui <judy.chenhui@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
parent 42412d30
+24 −4
@@ -73,6 +73,25 @@ static inline struct kthread *to_kthread(struct task_struct *k)
 	return (__force void *)k->set_child_tid;
 }
 
+/*
+ * Variant of to_kthread() that doesn't assume @p is a kthread.
+ *
+ * Per construction; when:
+ *
+ *   (p->flags & PF_KTHREAD) && p->set_child_tid
+ *
+ * the task is both a kthread and struct kthread is persistent. However
+ * PF_KTHREAD on it's own is not, kernel_thread() can exec() (See umh.c and
+ * begin_new_exec()).
+ */
+static inline struct kthread *__to_kthread(struct task_struct *p)
+{
+	void *kthread = (__force void *)p->set_child_tid;
+	if (kthread && !(p->flags & PF_KTHREAD))
+		kthread = NULL;
+	return kthread;
+}
+
 void free_kthread_struct(struct task_struct *k)
 {
 	struct kthread *kthread;
@@ -176,9 +195,10 @@ void set_kthreadd_affinity(void)
  */
 void *kthread_probe_data(struct task_struct *task)
 {
-	struct kthread *kthread = to_kthread(task);
+	struct kthread *kthread = __to_kthread(task);
 	void *data = NULL;
 
-	probe_kernel_read(&data, &kthread->data, sizeof(data));
+	if (kthread)
+		probe_kernel_read(&data, &kthread->data, sizeof(data));
 	return data;
 }
@@ -477,9 +497,9 @@ void kthread_set_per_cpu(struct task_struct *k, int cpu)
 	set_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
 }
 
-bool kthread_is_per_cpu(struct task_struct *k)
+bool kthread_is_per_cpu(struct task_struct *p)
 {
-	struct kthread *kthread = to_kthread(k);
+	struct kthread *kthread = __to_kthread(p);
 	if (!kthread)
 		return false;