Commit 96da3f7d authored by Alexei Starovoitov, committed by Daniel Borkmann

bpf: Remove tracing program restriction on map types



The hash map is now fully converted to bpf_mem_alloc. Its implementation no
longer allocates synchronously and no longer calls call_rcu() directly. It is
now safe to use non-preallocated hash maps in all types of tracing programs,
including BPF_PROG_TYPE_PERF_EVENT, which runs in NMI context.
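To illustrate what this lifts, here is a hypothetical libbpf-style sketch (not part of this commit; map and function names are illustrative) of a perf_event program using a BPF_F_NO_PREALLOC hash map, which the verifier previously rejected for this program type:

```c
#include <linux/types.h>
#include <linux/bpf.h>
#include <linux/perf_event.h>
#include <bpf/bpf_helpers.h>

/* Non-preallocated hash map: before this patch, the verifier refused
 * this combination with a BPF_PROG_TYPE_PERF_EVENT program. */
struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 1024);
	__type(key, __u32);
	__type(value, __u64);
	__uint(map_flags, BPF_F_NO_PREALLOC);
} counts SEC(".maps");

SEC("perf_event")
int count_samples(struct bpf_perf_event_data *ctx)
{
	__u32 key = bpf_get_smp_processor_id();
	__u64 one = 1, *val;

	val = bpf_map_lookup_elem(&counts, &key);
	if (val)
		__sync_fetch_and_add(val, 1);
	else
		/* Element allocation now goes through bpf_mem_alloc,
		 * which is safe even from NMI context. */
		bpf_map_update_elem(&counts, &key, &one, BPF_NOEXIST);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```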

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220902211058.60789-13-alexei.starovoitov@gmail.com
parent ee4ed53c
+0 −42
@@ -12623,48 +12623,6 @@ static int check_map_prog_compatibility(struct bpf_verifier_env *env,
 
 {
 	enum bpf_prog_type prog_type = resolve_prog_type(prog);
-	/*
-	 * Validate that trace type programs use preallocated hash maps.
-	 *
-	 * For programs attached to PERF events this is mandatory as the
-	 * perf NMI can hit any arbitrary code sequence.
-	 *
-	 * All other trace types using non-preallocated per-cpu hash maps are
-	 * unsafe as well because tracepoint or kprobes can be inside locked
-	 * regions of the per-cpu memory allocator or at a place where a
-	 * recursion into the per-cpu memory allocator would see inconsistent
-	 * state. Non per-cpu hash maps are using bpf_mem_alloc-tor which is
-	 * safe to use from kprobe/fentry and in RT.
-	 *
-	 * On RT enabled kernels run-time allocation of all trace type
-	 * programs is strictly prohibited due to lock type constraints. On
-	 * !RT kernels it is allowed for backwards compatibility reasons for
-	 * now, but warnings are emitted so developers are made aware of
-	 * the unsafety and can fix their programs before this is enforced.
-	 */
-	if (is_tracing_prog_type(prog_type) && !is_preallocated_map(map)) {
-		if (prog_type == BPF_PROG_TYPE_PERF_EVENT) {
-			/* perf_event bpf progs have to use preallocated hash maps
-			 * because non-prealloc is still relying on call_rcu to free
-			 * elements.
-			 */
-			verbose(env, "perf_event programs can only use preallocated hash map\n");
-			return -EINVAL;
-		}
-		if (map->map_type == BPF_MAP_TYPE_PERCPU_HASH ||
-		    (map->inner_map_meta &&
-		     map->inner_map_meta->map_type == BPF_MAP_TYPE_PERCPU_HASH)) {
-			if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
-				verbose(env,
-					"trace type programs can only use preallocated per-cpu hash map\n");
-				return -EINVAL;
-			}
-			WARN_ONCE(1, "trace type BPF program uses run-time allocation\n");
-			verbose(env,
-				"trace type programs with run-time allocated per-cpu hash maps are unsafe."
-				" Switch to preallocated hash maps.\n");
-		}
-	}
 
 	if (map_value_has_spin_lock(map)) {
 		if (prog_type == BPF_PROG_TYPE_SOCKET_FILTER) {