Commit d7f5ef65 authored by Kumar Kartikeya Dwivedi, committed by Alexei Starovoitov

bpf: Do btf_record_free outside map_free callback



Since the commit being fixed, we now miss freeing the btf_record for local
storage maps, which will have a btf_record populated in case they have a
bpf_spin_lock element.

This was missed because, when adding support for kptrs, I chose to offload
the job of freeing kptr_off_tab (now btf_record) to the map_free callback.

Revisiting the reason for this decision: there is the possibility that the
btf_record gets used inside the map_free callback (e.g. in case of maps
embedding kptrs) to iterate over the fields and free them, hence freeing it
before the map_free callback would leak special field memory and cause
invalid memory accesses. The btf_record also holds module references, which
is critical to ensure the dtor call made for a referenced kptr is safe to
do.

If we do it after the map_free callback, the map area is already freed, so
we cannot access the bpf_map structure anymore.

To fix this and prevent such lapses in the future, move bpf_map_free_record
out of the map_free callback, and do it after map_free by remembering the
btf_record pointer. There is no need to access the bpf_map structure in
that case, and we can avoid missing this case when support for new map
types is added for other special fields.

Since a btf_record and its btf_field_offs are used together, delay freeing
of field_offs as well, for consistency. While not a problem right now, a
lot of code assumes that either both record and field_offs are set or
neither is.

Note that in case of map of maps (outer maps), inner_map_meta->record is
only used during verification, not to free fields in the map value, hence
we simply keep the bpf_map_free_record call as is in bpf_map_meta_free and
never touch map->inner_map_meta in bpf_map_free_deferred.

Add a comment making note of these details.

Fixes: db559117 ("bpf: Consolidate spin_lock, timer management into btf_record")
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20221118015614.2013203-3-memxor@gmail.com


Signed-off-by: Alexei Starovoitov <ast@kernel.org>
parent c237bfa5
kernel/bpf/arraymap.c (+0 −1)
@@ -430,7 +430,6 @@ static void array_map_free(struct bpf_map *map)
 			for (i = 0; i < array->map.max_entries; i++)
 				bpf_obj_free_fields(map->record, array_map_elem_ptr(array, i));
 		}
-		bpf_map_free_record(map);
 	}
 
 	if (array->map.map_type == BPF_MAP_TYPE_PERCPU_ARRAY)
kernel/bpf/hashtab.c (+0 −1)
@@ -1511,7 +1511,6 @@ static void htab_map_free(struct bpf_map *map)
 		prealloc_destroy(htab);
 	}
 
-	bpf_map_free_record(map);
 	free_percpu(htab->extra_elems);
 	bpf_map_area_free(htab->buckets);
 	bpf_mem_alloc_destroy(&htab->pcpu_ma);
kernel/bpf/syscall.c (+14 −4)
@@ -659,14 +659,24 @@ void bpf_obj_free_fields(const struct btf_record *rec, void *obj)
 static void bpf_map_free_deferred(struct work_struct *work)
 {
 	struct bpf_map *map = container_of(work, struct bpf_map, work);
+	struct btf_field_offs *foffs = map->field_offs;
+	struct btf_record *rec = map->record;
 
 	security_bpf_map_free(map);
-	kfree(map->field_offs);
 	bpf_map_release_memcg(map);
-	/* implementation dependent freeing, map_free callback also does
-	 * bpf_map_free_record, if needed.
-	 */
+	/* implementation dependent freeing */
 	map->ops->map_free(map);
+	/* Delay freeing of field_offs and btf_record for maps, as map_free
+	 * callback usually needs access to them. It is better to do it here
+	 * than require each callback to do the free itself manually.
+	 *
+	 * Note that the btf_record stashed in map->inner_map_meta->record was
+	 * already freed using the map_free callback for map in map case which
+	 * eventually calls bpf_map_free_meta, since inner_map_meta is only a
+	 * template bpf_map struct used during verification.
+	 */
+	kfree(foffs);
+	btf_record_free(rec);
 }
 
 static void bpf_map_put_uref(struct bpf_map *map)