Commit 2dc6241e authored by Jakub Kicinski, committed by Zhengchao Shao

net/sched: act_mirred: use the backlog for mirred ingress

mainline inclusion
from mainline-v6.8-rc6
commit 52f671db18823089a02f07efc04efdb2272ddc17
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I9E2LT
CVE: CVE-2024-26740

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=52f671db18823089a02f07efc04efdb2272ddc17



--------------------------------

The test Davide added in commit ca22da2f ("act_mirred: use the backlog
for nested calls to mirred ingress") hangs our testing VMs every 10 or so
runs, with the familiar tcp_v4_rcv -> tcp_v4_rcv deadlock reported by
lockdep.

The problem, as previously described by Davide (see Link), is that
if we reverse the flow of traffic with a redirect (egress -> ingress)
we may reach the same socket which generated the packet, while still
holding its socket lock. The common solution to such deadlocks
is to put the packet in the Rx backlog, rather than run the Rx path
inline. Do that for all egress -> ingress reversals, not just once
mirred calls start to nest.

In the past there was a concern that the backlog indirection would
lead to loss of error reporting and less accurate stats, but the
current workaround does not address that issue either.

Fixes: 53592b36 ("net/sched: act_mirred: Implement ingress actions")
Cc: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Suggested-by: Davide Caratti <dcaratti@redhat.com>
Link: https://lore.kernel.org/netdev/33dc43f587ec1388ba456b4915c75f02a8aae226.1663945716.git.dcaratti@redhat.com/
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Conflicts:
	net/sched/act_mirred.c

Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
parent 9b7534a0