Commit 351bdbb6 authored by Sebastian Andrzej Siewior, committed by Jakub Kicinski

net: Revert the softirq will run annotation in ____napi_schedule().



The lockdep annotation lockdep_assert_softirq_will_run() expects that
either hard or soft interrupts are disabled, because both guarantee that
the "raised" soft-interrupts will be processed once the context is left.

This triggers in flush_smp_call_function_from_idle(), but in this case
it explicitly calls do_softirq() if there are pending softirqs.

Revert the "softirq will run" annotation in ____napi_schedule() and move
the check back to __netif_rx() as it was. Keep the IRQ-off assert in
____napi_schedule() because this is always required.

Fixes: fbd9a2ce ("net: Add lockdep asserts to ____napi_schedule().")
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Jason A. Donenfeld <Jason@zx2c4.com>
Link: https://lore.kernel.org/r/YjhD3ZKWysyw8rc6@linutronix.de


Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parent ca4f3f18
include/linux/lockdep.h +0 −7
@@ -329,12 +329,6 @@ extern void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie);
 
 #define lockdep_assert_none_held_once()		\
 	lockdep_assert_once(!current->lockdep_depth)
-/*
- * Ensure that softirq is handled within the callchain and not delayed and
- * handled by chance.
- */
-#define lockdep_assert_softirq_will_run()	\
-	lockdep_assert_once(hardirq_count() | softirq_count())
 
 #define lockdep_recursing(tsk)	((tsk)->lockdep_recursion)
 
@@ -420,7 +414,6 @@ extern int lockdep_is_held(const void *);
 #define lockdep_assert_held_read(l)		do { (void)(l); } while (0)
 #define lockdep_assert_held_once(l)		do { (void)(l); } while (0)
 #define lockdep_assert_none_held_once()	do { } while (0)
-#define lockdep_assert_softirq_will_run()	do { } while (0)
 
 #define lockdep_recursing(tsk)			(0)
 
net/core/dev.c +1 −2
@@ -4277,7 +4277,6 @@ static inline void ____napi_schedule(struct softnet_data *sd,
 {
 	struct task_struct *thread;
 
-	lockdep_assert_softirq_will_run();
 	lockdep_assert_irqs_disabled();
 
 	if (test_bit(NAPI_STATE_THREADED, &napi->state)) {
@@ -4887,7 +4886,7 @@ int __netif_rx(struct sk_buff *skb)
 {
 	int ret;
 
-	lockdep_assert_softirq_will_run();
+	lockdep_assert_once(hardirq_count() | softirq_count());
 
 	trace_netif_rx_entry(skb);
 	ret = netif_rx_internal(skb);