Documentation/RCU/Design/Requirements/Requirements.rst  +5 −5

@@ -1844,10 +1844,10 @@
 that meets this requirement.

 Furthermore, NMI handlers can be interrupted by what appear to RCU to
 be normal interrupts. One way that this can happen is for code that
-directly invokes rcu_irq_enter() and rcu_irq_exit() to be called
+directly invokes ct_irq_enter() and ct_irq_exit() to be called
 from an NMI handler. This astonishing fact of life prompted the current
-code structure, which has rcu_irq_enter() invoking rcu_nmi_enter() and
-rcu_irq_exit() invoking rcu_nmi_exit(). And yes, I also learned of this
+code structure, which has ct_irq_enter() invoking ct_nmi_enter() and
+ct_irq_exit() invoking ct_nmi_exit(). And yes, I also learned of this
 requirement the hard way.

 Loadable Modules

@@ -2195,7 +2195,7 @@ scheduling-clock interrupt be enabled when RCU needs it to be:
 sections, and RCU believes this CPU to be idle, no problem. This
 sort of thing is used by some architectures for light-weight
 exception handlers, which can then avoid the overhead of
-rcu_irq_enter() and rcu_irq_exit() at exception entry and
+ct_irq_enter() and ct_irq_exit() at exception entry and
 exit, respectively. Some go further and avoid the entireties of
 irq_enter() and irq_exit().
 Just make very sure you are running some of your tests with

@@ -2226,7 +2226,7 @@ scheduling-clock interrupt be enabled when RCU needs it to be:
 +-----------------------------------------------------------------------+
 | **Answer**:                                                           |
 +-----------------------------------------------------------------------+
-| One approach is to do ``rcu_irq_exit();rcu_irq_enter();`` every so    |
+| One approach is to do ``ct_irq_exit();ct_irq_enter();`` every so      |
 | often. But given that long-running interrupt handlers can cause other |
 | problems, not least for response time, shouldn't you work to keep     |
 | your interrupt handler's runtime within reasonable bounds?            |
Documentation/RCU/stallwarn.rst  +3 −3

@@ -97,12 +97,12 @@ warnings:
 	which will include additional debugging information.

 -	A low-level kernel issue that either fails to invoke one of the
-	variants of rcu_user_enter(), rcu_user_exit(), rcu_idle_enter(),
-	rcu_idle_exit(), rcu_irq_enter(), or rcu_irq_exit() on the one
+	variants of rcu_eqs_enter(true), rcu_eqs_exit(true), ct_idle_enter(),
+	ct_idle_exit(), ct_irq_enter(), or ct_irq_exit() on the one
 	hand, or that invokes one of them too many times on the other.
 	Historically, the most frequent issue has been an omission
 	of either irq_enter() or irq_exit(), which in turn invoke
-	rcu_irq_enter() or rcu_irq_exit(), respectively. Building your
+	ct_irq_enter() or ct_irq_exit(), respectively. Building your
 	kernel with CONFIG_RCU_EQS_DEBUG=y can help track down these types
 	of issues, which sometimes arise in architecture-specific code.

Documentation/features/time/context-tracking/arch-support.txt  +3 −3

 #
-# Feature name:          context-tracking
-#         Kconfig:       HAVE_CONTEXT_TRACKING
-#         description:   arch supports context tracking for NO_HZ_FULL
+# Feature name:          user-context-tracking
+#         Kconfig:       HAVE_CONTEXT_TRACKING_USER
+#         description:   arch supports user context tracking for NO_HZ_FULL
 #
     -----------------------
     |         arch |status|

MAINTAINERS  +1 −0

@@ -5039,6 +5039,7 @@
 F:	include/linux/console*

 CONTEXT TRACKING
 M:	Frederic Weisbecker <frederic@kernel.org>
+M:	"Paul E. McKenney" <paulmck@kernel.org>
 S:	Maintained
 F:	kernel/context_tracking.c
 F:	include/linux/context_tracking*

arch/Kconfig  +4 −4

@@ -774,7 +774,7 @@ config HAVE_ARCH_WITHIN_STACK_FRAMES
 	  and similar) by implementing an inline arch_within_stack_frames(),
 	  which is used by CONFIG_HARDENED_USERCOPY.

-config HAVE_CONTEXT_TRACKING
+config HAVE_CONTEXT_TRACKING_USER
 	bool
 	help
 	  Provide kernel/user boundaries probes necessary for subsystems

@@ -782,10 +782,10 @@ config HAVE_CONTEXT_TRACKING
 	  Syscalls need to be wrapped inside user_exit()-user_enter(), either
 	  optimized behind static key or through the slow path using TIF_NOHZ
 	  flag. Exceptions handlers must be wrapped as well. Irqs are already
-	  protected inside rcu_irq_enter/rcu_irq_exit() but preemption or signal
+	  protected inside ct_irq_enter/ct_irq_exit() but preemption or signal
 	  handling on irq exit still need to be protected.

-config HAVE_CONTEXT_TRACKING_OFFSTACK
+config HAVE_CONTEXT_TRACKING_USER_OFFSTACK
 	bool
 	help
 	  Architecture neither relies on exception_enter()/exception_exit()

@@ -797,7 +797,7 @@ config HAVE_CONTEXT_TRACKING_OFFSTACK
 	  - Critical entry code isn't preemptible (or better yet: not
 	    interruptible).
-	  - No use of RCU read side critical sections, unless rcu_nmi_enter()
+	  - No use of RCU read side critical sections, unless ct_nmi_enter()
 	    got called.
 	  - No use of instrumentation, unless instrumentation_begin()
 	    got called.
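Most of the hunks above are a mechanical one-to-one token rename. As an illustration only (not how the original series was produced), the mapping collected from the hunks can be applied with a small script like the following. Note that the stallwarn.rst hunk also swaps rcu_user_enter()/rcu_user_exit() for rcu_eqs_enter(true)/rcu_eqs_exit(true), which is not a plain token rename and is therefore left out of this table.

```python
# Old-to-new name mapping collected from the diff hunks above.
import re

RENAMES = {
    "rcu_irq_enter": "ct_irq_enter",
    "rcu_irq_exit": "ct_irq_exit",
    "rcu_nmi_enter": "ct_nmi_enter",
    "rcu_nmi_exit": "ct_nmi_exit",
    "rcu_idle_enter": "ct_idle_enter",
    "rcu_idle_exit": "ct_idle_exit",
    "HAVE_CONTEXT_TRACKING_OFFSTACK": "HAVE_CONTEXT_TRACKING_USER_OFFSTACK",
    "HAVE_CONTEXT_TRACKING": "HAVE_CONTEXT_TRACKING_USER",
}

# Longest-first alternation plus \b word boundaries keep
# HAVE_CONTEXT_TRACKING from matching inside
# HAVE_CONTEXT_TRACKING_OFFSTACK (underscore is a word character).
pattern = re.compile(
    r"\b(" + "|".join(sorted(RENAMES, key=len, reverse=True)) + r")\b"
)


def rename(text: str) -> str:
    """Apply the old-to-new mapping to one chunk of text."""
    return pattern.sub(lambda m: RENAMES[m.group(1)], text)


print(rename("config HAVE_CONTEXT_TRACKING"))  # config HAVE_CONTEXT_TRACKING_USER
print(rename("rcu_irq_enter();"))              # ct_irq_enter();
```

Run over a tree, a script like this would touch exactly the kinds of lines shown in the hunks above; the word boundaries also keep substrings such as rcu_irq_enter inside longer identifiers from being rewritten by a shorter rule.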