Commit 85e853c5 authored by Ingo Molnar

Merge branch 'for-mingo-rcu' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into core/rcu

Pull RCU updates from Paul E. McKenney:

- Documentation updates.

- Miscellaneous fixes.

- kfree_rcu() updates: Addition of mem_dump_obj() to provide allocator return
  addresses to more easily locate bugs.  This has a couple of RCU-related commits,
  but is mostly MM.  Was pulled in with akpm's agreement.

- Per-callback-batch tracking of numbers of callbacks,
  which enables better debugging information and smarter
  reactions to large numbers of callbacks.

- The first round of changes to allow CPUs to be runtime switched from and to
  callback-offloaded state.

- CONFIG_PREEMPT_RT-related changes.

- RCU CPU stall warning updates.

- Addition of polling grace-period APIs for SRCU.
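
  The polling APIs let an updater snapshot SRCU grace-period state and
  later check, without blocking, whether a full grace period has elapsed.
  A rough sketch, assuming the interface consists of
  start_poll_synchronize_srcu() and poll_state_synchronize_srcu()
  (the srcu_struct and helpers here are illustrative):

```c
#include <linux/srcu.h>

DEFINE_SRCU(my_srcu);		/* hypothetical srcu_struct for illustration */

static unsigned long gp_cookie;

/* Record the current grace-period state and kick off a grace period. */
static void updater_start(void)
{
	gp_cookie = start_poll_synchronize_srcu(&my_srcu);
}

/* Nonblocking poll: true once a full SRCU grace period has elapsed. */
static bool updater_done(void)
{
	return poll_state_synchronize_srcu(&my_srcu, gp_cookie);
}
```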

- Torture-test and torture-test scripting updates, including a "torture everything"
  script that runs rcutorture, locktorture, scftorture, rcuscale, and refscale,
  plus an allmodconfig build.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
parents dcc0b490 0d2460ba
+2 −2
@@ -38,7 +38,7 @@ sections.
RCU-preempt Expedited Grace Periods
===================================

-``CONFIG_PREEMPT=y`` kernels implement RCU-preempt.
+``CONFIG_PREEMPTION=y`` kernels implement RCU-preempt.
The overall flow of the handling of a given CPU by an RCU-preempt
expedited grace period is shown in the following diagram:

@@ -112,7 +112,7 @@ things.
RCU-sched Expedited Grace Periods
---------------------------------

-``CONFIG_PREEMPT=n`` kernels implement RCU-sched. The overall flow of
+``CONFIG_PREEMPTION=n`` kernels implement RCU-sched. The overall flow of
the handling of a given CPU by an RCU-sched expedited grace period is
shown in the following diagram:

+374 −358

File changed; diff collapsed (preview size limit exceeded).

+4 −6
@@ -70,7 +70,7 @@ over a rather long period of time, but improvements are always welcome!
	is less readable and prevents lockdep from detecting locking issues.

	Letting RCU-protected pointers "leak" out of an RCU read-side
-	critical section is every bid as bad as letting them leak out
+	critical section is every bit as bad as letting them leak out
	from under a lock.  Unless, of course, you have arranged some
	other means of protection, such as a lock or a reference count
	-before- letting them out of the RCU read-side critical section.
@@ -129,9 +129,7 @@ over a rather long period of time, but improvements are always welcome!
		accesses.  The rcu_dereference() primitive ensures that
		the CPU picks up the pointer before it picks up the data
		that the pointer points to.  This really is necessary
-		on Alpha CPUs.	If you don't believe me, see:
-
-			http://www.openvms.compaq.com/wizard/wiz_2637.html
+		on Alpha CPUs.

		The rcu_dereference() primitive is also an excellent
		documentation aid, letting the person reading the
@@ -214,9 +212,9 @@ over a rather long period of time, but improvements are always welcome!
	the rest of the system.

7.	As of v4.20, a given kernel implements only one RCU flavor,
-	which is RCU-sched for PREEMPT=n and RCU-preempt for PREEMPT=y.
+	which is RCU-sched for PREEMPTION=n and RCU-preempt for PREEMPTION=y.
	If the updater uses call_rcu() or synchronize_rcu(),
-	then the corresponding readers my use rcu_read_lock() and
+	then the corresponding readers may use rcu_read_lock() and
	rcu_read_unlock(), rcu_read_lock_bh() and rcu_read_unlock_bh(),
	or any pair of primitives that disables and re-enables preemption,
	for example, rcu_read_lock_sched() and rcu_read_unlock_sched().
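
The reader/updater pairing described in item 7 can be illustrated with a
minimal sketch (the struct foo type, global pointer, and function names
are hypothetical; the RCU primitives are the standard kernel API):

```c
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	int data;
	struct rcu_head rcu;
};

static struct foo __rcu *global_foo;	/* hypothetical RCU-protected pointer */

/* Reader paired with a call_rcu()/synchronize_rcu() updater. */
static int reader(void)
{
	struct foo *p;
	int val = -1;

	rcu_read_lock();
	p = rcu_dereference(global_foo);
	if (p)
		val = p->data;
	rcu_read_unlock();
	return val;
}

/* Updater: publish the new version, free the old after a grace period. */
static void updater(struct foo *newp)
{
	struct foo *oldp;

	oldp = rcu_replace_pointer(global_foo, newp, true);
	if (oldp)
		kfree_rcu(oldp, rcu);
}
```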
+3 −3
@@ -9,7 +9,7 @@ RCU (read-copy update) is a synchronization mechanism that can be thought
of as a replacement for read-writer locking (among other things), but with
very low-overhead readers that are immune to deadlock, priority inversion,
and unbounded latency. RCU read-side critical sections are delimited
-by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPT
+by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPTION
kernels, generate no code whatsoever.

This means that RCU writers are unaware of the presence of concurrent
@@ -329,10 +329,10 @@ Answer: This cannot happen. The reason is that on_each_cpu() has its last
	to smp_call_function() and further to smp_call_function_on_cpu(),
	causing this latter to spin until the cross-CPU invocation of
	rcu_barrier_func() has completed. This by itself would prevent
-	a grace period from completing on non-CONFIG_PREEMPT kernels,
+	a grace period from completing on non-CONFIG_PREEMPTION kernels,
	since each CPU must undergo a context switch (or other quiescent
	state) before the grace period can complete. However, this is
-	of no use in CONFIG_PREEMPT kernels.
+	of no use in CONFIG_PREEMPTION kernels.

	Therefore, on_each_cpu() disables preemption across its call
	to smp_call_function() and also across the local call to
+24 −3
@@ -25,7 +25,7 @@ warnings:

-	A CPU looping with bottom halves disabled.

--	For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the kernel
+-	For !CONFIG_PREEMPTION kernels, a CPU looping anywhere in the kernel
	without invoking schedule().  If the looping in the kernel is
	really expected and desirable behavior, you might need to add
	some calls to cond_resched().
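
The cond_resched() advice above might look like the following in a
long-running loop (process_many_items(), handle_item(), and struct item
are hypothetical; cond_resched() is the standard kernel API):

```c
/*
 * Sketch: on a !CONFIG_PREEMPTION kernel, a long-running loop in the
 * kernel should periodically invoke cond_resched() so the CPU can pass
 * through a quiescent state rather than triggering RCU stall warnings.
 */
static void process_many_items(struct item *items, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		handle_item(&items[i]);	/* hypothetical per-item work */
		cond_resched();
	}
}
```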
@@ -44,7 +44,7 @@ warnings:
	result in the ``rcu_.*kthread starved for`` console-log message,
	which will include additional debugging information.

--	A CPU-bound real-time task in a CONFIG_PREEMPT kernel, which might
+-	A CPU-bound real-time task in a CONFIG_PREEMPTION kernel, which might
	happen to preempt a low-priority task in the middle of an RCU
	read-side critical section.   This is especially damaging if
	that low-priority task is not permitted to run on any other CPU,
@@ -92,7 +92,9 @@ warnings:
	buggy timer hardware through bugs in the interrupt or exception
	path (whether hardware, firmware, or software) through bugs
	in Linux's timer subsystem through bugs in the scheduler, and,
-	yes, even including bugs in RCU itself.
+	yes, even including bugs in RCU itself.  It can also result in
+	the ``rcu_.*timer wakeup didn't happen for`` console-log message,
+	which will include additional debugging information.

-	A bug in the RCU implementation.

@@ -292,6 +294,25 @@ kthread is waiting for a short timeout, the "state" precedes value of the
task_struct ->state field, and the "cpu" indicates that the grace-period
kthread last ran on CPU 5.

+If the relevant grace-period kthread does not wake from FQS wait in a
+reasonable time, then the following additional line is printed::
+
+	kthread timer wakeup didn't happen for 23804 jiffies! g7076 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
+
+The "23804" indicates that kthread's timer expired more than 23 thousand
+jiffies ago.  The rest of the line has meaning similar to the kthread
+starvation case.
+
+Additionally, the following line is printed::
+
+	Possible timer handling issue on cpu=4 timer-softirq=11142
+
+Here "cpu" indicates that the grace-period kthread last ran on CPU 4,
+where it queued the fqs timer.  The number following the "timer-softirq"
+is the current ``TIMER_SOFTIRQ`` count on cpu 4.  If this value does not
+change on successive RCU CPU stall warnings, there is further reason to
+suspect a timer problem.
+

Multiple Warnings From One Stall
================================