Commit 165d1cc0 authored by Luis R. Rodriguez, committed by Jessica Yu

kmod: reduce atomic operations on kmod_concurrent and simplify

When checking if we want to allow a kmod thread to kick off, we increment,
then read to see if we should enable a thread. If we were over the allowed
limit, we decrement. Splitting the increment far apart from the decrement
means there can be a window in which two increments happen, potentially
giving a false failure on a thread which should have been allowed.

CPU1			CPU2
atomic_inc()
			atomic_inc()
atomic_read()
			atomic_read()
atomic_dec()
			atomic_dec()

In this case the read on CPU1 observes both atomic_inc()s and we can wrongly
deny it a kmod thread: for example, with a limit of 50 and 49 kmod users
already active, both requests fail even though one of them should have fit.
We could try to prevent this with a lock or by disabling preemption, but that
is overkill. We can fix it by reducing the number of atomic operations. We do
this by inverting the logic of the enabler: instead of incrementing
kmod_concurrent as we get new kmod users, define the variable
kmod_concurrent_max as the maximum number of currently allowed kmod users,
and for each new kmod user just decrement it if it is still positive.
This combines the dec and the read into one atomic operation.

In this case we no longer get the same false failure:

CPU1			CPU2
atomic_dec_if_positive()
			atomic_dec_if_positive()
atomic_inc()
			atomic_inc()

The number of threads is computed at init, and since the current computation
of kmod_concurrent includes the thread count, we can avoid setting
kmod_concurrent_max later in boot through an init call by simply sticking
with 50 as the kmod_concurrent_max. The assumption here is that a system with
modules must have at least ~16 MiB of RAM.
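[Editorial illustration, not part of the commit: a minimal userspace sketch of the
inverted check using C11 atomics. dec_if_positive() and request_module_slot() are
hypothetical stand-ins for the kernel's atomic_dec_if_positive() and the request
path; the limit of 50 is the default mentioned above.]

	#include <stdatomic.h>
	#include <stdio.h>

	#define MAX_KMOD_CONCURRENT 50		/* default limit from the commit */

	static atomic_int kmod_concurrent_max = MAX_KMOD_CONCURRENT;

	/*
	 * Decrement only if the value is still positive; mirrors the semantics of
	 * the kernel's atomic_dec_if_positive(). Returns the new value, which is
	 * negative when no slot was available.
	 */
	static int dec_if_positive(atomic_int *v)
	{
		int old = atomic_load(v);

		while (old > 0 &&
		       !atomic_compare_exchange_weak(v, &old, old - 1))
			;	/* a failed CAS reloads old; retry while positive */

		return old - 1;
	}

	static int request_module_slot(const char *name)
	{
		/* one atomic op both claims a slot and checks the limit */
		if (dec_if_positive(&kmod_concurrent_max) < 0) {
			fprintf(stderr, "%s: kmod_concurrent_max exhausted\n", name);
			return -1;
		}

		/* ... the real code would kick off modprobe here ... */

		atomic_fetch_add(&kmod_concurrent_max, 1);	/* release the slot */
		return 0;
	}

	int main(void)
	{
		return request_module_slot("example_mod");
	}

With this shape, two racing callers each either win a slot or see a negative
result; there is no window where both back off because of each other's
increment.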

Suggested-by: Petr Mladek <pmladek@suse.com>
Suggested-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Jessica Yu <jeyu@kernel.org>
parent 93437353