  Mar 06, 2021
    • sched/topology: fix the issue groups don't span domain->span for NUMA diameter > 2 · 585b6d27
      Barry Song authored
      
      
      As long as the NUMA diameter is > 2, building sched_domains from a
      sibling's child domain will inevitably create a sched_domain whose
      sched_group spans outside the sched_domain itself:
      
                     +------+         +------+        +-------+       +------+
                     | node |  12     |node  | 20     | node  |  12   |node  |
                     |  0   +---------+1     +--------+ 2     +-------+3     |
                     +------+         +------+        +-------+       +------+
      
      domain0        node0            node1            node2          node3
      
      domain1        node0+1          node0+1          node2+3        node2+3
                                                       +
      domain2        node0+1+2                         |
                   group: node0+1                      |
                     group:node2+3 <-------------------+
      
      When node2 is added into the domain2 of node0, the kernel uses the child
      domain of node2's domain2, which is domain1 (node2+3). Node3 is therefore
      outside the span of the domain, which includes only node0+1+2.
      
      This makes load_balance() run based on a skewed avg_load and group_type
      in a sched_group spanning outside the sched_domain, and it also makes
      select_task_rq_fair() pick an idle CPU outside the sched_domain.
      
      Real servers which suffer from this problem include Kunpeng920 and 8-node
      Sun Fire X4600-M2, at least.
      
      Here we move to using the *child* domain of the *child* domain of node2's
      domain2 as the newly added sched_group. At the same time, we re-use the
      lower-level sgc directly.

                     +------+         +------+        +-------+       +------+
                     | node |  12     |node  | 20     | node  |  12   |node  |
                     |  0   +---------+1     +--------+ 2     +-------+3     |
                     +------+         +------+        +-------+       +------+
      
      domain0        node0            node1          +- node2          node3
                                                     |
      domain1        node0+1          node0+1        | node2+3        node2+3
                                                     |
      domain2        node0+1+2                       |
                   group: node0+1                    |
                     group:node2 <-------------------+
      
      While the lower-level sgc is re-used, this patch only changes the remote
      sched_groups for those sched_domains playing the grandchild trick.
      Therefore, sgc->next_update is still safe, since it is only touched by
      CPUs that have the group span as their local group. And sgc->imbalance
      is also safe, because sd_parent remains the same in load_balance() and
      LB only tries other CPUs from the local group.
      Moreover, since local groups are not touched, they still get roughly
      equal size within a TL. And should_we_balance() only matters for local
      groups, so the pull probability of those groups is still roughly equal.
      
      Tested with the below topology:
      qemu-system-aarch64  -M virt -nographic \
       -smp cpus=8 \
       -numa node,cpus=0-1,nodeid=0 \
       -numa node,cpus=2-3,nodeid=1 \
       -numa node,cpus=4-5,nodeid=2 \
       -numa node,cpus=6-7,nodeid=3 \
       -numa dist,src=0,dst=1,val=12 \
       -numa dist,src=0,dst=2,val=20 \
       -numa dist,src=0,dst=3,val=22 \
       -numa dist,src=1,dst=2,val=22 \
       -numa dist,src=2,dst=3,val=12 \
       -numa dist,src=1,dst=3,val=24 \
       -m 4G -cpu cortex-a57 -kernel arch/arm64/boot/Image
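
      For illustration, here is a small user-space sketch (not kernel code; the
      distance matrix is transcribed from the -numa dist options above) of how
      the per-node spans fall out of grouping nodes by increasing distance:

       #include <stdio.h>

       #define N 4

       int main(void)
       {
               /* Distances from the -numa dist options above (10 = local). */
               static const int dist[N][N] = {
                       { 10, 12, 20, 22 },
                       { 12, 10, 22, 24 },
                       { 20, 22, 10, 12 },
                       { 22, 24, 12, 10 },
               };
               /* One NUMA level per unique distance value in the matrix. */
               static const int levels[] = { 10, 12, 20, 22, 24 };
               int node, l, other;

               for (node = 0; node < N; node++) {
                       printf("node%d:", node);
                       for (l = 0; l < (int)(sizeof(levels) / sizeof(levels[0])); l++) {
                               printf("  <=%d {", levels[l]);
                               for (other = 0; other < N; other++)
                                       if (dist[node][other] <= levels[l])
                                               printf(" %d", other);
                               printf(" }");
                       }
                       printf("\n");
               }
               return 0;
       }

      node0 reaches {0}, {0,1}, {0,1,2} and {0,1,2,3} as the threshold grows,
      matching the domain0..domain2 diagram above, while node2 at <=20 reaches
      {0,2,3}, which is why its groups cannot simply be borrowed by node0.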
      
      w/o patch, we get lots of "groups don't span domain->span":
      [    0.802139] CPU0 attaching sched-domain(s):
      [    0.802193]  domain-0: span=0-1 level=MC
      [    0.802443]   groups: 0:{ span=0 cap=1013 }, 1:{ span=1 cap=979 }
      [    0.802693]   domain-1: span=0-3 level=NUMA
      [    0.802731]    groups: 0:{ span=0-1 cap=1992 }, 2:{ span=2-3 cap=1943 }
      [    0.802811]    domain-2: span=0-5 level=NUMA
      [    0.802829]     groups: 0:{ span=0-3 cap=3935 }, 4:{ span=4-7 cap=3937 }
      [    0.802881] ERROR: groups don't span domain->span
      [    0.803058]     domain-3: span=0-7 level=NUMA
      [    0.803080]      groups: 0:{ span=0-5 mask=0-1 cap=5843 }, 6:{ span=4-7 mask=6-7 cap=4077 }
      [    0.804055] CPU1 attaching sched-domain(s):
      [    0.804072]  domain-0: span=0-1 level=MC
      [    0.804096]   groups: 1:{ span=1 cap=979 }, 0:{ span=0 cap=1013 }
      [    0.804152]   domain-1: span=0-3 level=NUMA
      [    0.804170]    groups: 0:{ span=0-1 cap=1992 }, 2:{ span=2-3 cap=1943 }
      [    0.804219]    domain-2: span=0-5 level=NUMA
      [    0.804236]     groups: 0:{ span=0-3 cap=3935 }, 4:{ span=4-7 cap=3937 }
      [    0.804302] ERROR: groups don't span domain->span
      [    0.804520]     domain-3: span=0-7 level=NUMA
      [    0.804546]      groups: 0:{ span=0-5 mask=0-1 cap=5843 }, 6:{ span=4-7 mask=6-7 cap=4077 }
      [    0.804677] CPU2 attaching sched-domain(s):
      [    0.804687]  domain-0: span=2-3 level=MC
      [    0.804705]   groups: 2:{ span=2 cap=934 }, 3:{ span=3 cap=1009 }
      [    0.804754]   domain-1: span=0-3 level=NUMA
      [    0.804772]    groups: 2:{ span=2-3 cap=1943 }, 0:{ span=0-1 cap=1992 }
      [    0.804820]    domain-2: span=0-5 level=NUMA
      [    0.804836]     groups: 2:{ span=0-3 mask=2-3 cap=3991 }, 4:{ span=0-1,4-7 mask=4-5 cap=5985 }
      [    0.804944] ERROR: groups don't span domain->span
      [    0.805108]     domain-3: span=0-7 level=NUMA
      [    0.805134]      groups: 2:{ span=0-5 mask=2-3 cap=5899 }, 6:{ span=0-1,4-7 mask=6-7 cap=6125 }
      [    0.805223] CPU3 attaching sched-domain(s):
      [    0.805232]  domain-0: span=2-3 level=MC
      [    0.805249]   groups: 3:{ span=3 cap=1009 }, 2:{ span=2 cap=934 }
      [    0.805319]   domain-1: span=0-3 level=NUMA
      [    0.805336]    groups: 2:{ span=2-3 cap=1943 }, 0:{ span=0-1 cap=1992 }
      [    0.805383]    domain-2: span=0-5 level=NUMA
      [    0.805399]     groups: 2:{ span=0-3 mask=2-3 cap=3991 }, 4:{ span=0-1,4-7 mask=4-5 cap=5985 }
      [    0.805458] ERROR: groups don't span domain->span
      [    0.805605]     domain-3: span=0-7 level=NUMA
      [    0.805626]      groups: 2:{ span=0-5 mask=2-3 cap=5899 }, 6:{ span=0-1,4-7 mask=6-7 cap=6125 }
      [    0.805712] CPU4 attaching sched-domain(s):
      [    0.805721]  domain-0: span=4-5 level=MC
      [    0.805738]   groups: 4:{ span=4 cap=984 }, 5:{ span=5 cap=924 }
      [    0.805787]   domain-1: span=4-7 level=NUMA
      [    0.805803]    groups: 4:{ span=4-5 cap=1908 }, 6:{ span=6-7 cap=2029 }
      [    0.805851]    domain-2: span=0-1,4-7 level=NUMA
      [    0.805867]     groups: 4:{ span=4-7 cap=3937 }, 0:{ span=0-3 cap=3935 }
      [    0.805915] ERROR: groups don't span domain->span
      [    0.806108]     domain-3: span=0-7 level=NUMA
      [    0.806130]      groups: 4:{ span=0-1,4-7 mask=4-5 cap=5985 }, 2:{ span=0-3 mask=2-3 cap=3991 }
      [    0.806214] CPU5 attaching sched-domain(s):
      [    0.806222]  domain-0: span=4-5 level=MC
      [    0.806240]   groups: 5:{ span=5 cap=924 }, 4:{ span=4 cap=984 }
      [    0.806841]   domain-1: span=4-7 level=NUMA
      [    0.806866]    groups: 4:{ span=4-5 cap=1908 }, 6:{ span=6-7 cap=2029 }
      [    0.806934]    domain-2: span=0-1,4-7 level=NUMA
      [    0.806953]     groups: 4:{ span=4-7 cap=3937 }, 0:{ span=0-3 cap=3935 }
      [    0.807004] ERROR: groups don't span domain->span
      [    0.807312]     domain-3: span=0-7 level=NUMA
      [    0.807386]      groups: 4:{ span=0-1,4-7 mask=4-5 cap=5985 }, 2:{ span=0-3 mask=2-3 cap=3991 }
      [    0.807686] CPU6 attaching sched-domain(s):
      [    0.807710]  domain-0: span=6-7 level=MC
      [    0.807750]   groups: 6:{ span=6 cap=1017 }, 7:{ span=7 cap=1012 }
      [    0.807840]   domain-1: span=4-7 level=NUMA
      [    0.807870]    groups: 6:{ span=6-7 cap=2029 }, 4:{ span=4-5 cap=1908 }
      [    0.807952]    domain-2: span=0-1,4-7 level=NUMA
      [    0.807985]     groups: 6:{ span=4-7 mask=6-7 cap=4077 }, 0:{ span=0-5 mask=0-1 cap=5843 }
      [    0.808045] ERROR: groups don't span domain->span
      [    0.808257]     domain-3: span=0-7 level=NUMA
      [    0.808571]      groups: 6:{ span=0-1,4-7 mask=6-7 cap=6125 }, 2:{ span=0-5 mask=2-3 cap=5899 }
      [    0.808848] CPU7 attaching sched-domain(s):
      [    0.808860]  domain-0: span=6-7 level=MC
      [    0.808880]   groups: 7:{ span=7 cap=1012 }, 6:{ span=6 cap=1017 }
      [    0.808953]   domain-1: span=4-7 level=NUMA
      [    0.808974]    groups: 6:{ span=6-7 cap=2029 }, 4:{ span=4-5 cap=1908 }
      [    0.809034]    domain-2: span=0-1,4-7 level=NUMA
      [    0.809055]     groups: 6:{ span=4-7 mask=6-7 cap=4077 }, 0:{ span=0-5 mask=0-1 cap=5843 }
      [    0.809128] ERROR: groups don't span domain->span
      [    0.810361]     domain-3: span=0-7 level=NUMA
      [    0.810400]      groups: 6:{ span=0-1,4-7 mask=6-7 cap=5961 }, 2:{ span=0-5 mask=2-3 cap=5903 }
      
      w/ patch, we don't get "groups don't span domain->span" any more:
      [    1.486271] CPU0 attaching sched-domain(s):
      [    1.486820]  domain-0: span=0-1 level=MC
      [    1.500924]   groups: 0:{ span=0 cap=980 }, 1:{ span=1 cap=994 }
      [    1.515717]   domain-1: span=0-3 level=NUMA
      [    1.515903]    groups: 0:{ span=0-1 cap=1974 }, 2:{ span=2-3 cap=1989 }
      [    1.516989]    domain-2: span=0-5 level=NUMA
      [    1.517124]     groups: 0:{ span=0-3 cap=3963 }, 4:{ span=4-5 cap=1949 }
      [    1.517369]     domain-3: span=0-7 level=NUMA
      [    1.517423]      groups: 0:{ span=0-5 mask=0-1 cap=5912 }, 6:{ span=4-7 mask=6-7 cap=4054 }
      [    1.520027] CPU1 attaching sched-domain(s):
      [    1.520097]  domain-0: span=0-1 level=MC
      [    1.520184]   groups: 1:{ span=1 cap=994 }, 0:{ span=0 cap=980 }
      [    1.520429]   domain-1: span=0-3 level=NUMA
      [    1.520487]    groups: 0:{ span=0-1 cap=1974 }, 2:{ span=2-3 cap=1989 }
      [    1.520687]    domain-2: span=0-5 level=NUMA
      [    1.520744]     groups: 0:{ span=0-3 cap=3963 }, 4:{ span=4-5 cap=1949 }
      [    1.520948]     domain-3: span=0-7 level=NUMA
      [    1.521038]      groups: 0:{ span=0-5 mask=0-1 cap=5912 }, 6:{ span=4-7 mask=6-7 cap=4054 }
      [    1.522068] CPU2 attaching sched-domain(s):
      [    1.522348]  domain-0: span=2-3 level=MC
      [    1.522606]   groups: 2:{ span=2 cap=1003 }, 3:{ span=3 cap=986 }
      [    1.522832]   domain-1: span=0-3 level=NUMA
      [    1.522885]    groups: 2:{ span=2-3 cap=1989 }, 0:{ span=0-1 cap=1974 }
      [    1.523043]    domain-2: span=0-5 level=NUMA
      [    1.523092]     groups: 2:{ span=0-3 mask=2-3 cap=4037 }, 4:{ span=4-5 cap=1949 }
      [    1.523302]     domain-3: span=0-7 level=NUMA
      [    1.523352]      groups: 2:{ span=0-5 mask=2-3 cap=5986 }, 6:{ span=0-1,4-7 mask=6-7 cap=6102 }
      [    1.523748] CPU3 attaching sched-domain(s):
      [    1.523774]  domain-0: span=2-3 level=MC
      [    1.523825]   groups: 3:{ span=3 cap=986 }, 2:{ span=2 cap=1003 }
      [    1.524009]   domain-1: span=0-3 level=NUMA
      [    1.524086]    groups: 2:{ span=2-3 cap=1989 }, 0:{ span=0-1 cap=1974 }
      [    1.524281]    domain-2: span=0-5 level=NUMA
      [    1.524331]     groups: 2:{ span=0-3 mask=2-3 cap=4037 }, 4:{ span=4-5 cap=1949 }
      [    1.524534]     domain-3: span=0-7 level=NUMA
      [    1.524586]      groups: 2:{ span=0-5 mask=2-3 cap=5986 }, 6:{ span=0-1,4-7 mask=6-7 cap=6102 }
      [    1.524847] CPU4 attaching sched-domain(s):
      [    1.524873]  domain-0: span=4-5 level=MC
      [    1.524954]   groups: 4:{ span=4 cap=958 }, 5:{ span=5 cap=991 }
      [    1.525105]   domain-1: span=4-7 level=NUMA
      [    1.525153]    groups: 4:{ span=4-5 cap=1949 }, 6:{ span=6-7 cap=2006 }
      [    1.525368]    domain-2: span=0-1,4-7 level=NUMA
      [    1.525428]     groups: 4:{ span=4-7 cap=3955 }, 0:{ span=0-1 cap=1974 }
      [    1.532726]     domain-3: span=0-7 level=NUMA
      [    1.532811]      groups: 4:{ span=0-1,4-7 mask=4-5 cap=6003 }, 2:{ span=0-3 mask=2-3 cap=4037 }
      [    1.534125] CPU5 attaching sched-domain(s):
      [    1.534159]  domain-0: span=4-5 level=MC
      [    1.534303]   groups: 5:{ span=5 cap=991 }, 4:{ span=4 cap=958 }
      [    1.534490]   domain-1: span=4-7 level=NUMA
      [    1.534572]    groups: 4:{ span=4-5 cap=1949 }, 6:{ span=6-7 cap=2006 }
      [    1.534734]    domain-2: span=0-1,4-7 level=NUMA
      [    1.534783]     groups: 4:{ span=4-7 cap=3955 }, 0:{ span=0-1 cap=1974 }
      [    1.536057]     domain-3: span=0-7 level=NUMA
      [    1.536430]      groups: 4:{ span=0-1,4-7 mask=4-5 cap=6003 }, 2:{ span=0-3 mask=2-3 cap=3896 }
      [    1.536815] CPU6 attaching sched-domain(s):
      [    1.536846]  domain-0: span=6-7 level=MC
      [    1.536934]   groups: 6:{ span=6 cap=1005 }, 7:{ span=7 cap=1001 }
      [    1.537144]   domain-1: span=4-7 level=NUMA
      [    1.537262]    groups: 6:{ span=6-7 cap=2006 }, 4:{ span=4-5 cap=1949 }
      [    1.537553]    domain-2: span=0-1,4-7 level=NUMA
      [    1.537613]     groups: 6:{ span=4-7 mask=6-7 cap=4054 }, 0:{ span=0-1 cap=1805 }
      [    1.537872]     domain-3: span=0-7 level=NUMA
      [    1.537998]      groups: 6:{ span=0-1,4-7 mask=6-7 cap=6102 }, 2:{ span=0-5 mask=2-3 cap=5845 }
      [    1.538448] CPU7 attaching sched-domain(s):
      [    1.538505]  domain-0: span=6-7 level=MC
      [    1.538586]   groups: 7:{ span=7 cap=1001 }, 6:{ span=6 cap=1005 }
      [    1.538746]   domain-1: span=4-7 level=NUMA
      [    1.538798]    groups: 6:{ span=6-7 cap=2006 }, 4:{ span=4-5 cap=1949 }
      [    1.539048]    domain-2: span=0-1,4-7 level=NUMA
      [    1.539111]     groups: 6:{ span=4-7 mask=6-7 cap=4054 }, 0:{ span=0-1 cap=1805 }
      [    1.539571]     domain-3: span=0-7 level=NUMA
      [    1.539610]      groups: 6:{ span=0-1,4-7 mask=6-7 cap=6102 }, 2:{ span=0-5 mask=2-3 cap=5845 }
      
      Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
      Tested-by: Meelis Roos <mroos@linux.ee>
      Link: https://lkml.kernel.org/r/20210224030944.15232-1-song.bao.hua@hisilicon.com
      585b6d27
    • cpu/hotplug: Add cpuhp_invoke_callback_range() · 453e4108
      Vincent Donnefort authored
      
      
      Factorize and unify the cpuhp callback range invocations, especially for
      the hotunplug path, where two different ways of decrementing were used.
      The first one decrements before the callback is called:
      
       cpuhp_thread_fun()
           state = st->state;
           st->state--;
           cpuhp_invoke_callback(state);
      
      The second one decrements after:
      
       take_down_cpu()|cpuhp_down_callbacks()
           cpuhp_invoke_callback(st->state);
           st->state--;
      
      This is problematic for rolling back the steps in case of error, as,
      depending on the decrement, the rollback will start from N or N-1. It
      also makes tracing inconsistent between steps run in the cpuhp thread
      and the others.
      
      Additionally, avoid useless cpuhp_thread_fun() loops by skipping empty
      steps.
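
      For illustration, a user-space model of the unified walk (names are
      illustrative, not the exact kernel code):

       typedef int (*cb_t)(void);

       /*
        * Walk the states between *state and target through one helper, so
        * the cpuhp-thread path and the direct path step the state the same
        * way: bringup commits the step before running its callback, while
        * teardown runs the current state's callback and then steps. NULL
        * (empty) steps are skipped instead of looping through the thread.
        */
       static int invoke_callback_range(int *state, int target, cb_t cbs[])
       {
               int err = 0;

               while (!err && *state != target) {
                       int run = (target > *state) ? ++(*state) : (*state)--;

                       if (cbs[run])
                               err = cbs[run]();
               }
               return err; /* on error, *state marks where rollback starts */
       }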
      
      Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Link: https://lkml.kernel.org/r/20210216103506.416286-4-vincent.donnefort@arm.com
      453e4108
    • cpu/hotplug: CPUHP_BRINGUP_CPU failure exception · 62f25069
      Vincent Donnefort authored
      
      
      The atomic states (between CPUHP_AP_IDLE_DEAD and CPUHP_AP_ONLINE) are
      triggered by the CPUHP_BRINGUP_CPU step. If the latter fails, no atomic
      state can be rolled back.
      
      DEAD callbacks, too, can't fail and don't allow recovery. As a
      consequence, during hotunplug, the fail injection interface should
      prohibit all states from CPUHP_BRINGUP_CPU to CPUHP_ONLINE.
      
      Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Link: https://lkml.kernel.org/r/20210216103506.416286-3-vincent.donnefort@arm.com
      62f25069
    • cpu/hotplug: Allowing to reset fail injection · 3ae70c25
      Vincent Donnefort authored
      
      
      Currently, the only way of resetting the fail injection is to trigger a
      hotplug, hotunplug or both. This is rather annoying for testing
      and, as the default value for this file is -1, it seems pretty natural to
      let a user write it.
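
      For example, a minimal user-space snippet resetting cpu1's fail
      injection by writing the default back (path per the cpu-hotplug sysfs
      interface):

       #include <stdio.h>

       int main(void)
       {
               FILE *f = fopen("/sys/devices/system/cpu/cpu1/hotplug/fail", "w");

               if (!f) {
                       perror("fopen");
                       return 1;
               }
               fprintf(f, "-1\n");
               return fclose(f) ? 1 : 0;
       }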
      
      Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Link: https://lkml.kernel.org/r/20210216103506.416286-2-vincent.donnefort@arm.com
      3ae70c25
    • sched/pelt: Fix task util_est update filtering · b89997aa
      Vincent Donnefort authored
      
      
      Since it is called for each dequeue, util_est reduces the number of its
      updates by filtering out cases where the EWMA signal differs from the
      task's util_avg by less than 1%. This is a problem for a sudden util_avg
      ramp-up: due to the decay from a previous high util_avg, the EWMA might
      now be close enough to the new util_avg that no update happens, leaving
      ue.enqueued with an out-of-date value.
      
      Taking both util_est members, EWMA and enqueued, into consideration for
      the filtering ensures an up-to-date value for each of them.
      
      For now this is only an issue for the trace probe, which might return
      the stale value. Functionally it isn't a problem, as the value is always
      accessed through max(enqueued, ewma).
      
      This problem has been observed using LISA's UtilConvergence:test_means on
      the sd845c board.
      
      No regression observed with Hackbench on sd845c and Perf-bench sched pipe
      on hikey/hikey960.
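
      A sketch of the adjusted filter (within_margin() as found in
      kernel/sched/fair.c; the margin constant and the two diff variables are
      illustrative of the patch, details may differ):

       static inline bool within_margin(int value, int margin)
       {
               return ((unsigned int)(value + margin - 1) < (2 * margin - 1));
       }

       /*
        * Only skip the util_est update when *both* members are ~1% close to
        * the task's current utilization, so a stale ue.enqueued can no
        * longer hide behind an EWMA that decayed back into the window.
        */
       if (within_margin(last_ewma_diff, UTIL_EST_MARGIN) &&
           within_margin(last_enqueued_diff, UTIL_EST_MARGIN))
               goto done;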
      
      Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
      Link: https://lkml.kernel.org/r/20210225165820.1377125-1-vincent.donnefort@arm.com
      b89997aa
    • sched/fair: Fix shift-out-of-bounds in load_balance() · 39a2a6eb
      Valentin Schneider authored
      Syzbot reported a handful of occurrences where an sd->nr_balance_failed can
      grow to much higher values than one would expect.
      
      A successful load_balance() resets it to 0; a failed one increments
      it. Once it gets to sd->cache_nice_tries + 3, this *should* trigger an
      active balance, which will either set it to sd->cache_nice_tries+1 or reset
      it to 0. However, in case the to-be-active-balanced task is not allowed to
      run on env->dst_cpu, then the increment is done without any further
      modification.
      
      This could then be repeated ad nauseam, and would explain the absurdly
      high values reported by syzbot (86, 149). VincentG noted there is value
      in letting sd->nr_balance_failed grow, so the shift itself should be
      fixed. That means preventing:
      
        """
        If the value of the right operand is negative or is greater than or equal
        to the width of the promoted left operand, the behavior is undefined.
        """
      
      Thus we need to cap the shift exponent to
        BITS_PER_TYPE(typeof(lefthand)) - 1.
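
      A sketch of such a capped shift (the fix adds a kernel helper along
      these lines; this is a user-space rendition):

       #include <limits.h>

       #define BITS_PER_TYPE(type)  (sizeof(type) * CHAR_BIT)

       /* Shift right, clamping the exponent so the shift is never undefined. */
       #define shr_bound(val, shift)                                       \
               ((val) >> ((shift) < BITS_PER_TYPE(typeof(val)) - 1 ?       \
                          (shift) : BITS_PER_TYPE(typeof(val)) - 1))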
      
      I had a look around for other similar cases via coccinelle:
      
        @expr@
        position pos;
        expression E1;
        expression E2;
        @@
        (
        E1 >> E2@pos
        |
  E1 << E2@pos
        )
      
        @cst depends on expr@
        position pos;
        expression expr.E1;
        constant cst;
        @@
        (
        E1 >> cst@pos
        |
        E1 << cst@pos
        )
      
        @script:python depends on !cst@
        pos << expr.pos;
        exp << expr.E2;
        @@
        # Dirty hack to ignore constexpr
        if exp.upper() != exp:
           coccilib.report.print_report(pos[0], "Possible UB shift here")
      
      The only other match in kernel/sched is rq_clock_thermal() which employs
      sched_thermal_decay_shift, and that exponent is already capped to 10, so
      that one is fine.
      
      Fixes: 5a7f5559 ("sched/fair: Relax constraint on task's load during load balance")
      Reported-by: syzbot+d7581744d5fd27c9fbe1@syzkaller.appspotmail.com
      Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Link: http://lore.kernel.org/r/000000000000ffac1205b9a2112f@google.com
      39a2a6eb
    • sched/fair: use lsub_positive in cpu_util_next() · 736cc6b3
      Vincent Donnefort authored
      
      
      The local version of sub_positive(), lsub_positive(), saves an explicit
      load-store and is enough for the cpu_util_next() usage.
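
      For reference, the local helper is along these lines (as in
      kernel/sched/fair.c; it reads *_ptr only once and clamps at zero):

       /* Subtract _val from *_ptr without underflowing below zero. */
       #define lsub_positive(_ptr, _val) do {                          \
               typeof(_ptr) ptr = (_ptr);                              \
               *ptr -= min_t(typeof(*ptr), *ptr, _val);                \
       } while (0)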
      
      Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Quentin Perret <qperret@google.com>
      Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Link: https://lkml.kernel.org/r/20210225083612.1113823-3-vincent.donnefort@arm.com
      736cc6b3
    • sched/fair: Fix task utilization accountability in compute_energy() · 0372e1cf
      Vincent Donnefort authored
      
      
      find_energy_efficient_cpu() (feec()) computes for each perf_domain (pd) an
      energy delta as follows:
      
        feec(task)
          for_each_pd
            base_energy = compute_energy(task, -1, pd)
              -> for_each_cpu(pd)
                 -> cpu_util_next(cpu, task, -1)
      
            energy_delta = compute_energy(task, dst_cpu, pd)
              -> for_each_cpu(pd)
                 -> cpu_util_next(cpu, task, dst_cpu)
            energy_delta -= base_energy
      
      Then it picks the best CPU as being the one that minimizes energy_delta.
      
      cpu_util_next() estimates the CPU utilization that would happen if the
      task was placed on dst_cpu as follows:
      
        max(cpu_util + task_util, cpu_util_est + _task_util_est)
      
      The task contribution to the energy delta can then be either:
      
        (1) _task_util_est, on a mostly idle CPU, where cpu_util is close to 0
            and _task_util_est > cpu_util.
        (2) task_util, on a mostly busy CPU, where cpu_util > _task_util_est.
      
        (cpu_util_est doesn't appear here. It is 0 when a CPU is idle and
         otherwise must be small enough so that feec() takes the CPU as a
         potential target for the task placement)
      
      This is problematic for feec(), as cpu_util_next() might give an unfair
      advantage to a CPU which is mostly busy (2) compared to one which is
      mostly idle (1). Since _task_util_est is always bigger than task_util in
      feec() (as the task is waking up), the task contribution to the energy
      might look smaller on certain CPUs (2), and this breaks the energy
      comparison.
      
      This issue is, moreover, not sporadic. By starving idle CPUs, it keeps
      their cpu_util < _task_util_est (1) while others will maintain cpu_util >
      _task_util_est (2).
      
      Fix this problem by always using max(task_util, _task_util_est) as a task
      contribution to the energy (ENERGY_UTIL). The new estimated CPU
      utilization for the energy would then be:
      
        max(cpu_util, cpu_util_est) + max(task_util, _task_util_est)
      
      compute_energy() still needs to know which OPP would be selected if the
      task would be migrated in the perf_domain (FREQUENCY_UTIL). Hence,
      cpu_util_next() is still used to estimate the maximum util within the pd.
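
      Illustration (the helper name is hypothetical, not the kernel's): with
      the fix, the task-side term of the energy estimate becomes

       static unsigned long task_energy_contrib(unsigned long task_util,
                                                unsigned long task_util_est)
       {
               /* max(task_util, _task_util_est): identical on every CPU */
               return task_util > task_util_est ? task_util : task_util_est;
       }

      so the task contributes the same amount on every candidate pd, and only
      the CPU-side utilization differentiates the candidates.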
      
      Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Quentin Perret <qperret@google.com>
      Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Link: https://lkml.kernel.org/r/20210225083612.1113823-2-vincent.donnefort@arm.com
      0372e1cf
    • sched/fair: Reduce the window for duplicated update · 39b6a429
      Vincent Guittot authored
      
      
      Update last_blocked_load_update_tick at the start of the update, to
      reduce the possibility of another CPU starting the update one more time.
      
      Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
      Link: https://lkml.kernel.org/r/20210224133007.28644-8-vincent.guittot@linaro.org
      39b6a429
    • sched/fair: Trigger the update of blocked load on newly idle cpu · c6f88654
      Vincent Guittot authored
      
      
      Instead of waking up a random and already idle CPU, we can take advantage
      of this_cpu being about to enter idle to run the ILB and update the
      blocked load.
      
      Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
      Link: https://lkml.kernel.org/r/20210224133007.28644-7-vincent.guittot@linaro.org
      c6f88654
    • sched/fair: Reorder newidle_balance pulled_task tests · 6553fc18
      Vincent Guittot authored
      
      
      Reorder the tests and skip the useless ones when no load balance has
      been performed and the rq lock has not been released.
      
      Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
      Link: https://lkml.kernel.org/r/20210224133007.28644-6-vincent.guittot@linaro.org
      6553fc18
    • sched/fair: Merge for each idle cpu loop of ILB · 7a82e5f5
      Vincent Guittot authored
      
      
      Remove the specific case for handling this_cpu outside the
      for_each_cpu() loop when running the ILB. Instead, we use
      for_each_cpu_wrap() and start with the next CPU after this_cpu, so we
      will still finish with this_cpu.

      update_nohz_stats() is now used for this_cpu too and prevents
      unnecessary updates. We don't need a special case for handling the
      update of nohz.next_balance for this_cpu anymore, because it is now
      handled by the loop like the others.
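
      A sketch of the merged loop (simplified; the balance/kick logic is
      elided and details may differ from the actual patch):

       /*
        * Start right after this_cpu and wrap around, so this_cpu is visited
        * last. update_nohz_stats() returns early when the blocked load is
        * recent enough, which now covers this_cpu like any other CPU.
        */
       for_each_cpu_wrap(balance_cpu, nohz.idle_cpus_mask, this_cpu + 1) {
               if (!idle_cpu(balance_cpu))
                       continue;

               update_nohz_stats(cpu_rq(balance_cpu));
               /* balance work and nohz.next_balance update elided */
       }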
      
      Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
      Link: https://lkml.kernel.org/r/20210224133007.28644-5-vincent.guittot@linaro.org
      7a82e5f5
    • sched/fair: Remove unused parameter of update_nohz_stats · 64f84f27
      Vincent Guittot authored
      
      
      The idle load balance is the only user of update_nohz_stats() and
      doesn't use the force parameter. Remove it.
      
      Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
      Link: https://lkml.kernel.org/r/20210224133007.28644-4-vincent.guittot@linaro.org
      64f84f27
    • sched/fair: Remove unused return of _nohz_idle_balance · ab2dde5e
      Vincent Guittot authored
    • sched/fair: Remove update of blocked load from newidle_balance · 0826530d
      Vincent Guittot authored
      
      
      newidle_balance() runs with both preemption and IRQs disabled, which
      prevents local IRQs from running during this period. The duration of
      updating the blocked load of CPUs varies according to the number of CPU
      cgroups with non-decayed load, and extends this critical period to an
      uncontrolled level.
      
      Remove the update from newidle_balance and trigger a normal ILB that
      will take care of the update instead.
      
      This reduces the IRQ latency from O(nr_cgroups * nr_nohz_cpus) to
      O(nr_cgroups).
      
      Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
      Link: https://lkml.kernel.org/r/20210224133007.28644-2-vincent.guittot@linaro.org
      0826530d
    • kcov: Remove kcov include from sched.h and move it to its users. · 183f47fc
      Sebastian Andrzej Siewior authored
      
      
      The recent addition of in_serving_softirq() to kcov.h results in a
      compile failure on PREEMPT_RT because it requires
      task_struct::softirq_disable_cnt. This is not available if kcov.h is
      included from sched.h.

      It is not needed to include kcov.h from sched.h. All but the net/ users
      already include the kcov header file.

      Move the include of the kcov.h header from sched.h to its users.
      Additionally, include sched.h from kcov.h to ensure that everything
      task_struct related is available.
      
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Johannes Berg <johannes@sipsolutions.net>
      Acked-by: Andrey Konovalov <andreyknvl@google.com>
      Link: https://lkml.kernel.org/r/20210218173124.iy5iyqv3a4oia4vv@linutronix.de
      183f47fc
    • sched: Simplify migration_cpu_stop() · e140749c
      Valentin Schneider authored
      
      
      When ->stop_pending, only the stopper can uninstall
      p->migration_pending. This simplifies a few ifs, because:
      
        (pending != NULL) => (pending == p->migration_pending)
      
      Also, the fatty comment above affine_move_task() probably needs a bit
      of gardening.
      
      Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e140749c
    • sched/membarrier: fix missing local execution of ipi_sync_rq_state() · ce29ddc4
      Mathieu Desnoyers authored
      The function sync_runqueues_membarrier_state() should copy the
      membarrier state from the @mm received as a parameter to each runqueue
      currently running tasks that use that mm.

      However, the use of smp_call_function_many() skips the current runqueue,
      which is unintended. Replace it with a call to on_each_cpu_mask().
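
      The relevant contrast between the two APIs (declarations as in
      <linux/smp.h>):

       /* Runs func on the CPUs in mask, but never on the calling CPU. */
       void smp_call_function_many(const struct cpumask *mask,
                                   smp_call_func_t func, void *info, bool wait);

       /* Runs func on every CPU in mask, including the calling CPU if set. */
       void on_each_cpu_mask(const struct cpumask *mask,
                             smp_call_func_t func, void *info, bool wait);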
      
      Fixes: 227a4aad ("sched/membarrier: Fix p->mm->membarrier_state racy load")
      Reported-by: Nadav Amit <nadav.amit@gmail.com>
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Cc: stable@vger.kernel.org # 5.4.x+
      Link: https://lore.kernel.org/r/74F1E842-4A84-47BF-B6C2-5407DFDD4A4A@gmail.com
      ce29ddc4
    • sched: Simplify set_affinity_pending refcounts · 50caf9c1
      Peter Zijlstra authored
      Now that we have set_affinity_pending::stop_pending to indicate whether
      a stopper is in progress, and we have the guarantee that if that stopper
      exists it will (eventually) complete our @pending, we can simplify the
      refcount scheme by no longer counting the stopper thread.
      
      Fixes: 6d337eab ("sched: Fix migrate_disable() vs set_cpus_allowed_ptr()")
      Cc: stable@kernel.org
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
      Link: https://lkml.kernel.org/r/20210224131355.724130207@infradead.org
      50caf9c1
    • sched: Fix affine_move_task() self-concurrency · 9e81889c
      Peter Zijlstra authored
      Consider:
      
         sched_setaffinity(p, X);		sched_setaffinity(p, Y);
      
      Then the first will install p->migration_pending = &my_pending; and
      issue stop_one_cpu_nowait(pending); and the second one will read
      p->migration_pending and _also_ issue: stop_one_cpu_nowait(pending),
      the _SAME_ @pending.
      
      This causes stopper list corruption.
      
      Add set_affinity_pending::stop_pending, to indicate if a stopper is in
      progress.
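
      A sketch of the guard (the field is per this patch; the surrounding
      code is simplified from affine_move_task()):

       if (!pending->stop_pending) {
               pending->stop_pending = true;
               stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
                                   &pending->arg, &pending->stop_work);
       }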
      
      Fixes: 6d337eab ("sched: Fix migrate_disable() vs set_cpus_allowed_ptr()")
      Cc: stable@kernel.org
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
      Link: https://lkml.kernel.org/r/20210224131355.649146419@infradead.org
      9e81889c
    • sched: Optimize migration_cpu_stop() · 3f1bc119
      Peter Zijlstra authored
      When the purpose of migration_cpu_stop() is to migrate the task to
      'any' valid CPU, don't migrate the task when it's already running on a
      valid CPU.
      
      Fixes: 6d337eab ("sched: Fix migrate_disable() vs set_cpus_allowed_ptr()")
      Cc: stable@kernel.org
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
      Link: https://lkml.kernel.org/r/20210224131355.569238629@infradead.org
      3f1bc119
    • sched: Collate affine_move_task() stoppers · 58b1a450
      Peter Zijlstra authored
      The SCA_MIGRATE_ENABLE and task_running() cases are almost identical,
      collapse them to avoid further duplication.
      
      Fixes: 6d337eab ("sched: Fix migrate_disable() vs set_cpus_allowed_ptr()")
      Cc: stable@kernel.org
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
      Link: https://lkml.kernel.org/r/20210224131355.500108964@infradead.org
      58b1a450
    • sched: Simplify migration_cpu_stop() · c20cf065
      Peter Zijlstra authored
      When affine_move_task() issues a migration_cpu_stop(), the purpose of
      that function is to complete that @pending, not any random other
      p->migration_pending that might have gotten installed since.
      
      This realization much simplifies migration_cpu_stop() and allows
      further necessary steps to fix all this as it provides the guarantee
      that @pending's stopper will complete @pending (and not some random
      other @pending).
      
      Fixes: 6d337eab ("sched: Fix migrate_disable() vs set_cpus_allowed_ptr()")
      Cc: stable@kernel.org
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
      Link: https://lkml.kernel.org/r/20210224131355.430014682@infradead.org
      c20cf065
    • sched: Fix migration_cpu_stop() requeueing · 8a6edb52
      Peter Zijlstra authored
      When affine_move_task(p) is called on a running task @p, which is not
      otherwise already changing affinity, we'll first set
      p->migration_pending and then do:
      
      	 stop_one_cpu(cpu_of_rq(rq), migration_cpu_stop, &arg);
      
      This then gets us to migration_cpu_stop() running on the CPU that was
      previously running our victim task @p.
      
      If we find that our task is no longer on that runqueue (this can
      happen because of a concurrent migration due to load-balance etc.),
      then we'll end up at the:
      
      	} else if (dest_cpu < 0 || pending) {
      
      branch. Which we'll take because we set pending earlier. Here we first
      check if the task @p has already satisfied the affinity constraints,
      if so we bail early [A]. Otherwise we'll reissue migration_cpu_stop()
      onto the CPU that is now hosting our task @p:
      
      	stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
      			    &pending->arg, &pending->stop_work);
      
      Except, we've never initialized pending->arg, which will be all 0s.
      
      This then results in running migration_cpu_stop() on the next CPU with
      arg->p == NULL, which gives the by now obvious result of fireworks.
      
      The cure is to change affine_move_task() to always use pending->arg.
      Furthermore, we can use the exact same pattern as the
      SCA_MIGRATE_ENABLE case, since we'll block on the pending->done
      completion anyway; there is no point in adding yet another completion
      in stop_one_cpu().
      
      This then gives a clear distinction between the two
      migration_cpu_stop() use cases:
      
        - sched_exec() / migrate_task_to() : arg->pending == NULL
        - affine_move_task() : arg->pending != NULL;
      
      And we can have it ignore p->migration_pending when !arg->pending. Any
      stop work from sched_exec() / migrate_task_to() is in addition to stop
      works from affine_move_task(), which will be sufficient to issue the
      completion.
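
      The resulting shape, simplified (the point being that the stopper
      argument lives inside the pending itself and is initialized once):

       struct set_affinity_pending {
               refcount_t              refs;
               struct completion       done;
               struct cpu_stop_work    stop_work;
               struct migration_arg    arg;    /* set up before first queue */
       };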
      
      Fixes: 6d337eab ("sched: Fix migrate_disable() vs set_cpus_allowed_ptr()")
      Cc: stable@kernel.org
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
      Link: https://lkml.kernel.org/r/20210224131355.357743989@infradead.org
      8a6edb52
    • Linux 5.12-rc2 · a38fd874
      Linus Torvalds authored
      a38fd874
    • Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma · f3ed4de6
      Linus Torvalds authored
      Pull rdma fixes from Jason Gunthorpe:
       "Nothing special here, though Bob's regression fixes for rxe would have
        made it before the rc cycle had there not been such strong winter
        weather!
      
         - Fix corner cases in the rxe reference counting cleanup that are
           causing regressions in blktests for SRP
      
         - Two kdoc fixes so W=1 is clean
      
         - Missing error return in error unwind for mlx5
      
         - Wrong lock type nesting in IB CM"
      
      * tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma:
        RDMA/rxe: Fix errant WARN_ONCE in rxe_completer()
        RDMA/rxe: Fix extra deref in rxe_rcv_mcast_pkt()
        RDMA/rxe: Fix missed IB reference counting in loopback
        RDMA/uverbs: Fix kernel-doc warning of _uverbs_alloc
        RDMA/mlx5: Set correct kernel-doc identifier
        IB/mlx5: Add missing error code
        RDMA/rxe: Fix missing kconfig dependency on CRYPTO
        RDMA/cm: Fix IRQ restore in ib_send_cm_sidr_rep
      f3ed4de6
    • Merge tag 'gcc-plugins-v5.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux · de5bd6c5
      Linus Torvalds authored
      Pull gcc-plugins fixes from Kees Cook:
       "Tiny gcc-plugin fixes for v5.12-rc2. These issues are small but have
        been reported a couple times now by static analyzers, so best to get
        them fixed to reduce the noise. :)
      
         - Fix coding style issues (Jason Yan)"
      
      * tag 'gcc-plugins-v5.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
        gcc-plugins: latent_entropy: remove unneeded semicolon
        gcc-plugins: structleak: remove unneeded variable 'ret'
      de5bd6c5
    • Merge tag 'pstore-v5.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux · 8b24ef44
      Linus Torvalds authored
      Pull pstore fixes from Kees Cook:
      
       - Rate-limit ECC warnings (Dmitry Osipenko)
      
       - Fix error path check for NULL (Tetsuo Handa)
      
      * tag 'pstore-v5.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
        pstore/ram: Rate-limit "uncorrectable error in header" message
        pstore: Fix warning in pstore_kill_sb()
      8b24ef44
    • Merge tag 'for-5.12/dm-fixes' of... · 63dcd69d
      Linus Torvalds authored
      Merge tag 'for-5.12/dm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm
      
      Pull device mapper fixes from Mike Snitzer:
       "Fix DM verity target's optional Forward Error Correction (FEC) for
        Reed-Solomon roots that are unaligned to block size"
      
      * tag 'for-5.12/dm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
        dm verity: fix FEC for RS roots unaligned to block size
        dm bufio: subtract the number of initial sectors in dm_bufio_get_device_size
      63dcd69d
    • Merge tag 'block-5.12-2021-03-05' of git://git.kernel.dk/linux-block · 47454caf
      Linus Torvalds authored
      Pull block fixes from Jens Axboe:
      
       - NVMe fixes:
            - more device quirks (Julian Einwag, Zoltán Böszörményi, Pascal
              Terjan)
            - fix a hwmon error return (Daniel Wagner)
            - fix the keep alive timeout initialization (Martin George)
            - ensure the model_number can't be changed on a used subsystem
              (Max Gurtovoy)
      
       - rsxx missing -EFAULT on copy_to_user() failure (Dan)
      
       - rsxx remove unused linux.h include (Tian)
      
       - kill unused RQF_SORTED (Jean)
      
       - updated outdated BFQ comments (Joseph)
      
       - revert work-around commit for bd_size_lock, since we removed the
         offending user in this merge window (Damien)
      
      * tag 'block-5.12-2021-03-05' of git://git.kernel.dk/linux-block:
        nvmet: model_number must be immutable once set
        nvme-fabrics: fix kato initialization
        nvme-hwmon: Return error code when registration fails
        nvme-pci: add quirks for Lexar 256GB SSD
        nvme-pci: mark Kingston SKC2000 as not supporting the deepest power state
        nvme-pci: mark Seagate Nytro XM1440 as QUIRK_NO_NS_DESC_LIST.
        rsxx: Return -EFAULT if copy_to_user() fails
        block/bfq: update comments and default value in docs for fifo_expire
        rsxx: remove unused including <linux/version.h>
        block: Drop leftover references to RQF_SORTED
        block: revert "block: fix bd_size_lock use"
      47454caf
    • Merge tag 'io_uring-5.12-2021-03-05' of git://git.kernel.dk/linux-block · f292e873
      Linus Torvalds authored
      Pull io_uring fixes from Jens Axboe:
       "A bit of a mix between fallout from the worker change, cleanups and
        reductions now possible from that change, and fixes in general. In
        detail:
      
         - Fully serialize manager and worker creation, fixing races due to
           that.
      
         - Clean up some naming that had gone stale.
      
         - SQPOLL fixes.
      
         - Fix race condition around task_work rework that went into this
           merge window.
      
         - Implement unshare. Used for when the original task does unshare(2)
           or setuid/seteuid and friends, drops the original workers and forks
           new ones.
      
         - Drop the only remaining piece of state shuffling we had left, which
           was cred. Move it into issue instead, and we can drop all of that
           code too.
      
         - Kill f_op->flush() usage. That was such a nasty hack that we had
           out of necessity, we no longer need it.
      
         - Following from ->flush() removal, we can also drop various bits of
           ctx state related to SQPOLL and cancelations.
      
         - Fix an issue with IOPOLL retry, which originally was fallout from a
           filemap change (removing iov_iter_revert()), but uncovered an issue
           with iovec re-import too late.
      
         - Fix an issue with system suspend.
      
         - Use xchg() for fallback work, instead of cmpxchg().
      
         - Properly destroy io-wq on exec.
      
         - Add create_io_thread() core helper, and use that in io-wq and
           io_uring. This allows us to remove various silly completion events
           related to thread setup.
      
         - A few error handling fixes.
      
        This should be the grunt of fixes necessary for the new workers, next
        week should be quieter. We've got a pending series from Pavel on
        cancelations, and how tasks and rings are indexed. Outside of that,
        should just be minor fixes. Even with these fixes, we're still killing
        a net ~80 lines"
      
      * tag 'io_uring-5.12-2021-03-05' of git://git.kernel.dk/linux-block: (41 commits)
        io_uring: don't restrict issue_flags for io_openat
        io_uring: make SQPOLL thread parking saner
        io-wq: kill hashed waitqueue before manager exits
        io_uring: clear IOCB_WAITQ for non -EIOCBQUEUED return
        io_uring: don't keep looping for more events if we can't flush overflow
        io_uring: move to using create_io_thread()
        kernel: provide create_io_thread() helper
        io_uring: reliably cancel linked timeouts
        io_uring: cancel-match based on flags
        io-wq: ensure all pending work is canceled on exit
        io_uring: ensure that threads freeze on suspend
        io_uring: remove extra in_idle wake up
        io_uring: inline __io_queue_async_work()
        io_uring: inline io_req_clean_work()
        io_uring: choose right tctx->io_wq for try cancel
        io_uring: fix -EAGAIN retry with IOPOLL
        io-wq: fix error path leak of buffered write hash map
        io_uring: remove sqo_task
        io_uring: kill sqo_dead and sqo submission halting
        io_uring: ignore double poll add on the same waitqueue head
        ...
      f292e873
    • Merge tag 'pm-5.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm · 6d47254c
      Linus Torvalds authored
      Pull power management fixes from Rafael Wysocki:
       "These fix the usage of device links in the runtime PM core code and
        update the DTPM (Dynamic Thermal Power Management) feature added
        recently.
      
        Specifics:
      
         - Make the runtime PM core code avoid attempting to suspend supplier
           devices before updating the PM-runtime status of a consumer to
           'suspended' (Rafael Wysocki).
      
         - Fix DTPM (Dynamic Thermal Power Management) root node
           initialization and label that feature as EXPERIMENTAL in Kconfig
           (Daniel Lezcano)"
      
      * tag 'pm-5.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
        powercap/drivers/dtpm: Add the experimental label to the option description
        powercap/drivers/dtpm: Fix root node initialization
        PM: runtime: Update device status before letting suppliers suspend
      6d47254c
    • Merge tag 'acpi-5.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm · ea6be461
      Linus Torvalds authored
      Pull ACPI fix from Rafael Wysocki:
       "Make the empty stubs of some helper functions used when CONFIG_ACPI is
        not set actually match those functions (Andy Shevchenko)"
      
      * tag 'acpi-5.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
        ACPI: bus: Constify is_acpi_node() and friends (part 2)
      ea6be461
    • Merge tag 'iommu-fixes-v5.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu · fc2c8d0a
      Linus Torvalds authored
      Pull iommu fixes from Joerg Roedel:
      
       - Fix a sleeping-while-atomic issue in the AMD IOMMU code
      
       - Disable lazy IOTLB flush for untrusted devices in the Intel VT-d
         driver
      
       - Fix status code definitions for Intel VT-d
      
       - Fix IO Page Fault issue in Tegra IOMMU driver
      
      * tag 'iommu-fixes-v5.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
        iommu/vt-d: Fix status code for Allocate/Free PASID command
        iommu: Don't use lazy flush for untrusted device
        iommu/tegra-smmu: Fix mc errors on tegra124-nyan
        iommu/amd: Fix sleeping in atomic in increase_address_space()
      fc2c8d0a
    • Merge tag 'for-5.12-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux · f09b04cc
      Linus Torvalds authored
      Pull btrfs fixes from David Sterba:
       "More regression fixes and stabilization.
      
        Regressions:
      
         - zoned mode
            - count zone sizes in wider int types
            - fix space accounting for read-only block groups
      
         - subpage: fix page tail zeroing
      
        Fixes:
      
         - fix spurious warning when remounting with free space tree
      
         - fix warning when creating a directory with smack enabled
      
         - ioctl checks for qgroup inheritance when creating a snapshot
      
         - qgroup
            - fix missing unlock on error path in zero range
            - fix amount of released reservation on error
            - fix flushing from unsafe context with open transaction,
              potentially deadlocking
      
         - minor build warning fixes"
      
      * tag 'for-5.12-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
        btrfs: zoned: do not account freed region of read-only block group as zone_unusable
        btrfs: zoned: use sector_t for zone sectors
        btrfs: subpage: fix the false data csum mismatch error
        btrfs: fix warning when creating a directory with smack enabled
        btrfs: don't flush from btrfs_delayed_inode_reserve_metadata
        btrfs: export and rename qgroup_reserve_meta
        btrfs: free correct amount of space in btrfs_delayed_inode_reserve_metadata
        btrfs: fix spurious free_space_tree remount warning
        btrfs: validate qgroup inherit for SNAP_CREATE_V2 ioctl
        btrfs: unlock extents in btrfs_zero_range in case of quota reservation errors
        btrfs: ref-verify: use 'inline void' keyword ordering
      f09b04cc
    • Merge tag 'devicetree-fixes-for-5.12-1' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux · 6bf331d5
      Linus Torvalds authored
      Pull devicetree fixes from Rob Herring:
      
       - Another batch of graph and video-interfaces schema conversions
      
       - Drop DT header symlink for dropped C6X arch
      
       - Fix bcm2711-hdmi schema error
      
      * tag 'devicetree-fixes-for-5.12-1' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux:
        dt-bindings: media: Use graph and video-interfaces schemas, round 2
        dts: drop dangling c6x symlink
        dt-bindings: bcm2711-hdmi: Fix broken schema
      6bf331d5
    • Merge tag 'trace-v5.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace · 54663cf3
      Linus Torvalds authored
      Pull tracing fixes from Steven Rostedt:
       "Functional fixes:
      
         - Fix big endian conversion for arm64 in recordmcount processing
      
         - Fix timestamp corruption in ring buffer on discarding events
      
         - Fix memory leak in __create_synth_event()
      
         - Skip selftests if tracing is disabled as it will cause them to
           fail.
      
        Non-functional fixes:
      
         - Fix help text in Kconfig
      
         - Remove duplicate prototype for trace_empty()
      
         - Fix stale comment about the trace_event_call flags.
      
        Self test update:
      
         - Add more information to the validation output of when a corrupt
           timestamp is found in the ring buffer, and also trigger a warning
           to make sure that tests catch it"
      
      * tag 'trace-v5.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
        tracing: Fix comment about the trace_event_call flags
        tracing: Skip selftests if tracing is disabled
        tracing: Fix memory leak in __create_synth_event()
        ring-buffer: Add a little more information and a WARN when time stamp going backwards is detected
        ring-buffer: Force before_stamp and write_stamp to be different on discard
        tracing: Fix help text of TRACEPOINT_BENCHMARK in Kconfig
        tracing: Remove duplicate declaration from trace.h
        ftrace: Have recordmcount use w8 to read relp->r_info in arm64_is_fake_mcount
      54663cf3
    • RDMA/rxe: Fix errant WARN_ONCE in rxe_completer() · 545c4ab4
      Bob Pearson authored
      In rxe_comp.c, in rxe_completer(), the function free_pkt() did not clear
      skb, which triggered a warning at 'done:' and could possibly have done
      so at 'exit:'. The WARN_ONCE() calls are not actually needed. The call
      to free_pkt() is moved to the end to clearly show that all skbs are
      freed.
      
      Fixes: 899aba89 ("RDMA/rxe: Fix FIXME in rxe_udp_encap_recv()")
      Link: https://lore.kernel.org/r/20210304192048.2958-1-rpearson@hpe.com
      
      
      Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
      Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
      545c4ab4
    • RDMA/rxe: Fix extra deref in rxe_rcv_mcast_pkt() · 5e4a7ccc
      Bob Pearson authored
      rxe_rcv_mcast_pkt() dropped a reference to ib_device when no error
      occurred, causing an underflow of the reference counter. This code is
      cleaned up to be clearer and easier to read.
      
      Fixes: 899aba89 ("RDMA/rxe: Fix FIXME in rxe_udp_encap_recv()")
      Link: https://lore.kernel.org/r/20210304192048.2958-1-rpearson@hpe.com
      
      
      Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
      Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
      5e4a7ccc
    • RDMA/rxe: Fix missed IB reference counting in loopback · 21e27ac8
      Bob Pearson authored
      When the patch noted below extended the reference taken by
      rxe_get_dev_from_net() in rxe_udp_encap_recv() until each skb is freed,
      it was not matched by a reference in the loopback path, resulting in
      underflows.
      
      Fixes: 899aba89 ("RDMA/rxe: Fix FIXME in rxe_udp_encap_recv()")
      Link: https://lore.kernel.org/r/20210304192048.2958-1-rpearson@hpe.com
      
      
      Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
      Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
      21e27ac8