  1. Jun 01, 2019
    • bpf: Update BPF_CGROUP_RUN_PROG_INET_EGRESS calls · 956fe219
      brakmo authored
      
      
      Update BPF_CGROUP_RUN_PROG_INET_EGRESS() callers to support returning
      congestion notifications from the BPF programs.
      
      Signed-off-by: Lawrence Brakmo <brakmo@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Update __cgroup_bpf_run_filter_skb with cn · e7a3160d
      brakmo authored
      
      
      For egress packets, __cgroup_bpf_run_filter_skb() will now call
      BPF_PROG_CGROUP_INET_EGRESS_RUN_ARRAY() instead of BPF_PROG_RUN_ARRAY()
      in order to propagate congestion notification (cn) requests to TCP
      callers.
      
      For egress packets, this function can return:
         NET_XMIT_SUCCESS    (0)    - continue with packet output
         NET_XMIT_DROP       (1)    - drop packet and notify TCP to call cwr
         NET_XMIT_CN         (2)    - continue with packet output and notify TCP
                                      to call cwr
         -EPERM                     - drop packet
      
      For ingress packets, this function returns -EPERM if any attached
      program was found and it returned a value != 1 during execution.
      Otherwise 0 is returned.
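
      The egress return values listed above can be modeled in plain C. This
      is a hedged userspace sketch: the NET_XMIT_* values match the kernel
      definitions, but handle_egress_verdict() is an illustrative helper,
      not kernel code.

      ```c
      #include <assert.h>
      #include <errno.h>
      #include <stdbool.h>

      #define NET_XMIT_SUCCESS 0x00
      #define NET_XMIT_DROP    0x01
      #define NET_XMIT_CN      0x02

      /* Illustrative helper (not kernel code): derive the two actions an
       * egress caller takes from the return value described above. */
      static void handle_egress_verdict(int ret, bool *transmit, bool *call_cwr)
      {
          /* The packet is sent on NET_XMIT_SUCCESS and NET_XMIT_CN only. */
          *transmit = (ret == NET_XMIT_SUCCESS || ret == NET_XMIT_CN);
          /* TCP calls cwr on NET_XMIT_DROP and NET_XMIT_CN. */
          *call_cwr = (ret == NET_XMIT_DROP || ret == NET_XMIT_CN);
      }
      ```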
      
      Signed-off-by: Lawrence Brakmo <brakmo@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: cgroup inet skb programs can return 0 to 3 · 5cf1e914
      brakmo authored
      
      
      Allows cgroup inet skb programs to return values in the range [0, 3].
      The second bit is used to determine whether congestion occurred and the
      higher level protocol should decrease its rate, e.g. TCP would call
      tcp_enter_cwr().
      
      The bpf_prog must set expected_attach_type to BPF_CGROUP_INET_EGRESS
      at load time if it uses the new return values (i.e. 2 or 3).
      
      The expected_attach_type is currently not enforced for
      BPF_PROG_TYPE_CGROUP_SKB, meaning an existing bpf_prog with
      expected_attach_type set to BPF_CGROUP_INET_EGRESS can attach to
      BPF_CGROUP_INET_INGRESS.  Blindly enforcing expected_attach_type
      would break backward compatibility.
      
      This patch adds an enforce_expected_attach_type bit to enforce
      the expected_attach_type only when the program uses the new
      return values.
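
      The described check can be sketched as a small userspace model. This
      is an assumption-laden illustration: the struct, field names, and
      boolean return are simplified stand-ins for the kernel's attach-path
      logic, not the real code.

      ```c
      #include <assert.h>
      #include <stdbool.h>

      enum attach_type { CGROUP_INET_INGRESS, CGROUP_INET_EGRESS };

      /* Illustrative model, not the kernel struct. */
      struct prog {
          enum attach_type expected_attach_type;
          bool enforce_expected_attach_type; /* set iff prog may return 2 or 3 */
      };

      /* Returns true when the attach is allowed: legacy programs (enforce
       * bit unset) may still attach anywhere, preserving backward
       * compatibility; programs using the new return values may not. */
      static bool attach_allowed(const struct prog *p, enum attach_type where)
      {
          if (p->enforce_expected_attach_type &&
              p->expected_attach_type != where)
              return false;
          return true;
      }
      ```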
      
      Signed-off-by: Lawrence Brakmo <brakmo@fb.com>
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Create BPF_PROG_CGROUP_INET_EGRESS_RUN_ARRAY · 1f52f6c0
      brakmo authored
      
      
      Create new macro BPF_PROG_CGROUP_INET_EGRESS_RUN_ARRAY() to be used by
      __cgroup_bpf_run_filter_skb for EGRESS BPF progs so BPF programs can
      request cwr for TCP packets.
      
      Current cgroup skb programs can only return 0 or 1 (0 to drop the
      packet). This macro changes the behavior so the low order bit
      indicates whether the packet should be dropped (0) or not (1),
      and the next bit is used for congestion notification (cn).
      
      Hence, new allowed return values of CGROUP EGRESS BPF programs are:
        0: drop packet
        1: keep packet
        2: drop packet and call cwr
        3: keep packet and call cwr
      
      This macro then converts the return value to one of the NET_XMIT
      values, or to -EPERM, which has the effect of dropping the packet
      with no cn:
        0: NET_XMIT_SUCCESS  skb should be transmitted (no cn)
        1: NET_XMIT_DROP     skb should be dropped and cwr called
        2: NET_XMIT_CN       skb should be transmitted and cwr called
        3: -EPERM            skb should be dropped (no cn)
      
      Note that when more than one BPF program is called, the packet is
      dropped if at least one of the programs requests it be dropped, and
      cn is signaled if at least one program returns cn.
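
      The fold across programs and the final conversion can be sketched in a
      self-contained way. The helper names here are illustrative, and the
      real logic lives in the kernel's run-array macro; this is a model of
      the behavior described above, not the kernel implementation.

      ```c
      #include <assert.h>
      #include <errno.h>

      #define NET_XMIT_SUCCESS 0x00
      #define NET_XMIT_DROP    0x01
      #define NET_XMIT_CN      0x02

      /* Fold the 0..3 return values of several programs: the packet is kept
       * only if every program keeps it (AND of bit 0), and cn is requested
       * if any program asks for it (OR of bit 1). */
      static int fold_prog_rets(const int *rets, int n)
      {
          int keep = 1, cn = 0;
          for (int i = 0; i < n; i++) {
              keep &= rets[i] & 1;
              cn   |= (rets[i] >> 1) & 1;
          }
          return (cn << 1) | keep;
      }

      /* Convert the folded 0..3 value to a NET_XMIT value or -EPERM. */
      static int to_net_xmit(int folded)
      {
          int keep = folded & 1, cn = (folded >> 1) & 1;
          if (keep)
              return cn ? NET_XMIT_CN : NET_XMIT_SUCCESS;
          return cn ? NET_XMIT_DROP : -EPERM;
      }
      ```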
      
      Signed-off-by: Lawrence Brakmo <brakmo@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  2. May 30, 2019
  3. May 29, 2019
    • bpf: tracing: properly use bpf_prog_array api · e672db03
      Stanislav Fomichev authored
      
      
      Now that we don't have __rcu markers on the bpf_prog_array helpers,
      let's use proper rcu_dereference_protected to obtain array pointer
      under mutex.
      
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Signed-off-by: Stanislav Fomichev <sdf@google.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: cgroup: properly use bpf_prog_array api · dbcc1ba2
      Stanislav Fomichev authored
      
      
      Now that we don't have __rcu markers on the bpf_prog_array helpers,
      let's use proper rcu_dereference_protected to obtain array pointer
      under mutex.
      
      We also don't need __rcu annotations on cgroup_bpf.inactive since
      it's not read/updated concurrently.
      
      v4:
      * drop cgroup_rcu_xyz wrappers and use rcu APIs directly; presumably
        should be more clear to understand which mutex/refcount protects
        each particular place
      
      v3:
      * amend cgroup_rcu_dereference to include percpu_ref_is_dying;
        cgroup_bpf is now reference counted and we don't hold cgroup_mutex
        anymore in cgroup_bpf_release
      
      v2:
      * replace xchg with rcu_swap_protected
      
      Cc: Roman Gushchin <guro@fb.com>
      Signed-off-by: Stanislav Fomichev <sdf@google.com>
      Acked-by: Roman Gushchin <guro@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: media: properly use bpf_prog_array api · 02205d2e
      Stanislav Fomichev authored
      
      
      Now that we don't have __rcu markers on the bpf_prog_array helpers,
      let's use proper rcu_dereference_protected to obtain array pointer
      under mutex.
      
      Cc: linux-media@vger.kernel.org
      Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
      Cc: Sean Young <sean@mess.org>
      Signed-off-by: Stanislav Fomichev <sdf@google.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: remove __rcu annotations from bpf_prog_array · 54e9c9d4
      Stanislav Fomichev authored
      
      
      Drop the __rcu annotations and rcu read sections from the bpf_prog_array
      helper functions. They are not needed since all existing callers
      call those helpers from the rcu update side while holding a mutex.
      This guarantees that a use-after-free cannot happen.
      
      In the next patches I'll fix the callers with missing
      rcu_dereference_protected to make sparse/lockdep happy. The proper
      way to use these helpers is:
      
      	struct bpf_prog_array __rcu *progs = ...;
      	struct bpf_prog_array *p;
      
      	mutex_lock(&mtx);
      	p = rcu_dereference_protected(progs, lockdep_is_held(&mtx));
      	bpf_prog_array_length(p);
      	bpf_prog_array_copy_to_user(p, ...);
      	bpf_prog_array_delete_safe(p, ...);
      	bpf_prog_array_copy_info(p, ...);
      	bpf_prog_array_copy(p, ...);
      	bpf_prog_array_free(p);
      	mutex_unlock(&mtx);
      
      No functional changes! rcu_dereference_protected with lockdep_is_held
      should catch any cases where we update prog array without a mutex
      (I've looked at existing call sites and I think we hold a mutex
      everywhere).
      
      Motivation is to fix sparse warnings:
      kernel/bpf/core.c:1803:9: warning: incorrect type in argument 1 (different address spaces)
      kernel/bpf/core.c:1803:9:    expected struct callback_head *head
      kernel/bpf/core.c:1803:9:    got struct callback_head [noderef] <asn:4> *
      kernel/bpf/core.c:1877:44: warning: incorrect type in initializer (different address spaces)
      kernel/bpf/core.c:1877:44:    expected struct bpf_prog_array_item *item
      kernel/bpf/core.c:1877:44:    got struct bpf_prog_array_item [noderef] <asn:4> *
      kernel/bpf/core.c:1901:26: warning: incorrect type in assignment (different address spaces)
      kernel/bpf/core.c:1901:26:    expected struct bpf_prog_array_item *existing
      kernel/bpf/core.c:1901:26:    got struct bpf_prog_array_item [noderef] <asn:4> *
      kernel/bpf/core.c:1935:26: warning: incorrect type in assignment (different address spaces)
      kernel/bpf/core.c:1935:26:    expected struct bpf_prog_array_item *[assigned] existing
      kernel/bpf/core.c:1935:26:    got struct bpf_prog_array_item [noderef] <asn:4> *
      
      v2:
      * remove comment about potential race; that can't happen
        because all callers are in rcu-update section
      
      Cc: Roman Gushchin <guro@fb.com>
      Acked-by: Roman Gushchin <guro@fb.com>
      Signed-off-by: Stanislav Fomichev <sdf@google.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • selftests/bpf: fix compilation error for flow_dissector.c · fe937ea1
      Alan Maguire authored
      
      
      When building the tools/testing/selftests/bpf subdirectory
      (running both a local directory "make" and
      "make -C tools/testing/selftests/bpf"), I keep hitting the
      following compilation error:
      
      prog_tests/flow_dissector.c: In function ‘create_tap’:
      prog_tests/flow_dissector.c:150:38: error: ‘IFF_NAPI’ undeclared (first
      use in this function)
         .ifr_flags = IFF_TAP | IFF_NO_PI | IFF_NAPI | IFF_NAPI_FRAGS,
                                            ^
      prog_tests/flow_dissector.c:150:38: note: each undeclared identifier is
      reported only once for each function it appears in
      prog_tests/flow_dissector.c:150:49: error: ‘IFF_NAPI_FRAGS’ undeclared
      
      Adding include/uapi/linux/if_tun.h to tools/include/uapi/linux
      resolves the problem and ensures the compilation of the file
      does not depend on having up-to-date kernel headers locally.
      
      Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • Merge branch 'cgroup-auto-detach' · d0a3a4b2
      Alexei Starovoitov authored
      
      
      Roman Gushchin says:
      
      ====================
      This patchset implements a cgroup bpf auto-detachment functionality:
      bpf programs are detached as soon as possible after removal of the
      cgroup, without waiting for the release of all associated resources.
      
      Patches 2 and 3 are required to implement a corresponding kselftest
      in patch 4.
      
      v5:
        1) rebase
      
      v4:
        1) release cgroup bpf data using a workqueue
        2) add test_cgroup_attach to .gitignore
      
      v3:
        1) some minor changes and typo fixes
      
      v2:
        1) removed a bogus check in patch 4
        2) moved buf[len] = 0 in patch 2
      ====================
      
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests/bpf: add auto-detach test · d5506591
      Roman Gushchin authored
      
      
      Add a kselftest to cover bpf auto-detachment functionality.
      The test creates a cgroup, associates some resources with it,
      attaches a couple of bpf programs and deletes the cgroup.
      
      Then it checks that the bpf programs go away within 5 seconds.
      
      Expected output:
        $ ./test_cgroup_attach
        #override:PASS
        #multi:PASS
        #autodetach:PASS
        test_cgroup_attach:PASS
      
      On a kernel without auto-detaching:
        $ ./test_cgroup_attach
        #override:PASS
        #multi:PASS
        #autodetach:FAIL
        test_cgroup_attach:FAIL
      
      Signed-off-by: Roman Gushchin <guro@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests/bpf: enable all available cgroup v2 controllers · 596092ef
      Roman Gushchin authored
      
      
      Enable all available cgroup v2 controllers when setting up
      the environment for the bpf kselftests. It's required to properly test
      the bpf prog auto-detach feature. Also it will generally increase
      the code coverage.
      
      Signed-off-by: Roman Gushchin <guro@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests/bpf: convert test_cgrp2_attach2 example into kselftest · ba0c0cc0
      Roman Gushchin authored
      
      
      Convert the test_cgrp2_attach2 example into a proper test_cgroup_attach
      kselftest. This is better because kselftests are run on a regular
      basis, so there is a better chance of spotting a potential regression.

      Also make it slightly less verbose to conform to the kselftest output
      style.
      
      Output example:
        $ ./test_cgroup_attach
        #override:PASS
        #multi:PASS
        test_cgroup_attach:PASS
      
      Signed-off-by: Roman Gushchin <guro@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: decouple the lifetime of cgroup_bpf from cgroup itself · 4bfc0bb2
      Roman Gushchin authored
      
      
      Currently the lifetime of bpf programs attached to a cgroup is bound
      to the lifetime of the cgroup itself. It means that if a user
      forgets (or intentionally avoids) to detach a bpf program before
      removing the cgroup, it will stay attached up to the release of the
      cgroup. Since the cgroup can stay in the dying state (the state
      between being rmdir()'ed and being released) for a very long time, it
      leads to a waste of memory. Also, it blocks the possibility of
      implementing memcg-based memory accounting for bpf objects, because
      a circular reference dependency would occur: charged memory pages pin
      the corresponding memory cgroup, and if the memory cgroup pins
      the attached bpf program, nothing will ever be released.
      
      A dying cgroup cannot contain any processes, so the only chance for
      an attached bpf program to be executed is a live socket associated
      with the cgroup. So in order to release all bpf data early, let's
      count associated sockets using a new percpu refcounter. On cgroup
      removal the counter is transitioned to atomic mode, and as soon
      as it reaches 0, all bpf programs are detached.
      
      Because cgroup_bpf_release() can block, it can't be called from
      the percpu ref counter callback directly, so instead an asynchronous
      work is scheduled.
      
      The reference counter is not socket specific, and can be used for any
      other types of programs that can be executed from a cgroup-bpf hook
      outside of the process context, should such a need arise in the future.
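
      The lifecycle can be modeled with a trivial single-threaded refcount.
      This is a sketch only: the kernel uses a percpu_ref that switches to
      atomic mode on rmdir and schedules cgroup_bpf_release() from a
      workqueue, and the struct and function names below are illustrative.

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* Toy single-threaded model; field and function names are illustrative. */
      struct cgroup_bpf_model {
          int  sockets;   /* live sockets associated with the cgroup */
          bool dying;     /* cgroup has been rmdir()'ed */
          bool detached;  /* bpf programs have been released */
      };

      static void maybe_release(struct cgroup_bpf_model *c)
      {
          /* In the kernel this schedules release work instead, since
           * cgroup_bpf_release() can block. */
          if (c->dying && c->sockets == 0)
              c->detached = true;
      }

      static void socket_close(struct cgroup_bpf_model *c)
      {
          c->sockets--;
          maybe_release(c);
      }

      static void cgroup_rmdir(struct cgroup_bpf_model *c)
      {
          c->dying = true;  /* percpu_ref transitions to atomic mode here */
          maybe_release(c);
      }
      ```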
      
      Signed-off-by: Roman Gushchin <guro@fb.com>
      Cc: jolsa@redhat.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  4. May 28, 2019
  5. May 25, 2019
    • samples: bpf: add ibumad sample to .gitignore · d9a6f413
      Matteo Croce authored
      
      
      Add ibumad to .gitignore, as it is currently omitted from the
      ignore file.
      
      Signed-off-by: Matteo Croce <mcroce@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • Merge branch 'optimize-zext' · 198ae936
      Alexei Starovoitov authored
      
      
      Jiong Wang says:
      
      ====================
      v9:
        - Split patch 5 in v8.
          make bpf uapi header file sync a separate patch. (Alexei)
      
      v8:
        - For stack slot read, mark them as REG_LIVE_READ64. (Alexei)
        - Change DEF_NOT_SUBREG from -1 to 0. (Alexei)
        - Rebased on top of latest bpf-next.
      
      v7:
        - Drop the first patch in v6, the one adding 32-bit return value and
          argument type. (Alexei)
        - Rename bpf_jit_hardware_zext to bpf_jit_needs_zext. (Alexei)
        - Use mov32 with imm == 1 to indicate it is zext. (Alexei)
        - JIT back-ends peephole next insn to optimize out unnecessary zext
          inserted by verifier. (Alexei)
        - Testing:
          + patch set tested (bpf selftest) on x64 host with llvm 9.0,
            no regression observed on both JIT and interpreter modes.
          + patch set tested (bpf selftest) on x32 host.
            By Yanqing Wang, thanks!
            no regression observed on both JIT and interpreter modes.
          + patch set tested (bpf selftest) on RV64 host with llvm 9.0,
            By Björn Töpel, thanks!
            no regression observed before and after this set with JIT_ALWAYS_ON.
            test_progs_32 also enabled as LLVM 9.0 is used by Björn.
          + cross compiled the other affected targets, arm, PowerPC, SPARC, S390.
      
      v6:
        - Fixed s390 kbuild test robot error. (kbuild)
        - Make comment style in backends patches more consistent.
      
      v5:
        - Adjusted several test_verifier helpers to make them work on hosts
          with and without hardware zext. (Naveen)
        - Make sure the zext flag is not set when the verifier is bypassed,
          for example, libtest_bpf.ko. (Naveen)
        - Conservatively mark bpf main return value as 64-bit. (Alexei)
        - Make sure read flag is either READ64 or READ32, not the mix of both.
          (Alexei)
        - Merged patch 1 and 2 in v4. (Alexei)
        - Fixed kbuild test robot warning on NFP. (kbuild)
        - Proposed new BPF_ZEXT insn to have optimal code-gen for various JIT
          back-ends.
        - Conservatively set zext flags for patched insns.
        - Fixed return value zext for helper function calls.
        - Also adjusted the test_verifier scalability unit test to avoid
          triggering too many insn patches, which would hang the computer.
        - re-tested on x86 host with llvm 9.0, no regression on test_verifier,
          test_progs, test_progs_32.
        - re-tested offload target (nfp), no regression on local testsuite.
      
      v4:
        - added the two missing fixes which address two of Jakub's reviews in v3.
        - rebase on top of bpf-next.
      
      v3:
        - remove redundant check in "propagate_liveness_reg". (Jakub)
        - add extra check in "mark_reg_read" to prune more search. (Jakub)
        - re-implemented "prog_flags" passing mechanism, removed use of
          global switch inside libbpf.
        - enabled high 32-bit randomization beyond "test_verifier" and
          "test_progs". Now it should have been enabled for all possible
          tests. Re-run all tests, haven't noticed regression.
        - remove RFC tag.
      
      v2:
        - rebased on top of bpf-next master.
        - added comments for what is sub-register def index. (Edward, Alexei)
        - removed patch 1 which turns bit mask from enum to macro. (Alexei)
        - removed sysctl/bpf_jit_32bit_opt. (Alexei)
        - merged sub-register def insn index into reg state. (Alexei)
        - change test methodology (Alexei):
            + instead of simple unit tests on x86_64, for which this
              optimization isn't enabled due to the hardware support, poison
              the high 32-bit of defs identified as safe to do so. This lets
              the correctness of this patch set be checked by the daily bpf
              selftest runs, which deliver very stressful tests on host
              machines like x86_64.
            + hi32 poisoning is gated by a new BPF_F_TEST_RND_HI32 prog flags.
            + BPF_F_TEST_RND_HI32 is enabled for all tests of "test_progs" and
              "test_verifier", the latter needs minor tweak on two unit tests,
              please see the patch for the change.
            + introduced a new global variable "libbpf_test_mode" into libbpf.
              Once it is set to true, it will set BPF_F_TEST_RND_HI32 for all
              later PROG_LOAD syscalls; the goal is to ease enabling hi32
              poisoning on the existing testsuite.
              We could also introduce new APIs, for example "bpf_prog_test_load",
              then use -Dbpf_prog_load=bpf_prog_test_load to migrate tests under
              test_progs, but there are several load APIs, and such a new API
              needs some changes to structures like "struct bpf_prog_load_attr".
            + removed old unit tests. They were based on insn scanning and
              required quite a few generic test_verifier code changes. Given
              that hi32 randomization offers good test coverage, the unit
              tests don't add much extra test value.
        - enhanced the register width check ("is_reg64") when recording a
          sub-register write; it now returns a more accurate width.
        - Re-run all tests under "test_progs" and "test_verifier" on x86_64, no
          regression. Fixed a couple of bugs exposed:
            1. ctx field size transformation was not taken into account.
            2. insn patching could cause loss of the original aux data, which
               is important for ctx field conversion.
            3. the return value of propagate_liveness was wrong and caused a
               regression in the processed insn number.
            4. helper call args weren't handled properly, so path pruning could
               lose 64-bit read info from the pruned path.
        - Re-run Cilium bpf prog for processed-insn-number benchmarking, no
          regression.
      
      v1:
        - Fixed the missing handling on callee-saved for bpf-to-bpf call,
          sub-register defs therefore moved to frame state. (Jakub Kicinski)
        - Removed redundant "cross_reg". (Jakub Kicinski)
        - Various coding styles & grammar fixes. (Jakub Kicinski, Quentin Monnet)
      
      The eBPF ISA specification requires the high 32-bit to be cleared when a
      low 32-bit sub-register is written. This applies to the destination
      register of ALU32 instructions etc.
      JIT back-ends must guarantee this semantic when doing code-gen. The x86_64
      and AArch64 ISAs have the same semantics, so the corresponding JIT
      back-ends don't need to do extra work.
      
      However, 32-bit arches (arm, x86, nfp etc.) and some other 64-bit arches
      (PowerPC, SPARC etc) need to do explicit zero extension to meet this
      requirement, otherwise code like the following will fail.
      
        u64_value = (u64) u32_value
        ... other uses of u64_value
      
      This is because the compiler can exploit the semantic described above and
      omit the zero extensions when extending u32_value to u64_value. These JIT
      back-ends are expected to guarantee the semantic by inserting extra zero
      extensions, which however can be a significant increase in code size.
      Some benchmarks show there can be ~40% sub-register writes out of total
      insns, meaning at least ~40% extra code-gen.
      
      One observation is that these extra zero extensions are not always
      necessary. Take the above code snippet for example: it is possible that
      u32_value will never be cast into a u64, in which case the high 32-bit
      of u32_value can be ignored and the extra zero extension eliminated.
      
      This patch implements this idea: insns defining sub-registers will be
      marked when the high 32-bit of the defined sub-register matters. For
      those unmarked insns, it is safe to eliminate the high 32-bit clearance
      for them.
      
      Algo
      ====
      We could use insn scan based static analysis to tell whether one
      sub-register def doesn't need zero extension. However, using such static
      analysis, we must make conservative assumptions at branching points,
      where multiple uses could be introduced. So, for any sub-register def
      that is active at a branching point, we need to mark it as needing zero
      extension. This could introduce quite a few false alarms, for example
      ~25% on Cilium bpf_lxc.
      
      It is far better to use the dynamic data-flow tracing which the
      verifier fortunately already has, and which could easily be extended to
      serve the purpose of this patch set:
      
       - Split read flags into READ32 and READ64.
      
       - Record index of insn that does sub-register write. Keep the index inside
         reg state and update it during verifier insn walking.
      
       - A full register read on a sub-register marks its definition insn as
         needing zero extension on dst register.
      
         A new sub-register write overrides the old one.
      
       - When propagating read64 during path pruning, also mark any insn defining
         a sub-register that is read in the pruned path as full-register.
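
      The marking steps above can be sketched as a toy data-flow model. The
      names below (write32/write64/read64, needs_zext) are illustrative; the
      real implementation lives in the verifier's liveness tracking. Note
      DEF_NOT_SUBREG is 0, matching the choice made in v8.

      ```c
      #include <assert.h>
      #include <string.h>

      #define MAX_INSNS 64
      #define MAX_REGS  11
      #define DEF_NOT_SUBREG 0   /* insn index 0 means "no pending 32-bit def" */

      struct reg_state {
          int subreg_def[MAX_REGS];  /* insn idx of the last sub-register write */
      };

      static int needs_zext[MAX_INSNS];

      /* A 32-bit (sub-register) write records its defining insn. */
      static void write32(struct reg_state *s, int reg, int insn)
      {
          s->subreg_def[reg] = insn;  /* a new sub-register write overrides */
      }

      /* A full 64-bit write clears any pending sub-register def. */
      static void write64(struct reg_state *s, int reg)
      {
          s->subreg_def[reg] = DEF_NOT_SUBREG;
      }

      /* A full 64-bit read marks the defining insn as needing zero extension. */
      static void read64(struct reg_state *s, int reg)
      {
          if (s->subreg_def[reg] != DEF_NOT_SUBREG)
              needs_zext[s->subreg_def[reg]] = 1;
      }
      ```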
      
      Benchmark
      =========
       - I estimate the JITed image could be 10% ~ 30% smaller on the affected
         arches (nfp, arm, x32, riscv, ppc, sparc, s390), depending on the prog.
      
       - For Cilium bpf_lxc, there are ~11500 insns in the compiled binary
         (using the latest LLVM snapshot, with -mcpu=v3 -mattr=+alu32 enabled),
         4460 of which have sub-register writes (~40%). Calculated by:
      
          cat dump | grep -P "\tw" | wc -l       (ALU32)
          cat dump | grep -P "r.*=.*u32" | wc -l (READ_W)
          cat dump | grep -P "r.*=.*u16" | wc -l (READ_H)
          cat dump | grep -P "r.*=.*u8" | wc -l  (READ_B)
      
         After this patch set is enabled, > 25% of those 4460 can be identified
         as not needing zero extension on the destination, and the percentage
         could go further up to more than 50% with some follow-up optimizations
         based on the infrastructure offered by this set. This leads to
         significant savings on the JITed image.
      ====================
      
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • nfp: bpf: eliminate zero extension code-gen · 0b4de1ff
      Jiong Wang authored
      
      
      This patch eliminates zero extension code-gen for instructions, including
      both alu and load/store. The only exception is ctx load: the offload
      target doesn't go through the host ctx convert logic, so we do a
      customized load and ignore the zext flag set by the verifier.
      
      Cc: Jakub Kicinski <jakub.kicinski@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • riscv: bpf: eliminate zero extension code-gen · 66d0d5a8
      Jiong Wang authored
      
      
      Cc: Björn Töpel <bjorn.topel@gmail.com>
      Acked-by: Björn Töpel <bjorn.topel@gmail.com>
      Tested-by: Björn Töpel <bjorn.topel@gmail.com>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • x32: bpf: eliminate zero extension code-gen · 836256bf
      Jiong Wang authored
      
      
      Cc: Wang YanQing <udknight@gmail.com>
      Tested-by: Wang YanQing <udknight@gmail.com>
      Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • sparc: bpf: eliminate zero extension code-gen · 3e2a33cf
      Jiong Wang authored
      
      
      Cc: David S. Miller <davem@davemloft.net>
      Signed-off-by: default avatarJiong Wang <jiong.wang@netronome.com>
      Signed-off-by: default avatarAlexei Starovoitov <ast@kernel.org>
      3e2a33cf