  1. Mar 13, 2019
      tracing: kdb: Fix ftdump to not sleep · 31b265b3
      Douglas Anderson authored
      
      
      As reported back in 2016-11 [1], the "ftdump" kdb command triggers a
      BUG for "sleeping function called from invalid context".
      
      kdb's "ftdump" command wants to call ring_buffer_read_prepare() in
      atomic context.  A very simple solution for this is to add allocation
      flags to ring_buffer_read_prepare() so kdb can call it without
      triggering the allocation error.  This patch does that.
      
      Note that in the original email thread about this, it was suggested
      that perhaps the solution for kdb was to either preallocate the buffer
      ahead of time or create our own iterator.  I'm hoping that this
      alternative of adding allocation flags to ring_buffer_read_prepare()
      can be considered since it means I don't need to duplicate more of the
      core trace code into "trace_kdb.c" (for either creating my own
      iterator or re-preparing a ring allocator whose memory was already
      allocated).
      
      NOTE: another option for kdb is to actually figure out how to make it
      reuse the existing ftrace_dump() function and totally eliminate the
      duplication.  This sounds very appealing and actually works (the "sr
      z" command can be seen to properly dump the ftrace buffer).  The
      downside here is that ftrace_dump() fully consumes the trace buffer.
      Unless that is changed I'd rather not use it because it means "ftdump
      | grep xyz" won't be very useful to search the ftrace buffer since it
      will throw away the whole trace on the first grep.  A future patch to
      dump only the last few lines of the buffer will also be hard to
      implement.
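      The allocation-flags approach can be sketched in plain userspace C (names here are illustrative stand-ins, not the kernel API: the real change threads a gfp_t argument through ring_buffer_read_prepare() so kdb can pass a no-sleep flag from atomic context):

      ```c
      #include <stdlib.h>

      /* Stand-ins for GFP_KERNEL (allocator may sleep) and GFP_ATOMIC. */
      typedef enum { MAY_SLEEP, ATOMIC_CTX } gfp_sketch;

      struct ring_iter {
              int cpu;
              gfp_sketch flags;
      };

      /* Sketch of a prepare function after the change: the caller's
       * context decides whether the internal allocation may sleep. */
      static struct ring_iter *read_prepare(int cpu, gfp_sketch flags)
      {
              /* In the kernel this is roughly kzalloc(sizeof(*iter), flags);
               * kdb would pass the no-sleep flag from its atomic context. */
              struct ring_iter *iter = calloc(1, sizeof(*iter));

              if (!iter)
                      return NULL;
              iter->cpu = cpu;
              iter->flags = flags;
              return iter;
      }
      ```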
      
      [1] https://lkml.kernel.org/r/20161117191605.GA21459@google.com
      
      Link: http://lkml.kernel.org/r/20190308193205.213659-1-dianders@chromium.org
      
      Reported-by: Brian Norris <briannorris@chromium.org>
      Signed-off-by: Douglas Anderson <dianders@chromium.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  2. Mar 12, 2019
  3. Mar 06, 2019
  4. Mar 05, 2019
      tracing: Use strncpy instead of memcpy for string keys in hist triggers · 9f0bbf31
      Tom Zanussi authored
      Because there may be random garbage beyond a string's null terminator,
      it's not correct to copy the complete character array for use as a
      hist trigger key.  This results in multiple histogram entries for the
      'same' string key.
      
      So, in the case of a string key, use strncpy instead of memcpy to
      avoid copying in the extra bytes.
      
      Before, using the gdbus entries in the following hist trigger as an
      example:
      
        # echo 'hist:key=comm' > /sys/kernel/debug/tracing/events/sched/sched_waking/trigger
        # cat /sys/kernel/debug/tracing/events/sched/sched_waking/hist
      
        ...
      
        { comm: ImgDecoder #4                      } hitcount:        203
        { comm: gmain                              } hitcount:        213
        { comm: gmain                              } hitcount:        216
        { comm: StreamTrans #73                    } hitcount:        221
        { comm: mozStorage #3                      } hitcount:        230
        { comm: gdbus                              } hitcount:        233
        { comm: StyleThread#5                      } hitcount:        253
        { comm: gdbus                              } hitcount:        256
        { comm: gdbus                              } hitcount:        260
        { comm: StyleThread#4                      } hitcount:        271
      
        ...
      
        # cat /sys/kernel/debug/tracing/events/sched/sched_waking/hist | egrep gdbus | wc -l
        51
      
      After:
      
        # cat /sys/kernel/debug/tracing/events/sched/sched_waking/hist | egrep gdbus | wc -l
        1
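      The difference can be reproduced in a few lines of userspace C (a sketch with a hypothetical 16-byte key; the kernel's actual key sizes differ):

      ```c
      #include <string.h>

      #define KEY_LEN 16

      /* Buggy: copies the whole array, including garbage past the NUL. */
      static int same_key_memcpy(const char a[KEY_LEN], const char b[KEY_LEN])
      {
              char ka[KEY_LEN], kb[KEY_LEN];

              memcpy(ka, a, KEY_LEN);
              memcpy(kb, b, KEY_LEN);
              return memcmp(ka, kb, KEY_LEN) == 0;
      }

      /* Fixed: strncpy() stops at the NUL and zero-pads the rest. */
      static int same_key_strncpy(const char a[KEY_LEN], const char b[KEY_LEN])
      {
              char ka[KEY_LEN] = { 0 }, kb[KEY_LEN] = { 0 };

              strncpy(ka, a, KEY_LEN - 1);
              strncpy(kb, b, KEY_LEN - 1);
              return memcmp(ka, kb, KEY_LEN) == 0;
      }

      /* Same "gdbus" string, different garbage after the terminator:
       * memcpy sees two distinct keys, strncpy sees one. */
      static int demo(void)
      {
              char a[KEY_LEN] = "gdbus", b[KEY_LEN] = "gdbus";

              a[10] = 'X';
              b[10] = 'Y';
              return !same_key_memcpy(a, b) && same_key_strncpy(a, b);
      }
      ```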
      
      Link: http://lkml.kernel.org/r/50c35ae1267d64eee975b8125e151e600071d4dc.1549309756.git.tom.zanussi@linux.intel.com
      
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: stable@vger.kernel.org
      Fixes: 79e577cb ("tracing: Support string type key properly")
      Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      tracing: Use str_has_prefix() in synth_event_create() · ed581aaf
      Tom Zanussi authored
      
      
      Since we now have a str_has_prefix() that returns the length, we can
      use that instead of explicitly calculating it.
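      For reference, the helper's contract can be modeled in userspace like this (a sketch: the prefix length on a match, 0 otherwise):

      ```c
      #include <string.h>

      /* Model of the kernel's str_has_prefix(): returns strlen(prefix)
       * when str starts with prefix, 0 otherwise, so a caller can both
       * test the prefix and skip past it without a separate strlen(). */
      static size_t str_has_prefix(const char *str, const char *prefix)
      {
              size_t len = strlen(prefix);

              return strncmp(str, prefix, len) == 0 ? len : 0;
      }
      ```

      A caller can then write `size_t len = str_has_prefix(buf, "prefix_"); if (len) body = buf + len;` instead of computing the length explicitly.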
      
      Link: http://lkml.kernel.org/r/03418373fd1e80030e7394b8e3e081c5de28a710.1549309756.git.tom.zanussi@linux.intel.com
      
      Cc: Joe Perches <joe@perches.com>
      Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      x86/ftrace: Fix warning and consolidate ftrace_jmp_replace() and ftrace_call_replace() · 745cfeaa
      Steven Rostedt (VMware) authored
      
      
      Arnd reported the following compiler warning:
      
      arch/x86/kernel/ftrace.c:669:23: error: 'ftrace_jmp_replace' defined but not used [-Werror=unused-function]
      
      The ftrace_jmp_replace() function now has only a single user and could
      simply be moved to that user. But looking at the code, it shows that
      ftrace_jmp_replace() is similar to ftrace_call_replace() except that instead
      of using the opcode 0xe8 it uses 0xe9. It makes more sense to consolidate
      the two into one implementation that both ftrace_jmp_replace() and
      ftrace_call_replace() use by passing in the opcode separately.
      
      The structure in ftrace_code_union is also modified to replace the "e8"
      field with the more appropriate name "op".
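      The shape of the consolidated helper can be sketched in userspace C (the function names and union layout here are illustrative; the real code lives in arch/x86/kernel/ftrace.c):

      ```c
      #include <stdint.h>
      #include <string.h>

      /* 5-byte x86 call/jmp: one opcode byte plus a 32-bit relative offset. */
      union code_union {
              uint8_t code[5];
              struct __attribute__((packed)) {
                      uint8_t op;     /* 0xe8 = call, 0xe9 = jmp; was named "e8" */
                      int32_t offset;
              };
      };

      /* One implementation serving both call and jmp replacement:
       * the opcode is passed in separately. */
      static const uint8_t *gen_branch_insn(uint8_t op, uint64_t ip, uint64_t addr)
      {
              static union code_union calc;

              calc.op = op;
              /* The offset is relative to the end of the 5-byte instruction. */
              calc.offset = (int32_t)(addr - ip - sizeof(calc.code));
              return calc.code;
      }

      /* Helper to read the encoded offset back for checking. */
      static int32_t insn_offset(const uint8_t *code)
      {
              int32_t off;

              memcpy(&off, code + 1, sizeof(off));
              return off;
      }
      ```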
      
      Cc: stable@vger.kernel.org
      Reported-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Link: http://lkml.kernel.org/r/20190304200748.1418790-1-arnd@arndb.de
      Fixes: d2a68c4e ("x86/ftrace: Do not call function graph from dynamic trampolines")
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  5. Feb 21, 2019
  6. Feb 16, 2019
      uprobes: convert uprobe.ref to refcount_t · ce59b8e9
      Elena Reshetova authored
      
      
      atomic_t variables are currently used to implement reference
      counters with the following properties:
       - counter is initialized to 1 using atomic_set()
       - a resource is freed upon counter reaching zero
       - once counter reaches zero, its further
         increments aren't allowed
       - counter schema uses basic atomic operations
         (set, inc, inc_not_zero, dec_and_test, etc.)
      
      Such atomic variables should be converted to a newly provided
      refcount_t type and API that prevents accidental counter overflows
      and underflows. This is important since overflows and underflows
      can lead to use-after-free situation and be exploitable.
      
      The variable uprobe.ref is used as a pure reference counter.
      Convert it to refcount_t and fix up the operations.
      
      Important note for maintainers:
      
      Some functions from refcount_t API defined in lib/refcount.c
      have different memory ordering guarantees than their atomic
      counterparts.
      The full comparison can be seen in
      https://lkml.org/lkml/2017/11/15/57 and will hopefully soon
      be merged into the documentation tree.
      Normally the differences should not matter since refcount_t provides
      enough guarantees to satisfy the refcounting use cases, but in
      some rare cases it might matter.
      Please double check that you don't have some undocumented
      memory guarantees for this variable usage.
      
      For the uprobe.ref it might make a difference
      in following places:
       - put_uprobe(): decrement in refcount_dec_and_test() only
         provides RELEASE ordering and control dependency on success
         vs. fully ordered atomic counterpart
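      The ordering difference can be illustrated with C11 atomics (a userspace sketch: refcount_dec_and_test() behaves like the release-ordered decrement plus acquire fence below, rather than a fully ordered atomic_dec_and_test()):

      ```c
      #include <stdatomic.h>

      struct obj {
              atomic_int ref;
              int freed;      /* stand-in for the real teardown */
      };

      static void put_obj(struct obj *o)
      {
              /* RELEASE ordering on the decrement; only the thread that
               * drops the last reference takes the acquire fence and frees. */
              if (atomic_fetch_sub_explicit(&o->ref, 1, memory_order_release) == 1) {
                      atomic_thread_fence(memory_order_acquire);
                      o->freed = 1;
              }
      }

      static int demo_two_puts(void)
      {
              struct obj o = { 2, 0 };
              int freed_early;

              put_obj(&o);            /* ref 2 -> 1: must not free */
              freed_early = o.freed;
              put_obj(&o);            /* ref 1 -> 0: frees */
              return !freed_early && o.freed;
      }
      ```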
      
      Link: http://lkml.kernel.org/r/1547637627-29526-1-git-send-email-elena.reshetova@intel.com
      
      Suggested-by: Kees Cook <keescook@chromium.org>
      Acked-by: Oleg Nesterov <oleg@redhat.com>
      Reviewed-by: David Windsor <dwindsor@gmail.com>
      Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
      Reviewed-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      ftrace: Allow enabling of filters via index of available_filter_functions · f79b3f33
      Steven Rostedt (VMware) authored
      
      
      Enabling of large number of functions by echoing in a large subset of the
      functions in available_filter_functions can take a very long time. The
      process requires testing all functions registered by the function tracer
      (which is in the 10s of thousands), and doing a kallsyms lookup to convert
      the ip address into a name, then comparing that name with the string passed
      in.
      
      When a function causes the function tracer to crash the system, a binary
      bisect of the available_filter_functions can be done to find the culprit.
      But this requires passing in half of the functions in
      available_filter_functions over and over again, which makes it basically an
      O(n^2) operation. With 40,000 functions, that ends up being 1,600,000,000
      operations! And enabling this can take over 20 minutes.
      
      As a quick speed up, if a number is passed into one of the filter files,
      instead of doing a search, it just enables the function at the corresponding
      line of the available_filter_functions file. That is:
      
       # echo 50 > set_ftrace_filter
       # cat set_ftrace_filter
       x86_pmu_commit_txn
      
       # head -50 available_filter_functions | tail -1
       x86_pmu_commit_txn
      
      This allows setting of half the available_filter_functions to take place in
      less than a second!
      
       # time seq 20000 > set_ftrace_filter
       real    0m0.042s
       user    0m0.005s
       sys     0m0.015s
      
       # wc -l set_ftrace_filter
       20000 set_ftrace_filter
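      The shortcut can be sketched in userspace C (the function table is illustrative; the kernel walks its ftrace records rather than an array):

      ```c
      #include <stdlib.h>
      #include <string.h>

      static const char *const funcs[] = {
              "schedule", "do_idle", "x86_pmu_commit_txn",
      };
      #define NFUNCS (sizeof(funcs) / sizeof(funcs[0]))

      static const char *select_filter(const char *input)
      {
              char *end;
              long idx = strtol(input, &end, 10);

              /* A pure number selects the 1-based line directly, skipping
               * the name lookup and string compare for every record. */
              if (*input && *end == '\0' && idx >= 1 && (size_t)idx <= NFUNCS)
                      return funcs[idx - 1];

              /* Otherwise fall back to the old linear name search. */
              for (size_t i = 0; i < NFUNCS; i++)
                      if (strcmp(funcs[i], input) == 0)
                              return funcs[i];
              return NULL;
      }
      ```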
      
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  7. Feb 12, 2019
      tracing: Change the function format to display function names by perf · 85acbb21
      Changbin Du authored
      
      
      Here is an example for this change.
      
      $ sudo perf record -e 'ftrace:function' --filter='ip==schedule'
      $ sudo perf report
      
      The output of perf before this patch:
      
      # Samples: 100  of event 'ftrace:function'
      # Event count (approx.): 100
      #
      # Overhead  Trace output
      # ........  ......................................
      #
          51.00%   ffffffff81f6aaa0 <-- ffffffff81158e8d
          29.00%   ffffffff81f6aaa0 <-- ffffffff8116ccb2
           8.00%   ffffffff81f6aaa0 <-- ffffffff81f6f2ed
           4.00%   ffffffff81f6aaa0 <-- ffffffff811628db
           4.00%   ffffffff81f6aaa0 <-- ffffffff81f6ec5b
           2.00%   ffffffff81f6aaa0 <-- ffffffff81f6f21a
           1.00%   ffffffff81f6aaa0 <-- ffffffff811b04af
           1.00%   ffffffff81f6aaa0 <-- ffffffff8143ce17
      
      After this patch:
      
      # Samples: 36  of event 'ftrace:function'
      # Event count (approx.): 36
      #
      # Overhead  Trace output
      # ........  ............................................
      #
          38.89%   schedule <-- schedule_hrtimeout_range_clock
          27.78%   schedule <-- worker_thread
          13.89%   schedule <-- schedule_timeout
          11.11%   schedule <-- smpboot_thread_fn
           5.56%   schedule <-- rcu_gp_kthread
           2.78%   schedule <-- exit_to_usermode_loop
      
      Link: http://lkml.kernel.org/r/20190209161919.32350-1-changbin.du@gmail.com
      
      Signed-off-by: Changbin Du <changbin.du@gmail.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  8. Feb 07, 2019
      ring-buffer: Remove unused function ring_buffer_page_len() · d325c402
      Miroslav Benes authored
      Commit 6b7e633f ("tracing: Remove extra zeroing out of the ring
      buffer page") removed the only caller of ring_buffer_page_len(). The
      function is now unused and may be removed.
      
      Link: http://lkml.kernel.org/r/20181228133847.106177-1-mbenes@suse.cz
      
      Signed-off-by: Miroslav Benes <mbenes@suse.cz>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      tracing: Show stacktrace for wakeup tracers · f52d569f
      Changbin Du authored
      
      
      This aligns the behavior of the wakeup tracers with the irqsoff latency
      tracer: we record a stacktrace at the beginning and end of waking up.
      The stacktrace shows us what is happening in the kernel.
      
      Link: http://lkml.kernel.org/r/20190116160249.7554-1-changbin.du@gmail.com
      
      Signed-off-by: Changbin Du <changbin.du@gmail.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      tracing/doc: Add latency tracer funcgraph example · 88d380eb
      Changbin Du authored
      
      
      This adds an example of how to use funcgraph with latency tracers.
      
      Link: http://lkml.kernel.org/r/20190101154614.8887-6-changbin.du@gmail.com
      
      Signed-off-by: Changbin Du <changbin.du@gmail.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      tracing: Put a margin between flags and duration for wakeup tracers · afbab501
      Changbin Du authored
      
      
      Don't mix context flags with function duration info.
      
      Instead of this:
      
       # tracer: wakeup_rt
       #
       # wakeup_rt latency trace v1.1.5 on 5.0.0-rc1-test+
       # --------------------------------------------------------------------
       # latency: 177 us, #545/545, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:8)
       #    -----------------
       #    | task: migration/0-11 (uid:0 nice:0 policy:1 rt_prio:99)
       #    -----------------
       #
       #                                       _-----=> irqs-off
       #                                      / _----=> need-resched
       #                                     | / _---=> hardirq/softirq
       #                                     || / _--=> preempt-depth
       #                                     ||| /
       #   REL TIME      CPU  TASK/PID       ||||  DURATION                  FUNCTION CALLS
       #      |          |     |    |        ||||   |   |                     |   |   |   |
               0 us |   0)    <idle>-0    |  dNh5              |  /*      0:120:R   + [000]    11:  0:R migration/0 */
               2 us |   0)    <idle>-0    |  dNh5  0.000 us    |            (null)();
               4 us |   0)    <idle>-0    |  dNh4              |  _raw_spin_unlock() {
               4 us |   0)    <idle>-0    |  dNh4  0.304 us    |    preempt_count_sub();
               5 us |   0)    <idle>-0    |  dNh3  1.063 us    |  }
               5 us |   0)    <idle>-0    |  dNh3  0.266 us    |  ttwu_stat();
               6 us |   0)    <idle>-0    |  dNh3              |  _raw_spin_unlock_irqrestore() {
               6 us |   0)    <idle>-0    |  dNh3  0.273 us    |    preempt_count_sub();
               6 us |   0)    <idle>-0    |  dNh2  0.818 us    |  }
      
      Show this:
      
       # tracer: wakeup
       #
       # wakeup latency trace v1.1.5 on 4.20.0+
       # --------------------------------------------------------------------
       # latency: 593 us, #674/674, CPU#0 | (M:desktop VP:0, KP:0, SP:0 HP:0 #P:4)
       #    -----------------
       #    | task: kworker/0:1H-339 (uid:0 nice:-20 policy:0 rt_prio:0)
       #    -----------------
       #
       #                                      _-----=> irqs-off
       #                                     / _----=> need-resched
       #                                    | / _---=> hardirq/softirq
       #                                    || / _--=> preempt-depth
       #                                    ||| /
       #  REL TIME      CPU  TASK/PID       ||||     DURATION                  FUNCTION CALLS
       #     |          |     |    |        ||||      |   |                     |   |   |   |
              0 us |   0)    <idle>-0    |  dNs. |               |  /*      0:120:R   + [000]   339:100:R kworker/0:1H */
              3 us |   0)    <idle>-0    |  dNs. |   0.000 us    |            (null)();
             67 us |   0)    <idle>-0    |  dNs. |   0.721 us    |  ttwu_stat();
             69 us |   0)    <idle>-0    |  dNs. |   0.607 us    |  _raw_spin_unlock_irqrestore();
             71 us |   0)    <idle>-0    |  .Ns. |   0.598 us    |  _raw_spin_lock_irq();
             72 us |   0)    <idle>-0    |  .Ns. |   0.584 us    |  _raw_spin_lock_irq();
             73 us |   0)    <idle>-0    |  dNs. | + 11.118 us   |  __next_timer_interrupt();
             75 us |   0)    <idle>-0    |  dNs. |               |  call_timer_fn() {
             76 us |   0)    <idle>-0    |  dNs. |               |    delayed_work_timer_fn() {
             76 us |   0)    <idle>-0    |  dNs. |               |      __queue_work() {
             ...
      
      Link: http://lkml.kernel.org/r/20190101154614.8887-4-changbin.du@gmail.com
      
      Signed-off-by: Changbin Du <changbin.du@gmail.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      tracing: Show more info for funcgraph wakeup tracers · 97f0a3bc
      Changbin Du authored
      
      
      Add these info fields to funcgraph wakeup tracers:
        o Show CPU info since the waker could be on a different CPU.
        o Show function duration and overhead.
        o Show IRQ markers.
      
      Link: http://lkml.kernel.org/r/20190101154614.8887-3-changbin.du@gmail.com
      
      Signed-off-by: Changbin Du <changbin.du@gmail.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      tracing: Add comment to predicate_parse() about "&&" or "||" · 6c6dbce1
      Steven Rostedt (VMware) authored
      
      
      As the predicate_parse() code is rather complex, commenting subtleties is
      important. The switch case statement should be commented to describe that it
      is only looking for two '&' or '|' together, which is why the fall through
      to an error is after the check.
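      The check being documented looks roughly like this (a simplified extract, not the exact predicate_parse() source):

      ```c
      /* Only "&&" and "||" are valid logic operators; a lone '&' or '|'
       * must be rejected, which is why the error path is reached by
       * falling through after the two-character check. */
      static int is_logic_op(const char *s)
      {
              switch (s[0]) {
              case '|':
              case '&':
                      if (s[1] == s[0])       /* two '&' or two '|' together */
                              return 1;
                      /* fall through: a single '&' or '|' is an error */
              default:
                      return 0;
              }
      }
      ```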
      
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      tracing: Annotate implicit fall through in predicate_parse() · 9399ca21
      Mathieu Malaterre authored
      
      
      There is a plan to build the kernel with -Wimplicit-fallthrough and
      this place in the code produced a warning (W=1).
      
      This commit removes the following warning:
      
        kernel/trace/trace_events_filter.c:494:8: warning: this statement may fall through [-Wimplicit-fallthrough=]
      
      Link: http://lkml.kernel.org/r/20190114203039.16535-2-malat@debian.org
      
      Signed-off-by: Mathieu Malaterre <malat@debian.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      tracing: Annotate implicit fall through in parse_probe_arg() · 91457c01
      Mathieu Malaterre authored
      
      
      There is a plan to build the kernel with -Wimplicit-fallthrough and
      this place in the code produced a warning (W=1).
      
      This commit removes the following warning:
      
        kernel/trace/trace_probe.c:302:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
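      Both this patch and the previous one add the same kind of annotation; a minimal illustration (a hypothetical switch, not the kernel code):

      ```c
      /* With -Wimplicit-fallthrough, gcc warns on an unannotated fall
       * through; a "fall through" comment (or the fallthrough attribute)
       * marks it as intentional and silences the warning. */
      static int flags_for(char modifier)
      {
              int flags = 0;

              switch (modifier) {
              case 'u':
                      flags |= 1;
                      /* fall through */
              case 's':
                      flags |= 2;
                      break;
              default:
                      break;
              }
              return flags;
      }
      ```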
      
      Link: http://lkml.kernel.org/r/20190114203039.16535-1-malat@debian.org
      
      Signed-off-by: Mathieu Malaterre <malat@debian.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      function_graph: Support displaying relative timestamp · 9acd8de6
      Changbin Du authored
      
      
      When function_graph is used by the latency tracers, a relative timestamp
      is more straightforward than the absolute timestamp that the function
      tracer uses. This change adds relative timestamp support to
      function_graph and applies it to the latency tracers (wakeup and irqsoff).
      
      Instead of:
      
       # tracer: irqsoff
       #
       # irqsoff latency trace v1.1.5 on 5.0.0-rc1-test
       # --------------------------------------------------------------------
       # latency: 521 us, #1125/1125, CPU#2 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:8)
       #    -----------------
       #    | task: swapper/2-0 (uid:0 nice:0 policy:0 rt_prio:0)
       #    -----------------
       #  => started at: __schedule
       #  => ended at:   _raw_spin_unlock_irq
       #
       #
       #                                       _-----=> irqs-off
       #                                      / _----=> need-resched
       #                                     | / _---=> hardirq/softirq
       #                                     || / _--=> preempt-depth
       #                                     ||| /
       #     TIME        CPU  TASK/PID       ||||  DURATION                  FUNCTION CALLS
       #      |          |     |    |        ||||   |   |                     |   |   |   |
         124.974306 |   2)  systemd-693   |  d..1  0.000 us    |  __schedule();
         124.974307 |   2)  systemd-693   |  d..1              |    rcu_note_context_switch() {
         124.974308 |   2)  systemd-693   |  d..1  0.487 us    |      rcu_preempt_deferred_qs();
         124.974309 |   2)  systemd-693   |  d..1  0.451 us    |      rcu_qs();
         124.974310 |   2)  systemd-693   |  d..1  2.301 us    |    }
      [..]
         124.974826 |   2)    <idle>-0    |  d..2              |  finish_task_switch() {
         124.974826 |   2)    <idle>-0    |  d..2              |    _raw_spin_unlock_irq() {
         124.974827 |   2)    <idle>-0    |  d..2  0.000 us    |  _raw_spin_unlock_irq();
         124.974828 |   2)    <idle>-0    |  d..2  0.000 us    |  tracer_hardirqs_on();
         <idle>-0       2d..2  552us : <stack trace>
        => __schedule
        => schedule_idle
        => do_idle
        => cpu_startup_entry
        => start_secondary
        => secondary_startup_64
      
      Show:
      
       # tracer: irqsoff
       #
       # irqsoff latency trace v1.1.5 on 5.0.0-rc1-test+
       # --------------------------------------------------------------------
       # latency: 511 us, #1053/1053, CPU#7 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:8)
       #    -----------------
       #    | task: swapper/7-0 (uid:0 nice:0 policy:0 rt_prio:0)
       #    -----------------
       #  => started at: __schedule
       #  => ended at:   _raw_spin_unlock_irq
       #
       #
       #                                       _-----=> irqs-off
       #                                      / _----=> need-resched
       #                                     | / _---=> hardirq/softirq
       #                                     || / _--=> preempt-depth
       #                                     ||| /
       #   REL TIME      CPU  TASK/PID       ||||  DURATION                  FUNCTION CALLS
       #      |          |     |    |        ||||   |   |                     |   |   |   |
               0 us |   7)   sshd-1704    |  d..1  0.000 us    |  __schedule();
               1 us |   7)   sshd-1704    |  d..1              |    rcu_note_context_switch() {
               1 us |   7)   sshd-1704    |  d..1  0.611 us    |      rcu_preempt_deferred_qs();
               2 us |   7)   sshd-1704    |  d..1  0.484 us    |      rcu_qs();
               3 us |   7)   sshd-1704    |  d..1  2.599 us    |    }
      [..]
             509 us |   7)    <idle>-0    |  d..2              |  finish_task_switch() {
             510 us |   7)    <idle>-0    |  d..2              |    _raw_spin_unlock_irq() {
             510 us |   7)    <idle>-0    |  d..2  0.000 us    |  _raw_spin_unlock_irq();
             512 us |   7)    <idle>-0    |  d..2  0.000 us    |  tracer_hardirqs_on();
         <idle>-0       7d..2  543us : <stack trace>
        => __schedule
        => schedule_idle
        => do_idle
        => cpu_startup_entry
        => start_secondary
        => secondary_startup_64
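      The conversion itself is simple arithmetic: each entry's timestamp minus the timestamp at the start of the trace (a sketch; function and parameter names are illustrative):

      ```c
      /* Relative timestamp in microseconds, as printed in the REL TIME
       * column: entry time minus the time the latency trace started. */
      static unsigned long long rel_ts_us(unsigned long long ts_ns,
                                          unsigned long long start_ns)
      {
              return (ts_ns - start_ns) / 1000ULL;
      }
      ```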
      
      Link: http://lkml.kernel.org/r/20190101154614.8887-2-changbin.du@gmail.com
      
      Signed-off-by: Changbin Du <changbin.du@gmail.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>