  1. Feb 07, 2018
    • lib/: make RUNTIME_TESTS a menuconfig to ease disabling it all · d3deafaa
      Vincent Legoll authored
      
      
      No need to get into the submenu to disable all related config entries.
      
      This makes it easier to disable all RUNTIME_TESTS config options without
      entering the submenu, and it also makes the enabled/disabled state visible
      from the outer menu.

      This is only intended to change the menuconfig UI, not the config
      dependencies.
      
      Link: http://lkml.kernel.org/r/20171209162742.7363-1-vincent.legoll@gmail.com
      Signed-off-by: Vincent Legoll <vincent.legoll@gmail.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: "Luis R. Rodriguez" <mcgrof@kernel.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d3deafaa
    • lib: optimize cpumask_next_and() · 0ade34c3
      Clement Courbet authored
      
      
      We've measured that we spend ~0.6% of sys cpu time in cpumask_next_and().
      It's essentially a joined iteration in search for a non-zero bit, which is
      currently implemented as a lookup join (find a nonzero bit on the lhs,
      lookup the rhs to see if it's set there).
      
      Implement a direct join instead (find a nonzero bit on the incrementally
      built AND of the two operands).  Also add generic bitmap benchmarks for
      the new function in the new `test_find_bit` module (see
      `find_next_and_bit` in [2] and [3] below).
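
      As a rough illustration only, here is a minimal userspace sketch of the
      direct-join idea (this is not the kernel's `find_next_and_bit()`; the
      name and the word-at-a-time handling are simplified):

      ```
      #define BITS_PER_LONG (8 * sizeof(unsigned long))

      /* Direct join: AND each pair of words on the fly and search the result,
       * instead of finding a set bit in b1 and then testing it in b2. */
      unsigned long next_and_bit(const unsigned long *b1, const unsigned long *b2,
                                 unsigned long nbits, unsigned long start)
      {
              unsigned long i;

              for (i = start / BITS_PER_LONG; i * BITS_PER_LONG < nbits; i++) {
                      unsigned long word = b1[i] & b2[i];

                      if (i == start / BITS_PER_LONG)
                              word &= ~0UL << (start % BITS_PER_LONG); /* skip bits below start */
                      if (word) {
                              unsigned long pos = i * BITS_PER_LONG + __builtin_ctzl(word);

                              return pos < nbits ? pos : nbits;
                      }
              }
              return nbits; /* no common set bit */
      }
      ```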
      
      For cpumask_next_and, direct benchmarking shows that it's 1.17x to 14x
      faster with a geometric mean of 2.1 on 32 CPUs [1].  No impact on memory
      usage.  Note that on Arm, the new pure-C implementation still outperforms
      the old one that uses a mix of C and asm (`find_next_bit`) [3].
      
      [1] Approximate benchmark code:
      
      ```
        unsigned long src1p[nr_cpumask_longs] = {pattern1};
        unsigned long src2p[nr_cpumask_longs] = {pattern2};
        for (/*a bunch of repetitions*/) {
          for (int n = -1; n <= nr_cpu_ids; ++n) {
            asm volatile("" : "+rm"(src1p)); // prevent any optimization
            asm volatile("" : "+rm"(src2p));
            unsigned long result = cpumask_next_and(n, src1p, src2p);
            asm volatile("" : "+rm"(result));
          }
        }
      ```
      
      Results:
      pattern1    pattern2     time_before/time_after
      0x0000ffff  0x0000ffff   1.65
      0x0000ffff  0x00005555   2.24
      0x0000ffff  0x00001111   2.94
      0x0000ffff  0x00000000   14.0
      0x00005555  0x0000ffff   1.67
      0x00005555  0x00005555   1.71
      0x00005555  0x00001111   1.90
      0x00005555  0x00000000   6.58
      0x00001111  0x0000ffff   1.46
      0x00001111  0x00005555   1.49
      0x00001111  0x00001111   1.45
      0x00001111  0x00000000   3.10
      0x00000000  0x0000ffff   1.18
      0x00000000  0x00005555   1.18
      0x00000000  0x00001111   1.17
      0x00000000  0x00000000   1.25
      -----------------------------
                     geo.mean  2.06
      
      [2] test_find_next_bit, X86 (skylake)
      
       [ 3913.477422] Start testing find_bit() with random-filled bitmap
       [ 3913.477847] find_next_bit: 160868 cycles, 16484 iterations
       [ 3913.477933] find_next_zero_bit: 169542 cycles, 16285 iterations
       [ 3913.478036] find_last_bit: 201638 cycles, 16483 iterations
       [ 3913.480214] find_first_bit: 4353244 cycles, 16484 iterations
       [ 3913.480216] Start testing find_next_and_bit() with random-filled bitmap
       [ 3913.481074] find_next_and_bit: 89604 cycles, 8216 iterations
       [ 3913.481075] Start testing find_bit() with sparse bitmap
       [ 3913.481078] find_next_bit: 2536 cycles, 66 iterations
       [ 3913.481252] find_next_zero_bit: 344404 cycles, 32703 iterations
       [ 3913.481255] find_last_bit: 2006 cycles, 66 iterations
       [ 3913.481265] find_first_bit: 17488 cycles, 66 iterations
       [ 3913.481266] Start testing find_next_and_bit() with sparse bitmap
       [ 3913.481272] find_next_and_bit: 764 cycles, 1 iterations
      
      [3] test_find_next_bit, arm (v7 odroid XU3).
      
      [  267.206928] Start testing find_bit() with random-filled bitmap
      [  267.214752] find_next_bit: 4474 cycles, 16419 iterations
      [  267.221850] find_next_zero_bit: 5976 cycles, 16350 iterations
      [  267.229294] find_last_bit: 4209 cycles, 16419 iterations
      [  267.279131] find_first_bit: 1032991 cycles, 16420 iterations
      [  267.286265] Start testing find_next_and_bit() with random-filled bitmap
      [  267.302386] find_next_and_bit: 2290 cycles, 8140 iterations
      [  267.309422] Start testing find_bit() with sparse bitmap
      [  267.316054] find_next_bit: 191 cycles, 66 iterations
      [  267.322726] find_next_zero_bit: 8758 cycles, 32703 iterations
      [  267.329803] find_last_bit: 84 cycles, 66 iterations
      [  267.336169] find_first_bit: 4118 cycles, 66 iterations
      [  267.342627] Start testing find_next_and_bit() with sparse bitmap
      [  267.356919] find_next_and_bit: 91 cycles, 1 iterations
      
      [courbet@google.com: v6]
        Link: http://lkml.kernel.org/r/20171129095715.23430-1-courbet@google.com
      [geert@linux-m68k.org: m68k/bitops: always include <asm-generic/bitops/find.h>]
        Link: http://lkml.kernel.org/r/1512556816-28627-1-git-send-email-geert@linux-m68k.org
      Link: http://lkml.kernel.org/r/20171128131334.23491-1-courbet@google.com
      Signed-off-by: Clement Courbet <courbet@google.com>
      Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Yury Norov <ynorov@caviumnetworks.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0ade34c3
    • lib/find_bit_benchmark.c: improvements · 15ff67bf
      Yury Norov authored
      
      
      As suggested in review comments:
      * printk: align numbers using whitespaces instead of tabs;
      * return an error from init() so the module does not stay loaded and
        rmmod is not needed before re-testing;
      * use ktime_get() instead of get_cycles(), as some arches don't support
        get_cycles() (see the timing sketch below);
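
      A minimal sketch of the ktime-based timing pattern (variable and macro
      names such as BITMAP_LEN are assumed here, not quoted from the module):

      ```
      ktime_t time;
      unsigned long i, cnt;

      /* time one full sweep over the bitmap, in nanoseconds */
      time = ktime_get();
      for (cnt = i = 0; i < BITMAP_LEN; cnt++)
              i = find_next_bit(bitmap, BITMAP_LEN, i) + 1;
      time = ktime_get() - time;
      pr_err("find_next_bit: %18llu ns, %6ld iterations\n", (u64)time, cnt);
      ```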
      
      The output in dmesg (on QEMU arm64):
      [   38.823430] Start testing find_bit() with random-filled bitmap
      [   38.845358] find_next_bit:                20138448 ns, 163968 iterations
      [   38.856217] find_next_zero_bit:           10615328 ns, 163713 iterations
      [   38.863564] find_last_bit:                 7111888 ns, 163967 iterations
      [   40.944796] find_first_bit:             2081007216 ns, 163968 iterations
      [   40.944975]
      [   40.944975] Start testing find_bit() with sparse bitmap
      [   40.945268] find_next_bit:                   73216 ns,    656 iterations
      [   40.967858] find_next_zero_bit:           22461008 ns, 327025 iterations
      [   40.968047] find_last_bit:                   62320 ns,    656 iterations
      [   40.978060] find_first_bit:                9889360 ns,    656 iterations
      
      Link: http://lkml.kernel.org/r/20171124143040.a44jvhmnaiyedg2i@yury-thinkpad
      Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
      Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Clement Courbet <courbet@google.com>
      Cc: Matthew Wilcox <mawilcox@microsoft.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      15ff67bf
    • lib/test_find_bit.c: rename to find_bit_benchmark.c · dceeb3e7
      Yury Norov authored
      
      
      As suggested in review comments, rename test_find_bit.c to
      find_bit_benchmark.c.
      
      Link: http://lkml.kernel.org/r/20171124143040.a44jvhmnaiyedg2i@yury-thinkpad
      Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
      Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Clement Courbet <courbet@google.com>
      Cc: Matthew Wilcox <mawilcox@microsoft.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dceeb3e7
    • lib/stackdepot.c: use a non-instrumented version of memcmp() · a571b272
      Alexander Potapenko authored
      
      
      stackdepot used to call memcmp(), which compiler tools normally
      instrument, therefore every lookup used to unnecessarily call instrumented
      code.  This is somewhat ok in the case of KASAN, but under KMSAN a lot of
      time was spent in the instrumentation.
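
      A hedged sketch of what such a non-instrumented comparison can look like
      (the helper name and exact signature are assumptions, not quoted from the
      patch):

      ```
      /* Plain word-by-word compare, used instead of calling the instrumented
       * memcmp() on every depot lookup. */
      static inline int stackdepot_memcmp(const unsigned long *u1,
                                          const unsigned long *u2, unsigned int n)
      {
              for ( ; n-- ; u1++, u2++) {
                      if (*u1 != *u2)
                              return 1;
              }
              return 0;
      }
      ```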
      
      Link: http://lkml.kernel.org/r/20171117172149.69562-1-glider@google.com
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a571b272
    • include/linux/bitmap.h: make bitmap_fill() and bitmap_zero() consistent · 334cfa48
      Andy Shevchenko authored
      
      
      The behaviour of bitmap_fill() differs from bitmap_zero() in how the bits
      beyond the bitmap length are handled.  bitmap_zero() clears the entire
      bitmap up to the unsigned long boundary, while bitmap_fill() mimics
      bitmap_set() and touches only the requested bits.

      Here we change bitmap_fill() behaviour to be consistent with bitmap_zero()
      and add a note to the documentation.

      The change might reveal bugs in code that relies on the unused bits being
      handled differently; in such cases bitmap_set() has to be used.
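
      For illustration, a small userspace sketch of the semantic difference
      (assuming 64-bit unsigned long; fill_like_zero() and set_bits() are
      simplified stand-ins, not the kernel implementations):

      ```
      #include <stdio.h>
      #include <string.h>

      #define BITS_PER_LONG 64
      #define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

      /* new bitmap_fill() behaviour: whole words, including the trailing bits */
      static void fill_like_zero(unsigned long *map, unsigned int nbits)
      {
              memset(map, 0xff, BITS_TO_LONGS(nbits) * sizeof(unsigned long));
      }

      /* bitmap_set()-style behaviour: only the requested bits */
      static void set_bits(unsigned long *map, unsigned int start, unsigned int len)
      {
              unsigned int i;

              for (i = start; i < start + len; i++)
                      map[i / BITS_PER_LONG] |= 1UL << (i % BITS_PER_LONG);
      }

      int main(void)
      {
              unsigned long a[BITS_TO_LONGS(100)] = { 0 };
              unsigned long b[BITS_TO_LONGS(100)] = { 0 };

              fill_like_zero(a, 100);  /* bits 100..127 end up set as well */
              set_bits(b, 0, 100);     /* bits 100..127 stay clear */
              printf("trailing bits: fill=%lx set=%lx\n", a[1] >> 36, b[1] >> 36);
              return 0;
      }
      ```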
      
      Link: http://lkml.kernel.org/r/20180109172430.87452-4-andriy.shevchenko@linux.intel.com
      Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Suggested-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Yury Norov <ynorov@caviumnetworks.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      334cfa48
    • lib/test_bitmap.c: clean up test_zero_fill_copy() test case and rename · fe81814c
      Andy Shevchenko authored
      
      
      Since we have separate explicit test cases for bitmap_zero() /
      bitmap_clear() and bitmap_fill() / bitmap_set(), clean up
      test_zero_fill_copy() to test only the bitmap_copy() functionality, and
      rename the function to reflect the change.
      
      While here, replace bitmap_fill() by bitmap_set() with proper values.
      
      Link: http://lkml.kernel.org/r/20180109172430.87452-3-andriy.shevchenko@linux.intel.com
      Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Reviewed-by: Yury Norov <ynorov@caviumnetworks.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fe81814c
    • lib/test_bitmap.c: add bitmap_fill()/bitmap_set() test cases · 978f369c
      Andy Shevchenko authored
      
      
      Explicitly test bitmap_fill() and bitmap_set() functions.
      
      For bitmap_fill() we expect behaviour consistent with bitmap_zero(),
      i.e.  the trailing bits will be set up to the unsigned long boundary.
      
      Link: http://lkml.kernel.org/r/20180109172430.87452-2-andriy.shevchenko@linux.intel.com
      Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Reviewed-by: Yury Norov <ynorov@caviumnetworks.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      978f369c
    • lib/test_bitmap.c: add bitmap_zero()/bitmap_clear() test cases · ee3527bd
      Andy Shevchenko authored
      
      
      Explicitly test bitmap_zero() and bitmap_clear() functions.
      
      Link: http://lkml.kernel.org/r/20180109172430.87452-1-andriy.shevchenko@linux.intel.com
      Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Reviewed-by: Yury Norov <ynorov@caviumnetworks.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ee3527bd
    • bitmap: replace bitmap_{from,to}_u32array · 3aa56885
      Yury Norov authored
      
      
      with bitmap_{from,to}_arr32 across the kernel. In addition:
      * __check_eq_bitmap() now takes a single nbits argument;
      * __check_eq_u32_array is not used in the new test but may be used in
        the future, so it is not removed here, only annotated as __used.
      
      Tested on arm64 and 32-bit BE mips.
      
      [arnd@arndb.de: perf: arm_dsu_pmu: convert to bitmap_from_arr32]
        Link: http://lkml.kernel.org/r/20180201172508.5739-2-ynorov@caviumnetworks.com
      [ynorov@caviumnetworks.com: fix net/core/ethtool.c]
        Link: http://lkml.kernel.org/r/20180205071747.4ekxtsbgxkj5b2fz@yury-thinkpad
      Link: http://lkml.kernel.org/r/20171228150019.27953-2-ynorov@caviumnetworks.com
      Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: David Decotigny <decot@googlers.com>,
      Cc: David S. Miller <davem@davemloft.net>,
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Matthew Wilcox <mawilcox@microsoft.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Heiner Kallweit <hkallweit1@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3aa56885
    • bitmap: new bitmap_copy_safe and bitmap_{from,to}_arr32 · c724f193
      Yury Norov authored
      
      
      This patchset replaces bitmap_{to,from}_u32array with simpler and more
      standard-looking copy-like functions.
      
      bitmap_from_u32array() takes 4 arguments (bitmap_to_u32array is similar):
       - unsigned long *bitmap, the destination;
       - unsigned int nbits, the length of the destination bitmap, in bits;
       - const u32 *buf, the source; and
       - unsigned int nwords, the length of the source buffer, in 32-bit words.

      The function's kernel-doc describes it like this:
      * copy min(nbits, 32*nwords) bits from @buf to @bitmap, remaining
      * bits between nword and nbits in @bitmap (if any) are cleared.
      
      Having two size arguments looks unneeded and potentially dangerous.

      It is unneeded because the user of a copy-like function should normally
      take care of the destination's size and make it big enough to fit the
      source data.

      And it is dangerous because the function may hide an error when the user
      doesn't provide a big enough bitmap: data is silently dropped.

      That's why all copy-like functions take a single size argument for the
      data being copied, and I don't see any reason to make
      bitmap_from_u32array() different.
      
      One exception that comes to mind is strncpy(), which also takes the size
      of the destination, but that is justified by the possibility of receiving
      broken (unterminated) strings as the source.  This is not the case for
      bitmap_{from,to}_u32array().

      There are not many real users of bitmap_{from,to}_u32array(), and they
      all very clearly provide a destination size that matches the source size,
      so the extra functionality is unused in practice.  For example:
      bitmap_from_u32array(to->link_modes.supported,
      		__ETHTOOL_LINK_MODE_MASK_NBITS,
      		link_usettings.link_modes.supported,
      		__ETHTOOL_LINK_MODE_MASK_NU32);
      Where:
      #define __ETHTOOL_LINK_MODE_MASK_NU32 \
      	DIV_ROUND_UP(__ETHTOOL_LINK_MODE_MASK_NBITS, 32)
      
      In this patch, bitmap_copy_safe and bitmap_{from,to}_arr32 are introduced.

      'Safe' in bitmap_copy_safe() stands for clearing the unused bits of the
      bitmap beyond the last bit, up to the end of the last word.  This is
      useful for hardening the API when the bitmap is expected to be exposed to
      userspace.

      The bitmap_{from,to}_arr32 functions are replacements for
      bitmap_{from,to}_u32array.  They don't take the unneeded nwords argument
      and are therefore simpler to implement and understand.
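
      For illustration, the ethtool call site quoted above changes shape
      roughly like this (a sketch of the API shapes, not a quoted diff):

      ```
      /* old: two size arguments, destination in bits plus source in u32 words */
      bitmap_from_u32array(to->link_modes.supported,
                           __ETHTOOL_LINK_MODE_MASK_NBITS,
                           link_usettings.link_modes.supported,
                           __ETHTOOL_LINK_MODE_MASK_NU32);

      /* new: a single nbits argument, like other copy-style bitmap helpers */
      bitmap_from_arr32(to->link_modes.supported,
                        link_usettings.link_modes.supported,
                        __ETHTOOL_LINK_MODE_MASK_NBITS);
      ```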
      
      This patch suggests an optimization for 32-bit systems: aliasing
      bitmap_{from,to}_arr32 to bitmap_copy_safe.

      Another possible optimization is aliasing the 64-bit LE
      bitmap_{from,to}_arr32 to a more generic function (or functions).  But I
      didn't end up with a function that would be helpful by itself and could
      be used to alias the 64-bit LE bitmap_{from,to}_arr32 the way
      bitmap_copy_safe() is used, so I preferred to leave things as they are.
      
      The following patch switches kernel to new API and introduces test for it.
      
      Discussion is here: https://lkml.org/lkml/2017/11/15/592
      
      [ynorov@caviumnetworks.com: rename bitmap_copy_safe to bitmap_copy_clear_tail]
        Link: http://lkml.kernel.org/r/20180201172508.5739-3-ynorov@caviumnetworks.com
      Link: http://lkml.kernel.org/r/20171228150019.27953-1-ynorov@caviumnetworks.com
      Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: David Decotigny <decot@googlers.com>,
      Cc: David S. Miller <davem@davemloft.net>,
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Matthew Wilcox <mawilcox@microsoft.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c724f193
    • MAINTAINERS: update sboyd's email address · eed9c249
      Stephen Boyd authored
      
      
      Replace my codeaurora.org address with my kernel.org address so that
      emails don't bounce.
      
      Link: http://lkml.kernel.org/r/20180129173258.10643-1-sboyd@codeaurora.org
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      eed9c249
    • kernel/async.c: revert "async: simplify lowest_in_progress()" · 4f7e988e
      Rasmus Villemoes authored
      This reverts commit 92266d6e ("async: simplify lowest_in_progress()")
      which was simply wrong: In the case where domain is NULL, we now use the
      wrong offsetof() in the list_first_entry macro, so we don't actually
      fetch the ->cookie value, but rather the eight bytes located
      sizeof(struct list_head) further into the struct async_entry.
      
      On 64 bit, that's the data member, while on 32 bit, that's a u64 built
      from func and data in some order.
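
      To make the offset mistake concrete, here is a self-contained userspace
      sketch (the struct layout is simplified; the real struct async_entry in
      kernel/async.c has more fields):

      ```
      #include <stdio.h>
      #include <stddef.h>

      struct list_head { struct list_head *next, *prev; };

      #define container_of(ptr, type, member) \
              ((type *)((char *)(ptr) - offsetof(type, member)))

      struct async_entry {
              struct list_head domain_list;   /* links entries of one domain */
              struct list_head global_list;   /* links async_global_pending  */
              unsigned long long cookie;
              void (*func)(void *);
              void *data;
      };

      int main(void)
      {
              struct async_entry e = { .cookie = 42, .data = (void *)0xdead };
              struct list_head *node = &e.global_list; /* as linked on the global list */

              /* correct: convert back through the member the list actually uses */
              struct async_entry *ok = container_of(node, struct async_entry, global_list);
              /* buggy shape (the reverted code): wrong member, so the pointer
               * lands sizeof(struct list_head) bytes too far into the object */
              struct async_entry *bad = container_of(node, struct async_entry, domain_list);

              printf("ok->cookie  = %llu\n", ok->cookie);   /* 42 */
              printf("bad->cookie = %llu\n", bad->cookie);  /* reads e.data's bytes */
              return 0;
      }
      ```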
      
      I think the bug happens to be harmless in practice: It obviously only
      affects callers which pass a NULL domain, and AFAICT the only such
      caller is
      
        async_synchronize_full() ->
        async_synchronize_full_domain(NULL) ->
        async_synchronize_cookie_domain(ASYNC_COOKIE_MAX, NULL)
      
      and the ASYNC_COOKIE_MAX means that in practice we end up waiting for
      the async_global_pending list to be empty - but it would break if
      somebody happened to pass (void*)-1 as the data element to
      async_schedule, and of course also if somebody ever did an
      async_synchronize_cookie_domain(, NULL) call with a "finite" cookie value.
      
      Maybe the "harmless in practice" means this isn't -stable material.  But
      I'm not completely confident my quick git grep'ing is enough, and there
      might be affected code in one of the earlier kernels that has since been
      removed, so I'll leave the decision to the stable guys.
      
      Link: http://lkml.kernel.org/r/20171128104938.3921-1-linux@rasmusvillemoes.dk
      Fixes: 92266d6e ("async: simplify lowest_in_progress()")
      Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Adam Wallis <awallis@codeaurora.org>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: <stable@vger.kernel.org>	[3.10+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4f7e988e
    • tools/lib/subcmd/pager.c: do not alias select() params · ad343a98
      Sergey Senozhatsky authored
      
      
      Use a separate fd set for select()'s exception fds parameter to fix the
      following gcc warning:
      
        pager.c:36:12: error: passing argument 2 to restrict-qualified parameter aliases with argument 4 [-Werror=restrict]
          select(1, &in, NULL, &in, NULL);
                    ^~~        ~~~
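
      The fix, sketched (variable names assumed; the point is simply that the
      readfds and exceptfds arguments no longer alias):

      ```
      fd_set in, exception;

      FD_ZERO(&in);
      FD_ZERO(&exception);
      FD_SET(0, &in);
      FD_SET(0, &exception);
      select(1, &in, NULL, &exception, NULL);
      ```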
      
      Link: http://lkml.kernel.org/r/20180101105626.7168-1-sergey.senozhatsky@gmail.com
      Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ad343a98
    • uuid: cleanup <uapi/linux/uuid.h> · dfbc3c6c
      Alexey Dobriyan authored
      
      
      The exported header doesn't use anything from <linux/string.h>;
      it is <linux/uuid.h> that uses memcmp().
      
      Link: http://lkml.kernel.org/r/20171225171121.GA22754@avx2
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dfbc3c6c
    • Makefile: introduce CONFIG_CC_STACKPROTECTOR_AUTO · 44c6dc94
      Kees Cook authored
      Nearly all modern compilers support a stack-protector option, and nearly
      all modern distributions enable the kernel stack-protector, so enabling
      this by default in kernel builds would make sense.  However, Kconfig does
      not have knowledge of available compiler features, so it isn't safe to
      force on, as this would unconditionally break builds for the compilers or
      architectures that don't have support.  Instead, this introduces a new
      option, CONFIG_CC_STACKPROTECTOR_AUTO, which attempts to discover the best
      possible stack-protector available, and will allow builds to proceed even
      if the compiler doesn't support any stack-protector.
      
      This option is made the default so that kernels built with modern
      compilers will be protected-by-default against stack buffer overflows,
      avoiding things like the recent BlueBorne attack.  Selection of a specific
      stack-protector option remains available, including disabling it.
      
      Additionally, tiny.config is adjusted to use CC_STACK...
      44c6dc94
    • Makefile: move stack-protector availability out of Kconfig · 2bc2f688
      Kees Cook authored
      
      
      Various portions of the kernel, especially per-architecture pieces,
      need to know if the compiler is building with the stack protector.
      This was done in the arch/Kconfig with 'select', but this doesn't
      allow a way to do auto-detected compiler support. In preparation for
      creating an on-if-available default, move the logic for the definition of
      CONFIG_CC_STACKPROTECTOR into the Makefile.
      
      Link: http://lkml.kernel.org/r/1510076320-69931-3-git-send-email-keescook@chromium.org
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2bc2f688
    • Makefile: move stack-protector compiler breakage test earlier · 2b838392
      Kees Cook authored
      
      
      In order to make stack-protector failures warn instead of unconditionally
      breaking the build, this moves the compiler output sanity-check earlier,
      and sets a flag for later testing.  Future patches can choose to warn or
      fail, depending on the flag value.
      
      Link: http://lkml.kernel.org/r/1510076320-69931-2-git-send-email-keescook@chromium.org
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2b838392
    • fs/proc/consoles.c: use seq_putc() in show_console_dev() · 4bf8ba81
      Markus Elfring authored
      
      
      A single character (line break) should be put into a sequence.  Thus use
      the corresponding function "seq_putc".
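
      Illustratively, the transformation looks like this (the exact call site
      inside show_console_dev() is assumed, not quoted):

      ```
      seq_printf(m, "\n");    /* before: printf machinery for a single character */
      seq_putc(m, '\n');      /* after: the dedicated single-character helper    */
      ```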
      
      This issue was detected by using the Coccinelle software.
      
      Link: http://lkml.kernel.org/r/04fb69fe-d820-9141-820f-07e9a48f4635@users.sourceforge.net
      Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4bf8ba81
    • proc: rearrange args · 93ad5bc6
      Alexey Dobriyan authored
      
      
      Rearrange args for smaller code.

      lookup revolves around memcmp(), which takes the length as its 3rd
      argument, so propagate the length as the 3rd argument here as well.

      proc's readdir and lookup take one argument more than the VFS ->readdir
      and ->lookup, so it is better to add that extra argument at the end.
      
      Space savings on x86_64:
      
      	add/remove: 0/0 grow/shrink: 0/2 up/down: 0/-18 (-18)
      	Function                                     old     new   delta
      	proc_readdir                                  22      13      -9
      	proc_lookup                                   18       9      -9
      
      proc_match() is smaller if not inlined, I promise!
      
      Link: http://lkml.kernel.org/r/20180104175958.GB5204@avx2
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      93ad5bc6
    • proc: spread likely/unlikely a bit · 15b158b4
      Alexey Dobriyan authored
      
      
      use_pde() is used at every open/read/write/...  of every random /proc
      file.  Negative refcount happens only if PDE is being deleted by module
      (read: never).  So it gets "likely".
      
      unuse_pde() gets "unlikely" for the same reason.
      
      close_pdeo() gets unlikely as the completion is filled only if there is a
      race between PDE removal and close() (read: never ever).
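
      As a sketch of what such an annotation looks like (a hedged
      reconstruction, not a quoted hunk from the patch):

      ```
      static inline bool use_pde(struct proc_dir_entry *pde)
      {
              /* the refcount only goes negative while the PDE is being removed */
              return likely(atomic_inc_unless_negative(&pde->in_use));
      }
      ```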
      
      It even saves code on x86_64 defconfig:
      
      	add/remove: 0/0 grow/shrink: 1/2 up/down: 2/-20 (-18)
      	Function                                     old     new   delta
      	close_pdeo                                   183     185      +2
      	proc_reg_get_unmapped_area                   119     111      -8
      	proc_reg_poll                                 85      73     -12
      
      Link: http://lkml.kernel.org/r/20180104175657.GA5204@avx2
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      15b158b4
    • fs/proc: use __ro_after_init · efb1a57d
      Alexey Dobriyan authored
      
      
      /proc/self inode numbers, value of proc_inode_cache and st_nlink of
      /proc/$TGID are fixed constants.
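
      Illustratively, the annotation looks like this (the identifier is an
      assumed example, not quoted from the patch):

      ```
      /* written once during init, read-only afterwards */
      static unsigned int self_inum __ro_after_init;
      ```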
      
      Link: http://lkml.kernel.org/r/20180103184707.GA31849@avx2
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      efb1a57d
    • fs/proc/internal.h: fix up comment · 53f63345
      Alexey Dobriyan authored
      
      
      Document what ->pde_unload_lock actually does.
      
      Link: http://lkml.kernel.org/r/20180103185120.GB31849@avx2
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      53f63345
    • fs/proc/internal.h: rearrange struct proc_dir_entry · 163cf548
      Alexey Dobriyan authored
      
      
      struct proc_dir_entry became a bit messy over the years:

      * move the 16-bit ->mode before namelen to get rid of padding
      * make ->in_use the first field: it seems to be the most used, resulting
        in smaller code on x86_64 (defconfig):
      
      	add/remove: 0/0 grow/shrink: 7/13 up/down: 24/-67 (-43)
      	Function                                     old     new   delta
      	proc_readdir_de                              451     455      +4
      	proc_get_inode                               282     286      +4
      	pde_put                                       65      69      +4
      	remove_proc_subtree                          294     297      +3
      	remove_proc_entry                            297     300      +3
      	proc_register                                295     298      +3
      	proc_notify_change                            94      97      +3
      	unuse_pde                                     27      26      -1
      	proc_reg_write                                89      85      -4
      	proc_reg_unlocked_ioctl                       85      81      -4
      	proc_reg_read                                 89      85      -4
      	proc_reg_llseek                               87      83      -4
      	proc_reg_get_unmapped_area                   123     119      -4
      	proc_entry_rundown                           139     135      -4
      	proc_reg_poll                                 91      85      -6
      	proc_reg_mmap                                 79      73      -6
      	proc_get_link                                 55      49      -6
      	proc_reg_release                             108     101      -7
      	proc_reg_open                                298     291      -7
      	close_pdeo                                   228     218     -10
      
      * move writeable fields together into the first cacheline (on x86_64);
        those include
      	* ->in_use: reference count, taken at every open/read/write/close etc.
      	* ->count: reference count, taken at readdir on every entry
      	* ->pde_openers: tracks (nearly) every open, dirtied
      	* ->pde_unload_lock: spinlock protecting ->pde_openers
      	* ->proc_iops, ->proc_fops, ->data: write-once fields,
      	  used right along with the previous group.
      
      * other rarely written fields go into 1st/2nd and 2nd/3rd cacheline on
        32-bit and 64-bit respectively.
      
      Additionally on 32-bit, ->subdir, ->subdir_node, ->namelen, ->name go
      fully into 2nd cacheline, separated from writeable fields.  They are all
      used during lookup.
      
      Link: http://lkml.kernel.org/r/20171220215914.GA7877@avx2
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      163cf548
    • fs/proc/kcore.c: use probe_kernel_read() instead of memcpy() · d0290bc2
      Heiko Carstens authored
      Commit df04abfd ("fs/proc/kcore.c: Add bounce buffer for ktext
      data") added a bounce buffer to avoid hardened usercopy checks.  Copying
      to the bounce buffer was implemented with a simple memcpy() assuming
      that it is always valid to read from kernel memory iff the
      kern_addr_valid() check passed.
      
      A simple, but pointless, test case like "dd if=/proc/kcore of=/dev/null"
      can now easily crash the kernel, since the former exception handling for
      invalid kernel addresses doesn't work anymore.
      
      Also adding a kern_addr_valid() implementation wouldn't help here.  Most
      architectures simply return 1 here, while a couple implemented a page
      table walk to figure out if something is mapped at the address in
      question.
      
      With DEBUG_PAGEALLOC active, mappings are established and removed all the
      time, so relying on the result of kern_addr_valid() before executing the
      memcpy() also doesn't work.

      Therefore simply use probe_kernel_read() to copy to the bounce buffer.
      This also allows us to simplify read_kcore().
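
      A sketch of the resulting copy path (variable names such as buf, buffer,
      start and tsz are assumed; this is a simplification, not the patch
      itself):

      ```
      if (probe_kernel_read(buf, (void *)start, tsz)) {
              /* unreadable kernel address: hand userspace zeroes instead */
              if (clear_user(buffer, tsz))
                      return -EFAULT;
      } else {
              if (copy_to_user(buffer, buf, tsz))
                      return -EFAULT;
      }
      ```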
      
      At least on s390 this fixes the observed crashes and doesn't introduce
      warnings that were removed with df04abfd ("fs/proc/kcore.c: Add
      bounce buffer for ktext data"), even though the generic
      probe_kernel_read() implementation uses uaccess functions.
      
      While looking into this I'm also wondering if kern_addr_valid() could be
      completely removed...(?)
      
      Link: http://lkml.kernel.org/r/20171202132739.99971-1-heiko.carstens@de.ibm.com
      Fixes: df04abfd ("fs/proc/kcore.c: Add bounce buffer for ktext data")
      Fixes: f5509cc1 ("mm: Hardened usercopy")
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Al Viro <viro@ZenIV.linux.org.uk>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d0290bc2
    • fs/proc/array.c: delete children_seq_release() · 171ef917
      Alexey Dobriyan authored
      
      
      It is a 1:1 wrapper around seq_release().
      
      Link: http://lkml.kernel.org/r/20171122171510.GA12161@avx2
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Acked-by: Cyrill Gorcunov <gorcunov@openvz.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      171ef917
    • proc: less memory for /proc/*/map_files readdir · 20d28cde
      Alexey Dobriyan authored
      
      
      The dentry name can be evaluated later, right before calling into the VFS.

      Also, spend less time under ->mmap_sem.
      
      Link: http://lkml.kernel.org/r/20171110163034.GA2534@avx2
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      20d28cde
    • fs/proc/vmcore.c: simpler /proc/vmcore cleanup · 593bc695
      Alexey Dobriyan authored
      
      
      Iterators aren't necessary: just grab the first entry and delete it until
      no entries are left.
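
      The simplified teardown loop then has roughly this shape (list and member
      names assumed):

      ```
      while (!list_empty(&vmcore_list)) {
              struct vmcore *m = list_first_entry(&vmcore_list, struct vmcore, list);

              list_del(&m->list);
              kfree(m);
      }
      ```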
      
      Link: http://lkml.kernel.org/r/20171121191121.GA20757@avx2
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      593bc695
    • proc: fix /proc/*/map_files lookup · ac7f1061
      Alexey Dobriyan authored
      
      
      Current code does:
      
      	if (sscanf(dentry->d_name.name, "%lx-%lx", start, end) != 2)
      
      However sscanf() is broken garbage.
      
      It silently accepts whitespace between format specifiers
      (did you know that?).
      
      It silently accepts valid strings which result in integer overflow.
      
      Do not use sscanf() for any even remotely reliable parsing code.
      
      	OK
      	# readlink '/proc/1/map_files/55a23af39000-55a23b05b000'
      	/lib/systemd/systemd
      
      	broken
      	# readlink '/proc/1/map_files/               55a23af39000-55a23b05b000'
      	/lib/systemd/systemd
      
      	broken
      	# readlink '/proc/1/map_files/55a23af39000-55a23b05b000    '
      	/lib/systemd/systemd
      
      	very broken
      	# readlink '/proc/1/map_files/1000000000000000055a23af39000-55a23b05b000'
      	/lib/systemd/systemd
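
      Purely for illustration (this is not the patch's code), a stricter parse
      of such a name with kernel string helpers might look like this;
      kstrtoul() rejects both stray whitespace and overflow:

      ```
      static int parse_range(const char *name, unsigned long *start, unsigned long *end)
      {
              char buf[2 * 16 + 2];   /* "start-end" in hex plus NUL */
              char *dash;

              if (strscpy(buf, name, sizeof(buf)) < 0)
                      return -EINVAL;
              dash = strchr(buf, '-');
              if (!dash)
                      return -EINVAL;
              *dash = '\0';
              if (kstrtoul(buf, 16, start) || kstrtoul(dash + 1, 16, end))
                      return -EINVAL;
              return 0;
      }
      ```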
      
      Andrei said:
      
      : This patch breaks criu.  It was a bug in criu.  And this bug is on a minor
      : path, which works when memfd_create() isn't available.  It is a reason why
      : I ask to not backport this patch to stable kernels.
      :
      : In CRIU this bug can be triggered, only if this patch will be backported
      : to a kernel which version is lower than v3.16.
      
      Link: http://lkml.kernel.org/r/20171120212706.GA14325@avx2
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Pavel Emelyanov <xemul@openvz.org>
      Cc: Andrei Vagin <avagin@virtuozzo.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ac7f1061
    • proc: don't use READ_ONCE/WRITE_ONCE for /proc/*/fail-nth · 9f7118b2
      Alexey Dobriyan authored
      
      
      READ_ONCE and WRITE_ONCE are useless when only a single read or write
      is being made.
      
      Link: http://lkml.kernel.org/r/20171120204033.GA9446@avx2
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Akinobu Mita <akinobu.mita@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9f7118b2
    • proc: use %u for pid printing and slightly less stack · e3912ac3
      Alexey Dobriyan authored
      
      
      PROC_NUMBUF is 13 which is enough for "negative int + \n + \0".
      
      However PIDs and TGIDs are never negative and newline is not a concern,
      so use just 10 per integer.
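
      Illustrative arithmetic only (the buffer name is assumed): the largest
      positive int, 2147483647, has 10 digits, so 10 bytes plus a NUL are
      enough for a non-negative pid printed with %u:

      ```
      char name[10 + 1];

      snprintf(name, sizeof(name), "%u", 2147483647u);   /* "2147483647" */
      ```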
      
      Link: http://lkml.kernel.org/r/20171120203005.GA27743@avx2
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Alexander Viro <viro@ftp.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e3912ac3
    • kasan: remove redundant initialization of variable 'real_size' · 48c23239
      Colin Ian King authored
      
      
      The variable real_size is initialized with a value that is never read; it
      is re-assigned later on, so the initialization is redundant and can be
      removed.
      
      Cleans up clang warning:
      
        lib/test_kasan.c:422:21: warning: Value stored to 'real_size' during its initialization is never read
      
      Link: http://lkml.kernel.org/r/20180206144950.32457-1-colin.king@canonical.com
      Signed-off-by: Colin Ian King <colin.king@canonical.com>
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      48c23239
    • kasan: clean up KASAN_SHADOW_SCALE_SHIFT usage · 917538e2
      Andrey Konovalov authored
      
      
      Right now the fact that KASAN uses a single shadow byte for 8 bytes of
      memory is scattered all over the code.
      
      This change defines KASAN_SHADOW_SCALE_SHIFT early in asm include files
      and makes use of this constant where necessary.
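
      For reference, this is the address-to-shadow mapping that the constant
      expresses (a sketch in the spirit of kasan_mem_to_shadow();
      KASAN_SHADOW_OFFSET is arch-specific):

      ```
      #define KASAN_SHADOW_SCALE_SHIFT 3  /* one shadow byte per 8 bytes of memory */

      static inline void *kasan_mem_to_shadow(const void *addr)
      {
              return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                      + KASAN_SHADOW_OFFSET;
      }
      ```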
      
      [akpm@linux-foundation.org: coding-style fixes]
      Link: http://lkml.kernel.org/r/34937ca3b90736eaad91b568edf5684091f662e3.1515775666.git.andreyknvl@google.com
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      917538e2
    • kasan: fix prototype author email address · 5f21f3a8
      Andrey Konovalov authored
      
      
      Use the new one.
      
      Link: http://lkml.kernel.org/r/de3b7ffc30a55178913a7d3865216aa7accf6c40.1515775666.git.andreyknvl@google.com
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5f21f3a8
    • kasan: detect invalid frees · b1d57289
      Dmitry Vyukov authored
      
      
      Detect frees of pointers into the middle of heap objects.
      
      Link: http://lkml.kernel.org/r/cb569193190356beb018a03bb8d6fbae67e7adbc.1514378558.git.dvyukov@google.com
      Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b1d57289
    • kasan: unify code between kasan_slab_free() and kasan_poison_kfree() · 1db0e0f9
      Dmitry Vyukov authored
      
      
      Both of these functions deal with freeing slab objects.  However,
      kasan_poison_kfree() mishandles SLAB_TYPESAFE_BY_RCU (it must also not
      poison such objects) and does not detect double-frees.

      Unify the code between these functions.

      This solves both problems and allows adding more common code
      (e.g. detection of invalid frees).
      
      Link: http://lkml.kernel.org/r/385493d863acf60408be219a021c3c8e27daa96f.1514378558.git.dvyukov@google.com
      Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1db0e0f9
    • kasan: detect invalid frees for large mempool objects · 6860f634
      Dmitry Vyukov authored
      
      
      Detect frees of pointers into the middle of mempool objects.
      
      I did write a one-off test for this, but it turned out to be very tricky,
      so I reverted it.  First, mempool does not call kasan_poison_kfree()
      unless the allocation function fails.  I stubbed out an allocation
      function to fail on the second and subsequent allocations.  But then
      mempool stopped calling kasan_poison_kfree() at all, because it only does
      so when the allocation function is mempool_kmalloc().  We could support
      this special failing test allocation function in mempool, but it also
      can't live with the kasan tests, because those are built as a module.
      
      Link: http://lkml.kernel.org/r/bf7a7d035d7a5ed62d2dd0e3d2e8a4fcdf456aa7.1514378558.git.dvyukov@google.com
      Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6860f634
    • kasan: don't use __builtin_return_address(1) · ee3ce779
      Dmitry Vyukov authored
      
      
      __builtin_return_address(1) is unreliable without frame pointers.
      With defconfig on kmalloc_pagealloc_invalid_free test I am getting:
      
      BUG: KASAN: double-free or invalid-free in           (null)
      
      Pass caller PC from callers explicitly.
      
      Link: http://lkml.kernel.org/r/9b01bc2d237a4df74ff8472a3bf6b7635908de01.1514378558.git.dvyukov@google.com
      Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ee3ce779
    • kasan: detect invalid frees for large objects · 47adccce
      Dmitry Vyukov authored
      
      
      Patch series "kasan: detect invalid frees".
      
      KASAN detects double-frees, but does not detect invalid-frees (when a
      pointer into the middle of a heap object is passed to free).  We recently
      had a very unpleasant case in crypto code which freed an inner object
      inside of a heap allocation.  This went unnoticed during free, but
      totally corrupted the heap and later led to a bunch of random crashes all
      over the kernel code.
      
      Detect invalid frees.
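
      An invalid free of the kind this series is meant to catch looks,
      schematically, like this (kernel-style sketch, not a quoted test case):

      ```
      char *p = kmalloc(64, GFP_KERNEL);

      kfree(p + 16);  /* pointer into the middle of the object: invalid-free */
      ```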
      
      This patch (of 5):
      
      Detect frees of pointers into the middle of large heap objects.
      
      I dropped const from kasan_kfree_large() because it starts propagating
      through a bunch of functions in kasan_report.c, slab/slub nearest_obj(),
      all of their local variables, fixup_red_left(), etc.
      
      Link: http://lkml.kernel.org/r/1b45b4fe1d20fc0de1329aab674c1dd973fee723.1514378558.git.dvyukov@google.com
      Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      47adccce
    • kasan: add functions for unpoisoning stack variables · d321599c
      Alexander Potapenko authored
      
      
      As a code-size optimization, LLVM builds since r279383 may bulk-manipulate
      the shadow region when (un)poisoning large memory blocks.  This requires
      new callbacks that simply do an uninstrumented memset().
      
      This fixes linking the Clang-built kernel when using KASAN.
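
      One of the new callbacks has roughly this shape (the symbol name follows
      the LLVM convention mentioned above; the body is an assumption based on
      the description, i.e. an uninstrumented memset over the shadow range):

      ```
      void __asan_set_shadow_00(const void *addr, size_t size)
      {
              __memset((void *)addr, 0x00, size);
      }
      ```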
      
      [arnd@arndb.de: add declarations for internal functions]
        Link: http://lkml.kernel.org/r/20180105094112.2690475-1-arnd@arndb.de
      [fengguang.wu@intel.com: __asan_set_shadow_00 can be static]
        Link: http://lkml.kernel.org/r/20171223125943.GA74341@lkp-ib03
      [ghackmann@google.com: fix memset() parameters, and tweak commit message to describe new callbacks]
      Link: http://lkml.kernel.org/r/20171204191735.132544-6-paullawrence@google.com
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Signed-off-by: Greg Hackmann <ghackmann@google.com>
      Signed-off-by: Paul Lawrence <paullawrence@google.com>
      Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Matthias Kaehlcke <mka@chromium.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d321599c