Nov 07, 2021
    • memcg: flush stats only if updated · 11192d9c
      Shakeel Butt authored
      At the moment, the kernel flushes the memcg stats on every refault and
      also on every reclaim iteration.  Although rstat maintains a per-cpu
      update tree, on flush the kernel still has to go through every cpu's
      rstat update tree to check whether there is anything to flush.  This
      patch adds tracking on the stats update side to make the flush side
      more clever, skipping the flush if there is no update.
      
      The stats update codepath is very performance sensitive for many
      workloads and benchmarks.  So we cannot follow what commit
      aa48e47e ("memcg: infrastructure to flush memcg stats") did, which
      was to trigger an async flush through queue_work() and caused a lot
      of performance regression reports.  That got reverted by commit
      1f828223 ("memcg: flush lruvec stats in the refault").
      
      This patch keeps the stats update codepath very minimal and lets the
      stats reader side flush the stats only when the updates exceed a
      specific threshold.  For now the threshold is (nr_cpus * CHARGE_BATCH),
      as sketched below.
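
      A minimal sketch of the scheme (illustrative only; the helper names
      below are simplified stand-ins, not the patch's exact code, and
      MEMCG_CHARGE_BATCH stands for the CHARGE_BATCH above):

        /* Write side stays minimal: just bump a counter, no locks or work
         * queues.  Readers walk the per-cpu rstat trees only once enough
         * updates have accumulated. */
        static atomic_t stats_updates = ATOMIC_INIT(0);

        static inline void memcg_stats_updated(void)    /* hot update path */
        {
                atomic_inc(&stats_updates);
        }

        static void memcg_flush_stats_if_needed(void)   /* refault/reclaim path */
        {
                /* Skip the expensive flush below the threshold. */
                if (atomic_read(&stats_updates) <=
                    num_online_cpus() * MEMCG_CHARGE_BATCH)
                        return;
                atomic_set(&stats_updates, 0);
                cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
        }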
      
      To evaluate the impact of this patch, an 8 GiB tmpfs file was created
      on a system with swap-on-zram and pushed to swap through the
      memory.force_empty interface.  Reading the whole file back triggers
      the memcg stat flush in the refault code path.  With this patch, we
      observed a 63% reduction in the read time of the 8 GiB file.
      
      Link: https://lkml.kernel.org/r/20211001190040.48086-1-shakeelb@google.com
      Signed-off-by: Shakeel Butt <shakeelb@google.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Reviewed-by: "Michal Koutný" <mkoutny@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/memcg: drop swp_entry_t* in mc_handle_file_pte() · 48384b0b
      Peter Xu authored
      It is unused after the rework of commit f5df8635 ("mm: use
      find_get_incore_page in memcontrol").
      
      Link: https://lkml.kernel.org/r/20210916193014.80129-1-peterx@redhat.com
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Muchun Song <songmuchun@bytedance.com>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: optimise put_pages_list() · 988c69f1
      Matthew Wilcox (Oracle) authored
      
      
      Instead of calling put_page() one page at a time, pop pages off the list
      if their refcount was too high and pass the remainder to
      put_unref_page_list().  This should be a speed improvement, but I have
      no measurements to support that.  Current callers do not care about
      performance, but I hope to add some which do.
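
      A sketch of that shape (put_unref_page_list() is the batch helper
      named above; the actual kernel code may differ in details such as
      compound-page handling):

        void put_pages_list(struct list_head *pages)
        {
                struct page *page, *next;

                list_for_each_entry_safe(page, next, pages, lru) {
                        if (!put_page_testzero(page)) {
                                /* Refcount is still elevated, so this page
                                 * must not be freed: pop it off the list. */
                                list_del(&page->lru);
                                continue;
                        }
                }
                put_unref_page_list(pages);  /* free the remainder in one batch */
                INIT_LIST_HEAD(pages);
        }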
      
      Link: https://lkml.kernel.org/r/20211007192138.561673-1-willy@infradead.org
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Reviewed-by: Anthony Yznaga <anthony.yznaga@oracle.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/swapfile: fix an integer overflow in swap_show() · 642929a2
      Rafael Aquini authored
      
      
      This one is just a minor nuisance for people going through /proc/swaps
      if any of their swap areas is bigger than or equal to 1073741824 pages
      (4TB).
      
      The seq_printf() format string casts the pages-to-KB conversion to
      uint, and that will overflow in the aforementioned case.
      
      Although it is almost unthinkable that someone would actually set up
      such a big single swap area, a ticket was recently filed against RHEL:
      https://bugzilla.redhat.com/show_bug.cgi?id=2008812
      
      Given that all other code sites that format the same swap pages-to-KB
      conversion cast it to ulong, this patch just follows suit.
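
      A standalone illustration of the overflow (my own example, not part
      of the patch):

        #include <stdio.h>

        int main(void)
        {
                unsigned long pages = 1UL << 30;        /* 4TB of 4KiB pages */

                /* pages-to-KB is pages << 2; truncated to uint it wraps to 0 */
                printf("as uint:  %u KB\n", (unsigned int)(pages << 2));
                printf("as ulong: %lu KB\n", pages << 2);
                return 0;
        }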
      
      Link: https://lkml.kernel.org/r/20211006184011.2579054-1-aquini@redhat.com
      Signed-off-by: Rafael Aquini <aquini@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/swapfile: remove needless request_queue NULL pointer check · 363dc512
      vulab authored
      
      
      The request_queue pointer returned from bdev_get_queue() shall never
      be NULL, so the NULL check is unnecessary; just remove it.
      
      Link: https://lkml.kernel.org/r/20210917082111.33923-1-vulab@iscas.ac.cn
      Signed-off-by: Xu Wang <vulab@iscas.ac.cn>
      Acked-by: David Hildenbrand <david@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/gup: further simplify __gup_device_huge() · 20b7fee7
      John Hubbard authored
      Commit 6401c4eb ("mm: gup: fix potential pgmap refcnt leak in
      __gup_device_huge()") simplified the return paths, but didn't go quite
      far enough, as discussed in [1].
      
      Remove the "ret" variable entirely, because there is enough information
      already available to provide the return value.
      
      [1] https://lore.kernel.org/r/CAHk-=wgQTRX=5SkCmS+zfmpqubGHGJvXX_HgnPG8JSpHKHBMeg@mail.gmail.com
      
      Link: https://lkml.kernel.org/r/20210904004224.86391-1-jhubbard@nvidia.com
      Signed-off-by: John Hubbard <jhubbard@nvidia.com>
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Miaohe Lin <linmiaohe@huawei.com>
      Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: move more expensive part of XA setup out of mapping check · f8ee8909
      Jens Axboe authored
      
      
      The fast path here is not needing any writeback, yet we spend time
      setting up the xarray lookup data upfront.  Move the part that actually
      needs to iterate the address space mapping into a separate helper,
      saving ~30% of the time here.
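
      A sketch of the split (the helper naming here is illustrative, not
      the verbatim diff): the cheap checks stay in the caller, and the
      XA_STATE setup plus walk only happen once they indicate there may
      be work.

        /* Slow path: only here do we pay for setting up the xarray walk. */
        static bool filemap_range_has_writeback(struct address_space *mapping,
                                                loff_t start_byte, loff_t end_byte)
        {
                XA_STATE(xas, &mapping->i_pages, start_byte >> PAGE_SHIFT);
                pgoff_t max = end_byte >> PAGE_SHIFT;
                struct page *page;

                rcu_read_lock();
                xas_for_each(&xas, page, max) {
                        if (xas_retry(&xas, page) || xa_is_value(page))
                                continue;
                        if (PageDirty(page) || PageWriteback(page))
                                break;
                }
                rcu_read_unlock();
                return page != NULL;
        }

        static bool filemap_range_needs_writeback(struct address_space *mapping,
                                                  loff_t start_byte, loff_t end_byte)
        {
                /* Fast path: nothing dirty or in writeback, no xarray setup. */
                if (!mapping_needs_writeback(mapping))
                        return false;
                return filemap_range_has_writeback(mapping, start_byte, end_byte);
        }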
      
      Link: https://lkml.kernel.org/r/49f67983-b802-8929-edab-d807f745c9ca@kernel.dk
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Cc: Matthew Wilcox <willy@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/filemap.c: remove bogus VM_BUG_ON · d417b49f
      Matthew Wilcox (Oracle) authored
      It is not safe to check page->index without holding the page lock.  It
      can be changed if the page is moved between the swap cache and the page
      cache for a shmem file, for example.  There is a VM_BUG_ON below which
      checks page->index is correct after taking the page lock.
      
      Link: https://lkml.kernel.org/r/20210818144932.940640-1-willy@infradead.org
      Fixes: 5c211ba2 ("mm: add and use find_lock_entries")
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Reported-by: <syzbot+c87be4f669d920c76330@syzkaller.appspotmail.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: don't read i_size of inode unless we need it · 61d0017e
      Jens Axboe authored
      
      
      We always go through i_size_read(), and we rarely end up needing it.
      Push the read down to where we need to check it, which avoids it in
      most cases.
      
      It looks like we can even remove this check entirely, which might be
      worth pursuing.  But at least this takes it out of the hot path.
      
      Link: https://lkml.kernel.org/r/6b67981f-57d4-c80e-bc07-6020aa601381@kernel.dk
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Acked-by: Chris Mason <clm@fb.com>
      Cc: Josef Bacik <josef@redhat.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Pavel Begunkov <asml.silence@gmail.com>
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: simplify bdi refcounting · efee1713
      Christoph Hellwig authored
      
      
      Move grabbing and releasing the bdi refcount out of the common
      wb_init/wb_exit helpers into code that is only used for the non-default
      memcg driven bdi_writeback structures.
      
      [hch@lst.de: add comment]
        Link: https://lkml.kernel.org/r/20211027074207.GA12793@lst.de
      [akpm@linux-foundation.org: fix typo]
      
      Link: https://lkml.kernel.org/r/20211021124441.668816-6-hch@lst.de
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Miquel Raynal <miquel.raynal@bootlin.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Vignesh Raghavendra <vigneshr@ti.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: don't automatically unregister bdis · 702f2d1e
      Christoph Hellwig authored
      
      
      All BDI users now unregister explicitly.
      
      Link: https://lkml.kernel.org/r/20211021124441.668816-5-hch@lst.de
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Miquel Raynal <miquel.raynal@bootlin.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Vignesh Raghavendra <vigneshr@ti.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • fs: explicitly unregister per-superblock BDIs · 0b3ea092
      Christoph Hellwig authored
      
      
      Add a new SB_I_ flag to mark superblocks that have an ephemeral bdi
      associated with them, and unregister it when the superblock is shut
      down.
      
      Link: https://lkml.kernel.org/r/20211021124441.668816-4-hch@lst.de
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Miquel Raynal <miquel.raynal@bootlin.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Vignesh Raghavendra <vigneshr@ti.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mtd: call bdi_unregister explicitly · 9718c59c
      Christoph Hellwig authored
      
      
      Call bdi_unregister explicitly instead of relying on the automatic
      unregistration.
      
      Link: https://lkml.kernel.org/r/20211021124441.668816-3-hch@lst.de
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Miquel Raynal <miquel.raynal@bootlin.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Vignesh Raghavendra <vigneshr@ti.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: export bdi_unregister · c6fd3ac0
      Christoph Hellwig authored
      
      
      Patch series "simplify bdi unregistration".
      
      This series simplifies the BDI code to get rid of the magic
      auto-unregister feature that hid a recent block layer refcounting bug.
      
      This patch (of 5):
      
      To wind down the magic auto-unregister semantics we'll need to push this
      into modular code.
      
      Link: https://lkml.kernel.org/r/20211021124441.668816-1-hch@lst.de
      Link: https://lkml.kernel.org/r/20211021124441.668816-2-hch@lst.de
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Miquel Raynal <miquel.raynal@bootlin.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Vignesh Raghavendra <vigneshr@ti.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: stop filemap_read() from grabbing a superfluous page · 8c8387ee
      David Howells authored
      
      
      Under some circumstances, filemap_read() will allocate sufficient pages
      to read to the end of the file, call readahead/readpages on them and
      copy the data over, and then it will allocate another page at the EOF,
      call readpage on that, and ignore it.  This is unnecessary and a waste
      of time and resources.
      
      filemap_read() *does* check for this, but only after it has already
      done the allocation and I/O.  Fix this by also checking before calling
      filemap_get_pages(), as sketched below.
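
      A sketch of the added early check (names as in mm/filemap.c;
      placement per the description above, not the verbatim diff):

        /* In filemap_read()'s main loop, before filemap_get_pages(): if the
         * position is already at or past EOF there is nothing left to read,
         * so don't allocate a page and issue I/O just to ignore the result. */
        if (unlikely(iocb->ki_pos >= i_size_read(inode)))
                break;

        error = filemap_get_pages(iocb, iter, &pvec);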
      
      Link: https://lkml.kernel.org/r/163472463105.3126792.7056099385135786492.stgit@warthog.procyon.org.uk
      Link: https://lore.kernel.org/r/160588481358.3465195.16552616179674485179.stgit@warthog.procyon.org.uk/
      Link: https://lore.kernel.org/r/163456863216.2614702.6384850026368833133.stgit@warthog.procyon.org.uk/
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Jeff Layton <jlayton@kernel.org>
      Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Kent Overstreet <kent.overstreet@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/page_ext.c: fix a comment · d1fea155
      Yinan Zhang authored
      
      
      I have noticed that the preceding macro guard is #ifndef
      CONFIG_SPARSEMEM.  The comment on the matching #else should therefore
      be CONFIG_SPARSEMEM, as sketched below.
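
      I.e., the corrected conditional reads (a minimal sketch):

        #ifndef CONFIG_SPARSEMEM
        /* flatmem implementation */
        #else /* CONFIG_SPARSEMEM */
        /* sparsemem implementation */
        #endif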
      
      Link: https://lkml.kernel.org/r/20211008140312.6492-1-zhangyinan2019@email.szu.edu.cn
      Signed-off-by: Yinan Zhang <zhangyinan2019@email.szu.edu.cn>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • percpu: add __alloc_size attributes for better bounds checking · 17197dd4
      Kees Cook authored
      
      
      As already done in GrapheneOS, add the __alloc_size attribute for
      appropriate percpu allocator interfaces, to provide additional hinting
      for better bounds checking, assisting CONFIG_FORTIFY_SOURCE and other
      compiler optimizations.
      
      Note that due to the implementation of the percpu API, this is unlikely
      to ever actually provide compile-time checking beyond very simple
      non-SMP builds.  But, since they are technically allocators, mark them
      as such.
      
      Link: https://lkml.kernel.org/r/20210930222704.2631604-9-keescook@chromium.org
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Co-developed-by: Daniel Micay <danielmicay@gmail.com>
      Signed-off-by: Daniel Micay <danielmicay@gmail.com>
      Acked-by: Dennis Zhou <dennis@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Andy Whitcroft <apw@canonical.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Dwaipayan Ray <dwaipayanray1@gmail.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Lukas Bulwahn <lukas.bulwahn@gmail.com>
      Cc: Miguel Ojeda <ojeda@kernel.org>
      Cc: Nathan Chancellor <nathan@kernel.org>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Alexandre Bounine <alex.bou9@gmail.com>
      Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
      Cc: Ira Weiny <ira.weiny@intel.com>
      Cc: Jing Xiangfeng <jingxiangfeng@huawei.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: kernel test robot <lkp@intel.com>
      Cc: Matt Porter <mporter@kernel.crashing.org>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Souptick Joarder <jrdr.linux@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/page_alloc: add __alloc_size attributes for better bounds checking · abd58f38
      Kees Cook authored
      
      
      As already done in GrapheneOS, add the __alloc_size attribute for
      appropriate page allocator interfaces, to provide additional hinting for
      better bounds checking, assisting CONFIG_FORTIFY_SOURCE and other
      compiler optimizations.
      
      Link: https://lkml.kernel.org/r/20210930222704.2631604-8-keescook@chromium.org
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Co-developed-by: Daniel Micay <danielmicay@gmail.com>
      Signed-off-by: Daniel Micay <danielmicay@gmail.com>
      Cc: Andy Whitcroft <apw@canonical.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Dwaipayan Ray <dwaipayanray1@gmail.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Lukas Bulwahn <lukas.bulwahn@gmail.com>
      Cc: Miguel Ojeda <ojeda@kernel.org>
      Cc: Nathan Chancellor <nathan@kernel.org>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Alexandre Bounine <alex.bou9@gmail.com>
      Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
      Cc: Ira Weiny <ira.weiny@intel.com>
      Cc: Jing Xiangfeng <jingxiangfeng@huawei.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: kernel test robot <lkp@intel.com>
      Cc: Matt Porter <mporter@kernel.crashing.org>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Souptick Joarder <jrdr.linux@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/vmalloc: add __alloc_size attributes for better bounds checking · 894f24bb
      Kees Cook authored
      
      
      As already done in GrapheneOS, add the __alloc_size attribute for
      appropriate vmalloc allocator interfaces, to provide additional hinting
      for better bounds checking, assisting CONFIG_FORTIFY_SOURCE and other
      compiler optimizations.
      
      Link: https://lkml.kernel.org/r/20210930222704.2631604-7-keescook@chromium.org
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Co-developed-by: Daniel Micay <danielmicay@gmail.com>
      Signed-off-by: Daniel Micay <danielmicay@gmail.com>
      Cc: Andy Whitcroft <apw@canonical.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Dwaipayan Ray <dwaipayanray1@gmail.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Lukas Bulwahn <lukas.bulwahn@gmail.com>
      Cc: Miguel Ojeda <ojeda@kernel.org>
      Cc: Nathan Chancellor <nathan@kernel.org>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Alexandre Bounine <alex.bou9@gmail.com>
      Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
      Cc: Ira Weiny <ira.weiny@intel.com>
      Cc: Jing Xiangfeng <jingxiangfeng@huawei.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: kernel test robot <lkp@intel.com>
      Cc: Matt Porter <mporter@kernel.crashing.org>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Souptick Joarder <jrdr.linux@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/kvmalloc: add __alloc_size attributes for better bounds checking · 56bcf40f
      Kees Cook authored
      
      
      As already done in GrapheneOS, add the __alloc_size attribute for
      regular kvmalloc interfaces, to provide additional hinting for better
      bounds checking, assisting CONFIG_FORTIFY_SOURCE and other compiler
      optimizations.
      
      Link: https://lkml.kernel.org/r/20210930222704.2631604-6-keescook@chromium.org
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Co-developed-by: Daniel Micay <danielmicay@gmail.com>
      Signed-off-by: Daniel Micay <danielmicay@gmail.com>
      Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andy Whitcroft <apw@canonical.com>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Dwaipayan Ray <dwaipayanray1@gmail.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: Lukas Bulwahn <lukas.bulwahn@gmail.com>
      Cc: Miguel Ojeda <ojeda@kernel.org>
      Cc: Nathan Chancellor <nathan@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Alexandre Bounine <alex.bou9@gmail.com>
      Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
      Cc: Ira Weiny <ira.weiny@intel.com>
      Cc: Jing Xiangfeng <jingxiangfeng@huawei.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: kernel test robot <lkp@intel.com>
      Cc: Matt Porter <mporter@kernel.crashing.org>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Souptick Joarder <jrdr.linux@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • slab: add __alloc_size attributes for better bounds checking · c37495d6
      Kees Cook authored
      
      
      As already done in GrapheneOS, add the __alloc_size attribute for
      regular kmalloc interfaces, to provide additional hinting for better
      bounds checking, assisting CONFIG_FORTIFY_SOURCE and other compiler
      optimizations.
      
      Link: https://lkml.kernel.org/r/20210930222704.2631604-5-keescook@chromium.org
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Co-developed-by: Daniel Micay <danielmicay@gmail.com>
      Signed-off-by: Daniel Micay <danielmicay@gmail.com>
      Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andy Whitcroft <apw@canonical.com>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Dwaipayan Ray <dwaipayanray1@gmail.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: Lukas Bulwahn <lukas.bulwahn@gmail.com>
      Cc: Miguel Ojeda <ojeda@kernel.org>
      Cc: Nathan Chancellor <nathan@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Alexandre Bounine <alex.bou9@gmail.com>
      Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
      Cc: Ira Weiny <ira.weiny@intel.com>
      Cc: Jing Xiangfeng <jingxiangfeng@huawei.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: kernel test robot <lkp@intel.com>
      Cc: Matt Porter <mporter@kernel.crashing.org>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Souptick Joarder <jrdr.linux@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • slab: clean up function prototypes · 72d67229
      Kees Cook authored
      
      
      Based on feedback from Joe Perches and Linus Torvalds, regularize the
      slab function prototypes before making attribute changes.
      
      Link: https://lkml.kernel.org/r/20210930222704.2631604-4-keescook@chromium.org
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Alexandre Bounine <alex.bou9@gmail.com>
      Cc: Andy Whitcroft <apw@canonical.com>
      Cc: Daniel Micay <danielmicay@gmail.com>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Dwaipayan Ray <dwaipayanray1@gmail.com>
      Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
      Cc: Ira Weiny <ira.weiny@intel.com>
      Cc: Jing Xiangfeng <jingxiangfeng@huawei.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: kernel test robot <lkp@intel.com>
      Cc: Lukas Bulwahn <lukas.bulwahn@gmail.com>
      Cc: Matt Porter <mporter@kernel.crashing.org>
      Cc: Miguel Ojeda <ojeda@kernel.org>
      Cc: Nathan Chancellor <nathan@kernel.org>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Souptick Joarder <jrdr.linux@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Compiler Attributes: add __alloc_size() for better bounds checking · 86cffecd
      Kees Cook authored
      
      
      GCC and Clang can use the "alloc_size" attribute to better inform the
      results of __builtin_object_size() (for compile-time constant values).
      Clang can additionally use alloc_size to inform the results of
      __builtin_dynamic_object_size() (for run-time values).
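
      A toy example of the effect (my own, not from the series; the one-line
      __alloc_size stand-in below is a simplification of what this series
      adds to the compiler-attribute headers):

        #include <stdlib.h>

        #define __alloc_size(x) __attribute__((alloc_size(x)))

        void *my_alloc(size_t size) __alloc_size(1);
        void *my_alloc(size_t size) { return malloc(size); }

        size_t query(void)
        {
                char *p = my_alloc(16);
                /* With the attribute this folds to 16 at -O2; without it
                 * the compiler answers (size_t)-1, i.e. "unknown". */
                size_t sz = __builtin_object_size(p, 0);

                free(p);
                return sz;
        }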
      
      Because GCC sees the frequent use of struct_size() as an allocator size
      argument, and notices it can return SIZE_MAX (the overflow indication),
      it complains about these call sites overflowing (since SIZE_MAX is
      greater than the default -Walloc-size-larger-than=PTRDIFF_MAX).  This
      isn't helpful since we already know a SIZE_MAX will be caught at
      run-time (this was an intentional design).  To deal with this, we must
      disable this check as it is both a false positive and redundant.  (Clang
      does not have this warning option.)
      
      Unfortunately, just checking the -Wno-alloc-size-larger-than is not
      sufficient to make the __alloc_size attribute behave correctly under
      older GCC versions.  The attribute itself must be disabled in those
      situations too, as there appears to be no way to reliably silence the
      SIZE_MAX constant expression cases for GCC versions less than 9.1:
      
         In file included from ./include/linux/resource_ext.h:11,
                          from ./include/linux/pci.h:40,
                          from drivers/net/ethernet/intel/ixgbe/ixgbe.h:9,
                          from drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c:4:
         In function 'kmalloc_node',
             inlined from 'ixgbe_alloc_q_vector' at ./include/linux/slab.h:743:9:
         ./include/linux/slab.h:618:9: error: argument 1 value '18446744073709551615' exceeds maximum object size 9223372036854775807 [-Werror=alloc-size-larger-than=]
           return __kmalloc_node(size, flags, node);
                  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
         ./include/linux/slab.h: In function 'ixgbe_alloc_q_vector':
         ./include/linux/slab.h:455:7: note: in a call to allocation function '__kmalloc_node' declared here
          void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_slab_alignment __malloc;
                ^~~~~~~~~~~~~~
      
      Specifically:
       '-Wno-alloc-size-larger-than' is not correctly handled by GCC < 9.1
          https://godbolt.org/z/hqsfG7q84 (doesn't disable)
          https://godbolt.org/z/P9jdrPTYh (doesn't admit to not knowing about option)
          https://godbolt.org/z/465TPMWKb (only warns when other warnings appear)
      
       '-Walloc-size-larger-than=18446744073709551615' is not handled by GCC < 8.2
          https://godbolt.org/z/73hh1EPxz (ignores numeric value)
      
      Since anything marked with __alloc_size would also qualify for marking
      with __malloc, just include __malloc along with it to avoid redundant
      markings.  (Suggested by Linus Torvalds.)
      
      Finally, make sure checkpatch.pl doesn't get confused about finding the
      __alloc_size attribute on functions.  (Thanks to Joe Perches.)
      
      Link: https://lkml.kernel.org/r/20210930222704.2631604-3-keescook@chromium.org
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Tested-by: Randy Dunlap <rdunlap@infradead.org>
      Cc: Andy Whitcroft <apw@canonical.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Daniel Micay <danielmicay@gmail.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Dwaipayan Ray <dwaipayanray1@gmail.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Lukas Bulwahn <lukas.bulwahn@gmail.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Alexandre Bounine <alex.bou9@gmail.com>
      Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
      Cc: Ira Weiny <ira.weiny@intel.com>
      Cc: Jing Xiangfeng <jingxiangfeng@huawei.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: kernel test robot <lkp@intel.com>
      Cc: Matt Porter <mporter@kernel.crashing.org>
      Cc: Miguel Ojeda <ojeda@kernel.org>
      Cc: Nathan Chancellor <nathan@kernel.org>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Souptick Joarder <jrdr.linux@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • rapidio: avoid bogus __alloc_size warning · 75da0eba
      Kees Cook authored
      
      
      Patch series "Add __alloc_size()", v3.
      
      GCC and Clang both use the "alloc_size" attribute to assist with bounds
      checking around the use of allocation functions.  Add the attribute,
      adjust the Makefile to silence needless warnings, and add the hints to
      the allocators where possible.  These changes have been in use for a
      while now in GrapheneOS.
      
      This patch (of 8):
      
      After adding __alloc_size attributes to the allocators, GCC 9.3 (but not
      later) may incorrectly evaluate the arguments to check_copy_size(),
      getting seemingly confused by the size being returned from array_size().
      Instead, perform the calculation once, which both makes the code more
      readable and avoids the bug in GCC.
      
         In file included from arch/x86/include/asm/preempt.h:7,
                          from include/linux/preempt.h:78,
                          from include/linux/spinlock.h:55,
                          from include/linux/mm_types.h:9,
                          from include/linux/buildid.h:5,
                          from include/linux/module.h:14,
                          from drivers/rapidio/devices/rio_mport_cdev.c:13:
         In function 'check_copy_size',
             inlined from 'copy_from_user' at include/linux/uaccess.h:191:6,
             inlined from 'rio_mport_transfer_ioctl' at drivers/rapidio/devices/rio_mport_cdev.c:983:6:
         include/linux/thread_info.h:213:4: error: call to '__bad_copy_to' declared with attribute error: copy destination size is too small
           213 |    __bad_copy_to();
               |    ^~~~~~~~~~~~~~~
      
      But the allocation size and the copy size are identical:
      
      	transfer = vmalloc(array_size(sizeof(*transfer), transaction.count));
      	if (!transfer)
      		return -ENOMEM;
      
      	if (unlikely(copy_from_user(transfer,
      				    (void __user *)(uintptr_t)transaction.block,
      				    array_size(sizeof(*transfer), transaction.count)))) {
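
      The shape of the fix is therefore to perform the calculation once and
      reuse it (a sketch based on the quoted code, not the verbatim diff):

      	size_t size = array_size(sizeof(*transfer), transaction.count);

      	transfer = vmalloc(size);
      	if (!transfer)
      		return -ENOMEM;

      	if (unlikely(copy_from_user(transfer,
      				    (void __user *)(uintptr_t)transaction.block,
      				    size))) {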
      
      Link: https://lkml.kernel.org/r/20210930222704.2631604-1-keescook@chromium.org
      Link: https://lkml.kernel.org/r/20210930222704.2631604-2-keescook@chromium.org
      Link: https://lore.kernel.org/linux-mm/202109091134.FHnRmRxu-lkp@intel.com/
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: John Hubbard <jhubbard@nvidia.com>
      Reported-by: kernel test robot <lkp@intel.com>
      Cc: Matt Porter <mporter@kernel.crashing.org>
      Cc: Alexandre Bounine <alex.bou9@gmail.com>
      Cc: Jing Xiangfeng <jingxiangfeng@huawei.com>
      Cc: Ira Weiny <ira.weiny@intel.com>
      Cc: Souptick Joarder <jrdr.linux@gmail.com>
      Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
      Cc: Andy Whitcroft <apw@canonical.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Daniel Micay <danielmicay@gmail.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Dwaipayan Ray <dwaipayanray1@gmail.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Lukas Bulwahn <lukas.bulwahn@gmail.com>
      Cc: Miguel Ojeda <ojeda@kernel.org>
      Cc: Nathan Chancellor <nathan@kernel.org>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kasan: test: bypass __alloc_size checks · d73dad4e
      Kees Cook authored
      
      
      Intentional overflows, as performed by the KASAN tests, are detected at
      compile time[1] (instead of only at run-time) with the addition of
      __alloc_size.  Fix this by preventing the compiler from trusting the
      size used following the kmalloc()s.
      
      [1] https://lore.kernel.org/lkml/20211005184717.65c6d8eb39350395e387b71f@linux-foundation.org
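
      One way to do that (a sketch; I'm assuming the kernel's
      OPTIMIZER_HIDE_VAR() barrier from <linux/compiler.h>, which matches
      the approach described) is to launder the size through an asm barrier
      so the bound is no longer a compile-time constant:

        size_t size = 128;
        char *ptr = kmalloc(size, GFP_KERNEL);

        OPTIMIZER_HIDE_VAR(size);   /* compiler can no longer prove size == 128 */
        ptr[size] = 'x';            /* no compile-time warning; KASAN still fires */
        kfree(ptr);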
      
      Link: https://lkml.kernel.org/r/20211006181544.1670992-1-keescook@chromium.org
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: debug_vm_pgtable: don't use __P000 directly · 8772716f
      Guo Ren authored
      
      
      The __Pxxx/__Sxxx macros are only for initializing protection_map[].
      All other usage in Linux should come from the protection_map array.

      This matters because a lot of architectures re-initialize the
      protection_map[] array, e.g. x86-mem_encrypt, m68k-motorola, mips,
      arm, sparc.

      Using __P000 directly is therefore not rigorous.
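
      A minimal sketch of the substitution (illustrative, not the full
      diff):

        /* Was:  pgprot_t prot = __P000;
         * Read the live table instead, which the architecture may have
         * rewritten at boot: */
        pgprot_t prot = protection_map[0];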
      
      Link: https://lkml.kernel.org/r/20210924060821.1138281-1-guoren@kernel.org
      Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Gavin Shan <gshan@redhat.com>
      Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/smaps: simplify shmem handling of pte holes · 23010032
      Peter Xu authored
      
      
      Firstly, the check_shmem_swap variable is not actually necessary,
      because it is always set together with the pte_hole hook; checking
      either one would work.
      
      Meanwhile, the check within smaps_pte_entry() is not easy to follow.
      E.g., the pte_none() check is not needed, as "!pte_present &&
      !is_swap_pte" amounts to the same thing.  While at it, use the
      pte_hole() helper rather than duplicating the page cache lookup.
      
      Still keep the CONFIG_SHMEM part so the code can be optimized to nop for
      !SHMEM.
      
      There will be a very slight functional change in smaps_pte_entry():
      for !SHMEM we'll return early on pte_none (before checking
      page==NULL), but that's even nicer.
      
      Link: https://lkml.kernel.org/r/20210917164756.8586-4-peterx@redhat.com
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/smaps: use vma->vm_pgoff directly when counting partial swap · 02399c88
      Peter Xu authored
      
      
      As it's trying to cover the whole vma anyway, use the vm_pgoff value
      directly and vma_pages() rather than linear_page_index().
      
      Link: https://lkml.kernel.org/r/20210917164756.8586-3-peterx@redhat.com
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/smaps: fix shmem pte hole swap calculation · 10c848c8
      Peter Xu authored
      Patch series "mm/smaps: Fixes and optimizations on shmem swap handling".
      
      This patch (of 3):
      
      The shmem swap calculation on privately writable mappings is using
      wrong parameters, as spotted by Vlastimil.  Fix them.  This was
      introduced in commit 48131e03 ("mm, proc: reduce cost of
      /proc/pid/smaps for unpopulated shmem mappings"), when shmem_swap_usage
      was reworked into shmem_partial_swap_usage.
      
      Test program:
      
        void main(void)
        {
            char *buffer, *p;
            int i, fd;
      
            fd = memfd_create("test", 0);
            assert(fd > 0);
      
            /* isize==2M*3, fill in pages, swap them out */
            ftruncate(fd, SIZE_2M * 3);
            buffer = mmap(NULL, SIZE_2M * 3, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            assert(buffer);
            for (i = 0, p = buffer; i < SIZE_2M * 3 / 4096; i++) {
                *p = 1;
                p += 4096;
            }
            madvise(buffer, SIZE_2M * 3, MADV_PAGEOUT);
            munmap(buffer, SIZE_2M * 3);
      
            /*
             * Remap with private+writable mappings on part of the inode (<= 2M*3),
             * while the size must also be >= 2M*2 to make sure there's a none pmd so
             * smaps_pte_hole will be triggered.
             */
            buffer = mmap(NULL, SIZE_2M * 2, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
            printf("pid=%d, buffer=%p\n", getpid(), buffer);
      
            /* Check /proc/$PID/smap_rollup, should see 4MB swap */
            sleep(1000000);
        }
      
      Before the patch, smaps_rollup shows <4MB swap and the number is
      random, depending on the alignment of the buffer mmap() allocated.
      After this patch, it shows 4MB.
      
      Link: https://lkml.kernel.org/r/20210917164756.8586-1-peterx@redhat.com
      Link: https://lkml.kernel.org/r/20210917164756.8586-2-peterx@redhat.com
      Fixes: 48131e03 ("mm, proc: reduce cost of /proc/pid/smaps for unpopulated shmem mappings")
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Reported-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kasan: test: add memcpy test that avoids out-of-bounds write · 758cabae
      Peter Collingbourne authored
      With HW tag-based KASAN, error checks are performed implicitly by the
      load and store instructions in the memcpy implementation.  A failed
      check results in tag checks being disabled and execution will keep
      going.  As a result, under HW tag-based KASAN, prior to commit
      1b0668be ("kasan: test: disable kmalloc_memmove_invalid_size for
      HW_TAGS"), this memcpy would end up corrupting memory until it hits an
      inaccessible page and causes a kernel panic.
      
      This is a pre-existing issue that was revealed by commit 28513304
      ("arm64: Import latest memcpy()/memmove() implementation") which changed
      the memcpy implementation from using signed comparisons (incorrectly,
      resulting in the memcpy being terminated early for negative sizes) to
      using unsigned comparisons.
      
      It is unclear how this could be handled by memcpy itself in a reasonable
      way.  One possibility would be to add an exception handler that would
      force memcpy to return if a tag check fault is detected -- this would
      make the behavior roughly similar to generic and SW tag-based KASAN.
      However, this wouldn't solve the problem for asynchronous mode and also
      makes memcpy behavior inconsistent with manually copying data.
      
      This test was added as a part of a series that taught KASAN to detect
      negative sizes in memory operations, see commit 8cceeff4 ("kasan:
      detect negative size in memory operation function").  Therefore we
      should keep testing for negative sizes with generic and SW tag-based
      KASAN.  But there is some value in testing small memcpy overflows, so
      let's add another test with memcpy that does not destabilize the kernel
      by performing out-of-bounds writes, and run it in all modes.
      
      Link: https://linux-review.googlesource.com/id/I048d1e6a9aff766c4a53f989fb0c83de68923882
      Link: https://lkml.kernel.org/r/20210910211356.3603758-1-pcc@google.com
      Signed-off-by: Peter Collingbourne <pcc@google.com>
      Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
      Acked-by: Marco Elver <elver@google.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Evgenii Stepanov <eugenis@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kasan: fix tag for large allocations when using CONFIG_SLAB · 820a1e6e
      Matthew Wilcox (Oracle) authored
      If an object is allocated on a tail page of a multi-page slab, kasan
      will get the wrong tag because page->s_mem is NULL for tail pages.  I'm
      not quite sure what the user-visible effect of this might be.
      
      Link: https://lkml.kernel.org/r/20211001024105.3217339-1-willy@infradead.org
      Fixes: 7f94ffbc ("kasan: add hooks implementation for tag-based mode")
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Acked-by: Marco Elver <elver@google.com>
      Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • workqueue, kasan: avoid alloc_pages() when recording stack · f70da745
      Marco Elver authored
      Shuah Khan reported:
      
       | When CONFIG_PROVE_RAW_LOCK_NESTING=y and CONFIG_KASAN are enabled,
       | kasan_record_aux_stack() runs into "BUG: Invalid wait context" when
       | it tries to allocate memory attempting to acquire spinlock in page
       | allocation code while holding workqueue pool raw_spinlock.
       |
       | There are several instances of this problem when block layer tries
       | to __queue_work(). Call trace from one of these instances is below:
       |
       |     kblockd_mod_delayed_work_on()
       |       mod_delayed_work_on()
       |         __queue_delayed_work()
       |           __queue_work() (rcu_read_lock, raw_spin_lock pool->lock held)
       |             insert_work()
       |               kasan_record_aux_stack()
       |                 kasan_save_stack()
       |                   stack_depot_save()
       |                     alloc_pages()
       |                       __alloc_pages()
       |                         get_page_from_freelist()
       |                           rm_queue()
       |                             rm_queue_pcplist()
       |                               local_lock_irqsave(&pagesets.lock, flags);
       |                               [ BUG: Invalid wait context triggered ]
      
      The default kasan_record_aux_stack() calls stack_depot_save() with
      GFP_NOWAIT, which in turn can then call alloc_pages(GFP_NOWAIT, ...).
      In general, however, it is not even possible to use either GFP_ATOMIC
      or GFP_NOWAIT in certain non-preemptive contexts, including
      raw_spin_locks (see gfp.h and commit ab00db21).
      
      Fix it by using kasan_record_aux_stack_noalloc(), which instructs
      stackdepot not to expand its stack storage via alloc_pages() when it
      runs out.
      
      While there is an increased risk of failing to insert the stack trace,
      this is typically unlikely, especially if the same insertion had already
      succeeded previously (stack depot hit).
      
      For frequent calls from the same location, it therefore becomes
      extremely unlikely that kasan_record_aux_stack_noalloc() fails.
      
      Link: https://lkml.kernel.org/r/20210902200134.25603-1-skhan@linuxfoundation.org
      Link: https://lkml.kernel.org/r/20210913112609.2651084-7-elver@google.com
      Signed-off-by: Marco Elver <elver@google.com>
      Reported-by: Shuah Khan <skhan@linuxfoundation.org>
      Tested-by: Shuah Khan <skhan@linuxfoundation.org>
      Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Acked-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>
      Cc: Lai Jiangshan <jiangshanlai@gmail.com>
      Cc: Taras Madan <tarasmadan@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vijayanand Jitta <vjitta@codeaurora.org>
      Cc: Vinayak Menon <vinmenon@codeaurora.org>
      Cc: Walter Wu <walter-zh.wu@mediatek.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kasan: generic: introduce kasan_record_aux_stack_noalloc() · 7cb3007c
      Marco Elver authored
      
      
      Introduce a variant of kasan_record_aux_stack() that does not do any
      memory allocation through stackdepot.  This will permit using it in
      contexts that cannot allocate any memory.
      
      Link: https://lkml.kernel.org/r/20210913112609.2651084-6-elver@google.com
      Signed-off-by: Marco Elver <elver@google.com>
      Tested-by: Shuah Khan <skhan@linuxfoundation.org>
      Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>
      Cc: Lai Jiangshan <jiangshanlai@gmail.com>
      Cc: Taras Madan <tarasmadan@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vijayanand Jitta <vjitta@codeaurora.org>
      Cc: Vinayak Menon <vinmenon@codeaurora.org>
      Cc: Walter Wu <walter-zh.wu@mediatek.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kasan: common: provide can_alloc in kasan_save_stack() · 7594b347
      Marco Elver authored
      
      
      Add another argument, can_alloc, to kasan_save_stack() which is passed
      as-is to __stack_depot_save().
      
      No functional change intended.
      
      Link: https://lkml.kernel.org/r/20210913112609.2651084-5-elver@google.com
      Signed-off-by: Marco Elver <elver@google.com>
      Tested-by: Shuah Khan <skhan@linuxfoundation.org>
      Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>
      Cc: Lai Jiangshan <jiangshanlai@gmail.com>
      Cc: Taras Madan <tarasmadan@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vijayanand Jitta <vjitta@codeaurora.org>
      Cc: Vinayak Menon <vinmenon@codeaurora.org>
      Cc: Walter Wu <walter-zh.wu@mediatek.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lib/stackdepot: introduce __stack_depot_save() · 11ac25c6
      Marco Elver authored
      
      
      Add __stack_depot_save(), which provides more fine-grained control over
      stackdepot's memory allocation behaviour, in case stackdepot runs out of
      "stack slabs".
      
      Normally stackdepot uses alloc_pages() in case it runs out of space;
      passing can_alloc==false to __stack_depot_save() prohibits this, at the
      cost of more likely failure to record a stack trace.
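
      Typical usage from a context that must not allocate (a sketch; the
      signature is the one this patch introduces):

        unsigned long entries[64];
        unsigned int nr_entries;
        depot_stack_handle_t handle;

        nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
        /* can_alloc == false: never fall back to alloc_pages(); returns 0
         * if the depot is full and this trace has not been seen before. */
        handle = __stack_depot_save(entries, nr_entries, GFP_NOWAIT, false);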
      
      Link: https://lkml.kernel.org/r/20210913112609.2651084-4-elver@google.com
      Signed-off-by: Marco Elver <elver@google.com>
      Tested-by: Shuah Khan <skhan@linuxfoundation.org>
      Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>
      Cc: Lai Jiangshan <jiangshanlai@gmail.com>
      Cc: Taras Madan <tarasmadan@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vijayanand Jitta <vjitta@codeaurora.org>
      Cc: Vinayak Menon <vinmenon@codeaurora.org>
      Cc: Walter Wu <walter-zh.wu@mediatek.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lib/stackdepot: remove unused function argument · 7f2b8818
      Marco Elver authored
      
      
      alloc_flags in depot_alloc_stack() is no longer used; remove it.
      
      Link: https://lkml.kernel.org/r/20210913112609.2651084-3-elver@google.com
      Signed-off-by: Marco Elver <elver@google.com>
      Tested-by: Shuah Khan <skhan@linuxfoundation.org>
      Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>
      Cc: Lai Jiangshan <jiangshanlai@gmail.com>
      Cc: Taras Madan <tarasmadan@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vijayanand Jitta <vjitta@codeaurora.org>
      Cc: Vinayak Menon <vinmenon@codeaurora.org>
      Cc: Walter Wu <walter-zh.wu@mediatek.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lib/stackdepot: include gfp.h · 7857ccdf
      Marco Elver authored
      Patch series "stackdepot, kasan, workqueue: Avoid expanding stackdepot
      slabs when holding raw_spin_lock", v2.
      
      Shuah Khan reported [1]:
      
       | When CONFIG_PROVE_RAW_LOCK_NESTING=y and CONFIG_KASAN are enabled,
       | kasan_record_aux_stack() runs into "BUG: Invalid wait context" when
       | it tries to allocate memory attempting to acquire spinlock in page
       | allocation code while holding workqueue pool raw_spinlock.
       |
       | There are several instances of this problem when block layer tries
       | to __queue_work(). Call trace from one of these instances is below:
       |
       |     kblockd_mod_delayed_work_on()
       |       mod_delayed_work_on()
       |         __queue_delayed_work()
       |           __queue_work() (rcu_read_lock, raw_spin_lock pool->lock held)
       |             insert_work()
       |               kasan_record_aux_stack()
       |                 kasan_save_stack()
       |                   stack_depot_save()
       |                     alloc_pages()
       |                       __alloc_pages()
       |                         get_page_from_freelist()
       |                           rmqueue()
       |                             rmqueue_pcplist()
       |                               local_lock_irqsave(&pagesets.lock, flags);
       |                               [ BUG: Invalid wait context triggered ]
      
      PROVE_RAW_LOCK_NESTING is pointing out that (on RT kernels) the locking
      rules are being violated.  More generally, memory is being allocated
      from a non-preemptible context (a raw_spin_lock'd critical section)
      where it is not allowed.
      
      To properly fix this, we must prevent stackdepot from replenishing its
      "stack slab" pool if memory allocations cannot be done in the current
      context: it's a bug to use either GFP_ATOMIC or GFP_NOWAIT in certain
      non-preemptible contexts, including raw_spin_locks (see gfp.h and
      commit ab00db21).
      
      The only downside is that saving a stack trace may fail if stackdepot
      runs out of space AND the same stack trace has not been recorded before.
      I expect this to be unlikely, and a simple experiment (boot the kernel)
      didn't result in any failure to record a stack trace from insert_work().
      
      The series includes a few minor fixes to stackdepot that I noticed in
      preparing the series.  It then introduces __stack_depot_save(), which
      exposes the option to force stackdepot to not allocate any memory.
      Finally, KASAN is changed to use the new stackdepot interface and
      provide kasan_record_aux_stack_noalloc(), which is then used by
      workqueue code.
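
      Put together, the intended call chain looks roughly like the sketch
      below.  The intermediate helper names and argument lists are
      assumptions from the series description; only
      kasan_record_aux_stack_noalloc() and __stack_depot_save() are named
      above:

        __queue_work()                        /* holds pool->lock (raw) */
          insert_work()
            kasan_record_aux_stack_noalloc(work)
              kasan_save_stack(flags, can_alloc = false)
                __stack_depot_save(entries, nr, flags, false)
                  /* may fail to record, but never calls alloc_pages() */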
      
      [1] https://lkml.kernel.org/r/20210902200134.25603-1-skhan@linuxfoundation.org
      
      This patch (of 6):
      
      <linux/stackdepot.h> refers to gfp_t, but doesn't include gfp.h.
      
      Fix it by including <linux/gfp.h>.
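
      A minimal sketch of the fix; the stack_depot_save() prototype is
      abbreviated here for illustration:

        /* include/linux/stackdepot.h */
        #include <linux/gfp.h>          /* supplies gfp_t */

        depot_stack_handle_t stack_depot_save(unsigned long *entries,
                                              unsigned int nr_entries,
                                              gfp_t gfp_flags);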
      
      Link: https://lkml.kernel.org/r/20210913112609.2651084-1-elver@google.com
      Link: https://lkml.kernel.org/r/20210913112609.2651084-2-elver@google.com
      Signed-off-by: Marco Elver <elver@google.com>
      Tested-by: Shuah Khan <skhan@linuxfoundation.org>
      Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Lai Jiangshan <jiangshanlai@gmail.com>
      Cc: Walter Wu <walter-zh.wu@mediatek.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Vijayanand Jitta <vjitta@codeaurora.org>
      Cc: Vinayak Menon <vinmenon@codeaurora.org>
      Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>
      Cc: Taras Madan <tarasmadan@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7857ccdf
    • Christoph Hellwig's avatar
      mm: don't include <linux/dax.h> in <linux/mempolicy.h> · 96c84dde
      Christoph Hellwig authored
      
      
      Not required at all, and having this causes a huge kernel rebuild as
      soon as something in dax.h changes.
      
      Link: https://lkml.kernel.org/r/20210921082253.1859794-1-hch@lst.de
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
      Reviewed-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      96c84dde
    • Sebastian Andrzej Siewior's avatar
      mm: disable NUMA_BALANCING_DEFAULT_ENABLED and TRANSPARENT_HUGEPAGE on PREEMPT_RT · 554b0f3c
      Sebastian Andrzej Siewior authored
      
      
      TRANSPARENT_HUGEPAGE:
        There are potential non-deterministic delays to an RT thread if a
        critical memory region is not THP-aligned and a non-RT buffer is
        located in the same hugepage-aligned region. It's also possible for an
        unrelated thread to migrate pages belonging to an RT task, incurring
        unexpected page faults due to memory defragmentation even if
        khugepaged is disabled.

      Regular HUGEPAGEs are not affected by this and can be used.
      
      NUMA_BALANCING:
        There is a non-deterministic delay when marking PTEs PROT_NONE to
        gather NUMA fault samples, increased page faults for regions even if
        mlocked, and non-deterministic delays when migrating pages.
      
      [Mel Gorman worded 99% of the commit description].
      
      Link: https://lore.kernel.org/all/20200304091159.GN3818@techsingularity.net/
      Link: https://lore.kernel.org/all/20211026165100.ahz5bkx44lrrw5pt@linutronix.de/
      Link: https://lkml.kernel.org/r/20211028143327.hfbxjze7palrpfgp@linutronix.de
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Acked-by: Mel Gorman <mgorman@techsingularity.net>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      554b0f3c
    • Hyeonggon Yoo's avatar
      mm, slub: use prefetchw instead of prefetch · 04b4b006
      Hyeonggon Yoo authored
      Commit 0ad9500e ("slub: prefetch next freelist pointer in
      slab_alloc()") introduced prefetch_freepointer() because when other
      cpu(s) freed objects into a page that the current cpu owns, the
      freelist link is hot on the cpu(s) which freed the objects and
      possibly very cold on the current cpu.
      
      But if the freelist link chain is hot on the cpu(s) which freed the
      objects, it's better to invalidate that chain, because those cpus are
      not going to access it again within a short time.
      
      So use prefetchw instead of prefetch.  On supported architectures
      such as x86 and arm, prefetchw fetches the cache line with intent to
      write, invalidating other cpus' copies of that line.
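
      The change itself is a one-liner in the slub prefetch helper; a
      sketch of the resulting function, assuming the helper as it exists
      in mm/slub.c:

        /* mm/slub.c: prefetch the next free object's freelist pointer
         * with intent to write, pulling the line in exclusive state. */
        static void prefetch_freepointer(const struct kmem_cache *s,
                                         void *object)
        {
                prefetchw(object + s->offset);
        }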
      
      Before:
      
      Time: 91.677
      
       Performance counter stats for 'hackbench -g 100 -l 10000':
              1462938.07 msec cpu-clock                 #   15.908 CPUs utilized
                18072550      context-switches          #   12.354 K/sec
                 1018814      cpu-migrations            #  696.416 /sec
                  104558      page-faults               #   71.471 /sec
           1580035699271      cycles                    #    1.080 GHz                      (54.51%)
           2003670016013      instructions              #    1.27  insn per cycle           (54.31%)
              5702204863      branch-misses                                                 (54.28%)
            643368500985      cache-references          #  439.778 M/sec                    (54.26%)
             18475582235      cache-misses              #    2.872 % of all cache refs      (54.28%)
            642206796636      L1-dcache-loads           #  438.984 M/sec                    (46.87%)
             18215813147      L1-dcache-load-misses     #    2.84% of all L1-dcache accesses  (46.83%)
            653842996501      dTLB-loads                #  446.938 M/sec                    (46.63%)
              3227179675      dTLB-load-misses          #    0.49% of all dTLB cache accesses  (46.85%)
            537531951350      iTLB-loads                #  367.433 M/sec                    (54.33%)
               114750630      iTLB-load-misses          #    0.02% of all iTLB cache accesses  (54.37%)
            630135543177      L1-icache-loads           #  430.733 M/sec                    (46.80%)
             22923237620      L1-icache-load-misses     #    3.64% of all L1-icache accesses  (46.76%)
      
            91.964452802 seconds time elapsed
      
            43.416742000 seconds user
          1422.441123000 seconds sys
      
      After:
      
      Time: 90.220
      
       Performance counter stats for 'hackbench -g 100 -l 10000':
              1437418.48 msec cpu-clock                 #   15.880 CPUs utilized
                17694068      context-switches          #   12.310 K/sec
                  958257      cpu-migrations            #  666.651 /sec
                  100604      page-faults               #   69.989 /sec
           1583259429428      cycles                    #    1.101 GHz                      (54.57%)
           2004002484935      instructions              #    1.27  insn per cycle           (54.37%)
              5594202389      branch-misses                                                 (54.36%)
            643113574524      cache-references          #  447.409 M/sec                    (54.39%)
             18233791870      cache-misses              #    2.835 % of all cache refs      (54.37%)
            640205852062      L1-dcache-loads           #  445.386 M/sec                    (46.75%)
             17968160377      L1-dcache-load-misses     #    2.81% of all L1-dcache accesses  (46.79%)
            651747432274      dTLB-loads                #  453.415 M/sec                    (46.59%)
              3127124271      dTLB-load-misses          #    0.48% of all dTLB cache accesses  (46.75%)
            535395273064      iTLB-loads                #  372.470 M/sec                    (54.38%)
               113500056      iTLB-load-misses          #    0.02% of all iTLB cache accesses  (54.35%)
            628871845924      L1-icache-loads           #  437.501 M/sec                    (46.80%)
             22585641203      L1-icache-load-misses     #    3.59% of all L1-icache accesses  (46.79%)
      
            90.514819303 seconds time elapsed
      
            43.877656000 seconds user
          1397.176001000 seconds sys
      
      Link: https://lkml.org/lkml/2021/10/8/598
      Link: https://lkml.kernel.org/r/20211011144331.70084-1-42.hyeyoo@gmail.com
      Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      04b4b006