  1. Aug 16, 2019
    • misc/sgi-gru: use mmu_notifier_get/put for struct gru_mm_struct · e4c057d0
      Jason Gunthorpe authored
      
      
      GRU is already using almost the same algorithm as get/put, it even
      helpfully has a 10 year old comment to make this algorithm common, which
      is finally happening.
      
      There are a few differences and fixes from this conversion:
      - GRU used rcu, not srcu, to read the hlist
      - It is unclear how the locking prevented gru_register_mmu_notifier()
        from running concurrently with gru_drop_mmu_notifier(); this version
        is safe
      - GRU had a release function which only set a variable, without any
        locking, and which skipped the synchronize_srcu during unregister;
        that looks racy, but this conversion makes it reliable via the
        integrated call_srcu()
      - It is unclear if the mmap_sem is actually held when
        __mmu_notifier_register() is called; lockdep will now warn if this
        is wrong
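
      A minimal sketch of the pattern the conversion moves to, with
      illustrative names (this is not the actual GRU code): the driver embeds
      struct mmu_notifier in its per-mm state and supplies
      alloc_notifier/free_notifier callbacks so the core can create and free
      it on demand.

          #include <linux/err.h>
          #include <linux/mmu_notifier.h>
          #include <linux/slab.h>

          /* Illustrative per-mm driver state, standing in for gru_mm_struct. */
          struct example_mm_state {
                  struct mmu_notifier ms_notifier;  /* embedded subscription */
                  /* ... driver-specific per-mm fields ... */
          };

          static struct mmu_notifier *example_alloc_notifier(struct mm_struct *mm)
          {
                  struct example_mm_state *ems = kzalloc(sizeof(*ems), GFP_KERNEL);

                  if (!ems)
                          return ERR_PTR(-ENOMEM);
                  return &ems->ms_notifier;  /* the core links this to the mm */
          }

          static void example_free_notifier(struct mmu_notifier *mn)
          {
                  /* Reached via call_srcu() after the last mmu_notifier_put(). */
                  kfree(container_of(mn, struct example_mm_state, ms_notifier));
          }

          static const struct mmu_notifier_ops example_mn_ops = {
                  .alloc_notifier = example_alloc_notifier,
                  .free_notifier  = example_free_notifier,
                  /* .release, .invalidate_range_start, ... as the driver needs */
          };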
      
      Link: https://lore.kernel.org/r/20190806231548.25242-5-jgg@ziepe.ca
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Dimitri Sivanich <sivanich@hpe.com>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
    • mm/mmu_notifiers: add a get/put scheme for the registration · 2c7933f5
      Jason Gunthorpe authored
      Many places in the kernel have a flow where userspace will create some
      object and that object will need to connect to the subsystem's
      mmu_notifier subscription for the duration of its lifetime.
      
      In this case the subsystem is usually tracking multiple mm_structs and it
      is difficult to keep track of which struct mmu_notifiers have been
      allocated for which mm's.
      
      Since this has been open coded in a variety of exciting ways, provide core
      functionality to do this safely.
      
      This approach uses the struct mmu_notifier_ops * as a key to determine
      whether the subsystem already has a notifier registered on the mm. If
      there is a registration then the existing notifier struct is returned,
      otherwise ops->alloc_notifier() is used to create a new per-subsystem
      notifier for the mm.
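
      A rough usage sketch from a subsystem's point of view; only
      mmu_notifier_get()/mmu_notifier_put() come from this series, while
      example_mn_ops and the surrounding driver code are assumed for
      illustration:

          struct mmu_notifier *mn;

          /* Return the existing notifier for current->mm, or allocate one
           * via example_mn_ops.alloc_notifier(). Takes mmap_sem internally. */
          mn = mmu_notifier_get(&example_mn_ops, current->mm);
          if (IS_ERR(mn))
                  return PTR_ERR(mn);

          /* container_of(mn, ...) reaches the per-subsystem, per-mm state. */

          /* When the object dies: the final put frees via call_srcu(). */
          mmu_notifier_put(mn);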
      
      The destroy side incorporates an async call_srcu based destruction which
      will avoid bugs in the callers such as commit 6d7c3cde ("mm/hmm: fix
      use after free with struct hmm in the mmu notifiers").
      
      Since we are inside the mmu notifier core, locking is fairly simple: the
      allocation uses the same approach as for mmu_notifier_mm, the write side
      of the mmap_sem makes everything deterministic, and we only need to do
      hlist_add_head_rcu() under the mm_take_all_locks(). The new users count
      and the discoverability in the hlist are fully serialized by the
      mmu_notifier_mm->lock.
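
      In outline, the registration path this describes looks roughly like the
      following (heavily simplified: error handling and the mmu_notifier_mm
      allocation are omitted, and find_existing_notifier() is an illustrative
      stand-in, not the upstream helper):

          /* Write side of mmap_sem is held by the caller. */
          spin_lock(&mm->mmu_notifier_mm->lock);
          mn = find_existing_notifier(mm, ops);   /* keyed by the ops pointer */
          if (mn)
                  mn->users++;                    /* reuse: just take a reference */
          spin_unlock(&mm->mmu_notifier_mm->lock);
          if (mn)
                  return mn;

          mn = ops->alloc_notifier(mm);           /* first user of this ops/mm pair */

          mm_take_all_locks(mm);
          spin_lock(&mm->mmu_notifier_mm->lock);
          mn->users = 1;
          hlist_add_head_rcu(&mn->hlist, &mm->mmu_notifier_mm->list);
          spin_unlock(&mm->mmu_notifier_mm->lock);
          mm_drop_all_locks(mm);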
      
      Link: https://lore.kernel.org/r/20190806231548.25242-4-jgg@ziepe.ca
      Co-developed-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Christoph Hellwig <hch@infradead.org>
      Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
      Tested-by: Ralph Campbell <rcampbell@nvidia.com>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
    • mm/mmu_notifiers: do not speculatively allocate a mmu_notifier_mm · 70df291b
      Jason Gunthorpe authored
      A prior commit e0f3c3f7 ("mm/mmu_notifier: init notifier if necessary")
      made an attempt at doing this, but had to be reverted as calling the
      GFP_KERNEL allocator under the i_mmap_mutex causes deadlock, see commit
      35cfa2b0 ("mm/mmu_notifier: allocate mmu_notifier in advance").
      
      However, we can avoid that problem by doing the allocation only under
      the mmap_sem, which is already happening.
      
      Since all writers to mm->mmu_notifier_mm hold the write side of the
      mmap_sem, reading it under that sem is deterministic, and we can use that
      to decide whether the allocation path is required, without speculation.
      
      The actual update to mmu_notifier_mm must still be done under the
      mm_take_all_locks() to ensure read-side coherency.
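
      A simplified sketch of the resulting check (illustrative; error paths
      and the rest of the registration work are trimmed):

          struct mmu_notifier_mm *mmu_notifier_mm = NULL;

          /* mmap_sem is held for write, so this read cannot race a writer. */
          if (!mm->mmu_notifier_mm) {
                  /* Sleeping allocation is fine: no i_mmap_mutex or other
                   * spinlocks are held here, only the mmap_sem. */
                  mmu_notifier_mm = kmalloc(sizeof(*mmu_notifier_mm), GFP_KERNEL);
                  if (!mmu_notifier_mm)
                          return -ENOMEM;
                  INIT_HLIST_HEAD(&mmu_notifier_mm->list);
                  spin_lock_init(&mmu_notifier_mm->lock);
          }

          /* The store to mm->mmu_notifier_mm still happens only inside
           * mm_take_all_locks(), which keeps the read side coherent. */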
      
      Link: https://lore.kernel.org/r/20190806231548.25242-3-jgg@ziepe.ca
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
      Tested-by: Ralph Campbell <rcampbell@nvidia.com>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
    • mm/mmu_notifiers: hoist do_mmu_notifier_register down_write to the caller · 56c57103
      Jason Gunthorpe authored
      
      
      This simplifies the code to not have so many one-line functions and extra
      logic. __mmu_notifier_register() simply becomes the entry point to
      register the notifier, and mmu_notifier_register() calls it under the
      lock.
      
      Also add a lockdep_assert to check that the callers are holding the lock
      as expected.
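
      Roughly, the resulting call structure (an illustrative sketch, not the
      exact diff):

          int __mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm)
          {
                  lockdep_assert_held_write(&mm->mmap_sem);
                  /* ... the actual registration work lives here ... */
                  return 0;
          }

          int mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm)
          {
                  int ret;

                  down_write(&mm->mmap_sem);
                  ret = __mmu_notifier_register(mn, mm);
                  up_write(&mm->mmap_sem);
                  return ret;
          }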
      
      Link: https://lore.kernel.org/r/20190806231548.25242-2-jgg@ziepe.ca
      Suggested-by: Christoph Hellwig <hch@infradead.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
      Tested-by: Ralph Campbell <rcampbell@nvidia.com>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>