  5. Jun 23, 2018
    • bdi: Fix another oops in wb_workfn() · 3ee7e869
      Jan Kara authored

      syzbot is reporting NULL pointer dereference at wb_workfn() [1] due to
      wb->bdi->dev being NULL. And Dmitry confirmed that wb->state was
      WB_shutting_down after wb->bdi->dev became NULL. This indicates that
      bdi_unregister() failed to call wb_shutdown() on one of the wb objects.
      
      The problem is in cgwb_bdi_unregister() which does cgwb_kill() and thus
      drops bdi's reference to wb structures before going through the list of
      wbs again and calling wb_shutdown() on each of them. The loop
      iterating through all wbs can thus easily miss a wb that has already
      passed through cgwb_remove_from_bdi_list() (called from wb_shutdown()
      from cgwb_release_workfn()), and as a result the bdi is fully shut
      down while wb_workfn() for that wb structure is still running. In
      fact, cgwb_bdi_unregister() can race with cgwb_release_workfn() in
      other ways too, leading e.g. to use-after-free issues:
      
      CPU1                            CPU2
                                      cgwb_bdi_unregister()
                                        cgwb_kill(*slot);
      
      cgwb_release()
        queue_work(cgwb_release_wq, &wb->release_work);
      cgwb_release_workfn()
                                        wb = list_first_entry(&bdi->wb_list, ...)
                                        spin_unlock_irq(&cgwb_lock);
        wb_shutdown(wb);
        ...
        kfree_rcu(wb, rcu);
                                        wb_shutdown(wb); -> oops use-after-free
      
      We solve these issues by synchronizing writeback structure shutdown from
      cgwb_bdi_unregister() with cgwb_release_workfn() using a new mutex. That
      way we also no longer need the synchronization using WB_shutting_down,
      as the mutex provides it for the CONFIG_CGROUP_WRITEBACK case, and without
      CONFIG_CGROUP_WRITEBACK wb_shutdown() can be called only once from
      bdi_unregister().
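The invariant the new mutex provides can be sketched in a few lines of userspace C. Everything below is mocked (the struct, the list flag, the function names other than the idea of a shared `wb_shutdown_mutex` are illustrative, not the patch's actual code); the point is that whichever path removes the wb from the bdi list first is the only one that shuts it down:

```c
#include <assert.h>
#include <stdbool.h>
#include <pthread.h>

/* Mocked writeback structure: 'on_list' stands in for membership in
 * bdi->wb_list, 'shutdowns' counts wb_shutdown() invocations. */
struct wb {
    bool on_list;
    int shutdowns;
};

/* One mutex serializes both shutdown paths, as in the patch. */
static pthread_mutex_t wb_shutdown_mutex = PTHREAD_MUTEX_INITIALIZER;

static void wb_shutdown(struct wb *wb)
{
    wb->shutdowns++;            /* flush wb_workfn(), etc. (elided) */
}

/* Path 1: cgwb_release_workfn() shutting down its own wb. */
void release_workfn(struct wb *wb)
{
    pthread_mutex_lock(&wb_shutdown_mutex);
    if (wb->on_list) {          /* cgwb_remove_from_bdi_list() */
        wb->on_list = false;
        wb_shutdown(wb);
    }
    pthread_mutex_unlock(&wb_shutdown_mutex);
}

/* Path 2: cgwb_bdi_unregister() walking the remaining wbs. */
void bdi_unregister_one(struct wb *wb)
{
    pthread_mutex_lock(&wb_shutdown_mutex);
    if (wb->on_list) {          /* skipped if path 1 won the race */
        wb->on_list = false;
        wb_shutdown(wb);
    }
    pthread_mutex_unlock(&wb_shutdown_mutex);
}
```

Under the mutex, list membership and shutdown are one atomic step, so the oops (shutdown missed) and the use-after-free (shutdown done twice) both become impossible.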
      
      Reported-by: syzbot <syzbot+4a7438e774b21ddd8eca@syzkaller.appspotmail.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • lightnvm: Remove depends on HAS_DMA in case of platform dependency · 0ae52ddf
      Geert Uytterhoeven authored

      Remove dependencies on HAS_DMA where a Kconfig symbol depends on another
      symbol that implies HAS_DMA, and, optionally, on "|| COMPILE_TEST".
      In most cases this other symbol is an architecture or platform specific
      symbol, or PCI.
      
      Generic symbols and drivers without platform dependencies keep their
      dependencies on HAS_DMA, to prevent compiling subsystems or drivers that
      cannot work anyway.
      
      This simplifies the dependencies and allows compile-testing to be improved.
      
      Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Reviewed-by: Mark Brown <broonie@kernel.org>
      Acked-by: Robin Murphy <robin.murphy@arm.com>
      Reviewed-by: Matias Bjørling <mb@lightnvm.io>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  6. Jun 22, 2018
    • Merge branch 'nvme-4.18' of git://git.infradead.org/nvme into for-linus · f9da9d07
      Jens Axboe authored
      Pull NVMe fixes from Christoph:
      
      "Various relatively small fixes, mostly to fix error handling of various
       sorts."
      
      * 'nvme-4.18' of git://git.infradead.org/nvme:
        nvme-pci: limit max IO size and segments to avoid high order allocations
        nvme-pci: move nvme_kill_queues to nvme_remove_dead_ctrl
        nvme-fc: release io queues to allow fast fail
        nvmet: reset keep alive timer in controller enable
        nvme-rdma: don't override opts->queue_size
        nvme-rdma: Fix command completion race at error recovery
        nvme-rdma: fix possible free of a non-allocated async event buffer
        nvme-rdma: fix possible double free condition when failing to create a controller
    • nvme-pci: limit max IO size and segments to avoid high order allocations · 943e942e
      Jens Axboe authored

      nvme requires an sg table allocation for each request. If the request
      is large, then the allocation can become quite large. For instance,
      with our default software settings of 1280KB IO size, we'll need
      10248 bytes of sg table. That turns into a 2nd order allocation,
      which we can't always guarantee. If we fail the allocation, blk-mq
      will retry it later. But there's no guarantee that we'll EVER be
      able to allocate that much contiguous memory.
      
      Limit the IO size such that we never need more than a single page
      of memory. That's a lot faster and more reliable. Then back that
      allocation with a mempool, so that we know the allocation will
      always eventually succeed.
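The arithmetic behind the numbers in the message can be checked directly. The helper names below are hypothetical (the real driver uses its own constants and layout); this only reproduces the message's math, assuming a 32-byte scatterlist entry plus a small per-request header:

```c
#include <assert.h>
#include <stddef.h>

/* Bytes of sg table needed for an IO of io_kb kilobytes with 4KB
 * segments: 32 bytes per scatterlist entry plus an 8-byte header.
 * For 1280KB this gives 10248 bytes, which is larger than an
 * order-1 (8KB) allocation and so needs an order-2 (16KB) one. */
static size_t sg_table_bytes(size_t io_kb)
{
    size_t nseg = io_kb / 4;    /* 4KB per segment */
    return nseg * 32 + 8;
}

/* Largest IO (in KB) whose sg table still fits in one 4KB page:
 * (4096 - 8) / 32 = 127 segments, i.e. 508KB. */
static size_t max_io_kb_for_one_page(void)
{
    size_t nseg = (4096 - 8) / 32;
    return nseg * 4;
}
```

The exact limits the driver picks differ from this sketch; the point is that bounding segments so the table fits in one page turns a fragile multi-order allocation into a single-page one.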
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Acked-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  8. Jun 20, 2018
    • nvmet: reset keep alive timer in controller enable · d68a90e1
      Max Gurtovoy authored

      Controllers that are not yet enabled should not really enforce keep alive
      timeouts, but we still want to track a timeout and clean up in case a host
      died before it enabled the controller.  Hence, simply reset the keep
      alive timer when the controller is enabled.
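The behavior reduces to re-arming one deadline. A toy model (the struct and function names are illustrative, not the nvmet API; `mod_timer()` is what the kernel would use):

```c
#include <assert.h>

/* Toy keep-alive timer: 'expires' is an absolute deadline in ticks,
 * 'kato' is the keep-alive timeout. */
struct ctrl {
    unsigned long expires;
    unsigned long kato;
};

/* Armed at controller allocation, so a host that dies before ever
 * enabling the controller still gets cleaned up. */
void ka_timer_start(struct ctrl *c, unsigned long now)
{
    c->expires = now + c->kato;
}

/* On controller enable, simply re-arm rather than enforce: the host
 * only starts sending keep-alives after enable. */
void ctrl_enable(struct ctrl *c, unsigned long now)
{
    c->expires = now + c->kato;     /* mod_timer() in the kernel */
}
```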
      
      Suggested-by: Max Gurtovoy <maxg@mellanox.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme-rdma: don't override opts->queue_size · 5e77d61c
      Sagi Grimberg authored

      That is a user argument, and theoretically controller limits can
      change over time (over reconnects/resets).  Instead, use the sqsize
      controller attribute to check queue depth boundaries, and use it for
      the tagset allocation.
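The rule in the message (keep the user's value untouched, bound by the controller's sqsize at each connect) reduces to a small clamp. This helper is hypothetical, not the driver's actual code; note that sqsize is a 0-based value per the NVMe spec, so capacity is sqsize + 1:

```c
/* Derive the effective queue depth each (re)connect without ever
 * writing back into opts->queue_size, so a later reconnect against
 * a controller with different limits still sees the user's value. */
static int effective_queue_depth(int opts_queue_size, int ctrl_sqsize)
{
    int cap = ctrl_sqsize + 1;  /* sqsize is 0-based */
    return opts_queue_size > cap ? cap : opts_queue_size;
}
```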
      
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme-rdma: Fix command completion race at error recovery · c947657b
      Israel Rukshin authored

      The race is between completing the request at error recovery work and
      rdma completions.  If we cancel the request before getting the good
      rdma completion we get a NULL deref of the request MR at
      nvme_rdma_process_nvme_rsp().
      
      When canceling the request we return its MR to the MR pool (setting
      mr to NULL) and also unmap its data.  Canceling requests while the
      rdma queues are active is not safe, because we may still get good
      rdma completions that use the mr pointer, which may then be NULL.
      Completing the request too soon may also lead to performing DMA
      to/from user buffers which might have already been unmapped.
      
      The commit fixes the race by draining the QP before starting the abort
      commands mechanism.
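The ordering can be modeled in userspace: once the QP is drained, no new completions run, so nulling the MR afterwards is safe. Everything here is a mock (the real fix calls `ib_drain_qp()`; the structs and function names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy request with an MR pointer, mirroring the hazard: a late RDMA
 * completion dereferences req->mr, which cancellation sets to NULL. */
struct req {
    void *mr;
    bool completed;
};

struct queue {
    bool drained;   /* no completions run after ib_drain_qp() */
};

/* Completion path: only runs while the queue is not drained. */
void rdma_completion(struct queue *q, struct req *r)
{
    if (q->drained)
        return;                 /* drained QP delivers no new work */
    assert(r->mr != NULL);      /* would oops if canceled first */
    r->completed = true;
}

/* Fixed teardown order: drain first, then cancel/unmap. */
void error_recovery(struct queue *q, struct req *r)
{
    q->drained = true;          /* ib_drain_qp() */
    r->mr = NULL;               /* return MR to pool, unmap data */
}
```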
      
      Signed-off-by: Israel Rukshin <israelr@mellanox.com>
      Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme-rdma: fix possible free of a non-allocated async event buffer · 94e42213
      Sagi Grimberg authored

      If nvme_rdma_configure_admin_queue fails before we have allocated
      the async event buffer, we will falsely free it because
      nvme_rdma_free_queue is freeing it. Fix it by allocating the buffer
      right after nvme_rdma_alloc_queue and freeing it right before
      nvme_rdma_free_queue, to maintain an orderly reverse cleanup sequence.
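The "reverse cleanup" pairing is the standard kernel goto-unwind idiom. A self-contained sketch with toy resources (nothing here is the real nvme-rdma API; only the ordering discipline is the point):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy resources standing in for the queue and the async event
 * buffer. */
static bool queue_allocd, buf_allocd;

static int alloc_queue(void)      { queue_allocd = true; return 0; }
static void free_queue(void)      { queue_allocd = false; }
static int alloc_async_buf(void)  { buf_allocd = true; return 0; }
static void free_async_buf(void)  { buf_allocd = false; }

/* Allocate the buffer right after the queue; on any later failure,
 * unwind in exact reverse order, so each error label frees only
 * what was actually allocated. */
int configure_admin_queue(bool fail_late)
{
    int ret = alloc_queue();
    if (ret)
        goto out;
    ret = alloc_async_buf();
    if (ret)
        goto out_free_queue;
    if (fail_late) {            /* some later setup step fails */
        ret = -1;
        goto out_free_buf;
    }
    return 0;
out_free_buf:
    free_async_buf();
out_free_queue:
    free_queue();
out:
    return ret;
}
```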
      
      Reported-by: Israel Rukshin <israelr@mellanox.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme-rdma: fix possible double free condition when failing to create a controller · 3d064101
      Sagi Grimberg authored

      Failures after nvme_init_ctrl will defer resource cleanups to .free_ctrl
      when the reference is released, hence we should not free the controller
      queues for these failures.
      
      Fix that by moving the controller queues allocation before controller
      initialization, correctly freeing them for failures that occur before
      initialization, and skipping them for failures after initialization.
      
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • Revert "block: Add warning for bi_next not NULL in bio_endio()" · 9c24c10a
      Bart Van Assche authored
      Commit 0ba99ca4 ("block: Add warning for bi_next not NULL in
      bio_endio()") breaks the dm driver. end_clone_bio() detects whether
      or not a bio is the last bio associated with a request by checking
      the .bi_next field. Commit 0ba99ca4 clears that field before
      end_clone_bio() has had a chance to inspect that field. Hence revert
      commit 0ba99ca4.
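The dependency the revert restores is small enough to show in isolation. This is a mock, not dm's actual code: the only claim taken from the message is that end_clone_bio() uses .bi_next to detect the last bio of a request, so clearing the field before that check breaks the detection:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal bio chain: a request carries a singly linked list of
 * bios, and the last one is recognized by bi_next == NULL. */
struct bio {
    struct bio *bi_next;
};

/* What end_clone_bio() relies on; a bio_endio() that zeroes
 * bi_next first would make this return true for every bio. */
bool is_last_bio(const struct bio *bio)
{
    return bio->bi_next == NULL;
}
```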
      
      This patch avoids the following KASAN complaint when running the
      srp-test software (srp-test/run_tests -c -d -r 10 -t 02-mq):
      
      ==================================================================
      BUG: KASAN: use-after-free in bio_advance+0x11b/0x1d0
      Read of size 4 at addr ffff8801300e06d0 by task ksoftirqd/0/9
      
      CPU: 0 PID: 9 Comm: ksoftirqd/0 Not tainted 4.18.0-rc1-dbg+ #1
      Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.0.0-prebuilt.qemu-project.org 04/01/2014
      Call Trace:
       dump_stack+0xa4/0xf5
       print_address_description+0x6f/0x270
       kasan_report+0x241/0x360
       __asan_load4+0x78/0x80
       bio_advance+0x11b/0x1d0
       blk_update_request+0xa7/0x5b0
       scsi_end_request+0x56/0x320 [scsi_mod]
       scsi_io_completion+0x7d6/0xb20 [scsi_mod]
       scsi_finish_command+0x1c0/0x280 [scsi_mod]
       scsi_softirq_done+0x19a/0x230 [scsi_mod]
       blk_mq_complete_request+0x160/0x240
       scsi_mq_done+0x50/0x1a0 [scsi_mod]
       srp_recv_done+0x515/0x1330 [ib_srp]
       __ib_process_cq+0xa0/0xf0 [ib_core]
       ib_poll_handler+0x38/0xa0 [ib_core]
       irq_poll_softirq+0xe8/0x1f0
       __do_softirq+0x128/0x60d
       run_ksoftirqd+0x3f/0x60
       smpboot_thread_fn+0x352/0x460
       kthread+0x1c1/0x1e0
       ret_from_fork+0x24/0x30
      
      Allocated by task 1918:
       save_stack+0x43/0xd0
       kasan_kmalloc+0xad/0xe0
       kasan_slab_alloc+0x11/0x20
       kmem_cache_alloc+0xfe/0x350
       mempool_alloc_slab+0x15/0x20
       mempool_alloc+0xfb/0x270
       bio_alloc_bioset+0x244/0x350
       submit_bh_wbc+0x9c/0x2f0
       __block_write_full_page+0x299/0x5a0
       block_write_full_page+0x16b/0x180
       blkdev_writepage+0x18/0x20
       __writepage+0x42/0x80
       write_cache_pages+0x376/0x8a0
       generic_writepages+0xbe/0x110
       blkdev_writepages+0xe/0x10
       do_writepages+0x9b/0x180
       __filemap_fdatawrite_range+0x178/0x1c0
       file_write_and_wait_range+0x59/0xc0
       blkdev_fsync+0x46/0x80
       vfs_fsync_range+0x66/0x100
       do_fsync+0x3d/0x70
       __x64_sys_fsync+0x21/0x30
       do_syscall_64+0x77/0x230
       entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
      Freed by task 9:
       save_stack+0x43/0xd0
       __kasan_slab_free+0x137/0x190
       kasan_slab_free+0xe/0x10
       kmem_cache_free+0xd3/0x380
       mempool_free_slab+0x17/0x20
       mempool_free+0x63/0x160
       bio_free+0x81/0xa0
       bio_put+0x59/0x60
       end_bio_bh_io_sync+0x5d/0x70
       bio_endio+0x1a7/0x360
       blk_update_request+0xd0/0x5b0
       end_clone_bio+0xa3/0xd0 [dm_mod]
       bio_endio+0x1a7/0x360
       blk_update_request+0xd0/0x5b0
       scsi_end_request+0x56/0x320 [scsi_mod]
       scsi_io_completion+0x7d6/0xb20 [scsi_mod]
       scsi_finish_command+0x1c0/0x280 [scsi_mod]
       scsi_softirq_done+0x19a/0x230 [scsi_mod]
       blk_mq_complete_request+0x160/0x240
       scsi_mq_done+0x50/0x1a0 [scsi_mod]
       srp_recv_done+0x515/0x1330 [ib_srp]
       __ib_process_cq+0xa0/0xf0 [ib_core]
       ib_poll_handler+0x38/0xa0 [ib_core]
       irq_poll_softirq+0xe8/0x1f0
       __do_softirq+0x128/0x60d
      
      The buggy address belongs to the object at ffff8801300e0640
       which belongs to the cache bio-0 of size 200
      The buggy address is located 144 bytes inside of
       200-byte region [ffff8801300e0640, ffff8801300e0708)
      The buggy address belongs to the page:
      page:ffffea0004c03800 count:1 mapcount:0 mapping:ffff88015a563a00 index:0x0 compound_mapcount: 0
      flags: 0x8000000000008100(slab|head)
      raw: 8000000000008100 dead000000000100 dead000000000200 ffff88015a563a00
      raw: 0000000000000000 0000000000330033 00000001ffffffff 0000000000000000
      page dumped because: kasan: bad access detected
      
      Memory state around the buggy address:
       ffff8801300e0580: fb fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc
       ffff8801300e0600: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
      >ffff8801300e0680: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                                       ^
       ffff8801300e0700: fb fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
       ffff8801300e0780: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      ==================================================================
      
      Cc: Kent Overstreet <kent.overstreet@gmail.com>
      Fixes: 0ba99ca4 ("block: Add warning for bi_next not NULL in bio_endio()")
      Acked-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block: fix timeout changes for legacy request drivers · 0cc61e64
      Christoph Hellwig authored
      blk_mq_complete_request can only be called for blk-mq drivers, but when
      removing the BLK_EH_HANDLED return value, two legacy request timeout
      methods incorrectly got switched to call blk_mq_complete_request.
      Call __blk_complete_request instead to reinstate the previous
      behavior. For that, __blk_complete_request needs to be exported.
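A driver whose queue may be either legacy or blk-mq has to pick the completion helper by path. The two completion function names come from the message; the structs and the dispatch helper below are a mock of the pattern, not the fixed drivers' code:

```c
#include <assert.h>
#include <stddef.h>

/* Toy queue: a non-NULL mq_ops means blk-mq, NULL means legacy. */
struct request_queue { const void *mq_ops; };
struct request { struct request_queue *q; int completed_via; };

enum { VIA_BLK_MQ = 1, VIA_LEGACY = 2 };

static void blk_mq_complete_request(struct request *rq)
{
    rq->completed_via = VIA_BLK_MQ;     /* blk-mq only */
}

static void __blk_complete_request(struct request *rq)
{
    rq->completed_via = VIA_LEGACY;     /* legacy request path */
}

/* What a timeout handler shared between both paths must do:
 * dispatch on the queue type instead of unconditionally calling
 * the blk-mq helper. */
void complete_from_timeout(struct request *rq)
{
    if (rq->q->mq_ops)
        blk_mq_complete_request(rq);
    else
        __blk_complete_request(rq);
}
```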
      
      Fixes: 1fc2b62e ("scsi_transport_fc: complete requests from ->timeout")
      Fixes: 0df0bb08 ("null_blk: complete requests from ->timeout")
      Reported-by: Jianchao Wang <jianchao.w.wang@oracle.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  10. Jun 14, 2018
    • blk-mq: remove blk_mq_tagset_iter · e6c3456a
      Christoph Hellwig authored

      Unused now that nvme stopped using it.
      
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Jens Axboe <axboe@kernel.dk>
    • nvme: remove nvme_reinit_tagset · 14dfa400
      Christoph Hellwig authored

      Unused now that all transports stopped using it.
      
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Jens Axboe <axboe@kernel.dk>
    • nvme-fc: fix nulling of queue data on reconnect · 3e493c00
      James Smart authored

      The reconnect path is calling the init routines to clear a queue
      structure. But the queue structure has state that perhaps needs
      to persist as long as the controller is live.
      
      Remove the nvme_fc_init_queue() calls on reconnect.
      The nvme_fc_free_queue() calls will clear state bits and reset
      any relevant queue state for a new connection.
      
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme-fc: remove reinit_request routine · 587331f7
      James Smart authored

      The reinit_request routine is not necessary. Remove support for the
      op callback.
      
      As all that nvme_reinit_tagset() does is iterate and call the
      reinit routine, it too has no purpose. Remove the call.
      
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • blk-mq: don't time out requests again that are in the timeout handler · da661267
      Christoph Hellwig authored
      We can currently call the timeout handler again on a request that has
      already been handed over to the timeout handler.  Prevent that with a new
      flag.
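The shape of the fix is a claim-once flag: the first timer firing claims the request, later firings see the flag and bail out. A mock of that pattern (struct and names are illustrative; the patch's actual flag lives in the request's flags word):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy request carrying the new flag plus a counter so the test can
 * observe how often the driver ->timeout() would have run. */
struct request {
    bool timed_out;     /* set once handed to the timeout handler */
    int handler_calls;
};

void blk_mq_rq_timed_out(struct request *rq)
{
    if (rq->timed_out)
        return;             /* already in the timeout handler */
    rq->timed_out = true;
    rq->handler_calls++;    /* driver ->timeout() would run here */
}
```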
      
      Fixes: 12f5b931 ("blk-mq: Remove generation seqeunce")
      Reported-by: Andrew Randrianasulu <randrianasulu@gmail.com>
      Tested-by: Andrew Randrianasulu <randrianasulu@gmail.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • nvme-fc: change controllers first connect to use reconnect path · 4c984154
      James Smart authored

      Current code follows the framework that has been in the transports
      from the beginning where initial link-side controller connect occurs
      as part of "creating the controller". Thus that first connect fully
      talks to the controller and obtains values that can then be used
      for blk-mq setup, etc. It also means that everything about the
      controller is fully known before the "create controller" call returns.
      
      This has several weaknesses:
      - The initial create_ctrl call made by the CLI will block for a long
        time as wire transactions are performed synchronously. This delay
        becomes longer if errors occur or connectivity is lost and retries
        need to be performed.
      - Code wise, it means there is a separate connect path for initial
        controller connect vs the (same) steps used in the reconnect path.
      - And as there are separate paths, there is separate error handling
        and retry logic. It also plays havoc with the NEW state (which
        should transition out of it after a successful initial connect) vs
        the RESETTING and CONNECTING (reconnect) states that want to be
        transitioned to on error.
      - As there are separate paths, recovering from errors and disruptions
        requires separate recovery/retry paths as well, which can severely
        convolute the controller state.
      
      This patch reworks the fc transport to use the same connect paths
      for the initial connection as it uses for reconnect. This makes a
      single path for error recovery and handling.
      
      This patch:
      - Removes the driving of the initial connect and replaces it with
        a state transition to CONNECTING and initiating the reconnect
        thread. A dummy state transition to RESETTING had to be traversed
        as a direct transition of NEW->CONNECTING is not allowed. Given
        that the controller is "new", the RESETTING transition is a simple
        no-op. Once in the reconnecting thread, the normal behaviors of
        ctrl_loss_tmo (max_retries * connect_delay) and dev_loss_tmo will
        apply before the controller is torn down.
      - Only if the state transitions cannot be traversed and the
        reconnect thread cannot be scheduled will the controller be torn
        down while still in create_ctrl.
      - The prior code used the controller state of NEW to indicate
        whether request queues had been initialized or not. For the admin
        queue, the request queue is always created, so there's no need to
        check a state. For IO queues, change to tracking whether a successful
        io request queue create has occurred (e.g. 1st successful connect).
      - The initial controller id is initialized to the dynamic controller
        id used in the initial connect message. It will be overwritten by
        the real controller id once the controller is connected on the wire.
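The dummy transition in the first bullet can be modeled with a cut-down state table. This mirrors only what the message states (NEW->CONNECTING is rejected, so first connect goes NEW->RESETTING->CONNECTING just as reconnect does); it is not the full nvme controller state machine:

```c
#include <assert.h>
#include <stdbool.h>

enum state { NEW, RESETTING, CONNECTING, LIVE };

/* Attempt a transition; returns true and updates *cur only when the
 * move is allowed by this cut-down table. */
bool change_state(enum state *cur, enum state next)
{
    bool ok = false;
    switch (next) {
    case RESETTING:
        /* a "new" controller traverses this as a no-op */
        ok = (*cur == NEW || *cur == LIVE);
        break;
    case CONNECTING:
        ok = (*cur == RESETTING);   /* NEW->CONNECTING rejected */
        break;
    case LIVE:
        ok = (*cur == CONNECTING);
        break;
    default:
        break;
    }
    if (ok)
        *cur = next;
    return ok;
}
```

With this table, first connect and reconnect share one path: both reach CONNECTING via RESETTING, so one set of error handling covers both.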
      
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>