  1. Feb 12, 2021
    • Merge tag 'nvme-5.12-2021-02-11' of git://git.infradead.org/nvme into for-5.12/drivers · 65fb1b0d
      Jens Axboe authored
      Pull NVMe updates from Christoph:
      
      "nvme updates for 5.12:
      
       - fix multipath handling of ->queue_rq errors (Chao Leng)
       - nvmet cleanups (Chaitanya Kulkarni)
       - add a quirk for buggy Amazon controller (Filippo Sironi)
       - avoid devm allocations in nvme-hwmon that don't interact well with
         fabrics (Hannes Reinecke)
       - sysfs cleanups (Jiapeng Chong)
       - fix nr_zones for multipath (Keith Busch)
       - nvme-tcp crash fix for no-data commands (Sagi Grimberg)
       - nvmet-tcp fixes (Sagi Grimberg)
       - add a missing __rcu annotation (me)"
      
      * tag 'nvme-5.12-2021-02-11' of git://git.infradead.org/nvme: (22 commits)
        nvme-tcp: fix crash triggered with a dataless request submission
        nvme: add 48-bit DMA address quirk for Amazon NVMe controllers
        nvme-hwmon: rework to avoid devm allocation
        nvmet: remove else at the end of the function
        nvmet: add nvmet_req_subsys() helper
        nvmet: use min of device_path and disk len
        nvmet: use invalid cmd opcode helper
        nvmet: use invalid cmd opcode helper
        nvmet: add helper to report invalid opcode
        nvmet: remove extra variable in id-ns handler
        nvmet: make nvmet_find_namespace() req based
        nvmet: return uniform error for invalid ns
        nvmet: set status to 0 in case for invalid nsid
        nvmet-fc: add a missing __rcu annotation to nvmet_fc_tgt_assoc.queues
        nvme-multipath: set nr_zones for zoned namespaces
        nvmet-tcp: fix potential race of tcp socket closing accept_work
        nvmet-tcp: fix receive data digest calculation for multiple h2cdata PDUs
        nvme-rdma: handle nvme_rdma_post_send failures better
        nvme-fabrics: avoid double completions in nvmf_fail_nonready_command
        nvme: introduce a nvme_host_path_error helper
        ...
      65fb1b0d
  2. Feb 11, 2021
    • nvme-tcp: fix crash triggered with a dataless request submission · e11e5116
      Sagi Grimberg authored
      Write-zeroes has a bio, but does not have any data buffers associated
      with it, so we should not initialize the request iter for it (which
      attempts to reference bi_io_vec and crashes).
      --
       run blktests nvme/012 at 2021-02-05 21:53:34
       BUG: kernel NULL pointer dereference, address: 0000000000000008
       #PF: supervisor read access in kernel mode
       #PF: error_code(0x0000) - not-present page
       PGD 0 P4D 0
       Oops: 0000 [#1] SMP NOPTI
       CPU: 15 PID: 12069 Comm: kworker/15:2H Tainted: G S        I       5.11.0-rc6+ #1
       Hardware name: Dell Inc. PowerEdge R640/06NR82, BIOS 2.10.0 11/12/2020
       Workqueue: kblockd blk_mq_run_work_fn
       RIP: 0010:nvme_tcp_init_iter+0x7d/0xd0 [nvme_tcp]
       RSP: 0018:ffffbd084447bd18 EFLAGS: 00010246
       RAX: 0000000000000000 RBX: ffffa0bba9f3ce80 RCX: 0000000000000000
       RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000002000000
       RBP: ffffa0ba8ac6fec0 R08: 0000000002000000 R09: 0000000000000000
       R10: 0000000002800809 R11: 0000000000000000 R12: 0000000000000000
       R13: ffffa0bba9f3cf90 R14: 0000000000000000 R15: 0000000000000000
       FS:  0000000000000000(0000) GS:ffffa0c9ff9c0000(0000) knlGS:0000000000000000
       CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
       CR2: 0000000000000008 CR3: 00000001c9c6c005 CR4: 00000000007706e0
       DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
       DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
       PKRU: 55555554
       Call Trace:
        nvme_tcp_queue_rq+0xef/0x330 [nvme_tcp]
        blk_mq_dispatch_rq_list+0x11c/0x7c0
        ? blk_mq_flush_busy_ctxs+0xf6/0x110
        __blk_mq_sched_dispatch_requests+0x12b/0x170
        blk_mq_sched_dispatch_requests+0x30/0x60
        __blk_mq_run_hw_queue+0x2b/0x60
        process_one_work+0x1cb/0x360
        ? process_one_work+0x360/0x360
        worker_thread+0x30/0x370
        ? process_one_work+0x360/0x360
        kthread+0x116/0x130
        ? kthread_park+0x80/0x80
        ret_from_fork+0x1f/0x30
      --
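      A hedged sketch of the kind of guard this implies (the helper name
      nvme_tcp_setup_data is hypothetical; the real patch adjusts the
      existing setup path): only initialize the iter when the request
      actually carries payload segments.

          /* hypothetical helper, sketching the guard described above */
          static void nvme_tcp_setup_data(struct request *rq)
          {
                  if (!blk_rq_nr_phys_segments(rq))
                          return; /* dataless (e.g. write-zeroes): no bvecs */

                  /* safe now: the bio has at least one bvec to reference */
                  nvme_tcp_init_iter(blk_mq_rq_to_pdu(rq), rq_data_dir(rq));
          }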
      
      Fixes: cb9b870f ("nvme-tcp: fix wrong setting of request iov_iter")
      Reported-by: Yi Zhang <yi.zhang@redhat.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Tested-by: Yi Zhang <yi.zhang@redhat.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      e11e5116
    • block: Replace lkml.org links with lore · 59788683
      Kees Cook authored
      As started by commit 05a5f51c ("Documentation: Replace lkml.org
      links with lore"), replace lkml.org links with lore to better use a
      single source that's more likely to stay available long-term.
      
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      59788683
    • nbd: Convert to DEFINE_SHOW_ATTRIBUTE · a2d52a6c
      Liao Pingfang authored
      Use the DEFINE_SHOW_ATTRIBUTE macro to simplify the code.
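      For reference, a generic sketch of what the macro buys (foo is
      illustrative, not the actual nbd function):

          /* DEFINE_SHOW_ATTRIBUTE(foo) expects a foo_show(seq_file, data)
           * and generates foo_open() plus a foo_fops wired up with
           * single_open/seq_read/single_release. */
          static int foo_show(struct seq_file *s, void *unused)
          {
                  seq_puts(s, "hello\n");
                  return 0;
          }
          DEFINE_SHOW_ATTRIBUTE(foo);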
      
      Signed-off-by: Liao Pingfang <winndows@163.com>
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      a2d52a6c
  3. Feb 10, 2021
    • nvme: add 48-bit DMA address quirk for Amazon NVMe controllers · 4bdf2603
      Filippo Sironi authored
      Some Amazon NVMe controllers do not follow the NVMe specification
      and are limited to 48-bit DMA addresses.  Add a quirk to force
      bounce buffering if needed and limit the IOVA allocation for these
      devices.
      
      This affects all current Amazon NVMe controllers that expose EBS
      volumes (0x0061, 0x0065, 0x8061) and local instance storage
      (0xcd00, 0xcd01, 0xcd02).
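      A hedged sketch of the probe-time effect, assuming the quirk flag
      is named NVME_QUIRK_DMA_ADDRESS_BITS_48 (the plumbing here is
      simplified):

          /* assumption: quirk flag named NVME_QUIRK_DMA_ADDRESS_BITS_48 */
          int bits = (dev->ctrl.quirks & NVME_QUIRK_DMA_ADDRESS_BITS_48) ?
                          48 : 64;

          if (dma_set_mask_and_coherent(dev->dev, DMA_BIT_MASK(bits)))
                  return -ENODEV;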
      
      Signed-off-by: Filippo Sironi <sironi@amazon.de>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      4bdf2603
    • nvme-hwmon: rework to avoid devm allocation · ed7770f6
      Hannes Reinecke authored
      The original design of using device-managed resource allocation
      doesn't really work, as the NVMe controller has a vastly different
      lifetime than the hwmon sysfs attributes, causing warnings about
      duplicate sysfs entries upon reconnection.
      This patch reworks the hwmon allocation to avoid device-managed
      resource allocation, and uses the NVMe controller as the parent for
      the sysfs attributes.
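      A hedged sketch of the resulting init path (struct and field names
      here are assumptions based on the driver's hwmon code, not the
      literal patch):

          int nvme_hwmon_init(struct nvme_ctrl *ctrl)
          {
                  struct nvme_hwmon_data *data;
                  struct device *hwmon;

                  /* plain kzalloc instead of devm_kzalloc: lifetime is
                   * now managed explicitly by the controller teardown */
                  data = kzalloc(sizeof(*data), GFP_KERNEL);
                  if (!data)
                          return 0;

                  data->ctrl = ctrl;
                  hwmon = hwmon_device_register_with_info(ctrl->device,
                                  "nvme", data, &nvme_hwmon_chip_info, NULL);
                  if (IS_ERR(hwmon)) {
                          kfree(data);
                          return PTR_ERR(hwmon);
                  }
                  /* unregistered and freed when the controller goes away */
                  ctrl->hwmon_device = hwmon;
                  return 0;
          }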
      
      Cc: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Hannes Reinecke <hare@suse.de>
      Tested-by: Enzo Matsumiya <ematsumiya@suse.de>
      Tested-by: Daniel Wagner <dwagner@suse.de>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      ed7770f6
    • nvmet: remove else at the end of the function · 295a39f5
      Chaitanya Kulkarni authored
      The function nvmet_parse_io_cmd() returns the value from
      nvmet_file_parse_io_cmd() or nvmet_bdev_parse_io_cmd() based on which
      backend is set for the request. Remove the else and just return the
      value from nvmet_bdev_parse_io_cmd().
      
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      295a39f5
    • nvmet: add nvmet_req_subsys() helper · 20c2c3bb
      Chaitanya Kulkarni authored
      Just like the helper we have to get the passthru ctrl from the req,
      add a helper to get the subsystem associated with the nvmet_req
      instead of open coding the chain of structures.
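      The helper presumably reduces to the obvious pointer chase (hedged
      reconstruction):

          static inline struct nvmet_subsys *nvmet_req_subsys(struct nvmet_req *req)
          {
                  return req->sq->ctrl->subsys;
          }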
      
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      20c2c3bb
    • nvmet: use min of device_path and disk len · d86481e9
      Chaitanya Kulkarni authored
      In __assign_req_name(), instead of using DISK_NAME_LEN as the
      strncpy() bound, use the minimum of DISK_NAME_LEN and
      strlen(req->ns->device_path).

      This is needed to turn off the following warnings:
      
      In file included from drivers/nvme/target/core.c:14:
      In function ‘__assign_req_name’,
          inlined from ‘trace_event_raw_event_nvmet_req_init’ at drivers/nvme/target/./trace.h:58:1:
      drivers/nvme/target/trace.h:52:3: warning: ‘strncpy’ specified bound 32 equals destination size [-Wstringop-truncation]
         strncpy(name, req->ns->device_path, DISK_NAME_LEN);
         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      In function ‘__assign_req_name’,
          inlined from ‘perf_trace_nvmet_req_complete’ at drivers/nvme/target/./trace.h:100:1:
      drivers/nvme/target/trace.h:52:3: warning: ‘strncpy’ specified bound 32 equals destination size [-Wstringop-truncation]
         strncpy(name, req->ns->device_path, DISK_NAME_LEN);
         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      In function ‘__assign_req_name’,
          inlined from ‘perf_trace_nvmet_req_init’ at drivers/nvme/target/./trace.h:58:1:
      drivers/nvme/target/trace.h:52:3: warning: ‘strncpy’ specified bound 32 equals destination size [-Wstringop-truncation]
         strncpy(name, req->ns->device_path, DISK_NAME_LEN);
         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      In function ‘__assign_req_name’,
          inlined from ‘trace_event_raw_event_nvmet_req_complete’ at drivers/nvme/target/./trace.h:100:1:
      drivers/nvme/target/trace.h:52:3: warning: ‘strncpy’ specified bound 32 equals destination size [-Wstringop-truncation]
         strncpy(name, req->ns->device_path, DISK_NAME_LEN);
         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
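      A hedged sketch of the resulting copy (min_t is the kernel's typed
      min):

          strncpy(name, req->ns->device_path,
                  min_t(size_t, DISK_NAME_LEN, strlen(req->ns->device_path)));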
      
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      d86481e9
    • nvmet: use invalid cmd opcode helper · 07116ea5
      Chaitanya Kulkarni authored
      In the NVMeOF block device backend, file backend, and passthru backend
      we reject and report the commands if the opcode is not handled.

      Use the previously introduced helper in the passthru backend to make
      the error message uniform.
      
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      07116ea5
    • nvmet: use invalid cmd opcode helper · 1c2c7613
      Chaitanya Kulkarni authored
      In the NVMeOF block device backend, file backend, and passthru backend
      we reject and report the commands if the opcode is not handled.

      Use the previously introduced helper in the file backend to reduce the
      duplicate code and make the error message uniform.
      
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      1c2c7613
    • nvmet: add helper to report invalid opcode · d81d57cf
      Chaitanya Kulkarni authored
      In the NVMeOF block device backend, file backend, and passthru backend
      we reject and report the commands if the opcode is not handled.

      Add a helper and use it in the block device backend to keep the code
      and error message uniform.
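      A plausible form of the helper (hedged reconstruction): log the
      opcode, record the error location, and return the uniform status.

          static u16 nvmet_report_invalid_opcode(struct nvmet_req *req)
          {
                  pr_debug("unhandled cmd %d on qid %d\n",
                           req->cmd->common.opcode, req->sq->qid);

                  req->error_loc = offsetof(struct nvme_common_command, opcode);
                  return NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
          }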
      
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      d81d57cf
    • nvmet: remove extra variable in id-ns handler · 3999434b
      Chaitanya Kulkarni authored
      In nvmet_execute_identify_ns() the local variable ctrl is accessed
      only in one place; remove it and use req->sq->ctrl directly.
      
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      3999434b
    • nvmet: make nvmet_find_namespace() req based · 3a1f7c79
      Chaitanya Kulkarni authored
      The six callers of nvmet_find_namespace() duplicate the error log page
      update and status setting code for each call on failure.

      All callers are nvmet request based functions, so we can pass req
      to nvmet_find_namespace() and derive ctrl from req, which allows us
      to update the error log page in nvmet_find_namespace(). Now that we
      pass the request we can also get rid of the local variable in
      nvmet_find_namespace(), use req->ns, and return the error code.

      Replace the ctrl parameter with nvmet_req for nvmet_find_namespace(),
      centralize the error log page update for non-allocated namespaces, and
      return a uniform error for a non-allocated namespace.

      nvmet_find_namespace() takes an nsid parameter which comes from NVMe
      command structures such as get_log_page, identify, rw and common. All
      these commands have the same offset for the nsid field.

      Derive the nsid from req->cmd->common.nsid and remove the extra
      parameter from nvmet_find_namespace().

      Lastly, now that we associate the ns with the req parameter passed to
      nvmet_find_namespace(), rename nvmet_find_namespace() to
      nvmet_req_find_ns().
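      A hedged sketch of the resulting helper (refcounting elided; the
      xarray lookup matches how nvmet stores namespaces):

          static u16 nvmet_req_find_ns(struct nvmet_req *req)
          {
                  u32 nsid = le32_to_cpu(req->cmd->common.nsid);

                  req->ns = xa_load(&nvmet_req_subsys(req)->namespaces, nsid);
                  if (unlikely(!req->ns)) {
                          /* the one centralized error log page update */
                          req->error_loc =
                                  offsetof(struct nvme_common_command, nsid);
                          return NVME_SC_INVALID_NS | NVME_SC_DNR;
                  }
                  return NVME_SC_SUCCESS;
          }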
      
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      3a1f7c79
    • nvmet: return uniform error for invalid ns · aa0aff60
      Chaitanya Kulkarni authored
      For the nvmet_find_namespace() error case we have inconsistent error
      code mapping in nvmet_get_smart_log_nsid() and
      nvmet_set_feat_write_protect().

      There is no point in the host retrying for an invalid namespace. Set
      the error code to NVME_SC_INVALID_NS | NVME_SC_DNR, which matches
      what we have in nvmet_execute_identify_desclist().
      
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      aa0aff60
    • nvmet: set status to 0 in case for invalid nsid · 40244ad3
      Chaitanya Kulkarni authored
      For an unallocated namespace in nvmet_execute_identify_ns(), don't
      set the status to NVME_SC_INVALID_NS; set it to zero.
      
      Fixes: bffcd507 ("nvmet: set right status on error in id-ns handler")
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      40244ad3
    • nvmet-fc: add a missing __rcu annotation to nvmet_fc_tgt_assoc.queues · b5df8e79
      Christoph Hellwig authored
      Make sparse happy after the recent conversion to RCU lookups.
      
      Fixes: 4e2f02bf ("nvmet-fc: use RCU proctection for assoc_list")
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: James Smart <james.smart@broadcom.com>
      b5df8e79
    • nvme-multipath: set nr_zones for zoned namespaces · 73a1a229
      Keith Busch authored
      The bio based drivers only require that the request_queue's nr_zones
      is set, so set this field in the head if the namespace path is zoned.
      
      Fixes: 240e6ee2 ("nvme: support for zoned namespaces")
      Reported-by: Minwoo Im <minwoo.im.dev@gmail.com>
      Cc: Damien Le Moal <damien.lemoal@wdc.com>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      73a1a229
    • nvmet-tcp: fix potential race of tcp socket closing accept_work · 0fbcfb08
      Sagi Grimberg authored
      When we accept a TCP connection and allocate an nvmet-tcp queue, we
      should make sure not to fully establish it or reference it, as the
      connection may already be closing, which triggers the queue release
      work, which does not fence against queue establishment.

      In order to address such a race, we make sure to check the sk_state
      and take the queue reference underneath the sk_callback_lock, such
      that the queue release work correctly fences against it.
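      Sketched, the establishment side looks roughly like this (hedged;
      variable names are illustrative):

          write_lock_bh(&sk->sk_callback_lock);
          if (sk->sk_state != TCP_ESTABLISHED) {
                  /* peer is already closing; don't reference the queue */
                  ret = -ENOTCONN;
          } else {
                  sk->sk_user_data = queue;
                  /* install data_ready/state_change callbacks here */
                  ret = 0;
          }
          write_unlock_bh(&sk->sk_callback_lock);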
      
      Fixes: 872d26a3 ("nvmet-tcp: add NVMe over TCP target driver")
      Reported-by: Elad Grupi <elad.grupi@dell.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      0fbcfb08
    • nvmet-tcp: fix receive data digest calculation for multiple h2cdata PDUs · fda871c0
      Sagi Grimberg authored
      When a host sends multiple h2cdata PDUs for a single command, we
      should verify the data digest calculation per PDU and not
      per command.
      
      Fixes: 872d26a3 ("nvmet-tcp: add NVMe over TCP target driver")
      Reported-by: Narayan Ayalasomayajula <Narayan.Ayalasomayajula@wdc.com>
      Tested-by: Narayan Ayalasomayajula <Narayan.Ayalasomayajula@wdc.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      fda871c0
    • nvme-rdma: handle nvme_rdma_post_send failures better · 62eca397
      Chao Leng authored
      nvme_rdma_post_send failing is a path related error and should bounce
      to another path when using nvme-multipath.  Call nvme_host_path_error
      when nvme_rdma_post_send returns -EIO to ensure nvme_complete_rq gets
      invoked to fail over to another path if there is one.
      
      Signed-off-by: Chao Leng <lengchao@huawei.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      62eca397
    • nvme-fabrics: avoid double completions in nvmf_fail_nonready_command · ea5e5f42
      Chao Leng authored
      When reconnecting, the request may be completed with
      NVME_SC_HOST_PATH_ERROR in nvmf_fail_nonready_command, which currently
      sets the state of the request to MQ_RQ_IN_FLIGHT before calling
      nvme_complete_rq.  When this happens for a request that is freed by
      the caller, such as nvme_submit_user_cmd, in the worst case the
      request could be completed again in the teardown process.
      
      Instead of calling blk_mq_start_request from nvmf_fail_nonready_command,
      just use the new nvme_host_path_error helper to complete the command
      without starting it.
      
      Signed-off-by: Chao Leng <lengchao@huawei.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      ea5e5f42
    • nvme: introduce a nvme_host_path_error helper · dda3248e
      Chao Leng authored
      When using nvme native multipathing, if a path related error occurs
      during ->queue_rq, the request needs to be completed with
      NVME_SC_HOST_PATH_ERROR so that it can be failed over.

      Introduce a helper to complete the command from ->queue_rq in a way
      that invokes nvme_complete_rq.
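      A hedged reconstruction of the helper as described (the return value
      lets ->queue_rq callers return BLK_STS_OK after the completion):

          blk_status_t nvme_host_path_error(struct request *req)
          {
                  nvme_req(req)->status = NVME_SC_HOST_PATH_ERROR;
                  blk_mq_set_request_complete(req);
                  nvme_complete_rq(req);
                  return BLK_STS_OK;
          }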
      
      Signed-off-by: Chao Leng <lengchao@huawei.com>
      [hch: renamed, added a return value to clean up the callers a bit]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      dda3248e
    • blk-mq: introduce blk_mq_set_request_complete · 83fba8c8
      Chao Leng authored
      nvme drivers need to set the state of a request to MQ_RQ_COMPLETE
      when directly completing it in ->queue_rq, so add
      blk_mq_set_request_complete.
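      The helper is presumably a one-liner over the blk-mq request state
      machine (hedged sketch):

          static inline void blk_mq_set_request_complete(struct request *rq)
          {
                  WRITE_ONCE(rq->state, MQ_RQ_COMPLETE);
          }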
      
      Signed-off-by: Chao Leng <lengchao@huawei.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      83fba8c8
    • nvme: convert sysfs sprintf/snprintf family to sysfs_emit · f720a8ed
      Jiapeng Chong authored
      Fix the following coccicheck warning:
      
      ./drivers/nvme/host/core.c:3580:8-16: WARNING: use scnprintf or sprintf.
      ./drivers/nvme/host/core.c:3570:8-16: WARNING: use scnprintf or sprintf.
      ./drivers/nvme/host/core.c:3560:8-16: WARNING: use scnprintf or sprintf.
      ./drivers/nvme/host/core.c:3526:8-16: WARNING: use scnprintf or sprintf.
      ./drivers/nvme/host/core.c:2833:8-16: WARNING: use scnprintf or sprintf.
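      The conversion pattern, shown on a made-up attribute (illustrative,
      not one of the five call sites above):

          /* before */
          static ssize_t foo_show(struct device *dev,
                                  struct device_attribute *attr, char *buf)
          {
                  return sprintf(buf, "%d\n", 42);
          }

          /* after: sysfs_emit() knows buf is a PAGE_SIZE sysfs buffer
           * and guards against overruns and misuse */
          static ssize_t foo_show(struct device *dev,
                                  struct device_attribute *attr, char *buf)
          {
                  return sysfs_emit(buf, "%d\n", 42);
          }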
      
      Reported-by: Abaci Robot <abaci@linux.alibaba.com>
      Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      f720a8ed
    • bcache: Avoid comma separated statements · 6751c1e3
      Joe Perches authored
      Use semicolons and braces.
      
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      6751c1e3
    • bcache: Move journal work to new flush wq · afe78ab4
      Kai Krakow authored
      This is potentially long running and not latency sensitive; let's get
      it out of the way of other latency sensitive events.

      As observed in the previous commit, the `system_wq` easily becomes
      congested by bcache, and this fixes a few more stalls I was observing
      every once in a while.

      Let's not make this `WQ_MEM_RECLAIM`, as that was shown to reduce
      performance of boot and file system operations in my tests. Also,
      without `WQ_MEM_RECLAIM`, I no longer see desktop stalls. This
      matches the previous behavior, as `system_wq` also does no memory
      reclaim:
      
      > // workqueue.c:
      > system_wq = alloc_workqueue("events", 0, 0);
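      A hedged sketch of the shape of the change; the workqueue name and
      the exact call site are assumptions on my part:

          /* dedicated queue, same non-reclaiming flags as system_wq */
          bch_flush_wq = alloc_workqueue("bch_flush", 0, 0);

          /* journal flush work queued here instead of on system_wq */
          queue_delayed_work(bch_flush_wq, &c->journal.work, 0);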
      
      Cc: Coly Li <colyli@suse.de>
      Cc: stable@vger.kernel.org # 5.4+
      Signed-off-by: Kai Krakow <kai@kaishome.de>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      afe78ab4
    • bcache: Give btree_io_wq correct semantics again · d797bd98
      Kai Krakow authored
      Before killing `btree_io_wq`, the queue was allocated using
      `create_singlethread_workqueue()`, which has `WQ_MEM_RECLAIM`. After
      killing it, it no longer had this property, but `system_wq` is not
      single threaded.

      Let's combine both worlds and make it multi-threaded but able to
      reclaim memory.
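      Presumably the allocation becomes something like the following
      (hedged; multi-threaded since max_active is not forced to 1, yet
      able to make forward progress under memory pressure):

          btree_io_wq = alloc_workqueue("bch_btree_io", WQ_MEM_RECLAIM, 0);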
      
      Cc: Coly Li <colyli@suse.de>
      Cc: stable@vger.kernel.org # 5.4+
      Signed-off-by: Kai Krakow <kai@kaishome.de>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      d797bd98
    • Revert "bcache: Kill btree_io_wq" · 9f233ffe
      Kai Krakow authored
      This reverts commit 56b30770.
      
      With the btree using the `system_wq`, I seem to see a lot more desktop
      latency than I should.
      
      After some more investigation, it looks like the original assumption
      of 56b30770 no longer holds, and bcache has a very high potential of
      congesting the `system_wq`. In turn, this introduces laggy desktop
      performance, IO stalls (at least with btrfs), and input events may
      be delayed.
      
      So let's revert this. It's important to note that the semantics of
      using `system_wq` previously mean that `btree_io_wq` should be created
      before and destroyed after other bcache wqs to keep the same
      assumptions.
      
      Cc: Coly Li <colyli@suse.de>
      Cc: stable@vger.kernel.org # 5.4+
      Signed-off-by: Kai Krakow <kai@kaishome.de>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      9f233ffe
    • bcache: Fix register_device_aync typo · d7fae7b4
      Kai Krakow authored
      Should be `register_device_async`.
      
      Cc: Coly Li <colyli@suse.de>
      Signed-off-by: Kai Krakow <kai@kaishome.de>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      d7fae7b4
    • bcache: consider the fragmentation when update the writeback rate · 71dda2a5
      dongdong tao authored
      The current way to calculate the writeback rate only considers dirty
      sectors. This usually works fine when fragmentation is not high, but
      it gives an unreasonably small rate when very few dirty sectors
      consume a lot of dirty buckets. In some cases the dirty buckets can
      reach CUTOFF_WRITEBACK_SYNC while the dirty data (sectors) has not
      even reached writeback_percent; the writeback rate will still be the
      minimum value (4k), causing all writes to get stuck in a
      non-writeback mode because of the slow writeback.
      
      We accelerate the rate in 3 stages with different aggressiveness:
      the first stage starts when the dirty bucket percentage goes above
      BCH_WRITEBACK_FRAGMENT_THRESHOLD_LOW (50), the second at
      BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID (57), and the third at
      BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH (64). By default the first
      stage tries to write back the amount of dirty data in one bucket
      (on average) in (1 / (dirty_buckets_percent - 50)) seconds, the
      second stage in (1 / (dirty_buckets_percent - 57)) * 100
      milliseconds, and the third stage in
      (1 / (dirty_buckets_percent - 64)) milliseconds.
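      To make the three stage formulas concrete, here is an illustrative
      userspace-style sketch (my own, not driver code; the driver uses
      integer math on its rate terms) of the per-bucket writeback interval
      they describe:

          #include <stdio.h>

          /* per-bucket writeback interval (seconds) implied by the staged
           * formulas above; p is the dirty bucket percentage */
          static double bucket_interval_sec(int p)
          {
                  if (p > 64)
                          return 1.0 / (p - 64) / 1000.0;   /* high: ms */
                  if (p > 57)
                          return 100.0 / (p - 57) / 1000.0; /* mid: 100 ms units */
                  if (p > 50)
                          return 1.0 / (p - 50);            /* low: seconds */
                  return -1.0;                              /* not engaged */
          }

          int main(void)
          {
                  int p;

                  for (p = 51; p <= 70; p += 5)
                          printf("%d%% dirty buckets -> one bucket per %.4f s\n",
                                 p, bucket_interval_sec(p));
                  return 0;
          }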
      
      The initial rate at each stage can be controlled by 3 configurable
      parameters, writeback_rate_fp_term_{low|mid|high}; they default to
      1, 10, and 1000 respectively. The IO throughput these values aim
      for is described by the paragraph above; the reason I chose those
      values as defaults is based on testing and production data, with
      some details below:
      
      A. When it comes to the low stage, we are still a fair distance from
         the 70 threshold, so we only want to give it a little push by
         setting the term to 1. That means the initial rate will be 170 if
         the fragment is 6 (calculated as bucket_size/fragment); this rate
         is very small, but still much more reasonable than the minimum 8.
         For a production bcache with an unheavy workload, if the cache
         device is bigger than 1 TB, it may take hours to consume 1% of
         the buckets, so it is very possible to reclaim enough dirty
         buckets in this stage and avoid entering the next stage.
      
      B. If the dirty bucket ratio didn't turn around during the first
         stage, it comes to the mid stage, which needs to be more
         aggressive than the low stage, so I chose an initial rate 10
         times higher, which means 1700 as the initial rate if the
         fragment is 6. This is a normal rate we usually see for a normal
         workload when writeback happens because of writeback_percent.
      
      C. If the dirty bucket ratio didn't turn around during the low and
         mid stages, it comes to the third stage, the last chance to turn
         around and avoid the horrible cutoff writeback sync issue. We
         then choose to be 100 times more aggressive than the mid stage,
         which means 170000 as the initial rate if the fragment is 6.
         This is also inferred from a production bcache: I've got one
         week's writeback rate data from a production bcache with quite
         heavy workloads where, again, writeback is triggered by the
         writeback percent, and the highest rate area is around 100000 to
         240000, so I believe this kind of aggressiveness is reasonable
         for production. And it should be mostly enough, because the hint
         is trying to reclaim 1000 buckets per second, while that heavy
         production env consumed 50 buckets per second on average over
         one week's data.
      
      Option writeback_consider_fragment controls whether this feature is
      on or off; it is on by default.
      
      Lastly, below is the performance data for all the testing results,
      including data from the production env:
      https://docs.google.com/document/d/1AmbIEa_2MhB9bqhC3rfga9tp7n9YX9PLn0jSUxscVW0/edit?usp=sharing
      
      Signed-off-by: dongdong tao <dongdong.tao@canonical.com>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      71dda2a5
  4. Feb 02, 2021
    • Merge tag 'nvme-5.21-2020-02-02' of git://git.infradead.org/nvme into for-5.12/drivers · 0d738971
      Jens Axboe authored
      Pull NVMe updates from Christoph:
      
      "nvme updates for 5.12:
      
       - failed reconnect fixes (Chao Leng)
       - various tracing improvements (Michal Krakowiak, Johannes Thumshirn)
       - switch the nvmet-fc assoc_list to use RCU protection (Leonid Ravich)
       - resync the status codes with the latest spec (Max Gurtovoy)
       - minor nvme-tcp improvements (Sagi Grimberg)
       - various cleanups (Rikard Falkeborn, Minwoo Im, Chaitanya Kulkarni,
         Israel Rukshin)"
      
      * tag 'nvme-5.21-2020-02-02' of git://git.infradead.org/nvme: (22 commits)
        nvme-tcp: use cancel tagset helper for tear down
        nvme-rdma: use cancel tagset helper for tear down
        nvme-tcp: add clean action for failed reconnection
        nvme-rdma: add clean action for failed reconnection
        nvme-core: add cancel tagset helpers
        nvme-core: get rid of the extra space
        nvme: add tracing of zns commands
        nvme: parse format nvm command details when tracing
        nvme: update enumerations for status codes
        nvmet: add lba to sect conversion helpers
        nvmet: remove extra variable in identify ns
        nvmet: remove extra variable in id-desclist
        nvmet: remove extra variable in smart log nsid
        nvme: refactor ns->ctrl by request
        nvme-tcp: pass multipage bvec to request iov_iter
        nvme-tcp: get rid of unused helper function
        nvme-tcp: fix wrong setting of request iov_iter
        nvme: support command retry delay for admin command
        nvme: constify static attribute_group structs
        nvmet-fc: use RCU proctection for assoc_list
        ...
      0d738971
    • nvme-tcp: use cancel tagset helper for tear down · 563c8158
      Chao Leng authored
      Use nvme_cancel_tagset and nvme_cancel_admin_tagset to clean up the
      code in the teardown process.
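      Diff-style sketch of what the helpers replace in each teardown path
      (hedged; the exact call sites vary):

          -       blk_mq_tagset_busy_iter(ctrl->tagset, nvme_cancel_request, ctrl);
          -       blk_mq_tagset_wait_completed_request(ctrl->tagset);
          +       nvme_cancel_tagset(ctrl);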
      
      Signed-off-by: Chao Leng <lengchao@huawei.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      563c8158
    • nvme-rdma: use cancel tagset helper for tear down · c4189d68
      Chao Leng authored
      Use nvme_cancel_tagset and nvme_cancel_admin_tagset to clean up the
      code in the teardown process.
      
      Signed-off-by: Chao Leng <lengchao@huawei.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      c4189d68