  1. May 01, 2019
    • dm mpath: always free attached_handler_name in parse_path() · 940bc471
      Martin Wilck authored
      Commit b592211c ("dm mpath: fix attached_handler_name leak and
      dangling hw_handler_name pointer") fixed a memory leak for the case
      where setup_scsi_dh() returns failure. But setup_scsi_dh() may
      return success and not "use" attached_handler_name if the
      retain_attached_hwhandler flag is not set on the map. As
      setup_scsi_dh() properly "steals" the pointer by nullifying it,
      freeing it unconditionally in parse_path() is safe.
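
      A minimal userspace model of the ownership rule at play. Only
      setup_scsi_dh() and parse_path() are names from the commit; the rest
      is illustrative: the consumer steals the string by nullifying the
      caller's pointer, so the caller may free unconditionally.

        #include <stdlib.h>
        #include <string.h>

        static char *kept_name;   /* stand-in for hw_handler_name */

        /* consumer: takes ownership only when it actually keeps the name */
        static void setup_scsi_dh_model(char **attached_handler_name,
                                        int retain)
        {
            if (retain) {
                kept_name = *attached_handler_name; /* keep the string... */
                *attached_handler_name = NULL;      /* ...steal the pointer */
            }
        }

        static void parse_path_model(int retain)
        {
            char *attached_handler_name = strdup("scsi_dh_alua");

            setup_scsi_dh_model(&attached_handler_name, retain);
            /* safe either way: NULL if stolen, still owned if not */
            free(attached_handler_name);
        }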
      
      Fixes: b592211c ("dm mpath: fix attached_handler_name leak and dangling hw_handler_name pointer")
      Cc: stable@vger.kernel.org
      Reported-by: Yufen Yu <yuyufen@huawei.com>
      Signed-off-by: Martin Wilck <mwilck@suse.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm init: fix max devices/targets checks · 8e890c1a
      Helen Koike authored
      dm-init should allow up to DM_MAX_{DEVICES,TARGETS} for devices/targets,
      and not DM_MAX_{DEVICES,TARGETS} - 1.
      
      Fix the checks and also fix the error message printed when the
      maximum number of devices is exceeded.
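
      A sketch of the off-by-one being fixed; the constant's value and the
      check's shape are illustrative, not the dm-init code:

        #include <errno.h>

        #define DM_MAX_TARGETS 256   /* illustrative value */

        static int check_target_count(unsigned int ntargets)
        {
            /* a count of exactly DM_MAX_TARGETS must be accepted, so
             * reject with '>' rather than the previous '>=' */
            if (ntargets > DM_MAX_TARGETS)
                return -EINVAL;
            return 0;
        }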
      
      Fixes: 6bbc923d ("dm: add support to directly boot to a mapped device")
      Cc: stable@vger.kernel.org
      Signed-off-by: Helen Koike <helen.koike@collabora.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm: add dust target · e4f3fabd
      Bryan Gurney authored
      Add the dm-dust target, which simulates the behavior of bad sectors
      at arbitrary locations, and allows the emulation of read failures to
      be enabled at an arbitrary time.
      
      This target behaves similarly to a linear target.  At a given time,
      the user can send a message to the target to start failing read
      requests on specific blocks.  When the failure behavior is enabled,
      reads of blocks configured "bad" will fail with EIO.
      
      Writes of blocks configured "bad" will result in the following:
      
      1. Remove the block from the "bad block list".
      2. Successfully complete the write.
      
      After this point, the block will successfully contain the written
      data, and will service reads and writes normally.  This emulates the
      behavior of a "remapped sector" on a hard disk drive.
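
      A small, runnable userspace model of these semantics (names and
      structure are illustrative, not the target's code):

        #include <stdbool.h>
        #include <stdio.h>

        #define EIO 5
        #define NBLOCKS 16

        static bool bad[NBLOCKS];   /* models the "bad block list" */

        static int dust_read(unsigned int block)
        {
            if (bad[block])
                return -EIO;        /* failure emulation enabled */
            return 0;               /* otherwise pass through, like linear */
        }

        static int dust_write(unsigned int block)
        {
            bad[block] = false;     /* 1. drop the block from the bad list */
            return 0;               /* 2. complete the write successfully */
        }

        int main(void)
        {
            bad[3] = true;
            printf("read 3  -> %d\n", dust_read(3));  /* -5 (EIO) */
            printf("write 3 -> %d\n", dust_write(3)); /* 0 */
            printf("read 3  -> %d\n", dust_read(3));  /* 0: "remapped" */
            return 0;
        }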
      
      dm-dust provides logging of which blocks have been added to or
      removed from the "bad block list", as well as logging when a block
      has been
      removed from the bad block list.  These messages can be used
      alongside the messages from the driv...
  2. Apr 19, 2019
    • dm thin metadata: do not write metadata if no changes occurred · 873f258b
      Mike Snitzer authored

      Otherwise, just activating a thin-pool and thin device and then
      deactivating them will cause the thin-pool metadata to be changed
      (e.g. superblock written) -- even without any metadata being changed.
      
      Add 'in_service' flag to struct dm_pool_metadata and set it in
      pmd_write_lock() because all on-disk metadata changes must take a write
      lock of pmd->root_lock.  Once 'in_service' is set it is never cleared.
      __commit_transaction() will return 0 if 'in_service' is not set.
      dm_pool_commit_metadata() is updated to use __pmd_write_lock() so that
      it isn't the sole reason for putting a thin-pool in service.
      
      Also fix dm_pool_commit_metadata() to open the next transaction if
      the return from __commit_transaction() is 0. It is not clear why the
      early return ever made sense for a return of 0, given that dm-io's
      async_io(), as used by bufio, always returns 0.
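
      A condensed sketch of the 'in_service' gating described above, with
      simplified stand-ins for the dm-thin-metadata structures and the
      locking elided into comments:

        #include <stdbool.h>

        struct pool_metadata {
            bool in_service;   /* set on first write lock, never cleared */
            /* ... root_lock, transaction state, ... */
        };

        /* all on-disk changes take this lock, so this is the one place
         * that marks the pool as in service */
        static void pmd_write_lock(struct pool_metadata *pmd)
        {
            /* down_write(&pmd->root_lock); */
            pmd->in_service = true;
        }

        /* variant that does not put the pool in service, as used by
         * dm_pool_commit_metadata() itself */
        static void __pmd_write_lock(struct pool_metadata *pmd)
        {
            /* down_write(&pmd->root_lock); */
            (void)pmd;
        }

        static int __commit_transaction(struct pool_metadata *pmd)
        {
            if (!pmd->in_service)
                return 0;   /* nothing changed; skip the superblock write */
            /* ... write out metadata, return 0 or -errno ... */
            return 0;
        }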
      
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm thin metadata: add wrappers for managing write locking of metadata · 6a1b1ddc
      Mike Snitzer authored

      No functional change, but this prepares for hooking additional
      functionality into pmd_write_lock() (as provided in the next
      commit).
      
      Suggested-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm thin metadata: check __commit_transaction()'s return · a1ed4d9e
      Mike Snitzer authored

      Fix __reserve_metadata_snap() to return early if __commit_transaction()
      fails.
      
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm space map common: zero entire ll_disk · c6e086e0
      Mike Snitzer authored

      Otherwise, memory that is allocated (and potentially not previously
      zeroed) will get written to disk as part of the space maps.
      
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm writecache: add unlikely for returned value of rb_next/prev · 84420b1e
      Huaisheng Ye authored

      In writecache_discard() and writecache_find_entry() there is a high
      probability that the rb_node pointer won't be NULL. Annotate the
      NULL checks of the node pointer with unlikely().
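
      For reference, the kernel defines the hint as a branch-prediction
      builtin; a minimal illustration with a generic node type (not the
      writecache code):

        /* as in the kernel's compiler headers */
        #define unlikely(x) __builtin_expect(!!(x), 0)

        struct node { struct node *next; };

        static int chain_length(struct node *n)
        {
            int len = 0;

            while (n) {
                len++;
                if (unlikely(!n->next))   /* hint: NULL is the cold path */
                    break;
                n = n->next;
            }
            return len;
        }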
      
      Signed-off-by: Huaisheng Ye <yehs1@lenovo.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm writecache: remove needless dereferences in __writecache_writeback_pmem() · 09f2d656
      Huaisheng Ye authored

      bio is already available, so there is no need to access it via the
      wb pointer.
      
      Signed-off-by: Huaisheng Ye <yehs1@lenovo.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm snapshot: Use fine-grained locking scheme · 3f1637f2
      Nikos Tsironis authored
      Replace the global locking scheme with a fine-grained one, employing
      the read-write semaphore and the scalable exception tables with
      per-bucket locks introduced by the previous two commits.
      
      Summarizing, we now use a read-write semaphore to protect the mostly
      read fields of the snapshot structure, e.g., valid, active, etc., and
      per-bucket bit spinlocks to protect accesses to the complete and pending
      exception tables.
      
      Finally, we use an extra spinlock (pe_allocation_lock) to serialize
      the allocation of new exceptions by the exception store. This
      allocation is really fast, so the extra spinlock doesn't hurt
      performance.
      
      This scheme allows dm-snapshot to scale better, resulting in increased
      IOPS and reduced latency.
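
      A rough userspace sketch of the scheme, with pthreads standing in
      for the kernel's rw_semaphore and bit spinlocks; all names here are
      illustrative, not the dm-snapshot code:

        #include <pthread.h>
        #include <stdint.h>

        #define NR_BUCKETS 64

        struct exception_table {
            /* one lock per hash bucket: different buckets don't contend */
            pthread_spinlock_t bucket_lock[NR_BUCKETS];
            /* ... per-bucket exception lists ... */
        };

        struct snapshot {
            pthread_rwlock_t lock;   /* mostly-read: valid, active, ... */
            pthread_spinlock_t pe_allocation_lock; /* new exceptions */
            struct exception_table pending, complete;
        };

        static unsigned int bucket_of(uint64_t chunk)
        {
            return (unsigned int)(chunk % NR_BUCKETS);
        }

        static void lock_chunk(struct exception_table *t, uint64_t chunk)
        {
            pthread_spin_lock(&t->bucket_lock[bucket_of(chunk)]);
        }

        static void unlock_chunk(struct exception_table *t, uint64_t chunk)
        {
            pthread_spin_unlock(&t->bucket_lock[bucket_of(chunk)]);
        }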
      
      Following are some benchmark results using the null_blk device:
      
        modprobe null_blk gb=1024 bs=512 submit_queues=8 hw_queue_depth=4096 \
         queue_mode=2 irqmode=1 completion_nsec=1 nr_devices=1
      
      * Benchmark fio_origin_randwrite_throughput_N, from the device mapper
        test suite [1] (direct IO, random 4K writes to origin device, IO
        engine libaio):
      
        +--------------+-------------+------------+
        | # of workers | IOPS Before | IOPS After |
        +--------------+-------------+------------+
        |      1       |    57708    |   66421    |
        |      2       |    63415    |   77589    |
        |      4       |    67276    |   98839    |
        |      8       |    60564    |   109258   |
        +--------------+-------------+------------+
      
      * Benchmark fio_origin_randwrite_latency_N, from the device mapper test
        suite [1] (direct IO, random 4K writes to origin device, IO engine
        psync):
      
        +--------------+-----------------------+----------------------+
        | # of workers | Latency (usec) Before | Latency (usec) After |
        +--------------+-----------------------+----------------------+
        |      1       |         16.25         |        13.27         |
        |      2       |         31.65         |        25.08         |
        |      4       |         55.28         |        41.08         |
        |      8       |         121.47        |        74.44         |
        +--------------+-----------------------+----------------------+
      
      * Benchmark fio_snapshot_randwrite_throughput_N, from the device mapper
        test suite [1] (direct IO, random 4K writes to snapshot device, IO
        engine libaio):
      
        +--------------+-------------+------------+
        | # of workers | IOPS Before | IOPS After |
        +--------------+-------------+------------+
        |      1       |    72593    |   84938    |
        |      2       |    97379    |   134973   |
        |      4       |    90610    |   143077   |
        |      8       |    90537    |   180085   |
        +--------------+-------------+------------+
      
      * Benchmark fio_snapshot_randwrite_latency_N, from the device mapper
        test suite [1] (direct IO, random 4K writes to snapshot device, IO
        engine psync):
      
        +--------------+-----------------------+----------------------+
        | # of workers | Latency (usec) Before | Latency (usec) After |
        +--------------+-----------------------+----------------------+
        |      1       |         12.53         |         10.6         |
        |      2       |         19.78         |        14.89         |
        |      4       |         40.37         |        23.47         |
        |      8       |         89.32         |        48.48         |
        +--------------+-----------------------+----------------------+
      
      [1] https://github.com/jthornber/device-mapper-test-suite

      Co-developed-by: Ilias Tsitsimpis <iliastsi@arrikto.com>
      Signed-off-by: Nikos Tsironis <ntsironis@arrikto.com>
      Acked-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm snapshot: Make exception tables scalable · f79ae415
      Nikos Tsironis authored

      Use list_bl to implement the exception hash tables' buckets. This
      change permits concurrent access to distinct buckets by multiple
      threads.
      
      Also, implement helper functions to lock and unlock the exception tables
      based on the chunk number of the exception at hand.
      
      We retain the global locking, by means of down_write(), which is
      replaced by the next commit.
      
      Still, we must acquire the per-bucket spinlocks when accessing the hash
      tables, since list_bl does not allow modification on unlocked lists.
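
      A kernel-style sketch of such helpers; the table layout and hash are
      simplified stand-ins, while hlist_bl_lock()/hlist_bl_unlock() are
      the list_bl bit-spinlock primitives (the lock lives in bit 0 of each
      bucket's head pointer):

        struct exception_table_sketch {
            unsigned int hash_mask;
            struct hlist_bl_head *table;
        };

        static unsigned int exception_hash(struct exception_table_sketch *et,
                                           unsigned long chunk)
        {
            return (unsigned int)chunk & et->hash_mask; /* simplified */
        }

        static void exception_table_lock(struct exception_table_sketch *et,
                                         unsigned long chunk)
        {
            hlist_bl_lock(&et->table[exception_hash(et, chunk)]);
        }

        static void exception_table_unlock(struct exception_table_sketch *et,
                                           unsigned long chunk)
        {
            hlist_bl_unlock(&et->table[exception_hash(et, chunk)]);
        }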
      
      Co-developed-by: Ilias Tsitsimpis <iliastsi@arrikto.com>
      Signed-off-by: Nikos Tsironis <ntsironis@arrikto.com>
      Acked-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm snapshot: Replace mutex with rw semaphore · 4ad8d880
      Nikos Tsironis authored
      dm-snapshot uses a single mutex to serialize every access to the
      snapshot state. This includes all accesses to the complete and pending
      exception tables, which occur at every origin write, every snapshot
      read/write and every exception completion.
      
      The lock statistics indicate that this mutex is a bottleneck
      (average wait time ~480 usecs for 8 processes doing random 4K writes
      to the origin device), preventing dm-snapshot from scaling as the
      number of threads doing IO increases.
      
      The major contention points are __origin_write()/snapshot_map() and
      pending_complete(), i.e., the submission and completion of pending
      exceptions.
      
      Replace this mutex with a rw semaphore.
      
      We essentially revert commit ae1093be ("dm snapshot: use mutex
      instead of rw_semaphore") and, together with the next two patches,
      we substitute the single mutex with a fine-grained locking scheme,
      where we use a read-write semaphore to protect the mostly read
      fields of the snapshot structure, e.g., valid, active, etc., and
      per-bucket bit spinlocks to protect accesses to the complete and
      pending exception tables.
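
      The shape of the change, with a pthread rwlock standing in for the
      kernel's rw_semaphore (illustrative names, not the dm-snapshot
      code):

        #include <pthread.h>

        static pthread_rwlock_t snap_lock = PTHREAD_RWLOCK_INITIALIZER;

        static void read_mostly_path(void)
        {
            pthread_rwlock_rdlock(&snap_lock);  /* was: mutex_lock() */
            /* ... look up state, remap the bio ... */
            pthread_rwlock_unlock(&snap_lock);
        }

        static void update_path(void)
        {
            pthread_rwlock_wrlock(&snap_lock);  /* exclusive, as before */
            /* ... modify snapshot state ... */
            pthread_rwlock_unlock(&snap_lock);
        }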
      
      Co-developed-by: Ilias Tsitsimpis <iliastsi@arrikto.com>
      Signed-off-by: Nikos Tsironis <ntsironis@arrikto.com>
      Acked-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm snapshot: Don't sleep holding the snapshot lock · 65fc7c37
      Nikos Tsironis authored

      When completing a pending exception, pending_complete() waits for all
      conflicting reads to drain, before inserting the final, completed
      exception. Conflicting reads are snapshot reads redirected to the
      origin, because the relevant chunk is not remapped to the COW device the
      moment we receive the read.
      
      The completed exception must be inserted into the exception table after
      all conflicting reads drain to ensure snapshot reads don't return
      corrupted data. This is required because inserting the completed
      exception into the exception table signals that the relevant chunk is
      remapped and both origin writes and snapshot merging will now overwrite
      the chunk in origin.
      
      This wait is done holding the snapshot lock to ensure that
      pending_complete() doesn't starve if new snapshot reads keep coming for
      this chunk.
      
      In preparation for the next commit, where we use a spinlock instead of a
      mutex to protect the exception tables, we remove the need for holding
      the lock while waiting for conflicting reads to drain.
      
      We achieve this in two steps:
      
      1. pending_complete() inserts the completed exception before waiting for
         conflicting reads to drain and removes the pending exception after
         all conflicting reads drain.
      
         This ensures that new snapshot reads will be redirected to the COW
         device, instead of the origin, and thus pending_complete() will not
         starve. Moreover, we use the existence of both a completed and
         a pending exception to signify that the COW is done but there are
         conflicting reads in flight.
      
      2. In __origin_write() we check first if there is a pending exception
         and then if there is a completed exception. If there is a pending
         exception any submitted BIO is delayed on the pe->origin_bios list and
         DM_MAPIO_SUBMITTED is returned. This ensures that neither writes to the
         origin nor snapshot merging can overwrite the origin chunk, until all
         conflicting reads drain, and thus snapshot reads will not return
         corrupted data.
      
      Summarizing, we now have the following possible combinations of pending
      and completed exceptions for a chunk, along with their meaning:
      
      A. No exceptions exist: The chunk has not been remapped yet.
      B. Only a pending exception exists: The chunk is currently being copied
         to the COW device.
      C. Both a pending and a completed exception exist: COW for this chunk
         has completed but there are snapshot reads in flight which had been
         redirected to the origin before the chunk was remapped.
      D. Only the completed exception exists: COW has been completed and there
         are no conflicting reads in flight.
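
      A condensed model of the __origin_write() decision over the four
      states above; two booleans stand in for lookups in the pending and
      complete exception tables, and the names are illustrative:

        #include <stdbool.h>

        enum origin_write_action { START_COPY, DELAY_BIO, WRITE_THROUGH };

        struct chunk_state {
            bool pending;    /* a pending exception exists for the chunk */
            bool complete;   /* a completed exception exists for the chunk */
        };

        static enum origin_write_action
        origin_write(const struct chunk_state *c)
        {
            /* check pending first: in state C (COW done, conflicting
             * reads still in flight) the write must be delayed on
             * pe->origin_bios, or it could overwrite data that the
             * redirected snapshot reads have not read yet */
            if (c->pending)
                return DELAY_BIO;      /* states B and C */
            if (c->complete)
                return WRITE_THROUGH;  /* state D: safe to write origin */
            return START_COPY;         /* state A: trigger a new exception */
        }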
      
      Co-developed-by: Ilias Tsitsimpis <iliastsi@arrikto.com>
      Signed-off-by: Nikos Tsironis <ntsironis@arrikto.com>
      Acked-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • list_bl: Add hlist_bl_add_before/behind helpers · 34191ae8
      Nikos Tsironis authored

      Add hlist_bl_add_before/behind helpers to add an element before/after an
      existing element in a bl_list.
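
      The helpers are likely along these lines (a sketch; consult
      include/linux/list_bl.h for the authoritative code). The subtlety is
      that next->pprev may point at the bucket head, whose bit 0 doubles
      as the bit spinlock, so that bit must be preserved when updating
      *pprev:

        static inline void hlist_bl_add_before(struct hlist_bl_node *n,
                                               struct hlist_bl_node *next)
        {
            struct hlist_bl_node **pprev = next->pprev;

            n->pprev = pprev;
            n->next = next;
            next->pprev = &n->next;

            /* pprev may be the head's ->first: keep the lock bit */
            WRITE_ONCE(*pprev,
                       (struct hlist_bl_node *)
                           ((unsigned long)n |
                            ((unsigned long)*pprev & LIST_BL_LOCKMASK)));
        }

        static inline void hlist_bl_add_behind(struct hlist_bl_node *n,
                                               struct hlist_bl_node *prev)
        {
            n->next = prev->next;
            n->pprev = &prev->next;
            prev->next = n;

            if (n->next)
                n->next->pprev = &n->next;
        }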
      
      Co-developed-by: Ilias Tsitsimpis <iliastsi@arrikto.com>
      Signed-off-by: Nikos Tsironis <ntsironis@arrikto.com>
      Reviewed-by: Paul E. McKenney <paulmck@linux.ibm.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • list: Don't use WRITE_ONCE() in hlist_add_behind() · ae325dcd
      Nikos Tsironis authored
      Commit 1c97be67 ("list: Use WRITE_ONCE() when adding to lists and
      hlists") introduced the use of WRITE_ONCE() to atomically write the
      list head's ->next pointer.
      
      hlist_add_behind() doesn't touch the hlist head's ->first pointer so
      there is no reason to use WRITE_ONCE() in this case.
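
      After the change, the insertion is plain stores; a sketch of the
      resulting helper (see include/linux/list.h for the authoritative
      version):

        static inline void hlist_add_behind(struct hlist_node *n,
                                            struct hlist_node *prev)
        {
            n->next = prev->next;
            prev->next = n;         /* plain store: this is not the list
                                     * head's ->first, so WRITE_ONCE() is
                                     * not needed */
            n->pprev = &prev->next;

            if (n->next)
                n->next->pprev = &n->next;
        }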
      
      Co-developed-by: Ilias Tsitsimpis <iliastsi@arrikto.com>
      Signed-off-by: Nikos Tsironis <ntsironis@arrikto.com>
      Reviewed-by: Paul E. McKenney <paulmck@linux.ibm.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm cache metadata: Fix loading discard bitset · e28adc3b
      Nikos Tsironis authored
      Add missing dm_bitset_cursor_next() to properly advance the bitset
      cursor.
      
      Otherwise, the discarded state of all blocks is set according to the
      discarded state of the first block.
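
      A runnable userspace model of the bug (illustrative names, not the
      dm-cache code): without the advance, every block inherits block 0's
      discard bit.

        #include <stdbool.h>
        #include <stdio.h>

        struct bitset_cursor { const bool *bits; unsigned int pos; };

        static bool cursor_get_value(struct bitset_cursor *c)
        {
            return c->bits[c->pos];
        }

        static void cursor_next(struct bitset_cursor *c)
        {
            c->pos++;   /* the advance the fix adds to the load loop */
        }

        int main(void)
        {
            const bool on_disk[4] = { true, false, true, false };
            struct bitset_cursor c = { on_disk, 0 };
            bool loaded[4];

            for (unsigned int b = 0; b < 4; b++) {
                loaded[b] = cursor_get_value(&c);
                cursor_next(&c);   /* omit this and loaded[] is all true */
            }
            for (unsigned int b = 0; b < 4; b++)
                printf("block %u discarded=%d\n", b, loaded[b]);
            return 0;
        }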
      
      Fixes: ae4a46a1 ("dm cache metadata: use bitset cursor api to load discard bitset")
      Cc: stable@vger.kernel.org
      Signed-off-by: Nikos Tsironis <ntsironis@arrikto.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm zoned: Fix zone report handling · 7aedf75f
      Damien Le Moal authored
      The function blkdev_report_zones() returns success even if no zone
      information is reported (empty report). Empty zone reports can only
      happen if the report start sector passed exceeds the device capacity.
      The conditions for this to happen are either a bug in the caller
      code, or a change in the device that forced the low-level driver to
      change
      the device capacity to a value that is lower than the report start
      sector. This situation includes a failed disk revalidation resulting in
      the disk capacity being changed to 0.
      
      If this change happens while dm-zoned is in its initialization phase
      executing dmz_init_zones(), this function may enter an infinite loop
      and hang the system. To avoid this, add a check to disallow empty zone
      reports and bail out early. Also fix the function dmz_update_zone() to
      make sure that the report for the requested zone was correctly obtained.
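
      A runnable userspace model of the hang and the added guard (all
      names illustrative):

        #include <stdio.h>

        /* stand-in for blkdev_report_zones(): succeeds, but reports zero
         * zones once 'sector' is past the (shrunken) capacity */
        static int report_zones(unsigned long sector, unsigned int *nr)
        {
            const unsigned long capacity = 0;   /* failed revalidation */

            *nr = sector < capacity ? 1 : 0;
            return 0;
        }

        static int init_zones(void)
        {
            unsigned long sector = 0;
            unsigned int nr;

            while (sector < 8) {
                if (report_zones(sector, &nr))
                    return -1;
                if (!nr)        /* the added check: an empty report would
                                 * otherwise loop here forever */
                    return -1;  /* -EIO in the driver */
                sector += nr;   /* advance by zones actually reported */
            }
            return 0;
        }

        int main(void)
        {
            printf("init_zones: %d\n", init_zones());
            return 0;
        }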
      
      Fixes: 3b1a94c8 ("dm zoned: drive-managed zoned block device target")
      Cc: stable@vger.kernel.org
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
      Reviewed-by: Shaun Tancheff <shaun@tancheff.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm zoned: Silence a static checker warning · a3839bc6
      Dan Carpenter authored
      My static checker complains about this line from dmz_get_zoned_device()
      
      	aligned_capacity = dev->capacity & ~(blk_queue_zone_sectors(q) - 1);
      
      The problem is that "aligned_capacity" and "dev->capacity" are sector_t
      type (which is a u64 under most configs) but blk_queue_zone_sectors(q)
      returns a u32 so the higher 32 bits in aligned_capacity are cleared to
      zero.  This patch adds a cast to address the issue.
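
      A runnable demonstration of the truncation and the fix (standard C
      types standing in for sector_t and the zone size):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint64_t capacity = 0x100000800ULL;  /* > 4 GiB of sectors */
            uint32_t zone_sectors = 0x400;       /* power-of-two zones */

            /* mask stays u32, zero-extended: the top 32 bits are lost */
            uint64_t buggy = capacity & ~(zone_sectors - 1);
            /* widen before negating: the top 32 bits survive */
            uint64_t fixed = capacity & ~((uint64_t)zone_sectors - 1);

            printf("buggy: 0x%llx\n", (unsigned long long)buggy);
            /* -> 0x800 */
            printf("fixed: 0x%llx\n", (unsigned long long)fixed);
            /* -> 0x100000800 */
            return 0;
        }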
      
      Fixes: 114e0259 ("dm zoned: ignore last smaller runt zone")
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm crypt: fix endianness annotations around org_sector_of_dmreq · c13b5487
      Christoph Hellwig authored

      The sector used here is a little endian value, so use the right
      type for it.
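
      The shape of the annotation (kernel-style sketch; the struct and
      helper names are illustrative, while __le64 and cpu_to_le64() are
      the kernel's sparse-checked little-endian type and conversion
      helper):

        struct dmreq_sketch {
            __le64 org_sector;   /* was: u64 */
            /* ... */
        };

        static void set_org_sector(struct dmreq_sketch *req, u64 sector)
        {
            /* the conversion is explicit, and sparse will now flag any
             * direct mixing of cpu- and little-endian values */
            req->org_sector = cpu_to_le64(sector);
        }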
      
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Bart Van Assche <bvanassche@acm.org>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>