  4. Mar 26, 2021
    • Merge branch 'md-next' of... · f8d62edf
      Jens Axboe authored
      Merge branch 'md-next' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md into for-5.13/drivers
      
      Pull MD updates from Song:
      
      "The major changes are:
      
        1. Performance improvement for raid10 discard requests, from Xiao Ni.
        2. Fix missing information of /proc/mdstat, from Jan Glauber."
      
      * 'md-next' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md:
        md: Fix missing unused status line of /proc/mdstat
        md/raid10: improve discard request for far layout
        md/raid10: improve raid10 discard request
        md/raid10: pull the code that wait for blocked dev into one function
        md/raid10: extend r10bio devs to raid disks
        md: add md_submit_discard_bio() for submitting discard bio
  5. Mar 25, 2021
    • md: Fix missing unused status line of /proc/mdstat · 7abfabaf
      Jan Glauber authored
      Reading /proc/mdstat with a read buffer too small to fit the
      unused status line in the first read will skip this line in the
      output.

      So 'dd if=/proc/mdstat bs=64 2>/dev/null' will not print something
      like: unused devices: <none>

      Don't return NULL immediately in start() for v=2; call show() once
      so the status line is also printed across multiple reads.
      
      Cc: stable@vger.kernel.org
      Fixes: 1f4aace6 ("fs/seq_file.c: simplify seq_file iteration code and interface")
      Signed-off-by: Jan Glauber <jglauber@digitalocean.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
    • md/raid10: improve discard request for far layout · 254c271d
      Xiao Ni authored
      
      
      For the far layout, the discard region is not contiguous on the
      disks, so it needs far_copies r10bios to cover all regions, and it
      needs a way to know whether all r10bios have finished. Similar to
      raid10_sync_request, only the first r10bio's master_bio records the
      discard bio; the other r10bios' master_bio records the first r10bio.
      The first r10bio can finish only after the other r10bios finish, and
      then it returns the discard bio.
      
      Tested-by: Adrian Huang <ahuang12@lenovo.com>
      Signed-off-by: Xiao Ni <xni@redhat.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
    • md/raid10: improve raid10 discard request · d30588b2
      Xiao Ni authored
      Currently the discard request is split by chunk size, so it takes a
      long time to finish mkfs on disks that support the discard function.
      This patch improves the handling of raid10 discard requests. It uses
      an approach similar to patch 29efc390 ("md/md0: optimize raid0
      discard handling").

      But it is a little more complex than raid0, because raid10 has a
      different layout. If raid10 uses the offset layout and the discard
      request is smaller than the stripe size, there are holes when we
      submit the discard bio to the underlying disks.
      
      For example: five disks (disk1 - disk5)
      D01 D02 D03 D04 D05
      D05 D01 D02 D03 D04
      D06 D07 D08 D09 D10
      D10 D06 D07 D08 D09
      The discard bio just wants to discard from D03 to D10. For disk3,
      there is a hole between D03 and D08. For disk4, there is a hole
      between D04 and D09. D03 is one chunk, and raid10_write_request can
      handle a single chunk perfectly. So the parts that are not aligned
      with the stripe size are still handled by raid10_write_request.
      
      If reshape is running when a discard bio comes in and the discard
      bio spans the reshape position, raid10_write_request is responsible
      for handling this discard bio.
      
      I did a test with this patch set.
      Without the patch:
      time mkfs.xfs /dev/md0
      real    4m39.775s
      user    0m0.000s
      sys     0m0.298s

      With the patch:
      time mkfs.xfs /dev/md0
      real    0m0.105s
      user    0m0.000s
      sys     0m0.007s
      
      nvme3n1           259:1    0   477G  0 disk
      └─nvme3n1p1       259:10   0    50G  0 part
      nvme4n1           259:2    0   477G  0 disk
      └─nvme4n1p1       259:11   0    50G  0 part
      nvme5n1           259:6    0   477G  0 disk
      └─nvme5n1p1       259:12   0    50G  0 part
      nvme2n1           259:9    0   477G  0 disk
      └─nvme2n1p1       259:15   0    50G  0 part
      nvme0n1           259:13   0   477G  0 disk
      └─nvme0n1p1       259:14   0    50G  0 part
      
      Reviewed-by: Coly Li <colyli@suse.de>
      Reviewed-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
      Tested-by: Adrian Huang <ahuang12@lenovo.com>
      Signed-off-by: Xiao Ni <xni@redhat.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
    • md/raid10: pull the code that wait for blocked dev into one function · f2e7e269
      Xiao Ni authored
      
      
      The following patch will reuse this logic, so pull the duplicated
      code into one function.
      
      Tested-by: Adrian Huang <ahuang12@lenovo.com>
      Signed-off-by: Xiao Ni <xni@redhat.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>