  1. Sep 04, 2019
    • paride/pcd: need to check if cd->disk is null in pcd_detect · 03754ea3
      zhengbin authored
      
      
      If alloc_disk fails in pcd_init_units, cd->disk and pi are left NULL, so
      we need to check whether cd->disk is NULL in pcd_detect (a minimal sketch
      follows this entry).
      
      Reported-by: Hulk Robot <hulkci@huawei.com>
      Signed-off-by: zhengbin <zhengbin13@huawei.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
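      A minimal sketch of the guard described above, assuming the pcd driver's
      unit array and loop shape (illustrative only, not the exact upstream diff):

        /* pcd_detect(): skip units whose gendisk allocation failed earlier
         * in pcd_init_units(), instead of dereferencing cd->disk / cd->pi. */
        static int pcd_detect(void)
        {
                struct pcd_unit *cd;
                int unit;

                for (unit = 0, cd = pcd; unit < PCD_UNITS; unit++, cd++) {
                        if (!cd->disk)          /* alloc_disk() failed */
                                continue;
                        /* ... probe this unit as before ... */
                }
                return 0;
        }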
    • paride/pcd: need to set queue to NULL before put_disk · d821cce8
      zhengbin authored
      In pcd_init_units, if blk_mq_init_sq_queue fails, we need to set queue to
      NULL before calling put_disk; otherwise a null-ptr-deref read will occur
      in the following path (a sketch of the error path follows this entry):
      
      put_disk
        kobject_put
          disk_release
            blk_put_queue(disk->queue)
      
      Fixes: f0d17625 ("paride/pcd: Fix potential NULL pointer dereference and mem leak")
      Reported-by: Hulk Robot <hulkci@huawei.com>
      Signed-off-by: zhengbin <zhengbin13@huawei.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
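      A sketch of the error-path ordering described above; the tag_set and ops
      names are assumed from the driver's blk-mq conversion, not copied from
      the upstream diff:

        disk->queue = blk_mq_init_sq_queue(&cd->tag_set, &pcd_mq_ops, 1,
                                           BLK_MQ_F_SHOULD_MERGE);
        if (IS_ERR(disk->queue)) {
                disk->queue = NULL;     /* clear the ERR_PTR before put_disk() */
                put_disk(disk);         /* disk_release() -> blk_put_queue(disk->queue) */
                continue;
        }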
    • paride/pf: need to set queue to NULL before put_disk · ecf4d59a
      zhengbin authored
      In pf_init_units, if blk_mq_init_sq_queue fails, we need to set queue to
      NULL before calling put_disk; otherwise a null-ptr-deref read will occur
      in the following path:
      
      put_disk
        kobject_put
          disk_release
            blk_put_queue(disk->queue)
      
      Fixes: 77218ddf ("paride: convert pf to blk-mq")
      Reported-by: Hulk Robot <hulkci@huawei.com>
      Signed-off-by: zhengbin <zhengbin13@huawei.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • Merge branch 'md-next' of git://git.kernel.org/pub/scm/linux/kernel/git/song/md into for-5.4/block · c5ef62e6
      Jens Axboe authored
      Pull MD fixes from Song.
      
      * 'md-next' of git://git.kernel.org/pub/scm/linux/kernel/git/song/md:
        md/raid5: use bio_end_sector to calculate last_sector
        md/raid1: fail run raid1 array when active disk less than one
        md raid0/linear: Mark array as 'broken' and fail BIOs if a member is gone
    • md/raid5: use bio_end_sector to calculate last_sector · b0f01ecf
      Guoqing Jiang authored
      
      
      Use the common bio_end_sector() helper to get last_sector (a sketch of the
      substitution follows this entry).
      
      Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
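      A sketch of the substitution, assuming the open-coded form that
      bio_end_sector() replaces:

        /* before (open-coded, assumed): */
        last_sector = bi->bi_iter.bi_sector + (bi->bi_iter.bi_size >> 9);

        /* after: common helper from <linux/bio.h> */
        last_sector = bio_end_sector(bi);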
    • md/raid1: fail run raid1 array when active disk less than one · 07f1a685
      Yufen Yu authored
      
      
      When run test case:
        mdadm -CR /dev/md1 -l 1 -n 4 /dev/sd[a-d] --assume-clean --bitmap=internal
        mdadm -S /dev/md1
        mdadm -A /dev/md1 /dev/sd[b-c] --run --force
      
        mdadm --zero /dev/sda
        mdadm /dev/md1 -a /dev/sda
      
        echo offline > /sys/block/sdc/device/state
        echo offline > /sys/block/sdb/device/state
        sleep 5
        mdadm -S /dev/md1
      
        echo running > /sys/block/sdb/device/state
        echo running > /sys/block/sdc/device/state
        mdadm -A /dev/md1 /dev/sd[a-c] --run --force
      
      The mdadm run fails, with kernel messages as follows:
      [  172.986064] md: kicking non-fresh sdb from array!
      [  173.004210] md: kicking non-fresh sdc from array!
      [  173.022383] md/raid1:md1: active with 0 out of 4 mirrors
      [  173.022406] md1: failed to create bitmap (-5)
      
      In fact, when the number of active disks in the raid1 array is less than
      one, raid1_run() needs to return failure (a sketch of the check follows
      this entry).
      
      Reviewed-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Yufen Yu <yuyufen@huawei.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
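      A hedged sketch of the check this calls for in raid1_run(); the exact
      condition, error code and cleanup label in the upstream patch may differ:

        /* refuse to start an array that is "active with 0 out of N mirrors" */
        if (conf->raid_disks - mddev->degraded < 1) {
                ret = -EINVAL;
                goto abort;     /* existing error/cleanup path, assumed */
        }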
    • md raid0/linear: Mark array as 'broken' and fail BIOs if a member is gone · 62f7b198
      Guilherme G. Piccoli authored
      
      
      Currently md raid0/linear are not provided with any mechanism to validate
      whether an array member was removed or failed. The driver keeps sending
      BIOs regardless of the state of the array members, and the kernel shows
      state 'clean' in the 'array_state' sysfs attribute. This leads to the
      following situation: if a raid0/linear array member is removed while the
      array is mounted, a user writing to this array won't realize that errors
      are happening unless they check dmesg or perform one fsync per written
      file. Despite udev signaling that the member device is gone, 'mdadm'
      cannot issue the STOP_ARRAY ioctl successfully, given the array is
      mounted.
      
      In other words, no -EIO is returned and writes (except direct ones) appear
      normal, meaning the user might think the written data is correctly stored
      in the array, when in fact garbage was written, since raid0 does striping
      (and therefore requires all of its members to be working in order not to
      corrupt data). For md/linear, writes to the available members will work
      fine, but writes that map to the missing member(s) will cause file
      corruption, since the portion of the writes directed at the missing
      devices is never actually written.
      
      This patch changes this behavior: we check if the block device's gendisk
      is UP when submitting the BIO to the array member, and if it isn't, we flag
      the md device as MD_BROKEN and fail subsequent I/Os to that device; a read
      request to the array requiring data from a valid member is still completed.
      While flagging the device as MD_BROKEN, we also show a rate-limited warning
      in the kernel log.
      
      A new array state 'broken' was added too: it mimics the state 'clean' in
      every aspect, being useful only to distinguish if the array has some member
      missing. We rely on the MD_BROKEN flag to put the array in the 'broken'
      state. This state cannot be written to 'array_state': since it merely
      indicates that one or more members of the array are missing while
      otherwise behaving like 'clean', it wouldn't make sense to write it.
      
      With this patch, the filesystem reacts much faster to a missing array
      member: after some I/O errors, ext4 for instance aborts the journal and
      prevents corruption. Without this change, we are able to keep writing to
      the disk, and after a machine reboot e2fsck shows severe fs errors that
      demand fixing. This patch was tested on ext4 and xfs filesystems, and
      requires an 'mdadm' counterpart to handle the 'broken' state (a sketch of
      the submit-path check follows this entry).
      
      Cc: Song Liu <songliubraving@fb.com>
      Reviewed-by: NeilBrown <neilb@suse.de>
      Signed-off-by: Guilherme G. Piccoli <gpiccoli@canonical.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
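      A hedged sketch of the submit-path check described above; only MD_BROKEN
      and the 'broken' sysfs state come from the message, while the surrounding
      names (rdev, mddev, the GENHD_FL_UP test) are illustrative:

        /* in the raid0/linear bio mapping path, once the target member
         * rdev for this bio has been chosen */
        if (unlikely(!(rdev->bdev->bd_disk->flags & GENHD_FL_UP))) {
                /* member is gone: mark the array broken once (rate-limited
                 * warning) and fail this bio instead of writing garbage */
                if (!test_and_set_bit(MD_BROKEN, &mddev->flags))
                        pr_warn_ratelimited("md: %s: a member device is gone, failing I/O\n",
                                            mdname(mddev));
                bio_io_error(bio);
                return true;    /* bio has been handled (failed) */
        }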
  2. Sep 03, 2019
  3. Aug 31, 2019
    • writeback: don't access page->mapping directly in track_foreign_dirty TP · 0feacaa2
      Tejun Heo authored
      page->mapping may encode different values, so page_mapping() should always
      be used to access the mapping pointer. The track_foreign_dirty tracepoint
      was incorrectly accessing page->mapping directly. Use page_mapping()
      instead, and add NULL checks while at it (a sketch follows this entry).
      
      Fixes: 3a8e9ac8 ("writeback: add tracepoints for cgroup foreign writebacks")
      Reported-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
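      A sketch of the fix's shape inside the tracepoint's TP_fast_assign(),
      with the surrounding fields elided and names assumed:

        TP_fast_assign(
                struct address_space *mapping = page_mapping(page);
                struct inode *inode = mapping ? mapping->host : NULL;

                /* was: __entry->ino = page->mapping->host->i_ino; */
                __entry->ino = inode ? inode->i_ino : 0;
                /* ... other fields unchanged ... */
        ),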
    • Merge branch 'nvme-5.4' of git://git.infradead.org/nvme into for-5.4/block · 8f5914bc
      Jens Axboe authored
      Pull NVMe changes from Sagi:
      
      "The nvme updates include:
       - ana log parse fix from Anton
       - nvme quirks support for Apple devices from Ben
       - fix missing bio completion tracing for multipath stack devices from
         Hannes and Mikhail
       - IP TOS settings for nvme rdma and tcp transports from Israel
       - rq_dma_dir cleanups from Israel
       - tracing for Get LBA Status command from Minwoo
       - Some nvme-tcp cleanups from Minwoo, Potnuri and Myself
       - Some consolidation between the fabrics transports for handling the CAP
         register
       - reset race with ns scanning fix for fabrics (move fabrics commands to
         a dedicated request queue with a different lifetime from the admin
         request queue)."
      
      * 'nvme-5.4' of git://git.infradead.org/nvme: (30 commits)
        nvme-rdma: Use rq_dma_dir macro
        nvme-fc: Use rq_dma_dir macro
        nvme-pci: Tidy up nvme_unmap_data
        nvme: make fabrics command run on a separate request queue
        nvme-pci: Support shared tags across queues for Apple 2018 controllers
        nvme-pci: Add support for Apple 2018+ models
        nvme-pci: Add support for variable IO SQ element size
        nvme-pci: Pass the queue to SQ_SIZE/CQ_SIZE macros
        nvme: trace bio completion
        nvme-multipath: fix ana log nsid lookup when nsid is not found
        nvmet-tcp: Add TOS for tcp transport
        nvme-tcp: Add TOS for tcp transport
        nvme-tcp: Use struct nvme_ctrl directly
        nvme-rdma: Add TOS for rdma transport
        nvme-fabrics: Add type of service (TOS) configuration
        nvmet-tcp: fix possible memory leak
        nvmet-tcp: fix possible NULL deref
        nvmet: trace: parse Get LBA Status command in detail
        nvme: trace: parse Get LBA Status command in detail
        nvme: trace: support for Get LBA Status opcode parsed
        ...
  4. Aug 30, 2019