Oct 07, 2020
    • btrfs: make fast fsyncs wait only for writeback · 48778179
      Filipe Manana authored
      Currently, regardless of a full or a fast fsync, we always wait for
      ordered extents to complete and only then start logging the inode.
      However, for fast fsyncs we can just wait for the writeback to
      complete: we don't need to wait for the ordered extents to complete,
      since we use the list of modified extent maps to figure out which
      extents we must log, and we can get their checksums directly from the
      ordered extents that are still in flight, otherwise looking them up in
      the checksums tree.
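
      In rough terms, the new behavior is the following (a sketch of the
      idea only, not the literal diff; the variable names are illustrative):

        /* Sketch: fast fsyncs stop short of waiting for ordered extents. */
        if (full_sync)
                ret = btrfs_wait_ordered_range(inode, start, len);
        else
                ret = filemap_fdatawait_range(inode->i_mapping, start, end);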
      
      Until commit b5e6c3e1 ("btrfs: always wait on ordered extents at
      fsync time"), for fast fsyncs we used to start logging without even
      waiting for the writeback to complete first; we would wait for it to
      complete after logging, while holding a transaction open, which led to
      performance issues when using cgroups and probably in other cases too,
      as waiting for IO while holding a transaction handle should be avoided
      as much as possible. After that, for fast fsyncs, we started to wait
      for ordered extents to complete before starting to log, which adds
      some latency to fsyncs, and we even got at least one report about a
      performance drop that bisected to that particular change:
      
      https://lore.kernel.org/linux-btrfs/20181109215148.GF23260@techsingularity.net/

      This change makes fast fsyncs only wait for writeback to finish before
      starting to log the inode, instead of waiting for both the writeback to
      finish and for the ordered extents to complete. This brings back part of
      the logic we had that extracts checksums from in-flight ordered extents,
      which are not yet in the checksums tree, and makes sure transaction
      commits wait for the completion of previously logged ordered extents
      (by far most of the time they have already completed by the time a
      transaction commit starts, resulting in no wait at all), to avoid any
      data loss if an ordered extent completes after the transaction used to
      log an inode is committed, followed by a power failure.
      
      When there are no other tasks accessing the checksums and the subvolume
      btrees, the ordered extent completion is pretty fast, typically taking
      only 100 to 200 microseconds in my observations. However, when there are
      other tasks accessing these btrees, ordered extent completion can take a
      lot more time due to lock contention on nodes and leaves of these btrees.
      I've seen cases over 2 milliseconds, which starts to be significant. In
      particular when we do have concurrent fsyncs against different files there
      is a lot of contention on the checksums btree, since we have many tasks
      writing the checksums into the btree and other tasks that already started
      the logging phase are doing lookups for checksums in the btree.
      
      This change also turns all ranged fsyncs into full ranged fsyncs, which
      is something we already did when not using the NO_HOLES feature or when
      doing a full fsync. This is to guarantee we never miss checksums due to
      writeback having been triggered only for a part of an extent, where we
      would end up logging the full extent but only the checksums for the
      written range, which results in missing checksums after log replay.
      Allowing ranged fsyncs to operate again only on the original range, when
      using the NO_HOLES feature and doing a fast fsync, is doable but requires
      some non-trivial changes to the writeback path, which can always be
      worked on later if needed, but I don't think that is a very common use
      case.
      
      Several tests were performed using fio for different numbers of concurrent
      jobs, each writing and fsyncing its own file, for both sequential and
      random file writes. The tests were run on bare metal, no virtualization,
      on a box with 12 cores (Intel i7-8700), 64GiB of RAM and an NVMe device,
      with a kernel configuration that is the default of typical distributions
      (Debian in this case), without debug options enabled (kasan, kmemleak,
      slub debug, debug of page allocations, lock debugging, etc).
      
      The following script that calls fio was used:
      
        $ cat test-fsync.sh
        #!/bin/bash
      
        DEV=/dev/nvme0n1
        MNT=/mnt/btrfs
        MOUNT_OPTIONS="-o ssd -o space_cache=v2"
        MKFS_OPTIONS="-d single -m single"
      
        if [ $# -ne 5 ]; then
          echo "Use $0 NUM_JOBS FILE_SIZE FSYNC_FREQ BLOCK_SIZE [write|randwrite]"
          exit 1
        fi
      
        NUM_JOBS=$1
        FILE_SIZE=$2
        FSYNC_FREQ=$3
        BLOCK_SIZE=$4
        WRITE_MODE=$5
      
        if [ "$WRITE_MODE" != "write" ] && [ "$WRITE_MODE" != "randwrite" ]; then
          echo "Invalid WRITE_MODE, must be 'write' or 'randwrite'"
          exit 1
        fi
      
        cat <<EOF > /tmp/fio-job.ini
        [writers]
        rw=$WRITE_MODE
        fsync=$FSYNC_FREQ
        fallocate=none
        group_reporting=1
        direct=0
        bs=$BLOCK_SIZE
        ioengine=sync
        size=$FILE_SIZE
        directory=$MNT
        numjobs=$NUM_JOBS
        EOF
      
        echo "performance" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
      
        echo
        echo "Using config:"
        echo
        cat /tmp/fio-job.ini
        echo
      
        umount $MNT &> /dev/null
        mkfs.btrfs -f $MKFS_OPTIONS $DEV
        mount $MOUNT_OPTIONS $DEV $MNT
        fio /tmp/fio-job.ini
        umount $MNT
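
      For example, the run for 16 jobs with 512MiB files, fsync frequency 1
      and a 64KiB block size (one of the configurations in the results below)
      is started with:

        $ ./test-fsync.sh 16 512M 1 64K write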
      
      The results were the following:
      
      *************************
      *** sequential writes ***
      *************************
      
      ==== 1 job, 8GiB file, fsync frequency 1, block size 64KiB ====
      
      Before patch:
      
      WRITE: bw=36.6MiB/s (38.4MB/s), 36.6MiB/s-36.6MiB/s (38.4MB/s-38.4MB/s), io=8192MiB (8590MB), run=223689-223689msec
      
      After patch:
      
      WRITE: bw=40.2MiB/s (42.1MB/s), 40.2MiB/s-40.2MiB/s (42.1MB/s-42.1MB/s), io=8192MiB (8590MB), run=203980-203980msec
      (+9.8% throughput, -8.8% runtime)
      
      ==== 2 jobs, 4GiB files, fsync frequency 1, block size 64KiB ====
      
      Before patch:
      
      WRITE: bw=35.8MiB/s (37.5MB/s), 35.8MiB/s-35.8MiB/s (37.5MB/s-37.5MB/s), io=8192MiB (8590MB), run=228950-228950msec
      
      After patch:
      
      WRITE: bw=43.5MiB/s (45.6MB/s), 43.5MiB/s-43.5MiB/s (45.6MB/s-45.6MB/s), io=8192MiB (8590MB), run=188272-188272msec
      (+21.5% throughput, -17.8% runtime)
      
      ==== 4 jobs, 2GiB files, fsync frequency 1, block size 64KiB ====
      
      Before patch:
      
      WRITE: bw=50.1MiB/s (52.6MB/s), 50.1MiB/s-50.1MiB/s (52.6MB/s-52.6MB/s), io=8192MiB (8590MB), run=163446-163446msec
      
      After patch:
      
      WRITE: bw=64.5MiB/s (67.6MB/s), 64.5MiB/s-64.5MiB/s (67.6MB/s-67.6MB/s), io=8192MiB (8590MB), run=126987-126987msec
      (+28.7% throughput, -22.3% runtime)
      
      ==== 8 jobs, 1GiB files, fsync frequency 1, block size 64KiB ====
      
      Before patch:
      
      WRITE: bw=64.0MiB/s (68.1MB/s), 64.0MiB/s-64.0MiB/s (68.1MB/s-68.1MB/s), io=8192MiB (8590MB), run=126075-126075msec
      
      After patch:
      
      WRITE: bw=86.8MiB/s (91.0MB/s), 86.8MiB/s-86.8MiB/s (91.0MB/s-91.0MB/s), io=8192MiB (8590MB), run=94358-94358msec
      (+35.6% throughput, -25.2% runtime)
      
      ==== 16 jobs, 512MiB files, fsync frequency 1, block size 64KiB ====
      
      Before patch:
      
      WRITE: bw=79.8MiB/s (83.6MB/s), 79.8MiB/s-79.8MiB/s (83.6MB/s-83.6MB/s), io=8192MiB (8590MB), run=102694-102694msec
      
      After patch:
      
      WRITE: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=8192MiB (8590MB), run=76446-76446msec
      (+34.1% throughput, -25.6% runtime)
      
      ==== 32 jobs, 512MiB files, fsync frequency 1, block size 64KiB ====
      
      Before patch:
      
      WRITE: bw=93.2MiB/s (97.7MB/s), 93.2MiB/s-93.2MiB/s (97.7MB/s-97.7MB/s), io=16.0GiB (17.2GB), run=175836-175836msec
      
      After patch:
      
      WRITE: bw=111MiB/s (117MB/s), 111MiB/s-111MiB/s (117MB/s-117MB/s), io=16.0GiB (17.2GB), run=147001-147001msec
      (+19.1% throughput, -16.4% runtime)
      
      ==== 64 jobs, 512MiB files, fsync frequency 1, block size 64KiB ====
      
      Before patch:
      
      WRITE: bw=108MiB/s (114MB/s), 108MiB/s-108MiB/s (114MB/s-114MB/s), io=32.0GiB (34.4GB), run=302656-302656msec
      
      After patch:
      
      WRITE: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=32.0GiB (34.4GB), run=246003-246003msec
      (+23.1% throughput, -18.7% runtime)
      
      ************************
      ***   random writes  ***
      ************************
      
      ==== 1 job, 8GiB file, fsync frequency 16, block size 4KiB ====
      
      Before patch:
      
      WRITE: bw=11.5MiB/s (12.0MB/s), 11.5MiB/s-11.5MiB/s (12.0MB/s-12.0MB/s), io=8192MiB (8590MB), run=714281-714281msec
      
      After patch:
      
      WRITE: bw=11.6MiB/s (12.2MB/s), 11.6MiB/s-11.6MiB/s (12.2MB/s-12.2MB/s), io=8192MiB (8590MB), run=705959-705959msec
      (+0.9% throughput, -1.7% runtime)
      
      ==== 2 jobs, 4GiB files, fsync frequency 16, block size 4KiB ====
      
      Before patch:
      
      WRITE: bw=12.8MiB/s (13.5MB/s), 12.8MiB/s-12.8MiB/s (13.5MB/s-13.5MB/s), io=8192MiB (8590MB), run=638101-638101msec
      
      After patch:
      
      WRITE: bw=13.1MiB/s (13.7MB/s), 13.1MiB/s-13.1MiB/s (13.7MB/s-13.7MB/s), io=8192MiB (8590MB), run=625374-625374msec
      (+2.3% throughput, -2.0% runtime)
      
      ==== 4 jobs, 2GiB files, fsync frequency 16, block size 4KiB ====
      
      Before patch:
      
      WRITE: bw=15.4MiB/s (16.2MB/s), 15.4MiB/s-15.4MiB/s (16.2MB/s-16.2MB/s), io=8192MiB (8590MB), run=531146-531146msec
      
      After patch:
      
      WRITE: bw=17.8MiB/s (18.7MB/s), 17.8MiB/s-17.8MiB/s (18.7MB/s-18.7MB/s), io=8192MiB (8590MB), run=460431-460431msec
      (+15.6% throughput, -13.3% runtime)
      
      ==== 8 jobs, 1GiB files, fsync frequency 16, block size 4KiB ====
      
      Before patch:
      
      WRITE: bw=19.9MiB/s (20.8MB/s), 19.9MiB/s-19.9MiB/s (20.8MB/s-20.8MB/s), io=8192MiB (8590MB), run=412664-412664msec
      
      After patch:
      
      WRITE: bw=22.2MiB/s (23.3MB/s), 22.2MiB/s-22.2MiB/s (23.3MB/s-23.3MB/s), io=8192MiB (8590MB), run=368589-368589msec
      (+11.6% throughput, -10.7% runtime)
      
      ==== 16 jobs, 512MiB files, fsync frequency 16, block size 4KiB ====
      
      Before patch:
      
      WRITE: bw=29.3MiB/s (30.7MB/s), 29.3MiB/s-29.3MiB/s (30.7MB/s-30.7MB/s), io=8192MiB (8590MB), run=279924-279924msec
      
      After patch:
      
      WRITE: bw=30.4MiB/s (31.9MB/s), 30.4MiB/s-30.4MiB/s (31.9MB/s-31.9MB/s), io=8192MiB (8590MB), run=269258-269258msec
      (+3.8% throughput, -3.8% runtime)
      
      ==== 32 jobs, 512MiB files, fsync frequency 16, block size 4KiB ====
      
      Before patch:
      
      WRITE: bw=36.9MiB/s (38.7MB/s), 36.9MiB/s-36.9MiB/s (38.7MB/s-38.7MB/s), io=16.0GiB (17.2GB), run=443581-443581msec
      
      After patch:
      
      WRITE: bw=41.6MiB/s (43.6MB/s), 41.6MiB/s-41.6MiB/s (43.6MB/s-43.6MB/s), io=16.0GiB (17.2GB), run=394114-394114msec
      (+12.7% throughput, -11.2% runtime)
      
      ==== 64 jobs, 512MiB files, fsync frequency 16, block size 4KiB ====
      
      Before patch:
      
      WRITE: bw=45.9MiB/s (48.1MB/s), 45.9MiB/s-45.9MiB/s (48.1MB/s-48.1MB/s), io=32.0GiB (34.4GB), run=714614-714614msec
      
      After patch:
      
      WRITE: bw=48.8MiB/s (51.1MB/s), 48.8MiB/s-48.8MiB/s (51.1MB/s-51.1MB/s), io=32.0GiB (34.4GB), run=672087-672087msec
      (+6.3% throughput, -6.0% runtime)
      
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: do not commit logs and transactions during link and rename operations · 75b463d2
      Filipe Manana authored
      Since commit d4682ba0 ("Btrfs: sync log after logging new name") we
      started to commit logs, and fall back to transaction commits when we
      failed to log the new names or commit the logs, after link and rename
      operations when the target inodes (or their parents) were previously
      logged in the current transaction. This was to avoid losing directories
      despite an explicit fsync on them when they are ancestors of some inode
      that got a new name logged, due to a link or rename operation. However
      that adds the cost of starting IO and waiting for it to complete, which
      can cause higher latencies for applications.
      
      Instead of doing that, just make sure that when we log a new name for
      an inode we don't mark any of its ancestors as logged, so that if anyone
      does an fsync against any of them, without making any other change to
      them, the fsync commits the log. This way we only pay the cost of a log
      commit (or a transaction commit if something goes wrong or a new block
      group was created) if the application explicitly asks to fsync any of
      the parent directories.
      
      Using dbench, which mixes several filesystem operations including
      renames, revealed some significant latency gains. The following script,
      which uses dbench, was used to test this:
      
        #!/bin/bash
      
        DEV=/dev/nvme0n1
        MNT=/mnt/btrfs
        MOUNT_OPTIONS="-o ssd -o space_cache=v2"
        MKFS_OPTIONS="-m single -d single"
        THREADS=16
      
        echo "performance" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
        mkfs.btrfs -f $MKFS_OPTIONS $DEV
        mount $MOUNT_OPTIONS $DEV $MNT
      
        dbench -t 300 -D $MNT $THREADS
      
        umount $MNT
      
      The test was run on bare metal, no virtualization, on a box with 12
      cores (Intel i7-8700), 64GiB of RAM and an NVMe device, with a kernel
      configuration that is the default of typical distributions (Debian in
      this case), without debug options enabled (kasan, kmemleak, slub debug,
      debug of page allocations, lock debugging, etc).
      
      Results before this patch:
      
       Operation      Count    AvgLat    MaxLat
       ----------------------------------------
       NTCreateX    10750455     0.011   155.088
       Close         7896674     0.001     0.243
       Rename         455222     2.158  1101.947
       Unlink        2171189     0.067   121.638
       Deltree           256     2.425     7.816
       Mkdir             128     0.002     0.003
       Qpathinfo     9744323     0.006    21.370
       Qfileinfo     1707092     0.001     0.146
       Qfsinfo       1786756     0.001    11.228
       Sfileinfo      875612     0.003    21.263
       Find          3767281     0.025     9.617
       WriteX        5356924     0.011   211.390
       ReadX        16852694     0.003     9.442
       LockX           35008     0.002     0.119
       UnlockX         35008     0.001     0.138
       Flush          753458     4.252  1102.249
      
      Throughput 1128.35 MB/sec  16 clients  16 procs  max_latency=1102.255 ms
      
      Results after this patch:
      
       Operation      Count    AvgLat    MaxLat
       ----------------------------------------
       NTCreateX    11471098     0.012   448.281
       Close         8426396     0.001     0.925
       Rename         485746     0.123   267.183
       Unlink        2316477     0.080    63.433
       Deltree           288     2.830    11.144
       Mkdir             144     0.003     0.010
       Qpathinfo    10397420     0.006    10.288
       Qfileinfo     1822039     0.001     0.169
       Qfsinfo       1906497     0.002    14.039
       Sfileinfo      934433     0.004     2.438
       Find          4019879     0.026    10.200
       WriteX        5718932     0.011   200.985
       ReadX        17981671     0.003    10.036
       LockX           37352     0.002     0.076
       UnlockX         37352     0.001     0.109
       Flush          804018     5.015   778.033
      
      Throughput 1201.98 MB/sec  16 clients  16 procs  max_latency=778.036 ms
      (+6.5% throughput, -29.4% max latency, -75.8% rename latency)
      
      Test case generic/498 from fstests tests the scenario that the previously
      mentioned commit fixed.
      
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: do not take the log_mutex of the subvolume when pinning the log · 5522a27e
      Filipe Manana authored

      During a rename we pin the log to make sure no one commits a log that
      reflects an ongoing rename operation, as it might result in a committed
      log where it recorded the unlink of the old name without having
      recorded the new name. However, we take the subvolume's log_mutex
      before incrementing the log_writers counter, which is not necessary
      since that counter is atomic and we only remove the old name from the
      log and add the new name to it after we have incremented log_writers,
      ensuring that no one can commit the log between removing the old name
      and adding the new one.
      
      By taking the log_mutex lock we are just adding unnecessary contention on
      the lock, which can become visible for workloads that mix renames with
      fsyncs, writes for files opened with O_SYNC and unlink operations (if the
      inode or its parent were fsynced before in the current transaction).
      
      So just remove the lock and unlock of the subvolume's log_mutex at
      btrfs_pin_log_trans().
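
      After this change the function boils down to a single atomic increment
      (a sketch, assuming the structure described above):

        void btrfs_pin_log_trans(struct btrfs_root *root)
        {
                atomic_inc(&root->log_writers);
        }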
      
      Using dbench, which mixes different types of operations that end up
      taking that mutex (fsyncs, renames, unlinks and writes into files
      opened with O_SYNC), revealed some small gains. The following script,
      which calls dbench, was used:
      
        #!/bin/bash
      
        DEV=/dev/nvme0n1
        MNT=/mnt/btrfs
        MOUNT_OPTIONS="-o ssd -o space_cache=v2"
        MKFS_OPTIONS="-m single -d single"
        THREADS=32
      
        echo "performance" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
        mkfs.btrfs -f $MKFS_OPTIONS $DEV
        mount $MOUNT_OPTIONS $DEV $MNT
      
        dbench -s -t 600 -D $MNT $THREADS
      
        umount $MNT
      
      The test was run on bare metal, no virtualization, on a box with 12
      cores (Intel i7-8700), 64GiB of RAM and an NVMe device, with a kernel
      configuration that is the default of typical distributions (Debian in
      this case), without debug options enabled (kasan, kmemleak, slub debug,
      debug of page allocations, lock debugging, etc).
      
      Results before this patch:
      
       Operation      Count    AvgLat    MaxLat
       ----------------------------------------
       NTCreateX    4410848     0.017   738.640
       Close        3240222     0.001     0.834
       Rename        186850     7.478  1272.476
       Unlink        890875     0.128   785.018
       Deltree          128     2.846    12.081
       Mkdir             64     0.002     0.003
       Qpathinfo    3997659     0.009    11.171
       Qfileinfo     701307     0.001     0.478
       Qfsinfo       733494     0.002     1.103
       Sfileinfo     359362     0.004     3.266
       Find         1546226     0.041     4.128
       WriteX       2202803     7.905  1376.989
       ReadX        6917775     0.003     3.887
       LockX          14392     0.002     0.043
       UnlockX        14392     0.001     0.085
       Flush         309225     0.128  1033.936
      
      Throughput 231.555 MB/sec (sync open)  32 clients  32 procs  max_latency=1376.993 ms
      
      Results after this patch:
      
       Operation      Count    AvgLat    MaxLat
       ----------------------------------------
       NTCreateX    4603244     0.017   232.776
       Close        3381299     0.001     1.041
       Rename        194871     7.251  1073.165
       Unlink        929730     0.133   119.233
       Deltree          128     2.871    10.199
       Mkdir             64     0.002     0.004
       Qpathinfo    4171343     0.009    11.317
       Qfileinfo     731227     0.001     1.635
       Qfsinfo       765079     0.002     3.568
       Sfileinfo     374881     0.004     1.220
       Find         1612964     0.041     4.675
       WriteX       2296720     7.569  1178.204
       ReadX        7213633     0.003     3.075
       LockX          14976     0.002     0.076
       UnlockX        14976     0.001     0.061
       Flush         322635     0.102   579.505
      
      Throughput 241.4 MB/sec (sync open)  32 clients  32 procs  max_latency=1178.207 ms
      (+4.3% throughput, -14.4% max latency)
      
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: send: remove indirect callback parameter for changed_cb · 1b51d6fc
      David Sterba authored

      There's a custom callback passed to btrfs_compare_trees which happens
      to be named exactly the same as the existing function implementing it.
      This is confusing, and the indirection is not necessary for our needs.
      The compiler is clever enough to call it directly, so there's
      effectively no change.
      
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: scrub: rename ratelimit state variable to avoid shadowing · 8bb1cf1b
      David Sterba authored

      There's already a _rs defined within ctree.h:btrfs_printk_ratelimited;
      local variables should not use the _ prefix, to avoid such name clashes
      with macro-local variables.
      
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove unnecessarily shadowed variables · 0af447d0
      David Sterba authored

      In btrfs_orphan_cleanup, there's another instance of fs_info, but it's
      the same as the one we already have.

      In btrfs_backref_finish_upper_links, rb_node is of the same type and is
      used as a temporary cursor into the tree.
      
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: compression: move declarations to header · cb4c9198
      David Sterba authored

      The declarations of the compression algorithm callbacks are in the
      .c files, as that's where they're used. The compiler warns that there
      are no declarations for public functions when compiling
      lzo.c/zlib.c/zstd.c. Fix that by moving the declarations to the header,
      as it's the common place for all of them.
      
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove const from btrfs_feature_set_name · 9e6df7ce
      David Sterba authored

      The function btrfs_feature_set_name returns a const char pointer; the
      second const is not necessary and is reported as a warning:
      
       In file included from fs/btrfs/space-info.c:6:
       fs/btrfs/sysfs.h:16:1: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
          16 | const char * const btrfs_feature_set_name(enum btrfs_feature_set set);
             | ^~~~~
      
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: cleanup calculation of lockend in lock_and_cleanup_extent_if_need() · e21139c6
      Qu Wenruo authored

      We're just rounding up to the sector size to calculate the lockend.
      There is no need for the extra length calculation; a direct
      round_up() is enough.
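
      A sketch of the resulting calculation (the variable names are assumed
      from the surrounding code):

        lockend = round_up(pos + write_bytes, fs_info->sectorsize) - 1;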
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: fix possible infinite loop in data async reclaim · c4923027
      Josef Bacik authored

      Dave reported an issue where generic/102 would sometimes hang.  This
      turned out to be because we'd get into this spot where we were no longer
      making progress on data reservations because our exit condition was not
      met.  The loop is basically
      
      while (!space_info->full && !list_empty(&space_info->tickets))
      	flush_space(space_info, flush_state);
      
      where flush state is our various flush states, but doesn't include
      ALLOC_CHUNK_FORCE.  This is because we actually lead with allocating
      chunks, and so the assumption was that once you got to the actual
      flushing states you could no longer allocate chunks.  This was a stupid
      assumption, because you could have deleted block groups that would be
      reclaimed by a transaction commit, thus unsetting space_info->full.
      This is essentially what happens with generic/102, and so sometimes
      you'd get stuck in the flushing loop because we weren't allocating
      chunks, but flushing space wasn't giving us what we needed to make
      progress.
      
      Fix this by adding ALLOC_CHUNK_FORCE to the end of our flushing states,
      that way we will eventually bail out because we did end up with
      space_info->full if we freed a chunk previously.  Otherwise, as is the
      case for this test, we'll allocate our chunk and continue on our happy
      merry way.
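
      A sketch of the resulting data flush state list (the exact set and
      order of states at this point in the series is an assumption):

        static const enum btrfs_flush_state data_flush_states[] = {
                FLUSH_DELALLOC_WAIT,
                RUN_DELAYED_IPUTS,
                COMMIT_TRANS,
                ALLOC_CHUNK_FORCE,      /* added by this fix */
        };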
      
      Reported-by: David Sterba <dsterba@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: add a comment explaining the data flush steps · 1a7a92c8
      Josef Bacik authored

      The data flushing steps are not obvious to people other than myself and
      Chris.  Write a giant comment explaining the reasoning behind each flush
      step for data as well as why it is in that particular order.
      
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: do async reclaim for data reservations · 57056740
      Josef Bacik authored

      Now that we have the data ticketing stuff in place, move normal data
      reservations to use an async reclaim helper to satisfy tickets.  Before
      we could have multiple tasks race in and both allocate chunks, resulting
      in more data chunks than we would necessarily need.  Serializing these
      allocations and making a single thread responsible for flushing will
      only allocate chunks as needed, as well as cut down on transaction
      commits and other flush related activities.
      
      Priority reservations will still work as they have before, simply
      trying to allocate a chunk until they can make their reservation.
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: flush delayed refs when trying to reserve data space · cb3e3930
      Josef Bacik authored

      We can end up with freed extents in the delayed refs, and thus
      may_commit_transaction() may not think we have enough pinned space to
      commit the transaction and we'll ENOSPC early.  Handle this by running
      the delayed refs in order to make sure pinned is uptodate before we try
      to commit the transaction.
      
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: run delayed iputs before committing the transaction for data · 327feeeb
      Josef Bacik authored

      Before we were waiting on iputs after we committed the transaction, but
      this doesn't really make much sense.  We want to reclaim any space we
      may have in order to be more likely to commit the transaction, due to
      pinned space being added by running the delayed iputs.  Fix this by
      making delayed iputs run before committing the transaction.
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: don't force commit if we are data · bb86bd3d
      Josef Bacik authored

      We used to unconditionally commit the transaction at least 2 times and
      then on the 3rd try check against pinned space to make sure committing
      the transaction was worth the effort.  This is overkill, we know nobody
      is going to steal our reservation, and if we can't make our reservation
      with the pinned amount simply bail out.
      
      This also cleans up the passing of bytes_needed to
      may_commit_transaction, as that was the thing we added into place in
      order to accomplish this behavior.  We no longer need it so remove that
      mess.
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: drop the commit_cycles stuff for data reservations · 02827001
      Josef Bacik authored

      This was an old wart left over from how we previously did data
      reservations.  Before we could have people race in and take a
      reservation while we were flushing space, so we needed to make sure we
      looped a few times before giving up.  Now that we're using the ticketing
      infrastructure we don't have to worry about this and can drop the logic
      altogether.
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: use the same helper for data and metadata reservations · f3bda421
      Josef Bacik authored

      Now that data reservations follow the same pattern as metadata
      reservations we can simply rename __reserve_metadata_bytes to
      __reserve_bytes and use that helper for data reservations.
      
      Things to keep in mind: btrfs_can_overcommit() returns 0 for data,
      because we can never overcommit.  We also will never pass in FLUSH_ALL
      for data, so we'll simply be added to the priority list and go straight
      into handle_reserve_ticket.
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: serialize data reservations if we are flushing · 0532a6f8
      Josef Bacik authored

      Nikolay reported a problem where generic/371 would fail sometimes with a
      slow drive.  The gist of the test is that we fallocate a file in
      parallel with a pwrite of a different file.  These two files combined
      are smaller than the file system, but sometimes the pwrite would ENOSPC.
      
      A fair bit of investigation uncovered the fact that the fallocate
      workload was racing in and grabbing the free space that the pwrite
      workload was trying to free up so it could make its own reservation.
      After a few loops of this eventually the pwrite workload would error out
      with an ENOSPC.
      
      We've had the same problem with metadata as well, and we serialized all
      metadata allocations to satisfy this problem.  This wasn't usually a
      problem with data because data reservations are more straightforward,
      but obviously could still happen.
      
      Fix this by not allowing reservations to occur if there are any pending
      tickets waiting to be satisfied on the space info.
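
      A sketch of that check (field names taken from struct btrfs_space_info;
      the surrounding logic is simplified):

        /* Don't take space that queued tickets are already waiting for. */
        if (!list_empty(&space_info->tickets) ||
            !list_empty(&space_info->priority_tickets))
                pending_tickets = true;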
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: use ticketing for data space reservations · 1004f686
      Josef Bacik authored

      Now that we have all the infrastructure in place, use the ticketing
      infrastructure to make data allocations.  This still maintains the exact
      same flushing behavior, but now we're using tickets to get our
      reservations satisfied.
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: add btrfs_reserve_data_bytes and use it · 8698fc4e
      Josef Bacik authored

      Create a new function btrfs_reserve_data_bytes() in order to handle data
      reservations.  This uses the new flush types and flush states to handle
      making data reservations.
      
      This patch specifically does not change any functionality, and is
      purposefully not cleaned up in order to make bisection easier for the
      future patches.  The new helper is identical to the old helper in how it
      handles data reservations.  We first try to force a chunk allocation,
      and then we run through the flush states all at once and in the same
      order that they were done with the old helper.
      
      Subsequent patches will clean this up and change the behavior of the
      flushing, and it is important to keep those changes separate so we can
      easily bisect down to the patch that caused the regression, rather than
      the patch that made us start using the new infrastructure.
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: add the data transaction commit logic into may_commit_transaction · a1ed0a82
      Josef Bacik authored

      Data space flushing currently unconditionally commits the transaction
      twice in a row, and the last time it checks if there's enough pinned
      extents to satisfy its reservation before deciding to commit the
      transaction for the 3rd and final time.
      
      Encode this logic into may_commit_transaction().  In the next patch we
      will pass in U64_MAX for bytes_needed the first two times, and the final
      time we will pass in the actual bytes we need so the normal logic will
      apply.
      
      This patch exists solely to make the logical changes I will make to the
      flushing state machine separate to make it easier to bisect any
      performance related regressions.
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: add flushing states for handling data reservations · 058e6d1d
      Josef Bacik authored

      Currently the way we do data reservations is by seeing if we have enough
      space in our space_info.  If we do not and we're a normal inode we'll
      
      1) Attempt to force a chunk allocation until we can't anymore.
      2) If that fails we'll flush delalloc, then commit the transaction, then
         run the delayed iputs.
      
      If we are a free space inode we're only allowed to force a chunk
      allocation.  In order to use the normal flushing mechanism we need to
      encode this into a flush state array for normal inodes.  Since both will
      start with allocating chunks until the space info is full, there is no
      need to add this as a flush state; it will be handled specially.
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: check tickets after waiting on ordered extents · 448b966b
      Josef Bacik authored

      Right now if the space is freed up after the ordered extents complete
      (which is likely since the reservations are held until they complete),
      we would do extra delalloc flushing before we'd notice that we didn't
      have any more tickets.  Fix this by moving the tickets check after our
      wait_ordered_extents check.
      
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: use btrfs_start_delalloc_roots in shrink_delalloc · 38d715f4
      Josef Bacik authored

      The original iteration of flushing had us flushing delalloc and then
      checking to see if we could make our reservation, thus we were very
      careful about how many pages we would flush at once.
      
      But now that everything is async and we satisfy tickets as the space
      becomes available we don't have to keep track of any of this, simply
      try and flush the number of dirty inodes we may have in order to
      reclaim space to make our reservation.  This cleans up our delalloc
      flushing significantly.
      
      The async_pages stuff is dropped because btrfs_start_delalloc_roots()
      handles the case that we generate async extents for us, so we no longer
      require this extra logic.
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: use the btrfs_space_info_free_bytes_may_use helper for delalloc · 39753e4a
      Josef Bacik authored

      We are going to use the ticket infrastructure for data, so use the
      btrfs_space_info_free_bytes_may_use() helper in
      btrfs_free_reserved_data_space_noquota() so we get the
      btrfs_try_granting_tickets call when we free our reservation.
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: call btrfs_try_granting_tickets when reserving space · 99ffb43e
      Josef Bacik authored

      If we have compression on we could free up more space than we reserved,
      and thus be able to make a space reservation.  Add the call for this
      scenario.
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: call btrfs_try_granting_tickets when unpinning anything · 2732798c
      Josef Bacik authored

      When unpinning we were only calling btrfs_try_granting_tickets() if
      global_rsv->space_info == space_info, which is problematic because we
      use ticketing for SYSTEM chunks, and want to use it for DATA as well.
      Fix this by moving this call outside of that if statement.
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: call btrfs_try_granting_tickets when freeing reserved bytes · 3308234a
      Josef Bacik authored

      We were missing a call to btrfs_try_granting_tickets in
      btrfs_free_reserved_bytes, so add it to handle the case where we're able
      to satisfy an allocation because we've freed a pending reservation.
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: make ALLOC_CHUNK use the space info flags · c6c45303
      Josef Bacik authored

      We have traditionally used flush_space() to flush metadata space, so
      we've been unconditionally using btrfs_metadata_alloc_profile() for our
      profile to allocate a chunk. However if we're going to use this for
      data we need to use btrfs_get_alloc_profile() on the space_info we pass
      in.
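
      A sketch of the difference in the ALLOC_CHUNK handling (simplified):

        /* Before: always the metadata profile. */
        flags = btrfs_metadata_alloc_profile(fs_info);

        /* After: the profile of the space_info being flushed. */
        flags = btrfs_get_alloc_profile(fs_info, space_info->flags);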
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: make shrink_delalloc take space_info as an arg · 920a9958
      Josef Bacik authored

      Currently shrink_delalloc just looks up the metadata space info, but
      this won't work if we're trying to reclaim space for data chunks.  We
      get the right space_info we want passed into flush_space, so simply pass
      that along to shrink_delalloc.
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: handle U64_MAX for shrink_delalloc · d7f81fac
      Josef Bacik authored

      Data allocations are going to want to pass in U64_MAX for flushing
      space, adjust shrink_delalloc to handle this properly.
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove orig from shrink_delalloc · 288be2d9
      Josef Bacik authored
      We don't use this anywhere inside of shrink_delalloc since commit
      17024ad0 ("Btrfs: fix early ENOSPC due to delalloc"), so remove it.
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: change nr to u64 in btrfs_start_delalloc_roots · b4912139
      Josef Bacik authored

      We have btrfs_wait_ordered_roots() which takes a u64 for nr, but
      btrfs_start_delalloc_roots() takes an int for nr, which makes using
      them in conjunction, especially for something like (u64)-1, annoying
      and inconsistent.  Fix btrfs_start_delalloc_roots() to take a u64 for
      nr and adjust start_delalloc_inodes() and its callers appropriately.

      This means we've adjusted start_delalloc_inodes() to take a pointer
      for nr, since we want to preserve the ability for
      start_delalloc_inodes() to return an error; simply make it do the nr
      adjusting as necessary.
      
      Part of adjusting the callers to this means changing
      btrfs_writeback_inodes_sb_nr() to take a u64 for items.  This may be
      confusing because it seems unrelated, but the caller of
      btrfs_writeback_inodes_sb_nr() already passes in a u64, it's just the
      function variable that needs to be changed.
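
      A sketch of the adjusted signatures (simplified; only the nr types are
      the point here):

        int btrfs_start_delalloc_roots(struct btrfs_fs_info *fs_info, u64 nr);
        static int start_delalloc_inodes(struct btrfs_root *root, u64 *nr,
                                         bool snapshot);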
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Tested-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove fsid argument from btrfs_sysfs_update_sprout_fsid · 8e560081
      Nikolay Borisov authored

      It can be accessed from 'fs_devices' as it's identical to
      fs_info->fs_devices. Also add a comment about why we are calling the
      function. No semantic changes.
      
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove spurious BUG_ON in btrfs_get_extent · 57297c1e
      Nikolay Borisov authored

      That BUG_ON cannot ever trigger because as the comment there states -
      'err' is always set. Simply remove it as it brings no value.
      
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: delete duplicated words + other fixes in comments · 260db43c
      Randy Dunlap authored

      Delete repeated words in fs/btrfs/.
      {to, the, a, and old}
      and change "into 2 part" to "into 2 parts".
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: tracepoints: output proper root owner for trace_find_free_extent() · 437490fe
      Qu Wenruo authored
      The current trace event always outputs results like this:
      
       find_free_extent: root=2(EXTENT_TREE) len=16384 empty_size=0 flags=4(METADATA)
       find_free_extent: root=2(EXTENT_TREE) len=16384 empty_size=0 flags=4(METADATA)
       find_free_extent: root=2(EXTENT_TREE) len=8192 empty_size=0 flags=1(DATA)
       find_free_extent: root=2(EXTENT_TREE) len=8192 empty_size=0 flags=1(DATA)
       find_free_extent: root=2(EXTENT_TREE) len=4096 empty_size=0 flags=1(DATA)
       find_free_extent: root=2(EXTENT_TREE) len=4096 empty_size=0 flags=1(DATA)
      
      It's saying we're allocating data extents for the EXTENT tree, which
      is not even possible.
      
      It's because we always use EXTENT tree as the owner for
      trace_find_free_extent() without using the @root from
      btrfs_reserve_extent().
      
      This patch will change the parameter to use the proper @root for
      trace_find_free_extent().
      
      Now it looks much better:
      
       find_free_extent: root=5(FS_TREE) len=16384 empty_size=0 flags=36(METADATA|DUP)
       find_free_extent: root=5(FS_TREE) len=8192 empty_size=0 flags=1(DATA)
       find_free_extent: root=5(FS_TREE) len=16384 empty_size=0 flags=1(DATA)
       find_free_extent: root=5(FS_TREE) len=4096 empty_size=0 flags=1(DATA)
       find_free_extent: root=5(FS_TREE) len=8192 empty_size=0 flags=1(DATA)
       find_free_extent: root=5(FS_TREE) len=16384 empty_size=0 flags=36(METADATA|DUP)
       find_free_extent: root=7(CSUM_TREE) len=16384 empty_size=0 flags=36(METADATA|DUP)
       find_free_extent: root=2(EXTENT_TREE) len=16384 empty_size=0 flags=36(METADATA|DUP)
       find_free_extent: root=1(ROOT_TREE) len=16384 empty_size=0 flags=36(METADATA|DUP)
      
      Reported-by: Hans van Kranenburg <hans@knorrie.org>
      CC: stable@vger.kernel.org # 5.4+
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>