  1. Jan 07, 2022
    • btrfs: remove reada infrastructure · f26c9238
      Qu Wenruo authored
      
      
      Currently there is only one user for btrfs metadata readahead, and
      that's scrub.
      
      But even for that single user it does not provide the functionality
      scrub needs, as scrub wants reada for the commit root, which the
      current readahead can't provide (although it would be pretty easy to
      add such a feature).
      
      Despite this, there are some extra problems related to metadata
      readahead:
      
      - Duplicated feature with btrfs_path::reada
      
      - Partly duplicated feature of btrfs_fs_info::buffer_radix
        Btrfs already caches its metadata in buffer_radix, while readahead
        tries to read the tree block whether or not it's already cached.
      
      - Poor layer separation
        Metadata readahead works more or less at the device level.
        This is definitely not the correct layer for it, since metadata
        lives in the btrfs logical address space and should not bother with
        devices at all.

        This brings extra chances for bugs to sneak in, while adding
        unnecessary complexity.
      
      - Dead code
        At the very beginning of scrub.c we have #undef DEBUG, rendering all
        the debug related code useless and impossible to test.
      
      Thus I propose to remove the metadata readahead mechanism
      completely.
      
      [BENCHMARK]
      There is a full benchmark of the scrub performance difference between
      the old btrfs_reada_add() based readahead and btrfs_path::reada.
      
      For the worst case (no dirty metadata, slow HDD), there could be a 5%
      performance drop for scrub.
      For other cases (even SATA SSD), there is no distinguishable performance
      difference.
      
      The numbers are the reported scrub speed, in MiB/s.
      The resolution is limited by the reported duration, which only has a
      resolution of 1 second.
      
      	Old		New		Diff
      SSD	455.3		466.332		+2.42%
      HDD	103.927 	98.012		-5.69%
      
      Comprehensive test methodology is in the cover letter of the patch.
      
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      f26c9238
    • btrfs: scrub: use btrfs_path::reada for extent tree readahead · dcf62b20
      Qu Wenruo authored
      
      
      For scrub, we trigger readahead for two trees: the extent tree to know
      where to scrub, and the csum tree to get the data checksums.
      
      For csum tree we already trigger readahead in
      btrfs_lookup_csums_range(), by setting path->reada.
      But for extent tree we don't have any path based readahead.
      
      Add the readahead for extent tree as well, so we can later remove the
      btrfs_reada_add() based readahead.
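
      Path based readahead is requested by setting btrfs_path::reada before
      doing the tree search. As a rough illustration (not the exact hunk of
      this patch), for the extent tree walk in scrub it looks like:

          path->reada = READA_FORWARD;
          key.objectid = logical;
          key.type = BTRFS_EXTENT_ITEM_KEY;
          key.offset = 0;
          ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);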
      
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      dcf62b20
    • btrfs: scrub: remove the unnecessary path parameter for scrub_raid56_parity() · 2522dbe8
      Qu Wenruo authored
      
      
      In function scrub_stripe() we allocated two btrfs_path's, one @path for
      extent tree search and another @ppath for full stripe extent tree search
      for RAID56.
      
      This is totally unnecessary, as the @ppath usage is completely inside
      scrub_raid56_parity(), thus we can move the path allocation into
      scrub_raid56_parity() completely.
      
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      2522dbe8
    • btrfs: refactor unlock_up · c1227996
      Nikolay Borisov authored
      
      
      The purpose of this function is to unlock all nodes in a btrfs path
      which are above 'lowest_unlock' and whose used slot is different than 0.
      As such it uses a slightly awkward structure of 'if' statements as well
      as a somewhat cryptic "no_skip" control variable which denotes whether
      we should check the current level for skipability or not.
      
      This patch does the following (cosmetic) refactorings:
      
      * Renames 'no_skip' to 'check_skip' and makes it a boolean. This
        variable controls whether we are below the lowest_unlock/skip_level
        levels.
      
      * Consolidates the 2 conditions which warrant checking whether the
        current level should be skipped under 1 common if (check_skip) branch;
        this increases the indentation level but is not critical.
      
      * Consolidates the 'skip_level < i && i >= lowest_unlock' and
        'i >= lowest_unlock && i > skip_level' conditions into a common branch
        since they are identical (see the simplified sketch after this list).
      
      * Eliminates the local extent_buffer variable as in this case it doesn't
        bring anything to function readability.
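
      A simplified sketch of the resulting loop shape (the keep_locks
      handling and the write_lock_level update are omitted here; this is not
      the literal kernel code):

          bool check_skip = true;
          int skip_level = level;
          int i;

          for (i = level; i < BTRFS_MAX_LEVEL; i++) {
                  if (!path->nodes[i] || !path->locks[i])
                          break;

                  if (check_skip) {
                          if (path->slots[i] == 0) {
                                  skip_level = i + 1;
                                  continue;
                          }
                  }

                  if (i >= lowest_unlock && i > skip_level) {
                          check_skip = false;
                          btrfs_tree_unlock_rw(path->nodes[i], path->locks[i]);
                          path->locks[i] = 0;
                  }
          }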
      
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      c1227996
    • btrfs: skip transaction commit after failure to create subvolume · 1b58ae0e
      Filipe Manana authored
      
      
      At ioctl.c:create_subvol(), when we fail to create a subvolume we always
      commit the transaction. In most cases this is a no-op, since all the
      error paths, except for one, abort the transaction - the only exception
      is when we fail to insert the new root item into the root tree, in which
      case we don't abort the transaction because we didn't do anything
      irreversible. However we still end up committing the transaction, which,
      although not a functional problem, adds unnecessary rotation of the
      backup roots in the superblock and unnecessary work.
      
      So change that to commit a transaction only when no error happened,
      otherwise just call btrfs_end_transaction() to release our reference on
      the transaction.
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      1b58ae0e
    • btrfs: zoned: fix chunk allocation condition for zoned allocator · 82187d2e
      Naohiro Aota authored
      
      
      The ZNS specification defines a limit on the number of "active" zones.
      That limit forces us to limit the number of block groups which can be
      used for an allocation at the same time. To not exceed the limit, we
      reuse the existing active block groups as much as possible when we
      can't activate any other zone without sacrificing an already activated
      block group, as done in commit a85f05e5 ("btrfs: zoned: avoid chunk
      allocation if active block group has enough space").
      
      However, the check is wrong in two ways. First, it checks the
      condition for every raid index (ffe_ctl->index). Even if it reaches
      the condition and "ffe_ctl->max_extent_size >=
      ffe_ctl->min_alloc_size" is met, there can be other block groups
      having enough space to hold ffe_ctl->num_bytes. (Actually, this won't
      happen in the current zoned code as it only supports SINGLE
      profile. But it can happen once other RAID types are enabled.)
      
      Second, it checks the active zone availability depending on the
      raid index. The raid index is just an index for
      space_info->block_groups, so it has nothing to do with chunk allocation.
      
      These mistakes cause a faulty allocation in a certain situation.
      Consider we are running zoned btrfs on a device whose max_active_zone
      == 0 (no limit), and suppose no block group has room to fit
      ffe_ctl->num_bytes but some have room to meet ffe_ctl->min_alloc_size
      (i.e. max_extent_size > num_bytes >= min_alloc_size).

      In this situation, the following occurs:
      
      - With SINGLE raid_index, it reaches the chunk allocation checking
        code
      - The check returns true because we can activate a new zone (no limit)
      - But, before allocating the chunk, it iterates to the next raid index
        (RAID5)
      - Since there are no RAID5 block groups on zoned mode, it again
        reaches the check code
      - The check returns false because of btrfs_can_activate_zone()'s "if
        (raid_index != BTRFS_RAID_SINGLE)" part
      - That results in returning -ENOSPC without allocating a new chunk
      
      As a result, we end up hitting -ENOSPC too early.
      
      Move the check to the right place in the can_allocate_chunk() hook,
      and do the active zone check depending on the allocation flag, not on
      the raid index.
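
      With that, the zoned case of the new hook essentially becomes the
      following (illustrative sketch, details may differ from the actual
      patch):

          case BTRFS_EXTENT_ALLOC_ZONED:
                  /*
                   * If a currently active block group still has enough room
                   * for this allocation and we cannot activate any other
                   * zone, do not allocate a new chunk.
                   */
                  if (ffe_ctl->max_extent_size >= ffe_ctl->min_alloc_size &&
                      !btrfs_can_activate_zone(fs_info->fs_devices,
                                               ffe_ctl->flags))
                          return false;
                  return true;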
      
      CC: stable@vger.kernel.org # 5.16
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      82187d2e
    • btrfs: add extent allocator hook to decide to allocate chunk or not · 50475cd5
      Naohiro Aota authored
      
      
      Introduce a new hook for an extent allocator policy. With the new
      hook, a policy can decide to allocate a new block group or not. If
      not, it will return -ENOSPC, so btrfs_reserve_extent() will cut the
      allocation size in half and retry the allocation if min_alloc_size is
      large enough.
      
      The hook has a placeholder and will be replaced with the real
      implementation in the next patch.
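
      For reference, the halve-and-retry logic in btrfs_reserve_extent() that
      this relies on works roughly like the following (simplified sketch, not
      the literal code):

          again:
                  ret = find_free_extent(root, num_bytes, ...);
                  if (ret == -ENOSPC && !final_tried && ins->offset) {
                          num_bytes = min(num_bytes >> 1, ins->offset);
                          num_bytes = round_down(num_bytes, fs_info->sectorsize);
                          num_bytes = max(num_bytes, min_alloc_size);
                          if (num_bytes == min_alloc_size)
                                  final_tried = true;
                          goto again;
                  }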
      
      CC: stable@vger.kernel.org # 5.16
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      50475cd5
    • btrfs: zoned: unset dedicated block group on allocation failure · 1ada69f6
      Naohiro Aota authored
      
      
      Allocating an extent from a block group can fail for various reasons.
      When an allocation from a dedicated block group (for tree-log or
      relocation data) fails, we need to unregister it as a dedicated one so
      that we can allocate a new block group for the dedicated one.
      
      However, we return early in case the block group is read-only, fully
      used, or not able to activate the zone. As a result, we keep the
      non-usable block group as a dedicated one, leading to further
      allocation failures. With many block groups, the allocator will iterate
      a hopeless loop to find a free extent, resulting in a hung task.
      
      Fix the issue by delaying the return and doing the proper cleanups.
      
      CC: stable@vger.kernel.org # 5.16
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      1ada69f6
    • btrfs: zoned: drop redundant check for REQ_OP_ZONE_APPEND and btrfs_is_zoned · 73672710
      Johannes Thumshirn authored
      
      
      REQ_OP_ZONE_APPEND can only work on zoned devices, so it is redundant to
      check if the filesystem is zoned when REQ_OP_ZONE_APPEND is set as the
      bio's bio_op.
      
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      73672710
    • btrfs: zoned: sink zone check into btrfs_repair_one_zone · 554aed7d
      Johannes Thumshirn authored
      
      
      Sink zone check into btrfs_repair_one_zone() so we don't need to do it
      in all callers.
      
      Also as btrfs_repair_one_zone() doesn't return a sensible error, make it
      a boolean function and return false in case it got called on a non-zoned
      filesystem and true on a zoned filesystem.
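
      The resulting shape is roughly (sketch):

          bool btrfs_repair_one_zone(struct btrfs_fs_info *fs_info, u64 logical)
          {
                  if (!btrfs_is_zoned(fs_info))
                          return false;

                  /* ... existing zone repair logic ... */

                  return true;
          }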
      
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      554aed7d
    • btrfs: zoned: simplify btrfs_check_meta_write_pointer · 8fdf54fe
      Johannes Thumshirn authored
      
      
      btrfs_check_meta_write_pointer() will always be called with a NULL
      'cache_ret' argument.
      
      As there's no need to check if we have a valid block_group passed in,
      remove these checks.
      
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      8fdf54fe
    • btrfs: zoned: encapsulate inode locking for zoned relocation · 869f4cdc
      Johannes Thumshirn authored
      
      
      Encapsulate the inode lock needed for serializing the data relocation
      writes on a zoned filesystem into a helper.
      
      This streamlines the code reading flow and hides special casing for
      zoned filesystems.
      
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      869f4cdc
    • btrfs: sysfs: add devinfo/fsid to retrieve actual fsid from the device · a26d60de
      Anand Jain authored
      
      
      In the case of a seed device, the fsid can be different from the
      mounted sprout fsid.  Userland has to read the device superblock to
      know the fsid, but that approach fails if the device is missing. So add
      a sysfs interface devinfo/<devid>/fsid to show the fsid of the device.
      
      For example:
        $ cd /sys/fs/btrfs/b10b02a5-f9de-4276-b9e8-2bfd09a578a8
      
        $ cat devinfo/1/fsid
        c44d771f-639d-4df3-99ec-5bc7ad2af93b
        $ cat  devinfo/3/fsid
        b10b02a5-f9de-4276-b9e8-2bfd09a578a8
      
      Though it's related to seeding, the name of the sysfs file is plain fsid as it
      matches what blkid says.  A path to the device's fsid will aid scripting.
      
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      a26d60de
    • btrfs: reserve extra space for the free space tree · c18e3235
      Josef Bacik authored
      
      
      Filipe reported a problem where sometimes he'd get an ENOSPC abort when
      running delayed refs with generic/619 and the free space tree enabled.
      This is partly because we do not reserve space for modifying the free
      space tree, nor do we have a block rsv associated with that tree.
      
      The delayed_refs_rsv tracks the amount of space required to run delayed
      refs.  It assumes that 1 modification means 1 change to the extent root.
      With the free space tree this turns into 2 changes, because modifying 1
      extent means updating the extent tree and potentially updating the free
      space tree to either remove that entry or add the free space.  Thus if
      we have the FST enabled, simply double the reservation size for our
      modification.
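
      The described doubling boils down to something like this in the delayed
      refs reservation calculation (illustrative sketch; the number of
      pending ref updates is called nr_refs here):

          num_bytes = btrfs_calc_insert_metadata_size(fs_info, nr_refs);
          if (btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE))
                  num_bytes *= 2; /* extent tree + free space tree */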
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      c18e3235
    • btrfs: include the free space tree in the global rsv minimum calculation · 9506f953
      Josef Bacik authored
      
      
      Filipe reported a problem where generic/619 was failing with an ENOSPC
      abort while running delayed refs, like the following
      
        BTRFS: Transaction aborted (error -28)
        WARNING: CPU: 3 PID: 522920 at fs/btrfs/free-space-tree.c:1049 add_to_free_space_tree+0xe5/0x110 [btrfs]
        CPU: 3 PID: 522920 Comm: kworker/u16:19 Tainted: G        W         5.16.0-rc2-btrfs-next-106 #1
        Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
        Workqueue: events_unbound btrfs_async_reclaim_metadata_space [btrfs]
        RIP: 0010:add_to_free_space_tree+0xe5/0x110 [btrfs]
        RSP: 0000:ffffa65087fb7b20 EFLAGS: 00010282
        RAX: 0000000000000000 RBX: 0000000000001000 RCX: 0000000000000000
        RDX: 0000000000000001 RSI: ffffffff9131eeaa RDI: 00000000ffffffff
        RBP: ffff8d62e26481b8 R08: ffffffff9ad97ce0 R09: 0000000000000001
        R10: 0000000000000000 R11: 0000000000000001 R12: 00000000ffffffe4
        R13: ffff8d61c25fe688 R14: ffff8d61ebd88800 R15: ffff8d61ebd88a90
        FS:  0000000000000000(0000) GS:ffff8d64ed400000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        CR2: 00007fa46a8b1000 CR3: 0000000148d18003 CR4: 0000000000370ee0
        DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
        DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
        Call Trace:
         <TASK>
         __btrfs_free_extent+0x516/0x950 [btrfs]
         __btrfs_run_delayed_refs+0x2b1/0x1250 [btrfs]
         btrfs_run_delayed_refs+0x86/0x210 [btrfs]
         flush_space+0x403/0x630 [btrfs]
         ? call_rcu_tasks_generic+0x50/0x80
         ? lock_release+0x223/0x4a0
         ? btrfs_get_alloc_profile+0xb5/0x290 [btrfs]
         ? do_raw_spin_unlock+0x4b/0xa0
         btrfs_async_reclaim_metadata_space+0x139/0x320 [btrfs]
         process_one_work+0x24c/0x5b0
         worker_thread+0x55/0x3c0
         ? process_one_work+0x5b0/0x5b0
         kthread+0x17c/0x1a0
         ? set_kthread_struct+0x40/0x40
         ret_from_fork+0x22/0x30
      
      There are a couple of reasons for this, but in generic/619's case the
      largest one is that it is a very small file system and we do not
      reserve enough space for the global reserve.
      
      With the free space tree enabled we have one more tree that we need to
      modify when running delayed refs.  This means we need the global
      reserve to take this into account when it calculates the minimum size
      it needs to be.  This is especially important for very small file
      systems.
      
      Fix this by adjusting the minimum global block rsv size math to include
      the size of the free space tree when calculating the size.
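
      Illustrative sketch of the adjustment (the *_root variables stand for
      the respective btrfs_root pointers; how they are looked up differs
      between kernel versions):

          num_bytes = btrfs_root_used(&extent_root->root_item) +
                      btrfs_root_used(&csum_root->root_item) +
                      btrfs_root_used(&tree_root->root_item);
          if (free_space_root)
                  num_bytes += btrfs_root_used(&free_space_root->root_item);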
      
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      9506f953
    • btrfs: scrub: merge SCRUB_PAGES_PER_RD_BIO and SCRUB_PAGES_PER_WR_BIO · c9d328c0
      Qu Wenruo authored
      
      
      These two values were introduced in commit ff023aac ("Btrfs: add code
      to scrub to copy read data to another disk") as an optimization.
      
      But the truth is, the block layer scheduler can do whatever it wants to
      merge/split bios to improve performance.

      Doing such an "optimization" is not really going to change much,
      especially considering how well the current block layer optimizations
      do their job.
      Remove this old and immature optimization from our code.
      
      Since we're here, also change BUG_ON()s using these two macros to use
      ASSERT()s.
      
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      c9d328c0
    • btrfs: update SCRUB_MAX_PAGES_PER_BLOCK · 0bb3acdc
      Qu Wenruo authored
      
      
      Use BTRFS_MAX_METADATA_BLOCKSIZE and SZ_4K (minimal sectorsize) to
      calculate this value.
      
      And remove one stale comment on the value; in fact, with recent subpage
      support, BTRFS_MAX_METADATA_BLOCKSIZE * PAGE_SIZE is already beyond
      BTRFS_STRIPE_LEN, we just don't use the full page.
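
      The new definition is then essentially (sketch):

          /* 64K max nodesize divided by the minimal 4K sectorsize. */
          #define SCRUB_MAX_PAGES_PER_BLOCK \
                  (BTRFS_MAX_METADATA_BLOCKSIZE / SZ_4K)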
      
      Also since we're here, update the BUG_ON() related to
      SCRUB_MAX_PAGES_PER_BLOCK to an ASSERT(), as those ASSERT()s are really
      only there for developers to catch obvious bugs early, not to make end
      users suffer.
      
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      0bb3acdc
    • btrfs: do not check -EAGAIN when truncating inodes in the log root · 8697b8f8
      Josef Bacik authored
      
      
      We only throttle btrfs_truncate_inode_items() if the root is
      SHAREABLE, which isn't set on the log root, so this loop is
      unnecessary.
      
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      8697b8f8
    • btrfs: make should_throttle loop local in btrfs_truncate_inode_items · e48dac7f
      Josef Bacik authored
      
      
      We reset this bool on every iteration of the truncate loop, so make
      the variable local to the loop.
      
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      e48dac7f
    • btrfs: combine extra if statements in btrfs_truncate_inode_items · 0adbc619
      Josef Bacik authored
      
      
      We have
      
          if (del_item)
                  // do something
          else
                  // something else

          if (del_item)
                  // do yet another thing
          else
                  // something else entirely

      back to back in btrfs_truncate_inode_items(); collapse these two sets
      of if statements into one (see the sketch below).
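
      Roughly, the collapsed form becomes:

          if (del_item) {
                  // do something
                  // do yet another thing
          } else {
                  // something else
                  // something else entirely
          }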
      
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      0adbc619
    • btrfs: convert BUG() for pending_del_nr into an ASSERT · 376b91d5
      Josef Bacik authored
      
      
      This is a logic correctness check, convert it into an ASSERT() instead
      of a BUG().
      
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      376b91d5
    • btrfs: convert BUG_ON() in btrfs_truncate_inode_items to ASSERT · 56e1edb0
      Josef Bacik authored
      
      
      We have a correctness BUG_ON() in btrfs_truncate_inode_items to make
      sure that we're always using min_type == BTRFS_EXTENT_DATA_KEY if
      new_size is > 0.  Convert this to an ASSERT.
      
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      56e1edb0
    • btrfs: add inode to truncate control · 71d18b53
      Josef Bacik authored
      
      
      In the future we're going to want to use btrfs_truncate_inode_items
      without looking up the associated inode.  In order to accommodate this
      add the inode to btrfs_truncate_control and handle the case where
      control->inode is NULL appropriately.  This is fairly straightforward,
      we simply need to add a helper for the trace points, as the file extent
      map update is controlled by a flag on btrfs_truncate_control.
      
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      71d18b53
    • btrfs: pass the ino via truncate control · 487e81d2
      Josef Bacik authored
      
      
      In the future we are going to want to truncate inode items without
      needing to have a btrfs_inode to pass in, so add the ino to the
      btrfs_truncate_control and use that to look up the inode items to
      truncate.
      
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      487e81d2
    • btrfs: use a flag to control when to clear the file extent range · 655807b8
      Josef Bacik authored
      
      
      We only care about updating the file extent range when we are doing a
      normal truncation.  We skip this for tree logging currently, but we can
      also skip this for eviction as well.  Using a flag makes it more
      explicit when we want to do this work.
      
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      655807b8
    • btrfs: control extent reference updates with a control flag for truncate · 5caa490e
      Josef Bacik authored
      
      
      We've had weird bugs in the past where we forgot to adjust the truncate
      path to deal with the fact that we can be called by the tree log path.
      Instead of checking if our root is a LOG_ROOT use a flag on the
      btrfs_truncate_control to indicate that we don't want to do extent
      reference updates during this truncate.
      
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      5caa490e
    • btrfs: only call inode_sub_bytes in truncate paths that care · 462b728e
      Josef Bacik authored
      
      
      We currently have a bunch of awkward checks to make sure we only update
      the inode i_bytes if we're truncating the real inode.  Instead keep
      track of the number of bytes we need to sub in the
      btrfs_truncate_control, and then do the appropriate adjustment in the
      truncate paths that care.
      
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      462b728e
    • btrfs: only update i_size in truncate paths that care · c2ddb612
      Josef Bacik authored
      
      
      We currently will update the i_size of the inode as we truncate it down,
      however we skip this if we're calling btrfs_truncate_inode_items from
      the tree log code.  However we also don't care about this in the case of
      evict.  Instead keep track of this value in the btrfs_truncate_control
      and then have btrfs_truncate() and the free space cache truncate path
      both do the i_size update themselves.
      
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      c2ddb612
    • btrfs: add truncate control struct · d9ac19c3
      Josef Bacik authored
      
      
      I'm going to be adding more arguments and counters to
      btrfs_truncate_inode_items, so add a control struct to handle all of the
      extra arguments to make it easier to follow.
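
      For orientation, by the end of the series the control struct carries
      roughly the following fields (illustrative; the exact members are
      introduced by the individual patches):

          struct btrfs_truncate_control {
                  struct btrfs_inode *inode;
                  u64 ino;
                  u64 new_size;
                  u64 extents_found;
                  u64 last_size;
                  u64 sub_bytes;
                  u32 min_type;
                  bool skip_ref_updates;
                  bool clear_extent_range;
          };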
      
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      d9ac19c3
    • btrfs: remove found_extent from btrfs_truncate_inode_items · 7097a941
      Josef Bacik authored
      
      
      We only set this if we find a normal file extent, del_item == 1, and the
      file extent points to a real extent and isn't a hole extent.  We can use
      del_item == 1 && extent_start != 0 to get the same information that
      found_extent provides, so remove this variable and use the other
      variables instead.
      
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      7097a941
    • btrfs: move btrfs_kill_delayed_inode_items into evict · 2adc75d6
      Josef Bacik authored
      
      
      We have a special case in btrfs_truncate_inode_items() to call
      btrfs_kill_delayed_inode_items() if min_type == 0, which is only called
      during evict.
      
      Instead move this out into evict proper, and add some comments because I
      erroneously attempted to remove this code altogether without
      understanding what we were doing.
      
      Evict is updating the inode only because we only care about making sure
      the i_nlink count has hit disk.  If we had pending deletions we don't
      want to process those via the delayed inode updates, we simply want to
      drop all of them and reclaim the reserved metadata space.  Then from
      there the btrfs_truncate_inode_items() will do the work to remove all of
      the items as appropriate.
      
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      2adc75d6
    • btrfs: remove free space cache inode check in btrfs_truncate_inode_items · 275312a0
      Josef Bacik authored
      
      
      We no longer have the inode cache feature, so this check is extraneous,
      as the only inode cache is in the tree_root, which is not marked as
      SHAREABLE.
      
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      275312a0
    • btrfs: move extent locking outside of btrfs_truncate_inode_items · 9a4a1429
      Josef Bacik authored
      
      
      Currently we are locking the extent and dropping the extent cache for
      any inodes we truncate, unless they're in the tree log.  We call this
      helper from:
      
      - truncate
      - evict
      - tree log
      - free space cache truncation
      
      For evict we've already dropped all of the extent cache for this inode
      once we've gotten here, and we're the only one accessing this inode, so
      this step is unnecessary.
      
      For the tree log code we already skip this part.
      
      Pull this work into the truncate path and the free space cache
      truncation path.
      
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      9a4a1429
    • btrfs: move btrfs_truncate_inode_items to inode-item.c · 54f03ab1
      Josef Bacik authored
      
      
      This is an inode item related manipulation with a few vfs related
      adjustments.  I'm going to remove the vfs related code from this helper
      and simplify it a lot, but I want those changes to be easily seen via
      git blame, so move this function now and then the simplification work
      can be done.
      
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      54f03ab1
    • btrfs: add an inode-item.h · 26c2c454
      Josef Bacik authored
      
      
      We have a few helpers in inode-item.c, and I'm going to make a few
      changes to how we do truncate in the future, so break out these
      definitions into their own header file to trim down ctree.h some and
      make it easier to do the work on truncate in the future.
      
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      26c2c454
    • btrfs: remove stale comment about locking at btrfs_search_slot() · 727e6060
      Filipe Manana authored
      
      
      The comment refers to the old extent buffer locking code, where we used to
      have custom locks that had blocking and spinning behaviour modes. That is
      not the case anymore, since we have transitioned to rw semaphores, so the
      comment does not offer any value anymore. Remove it.
      
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      727e6060
    • btrfs: remove BUG_ON() after splitting leaf · bb8e9a60
      Filipe Manana authored
      
      
      After calling split_leaf() we BUG_ON() if the returned value is greater
      than zero. However split_leaf() only returns 0, in case of success, or a
      negative value in case of an error.
      
      The reason for the BUG_ON() is that if we ever get a positive return
      value from split_leaf(), we can not simply propagate it to the callers
      of btrfs_search_slot(), as that would be interpreted as "key not found"
      and not as an error. That means it could result in callers ending up
      causing some potential silent corruption.
      
      So change the BUG_ON() to an ASSERT(), and in case assertions are
      disabled, produce a warning and set the return value to an error, so
      that it is not possible to get a silent corruption with the error going
      unnoticed.
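
      The resulting pattern is roughly (sketch; the split_leaf() arguments
      are unchanged from the existing caller):

          ret = split_leaf(trans, root, key, p, ins_len, ret == 0);
          ASSERT(ret <= 0);
          if (WARN_ON(ret > 0))
                  ret = -EUCLEAN;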
      
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      bb8e9a60
    • btrfs: move leaf search logic out of btrfs_search_slot() · 109324cf
      Filipe Manana authored
      
      
      There's quite a significant amount of code for doing the key search for a
      leaf at btrfs_search_slot(), with a couple labels and gotos in it, plus
      btrfs_search_slot() is already big enough.
      
      So move the logic that does the key search on a leaf into a new helper
      function. This makes it better organized, removing the need for the labels
      and the gotos, as well as reducing the indentation level and the size of
      btrfs_search_slot().
      
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      109324cf
    • btrfs: remove useless condition check before splitting leaf · e5e1c174
      Filipe Manana authored
      
      
      When inserting a key, we check if the write_lock_level is less than 1,
      and if so we set it to 1, release the path and retry the tree traversal.
      
      However that is unnecessary, because when ins_len is greater than 0, we
      know that write_lock_level can never be less than 1.
      
      The logic to retry is also buggy, because in case ins_len was decremented,
      due to an exact key match and the search is not meant for item extension
      (path->search_for_extension is 0), we retry without incrementing ins_len,
      which would make the next retry decrement it again by the same amount.
      
      So remove the check for write_lock_level being less than 1 and add an
      assertion to assert it's always >= 1.
      
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      e5e1c174
    • btrfs: try to unlock parent nodes earlier when inserting a key · e2e58d0f
      Filipe Manana authored
      
      
      When inserting a new key, we release the write lock on the leaf's parent
      only after doing the binary search on the leaf. This is because if the
      key ends up at slot 0, we will have to update the key at slot 0 of the
      parent node. The same reasoning applies to any other upper level nodes
      when their slot is 0. We also need to keep the parent locked in case the
      leaf does not have enough free space to insert the new key/item, because
      in that case we will split the leaf and we will need to add a new key to
      the parent due to a new leaf resulting from the split operation.
      
      However if the leaf has enough space for the new key and the key does not
      end up at slot 0 of the leaf we could release our write lock on the parent
      before doing the binary search on the leaf to figure out the destination
      slot. That leads to reducing the amount of time other tasks are blocked
      waiting to lock the parent, therefore increasing parallelism when there
      are other tasks that are trying to access other leaves accessible through
      the same parent. This also applies to other upper nodes besides the
      immediate parent, when their slot is 0, since we keep locks on them until
      we figure out if the leaf slot is slot 0 or not.
      
      In fact, having the key end up at slot 0 is rare. Typically it only
      happens when the key is less than or equal to the smallest, the
      "left most", key of the entire btree, during a split attempt when we
      try to push to the right sibling leaf, or when the caller just wants to
      update the item of an existing key. It's also very common that a leaf
      has enough space to insert a new key, since after a split we move about
      half of the keys from one leaf into the new leaf.
      
      So unlock the parent, and any other upper level nodes, when during a key
      insertion we notice the key is greater than the first key in the leaf and
      the leaf has enough free space. After unlocking the upper level nodes, do
      the binary search using a low boundary of slot 1 and not slot 0, to figure
      out the slot where the key will be inserted (or where the key already is
      in case it exists and the caller wants to modify its item data).
      This extra comparison, with the first key, is cheap and the key is very
      likely already in a cache line because it immediately follows the header
      of the extent buffer and we have recently read the level field of the
      header (which in fact is the last field of the header).
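
      In pseudocode, the fast path described above looks roughly like this
      (simplified sketch; the search_low_slot name is only illustrative):

          btrfs_item_key_to_cpu(leaf, &first_key, 0);
          if (ins_len > 0 &&
              btrfs_leaf_free_space(leaf) >= ins_len &&
              btrfs_comp_cpu_keys(key, &first_key) > 0) {
                  /*
                   * The key goes after slot 0, so the parent's key will not
                   * change and the leaf won't be split: the upper level
                   * locks can be released now.
                   */
                  unlock_up(path, 0, lowest_unlock, min_write_lock_level,
                            &write_lock_level);
                  /* The binary search can then start at slot 1 instead of 0. */
                  search_low_slot = 1;
          }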
      
      The following fs_mark test was run on a non-debug kernel (Debian's
      default kernel config), with a 12 core Intel CPU, and using an NVMe
      device:
      
        $ cat run-fsmark.sh
        #!/bin/bash
      
        DEV=/dev/nvme0n1
        MNT=/mnt/nvme0n1
        MOUNT_OPTIONS="-o ssd"
        MKFS_OPTIONS="-O no-holes -R free-space-tree"
        FILES=100000
        THREADS=$(nproc --all)
        FILE_SIZE=0
      
        echo "performance" | \
      	tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
      
        mkfs.btrfs -f $MKFS_OPTIONS $DEV
        mount $MOUNT_OPTIONS $DEV $MNT
      
        OPTS="-S 0 -L 10 -n $FILES -s $FILE_SIZE -t $THREADS -k"
        for ((i = 1; i <= $THREADS; i++)); do
            OPTS="$OPTS -d $MNT/d$i"
        done
      
        fs_mark $OPTS
      
        umount $MNT
      
      Before this change:
      
      FSUse%        Count         Size    Files/sec     App Overhead
           0      1200000            0     165273.6          5958381
           0      2400000            0     190938.3          6284477
           0      3600000            0     181429.1          6044059
           0      4800000            0     173979.2          6223418
           0      6000000            0     139288.0          6384560
           0      7200000            0     163000.4          6520083
           1      8400000            0      57799.2          5388544
           1      9600000            0      66461.6          5552969
           2     10800000            0      49593.5          5163675
           2     12000000            0      57672.1          4889398
      
      After this change:
      
      FSUse%        Count         Size    Files/sec            App Overhead
           0      1200000            0     167987.3 (+1.6%)         6272730
           0      2400000            0     198563.9 (+4.0%)         6048847
           0      3600000            0     197436.6 (+8.8%)         6163637
           0      4800000            0     202880.7 (+16.6%)        6371771
           1      6000000            0     167275.9 (+20.1%)        6556733
           1      7200000            0     204051.2 (+25.2%)        6817091
           1      8400000            0      69622.8 (+20.5%)        5525675
           1      9600000            0      69384.5 (+4.4%)         5700723
           1     10800000            0      61454.1 (+23.9%)        5363754
           3     12000000            0      61908.7 (+7.3%)         5370196
      
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      e2e58d0f