Commit 2688cc6e authored by Long Li

xfs: fix dir3 block read verify fail during log recover

hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I8LHTR


CVE: NA

--------------------------------

Our growfs test triggered a mount error, shown below:

 XFS (dm-3): Starting recovery (logdev: internal)
 XFS (dm-3): Internal error !ino_ok at line 201 of file fs/xfs/libxfs/xfs_dir2.c.  Caller xfs_dir_ino_validate+0x54/0xc0 [xfs]
 CPU: 0 PID: 3719345 Comm: mount Kdump: loaded Not tainted 5.10.0-136.12.0.86.h1036.kasan.eulerosv2r12.aarch64 #1
 Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
 Call trace:
  dump_backtrace+0x0/0x3a4
  show_stack+0x34/0x4c
  dump_stack+0x170/0x1dc
  xfs_corruption_error+0x104/0x11c [xfs]
  xfs_dir_ino_validate+0x8c/0xc0 [xfs]
  __xfs_dir3_data_check+0x5e4/0xb60 [xfs]
  xfs_dir3_block_verify+0x108/0x190 [xfs]
  xfs_dir3_block_read_verify+0x10c/0x184 [xfs]
  xlog_recover_buf_commit_pass2+0x388/0x8e0 [xfs]
  xlog_recover_items_pass2+0xc8/0x160 [xfs]
  xlog_recover_commit_trans+0x56c/0x58c [xfs]
  xlog_recovery_process_trans+0x174/0x180 [xfs]
  xlog_recover_process_ophdr+0x120/0x210 [xfs]
  xlog_recover_process_data+0xcc/0x1b4 [xfs]
  xlog_recover_process+0x124/0x25c [xfs]
  xlog_do_recovery_pass+0x534/0x864 [xfs]
  xlog_do_log_recovery+0x98/0xc4 [xfs]
  xlog_do_recover+0x64/0x2ec [xfs]
  xlog_recover+0x1c4/0x2f0 [xfs]
  xfs_log_mount+0x1b8/0x550 [xfs]
  xfs_mountfs+0x768/0xe40 [xfs]
  xfs_fc_fill_super+0xb54/0xeb0 [xfs]
  get_tree_bdev+0x240/0x3e0
  xfs_fc_get_tree+0x30/0x40 [xfs]
  vfs_get_tree+0x5c/0x1a4
  do_new_mount+0x1c8/0x220
  path_mount+0x2a8/0x3f0
  __arm64_sys_mount+0x1cc/0x220
  el0_svc_common.constprop.0+0xc0/0x2c4
  do_el0_svc+0xb4/0xec
  el0_svc+0x24/0x3c
  el0_sync_handler+0x160/0x164
  el0_sync+0x160/0x180
 XFS (dm-3): Corruption detected. Unmount and run xfs_repair
 XFS (dm-3): Invalid inode number 0xd00082
 XFS (dm-3): Metadata corruption detected at __xfs_dir3_data_check+0xa08/0xb60 [xfs], xfs_dir3_block block 0x100070
 XFS (dm-3): Unmount and run xfs_repair
 XFS (dm-3): First 128 bytes of corrupted metadata buffer:
 00000000: 58 44 42 33 f3 c3 ae cf 00 00 00 00 00 10 00 70  XDB3...........p
 00000010: 00 00 00 01 00 00 0c 3e 85 a6 68 6c 63 b3 42 30  .......>..hlc.B0
 00000020: b0 6b d1 b5 9d eb 55 7d 00 00 00 00 00 10 00 90  .k....U}........
 00000030: 03 60 0b 78 01 40 00 40 00 b0 00 40 00 00 00 00  .`.x.@.@...@....
 00000040: 00 00 00 00 00 10 00 90 01 2e 02 00 00 00 00 40  ...............@
 00000050: 00 00 00 00 00 10 00 81 02 2e 2e 02 00 00 00 50  ...............P
 00000060: 00 00 00 00 00 10 00 91 02 66 65 01 00 00 00 60  .........fe....`
 00000070: 00 00 00 00 00 10 00 92 02 66 66 01 00 00 00 70  .........ff....p

Consider the following log layout: the dir3 block has two buffer log items in
the log, and the inode number recorded on disk in the dir3 block exceeds the
current file system boundary. When replaying log items, recovery skips replay
of the first dir3 buffer log item because the log item LSN is behind the
ondisk buffer, but the buffer is still verified (see the sketch after the
diagrams below). Since the growfs superblock item hasn't been replayed yet,
the inode number in the dir3 block exceeds the file system boundary and log
recovery fails.

log record:
  +---------------+----------------+--------------+-------------------+
  | dir3 buf item | growfs sb item |   inode item |  dir3 buf item .. |
  +---------------+----------------+--------------+-------------------+
        lsn X	       lsn X+A          lsn X+A+B       lsn X+A+B+C

metadata block:
  +-----------+-----------+------------+-----------+-----------+
  | sb block  |     ...   | dir3 block |     ...   |   inodes  |
  +-----------+-----------+------------+-----------+-----------+
     lsn < X               lsn X+A+B+C
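
The skip path described above looks roughly like this (abbreviated from
xlog_recover_buf_commit_pass2() as of commit 22ed903e; surrounding error
handling is omitted, so treat this as a sketch rather than the exact code):

  if (XFS_LSN_CMP(lsn, current_lsn) >= 0) {
          trace_xfs_log_recover_buf_skip(log, buf_f);

          /*
           * Replay of this buffer log item is skipped because the log
           * item LSN is behind the ondisk buffer.  The read verifier
           * still runs here, and with the growfs sb item not yet
           * replayed it trips over the out-of-bounds inode number in
           * the dir3 block.
           */
          if (bp->b_ops) {
                  bp->b_ops->verify_read(bp);
                  error = xfs_buf_geterror(bp);
          }
          goto out_release;
  }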

Remove the buffer read verify during log recovery pass2 and clear the
buffer's XBF_DONE flag, so the buffer is verified on the next read after
log recovery completes.
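
A minimal sketch of the change, assuming the hunk lands in the skip path of
xlog_recover_buf_commit_pass2() (the actual diff may differ):

  if (XFS_LSN_CMP(lsn, current_lsn) >= 0) {
          trace_xfs_log_recover_buf_skip(log, buf_f);

          /*
           * Don't run the read verifier here: the superblock from a
           * growfs transaction may not be replayed yet, so the buffer
           * contents can legitimately reference blocks and inodes
           * beyond the current geometry.  Clear XBF_DONE instead,
           * forcing a fresh read and verification the next time the
           * buffer is used after recovery.
           */
          bp->b_flags &= ~XBF_DONE;
          goto out_release;
  }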

Fixes: 22ed903e ("xfs: verify buffer contents when we skip log replay")
Signed-off-by: Long Li <leo.lilong@huawei.com>
parent 9e45cbd2