hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I9DN5Z
CVE: NA

--------------------------------

Implement the buffered writeback iomap path, including the map_blocks()
and prepare_ioend() callbacks in iomap_writeback_ops and the
corresponding end I/O path. Add ext4_iomap_map_blocks() to check the
mapping status of the dirty range before writeback, start a journal
handle, and allocate new blocks if they have not been allocated yet.
Add ext4_iomap_prepare_ioend() to register the end I/O handler that
converts unwritten extents to mapped (written) extents.

Note that iomap currently calls iomap_do_writepage() to write back
dirty folios one by one, but we can't afford to map or allocate
block(s) for each dirty folio individually, since that is expensive
when the folio size is small. In order to reduce the number of
block-mapping calls, we can carefully calculate the length through
wbc->range_end and map an entire delayed extent on the first call.

Besides, since we always allocate unwritten extents for newly
allocated blocks, four other aspects differ from the buffer_head
writeback path and make this path simpler:

1. We have to allow splitting extents in the end I/O path during the
   unwritten-to-written conversion.
2. We don't need to write back the data before the metadata, so there
   is no risk of exposing stale data and the data=ordered journal mode
   becomes unnecessary. Therefore we don't need to attach data to the
   jinode, and the journal thread doesn't need to write data.
3. Since data=ordered is not used, we don't need to reserve journal
   credits or use a reserved handle for the extent status conversion.
4. We can postpone the i_disksize update to the end I/O path.

Signed-off-by: Zhang Yi <yi.zhang@huawei.com>