mm: support special async readahead
hulk inclusion
category: feature
bugzilla: 173267
CVE: NA
---------------------------
For hibench applications, including kmeans, wordcount and terasort,
each thread reads the whole blk_xxx and blk_xxx.meta from disk
sequentially, and almost all of the reads issued to disk are
triggered by async readahead.
However, a sequential read in a single thread does not mean
sequential IO on disk when multiple threads do that concurrently.
Multiple threads interleaving sequential reads can make the IO
issued to disk become random, which limits disk IO throughput.
To reduce disk randomization, we can consider increasing the
readahead window, so that the IO generated by the filesystem is
bigger on each async readahead. But, limited by the disk's
max_hw_sectors_kb, a big IO will be split, and the whole bio has to
wait for all split bios to complete, which can cause longer IO
latency.
Our traces show that, when the readahead window is set to a big
value, many long latencies in threads are caused by waiting for
async readahead IO to complete. That is, the threads' read speed is
faster than async readahead IO completion.
To improve performance, we provide a special async readahead
method:
* On the one hand, we read more sequential data from disk, which
  reduces disk randomization when multiple threads interleave.
* On the other hand, the size of each IO issued to disk is 2M,
  which avoids big IO splits and long IO latency.
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Reviewed-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>