net/smc: rdma write inline if qp has sufficient inline space
mainline inclusion
from mainline-v5.19-rc1
commit 793a7df63071eb09e5b88addf2a569d7bfd3c973
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I77V5Z
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/net/smc?id=793a7df63071eb09e5b88addf2a569d7bfd3c973

--------------------------------

RDMA write with the inline flag when sending small packets, whose length
is shorter than the QP's max_inline_data, can help reduce latency.

In my test environment, two VMs running on the same physical host whose
NICs (ConnectX-4 Lx) are working in SR-IOV mode, qperf shows a
0.5us-0.7us improvement in latency.

Test commands:
server: smc_run taskset -c 1 qperf
client: smc_run taskset -c 1 qperf <server ip> -oo \
        msg_size:1:2K:*2 -t 30 -vu tcp_lat

The results are shown below:
msgsize     before       after
   1B      11.2 us     10.6 us (-0.6 us)
   2B      11.2 us     10.7 us (-0.5 us)
   4B      11.3 us     10.7 us (-0.6 us)
   8B      11.2 us     10.6 us (-0.6 us)
  16B      11.3 us     10.7 us (-0.6 us)
  32B      11.3 us     10.6 us (-0.7 us)
  64B      11.2 us     11.2 us (0 us)
 128B      11.2 us     11.2 us (0 us)
 256B      11.2 us     11.2 us (0 us)
 512B      11.4 us     11.3 us (-0.1 us)
  1KB      11.4 us     11.5 us (0.1 us)
  2KB      11.5 us     11.5 us (0 us)

Signed-off-by: Guangguan Wang <guangguan.wang@linux.alibaba.com>
Reviewed-by: Tony Lu <tonylu@linux.alibaba.com>
Tested-by: kernel test robot <lkp@intel.com>
Acked-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Yingyu Zeng <zengyingyu@sangfor.com.cn>