Commit ed913b05 authored by Miaohe Lin, committed by Andrew Morton

lib/test_hmm: avoid accessing uninitialized pages

If make_device_exclusive_range() fails, or marks fewer pages for exclusive
access than required, the remaining entries of pages[] are left
uninitialized, and dmirror_atomic_map() would then access those
uninitialized entries.  To fix it, call dmirror_atomic_map() iff all pages
are marked for exclusive access (we break out of the loop anyway if mapped
is less than required), so the uninitialized entries are never accessed.

Link: https://lkml.kernel.org/r/20220609130835.35110-1-linmiaohe@huawei.com
Fixes: b659baea ("mm: selftests for exclusive device memory")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 23689037
+8 −2
@@ -732,7 +732,7 @@ static int dmirror_exclusive(struct dmirror *dmirror,
 
 	mmap_read_lock(mm);
 	for (addr = start; addr < end; addr = next) {
-		unsigned long mapped;
+		unsigned long mapped = 0;
 		int i;
 
 		if (end < addr + (ARRAY_SIZE(pages) << PAGE_SHIFT))
@@ -741,6 +741,12 @@ static int dmirror_exclusive(struct dmirror *dmirror,
 			next = addr + (ARRAY_SIZE(pages) << PAGE_SHIFT);
 
 		ret = make_device_exclusive_range(mm, addr, next, pages, NULL);
-		mapped = dmirror_atomic_map(addr, next, pages, dmirror);
+		/*
+		 * Do dmirror_atomic_map() iff all pages are marked for
+		 * exclusive access to avoid accessing uninitialized
+		 * fields of pages.
+		 */
+		if (ret == (next - addr) >> PAGE_SHIFT)
+			mapped = dmirror_atomic_map(addr, next, pages, dmirror);
 		for (i = 0; i < ret; i++) {
 			if (pages[i]) {