Commit 863a8eb3 authored by Matthew Wilcox (Oracle), committed by Rodrigo Vivi

i915: Limit the length of an sg list to the requested length



The folio conversion changed the behaviour of shmem_sg_alloc_table() to
put the entire length of the last folio into the sg list, even if the sg
list should have been shorter.  gen8_ggtt_insert_entries() relied on the
list being the right length and would overrun the end of the page tables.
Other functions may also have been affected.

Clamp the length of the last entry in the sg list to be the expected
length.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Fixes: 0b62af28 ("i915: convert shmem_sg_free_table() to use a folio_batch")
Cc: stable@vger.kernel.org # 6.5.x
Link: https://gitlab.freedesktop.org/drm/intel/-/issues/9256
Link: https://lore.kernel.org/lkml/6287208.lOV4Wx5bFT@natalenko.name/
Reported-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com>
Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230919194855.347582-1-willy@infradead.org


(cherry picked from commit 26a8e32e6d77900819c0c730fbfb393692dbbeea)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
parent 6465e260
+7 −4
@@ -100,6 +100,7 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
 	st->nents = 0;
 	for (i = 0; i < page_count; i++) {
 		struct folio *folio;
+		unsigned long nr_pages;
 		const unsigned int shrink[] = {
 			I915_SHRINK_BOUND | I915_SHRINK_UNBOUND,
 			0,
@@ -150,6 +151,8 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
 			}
 		} while (1);
 
+		nr_pages = min_t(unsigned long,
+				folio_nr_pages(folio), page_count - i);
 		if (!i ||
 		    sg->length >= max_segment ||
 		    folio_pfn(folio) != next_pfn) {
@@ -157,13 +160,13 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
 				sg = sg_next(sg);
 
 			st->nents++;
-			sg_set_folio(sg, folio, folio_size(folio), 0);
+			sg_set_folio(sg, folio, nr_pages * PAGE_SIZE, 0);
 		} else {
 			/* XXX: could overflow? */
-			sg->length += folio_size(folio);
+			sg->length += nr_pages * PAGE_SIZE;
 		}
-		next_pfn = folio_pfn(folio) + folio_nr_pages(folio);
-		i += folio_nr_pages(folio) - 1;
+		next_pfn = folio_pfn(folio) + nr_pages;
+		i += nr_pages - 1;
 
 		/* Check that the i965g/gm workaround works. */
 		GEM_BUG_ON(gfp & __GFP_DMA32 && next_pfn >= 0x00100000UL);