  Dec 06, 2019
    • drm/i915: Claim vma while under closed_lock in i915_vma_parked() · 77853186
      Chris Wilson authored
      
      
      Remove the vma we wish to destroy from the gt->closed_list, so that
      two concurrent calls to i915_vma_parked() cannot both try to free it.
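      
      The shape of the fix, as a minimal sketch (simplified; the real
      i915_vma_parked() also juggles object and vm references, and the
      destroy call here is representative): unlink the vma while still
      holding gt->closed_lock, so that once one caller has claimed it, a
      concurrent i915_vma_parked() can no longer see it.
      
          spin_lock_irq(&gt->closed_lock);
          while (!list_empty(&gt->closed_list)) {
                  struct i915_vma *vma =
                          list_first_entry(&gt->closed_list,
                                           struct i915_vma, closed_link);
      
                  /* Claim the vma while the lock is held: after this,
                   * no other caller can find (or double-free) it. */
                  list_del_init(&vma->closed_link);
                  spin_unlock_irq(&gt->closed_lock);
      
                  i915_vma_destroy(vma); /* we are the sole owner now */
      
                  spin_lock_irq(&gt->closed_lock);
          }
          spin_unlock_irq(&gt->closed_lock);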
      
      Fixes: aa5e4453 ("drm/i915/gem: Try to flush pending unbind events")
      References: 2850748e ("drm/i915: Pull i915_vma_pin under the vm->mutex")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191205214159.829727-1-chris@chris-wilson.co.uk
    • drm/i915/gt: Trim gen6 ppgtt updates to PD cachelines · d315fe8b
      Chris Wilson authored
      
      
      Now that we have the ring TLB invalidation in place, it appears we
      need only update the page directory cachelines that we have altered,
      a great reduction from rewriting the whole 2MiB ppgtt on every update.
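      
      Roughly, as a hedged sketch (write_pde() and the 64K-per-PDE
      granularity are illustrative assumptions, not the driver's exact
      interface): clamp the update to the page-directory entries spanning
      the altered range rather than walking every PDE.
      
          /* Minimal sketch: update only the PDEs covering [start, end). */
          static void flush_pd_range(struct gen6_ppgtt *ppgtt,
                                     u64 start, u64 end)
          {
                  unsigned int pde = start >> 16;        /* 64K per PDE */
                  unsigned int last = DIV_ROUND_UP(end, SZ_64K);
      
                  for (; pde < last; pde++)
                          write_pde(ppgtt, pde);         /* hypothetical */
      
                  wmb(); /* PDE writes must land before the TLB invalidate */
          }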
      
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191205234059.1010030-1-chris@chris-wilson.co.uk
    • drm/i915: Serialise i915_active_acquire() with __active_retire() · bbca083d
      Chris Wilson authored
      
      
      As __active_retire() performs its final atomic_dec() under the
      ref->tree_lock spinlock, we must serialise with that retirement
      during i915_active_acquire() to prevent ourselves from reusing
      ref->cache and ref->tree while they are being destroyed.
      
      [  +0.000005] kernel BUG at drivers/gpu/drm/i915/i915_active.c:157!
      [  +0.000011] invalid opcode: 0000 [#1] SMP
      [  +0.000004] CPU: 7 PID: 188 Comm: kworker/u16:4 Not tainted 5.4.0-rc8-03070-gac5e57322614 #89
      [  +0.000002] Hardware name: Razer Razer Blade Stealth 13 Late 2019/LY320, BIOS 1.02 09/10/2019
      [  +0.000082] Workqueue: events_unbound active_work [i915]
      [  +0.000059] RIP: 0010:__active_retire+0x115/0x120 [i915]
      [  +0.000003] Code: 75 28 48 8b 3d 8c 6e 1a 00 48 89 ee e8 e4 5f a5 c0 48 8b 44 24 10 65 48 33 04 25 28 00 00 00 75 0f 48 83 c4 18 5b 5d 41 5c c3 <0f> 0b 0f 0b 0f 0b e8 a0 90 87 c0 0f 1f 44 00 00 48 8b 3d 54 6e 1a
      [  +0.000002] RSP: 0018:ffffb833003f7e48 EFLAGS: 00010286
      [  +0.000003] RAX: ffff8d6e8d726d00 RBX: ffff8d6f9db4e840 RCX: 0000000000000000
      [  +0.000001] RDX: ffffffff82605930 RSI: ffff8d6f9adc4908 RDI: ffff8d6e96cefe28
      [  +0.000002] RBP: ffff8d6e96cefe00 R08: 0000000000000000 R09: ffff8d6f9ffe9a50
      [  +0.000002] R10: 0000000000000048 R11: 0000000000000018 R12: ffff8d6f9adc4930
      [  +0.000001] R13: ffff8d6f9e04fb00 R14: 0000000000000000 R15: ffff8d6f9adc4988
      [  +0.000002] FS:  0000000000000000(0000) GS:ffff8d6f9ffc0000(0000) knlGS:0000000000000000
      [  +0.000002] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [  +0.000002] CR2: 000055eb5a34cf10 CR3: 000000018d609002 CR4: 0000000000760ee0
      [  +0.000002] PKRU: 55555554
      [  +0.000001] Call Trace:
      [  +0.000010]  process_one_work+0x1aa/0x350
      [  +0.000004]  worker_thread+0x4d/0x3a0
      [  +0.000004]  kthread+0xfb/0x130
      [  +0.000004]  ? process_one_work+0x350/0x350
      [  +0.000003]  ? kthread_park+0x90/0x90
      [  +0.000005]  ret_from_fork+0x1f/0x40
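      
      In outline (a simplified sketch; the real i915_active_acquire()
      also runs the ref->active() callback and handles errors), the
      0 -> 1 transition must take the same spinlock under which
      __active_retire() performs its final atomic_dec() and tears down
      the tree:
      
          int i915_active_acquire(struct i915_active *ref)
          {
                  /* Fast path: already active, just take another ref. */
                  if (atomic_add_unless(&ref->count, 1, 0))
                          return 0;
      
                  /* Slow path: serialise with __active_retire() so we
                   * never reuse ref->cache/ref->tree mid-teardown. */
                  spin_lock_irq(&ref->tree_lock);
                  atomic_inc(&ref->count);
                  spin_unlock_irq(&ref->tree_lock);
      
                  return 0;
          }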
      
      Reported-by: Kenneth Graunke <kenneth@whitecape.org>
      Fixes: c9ad602f ("drm/i915: Split i915_active.mutex into an irq-safe spinlock for the rbtree")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Kenneth Graunke <kenneth@whitecape.org>
      Cc: Matthew Auld <matthew.auld@intel.com>
      Tested-by: Kenneth Graunke <kenneth@whitecape.org>
      Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191205183332.801237-1-chris@chris-wilson.co.uk
    • drm/i915/gt: Replace I915_READ with intel_uncore_read · 92c964ca
      Andi Shyti authored
      
      
      Get rid of the last remaining I915_READ in gt/ and make gt-land
      the first I915_READ-free happy island.
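      
      The conversion itself is mechanical; a representative before/after
      (the register chosen here is just an example):
      
          /* Before: legacy macro with an implicit dev_priv. */
          u32 val = I915_READ(GEN6_RC_CONTROL);
      
          /* After: the mmio target is an explicit intel_uncore. */
          u32 val = intel_uncore_read(gt->uncore, GEN6_RC_CONTROL);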
      
      Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Andi Shyti <andi.shyti@intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191205164422.727968-1-chris@chris-wilson.co.uk
    • drm/i915/gt: Save irqstate around virtual_context_destroy · 6f7ac828
      Chris Wilson authored
      
      
      As virtual_context_destroy() may be called from a request signal,
      and hence from inside an irq-off section, we need to do a full
      save/restore of the irq state rather than blindly re-enable irqs
      upon unlocking.
      
      <4> [110.024262] WARNING: inconsistent lock state
      <4> [110.024277] 5.4.0-rc8-CI-CI_DRM_7489+ #1 Tainted: G     U
      <4> [110.024292] --------------------------------
      <4> [110.024305] inconsistent {IN-HARDIRQ-W} -> {HARDIRQ-ON-W} usage.
      <4> [110.024323] kworker/0:0/5 [HC0[0]:SC0[0]:HE1:SE1] takes:
      <4> [110.024338] ffff88826a0c7a18 (&(&rq->lock)->rlock){?.-.}, at: i915_request_retire+0x221/0x930 [i915]
      <4> [110.024592] {IN-HARDIRQ-W} state was registered at:
      <4> [110.024612]   lock_acquire+0xa7/0x1c0
      <4> [110.024627]   _raw_spin_lock_irqsave+0x33/0x50
      <4> [110.024788]   intel_engine_breadcrumbs_irq+0x38c/0x600 [i915]
      <4> [110.024808]   irq_work_run_list+0x49/0x70
      <4> [110.024824]   irq_work_run+0x26/0x50
      <4> [110.024839]   smp_irq_work_interrupt+0x44/0x1e0
      <4> [110.024855]   irq_work_interrupt+0xf/0x20
      <4> [110.024871]   __do_softirq+0xb7/0x47f
      <4> [110.024885]   irq_exit+0xba/0xc0
      <4> [110.024898]   do_IRQ+0x83/0x160
      <4> [110.024910]   ret_from_intr+0x0/0x1d
      <4> [110.024922] irq event stamp: 172864
      <4> [110.024938] hardirqs last  enabled at (172863): [<ffffffff819ea214>] _raw_spin_unlock_irq+0x24/0x50
      <4> [110.024963] hardirqs last disabled at (172864): [<ffffffff819e9fba>] _raw_spin_lock_irq+0xa/0x40
      <4> [110.024988] softirqs last  enabled at (172812): [<ffffffff81c00385>] __do_softirq+0x385/0x47f
      <4> [110.025012] softirqs last disabled at (172797): [<ffffffff810b829a>] irq_exit+0xba/0xc0
      <4> [110.025031]
      other info that might help us debug this:
      <4> [110.025049]  Possible unsafe locking scenario:
      
      <4> [110.025065]        CPU0
      <4> [110.025075]        ----
      <4> [110.025084]   lock(&(&rq->lock)->rlock);
      <4> [110.025099]   <Interrupt>
      <4> [110.025109]     lock(&(&rq->lock)->rlock);
      <4> [110.025124]
       *** DEADLOCK ***
      
      <4> [110.025144] 4 locks held by kworker/0:0/5:
      <4> [110.025156]  #0: ffff88827588f528 ((wq_completion)events){+.+.}, at: process_one_work+0x1de/0x620
      <4> [110.025187]  #1: ffffc9000006fe78 ((work_completion)(&engine->retire_work)){+.+.}, at: process_one_work+0x1de/0x620
      <4> [110.025219]  #2: ffff88825605e270 (&kernel#2){+.+.}, at: engine_retire+0x57/0xe0 [i915]
      <4> [110.025405]  #3: ffff88826a0c7a18 (&(&rq->lock)->rlock){?.-.}, at: i915_request_retire+0x221/0x930 [i915]
      <4> [110.025634]
      stack backtrace:
      <4> [110.025653] CPU: 0 PID: 5 Comm: kworker/0:0 Tainted: G     U            5.4.0-rc8-CI-CI_DRM_7489+ #1
      <4> [110.025675] Hardware name:  /NUC7i5BNB, BIOS BNKBL357.86A.0054.2017.1025.1822 10/25/2017
      <4> [110.025856] Workqueue: events engine_retire [i915]
      <4> [110.025872] Call Trace:
      <4> [110.025891]  dump_stack+0x71/0x9b
      <4> [110.025907]  mark_lock+0x49a/0x500
      <4> [110.025926]  ? print_shortest_lock_dependencies+0x200/0x200
      <4> [110.025946]  mark_held_locks+0x49/0x70
      <4> [110.025962]  ? _raw_spin_unlock_irq+0x24/0x50
      <4> [110.025978]  lockdep_hardirqs_on+0xa2/0x1c0
      <4> [110.025995]  _raw_spin_unlock_irq+0x24/0x50
      <4> [110.026171]  virtual_context_destroy+0xc5/0x2e0 [i915]
      <4> [110.026376]  __active_retire+0xb4/0x290 [i915]
      <4> [110.026396]  dma_fence_signal_locked+0x9e/0x1b0
      <4> [110.026613]  i915_request_retire+0x451/0x930 [i915]
      <4> [110.026766]  retire_requests+0x4d/0x60 [i915]
      <4> [110.026919]  engine_retire+0x63/0xe0 [i915]
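      
      The fix follows the standard kernel pattern (the lock name below is
      illustrative): spin_unlock_irq() unconditionally re-enables
      interrupts, which corrupts the irq state of an irq-off caller,
      whereas the irqsave/irqrestore pair preserves whatever state the
      caller had.
      
          /* Before: wrong from an irq-off section, as the unlock
           * unconditionally re-enables interrupts. */
          spin_lock_irq(&ve->base.active.lock);
          /* ... teardown ... */
          spin_unlock_irq(&ve->base.active.lock);
      
          /* After: save and restore the caller's irq state. */
          unsigned long flags;
      
          spin_lock_irqsave(&ve->base.active.lock, flags);
          /* ... teardown ... */
          spin_unlock_irqrestore(&ve->base.active.lock, flags);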
      
      Fixes: b1e3177b ("drm/i915: Coordinate i915_active with its own mutex")
      Fixes: 6d06779e ("drm/i915: Load balancing across a virtual engine")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191205145934.663183-1-chris@chris-wilson.co.uk