Commit 9488307a authored by Oded Gabbay

habanalabs: prevent soft lockup during unmap

When using deep learning frameworks such as TensorFlow or PyTorch, there
can be tens of thousands of host memory mappings. When the user frees all
of those mappings at the same time, unmapping and unpinning them can take
a long time, which may trigger a soft lockup bug.

To prevent this, we periodically free the core to do other work during
the unmapping process. For now, we chose to do it every 32K unmappings
(each unmap covers a single 4K page).

Signed-off-by: Oded Gabbay <ogabbay@kernel.org>
parent aa6df653