Revision 85b144f860176ec18db927d6d9ecdfb24d9c6483 authored by Maarten Lankhorst on 29 November 2012, 11:36:54 UTC, committed by Dave Airlie on 10 December 2012, 10:21:03 UTC
By removing the unlocking of lru and retaking it immediately, a race is removed where the bo is taken off the swap list or the lru list between the unlock and relock.

As such the cleanup_refs code can be simplified: it will attempt to call ttm_bo_wait non-blockingly, and if that fails it will drop the locks and perform a blocking wait, or return an error if no_wait_gpu was set.

The need for looping is also eliminated: since swapout and evict_mem_first will always follow the destruction path, no new fence is allowed to be attached. As far as I can see this may already have been the case, but the unlocking/relocking required a complicated loop to deal with re-reservation.

Changes since v1:
- Simplify the no_wait_gpu case by folding it in with empty ddestroy.
- Hold a reservation while calling ttm_bo_cleanup_memtype_use again.

Changes since v2:
- Do not remove the bo from the lru list while waiting.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
1 parent 6ed9ccb
File | Mode | Size |
---|---|---|
Kconfig | -rw-r--r-- | 9.3 KB |
Makefile | -rw-r--r-- | 460 bytes |
autosleep.c | -rw-r--r-- | 2.6 KB |
block_io.c | -rw-r--r-- | 2.4 KB |
console.c | -rw-r--r-- | 614 bytes |
hibernate.c | -rw-r--r-- | 25.8 KB |
main.c | -rw-r--r-- | 14.5 KB |
power.h | -rw-r--r-- | 8.4 KB |
poweroff.c | -rw-r--r-- | 987 bytes |
process.c | -rw-r--r-- | 4.7 KB |
qos.c | -rw-r--r-- | 14.6 KB |
snapshot.c | -rw-r--r-- | 60.4 KB |
suspend.c | -rw-r--r-- | 7.4 KB |
suspend_test.c | -rw-r--r-- | 5.0 KB |
swap.c | -rw-r--r-- | 35.8 KB |
user.c | -rw-r--r-- | 9.7 KB |
wakelock.c | -rw-r--r-- | 5.4 KB |