Revision 3ad33b2436b545cbe8b28e53f3710432cad457ab authored by Lee Schermerhorn on 15 November 2007, 00:59:10 UTC, committed by Linus Torvalds on 15 November 2007, 02:45:38 UTC
We hit the BUG_ON() in mm/rmap.c:vma_address() when trying to migrate via mbind(MPOL_MF_MOVE) a non-anon region that spans multiple vmas. For anon regions, we just fail to migrate any pages beyond the first vma in the range.

This occurs because do_mbind() collects a list of pages to migrate by calling check_range(). check_range() walks the task's mm, spanning vmas as necessary, to collect the migratable pages into a list. Then, do_mbind() calls migrate_pages(), passing the list of pages, a function to allocate new pages based on vma policy [new_vma_page()], and a pointer to the first vma of the range.

For each page in the list, new_vma_page() calls page_address_in_vma(), passing the page and the vma [first in the range], to obtain the address to pass to alloc_page_vma(). The page address is needed to get interleave policy correct. If the pages in the list come from multiple vmas, new_vma_page() will eventually pass a page to page_address_in_vma() with the incorrect vma. For !PageAnon pages, this triggers the bug check in rmap.c:vma_address(); for anon pages, vma_address() just returns EFAULT and the migration fails.

This patch modifies new_vma_page() to check the return value from page_address_in_vma(). If the return value is EFAULT, new_vma_page() searches forward via vm_next for the vma that maps the page--i.e., one for which page_address_in_vma() does not return EFAULT. This assumes that the pages in the list handed to migrate_pages() are in address order, which is currently the case. The patch documents this assumption in a new comment block for new_vma_page().

If new_vma_page() cannot locate the vma mapping the page in a forward search of the mm, it passes a NULL vma to alloc_page_vma(). The allocation then uses the task policy, if any, else the system default policy. This situation is unlikely, but the patch documents the behavior with a comment.

Note that this patch results in restarting from the first vma in a multi-vma range each time new_vma_page() is called.
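The forward search described above can be sketched in plain C. This is an illustration only: `struct vma`, the stub `page_address_in_vma()` (modeled on the kernel helper's -EFAULT convention), and `find_mapping_vma()` are simplified stand-ins, not the actual mm code.

```c
#include <stddef.h>

#define EFAULT 14

/* Simplified stand-in for struct vm_area_struct: just the address
 * range and the singly linked vm_next chain that the search walks. */
struct vma {
	unsigned long vm_start, vm_end;
	struct vma *vm_next;
};

/* Stub modeling page_address_in_vma(): a page (represented here by
 * its address) resolves only within the vma that maps it; anything
 * else yields -EFAULT, just like the kernel helper. */
static long page_address_in_vma(unsigned long page_addr, struct vma *vma)
{
	if (vma && page_addr >= vma->vm_start && page_addr < vma->vm_end)
		return (long)page_addr;
	return -EFAULT;
}

/* The search the patch adds to new_vma_page(): starting from the
 * first vma of the range, walk vm_next until the page's address
 * resolves. Relies on the page list being in address order. */
static struct vma *find_mapping_vma(unsigned long page_addr, struct vma *vma)
{
	while (vma) {
		if (page_address_in_vma(page_addr, vma) != -EFAULT)
			break;
		vma = vma->vm_next;
	}
	return vma; /* NULL => caller falls back to task/default policy */
}
```

A NULL return here corresponds to the unlikely case in the message above, where alloc_page_vma() is called with a NULL vma and allocation falls back to the task or system default policy.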
If this is not acceptable, we can make the vma argument a pointer, both in new_vma_page() and its caller unmap_and_move(), so that the loop in migrate_pages() always passes down the last vma in which a page was found. This would require changes to all new_page_t functions passed to migrate_pages(). Is this necessary?

For this patch to work, we can't bug check in vma_address() for pages outside the argument vma. This patch removes the BUG_ON(). All other callers [besides new_vma_page()] already check the return status.

Tested on an x86_64, 4-node NUMA platform.

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Acked-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
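The BUG_ON() removal can likewise be sketched outside the kernel. The struct and constants below are simplified stand-ins; the point is only the control flow: a page whose computed address falls outside the vma now yields -EFAULT for the caller to handle, instead of a bug check.

```c
#include <stddef.h>

#define EFAULT 14
#define PAGE_SHIFT 12

/* Simplified stand-in for struct vm_area_struct: the mapped range
 * plus vm_pgoff, the file page offset at which vm_start maps. */
struct vma {
	unsigned long vm_start, vm_end, vm_pgoff;
};

/* Sketch of vma_address() after the patch: compute the virtual
 * address of a file page (identified by page_index) within the vma,
 * and return -EFAULT--rather than BUG_ON()--when that address lies
 * outside [vm_start, vm_end). Callers must check the result. */
static unsigned long vma_address(unsigned long page_index, struct vma *vma)
{
	unsigned long address = vma->vm_start +
		((page_index - vma->vm_pgoff) << PAGE_SHIFT);

	if (address < vma->vm_start || address >= vma->vm_end)
		return -EFAULT; /* page not mapped by this vma */
	return address;
}
```

With the bug check gone, new_vma_page()'s forward search can safely probe each vma in turn, treating -EFAULT as "keep looking" rather than a fatal condition.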
1 parent e1a1c99
File | Mode | Size |
---|---|---|
Kconfig | -rw-r--r-- | 5.7 KB |
Makefile | -rw-r--r-- | 1.1 KB |
allocpercpu.c | -rw-r--r-- | 3.6 KB |
backing-dev.c | -rw-r--r-- | 2.0 KB |
bootmem.c | -rw-r--r-- | 12.0 KB |
bounce.c | -rw-r--r-- | 6.4 KB |
fadvise.c | -rw-r--r-- | 2.6 KB |
filemap.c | -rw-r--r-- | 67.3 KB |
filemap_xip.c | -rw-r--r-- | 10.1 KB |
fremap.c | -rw-r--r-- | 6.1 KB |
highmem.c | -rw-r--r-- | 8.1 KB |
hugetlb.c | -rw-r--r-- | 27.6 KB |
internal.h | -rw-r--r-- | 1.3 KB |
madvise.c | -rw-r--r-- | 9.5 KB |
memory.c | -rw-r--r-- | 73.5 KB |
memory_hotplug.c | -rw-r--r-- | 13.9 KB |
mempolicy.c | -rw-r--r-- | 51.1 KB |
mempool.c | -rw-r--r-- | 9.0 KB |
migrate.c | -rw-r--r-- | 22.8 KB |
mincore.c | -rw-r--r-- | 5.7 KB |
mlock.c | -rw-r--r-- | 5.6 KB |
mmap.c | -rw-r--r-- | 57.6 KB |
mmzone.c | -rw-r--r-- | 750 bytes |
mprotect.c | -rw-r--r-- | 7.4 KB |
mremap.c | -rw-r--r-- | 10.8 KB |
msync.c | -rw-r--r-- | 2.4 KB |
nommu.c | -rw-r--r-- | 33.1 KB |
oom_kill.c | -rw-r--r-- | 13.2 KB |
page-writeback.c | -rw-r--r-- | 34.5 KB |
page_alloc.c | -rw-r--r-- | 123.0 KB |
page_io.c | -rw-r--r-- | 3.4 KB |
page_isolation.c | -rw-r--r-- | 3.4 KB |
pdflush.c | -rw-r--r-- | 6.4 KB |
prio_tree.c | -rw-r--r-- | 6.3 KB |
quicklist.c | -rw-r--r-- | 2.0 KB |
readahead.c | -rw-r--r-- | 13.3 KB |
rmap.c | -rw-r--r-- | 26.4 KB |
shmem.c | -rw-r--r-- | 65.2 KB |
shmem_acl.c | -rw-r--r-- | 4.6 KB |
slab.c | -rw-r--r-- | 115.2 KB |
slob.c | -rw-r--r-- | 15.0 KB |
slub.c | -rw-r--r-- | 94.8 KB |
sparse-vmemmap.c | -rw-r--r-- | 4.0 KB |
sparse.c | -rw-r--r-- | 10.0 KB |
swap.c | -rw-r--r-- | 13.3 KB |
swap_state.c | -rw-r--r-- | 9.4 KB |
swapfile.c | -rw-r--r-- | 43.9 KB |
thrash.c | -rw-r--r-- | 2.0 KB |
tiny-shmem.c | -rw-r--r-- | 3.1 KB |
truncate.c | -rw-r--r-- | 12.9 KB |
util.c | -rw-r--r-- | 2.7 KB |
vmalloc.c | -rw-r--r-- | 19.1 KB |
vmscan.c | -rw-r--r-- | 53.0 KB |
vmstat.c | -rw-r--r-- | 19.7 KB |