kmemcheck: move hook into __alloc_pages_nodemask() for the page allocator
author	Xishi Qiu <qiuxishi@huawei.com>
	Wed, 11 Feb 2015 23:25:07 +0000 (15:25 -0800)
committer	Linus Torvalds <torvalds@linux-foundation.org>
	Thu, 12 Feb 2015 01:06:01 +0000 (17:06 -0800)
Currently, kmemcheck_pagealloc_alloc() is called only from
__alloc_pages_slowpath():

__alloc_pages_nodemask()
	__alloc_pages_slowpath()
		kmemcheck_pagealloc_alloc()

As a result, pages allocated on the fast path are not tracked by
kmemcheck:

__alloc_pages_nodemask()
	get_page_from_freelist()

So move kmemcheck_pagealloc_alloc() into __alloc_pages_nodemask(),
where it covers both paths:

__alloc_pages_nodemask()
	...
	get_page_from_freelist()
	if (!page)
		__alloc_pages_slowpath()
	kmemcheck_pagealloc_alloc()
	...

Signed-off-by: Xishi Qiu <qiuxishi@huawei.com>
Cc: Vegard Nossum <vegard.nossum@oracle.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Li Zefan <lizefan@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/page_alloc.c

index 1c7d90f..a88cb0c 100644 (file)
@@ -2842,11 +2842,7 @@ retry:
 
 nopage:
        warn_alloc_failed(gfp_mask, order, NULL);
-       return page;
 got_pg:
-       if (kmemcheck_enabled)
-               kmemcheck_pagealloc_alloc(page, order, gfp_mask);
-
        return page;
 }
 
@@ -2916,6 +2912,9 @@ retry_cpuset:
                                preferred_zone, classzone_idx, migratetype);
        }
 
+       if (kmemcheck_enabled && page)
+               kmemcheck_pagealloc_alloc(page, order, gfp_mask);
+
        trace_mm_page_alloc(page, order, alloc_mask, migratetype);
 
 out: