mm/slab: use more appropriate condition check for debug_pagealloc
author	Joonsoo Kim <iamjoonsoo.kim@lge.com>
	Tue, 15 Mar 2016 21:54:18 +0000 (14:54 -0700)
committer	Linus Torvalds <torvalds@linux-foundation.org>
	Tue, 15 Mar 2016 23:55:16 +0000 (16:55 -0700)
debug_pagealloc debugging is tied to the SLAB_POISON flag rather than to the
FORCED_DEBUG option, although FORCED_DEBUG does enable SLAB_POISON.  Make the
condition check SLAB_POISON directly.
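
For background, FORCED_DEBUG (the CONFIG_DEBUG_SLAB force-debug mode) turns
poisoning on for most caches, so a cache that hit this path via FORCED_DEBUG
already carries SLAB_POISON in its flags.  A minimal sketch of that
relationship, paraphrased rather than copied verbatim from mm/slab.c's
__kmem_cache_create() (the size and alignment tests of the real condition are
left out here):

    #if FORCED_DEBUG
    	/* Forced slab debugging poisons caches unless they are RCU-freed. */
    	if (!(flags & SLAB_DESTROY_BY_RCU))
    		flags |= SLAB_POISON;
    #endif

    	/*
    	 * Keying the debug_pagealloc padding off SLAB_POISON itself covers
    	 * both forced-debug caches and explicitly poisoned ones.
    	 */
    	if (debug_pagealloc_enabled() && (flags & SLAB_POISON)) {
    		/* pad the object out to a full page, as in the hunk below */
    		size = PAGE_SIZE;
    	}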

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/slab.c

index 4807cf4..8bca9be 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2169,7 +2169,6 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
                else
                        size += BYTES_PER_WORD;
        }
-#if FORCED_DEBUG && defined(CONFIG_DEBUG_PAGEALLOC)
        /*
         * To activate debug pagealloc, off-slab management is necessary
         * requirement. In early phase of initialization, small sized slab
@@ -2177,14 +2176,13 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
         * to check size >= 256. It guarantees that all necessary small
         * sized slab is initialized in current slab initialization sequence.
         */
-       if (debug_pagealloc_enabled() &&
+       if (debug_pagealloc_enabled() && (flags & SLAB_POISON) &&
                !slab_early_init && size >= kmalloc_size(INDEX_NODE) &&
                size >= 256 && cachep->object_size > cache_line_size() &&
                ALIGN(size, cachep->align) < PAGE_SIZE) {
                cachep->obj_offset += PAGE_SIZE - ALIGN(size, cachep->align);
                size = PAGE_SIZE;
        }
-#endif
 #endif
 
        /*