1. 30 Mar, 2010 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Tejun Heo authored
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h, which
      in turn includes gfp.h, making everything defined by the two files
      universally available and complicating inclusion dependencies.
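
      For illustration only (this snippet is not part of the patch and the
      function is made up), this is what the implicit chain allows today and
      what the explicit form looks like after the conversion:

        /* before: kmalloc() compiles only because percpu.h drags in slab.h */
        #include <linux/percpu.h>

        /* after: the slab user spells out its own dependency */
        #include <linux/percpu.h>
        #include <linux/slab.h>

        static void *example_buf_alloc(size_t size)
        {
                return kmalloc(size, GFP_KERNEL); /* GFP_KERNEL comes via slab.h -> gfp.h */
        }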
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming availability.  As this conversion
      needs to touch a large number of source files, the following script is
      used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following:
      
      * Scan files for gfp and slab usage and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to place the new include so that its order conforms
        to its surroundings.  It is put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, reverse Christmas tree - or at the end
        if there doesn't seem to be any matching order (see the sketch after
        this list).
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
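
      As an illustration of the placement heuristic (the include list below is
      made up), a new slab.h include is slotted into an alphabetically ordered
      core kernel block and the arch block is left alone:

        #include <linux/errno.h>
        #include <linux/kernel.h>
        #include <linux/module.h>
        #include <linux/slab.h>         /* inserted here, keeping the alphabetical order */
        #include <linux/spinlock.h>

        #include <asm/io.h>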
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some files didn't need the inclusion,
         some needed a manual addition, and for others adding it to an
         implementation .h or to the embedding .c file was more appropriate.
         This step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed,
         e.g. lib/decompress_*.c uses malloc()/free() wrappers around the slab
         APIs and required slab.h to be added manually (see the sketch after
         this list).
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. The percpu.h modifications were reverted so that they could be applied
         as a separate patch and serve as a bisection point.
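
      The wrappers mentioned in step 4 look roughly like this (a simplified
      sketch, not the exact lib/decompress code); with the implicit inclusion
      gone, slab.h has to be pulled in explicitly for kmalloc()/kfree():

        #include <linux/slab.h>         /* now required explicitly */

        #define malloc(size)    kmalloc(size, GFP_KERNEL)
        #define free(ptr)       kfree(ptr)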
      
      Given that I had only a couple of failures from the tests in step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers, which should be easily discoverable on most builds of the
      specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  2. 14 Dec, 2009 1 commit
  3. 11 Oct, 2009 1 commit
  4. 17 Mar, 2009 2 commits
    • debugobjects: delay free of internal objects · 337fff8b
      Thomas Gleixner authored
      
      Impact: avoid recursive kfree calls, less slab activity on heavy load
      
      debugobjects checks on kfree whether tracked objects are freed. When a
      tracked object is freed, debugobjects frees the internal reference
      object as well. The debug object slab cache is marked not to recurse
      into debugobjects when a slab object is freed, but the recursive call
      can be problematic with respect to locking in the memory allocator.
      
      Defer the freeing of debug slab objects via schedule_work (a sketch of
      the pattern follows the list below). The reasons not to use RCU are:
      
      1) RCU makes the data structure larger
      2) there is no real need for RCU as nothing references the obj after
         we have freed it
      3) under heavy load it is easier to reuse the to-be-freed objects instead
         of allocating new objects from the slab. This lowered the slab activity
         significantly in a heavy-load networking test where lots of timers are
         created/destroyed. The workqueue-based delayed free allows us to simply
         put the to-be-freed objects back into the object pool and reuse them
         right away.
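
      A minimal sketch of the workqueue-based scheme (simplified; names are
      modeled on lib/debugobjects.c, but the real code also trims the pool and
      can still fall back to kmem_cache_free()):

        #include <linux/list.h>
        #include <linux/spinlock.h>
        #include <linux/workqueue.h>

        struct debug_obj {
                struct hlist_node node;
                /* tracking fields elided */
        };

        static HLIST_HEAD(obj_pool);            /* free objects ready for reuse */
        static HLIST_HEAD(obj_to_free);         /* objects parked for deferred freeing */
        static DEFINE_SPINLOCK(pool_lock);

        static void free_obj_work(struct work_struct *work)
        {
                struct debug_obj *obj;
                unsigned long flags;

                spin_lock_irqsave(&pool_lock, flags);
                while (!hlist_empty(&obj_to_free)) {
                        obj = hlist_entry(obj_to_free.first, struct debug_obj, node);
                        hlist_del(&obj->node);
                        /* reuse instead of kmem_cache_free(): refill the pool */
                        hlist_add_head(&obj->node, &obj_pool);
                }
                spin_unlock_irqrestore(&pool_lock, flags);
        }
        static DECLARE_WORK(debug_obj_work, free_obj_work);

        static void free_object(struct debug_obj *obj)
        {
                unsigned long flags;

                spin_lock_irqsave(&pool_lock, flags);
                hlist_add_head(&obj->node, &obj_to_free);       /* no kfree from this context */
                spin_unlock_irqrestore(&pool_lock, flags);
                schedule_work(&debug_obj_work);
        }
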
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <200903162049.58058.nickpiggin@yahoo.com.au>
    • debugobjects: replace static objects when slab cache becomes available · 1be1cb7b
      Thomas Gleixner authored
      
      Impact: refactor/consolidate object management, prepare for delayed free
      
      debugobjects allocates static reference objects to track objects which
      are initialized or activated before the slab cache becomes
      available. These static reference objects have to be handled
      separately in free_object(). The handling of these objects gets in the
      way of implementing a delayed free functionality. The delayed free is
      required to avoid callbacks into the mm code from
      debug_check_no_obj_freed().
      
      Replace the static object references with dynamic ones after the slab
      cache has been initialized. The static objects are now marked initdata
      (see the sketch below).
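
      Roughly (a sketch with simplified logic; the real replacement also walks
      the hash buckets for objects that are already tracked), the boot-time
      objects live in an __initdata array and the free pool is repopulated
      with kmem_cache-backed copies once the cache exists:

        static struct debug_obj obj_static_pool[ODEBUG_POOL_SIZE] __initdata;

        static int __init debug_objects_replace_static_objects(void)
        {
                struct debug_obj *obj, *new;
                HLIST_HEAD(objects);

                while (!hlist_empty(&obj_pool)) {
                        new = kmem_cache_zalloc(obj_cache, GFP_KERNEL);
                        if (!new)
                                return -ENOMEM;

                        obj = hlist_entry(obj_pool.first, struct debug_obj, node);
                        hlist_del(&obj->node); /* static obj is simply dropped; it is init memory */
                        *new = *obj;
                        hlist_add_head(&new->node, &objects);
                }
                hlist_move_list(&objects, &obj_pool); /* pool now holds only dynamic objects */
                return 0;
        }
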
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <200903162049.58058.nickpiggin@yahoo.com.au>
  5. 02 Mar, 2009 1 commit
  6. 26 Nov, 2008 1 commit
    • debugobjects: add boot parameter default value · 3ae70205
      Ingo Molnar authored
      
      Impact: add .config-driven boot parameter default value
      
      Right now debugobjects can only be activated if the debug_objects
      boot parameter is passed in via the boot command line.
      
      Make this more convenient (and randomizable) by also providing
      a .config method. Enable it by default. (DEBUG_OBJECTS itself
      is default-off)
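
      The mechanics are small (a sketch; the exact symbol names below are
      illustrative rather than authoritative).  The boot parameter then merely
      overrides the compile-time default:

        static int debug_objects_enabled __read_mostly
                                = CONFIG_DEBUG_OBJECTS_ENABLE_DEFAULT;

        static int __init enable_object_debug(char *str)
        {
                debug_objects_enabled = 1;
                return 0;
        }
        early_param("debug_objects", enable_object_debug);
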
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  7. 01 Sep, 2008 1 commit
    • debugobjects: fix lockdep warning · 673d62cc
      Vegard Nossum authored
      
      Daniel J. Blueman reported:
      > =======================================================
      > [ INFO: possible circular locking dependency detected ]
      > 2.6.27-rc4-224c #1
      > -------------------------------------------------------
      > hald/4680 is trying to acquire lock:
      >  (&n->list_lock){++..}, at: [<ffffffff802bfa26>] add_partial+0x26/0x80
      >
      > but task is already holding lock:
      >  (&obj_hash[i].lock){++..}, at: [<ffffffff8041cfdc>]
      > debug_object_free+0x5c/0x120
      
      We fix it by moving the actual freeing outside the lock; the lock
      now only protects the list.
      
      The pool lock is also promoted to irq-safe (suggested by Dan). It's
      necessary because free_pool is now called outside the irq disabled
      region. So we need to protect against an interrupt handler which calls
      debug_object_init().
      
      [tglx@linutronix.de: added hlist_move_list helper to avoid looping
      		     through the list twice]
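
      A sketch of the resulting free path (simplified; the pool bookkeeping
      and names only loosely follow lib/debugobjects.c): the object is either
      put back into the pool under the now irq-safe lock, or handed to the
      slab allocator only after the lock has been dropped.

        static void free_object(struct debug_obj *obj)
        {
                unsigned long flags;
                int free_it = 0;

                /* irq-safe: debug_object_init() may run from an interrupt handler */
                spin_lock_irqsave(&pool_lock, flags);
                if (obj_pool_free < ODEBUG_POOL_SIZE) {
                        hlist_add_head(&obj->node, &obj_pool); /* keep it for reuse */
                        obj_pool_free++;
                } else {
                        free_it = 1;
                }
                spin_unlock_irqrestore(&pool_lock, flags);

                if (free_it)
                        kmem_cache_free(obj_cache, obj); /* no debugobjects lock held here */
        }
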
      Reported-by: Daniel J Blueman <daniel.blueman@gmail.com>
      Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  8. 26 Jul, 2008 1 commit
  9. 24 Jul, 2008 1 commit
  10. 18 Jun, 2008 1 commit
    • debugobjects: fix lockdep warning · 50db04dd
      Vegard Nossum authored
      
      Daniel J Blueman reported:
      | =======================================================
      | [ INFO: possible circular locking dependency detected ]
      | 2.6.26-rc5-201c #1
      | -------------------------------------------------------
      | nscd/3669 is trying to acquire lock:
      |  (&n->list_lock){.+..}, at: [<ffffffff802bab03>] deactivate_slab+0x173/0x1e0
      |
      | but task is already holding lock:
      |  (&obj_hash[i].lock){++..}, at: [<ffffffff803fa56f>]
      | __debug_object_init+0x2f/0x350
      |
      | which lock already depends on the new lock.
      
      There are two locks involved here; the first is a SLUB-local lock, and
      the second is a debugobjects-local lock. They are basically taken in two
      different orders:
      
      1. SLUB { debugobjects { ... } }
      2. debugobjects { SLUB { ... } }
      
      This patch changes pattern #2 by trying to fill the memory pool (i.e.
      the call into SLUB/kmalloc()) outside the debugobjects lock, so now the
      two patterns look like this (sketched in code further below):
      
      1. SLUB { debugobjects { ... } }
      2. SLUB { } debugobjects { ... }
      
      [ daniel.blueman@gmail.com: pool_lock needs to be taken irq safe in fill_pool ]
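
      In code, the new pattern #2 looks roughly like this (a simplified sketch
      of fill_pool(); details differ from the actual lib/debugobjects.c): the
      allocation, and with it any SLUB-internal locking, happens before the
      debugobjects pool_lock is taken.

        static void fill_pool(void)
        {
                gfp_t gfp = GFP_ATOMIC | __GFP_NORETRY | __GFP_NOWARN;
                struct debug_obj *new;
                unsigned long flags;

                while (obj_pool_free < ODEBUG_POOL_MIN_LEVEL) {
                        new = kmem_cache_zalloc(obj_cache, gfp); /* SLUB runs without pool_lock */
                        if (!new)
                                return;

                        spin_lock_irqsave(&pool_lock, flags); /* irq-safe, per the note above */
                        hlist_add_head(&new->node, &obj_pool);
                        obj_pool_free++;
                        spin_unlock_irqrestore(&pool_lock, flags);
                }
        }
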
      Reported-by: Daniel J Blueman <daniel.blueman@gmail.com>
      Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  11. 30 Apr, 2008 1 commit
    • infrastructure to debug (dynamic) objects · 3ac7fe5a
      Thomas Gleixner authored
      We can see an ever repeating problem pattern with objects of any kind in the
      kernel:
      
      1) freeing of active objects
      2) reinitialization of active objects
      
      Both problems can be hard to debug because the crash happens at a point where
      we have no chance to decode the root cause anymore.  One problem spot is
      kernel timers, where the detection of the problem often happens in interrupt
      context and usually causes the machine to panic.
      
      While working on a timer-related bug report I had to hack specialized code
      into the timer subsystem to get a reasonable hint for the root cause.  This
      debug hack was fine for temporary use, but far from a mergeable solution due
      to its intrusiveness into the timer code.
      
      The code further lacked the ability to detect and report the root cause
      instantly and keep the system operational.
      
      Keeping the system operational is important to get hold of the debug
      information without special debugging aids like serial consoles and special
      knowledge of ...