1. 15 Aug, 2012 1 commit
    • nilfs2: fix deadlock issue between chcp and thaw ioctls · d7c61cba
      Ryusuke Konishi authored
      commit 572d8b39 upstream.
      
      An fs-thaw ioctl causes deadlock with a chcp or mkcp -s command:
      
       chcp            D ffff88013870f3d0     0  1325   1324 0x00000004
       ...
       Call Trace:
         nilfs_transaction_begin+0x11c/0x1a0 [nilfs2]
         wake_up_bit+0x20/0x20
         copy_from_user+0x18/0x30 [nilfs2]
         nilfs_ioctl_change_cpmode+0x7d/0xcf [nilfs2]
         nilfs_ioctl+0x252/0x61a [nilfs2]
         do_page_fault+0x311/0x34c
         get_unmapped_area+0x132/0x14e
         do_vfs_ioctl+0x44b/0x490
         __set_task_blocked+0x5a/0x61
         vm_mmap_pgoff+0x76/0x87
         __set_current_blocked+0x30/0x4a
         sys_ioctl+0x4b/0x6f
         system_call_fastpath+0x16/0x1b
       thaw            D ffff88013870d890     0  1352   1351 0x00000004
       ...
       Call Trace:
         rwsem_down_failed_common+0xdb/0x10f
         call_rwsem_down_write_failed+0x13/0x20
         down_write+0x25/0x27
         thaw_super+0x13/0x9e
         do_vfs_ioctl+0x1f5/0x490
         vm_mmap_pgoff+0x76/0x87
         sys_ioctl+0x4b/0x6f
         filp_close+0x64/0x6c
         system_call_fastpath+0x16/0x1b
      
      where the thaw ioctl deadlocked at thaw_super() when called while chcp was
      waiting at nilfs_transaction_begin() called from
      nilfs_ioctl_change_cpmode().  This deadlock is 100% reproducible.
      
      This is because nilfs_ioctl_change_cpmode() first locks sb->s_umount in
      read mode and then waits for unfreezing in nilfs_transaction_begin(),
      whereas thaw_super() locks sb->s_umount in write mode.  The locking of
      sb->s_umount here was intended to make snapshot mounts and the downgrade
      of snapshots to checkpoints exclusive.
      
      This fixes the deadlock issue by replacing the sb->s_umount usage in
      nilfs_ioctl_change_cpmode() with a dedicated mutex which protects snapshot
      mounts.
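       
       A minimal sketch of the change, assuming the upstream commit's names
       (a ns_snapshot_mount_mutex field on struct the_nilfs); elided steps
       are only hinted at in comments:
       
         struct the_nilfs {
                 /* ... */
                 struct mutex ns_snapshot_mount_mutex; /* protects snapshot mounts */
                 /* ... */
         };
       
         static int nilfs_ioctl_change_cpmode(struct inode *inode, struct file *filp,
                                              unsigned int cmd, void __user *argp)
         {
                 struct the_nilfs *nilfs = inode->i_sb->s_fs_info;
       
                 /* was: down_read(&inode->i_sb->s_umount), which deadlocks when
                  * nilfs_transaction_begin() sleeps on a frozen fs while
                  * thaw_super() wants s_umount for writing */
                 mutex_lock(&nilfs->ns_snapshot_mount_mutex);
                 /* ... copy_from_user(), nilfs_transaction_begin(), and the
                  * actual checkpoint/snapshot mode change ... */
                 mutex_unlock(&nilfs->ns_snapshot_mount_mutex);
                 return 0;
         }
       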
       Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
       Cc: Fernando Luis Vazquez Cao <fernando@oss.ntt.co.jp>
       Tested-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
       Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  2. 21 Mar, 2012 2 commits
  3. 07 Jan, 2012 1 commit
  4. 04 Jan, 2012 1 commit
    • vfs: fix the stupidity with i_dentry in inode destructors · 6b520e05
      Al Viro authored
      
      Seeing that just about every destructor got that INIT_LIST_HEAD() copied into
      it, there is no point whatsoever keeping this INIT_LIST_HEAD in inode_init_once();
      the cost of taking it into inode_init_always() will be negligible for pipes
      and sockets and negative for everything else.  Not to mention the removal of
      boilerplate code from ->destroy_inode() instances...
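       
       In sketch form, the move (each filesystem's ->destroy_inode() simply
       loses its INIT_LIST_HEAD(&inode->i_dentry) line):
       
         void inode_init_once(struct inode *inode)
         {
                 memset(inode, 0, sizeof(*inode));
                 /* i_dentry is no longer initialized here, once per slab
                  * object ... */
         }
       
         int inode_init_always(struct super_block *sb, struct inode *inode)
         {
                 /* ... */
                 INIT_LIST_HEAD(&inode->i_dentry); /* ... but here, on every
                                                    * inode initialization */
                 /* ... */
                 return 0;
         }
       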
       Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  5. 10 May, 2011 3 commits
  6. 09 Mar, 2011 6 commits
  7. 24 Feb, 2011 1 commit
    • mm: prevent concurrent unmap_mapping_range() on the same inode · 2aa15890
      Miklos Szeredi authored
      
      Michael Leun reported that running parallel opens on a fuse filesystem
      can trigger a "kernel BUG at mm/truncate.c:475"
      
      Gurudas Pai reported the same bug on NFS.
      
      The reason is, unmap_mapping_range() is not prepared for more than
      one concurrent invocation per inode.  For example:
      
        thread1: going through a big range, stops in the middle of a vma and
           stores the restart address in vm_truncate_count.
      
        thread2: comes in with a small (e.g. single page) unmap request on
           the same vma, somewhere before restart_address, finds that the
           vma was already unmapped up to the restart address and happily
           returns without doing anything.
      
      Another scenario would be two big unmap requests, both having to
      restart the unmapping and each one setting vm_truncate_count to its
      own value.  This could go on forever without any of them being able to
      finish.
      
      Truncate and hole punching already serialize with i_mutex.  Other
      callers of unmap_mapping_range() do not, and it's difficult to get
      i_mutex protection for all callers.  In particular ->d_revalidate(),
      which calls invalidate_inode_pages2_range() in fuse, may be called
      with or without i_mutex.
      
      This patch adds a new mutex to 'struct address_space' to prevent
      running multiple concurrent unmap_mapping_range() on the same mapping.
      
      [ We'll hopefully get rid of all this with the upcoming mm
        preemptibility series by Peter Zijlstra, the "mm: Remove i_mmap_mutex
        lockbreak" patch in particular.  But that is for 2.6.39 ]
       Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
       Reported-by: Michael Leun <lkml20101129@newton.leun.net>
       Reported-by: Gurudas Pai <gurudas.pai@oracle.com>
       Tested-by: Gurudas Pai <gurudas.pai@oracle.com>
       Acked-by: Hugh Dickins <hughd@google.com>
       Cc: stable@kernel.org
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  8. 22 Jan, 2011 1 commit
    • nilfs2: fix crash after one superblock became unavailable · 0ca7a5b9
      Ryusuke Konishi authored
      
       Fixes the following kernel oops in nilfs_setup_super(), which could
       arise if one of the two super-blocks is unavailable.
      
      > BUG: unable to handle kernel NULL pointer dereference at   (null)
      > Pid: 3529, comm: mount.nilfs2 Not tainted 2.6.37 #1 /
      > EIP: 0060:[<c03196bc>] EFLAGS: 00010202 CPU: 3
      > EIP is at memcpy+0xc/0x1b
      > Call Trace:
      >  [<f953720e>] ? nilfs_setup_super+0x6c/0xa5 [nilfs2]
      >  [<f95369e9>] ? nilfs_get_root_dentry+0x81/0xcb [nilfs2]
      >  [<f9537a08>] ? nilfs_mount+0x4f9/0x62c [nilfs2]
      >  [<c02745cf>] ? kstrdup+0x36/0x3f
      >  [<f953750f>] ? nilfs_mount+0x0/0x62c [nilfs2]
      >  [<c0293940>] ? vfs_kern_mount+0x4d/0x12c
      >  [<c02a5100>] ? get_fs_type+0x76/0x8f
      >  [<c0293a68>] ? do_kern_mount+0x33/0xbf
      >  [<c02a784a>] ? do_mount+0x2ed/0x714
      >  [<c02a6171>] ? copy_mount_options+0x28/0xfc
      >  [<c02a7ce3>] ? sys_mount+0x72/0xaf
      >  [<c0473085>] ? syscall_call+0x7/0xb
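       
       A hedged sketch of the kind of guard the fix adds in
       nilfs_setup_super() (names approximate): skip mirroring into the
       second superblock when it is unavailable, instead of passing a NULL
       pointer to memcpy().
       
         /* sbp[1] can be NULL if the second superblock copy is gone */
         if (sbp[1])
                 memcpy(sbp[1], sbp[0], nilfs->ns_sbsize);
       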
       Reported-by: Wakko Warner <wakko@animx.eu.org>
       Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
       Tested-by: Wakko Warner <wakko@animx.eu.org>
      Cc: stable <stable@kernel.org> [2.6.37, 2.6.36]
      LKML-Reference: <20110121024918.GA29598@animx.eu.org>
  9. 10 Jan, 2011 3 commits
  10. 07 Jan, 2011 2 commits
    • fs: icache RCU free inodes · fa0d7e3d
      Nick Piggin authored
      
      RCU free the struct inode. This will allow:
      
      - Subsequent store-free path walking patch. The inode must be consulted for
        permissions when walking, so an RCU inode reference is a must.
      - sb_inode_list_lock to be moved inside i_lock because sb list walkers who want
        to take i_lock no longer need to take sb_inode_list_lock to walk the list in
        the first place. This will simplify and optimize locking.
      - Could remove some nested trylock loops in dcache code
      - Could potentially simplify things a bit in VM land. Do not need to take the
        page lock to follow page->mapping.
      
       The downside of this is the performance cost of using RCU. In a simple
       creat/unlink microbenchmark, performance drops by about 10% due to the
       inability to reuse cache-hot slab objects. As iterations increase and RCU
       freeing starts kicking over, this increases to about 20%.
      
       In cases where inode lifetimes are longer (i.e. many inodes may be allocated
      during the average life span of a single inode), a lot of this cache reuse is
      not applicable, so the regression caused by this patch is smaller.
      
      The cache-hot regression could largely be avoided by using SLAB_DESTROY_BY_RCU,
      however this adds some complexity to list walking and store-free path walking,
      so I prefer to implement this at a later date, if it is shown to be a win in
      real situations. I haven't found a regression in any non-micro benchmark so I
      doubt it will be a problem.
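       
       In sketch form, the resulting free path (per-filesystem
       destroy_inode() implementations follow the same pattern; i_rcu shares
       space with i_dentry in a union):
       
         static void i_callback(struct rcu_head *head)
         {
                 struct inode *inode = container_of(head, struct inode, i_rcu);
                 kmem_cache_free(inode_cachep, inode);
         }
       
         static void destroy_inode(struct inode *inode)
         {
                 /* ... synchronous teardown ... */
                 call_rcu(&inode->i_rcu, i_callback); /* the actual free waits
                                                       * for a grace period, so
                                                       * RCU path walkers never
                                                       * touch freed memory */
         }
       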
       Signed-off-by: Nick Piggin <npiggin@kernel.dk>
    • fs: dcache scale dentry refcount · b7ab39f6
      Nick Piggin authored
      
      Make d_count non-atomic and protect it with d_lock. This allows us to ensure a
      0 refcount dentry remains 0 without dcache_lock. It is also fairly natural when
      we start protecting many other dentry members with d_lock.
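       
       In sketch form (the real patch touches many call sites; the helper
       name below is illustrative):
       
         struct dentry {
                 unsigned int d_count; /* was atomic_t; now only read and
                                        * written under d_lock */
                 spinlock_t d_lock;
                 /* ... */
         };
       
         static struct dentry *dget_sketch(struct dentry *dentry)
         {
                 spin_lock(&dentry->d_lock);
                 dentry->d_count++; /* a dentry at refcount 0 can only be
                                     * resurrected under d_lock */
                 spin_unlock(&dentry->d_lock);
                 return dentry;
         }
       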
       Signed-off-by: Nick Piggin <npiggin@kernel.dk>
  11. 13 Nov, 2010 2 commits
    • block: clean up blkdev_get() wrappers and their users · d4d77629
      Tejun Heo authored
      
      After recent blkdev_get() modifications, open_by_devnum() and
      open_bdev_exclusive() are simple wrappers around blkdev_get().
      Replace them with blkdev_get_by_dev() and blkdev_get_by_path().
      
      blkdev_get_by_dev() is identical to open_by_devnum().
      blkdev_get_by_path() is slightly different in that it doesn't
      automatically add %FMODE_EXCL to @mode.
      
      All users are converted.  Most conversions are mechanical and don't
      introduce any behavior difference.  There are several exceptions.
      
      * btrfs now sets FMODE_EXCL in btrfs_device->mode, so there's no
        reason to OR it explicitly on blkdev_put().
      
      * gfs2, nilfs2 and the generic mount_bdev() now set FMODE_EXCL in
        sb->s_mode.
      
      * With the above changes, sb->s_mode now always should contain
        FMODE_EXCL.  WARN_ON_ONCE() added to kill_block_super() to detect
        errors.
      
       The new blkdev_get_*() functions come with proper docbook comments.
       While at it, add a function description to blkdev_get() too.
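       
       The replacement entry points, plus an illustrative caller in the
       mount_bdev() style (error handling trimmed):
       
         struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode,
                                                void *holder);
         struct block_device *blkdev_get_by_path(const char *path, fmode_t mode,
                                                 void *holder);
       
         /* open a filesystem's backing device exclusively; note that
          * FMODE_EXCL must now be passed explicitly */
         bdev = blkdev_get_by_path(dev_name,
                                   FMODE_READ | FMODE_WRITE | FMODE_EXCL,
                                   fs_type);
         if (IS_ERR(bdev))
                 return PTR_ERR(bdev);
       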
       Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Philipp Reisner <philipp.reisner@linbit.com>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: Joern Engel <joern@lazybastard.org>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: KONISHI Ryusuke <konishi.ryusuke@lab.ntt.co.jp>
      Cc: reiserfs-devel@vger.kernel.org
      Cc: xfs-masters@oss.sgi.com
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
    • block: make blkdev_get/put() handle exclusive access · e525fd89
      Tejun Heo authored
      
       Over time, the block layer has accumulated a set of APIs dealing with
       bdev open, close, claim and release.
      
      * blkdev_get/put() are the primary open and close functions.
      
      * bd_claim/release() deal with exclusive open.
      
      * open/close_bdev_exclusive() are combination of open and claim and
        the other way around, respectively.
      
      * bd_link/unlink_disk_holder() to create and remove holder/slave
        symlinks.
      
      * open_by_devnum() wraps bdget() + blkdev_get().
      
       The interface is a bit confusing and the decoupling of open and claim
       makes it impossible to properly guarantee exclusive access, as an
       in-kernel open + claim sequence can disturb the existing exclusive
       open even before the block layer knows the current open is for another
       exclusive access.  Reorganize the interface such that,
      
       * blkdev_get() is extended to include exclusive access management.
         @holder argument is added and, if @FMODE_EXCL is specified, it will
         gain exclusive access atomically w.r.t. other exclusive accesses.
      
      * blkdev_put() is similarly extended.  It now takes @mode argument and
        if @FMODE_EXCL is set, it releases an exclusive access.  Also, when
        the last exclusive claim is released, the holder/slave symlinks are
        removed automatically.
      
      * bd_claim/release() and close_bdev_exclusive() are no longer
        necessary and either made static or removed.
      
      * bd_link_disk_holder() remains the same but bd_unlink_disk_holder()
        is no longer necessary and removed.
      
      * open_bdev_exclusive() becomes a simple wrapper around lookup_bdev()
        and blkdev_get().  It also has an unexpected extra bdev_read_only()
        test which probably should be moved into blkdev_get().
      
      * open_by_devnum() is modified to take @holder argument and pass it to
        blkdev_get().
      
       Most bdev open/close operations are unified into blkdev_get/put() and
       most exclusive accesses are tested atomically at open time (as they
       should be).  This cleans up the code and removes some corner cases,
       both valid and invalid, but unnecessary all the same.
      
      open_bdev_exclusive() and open_by_devnum() can use further cleanup -
      rename to blkdev_get_by_path() and blkdev_get_by_devt() and drop
      special features.  Well, let's leave them for another day.
      
      Most conversions are straight-forward.  drbd conversion is a bit more
      involved as there was some reordering, but the logic should stay the
      same.
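       
       In sketch form, the extended primitives and an exclusive open/close
       pair (@holder is any stable cookie identifying the claimer; the
       variable names are illustrative):
       
         int blkdev_get(struct block_device *bdev, fmode_t mode, void *holder);
         int blkdev_put(struct block_device *bdev, fmode_t mode);
       
         /* open and claim in one step; fails if another holder already has
          * an exclusive claim */
         err = blkdev_get(bdev, FMODE_READ | FMODE_WRITE | FMODE_EXCL, holder);
         /* ... use the device ... */
         blkdev_put(bdev, FMODE_READ | FMODE_WRITE | FMODE_EXCL); /* drops the
                                                                   * claim too */
       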
       Signed-off-by: Tejun Heo <tj@kernel.org>
       Acked-by: Neil Brown <neilb@suse.de>
       Acked-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
       Acked-by: Mike Snitzer <snitzer@redhat.com>
       Acked-by: Philipp Reisner <philipp.reisner@linbit.com>
      Cc: Peter Osterlund <petero2@telia.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andreas Dilger <adilger.kernel@dilger.ca>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: Joel Becker <joel.becker@oracle.com>
      Cc: Alex Elder <aelder@sgi.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: dm-devel@redhat.com
      Cc: drbd-dev@lists.linbit.com
      Cc: Leo Chen <leochen@broadcom.com>
      Cc: Scott Branden <sbranden@broadcom.com>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
      Cc: Joern Engel <joern@logfs.org>
      Cc: reiserfs-devel@vger.kernel.org
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
  12. 29 Oct, 2010 1 commit
  13. 23 Oct, 2010 16 commits