  3. 23 Aug, 2021 1 commit
    • fs: add a filemap_fdatawrite_wbc helper · 5a798493
      Josef Bacik authored
      
      Btrfs sometimes needs to flush dirty pages on a bunch of dirty inodes in
      order to reclaim metadata reservations.  Unfortunately most helpers in
      this area are too smart for us:
      
      1) The normal filemap_fdata* helpers only take range and sync modes, and
         don't give any indication of how much was written, so we can only
         flush full inodes, which isn't what we want in most cases.
      2) The normal writeback path requires us to have the s_umount sem held,
         but we can't unconditionally take it in this path because we could
         deadlock.
      3) The normal writeback path also skips inodes with I_SYNC set if we
         write with WB_SYNC_NONE.  That isn't the behavior we want under
         heavy ENOSPC pressure: we want to make sure the pages are actually
         under writeback before returning, and if another thread is in the
         middle of writing the file we may return before its pages are under
         writeback, miss our ordered extents, and fail to properly wait for
         completion.
      4) sync_inode() uses the normal writeback path and has the same problem
         as #3.
      
      What we really want is to call do_writepages() with our wbc.  This way
      we can make sure that writeback is actually started on the pages, and we
      can control how many pages are written as a whole as we write many
      inodes using the same wbc.  Accomplish this with a new helper that does
      just that so we can use it for our ENOSPC flushing infrastructure.
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
  4. 13 Jul, 2021 2 commits
    • mm: Add functions to lock invalidate_lock for two mappings · 7506ae6a
      Jan Kara authored
      
      Some operations such as reflinking blocks among files will need to lock
      invalidate_lock for two mappings. Add helper functions to do that.
      Reviewed-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jan Kara <jack@suse.cz>
    • mm: Protect operations adding pages to page cache with invalidate_lock · 730633f0
      Jan Kara authored
      
      Currently, serializing operations such as page fault, read, or readahead
      against hole punching is rather difficult. The basic race looks like
      this:
      
      fallocate(FALLOC_FL_PUNCH_HOLE)			read / fault / ..
        truncate_inode_pages_range()
      						  <create pages in page
      						   cache here>
        <update fs block mapping and free blocks>
      
      The problem is that this way read / page fault / readahead can
      instantiate pages in the page cache with potentially stale data (if the
      blocks get quickly reused). Avoiding this race is not simple - page
      locks do not work because we want to make sure there are *no* pages in
      the given range. inode->i_rwsem does not work because page faults happen
      under mmap_sem, which ranks below inode->i_rwsem, and using it for reads
      would hurt performance for mixed read-write workloads.
      
      So create a new rw_semaphore in the address_space - invalidate_lock -
      that protects the addition of pages to the page cache during page
      faults / reads / readahead.
      Reviewed-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jan Kara <jack@suse.cz>