
[0/14,v7] fs: Hole punch vs page cache filling races

Message ID 20210607144631.8717-1-jack@suse.cz

Message

Jan Kara June 7, 2021, 2:52 p.m. UTC
Hello,

here is another version of my patches to address races between hole punching
and page cache filling functions for ext4 and other filesystems. The biggest
change since last time is a new function for locking two mappings, now used
by XFS. Other changes are minor.

Out of all filesystems supporting hole punching, only GFS2 and OCFS2 remain
unresolved. The GFS2 people are working on their own solution (cluster
locking is involved); OCFS2 has even bigger issues (maintainers have been
informed and are looking into it).

As a next step, I'd like to make sure all calls to truncate_inode_pages()
happen under mapping->invalidate_lock, add an assertion for that, and then
we can also get rid of i_size checks in some places (truncate can use the
same serialization scheme as hole punch). But that step is mostly a cleanup,
so I'd like to get these functional fixes in first.

Note that the first patch of the series is already in mm tree but I'm
submitting it here so that the series applies to Linus' tree cleanly.

Changes since v6:
* Added some reviewed-by tags
* Added wrapper for taking invalidate_lock similar to inode_lock
* Renamed wrappers for taking invalidate_lock for two inodes
* Added xfs patch to make xfs_isilocked() work better even without lockdep
* Some minor documentation fixes

Changes since v5:
* Added some reviewed-by tags
* Added functions for locking two mappings and using them from XFS where needed
* Some minor code style & comment fixes

Changes since v4:
* Rebased onto 5.13-rc1
* Removed shmfs conversion patches
* Fixed up zonefs changelog
* Fixed up XFS comments
* Added patch fixing up definition of file_operations in Documentation/vfs/
* Updated documentation and comments to explain invalidate_lock is used also
  to prevent changes through memory mappings to existing pages for some VFS
  operations.

Changes since v3:
* Renamed and moved lock to struct address_space
* Added conversions of tmpfs, ceph, cifs, fuse, f2fs
* Fixed error handling path in filemap_read()
* Removed .page_mkwrite() cleanup from the series for now

Changes since v2:
* Added documentation and comments regarding lock ordering and how the lock is
  supposed to be used
* Added conversions of ext2, xfs, zonefs
* Added patch removing i_mapping_sem protection from .page_mkwrite handlers

Changes since v1:
* Moved to using inode->i_mapping_sem instead of aops handler to acquire
  appropriate lock

---
Motivation:

Amir has reported [1] that ext4 has a potential issue where reads can race
with hole punching, possibly exposing stale data from freed blocks or even
corrupting the filesystem when stale mapping data gets used for writeout.
The problem is that during hole punching, new page cache pages can get
instantiated, and block mappings looked up for them, in the punched range
after truncate_inode_pages() has run but before the filesystem removes
blocks from the file. In principle, any filesystem implementing hole
punching thus needs a mechanism to block instantiation of page cache pages
during hole punching to avoid this race. This is further complicated by the
fact that there are multiple places that can instantiate pages in the page
cache. We can have a regular read(2) or a page fault doing this, but
fadvise(2) or madvise(2) can also result in reading in page cache pages
through force_page_cache_readahead().

There are a couple of ways to fix this. The first (currently implemented by
XFS) is to protect read(2) and *advise(2) calls with i_rwsem so that they
are serialized with hole punching. This is easy to do, but as a result all
reads would be serialized with writes, and mixed read-write workloads thus
suffer heavily on ext4. So this series instead introduces
mapping->invalidate_lock (called inode->i_mapping_sem in earlier versions)
and uses it when creating new pages in the page cache and looking up their
corresponding block mapping. We also replace EXT4_I(inode)->i_mmap_sem with
this new rwsem, which provides the necessary serialization with hole
punching for ext4.

								Honza

[1] https://lore.kernel.org/linux-fsdevel/CAOQ4uxjQNmxqmtA_VbYW0Su9rKRk2zobJmahcyeaEVOFKVQ5dw@mail.gmail.com/

Previous versions:
Link: https://lore.kernel.org/linux-fsdevel/20210208163918.7871-1-jack@suse.cz/
Link: https://lore.kernel.org/r/20210413105205.3093-1-jack@suse.cz
Link: https://lore.kernel.org/r/20210423171010.12-1-jack@suse.cz
Link: https://lore.kernel.org/r/20210512101639.22278-1-jack@suse.cz
Link: https://lore.kernel.org/r/20210525125652.20457-1-jack@suse.cz

Comments

Jan Kara June 8, 2021, 11:54 a.m. UTC | #1
Hello,

I wanted to add one more thing: the series has gathered a decent amount of
review, so it seems to be mostly ready to go. But I'd still like some review
from the FUSE side (Miklos?) and then someone from the pagecache / mm side
checking mainly patch 3/14. Since most of the changes are in fs/, I guess it
makes most sense to merge this through some fs tree - either XFS, ext4, or
VFS.

								Honza

On Mon 07-06-21 16:52:10, Jan Kara wrote:
> Hello,
> 
> here is another version of my patches to address races between hole punching
> and page cache filling functions for ext4 and other filesystems.
[...]

-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR
Jan Kara June 8, 2021, 12:19 p.m. UTC | #2
On Mon 07-06-21 09:09:22, Darrick J. Wong wrote:
> On Mon, Jun 07, 2021 at 04:52:13PM +0200, Jan Kara wrote:
> > Currently, serializing operations such as page fault, read, or readahead
> > against hole punching is rather difficult. The basic race scheme is
> > like:
> > 
> > fallocate(FALLOC_FL_PUNCH_HOLE)			read / fault / ..
> >   truncate_inode_pages_range()
> > 						  <create pages in page
> > 						   cache here>
> >   <update fs block mapping and free blocks>
> > 
> > Now the problem is in this way read / page fault / readahead can
> > instantiate pages in page cache with potentially stale data (if blocks
> > get quickly reused). Avoiding this race is not simple - page locks do
> > not work because we want to make sure there are *no* pages in given
> > range. inode->i_rwsem does not work because page fault happens under
> > mmap_sem which ranks below inode->i_rwsem. Also using it for reads makes
> > the performance for mixed read-write workloads suffer.
> > 
> > So create a new rw_semaphore in the address_space - invalidate_lock -
> > that protects adding of pages to page cache for page faults / reads /
> > readahead.
> > 
> > Signed-off-by: Jan Kara <jack@suse.cz>
...
> > +->fallocate implementation must be really careful to maintain page cache
> > +consistency when punching holes or performing other operations that invalidate
> > +page cache contents. Usually the filesystem needs to call
> > +truncate_inode_pages_range() to invalidate relevant range of the page cache.
> > +However the filesystem usually also needs to update its internal (and on disk)
> > +view of file offset -> disk block mapping. Until this update is finished, the
> > +filesystem needs to block page faults and reads from reloading now-stale page
> > +cache contents from the disk. VFS provides mapping->invalidate_lock for this
> > +and acquires it in shared mode in paths loading pages from disk
> > +(filemap_fault(), filemap_read(), readahead paths). The filesystem is
> > +responsible for taking this lock in its fallocate implementation and generally
> > +whenever the page cache contents needs to be invalidated because a block is
> > +moving from under a page.
> 
> Having a page cache invalidation lock isn't optional anymore, so I think
> these last two sentences could be condensed:
> 
> "...from reloading now-stale page cache contents from disk.  Since VFS
> acquires mapping->invalidate_lock in shared mode when loading pages from
> disk (filemap_fault(), filemap_read(), readahead), the fallocate
> implementation must take the invalidate_lock to prevent reloading."
> 
> > +
> > +->copy_file_range and ->remap_file_range implementations need to serialize
> > +against modifications of file data while the operation is running. For
> > +blocking changes through write(2) and similar operations inode->i_rwsem can be
> > +used. For blocking changes through memory mapping, the filesystem can use
> > +mapping->invalidate_lock provided it also acquires it in its ->page_mkwrite
> > +implementation.
> 
> Following the same line of reasoning, if taking the invalidate_lock is
> no longer optional, then the conditional language in this last sentence
> is incorrect.  How about:
> 
> "To block changes to file contents via a memory mapping during the
> operation, the filesystem must take mapping->invalidate_lock to
> coordinate with ->page_mkwrite."
> 
> The code changes look fine to me, though I'm no mm expert. ;)

OK, I've updated the documentation as you suggested. Thanks for review.

									Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR
Jan Kara June 8, 2021, 12:23 p.m. UTC | #3
On Mon 07-06-21 08:56:33, Darrick J. Wong wrote:
> On Mon, Jun 07, 2021 at 04:52:18PM +0200, Jan Kara wrote:
> > Use invalidate_lock instead of XFS internal i_mmap_lock. The intended
> > purpose of invalidate_lock is exactly the same. Note that the locking in
> > __xfs_filemap_fault() slightly changes as filemap_fault() already takes
> > invalidate_lock.
> > 
> > Reviewed-by: Christoph Hellwig <hch@lst.de>
> > CC: <linux-xfs@vger.kernel.org>
> > CC: "Darrick J. Wong" <djwong@kernel.org>
> > Signed-off-by: Jan Kara <jack@suse.cz>
> > ---
> >  fs/xfs/xfs_file.c  | 13 +++++++-----
> >  fs/xfs/xfs_inode.c | 50 ++++++++++++++++++++++++----------------------
> >  fs/xfs/xfs_inode.h |  1 -
> >  fs/xfs/xfs_super.c |  2 --
> >  4 files changed, 34 insertions(+), 32 deletions(-)
> > 
> > diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> > index 396ef36dcd0a..7cb7703c2209 100644
> > --- a/fs/xfs/xfs_file.c
> > +++ b/fs/xfs/xfs_file.c
> > @@ -1282,7 +1282,7 @@ xfs_file_llseek(
> >   *
> >   * mmap_lock (MM)
> >   *   sb_start_pagefault(vfs, freeze)
> > - *     i_mmaplock (XFS - truncate serialisation)
> > + *     invalidate_lock (vfs/XFS_MMAPLOCK - truncate serialisation)
> >   *       page_lock (MM)
> >   *         i_lock (XFS - extent map serialisation)
> >   */
> > @@ -1303,24 +1303,27 @@ __xfs_filemap_fault(
> >  		file_update_time(vmf->vma->vm_file);
> >  	}
> >  
> > -	xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
> >  	if (IS_DAX(inode)) {
> >  		pfn_t pfn;
> >  
> > +		xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
> >  		ret = dax_iomap_fault(vmf, pe_size, &pfn, NULL,
> >  				(write_fault && !vmf->cow_page) ?
> >  				 &xfs_direct_write_iomap_ops :
> >  				 &xfs_read_iomap_ops);
> >  		if (ret & VM_FAULT_NEEDDSYNC)
> >  			ret = dax_finish_sync_fault(vmf, pe_size, pfn);
> > +		xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
> 
> I've been wondering if iomap_page_mkwrite and dax_iomap_fault should be
> taking these locks?  I guess that would violate the premise that iomap
> requires that callers arrange for concurrency control (i.e. iomap
> doesn't take locks).

Well, iomap does take page locks but I agree that generally it stays away
from high-level locks. So keeping invalidate_lock out of it makes more
sense to me as well.

> Code changes look fine, though.
> 
> Reviewed-by: Darrick J. Wong <djwong@kernel.org>

Thanks!

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR