From f13242a46438e690067a4bf47068fde4d5719947 Mon Sep 17 00:00:00 2001
From: Michael Ellerman
Date: Sat, 16 Nov 2024 00:41:14 +1100
Subject: [PATCH 01/12] selftests/mount_setattr: Fix failures on 64K PAGE_SIZE kernels

Currently the mount_setattr_test fails on machines with a 64K PAGE_SIZE,
with errors such as:

  # RUN mount_setattr_idmapped.invalid_fd_negative ...
  mkfs.ext4: No space left on device while writing out and closing file system
  # mount_setattr_test.c:1055:invalid_fd_negative:Expected system("mkfs.ext4 -q /mnt/C/ext4.img") (256) == 0 (0)
  # invalid_fd_negative: Test terminated by assertion
  # FAIL mount_setattr_idmapped.invalid_fd_negative
  not ok 12 mount_setattr_idmapped.invalid_fd_negative

The code creates a 100,000 byte tmpfs:

  ASSERT_EQ(mount("testing", "/mnt", "tmpfs", MS_NOATIME | MS_NODEV,
                  "size=100000,mode=700"), 0);

And then a little later creates a 2MB ext4 filesystem in that tmpfs:

  ASSERT_EQ(ftruncate(img_fd, 1024 * 2048), 0);
  ASSERT_EQ(system("mkfs.ext4 -q /mnt/C/ext4.img"), 0);

At first glance it seems like that should never work, after all 2MB is
larger than 100,000 bytes. However the filesystem image doesn't actually
occupy 2MB on "disk" (actually RAM, due to tmpfs). On 4K kernels the
ext4.img uses ~84KB of actual space (according to du), which just fits.
However on 64K PAGE_SIZE kernels the ext4.img takes at least 256KB, which
is too large to fit in the tmpfs, hence the errors.

It seems fraught to rely on the ext4.img taking less space on disk than
the allocated size, so instead create the tmpfs with a size of 2MB. With
that all 21 tests pass on 64K PAGE_SIZE kernels.

Fixes: 01eadc8dd96d ("tests: add mount_setattr() selftests")
Signed-off-by: Michael Ellerman
Link: https://lore.kernel.org/r/20241115134114.1219555-1-mpe@ellerman.id.au
Reviewed-by: Ritesh Harjani (IBM)
Signed-off-by: Christian Brauner
---
 tools/testing/selftests/mount_setattr/mount_setattr_test.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/mount_setattr/mount_setattr_test.c b/tools/testing/selftests/mount_setattr/mount_setattr_test.c
index 68801e1a9ec2d1..70f65eb320a7a7 100644
--- a/tools/testing/selftests/mount_setattr/mount_setattr_test.c
+++ b/tools/testing/selftests/mount_setattr/mount_setattr_test.c
@@ -1026,7 +1026,7 @@ FIXTURE_SETUP(mount_setattr_idmapped)
                         "size=100000,mode=700"), 0);
 
         ASSERT_EQ(mount("testing", "/mnt", "tmpfs", MS_NOATIME | MS_NODEV,
-                        "size=100000,mode=700"), 0);
+                        "size=2m,mode=700"), 0);
 
         ASSERT_EQ(mkdir("/mnt/A", 0777), 0);
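For reference, a minimal standalone sketch (not part of the selftest; it
assumes root privileges and an existing /mnt directory) that performs the
same mount with the new "size=2m" option and reports the resulting
capacity via statfs(2), illustrating that the tmpfs size no longer
depends on PAGE_SIZE:

  #include <stdio.h>
  #include <sys/mount.h>
  #include <sys/statfs.h>

  int main(void)
  {
          struct statfs sfs;

          if (mount("testing", "/mnt", "tmpfs", MS_NOATIME | MS_NODEV,
                    "size=2m,mode=700") != 0) {
                  perror("mount");
                  return 1;
          }
          if (statfs("/mnt", &sfs) != 0) {
                  perror("statfs");
                  return 1;
          }
          /* tmpfs capacity = f_blocks * f_bsize (size rounds up to a
           * PAGE_SIZE multiple, so 2MB holds on both 4K and 64K kernels) */
          printf("tmpfs capacity: %llu bytes\n",
                 (unsigned long long)sfs.f_blocks *
                 (unsigned long long)sfs.f_bsize);
          umount("/mnt");
          return 0;
  }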
From eb65540aa9fc828e9f3f8b30d6dc37f1ed35263d Mon Sep 17 00:00:00 2001
From: Brian Foster
Date: Fri, 15 Nov 2024 09:59:31 -0500
Subject: [PATCH 02/12] iomap: warn on zero range of a post-eof folio

iomap_zero_range() uses buffered writes for manual zeroing, no longer
updates i_size for such writes, but is still explicitly called for
post-eof ranges. The historical use case for this is zeroing post-eof
speculative preallocation on extending writes from XFS. However, XFS
also recently changed to convert all post-eof delalloc mappings to
unwritten in the iomap_begin() handler, which means it now never
expects manual zeroing of post-eof mappings. In other words, all
post-eof mappings should be reported as holes or unwritten.

This is a subtle dependency that can be hard to detect if violated,
because the associated codepaths are likely to update i_size after
folio locks are dropped but before writeback occurs. For example, if
XFS reverts to some form of manual zeroing of post-eof blocks on write
extension, writeback of those zeroed folios will now race with the
presumed i_size update from the subsequent buffered write.

Since iomap_zero_range() can't correctly zero mappings beyond EOF
without updating i_size, warn if this ever occurs. This serves as a
minimal indication that if this use case is reintroduced by a
filesystem, iomap_zero_range() might need to reconsider i_size updates
for write extending use cases.

Signed-off-by: Brian Foster
Link: https://lore.kernel.org/r/20241115145931.535207-1-bfoster@redhat.com
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
Signed-off-by: Christian Brauner
---
 fs/iomap/buffered-io.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index ce73d2a48c1eb3..9ae71b9dafde22 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1397,6 +1397,8 @@ static loff_t iomap_zero_iter(struct iomap_iter *iter, bool *did_zero,
                 if (iter->iomap.flags & IOMAP_F_STALE)
                         break;
 
+                /* warn about zeroing folios beyond eof that won't write back */
+                WARN_ON_ONCE(folio_pos(folio) > iter->inode->i_size);
                 offset = offset_in_folio(folio, pos);
                 if (bytes > folio_size(folio) - offset)
                         bytes = folio_size(folio) - offset;

From 2519369201f36a6b2571bc672c4e48f88c6b68d6 Mon Sep 17 00:00:00 2001
From: Brian Foster
Date: Fri, 15 Nov 2024 15:01:53 -0500
Subject: [PATCH 03/12] iomap: reset per-iter state on non-error iter advances

iomap_iter_advance() zeroes the processed and mapping fields on every
non-error iteration except for the last expected iteration (i.e. return
0 expected to terminate the iteration loop). This appears to be
circumstantial, as nothing currently relies on these fields after the
final iteration. Therefore, to better facilitate iomap_iter reuse in
subsequent patches, update iomap_iter_advance() to always reset
per-iteration state on successful completion.

Signed-off-by: Brian Foster
Link: https://lore.kernel.org/r/20241115200155.593665-2-bfoster@redhat.com
Reviewed-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
Signed-off-by: Christian Brauner
---
 fs/iomap/iter.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/fs/iomap/iter.c b/fs/iomap/iter.c
index 79a0614eaab777..3790918646af76 100644
--- a/fs/iomap/iter.c
+++ b/fs/iomap/iter.c
@@ -22,26 +22,25 @@
 static inline int iomap_iter_advance(struct iomap_iter *iter)
 {
         bool stale = iter->iomap.flags & IOMAP_F_STALE;
+        int ret = 1;
 
         /* handle the previous iteration (if any) */
         if (iter->iomap.length) {
                 if (iter->processed < 0)
                         return iter->processed;
-                if (!iter->processed && !stale)
-                        return 0;
                 if (WARN_ON_ONCE(iter->processed > iomap_length(iter)))
                         return -EIO;
                 iter->pos += iter->processed;
                 iter->len -= iter->processed;
-                if (!iter->len)
-                        return 0;
+                if (!iter->len || (!iter->processed && !stale))
+                        ret = 0;
         }
 
-        /* clear the state for the next iteration */
+        /* clear the per iteration state */
         iter->processed = 0;
         memset(&iter->iomap, 0, sizeof(iter->iomap));
         memset(&iter->srcmap, 0, sizeof(iter->srcmap));
-        return 1;
+        return ret;
 }
 
 static inline void iomap_iter_done(struct iomap_iter *iter)
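As a side illustration of the new semantics, a userspace toy model (the
struct and flags below are stand-ins, not kernel types) showing that
per-iteration state is now reset even on the terminating return 0:

  #include <stdio.h>

  struct toy_iter {
          long long pos, len, processed;
          int mapped;     /* stand-in for iter->iomap.length != 0 */
          int stale;      /* stand-in for IOMAP_F_STALE */
  };

  /* Mirrors the new control flow: error state is preserved, but every
   * non-error return, including the terminating return 0, resets the
   * per-iteration fields. */
  static int toy_iter_advance(struct toy_iter *iter)
  {
          int ret = 1;

          if (iter->mapped) {
                  if (iter->processed < 0)
                          return (int)iter->processed; /* error: keep state */
                  iter->pos += iter->processed;
                  iter->len -= iter->processed;
                  if (!iter->len || (!iter->processed && !iter->stale))
                          ret = 0;
          }
          /* reset per-iteration state even on the final iteration */
          iter->processed = 0;
          iter->mapped = iter->stale = 0;
          return ret;
  }

  int main(void)
  {
          struct toy_iter it = { .pos = 0, .len = 8, .processed = 8,
                                 .mapped = 1 };
          int ret = toy_iter_advance(&it);

          /* prints "ret=0 processed=0": state cleared on the last pass too */
          printf("ret=%d processed=%lld\n", ret, it.processed);
          return 0;
  }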
From 889ac75787cbeb129df7faf917ce7d53a32ea696 Mon Sep 17 00:00:00 2001
From: Brian Foster
Date: Fri, 15 Nov 2024 15:01:54 -0500
Subject: [PATCH 04/12] iomap: lift zeroed mapping handling into iomap_zero_range()

In preparation for special handling of subranges, lift the zeroed
mapping logic from the iterator into the caller. Since this puts the
pagecache dirty check and flushing in the same place, streamline the
comments a bit as well.

Signed-off-by: Brian Foster
Link: https://lore.kernel.org/r/20241115200155.593665-3-bfoster@redhat.com
Reviewed-by: Darrick J. Wong
Signed-off-by: Christian Brauner
---
 fs/iomap/buffered-io.c | 66 +++++++++++++++---------------------------
 1 file changed, 24 insertions(+), 42 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 9ae71b9dafde22..cb5aa3cded0e5f 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1350,40 +1350,12 @@ static inline int iomap_zero_iter_flush_and_stale(struct iomap_iter *i)
         return filemap_write_and_wait_range(mapping, i->pos, end);
 }
 
-static loff_t iomap_zero_iter(struct iomap_iter *iter, bool *did_zero,
-                bool *range_dirty)
+static loff_t iomap_zero_iter(struct iomap_iter *iter, bool *did_zero)
 {
-        const struct iomap *srcmap = iomap_iter_srcmap(iter);
         loff_t pos = iter->pos;
         loff_t length = iomap_length(iter);
         loff_t written = 0;
 
-        /*
-         * We must zero subranges of unwritten mappings that might be dirty in
-         * pagecache from previous writes. We only know whether the entire range
-         * was clean or not, however, and dirty folios may have been written
-         * back or reclaimed at any point after mapping lookup.
-         *
-         * The easiest way to deal with this is to flush pagecache to trigger
-         * any pending unwritten conversions and then grab the updated extents
-         * from the fs. The flush may change the current mapping, so mark it
-         * stale for the iterator to remap it for the next pass to handle
-         * properly.
-         *
-         * Note that holes are treated the same as unwritten because zero range
-         * is (ab)used for partial folio zeroing in some cases. Hole backed
-         * post-eof ranges can be dirtied via mapped write and the flush
-         * triggers writeback time post-eof zeroing.
-         */
-        if (srcmap->type == IOMAP_HOLE || srcmap->type == IOMAP_UNWRITTEN) {
-                if (*range_dirty) {
-                        *range_dirty = false;
-                        return iomap_zero_iter_flush_and_stale(iter);
-                }
-                /* range is clean and already zeroed, nothing to do */
-                return length;
-        }
-
         do {
                 struct folio *folio;
                 int status;
@@ -1435,24 +1407,34 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
         bool range_dirty;
 
         /*
-         * Zero range wants to skip pre-zeroed (i.e. unwritten) mappings, but
-         * pagecache must be flushed to ensure stale data from previous
-         * buffered writes is not exposed. A flush is only required for certain
-         * types of mappings, but checking pagecache after mapping lookup is
-         * racy with writeback and reclaim.
+         * Zero range can skip mappings that are zero on disk so long as
+         * pagecache is clean. If pagecache was dirty prior to zero range, the
+         * mapping converts on writeback completion and so must be zeroed.
          *
-         * Therefore, check the entire range first and pass along whether any
-         * part of it is dirty. If so and an underlying mapping warrants it,
-         * flush the cache at that point. This trades off the occasional false
-         * positive (and spurious flush, if the dirty data and mapping don't
-         * happen to overlap) for simplicity in handling a relatively uncommon
-         * situation.
+         * The simplest way to deal with this across a range is to flush
+         * pagecache and process the updated mappings. To avoid an unconditional
+         * flush, check pagecache state and only flush if dirty and the fs
+         * returns a mapping that might convert on writeback.
          */
         range_dirty = filemap_range_needs_writeback(inode->i_mapping,
                                         pos, pos + len - 1);
+        while ((ret = iomap_iter(&iter, ops)) > 0) {
+                const struct iomap *srcmap = iomap_iter_srcmap(&iter);
 
-        while ((ret = iomap_iter(&iter, ops)) > 0)
-                iter.processed = iomap_zero_iter(&iter, did_zero, &range_dirty);
+                if (srcmap->type == IOMAP_HOLE ||
+                    srcmap->type == IOMAP_UNWRITTEN) {
+                        loff_t proc = iomap_length(&iter);
+
+                        if (range_dirty) {
+                                range_dirty = false;
+                                proc = iomap_zero_iter_flush_and_stale(&iter);
+                        }
+                        iter.processed = proc;
+                        continue;
+                }
+
+                iter.processed = iomap_zero_iter(&iter, did_zero);
+        }
         return ret;
 }
 EXPORT_SYMBOL_GPL(iomap_zero_range);
From fde4c4c3ec1c1590eb09f97f9525fa7dd8df8343 Mon Sep 17 00:00:00 2001
From: Brian Foster
Date: Fri, 15 Nov 2024 15:01:55 -0500
Subject: [PATCH 05/12] iomap: elide flush from partial eof zero range

iomap zero range flushes pagecache in certain situations to determine
which parts of the range might require zeroing if dirty data is present
in pagecache. The kernel test robot recently reported a regression
associated with this flushing in the following stress-ng workload on
XFS:

  stress-ng --timeout 60 --times --verify --metrics --no-rand-seed --metamix 64

This workload involves repeated small, strided, extending writes. On
XFS, this produces a pattern of post-eof speculative preallocation,
conversion of preallocation from delalloc to unwritten, dirtying
pagecache over newly unwritten blocks, and then rinse and repeat from
the new EOF. This leads to repetitive flushing of the EOF folio via the
zero range call XFS uses for writes that start beyond current EOF.

To mitigate this problem, special case EOF block zeroing to prefer
zeroing the folio over a flush when the EOF folio is already dirty. To
do this, split out and open code handling of an unaligned start offset.
This brings most of the performance back by avoiding flushes on zero
range calls via write and truncate extension operations. The flush
doesn't occur in these situations because the entire range is post-eof
and therefore the folio that overlaps EOF is the only one in the range.

Signed-off-by: Brian Foster
Link: https://lore.kernel.org/r/20241115200155.593665-4-bfoster@redhat.com
Reviewed-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
Signed-off-by: Christian Brauner
---
 fs/iomap/buffered-io.c | 28 ++++++++++++++++++++++++----
 1 file changed, 24 insertions(+), 4 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index cb5aa3cded0e5f..0708be7767408d 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1403,6 +1403,10 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
                 .len            = len,
                 .flags          = IOMAP_ZERO,
         };
+        struct address_space *mapping = inode->i_mapping;
+        unsigned int blocksize = i_blocksize(inode);
+        unsigned int off = pos & (blocksize - 1);
+        loff_t plen = min_t(loff_t, len, blocksize - off);
         int ret;
         bool range_dirty;
 
@@ -1412,12 +1416,28 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
          * mapping converts on writeback completion and so must be zeroed.
          *
          * The simplest way to deal with this across a range is to flush
-         * pagecache and process the updated mappings. To avoid an unconditional
-         * flush, check pagecache state and only flush if dirty and the fs
-         * returns a mapping that might convert on writeback.
+         * pagecache and process the updated mappings. To avoid excessive
+         * flushing on partial eof zeroing, special case it to zero the
+         * unaligned start portion if already dirty in pagecache.
+         */
+        if (off &&
+            filemap_range_needs_writeback(mapping, pos, pos + plen - 1)) {
+                iter.len = plen;
+                while ((ret = iomap_iter(&iter, ops)) > 0)
+                        iter.processed = iomap_zero_iter(&iter, did_zero);
+
+                iter.len = len - (iter.pos - pos);
+                if (ret || !iter.len)
+                        return ret;
+        }
+
+        /*
+         * To avoid an unconditional flush, check pagecache state and only flush
+         * if dirty and the fs returns a mapping that might convert on
+         * writeback.
          */
         range_dirty = filemap_range_needs_writeback(inode->i_mapping,
-                                        pos, pos + len - 1);
+                                        iter.pos, iter.pos + iter.len - 1);
         while ((ret = iomap_iter(&iter, ops)) > 0) {
                 const struct iomap *srcmap = iomap_iter_srcmap(&iter);
 
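A worked example of the partial-block math above (the 4096-byte block
size and the pos/len values are illustrative assumptions only):

  #include <stdio.h>

  int main(void)
  {
          unsigned int blocksize = 4096;          /* assumed i_blocksize() */
          long long pos = 6144, len = 10000;      /* example zero range */
          unsigned int off = pos & (blocksize - 1);
          long long plen = len < blocksize - off ? len : blocksize - off;

          /* prints "off=2048 plen=2048": only the unaligned head of the
           * range falls under the new dirty-folio special case */
          printf("off=%u plen=%lld\n", off, plen);
          return 0;
  }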
From a514e6f8f5caa24413731bed54b322bd34d918dd Mon Sep 17 00:00:00 2001
From: Thorsten Blum
Date: Fri, 28 Jun 2024 08:23:30 +0200
Subject: [PATCH 06/12] fscache: Remove duplicate included header

Remove duplicate included header file linux/uio.h

Reviewed-by: Simon Horman
Signed-off-by: Thorsten Blum
Link: https://lore.kernel.org/r/20240628062329.321162-2-thorsten.blum@toblux.com
Reviewed-by: Jeff Layton
Signed-off-by: Christian Brauner
---
 fs/netfs/fscache_io.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/fs/netfs/fscache_io.c b/fs/netfs/fscache_io.c
index 38637e5c9b5776..b1722a82c03d3d 100644
--- a/fs/netfs/fscache_io.c
+++ b/fs/netfs/fscache_io.c
@@ -9,7 +9,6 @@
 #include <linux/uio.h>
 #include <linux/bvec.h>
 #include <linux/slab.h>
-#include <linux/uio.h>
 #include "internal.h"
 
 /**

From d18516a0218da360dd27ae204acbd8d1440f6d6b Mon Sep 17 00:00:00 2001
From: Miklos Szeredi
Date: Wed, 20 Nov 2024 15:27:23 +0100
Subject: [PATCH 07/12] statmount: clean up unescaped option handling

Move common code from opt_array/opt_sec_array to a helper. This helper
does more than just unescape options, so rename to
statmount_opt_process().

Handle the corner case of just a single character in options.

Rename some local variables to better describe their function.

Signed-off-by: Miklos Szeredi
Link: https://lore.kernel.org/r/20241120142732.55210-1-mszeredi@redhat.com
Reviewed-by: Jeff Layton
Signed-off-by: Christian Brauner
---
 fs/namespace.c | 44 +++++++++++++++++++-------------------------
 1 file changed, 19 insertions(+), 25 deletions(-)

diff --git a/fs/namespace.c b/fs/namespace.c
index 6b0a17487d0f8e..17563d8e382b19 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -5057,21 +5057,32 @@ static int statmount_mnt_opts(struct kstatmount *s, struct seq_file *seq)
         return 0;
 }
 
-static inline int statmount_opt_unescape(struct seq_file *seq, char *buf_start)
+static inline int statmount_opt_process(struct seq_file *seq, size_t start)
 {
-        char *buf_end, *opt_start, *opt_end;
+        char *buf_end, *opt_end, *src, *dst;
         int count = 0;
 
+        if (unlikely(seq_has_overflowed(seq)))
+                return -EAGAIN;
+
         buf_end = seq->buf + seq->count;
+        dst = seq->buf + start;
+        src = dst + 1;  /* skip initial comma */
+
+        if (src >= buf_end) {
+                seq->count = start;
+                return 0;
+        }
+
         *buf_end = '\0';
-        for (opt_start = buf_start + 1; opt_start < buf_end; opt_start = opt_end + 1) {
-                opt_end = strchrnul(opt_start, ',');
+        for (; src < buf_end; src = opt_end + 1) {
+                opt_end = strchrnul(src, ',');
                 *opt_end = '\0';
-                buf_start += string_unescape(opt_start, buf_start, 0, UNESCAPE_OCTAL) + 1;
+                dst += string_unescape(src, dst, 0, UNESCAPE_OCTAL) + 1;
                 if (WARN_ON_ONCE(++count == INT_MAX))
                         return -EOVERFLOW;
         }
-        seq->count = buf_start - 1 - seq->buf;
+        seq->count = dst - 1 - seq->buf;
         return count;
 }
 
@@ -5080,24 +5091,16 @@ static int statmount_opt_array(struct kstatmount *s, struct seq_file *seq)
         struct vfsmount *mnt = s->mnt;
         struct super_block *sb = mnt->mnt_sb;
         size_t start = seq->count;
-        char *buf_start;
         int err;
 
         if (!sb->s_op->show_options)
                 return 0;
 
-        buf_start = seq->buf + start;
         err = sb->s_op->show_options(seq, mnt->mnt_root);
         if (err)
                 return err;
 
-        if (unlikely(seq_has_overflowed(seq)))
-                return -EAGAIN;
-
-        if (seq->count == start)
-                return 0;
-
-        err = statmount_opt_unescape(seq, buf_start);
+        err = statmount_opt_process(seq, start);
         if (err < 0)
                 return err;
 
@@ -5110,22 +5113,13 @@ static int statmount_opt_sec_array(struct kstatmount *s, struct seq_file *seq)
         struct vfsmount *mnt = s->mnt;
         struct super_block *sb = mnt->mnt_sb;
         size_t start = seq->count;
-        char *buf_start;
         int err;
 
-        buf_start = seq->buf + start;
-
         err = security_sb_show_options(seq, sb);
         if (!err)
                 return err;
 
-        if (unlikely(seq_has_overflowed(seq)))
-                return -EAGAIN;
-
-        if (seq->count == start)
-                return 0;
-
-        err = statmount_opt_unescape(seq, buf_start);
+        err = statmount_opt_process(seq, start);
         if (err < 0)
                 return err;
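A userspace sketch of the option walk above (the buffer contents are
invented for illustration, and the octal unescaping is elided for
brevity; the kernel applies string_unescape(..., UNESCAPE_OCTAL) at the
point where this sketch merely prints):

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          char buf[] = ",rw,relatime,size=2m"; /* leading comma, as in seq buf */
          char *end = buf + strlen(buf);
          int count = 0;

          for (char *src = buf + 1; src < end; ) { /* skip initial comma */
                  char *opt_end = strchr(src, ',');

                  if (!opt_end)
                          opt_end = end;  /* strchrnul() equivalent */
                  *opt_end = '\0';        /* NUL-terminate each option */
                  printf("opt[%d] = %s\n", count++, src);
                  src = opt_end + 1;
          }
          return 0;
  }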
From 3e5360167ac3bccdc032cdafa68d4904a8fa0c75 Mon Sep 17 00:00:00 2001
From: Christian Brauner
Date: Wed, 20 Nov 2024 09:17:25 +0100
Subject: [PATCH 08/12] statmount: fix security option retrieval

Fix the inverted check for security_sb_show_options().

Link: https://lore.kernel.org/r/c8eaa647-5d67-49b6-9401-705afcb7e4d7@stanley.mountain
Link: https://lore.kernel.org/r/20241120-verehren-rhabarber-83a11b297bcc@brauner
Fixes: aefff51e1c29 ("statmount: retrieve security mount options")
Reviewed-by: Jeff Layton
Reported-by: Dan Carpenter
Cc: stable@vger.kernel.org # mainline only
Signed-off-by: Christian Brauner
---
 fs/namespace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/namespace.c b/fs/namespace.c
index 17563d8e382b19..23e81c2a1e3fee 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -5116,7 +5116,7 @@ static int statmount_opt_sec_array(struct kstatmount *s, struct seq_file *seq)
         int err;
 
         err = security_sb_show_options(seq, sb);
-        if (!err)
+        if (err)
                 return err;
 
         err = statmount_opt_process(seq, start);
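A toy demonstration of the failure mode (the functions are stand-ins):
with the inverted check, the caller bails out on success (err == 0)
before any option processing happens, and keeps going after a real
error, which is exactly backwards from the usual "if (err) return err;"
convention:

  #include <stdio.h>

  static int show_options(void) { return 0; }   /* 0 == success */

  static int get_sec_options(int buggy)
  {
          int err = show_options();

          if (buggy ? !err : err) /* buggy variant bails out on success */
                  return err;
          return 42;              /* stand-in for "options processed" */
  }

  int main(void)
  {
          /* prints "buggy=0 fixed=42": the bug silently skipped the work */
          printf("buggy=%d fixed=%d\n", get_sec_options(1), get_sec_options(0));
          return 0;
  }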
From 088f294609d8f8816dc316681aef2eb61982e0da Mon Sep 17 00:00:00 2001
From: Jiri Olsa
Date: Fri, 22 Nov 2024 00:11:18 +0100
Subject: [PATCH 09/12] fs/proc/kcore.c: Clear ret value in read_kcore_iter after successful iov_iter_zero

If iov_iter_zero succeeds after a failed copy_from_kernel_nofault, we
need to reset the ret value to zero, otherwise it will be returned as
the final return value of read_kcore_iter.

This fixes objdump -d dump over /proc/kcore for me.

Cc: stable@vger.kernel.org
Cc: Alexander Gordeev
Fixes: 3d5854d75e31 ("fs/proc/kcore.c: allow translation of physical memory addresses")
Signed-off-by: Jiri Olsa
Link: https://lore.kernel.org/r/20241121231118.3212000-1-jolsa@kernel.org
Acked-by: Alexander Gordeev
Signed-off-by: Christian Brauner
---
 fs/proc/kcore.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 51446c59388f10..c82c408e573efd 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -600,6 +600,7 @@ static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
                                                 ret = -EFAULT;
                                                 goto out;
                                         }
+                                        ret = 0;
                                         /*
                                          * We know the bounce buffer is safe to copy from, so
                                          * use _copy_to_iter() directly.

From b6512519496e29270bca6b2df9baa3cc2d9d5356 Mon Sep 17 00:00:00 2001
From: Christoph Hellwig
Date: Fri, 22 Nov 2024 13:29:24 +0100
Subject: [PATCH 10/12] fs: require inode_owner_or_capable for F_SET_RW_HINT

F_SET_RW_HINT controls data placement in the file system and/or device
and should not be available to everyone who can read a given file.

Signed-off-by: Christoph Hellwig
Link: https://lore.kernel.org/r/20241122122931.90408-2-hch@lst.de
Reviewed-by: Jan Kara
Signed-off-by: Christian Brauner
---
 fs/fcntl.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/fs/fcntl.c b/fs/fcntl.c
index ac77dd912412ef..49884fa3c81d2e 100644
--- a/fs/fcntl.c
+++ b/fs/fcntl.c
@@ -374,6 +374,9 @@ static long fcntl_set_rw_hint(struct file *file, unsigned int cmd,
         u64 __user *argp = (u64 __user *)arg;
         u64 hint;
 
+        if (!inode_owner_or_capable(file_mnt_idmap(file), inode))
+                return -EPERM;
+
         if (copy_from_user(&hint, argp, sizeof(hint)))
                 return -EFAULT;
         if (!rw_hint_valid(hint))
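From userspace, the effect of the new check can be observed with a
sketch like the following (the path is a placeholder, and the fallback
constants are my reading of the <linux/fcntl.h> values): setting a write
hint on a file opened read-only now fails with EPERM unless the caller
owns the inode or holds CAP_FOWNER:

  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>

  #ifndef F_SET_RW_HINT
  #define F_SET_RW_HINT (1024 + 12)     /* F_LINUX_SPECIFIC_BASE + 12 */
  #endif
  #ifndef RWH_WRITE_LIFE_SHORT
  #define RWH_WRITE_LIFE_SHORT 2
  #endif

  int main(void)
  {
          uint64_t hint = RWH_WRITE_LIFE_SHORT;
          int fd = open("/tmp/testfile", O_RDONLY); /* placeholder path */

          if (fd < 0) {
                  perror("open");
                  return 1;
          }
          /* fails with EPERM if the caller is not the owner/capable */
          if (fcntl(fd, F_SET_RW_HINT, &hint) != 0)
                  perror("F_SET_RW_HINT");
          return 0;
  }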
From c66f759832a83cb273ba5a55c66dcc99384efa74 Mon Sep 17 00:00:00 2001
From: Randy Dunlap
Date: Mon, 25 Nov 2024 13:50:21 -0800
Subject: [PATCH 11/12] fs_parser: update mount_api doc to match function signature

Add the missing 'name' parameter to the mount_api documentation for
fs_validate_description().

Fixes: 96cafb9ccb15 ("fs_parser: remove fs_parameter_description name field")
Signed-off-by: Randy Dunlap
Link: https://lore.kernel.org/r/20241125215021.231758-1-rdunlap@infradead.org
Cc: Eric Sandeen
Cc: David Howells
Cc: Al Viro
Cc: Christian Brauner
Cc: Jan Kara
Cc: Jonathan Corbet
Cc: linux-doc@vger.kernel.org
Signed-off-by: Christian Brauner
---
 Documentation/filesystems/mount_api.rst | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/Documentation/filesystems/mount_api.rst b/Documentation/filesystems/mount_api.rst
index 317934c9e8fcac..d92c276f1575af 100644
--- a/Documentation/filesystems/mount_api.rst
+++ b/Documentation/filesystems/mount_api.rst
@@ -770,7 +770,8 @@ process the parameters it is given.
    * ::
 
-       bool fs_validate_description(const struct fs_parameter_description *desc);
+       bool fs_validate_description(const char *name,
+                                    const struct fs_parameter_description *desc);
 
      This performs some validation checks on a parameter description.  It
      returns true if the description is good and false if it is not.  It will

From 2957fa4931a3b658d8e54eda9439d4c57967e8ad Mon Sep 17 00:00:00 2001
From: Amir Goldstein
Date: Tue, 26 Nov 2024 15:53:42 +0100
Subject: [PATCH 12/12] fs/backing_file: fix wrong argument in callback

Commit 48b50624aec4 ("backing-file: clean up the API") unintentionally
changed the argument in the ->accessed() callback from the user file to
the backing file.

Fixes: 48b50624aec4 ("backing-file: clean up the API")
Reported-by: syzbot+8d1206605b05ca9a0e6a@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/linux-unionfs/67447b3c.050a0220.1cc393.0085.GAE@google.com/
Tested-by: syzbot+8d1206605b05ca9a0e6a@syzkaller.appspotmail.com
Signed-off-by: Amir Goldstein
Link: https://lore.kernel.org/r/20241126145342.364869-1-amir73il@gmail.com
Acked-by: Miklos Szeredi
Signed-off-by: Christian Brauner
---
 fs/backing-file.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/backing-file.c b/fs/backing-file.c
index 526ddb4d6f764e..cbdad8b684742e 100644
--- a/fs/backing-file.c
+++ b/fs/backing-file.c
@@ -327,6 +327,7 @@ int backing_file_mmap(struct file *file, struct vm_area_struct *vma,
                       struct backing_file_ctx *ctx)
 {
         const struct cred *old_cred;
+        struct file *user_file = vma->vm_file;
         int ret;
 
         if (WARN_ON_ONCE(!(file->f_mode & FMODE_BACKING)))
@@ -342,7 +343,7 @@ int backing_file_mmap(struct file *file, struct vm_area_struct *vma,
         revert_creds_light(old_cred);
 
         if (ctx->accessed)
-                ctx->accessed(vma->vm_file);
+                ctx->accessed(user_file);
 
         return ret;
 }
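The fix follows the general capture-before-mutation pattern; a
standalone toy (the types below are stand-ins, not the kernel's) of why
the copy must be taken before calling a helper that may reassign the
field, as the underlying mmap may replace vma->vm_file with the backing
file:

  #include <stdio.h>

  struct toy_file { const char *name; };
  struct toy_vma { struct toy_file *vm_file; };

  static struct toy_file backing = { "backing" };

  static void do_mmap(struct toy_vma *vma)
  {
          vma->vm_file = &backing;      /* helper reassigns the field */
  }

  static void accessed(struct toy_file *f)
  {
          printf("accessed(%s)\n", f->name);
  }

  int main(void)
  {
          struct toy_file user = { "user" };
          struct toy_vma vma = { &user };
          struct toy_file *user_file = vma.vm_file; /* save up front */

          do_mmap(&vma);
          accessed(user_file);  /* prints "user", not "backing" */
          return 0;
  }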