fix request size too large when use direct io #191

Merged
merged 1 commit into cloud-hypervisor:master on Apr 11, 2024

Conversation

zyfjeff
Contributor

@zyfjeff commented Apr 11, 2024

The FUSE direct I/O call path is as follows:

fuse_read_fill+0xa8/0xb0
fuse_send_read+0x3f/0xb0
fuse_direct_io+0x34a/0x5f0
__fuse_direct_read+0x4e/0x70
fuse_file_read_iter+0x9e/0x140
new_sync_read+0xde/0x120
__vfs_read+0x27/0x40
vfs_read+0x94/0x190
ksys_read+0x4e/0xd0

Under the direct I/O path, FUSE initiates a request whose size is determined by the combination of the user-supplied buffer size and the max_read mount parameter:

/* In fuse_direct_io(): the request size is capped by max_read (reads) or max_write (writes). */
size_t nmax = write ? fc->max_write : fc->max_read;
size_t count = iov_iter_count(iter);
size_t nbytes = min(count, nmax);
nres = fuse_send_read(req, io, pos, nbytes, owner);

So there is a problem with the request-size check in the code: we always compare the request size against MAX_BUFFER_SIZE, but the maximum request size actually depends on max_read. In the virtiofs scenario max_read defaults to UINT_MAX, and in the FUSE scenario max_read can be adjusted via the mount parameter.
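
For illustration only, here is a minimal Rust sketch of the kind of fixed-limit check described above; the function name, error handling, and the 1 MiB value are assumptions for the sketch, only the comparison against MAX_BUFFER_SIZE mirrors the behaviour in question.

use std::io::{Error, ErrorKind};

/// Illustrative fixed limit; the real constant and its value are defined in
/// the fuse-backend-rs source.
const MAX_BUFFER_SIZE: u32 = 1 << 20; // 1 MiB, assumed for this sketch

/// Sketch of the problematic validation: every request is compared against a
/// fixed constant, even though the kernel sizes direct-I/O reads by max_read.
fn check_request_size(len: u32) -> Result<(), Error> {
    if len > MAX_BUFFER_SIZE {
        // With max_read left at UINT_MAX (the virtiofs default) or set above
        // the buffer size, a legitimate direct-I/O read hits this error path.
        return Err(Error::from(ErrorKind::InvalidInput));
    }
    Ok(())
}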

The current implementation of fuse-backend-rs uses a fixed buffer to store the FUSE response, and this buffer has a fixed default size. On the direct I/O path, however, the kernel may issue a request larger than this buffer, so the buffer cannot hold the read data and the read fails. We therefore limit max_read to the length of our buffer so that the FUSE kernel driver never sends requests that exceed it. In the virtiofs scenario max_read cannot be adjusted and defaults to UINT_MAX, but that is not a concern: there the buffer is allocated by the kernel driver and we only use it to fill the response, so no adjustment is needed.
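
As a rough illustration of the mount-side fix, here is a sketch assuming a hypothetical helper that assembles the /dev/fuse mount options; the option-string layout is the standard FUSE one, but the function and constant names are not the actual fuse-backend-rs API.

/// Illustrative size of the fixed reply buffer; the real constant and its
/// value live in the fuse-backend-rs source.
const MAX_BUFFER_SIZE: u32 = 1 << 20; // 1 MiB, assumed for this sketch

/// Hypothetical helper that builds mount options for a /dev/fuse session.
/// Passing max_read caps the size of direct-I/O read requests the kernel
/// will send, so every reply fits in the fixed buffer. virtiofs needs no
/// such cap: the guest driver allocates the buffers that the server only
/// fills with the response.
fn build_mount_options(fd: i32, uid: u32, gid: u32) -> String {
    format!(
        "fd={},rootmode=40000,user_id={},group_id={},max_read={}",
        fd, uid, gid, MAX_BUFFER_SIZE
    )
}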

Signed-off-by: zyfjeff <zyfjeff@linux.alibaba.com>
@eryugey merged commit de3231b into cloud-hypervisor:master on Apr 11, 2024
10 checks passed