On Thu, Feb 28, 2019 at 09:10:15AM +0100, Miklos Szeredi wrote:
On Wed, Feb 27, 2019 at 9:39 PM Kirill Smelkov <kirr@nexedi.com> wrote:
I more or less agree with this statement. However, can we please make the breakage explicitly visible with an error instead of letting it manifest as hard-to-debug hangs/deadlocks? For example, sys_read < max_write -> error instead of getting stuck. And if notify_retrieve requests a buffer larger than max_write -> error, or cut to max_write, but don't return OK when we know we will never send what was requested to the filesystem, even if it uses max_write-sized reads. What is the point of breaking in a hard-to-diagnose way when we can make the breakage show itself explicitly? Would a patch for such behaviour be accepted?
Sure, if it only adds a couple of lines. Adding more than, say, ten lines for such a non-bug fix is definitely excessive.
Ok, thanks. Please consider applying the following patch. (It's a bit of a pity to hear the problem is not considered a bug, but anyway.)
I will also send the second patch as a separate mail, since I could not make `git am --scissors` apply several patches extracted from one mail successfully.
Thanks, Kirill
---- 8< ----
From: Kirill Smelkov <kirr@nexedi.com>
Date: Thu, 28 Feb 2019 13:06:18 +0300
Subject: [PATCH 1/2] fuse: retrieve: cap requested size to negotiated max_write
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
During the initialization phase, the FUSE filesystem server and the kernel client negotiate the maximum write size the client will ever issue. Correspondingly, the filesystem server then queues sys_read calls to read requests with buffer capacity large enough to carry the request header + that many max_write bytes. A filesystem server is free to set its max_write anywhere in the range [1·page, fc->max_pages·page]. In particular, go-fuse[2] sets max_write to 64K by default, whereas the default fc->max_pages corresponds to 128K. Libfuse also allows users to configure max_write, but by default presets it to the possible maximum.
If max_write < fc->max_pages·page, and the NOTIFY_RETRIEVE handler allows retrieving more than max_write bytes, the corresponding prepared NOTIFY_REPLY will be thrown away by fuse_dev_do_read, because the filesystem server, in full correspondence with the server/client contract, will only be queuing sys_read with ~max_write buffer capacity, and fuse_dev_do_read throws away requests that cannot fit into the server's request buffer. In turn, the filesystem server could get stuck waiting indefinitely for the NOTIFY_REPLY, since the NOTIFY_RETRIEVE handler returned OK, which clients understand to mean that the NOTIFY_REPLY was queued and will be sent back.
-> Cap the requested size to the negotiated max_write to avoid the problem. This aligns with the way the NOTIFY_RETRIEVE handler already works: it unconditionally caps the requested retrieve size to fuse_conn->max_pages. Thus it should not hurt NOTIFY_RETRIEVE semantics if we return less data than was originally requested.
Please see [1] for the context in which the problem of a stuck filesystem was hit for real, how the situation was traced, and for a more involved patch that did not make it into the tree.
[1] https://marc.info/?l=linux-fsdevel&m=155057023600853&w=2
[2] https://github.com/hanwen/go-fuse
Signed-off-by: Kirill Smelkov <kirr@nexedi.com>
Cc: Han-Wen Nienhuys <hanwen@google.com>
Cc: Jakob Unterwurzacher <jakobunt@gmail.com>
Cc: stable@vger.kernel.org # v2.6.36+
---
 fs/fuse/dev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
index 8a63e52785e9..38e94bc43053 100644
--- a/fs/fuse/dev.c
+++ b/fs/fuse/dev.c
@@ -1749,7 +1749,7 @@ static int fuse_retrieve(struct fuse_conn *fc, struct inode *inode,
 	offset = outarg->offset & ~PAGE_MASK;
 	file_size = i_size_read(inode);
 
-	num = outarg->size;
+	num = min(outarg->size, fc->max_write);
 	if (outarg->offset > file_size)
 		num = 0;
 	else if (outarg->offset + num > file_size)