Hello
At first, I thought that the proposed system call is capable of reading *multiple* small files using a single system call - which would help increase HDD/SSD queue utilization and increase IOPS (I/O operations per second) - but that isn't the case and the proposed system call can read just a single file.
Without the ability to read multiple small files using a single system call, it is impossible to increase IOPS (unless an application is using multiple reader threads or somehow instructs the kernel to prefetch multiple files into memory).
While you are at it, why not also add a readfiles system call to read multiple, presumably small, files? The initial unoptimized implementation of the readfiles syscall can simply call readfile sequentially.
Sincerely Jan (atomsymbol)
On Sun, Jul 05, 2020 at 04:06:22AM +0200, Jan Ziak wrote:
Hello
At first, I thought that the proposed system call is capable of reading *multiple* small files using a single system call - which would help increase HDD/SSD queue utilization and increase IOPS (I/O operations per second) - but that isn't the case and the proposed system call can read just a single file.
Without the ability to read multiple small files using a single system call, it is impossible to increase IOPS (unless an application is using multiple reader threads or somehow instructs the kernel to prefetch multiple files into memory).
What API would you use for this?
ssize_t readfiles(int dfd, char **files, void **bufs, size_t *lens);
I pretty much hate this interface, so I hope you have something better in mind.
On Sun, Jul 5, 2020 at 4:16 AM Matthew Wilcox willy@infradead.org wrote:
On Sun, Jul 05, 2020 at 04:06:22AM +0200, Jan Ziak wrote:
Hello
At first, I thought that the proposed system call is capable of reading *multiple* small files using a single system call - which would help increase HDD/SSD queue utilization and increase IOPS (I/O operations per second) - but that isn't the case and the proposed system call can read just a single file.
Without the ability to read multiple small files using a single system call, it is impossible to increase IOPS (unless an application is using multiple reader threads or somehow instructs the kernel to prefetch multiple files into memory).
What API would you use for this?
ssize_t readfiles(int dfd, char **files, void **bufs, size_t *lens);
I pretty much hate this interface, so I hope you have something better in mind.
I am proposing the following:
struct readfile_t {
	int dirfd;
	const char *pathname;
	void *buf;
	size_t count;
	int flags;
	ssize_t retval;  // set by kernel
	int reserved;    // not used by kernel
};
int readfiles(struct readfile_t *requests, size_t count);
Returns zero if all requests succeeded, otherwise the returned value is non-zero (glibc wrapper: -1) and user-space is expected to check which requests have succeeded and which have failed. retval in readfile_t is set to what the single-file readfile syscall would return if it was called with the contents of the corresponding readfile_t struct.
The glibc library wrapper of this system call is expected to store the errno in the "reserved" field. Thus, a programmer using glibc sees:
struct readfile_t {
	int dirfd;
	const char *pathname;
	void *buf;
	size_t count;
	int flags;
	ssize_t retval;  // set by glibc (-1 on error)
	int errno;       // set by glibc if retval is -1
};
retval and errno in glibc's readfile_t are set to what the single-file glibc readfile would return (retval) and set (errno) if it was called with the contents of the corresponding readfile_t struct. In case of an error, glibc will pick one readfile_t which failed (such as: the 1st failed one) and use it to set glibc's errno.
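To make the intended semantics concrete, here is a minimal userspace emulation of the proposed readfiles() built on openat()/read()/close(). It is only an illustrative sketch (neither readfile nor readfiles exists in any released kernel or libc, and the flags handling is a guess); the errno field is renamed err because errno is a macro in C:

/*
 * Userspace emulation of the proposed readfiles() batch call.  This only
 * illustrates the semantics described above; it provides none of the
 * intended syscall-overhead savings, since it still issues three syscalls
 * per file.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

struct readfile_t {
	int dirfd;
	const char *pathname;
	void *buf;
	size_t count;
	int flags;       /* assumed to be open(2)-style flags */
	ssize_t retval;  /* bytes read, or -1 on error */
	int err;         /* errno value when retval is -1 */
};

static int readfiles(struct readfile_t *req, size_t count)
{
	int failures = 0;

	for (size_t i = 0; i < count; i++) {
		int fd = openat(req[i].dirfd, req[i].pathname,
				O_RDONLY | req[i].flags);
		if (fd < 0) {
			req[i].retval = -1;
			req[i].err = errno;
			failures++;
			continue;
		}
		req[i].retval = read(fd, req[i].buf, req[i].count);
		req[i].err = (req[i].retval < 0) ? errno : 0;
		if (req[i].retval < 0)
			failures++;
		close(fd);
	}
	return failures ? -1 : 0;
}

int main(void)
{
	static char bufs[2][4096];
	struct readfile_t req[2] = {
		{ AT_FDCWD, "/proc/self/stat",   bufs[0], sizeof(bufs[0]), 0 },
		{ AT_FDCWD, "/proc/self/status", bufs[1], sizeof(bufs[1]), 0 },
	};

	if (readfiles(req, 2) != 0)
		for (size_t i = 0; i < 2; i++)
			if (req[i].retval < 0)
				fprintf(stderr, "%s: %s\n", req[i].pathname,
					strerror(req[i].err));
	return 0;
}

A real kernel implementation would of course avoid the three syscalls per file that this emulation still pays.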
On Sun, Jul 05, 2020 at 04:46:04AM +0200, Jan Ziak wrote:
On Sun, Jul 5, 2020 at 4:16 AM Matthew Wilcox willy@infradead.org wrote:
On Sun, Jul 05, 2020 at 04:06:22AM +0200, Jan Ziak wrote:
Hello
At first, I thought that the proposed system call is capable of reading *multiple* small files using a single system call - which would help increase HDD/SSD queue utilization and increase IOPS (I/O operations per second) - but that isn't the case and the proposed system call can read just a single file.
Without the ability to read multiple small files using a single system call, it is impossible to increase IOPS (unless an application is using multiple reader threads or somehow instructs the kernel to prefetch multiple files into memory).
What API would you use for this?
ssize_t readfiles(int dfd, char **files, void **bufs, size_t *lens);
I pretty much hate this interface, so I hope you have something better in mind.
I am proposing the following:
struct readfile_t {
	int dirfd;
	const char *pathname;
	void *buf;
	size_t count;
	int flags;
	ssize_t retval;  // set by kernel
	int reserved;    // not used by kernel
};
int readfiles(struct readfile_t *requests, size_t count);
Returns zero if all requests succeeded, otherwise the returned value is non-zero (glibc wrapper: -1) and user-space is expected to check which requests have succeeded and which have failed. retval in readfile_t is set to what the single-file readfile syscall would return if it was called with the contents of the corresponding readfile_t struct.
You should probably take a look at io_uring. That has the level of complexity of this proposal and supports open/read/close along with many other opcodes.
On Sun, Jul 5, 2020 at 5:12 AM Matthew Wilcox willy@infradead.org wrote:
You should probably take a look at io_uring. That has the level of complexity of this proposal and supports open/read/close along with many other opcodes.
Then glibc can implement readfile using io_uring and there is no need for a new single-file readfile syscall.
On Sun, Jul 05, 2020 at 05:18:58AM +0200, Jan Ziak wrote:
On Sun, Jul 5, 2020 at 5:12 AM Matthew Wilcox willy@infradead.org wrote:
You should probably take a look at io_uring. That has the level of complexity of this proposal and supports open/read/close along with many other opcodes.
Then glibc can implement readfile using io_uring and there is no need for a new single-file readfile syscall.
It could, sure. But there's also a value in having a simple interface to accomplish a simple task. Your proposed API added a very complex interface to satisfy needs that clearly aren't part of the problem space that Greg is looking to address.
On Sun, Jul 5, 2020 at 5:27 AM Matthew Wilcox willy@infradead.org wrote:
On Sun, Jul 05, 2020 at 05:18:58AM +0200, Jan Ziak wrote:
On Sun, Jul 5, 2020 at 5:12 AM Matthew Wilcox willy@infradead.org wrote:
You should probably take a look at io_uring. That has the level of complexity of this proposal and supports open/read/close along with many other opcodes.
Then glibc can implement readfile using io_uring and there is no need for a new single-file readfile syscall.
It could, sure. But there's also a value in having a simple interface to accomplish a simple task. Your proposed API added a very complex interface to satisfy needs that clearly aren't part of the problem space that Greg is looking to address.
I believe that we should look at the single-file readfile syscall from a performance viewpoint. If an application expects to read only a couple of small/medium-size files per second, then neither readfile nor readfiles makes sense in terms of improving performance. The benefits start to show up only when an application expects to read at least a hundred files per second. The "per second" part is important; it cannot be left out. Because readfile only improves performance for many-file reads, the syscall that applications performing many-file reads actually want is the multi-file version, not the single-file version.
I am not sure I understand why you think that a pointer to an array of readfile_t structures is very complex. If it was very complex then it would be a deep tree or a large graph.
On Sun, Jul 05, 2020 at 06:09:03AM +0200, Jan Ziak wrote:
On Sun, Jul 5, 2020 at 5:27 AM Matthew Wilcox willy@infradead.org wrote:
On Sun, Jul 05, 2020 at 05:18:58AM +0200, Jan Ziak wrote:
On Sun, Jul 5, 2020 at 5:12 AM Matthew Wilcox willy@infradead.org wrote:
You should probably take a look at io_uring. That has the level of complexity of this proposal and supports open/read/close along with many other opcodes.
Then glibc can implement readfile using io_uring and there is no need for a new single-file readfile syscall.
It could, sure. But there's also a value in having a simple interface to accomplish a simple task. Your proposed API added a very complex interface to satisfy needs that clearly aren't part of the problem space that Greg is looking to address.
I believe that we should look at the single-file readfile syscall from a performance viewpoint. If an application expects to read only a couple of small/medium-size files per second, then neither readfile nor readfiles makes sense in terms of improving performance. The benefits start to show up only when an application expects to read at least a hundred files per second. The "per second" part is important; it cannot be left out. Because readfile only improves performance for many-file reads, the syscall that applications performing many-file reads actually want is the multi-file version, not the single-file version.
It also is a measurable increase over reading just a single file. Here's my really really fast AMD system doing just one call to readfile vs. one call sequence to open/read/close:
$ ./readfile_speed -l 1
Running readfile test on file /sys/devices/system/cpu/vulnerabilities/meltdown for 1 loops...
Took 3410 ns
Running open/read/close test on file /sys/devices/system/cpu/vulnerabilities/meltdown for 1 loops...
Took 3780 ns
370ns isn't all that much, yes, but it is 370ns that could have been used for something else :)
Look at the overhead these days of a syscall using something like perf to see just how bad things have gotten on Intel-based systems (above was AMD which doesn't suffer all the syscall slowdowns, only some).
I'm going to have to now dig up my old rpi to get the stats on that thing, as well as some Intel boxes to show the problem I'm trying to help out with here. I'll post that for the next round of this patch series.
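For anyone wanting to reproduce the open/read/close half of the numbers above, a rough timing sketch using clock_gettime() could look like the following; the readfile() half needs the syscall number from this patch series, which is not in mainline, so it is left out:

/* Times a single open/read/close of a (by default) sysfs file.
 * Only a sketch; a proper benchmark would loop and average. */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] :
		"/sys/devices/system/cpu/vulnerabilities/meltdown";
	char buf[4096];
	struct timespec start, end;

	clock_gettime(CLOCK_MONOTONIC, &start);
	int fd = open(path, O_RDONLY);
	if (fd >= 0) {
		read(fd, buf, sizeof(buf));
		close(fd);
	}
	clock_gettime(CLOCK_MONOTONIC, &end);

	printf("open/read/close took %ld ns\n",
	       (long)((end.tv_sec - start.tv_sec) * 1000000000L +
		      (end.tv_nsec - start.tv_nsec)));
	return 0;
}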
I am not sure I understand why you think that a pointer to an array of readfile_t structures is very complex. If it was very complex then it would be a deep tree or a large graph.
Of course you can make it more complex if you want, but look at the existing tools that currently do many open/read/close sequences. The apis there don't lend themselves very well to knowing the larger list of files ahead of time. But I could be looking at the wrong thing, what userspace programs are you thinking of that could be easily converted into using something like this?
thanks,
greg k-h
On Sun, Jul 5, 2020 at 1:58 PM Greg KH gregkh@linuxfoundation.org wrote:
On Sun, Jul 05, 2020 at 06:09:03AM +0200, Jan Ziak wrote:
On Sun, Jul 5, 2020 at 5:27 AM Matthew Wilcox willy@infradead.org wrote:
On Sun, Jul 05, 2020 at 05:18:58AM +0200, Jan Ziak wrote:
On Sun, Jul 5, 2020 at 5:12 AM Matthew Wilcox willy@infradead.org wrote:
You should probably take a look at io_uring. That has the level of complexity of this proposal and supports open/read/close along with many other opcodes.
Then glibc can implement readfile using io_uring and there is no need for a new single-file readfile syscall.
It could, sure. But there's also a value in having a simple interface to accomplish a simple task. Your proposed API added a very complex interface to satisfy needs that clearly aren't part of the problem space that Greg is looking to address.
I believe that we should look at the single-file readfile syscall from a performance viewpoint. If an application expects to read only a couple of small/medium-size files per second, then neither readfile nor readfiles makes sense in terms of improving performance. The benefits start to show up only when an application expects to read at least a hundred files per second. The "per second" part is important; it cannot be left out. Because readfile only improves performance for many-file reads, the syscall that applications performing many-file reads actually want is the multi-file version, not the single-file version.
It also is a measurable increase over reading just a single file. Here's my really really fast AMD system doing just one call to readfile vs. one call sequence to open/read/close:
$ ./readfile_speed -l 1
Running readfile test on file /sys/devices/system/cpu/vulnerabilities/meltdown for 1 loops...
Took 3410 ns
Running open/read/close test on file /sys/devices/system/cpu/vulnerabilities/meltdown for 1 loops...
Took 3780 ns
370ns isn't all that much, yes, but it is 370ns that could have been used for something else :)
I am curious as to how you amortized or accounted for the fact that readfile() first needs to open the dirfd and then close it later.
From a performance viewpoint, only code where readfile() is called multiple times from within a loop makes sense:
dirfd = open();
for(...) {
	readfile(dirfd, ...);
}
close(dirfd);
Look at the overhead these days of a syscall using something like perf to see just how bad things have gotten on Intel-based systems (above was AMD which doesn't suffer all the syscall slowdowns, only some).
I'm going to have to now dig up my old rpi to get the stats on that thing, as well as some Intel boxes to show the problem I'm trying to help out with here. I'll post that for the next round of this patch series.
I am not sure I understand why you think that a pointer to an array of readfile_t structures is very complex. If it was very complex then it would be a deep tree or a large graph.
Of course you can make it more complex if you want, but look at the existing tools that currently do many open/read/close sequences. The apis there don't lend themselves very well to knowing the larger list of files ahead of time. But I could be looking at the wrong thing, what userspace programs are you thinking of that could be easily converted into using something like this?
Perhaps passing multiple filenames to tools via the command line is a valid and quite general use case where it is known ahead of time that multiple files are going to be read, such as "gcc *.o", which is commonly used to link shared libraries and executables. In the case of "gcc *.o", though, some of the object files are likely to be cached in memory and thus unlikely to need fetching from HDD/SSD. So the use case where we could see a speedup (if gcc were to use the multi-file readfiles() syscall) is when the programmer/Makefile invokes "gcc *.o" after rebuilding a small subset of the object files, while the object files that did not have to be rebuilt still have to come from HDD/SSD - basically, the first use of a project's Makefile on a particular day.
On Mon, Jul 06, 2020 at 08:07:46AM +0200, Jan Ziak wrote:
On Sun, Jul 5, 2020 at 1:58 PM Greg KH gregkh@linuxfoundation.org wrote:
It also is a measurable increase over reading just a single file. Here's my really really fast AMD system doing just one call to readfile vs. one call sequence to open/read/close:
$ ./readfile_speed -l 1
Running readfile test on file /sys/devices/system/cpu/vulnerabilities/meltdown for 1 loops...
Took 3410 ns
Running open/read/close test on file /sys/devices/system/cpu/vulnerabilities/meltdown for 1 loops...
Took 3780 ns
370ns isn't all that much, yes, but it is 370ns that could have been used for something else :)
I am curious as to how you amortized or accounted for the fact that readfile() first needs to open the dirfd and then close it later.
From a performance viewpoint, only code where readfile() is called multiple times from within a loop makes sense:
dirfd = open();
for(...) {
	readfile(dirfd, ...);
}
close(dirfd);
dirfd can be AT_FDCWD or if the path is absolute, dirfd will be ignored, so one does not have to open anything. It would be an optimisation if one wanted to read several files relating to the same process:
char dir[50];
sprintf(dir, "/proc/%d", pid);
dirfd = open(dir);
readfile(dirfd, "maps", ...);
readfile(dirfd, "stack", ...);
readfile(dirfd, "comm", ...);
readfile(dirfd, "environ", ...);
close(dirfd);
but one would not have to do that.
On Mon, Jul 06, 2020 at 08:07:46AM +0200, Jan Ziak wrote:
On Sun, Jul 5, 2020 at 1:58 PM Greg KH gregkh@linuxfoundation.org wrote:
On Sun, Jul 05, 2020 at 06:09:03AM +0200, Jan Ziak wrote:
On Sun, Jul 5, 2020 at 5:27 AM Matthew Wilcox willy@infradead.org wrote:
On Sun, Jul 05, 2020 at 05:18:58AM +0200, Jan Ziak wrote:
On Sun, Jul 5, 2020 at 5:12 AM Matthew Wilcox willy@infradead.org wrote:
You should probably take a look at io_uring. That has the level of complexity of this proposal and supports open/read/close along with many other opcodes.
Then glibc can implement readfile using io_uring and there is no need for a new single-file readfile syscall.
It could, sure. But there's also a value in having a simple interface to accomplish a simple task. Your proposed API added a very complex interface to satisfy needs that clearly aren't part of the problem space that Greg is looking to address.
I believe that we should look at the single-file readfile syscall from a performance viewpoint. If an application expects to read only a couple of small/medium-size files per second, then neither readfile nor readfiles makes sense in terms of improving performance. The benefits start to show up only when an application expects to read at least a hundred files per second. The "per second" part is important; it cannot be left out. Because readfile only improves performance for many-file reads, the syscall that applications performing many-file reads actually want is the multi-file version, not the single-file version.
It also is a measurable increase over reading just a single file. Here's my really really fast AMD system doing just one call to readfile vs. one call sequence to open/read/close:
$ ./readfile_speed -l 1
Running readfile test on file /sys/devices/system/cpu/vulnerabilities/meltdown for 1 loops...
Took 3410 ns
Running open/read/close test on file /sys/devices/system/cpu/vulnerabilities/meltdown for 1 loops...
Took 3780 ns
370ns isn't all that much, yes, but it is 370ns that could have been used for something else :)
I am curious as to how you amortized or accounted for the fact that readfile() first needs to open the dirfd and then close it later.
I do not open a dirfd, look at the benchmark code in the patch, it's all right there.
I can make it simpler, will do that for the next round as I want to make it really obvious for people to test on their hardware.
From a performance viewpoint, only code where readfile() is called multiple times from within a loop makes sense:
dirfd = open();
for(...) {
	readfile(dirfd, ...);
}
close(dirfd);
No need to open dirfd at all, my benchmarks did not do that, just pass in an absolute path if you don't want to. But if you want to, because you want to read a bunch of files, you can, faster than you could if you wanted to read a number of individual files without it :)
thanks,
greg k-h
On Sun, Jul 05, 2020 at 04:27:32AM +0100, Matthew Wilcox wrote:
On Sun, Jul 05, 2020 at 05:18:58AM +0200, Jan Ziak wrote:
On Sun, Jul 5, 2020 at 5:12 AM Matthew Wilcox willy@infradead.org wrote:
You should probably take a look at io_uring. That has the level of complexity of this proposal and supports open/read/close along with many other opcodes.
Then glibc can implement readfile using io_uring and there is no need for a new single-file readfile syscall.
It could, sure. But there's also a value in having a simple interface to accomplish a simple task. Your proposed API added a very complex interface to satisfy needs that clearly aren't part of the problem space that Greg is looking to address.
I disagree re: "aren't part of the problem space".
Reading small files from procfs was specifically called out in the rationale for the syscall.
In my experience you're rarely monitoring a single proc file in any situation where you care about the syscall overhead. You're monitoring many of them, and any serious effort to do this efficiently in a repeatedly sampled situation has cached the open fds and already uses pread() to simply restart from 0 on every sample and not repeatedly pay for the name lookup.
Basically anything optimally using the existing interfaces for sampling proc files needs a way to read multiple open file descriptors in a single syscall to move the needle.
This syscall doesn't provide that. It doesn't really give any advantage over what we can achieve already. It seems basically pointless to me, from a monitoring proc files perspective.
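For reference, the cached-fd sampling pattern described above looks roughly like this; the file list and sample interval are placeholders:

/* Open each proc file once, then re-read it from offset 0 with pread()
 * on every sample, so the name lookup is paid only once per file. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const char *paths[] = { "/proc/self/stat", "/proc/self/statm" };
	int fds[2];
	char buf[4096];

	for (int i = 0; i < 2; i++)
		fds[i] = open(paths[i], O_RDONLY);

	for (int sample = 0; sample < 10; sample++) {	/* sampling loop */
		for (int i = 0; i < 2; i++) {
			ssize_t n = pread(fds[i], buf, sizeof(buf), 0);
			if (n > 0)
				fwrite(buf, 1, n, stdout);
		}
		usleep(100000);		/* placeholder sample interval */
	}

	for (int i = 0; i < 2; i++)
		close(fds[i]);
	return 0;
}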
Regards, Vito Caputo
On Sun, Jul 05, 2020 at 01:07:14AM -0700, Vito Caputo wrote:
On Sun, Jul 05, 2020 at 04:27:32AM +0100, Matthew Wilcox wrote:
On Sun, Jul 05, 2020 at 05:18:58AM +0200, Jan Ziak wrote:
On Sun, Jul 5, 2020 at 5:12 AM Matthew Wilcox willy@infradead.org wrote:
You should probably take a look at io_uring. That has the level of complexity of this proposal and supports open/read/close along with many other opcodes.
Then glibc can implement readfile using io_uring and there is no need for a new single-file readfile syscall.
It could, sure. But there's also a value in having a simple interface to accomplish a simple task. Your proposed API added a very complex interface to satisfy needs that clearly aren't part of the problem space that Greg is looking to address.
I disagree re: "aren't part of the problem space".
Reading small files from procfs was specifically called out in the rationale for the syscall.
In my experience you're rarely monitoring a single proc file in any situation where you care about the syscall overhead. You're monitoring many of them, and any serious effort to do this efficiently in a repeatedly sampled situation has cached the open fds and already uses pread() to simply restart from 0 on every sample and not repeatedly pay for the name lookup.
That's your use case, but many other use cases are just "read a bunch of sysfs files in one shot". Examples of that are tools that monitor uevents and lots of hardware-information gathering tools.
Also, not all tools seem to be as smart as you think they are; look at util-linux for loads of the "open/read/close lots of files" pattern. I had a half-baked patch to convert it to use readfile, which I need to polish off and post with the next series to show how this can be used both to make userspace simpler and to use less cpu time.
Basically anything optimally using the existing interfaces for sampling proc files needs a way to read multiple open file descriptors in a single syscall to move the needle.
Is psutils using this type of interface, or do they constantly open different files?
What about fun tools like bashtop: https://github.com/aristocratos/bashtop.git which thankfully now relies on python's psutil package to parse proc in semi-sane ways, but that package does loads of constant open/read/close of proc files all the time from what I can tell.
And lots of people rely on python's psutil, right?
This syscall doesn't provide that. It doesn't really give any advantage over what we can achieve already. It seems basically pointless to me, from a monitoring proc files perspective.
What "good" monitoring programs do you suggest follow the pattern you recommend?
thanks,
greg k-h
On Sun, Jul 05, 2020 at 01:44:54PM +0200, Greg KH wrote:
On Sun, Jul 05, 2020 at 01:07:14AM -0700, Vito Caputo wrote:
On Sun, Jul 05, 2020 at 04:27:32AM +0100, Matthew Wilcox wrote:
On Sun, Jul 05, 2020 at 05:18:58AM +0200, Jan Ziak wrote:
On Sun, Jul 5, 2020 at 5:12 AM Matthew Wilcox willy@infradead.org wrote:
You should probably take a look at io_uring. That has the level of complexity of this proposal and supports open/read/close along with many other opcodes.
Then glibc can implement readfile using io_uring and there is no need for a new single-file readfile syscall.
It could, sure. But there's also a value in having a simple interface to accomplish a simple task. Your proposed API added a very complex interface to satisfy needs that clearly aren't part of the problem space that Greg is looking to address.
I disagree re: "aren't part of the problem space".
Reading small files from procfs was specifically called out in the rationale for the syscall.
In my experience you're rarely monitoring a single proc file in any situation where you care about the syscall overhead. You're monitoring many of them, and any serious effort to do this efficiently in a repeatedly sampled situation has cached the open fds and already uses pread() to simply restart from 0 on every sample and not repeatedly pay for the name lookup.
That's your use case, but many other use cases are just "read a bunch of sysfs files in one shot". Examples of that are tools that monitor uevents and lots of hardware-information gathering tools.
Also, not all tools seem to be as smart as you think they are; look at util-linux for loads of the "open/read/close lots of files" pattern. I had a half-baked patch to convert it to use readfile, which I need to polish off and post with the next series to show how this can be used both to make userspace simpler and to use less cpu time.
Basically anything optimally using the existing interfaces for sampling proc files needs a way to read multiple open file descriptors in a single syscall to move the needle.
Is psutils using this type of interface, or do they constantly open different files?
When I last checked, psutils was not an optimal example, nor did I suggest it was.
What about fun tools like bashtop: https://github.com/aristocratos/bashtop.git which thankfully now relies on python's psutil package to parse proc in semi-sane ways, but that package does loads of constant open/read/close of proc files all the time from what I can tell.
And lots of people rely on python's psutil, right?
If python's psutil is constantly reopening the same files in /proc, this is an argument to go improve python's psutil, especially if it's popular.
Your proposed syscall doesn't magically make everything suboptimally sampling proc more efficient. It still requires going out and modifying everything to use the new syscall.
To actually realize a gain with your new syscall comparable to what can be done using the existing interfaces, code that wasn't already reusing an open fd still requires a refactor to do the equivalent with your syscall, to eliminate the directory lookup on every sample.
At the end of the day, if you did all this work, you'd have code that only works on kernels with the new syscall, didn't enjoy a significant performance gain over what could have been achieved using the existing interfaces, and still required basically the same amount of work as optimizing for the existing interfaces would have. For what gain?
This syscall doesn't provide that. It doesn't really give any advantage over what we can achieve already. It seems basically pointless to me, from a monitoring proc files perspective.
What "good" monitoring programs do you suggest follow the pattern you recommend?
"Good" is not generally a word I'd use to describe software, surely that's not me you're quoting... but I assume you mean "optimal".
I'm sure sysprof is at least reusing open files when sampling proc, because we discussed the issue when Christian took over maintenance.
It appears he's currently using the lseek()->read() sequence:
https://gitlab.gnome.org/GNOME/sysprof/-/blob/master/src/libsysprof/sysprof-... https://gitlab.gnome.org/GNOME/sysprof/-/blob/master/src/libsysprof/sysprof-... https://gitlab.gnome.org/GNOME/sysprof/-/blob/master/src/libsysprof/sysprof-...
It'd be more efficient to just use pread() and lose the lseek(), at which point it'd be just a single pread() call per sample per proc file. Nothing your proposed syscall would improve upon, not that it'd be eligible for software that wants to work on existing kernels from distros like Debian and Centos/RHEL anyways.
If this were a conversation about providing something like a better scatter-gather interface akin to p{read,write}v but with the fd in the iovec, then we'd be talking about something very lucrative for proc sampling. But like you've said elsewhere in this thread, io_uring() may suffice as an alternative solution in that vein.
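Purely to illustrate the kind of interface meant here, a hypothetical multi-fd variant of preadv() might be sketched as follows; nothing like it exists in Linux, and every name and field below is invented:

/* Hypothetical sketch only: a preadv()-like call where each vector entry
 * carries its own file descriptor, so many already-open proc fds could be
 * sampled with a single syscall.  No such interface exists. */
#include <stddef.h>
#include <sys/types.h>

struct fd_iovec {
	int	fd;		/* open file descriptor to read from */
	void	*iov_base;	/* destination buffer */
	size_t	iov_len;	/* bytes to read */
	off_t	offset;		/* usually 0 for proc files */
	ssize_t	result;		/* filled in: bytes read or -errno */
};

int preadv_multi(struct fd_iovec *vec, unsigned int count);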
My personal interest in this topic stems from an experimental window manager I made, and still use, which monitors every descendant process for the X session at frequencies up to 60HZ. The code opens a bunch of proc files for every process, and keeps them open until the process goes away or falls out of scope. See the attachment for some idea of what /proc/$(pidof wm)/fd looks like. All those proc files are read at up to 60HZ continuously.
All top-like tools are really no different, and already shouldn't be reopening things on every sample. They should be fixed if not - with or without your syscall, it's equal effort, but the existing interfaces... exist.
Regards, Vito Caputo
On Jul 4, 2020, at 8:46 PM, Jan Ziak 0xe2.0x9a.0x9b@gmail.com wrote:
On Sun, Jul 5, 2020 at 4:16 AM Matthew Wilcox willy@infradead.org wrote:
On Sun, Jul 05, 2020 at 04:06:22AM +0200, Jan Ziak wrote:
Hello
At first, I thought that the proposed system call is capable of reading *multiple* small files using a single system call - which would help increase HDD/SSD queue utilization and increase IOPS (I/O operations per second) - but that isn't the case and the proposed system call can read just a single file.
Without the ability to read multiple small files using a single system call, it is impossible to increase IOPS (unless an application is using multiple reader threads or somehow instructs the kernel to prefetch multiple files into memory).
What API would you use for this?
ssize_t readfiles(int dfd, char **files, void **bufs, size_t *lens);
I pretty much hate this interface, so I hope you have something better in mind.
I am proposing the following:
struct readfile_t {
	int dirfd;
	const char *pathname;
	void *buf;
	size_t count;
	int flags;
	ssize_t retval;  // set by kernel
	int reserved;    // not used by kernel
};
If you are going to pass a struct from userspace to the kernel, it should not mix int and pointer types (which may be 64-bit values), so that there are no structure packing issues, like:
struct readfile {
	int dirfd;
	int flags;
	const char *pathname;
	void *buf;
	size_t count;
	ssize_t retval;
};
It would be better if "retval" was returned in "count", so that the structure fits nicely into 32 bytes on a 64-bit system, instead of being 40 bytes per entry, which adds up over many entries, like:
struct readfile {
	int dirfd;
	int flags;
	const char *pathname;
	void *buf;
	ssize_t count;	/* input: bytes requested, output: bytes read or -errno */
};
However, there is still an issue with passing pointers from userspace, since they may be 32-bit userspace pointers on a 64-bit kernel.
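A quick way to check the cost on a given ABI is to print the layouts directly; this sketch compares the original seven-member ordering with the five-member reordering shown just above (the struct names are local to the sketch; the numbers assume LP64 and will differ on 32-bit):

/* Prints the sizes of the two layouts discussed above. */
#include <stddef.h>
#include <stdio.h>
#include <sys/types.h>

struct readfile_orig {		/* original proposal: int and pointer members interleaved */
	int dirfd;
	const char *pathname;
	void *buf;
	size_t count;
	int flags;
	ssize_t retval;
	int reserved;
};

struct readfile_packed {	/* ints grouped first, retval folded into count */
	int dirfd;
	int flags;
	const char *pathname;
	void *buf;
	ssize_t count;		/* in: bytes requested, out: bytes read or -errno */
};

int main(void)
{
	printf("interleaved: %zu bytes, pathname at offset %zu (hole after dirfd)\n",
	       sizeof(struct readfile_orig),
	       offsetof(struct readfile_orig, pathname));
	printf("reordered:   %zu bytes\n", sizeof(struct readfile_packed));
	return 0;
}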
int readfiles(struct readfile_t *requests, size_t count);
It's not clear why count is a "size_t" since it is not a size. An unsigned int is fine here, since it should never be negative.
Returns zero if all requests succeeded, otherwise the returned value is non-zero (glibc wrapper: -1) and user-space is expected to check which requests have succeeded and which have failed. retval in readfile_t is set to what the single-file readfile syscall would return if it was called with the contents of the corresponding readfile_t struct.
The glibc library wrapper of this system call is expected to store the errno in the "reserved" field. Thus, a programmer using glibc sees:
struct readfile_t {
	int dirfd;
	const char *pathname;
	void *buf;
	size_t count;
	int flags;
	ssize_t retval;  // set by glibc (-1 on error)
	int errno;       // set by glibc if retval is -1
};
Why not just return the errno directly in "retval", or in "count" as proposed? That avoids further bloating the structure by another field.
retval and errno in glibc's readfile_t are set to what the single-file glibc readfile would return (retval) and set (errno) if it was called with the contents of the corresponding readfile_t struct. In case of an error, glibc will pick one readfile_t which failed (such as: the 1st failed one) and use it to set glibc's errno.
Cheers, Andreas
On Sun, Jul 5, 2020 at 8:32 AM Andreas Dilger adilger@dilger.ca wrote:
On Jul 4, 2020, at 8:46 PM, Jan Ziak 0xe2.0x9a.0x9b@gmail.com wrote:
On Sun, Jul 5, 2020 at 4:16 AM Matthew Wilcox willy@infradead.org wrote:
On Sun, Jul 05, 2020 at 04:06:22AM +0200, Jan Ziak wrote:
Hello
At first, I thought that the proposed system call is capable of reading *multiple* small files using a single system call - which would help increase HDD/SSD queue utilization and increase IOPS (I/O operations per second) - but that isn't the case and the proposed system call can read just a single file.
Without the ability to read multiple small files using a single system call, it is impossible to increase IOPS (unless an application is using multiple reader threads or somehow instructs the kernel to prefetch multiple files into memory).
What API would you use for this?
ssize_t readfiles(int dfd, char **files, void **bufs, size_t *lens);
I pretty much hate this interface, so I hope you have something better in mind.
I am proposing the following:
struct readfile_t {
	int dirfd;
	const char *pathname;
	void *buf;
	size_t count;
	int flags;
	ssize_t retval;  // set by kernel
	int reserved;    // not used by kernel
};
If you are going to pass a struct from userspace to the kernel, it should not mix int and pointer types (which may be 64-bit values), so that there are no structure packing issues, like:
struct readfile {
	int dirfd;
	int flags;
	const char *pathname;
	void *buf;
	size_t count;
	ssize_t retval;
};
It would be better if "retval" was returned in "count", so that the structure fits nicely into 32 bytes on a 64-bit system, instead of being 40 bytes per entry, which adds up over many entries, like:
I know what you mean and it is a valid point, but in my opinion it shouldn't (in most cases) be left to the programmer to decide what the binary layout of a data structure is - instead it should be left to an optimizing compiler to decide it. Just like code optimization, determining the physical layout of data structures can be subject to automatic optimizations as well. It is kind of unfortunate that in C/C++, and in many other statically compiled languages (even recent ones), the physical layout of all data structures is determined by the programmer rather than the compiler. Also, tagging fields as "input", "output", or both (the default) would be helpful in obtaining smaller sizes:
struct readfile_t {
	input int dirfd;
	input const char *pathname;
	input void *buf;
	input size_t count;
	input int flags;
	output ssize_t retval;  // set by kernel
	output int reserved;    // not used by kernel
};
int readfiles(struct readfile_t *requests, size_t count);
struct readfile_t r[10];
// Write r[i] inputs
int status = readfiles(r, nelem(r));
// Read r[i] outputs
A data-layout optimizing compiler should be able to determine that the optimal layout of readfile_t is UNION(INPUT: 2*int+2*pointer+1*size_t, OUTPUT: 1*ssize_t+1*int).
In the unfortunate case of the non-optimizing C language and if it is just a micro-optimization (optimizing readfile_t is a micro-optimization), it is better to leave the data structure in a form that is appropriate for being efficiently readable by programmers rather than to micro-optimize it and make it confusing to programmers.
struct readfile {
	int dirfd;
	int flags;
	const char *pathname;
	void *buf;
	ssize_t count;	/* input: bytes requested, output: bytes read or -errno */
};
However, there is still an issue with passing pointers from userspace, since they may be 32-bit userspace pointers on a 64-bit kernel.
int readfiles(struct readfile_t *requests, size_t count);
It's not clear why count is a "size_t" since it is not a size. An unsigned int is fine here, since it should never be negative.
Generally speaking, size_t reflects the size of the address space while unsigned int doesn't and therefore it is easier for unsigned int to overflow on very large data sets.
Returns zero if all requests succeeded, otherwise the returned value is non-zero (glibc wrapper: -1) and user-space is expected to check which requests have succeeded and which have failed. retval in readfile_t is set to what the single-file readfile syscall would return if it was called with the contents of the corresponding readfile_t struct.
The glibc library wrapper of this system call is expected to store the errno in the "reserved" field. Thus, a programmer using glibc sees:
struct readfile_t {
	int dirfd;
	const char *pathname;
	void *buf;
	size_t count;
	int flags;
	ssize_t retval;  // set by glibc (-1 on error)
	int errno;       // set by glibc if retval is -1
};
Why not just return the errno directly in "retval", or in "count" as proposed? That avoids further bloating the structure by another field.
retval and errno in glibc's readfile_t are set to what the single-file glibc readfile would return (retval) and set (errno) if it was called with the contents of the corresponding readfile_t struct. In case of an error, glibc will pick one readfile_t which failed (such as: the 1st failed one) and use it to set glibc's errno.
Cheers, Andreas
On Sun, Jul 05, 2020 at 09:25:39AM +0200, Jan Ziak wrote:
On Sun, Jul 5, 2020 at 8:32 AM Andreas Dilger adilger@dilger.ca wrote:
On Jul 4, 2020, at 8:46 PM, Jan Ziak 0xe2.0x9a.0x9b@gmail.com wrote:
On Sun, Jul 5, 2020 at 4:16 AM Matthew Wilcox willy@infradead.org wrote:
On Sun, Jul 05, 2020 at 04:06:22AM +0200, Jan Ziak wrote:
Hello
At first, I thought that the proposed system call is capable of reading *multiple* small files using a single system call - which would help increase HDD/SSD queue utilization and increase IOPS (I/O operations per second) - but that isn't the case and the proposed system call can read just a single file.
Without the ability to read multiple small files using a single system call, it is impossible to increase IOPS (unless an application is using multiple reader threads or somehow instructs the kernel to prefetch multiple files into memory).
What API would you use for this?
ssize_t readfiles(int dfd, char **files, void **bufs, size_t *lens);
I pretty much hate this interface, so I hope you have something better in mind.
I am proposing the following:
struct readfile_t {
	int dirfd;
	const char *pathname;
	void *buf;
	size_t count;
	int flags;
	ssize_t retval;  // set by kernel
	int reserved;    // not used by kernel
};
If you are going to pass a struct from userspace to the kernel, it should not mix int and pointer types (which may be 64-bit values), so that there are no structure packing issues, like:
struct readfile {
	int dirfd;
	int flags;
	const char *pathname;
	void *buf;
	size_t count;
	ssize_t retval;
};
It would be better if "retval" was returned in "count", so that the structure fits nicely into 32 bytes on a 64-bit system, instead of being 40 bytes per entry, which adds up over many entries, like:
I know what you mean and it is a valid point, but in my opinion it shouldn't (in most cases) be left to the programmer to decide what the binary layout of a data structure is - instead it should be left to an optimizing compiler to decide it.
We don't get that luxury when creating user/kernel apis in C, sorry.
I suggest using the pahole tool if you are interested in seeing the "best" way a structure can be laid out; it can perform that optimization for you so that you know how to fix your code.
thanks,
greg k-h
On Sun, Jul 05, 2020 at 04:06:22AM +0200, Jan Ziak wrote:
Hello
At first, I thought that the proposed system call is capable of reading *multiple* small files using a single system call - which would help increase HDD/SSD queue utilization and increase IOPS (I/O operations per second) - but that isn't the case and the proposed system call can read just a single file.
If you want to do this for multiple files, use io_uring; that's what it was designed for. I think Jens was going to be adding support for the open/read/close pattern to it as well, after some other more pressing features/fixes were finished.
Without the ability to read multiple small files using a single system call, it is impossible to increase IOPS (unless an application is using multiple reader threads or somehow instructs the kernel to prefetch multiple files into memory).
There's not much (but it is measurable) need to prefetch virtual files into memory first, which is primarily what this syscall is for (procfs, sysfs, securityfs, etc.). If you are dealing with real disks, then yes, the overhead of the syscall might be in the noise compared to the i/o path of the data.
While you are at it, why not also add a readfiles system call to read multiple, presumably small, files? The initial unoptimized implementation of the readfiles syscall can simply call readfile sequentially.
Again, that's what io_uring is for.
thanks,
greg k-h
Hi!
At first, I thought that the proposed system call is capable of reading *multiple* small files using a single system call - which would help increase HDD/SSD queue utilization and increase IOPS (I/O operations per second) - but that isn't the case and the proposed system call can read just a single file.
If you want to do this for multiple files, use io_uring; that's what it was designed for. I think Jens was going to be adding support for the open/read/close pattern to it as well, after some other more pressing features/fixes were finished.
What about... just using io_uring for single file, too? I'm pretty sure it can be wrapped in a library that is simple to use, avoiding need for new syscall. Pavel
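A rough sketch of what such a library wrapper could look like, built on liburing; this assumes a kernel and liburing recent enough for IORING_OP_OPENAT/READ/CLOSE (roughly 5.6+), and each step waits for its own completion, so it only demonstrates the shape of the wrapper rather than any performance win. Build with -luring:

/* readfile()-lookalike built on io_uring: openat, read and close are all
 * issued through the ring.  Error handling is minimal on purpose. */
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>

static ssize_t uring_readfile(struct io_uring *ring, int dirfd,
			      const char *path, void *buf, size_t count)
{
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int fd;
	ssize_t nread;

	/* open */
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_openat(sqe, dirfd, path, O_RDONLY, 0);
	io_uring_submit(ring);
	io_uring_wait_cqe(ring, &cqe);
	fd = cqe->res;
	io_uring_cqe_seen(ring, cqe);
	if (fd < 0)
		return fd;

	/* read */
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_read(sqe, fd, buf, count, 0);
	io_uring_submit(ring);
	io_uring_wait_cqe(ring, &cqe);
	nread = cqe->res;
	io_uring_cqe_seen(ring, cqe);

	/* close */
	sqe = io_uring_get_sqe(ring);
	io_uring_prep_close(sqe, fd);
	io_uring_submit(ring);
	io_uring_wait_cqe(ring, &cqe);
	io_uring_cqe_seen(ring, cqe);

	return nread;
}

int main(void)
{
	struct io_uring ring;
	char buf[4096];

	if (io_uring_queue_init(8, &ring, 0) < 0)
		return 1;

	ssize_t n = uring_readfile(&ring, AT_FDCWD, "/proc/self/stat",
				   buf, sizeof(buf));
	if (n > 0)
		fwrite(buf, 1, n, stdout);

	io_uring_queue_exit(&ring);
	return n < 0;
}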
On Tue, Jul 14, 2020 at 8:51 AM Pavel Machek pavel@denx.de wrote:
Hi!
At first, I thought that the proposed system call is capable of reading *multiple* small files using a single system call - which would help increase HDD/SSD queue utilization and increase IOPS (I/O operations per second) - but that isn't the case and the proposed system call can read just a single file.
If you want to do this for multiple files, use io_uring; that's what it was designed for. I think Jens was going to be adding support for the open/read/close pattern to it as well, after some other more pressing features/fixes were finished.
What about... just using io_uring for single file, too? I'm pretty sure it can be wrapped in a library that is simple to use, avoiding need for new syscall.
Just wondering: is there a plan to add strace support to io_uring? And I don't just mean the syscalls associated with io_uring, but tracing the ring itself.
I think that's quite important as io_uring becomes mainstream.
Thanks, Miklos
On 14/07/2020 11:07, Miklos Szeredi wrote:
On Tue, Jul 14, 2020 at 8:51 AM Pavel Machek pavel@denx.de wrote:
Hi!
At first, I thought that the proposed system call is capable of reading *multiple* small files using a single system call - which would help increase HDD/SSD queue utilization and increase IOPS (I/O operations per second) - but that isn't the case and the proposed system call can read just a single file.
If you want to do this for multiple files, use io_uring; that's what it was designed for. I think Jens was going to be adding support for the open/read/close pattern to it as well, after some other more pressing features/fixes were finished.
What about... just using io_uring for single file, too? I'm pretty sure it can be wrapped in a library that is simple to use, avoiding need for new syscall.
Just wondering: is there a plan to add strace support to io_uring? And I don't just mean the syscalls associated with io_uring, but tracing the ring itself.
What kind of support do you mean? io_uring is asynchronous in nature with all intrinsic tracing/debugging/etc. problems of such APIs. And there are a lot of handy trace points, are those not enough?
Though, this can be an interesting project to rethink how async APIs are worked with.
I think that's quite important as io_uring becomes mainstream.
On Tue, Jul 14, 2020 at 1:36 PM Pavel Begunkov asml.silence@gmail.com wrote:
On 14/07/2020 11:07, Miklos Szeredi wrote:
On Tue, Jul 14, 2020 at 8:51 AM Pavel Machek pavel@denx.de wrote:
Hi!
At first, I thought that the proposed system call is capable of reading *multiple* small files using a single system call - which would help increase HDD/SSD queue utilization and increase IOPS (I/O operations per second) - but that isn't the case and the proposed system call can read just a single file.
If you want to do this for multiple files, use io_uring; that's what it was designed for. I think Jens was going to be adding support for the open/read/close pattern to it as well, after some other more pressing features/fixes were finished.
What about... just using io_uring for single file, too? I'm pretty sure it can be wrapped in a library that is simple to use, avoiding need for new syscall.
Just wondering: is there a plan to add strace support to io_uring? And I don't just mean the syscalls associated with io_uring, but tracing the ring itself.
What kind of support do you mean? io_uring is asynchronous in nature with all intrinsic tracing/debugging/etc. problems of such APIs. And there are a lot of handy trace points, are those not enough?
Though, this can be an interesting project to rethink how async APIs are worked with.
Yeah, it's an interesting problem. The uring has the same events, as far as I understand, that are recorded in a multithreaded strace output (syscall entry, syscall exit); nothing more is needed.
I do think this needs to be integrated into strace(1), otherwise the usefulness of that tool (which I think is *very* high) would go down drastically as io_uring usage goes up.
Thanks, Miklos
On 14/07/2020 14:55, Miklos Szeredi wrote:
On Tue, Jul 14, 2020 at 1:36 PM Pavel Begunkov asml.silence@gmail.com wrote:
On 14/07/2020 11:07, Miklos Szeredi wrote:
On Tue, Jul 14, 2020 at 8:51 AM Pavel Machek pavel@denx.de wrote:
Hi!
At first, I thought that the proposed system call is capable of reading *multiple* small files using a single system call - which would help increase HDD/SSD queue utilization and increase IOPS (I/O operations per second) - but that isn't the case and the proposed system call can read just a single file.
If you want to do this for multiple files, use io_uring; that's what it was designed for. I think Jens was going to be adding support for the open/read/close pattern to it as well, after some other more pressing features/fixes were finished.
What about... just using io_uring for single file, too? I'm pretty sure it can be wrapped in a library that is simple to use, avoiding need for new syscall.
Just wondering: is there a plan to add strace support to io_uring? And I don't just mean the syscalls associated with io_uring, but tracing the ring itself.
What kind of support do you mean? io_uring is asynchronous in nature with all intrinsic tracing/debugging/etc. problems of such APIs. And there are a lot of handy trace points, are those not enough?
Though, this can be an interesting project to rethink how async APIs are worked with.
Yeah, it's an interesting problem. The uring has the same events, as far as I understand, that are recorded in a multithreaded strace output (syscall entry, syscall exit); nothing more is needed. I do think this needs to be integrated into strace(1), otherwise the usefulness of that tool (which I think is *very* high) would go down drastically as io_uring usage goes up.
Not touching the topic of usefulness of strace + io_uring, but I'd rather have a tool that solves a problem than a problem created and honed for a tool.
On Wed, Jul 15, 2020 at 10:33 AM Pavel Begunkov asml.silence@gmail.com wrote:
On 14/07/2020 14:55, Miklos Szeredi wrote:
On Tue, Jul 14, 2020 at 1:36 PM Pavel Begunkov asml.silence@gmail.com wrote:
On 14/07/2020 11:07, Miklos Szeredi wrote:
On Tue, Jul 14, 2020 at 8:51 AM Pavel Machek pavel@denx.de wrote:
Hi!
At first, I thought that the proposed system call is capable of reading *multiple* small files using a single system call - which would help increase HDD/SSD queue utilization and increase IOPS (I/O operations per second) - but that isn't the case and the proposed system call can read just a single file.
If you want to do this for multiple files, use io_uring; that's what it was designed for. I think Jens was going to be adding support for the open/read/close pattern to it as well, after some other more pressing features/fixes were finished.
What about... just using io_uring for single file, too? I'm pretty sure it can be wrapped in a library that is simple to use, avoiding need for new syscall.
Just wondering: is there a plan to add strace support to io_uring? And I don't just mean the syscalls associated with io_uring, but tracing the ring itself.
What kind of support do you mean? io_uring is asynchronous in nature with all intrinsic tracing/debugging/etc. problems of such APIs. And there are a lot of handy trace points, are those not enough?
Though, this can be an interesting project to rethink how async APIs are worked with.
Yeah, it's an interesting problem. The uring has the same events, as far as I understand, that are recorded in a multithreaded strace output (syscall entry, syscall exit); nothing more is needed. I do think this needs to be integrated into strace(1), otherwise the usefulness of that tool (which I think is *very* high) would go down drastically as io_uring usage goes up.
Not touching the topic of usefulness of strace + io_uring, but I'd rather have a tool that solves a problem than a problem created and honed for a tool.
Sorry, I'm not getting the metaphor. Can you please elaborate?
Thanks, Miklos
On 15/07/2020 11:41, Miklos Szeredi wrote:
On Wed, Jul 15, 2020 at 10:33 AM Pavel Begunkov asml.silence@gmail.com wrote:
On 14/07/2020 14:55, Miklos Szeredi wrote:
On Tue, Jul 14, 2020 at 1:36 PM Pavel Begunkov asml.silence@gmail.com wrote:
On 14/07/2020 11:07, Miklos Szeredi wrote:
On Tue, Jul 14, 2020 at 8:51 AM Pavel Machek pavel@denx.de wrote:
Hi!
At first, I thought that the proposed system call is capable of reading *multiple* small files using a single system call - which would help increase HDD/SSD queue utilization and increase IOPS (I/O operations per second) - but that isn't the case and the proposed system call can read just a single file.
If you want to do this for multiple files, use io_uring; that's what it was designed for. I think Jens was going to be adding support for the open/read/close pattern to it as well, after some other more pressing features/fixes were finished.
What about... just using io_uring for single file, too? I'm pretty sure it can be wrapped in a library that is simple to use, avoiding need for new syscall.
Just wondering: is there a plan to add strace support to io_uring? And I don't just mean the syscalls associated with io_uring, but tracing the ring itself.
What kind of support do you mean? io_uring is asynchronous in nature with all intrinsic tracing/debugging/etc. problems of such APIs. And there are a lot of handy trace points, are those not enough?
Though, this can be an interesting project to rethink how async APIs are worked with.
Yeah, it's an interesting problem. The uring has the same events, as far as I understand, that are recorded in a multithreaded strace output (syscall entry, syscall exit); nothing more is needed. I do think this needs to be integrated into strace(1), otherwise the usefulness of that tool (which I think is *very* high) would go down drastically as io_uring usage goes up.
Not touching the topic of usefulness of strace + io_uring, but I'd rather have a tool that solves a problem than a problem created and honed for a tool.
Sorry, I'm not getting the metaphor. Can you please elaborate?
Sure, I mean _if_ there are tools that conceptually suit better, I'd prefer to work with them than trying to shove a new and possibly alien infrastructure into strace.
But my knowledge of strace is very limited, so I can't tell whether that's the case. E.g. can it utilise static trace points?
On 15/07/2020 11:49, Pavel Begunkov wrote:
On 15/07/2020 11:41, Miklos Szeredi wrote:
On Wed, Jul 15, 2020 at 10:33 AM Pavel Begunkov asml.silence@gmail.com wrote:
On 14/07/2020 14:55, Miklos Szeredi wrote:
On Tue, Jul 14, 2020 at 1:36 PM Pavel Begunkov asml.silence@gmail.com wrote:
On 14/07/2020 11:07, Miklos Szeredi wrote:
On Tue, Jul 14, 2020 at 8:51 AM Pavel Machek pavel@denx.de wrote:
Hi!
At first, I thought that the proposed system call is capable of reading *multiple* small files using a single system call - which would help increase HDD/SSD queue utilization and increase IOPS (I/O operations per second) - but that isn't the case and the proposed system call can read just a single file.
If you want to do this for multiple files, use io_uring; that's what it was designed for. I think Jens was going to be adding support for the open/read/close pattern to it as well, after some other more pressing features/fixes were finished.
What about... just using io_uring for single file, too? I'm pretty sure it can be wrapped in a library that is simple to use, avoiding need for new syscall.
Just wondering: is there a plan to add strace support to io_uring? And I don't just mean the syscalls associated with io_uring, but tracing the ring itself.
What kind of support do you mean? io_uring is asynchronous in nature with all intrinsic tracing/debugging/etc. problems of such APIs. And there are a lot of handy trace points, are those not enough?
Though, this can be an interesting project to rethink how async APIs are worked with.
Yeah, it's an interesting problem. The uring has the same events, as far as I understand, that are recorded in a multithreaded strace output (syscall entry, syscall exit); nothing more is needed. I do think this needs to be integrated into strace(1), otherwise the usefulness of that tool (which I think is *very* high) would go down drastically as io_uring usage goes up.
Not touching the topic of usefulness of strace + io_uring, but I'd rather have a tool that solves a problem than a problem created and honed for a tool.
Sorry, I'm not getting the metaphor. Can you please elaborate?
Sure, I mean _if_ there are tools that conceptually suit better, I'd prefer to work with them than trying to shove a new and possibly alien infrastructure into strace.
But my knowledge of strace is very limited, so I can't tell whether that's the case. E.g. can it utilise static trace points?
I think, if you're going to push this idea, we should start a new thread CC'ing strace devs.
On Wed, Jul 15, 2020 at 11:02 AM Pavel Begunkov asml.silence@gmail.com wrote:
I think, if you're going to push this idea, we should start a new thread CC'ing strace devs.
Makes sense. I've pruned the Cc list, so here's the link for reference:
https://lore.kernel.org/linux-fsdevel/CAJfpegu3EwbBFTSJiPhm7eMyTK2MzijLUp1gc...
Thanks, Miklos