On Mon, Jul 06, 2020 at 08:07:46AM +0200, Jan Ziak wrote:
> On Sun, Jul 5, 2020 at 1:58 PM Greg KH <gregkh@linuxfoundation.org> wrote:
> > It also is a measurable increase over reading just a single file.
> > Here's my really, really fast AMD system doing just one call to
> > readfile vs. one call sequence to open/read/close:
> >
> > 	$ ./readfile_speed -l 1
> > 	Running readfile test on file /sys/devices/system/cpu/vulnerabilities/meltdown for 1 loops...
> > 	Took 3410 ns
> > 	Running open/read/close test on file /sys/devices/system/cpu/vulnerabilities/meltdown for 1 loops...
> > 	Took 3780 ns
> >
> > 370ns isn't all that much, yes, but it is 370ns that could have been
> > used for something else :)
> I am curious as to how you amortized or accounted for the fact that
> readfile() first needs to open the dirfd and then close it later.
>
> From a performance viewpoint, only code where readfile() is called
> multiple times from within a loop makes sense:
>
> 	dirfd = open(...);
> 	for (...) {
> 		readfile(dirfd, ...);
> 	}
> 	close(dirfd);
dirfd can be AT_FDCWD, or, if the path is absolute, dirfd is ignored, so one does not have to open anything. Opening a dirfd would be an optimisation if one wanted to read several files relating to the same process:
	char dir[50];

	sprintf(dir, "/proc/%d", pid);
	dirfd = open(dir, O_RDONLY);
	readfile(dirfd, "maps", ...);
	readfile(dirfd, "stack", ...);
	readfile(dirfd, "comm", ...);
	readfile(dirfd, "environ", ...);
	close(dirfd);
but one would not have to do that.