On Sun, Jun 4, 2023, at 10:29, 吴章金 wrote:
> Sorry for missing part of your feedback; I will check whether -nostdlib stops the linking of libgcc_s, or whether my own separate test script forgot to link libgcc_s manually.

According to the gcc documentation, -nostdlib drops libgcc.a, but adding -lgcc is the recommended way to bring it back.
> And as suggested in Thomas' reply,
>
> > Perhaps we really need to add the missing __divdi3 and __aeabi_ldivmod and the ones for the other architectures, or get one from lib/math/div64.c.
>
> > No, these come from the compiler via libgcc_s; we must not try to reimplement them. And we should do our best to avoid depending on them, to avoid the error you got above.
>
> So, the explicit conversion is used instead in the patch.

I think a cast to a 32-bit type is ideal when converting the clock_gettime() result into microseconds, since the kernel guarantees that the timespec value is normalized, with all zeroes in the upper 34 bits. Going through __aeabi_ldivmod would make the conversion much slower.

For user-supplied non-normalized timeval values, it's not obvious whether we need the full 64-bit division.
Arnd
On Sun, Jun 04, 2023 at 11:24:39AM +0200, Arnd Bergmann wrote:
> On Sun, Jun 4, 2023, at 10:29, 吴章金 wrote:
> > Sorry for missing part of your feedback; I will check whether -nostdlib stops the linking of libgcc_s, or whether my own separate test script forgot to link libgcc_s manually.
>
> According to the gcc documentation, -nostdlib drops libgcc.a, but adding -lgcc is the recommended way to bring it back.
>
> > And as suggested in Thomas' reply,
> >
> > > Perhaps we really need to add the missing __divdi3 and __aeabi_ldivmod and the ones for the other architectures, or get one from lib/math/div64.c.
> >
> > > No, these come from the compiler via libgcc_s; we must not try to reimplement them. And we should do our best to avoid depending on them, to avoid the error you got above.
> >
> > So, the explicit conversion is used instead in the patch.
>
> I think a cast to a 32-bit type is ideal when converting the clock_gettime() result into microseconds, since the kernel guarantees that the timespec value is normalized, with all zeroes in the upper 34 bits. Going through __aeabi_ldivmod would make the conversion much slower.
>
> For user-supplied non-normalized timeval values, it's not obvious whether we need the full 64-bit division.

We don't have to care about these here for the microsecond part, because for decades this field was exclusively 32-bit. Also, the only consumer of this field would have been settimeofday(), and it's already documented as returning EINVAL if tv_usec is not within the expected 0..999999 range.

And when in doubt, we should keep in mind that nolibc's purpose is not to become yet another full-blown libc alternative, but just a small piece of software allowing to produce portable and compact binaries for testing or booting. Being a bit stricter than other libcs for the sake of code compactness is better here. Originally, for example, it was necessary to always pass the 3 arguments to open(). Over time we managed to make simple code compile with both glibc and nolibc, but when it comes at the cost of adding size and burden for the developers, such as forcing them to add libgcc, I prefer that we slightly limit the domain of application instead.
Thanks! Willy
On Sun, Jun 4, 2023, at 13:27, Willy Tarreau wrote:
> On Sun, Jun 04, 2023 at 11:24:39AM +0200, Arnd Bergmann wrote:
> > For user-supplied non-normalized timeval values, it's not obvious whether we need the full 64-bit division.
>
> We don't have to care about these here for the microsecond part, because for decades this field was exclusively 32-bit. Also, the only consumer of this field would have been settimeofday(), and it's already documented as returning EINVAL if tv_usec is not within the expected 0..999999 range.

Right.

> Over time we managed to make simple code compile with both glibc and nolibc, but when it comes at the cost of adding size and burden for the developers, such as forcing them to add libgcc, I prefer that we slightly limit the domain of application instead.
Good point. This also reminds me that the compilers I build for https://mirrors.edge.kernel.org/pub/tools/crosstool/ don't always have every version of libgcc that may be needed, for instance the mips compilers only provide a big-endian libgcc and the arm compilers only provide a little-endian one, even though the compilers can build code both ways with the right flags.
Arnd
On Sun, Jun 04, 2023 at 01:38:39PM +0200, Arnd Bergmann wrote:
> > Over time we managed to make simple code compile with both glibc and nolibc, but when it comes at the cost of adding size and burden for the developers, such as forcing them to add libgcc, I prefer that we slightly limit the domain of application instead.
>
> Good point. This also reminds me that the compilers I build for https://mirrors.edge.kernel.org/pub/tools/crosstool/ don't always have every version of libgcc that may be needed; for instance, the mips compilers only provide a big-endian libgcc and the arm compilers only a little-endian one, even though the compilers can build code both ways with the right flags.

That reminds me of something indeed: I know MIPS is a great platform for testing portability, because libgcc and/or atomics are not always complete depending on how the toolchain is built. At work, once I've double-checked that haproxy still builds and starts on my EdgeRouter-X, I know it will build everywhere ;-)
Willy
On Sun, Jun 04, 2023 at 11:24:39AM +0200, Arnd Bergmann wrote:
> On Sun, Jun 4, 2023, at 10:29, 吴章金 wrote:
> > Sorry for missing part of your feedback; I will check whether -nostdlib stops the linking of libgcc_s, or whether my own separate test script forgot to link libgcc_s manually.
>
> According to the gcc documentation, -nostdlib drops libgcc.a, but adding -lgcc is the recommended way to bring it back.
>
> > And as suggested in Thomas' reply,
> >
> > > Perhaps we really need to add the missing __divdi3 and __aeabi_ldivmod and the ones for the other architectures, or get one from lib/math/div64.c.
> >
> > > No, these come from the compiler via libgcc_s; we must not try to reimplement them. And we should do our best to avoid depending on them, to avoid the error you got above.
> >
> > So, the explicit conversion is used instead in the patch.
>
> I think a cast to a 32-bit type is ideal when converting the clock_gettime() result into microseconds, since the kernel guarantees that the timespec value is normalized, with all zeroes in the upper 34 bits. Going through __aeabi_ldivmod would make the conversion much slower.

Perfect, this explanation really needs to be added to the coming clock_gettime/time64 patches; I did worry that the (unsigned int) conversion might lose the upper bits. Thanks, Arnd.

> For user-supplied non-normalized timeval values, it's not obvious whether we need the full 64-bit division.
>
> We don't have to care about these here for the microsecond part, because for decades this field was exclusively 32-bit. Also, the only consumer of this field would have been settimeofday(), and it's already documented as returning EINVAL if tv_usec is not within the expected 0..999999 range.

And this one too; thanks, Willy.

> And when in doubt, we should keep in mind that nolibc's purpose is not to become yet another full-blown libc alternative, but just a small piece of software allowing to produce portable and compact binaries for testing or booting. Being a bit stricter than other libcs for the sake of code compactness is better here. Originally, for example, it was necessary to always pass the 3 arguments to open(). Over time we managed to make simple code compile with both glibc and nolibc, but when it comes at the cost of adding size and burden for the developers, such as forcing them to add libgcc, I prefer that we slightly limit the domain of application instead.

This explains why it is 'no' libc ;-)
Best regards, Zhangjin