IIUC, the idea behind clock_getres() is to give a hint about the resolution of the specified clock. This hint may be used by an application programmer to check whether the clock is suitable for some purpose. So why does clock_getres() always return something like {0, 1} (if hrtimers are enabled) regardless of the underlying platform's real numbers?
For example, OMAP4's real resolution of CLOCK_REALTIME is 30.5us for the 32K timer and 26ns for the MPU timer. Such a difference definitely makes sense - but clock_getres(CLOCK_REALTIME, ...) always returns {0, KTIME_HIGH_RES}. Since this behavior causes confusion like http://lists.linaro.org/pipermail/linaro-dev/2012-February/010112.html, I consider this a misfeature.
Dmitry
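[For illustration only, a minimal sketch of the kind of suitability check described above; the 1 ms requirement is an arbitrary example value, and the check assumes the value reported by clock_getres() is meaningful in the first place:

#include <stdio.h>
#include <time.h>

/* Example requirement: the application wants at least 1 ms granularity. */
#define REQUIRED_NSEC 1000000LL

int main (void)
{
  struct timespec res;
  long long res_ns;

  if (clock_getres (CLOCK_REALTIME, &res) != 0)
    {
      perror ("clock_getres");
      return 1;
    }

  /* Fold the reported resolution into nanoseconds. */
  res_ns = (long long) res.tv_sec * 1000000000LL + res.tv_nsec;

  if (res_ns <= REQUIRED_NSEC)
    printf ("reported resolution %lld ns is fine for a %lld ns requirement\n",
            res_ns, REQUIRED_NSEC);
  else
    printf ("reported resolution %lld ns is too coarse for a %lld ns requirement\n",
            res_ns, REQUIRED_NSEC);
  return 0;
}

On a hrtimer-enabled Linux kernel this check always passes, because the reported resolution is 1 ns regardless of the underlying clock event hardware, which is exactly the behaviour being questioned here.]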
On Wed, Feb 08, 2012 at 08:31:03AM -0800, Dmitry Antipov wrote:
IIUC, the idea behind clock_getres() is to give a hint about the resolution of the specified clock. This hint may be used by an application programmer to check whether the clock is suitable for some purpose. So why does clock_getres() always return something like {0, 1} (if hrtimers are enabled) regardless of the underlying platform's real numbers?
I think "resolution" does not mean tick duration, but rather the finest timer unit.
HTH, Richard
On 02/08/2012 09:12 PM, Richard Cochran wrote:
I think "resolution" does not mean tick duration, but rather the finest timer unit.
#include <stdio.h>
#include <time.h>

int main (int argc, char *argv[])
{
  int i;
  struct timespec rs, ts[10];

  clock_getres (CLOCK_REALTIME, &rs);
  printf ("res: %lus %luns\n", rs.tv_sec, rs.tv_nsec);

  for (i = 0; i < 10; i++)
    clock_gettime (CLOCK_REALTIME, ts + i);
  for (i = 0; i < 10; i++)
    printf ("%d: %lus %luns\n", i, ts[i].tv_sec, ts[i].tv_nsec);

  return 0;
}

=>

res: 0s 10000000ns
0: 1328779203s 975317500ns
1: 1328779203s 975317900ns
2: 1328779203s 975318200ns
3: 1328779203s 975318400ns
4: 1328779203s 975318600ns
5: 1328779203s 975318800ns
6: 1328779203s 975319000ns
7: 1328779203s 975319300ns
8: 1328779203s 975319500ns
9: 1328779203s 975319600ns
Old Sun Fire 880, SunOS 5.10 Generic_139555-08.
100ns precision with 10ms "finest timer unit"???
Dmitry
On 02/09/2012 10:40 AM, Richard Cochran wrote:
I thought this list was about Linux kernel development, but now it seems to be about Sun's old bugs.
This Sun (probably) has hrtimers ~100000x more accurate than it reports, and that's a bug. My Panda board (with the 32K timer enabled) has hrtimers ~30000x less accurate than it reports, and that's not a bug. Great.
Dmitry
On Thu, Feb 09, 2012 at 11:32:16AM -0800, Dmitry Antipov wrote:
On 02/09/2012 10:40 AM, Richard Cochran wrote:
I thought this list was about Linux kernel development, but now it seems to be about Sun's old bugs.
This Sun (probably) has hrtimers ~100000x more accurate than it reports, and that's a bug. My Panda board (with the 32K timer enabled) has hrtimers ~30000x less accurate than it reports, and that's not a bug. Great.
If I understand you correctly, what you are looking for is a way to get a promise from the OS regarding real-time deadlines, right? But that is a different question from the timer unit resolution:
Q: What is the finest timer duration that I may request?
A: One nanosecond (the answer given by getres).

Q: What kind of real-time deadline will my system provide?
A: Nobody knows for sure.
Just because the OS claims timer resolution X does not mean that your application can assume, "okay, I'll just set my periodic task to period X and the OS will take care of the rest." The only thing the user can count on is that the timer expiration will not come _before_ the deadline. The nanosleep man page puts it like this:
If the interval specified in req is not an exact multiple of the granularity of the underlying clock (see time(7)), then the interval will be rounded up to the next multiple. Furthermore, after the sleep completes, there may still be a delay before the CPU becomes free to once again execute the calling thread.
I agree that getres does not provide very useful information. Under Linux, it merely indicates the presence of high resolution timer support.
The practical solution to what you are asking for is overall system testing and tuning, and that is unfortunately manual labor.
Richard
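[As a rough sketch of the kind of manual testing Richard suggests, the following hypothetical test program measures how late clock_nanosleep() wakeups actually arrive relative to the request; the 100 us period and the 1000 iterations are arbitrary choices:

#include <stdio.h>
#include <time.h>

/* Nanoseconds from a to b, assuming b is not earlier than a. */
static long long diff_ns (const struct timespec *a, const struct timespec *b)
{
  return (long long) (b->tv_sec - a->tv_sec) * 1000000000LL
         + (b->tv_nsec - a->tv_nsec);
}

int main (void)
{
  const long req_ns = 100000;          /* arbitrary 100 us request */
  struct timespec req = { 0, req_ns }, before, after;
  long long late, worst = 0;
  int i;

  for (i = 0; i < 1000; i++)
    {
      clock_gettime (CLOCK_MONOTONIC, &before);
      clock_nanosleep (CLOCK_MONOTONIC, 0, &req, NULL);
      clock_gettime (CLOCK_MONOTONIC, &after);

      /* The wakeup must never be early; record how late it was. */
      late = diff_ns (&before, &after) - req_ns;
      if (late > worst)
        worst = late;
    }

  printf ("worst observed overshoot: %lld ns for a %ld ns request\n",
          worst, req_ns);
  return 0;
}

The worst case observed this way, not the value reported by clock_getres(), is what actually bounds a periodic task on a given system.]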
On Wed, 8 Feb 2012, Dmitry Antipov wrote:
IIUC, the idea behind clock_getres() is to give a hint about the resolution of the specified clock. This hint may be used by an application programmer to check whether the clock is suitable for some purpose. So why does clock_getres() always return something like {0, 1} (if hrtimers are enabled) regardless of the underlying platform's real numbers?
For example, OMAP4's real resolution of CLOCK_REALTIME is 30.5us for the 32K timer and 26ns for the MPU timer. Such a difference definitely makes sense - but clock_getres(CLOCK_REALTIME, ...) always returns {0, KTIME_HIGH_RES}. Since this behavior causes confusion like http://lists.linaro.org/pipermail/linaro-dev/2012-February/010112.html, I consider this a misfeature.
We had this discussion before. The point is that the accuracy of the internal kernel timer handling is 1 nsec in the case of high resolution timers. The fact that the underlying clock event device has a coarser resolution does not change that.

It would be possible to return the real resolution of the clock event device, but we have systems where the clockevent device changes dynamically. So which resolution do we expose to an application? That of the currently active device, or some magic number from a device which might not even be initialized yet? That's more confusing than telling user space that high resolution timers are active and that the kernel is trying to achieve 1 ns accuracy.
Thanks,
tglx
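[Given that, about the only portable conclusion user space can draw from clock_getres() on Linux is whether high resolution timers are active at all; a minimal sketch of such a check, relying on the {0, 1} value discussed in this thread rather than on any documented guarantee:

#include <stdio.h>
#include <time.h>

int main (void)
{
  struct timespec res;

  if (clock_getres (CLOCK_MONOTONIC, &res) != 0)
    {
      perror ("clock_getres");
      return 1;
    }

  if (res.tv_sec == 0 && res.tv_nsec == 1)
    printf ("high resolution timers appear to be active\n");
  else
    /* Without hrtimers the reported resolution is tick based, e.g. 10 ms at HZ=100. */
    printf ("timers appear to be tick based: %ld.%09ld s\n",
            (long) res.tv_sec, res.tv_nsec);
  return 0;
}

Anything beyond that, such as the real wakeup latency, still has to be measured.]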
On 02/09/2012 02:12 AM, Thomas Gleixner wrote:
It would be possible to return the real resolution of the clock event device, but we have systems where the clockevent device changes dynamically. So which resolution do we expose to an application? That of the currently active device, or some magic number from a device which might not even be initialized yet? That's more confusing than telling user space that high resolution timers are active and that the kernel is trying to achieve 1 ns accuracy.
First of all, it's not necessary to make promises to an application programmer that can't be kept. If it's known that _no_ hardware configuration can guarantee, for example, <20ns precision, it's better to return {0, 20} than {0, 1} from clock_getres(...). If the high-res subsystem isn't active, just return -1 and set errno to EINVAL, regardless of the arguments passed.
Second, it's very hard to deny that some applications really need precise time measurements. So, if the clockevent device can change dynamically, it would be nice to have a way to prevent a loss of precision for such an application. For example, an application could issue prctl(PR_SET_CLOCK_STABLE, 1) to make sure that the hrtimer resolution isn't changed (or at least not changed in a way that loses precision) until prctl(PR_SET_CLOCK_STABLE, 0) or exit(); if some system-wide event decreases hrtimer accuracy, such an application might receive a signal, etc.
Dmitry
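[Purely to illustrate the proposal, a sketch of how an application might use such an interface. PR_SET_CLOCK_STABLE does not exist in any kernel; both the constant and its semantics here are hypothetical:

#include <stdio.h>
#include <sys/prctl.h>

/* Placeholder for the proposed request code; NOT a real prctl option. */
#ifndef PR_SET_CLOCK_STABLE
#define PR_SET_CLOCK_STABLE 9999
#endif

int main (void)
{
  /* Hypothetically ask the kernel not to lower hrtimer resolution
     while this process performs its measurements. */
  if (prctl (PR_SET_CLOCK_STABLE, 1, 0, 0, 0) != 0)
    {
      perror ("prctl (PR_SET_CLOCK_STABLE, 1)");
      return 1;
    }

  /* ... time-critical work would run here ... */

  /* Drop the request again before exiting. */
  prctl (PR_SET_CLOCK_STABLE, 0, 0, 0, 0);
  return 0;
}]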