On Tue, 2012-02-07 at 20:21 -0500, Andrew Richardson wrote:
On Feb 7, 2012, at 7:30 PM, John Stultz wrote:
On Wed, 2012-02-08 at 00:16 +0100, Zygmunt Krynicki wrote: Hrm. No, that shouldn't be the case. CLOCK_MONOTONIC and CLOCK_REALTIME are driven by the same accumulation and differ only by an offset.
That said, in the test case you're using CLOCK_MONOTONIC_RAW, which I don't think you really want, as it's not NTP frequency corrected. In addition, it is driven by slightly different accumulation logic. But you said CLOCK_REALTIME showed the same issue, so it's probably not a CLOCK_MONOTONIC_RAW-specific bug.
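To make the offset point concrete, here is a minimal sketch (not the test case from this thread; it assumes a Linux system with a glibc recent enough that clock_gettime() no longer needs -lrt) that samples the two clocks back to back. The printed offset should stay roughly constant across runs, barring a settimeofday()/NTP step, since both clocks are advanced by the same accumulation:

/* Minimal sketch, not the test case from this thread: sample
 * CLOCK_REALTIME and CLOCK_MONOTONIC back to back and print the
 * offset between them.  Across runs the offset should stay roughly
 * constant, since both clocks are advanced by the same accumulation. */
#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <time.h>

static long long ts_ns(const struct timespec *ts)
{
	return (long long)ts->tv_sec * 1000000000LL + ts->tv_nsec;
}

int main(void)
{
	struct timespec real, mono;

	clock_gettime(CLOCK_REALTIME, &real);
	clock_gettime(CLOCK_MONOTONIC, &mono);

	printf("REALTIME - MONOTONIC offset: %lld ns\n",
	       ts_ns(&real) - ts_ns(&mono));
	return 0;
}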
In general, I don't want the time value moving around on me (in case something weird is going on and it's changing too much). This seems to be what most people advise when it comes to profiling something with sub-second execution, but I might be misunderstanding you slightly.
The difference is "hardware constant" vs "software-controlled time constant". And as with all things time, it's all relative. :)
CLOCK_MONOTONIC_RAW is uncorrected, so a second may not really be a second, and things like thermal changes can cause fluctuations in your timing intervals. CLOCK_MONOTONIC is NTP-corrected, so a second should be a second and thermal drift should be corrected for, but that depends on how much you trust ntpd.
That said, CLOCK_MONOTONIC can really only be corrected to within 500 ppm of CLOCK_MONOTONIC_RAW, so I suspect the difference won't matter much unless you're measuring longer intervals. It's not like CLOCK_REALTIME, which is more problematic as it may be set back and forth by any amount of time.
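As a worked example of that 500 ppm bound for sub-second profiling: over a 1 ms interval the maximum NTP slew amounts to only about 0.5 us, so the choice of CLOCK_MONOTONIC vs CLOCK_MONOTONIC_RAW shouldn't explain large jumps. A minimal sketch of the kind of measurement being discussed (the busy loop is just a hypothetical stand-in for the profiled code, not the original test case):

/* Minimal sketch, assumed workload: time a short operation with
 * CLOCK_MONOTONIC.  For intervals this short the <=500 ppm NTP
 * frequency correction changes the result by at most ~0.5 us per
 * millisecond measured. */
#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <time.h>

int main(void)
{
	struct timespec start, end;
	volatile unsigned long sink = 0;
	unsigned long i;
	long long elapsed_ns;

	clock_gettime(CLOCK_MONOTONIC, &start);

	/* stand-in for the code being profiled */
	for (i = 0; i < 1000000UL; i++)
		sink += i;

	clock_gettime(CLOCK_MONOTONIC, &end);

	elapsed_ns = (long long)(end.tv_sec - start.tv_sec) * 1000000000LL
		     + (end.tv_nsec - start.tv_nsec);
	printf("elapsed: %lld ns\n", elapsed_ns);
	return 0;
}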
Seems a bit too high, right? I did get some low values, such as a 500 ns difference, once. I was expecting a hard lower bound (e.g. a few ms), but a measurement of 500 ns elapsed makes that theory unlikely.
Yea. I suspect something else is at play here.
thanks -john