Require timestamp sequence to be monotonic when device is synchronized #37
Comments
This proposal seems to only consider the non-monotonicity as a consequence of clock synchronization. However, there are other scenarios where timestamps can be locally non-monotonic. For instance, a […]
@bruno-f-cruz that is a good point. For our purposes I think it would be enough if we can consider that within single registers the sequence of timestamps is monotonic.
@bruno-f-cruz btw, the scenario you outlined would completely invalidate using […]
Completely agree. That's why I think messages to be used for software timestamping should result from filtering messages from a single periodic register (e.g. the Behavior board's ADC). This guarantees monotonicity (assuming the jitter of synchronization is never above the period of the register) and uniform timestamping (since the register has a periodic dispatch rate). This is also why I think the Clock Generator family of boards should also optionally dispatch a periodic event to afford this software synchronization.
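A minimal sketch of the filtering idea above, assuming messages have already been parsed into objects carrying a register address and a device timestamp; the field names and the register address used here are illustrative, not taken from the Harp protocol:

```python
# Illustrative sketch: keep only events from one periodic register and verify
# that their timestamp sequence is monotonic before using them for software
# timestamping. The message structure and the register address are assumptions.
from dataclasses import dataclass


@dataclass
class HarpEvent:
    address: int      # register address (hypothetical field name)
    timestamp: float  # device timestamp in seconds


ADC_REGISTER = 44  # placeholder address standing in for a periodic register


def periodic_events(messages, address=ADC_REGISTER):
    """Filter events coming from a single periodic register."""
    return [m for m in messages if m.address == address]


def is_monotonic(events):
    """True if timestamps never decrease within the filtered sequence."""
    return all(a.timestamp <= b.timestamp for a, b in zip(events, events[1:]))


messages = [HarpEvent(44, 0.001), HarpEvent(32, 0.0015), HarpEvent(44, 0.002)]
adc = periodic_events(messages)
assert is_monotonic(adc)
```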
An even weirder scenario is what happens to timestamps of extended operations if someone changes the clock time in the middle, so yeah, completely agree. Anyway, worth discussing this since it can get confusing in analysis really fast.
Can y'all clarify the scenario where this manifests as a problem? It sounds like after the device has synchronized once, small future corrections to its current time are causing it to occasionally report an event that happened later in time with a timestamp earlier than a previous event. Example:
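The example attached to this comment is not preserved here; an illustrative sequence along the lines described might look like the following (all numbers invented):

```python
# Hypothetical illustration (numbers invented) of the scenario described above.
events = [
    ("event A", 12.000),  # stamped just before a small backwards correction
    ("event B", 11.997),  # happens ~2 ms later in real time, but gets an earlier stamp
]
timestamps = [t for _, t in events]
assert timestamps != sorted(timestamps)  # the reported sequence is not monotonic
```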
Can y'all confirm the above?
I think the better way to handle this would be to have the subordinate Harp device either "tick faster" or "tick slower" rather than physically tick backwards. I'm imagining two runners on a track where one sets the pace and the other keeps up with the pace. Here, neither runner runs "extra" distance on the track by running backwards. Instead, the runner keeping up with the other simply runs faster or slower such that the small adjustment catches them both up.

Possible Implementation Solution

One possible solution would be to implement synchronization with a PI loop on the device side to handle small corrections, and use it after the initial synchronization message is received, which will require a big jump. The time in each synchronization message would be subtracted from the current time to get an error signal. We would multiply this error signal by a gain factor to make the current time tick faster or slower to keep up with the external timekeeper. The benefit of this strategy is that the time reported from the device would always be monotonic. The downside is that this could be rather difficult to implement, as the device may lack the granularity to tick faster or slower in small increments, so we'd need to spoof this behavior with small bursts. I should mention that something to this effect is how I've seen people synchronize against PTP time servers with the IEEE 1588 PTP protocol.
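The PI-loop idea could look something like the following sketch, written in Python for readability rather than as firmware; the gains, the tick period, and the clamping of the rate are all assumptions made for illustration.

```python
# Sketch of disciplining a device clock with a PI loop instead of stepping it back.
# All constants are illustrative; a firmware version would work in integer ticks.

KP, KI = 0.1, 0.01  # proportional and integral gains (assumed values)


class DisciplinedClock:
    def __init__(self):
        self.local_time = 0.0  # seconds reported by the device
        self.rate = 1.0        # current tick rate relative to nominal
        self.integral = 0.0    # accumulated error for the integral term

    def tick(self, dt):
        """Advance the local clock by one nominal period dt at the adjusted rate."""
        self.local_time += dt * self.rate
        return self.local_time

    def on_sync_message(self, reference_time):
        """Adjust the tick rate from the error instead of stepping the clock."""
        error = reference_time - self.local_time
        self.integral += error
        # Tick faster when behind, slower when ahead; clamping the rate at zero
        # guarantees the reported time never moves backwards.
        self.rate = max(0.0, 1.0 + KP * error + KI * self.integral)


clock = DisciplinedClock()
for step in range(10):
    clock.tick(0.001)                                     # 1 kHz nominal tick, assumed
    if step % 5 == 0:                                     # occasional synchronization message
        clock.on_sync_message(clock.local_time - 0.0005)  # device slightly ahead
```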
Implementation should guarantee that the timestamp sequence reported by the device remains monotonic while the synchronized flag is set.
This should be added to the protocol as one expected behavior of the implementation.
Summary
Externally synchronized devices should report strictly monotonic timestamp sequences. This could be implemented by delaying clock updates instead of forcing the clock backwards for small corrections.
(See below for details; this proposal will likely need extended clarification and discussion.)
Motivation
Automatic clock synchronization by necessity requires non-linear and non-monotonic time, since the clock may be set backwards as well as forwards. However, this greatly complicates data analysis, as data frames cannot be assumed to be sorted by timestamp, which makes it harder to treat the time axis as a primary search key (searching a sorted key is logarithmic rather than linear).
Furthermore, in popular time-series frameworks like pandas, time range queries are only allowed if the index is sorted. Unfortunately, simply sorting the data by time can be dangerous, as it may rearrange the order of measurements, and there is currently no way of reliably knowing when this is admissible.
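As a minimal pandas illustration (column names and timestamps invented), range queries assume a monotonic index, and sorting changes the order of the rows:

```python
# Minimal pandas illustration: time-range slicing assumes a sorted index.
import pandas as pd

frame = pd.DataFrame(
    {"value": [1, 2, 3]},
    index=pd.to_datetime([
        "2024-01-01 00:00:00.002",
        "2024-01-01 00:00:00.001",  # out-of-order timestamp
        "2024-01-01 00:00:00.003",
    ]),
)

print(frame.index.is_monotonic_increasing)  # False: range queries on this index may fail

# Sorting makes slicing work, but silently reorders the underlying measurements.
sorted_frame = frame.sort_index()
print(sorted_frame.loc["2024-01-01 00:00:00.001":"2024-01-01 00:00:00.002"])
```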
Detailed Design
Clock synchronization must be allowed to set the clock backwards, otherwise it would compromise synchronization in general. However, this proposal suggests synchronization behavior could be split into two states. The below discussion also assumes that #8 is resolved.
1. The device is not synchronized: if the synchronized flag is cleared and the device suddenly starts receiving synchronization pulses, we expect that larger changes will be required to align the independent clocks. In this state, synchronization would proceed in the same way as it does now, with the added requirement that the synchronized flag would not be set until the clocks are deemed to be "aligned" (what aligned means here requires clarification).
2. The device is synchronized: if the synchronized flag is set, then ongoing changes to the clock are expected to be small. In this case, we assume there are two possible kinds of correction:
   - device clock is late: in this case time needs to move forward; this is still allowed under this proposal since it preserves monotonic time;
   - device clock is running ahead: in this case time needs to move backwards; under this proposal we would want to avoid this while the synchronized flag is set, for instance by delaying clock updates as suggested in the Summary; if the required correction becomes too large, the synchronized flag should be cleared so that time can travel backwards again (see the sketch below).
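A rough sketch of this two-state behavior follows; the thresholds and the "hold the clock instead of stepping it back" mechanism are assumptions for illustration, not part of the proposal.

```python
# Sketch of the proposed two-state synchronization behavior. The thresholds and
# the "hold the clock instead of stepping it back" mechanism are assumptions.

ALIGNED_THRESHOLD = 0.0005  # error below which clocks are deemed aligned (s), assumed
DESYNC_THRESHOLD = 0.010    # error above which the synchronized flag is dropped (s), assumed


class DeviceClock:
    def __init__(self):
        self.current_time = 0.0
        self.synchronized = False

    def on_sync_message(self, reference_time):
        error = reference_time - self.current_time
        if not self.synchronized:
            # State 1: not synchronized. Large jumps, including backwards, are allowed.
            self.current_time = reference_time
            # Only set the flag once incoming corrections are already small.
            self.synchronized = abs(error) < ALIGNED_THRESHOLD
        elif error >= 0:
            # State 2, device late: moving forward preserves monotonicity.
            self.current_time = reference_time
        elif -error < DESYNC_THRESHOLD:
            # State 2, device slightly ahead: hold the clock instead of stepping it
            # back; the external clock will catch up on a later message.
            pass
        else:
            # Correction too large to absorb: drop the flag and allow a backwards jump.
            self.synchronized = False
            self.current_time = reference_time


clock = DeviceClock()
clock.on_sync_message(100.0)  # initial big jump while unsynchronized
```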
Drawbacks
If there is no way to efficiently implement this feature while keeping similar synchronization performance on existing devices, we might not want to do this.
Alternatives
Discuss best practices for sorting timestamped data in post-processing. Currently we simply try to sort data chunks which have out-of-order timestamps, but there is no good intuition for the impact of this workaround without a deeper understanding of the current synchronization algorithm.
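A minimal sketch of that workaround, assuming the chunk is already a pandas DataFrame indexed by timestamp (the function name is invented):

```python
# Sketch of the post-processing workaround: sort out-of-order chunks, but warn,
# since sorting can silently reorder measurements that share close timestamps.
import warnings

import pandas as pd


def sort_if_needed(frame: pd.DataFrame) -> pd.DataFrame:
    """Return a time-sorted copy of the frame, warning if sorting was required."""
    if frame.index.is_monotonic_increasing:
        return frame
    warnings.warn("Out-of-order timestamps detected; sorting may reorder measurements.")
    return frame.sort_index(kind="stable")  # stable sort keeps ties in original order
```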
Alternatively, hosts could also impose sorting of timestamps at acquisition time. As above, this would require understanding the assumptions about non-monotonic time in the clock synchronization protocol.
Unresolved Questions
The clock synchronization mechanism might not be appropriate for firmware implementation, or there may be better solutions to explore. The only requirement of this proposal is to make time monotonic while the device is synchronized, so any solution achieving this would be acceptable.
What should be the value of the time delta used to decide when a device is considered synchronized?
Design Meetings
N/A