gPTP clock sync accuracy #837
The gPTP time synchronization protocol is implemented in INET; see the showcase at https://inet.omnetpp.org/docs/showcases/tsn/timesynchronization/gptp/doc/index.html Isn't this what you are looking for?
Hi levy, thanks for the reply. Am I missing something in the setup of my network, or is this feature not fully supported/implemented?
Oh, I see now. INET doesn't fully implement 802.1AS-2020; compared to the previous version, only multiple time domains have been added. So yes, you are right that clock drift rates are not synchronized.
Would it be possible to clarify the following:
gPTP can be used to synchronize the time of INET clocks, which can drift away from each other. Yes, you can simulate time synchronization and the drifting of clocks, and their effects on time-aware shaping, for example. The current gPTP implementation doesn't change the drift rate of clocks; it only sets the time value of clocks. Setting the drift rate would not be too difficult to do, I think.
With the understanding that the features are still maturing and under development, we would like to know what the future goals and plans are for the gPTP simulation.
In the quote above you mentioned that gPTP can be used to synchronize the time. Currently we have found this not to be the case, as gPTP itself has strict constraints on the Time Error that can be introduced per hop, and we cannot yet simulate gPTP time sync correctly. When using the maximum clock drift that gPTP allows (±100 ppm), the simulation's synchronization accuracy is outside of the gPTP specification: the max Time Error for one hop is expected to be about ±70 to ±80 ns, but the simulation gives us a max Time Error of about 12.5 us on the first hop. This is detrimental to all other TSN features that depend on time sync. Are we perhaps misunderstanding the simulation goals of gPTP in OMNeT++/INET? As it stands, it cannot be used to simulate gPTP clock accuracy, and all TSN features that depend on it, such as the Time-Aware Shaper, would inherit the gPTP clock error.
Hmmm, you make a very good point. I just looked at this issue more deeply. You should know that the gPTP protocol itself is an external contribution to INET (just like most of the protocols), and it seems that the integration wasn't done correctly or completely. When time synchronization happens in the gPTP protocol, the protocol module simply sets the time of the clock and doesn't compensate for the clock drift rate. This happens despite the fact that the grand master relative drift rate is actually calculated in the protocol during time synchronization. So this must be an integration issue between the gPTP protocol and the INET clock interface. The fix isn't very difficult, but the INET clock interface doesn't support the compensation parameter right now. Of course, this can be added quite easily, and the gPTP synchronization can be updated afterwards. I would expect the clock time difference to decrease by several orders of magnitude. This issue will be fixed soon.
Thank you so much, we really appreciate you looking into this further and look forward to the patch! If any assistance is needed in the verification process, we are happy to help.
We worked on this issue at the end of last week because it's very important to get this right in order to correctly implement TSN. We added a new feature to INET clocks that allows one to set an oscillator compensation factor. This value is 1.0 by default, which means the clock simply counts the number of oscillator ticks and there's no compensation. Even if the value is 1.0, clocks drift away from each other over time due to imprecise oscillators (with constant or random drift). Setting the oscillator compensation factor in gPTP to a value calculated from the grand master rate ratio is what is missing, I think. Do you agree with this? For testing purposes, INET contains a SimpleClockSynchronizer module which synchronizes clocks without actual communication between the nodes, just by looking at the clock times of the master and slave nodes. This module does two things when synchronization happens: it sets the slave clock time to the master clock time, and it sets the slave's oscillator compensation factor so that the slave clock advances at the same rate as the master clock.
In the following diagram, the slave clocks are periodically synchronized to the master clock, both in time and in the rate of change of their clock times. Then they drift away randomly using a RandomDriftOscillator (which has a random-walk model for drifting). I think this diagram shows that the clocks work as expected and that synchronization can be done. Now the task is to get something very similar using gPTP. Perhaps the result will be less accurate because it uses real communication between the nodes. We will let you know when we think it works as expected. It would be great if you could help us validate the model. Best regards
The INET branch topic/gptpsynch contains the updated clock model and the fixed gPTP model that is supposed to correctly synchronize clocks, both in terms of their clock time and the rate of change of their clock time. We tested the changes in the OneMasterClock configuration of the showcases/tsn/timesynchronization/gptp example. It's possible to test both gPTP and the above SimpleClockSynchronizer module by uncommenting a few lines in the INI file. Could you please check the changes in this branch and do some validation tests before we push this to the master branch?
Thanks for the rapid development. |
The changes have been merged into the master branch. |
Hi Levy, Thanks for the update! All the best for the festive season. |
Thanks, have a nice holiday and see you next year! |
Hi everyone, I was just thinking about using OMNeT++'s gPTP implementation to evaluate some research for which I need a realistic timing model. For my use case I need the time offsets to the grandmaster to follow a normal distribution with parameters similar to those of real systems, and I immediately noticed the lack of syntonization (frequency compensation) after a short look at the documentation page. I was very happy to see that an issue had already been created. @levy Thanks a lot for the effort you put into this and for the progress so far.

There are a few things I noticed that might be interesting to you regarding the implementation. In the diagram above you still have a large timing error of ~1 us after a synchronization period of 125 ms. (Note: I assume the distance from the grandmaster to the switch is 1 hop.) It is possible to achieve much higher precision in real systems, even with very cheap oscillators, and I think the reason is that the drift rate of the oscillator (±100 ppm) needs to be split into a component-specific part (caused by the manufacturing process), which is largely static for an individual oscillator, and a dynamically changing drift caused by heat, voltage levels, etc. The component-specific drift rate of each oscillator is much larger than the drift due to heat, and because of its static nature it can be (more or less) fully compensated. The dynamic drift due to heat, which cannot be compensated, is a lot smaller than ±100 ppm.

So I propose splitting the drift parameter into two parameters A (static drift offset) and B (dynamic drift change). A is initialized only once when starting the simulation, while B is re-evaluated on every tick. The clock ratio at every tick is then composed of A and B, for example with A = uniform(-100ppm, 100ppm) and B = uniform(-1ppm, +1ppm). If B is modeled as a random walk, the clock rate could be set to 1.0 + A + B_sum at every tick.
I think this simple model would come a lot closer to real systems and is very simple to implement. If you added an additional parameter to INET's oscillator module for the "static drift", with a default value of 0, this would be backwards compatible with existing configurations. Best regards
Could you please take a look at https://github.com/inet-framework/inet/blob/master/src/inet/clock/oscillator/RandomDriftOscillator.ned in the master branch of INET, because I think it implements exactly what you are suggesting here. The oscillator drift is a random initial value plus the sum of a random walk process.
Thanks a lot! This is pretty much exactly what I meant. Seems like I didn't notice that the feature already existed. |
No problem. Please note that there are changes in that module in the current master branch since the INET 4.4 version.
Hi Levy, From our brief look through the new feature set, it is definitely a good improvement over the old system. There are, however, some concerns that we have identified:
1. The oscillator compensation factor is stored as a raw value around 1.0; values such as 0.99999 or 1.00001 are hard for humans to read, and a PPM representation would be clearer.
2. Parameter names such as initialDriftRate can be confusing.
Upon further investigating the IEEE 802.1AS-2020 standard, there is no single mandated way of synchronising a clock, so the current method of setting the new clock time is valid. The introduction of a clock servo could be a good future alternative for clock synchronisation: the clock drift rate would then be the only value that changes, driven by a synchronisation method such as PI control or linear regression. LinuxPTP (ptp4l) implements such a servo. Any comment on the above is appreciated.
Happy New Year!

For 1, the oscillator compensation factor is currently a value around 1. The value 1 means the clock counts 1 for every tick of the oscillator, that is, no compensation. A value larger than 1 means the clock runs slower, a value less than 1 means the clock runs faster. You are right that values such as 0.99999 or 1.00001 are difficult for humans to understand, so we could perhaps store this value in PPM, similarly to the drift rates. For example, an oscillator compensation factor of 1 PPM would mean that for every millionth tick the clock is not incremented, and -1 PPM would mean that for every millionth tick the clock is incremented twice. Is this what you are suggesting?

For 2, initialFoo is often used in this sense in INET, so we think these parameter names are good as they are. We understand your reasons, but these parameters work differently. For example, if you change the driftRate parameter of a ConstantDriftOscillator while the simulation is running, it will affect the drift immediately, but this isn't the case for the initialDriftRate parameter of RandomDriftOscillator.

For the last point, the described clock servo could even be an optional mode of operation of the Gptp module. Instead of setting the clock time at the current synchronization point, Gptp would overshoot the oscillator compensation factor in a way that causes the slave clock to catch up with the master clock by the time of the next synchronization point.
If the clock servo is implemented as a proportional-integral (PI) controller, it would be nice to be able to configure its parameters (e.g. the proportional and integral gains).
I created a separate issue for the gPTP clock servo feature. |
This allows easier human consumption of the compensation factor, see #837.
The remaining part of this issue, changing the internal representation of the oscillator compensation to PPM, has been pushed to the master branch. There's nothing left, so I'll close this issue. |
We are trying to understand and correlate the gPTP (802.1AS) clock accuracy output analysis to real-world cases. It seems that in OMNeT++/INET no clock syntonization is implemented, and therefore if we simulate gPTP clocks using the clock quality requirements of 802.1AS (±100 ppm or ±50 ppm) we end up with a max Time Error of about 6 us, which is of course out of spec according to 802.1AS.
How do we use the clock analysis for real-world cases?
The ClockDriftShowcase shows this behavior, and our projects match this setup closely.