Local Clocks

This is a continuation of the local clocks/per-node clock discussion from GSOC2015MpTcpImplementation#On_the_per_node_clock

Developer List Discussion 2019-01

On 24/01/19 at 09:15, Tommaso Pecorella wrote:

Hi all,

there's an unused and yet potentially very interesting feature in ns-3: Node::GetLocalTime.

On 25 Jan 2019, at 02:55, Natale Patriciello wrote:

Matt was working on that as part of his GSoC 4 years ago.

GSOC2015MpTcpImplementation#On_the_per_node_clock

On Feb 1, 2019, at 9:15 AM, Barnes, Peter D. <barnes26@llnl.gov> wrote:

I worked with Matt on this GSoC (gosh was it 4 years ago already?). This turned out to be more complex than we expected:

1. (should have been the easy part) Finding code to simulate noisy clocks. The simple thing would be to just sum samples from a Gaussian (or Poisson) for the actual tick size, but this would give an unrealistic Allan variance (just the noise portion, not the drift, which is typically more significant); see the sketch after this list.

2. Each Node has to support its own Scheduler, in local time. (This is so you don’t have to predict accumulated variation over long times, which makes #1 harder.[1])

3.a For every Now() call you have to decide if it should be GetLocalTime().

3.b For every Schedule() call you have to decide if it should be in local time or absolute time.
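Below is a minimal standalone sketch of the naive approach from item 1, for illustration only (it is not ns-3 code, and the tick size, frequency offset, and jitter values are assumptions): each tick adds a Gaussian jitter sample on top of the nominal tick plus a constant frequency offset, and the constant offset, not the jitter, dominates how far local time drifts from absolute time.

 // Standalone sketch, not part of ns-3; all parameter values are assumptions.
 // The node's clock counts nominal ticks (local time), while each tick
 // actually lasts nominalTick * (1 + freqOffset) plus Gaussian jitter
 // (absolute time). Summing only the Gaussian jitter reproduces the noise
 // portion of the Allan variance, but the drift term dominates.
 #include <iostream>
 #include <random>

 int main()
 {
     std::mt19937 rng(1);
     const double nominalTick = 1e-3;   // 1 ms nominal tick (assumed)
     const double freqOffset  = 50e-6;  // 50 ppm frequency offset (assumed)
     const double jitterSigma = 1e-7;   // white jitter per tick, in s (assumed)
     std::normal_distribution<double> jitter(0.0, jitterSigma);

     double localTime = 0.0;  // what the node's clock reads
     double absTime   = 0.0;  // true (simulator) time
     for (int i = 0; i < 1000000; ++i)
     {
         localTime += nominalTick;
         absTime   += nominalTick * (1.0 + freqOffset) + jitter(rng);
     }
     std::cout << "local " << localTime << " s, absolute " << absTime
               << " s, difference " << (absTime - localTime) << " s\n";
 }

After about 1000 s of local time, the deterministic 50 ppm offset accounts for roughly 0.05 s of divergence, while the accumulated Gaussian jitter contributes only on the order of 0.1 ms, illustrating the point that the noise portion alone gives an unrealistic picture.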

Happy to talk more, Peter

[1] Longer explanation:

Suppose we’re at time 100 (absolute: 100_a), and want to schedule an event A for +100 (local: +100_l), which will end up at ~200_a. We could look up the Allan deviation for tau=100 (or equivalently the time deviation), and sample from that to determine the absolute delay to schedule the event.

Now the simulation proceeds and at time 110_a needs to schedule an event B for +80_l, which will end up at ~190_a, so we follow the same process. The difficulty is that each event inserted before A subdivides the original interval, but the shorter intervals will no longer reproduce the intended Allan deviation. In this example we sampled at tau = 100_l, then at tau = 80_l, leaving a gap of 10_l *which we never sampled from the full Allan deviation*.

I think (but since I don’t understand #1 above, it’s more like conjecture) that the right way is to store the events in a local-time schedule, then, as we prepare to execute the next event E @ t_E,l, sample the noisy clock for t_E,l - t’_l (where t’_l is the local time of the previous event), in order to compute t_E,a, the absolute time of event E.
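A minimal standalone sketch of the approach conjectured above (not ns-3 code; the event structure, names, and the toy clock-noise model are assumptions): events are kept in a queue ordered by local time, and only when the next event is popped is the noisy clock sampled, over the local-time gap since the previously executed event, to obtain its absolute execution time.

 // Standalone sketch, not ns-3 code. Events are keyed on local time; the
 // noisy clock is sampled only over the local-time gap between consecutive
 // events, so no long-horizon variation has to be predicted in advance.
 #include <cmath>
 #include <cstdio>
 #include <functional>
 #include <queue>
 #include <random>
 #include <vector>

 struct Event
 {
     double localTime;                 // t_E,l: scheduled local time
     std::function<void(double)> run;  // callback, receives absolute time
 };

 struct Later  // orders the queue so the earliest local time is on top
 {
     bool operator()(const Event& a, const Event& b) const
     {
         return a.localTime > b.localTime;
     }
 };

 int main()
 {
     std::mt19937 rng(1);

     // Assumed toy clock model: constant 50 ppm frequency offset plus white
     // noise growing as sqrt(interval). A real implementation would sample
     // from a model matched to a measured Allan deviation instead.
     auto NoisyInterval = [&rng](double localDelta) {
         std::normal_distribution<double> noise(0.0, 1e-4 * std::sqrt(localDelta));
         return localDelta * (1.0 + 50e-6) + noise(rng);
     };

     std::priority_queue<Event, std::vector<Event>, Later> schedule;
     auto Print = [](double tAbs) { std::printf("ran at absolute %.6f\n", tAbs); };

     // Two events, loosely following the example above: A due at 100_l and
     // B due at 80_l (both measured from local time 0 here, for simplicity).
     schedule.push({100.0, Print});
     schedule.push({80.0, Print});

     double prevLocal = 0.0;  // t'_l: local time of the previous event
     double prevAbs   = 0.0;  // absolute time of the previous event
     while (!schedule.empty())
     {
         Event e = schedule.top();
         schedule.pop();
         // Sample the noisy clock for t_E,l - t'_l to compute t_E,a.
         double tAbs = prevAbs + NoisyInterval(e.localTime - prevLocal);
         e.run(tAbs);
         prevLocal = e.localTime;
         prevAbs   = tAbs;
     }
 }

Because B (80_l) executes before A (100_l), A’s absolute time is obtained by sampling only the remaining 20_l after B, so no sub-interval is left unsampled, which is the problem described in the footnote.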

Possibly Relevant Literature

Series on the Dynamic Allan Variance

  • "The Dynamic Allan Variance," L. Galleani, 2009. (pdf)
  • "The Dynamic Allan Variance II: A Fast Computational Algorithm," L. Galleani, 2019. (pdf)
  • "The Dynamic Allan Variance III: Confidence and Detection Surfaces," L. Galleani, 2019. (pdf)
  • "The Dynamic Allan VarianceIV: Characterization of Atomic Clock Anomalies," L. Galleani, 2009. (pdf)
  • "The Dynamic Allan VarianceV: Recent Advances in Dynamic Stability Analysis," L. Galleani, 2009. (pdf)

Simulation of Noisy Clocks

  • "Discrete Simulation Of Power Law Noise," N.Jeremy Kasdin and Todd Walter, 1992. (pdf)
    Includes C source code.
  • "Discrete Simulation of Power Law Noise," Neil Ashby, 2011. (pdf)
  • "colorednoise," Julia Leute, 2018. (colorednoise GitHub Python repo, now merged into next reference)
  • "allantools," Anders Wallin, 2018. (allantools GitHub Python repo)