Bug 255

Summary: Deterministic Behavior May Cause Problems
Product: ns-3
Component: internet
Reporter: Craig Dowell <craigdo>
Assignee: Craig Dowell <craigdo>
Status: RESOLVED WONTFIX
Severity: minor
Priority: P4
Version: pre-release
Hardware: All
OS: All
CC: gjcarneiro, ns-bugs

Description Craig Dowell 2008-07-19 01:00:52 EDT
Consider:
   - Two applications are started at precisely t seconds;
   - The applications generate packets that require ARP resolution at precisely the same time;
   - LOOP:
   - Both ARP protocols schedule a retransmission for precisely t+timeout seconds;
   - The generated ARP requests are sent at precisely t seconds;
   - The device/channel mechanism causes the ARP packets to collide and be discarded;
   - At t+timeout seconds, the ARP protocols retry on both nodes (at precisely the same simulation time);
   - t = t+timeout;
   - goto LOOP.

The result is that ARP retransmissions happen at precisely the same time and collide over and over again.

Our models may be "too deterministic" in some cases. Should models such as ARP be configurable to add a random "fuzz" to their timeouts to simulate real timing uncertainties?
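
For illustration, here is a minimal sketch of the scenario (the topology, helper classes, and timings are my assumptions using the current ns-3 API, not taken from the run that triggered this bug). Two nodes on a shared channel start sending to each other at exactly the same instant, so both ARP requests are generated in the same time step; whether the frames then actually collide depends on the device model:

    // Sketch (assumed ns-3 API): two nodes on a shared CSMA channel, both
    // applications started at precisely the same simulation time, so both
    // ARP requests are generated in the same time step.
    #include "ns3/core-module.h"
    #include "ns3/network-module.h"
    #include "ns3/csma-module.h"
    #include "ns3/internet-module.h"
    #include "ns3/applications-module.h"

    using namespace ns3;

    int main (int argc, char *argv[])
    {
      NodeContainer nodes;
      nodes.Create (2);

      CsmaHelper csma;
      NetDeviceContainer devices = csma.Install (nodes);

      InternetStackHelper stack;
      stack.Install (nodes);

      Ipv4AddressHelper address;
      address.SetBase ("10.1.1.0", "255.255.255.0");
      Ipv4InterfaceContainer interfaces = address.Assign (devices);

      // Node 0 sends to node 1 and node 1 sends to node 0, so both
      // must resolve the peer's address via ARP before sending.
      OnOffHelper onoff0 ("ns3::UdpSocketFactory",
                          InetSocketAddress (interfaces.GetAddress (1), 9));
      OnOffHelper onoff1 ("ns3::UdpSocketFactory",
                          InetSocketAddress (interfaces.GetAddress (0), 9));

      ApplicationContainer apps;
      apps.Add (onoff0.Install (nodes.Get (0)));
      apps.Add (onoff1.Install (nodes.Get (1)));

      // Both applications start at precisely the same instant: t = 1s.
      apps.Start (Seconds (1.0));
      apps.Stop (Seconds (10.0));

      Simulator::Run ();
      Simulator::Destroy ();
      return 0;
    }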
Comment 1 Mathieu Lacage 2008-07-21 11:17:24 EDT
(In reply to comment #0)
> Should models such as ARP be configurable to add a random "fuzz" to their
> timeouts to simulate real timing uncertainties?

There are some cases where this sort of thing is indeed needed. For example, TCP timeouts might need some random fuzzing: if every node uses a 500ms timeout synchronized on time zero, all nodes will be making TCP decisions at the same time, which is pretty bad.
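
As a concrete sketch of that kind of fuzzing (the function names here are mine, and UniformRandomVariable is the class in current ns-3; the tree at the time had a different random-variable API), the fixed timeout would be perturbed by a small per-node random offset before the retry is scheduled:

    // Sketch: desynchronize a fixed retransmission timeout by adding a small
    // uniformly distributed random offset before scheduling the retry.
    #include "ns3/core-module.h"

    using namespace ns3;

    static void
    RetransmitTimeout (void)
    {
      // ... the actual retry logic would go here ...
    }

    static void
    ScheduleRetry (Time timeout)
    {
      // Each node draws its own jitter, so two nodes whose timers expire "at
      // the same time" no longer retransmit in the same time step.
      Ptr<UniformRandomVariable> jitter = CreateObject<UniformRandomVariable> ();
      Time fuzz = MicroSeconds (jitter->GetInteger (0, 1000));
      Simulator::Schedule (timeout + fuzz, &RetransmitTimeout);
    }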

Here, however, I see no reason to do this in ARP. The problem is simply that the applications are fully synchronized, and a user should be able to spot this and react to it. I understand that we could avoid the problem in the first place by adding this fuzziness somewhere, but I would argue that experienced simulation users would find that extremely surprising in this case.

i.e., if we wanted to add some sort of fuzziness, I would make it a NetDevice-specific random tx delay, to model the non-deterministic transfer time from host memory to device memory.
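
A minimal sketch of that alternative (hypothetical function names; nothing like this exists in the tree) would defer each send at the device by a small random delay:

    // Sketch: model the non-deterministic host-memory-to-device-memory
    // transfer with a small random per-packet tx delay at the NetDevice.
    #include "ns3/core-module.h"
    #include "ns3/network-module.h"

    using namespace ns3;

    static void
    StartChannelTransmission (Ptr<Packet> packet)
    {
      // ... the device would actually hit the channel here ...
    }

    static void
    DeviceSend (Ptr<Packet> packet)
    {
      // Two devices triggered in the same time step now reach the channel
      // at slightly different times, breaking the lock-step collisions.
      Ptr<UniformRandomVariable> txJitter = CreateObject<UniformRandomVariable> ();
      Time delay = NanoSeconds (txJitter->GetInteger (0, 500));
      Simulator::Schedule (delay, &StartChannelTransmission, packet);
    }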