Bug 2388 - Let simple links automatically calculate BDP and set the right queue size
Status: RESOLVED FIXED
Product: ns-3
Classification: Unclassified
Component: traffic-control
Version: pre-release
Hardware: PC Linux
Importance: P5 enhancement
Assigned To: Stefano Avallone
Reported: 2016-04-26 09:26 EDT by natale.patriciello
Modified: 2018-03-09 09:21 EST



Attachments
outline for possible tutorial extension (2.52 KB, text/plain)
2016-11-24 11:32 EST, Tom Henderson
possible tutorial extension v2 (5.76 KB, text/plain)
2018-03-04 10:43 EST, Stefano Avallone

Description natale.patriciello 2016-04-26 09:26:13 EDT
In ns-3.24, we had (by default) a 100-packet DropTail queue on simple links (e.g. p2p and csma).

Now, by default, we have a 1000-packet queue. I am not convinced this is right; it should probably have stayed the same as in the previous release. However, the raw values don't matter: the point is that no default value is good for every link, and at the same time we cannot ask users to set it manually every time (or we would have to update the tutorials and manuals). Without changing the default value, bufferbloat could distort simulation outcomes.

Systemd uses CoDel as the default, which basically eliminates the dimensioning problem. This would be fine for me as well, but it may introduce too many differences from the previous release.

So, my proposal is option 1): allow simple net devices (I was thinking of p2p and csma, basically) to set, through their helpers, the right queue size (equal to the BDP, as RFC 7567 indicates) at the TC layer, while keeping DropTail as the default queue. Option 2): use CoDel with a 1000-packet limit. What do you think? Is it feasible?
Comment 1 Stefano Avallone 2016-04-26 13:50:38 EDT
Well, the default queue disc in Linux is still pfifo_fast (systemd overrides this) with a capacity equal to the txqueuelen of the interface, which is usually 1000 for Ethernet and Wifi. Thus, ns-3 has the same default configuration as Linux.

That said, we can however change the defaults in ns-3 to mitigate the bufferbloat problem. Option 1) you propose should be doable, but a bit tricky; option 2) should be easy to implement. Anyway, I would like to hear other opinions on whether we want to change defaults and, in that case, which default to use. Keep also in mind that other bufferbloat-related features such as fq-codel and BQL should be ready for review soon.
Comment 2 Tom Henderson 2016-04-26 14:08:24 EDT
(In reply to Stefano Avallone from comment #1)
> Well, the default queue disc in Linux is still pfifo_fast (systemd overrides
> this) with a capacity equal to the txqueuelen of the interface, which is
> usually 1000 for Ethernet and Wifi. Thus, ns-3 has the same default
> configuration as Linux.
> 
> That said, we can however change the defaults in ns-3 to mitigate the
> bufferbloat problem. Option 1) you propose should be doable, but a bit
> tricky; option 2) should be easy to implement. Anyway, I would like to hear
> other opinions on whether we want to change defaults and, in that case,
> which default to use. Keep also in mind that other bufferbloat-related
> features such as fq-codel and BQL should be ready for review soon.

I believe a 64-packet default is used for routers (e.g. Cisco), although in practice more complicated queueing than pfifo_fast, such as CBWFQ+WRED, is typically deployed on routers.

I tend to agree that 1000 packets, left unmodified and hidden from the user, is probably the wrong setting.

In my recent work on point-to-point links and routers, I don't recall seeing queue limits autotuned based on BDP detection.  I think it might be better to change the examples and tutorials to explicitly set the length based on the configured BDP, and leave a comment in the code that "this queue length is set based on the assumed BDP according to RFC 7567".  This makes the user aware that there is something to pay attention to.  Changing the default to CoDel would also perhaps work.
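As a non-authoritative sketch of what such an explicit setting could look like in an example program (attribute and helper names here follow the ns-3 API after the queue-size refactor and may differ across releases; the "84p" value is the assumed-BDP figure for a hypothetical 10 Mb/s / 100 ms RTT link):

```cpp
// Sketch only: sizing the traffic-control queue to the assumed BDP.
// Verify helper and attribute names against the ns-3 release you target.
//
// This queue length is set based on the assumed BDP according to RFC 7567:
// 10e6 b/s * 0.1 s / 8 = 125000 bytes, roughly 84 full-size packets.
TrafficControlHelper tch;
tch.SetRootQueueDisc("ns3::PfifoFastQueueDisc",
                     "MaxSize", StringValue("84p"));
QueueDiscContainer qdiscs = tch.Install(devices);
```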
Comment 3 Stefano Avallone 2016-04-26 17:05:16 EDT
> In my recent work on point-to-point links and routers, I don't recall
> autotuning queue limit setting based on BDP detection.  I think it might be
> better to change the examples and tutorials and explicitly set the length,
> based on the configured BDP, and leave a comment in the code that "this
> queue length is set based on assumed BDP according to RFC 7567".  This makes
> the user aware that there is something to pay attention to.  

Just to be sure I understood correctly: are you suggesting setting the pfifo_fast limit to the BDP only in the examples and leaving the source code unchanged (i.e., limit equal to 1000)? Or do you want to change the default value in the source code?

> Changing default to Codel would also perhaps work as a default.
Comment 4 natale.patriciello 2016-11-24 03:42:10 EST
Hi,

may I update the tutorial (the "building topologies" section) and the second.cc example? I would like to add a part on queue sizing. We are still using pfifo_fast with 1000 packets, right?
Comment 5 Tom Henderson 2016-11-24 11:32:13 EST
(In reply to natale.patriciello from comment #4)
> Hi,
> 
> can I update the tutorial ("building topologies" section) and the second.cc
> example ? I would like to add the part on queue sizing. Are we still using
> the pfifo_fast 1000 packets, right ?

Yes, 1000 default.

I agree with updating the tutorial.  Given the current tutorial structure, it might make sense to introduce the problem (as a warning) in the building topologies section, and then later, after tracing has been introduced, add a new example program that traces end-to-end latency using an application that generates congestion in the queues; at that point, show the user how to swap out pfifo for AQM, etc.  The second.cc example has only a very trivial echo client/server application, and tracing has not yet been introduced at that point.

See the attached outline for this first section introducing queues at some point in the 'building topologies' section. What do you think?
Comment 6 Tom Henderson 2016-11-24 11:32:55 EST
Created attachment 2684 [details]
outline for possible tutorial extension
Comment 7 Stefano Avallone 2016-11-24 12:37:05 EST
Natale, you are of course more than welcome to update the tutorial and add an overview of the various queues present in the stack and an example showing how to change the default values. Tom, the outline you attached is an excellent starting point. I am willing to contribute to this section, though I will likely not be able to do so in the next few days. Perhaps we should share a document so that everyone can access the latest version?
Comment 8 natale.patriciello 2018-03-01 04:56:14 EST
Hello,

does the tutorial have to change in light of the recent queue attribute change? Can we merge this new section as well, in order to close the bug?
Comment 9 Tom Henderson 2018-03-01 12:08:07 EST
(In reply to natale.patriciello from comment #8)
> Hello,
> 
> does the tutorial have to change in light of recent queue attribute change?
> Can we merge this new section as well, in order to close the bug?

I agree that now is a good time to finish this off.  I suggest that Stefano include this within the scope of his queue work at https://github.com/stavallo/ns-3-dev-git/commits/queue-disc-limit
Comment 10 Stefano Avallone 2018-03-04 10:43:44 EST
Created attachment 3072 [details]
possible tutorial extension v2

I expanded the draft prepared by Tom. Please let me know what you think about it.
Comment 11 Tom Henderson 2018-03-04 11:41:07 EST
(In reply to Stefano Avallone from comment #10)
> Created attachment 3072 [details]
> possible tutorial extension v2
> 
> I expanded the draft prepared by Tom. Please let me know what you think
> about it.

Looks good, moving it to last call.

Where you say, e.g.:

"At the traffic-control layer, these are the options:

* pfifo_fast: The default maximum size is 1000 packets
..."

I suggest referring to them by their ns-3 class names, not the Linux names (or putting the Linux name in parentheses).

For UAN, there is a default 10-packet queue at the MAC layer.

For LTE, there is queueing at the RLC layer, and packets are also queued at the PHY for mapping into transmission opportunities (the RLC UM default buffer is 10 * 1024 bytes; RLC AM does not have a buffer limit).
Comment 12 Stefano Avallone 2018-03-09 09:21:50 EST
Applied the changes suggested by Tom and pushed to ns-3-dev.

Thanks!