Bugzilla – Full Text Bug Listing
| Summary: | disabling TCP SACK does not fall back to NewReno behavior | | |
|---|---|---|---|
| Product: | ns-3 | Reporter: | Tom Henderson <tomh> |
| Component: | tcp | Assignee: | natale.patriciello |
| Status: | RESOLVED FIXED | | |
| Severity: | normal | CC: | ns-bugs |
| Priority: | P3 | | |
| Version: | unspecified | | |
| Hardware: | All | | |
| OS: | All | | |
| Attachments: | correct packet trace, incorrect packet trace | | |
Description
Tom Henderson
2017-02-05 21:08:07 EST
natale.patriciello (comment #1):

Hi Tom,

Right now there is no need for inflation/deflation of the window. In fact, after each dupACK the implementation frees one segment from the amount of in-flight bytes. Therefore, if LimitedTransmit is enabled, there is space to send exactly one segment.

Please note that this has the same behavior as the inflating/deflating mechanism; after all, one segment should be sent out.

Tom Henderson:

(In reply to natale.patriciello from comment #1)
> Right now there is no need for inflation/deflation of the window. In fact,
> after each dupack, the implementation is freeing one segment from the amount
> of the in-flight bytes. Therefore, if LimitedTransmit is enabled, there is a
> space to send exactly one segment.
>
> Please note that this has the same behavior as the inflating/deflating
> mechanism. After all, one segment should be sent out.

It does not have the same behavior in the test I mentioned; fewer segments are sent off the top of the window. NewReno sends the additional segments to avoid problems with multiple losses in a window when SACK is not used. The current behavior appears to be Reno in this case, not NewReno. I think we need to make the non-SACK case behave like TCP NewReno, since we are advertising it as such; RFC 6675 behavior applies to the SACK-enabled case.

natale.patriciello:

Can I look at the traces? On my system it is impossible to use NSC. Thanks.

natale.patriciello:

Any news? I'm considering changing the title of the bug, since the NewReno specification is about partial ACK management, and that part seems correct from your explanation.

Tom Henderson:

Temporarily disabled ns3-tcp-cwnd until this is fixed, so as not to obscure other ns-3-dev issues.

Tom Henderson:

Created attachment 2923 [details]
correct packet trace
Created attachment 2924 [details]
incorrect packet trace
Tom Henderson:

For the archive, I'm attaching a correct tcpdump trace (obtained by 'tcpdump -r tcp-cwnd-ood-0-0.pcap -nn -tt' on the PCAP generated by the test) and an incorrect trace. The correct one is from ns-3.26; the incorrect one is from ns-3-dev @ 13073:0ad94b8b6fb4.

The main problem is that some code in CA_DISORDER and CA_RECOVERY assumes that SACK is being used (or RFC 6675 loss recovery), and as a result, NewReno values for cWnd are not correctly set when SACK is disabled. There is an attempt to work around this with TcpTxBuffer::CraftSackOption(), but it is not enough. I chose to restore some ns-3.26 code for the fast retransmit/recovery phases when m_sackEnabled is false.

In testing this, I found that an ns3-tcp-cwnd test vector had an unneeded change event upon leaving fast recovery:

2.93531s 0 Ns3CwndTest:CwndChange(): [DEBUG] Cwnd change event 26 at +2.93531195999999999996s 5092 1608
2.93531s 0 Ns3CwndTest:CwndChange(): [DEBUG] Cwnd change event 27 at +2.93531195999999999996s 1608 2144

so I regenerated the test vectors for that test to reduce the number of change events from 40 to 39 (all other change events now line up). The test 'ns3-tcp-loss.cc' can also be reverted to the ns-3.26 response vectors and code if SACK is configured to false.

Fixed in changeset 13099:373cf44ebc8f.