New TCP Socket Architecture


This page describes an ongoing rework of the NS-3 TCP implementation.



In the following, a new architecture for the TCP socket implementation is proposed. It is to replace the old TcpSocketImpl class in NS-3.8 so that different flavors of TCP can be implemented easily.

The current working repository is located at http://code.nsnam.org/adrian/ns-3-tcp

Old Structure

As of changeset 6273:8d70de29d514 in the Mercurial repository, TCP simulation is implemented by the class TcpSocketImpl, in src/internet-stack/tcp-socket-impl.h and src/internet-stack/tcp-socket-impl.cc. The TcpSocketImpl class implements TCP NewReno, despite the Doxygen comment claiming that it implements Tahoe.

The TcpSocketImpl class is derived from the TcpSocket class, which, in turn, is derived from the Socket class. The TcpSocket class is merely an empty class defining the interface for attribute get/set. Examples of the attributes configured through the TcpSocket interface are the send and receive buffer sizes, the initial congestion window size, etc. The Socket class, in contrast, provides the interface for the L7 application to call.
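In outline, the hierarchy just described is as follows (a sketch only; the member lists are illustrative, not the actual declarations):

 class Socket : public Object            // the interface the L7 application calls:
 {                                       // Bind(), Connect(), Listen(), Send(), Recv(), Close()...
 };
 
 class TcpSocket : public Socket         // empty class, attribute get/set only:
 {                                       // buffer sizes, initial congestion window, etc.
 };
 
 class TcpSocketImpl : public TcpSocket  // the concrete implementation (NewReno)
 {
 };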

How to use TcpSocketImpl

TCP state machine transitions are defined in tcp-l4-protocol.h and tcp-l4-protocol.cc. The TcpSocketImpl class does not maintain the transition rules itself but keeps track of the state of the current socket.

When an application needs a TCP connection, it has to get a socket from TcpL4Protocol::CreateSocket(). This call allocates a TcpSocketImpl object and configures it (e.g. assigns it to a particular node). The TcpL4Protocol object is unique on a TCP/IP stack and serves as a mux/demux layer for the real sockets, namely, the TcpSocketImpl objects.

Once the TcpSocketImpl object is created, it is in the CLOSED state. The application can instruct it to Bind() to a port number and then Connect() or Listen(), as a traditional BSD socket does.

The Bind() call registers the socket's port in the TcpL4Protocol object as an Ipv4EndPoint and sets up the callback functions (by way of FinishBind()) so that mux/demux can be done.

The Listen() call puts the socket into the LISTEN state. The Connect() call, on the other hand, puts the socket into the SYN_SENT state and initiates the three-way handshake.

The application can close the socket by calling Close(), which, in turn, destroys the Ipv4EndPoint after the FIN packets are exchanged.

Once the socket is ready to send, the application invokes the Send() call in TcpSocketImpl. The receiver-side application calls Recv() to get the packet data.
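Putting these calls together, a minimal sketch of the socket lifecycle from the application's point of view, using the public Socket API (the address and sizes are arbitrary examples):

 // Obtain a TCP socket via the factory; this ends up in TcpL4Protocol::CreateSocket()
 Ptr<Socket> sock = Socket::CreateSocket (node, TcpSocketFactory::GetTypeId ());
 sock->Bind ();                                                        // register an Ipv4EndPoint
 sock->Connect (InetSocketAddress (Ipv4Address ("10.1.2.2"), 50000));  // SYN_SENT, start handshake
 sock->Send (Create<Packet> (1000));                                   // append to the send buffer
 sock->Close ();                                                       // FIN once pending data is sent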

Inside TcpSocketImpl

The operation of TcpSocketImpl is carried out in two parallel mechanisms. To communicate with higher-level applications, the Send() and Recv() calls deal with the buffers directly: they append data to the send buffer and retrieve data from the receive buffer, respectively. To send and receive data over the network through the lower layers, the functions ProcessEvent(), ProcessAction(), and ProcessPacketAction() are called.

Two functions are crucial to trigger these three process functions. The function ForwardUp() is invoked when the lower layer (Ipv4L3Protocol) receives a packet destined for this TCP socket. The function SendPendingData() is invoked whenever the application appends anything to the send buffer.

ForwardUp() converts the incoming packet's TCP flags into an event. Then it updates the current state machine by calling ProcessEvent() and performs the subsequent action with ProcessPacketAction(). ProcessEvent() handles only connection set-up and tear-down. All other cases are handled by ProcessPacketAction() and ProcessAction(). The function ProcessPacketAction() handles the cases that need to reference the TCP header of the packet; the other cases are handed over to ProcessAction().

SendPendingData() manages the send window. When the send window is big enough to send a packet, it extracts data from the send buffer, packages it with a TCP header, and then passes it to the lower layers.
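Conceptually, this is a window-limited loop like the sketch below; the helper names (HaveDataToSend(), AvailableWindow(), ExtractFromSendBuffer(), SendToLowerLayer()) are illustrative, not the actual code:

 // Sketch of the SendPendingData() idea: send while the window allows
 while (HaveDataToSend () && AvailableWindow () >= m_segmentSize)
   {
     Ptr<Packet> p = ExtractFromSendBuffer (m_segmentSize);  // next segment of payload
     TcpHeader header;
     header.SetSequenceNumber (m_nextTxSequence);            // stamp the sequence number
     m_nextTxSequence += p->GetSize ();                      // advance past the payload
     p->AddHeader (header);                                  // package it with a TCP header
     SendToLowerLayer (p);                                   // hand it over to TcpL4Protocol
   }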

New Structure

The new structure, the TcpSocketBase class, has the same relationship to the TcpSocket and Socket classes as TcpSocketImpl. However, instead of providing a concrete TCP implementation, it is designed to meet the following goals:

  • Provide only the functions common to all TCP classes, namely, the implementation of the TCP state machine
  • Minimize the code footprint and keep it modular so that it is easier to understand

From a lower layer's point of view, TCP has not changed since 1980: the TCP state machine remains the same. The only difference among TCP variants is in the congestion control and fairness distribution. The TcpSocketBase class keeps the state machine operation, i.e. the ProcessEvent() and ProcessAction() calls, the same as in the TcpSocketImpl class. These functions, however, will be tidied up in the future.

In the current TcpSocketBase class, two auxiliary classes are used, namely, TcpRxBuffer and TcpTxBuffer.

The TcpRxBuffer is the receive (Rx) buffer for TCP. It accepts packet fragments at any position. The call TcpRxBuffer::Add() inserts a packet into the Rx buffer; it obtains the sequence number of the data from the provided TcpHeader. The Rx buffer has a maximum buffer size, which defaults to 32KiB and can be set by TcpRxBuffer::SetMaxBufferSize(). The sequence number of the head of the buffer can be set by TcpRxBuffer::SetNextRxSeq(). This is supposed to be called once the connection is established, so that the buffer can report out-of-sequence packets. TcpRxBuffer handles all the reordering work so that TcpSocketBase can simply extract from it with a single call, TcpRxBuffer::Extract().

The TcpTxBuffer is the transmit (Tx) buffer for TCP. The upper-layer application sends data to TcpSocketBase, and the data is then appended to the TcpTxBuffer by the TcpTxBuffer::Add() call. Similar to TcpRxBuffer, its maximum buffer size can be set by TcpTxBuffer::SetMaxBufferSize(). Appending data fails if the buffer would then store more data than its maximum size. Because appending to TcpTxBuffer is supposed to be sequential and without overlap, the TcpTxBuffer::Add() call merely puts the data at the end of a list. TcpTxBuffer, however, supports extracting data from anywhere in the buffer. This is done by the TcpTxBuffer::CopyFromSeq() call.
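The intended division of labour is sketched below; the method names are the ones above, but the exact signatures are assumptions:

 // Tx side: sequential append, random-access extraction
 TcpTxBuffer txBuf;
 txBuf.SetMaxBufferSize (65536);                     // Add() fails once this would be exceeded
 txBuf.Add (Create<Packet> (1000));                  // append at the tail of the list
 Ptr<Packet> seg = txBuf.CopyFromSeq (536, SequenceNumber32 (1));  // extract from anywhere
 
 // Rx side: insert at any position, extract in order
 TcpRxBuffer rxBuf;
 rxBuf.SetMaxBufferSize (32768);                     // the 32KiB default, made explicit
 rxBuf.SetNextRxSeq (SequenceNumber32 (1));          // upon connection establishment
 TcpHeader h;
 h.SetSequenceNumber (SequenceNumber32 (1));         // Add() reads the position from the header
 rxBuf.Add (seg, h);
 Ptr<Packet> data = rxBuf.Extract (32768);           // reordering already done inside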

Operation

Although the API (specifically, the public functions) did not change from TcpSocketImpl to TcpSocketBase, the internal operation of the two classes is vastly different. The most important change is the breakdown of the TCP state machine into different functions. This design is in alignment with the TCP code in Linux. The change of state is done explicitly in the various functions instead of being looked up in a state transition table. Accordingly, in TcpSocketBase, the functions ProcessEvent, ProcessAction, and ProcessPacketAction are removed.

The following describes the operations involved in the different interactions with the upper and lower layers:

Bind

The upper layer (application) can call Bind() in TcpSocketBase to bind a socket to an address/port. It allocates an end point (i.e. a mux/demux hook in TcpL4Protocol) and sets up callback functions via SetupCallback(). One of the most important callback functions is ForwardUp(), which is invoked when a packet is passed from the lower layers to this TCP socket.

Connect

The upper layer (application) initiates a connection by calling Connect(). It configures the end point to specify the peer's address, sends a SYN packet to initiate the three-way handshake, and moves to the SYN_SENT state. The exception is when this socket already has a connection, in which case a RST packet is sent and the connection is torn down. This state checking is done in DoConnect().

Listen

Instead of actively starting a connection, an application can also wait for an incoming connection by calling Listen(). All this does is move the socket from the CLOSED state to the LISTEN state. If the socket was not in the CLOSED state, an error is reported.

Close

When the application decides to close the connection, it calls Close(). This function checks whether the close can be done immediately, in which case DoClose() is called. If not, it asserts m_closeOnEmpty so that the close is withheld until all data are transmitted. The close, either by way of DoClose() or by the packet-sending routines with m_closeOnEmpty asserted, sends a FIN packet to the peer.

Application send data

Data is sent by the function Send(). The function SendTo(), which allows an address to be specified as a parameter, is identical to Send(). It stores the supplied data into m_txBuffer and calls SendPendingData() if the state allows transmission of data. SendPendingData() is basically a loop that sends as much data as possible within the limit of the sending window. It extracts data from m_txBuffer and packages the outgoing packet with a TCP header. Once the packet is ready, it passes the packet to m_tcp by invoking TcpL4Protocol::SendPacket().
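From the application side, the usual pattern (cf. WriteUntilBufferFull in the test programs below) is to keep writing while m_txBuffer has room; a sketch using the standard Socket API:

 // Keep writing until the socket's send buffer is full;
 // totalBytes is how much the application wants to send in total
 uint32_t sent = 0;
 while (sent < totalBytes && sock->GetTxAvailable () > 0)   // room left in m_txBuffer?
   {
     uint32_t size = std::min (totalBytes - sent, sock->GetTxAvailable ());
     int written = sock->Send (Create<Packet> (size));      // append, then SendPendingData() runs
     if (written <= 0) break;                               // buffer full or error
     sent += written;
   }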

Application receive data

The application can call Recv() or RecvFrom() to extract data from the TCP socket. RecvFrom() is identical to Recv() except that it also returns the remote peer's address. Recv() extracts data from m_rxBuffer and returns it.
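On the receiving side, the usual NS-3 pattern is to register a receive callback and drain the socket there; a minimal sketch:

 // Invoked whenever new data is available in m_rxBuffer
 void HandleRead (Ptr<Socket> socket)
 {
   Ptr<Packet> packet;
   Address from;
   while ((packet = socket->RecvFrom (from)))   // Recv() is the same minus the peer address
     {
       // consume packet->GetSize () bytes of application data here
     }
 }
 
 // registration, done once after the socket is created:
 sock->SetRecvCallback (MakeCallback (&HandleRead));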

Lower layer forward an incoming packet to socket

When the lower layer (e.g. Ipv4L3Protocol) receives a TCP packet, it is passed on to TcpL4Protocol, and the packet is forwarded to the socket if it matches the fingerprint in the socket's end point. The forwarding function is a callback function in the end point; in TcpSocketBase, it is ForwardUp(). This function does three tasks: (1) invoke the RTT calculation if the incoming packet has ACK asserted, (2) adjust the Rx window size to the value reported by the peer, (3) based on the current state, invoke the corresponding processing function to handle the incoming packet. The last one is implemented as a switch structure with each state handled independently.

Basically, there is a process function for each state; this mimics the behaviour of Linux. For example, in tcp_input.c of the Linux kernel, there is a function tcp_rcv_established() to handle all incoming packets when the TCP socket is in the ESTABLISHED state. In TcpSocketBase, this role falls to the function ProcessEstablished(). The similar functions are ProcessListen(), ProcessSynSent(), ProcessSynRcvd(), ProcessWait(), ProcessClosing() and ProcessLastAck(). The function ProcessWait() is responsible for the states CLOSE_WAIT, FIN_WAIT_1, and FIN_WAIT_2. The rest are self-explanatory.
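The dispatch in ForwardUp() therefore has the following shape (a sketch; the parameter lists of the process functions are assumptions):

 switch (m_state)
   {
   case ESTABLISHED: ProcessEstablished (packet, tcpHeader); break;
   case LISTEN:      ProcessListen (packet, tcpHeader, fromAddress); break;
   case SYN_SENT:    ProcessSynSent (packet, tcpHeader); break;
   case SYN_RCVD:    ProcessSynRcvd (packet, tcpHeader, fromAddress); break;
   case CLOSE_WAIT:
   case FIN_WAIT_1:
   case FIN_WAIT_2:  ProcessWait (packet, tcpHeader); break;   // one function, three states
   case CLOSING:     ProcessClosing (packet, tcpHeader); break;
   case LAST_ACK:    ProcessLastAck (packet, tcpHeader); break;
   default:          break;   // remaining states omitted in this sketch
   }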

Differences in Behaviour

From the packet trace, two behavioural differences can be observed between TcpSocketImpl and TcpSocketBase.

First, when a socket moves from the SYN_RCVD state to ESTABLISHED, TcpSocketBase sets m_delAckCount to m_delAckMaxCount so that the first incoming data is acknowledged immediately. This is not done in TcpSocketImpl. Thus, in TcpSocketImpl, a delayed-ACK timeout must elapse between the sender's first and second data packets. The following excerpt from tcp-long-transfer.tr in the reference traces shows the delayed-ACK timeout blocking the sender from sending the second data packet:

r 0.0602016 /NodeList/2/DeviceList/0/$ns3::PointToPointNetDevice/MacRx ns3::Ipv4Header (tos 0x0 ttl 63 id 1 protocol 6 offset 0 flags [none] length: 40 10.1.3.1 > 10.1.2.2) ns3::TcpHeader (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535)
r 0.0610928 /NodeList/2/DeviceList/0/$ns3::PointToPointNetDevice/MacRx ns3::Ipv4Header (tos 0x0 ttl 63 id 2 protocol 6 offset 0 flags [none] length: 576 10.1.3.1 > 10.1.2.2) ns3::TcpHeader (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535) Payload Fragment [0:536]
+ 0.261093 /NodeList/2/DeviceList/0/$ns3::PointToPointNetDevice/TxQueue/Enqueue ns3::PppHeader (Point-to-Point Protocol: IP (0x0021)) ns3::Ipv4Header (tos 0x0 ttl 64 id 1 protocol 6 offset 0 flags [none] length: 40 10.1.2.2 > 10.1.3.1) ns3::TcpHeader (50000 > 49153 [ ACK ] Seq=1 Ack=537 Win=65535)
- 0.261093 /NodeList/2/DeviceList/0/$ns3::PointToPointNetDevice/TxQueue/Dequeue ns3::PppHeader (Point-to-Point Protocol: IP (0x0021)) ns3::Ipv4Header (tos 0x0 ttl 64 id 1 protocol 6 offset 0 flags [none] length: 40 10.1.2.2 > 10.1.3.1) ns3::TcpHeader (50000 > 49153 [ ACK ] Seq=1 Ack=537 Win=65535)
r 0.271126 /NodeList/1/DeviceList/1/$ns3::PointToPointNetDevice/MacRx ns3::Ipv4Header (tos 0x0 ttl 64 id 1 protocol 6 offset 0 flags [none] length: 40 10.1.2.2 > 10.1.3.1) ns3::TcpHeader (50000 > 49153 [ ACK ] Seq=1 Ack=537 Win=65535)

In the above, at time 0.0610928, the first data packet arrives at the receiver node (node 2). Only at time 0.261093, i.e. 0.2 seconds later, is the ACK sent. In the meantime, nothing is sent from the sender because its sending window is only one packet wide and the outstanding packet is not yet acknowledged.
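This delayed-ACK behaviour is governed by attributes of the TcpSocket class; for instance, acknowledging every segment immediately would be configured as:

 Config::SetDefault ("ns3::TcpSocket::DelAckCount", UintegerValue (1));            // ACK every segment
 Config::SetDefault ("ns3::TcpSocket::DelAckTimeout", TimeValue (Seconds (0.2)));  // the 0.2s timeout above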

The second difference is at termination. In TcpSocketImpl, if a FIN packet is piggybacked on a data packet, two back-to-back ACK packets are sent in response, one for the data and one for the FIN. In TcpSocketBase, we avoid this by sending only one ACK, for the FIN, because the FIN's sequence number is one plus the last data byte's sequence number. The following excerpt from tcp-long-transfer.tr shows the behaviour of TcpSocketImpl:

r 2.35371 /NodeList/2/DeviceList/0/$ns3::PointToPointNetDevice/MacRx ns3::Ipv4Header (tos 0x0 ttl 63 id 3733 protocol 6 offset 0 flags [none] length: 224 10.1.3.1 > 10.1.2.2) ns3::TcpHeader (49153 > 50000 [ FIN  ACK ] Seq=1999817 Ack=1 Win=65535) Payload Fragment [784:888] Payload (size=80)
+ 2.35371 /NodeList/2/DeviceList/0/$ns3::PointToPointNetDevice/TxQueue/Enqueue ns3::PppHeader (Point-to-Point Protocol: IP (0x0021)) ns3::Ipv4Header (tos 0x0 ttl 64 id 1867 protocol 6 offset 0 flags [none] length: 40 10.1.2.2 > 10.1.3.1) ns3::TcpHeader (50000 > 49153 [ ACK ] Seq=1 Ack=2000001 Win=65535)
- 2.35371 /NodeList/2/DeviceList/0/$ns3::PointToPointNetDevice/TxQueue/Dequeue ns3::PppHeader (Point-to-Point Protocol: IP (0x0021)) ns3::Ipv4Header (tos 0x0 ttl 64 id 1867 protocol 6 offset 0 flags [none] length: 40 10.1.2.2 > 10.1.3.1) ns3::TcpHeader (50000 > 49153 [ ACK ] Seq=1 Ack=2000001 Win=65535)
+ 2.35371 /NodeList/2/DeviceList/0/$ns3::PointToPointNetDevice/TxQueue/Enqueue ns3::PppHeader (Point-to-Point Protocol: IP (0x0021)) ns3::Ipv4Header (tos 0x0 ttl 64 id 1868 protocol 6 offset 0 flags [none] length: 40 10.1.2.2 > 10.1.3.1) ns3::TcpHeader (50000 > 49153 [ ACK ] Seq=1 Ack=2000002 Win=65535)
- 2.35374 /NodeList/2/DeviceList/0/$ns3::PointToPointNetDevice/TxQueue/Dequeue ns3::PppHeader (Point-to-Point Protocol: IP (0x0021)) ns3::Ipv4Header (tos 0x0 ttl 64 id 1868 protocol 6 offset 0 flags [none] length: 40 10.1.2.2 > 10.1.3.1) ns3::TcpHeader (50000 > 49153 [ ACK ] Seq=1 Ack=2000002 Win=65535)

In the current repository, one can search for the keyword "old NS-3" in tcp-socket-base.cc for these two differences. The current code in the repository is made to perform exactly the same as TcpSocketImpl so that it can pass the regression tests.

Pluggable Congestion Control in Linux TCP

The next step would be to port the pluggable congestion control from Linux to NS-3. The ideal outcome would be a converter that takes Linux source code as input and produces an NS-3 module for each TCP congestion control variant. In linux/include/tcp.h, the following structure is defined:

 struct tcp_congestion_ops {
       struct list_head        list;
       unsigned long flags;
 
       /* initialize private data (optional) */
       void (*init)(struct sock *sk);
       /* cleanup private data  (optional) */
       void (*release)(struct sock *sk);
 
       /* return slow start threshold (required) */
       u32 (*ssthresh)(struct sock *sk);
       /* lower bound for congestion window (optional) */
       u32 (*min_cwnd)(const struct sock *sk);
       /* do new cwnd calculation (required) */
       void (*cong_avoid)(struct sock *sk, u32 ack, u32 in_flight);
       /* call before changing ca_state (optional) */
       void (*set_state)(struct sock *sk, u8 new_state);
       /* call when cwnd event occurs (optional) */
       void (*cwnd_event)(struct sock *sk, enum tcp_ca_event ev);
       /* new value of cwnd after loss (optional) */
       u32  (*undo_cwnd)(struct sock *sk);
       /* hook for packet ack accounting (optional) */
       void (*pkts_acked)(struct sock *sk, u32 num_acked, s32 rtt_us);
       /* get info for inet_diag (optional) */
       void (*get_info)(struct sock *sk, u32 ext, struct sk_buff *skb);
 
       char            name[TCP_CA_NAME_MAX];
       struct module   *owner;
 };

A new congestion control for TCP is a copy of this structure with at least the ssthresh and cong_avoid function pointers set. Because the set of variables used in Linux TCP is limited, the ideal way to port it would be to find a one-to-one mapping between Linux TCP's variables and NS-3's variables. Besides the conversion, we should also call, for example, cong_avoid in TcpSocketBase.

This is future work. The current code does not support it yet.
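As a sketch of what a ported module might look like on the NS-3 side, the hooks would map onto virtual functions of TcpSocketBase, in the spirit of the TcpTahoe::NewAck() and TcpReno::Retransmit() calls that appear in the traces below. The class, member names, and signatures here are all assumptions:

 // Hypothetical ported variant: NewAck() plays the role of cong_avoid,
 // Retransmit() recomputes ssthresh on loss (Reno-style numbers for illustration)
 class TcpMyVariant : public TcpSocketBase
 {
 protected:
   virtual void NewAck (SequenceNumber32 const& seq)
   {
     if (m_cWnd < m_ssThresh)
       m_cWnd += m_segmentSize;                       // slow start: one MSS per ACK
     else
       m_cWnd += std::max (1u, m_segmentSize * m_segmentSize / m_cWnd.Get ());  // cong. avoidance
     TcpSocketBase::NewAck (seq);                     // let the base class continue sending
   }
   virtual void Retransmit (void)
   {
     m_ssThresh = std::max (2 * m_segmentSize, m_cWnd.Get () / 2);  // halve the window on RTO
     m_cWnd = m_segmentSize;                                        // restart from one MSS
     TcpSocketBase::Retransmit ();                                  // base class resends
   }
 };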

Verification

The most recent patch set at http://codereview.appspot.com/1702042 includes verification programs. In this section, the results of these programs are presented to show the details of TCP operation.

Running the test programs

There are two test programs in the patch set. They check (1) the operation of the TCP state machine (tcp-testcases); and (2) the behaviour of TCP upon packet losses (tcp-loss-response). Both test programs reside in the directory example/tcp.

Synopsis for tcp-testcases

 ./waf --run "tcp-testcases --testcase=n"

The program sends some data over a TCP flow from a source node to a destination node. The destination node runs a PacketSink over its TCP socket.

The --testcase option for tcp-testcases selects among the different behaviours of TCP, namely:

  • --testcase=0: Send one packet of data to verify connection establishment and closing
  • --testcase=1: Send 100 packets of data to check the sliding window operation
  • --testcase=2: Drop the SYN packet in the three-way handshake
  • --testcase=3: Drop the SYN+ACK packet in the three-way handshake
  • --testcase=4: Drop the ACK packet in the three-way handshake
  • --testcase=5: The connection initiator sends an immediate FIN packet upon receiving SYN+ACK
  • --testcase=6: Simultaneous close
  • --testcase=7: Loss of the close initiator's FIN packet
  • --testcase=8: Loss of the close responder's FIN packet

Besides the testcase option, it also supports --verbose=1 to show the log messages of the TCP code, or --verbose=2 to show a more detailed log. Users can also run with, for example, --tcpModel=ns3::TcpTahoe to use another TCP variant.

tcp-loss-response takes similar options; its main control option is --losses, which indicates how many (consecutive) losses occur in the flow. The allowed range is 0 to 4 inclusive. It also takes the --verbose and --tcpModel options, as well as the option --tracing=1 to trace the cwnd changes.
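Its synopsis, in the same style as above:

 ./waf --run "tcp-loss-response --losses=n"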

Result from tcp-testcases

In order to produce a neater packet trace, the output shown here is filtered with the following Perl snippet:

 # Abbreviate ns-3 ascii trace lines: strip the PPP header note and
 # shorten the node/device paths and header field names
 while(<>) {
   s|ns3::PppHeader \(Point-to-Point Protocol: IP \(0x0021\)\) ||;
   s|/TxQueue||;
   s|/TxQ/|Q|;
   s|NodeList/|N|;
   s|/DeviceList/|D|;
   s|/MacRx||;
   s|/Enqueue||;
   s|/Dequeue||;
   s|/\$ns3::QbbNetDevice||;
   s|/\$ns3::PointToPointNetDevice||;
   s| /|\t|;
   s| ns3::|\t|g;
   s|tos 0x0 ||;
   s|protocol 6 ||;
   s|offset 0 ||;
   s|flags \[none\] ||;
   s|length:|len|;
   s|Header||g;
   s|/PhyRxDrop||;
   print;
 };
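Assuming the snippet is saved as, say, filter.pl, the raw output can then be piped through it:

 ./waf --run "tcp-testcases --testcase=0 --verbose=1" 2>&1 | perl filter.pl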

Case 0

The output of "tcp-testcases --testcase=0 --verbose" is as follows:

TcpTestCases:StartFlow(): Starting flow at time 0.000000000
+ 0.000000000	N0D1	Ipv4 (ttl 64 id 0 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
- 0.000000000	N0D1	Ipv4 (ttl 64 id 0 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
0.000000000 [node 0] TcpSocketBase:DoConnect(): CLOSED -> SYN_SENT
TcpTestCases:WriteUntilBufferFull(): Submitting 1000 bytes to TCP socket
TcpTestCases:WriteUntilBufferFull(): Close socket at 0.000000000
0.000000000 [node 0] TcpSocketBase:Close(): Socket 0x102669070 deferring close, state SYN_SENT
0.000000000 [node 2] TcpSocketBase:Listen(): CLOSED -> LISTEN
r 0.000336000	N1D1	Ipv4 (ttl 64 id 0 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
+ 0.000336000	N1D2	Ipv4 (ttl 63 id 0 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
- 0.000336000	N1D2	Ipv4 (ttl 63 id 0 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
r 0.000672000	N2D1	Ipv4 (ttl 63 id 0 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
0.000672000 [node 2] TcpSocketBase:CompleteFork(): LISTEN -> SYN_RCVD
+ 0.000672000	N2D1	Ipv4 (ttl 64 id 0 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
- 0.000672000	N2D1	Ipv4 (ttl 64 id 0 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
r 0.001008000	N1D2	Ipv4 (ttl 64 id 0 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
+ 0.001008000	N1D1	Ipv4 (ttl 63 id 0 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
- 0.001008000	N1D1	Ipv4 (ttl 63 id 0 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
r 0.001344000	N0D1	Ipv4 (ttl 63 id 0 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
0.001344000 [node 0] TcpSocketBase:ProcessSynSent(): SYN_SENT -> ESTABLISHED
+ 0.001344000	N0D1	Ipv4 (ttl 64 id 1 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535)
- 0.001344000	N0D1	Ipv4 (ttl 64 id 1 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535)
0.001344000 [node 0] TcpSocketBase:SendPendingData(): ESTABLISHED -> FIN_WAIT_1
+ 0.001344000	N0D1	Ipv4 (ttl 64 id 2 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=1 Ack=1 Win=65535) Payload (size=1000)
- 0.001680000	N0D1	Ipv4 (ttl 64 id 2 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=1 Ack=1 Win=65535) Payload (size=1000)
r 0.001680000	N1D1	Ipv4 (ttl 64 id 1 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535)
+ 0.001680000	N1D2	Ipv4 (ttl 63 id 1 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535)
- 0.001680000	N1D2	Ipv4 (ttl 63 id 1 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535)
r 0.002016000	N2D1	Ipv4 (ttl 63 id 1 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535)
0.002016000 [node 2] TcpSocketBase:ProcessSynRcvd(): SYN_RCVD -> ESTABLISHED
r 0.010016000	N1D1	Ipv4 (ttl 64 id 2 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=1 Ack=1 Win=65535) Payload (size=1000)
+ 0.010016000	N1D2	Ipv4 (ttl 63 id 2 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=1 Ack=1 Win=65535) Payload (size=1000)
- 0.010016000	N1D2	Ipv4 (ttl 63 id 2 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=1 Ack=1 Win=65535) Payload (size=1000)
r 0.018352000	N2D1	Ipv4 (ttl 63 id 2 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=1 Ack=1 Win=65535) Payload (size=1000)
+ 0.018352000	N2D1	Ipv4 (ttl 64 id 1 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=1002 Win=65535)
- 0.018352000	N2D1	Ipv4 (ttl 64 id 1 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=1002 Win=65535)
0.018352000 [node 2] TcpSocketBase:DoPeerClose(): ESTABLISHED -> CLOSE_WAIT
+ 0.018352000	N2D1	Ipv4 (ttl 64 id 2 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=1002 Win=65535)
0.018352000 [node 2] TcpSocketBase:DoClose(): CLOSE_WAIT -> LAST_ACK
- 0.018688000	N2D1	Ipv4 (ttl 64 id 2 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=1002 Win=65535)
r 0.018688000	N1D2	Ipv4 (ttl 64 id 1 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=1002 Win=65535)
+ 0.018688000	N1D1	Ipv4 (ttl 63 id 1 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=1002 Win=65535)
- 0.018688000	N1D1	Ipv4 (ttl 63 id 1 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=1002 Win=65535)
r 0.019024000	N1D2	Ipv4 (ttl 64 id 2 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=1002 Win=65535)
+ 0.019024000	N1D1	Ipv4 (ttl 63 id 2 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=1002 Win=65535)
- 0.019024000	N1D1	Ipv4 (ttl 63 id 2 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=1002 Win=65535)
r 0.019024000	N0D1	Ipv4 (ttl 63 id 1 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=1002 Win=65535)
TcpTestCases:CwndTracer(): Moving cwnd from 1000 to 2000 at time 0.019024000 seconds
0.019024000 [node 0] TcpTahoe:NewAck(): In SlowStart, updated to cwnd 2000 ssthresh 65535
0.019024000 [node 0] TcpSocketBase:ProcessWait(): FIN_WAIT_1 -> FIN_WAIT_2
r 0.019360000	N0D1	Ipv4 (ttl 63 id 2 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=1002 Win=65535)
0.019360000 [node 0] TcpSocketBase:ProcessWait(): FIN_WAIT_2 -> TIME_WAIT
+ 0.019360000	N0D1	Ipv4 (ttl 64 id 3 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1002 Ack=2 Win=65535)
- 0.019360000	N0D1	Ipv4 (ttl 64 id 3 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1002 Ack=2 Win=65535)
r 0.019696000	N1D1	Ipv4 (ttl 64 id 3 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1002 Ack=2 Win=65535)
+ 0.019696000	N1D2	Ipv4 (ttl 63 id 3 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1002 Ack=2 Win=65535)
- 0.019696000	N1D2	Ipv4 (ttl 63 id 3 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1002 Ack=2 Win=65535)
r 0.020032000	N2D1	Ipv4 (ttl 63 id 3 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1002 Ack=2 Win=65535)
0.020032000 [node 2] TcpSocketBase:CloseAndNotify(): LAST_ACK -> CLOSED
100.000000000 [node 2] TcpSocketBase:CloseAndNotify(): LISTEN -> CLOSED

From the above, with the lines marked TcpSocketBase, one can verify that the TCP state machine worked correctly. Note that, in the old TCP code, the closing behaviour is slightly different. Usually in a simulation, the server side of a connection is simulated by a PacketSink application and the client side initiates the connect and close operations. In the old code, upon the client invoking Socket::Close(), a FIN packet is sent to the server and the server just responds with an ACK. In the new code, besides the ACK-to-FIN response, another FIN packet is sent if the server side has confirmed that it has nothing else to send. This is the case for PacketSink, as it calls Socket::ShutdownSend() at the beginning. In the old code, the TCP state machine at the client side moves from SYN_SENT to ESTABLISHED to FIN_WAIT_1 and FIN_WAIT_2 but never to TIME_WAIT, as it does not receive a FIN from the server side. We extended this to make the state machine transitions complete.

The last two lines in the output above show that two TCP sockets are closed. The first CLOSED state refers to the socket that handles the connected flow. The second refers to the socket that is listening on port 50000, waiting for a new connection.

Case 1

Output with insignificant lines removed. The TCP socket has delayed ACK disabled and a network buffer of 20 packets:

TcpTestCases:StartFlow(): Starting flow at time 0.000000000
+ 0.000000000	N0D1	Ipv4 (ttl 64 id 0 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
0.000000000 [node 0] TcpSocketBase:DoConnect(): CLOSED -> SYN_SENT
TcpTestCases:WriteUntilBufferFull(): Submitting 1040 bytes to TCP socket
......
TcpTestCases:WriteUntilBufferFull(): Submitting 1040 bytes to TCP socket
TcpTestCases:WriteUntilBufferFull(): Submitting 160 bytes to TCP socket
TcpTestCases:WriteUntilBufferFull(): Close socket at 0.000000000
0.000000000 [node 0] TcpSocketBase:Close(): Socket 0x102669070 deferring close, state SYN_SENT
0.000000000 [node 2] TcpSocketBase:Listen(): CLOSED -> LISTEN
r 0.000672000	N2D1	Ipv4 (ttl 63 id 0 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
0.000672000 [node 2] TcpSocketBase:CompleteFork(): LISTEN -> SYN_RCVD
+ 0.000672000	N2D1	Ipv4 (ttl 64 id 0 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
r 0.001344000	N0D1	Ipv4 (ttl 63 id 0 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
0.001344000 [node 0] TcpSocketBase:ProcessSynSent(): SYN_SENT -> ESTABLISHED
+ 0.001344000	N0D1	Ipv4 (ttl 64 id 1 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535)
+ 0.001344000	N0D1	Ipv4 (ttl 64 id 2 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535) Payload Fragment [0:1000]
r 0.002016000	N2D1	Ipv4 (ttl 63 id 1 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535)
0.002016000 [node 2] TcpSocketBase:ProcessSynRcvd(): SYN_RCVD -> ESTABLISHED
r 0.018352000	N2D1	Ipv4 (ttl 63 id 2 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535) Payload Fragment [0:1000]
+ 0.018352000	N2D1	Ipv4 (ttl 64 id 1 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=1001 Win=65535)
r 0.019024000	N0D1	Ipv4 (ttl 63 id 1 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=1001 Win=65535)
TcpTestCases:CwndTracer(): Moving cwnd from 1000 to 2000 at time 0.019024000 seconds
0.019024000 [node 0] TcpTahoe:NewAck(): In SlowStart, updated to cwnd 2000 ssthresh 65535
+ 0.019024000	N0D1	Ipv4 (ttl 64 id 3 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1001 Ack=1 Win=65535) Payload Fragment [1000:1040] Payload Fragment [0:960]
+ 0.019024000	N0D1	Ipv4 (ttl 64 id 4 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=2001 Ack=1 Win=65535) Payload Fragment [960:1040] Payload Fragment [0:920]
r 0.035696000	N2D1	Ipv4 (ttl 63 id 3 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1001 Ack=1 Win=65535) Payload Fragment [1000:1040] Payload Fragment [0:960]
+ 0.035696000	N2D1	Ipv4 (ttl 64 id 2 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=2001 Win=65535)
r 0.036368000	N0D1	Ipv4 (ttl 63 id 2 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=2001 Win=65535)
TcpTestCases:CwndTracer(): Moving cwnd from 2000 to 3000 at time 0.036368000 seconds
0.036368000 [node 0] TcpTahoe:NewAck(): In SlowStart, updated to cwnd 3000 ssthresh 65535
+ 0.036368000	N0D1	Ipv4 (ttl 64 id 5 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=3001 Ack=1 Win=65535) Payload Fragment [920:1040] Payload Fragment [0:880]
+ 0.036368000	N0D1	Ipv4 (ttl 64 id 6 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=4001 Ack=1 Win=65535) Payload Fragment [880:1040] Payload Fragment [0:840]
r 0.044032000	N2D1	Ipv4 (ttl 63 id 4 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=2001 Ack=1 Win=65535) Payload Fragment [960:1040] Payload Fragment [0:920]
+ 0.044032000	N2D1	Ipv4 (ttl 64 id 3 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=3001 Win=65535)
r 0.044704000	N0D1	Ipv4 (ttl 63 id 3 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=3001 Win=65535)
TcpTestCases:CwndTracer(): Moving cwnd from 3000 to 4000 at time 0.044704000 seconds
0.044704000 [node 0] TcpTahoe:NewAck(): In SlowStart, updated to cwnd 4000 ssthresh 65535
+ 0.044704000	N0D1	Ipv4 (ttl 64 id 7 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=5001 Ack=1 Win=65535) Payload Fragment [840:1040] Payload Fragment [0:800]
+ 0.044704000	N0D1	Ipv4 (ttl 64 id 8 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=6001 Ack=1 Win=65535) Payload Fragment [800:1040] Payload Fragment [0:760]
r 0.053040000	N2D1	Ipv4 (ttl 63 id 5 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=3001 Ack=1 Win=65535) Payload Fragment [920:1040] Payload Fragment [0:880]
+ 0.053040000	N2D1	Ipv4 (ttl 64 id 4 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=4001 Win=65535)
r 0.053712000	N0D1	Ipv4 (ttl 63 id 4 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=4001 Win=65535)
TcpTestCases:CwndTracer(): Moving cwnd from 4000 to 5000 at time 0.053712000 seconds
0.053712000 [node 0] TcpTahoe:NewAck(): In SlowStart, updated to cwnd 5000 ssthresh 65535
+ 0.053712000	N0D1	Ipv4 (ttl 64 id 9 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=7001 Ack=1 Win=65535) Payload Fragment [760:1040] Payload Fragment [0:720]
+ 0.053712000	N0D1	Ipv4 (ttl 64 id 10 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=8001 Ack=1 Win=65535) Payload Fragment [720:1040] Payload Fragment [0:680]
......
+ 0.203760000	N0D1	Ipv4 (ttl 64 id 45 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=43001 Ack=1 Win=65535) Payload Fragment [360:1040] Payload Fragment [0:320]
d 0.203760000	N0D1	Ipv4 (ttl 64 id 46 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=44001 Ack=1 Win=65535) Payload Fragment [320:1040] Payload Fragment [0:280]
r 0.211424000	N2D1	Ipv4 (ttl 63 id 24 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=22001 Ack=1 Win=65535) Payload Fragment [160:1040] Payload Fragment [0:120]
+ 0.211424000	N2D1	Ipv4 (ttl 64 id 23 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=23001 Win=65535)
r 0.212096000	N0D1	Ipv4 (ttl 63 id 23 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=23001 Win=65535)
TcpTestCases:CwndTracer(): Moving cwnd from 23000 to 24000 at time 0.212096000 seconds
0.212096000 [node 0] TcpTahoe:NewAck(): In SlowStart, updated to cwnd 24000 ssthresh 65535
+ 0.212096000	N0D1	Ipv4 (ttl 64 id 47 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=45001 Ack=1 Win=65535) Payload Fragment [280:1040] Payload Fragment [0:240]
d 0.212096000	N0D1	Ipv4 (ttl 64 id 48 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=46001 Ack=1 Win=65535) Payload Fragment [240:1040] Payload Fragment [0:200]
......
r 0.386480000	N2D1	Ipv4 (ttl 63 id 45 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=43001 Ack=1 Win=65535) Payload Fragment [360:1040] Payload Fragment [0:320]
+ 0.386480000	N2D1	Ipv4 (ttl 64 id 44 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=44001 Win=65535)
r 0.387152000	N0D1	Ipv4 (ttl 63 id 44 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=44001 Win=65535)
TcpTestCases:CwndTracer(): Moving cwnd from 44000 to 45000 at time 0.387152000 seconds
0.387152000 [node 0] TcpTahoe:NewAck(): In SlowStart, updated to cwnd 45000 ssthresh 65535
+ 0.387152000	N0D1	Ipv4 (ttl 64 id 89 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=87001 Ack=1 Win=65535) Payload Fragment [680:1040] Payload Fragment [0:640]
d 0.387152000	N0D1	Ipv4 (ttl 64 id 90 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=88001 Ack=1 Win=65535) Payload Fragment [640:1040] Payload Fragment [0:600]
r 0.394816000	N2D1	Ipv4 (ttl 63 id 47 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=45001 Ack=1 Win=65535) Payload Fragment [280:1040] Payload Fragment [0:240]
+ 0.394816000	N2D1	Ipv4 (ttl 64 id 45 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=44001 Win=65535)
r 0.395488000	N0D1	Ipv4 (ttl 63 id 45 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=44001 Win=65535)
r 0.403152000	N2D1	Ipv4 (ttl 63 id 49 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=47001 Ack=1 Win=65535) Payload Fragment [200:1040] Payload Fragment [0:160]
+ 0.403152000	N2D1	Ipv4 (ttl 64 id 46 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=44001 Win=65535)
r 0.403824000	N0D1	Ipv4 (ttl 63 id 46 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=44001 Win=65535)
r 0.411488000	N2D1	Ipv4 (ttl 63 id 51 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=49001 Ack=1 Win=65535) Payload Fragment [120:1040] Payload Fragment [0:80]
+ 0.411488000	N2D1	Ipv4 (ttl 64 id 47 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=44001 Win=65535)
r 0.412160000	N0D1	Ipv4 (ttl 63 id 47 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=44001 Win=65535)
TcpTestCases:CwndTracer(): Moving cwnd from 45000 to 25500 at time 0.412160000 seconds
0.412160000 [node 0] TcpNewReno:DupAck(): Triple dupack. Enter fast recovery mode. Reset cwnd to 25500, ssthresh to 22500 at fast recovery seqnum 89001
+ 0.412160000	N0D1	Ipv4 (ttl 64 id 91 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=44001 Ack=1 Win=65535) Payload Fragment [320:1040] Payload Fragment [0:280]
...
r 0.982096000	N0D1	Ipv4 (ttl 63 id 99 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=84001 Win=65535)
TcpTestCases:CwndTracer(): Moving cwnd from 46000 to 47000 at time 0.982096000 seconds
0.982096000 [node 0] TcpNewReno:NewAck(): Partial ACK in fast recovery: cwnd set to 47000
+ 0.982096000	N0D1	Ipv4 (ttl 64 id 124 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=84001 Ack=1 Win=65535) Payload Fragment [800:1040] Payload Fragment [0:760]
r 0.998768000	N2D1	Ipv4 (ttl 63 id 124 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=84001 Ack=1 Win=65535) Payload Fragment [800:1040] Payload Fragment [0:760]
+ 0.998768000	N2D1	Ipv4 (ttl 64 id 100 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=86001 Win=65535)
r 0.999440000	N0D1	Ipv4 (ttl 63 id 100 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=86001 Win=65535)
TcpTestCases:CwndTracer(): Moving cwnd from 47000 to 48000 at time 0.999440000 seconds
0.999440000 [node 0] TcpNewReno:NewAck(): Partial ACK in fast recovery: cwnd set to 48000
+ 0.999440000	N0D1	Ipv4 (ttl 64 id 125 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=86001 Ack=1 Win=65535) Payload Fragment [720:1040] Payload Fragment [0:680]
r 1.016112000	N2D1	Ipv4 (ttl 63 id 125 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=86001 Ack=1 Win=65535) Payload Fragment [720:1040] Payload Fragment [0:680]
+ 1.016112000	N2D1	Ipv4 (ttl 64 id 101 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=88001 Win=65535)
r 1.016784000	N0D1	Ipv4 (ttl 63 id 101 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=88001 Win=65535)
TcpTestCases:CwndTracer(): Moving cwnd from 48000 to 49000 at time 1.016784000 seconds
1.016784000 [node 0] TcpNewReno:NewAck(): Partial ACK in fast recovery: cwnd set to 49000
+ 1.016784000	N0D1	Ipv4 (ttl 64 id 126 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=88001 Ack=1 Win=65535) Payload Fragment [640:1040] Payload Fragment [0:600]
r 1.033456000	N2D1	Ipv4 (ttl 63 id 126 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=88001 Ack=1 Win=65535) Payload Fragment [640:1040] Payload Fragment [0:600]
+ 1.033456000	N2D1	Ipv4 (ttl 64 id 102 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=100002 Win=65535)
1.033456000 [node 2] TcpSocketBase:DoPeerClose(): ESTABLISHED -> CLOSE_WAIT
+ 1.033456000	N2D1	Ipv4 (ttl 64 id 103 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=100002 Win=65535)
1.033456000 [node 2] TcpSocketBase:DoClose(): CLOSE_WAIT -> LAST_ACK
r 1.034128000	N0D1	Ipv4 (ttl 63 id 102 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=100002 Win=65535)
TcpTestCases:CwndTracer(): Moving cwnd from 49000 to 49020 at time 1.034128000 seconds
1.034128000 [node 0] TcpTahoe:NewAck(): In CongAvoid, updated to cwnd 49020 ssthresh 24000
1.034128000 [node 0] TcpSocketBase:ProcessWait(): FIN_WAIT_1 -> FIN_WAIT_2
r 1.034464000	N0D1	Ipv4 (ttl 63 id 103 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=100002 Win=65535)
1.034464000 [node 0] TcpSocketBase:ProcessWait(): FIN_WAIT_2 -> TIME_WAIT
+ 1.034464000	N0D1	Ipv4 (ttl 64 id 127 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=100002 Ack=2 Win=65535)
r 1.035136000	N2D1	Ipv4 (ttl 63 id 127 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=100002 Ack=2 Win=65535)
1.035136000 [node 2] TcpSocketBase:CloseAndNotify(): LAST_ACK -> CLOSED
100.000000000 [node 2] TcpSocketBase:CloseAndNotify(): LISTEN -> CLOSED

The above shows that the TCP socket notices triple duplicate acknowledgements. We will evaluate the correctness of the congestion window behaviour using another test program.

Case 2

This verifies that a TCP client can survive the loss of a SYN packet. The output is as follows:

TcpTestCases:StartFlow(): Starting flow at time 0.000000000
+ 0.000000000	N0D1	Ipv4 (ttl 64 id 0 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
0.000000000 [node 0] TcpSocketBase:DoConnect(): CLOSED -> SYN_SENT
TcpTestCases:WriteUntilBufferFull(): Submitting 1000 bytes to TCP socket
TcpTestCases:WriteUntilBufferFull(): Close socket at 0.000000000
0.000000000 [node 0] TcpSocketBase:Close(): Socket 0x102669070 deferring close, state SYN_SENT
0.000000000 [node 2] TcpSocketBase:Listen(): CLOSED -> LISTEN
d 0.000336000	N1D1	Ipv4 (ttl 64 id 0 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
+ 3.000000000	N0D1	Ipv4 (ttl 64 id 1 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
r 3.000336000	N1D1	Ipv4 (ttl 64 id 1 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
+ 3.000336000	N1D2	Ipv4 (ttl 63 id 1 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
r 3.000672000	N2D1	Ipv4 (ttl 63 id 1 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
3.000672000 [node 2] TcpSocketBase:CompleteFork(): LISTEN -> SYN_RCVD
+ 3.000672000	N2D1	Ipv4 (ttl 64 id 0 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
r 3.001008000	N1D2	Ipv4 (ttl 64 id 0 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
+ 3.001008000	N1D1	Ipv4 (ttl 63 id 0 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
r 3.001344000	N0D1	Ipv4 (ttl 63 id 0 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
3.001344000 [node 0] TcpSocketBase:ProcessSynSent(): SYN_SENT -> ESTABLISHED
+ 3.001344000	N0D1	Ipv4 (ttl 64 id 2 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535)
3.001344000 [node 0] TcpSocketBase:SendPendingData(): ESTABLISHED -> FIN_WAIT_1
+ 3.001344000	N0D1	Ipv4 (ttl 64 id 3 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=1 Ack=1 Win=65535) Payload (size=1000)
r 3.001680000	N1D1	Ipv4 (ttl 64 id 2 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535)
+ 3.001680000	N1D2	Ipv4 (ttl 63 id 2 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535)
r 3.002016000	N2D1	Ipv4 (ttl 63 id 2 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535)
3.002016000 [node 2] TcpSocketBase:ProcessSynRcvd(): SYN_RCVD -> ESTABLISHED
r 3.010016000	N1D1	Ipv4 (ttl 64 id 3 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=1 Ack=1 Win=65535) Payload (size=1000)
+ 3.010016000	N1D2	Ipv4 (ttl 63 id 3 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=1 Ack=1 Win=65535) Payload (size=1000)
r 3.018352000	N2D1	Ipv4 (ttl 63 id 3 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=1 Ack=1 Win=65535) Payload (size=1000)
+ 3.018352000	N2D1	Ipv4 (ttl 64 id 1 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=1002 Win=65535)
3.018352000 [node 2] TcpSocketBase:DoPeerClose(): ESTABLISHED -> CLOSE_WAIT
+ 3.018352000	N2D1	Ipv4 (ttl 64 id 2 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=1002 Win=65535)
3.018352000 [node 2] TcpSocketBase:DoClose(): CLOSE_WAIT -> LAST_ACK
r 3.018688000	N1D2	Ipv4 (ttl 64 id 1 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=1002 Win=65535)
+ 3.018688000	N1D1	Ipv4 (ttl 63 id 1 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=1002 Win=65535)
r 3.019024000	N1D2	Ipv4 (ttl 64 id 2 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=1002 Win=65535)
+ 3.019024000	N1D1	Ipv4 (ttl 63 id 2 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=1002 Win=65535)
r 3.019024000	N0D1	Ipv4 (ttl 63 id 1 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=1002 Win=65535)
TcpTestCases:CwndTracer(): Moving cwnd from 1000 to 2000 at time 3.019024000 seconds
3.019024000 [node 0] TcpTahoe:NewAck(): In SlowStart, updated to cwnd 2000 ssthresh 65535
3.019024000 [node 0] TcpSocketBase:ProcessWait(): FIN_WAIT_1 -> FIN_WAIT_2
r 3.019360000	N0D1	Ipv4 (ttl 63 id 2 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=1002 Win=65535)
3.019360000 [node 0] TcpSocketBase:ProcessWait(): FIN_WAIT_2 -> TIME_WAIT
+ 3.019360000	N0D1	Ipv4 (ttl 64 id 4 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1002 Ack=2 Win=65535)
r 3.019696000	N1D1	Ipv4 (ttl 64 id 4 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1002 Ack=2 Win=65535)
+ 3.019696000	N1D2	Ipv4 (ttl 63 id 4 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1002 Ack=2 Win=65535)
r 3.020032000	N2D1	Ipv4 (ttl 63 id 4 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1002 Ack=2 Win=65535)
3.020032000 [node 2] TcpSocketBase:CloseAndNotify(): LAST_ACK -> CLOSED
100.000000000 [node 2] TcpSocketBase:CloseAndNotify(): LISTEN -> CLOSED

The SYN packet is sent at time 0 and, at time 336us, it is dropped by the intermediate hop (node 1). The SYN packet is retransmitted at time 3s.

Case 3

This checks that a connection can survive the loss of a SYN+ACK.

TcpTestCases:StartFlow(): Starting flow at time 0.000000000
+ 0.000000000	N0D1	Ipv4 (ttl 64 id 0 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
0.000000000 [node 0] TcpSocketBase:DoConnect(): CLOSED -> SYN_SENT
TcpTestCases:WriteUntilBufferFull(): Submitting 1000 bytes to TCP socket
TcpTestCases:WriteUntilBufferFull(): Close socket at 0.000000000
0.000000000 [node 0] TcpSocketBase:Close(): Socket 0x102669070 deferring close, state SYN_SENT
0.000000000 [node 2] TcpSocketBase:Listen(): CLOSED -> LISTEN
r 0.000672000	N2D1	Ipv4 (ttl 63 id 0 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
0.000672000 [node 2] TcpSocketBase:CompleteFork(): LISTEN -> SYN_RCVD
+ 0.000672000	N2D1	Ipv4 (ttl 64 id 0 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
d 0.001008000	N1D2	Ipv4 (ttl 64 id 0 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
+ 3.000000000	N0D1	Ipv4 (ttl 64 id 1 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
+ 3.000672000	N2D1	Ipv4 (ttl 64 id 1 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
r 3.000672000	N2D1	Ipv4 (ttl 63 id 1 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
+ 3.000672000	N2D1	Ipv4 (ttl 64 id 2 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
r 3.001344000	N0D1	Ipv4 (ttl 63 id 1 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
3.001344000 [node 0] TcpSocketBase:ProcessSynSent(): SYN_SENT -> ESTABLISHED
+ 3.001344000	N0D1	Ipv4 (ttl 64 id 2 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535)
3.001344000 [node 0] TcpSocketBase:SendPendingData(): ESTABLISHED -> FIN_WAIT_1
+ 3.001344000	N0D1	Ipv4 (ttl 64 id 3 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=1 Ack=1 Win=65535) Payload (size=1000)
r 3.001680000	N0D1	Ipv4 (ttl 63 id 2 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
r 3.002016000	N2D1	Ipv4 (ttl 63 id 2 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535)
3.002016000 [node 2] TcpSocketBase:ProcessSynRcvd(): SYN_RCVD -> ESTABLISHED
+ 3.002016000	N2D1	Ipv4 (ttl 64 id 3 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=1 Win=65535)
r 3.002688000	N0D1	Ipv4 (ttl 63 id 3 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=1 Win=65535)
r 3.018352000	N2D1	Ipv4 (ttl 63 id 3 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=1 Ack=1 Win=65535) Payload (size=1000)
+ 3.018352000	N2D1	Ipv4 (ttl 64 id 4 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=1002 Win=65535)
3.018352000 [node 2] TcpSocketBase:DoPeerClose(): ESTABLISHED -> CLOSE_WAIT
+ 3.018352000	N2D1	Ipv4 (ttl 64 id 5 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=1002 Win=65535)
3.018352000 [node 2] TcpSocketBase:DoClose(): CLOSE_WAIT -> LAST_ACK
r 3.019024000	N0D1	Ipv4 (ttl 63 id 4 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=1002 Win=65535)
TcpTestCases:CwndTracer(): Moving cwnd from 1000 to 2000 at time 3.019024000 seconds
3.019024000 [node 0] TcpTahoe:NewAck(): In SlowStart, updated to cwnd 2000 ssthresh 65535
3.019024000 [node 0] TcpSocketBase:ProcessWait(): FIN_WAIT_1 -> FIN_WAIT_2
r 3.019360000	N0D1	Ipv4 (ttl 63 id 5 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=1002 Win=65535)
3.019360000 [node 0] TcpSocketBase:ProcessWait(): FIN_WAIT_2 -> TIME_WAIT
+ 3.019360000	N0D1	Ipv4 (ttl 64 id 4 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1002 Ack=2 Win=65535)
r 3.020032000	N2D1	Ipv4 (ttl 63 id 4 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1002 Ack=2 Win=65535)
3.020032000 [node 2] TcpSocketBase:CloseAndNotify(): LAST_ACK -> CLOSED
100.000000000 [node 2] TcpSocketBase:CloseAndNotify(): LISTEN -> CLOSED

The SYN+ACK packet is sent by node 2 at time 672us and is dropped by node 1 at 0.001008s. Because node 0 does not see the SYN+ACK coming back, it resends the SYN at time 3s.

Case 4

This checks that the connection can survive the loss of the ACK in the three-way handshake.

TcpTestCases:StartFlow(): Starting flow at time 0.000000000
+ 0.000000000	N0D1	Ipv4 (ttl 64 id 0 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
0.000000000 [node 0] TcpSocketBase:DoConnect(): CLOSED -> SYN_SENT
TcpTestCases:WriteUntilBufferFull(): Submitting 1040 bytes to TCP socket
TcpTestCases:WriteUntilBufferFull(): Submitting 960 bytes to TCP socket
TcpTestCases:WriteUntilBufferFull(): Close socket at 0.000000000
0.000000000 [node 0] TcpSocketBase:Close(): Socket 0x102669070 deferring close, state SYN_SENT
0.000000000 [node 2] TcpSocketBase:Listen(): CLOSED -> LISTEN
r 0.000672000	N2D1	Ipv4 (ttl 63 id 0 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
0.000672000 [node 2] TcpSocketBase:CompleteFork(): LISTEN -> SYN_RCVD
+ 0.000672000	N2D1	Ipv4 (ttl 64 id 0 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
r 0.001344000	N0D1	Ipv4 (ttl 63 id 0 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
0.001344000 [node 0] TcpSocketBase:ProcessSynSent(): SYN_SENT -> ESTABLISHED
+ 0.001344000	N0D1	Ipv4 (ttl 64 id 1 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535)
+ 0.001344000	N0D1	Ipv4 (ttl 64 id 2 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535) Payload Fragment [0:1000]
d 0.001680000	N1D1	Ipv4 (ttl 64 id 1 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535)
r 0.018352000	N2D1	Ipv4 (ttl 63 id 2 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535) Payload Fragment [0:1000]
0.018352000 [node 2] TcpSocketBase:ProcessSynRcvd(): SYN_RCVD -> ESTABLISHED
+ 0.018352000	N2D1	Ipv4 (ttl 64 id 1 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=1001 Win=65535)
r 0.019024000	N0D1	Ipv4 (ttl 63 id 1 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=1001 Win=65535)
TcpTestCases:CwndTracer(): Moving cwnd from 1000 to 2000 at time 0.019024000 seconds
0.019024000 [node 0] TcpTahoe:NewAck(): In SlowStart, updated to cwnd 2000 ssthresh 65535
0.019024000 [node 0] TcpSocketBase:SendPendingData(): ESTABLISHED -> FIN_WAIT_1
+ 0.019024000	N0D1	Ipv4 (ttl 64 id 3 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=1001 Ack=1 Win=65535) Payload Fragment [1000:1040] Payload (size=960)
r 0.035696000	N2D1	Ipv4 (ttl 63 id 3 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=1001 Ack=1 Win=65535) Payload Fragment [1000:1040] Payload (size=960)
+ 0.035696000	N2D1	Ipv4 (ttl 64 id 2 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=2002 Win=65535)
0.035696000 [node 2] TcpSocketBase:DoPeerClose(): ESTABLISHED -> CLOSE_WAIT
+ 0.035696000	N2D1	Ipv4 (ttl 64 id 3 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=2002 Win=65535)
0.035696000 [node 2] TcpSocketBase:DoClose(): CLOSE_WAIT -> LAST_ACK
r 0.036368000	N0D1	Ipv4 (ttl 63 id 2 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=2002 Win=65535)
TcpTestCases:CwndTracer(): Moving cwnd from 2000 to 3000 at time 0.036368000 seconds
0.036368000 [node 0] TcpTahoe:NewAck(): In SlowStart, updated to cwnd 3000 ssthresh 65535
0.036368000 [node 0] TcpSocketBase:ProcessWait(): FIN_WAIT_1 -> FIN_WAIT_2
r 0.036704000	N0D1	Ipv4 (ttl 63 id 3 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=2002 Win=65535)
0.036704000 [node 0] TcpSocketBase:ProcessWait(): FIN_WAIT_2 -> TIME_WAIT
+ 0.036704000	N0D1	Ipv4 (ttl 64 id 4 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=2002 Ack=2 Win=65535)
r 0.037376000	N2D1	Ipv4 (ttl 63 id 4 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=2002 Ack=2 Win=65535)
0.037376000 [node 2] TcpSocketBase:CloseAndNotify(): LAST_ACK -> CLOSED
100.000000000 [node 2] TcpSocketBase:CloseAndNotify(): LISTEN -> CLOSED

The ACK to complete the three-way handshake is sent at 0.001344s and dropped by node 1 at 0.00168s. Node 2 receives the data packet (id 2), and this also triggers the correct state change.

Case 5

This demonstrates the case where the server receives a FIN in the SYN_RCVD state. The behaviour is created by dropping the ACK packet in the three-way handshake and sending a FIN after the client socket has moved to the ESTABLISHED state.

TcpTestCases:StartFlow(): Starting flow at time 0.000000000
+ 0.000000000	N0D1	Ipv4 (ttl 64 id 0 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
0.000000000 [node 0] TcpSocketBase:DoConnect(): CLOSED -> SYN_SENT
0.000000000 [node 2] TcpSocketBase:Listen(): CLOSED -> LISTEN
r 0.000672000	N2D1	Ipv4 (ttl 63 id 0 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ SYN ] Seq=0 Ack=0 Win=65535)
0.000672000 [node 2] TcpSocketBase:CompleteFork(): LISTEN -> SYN_RCVD
+ 0.000672000	N2D1	Ipv4 (ttl 64 id 0 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
r 0.001344000	N0D1	Ipv4 (ttl 63 id 0 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ SYN  ACK ] Seq=0 Ack=1 Win=65535)
0.001344000 [node 0] TcpSocketBase:ProcessSynSent(): SYN_SENT -> ESTABLISHED
+ 0.001344000	N0D1	Ipv4 (ttl 64 id 1 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535)
d 0.001680000	N1D1	Ipv4 (ttl 64 id 1 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=1 Ack=1 Win=65535)
+ 0.002000000	N0D1	Ipv4 (ttl 64 id 2 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=1 Ack=1 Win=65535)
0.002000000 [node 0] TcpSocketBase:DoClose(): ESTABLISHED -> FIN_WAIT_1
r 0.002672000	N2D1	Ipv4 (ttl 63 id 2 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=1 Ack=1 Win=65535)
0.002672000 [node 2] TcpSocketBase:DoPeerClose(): SYN_RCVD -> CLOSE_WAIT
+ 0.002672000	N2D1	Ipv4 (ttl 64 id 1 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=2 Win=65535)
0.002672000 [node 2] TcpSocketBase:DoClose(): CLOSE_WAIT -> LAST_ACK
r 0.003344000	N0D1	Ipv4 (ttl 63 id 1 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=2 Win=65535)
TcpTestCases:CwndTracer(): Moving cwnd from 1000 to 2000 at time 0.003344000 seconds
0.003344000 [node 0] TcpTahoe:NewAck(): In SlowStart, updated to cwnd 2000 ssthresh 65535
0.003344000 [node 0] TcpSocketBase:ProcessWait(): FIN_WAIT_1 -> CLOSING
0.003344000 [node 0] TcpSocketBase:ProcessWait(): CLOSING -> TIME_WAIT
+ 0.003344000	N0D1	Ipv4 (ttl 64 id 3 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=2 Ack=2 Win=65535)
r 0.004016000	N2D1	Ipv4 (ttl 63 id 3 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=2 Ack=2 Win=65535)
0.004016000 [node 2] TcpSocketBase:CloseAndNotify(): LAST_ACK -> CLOSED
100.000000000 [node 2] TcpSocketBase:CloseAndNotify(): LISTEN -> CLOSED

The FIN is sent at time 0.002s. At time 0.002672s, the state of node 2 changes from SYN_RCVD to CLOSE_WAIT.

Case 6

This shows the simultaneous close of TCP sockets.

0.036368000 [node 0] TcpSocketBase:SendPendingData(): ESTABLISHED -> FIN_WAIT_1
+ 0.036368000	N0D1	Ipv4 (ttl 64 id 6 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=4001 Ack=1 Win=65535) Payload Fragment [880:1040] Payload (size=840)
......
r 0.061376000	N2D1	Ipv4 (ttl 63 id 6 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=4001 Ack=1 Win=65535) Payload Fragment [880:1040] Payload (size=840)
+ 0.061376000	N2D1	Ipv4 (ttl 64 id 5 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=5002 Win=65535)
0.061376000 [node 2] TcpSocketBase:DoPeerClose(): ESTABLISHED -> CLOSE_WAIT
+ 0.061376000	N2D1	Ipv4 (ttl 64 id 6 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=5002 Win=65535)
0.061376000 [node 2] TcpSocketBase:DoClose(): CLOSE_WAIT -> LAST_ACK
d 0.061712000	N1D2	Ipv4 (ttl 64 id 5 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=5002 Win=65535)
r 0.062384000	N0D1	Ipv4 (ttl 63 id 6 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=5002 Win=65535)
0.062384000 [node 0] TcpSocketBase:ProcessWait(): FIN_WAIT_1 -> CLOSING
0.062384000 [node 0] TcpSocketBase:ProcessWait(): CLOSING -> TIME_WAIT
+ 0.062384000	N0D1	Ipv4 (ttl 64 id 7 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=5002 Ack=2 Win=65535)
r 0.063056000	N2D1	Ipv4 (ttl 63 id 7 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=5002 Ack=2 Win=65535)
0.063056000 [node 2] TcpSocketBase:CloseAndNotify(): LAST_ACK -> CLOSED
100.000000000 [node 2] TcpSocketBase:CloseAndNotify(): LISTEN -> CLOSED

The FIN packet from node 0 is sent at time 0.036368s. Node 2 responds with an ACK at time 0.061376s, which is dropped at time 0.061712s. The FIN packet from node 2 is therefore the next packet to arrive at node 0, causing it to move to the CLOSING state.

Case 7

This shows that a connection can survive the loss of the first FIN packet.

+ 0.040000000	N0D1	Ipv4 (ttl 64 id 7 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=5001 Ack=1 Win=65535)
0.040000000 [node 0] TcpSocketBase:DoClose(): ESTABLISHED -> FIN_WAIT_1
d 0.053376000	N1D1	Ipv4 (ttl 64 id 7 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=5001 Ack=1 Win=65535)
r 0.062048000	N0D1	Ipv4 (ttl 63 id 5 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=5001 Win=65535)
TcpTestCases:CwndTracer(): Moving cwnd from 5000 to 6000 at time 0.062048000 seconds
0.062048000 [node 0] TcpTahoe:NewAck(): In SlowStart, updated to cwnd 6000 ssthresh 65535
TcpTestCases:CwndTracer(): Moving cwnd from 6000 to 1000 at time 0.262048000 seconds
0.262048000 [node 0] TcpReno:Retransmit(): RTO. Reset cwnd to 1000, ssthresh to 2000, restart from seqnum 5001
+ 0.262048000	N0D1	Ipv4 (ttl 64 id 8 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=5001 Ack=1 Win=65535)
r 0.262720000	N2D1	Ipv4 (ttl 63 id 8 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=5001 Ack=1 Win=65535)
0.262720000 [node 2] TcpSocketBase:DoPeerClose(): ESTABLISHED -> CLOSE_WAIT
+ 0.262720000	N2D1	Ipv4 (ttl 64 id 6 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=5002 Win=65535)
0.262720000 [node 2] TcpSocketBase:DoClose(): CLOSE_WAIT -> LAST_ACK
r 0.263392000	N0D1	Ipv4 (ttl 63 id 6 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=5002 Win=65535)
TcpTestCases:CwndTracer(): Moving cwnd from 1000 to 2000 at time 0.263392000 seconds
0.263392000 [node 0] TcpTahoe:NewAck(): In SlowStart, updated to cwnd 2000 ssthresh 2000
0.263392000 [node 0] TcpSocketBase:ProcessWait(): FIN_WAIT_1 -> CLOSING
0.263392000 [node 0] TcpSocketBase:ProcessWait(): CLOSING -> TIME_WAIT
+ 0.263392000	N0D1	Ipv4 (ttl 64 id 9 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=5002 Ack=2 Win=65535)
r 0.264064000	N2D1	Ipv4 (ttl 63 id 9 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=5002 Ack=2 Win=65535)
0.264064000 [node 2] TcpSocketBase:CloseAndNotify(): LAST_ACK -> CLOSED
100.000000000 [node 2] TcpSocketBase:CloseAndNotify(): LISTEN -> CLOSED

The bare FIN packet is sent at time 0.04s by node 0 and is dropped at time 0.053376s. Node 0 receives the ACK for data up to sequence 5001 at time 0.062048s, which moves the retransmission timeout to 0.262048s. Upon the retransmission timeout, the FIN packet is resent.

Case 8

This demonstrates the loss of the close responder's FIN packet.

0.036368000 [node 0] TcpSocketBase:SendPendingData(): ESTABLISHED -> FIN_WAIT_1
+ 0.036368000	N0D1	Ipv4 (ttl 64 id 6 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=4001 Ack=1 Win=65535) Payload Fragment [880:1040] Payload (size=840)
r 0.061376000	N2D1	Ipv4 (ttl 63 id 6 len 1040 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ FIN  ACK ] Seq=4001 Ack=1 Win=65535) Payload Fragment [880:1040] Payload (size=840)
+ 0.061376000	N2D1	Ipv4 (ttl 64 id 5 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=5002 Win=65535)
0.061376000 [node 2] TcpSocketBase:DoPeerClose(): ESTABLISHED -> CLOSE_WAIT
+ 0.061376000	N2D1	Ipv4 (ttl 64 id 6 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=5002 Win=65535)
0.061376000 [node 2] TcpSocketBase:DoClose(): CLOSE_WAIT -> LAST_ACK
d 0.062048000	N1D2	Ipv4 (ttl 64 id 6 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=5002 Win=65535)
r 0.062048000	N0D1	Ipv4 (ttl 63 id 5 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ ACK ] Seq=1 Ack=5002 Win=65535)
TcpTestCases:CwndTracer(): Moving cwnd from 5000 to 6000 at time 0.062048000 seconds
0.062048000 [node 0] TcpTahoe:NewAck(): In SlowStart, updated to cwnd 6000 ssthresh 65535
0.062048000 [node 0] TcpSocketBase:ProcessWait(): FIN_WAIT_1 -> FIN_WAIT_2
+ 2.061376000	N2D1	Ipv4 (ttl 64 id 7 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=5002 Win=65535)
2.061376000 [node 2] TcpSocketBase:CloseAndNotify(): LAST_ACK -> CLOSED
r 2.062048000	N0D1	Ipv4 (ttl 63 id 7 len 40 10.1.2.2 > 10.1.3.1)	Tcp (50000 > 49153 [ FIN  ACK ] Seq=1 Ack=5002 Win=65535)
2.062048000 [node 0] TcpSocketBase:ProcessWait(): FIN_WAIT_2 -> TIME_WAIT
+ 2.062048000	N0D1	Ipv4 (ttl 64 id 7 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=5002 Ack=2 Win=65535)
r 2.062720000	N2D1	Ipv4 (ttl 63 id 7 len 40 10.1.3.1 > 10.1.2.2)	Tcp (49153 > 50000 [ ACK ] Seq=5002 Ack=2 Win=65535)
100.000000000 [node 2] TcpSocketBase:CloseAndNotify(): LISTEN -> CLOSED

Node 0 sends a FIN packet at time 0.036368s and, at time 0.061376s, the packet is received by node 2. Node 2 then sends an ACK to acknowledge the FIN packet, and it also sends its own FIN packet after notifying the application. At this point, node 2 is in the LAST_ACK state with the last-ACK timeout scheduled. The FIN packet from node 2 is dropped at 0.062048s. At the same time, the ACK-to-FIN packet is received by node 0, triggering the state transition from FIN_WAIT_1 to FIN_WAIT_2. At time 2.061376s, the last-ACK timeout causes node 2's FIN packet to be retransmitted.

Result from tcp-loss-response