ICCRG TCP Evaluation Suite port - Summer Project 2016
- Student: Dharmendra Kumar Mishra
- Mentors: Mohit Tahiliani, Tom Henderson
- Abstract: Many new TCP extensions have been proposed over the past few decades. Comparing the performance of these TCP variants is difficult due to the lack of an agreed-upon set of performance metrics. To address this problem, the Internet Congestion Control Research Group (ICCRG) has provided a common test suite for evaluating new TCP extensions. This suite is already available for ns-2. The goal of this project is to add the Delay/Throughput tradeoff scenario (Section 5.1 of the RFC) to the existing TCP Evaluation Suite code for ns-3. This project, which ports the ns-2 version of the suite, also aims to modify the existing ns-3 TcpEval code to make it compliant with the RFC.
- Referred RFC: Common TCP Evaluation Suite [ https://tools.ietf.org/html/draft-irtf-iccrg-tcpeval-01 ]
- Referred ns-2 code: [ https://bitbucket.org/hayesd/tcp-evaluation-suite-public ]
- ns-3 TcpEval code: [ https://github.com/dharmendra-mishra/TcpEvalSummerProject ]
- About me: M.Tech in Computer Science from the National Institute of Technology Karnataka, Surathkal, India. Currently, I work as a Network Consultant at Cisco Systems, Bangalore, India.
Week 1
* Identify the changes (topology setup, parameter configuration) required to simulate the Delay_Throughput_Tradeoff scenario in ns-3 (Section 5.1 of the RFC).
* Implement the changes.
* Document them in a .rst file, with references to the parts of the RFC that are covered.
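One parameter-configuration detail in this scenario is sizing the bottleneck buffer relative to the bandwidth-delay product (BDP) of the path. The sketch below shows only that sizing arithmetic; the link rate, RTT, packet size, and buffer fractions are illustrative placeholders, not the RFC's specified values.

```cpp
#include <cstdint>
#include <iostream>

// Bandwidth-delay product in packets, given the bottleneck rate (bit/s),
// round-trip time (s), and packet size (bytes).
uint32_t BdpPackets(double rateBps, double rttSec, uint32_t pktBytes)
{
    return static_cast<uint32_t>(rateBps * rttSec / (8.0 * pktBytes));
}

// Print candidate buffer sizes as fractions of the BDP.
// All numbers here are examples only.
void PrintBufferSweep()
{
    uint32_t bdp = BdpPackets(100e6, 0.1, 1500); // 100 Mb/s, 100 ms, 1500 B -> 833 pkts
    double fractions[] = {0.1, 0.2, 0.5, 1.0, 2.0}; // placeholder sweep values
    for (double f : fractions)
    {
        std::cout << f << " * BDP = "
                  << static_cast<uint32_t>(f * bdp) << " packets\n";
    }
}
```

The actual sweep values and link parameters should be taken from Section 5.1 of the RFC when implementing the scenario.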
Week 2
* Test/review the Tmix integration with ns-3.
* Generate traffic in ns-3 TcpEval using Tmix.
* Update the documentation.
Week 3
* Write code to shuffle the Tmix traces (Section 2.3.2 of the RFC).
* Test the Tmix module against the traces used by ns-2 TcpEval.
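The shuffling step can be sketched independently of ns-3. One simple interpretation: randomly permute the assignment of Tmix connection-vector trace files to node pairs, with a seeded generator so that runs stay reproducible. The function name and approach here are illustrative, not the suite's actual API.

```cpp
#include <algorithm>
#include <cstdint>
#include <random>
#include <string>
#include <vector>

// Return a shuffled copy of the trace-file list. A fixed seed makes the
// permutation reproducible across runs, which matters for repeatable
// simulation campaigns.
std::vector<std::string> ShuffleTraces(std::vector<std::string> traces,
                                       uint32_t seed)
{
    std::mt19937 rng(seed);
    std::shuffle(traces.begin(), traces.end(), rng);
    return traces;
}
```

The suite itself may shuffle at a different granularity (e.g., individual connection vectors rather than whole files); the code only illustrates the reproducible-permutation idea.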
Week 4
* Implement statistics generation in ns-3 TcpEval using ns-3's callback-based tracing.
* Calculate the following metrics (Section 4.5 of the RFC) for the central link: 1. aggregate link utilization, 2. average packet drop rate, and 3. average queueing delay.
* Document the work.
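The three metrics reduce to simple arithmetic over counters that the trace callbacks would accumulate (for instance from a bottleneck queue's enqueue/dequeue/drop trace sources). The standalone sketch below shows only that arithmetic; the struct and function names are placeholders, not TcpEval's API.

```cpp
#include <cstdint>

// Counters a set of trace callbacks would fill in during the simulation.
struct LinkStats
{
    uint64_t bytesForwarded = 0;    // bytes that left the bottleneck link
    uint64_t pktsEnqueued = 0;      // packets accepted by the queue
    uint64_t pktsDropped = 0;       // packets dropped at the queue
    double   queueDelaySum = 0.0;   // sum of per-packet queueing delays (s)
    uint64_t queueDelaySamples = 0; // number of delay samples
};

// Aggregate link utilization: delivered throughput over link capacity.
double Utilization(const LinkStats& s, double linkRateBps, double durationSec)
{
    return (8.0 * s.bytesForwarded) / (linkRateBps * durationSec);
}

// Average packet drop rate: drops over all arrivals at the queue.
double DropRate(const LinkStats& s)
{
    uint64_t arrivals = s.pktsEnqueued + s.pktsDropped;
    return arrivals ? static_cast<double>(s.pktsDropped) / arrivals : 0.0;
}

// Average queueing delay over all dequeued packets.
double AvgQueueDelay(const LinkStats& s)
{
    return s.queueDelaySamples ? s.queueDelaySum / s.queueDelaySamples : 0.0;
}
```

In the actual ns-3 code, the counters would be updated from connected trace sinks rather than set directly.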
Week 5
* Continue Week 4 work: per-flow statistics generation.
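Per-flow statistics can be kept in a map keyed by a flow identifier; in ns-3 this would typically be FlowMonitor's flow id or a 5-tuple, but a plain integer stands in here. A minimal sketch, with all names hypothetical:

```cpp
#include <cstdint>
#include <map>

// Per-flow counters; fields are illustrative, not TcpEval's actual set.
struct FlowStats
{
    uint64_t txBytes = 0;
    uint64_t rxBytes = 0;
    uint32_t lostPackets = 0;
};

// Accumulates statistics per flow. In ns-3 the Record* methods would be
// invoked from trace sinks; a plain integer flow id stands in for a
// 5-tuple or FlowMonitor flow id.
class PerFlowCollector
{
public:
    void RecordTx(uint32_t flowId, uint32_t bytes) { m_flows[flowId].txBytes += bytes; }
    void RecordRx(uint32_t flowId, uint32_t bytes) { m_flows[flowId].rxBytes += bytes; }
    void RecordLoss(uint32_t flowId) { m_flows[flowId].lostPackets++; }
    const FlowStats& Get(uint32_t flowId) { return m_flows[flowId]; }

private:
    std::map<uint32_t, FlowStats> m_flows;
};
```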
Week 6
* Compare the results obtained from ns-2 TcpEval and ns-3 TcpEval for the Delay_Throughput_Tradeoff scenario (Section 5.1 of the RFC).
* Document the observations (similarities and differences).
* The mid-term review can be scheduled around this week.
The following issues may consume more time than planned: 1. integration of Tmix with TcpEval for ns-3, 2. shuffling of the Tmix traces, 3. per-flow statistics generation.
Week 7
* Continue Week 6 work: conduct a detailed comparison across different configurations of the Delay_Throughput_Tradeoff scenario (Section 5.1 of the RFC).
* Document the observations.
* Write a new test suite for the Delay_Throughput_Tradeoff scenario (Section 5.1 of the RFC).
Week 8
* Respond to comments received during code review.
* Resolve any remaining issues before declaring the code ready to merge.
* Finalize the documentation.
Weeks 9 to 12
* Simulate RED or other queue disciplines using the ns-3 TcpEval implemented in the previous 8 weeks.
* Compare the results with ns-2 TcpEval.
* Test and document the observations.
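For reference when validating RED runs, the core of the algorithm (Floyd and Jacobson's scheme, as implemented by qdiscs such as ns-3's RedQueueDisc) is an EWMA of the queue size plus a linear drop-probability ramp between two thresholds. A minimal sketch of those two calculations:

```cpp
// EWMA of the queue size: avg = (1 - w) * avg + w * q,
// where w is the queue weight and q the instantaneous queue size.
double RedEwma(double avg, double q, double w)
{
    return (1.0 - w) * avg + w * q;
}

// Early-drop probability: 0 below minTh, 1 at or above maxTh, and a
// linear ramp up to maxP in between. (RED implementations additionally
// adjust this with a count of packets since the last drop, omitted here.)
double RedDropProb(double avg, double minTh, double maxTh, double maxP)
{
    if (avg < minTh)
    {
        return 0.0;
    }
    if (avg >= maxTh)
    {
        return 1.0;
    }
    return maxP * (avg - minTh) / (maxTh - minTh);
}
```

Checking simulation output against these formulas (for the configured minTh, maxTh, maxP, and queue weight) is one way to sanity-check the RED configuration before comparing ns-2 and ns-3 results.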