A Discrete-Event Network Simulator
Models

ns-3 Model Library

This is the ns-3 Model Library documentation. Primary documentation for the ns-3 project is available in five forms:

  • ns-3 Doxygen: documentation of the public APIs of the simulator
  • Tutorial, Manual, and Model Library (this document) for the latest release and development tree
  • ns-3 wiki

This document is written in reStructuredText for Sphinx and is maintained in the doc/models directory of ns-3’s source code.

Organization

This manual compiles documentation for ns-3 models and supporting software that enable users to construct network simulations. It is important to distinguish between modules and models:

  • ns-3 software is organized into separate modules that are each built as a separate software library. Individual ns-3 programs can link the modules (libraries) they need to conduct their simulation.
  • ns-3 models are abstract representations of real-world objects, protocols, devices, etc.

An ns-3 module may consist of more than one model (for instance, the internet module contains models for both TCP and UDP). In general, ns-3 models do not span multiple software modules, however.

This manual provides documentation about the models of ns-3. It complements two other sources of documentation concerning models:

  • the model APIs are documented, from a programming perspective, using Doxygen. Doxygen for ns-3 models is available on the project web server.
  • the ns-3 core is documented in the developer’s manual. ns-3 models make use of the facilities of the core, such as attributes, default values, random numbers, test frameworks, etc. Consult the main web site to find copies of the manual.

Finally, additional documentation about various aspects of ns-3 may exist on the project wiki.

A sample outline of how to write model library documentation can be found by executing the create-module.py program and looking at the template created in the file new-module/doc/new-module.rst.

$ cd src
$ ./create-module.py new-module

The remainder of this document is organized alphabetically by module name.

If you are new to ns-3, you might first want to read below about the network module, which contains some fundamental models for the simulator. The packet model, models for different address formats, and abstract base classes for objects such as nodes, net devices, channels, sockets, and applications are discussed there.

Animation

Animation is an important tool for network simulation. While ns-3 does not contain a default graphical animation tool, it currently provides two ways to produce animation: the PyViz method and the NetAnim method. The PyViz method is described in http://www.nsnam.org/wiki/PyViz.

We will describe the NetAnim method briefly here.

NetAnim

NetAnim is a standalone, Qt4-based software executable that uses a trace file generated during an ns-3 simulation to display the topology and animate the packet flow between nodes.

[Figure: An example of packet animation on wired links]

In addition, NetAnim also provides useful features such as:

  • tables to display packet meta-data, with protocol filters [Figure: An example of tables for packet meta-data with protocol filters];
  • a way to visualize the trajectory of a mobile node [Figure: An example of the trajectory of a mobile node];
  • a way to display the routing tables of multiple nodes at various points in time;
  • a way to display counters associated with multiple nodes as a chart or a table;
  • a way to view the timeline of packet transmit and receive events.

Methodology

The class ns3::AnimationInterface is responsible for the creation of the trace XML file. AnimationInterface uses the tracing infrastructure to track packet flows between nodes. AnimationInterface registers itself as a trace hook for tx and rx events before the simulation begins. When a packet is scheduled for transmission or reception, the corresponding tx and rx trace hooks in AnimationInterface are called. When the rx hooks are called, AnimationInterface knows the two endpoints between which a packet has flowed, and adds this information to the trace file in XML format, along with the corresponding tx and rx timestamps. The XML format will be discussed in a later section. It is important to note that AnimationInterface records a packet only if the rx trace hooks are called; every tx event must be matched by an rx event.

Downloading NetAnim

If NetAnim is not already available in the ns-3 package you downloaded, you can do the following:

Please ensure that you have installed Mercurial. The latest version of NetAnim can then be downloaded using Mercurial with the following command:

$ hg clone http://code.nsnam.org/netanim

Building NetAnim

Prerequisites

Qt4 (4.8 and over) is required to build NetAnim. It can be obtained in the following ways:

For Debian/Ubuntu Linux distributions:

$ apt-get install qt4-dev-tools

For Red Hat/Fedora-based distributions:

$ yum install qt4
$ yum install qt4-devel

For Mac/OSX, see http://qt.nokia.com/downloads/

Build steps

To build NetAnim use the following commands:

$ cd netanim
$ make clean
$ qmake NetAnim.pro  (for Mac users: qmake -spec macx-g++ NetAnim.pro)
$ make

Note: qmake may be named “qmake-qt4” on some systems

This should create an executable named “NetAnim” in the same directory:

 $ ls -l NetAnim
-rwxr-xr-x 1 john john 390395 2012-05-22 08:32 NetAnim

Usage

Using NetAnim is a two-step process:

Step 1: Generate the animation XML trace file during simulation using “ns3::AnimationInterface” in the ns-3 code base.

Step 2: Load the XML trace file generated in Step 1 with the offline Qt4-based animator named NetAnim.

Step 1: Generate XML animation trace file

The class “AnimationInterface” under “src/netanim” uses underlying ns-3 trace sources to construct a timestamped ASCII file in XML format.

Examples are found under src/netanim/examples. For example:

$ ./waf -d debug configure --enable-examples
$ ./waf --run "dumbbell-animation"

The above will create an XML file named dumbbell-animation.xml.

Mandatory
  1. Ensure that your program’s wscript includes the “netanim” module. An example of such a wscript is at src/netanim/examples/wscript.
  2. Include the header [#include "ns3/netanim-module.h"] in your program.
  3. Add the statement
AnimationInterface anim ("animation.xml");  // where "animation.xml" is any arbitrary filename

[For versions before ns-3.13 you also have to call anim.SetXMLOutput () to set the XML mode, and anim.StartAnimation ();]
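
Putting the mandatory steps together, a minimal sketch of a complete program is shown below (the two-node setup is only illustrative and no traffic is generated, so the resulting trace will mostly contain node positions; SetConstantPosition is described under the optional steps):

#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/mobility-module.h"
#include "ns3/netanim-module.h"

using namespace ns3;

int
main (int argc, char *argv[])
{
  NodeContainer nodes;
  nodes.Create (2);

  // Install a constant-position mobility model so that NetAnim knows where to draw the nodes
  MobilityHelper mobility;
  mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
  mobility.Install (nodes);

  // Mandatory: create the AnimationInterface before Simulator::Run ()
  AnimationInterface anim ("animation.xml");
  anim.SetConstantPosition (nodes.Get (0), 0.0, 0.0);
  anim.SetConstantPosition (nodes.Get (1), 10.0, 10.0);

  Simulator::Stop (Seconds (10.0));
  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}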

Optional

The following are optional but useful steps:

// Step 1
anim.SetMobilityPollInterval (Seconds (1));

AnimationInterface records the position of all nodes every 250 ms by default. The statement above sets the periodic interval at which AnimationInterface records the position of all nodes. If the nodes are expected to move very little, it is useful to set a high mobility poll interval to avoid large XML files.

// Step 2
anim.SetConstantPosition (Ptr< Node > n, double x, double y);

AnimationInterface requires that the position of all nodes be set. In ns-3 this is done by setting an associated MobilityModel. “SetConstantPosition” is a quick way to set the x-y coordinates of a node which is stationary.

// Step 3
anim.SetStartTime (Seconds (150));
anim.SetStopTime (Seconds (200));

AnimationInterface can generate large XML files. The above statements restrict the time window in which AnimationInterface does tracing. Restricting the window helps focus on the relevant portion of the simulation and keeps the XML files manageably small.

// Step 4
AnimationInterface anim ("animation.xml", 50000);

Using the above constructor ensures that each animation XML trace file contains at most 50000 packets. For example, if AnimationInterface captures 150000 packets, using the above constructor splits the capture into 3 files:

  • animation.xml - containing the packet range 1-50000
  • animation.xml-1 - containing the packet range 50001-100000
  • animation.xml-2 - containing the packet range 100001-150000

// Step 5
anim.EnablePacketMetadata (true);

With the above statement, AnimationInterface records the meta-data of each packet in the XML trace file. The metadata can be used by NetAnim to provide better statistics and filtering, along with brief information about the packet (such as the TCP sequence number or the source and destination IP addresses) during packet animation.

CAUTION: Enabling this feature will result in larger XML trace files. Please do NOT enable this feature when using WiMAX links.

// Step 6
anim.UpdateNodeDescription (5, "Access-point");

With the above statement, AnimationInterface assigns the text “Access-point” to node 5.

// Step 7
anim.UpdateNodeSize (6, 1.5, 1.5);

With the above statement, AnimationInterface sets the size of node 6 to scale by 1.5 in both width and height. NetAnim automatically scales the graphics view to fit the boundaries of the topology. This means that NetAnim can scale a node's size abnormally high or low. Using AnimationInterface::UpdateNodeSize allows you to override the default scaling in NetAnim and use your own custom scale.

// Step 8
anim.UpdateNodeCounter (89, 7, 3.4);

With the above statement, AnimationInterface sets the counter with Id 89, associated with Node 7, to the value 3.4. The counter Id 89 is obtained using AnimationInterface::AddNodeCounter. An example usage for this is in src/netanim/examples/resource-counters.cc.
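
For reference, a counter is first declared with AnimationInterface::AddNodeCounter, which returns the counter Id used in subsequent updates. A minimal sketch (the counter name is arbitrary):

// Declare a double-valued node counter and update it for node 7
uint32_t counterId = anim.AddNodeCounter ("RemainingEnergy", AnimationInterface::DOUBLE_COUNTER);
anim.UpdateNodeCounter (counterId, 7, 3.4);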

Step 2: Loading the XML in NetAnim
  1. Assuming NetAnim was built, use the command ./NetAnim to launch NetAnim. Please review the section “Building NetAnim” if NetAnim is not available.
  2. When NetAnim is opened, click on the file-open button at the top-left corner and select the XML file generated during Step 1.
  3. Hit the green play button to begin animation.

Here is a video illustrating this: http://www.youtube.com/watch?v=tz_hUuNwFDs

Wiki

For detailed instructions on installing NetAnim, FAQs, and loading the XML trace file (mentioned earlier) using NetAnim, please refer to: http://www.nsnam.org/wiki/NetAnim

Antenna Module

Design documentation

Overview

The Antenna module provides:

  1. a new base class (AntennaModel) that provides an interface for the modeling of the radiation pattern of an antenna;
  2. a set of classes derived from this base class, each of which models the radiation pattern of a different type of antenna.

AntennaModel

The AntennaModel uses the coordinate system adopted in [Balanis] and depicted in Figure Coordinate system of the AntennaModel. This system is obtained by translating the Cartesian coordinate system used by the ns-3 MobilityModel into the new origin o, which is the location of the antenna, and then transforming the coordinates of every generic point p of space from Cartesian coordinates (x,y,z) into spherical coordinates (r, \theta,\phi). The antenna model neglects the radial component r, and only considers the angle components (\theta, \phi). An antenna radiation pattern is then expressed as a mathematical function g(\theta, \phi) \longrightarrow \mathcal{R} that returns the gain (in dB) for each possible direction of transmission/reception. All angles are expressed in radians.

[Figure: Coordinate system of the AntennaModel]

Provided models

In this section we describe the antenna radiation pattern models that are included within the antenna module.

IsotropicAntennaModel

This antenna radiation pattern model provides a unitary gain (0 dB) for all directions.
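
For instance, the common AntennaModel interface can be exercised as follows (a minimal sketch; GetGainDb takes an Angles object holding the azimuth \phi and the inclination \theta in radians, and the chosen direction is arbitrary):

Ptr<IsotropicAntennaModel> antenna = CreateObject<IsotropicAntennaModel> ();
// For the isotropic antenna the returned gain is 0 dB in every direction
double gainDb = antenna->GetGainDb (Angles (DegreesToRadians (30), DegreesToRadians (90)));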

CosineAntennaModel

This is the cosine model described in [Chunjian]: the antenna gain is determined as:

g(\phi, \theta) = \cos^{n} \left(\frac{\phi - \phi_{0}}{2}  \right)

where \phi_{0} is the azimuthal orientation of the antenna (i.e., its direction of maximum gain) and the exponent

n = -\frac{3}{20 \log_{10} \left( \cos \frac{\phi_{3dB}}{4} \right)}

determines the desired 3dB beamwidth \phi_{3dB}. Note that this radiation pattern is independent of the inclination angle \theta.

A major difference between the model of [Chunjian] and the one implemented in the class CosineAntennaModel is that only the element factor (i.e., what is described by the above formulas) is considered. In fact, [Chunjian] also considered an additional antenna array factor. The latter is excluded because we expect that the average user would want to specify a given beamwidth exactly, without adding an array factor at a later stage which would in practice alter the effective beamwidth of the resulting radiation pattern.
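
In practice the model is typically configured through its attributes; the following sketch (assuming the Orientation, Beamwidth and MaxGain attributes of CosineAntennaModel) sets a 60 degree beamwidth pointing along \phi_{0} = 0:

Ptr<CosineAntennaModel> antenna = CreateObject<CosineAntennaModel> ();
antenna->SetAttribute ("Orientation", DoubleValue (0.0)); // phi_0, in degrees
antenna->SetAttribute ("Beamwidth", DoubleValue (60.0));  // 3 dB beamwidth, in degrees
antenna->SetAttribute ("MaxGain", DoubleValue (0.0));     // gain at boresight, in dB
// At phi = +/- 30 degrees (half the beamwidth) the returned gain is -3 dB
double gainDb = antenna->GetGainDb (Angles (DegreesToRadians (30), DegreesToRadians (90)));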

ParabolicAntennaModel

This model is based on the parabolic approximation of the main lobe radiation pattern. It is often used in the context of cellular systems to model the radiation pattern of a cell sector, see for instance [R4-092042a] and [Calcev]. The antenna gain in dB is determined as:

g_{dB}(\phi, \theta) = -\min \left( 12 \left(\frac{\phi  - \phi_{0}}{\phi_{3dB}} \right)^2, A_{max} \right)

where \phi_{0} is the azimuthal orientation of the antenna (i.e., its direction of maximum gain), \phi_{3dB} is its 3 dB beamwidth, and A_{max} is the maximum attenuation in dB of the antenna. Note that this radiation pattern is independent of the inclination angle \theta.
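
As a quick sanity check of the formula, at \phi - \phi_{0} = \phi_{3dB}/2 (i.e., at the edge of the beamwidth) the gain is g_{dB} = -\min \left( 12 \cdot (1/2)^2, A_{max} \right) = -3 dB whenever A_{max} > 3 dB, which is consistent with \phi_{3dB} being the 3 dB beamwidth.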

[Balanis] C.A. Balanis, “Antenna Theory - Analysis and Design”, Wiley, 2nd Ed.
[Chunjian] Li Chunjian, “Efficient Antenna Patterns for Three-Sector WCDMA Systems”, Master of Science Thesis, Chalmers University of Technology, Göteborg, Sweden, 2003
[Calcev] George Calcev and Matt Dillon, “Antenna Tilt Control in CDMA Networks”, in Proc. of the 2nd Annual International Wireless Internet Conference (WICON), 2006
[R4-092042a] 3GPP TSG RAN WG4 (Radio) Meeting #51, R4-092042, Simulation assumptions and parameters for FDD HeNB RF requirements.

User Documentation

The antenna module can be used with all the wireless technologies and physical layer models that support it. Currently, this includes the physical layer models based on the SpectrumPhy. Please refer to the documentation of each of these models for details.

Testing Documentation

In this section we describe the test suites included with the antenna module that verify its correct functionality.

Angles

The unit test suite angles verifies that the Angles class is constructed properly by correct conversion from 3D Cartesian coordinates according to the available methods (construction from a single vector and from a pair of vectors). For each method, several test cases are provided that compare the values (\phi, \theta) determined by the constructor to known reference values. The test passes if for each case the values are equal to the reference up to a tolerance of 10^{-10}, which accounts for numerical errors.

DegreesToRadians

The unit test suite degrees-radians verifies that the methods DegreesToRadians and RadiansToDegrees work properly by comparing with known reference values in a number of test cases. Each test case passes if the comparison is equal up to a tolerance of 10^{-10} which accounts for numerical errors.

IsotropicAntennaModel

The unit test suite isotropic-antenna-model checks that the IsotropicAntennaModel class works properly, i.e., always returns a 0 dB gain regardless of the direction.

CosineAntennaModel

The unit test suite cosine-antenna-model checks that the CosineAntennaModel class works properly. Several test cases are provided that check for the antenna gain value calculated at different directions and for different values of the orientation, the reference gain and the beamwidth. The reference gain is calculated by hand. Each test case passes if the reference gain in dB is equal to the value returned by CosineAntennaModel within a tolerance of 0.001, which accounts for the approximation done for the calculation of the reference values.

ParabolicAntennaModel

The unit test suite parabolic-antenna-model checks that the ParabolicAntennaModel class works properly. Several test cases are provided that check for the antenna gain value calculated at different directions and for different values of the orientation, the maximum attenuation and the beamwidth. The reference gain is calculated by hand. Each test case passes if the reference gain in dB is equal to the value returned by ParabolicAntennaModel within a tolerance of 0.001, which accounts for the approximation done for the calculation of the reference values.

Ad Hoc On-Demand Distance Vector (AODV)

This model implements the base specification of the Ad Hoc On-Demand Distance Vector (AODV) protocol. The implementation is based on RFC 3561.

The model was written by Elena Buchatskaia and Pavel Boyko of ITTP RAS, and is based on the ns-2 AODV model developed by the CMU/MONARCH group and optimized and tuned by Samir Das and Mahesh Marina, University of Cincinnati, and also on the AODV-UU implementation by Erik Nordström of Uppsala University.

Model Description

The source code for the AODV model lives in the directory src/aodv.

Design

Class ns3::aodv::RoutingProtocol implements all functionality of service packet exchange and inherits from ns3::Ipv4RoutingProtocol. The base class defines two virtual functions for packet routing and forwarding. The first one, ns3::aodv::RouteOutput, is used for locally originated packets, and the second one, ns3::aodv::RouteInput, is used for forwarding and/or delivering received packets.

Protocol operation depends on many adjustable parameters. Parameters for this functionality are attributes of ns3::aodv::RoutingProtocol. Parameter default values are drawn from the RFC and allow enabling/disabling of protocol features, such as broadcasting HELLO messages, broadcasting data packets and so on.
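
As an illustration, these attributes can be set through the AodvHelper when installing the routing protocol; a minimal sketch (assuming a NodeContainer named nodes, and using the EnableHello attribute as an example):

AodvHelper aodv;
aodv.Set ("EnableHello", BooleanValue (false)); // disable periodic HELLO messages
InternetStackHelper stack;
stack.SetRoutingHelper (aodv);                  // takes effect at the next Install ()
stack.Install (nodes);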

AODV discovers routes on demand. Therefore, the AODV model buffers all packets while a route request packet (RREQ) is disseminated. A packet queue is implemented in aodv-rqueue.cc. A smart pointer to the packet, ns3::Ipv4RoutingProtocol::ErrorCallback, ns3::Ipv4RoutingProtocol::UnicastForwardCallback, and the IP header are stored in this queue. The packet queue implements garbage collection of old packets and a queue size limit.

The routing table implementation supports garbage collection of old entries and the state machine defined in the standard. It is implemented as an STL map container, keyed by destination IP address.

Some elements of protocol operation aren’t described in the RFC. These elements generally concern cooperation of different OSI model layers. The model uses the following heuristics:

  • This AODV implementation can detect the presence of unidirectional links and avoid them if necessary. If the node the model receives an RREQ for is a neighbor, the cause may be a unidirectional link. This heuristic is taken from AODV-UU implementation and can be disabled.
  • Protocol operation strongly depends on the broken link detection mechanism. The model implements two such heuristics. First, this implementation supports HELLO messages. However, HELLO messages are not a good way to perform neighbor sensing in a wireless environment (at least not over 802.11). Therefore, one may experience bad performance when running over wireless. There are several reasons for this: 1) HELLO messages are broadcast. In 802.11, broadcasting is often done at a lower bit rate than unicasting, thus HELLO messages can travel further than unicast data. 2) HELLO messages are small, thus less prone to bit errors than data transmissions, and 3) broadcast transmissions are not guaranteed to be bidirectional, unlike unicast transmissions. Second, we use layer 2 feedback when possible. Links are considered to be broken if frame transmission results in a transmission failure for all retries. This mechanism is meant for active links and works faster than the first method.

The layer 2 feedback implementation relies on the TxErrHeader trace source, currently supported in AdhocWifiMac only.

Scope and Limitations

The model is for IPv4 only. The following optional protocol optimizations are not implemented:

  1. Expanding ring search.
  2. Local link repair.
  3. RREP, RREQ and HELLO message extensions.

These techniques require direct access to the IP header, which contradicts the assertion from the AODV RFC that AODV works over UDP. This model uses UDP for simplicity, hindering the ability to implement certain protocol optimizations. The model doesn't use low-layer raw sockets because they are not portable.

Future Work

No announced plans.

Applications

Placeholder chapter

Bridge NetDevice

Placeholder chapter

Some examples of the use of Bridge NetDevice can be found in examples/csma/ directory.

BRITE Integration

This model implements an interface to BRITE, the Boston University Representative Internet Topology gEnerator [1]. BRITE is a standard tool for generating realistic internet topologies. The ns-3 model, described herein, provides a helper class to facilitate generating ns-3 specific topologies using BRITE configuration files. BRITE builds the original graph, which is stored as nodes and edges in the ns-3 BriteTopologyHelper class. In the ns-3 integration of BRITE, the generator generates a topology and then provides access to leaf nodes for each AS generated. ns-3 users can then attach custom topologies to these leaf nodes either by creating them manually or using topology generators provided in ns-3.

There are three major types of topologies available in BRITE: Router, AS, and Hierarchical, which is a combination of AS and Router. For the purposes of ns-3 simulation, the most useful are likely to be Router and Hierarchical. Router-level topologies can be generated using either the Waxman model or the Barabasi-Albert model. Each model has different parameters that affect topology creation. For flat router topologies, all nodes are considered to be in the same AS.

BRITE Hierarchical topologies contain two levels. The first is the AS level. This level can also be created by using either the Waxman model or the Barabasi-Albert model. Then, for each node in the AS topology, a router-level topology is constructed. These router-level topologies can again use either the Waxman model or the Barabasi-Albert model. BRITE interconnects these separate router topologies as specified by the AS-level topology. Once the hierarchical topology is constructed, it is flattened into a large router-level topology.

Further information can be found in the BRITE user manual: http://www.cs.bu.edu/brite/publications/usermanual.pdf

Model Description

The model relies on building an external BRITE library, and then building some ns-3 helpers that call out to the library. The source code for the ns-3 helpers lives in the directory src/brite/helper.

Design

To generate the BRITE topology, ns-3 helpers call out to the external BRITE library, and using a standard BRITE configuration file, the BRITE code builds a graph with nodes and edges according to this configuration file. Please see the BRITE documentation or the example configuration files in src/brite/examples/conf_files to get a better grasp of BRITE configuration options. The graph built by BRITE is returned to ns-3, and an ns-3 implementation of the graph is built. Leaf nodes for each AS are available for the user to either attach custom topologies or install ns-3 applications directly.

References

[1] Alberto Medina, Anukool Lakhina, Ibrahim Matta, and John Byers. BRITE: An Approach to Universal Topology Generation. In Proceedings of the International Workshop on Modeling, Analysis and Simulation of Computer and Telecommunications Systems - MASCOTS ‘01, Cincinnati, Ohio, August 2001.

Usage

The brite-generic-example can be referenced to see basic usage of the BRITE interface. In summary, the BriteTopologyHelper is used as the interface point by passing in a BRITE configuration file. Along with the configuration file, a BRITE-formatted random seed file can also be passed in. If a seed file is not passed in, the helper will create a seed file using ns-3's UniformRandomVariable. Once the topology has been generated by BRITE, BuildBriteTopology() is called to create the ns-3 representation. Next, IP addresses can be assigned to the topology using either AssignIpv4Addresses() or AssignIpv6Addresses(). It should be noted that each point-to-point link in the topology will be treated as a new network; therefore, for IPv4 a /30 subnet should be used to avoid wasting a large amount of the available address space.
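
A minimal sketch of this workflow, closely following the generic example (the configuration file path is just one of the examples mentioned below, and the relevant module headers are assumed to be included):

BriteTopologyHelper bth ("src/brite/examples/conf_files/RTBarabasi20.conf");
bth.AssignStreams (3);                           // fix the random streams used for topology generation

InternetStackHelper stack;
bth.BuildBriteTopology (stack);                  // build the ns-3 representation of the BRITE graph

Ipv4AddressHelper address;
address.SetBase ("10.0.0.0", "255.255.255.252");
bth.AssignIpv4Addresses (address);               // /30 per point-to-point link, as recommended above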

Example BRITE configuration files can be found in /src/brite/examples/conf_files/. ASBarbasi and ASWaxman are examples of AS only topologies. The RTBarabasi and RTWaxman files are examples of router only topologies. Finally the TD_ASBarabasi_RTWaxman configuration file is an example of a Hierarchical topology that uses the Barabasi-Albert model for the AS level and the Waxman model for each of the router level topologies. Information on the BRITE parameters used in these files can be found in the BRITE user manual.

Building BRITE Integration

The first step is to download and build the ns-3 specific BRITE repository:

$ hg clone http://code.nsnam.org/BRITE
$ cd BRITE
$ make

This will build BRITE and create a library, libbrite.so, within the BRITE directory.

Once BRITE has been built successfully, we proceed to configure ns-3 with BRITE support. Change to your ns-3 directory:

$ ./waf configure --with-brite=/your/path/to/brite/source --enable-examples

Make sure it says ‘enabled’ beside ‘BRITE Integration’. If it does not, then something has gone wrong. Either you have forgotten to build BRITE first following the steps above, or ns-3 could not find your BRITE directory.

Next, build ns-3:

$ ./waf

Examples

For an example demonstrating BRITE integration run:

$ ./waf --run 'brite-generic-example'

By enabling the verbose parameter, the example will print out the node and edge information in a similar format to standard BRITE output. There are many other command-line parameters including confFile, tracing, and nix, described below:

confFile
A BRITE configuration file. Many different BRITE configuration file examples exist in the src/brite/examples/conf_files directory, for example, RTBarabasi20.conf and RTWaxman.conf. Please refer to the conf_files directory for more examples.
tracing
Enables ASCII tracing.
nix
Enables nix-vector routing. Global routing is used by default.

The generic BRITE example also supports visualization using PyViz, assuming python bindings in ns-3 are enabled:

$ ./waf --run brite-generic-example --vis

Simulations involving BRITE can also be used with MPI. The total number of MPI instances is passed to the BRITE topology helper, where a modulo divide is used to assign the nodes for each AS to an MPI instance. An example can be found in src/brite/examples:

$ mpirun -np 2 ./waf --run brite-MPI-example

Please see the ns-3 MPI documentation for information on setting up MPI with ns-3.

Buildings Module

Design documentation

Overview

The Buildings module provides:

  1. a new class (Building) that models the presence of a building in a simulation scenario;
  2. a new class (MobilityBuildingInfo) that allows one to specify the location, size and characteristics of buildings present in the simulated area, and allows the placement of nodes inside those buildings;
  3. a container class, BuildingsPropagationLossModel, with the definition of the most useful pathloss models and the corresponding variables;
  4. a new propagation model (HybridBuildingsPropagationLossModel), working with the mobility model just introduced, that allows modeling the phenomenon of indoor/outdoor propagation in the presence of buildings;
  5. a simplified model working only with Okumura Hata (OhBuildingsPropagationLossModel), considering the phenomenon of indoor/outdoor propagation in the presence of buildings.

The models have been designed with LTE in mind, though their implementation is in fact independent from any LTE-specific code, and can be used with other ns-3 wireless technologies as well (e.g., wifi, wimax).

The HybridBuildingsPropagationLossModel pathloss model included is obtained through a combination of several well known pathloss models in order to mimic different environmental scenarios such as urban, suburban and open areas. Moreover, the model considers both indoor and outdoor communication, since an HeNB might be installed either inside or outside a building. For outdoor <-> indoor communication, the model also considers the type of building according to some general criteria, such as the wall penetration losses of common materials; moreover, it includes a general configuration for the internal walls in indoor communications.

The OhBuildingsPropagationLossModel pathloss model has been created to simplify the previous one by removing the thresholds for switching from one model to another. To do this, only one of the available propagation models is used (i.e., Okumura Hata). The presence of buildings is still considered in the model; therefore all the considerations above regarding the building type are still valid. The same holds for the environmental scenario and frequency, since both are parameters of the model considered.

The Building class

The model includes a specific class called Building, which contains an ns3 Box class for defining the dimensions of the building. In order to implement the characteristics of the pathloss models included, the Building class supports the following attributes:

  • building type:
    • Residential (default value)
    • Office
    • Commercial
  • external walls type
    • Wood
    • ConcreteWithWindows (default value)
    • ConcreteWithoutWindows
    • StoneBlocks
  • number of floors (default value 1, which means only ground-floor)
  • number of rooms in x-axis (default value 1)
  • number of rooms in y-axis (default value 1)

The Building class is based on the following assumptions:

  • a building is represented as a rectangular parallelepiped (i.e., a box)
  • the walls are parallel to the x, y, and z axis
  • a building is divided into a grid of rooms, identified by the following parameters:
    • number of floors
    • number of rooms along the x-axis
    • number of rooms along the y-axis
  • the z axis is the vertical axis, i.e., floor numbers increase for increasing z axis values
  • the x and y room indices start from 1 and increase along the x and y axis respectively
  • all rooms in a building have equal size

The MobilityBuildingInfo class

The MobilityBuildingInfo class, which inherits from the ns3 class Object, is in charge of maintaining information about the position of a node with respect to buildings. The information managed by MobilityBuildingInfo is:

  • whether the node is indoor or outdoor
  • if indoor:
    • in which building the node is
    • in which room the node is positioned (x, y and floor room indices)

The class MobilityBuildingInfo is used by the BuildingsPropagationLossModel class, which inherits from the ns3 class PropagationLossModel and manages the pathloss computation of the single components and their composition according to the nodes' positions. Moreover, it also implements the shadowing, that is, the loss due to obstacles in the main path (i.e., vegetation, buildings, etc.).

It is to be noted that MobilityBuildingInfo can be used by any other propagation model. However, at the time of this writing, only the models defined in the buildings module are designed to consider the constraints introduced by buildings.
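
As an illustration, once BuildingsHelper::Install () has been called (see the user documentation below), the information stored in MobilityBuildingInfo can be queried as in the following sketch (assuming node is a Ptr<Node> with a mobility model installed; the method names are those of the MobilityBuildingInfo API):

Ptr<MobilityModel> mm = node->GetObject<MobilityModel> ();
Ptr<MobilityBuildingInfo> info = mm->GetObject<MobilityBuildingInfo> ();
if (info->IsIndoor ())
  {
    Ptr<Building> b = info->GetBuilding ();   // building containing the node
    uint8_t floor = info->GetFloorNumber ();  // floor index
    uint8_t roomX = info->GetRoomNumberX ();  // room indices along x and y
    uint8_t roomY = info->GetRoomNumberY ();
  }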

ItuR1238PropagationLossModel

This class implements a building-dependent indoor propagation loss model based on the ITU P.1238 model, which includes losses due to the type of building (i.e., residential, office and commercial). The analytical expression is given in the following.

L_\mathrm{total} = 20\log f + N\log d + L_f(n)- 28 [dB]

where:

N = \left\{ \begin{array}{lll} 28 & residential \\ 30 & office \\ 22 & commercial\end{array} \right. : power loss coefficient [dB]

L_f = \left\{ \begin{array}{lll} 4n & residential \\ 15+4(n-1) & office \\ 6+3(n-1) & commercial\end{array} \right.

n : number of floors between base station and mobile (n\ge 1)

f : frequency [MHz]

d : distance (where d > 1) [m]
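
As a worked example (the values are chosen for illustration only), for an office building with f = 2140 MHz, d = 30 m and n = 1, we have N = 30 and L_f(1) = 15, hence L_\mathrm{total} = 20\log_{10}(2140) + 30\log_{10}(30) + 15 - 28 \approx 66.6 + 44.3 + 15 - 28 \approx 97.9 dB.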

BuildingsPropagationLossModel

The BuildingsPropagationLossModel provides an additional set of building-dependent pathloss model elements that are used to implement different pathloss logics. These pathloss model elements are described in the following subsections.

External Wall Loss (EWL)

This component models the penetration loss through walls for indoor to outdoor communications and vice-versa. The values are taken from the [cost231] model.

  • Wood ~ 4 dB
  • Concrete with windows (not metallized) ~ 7 dB
  • Concrete without windows ~ 15 dB (spans between 10 and 20 in COST231)
  • Stone blocks ~ 12 dB

Internal Walls Loss (IWL)

This component models the penetration loss occurring in indoor-to-indoor communications within the same building. The total loss is calculated assuming that each single internal wall has a constant penetration loss L_{siw}, and approximating the number of walls that are penetrated with the Manhattan distance (in number of rooms) between the transmitter and the receiver. In detail, let x_1, y_1, x_2, y_2 denote the room number along the x and y axis respectively for user 1 and 2; the total loss L_{IWL} is calculated as

L_{IWL} = L_{siw} (|x_1 -x_2| + |y_1 - y_2|)

Height Gain Model (HG)

This component models the gain due to the fact that the transmitting device is on a floor above the ground. In the literature [turkmani] this gain has been evaluated as about 2 dB per floor. This gain can be applied to all the indoor-to-outdoor communications and vice-versa.

Shadowing Model

The shadowing is modeled according to a log-normal distribution with variable standard deviation as function of the relative position (indoor or outdoor) of the MobilityModel instances involved. One random value is drawn for each pair of MobilityModels, and stays constant for that pair during the whole simulation. Thus, the model is appropriate for static nodes only.

The model considers that the mean of the shadowing loss in dB is always 0. For the variance, the model considers three possible values of standard deviation, in detail:

  • outdoor (m_shadowingSigmaOutdoor, default value of 7 dB) \rightarrow X_\mathrm{O} \sim N(\mu_\mathrm{O}, \sigma_\mathrm{O}^2).
  • indoor (m_shadowingSigmaIndoor, default value of 10 dB) \rightarrow X_\mathrm{I} \sim N(\mu_\mathrm{I}, \sigma_\mathrm{I}^2).
  • external walls penetration (m_shadowingSigmaExtWalls, default value of 5 dB) \rightarrow X_\mathrm{W} \sim N(\mu_\mathrm{W}, \sigma_\mathrm{W}^2)

The simulator generates a shadowing value per each active link according to the nodes' positions the first time the link is used for transmitting. In case of transmissions from outdoor nodes to indoor ones, and vice-versa, the standard deviation (\sigma_\mathrm{IO}) has to be calculated as the square root of the sum of the squared values of the standard deviation for outdoor nodes and the one for the external walls penetration. This is due to the fact that the components producing the shadowing are independent of each other; therefore, the variance of a distribution resulting from the sum of two independent normal ones is the sum of the variances.

X \sim N(\mu,\sigma^2) \mbox{ and } Y \sim N(\nu,\tau^2)

Z = X + Y \sim N(\mu + \nu, \sigma^2 + \tau^2)

\Rightarrow \sigma_\mathrm{IO} = \sqrt{\sigma_\mathrm{O}^2 + \sigma_\mathrm{W}^2}

Pathloss logics

In the following we describe the different pathloss logics that are implemented by inheriting from BuildingsPropagationLossModel.

HybridBuildingsPropagationLossModel

The HybridBuildingsPropagationLossModel pathloss model included is obtained through a combination of several well known pathloss models in order to mimic different outdoor and indoor scenarios, as well as indoor-to-outdoor and outdoor-to-indoor scenarios. In detail, the class HybridBuildingsPropagationLossModel integrates the following pathloss models:

  • OkumuraHataPropagationLossModel (OH) (at frequencies > 2.3 GHz substituted by Kun2600MhzPropagationLossModel)
  • ItuR1411LosPropagationLossModel and ItuR1411NlosOverRooftopPropagationLossModel (I1411)
  • ItuR1238PropagationLossModel (I1238)
  • the pathloss elements of the BuildingsPropagationLossModel (EWL, HG, IWL)

The following pseudo-code illustrates how the different pathloss model elements described above are integrated in HybridBuildingsPropagationLossModel:

if (txNode is outdoor)
  then
    if (rxNode is outdoor)
      then
        if (distance > 1 km)
          then
            if (rxNode or txNode is below the rooftop)
              then
                L = I1411
              else
                L = OH
          else
            L = I1411
      else (rxNode is indoor)
        if (distance > 1 km)
          then
            if (rxNode or txNode is below the rooftop)
              L = I1411 + EWL + HG
            else
              L = OH + EWL + HG
          else
            L = I1411 + EWL + HG
else (txNode is indoor)
  if (rxNode is indoor)
    then
     if (same building)
        then
          L = I1238 + IWL
        else
          L = I1411 + 2*EWL
   else (rxNode is outdoor)
    if (distance > 1 km)
      then
        if (rxNode or txNode is below the rooftop)
              then
                L = I1411 + EWL + HG
              else
                L = OH + EWL + HG
      else
        L = I1411 + EWL

We note that, for the case of communication between two nodes below rooftop level with distance greater than 1 km, we still consider the I1411 model, since OH is specifically designed for macro cells and therefore for antennas above the rooftop level.

For the ITU-R P.1411 model we consider both the LoS and NLoS versions. In particular, we consider LoS propagation for distances that are shorter than a tunable threshold (m_itu1411NlosThreshold). In case of NLoS propagation, the over-the-rooftop model is taken into consideration for modeling both macro BS and SC. In the NLoS case, several scenario-dependent parameters have been included, such as average street width, orientation, etc. The values of such parameters have to be properly set according to the scenario implemented; the model does not calculate their values natively. In case no values are provided, the standard ones are used, apart from the heights of the mobile and the BS, whose integrity is instead tested directly in the code (i.e., they have to be greater than zero).

We also note that the use of different propagation models (OH, I1411, I1238 with their variants) in HybridBuildingsPropagationLossModel can result in discontinuities of the pathloss with respect to distance. A proper tuning of the attributes (especially the distance threshold attributes) can avoid these discontinuities. However, since the behavior of each model depends on several other parameters (frequency, node height, etc.), there is no default value of these thresholds that can avoid the discontinuities in all possible configurations. Hence, an appropriate tuning of these parameters is left to the user.

OhBuildingsPropagationLossModel

The OhBuildingsPropagationLossModel class has been created as a simple means to solve the discontinuity problems of HybridBuildingsPropagationLossModel without doing scenario-specific parameter tuning. The solution is to use only one propagation loss model (i.e., Okumura Hata), while retaining the structure of the pathloss logic for the calculation of other path loss components (such as wall penetration losses). The result is a model that is free of discontinuities (except those due to walls), but that is less realistic overall for a generic scenario with buildings and outdoor/indoor users, e.g., because Okumura Hata is suitable neither for indoor communications nor for outdoor communications below rooftop level.

In detail, the class OhBuildingsPropagationLossModel integrates the following pathloss models:

  • OkumuraHataPropagationLossModel (OH)
  • the pathloss elements of the BuildingsPropagationLossModel (EWL, HG, IWL)

The following pseudo-code illustrates how the different pathloss model elements described above are integrated in OhBuildingsPropagationLossModel:

if (txNode is outdoor)
  then
    if (rxNode is outdoor)
      then
        L = OH
      else (rxNode is indoor)
        L = OH + EWL
else (txNode is indoor)
  if (rxNode is indoor)
    then
     if (same building)
        then
          L = OH + IWL
        else
          L = OH + 2*EWL
   else (rxNode is outdoor)
      L = OH + EWL

We note that OhBuildingsPropagationLossModel is a significant simplification with respect to HybridBuildingsPropagationLossModel, due to the fact that OH is always used. While this gives a less accurate model in some scenarios (especially below rooftop and indoor), it effectively avoids the issue of pathloss discontinuities that affects HybridBuildingsPropagationLossModel.

User Documentation

How to use buildings in a simulation

In this section we explain the basic usage of the buildings model within a simulation program.

Include the headers

Add this at the beginning of your simulation program:

#include <ns3/buildings-module.h>

Create a building

As an example, let’s create a residential 10 x 20 x 10 building:

double x_min = 0.0;
double x_max = 10.0;
double y_min = 0.0;
double y_max = 20.0;
double z_min = 0.0;
double z_max = 10.0;
Ptr<Building> b = CreateObject <Building> ();
b->SetBoundaries (Box (x_min, x_max, y_min, y_max, z_min, z_max));
b->SetBuildingType (Building::Residential);
b->SetExtWallsType (Building::ConcreteWithWindows);
b->SetNFloors (3);
b->SetNRoomsX (3);
b->SetNRoomsY (2);

This building has three floors and an internal 3 x 2 grid of rooms of equal size.

The helper class GridBuildingAllocator is also available to easily create a set of buildings with identical characteristics placed on a rectangular grid. Here’s an example of how to use it:

Ptr<GridBuildingAllocator>  gridBuildingAllocator;
gridBuildingAllocator = CreateObject<GridBuildingAllocator> ();
gridBuildingAllocator->SetAttribute ("GridWidth", UintegerValue (3));
gridBuildingAllocator->SetAttribute ("LengthX", DoubleValue (7));
gridBuildingAllocator->SetAttribute ("LengthY", DoubleValue (13));
gridBuildingAllocator->SetAttribute ("DeltaX", DoubleValue (3));
gridBuildingAllocator->SetAttribute ("DeltaY", DoubleValue (3));
gridBuildingAllocator->SetAttribute ("Height", DoubleValue (6));
gridBuildingAllocator->SetBuildingAttribute ("NRoomsX", UintegerValue (2));
gridBuildingAllocator->SetBuildingAttribute ("NRoomsY", UintegerValue (4));
gridBuildingAllocator->SetBuildingAttribute ("NFloors", UintegerValue (2));
gridBuildingAllocator->SetAttribute ("MinX", DoubleValue (0));
gridBuildingAllocator->SetAttribute ("MinY", DoubleValue (0));
gridBuildingAllocator->Create (6);

This will create a 3x2 grid of 6 buildings, each 7 x 13 x 6 m with 2 x 4 rooms inside and 2 floors; the buildings are spaced by 3 m on both the x and the y axis.

Setup nodes and mobility models

Nodes and mobility models are configured as usual; however, in order to use them with the buildings model you need an additional call to BuildingsHelper::Install(), so as to let the mobility model include the information on their position w.r.t. the buildings. Here is an example:

MobilityHelper mobility;
mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
ueNodes.Create (2);
mobility.Install (ueNodes);
BuildingsHelper::Install (ueNodes);

It is to be noted that any mobility model can be used. However, the user is advised to make sure that the behavior of the mobility model being used is consistent with the presence of Buildings. For example, using a simple random mobility over the whole simulation area in the presence of buildings might easily result in nodes moving in and out of buildings, regardless of the presence of walls.

Place some nodes

You can place nodes in your simulation using several methods, which are described in the following.

Legacy positioning methods

Any legacy ns-3 positioning method can be used to place nodes in the simulation. For example, you can place nodes manually like this:

Ptr<ConstantPositionMobilityModel> mm0 = enbNodes.Get (0)->GetObject<ConstantPositionMobilityModel> ();
Ptr<ConstantPositionMobilityModel> mm1 = enbNodes.Get (1)->GetObject<ConstantPositionMobilityModel> ();
mm0->SetPosition (Vector (5.0, 5.0, 1.5));
mm1->SetPosition (Vector (30.0, 40.0, 1.5));

Alternatively, you could use any existing PositionAllocator class. The coordinates of the node will determine whether it is placed outdoor or indoor and, if indoor, in which building and room it is placed.

Building-specific positioning methods

The following position allocator classes are available to place nodes in special positions with respect to buildings; a usage sketch follows the list:

  • RandomBuildingPositionAllocator: Allocate each position by randomly choosing a building from the list of all buildings, and then randomly choosing a position inside the building.
  • RandomRoomPositionAllocator: Allocate each position by randomly choosing a room from the list of rooms in all buildings, and then randomly choosing a position inside the room.
  • SameRoomPositionAllocator: Walks a given NodeContainer sequentially, and for each node allocates a new position randomly in the same room as that node.
  • FixedRoomPositionAllocator: Generate a random position uniformly distributed in the volume of a chosen room inside a chosen building.
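
A minimal sketch using one of these allocators together with MobilityHelper (the allocator and the mobility model choices are only illustrative):

MobilityHelper mobility;
mobility.SetPositionAllocator ("ns3::RandomBuildingPositionAllocator");
mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
mobility.Install (ueNodes);
BuildingsHelper::Install (ueNodes);
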
Make the Mobility Model Consistent

Important: whenever you use buildings, you have to issue the following command after you have placed all nodes and buildings in the simulation:

BuildingsHelper::MakeMobilityModelConsistent ();

This command will go through the lists of all nodes and of all buildings, determine for each node whether it is indoor or outdoor, and, if indoor, also determine the building in which the node is located and the corresponding floor and room number inside the building.

Building-aware pathloss model

After you placed buildings and nodes in a simulation, you can use a building-aware pathloss model in a simulation exactly in the same way you would use any regular path loss model. How to do this is specific for the wireless module that you are considering (lte, wifi, wimax, etc.), so please refer to the documentation of that model for specific instructions.

Main configurable attributes

The Building class has the following configurable parameters:

  • building type: Residential, Office and Commercial.
  • external walls type: Wood, ConcreteWithWindows, ConcreteWithoutWindows and StoneBlocks.
  • building bounds: a Box class with the building bounds.
  • number of floors.
  • number of rooms in x-axis and y-axis (rooms can be placed only in a grid way).

The BuildingMobilityLossModel parameter configurable with the ns3 attribute system is the bound (string Bounds) of the simulation area, provided as a Box class with the area bounds. Moreover, by means of its methods the following parameters can be configured:

  • the number of the floor the node is placed on (default 0).
  • the position in the rooms grid.

The BuildingsPropagationLossModel class has the following parameters configurable with the attribute system:

  • Frequency: reference frequency (default 2160 MHz); note that by setting the frequency the wavelength is set accordingly automatically, and vice versa.
  • Lambda: the wavelength (0.139 meters, considering the above frequency).
  • ShadowSigmaOutdoor: the standard deviation of the shadowing for outdoor nodes (default 7.0).
  • ShadowSigmaIndoor: the standard deviation of the shadowing for indoor nodes (default 8.0).
  • ShadowSigmaExtWalls: the standard deviation of the shadowing due to external walls penetration for outdoor to indoor communications (default 5.0).
  • RooftopLevel: the level of the rooftop of the building in meters (default 20 meters).
  • Los2NlosThr: the distance of the switching point between the line-of-sight and the non-line-of-sight propagation model in meters (default 200 meters).
  • ITU1411DistanceThr: the distance of the switching point between short range (ITU 1411) communications and long range (Okumura Hata) in meters (default 200 meters).
  • MinDistance: the minimum distance in meters between two nodes for evaluating the pathloss (considered negligible below this threshold) (default 0.5 meters).
  • Environment: the environment scenario among Urban, SubUrban and OpenAreas (default Urban).
  • CitySize: the dimension of the city among Small, Medium, Large (default Large).

In order to use the hybrid mode, the class to be used is HybridBuildingsPropagationLossModel, which allows the selection of the proper pathloss model according to the pathloss logic presented in the design chapter. However, this solution has the problem that the pathloss model switching points might present discontinuities due to the different characteristics of the models. This implies that, according to the specific scenario, the thresholds used for switching have to be properly tuned. The simpler OhBuildingsPropagationLossModel overcomes this problem by using only the Okumura Hata model and the wall penetration losses.
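
For instance, some of the attributes listed above can be set directly on a model instance as in the following sketch (the attribute names and enum string values are those listed above and are assumed to match the attribute checkers; attaching the model to a channel is technology-specific, as explained in the Building-aware pathloss model section above):

Ptr<HybridBuildingsPropagationLossModel> lossModel = CreateObject<HybridBuildingsPropagationLossModel> ();
lossModel->SetAttribute ("Frequency", DoubleValue (2160e6));    // reference frequency, in Hz
lossModel->SetAttribute ("Environment", StringValue ("Urban"));
lossModel->SetAttribute ("CitySize", StringValue ("Large"));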

Testing Documentation

Overview

To test and validate the ns-3 Building Pathloss module, some test suites are provided, integrated with the ns-3 test framework. To run them, you need to have configured the build of the simulator in this way:

$ ./waf configure --enable-tests --enable-modules=buildings
$ ./test.py

The above will run not only the test suites belonging to the buildings module, but also those belonging to all the other ns-3 modules on which the buildings module depends. See the ns-3 manual for generic information on the testing framework.

You can get a more detailed report in HTML format in this way:

$ ./test.py -w results.html

After the above command has run, you can view the detailed result for each test by opening the file results.html with a web browser.

You can run each test suite separately using this command:

$ ./test.py -s test-suite-name

For more details about test.py and the ns-3 testing framework, please refer to the ns-3 manual.

Description of the test suites

BuildingsHelper test

The test suite buildings-helper checks that the method BuildingsHelper::MakeAllInstancesConsistent () works properly, i.e., that the BuildingsHelper is successful in determining whether nodes are outdoor or indoor, and, if indoor, that they are located in the correct building, room and floor. Several test cases are provided with different buildings (having different size, position, rooms and floors) and different node positions. The test passes if every node is located correctly.

BuildingPositionAllocator test

The test suite building-position-allocator features two test cases that check that RandomRoomPositionAllocator and SameRoomPositionAllocator, respectively, work properly. Each test case involves a single 2x3x2 room building (12 rooms in total) at known coordinates and, respectively, 24 and 48 nodes. Both tests check that the number of nodes allocated in each room is the expected one and that the position of the nodes is also correct.

Buildings Pathloss tests

The test suite buildings-pathloss-model provides different unit tests that compare the expected results of the buildings pathloss module in specific scenarios with pre-calculated values obtained offline with an Octave script (test/reference/buildings-pathloss.m). The tests are considered passed if the two values are equal up to a tolerance of 0.1, which is deemed appropriate for the typical usage of pathloss values (which are in dB).

In the following we detail the scenarios considered; they have been selected to cover the wide set of possible pathloss logic combinations. The pathloss logic is therefore implicitly tested.

Test #1 Okumura Hata

In this test we test the standard Okumura Hata model; therefore both the eNB and the UE are placed outdoors at a distance of 2000 m. The frequency used is the E-UTRA band #5, which corresponds to 869 MHz (see table 5.5-1 of 36.101). The test also includes the validation of the area extensions (i.e., urban, suburban and open areas) and of the city size (small, medium and large).

Test #2 COST231 Model

This test is aimed at validating the COST231 model. The test is similar to the Okumura Hata one, except that the frequency used is the EUTRA band #1 (2140 MHz) and that the test can be performed only for large and small cities in urban scenarios due to model limitations.

Test #3 2.6 GHz model

This test validates the 2.6 GHz Kun model. The test is similar to the Okumura Hata one, except that the frequency is the EUTRA band #7 (2620 MHz) and the test can be performed only in the urban scenario.

Test #4 ITU1411 LoS model

This test is aimed at validating the ITU1411 model in case of line-of-sight transmissions within street canyons. In this case the UE is placed 100 meters away from the eNB, since the threshold for switching between LoS and NLoS is left at its default value (i.e., 200 m).

Test #5 ITU1411 NLoS model

This test is aimed at validating the ITU1411 model in case of non-line-of-sight over-the-rooftop transmissions. In this case the UE is placed 900 meters away from the eNB, in order to be above the threshold for switching between LoS and NLoS, which is left at its default value (i.e., 200 m).

Test #6 ITUP1238 model

This test is aimed at validating the ITUP1238 model in case of indoor transmissions. In this case both the UE and the eNB are placed in a residential building with walls made of concrete with windows. The UE is placed on the second floor, 30 meters away from the eNB, which is placed on the first floor.

Test #7 Outdoor -> Indoor with Okumura Hata model

This test validates the outdoor-to-indoor transmissions for large distances. In this case the UE is placed in a residential building with walls made of concrete with windows, 2000 meters away from the outdoor eNB.

Test #8 Outdoor -> Indoor with ITU1411 model

This test validates the outdoor-to-indoor transmissions for short distances. In this case the UE is placed in a residential building with walls made of concrete with windows, 100 meters away from the outdoor eNB.

Test #9 Indoor -> Outdoor with ITU1411 model

This test validates the indoor-to-outdoor transmissions for very short distances. In this case the eNB is placed on the second floor of a residential building with walls made of concrete with windows, 100 meters away from the outdoor UE (i.e., LoS communication). Therefore the height gain has to be included in the pathloss evaluation.

Test #10 Indoor -> Outdoor with ITU1411 model

This test validates the indoor-to-outdoor transmissions for short distances. In this case the eNB is placed on the second floor of a residential building with walls made of concrete with windows, 500 meters away from the outdoor UE (i.e., NLoS communication). Therefore the height gain has to be included in the pathloss evaluation.

Buildings Shadowing Test

The test suite buildings-shadowing-test is a unit test intended to verify the statistical distribution of the shadowing model implemented by BuildingsPathlossModel. The shadowing is modeled according to a normal distribution with mean \mu = 0 and variable standard deviation \sigma, according to models commonly used in the literature. Three test cases are provided, which cover the cases of indoor, outdoor and indoor-to-outdoor communications. Each test case generates 1000 different samples of shadowing for different pairs of MobilityModel instances in a given scenario. Shadowing values are obtained by subtracting from the total loss value returned by HybridBuildingsPathlossModel the path loss component, which is constant and pre-determined for each test case. The test verifies that the sample mean and sample variance of the shadowing values fall within the 99% confidence interval of the sample mean and sample variance. The test also verifies that the shadowing values returned at successive times for the same pair of MobilityModel instances are constant.

References

[turkmani] Turkmani A.M.D., J.D. Parson and D.G. Lewis, “Radio propagation into buildings at 441, 900 and 1400 MHz”, in Proc. of 4th Int. Conference on Land Mobile Radio, 1987.

Click Modular Router Integration

Click is a software architecture for building configurable routers. By using different combinations of packet processing units called elements, a Click router can be made to perform a specific kind of functionality. This flexibility provides a good platform for testing and experimenting with different protocols.

Model Description

The source code for the Click model lives in the directory src/click.

Design

ns-3’s design is well suited for an integration with Click due to the following reasons:

  • Packets in ns-3 are serialised/deserialised as they move up/down the stack. This allows ns-3 packets to be passed to and from Click as they are.
  • This also means that any kind of ns-3 traffic generator and transport should work easily on top of Click.
  • By striving to implement Click as an Ipv4RoutingProtocol instance, we can avoid significant changes to the LL and MAC layers of the ns-3 code.

The design goal was to make the ns-3-click public API simple enough such that the user needs to merely add an Ipv4ClickRouting instance to the node, and inform each Click node of the Click configuration file (.click file) that it is to use.

This model implements the interface to the Click Modular Router and provides the Ipv4ClickRouting class to allow a node to use Click for external routing. Unlike normal Ipv4RoutingProtocol sub types, Ipv4ClickRouting doesn’t use a RouteInput() method, but instead, receives a packet on the appropriate interface and processes it accordingly. Note that you need to have a routing table type element in your Click graph to use Click for external routing. This is needed by the RouteOutput() function inherited from Ipv4RoutingProtocol. Furthermore, a Click based node uses a different kind of L3 in the form of Ipv4L3ClickProtocol, which is a trimmed down version of Ipv4L3Protocol. Ipv4L3ClickProtocol passes on packets passing through the stack to Ipv4ClickRouting for processing.

Developing a Simulator API to allow ns-3 to interact with Click

Much of the API is already well defined, which allows Click to probe for information from the simulator (like a Node’s ID, an Interface ID and so forth). By retaining most of the methods, it should be possible to write new implementations specific to ns-3 for the same functionality.

Hence, for the Click integration with ns-3, a class named Ipv4ClickRouting will handle the interaction with Click. The code for the same can be found in src/click/model/ipv4-click-routing.{cc,h}.

Packet hand off between ns-3 and Click

There are four kinds of packet hand-offs that can occur between ns-3 and Click.

  • L4 to L3
  • L3 to L4
  • L3 to L2
  • L2 to L3

To handle these hand-offs, we implement Ipv4L3ClickProtocol, a stripped down version of Ipv4L3Protocol. Ipv4L3ClickProtocol passes packets to and from Ipv4ClickRouting as appropriate in order to perform routing.

Scope and Limitations

  • In its current state, the NS-3 Click Integration is limited to use only with L3, leaving NS-3 to handle L2. We are currently working on adding Click MAC support as well. See the usage section to make sure that you design your Click graphs accordingly.
  • Furthermore, ns-3-click will work only with userlevel elements. The complete list of elements is available at http://read.cs.ucla.edu/click/elements. Elements that have ‘all’, ‘userlevel’ or ‘ns’ mentioned beside them may be used.
  • As of now, the ns-3 interface to Click is Ipv4 only. We will be adding Ipv6 support in the future.

References

  • Eddie Kohler, Robert Morris, Benjie Chen, John Jannotti, and M. Frans Kaashoek. The click modular router. ACM Transactions on Computer Systems 18(3), August 2000, pages 263-297.
  • Lalith Suresh P., and Ruben Merz. Ns-3-click: click modular router integration for ns-3. In Proc. of 3rd International ICST Workshop on NS-3 (WNS3), Barcelona, Spain. March, 2011.
  • Michael Neufeld, Ashish Jain, and Dirk Grunwald. Nsclick: bridging network simulation and deployment. MSWiM ‘02: Proceedings of the 5th ACM international workshop on Modeling analysis and simulation of wireless and mobile systems, 2002, Atlanta, Georgia, USA. http://doi.acm.org/10.1145/570758.570772

Usage

Building Click

The first step is to clone Click from the github repository and build it:

$ git clone https://github.com/kohler/click
$ cd click/
$ ./configure --disable-linuxmodule --enable-nsclick --enable-wifi
$ make

The --enable-wifi flag may be skipped if you don’t intend to use Click with Wifi. Note: You don’t need to do a ‘make install’.

Once Click has been built successfully, change into the ns-3 directory and configure ns-3 with Click Integration support:

$ ./waf configure --enable-examples --enable-tests --with-nsclick=/path/to/click/source

Hint: If you have click installed one directory above ns-3 (such as in the ns-3-allinone directory), and the name of the directory is ‘click’ (or a symbolic link to the directory is named ‘click’), then the --with-nsclick specifier is not necessary; the ns-3 build system will successfully find the directory.

If it says ‘enabled’ beside ‘NS-3 Click Integration Support’, then you’re good to go. Note: If running modular ns-3, the minimum set of modules required to run all ns-3-click examples is wifi, csma and config-store.

Next, try running one of the examples:

$ ./waf --run nsclick-simple-lan

You may then view the resulting .pcap traces, which are named nsclick-simple-lan-0-0.pcap and nsclick-simple-lan-0-1.pcap.

Click Graph Instructions

The following should be kept in mind when making your Click graph:

  • Only userlevel elements can be used.
  • You will need to replace FromDevice and ToDevice elements with FromSimDevice and ToSimDevice elements.
  • Packets to the kernel are sent up using ToSimDevice(tap0,IP).
  • For any node, the device which sends/receives packets to/from the kernel, is named ‘tap0’. The remaining interfaces should be named eth0, eth1 and so forth (even if you’re using wifi). Please note that the device numbering should begin from 0. In future, this will be made flexible so that users can name devices in their Click file as they wish.
  • A routing table element is mandatory. The OUTports of the routing table element should correspond to the interface number of the device through which the packet will ultimately be sent out. Violating this rule will lead to really weird packet traces. This routing table element’s name should then be passed to the Ipv4ClickRouting protocol object as a simulation parameter. See the Click examples for details.
  • The current implementation leaves Click with mainly L3 functionality, with ns-3 handling L2. We will soon begin working to support the use of MAC protocols on Click as well. This means that as of now, Click’s Wifi specific elements cannot be used with ns-3.

Debugging Packet Flows from Click

From any point within a Click graph, you may use the Print (http://read.cs.ucla.edu/click/elements/print) element and its variants for pretty printing of packet contents. Furthermore, you may generate pcap traces of packets flowing through a Click graph by using the ToDump (http://read.cs.ucla.edu/click/elements/todump) element as well. For instance:

myarpquerier
 -> Print(fromarpquery,64)
 -> ToDump(out_arpquery,PER_NODE 1)
 -> ethout;

will print the contents of packets that flow out of the ArpQuerier, then generate a pcap trace file with the suffix ‘out_arpquery’ for each node using the Click file, before pushing packets onto ‘ethout’.

Helper

To have a node run Click, the easiest way would be to use the ClickInternetStackHelper class in your simulation script. For instance:

ClickInternetStackHelper click;
click.SetClickFile (myNodeContainer, "nsclick-simple-lan.click");
click.SetRoutingTableElement (myNodeContainer, "u/rt");
click.Install (myNodeContainer);

The example scripts inside src/click/examples/ demonstrate the use of Click based nodes in different scenarios. The helper source can be found inside src/click/helper/click-internet-stack-helper.{h,cc}

Examples

The following examples have been written, which can be found in src/click/examples/:

  • nsclick-simple-lan.cc and nsclick-raw-wlan.cc: A Click based node communicating with a normal ns-3 node without Click, using Csma and Wifi respectively. It also demonstrates the use of TCP on top of Click, something which the original nsclick implementation for NS-2 couldn’t achieve.
  • nsclick-udp-client-server-csma.cc and nsclick-udp-client-server-wifi.cc: A 3 node LAN (Csma and Wifi respectively) wherein 2 Click based nodes each run a UDP client that sends packets to a third Click based node running a UDP server.
  • nsclick-routing.cc: One Click based node communicates to another via a third node that acts as an IP router (using the IP router Click configuration). This demonstrates routing using Click.

Scripts are available within <click-dir>/conf/ that allow you to generate Click files for some common scenarios. The IP Router used in nsclick-routing.cc was generated from the make-ip-conf.pl file and slightly adapted to work with ns-3-click.

Validation

This model has been tested as follows:

  • Unit tests have been written to verify the internals of Ipv4ClickRouting. This can be found in src/click/ipv4-click-routing-test.cc. These tests verify whether the methods inside Ipv4ClickRouting which deal with Device name to ID, IP Address from device name and Mac Address from device name bindings work as expected.
  • The examples have been used to test Click with actual simulation scenarios. These can be found in src/click/examples/. These tests cover the following: the use of different kinds of transports (TCP and UDP) on top of Click; whether Click nodes can communicate with non-Click based nodes; whether Click nodes can communicate with each other; and using Click to route packets using static routing.
  • Click has been tested with Csma, Wifi and Point-to-Point devices. Usage instructions are available in the preceding section.

CSMA NetDevice

This is the introduction to CSMA NetDevice chapter, to complement the Csma model doxygen.

Overview of the CSMA model

The ns-3 CSMA device models a simple bus network in the spirit of Ethernet. Although it does not model any real physical network you could ever build or buy, it does provide some very useful functionality.

Typically when one thinks of a bus network, Ethernet or IEEE 802.3 comes to mind. Ethernet uses CSMA/CD (Carrier Sense Multiple Access with Collision Detection) with exponentially increasing backoff to contend for the shared transmission medium. The ns-3 CSMA device models only a portion of this process, using the nature of the globally available channel to provide instantaneous (faster than light) carrier sense and priority-based collision “avoidance.” Collisions in the sense of Ethernet never happen and so the ns-3 CSMA device does not model collision detection, nor will any transmission in progress be “jammed.”

CSMA Layer Model

There are a number of conventions in use for describing layered communications architectures in the literature and in textbooks. The most common layering model is the ISO seven layer reference model. In this view the CsmaNetDevice and CsmaChannel pair occupies the lowest two layers – at the physical (layer one), and data link (layer two) positions. Another important reference model is that specified by RFC 1122, “Requirements for Internet Hosts – Communication Layers.” In this view the CsmaNetDevice and CsmaChannel pair occupies the lowest layer – the link layer. There is also a seemingly endless litany of alternative descriptions found in textbooks and in the literature. We adopt the naming conventions used in the IEEE 802 standards which speak of LLC, MAC, MII and PHY layering. These acronyms are defined as:

  • LLC: Logical Link Control;
  • MAC: Media Access Control;
  • MII: Media Independent Interface;
  • PHY: Physical Layer.

In this case the LLC and MAC are sublayers of the OSI data link layer and the MII and PHY are sublayers of the OSI physical layer.

The “top” of the CSMA device defines the transition from the network layer to the data link layer. This transition is performed by higher layers by calling either CsmaNetDevice::Send or CsmaNetDevice::SendFrom.

In contrast to the IEEE 802.3 standards, there is no precisely specified PHY in the CSMA model in the sense of wire types, signals or pinouts. The “bottom” interface of the CsmaNetDevice can be thought of as a kind of Media Independent Interface (MII) as seen in the “Fast Ethernet” (IEEE 802.3u) specifications. This MII interface fits into a corresponding media independent interface on the CsmaChannel. You will not find the equivalent of a 10BASE-T or a 1000BASE-LX PHY.

The CsmaNetDevice calls the CsmaChannel through a media independent interface. There is a method defined to tell the channel when to start “wiggling the wires” using the method CsmaChannel::TransmitStart, and a method to tell the channel when the transmission process is done and the channel should begin propagating the last bit across the “wire”: CsmaChannel::TransmitEnd.

When the TransmitEnd method is executed, the channel models a single uniform signal propagation delay in the medium and delivers copies of the packet to each of the devices attached to the channel via the CsmaNetDevice::Receive method.

There is a “pin” in the device media independent interface corresponding to “COL” (collision). The state of the channel may be sensed by calling CsmaChannel::GetState. Each device will look at this “pin” before starting a send and will perform appropriate backoff operations if required.

Properly received packets are forwarded up to higher levels from the CsmaNetDevice via a callback mechanism. The callback function is initialized by the higher layer (when the net device is attached) using CsmaNetDevice::SetReceiveCallback and is invoked upon “proper” reception of a packet by the net device in order to forward the packet up the protocol stack.
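
As a rough illustration of this mechanism, the sketch below hooks a user-defined sink to a device (the names csmaDevice and MyReceive are hypothetical; in a typical simulation the internet stack installs this callback itself when it is added to the node):

bool
MyReceive (Ptr<NetDevice> device, Ptr<const Packet> packet,
           uint16_t protocol, const Address &sender)
{
  // Called once per properly received frame; returning true accepts the packet.
  NS_LOG_UNCOND ("Received " << packet->GetSize () << " bytes");
  return true;
}
...
csmaDevice->SetReceiveCallback (MakeCallback (&MyReceive));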

CSMA Channel Model

The class CsmaChannel models the actual transmission medium. There is no fixed limit for the number of devices connected to the channel. The CsmaChannel models a data rate and a speed-of-light delay which can be accessed via the attributes “DataRate” and “Delay” respectively. The data rate provided to the channel is used to set the data rates used by the transmitter sections of the CSMA devices connected to the channel. There is no way to independently set data rates in the devices. Since the data rate is only used to calculate a delay time, there is no limitation (other than by the data type holding the value) on the speed at which CSMA channels and devices can operate; and no restriction based on any kind of PHY characteristics.
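
For example (illustrative numbers only): with the channel DataRate attribute set to 100 Mbps, the transmitter section of an attached device keeps the channel occupied by a 1000-byte packet for

t_{tx} = \frac{L}{R} = \frac{8000\ \text{bits}}{100\ \text{Mbit/s}} = 80\ \mu\text{s}

after which the configured Delay attribute models the propagation across the medium.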

The CsmaChannel has three states, IDLE, TRANSMITTING and PROPAGATING. These three states are “seen” instantaneously by all devices on the channel. By this we mean that if one device begins or ends a simulated transmission, all devices on the channel are immediately aware of the change in state. There is no time during which one device may see an IDLE channel while another device physically further away in the collision domain may have begun transmitting with the associated signals not propagated down the channel to other devices. Thus there is no need for collision detection in the CsmaChannel model and it is not implemented in any way.

We do, as the name indicates, have a Carrier Sense aspect to the model. Since the simulator is single threaded, access to the common channel will be serialized by the simulator. This provides a deterministic mechanism for contending for the channel. The channel is allocated (transitioned from state IDLE to state TRANSMITTING) on a first-come first-served basis. The channel always goes through a three state process:

IDLE -> TRANSMITTING -> PROPAGATING -> IDLE

The TRANSMITTING state models the time during which the source net device is actually wiggling the signals on the wire. The PROPAGATING state models the time after the last bit was sent, when the signal is propagating down the wire to the “far end.”

The transition to the TRANSMITTING state is driven by a call to CsmaChannel::TransmitStart which is called by the net device that transmits the packet. It is the responsibility of that device to end the transmission with a call to CsmaChannel::TransmitEnd at the appropriate simulation time that reflects the time elapsed to put all of the packet bits on the wire. When TransmitEnd is called, the channel schedules an event corresponding to a single speed-of-light delay. This delay applies to all net devices on the channel identically. You can think of a symmetrical hub in which the packet bits propagate to a central location and then back out equal length cables to the other devices on the channel. The single “speed of light” delay then corresponds to the time it takes for: 1) a signal to propagate from one CsmaNetDevice through its cable to the hub; plus 2) the time it takes for the hub to forward the packet out a port; plus 3) the time it takes for the signal in question to propagate to the destination net device.

The CsmaChannel models a broadcast medium so the packet is delivered to all of the devices on the channel (including the source) at the end of the propagation time. It is the responsibility of the sending device to determine whether or not it receives a packet broadcast over the channel.

The CsmaChannel provides the following Attributes:

  • DataRate: The bitrate for packet transmission on connected devices;
  • Delay: The speed of light transmission delay for the channel.

CSMA Net Device Model

The CSMA network device appears somewhat like an Ethernet device. The CsmaNetDevice provides the following Attributes:

  • Address: The Mac48Address of the device;
  • SendEnable: Enable packet transmission if true;
  • ReceiveEnable: Enable packet reception if true;
  • EncapsulationMode: Type of link layer encapsulation to use;
  • RxErrorModel: The receive error model;
  • TxQueue: The transmit queue used by the device;
  • InterframeGap: The optional time to wait between “frames”;
  • Rx: A trace source for received packets;
  • Drop: A trace source for dropped packets.

The CsmaNetDevice supports the assignment of a “receive error model.” This is an ErrorModel object that is used to simulate data corruption on the link.

Packets sent over the CsmaNetDevice are always routed through the transmit queue to provide a trace hook for packets sent out over the network. This transmit queue can be set (via attribute) to model different queuing strategies.
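
For example, the queue type and its bounds could be selected when the devices are created (a sketch using a hypothetical CsmaHelper instance; the attribute names follow the waf-era API and may differ in later releases):

CsmaHelper csma;
// Use a drop-tail transmit queue limited to 100 packets.
csma.SetQueue ("ns3::DropTailQueue",
               "MaxPackets", UintegerValue (100));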

Also configurable by attribute is the encapsulation method used by the device. Every packet gets an EthernetHeader that includes the destination and source MAC addresses, and a length/type field. Every packet also gets an EthernetTrailer which includes the FCS. Data in the packet may be encapsulated in different ways.

By default, or by setting the “EncapsulationMode” attribute to “Dix”, the encapsulation is according to the DEC, Intel, Xerox standard. This is sometimes called EthernetII framing and is the familiar destination MAC, source MAC, EtherType, Data, CRC format.

If the “EncapsulationMode” attribute is set to “Llc”, the encapsulation is by LLC SNAP. In this case, a SNAP header is added that contains the EtherType (IP or ARP).

The other implemented encapsulation modes are IP_ARP (set “EncapsulationMode” to “IpArp”) in which the length type of the Ethernet header receives the protocol number of the packet; or ETHERNET_V1 (set “EncapsulationMode” to “EthernetV1”) in which the length type of the Ethernet header receives the length of the packet. A “Raw” encapsulation mode is defined but not implemented – use of the RAW mode results in an assertion.

Note that all net devices on a channel must be set to the same encapsulation mode for correct results. The encapsulation mode is not sensed at the receiver.

The CsmaNetDevice implements a random exponential backoff algorithm that is executed if the channel is determined to be busy (TRANSMITTING or PROPAGATING) when the device wants to start propagating. This results in a random delay of up to pow (2, retries) - 1 microseconds before a retry is attempted. The default maximum number of retries is 1000.

Using the CsmaNetDevice

The CSMA net devices and channels are typically created and configured using the associated CsmaHelper object. The various ns-3 device helpers generally work in a similar way, and their use is seen in many of our example programs.

The conceptual model of interest is that of a bare computer “husk” into which you plug net devices. The bare computers are created using a NodeContainer helper. You just ask this helper to create as many computers (we call them Nodes) as you need on your network:

NodeContainer csmaNodes;
csmaNodes.Create (nCsmaNodes);

Once you have your nodes, you need to instantiate a CsmaHelper and set any attributes you may want to change:

CsmaHelper csma;
csma.SetChannelAttribute ("DataRate", StringValue ("100Mbps"));
csma.SetChannelAttribute ("Delay", TimeValue (NanoSeconds (6560)));

csma.SetDeviceAttribute ("EncapsulationMode", StringValue ("Dix"));
csma.SetDeviceAttribute ("FrameSize", UintegerValue (2000));

Once the attributes are set, all that remains is to create the devices and install them on the required nodes, and to connect the devices together using a CSMA channel. When we create the net devices, we add them to a container to allow you to use them in the future. This all takes just one line of code:

NetDeviceContainer csmaDevices = csma.Install (csmaNodes);

We recommend thinking carefully about changing these Attributes, since it can result in behavior that surprises users. We allow this because we believe flexibility is important. As an example of a possibly surprising effect of changing Attributes, consider the following:

The Mtu Attribute indicates the Maximum Transmission Unit to the device. This is the size of the largest Protocol Data Unit (PDU) that the device can send. This Attribute defaults to 1500 bytes and corresponds to a number found in RFC 894, “A Standard for the Transmission of IP Datagrams over Ethernet Networks.” The number is actually derived from the maximum packet size for 10Base5 (full-spec Ethernet) networks – 1518 bytes. If you subtract DIX encapsulation overhead for Ethernet packets (18 bytes) you will end up with a maximum possible data size (MTU) of 1500 bytes. One can also find that the MTU for IEEE 802.3 networks is 1492 bytes. This is because LLC/SNAP encapsulation adds an extra eight bytes of overhead to the packet. In both cases, the underlying network hardware is limited to 1518 bytes, but the MTU is different because the encapsulation is different.

If one leaves the Mtu Attribute at 1500 bytes and changes the encapsulation mode Attribute to Llc, the result will be a network that encapsulates 1500 byte PDUs with LLC/SNAP framing resulting in packets of 1526 bytes. This would be illegal in many networks, but we allow you to do this. This results in a simulation that quite subtly does not reflect what you might be expecting since a real device would balk at sending a 1526 byte packet.

There also exist jumbo frames (1500 < MTU <= 9000 bytes) and super-jumbo (MTU > 9000 bytes) frames that are not officially sanctioned by IEEE but are available in some high-speed (Gigabit) networks and NICs. In the CSMA model, one could leave the encapsulation mode set to Dix, and set the Mtu to 64000 bytes – even though an associated CsmaChannel DataRate was left at 10 megabits per second (certainly not Gigabit Ethernet). This would essentially model an Ethernet switch made out of vampire-tapped 1980s-style 10Base5 networks that support super-jumbo datagrams, which is certainly not something that was ever made, nor is likely to ever be made; however it is quite easy for you to configure.

Be careful about assumptions regarding what CSMA is actually modelling and how configuration (Attributes) may allow you to swerve considerably away from reality.

CSMA Tracing

Like all ns-3 devices, the CSMA Model provides a number of trace sources. These trace sources can be hooked using your own custom trace code, or you can use our helper functions to arrange for tracing to be enabled on devices you specify.
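
For example, assuming a CsmaHelper instance named csma as in the previous section, the standard trace helper methods enable pcap and ASCII tracing on every device created by that helper (a sketch; the file name prefixes are arbitrary):

// One pcap file per device, plus a single ASCII trace file.
csma.EnablePcapAll ("csma-trace");
AsciiTraceHelper ascii;
csma.EnableAsciiAll (ascii.CreateFileStream ("csma-trace.tr"));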

Upper-Level (MAC) Hooks

From the point of view of tracing in the net device, there are several interesting points to insert trace hooks. A convention inherited from other simulators is that packets destined for transmission onto attached networks pass through a single “transmit queue” in the net device. We provide trace hooks at this point in packet flow, which corresponds (abstractly) only to a transition from the network to data link layer, and call them collectively the device MAC hooks.

When a packet is sent to the CSMA net device for transmission it always passes through the transmit queue. The transmit queue in the CsmaNetDevice inherits from Queue, and therefore inherits three trace sources:

  • An Enqueue operation source (see Queue::m_traceEnqueue);
  • A Dequeue operation source (see Queue::m_traceDequeue);
  • A Drop operation source (see Queue::m_traceDrop).

The upper-level (MAC) trace hooks for the CsmaNetDevice are, in fact, exactly these three trace sources on the single transmit queue of the device.

The m_traceEnqueue event is triggered when a packet is placed on the transmit queue. This happens at the time that CsmaNetDevice::Send or CsmaNetDevice::SendFrom is called by a higher layer to queue a packet for transmission.

The m_traceDequeue event is triggered when a packet is removed from the transmit queue. Dequeues from the transmit queue can happen in three situations: 1) If the underlying channel is idle when the CsmaNetDevice::Send or CsmaNetDevice::SendFrom is called, a packet is dequeued from the transmit queue and immediately transmitted; 2) If the underlying channel is idle, a packet may be dequeued and immediately transmitted in an internal TransmitCompleteEvent that functions much like a transmit complete interrupt service routine; or 3) from the random exponential backoff handler if a timeout is detected.

Case (3) implies that a packet is dequeued from the transmit queue if it is unable to be transmitted according to the backoff rules. It is important to understand that this will appear as a Dequeued packet and it is easy to incorrectly assume that the packet was transmitted since it passed through the transmit queue. In fact, a packet is actually dropped by the net device in this case. The reason for this behavior is due to the definition of the Queue Drop event. The m_traceDrop event is, by definition, fired when a packet cannot be enqueued on the transmit queue because it is full. This event only fires if the queue is full and we do not overload this event to indicate that the CsmaChannel is “full.”
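
As an illustration, any of these queue trace sources can also be hooked by configuration path (a sketch with a hypothetical sink function; the TxQueue element in the path refers to the device’s TxQueue attribute):

void
TxQueueDropSink (std::string context, Ptr<const Packet> packet)
{
  // Fired when the transmit queue is full and a packet cannot be enqueued.
  NS_LOG_UNCOND ("Transmit queue drop at " << context);
}
...
Config::Connect ("/NodeList/*/DeviceList/*/$ns3::CsmaNetDevice/TxQueue/Drop",
                 MakeCallback (&TxQueueDropSink));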

Lower-Level (PHY) Hooks

Similar to the upper level trace hooks, there are trace hooks available at the lower levels of the net device. We call these the PHY hooks. These events fire from the device methods that talk directly to the CsmaChannel.

The trace source m_dropTrace is called to indicate a packet that is dropped by the device. This happens in two cases: First, if the receive side of the net device is not enabled (see CsmaNetDevice::m_receiveEnable and the associated attribute “ReceiveEnable”).

The m_dropTrace is also used to indicate that a packet was discarded as corrupt if a receive error model is used (see CsmaNetDevice::m_receiveErrorModel and the associated attribute “ReceiveErrorModel”).

The other low-level trace source fires on reception of an accepted packet (see CsmaNetDevice::m_rxTrace). A packet is accepted if it is destined for the broadcast address, a multicast address, or to the MAC address assigned to the net device.

Summary

The ns-3 CSMA model is a simplistic model of an Ethernet-like network. It supports a Carrier-Sense function and allows for Multiple Access to a shared medium. It is not physical in the sense that the state of the medium is instantaneously shared among all devices. This means that there is no collision detection required in this model and none is implemented. There will never be a “jam” of a packet already on the medium. Access to the shared channel is on a first-come first-served basis as determined by the simulator scheduler. If the channel is determined to be busy by looking at the global state, a random exponential backoff is performed and a retry is attempted.

Ns-3 Attributes provide a mechanism for setting various parameters in the device and channel such as addresses, encapsulation modes and error model selection. Trace hooks are provided in the usual manner with a set of upper level hooks corresponding to a transmit queue and used in ASCII tracing; and also a set of lower level hooks used in pcap tracing.

Although the ns-3 CsmaChannel and CsmaNetDevice do not model any kind of network you could build or buy, they do provide us with some useful functionality. You should, however, understand that the model is explicitly not Ethernet or any flavor of IEEE 802.3, but an interesting subset.

Data Collection

This chapter describes the ns-3 Data Collection Framework (DCF), which provides capabilities to obtain data generated by models in the simulator, to perform on-line reduction and data processing, and to marshal raw or transformed data into various output formats.

The framework presently supports standalone ns-3 runs that don’t rely on any external program execution control. The objects provided by the DCF may be hooked to ns-3 trace sources to enable data processing.

The source code for the classes lives in the directory src/stats.

This chapter is organized as follows. First, an overview of the architecture is presented. Next, the helpers for these classes are presented; this initial treatment should allow basic use of the data collection framework for many use cases. Users who wish to produce output outside of the scope of the current helpers, or who wish to create their own data collection objects, should read the remainder of the chapter, which goes into detail about all of the basic DCF object types and provides low-level coding examples.

Design

The DCF consists of three basic classes:

  • Probe is a mechanism to instrument and control the output of simulation data that is used to monitor interesting events. It produces output in the form of one or more ns-3 trace sources. Probe objects are hooked up to one or more trace sinks (called Collectors), which process samples on-line and prepare them for output.
  • Collector consumes the data generated by one or more Probe objects. It performs transformations on the data, such as normalization, reduction, and the computation of basic statistics. Collector objects do not produce data that is directly output by the ns-3 run; instead, they output data downstream to another type of object, called Aggregator, which performs that function. Typically, Collectors output their data in the form of trace sources as well, allowing collectors to be chained in series.
  • Aggregator is the end point of the data collected by a network of Probes and Collectors. The main responsibility of the Aggregator is to marshal data and their corresponding metadata, into different output formats such as plain text files, spreadsheet files, or databases.

All three of these classes provide the capability to dynamically turn themselves on or off throughout a simulation.

Any standalone ns-3 simulation run that uses the DCF will typically create at least one instance of each of the three classes above.

_images/dcf-overview.png

Data Collection Framework overview

The overall flow of data processing is depicted in Data Collection Framework overview. On the left side, a running ns-3 simulation is depicted. In the course of running the simulation, data is made available by models through trace sources, or via other means. The diagram depicts that probes can be connected to these trace sources to receive data asynchronously, or probes can poll for data. Data is then passed to a collector object that transforms the data. Finally, an aggregator can be connected to the outputs of the collector, to generate plots, files, or databases.

_images/dcf-overview-with-aggregation.png

Data Collection Framework aggregation

A variation on the above figure is provided in Data Collection Framework aggregation. This second figure illustrates that the DCF objects may be chained together in a manner that downstream objects take inputs from multiple upstream objects. The figure conceptually shows that multiple probes may generate output that is fed into a single collector; as an example, a collector that outputs a ratio of two counters would typically acquire each counter data from separate probes. Multiple collectors can also feed into a single aggregator, which (as its name implies) may collect a number of data streams for inclusion into a single plot, file, or database.

Data Collection Helpers

The full flexibility of the data collection framework is provided by the interconnection of probes, collectors, and aggregators. Performing all of these interconnections leads to many configuration statements in user programs. For ease of use, some of the most common operations can be combined and encapsulated in helper functions. In addition, some statements involving ns-3 trace sources do not have Python bindings, due to limitations in the bindings.

Data Collection Helpers Overview

In this section, we provide an overview of some helper classes that have been created to ease the configuration of the data collection framework for some common use cases. The helpers allow users to form common operations with only a few statements in their C++ or Python programs. But, this ease of use comes at the cost of significantly less flexibility than low-level configuration can provide, and the need to explicitly code support for new Probe types into the helpers (to work around an issue described below).

The emphasis on the current helpers is to marshal data out of ns-3 trace sources into gnuplot plots or text files, without a high degree of output customization or statistical processing (initially). Also, the use is constrained to the available probe types in ns-3. Later sections of this documentation will go into more detail about creating new Probe types, as well as details about hooking together Probes, Collectors, and Aggregators in custom arrangements.

To date, two Data Collection helpers have been implemented:

  • GnuplotHelper
  • FileHelper

GnuplotHelper

The GnuplotHelper is a helper class for producing output files used to make gnuplots. The overall goal is to provide the ability for users to quickly make plots from data exported in ns-3 trace sources. By default, a minimal amount of data transformation is performed; the objective is to generate plots with as few (default) configuration statements as possible.

GnuplotHelper Overview

The GnuplotHelper will create 3 different files at the end of the simulation:

  • A space separated gnuplot data file
  • A gnuplot control file
  • A shell script to generate the gnuplot

There are two configuration statements that are needed to produce plots. The first statement configures the plot (filename, title, legends, and output type, where the output type defaults to PNG if unspecified):

void ConfigurePlot (const std::string &outputFileNameWithoutExtension,
                    const std::string &title,
                    const std::string &xLegend,
                    const std::string &yLegend,
                    const std::string &terminalType = ".png");

The second statement hooks the trace source of interest:

void PlotProbe (const std::string &typeId,
                const std::string &path,
                const std::string &probeTraceSource,
                const std::string &title);

The arguments are as follows:

  • typeId: The ns-3 TypeId of the Probe
  • path: The path in the ns-3 configuration namespace to one or more trace sources
  • probeTraceSource: Which output of the probe (itself a trace source) should be plotted
  • title: The title to associate with the dataset(s) (in the gnuplot legend)

A variant on the PlotProbe above is to specify a fifth optional argument that controls where in the plot the key (legend) is placed.

A fully worked example (from seventh.cc) is shown below:

// Create the gnuplot helper.
GnuplotHelper plotHelper;

// Configure the plot.  The first argument is the file name prefix
// for the output files generated.  The second, third, and fourth
// arguments are, respectively, the plot title, x-axis, and y-axis labels
plotHelper.ConfigurePlot ("seventh-packet-byte-count",
                          "Packet Byte Count vs. Time",
                          "Time (Seconds)",
                          "Packet Byte Count",
                          "png");

// Specify the probe type, trace source path (in configuration namespace), and
// probe output trace source ("OutputBytes") to plot.  The fourth argument
// specifies the name of the data series label on the plot.  The last
// argument formats the plot by specifying where the key should be placed.
plotHelper.PlotProbe (probeType,
                      tracePath,
                      "OutputBytes",
                      "Packet Byte Count",
                      GnuplotAggregator::KEY_BELOW);

In this example, the probeType and tracePath are as follows (for IPv4):

probeType = "ns3::Ipv4PacketProbe";
tracePath = "/NodeList/*/$ns3::Ipv4L3Protocol/Tx";

The probeType is a key parameter for this helper to work. This TypeId must be registered in the system, and the signature on the Probe’s trace sink must match that of the trace source it is being hooked to. Probe types are pre-defined for a number of data types corresponding to ns-3 traced values, and for a few other trace source signatures such as the ‘Tx’ trace source of ns3::Ipv4L3Protocol class.

Note that the trace source path specified may contain wildcards. In this case, multiple datasets are plotted on one plot; one for each matched path.

The main output produced will be three files:

seventh-packet-byte-count.dat
seventh-packet-byte-count.plt
seventh-packet-byte-count.sh

At this point, users can either hand edit the .plt file for further customizations, or just run it through gnuplot. Running sh seventh-packet-byte-count.sh simply runs the plot through gnuplot, as shown below.

_images/seventh-packet-byte-count.png

2-D Gnuplot Created by seventh.cc Example.

It can be seen that the key elements (legend, title, legend placement, xlabel, ylabel, and path for the data) are all placed on the plot. Since there were two matches to the configuration path provided, the two data series are shown:

  • Packet Byte Count-0 corresponds to /NodeList/0/$ns3::Ipv4L3Protocol/Tx
  • Packet Byte Count-1 corresponds to /NodeList/1/$ns3::Ipv4L3Protocol/Tx
GnuplotHelper ConfigurePlot

The GnuplotHelper’s ConfigurePlot() function can be used to configure plots.

It has the following prototype:

void ConfigurePlot (const std::string &outputFileNameWithoutExtension,
                    const std::string &title,
                    const std::string &xLegend,
                    const std::string &yLegend,
                    const std::string &terminalType = ".png");

It has the following arguments:

  • outputFileNameWithoutExtension: Name of gnuplot related files to write with no extension.
  • title: Plot title string to use for this plot.
  • xLegend: The legend for the x (horizontal) axis.
  • yLegend: The legend for the y (vertical) axis.
  • terminalType: Terminal type setting string for output. The default terminal type is “png”.

The GnuplotHelper’s ConfigurePlot() function configures plot related parameters for this gnuplot helper so that it will create a space separated gnuplot data file named outputFileNameWithoutExtension + ”.dat”, a gnuplot control file named outputFileNameWithoutExtension + ”.plt”, and a shell script to generate the gnuplot named outputFileNameWithoutExtension + ”.sh”.

An example of how to use this function can be seen in the seventh.cc code described above where it was used as follows:

plotHelper.ConfigurePlot ("seventh-packet-byte-count",
                          "Packet Byte Count vs. Time",
                          "Time (Seconds)",
                          "Packet Byte Count",
                          "png");
GnuplotHelper PlotProbe

The GnuplotHelper’s PlotProbe() function can be used to plot values generated by probes.

It has the following prototype:

void PlotProbe (const std::string &typeId,
                const std::string &path,
                const std::string &probeTraceSource,
                const std::string &title,
                enum GnuplotAggregator::KeyLocation keyLocation = GnuplotAggregator::KEY_INSIDE);

It has the following arguments:

  • typeId: The type ID for the probe created by this helper.
  • path: Config path to access the trace source.
  • probeTraceSource: The probe trace source to access.
  • title: The title to be associated with this dataset.
  • keyLocation: The location of the key in the plot. The default location is inside.

The GnuplotHelper’s PlotProbe() function plots a dataset generated by hooking the ns-3 trace source with a probe created by the helper, and then plotting the values from the probeTraceSource. The dataset will have the provided title, and will consist of the ‘newValue’ at each timestamp.

If the config path has more than one match in the system because there is a wildcard, then one dataset for each match will be plotted. The dataset titles will be suffixed with the matched characters for each of the wildcards in the config path, separated by spaces. For example, if the proposed dataset title is the string “bytes”, and there are two wildcards in the path, then dataset titles like “bytes-0 0” or “bytes-12 9” will be possible as labels for the datasets that are plotted.

An example of how to use this function can be seen in the seventh.cc code described above where it was used (with variable substitution) as follows:

plotHelper.PlotProbe ("ns3::Ipv4PacketProbe",
                      "/NodeList/*/$ns3::Ipv4L3Protocol/Tx",
                      "OutputBytes",
                      "Packet Byte Count",
                      GnuplotAggregator::KEY_BELOW);
Other Examples
Gnuplot Helper Example

A slightly simpler example than the seventh.cc example can be found in src/stats/examples/gnuplot-helper-example.cc. The following 2-D gnuplot was created using the example.

_images/gnuplot-helper-example.png

2-D Gnuplot Created by gnuplot-helper-example.cc Example.

In this example, there is an Emitter object that increments its counter according to a Poisson process and then emits the counter’s value as a trace source.

Ptr<Emitter> emitter = CreateObject<Emitter> ();
Names::Add ("/Names/Emitter", emitter);

Note that because there are no wildcards in the path used below, only 1 datastream was drawn in the plot. This single datastream in the plot is simply labeled “Emitter Count”, with no extra suffixes like one would see if there were wildcards in the path.

// Create the gnuplot helper.
GnuplotHelper plotHelper;

// Configure the plot.
plotHelper.ConfigurePlot ("gnuplot-helper-example",
                          "Emitter Counts vs. Time",
                          "Time (Seconds)",
                          "Emitter Count",
                          "png");

// Plot the values generated by the probe.  The path that we provide
// helps to disambiguate the source of the trace.
plotHelper.PlotProbe ("ns3::Uinteger32Probe",
                      "/Names/Emitter/Counter",
                      "Output",
                      "Emitter Count",
                      GnuplotAggregator::KEY_INSIDE);

FileHelper

The FileHelper is a helper class used to put data values into a file. The overall goal is to provide the ability for users to quickly make formatted text files from data exported in ns-3 trace sources. By default, a minimal amount of data transformation is performed; the objective is to generate files with as few (default) configuration statements as possible.

FileHelper Overview

The FileHelper will create 1 or more text files at the end of the simulation.

The FileHelper can create 4 different types of text files:

  • Formatted
  • Space separated (the default)
  • Comma separated
  • Tab separated

Formatted files use C-style format strings and the sprintf() function to print their values in the file being written.

The following text file with 2 columns of formatted values, named seventh-packet-byte-count-0.txt, was created using new code that was added to the original ns-3 Tutorial example’s code. Only the first 10 lines of this file are shown here for brevity.

Time (Seconds) = 1.000e+00    Packet Byte Count = 40
Time (Seconds) = 1.004e+00    Packet Byte Count = 40
Time (Seconds) = 1.004e+00    Packet Byte Count = 576
Time (Seconds) = 1.009e+00    Packet Byte Count = 576
Time (Seconds) = 1.009e+00    Packet Byte Count = 576
Time (Seconds) = 1.015e+00    Packet Byte Count = 512
Time (Seconds) = 1.017e+00    Packet Byte Count = 576
Time (Seconds) = 1.017e+00    Packet Byte Count = 544
Time (Seconds) = 1.025e+00    Packet Byte Count = 576
Time (Seconds) = 1.025e+00    Packet Byte Count = 544

...

The following different text file with 2 columns of formatted values named seventh-packet-byte-count-1.txt was also created using the same new code that was added to the original ns-3 Tutorial example’s code. Only the first 10 lines of this file are shown here for brevity.

Time (Seconds) = 1.002e+00    Packet Byte Count = 40
Time (Seconds) = 1.007e+00    Packet Byte Count = 40
Time (Seconds) = 1.013e+00    Packet Byte Count = 40
Time (Seconds) = 1.020e+00    Packet Byte Count = 40
Time (Seconds) = 1.028e+00    Packet Byte Count = 40
Time (Seconds) = 1.036e+00    Packet Byte Count = 40
Time (Seconds) = 1.045e+00    Packet Byte Count = 40
Time (Seconds) = 1.053e+00    Packet Byte Count = 40
Time (Seconds) = 1.061e+00    Packet Byte Count = 40
Time (Seconds) = 1.069e+00    Packet Byte Count = 40

...

The new code that was added to produce the two text files is below. More details about this API will be covered in a later section.

Note that because there were 2 matches for the wildcard in the path, 2 separate text files were created. The first text file, which is named “seventh-packet-byte-count-0.txt”, corresponds to the wildcard match with the “*” replaced with “0”. The second text file, which is named “seventh-packet-byte-count-1.txt”, corresponds to the wildcard match with the “*” replaced with “1”. Also, note that the function call to WriteProbe() will give an error message if there are no matches for a path that contains wildcards.

// Create the file helper.
FileHelper fileHelper;

// Configure the file to be written.
fileHelper.ConfigureFile ("seventh-packet-byte-count",
                          FileAggregator::FORMATTED);

// Set the labels for this formatted output file.
fileHelper.Set2dFormat ("Time (Seconds) = %.3e\tPacket Byte Count = %.0f");

// Write the values generated by the probe.
fileHelper.WriteProbe ("ns3::Ipv4PacketProbe",
                       "/NodeList/*/$ns3::Ipv4L3Protocol/Tx",
                       "OutputBytes");
FileHelper ConfigureFile

The FileHelper’s ConfigureFile() function can be used to configure text files.

It has the following prototype:

void ConfigureFile (const std::string &outputFileNameWithoutExtension,
                    enum FileAggregator::FileType fileType = FileAggregator::SPACE_SEPARATED);

It has the following arguments:

  • outputFileNameWithoutExtension: Name of output file to write with no extension.
  • fileType: Type of file to write. The default type of file is space separated.

The FileHelper’s ConfigureFile() function configures text file related parameters for the file helper so that it will create a file named outputFileNameWithoutExtension plus possible extra information from wildcard matches plus ”.txt” with values printed as specified by fileType. The default file type is space-separated.

An example of how to use this function can be seen in the seventh.cc code described above where it was used as follows:

fileHelper.ConfigureFile ("seventh-packet-byte-count",
                          FileAggregator::FORMATTED);
FileHelper WriteProbe

The FileHelper’s WriteProbe() function can be used to write values generated by probes to text files.

It has the following prototype:

void WriteProbe (const std::string &typeId,
                 const std::string &path,
                 const std::string &probeTraceSource);

It has the following arguments:

  • typeId: The type ID for the probe to be created.
  • path: Config path to access the trace source.
  • probeTraceSource: The probe trace source to access.

The FileHelper’s WriteProbe() function creates output text files generated by hooking the ns-3 trace source with a probe created by the helper, and then writing the values from the probeTraceSource. The output file names will have the text stored in the member variable m_outputFileNameWithoutExtension plus ”.txt”, and will consist of the ‘newValue’ at each timestamp.

If the config path has more than one match in the system because there is a wildcard, then one output file for each match will be created. The output file names will contain the text in m_outputFileNameWithoutExtension plus the matched characters for each of the wildcards in the config path, separated by dashes, plus ”.txt”. For example, if the value in m_outputFileNameWithoutExtension is the string “packet-byte-count”, and there are two wildcards in the path, then output file names like “packet-byte-count-0-0.txt” or “packet-byte-count-12-9.txt” will be possible as names for the files that will be created.

An example of how to use this function can be seen in the seventh.cc code described above where it was used as follows:

fileHelper.WriteProbe ("ns3::Ipv4PacketProbe",
                       "/NodeList/*/$ns3::Ipv4L3Protocol/Tx",
                       "OutputBytes");
Other Examples
File Helper Example

A slightly simpler example than the seventh.cc example can be found in src/stats/examples/file-helper-example.cc. This example only uses the FileHelper.

The following text file with 2 columns of formatted values named file-helper-example.txt was created using the example. Only the first 10 lines of this file are shown here for brevity.

Time (Seconds) = 0.203  Count = 1
Time (Seconds) = 0.702  Count = 2
Time (Seconds) = 1.404  Count = 3
Time (Seconds) = 2.368  Count = 4
Time (Seconds) = 3.364  Count = 5
Time (Seconds) = 3.579  Count = 6
Time (Seconds) = 5.873  Count = 7
Time (Seconds) = 6.410  Count = 8
Time (Seconds) = 6.472  Count = 9
...

In this example, there is an Emitter object that increments its counter according to a Poisson process and then emits the counter’s value as a trace source.

Ptr<Emitter> emitter = CreateObject<Emitter> ();
Names::Add ("/Names/Emitter", emitter);

Note that because there are no wildcards in the path used below, only 1 text file was created. This single text file is simply named “file-helper-example.txt”, with no extra suffixes like you would see if there were wildcards in the path.

// Create the file helper.
FileHelper fileHelper;

// Configure the file to be written.
fileHelper.ConfigureFile ("file-helper-example",
                          FileAggregator::FORMATTED);

// Set the labels for this formatted output file.
fileHelper.Set2dFormat ("Time (Seconds) = %.3e\tCount = %.0f");

// Write the values generated by the probe.  The path that we
// provide helps to disambiguate the source of the trace.
fileHelper.WriteProbe ("ns3::Uinteger32Probe",
                       "/Names/Emitter/Counter",
                       "Output");

Scope and Limitations

Currently, only these Probes have been implemented and connected to the GnuplotHelper and to the FileHelper:

  • BooleanProbe
  • DoubleProbe
  • Uinteger8Probe
  • Uinteger16Probe
  • Uinteger32Probe
  • TimeProbe
  • PacketProbe
  • ApplicationPacketProbe
  • Ipv4PacketProbe

These Probes, therefore, are the only TypeIds available to be used in PlotProbe() and WriteProbe().

In the next few sections, we cover each of the fundamental object types (Probe, Collector, and Aggregator) in more detail, and show how they can be connected together using lower-level API.

Probes

This section details the functionalities provided by the Probe class to an ns-3 simulation, and gives examples on how to code them in a program. This section is meant for users interested in developing simulations with the ns-3 tools and using the Data Collection Framework, of which the Probe class is a part, to generate data output with their simulation’s results.

Probe Overview

A Probe object is supposed to be connected to a variable from the simulation whose values throughout the experiment are relevant to the user. The Probe will record the values assumed by the variable throughout the simulation and pass that data to another member of the Data Collection Framework. While it is out of this section’s scope to discuss what happens after the Probe produces its output, it is sufficient to say that, by the end of the simulation, the user will have detailed information about what values were stored inside the variable being probed during the simulation.

Typically, a Probe is connected to an ns-3 trace source. In this manner, whenever the trace source exports a new value, the Probe consumes the value (and exports it downstream to another object via its own trace source).

The Probe can be thought of as kind of a filter on trace sources. The main reasons for possibly hooking to a Probe rather than directly to a trace source are as follows:

  • Probes may be dynamically turned on and off during the simulation with calls to Enable() and Disable(). For example, the outputting of data may be turned off during the simulation warmup phase.

  • Probes may perform operations on the data to extract values from more complicated structures; for instance, outputting the packet size value from a received ns3::Packet.

  • Probes register a name in the ns3::Config namespace (using Names::Add ()) so that other objects may refer to them.

  • Probes provide a static method that allows one to manipulate a Probe by name, such as what is done in ns2measure [Cic06]

    Stat::put ("my_metric", ID, sample);
    

    The ns-3 equivalent of the above ns2measure code is, e.g.

    DoubleProbe::SetValueByPath ("/path/to/probe", sample);
    
Creation

Note that a Probe base class object can not be created because it is an abstract base class, i.e. it has pure virtual functions that have not been implemented. An object of type DoubleProbe, which is a subclass of the Probe class, will be created here to show what needs to be done.

One declares a DoubleProbe in dynamic memory by using the smart pointer class (Ptr<T>). To create a DoubleProbe in dynamic memory with smart pointers, one just needs to call the ns-3 method CreateObject():

Ptr<DoubleProbe> myprobe = CreateObject<DoubleProbe> ();

The declaration above creates a DoubleProbe using the default values for its attributes. There are four attributes in the DoubleProbe class: two in the base class object DataCollectionObject, and two in the Probe base class:

  • “Name” (DataCollectionObject), a StringValue
  • “Enabled” (DataCollectionObject), a BooleanValue
  • “Start” (Probe), a TimeValue
  • “Stop” (Probe), a TimeValue

One can set such attributes at object creation by using the following method:

Ptr<DoubleProbe> myprobe = CreateObjectWithAttributes<DoubleProbe> (
    "Name", StringValue ("myprobe"),
    "Enabled", BooleanValue (false),
    "Start", TimeValue (Seconds (100.0)),
    "Stop", TimeValue (Seconds (1000.0)));

Start and Stop are Time variables which determine the interval of action of the Probe. The Probe will only output data if the current time of the Simulation is inside of that interval. The special time value of 0 seconds for Stop will disable this attribute (i.e. keep the Probe on for the whole simulation). Enabled is a flag that turns the Probe on or off, and must be set to true for the Probe to export data. The Name is the object’s name in the DCF framework.
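
The same effect can also be achieved imperatively; a minimal sketch (reusing the myprobe object above, with Enable() and Disable() inherited from DataCollectionObject) that keeps the probe silent during a warmup phase:

// Enable the probe after a 100 second warmup, disable it again at 1000 s.
Simulator::Schedule (Seconds (100.0), &DoubleProbe::Enable, myprobe);
Simulator::Schedule (Seconds (1000.0), &DoubleProbe::Disable, myprobe);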

Importing and exporting data

ns-3 trace sources are strongly typed, so the mechanisms for hooking Probes to a trace source and for exporting data belong to its subclasses. For instance, the default distribution of ns-3 provides a class DoubleProbe that is designed to hook to a trace source exporting a double value. We’ll next detail the operation of the DoubleProbe, and then discuss how other Probe classes may be defined by the user.

DoubleProbe Overview

The DoubleProbe connects to a double-valued ns-3 trace source, and itself exports a different double-valued ns-3 trace source.

The following code, drawn from src/stats/examples/double-probe-example.cc, shows the basic operations of plumbing the DoubleProbe into a simulation, where it is probing a Counter exported by an emitter object (class Emitter).

Ptr<Emitter> emitter = CreateObject<Emitter> ();
Names::Add ("/Names/Emitter", emitter);
...

Ptr<DoubleProbe> probe1 = CreateObject<DoubleProbe> ();

// Connect the probe to the emitter's Counter
bool connected = probe1->ConnectByObject ("Counter", emitter);

The following code is probing the same Counter exported by the same emitter object. This DoubleProbe, however, is using a path in the configuration namespace to make the connection. Note that the emitter registered itself in the configuration namespace after it was created; otherwise, the ConnectByPath would not work.

Ptr<DoubleProbe> probe2 = CreateObject<DoubleProbe> ();

// Note, no return value is checked here
probe2->ConnectByPath ("/Names/Emitter/Counter");

The next DoubleProbe, shown below, will have its value set using its path in the configuration namespace. Note that this time the DoubleProbe registered itself in the configuration namespace after it was created.

Ptr<DoubleProbe> probe3 = CreateObject<DoubleProbe> ();
probe3->SetName ("StaticallyAccessedProbe");

// We must add it to the config database
Names::Add ("/Names/Probes", probe3->GetName (), probe3);

The emitter’s Count() function is now able to set the value for this DoubleProbe as follows:

void
Emitter::Count (void)
{
  ...
  m_counter += 1.0;
  DoubleProbe::SetValueByPath ("/Names/StaticallyAccessedProbe", m_counter);
  ...
}

The above example shows how the code calling the Probe does not have to have an explicit reference to the Probe, but can direct the value setting through the Config namespace. This is similar in functionality to the Stat::put method introduced by the ns2measure paper [Cic06], and allows users to temporarily insert Probe statements like printf statements within existing ns-3 models. Note that in order to use the DoubleProbe in this way, two things were necessary:

  1. the stats module header file was included in the example .cc file
  2. the example was made dependent on the stats module in its wscript file.

Analogous things need to be done in order to add other Probes in other places in the ns-3 code base.

The value of the DoubleProbe can also be set directly using the function DoubleProbe::SetValue(), and retrieved using the function DoubleProbe::GetValue().
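For example (a minimal sketch; the value 3.14 is arbitrary):

probe1->SetValue (3.14);                // push a value into the probe directly
double current = probe1->GetValue ();   // read back the probe's most recent value
NS_LOG_UNCOND ("probe1 value is now " << current);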

The DoubleProbe exports double values in its “Output” trace source; a downstream object can hook a trace sink (NotifyViaProbe) to this as follows:

connected = probe1->TraceConnect ("Output", probe1->GetName (), MakeCallback (&NotifyViaProbe));

Other probes

Besides the DoubleProbe, the following Probes are also available:

  • Uinteger8Probe connects to an ns-3 trace source exporting a uint8_t.
  • Uinteger16Probe connects to an ns-3 trace source exporting a uint16_t.
  • Uinteger32Probe connects to an ns-3 trace source exporting a uint32_t.
  • PacketProbe connects to an ns-3 trace source exporting a packet.
  • ApplicationPacketProbe connects to an ns-3 trace source exporting a packet and a socket address.
  • Ipv4PacketProbe connects to an ns-3 trace source exporting a packet, an IPv4 object, and an interface.

Creating new Probe types

To create a new Probe type, you need to perform the following steps (a sketch of such a subclass follows the list):

  • Be sure that your new Probe class is derived from the Probe base class.
  • Be sure that the pure virtual functions that your new Probe class inherits from the Probe base class are implemented.
  • Find an existing Probe class that uses a trace source that is closest in type to the type of trace source your Probe will be using.
  • Copy that existing Probe class’s header file (.h) and implementation file (.cc) to two new files with names matching your new Probe.
  • Replace the types, arguments, and variables in the copied files with the appropriate type for your Probe.
  • Make necessary modifications to make the code compile and to make it behave as you would like.
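
The following header-only skeleton illustrates these steps for a hypothetical Integer64Probe that would hook to a trace source exporting an int64_t. It is a sketch, not part of the ns-3 distribution; the pure virtual signatures should be checked against the Probe base class of your release.

#include "ns3/probe.h"
#include "ns3/traced-value.h"

namespace ns3 {

// Hypothetical probe for trace sources exporting an int64_t.
class Integer64Probe : public Probe
{
public:
  static TypeId GetTypeId (void);
  Integer64Probe ();
  virtual ~Integer64Probe ();

  // Pure virtual functions inherited from the Probe base class.
  virtual bool ConnectByObject (std::string traceSource, Ptr<Object> obj);
  virtual void ConnectByPath (std::string path);

  // Convenience setter, mirroring DoubleProbe::SetValue().
  void SetValue (int64_t newVal);

private:
  // Trace sink hooked to the upstream trace source.
  void TraceSink (int64_t oldVal, int64_t newVal);

  TracedValue<int64_t> m_output;  // exported as this probe's "Output" trace source
};

} // namespace ns3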

Examples

Two examples will be discussed in detail here:

  • Double Probe Example
  • IPv4 Packet Plot Example
Double Probe Example

The double probe example has been discussed previously. The example program can be found in src/stats/examples/double-probe-example.cc. To summarize what occurs in this program, there is an emitter that exports a counter that increments according to a Poisson process. In particular, two ways of emitting data are shown:

  1. through a traced variable hooked to one Probe:

    TracedValue<double> m_counter;  // normally this would be integer type
    
  2. through a counter whose value is posted to a second Probe, referenced by its name in the Config system:

void
Emitter::Count (void)
{
  NS_LOG_FUNCTION (this);
  NS_LOG_DEBUG ("Counting at " << Simulator::Now ().GetSeconds ());
  m_counter += 1.0;
  DoubleProbe::SetValueByPath ("/Names/StaticallyAccessedProbe", m_counter);
  Simulator::Schedule (Seconds (m_var->GetValue ()), &Emitter::Count, this);
}

Let’s look at the Probe more carefully. Probes can receive their values in multiple ways:

  1. by the Probe accessing the trace source directly and connecting a trace sink to it
  2. by the Probe accessing the trace source through the config namespace and connecting a trace sink to it
  3. by the calling code explicitly calling the Probe’s SetValue() method
  4. by the calling code explicitly calling SetValueByPath (“/path/through/Config/namespace”, ...)

The first two techniques are expected to be the most common. Also in the example, the hooking of a normal callback function is shown, as is typically done in ns-3. This callback function is not associated with a Probe object. We’ll call this case 0) below.

// This is a function to test hooking a raw function to the trace source
void
NotifyViaTraceSource (std::string context, double oldVal, double newVal)
{
  NS_LOG_DEBUG ("context: " << context << " old " << oldVal << " new " << newVal);
}

First, the emitter needs to be set up:

Ptr<Emitter> emitter = CreateObject<Emitter> ();
Names::Add ("/Names/Emitter", emitter);

// The Emitter object is not associated with an ns-3 node, so
// it won't get started automatically, so we need to do this ourselves
Simulator::Schedule (Seconds (0.0), &Emitter::Start, emitter);

The various DoubleProbes interact with the emitter in the example as shown below.

Case 0):

// The below shows typical functionality without a probe
// (connect a sink function to a trace source)
//
connected = emitter->TraceConnect ("Counter", "sample context", MakeCallback (&NotifyViaTraceSource));
NS_ASSERT_MSG (connected, "Trace source not connected");

case 1):

//
// Probe1 will be hooked directly to the Emitter trace source object
//

// probe1 will be hooked to the Emitter trace source
Ptr<DoubleProbe> probe1 = CreateObject<DoubleProbe> ();
// the probe's name can serve as its context in the tracing
probe1->SetName ("ObjectProbe");

// Connect the probe to the emitter's Counter
connected = probe1->ConnectByObject ("Counter", emitter);
NS_ASSERT_MSG (connected, "Trace source not connected to probe1");

case 2):

//
// Probe2 will be hooked to the Emitter trace source object by
// accessing it by path name in the Config database
//

// Create another similar probe; this will hook up via a Config path
Ptr<DoubleProbe> probe2 = CreateObject<DoubleProbe> ();
probe2->SetName ("PathProbe");

// Note, no return value is checked here
probe2->ConnectByPath ("/Names/Emitter/Counter");

case 4) (case 3 is not shown in this example):

//
// Probe3 will be called by the emitter directly through the
// static method SetValueByPath().
//
Ptr<DoubleProbe> probe3 = CreateObject<DoubleProbe> ();
probe3->SetName ("StaticallyAccessedProbe");
// We must add it to the config database
Names::Add ("/Names/Probes", probe3->GetName (), probe3);

And finally, the example shows how the probes can be hooked to generate output:

// The probe itself should generate output.  The context that we provide
// to this probe (in this case, the probe name) will help to disambiguate
// the source of the trace
connected = probe3->TraceConnect ("Output",
                                  "/Names/Probes/StaticallyAccessedProbe/Output",
                                  MakeCallback (&NotifyViaProbe));
NS_ASSERT_MSG (connected, "Trace source not .. connected to probe3 Output");

The following callback is hooked to the Probe in this example for illustrative purposes; normally, the Probe would be hooked to a Collector object.

// This is a function to test hooking it to the probe output
void
NotifyViaProbe (std::string context, double oldVal, double newVal)
{
  NS_LOG_DEBUG ("context: " << context << " old " << oldVal << " new " << newVal);
}
IPv4 Packet Plot Example

The IPv4 packet plot example is based on the fifth.cc example from the ns-3 Tutorial. It can be found in src/stats/examples/ipv4-packet-plot-example.cc.

      node 0                 node 1
+----------------+    +----------------+
|    ns-3 TCP    |    |    ns-3 TCP    |
+----------------+    +----------------+
|    10.1.1.1    |    |    10.1.1.2    |
+----------------+    +----------------+
| point-to-point |    | point-to-point |
+----------------+    +----------------+
        |                     |
        +---------------------+

We’ll just look at the Probe, as it illustrates that Probes may also unpack values from structures (in this case, packets) and report those values as trace source outputs, rather than just passing through the same type of data.

There are other aspects of this example that will be explained later in the documentation. The two types of data that are exported are the packet itself (Output) and a count of the number of bytes in the packet (OutputBytes).

TypeId
Ipv4PacketProbe::GetTypeId ()
{
  static TypeId tid = TypeId ("ns3::Ipv4PacketProbe")
    .SetParent<Probe> ()
    .AddConstructor<Ipv4PacketProbe> ()
    .AddTraceSource ( "Output",
                      "The packet plus its IPv4 object and interface that serve as the output for this probe",
                      MakeTraceSourceAccessor (&Ipv4PacketProbe::m_output))
    .AddTraceSource ( "OutputBytes",
                      "The number of bytes in the packet",
                      MakeTraceSourceAccessor (&Ipv4PacketProbe::m_outputBytes))
  ;
  return tid;
}

When the Probe’s trace sink gets a packet, if the Probe is enabled, then it will output the packet on its Output trace source, but it will also output the number of bytes on the OutputBytes trace source.

void
Ipv4PacketProbe::TraceSink (Ptr<const Packet> packet, Ptr<Ipv4> ipv4, uint32_t interface)
{
  NS_LOG_FUNCTION (this << packet << ipv4 << interface);
  if (IsEnabled ())
    {
      m_packet    = packet;
      m_ipv4      = ipv4;
      m_interface = interface;
      m_output (packet, ipv4, interface);

      uint32_t packetSizeNew = packet->GetSize ();
      m_outputBytes (m_packetSizeOld, packetSizeNew);
      m_packetSizeOld = packetSizeNew;
    }
}

References

[Cic06] Claudio Cicconetti, Enzo Mingozzi, Giovanni Stea, “An Integrated Framework for Enabling Effective Data Collection and Statistical Analysis with ns2”, Workshop on ns-2 (WNS2), Pisa, Italy, October 2006.

Collectors

This section is a placeholder to detail the functionalities provided by the Collector class to an ns-3 simulation, and gives examples on how to code them in a program.

Note: As of ns-3.18, Collectors are still under development and not yet provided as part of the framework.

Aggregators

This section details the functionalities provided by the Aggregator class to an ns-3 simulation. This section is meant for users interested in developing simulations with the ns-3 tools and using the Data Collection Framework, of which the Aggregator class is a part, to generate data output with their simulation’s results.

Aggregator Overview

An Aggregator object is supposed to be hooked to one or more trace sources in order to receive input. Aggregators are the end point of the data collected by the network of Probes and Collectors during the simulation. It is the Aggregator’s job to take these values and transform them into their final output format such as plain text files, spreadsheet files, plots, or databases.

Typically, an aggregator is connected to one or more Collectors. In this manner, whenever the Collectors’ trace sources export new values, the Aggregator can process the value so that it can be used in the final output format where the data values will reside after the simulation.

Note the following about Aggregators:

  • Aggregators may be dynamically turned on and off during the simulation with calls to Enable() and Disable(). For example, the aggregating of data may be turned off during the simulation warmup phase, which means those values won’t be included in the final output medium.
  • Aggregators receive data from Collectors via callbacks. When a Collector is associated to an aggregator, a call to TraceConnect is made to establish the Aggregator’s trace sink method as a callback.

To date, two Aggregators have been implemented:

  • GnuplotAggregator
  • FileAggregator

GnuplotAggregator

The GnuplotAggregator produces output files used to make gnuplots.

The GnuplotAggregator will create 3 different files at the end of the simulation:

  • A space separated gnuplot data file
  • A gnuplot control file
  • A shell script to generate the gnuplot
Creation

An object of type GnuplotAggregator will be created here to show what needs to be done.

One declares a GnuplotAggregator in dynamic memory by using the smart pointer class (Ptr<T>). To create a GnuplotAggregator in dynamic memory with smart pointers, one just needs to call the ns-3 method CreateObject(). The following code from src/stats/examples/gnuplot-aggregator-example.cc shows how to do this:

string fileNameWithoutExtension = "gnuplot-aggregator";

// Create an aggregator.
Ptr<GnuplotAggregator> aggregator =
  CreateObject<GnuplotAggregator> (fileNameWithoutExtension);

The first argument for the constructor, fileNameWithoutExtension, is the name of the gnuplot-related files to write, with no extension. This GnuplotAggregator will create a space separated gnuplot data file named “gnuplot-aggregator.dat”, a gnuplot control file named “gnuplot-aggregator.plt”, and a shell script to generate the gnuplot named “gnuplot-aggregator.sh”.

The gnuplot that is created can have its key in 4 different locations:

  • No key
  • Key inside the plot (the default)
  • Key above the plot
  • Key below the plot

The following gnuplot key location enum values are allowed to specify the key’s position:

enum KeyLocation {
  NO_KEY,
  KEY_INSIDE,
  KEY_ABOVE,
  KEY_BELOW
};

To place the key below the plot rather than in its default position inside the plot, you could do the following:

aggregator->SetKeyLocation(GnuplotAggregator::KEY_BELOW);
Examples

One example will be discussed in detail here:

  • Gnuplot Aggregator Example
Gnuplot Aggregator Example

An example that exercises the GnuplotAggregator can be found in src/stats/examples/gnuplot-aggregator-example.cc.

The following 2-D gnuplot was created using the example.

_images/gnuplot-aggregator.png

2-D Gnuplot Created by gnuplot-aggregator-example.cc Example.

This code from the example shows how to construct the GnuplotAggregator as was discussed above.

void Create2dPlot ()
{
  using namespace std;

  string fileNameWithoutExtension = "gnuplot-aggregator";
  string plotTitle                = "Gnuplot Aggregator Plot";
  string plotXAxisHeading         = "Time (seconds)";
  string plotYAxisHeading         = "Double Values";
  string plotDatasetLabel         = "Data Values";
  string datasetContext           = "Dataset/Context/String";

  // Create an aggregator.
  Ptr<GnuplotAggregator> aggregator =
    CreateObject<GnuplotAggregator> (fileNameWithoutExtension);

Various GnuplotAggregator attributes are set including the 2-D dataset that will be plotted.

// Set the aggregator's properties.
aggregator->SetTerminal ("png");
aggregator->SetTitle (plotTitle);
aggregator->SetLegend (plotXAxisHeading, plotYAxisHeading);

// Add a data set to the aggregator.
aggregator->Add2dDataset (datasetContext, plotDatasetLabel);

// aggregator must be turned on
aggregator->Enable ();

Next, the 2-D values are calculated, and each one is individually written to the GnuplotAggregator using the Write2d() function.

  double time;
  double value;

  // Create the 2-D dataset.
  for (time = -5.0; time <= +5.0; time += 1.0)
    {
      // Calculate the 2-D curve
      //
      //                   2
      //     value  =  time   .
      //
      value = time * time;

      // Add this point to the plot.
      aggregator->Write2d (datasetContext, time, value);
    }

  // Disable logging of data for the aggregator.
  aggregator->Disable ();
}

FileAggregator

The FileAggregator sends the values it receives to a file.

The FileAggregator can create 4 different types of files:

  • Formatted
  • Space separated (the default)
  • Comma separated
  • Tab separated

Formatted files use C-style format strings and the sprintf() function to print their values in the file being written.

Creation

An object of type FileAggregator will be created here to show what needs to be done.

One declares a FileAggregator in dynamic memory by using the smart pointer class (Ptr<T>). To create a FileAggregator in dynamic memory with smart pointers, one just needs to call the ns-3 method CreateObject. The following code from src/stats/examples/file-aggregator-example.cc shows how to do this:

string fileName       = "file-aggregator-formatted-values.txt";

// Create an aggregator that will have formatted values.
Ptr<FileAggregator> aggregator =
  CreateObject<FileAggregator> (fileName, FileAggregator::FORMATTED);

The first argument for the constructor, fileName, is the name of the file to write; the second argument, fileType, is the type of file to write. This FileAggregator will create a file named “file-aggregator-formatted-values.txt” with its values printed as specified by fileType, i.e., formatted in this case.

The following file type enum values are allowed:

enum FileType {
  FORMATTED,
  SPACE_SEPARATED,
  COMMA_SEPARATED,
  TAB_SEPARATED
};
Examples

One example will be discussed in detail here:

  • File Aggregator Example
File Aggregator Example

An example that exercises the FileAggregator can be found in src/stats/examples/file-aggregator-example.cc.

The following text file with 2 columns of values separated by commas was created using the example.

-5,25
-4,16
-3,9
-2,4
-1,1
0,0
1,1
2,4
3,9
4,16
5,25

This code from the example shows how to construct the FileAggregator as was discussed above.

void CreateCommaSeparatedFile ()
{
  using namespace std;

  string fileName       = "file-aggregator-comma-separated.txt";
  string datasetContext = "Dataset/Context/String";

  // Create an aggregator.
  Ptr<FileAggregator> aggregator =
    CreateObject<FileAggregator> (fileName, FileAggregator::COMMA_SEPARATED);

FileAggregator attributes are set.

// aggregator must be turned on
aggregator->Enable ();

Next, the 2-D values are calculated, and each one is individually written to the FileAggregator using the Write2d() function.

  double time;
  double value;

  // Create the 2-D dataset.
  for (time = -5.0; time <= +5.0; time += 1.0)
    {
      // Calculate the 2-D curve
      //
      //                   2
      //     value  =  time   .
      //
      value = time * time;

      // Add this point to the plot.
      aggregator->Write2d (datasetContext, time, value);
    }

  // Disable logging of data for the aggregator.
  aggregator->Disable ();
}

The following text file with 2 columns of formatted values was also created using the example.

Time = -5.000e+00     Value = 25
Time = -4.000e+00     Value = 16
Time = -3.000e+00     Value = 9
Time = -2.000e+00     Value = 4
Time = -1.000e+00     Value = 1
Time = 0.000e+00      Value = 0
Time = 1.000e+00      Value = 1
Time = 2.000e+00      Value = 4
Time = 3.000e+00      Value = 9
Time = 4.000e+00      Value = 16
Time = 5.000e+00      Value = 25

This code from the example shows how to construct the FileAggregator as was discussed above.

void CreateFormattedFile ()
{
  using namespace std;

  string fileName       = "file-aggregator-formatted-values.txt";
  string datasetContext = "Dataset/Context/String";

  // Create an aggregator that will have formatted values.
  Ptr<FileAggregator> aggregator =
    CreateObject<FileAggregator> (fileName, FileAggregator::FORMATTED);

FileAggregator attributes are set, including the C-style format string to use.

// Set the format for the values.
aggregator->Set2dFormat ("Time = %.3e\tValue = %.0f");

// aggregator must be turned on
aggregator->Enable ();

Next, the 2-D values are calculated, and each one is individually written to the FileAggregator using the Write2d() function.

  double time;
  double value;

  // Create the 2-D dataset.
  for (time = -5.0; time <= +5.0; time += 1.0)
    {
      // Calculate the 2-D curve
      //
      //                   2
      //     value  =  time   .
      //
      value = time * time;

      // Add this point to the plot.
      aggregator->Write2d (datasetContext, time, value);
    }

  // Disable logging of data for the aggregator.
  aggregator->Disable ();
}

Adaptors

This section details the functionalities provided by the Adaptor class to an ns-3 simulation. This section is meant for users interested in developing simulations with the ns-3 tools and using the Data Collection Framework, of which the Adaptor class is a part, to generate data output with their simulation’s results.

Note: the term ‘adaptor’ may also be spelled ‘adapter’; we chose the spelling aligned with the C++ standard.

Adaptor Overview

An Adaptor is used to make connections between different types of DCF objects.

To date, one Adaptor has been implemented:

  • TimeSeriesAdaptor

Time Series Adaptor

The TimeSeriesAdaptor lets Probes connect directly to Aggregators without needing any Collector in between.

Both of the implemented DCF helpers utilize TimeSeriesAdaptors in order to take probed values of different types and output the current time plus the value with both converted to doubles.

The role of the TimeSeriesAdaptor class is that of an adaptor, which takes raw-valued probe data of different types and outputs a tuple of two double values. The first is a timestamp, which may be set to different resolutions (e.g. Seconds, Milliseconds, etc.) in the future but which is presently hardcoded to Seconds. The second is the conversion of a non-double value to a double value (possibly with loss of precision).
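
The two DCF helpers in question are GnuplotHelper and FileHelper in the stats module. The following sketch shows the typical use of GnuplotHelper, which internally creates the probe, a TimeSeriesAdaptor, and a GnuplotAggregator; the probe type and trace path below are placeholders and must match a real trace source in your scenario:

GnuplotHelper plotHelper;
plotHelper.ConfigurePlot ("byte-count-plot",
                          "Packet Byte Count vs. Time",
                          "Time (Seconds)",
                          "Packet Byte Count");

// Probe type, config path of the trace source, probe trace source to plot,
// dataset label, and gnuplot key location.
plotHelper.PlotProbe ("ns3::Ipv4PacketProbe",
                      "/NodeList/*/$ns3::Ipv4L3Protocol/Tx",
                      "OutputBytes",
                      "Packet Byte Count",
                      GnuplotAggregator::KEY_BELOW);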

Scope/Limitations

This section discusses the scope and limitations of the Data Collection Framework.

Currently, only these Probes have been implemented in DCF:

  • BooleanProbe
  • DoubleProbe
  • Uinteger8Probe
  • Uinteger16Probe
  • Uinteger32Probe
  • TimeProbe
  • PacketProbe
  • ApplicationPacketProbe
  • Ipv4PacketProbe

Currently, no Collectors are available in the DCF, although a BasicStatsCollector is under development.

Currently, only these Aggregators have been implemented in DCF:

  • GnuplotAggregator
  • FileAggregator

Currently, only this Adaptor has been implemented in DCF:

  • TimeSeriesAdaptor

Future Work

This section discusses the future work to be done on the Data Collection Framework.

Here are some things that still need to be done:

  • Hook up more trace sources in ns-3 code to get more values out of the simulator.
  • Implement more types of Probes than there currently are.
  • Implement more than just the single current 2-D Collector, BasicStatsCollector.
  • Implement more Aggregators.
  • Implement more Adaptors than just the single current TimeSeriesAdaptor.

DSDV Routing

The Destination-Sequenced Distance Vector (DSDV) routing protocol is a proactive, table-driven routing protocol for MANETs, developed by Charles E. Perkins and Pravin Bhagwat in 1994. It uses the hop count as the metric in route selection.

This model was developed by the ResiliNets research group at the University of Kansas. A paper on this model exists at this URL.

DSDV Routing Overview

DSDV Routing Table: Every node maintains a table listing all the other nodes it has known, either directly or through some neighbors. Every node has a single entry in the routing table. The entry holds information about the node’s IP address, last known sequence number, and the hop count to reach that node. Along with these details, the table also keeps track of the next-hop neighbor used to reach the destination node and the timestamp of the last update received for that node.

The DSDV update message consists of three fields: Destination Address, Sequence Number, and Hop Count.

Each node uses two mechanisms to send out DSDV updates:

  1. Periodic Updates

    Periodic updates are sent out every m_periodicUpdateInterval (default: 15 s). In this update the node broadcasts its entire routing table.

  2. Trigger Updates

    Trigger updates are small updates sent in between the periodic updates, whenever a node receives a DSDV packet that causes a change in its routing table. The original paper did not clearly mention for which changes in the table a DSDV update should be sent out. The current implementation sends out an update irrespective of the kind of change in the routing table.

Updates are accepted based on the metric for a particular node. The first factor determining the acceptance of an update is the sequence number: an update must be accepted if its sequence number is higher, irrespective of the metric. If an update with the same sequence number is received, the update with the least metric (hop count) is given precedence.

In highly mobile scenarios, there is a high chance of route fluctuations; thus we have the concept of weighted settling time, where an update with a change in metric is not advertised to neighbors immediately. The node waits for the settling time to make sure that it did not receive a further update from its old neighbor before sending out that update.

The current implementation covers all of the above features of DSDV. It also has a request queue to buffer packets that have no route to the destination; the default is to buffer up to 5 packets per destination.
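
To run DSDV on a set of ad hoc nodes, the DsdvHelper can be used; the following is a hedged sketch (the attribute names PeriodicUpdateInterval and SettlingTime are assumed to match the model's attributes, and adhocNodes is a NodeContainer created elsewhere):

DsdvHelper dsdv;
dsdv.Set ("PeriodicUpdateInterval", TimeValue (Seconds (15)));
dsdv.Set ("SettlingTime", TimeValue (Seconds (6)));

InternetStackHelper stack;
stack.SetRoutingHelper (dsdv);   // install DSDV as the IPv4 routing protocol
stack.Install (adhocNodes);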

DSR Routing

Dynamic Source Routing (DSR) protocol is a reactive routing protocol designed specifically for use in multi-hop wireless ad hoc networks of mobile nodes.

This model was developed by the ResiliNets research group at the University of Kansas.

DSR Routing Overview

This model implements the base specification of the Dynamic Source Routing (DSR) protocol. Implementation is based on RFC 4728, with some extensions and modifications to the RFC specifications.

DSR operates in an on-demand manner. Therefore, our DSR model buffers all packets while a route request packet (RREQ) is disseminated. We implement a packet buffer in dsr-rsendbuff.cc. The packet queue implements garbage collection of old packets and a queue size limit. When a packet is sent out from the send buffer, it is queued in the maintenance buffer while awaiting a next-hop acknowledgment.

The maintenance buffer holds the packets that have already been sent out and waits for notification of packet delivery. Protocol operation strongly depends on the broken link detection mechanism. We implement the three heuristics recommended by the RFC as follows:

First, we use link layer feedback when possible, which is also the fastest of the three mechanisms for detecting link errors. A link is considered broken if frame transmission fails for all retries. This mechanism is meant for active links and works much faster than the alternatives. DSR detects the link layer transmission failure and marks the link as broken; recalculation of routes is then triggered when needed. If the user does not want to use link layer acknowledgments, they can be disabled by setting the “LinkAcknowledgment” attribute to false in “dsr-routing.cc”.

Second, passive acknowledgment should be used whenever possible. The node turns on “promiscuous” receive mode, in which it can receive packets not destined for itself, and when the node assures the delivery of that data packet to its destination, it cancels the passive acknowledgment timer.

Last, we use a network layer acknowledgment scheme to notify the receipt of a packet. Route request packets are not acknowledged or retransmitted.

The Route Cache implementation supports garbage collection of old entries and the state machine, as defined in the standard. It is implemented as an STL map container, keyed by the destination IP address.

DSR operates with direct access to the IP header and sits between the network and transport layers. When a packet is sent out from the transport layer, it is passed to DSR and the DSR header is appended.

We have two caching mechanisms: path cache and link cache. The path cache saves the whole path in the cache. The paths are sorted based on hop count, and whenever one path cannot be used, we change to the next path. The link cache is a slightly better design in the sense that it works on individual links (subpaths) and computes routes using Dijkstra’s algorithm; this part was implemented by Song Luan <lsuper@mail.ustc.edu.cn>.

The following optional protocol optimizations aren’t implemented:

  • Flow state

  • First Hop External (F), Last Hop External (L) flags

  • Handling unknown DSR options

  • Two types of error headers:
    1. flow state not supported option
    2. unsupported option (not going to happen in simulation)

DSR update in ns-3.17

We originally used “TxErrHeader” in Ptr<WifiMac> to indicate the transmission error of a specific packet at the link layer; however, it was not working correctly, since even when the packet was dropped this header was not recorded in the trace file. We therefore switched to a different approach for implementing the link layer notification mechanism: we look into the trace file for packet receive events. If we find one receive event for the data packet, we count that as the indicator of successful data delivery.

Useful parameters

+--------------------------+------------------------------------+-------------+
| Parameter                | Description                        | Default     |
+==========================+====================================+=============+
| MaxSendBuffLen           | Maximum number of packets that can | 64          |
|                          | be stored in send buffer           |             |
+--------------------------+------------------------------------+-------------+
| MaxSendBuffTime          | Maximum time packets can be queued | Seconds(30) |
|                          | in the send buffer                 |             |
+--------------------------+------------------------------------+-------------+
| MaxMaintLen              | Maximum number of packets that can | 50          |
|                          | be stored in maintenance buffer    |             |
+--------------------------+------------------------------------+-------------+
| MaxMaintTime             | Maximum time packets can be queued | Seconds(30) |
|                          | in maintenance buffer              |             |
+--------------------------+------------------------------------+-------------+
| MaxCacheLen              | Maximum number of route entries    | 64          |
|                          | that can be stored in route cache  |             |
+--------------------------+------------------------------------+-------------+
| RouteCacheTimeout        | Maximum time the route cache can   | Seconds(300)|
|                          | be queued in route cache           |             |
+--------------------------+------------------------------------+-------------+
| RreqRetries              | Maximum number of retransmissions  | 16          |
|                          | for request discovery of a route   |             |
+--------------------------+------------------------------------+-------------+
| CacheType                | Use Link Cache or use Path Cache   | "LinkCache" |
|                          |                                    |             |
+--------------------------+------------------------------------+-------------+
| LinkAcknowledgment       | Enable Link layer acknowledgment   | True        |
|                          | mechanism                          |             |
+--------------------------+------------------------------------+-------------+
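
These parameters are exposed as ns-3 attributes and can be tuned before DSR is installed. The following is a hedged sketch; the attribute names are taken from the table above, while the TypeId path ns3::dsr::DsrRouting is an assumption that should be checked against the Doxygen of your release:

// Values below are illustrative only.
Config::SetDefault ("ns3::dsr::DsrRouting::MaxSendBuffLen", UintegerValue (128));
Config::SetDefault ("ns3::dsr::DsrRouting::RreqRetries", UintegerValue (8));
Config::SetDefault ("ns3::dsr::DsrRouting::CacheType", StringValue ("PathCache"));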

Implementation modification

  • Three fields have been added to the DsrFsHeader: message type, source id, and destination id. These changes are only for post-processing:
    1. Message type is used to distinguish data packets from control packets
    2. source id is used to identify the real source of the data packet, since the packet is delivered hop-by-hop and the IPv4 header does not carry the real source and destination IP addresses as needed
    3. destination id identifies the real destination, for the same reason as above
  • The Route Reply header is not word-aligned in the DSR RFC; it has been changed to be word-aligned in the implementation

  • DSR works as a shim header between the transport and network protocols and needs its own forwarding mechanism. We changed the packet transmission to hop-by-hop delivery, so two fields were added to the DSR fixed header to notify packet delivery

Current Route Cache implementation

This implementation uses the “path cache”, which is simple to implement and ensures loop-free paths:

  • the path cache has an automatic expiration policy
  • the cache saves multiple route entries for a certain destination and sorts the entries based on hop count
  • the MaxEntriesEachDst attribute can be tuned to change the maximum number of entries saved for a single destination
  • when adding multiple routes for one destination, routes are compared based on hop count and expiration time; the route with the smaller hop count, or the more recently added route, is favored
  • future implementations may include the “link cache” as another possibility

DSR Instructions

The following should be kept in mind when running DSR as routing protocol:

  • NodeTraversalTime is the time it takes to traverse two neighboring nodes and should be chosen to fit the transmission range
  • PassiveAckTimeout is the time a packet in the maintenance buffer waits for a passive acknowledgment; it is normally set to twice NodeTraversalTime
  • RouteCacheTimeout should be set to a smaller value when node velocity is higher. The default value is 300 s.

Helper

To have a node run DSR, the easiest way is to use the DsrHelper and DsrMainHelper in your simulation script. For instance:

DsrHelper dsr;
DsrMainHelper dsrMain;
dsrMain.Install (dsr, adhocNodes);

The example scripts inside src/dsr/examples/ demonstrate the use of DSR-based nodes in different scenarios. The helper source can be found inside src/dsr/helper/dsr-main-helper.{h,cc} and src/dsr/helper/dsr-helper.{h,cc}.

Examples

The example can be found in src/dsr/examples/:

  • dsr.cc uses DSR as the routing protocol within a traditional MANET environment [3].

DSR is also built in the routing comparison case in examples/routing/:

  • manet-routing-compare.cc is a comparison case with the built-in MANET routing protocols and can generate its own results.

Validation

This model has been tested as follows:

  • Unit tests have been written to verify the internals of DSR. These can be found in src/dsr/test/dsr-test-suite.cc. The tests verify whether the methods inside the DSR module which deal with the packet buffer and headers work correctly.
  • Simulation cases similar to [3] have been tested and have comparable results.
  • manet-routing-compare.cc has been used to compare DSR with three other routing protocols.

A paper was presented on these results at the Workshop on ns-3 in 2011.

Limitations

The model is not fully compliant with RFC 4728. As an example, the DSR fixed-size header has been extended and is four octets longer than the RFC specification. As a consequence, the DSR headers cannot be correctly decoded by Wireshark.

Full compliance of the model with the RFC is planned for the future.

Emulation Overview

ns-3 has been designed for integration into testbed and virtual machine environments. We have addressed this need by providing two kinds of net devices. The first kind of device is a file descriptor net device (FdNetDevice), which is a generic device type that can read and write from a file descriptor. By associating this file descriptor with different things on the host system, different capabilities can be provided. For instance, the FdNetDevice can be associated with an underlying packet socket to provide emulation capabilities. This allows ns-3 simulations to send data on a “real” network. The second kind, called a TapBridge NetDevice allows a “real” host to participate in an ns-3 simulation as if it were one of the simulated nodes. An ns-3 simulation may be constructed with any combination of simulated or emulated devices.

Note: Prior to ns-3.17, the emulation capability was provided by a special device called an Emu NetDevice; the Emu NetDevice has been replaced by the FdNetDevice.

One of the use-cases we want to support is that of a testbed. A concrete example of an environment of this kind is the ORBIT testbed. ORBIT is a laboratory emulator/field trial network arranged as a two dimensional grid of 400 802.11 radio nodes. We integrate with ORBIT by using their “imaging” process to load and run ns-3 simulations on the ORBIT array. We can use our EmuFdNetDevice to drive the hardware in the testbed and we can accumulate results either using the ns-3 tracing and logging functions, or the native ORBIT data gathering techniques. See http://www.orbit-lab.org/ for details on the ORBIT testbed.

A simulation of this kind is shown in the following figure:

_images/testbed.png

Example Implementation of Testbed Emulation.

You can see that there are separate hosts, each running a subset of a “global” simulation. Instead of an ns-3 channel connecting the hosts, we use real hardware provided by the testbed. This allows ns-3 applications and protocol stacks attached to a simulation node to communicate over real hardware.

We expect the primary use for this configuration will be to generate repeatable experimental results in a real-world network environment that includes all of the ns-3 tracing, logging, visualization and statistics gathering tools.

In what can be viewed as essentially an inverse configuration, we allow “real” machines running native applications and protocol stacks to integrate with an ns-3 simulation. This allows for the simulation of large networks connected to a real machine, and also enables virtualization. A simulation of this kind is shown in the following figure:

_images/emulated-channel.png

Implementation overview of emulated channel.

Here, you will see that there is a single host with a number of virtual machines running on it. An ns-3 simulation is shown running in the virtual machine shown in the center of the figure. This simulation has a number of nodes with associated ns-3 applications and protocol stacks that are talking to an ns-3 channel through native simulated ns-3 net devices.

There are also two virtual machines shown at the far left and far right of the figure. These VMs are running native (Linux) applications and protocol stacks. The VM is connected into the simulation by a Linux Tap net device. The user-mode handler for the Tap device is instantiated in the simulation and attached to a proxy node that represents the native VM in the simulation. These handlers allow the Tap devices on the native VMs to behave as if they were ns-3 net devices in the simulation VM. This, in turn, allows the native software and protocol suites in the native VMs to believe that they are connected to the simulated ns-3 channel.

We expect the typical use case for this environment will be to analyze the behavior of native applications and protocol suites in the presence of large simulated ns-3 networks.


Energy Framework

Energy consumption is a key issue for wireless devices, and wireless network researchers often need to investigate the energy consumption at a node or in the overall network while running network simulations in ns-3. This requires ns-3 to support energy consumption modeling. Further, as concepts such as fuel cells and energy scavenging are becoming viable for low power wireless devices, incorporating the effect of these emerging technologies into simulations requires support for modeling diverse energy sources in ns-3. The ns-3 Energy Framework provides the basis for energy consumption, energy source and energy harvesting modeling.

Model Description

The source code for the Energy Framework is currently at: src/energy.

Design

The ns-3 Energy Framework is composed of three parts: Energy Source, Device Energy Model, and Energy Harvester. The framework is implemented in the src/energy/models folder.

Energy Source

The Energy Source represents the power supply on each node. A node can have one or more energy sources, and each energy source can be connected to multiple device energy models. Connecting an energy source to a device energy model implies that the corresponding device draws power from the source. The basic functionality of the Energy Source is to provide energy for devices on the node. When energy is completely drained from the Energy Source, it notifies the devices on the node so that each device can react to this event. Further, each node can access the Energy Source objects for information such as remaining energy or energy fraction (battery level). This enables the implementation of energy-aware protocols in ns-3.
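
As a hedged sketch of how an energy-aware protocol might query battery state (assuming an EnergySourceContainer has been aggregated to the node, as described in the Usage section below, and that nodes is a NodeContainer created elsewhere):

Ptr<Node> node = nodes.Get (0);
Ptr<EnergySourceContainer> sources = node->GetObject<EnergySourceContainer> ();
Ptr<EnergySource> source = sources->Get (0);
double remainingJ = source->GetRemainingEnergy ();   // remaining energy in Joules
double fraction   = source->GetEnergyFraction ();    // battery level, 0.0 - 1.0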

In order to model a wide range of power supplies such as batteries, the Energy Source must be able to capture characteristics of these supplies. There are 2 important characteristics or effects related to practical batteries:

Rate Capacity Effect
Decrease of battery lifetime when the current draw is higher than the rated value of the battery.
Recovery Effect
Increase of battery lifetime when the battery is alternating between discharge and idle states.

In order to incorporate the Rate Capacity Effect, the Energy Source uses current draw from all the devices on the same node to calculate energy consumption. Moreover, multiple Energy Harvesters can be connected to the Energy Source in order to replenish its energy. The Energy Source periodically polls all the devices and energy harvesters on the same node to calculate the total current drain and hence the energy consumption. When a device changes state, its corresponding Device Energy Model will notify the Energy Source of this change and new total current draw will be calculated. Similarly, every Energy Harvester update triggers an update to the connected Energy Source.

The Energy Source base class keeps a list of devices (Device Energy Model objects) and energy harvesters (Energy Harvester objects) that are using the particular Energy Source as power supply. When energy is completely drained, the Energy Source will notify all devices on this list. Each device can then handle this event independently, based on the desired behavior that should be followed in case of power outage.

Device Energy Model

The Device Energy Model is the energy consumption model of a device installed on the node. It is designed to be a state based model where each device is assumed to have a number of states, and each state is associated with a power consumption value. Whenever the state of the device changes, the corresponding Device Energy Model will notify the Energy Source of the new current draw of the device. The Energy Source will then calculate the new total current draw and update the remaining energy.

The Device Energy Model can also be used for devices that do not have finite number of states. For example, in an electric vehicle, the current draw of the motor is determined by its speed. Since the vehicle’s speed can take continuous values within a certain range, it is infeasible to define a set of discrete states of operation. However, by converting the speed value into current directly, the same set of Device Energy Model APIs can still be used.

Energy Harvester

The energy harvester represents the elements that harvest energy from the environment and recharge the Energy Source to which it is connected. The energy harvester includes the complete implementation of the actual energy harvesting device (e.g., a solar panel) and the environment (e.g., the solar radiation). This means that in implementing an energy harvester, the energy contribution of the environment and the additional energy requirements of the energy harvesting device such as the conversion efficiency and the internal power consumption of the device needs to be jointly modeled.

WiFi Radio Energy Model

The WiFi Radio Energy Model is the energy consumption model of a Wifi net device. It provides a state for each of the available states of the PHY layer: Idle, CcaBusy, Tx, Rx, ChannelSwitch, Sleep. Each of such states is associated with a value (in Ampere) of the current draw (see below for the corresponding attribute names). A Wifi Radio Energy Model PHY Listener is registered to the Wifi PHY in order to be notified of every Wifi PHY state transition. At every transition, the energy consumed in the previous state is computed and the energy source is notified in order to update its remaining energy.

The Wifi Tx Current Model gives the possibility to compute the current draw in the transmit state as a function of the nominal tx power (in dBm), as observed in several experimental measurements. To this purpose, the Wifi Radio Energy Model PHY Listener is notified of the nominal tx power used to transmit the current frame and passes such a value to the Wifi Tx Current Model which takes care of updating the current draw in the Tx state. Hence, the energy consumption is correctly computed even if the Wifi Remote Station Manager performs per-frame power control. Currently, a Linear Wifi Tx Current Model is implemented which computes the tx current as a linear function (according to parameters that can be specified by the user) of the nominal tx power in dBm.

The Wifi Radio Energy Model offers the possibility to specify a callback that is invoked when the energy source is depleted. If such a callback is not specified when the Wifi Radio Energy Model Helper is used to install the model on a device, a callback is implicitly made so that the Wifi PHY is put in the SLEEP mode (hence no frame is transmitted nor received afterwards) when the energy source is depleted. Likewise, it is possible to specify a callback that is invoked when the energy source is recharged (which might occur in case an energy harvester is connected to the energy source). If such a callback is not specified when the Wifi Radio Energy Model Helper is used to install the model on a device, a callback is implicitly made so that the Wifi PHY is resumed from the SLEEP mode when the energy source is recharged.
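
As a hedged sketch, such a depletion callback can be registered through the helper; the function name EnergyDepleted is hypothetical:

// Invoked when the energy source powering the radio is completely drained.
void
EnergyDepleted (void)
{
  NS_LOG_UNCOND ("Energy depleted at " << Simulator::Now ().GetSeconds () << " s");
}

// ...
WifiRadioEnergyModelHelper radioEnergyHelper;
radioEnergyHelper.SetDepletionCallback (MakeCallback (&EnergyDepleted));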

Future Work

For Device Energy Models, we are planning to include support for other PHY layer models provided in ns-3, such as WiMAX, and to model the energy consumption of other non-communicating devices, like a generic sensor and a CPU. For Energy Sources, we are planning to include new types of Energy Sources such as a supercapacitor and a Nickel-Metal Hydride (Ni-MH) battery. For the Energy Harvesters, we are planning to implement an energy harvester that recharges the energy sources according to the power levels defined in a user-customizable dataset of real measurements.

References

[1] ns-2 Energy model: http://www.cubinlab.ee.unimelb.edu.au/~jrid/Docs/Manuel-NS2/node204.html
[2] H. Wu, S. Nabar and R. Poovendran. An Energy Framework for the Network Simulator 3 (ns-3).
[3] M. Handy and D. Timmermann. Simulation of mobile wireless networks with accurate modelling of non-linear battery effects. In Proc. of Applied simulation and Modeling (ASM), 2003.
[4] D. N. Rakhmatov and S. B. Vrudhula. An analytical high-level battery model for use in energy management of portable electronic systems. In Proc. of IEEE/ACM International Conference on Computer Aided Design (ICCAD‘01), pages 488-493, November 2001.
[5] D. N. Rakhmatov, S. B. Vrudhula, and D. A. Wallach. Battery lifetime prediction for energy-aware computing. In Proc. of the 2002 International Symposium on Low Power Electronics and Design (ISLPED‘02), pages 154-159, 2002.
[6] C. Tapparello, H. Ayatollahi and W. Heinzelman. Extending the Energy Framework for Network Simulator 3 (ns-3). Workshop on ns-3 (WNS3), Poster Session, Atlanta, GA, USA. May, 2014.
[7] C. Tapparello, H. Ayatollahi and W. Heinzelman. Energy Harvesting Framework for Network Simulator 3 (ns-3). 2nd International Workshop on Energy Neutral Sensing Systems (ENSsys), Memphis, TN, USA. November 6, 2014.

Usage

The main way that ns-3 users will typically interact with the Energy Framework is through the helper API and through the publicly visible attributes of the framework. The helper API is defined in src/energy/helper/*.h.

In order to use the Energy Framework, the user must install an Energy Source for the node of interest, the corresponding Device Energy Model for the network devices and, if necessary, one or more Energy Harvesters. Energy Source objects are aggregated onto each node by the Energy Source Helper. In order to allow multiple energy sources per node, we aggregate an Energy Source Container rather than directly aggregating a source object.

The Energy Source object keeps a list of Device Energy Model and Energy Harvester objects using the source as their power supply. Device Energy Model objects are installed onto the Energy Source by the Device Energy Model Helper, while Energy Harvester objects are installed by the Energy Harvester Helper. The user can access the Device Energy Model objects through the Energy Source object to obtain energy consumption information for individual devices. Moreover, the user can access the Energy Harvester objects in order to gather information regarding the current harvestable power and the total energy harvested by the harvester.
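
A minimal sketch of this workflow is shown below; the helper and attribute names are those listed later in this section, while nodes and wifiDevices are a NodeContainer and NetDeviceContainer created elsewhere:

// Install a basic energy source on each node.
BasicEnergySourceHelper basicSourceHelper;
basicSourceHelper.Set ("BasicEnergySourceInitialEnergyJ", DoubleValue (100.0));
EnergySourceContainer sources = basicSourceHelper.Install (nodes);

// Attach a WiFi radio energy model to each device, drawing from those sources.
WifiRadioEnergyModelHelper radioEnergyHelper;
radioEnergyHelper.Set ("TxCurrentA", DoubleValue (0.0174));
DeviceEnergyModelContainer deviceModels =
  radioEnergyHelper.Install (wifiDevices, sources);

// Optionally attach a basic energy harvester to each source.
BasicEnergyHarvesterHelper harvesterHelper;
EnergyHarvesterContainer harvesters = harvesterHelper.Install (sources);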

Examples

The example directories, src/examples/energy and examples/energy, contain some basic code that shows how to set up the framework.

Helpers

Energy Source Helper

Base helper class for Energy Source objects; this helper aggregates an Energy Source object onto a node. Child implementations of this class create the actual Energy Source object.

Device Energy Model Helper

Base helper class for Device Energy Model objects; this helper attaches Device Energy Model objects to Energy Source objects. Child implementations of this class create the actual Device Energy Model object.

Energy Harvesting Helper

Base helper class for Energy Harvester objects; this helper attaches Energy Harvester objects to Energy Source objects. Child implementations of this class create the actual Energy Harvester object.

Attributes

Attributes differ between Energy Source, Device Energy Model, and Energy Harvester implementations; please look at the specific child class for details.

Basic Energy Source
  • BasicEnergySourceInitialEnergyJ: Initial energy stored in basic energy source.
  • BasicEnergySupplyVoltageV: Initial supply voltage for basic energy source.
  • PeriodicEnergyUpdateInterval: Time between two consecutive periodic energy updates.
RV Battery Model
  • RvBatteryModelPeriodicEnergyUpdateInterval: RV battery model sampling interval.
  • RvBatteryModelOpenCircuitVoltage: RV battery model open circuit voltage.
  • RvBatteryModelCutoffVoltage: RV battery model cutoff voltage.
  • RvBatteryModelAlphaValue: RV battery model alpha value.
  • RvBatteryModelBetaValue: RV battery model beta value.
  • RvBatteryModelNumOfTerms: The number of terms of the infinite sum for estimating battery level.
WiFi Radio Energy Model
  • IdleCurrentA: The default radio Idle current in Ampere.
  • CcaBusyCurrentA: The default radio CCA Busy State current in Ampere.
  • TxCurrentA: The radio Tx current in Ampere.
  • RxCurrentA: The radio Rx current in Ampere.
  • SwitchingCurrentA: The default radio Channel Switch current in Ampere.
  • SleepCurrentA: The radio Sleep current in Ampere.
  • TxCurrentModel: A pointer to the attached tx current model.
Basic Energy Harvester
  • PeriodicHarvestedPowerUpdateInterval: Time between two consecutive periodic updates of the harvested power.
  • HarvestablePower: Random variable that represents the amount of power provided by the energy harvester.

Tracing

Traced values differ between Energy Source, Device Energy Model, and Energy Harvester implementations; please look at the specific child class for details.

Basic Energy Source
  • RemainingEnergy: Remaining energy at BasicEnergySource.
RV Battery Model
  • RvBatteryModelBatteryLevel: RV battery model battery level.
  • RvBatteryModelBatteryLifetime: RV battery model battery lifetime.
WiFi Radio Energy Model
  • TotalEnergyConsumption: Total energy consumption of the radio device.
Basic Energy Harvester
  • HarvestedPower: Current power provided by the BasicEnergyHarvester.
  • TotalEnergyHarvested: Total energy harvested by the BasicEnergyHarvester.

Validation

Comparison of the Energy Framework against actual devices has not been performed. The current implementation of the Energy Framework is checked numerically for computation errors. The RV battery model is validated by comparing results with those presented in the original RV battery model paper.

File Descriptor NetDevice

The src/fd-net-device module provides the FdNetDevice class, which is able to read and write traffic using a file descriptor provided by the user. This file descriptor can be associated to a TAP device, to a raw socket, to a user space process generating/consuming traffic, etc. The user has full freedom to define how external traffic is generated and ns-3 traffic is consumed.

Different mechanisms to associate a simulation to external traffic can be provided through helper classes. Three specific helpers are provided:

  • EmuFdNetDeviceHelper (to associate the ns-3 device with a physical device in the host machine)
  • TapFdNetDeviceHelper (to associate the ns-3 device with the file descriptor from a tap device in the host machine)
  • PlanetLabFdNetDeviceHelper (to automate the creation of tap devices in PlanetLab nodes, enabling ns-3 simulations that can send and receive traffic through the Internet using PlanetLab resources)

Model Description

The source code for this module lives in the directory src/fd-net-device.

The FdNetDevice is a special type of ns-3 NetDevice that reads traffic to and from a file descriptor. That is, unlike pure simulation NetDevice objects that write frames to and from a simulated channel, this FdNetDevice directs frames out of the simulation to a file descriptor. The file descriptor may be associated to a Linux TUN/TAP device, to a socket, or to a user-space process.

It is up to the user of this device to provide a file descriptor. The type of file descriptor being provided determines what is being modelled. For instance, if the file descriptor provides a raw socket to a WiFi card on the host machine, the device being modelled is a WiFi device.
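
As a hedged illustration, the EmuFdNetDeviceHelper described above opens such a raw socket on a host interface and hands the resulting file descriptor to the device; the interface name eth0 is a placeholder and node is a Ptr<Node> created elsewhere:

EmuFdNetDeviceHelper emu;
emu.SetDeviceName ("eth0");                      // host NIC to emulate over (placeholder)
NetDeviceContainer devices = emu.Install (node);

// The device must use a real, unique MAC address on the host network.
Ptr<NetDevice> device = devices.Get (0);
device->SetAttribute ("Address", Mac48AddressValue (Mac48Address::Allocate ()));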

From the conceptual “top” of the device looking down, it looks to the simulated node like a device supporting a 48-bit IEEE MAC address that can be bridged, supports broadcast, and uses IPv4 ARP or IPv6 Neighbor Discovery, although these attributes can be tuned on a per-use-case basis.

Design

The FdNetDevice implementation makes use of a reader object, extended from the FdReader class in the ns-3 src/core module, which manages a separate thread from the main ns-3 execution thread, in order to read traffic from the file descriptor.

Upon invocation of the StartDevice method, the reader object is initialized and starts the reading thread. Before device start, a file descriptor must be previously associated to the FdNetDevice with the SetFileDescriptor invocation.

The creation and configuration of the file descriptor can be left to a number of helpers, described in more detail below. When this is done, the invocation of SetFileDescriptor is the responsibility of the helper and must not be performed directly by the user.

Upon reading an incoming frame from the file descriptor, the reader will pass the frame to the ReceiveCallback method, whose task it is to schedule the reception of the frame by the device as a ns-3 simulation event. Since the new frame is passed from the reader thread to the main ns-3 simulation thread, thread-safety issues are avoided by using the ScheduleWithContext call instead of the regular Schedule call.

In order to avoid overwhelming the scheduler when the incoming data rate is too high, a counter is kept with the number of frames that are currently scheduled to be received by the device. If this counter reaches the value given by the RxQueueSize attribute in the device, then the new frame will be dropped silently.

The actual reception of the new frame by the device occurs when the scheduled ForwardUp method is invoked by the simulator. This method acts as if a new frame had arrived from a channel attached to the device. The device then decapsulates the frame, removing any layer 2 headers, and forwards it to upper network stack layers of the node. The ForwardUp method will remove the frame headers, according to the frame encapsulation type defined by the EncapsulationMode attribute, and invoke the receive callback passing an IP packet.

An extra header, the PI header, can be present when the file descriptor is associated to a TAP device that was created without setting the IFF_NO_PI flag. This extra header is removed if EncapsulationMode is set to DIXPI value.

In the opposite direction, packets generated inside the simulation that are sent out through the device will be passed to the Send method, which will in turn invoke the SendFrom method. The latter method will add the necessary layer 2 headers and simply write the newly created frame to the file descriptor.

Scope and Limitations

Users of this device are cautioned that there is no flow control across the file descriptor boundary when using emulation mode. That is, in a Linux system, if the speed of writing network packets exceeds the ability of the underlying physical device to buffer the packets, backpressure up to the writing application will be applied to avoid local packet loss. No such flow control is provided across the file descriptor interface, so users must be aware of this limitation.

As explained before, the RxQueueSize attribute limits the number of packets that can be pending reception by the device. Frames read from the file descriptor while the number of pending packets is at its maximum will be silently dropped.
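
If incoming traffic is bursty, the limit can be raised through the RxQueueSize attribute; a minimal sketch (the value 2000 is only an example):

// Raise the pending-packet limit for all FdNetDevices created afterwards.
Config::SetDefault ("ns3::FdNetDevice::RxQueueSize", UintegerValue (2000));
// Or on an already-created device:
device->SetAttribute ("RxQueueSize", UintegerValue (2000));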

The MTU of the device defaults to the Ethernet II MTU value. However, helpers are expected to set the MTU to the right value to reflect the characteristics of the network interface associated with the file descriptor. If no helper is used, then the responsibility of setting the correct MTU value for the device falls to the user. The size of the read buffer on the file descriptor reader is set to the MTU value in the StartDevice method.

The FdNetDevice class currently supports three encapsulation modes: DIX for Ethernet II frames, LLC for 802.2 LLC/SNAP frames, and DIXPI for Ethernet II frames with an additional TAP PI header. This means that traffic traversing the file descriptor is expected to be Ethernet II compatible. IEEE 802.1q (VLAN) tagging is not supported. Attaching an FdNetDevice to a wireless interface is possible as long as the driver provides Ethernet II frames to the socket API. Note that to associate an FdNetDevice with a wireless card in ad-hoc mode, the MAC address of the device must be set to the real card MAC address, otherwise any incoming traffic carrying a fake MAC address will be discarded by the driver.

As mentioned before, three helpers are provided with the fd-net-device module. Each individual helper (file descriptor type) may have platform limitations. For instance, threading, real-time simulation mode, and the ability to create TUN/TAP devices are prerequisites to using the provided helpers. Support for these modes can be found in the output of the waf configure step, e.g.:

Threading Primitives          : enabled
Real Time Simulator           : enabled
Emulated Net Device           : enabled
Tap Bridge                    : enabled

It is important to mention that while testing the FdNetDevice we have found an upper bound of 60 Mbps for TCP throughput when using 1 Gb Ethernet links. This limit is most likely due to the processing power of the computers involved in the tests.

Usage

The usage pattern for this type of device is similar to other net devices with helpers that install to node pointers or node containers. When using the base FdNetDeviceHelper, the user is responsible for creating and setting the file descriptor.

FdNetDeviceHelper helper;
NetDeviceContainer devices = helper.Install (nodes);
Ptr<FdNetDevice> device = DynamicCast<FdNetDevice> (devices.Get (0));

// file descriptor (fd) generation
...

device->SetFileDescriptor (fd);

Most commonly a FdNetDevice will be used to interact with the host system. In these cases it is almost certain that the user will want to run in real-time emulation mode, and to enable checksum computations. The typical program statements are as follows:

GlobalValue::Bind ("SimulatorImplementationType", StringValue ("ns3::RealtimeSimulatorImpl"));
GlobalValue::Bind ("ChecksumEnabled", BooleanValue (true));

The easiest way to set up an experiment that interacts with a Linux host system is to use the Emu and Tap helpers. Perhaps the most unusual part of these helper implementations relates to the requirement for executing some of the code with super-user permissions. Rather than force the user to execute the entire simulation as root, we provide a small “creator” program that runs as root and does any required high-permission sockets work. The easiest way to set the right privileges for the “creator” programs is to pass the --enable-sudo flag to waf configure.

We do a similar thing for both the Emu and the Tap devices. The high-level view is that the CreateFileDescriptor method creates a local interprocess (Unix) socket, forks, and executes the small creation program. The small program, which runs as suid root, creates a raw socket and sends back the raw socket file descriptor over the Unix socket that is passed to it as a parameter. The raw socket is passed as a control message (sometimes called ancillary data) of type SCM_RIGHTS.
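
The following hedged sketch (plain POSIX code, not part of the ns-3 API; error handling omitted and the function name is hypothetical) illustrates the fd-passing technique the creator programs rely on:

#include <sys/socket.h>
#include <sys/uio.h>
#include <cstring>

// Hypothetical helper: pass fdToPass to the peer of the Unix socket unixSock.
static void
SendFd (int unixSock, int fdToPass)
{
  char dummy = '\0';
  struct iovec iov;
  iov.iov_base = &dummy;                       // at least one byte of real data is required
  iov.iov_len = 1;

  char control[CMSG_SPACE (sizeof (int))];
  std::memset (control, 0, sizeof (control));

  struct msghdr msg;
  std::memset (&msg, 0, sizeof (msg));
  msg.msg_iov = &iov;
  msg.msg_iovlen = 1;
  msg.msg_control = control;
  msg.msg_controllen = sizeof (control);

  struct cmsghdr *cmsg = CMSG_FIRSTHDR (&msg);
  cmsg->cmsg_level = SOL_SOCKET;
  cmsg->cmsg_type = SCM_RIGHTS;                // the ancillary payload is a file descriptor
  cmsg->cmsg_len = CMSG_LEN (sizeof (int));
  std::memcpy (CMSG_DATA (cmsg), &fdToPass, sizeof (int));

  sendmsg (unixSock, &msg, 0);                 // the kernel installs a duplicate fd in the receiver
}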

Helpers

EmuFdNetDeviceHelper

The EmuFdNetDeviceHelper creates a raw socket to an underlying physical device, and provides the socket descriptor to the FdNetDevice. This allows the ns-3 simulation to read frames from and write frames to a network device on the host.

The emulation helper permits a simulated ns-3 node to be transparently integrated into a network composed of real nodes.

+----------------------+     +-----------------------+
|         host 1       |     |         host 2        |
+----------------------+     +-----------------------+
|    ns-3 simulation   |     |                       |
+----------------------+     |         Linux         |
|       ns-3 Node      |     |     Network Stack     |
|  +----------------+  |     |   +----------------+  |
|  |    ns-3 TCP    |  |     |   |       TCP      |  |
|  +----------------+  |     |   +----------------+  |
|  |    ns-3 IP     |  |     |   |       IP       |  |
|  +----------------+  |     |   +----------------+  |
|  |   FdNetDevice  |  |     |   |                |  |
|  |    10.1.1.1    |  |     |   |                |  |
|  +----------------+  |     |   +    ETHERNET    +  |
|  |   raw socket   |  |     |   |                |  |
|--+----------------+--|     |   +----------------+  |
|       | eth0 |       |     |        | eth0 |       |
+-------+------+-------+     +--------+------+-------+

        10.1.1.11                     10.1.1.12

            |                            |
            +----------------------------+

This helper replaces the functionality of the EmuNetDevice found in ns-3 prior to ns-3.17, by bringing this type of device into the common framework of the FdNetDevice. The EmuNetDevice was deprecated in favor of this new helper.

The device is configured to perform MAC spoofing to separate simulation network traffic from other network traffic that may be flowing to and from the host.

One can use this helper in a testbed situation where the host on which the simulation is running has a specific interface of interest which drives the testbed hardware. You would also need to set this specific interface into promiscuous mode and provide an appropriate device name to the ns-3 simulation. Additionally, hardware offloading of segmentation and checksums should be disabled.

The helper only works if the underlying interface is up and in promiscuous mode. Packets will be sent out over the device, but we use MAC spoofing. The MAC addresses will be generated (by default) using the Organizationally Unique Identifier (OUI) 00:00:00 as a base. This vendor code is not assigned to any organization and so should not conflict with any real hardware.

It is always up to the user to determine that using these MAC addresses is acceptable on their network and will not conflict with anything else (including another simulation using such devices) on the network. If you are using the emulated FdNetDevice configuration in separate simulations, you must consider global MAC address assignment issues and ensure that MAC addresses are unique across all simulations. The emulated net device respects the MAC address provided in the Address attribute so you can do this manually. For larger simulations, you may want to set the OUI in the MAC address allocation function.

Before invoking the Install method, the correct device name must be configured on the helper using the SetDeviceName method. The device name is required to identify which physical device should be used to open the raw socket.

EmuFdNetDeviceHelper emu;
emu.SetDeviceName (deviceName);
NetDeviceContainer devices = emu.Install (node);
Ptr<NetDevice> device = devices.Get (0);
device->SetAttribute ("Address", Mac48AddressValue (Mac48Address::Allocate ()));

TapFdNetDeviceHelper

A Tap device is a special type of Linux device for which one end of the device appears to the kernel as a virtual net_device, and the other end is provided as a file descriptor to user-space. This file descriptor can be passed to the FdNetDevice. Packets forwarded to the TAP device by the kernel will show up in the FdNetDevice in ns-3.

Users should note that this usage of TAP devices is different than that provided by the TapBridge NetDevice found in src/tap-bridge. The model in this helper is as follows:

+-------------------------------------+
|                host                 |
+-------------------------------------+
|    ns-3 simulation   |              |
+----------------------+              |
|      ns-3 Node       |              |
|  +----------------+  |              |
|  |    ns-3 TCP    |  |              |
|  +----------------+  |              |
|  |    ns-3 IP     |  |              |
|  +----------------+  |              |
|  |   FdNetDevice  |  |              |
|--+----------------+--+    +------+  |
|       | TAP  |            | eth0 |  |
|       +------+            +------+  |
|     192.168.0.1               |     |
+-------------------------------|-----+
                                |
                                |
                                ------------ (Internet) -----

In the above, the configuration requires that the host be able to forward traffic generated by the simulation to the Internet.

The model in TapBridge (in another module) is as follows:

+--------+
|  Linux |
|  host  |                    +----------+
| ------ |                    |   ghost  |
|  apps  |                    |   node   |
| ------ |                    | -------- |
|  stack |                    |    IP    |     +----------+
| ------ |                    |   stack  |     |   node   |
|  TAP   |                    |==========|     | -------- |
| device | <----- IPC ------> |   tap    |     |    IP    |
+--------+                    |  bridge  |     |   stack  |
                              | -------- |     | -------- |
                              |   ns-3   |     |   ns-3   |
                              |   net    |     |   net    |
                              |  device  |     |  device  |
                              +----------+     +----------+
                                   ||               ||
                              +---------------------------+
                              |        ns-3 channel       |
                              +---------------------------+

In the above, packets instead traverse ns-3 NetDevices and Channels.

The usage pattern for this example is that the user sets the MAC address and either (or both) the IPv4 and IPv6 addresses and masks on the device, and the PI header if needed. For example:

TapFdNetDeviceHelper helper;
helper.SetDeviceName (deviceName);
helper.SetModePi (modePi);
helper.SetTapIpv4Address (tapIp);
helper.SetTapIpv4Mask (tapMask);
...
helper.Install (node);

PlanetLabFdNetDeviceHelper

PlanetLab is a world wide distributed network testbed composed of nodes connected to the Internet. Running ns-3 simulations in PlanetLab nodes using the PlanetLabFdNetDeviceHelper makes it possible to send simulated traffic generated by ns-3 directly to the Internet. This setup can be useful to validate ns-3 Internet protocols or other future protocols implemented in ns-3.

To run experiments using PlanetLab nodes it is required to have a PlanetLab account. Only members of PlanetLab partner institutions can obtain such accounts (for more information visit http://www.planet-lab.org/ or http://www.planet-lab.eu). Once the account is obtained, a PlanetLab slice must be requested in order to conduct experiments. A slice represents an experiment unit related to a group of PlanetLab users, and can be associated to virtual machines in different PlanetLab nodes. Slices can also be customized by adding configuration tags to them (this is done by PlanetLab administrators).

The PlanetLabFdNetDeviceHelper creates TAP devices on PlanetLab nodes using specific PlanetLab mechanisms (i.e. the vsys system), and associates the TAP device to a FdNetDevice in ns-3. The functionality provided by this helper is similar to that provided by the TapFdNetDeviceHelper, except that the underlying mechanisms to create the TAP device are different.

+-------------------------------------+
|         PlanetLab  host             |
+-------------------------------------+
|    ns-3 simulation   |              |
+----------------------+              |
|       ns-3 Node      |              |
|  +----------------+  |              |
|  |    ns-3 TCP    |  |              |
|  +----------------+  |              |
|  |    ns-3 IP     |  |              |
|  +----------------+  |              |
|  |   FdNetDevice  |  |              |
|--+----------------+--+    +------+  |
|       | TAP  |            | eth0 |  |
|       +------+            +------+  |
|     192.168.0.1               |     |
+-------------------------------|-----+
                                |
                                |
                                ------------ (Internet) -----

In order to be able to assign private IPv4 addresses to the TAP devices, account holders must request the vsys_vnet tag to be added to their slice by PlanetLab administrators. The vsys_vnet tag is associated with a private network segment and only addresses from this segment can be used in experiments.

The syntax used to create a TAP device with this helper is similar to that used for the previously described helpers:

PlanetLabFdNetDeviceHelper helper;
helper.SetTapIpAddress (tapIp);
helper.SetTapMask (tapMask);
...
helper.Install (node);

PlanetLab nodes have a Fedora based distribution, so ns-3 can be installed following the instructions for ns-3 Linux installation.

Attributes

The FdNetDevice provides a number of attributes:

  • Address: The MAC address of the device

  • Start: The simulation time at which to spin up the device thread

  • Stop: The simulation time at which to stop the device thread

  • EncapsulationMode: Link-layer encapsulation format

  • RxQueueSize: The buffer size of the read queue on the file descriptor thread (default of 1000 packets)

Start and Stop do not normally need to be specified unless the user wants to limit the time during which this device is active. Address needs to be set to a unique MAC address if the simulation will be interacting with other real devices that use real MAC addresses. Typical code:

device->SetAttribute ("Address", Mac48AddressValue (Mac48Address::Allocate ()));

Output

Ascii and PCAP tracing is provided similar to the other ns-3 NetDevice types, through the helpers, e.g.:

EmuFdNetDeviceHelper emu;
NetDeviceContainer devices = emu.Install (node);
...
emu.EnablePcap ("emu-ping", device, true);

The standard set of Mac-level NetDevice trace sources is provided.

  • MacTx: Trace source triggered when ns-3 provides the device with a new frame to send
  • MacTxDrop: Trace source fired if the write to the file descriptor fails
  • MacPromiscRx: Fired whenever any valid MAC frame is received
  • MacRx: Fired whenever a valid MAC frame destined for this device is received
  • Sniffer: Non-promiscuous packet sniffer
  • PromiscSniffer: Promiscuous packet sniffer (for tcpdump-like traces)

Examples

Several examples are provided:

  • dummy-network.cc: This simple example creates two nodes and interconnects them with a Unix pipe by passing the file descriptors from the socketpair into the FdNetDevice objects of the respective nodes.
  • realtime-dummy-network.cc: Same as dummy-network.cc but uses the real time simulator implementation instead of the default one.
  • fd2fd-onoff.cc: This example is aimed at measuring the throughput of the FdNetDevice in a pure simulation. For this purpose two FdNetDevices, attached to different nodes but in the same simulation, are connected using a socket pair. TCP traffic is sent at a saturating data rate.
  • fd-emu-onoff.cc: This example is aimed at measuring the throughput of the FdNetDevice when using the EmuFdNetDeviceHelper to attach the simulated device to a real device in the host machine. This is achieved by saturating the channel with TCP traffic.
  • fd-emu-ping.cc: This example uses the EmuFdNetDeviceHelper to send ICMP traffic over a real channel.
  • fd-emu-udp-echo.cc: This example uses the EmuFdNetDeviceHelper to send UDP traffic over a real channel.
  • fd-planetlab-ping.cc: This example shows how to set up an experiment to send ICMP traffic from a PlanetLab node to the Internet.
  • fd-tap-ping.cc: This example uses the TapFdNetDeviceHelper to send ICMP traffic over a real channel.

Flow Monitor

Model Description

The source code for the module lives in the directory src/flow-monitor.

The Flow Monitor module goal is to provide a flexible system to measure the performance of network protocols. The module uses probes, installed in network nodes, to track the packets exchanged by the nodes, and it will measure a number of parameters. Packets are divided according to the flow they belong to, where each flow is defined according to the probe’s characteristics (e.g., for IP, a flow is defined as the packets with the same {protocol, source (IP, port), destination (IP, port)} tuple).

The statistics collected for each flow can be exported in XML format. Moreover, the user can access the probes directly to request specific stats about each flow.

Design

The Flow Monitor module is designed in a modular way. It can be extended by subclassing ns3::FlowProbe and ns3::FlowClassifier.

The full module design is described in [FlowMonitor].

Scope and Limitations

At the moment, probes and classifiers are available for IPv4 and IPv6.

Each probe will classify packets at four points:

  • When a packet is sent (SendOutgoing IPv[4,6] traces)
  • When a packet is forwarded (UnicastForward IPv[4,6] traces)
  • When a packet is received (LocalDeliver IPv[4,6] traces)
  • When a packet is dropped (Drop IPv[4,6] traces)

Since the packets are tracked at IP level, any retransmission caused by L4 protocols (e.g., TCP) will be seen by the probe as a new packet.

A Tag will be added to the packet (ns3::Ipv[4,6]FlowProbeTag). The tag carries basic packet data, useful for packet classification.

It must be underlined that only L4 (TCP, UDP) packets are, so far, classified. Moreover, only unicast packets will be classified. These limitations may be removed in the future.

The data collected for each flow are:

  • timeFirstTxPacket: when the first packet in the flow was transmitted;
  • timeLastTxPacket: when the last packet in the flow was transmitted;
  • timeFirstRxPacket: when the first packet in the flow was received by an end node;
  • timeLastRxPacket: when the last packet in the flow was received;
  • delaySum: the sum of all end-to-end delays for all received packets of the flow;
  • jitterSum: the sum of all end-to-end delay jitter (delay variation) values for all received packets of the flow, as defined in RFC 3393;
  • txBytes, txPackets: total number of transmitted bytes / packets for the flow;
  • rxBytes, rxPackets: total number of received bytes / packets for the flow;
  • lostPackets: total number of packets that are assumed to be lost (not reported over 10 seconds);
  • timesForwarded: the number of times a packet has been reportedly forwarded;
  • delayHistogram, jitterHistogram, packetSizeHistogram: histogram versions for the delay, jitter, and packet sizes, respectively;
  • packetsDropped, bytesDropped: the number of lost packets and bytes, divided according to the loss reason code (defined in the probe).

It is worth pointing out that the probes measure the packet bytes including IP headers. The L2 headers are not included in the measure.

These stats will be written in XML form upon request (see the Usage section).

References

[FlowMonitor]
  G. Carneiro, P. Fortuna, and M. Ricardo. 2009. FlowMonitor: a network monitoring framework for the network simulator 3 (NS-3). In Proceedings of the Fourth International ICST Conference on Performance Evaluation Methodologies and Tools (VALUETOOLS ‘09). http://dx.doi.org/10.4108/ICST.VALUETOOLS2009.7493

Usage

The module usage is extremely simple. The helper will take care of almost everything.

The typical use is:

// Flow monitor
Ptr<FlowMonitor> flowMonitor;
FlowMonitorHelper flowHelper;
flowMonitor = flowHelper.InstallAll();

Simulator::Stop (Seconds(stop_time));
Simulator::Run ();

flowMonitor->SerializeToXmlFile("NameOfFile.xml", true, true);

The second and third parameters of the SerializeToXmlFile () function are used, respectively, to enable/disable the histograms and the per-probe detailed statistics.

Other possible alternatives can be found in the Doxygen documentation.
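
One such alternative is to read the statistics programmatically after Simulator::Run (), using the classifier to map each FlowId back to its five-tuple. A minimal sketch, assuming flowHelper and flowMonitor were created as in the snippet above:

Ptr<Ipv4FlowClassifier> classifier =
  DynamicCast<Ipv4FlowClassifier> (flowHelper.GetClassifier ());
std::map<FlowId, FlowMonitor::FlowStats> stats = flowMonitor->GetFlowStats ();

for (std::map<FlowId, FlowMonitor::FlowStats>::const_iterator it = stats.begin ();
     it != stats.end (); ++it)
  {
    // Recover the {source, destination} addresses for this flow id and print
    // a few of the collected counters.
    Ipv4FlowClassifier::FiveTuple t = classifier->FindFlow (it->first);
    std::cout << "Flow " << it->first
              << " (" << t.sourceAddress << " -> " << t.destinationAddress << ")"
              << " txPackets=" << it->second.txPackets
              << " rxPackets=" << it->second.rxPackets
              << " delaySum=" << it->second.delaySum.GetSeconds () << "s"
              << std::endl;
  }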

Helpers

The helper API follows the usage pattern of normal helpers. Through the helper you can install the monitor in the nodes, set the monitor attributes, and print the statistics.

One important point: the ns3::FlowMonitorHelper must be instantiated only once in the main program.

Attributes

The module provides the following attributes in ns3::FlowMonitor:

  • MaxPerHopDelay (Time, default 10s): The maximum per-hop delay that should be considered;
  • StartTime (Time, default 0s): The time when the monitoring starts;
  • DelayBinWidth (double, default 0.001): The width used in the delay histogram;
  • JitterBinWidth (double, default 0.001): The width used in the jitter histogram;
  • PacketSizeBinWidth (double, default 20.0): The width used in the packetSize histogram;
  • FlowInterruptionsBinWidth (double, default 0.25): The width used in the flowInterruptions histogram;
  • FlowInterruptionsMinTime (double, default 0.5): The minimum inter-arrival time that is considered a flow interruption.

Output

The main model output is an XML formatted report about flow statistics. An example is:

<?xml version="1.0" ?>
<FlowMonitor>
  <FlowStats>
  <Flow flowId="1" timeFirstTxPacket="+0.0ns" timeFirstRxPacket="+20067198.0ns" timeLastTxPacket="+2235764408.0ns" timeLastRxPacket="+2255831606.0ns" delaySum="+138731526300.0ns" jitterSum="+1849692150.0ns" lastDelay="+20067198.0ns" txBytes="2149400" rxBytes="2149400" txPackets="3735" rxPackets="3735" lostPackets="0" timesForwarded="7466">
  </Flow>
  </FlowStats>
  <Ipv4FlowClassifier>
  <Flow flowId="1" sourceAddress="10.1.3.1" destinationAddress="10.1.2.2" protocol="6" sourcePort="49153" destinationPort="50000" />
  </Ipv4FlowClassifier>
  <Ipv6FlowClassifier>
  </Ipv6FlowClassifier>
  <FlowProbes>
  <FlowProbe index="0">
    <FlowStats  flowId="1" packets="3735" bytes="2149400" delayFromFirstProbeSum="+0.0ns" >
    </FlowStats>
  </FlowProbe>
  <FlowProbe index="2">
    <FlowStats  flowId="1" packets="7466" bytes="2224020" delayFromFirstProbeSum="+199415389258.0ns" >
    </FlowStats>
  </FlowProbe>
  <FlowProbe index="4">
    <FlowStats  flowId="1" packets="3735" bytes="2149400" delayFromFirstProbeSum="+138731526300.0ns" >
    </FlowStats>
  </FlowProbe>
  </FlowProbes>
</FlowMonitor>

The output was generated by a TCP flow from 10.1.3.1 to 10.1.2.2.

It is worth noticing that the index 2 probe is reporting more packets and more bytes than the other probes. That’s a perfectly normal behaviour, as packets are fragmented at IP level in that node.

It should also be observed that the receiving node’s probe (index 4) doesn’t count the fragments, as the reassembly is done before the probing point.

Examples

The examples are located in src/flow-monitor/examples.

Moreover, the following examples use the flow-monitor module:

  • examples/matrix-topology/matrix-topology.cc
  • examples/routing/manet-routing-compare.cc
  • examples/routing/simple-global-routing.cc
  • examples/tcp/tcp-variants-comparison.cc
  • examples/wireless/multirate.cc
  • examples/wireless/wifi-hidden-terminal.cc

Troubleshooting

Do not define more than one ns3::FlowMonitorHelper in the simulation.

Validation

The paper in the references contains a full description of the module validation against a test network.

Tests are provided to ensure correct Histogram functionality.

Internet Models (IP, TCP, Routing, UDP, Internet Applications)

Internet Stack

Internet stack aggregation

A bare class Node is not very useful as-is; other objects must be aggregated to it to provide useful node functionality.

The ns-3 source code directory src/internet provides implementation of TCP/IPv4- and IPv6-related components. These include IPv4, ARP, UDP, TCP, IPv6, Neighbor Discovery, and other related protocols.

Internet Nodes are not subclasses of class Node; they are simply Nodes that have had a bunch of IP-related objects aggregated to them. They can be put together by hand, or via a helper function InternetStackHelper::Install () which does the following to all nodes passed in as arguments:

void
InternetStackHelper::Install (Ptr<Node> node) const
{
  if (m_ipv4Enabled)
    {
      /* IPv4 stack */
      if (node->GetObject<Ipv4> () != 0)
        {
          NS_FATAL_ERROR ("InternetStackHelper::Install (): Aggregating "
                          "an InternetStack to a node with an existing Ipv4 object");
          return;
        }

      CreateAndAggregateObjectFromTypeId (node, "ns3::ArpL3Protocol");
      CreateAndAggregateObjectFromTypeId (node, "ns3::Ipv4L3Protocol");
      CreateAndAggregateObjectFromTypeId (node, "ns3::Icmpv4L4Protocol");
      // Set routing
      Ptr<Ipv4> ipv4 = node->GetObject<Ipv4> ();
      Ptr<Ipv4RoutingProtocol> ipv4Routing = m_routing->Create (node);
      ipv4->SetRoutingProtocol (ipv4Routing);
    }

  if (m_ipv6Enabled)
    {
      /* IPv6 stack */
      if (node->GetObject<Ipv6> () != 0)
        {
          NS_FATAL_ERROR ("InternetStackHelper::Install (): Aggregating "
                          "an InternetStack to a node with an existing Ipv6 object");
          return;
        }

      CreateAndAggregateObjectFromTypeId (node, "ns3::Ipv6L3Protocol");
      CreateAndAggregateObjectFromTypeId (node, "ns3::Icmpv6L4Protocol");
      // Set routing
      Ptr<Ipv6> ipv6 = node->GetObject<Ipv6> ();
      Ptr<Ipv6RoutingProtocol> ipv6Routing = m_routingv6->Create (node);
      ipv6->SetRoutingProtocol (ipv6Routing);

      /* register IPv6 extensions and options */
      ipv6->RegisterExtensions ();
      ipv6->RegisterOptions ();
    }

  if (m_ipv4Enabled || m_ipv6Enabled)
    {
      /* UDP and TCP stacks */
      CreateAndAggregateObjectFromTypeId (node, "ns3::UdpL4Protocol");
      node->AggregateObject (m_tcpFactory.Create<Object> ());
      Ptr<PacketSocketFactory> factory = CreateObject<PacketSocketFactory> ();
      node->AggregateObject (factory);
    }
}

Where multiple implementations exist in ns-3 (TCP, IP routing), these objects are added by a factory object (TCP) or by a routing helper (m_routing).

Note that the routing protocol is configured and set outside this function. By default, the following protocols are added:

void InternetStackHelper::Initialize ()
{
  SetTcp ("ns3::TcpL4Protocol");
  Ipv4StaticRoutingHelper staticRouting;
  Ipv4GlobalRoutingHelper globalRouting;
  Ipv4ListRoutingHelper listRouting;
  Ipv6ListRoutingHelper listRoutingv6;
  Ipv6StaticRoutingHelper staticRoutingv6;
  listRouting.Add (staticRouting, 0);
  listRouting.Add (globalRouting, -10);
  listRoutingv6.Add (staticRoutingv6, 0);
  SetRoutingHelper (listRouting);
  SetRoutingHelper (listRoutingv6);
}

By default, IPv4 and IPv6 are enabled.

Internet Node structure

An IP-capable Node (an ns-3 Node augmented by aggregation to have one or more IP stacks) has the following internal structure.

Layer-3 protocols

At the lowest layer, sitting above the NetDevices, are the “layer 3” protocols, including IPv4, IPv6, ARP and so on. The class Ipv4L3Protocol is an implementation class whose public interface is typically class Ipv4, but the Ipv4L3Protocol public API is also used internally at present.

In class Ipv4L3Protocol, one method described below is Receive ():

/**
  * Lower layer calls this method after calling L3Demux::Lookup
  * The ARP subclass needs to know from which NetDevice this
  * packet is coming to:
  *    - implement a per-NetDevice ARP cache
  *    - send back arp replies on the right device
  */
void Receive( Ptr<NetDevice> device, Ptr<const Packet> p, uint16_t protocol,
const Address &from, const Address &to, NetDevice::PacketType packetType);

First, note that the Receive () function has a matching signature to the ReceiveCallback in the class Node. This function pointer is inserted into the Node’s protocol handler when AddInterface () is called. The actual registration is done with a statement such as follows:

RegisterProtocolHandler ( MakeCallback (&Ipv4L3Protocol::Receive, ipv4),
                          Ipv4L3Protocol::PROT_NUMBER, 0);

The Ipv4L3Protocol object is aggregated to the Node; there is only one such Ipv4L3Protocol object. Higher-layer protocols that have a packet to send down to the Ipv4L3Protocol object can call GetObject<Ipv4L3Protocol> () to obtain a pointer, as follows:

Ptr<Ipv4L3Protocol> ipv4 = m_node->GetObject<Ipv4L3Protocol> ();
if (ipv4 != 0)
  {
    ipv4->Send (packet, saddr, daddr, PROT_NUMBER);
  }

This class nicely demonstrates two techniques we exploit in ns-3 to bind objects together: callbacks, and object aggregation.

Once IPv4 routing has determined that a packet is for the local node, it forwards it up the stack. This is done with the following function:

void
Ipv4L3Protocol::LocalDeliver (Ptr<const Packet> packet, Ipv4Header const&ip, uint32_t iif)

The first step is to find the right Ipv4L4Protocol object, based on IP protocol number. For instance, TCP is registered in the demux as protocol number 6. Finally, the Receive() function on the Ipv4L4Protocol (such as TcpL4Protocol::Receive) is called.

We have not yet introduced the class Ipv4Interface. Basically, each NetDevice is paired with an IPv4 representation of such device. In Linux, this class Ipv4Interface roughly corresponds to the struct in_device; the main purpose is to provide address-family specific information (addresses) about an interface.

All the classes have appropriate traces in order to track sent, received and lost packets. The user is encouraged to use them to find out if (and where) a packet is dropped. A common mistake is to forget the effects of local queues when sending packets, e.g., the ARP queue. This can be particularly puzzling when sending jumbo packets or packet bursts using UDP. The ARP cache pending queue is limited (3 datagrams) and IP packets might be fragmented, easily overfilling the ARP cache queue size. In those cases it is useful to increase the ARP cache pending size to a proper value, e.g.:

Config::SetDefault ("ns3::ArpCache::PendingQueueSize", UintegerValue (MAX_BURST_SIZE/L2MTU*3));

The IPv6 implementation follows a similar architecture. Dual-stacked nodes (one with support for both IPv4 and IPv6) will allow an IPv6 socket to receive IPv4 connections as a standard dual-stacked system does. A socket bound and listening to an IPv6 endpoint can receive an IPv4 connection and will return the remote address as an IPv4-mapped address. Support for the IPV6_V6ONLY socket option does not currently exist.

Layer-4 protocols and sockets

We next describe how the transport protocols, sockets, and applications tie together. In summary, each transport protocol implementation is a socket factory. An application that needs a new socket queries its node for the corresponding socket factory and asks the factory to create the socket.

For instance, to create a UDP socket, an application would use a code snippet such as the following:

Ptr<Udp> udpSocketFactory = GetNode ()->GetObject<Udp> ();
Ptr<Socket> m_socket = udpSocketFactory->CreateSocket ();
m_socket->Bind (m_local_address);
...

The above will query the node to get a pointer to its UDP socket factory, will create one such socket, and will use the socket with an API similar to the C-based sockets API, such as Connect () and Send (). The address passed to the Bind (), Connect (), or Send () functions may be an Ipv4Address, an Ipv6Address, or an Address. If an Address is passed in and contains anything other than an Ipv4Address or an Ipv6Address, these functions will return an error. The Bind (void) and Bind6 (void) functions bind to "0.0.0.0" and "::", respectively.
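
As a concrete sketch of the same pattern using the generic Socket::CreateSocket helper (the node pointer, destination address, port, and packet size are illustrative):

// Create a UDP socket on 'node' and send a dummy packet to a remote endpoint.
Ptr<Socket> socket = Socket::CreateSocket (node, UdpSocketFactory::GetTypeId ());
socket->Bind ();                                                // bind to 0.0.0.0 and an ephemeral port
socket->Connect (InetSocketAddress (Ipv4Address ("10.1.1.2"), 9));
socket->Send (Create<Packet> (1024));                           // send 1024 dummy bytes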

The socket can also be bound to a specific NetDevice through the BindToNetDevice (Ptr<NetDevice> netdevice) function. BindToNetDevice (Ptr<NetDevice> netdevice) will bind the socket to "0.0.0.0" and "::" (equivalent to calling Bind () and Bind6 ()), unless the socket has already been bound to a specific address. Summarizing, the correct sequence is:

Ptr<Udp> udpSocketFactory = GetNode ()->GetObject<Udp> ();
Ptr<Socket> m_socket = udpSocketFactory->CreateSocket ();
m_socket->BindToNetDevice (n_netDevice);
...

or:

Ptr<Udp> udpSocketFactory = GetNode ()->GetObject<Udp> ();
Ptr<Socket> m_socket = udpSocketFactory->CreateSocket ();
m_socket->Bind (m_local_address);
m_socket->BindToNetDevice (n_netDevice);
...

The following raises an error:

Ptr<Udp> udpSocketFactory = GetNode ()->GetObject<Udp> ();
Ptr<Socket> m_socket = udpSocketFactory->CreateSocket ();
m_socket->BindToNetDevice (n_netDevice);
m_socket->Bind (m_local_address);
...

See the chapter on ns-3 sockets for more information.

We have described so far a socket factory (e.g. class Udp) and a socket, which may be specialized (e.g., class UdpSocket). There are a few more key objects that relate to the specialized task of demultiplexing a packet to one or more receiving sockets. The key object in this task is class Ipv4EndPointDemux. This demultiplexer stores objects of class Ipv4EndPoint. This class holds the addressing/port tuple (local port, local address, destination port, destination address) associated with the socket, and a receive callback. This receive callback has a receive function registered by the socket. The Lookup () function of Ipv4EndPointDemux returns a list of Ipv4EndPoint objects (there may be a list since more than one socket may match the packet). The layer-4 protocol copies the packet to each Ipv4EndPoint and calls its ForwardUp () method, which then calls the Receive () function registered by the socket.

An issue that arises when working with the sockets API on real systems is the need to manage the reading from a socket, using some type of I/O (e.g., blocking, non-blocking, asynchronous, ...). ns-3 implements an asynchronous model for socket I/O; the application sets a callback to be notified of received data ready to be read, and the callback is invoked by the transport protocol when data is available. This callback is specified as follows:

void Socket::SetRecvCallback (Callback<void, Ptr<Socket>,
                              Ptr<Packet>,
                              const Address&> receivedData);

The data being received is conveyed in the Packet data buffer. An example usage is in class PacketSink:

m_socket->SetRecvCallback (MakeCallback(&PacketSink::HandleRead, this));
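
A minimal handler sketch matching the callback signature shown above (the names are illustrative; this is not the actual PacketSink implementation) might look like:

// Hypothetical application class; logs the size and source of each packet
// delivered by the transport protocol.
void
MyApp::HandleRead (Ptr<Socket> socket, Ptr<Packet> packet, const Address &from)
{
  NS_LOG_UNCOND ("Received " << packet->GetSize () << " bytes from "
                 << InetSocketAddress::ConvertFrom (from).GetIpv4 ());
}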

To summarize, internally, the UDP implementation is organized as follows:

  • a UdpImpl class that implements the UDP socket factory functionality
  • a UdpL4Protocol class that implements the protocol logic that is socket-independent
  • a UdpSocketImpl class that implements socket-specific aspects of UDP
  • a class called Ipv4EndPoint that stores the addressing tuple (local port, local address, destination port, destination address) associated with the socket, and a receive callback for the socket.

IP-capable node interfaces

Many of the implementation details, or internal objects themselves, of IP-capable Node objects are not exposed at the simulator public API. This allows for different implementations; for instance, replacing the native ns-3 models with ported TCP/IP stack code.

The C++ public APIs of all of these objects are found in the src/network directory, including principally:

  • address.h
  • socket.h
  • node.h
  • packet.h

These are typically base class objects that implement the default values used in the implementation, implement access methods to get/set state variables, host attributes, and implement publicly-available methods exposed to clients such as CreateSocket.

Example path of a packet

These two figures show an example stack trace of how packets flow through the Internet Node objects.

_images/internet-node-send.png

Send path of a packet.

_images/internet-node-recv.png

Receive path of a packet.

IPv4

Tracing in the IPv4 Stack

The internet stack provides a number of trace sources in its various protocol implementations. These trace sources can be hooked using your own custom trace code, or you can use our helper functions in some cases to arrange for tracing to be enabled.

Tracing in ARP

ARP provides two trace hooks, one in the cache, and one in the layer three protocol. The trace accessor in the cache is given the name “Drop.” When a packet is transmitted over an interface that requires ARP, it is first queued for transmission in the ARP cache until the required MAC address is resolved. There are a number of retries that may be done trying to get the address, and if the maximum retry count is exceeded the packet in question is dropped by ARP. The single trace hook in the ARP cache is fired in the following circumstance:

  • If an outbound packet is placed in the ARP cache pending address resolution and no resolution can be made within the maximum retry count, the outbound packet is dropped and this trace is fired;

A second trace hook lives in the ARP L3 protocol (also named “Drop”) and may be called for a number of reasons.

  • If an ARP reply is received for an entry that is not waiting for a reply, the ARP reply packet is dropped and this trace is fired;
  • If an ARP reply is received for a non-existent entry, the ARP reply packet is dropped and this trace is fired;
  • If an ARP cache entry is in the DEAD state (has timed out) and an ARP reply packet is received, the reply packet is dropped and this trace is fired.
  • Each ARP cache entry has a queue of pending packets. If the size of the queue is exceeded, the outbound packet is dropped and this trace is fired.

Tracing in IPv4

The IPv4 layer three protocol provides three trace hooks. These are the “Tx” (ns3::Ipv4L3Protocol::m_txTrace), “Rx” (ns3::Ipv4L3Protocol::m_rxTrace) and “Drop” (ns3::Ipv4L3Protocol::m_dropTrace) trace sources.

The “Tx” trace is fired in a number of situations, all of which indicate that a given packet is about to be sent down to a given ns3::Ipv4Interface.

  • In the case of a packet destined for the broadcast address, the Ipv4InterfaceList is iterated and for every interface that is up and can fragment the packet or has a large enough MTU to transmit the packet, the trace is hit. See ns3::Ipv4L3Protocol::Send.
  • In the case of a packet that needs routing, the “Tx” trace may be fired just before a packet is sent to the interface appropriate to the default gateway. See ns3::Ipv4L3Protocol::SendRealOut.
  • Also in the case of a packet that needs routing, the “Tx” trace may be fired just before a packet is sent to the outgoing interface appropriate to the discovered route. See ns3::Ipv4L3Protocol::SendRealOut.

The “Rx” trace is fired when a packet is passed from the device up to the ns3::Ipv4L3Protocol::Receive function.

  • In the receive function, the Ipv4InterfaceList is iterated, and if the Ipv4Interface corresponding to the receiving device is found to be in the UP state, the trace is fired.

The “Drop” trace is fired in any case where the packet is dropped (in both the transmit and receive paths).

  • In the ns3::Ipv4Interface::Receive function, the packet is dropped and the drop trace is hit if the interface corresponding to the receiving device is in the DOWN state.
  • Also in the ns3::Ipv4Interface::Receive function, the packet is dropped and the drop trace is hit if the checksum is found to be bad.
  • In ns3::Ipv4L3Protocol::Send, an outgoing packet bound for the broadcast address is dropped and the “Drop” trace is fired if the “don’t fragment” bit is set and fragmentation is available and required.
  • Also in ns3::Ipv4L3Protocol::Send, an outgoing packet destined for the broadcast address is dropped and the “Drop” trace is hit if fragmentation is not available and is required (MTU < packet size).
  • In the case of a broadcast address, an outgoing packet is cloned for each outgoing interface. If any of the interfaces is in the DOWN state, the “Drop” trace event fires with a reference to the copied packet.
  • In the case of a packet requiring a route, an outgoing packet is dropped and the “Drop” trace event fires if no route to the remote host is found.
  • In ns3::Ipv4L3Protocol::SendRealOut, an outgoing packet being routed is dropped and the “Drop” trace is fired if the “don’t fragment” bit is set and fragmentation is available and required.
  • Also in ns3::Ipv4L3Protocol::SendRealOut, an outgoing packet being routed is dropped and the “Drop” trace is hit if fragmentation is not available and is required (MTU < packet size).
  • An outgoing packet being routed is dropped and the “Drop” trace event fires if the required Ipv4Interface is in the DOWN state.
  • If a packet is being forwarded, and the TTL is exceeded (see ns3::Ipv4L3Protocol::DoForward), the packet is dropped and the “Drop” trace event is fired.

IPv6

This chapter describes the ns-3 IPv6 model capabilities and limitations along with its usage and examples.

IPv6 model description

The IPv6 model is loosely patterned after the Linux implementation; the implementation is not complete as some features of IPv6 are not of much interest to simulation studies, and some features of IPv6 are simply not modeled yet in ns-3.

The base class Ipv6 defines a generic API, while the class Ipv6L3Protocol is the actual class implementing the protocol. The actual classes used by the IPv6 stack are located mainly in the directory src/internet.

The implementation of IPv6 is contained in the following files:

src/internet/model/icmpv6-header.{cc,h}
src/internet/model/icmpv6-l4-protocol.{cc,h}
src/internet/model/ipv6.{cc,h}
src/internet/model/ipv6-address-generator.{cc,h}
src/internet/model/ipv6-autoconfigured-prefix.{cc,h}
src/internet/model/ipv6-end-point.{cc,h}
src/internet/model/ipv6-end-point-demux.{cc,h}
src/internet/model/ipv6-extension.{cc,h}
src/internet/model/ipv6-extension-demux.{cc,h}
src/internet/model/ipv6-extension-header.{cc,h}
src/internet/model/ipv6-header.{cc,h}
src/internet/model/ipv6-interface.{cc,h}
src/internet/model/ipv6-interface-address.{cc,h}
src/internet/model/ipv6-l3-protocol.{cc,h}
src/internet/model/ipv6-list-routing.{cc,h}
src/internet/model/ipv6-option.{cc,h}
src/internet/model/ipv6-option-demux.{cc,h}
src/internet/model/ipv6-option-header.{cc,h}
src/internet/model/ipv6-packet-info-tag.{cc,h}
src/internet/model/ipv6-pmtu-cache.{cc,h}
src/internet/model/ipv6-raw-socket-factory.{cc,h}
src/internet/model/ipv6-raw-socket-factory-impl.{cc,h}
src/internet/model/ipv6-raw-socket-impl.{cc,h}
src/internet/model/ipv6-route.{cc,h}
src/internet/model/ipv6-routing-protocol.{cc,h}
src/internet/model/ipv6-routing-table-entry.{cc,h}
src/internet/model/ipv6-static-routing.{cc,h}
src/internet/model/ndisc-cache.{cc,h}
src/network/utils/inet6-socket-address.{cc,h}
src/network/utils/ipv6-address.{cc,h}

Also some helpers are involved with IPv6:

src/internet/helper/internet-stack-helper.{cc,h}
src/internet/helper/ipv6-address-helper.{cc,h}
src/internet/helper/ipv6-interface-container.{cc,h}
src/internet/helper/ipv6-list-routing-helper.{cc,h}
src/internet/helper/ipv6-routing-helper.{cc,h}
src/internet/helper/ipv6-static-routing-helper.{cc,h}

The model files can be roughly divided into:

  • protocol models (e.g., ipv6, ipv6-l3-protocol, icmpv6-l4-protocol, etc.)
  • routing models (i.e., anything with ‘routing’ in its name)
  • sockets and interfaces (e.g., ipv6-raw-socket, ipv6-interface, ipv6-end-point, etc.)
  • address-related things
  • headers, option headers, extension headers, etc.
  • accessory classes (e.g., ndisc-cache)

Usage

The following description is based on using the typical helpers found in the example code.

IPv6 does not need to be activated in a node; it is automatically added to the available protocols once the Internet Stack is installed.

In order to not install IPv6 along with IPv4, it is possible to use ns3::InternetStackHelper method SetIpv6StackInstall (bool enable) before installing the InternetStack in the nodes.

Note that to have an IPv6-only network (i.e., to not install the IPv4 stack in a node) one should use ns3::InternetStackHelper method SetIpv4StackInstall (bool enable) before the stack installation.

As an example, in the following code node 0 will have both IPv4 and IPv6, node 1 only IPv6 and node 2 only IPv4:

NodeContainer n;
n.Create (3);

InternetStackHelper internet;
InternetStackHelper internetV4only;
InternetStackHelper internetV6only;

internetV4only.SetIpv6StackInstall (false);
internetV6only.SetIpv4StackInstall (false);

internet.Install (n.Get (0));
internetV6only.Install (n.Get (1));
internetV4only.Install (n.Get (2));

IPv6 addresses assignment

In order to use IPv6 on a network, the first thing to do is assigning IPv6 addresses.

Any IPv6-enabled ns-3 node will have at least one NetDevice: the ns3::LoopbackNetDevice. The loopback device address is ::1. All the other NetDevices will have one or more IPv6 addresses:

  • One link-local address: fe80::interface ID, where interface ID is derived from the NetDevice MAC address.
  • Zero or more global addresses, e.g., 2001:db8::1.

Typically the first address on an interface will be the link-local one, with the global address(es) being the following ones.

IPv6 global addresses might be:

  • manually assigned
  • auto-generated

ns-3 can use both methods, and it’s quite important to understand the implications of both.

Manually assigned IPv6 addresses

This is probably the easiest and most used method. As an example:

Ptr<Node> n0 = CreateObject<Node> ();
Ptr<Node> n1 = CreateObject<Node> ();
NodeContainer net (n0, n1);
CsmaHelper csma;
NetDeviceContainer ndc = csma.Install (net);

NS_LOG_INFO ("Assign IPv6 Addresses.");
Ipv6AddressHelper ipv6;
ipv6.SetBase (Ipv6Address ("2001:db8::"), Ipv6Prefix (64));
Ipv6InterfaceContainer ic = ipv6.Assign (ndc);

This method will add two global IPv6 addresses to the nodes. Note that, as usual for IPv6, all the nodes will also have a link-local address. Typically the first address on an interface will be the link-local one, with the global address(es) being the following ones.

Note that the global addresses will be derived from the MAC address. As a consequence, expect to have addresses similar to 2001:db8::200:ff:fe00:1.

It is possible to repeat the above to assign more than one global address to a node. However, due to the Ipv6AddressHelper singleton nature, one should first assign all the addresses of a network, then change the network base (SetBase), then do a new assignment.
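
A minimal sketch of this pattern, assuming two hypothetical NetDeviceContainers ndc1 and ndc2 for two separate network segments:

// Assign all devices of the first network before switching to a new base.
Ipv6AddressHelper ipv6;
ipv6.SetBase (Ipv6Address ("2001:db8:1::"), Ipv6Prefix (64));
Ipv6InterfaceContainer ic1 = ipv6.Assign (ndc1);

ipv6.SetBase (Ipv6Address ("2001:db8:2::"), Ipv6Prefix (64));
Ipv6InterfaceContainer ic2 = ipv6.Assign (ndc2);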

Alternatively, it is possible to assign a specific address to a node:

Ptr<Node> n0 = CreateObject<Node> ();
NodeContainer net (n0);
CsmaHelper csma;
NetDeviceContainer ndc = csma.Install (net);

NS_LOG_INFO ("Specifically Assign an IPv6 Address.");
Ipv6AddressHelper ipv6;
Ptr<NetDevice> device = ndc.Get (0);
Ptr<Node> node = device->GetNode ();
Ptr<Ipv6> ipv6proto = node->GetObject<Ipv6> ();
int32_t ifIndex = 0;
ifIndex = ipv6proto->GetInterfaceForDevice (device);
Ipv6InterfaceAddress ipv6Addr = Ipv6InterfaceAddress (Ipv6Address ("2001:db8:f00d:cafe::42"), Ipv6Prefix (64));
ipv6proto->AddAddress (ifIndex, ipv6Addr);

Auto-generated IPv6 addresses

This is accomplished by relying on the RADVD protocol, implemented by the class Radvd. A helper class is available to ease the most common tasks, e.g., setting up a prefix on an interface, whether it is announced periodically, and whether the router is the default router for that interface.

Fine-grained configuration is possible through the RadvdInterface class, which allows setting up every parameter of the Router Advertisement announced on a given interface.

It is worth mentioning that the configurations must be set up before installing the application in the node.

Upon using this method, the nodes will acquire dynamically (i.e., during the simulation) one (or more) global address(es) according to the RADVD configuration. These addresses will be based on the RADVD announced prefix and the node’s EUI-64.

Examples of RADVD use are shown in examples/ipv6/radvd.cc and examples/ipv6/radvd-two-prefix.cc.
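
A minimal helper-based sketch (the router node, interface index, prefix, and start/stop times are illustrative; consult the Doxygen documentation for the exact RadvdHelper API):

uint32_t routerInterface = 1;   // hypothetical IPv6 interface index on the router
RadvdHelper radvdHelper;
radvdHelper.AddAnnouncedPrefix (routerInterface, Ipv6Address ("2001:db8::"), 64);
ApplicationContainer radvdApps = radvdHelper.Install (router);   // 'router' is a hypothetical Ptr<Node>
radvdApps.Start (Seconds (1.0));
radvdApps.Stop (Seconds (10.0));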

Randomly generated IPv6 addresses

While real IPv6 nodes may use randomly generated addresses to protect privacy, ns-3 does NOT have this capability. So far, this feature has not been considered interesting for simulation.

Duplicate Address Detection (DAD)

Nodes will perform DAD (it can be disabled using an Icmpv6L4Protocol attribute). Upon receiving a DAD, however, nodes will not react to it; DAD reaction is so far incomplete. The main reason is the missing randomly generated address capability. Moreover, since ns-3 nodes will usually be well-behaved, there shouldn’t be any duplicate addresses. This might be changed in the future, so as to avoid issues with real-world integrated simulations.

Host and Router behaviour in IPv6 and ns-3

In IPv6 there is a clear distinction between routers and hosts. As one might expect, routers can forward packets from an interface to another interface, while hosts drop packets not directed to them.

Unfortunately, forwarding is not the only thing affected by this distinction, and forwarding itself might be fine-tuned, e.g., to forward packets incoming from an interface and drop packets from another interface.

In ns-3 a node is configured to be a host by default. There are two main ways to change this behaviour:

  • Using ns3::Ipv6InterfaceContainer SetForwarding(uint32_t i, bool router) where i is the interface index in the container.
  • Changing the ns3::Ipv6 attribute IpForward.

Either one can be used during the simulation.
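
As a sketch of the two options above (the iic container, node pointer, and interface index are illustrative):

// Option 1: via the Ipv6InterfaceContainer returned by the address helper.
iic.SetForwarding (0, true);

// Option 2: via the IpForward attribute on the node's aggregated Ipv6 object.
Ptr<Ipv6> ipv6 = node->GetObject<Ipv6> ();
ipv6->SetAttribute ("IpForward", BooleanValue (true));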

A fine-grained setup can be accomplished by using ns3::Ipv6Interface SetForwarding (bool forward), which allows changing the behaviour on a per-interface basis.

Note that the node-wide configuration only serves as a convenient method to enable/disable the ns3::Ipv6Interface specific setting. An Ipv6Interface added to a node with forwarding enabled will be set to be forwarding as well. This is really important when a node has interfaces added during the simulation.

According to the ns3::Ipv6Interface forwarding state, the following happens:

  • Forwarding OFF
      • The node will NOT reply to Router Solicitation
      • The node will react to Router Advertisement
      • The node will periodically send Router Solicitation
      • Routing protocols MUST DROP packets not directed to the node
  • Forwarding ON
      • The node will reply to Router Solicitation
      • The node will NOT react to Router Advertisement
      • The node will NOT send Router Solicitation
      • Routing protocols MUST forward packets

The behaviour matches ip-sysctl.txt (http://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt), with the difference that it is not possible to override the behaviour using esoteric settings (e.g., forwarding but accepting Router Advertisements, accept_ra=2, or forwarding but sending Router Solicitations, forwarding=2).

Consider carefully the implications of packet forwarding. As an example, a node will NOT send ICMPv6 PACKET_TOO_BIG messages from an interface with forwarding off. This is completely normal, as the Routing protocol will drop the packet before attempting to forward it.

Helpers

Typically the helpers used in IPv6 setup are:

  • ns3::InternetStackHelper
  • ns3::Ipv6AddressHelper
  • ns3::Ipv6InterfaceContainer

The use is almost identical to the corresponding IPv4 case, e.g.:

NodeContainer n;
n.Create (4);

NS_LOG_INFO ("Create IPv6 Internet Stack");
InternetStackHelper internetv6;
internetv6.Install (n);

NS_LOG_INFO ("Create channels.");
CsmaHelper csma;
NetDeviceContainer d = csma.Install (n);

NS_LOG_INFO ("Create networks and assign IPv6 Addresses.");
Ipv6AddressHelper ipv6;
ipv6.SetBase (Ipv6Address ("2001:db8::"), Ipv6Prefix (64));
Ipv6InterfaceContainer iic = ipv6.Assign (d);

Additionally, a common task is to enable forwarding on one of the nodes and to setup a default route toward it in the other nodes, e.g.:

iic.SetForwarding (0, true);
iic.SetDefaultRouteInAllNodes (0);

This will enable forwarding on the node 0 and will setup a default route in ns3::Ipv6StaticRouting on all the other nodes. Note that this requires that Ipv6StaticRouting is present in the nodes.

The IPv6 routing helpers enable the user to perform specific tasks on the particular routing algorithm and to print the routing tables.

Attributes

Many classes in the ns-3 IPv6 implementation contain attributes. The most useful ones are:

  • ns3::Ipv6
      • IpForward, boolean, default false. Globally enable or disable IP forwarding for all current and future IPv6 devices.
      • MtuDiscover, boolean, default true. If disabled, every interface will have its MTU set to 1280 bytes.
  • ns3::Ipv6L3Protocol
      • DefaultTtl, uint8_t, default 64. The TTL value set by default on all outgoing packets generated on this node.
      • SendIcmpv6Redirect, boolean, default true. Send the ICMPv6 Redirect when appropriate.
  • ns3::Icmpv6L4Protocol
      • DAD, boolean, default true. Always do DAD (Duplicate Address Detection) check.
  • ns3::NdiscCache
      • UnresolvedQueueSize, uint32_t, default 3. Size of the queue for packets pending an NA reply.

Output

The IPv6 stack provides some useful trace sources:

  • ns3::Ipv6L3Protocol
      • Tx, Send IPv6 packet to outgoing interface.
      • Rx, Receive IPv6 packet from incoming interface.
      • Drop, Drop IPv6 packet.
  • ns3::Ipv6Extension
      • Drop, Drop IPv6 packet.

The latter trace source is fired when a packet contains an unknown option blocking its processing.

Mind that ns3::NdiscCache could drop packets as well, and they are not logged in a trace source (yet). This might generate some confusion in the sent/received packets counters.

Advanced Usage

IPv6 maximum transmission unit (MTU) and fragmentation

ns-3 NetDevices define the MTU according to the L2 simulated Device. IPv6 requires that the minimum MTU is 1280 bytes, so all NetDevices are required to support at least this MTU. This is the link-MTU.

In order to support different MTUs in a source-destination path, the ns-3 IPv6 model can perform fragmentation. This can be triggered either by receiving a packet bigger than the link-MTU from the L4 protocols (UDP, TCP, etc.), or by receiving an ICMPv6 PACKET_TOO_BIG message. The model mimics RFC 1981, with the following notable exceptions:

  • L4 protocols are not informed of the Path MTU change
  • TCP can not change its Segment Size according to the Path-MTU.

Both limitations are going to be removed in due time.

The Path-MTU cache is currently based on the source-destination IPv6 addresses. Further classifications (e.g., flow label) are possible but not yet implemented.

The Path-MTU default validity time is 10 minutes. After the cache entry expiration, the Path-MTU information is removed and the next packet will (eventually) trigger a new ICMPv6 PACKET_TOO_BIG message. Note that 1) this is consistent with the RFC specification and 2) L4 protocols are responsible for retransmitting the packets.

Examples

The examples for IPv6 are in the directory examples/ipv6. These examples focus on the most interesting IPv6 peculiarities, such as fragmentation, redirect and so on.

Moreover, most TCP and UDP examples located in examples/udp, examples/tcp, etc. have a command-line option to use IPv6 instead of IPv4.

Troubleshooting

There are just a few pitfalls to avoid while using ns-3 IPv6.

Routing loops

Since the only routing scheme available so far for IPv6 is ns3::Ipv6StaticRouting, default routes have to be set up manually. When there are two or more routers in a network (e.g., node A and node B), avoid using the helper function SetDefaultRouteInAllNodes for more than one router.

The consequence would be to install a default route to B in A and a default route pointing to A in B, generating a loop.

Global address leakage

Remember that addresses in IPv6 are global by definition. When using IPv6 with an ns-3 emulation capability, avoid at all costs address leakage toward the global Internet. It is advisable to set up an external firewall to prevent leakage.

2001:DB8::/32 addresses

The IPv6 standard (RFC 3849) reserves the 2001:DB8::/32 address block for documentation. This manual follows that convention. The addresses in this block are, however, only usable in documentation, and routers should discard them.

Validation

The IPv6 protocols have not yet been extensively validated against real implementations. The tests performed so far mainly involve checking the .pcap trace files with Wireshark, and the results are positive.

Routing overview

ns-3 is intended to support traditional routing approaches and protocols, support ports of open source routing implementations, and facilitate research into unorthodox routing techniques. The overall routing architecture is described below in Routing architecture. Users who wish to just read about how to configure global routing for wired topologies can read Global centralized routing. Unicast routing protocols are described in Unicast routing. Multicast routing is documented in Multicast routing.

Routing architecture

_images/routing.png

Overview of routing

Overview of routing shows the overall routing architecture for Ipv4. The key objects are Ipv4L3Protocol, Ipv4RoutingProtocol(s) (a class to which all routing/forwarding has been delegated from Ipv4L3Protocol), and Ipv4Route(s).

Ipv4L3Protocol must have at least one Ipv4RoutingProtocol added to it at simulation setup time. This is done explicitly by calling Ipv4::SetRoutingProtocol ().
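
For instance, a minimal sketch of installing a routing protocol by hand (outside of the helpers) could look like the following, assuming node is a Ptr<Node> on which an Ipv4 instance has already been aggregated:

Ptr<Ipv4> ipv4 = node->GetObject<Ipv4> ();
Ptr<Ipv4StaticRouting> ipv4Routing = CreateObject<Ipv4StaticRouting> ();
ipv4->SetRoutingProtocol (ipv4Routing);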

The abstract base class Ipv4RoutingProtocol declares a minimal interface, consisting of two methods: RouteOutput () and RouteInput (). For packets traveling outbound from a host, the transport protocol will query Ipv4 for the Ipv4RoutingProtocol object interface, and will request a route via Ipv4RoutingProtocol::RouteOutput (). A Ptr to an Ipv4Route object is returned. This is analogous to a dst_cache entry in Linux. The Ipv4Route is carried down to the Ipv4L3Protocol to avoid a second lookup there. However, some cases (e.g. Ipv4 raw sockets) will require a call to RouteOutput () directly from Ipv4L3Protocol.

For packets received inbound for forwarding or delivery, the following steps occur. Ipv4L3Protocol::Receive() calls Ipv4RoutingProtocol::RouteInput(). This passes the packet ownership to the Ipv4RoutingProtocol object. There are four callbacks associated with this call:

  • LocalDeliver
  • UnicastForward
  • MulticastForward
  • Error

The Ipv4RoutingProtocol must eventually call one of these callbacks for each packet that it takes responsibility for. This is basically how the input routing process works in Linux.

_images/routing-specialization.png

Ipv4Routing specialization.

This overall architecture is designed to support different routing approaches, including (in the future) a Linux-like policy-based routing implementation, proactive and on-demand routing protocols, and simple routing protocols for when the simulation user does not really care about routing.

Ipv4Routing specialization. illustrates how multiple routing protocols derive from this base class. A class Ipv4ListRouting (implementation class Ipv4ListRoutingImpl) provides the existing list routing approach in ns-3. Its API is the same as base class Ipv4Routing except for the ability to add multiple prioritized routing protocols (Ipv4ListRouting::AddRoutingProtocol(), Ipv4ListRouting::GetRoutingProtocol()).

The details of these routing protocols are described below in Unicast routing. For now, we will first start with a basic unicast routing capability that is intended to globally build routing tables at simulation time t=0 for simulation users who do not care about dynamic routing.

Global centralized routing

Global centralized routing is sometimes called “God” routing; it is a special implementation that walks the simulation topology and runs a shortest path algorithm, and populates each node’s routing tables. No actual protocol overhead (on the simulated links) is incurred with this approach. It does have a few constraints:

  • Wired only: It is not intended for use in wireless networks.
  • Unicast only: It does not do multicast.
  • Scalability: Some users of this on large topologies (e.g. 1000 nodes) have noticed that the current implementation is not very scalable. The global centralized routing will be modified in the future to reduce computation and improve runtime performance.

Presently, global centralized IPv4 unicast routing over both point-to-point and shared (CSMA) links is supported.

By default, when using the ns-3 helper API and the default InternetStackHelper, global routing capability will be added to the node, and global routing will be inserted as a routing protocol with lower priority than the static routes (i.e., users can insert routes via Ipv4StaticRouting API and they will take precedence over routes found by global routing).

Global Unicast Routing API

The public API is very minimal. User scripts include the following:

#include "ns3/internet-module.h"

If the default InternetStackHelper is used, then an instance of global routing will be aggregated to each node. After IP addresses are configured, the following function call will cause all of the nodes that have an Ipv4 interface to receive forwarding tables entered automatically by the GlobalRouteManager:

Ipv4GlobalRoutingHelper::PopulateRoutingTables ();

Note: A reminder that the wifi NetDevice will work but does not take any wireless effects into account. For wireless, we recommend OLSR dynamic routing described below.

It is possible to call this function again in the midst of a simulation using the following additional public function:

Ipv4GlobalRoutingHelper::RecomputeRoutingTables ();

which flushes the old tables, queries the nodes for new interface information, and rebuilds the routes.

For instance, this scheduling call will cause the tables to be rebuilt at time 5 seconds:

Simulator::Schedule (Seconds (5),
                     &Ipv4GlobalRoutingHelper::RecomputeRoutingTables);

There are two attributes that govern the behavior. The first is Ipv4GlobalRouting::RandomEcmpRouting. If set to true, packets are randomly routed across equal-cost multipath routes. If set to false (default), only one route is consistently used. The second is Ipv4GlobalRouting::RespondToInterfaceEvents. If set to true, dynamically recompute the global routes upon Interface notification events (up/down, or add/remove address). If set to false (default), routing may break unless the user manually calls RecomputeRoutingTables() after such events. The default is set to false to preserve legacy ns-3 program behavior.
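
As a brief sketch, both attributes can be set through the attribute system before the simulation starts:

// Enable random ECMP and automatic recomputation on interface events
Config::SetDefault ("ns3::Ipv4GlobalRouting::RandomEcmpRouting", BooleanValue (true));
Config::SetDefault ("ns3::Ipv4GlobalRouting::RespondToInterfaceEvents", BooleanValue (true));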

Global Routing Implementation

This section is for those readers who care about how this is implemented. A singleton object (GlobalRouteManager) is responsible for populating the static routes on each node, using the public Ipv4 API of that node. It queries each node in the topology for a “globalRouter” interface. If found, it uses the API of that interface to obtain a “link state advertisement (LSA)” for the router. Link State Advertisements are used in OSPF routing, and we follow their formatting.

It is important to note that all of these computations are done before packets are flowing in the network. In particular, there are no overhead or control packets being exchanged when using this implementation. Instead, this global route manager just walks the list of nodes to build the necessary information and configure each node’s routing table.

The GlobalRouteManager populates a link state database with LSAs gathered from the entire topology. Then, for each router in the topology, the GlobalRouteManager executes the OSPF shortest path first (SPF) computation on the database, and populates the routing tables on each node.

The quagga (http://www.quagga.net) OSPF implementation was used as the basis for the routing computation logic. One benefit of following an existing OSPF SPF implementation is that OSPF already has defined link state advertisements for all common types of network links:

  • point-to-point (serial links)
  • point-to-multipoint (Frame Relay, ad hoc wireless)
  • non-broadcast multiple access (ATM)
  • broadcast (Ethernet)

Therefore, we think that enabling these other link types will be more straightforward now that the underlying OSPF SPF framework is in place.

Presently, we can handle IPv4 point-to-point, numbered links, as well as shared broadcast (CSMA) links. Equal-cost multipath is also supported. Although wireless link types are supported by the implementation, note that due to the nature of this implementation, any channel effects will not be considered and the routing tables will assume that every node on the same shared channel is reachable from every other node (i.e. it will be treated like a broadcast CSMA link).

The GlobalRouteManager first walks the list of nodes and aggregates a GlobalRouter interface to each one as follows:

typedef std::vector < Ptr<Node> >::iterator Iterator;
for (Iterator i = NodeList::Begin (); i != NodeList::End (); i++)
  {
    Ptr<Node> node = *i;
    Ptr<GlobalRouter> globalRouter = CreateObject<GlobalRouter> (node);
    node->AggregateObject (globalRouter);
  }

This interface is later queried and used to generate a Link State Advertisement for each router, and this link state database is fed into the OSPF shortest path computation logic. The Ipv4 API is finally used to populate the routes themselves.

Unicast routing

There are presently nine unicast routing protocols defined for IPv4 and three for IPv6:

  • class Ipv4StaticRouting (covering both unicast and multicast)
  • IPv4 Optimized Link State Routing (OLSR) (a MANET protocol defined in RFC 3626)
  • IPv4 Ad Hoc On Demand Distance Vector (AODV) (a MANET protocol defined in RFC 3561)
  • IPv4 Destination Sequenced Distance Vector (DSDV) (a MANET protocol)
  • IPv4 Dynamic Source Routing (DSR) (a MANET protocol)
  • class Ipv4ListRouting (used to store a prioritized list of routing protocols)
  • class Ipv4GlobalRouting (used to store routes computed by the global route manager, if that is used)
  • class Ipv4NixVectorRouting (a more efficient version of global routing that stores source routes in a packet header field)
  • class Rip - the IPv4 RIPv2 protocol (RFC 2453)
  • class Ipv6ListRouting (used to store a prioritized list of routing protocols)
  • class Ipv6StaticRouting
  • class RipNg - the IPv6 RIPng protocol (RFC 2080)

In the future, this architecture should also allow someone to implement a Linux-like implementation with routing cache, or a Click modular router, but those are out of scope for now.

Ipv[4,6]ListRouting

This section describes the current default ns-3 Ipv[4,6]RoutingProtocol. Typically, multiple routing protocols are supported in user space and coordinate to write a single forwarding table in the kernel. Presently in ns-3, the implementation instead allows for multiple routing protocols to build/keep their own routing state, and the IP implementation will query each one of these routing protocols (in some order determined by the simulation author) until a route is found.

We chose this approach because it may better facilitate the integration of disparate routing approaches that may be difficult to coordinate when writing to a single table, of approaches where more information than the destination IP address (e.g., source routing) is used to determine the next hop, and of on-demand routing approaches where packets must be cached.

Ipv[4,6]ListRouting::AddRoutingProtocol

Classes Ipv4ListRouting and Ipv6ListRouting provide a pure virtual function declaration for the method that allows one to add a routing protocol:

void AddRoutingProtocol (Ptr<Ipv4RoutingProtocol> routingProtocol,
                         int16_t priority);

void AddRoutingProtocol (Ptr<Ipv6RoutingProtocol> routingProtocol,
                         int16_t priority);

These methods are implemented respectively by class Ipv4ListRoutingImpl and by class Ipv6ListRoutingImpl in the internet module.

The priority variable above governs the priority in which the routing protocols are inserted. Notice that it is a signed int. By default in ns-3, the helper classes will instantiate an Ipv[4,6]ListRoutingImpl object, and add to it an Ipv[4,6]StaticRoutingImpl object at priority zero. Internally, a list of Ipv[4,6]RoutingProtocols is stored, and the routing protocols are each consulted in decreasing order of priority to see whether a match is found. Therefore, if you want your Ipv4RoutingProtocol to have priority lower than the static routing, insert it with a priority less than 0; e.g.:

Ptr<MyRoutingProtocol> myRoutingProto = CreateObject<MyRoutingProtocol> ();
listRoutingPtr->AddRoutingProtocol (myRoutingProto, -10);

Upon calls to RouteOutput () or RouteInput (), the list routing object will search the list of routing protocols, in priority order, until a route is found. That routing protocol will invoke the appropriate callback, and no further routing protocols will be searched.
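
The listRoutingPtr used above must point to the list routing object aggregated to the node. A minimal sketch of one way to retrieve it, assuming the default InternetStackHelper installed an Ipv4ListRouting instance, is:

Ptr<Ipv4> ipv4 = node->GetObject<Ipv4> ();
Ptr<Ipv4ListRouting> listRoutingPtr =
  DynamicCast<Ipv4ListRouting> (ipv4->GetRoutingProtocol ());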

RIP and RIPng

The RIPv2 protocol for IPv4 is described in RFC 2453, and it consolidates a number of improvements over the base protocol defined in RFC 1058.

RIPng, its IPv6 counterpart (RFC 2080), is the evolution of the well-known RIPv1 (see RFC 1058 and RFC 1723) routing protocol for IPv4.

The protocols are very simple, and are normally suitable for flat, simple network topologies.

RIPv1, RIPv2, and RIPng have the very same goals and limitations. In particular, RIP considers any route with a metric equal to or greater than 16 as unreachable. As a consequence, the maximum number of hops in the network must not exceed 15 (the number of routers is not limited). Users are encouraged to read RFC 2080 and RFC 1058 to fully understand RIP behaviour and limitations.

Routing convergence

RIP uses a Distance-Vector algorithm, and routes are updated according to the Bellman-Ford algorithm (sometimes known as the Ford-Fulkerson algorithm). The algorithm has a convergence time of O(|V|*|E|) where |V| and |E| are the number of vertices (routers) and edges (links), respectively. It should be stressed that the convergence time is the number of steps in the algorithm, and each step is triggered by a message. Since Triggered Updates (i.e., updates sent when a route changes) have a 1-5 seconds cooldown, the topology can require some time to stabilize.

Users should be aware that, during routing tables construction, the routers might drop packets. Data traffic should be sent only after a time long enough to allow RIP to build the network topology. Usually 80 seconds should be enough to have a suboptimal (but working) routing setup. This includes the time needed to propagate the routes to the most distant router (16 hops) with Triggered Updates.

If the network topology is changed (e.g., a link is broken), the recovery time might be quite high, and it might be even higher than the initial setup time. Moreover, the network topology recovery is affected by the Split Horizoning strategy.

The examples examples/routing/ripng-simple-network.cc and examples/routing/rip-simple-network.cc show both the network setup and network recovery phases.

Split Horizoning

Split Horizon is a strategy to prevent routing instability. Three options are possible:

  • No Split Horizon
  • Split Horizon
  • Poison Reverse

In the first case, routes are advertised on all the router’s interfaces. In the second case, routers will not advertise a route on the interface from which it was learned. Poison Reverse will advertise the route on the interface from which it was learned, but with a metric of 16 (infinity). For a full analysis of the three techniques, see RFC 1058, section 2.2.

The examples are based on the network topology described in the RFC, but they do not show the effect described there.

The reason is the Triggered Updates mechanism, together with the fact that when a router invalidates a route, it will immediately propagate the route unreachability, thus preventing most of the issues described in the RFC.

However, with complex topologies, it is still possible to have route instability phenomena similar to the ones described in the RFC after a link failure. As a consequence, all the considerations about Split Horizon remain valid.

Default routes

The RIP protocol should be installed only on routers. As a consequence, the end nodes will not know which router is the default one.

To overcome this limitation, users should either install the default route manually (e.g., by resorting to Ipv4StaticRouting or Ipv6StaticRouting), or use RADVd (in the case of IPv6). RADVd is available in ns-3 in the Applications module, and its use is strongly suggested.
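
For example, a minimal sketch of manually installing an IPv4 default route on a host (the node pointer host, the next-hop address, and the interface index are placeholders) is:

Ipv4StaticRoutingHelper staticRoutingHelper;
Ptr<Ipv4StaticRouting> hostStaticRouting =
  staticRoutingHelper.GetStaticRouting (host->GetObject<Ipv4> ());
// Send all non-local traffic to the router reachable through interface 1
hostStaticRouting->SetDefaultRoute (Ipv4Address ("10.0.0.1"), 1);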

Protocol parameters and options

The RIP ns-3 implementations allow the user to change all the timers associated with route updates and route lifetimes.

Moreover, users can change the interface metrics on a per-node basis.

The type of Split Horizoning (to avoid routes back-propagation) can be selected on a per-node basis, with the choices being “no split horizon”, “split horizon” and “poison reverse”. See RFC 2080 for further details, and RFC 1058 for a complete discussion on the split horizoning strategies.

Moreover, it is possible to use a non-standard value for the Link Down Value (i.e., the value after which a link is considered down). The default value is 16.
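
As a sketch along the lines of examples/routing/ripng-simple-network.cc (the node pointers a and c and the interface indices are placeholders), interface exclusion, per-interface metrics, and the Split Horizon strategy can be configured as follows:

RipNgHelper ripNgRouting;
// Do not run RIPng on the host-facing interface of router a
ripNgRouting.ExcludeInterface (a, 1);
// Make the link on interface 3 of router c less attractive
ripNgRouting.SetInterfaceMetric (c, 3, 10);
// Select the Split Horizon strategy (assuming the enum defined by ns3::RipNg)
Config::SetDefault ("ns3::RipNg::SplitHorizon", EnumValue (RipNg::POISON_REVERSE));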

Limitations

There is no support for the Next Hop option (RFC 2080, Section 2.1.1). The Next Hop option is useful when RIP is not being run on all of the routers on a network. Support for this option may be considered in the future.

There is no support for CIDR prefix aggregation. As a result, both routing tables and route advertisements may be larger than necessary. Prefix aggregation may be added in the future.

Multicast routing

The following function is used to add a static multicast route to a node:

void
Ipv4StaticRouting::AddMulticastRoute (Ipv4Address origin,
                                      Ipv4Address group,
                                      uint32_t inputInterface,
                                      std::vector<uint32_t> outputInterfaces);

A multicast route must specify an origin IP address, a multicast group and an input network interface index as conditions and provide a vector of output network interface indices over which packets matching the conditions are sent.
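
A brief usage sketch follows (the router pointer, addresses, and interface indices are placeholders); the route forwards packets from the given origin and group, arriving on interface 1, out of interface 2:

Ipv4StaticRoutingHelper staticRoutingHelper;
Ptr<Ipv4StaticRouting> staticRouting =
  staticRoutingHelper.GetStaticRouting (router->GetObject<Ipv4> ());

std::vector<uint32_t> outputInterfaces;
outputInterfaces.push_back (2);
staticRouting->AddMulticastRoute (Ipv4Address ("10.1.1.1"),   // origin
                                  Ipv4Address ("225.1.2.4"),  // group
                                  1,                          // input interface
                                  outputInterfaces);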

Typically there are two main types of multicast routes: routes of the first kind are used during forwarding. All of the conditions must be explicitly provided. The second kind of routes are used to get packets off of a local node. The difference is in the input interface. Routes for forwarding will always have an explicit input interface specified. Routes off of a node will always set the input interface to a wildcard specified by the index Ipv4RoutingProtocol::IF_INDEX_ANY.

For routes off of a local node, wildcards may be used in the origin and multicast group addresses. The wildcard used for Ipv4Addresses is the address returned by Ipv4Address::GetAny () – typically “0.0.0.0”. Usage of a wildcard allows one to specify default behavior to varying degrees.

For example, making the origin address a wildcard, but leaving the multicast group specific allows one (in the case of a node with multiple interfaces) to create different routes using different output interfaces for each multicast group.

If the origin and multicast addresses are made wildcards, you have created essentially a default multicast address that can forward to multiple interfaces. Compare this to the actual default multicast address that is limited to specifying a single output interface for compatibility with existing functionality in other systems.

Another command sets the default multicast route:

void
Ipv4StaticRouting::SetDefaultMulticastRoute (uint32_t outputInterface);

This is the multicast equivalent of the unicast version SetDefaultRoute. We tell the routing system what to do in the case where a specific route to a destination multicast group is not found. The system forwards packets out the specified interface in the hope that “something out there” knows better how to route the packet. This method is only used in initially sending packets off of a host. The default multicast route is not consulted during forwarding – exact routes must be specified using AddMulticastRoute for that case.

Since we’re basically sending packets to some entity we think may know better what to do, we don’t pay attention to “subtleties” like origin address, nor do we worry about forwarding out multiple interfaces. If the default multicast route is set, it is returned as the selected route from LookupStatic irrespective of origin or multicast group if another specific route is not found.

Finally, a number of additional functions are provided to fetch and remove multicast routes:

uint32_t GetNMulticastRoutes (void) const;

Ipv4MulticastRoute *GetMulticastRoute (uint32_t i) const;

Ipv4MulticastRoute *GetDefaultMulticastRoute (void) const;

bool RemoveMulticastRoute (Ipv4Address origin,
                           Ipv4Address group,
                           uint32_t inputInterface);

void RemoveMulticastRoute (uint32_t index);

TCP models in ns-3

This chapter describes the TCP models available in ns-3.

Generic support for TCP

ns-3 was written to support multiple TCP implementations. The implementations inherit from a few common header classes in the src/network directory, so that user code can swap out implementations with minimal changes to the scripts.

There are two important abstract base classes:

  • class TcpSocket: This is defined in src/internet/model/tcp-socket.{cc,h}. This class exists for hosting TcpSocket attributes that can be reused across different implementations. For instance, the attribute InitialCwnd can be used for any of the implementations that derive from class TcpSocket.
  • class TcpSocketFactory: This is used by the layer-4 protocol instance to create TCP sockets of the right type.

There are presently three implementations of TCP available for ns-3.

It should also be mentioned that various ways of combining virtual machines with ns-3 makes available also some additional TCP implementations, but those are out of scope for this chapter.

ns-3 TCP

In brief, the native ns-3 TCP model supports a full bidirectional TCP with connection setup and close logic. Several congestion control algorithms are supported, with NewReno the default, and Westwood, Hybla, and HighSpeed also supported. Multipath-TCP and TCP Selective Acknowledgements (SACK) are not yet supported in the ns-3 releases.

Model history

Until the ns-3.10 release, ns-3 contained a port of the TCP model from GTNetS. This implementation was substantially rewritten by Adrian Tam for ns-3.10. In 2015, the TCP module was redesigned in order to create a better environment for creating and carrying out automated tests. One of the main changes involves congestion control algorithms and how they are implemented.

Before the ns-3.25 release, a congestion control was considered a stand-alone TCP through an inheritance relation: each congestion control (e.g. TcpNewReno) was a subclass of TcpSocketBase, reimplementing some inherited methods. The architecture was redone to avoid this inheritance by making each congestion control a separate class, and by defining an interface to exchange important data between TcpSocketBase and the congestion modules. Similar modularity is used, for instance, in Linux.

Along with congestion control, the Fast Retransmit and Fast Recovery algorithms have been modified; in previous releases, these algorithms were delegated to TcpSocketBase subclasses. Starting from ns-3.25, they have been merged into TcpSocketBase. In future releases, they may be extracted as separate modules, following the congestion control design.

Usage

In many cases, usage of TCP is set at the application layer by telling the ns-3 application which kind of socket factory to use.

Using the helper functions defined in src/applications/helper and src/network/helper, here is how one would create a TCP receiver:

// Create a packet sink on the star "hub" to receive these packets
uint16_t port = 50000;
Address sinkLocalAddress(InetSocketAddress (Ipv4Address::GetAny (), port));
PacketSinkHelper sinkHelper ("ns3::TcpSocketFactory", sinkLocalAddress);
ApplicationContainer sinkApp = sinkHelper.Install (serverNode);
sinkApp.Start (Seconds (1.0));
sinkApp.Stop (Seconds (10.0));

Similarly, the below snippet configures an OnOffApplication traffic source to use TCP:

// Create the OnOff applications to send TCP to the server
OnOffHelper clientHelper ("ns3::TcpSocketFactory", Address ());

The careful reader will note above that we have specified the TypeId of an abstract base class TcpSocketFactory. How does the script tell ns-3 that it wants the native ns-3 TCP vs. some other one? Well, when internet stacks are added to the node, the default TCP implementation that is aggregated to the node is the ns-3 TCP. This can be overridden as we show below when using Network Simulation Cradle. So, by default, when using the ns-3 helper API, the TCP that is aggregated to nodes with an Internet stack is the native ns-3 TCP.

To configure behavior of TCP, a number of parameters are exported through the ns-3 attribute system. These are documented in the Doxygen <http://www.nsnam.org/doxygen/classns3_1_1_tcp_socket.html> for class TcpSocket. For example, the maximum segment size is a settable attribute.
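
For instance, assuming the attribute name SegmentSize documented for ns3::TcpSocket, the default maximum segment size can be changed with a single line:

Config::SetDefault ("ns3::TcpSocket::SegmentSize", UintegerValue (1448));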

To set the default socket type before any internet stack-related objects are created, one may put the following statement at the top of the simulation program:

Config::SetDefault ("ns3::TcpL4Protocol::SocketType", StringValue ("ns3::TcpNewReno"));

For users who wish to have a pointer to the actual socket (so that socket operations like Bind(), setting socket options, etc. can be done on a per-socket basis), Tcp sockets can be created by using the Socket::CreateSocket() method. The TypeId passed to CreateSocket() must be of type ns3::SocketFactory, so configuring the underlying socket type must be done by twiddling the attribute associated with the underlying TcpL4Protocol object. The easiest way to get at this would be through the attribute configuration system. In the below example, the Node container “n0n1” is accessed to get the zeroth element, and a socket is created on this node:

// Create and bind the socket...
TypeId tid = TypeId::LookupByName ("ns3::TcpNewReno");
Config::Set ("/NodeList/*/$ns3::TcpL4Protocol/SocketType", TypeIdValue (tid));
Ptr<Socket> localSocket =
  Socket::CreateSocket (n0n1.Get (0), TcpSocketFactory::GetTypeId ());

Above, the “*” wild card for node number is passed to the attribute configuration system, so that all future sockets on all nodes are set to NewReno, not just on node ‘n0n1.Get (0)’. If one wants to limit it to just the specified node, one would have to do something like:

// Create and bind the socket...
TypeId tid = TypeId::LookupByName ("ns3::TcpNewReno");
std::stringstream nodeId;
nodeId << n0n1.Get (0)->GetId ();
std::string specificNode = "/NodeList/" + nodeId.str () + "/$ns3::TcpL4Protocol/SocketType";
Config::Set (specificNode, TypeIdValue (tid));
Ptr<Socket> localSocket =
  Socket::CreateSocket (n0n1.Get (0), TcpSocketFactory::GetTypeId ());

Once a TCP socket is created, one will want to follow conventional socket logic and either connect() and send() (for a TCP client) or bind(), listen(), and accept() (for a TCP server). Please note that applications usually create the sockets they use automatically, so it is not straightforward to connect directly to them using pointers. Please refer to the source code of your preferred application to discover how and when it creates the socket.

TCP Socket interaction and interface with Application layer

In the following, the public interface of the TCP socket is analyzed, along with how it can be used to interact with the socket itself. An analysis of the callbacks fired by the socket is also carried out. Please note that, for the sake of clarity, we will use the terminology “Sender” and “Receiver” to clearly divide the functionality of the socket. However, in TCP these two roles can be applied at the same time (i.e. a socket could be a sender and a receiver at the same time): our distinction does not lose generality, since the following definitions can be applied to both sockets in the case of full-duplex mode.


TCP state machine (for reference)

_images/tcp-state-machine.png

TCP State machine

In ns-3 we are fully compliant with the state machine depicted in Figure TCP State machine.


Public interface for receivers (e.g. servers receiving data)

Bind()
Bind the socket to an address, or to a general endpoint. A general endpoint is an endpoint with an ephemeral port allocation (that is, a random port allocation) on the 0.0.0.0 IP address. For instance, in current applications, data senders usually bind automatically after a Connect () over a random port. Consequently, the connection will start from this random port towards the well-defined port of the receiver. The IP 0.0.0.0 is then translated by lower layers into the real IP of the device.
Bind6()
Same as Bind(), but for IPv6.
BindToNetDevice()
Bind the socket to the specified NetDevice, creating a general endpoint.
Listen()
Listen on the endpoint for an incoming connection. Please note that this function can be called only in the TCP CLOSED state; the socket then transitions to the LISTEN state. When an incoming request for connection is detected (i.e. the other peer invoked Connect ()), the application will be signaled with the callback NotifyConnectionRequest (set in SetAcceptCallback () beforehand). If the connection is accepted (the default behavior, when the associated callback is a null one), the Socket will fork itself, i.e. a new socket is created to handle the incoming data/connection, in the state SYN_RCVD. Please note that this newly created socket is not connected anymore to the callbacks on the “father” socket (e.g. DataSent, Recv); the pointer to the newly created socket is provided in the callback NotifyNewConnectionCreated (set beforehand in SetAcceptCallback), and should be used to connect new callbacks to interesting events (e.g. the Recv callback). After receiving the ACK of the SYN-ACK, the socket will set up the congestion control, move into the ESTABLISHED state, and then notify the application with NotifyNewConnectionCreated.
ShutdownSend()
Signal a termination of send, or in other words prevent data from being added to the buffer. After this call, if the buffer is already empty, the socket will send a FIN; otherwise the FIN will be sent when the buffer empties. Please note that this is useful only for modeling “Sink” applications. If you have data to transmit, please refer to the Send () / Close () combination of APIs.
GetRxAvailable()
Get the amount of data that could be returned by the Socket in one or multiple calls to Recv or RecvFrom. Please use the Attribute system to configure the maximum available space on the receiver buffer (attribute “RcvBufSize”).
Recv()
Grab data from the TCP socket. Please remember that TCP is a stream socket, and it is allowed to concatenate multiple packets into bigger ones. If no data is present (i.e. GetRxAvailable returns 0), an empty packet is returned. Set the callback RecvCallback through SetRecvCallback () in order to have the application automatically notified when some data is ready to be read; a minimal sketch is shown after this list. It is important to connect that callback to the newly created socket in case of forks.
RecvFrom()
Same as Recv, but with the source address as parameter.
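
A minimal sketch of the receive-callback pattern mentioned above (the function name HandleRead and the socket variable are placeholders) is:

void
HandleRead (Ptr<Socket> socket)
{
  Ptr<Packet> packet;
  while ((packet = socket->Recv ()))
    {
      if (packet->GetSize () == 0)
        {
          break; // no more data to read
        }
      // process packet->GetSize () bytes of in-order application data here
    }
}

// ... later, right after the socket is created (or forked):
socket->SetRecvCallback (MakeCallback (&HandleRead));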

Public interface for senders (e.g. clients uploading data)

Connect()
Set the remote endpoint, and try to connect to it. The local endpoint should be set before this call, or otherwise an ephemeral one will be created. The TCP will then be in the SYN_SENT state. If a SYN-ACK is received, the TCP will set up the congestion control and then call the callback ConnectionSucceeded.
GetTxAvailable()
Return the amount of data that can be stored in the TCP Tx buffer. Set this property through the Attribute system (“SndBufSize”).
Send()
Send the data into the TCP Tx buffer. From there, the TCP rules will decide if, and when, this data will be transmitted. Please note that, if the tx buffer has enough data to fill the congestion (or the receiver) window, dynamically varying the rate at which data is injected into the TCP buffer does not have any noticeable effect on the amount of data transmitted on the wire, which will continue to be decided by the TCP rules.
SendTo()
Same as Send().
Close()
Terminate the local side of the connection by sending a FIN (after all data in the tx buffer has been transmitted). This does not prevent the socket from receiving data, or from employing the retransmission mechanism if losses are detected. If the application calls Close () with unread data in its rx buffer, the socket will send a reset. If the socket is in the SYN_SENT, CLOSING, LISTEN or LAST_ACK state, after that call the application will be notified with NotifyNormalClose (). In all the other cases, the notification is delayed (see NotifyNormalClose ()).

Public callbacks

These callbacks are called by the TCP socket to notify the application of interesting events. We will refer to them with the protected names used in socket.h, but we will provide the API functions to set the pointers to these callbacks as well.

NotifyConnectionSucceeded: SetConnectCallback, 1st argument
Called in the SYN_SENT state, before moving to ESTABLISHED. In other words, we have sent the SYN and we received the SYN-ACK: the socket prepares the sequence numbers, sends the ACK for the SYN-ACK, tries to send out more data (in another segment), and then invokes this callback. After this callback, it invokes the NotifySend callback.
NotifyConnectionFailed: SetConnectCallback, 2nd argument
Called after the SYN retransmission count goes to 0. The SYN packet was lost multiple times, and the socket gives up.
NotifyNormalClose: SetCloseCallbacks, 1st argument
A normal close is invoked. A rare case is when we receive an RST segment (or a segment with bad flags) in normal states. All the other cases are: the application tries to Connect () over an already connected socket; an ACK is received for the sent FIN, with or without the FIN bit set (we are in LAST_ACK); the socket reaches the maximum amount of retries in retransmitting the SYN (*); a timeout occurs in the LAST_ACK state; or 2*Maximum Segment Lifetime seconds have passed since the socket entered the TIME_WAIT state.
NotifyErrorClose: SetCloseCallbacks, 2nd argument
Invoked when we send an RST segment (for whatever reason) or we reached the maximum amount of data retries.
NotifyConnectionRequest: SetAcceptCallback, 1st argument
Invoked in the LISTEN state, when we receive a SYN. The return value indicates if the socket should accept the connection (return true) or should ignore it (return false).
NotifyNewConnectionCreated: SetAcceptCallback, 2nd argument
Invoked when the socket passes from SYN_RCVD to ESTABLISHED, after setting up the congestion control and the sequence numbers and processing the incoming ACK. If there is some space in the buffer, NotifySend is called shortly after this callback. The Socket pointer passed with this callback is the newly created socket, after a Fork ().
NotifyDataSent: SetDataSentCallback
The Socket notifies the application that some bytes have been transmitted at the IP level. These bytes could still be lost in the node (traffic control layer) or in the network.
NotifySend: SetSendCallback
Invoked if there is some space in the tx buffer when entering the ESTABLISHED state (e.g. after the ACK for SYN-ACK is received), after the connection succeeds (e.g. after the SYN-ACK is received) and after each new ack (i.e. that advances SND.UNA).
NotifyDataRecv: SetRecvCallback
Called when there are in-order bytes in the receiver buffer, and when, in FIN_WAIT_1 or FIN_WAIT_2, the socket receives an in-sequence FIN (that can carry data).
Congestion Control Algorithms

Here follows a list of supported TCP congestion control algorithms. For an academic peer-reviewed paper on these congestion control algorithms, see http://dl.acm.org/citation.cfm?id=2756518 .

New Reno

The New Reno algorithm introduces partial ACKs inside the well-established Reno algorithm. This and other modifications are described in RFC 6582. There are two possible congestion window increment strategies: slow start and congestion avoidance. Taken from RFC 5681:

During slow start, a TCP increments cwnd by at most SMSS bytes for each ACK received that cumulatively acknowledges new data. Slow start ends when cwnd exceeds ssthresh (or, optionally, when it reaches it, as noted above) or when congestion is observed. While traditionally TCP implementations have increased cwnd by precisely SMSS bytes upon receipt of an ACK covering new data, we RECOMMEND that TCP implementations increase cwnd, per Equation (1), where N is the number of previously unacknowledged bytes acknowledged in the incoming ACK.

(1)    cwnd += min (N, SMSS)

During congestion avoidance, cwnd is incremented by roughly 1 full-sized segment per round-trip time (RTT), and for each congestion event, the slow start threshold is halved.

High Speed

TCP HighSpeed is designed for high-capacity channels or, in general, for TCP connections with large congestion windows. Conceptually, with respect to the standard TCP, HighSpeed makes the cWnd grow faster during the probing phases and accelerates the cWnd recovery from losses. This behavior is executed only when the window grows beyond a certain threshold, which allows TCP Highspeed to be friendly with standard TCP in environments with heavy congestion, without introducing new dangers of congestion collapse.

Mathematically:

cWnd = cWnd + \frac{a(cWnd)}{cWnd}

The function a() is calculated using a fixed RTT value of 100 ms (the lookup table for this function is taken from RFC 3649). For each congestion event, the slow start threshold is decreased by a value that depends on the size of the slow start threshold itself. Then, the congestion window is set to this value.

cWnd = (1-b(cWnd)) \cdot cWnd

The lookup table for the function b() is taken from the same RFC. More information at: http://dl.acm.org/citation.cfm?id=2756518

Hybla

The key idea behind TCP Hybla is to obtain for long RTT connections the same instantaneous transmission rate of a reference TCP connection with lower RTT. With analytical steps, it is shown that this goal can be achieved by modifying the time scale, in order for the throughput to be independent from the RTT. This independence is obtained through the use of a coefficient rho.

This coefficient is used to calculate both the slow start threshold and the congestion window when in slow start and in congestion avoidance, respectively.

More information at: http://dl.acm.org/citation.cfm?id=2756518

Westwood

Westwood and Westwood+ employ the AIAD (Additive Increase/Adaptive Decrease) congestion control paradigm. When a congestion episode happens, instead of halving the cwnd, these protocols try to estimate the network’s bandwidth and use the estimated value to adjust the cwnd. While Westwood performs the bandwidth sampling on every ACK reception, Westwood+ samples the bandwidth every RTT.

More information at: http://dl.acm.org/citation.cfm?id=381704 and http://dl.acm.org/citation.cfm?id=2512757

Vegas

TCP Vegas is a pure delay-based congestion control algorithm implementing a proactive scheme that tries to prevent packet drops by maintaining a small backlog at the bottleneck queue. Vegas continuously samples the RTT and computes the actual throughput a connection achieves using Equation (1) and compares it with the expected throughput calculated in Equation (2). The difference between these 2 sending rates in Equation (3) reflects the amount of extra packets being queued at the bottleneck.

actual &= \frac{cWnd}{RTT}        \\
expected &= \frac{cWnd}{BaseRTT}  \\
diff &= expected - actual

To avoid congestion, Vegas linearly increases/decreases its congestion window to ensure that the diff value falls between the two predefined thresholds, alpha and beta. diff and another threshold, gamma, are used to determine when Vegas should change from its slow-start mode to the linear increase/decrease mode. Following the implementation of Vegas in Linux, we use 2, 4, and 1 as the default values of alpha, beta, and gamma, respectively, but they can be modified through the Attribute system.

More information at: http://dx.doi.org/10.1109/49.464716

Scalable

Scalable improves TCP performance to better utilize the available bandwidth of a highspeed wide area network by altering the NewReno congestion window adjustment algorithm. When congestion has not been detected, for each ACK received in an RTT, Scalable increases its cwnd per:

cwnd = cwnd + 0.01

Following the Linux implementation of Scalable, we use 50 instead of 100 to account for delayed ACKs.

On the first detection of congestion in a given RTT, cwnd is reduced based on the following equation:

cwnd = cwnd - ceil(0.125 \cdot cwnd)

More information at: http://dl.acm.org/citation.cfm?id=956989

Veno

TCP Veno enhances Reno algorithm for more effectively dealing with random packet loss in wireless access networks by employing Vegas’s method in estimating the backlog at the bottleneck queue to distinguish between congestive and non-congestive states.

The backlog (the number of packets accumulated at the bottleneck queue) is calculated using Equation (1):

N &= Actual \cdot (RTT - BaseRTT) \\
  &= Diff \cdot BaseRTT

where:

Diff &= Expected - Actual \\
     &= \frac{cWnd}{BaseRTT} - \frac{cWnd}{RTT}

Veno makes decisions on cwnd modification based on the calculated N and its predefined threshold beta.

Specifically, it refines the additive increase algorithm of Reno so that the connection can stay longer in the stable state by incrementing cwnd by 1/cwnd for every other new ACK received after the available bandwidth has been fully utilized, i.e. when N exceeds beta. Otherwise, Veno increases its cwnd by 1/cwnd upon every new ACK receipt as in Reno.

In the multiplicative decrease algorithm, when Veno is in the non-congestive state, i.e. when N is less than beta, Veno decrements its cwnd by only 1/5, because the loss encountered is more likely a corruption-based loss than a congestion-based one. Only when N is greater than beta does Veno halve its sending rate as in Reno.

More information at: http://dx.doi.org/10.1109/JSAC.2002.807336

Bic

In TCP Bic the congestion control problem is viewed as a search problem. Taking as a starting point the current window value and as a target point the last maximum window value (i.e. the cWnd value just before the loss event) a binary search technique can be used to update the cWnd value at the midpoint between the two, directly or using an additive increase strategy if the distance from the current window is too large.

This way, assuming a no-loss period, the congestion window logarithmically approaches the maximum value of cWnd until the difference between it and cWnd falls below a preset threshold. After reaching such a value (or if the maximum window is unknown, i.e. the binary search does not start at all), the algorithm switches to probing the new maximum window with a ‘slow start’ strategy.

If a loss occurs in either of these phases, the current window (before the loss) can be treated as the new maximum, and the reduced (by a multiplicative decrease factor Beta) window size can be used as the new minimum.

More information at: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=1354672

YeAH

YeAH-TCP (Yet Another HighSpeed TCP) is a heuristic designed to balance various requirements of a state-of-the-art congestion control algorithm:

  1. fully exploit the link capacity of high BDP networks while inducing a small number of congestion events
  2. compete friendly with Reno flows
  3. achieve intra and RTT fairness
  4. robust to random losses
  5. achieve high performance regardless of buffer size

YeAH operates in two modes: Fast and Slow. In the Fast mode, when the queue occupancy is small and the network congestion level is low, YeAH increments its congestion window according to the aggressive STCP rule. When the number of packets in the queue grows beyond a threshold and the network congestion level is high, YeAH enters its Slow mode, acting as Reno with a decongestion algorithm. YeAH employs Vegas’ mechanism for calculating the backlog as in Equation (2). The estimation of the network congestion level is shown in Equation (3).

(2)    Q = (RTT - BaseRTT) \cdot \frac{cWnd}{RTT}

(3)    L = \frac{RTT - BaseRTT}{BaseRTT}

To ensure TCP friendliness, YeAH also implements an algorithm to detect the presence of legacy Reno flows. Upon the receipt of 3 duplicate ACKs, YeAH decreases its slow start threshold according to the equation below if it is not competing with Reno flows. Otherwise, the ssthresh is halved as in Reno:

ssthresh = min(max(\frac{cWnd}{8}, Q), \frac{cWnd}{2})

More information: http://www.csc.lsu.edu/~sjpark/cs7601/4-YeAH_TCP.pdf

Illinois

TCP Illinois is a hybrid congestion control algorithm designed for high-speed networks. Illinois implements a Concave-AIMD (or C-AIMD) algorithm that uses packet loss as the primary congestion signal to determine the direction of window update and queueing delay as the secondary congestion signal to determine the amount of change.

The additive increase and multiplicative decrease factors (denoted as alpha and beta, respectively) are functions of the current average queueing delay da, as shown in Equations (1) and (2). To improve the protocol robustness against sudden fluctuations in its delay sampling, Illinois allows the increment of alpha to alphaMax only if da stays below d1 for some (theta) amount of time.

alpha &=
\begin{cases}
   \quad alphaMax              & \quad \text{if } da <= d1 \\
   \quad k1 / (k2 + da)        & \quad \text{otherwise} \\
\end{cases} \\
\\
beta &=
\begin{cases}
   \quad betaMin               & \quad \text{if } da <= d2 \\
   \quad k3 + k4 \, da         & \quad \text{if } d2 < da < d3 \\
   \quad betaMax               & \quad \text{otherwise}
\end{cases}

where the calculations of k1, k2, k3, and k4 are shown in the following:

k1 &= \frac{(dm - d1) \cdot alphaMin \cdot alphaMax}{alphaMax - alphaMin} \\
\\
k2 &= \frac{(dm - d1) \cdot alphaMin}{alphaMax - alphaMin} - d1 \\
\\
k3 &= \frac{alphaMin \cdot d3 - alphaMax \cdot d2}{d3 - d2} \\
\\
k4 &= \frac{alphaMax - alphaMin}{d3 - d2}

Other parameters include da (the current average queueing delay), and Ta (the average RTT, calculated as sumRtt / cntRtt in the implementation) and Tmin (baseRtt in the implementation) which is the minimum RTT ever seen. dm is the maximum (average) queueing delay, and Tmax (maxRtt in the implementation) is the maximum RTT ever seen.

da &= Ta - Tmin

dm &= Tmax - Tmin

d_i &= eta_i \cdot dm

Illinois only executes its adaptation of alpha and beta when cwnd exceeds a threshold called winThresh. Otherwise, it sets alpha and beta to the base values of 1 and 0.5, respectively.

Following the implementation of Illinois in the Linux kernel, we use the following default parameter settings:

  • alphaMin = 0.3 (0.1 in the Illinois paper)
  • alphaMax = 10.0
  • betaMin = 0.125
  • betaMax = 0.5
  • winThresh = 15 (10 in the Illinois paper)
  • theta = 5
  • eta1 = 0.01
  • eta2 = 0.1
  • eta3 = 0.8

More information: http://www.doi.org/10.1145/1190095.1190166

H-TCP

H-TCP has been designed for high BDP (Bandwidth-Delay Product) paths. It is a dual mode protocol. In normal conditions, it works like traditional TCP with the same rate of increment and decrement for the congestion window. However, in high BDP networks, when it finds no congestion on the path after deltal seconds, it increases the window size based on the alpha function in the following:

alpha(delta)=1+10(delta-deltal)+0.5(delta-deltal)^2

where deltal is a threshold in seconds for switching between the modes and delta is the elapsed time since the last congestion event. During congestion, it reduces the window size by multiplying by the beta function provided in the reference paper. The calculated throughput between the last two consecutive congestion events is considered for the beta calculation.

The transport TcpHtcp can be selected in the program examples/tcp/tcp-variants-comparison.cc to perform an experiment with H-TCP, although it is useful to increase the bandwidth in this example (e.g. to 20 Mb/s) to create a higher BDP link, such as:

./waf --run "tcp-variants-comparison --transport_prot=TcpHtcp --bandwidth=20Mbps --duration=10"

More information (paper): http://www.hamilton.ie/net/htcp3.pdf

More information (Internet Draft): https://tools.ietf.org/html/draft-leith-tcp-htcp-06

Validation

The following tests are found in the src/internet/test directory. In general, TCP tests inherit from a class called TcpGeneralTest, which provides common operations to set up test scenarios involving TCP objects. For more information on how to write new tests, see the section below on Writing TCP tests.

  • tcp: Basic transmission of string of data from client to server
  • tcp-bytes-in-flight-test: TCP correctly estimates bytes in flight under loss conditions
  • tcp-cong-avoid-test: TCP congestion avoidance for different packet sizes
  • tcp-datasentcb: Check TCP’s ‘data sent’ callback
  • tcp-endpoint-bug2211-test: A test for an issue that was causing stack overflow
  • tcp-fast-retr-test: Fast Retransmit testing
  • tcp-header: Unit tests on the TCP header
  • tcp-highspeed-test: Unit tests on the Highspeed congestion control
  • tcp-htcp-test: Unit tests on the H-TCP congestion control
  • tcp-hybla-test: Unit tests on the Hybla congestion control
  • tcp-vegas-test: Unit tests on the Vegas congestion control
  • tcp-veno-test: Unit tests on the Veno congestion control
  • tcp-scalable-test: Unit tests on the Scalable congestion control
  • tcp-bic-test: Unit tests on the BIC congestion control
  • tcp-yeah-test: Unit tests on the YeAH congestion control
  • tcp-illinois-test: Unit tests on the Illinois congestion control
  • tcp-option: Unit tests on TCP options
  • tcp-pkts-acked-test: Unit test the number of time that PktsAcked is called
  • tcp-rto-test: Unit test behavior after a RTO timeout occurs
  • tcp-rtt-estimation-test: Check RTT calculations, including retransmission cases
  • tcp-slow-start-test: Check behavior of slow start
  • tcp-timestamp: Unit test on the timestamp option
  • tcp-wscaling: Unit test on the window scaling option
  • tcp-zero-window-test: Unit test persist behavior for zero window conditions

Several tests have dependencies outside of the internet module, so they are located in a system test directory called src/test/ns3tcp. Some of these tests involve the use of the Network Simulation Cradle, and are disabled if NSC is not enabled in the build.

  • ns3-tcp-cwnd: Check to see that ns-3 TCP congestion control works against liblinux2.6.26.so implementation
  • ns3-tcp-interoperability: Check to see that ns-3 TCP interoperates with liblinux2.6.26.so implementation
  • ns3-tcp-loss: Check behavior of ns-3 TCP upon packet losses
  • nsc-tcp-loss: Check behavior of NSC TCP upon packet losses
  • ns3-tcp-no-delay: Check that ns-3 TCP Nagle’s algorithm works correctly and that it can be disabled
  • ns3-tcp-socket: Check that ns-3 TCP successfully transfers an application data write of various sizes
  • ns3-tcp-state: Check the operation of the TCP state machine for several cases

Several TCP validation test results can also be found in the wiki page describing this implementation.

Writing a new congestion control algorithm

Writing (or porting) a congestion control algorithm from scratch (or from another system) is a process completely separate from the internals of TcpSocketBase.

All operations that are delegated to a congestion control are contained in the class TcpCongestionOps. It mimics the structure tcp_congestion_ops of Linux, and the following operations are defined:

virtual std::string GetName () const;
virtual uint32_t GetSsThresh (Ptr<const TcpSocketState> tcb, uint32_t bytesInFlight);
virtual void IncreaseWindow (Ptr<TcpSocketState> tcb, uint32_t segmentsAcked);
virtual void PktsAcked (Ptr<TcpSocketState> tcb, uint32_t segmentsAcked, const Time& rtt);
virtual Ptr<TcpCongestionOps> Fork ();

The most interesting methods to write are GetSsThresh and IncreaseWindow. The latter is called when TcpSocketBase decides that it is time to increase the congestion window. Much information is available in the Transmission Control Block, and the method should increase cWnd and/or ssThresh based on the number of segments acked.

GetSsThresh is called whenever the socket needs an updated value of the slow start threshold. This happens after a loss; congestion control algorithms are then asked to lower such value, and to return it.

PktsAcked is used in case the algorithm needs timing information (such as RTT), and it is called each time an ACK is received.
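
To make the interface concrete, here is a minimal, hypothetical congestion control (the class name TcpBasicAimd is not part of ns-3) implementing a Reno-like additive increase and halving the window on loss. It is only a sketch of how the methods above fit together, assuming the internet module headers are included and that TcpSocketState exposes m_cWnd, m_ssThresh, and m_segmentSize as in the current implementation:

class TcpBasicAimd : public TcpCongestionOps
{
public:
  static TypeId GetTypeId (void)
  {
    static TypeId tid = TypeId ("ns3::TcpBasicAimd")
      .SetParent<TcpCongestionOps> ()
      .AddConstructor<TcpBasicAimd> ();
    return tid;
  }

  TcpBasicAimd () : TcpCongestionOps () {}
  TcpBasicAimd (const TcpBasicAimd &sock) : TcpCongestionOps (sock) {}

  virtual std::string GetName () const
  {
    return "TcpBasicAimd";
  }

  // On loss, halve the outstanding data, but never go below two segments
  virtual uint32_t GetSsThresh (Ptr<const TcpSocketState> tcb, uint32_t bytesInFlight)
  {
    return std::max (2 * tcb->m_segmentSize, bytesInFlight / 2);
  }

  // Slow start below ssthresh, then roughly one segment per RTT
  virtual void IncreaseWindow (Ptr<TcpSocketState> tcb, uint32_t segmentsAcked)
  {
    if (tcb->m_cWnd < tcb->m_ssThresh)
      {
        tcb->m_cWnd += tcb->m_segmentSize * segmentsAcked;
      }
    else
      {
        double adder = static_cast<double> (tcb->m_segmentSize * tcb->m_segmentSize)
                       / tcb->m_cWnd.Get ();
        tcb->m_cWnd += static_cast<uint32_t> (std::max (1.0, adder));
      }
  }

  virtual Ptr<TcpCongestionOps> Fork ()
  {
    return CopyObject<TcpBasicAimd> (this);
  }
};

Once registered with a TypeId as above, such a class could be selected like any other congestion control through the ns3::TcpL4Protocol::SocketType attribute shown earlier in this chapter.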

Current limitations
  • SACK is not supported
  • TcpCongestionOps interface does not contain every possible Linux operation
  • Fast retransmit / fast recovery are bound with TcpSocketBase, thereby preventing easy simulation of TCP Tahoe
Writing TCP tests

The TCP subsystem supports automated test cases on both socket functions and congestion control algorithms. To show how to write tests for TCP, here we explain the process of creating a test case that reproduces a bug (#1571 in the project bug tracker).

The bug concerns the zero window situation, which happens when the receiver can not handle more data. In this case, it advertises a zero window, which causes the sender to pause transmission and wait for the receiver to increase the window.

The sender has a timer to periodically check the receiver’s window: however, in modern TCP implementations, when the receiver has freed a “significant” amount of data, the receiver itself sends an “active” window update, meaning that the transmission could be resumed. Nevertheless, the sender timer is still necessary because window updates can be lost.

Note

Throughout this text, we will assume some knowledge about the general design of the TCP test infrastructure, which is explained in detail in the Doxygen documentation. As a brief summary, the strategy is to have a class that sets up a TCP connection and calls protected members of itself. In this way, subclasses can implement the necessary members, which will be called by the main TcpGeneralTest class when events occur. For example, after processing an ACK, the method ProcessedAck will be invoked. Subclasses interested in checking particular things that must have happened during ACK processing should implement the ProcessedAck method and check the interesting values inside the method. To get a list of available methods, please check the Doxygen documentation.

We describe the writing of two test cases, covering both situations: the sender’s zero-window probing and the receiver’s “active” window update. Our focus will be on dealing with the reported problems, which are:

  • an ns-3 receiver does not send “active” window update when its receive buffer is being freed;
  • even if the window update is artificially crafted, the transmission does not resume.

However, other things should be checked in the test:

  • Persistent timer setup
  • Persistent timer teardown if rWnd increases

To construct the test case, one first derives from the TcpGeneralTest class; the code is the following:

TcpZeroWindowTest::TcpZeroWindowTest (const std::string &desc)
   : TcpGeneralTest (desc)
{
}

Then, one should define the general parameters for the TCP connection, which will be one-sided (one node is acting as SENDER, while the other is acting as RECEIVER):

  • Application packet size set to 500, and 20 packets in total (meaning a stream of 10k bytes)
  • Segment size for both SENDER and RECEIVER set to 500 bytes
  • Initial slow start threshold set to UINT32_MAX
  • Initial congestion window for the SENDER set to 10 segments (5000 bytes)
  • Congestion control: NewReno

We also have to define the link properties, because the above definition does not work for every combination of propagation delay and sender application behavior.

  • Link one-way propagation delay: 50 ms
  • Application packet generation interval: 10 ms
  • Application starting time: 20 s after the starting point

To define the properties of the environment (e.g. properties which should be set before the object creation, such as propagation delay) one next implements the method ConfigureEnvironment:

void
TcpZeroWindowTest::ConfigureEnvironment ()
{
  TcpGeneralTest::ConfigureEnvironment ();
  SetAppPktCount (20);
  SetMTU (500);
  SetTransmitStart (Seconds (2.0));
  SetPropagationDelay (MilliSeconds (50));
}

For other properties, set after the object creation, one can use ConfigureProperties (). The difference is that some values, such as the initial congestion window or the initial slow start threshold, are applicable only to a single instance, not to every instance. In general, methods that require both an id and a value are meant to be called inside ConfigureProperties (). Please see the Doxygen documentation for an exhaustive list of the tunable properties.

void
TcpZeroWindowTest::ConfigureProperties ()
{
  TcpGeneralTest::ConfigureProperties ();
  SetInitialCwnd (SENDER, 10);
}

To see the default values for the experiment, please see the implementation of both methods inside the TcpGeneralTest class.

Note

If some configuration parameters are missing, add a method called “SetSomeValue” which takes as input the value only (if it is meant to be called inside ConfigureEnvironment) or the socket and the value (if it is meant to be called inside ConfigureProperties).

To define a zero-window situation, we choose (by design) to initiate the connection with a 0-byte rx buffer. This implies that the RECEIVER, in its first SYN-ACK, advertises a zero window. This can be accomplished by implementing the method CreateReceiverSocket, setting an Rx buffer value of 0 bytes (at line 6 of the following code):

 1  Ptr<TcpSocketMsgBase>
 2  TcpZeroWindowTest::CreateReceiverSocket (Ptr<Node> node)
 3  {
 4    Ptr<TcpSocketMsgBase> socket = TcpGeneralTest::CreateReceiverSocket (node);
 5
 6    socket->SetAttribute("RcvBufSize", UintegerValue (0));
 7    Simulator::Schedule (Seconds (10.0),
 8                         &TcpZeroWindowTest::IncreaseBufSize, this);
 9
10    return socket;
11  }

In addition, to check the active window update, we schedule an increase of the buffer size. We do this at lines 7 and 8, scheduling the function IncreaseBufSize.

void
TcpZeroWindowTest::IncreaseBufSize ()
{
  SetRcvBufSize (RECEIVER, 2500);
}

This function uses the SetRcvBufSize method to edit the RxBuffer object of the RECEIVER. As said before, check the Doxygen documentation of the class TcpGeneralTest to be aware of the various possibilities that it offers.

Note

By design, we choose to maintain a close relationship between TcpSocketBase and TcpGeneralTest: they are connected by a friendship relation. Since friendship is not passed through inheritance, if one discovers that one needs to access or to modify a private (or protected) member of TcpSocketBase, one can do so by adding a method to the class TcpGeneralTest. An example of such a method is SetRcvBufSize, which allows TcpGeneralTest subclasses to forcefully set the RxBuffer size.

void
TcpGeneralTest::SetRcvBufSize (SocketWho who, uint32_t size)
{
  if (who == SENDER)
    {
      m_senderSocket->SetRcvBufSize (size);
    }
  else if (who == RECEIVER)
    {
      m_receiverSocket->SetRcvBufSize (size);
    }
  else
    {
      NS_FATAL_ERROR ("Not defined");
    }
}

Next, we can start to follow the TCP connection:

  1. At time 0.0 s the connection is opened sender side, with a SYN packet sent from SENDER to RECEIVER
  2. At time 0.05 s the RECEIVER gets the SYN and replies with a SYN-ACK
  3. At time 0.10 s the SENDER gets the SYN-ACK and replies with an ACK.

While the general structure is defined and the connection is started, we need a way to check the rWnd field on the segments. To this aim, we can implement the Rx and Tx methods in the TcpGeneralTest subclass, checking each time the actions of the RECEIVER and the SENDER. These methods are defined in TcpGeneralTest, and they are attached to the Rx and Tx traces in TcpSocketBase. One should write small checks for every detail that one wants to ensure during the connection (this prevents the behavior from silently changing over time, and ensures that it stays consistent across releases). We start by ensuring that the first SYN-ACK has 0 as the advertised window size:

void
TcpZeroWindowTest::Tx(const Ptr<const Packet> p, const TcpHeader &h, SocketWho who)
{
  ...
  else if (who == RECEIVER)
    {
      NS_LOG_INFO ("\tRECEIVER TX " << h << " size " << p->GetSize());

      if (h.GetFlags () & TcpHeader::SYN)
        {
          NS_TEST_ASSERT_MSG_EQ (h.GetWindowSize(), 0,
                                 "RECEIVER window size is not 0 in the SYN-ACK");
        }
    }
    ....
 }

Practically, we are checking that every SYN packet sent by the RECEIVER has the advertised window set to 0. The same check is performed in the Rx method, verifying that each SYN received by the SENDER has the advertised window set to 0. Thanks to the log subsystem, we can print what is happening through messages. If we run the experiment with logging enabled, we see the following:

./waf shell
gdb --args ./build/utils/ns3-dev-test-runner-debug --test-name=tcp-zero-window-test --stop-on-failure --fullness=QUICK --assert-on-failure --verbose
(gdb) run

0.00s TcpZeroWindowTestSuite:Tx(): 0.00      SENDER TX 49153 > 4477 [SYN] Seq=0 Ack=0 Win=32768 ns3::TcpOptionWinScale(2) ns3::TcpOptionTS(0;0) size 36
0.05s TcpZeroWindowTestSuite:Rx(): 0.05      RECEIVER RX 49153 > 4477 [SYN] Seq=0 Ack=0 Win=32768 ns3::TcpOptionWinScale(2) ns3::TcpOptionTS(0;0) ns3::TcpOptionEnd(EOL) size 0
0.05s TcpZeroWindowTestSuite:Tx(): 0.05      RECEIVER TX 4477 > 49153 [SYN|ACK] Seq=0 Ack=1 Win=0 ns3::TcpOptionWinScale(0) ns3::TcpOptionTS(50;0) size 36
0.10s TcpZeroWindowTestSuite:Rx(): 0.10      SENDER RX 4477 > 49153 [SYN|ACK] Seq=0 Ack=1 Win=0 ns3::TcpOptionWinScale(0) ns3::TcpOptionTS(50;0) ns3::TcpOptionEnd(EOL) size 0
0.10s TcpZeroWindowTestSuite:Tx(): 0.10      SENDER TX 49153 > 4477 [ACK] Seq=1 Ack=1 Win=32768 ns3::TcpOptionTS(100;50) size 32
0.15s TcpZeroWindowTestSuite:Rx(): 0.15      RECEIVER RX 49153 > 4477 [ACK] Seq=1 Ack=1 Win=32768 ns3::TcpOptionTS(100;50) ns3::TcpOptionEnd(EOL) size 0
(...)

The output is cut to show the three-way handshake. As we can see from the headers, the rWnd of the RECEIVER is set to 0, and our tests are not failing. Now we need to test for the persistent timer, which should be started by the SENDER after it receives the SYN-ACK. Since the Rx method is called before any computation on the received packet, we should use another method, namely ProcessedAck, which is called after each processed ACK. In the following, we show how to check whether the persistent event is running after the processing of the SYN-ACK:

void
TcpZeroWindowTest::ProcessedAck (const Ptr<const TcpSocketState> tcb,
                                 const TcpHeader& h, SocketWho who)
{
  if (who == SENDER)
    {
      if (h.GetFlags () & TcpHeader::SYN)
        {
          EventId persistentEvent = GetPersistentEvent (SENDER);
          NS_TEST_ASSERT_MSG_EQ (persistentEvent.IsRunning (), true,
                                 "Persistent event not started");
        }
    }
 }

Since we programmed the increase of the buffer size after 10 simulated seconds, we expect the persistent timer to fire before any rWnd change. When it fires, the SENDER should send a window probe, and the RECEIVER should reply reporting again a zero window situation. First, we investigate what the sender sends:

 1    if (Simulator::Now ().GetSeconds () <= 6.0)
 2      {
 3        NS_TEST_ASSERT_MSG_EQ (p->GetSize () - h.GetSerializedSize(), 0,
 4                               "Data packet sent anyway");
 5      }
 6    else if (Simulator::Now ().GetSeconds () > 6.0 &&
 7             Simulator::Now ().GetSeconds () <= 7.0)
 8      {
 9        NS_TEST_ASSERT_MSG_EQ (m_zeroWindowProbe, false, "Sent another probe");
10
11        if (! m_zeroWindowProbe)
12          {
13            NS_TEST_ASSERT_MSG_EQ (p->GetSize () - h.GetSerializedSize(), 1,
14                                   "Data packet sent instead of window probe");
15            NS_TEST_ASSERT_MSG_EQ (h.GetSequenceNumber(), SequenceNumber32 (1),
16                                   "Data packet sent instead of window probe");
17            m_zeroWindowProbe = true;
18          }
19      }

We divide the events by simulated time. At line 1, we check everything that happens before the 6.0 seconds mark; for instance, that no data packets are sent, and that the state remains OPEN for both sender and receiver.

Since the persist timeout is initialized at 6 seconds (exercise left for the reader: edit the test to get this value from the attribute system), we need to check (line 6) that the probe is sent between 6.0 and 7.0 simulated seconds. Only one probe is allowed, and this is the reason for the check at line 11.

if (Simulator::Now ().GetSeconds () > 6.0 &&
    Simulator::Now ().GetSeconds () <= 7.0)
  {
    NS_TEST_ASSERT_MSG_EQ (h.GetSequenceNumber(), SequenceNumber32 (1),
                           "Data packet sent instead of window probe");
    NS_TEST_ASSERT_MSG_EQ (h.GetWindowSize(), 0,
                           "No zero window advertised by RECEIVER");
  }

For the RECEIVER, the interval between 6 and 7 seconds is when the zero-window segment is sent.

Other checks are redundant; the safest approach is to deny any other packet exchange between the 7 and 10 seconds mark.

else if (Simulator::Now ().GetSeconds () > 7.0 &&
         Simulator::Now ().GetSeconds () < 10.0)
  {
    NS_FATAL_ERROR ("No packets should be sent before the window update");
  }

The state checks are performed at the end of the methods, since they are valid in every condition:

NS_TEST_ASSERT_MSG_EQ (GetCongStateFrom (GetTcb(SENDER)), TcpSocketState::CA_OPEN,
                       "Sender State is not OPEN");
NS_TEST_ASSERT_MSG_EQ (GetCongStateFrom (GetTcb(RECEIVER)), TcpSocketState::CA_OPEN,
                       "Receiver State is not OPEN");

Now, the interesting part in the Tx method is to check that after the 10.0 seconds mark (when the RECEIVER sends the active window update) the value of the window should be greater than zero (and precisely, set to 2500):

else if (Simulator::Now().GetSeconds() >= 10.0)
  {
    NS_TEST_ASSERT_MSG_EQ (h.GetWindowSize(), 2500,
                           "Receiver window not updated");
  }

To be sure that the sender receives the window update, we can use the Rx method:

1  if (Simulator::Now().GetSeconds() >= 10.0)
2    {
3      NS_TEST_ASSERT_MSG_EQ (h.GetWindowSize(), 2500,
4                             "Receiver window not updated");
5      m_windowUpdated = true;
6    }

We check every packet after the 10 seconds mark to see if its window has been updated. At line 5, we also set a boolean variable to true, to check that we actually reach this code.

Last but not least, we also implement the NormalClose() method, to check that the connection ends successfully:

void
TcpZeroWindowTest::NormalClose (SocketWho who)
{
  if (who == SENDER)
    {
      m_senderFinished = true;
    }
  else if (who == RECEIVER)
    {
      m_receiverFinished = true;
    }
}

The method is called only if all bytes are transmitted successfully. Then, in the method FinalChecks(), we check all the boolean variables, which should be true (indicating that the connection was closed cleanly).

void
TcpZeroWindowTest::FinalChecks ()
{
  NS_TEST_ASSERT_MSG_EQ (m_zeroWindowProbe, true,
                         "Zero window probe not sent");
  NS_TEST_ASSERT_MSG_EQ (m_windowUpdated, true,
                         "Window has not updated during the connection");
  NS_TEST_ASSERT_MSG_EQ (m_senderFinished, true,
                         "Connection not closed successfully (SENDER)");
  NS_TEST_ASSERT_MSG_EQ (m_receiverFinished, true,
                         "Connection not closed successfully (RECEIVER)");
}
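For completeness, remember that, as for any other ns-3 test, the test case has to be registered in a test suite so that test.py can find it. A minimal sketch (the description strings are arbitrary) could be:

class TcpZeroWindowTestSuite : public TestSuite
{
public:
  TcpZeroWindowTestSuite () : TestSuite ("tcp-zero-window-test", UNIT)
  {
    // QUICK tests are run by default by test.py
    AddTestCase (new TcpZeroWindowTest ("zero window test"), TestCase::QUICK);
  }
};

static TcpZeroWindowTestSuite g_tcpZeroWindowTestSuite;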

To run the test, the usual way is

./test.py -s tcp-zero-window-test

PASS: TestSuite tcp-zero-window-test
1 of 1 tests passed (1 passed, 0 skipped, 0 failed, 0 crashed, 0 valgrind errors)

To see INFO messages, use a combination of ./waf shell and gdb (really useful):

./waf shell && gdb --args ./build/utils/ns3-dev-test-runner-debug --test-name=tcp-zero-window-test --stop-on-failure --fullness=QUICK --assert-on-failure --verbose

and then type run at the (gdb) prompt.

Note

This code runs without any reported errors; however, in real cases, when you discover a bug you should expect the newly written test to fail (this could indicate a well-written test and a badly written model, or a badly written test; hopefully the first situation). Correcting bugs is an iterative process. For instance, the commits created to make this test case run without errors are 11633:6b74df04cf44, (others to be merged).

Network Simulation Cradle

The Network Simulation Cradle (NSC) is a framework for wrapping real-world network code into simulators, allowing simulation of real-world behavior at little extra cost. This work has been validated by comparing situations using a test network with the same situations in the simulator. To date, it has been shown that the NSC is able to produce extremely accurate results. NSC supports four real world stacks: FreeBSD, OpenBSD, lwIP and Linux. Emphasis has been placed on not changing any of the network stacks by hand. Not a single line of code has been changed in the network protocol implementations of any of the above four stacks. However, a custom C parser was built to programmatically change source code.

NSC has previously been ported to ns-2 and OMNeT++, and was added to ns-3 in September 2008 (ns-3.2 release). This section describes the ns-3 port of NSC and how to use it.

To some extent, NSC has been superseded by the Linux kernel support within Direct Code Execution (DCE). However, NSC is still available through the bake build system. NSC supports Linux kernels 2.6.18 and 2.6.26, but newer versions of the kernel have not been ported.

Prerequisites

Presently, NSC has been tested and shown to work on these platforms: Linux i386 and Linux x86-64. NSC does not support powerpc. Use on FreeBSD or OS X is unsupported (although it may be able to work).

Building NSC requires the packages flex and bison.

Configuring and Downloading

As of ns-3.17 or later, NSC must either be downloaded separately from its own repository, or downloaded when using the bake build system of ns-3.

For ns-3.17 or later releases, when using bake, one must configure NSC as part of an “allinone” configuration, such as:

$ cd bake
$ python bake.py configure -e ns-allinone-3.19
$ python bake.py download
$ python bake.py build

Instead of a released version, one may use the ns-3 development version by specifying “ns-3-allinone” to the configure step above.

NSC may also be downloaded from its download site using Mercurial:

$ hg clone https://secure.wand.net.nz/mercurial/nsc

Prior to the ns-3.17 release, NSC was included in the allinone tarball and the released version did not need to be separately downloaded.

Building and validating

NSC may be built as part of the bake build process; alternatively, one may build NSC by itself using its build system; e.g.:

$ cd nsc-dev
$ python scons.py

Once NSC has been built either manually or through the bake system, change into the ns-3 source directory and try running the following configuration:

$ ./waf configure

If NSC has been previously built and found by waf, then you will see:

Network Simulation Cradle     : enabled

If NSC has not been found, you will see:

Network Simulation Cradle     : not enabled (NSC not found (see option --with-nsc))

In this case, you must pass the relative or absolute path to the NSC libraries with the “--with-nsc” configure option; e.g.

$ ./waf configure --with-nsc=/path/to/my/nsc/directory

For ns-3 releases prior to ns-3.17, when using the build.py script in the ns-3-allinone directory, NSC was built by default unless the platform did not support it. To explicitly disable it when building ns-3, type:

$ ./waf configure --enable-examples --enable-tests --disable-nsc

If waf detects NSC, then building ns-3 with NSC is performed the same way with waf as without it. Once ns-3 is built, try running the following test suite:

$ ./test.py -s ns3-tcp-interoperability

If NSC has been successfully built, the following test should show up in the results:

PASS TestSuite ns3-tcp-interoperability

This confirms that NSC is ready to use.

Usage

There are a few example files. Try:

$ ./waf --run tcp-nsc-zoo
$ ./waf --run tcp-nsc-lfn

These examples will deposit some .pcap files in your directory, which can be examined by tcpdump or wireshark.

Let’s look at the examples/tcp/tcp-nsc-zoo.cc file for some typical usage. How does it differ from using native ns-3 TCP? There is one main configuration line, when using NSC and the ns-3 helper API, that needs to be set:

InternetStackHelper internetStack;

internetStack.SetNscStack ("liblinux2.6.26.so");
// this switches nodes 0 and 1 to NSC's Linux 2.6.26 stack.
internetStack.Install (n.Get(0));
internetStack.Install (n.Get(1));

The key line is the SetNscStack call. This tells the InternetStack helper to aggregate instances of NSC TCP instead of native ns-3 TCP to the nodes on which the stack is subsequently installed. It is important that this function be called before calling the Install() function, as shown above.

Which stacks are available to use? Presently, the focus has been on Linux 2.6.18 and Linux 2.6.26 stacks for ns-3. To see which stacks were built, one can execute the following find command at the ns-3 top level directory:

$ find nsc -name "*.so" -type f
nsc/linux-2.6.18/liblinux2.6.18.so
nsc/linux-2.6.26/liblinux2.6.26.so

This tells us that we may either pass the library name liblinux2.6.18.so or liblinux2.6.26.so to the above configuration step.

Stack configuration

NSC TCP shares the same configuration attributes that are common across TCP sockets, as described above and documented in Doxygen.

Additionally, NSC TCP exports a lot of configuration variables into the ns-3 attributes system, via a sysctl-like interface. In the examples/tcp/tcp-nsc-zoo example, you can see the following configuration:

// this disables TCP SACK, wscale and timestamps on node 1
// (the attributes represent sysctl-values).
Config::Set ("/NodeList/1/$ns3::Ns3NscStack<linux2.6.26>/net.ipv4.tcp_sack",
             StringValue ("0"));
Config::Set ("/NodeList/1/$ns3::Ns3NscStack<linux2.6.26>/net.ipv4.tcp_timestamps",
             StringValue ("0"));
Config::Set ("/NodeList/1/$ns3::Ns3NscStack<linux2.6.26>/net.ipv4.tcp_window_scaling",
             StringValue ("0"));

These additional configuration variables are not available to native ns-3 TCP.

Also note that default values for TCP attributes in ns-3 TCP may differ from the nsc TCP implementation. Specifically in ns-3:

  1. TCP default MSS is 536
  2. TCP Delayed Ack count is 2

Therefore, when making comparisons between results obtained using NSC and ns-3 TCP, care must be taken to ensure these values are set appropriately. See examples/tcp/tcp-nsc-comparision.cc for an example.
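For instance, a sketch of how the native ns-3 defaults could be overridden before any socket is created is shown below; the values are illustrative only and should be chosen to match the NSC stack being compared against:

// Align ns-3 TCP with a typical Linux configuration (illustrative values).
Config::SetDefault ("ns3::TcpSocket::SegmentSize", UintegerValue (1460));
Config::SetDefault ("ns3::TcpSocket::DelAckCount", UintegerValue (1));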

NSC API

This subsection describes the API that NSC presents to ns-3 or any other simulator. NSC provides its API in the form of a number of classes that are defined in sim/sim_interface.h in the nsc directory.

  • INetStack This contains the ‘low level’ operations for the operating system network stack, e.g. input and output functions from and to the network stack (think of this as the ‘network driver interface’). There are also functions to create new TCP or UDP sockets.
  • ISendCallback This is called by NSC when a packet should be sent out to the network. The simulator should use this callback to re-inject the packet into the simulator so the actual data can be delivered/routed to its destination, where it will eventually be handed into Receive() (and eventually back to the receiver’s NSC instance via INetStack->if_receive()).
  • INetStreamSocket This is the structure defining a particular connection endpoint (file descriptor). It contains methods to operate on this endpoint, e.g. connect, disconnect, accept, listen, send_data/read_data, ...
  • IInterruptCallback This contains the wakeup callback, which is called by NSC whenever something of interest happens. Think of wakeup() as a replacement for the operating system’s wakeup function: whenever the operating system would wake up a process that has been waiting for an operation to complete (for example the TCP handshake during connect()), NSC invokes the wakeup() callback to allow the simulator to check for state changes in its connection endpoints.
ns-3 implementation

The ns-3 implementation makes use of the above NSC API, and is implemented as follows.

The three main parts are:

  • ns3::NscTcpL4Protocol: a subclass of Ipv4L4Protocol (and two nsc classes: ISendCallback and IInterruptCallback)
  • ns3::NscTcpSocketImpl: a subclass of TcpSocket
  • ns3::NscTcpSocketFactoryImpl: a factory to create new NSC sockets

src/internet/model/nsc-tcp-l4-protocol is the main class. Upon initialization, it loads an NSC network stack to use (via dlopen()). Each instance of this class may use a different stack. The stack (i.e., shared library) to use is set using the SetNscLibrary() method (at this time it is called indirectly via the internet stack helper). The NSC stack is then set up accordingly (timers, etc.). The NscTcpL4Protocol::Receive() function hands the packet it receives (which must be a complete TCP/IP packet) to the NSC stack for further processing. To be able to send packets, this class implements the NSC send_callback method. This method is called by NSC whenever the NSC stack wishes to send a packet out to the network. Its arguments are a raw buffer, containing a complete TCP/IP packet, and a length value. This method therefore has to convert the raw data to a Ptr<Packet> usable by ns-3. In order to avoid various IPv4 header issues, the NSC IP header is not included. Instead, the TCP header and the actual payload are put into the Ptr<Packet>; after this, the Packet is passed down to layer 3 for sending the packet out (no further special treatment is needed in the send code path).

This class calls ns3::NscTcpSocketImpl both from the nsc wakeup() callback and from the Receive path (to ensure that possibly queued data is scheduled for sending).

src/internet/model/nsc-tcp-socket-impl implements the nsc socket interface. Each instance has its own nscTcpSocket. Data passed to Send() will be handed to the NSC stack via m_nscTcpSocket->send_data() (and not to nsc-tcp-l4; this is the major difference compared to ns-3 TCP). The class also queues up data passed to Send() before the underlying descriptor has entered an ESTABLISHED state. This class is called from the nsc-tcp-l4 class, when the nsc-tcp-l4 wakeup() callback is invoked by NSC. nsc-tcp-socket-impl then checks the current connection state (SYN_SENT, ESTABLISHED, LISTEN...) and schedules appropriate callbacks as needed, e.g. a LISTEN socket will schedule Accept to see if a new connection must be accepted, an ESTABLISHED socket schedules any pending data for writing, schedules a read callback, etc.

Note that ns3::NscTcpSocketImpl does not interact with nsc-tcp directly: instead, data is redirected to nsc. nsc-tcp calls the nsc-tcp-sockets of a node when its wakeup callback is invoked by nsc.

Limitations
  • NSC only works on single-interface nodes; attempting to run it on a multi-interface node will cause a program error.
  • Cygwin and OS X PPC are not supported; OS X Intel is not supported but may work
  • The non-Linux stacks of NSC are not supported in ns-3
  • Not all socket API callbacks are supported

For more information, see this wiki page.

Internet Applications Module Documentation

The goal of this module is to hold all the Internet-specific applications, and most notably some very specific applications (e.g., ping) or daemons (e.g., radvd). Other non-Internet-specific applications such as packet generators are contained in other modules.

Model Description

The source code for the new module lives in the directory src/internet-apps.

Each application has its own goals, limitations and scope, which are briefly explained in the following.

V4Ping

This app mimics a “ping” (ICMP Echo) using IPv4. The application allows the following attributes to be set:

  • Remote address
  • Verbose mode
  • Packet size (default 56 bytes)
  • Packet interval (default 1 second)

Moreover, the user can access the measured rtt value (as a Traced Source).
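A hedged usage sketch with the corresponding helper (the destination address, node container and times below are placeholders):

// Ping 10.1.1.2 once per second with the default 56-byte payload.
V4PingHelper ping (Ipv4Address ("10.1.1.2"));
ping.SetAttribute ("Verbose", BooleanValue (true));

ApplicationContainer apps = ping.Install (nodes.Get (0));  // "nodes" is a NodeContainer
apps.Start (Seconds (1.0));
apps.Stop (Seconds (10.0));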

Ping6

This app mimics a “ping” (ICMP Echo) using IPv6. The application allows the following attributes to be set:

  • Remote address
  • Local address (sender address)
  • Packet size (default 56 bytes)
  • Packet interval (default 1 second)
  • Max number of packets to send
Radvd

This app mimics a “RADVD” daemon, i.e., the daemon responsible for IPv6 router advertisements. All the IPv6 routers should have a RADVD daemon installed.

The configuration of the Radvd application mimics the one of the radvd Linux program.

Examples and use

All the applications are extensively used in the top-level examples directories. The users are encouraged to check the scripts therein to have a clear overview of the various options and usage tricks.


Low-Rate Wireless Personal Area Network (LR-WPAN)

This chapter describes the implementation of ns-3 models for the low-rate, wireless personal area network (LR-WPAN) as specified by IEEE standard 802.15.4 (2006).

Model Description

The source code for the lr-wpan module lives in the directory src/lr-wpan.

Design

The model design closely follows the standard from an architectural standpoint.

_images/lr-wpan-arch.png

Architecture and scope of lr-wpan models

The grey areas in the figure (adapted from Fig 3. of IEEE Std. 802.15.4-2006) show the scope of the model.

The Spectrum NetDevice from Nicola Baldo is the basis for the implementation.

In the future, the implementation also plans to borrow from the ns-2 models developed by Zheng and Lee.

APIs

The APIs closely follow the standard, adapted for ns-3 naming conventions and idioms. The APIs are organized around the concept of service primitives as shown in the following figure adapted from Figure 14 of IEEE Std. 802.15.4-2006.

_images/lr-wpan-primitives.png

Service primitives

The APIs are organized around four conceptual services and service access points (SAP):

  • MAC data service (MCPS)
  • MAC management service (MLME)
  • PHY data service (PD)
  • PHY management service (PLME)

In general, primitives are standardized as follows (e.g. Sec 7.1.1.1.1 of IEEE 802.15.4-2006):

MCPS-DATA.request      (
                        SrcAddrMode,
                        DstAddrMode,
                        DstPANId,
                        DstAddr,
                        msduLength,
                        msdu,
                        msduHandle,
                        TxOptions,
                        SecurityLevel,
                        KeyIdMode,
                        KeySource,
                        KeyIndex
                        )

This maps to ns-3 classes and methods such as:

struct McpsDataRequestParameters
{
  uint8_t m_srcAddrMode;
  uint8_t m_dstAddrMode;
  ...
};

void
LrWpanMac::McpsDataRequest (McpsDataRequestParameters params)
{
...
}
MAC

The MAC at present implements the unslotted CSMA/CA variant, without beaconing. Currently there is no support for coordinators and the relevant APIs.

The implemented MAC is similar to Contiki’s NullMAC, i.e., a MAC without sleep features. The radio is assumed to be always active (receiving or transmitting), or completely shut down. Frame reception is not disabled while performing the CCA.

The main API supported is the data transfer API (McpsDataRequest/Indication/Confirm). CSMA/CA according to Std 802.15.4-2006, section 7.5.1.4 is supported. Frame reception and rejection according to Std 802.15.4-2006, section 7.5.6.2 is supported, including acknowledgements. Only short addressing is completely implemented. Various trace sources are supported, and trace sources can be hooked to sinks.
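As a rough usage sketch, a higher layer hands a packet to the MAC through this API as follows; the parameter structure and constants follow the lr-wpan-data.cc example and may differ slightly across releases, and dev is assumed to be a previously configured Ptr<LrWpanNetDevice>:

Ptr<Packet> p = Create<Packet> (20);        // 20 bytes of dummy payload

McpsDataRequestParams params;
params.m_srcAddrMode = SHORT_ADDR;
params.m_dstAddrMode = SHORT_ADDR;
params.m_dstPanId = 0;
params.m_dstAddr = Mac16Address ("00:02");  // placeholder short address
params.m_msduHandle = 0;
params.m_txOptions = TX_OPTION_ACK;         // request an acknowledgment

dev->GetMac ()->McpsDataRequest (params, p);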

PHY

The physical layer components consist of a Phy model, an error rate model, and a loss model. The error rate model presently models the error rate for the IEEE 802.15.4 2.4 GHz AWGN channel for OQPSK; the model description can be found in IEEE Std 802.15.4-2006, section E.4.1.7. The Phy model is based on SpectrumPhy and it follows the specification described in section 6 of IEEE Std 802.15.4-2006. It models PHY service specifications, PPDU formats, PHY constants and PIB attributes. It currently only supports the transmit power spectral density mask specified in 2.4 GHz per section 6.5.3.1. The noise power density assumes uniformly distributed thermal noise across the frequency bands. The loss model can fully utilize all existing simple (non-spectrum PHY) loss models. The Phy model uses the existing single spectrum channel model. The physical layer is modeled at the packet level, that is, no preamble/SFD detection is done. Packet reception starts with the first bit of the preamble (which is not modeled), if the SNR is more than -5 dB; see IEEE Std 802.15.4-2006, appendix E, Figure E.2. Reception of the packet finishes after the packet has been completely transmitted. Other packets arriving during reception add to the interference/noise.

Currently the receiver sensitivity is set to a fixed value of -106.58 dBm. This corresponds to a packet error rate of 1% for 20 byte reference packets for this signal power, according to IEEE Std 802.15.4-2006, section 6.1.7. In the future we will provide support for changing the sensitivity to different values.

_images/802-15-4-per-sens.png

Packet error rate vs. signal power

NetDevice

Although it is expected that other technology profiles (such as 6LoWPAN and ZigBee) will write their own NetDevice classes, a basic LrWpanNetDevice is provided, which encapsulates the common operations of creating a generic LrWpan device and hooking things together.

Scope and Limitations

Future versions of this document will contain a PICS proforma similar to Appendix D of IEEE 802.15.4-2006. The current emphasis is on the unslotted mode of 802.15.4 operation for use in Zigbee, and the scope is limited to enabling a single mode (CSMA/CA) with basic data transfer capabilities. Association with PAN coordinators is not yet supported, nor the use of extended addressing. Interference is modeled as AWGN but this is currently not thoroughly tested.

The NetDevice Tx queue is not limited, i.e., packets are never dropped due to the queue becoming full. They may be dropped due to excessive transmission retries or channel access failure.

References

  • Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications for Low-Rate Wireless Personal Area Networks (WPANs), IEEE Computer Society, IEEE Std 802.15.4-2006, 8 September 2006.
  • Zheng and Myung J. Lee, “A comprehensive performance study of IEEE 802.15.4,” Sensor Network Operations, IEEE Press, Wiley Interscience, Chapter 4, pp. 218-237, 2006.

Usage

Enabling lr-wpan

Add lr-wpan to the list of modules built with ns-3.

Helper

The helper is patterned after other device helpers. In particular, tracing (ascii and pcap) is enabled similarly, and enabling of all lr-wpan log components is performed similarly. Use of the helper is exemplified in examples/lr-wpan-data.cc. For ascii tracing, the transmit and receive traces are hooked at the Mac layer.

The default propagation loss model added to the channel, when this helper is used, is the LogDistancePropagationLossModel with default parameters.
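A hedged sketch of typical helper usage follows; the calls mirror other device helpers and examples/lr-wpan-data.cc should be taken as the authoritative reference:

NodeContainer nodes;
nodes.Create (2);

LrWpanHelper lrWpanHelper;
// Install LrWpanNetDevices (and the default channel with LogDistance loss) on the nodes.
NetDeviceContainer devices = lrWpanHelper.Install (nodes);

// Enable pcap and ascii tracing, hooked at the MAC layer.
lrWpanHelper.EnablePcapAll ("lr-wpan", true);
AsciiTraceHelper ascii;
lrWpanHelper.EnableAsciiAll (ascii.CreateFileStream ("lr-wpan.tr"));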

Examples

The following examples have been written, which can be found in src/lr-wpan/examples/:

  • lr-wpan-data.cc: A simple example showing end-to-end data transfer.
  • lr-wpan-error-distance-plot.cc: An example to plot variations of the packet success ratio as a function of distance.
  • lr-wpan-error-model-plot.cc: An example to test the phy.
  • lr-wpan-packet-print.cc: An example to print out the MAC header fields.
  • lr-wpan-phy-test.cc: An example to test the phy.

In particular, the module enables a very simplified end-to-end data transfer scenario, implemented in lr-wpan-data.cc. The figure shows a sequence of events that are triggered when the MAC receives a DataRequest from the higher layer. It invokes a Clear Channel Assessment (CCA) from the PHY, and if successful, sends the frame down to the PHY where it is transmitted over the channel and results in a DataIndication on the peer node.

_images/lr-wpan-data-example.png

Data example for simple LR-WPAN data transfer end-to-end

The example lr-wpan-error-distance-plot.cc plots the packet success ratio (PSR) as a function of distance, using the default LogDistance propagation loss model and the 802.15.4 error model. The channel (default 11), packet size (default 20 bytes) and transmit power (default 0 dBm) can be varied by command line arguments. The program outputs a file named 802.15.4-psr-distance.plt. Loading this file into gnuplot yields a file 802.15.4-psr-distance.eps, which can be converted to pdf or other formats. The default output is shown below.
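For instance, with the default parameters the plot can be reproduced with:

$ ./waf --run lr-wpan-error-distance-plot
$ gnuplot 802.15.4-psr-distance.plt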

_images/802-15-4-psr-distance.png

Default output of the program lr-wpan-error-distance-plot.cc

Tests

The following tests have been written, which can be found in src/lr-wpan/tests/:

  • lr-wpan-ack-test.cc: Check that acknowledgments are being used and issued in the correct order.
  • lr-wpan-collision-test.cc: Test correct reception of packets with interference and collisions.
  • lr-wpan-error-model-test.cc: Check that the error model gives predictable values.
  • lr-wpan-packet-test.cc: Test the 802.15.4 MAC header/trailer classes
  • lr-wpan-pd-plme-sap-test.cc: Test the PLME and PD SAP per IEEE 802.15.4
  • lr-wpan-spectrum-value-helper-test.cc: Test that the conversion between power (expressed as a scalar quantity) and spectral power, and back again, falls within a 25% tolerance across the range of possible channels and input powers.

Validation

The model has not been validated against real hardware. The error model has been validated against the data in IEEE Std 802.15.4-2006, section E.4.1.7 (Figure E.2). The MAC behavior (CSMA backoff) has been validated by hand against expected behavior. The below plot is an example of the error model validation and can be reproduced by running lr-wpan-error-model-plot.cc:

_images/802-15-4-ber.png

Default output of the program lr-wpan-error-model-plot.cc

LTE Module

Design Documentation

Overview

An overview of the LTE-EPC simulation model is depicted in the figure Overview of the LTE-EPC simulation model. There are two main components:

  • the LTE Model. This model includes the LTE Radio Protocol stack (RRC, PDCP, RLC, MAC, PHY). These entities reside entirely within the UE and the eNB nodes.
  • the EPC Model. This model includes core network interfaces, protocols and entities. These entities and protocols reside within the SGW, PGW and MME nodes, and partially within the eNB nodes.
_images/epc-topology.png

Overview of the LTE-EPC simulation model

Design Criteria

LTE Model

The LTE model has been designed to support the evaluation of the following aspects of LTE systems:

  • Radio Resource Management
  • QoS-aware Packet Scheduling
  • Inter-cell Interference Coordination
  • Dynamic Spectrum Access

In order to model LTE systems to a level of detail that is sufficient to allow a correct evaluation of the above mentioned aspects, the following requirements have been considered:

  1. At the radio level, the granularity of the model should be at least that of the Resource Block (RB). In fact, this is the fundamental unit being used for resource allocation. Without this minimum level of granularity, it is not possible to model accurately packet scheduling and inter-cell-interference. The reason is that, since packet scheduling is done on a per-RB basis, an eNB might transmit on a subset only of all the available RBs, hence interfering with other eNBs only on those RBs where it is transmitting. Note that this requirement rules out the adoption of a system level simulation approach, which evaluates resource allocation only at the granularity of call/bearer establishment.
  2. The simulator should scale up to tens of eNBs and hundreds of User Equipments (UEs). This rules out the use of a link level simulator, i.e., a simulator whose radio interface is modeled with a granularity up to the symbol level. This is because to have a symbol level model it is necessary to implement all the PHY layer signal processing, whose huge computational complexity severely limits simulation. In fact, link-level simulators are normally limited to a single eNB and one or a few UEs.
  3. It should be possible within the simulation to configure different cells so that they use different carrier frequencies and system bandwidths. The bandwidth used by different cells should be allowed to overlap, in order to support dynamic spectrum licensing solutions such as those described in [Ofcom2600MHz] and [RealWireless]. The calculation of interference should handle appropriately this case.
  4. To be more representative of the LTE standard, as well as to be as close as possible to real-world implementations, the simulator should support the MAC Scheduler API published by the FemtoForum [FFAPI]. This interface is expected to be used by femtocell manufacturers for the implementation of scheduling and Radio Resource Management (RRM) algorithms. By introducing support for this interface in the simulator, we make it possible for LTE equipment vendors and operators to test in a simulative environment exactly the same algorithms that would be deployed in a real system.
  5. The LTE simulation model should contain its own implementation of the API defined in [FFAPI]. Neither binary nor data structure compatibility with vendor-specific implementations of the same interface are expected; hence, a compatibility layer should be interposed whenever a vendor-specific MAC scheduler is to be used with the simulator. This requirement is necessary to allow the simulator to be independent from vendor-specific implementations of this interface specification. We note that [FFAPI] is a logical specification only, and its implementation (e.g., translation to some specific programming language) is left to the vendors.
  6. The model is to be used to simulate the transmission of IP packets by the upper layers. With this respect, it shall be considered that in LTE the Scheduling and Radio Resource Management do not work with IP packets directly, but rather with RLC PDUs, which are obtained by segmentation and concatenation of IP packets done by the RLC entities. Hence, these functionalities of the RLC layer should be modeled accurately.
EPC Model

The main objective of the EPC model is to provide the means for the simulation of end-to-end IP connectivity over the LTE model. To this aim, it supports the interconnection of multiple UEs to the Internet, via a radio access network of multiple eNBs connected to a single SGW/PGW node, as shown in Figure Overview of the LTE-EPC simulation model.

The following design choices have been made for the EPC model:

  1. The only Packet Data Network (PDN) type supported is IPv4.
  2. The SGW and PGW functional entities are implemented within a single node, which is hence referred to as the SGW/PGW node.
  3. The scenarios with inter-SGW mobility are not of interest. Hence, a single SGW/PGW node will be present in all simulation scenarios.
  4. A requirement for the EPC model is that it can be used to simulate the end-to-end performance of realistic applications. Hence, it should be possible to use with the EPC model any regular ns-3 application working on top of TCP or UDP.
  5. Another requirement is the possibility of simulating network topologies with the presence of multiple eNBs, some of which might be equipped with a backhaul connection with limited capabilities. In order to simulate such scenarios, the user data plane protocols being used between the eNBs and the SGW/PGW should be modeled accurately.
  6. It should be possible for a single UE to use different applications with different QoS profiles. Hence, multiple EPS bearers should be supported for each UE. This includes the necessary classification of TCP/UDP traffic over IP done at the UE in the uplink and at the PGW in the downlink.
  8. The focus of the EPC model is on simulations of active users in ECM connected mode. Hence, all the functionality that is only relevant for ECM idle mode (in particular, tracking area update and paging) is not modeled at all.
  8. The focus of the EPC model is on simulations of active users in ECM connected mode. Hence, all the functionality that is only relevant for ECM idle mode (in particular, tracking area update and paging) are not modeled at all.
  9. The model should allow the possibility to perform an X2-based handover between two eNBs.

Architecture

LTE Model
UE architecture

The architecture of the LTE radio protocol stack model of the UE is represented in the figures LTE radio protocol stack architecture for the UE on the data plane and LTE radio protocol stack architecture for the UE on the control plane which highlight respectively the data plane and the control plane.

_images/lte-arch-ue-data.png

LTE radio protocol stack architecture for the UE on the data plane

_images/lte-arch-ue-ctrl.png

LTE radio protocol stack architecture for the UE on the control plane

The architecture of the PHY/channel model of the UE is represented in figure PHY and channel model architecture for the UE.

_images/lte-ue-phy.png

PHY and channel model architecture for the UE

eNB architecture

The architecture of the LTE radio protocol stack model of the eNB is represented in the figures LTE radio protocol stack architecture for the eNB on the data plane and LTE radio protocol stack architecture for the eNB on the control plane which highlight respectively the data plane and the control plane.

_images/lte-arch-enb-data.png

LTE radio protocol stack architecture for the eNB on the data plane

_images/lte-arch-enb-ctrl.png

LTE radio protocol stack architecture for the eNB on the control plane

The architecture of the PHY/channel model of the eNB is represented in figure PHY and channel model architecture for the eNB.

_images/lte-enb-phy.png

PHY and channel model architecture for the eNB

EPC Model
EPC data plane

In Figure LTE-EPC data plane protocol stack, we represent the end-to-end LTE-EPC data plane protocol stack as it is modeled in the simulator. From the figure, it is evident that the biggest simplification introduced in the data plane model is the inclusion of the SGW and PGW functionality within a single SGW/PGW node, which removes the need for the S5 or S8 interfaces specified by 3GPP. On the other hand, for both the S1-U protocol stack and the LTE radio protocol stack all the protocol layers specified by 3GPP are present.

_images/lte-epc-e2e-data-protocol-stack.png

LTE-EPC data plane protocol stack

EPC control plane

The architecture of the implementation of the control plane model is shown in figure EPC control model. The control interfaces that are modeled explicitly are the S1-AP, the X2-AP and the S11 interfaces.

We note that the S1-AP and the S11 interfaces are modeled in a simplified fashion, by using just one pair of interface classes to model the interaction between entities that reside on different nodes (the eNB and the MME for the S1-AP interface, and the MME and the SGW for the S11 interface). In practice, this means that the primitives of these interfaces are mapped to a direct function call between the two objects. On the other hand, the X2-AP interface is being modeled using protocol data units sent over an X2 link (modeled as a point-to-point link); for this reason, the X2-AP interface model is more realistic.

_images/epc-ctrl-arch.png

EPC control model

Channel and Propagation

For channel modeling purposes, the LTE module uses the SpectrumChannel interface provided by the spectrum module. At the time of this writing, two implementations of such interface are available: SingleModelSpectrumChannel and MultiModelSpectrumChannel; the LTE module requires the use of MultiModelSpectrumChannel in order to work properly. This is because of the need to support different frequency and bandwidth configurations. All the propagation models supported by MultiModelSpectrumChannel can be used within the LTE module.

Use of the Buildings model with LTE

The recommended propagation model to be used with the LTE module is the one provided by the Buildings module, which was in fact designed specifically with LTE in mind (though it can be used with other wireless technologies as well). Please refer to the documentation of the Buildings module for generic information on the propagation model it provides.

In this section we will highlight some considerations that specifically apply when the Buildings module is used together with the LTE module.

The naming convention used in the following will be:

  • User equipment: UE
  • Macro Base Station: MBS
  • Small cell Base Station (e.g., pico/femtocell): SC

The LTE module considers FDD only, and implements downlink and uplink propagation separately. As a consequence, the following pathloss computations are performed:

  • MBS <-> UE (indoor and outdoor)
  • SC (indoor and outdoor) <-> UE (indoor and outdoor)

The LTE model does not provide the following pathloss computations:

  • UE <-> UE
  • MBS <-> MBS
  • MBS <-> SC
  • SC <-> SC

The Buildings model does not know the actual type of the node; i.e., it is not aware of whether a transmitter node is a UE, an MBS, or an SC. Rather, the Buildings model only cares about the position of the node: whether it is indoor or outdoor, and what its z coordinate is with respect to the rooftop level. As a consequence, for an eNB node that is placed outdoors and at a z coordinate above the rooftop level, the propagation models typical of an MBS will be used by the Buildings module. Conversely, for an eNB that is placed outdoors but below the rooftop, or indoors, the propagation models typical of pico and femtocells will be used.

For communications involving at least one indoor node, the corresponding wall penetration losses will be calculated by the Buildings model. This covers the following use cases:

  • MBS <-> indoor UE
  • outdoor SC <-> indoor UE
  • indoor SC <-> indoor UE
  • indoor SC <-> outdoor UE

Please refer to the documentation of the Buildings module for details on the actual models used in each case.
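A hedged configuration sketch is shown below; the attribute name and helper calls follow the buildings and lte modules and should be verified against the Doxygen of the release in use, and enbNodes/ueNodes are assumed to be NodeContainers that already have a mobility model installed:

Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();

// Use a buildings-aware pathloss model instead of the default one.
lteHelper->SetAttribute ("PathlossModel",
                         StringValue ("ns3::HybridBuildingsPropagationLossModel"));

// Register the nodes with the buildings module so that indoor/outdoor
// positions and wall penetration losses are taken into account.
BuildingsHelper::Install (enbNodes);
BuildingsHelper::Install (ueNodes);
BuildingsHelper::MakeMobilityModelConsistent ();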

Fading Model

The LTE module includes a trace-based fading model derived from the one developed during GSoC 2010 [Piro2011]. The main characteristic of this model is the fact that the fading evaluation during simulation run-time is based on pre-calculated traces. This is done to limit the computational complexity of the simulator. On the other hand, it needs large structures for storing the traces; therefore, a trade-off between the number of possible parameters and the memory occupancy has to be found. The most important parameters are:

  • users’ speed: relative speed between users (affects the Doppler frequency, which in turn affects the time-variance property of the fading)
  • number of taps (and relative power): number of multiple paths considered, which affects the frequency property of the fading.
  • time granularity of the trace: sampling time of the trace.
  • frequency granularity of the trace: number of values in frequency to be evaluated.
  • length of trace: ideally as long as the simulation time; it might be reduced by a windowing mechanism.
  • number of users: number of independent traces to be used (ideally one trace per user).

With respect to the mathematical channel propagation model, we suggest the one provided by the rayleighchan function of Matlab, since it provides a well-accepted channel model in both the time and frequency domains. For more information, the reader is referred to [mathworks].

The simulator provides a matlab script (src/lte/model/fading-traces/fading-trace-generator.m) for generating traces based on the format used by the simulator. In detail, the channel object created with the rayleighchan function is used for filtering a discrete-time impulse signal in order to obtain the channel impulse response. The filtering is repeated for different TTI, thus yielding subsequent time-correlated channel responses (one per TTI). The channel response is then processed with the pwelch function for obtaining its power spectral density values, which are then saved in a file with the proper format compatible with the simulator model.

Since the number of variables is pretty high, generating traces that consider all of them might produce a large number of traces of huge size. Therefore, we considered the following assumptions on the parameters, based on the 3GPP fading propagation conditions (see Annex B.2 of [TS36104]):

  • users’ speed: typically only a few discrete values are considered, i.e.:
    • 0 and 3 kmph for pedestrian scenarios
    • 30 and 60 kmph for vehicular scenarios
    • 0, 3, 30 and 60 for urban scenarios
  • channel taps: only a limited number of sets of channel taps are normally considered, for example three models are mentioned in Annex B.2 of [TS36104].
  • time granularity: we need one fading value per TTI, i.e., every 1 ms (as this is the granularity in time of the ns-3 LTE PHY model).
  • frequency granularity: we need one fading value per RB (which is the frequency granularity of the spectrum model used by the ns-3 LTE model).
  • length of the trace: the simulator includes the windowing mechanism implemented during the GSoC 2011, which consists of picking up a window of the trace each window length in a random fashion.
  • per-user fading process: users share the same fading trace, but for each user a different starting point in the trace is randomly picked up. This choice was made to avoid the need to provide one fading trace per user.

According to the parameters we considered, the following formula express in detail the total size S_{traces} of the fading traces:

S_{traces} = S_{sample} \times N_{RB} \times \frac{T_{trace}}{T_{sample}} \times N_{scenarios} \mbox{ [bytes]}

where S_{sample} is the size in bytes of the sample (e.g., 8 in case of double precision, 4 in case of float precision), N_{RB} is the number of RB or set of RBs to be considered, T_{trace} is the total length of the trace, T_{sample} is the time resolution of the trace (1 ms), and N_{scenarios} is the number of fading scenarios that are desired (i.e., combinations of different sets of channel taps and user speed values). We provide traces for 3 different scenarios one for each taps configuration defined in Annex B.2 of [TS36104]:

  • Pedestrian: with nodes’ speed of 3 kmph.
  • Vehicular: with nodes’ speed of 60 kmph.
  • Urban: with nodes’ speed of 3 kmph.

hence N_{scenarios} = 3. All traces have T_{trace} = 10 s and N_{RB} = 100. This results in a total of 24 MB of traces.
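Plugging in these values (double precision samples, hence S_{sample} = 8 bytes):

S_{traces} = 8 \times 100 \times \frac{10 \mbox{ s}}{1 \mbox{ ms}} \times 3 = 24 \times 10^{6} \mbox{ bytes} = 24 \mbox{ MB}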

Antennas

Being based on the SpectrumPhy, the LTE PHY model supports antenna modeling via the ns-3 AntennaModel class. Hence, any model based on this class can be associated with any eNB or UE instance. For instance, the use of the CosineAntennaModel associated with an eNB device makes it possible to model one sector of a macro base station. By default, the IsotropicAntennaModel is used for both eNBs and UEs.
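As an example, a hedged sketch of configuring a sector antenna on the eNBs created by the helper is shown below (method and attribute names as exposed by LteHelper and CosineAntennaModel; verify against the Doxygen):

Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();

// Subsequently created eNB devices get a 60-degree cosine antenna pointing at 0 degrees.
lteHelper->SetEnbAntennaModelType ("ns3::CosineAntennaModel");
lteHelper->SetEnbAntennaModelAttribute ("Orientation", DoubleValue (0.0));
lteHelper->SetEnbAntennaModelAttribute ("Beamwidth", DoubleValue (60.0));
lteHelper->SetEnbAntennaModelAttribute ("MaxGain", DoubleValue (0.0));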

PHY

Overview

The physical layer model provided in this LTE simulator is based on the one described in [Piro2011], with the following modifications. The model now includes the inter-cell interference calculation and the simulation of uplink traffic, including both packet transmission and CQI generation.

Subframe Structure

The subframe is divided into control and data part as described in Figure LTE subframe division..

_images/lte-subframe-structure.png

LTE subframe division.

Considering the granularity of the simulator, which is based on RBs, the control and the reference signaling have to be modeled within this constraint. According to the standard [TS36211], the downlink control frame starts at the beginning of each subframe and lasts up to three symbols across the whole system bandwidth, where the actual duration is provided by the Physical Control Format Indicator Channel (PCFICH). The information on the allocation is then mapped in the remaining resources up to the duration defined by the PCFICH, in the so-called Physical Downlink Control Channel (PDCCH). A PDCCH transports a single message called Downlink Control Information (DCI) coming from the MAC layer, where the scheduler indicates the resource allocation for a specific user. The PCFICH and PDCCH are modeled with the transmission of a control frame of a fixed duration of 3/14 of a millisecond spanning the whole available bandwidth, since the scheduler does not estimate the size of the control region. This implies that a single transmission block models the entire control frame with a fixed power (i.e., the one used for the PDSCH) across all the available RBs. Thanks to this feature, this transmission also represents a valuable support for the Reference Signal (RS). This allows an evaluation of the interference scenario at every TTI, since all the eNBs are transmitting (simultaneously) the control frame over their respective available bandwidths. We note that the model does not include power boosting, since it does not reflect any improvement in the implemented model of the channel estimation.

The Sounding Reference Signal (SRS) is modeled similarly to the downlink control frame. The SRS is periodically placed in the last symbol of the subframe across the whole system bandwidth. The RRC module already includes an algorithm for dynamically assigning the periodicity as a function of the actual number of UEs attached to an eNB, according to the UE-specific procedure (see Section 8.2 of [TS36213]).

MAC to Channel delay

To model the latency of real MAC and PHY implementations, the PHY model simulates a MAC-to-channel delay in multiples of TTIs (1ms). The transmission of both data and control packets are delayed by this amount.

CQI feedback

The generation of CQI feedback is done according to what is specified in [FFAPI]. In detail, we considered the generation of periodic wideband CQI (i.e., a single value of channel state that is deemed representative of all RBs in use) and inband CQIs (i.e., a set of values representing the channel state for each RB).

The CQI index to be reported is obtained by first obtaining a SINR measurement and then passing this SINR measurement to the Adaptive Modulation and Coding module which will map it to the CQI index.

In downlink, the SINR used to generate CQI feedback can be calculated in two different ways:

  1. Ctrl method: SINR is calculated combining the signal power from the reference signals (which in the simulation is equivalent to the PDCCH) and the interference power from the PDCCH. This approach results in considering any neighboring eNB as an interferer, regardless of whether this eNB is actually performing any PDSCH transmission, and regardless of the power and RBs used for any interfering PDSCH transmissions.
  2. Mixed method: SINR is calculated combining the signal power from the reference signals (which in the simulation is equivalent to the PDCCH) and the interference power from the PDSCH. This approach results in considering as interferers only those neighboring eNBs that are actively transmitting data on the PDSCH, and makes it possible to generate inband CQIs that account for different amounts of interference on different RBs according to the actual interference level. In the case that no PDSCH transmission is performed by any eNB, this method considers the interference to be zero, i.e., the SINR will be calculated as the ratio of signal to noise only.

To switch between these two CQI generation approaches, the LteHelper::UsePdschForCqiGeneration attribute needs to be configured: false for the first approach and true for the second one (true is the default value):

Config::SetDefault ("ns3::LteHelper::UsePdschForCqiGeneration", BooleanValue (true));

In uplink, two types of CQIs are implemented:

  • SRS based, periodically sent by the UEs.
  • PUSCH based, calculated from the actual transmitted data.

The scheduler interface includes an attribute called UlCqiFilter for managing the filtering of the CQIs according to their nature; in detail:

  • SRS_UL_CQI for storing only SRS based CQIs.
  • PUSCH_UL_CQI for storing only PUSCH based CQIs.
  • ALL_UL_CQI for storing all the CQIs received.

It has to be noted that the FfMacScheduler provides only the interface; it is up to the actual scheduler implementation to include the code for managing these attributes (see the scheduler-related sections for more information on this matter).
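
As a sketch of how a simulation script could select the filtering behavior (assuming the attribute is registered on ns3::FfMacScheduler as an EnumValue, which may differ across ns-3 releases):

Config::SetDefault ("ns3::FfMacScheduler::UlCqiFilter", EnumValue (FfMacScheduler::SRS_UL_CQI));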

Interference Model

The PHY model is based on the well-known Gaussian interference models, according to which the powers of interfering signals (in linear units) are summed up together to determine the overall interference power.
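
The following self-contained snippet is a minimal illustration (not ns-3 code) of this principle: interfering powers are accumulated in linear units and the SINR is then the signal power divided by noise plus total interference.

#include <cmath>
#include <iostream>
#include <vector>

// Gaussian interference model on a single RB: powers add up in the linear domain.
double
LinearSinr (double signalMw, const std::vector<double> &interferersMw, double noiseMw)
{
  double interferenceMw = 0.0;
  for (double p : interferersMw)
    {
      interferenceMw += p;
    }
  return signalMw / (noiseMw + interferenceMw);
}

int
main ()
{
  double sinr = LinearSinr (1e-6, {2e-8, 5e-8}, 1e-9);
  std::cout << "SINR = " << 10.0 * std::log10 (sinr) << " dB" << std::endl;
  return 0;
}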

The sequence diagram of Figure Sequence diagram of the PHY interference calculation procedure shows how interfering signals are processed to calculate the SINR, and how SINR is then used for the generation of CQI feedback.

_images/lte-phy-interference.png

Sequence diagram of the PHY interference calculation procedure

LTE Spectrum Model

The usage of the radio spectrum by eNBs and UEs in LTE is described in [TS36101]. In the simulator, radio spectrum usage is modeled as follows. Let f_c denote the LTE Absolute Radio Frequency Channel Number, which identifies the carrier frequency on a 100 kHz raster; furthermore, let B be the Transmission Bandwidth Configuration in number of Resource Blocks. For every pair (f_c, B) used in the simulation we define a corresponding SpectrumModel using the Spectrum framework described in [Baldo2009]. f_c and B can be configured for every eNB instantiated in the simulation; hence, each eNB can use a different spectrum model. Every UE will automatically use the spectrum model of the eNB it is attached to. Using the MultiModelSpectrumChannel described in [Baldo2009], the interference among eNBs that use different spectrum models is properly accounted for. This makes it possible to simulate dynamic spectrum access policies, such as for example the spectrum licensing policies that are discussed in [Ofcom2600MHz].
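
As an illustrative sketch, the (f_c, B) pair can be set per eNB through the device attributes before installing the eNB devices; the attribute names below (DlEarfcn, UlEarfcn, DlBandwidth, UlBandwidth) are assumptions based on the LteEnbNetDevice attribute set and should be verified against the Doxygen documentation:

lteHelper->SetEnbDeviceAttribute ("DlEarfcn", UintegerValue (100));
lteHelper->SetEnbDeviceAttribute ("UlEarfcn", UintegerValue (18100));
lteHelper->SetEnbDeviceAttribute ("DlBandwidth", UintegerValue (50)); // 50 RBs = 10 MHz
lteHelper->SetEnbDeviceAttribute ("UlBandwidth", UintegerValue (50));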

Data PHY Error Model

The simulator includes an error model of the data plane (i.e., PDSCH and PUSCH) according to the standard link-to-system mapping (LSM) techniques. This choice is aligned with the standard system simulation methodology for OFDMA radio transmission technology. Thanks to LSM we are able to maintain a good level of accuracy while at the same time limiting the increase in computational complexity. LSM is based on mapping the single-link performance obtained by means of link level simulators onto system (in our case network) simulators. In particular, the link level simulator is used for generating the performance of a single link from a PHY layer perspective, usually in terms of code block error rate (BLER), under specific static conditions. LSM allows the usage of these results in more complex scenarios, typical of system/network simulators, where we have multiple links, interference and “colored” channel propagation phenomena (e.g., frequency selective fading).

To this end, the Vienna LTE Simulator [ViennaLteSim] has been used for the extraction of the link layer performance, and the Mutual Information Based Effective SINR (MIESM) has been adopted as the LSM mapping function, reusing part of the work recently published by the Signet Group of the University of Padua [PaduaPEM].

MIESM

The specific LSM method adopted is the one based on the usage of a mutual information metric, commonly referred to as the mutual information per coded bit (MIB, or MMIB when a mean of multiple MIBs is involved). Another option would be the Exponential ESM (EESM); however, recent studies demonstrate that MIESM outperforms EESM in terms of accuracy [LozanoCost].

_images/miesm_scheme.png

MIESM computational procedure diagram

The mutual information (MI) is dependent on the constellation mapping and can be calculated on a per transport block (TB) basis, by evaluating the MI over the symbols and the subcarriers. However, this would be too complex for a network simulator. Hence, in our implementation a flat channel response within the RB has been considered; therefore the overall MI of a TB is calculated by averaging the MI evaluated for each RB used in the TB. In detail, the implemented scheme is depicted in Figure MIESM computational procedure diagram, where we see that the model starts by evaluating the MI value for each RB, represented in the figure by the SINR samples. Then the equivalent MI is evaluated on a per-TB basis by averaging the MI values. Finally, a further step has to be done, since the link level simulator returns the performance of the link in terms of block error rate (BLER) in an additive white Gaussian noise (AWGN) channel, where the blocks are the code blocks (CBs) independently encoded/decoded by the turbo encoder. On this matter, the standard 3GPP segmentation scheme has been used for estimating the actual CB size (described in section 5.1.2 of [TS36212]). This scheme divides the TB into N_{K-} blocks of size K_- and N_{K+} blocks of size K_+. Therefore the overall TB BLER (TBLER) can be expressed as

TBLER = 1- \prod\limits_{i=1}^{C}(1-CBLER_i)

where CBLER_i is the BLER of CB i obtained according to the link level simulator CB BLER curves. For estimating CBLER_i, the MI evaluation has been implemented according to its numerical approximation defined in [wimaxEmd]. Moreover, for reducing the complexity of the computation, the approximation has been converted into lookup tables. In detail, a Gaussian cumulative model with three parameters has been used for approximating the AWGN BLER curves, which provides a close fit to the standard AWGN performance; in formula:

CBLER_i = \frac{1}{2}\left[1-erf\left(\frac{x-b_{ECR}}{\sqrt{2}c_{ECR}} \right) \right]

where x is the MI of the TB, b_{ECR} represents the “transition center” and c_{ECR} is related to the “transition width” of the Gaussian cumulative distribution for each Effective Code Rate (ECR), which is the actual transmission rate according to the channel coding and MCS. In order to limit the computational complexity of the model we considered only a subset of the possible ECRs; in fact, there would potentially be 5076 possible ECRs (i.e., 27 MCSs and 188 CB sizes). In this respect, we limit the CB sizes to some representative values (i.e., 40, 140, 160, 256, 512, 1024, 2048, 4032, 6144), while for the others the most conservative approximation is used (i.e., the closest smaller CB size available with respect to the real one). This choice is aligned with the typical performance of turbo codes, where the CB size does not strongly impact the BLER. However, it has to be noted that for CB sizes lower than 1000 bits the effect might be relevant (i.e., up to 2 dB); therefore, we adopt this unbalanced sampling interval in order to have more precision where it is necessary. This behaviour is confirmed by the figures presented in the Annex Section.
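
The following self-contained snippet is a minimal numerical sketch (not ns-3 code) of the two formulas above; the fit parameters b_{ECR} and c_{ECR} used here are placeholders, not values taken from the actual curves.

#include <cmath>
#include <iostream>
#include <vector>

// CBLER_i from the Gaussian cumulative fit, and TBLER = 1 - prod(1 - CBLER_i).
double
CbBler (double mi, double bEcr, double cEcr)
{
  return 0.5 * (1.0 - std::erf ((mi - bEcr) / (std::sqrt (2.0) * cEcr)));
}

int
main ()
{
  double bEcr = 0.60, cEcr = 0.04;          // placeholder fit parameters
  std::vector<double> cbMi = {0.62, 0.65};  // mutual information of each code block
  double pAllOk = 1.0;
  for (double mi : cbMi)
    {
      pAllOk *= (1.0 - CbBler (mi, bEcr, cEcr)); // probability that every CB is decoded
    }
  std::cout << "TBLER = " << 1.0 - pAllOk << std::endl;
  return 0;
}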

BLER Curves

In this respect, we reused part of the curves obtained within [PaduaPEM]. In detail, we introduced the CB size dependency in the CB BLER curves with the support of the developers of [PaduaPEM] and of the Vienna LTE Simulator. In fact, the released module provides the link layer performance only as a function of the MCS (i.e., with a given fixed ECR). In detail, the new error rate curves have been evaluated with a simulation campaign on the link level simulator for a single link with AWGN noise and for CB sizes of 104, 140, 256, 512, 1024, 2048, 4032 and 6144 bits. These curves have been mapped onto the Gaussian cumulative model formula presented above in order to obtain the corresponding b_{ECR} and c_{ECR} parameters.

The BLER performance of all MCSs obtained with the link level simulator is plotted in the following figures (blue lines) together with the corresponding mapping to the Gaussian cumulative distribution (red dashed lines).

_images/MCS_1_4.png

BLER for MCS 1, 2, 3 and 4.

_images/MCS_5_8.png

BLER for MCS 5, 6, 7 and 8.

_images/MCS_9_12.png

BLER for MCS 9, 10, 11 and 12.

_images/MCS_13_16.png

BLER for MCS 13, 14, 15 and 16.

_images/MCS_17_20.png

BLER for MCS 17, 18, 19 and 20.

_images/MCS_21_24.png

BLER for MCS 21, 22, 23 and 24.

_images/MCS_25_28.png

BLER for MCS 25, 26, 27 and 28.

_images/MCS_29_29.png

BLER for MCS 29.

Integration of the BLER curves in the ns-3 LTE module

The implemented model uses the LSM curves of the LTE PHY Error Model recently released to the ns-3 community by the Signet Group [PaduaPEM], together with the new ones generated for different CB sizes. The LteSpectrumPhy class evaluates the TB BLER thanks to the methods provided by the LteMiErrorModel class, which computes the TB BLER according to the vector of the perceived SINR per RB, the MCS and the TB size, in order to properly model the segmentation of the TB into CBs. In order to obtain the vector of the perceived SINR, two instances of LtePemSinrChunkProcessor (a child of LteChunkProcessor dedicated to evaluating the SINR for obtaining the physical error performance) have been attached to the UE downlink and eNB uplink LteSpectrumPhy modules, for evaluating the error model distribution of the PDSCH (UE side) and ULSCH (eNB side) respectively.

The model can be disabled, in order to work with a zero-loss channel, by setting the DataErrorModelEnabled attribute of the LteSpectrumPhy class (it is active by default). This can be done according to the standard ns-3 attribute system procedure, that is:

Config::SetDefault ("ns3::LteSpectrumPhy::DataErrorModelEnabled", BooleanValue (false));
Control Channels PHY Error Model

The simulator includes the error model for the downlink control channels (PCFICH and PDCCH), while in uplink an ideal error-free channel is assumed. The model is based on the MIESM approach presented above, in order to consider the effects of the frequency selective channel, since most of the control channels span the whole available bandwidth.

PCFICH + PDCCH Error Model

The model adopted for the error distribution of these channels is based on an evaluation study carried out in RAN4 of 3GPP, where different vendors investigated the demodulation performance of the PCFICH jointly with the PDCCH. This is due to the fact that the PCFICH is the channel in charge of communicating to the UEs the actual dimension of the PDCCH (which spans between 1 and 3 symbols); therefore the correct decoding of the DCIs depends on the correct interpretation of both. In 3GPP this problem has been evaluated with the aim of improving the cell-edge performance [FujitsuWhitePaper], where the interference among neighboring cells can be relatively high due to signal degradation. A similar problem has been noticed in femto-cell scenarios and, more generally, in HetNet scenarios, where the bottleneck has been identified mainly in the PCFICH channel [Bharucha2011]: when many eNBs are deployed in the same service area, this channel may collide in frequency, making the correct detection of the PDCCH channel impossible, too.

In the simulator, the SINR perceived during the reception is estimated according to the MIESM model presented above, in order to evaluate the error distribution of PCFICH and PDCCH. In detail, the SINR samples of all the RBs are included in the evaluation of the MI associated to the control frame and, according to these values, the effective SINR (eSINR) is obtained by inverting the MI evaluation process. It has to be noted that, in case of MIMO transmission, both PCFICH and PDCCH always use the transmit diversity mode, as defined by the standard. According to the perceived eSINR, the decoding error probability can be estimated as a function of the results presented in [R4-081920]. In case an error occurs, the DCIs are discarded and therefore the UE will not be able to receive the corresponding TBs, which are consequently lost.

MIMO Model

The use of multiple antennas at both the transmitter and receiver side, known as multiple-input and multiple-output (MIMO), is a technique well studied in the literature during the past years. Most of the works concentrate on analytically evaluating the gain that the different MIMO schemes might have in terms of capacity; however, some also provide information on the gain in terms of received power [CatreuxMIMO].

According to the considerations above, a more flexible model can be obtained by considering the gain that MIMO schemes bring to the system from a statistical point of view. As highlighted before, [CatreuxMIMO] presents the statistical gain of several MIMO solutions with respect to the SISO one in case of no correlation between the antennas. In that work the gain is presented as the cumulative distribution function (CDF) of the output SINR for the SISO, MIMO-Alamouti, MIMO-MMSE, MIMO-OSIC-MMSE and MIMO-ZF schemes. Elaborating on the results, the output SINR distribution can be approximated with a log-normal one, with mean and variance depending on the scheme considered. However, the variances are not so different and they are approximately equal to that of the SISO mode, already included in the shadowing component of the BuildingsPropagationLossModel; in detail:

  • SISO: \mu = 13.5 and \sigma = 20 [dB].
  • MIMO-Alamouti: \mu = 17.7 and \sigma = 11.1 [dB].
  • MIMO-MMSE: \mu = 10.7 and \sigma = 16.6 [dB].
  • MIMO-OSIC-MMSE: \mu = 12.6 and \sigma = 15.5 [dB].
  • MIMO-ZF: \mu = 10.3 and \sigma = 12.6 [dB].

Therefore the PHY layer implements the MIMO model as the gain perceived by the receiver when using a MIMO scheme with respect to the one obtained when using SISO. We note that these gains refer to a case where there is no correlation between the antennas in the MIMO scheme; therefore the model does not capture the degradation due to path correlation.

UE PHY Measurements Model

According to [TS36214], the UE has to report a set of measurements of the eNBs that the device is able to perceive: the reference signal received power (RSRP) and the reference signal received quality (RSRQ). The former is a measure of the received power of a specific eNB, while the latter also includes channel interference and thermal noise. The UE has to report the measurements jointly with the physical cell identity (PCI) of the cell. Both the RSRP and RSRQ measurements are performed during the reception of the RS, while the PCI is obtained with the Primary Synchronization Signal (PSS). The PSS is sent by the eNB every 5 subframes, specifically in subframes 1 and 6. In real systems, only 504 distinct PCIs are available, and hence it could occur that two nearby eNBs use the same PCI; however, in the simulator we model PCIs using simulation metadata, and we allow up to 65535 distinct PCIs, thereby avoiding PCI collisions provided that fewer than 65535 eNBs are simulated in the same scenario.

According to [TS36133] sections 9.1.4 and 9.1.7, RSRP is reported by the PHY layer in dBm while RSRQ is reported in dB. The values of RSRP and RSRQ are provided to higher layers through the C-PHY SAP (by means of the UeMeasurementsParameters struct) every 200 ms, as defined in [TS36331]. Layer 1 filtering is performed by averaging all the measurements collected during the last window slot. The periodicity of reporting can be adjusted for research purposes by means of the LteUePhy::UeMeasurementsFilterPeriod attribute.
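
For instance, assuming the attribute is of Time type, the filtering window could be changed as follows (100 ms is just an example value):

Config::SetDefault ("ns3::LteUePhy::UeMeasurementsFilterPeriod", TimeValue (MilliSeconds (100)));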

The formulas of RSRP and RSRQ can be simplified under the PHY layer assumption that the channel is flat within the RB, the finest level of accuracy of the model. In fact, this implies that all the REs within an RB have the same power, therefore:

RSRP = \frac{\sum_{k=0}^{K-1}\frac{\sum_{m=0}^{M-1}(P(k,m))}{M}}{K}
     = \frac{\sum_{k=0}^{K-1}\frac{(M \times P(k))}{M}}{K}
     = \frac{\sum_{k=0}^{K-1}(P(k))}{K}

where P(k,m) represents the signal power of the RE m within the RB k, which, as observed before, is constant within the same RB and equal to P(k), M is the number of REs carrying the RS in a RB and K is the number of RBs. It is to be noted that P(k), and in general all the powers defined in this section, is obtained in the simulator from the PSD of the RB (which is provided by the LteInterferencePowerChunkProcessor), in detail:

P(k) = PSD_{RB}(k)*180000/12

where PSD_{RB}(k) is the power spectral density of the RB k, 180000 is the bandwidth in Hz of the RB and 12 is the number of REs per RB in an OFDM symbol. Similarly, for RSSI we have

RSSI = \sum_{k=0}^{K-1} \frac{\sum_{s=0}^{S-1} \sum_{r=0}^{R-1}( P(k,s,r) + I(k,s,r) + N(k,s,r))}{S}

where S is the number of OFDM symbols carrying RS in a RB and R is the number of REs carrying a RS in an OFDM symbol (which is fixed to 2), while P(k,s,r), I(k,s,r) and N(k,s,r) represent respectively the perceived power of the serving cell, the interference power and the noise power of the RE r in symbol s. As for RSRP, the measurements within an RB are always equal to each other according to the PHY model; therefore P(k,s,r) = P(k), I(k,s,r) = I(k) and N(k,s,r) = N(k), which implies that the RSSI can be calculated as:

RSSI = \sum_{k=0}^{K-1} \frac{S \times 2 \times ( P(k) + I(k) + N(k))}{S}
     = \sum_{k=0}^{K-1} 2 \times ( P(k) + I(k) + N (k))

Considering the constraints of the PHY reception chain implementation, and in order to keep the computational complexity low, only RSRP can be directly obtained for all the cells. This is due to the fact that LteSpectrumPhy is designed for evaluating the interference only with respect to the signal of the serving eNB. This implies that the PHY layer is optimized for managing the power signal information with the serving eNB as a reference. However, the RSRP and RSRQ of a neighbor cell i can be extracted from the information currently available for the serving cell j, as detailed in the following:

RSRP_i = \frac{\sum_{k=0}^{K-1}(P_i(k))}{K}

RSSI_i = RSSI_j = \sum_{k=0}^{K-1} 2 \times ( I_j(k) + P_j(k) + N_j(k) )

RSRQ_i^j = K \times RSRP_i / RSSI_j

where RSRP_i is the RSRP of the neighbor cell i, P_i(k) is the power perceived at any RE within the RB k, K is the total number of RBs, RSSI_i is the RSSI of the neighbor cell i when the UE is attached to cell j (which, since it is the sum of all the received powers, coincides with RSSI_j), I_j(k) is the total interference perceived by the UE in any RE of RB k when attached to cell j (obtained by the LteInterferencePowerChunkProcessor), P_j(k) is the power perceived of cell j in any RE of the RB k and N_j(k) is the noise power in any RE. The sample is considered valid in case the evaluated RSRQ is above the LteUePhy::RsrqUeMeasThreshold attribute.
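
The following self-contained snippet is a minimal sketch (not ns-3 code) of the simplified per-RB formulas above, computing the serving-cell RSSI and a neighbor-cell RSRP/RSRQ from per-RB power vectors with placeholder values.

#include <cstddef>
#include <iostream>
#include <vector>

int
main ()
{
  // Per-RB signal, interference and noise powers (linear units) of the serving cell j,
  // and per-RB power of a neighbor cell i.
  std::vector<double> p = {1.0e-6, 1.2e-6}, i = {2.0e-8, 3.0e-8}, n = {1.0e-9, 1.0e-9};
  std::vector<double> pNeigh = {4.0e-7, 5.0e-7};
  const std::size_t K = p.size ();

  double rsrpNeigh = 0.0, rssi = 0.0;
  for (std::size_t k = 0; k < K; ++k)
    {
      rsrpNeigh += pNeigh[k];             // numerator of RSRP_i = sum_k P_i(k) / K
      rssi += 2.0 * (p[k] + i[k] + n[k]); // RSSI_j = sum_k 2 (P_j + I_j + N_j)
    }
  rsrpNeigh /= K;

  double rsrqNeigh = K * rsrpNeigh / rssi; // RSRQ_i^j = K * RSRP_i / RSSI_j
  std::cout << "neighbor RSRQ = " << rsrqNeigh << std::endl;
  return 0;
}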

HARQ

The HARQ scheme implemented is based on an incremental redundancy (IR) solution combined with multiple stop-and-wait processes to enable a continuous data flow. In detail, the adopted solution is soft combining hybrid IR with full incremental redundancy (also called IR Type II), which implies that the retransmissions contain only new information with respect to the previous ones. The resource allocation algorithm of the HARQ has been implemented within the respective scheduler classes (i.e., RrFfMacScheduler and PfFfMacScheduler; refer to their corresponding sections for more info), while the decoding part of the HARQ has been implemented in the LteSpectrumPhy and LteHarqPhy classes, which will be detailed in this section.

According to the standard, the UL retransmissions are synchronous and therefore are allocated 7 ms after the original transmission. On the other hand, the DL retransmissions are asynchronous and can therefore be allocated in a more flexible way, starting from 7 ms after the original transmission, according to the specific scheduler implementation. The HARQ processes behavior is depicted in Figure HARQ processes behavior in LTE.

At the MAC layer, the HARQ entity residing in the scheduler is in charge of controlling the 8 HARQ processes for generating new packets and managing the retransmissions, both for the DL and the UL. The scheduler collects the HARQ feedback from the eNB and UE PHY layers (respectively for UL and DL connections) by means of the FF API primitives SchedUlTriggerReq and SchedDlTriggerReq. According to the HARQ feedback and the RLC buffer status, the scheduler generates a set of DCIs including both retransmissions of HARQ blocks received in error and new transmissions, in general giving priority to the former. On this matter, the scheduler has to take into consideration one constraint when allocating the resources for HARQ retransmissions: it must use the same modulation order as the first transmission attempt (i.e., QPSK for MCS \in [0..9], 16QAM for MCS \in [10..16] and 64QAM for MCS \in [17..28]). This restriction comes from the specification of the rate matcher in the 3GPP standard [TS36212], where the algorithm fixes the modulation order for generating the different blocks of the redundancy versions.

The PHY Error Model (i.e., the LteMiErrorModel class already presented) has been extended to consider IR HARQ according to [wimaxEmd], where the parameters of the AWGN curves used for the MIESM mapping in case of retransmissions are given by:

R_{eff} = \frac{X}{\sum\limits_{i=1}^q C_i}

M_{I eff} = \frac{\sum\limits_{i=1}^q C_i M_i}{\sum\limits_{i=1}^q C_i}

where X is the number of original information bits, C_i is the number of coded bits and M_i is the mutual information per HARQ block received over the total number q of retransmissions. Therefore, in order to return the error probability, the error model implemented in the simulator evaluates R_{eff} and M_{I eff} and returns the error probability of the ECR of the same modulation with the closest lower rate with respect to R_{eff}. In order to consider the effect of HARQ retransmissions, a new set of curves has been integrated in addition to the standard ones used for the original MCSs. The new curves are intended to cover the cases where the most conservative MCS of a modulation is used, which implies the generation of an R_{eff} lower than that of the standard MCSs. On this matter, the curves for 1, 2 and 3 retransmissions have been evaluated for MCS 10 and 17. For MCS 0 we considered only the first retransmission, since the resulting code rate is already very conservative (i.e., 0.04) and returns an error rate robust enough for the reception (i.e., the downturn of the BLER is centered around -18 dB). It is to be noted that the first TB transmission has been assumed to contain all the information bits to be coded; therefore X is equal to the size of the first TB sent in a HARQ process. The model assumes that the possible presence of parity bits in the codewords is already accounted for in the link level curves. This implies that, as soon as the minimum R_{eff} is reached, the model does not include the gain due to the transmission of further parity bits.
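
As a minimal sketch (not ns-3 code) of the combining formulas above, the effective code rate and effective mutual information after q (re)transmissions can be computed as follows; the numbers are placeholders.

#include <cstddef>
#include <iostream>
#include <vector>

int
main ()
{
  double x = 1024.0;                        // X: original information bits
  std::vector<double> c = {2048.0, 2048.0}; // C_i: coded bits per (re)transmission
  std::vector<double> m = {0.45, 0.50};     // M_i: MI per received HARQ block

  double sumC = 0.0, sumCm = 0.0;
  for (std::size_t i = 0; i < c.size (); ++i)
    {
      sumC += c[i];
      sumCm += c[i] * m[i];
    }
  double rEff = x / sumC;      // R_eff = X / sum_i C_i
  double miEff = sumCm / sumC; // M_Ieff = sum_i (C_i M_i) / sum_i C_i
  std::cout << "R_eff = " << rEff << ", MI_eff = " << miEff << std::endl;
  return 0;
}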

_images/lte-harq-processes-scheme.png

HARQ processes behavior in LTE

The part of HARQ devoted to managing the decoding of the HARQ blocks has been implemented in the LteHarqPhy and LteSpectrumPhy classes. The former is in charge of maintaining the HARQ information for each active process. The latter interacts with the LteMiErrorModel class for evaluating the correctness of the received blocks, and includes the messaging algorithm in charge of communicating the result of the decoding to the HARQ entity in the scheduler. These messages are encapsulated in the dlInfoListElement for DL and ulInfoListElement for UL, and are sent through the PUCCH and the PHICH respectively with an ideal error-free model, according to the assumptions of their implementation. A sketch of the interaction between HARQ and the LTE protocol stack is represented in Figure Interaction between HARQ and LTE protocol stack.

Finally, the HARQ engine is always active at both the MAC and PHY layers; however, in case the scheduler does not support HARQ, the system will continue to work with the HARQ functions inhibited (i.e., buffers are filled but not used). This implementation choice provides backward compatibility with schedulers implemented before the HARQ integration.

_images/lte-harq-architecture.png

Interaction between HARQ and LTE protocol stack

MAC

Resource Allocation Model

We now briefly describe how resource allocation is handled in LTE, clarifying how it is modeled in the simulator. The scheduler is in charge of generating specific structures called Downlink Control Information (DCI), which are then transmitted by the PHY of the eNB to the connected UEs in order to inform them of the resource allocation on a per-subframe basis. In doing this in the downlink direction, the scheduler has to fill some specific fields of the DCI structure with all the information, such as: the Modulation and Coding Scheme (MCS) to be used, the MAC Transport Block (TB) size, and the allocation bitmap which identifies which RBs will contain the data transmitted by the eNB to each user.

For the mapping of resources to physical RBs, we adopt a localized mapping approach (see [Sesia2009], Section 9.2.2.1); hence in a given subframe each RB is always allocated to the same user in both slots. The allocation bitmap can be coded in different formats; in this implementation, we considered the Allocation Type 0 defined in [TS36213], according to which the RBs are grouped in Resource Block Groups (RBG) of different size determined as a function of the Transmission Bandwidth Configuration in use.

For certain bandwidth values not all the RBs are usable, since the RBG size is not a divisor of the total number of RBs. This is for instance the case when the bandwidth is equal to 25 RBs, which results in an RBG size of 2 RBs; therefore 1 RB will not be addressable. In uplink the format of the DCIs is different, since only adjacent RBs can be used because of the SC-FDMA modulation. As a consequence, all RBs can be allocated by the eNB regardless of the bandwidth configuration.
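
The following self-contained helper is a sketch of the Allocation Type 0 RBG size lookup ([TS36213], Table 7.1.6.1-1) and of the leftover RBs mentioned above; it is an illustration, not the code used by the schedulers.

#include <cstdint>
#include <iostream>

// RBG size P as a function of the downlink bandwidth in RBs (Allocation Type 0).
uint32_t
GetRbgSize (uint32_t bandwidthRbs)
{
  if (bandwidthRbs <= 10) return 1;
  if (bandwidthRbs <= 26) return 2;
  if (bandwidthRbs <= 63) return 3;
  return 4;
}

int
main ()
{
  uint32_t bw = 25;             // 5 MHz configuration
  uint32_t p = GetRbgSize (bw); // RBG size = 2
  std::cout << "RBG size = " << p
            << ", unaddressable RBs = " << bw % p << std::endl;
  return 0;
}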

Adaptive Modulation and Coding

The simulator provides two Adaptive Modulation and Coding (AMC) models: one based on the GSoC model [Piro2011] and one based on the physical error model (described in the following sections).

The former model is a modified version of the model described in [Piro2011], which in turn is inspired by [Seo2004]. Our version is described in the following. Let i denote the generic user, and let \gamma_i be its SINR. We get the spectral efficiency \eta_i of user i using the following equations:

\mathrm{BER} = 0.00005

\Gamma = \frac{ -\ln{ (5 * \mathrm{BER}) } }{ 1.5}

\eta_i = \log_2 { \left( 1 + \frac{ {\gamma}_i }{ \Gamma } \right)}

The procedure described in [R1-081483] is used to get the corresponding MCS scheme. The spectral efficiency is quantized based on the channel quality indicator (CQI), rounding to the lowest value, and is mapped to the corresponding MCS scheme.

Finally, we note that there are some discrepancies between the MCS index in [R1-081483] and that indicated by the standard: [TS36213] Table 7.1.7.1-1 says that the MCS index goes from 0 to 31, and 0 appears to be a valid MCS scheme (TB size is not 0) but in [R1-081483] the first useful MCS index is 1. Hence to get the value as intended by the standard we need to subtract 1 from the index reported in [R1-081483].
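
The following self-contained snippet is a minimal sketch (not the simulator code) of the spectral efficiency computation above, for an example SINR of 15 dB and the target BER of 0.00005.

#include <cmath>
#include <iostream>

int
main ()
{
  double ber = 0.00005;
  double gammaDb = 15.0;                           // example SINR
  double gammaLin = std::pow (10.0, gammaDb / 10.0);
  double Gamma = -std::log (5.0 * ber) / 1.5;      // SNR gap
  double eta = std::log2 (1.0 + gammaLin / Gamma); // spectral efficiency
  std::cout << "eta = " << eta << " bit/s/Hz" << std::endl;
  return 0;
}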

The alternative model is based on the physical error model developed for this simulator and explained in the following subsections. This scheme is able to adapt the MCS selection to the actual PHY layer performance according to the specific CQI report. By definition, a CQI index is assigned when a single PDSCH TB with the modulation and coding scheme and code rate corresponding to that CQI index in Table 7.2.3-1 of [TS36213] can be received with an error probability lower than 0.1. In case of wideband CQIs, the reference TB includes all the available RBGs, in order to have a reference based on the whole set of available resources; while, for subband CQIs, the reference TB is sized as one RBG.

Transport Block model

The model of the MAC Transport Blocks (TBs) provided by the simulator is simplified with respect to the 3GPP specifications. In particular, a simulator-specific class (PacketBurst) is used to aggregate MAC SDUs in order to achieve the simulator’s equivalent of a TB, without the corresponding implementation complexity. The multiplexing of different logical channels to and from the RLC layer is performed using a dedicated packet tag (LteRadioBearerTag), which performs a functionality which is partially equivalent to that of the MAC headers specified by 3GPP.

The FemtoForum MAC Scheduler Interface

This section describes the ns-3 specific version of the LTE MAC Scheduler Interface Specification published by the FemtoForum [FFAPI].

We implemented the ns-3 specific version of the FemtoForum MAC Scheduler Interface [FFAPI] as a set of C++ abstract classes; in particular, each primitive is translated to a C++ method of a given class. The term implemented here is used with the same meaning adopted in [FFAPI], and hence refers to the process of translating the logical interface specification to a particular programming language. The primitives in [FFAPI] are grouped in two groups: the CSCHED primitives, which deal with scheduler configuration, and the SCHED primitives, which deal with the execution of the scheduler. Furthermore, [FFAPI] defines primitives of two different kinds: those of type REQ go from the MAC to the Scheduler, and those of type IND/CNF go from the scheduler to the MAC. To translate these characteristics into C++, we define the following abstract classes that implement Service Access Points (SAPs) to be used to issue the primitives:

  • the FfMacSchedSapProvider class defines all the C++ methods that correspond to SCHED primitives of type REQ;
  • the FfMacSchedSapUser class defines all the C++ methods that correspond to SCHED primitives of type CNF/IND;
  • the FfMacCschedSapProvider class defines all the C++ methods that correspond to CSCHED primitives of type REQ;
  • the FfMacCschedSapUser class defines all the C++ methods that correspond to CSCHED primitives of type CNF/IND;

There are 3 blocks involved in the MAC Scheduler interface: the Control block, the Subframe block and the Scheduler block. Each of these blocks provides one part of the MAC Scheduler interface. The figure below shows the relationship between the blocks and the SAPs defined in our implementation of the MAC Scheduler Interface.

_images/ff-mac-saps.png

In addition to the above principles, the following design choices have been taken:

  • The definition of the MAC Scheduler interface classes follows the naming conventions of the ns-3 Coding Style. In particular, we follow the CamelCase convention for the primitive names. For example, the primitive CSCHED_CELL_CONFIG_REQ is translated to CschedCellConfigReq in the ns-3 code.
  • The same naming conventions are followed for the primitive parameters. As the primitive parameters are member variables of classes, they are also prefixed with a m_.
  • regarding the use of vectors and lists in data structures, we note that [FFAPI] is a pretty much C-oriented API. However, considering that C++ is used in ns-3, and that the use of C arrays is discouraged, we used STL vectors (std::vector) for the implementation of the MAC Scheduler Interface, instead of using C arrays as implicitly suggested by the way [FFAPI] is written.
  • In C++, members with constructors and destructors are not allowed in unions. Hence all those data structures that are said to be unions in [FFAPI] have been defined as structs in our code.

The figure below shows how the MAC Scheduler Interface is used within the eNB.

_images/ff-example.png

The User side of both the CSCHED SAP and the SCHED SAP are implemented within the eNB MAC, i.e., in the file lte-enb-mac.cc. The eNB MAC can be used with different scheduler implementations without modifications. The same figure also shows, as an example, how the Round Robin Scheduler is implemented: to interact with the MAC of the eNB, the Round Robin scheduler implements the Provider side of the SCHED SAP and CSCHED SAP interfaces. A similar approach can be used to implement other schedulers as well. A description of each of the scheduler implementations that we provide as part of our LTE simulation module is provided in the following subsections.

Round Robin (RR) Scheduler

The Round Robin (RR) scheduler is probably the simplest scheduler found in the literature. It works by dividing the available resources among the active flows, i.e., those logical channels which have a non-empty RLC queue. If the number of RBGs is greater than the number of active flows, all the flows can be allocated in the same subframe. Otherwise, if the number of active flows is greater than the number of RBGs, not all the flows can be scheduled in a given subframe; then, in the next subframe the allocation will start from the last flow that was not allocated. The MCS to be adopted for each user is selected according to the received wideband CQIs.

As for HARQ, RR implements the non-adaptive version, which implies that for the retransmission attempts RR uses the same allocation configuration as the original block, i.e., it maintains the same RBGs and MCS. UEs that are allocated for HARQ retransmissions are not considered for the transmission of new data in case they have a transmission opportunity available in the same TTI. Finally, HARQ can be disabled with the ns-3 attribute system for maintaining backward compatibility with old test cases and code; in detail:

Config::SetDefault ("ns3::RrFfMacScheduler::HarqEnabled", BooleanValue (false));

The scheduler implements the filtering of the uplink CQIs according to their nature with the UlCqiFilter attribute; in detail:

  • SRS_UL_CQI: only SRS based CQI are stored in the internal attributes.
  • PUSCH_UL_CQI: only PUSCH based CQI are stored in the internal attributes.
  • ALL_UL_CQI: all CQIs are stored in the same internal attribute (i.e., the last CQI received is stored, regardless of its nature).
Proportional Fair (PF) Scheduler

The Proportional Fair (PF) scheduler [Sesia2009] works by scheduling a user when its instantaneous channel quality is high relative to its own average channel condition over time. Let i,j denote generic users; let t be the subframe index, and k be the resource block index; let M_{i,k}(t) be the MCS usable by user i on resource block k according to what is reported by the AMC model (see Adaptive Modulation and Coding); finally, let S(M, B) be the TB size in bits as defined in [TS36213] for the case where a number B of resource blocks is used. The achievable rate R_{i}(k,t) in bit/s for user i on resource block group k at subframe t is defined as

R_{i}(k,t) =  \frac{S\left( M_{i,k}(t), 1\right)}{\tau}

where \tau is the TTI duration. At the start of each subframe t, each RBG is assigned to a certain user. In detail, the index \widehat{i}_{k}(t) to which RBG k is assigned at time t is determined as

\widehat{i}_{k}(t) = \underset{j=1,...,N}{\operatorname{argmax}}
 \left( \frac{ R_{j}(k,t) }{ T_\mathrm{j}(t) } \right)

where T_{j}(t) is the past throughput performance perceived by the user j. According to the above scheduling algorithm, a user can be allocated to different RBGs, which can be either adjacent or not, depending on the current condition of the channel and the past throughput performance T_{j}(t). The latter is determined at the end of the subframe t using the following exponential moving average approach:

T_{j}(t) =
(1-\frac{1}{\alpha})T_{j}(t-1)
+\frac{1}{\alpha} \widehat{T}_{j}(t)

where \alpha is the time constant (in number of subframes) of the exponential moving average, and \widehat{T}_{j}(t) is the actual throughput achieved by the user j in the subframe t. \widehat{T}_{j}(t) is measured according to the following procedure. First we determine the MCS \widehat{M}_j(t) actually used by user j:

\widehat{M}_j(t) = \min_{k: \widehat{i}_{k}(t) = j}{M_{j,k}(t)}

then we determine the total number \widehat{B}_j(t) of RBGs allocated to user j:

\widehat{B}_j(t) = \left| \{ k :  \widehat{i}_{k}(t) = j \} \right|

where |\cdot| indicates the cardinality of the set; finally,

\widehat{T}_{j}(t) = \frac{S\left( \widehat{M}_j(t), \widehat{B}_j(t)
\right)}{\tau}
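
The following self-contained snippet is a minimal sketch (not the scheduler code) of the PF selection rule for one RBG and of the exponential moving average update of T_j(t) described above; rates and the time constant are placeholder values.

#include <cstddef>
#include <iostream>
#include <vector>

int
main ()
{
  std::vector<double> achievableRate = {2.0e6, 1.2e6, 3.5e6}; // R_j(k,t) [bit/s]
  std::vector<double> pastThroughput = {1.0e6, 0.2e6, 3.0e6}; // T_j(t-1) [bit/s]
  double alpha = 100.0;                                       // EMA time constant [subframes]

  // Assign the RBG to the user maximizing R_j(k,t) / T_j(t).
  std::size_t best = 0;
  for (std::size_t j = 1; j < achievableRate.size (); ++j)
    {
      if (achievableRate[j] / pastThroughput[j] > achievableRate[best] / pastThroughput[best])
        {
          best = j;
        }
    }

  // EMA update: only the scheduled user achieves throughput in this subframe.
  for (std::size_t j = 0; j < pastThroughput.size (); ++j)
    {
      double achieved = (j == best) ? achievableRate[j] : 0.0;
      pastThroughput[j] = (1.0 - 1.0 / alpha) * pastThroughput[j] + achieved / alpha;
    }
  std::cout << "scheduled user = " << best << std::endl;
  return 0;
}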

As for HARQ, PF implements the non-adaptive version, which implies that for the retransmission attempts the scheduler uses the same allocation configuration as the original block, i.e., it maintains the same RBGs and MCS. UEs that are allocated for HARQ retransmissions are not considered for the transmission of new data in case they have a transmission opportunity available in the same TTI. Finally, HARQ can be disabled with the ns-3 attribute system for maintaining backward compatibility with old test cases and code; in detail:

Config::SetDefault ("ns3::PfFfMacScheduler::HarqEnabled", BooleanValue (false));
Maximum Throughput (MT) Scheduler

The Maximum Throughput (MT) scheduler [FCapo2012] aims to maximize the overall throughput of the eNB. It allocates each RB to the user that can achieve the maximum achievable rate in the current TTI. Currently, the MT scheduler in ns-3 has two versions: frequency domain (FDMT) and time domain (TDMT). In FDMT, at every TTI, the MAC scheduler allocates RBGs to the UE with the highest achievable rate calculated from the subband CQI. In TDMT, at every TTI, the MAC scheduler selects the single UE with the highest achievable rate calculated from the wideband CQI, and allocates all RBGs to this UE in the current TTI. The calculation of the achievable rate in FDMT and TDMT is the same as in PF. Let i,j denote generic users; let t be the subframe index, and k be the resource block index; let M_{i,k}(t) be the MCS usable by user i on resource block k according to what is reported by the AMC model (see Adaptive Modulation and Coding); finally, let S(M, B) be the TB size in bits as defined in [TS36213] for the case where a number B of resource blocks is used. The achievable rate R_{i}(k,t) in bit/s for user i on resource block k at subframe t is defined as

R_{i}(k,t) =  \frac{S\left( M_{i,k}(t), 1\right)}{\tau}

where \tau is the TTI duration. At the start of each subframe t, each RB is assigned to a certain user. In detail, the index \widehat{i}_{k}(t) to which RB k is assigned at time t is determined as

\widehat{i}_{k}(t) = \underset{j=1,...,N}{\operatorname{argmax}}
    \left( { R_{j}(k,t) } \right)

When there are several UEs having the same achievable rate, the current implementation always selects the UE that was created first in the simulation script. Although MT can maximize the cell throughput, it cannot provide fairness to UEs in poor channel conditions.

Throughput to Average (TTA) Scheduler

The Throughput to Average (TTA) scheduler [FCapo2012] can be considered as an intermediate between MT and PF. The metric used in TTA is calculated as follows:

\widehat{i}_{k}(t) = \underset{j=1,...,N}{\operatorname{argmax}}
 \left( \frac{ R_{j}(k,t) }{ R_{j}(t) } \right)

Here, R_{i}(k,t) in bit/s represents the achievable rate for user i on resource block k at subframe t; the calculation method has already been shown for MT and PF. Meanwhile, R_{i}(t) in bit/s stands for the achievable rate for user i at subframe t. The difference between these two achievable rates lies in how the MCS is obtained: for R_{i}(k,t) the MCS is calculated from the subband CQI, while for R_{i}(t) it is calculated from the wideband CQI. The TTA scheduler can only be implemented in the frequency domain (FD), because the achievable rate of a particular RBG is only relevant to FD scheduling.

Blind Average Throughput Scheduler

The Blind Average Throughput scheduler [FCapo2012] aims to provide equal throughput to all UEs served by the eNB. The metric used is calculated as follows:

\widehat{i}_{k}(t) = \underset{j=1,...,N}{\operatorname{argmax}}
 \left( \frac{ 1 }{ T_\mathrm{j}(t) } \right)

where T_{j}(t) is the past throughput performance perceived by the user j and can be calculated by the same method as in the PF scheduler. In the time domain blind average throughput (TD-BET) variant, the scheduler selects the UE with the largest priority metric and allocates all RBGs to this UE. On the other hand, in the frequency domain blind average throughput (FD-BET) variant, at every TTI the scheduler first selects the UE with the lowest pastAverageThroughput (largest priority metric). The scheduler then assigns one RBG to this UE, calculates the expected throughput of this UE and compares it with the past average throughput T_{j}(t) of the other UEs. The scheduler continues to allocate RBGs to this UE until its expected throughput is no longer the smallest among the past average throughputs T_{j}(t) of all UEs. At that point, the scheduler applies the same procedure to the new UE with the lowest past average throughput T_{j}(t), until all RBGs are allocated. The principle behind this is that, in every TTI, the scheduler tries its best to achieve equal throughput among all UEs.

Token Bank Fair Queue Scheduler

Token Bank Fair Queue (TBFQ) is a QoS-aware scheduler which derives from the leaky-bucket mechanism. In TBFQ, a traffic flow of user i is characterized by the following parameters:

  • t_{i}: packet arrival rate (byte/sec )
  • r_{i}: token generation rate (byte/sec)
  • p_{i}: token pool size (byte)
  • E_{i}: counter that records the number of token borrowed from or given to the token bank by flow i ; E_{i} can be smaller than zero

Each K bytes of data consumes k tokens. Also, TBFQ maintains a shared token bank (B) so as to balance the traffic between different flows. If the token generation rate r_{i} is bigger than the packet arrival rate t_{i}, then the tokens overflowing from the token pool are added to the token bank, and E_{i} is increased by the same amount. Otherwise, flow i needs to withdraw tokens from the token bank based on the priority metric \frac{E_{i}}{r_{i}}, and E_{i} is decreased. Obviously, a user that contributes more to the token bank has a higher priority to borrow tokens; on the other hand, a user that has borrowed more tokens from the bank has a lower priority to continue withdrawing tokens. Therefore, in case several users have the same token generation rate, traffic rate and token pool size, a user that suffers from higher interference has more opportunities to borrow tokens from the bank. In addition, TBFQ can police the traffic by setting the token generation rate so as to limit the throughput. Additionally, TBFQ also maintains the following three parameters for each flow:

  • Debt limit d_{i}: if E_{i} falls below this threshold, user i can no longer borrow tokens from the bank. This is for preventing a malicious UE from borrowing too many tokens.
  • Credit limit c_{i}: the maximum number of tokens UE i can borrow from the bank at one time.
  • Credit threshold C: once E_{i} reaches the debt limit, UE i must deposit C tokens into the bank in order to further borrow tokens from the bank.

LTE in ns-3 has two versions of the TBFQ scheduler: frequency domain TBFQ (FD-TBFQ) and time domain TBFQ (TD-TBFQ). In FD-TBFQ, the scheduler always selects the UE with the highest metric and allocates the RBG with the highest subband CQI, until there are no packets left in the UE's RLC buffer or all RBGs are allocated [FABokhari2009]. In TD-TBFQ, after selecting the UE with the maximum metric, it allocates all RBGs to this UE using the wideband CQI [WKWong2004].
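
The following self-contained snippet is a minimal sketch (not the scheduler code) of the per-TTI token accounting described above for a single flow sharing a token bank; all quantities are in bytes and the values are placeholders.

#include <algorithm>
#include <iostream>

struct TbfqFlow
{
  double tokenRate; // r_i: tokens generated per TTI
  double poolSize;  // p_i: token pool capacity
  double pool;      // tokens currently in the pool
  double counter;   // E_i: tokens given to (+) or borrowed from (-) the bank
};

void
UpdateFlow (TbfqFlow &f, double arrivedBytes, double &bank)
{
  f.pool += f.tokenRate;
  if (f.pool > f.poolSize) // overflowing tokens are deposited into the shared bank
    {
      double overflow = f.pool - f.poolSize;
      bank += overflow;
      f.counter += overflow;
      f.pool = f.poolSize;
    }
  double deficit = arrivedBytes - f.pool;
  if (deficit > 0.0 && bank > 0.0) // borrow from the bank when the pool is short
    {
      double borrowed = std::min (deficit, bank);
      bank -= borrowed;
      f.counter -= borrowed;
      f.pool += borrowed;
    }
}

int
main ()
{
  double bank = 0.0;
  TbfqFlow flow = {1000.0, 5000.0, 5000.0, 0.0};
  UpdateFlow (flow, 1500.0, bank);
  std::cout << "E_i = " << flow.counter << ", bank = " << bank << std::endl;
  return 0;
}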

Priority Set Scheduler

Priority set scheduler (PSS) is a QoS aware scheduler which combines time domain (TD) and frequency domain (FD) packet scheduling operations into one scheduler [GMonghal2008]. It controls the fairness among UEs by a specified Target Bit Rate (TBR).

In the TD scheduler part, PSS first selects the UEs with non-empty RLC buffers and then divides them into two sets based on the TBR:

  • set 1: UEs whose past average throughput is smaller than the TBR; the TD scheduler calculates their priority metric in Blind Equal Throughput (BET) style:

\widehat{i}_{k}(t) = \underset{j=1,...,N}{\operatorname{argmax}}
 \left( \frac{ 1 }{ T_\mathrm{j}(t) } \right)

  • set 2: UEs whose past average throughput is larger than (or equal to) the TBR; the TD scheduler calculates their priority metric in Proportional Fair (PF) style:

\widehat{i}_{k}(t) = \underset{j=1,...,N}{\operatorname{argmax}}
 \left( \frac{ R_{j}(k,t) }{ T_\mathrm{j}(t) } \right)

UEs belonging to set 1 have higher priority than the ones in set 2. PSS then selects the N_{mux} UEs with the highest metric across the two sets and forwards those UEs to the FD scheduler. In PSS, the FD scheduler allocates RBG k to the UE n that maximizes the chosen metric. Two metrics can be used in the FD scheduler:

  • Proportional Fair scheduled (PFsch)

\widehat{Msch}_{k}(t) = \underset{j=1,...,N}{\operatorname{argmax}}
 \left( \frac{ R_{j}(k,t) }{ Tsch_\mathrm{j}(t) } \right)

  • Carrier over Interference to Average (CoIta)

\widehat{Mcoi}_{k}(t) = \underset{j=1,...,N}{\operatorname{argmax}}
 \left( \frac{ CoI[j,k] }{ \sum_{k=0}^{N_{RBG}} CoI[j,k] } \right)

where Tsch_{j}(t) is similar to the past throughput performance perceived by the user j, with the difference that it is updated only when the j-th user is actually served. CoI[j,k] is an estimate of the SINR of UE j on RBG k. Both PFsch and CoIta serve to decouple the FD metric from the TD scheduler. In addition, the PSS FD scheduler also provides a weight metric W[n] to help control fairness in case of a low number of UEs.

W[n] =  max (1, \frac{TBR}{ T_{j}(t) })

where T_{j}(t) is the past throughput performance perceived by the user j. Therefore, on RBG k, the FD scheduler selects the UE j that maximizes the product of the frequency domain metric (Msch or MCoI) and the weight W[n]. This strategy guarantees that the throughput of lower quality UEs tends towards the TBR.

Config::SetDefault ("ns3::PfFfMacScheduler::HarqEnabled", BooleanValue (false));

The scheduler implements the filtering of the uplink CQIs according to their nature with the UlCqiFilter attribute; in detail:

  • SRS_UL_CQI: only SRS based CQI are stored in the internal attributes.
  • PUSCH_UL_CQI: only PUSCH based CQI are stored in the internal attributes.
  • ALL_UL_CQI: all CQIs are stored in the same internal attribute (i.e., the last CQI received is stored, regardless of its nature).
Channel and QoS Aware Scheduler

The Channel and QoS Aware (CQA) Scheduler [Bbojovic2014] is an LTE MAC downlink scheduling algorithm that considers the head of line (HOL) delay, the GBR parameters and channel quality over different subbands. The CQA scheduler is based on joint TD and FD scheduling.

In the TD (at each TTI) the CQA scheduler groups users by priority. The purpose of grouping is to enforce the FD scheduling to consider first the flows with highest HOL delay. The grouping metric m_{td} for user j=1,...,N is defined in the following way:

m_{td}^{j}(t) = \lceil\frac{d_{hol}^{j}(t)}{g}\rceil \;,

where d_{hol}^{j}(t) is the current value of HOL delay of flow j, and g is a grouping parameter that determines granularity of the groups, i.e. the number of the flows that will be considered in the FD scheduling iteration.

The groups of flows selected in the TD iteration are forwarded to the FD scheduling starting from the flows with the highest value of the m_{td} metric until all RBGs are assigned in the corresponding TTI. In the FD, for each RBG k=1,...,K, the CQA scheduler assigns the current RBG to the user j that has the maximum value of the FD metric which we define in the following way:

m_{fd}^{(k,j)}(t) = d_{HOL}^{j}(t) \cdot m_{GBR}^j(t) \cdot m_{ca}^{k,j}(t) \;,

where m_{GBR}^j(t) is calculated as follows:

m_{GBR}^j(t)=\frac{GBR^j}{\overline{R^j}(t)}=\frac{GBR^j}{(1-\alpha)\cdot\overline{R^j}(t-1)+\alpha \cdot r^j(t)} \;,

where GBR^j is the bit rate specified in the EPS bearer of the flow j, \overline{R^j}(t) is the past averaged throughput that is calculated with a moving average, r^{j}(t) is the throughput achieved at the time t, and \alpha is a coefficient such that 0 \le \alpha \le 1.

For m_{ca}^{(k,j)}(t) we consider two different metrics: m_{pf}^{(k,j)}(t) and m_{ff}^{(k,j)}(t). m_{pf} is the Proportional Fair metric which is defined as follows:

m_{pf}^{(k,j)}(t) = \frac{R_e^{(k,j)}}{\overline{R^j}(t)} \;,

where R_e^{(k,j)}(t) is the estimated achievable throughput of user j over RBG k calculated by the Adaptive Modulation and Coding (AMC) scheme that maps the channel quality indicator (CQI) value to the transport block size in bits.

The other channel awareness metric that we consider is m_{ff}, which is proposed in [GMonghal2008] and represents the frequency selective fading gains over RBG k for user j; it is calculated in the following way:

m_{ff}^{(k,j)}(t) = \frac{CQI^{(k,j)}(t)}{\sum_{k=1}^{K}CQI(t)^{(k,j)}} \;,

where CQI^{(k,j)}(t) is the last reported CQI value from user j for the k-th RBG.

The user can select whether m_{pf} or m_{ff} is used by setting the attribute ns3::CqaFfMacScheduler::CqaMetric respectively to "CqaPf" or "CqaFf".
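
For example, to select the frequency-selective fading metric described above:

Config::SetDefault ("ns3::CqaFfMacScheduler::CqaMetric", StringValue ("CqaFf"));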

Random Access

The LTE model includes a model of the Random Access procedure based on some simplifying assumptions, which are detailed in the following for each of the messages and signals described in the specs [TS36321].

  • Random Access (RA) preamble: in real LTE systems this corresponds to a Zadoff-Chu (ZC) sequence using one of several formats available and sent in the PRACH slots, which could in principle overlap with PUSCH. PRACH Configuration Index 14 is assumed, i.e., preambles can be sent on any system frame number and subframe number. The RA preamble is modeled using the LteControlMessage class, i.e., as an ideal message that does not consume any radio resources. The collision of preamble transmissions by multiple UEs in the same cell is modeled using a protocol interference model, i.e., whenever two or more identical preambles are transmitted in the same cell at the same TTI, none of these identical preambles will be received by the eNB. Other than this collision model, no error model is associated with the reception of a RA preamble.
  • Random Access Response (RAR): in real LTE systems, this is a special MAC PDU sent on the DL-SCH. Since MAC control elements are not accurately modeled in the simulator (only RLC and above PDUs are), the RAR is modeled as an LteControlMessage that does not consume any radio resources. Still, during the RA procedure, the LteEnbMac will request to the scheduler the allocation of resources for the RAR using the FF MAC Scheduler primitive SCHED_DL_RACH_INFO_REQ. Hence, an enhanced scheduler implementation (not available at the moment) could allocate radio resources for the RAR, thus modeling the consumption of Radio Resources for the transmission of the RAR.
  • Message 3: in real LTE systems, this is an RLC TM SDU sent over resources specified in the UL Grant in the RAR. In the simulator, this is modeled as a real RLC TM RLC PDU whose UL resources are allocated by the scheduler upon call to SCHED_DL_RACH_INFO_REQ.
  • Contention Resolution (CR): in real LTE systems, the CR phase is needed to address the case where two or more UEs sent the same RA preamble in the same TTI, and the eNB was able to detect this preamble in spite of the collision. Since this event does not occur due to the protocol interference model used for the reception of RA preambles, the CR phase is not modeled in the simulator, i.e., the CR MAC CE is never sent by the eNB and the UEs consider the RA to be successful upon reception of the RAR. As a consequence, the radio resources consumed for the transmission of the CR MAC CE are not modeled.

Figure Sequence diagram of the Contention-based MAC Random Access procedure and Figure Sequence diagram of the Non-contention-based MAC Random Access procedure show the sequence diagrams of the contention-based and non-contention-based MAC random access procedures respectively, highlighting the interactions between the MAC and the other entities.

_images/mac-random-access-contention.png

Sequence diagram of the Contention-based MAC Random Access procedure

_images/mac-random-access-noncontention.png

Sequence diagram of the Non-contention-based MAC Random Access procedure

RLC

Overview

The RLC entity is specified in the 3GPP technical specification [TS36322], and comprises three different types of RLC: Transparent Mode (TM), Unacknowledged Mode (UM) and Acknowledged Mode (AM). The simulator includes one model for each of these entities.

The RLC entities provide the RLC service interface to the upper PDCP layer and the MAC service interface to the lower MAC layer. The RLC entities use the PDCP service interface from the upper PDCP layer and the MAC service interface from the lower MAC layer.

Figure Implementation Model of PDCP, RLC and MAC entities and SAPs shows the implementation model of the RLC entities and its relationship with all the other entities and services in the protocol stack.

_images/lte-rlc-implementation-model.png

Implementation Model of PDCP, RLC and MAC entities and SAPs

Service Interfaces
RLC Service Interface

The RLC service interface is divided into two parts:

  • the RlcSapProvider part is provided by the RLC layer and used by the upper PDCP layer and
  • the RlcSapUser part is provided by the upper PDCP layer and used by the RLC layer.

Both the UM and the AM RLC entities provide the same RLC service interface to the upper PDCP layer.

RLC Service Primitives

The following list specifies which service primitives are provided by the RLC service interfaces:

  • RlcSapProvider::TransmitPdcpPdu

    • The PDCP entity uses this primitive to send a PDCP PDU to the lower RLC entity in the transmitter peer
  • RlcSapUser::ReceivePdcpPdu

    • The RLC entity uses this primitive to send a PDCP PDU to the upper PDCP entity in the receiver peer
MAC Service Interface

The MAC service interface is divided into two parts:

  • the MacSapProvider part is provided by the MAC layer and used by the upper RLC layer and
  • the MacSapUser part is provided by the upper RLC layer and used by the MAC layer.
MAC Service Primitives

The following list specifies which service primitives are provided by the MAC service interfaces:

  • MacSapProvider::TransmitPdu

    • The RLC entity uses this primitive to send a RLC PDU to the lower MAC entity in the transmitter peer
  • MacSapProvider::ReportBufferStatus

    • The RLC entity uses this primitive to report to the MAC entity the size of the pending buffers in the transmitter peer
  • MacSapUser::NotifyTxOpportunity

    • The MAC entity uses this primitive to notify the RLC entity of a transmission opportunity
  • MacSapUser::ReceivePdu

    • The MAC entity uses this primitive to send an RLC PDU to the upper RLC entity in the receiver peer
AM RLC

The processing of the data transfer in the Acknowledged Mode (AM) RLC entity is explained in section 5.1.3 of [TS36322]. In this section we describe some details of the implementation of the RLC entity.

Buffers for the transmit operations

Our implementation of the AM RLC entity maintains 3 buffers for the transmit operations:

  • Transmission Buffer: it is the RLC SDU queue. When the AM RLC entity receives a SDU in the TransmitPdcpPdu service primitive from the upper PDCP entity, it enqueues it in the Transmission Buffer. We put a limit on the RLC buffer size and just silently drop SDUs when the buffer is full.
  • Transmitted PDUs Buffer: it is the queue of transmitted RLC PDUs for which an ACK/NACK has not been received yet. When the AM RLC entity sends a PDU to the MAC entity, it also puts a copy of the transmitted PDU in the Transmitted PDUs Buffer.
  • Retransmission Buffer: it is the queue of RLC PDUs which are considered for retransmission (i.e., they have been NACKed). When the AM RLC entity decides to retransmit a PDU in the Transmitted PDUs Buffer, it moves that PDU to the Retransmission Buffer.
Calculation of the buffer size

The Transmission Buffer contains RLC SDUs. A RLC PDU is one or more SDU segments plus an RLC header. The size of the RLC header of one RLC PDU depends on the number of SDU segments the PDU contains.

The 3GPP standard (section 6.1.3.1 of [TS36321]) clearly states that, for the uplink, the RLC and MAC headers are not considered in the buffer size that is to be reported as part of the Buffer Status Report. For the downlink, the behavior is not specified, nor does [FFAPI] specify how to do it. Our RLC model works by assuming that the calculation of the buffer size in the downlink is done exactly as in the uplink, i.e., not considering the RLC and MAC header size.

We note that this choice affects the interoperation with the MAC scheduler, since, in response to the Notify_Tx_Opportunity service primitive, the RLC is expected to create a PDU of no more than the size requested by the MAC, including RLC overhead. Hence, unneeded fragmentation can occur if (for example) the MAC notifies a transmission exactly equal to the buffer size previously reported by the RLC. We assume that it is left to the Scheduler to implement smart strategies for the selection of the size of the transmission opportunity, in order to avoid, where possible, the inefficiency of unneeded fragmentation.

Concatenation and Segmentation

The AM RLC entity generates and sends exactly one RLC PDU for each transmission opportunity, even if the PDU is smaller than the size indicated by the transmission opportunity. So, for instance, if a STATUS PDU is to be sent, then only this PDU will be sent in that transmission opportunity.

The segmentation and concatenation for the SDU queue of the AM RLC entity follow the same philosophy as the corresponding procedures of the UM RLC entity, but with additional state variables (see [TS36322] section 7.1) that are only present in the AM RLC entity.

It is noted that, according to the 3GPP specs, there is no concatenation for the Retransmission Buffer.

Re-segmentation

The current model of the AM RLC entity does not support the re-segmentation of the retransmission buffer. Rather, the AM RLC entity just waits to receive a big enough transmission opportunity.

Unsupported features

We do not support the following procedures of [TS36322] :

  • “Send an indication of successful delivery of RLC SDU” (See section 5.1.3.1.1)
  • “Indicate to upper layers that max retransmission has been reached” (See section 5.2.1)
  • “SDU discard procedures” (See section 5.3)
  • “Re-establishment procedure” (See section 5.4)

We do not support any of the additional primitives of RLC SAP for AM RLC entity. In particular:

  • no SDU discard notified by PDCP
  • no notification of successful / failed delivery by AM RLC entity to PDCP entity
UM RLC

In this section we describe the implementation of the Unacknowledged Mode (UM) RLC entity.

Transmit operations in downlink

The transmit operations of the UM RLC are similar to those of the AM RLC previously described in Section Transmit operations in downlink, with the difference that, following the specifications of [TS36322], retransmissions are not performed and there are no STATUS PDUs.

Transmit operations in uplink

The transmit operations in the uplink are similar to those of the downlink, with the main difference that the Report_Buffer_Status is sent from the UE MAC to the MAC Scheduler in the eNB over the air using the control channel.

Calculation of the buffer size

The calculation of the buffer size for the UM RLC is done using the same approach as for the AM RLC; please refer to section Calculation of the buffer size for the corresponding description.

TM RLC

In this section we describe the implementation of the Transparent Mode (TM) RLC entity.

Transmit operations in downlink

In the simulator, the TM RLC still provides to the upper layers the same service interface provided by the AM and UM RLC entities to the PDCP layer; in practice, this interface is used by an RRC entity (not a PDCP entity) for the transmission of RLC SDUs. This choice is motivated by the fact that the services provided by the TM RLC to the upper layers, according to [TS36322], are a subset of those provided by the UM and AM RLC entities to the PDCP layer; hence, we reused the same interface for simplicity.

The transmit operations in the downlink are performed as follows. When the Transmit_PDCP_PDU service primitive is called by the upper layers, the TM RLC does the following:

  • put the SDU in the Transmission Buffer
  • compute the size of the Transmission Buffer
  • call the Report_Buffer_Status service primitive of the eNB MAC entity

Afterwards, when the MAC scheduler decides that some data can be sent by the logical channel to which the TM RLC entity belongs, the MAC entity notifies it to the TM RLC entity by calling the Notify_Tx_Opportunity service primitive. Upon reception of this primitive, the TM RLC entity does the following:

  • if the TX opportunity has a size that is greater than or equal to the size of the head-of-line SDU in the Transmission Buffer
    • dequeue the head-of-line SDU from the Transmission Buffer
    • create one RLC PDU that contains entirely that SDU, without any RLC header
    • call the Transmit_PDU primitive in order to send the RLC PDU to the MAC entity
Transmit operations in uplink

The transmit operations in the uplink are similar to those of the downlink, with the main difference that a transmission opportunity can also arise from the assignment of the UL GRANT as part of the Random Access procedure, without an explicit Buffer Status Report issued by the TM RLC entity.

Calculation of the buffer size

As per the specifications [TS36322], the TM RLC does not add any RLC header to the PDUs being transmitted. Because of this, the buffer size reported to the MAC layer is calculated simply by summing the size of all packets in the transmission buffer, thus notifying to the MAC the exact buffer size.

SM RLC

In addition to the AM, UM and TM implementations that are modeled after the 3GPP specifications, a simplified RLC model is provided, which is called Saturation Mode (SM) RLC. This RLC model does not accept PDUs from any upper layer (such as PDCP); rather, the SM RLC takes care of generating RLC PDUs in response to the transmission opportunities notified by the MAC. In other words, the SM RLC simulates saturation conditions, i.e., it assumes that the RLC buffer is always full and can generate a new PDU whenever notified by the scheduler.

The SM RLC is used for simplified simulation scenarios in which only the LTE Radio model is used, without the EPC and hence without any IP networking support. We note that, although the SM RLC is an unrealistic traffic model, it still allows for the correct simulation of scenarios with multiple flows belonging to different (non real-time) QoS classes, in order to test the QoS performance obtained by different schedulers. This can be done since it is the task of the Scheduler to assign transmission resources based on the characteristics (e.g., Guaranteed Bit Rate) of each Radio Bearer, which are specified upon the definition of each Bearer within the simulation program.

As for schedulers designed to work with real-time QoS traffic that has delay constraints, the SM RLC is probably not an appropriate choice. This is because the absence of actual RLC SDUs (replaced by the artificial generation of Buffer Status Reports) makes it impossible to provide the Scheduler with meaningful head-of-line delay information, which is often the metric of choice for the implementation of scheduling policies for real-time traffic flows. For the simulation and testing of such schedulers, it is advisable to use either the UM or the AM RLC models instead.
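
As a usage hint, the following minimal sketch shows how the SM RLC could be selected in an LTE-only script. The LteHelper attribute name UseRlcSm is an assumption based on LENA-derived releases; please check the attributes of LteHelper in your ns-3 version before relying on it.

  // Hedged sketch: select the SM RLC for an LTE-only (no EPC) simulation.
  #include "ns3/core-module.h"
  #include "ns3/lte-module.h"

  using namespace ns3;

  void
  ConfigureSaturationModeRlc ()
  {
    // Assumed attribute name; if true, LteRlcSm is used instead of the UM/AM RLC.
    Config::SetDefault ("ns3::LteHelper::UseRlcSm", BooleanValue (true));
  }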

PDCP

PDCP Model Overview

The reference document for the specification of the PDCP entity is [TS36323]. With respect to this specification, the PDCP model implemented in the simulator supports only the following features:

  • transfer of data (user plane or control plane);
  • maintenance of PDCP SNs;
  • transfer of SN status (for use upon handover);

The following features are currently not supported:

  • header compression and decompression of IP data flows using the ROHC protocol;
  • in-sequence delivery of upper layer PDUs at re-establishment of lower layers;
  • duplicate elimination of lower layer SDUs at re-establishment of lower layers for radio bearers mapped on RLC AM;
  • ciphering and deciphering of user plane data and control plane data;
  • integrity protection and integrity verification of control plane data;
  • timer based discard;
  • duplicate discarding.
PDCP Service Interface

The PDCP service interface is divided into two parts:

  • the PdcpSapProvider part is provided by the PDCP layer and used by the upper layer and
  • the PdcpSapUser part is provided by the upper layer and used by the PDCP layer.
PDCP Service Primitives

The following list specifies which service primitives are provided by the PDCP service interfaces:

  • PdcpSapProvider::TransmitPdcpSdu

    • The RRC entity uses this primitive to send an RRC PDU to the lower PDCP entity in the transmitter peer
  • PdcpSapUser::ReceivePdcpSdu

    • The PDCP entity uses this primitive to send an RRC PDU to the upper RRC entity in the receiver peer

RRC

Features

The RRC model implemented in the simulator provides the following functionality:

  • generation (at the eNB) and interpretation (at the UE) of System Information (in particular the Master Information Block and, at the time of this writing, only System Information Block Type 1 and 2)
  • initial cell selection
  • RRC connection establishment procedure
  • RRC reconfiguration procedure, supporting the following use cases:
    • reconfiguration of the SRS configuration index
    • reconfiguration of the PHY TX mode (MIMO)
    • reconfiguration of UE measurements
    • data radio bearer setup
    • handover
  • RRC connection re-establishment, supporting the following use cases:
    • handover
Architecture

The RRC model is divided into the following components:

  • the RRC entities LteUeRrc and LteEnbRrc, which implement the state machines of the RRC entities respectively at the UE and the eNB;
  • the RRC SAPs LteUeRrcSapProvider, LteUeRrcSapUser, LteEnbRrcSapProvider, LteEnbRrcSapUser, which allow the RRC entities to send and receive RRC messages and information elements;
  • the RRC protocol classes LteUeRrcProtocolIdeal, LteEnbRrcProtocolIdeal, LteUeRrcProtocolReal, LteEnbRrcProtocolReal, which implement two different models for the transmission of RRC messages.

Additionally, the RRC components use various other SAPs in order to interact with the rest of the protocol stack. A representation of all the SAPs that are used is provided in the figures LTE radio protocol stack architecture for the UE on the data plane, LTE radio protocol stack architecture for the UE on the control plane, LTE radio protocol stack architecture for the eNB on the data plane and LTE radio protocol stack architecture for the eNB on the control plane.

UE RRC State Machine

In Figure UE RRC State Machine we represent the state machine as implemented in the RRC UE entity.

_images/lte-ue-rrc-states.png

UE RRC State Machine

It is to be noted that most of the states are transient; in particular, once the UE goes into one of the CONNECTED states it will never switch back to any of the IDLE states. This design choice is motivated by the following reasons:

  • as discussed in the section Design Criteria, the focus of the LTE-EPC simulation model is on CONNECTED mode
  • radio link failure is not currently modeled, as discussed in the section Radio Link Failure, so a UE cannot go IDLE because of radio link failure
  • RRC connection release is currently never triggered by either the EPC or the NAS

Still, we chose to model explicitly the IDLE states, because:

  • a realistic UE RRC configuration is needed for handover, which is a required feature, and in order to have a cleaner implementation it makes sense to use the same UE RRC configuration also for the initial connection establishment
  • it makes it easier to implement idle mode cell selection in the future, which is a highly desirable feature
ENB RRC State Machine

The eNB RRC maintains the state for each UE that is attached to the cell. From an implementation point of view, the state of each UE is contained in an instance of the UeManager class. The state machine is represented in Figure ENB RRC State Machine for each UE.

_images/lte-enb-rrc-states.png

ENB RRC State Machine for each UE

Initial Cell Selection

Initial cell selection is an IDLE mode procedure, performed by the UE when it has not yet camped on or attached to an eNodeB. The objective of the procedure is to find a suitable cell and attach to it to gain access to the cellular network.

It is typically done at the beginning of a simulation, as depicted in Figure Sample runs of initial cell selection in UE and timing of related events below. The time diagram on the left illustrates the case where initial cell selection succeeds on the first try, while the diagram on the right shows the case where it fails on the first try and succeeds on the second try. The timing assumes the use of the real RRC protocol model (see RRC protocol models) and no transmission errors.

_images/lte-cell-selection-timeline.png

Sample runs of initial cell selection in UE and timing of related events

The functionality is based on the 3GPP IDLE mode specifications, such as those in [TS36300], [TS36304], and [TS36331]. However, a proper implementation of IDLE mode is still missing in the simulator, so we make several simplifying assumptions:

  • multiple carrier frequencies are not supported;
  • multiple Public Land Mobile Network (PLMN) identities (i.e. multiple network operators) are not supported;
  • RSRQ measurements are not utilized;
  • stored information cell selection is not supported;
  • “Any Cell Selection” state and camping to an acceptable cell is not supported;
  • marking a cell as barred or reserved is not supported;
  • cell reselection is not supported, hence it is not possible for the UE to camp on a different cell after the initial camp has been placed; and
  • UE’s Closed Subscriber Group (CSG) white list contains only one CSG identity.

Also note that initial cell selection is only available for EPC-enabled simulations. LTE-only simulations must use the manual attachment method. See section Network Attachment of the User Documentation for more information on their differences in usage.
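
The difference between the two attachment methods can be illustrated with the following hedged snippet, which assumes the usual LteHelper::Attach overloads used in the bundled examples: the single-argument overload relies on initial cell selection (EPC-enabled simulations), while the two-argument overload attaches the UEs manually to a given eNodeB.

  #include "ns3/lte-module.h"
  #include "ns3/network-module.h"

  using namespace ns3;

  // Hedged sketch of the two attachment methods; ueDevs and enbDevs are
  // assumed to have been filled by InstallUeDevice () / InstallEnbDevice ().
  void
  AttachUes (Ptr<LteHelper> lteHelper, NetDeviceContainer ueDevs,
             NetDeviceContainer enbDevs, bool epcEnabled)
  {
    if (epcEnabled)
      {
        lteHelper->Attach (ueDevs);                  // initial cell selection
      }
    else
      {
        lteHelper->Attach (ueDevs, enbDevs.Get (0)); // manual attachment to one eNB
      }
  }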

The next subsections cover different parts of initial cell selection, namely cell search, broadcast of system information, and cell selection evaluation.

Broadcast of System Information

System information blocks are broadcasted by eNodeB to UEs at predefined time intervals, adapted from Section 5.2.1.2 of [TS36331]. The supported system information blocks are:

  • Master Information Block (MIB)

    Contains parameters related to the PHY layer, generated during cell configuration and broadcasted every 10 ms at the beginning of radio frame as a control message.

  • System Information Block Type 1 (SIB1)

    Contains information regarding network access, broadcasted every 20 ms in the middle of the radio frame as a control message. Not used in the manual attachment method. The UE must have decoded the MIB before it can receive SIB1.

  • System Information Block Type 2 (SIB2)

    Contains UL- and RACH-related settings, scheduled to be transmitted via the RRC protocol 16 ms after cell configuration and then repeated every 80 ms (configurable through the LteEnbRrc::SystemInformationPeriodicity attribute). The UE must be camped on a cell in order to be able to receive its SIB2.

Reception of system information is fundamental for UE to advance in its lifecycle. MIB enables the UE to increase the initial DL bandwidth of 6 RBs to the actual operating bandwidth of the network. SIB1 provides information necessary for cell selection evaluation (explained in the next section). And finally SIB2 is required before the UE is allowed to switch to CONNECTED state.

Cell Selection Evaluation

The UE RRC reviews the measurement report produced during Cell Search and the cell access information provided by SIB1. Once both pieces of information are available for a specific cell, the UE triggers the evaluation process. The purpose of this process is to determine whether the cell is suitable to camp on.

The evaluation process is a slightly simplified version of Section 5.2.3.2 of [TS36304]. It consists of the following criteria:

  • Rx level criterion; and
  • closed subscriber group (CSG) criterion.

The first criterion, Rx level, is based on the cell’s measured RSRP Q_{rxlevmeas}, which has to be higher than a required minimum Q_{rxlevmin} in order to pass the criterion:

Q_{rxlevmeas} - Q_{rxlevmin} > 0

where Q_{rxlevmin} is determined by each eNodeB and is obtainable by UE from SIB1.

The last criterion, CSG, is a combination of a true-or-false parameter called CSG indication and a simple number called CSG identity. The basic rule is that the UE shall not camp on an eNodeB with a different CSG identity, but this rule is only enforced when the CSG indication is set to true. More details are provided in Section Network Attachment of the User Documentation.

When the cell passes all the above criteria, the cell is deemed suitable. The UE then camps on it (IDLE_CAMPED_NORMALLY state).

After this, the upper layer may request the UE to enter CONNECTED mode. Please refer to section RRC connection establishment for details on this.

On the other hand, when the cell does not pass the CSG criterion, then the cell is labeled as acceptable (Section 10.1.1.1 [TS36300]). In this case, the RRC entity will tell the PHY entity to synchronize to the second strongest cell and repeat the initial cell selection procedure using that cell. As long as no suitable cell is found, the UE will repeat these steps while avoiding cells that have been identified as acceptable.

Radio Admission Control

Radio Admission Control is supported by having the eNB RRC reply to an RRC CONNECTION REQUEST message sent by the UE with either an RRC CONNECTION SETUP message or an RRC CONNECTION REJECT message, depending on whether the new UE is to be admitted or not. In the current implementation, the behavior is determined by the boolean attribute ns3::LteEnbRrc::AdmitRrcConnectionRequest. There is currently no Radio Admission Control algorithm that dynamically decides whether a new connection shall be admitted or not.
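
As a hedged example, the attribute mentioned above can be set before the eNodeB devices are created, e.g. to make all eNodeBs reject incoming connection requests; the attribute path follows the ns-3 Config naming convention.

  #include "ns3/core-module.h"
  #include "ns3/lte-module.h"

  using namespace ns3;

  void
  RejectAllRrcConnectionRequests ()
  {
    // All eNodeB RRC instances created afterwards will answer every
    // RRC CONNECTION REQUEST with an RRC CONNECTION REJECT message.
    Config::SetDefault ("ns3::LteEnbRrc::AdmitRrcConnectionRequest",
                        BooleanValue (false));
  }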

Radio Bearer Configuration

Some implementation choices have been made in the RRC regarding the setup of radio bearers:

  • three Logical Channel Groups (out of four available) are configured for uplink buffer status report purposes, according to the following policy:
    • LCG 0 is for signaling radio bearers
    • LCG 1 is for GBR data radio bearers
    • LCG 2 is for Non-GBR data radio bearers
UE RRC Measurements Model
UE RRC measurements support

The UE RRC entity provides support for UE measurements; in particular, it implements the procedures described in Section 5.5 of [TS36331], with the following simplifying assumptions:

  • only E-UTRA intra-frequency measurements are supported, which implies:
    • only one measurement object is used during the simulation;
    • measurement gaps are not needed to perform the measurements;
    • Event B1 and B2 are not implemented;
  • only reportStrongestCells purpose is supported, while reportCGI and reportStrongestCellsForSON purposes are not supported;
  • s-Measure is not supported;
  • since carrier aggregation is not supported by the LTE module, the following assumptions in UE measurements hold true:
    • no notion of secondary cell (SCell);
    • primary cell (PCell) simply means serving cell;
    • Event A6 is not implemented;
  • speed-dependent scaling of time-to-trigger (Section 5.5.6.2 of [TS36331]) is not supported.
Overall design

The model is based on the concept of a UE measurements consumer, which is an entity that may request an eNodeB RRC entity to provide UE measurement reports. One example of a consumer is the handover algorithm, which computes handover decisions based on UE measurement reports. Test cases and users' programs may also become consumers. Figure Relationship between UE measurements and its consumers depicts the relationship between these entities.

_images/ue-meas-consumer.png

Relationship between UE measurements and its consumers

The whole UE measurements function at the RRC level is divided into 4 major parts:

  1. Measurement configuration (handled by LteUeRrc::ApplyMeasConfig)
  2. Performing measurements (handled by LteUeRrc::DoReportUeMeasurements)
  3. Measurement report triggering (handled by LteUeRrc::MeasurementReportTriggering)
  4. Measurement reporting (handled by LteUeRrc::SendMeasurementReport)

The following sections will describe each of the parts above.

Measurement configuration

An eNodeB RRC entity configures UE measurements by sending the configuration parameters to the UE RRC entity. This set of parameters are defined within the MeasConfig Information Element (IE) of the RRC Connection Reconfiguration message (RRC connection reconfiguration).

The eNodeB RRC entity implements the configuration parameters and procedures described in Section 5.5.2 of [TS36331], with the following simplifying assumptions:

  • configuration (i.e. addition, modification, and removal) can only be done before the simulation begins;
  • all UEs attached to the eNodeB will be configured in the same way, i.e. there is no support for configuring specific measurements for specific UEs; and
  • it is assumed that there is a one-to-one mapping between the PCI and the E-UTRAN Cell Global Identifier (ECGI). This is consistent with the PCI modeling assumptions described in UE PHY Measurements Model.

The eNodeB RRC instance here acts as an intermediary between the consumers and the attached UEs. At the beginning of simulation, each consumer provides the eNodeB RRC instance with the UE measurements configuration that it requires. After that, the eNodeB RRC distributes the configuration to attached UEs.

Users may customize the measurement configuration using several methods. Please refer to Section Configure UE measurements of the User Documentation for the description of these methods.

Performing measurements

The UE RRC receives both RSRP and RSRQ measurements on a periodic basis from the UE PHY, as described in UE PHY Measurements Model. Layer 3 filtering is applied to these received measurements. The implementation of the filtering follows Section 5.5.3.2 of [TS36331]:

F_n = (1 - a) \times F_{n-1} + a \times M_n

where:

  • M_n is the latest received measurement result from the physical layer;
  • F_n is the updated filtered measurement result;
  • F_{n-1} is the old filtered measurement result, where F_0 = M_1 (i.e. the first measurement is not filtered); and
  • a = (\frac{1}{2})^{\frac{k}{4}}, where k is the configurable filterCoefficient provided by the QuantityConfig.

k = 4 is the default value, but can be configured by setting the RsrpFilterCoefficient and RsrqFilterCoefficient attributes in LteEnbRrc.

Therefore, k = 0 disables Layer 3 filtering. On the other hand, past measurements can be granted more influence on the filtering results by using a larger value of k.
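
For example, the filter coefficient can be changed through the attributes mentioned above; the snippet below is a hedged sketch (the UintegerValue attribute type is an assumption) that sets k = 8, which yields a = (1/2)^{8/4} = 0.25, i.e. each new measurement contributes 25% to the filtered result.

  #include "ns3/core-module.h"
  #include "ns3/lte-module.h"

  using namespace ns3;

  void
  ConfigureLayer3Filtering ()
  {
    // k = 8 -> a = 0.25: stronger smoothing than the default k = 4 (a = 0.5).
    Config::SetDefault ("ns3::LteEnbRrc::RsrpFilterCoefficient", UintegerValue (8));
    Config::SetDefault ("ns3::LteEnbRrc::RsrqFilterCoefficient", UintegerValue (8));
    // k = 0 would disable Layer 3 filtering altogether (a = 1).
  }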

Measurement report triggering

In this part, the UE RRC goes through the list of active measurement configurations and checks whether their triggering conditions are fulfilled in accordance with Section 5.5.4 of [TS36331]. When at least one triggering condition from the active measurement configurations is fulfilled, the measurement reporting procedure (described in the next subsection) is initiated.

3GPP defines two kinds of triggerType: periodical and event-based. At the moment, only event-based criteria are supported. There are various events that can be selected, which are briefly described in the list below:

List of supported event-based triggering criteria:

  • Event A1: Serving cell becomes better than threshold
  • Event A2: Serving cell becomes worse than threshold
  • Event A3: Neighbour becomes offset dB better than serving cell
  • Event A4: Neighbour becomes better than threshold
  • Event A5: Serving becomes worse than threshold1 AND neighbour becomes better than threshold2

Two main conditions to be checked in an event-based trigger are the entering condition and the leaving condition. More details on these two can be found in Section 5.5.4 of [TS36331].

An event-based trigger can be further configured by introducing hysteresis and time-to-trigger. Hysteresis (Hys) defines the distance between the entering and leaving conditions in dB. Similarly, time-to-trigger introduces delay to both entering and leaving conditions, but in units of time.

The periodical type of reporting trigger is not supported, but its behaviour can be easily obtained by using an event-based trigger. This can be done by configuring the measurement in such a way that the entering condition is always fulfilled, for example, by setting the threshold of Event A1 to zero (the minimum level). As a result, the measurement reports will always be triggered at regular intervals, as determined by the reportInterval field within LteRrcSap::ReportConfigEutra, therefore producing the same behaviour as periodical reporting.
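
A hedged sketch of such a configuration is shown below; it follows the AddUeMeasReportConfig pattern documented in the Configure UE measurements section of the User Documentation, and the field values (the RSRP threshold range of 0 and the 480 ms report interval) are illustrative choices.

  #include "ns3/lte-module.h"

  using namespace ns3;

  // Hedged sketch: emulate periodical reporting with an always-true Event A1.
  uint8_t
  AddPeriodicalLikeReportConfig (Ptr<LteEnbRrc> enbRrc)
  {
    LteRrcSap::ReportConfigEutra config;
    config.eventId = LteRrcSap::ReportConfigEutra::EVENT_A1;
    config.threshold1.choice = LteRrcSap::ThresholdEutra::THRESHOLD_RSRP;
    config.threshold1.range = 0;   // minimum level, so the entering condition always holds
    config.triggerQuantity = LteRrcSap::ReportConfigEutra::RSRP;
    config.reportInterval = LteRrcSap::ReportConfigEutra::MS480; // report every 480 ms
    return enbRrc->AddUeMeasReportConfig (config); // returns the assigned measId
  }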

As a limitation with respect to the 3GPP specifications, the current model does not support any cell-specific configuration. These configuration parameters are defined in the measurement object. As a consequence, incorporating a list of black cells into the triggering process is not supported. Moreover, cell-specific offsets (i.e., O_{cn} and O_{cp} in Events A3, A4, and A5) are not supported either. A value of zero is always assumed in their place.

Measurement reporting

This part handles the submission of measurement report from the UE RRC entity to the serving eNodeB entity via RRC protocol. Several simplifying assumptions have been adopted:

  • reportAmount is not applicable (i.e. always assumed to be infinite);
  • in measurement reports, the reportQuantity is always assumed to be BOTH, i.e., both RSRP and RSRQ are always reported, regardless of the triggerQuantity.
Handover

The RRC model supports UE mobility in CONNECTED mode by invoking the X2-based handover procedure. The model is intra-EUTRAN and intra-frequency, as based on Section 10.1.2.1 of [TS36300].

This section focuses on the process of triggering a handover. The handover execution procedure itself is covered in Section X2.

There are two ways to trigger the handover procedure:

  • explicitly (or manually) triggered by the simulation program by scheduling an execution of the method LteEnbRrc::SendHandoverRequest; or
  • automatically triggered by the eNodeB RRC entity based on UE measurements and according to the selected handover algorithm.

Section X2-based handover of the User Documentation provides some examples on using both explicit and automatic handover triggers in simulation. The next subsection will take a closer look on the automatic method, by describing the design aspects of the handover algorithm interface and the available handover algorithms.
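
For the explicit trigger, a hedged sketch is given below; it uses the LteHelper::HandoverRequest convenience method (employed in the bundled X2 handover examples), which schedules the handover request at the source eNodeB at the given simulation time. The device variables are assumed to come from the usual install calls.

  #include "ns3/core-module.h"
  #include "ns3/lte-module.h"
  #include "ns3/network-module.h"

  using namespace ns3;

  // Hedged sketch: explicitly trigger an X2-based handover at t = 2 s.
  void
  ScheduleManualHandover (Ptr<LteHelper> lteHelper, Ptr<NetDevice> ueDev,
                          Ptr<NetDevice> sourceEnbDev, Ptr<NetDevice> targetEnbDev)
  {
    lteHelper->HandoverRequest (Seconds (2.0), ueDev, sourceEnbDev, targetEnbDev);
  }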

Handover algorithm

Handover in 3GPP LTE has the following properties:

  • UE-assisted

    The UE provides input to the network in the form of measurement reports. This is handled by the UE RRC Measurements Model.

  • Network-controlled

    The network (i.e. the source eNodeB and the target eNodeB) decides when to trigger the handover and oversees its execution.

The handover algorithm operates at the source eNodeB and is responsible for making handover decisions in an “automatic” manner. It interacts with an eNodeB RRC instance via the Handover Management SAP interface. These relationships are illustrated in Figure Relationship between UE measurements and its consumers from the previous section.

The handover algorithm interface consists of the following methods:

  • AddUeMeasReportConfigForHandover

    (Handover Algorithm -> eNodeB RRC) Used by the handover algorithm to request measurement reports from the eNodeB RRC entity, by passing the desired reporting configuration. The configuration will be applied to all future attached UEs.

  • ReportUeMeas

    (eNodeB RRC -> Handover Algorithm) Based on the UE measurements configured earlier in AddUeMeasReportConfigForHandover, UE may submit measurement reports to the eNodeB. The eNodeB RRC entity uses the ReportUeMeas interface to forward these measurement reports to the handover algorithm.

  • TriggerHandover

    (Handover Algorithm -> eNodeB RRC) After examining the measurement reports (but not necessarily), the handover algorithm may declare a handover. This method is used to notify the eNodeB RRC entity about this decision, which will then proceed to commence the handover procedure.

One note regarding AddUeMeasReportConfigForHandover: the method returns the measId (measurement identity) of the newly created measurement configuration. Typically a handover algorithm would store this unique number. It may be useful later in the ReportUeMeas method, for example when more than one configuration has been requested and the handover algorithm needs to differentiate incoming reports based on the configuration that triggered them.

A handover algorithm is implemented by writing a subclass of the LteHandoverAlgorithm abstract superclass and implementing each of the above mentioned SAP interface methods. Users may develop their own handover algorithm this way, and then use it in any simulation by following the steps outlined in Section X2-based handover of the User Documentation.

Alternatively, users may choose to use one of the 3 built-in handover algorithms provided by the LTE module: no-op, A2-A4-RSRQ, and strongest cell handover algorithm. They are ready to be used in simulations or can be taken as an example of implementing a handover algorithm. Each of these built-in algorithms is covered in one of the following subsections.

No-op handover algorithm

The no-op handover algorithm (NoOpHandoverAlgorithm class) is the simplest possible implementation of a handover algorithm. It basically does nothing, i.e., it does not call any of the Handover Management SAP interface methods. Users may choose this handover algorithm if they wish to disable the automatic handover trigger in their simulation.

A2-A4-RSRQ handover algorithm

The A2-A4-RSRQ handover algorithm provides the functionality of the default handover algorithm originally included in LENA M6 (ns-3.18), ported to the Handover Management SAP interface as the A2A4RsrqHandoverAlgorithm class.

As the name implies, the algorithm utilizes the Reference Signal Received Quality (RSRQ) measurements acquired from Event A2 and Event A4. Thus, the algorithm will add 2 measurement configurations to the corresponding eNodeB RRC instance. Their intended uses are described as follows:

  • Event A2 (serving cell’s RSRQ becomes worse than threshold) is leveraged to indicate that the UE is experiencing poor signal quality and may benefit from a handover.
  • Event A4 (neighbour cell’s RSRQ becomes better than threshold) is used to detect neighbouring cells and acquire their corresponding RSRQ from every attached UE, which are then stored internally by the algorithm. By default, the algorithm configures Event A4 with a very low threshold, so that the trigger criteria are always true.

Figure A2-A4-RSRQ handover algorithm below summarizes this procedure.

_images/lte-legacy-handover-algorithm.png

A2-A4-RSRQ handover algorithm

Two attributes can be set to tune the algorithm behaviour:

  • ServingCellThreshold

    The threshold for Event A2, i.e. a UE must have an RSRQ lower than this threshold to be considered for a handover.

  • NeighbourCellOffset

    The offset that aims to ensure that the UE would receive better signal quality after the handover. A neighbouring cell is considered as a target cell for the handover only if its RSRQ is higher than the serving cell’s RSRQ by the amount of this offset.

The values of both attributes are expressed as an RSRQ range (Section 9.1.7 of [TS36133]), i.e. an integer between 0 and 34, with 0 as the lowest RSRQ.
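
A hedged configuration sketch is given below; it assumes the LteHelper methods SetHandoverAlgorithmType and SetHandoverAlgorithmAttribute used in the handover examples, and the UintegerValue attribute type. The chosen values are illustrative.

  #include "ns3/core-module.h"
  #include "ns3/lte-module.h"

  using namespace ns3;

  void
  SelectA2A4RsrqHandoverAlgorithm (Ptr<LteHelper> lteHelper)
  {
    lteHelper->SetHandoverAlgorithmType ("ns3::A2A4RsrqHandoverAlgorithm");
    // Consider a UE for handover when its serving-cell RSRQ range drops below 30.
    lteHelper->SetHandoverAlgorithmAttribute ("ServingCellThreshold",
                                              UintegerValue (30));
    // Require the neighbour to be at least 1 RSRQ range step better.
    lteHelper->SetHandoverAlgorithmAttribute ("NeighbourCellOffset",
                                              UintegerValue (1));
  }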

Strongest cell handover algorithm

The strongest cell handover algorithm, also sometimes known as the traditional power budget (PBGT) algorithm, is developed using [Dimou2009] as a reference. The idea is to provide each UE with the best possible Reference Signal Received Power (RSRP). This is done by performing a handover as soon as a better cell (i.e. with a stronger RSRP) is detected.

Event A3 (neighbour cell’s RSRP becomes better than serving cell’s RSRP) is chosen to realize this concept. The A3RsrpHandoverAlgorithm class is the result of the implementation. Handover is triggered for the UE to the best cell in the measurement report.

A simulation which uses this algorithm is usually more vulnerable to ping-pong handover (consecutive handovers back to the previous source eNodeB within a short period of time), especially when the Fading Model is enabled. This problem is typically tackled by introducing a certain delay to the handover. The algorithm does this by including hysteresis and time-to-trigger parameters (Section 6.3.5 of [TS36331]) in the UE measurements configuration.

Hysteresis (a.k.a. handover margin) delays the handover with respect to RSRP. The value is expressed in dB, ranges from 0 to 15 dB, and has a 0.5 dB accuracy; e.g., an input value of 2.7 dB is rounded to 2.5 dB.

On the other hand, time-to-trigger delays the handover with respect to time. 3GPP defines 16 valid values for time-to-trigger (all in milliseconds): 0, 40, 64, 80, 100, 128, 160, 256, 320, 480, 512, 640, 1024, 1280, 2560, and 5120.

The difference between hysteresis and time-to-trigger is illustrated in Figure Effect of hysteresis and time-to-trigger in strongest cell handover algorithm below, which is taken from the lena-x2-handover-measures example. It depicts the RSRP of the serving cell and of a neighbouring cell, as perceived by a UE which moves past the border between the cells.

_images/lte-strongest-cell-handover-algorithm.png

Effect of hysteresis and time-to-trigger in strongest cell handover algorithm

By default, the algorithm uses a hysteresis of 3.0 dB and time-to-trigger of 256 ms. These values can be tuned through the Hysteresis and TimeToTrigger attributes of the A3RsrpHandoverAlgorithm class.
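
The following hedged sketch shows how these attributes could be tuned when selecting the algorithm through the LteHelper; the DoubleValue and TimeValue attribute types are assumptions, and the values simply restate the defaults.

  #include "ns3/core-module.h"
  #include "ns3/lte-module.h"

  using namespace ns3;

  void
  SelectA3RsrpHandoverAlgorithm (Ptr<LteHelper> lteHelper)
  {
    lteHelper->SetHandoverAlgorithmType ("ns3::A3RsrpHandoverAlgorithm");
    // Larger hysteresis and time-to-trigger values reduce ping-pong handovers.
    lteHelper->SetHandoverAlgorithmAttribute ("Hysteresis", DoubleValue (3.0));
    lteHelper->SetHandoverAlgorithmAttribute ("TimeToTrigger",
                                              TimeValue (MilliSeconds (256)));
  }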

Neighbour Relation

The LTE module supports a simplified Automatic Neighbour Relation (ANR) function. This is handled by the LteAnr class, which interacts with an eNodeB RRC instance through the ANR SAP interface.

Neighbour Relation Table

The ANR holds a Neighbour Relation Table (NRT), similar to the description in Section 22.3.2a of [TS36300]. Each entry in the table is called a Neighbour Relation (NR) and represents a detected neighbouring cell, which contains the following boolean fields:

  • No Remove

    Indicates that the NR shall not be removed from the NRT. This is true by default for user-provided NR and false otherwise.

  • No X2

    Indicates that the NR shall not use an X2 interface in order to initiate procedures towards the eNodeB parenting the target cell. This is false by default for user-provided NR, and true otherwise.

  • No HO

    Indicates that the NR shall not be used by the eNodeB for handover reasons. This is true in most cases, except when the NR is both user-provided and network-detected.

Each NR entry may have at least one of the following properties:

  • User-provided

    This type of NR is created as instructed by the simulation user. For example, an NR is created automatically upon a user-initiated establishment of an X2 connection between two eNodeBs, e.g. as described in Section X2-based handover. Another way to create a user-provided NR is to call the AddNeighbourRelation function explicitly.

  • Network-detected

    This type of NR is automatically created during the simulation as a result of the discovery of a nearby cell.

In order to automatically create network-detected NR, ANR utilizes UE measurements. In other words, ANR is a consumer of UE measurements, as depicted in Figure Relationship between UE measurements and its consumers. RSRQ and Event A4 (neighbour becomes better than threshold) are used for the reporting configuration. The default Event A4 threshold is set to the lowest possible, i.e., maximum detection capability, but can be changed by setting the Threshold attribute of LteAnr class. Note that the A2-A4-RSRQ handover algorithm also utilizes a similar reporting configuration. Despite the similarity, when both ANR and this handover algorithm are active in the eNodeB, they use separate reporting configuration.

Also note that automatic setup of the X2 interface is not supported. This is the reason why the No X2 and No HO fields are true in an NR that is network-detected but not user-provided.

Role of ANR in Simulation

The ANR SAP interface provides the means of communication between ANR and eNodeB RRC. Some interface functions are used by eNodeB RRC to interact with the NRT, as shown below:

  • AddNeighbourRelation

    (eNodeB RRC -> ANR) Add a new user-provided NR entry into the NRT.

  • GetNoRemove

    (eNodeB RRC -> ANR) Get the value of No Remove field of an NR entry of the given cell ID.

  • GetNoHo

    (eNodeB RRC -> ANR) Get the value of No HO field of an NR entry of the given cell ID.

  • GetNoX2

    (eNodeB RRC -> ANR) Get the value of No X2 field of an NR entry of the given cell ID.

Other interface functions exist to support the role of ANR as a UE measurements consumer, as listed below:

  • AddUeMeasReportConfigForAnr

    (ANR -> eNodeB RRC) Used by the ANR to request measurement reports from the eNodeB RRC entity, by passing the desired reporting configuration. The configuration will be applied to all future attached UEs.

  • ReportUeMeas

    (eNodeB RRC -> ANR) Based on the UE measurements configured earlier in AddUeMeasReportConfigForAnr, UE may submit measurement reports to the eNodeB. The eNodeB RRC entity uses the ReportUeMeas interface to forward these measurement reports to the ANR.

Please refer to the corresponding API documentation for LteAnrSap class for more details on the usage and the required parameters.

The ANR is utilized by the eNodeB RRC instance as a data structure to keep track of the situation of nearby neighbouring cells. The ANR also helps the eNodeB RRC instance to determine whether it is possible to execute a handover procedure to a neighbouring cell. This is realized by the fact that eNodeB RRC will only allow a handover procedure to happen if the NR entry of the target cell has both No HO and No X2 fields set to false.

ANR is enabled by default in every eNodeB instance in the simulation. It can be disabled by setting the AnrEnabled attribute in LteHelper class to false.
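
As a hedged example, ANR can be disabled, or its detection threshold changed, as sketched below; the BooleanValue and UintegerValue attribute types are assumptions, while the attribute names (AnrEnabled and Threshold) are those mentioned above.

  #include "ns3/core-module.h"
  #include "ns3/lte-module.h"

  using namespace ns3;

  void
  ConfigureAnr (Ptr<LteHelper> lteHelper)
  {
    // Raise the Event A4 threshold used by ANR to detect neighbouring cells.
    Config::SetDefault ("ns3::LteAnr::Threshold", UintegerValue (4));
    // Alternatively, disable ANR for all eNodeBs created by this helper.
    lteHelper->SetAttribute ("AnrEnabled", BooleanValue (false));
  }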

RRC sequence diagrams

In this section we provide some sequence diagrams that explain the most important RRC procedures being modeled.

RRC connection establishment

Figure Sequence diagram of the RRC Connection Establishment procedure shows how the RRC Connection Establishment procedure is modeled, highlighting the role of the RRC layer at both the UE and the eNB, as well as the interaction with the other layers.

_images/rrc-connection-establishment.png

Sequence diagram of the RRC Connection Establishment procedure

There are several timeouts related to this procedure, which are listed in the following Table Timers in RRC connection establishment procedure. If any of these timers expires, the RRC connection establishment procedure terminates in failure. In this case, the upper layer (UE NAS) will immediately retry the procedure until it completes successfully.

Timers in RRC connection establishment procedure

  • Connection request timeout
    • Location: eNodeB RRC
    • Timer starts: new UE context added
    • Timer stops: reception of RRC CONNECTION REQUEST
    • Default duration: 15 ms
    • On expiry: remove the UE context
  • Connection timeout (T300 timer)
    • Location: UE RRC
    • Timer starts: RRC CONNECTION REQUEST sent
    • Timer stops: reception of RRC CONNECTION SETUP or RRC CONNECTION REJECT
    • Default duration: 100 ms
    • On expiry: reset the UE MAC
  • Connection setup timeout
    • Location: eNodeB RRC
    • Timer starts: RRC CONNECTION SETUP sent
    • Timer stops: reception of RRC CONNECTION SETUP COMPLETE
    • Default duration: 100 ms
    • On expiry: remove the UE context
  • Connection rejected timeout
    • Location: eNodeB RRC
    • Timer starts: RRC CONNECTION REJECT sent
    • Timer stops: never
    • Default duration: 30 ms
    • On expiry: remove the UE context
RRC connection reconfiguration

Figure Sequence diagram of the RRC Connection Reconfiguration procedure shows how the RRC Connection Reconfiguration procedure is modeled for the case where MobilityControlInfo is not provided, i.e., handover is not performed.

_images/rrc-connection-reconfiguration.png

Sequence diagram of the RRC Connection Reconfiguration procedure

Figure Sequence diagram of the RRC Connection Reconfiguration procedure for the handover case shows how the RRC Connection Reconfiguration procedure is modeled for the case where MobilityControlInfo is provided, i.e., handover is to be performed. As specified in [TS36331], after receiving the handover message, the UE attempts to access the target cell at the first available RACH occasion according to Random Access resource selection defined in [TS36321], i.e. the handover is asynchronous. Consequently, when allocating a dedicated preamble for the random access in the target cell, E-UTRA shall ensure it is available from the first RACH occasion the UE may use. Upon successful completion of the handover, the UE sends a message used to confirm the handover. Note that the random access procedure in this case is non-contention based, hence in a real LTE system it differs slightly from the one used in RRC connection establishment. Also note that the RA Preamble ID is signalled via the Handover Command included in the X2 Handover Request ACK message sent from the target eNB to the source eNB; in particular, the preamble is included in the RACH-ConfigDedicated IE which is part of MobilityControlInfo.

_images/rrc-connection-reconfiguration-handover.png

Sequence diagram of the RRC Connection Reconfiguration procedure for the handover case

RRC protocol models

As mentioned previously, we provide two different models for the transmission and reception of RRC messages: Ideal and Real. Each of them is described in one of the following subsections.

Ideal RRC protocol model

According to this model, implemented in the classes LteUeRrcProtocolIdeal and LteEnbRrcProtocolIdeal, all RRC messages and information elements are transmitted between the eNB and the UE in an ideal fashion, without consuming radio resources and without errors. From an implementation point of view, this is achieved by passing the RRC data structure directly between the UE and eNB RRC entities, without involving the lower layers (PDCP, RLC, MAC, scheduler).

Real RRC protocol model

This model is implemented in the classes LteUeRrcProtocolReal and LteEnbRrcProtocolReal and aims at modeling the transmission of RRC PDUs as commonly performed in real LTE systems. In particular:

  • for every RRC message being sent, a real RRC PDU is created following the ASN.1 encoding of RRC PDUs and information elements (IEs) specified in [TS36331]. Some simplifications are made with respect to the IEs included in the PDU, i.e., only those IEs that are useful for simulation purposes are included. For a detailed list, please see the IEs defined in lte-rrc-sap.h and compare with [TS36331].
  • the encoded RRC PDUs are sent on Signaling Radio Bearers and are subject to the same transmission modeling used for data communications, thus including scheduling, radio resource consumption, channel errors, delays, retransmissions, etc.
Signaling Radio Bearer model

We now describe the Signaling Radio Bearer model that is used for the Real RRC protocol model.

  • SRB0 messages (over CCCH):

    • RrcConnectionRequest: in real LTE systems, this is an RLC TM SDU sent over resources specified in the UL Grant in the RAR (not in UL DCIs); the reason is that the C-RNTI is not known yet at this stage. In the simulator, this is modeled as a real RLC TM PDU whose UL resources are allocated by the scheduler upon a call to SCHED_DL_RACH_INFO_REQ.
    • RrcConnectionSetup: in the simulator this is implemented as in real LTE systems, i.e., with an RLC TM SDU sent over resources indicated by a regular DL DCI, allocated with SCHED_DL_RLC_BUFFER_REQ triggered by the RLC TM instance that is mapped to LCID 0 (the CCCH).
  • SRB1 messages (over DCCH):

    • All the SRB1 messages modeled in the simulator (e.g., RrcConnectionSetupCompleted) are implemented as in real LTE systems, i.e., with a real RLC SDU sent over RLC AM using DL resources allocated via Buffer Status Reports. See the RLC model documentation for details.
  • SRB2 messages (over DCCH):

    • According to [TS36331], “SRB1 is for RRC messages (which may include a piggybacked NAS message) as well as for NAS messages prior to the establishment of SRB2, all using DCCH logical channel”, whereas “SRB2 is for NAS messages, using DCCH logical channel” and “SRB2 has a lower-priority than SRB1 and is always configured by E-UTRAN after security activation”. Modeling security-related aspects is not a requirement of the LTE simulation model, hence we always use SRB1 and never activate SRB2.
ASN.1 encoding of RRC IE’s

The messages defined in the RRC SAP, common to all Ue/Enb SAP Users/Providers, are transported in a transparent container to/from a Ue/Enb. The encoding format for the different Information Elements is specified in [TS36331], using ASN.1 rules in the unaligned variant. The implementation in ns-3 LTE has been divided into the following classes:

  • Asn1Header : Contains the encoding / decoding of basic ASN types
  • RrcAsn1Header : Inherits Asn1Header and contains the encoding / decoding of common IE’s defined in [TS36331]
  • Rrc specific messages/IEs classes : A class for each of the messages defined in RRC SAP header
Asn1Header class - Implementation of base ASN.1 types

This class implements the methods to Serialize / Deserialize the ASN.1 types being used in [TS36331], according to the packed encoding rules in ITU-T X.691. The types considered are:

  • Boolean : a boolean value uses a single bit (1=true, 0=false).
  • Integer : a constrained integer (with min and max values defined) uses the minimum amount of bits to encode its range (max-min+1).
  • Bitstring : a bitstring will be copied bit by bit to the serialization buffer.
  • Octetstring : not being currently used.
  • Sequence : the sequence generates a preamble indicating the presence of optional and default fields. It also adds a bit indicating the presence of extension marker.
  • Sequence...Of : the sequence...of type encodes the number of elements of the sequence as an integer (the subsequent elements will need to be encoded afterwards).
  • Choice : indicates which element among the ones in the choice set is being encoded.
  • Enumeration : is serialized as an integer indicating which value is used, among the ones in the enumeration, with the number of elements in the enumeration as upper bound.
  • Null : the null value is not encoded, although its serialization function is defined to provide a clearer map between specification and implementation.

The class inherits from the ns-3 Header class, but the Deserialize() function is declared pure virtual, so inherited classes have to implement it. The reason is that deserialization will retrieve the elements in RRC messages, each of them containing different information elements.

Additionally, it has to be noted that the resulting byte length of a specific type/message can vary, according to the presence of optional fields and due to the optimized encoding. Hence, the serialized bits are processed using the PreSerialize() function, which saves the result in the m_serializationResult Buffer. As the methods to read/write an ns-3 buffer operate on a byte basis, the serialized bits are stored in the m_serializationPendingBits attribute until 8 bits are accumulated and can be written to the buffer iterator. Finally, when Serialize() is invoked, the contents of the m_serializationResult attribute are copied to the Buffer::Iterator parameter.
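
To illustrate the encoding rule for constrained integers mentioned above, the following is a standalone, hedged sketch of unaligned PER packing; it is not the actual Asn1Header code, only a demonstration of the rule that (value - min) is written using the minimum number of bits able to represent the range max - min + 1.

  #include <cstdint>
  #include <vector>

  // Append the bits of a constrained integer (MSB first) to a bit vector.
  // For simplicity, ranges up to 2^31 are assumed.
  void
  SerializeConstrainedInteger (std::vector<bool> &bits,
                               uint32_t value, uint32_t min, uint32_t max)
  {
    uint32_t range = max - min + 1;
    uint32_t nBits = 0;
    while ((1u << nBits) < range) // number of bits needed to cover the range
      {
        ++nBits;
      }
    uint32_t toEncode = value - min;
    for (int i = static_cast<int> (nBits) - 1; i >= 0; --i)
      {
        bits.push_back (((toEncode >> i) & 1) != 0);
      }
  }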

RrcAsn1Header : Common IEs

As some Information Elements are being used for several RRC messages, this class implements the following common IE’s:

  • SrbToAddModList
  • DrbToAddModList
  • LogicalChannelConfig
  • RadioResourceConfigDedicated
  • PhysicalConfigDedicated
  • SystemInformationBlockType1
  • SystemInformationBlockType2
  • RadioResourceConfigCommonSIB
Rrc specific messages/IEs classes

The following RRC SAP messages have been implemented:

  • RrcConnectionRequest
  • RrcConnectionSetup
  • RrcConnectionSetupCompleted
  • RrcConnectionReconfiguration
  • RrcConnectionReconfigurationCompleted
  • HandoverPreparationInfo
  • RrcConnectionReestablishmentRequest
  • RrcConnectionReestablishment
  • RrcConnectionReestablishmentComplete
  • RrcConnectionReestablishmentReject
  • RrcConnectionRelease

NAS

The focus of the LTE-EPC model is on the NAS Active state, which corresponds to EMM Registered, ECM connected, and RRC connected. Because of this, the following simplifications are made:

  • EMM and ECM are not modeled explicitly; instead, the NAS entity at the UE will interact directly with the MME to perform actions that are equivalent (with gross simplifications) to taking the UE to the states EMM Connected and ECM Connected;
  • the NAS also takes care of multiplexing uplink data packets coming from the upper layers into the appropriate EPS bearer by using the Traffic Flow Template classifier (TftClassifier).
  • the NAS does not support PLMN and CSG selection
  • the NAS does not support any location update/paging procedure in idle mode

Figure Sequence diagram of the attach procedure shows how the simplified NAS model implements the attach procedure. Note that both the default EPS bearer and any dedicated EPS bearers are activated as part of this procedure.

_images/nas-attach.png

Sequence diagram of the attach procedure

S1

S1-U

The S1-U interface is modeled in a realistic way by encapsulating data packets over GTP/UDP/IP, as done in real LTE-EPC systems. The corresponding protocol stack is shown in Figure LTE-EPC data plane protocol stack. As shown in the figure, there are two different layers of IP networking. The first one is the end-to-end layer, which provides end-to-end connectivity to the users; this layer involves the UEs, the PGW and the remote host (including any internet routers and hosts in between), but does not involve the eNB. By default, UEs are assigned a public IPv4 address in the 7.0.0.0/8 network, and the PGW gets the address 7.0.0.1, which is used by all UEs as the gateway to reach the internet.

The second layer of IP networking is the EPC local area network. This involves all eNB nodes and the SGW/PGW node. This network is implemented as a set of point-to-point links which connect each eNB with the SGW/PGW node; thus, the SGW/PGW has a set of point-to-point devices, each providing connectivity to a different eNB. By default, a 10.x.y.z/30 subnet is assigned to each point-to-point link (a /30 subnet is the smallest subnet that allows for two distinct host addresses).
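
The addressing scheme just described is set up automatically by the EPC helper; the hedged snippet below recalls the typical pattern from the EPC examples, where UEs receive their end-to-end addresses from the helper and use the PGW as their default gateway. It assumes that the EPC helper was already passed to the LTE helper before the devices were installed, and the variable names are illustrative.

  #include "ns3/internet-module.h"
  #include "ns3/lte-module.h"
  #include "ns3/network-module.h"

  using namespace ns3;

  // Hedged sketch: assign UE addresses (7.0.0.0/8 by default) and point the
  // UEs' default route at the PGW (7.0.0.1 by default).
  void
  SetupUeIpAddressing (Ptr<PointToPointEpcHelper> epcHelper,
                       NodeContainer ueNodes, NetDeviceContainer ueDevs)
  {
    InternetStackHelper internet;
    internet.Install (ueNodes);

    // End-to-end addresses from the UE subnet.
    Ipv4InterfaceContainer ueIpIfaces = epcHelper->AssignUeIpv4Address (ueDevs);

    Ipv4StaticRoutingHelper routingHelper;
    for (uint32_t u = 0; u < ueNodes.GetN (); ++u)
      {
        Ptr<Ipv4StaticRouting> ueStaticRouting =
            routingHelper.GetStaticRouting (ueNodes.Get (u)->GetObject<Ipv4> ());
        ueStaticRouting->SetDefaultRoute (epcHelper->GetUeDefaultGatewayAddress (), 1);
      }
  }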

As specified by 3GPP, the end-to-end IP communication is tunneled over the local EPC IP network using GTP/UDP/IP. In the following, we explain how this tunneling is implemented in the EPC model by discussing the end-to-end flow of data packets.

_images/epc-data-flow-dl.png

Data flow in the downlink between the internet and the UE

To begin with, we consider the case of the downlink, which is depicted in Figure Data flow in the downlink between the internet and the UE. Downlink IPv4 packets are generated from a generic remote host, and addressed to one of the UE devices. Internet routing will take care of forwarding the packet to the generic NetDevice of the SGW/PGW node which is connected to the internet (this is the Gi interface according to 3GPP terminology). The SGW/PGW has a VirtualNetDevice which is assigned the gateway IP address of the UE subnet; hence, static routing rules will cause the incoming packet from the internet to be routed through this VirtualNetDevice. This device starts the GTP/UDP/IP tunneling procedure by forwarding the packet to a dedicated application in the SGW/PGW node which is called EpcSgwPgwApplication. This application does the following operations:

  1. it determines the eNB node to which the UE is attached, by looking at the IP destination address (which is the address of the UE);
  2. it classifies the packet using Traffic Flow Templates (TFTs) to identify to which EPS Bearer it belongs. EPS bearers have a one-to-one mapping to S1-U Bearers, so this operation returns the GTP-U Tunnel Endpoint Identifier (TEID) to which the packet belongs;
  3. it adds the corresponding GTP-U protocol header to the packet;
  4. finally, it sends the packet over a UDP socket to the S1-U point-to-point NetDevice, addressed to the eNB to which the UE is attached.

As a consequence, the end-to-end IP packet with newly added IP, UDP and GTP headers is sent through one of the S1 links to the eNB, where it is received and delivered locally (as the destination address of the outermost IP header matches the eNB IP address). The local delivery process will forward the packet, via a UDP socket, to a dedicated application called EpcEnbApplication. This application then performs the following operations:

  1. it removes the GTP header and retrieves the TEID which is contained in it;
  2. leveraging on the one-to-one mapping between S1-U bearers and Radio Bearers (which is a 3GPP requirement), it determines the Bearer ID (BID) to which the packet belongs;
  3. it records the BID in a dedicated tag called EpsBearerTag, which is added to the packet;
  4. it forwards the packet to the LteEnbNetDevice of the eNB node via a raw packet socket

Note that, at this point, the outermost header of the packet is the end-to-end IP header, since the IP/UDP/GTP headers of the S1 protocol stack have already been stripped. Upon reception of the packet from the EpcEnbApplication, the LteEnbNetDevice will retrieve the BID from the EpsBearerTag, and based on the BID will determine the Radio Bearer instance (and the corresponding PDCP and RLC protocol instances) which are then used to forward the packet to the UE over the LTE radio interface. Finally, the LteUeNetDevice of the UE will receive the packet and deliver it locally to the IP protocol stack, which will in turn deliver it to the application of the UE, which is the end point of the downlink communication.

_images/epc-data-flow-ul.png

Data flow in the uplink between the UE and the internet

The case of the uplink is depicted in Figure Data flow in the uplink between the UE and the internet. Uplink IP packets are generated by a generic application inside the UE, and forwarded by the local TCP/IP stack to the LteUeNetDevice of the UE. The LteUeNetDevice then performs the following operations:

  1. it classifies the packet using TFTs and determines the Radio Bearer to which the packet belongs (and the corresponding RBID);
  2. it identifies the corresponding PDCP protocol instance, which is the entry point of the LTE Radio Protocol stack for this packet;
  3. it sends the packet to the eNB over the LTE Radio Protocol stack.

The eNB receives the packet via its LteEnbNetDevice. Since there is a single PDCP and RLC protocol instance for each Radio Bearer, the LteEnbNetDevice is able to determine the BID of the packet. This BID is then recorded onto an EpsBearerTag, which is added to the packet. The LteEnbNetDevice then forwards the packet to the EpcEnbApplication via a raw packet socket.

Upon receiving the packet, the EpcEnbApplication performs the following operations:

  1. it retrieves the BID from the EpsBearerTag in the packet;
  2. it determines the corresponding EPS Bearer instance and GTP-U TEID by leveraging on the one-to-one mapping between S1-U bearers and Radio Bearers;
  3. it adds a GTP-U header to the packet, including the TEID determined previously;
  4. it sends the packet to the SGW/PGW node via the UDP socket connected to the S1-U point-to-point net device.

At this point, the packet contains the S1-U IP, UDP and GTP headers in addition to the original end-to-end IP header. When the packet is received by the corresponding S1-U point-to-point NetDevice of the SGW/PGW node, it is delivered locally (as the destination address of the outermost IP header matches the address of the point-to-point net device). The local delivery process will forward the packet to the EpcSgwPgwApplication via the corresponding UDP socket. The EpcSgwPgwApplication then removes the GTP header and forwards the packet to the VirtualNetDevice. At this point, the outermost header of the packet is the end-to-end IP header. Hence, if the destination address within this header is a remote host on the internet, the packet is sent to the internet via the corresponding NetDevice of the SGW/PGW. In the event that the packet is addressed to another UE, the IP stack of the SGW/PGW will redirect the packet again to the VirtualNetDevice, and the packet will go through the downlink delivery process in order to reach its destination UE.

Note that the EPS Bearer QoS is not enforced on the S1-U links; it is assumed that the overprovisioning of the link bandwidth is sufficient to meet the QoS requirements of all bearers.

S1AP

The S1-AP interface provides control plane interaction between the eNB and the MME. In the simulator, this interface is modeled in an ideal fashion, with direct interaction between the eNB and the MME objects, without actually implementing the encoding of S1AP messages and information elements specified in [TS36413] and without actually transmitting any PDU on any link.

The S1-AP primitives that are modeled are:

  • INITIAL UE MESSAGE
  • INITIAL CONTEXT SETUP REQUEST
  • INITIAL CONTEXT SETUP RESPONSE
  • PATH SWITCH REQUEST
  • PATH SWITCH REQUEST ACKNOWLEDGE

X2

The X2 interface interconnects two eNBs [TS36420]. From a logical point of view, the X2 interface is a point-to-point interface between the two eNBs. In a real E-UTRAN, the logical point-to-point interface should be feasible even in the absence of a physical direct connection between the two eNBs. In the X2 model implemented in the simulator, the X2 interface is a point-to-point link between the two eNBs. A point-to-point device is created in both eNBs and the two point-to-point devices are attached to the point-to-point link.

For a representation of how the X2 interface fits in the overall architecture of the LENA simulation model, the reader is referred to the figure Overview of the LTE-EPC simulation model.

The X2 interface implemented in the simulator provides detailed implementation of the following elementary procedures of the Mobility Management functionality [TS36423]:

  • Handover Request procedure
  • Handover Request Acknowledgement procedure
  • SN Status Transfer procedure
  • UE Context Release procedure

These procedures are involved in the X2-based handover. You can find the detailed description of the handover in section 10.1.2.1 of [TS36300]. We note that the simulator model currently supports only the seamless handover as defined in Section 2.6.3.1 of [Sesia2009]; in particular, lossless handover as described in Section 2.6.3.2 of [Sesia2009] is not supported at the time of this writing.

Figure Sequence diagram of the X2-based handover below shows the interaction of the entities of the X2 model in the simulator. The shaded labels indicate the moments when the UE or eNodeB transition to another RRC state.

_images/lte-epc-x2-handover-seq-diagram.png

Sequence diagram of the X2-based handover

The figure also shows two timers within the handover procedure: the handover leaving timer is maintained by the source eNodeB, while the handover joining timer is maintained by the target eNodeB. The duration of the timers can be configured through the HandoverLeavingTimeoutDuration and HandoverJoiningTimeoutDuration attributes of the respective LteEnbRrc instances. When one of these timers expires, the handover procedure is considered to have failed.

However, the current version of the LTE module does not properly handle handover failure. Users should tune the simulation properly in order to avoid handover failure; otherwise, unexpected behaviour may occur. Please refer to Section Tuning simulation with handover of the User Documentation for some tips regarding this matter.
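For instance, assuming the timer durations are exposed as Time-valued attributes (the values below are arbitrary examples, not recommended settings), they can be changed as follows before the eNB devices are created:

Config::SetDefault ("ns3::LteEnbRrc::HandoverJoiningTimeoutDuration", TimeValue (MilliSeconds (200)));
Config::SetDefault ("ns3::LteEnbRrc::HandoverLeavingTimeoutDuration", TimeValue (MilliSeconds (500)));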

The X2 model is an entity that uses services from:

  • the X2 interfaces,
    • They are implemented as Sockets on top of the point-to-point devices.
    • They are used to send/receive X2 messages through the X2-C and X2-U interfaces (i.e. the point-to-point device attached to the point-to-point link) towards the peer eNB.
  • the S1 application.
    • Currently, it is the EpcEnbApplication.
    • It is used to get some information needed for the Elementary Procedures of the X2 messages.

and it provides services to:

  • the RRC entity (X2 SAP)
    • to send/receive RRC messages. The X2 entity sends the RRC message as a transparent container in the X2 message. This RRC message is sent to the UE.

Figure Implementation Model of X2 entity and SAPs shows the implementation model of the X2 entity and its relationship with all the other entities and services in the protocol stack.

_images/lte-epc-x2-entity-saps.png

Implementation Model of X2 entity and SAPs

The RRC entity manages the initiation of the handover procedure. This is done in the Handover Management submodule of the eNB RRC entity. The target eNB may perform some Admission Control procedures. This is done in the Admission Control submodule. Initially, this submodule will accept any handover request.

X2 interfaces

The X2 model contains two interfaces:

  • the X2-C interface. It is the control interface and it is used to send the X2-AP PDUs (i.e. the elementary procedures).
  • the X2-U interface. It is used to send the bearer data when there is DL forwarding.

Figure X2 interface protocol stacks shows the protocol stacks of the X2-U interface and X2-C interface modeled in the simulator.

_images/lte-epc-x2-interface.png

X2 interface protocol stacks

X2-C

The X2-C interface is the control part of the X2 interface and it is used to send the X2-AP PDUs (i.e. the elementary procedures).

In the original X2 interface control plane protocol stack, SCTP is used as the transport protocol; however, SCTP is currently not modeled in the ns-3 simulator and its implementation is out of the scope of this project, so UDP is used instead as the datagram-oriented transport protocol.

X2-U

The X2-U interface is used to send the bearer data when there is DL forwarding during the execution of the X2-based handover procedure. Similarly to what is done for the S1-U interface, data packets are encapsulated over GTP/UDP/IP when being sent over this interface. Note that the EPS Bearer QoS is not enforced on the X2-U links; it is assumed that overprovisioning of the link bandwidth is sufficient to meet the QoS requirements of all bearers.

X2 Service Interface

The X2 service interface is used by the RRC entity to send and receive messages of the X2 procedures. It is divided into two parts:

  • the EpcX2SapProvider part is provided by the X2 entity and used by the RRC entity and
  • the EpcX2SapUser part is provided by the RRC entity and used by the X2 entity.

The primitives that are supported in our X2-C model are described in the following subsections.

X2-C primitives for handover execution

The following primitives are used for the X2-based handover:

  • HANDOVER REQUEST
  • HANDOVER REQUEST ACK
  • HANDOVER PREPARATION FAILURE
  • SN STATUS TRANSFER
  • UE CONTEXT RELEASE

All of the above primitives are used by the currently implemented RRC model during the preparation and execution of the handover procedure. Their usage interacts with the RRC state machine; therefore, they are not meant to be used for code customization, unless it is desired to modify the RRC state machine.

X2-C SON primitives

The following primitives can be used to implement Self-Organized Network (SON) functionalities:

  • LOAD INFORMATION
  • RESOURCE STATUS UPDATE

Note that the current RRC model does not actually use these primitives; they are included in the model just to make it possible to develop SON algorithms within the RRC logic that make use of them.

As a first example, we show here how the load information primitive can be used. We assume that the LteEnbRrc has been modified to include the following new member variables:

std::vector<EpcX2Sap::UlInterferenceOverloadIndicationItem>
  m_currentUlInterferenceOverloadIndicationList;
std::vector <EpcX2Sap::UlHighInterferenceInformationItem>
  m_currentUlHighInterferenceInformationList;
EpcX2Sap::RelativeNarrowbandTxBand m_currentRelativeNarrowbandTxBand;

For a detailed description of the types of these variables, we suggest consulting the file epc-x2-sap.h, the corresponding Doxygen documentation, and the references therein to the relevant sections of 3GPP TS 36.423. Now, assume that at run time these variables have been set to meaningful values following the specifications just mentioned. Then, you can add the following code in the LteEnbRrc class implementation in order to send a load information primitive:

EpcX2Sap::CellInformationItem cii;
cii.sourceCellId = m_cellId;
cii.ulInterferenceOverloadIndicationList = m_currentUlInterferenceOverloadIndicationList;
cii.ulHighInterferenceInformationList = m_currentUlHighInterferenceInformationList;
cii.relativeNarrowbandTxBand = m_currentRelativeNarrowbandTxBand;

EpcX2Sap::LoadInformationParams params;
params.targetCellId = cellId;
params.cellInformationList.push_back (cii);
m_x2SapProvider->SendLoadInformation (params);

The above code allows the source eNB to send the message. The method LteEnbRrc::DoRecvLoadInformation will be called when the target eNB receives the message. The desired processing of the load information should therefore be implemented within that method.

In the following second example we show how the resource status update primitive is used. We assume that the LteEnbRrc has been modified to include the following new member variable:

EpcX2Sap::CellMeasurementResultItem m_cmri;

Similarly to before, we refer to epc-x2-sap.h and the references therein for detailed information about this variable type. Again, we assume that the variable has already been set to a meaningful value. Then, you can add the following code in order to send a resource status update:

EpcX2Sap::ResourceStatusUpdateParams params;
params.targetCellId = cellId;
params.cellMeasurementResultList.push_back (m_cmri);
m_x2SapProvider->SendResourceStatusUpdate (params);

The method LteEnbRrc::DoRecvResourceStatusUpdate will be called when the target eNB receives the resource status update message. The desired processing of this message should therefore be implemented within that method.

Finally, we note that the setting and processing of the appropriate values for the variables passed to the above described primitives is deemed to be specific to the SON algorithm being implemented, and hence is not covered by this documentation.

Unsupported primitives

Mobility Robustness Optimization primitives such as Radio Link Failure indication and Handover Report are not supported at this stage.

S11

The S11 interface provides control plane interaction between the SGW and the MME using the GTPv2-C protocol specified in [TS29274]. In the simulator, this interface is modeled in an ideal fashion, with direct interaction between the SGW and the MME objects, without actually implementing the encoding of the messages and without actually transmitting any PDU on any link.

The S11 primitives that are modeled are:

  • CREATE SESSION REQUEST
  • CREATE SESSION RESPONSE
  • MODIFY BEARER REQUEST
  • MODIFY BEARER RESPONSE

Of these primitives, the first two are used upon initial UE attachment for the establishment of the S1-U bearers; the other two are used during handover to switch the S1-U bearers from the source eNB to the target eNB as a consequence of the reception by the MME of a PATH SWITCH REQUEST S1-AP message.

Power Control

This section describes the ns-3 implementation of Downlink and Uplink Power Control.

Fractional Frequency Reuse

Overview

This section describes the ns-3 support for Fractional Frequency Reuse algorithms. All implemented algorithms are described in [ASHamza2013]. Currently 7 FR algorithms are implemented:

  • ns3::LteFrNoOpAlgorithm
  • ns3::LteFrHardAlgorithm
  • ns3::LteFrStrictAlgorithm
  • ns3::LteFrSoftAlgorithm
  • ns3::LteFfrSoftAlgorithm
  • ns3::LteFfrEnhancedAlgorithm
  • ns3::LteFfrDistributedAlgorithm

A new LteFfrAlgorithm class was created as the abstract base class for the implementation of Frequency Reuse algorithms. Also, two new SAPs, one between the FR entity and the Scheduler and one between the FR entity and the RRC, were added.
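As a usage sketch, a specific FR algorithm is selected through the LteHelper before the eNB devices are installed; the SetFfrAlgorithmType method used below is the LteHelper method intended for this purpose, and enbNodes is assumed to be a NodeContainer holding the eNB nodes:

Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();
lteHelper->SetFfrAlgorithmType ("ns3::LteFrHardAlgorithm");  // any of the algorithms listed above
NetDeviceContainer enbDevs = lteHelper->InstallEnbDevice (enbNodes);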

_images/lte-ffr-scheduling.png

Sequence diagram of Scheduling with FR algorithm

Figure Sequence diagram of Scheduling with FR algorithm shows the sequence diagram of the scheduling process with an FR algorithm. At the beginning of the scheduling process, the scheduler asks the FR entity for the available RBGs. Depending on its implementation, the FR entity returns either all RBGs available in the cell or a subset filtered according to its policy. Then, when trying to assign an RBG to a UE, the scheduler asks the FR entity whether this RBG is allowed for this UE. If the FR entity returns true, the scheduler can assign this RBG to this UE; if not, the scheduler checks another RBG for this UE. Again, the FR response depends on the implementation and the policy applied to the UE.

Supported FR algorithms
No Frequency Reuse

The NoOp FR algorithm (LteFrNoOpAlgorithm class) is an implementation of the Full Frequency Reuse scheme, which means that no frequency partitioning is performed between eNBs of the same network (frequency reuse factor, FRF, equals 1). Each eNB uses the entire system bandwidth and transmits with uniform power over all RBGs. It is the simplest scheme and the basic way of operating an LTE network. This scheme allows for achieving a high peak data rate; on the other hand, due to heavy interference from neighbouring cells, cell-edge user performance is greatly limited.

Figure Full Frequency Reuse scheme below presents frequency and power plan for Full Frequency Reuse scheme.

_images/fr-full-frequency-reuse-scheme.png

Full Frequency Reuse scheme

In ns-3, the NoOp FR algorithm always allows the scheduler to use the full bandwidth and allows all UEs to use any RBG. It simply does nothing new (i.e., it does not limit the eNB bandwidth; the FR algorithm is effectively disabled). It is the simplest implementation of the FrAlgorithm class and is installed in the eNB by default.

Hard Frequency Reuse

The Hard Frequency Reuse algorithm provides the simplest scheme that reduces the inter-cell interference level. In this scheme the whole frequency bandwidth is divided into a few (typically 3, 4, or 7) disjoint sub-bands, and adjacent eNBs are allocated different sub-bands. The frequency reuse factor equals the number of sub-bands. This scheme significantly reduces ICI at the cell edge, so the performance of cell-edge users is improved; however, since each eNB uses only one part of the whole bandwidth, the peak data rate is also reduced by a factor equal to the reuse factor.

Figure Hard Frequency Reuse scheme below presents frequency and power plan for Hard Frequency Reuse scheme.

_images/fr-hard-frequency-reuse-scheme.png

Hard Frequency Reuse scheme

In our implementation, the Hard FR algorithm simply keeps a vector of the RBGs available to the eNB and passes it to the MAC Scheduler during the scheduling functions. When the scheduler asks whether an RBG is allowed for a specific UE, it always returns true.

Strict Frequency Reuse

The Strict Frequency Reuse scheme is a combination of the Full and Hard Frequency Reuse schemes. It consists of dividing the system bandwidth into two parts with different frequency reuse. One common sub-band of the system bandwidth is used in the interior of each cell (frequency reuse-1), while the other part of the bandwidth is divided among the neighboring eNBs as in hard frequency reuse (frequency reuse-N, N>1), in order to create one sub-band with a low inter-cell interference level in each sector. Center UEs are granted the fully-reused frequency chunks, while cell-edge UEs get the orthogonal chunks. This means that interior UEs from one cell do not share any spectrum with edge UEs from a neighboring cell, which reduces interference for both. As can be noticed, Strict FR requires a total of N + 1 sub-bands, and allows achieving an RFR in the middle between 1 and 3.

Figure Strict Frequency Reuse scheme below presents frequency and power plan for Strict Frequency Reuse scheme with a cell-edge reuse factor of N = 3.

_images/fr-strict-frequency-reuse-scheme.png

Strict Frequency Reuse scheme

In our implementation, the Strict FR algorithm has two maps, one for each sub-band. If a UE can be served within the private sub-band, its RNTI is added to the m_privateSubBandUe map. If a UE can be served within the common sub-band, its RNTI is added to the m_commonSubBandUe map. The Strict FR algorithm needs to decide within which sub-band each UE should be served. It uses the UE measurements provided by the RRC and compares them with a signal quality threshold (this parameter can be easily tuned via the attribute mechanism). The threshold influences the interior-to-cell-radius ratio.
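For illustration only, assuming the threshold is exposed as an attribute of the algorithm class (the attribute name RsrqThreshold and the value below are assumptions; consult the LteFrStrictAlgorithm Doxygen documentation for the exact name and type), it could be tuned through the helper as follows:

lteHelper->SetFfrAlgorithmType ("ns3::LteFrStrictAlgorithm");
lteHelper->SetFfrAlgorithmAttribute ("RsrqThreshold", UintegerValue (25));  // hypothetical attribute name and value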

Soft Frequency Reuse

In the Soft Frequency Reuse (SFR) scheme each eNB transmits over the entire system bandwidth, but there are two sub-bands within which UEs are served with different power levels. Since cell-center UEs share the bandwidth with neighboring cells, they usually transmit at a lower power level than the cell-edge UEs. SFR is more bandwidth efficient than Strict FR, because it uses the entire system bandwidth, but it also results in more interference to both cell-interior and cell-edge users.

There are two possible versions of SFR scheme:

  • In the first version, the sub-band dedicated to the cell-edge UEs may also be used by the cell-center UEs, but with a reduced power level and only if it is not occupied by the cell-edge UEs. The cell-center sub-band is available to the center UEs only. Figure Soft Frequency Reuse scheme version 1 below presents the frequency and power plan for this version of the Soft Frequency Reuse scheme.

    _images/fr-soft-frequency-reuse-scheme-v1.png

    Soft Frequency Reuse scheme version 1

  • In the second version, cell-center UEs do not have access to the cell-edge sub-band. In this way, each cell can use the whole system bandwidth while reducing the interference to the neighboring cells. On the other hand, the lower ICI level at the cell edge is achieved at the expense of lower spectrum utilization. Figure Soft Frequency Reuse scheme version 2 below presents the frequency and power plan for this version of the Soft Frequency Reuse scheme.

    _images/fr-soft-frequency-reuse-scheme-v2.png

    Soft Frequency Reuse scheme version 2

The SFR algorithm maintains two maps. If a UE should be served with the lower power level, its RNTI is added to the m_lowPowerSubBandUe map. If a UE should be served with the higher power level, its RNTI is added to the m_highPowerSubBandUe map. To decide with which power level a UE should be served, the SFR algorithm utilizes UE measurements and compares them to a threshold. The signal quality threshold and the PdschConfigDedicated (i.e., the P_A value) for the inner and outer areas can be configured via the attribute system. SFR utilizes the Downlink Power Control described here.

Soft Fractional Frequency Reuse

Soft Fractional Frequency Reuse (SFFR) is a combination of the Strict and Soft Frequency Reuse schemes. While Strict FR does not use the sub-bands allocated for the outer region in the adjacent cells, SFFR uses these sub-bands for the inner UEs with low transmit power. As a result, SFFR, like SFR, uses both a sub-band with a high transmit power level and one with a low transmit power level. Unlike Soft FR, and like Strict FR, SFFR also uses a common sub-band, which can enhance the throughput of the inner users.

Figure Soft Fractional Frequency Reuse scheme below presents the frequency and power plan for Soft Fractional Frequency Reuse.

_images/fr-soft-fractional-frequency-reuse-scheme.png

Soft Fractional Frequency Reuse scheme

Enhanced Fractional Frequency Reuse

Enhanced Fractional Frequency Reuse (EFFR), described in [ZXie2009], defines 3 cell-types for directly neighboring cells in a cellular system and reserves for each cell-type a part of the whole frequency band, named the Primary Segment, which should be orthogonal among cells of different types. The remaining subchannels constitute the Secondary Segment. The Primary Segment of a cell-type is at the same time part of the Secondary Segments belonging to the other two cell-types. Each cell can occupy all subchannels of its Primary Segment at will, whereas only a part of the subchannels in the Secondary Segment can be used by the cell, in an interference-aware manner. The Primary Segment of each cell is divided into a reuse-3 part and a reuse-1 part. The reuse-1 part can be reused by all types of cells in the system, whereas the reuse-3 part can only be reused by other cells of the same type (i.e., the reuse-3 subchannels cannot be reused by directly neighboring cells). On the Secondary Segment the cell acts as a guest, and occupying secondary subchannels actually means reusing primary subchannels belonging to the directly neighboring cells; thus reuse on the Secondary Segment by each cell must conform to two rules:

  • monitor before use
  • resource reuse based on SINR estimation

Each cell listens on every secondary subchannel all the time. Before occupation, it performs an SINR evaluation according to the gathered channel quality information (CQI) and chooses the resources with the best estimated values for reuse. If the CQI value for an RBG is above the configured threshold for some user, transmission for this user can be performed using this RBG.

The scheduling process described in [ZXie2009] consists of three steps and two scheduling policies. Since none of the currently implemented schedulers allow for this behaviour, some simplifications were applied. In our implementation, reuse-1 subchannels can be used only by cell-center users. Reuse-3 subchannels can be used by edge users, and only if there is no edge user can transmissions for cell-center users be served on the reuse-3 subchannels.

Figure Enhanced Fractional Frequency Reuse scheme below presents the frequency and power plan for Enhanced Fractional Frequency Reuse.

_images/fr-enhanced-fractional-frequency-reuse-scheme.png

Enhanced Fractional Frequency Reuse scheme

Distributed Fractional Frequency Reuse

This Distributed Fractional Frequency Reuse algorithm was presented in [DKimura2012]. It automatically optimizes the cell-edge sub-band by focusing on the user distribution (in particular, the received-power distribution). The algorithm adaptively selects RBs for the cell-edge sub-band on the basis of coordination information from adjacent cells, and notifies the base stations of the adjacent cells which RBs it has selected to use in the edge sub-band. The base station of each cell uses the received information and the following equation to compute the cell-edge-band metric A_{k} for each RB.

A_{k} = \sum_{j\in J}w_{j}X_{j,k}

where J is the set of neighbor cells and X_{j,k}\in\{0,1\} is the RNTP from the j-th neighbor cell. It takes a value of 1 when the k-th RB in the j-th neighbor cell is used as a cell-edge sub-band and 0 otherwise. The symbol w_{j} denotes the weight with respect to adjacent cell j, that is, the number of users for which the difference between the power of the signal received from the serving cell i and the power of the signal received from the adjacent cell j is less than a threshold value (i.e., the number of users near the cell edge in the serving cell). A small received-power difference means that a user is near the cell edge and thus suffers strong interference from the j-th cell; a large weight w_{j} therefore indicates that many cell-edge users in the i-th cell are interfered by the j-th cell.

The RB for which the metric A_{k} is smallest is considered to be the least affected by interference from other cells. The serving cell selects a configured number of RBs as the cell-edge sub-band in ascending order of A_{k}. As a result, the RBs in which a small number of cell-edge users receive high interference from adjacent base stations are selected.

The updated RNTP is then sent to all the neighbor cells. In order to avoid the meaningless oscillation of cell-edge-band selection, a base station ignores an RNTP from another base station that has a larger cell ID than its own.

Repeating this process across all cells enables the allocation of RBs to cell-edge areas to be optimized over the system and to be adjusted with changes in user distribution.
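The following standalone C++ fragment is a purely illustrative sketch (not simulator code) of how the metric defined above can be computed for a single RB, given per-neighbor weights w_{j} and RNTP indicators X_{j,k}:

#include <cstddef>
#include <vector>

// Compute A_k = sum over j of w_j * X_{j,k} for RB index k;
// weights[j] holds w_j and rntp[j][k] holds X_{j,k} (0 or 1).
double
ComputeCellEdgeMetric (const std::vector<double>& weights,
                       const std::vector<std::vector<int> >& rntp,
                       std::size_t k)
{
  double metric = 0.0;
  for (std::size_t j = 0; j < weights.size (); ++j)
    {
      metric += weights[j] * rntp[j][k];
    }
  return metric;
}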

Figure Sequence diagram of Distributed Frequency Reuse Scheme below presents sequence diagram of Distributed Fractional Frequency Reuse Scheme.

_images/ffr-distributed-scheme.png

Sequence diagram of Distributed Frequency Reuse Scheme

Helpers

Two helper objects are used to set up simulations and configure the various components. These objects are:

  • LteHelper, which takes care of the configuration of the LTE radio access network, as well as of coordinating the setup and release of EPS bearers. The LteHelper class provides both the API definition and its implementation.
  • EpcHelper, which takes care of the configuration of the Evolved Packet Core. The EpcHelper class is an abstract base class which only provides the API definition; the implementation is delegated to child classes in order to allow for different EPC network models.

It is possible to create simple LTE-only simulations by using LteHelper alone, or complete LTE-EPC simulations by using both LteHelper and EpcHelper. When both helpers are used, they interact in a master-slave fashion, with LteHelper being the master that interacts directly with the user program, and EpcHelper working “under the hood” to configure the EPC upon explicit method calls issued by LteHelper. The exact interactions are displayed in the Figure Sequence diagram of the interaction between LteHelper and EpcHelper.

_images/helpers.png

Sequence diagram of the interaction between LteHelper and EpcHelper

User Documentation

Background

We assume the reader is already familiar with how to use the ns-3 simulator to run generic simulation programs. If this is not the case, we strongly recommend the reader to consult [ns3tutorial].

Usage Overview

The ns-3 LTE model is a software library that allows the simulation of LTE networks, optionally including the Evolved Packet Core (EPC). The process of performing such simulations typically involves the following steps:

  1. Define the scenario to be simulated
  2. Write a simulation program that recreates the desired scenario topology/architecture. This is done by accessing the ns-3 LTE model library using the ns3::LteHelper API defined in src/lte/helper/lte-helper.h.
  3. Specify configuration parameters of the objects that are being used for the simulation. This can be done using input files (via the ns3::ConfigStore) or directly within the simulation program.
  4. Configure the desired output to be produced by the simulator
  5. Run the simulation.

All these aspects will be explained in the following sections by means of practical examples.

Basic simulation program

Here is the minimal simulation program that is needed to do an LTE-only simulation (without EPC).

  1. Initial boilerplate:

    #include <ns3/core-module.h>
    #include <ns3/network-module.h>
    #include <ns3/mobility-module.h>
    #include <ns3/lte-module.h>
    
    using namespace ns3;
    
    int main (int argc, char *argv[])
    {
      // the rest of the simulation program follows
    
  2. Create an LteHelper object:

    Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();
    

    This will instantiate some common objects (e.g., the Channel object) and provide the methods to add eNBs and UEs and configure them.

  3. Create Node objects for the eNB(s) and the UEs:

    NodeContainer enbNodes;
    enbNodes.Create (1);
    NodeContainer ueNodes;
    ueNodes.Create (2);
    

    Note that the above Node instances at this point still don’t have an LTE protocol stack installed; they’re just empty nodes.

  4. Configure the Mobility model for all the nodes:

    MobilityHelper mobility;
    mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
    mobility.Install (enbNodes);
    mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
    mobility.Install (ueNodes);
    

    The above will place all nodes at the coordinates (0,0,0). Please refer to the documentation of the ns-3 mobility model for how to set your own position or configure node movement.

  5. Install an LTE protocol stack on the eNB(s):

    NetDeviceContainer enbDevs;
    enbDevs = lteHelper->InstallEnbDevice (enbNodes);
    
  6. Install an LTE protocol stack on the UEs:

    NetDeviceContainer ueDevs;
    ueDevs = lteHelper->InstallUeDevice (ueNodes);
    
  7. Attach the UEs to an eNB. This will configure each UE according to the eNB configuration, and create an RRC connection between them:

    lteHelper->Attach (ueDevs, enbDevs.Get (0));
    
  8. Activate a data radio bearer between each UE and the eNB it is attached to:

    enum EpsBearer::Qci q = EpsBearer::GBR_CONV_VOICE;
    EpsBearer bearer (q);
    lteHelper->ActivateDataRadioBearer (ueDevs, bearer);
    

    This method will also activate two saturation traffic generators for that bearer, one in uplink and one in downlink.

  9. Set the stop time:

    Simulator::Stop (Seconds (0.005));
    

    This is needed because otherwise the simulation would last forever: the start-of-subframe event (among others) is scheduled repeatedly, so the ns-3 simulator scheduler would never run out of events.

  10. Run the simulation:

    Simulator::Run ();
    
  11. Cleanup and exit:

    Simulator::Destroy ();
    return 0;
    }
    

For how to compile and run simulation programs, please refer to [ns3tutorial].

Configuration of LTE model parameters

All the relevant LTE model parameters are managed through the ns-3 attribute system. Please refer to the [ns3tutorial] and [ns3manual] for detailed information on all the possible methods to do it (environmental variables, C++ API, GtkConfigStore...).

In the following, we just briefly summarize how to do it using input files together with the ns-3 ConfigStore. First of all, you need to put the following in your simulation program, right after main () starts:

CommandLine cmd;
cmd.Parse (argc, argv);
ConfigStore inputConfig;
inputConfig.ConfigureDefaults ();
// parse again so you can override default values from the command line
cmd.Parse (argc, argv);

For the above to work, make sure you also #include "ns3/config-store.h". Now create a text file named (for example) input-defaults.txt specifying the new default values that you want to use for some attributes:

default ns3::LteHelper::Scheduler "ns3::PfFfMacScheduler"
default ns3::LteHelper::PathlossModel "ns3::FriisSpectrumPropagationLossModel"
default ns3::LteEnbNetDevice::UlBandwidth "25"
default ns3::LteEnbNetDevice::DlBandwidth "25"
default ns3::LteEnbNetDevice::DlEarfcn "100"
default ns3::LteEnbNetDevice::UlEarfcn "18100"
default ns3::LteUePhy::TxPower "10"
default ns3::LteUePhy::NoiseFigure "9"
default ns3::LteEnbPhy::TxPower "30"
default ns3::LteEnbPhy::NoiseFigure "5"

Supposing your simulation program is called src/lte/examples/lte-sim-with-input, you can now pass these settings to the simulation program in the following way:

./waf --command-template="%s --ns3::ConfigStore::Filename=input-defaults.txt --ns3::ConfigStore::Mode=Load --ns3::ConfigStore::FileFormat=RawText" --run src/lte/examples/lte-sim-with-input

Furthermore, you can generate a template input file with the following command:

./waf --command-template="%s --ns3::ConfigStore::Filename=input-defaults.txt --ns3::ConfigStore::Mode=Save --ns3::ConfigStore::FileFormat=RawText" --run src/lte/examples/lte-sim-with-input

Note that the above will put in the file input-defaults.txt all the default values that are registered in your particular build of the simulator, including many non-LTE attributes.

Configure LTE MAC Scheduler

There are several types of LTE MAC schedulers the user can choose from. The user can use the following code to set the scheduler type:

Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();
lteHelper->SetSchedulerType ("ns3::FdMtFfMacScheduler");    // FD-MT scheduler
lteHelper->SetSchedulerType ("ns3::TdMtFfMacScheduler");    // TD-MT scheduler
lteHelper->SetSchedulerType ("ns3::TtaFfMacScheduler");     // TTA scheduler
lteHelper->SetSchedulerType ("ns3::FdBetFfMacScheduler");   // FD-BET scheduler
lteHelper->SetSchedulerType ("ns3::TdBetFfMacScheduler");   // TD-BET scheduler
lteHelper->SetSchedulerType ("ns3::FdTbfqFfMacScheduler");  // FD-TBFQ scheduler
lteHelper->SetSchedulerType ("ns3::TdTbfqFfMacScheduler");  // TD-TBFQ scheduler
lteHelper->SetSchedulerType ("ns3::PssFfMacScheduler");     //PSS scheduler

TBFQ and PSS have more parameters than the other schedulers. Users can set those parameters in the following way:

  • TBFQ scheduler:

    Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();
    lteHelper->SetSchedulerAttribute ("DebtLimit", IntegerValue (yourvalue)); // default value -625000 bytes (-5Mb)
    lteHelper->SetSchedulerAttribute ("CreditLimit", UintegerValue (yourvalue)); // default value 625000 bytes (5Mb)
    lteHelper->SetSchedulerAttribute ("TokenPoolSize", UintegerValue (yourvalue)); // default value 1 byte
    lteHelper->SetSchedulerAttribute ("CreditableThreshold", UintegerValue (yourvalue)); // default value 0

  • PSS scheduler:

    Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();
    lteHelper->SetSchedulerAttribute ("nMux", UintegerValue (yourvalue)); // the maximum number of UEs selected by the TD scheduler
    lteHelper->SetSchedulerAttribute ("PssFdSchedulerType", StringValue ("CoItA")); // PF scheduler type in PSS

In TBFQ, the default values of the debt limit and credit limit are set to -5Mb and 5Mb, respectively, based on the paper [FABokhari2009]. The current implementation does not consider the credit threshold (C = 0). In PSS, if the user does not set nMux, PSS will set this value to half of the total number of UEs. The default FD scheduler is PFsch.

In addition, the token generation rate in TBFQ and the target bit rate in PSS need to be configured through the Guaranteed Bit Rate (GBR) or Maximum Bit Rate (MBR) in the EPS bearer QoS parameters. Users can use the following code to define GBR and MBR in both downlink and uplink:

Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();
enum EpsBearer::Qci q = EpsBearer::yourvalue;  // define Qci type
GbrQosInformation qos;
qos.gbrDl = yourvalue; // Downlink GBR
qos.gbrUl = yourvalue; // Uplink GBR
qos.mbrDl = yourvalue; // Downlink MBR
qos.mbrUl = yourvalue; // Uplink MBR
EpsBearer bearer (q, qos);
lteHelper->ActivateDedicatedEpsBearer (ueDevs, bearer, EpcTft::Default ());

In PSS, the TBR is obtained from the GBR in the bearer-level QoS parameters. In TBFQ, the token generation rate is obtained from the MBR setting in the bearer-level QoS parameters, which therefore needs to be configured consistently. For constant bit rate (CBR) traffic, it is suggested to set MBR to GBR. For variable bit rate (VBR) traffic, it is suggested to set MBR k times larger than GBR in order to cover the peak traffic rate. In the current implementation, k is set to three based on the paper [FABokhari2009]. In addition, the current version of TBFQ does not consider the RLC and PDCP header lengths in MBR and GBR. Another parameter in TBFQ is the packet arrival rate. This parameter is calculated within the scheduler and equals the past average throughput that is used in the PF scheduler.

Many useful attributes of the LTE-EPC model will be described in the following subsections. Still, there are many attributes which are not explicitly mentioned in the design or user documentation, but which are clearly documented using the ns-3 attribute system. You can easily print a list of the attributes of a given object, together with their description and default value, by passing --PrintAttributes= to a simulation program, like this:

./waf --run lena-simple --command-template="%s --PrintAttributes=ns3::LteHelper"

You can try also with other LTE and EPC objects, like this:

./waf --run lena-simple --command-template="%s --PrintAttributes=ns3::LteEnbNetDevice"
./waf --run lena-simple --command-template="%s --PrintAttributes=ns3::LteEnbMac"
./waf --run lena-simple --command-template="%s --PrintAttributes=ns3::LteEnbPhy"
./waf --run lena-simple --command-template="%s --PrintAttributes=ns3::LteUePhy"
./waf --run lena-simple --command-template="%s --PrintAttributes=ns3::PointToPointEpcHelper"

Simulation Output

The ns-3 LTE model currently supports the output to file of PHY, MAC, RLC and PDCP level Key Performance Indicators (KPIs). You can enable it in the following way:

Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();

// configure all the simulation scenario here...

lteHelper->EnablePhyTraces ();
lteHelper->EnableMacTraces ();
lteHelper->EnableRlcTraces ();
lteHelper->EnablePdcpTraces ();

Simulator::Run ();

RLC and PDCP KPIs are calculated over a time interval and stored in ASCII files, two for RLC KPIs and two for PDCP KPIs, in each case one for uplink and one for downlink. The duration of the time interval can be controlled using the attribute ns3::RadioBearerStatsCalculator::EpochDuration.
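For example, to compute the RLC and PDCP KPIs over 250 ms intervals, the attribute mentioned above can be set before enabling the traces (the use of TimeValue below is an assumption consistent with the attribute being a duration):

Config::SetDefault ("ns3::RadioBearerStatsCalculator::EpochDuration", TimeValue (MilliSeconds (250)));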

The columns of the RLC KPI files are the following (the same for uplink and downlink):

  1. start time of measurement interval in seconds since the start of simulation
  2. end time of measurement interval in seconds since the start of simulation
  3. Cell ID
  4. unique UE ID (IMSI)
  5. cell-specific UE ID (RNTI)
  6. Logical Channel ID
  7. Number of transmitted RLC PDUs
  8. Total bytes transmitted.
  9. Number of received RLC PDUs
  10. Total bytes received
  11. Average RLC PDU delay in seconds
  12. Standard deviation of the RLC PDU delay
  13. Minimum value of the RLC PDU delay
  14. Maximum value of the RLC PDU delay
  15. Average RLC PDU size, in bytes
  16. Standard deviation of the RLC PDU size
  17. Minimum RLC PDU size
  18. Maximum RLC PDU size

Similarly, the columns of the PDCP KPI files are the following (again, the same for uplink and downlink):

  1. start time of measurement interval in seconds since the start of simulation
  2. end time of measurement interval in seconds since the start of simulation
  3. Cell ID
  4. unique UE ID (IMSI)
  5. cell-specific UE ID (RNTI)
  6. Logical Channel ID
  7. Number of transmitted PDCP PDUs
  8. Total bytes transmitted.
  9. Number of received PDCP PDUs
  10. Total bytes received
  11. Average PDCP PDU delay in seconds
  12. Standard deviation of the PDCP PDU delay
  13. Minimum value of the PDCP PDU delay
  14. Maximum value of the PDCP PDU delay
  15. Average PDCP PDU size, in bytes
  16. Standard deviation of the PDCP PDU size
  17. Minimum PDCP PDU size
  18. Maximum PDCP PDU size

MAC KPIs are basically a trace of the resource allocation reported by the scheduler upon the start of every subframe. They are stored in ASCII files. For downlink MAC KPIs the format is the following:

  1. Simulation time in seconds at which the allocation is indicated by the scheduler
  2. Cell ID
  3. unique UE ID (IMSI)
  4. Frame number
  5. Subframe number
  6. cell-specific UE ID (RNTI)
  7. MCS of TB 1
  8. size of TB 1
  9. MCS of TB 2 (0 if not present)
  10. size of TB 2 (0 if not present)

while for uplink MAC KPIs the format is:

  1. Simulation time in seconds at which the allocation is indicated by the scheduler
  2. Cell ID
  3. unique UE ID (IMSI)
  4. Frame number
  5. Subframe number
  6. cell-specific UE ID (RNTI)
  7. MCS of TB
  8. size of TB

The names of the files used for MAC KPI output can be customized via the ns-3 attributes ns3::MacStatsCalculator::DlOutputFilename and ns3::MacStatsCalculator::UlOutputFilename.
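For instance, the output file names can be overridden before enabling the MAC traces; the file names below are arbitrary examples and the use of StringValue is an assumption:

Config::SetDefault ("ns3::MacStatsCalculator::DlOutputFilename", StringValue ("my-dl-mac-stats.txt"));
Config::SetDefault ("ns3::MacStatsCalculator::UlOutputFilename", StringValue ("my-ul-mac-stats.txt"));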

PHY KPIs are distributed in seven different files, configurable through the attributes

  1. ns3::PhyStatsCalculator::DlRsrpSinrFilename
  2. ns3::PhyStatsCalculator::UeSinrFilename
  3. ns3::PhyStatsCalculator::InterferenceFilename
  4. ns3::PhyStatsCalculator::DlTxOutputFilename
  5. ns3::PhyStatsCalculator::UlTxOutputFilename
  6. ns3::PhyStatsCalculator::DlRxOutputFilename
  7. ns3::PhyStatsCalculator::UlRxOutputFilename

In the RSRP/SINR file, the following content is available:

  1. Simulation time in seconds at which the allocation is indicated by the scheduler
  2. Cell ID
  3. unique UE ID (IMSI)
  4. RSRP
  5. Linear average over all RBs of the downlink SINR in linear units

The contents in the UE SINR file are:

  1. Simulation time in seconds at which the allocation is indicated by the scheduler
  2. Cell ID
  3. unique UE ID (IMSI)
  4. uplink SINR in linear units for the UE

In the interference filename the content is:

  1. Simulation time in seconds at which the allocation is indicated by the scheduler
  2. Cell ID
  3. List of interference values per RB

In UL and DL transmission files the parameters included are:

  1. Simulation time in milliseconds
  2. Cell ID
  3. unique UE ID (IMSI)
  4. RNTI
  5. Layer of transmission
  6. MCS
  7. size of the TB
  8. Redundancy version
  9. New Data Indicator flag

And finally, in UL and DL reception files the parameters included are:

  1. Simulation time in milliseconds
  2. Cell ID
  3. unique UE ID (IMSI)
  4. RNTI
  5. Transmission Mode
  6. Layer of transmission
  7. MCS
  8. size of the TB
  9. Redundancy version
  10. New Data Indicator flag
  11. Correctness in the reception of the TB

Fading Trace Usage

In this section we will describe how to use fading traces within LTE simulations.

Fading Traces Generation

It is possible to generate fading traces by using a dedicated matlab script provided with the code (/lte/model/fading-traces/fading-trace-generator.m). This script already includes the typical taps configurations for three 3GPP scenarios (i.e., pedestrian, vehicular and urban as defined in Annex B.2 of [TS36104]); however users can also introduce their specific configurations. The list of the configurable parameters is provided in the following:

  • fc : the frequency in use (it affects the computation of the doppler speed).
  • v_km_h : the speed of the users
  • traceDuration : the duration in seconds of the total length of the trace.
  • numRBs : the number of the resource block to be evaluated.
  • tag : the tag to be applied to the file generated.

The file generated contains ASCII-formatted real values organized in a matrix fashion: every row corresponds to a different RB, and every column corresponds to a different temporal fading trace sample.

It has to be noted that the ns-3 LTE module is able to work with any fading trace file that complies with the above described ASCII format. Hence, other external tools can be used to generate custom fading traces, such as for example other simulators or experimental devices.

Fading Traces Usage

When using a fading trace, it is of paramount importance to correctly specify the trace parameters in the simulation, so that the fading model can load and use it correctly. The parameters to be configured are:

  • TraceFilename : the name of the trace to be loaded (absolute path, or relative path w.r.t. the path from where the simulation program is executed);
  • TraceLength : the trace duration in seconds;
  • SamplesNum : the number of samples;
  • WindowSize : the size of the fading sampling window in seconds;

It is important to highlight that the sampling interval of the fading trace has to be 1 ms or greater, and in the latter case it has to be an integer multiple of 1 ms in order to be correctly processed by the fading module.

The default configuration of the matlab script provides a trace 10 seconds long, made of 10,000 samples (i.e., 1 sample per TTI = 1 ms) and used with a window size of 0.5 seconds. These are also the default values of the corresponding parameters in the simulator; therefore, setting them can be skipped if the fading trace respects these values.

In order to activate the fading module (which is not active by default) the following code should be included in the simulation program:

Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();
lteHelper->SetFadingModel("ns3::TraceFadingLossModel");

And for setting the parameters:

lteHelper->SetFadingModelAttribute ("TraceFilename", StringValue ("src/lte/model/fading-traces/fading_trace_EPA_3kmph.fad"));
lteHelper->SetFadingModelAttribute ("TraceLength", TimeValue (Seconds (10.0)));
lteHelper->SetFadingModelAttribute ("SamplesNum", UintegerValue (10000));
lteHelper->SetFadingModelAttribute ("WindowSize", TimeValue (Seconds (0.5)));
lteHelper->SetFadingModelAttribute ("RbNum", UintegerValue (100));

It has to be noted that TraceFilename does not have a default value; therefore, it always has to be set explicitly.

The simulator natively provides three fading traces generated according to the configurations defined in Annex B.2 of [TS36104]. These traces are available in the folder src/lte/model/fading-traces/. An excerpt from these traces is represented in the following figures.

Fading trace 3 kmph

Excerpt of the fading trace included in the simulator for a pedestrian scenario (speed of 3 kmph).

Fading trace 60 kmph

Excerpt of the fading trace included in the simulator for a vehicular scenario (speed of 60 kmph).

Fading trace 3 kmph

Excerpt of the fading trace included in the simulator for an urban scenario (speed of 3 kmph).

Mobility Model with Buildings

We now explain by examples how to use the buildings model (in particular, the MobilityBuildingInfo and the BuildingsPropagationLossModel classes) in an ns-3 simulation program to set up an LTE simulation scenario that includes buildings and indoor nodes.

  1. Header files to be included:

    #include <ns3/mobility-building-info.h>
    #include <ns3/buildings-propagation-loss-model.h>
    #include <ns3/building.h>
    
  2. Pathloss model selection:

    Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();
    
    lteHelper->SetAttribute ("PathlossModel", StringValue ("ns3::BuildingsPropagationLossModel"));
    
  3. EUTRA Band Selection

The selection of the working frequency of the propagation model has to be done with the standard ns-3 attribute system as described in the corresponding section (“Configuration of LTE model parameters”) by means of the DlEarfcn and UlEarfcn parameters, for instance:

lteHelper->SetEnbDeviceAttribute ("DlEarfcn", UintegerValue (100));
lteHelper->SetEnbDeviceAttribute ("UlEarfcn", UintegerValue (18100));

It is to be noted that using other means to configure the frequency used by the propagation model (i.e., configuring the corresponding BuildingsPropagationLossModel attributes directly) might generate conflicts in the frequency definitions of the modules during the simulation, and is therefore not advised.

  4. Mobility model selection:

    MobilityHelper mobility;
    mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
    
    It is to be noted that any mobility model can be used.
    
  5. Building creation:

    double x_min = 0.0;
    double x_max = 10.0;
    double y_min = 0.0;
    double y_max = 20.0;
    double z_min = 0.0;
    double z_max = 10.0;
    Ptr<Building> b = CreateObject <Building> ();
    b->SetBoundaries (Box (x_min, x_max, y_min, y_max, z_min, z_max));
    b->SetBuildingType (Building::Residential);
    b->SetExtWallsType (Building::ConcreteWithWindows);
    b->SetNFloors (3);
    b->SetNRoomsX (3);
    b->SetNRoomsY (2);
    

    This will instantiate a residential building with base of 10 x 20 meters and height of 10 meters whose external walls are of concrete with windows; the building has three floors and has an internal 3 x 2 grid of rooms of equal size.

  6. Node creation and positioning:

    ueNodes.Create (2);
    mobility.Install (ueNodes);
    BuildingsHelper::Install (ueNodes);
    NetDeviceContainer ueDevs;
    ueDevs = lteHelper->InstallUeDevice (ueNodes);
    Ptr<ConstantPositionMobilityModel> mm0 = ueNodes.Get (0)->GetObject<ConstantPositionMobilityModel> ();
    Ptr<ConstantPositionMobilityModel> mm1 = ueNodes.Get (1)->GetObject<ConstantPositionMobilityModel> ();
    mm0->SetPosition (Vector (5.0, 5.0, 1.5));
    mm1->SetPosition (Vector (30.0, 40.0, 1.5));
    
  7. Finalize the building and mobility model configuration:

    BuildingsHelper::MakeMobilityModelConsistent ();
    

See the documentation of the buildings module for more detailed information.

PHY Error Model

The physical layer error model consists of the data error model and the downlink control error model, both of which are active by default. It is possible to deactivate them with the ns-3 attribute system, as follows:

Config::SetDefault ("ns3::LteSpectrumPhy::CtrlErrorModelEnabled", BooleanValue (false));
Config::SetDefault ("ns3::LteSpectrumPhy::DataErrorModelEnabled", BooleanValue (false));

MIMO Model

In this subsection we illustrate how to configure the MIMO parameters. LTE defines 7 types of transmission modes:

  • Transmission Mode 1: SISO.
  • Transmission Mode 2: MIMO Tx Diversity.
  • Transmission Mode 3: MIMO Spatial Multiplexing Open Loop.
  • Transmission Mode 4: MIMO Spatial Multiplexing Closed Loop.
  • Transmission Mode 5: MIMO Multi-User.
  • Transmission Mode 6: Closed loop single layer precoding.
  • Transmission Mode 7: Single antenna port 5.

The simulator currently includes the first three transmission modes. The default one is Transmission Mode 1 (SISO). In order to change the default Transmission Mode to be used, the attribute DefaultTransmissionMode of the LteEnbRrc can be used, as shown in the following:

Config::SetDefault ("ns3::LteEnbRrc::DefaultTransmissionMode", UintegerValue (0)); // SISO
Config::SetDefault ("ns3::LteEnbRrc::DefaultTransmissionMode", UintegerValue (1)); // MIMO Tx diversity (1 layer)
Config::SetDefault ("ns3::LteEnbRrc::DefaultTransmissionMode", UintegerValue (2)); // MIMO Spatial Multiplexing (2 layers)

For changing the transmission mode of a certain user during the simulation a specific interface has been implemented in both standard schedulers:

void TransmissionModeConfigurationUpdate (uint16_t rnti, uint8_t txMode);

This method can be used both for developing a transmission mode decision engine (i.e., for optimizing the transmission mode according to channel conditions and/or user requirements) and for manual switching from the simulation script. In the latter case, the switching can be done as shown in the following:

Ptr<LteEnbNetDevice> enbNetDev = enbDevs.Get (0)->GetObject<LteEnbNetDevice> ();
PointerValue ptrval;
enbNetDev->GetAttribute ("FfMacScheduler", ptrval);
Ptr<RrFfMacScheduler> rrsched = ptrval.Get<RrFfMacScheduler> ();
Simulator::Schedule (Seconds (0.2), &RrFfMacScheduler::TransmissionModeConfigurationUpdate, rrsched, rnti, 1);

Finally, the model implemented can be reconfigured according to different MIMO models by updating the gain values (the only constraint is that the gain has to be constant during the simulation run-time and common to all the layers). The gain of each Transmission Mode can be changed according to the standard ns-3 attribute system, where the attributes are: TxMode1Gain, TxMode2Gain, TxMode3Gain, TxMode4Gain, TxMode5Gain, TxMode6Gain and TxMode7Gain. By default only TxMode1Gain, TxMode2Gain and TxMode3Gain have a meaningful value, which are the ones derived from [CatreuxMIMO] (i.e., respectively 0.0, 4.2 and -2.8 dB).

Use of AntennaModel

We now show how to associate a particular AntennaModel with an eNB device in order to model one sector of a macro eNB. For this purpose, it is convenient to use the CosineAntennaModel provided by the ns-3 antenna module. The configuration of the eNB is to be done via the LteHelper instance right before the creation of the EnbNetDevice, as shown in the following:

lteHelper->SetEnbAntennaModelType ("ns3::CosineAntennaModel");
lteHelper->SetEnbAntennaModelAttribute ("Orientation", DoubleValue (0));
lteHelper->SetEnbAntennaModelAttribute ("Beamwidth",   DoubleValue (60);
lteHelper->SetEnbAntennaModelAttribute ("MaxGain",     DoubleValue (0.0));

The above code will generate an antenna model with a 60 degree beamwidth pointing along the X axis. The orientation is measured in degrees from the X axis; e.g., an orientation of 90 would point along the Y axis, and an orientation of -90 would point in the negative direction along the Y axis. The beamwidth is the -3 dB beamwidth; e.g., for a 60 degree beamwidth the antenna gain at an angle of \pm 30 degrees from the direction of orientation is -3 dB.

To create a multi-sector site, you need to create different ns-3 nodes placed at the same position and configure a separate EnbNetDevice with a different antenna orientation to be installed on each node, as sketched below.
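The following fragment sketches this approach for a three-sector site; the node placement, beamwidth and orientation values are illustrative, and an already created lteHelper is assumed as in the previous examples:

// three co-located eNB nodes, one per sector
// (ConstantPositionMobilityModel places them all at (0,0,0) by default)
NodeContainer sectorNodes;
sectorNodes.Create (3);
MobilityHelper mobility;
mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
mobility.Install (sectorNodes);

double orientations[3] = {0.0, 120.0, 240.0};  // sector boresights, in degrees from the X axis
NetDeviceContainer enbDevs;
for (uint32_t i = 0; i < sectorNodes.GetN (); ++i)
  {
    lteHelper->SetEnbAntennaModelType ("ns3::CosineAntennaModel");
    lteHelper->SetEnbAntennaModelAttribute ("Orientation", DoubleValue (orientations[i]));
    lteHelper->SetEnbAntennaModelAttribute ("Beamwidth",   DoubleValue (60));
    lteHelper->SetEnbAntennaModelAttribute ("MaxGain",     DoubleValue (0.0));
    enbDevs.Add (lteHelper->InstallEnbDevice (NodeContainer (sectorNodes.Get (i))));
  }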

Radio Environment Maps

By using the class RadioEnvironmentMapHelper it is possible to output to a file a Radio Environment Map (REM), i.e., a uniform 2D grid of values that represent the signal-to-noise ratio in the downlink with respect to the eNB that has the strongest signal at each point. It is possible to specify whether the REM should be generated for the data or the control channel. Also, the user can set the RbId for which the REM will be generated. The default RbId is -1, which means that the REM will be generated with the SINR averaged over all RBs.

To do this, you just need to add the following code to your simulation program towards the end, right before the call to Simulator::Run ():

Ptr<RadioEnvironmentMapHelper> remHelper = CreateObject<RadioEnvironmentMapHelper> ();
remHelper->SetAttribute ("ChannelPath", StringValue ("/ChannelList/0"));
remHelper->SetAttribute ("OutputFile", StringValue ("rem.out"));
remHelper->SetAttribute ("XMin", DoubleValue (-400.0));
remHelper->SetAttribute ("XMax", DoubleValue (400.0));
remHelper->SetAttribute ("XRes", UintegerValue (100));
remHelper->SetAttribute ("YMin", DoubleValue (-300.0));
remHelper->SetAttribute ("YMax", DoubleValue (300.0));
remHelper->SetAttribute ("YRes", UintegerValue (75));
remHelper->SetAttribute ("Z", DoubleValue (0.0));
remHelper->SetAttribute ("UseDataChannel", BooleanValue (true));
remHelper->SetAttribute ("RbId", IntegerValue (10));
remHelper->Install ();

By configuring the attributes of the RadioEnvironmentMapHelper object as shown above, you can tune the parameters of the REM to be generated. Note that each RadioEnvironmentMapHelper instance can generate only one REM; if you want to generate more REMs, you need to create one separate instance for each REM.

Note that the REM generation is very demanding, in particular:

  • the run-time memory consumption is approximately 5KB per pixel. For example, a REM with a resolution of 500x500 would need about 1.25 GB of memory, and a resolution of 1000x1000 would need about 5 GB (too much for a regular PC at the time of this writing). To overcome this issue, the REM is generated at successive steps, with each step evaluating at most a number of pixels determined by the value of the attribute RadioEnvironmentMapHelper::MaxPointsPerIteration.
  • if you generate a REM at the beginning of a simulation, it will slow down the execution of the rest of the simulation. If you want to generate a REM for a program and also use the same program to get simulation results, it is recommended to add a command-line switch that allows either generating the REM or running the complete simulation. For this purpose, note that there is an attribute RadioEnvironmentMapHelper::StopWhenDone (default: true) that will force the simulation to stop right after the REM has been generated.

The REM is stored in an ASCII file in the following format:

  • column 1 is the x coordinate
  • column 2 is the y coordinate
  • column 3 is the z coordinate
  • column 4 is the SINR in linear units

A minimal gnuplot script that allows you to plot the REM is given below:

set view map;
set xlabel "X"
set ylabel "Y"
set cblabel "SINR (dB)"
unset key
plot "rem.out" using ($1):($2):(10*log10($4)) with image

As an example, here is the REM that can be obtained with the example program lena-dual-stripe, which shows a three-sector LTE macrocell in a co-channel deployment with some residential femtocells randomly deployed in two blocks of apartments.

_images/lena-dual-stripe.png

REM obtained from the lena-dual-stripe example

Note that the lena-dual-stripe example program also generates gnuplot-compatible output files containing information about the positions of the UE and eNB nodes as well as of the buildings, respectively in the files ues.txt, enbs.txt and buildings.txt. These can be easily included when using gnuplot. For example, assuming that your gnuplot script (e.g., the minimal gnuplot script described above) is saved in a file named my_plot_script, running the following command would plot the location of UEs, eNBs and buildings on top of the REM:

gnuplot -p enbs.txt ues.txt buildings.txt my_plot_script

AMC Model and CQI Calculation

The simulator provides two possible schemes for the selection of the MCSs and, correspondingly, the generation of the CQIs. The first one is based on the GSoC module [Piro2011] and works on a per-RB basis. This model can be activated with the ns-3 attribute system, as presented in the following:

Config::SetDefault ("ns3::LteAmc::AmcModel", EnumValue (LteAmc::PiroEW2010));

The solution based on the physical error model can instead be selected with:

Config::SetDefault ("ns3::LteAmc::AmcModel", EnumValue (LteAmc::MiErrorModel));

Finally, the required efficiency of the PiroEW2010 AMC module can be tuned through the Ber attribute, for instance:

Config::SetDefault ("ns3::LteAmc::Ber", DoubleValue (0.00005));

Evolved Packet Core (EPC)

We now explain how to write a simulation program that simulates the EPC in addition to the LTE radio access network. The use of the EPC allows IPv4 networking to be used with LTE devices. In other words, you will be able to use the regular ns-3 applications and sockets over IPv4 over LTE, and also to connect an LTE network to any other IPv4 network you might have in your simulation.

First of all, in addition to LteHelper that we already introduced in Basic simulation program, you need to use an additional EpcHelper class, which will take care of creating the EPC entities and network topology. Note that you can’t use EpcHelper directly, as it is an abstract base class; instead, you need to use one of its child classes, which provide different EPC topology implementations. In this example we will consider PointToPointEpcHelper, which implements an EPC based on point-to-point links. To use it, you need first to insert this code in your simulation program:

Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();
Ptr<PointToPointEpcHelper> epcHelper = CreateObject<PointToPointEpcHelper> ();

Then, you need to tell the LTE helper that the EPC will be used:

lteHelper->SetEpcHelper (epcHelper);

The above step is necessary so that the LTE helper triggers the appropriate EPC configuration upon important configuration events, such as when a new eNB or UE is added to the simulation, or when an EPS bearer is created. The EPC helper will automatically take care of the necessary setup, such as S1 link creation and S1 bearer setup. All this will be done without the intervention of the user.

Calling lteHelper->SetEpcHelper (epcHelper) enables the use of the EPC and has the side effect that any newly created LteEnbRrc will have its EpsBearerToRlcMapping attribute set to RLC_UM_ALWAYS instead of RLC_SM_ALWAYS, if the latter is still the default; otherwise the attribute is left untouched (e.g., if you changed the default to RLC_AM_ALWAYS, it won’t be modified).
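For instance, a minimal sketch (using the attribute and enum names mentioned above) of how you could make RLC AM the default mapping:

// this must run before any eNB device is installed, so that the
// default value is picked up by the newly created LteEnbRrc instances
Config::SetDefault ("ns3::LteEnbRrc::EpsBearerToRlcMapping",
                    EnumValue (LteEnbRrc::RLC_AM_ALWAYS));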

Note that the EpcHelper will also automatically create the PGW node and configure it so that it can properly handle traffic from/to the LTE radio access network. Still, you need to add some explicit code to connect the PGW to other IPv4 networks (e.g., the internet). Here is a very simple example of how to connect a single remote host to the PGW via a point-to-point link:

Ptr<Node> pgw = epcHelper->GetPgwNode ();

// Create a single RemoteHost
NodeContainer remoteHostContainer;
remoteHostContainer.Create (1);
Ptr<Node> remoteHost = remoteHostContainer.Get (0);
InternetStackHelper internet;
internet.Install (remoteHostContainer);

// Create the internet
PointToPointHelper p2ph;
p2ph.SetDeviceAttribute ("DataRate", DataRateValue (DataRate ("100Gb/s")));
p2ph.SetDeviceAttribute ("Mtu", UintegerValue (1500));
p2ph.SetChannelAttribute ("Delay", TimeValue (Seconds (0.010)));
NetDeviceContainer internetDevices = p2ph.Install (pgw, remoteHost);
Ipv4AddressHelper ipv4h;
ipv4h.SetBase ("1.0.0.0", "255.0.0.0");
Ipv4InterfaceContainer internetIpIfaces = ipv4h.Assign (internetDevices);
// interface 0 is localhost, 1 is the p2p device
Ipv4Address remoteHostAddr = internetIpIfaces.GetAddress (1);

It’s important to specify routes so that the remote host can reach LTE UEs. One way of doing this is by exploiting the fact that the PointToPointEpcHelper will by default assign to LTE UEs an IP address in the 7.0.0.0 network. With this in mind, it suffices to do:

Ipv4StaticRoutingHelper ipv4RoutingHelper;
Ptr<Ipv4StaticRouting> remoteHostStaticRouting = ipv4RoutingHelper.GetStaticRouting (remoteHost->GetObject<Ipv4> ());
remoteHostStaticRouting->AddNetworkRouteTo (Ipv4Address ("7.0.0.0"), Ipv4Mask ("255.0.0.0"), 1);

Now, you can go on and create LTE eNBs and UEs as explained in the previous sections. You can of course configure other LTE aspects such as pathloss and fading models. Right after you have created the UEs, you should also configure them for IP networking. This is done as follows. We assume you have a container for UE and eNodeB nodes like this:

NodeContainer ueNodes;
NodeContainer enbNodes;

To configure an LTE-only simulation, you would then normally do something like this:

NetDeviceContainer ueLteDevs = lteHelper->InstallUeDevice (ueNodes);
lteHelper->Attach (ueLteDevs, enbLteDevs.Get (0));

In order to configure the UEs for IP networking, you just need to additionally do the following:

// we install the IP stack on the UEs
InternetStackHelper internet;
internet.Install (ueNodes);

// assign IP address to UEs
for (uint32_t u = 0; u < ueNodes.GetN (); ++u)
  {
    Ptr<Node> ue = ueNodes.Get (u);
    Ptr<NetDevice> ueLteDevice = ueLteDevs.Get (u);
    Ipv4InterfaceContainer ueIpIface = epcHelper->AssignUeIpv4Address (NetDeviceContainer (ueLteDevice));
    // set the default gateway for the UE
    Ptr<Ipv4StaticRouting> ueStaticRouting = ipv4RoutingHelper.GetStaticRouting (ue->GetObject<Ipv4> ());
    ueStaticRouting->SetDefaultRoute (epcHelper->GetUeDefaultGatewayAddress (), 1);
  }

The activation of bearers is done in a slightly different way with respect to what is done in an LTE-only simulation. First, the method ActivateDataRadioBearer is not to be used when the EPC is used. Second, when the EPC is used, the default EPS bearer will be activated automatically when you call LteHelper::Attach (). Third, if you want to set up a dedicated EPS bearer, you can do so using the method LteHelper::ActivateDedicatedEpsBearer (). This method takes as a parameter the Traffic Flow Template (TFT), which is a struct that identifies the type of traffic that will be mapped to the dedicated EPS bearer. Here is an example of how to set up a dedicated bearer for an application on the UE communicating on port 1234:

Ptr<EpcTft> tft = Create<EpcTft> ();
EpcTft::PacketFilter pf;
pf.localPortStart = 1234;
pf.localPortEnd = 1234;
tft->Add (pf);
lteHelper->ActivateDedicatedEpsBearer (ueLteDevs, EpsBearer (EpsBearer::NGBR_VIDEO_TCP_DEFAULT), tft);

You can of course use custom EpsBearer and EpcTft configurations; please refer to the Doxygen documentation for how to do so.
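As a sketch of such a custom configuration (assuming the GbrQosInformation struct and the two-argument EpsBearer constructor; the bit rate values are purely illustrative), a dedicated GBR bearer could be set up like this:

// custom GBR bearer: guaranteed and maximum bit rates in bit/s (illustrative values)
GbrQosInformation qos;
qos.gbrDl = 132000;
qos.gbrUl = 132000;
qos.mbrDl = 132000;
qos.mbrUl = 132000;
EpsBearer bearer (EpsBearer::GBR_CONV_VOICE, qos);
lteHelper->ActivateDedicatedEpsBearer (ueLteDevs, bearer, tft);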

Finally, you can install applications on the LTE UE nodes that communicate with remote applications over the internet. This is done following the usual ns-3 procedures. Following our simple example with a single remoteHost, here is how to set up downlink communication, with a UdpClient application on the remote host and a PacketSink on the LTE UE (using the same variable names as in the previous code snippets):

uint16_t dlPort = 1234;
PacketSinkHelper packetSinkHelper ("ns3::UdpSocketFactory",
                                   InetSocketAddress (Ipv4Address::GetAny (), dlPort));
ApplicationContainer serverApps = packetSinkHelper.Install (ue);
serverApps.Start (Seconds (0.01));
UdpClientHelper client (ueIpIface.GetAddress (0), dlPort);
ApplicationContainer clientApps = client.Install (remoteHost);
clientApps.Start (Seconds (0.01));

That’s all! You can now start your simulation as usual:

Simulator::Stop (Seconds (10.0));
Simulator::Run ();
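As in any other ns-3 program, you would also release the simulator resources at the end (not shown in the snippet above):

Simulator::Destroy ();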

Using the EPC with emulation mode

In the previous section we used PointToPoint links for the connection between the eNBs and the SGW (S1-U interface) and among eNBs (X2-U and X2-C interfaces). The LTE module supports using emulated links instead of PointToPoint links. This is achieved by just replacing the creation of LteHelper and EpcHelper with the following code:

Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();
Ptr<EmuEpcHelper>  epcHelper = CreateObject<EmuEpcHelper> ();
lteHelper->SetEpcHelper (epcHelper);
epcHelper->Initialize ();

The attributes ns3::EmuEpcHelper::sgwDeviceName and ns3::EmuEpcHelper::enbDeviceName are used to set the name of the devices used for transporting the S1-U, X2-U and X2-C interfaces at the SGW and eNB, respectively. We will now show how this is done in an example where we execute the example program lena-simple-epc-emu using two virtual ethernet interfaces.
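Alternatively to the command-line configuration shown below, these attributes could also be set from the simulation program itself; a minimal sketch using the default-value mechanism:

// must be done before the EmuEpcHelper instance is created
Config::SetDefault ("ns3::EmuEpcHelper::sgwDeviceName", StringValue ("veth0"));
Config::SetDefault ("ns3::EmuEpcHelper::enbDeviceName", StringValue ("veth1"));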

First of all we build ns-3 appropriately:

# configure
./waf configure --enable-sudo --enable-modules=lte,fd-net-device --enable-examples

# build
./waf

Then we setup two virtual ethernet interfaces, and start wireshark to look at the traffic going through:

# note: you need to be root

# create two paired veth devices
ip link add name veth0 type veth peer name veth1
ip link show

# enable promiscuous mode
ip link set veth0 promisc on
ip link set veth1 promisc on

# bring interfaces up
ip link set veth0 up
ip link set veth1 up

# start wireshark and capture on veth0
wireshark &

We can now run the example program with the simulated clock:

./waf --run lena-simple-epc-emu --command="%s --ns3::EmuEpcHelper::sgwDeviceName=veth0 --ns3::EmuEpcHelper::enbDeviceName=veth1"

Using wireshark, you should see ARP resolution first, then some GTP packets exchanged both in uplink and downlink.

The default setting of the example program is 1 eNB and 1 UE. You can change this via command line parameters, e.g.:

./waf --run lena-simple-epc-emu --command="%s --ns3::EmuEpcHelper::sgwDeviceName=veth0 --ns3::EmuEpcHelper::enbDeviceName=veth1 --nEnbs=2 --nUesPerEnb=2"

To get a list of the available parameters:

./waf --run lena-simple-epc-emu --command="%s --PrintHelp"

To run with the realtime clock: it turns out that the default debug build is too slow for realtime. Softening the realtime constraints with the BestEffort mode is not a good idea: something can go wrong (e.g., ARP can fail) and, if so, you won’t get any data packets out. So you need decent hardware and an optimized build with statically linked modules:

./waf configure -d optimized --enable-static --enable-modules=lte --enable-examples --enable-sudo

Then run the example program like this:

./waf --run lena-simple-epc-emu --command="%s --ns3::EmuEpcHelper::sgwDeviceName=veth0 --ns3::EmuEpcHelper::enbDeviceName=veth1 --simulatorImplementationType=ns3::RealtimeSimulatorImpl --ns3::RealtimeSimulatorImpl::SynchronizationMode=HardLimit"

Note the HardLimit setting, which will cause the program to terminate if it cannot keep up with real time.

The approach described in this section can be used with any type of net device. For instance, [Baldo2014] describes how it was used to run an emulated LTE-EPC network over a real multi-layer packet-optical transport network.

Network Attachment

As shown in the basic example in section Basic simulation program, attaching a UE to an eNodeB is done by calling the LteHelper::Attach function.

There are two possible ways of performing network attachment. The first method is “manual”, while the second one is more “automatic”. Each of them is covered in this section.

Manual attachment

This method uses the LteHelper::Attach function mentioned above. It was the only available network attachment method in earlier versions of the LTE module. It is typically invoked before the simulation begins:

lteHelper->Attach (ueDevs, enbDev); // attach one or more UEs to a single eNodeB

The LteHelper::InstallEnbDevice and LteHelper::InstallUeDevice functions must have been called before attaching. In an EPC-enabled simulation, it is also required to have IPv4 properly pre-installed in the UE.

This method is very simple, but requires you to know exactly which UE belongs to which eNodeB before the simulation begins. This can be difficult when the UE initial position is randomly determined by the simulation script.

One may choose the distance between the UE and the eNodeB as a criterion for selecting the appropriate cell. It is quite simple (at least from the simulator’s point of view) and sometimes practical. But it is important to note that distance alone does not always make a correct criterion. For instance, the eNodeB antenna directivity should be considered as well. Besides that, one should also take into account the channel condition, which might fluctuate if fading or shadowing are in effect. In these kinds of cases, network attachment should not be based on distance alone.
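For illustration purposes only, here is a sketch of a purely distance-based attachment (it assumes that mobility models are already installed on all nodes and that ueNodes, enbNodes, ueLteDevs and enbLteDevs are the containers introduced earlier):

// attach each UE to the geographically closest eNodeB
for (uint32_t u = 0; u < ueNodes.GetN (); ++u)
  {
    Ptr<MobilityModel> ueMob = ueNodes.Get (u)->GetObject<MobilityModel> ();
    uint32_t closest = 0;
    double minDist = ueMob->GetDistanceFrom (enbNodes.Get (0)->GetObject<MobilityModel> ());
    for (uint32_t e = 1; e < enbNodes.GetN (); ++e)
      {
        double d = ueMob->GetDistanceFrom (enbNodes.Get (e)->GetObject<MobilityModel> ());
        if (d < minDist)
          {
            minDist = d;
            closest = e;
          }
      }
    lteHelper->Attach (ueLteDevs.Get (u), enbLteDevs.Get (closest));
  }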

In real life, a UE will automatically evaluate certain criteria and select the best cell to attach to, without manual intervention from the user. Obviously this is not the case with the LteHelper::Attach function. The other network attachment method uses a more “automatic” approach, as described next.

Automatic attachment using Idle mode cell selection procedure

The strength of the received signal is the standard criterion used for selecting the best cell to attach to. The use of this criterion is implemented in the initial cell selection process, which can be invoked by calling another version of the LteHelper::Attach function, as shown below:

lteHelper->Attach (ueDevs); // attach one or more UEs to the strongest cell

The difference with the manual method is that the destination eNodeB is not specified. The procedure will find the best cell for the UEs, based on several criteria, including the strength of the received signal (RSRP).

After the method is called, the UE will spend some time measuring the neighbouring cells, and then attempt to attach to the best one. More details can be found in section Initial Cell Selection of the Design Documentation.

It is important to note that this method only works in EPC-enabled simulations. LTE-only simulations must resort to the manual attachment method.

Closed Subscriber Group

An interesting use case of the initial cell selection process is to set up a simulation environment with a Closed Subscriber Group (CSG).

For example, a certain eNodeB, typically a smaller one such as a femtocell, might belong to a private owner (e.g., a household or business), allowing access only to UEs which have been previously registered by the owner. The eNodeB and the registered UEs together form a CSG.

The access restriction can be simulated by “labeling” the CSG members with the same CSG ID. This is done through attributes in both the eNodeB and the UE, for example using the following LteHelper functions:

// label the following eNodeBs with CSG identity of 1 and CSG indication enabled
lteHelper->SetEnbDeviceAttribute ("CsgId", UintegerValue (1));
lteHelper->SetEnbDeviceAttribute ("CsgIndication", BooleanValue (true));

// label one or more UEs with CSG identity of 1
lteHelper->SetUeDeviceAttribute ("CsgId", UintegerValue (1));

// install the eNodeBs and UEs
NetDeviceContainer csgEnbDevs = lteHelper->InstallEnbDevice (csgEnbNodes);
NetDeviceContainer csgUeDevs = lteHelper->InstallUeDevice (csgUeNodes);

Then enable the initial cell selection procedure on the UEs:

lteHelper->Attach (csgUeDevs);

This is necessary because the CSG restriction only works with the automatic method of network attachment, not with the manual method.

Note that setting the CSG indication of an eNodeB to false (the default value) will disable the restriction, i.e., any UE can connect to that eNodeB.
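For instance, a sketch of how non-member UEs could be added alongside the CSG members (otherUeNodes is a hypothetical container; the default CsgId of 0 is assumed to denote a UE without CSG membership):

// reset the CSG identity to its default (0) so that subsequently installed
// UEs are not members of the CSG configured above
lteHelper->SetUeDeviceAttribute ("CsgId", UintegerValue (0));
NetDeviceContainer otherUeDevs = lteHelper->InstallUeDevice (otherUeNodes);

// these UEs will be barred from camping on the CSG eNodeBs during cell selection
lteHelper->Attach (otherUeDevs);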

Configure UE measurements

The active UE measurement configuration in a simulation is dictated by the selected so-called “consumers”, such as the handover algorithm. Users may add their own configuration into action, and there are several ways to do so:

  1. direct configuration in the eNodeB RRC entity;
  2. configuring an existing handover algorithm; and
  3. developing a new handover algorithm.

This section will cover the first method only. The second method is covered in Automatic handover trigger, while the third method is explained at length in Section Handover algorithm of the Design Documentation.

Direct configuration in the eNodeB RRC works as follows. The user begins by creating a new LteRrcSap::ReportConfigEutra instance and passing it to the LteEnbRrc::AddUeMeasReportConfig function. The function will return the measId (measurement identity), which is a unique reference to the configuration in the eNodeB instance. This function must be called before the simulation begins. The measurement configuration will be active in all UEs attached to the eNodeB throughout the duration of the simulation. During the simulation, the user can capture the measurement reports produced by the UEs by listening to the existing LteEnbRrc::RecvMeasurementReport trace source.

The structure ReportConfigEutra is in accordance with the 3GPP specification. The definition of the structure and of each member field can be found in Section 6.3.5 of [TS36331].

The code sample below configures an Event A1 RSRP measurement in every eNodeB within the container devs:

LteRrcSap::ReportConfigEutra config;
config.eventId = LteRrcSap::ReportConfigEutra::EVENT_A1;
config.threshold1.choice = LteRrcSap::ThresholdEutra::THRESHOLD_RSRP;
config.threshold1.range = 41;
config.triggerQuantity = LteRrcSap::ReportConfigEutra::RSRP;
config.reportInterval = LteRrcSap::ReportConfigEutra::MS480;

std::vector<uint8_t> measIdList;

NetDeviceContainer::Iterator it;
for (it = devs.Begin (); it != devs.End (); it++)
{
  Ptr<NetDevice> dev = *it;
  Ptr<LteEnbNetDevice> enbDev = dev->GetObject<LteEnbNetDevice> ();
  Ptr<LteEnbRrc> enbRrc = enbDev->GetRrc ();

  uint8_t measId = enbRrc->AddUeMeasReportConfig (config);
  measIdList.push_back (measId); // remember the measId created

  enbRrc->TraceConnect ("RecvMeasurementReport",
                        "context",
                        MakeCallback (&RecvMeasurementReportCallback));
}

Note that thresholds are expressed in a range format. In the example above, the range 41 for RSRP corresponds to -100 dBm. The conversion from and to the range format is specified in Sections 9.1.4 and 9.1.7 of [TS36133]. The EutranMeasurementMapping class has several static functions that can be used for this purpose.
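For instance, a sketch of how such a conversion could be done (assuming static helper functions named Dbm2RsrpRange and RsrpRange2Dbm in EutranMeasurementMapping):

// convert a -100 dBm threshold to the range format used by ReportConfigEutra
uint8_t rsrpRange = EutranMeasurementMapping::Dbm2RsrpRange (-100.0); // 41, per the text above
config.threshold1.range = rsrpRange;

// and back again, e.g. for logging
double thresholdDbm = EutranMeasurementMapping::RsrpRange2Dbm (rsrpRange);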

The corresponding callback function would have a definition similar to the one below:

void
RecvMeasurementReportCallback (std::string context,
                               uint64_t imsi,
                               uint16_t cellId,
                               uint16_t rnti,
                               LteRrcSap::MeasurementReport measReport);

This method will register the callback function as a consumer of UE measurements. In the case where there is more than one consumer in the simulation (e.g., a handover algorithm), the measurements intended for other consumers will also be captured by this callback function. Users may utilize the measId field, contained within the LteRrcSap::MeasurementReport argument of the callback function, to tell which measurement configuration triggered the report.
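As a sketch (assuming measIdList from the snippet above is visible to the callback, e.g. declared at file scope, and that <algorithm> is included for std::find), the callback body could filter by measurement identity like this:

void
RecvMeasurementReportCallback (std::string context, uint64_t imsi,
                               uint16_t cellId, uint16_t rnti,
                               LteRrcSap::MeasurementReport measReport)
{
  uint8_t measId = measReport.measResults.measId;
  // ignore reports triggered by other consumers' configurations
  if (std::find (measIdList.begin (), measIdList.end (), measId) == measIdList.end ())
    {
      return;
    }
  std::cout << "IMSI " << imsi << " sent a report for measId "
            << (uint16_t) measId << std::endl;
}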

In general, this mechanism prevents one consumer from unknowingly interfering with another consumer’s reporting configuration.

Note that only the reporting configuration part (i.e., LteRrcSap::ReportConfigEutra) of the UE measurement parameters is open for consumers to configure, while the other parts are kept hidden. The intra-frequency limitation is the main motivation behind this API design decision:

  • there is only one, unambiguous and definitive measurement object, so there is no need to configure it;
  • measurement identities are kept hidden because there is a one-to-one mapping between reporting configuration and measurement identity; a new measurement identity is therefore set up automatically when a new reporting configuration is created;
  • quantity configuration is configured elsewhere, see Performing measurements; and
  • measurement gaps are not supported, because they are only applicable to inter-frequency settings.

X2-based handover

As defined by 3GPP, handover is a procedure for changing the serving cell of a UE in CONNECTED mode. The two eNodeBs involved in the process are typically called the source eNodeB and the target eNodeB.

In order to enable the execution of X2-based handover in simulation, there are three requirements that must be met. Firstly, EPC must be enabled in the simulation (see Evolved Packet Core (EPC)).

Secondly, an X2 interface must be configured between the two eNodeBs, which needs to be done explicitly within the simulation program:

lteHelper->AddX2Interface (enbNodes);

where enbNodes is a NodeContainer that contains the two eNodeBs between which the X2 interface is to be configured. If the container has more than two eNodeBs, the function will create an X2 interface between every pair of eNodeBs in the container.
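If the X2 interface should only be created between a specific pair of eNodeBs, a per-pair variant of the same helper call can be used instead (a sketch, assuming enbNodes contains at least two nodes):

// create an X2 interface only between the first two eNodeBs
lteHelper->AddX2Interface (enbNodes.Get (0), enbNodes.Get (1));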

Lastly, the target eNodeB must be configured as “open” to the X2 HANDOVER REQUEST. Every eNodeB is open by default, so no extra instruction is needed in most cases. However, users may set an eNodeB to “closed” by setting the boolean attribute LteEnbRrc::AdmitHandoverRequest to false. As an example, you can run the lena-x2-handover program, setting the attribute in this way:

NS_LOG=EpcX2:LteEnbRrc ./waf --run lena-x2-handover --command="%s --ns3::LteEnbRrc::AdmitHandoverRequest=false"

After the above three requirements are fulfilled, the handover procedure can be triggered manually or automatically. Each will be presented in the following subsections.

Manual handover trigger

A handover event can be triggered “manually” within the simulation program by scheduling an explicit handover event. The LteHelper object provides a convenient method for scheduling a handover event. As an example, let us assume that ueLteDevs is a NetDeviceContainer that contains the UE that is to be handed over, and that enbLteDevs is another NetDeviceContainer that contains the source and the target eNB. Then, a handover at 0.1 s can be scheduled like this:

lteHelper->HandoverRequest (Seconds (0.100),
                            ueLteDevs.Get (0),
                            enbLteDevs.Get (0),
                            enbLteDevs.Get (1));

Note that the UE needs to be already connected to the source eNB, otherwise the simulation will terminate with an error message.

For an example with full source code, please refer to the lena-x2-handover example program.

Automatic handover trigger

The handover procedure can also be triggered “automatically” by the serving eNodeB of the UE. The logic behind the trigger depends on the handover algorithm currently active in the eNodeB RRC entity. Users may select and configure the handover algorithm that will be used in the simulation, as explained in this section. Users may also opt to write their own handover algorithm implementation, as described in Section Handover algorithm of the Design Documentation.

Selecting a handover algorithm is done via the LteHelper object and its SetHandoverAlgorithmType method as shown below:

Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();
lteHelper->SetHandoverAlgorithmType ("ns3::A2A4RsrqHandoverAlgorithm");

The selected handover algorithm may also provide several configurable attributes, which can be set as follows:

lteHelper->SetHandoverAlgorithmAttribute ("ServingCellThreshold",
                                          UintegerValue (30));
lteHelper->SetHandoverAlgorithmAttribute ("NeighbourCellOffset",
                                          UintegerValue (1));

Three handover algorithm options are included in the LTE module. The A2-A4-RSRQ handover algorithm (named ns3::A2A4RsrqHandoverAlgorithm) is the default option, and its usage has already been shown above.

Another option is the strongest cell handover algorithm (named ns3::A3RsrpHandoverAlgorithm), which can be selected and configured by the following code:

lteHelper->SetHandoverAlgorithmType ("ns3::A3RsrpHandoverAlgorithm");
lteHelper->SetHandoverAlgorithmAttribute ("Hysteresis",
                                          DoubleValue (3.0));
lteHelper->SetHandoverAlgorithmAttribute ("TimeToTrigger",
                                          TimeValue (MilliSeconds (256)));

The last option is a special one, called the no-op handover algorithm, which simply disables the automatic handover trigger. This is useful, for example, in cases where the manual handover trigger needs exclusive control over all handover decisions. It does not have any configurable attributes. The usage is as follows:

lteHelper->SetHandoverAlgorithmType ("ns3::NoOpHandoverAlgorithm");

For more information on each handover algorithm’s decision policy and their attributes, please refer to their respective subsections in Section Handover algorithm of the Design Documentation.

Finally, the InstallEnbDevice function of LteHelper will instantiate one instance of the selected handover algorithm for each eNodeB device. In other words, make sure to select the right handover algorithm before it is finalized in the following line of code:

NetDeviceContainer enbLteDevs = lteHelper->InstallEnbDevice (enbNodes);

An example with full source code using the automatic handover trigger can be found in the lena-x2-handover-measures example program.

Tuning simulation with handover

As mentioned in the Design Documentation, the current implementation of the handover model may produce unpredictable behaviour when a handover failure occurs. This subsection focuses on the steps that users should take into account if they plan to use handover in their simulations.

The major cause of handover failure that we will tackle is an error in transmitting handover-related signaling messages during the execution of a handover procedure. As apparent from the Figure Sequence diagram of the X2-based handover in the Design Documentation, there are many of them and they use different interfaces and protocols. For the sake of simplicity, we can safely assume that the X2 interface (between the source eNodeB and the target eNodeB) and the S1 interface (between the target eNodeB and the SGW/PGW) are quite stable. Therefore we will focus our attention on the RRC protocol (between the UE and the eNodeBs) and the Random Access procedure, which are normally transmitted over the air and susceptible to degradation of channel conditions.

A general tip to reduce transmission errors is to ensure a high enough SINR level in every UE. This can be done by proper planning of the network topology that minimizes network coverage holes. If the topology has a known coverage hole, then the UEs should be configured not to venture into that area.

Another approach to keep in mind is to avoid too-late handovers. In other words, a handover should happen before the UE’s SINR becomes too low, otherwise the UE may fail to receive the handover command from the source eNodeB. Handover algorithms have the means to control how early or late a handover decision is made. For example, the A2-A4-RSRQ handover algorithm can be configured with a higher threshold to make it decide on a handover earlier. Similarly, a smaller hysteresis and/or a shorter time-to-trigger in the strongest cell handover algorithm typically result in earlier handovers. In order to find the right values for these parameters, one of the factors that should be considered is the UE movement speed. Generally, a faster moving UE requires the handover to be executed earlier. Some research works have suggested recommended values, such as [Lee2010].
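For example, a sketch of how the strongest cell handover algorithm could be tuned towards earlier handovers (the attribute names are those shown in Automatic handover trigger; the values are purely illustrative):

// smaller hysteresis and shorter time-to-trigger => earlier handover decisions
lteHelper->SetHandoverAlgorithmType ("ns3::A3RsrpHandoverAlgorithm");
lteHelper->SetHandoverAlgorithmAttribute ("Hysteresis", DoubleValue (1.5));
lteHelper->SetHandoverAlgorithmAttribute ("TimeToTrigger", TimeValue (MilliSeconds (128)));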

The above tips should be enough for normal simulation uses, but if some special needs arise then a more extreme measure can be taken into consideration. For instance, users may consider disabling the channel error models. This will ensure that all handover-related signaling messages are transmitted successfully, regardless of distance and channel condition. However, it will also affect all other data and control packets not related to handover, which may be an unwanted side effect. If that is acceptable, it can be done as follows:

Config::SetDefault ("ns3::LteSpectrumPhy::CtrlErrorModelEnabled", BooleanValue (false));
Config::SetDefault ("ns3::LteSpectrumPhy::DataErrorModelEnabled", BooleanValue (false));

By using the above code, we disable the error model in both control and data channels and in both directions (downlink and uplink). This is necessary because handover-related signaling messages are transmitted using these channels. An exception is when the simulation uses the ideal RRC protocol. In this case, only the Random Access procedure remains to be considered. The procedure consists of control messages, therefore we only need to disable the control channel’s error model.
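For instance, with the ideal RRC protocol, a sketch of the reduced configuration would be:

// with the ideal RRC model, only the Random Access messages remain on the air
// interface, so disabling the control channel error model is sufficient
Config::SetDefault ("ns3::LteSpectrumPhy::CtrlErrorModelEnabled", BooleanValue (false));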

Handover traces

The RRC model, in particular the LteEnbRrc and LteUeRrc objects, provides some useful traces which can be hooked up to custom functions so that they are called upon the start and end of the handover execution phase at both the UE and eNB side. As an example, in your simulation program you can declare the following methods:

void
NotifyHandoverStartUe (std::string context,
                       uint64_t imsi,
                       uint16_t cellId,
                       uint16_t rnti,
                       uint16_t targetCellId)
{
  std::cout << Simulator::Now ().GetSeconds () << " " << context
            << " UE IMSI " << imsi
            << ": previously connected to CellId " << cellId
            << " with RNTI " << rnti
            << ", doing handover to CellId " << targetCellId
            << std::endl;
}

void
NotifyHandoverEndOkUe (std::string context,
                       uint64_t imsi,
                       uint16_t cellId,
                       uint16_t rnti)
{
  std::cout << Simulator::Now ().GetSeconds () << " " << context
            << " UE IMSI " << imsi
            << ": successful handover to CellId " << cellId
            << " with RNTI " << rnti
            << std::endl;
}

void
NotifyHandoverStartEnb (std::string context,
                        uint64_t imsi,
                        uint16_t cellId,
                        uint16_t rnti,
                        uint16_t targetCellId)
{
  std::cout << Simulator::Now ().GetSeconds () << " " << context
            << " eNB CellId " << cellId
            << ": start handover of UE with IMSI " << imsi
            << " RNTI " << rnti
            << " to CellId " << targetCellId
            << std::endl;
}

void
NotifyHandoverEndOkEnb (std::string context,
                        uint64_t imsi,
                        uint16_t cellId,
                        uint16_t rnti)
{
  std::cout << Simulator::Now ().GetSeconds () << " " << context
            << " eNB CellId " << cellId
            << ": completed handover of UE with IMSI " << imsi
            << " RNTI " << rnti
            << std::endl;
}

Then, you can hook up these methods to the corresponding trace sources like this:

Config::Connect ("/NodeList/*/DeviceList/*/LteEnbRrc/HandoverStart",
                 MakeCallback (&NotifyHandoverStartEnb));
Config::Connect ("/NodeList/*/DeviceList/*/LteUeRrc/HandoverStart",
                 MakeCallback (&NotifyHandoverStartUe));
Config::Connect ("/NodeList/*/DeviceList/*/LteEnbRrc/HandoverEndOk",
                 MakeCallback (&NotifyHandoverEndOkEnb));
Config::Connect ("/NodeList/*/DeviceList/*/LteUeRrc/HandoverEndOk",
                 MakeCallback (&NotifyHandoverEndOkUe));

The example program src/lte/examples/lena-x2-handover.cc illustrates how all the above instructions can be integrated in a simulation program. You can run the program like this:

./waf --run lena-x2-handover

and it will output the messages printed by the custom handover trace hooks. In order to additionally visualize some meaningful logging information, you can run the program like this:

NS_LOG=LteEnbRrc:LteUeRrc:EpcX2 ./waf --run lena-x2-handover

Frequency Reuse Algorithms

In this section we describe how to use Frequency Reuse (FR) algorithms in the eNB within LTE simulations. There are two possible ways of configuration. The first approach is the “manual” one; it requires more parameters to be configured, but allows the user to configure the FR algorithm exactly as needed. The second approach is more “automatic”. It is very convenient, because it is the same for each FR algorithm, so the user can switch FR algorithms very quickly by changing only the type of the FR algorithm. One drawback is that the “automatic” approach uses only a limited set of configurations for each algorithm, which makes it less flexible, but it is sufficient for most cases.

These two approaches are described in more detail in the following sub-sections.

If the user does not configure a Frequency Reuse algorithm, the default one (i.e., LteFrNoOpAlgorithm) is installed in the eNB. It acts as if the FR algorithm was disabled.

One thing that should be mentioned is that most of the implemented FR algorithms work with a cell bandwidth greater than or equal to 15 RBs. This limitation is caused by the requirement that at least three contiguous RBs have to be assigned to a UE for transmission.

Manual configuration

The Frequency Reuse algorithm can be configured “manually” within the simulation program by setting the type of the FR algorithm and all of its attributes. Currently, seven FR algorithms are implemented:

  • ns3::LteFrNoOpAlgorithm
  • ns3::LteFrHardAlgorithm
  • ns3::LteFrStrictAlgorithm
  • ns3::LteFrSoftAlgorithm
  • ns3::LteFfrSoftAlgorithm
  • ns3::LteFfrEnhancedAlgorithm
  • ns3::LteFfrDistributedAlgorithm

Selecting an FR algorithm is done via the LteHelper object and its SetFfrAlgorithmType method, as shown below:

Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();
lteHelper->SetFfrAlgorithmType ("ns3::LteFrHardAlgorithm");

Each implemented FR algorithm provides several configurable attributes. Users do not have to care about the UL and DL bandwidth configuration, because it is done automatically during cell configuration. To change the bandwidth seen by the FR algorithm, configure the required values on the LteEnbNetDevice:

uint8_t bandwidth = 100;
lteHelper->SetEnbDeviceAttribute ("DlBandwidth", UintegerValue (bandwidth));
lteHelper->SetEnbDeviceAttribute ("UlBandwidth", UintegerValue (bandwidth));

The configuration of each FR algorithm will now be described.

Hard Frequency Reuse Algorithm

As described in Section Hard Frequency Reuse of the Design Documentation, ns3::LteFrHardAlgorithm uses one sub-band. To configure this sub-band, the user needs to specify the offset and the bandwidth for DL and UL in number of RBs.

The Hard Frequency Reuse Algorithm provides the following attributes:

  • DlSubBandOffset: Downlink Offset in number of Resource Block Groups
  • DlSubBandwidth: Downlink Transmission SubBandwidth Configuration in number of Resource Block Groups
  • UlSubBandOffset: Uplink Offset in number of Resource Block Groups
  • UlSubBandwidth: Uplink Transmission SubBandwidth Configuration in number of Resource Block Groups

An example configuration of LteFrHardAlgorithm can be done in the following way:

lteHelper->SetFfrAlgorithmType ("ns3::LteFrHardAlgorithm");
lteHelper->SetFfrAlgorithmAttribute ("DlSubBandOffset", UintegerValue (8));
lteHelper->SetFfrAlgorithmAttribute ("DlSubBandwidth", UintegerValue (8));
lteHelper->SetFfrAlgorithmAttribute ("UlSubBandOffset", UintegerValue (8));
lteHelper->SetFfrAlgorithmAttribute ("UlSubBandwidth", UintegerValue (8));
NetDeviceContainer enbDevs = lteHelper->InstallEnbDevice (enbNodes.Get(0));

The above example allows the eNB to use only RBs 8 to 16 in DL and UL, while the entire cell bandwidth is 25 RBs.

Strict Frequency Reuse Algorithm

The Strict Frequency Reuse Algorithm uses two sub-bands: one common to all cells and one private. There is also an RSRQ threshold, which is needed to decide in which sub-band a UE should be served. Moreover, the transmission power in these sub-bands can be different.

The Strict Frequency Reuse Algorithm provides the following attributes:

  • UlCommonSubBandwidth: Uplink Common SubBandwidth Configuration in number of Resource Block Groups
  • UlEdgeSubBandOffset: Uplink Edge SubBand Offset in number of Resource Block Groups
  • UlEdgeSubBandwidth: Uplink Edge SubBandwidth Configuration in number of Resource Block Groups
  • DlCommonSubBandwidth: Downlink Common SubBandwidth Configuration in number of Resource Block Groups
  • DlEdgeSubBandOffset: Downlink Edge SubBand Offset in number of Resource Block Groups
  • DlEdgeSubBandwidth: Downlink Edge SubBandwidth Configuration in number of Resource Block Groups
  • RsrqThreshold: If the RSRQ of a UE is worse than this threshold, the UE should be served in the edge sub-band
  • CenterPowerOffset: PdschConfigDedicated::Pa value for center sub-band, default value dB0
  • EdgePowerOffset: PdschConfigDedicated::Pa value for edge sub-band, default value dB0
  • CenterAreaTpc: TPC value which will be set in DL-DCI for UEs in center area, Absolute mode is used, default value 1 is mapped to -1 according to TS36.213 Table 5.1.1.1-2
  • EdgeAreaTpc: TPC value which will be set in DL-DCI for UEs in edge area, Absolute mode is used, default value 1 is mapped to -1 according to TS36.213 Table 5.1.1.1-2

The example below allows the eNB to use RBs 0 to 6 as the common sub-band and RBs 12 to 18 as the private sub-band in DL and UL; the RSRQ threshold is 20 dB, the power in the center area equals LteEnbPhy::TxPower - 3dB, and the power in the edge area equals LteEnbPhy::TxPower + 3dB:

lteHelper->SetFfrAlgorithmType ("ns3::LteFrStrictAlgorithm");
lteHelper->SetFfrAlgorithmAttribute ("DlCommonSubBandwidth", UintegerValue (6));
lteHelper->SetFfrAlgorithmAttribute ("UlCommonSubBandwidth", UintegerValue (6));
lteHelper->SetFfrAlgorithmAttribute ("DlEdgeSubBandOffset", UintegerValue (6));
lteHelper->SetFfrAlgorithmAttribute ("DlEdgeSubBandwidth", UintegerValue (6));
lteHelper->SetFfrAlgorithmAttribute ("UlEdgeSubBandOffset", UintegerValue (6));
lteHelper->SetFfrAlgorithmAttribute ("UlEdgeSubBandwidth", UintegerValue (6));
lteHelper->SetFfrAlgorithmAttribute ("RsrqThreshold", UintegerValue (20));
lteHelper->SetFfrAlgorithmAttribute ("CenterPowerOffset",
                      UintegerValue (LteRrcSap::PdschConfigDedicated::dB_3));
lteHelper->SetFfrAlgorithmAttribute ("EdgePowerOffset",
                      UintegerValue (LteRrcSap::PdschConfigDedicated::dB3));
lteHelper->SetFfrAlgorithmAttribute ("CenterAreaTpc", UintegerValue (1));
lteHelper->SetFfrAlgorithmAttribute ("EdgeAreaTpc", UintegerValue (2));
NetDeviceContainer enbDevs = lteHelper->InstallEnbDevice (enbNodes.Get(0));

Soft Frequency Reuse Algorithm

With the Soft Frequency Reuse Algorithm, the eNB uses the entire cell bandwidth, but there are two sub-bands within which UEs are served with different power levels.

The Soft Frequency Reuse Algorithm provides the following attributes:

  • UlEdgeSubBandOffset: Uplink Edge SubBand Offset in number of Resource Block Groups
  • UlEdgeSubBandwidth: Uplink Edge SubBandwidth Configuration in number of Resource Block Groups
  • DlEdgeSubBandOffset: Downlink Edge SubBand Offset in number of Resource Block Groups
  • DlEdgeSubBandwidth: Downlink Edge SubBandwidth Configuration in number of Resource Block Groups
  • AllowCenterUeUseEdgeSubBand: If true center UEs can receive on edge sub-band RBGs, otherwise edge sub-band is allowed only for edge UEs, default value is true
  • RsrqThreshold: If the RSRQ of a UE is worse than this threshold, the UE should be served in the edge sub-band
  • CenterPowerOffset: PdschConfigDedicated::Pa value for center sub-band, default value dB0
  • EdgePowerOffset: PdschConfigDedicated::Pa value for edge sub-band, default value dB0
  • CenterAreaTpc: TPC value which will be set in DL-DCI for UEs in center area, Absolute mode is used, default value 1 is mapped to -1 according to TS36.213 Table 5.1.1.1-2
  • EdgeAreaTpc: TPC value which will be set in DL-DCI for UEs in edge area, Absolute mode is used, default value 1 is mapped to -1 according to TS36.213 Table 5.1.1.1-2

The example below configures RBs 8 to 16 to be used by cell edge UEs, and this sub-band is not available to cell center users. The RSRQ threshold is 20 dB, the power in the center area equals LteEnbPhy::TxPower, and the power in the edge area equals LteEnbPhy::TxPower + 3dB:

lteHelper->SetFfrAlgorithmType ("ns3::LteFrSoftAlgorithm");
lteHelper->SetFfrAlgorithmAttribute ("DlEdgeSubBandOffset", UintegerValue (8));
lteHelper->SetFfrAlgorithmAttribute ("DlEdgeSubBandwidth", UintegerValue (8));
lteHelper->SetFfrAlgorithmAttribute ("UlEdgeSubBandOffset", UintegerValue (8));
lteHelper->SetFfrAlgorithmAttribute ("UlEdgeSubBandwidth", UintegerValue (8));
lteHelper->SetFfrAlgorithmAttribute ("AllowCenterUeUseEdgeSubBand", BooleanValue (false));
lteHelper->SetFfrAlgorithmAttribute ("RsrqThreshold", UintegerValue (20));
lteHelper->SetFfrAlgorithmAttribute ("CenterPowerOffset",
                      UintegerValue (LteRrcSap::PdschConfigDedicated::dB0));
lteHelper->SetFfrAlgorithmAttribute ("EdgePowerOffset",
                      UintegerValue (LteRrcSap::PdschConfigDedicated::dB3));
NetDeviceContainer enbDevs = lteHelper->InstallEnbDevice (enbNodes.Get(0));

Soft Fractional Frequency Reuse Algorithm

Soft Fractional Frequency Reuse (SFFR) uses three sub-bands: center, medium (common) and edge. The user has to configure only two of them: common and edge. The center sub-band is composed of the remaining bandwidth. Each sub-band can be served with a different transmission power. Since there are three sub-bands, two RSRQ thresholds need to be configured.

The Soft Fractional Frequency Reuse Algorithm provides the following attributes:

  • UlCommonSubBandwidth: Uplink Common SubBandwidth Configuration in number of Resource Block Groups
  • UlEdgeSubBandOffset: Uplink Edge SubBand Offset in number of Resource Block Groups
  • UlEdgeSubBandwidth: Uplink Edge SubBandwidth Configuration in number of Resource Block Groups
  • DlCommonSubBandwidth: Downlink Common SubBandwidth Configuration in number of Resource Block Groups
  • DlEdgeSubBandOffset: Downlink Edge SubBand Offset in number of Resource Block Groups
  • DlEdgeSubBandwidth: Downlink Edge SubBandwidth Configuration in number of Resource Block Groups
  • CenterRsrqThreshold: If the RSRQ of a UE is worse than this threshold, the UE should be served in the medium sub-band
  • EdgeRsrqThreshold: If the RSRQ of a UE is worse than this threshold, the UE should be served in the edge sub-band
  • CenterAreaPowerOffset: PdschConfigDedicated::Pa value for center sub-band, default value dB0
  • MediumAreaPowerOffset: PdschConfigDedicated::Pa value for medium sub-band, default value dB0
  • EdgeAreaPowerOffset: PdschConfigDedicated::Pa value for edge sub-band, default value dB0
  • CenterAreaTpc: TPC value which will be set in DL-DCI for UEs in center area, Absolute mode is used, default value 1 is mapped to -1 according to TS36.213 Table 5.1.1.1-2
  • MediumAreaTpc: TPC value which will be set in DL-DCI for UEs in medium area, Absolute mode is used, default value 1 is mapped to -1 according to TS36.213 Table 5.1.1.1-2
  • EdgeAreaTpc: TPC value which will be set in DL-DCI for UEs in edge area, Absolute mode is used, default value 1 is mapped to -1 according to TS36.213 Table 5.1.1.1-2

In the example below, RBs 0 to 6 are used as the common (medium) sub-band, RBs 6 to 12 are used as the edge sub-band, and RBs 12 to 24 are used as the center sub-band (it is composed of the remaining RBs). The RSRQ threshold between the center and medium area is 28 dB, and the RSRQ threshold between the medium and edge area is 18 dB. The power in the center area equals LteEnbPhy::TxPower - 3dB, the power in the medium area equals LteEnbPhy::TxPower, and the power in the edge area equals LteEnbPhy::TxPower + 3dB:

lteHelper->SetFfrAlgorithmType ("ns3::LteFfrSoftAlgorithm");
lteHelper->SetFfrAlgorithmAttribute ("UlCommonSubBandwidth", UintegerValue (6));
lteHelper->SetFfrAlgorithmAttribute ("DlCommonSubBandwidth", UintegerValue (6));
lteHelper->SetFfrAlgorithmAttribute ("DlEdgeSubBandOffset", UintegerValue (0));
lteHelper->SetFfrAlgorithmAttribute ("DlEdgeSubBandwidth", UintegerValue (6));
lteHelper->SetFfrAlgorithmAttribute ("UlEdgeSubBandOffset", UintegerValue (0));
lteHelper->SetFfrAlgorithmAttribute ("UlEdgeSubBandwidth", UintegerValue (6));
lteHelper->SetFfrAlgorithmAttribute ("CenterRsrqThreshold", UintegerValue (28));
lteHelper->SetFfrAlgorithmAttribute ("EdgeRsrqThreshold", UintegerValue (18));
lteHelper->SetFfrAlgorithmAttribute ("CenterAreaPowerOffset",
                      UintegerValue (LteRrcSap::PdschConfigDedicated::dB_3));
lteHelper->SetFfrAlgorithmAttribute ("MediumAreaPowerOffset",
                      UintegerValue (LteRrcSap::PdschConfigDedicated::dB0));
lteHelper->SetFfrAlgorithmAttribute ("EdgeAreaPowerOffset",
                      UintegerValue (LteRrcSap::PdschConfigDedicated::dB3));
NetDeviceContainer enbDevs = lteHelper->InstallEnbDevice (enbNodes.Get(0));

Enhanced Fractional Frequency Reuse Algorithm

Enhanced Fractional Frequency Reuse (EFFR) reserves part of the system bandwidth for each cell (typically there are 3 cell types and each one gets 1/3 of the system bandwidth). Part of this sub-bandwidth is then used as the Primary Segment with reuse factor 3, and part as the Secondary Segment with reuse factor 1. The user has to configure (for DL and UL) the offset of the cell sub-bandwidth in number of RBs, the number of RBs to be used as the Primary Segment, and the number of RBs to be used as the Secondary Segment. The Primary Segment is used by the cell at will, but RBs from the Secondary Segment can be assigned to a UE only if the CQI feedback from that UE has a higher value than the configured CQI threshold. A UE is considered an edge UE when its RSRQ is lower than RsrqThreshold.

Since each eNB needs to know where the Primary and Secondary Segments of the other cell types are, it calculates them assuming that the configuration is the same for each cell and that only the sub-bandwidth offsets are different. It is therefore important to divide the available system bandwidth equally among the cells and to apply the same configuration of Primary and Secondary Segments to all of them.

The Enhanced Fractional Frequency Reuse Algorithm provides the following attributes:

  • UlSubBandOffset: Uplink SubBand Offset for this cell in number of Resource Block Groups
  • UlReuse3SubBandwidth: Uplink Reuse 3 SubBandwidth Configuration in number of Resource Block Groups
  • UlReuse1SubBandwidth: Uplink Reuse 1 SubBandwidth Configuration in number of Resource Block Groups
  • DlSubBandOffset: Downlink SubBand Offset for this cell in number of Resource Block Groups
  • DlReuse3SubBandwidth: Downlink Reuse 3 SubBandwidth Configuration in number of Resource Block Groups
  • DlReuse1SubBandwidth: Downlink Reuse 1 SubBandwidth Configuration in number of Resource Block Groups
  • RsrqThreshold: If the RSRQ of a UE is worse than this threshold, the UE should be served in the edge sub-band
  • CenterAreaPowerOffset: PdschConfigDedicated::Pa value for center sub-band, default value dB0
  • EdgeAreaPowerOffset: PdschConfigDedicated::Pa value for edge sub-band, default value dB0
  • DlCqiThreshold: If the DL-CQI of a UE for an RBG is higher than this threshold, transmission on that RBG is possible
  • UlCqiThreshold: If the UL-CQI of a UE for an RBG is higher than this threshold, transmission on that RBG is possible
  • CenterAreaTpc: TPC value which will be set in DL-DCI for UEs in center area, Absolute mode is used, default value 1 is mapped to -1 according to TS36.213 Table 5.1.1.1-2
  • EdgeAreaTpc: TPC value which will be set in DL-DCI for UEs in edge area, Absolute mode is used, default value 1 is mapped to -1 according to TS36.213 Table 5.1.1.1-2

In the example below, the offset in DL and UL is 0 RBs, and 4 RBs are used in the Primary Segment and in the Secondary Segment. The RSRQ threshold between the center and edge area is 25 dB. The DL and UL CQI thresholds are set to a value of 10. The power in the center area equals LteEnbPhy::TxPower - 6dB, and the power in the edge area equals LteEnbPhy::TxPower + 0dB:

lteHelper->SetFfrAlgorithmType("ns3::LteFfrEnhancedAlgorithm");
lteHelper->SetFfrAlgorithmAttribute("RsrqThreshold", UintegerValue (25));
lteHelper->SetFfrAlgorithmAttribute("DlCqiThreshold", UintegerValue (10));
lteHelper->SetFfrAlgorithmAttribute("UlCqiThreshold", UintegerValue (10));
lteHelper->SetFfrAlgorithmAttribute("CenterAreaPowerOffset",
               UintegerValue (LteRrcSap::PdschConfigDedicated::dB_6));
lteHelper->SetFfrAlgorithmAttribute("EdgeAreaPowerOffset",
               UintegerValue (LteRrcSap::PdschConfigDedicated::dB0));
lteHelper->SetFfrAlgorithmAttribute("UlSubBandOffset", UintegerValue (0));
lteHelper->SetFfrAlgorithmAttribute("UlReuse3SubBandwidth", UintegerValue (4));
lteHelper->SetFfrAlgorithmAttribute("UlReuse1SubBandwidth", UintegerValue (4));
lteHelper->SetFfrAlgorithmAttribute("DlSubBandOffset", UintegerValue (0));
lteHelper->SetFfrAlgorithmAttribute("DlReuse3SubBandwidth", UintegerValue (4));
lteHelper->SetFfrAlgorithmAttribute("DlReuse1SubBandwidth", UintegerValue (4));

Distributed Fractional Frequency Reuse Algorithm

Distributed Fractional Frequency Reuse requires the X2 interface to be installed between all eNBs. X2 interfaces can be installed only when the EPC is configured, so this FFR scheme can be used only in EPC scenarios.

With the Distributed Fractional Frequency Reuse Algorithm, the eNB uses the entire cell bandwidth and there can be two sub-bands: the center sub-band and the edge sub-band. Within these sub-bands, UEs can be served with different power levels. The algorithm adaptively selects RBs for the cell-edge sub-band on the basis of coordination information (i.e., RNTP) from adjacent cells, and notifies the base stations of the adjacent cells which RBs it has selected to use in the edge sub-band. If no UE in the cell is classified as an edge UE, the eNB will not use any RBs as the edge sub-band.

The Distributed Fractional Frequency Reuse Algorithm provides the following attributes:

  • CalculationInterval: Time interval between calculations of the edge sub-band, default value 1 second
  • RsrqThreshold: If the RSRQ of a UE is worse than this threshold, the UE should be served in the edge sub-band
  • RsrpDifferenceThreshold: If the difference between the power of the signal received by a UE from the serving cell and the power of the signal received from an adjacent cell is less than the RsrpDifferenceThreshold value, the cell weight is incremented
  • CenterPowerOffset: PdschConfigDedicated::Pa value for the center sub-band, default value dB0
  • EdgePowerOffset: PdschConfigDedicated::Pa value for the edge sub-band, default value dB0
  • EdgeRbNum: Number of RBs that can be used in the edge sub-band
  • CenterAreaTpc: TPC value which will be set in DL-DCI for UEs in center area, Absolute mode is used, default value 1 is mapped to -1 according to TS36.213 Table 5.1.1.1-2
  • EdgeAreaTpc: TPC value which will be set in DL-DCI for UEs in edge area, Absolute mode is used, default value 1 is mapped to -1 according to TS36.213 Table 5.1.1.1-2

In the example below, the calculation interval is 500 ms. The RSRQ threshold between the center and edge area is 25. The RSRP Difference Threshold is set to 5. In DL and UL, 6 RBs are used by each cell in the edge sub-band. The power in the center area equals LteEnbPhy::TxPower - 0dB, and the power in the edge area equals LteEnbPhy::TxPower + 3dB:

lteHelper->SetFfrAlgorithmType("ns3::LteFfrDistributedAlgorithm");
lteHelper->SetFfrAlgorithmAttribute("CalculationInterval", TimeValue(MilliSeconds(500)));
lteHelper->SetFfrAlgorithmAttribute ("RsrqThreshold", UintegerValue (25));
lteHelper->SetFfrAlgorithmAttribute ("RsrpDifferenceThreshold", UintegerValue (5));
lteHelper->SetFfrAlgorithmAttribute ("EdgeRbNum", UintegerValue (6));
lteHelper->SetFfrAlgorithmAttribute ("CenterPowerOffset",
                UintegerValue (LteRrcSap::PdschConfigDedicated::dB0));
lteHelper->SetFfrAlgorithmAttribute ("EdgePowerOffset",
                UintegerValue (LteRrcSap::PdschConfigDedicated::dB3));

Automatic configuration

Frequency Reuse algorithms can also be configured in a more “automatic” way by setting only the bandwidth and the FrCellTypeId. During the initialization of the FR instance, the configuration for the given bandwidth and FrCellTypeId will be taken from a configuration table. It is important to note that only the sub-bands are configured this way; thresholds and transmission power are set to their default values. If desired, the thresholds and transmission power can still be changed as shown in the previous sub-section.

There are three FrCellTypeId values: 1, 2, 3, which correspond to three different configurations for each bandwidth. The three configurations allow different configurations to be used in neighbouring cells in a hexagonal eNB layout. If the user needs more distinct configurations for neighbouring cells, manual configuration has to be used.

The example below shows an automatic FR algorithm configuration:

lteHelper->SetFfrAlgorithmType("ns3::LteFfrSoftAlgorithm");
lteHelper->SetFfrAlgorithmAttribute("FrCellTypeId", UintegerValue (1));
NetDeviceContainer enbDevs = lteHelper->InstallEnbDevice (enbNodes.Get(0));

Example Programs

The directory src/lte/examples/ contains some example simulation programs that show how to simulate different LTE scenarios.

Reference scenarios

A vast number of reference LTE simulation scenarios can be found in the literature. Here we list some of them:

  • The system simulation scenarios mentioned in section A.2 of [TR36814].

  • The dual stripe model [R4-092042], which is partially implemented in the example program src/lte/examples/lena-dual-stripe.cc. This example program features a lot of configurable parameters which can be customized by changing the corresponding global variables. To get a list of all these global variables, you can run this command:

    ./waf --run lena-dual-stripe --command-template="%s --PrintGlobals"
    

    The following subsection presents an example of running a simulation campaign using this example program.

Handover simulation campaign

In this subsection, we will demonstrate an example of running a simulation campaign using the LTE module of ns-3. The objective of the campaign is to compare the effect of each built-in handover algorithm of the LTE module.

The campaign will use the lena-dual-stripe example program. First, we have to modify the example program to produce the output that we need. On this occasion, we want to produce the number of handovers, the user average throughput, and the average SINR.

The number of handovers can be obtained by counting the number of times the HandoverEndOk handover trace is fired. The user average throughput can be obtained by enabling the RLC simulation output. Finally, the SINR can be obtained by enabling the PHY simulation output. The following sample code snippet shows one possible way to obtain the above:

void
NotifyHandoverEndOkUe (std::string context, uint64_t imsi,
                       uint16_t cellId, uint16_t rnti)
{
  std::cout << "Handover IMSI " << imsi << std::endl;
}

int
main (int argc, char *argv[])
{
  /*** SNIP ***/

  Config::Connect ("/NodeList/*/DeviceList/*/LteUeRrc/HandoverEndOk",
                   MakeCallback (&NotifyHandoverEndOkUe));

  lteHelper->EnablePhyTraces ();
  lteHelper->EnableRlcTraces ();
  Ptr<RadioBearerStatsCalculator> rlcStats = lteHelper->GetRlcStats ();
  rlcStats->SetAttribute ("StartTime", TimeValue (Seconds (0)));
  rlcStats->SetAttribute ("EpochDuration", TimeValue (Seconds (simTime)));

  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}

Then we have to configure the parameters of the program to suit our simulation needs. We are looking for the following assumptions in our simulation:

  • 7 sites of tri-sectored macro eNodeBs (i.e. 21 macrocells) deployed in hexagonal layout with 500 m inter-site distance.
  • Although lena-dual-stripe is originally intended for a two-tier (macrocell and femtocell) simulation, we will simplify our simulation to one-tier (macrocell) simulation only.
  • UEs are randomly distributed around the sites and attach to the network automatically using Idle mode cell selection. After that, the UEs roam the simulation environment at a movement speed of 60 km/h.
  • 50 seconds simulation duration, so UEs would have traveled far enough to trigger some handovers.
  • 46 dBm macrocell Tx power and 10 dBm UE Tx power.
  • EPC mode will be used because the X2 handover procedure requires it to be enabled.
  • Full-buffer downlink and uplink traffic, both in 5 MHz bandwidth, using TCP protocol and Proportional Fair scheduler.
  • Ideal RRC protocol.

Table lena-dual-stripe parameter configuration for handover campaign below shows how we configure the parameters of lena-dual-stripe to achieve the above assumptions.

lena-dual-stripe parameter configuration for handover campaign
Parameter name Value Description
simTime 50 50 seconds simulation duration
nBlocks 0 Disabling apartment buildings and femtocells
nMacroEnbSites 7 Number of macrocell sites (each site has 3 cells)
nMacroEnbSitesX 2 The macrocell sites will be positioned in a 2-3-2 formation
interSiteDistance 500 500 m distance between adjacent macrocell sites
macroEnbTxPowerDbm 46 46 dBm Tx power for each macrocell
epc 1 Enable EPC mode
epcDl 1 Enable full-buffer DL traffic
epcUl 1 Enable full-buffer UL traffic
useUdp 0 Disable UDP traffic and enable TCP instead
macroUeDensity 0.00002 Determines number of UEs (translates to 48 UEs in our simulation)
outdoorUeMinSpeed 16.6667 Minimum UE movement speed in m/s (60 kmph)
outdoorUeMaxSpeed 16.6667 Maximum UE movement speed in m/s (60 kmph)
macroEnbBandwidth 25 5 MHz DL and UL bandwidth
generateRem 1 (Optional) For plotting the Radio Environment Map

Some of the required assumptions are not available as parameters of lena-dual-stripe. In this case, we override the default attributes, as shown in Table Overriding default attributes for handover campaign below.

Overriding default attributes for handover campaign
Default value name Value Description
ns3::LteHelper::HandoverAlgorithm ns3::NoOpHandoverAlgorithm, ns3::A3RsrpHandoverAlgorithm, or ns3::A2A4RsrqHandoverAlgorithm Choice of handover algorithm
ns3::LteHelper::Scheduler ns3::PfFfMacScheduler Proportional Fair scheduler
ns3::LteHelper::UseIdealRrc 1 Ideal RRC protocol
ns3::RadioBearerStatsCalculator::DlRlcOutputFilename <run>-DlRlcStats.txt File name for DL RLC trace output
ns3::RadioBearerStatsCalculator::UlRlcOutputFilename <run>-UlRlcStats.txt File name for UL RLC trace output
ns3::PhyStatsCalculator::DlRsrpSinrFilename <run>-DlRsrpSinrStats.txt File name for DL PHY RSRP/SINR trace output
ns3::PhyStatsCalculator::UlSinrFilename <run>-UlSinrStats.txt File name for UL PHY SINR trace output

ns-3 provides many ways of passing configuration values into a simulation. In this example, we use command line arguments, appending the parameters and their values to the waf call when starting each individual simulation. So the waf calls for invoking our three simulations would look like this:

$ ./waf --run="lena-dual-stripe
  --simTime=50 --nBlocks=0 --nMacroEnbSites=7 --nMacroEnbSitesX=2
  --epc=1 --useUdp=0 --outdoorUeMinSpeed=16.6667 --outdoorUeMaxSpeed=16.6667
  --ns3::LteHelper::HandoverAlgorithm=ns3::NoOpHandoverAlgorithm
  --ns3::RadioBearerStatsCalculator::DlRlcOutputFilename=no-op-DlRlcStats.txt
  --ns3::RadioBearerStatsCalculator::UlRlcOutputFilename=no-op-UlRlcStats.txt
  --ns3::PhyStatsCalculator::DlRsrpSinrFilename=no-op-DlRsrpSinrStats.txt
  --ns3::PhyStatsCalculator::UlSinrFilename=no-op-UlSinrStats.txt
  --RngRun=1" > no-op.txt

$ ./waf --run="lena-dual-stripe
  --simTime=50 --nBlocks=0 --nMacroEnbSites=7 --nMacroEnbSitesX=2
  --epc=1 --useUdp=0 --outdoorUeMinSpeed=16.6667 --outdoorUeMaxSpeed=16.6667
  --ns3::LteHelper::HandoverAlgorithm=ns3::A3RsrpHandoverAlgorithm
  --ns3::RadioBearerStatsCalculator::DlRlcOutputFilename=a3-rsrp-DlRlcStats.txt
  --ns3::RadioBearerStatsCalculator::UlRlcOutputFilename=a3-rsrp-UlRlcStats.txt
  --ns3::PhyStatsCalculator::DlRsrpSinrFilename=a3-rsrp-DlRsrpSinrStats.txt
  --ns3::PhyStatsCalculator::UlSinrFilename=a3-rsrp-UlSinrStats.txt
  --RngRun=1" > a3-rsrp.txt

$ ./waf --run="lena-dual-stripe
  --simTime=50 --nBlocks=0 --nMacroEnbSites=7 --nMacroEnbSitesX=2
  --epc=1 --useUdp=0 --outdoorUeMinSpeed=16.6667 --outdoorUeMaxSpeed=16.6667
  --ns3::LteHelper::HandoverAlgorithm=ns3::A2A4RsrqHandoverAlgorithm
  --ns3::RadioBearerStatsCalculator::DlRlcOutputFilename=a2-a4-rsrq-DlRlcStats.txt
  --ns3::RadioBearerStatsCalculator::UlRlcOutputFilename=a2-a4-rsrq-UlRlcStats.txt
  --ns3::PhyStatsCalculator::DlRsrpSinrFilename=a2-a4-rsrq-DlRsrpSinrStats.txt
  --ns3::PhyStatsCalculator::UlSinrFilename=a2-a4-rsrq-UlSinrStats.txt
  --RngRun=1" > a2-a4-rsrq.txt

Some notes on the execution:

  • Notice that some arguments are not specified because they already match the default values. We also keep each handover algorithm's own default settings.
  • Note the file names of the simulation output, e.g. RLC traces and PHY traces; we have to make sure they are not overwritten by the next simulation run. In this example, we specify the names one by one using command line arguments.
  • The --RngRun=1 argument at the end sets the run number of the random number generator used in the simulation. We re-run the same simulations with different RngRun values, hence creating several independent replications of the same simulations, and then average the results obtained from these replications to achieve some statistical confidence (a programmatic alternative to --RngRun is sketched after the figure below).
  • We can add a --generateRem=1 argument to generate the files necessary for plotting the Radio Environment Map (REM) of the simulation. The result is Figure REM obtained from a simulation in handover campaign below, which can be produced by following the steps described in Section Radio Environment Maps. This figure also shows the positions of eNodeBs and UEs at the beginning of a simulation using RngRun = 1. Other values of RngRun may produce different UE positions.
_images/lte-handover-campaign-rem.png

REM obtained from a simulation in handover campaign
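
The run number can also be set programmatically instead of via the --RngRun command line argument. A minimal sketch using the RngSeedManager class from the ns-3 core module is shown below; the run number 2 is just an example of one replication:

#include "ns3/core-module.h"

using namespace ns3;

int
main (int argc, char *argv[])
{
  RngSeedManager::SetSeed (1); // keep the default seed
  RngSeedManager::SetRun (2);  // second independent replication
  // ... set up and run the simulation as usual ...
  return 0;
}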

After hours of running, the simulation campaign will eventually end. Next, we perform some post-processing on the produced simulation output to extract meaningful information from it.

In this example, we use GNU Octave to assist the processing of throughput and SINR data, as demonstrated in a sample GNU Octave script below:

% RxBytes is the 10th column
DlRxBytes = load ("no-op-DlRlcStats.txt") (:,10);
DlAverageThroughputKbps = sum (DlRxBytes) * 8 / 1000 / 50

% RxBytes is the 10th column
UlRxBytes = load ("no-op-UlRlcStats.txt") (:,10);
UlAverageThroughputKbps = sum (UlRxBytes) * 8 / 1000 / 50

% Sinr is the 6th column
DlSinr = load ("no-op-DlRsrpSinrStats.txt") (:,6);
% eliminate NaN values
idx = isnan (DlSinr);
DlSinr (idx) = 0;
DlAverageSinrDb = 10 * log10 (mean (DlSinr)) % convert to dB

% Sinr is the 5th column
UlSinr = load ("no-op-UlSinrStats.txt") (:,5);
% eliminate NaN values
idx = isnan (UlSinr);
UlSinr (idx) = 0;
UlAverageSinrDb = 10 * log10 (mean (UlSinr)) % convert to dB

As for the number of handovers, we can use simple shell scripting to count the number of occurrences of string “Handover” in the log file:

$ grep "Handover" no-op.txt | wc -l

Table Results of handover campaign below shows the complete statistics after post-processing every individual simulation run. The values shown are the averages of the results obtained from RngRun values 1, 2, 3, and 4.

Results of handover campaign
Statistics No-op A2-A4-RSRQ Strongest cell
Average DL system throughput 6 615 kbps 20 509 kbps 19 709 kbps
Average UL system throughput 4 095 kbps 5 705 kbps 6 627 kbps
Average DL SINR -0.10 dB 5.19 dB 5.24 dB
Average UL SINR 9.54 dB 81.57 dB 79.65 dB
Number of handovers per UE per second 0 0.05694 0.04771

The results show that having a handover algorithm in a mobility simulation improves both user throughput and SINR significantly. There is little difference between the two handover algorithms in this campaign scenario. It would be interesting to see their performance in different scenarios, such as scenarios with home eNodeB deployments.

Frequency Reuse examples

There are two examples demonstrating the functionality of the Frequency Reuse algorithms.

lena-frequency-reuse is a simple example with 3 eNBs in a triangle layout. There are 3 cell-edge UEs located in the center of this triangle and 3 cell-center UEs (one near each eNB). The user can also specify the number of randomly located UEs. An FR algorithm is installed in each eNB, and each eNB has a different FrCellTypeId, which means each eNB uses a different FR configuration. The user can run lena-frequency-reuse with 6 different FR algorithms: NoOp, Hard FR, Strict FR, Soft FR, Soft FFR and Enhanced FFR. To run a scenario with the Distributed FFR algorithm, the user should use lena-distributed-ffr. These two examples are very similar, but they were split because Distributed FFR requires EPC to be used, while the other algorithms do not.

To run lena-frequency-reuse with different Frequency Reuse algorithms, the user needs to specify the FR algorithm by overriding the default attribute ns3::LteHelper::FfrAlgorithm. An example command to run lena-frequency-reuse with the Soft FR algorithm is presented below:

$ ./waf --run "lena-frequency-reuse --ns3::LteHelper::FfrAlgorithm=ns3::LteFrSoftAlgorithm"

These examples also include functionality to generate a REM and a spectrum analyzer trace. The user can enable them by setting the generateRem and generateSpectrumTrace attributes.

The command to generate a REM for RB 1 in the data channel from the lena-frequency-reuse scenario with the Soft FR algorithm is presented below:

$ ./waf --run "lena-frequency-reuse --ns3::LteHelper::FfrAlgorithm=ns3::LteFrSoftAlgorithm
  --generateRem=true --remRbId=1"

The Radio Environment Map for Soft FR is presented in Figure REM for RB 1 obtained from lena-frequency-reuse example with Soft FR algorithm enabled below.

_images/lte-fr-soft-1-rem.png

REM for RB 1 obtained from lena-frequency-reuse example with Soft FR algorithm enabled

The command to generate a spectrum trace from the lena-frequency-reuse scenario with the Soft FFR algorithm is presented below (the Spectrum Analyzer position needs to be configured inside the script):

$ ./waf --run "lena-frequency-reuse --ns3::LteHelper::FfrAlgorithm=ns3::LteFfrSoftAlgorithm
  --generateSpectrumTrace=true"

An example spectrum analyzer trace is presented in figure Spectrum Analyzer trace obtained from lena-frequency-reuse example with Soft FFR algorithm enabled below. The Spectrum Analyzer was located near the eNB with FrCellTypeId 2. As can be seen, the different data channel subbands are transmitted with different power levels (according to the configuration), while the control channel is transmitted with uniform power across the entire system bandwidth.

_images/lte-ffr-soft-2-spectrum-trace.png

Spectrum Analyzer trace obtained from lena-frequency-reuse example with Soft FFR algorithm enabled. The Spectrum Analyzer was located near the eNB with FrCellTypeId 2.

lena-dual-stripe can also be run with a Frequency Reuse algorithm installed in all macro eNBs. The user needs to specify the FR algorithm by overriding the default attribute ns3::LteHelper::FfrAlgorithm. An example command to run lena-dual-stripe with the Hard FR algorithm is presented below:

$ ./waf --run="lena-dual-stripe
  --simTime=50 --nBlocks=0 --nMacroEnbSites=7 --nMacroEnbSitesX=2
  --epc=1 --useUdp=0 --outdoorUeMinSpeed=16.6667 --outdoorUeMaxSpeed=16.6667
  --ns3::LteHelper::HandoverAlgorithm=ns3::NoOpHandoverAlgorithm
  --ns3::LteHelper::FfrAlgorithm=ns3::LteFrHardAlgorithm
  --ns3::RadioBearerStatsCalculator::DlRlcOutputFilename=no-op-DlRlcStats.txt
  --ns3::RadioBearerStatsCalculator::UlRlcOutputFilename=no-op-UlRlcStats.txt
  --ns3::PhyStatsCalculator::DlRsrpSinrFilename=no-op-DlRsrpSinrStats.txt
  --ns3::PhyStatsCalculator::UlSinrFilename=no-op-UlSinrStats.txt
  --RngRun=1" > no-op.txt

An example command to generate a REM for RB 1 in the data channel from the lena-dual-stripe scenario with the Hard FR algorithm is presented below:

$ ./waf --run="lena-dual-stripe
  --simTime=50 --nBlocks=0 --nMacroEnbSites=7 --nMacroEnbSitesX=2
  --epc=0 --useUdp=0 --outdoorUeMinSpeed=16.6667 --outdoorUeMaxSpeed=16.6667
  --ns3::LteHelper::HandoverAlgorithm=ns3::NoOpHandoverAlgorithm
  --ns3::LteHelper::FfrAlgorithm=ns3::LteFrHardAlgorithm
  --ns3::RadioBearerStatsCalculator::DlRlcOutputFilename=no-op-DlRlcStats.txt
  --ns3::RadioBearerStatsCalculator::UlRlcOutputFilename=no-op-UlRlcStats.txt
  --ns3::PhyStatsCalculator::DlRsrpSinrFilename=no-op-DlRsrpSinrStats.txt
  --ns3::PhyStatsCalculator::UlSinrFilename=no-op-UlSinrStats.txt
  --RngRun=1 --generateRem=true --remRbId=1" > no-op.txt

Radio Environment Maps for RB 1, 10 and 20, generated from the lena-dual-stripe scenario with the Hard Frequency Reuse algorithm, are presented in the figures below. These RBs were selected because each one is used by a different FR cell type.

_images/lte-fr-hard-1-rem.png

REM for RB 1 obtained from lena-dual-stripe simulation with Hard FR algorithm enabled

_images/lte-fr-hard-2-rem.png

REM for RB 10 obtained from lena-dual-stripe simulation with Hard FR algorithm enabled

_images/lte-fr-hard-3-rem.png

REM for RB 20 obtained from lena-dual-stripe simulation with Hard FR algorithm enabled

Troubleshooting and debugging tips

Many users post on the ns-3-users mailing list asking, for example, why they do not get any traffic in their simulation, or why they get only uplink but no downlink traffic, etc. In most cases, this is caused by a bug in the user's simulation program. Here are some tips to debug the program and find the cause of the problem.

The general approach is to selectively and incrementally enable the logging of relevant LTE module components, verifying upon each activation that the output is as expected; a minimal sketch of how to enable these components is shown after the list. In detail:

  • first check the control plane, in particular the RRC connection establishment procedure, by enabling the log components LteUeRrc and LteEnbRrc;
  • then check packet transmissions on the data plane, starting by enabling the log components LteUeNetDevice and EpcSgwPgwApplication, then EpcEnbApplication, then moving down the LTE radio stack (PDCP, RLC, MAC, and finally PHY), until you find where packets stop being processed / forwarded.
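
A minimal sketch of this incremental approach, using only the log component names mentioned above and enabling them from within the user's simulation program, could look like the following (the NS_LOG environment variable can be used to obtain the same effect without recompiling):

#include "ns3/core-module.h"

using namespace ns3;

int
main (int argc, char *argv[])
{
  // Step 1: control plane, i.e. the RRC connection establishment procedure.
  LogComponentEnable ("LteUeRrc", LOG_LEVEL_INFO);
  LogComponentEnable ("LteEnbRrc", LOG_LEVEL_INFO);

  // Step 2 (enable only once step 1 looks correct): data plane entry points,
  // then move down the radio stack (PDCP, RLC, MAC, PHY components) until
  // you find where packets stop being processed or forwarded.
  // LogComponentEnable ("LteUeNetDevice", LOG_LEVEL_ALL);
  // LogComponentEnable ("EpcSgwPgwApplication", LOG_LEVEL_ALL);
  // LogComponentEnable ("EpcEnbApplication", LOG_LEVEL_ALL);

  // ... rest of the simulation script ...
  return 0;
}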

Testing Documentation

Overview

To test and validate the ns-3 LTE module, several test suites are provided which are integrated with the ns-3 test framework. To run them, you need to have configured the build of the simulator in this way:

$ ./waf configure --enable-tests --enable-modules=lte --enable-examples
$ ./test.py

The above will run not only the test suites belonging to the LTE module, but also those belonging to all the other ns-3 modules on which the LTE module depends. See the ns-3 manual for generic information on the testing framework.

You can get a more detailed report in HTML format in this way:

$ ./test.py -w results.html

After the above command has run, you can view the detailed result for each test by opening the file results.html with a web browser.

You can run each test suite separately using this command:

$ ./test.py -s test-suite-name

For more details about test.py and the ns-3 testing framework, please refer to the ns-3 manual.

Description of the test suites

Unit Tests
E-UTRA Absolute Radio Frequency Channel Number (EARFCN)

The test suite lte-earfcn checks that the carrier frequency computed by the LteSpectrumValueHelper class (which implements the LTE spectrum model) complies with [TS36101], where the E-UTRA Absolute Radio Frequency Channel Number (EARFCN) is defined. The test vector for this test suite comprises a set of EARFCN values and the corresponding carrier frequencies calculated by hand following the specification of [TS36101]. The test passes if the carrier frequency returned by LteSpectrumValueHelper is the same as the known value for each element in the test vector.
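
For reference, the downlink EARFCN-to-carrier-frequency mapping defined in [TS36101] has the form below, where N_{DL} is the downlink EARFCN and F_{DL,low} and N_{Offs-DL} are per-band constants taken from the specification (the uplink mapping is analogous):

F_{DL} = F_{DL,low} + 0.1 (N_{DL} - N_{Offs-DL})   [MHz]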

System Tests
Dedicated Bearer Deactivation Tests

The test suite lte-test-deactivate-bearer creates a test case with a single eNodeB and three UEs. Each UE has one default and one dedicated EPS bearer with the same bearer specification but different ARP. The test case flow is as follows: attach UE -> create default + dedicated bearer -> deactivate one of the dedicated bearers.

The test case deactivates the dedicated bearer with bearer ID 2 (LCID = BearerId + 2) of the first UE (UE_ID=1). The user can schedule bearer deactivation after a specific time delay using the Simulator::Schedule () method, as in the sketch below.
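
A minimal sketch of such a scheduled deactivation is shown below; it assumes an LteHelper-based script in which lteHelper, ueDevs and enbDevs (the installed UE and eNB device containers) already exist, and that deactivation is requested through the LteHelper::DeActivateDedicatedEpsBearer method:

// Illustrative sketch (assumed variable names): deactivate the dedicated
// bearer with bearer ID 2 of the first UE at t = 1.5 s.
uint8_t bearerId = 2;
Simulator::Schedule (Seconds (1.5),
                     &LteHelper::DeActivateDedicatedEpsBearer,
                     lteHelper, ueDevs.Get (0), enbDevs.Get (0), bearerId);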

Once the test case execution ends, it creates DlRlcStats.txt and UlRlcStats.txt. The key fields that need to be checked in the statistics are:

| Start | End | Cell ID | IMSI | RNTI | LCID | TxBytes | RxBytes |

The test case executes in three epochs:

  1. In the first epoch (0.04 s - 1.04 s), all UEs and their corresponding bearers get attached, and packet flow over the dedicated bearers is activated.
  2. In the second epoch (1.04 s - 2.04 s), bearer deactivation is initiated; hence, the user sees relatively fewer TxBytes for UE_ID=1 and LCID=4 compared to the other bearers.
  3. In the third epoch (2.04 s - 3.04 s), since the bearer deactivation of UE_ID=1 and LCID=4 is completed, the user will not see any logging related to LCID=4.

The test case passes if and only if:

  1. IMSI=1 and LCID=4 are completely removed in the third epoch, and
  2. no packets are seen in TxBytes and RxBytes for IMSI=1 and LCID=4.

If the above criteria are not met, the test case is considered failed.

Adaptive Modulation and Coding Tests

The test suite lte-link-adaptation provides system tests recreating a scenario with a single eNB and a single UE. Different test cases are created corresponding to different SNR values perceived by the UE. The aim of the test is to check that in each test case the chosen MCS corresponds to some known reference values. These reference values are obtained by re-implementing in Octave (see src/lte/test/reference/lte_amc.m) the model described in Section Adaptive Modulation and Coding for the calculation of the spectral efficiency, and determining the corresponding MCS index by manually looking up the tables in [R1-081483]. The resulting test vector is represented in Figure Test vector for Adaptive Modulation and Coding.

The MCS used by the simulator is determined from the tracing output produced by the scheduler after 4 ms (this is needed to account for the initial delay in CQI reporting). The SINR calculated by the simulator is also obtained using the LteChunkProcessor interface. The test passes if both of the following conditions are satisfied:

  1. the SINR calculated by the simulator corresponds to the SNR of the test vector within an absolute tolerance of 10^{-7};
  2. the MCS index used by the simulator exactly corresponds to the one in the test vector.
_images/lte-mcs-index.png

Test vector for Adaptive Modulation and Coding

Inter-cell Interference Tests

The test suite lte-interference provides system tests recreating an inter-cell interference scenario with two eNBs, each having a single UE attached to it and employing Adaptive Modulation and Coding both in the downlink and in the uplink. The topology of the scenario is depicted in Figure Topology for the inter-cell interference test. The d_1 parameter represents the distance of each UE to the eNB it is attached to, whereas the d_2 parameter represents the interferer distance. We note that the scenario topology is such that the interferer distance is the same for uplink and downlink; still, the actual interference power perceived will be different, because of the different propagation loss in the uplink and downlink bands. Different test cases are obtained by varying the d_1 and d_2 parameters.

_images/lte-interference-test-scenario.png

Topology for the inter-cell interference test

The test vectors are obtained by use of a dedicated Octave script (available in src/lte/test/reference/lte_link_budget_interference.m), which does the link budget calculations (including interference) corresponding to the topology of each test case, and outputs the resulting SINR and spectral efficiency. The spectral efficiency is then used to determine the corresponding MCS index, following the same procedure adopted for the Adaptive Modulation and Coding Tests. We note that the test vector contains separate values for uplink and downlink.

UE Measurements Tests

The test suite lte-ue-measurements provides system tests recreating an inter-cell interference scenario identical to the one defined for the lte-interference test suite. However, in this test the quantities to be tested are the RSRP and RSRQ measurements performed by the UE at two different points of the stack: the source, which is the UE PHY layer, and the destination, which is the eNB RRC.

The test vectors are obtained by the use of a dedicated Octave script (available in src/lte/test/reference/lte-ue-measurements.m), which does the link budget calculations (including interference) corresponding to the topology of each test case, and outputs the resulting RSRP and RSRQ. The obtained values are then used for checking the correctness of the UE measurements at the PHY layer. After that, they are converted according to the 3GPP formatting in order to check their correctness at the eNB RRC level.

UE measurement configuration tests

Besides the previously mentioned test suite, there are 3 other test suites for testing UE measurements: lte-ue-measurements-piecewise-1, lte-ue-measurements-piecewise-2, and lte-ue-measurements-handover. These test suites are more focused on the reporting trigger procedure, i.e. the correctness of the implementation of the event-based triggering criteria is verified here.

More specifically, the tests verify the timing and content of each measurement report received by the eNodeB. Each test case is a stand-alone LTE simulation, and the test case passes if the measurement report(s) occur only at the prescribed times and show the correct level of RSRP (RSRQ is not verified at the moment).

Piecewise configuration

The piecewise configuration aims to test a particular UE measurement configuration. The simulation script sets up the corresponding measurement configuration in the UE, which remains active throughout the simulation.

Since the reference values are precalculated by hand, several assumptions are made to simplify the simulation. Firstly, the channel is affected only by the path loss model (in this case, the Friis model is used). Secondly, the ideal RRC protocol is used, and layer 3 filtering is disabled. Finally, the UE moves in a predefined motion pattern between 4 distinct spots, as depicted in Figure UE movement trace throughout the simulation in piecewise configuration below. Therefore, the fluctuation of the measured RSRP can be determined more easily.

_images/ue-meas-piecewise-motion.png

UE movement trace throughout the simulation in piecewise configuration

The motivation behind the “teleport” between the predefined spots is to introduce a drastic change in RSRP level, which guarantees the triggering of the entering or leaving condition of the tested event. By using drastic changes, the test can be run within a shorter amount of time.

Figure Measured RSRP trace of an example Event A1 test case in piecewise configuration below shows the measured RSRP after layer 1 filtering by the PHY layer during the simulation with a piecewise configuration. Because layer 3 filtering is disabled, these are the exact values used by the UE RRC instance to evaluate the reporting trigger procedure. Notice that the values are refreshed every 200 ms, which is the default filtering period of the PHY layer measurement reports. The figure also shows the times when the entering and leaving conditions of an example instance of Event A1 (serving cell becomes better than threshold) occur during the simulation.

_images/ue-meas-piecewise-a1.png

Measured RSRP trace of an example Event A1 test case in piecewise configuration

Each reporting criterion is tested several times with different threshold/offset parameters. Some test scenarios also take hysteresis and time-to-trigger into account. Figure Measured RSRP trace of an example Event A1 with hysteresis test case in piecewise configuration depicts the effect of hysteresis in another example of Event A1 test.

_images/ue-meas-piecewise-a1-hys.png

Measured RSRP trace of an example Event A1 with hysteresis test case in piecewise configuration

Piecewise configuration is used in two test suites of UE measurements. The first one is lte-ue-measurements-piecewise-1, henceforth Piecewise test #1, which simulates 1 UE and 1 eNodeB. The other one is lte-ue-measurements-piecewise-2, which has 1 UE and 2 eNodeBs in the simulation.

Piecewise test #1 is intended to test the event-based criteria which are not dependent on the existence of a neighbouring cell. These criteria include Event A1 and A2. The other events are also briefly tested to verify that they are still working correctly (albeit not reporting anything) in the absence of any neighbouring cell. Table UE measurements test scenarios using piecewise configuration #1 below lists the scenarios tested in piecewise test #1.

UE measurements test scenarios using piecewise configuration #1
Test # Reporting Criteria Threshold/Offset Hysteresis Time-to-Trigger
1 Event A1 Low No No
2 Event A1 Normal No No
3 Event A1 Normal No Short
4 Event A1 Normal No Long
5 Event A1 Normal No Super
6 Event A1 Normal Yes No
7 Event A1 High No No
8 Event A2 Low No No
9 Event A2 Normal No No
10 Event A2 Normal No Short
11 Event A2 Normal No Long
12 Event A2 Normal No Super
13 Event A2 Normal Yes No
14 Event A2 High No No
15 Event A3 Zero No No
16 Event A4 Normal No No