Real World Application Integration

From Nsnam

Project Background

The goal of this proposed project is to develop frameworks and strategies that enable people to integrate already existing code into the simulator. Depending on the application, this can be a straightforward or laborious process, but still should be preferable in many cases to rewriting these protocols from scratch, and hopefully this project can come up with techniques to make these ports even easier.

Here are some initial pointers to how this problem has been approached in the past:

GSoC Project

Liu Jian started this GSoC project in April 2008.


The purpose of the project is to develop frameworks and strategies that enable people to integrate existing code into the ns-3 simulator. This will be accomplished by integrating Quagga, a routing daemon that implements many useful routing protocols. The project will begin by studying the earlier port of Quagga to the INET simulator, then port Quagga to ns-3 with a set of patches. Once the porting work is summarized, documented, and structured properly, an adaptation layer (or at least a set of methodologies) for ns-3 will emerge, through which the next person can port other real-world applications more easily.

Project Plan

  • look at quagga to identify the system calls it uses (functions such as socket, time, signal, etc.).
  • implement these functions as simu_* in the ns-3-simu tree.
  • port quagga to ns-3.

Quagga porting


  • listed all the system functions that quagga calls; about 30 of the roughly 150 functions need to be implemented in ns-3-simu.

After investigation, the required functions fell into four types.

  • to support socket(AF_NETLINK,XX,NETLINK_ROUTE), we need to implement a NetlinkSocket in ns-3 that supports the NETLINK_ROUTE protocol for the quagga port.

Through reading the kernel code, the libnl code [1], and RFC 3549 [2], the principle of the netlink socket became basically clear. A draft implementation has now been completed, which can exchange netlink route information between user space and kernel space the way the Linux kernel does, but it still needs more testing.

Files are in my repository [3] (not merged yet):

 src/node/netlink-route-types.h (for NETLINK_ROUTE)

test files:

  • testing NetlinkSocket (by porting libnl to ns-3)

libnl is a library built on the netlink socket; it provides users with many small programs to manage and query route information and more. Porting libnl was therefore suggested, to make it run on top of ns-3-simu, with several aims: testing the NetlinkMessage serialization/deserialization operations, learning how a real-world netlink socket is used, and improving my ns-3 NetlinkSocket code.

The port is now basically done, and some libnl test programs for netlink-route run successfully on top of ns-3-simu. As a real-world application, libnl uses the synchronous POSIX/sockets APIs and macro definitions from ns-3-simu/process in place of the real system calls and macros, which lets it run in the ns-3 simulation environment.

Meanwhile, some simu_xxx functions that libnl needs have been implemented, e.g. sendto/sendmsg/recvfrom/recvmsg; setsockopt and getsockname are in progress. Porting libnl improved the NetlinkSocket code a lot.

  • next: implement other simu_xxx functions (part of the ns-3-simu work)

This started with simple server/client demo code [4] running in ns-3-simu to test the simu_* APIs.

  • last: port quagga to ns-3 (may not be reached)

This is based on the ns-3-simu/process module; once the essential simu_xxx functions are all implemented, quagga can be ported and run on top of ns-3 the way libnl does.

Detailed Schedule

  • ~-4.30 read and compile the quagga source code, list the simu_xxx sys-calls;
  • 5.5~5.9 read the zebra codebase, get a view of the code structure and running mechanism.
  • 5.12~5.19 read kernel code, libnl code, and RFC 3549; get a basic view of the netlink socket.
  • 5.20~6.20 implement a draft netlink socket supporting the NETLINK_ROUTE protocol.
  • 6.23~6.30 ns-3 internal testing for the NetlinkSocket class at file
  • 7.1~7.11 port libnl to ns-3-simu, at src/porting.
  • 7.14~7.18 test libnl on top of ns-3 (mainly with NETLINK_ROUTE) and improve the netlink socket code.
  • 7.21~7.27 improve the NetlinkSocket code to support the multicast mechanism.
  • 7.28~8.15 port libnl and quagga with the elf loader.
  • 8.18~9.1 code review, cleanup and merge, documentation, etc.

NetlinkSocket API

  • NetlinkSocket class

It is a subclass of Socket and has APIs similar to the UDP/TCP sockets. Other related classes: NetlinkSocketAddress, NetlinkSocketFactory, NetlinkSocketHelper.

This netlink socket acts as the kernel-space side: it handles messages from user space, then sends messages back to user space. The NetlinkSocket currently contains some private functions to do this kernel-like job, e.g. handlemessage(), unicastmessage(); more functions are in src/node/netlink-socket.h.

  • NetlinkMessage class

It represents the real-world netlink message body, with a message header and payload. For message serialization/deserialization, we designed NetlinkMessage as a subclass of the Header class, which uses Header's serialize/deserialize API; a Ptr<Packet> can then add/remove the message body as a header.

Keeping the ns-3 space and the real-world application space consistent makes serialization and deserialization the critical part; based on the API functions of ns-3 Header subclasses, this problem was easily solved.

  • MultipartNetlinkMessage class

To be compatible with dumping multipart netlink messages, the MultipartNetlinkMessage class takes NetlinkMessage's place, and NetlinkMessage becomes a member of it.

  • payload messages

To support NETLINK_ROUTE, three types of payload route message are provided: InterfaceAddressMessage (for RTM_XXX_ADDRESS), InterfaceInfoMessage (for RTM_XXX_LINK), and RouteMessage (for RTM_XXX_ROUTE), besides the protocol-independent NetlinkErrorMessage.

  • NetlinkAttribute class

It represents the payload messages' attributes, which have a TLV (type-length-value) structure.

porting libnl

As noted above, libnl is a library built on the netlink socket. Mathieu suggested porting it to run on top of ns-3-simu; the goal is to move the libnl source files into ns-3, let them run against the ns-3-simu APIs, and test that the netlink socket works well with a real-world application.

Because libnl is a real-world application, I use the src/process simu module to handle it, which has the POSIX APIs and definitions. Files are at:

  • src/porting

and the steps:

  • define some macros to replace the real-world definitions with SIMU_XXXX (at src/porting/porting-types.h).
  • replace the real-world header files "#include<system-headfiles>" with simu header files "#include<simu_headfiles>".
  • replace the system calls xxx() with simu_xxx() in the libnl source files.

For these three steps, I added the simu_xxx() functions and SIMU_XXX macros that libnl needs to the process module; because the process module is itself still being developed by Mathieu, many were not yet supported.

The libnl programs are at src/porting/libnl/src/*.c; they are all real-world applications. The ns-3-simu/process module provides the mechanism to run programs on top of ns-3, so for the port I created a program "libnl-test" (in src/porting/) to run libnl at a well-defined ns-3 Node with a simple network topology.

For the nl-addr-dump program, change the entry point's name from main(int, char*[]) to nl_addr_dump(int, char*[]); then an ns-3 node can run it as a process.

It runs as:
./waf --shell
cd build/debug/src/porting
./libnl-test nl-addr-dump xml

It then dumps the current node's interface address information through the ns-3 NetlinkSocket.

The implemented netlink socket can now partly support the NETLINK_ROUTE protocol, so the ported test programs are all about this protocol.

There are 8 programs, and all run normally:

nl_addr_dump, nl_link_dump, nl_route_dump, nl_addr_add, nl_addr_delete, nl_route_add, nl_route_delete, nl_route_get.

porting zebra

Zebra, the main daemon of Quagga, can run as a single program, so as a starting point I began porting this module to ns-3-simu.

Since Mathieu implemented the ElfLoader class to load libc functions dynamically, the previous function-mapping method of porting is no longer used. The steps are now:

  • build the real-world program with the options -fpie and -pie.
  • add the libc function definitions.
  • add simu_xxx functions if necessary.

With the very useful elf loader, a real-world application can easily run on top of ns-3 once all the libc functions it uses are defined.

In ns-test:lj/quagga-porting, my test program process-libnl can run libnl programs at an ns-3 node. Zebra's libc functions need more support; continuing....

About Quagga

basic knowledge

Quagga is a routing suite of 5 routing protocols (RIP, RIPng, OSPFv2, OSPFv3, BGP) based on Zebra; they can run simultaneously or separately. The Zebra layer contains what is known as the Routing Information Base, or RIB. Zebra is responsible for maintaining the RIB and for writing routes from the RIB into the kernel forwarding table.

Quagga is designed to use a multi-threaded mechanism when it runs on a kernel that supports multiple threads. There may be several protocol-specific routing daemons alongside zebra, the kernel routing manager. The ripd daemon handles the RIP protocol, ospfd is a daemon supporting OSPF version 2, and bgpd supports the BGP-4 protocol. For changing the kernel routing table and for redistributing routes between different routing protocols, there is the kernel routing table manager, the zebra daemon. For the Quagga system architecture, see here.


netlink socket introduction

Netlink socket is a special IPC used for transferring information between kernel and user-space processes. It provides a full-duplex communication link between the two by way of standard socket APIs for user-space processes and a special kernel API for kernel modules. Netlink socket uses the address family AF_NETLINK, as compared to AF_INET used by TCP/IP socket. Each netlink socket feature defines its own protocol type in the kernel header file include/linux/netlink.h.

The following is a subset of features and their protocol types currently supported by the netlink socket:

  • NETLINK_ROUTE: communication channel between user-space routing dæmons, such as BGP, OSPF and RIP, and the kernel packet-forwarding module. User-space routing dæmons update the kernel routing table through this netlink protocol type.
  • NETLINK_FIREWALL: receives packets sent by the IPv4 firewall code.
  • NETLINK_NFLOG: communication channel for the user-space iptable management tool and kernel-space Netfilter module.
  • NETLINK_ARPD: for managing the arp table from user space.

Here, in quagga, the netlink socket with NETLINK_ROUTE is used. About NETLINK_ROUTE, see man 7 rtnetlink.

Netlink socket provides a BSD socket-style API that is easy to understand. The standard socket APIs (socket(), bind(), sendmsg(), recvmsg() and close()) can be used by user-space applications to access the netlink socket.

  • socket(AF_NETLINK, int type, NETLINK_ROUTE): the type is either SOCK_RAW or SOCK_DGRAM, because netlink is a datagram-oriented service.
  • bind(fd, (struct sockaddr*)&nladdr, sizeof(nladdr)): the netlink address structure is struct sockaddr_nl.
  • sendmsg (sock, &msg, 0)
  • recvmsg (sock, &msg, 0)
  • close (sock)

The APIs are all standard calls, but the netlink socket requires its own message header as well: struct nlmsghdr. A sending application must supply this header in each netlink message, and a receiving application needs to allocate a buffer large enough to hold netlink message headers and message payloads.

Detail information and example code see man 7 netlink.

some notes

1, First, even if we do not care how each individual protocol runs, we can get some idea of the main code structure here, which will be useful for porting to ns-3.

  • main structure. The main thread maintains a thread_master, which contains all the 'threads' triggered by events, timers, I/O, background jobs, etc.; when all the initialization work is finished, the whole application runs in an event-driven mode.
  • thread. Actually, only one real thread runs at a time. It uses the POSIX signal handler functions, application-defined event functions, many timer functions of the underlying protocol application, and the select(2) system call to monitor I/O operations, multiplexing the events. The 'threads' mentioned above are not threads in the usual sense; here the term covers all the kinds of event functions just listed. An event function, passed as a function pointer and stored in the thread_master, is called in sequence by the main thread function depending on its priority. The threads here are synchronous, which differs from the threads of a typical real-world application.
  • multithread mechanism in quagga. As a real-world application, quagga deals with POSIX signals from the system kernel, I/O events from I/O devices, application-level events from its own code, and timer functions for maintaining the application. These kinds of events are asynchronous to each other, which makes quagga look like a multithreaded application. But quagga uses a creative mechanism to avoid multithreading: a thread_master maintained in the main thread stores all the kinds of events mentioned above in the form of a sigmaster (discussed below), a timer list, I/O (read, write) lists, an event list, etc. In the main loop, only the one thread with the highest priority is called in each iteration, which has some similarity to the ns-3 simulator's event-schedule model (the priority order: POSIX signal events > application-level events > timer events > I/O events > background-timer events). So the whole application runs as a single real thread, avoiding the resource-sharing and asynchronous-control problems of a multithreaded application.
  • signal mechanism in quagga. Quagga can receive all signals from the system and I/O, but it only defines handlers for some useful signals and traps the others, avoiding a large system signal response. Usually a POSIX signal can be triggered at any time by the kernel, using the kernel execution stack and context, asynchronously to the main thread. To avoid this asynchronism, quagga uses a sigmaster, which stores the signal information instead of calling the handler function immediately when a POSIX signal is triggered in the kernel. The main thread then checks all the signals triggered in the last cycle from the sigmaster and calls their handler functions. So the POSIX signals are processed synchronously in the whole application.

2, Getting started with Zebra.....