Real World Application Integration

Project Background

The goal of this proposed project is to develop frameworks and strategies that enable people to integrate already existing code into the simulator. Depending on the application, this can be a straightforward or a laborious process, but it should still be preferable in many cases to rewriting these protocols from scratch, and hopefully this project can come up with techniques to make such ports even easier.

Here are some initial pointers to how this problem has been approached in the past:

GSoC Project

Liu Jian (liujatp@gmail.com) started the GSoC project in April 2008.

Abstract

The purpose of the project is to develop frameworks and strategies that enable people to integrate already existing code into the ns-3 simulator. This will be accomplished by integrating Quagga, a routing daemon suite that implements many useful routing protocols. The project will begin by studying the experience gained when Quagga was ported to the INET simulator, and will then port Quagga to ns-3 through a set of patches. With the porting work summarized, documented and structured properly, the result will be an adaptation layer, or at least a set of methodologies, for ns-3, through which other real-world applications can be ported more easily by the next person.

Project Plan

  • Look at Quagga to identify the system calls it uses: functions like socket, time, signal, etc.
  • Implement these functions as simu_* in the ns-3-simu tree (a sketch of the idea follows this list).
  • Port Quagga to ns-3.
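
The plan above implies a thin porting shim: one simu_* wrapper per system call, with the ported sources redirected to the wrappers. Below is a minimal sketch in C; everything named simu_* here is hypothetical, since the actual ns-3-simu API was still being defined, and only the one-wrapper-per-call idea comes from the plan itself.

 /* Hypothetical porting shim: the ns-3-simu tree would provide these
  * implementations, backed by ns-3 sockets instead of the host kernel. */
 #include <sys/types.h>
 #include <sys/socket.h>

 int simu_socket (int domain, int type, int protocol);
 int simu_bind (int fd, const struct sockaddr *addr, socklen_t len);
 int simu_close (int fd);

 /* In the ported Quagga sources, the real calls could then be redirected
  * to the simulated ones, e.g. via macros in a common header that every
  * file includes: */
 #define socket(d, t, p)  simu_socket ((d), (t), (p))
 #define bind(f, a, l)    simu_bind ((f), (a), (l))
 #define close(f)         simu_close ((f))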

Status

  • Listed all the system functions that Quagga calls; about 30 of the roughly 150 functions in total need to be implemented in ns-3-simu.

After investigation, these functions fall into four types.

 1. sockets:
    accept; bind; close; connect; listen; recv; recvfrom; recvmsg; send; sendmsg; sendto; socket;
    getaddrinfo; freeaddrinfo; gai_strerror; getservbyname; getsockname; getsockopt; setsockopt; etc.

 2. time:
    ctime; gettimeofday; gmtime; localtime; mktime; strftime; time; etc.

 3. signal & thread & process:
    exit; fork; getuid; geteuid; getpid; setpgid; setregid; setreuid; abort; kill; prctl; shutdown;
    sigaction; sigfillset; getgroups; setgroups; sysconf; waitpid; select(2); etc.

 4. others:
    daemon; access; openlog; closelog; execv; getrusage; hostperror;
    ZCMSG_FIRSTHDR (__cmsg_nxthdr); etc.
  • next: implement these simu_xxx functions (this will be a big job)

Start with the simple server/client demo code from http://cs.ecs.baylor.edu/~donahoo/practical/CSockets/textcode.html, run it in ns-3-simu, and use it to test the simu_* APIs.
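
A natural first test is an echo client in the style of those demos, compiled against the porting layer. The sketch below assumes the simu_* functions mirror their POSIX counterparts' signatures; that is an assumption, not a documented ns-3-simu interface.

 /* Minimal TCP echo client, rewritten against the (assumed) simu_* API
  * so that it can run inside ns-3-simu. */
 #include <string.h>
 #include <sys/types.h>
 #include <sys/socket.h>
 #include <netinet/in.h>
 #include <arpa/inet.h>

 int simu_socket (int domain, int type, int protocol);
 int simu_connect (int fd, const struct sockaddr *addr, socklen_t len);
 ssize_t simu_send (int fd, const void *buf, size_t len, int flags);
 ssize_t simu_recv (int fd, void *buf, size_t len, int flags);
 int simu_close (int fd);

 int echo_once (const char *server_ip, unsigned short port, const char *msg)
 {
   struct sockaddr_in addr;
   char buf[256];
   int fd = simu_socket (AF_INET, SOCK_STREAM, IPPROTO_TCP);
   if (fd < 0)
     return -1;

   memset (&addr, 0, sizeof (addr));
   addr.sin_family = AF_INET;
   addr.sin_addr.s_addr = inet_addr (server_ip);
   addr.sin_port = htons (port);

   /* connect, send one message, read the echo back */
   if (simu_connect (fd, (struct sockaddr *) &addr, sizeof (addr)) < 0
       || simu_send (fd, msg, strlen (msg), 0) < 0
       || simu_recv (fd, buf, sizeof (buf), 0) < 0)
     {
       simu_close (fd);
       return -1;
     }
   simu_close (fd);
   return 0;
 }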

  • .......

Detailed Schedule

  • ~4.30: read and compile the Quagga source code; list the simu_xxx system calls.
  • 5.1~5.4: four days of vacation.
  • 5.5~5.9: read the zebra codebase to get a view of the code structure and its running mechanism.
  • 5.12~: work on a "netlink" socket for ns-3.

About Quagga

basic knowledge

Quagga is a routing suite of five routing protocols (RIP, RIPng, OSPFv2, OSPFv3, BGP) built on top of Zebra; they can be run simultaneously or separately. The Zebra layer contains what is known as the "Routing Information Base", or RIB. Zebra is responsible for maintaining the RIB and for writing routes from the RIB into the kernel forwarding table.

Quagga was planned to use a multi-threaded mechanism when running on a kernel that supports multiple threads. There may be several protocol-specific routing daemons running alongside zebra, the kernel routing manager. The ripd daemon handles the RIP protocol, ospfd is a daemon that supports OSPF version 2, and bgpd supports the BGP-4 protocol. For changing the kernel routing table and for redistributing routes between the different routing protocols, there is the kernel routing table manager, the zebra daemon. For the Quagga system architecture, see http://www.quagga.net/docs/docs-info.php#SEC9.

See more information here: http://www.quagga.net/docs/docs-info.php

Netlink Socket

A netlink socket is a special IPC mechanism used for transferring information between the kernel and user-space processes. It provides a full-duplex communication link between the two by way of the standard socket APIs for user-space processes and a special kernel API for kernel modules. Netlink sockets use the address family AF_NETLINK, as compared to the AF_INET used by TCP/IP sockets. Each netlink socket feature defines its own protocol type in the kernel header file include/linux/netlink.h.

The following is a subset of features and their protocol types currently supported by the netlink socket:

  • NETLINK_ROUTE: communication channel between user-space routing daemons, such as BGP, OSPF and RIP, and the kernel packet-forwarding module. User-space routing daemons update the kernel routing table through this netlink protocol type.
  • NETLINK_FIREWALL: receives packets sent by the IPv4 firewall code.
  • NETLINK_NFLOG: communication channel for the user-space iptables management tool and the kernel-space Netfilter module.
  • NETLINK_ARPD: for managing the ARP table from user space.

Here, in Quagga, a netlink socket with NETLINK_ROUTE is used.

Netlink sockets provide a BSD socket-style API that is easy to understand. The standard socket APIs (socket(), bind(), sendmsg(), recvmsg() and close()) can be used by user-space applications to access netlink sockets.

  • socket(AF_NETLINK, int type, NETLINK_ROUTE): the type is either SOCK_RAW or SOCK_DGRAM, because netlink is a datagram-oriented service.
  • bind(fd, (struct sockaddr*)&nladdr, sizeof(nladdr)): the netlink address structure is struct sockaddr_nl.
  • sendmsg(sock, &msg, 0)
  • recvmsg(sock, &msg, 0)
  • close(sock)
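
For concreteness, below is a minimal example of the first two steps (opening and binding a NETLINK_ROUTE socket), roughly as the zebra daemon does it; the subscription to RTMGRP_IPV4_ROUTE is just an illustrative choice.

 /* Open a NETLINK_ROUTE socket and bind it, also subscribing to
  * IPv4 route-change notifications. */
 #include <string.h>
 #include <unistd.h>
 #include <sys/socket.h>
 #include <linux/netlink.h>
 #include <linux/rtnetlink.h>

 int open_rtnetlink (void)
 {
   struct sockaddr_nl snl;
   int sock = socket (AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
   if (sock < 0)
     return -1;

   memset (&snl, 0, sizeof (snl));
   snl.nl_family = AF_NETLINK;
   snl.nl_pid = 0;                      /* 0: let the kernel assign our id */
   snl.nl_groups = RTMGRP_IPV4_ROUTE;   /* multicast group for route changes */

   if (bind (sock, (struct sockaddr *) &snl, sizeof (snl)) < 0)
     {
       close (sock);
       return -1;
     }
   return sock;
 }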

The APIs are all standard calls, but a netlink socket also requires its own message header, struct nlmsghdr. A sending application must supply this header in each netlink message, and a receiving application needs to allocate a buffer large enough to hold the netlink message headers and message payloads.
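
As an illustration of the header requirement, the following standard rtnetlink request asks the kernel to dump its IPv4 routing table: a struct nlmsghdr followed by the protocol-specific payload, here a struct rtgenmsg.

 /* Send an RTM_GETROUTE dump request over an already-bound
  * NETLINK_ROUTE socket (see open_rtnetlink () above). */
 #include <string.h>
 #include <sys/socket.h>
 #include <linux/netlink.h>
 #include <linux/rtnetlink.h>

 int request_route_dump (int sock)
 {
   struct
   {
     struct nlmsghdr nlh;               /* mandatory netlink header */
     struct rtgenmsg g;                 /* payload: address family selector */
   } req;
   struct sockaddr_nl snl;
   struct iovec iov = { &req, sizeof (req) };
   struct msghdr msg;

   memset (&req, 0, sizeof (req));
   req.nlh.nlmsg_len = NLMSG_LENGTH (sizeof (struct rtgenmsg));
   req.nlh.nlmsg_type = RTM_GETROUTE;
   req.nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP;  /* dump the whole table */
   req.nlh.nlmsg_seq = 1;
   req.g.rtgen_family = AF_INET;

   memset (&snl, 0, sizeof (snl));
   snl.nl_family = AF_NETLINK;          /* destination pid 0 == the kernel */

   memset (&msg, 0, sizeof (msg));
   msg.msg_name = &snl;
   msg.msg_namelen = sizeof (snl);
   msg.msg_iov = &iov;
   msg.msg_iovlen = 1;

   return sendmsg (sock, &msg, 0) < 0 ? -1 : 0;
 }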

For detailed information and example code, see man 7 netlink.

some notes

1. First, even without caring about how any one of the protocols runs, we can get some idea of the main code structure here, which will be useful for the port to ns-3.

  • main structure. The main thread maintains a thread_master, which contains all the 'threads' triggered by events, timers, I/O, background jobs, etc. Once all initialization work is finished, the whole application runs in an event-driven mode, like this:
   int main (void) {
       init_work ();              /* daemon-specific initialization */
       while (fetch_thread ())    /* pick the next ready 'thread' (event) */
           call_thread ();        /* run its handler, then loop again */
   }
  • thread. Actually only one real thread is running at any time. It uses POSIX signal handler functions, application-defined event functions, the many timer functions of the underlying protocol application, and the select(2) system call for monitoring I/O, multiplexing all of these events. The 'threads' mentioned above are not the threads we commonly talk about; here the term covers all the kinds of event functions just listed. Each event function is passed as a function pointer, stored in the thread_master, and called in sequence from the main thread according to its priority. The 'threads' here are therefore synchronous, which is different from the threads of a typical real-world application.
  • multithread mechanism in Quagga. As a real-world application, Quagga deals with POSIX signals from the kernel, I/O events from I/O devices, application-level events from its own code, and timer functions that keep the application running normally. These kinds of events are asynchronous with respect to each other, which makes Quagga look like a multi-threaded application. But Quagga uses a creative mechanism to avoid multithreading: a thread_master maintained in the main thread stores all the kinds of events mentioned above, in the form of a sigmaster (discussed below), a timer list, I/O (read/write) lists, an event list, etc. In the main loop, only the one 'thread' with the highest priority is called per iteration, which has some similarity to the ns-3 simulator's event-scheduling model. (The priority order is: POSIX signal events > application-level events > timer events > I/O events > background timer events.) So the whole application runs as a single real thread, avoiding the resource-sharing and asynchronous-control problems of a multi-threaded application.
  • signal mechanism in Quagga. Quagga can receive all signals from the system and I/O, but it defines handlers only for the useful signals and blocks the others, avoiding a flood of system signal responses. Usually a POSIX signal can be triggered at any time by the kernel, and its handler runs on the kernel's execution stack and context, asynchronously with respect to the main thread. To avoid this asynchronism, Quagga uses a sigmaster, which records that a signal was raised instead of calling the handler function immediately when a POSIX signal is delivered. The main thread then checks the sigmaster for all signals raised during the last cycle and calls their handler functions. So POSIX signals are processed synchronously in the whole application; a compressed sketch of this pattern follows the list.
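
The sketch below condenses the two mechanisms just described into one loop: the signal handler only records that a signal arrived, and the main loop replays recorded signals synchronously before dispatching the next event. It mirrors the sigmaster/thread_master design described above, but it is illustrative code, not an excerpt from Quagga's lib/thread.c.

 /* Event loop with synchronous signal handling, Quagga-style. */
 #include <signal.h>

 static volatile sig_atomic_t caught_sighup;

 static void record_sighup (int signo)
 {
   (void) signo;
   caught_sighup = 1;           /* only note the signal; no work in the handler */
 }

 static void reload_config (void)    { /* the "real" SIGHUP work */ }
 static int  fetch_next_event (void) { /* timers, I/O via select(2), ... */ return 0; }
 static void dispatch_event (int ev) { (void) ev; /* run the event's handler */ }

 int main (void)
 {
   struct sigaction sa;
   sa.sa_handler = record_sighup;
   sigfillset (&sa.sa_mask);
   sa.sa_flags = 0;
   sigaction (SIGHUP, &sa, NULL);

   for (;;)
     {
       if (caught_sighup)       /* replayed first: signals have top priority */
         {
           caught_sighup = 0;
           reload_config ();    /* handler work runs here, synchronously */
         }
       dispatch_event (fetch_next_event ());
     }
 }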

2. Start with Zebra.