Real World Application Integration

Project Background

The goal of this proposed project is to develop frameworks and strategies that enable people to integrate already existing code into the simulator. Depending on the application, this can be a straightforward or laborious process, but it should still be preferable in many cases to rewriting these protocols from scratch, and hopefully this project can come up with techniques to make these ports even easier.

Here are some initial pointers to how this problem has been approached in the past:

GSoC Project

Liu Jian (liujatp@gmail.com) started this GSoC project in April 2008.

Abstract

The purpose of the project is to develop frameworks and strategies that enable people to integrate already existing code into the ns-3 simulator. This will be accomplished by integrating Quagga, a routing daemon that implements many useful routing protocols. The project will begin by studying the experience gained when Quagga was ported to the INET simulator, and will then port Quagga to ns-3 through a set of patches. With the porting work properly summarized, documented, and structured, an adaptation layer, or at least a set of methodologies for ns-3, will be produced, through which the next person can port other real-world applications much more easily.

Project Plan

  • Look at Quagga to identify the system calls it uses: functions like socket, time, signal, etc.
  • Implement these functions as simu_* in the ns-3-simu tree.
  • Port Quagga to ns-3.

Quagga porting

Status

  • Listed all the system functions that Quagga calls; about 30 of the roughly 150 functions need to be implemented in ns-3-simu.

After investigation, the functions fall into 4 categories.

 1,sockets:
 accept;bind;close;connect;listen;recv;recvfrom;recvmsg;send;sendmsg;sendto;socket;
 getaddrinfo;freeaddrinfo;gai_strerror;getservbyname;getsockname;getsockopt;setsockopt,etc.
                
 2,time:
 ctime;gettimeofday;gmtime;localtime;mktime;strftime;time,etc.
            
 3,signal&thread&process:
 exit;fork;getuid;geteuid;getpid;setpgid;setregid;setreuid;abort;kill;prctl;shutdown;
 sigaction;sigfillset;getgroups;setgroups;sysconf;waitpid,select(2),etc.
                                     
 4,others: 
 daemon;access;openlog;closelog;execv;getrusage;hostperror
 ZCMSG_FIRSTHDR(__cmsg_nxthdr),etc.
  • To support socket(AF_NETLINK, XX, NETLINK_ROUTE), we need to implement a NetlinkSocket in ns-3 that supports the NETLINK_ROUTE protocol for the Quagga port.

Through reading the kernel code, the libnl code [1] and RFC 3549 [2], the principle of the netlink socket became basically clear. A draft implementation has now been completed, but it needs more testing. The files are in my repo [3] (not merged yet):

 src/node/netlink-socket.cc                                 
 src/node/netlink-socket-address.cc
 src/node/netlink-socket-factory.cc
 src/node/netlink-attribute.cc
 src/node/netlink-message.cc
 src/node/netlink-message-route.cc (for NETLINK_ROUTE)

test files:

 src/node/netlink-socket-test.cc
 example/simple-netlink-socket.cc
  • Testing NetlinkSocket (porting libnl to ns-3)

To confirm that the NetlinkSocket works well with a real-world application, the current work is porting libnl to run against the synchronous POSIX sockets API of ns-3-simu. Several simu_xxx functions will have to be supported for libnl, e.g. sendto/sendmsg/recvfrom/recvmsg.


  • Next: implement these simu_xxx functions (this will be a large job)

Started with simple server/client demo code [4] running in ns-3-simu to test the simu_* APIs; a hypothetical sketch of one such wrapper follows.
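As a purely hypothetical illustration of the adaptation layer, one simu_* wrapper could look roughly like the sketch below: it maps the application's file descriptor onto the ns-3 Socket of the simulated process and forwards the call. LookupSimuSocket and ToNs3Address are invented helpers for this sketch, not the actual ns-3-simu API.

 // Hypothetical sketch, not the real ns-3-simu code: forward a POSIX-style
 // sendto() to the ns-3 Socket backing the simulated file descriptor.
 #include <sys/socket.h>
 #include "ns3/packet.h"
 #include "ns3/socket.h"
 using namespace ns3;
 
 extern Ptr<Socket> LookupSimuSocket (int fd);                      // invented helper
 extern Address ToNs3Address (const struct sockaddr *, socklen_t); // invented helper
 
 ssize_t simu_sendto (int fd, const void *buf, size_t len, int flags,
                      const struct sockaddr *to, socklen_t tolen)
 {
   Ptr<Socket> sock = LookupSimuSocket (fd);    // fake fd -> ns-3 Socket
   Ptr<Packet> p = Create<Packet> ((const uint8_t *) buf, (uint32_t) len);
   return sock->SendTo (p, flags, ToNs3Address (to, tolen));
 }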


  • Last: port Quagga to ns-3 (may not be reached)

Detailed Schedule

  • ~-4.30 read and compile the Quagga source code, list the simu_xxx sys-calls;
  • 5.5~5.9 read the zebra codebase, get a view of the code structure and running mechanism;
  • 5.12~5.19 read kernel code, libnl code, and RFC 3549, get a basic view of the netlink socket;
  • 5.20~6.20 implement a draft netlink socket which supports the NETLINK_ROUTE protocol;
  • 6.23~6.30 ns-3 internal testing for the NetlinkSocket class in netlink-socket-test.cc;
  • 7.1~7.9 port libnl to ns-3-simu, at src/libnl;
  • 7.10~7.16 test libnl on top of ns-3 (also with the midterm evaluation);
  • ......

NetlinkSocket API

  • NetlinkSocket class

It is a subclass of Socket, and it has APIs similar to the UDP/TCP sockets. Other related classes: NetlinkSocketAddress, NetlinkSocketFactory, NetlinkSocketHelper.

This netlink socket acts as kernel-space code: it handles messages coming from user space, then sends messages back to user space. The NetlinkSocket contains some private functions to do this kernel-like job, e.g. handlemessage(), unicastmessage(); more functions are in src/node/netlink-socket.h. A usage sketch follows.
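For orientation, here is a hedged usage sketch that assumes the factory follows the usual ns-3 socket pattern; the TypeId string, the header name, and the implicit NetlinkSocketAddress-to-Address conversion are assumptions, since this code is not merged yet.

 // Hedged sketch: create a NetlinkSocket through its factory, the same way
 // the UDP/TCP sockets are created (TypeId string is an assumption).
 #include "ns3/node.h"
 #include "ns3/socket.h"
 #include "ns3/netlink-socket-address.h"   // assumed header for the .cc above
 using namespace ns3;
 
 void CreateAndBindNetlinkSocket (Ptr<Node> node)
 {
   TypeId tid = TypeId::LookupByName ("ns3::NetlinkSocketFactory");
   Ptr<Socket> sock = Socket::CreateSocket (node, tid);
   NetlinkSocketAddress addr;       // plays the role of struct sockaddr_nl
   sock->Bind (addr);               // the 'user space' end is now addressable
   // sock->Send () / sock->Recv () then behave like the UDP/TCP socket APIs
 }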

  • NetlinkMessage class

It represents the real-world netlink message body, with a message header and payload. For message serialization/deserialization we designed NetlinkMessage as a subclass of the Header class, so that it works through Header's Serialize/Deserialize API; a Ptr<Packet> can then add/remove the message body as a header.

Keeping the byte layout consistent between ns-3 space and real-world application space makes serialization and deserialization the critical part; by basing them on the API functions of the ns-3 Header subclasses, this problem was easily solved (see the sketch below).
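A minimal sketch of the round trip, assuming NetlinkMessage exposes the usual Header interface (the class itself lives in the unmerged repo above):

 // Because NetlinkMessage derives from ns3::Header, a message body moves in
 // and out of a Packet like any other header.
 #include "ns3/packet.h"
 #include "ns3/netlink-message.h"          // assumed header for the .cc above
 using namespace ns3;
 
 void RoundTrip (NetlinkMessage &msg)
 {
   Ptr<Packet> packet = Create<Packet> ();
   packet->AddHeader (msg);         // invokes NetlinkMessage::Serialize ()
   NetlinkMessage copy;
   packet->RemoveHeader (copy);     // invokes NetlinkMessage::Deserialize ()
 }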

  • MultipartNetlinkMessage class

To be compatible with dumping multipart netlink messages, the MultipartNetlinkMessage class takes over NetlinkMessage's position on the packet, and NetlinkMessage becomes a member of it. On the receiving side, a real-world application walks such a reply as sketched below.
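For reference, this is the standard loop from man 7 netlink that a real-world receiver uses to consume a multipart reply (flagged NLM_F_MULTI): iterate with NLMSG_OK/NLMSG_NEXT until the kernel's NLMSG_DONE message arrives.

 /* Standard multipart-reply loop from man 7 netlink. */
 #include <linux/netlink.h>
 
 void WalkMultipart (struct nlmsghdr *nh, int len)
 {
   for (; NLMSG_OK (nh, len); nh = NLMSG_NEXT (nh, len))
     {
       if (nh->nlmsg_type == NLMSG_DONE)
         break;                     /* end of the multipart dump */
       if (nh->nlmsg_type == NLMSG_ERROR)
         return;                    /* struct nlmsgerr is in the payload */
       /* otherwise process NLMSG_DATA (nh) */
     }
 }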

  • payload messages

To support NETLINK_ROUTE, three types of payload route messages are provided: InterfaceAddressMessage (for RTM_XXX_ADDRESS), InterfaceInfoMessage (for RTM_XXX_LINK) and RouteMessage (for RTM_XXX_ROUTE), besides the protocol-independent NetlinkErrorMessage.

  • NetlinkAttribute class

It represents the payload message's attributes, which have the TLV structure (see the layout below).
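For reference, the TLV layout modeled here matches the kernel's struct nlattr from <linux/netlink.h>: a 4-byte header carrying length and type, followed by the padded payload.

 /* The kernel's attribute TLV layout, as defined in <linux/netlink.h>: */
 #include <linux/types.h>
 
 struct nlattr
 {
   __u16 nla_len;    /* total length: 4-byte header + payload */
   __u16 nla_type;   /* attribute type, e.g. IFA_ADDRESS */
 };
 /* The payload (nla_len - NLA_HDRLEN bytes) follows the header, padded to a
    NLA_ALIGNTO (4-byte) boundary, so attributes can be walked generically. */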


porting libnl

The goal is to move the libnl source files to ns-3, let them run on the ns-3-simu APIs, and test that the netlink socket works well with a real-world application.

For now the implemented netlink socket only partly supports the NETLINK_ROUTE protocol, so we port just part of the source code to ns-3-simu.

About Quagga

basic knowledge

Quagga is a routing suite of 5 routing protocols (RIP, RIPng, OSPFv2, OSPFv3, BGP) based on Zebra; they can run simultaneously or separately. The Zebra layer contains what is known as the "Routing Information Base" or RIB. Zebra is responsible for maintaining the RIB and for writing routes from the RIB into the kernel forwarding table.

Quagga is planned to use a multi-threaded mechanism when it runs on a kernel that supports multiple threads. There may be several protocol-specific routing daemons alongside zebra, the kernel routing manager. The ripd daemon handles the RIP protocol, while ospfd is a daemon supporting OSPF version 2, and bgpd supports the BGP-4 protocol. For changing the kernel routing table and for redistributing routes between different routing protocols, there is the kernel routing table manager, the zebra daemon. For the Quagga system architecture, see http://www.quagga.net/docs/docs-info.php#SEC9.

See more information here: http://www.quagga.net/docs/docs-info.php

netlink socket introduction

Netlink socket is a special IPC used for transferring information between kernel and user-space processes. It provides a full-duplex communication link between the two by way of standard socket APIs for user-space processes and a special kernel API for kernel modules. Netlink socket uses the address family AF_NETLINK, as compared to AF_INET used by TCP/IP socket. Each netlink socket feature defines its own protocol type in the kernel header file include/linux/netlink.h.

The following is a subset of features and their protocol types currently supported by the netlink socket:

  • NETLINK_ROUTE: communication channel between user-space routing dæmons, such as BGP, OSPF, RIP and kernel packet forwarding module. User-space routing dæmons update the kernel routing table through this netlink protocol type.
  • NETLINK_FIREWALL: receives packets sent by the IPv4 firewall code.
  • NETLINK_NFLOG: communication channel for the user-space iptable management tool and kernel-space Netfilter module.
  • NETLINK_ARPD: for managing the arp table from user space.

Here, in Quagga, a netlink socket with NETLINK_ROUTE is used. About NETLINK_ROUTE, see man 7 rtnetlink.

Netlink socket provides a BSD socket-style API that is easy to understand. The standard socket APIs (socket(), bind(), sendmsg(), recvmsg() and close()) can be used by user-space applications to access netlink sockets.

  • socket(AF_NETLINK, int type, NETLINK_ROUTE): the type is either SOCK_RAW or SOCK_DGRAM, because netlink is a datagram-oriented service.
  • bind(fd, (struct sockaddr*)&nladdr, sizeof(nladdr)): the netlink address structure is struct sockaddr_nl.
  • sendmsg (sock, &msg, 0)
  • recvmsg (sock, &msg, 0)
  • close (sock)

The APIs are all standard calls, but the netlink socket requires its own message header as well: struct nlmsghdr. A sending application must supply this header in each netlink message, and a receiving application needs to allocate a buffer large enough to hold netlink message headers and message payloads. A minimal sketch follows.
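As a concrete sketch distilled from man 7 netlink and man 7 rtnetlink (error handling omitted for brevity), this is roughly what the user-space side looks like: open, bind, then send one dump request whose nlmsghdr the application fills in itself.

 /* Minimal user-space sketch: open a NETLINK_ROUTE socket and request a
    dump of all interfaces (after man 7 netlink / man 7 rtnetlink). */
 #include <string.h>
 #include <unistd.h>
 #include <sys/socket.h>
 #include <linux/netlink.h>
 #include <linux/rtnetlink.h>
 
 int main (void)
 {
   int fd = socket (AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
 
   struct sockaddr_nl sa;
   memset (&sa, 0, sizeof (sa));
   sa.nl_family = AF_NETLINK;             /* nl_pid 0: kernel assigns an id */
   bind (fd, (struct sockaddr *) &sa, sizeof (sa));
 
   struct
   {
     struct nlmsghdr nh;                  /* the mandatory netlink header */
     struct rtgenmsg g;                   /* payload of the dump request */
   } req;
   memset (&req, 0, sizeof (req));
   req.nh.nlmsg_len = NLMSG_LENGTH (sizeof (req.g));
   req.nh.nlmsg_type = RTM_GETLINK;       /* dump the interface (link) table */
   req.nh.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP;
   req.g.rtgen_family = AF_UNSPEC;
 
   send (fd, &req, req.nh.nlmsg_len, 0);  /* reply comes back as a multipart dump */
   close (fd);
   return 0;
 }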

For detailed information and example code, see man 7 netlink.

some notes

1, First, even if we do not care how each individual protocol runs, we can get some idea of the main code structure here, which will be useful for the port to ns-3.

  • main structure. The main thread maintains a thread_master, which contains all the 'threads' triggered by events, timers, I/O, background jobs, etc.; when all the initial work is finished, the whole application runs in event-driven mode like below:
    main () {
        init_work ();              /* daemon-specific initialization */
        while (fetch_thread ())    /* fetch the highest-priority ready 'thread' */
            call_thread ();        /* run its callback to completion */
    }
  • thread. Actually only one real thread is running at this time. It uses the POSIX signal handler functions, the application-defined event functions, the many timer functions that the base protocol application needs, and the select(2) system call for monitoring I/O operations, to multiplex the events. The 'threads' mentioned above are not the threads we commonly talk about; here the term covers all the kinds of event functions just listed. Each event function is passed as a function pointer, stored in the thread_master, and called in sequence by the main thread function according to its priority. The threads here are therefore synchronous, which is different from the threads of a typical real-world application.
  • multithread mechanism in quagga. As a real-world application, Quagga deals with POSIX signals from the system kernel, I/O events from I/O devices, application-level events from its own code, and timer functions that keep the application running normally. These kinds of events are seemingly asynchronous to each other, which makes the program look like a multithreaded application. But Quagga uses a creative mechanism to avoid multithreading: a thread_master is maintained in the main thread, which stores all the kinds of events mentioned above in the form of a sigmaster (discussed below), a timer list, I/O (read, write) lists, an event list, etc. In each pass of the main loop, only the one thread with the highest priority is called, which has some similarity to the ns-3 simulator's event-scheduling model (the priority order: POSIX signal events > application-level events > timer events > I/O events > background-timer events). So the whole application runs as a single real thread, avoiding the resource-sharing and asynchronous-control problems of a multithreaded application.
  • signal mechanism in quagga. As we know, Quagga can receive all signals from the system and from I/O, but it only defines handlers for the signals it finds useful and traps the others, avoiding a large system-signal response. Usually a POSIX signal can be raised at any time by the kernel, and its handler runs on the kernel's execution stack and context, asynchronously to the main thread. To avoid this asynchrony, Quagga uses a sigmaster, which stores the signal information instead of calling the handler function immediately when a POSIX signal is raised. The main thread then checks all the signals triggered during the last cycle out of the sigmaster and calls their handler functions. So the POSIX signals are processed synchronously in the whole application; a sketch of this pattern follows the list.
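A minimal sketch of this deferred-signal pattern (our own illustration, not quagga's actual code): the asynchronous handler only records that the signal fired, and the main loop dispatches it synchronously once per cycle.

 /* Illustration of the sigmaster idea: trap now, handle synchronously later. */
 #include <signal.h>
 #include <string.h>
 
 static volatile sig_atomic_t got_sighup = 0;
 
 static void trap_sighup (int sig)    /* async-signal-safe: just record it */
 {
   (void) sig;
   got_sighup = 1;
 }
 
 static void dispatch_signals (void)  /* called once per main-loop cycle */
 {
   if (got_sighup)
     {
       got_sighup = 0;
       /* run the real handler here, synchronously, in main-thread context */
     }
 }
 
 int main (void)
 {
   struct sigaction sa;
   memset (&sa, 0, sizeof (sa));
   sa.sa_handler = trap_sighup;
   sigfillset (&sa.sa_mask);          /* block other signals in the handler */
   sigaction (SIGHUP, &sa, NULL);
 
   for (;;)
     dispatch_signals ();             /* ...then fetch/call the next 'thread' */
 }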

2, started with Zebra.....