App Store Technical Requirements

From Nsnam
Revision as of 00:59, 15 February 2011 by Tomh (Talk | contribs) (Stage 1 Status)



Goals

The long-term goal is to move ns-3 to separate modules, for build and maintenance reasons. For build reasons, since ns-3 is becoming large, we would like to allow users to enable/disable subsets of the available model library. For maintenance reasons, it is important that we move to a development model where modules can evolve on different timescales and be maintained by different organizations.

An analogy is the GNOME desktop, which is composed of a number of individual libraries that evolve on their own timescales. A build framework called jhbuild exists for building and managing the dependencies between these disparate projects.

Once we have a modular build, and an ability to separately download and install third-party modules, we will need to distinguish the maintenance status or certification of modules. The ns-3 project will maintain a set of core ns-3 modules including those essential for all ns-3 simulations, and will maintain a master build file containing metadata about contributed modules; this will allow users to fetch and build what they need. Eventually, modules will have a maintenance state associated with them describing aspects such as who is the maintainer, whether it is actively maintained, whether it contains documentation or tests, whether it passed an ns-3 project code review, whether it is currently passing the tests, etc. The current status of all known ns-3 modules will be maintained in a database and be browsable on the project web site.

[Figure Maintenance-status-example.PNG: Mock-up of a future model status page (the models and colors shown are for example purposes only)]

The basic idea of the ns-3 app store would be to store on a server a set of user-submitted metadata which describes various source code packages. Typical metadata would include:

  • unique name
  • version number
  • last known good ns-3 version (tested against)
  • description
  • download url
  • untar/unzip command
  • configure command
  • build command
  • system prerequisites (if user needs to apt-get install other libraries)
  • list of ns-3 package dependencies (other ns-3 packages which this package depends upon)
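As a sketch only, such a metadata record might be represented as a Python dictionary consumed by download.py; every field name and value below is illustrative, not a committed format:

```python
# Illustrative sketch of one package's metadata record.  All field names
# and values here are hypothetical; the actual on-server format is an
# open design question.
wimin_metadata = {
    "name": "wimin",                         # unique name
    "version": "0.1",
    "ns3_tested_against": "3.10",            # last known good ns-3 version
    "description": "WiMin device module from Example Univ.",
    "download_url": "http://www.example.edu/wimin/wimin-0.1.tar.gz",
    "extract": "tar xzf wimin-0.1.tar.gz",   # untar/unzip command
    "configure": "./waf configure",
    "build": "./waf build",
    "system_prereqs": ["libxml2-dev"],       # apt-get installable prerequisites
    "ns3_dependencies": ["ns3-core", "wifi"],  # other ns-3 packages needed
}

def validate(meta):
    """Return the sorted list of required fields missing from a record."""
    required = {"name", "version", "download_url", "build"}
    return sorted(required - set(meta))
```

A validation step like this would let the server reject incomplete submissions before they reach the master build file.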

Requirements

Build and configuration

  • Enable optimized, debug, and static builds
  • Separate tests from models
  • Doxygen support
  • Test.py framework supports the modularization
  • Python bindings support the modularization
  • integrate lcov code coverage tests
  • integrate with buildbots

API and work flow

Assume that our download script is called download.py and our build script is called build.py.

  wget http://www.nsnam.org/releases/ns-allinone-3.x.tar.bz2
  bunzip2 ns-allinone-3.x.tar.bz2 && tar xvf ns-allinone-3.x.tar.bz2
  cd ns-allinone-3.x

In this directory, users will find the following directory layout:

  build.py download.py constants.py? VERSION LICENSE README

Download essential modules:

  ./download.py

This will leave a layout such as follows:

  download.py build.py pygccxml pybindgen ns-3.10 gcc-xml


For typical users, the next step is as follows:

  ./build.py

The above will take the following steps:

  • build all typical prerequisites such as pybindgen
  • cd ns-3.10
  • ./waf configure && ./waf && ./waf install

The above will install headers into build/debug/ns3/ and libraries for each of the enabled modules into build/debug/lib. Each enabled module will be placed in a separate library that has a name of the form libns3-simulator.so for the simulator module.

Open issue: How to express and honor platform limitations, such as only trying to download/build NSC on platforms supporting it.
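One hedged possibility for the open issue above is a per-module metadata field listing supported platforms, checked before any download is attempted; the "platforms" field name below is an assumption, not an existing convention:

```python
import sys

def platform_supported(meta, platform=None):
    """Return True if the module may be downloaded/built on this platform.

    'platforms' is a hypothetical metadata field.  A module without it
    (e.g. pure ns-3 code) is assumed to build everywhere; NSC, for
    instance, might declare platforms=['linux'].
    """
    platform = platform or sys.platform
    allowed = meta.get("platforms")
    if allowed is None:
        return True
    return any(platform.startswith(p) for p in allowed)
```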

Once the build process is done, there will be a bunch of libraries in a common build directory, and all applicable headers. Specifically, we envision for module simulator, there may be a libns3-simulator.so, libns3-simulator-test.so, a python .so, and possibly others.

Python

We have a few choices for supporting Python. First, note that this type of system provides an opportunity for the python build tools to be added as packages to the overall build system, so that users can more easily build their bindings. We can try to build lots of small python modules, or run a scan at the very end of the ns-3 build process to build a customized python ns3 module, such as:

  • python-scan.py
  • ...
  • pybindgen

which operates on the headers in the build/debug/ns3 directory and creates a python module .so library that matches the ns-3 configured components.
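A hypothetical sketch of that scan step: infer the enabled modules from the headers the build installed into build/debug/ns3/. The per-module <module>-module.h naming convention used here is an assumption for illustration:

```python
import os

def enabled_modules_from_headers(ns3_header_dir):
    """Sketch: list the enabled modules by scanning the installed headers.

    Assumes (hypothetically) that each enabled module installs an
    umbrella header named <module>-module.h into build/debug/ns3/.
    """
    modules = []
    for name in sorted(os.listdir(ns3_header_dir)):
        if name.endswith("-module.h"):
            modules.append(name[:-len("-module.h")])
    return modules
```

The resulting list could then be fed to pybindgen to generate a python module matching exactly the configured components.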

Open issue: What is the eventual python API? Should each module be imported separately (e.g. import ns.node), or should we try to go for a single ns3 python module? Or, we could continue to maintain bindings. Another consideration is that constantly generating bindings will slow down the builds.

Tests

Tests are run by running ./test.py, which knows how to find the available test libraries and run the tests.

Presently, test.py hardcodes the examples and samples; it needs to become smarter, learning which examples exist and need testing.
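A sketch of how test.py could discover examples instead of hardcoding them: walk the source tree for the per-module examples-to-run.py files described elsewhere on this page (the directory layout assumed here is illustrative):

```python
import os

def find_examples_to_run(src_root):
    """Sketch: collect every examples-to-run.py under the module tree,
    instead of hardcoding example names inside test.py itself."""
    found = []
    for dirpath, dirnames, filenames in os.walk(src_root):
        if "examples-to-run.py" in filenames:
            found.append(os.path.join(dirpath, "examples-to-run.py"))
    return sorted(found)
```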

Doxygen

Doxygen can be run with a command such as "build.py doxygen" on the build/debug/ns3 directory. We likely will not try to modularize the doxygen output, but will instead run it on the build/debug/ns3/ directory once all headers have been copied in.

Build flags

To configure different build options such as -g or -fprofile-arcs, use the CFLAGS environment variable.

Running programs

To run programs:

  ./build.py shell
  build/debug/examples/csma/csma-bridge

or

  python build/debug/examples/csma/csma-bridge.py

Other examples

Build all known ns-3 modules:

  ./build.py all

In the above case, suppose that the program could not locate and download a module. It can then interactively prompt the user: "Module foo not found; do you wish to continue [Y/n]?".
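A minimal sketch of that prompt (the ask parameter is injected here purely for testability; a "yes" default matches the [Y/n] convention):

```python
def confirm_continue(module, ask=input):
    """Sketch of the interactive prompt described above.

    Returns True unless the user explicitly answers 'n';
    an empty answer defaults to yes.
    """
    answer = ask("Module %s not found; do you wish to continue [Y/n]? " % module)
    return answer.strip().lower() != "n"
```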

Build the core of ns-3, plus device models WiFi and CSMA:

  ./build.py ns3-core wifi csma

ns3-core is a meta-module containing simulator, common, node, mobility, etc. Open issue: what is the right granularity for this module?
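A sketch of how such a meta-module might be expanded before the build; the dependency table below is illustrative only, since the right granularity is itself the open issue:

```python
# Illustrative meta-module table; the real contents of ns3-core are an
# open design question.
META_MODULES = {
    "ns3-core": ["simulator", "common", "node", "mobility"],
}

def expand_modules(requested):
    """Replace any meta-module by its constituent modules,
    preserving order and dropping duplicates."""
    out = []
    for m in requested:
        for name in META_MODULES.get(m, [m]):
            if name not in out:
                out.append(name)
    return out
```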

Example third-party ns-3 module

Suppose a research group at Example Univ. publishes a new WiMin device module. It gets a new unique module name from ns-3 project, such as wimin. It also contributes metadata to the master ns-3 build script.

Downloading

The "download.py" system should have enough metadata to fetch it, whether it is a tarball release or some other kind of release.

Open issue: How does user toggle the download behavior of a module? Is there a sticky "./download.py --disable-module=wimin" option? Or does user manually edit the module metadata file?

Open issue: Where does this module download to? Options include:

  1. ns-3-allinone/wimin
  2. ns-3-allinone/ns-3-dev/wimin
  3. ns-3-allinone/ns-3-dev/modules/wimin

In the first case above, the ns-3-allinone directory is flat, possibly mixing non-ns-3 modules with ns-3 modules (e.g. pygccxml, nam-1, aodv, olsr, simulator, click, ... would all be at the same directory level).

The second case has the advantage of keeping all ns-3-specific modules within an ns-3 subdirectory. This may better support a developer who has multiple ns-3 branches backed by "common" libraries such as nsc, pybindgen, etc., which do not need to change.

The third case is somewhat analogous to the present situation where we have ns-3-allinone/ns-3-dev/src/wimin.

Building

 ./build.py

at the top level should recurse and cause the wimin module to be built based on the provided (jhbuild) metadata.

At this top level, some global CFLAGS need to be in effect so that debug/optimized/static builds are performed consistently across modules.

Let's assume that the downloaded module looked like this:

 ns-3-allinone/ns-3-dev/wimin/model
                             /examples
                             /doc
                             /test
                             /helper
                             /bindings

After the build step, it looks like this:

 ns-3-allinone/ns-3-dev/wimin/model
                             /examples
                             /doc
                             /test
                             /helper
                             /lib/libns3-wimin.so
                             /lib/libns3-wimin-test.so
                             /bindings/ns3wimin.so
                             /include
                             /bin

Installing

The install step will next put it into these places:

 installprefix/bin
 installprefix/include/wimin
 installprefix/bindings/python
 installprefix/lib

where installprefix defaults to either "ns-3-allinone/install/debug" or "ns-3-allinone/ns-3-dev/install/debug" but can be overridden if a different prefix is configured.

Plan and Status

Plan

We are aiming to first make the existing ns-3 modular, and then work on the second issue of supporting easy integration of modules maintained by third parties.

This plan will be carried out in two stages:

  • Stage 1: (for review/merge now) make existing ns-3 modular
    • users can explicitly disable unneeded modules from their build
    • tests are decoupled from the main libraries; changes to test programs do not cause all examples to be relinked
  • Stage 2: (post ns-3.11) enable the "app store" concept
    • allow jhbuild or similar scripts to orchestrate a larger build, including managing third-party dependencies (e.g. click or openflow)
    • each module gets an independent version number, and metadata coordinates version dependencies between modules
    • allow users to use their own build system if they want

Stage 1 Status

Stage 1 is proposed for ns-3.11 release. The primary goal is to be able to enable/disable modules and allow users to tailor the build to include the components that they need.

Making the existing ns-3 into a modular system is mainly a job of reorganizing the source tree and modifying waf and the python bindings generation code.

There is a prototype repository at http://code.nsnam.org/watrous/ns-3-dev-pending that basically contains what is proposed for ns-3.11. The items that have been completed to date are:

  1. Resolve the main circular dependencies in the core modules of ns-3, and reorganize the modules simulator, core, common, node, and internet-stack into a new set of modules (core, network, internet, propagation, spectrum), along the lines of what was discussed on the list
  2. A new file .ns3rc that allows users to specify which items are in the build or out of the build.
  3. separate libraries for each module: each module builds one "model" library and one "test" library. Only the test-runner links the test code.
  4. The example programs that test.py runs (if examples are enabled in the build) are specified in each examples directory in a new file "examples-to-run.py", rather than in the main test.py program.

ns-3-dev-pending is not finished but is far enough along for people to get the idea of what it will eventually look like for ns-3.11. The items that are unfinished but are proposed to be finished for this release are:

  1. Generate python bindings, at least the first step below. Gustavo previously proposed to do this in two stages:
    1. require all modules to be built for python while modular python binding generation is worked on. The python user would import a single monolithic python module, as is presently done.
    2. make python bindings truly modular; bindings would be maintained in a bindings/ directory with each module, and the python modules will be decomposed along module boundaries (e.g. "import ns3-network")
  2. Not all modules are cut over to the new module directory structure, but this appears to be mainly an issue of slogging through each directory; the main circular dependency issues have already been resolved.
  3. src/helper will go away; helpers will be maintained on a per-module basis.
  4. Not all test code has been properly redistributed in the new directory layout. The plan is that "build verification tests" (BVTs) will live with the model code, and other tests (e.g. UNIT) will live in a test/ directory.
  5. Some examples or samples may migrate from the top-level examples/ or samples/ directory to the module examples/ directory for this release. However, since some examples have many more dependencies than the individual modules they link against, we suggest keeping a top-level directory for such example programs.
  6. Presently the system requires that each module have a test library. This requires at least that each module's wscript file contain a test block so that libns3-modulename-test.so is created.
  7. ./waf --doxygen needs to be re-enabled; print-introspected-doxygen utility is not presently built as part of all modules.
  8. Fix project documentation (tutorial, wiki, manual, etc.) to align with the new layout
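Item 6 above requires a test block in each module's wscript. A minimal sketch of such a wscript might look like the following; the module name "foo" and file names are hypothetical, while create_ns3_module_test_library is the same helper shown for the common module later on this page:

```python
## Sketch of a minimal wscript for a hypothetical module "foo".
def build(bld):
    # The model library: libns3-foo.so
    module = bld.create_ns3_module('foo')
    module.source = ['model/foo.cc']

    # The test library: libns3-foo-test.so, linked only by the test-runner
    foo_test = bld.create_ns3_module_test_library('foo')
    foo_test.source = ['test/foo-test-suite.cc']
```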

New module layout

A prototypical module looks like this:

 src/modulename/
                doc/
                examples/
                         examples-to-run.py
                helper/
                model/
                test/
                utils/
                wscript

Not all directories will be present in each module.

Modular libraries

Enabling a module will cause two libraries to be built: libns3-modulename.so and libns3-modulename-test.so. For example, try these commands:

 ./waf  configure --disable-python --enable-modules=core
 ./waf
 cd build/debug/
 ls

and the following libraries should be present:

 bindings  libns3-core.so       ns3      scratch  utils
 examples  libns3-core-test.so  samples  src

Running test.py will cause only those tests that depend on module core to be run:

 20 of 24 tests passed (20 passed, 4 skipped, 0 failed, 0 crashed, 0 valgrind errors)

Repeat for the "network" module instead of the "core" module, and the following will be built, since network depends on core:

 bindings  libns3-core.so       libns3-network.so       ns3      scratch  utils
 examples  libns3-core-test.so  libns3-network-test.so  samples  src

the .ns3rc file

A command-line option (--enable-modules=modulename) passed to ./waf at the configure stage controls which modules are built. An alternative way is to write an .ns3rc file:

 cat .ns3rc
 #! /usr/bin/env python
 # A list of the modules that will be enabled when ns-3 is run.
 # Modules that depend on the listed modules will be enabled also.
 #
 # All modules can be enabled by choosing 'all_modules'.
 modules_enabled = ['core', 'network', 'internet', 'mpi', 'mobility', 'bridge', 'propagation', 'spectrum']


The precedence rules are as follows:

  1. the --enable-modules configure string overrides any .ns3rc file
  2. the .ns3rc file in the top level ns-3 directory is next consulted, if present
  3. the system searches for ~/.ns3rc if the above two are unspecified
  4. /etc/ns3rc is checked

If none of the above limits the modules to be built, all modules that waf knows about will be built.
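The precedence rules above can be sketched as a lookup function; the return convention and parameters below are illustrative only:

```python
import os

def find_ns3rc(cli_modules=None, topdir=".", home=None):
    """Sketch of the .ns3rc precedence rules:
    --enable-modules > topdir/.ns3rc > ~/.ns3rc > /etc/ns3rc > all modules.

    Returns a (source, value) pair describing where the module list
    comes from; ("all", None) means build everything waf knows about.
    """
    if cli_modules:
        return ("--enable-modules", cli_modules)
    home = home or os.path.expanduser("~")
    for candidate in (os.path.join(topdir, ".ns3rc"),
                      os.path.join(home, ".ns3rc"),
                      "/etc/ns3rc"):
        if os.path.exists(candidate):
            return ("file", candidate)
    return ("all", None)
```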

examples-to-run.py

In ns-3.10, test.py hardcodes a number of examples to run in the test suite. In ns-3.11, we propose to add a new file "examples-to-run.py" in each module test/ directory, to tell the test framework which of the examples should be run and under which test conditions. The syntax is the same as in the current test.py; e.g., src/spectrum/test/examples-to-run.py is given below:

 #! /usr/bin/env python
 ## -*- Mode: python; py-indent-offset: 4; indent-tabs-mode: nil; coding: utf-8; -*-
 # A list of C++ examples to run in order to ensure that they remain
 # buildable and runnable over time.  Each tuple in the list contains
 #
 #     (example_name, do_run, do_valgrind_run).
 #
 # See test.py for more information.
 cpp_examples = [
   ("adhoc-aloha-ideal-phy", "True", "True"),
   ("adhoc-aloha-ideal-phy-with-microwave-oven", "True", "True"),
 ]
 # A list of Python examples to run in order to ensure that they remain
 # runnable over time.  Each tuple in the list contains
 #
 #     (example_name, do_run).
 #
 # See test.py for more information.
 python_examples = []


Open issue: Should src/ directory be renamed to modules/, or kept as is?

Open issue: Should modules be flat in the modules/ directory, or preserve hierarchy such as modules/routing/dsdv?

Open issue: Should tests be built by default, or should they be disabled? In other projects, you typically run "make test" explicitly. Here, we could add an explicit "./waf test" command, or we could build tests by default but disable them when users run "./waf configure --disable-tests". The present situation is that test libraries are built by default.

Open issue: Bug 848: Linux FHS requires version number appended to the library. Should we start doing this?

Open issue: Do we want to keep including version number in the ns-3 directory name?

Open issue: Install/uninstall to typical file system locations, e.g. /usr/lib/libns3-simulator.so.3.10, with a symbolic link /usr/lib/libns3-simulator.so.3 -> /usr/lib/libns3-simulator.so.3.10.

Open issue: Documentation organization. Proposing that each module contain its .rst file(s) in the module/<modulename>/doc/ directory. Modules that want to chain up to the main documentation tree can do so by adding themselves to the index.rst. In the future, when users develop modules in the app store framework, users can choose to chain up to the main manual, or they can provide their own makefiles and even choose their own markup (e.g. latex) if they want, and separate html/pdf can be generated.

Module directory structure

Each module directory now will contain the following directories:

  examples
  model
  test

ns-3 configuration file

The ns-3 configuration file, .ns3rc, specifies the ns-3 modules that should be enabled. The default version of .ns3rc stored in the source code repository will enable all of the ns-3 modules that are considered to be the core of an ns-3 release.

Note that ns-3 will first try to use the .ns3rc file found in the current working directory and, failing that, will check for .ns3rc in the user's home directory (~). If ns-3 can't find a .ns3rc file in either location, then all modules will be enabled.

Here is an example .ns3rc file that enables only the simulator and common modules:

  #! /usr/bin/env python
  
  # A list of the modules that are enabled.
  modules_enabled = ['simulator', 'common']

If the debug version of ns-3 is built, then the headers for both of the modules will be installed into build/debug/ns3, and the following libraries will be created in the build/debug directory:

  libns3-common.so
  libns3-common-test.so
  libns3-simulator.so
  libns3-simulator-test.so

If the user chooses the --enable-modules option for waf, then the modules specified will be used rather than those in the .ns3rc file.

Open issue: If the user chooses the --enable-modules option for waf, should the .ns3rc file be modified to match the modules that were specified? Presently, waf does not touch the .ns3rc file, but --enable-modules just overrides what is stored in .ns3rc.

Users can experiment as follows:

  hg clone ...
  ./waf configure --disable-python
  ./waf 
  ./test.py (runs all tests)
  rm -rf build
  (edit .ns3rc to remove "common", repeat above steps, etc.)

Specifying module test libraries

The test suites that make up a module's test library are specified by adding code like the following in the module's wscript file, which here is for the common module:

  common_test = bld.create_ns3_module_test_library('common')
  common_test.source = [
      'test/buffer-test.cc',
      'test/packet-metadata-test.cc',
      'test/pcap-file-test-suite.cc',
  ]

Running examples

Here is how examples are handled in the modular ns-3 framework:

1. Examples that logically belong to a module live with that module. Note that examples placed there should not increase the build dependencies of that module.

  • Only those with all their dependencies built will be built by waf.
  • Examples to run are specified by
     modules/module_name/tests/examples-to-run.py
  • Examples are installed to (for debug version)
     build/debug/modules/module_name/examples

2. Examples that don't logically go with a module, or that introduce more dependencies than required to build the module library itself, remain in the current example directory.

  • Only those with all their dependencies built will be built by waf
  • Examples to run are specified by
     examples/example_name/examples-to-run.py
  • Examples are installed to (for debug version)
     build/debug/examples/example_name

3. Test.py runs the appropriate examples.

  • test.py uses .ns3rc and examples-to-run.py files to determine examples to run.
  • test.py looks for test executables that were built
  • test.py no longer contains hard coded paths for examples.

Examples to run are specified using examples-to-run.py files. Here is such a file for the tutorial example:

  #! /usr/bin/env python
  ## -*- Mode: python; py-indent-offset: 4; indent-tabs-mode: nil; coding: utf-8; -*-
  
  # A list of C++ examples to run in order to ensure that they remain
  # buildable and runnable over time.  Each tuple in the list contains
  #
  #     (example_name, do_run, do_valgrind_run).
  #
  # See test.py for more information.
  cpp_examples = [
      ("first", "True", "True"),
      ("hello-simulator", "True", "True"),
  ]
  
  # A list of Python examples to run in order to ensure that they remain
  # runnable over time.  Each tuple in the list contains
  #
  #     (example_name, do_run).
  #
  # See test.py for more information.
  python_examples = [
      ("first.py", "True"),
  ]

Roadblocks to merging to ns-3

These things would need to be done before starting to merge the modular framework to ns-3.

  1. agreement with the overall concept of .ns3rc and how it works
  2. fix python bindings, decide on issues regarding modularity of python modules
  3. check whether doxygen generation needs any adjustment
  4. come up with a proposed module reorganization (e.g. merge core and simulator)
  5. merge off-tree code (such as patch to bug 445) before making the module cutovers

Stage 2 Status

For the second stage, we have been looking at GNOME jhbuild to build packages at the top-level ns-3-allinone directory. jhbuild may need to be wrapped by a specialized ns-3 build.py or download.py wrapper. However, we have encountered the following limitations so far:

  1. jhbuild has no support for scons
  2. waf support is limited; for instance, controlling configuration options is limited
  3. jhbuild has no support for hierarchical builds: although you can have metamodules, and moduleset includes, there is (AFAIK) no option to say "build only this metamodule". The options to "start at module X", or "skip module Y" are not sufficient.