App Store Technical Requirements

From Nsnam
Revision as of 05:23, 9 November 2010

Main Page - Roadmap - Summer Projects - Project Ideas - Developer FAQ - Tools - Related Projects

HOWTOs - Installation - Troubleshooting - User FAQ - Samples - Models - Education - Contributed Code - Papers

Goals

The long-term goal is to move ns-3 to separate modules, for build and maintenance reasons. For build reasons, since ns-3 is becoming large, we would like to allow users to enable/disable subsets of the available model library. For maintenance reasons, it is important that we move to a development model where modules can evolve on different timescales and be maintained by different organizations.

An analogy is the GNOME desktop, which is composed of a number of individual libraries that evolve on their own timescale. A build framework called jhbuild exists for building and managing the dependencies between these disparate projects.

Once we have a modular build, and an ability to separately download and install third-party modules, we will need to distinguish the maintenance status or certification of modules. The ns-3 project will maintain a set of core ns-3 modules, including those essential for all ns-3 simulations, and will maintain a master build file containing metadata for contributed modules; this will allow users to fetch and build what they need. Eventually, modules will have a maintenance state associated with them describing aspects such as who the maintainer is, whether the module is actively maintained, whether it contains documentation or tests, whether it passed an ns-3 project code review, whether it is currently passing the tests, etc. The current status of all known ns-3 modules will be maintained in a database and be browsable on the project web site.

The basic idea of the ns-3 app store would be to store on a server a set of user-submitted metadata which describes various source code packages. Typical metadata would include:

  • unique name
  • version number
  • last known good ns-3 version (tested against)
  • description
  • download URL
  • untar/unzip command
  • configure command
  • build command
  • system prerequisites (if user needs to apt-get install other libraries)
  • list of ns-3 package dependencies (other ns-3 packages which this package depends upon)
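As a concrete illustration, a metadata record could be modeled as a simple dictionary keyed by the fields above; the field names, module name, URL, and commands below are all hypothetical, not drawn from an actual ns-3 package:

```python
# Hypothetical metadata record for a contributed module; every value is
# illustrative. The field names mirror the bullet list above.
REQUIRED_FIELDS = [
    "name", "version", "ns3_version", "description", "download_url",
    "unpack_command", "configure_command", "build_command",
    "system_prereqs", "ns3_dependencies",
]

example_package = {
    "name": "example-module",              # unique name
    "version": "0.1",                      # version number
    "ns3_version": "3.9",                  # last known good ns-3 version
    "description": "A hypothetical contributed device module",
    "download_url": "http://example.org/example-module-0.1.tar.bz2",
    "unpack_command": "tar xjf example-module-0.1.tar.bz2",
    "configure_command": "./waf configure",
    "build_command": "./waf",
    "system_prereqs": ["libxml2-dev"],     # apt-get installable prereqs
    "ns3_dependencies": ["ns3-core"],      # other ns-3 packages needed
}

def validate(metadata):
    """Return the list of required fields missing from a metadata record."""
    return [f for f in REQUIRED_FIELDS if f not in metadata]
```

A server-side submission checker could reject records for which `validate()` returns a non-empty list.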

Requirements

Build and configuration

  • Support optimized, debug, and static builds
  • Separate tests from models
  • Doxygen support
  • A test runner program
  • lcov code coverage
  • Integration with buildbots

API and work flow

Assume that our new download script is called download.py and our new build script is called build.py.

  wget http://www.nsnam.org/releases/ns-allinone-3.x.tar.bz2
  bunzip2 ns-allinone-3.x.tar.bz2 && tar xvf ns-allinone-3.x.tar
  cd ns-allinone-3.x

In this directory, users will find the following directory layout:

  build.py download.py constants.py? VERSION LICENSE README

Download essential modules:

  ./download.py

This will leave a layout such as follows:

  download.py build.py pygccxml pybindgen simulator core common gcc-xml

For typical users, the next step is as follows:

  ./build.py ns3

"ns3" is a meta-module that pulls in what is considered to be the core of an ns-3 release (i.e., for starters, every module that is in ns-3.9).

The above will take the following steps:

  • build all typical prerequisites such as pybindgen
  • cd core
  • ./waf configure && ./waf && ./waf install

The above will install headers into build/debug/ns3/ and a libns3core.so into build/debug/lib/.
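The steps above can be sketched in Python as follows. This is a minimal sketch of how build.py might expand a meta-module into a dependency-ordered sequence of per-module waf invocations; the module list and dependency edges below are assumptions for illustration, not the actual ns-3 module graph:

```python
# Sketch: expand a meta-module into a dependency-ordered build plan.
# The meta-module contents and dependency edges are assumptions.
META_MODULES = {"ns3": ["core", "common", "simulator"]}
DEPENDENCIES = {"core": [], "common": ["core"], "simulator": ["core", "common"]}

def build_order(target):
    """Resolve a (meta-)module into a topologically ordered module list."""
    order, seen = [], set()
    def visit(mod):
        if mod in seen:
            return
        seen.add(mod)
        for dep in DEPENDENCIES.get(mod, []):
            visit(dep)
        order.append(mod)
    for mod in META_MODULES.get(target, [target]):
        visit(mod)
    return order

def build_plan(target):
    """Per-module command sequence, mirroring the steps listed above."""
    return [(mod, ["./waf configure", "./waf", "./waf install"])
            for mod in build_order(target)]
```

With the assumed graph, `build_order("ns3")` yields core, then common, then simulator, so each module is configured, built, and installed only after its dependencies.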

  cd ../simulator
  ./waf configure && ./waf && ./waf install
  cd ..
  ...

Open issue: How to express and honor platform limitations, such as only trying to download/build NSC on platforms supporting it.

Once the build process is done, there will be a set of libraries in a common build directory, along with all applicable headers. Specifically, we envision that for a module foo there may be a libns3foo.so, a libns3foo-test.so, and possibly others.

At this point, unless Python has been disabled, the program executes:

  • python-scan.py
  • ...
  • pybindgen

which operates on the headers in the build/debug/ns3 directory and creates a Python module .so library that matches the configured ns-3 components. Open issue: What is the eventual Python API? Should each module be imported separately (e.g., import ns3.node), or should we try to go for a single ns3 Python module?

Tests are run by running ./test.py, which knows how to find the available test libraries and run the tests.
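One way test.py could "know how to find the available test libraries" is by filename convention, following the libns3foo-test.so naming mentioned above; this is a sketch under that assumption, not the actual test.py implementation:

```python
import fnmatch
import os

def find_test_modules(filenames):
    """Extract module names from filenames matching libns3<module>-test.so."""
    mods = []
    for name in filenames:
        if fnmatch.fnmatch(name, "libns3*-test.so"):
            mods.append(name[len("libns3"):-len("-test.so")])
    return mods

def discover(build_dir="build/debug/lib"):
    """Scan the build directory for test libraries, if it exists."""
    if not os.path.isdir(build_dir):
        return []
    return find_test_modules(os.listdir(build_dir))
```

test.py could then iterate over the discovered module names and run each module's test suite in turn.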

Doxygen can be run, e.g. via "build.py doxygen", on the build/debug/ns3 directory.

To configure additional compiler options such as -g or -fprofile-arcs, use the CFLAGS environment variable.

To run programs:

  ./build.py shell
  build/debug/examples/csma/csma-bridge

or

  python build/debug/examples/csma/csma-bridge.py

Build all known ns-3 modules:

  ./build.py all

In the above case, suppose that the program cannot download one of the modules. It can then interactively prompt the user: "Module foo not found; do you wish to continue [Y/n]?".
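The interactive prompt could be implemented along these lines; a sketch only, with an empty answer defaulting to Yes per the capital Y in [Y/n]:

```python
def confirm_continue(module, answer_func=input):
    """Ask whether to continue after a failed module fetch.

    Empty input defaults to Yes, matching the [Y/n] convention.
    answer_func is injectable so the prompt can be tested without a tty.
    """
    reply = answer_func(
        "Module %s not found; do you wish to continue [Y/n]? " % module
    ).strip().lower()
    return reply in ("", "y", "yes")
```

build.py would abort the remaining downloads when this returns False and otherwise skip the missing module and carry on.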

Build the core of ns-3, plus device models WiFi and CSMA:

  ./build.py ns3-core wifi csma

ns3-core is a meta-module containing the simulator, core, common, and node modules. Open issue: what is the right granularity for this module?

Suppose a research group at Example Univ. publishes a new WiMin device module. It gets a new unique module name from the ns-3 project, such as wimin, and contributes metadata to the master ns-3 build script. When a third party then does the following:

  ./build.py wimin

the system will try to download and build the new wimin module (plus its examples and tests), and put it in the usual place.
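Given a metadata record of the kind described earlier, build.py could derive the fetch-and-build command sequence for a module it has never seen. The record and helper below are hypothetical; actual execution would run each command via subprocess and handle failures:

```python
# Hypothetical metadata for the wimin module; the URL and commands are
# illustrative placeholders, not a real download location.
wimin_metadata = {
    "name": "wimin",
    "download_url": "http://example.edu/wimin-0.1.tar.bz2",
    "unpack_command": "tar xjf wimin-0.1.tar.bz2",
    "configure_command": "./waf configure",
    "build_command": "./waf",
}

def fetch_and_build_commands(meta):
    """Shell command sequence to download and build a third-party module.

    Returned as strings for inspection; a real build.py would execute
    them (e.g. with subprocess) and stop on the first failure.
    """
    return [
        "wget %s" % meta["download_url"],
        meta["unpack_command"],
        meta["configure_command"],
        meta["build_command"],
    ]
```

The same helper would serve any contributed module, since the metadata schema, not the module name, drives the steps.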

Plan

GNOME jhbuild seems to be able to provide most or all of the "build.py" and "download.py" functionality mentioned above. jhbuild may need to be wrapped by a specialized ns-3 build.py or download.py wrapper. So, the current plan is to try to prototype the above using jhbuild and see how far we get.

Work can proceed in parallel:

  1. Compare jhbuild with our existing download.py/build.py at the top-level directory
  2. Define the module granularity and fix broken module dependencies
  3. Work on Python bindings
  4. Work on how to handle examples in a modular test framework