App Store Technical Requirements

Revision as of 05:23, 9 November 2010


Goals

The long-term goal is to move ns-3 to separate modules, for build and maintenance reasons. For build reasons, since ns-3 is becoming large, we would like to allow users to enable/disable subsets of the available model library. For maintenance reasons, it is important that we move to a development model where modules can evolve on different timescales and be maintained by different organizations.

An analogy is the GNOME desktop, which is composed of a number of individual libraries that evolve on their own timescale. A build framework called jhbuild exists for building and managing the dependencies between these disparate projects.

Once we have a modular build, and an ability to separately download and install third-party modules, we will need to distinguish the maintenance status or certification of modules. The ns-3 project will maintain a set of core ns-3 modules, including those essential for all ns-3 simulations, and will maintain a master build file containing metadata for contributed modules; this will allow users to fetch and build what they need. Eventually, modules will have a maintenance state associated with them describing aspects such as who the maintainer is, whether the module is actively maintained, whether it contains documentation or tests, whether it passed an ns-3 project code review, whether it is currently passing the tests, etc. The current status of all known ns-3 modules will be maintained in a database and be browsable on the project web site.

The basic idea of the ns-3 app store would be to store on a server a set of user-submitted metadata which describes various source code packages. Typical metadata would include:

  • unique name
  • version number
  • last known good ns-3 version (tested against)
  • description
  • download URL
  • untar/unzip command
  • configure command
  • build command
  • system prerequisites (if user needs to apt-get install other libraries)
  • list of ns-3 package dependencies (other ns-3 packages which this package depends upon)
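Such a metadata record could be sketched as a simple mapping whose fields mirror the list above. Everything below is illustrative: the "uan" package name, the URL, and the commands are made-up placeholders, not entries on any real server.

```python
# Illustrative metadata record for a hypothetical "uan" package.
# Field names mirror the bullet list above; all values are made up.
uan_metadata = {
    "name": "uan",                          # unique name
    "version": "0.1",                       # version number
    "ns3-tested-against": "ns-3.9",         # last known good ns-3 version
    "description": "Underwater Acoustic Network models",
    "download-url": "http://example.org/uan-0.1.tar.bz2",
    "untar-command": "tar xjf uan-0.1.tar.bz2",
    "configure-command": "./waf configure",
    "build-command": "./waf",
    "system-prerequisites": [],             # packages the user must apt-get install
    "ns3-dependencies": ["core", "common", "node"],
}
```

A server-side database would hold one such record per submitted package, keyed by the unique name.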

Requirements

Build and configuration

  • Enable optimized, debug, and static builds
  • Separate tests from models
  • Doxygen support
  • Test program support
  • lcov code coverage
  • Integration with buildbots

API and workflow

Assume that our new download script is called download.py and our new build script is called build.py.

  wget http://www.nsnam.org/releases/ns-allinone-3.x.tar.bz2
  bunzip2 ns-allinone-3.x.tar.bz2 && tar xvf ns-allinone-3.x.tar.bz2
  cd ns-allinone-3.x

In this directory, users will find the following directory layout:

  build.py download.py constants.py? VERSION LICENSE README

Download essential modules:

  ./download.py

This will leave a layout such as the following:

  download.py build.py pygccxml pybindgen simulator core common gcc-xml
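A minimal sketch of what download.py might do, assuming each essential module maps to a Mercurial repository (the module list and URLs below are placeholders; only code.nsnam.org appears in the text). The function returns the commands rather than running them, so it is a dry run:

```python
# Placeholder mapping of essential modules to hypothetical repositories.
ESSENTIAL_MODULES = {
    "core":      "http://code.nsnam.org/core",
    "common":    "http://code.nsnam.org/common",
    "simulator": "http://code.nsnam.org/simulator",
    "pybindgen": "http://code.nsnam.org/pybindgen",
}

def download_commands(modules=ESSENTIAL_MODULES):
    """Return the hg clone commands download.py would run (dry run)."""
    return [["hg", "clone", url, name]
            for name, url in sorted(modules.items())]

if __name__ == "__main__":
    for cmd in download_commands():
        print(" ".join(cmd))
```

The real script would pass each command to subprocess and check the exit status; returning the command list keeps the sketch inspectable.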

For typical users, the next step is as follows:

  ./build.py ns3

"ns3" is a meta-module that pulls in what is considered to be the core of an ns-3 release (for starters, every module that is in ns-3.9).

The above will take the following steps:

  • build all typical prerequisites such as pybindgen
  • cd core
  • ./waf configure && ./waf && ./waf install
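The per-module sequence above could be driven by a small loop in build.py. This sketch only returns the commands it would run in each module directory, rather than invoking waf, so the control flow can be checked in isolation:

```python
def build_commands(modules):
    """Return the per-module waf invocations (dry run).

    Each module is configured, built, and installed in its own
    directory, mirroring the cd / ./waf configure && ./waf &&
    ./waf install sequence shown in the text.
    """
    steps = []
    for module in modules:
        for waf_args in (["configure"], [], ["install"]):
            steps.append({"cwd": module, "cmd": ["./waf"] + waf_args})
    return steps

if __name__ == "__main__":
    for step in build_commands(["core", "simulator"]):
        print(step["cwd"], " ".join(step["cmd"]))
```

A real driver would run these with subprocess, stop on the first failure, and order modules by their declared dependencies.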

The above will install headers into build/debug/ns3/ and a libns3core.so into build/debug/lib/.

  cd ../simulator
  ./waf configure && ./waf && ./waf install
  cd ..
  ...

Open issue: How to express and honor platform limitations, such as only trying to download/build NSC on platforms supporting it.

Once the build process is done, there will be a number of libraries in a common build directory, along with all applicable headers. Specifically, we envision that for a module foo there may be a libns3foo.so, a libns3foo-test.so, and possibly others.

At this point, unless python has been disabled, the program executes:

  • python-scan.py
  • ...
  • pybindgen

which operates on the headers in the build/debug/ns3 directory and creates a python module .so library that matches the configured ns-3 components. Open issue: What is the eventual python API? Should each module be imported separately (e.g. import ns.node), or should we try to go for a single ns3 python module?

Tests are run by running ./test.py, which knows how to find the available test libraries and run the tests.

Doxygen can be run on the build/debug/ns3 directory, e.g. via "build.py doxygen".

To configure different compiler options, such as -g or -fprofile-arcs, use the CFLAGS environment variable.

To run programs:

  ./build.py shell
  build/debug/examples/csma/csma-bridge

or

  python build/debug/examples/csma/csma-bridge.py

Build all known ns-3 modules:

  ./build.py all

In the above case, suppose that the program could not download a module. It can then interactively prompt the user: "Module foo not found; do you wish to continue [Y/n]?".
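The capital Y in the prompt suggests that continuing is the default. A sketch of that decision, with the wording taken from the text; the answer is passed in as an argument (rather than read via input()) purely to keep the sketch testable:

```python
def should_continue(module, answer):
    """Decide whether to continue after a failed download.

    `answer` is what the user typed at the
    "Module <module> not found; do you wish to continue [Y/n]?"
    prompt; an empty answer takes the capital-Y default.
    """
    answer = answer.strip().lower()
    return answer in ("", "y", "yes")
```

The real driver would call input() with the prompt string and abort the build when this returns False.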

Build the core of ns-3, plus device models WiFi and CSMA:

  ./build.py ns3-core wifi csma

ns3-core is a meta-module containing the simulator, core, common, and node modules. Open issue: what is the right granularity for this module?
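Meta-modules like ns3-core and ns3 could be expanded recursively into a flat module list before building. The table below is one guess at the granularity, which the open issue above leaves undecided; the contents of "ns3" here are placeholders:

```python
# Hypothetical meta-module table; the actual granularity is an open issue.
META_MODULES = {
    "ns3-core": ["simulator", "core", "common", "node"],
    "ns3": ["ns3-core", "wifi", "csma"],   # placeholder contents
}

def expand(names, table=META_MODULES):
    """Recursively expand meta-modules into a flat, duplicate-free list,
    preserving first-seen order so dependencies build first."""
    result = []
    for name in names:
        if name in table:
            for sub in expand(table[name], table):
                if sub not in result:
                    result.append(sub)
        elif name not in result:
            result.append(name)
    return result
```

With this, "./build.py ns3-core wifi csma" would expand to the four core modules followed by wifi and csma before the per-module build loop runs.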

Suppose a research group at Example Univ. publishes a new WiMin device module. It gets a new unique module name from the ns-3 project, such as wimin. It also contributes metadata to the master ns-3 build script. When a third party then runs:

  ./build.py wimin

the system will try to download and build the new wimin module (plus its examples and tests), and put it in the usual place.

Plan

GNOME jhbuild seems to be able to provide most or all of the "build.py" and "download.py" functionality mentioned above. jhbuild may need to be wrapped by a specialized ns-3 build.py or download.py wrapper. So, the current plan is to prototype the above using jhbuild and see how far we get.

Work can proceed in parallel:

  1. Start to try to compare jhbuild with our existing download.py/build.py at the top level directory
  2. Try to define the module granularity and fix broken module dependencies
  3. Work on python bindings
  4. Work on how to handle examples in a modular test framework