The buildbots use a Python program, test.py, that is responsible for
running all of the tests and collecting the resulting reports into a
human-readable form. This program is also available for use by users and
developers.
test.py is very flexible in allowing the user to specify the number
and kind of tests to run, as well as the amount and kind of output to
generate.
By default, test.py will run all available tests and report status
back in a very concise form. Running the command,

./test.py

will result in a number of PASS, FAIL, CRASH or SKIP indications
followed by the kind of test that was run and its display name.
Waf: Entering directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
'build' finished successfully (0.939s)
FAIL: TestSuite ns3-wifi-propagation-loss-models
PASS: TestSuite object-name-service
PASS: TestSuite pcap-file-object
PASS: TestSuite ns3-tcp-cwnd
...
PASS: TestSuite ns3-tcp-interoperability
PASS: Example csma-broadcast
PASS: Example csma-multicast
This mode is intended to be used by users who are interested in determining if their distribution is working correctly, and by developers who are interested in determining if changes they have made have caused any regressions.
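Since every line of the concise report begins with a status keyword, the output lends itself to simple scripting. The following is a minimal sketch for counting failures; the sample file contents are invented for illustration, and in practice the text would come from running ./test.py itself:

```shell
# Create a hypothetical sample of test.py's concise output for
# illustration; real output comes from running ./test.py
cat > sample-results.txt <<'EOF'
FAIL: TestSuite ns3-wifi-propagation-loss-models
PASS: TestSuite object-name-service
PASS: TestSuite pcap-file-object
EOF

# Count the lines that report a failure; grep -c prints the match count
failures=$(grep -c '^FAIL:' sample-results.txt)
echo "failures: $failures"
```

A non-zero count is a quick signal that a regression needs investigation.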
There are a number of options available to control the behavior of
test.py. If you run test.py --help you should see a command summary
like:
Usage: test.py [options]

Options:
  -h, --help            show this help message and exit
  -c KIND, --constrain=KIND
                        constrain the test-runner by kind of test
  -e EXAMPLE, --example=EXAMPLE
                        specify a single example to run
  -g, --grind           run the test suites and examples using valgrind
  -k, --kinds           print the kinds of tests available
  -l, --list            print the list of known tests
  -m, --multiple        report multiple failures from test suites and test
                        cases
  -n, --nowaf           do not run waf before starting testing
  -s TEST-SUITE, --suite=TEST-SUITE
                        specify a single test suite to run
  -v, --verbose         print progress and informational messages
  -w HTML-FILE, --web=HTML-FILE, --html=HTML-FILE
                        write detailed test results into HTML-FILE.html
  -r, --retain          retain all temporary files (which are normally
                        deleted)
  -t TEXT-FILE, --text=TEXT-FILE
                        write detailed test results into TEXT-FILE.txt
  -x XML-FILE, --xml=XML-FILE
                        write detailed test results into XML-FILE.xml
If one specifies an optional output style, one can generate detailed
descriptions of the tests and status. Available styles are text and
HTML.
The buildbots will select the HTML option to generate HTML test reports for the
nightly builds using,
./test.py --html=nightly.html
In this case, an HTML file named “nightly.html” would be created with a pretty summary of the testing done. A “human readable” format is available for users interested in the details.
./test.py --text=results.txt
In the example above, the test suite checking the ns-3 wireless device
propagation loss models failed. By default no further information is
provided.
To further explore the failure, test.py allows a single test suite to
be specified. Running the command,
./test.py --suite=ns3-wifi-propagation-loss-models
results in that single test suite being run.
FAIL: TestSuite ns3-wifi-propagation-loss-models
To find detailed information regarding the failure, one must specify the kind of output desired. For example, most people will probably be interested in a text file:
./test.py --suite=ns3-wifi-propagation-loss-models --text=results.txt
This will result in that single test suite being run with the test status written to the file “results.txt”.
You should find something similar to the following in that file:
FAIL: Test Suite ``ns3-wifi-propagation-loss-models'' (real 0.02 user 0.01 system 0.00)
PASS: Test Case "Check ... Friis ... model ..." (real 0.01 user 0.00 system 0.00)
FAIL: Test Case "Check ... Log Distance ... model" (real 0.01 user 0.01 system 0.00)
  Details:
    Message:   Got unexpected SNR value
    Condition: [long description of what actually failed]
    Actual:    176.395
    Limit:     176.407 +- 0.0005
    File:      ../src/test/ns3wifi/propagation-loss-models-test-suite.cc
    Line:      360
Notice that the Test Suite is composed of two Test Cases. The first test case checked the Friis propagation loss model and passed. The second test case failed checking the Log Distance propagation model. In this case, an SNR of 176.395 was found, and the test expected a value of 176.407 correct to three decimal places. The file which implemented the failing test is listed as well as the line of code which triggered the failure.
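The pass/fail decision reported above is a simple tolerance comparison: the case fails because the absolute difference between Actual and Limit exceeds the stated bound. As a sketch of that arithmetic, using the numbers from the report (awk here merely stands in for the check the C++ test code performs):

```shell
# Reproduce the tolerance check from the report above: the test fails
# because |176.395 - 176.407| = 0.012, which exceeds the 0.0005 bound
actual=176.395
limit=176.407
tolerance=0.0005

verdict=$(awk -v a="$actual" -v l="$limit" -v t="$tolerance" 'BEGIN {
    d = a - l
    if (d < 0) d = -d          # absolute difference
    if (d <= t) print "PASS"; else print "FAIL"
}')
echo "$verdict"
```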
If you desire, you could just as easily have written an HTML file using
the --html option as described above.
Typically a user will run all tests at least once after downloading
ns-3 to ensure that his or her environment has been built correctly and
is generating correct results according to the test suites. Developers
will typically run the test suites before and after making a change to
ensure that they have not introduced a regression with their changes.
In this case, developers may not want to run all tests, but only a
subset. For example, the developer might only want to run the unit
tests periodically while making changes to a repository. In this case,
test.py can be told to constrain the types of tests being run to a
particular class of tests. The following command will result in only
the unit tests being run:
./test.py --constrain=unit
Similarly, the following command will result in only the example smoke tests being run:

./test.py --constrain=example
To see a quick list of the legal kinds of constraints, you can ask for them to be listed. The following command
./test.py --kinds
will result in the following list being displayed:
Waf: Entering directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
'build' finished successfully (0.939s)
Waf: Entering directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
bvt:         Build Verification Tests (to see if build completed successfully)
core:        Run all TestSuite-based tests (exclude examples)
example:     Examples (to see if example programs run successfully)
performance: Performance Tests (check to see if the system is as fast as expected)
system:      System Tests (spans modules to check integration of modules)
unit:        Unit Tests (within modules to check basic functionality)
Any of these kinds of test can be provided as a constraint using the
--constrain option.
To see a quick list of all of the test suites available, you can ask for them to be listed. The following command,
./test.py --list
will result in a list of the test suites being displayed, similar to:
Waf: Entering directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
'build' finished successfully (0.939s)
histogram
ns3-wifi-interference
ns3-tcp-cwnd
ns3-tcp-interoperability
sample
devices-mesh-flame
devices-mesh-dot11s
devices-mesh
...
object-name-service
callback
attributes
config
global-value
command-line
basic-random-number
object
Any of these listed suites can be selected to be run by itself using
the --suite option as shown above.
Similarly to test suites, one can run a single example program using
the --example option.
./test.py --example=udp-echo
results in that single example being run.
PASS: Example udp-echo
Normally when example programs are executed, they write a large amount
of trace file data. This is normally saved to the base directory of the
distribution (e.g., /home/user/ns-3-dev). When test.py runs an example,
it really is completely unconcerned with the trace files. It just wants
to determine if the example can be built and run without error. Since
this is the case, the trace files are written into a
/tmp/unchecked-traces directory. If you run the above example, you
should be able to find the associated udp-echo.tr and
udp-echo-n-1.pcap files there.
The list of available examples is defined by the contents of the
“examples” directory in the distribution. If you select an example for
execution using the --example option, test.py will not make any attempt
to decide if the example has been configured or not; it will just try
to run it and report the result of the attempt.
When test.py runs, by default it will first ensure that the system has
been completely built. This can be defeated by selecting the --nowaf
option.
./test.py --list --nowaf
will result in a list of the currently built test suites being displayed, similar to:
ns3-wifi-propagation-loss-models
ns3-tcp-cwnd
ns3-tcp-interoperability
pcap-file-object
object-name-service
random-number-generators
Note the absence of the Waf build messages.
test.py also supports running the test suites and examples under
valgrind. Valgrind is a flexible program for debugging and profiling
Linux executables. By default, valgrind runs a tool called memcheck,
which performs a range of memory-checking functions, including
detecting accesses to uninitialised memory, misuse of allocated memory
(double frees, access after free, etc.) and detecting memory leaks.
This can be selected by using the --grind option.
./test.py --grind
As it runs, test.py, and the programs that it runs indirectly, generate
large numbers of temporary files. Usually the content of these files is
not interesting; however, in some cases it can be useful (for debugging
purposes) to view these files. test.py provides a --retain option which
will cause these temporary files to be kept after the run is completed.
The files are saved under a directory named testpy, in a subdirectory
named according to the current Coordinated Universal Time (also known
as Greenwich Mean Time).
./test.py --retain
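The resulting layout can be illustrated with date -u. Note that the exact name format test.py uses is an assumption here; the point is only that each run gets its own UTC-timestamped subdirectory:

```shell
# Build a testpy/<UTC-timestamp> style path; the exact format test.py
# uses may differ -- this only illustrates UTC-based naming via date -u
stamp=$(date -u +"%Y-%m-%d-%H-%M-%S")
retained_dir="testpy/$stamp"
echo "$retained_dir"
```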
Finally, test.py provides a --verbose option which will print large
amounts of information about its progress. It is not expected that this
will be terribly useful unless there is an error. In this case, you can
get access to the standard output and standard error reported by
running test suites and examples. Select verbose in the following way:
./test.py --verbose
All of these options can be mixed and matched. For example, to run all of the ns-3 core test suites under valgrind, in verbose mode, while generating an HTML output file, one would do:
./test.py --verbose --grind --constrain=core --html=results.html
This document was generated on November 13, 2009 using texi2html 1.82.