The test-runner is the bridge from generic Python code to ns-3 code. It is
written in C++ and uses the automatic test discovery process in the ns-3 code
to find and allow execution of all of the various tests. Although it may not
be used directly very often, it is good to understand how test.py actually
runs the various tests.

In order to execute the test-runner, you run it like any other ns-3
executable, using waf. To get a list of available options, you can type:
./waf --run "test-runner --help"
You should see something like the following:
Waf: Entering directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone-test/ns-3-dev/build'
'build' finished successfully (0.353s)
--assert:               Tell tests to segfault (like assert) if an error is detected
--basedir=dir:          Set the base directory (where to find src) to ``dir''
--tempdir=dir:          Set the temporary directory (where to find data files) to ``dir''
--constrain=test-type:  Constrain checks to test suites of type ``test-type''
--help:                 Print this message
--kinds:                List all of the available kinds of tests
--list:                 List all of the test suites (optionally constrained by test-type)
--out=file-name:        Set the test status output file to ``file-name''
--suite=suite-name:     Run the test suite named ``suite-name''
--verbose:              Turn on messages in the run test suites
Many of these options will be familiar if you have looked at test.py. This
should be expected, since the test-runner is just an interface between
test.py and ns-3. You may notice that example-related commands are missing
here. That is because the examples are not really ns-3 tests; test.py runs
them as if they were, to present a unified testing environment, but they are
really completely different and not to be found here.
The first new option that appears here, but not in test.py, is the --assert
option. This option is useful when debugging a test case under a debugger
like gdb. When selected, it tells the underlying test case to cause a
segmentation violation if an error is detected. This has the nice side effect
of causing program execution to stop (break into the debugger) when an error
is detected. If you are using gdb, you could use this option something like,
./waf shell
cd build/debug/utils
gdb test-runner
run --suite=global-value --assert
If an error is then found in the global-value test suite, a segfault would be
generated and the (source level) debugger would stop at the NS_TEST_ASSERT_MSG
that detected the error.
Another new option that appears here is the --basedir option. It turns out
that some tests may need to reference the source directory of the ns-3
distribution to find local data, so a base directory is always required to
run a test. If you run a test from test.py, the Python program will provide
the basedir option for you. To run one of the tests directly from the
test-runner using waf, you will need to specify the test suite to run along
with the base directory. So you could use the shell and do,
./waf --run "test-runner --basedir=`pwd` --suite=pcap-file-object"
Note the “backward” quotation marks (backticks) on the pwd command; the shell
substitutes the current working directory for it.
If you are running the test suite out of a debugger, it can be quite painful
to remember and constantly type the absolute path of the distribution base
directory. Because of this, if you omit the basedir, the test-runner will try
to figure one out for you. It begins in the current working directory and
walks up the directory tree looking for a directory with files named
“VERSION” and “LICENSE.” If it finds one, it assumes that must be the basedir
and provides it for you.
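That walk-up search can be sketched in a few lines of Python. This is only an
illustration of the heuristic described above, not the test-runner's actual
C++ code; the name find_basedir is hypothetical.

```python
import os

def find_basedir(start="."):
    """Walk up from 'start' looking for a directory that contains both a
    VERSION file and a LICENSE file; return it, or None at the root."""
    path = os.path.abspath(start)
    while True:
        if (os.path.isfile(os.path.join(path, "VERSION")) and
                os.path.isfile(os.path.join(path, "LICENSE"))):
            return path
        parent = os.path.dirname(path)
        if parent == path:  # reached the filesystem root without a match
            return None
        path = parent
```

Running it from anywhere inside an ns-3 source tree would return the
distribution's top-level directory.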
Similarly, many test suites need to write temporary files (such as pcap
files) in the process of running the tests, so the tests need a temporary
directory to write to. The Python test utility (test.py) will provide a
temporary directory automatically, but if run stand-alone this temporary
directory must be provided. Just as in the basedir case, it can be annoying
to continually have to provide a --tempdir, so the test runner will figure
one out for you if you don't provide one. It first looks for the environment
variables TMP and TEMP and uses those. If neither TMP nor TEMP is defined, it
picks /tmp. The code then tacks on an identifier indicating what created the
directory (ns-3), then the time (hh.mm.ss), followed by a large random
number. The test runner creates a directory of that name to be used as the
temporary directory. Temporary files then go into a directory that will be
named something like,
/tmp/ns-3.10.25.37.61537845
The time is provided as a hint so that you can relatively easily reconstruct what directory was used if you need to go back and look at the files that were placed in that directory.
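The naming scheme just described can be sketched in Python as follows. This
is a sketch of the scheme, not the test runner's own code; the name
make_tempdir is hypothetical.

```python
import os
import random
import time

def make_tempdir():
    """Build and create a temporary directory named in the style described
    above: base temp dir + 'ns-3.' + hh.mm.ss + '.' + a large random number."""
    # Prefer TMP, then TEMP, then fall back to /tmp, as the text describes.
    base = os.environ.get("TMP") or os.environ.get("TEMP") or "/tmp"
    stamp = time.strftime("%H.%M.%S")
    name = "ns-3.%s.%d" % (stamp, random.randint(0, 2**31 - 1))
    path = os.path.join(base, name)
    os.makedirs(path)
    return path
```

The timestamp makes the directory easy to find again later, while the random
suffix keeps concurrent runs from colliding.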
When you run a test suite using the test-runner, it will run the test quietly
by default. The only indication that the test passed is the absence of a
message from waf saying that the program returned a non-zero exit code. To
get some output from the test, you need to specify an output file to which
the tests will write their XML status, using the --out option. You need to be
careful interpreting the results, because the test suites will append results
to this file. Try,
./waf --run "test-runner --basedir=`pwd` --suite=pcap-file-object --out=myfile.xml"
If you look at the file myfile.xml, you should see something like,
<TestSuite>
  <SuiteName>pcap-file-object</SuiteName>
  <TestCase>
    <CaseName>Check to see that PcapFile::Open with mode ``w'' works</CaseName>
    <CaseResult>PASS</CaseResult>
    <CaseTime>real 0.00 user 0.00 system 0.00</CaseTime>
  </TestCase>
  <TestCase>
    <CaseName>Check to see that PcapFile::Open with mode ``r'' works</CaseName>
    <CaseResult>PASS</CaseResult>
    <CaseTime>real 0.00 user 0.00 system 0.00</CaseTime>
  </TestCase>
  <TestCase>
    <CaseName>Check to see that PcapFile::Open with mode ``a'' works</CaseName>
    <CaseResult>PASS</CaseResult>
    <CaseTime>real 0.00 user 0.00 system 0.00</CaseTime>
  </TestCase>
  <TestCase>
    <CaseName>Check to see that PcapFileHeader is managed correctly</CaseName>
    <CaseResult>PASS</CaseResult>
    <CaseTime>real 0.00 user 0.00 system 0.00</CaseTime>
  </TestCase>
  <TestCase>
    <CaseName>Check to see that PcapRecordHeader is managed correctly</CaseName>
    <CaseResult>PASS</CaseResult>
    <CaseTime>real 0.00 user 0.00 system 0.00</CaseTime>
  </TestCase>
  <TestCase>
    <CaseName>Check to see that PcapFile can read out a known good pcap file</CaseName>
    <CaseResult>PASS</CaseResult>
    <CaseTime>real 0.00 user 0.00 system 0.00</CaseTime>
  </TestCase>
  <SuiteResult>PASS</SuiteResult>
  <SuiteTime>real 0.00 user 0.00 system 0.00</SuiteTime>
</TestSuite>
If you are familiar with XML this should be fairly self-explanatory. It is
also not a complete XML file since test suites are designed to have their
output appended to a master XML status file as described in the test.py
section.
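Because the status file is a series of appended TestSuite fragments rather
than one well-formed XML document, any consumer has to account for that
before parsing. A minimal Python sketch of one way to do so (the helper name
parse_appended_results is hypothetical, and this is not how test.py itself is
implemented):

```python
import xml.etree.ElementTree as ET

def parse_appended_results(text):
    """Parse a status file made of appended <TestSuite> fragments by
    wrapping them in a synthetic root element, then return a list of
    (suite name, suite result) pairs."""
    root = ET.fromstring("<Results>%s</Results>" % text)
    return [(suite.findtext("SuiteName"), suite.findtext("SuiteResult"))
            for suite in root.findall("TestSuite")]
```

Wrapping the fragments in a single root is what makes a standard XML parser
accept the concatenated output.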
This document was generated on April 21, 2010 using texi2html 1.82.