
10.2 Writing new test cases

The test cases are really just shell scripts. They are suitable for /bin/sh on most machines. The procedure for running these is explained in Running the tests. These shell scripts read in some common function definitions (mostly from tests/common/test-common) and then proceed to conduct the tests. This section explains those commands used in the test scripts that are not simply normal shell commands. Normal shell commands like sed and grep are not described.

The best approach for writing new test scripts or just individual new test cases is to first think of some aspect that needs better test coverage, and then to write the test script, basing it on an existing script. To make sure that your new tests are really checking for the right things, you can run them against an existing SCCS implementation other than CSSC.
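
For example, a new script usually takes roughly the following overall shape. This is only a sketch; the file name, label and comments are invented for illustration, and the existing scripts in the `tests/' subdirectories are the authoritative models:-

#! /bin/sh
# Sketch of a minimal test script.
. ../common/test-common

g=testfile
s=s.$g
remove $g $s

echo hello > $g || miscarry cannot create file $g.

# Create an s-file; admin should succeed and print nothing on stdout.
docommand n1 "${admin} -i$g -yInitialComment $s" 0 "" IGNORE

remove $g $s
success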



10.2.1 Testing the Test Suite

The best strategy for testing the CSSC test suite itself is to run it against a genuine edition of SCCS, if you have one available. Before running make check, set the environment variable `dir' to point to the directory containing the programs to be tested; this should usually be `/usr/sccs'.

In many implementations of SCCS, some of the tools execute others (for example, delta often executes get to retrieve the previous version of the controlled file). This means that to correctly test the test suite, your PATH environment variable should be set up to select the SCCS tools you want to test. Here is an example of the correct way to set up the environment to test SCCS tools in `/usr/ccs/bin' :-

 
dir=/usr/ccs/bin
PATH=/usr/ccs/bin:$PATH
export dir PATH
make check

When you are sure that the test script is expecting the correct behaviour from programs under test, you can then run it against CSSC. After all, if you're going to set out writing your test by assuming that CSSC is correct in the area under test, of what value is the test?



10.2.2 docommand

The docommand function runs a specified program and checks its return value, standard output and error output against expected values. If any mismatch occurs, fail is called. The docommand function is invoked with up to six arguments:-

 
docommand [--silent] label command retval stdout stderr

The docommand function normally prints the label to indicate what stage the current test script has reached, followed by "done" when it has finished. The --silent option turns off this behaviour, so that if nothing goes wrong, no progress message is printed. This is occasionally used for commands that have already been tested by a script and are known to work, but which must be repeated several times in order to set up for a later test. I recommend you try to avoid using this option.

The other arguments to docommand are:-

label

This is what is printed to indicate what is going on when the test starts. If all goes according to plan, it is followed by `...done'.

command

This is the command to be executed, with all the required arguments.

retval

This is the expected return value. If command exits returning any other value, fail will be called. If the test should not care about the return value, use `IGNORE' as retval.

stdout

This is the text expected on the standard output of command. If the test should not care about the standard output, use `IGNORE' as stdout.

stderr

This is the text expected on the error output of command. If the test should not care about the error output, use `IGNORE' as stderr.

This command will run admin with three arguments, and expect it to produce no output at all and return the value zero:-

 
docommand C5 "${admin} -ifoo -yMyComment $s" 0 "" ""

This command does something similar, but the command is expected to fail, returning 1 as its exit status:-

 
# We should not be able to admin -i if the s-file already exists.
docommand I7 "${admin} -ifoo $s" 1 "" IGNORE

In the example above, the error messages produced by SCCS and CSSC are different, but both indicate the same thing. However, since the messages are different, `IGNORE' is used.

The stdout and stderr arguments are processed with the echo_nonl function, and so escape codes are valid and indeed heavily used:-

 
# Test the -m (annotate SID) option with several deltas...
docommand N4 "$get -p -m $s" 0 \
    "1.1\tline1\n1.1\tline2\n1.2\tline3\n" \
    IGNORE
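
To make the checking behaviour concrete, here is a much-simplified, hypothetical sketch of the kind of thing docommand does; the real definition lives in `tests/common/test-common' and is more careful than this (for example, it also handles the --silent option):-

docommand () {
    label="$1"
    shift                     # now $1 is the command, $2 the expected status
    echo_nonl "$label..."
    rv=0
    got=`eval "$1" 2>errs.tmp` || rv=$?
    if test "$2" != IGNORE && test "$rv" -ne "$2"
    then
        fail "$label: $1: Expected return value $2, got return value $rv"
    fi
    # ...similar checks compare $got and errs.tmp with the expected output...
    echo done
}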


10.2.3 remove

The remove function is for clearing up temporary files after tests have finished, and for making sure that no instance of a file that a test is supposed to create already exists before the test is made. Typical usage is this:-

 
f=1test
s=s.$f
p=p.$f
remove $f $s $p

The remove function is defined as:-

 
remove () { rm -rf $* || miscarry Could not remove $* ; }


10.2.4 success

The success function prints a message indicating that the current test script has passed, and exits successfully. This is always done at the foot of a test script.
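
For example, the last few lines of a test script typically clean up the files it created and then call success (the file names here are illustrative):-

remove $g $s $p
success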



10.2.5 fail

If a test fails, it is usually because one of the docommand calls fails, and so direct calls to the fail function are rare. However, if you do want to call this function directly, you should supply as its argument a short description of what has gone wrong. For example, the docommand function uses fail in the following way:-

 
fail "$label: $1: Expected return value $2, got return value $rv"


10.2.6 echo_nonl

The echo_nonl function outputs its argument, without a following newline. Escape codes as for echo(1) are understood. Depending on the actual flavour of system that the test suite is running on, this might internally use echo -n or echo -e .....\c.
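
For illustration only, test-common might select a definition of roughly this kind at start-up (this is a hypothetical sketch, not the actual code):-

if test "`echo -n x`" = x
then
    # This echo understands -n.
    echo_nonl () { echo -n "$*" ; }
else
    # Rely on the \c escape to suppress the newline instead.
    echo_nonl () { echo "$*\c" ; }
fi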

Please do not use either the `-n' or `-e' options of echo(1) directly in test scripts, because they do not work in the same way on all machines; the echo_nonl function is provided precisely so that you do not need to, so please use it. Please note also that while the printf(1) command may seem superior, it cannot be used because not all systems provide it.

Typical usage of echo_nonl might be:-

 
echo_nonl Please wait while I finish what I am doing...
# ...
echo done


10.2.7 miscarry

The miscarry function is used to indicate that while the test suite has not found a problem with the programs being tested, there has been some other kind of problem that prevents further testing.

Typical usage might be:-

 
remove foo
echo '%M%' > foo
test `cat foo` = '%M%' || miscarry cannot create file foo.


10.2.8 real-thing

Implementations of SCCS vary in several ways, but the CSSC test suite tries very hard to pass when run against any genuine implementation of SCCS, unless that implementation has a definite bug. For example, although the CSSC version of admin -i supports automatic switch-over to binary mode for a file provided via stdin, and the test suite tests this, the same behaviour is not required of SCCS itself.

The `real-thing' script checks whether we are actually testing a real implementation of SCCS. It sets the environment variable TESTING_CSSC to `true' or `false', according to whether or not the programs under test are CSSC.
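
For example, a test script might guard a check which only CSSC is required to pass like this (a hypothetical fragment; the relative path to `real-thing' is assumed to follow the pattern of the other common scripts):-

. ../common/real-thing
if $TESTING_CSSC
then
    # Checks in this branch are only required to pass for CSSC.
    docommand X1 "${admin} -ifoo $s" 0 "" IGNORE
fi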

If you are really interested in whether the implementation being tested supports binary files or not, you should be using the `config-data' script instead.



10.2.9 need-prt

Some versions of SCCS lack the prt program, and its possible absence is another thing the CSSC test suite needs to allow for in order to run successfully against all working versions of SCCS. For this reason, the tests for this tool (in the `tests/prt' directory) are skipped if prt is missing. When writing test scripts, you should never use prt unless you are actually testing prt itself (you can almost always use prs instead).

If, on the other hand, your test is specifically designed to exercise the functionality of prt itself, just source `need-prt' before the first test. The `need-prt' script will skip the remainder of the invoking test script if prt is missing. For example, you might use it like this:-

 
#! /bin/sh
. ../common/test-common
. ../common/need-prt
s=s.testfile
remove $s
docommand e1 "${prt} $s" 1 IGNORE IGNORE
success
