One of the key goals of atf is to let the end user, not only the developer, easily run the tests after the corresponding application is installed. (In our case, application = NetBSD, but remember that atf also aims to be independent from NetBSD.) This also means, among other things, that the user must not need any development tools installed (the comp.tgz set) to run the tests, which rules out using make(1)... how glad I am about that! :-)

Based on this idea, each application using atf will install its tests alongside its binaries, the current location being /usr/tests/<application>. These tests will be accompanied by a control file, an Atffile, that lists which tests have to be run and in which order. (In the future this may also include configuration or other settings.) Later on, the user will be able to launch the atf-run tool inside any of these directories to automatically run all the provided tests, and the tool will generate a pretty report while the tests run.
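
For instance, once the tests are installed, a full run should boil down to something like this (an illustrative session; I am not showing the output because, as you will see below, the report is not presentable yet):

$ cd /usr/tests/atf
$ atf-run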

Given that atf is an application, it has to be tested. After some work today, it is finally possible for atf to test itself! :-) Of course, it also comes with several bootstrap tests, written using GNU Autotest, to ensure that atf's core functionality works before one can run the tests written using atf itself. Otherwise one could get unexpected passes due to bugs in the atf code.
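
For the curious, a bootstrap check written in GNU Autotest looks roughly like the snippet below. The specific test shown is made up for illustration, but the AT_* macros are the standard Autotest ones:

# testsuite.at: sanity-check a test program before trusting atf itself.
# (Illustrative only; the real bootstrap tests check other things too.)
AT_INIT([atf bootstrap tests])

AT_SETUP([t_file_handle runs and exits cleanly])
# Expect exit status 0; ignore whatever it prints on stdout/stderr.
AT_CHECK([./t_file_handle], [0], [ignore], [ignore])
AT_CLEANUP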

This is what atf installs:
$ find /tmp/local/tests
/tmp/local/tests
/tmp/local/tests/atf
/tmp/local/tests/atf/Atffile
/tmp/local/tests/atf/units
/tmp/local/tests/atf/units/Atffile
/tmp/local/tests/atf/units/t_file_handle
/tmp/local/tests/atf/units/t_filesystem
/tmp/local/tests/atf/units/t_pipe
/tmp/local/tests/atf/units/t_pistream
/tmp/local/tests/atf/units/t_postream
/tmp/local/tests/atf/units/t_systembuf
$
All the t_* files are test programs written using the features provided by libatf. As you can see, each directory provides an Atffile which lists the tests to run and the directories to descend into.
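
To make this clearer: the units Atffile conceptually carries just the list of test programs shown above. Mind you, the on-disk syntax is not settled yet, so the following contents are only an approximation:

$ cat /tmp/local/tests/atf/units/Atffile    # illustrative contents
t_file_handle
t_filesystem
t_pipe
t_pistream
t_postream
t_systembuf
$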

The atf-run tool already works (*cough* its code is ugly, really ugly) and returns an appropriate error code depending on the outcomes of the tests. However, the report it generates is completely unreadable. This will be the next thing to attack: I want to be able to generate plain-text reports that can be followed as the tests run, but also to generate pretty HTML files. To do the latter, the plan is to use some intermediate format such as XML and have another tool do the formatting.
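
In other words, the plumbing I have in mind looks roughly like this. Note that the formatting tool does not exist yet, so its name and interface below are purely hypothetical:

$ atf-run > results.xml                   # dump an intermediate, machine-readable report
$ atf-format results.xml > report.html    # hypothetical formatter tool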