GSoC 2014 idea: Port FreeBSD's old-style tests to ATF

Are you a student interested in contributing to a production-quality operating system by increasing its overall quality? If so, you have come to the right place! As you may already know, the Google Summer of Code 2014 program is on and FreeBSD has been accepted as a mentoring organization. As it so happens, I have a project idea that may sound interesting to you. During the last few months, we have been hard at work adding a standardized test suite to the FreeBSD upstream source tree as described in the TestSuite project page. However, a test suite is of no use if it lacks a comprehensive collection of tests!

March 12, 2014 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/freebsd">freebsd</a>, <a href="/tags/soc">soc</a>, <a href="/tags/testing">testing</a>
Continue reading (about 3 minutes)

Killing the ATF deprecated tools code

The time to kill the deprecated tools —atf-report and atf-run principally— from the upstream ATF distribution file has come. Unfortunately, this is not the trivial task that it may seem. But wait, "Why?" and "Why now?" Because NetBSD still relies on the deprecated tools to run its test suite, they cannot just be killed. Removing them from the upstream distribution, however, is actually a good change for both ATF and NetBSD.

February 5, 2014 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/netbsd">netbsd</a>
Continue reading (about 5 minutes)

Three productive days on the Kyua front

This being Thanksgiving week in the U.S. and Google giving us Thursday and Friday off, I decided to take Monday to Wednesday off as well to spend some time hacking on Kyua — yes, finally, after months of being inactive. And what a three productive days! Here comes a little briefing on the three fronts in which I made progress. (This puts on hold the header files series until next Monday... but most of you are probably away anyway. Enjoy the holidays if they apply to you!)

November 28, 2013 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/fedora">fedora</a>, <a href="/tags/freebsd">freebsd</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/lua">lua</a>
Continue reading (about 6 minutes)

Projects migrated to Git

I finally took the plunge. Yesterday night, I migrated the Kyua and Lutok repositories from Subversion to Git. And this morning I migrated ATF from Monotone and custom hosting to Git and Google Code; oh, and this took way longer than expected.

Migration of Kyua and Lutok

Migrating these two projects was straightforward. After preparing a fresh local Git repository following the instructions posted yesterday, pushing to Google Code is a simple matter:

    $ git remote add googlecode https://code.google.com/p/your-project
    $ git push googlecode --all
    $ git push googlecode --tags

One of the nice things I discovered while doing this is that a Google Code project supports multiple repositories when the VCS system is set to Git or Mercurial. By default, the system creates the default and wiki repositories, but you can add more at will. This is understandable given that, in Subversion, you can check out individual directories of a project, whereas you cannot do that in the other supported VCSs: you actually need different repositories to group different parts of the project.

I performed the full migration on a Linux box so that I could use the most recent Git version along with a proven binary package. The migration went fine, but I hit a little problem when attempting a fresh checkout from NetBSD: git under NetBSD will not work correctly against SSL servers because it lacks the necessary CA certificates. The solution is to install the security/mozilla-rootcerts package and follow the instructions printed during installation; why this does not happen automatically escapes my mind.

Migration of ATF

I had been having doubts about migrating ATF itself, although if Kyua moved to Git, moving ATF as well was a prerequisite. Certainly I could convert the repository to Git, but where would I host it afterwards? Creating a new Google Code project just for this seemed like too much of a hassle.

My illumination came when I found out, as above, that Google Code supports an arbitrary number of repositories in a project once it is converted to Git. So, for ATF, I just ran mtn git_export with the appropriate flags, created a new atf repository on the Kyua site, and pushed the contents there.

Along the way, I also decided to kill the home-grown ATF web site and replace it with a single page containing all the relevant information. At this point, ATF and Kyua are supposed to work together quite tightly (in the sense that ATF is just a "subcomponent" of Kyua), so hosting the two projects on the same site makes sense.

Now, let's drink the Kool-Aid.

February 26, 2012 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/git">git</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/lutok">lutok</a>
Continue reading (about 3 minutes)

Switching projects to Git

The purpose of this post is to tell you the story of the Version Control System (VCS) choices I have made while maintaining my open source projects ATF, Kyua and Lutok. It also details where my thoughts are headed these days. This is not a description of centralized vs. distributed VCSs, and it does not intend to be one. It does not intend to compare Monotone to Git either, although you'll probably feel like it does while reading the text. Note that I have been fully aware of the advantages of DVCSs over centralized systems for many years, but for one reason or another I have been "forced" to use centralized systems on and off. The Subversion hiccup explained below is... well... regrettable, but it's all part of the story! Hope you enjoy the read.

Looking back at Monotone (and ATF)

I still remember the moment I discovered Monotone in 2004: simply put, it blew my mind. It was clear to me that Distributed Version Control Systems (DVCSs) were going to be the future, and I eagerly adopted Monotone for my own projects. A year later, Git appeared and took all the praise for DVCSs: developers all around started migrating en masse to Git, leaving behind other (D)VCSs. Many of these developers then went on to make Git usable (it certainly wasn't at first) and well-documented. (Note: I really dislike Git's origins... but I won't get into details; it has been many years since that happened.)

One of the projects in which I chose to use Monotone was ATF. That might have been a good choice at the time, despite being very biased, but it has caused problems over time. These have been:

- Difficulty getting Monotone installed: While most Linux distributions carry a Monotone binary package these days, that was not the case years ago. But even if all Linux distributions have binary packages nowadays, the main consumers of ATF are NetBSD users, and their only choice is to build their own binaries. This generates discomfort because there is a lot of FUD surrounding C++ and Boost.
- High entry barrier for potential contributors: It is a fact that Monotone is not popular, which means that nobody is familiar with it. Monotone's CLI is very similar to CVS's, and I'd say the knowledge transition for basic usage is trivial, but the process of cloning a remote project was really convoluted until "recently". The lack of binary packages, combined with complex instructions on just how to fetch the sources of a project, only helps scare people away.
- Missing features: Despite the years that have passed, Monotone still lacks some important features that impact its usability. For example, to my knowledge, it's still not possible to do work-directory merges and, while the interactive merges offered by the tool seem like a cool idea, they are not really practical because you get no chance to validate the merge. It is also not possible, for example, to reference the parent commit of any given commit without looking up the parent's ID. (Yes, in a DAG there may be more than one parent, but that's not the common case.) Or to know what a push/pull operation is going to change on both sides of the connection. And key management and trust has been broken since day one and is still not fixed. Etc., etc., etc.
- No hosting: None of the major project hosting sites support Monotone. There are some playground hosting sites, but they are toys. I have also maintained my own servers at times, but that is certainly inconvenient and annoying.
- No tools support: Pretty much no major development tools support Monotone as a VCS backend. Consider Ohloh, your favorite bug tracking system, or your editor/IDE. (I attempted to install Trac with some alpha plugin to add Monotone support and it was a huge mess.)
- No more active development: This is the drop that spills the cup. The developers of Monotone that created the foundations of the project left years ago. While the remaining developers did a good job in putting out a 1.0 release by March 2011, nothing else has happened since then. To me, it looks like a dead project at this point :-(

Despite all this, I have kept maintaining ATF in its Monotone repository, but I have felt the pain points above for years. Furthermore, the few times an end user has approached ATF to offer a contribution, they have had tons of trouble getting a fresh checkout of the repository and given up. So staying with Monotone hurts the project more than it helps.

The adoption of Subversion (in Kyua)

To fix this mess, when I created the Kyua project two years ago, I decided to use Subversion instead of a DVCS. I knew upfront that it was a clear regression from a functionality point of view, but I was going to live with it. The rationale for this decision was to lower the entry barrier to Kyua by using off-the-shelf project hosting. And, because NetBSD developers use CVS (shrug), choosing Subversion was reasonable because of its workflow similarities to CVS and thus, supposedly, its low entry barrier.

Honestly, the choice of Subversion has not fixed anything, and it has introduced its own trouble. Let's see why:

- ATF continues to be hosted in a Monotone repository, and Kyua depends on ATF. You can spot the problem, can't you? It's a nightmare to check out all the dependencies of Kyua, using different tools, just to get the thing working.
- As of today, Git is as popular as Subversion, if not more so. All the major operating systems have binary packages for Git and/or bundle Git in their base installation (hello, OS X!). Installing Git on NetBSD is arguably easier (at least faster!) than installing Subversion. Developers are used to Git. Or let me fix that: developers love Git.
- Subversion gets in the way more than it helps; it really does once you have experienced what other VCSs have to offer. I currently maintain independent checkouts of the repository (appropriately named 1, 2 and 3) so that I can develop different patches in each before committing the changes. This gets old really quickly. Not to mention when I have to fly for hours: being stuck without an internet connection and plain old Subversion... is suboptimal. Disconnected operation is key.

The fact that Subversion is slowing down development, and the fact that it does not attract new contributors any better than Git would, make me feel it is time to say goodbye to Subversion.

The migration to Git

At this point, I am seriously considering switching all of ATF, Lutok and Kyua to Git. No Mercurial, no Bazaar, no Fossil, no anything else. Git. I am still not decided, and at this point all I am doing is toying with the migration process of the existing Monotone and Subversion repositories to Git while preserving as much of the history as possible. (It's not that hard, but there are a couple of details I want to sort out first.)

But why Git?

- First and foremost, because it is the most popular DVCS. I really want to have the advantages of disconnected development back. (I have tried git-svn and svk and they don't make the cut.)
- At work, I have been using Git for a while to cope with the "deficiencies" of the centralized VCS of choice. We use the squashing functionality intensively, and I find it invaluable to constantly and shamelessly commit incomplete/broken pieces of code that no one will ever see. Not everything deserves to be in the recorded history!
- Related to the above, I've grown accustomed to keeping unnamed, private branches in my local copy of the repository. These branches need not match the public repository. Monotone had this functionality in the form of "multiple heads for a given branch", but that approach is not as flexible as named private branches.
- Monotone is able to export a repository to Git, so the transition is easy for ATF. I have actually been doing this periodically so that Ohloh can gather stats for ATF.
- Lutok and ATF are hosted in Google Code, and this hosting platform now supports Git out of the box.

No Mercurial? Mercurial looks a lot like Monotone, and it is indeed very tempting. However, the dependency on Python is not that appropriate in the NetBSD context. Git, without its documentation, builds very quickly and is lightweight enough. Plus, if I have to change my habits, I would rather go with Git, given that the other open source projects I am interested in use Git.

No Bazaar? No: not that popular. And the fact that it is based on GNU arch makes me cringe.

No Fossil? This tool looks awesome and provides much more than DVCS functionality: think distributed wiki and bug tracking; cool, huh? It also appears to be a strong contender in the current discussions about which system NetBSD should choose to replace CVS. However, it is a one-man effort, much like Monotone was, and few people are familiar with it, so Fossil wouldn't solve the issue of lowering the entry barrier. Choosing Fossil would mean repeating the same mistake as choosing Monotone.

So, while Git has its own deficiencies (e.g. I still don't like the fact that it is unable to record file moves; heuristics are not the same), it seems like a very good choice. The truth is, it will ease development by a factor of a million (OK, maybe not that much) and, because the only person (right?) who currently cares about the upstream sources of any of these projects is me, nobody should be affected by the change. The decision may seem a bit arbitrary given that the points above don't provide much rationale to compare Git against the other alternatives. But if I want to migrate, I have to make a choice, and this is the one that seems most reasonable. Comments? Encouragements? Criticisms?

February 11, 2012 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/git">git</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/lutok">lutok</a>, <a href="/tags/monotone">monotone</a>, <a href="/tags/vcs">vcs</a>
Continue reading (about 8 minutes)

Kyua: Weekly status report

- Created an RPM package for Lutok for inclusion in Fedora.
- Created a preliminary RPM spec for ATF for Fedora. Now in discussions with the FPC to figure out how to install the tests on a Fedora system, as /usr/tests may not be appropriate.
- No activity on Kyua itself, though, unfortunately.

February 7, 2012 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/lutok">lutok</a>, <a href="/tags/report">report</a>
Continue reading (about 1 minute)

Kyua: Weekly status report

- Released ATF 0.15 and imported it into NetBSD.
- Added support for integer/float printf-like modifiers to the utils::format module. These will be required to beautify size and time quantities in the reports and error messages. I spent way more time than I wanted on this. At first, I attempted to use std::snprintf to parse and process the format modifiers for integers and floats so that I could avoid implementing a custom parser for them. While this sounds like a cool idea (yay, code reuse!), it resulted in an ugly, nasty and horrible mess. In the end, I just implemented custom parsing of the formatters, which was way easier and "good enough" for Kyua's needs.
- Started work on backporting ATF's new require.memory property into Kyua. This needs a way to parse and format byte quantities in user-friendly forms (e.g. 1k, 2m, etc.)... hence the previous work on utils::format!
- Set up a Google+ Page for Kyua. I have no idea what to use it for yet. Maybe the status reports should go in there. Ideas?
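The byte-quantity handling mentioned above can be pictured roughly as follows. This is a hypothetical sketch, not Kyua's actual utils::format code; the names format_bytes and parse_bytes are invented for illustration.

```cpp
#include <cstdint>
#include <sstream>
#include <stdexcept>
#include <string>

// Hypothetical sketch of formatting/parsing byte quantities in the
// user-friendly forms mentioned in the post (1k, 2m, ...).  Not Kyua's
// actual utils::format code; names and behavior are illustrative only.
std::string format_bytes(const std::uint64_t bytes) {
    static const struct { std::uint64_t factor; char suffix; } units[] = {
        { std::uint64_t(1) << 30, 'g' },
        { std::uint64_t(1) << 20, 'm' },
        { std::uint64_t(1) << 10, 'k' },
    };
    for (const auto& unit : units) {
        // Use a suffix only when the quantity divides evenly.
        if (bytes >= unit.factor && bytes % unit.factor == 0) {
            std::ostringstream oss;
            oss << (bytes / unit.factor) << unit.suffix;
            return oss.str();
        }
    }
    std::ostringstream oss;
    oss << bytes;  // Fall back to a raw byte count.
    return oss.str();
}

std::uint64_t parse_bytes(const std::string& text) {
    std::size_t pos;
    const std::uint64_t value = std::stoull(text, &pos);
    if (pos == text.length())
        return value;  // No suffix: a raw byte count.
    if (pos + 1 != text.length())
        throw std::invalid_argument("trailing garbage in " + text);
    switch (text[pos]) {
    case 'k': case 'K': return value << 10;
    case 'm': case 'M': return value << 20;
    case 'g': case 'G': return value << 30;
    default: throw std::invalid_argument("unknown unit in " + text);
    }
}
```

With helpers along these lines, a require.memory value such as 2m would round-trip: parsing "2m" yields 2097152 bytes and formatting 2097152 bytes yields "2m".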

January 23, 2012 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 1 minute)

Kyua: Weekly status report

Finally some progress!

- Backported the require.memory changes in NetBSD to the ATF upstream code, and extended them to support OS X as well.
- Backported local pkgsrc patches to ATF into the upstream code.
- Started to prepare ATF 0.15 by doing test runs on NetBSD/i386 and NetBSD/amd64 and by building the code in various Linux distributions. Several build bugs were fixed along the way.
- Spent a long while trying to figure out how the Fedora package maintainer procedure has changed over the last three years in order to create packages for ATF, Lutok and Kyua. Not very successful yet, unfortunately.
- Nothing on the Kyua front, but getting a new release of ATF out the door has higher priority now!

January 15, 2012 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 1 minute)

Kyua: Weekly status report

The post title should mention ATF instead of Kyua... but I'm keeping the usual one for consistency:

- Integrated timestamps into the XML and HTML reports generated by atf-report. These should soon show up in the continuous tests of NetBSD.
- Worked on integrating the use of POSIX timers into atf-run after Christos Zoulas performed these changes in the NetBSD tree. The result is quite awful because I need to keep compatibility with systems that do not provide the "new" interface...

December 26, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/report">report</a>
Continue reading (about 1 minute)

Kyua: Weekly status report

Unfortunately, not much activity this week due to travel. Anyway, some work went in:

- Preliminary code to generate HTML reports from kyua report. This is easy peasy but boring. The current code was written as a proof of concept and is awful, which is why it was not committed. I'm now working on cleaning it up.
- Backported test program and test case timestamping into ATF based on a patch from Paul Goyette. This is a very useful feature to have, and it will have to be added to Kyua later. (It has always been planned, but I have not had the time yet.)

December 19, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 1 minute)

Kyua: Weekly status report

Some significant improvements this week:

- Finally submitted the code to store and load full test case definitions. This is quite tricky (and currently very, very ugly), but it works and it will allow the reports to include all kinds of information from the test cases.
- Removed the Atffiles from the tree; yay! For a long time, I had been using atf-run to run broken tests because atf-run allowed me to watch the output of the test case being debugged. However, this has been unnecessary since the introduction of the debug command in late August, so I now feel confident that these files can go. (And debug is much more powerful than atf-run because you can target a single test case instead of a whole test program.)
- Some crazy work attempting to hide the names of SQLite types from the sqlite::statement interface. I've only been able to do so somewhat decently for bind; all my attempts at doing the same for column have resulted in horrible code so far. So no, such changes have not been submitted.
- As of a few minutes ago, kyua test now records the output of the test cases (stdout and stderr) in the database. This will be invaluable for debugging test cases, particularly when the reports are posted online.
- Some preliminary work on implementing HTML reports. This, however, has not received much progress because the previous item requires completion first.

I'm quite excited at this point. HTML reports are a few weeks away at most. Once that happens, it will be time to start considering replacing the atf-run / atf-report duo for good, particularly within NetBSD. This will certainly not be easy... but all the work that has gone into Kyua so far has this sole goal!
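The bind problem described above maps naturally onto C++ overloading. The toy class below is illustrative only: it is not Kyua's actual sqlite::statement code, and instead of calling the real sqlite3_bind_* functions it just records which overload the compiler picked.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Toy sketch: hide SQLite's type-specific bind functions behind a single
// overloaded bind() name.  A real wrapper would forward to the sqlite3 C
// API (sqlite3_bind_int64, sqlite3_bind_double, sqlite3_bind_text); this
// mock only records which overload was selected, for demonstration.
class statement {
public:
    std::vector<std::string> dispatched;  // For illustration only.

    void bind(const int index, const std::int64_t value) {
        (void)index; (void)value;  // Real code: sqlite3_bind_int64(...).
        dispatched.push_back("int64");
    }

    void bind(const int index, const double value) {
        (void)index; (void)value;  // Real code: sqlite3_bind_double(...).
        dispatched.push_back("double");
    }

    void bind(const int index, const std::string& value) {
        (void)index; (void)value;  // Real code: sqlite3_bind_text(...).
        dispatched.push_back("text");
    }
};
```

This also hints at why column is harder: overload resolution cannot dispatch on a return type, so the caller would have to name the type somehow (e.g. via a member template such as column<int64_t>(index)), which matches the kind of awkwardness the post alludes to.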

December 11, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 2 minutes)

Kyua: Weekly status report

Kyua has finally gained a report subcommand, aimed at processing the output data of an action (stored in the database) and generating a user-friendly report in a variety of formats. This is still extremely incomplete, so don't get your hopes too high yet ;-) The current version of the report command takes an action and all it does is dump its runtime context (run directory, environment variables, etc.). Consider it just a proof of concept. I have now started work on loading the data of test case results for a particular action, and once that is done, the report command will start yielding really useful data: i.e. it will actually tell you what happened during a particular execution of a test suite. The way I'm approaching the work these days is by building the skeleton code to implement the basic functionality first (which actually involves writing a lot of nasty code), with the goal of adding missing pieces later bit by bit. For example, at this moment I'm only targeting text-based outputs with a limited set of data. However, when that is done, adding extra data or different formats will be relatively easy. Generating HTML dashboards (without going through XML, as was the case of atf-report!) is definitely highly prioritized. By the way: I just realized it has already been one year since Kyua saw life. Wow, time flies. And only now we are approaching a point where killing the atf-run / atf-report pair is doable. I'm excited.

November 14, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 2 minutes)

Diversions in Autoconf (actually, in M4sugar)

Have you ever wondered how Autoconf reorganizes certain parts of your script regardless of the order in which you invoke the macros in your configure.ac script? For example, how come you can define --with-* and --enable-* flags anywhere in your script and these are all magically moved to the option-processing section of the final shell script? After all, Autoconf is just a collection of M4 macros, and a macro preprocessor's only job is to expand macros in the input with predefined output texts. Isn't it?

Enter M4sugar's diversions. Diversions are a mechanism that allows M4 macros to output code to different text blocks, which are later concatenated in a specific order to form the final script.

Let's consider an example (based on a few M4 macros to detect the ATF bindings from your own configure scripts). Suppose you want to define a macro FROB_ARG to provide a --with-frob argument whose value must be either "yes" or "no". Also suppose you want another macro, FROB_CHECK, to detect whether libfrob exists. Lastly, you want the user to be able to use these two independently: when FROB_CHECK is used without invoking FROB_ARG first, you want it to unconditionally look for the library; otherwise, if FROB_ARG has been used, you want to honor its value. We could define these macros as follows:

    AC_DEFUN([FROB_ARG], [
        AC_ARG_WITH([frob],
                    [AS_HELP_STRING([--with-frob=yes|no], [enable frob])],
                    [with_frob=${withval}],
                    [with_frob=yes])
    ])

    AC_DEFUN([FROB_CHECK], [
        m4_divert_text([DEFAULTS], [with_frob=yes])

        if test "${with_frob}" = yes; then
            ... code to search for libfrob ...
        elif test "${with_frob}" = no; then
            :  # Nothing to do.
        else
            AC_MSG_ERROR([--with-frob must be yes or no])
        fi
    ])

Note the m4_divert_text call above: this macro invocation tells M4sugar to store the given text (with_frob=yes) in the DEFAULTS diversion. When the script is later generated, this text will appear at the beginning of the script, before the command-line options are processed, completely separated from the shell logic that consumes this value later on. With this we ensure that the with_frob shell variable is always defined regardless of whether the FROB_ARG macro was called. If that macro is called, with_frob will be set during the processing of the options, overriding the default defined in the DEFAULTS section. If the macro has not been called, the variable keeps its default value for the duration of the script.

Of course, this example is fictitious and could be simplified in other ways. But, as you can see in the referred change and in the Autoconf code itself, diversions are extensively used for trickier purposes. In fact, Autoconf uses diversions to topologically sort macro dependencies in your script and output them in a specific order to satisfy cross-dependencies. Isn't that cool? I never cease to be amazed, but I also don't dare to look at how this works internally, for my own sanity...

September 6, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/autoconf">autoconf</a>, <a href="/tags/m4">m4</a>
Continue reading (about 3 minutes)

Kyua: Weekly status report

Not a very active week: I've been on call four days and they have been quite intense. Plus I have had to go through a "hurricane" in NYC. That said, I had some time to do a bit of work on Kyua and the results have been nice :-)

- Made calls to getopt_long(3) work with GNU getopt by using the correct value of optind to reset option processing.
- Improved the configure script to error out more clearly when required dependencies (pkg.m4 and Lua) are not found.
- Did some portability fixes.
- And released Kyua 0.2! (along with a pkgsrc package)

At this point, I have to start thinking about how to implement test suite reporting within Kyua (i.e. how to replace atf-report). This probably means learning SQLite and refreshing my incredibly rusty SQL skills. Also, it's time to (probably) split the utils::lua library into a separate package because several people are interested in it.
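The optind fix in the first bullet boils down to a portability quirk of getopt(3). Below is a minimal sketch assuming glibc, where assigning 0 to optind forces a full reinitialization on the next call (the BSD implementations instead use optind = 1 together with optreset = 1); the -o/--output option and the function name are invented for illustration.

```cpp
#include <getopt.h>

#include <string>
#include <vector>

// Sketch only: a tiny parser that can be invoked repeatedly over different
// argument vectors.  The hypothetical -o/--output option just collects its
// values; the key line is the optind reset the post's fix revolves around.
std::vector<std::string> collect_outputs(const int argc, char* const argv[]) {
    static const struct option longopts[] = {
        { "output", required_argument, nullptr, 'o' },
        { nullptr, 0, nullptr, 0 },
    };

    optind = 0;  // GNU getopt: restart option processing from scratch.

    std::vector<std::string> outputs;
    int ch;
    while ((ch = getopt_long(argc, argv, "o:", longopts, nullptr)) != -1) {
        if (ch == 'o')
            outputs.push_back(optarg);
    }
    return outputs;
}
```

Without the reset, a second call to collect_outputs() would resume scanning at whatever index the previous parse left in optind and silently skip options.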

August 28, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 1 minute)

Kyua: Weekly status report

- Implemented the "debug" command. Still very rudimentary, this command allows the user to run a test case without capturing its stdout or stderr, to aid in debugging failed test cases. In the future, this command will also allow things like keeping the work directory for manual inspection, or spawning a shell or a debugger in the work directory after a test case is executed.
- Many build fixes on different platforms in preparation for a 0.2 release. In particular, Kyua now builds under Ubuntu 10.04.1 LTS, but some tests fail.
- Had to disable the execution of the bootstrap test suite within Kyua because it stalls on systems where the default shell is not bash. I presume this is a bug in GNU Autotest, so I filed a report.

August 21, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 1 minute)

Kyua: Weekly status report

- Changed the --config and --variable options to be program-wide instead of command-specific. The configuration file should be able to hold properties that tune the behavior of Kyua, not just the execution of tests, so this makes sense.
- Added the config subcommand, which provides a way to inspect the configuration as read by Kyua.
- Got rid of the test_suites_var function from configuration files and replaced it with simple assignments to variables in the test_suites global table.
- Enabled detection of unused parameters by the compiler and fixed all warnings.
- Changed developer mode to only control whether warnings are enforced (not to enable the warnings themselves) and made developer mode disabled on formal releases.

Barring release testing, Kyua 0.2 should be ready soon :-)

August 15, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 1 minute)

Kyua: Weekly status report

- Added the ability to explicitly define timeouts for plain test programs. This completes last week's work that made running plain test programs possible at all but had an ugly TODO in it for this missing feature.
- The bootstrap test suite now runs as a single test case within the whole Kyua test suite. Demonstrates the plain test programs interface functionality :-)
- Started reshuffling code to make the --config and related flags program-wide. The goal is to allow the configuration file to tune the behavior of all of Kyua, so these flags must be made generic. They were previously specific to the test subcommand only.

August 8, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 1 minute)

Kyua: Weekly status report

- Implemented the "plain" test interface. This allows plugging "foreign test programs" into a Kyua-based test suite. (A foreign test program is a program that does not use any testing framework: it reports success and failure by means of an exit code.)
- Generalized code between the atf and plain interfaces and did some cleanups.
- Attempted to fix the ATF_REQUIRE_EQ macros in ATF to evaluate their arguments only once. This has proven to be tricky and therefore is not done yet. At the moment, the macros reevaluate the arguments on a failure condition, which is not too big a deal.
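The exit-code contract of the "plain" interface is easy to picture. The sketch below is illustrative only: run_plain_test and the result enum are made up, and Kyua's real engine does much more (isolation, timeouts, result collection); the only protocol between the runner and a foreign test program is whether it exits with status zero.

```cpp
#include <cstdlib>
#include <string>

#include <sys/wait.h>

// Illustrative sketch of the "plain" interface contract: a foreign test
// program passes if and only if it exits cleanly with status 0.  Names are
// made up; Kyua's real runner also isolates the test and enforces timeouts.
enum class result { passed, failed };

result run_plain_test(const std::string& command) {
    const int status = std::system(command.c_str());
    if (status != -1 && WIFEXITED(status) && WEXITSTATUS(status) == 0)
        return result::passed;
    return result::failed;  // Non-zero exit, signal, or spawn failure.
}
```

Any existing test program that follows the Unix exit-status convention can thus be plugged into a test suite unmodified.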

August 1, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 1 minute)

Kyua: Weekly status report

- Finished splitting the atf-specific code from the generic structures in the engine. The engine now supports the addition of extra test interfaces with minimal effort.
- Started implementing a "plain" test interface for test programs that do not use any test framework. This is to allow mixing non-atf tests into atf-based test suites, which is required in the NetBSD tree.

July 25, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 1 minute)

Kyua: Weekly status report

Slow week. I've been busy moving to NYC! Kept working on the splitting of ATF-specific code from the core test abstractions. The work is now focused on refactoring the results-reporting pieces of the code, which are non-trivial.

July 18, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 1 minute)

Kyua: Weekly status report

One of the major features I want in place for Kyua 0.2 is the ability to run "foreign" test programs as part of a test suite: i.e. to be able to plug non-ATF test programs into a Kyuafile. The rationale for this is to lower the entry barrier for newcomers to Kyua and, also, to allow running some foreign test suites that exist in the NetBSD source tree but are currently not run. The work this week has gone in the direction outlined above, among other things:

- Created an abstract base class to represent test programs and provided an implementation for ATF.
- Did the same thing for test cases.
- Moved the kyua-cli package from pkgsrc-wip into pkgsrc head. Installing Kyua is now a breeze under NetBSD (and possibly under other platforms supported by pkgsrc!)

The next steps are to generalize the test case results, clearly separate the ATF-specific code from the general abstractions, and add an implementation to run simple test programs.
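The abstraction in the first bullet can be pictured roughly like this. All names here are invented for illustration, not Kyua's actual classes; the point is that the engine can hold a collection of test programs without caring which interface each one implements.

```cpp
#include <memory>
#include <string>
#include <vector>

// Rough, hypothetical picture of the design described in the post: an
// abstract test_program base class with one subclass per test interface, so
// the engine treats ATF and non-ATF programs uniformly.
class test_program {
public:
    virtual ~test_program() {}

    // Identifier of the interface the test program implements.
    virtual std::string interface() const = 0;
};

class atf_test_program : public test_program {
public:
    std::string interface() const override { return "atf"; }
};

class plain_test_program : public test_program {
public:
    std::string interface() const override { return "plain"; }
};

// The engine can then iterate over a heterogeneous test suite.
std::vector<std::string> interfaces_of(
    const std::vector<std::shared_ptr<test_program>>& suite) {
    std::vector<std::string> names;
    for (const auto& program : suite)
        names.push_back(program->interface());
    return names;
}
```

Adding a new test interface then amounts to providing one more subclass, which is what makes the "plain" interface mentioned in the closing paragraph cheap to add.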

July 10, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 1 minute)

Kyua: Weekly status report

Belated update: Created a pkgsrc package for kyua-cli. Still in pkgsrc-wip though, because pkgsrc is in a feature freeze. Wrote a little tutorial on how to run NetBSD tests using Kyua. Started work on 0.2 by doing a minor UI fix in the about command. I've now started to look at how to split the engine into different "runners" to add support for test programs written without ATF. Not that I plan to use this feature... but having it in place will ensure that the internal interfaces are clean and may help in the adoption of Kyua.

July 5, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 1 minute)

Kyua: Weekly status report

This has been the big week: Wrote some user documentation for the kyua binary. Fixed some distcheck problems. Released Kyua 0.1! The next immediate thing to do is to write a short tutorial on how to run the NetBSD tests with Kyua and get some people to actually try it. After that, there are many things to improve and features to add :-)

June 26, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 1 minute)

Kyua: Weekly status report

A couple of things have happened: Released ATF 0.14. This release was followed by an import into NetBSD and fixing of subsequent fallout. Some performance improvements to atf-sh. After killing a bunch of complex shell constructions and removing lots of obsolete functions, the performance results are significant. There is still room for improvement of course, and I still need to quantify how these optimizations behave on single-core machines. I certainly expected more progress this past week... but in case you don't know: I am moving countries very soon now, and as the move date approaches, there is more and more stuff to be done at home so less and less time for hacking.

June 19, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 1 minute)

Kyua: Weekly status report

Added support for recursion from the top-level Kyuafile. This Kyuafile should not reference any directories explicitly because the directories at the top level are supposed to be created by the installation of packages. Closed issue 9. Improved error messages when the test programs are bogus. Closed issue 13. Backported format-printf attribute improvements from NetBSD head to ATF. Miscellaneous build and run fixes for both Kyua and ATF on NetBSD and OS X. Cut a release candidate for atf-0.14 and started testing on NetBSD. The kyua-cli codebase is now feature complete. Blocking the 0.1 release are the need to polish the release documents and the requirement of releasing atf-0.14 beforehand. Should happen soon :-)

June 13, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 1 minute)

Kyua: Weekly status report

Some long-standing bug fixes / improvements have gone in this week: Improvements to the cleanup routine, which is used to destroy the work directory of a test case after the test case has terminated. Heavy refactoring to be tolerant to failures; these failures may arise when a child of the test case does not exit immediately and holds temporary files in the work directory open for longer than expected. Any file systems that the test case leaves mounted within the work directory will now be unmounted, just as the ATF test interface mandates. I realize that this adds a lot of complexity to the runtime engine for very little gain; if/when we revise the tests interface, it will be worth reconsidering this and maybe leaving the cleanup of mounted file systems to the test case altogether. As a result, issue 17 has been fixed! Kyua now captures common termination signals (such as SIGINT) and exits in a controlled manner. What this means is that Kyua will now kill any active test programs and clean up any existing work directories before exiting. What this also means is that issue 4 is fixed. To add some amusement, a little FYI: the above points have never worked correctly in ATF, and the codebase of ATF makes it extremely hard to implement them right. I have to confess that it has been tricky to implement the above in Kyua as well, but I feel much more confident that the implementation works well. Of course, there may be some corner cases left... but, all in all, it's more robust and easier to manage. The list of pending tasks for 0.1 shortens!

June 5, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 2 minutes)

Kyua: Weekly status report

Some cool stuff this week, albeit not large-scale: Implemented the --variable flag in the test command. This flag, which can be specified multiple times, allows a user to override any configuration variable (be it a built-in or a test-suite variable) from the command line. This is actually the same as atf-run's -v flag, but with a clear separation between built-in configuration settings and test-suite specific settings. Added support for several environment variables to allow users (and tests) to override built-in paths. I can't imagine right now any legitimate use for these variables, but hardcoded values are bad in general, atf-run provided these same variables, and these variables are very handy for testing purposes. Added support for the new require.files test-case metadata property to both ATF and Kyua. This new property allows tests to specify a set of files that they require in order to run, and is useful for those tests that can't run before make install is executed. The functionality planned for the 0.1 release is now pretty much complete. There are still a few rough edges to clean up, some documentation to write, and some little features to implement/fix. See the open bugs for 0.1 to get an idea of the remaining tasks.
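Conceptually, the --variable overrides behave like dotted-name assignments layered on top of the configuration, with test-suite variables living under a qualified namespace. A rough Python model of that behavior (names and data layout are simplified assumptions; the real implementation is C++ and validates names and types):

```python
def apply_overrides(config, overrides):
    """Apply 'name=value' overrides to a nested configuration dict.

    Built-in variables live at the top level, while test-suite
    variables are addressed with dotted names such as
    'test_suites.<suite>.<variable>'.
    """
    for item in overrides:
        name, _, value = item.partition("=")
        parts = name.split(".")
        node = config
        for part in parts[:-1]:
            # Create intermediate tables as needed.
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return config
```

The dotted-name scheme is what gives the "clear separation" mentioned above: a built-in setting and a test-suite setting can never collide because they live in different parts of the tree.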

May 29, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 1 minute)

Kyua: Weekly status report

This week: Cleaned up the internal code of the "list" command and added a few unit tests. Added integration tests for the "test" command that focus mostly on the behavior of the "test" command itself. There is still a need for full integration tests that validate the execution of the test cases themselves and their cleanup, and these will be tricky to write. Changed atf-c, atf-c++ and atf-sh to show a warning when a test program is run by hand. Users should really be using atf-run to execute the tests, or otherwise things like isolation or timeouts will not work (and they'll conclude that atf is broken!).

May 22, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 1 minute)

Kyua: Weekly status report, BSDCan 2011 edition

I spent this past week in Ottawa, Canada, attending the BSDCan 2011 conference. The conference offered lots of interesting content and hosted many influential and insightful BSD developers. While the NetBSD presence was small, I had some valuable conversations with both NetBSD and FreeBSD developers. Anyway. As part of BSDCan 2011, I gave a talk titled "Automated testing in NetBSD: past, present and future". The talk focused on explaining what led to the development of ATF in the context of NetBSD, what related technologies exist in NetBSD (rump, anita and dashboards), what ATF's shortcomings are and how Kyua plans to resolve them. (Video coming soon, I hope.) The talk was followed by several questions and off-session conversations about testing in the BSDs in general. Among these, I gathered a few random ideas / feelings: The POSIX 1003.3 standard defines the particular results a test can emit (see the corresponding DejaGnu documentation). Both ATF and Kyua already implement all the results defined in the standard, but they use different names and extend the standard with many extra results. Given that the standard does not define useful concepts like "expected failures", an idea that came up is to provide a flag to force POSIX compliance at the cost of being less useful. Why? Just for the sake of saying that Kyua conforms to this standard. The audience seemed to like the idea of a "tests results store" quite a bit, and nobody objected to SQLite for the implementation. This is something I'm eager to work on, but not before I publish a 0.1 release. I highlighted the possibility of allowing Kyua to run "foreign" test programs so that we could integrate the results into the database. This could be useful to run tests over which we (*BSD) have no control (e.g. gcc) in an integrated manner. Nobody objected to that idea either. FreeBSD has already been looking at ATF / Kyua and they are open to collaboration. OpenBSD won't import any new C++ code, and adding C-based tests to the tree while relegating the C++ runtime to the ports is not an option. Somehow I expected this. Junos (the FreeBSD-based operating system from Juniper Networks) recently imported ATF and they are happy with it so far. Yay! Would be nice to have a feature to run tests remotely after, maybe, deploying a single particular test and its dependencies. This is gonna be tricky and not in my current immediate plans. Other than that, I had little time to do some coding: Fixed a problem in which both ATF and Kyua were not correctly resetting the timezone of the executed tests. I only found this because, after arriving in Canada, some Kyua tests would start to fail. (Yes, the fix is in both code bases!) Added some support to capture deadly signals that terminate Kyua so that Kyua can print an informational message stating that something went wrong and which log file contains more information. See r121. That's it folks! Thanks to those attending the conference and, in particular, to those that came to my talk :-)

May 16, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/conference">conference</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 3 minutes)

Kyua: Weekly status report

Ouch; I'm exhausted. I just finished a multi-hour hacking session to get the implementation of the list subcommand under control. It is now in a very nice user-facing shape, although its code deserves a little bit of house cleaning (coming soon). Anyway, this week's progress: Added the kyuaify.sh script. This little tool takes a test suite (say, NetBSD's /usr/tests directory) and converts all its Atffiles into Kyuafiles. The tool is not sophisticated at all; in fact, it is a pretty simple script that I haven't tested with any other test suites so far. See the announcement of kyuaify for some extra details. Added logging support for the Lua code and changed the Lua modules to spit out some logging information while processing Kyuafiles and configuration files. Added a mechanism in the user interface module to consistently print informational, warning and error messages. Implemented proper test filtering (after several iterations). What does proper mean? Well, for starters, test filters are always relative to the test suite's root (although we already saw this in last week's report). But the most important thing is that the filters are now validated: nice, user-friendly errors will be reported when the collection of filters is non-disjoint, when it includes duplicate names or when any of the provided filters does not match any test case. I really need to document the rationale of these in the manual, but for now the r118 commit message includes a few details. Drafted some notes for the BSDCan 2011 conference. I am quite tempted to reuse parts of the presentation from NYCBSDCon 2010, but I really want to put more emphasis on Kyua this time. In case you don't know, Kyua was first announced at NYCBSDCon 2010 and it was still a very immature project. The project has changed a lot since then. The wishful plan for next week is to clean up the internals of the list command (by refactoring and adding unit tests) and implement preliminary integration tests for the test subcommand. The latter scares me quite a bit. But... hmm... I guess preparing the presentation for BSDCan 2011 has priority.
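At its core, the Atffile-to-Kyuafile conversion is a textual mapping from one declaration format to the other. A stripped-down Python rendition of the idea (the real kyuaify.sh is a shell script, and both file formats have more features, such as globs, properties and recursive inclusion, than this sketch handles):

```python
def kyuaify(atffile_lines, test_suite):
    """Translate the 'tp:' entries of a simplified Atffile into the
    equivalent Kyuafile lines.  Sketch only: everything other than
    plain 'tp:' entries is ignored here."""
    out = ['syntax("kyuafile", 1)', 'test_suite("%s")' % test_suite]
    for line in atffile_lines:
        line = line.strip()
        if line.startswith("tp:"):
            name = line[len("tp:"):].strip()
            out.append('atf_test_program{name="%s"}' % name)
    return out
```

For example, an Atffile declaring `tp: t_cp` would yield a Kyuafile containing `atf_test_program{name="t_cp"}` plus the syntax and test-suite headers.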

April 24, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 2 minutes)

Kyua: Weekly status report

This week started easy: Added integration tests for the about and help subcommands. These were pretty easy to do. Added integration tests for the list subcommand. I initially added these tests as expected failures to reason about the appearance and behavior of this command from the point of view of the user before actually working on the code... and I am still writing such code to make these tests pass! This is where things got a bit awry. Polishing the behavior of the list command so that its interface is consistent among all flag and argument combinations is tricky and non-trivial. I ended up having to change code deep down in the source tree to implement missing features and to change existing bits and pieces: Implemented support to print all test case properties, not only the user-specific ones. The properties recognized by the runtime engine are stored as individual members of a structure, so these required some externalization code. Implemented "test case filtering". This allows users to select what tests to run on the command line at the test case granularity. For example, foo/bar selects all tests in a subdirectory if bar is a directory, or all the tests in the bar test program if it is a binary. But what is new (read: not found in ATF) is that you can even do foo/bar:test-1, where test-1 is the name of a test case within the foo/bar test program. I've been silently wishing for this feature to be available in ATF because it shortens the build/test/edit cycle, but it was not easy to add. Changed the internal representation of test suites to ensure all test program names are relative to the root of the test suite. The root of the test suite is considered to be the directory in which the top-level Kyuafile lives (/usr/tests/ in NetBSD). The whole point of doing this is to provide some consistency to the filters when the user decides to execute tests that are not in the current directory. For example, the following are now equivalent: "cd /usr/tests && kyua list fs/tmpfs" and "kyua list -k /usr/tests/Kyuafile fs/tmpfs". The plans for the upcoming week are to finish the clean-up of the list command (which involves adding proper error reporting, refactoring of the command module and addition of unit tests) and start cleaning up the test command. Also, remember that BSDCan 2011 is now around the corner and that I will be talking about ATF, Kyua and NetBSD there! My plan was to have a kyua-cli-0.1 release by the time of the conference, although this will be tricky to achieve...
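The filter semantics described above boil down to a prefix match on the path plus an optional exact match on the test case name. A hypothetical Python sketch of the matching rule (the real logic lives in Kyua's C++ engine and also performs the validation mentioned in earlier reports):

```python
def filter_matches(filter_, test_id):
    """Check whether a user-supplied filter selects a given test.

    A test id looks like 'dir/program:case'.  A filter is either a
    bare path (matching a directory or a whole test program) or
    'path:case' (matching one specific test case).
    """
    program, _, case = test_id.partition(":")
    fpath, sep, fcase = filter_.partition(":")
    if sep:
        # 'path:case' form: exact program and case match.
        return fpath == program and fcase == case
    # Bare path: matches the program itself or any parent directory.
    return fpath == program or program.startswith(fpath + "/")
```

Under these rules, `foo`, `foo/bar` and `foo/bar:test-1` all select the test case `foo/bar:test-1`, while `foo/baz` does not.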

April 17, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 3 minutes)

Kyua: Weekly status report

Few things worth mentioning this week as reviewing Summer of Code student applications has taken priority. The good thing is that there are several strong applications for NetBSD; the bad thing is that none relate directly to testing. Anyway, the work this week: Added a pkg-config file for atf-sh as well as an Autoconf macro to detect its presence. This is needed by Kyua to easily find atf-sh. (Yes, I know: this is an abuse of pkg-config, but it works pretty well and is consistent with atf-c and atf-c++.) Implemented basic integration tests for Kyua in r98 using atf-sh. These tests are still very simple but provide a placeholder into which new tests will be plugged. Having good integration test coverage is key in preparation for a 0.1 release. Oh, and by the way, this revision has bumped the number of tests to 601, crossing the 600 barrier :-) That's pretty much it. Now, back to attempting to fix my home server, as a fresh installation of NetBSD/macppc has decided to not boot any more. (Yes, this has blocked most of my weekend...)

April 10, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 1 minute)

Kyua: Weekly status report

This week's work has been quite active on the ATF front but not so much on the Kyua one. I keep being incredibly busy on the weekends (read: traveling!) so it's hard to get any serious development work done. What has happened? Finally tracked down and fixed some random atf-run crashes that had been haunting the NetBSD test suite for months (see PR bin/44176). The fix is in reality an ugly workaround for the fact that a work directory cannot be considered "stable" even after the test case terminates. There may be dying processes around that touch the work directory contents, and the cleanup code in atf-run was not coping well with those. As it turns out, this problem also exists in Kyua (even though it's not as pronounced because arbitrary failures when running a test case do not crash the runtime engine), so I filed issue 17 to address it. Released ATF 0.13 and imported it both to NetBSD-current and pkgsrc. As a side note, Kyua requires the new features in this release, so putting it out there is a requirement to release Kyua 0.1. This new release does not have a big effect on NetBSD though, because the copy of ATF in NetBSD has been constantly receiving cherry-picks of the upstream fixes. Replaced several TODO items in the Kyua code with proper calls to the logging subsystem. These TODO items were referring to conditions in the code that should not happen, but from which we cannot do any proper recovery (like errors in a destructor). Sure, these could be better signaled as an assertion... but these code paths can be triggered in extremely tricky conditions and having Kyua crash because of them is not nice (particularly when the side effects of executing those code paths are non-critical). So, in retrospect, I have fulfilled the goal set last week of releasing ATF 0.13, but I haven't gotten to the addition of integration tests. Oh well... let's see if this upcoming week provides more spare time.

April 3, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 2 minutes)

Kyua: Weekly status report

This has been a slow week. In the previous report, I set the goal of getting Kyua to run the NetBSD test suite accurately (i.e. to report the same results as atf-run), and this has been accomplished. Actually, the changes required in Kyua to make this happen were minimal, but I got side-tracked fixing issues in NetBSD itself (both in the test suite and in the kernel!). So, the things done: Fixed Kyua to correctly kill any dangling subprocesses of a test case and thus match the behavior of atf-run. This was the only change required to resolve issue 16: i.e. to get Kyua to report the same results as atf-run for the NetBSD test suite. Based on a suggestion from Antti Kantee, an alternative way to handle this would be to not kill any processes and just report the test case as broken if it fails to clean itself up. The rationale being that the runtime engine can kill dangling subprocesses in 99% of the occasions, but not always; the exception are those subprocesses that change their process group. It'd be better to make all cleanups explicit instead of hiding this corner case, as it can lead to confusion. Addressing this will have to wait though, as it is a pretty invasive change. Before closing issue 16, I want to implement some integration tests for Kyua to ensure that the whole system behaves as we expect (which is what the NetBSD test suite is currently doing implicitly). Kyua is pickier than atf-run: if a cleanup routine of a test case fails or crashes, Kyua will (correctly) report the test case as broken while atf-run will silently ignore this situation. Some NetBSD tests had crashing cleanup parts, so I fixed them. Some test programs in NetBSD were leaving unkilled subprocesses behind. These subprocesses are daemons and thus fall out of the scope of what Kyua can detect and kill during the cleanup phase. I mistakenly tracked down the problem to rump, but Antti Kantee kindly found the real problem in the kernel (not in rump!). As a side effect of processes being left behind, I extended the functionality of pidfile(3) and implemented pid file support in bozohttpd. This is to make the tests that spawn a bozohttpd in the background more robust, by giving them a way to forcibly kill the server during cleanup. These changes are still under review and not committed yet. For the upcoming week, I plan to add some basic integration tests to Kyua and release ATF 0.13. I've been running a NetBSD system with the latest ATF code integrated for a while (because Kyua requires it) and things have been working well.
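The dangling-subprocess problem, and the process-group caveat Antti pointed out, can be illustrated with a small Python sketch (hypothetical code; the actual runtime engine is C++ and far more careful about races and error handling):

```python
import os
import signal
import subprocess

def run_and_reap(argv, timeout=10):
    """Run a test program in its own process group, then kill the
    whole group to catch any children it left behind.  Children
    that called setpgid()/setsid() themselves escape this net,
    which is exactly the corner case discussed above."""
    proc = subprocess.Popen(argv, preexec_fn=os.setpgrp)
    try:
        code = proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        code = None
    try:
        # The group leader's pid doubles as the group id.
        os.killpg(proc.pid, signal.SIGKILL)
    except ProcessLookupError:
        pass  # nothing left in the group
    return code
```

A test program that forks a background daemon and exits would have its daemon reaped by the killpg call, unless the daemon moved itself to a new process group, in which case only an explicit cleanup (or a pid file, as with bozohttpd above) can reach it.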

March 27, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 3 minutes)

Kyua: Weekly status report

These days, I find myself talking about Kyua to "many" people. In particular, whenever a new feature request for ATF comes in, I promise the requester that the feature will be addressed as part of Kyua. However, I can imagine that this behavior leaves the requester with mixed feelings: it is nice that the feature will be implemented but, at the same time, it is very hard to know when, because the web site of Kyua does not provide many details about its current status. In an attempt to give Kyua some more visibility, I will start posting weekly activity reports on this blog. These reports will also include any work done on the ATF front, as the two projects are highly related at this point. I write these reports regularly at work and I feel like it is a pretty good habit: every week, you have to spend some time thinking about what you did for the project, and you feel guilty if the list of tasks is ~zero ;-) It also, as I said, gives more visibility to the work being done so that outsiders know that the project is not being ignored. Before starting with what has happened this week, a bit of context. I have been traveling like crazy and hosting guests over for the last 2 months. This has given me virtually no time to work on Kyua but, finally, I have got a chance to do some work this past week. So, what's new? Implemented the --loglevel command line flag, which closes issue 14. Kyua now generates run-time logs of its internal activity to aid in postmortem debugging, and this flag allows the user to control the verbosity of such logs. Antti Kantee hacked support into atf-run in the NetBSD source tree to dump a stack trace of any crashing test program. I have backported this code to the upstream ATF code and filed issue 15 to implement this same functionality in Kyua. Fixed a hang in atf-run that made it get stuck when a test case spawned a child process and atf-run failed to terminate it. A quick test seems to indicate that Kyua is affected by a similar problem: it does not get stuck, but it does not correctly kill the subprocesses. The problem will be addressed as part of issue 16. Oh, and by the way: Kyua will be presented at BSDCan 2011. My plans for this week are to make Kyua run the full NetBSD test suite without regressions when compared to ATF. Basically, the results of a test run with Kyua should be exactly the same as those of a test run with ATF. No dangling processes should be left behind. Lastly, if you are interested in these reports and other Kyua news, you can subscribe to the kyua label feed and, if you want to stay up to date with any changes performed to the code, subscribe to the kyua-log mailing list.

March 20, 2011 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/report">report</a>
Continue reading (about 3 minutes)

Introducing Kyua

Wow. I have just realized that I have not blogged at all about the project that has kept me busy for the past two months! Not good, not good. "What is this project?", I hear. Well, this project is Kyua. A bit of background first: the Automated Testing Framework, or ATF for short, is a project that I started during the Summer of Code of 2007. The major goal of ATF was, and still is, to provide a testing framework for the NetBSD operating system. The ATF framework is composed of a set of libraries to aid in the implementation of test cases in C, C++ and shell, and a set of tools to ease the execution of such test cases (atf-run) and to generate reports of the execution (atf-report). At that point in time, I would say that the original design of ATF was nice. It made test programs intelligent enough to execute their test cases in a sandboxed environment. Such test programs could be executed on their own (without atf-run) and they exposed the same behavior as when they were run within the runtime monitor, atf-run. On paper this was nice, but in practice it has become a hassle. Additionally, some of these design decisions mean that certain features (most notably, parallel execution of tests) cannot be implemented at all. At the end of 2009 and beginning of 2010, I did some major refactorings to the code to make the test programs dumber and to move much of the common logic into atf-run, which helped a lot in fixing the major shortcomings encountered by the users... but the result is that, today, we have a huge mess. Additionally, while ATF is composed of different modules conceptually separate from each other, there are some hard implementation couplings among them that impose severe restrictions during development. Tangentially, during the past 2 years of working at Google (and coding mainly in Python), I have been learning new, neat programming techniques to make code more testable... and these are not followed at all by ATF.
In fact, while the test suite of ATF seems very extensive, it definitely is not: there are many corner cases that are not tested and for which implementing tests would be very hard (which means that nasty bugs have easily sneaked into releases). Lastly, a very important point that directly affects the success of the project. Outsiders who want to contribute to ATF face a huge entry barrier: the source repository is managed by Monotone, the bug tracker is provided by Gnats (a truly user-unfriendly system), and the mailing lists are offered by majordomo. None of these tools is "standard" by today's common practices, and some of them are tied to NetBSD's hosting, which puts some outsiders off. For all the reasons above and as this year has been moving along, I have gotten fed up with the ATF code base. (OK, things are not that bad... but in my mind they are ;-) And here is where Kyua comes into the game. Kyua is a project to address all the shortcomings listed above. First of all, the project uses off-the-shelf development tools that should make it much, much easier for external people to contribute. Secondly, the project intends to be much more modular, providing a clear separation between the different components and providing code that is easily testable. Lastly, Kyua intends to remain compatible with ATF so that there are no major disruptions for users. You can (and should) think of Kyua as ATF 2.0, not as a vastly different framework. As of today, Kyua implements a runtime engine that is on par, feature-wise, with the one provided by atf-run. It is able to run test cases implemented with the ATF libraries and it is able to test itself. It currently contains 355 test cases that run in less than 20 seconds. (Compare that to the 536 test cases of ATF, which take over a minute to run, and Kyua is still really far from catching up with all the functionality of ATF.) Next actions involve implementing reports generation and configuration files. Anyway.
For more details on the project, I recommend reading the original posting to atf-devel or the project's main page and wiki. And of course, you can also download the preliminary source code to take a look! Enjoy :-)

December 16, 2010 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/kyua">kyua</a>, <a href="/tags/netbsd">netbsd</a>
Continue reading (about 4 minutes)

Creating atf-based tests for NetBSD src

Thanks to Antti Kantee's efforts, atf has been gaining visibility in the NetBSD community over the past few months. But one of the major concerns that we keep hearing from our developers is "Where is the documentation?". Certainly I have been doing a pretty bad job at that, and the current in-tree documents are a bit disorganized. To fix the short-term problem, I have written a little tutorial that covers pretty much every aspect you need to know to write atf tests and, in particular, how to write such tests for the NetBSD source tree. Please refer to the official announcement for more details. Comments are, of course, welcome! And if you can use this tutorial to write your first tests for NetBSD, let me know :-)

September 3, 2010 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/netbsd">netbsd</a>
Continue reading (about 1 minute)

ATF 0.10 released

Ladies and gentlemen: I have just released ATF 0.10! This release with such a magic number includes lots of new exciting features and provides a much simplified source tree. Dive into the 0.10 release page for details! I'm now working on getting this release into the NetBSD tree to remove some of the custom patches that have been superseded by the official release. It will be there soon. And all this while I am at MeetBSD in Kraków :-)

July 2, 2010 · Tags: <a href="/tags/atf">atf</a>
Continue reading (about 1 minute)

Testing NetBSD: Easy Does It

Antti Kantee has been, for a while, writing unit/integration tests for the puffs and rump systems (of which he is the author) shipped with NetBSD. Recently, he has been working on fixing the NetBSD test suite to report 0 failures on the i386 platform so as to encourage developers to keep it that way while making changes to the tree. The goal is to require developers to run the tests themselves before submitting code. Antti has just published an introductory article, titled Testing NetBSD: Easy Does It, that describes what ATF and Anita are, how to use them and how they can help in NetBSD development and deployment. Nice work!

June 24, 2010 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/netbsd">netbsd</a>
Continue reading (about 1 minute)

ATF 0.9 released (late announcement)

Oops! Looks like I forgot to announce the release of ATF 0.9 here a couple of weeks ago. Just a short notice that the formal release has been available since June 3rd and that 0.9 has been in NetBSD since June 4th! You can also enjoy a shiny-new web site! It even includes a FAQ! And, as a side note: I have added a test target to the NetBSD Makefiles, so now it's possible to just do make test within any subdirectory of src/tests/ and get what you expect.

June 18, 2010 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/netbsd">netbsd</a>
Continue reading (about 1 minute)

Trac installation for ATF

During the past few months, I've gotten into the habit of using a bug tracker to organize my tasks at the workplace. People assign tickets to me to get things done and I also create and self-assign tickets to keep them as a reminder of the mini-projects to be accomplished. Honestly, this approach works very well for me and keeps me focused. Since then, I've been wishing to have a similar system set up for ATF. Yeah, we could use the Gnats installation provided by NetBSD... but I hate this issue tracking system. It's ancient and ugly, and I really want a web interface through which to manage my tickets. So, this weekend, I finally took some time and set up a Trac installation for ATF to provide a decent bug/task tracking system. The whole Apache plus Trac setup was more complex than I imagined, but I do hope that the results will pay off :-) Take a look at the official announcement for more details!

May 10, 2010 · Tags: <a href="/tags/atf">atf</a>
Continue reading (about 1 minute)

ATF 0.8 imported into NetBSD

Finished importing ATF 0.8 into the NetBSD source tree. Wow, the CVS import plus merge was much easier than I expected. Note that, while the NetBSD test suite should continue to work as usual, there are some backwards-incompatible changes in the command-line interface of test programs. If you are used to running them by hand, expect different results. Please read the release news for details. Now let's wait for complaints about broken builds! And enjoy this new release in your NetBSD-current system!

May 8, 2010 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/netbsd">netbsd</a>
Continue reading (about 1 minute)

Announcing ATF 0.8

Looks like today is a release day. I've just pushed ATF 0.8 out in the wild and will proceed to import it into pkgsrc and NetBSD later. Refer to the release announcement for details. This is an exciting release! You have been warned ;-)

May 7, 2010 · Tags: <a href="/tags/atf">atf</a>
Continue reading (about 1 minute)

NetBSD in Google Summer of Code 2010

For the 6th year in a row, NetBSD is a mentoring organization for Google Summer of Code 2010! If you are a bright student willing to develop full-time for an open source project during the coming summer, consider applying with us! You will have a chance to work with very smart people and, most likely, in the area you are most passionate about. NetBSD, being an operating system project, offers project ideas at all levels: from the kernel to the packaging system, passing through drivers, networking tools, user-space utilities, the system installer, automation tools and more! I would like to point you at the 3 project proposals I'm willing to directly mentor:

- Optimize and speed up ATF: Make the testing framework blazing fast so that running the NetBSD automated tests does not take ages on slow platforms.
- Reorganize ATF to improve modularity: Refactor pieces of the testing framework so that it is easier to redistribute, has cleaner interfaces and is easier to depend on from third-party projects.
- Rewrite pkg_comp with portability as a major goal: Use Python to create a tool to automatically build binary packages from within a sandbox.

If you find any of the above projects interesting, or if you have any other project proposal that you think I could mentor, do not hesitate to contact me. Feel free to send me a draft of your application, together with a bit of information about you, so that we can discuss your proposal and make sure it gets selected! Or, if none of the projects above interests you, please do check out the full list of NetBSD project proposals. I'm sure you will find something that suits your interests :-)

March 19, 2010 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/netbsd">netbsd</a>, <a href="/tags/pkgsrc">pkgsrc</a>, <a href="/tags/soc">soc</a>
Continue reading (about 2 minutes)

Introducing the ATF nofork branch

Despite my time for free software being virtually zero these days, I have managed to implement a prototype of what ATF would look like if it didn't implement forking and isolation in test programs. This feature has been often requested by users to simplify their life when debugging test cases. I shouldn't repeat everything I posted on the atf-devel mailing list regarding this announcement, so please refer to that email for details. But I must say that the results look promising: the overall code of ATF is much simpler and also faster. (An execution I just tried cuts the run time of the ATF test suite from 1m 41s to 1m 16s.) Expect more simplifications and speed-ups!

March 6, 2010 · Tags: <a href="/tags/atf">atf</a>
Continue reading (about 1 minute)

Processing Makefile.am with M4

ATF's Makefile.am, which is a single Makefile for the whole tree, was already at the 1300-line mark and growing. At this size it is unmanageable, and a quick look at its contents reveals tons of repeated, delicate code. Why so much repeated code, you ask, if the whole point of Automake is to simplify Makefiles? Automake does in fact simplify Makefile code when you define targets it knows about, such as binaries and/or libraries. However, as soon as you start doing fancy things with documentation, building tons of small programs or messing with shell scripts, things get out of control because you are left on your own to define their targets and the necessary build logic. Up until now, I had just kept up with the boilerplate code... but now that I'm starting to add pretty complex rules to generate HTML and plain text documentation out of XML files, the complexity must go. And here comes my solution: I've just committed an experiment to process Makefile.am with M4. I've tried to find prior art behind this idea and couldn't find any, so I'm not sure how well this will work. But, so far, this has cut 350 lines of Makefile.am code. How does this work?

- First of all, I've written a script to generate the Makefile.am from the Makefile.am.m4 and put it in admin/generate-makefile.sh. All this script does is call M4, but I want to keep this logic in a single place because it has to be used from two call sites as described below.
- Then, I've added an autogen.sh script to the top-level directory that generates Makefile.am (using the previous script) and calls autoreconf -is. I'm against autogen.sh scripts that pretend to be smart instead of just calling autoreconf, but in this case I see no other way around it.
- At last, I've modified Makefile.am to add an extra rule to regenerate itself from the M4 version. This, of course, also uses generate-makefile.sh.

We'll see how this scales, but so far I'm happy with the results.
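The self-regeneration rule from the last step could look roughly like this (a sketch only; the exact rule and paths in ATF's real Makefile.am may differ):

```make
# Sketch: rebuild Makefile.am whenever its M4 source changes, funneling
# everything through the single admin/generate-makefile.sh wrapper so the
# M4 invocation lives in exactly one place.
$(srcdir)/Makefile.am: $(srcdir)/Makefile.am.m4 $(srcdir)/admin/generate-makefile.sh
	$(SHELL) $(srcdir)/admin/generate-makefile.sh
```

Because the rule's target is Makefile.am itself, make will notice a stale Makefile.am.m4 on any build and regenerate before doing anything else.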

October 25, 2009 · Tags: <a href="/tags/atf">atf</a>
Continue reading (about 2 minutes)

1000 revisions for ATF

Mmm! Revision 7ca234b9aceabcfe9a8a1340baa07d6fdc9e3d33, committed about an hour ago, marks the 1000th revision in the ATF repository. Thanks for staying with me if you are following the project :)

August 3, 2009 · Tags: <a href="/tags/atf">atf</a>
Continue reading (about 1 minute)

Rearchitecting ATF

During the last few weeks, I've been doing some ATF coding and, well... I'm not happy. At all. I keep implementing features but I feel, more and more, that ATF is growing out of control and that it is way too sluggish. It oughtn't be so slow. About 6 minutes to run the whole test suite on a Mac G3 I just got? HA! I bet I can do much, much, much better than that. Come on, we are targeting NetBSD, so we should support all those niche platforms rather well, and speed matters.

The thing is, the current code base grew out of a prototype that didn't have much of a design. Well, it had a design but, in my opinion, it has turned out to be a bad design. I couldn't have imagined that we would hit the bottlenecks (speed) and user-interface issues (for example, the huge difficulties involved in debugging a failing test case) that we are hitting. So... IT IS TIME FOR A CHANGE!!! I'm currently working on a written specification of what ATF will look like, hopefully, in the not-so-distant future. It will take a while to get there, but with enough effort, we soon will. And life will be better. And no, I'm not talking about a from-scratch rewrite; that'd only hurt the project. I plan to take incremental and safe steps, keeping the code base running all the time, but I will do a major face-lift of everything. (I wish I could say "we" instead of "I" here. But we're not there yet.)

Why am I writing a specification, you ask? Well, because it forces me (or ANY developer) to think about how I want the thing to look and to decide, exactly, what the design will be, which technologies will be used, which languages will be involved and in which components, etc. And no, I'm not talking about a class-model design; I'm just talking about the main design of the whole picture, which is quite hard by itself. Plus, having a spec will allow me to show it to you before I start coding, and you will say "oh, wonderful, this new design sucks so much that I'm not going to bother with the new version". Or maybe hell will freeze over and you will think, "mmm, this looks interesting, maybe it will solve these issues I'm having as regards speed, ease of debugging and ease of use". Anyway, I hope to have a draft "soon" and to hear either of the two possible comments as a result!

Edit (July 29th): Alright, I have uploaded an extremely preliminary copy of the specification just so that you can see where my current ideas are headed. Expect many more changes to this document, so don't pay too much attention to the tiny details (most of which aren't there yet anyway).

July 27, 2009 · Tags: <a href="/tags/atf">atf</a>
Continue reading (about 3 minutes)

The mess of ATF's code

Yes. ATF's code is a "bit" messy, to put it bluntly. I'm quite happy with some of the newest bits, but there are some huge parts that stink. The main reason is that the "ugly" parts were written first, and they were basically a prototype; we didn't know all the requirements for the code at that point... and we still don't know them, but we know we can do much better. Even though I'm writing in plural... I'm afraid we = I at the moment :-P So, is it time for the big-rewrite-from-scratch? NO! Joel Spolsky wrote about why this is a bad idea and I have to agree with him. Yeah, I'm basically the only developer of the code, so everything is in my head and I could do a rewrite with a fresh mind, but... I'd lose tons of work and, especially, tons of code that deals with tricky corner cases that are hard to remember. Sure, I want to clean things up, but that will happen incrementally. And preferably concurrently with feature additions. These two things could definitely happen at the same time if only I had infinite spare time... Anyway, the major point of this post is to describe what I don't like about the current code base and how I'd like to see it change:

- A completely revamped C++ API for test cases. The current one sucks. It is not consistent with the C API. It lacks important functionality. It uses exceptions for test-case status reporting (yuck!). And it's ugly.
- Clear separation of "internal/helper" APIs from the test APIs. You'll agree that the "fs" module, which provides path abstraction and other file system management routines, is something that cannot be part of ATF's API. ATF is about testing. Period. Either that fs module should be in a separate library or it should be completely hidden from the public. Otherwise, it'll suffer from abuse and, what scares me, will have to become part of ATF's API. Likewise, most (really, most) of the modules in the current code are internal.
- Fewer dependencies from the C++ API on the C API. Most of the current C++ modules are wrappers of their corresponding C counterparts. This is nice for code reuse but makes the code extremely fragile. In C++, things like RAII can provide really robust code with minimum effort, but intermixing such C++ code with C makes things ugly really quickly. I'd like to find a way to keep the two libraries separate from each other (and thus keep the C++ binding "pure"), but at the same time I don't want to duplicate code... an interesting problem.
- Split the tarball into smaller pieces. People writing test cases for C applications don't want to pull in a huge package that depends on C++ and whatnot. And ATF is huge. It takes forever to compile. And this is a serious issue for broad adoption. Note: whether the tools are written in C++ or not is a separate issue, because they are not a dependency for anything!
- Make the shell binding faster. It is really slow compared to the other ones. Optimizations would be nice, but they do not address the root of the problem: it's costly to query information from shell-based tests at run time. I.e. it takes a long time to get the full list of test cases available in a test suite because you have to run every single test program with the -l flag. Keeping a separate file with test-case metadata alongside the binary could resolve this and allow more flexibility at run time.
- And some other things.

Those are the major things I'd like to see addressed soon, but they involve tons of work. Of course, I'd also like to be able to work on some features expected by other developers: easier debugging, DOCUMENTATION!... So, helpers welcome :-)

July 13, 2009 · Tags: <a href="/tags/atf">atf</a>
Continue reading (about 4 minutes)

Child-process management in C for ATF

Let's face it: spawning child processes in Unix is a "mess". Yes, the interfaces involved (fork, wait, pipe) are really elegant and easy to understand, but every single time you need to spawn a new child process to, later on, execute a random command, you have to write quite a bunch of error-prone code to cope with it. If you have ever used any other programming language with higher-level abstraction layers (just check Python's subprocess.Popen) you surely understand what I mean. The current code in ATF has many places where child processes have to be spawned. I recently had to add yet another case of this, and... enough was enough. Since then, I've been working on a C API to spawn child processes from within ATF's internals and have just pushed it to the repository. It's still fairly incomplete, but with minor tweaks, it'll keep all the dirty details of process management contained in a single, one-day-to-be-portable module. The interface tries to mimic the one that was designed in my Boost.Process Summer of Code project, but in C, which is quite painful. The main idea is to have a fork function to which you pass the subroutine you want to run in the child, the behavior you want for the stdout stream and the behavior you want for the stderr stream. These behaviors can be any of capture (i.e. create pipes for IPC communications), silence (i.e. redirect to /dev/null), redirect to a file descriptor and redirect to a file. For simplicity, I've omitted stdin. With all this information, the fork function returns an opaque structure representing the child, from which you can obtain the IPC channels if you requested them and on which you can wait for termination. Here is a little example, with tons of details such as error handling and resource finalization removed for simplicity.
The code below would spawn /bin/ls and store its output in two files named ls.out and ls.err:

```c
static atf_error_t run_ls(const void *v)
{
    system("/bin/ls");
    return atf_no_error();
}

static void some_function(...)
{
    atf_process_stream_t outsb, errsb;
    atf_process_child_t child;
    atf_process_status_t status;

    atf_process_stream_init_redirect_path(&outsb, "ls.out");
    atf_process_stream_init_redirect_path(&errsb, "ls.err");
    atf_process_fork(&child, run_ls, &outsb, &errsb, NULL);
    ... yeah, here comes the concurrency! ...
    atf_process_child_wait(&child, &status);
    if (atf_process_status_exited(&status))
        printf("Exit: %d\n", atf_process_status_exitstatus(&status));
    else
        printf("Error!");
}
```

Yeah, quite verbose, huh? Well, it's the price to pay to simulate namespaces and similar other things in C. I'm not too happy with the interface yet, though, because I've already encountered a few gotchas when trying to convert some of the existing old fork calls to the new module. But, should you want to check the whole mess, check out the corresponding revision.

June 21, 2009 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/boost-process">boost-process</a>, <a href="/tags/c">c</a>
Continue reading (about 3 minutes)

Making ATF 'compiler-aware'

For a long time, ATF has shipped with build-time tests for its own header files to ensure that these files are self-contained and can be included from other sources without having to manually pull in obscure dependencies. However, the way I wrote these tests was a hack from the first day: I use automake to generate a temporary library that builds small source files, each one including one of the public header files. This approach works but has two drawbacks. First, if you do not have the source tree, you cannot reproduce these tests... and one of ATF's major features is the ability to install tests and reproduce them even if you install from binaries, remember? And second, it's not reusable: I now find myself needing to do this exact same thing in another project... what if I could just use ATF for it? Even if the above were not an issue, build-time checks are a nice thing to have in virtually every project that installs libraries. You need to make sure that the installed library can be linked into new source code and, currently, there is no easy way to do this. As a matter of fact, the NetBSD tree has such tests, and they haven't been migrated to ATF for a reason. I'm trying to implement this in ATF at the moment. However, running the compiler transparently is tricky. Which compiler do you execute? Which flags do you need to pass? How do you provide a portable-enough interface for the callers? The approach I have in mind involves caching the compiler and flags used to build ATF itself and using those as defaults wherever ATF needs to run the compiler. Then, ATF can provide some helper check functions that call the compiler for specific purposes and hide all the required logic inside them. That should work, I expect. Any better ideas?

March 5, 2009 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/c">c</a>, <a href="/tags/cxx">cxx</a>
Continue reading (about 2 minutes)

ATF 0.6 released

I am very happy to announce the availability of the 0.6 release of ATF. I have to apologize for this taking so long, because the code has been mostly ready for a while. However, the actual release procedure is painful: testing the code in many different configurations to make sure it works, preparing the release files, uploading them, announcing the new release on multiple sites... not something I like doing often. Doing some late reviews, I have to admit that the code has some rough edges, but they could not delay 0.6 any longer. The reason is that this release unblocks the NetBSD-SoC atfify project, making it possible to finally integrate all the work done in it into the main NetBSD source tree. Explicit thanks go to Lukasz Strzygowski. He was not supposed to contribute to ATF during his Summer of Code 2008 project, but he did, and he actually provided very valuable code. The next step is to update the NetBSD source tree to ATF 0.6. I have extensive local changes for this in my working copy, but I'm very tired at the moment. I think I'll postpone their commit until tomorrow so that I don't screw something up badly. Enjoy it, and I'm looking forward to your feedback on the new stuff!

January 18, 2009 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/netbsd">netbsd</a>, <a href="/tags/soc">soc</a>
Continue reading (about 2 minutes)

ATF talk at NYCBSDCon 2008

NYCBSDCon 2008 will take place in New York City on October 11th and 12th. Given that I am already in NYC and will still be here by that time, I submitted a presentation proposal about ATF. I have just been notified that my proposal has been accepted and, therefore, I will be giving a talk on ATF itself and how it relates to NetBSD on one of those two days. The conference program and schedule have not been published yet, though, so stay tuned. Hope to see you there! :)

July 30, 2008 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/netbsd">netbsd</a>, <a href="/tags/nyc">nyc</a>
Continue reading (about 1 minute)

Google Summer of Code 2008 and NetBSD

Google has launched the Summer of Code program once again this year, and NetBSD is a mentoring organization for the fourth time, as announced in a netbsd-announce post. Unless things go very wrong in the following days, I will not take part this year as a student because I will be interning at Google SRE during the summer! However, I will try to become a mentor for the "Convert all remaining regression tests to ATF" project. If you are looking for an interesting idea to apply for, this is a good one! Why?

- It will let you get into NetBSD internals in almost all areas of the system: you'll need to understand how the source tree is organized, how to add new components to it (because tests are in almost all respects regular programs), and how the current pieces of the system interact with each other. You will need to gain knowledge in some areas (such as the kernel or the libraries) to be able to port tests from the old framework (if it deserves that name ;-) to the new one and, if you are really up to it, even add new tests for functionality that is currently uncovered by the test suite. But adding new tests is something you will not be required to do, because the sole task of migrating the existing ones is already huge.
- You will get involved in ATF's development because, as you study the existing test cases and their requirements, you will most likely find that it lacks some important functionality to make things really straightforward.
- And, of course, you will make an invaluable contribution to the NetBSD operating system. Having a public test suite with high coverage means that the system will gain quality. Yes, you will most likely uncover bugs in many areas of the system and give them enough exposure so that someone else may fix them.

Note that this project is really a Summer of Code project. It does not have a long design phase of its own so, once you have gotten used to the system and ATF, you'll just code and immediately make useful contributions. In the past, projects that involved a heavy design phase were not good because, in the end, the student did not finish the code on time. So... don't hesitate to apply! I'm looking forward to seeing your applications for this project :-)

March 19, 2008 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/netbsd">netbsd</a>, <a href="/tags/soc">soc</a>
Continue reading (about 2 minutes)

ATF's error handling in C

One of the things I miss a lot when writing the C-only bits of ATF is an easy way to raise and handle errors. In C++, the normal control flow of the execution is not disturbed by error handling because any part of the code is free to report error conditions by means of exceptions. Unfortunately, C has no such mechanism, so errors must be handled explicitly. At the very beginning I just made functions return integers indicating error codes, reusing the standard error codes of the C library. However, that turned out to be too simplistic for my needs, and it was not easily applicable when a function's natural return value was not an integer. What I ended up doing was defining a new type, atf_error_t, which must be returned by all functions that can raise errors. This type is a pointer to a memory region whose contents (and size) can vary depending on the error raised by the code. For example, if the error comes from libc, I mux the original error code and an informative message into the error type so that the original, non-mangled information is available to the caller; or, if the error is caused by the user's misuse of the application, I simply return a string that contains the reason for the failure. The error structure contains a type field that the receiver can query to know which specific information is available and, based on that, cast down the structure to the specific type that contains the detailed information. Yes, this is very similar to how you work with exceptions. In the case of no errors, a null pointer is returned, so checking for an error condition is just a simple pointer check, which is no more expensive than an integer check. Handling error conditions is more costly, but given that they are rare, that is certainly not a problem. What I don't like too much about this approach is that any other return value must be returned as an output parameter, which makes things a bit confusing. Furthermore, robust code ends up cluttered with error checks all around, given that virtually any call to the library can produce an error somewhere. This, together with the lack of RAII modeling, complicates error handling a lot. But I can't think of any other way that could be simpler but, at the same time, as flexible as this one. Ideas? :P More details are available in the atf-c/error.h and atf-c/error.c files.

February 24, 2008 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/c">c</a>
Continue reading (about 2 minutes)

Rewriting parts of ATF in C

I have spent part of last week and this whole weekend working on a C-only library for ATF test programs. An extremely exhausting task. However, I wanted to do it because there is reluctance in NetBSD to write test programs in C++, which is understandable, and delaying it any more would have made things worse in the future. I ran into this situation myself some days ago when writing tests for very low-level stuff; using C++ there felt clunky, though it was of course still possible. I have had to reimplement lots of things that are given for free in any other, higher-level (not necessarily high-level) language. This includes, for example, a "class" to deal with dynamic strings, another one for dynamic linked lists and their iterators, a way to propagate errors until the point where they can be managed... and I have spent quite a bit of time debugging crashes due to memory management bugs, something I rarely encountered in the C++ version. However, the new interface is, I believe, quite neat. This is not because of the language per se, but because the C++ interface has grown "incorrectly". It was the first code in the project and it shows. The C version has been written from the ground up with all the requirements known beforehand, so it is cleaner. This will surely help in cleaning up the C++ version later on, which cannot die anyway. The code for this interface is in a new branch, org.NetBSD.atf.src.c, and will hopefully make it into ATF 0.5: it still lacks a lot of features, hence it is not on mainline yet. Ah, the joys of a distributed VCS: I have been able to develop this experiment locally and privately until it was decent enough to be published, and now it is online with all its history available! From now on, C++ use will be restricted to the ATF tools themselves and to those users who want to use it in their projects. Test cases will be written using the C library, except for those that unit-test C++ code.

February 18, 2008 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/c">c</a>
Continue reading (about 2 minutes)

ATF 0.4 released

I'm pleased to announce that the fourth release of ATF, 0.4, just saw the light. The NetBSD source tree has also been updated to reflect this new release. For more details please see the announcement.

February 4, 2008 · Tags: <a href="/tags/atf">atf</a>
Continue reading (about 1 minute)

Home-made build farm

I'm about to publish the 0.4 release of ATF. It has been delayed more than I wanted due to the difficulty of getting time-limited test cases working and due to my laziness in testing the final tarball on multiple operating systems (because I knew I'd have to fight portability problems). But finally, this weekend I have been setting up a rather automated build farm at home, which is composed so far of 13 systems. Yes, 13! But do I actually have that many machines? Of course not! Ah, the joys of virtualization. What I have done is set up a virtual machine for each system I want to test using VMware Fusion. If possible, I configure both 32-bit and 64-bit versions of the same system, because different problems can arise in each. Each virtual machine has a builder user, and that user is configured to allow passwordless SSH logins by using a private key. It also has full sudo access to the machine, so that it can run root-only tests and shut down the virtual machine. As for software, I only need a C++ compiler, the make tool and pkg-config. Then I have a script that, for a given virtual machine:

1. Starts the virtual machine.
2. Copies the distfile into the virtual machine.
3. Unpacks the distfile.
4. Configures the sources.
5. Builds the sources.
6. Installs the results.
7. Runs the build-time tests.
8. Runs the install-time tests as a regular user.
9. Runs the install-time tests as root.
10. Powers down the virtual machine.

Ideally I should also run some different combinations of compilers inside each system (for example, SUNpro and GCC in Solaris) and make tools (BSD make and GNU make). I'm also considering replacing some of the steps above with a simple make distcheck. I take a log of the whole process for later manual inspection. This way I can simply call this script for all the virtual machines I have and get the results of all the tests for all the platforms. I still need to do some manual testing on non-virtual machines such as my PS3 or Mac OS X, but these are minor (though yes, they should also be automated). Starting and stopping the virtual machines was the trickiest part, but in the end I got it working. Now I would like to adapt the code to work with other virtual machines (Parallels and qemu), clean it up and publish it somehow. Parts of it certainly belong inside ATF (such as the formatting of all logs into HTML for later publication on a web server), and I hope they will make it into the next release. For the curious, I currently have virtual machines for: Debian 4.0r2, Fedora 8, FreeBSD 6.3, NetBSD-current, openSUSE 10.2, Solaris Express Developer Edition 2007/09 and Ubuntu Server 7.10. All of them have 32-bit and 64-bit variants except for Solaris, which is only 64-bit. Setting all of them up manually was quite a tedious and boring process. And the testing process is slow. Each system takes around 10 minutes to run through the whole "start, do stuff, stop" process, and SXDE almost doubles that. In total, more than 2 hours to do all the testing. Argh, an 8-way Mac Pro would be so sweet now :-)

February 4, 2008 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/virtualization">virtualization</a>
Continue reading (about 3 minutes)

unlink(2) can actually remove directories

I had always thought that unlink(2) was meant to remove only files but, yesterday, SunOS (SXDE 200709) proved me wrong. I was sanity-checking the source tree for the imminent ATF 0.4 release under this platform, which is always scary, and the tests for the atf::fs::remove function were failing, but only when run as root. The failure happened in the cleanup phase of the test case, in which ATF attempts to recursively remove the temporary work directory. When it attempted to remove one of the directories inside it, it failed with an ENOENT error, which in SunOS may mean that the directory is not empty. Strangely, when inspecting the left-over work tree, that directory was indeed empty, yet it could not be removed with rm -rf nor with rmdir. The manual page for unlink(2) finally gave me the clue about what was happening:

"If the path argument is a directory and the filesystem supports unlink() and unlinkat() on directories, the directory is unlinked from its parent with no cleanup being performed. In UFS, the disconnected directory will be found the next time the filesystem is checked with fsck(1M). The unlink() and unlinkat() functions will not fail simply because a directory is not empty. The user with appropriate privileges can orphan a non-empty directory without generating an error message."

The solution was easy: as my custom remove function is supposed to remove files only, I added a check before the call to unlink(2) to ensure that the path name does not point to a directory. Not the prettiest solution (because it is subject to race conditions, even though it is not critical), but it works.

February 3, 2008 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/portability">portability</a>, <a href="/tags/sunos">sunos</a>
Continue reading (about 2 minutes)

Testing the process-tree killing algorithm

Now that you know the procedure to kill a process tree, I can explain how the automated tests for this feature work. In fact, writing the tests is what was harder due to all the race conditions that popped up and due to my rusty knowledge of tree algorithms. Basically, the testing procedure works like this:Spawn a complete tree of processes based on a configurable degree D and height H.Make each child tell the root process its PID so that the root process can have a list of all its children, be them direct or indirect, for control purposes.Wait until all children have reported their PID and are ready to be killed.Execute the kill-tree algorithm on the root process.Wait until the children have died.Check that none of the PIDs gathered in point 2 are still alive (which could be, but reparented to init(8) if they were not properly killed). If some are, the recursive kill failed.The tricky parts were 3 and 5. In point 3, we have to wait until all children have been spawned. Doing so for direct children is easy because we spawned them, but indirect ones are a bit more difficult. What I do is create a pipe for each of the children that will be spawned (because given D and H I can know how many nodes there will be) and then each child uses the appropriate pipe to report its PID to the parent when it has finished initialization and thus is ready to be safely killed. The parent then just reads from all the pipes and gets all the PIDs. But what do I mean with safely killed? Preliminary versions of the code just ran through the children's code and then exited, leaving them in zombie status. This worked in some situations but broke in others. I had to change this to block all children in a wait loop and then, when killed, take care to do a correct wait for all of its respective children, if any. This made sure that all children remained valid until the attempt to kill them. 
In point 5, we have to wait until the direct children have returned so that we can be sure that the signals were delivered and processed before attempting to see if any process is left. (Yes, if the algorithm fails to kill them we will stall at that point.) Given that each child can be safely killed as explained above, this wait recurses down the whole process tree, making sure that everything is cleaned up before we do the final checks for non-killed PIDs. This all sounds very simple and, in fact, looking at the final code, it is. But it certainly was not easy to write, basically because the code grew in ugly ways and the algorithms were much more complex than they ought to be.

January 17, 2008 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/process">process</a>
Continue reading (about 3 minutes)

How to kill a tree of processes

Yesterday I mentioned the need for a way to kill a tree of processes in order to effectively implement timeouts for test cases. Let's see how the current algorithm in ATF works:

1. Stop the root process by sending it a SIGSTOP so that it cannot spawn any new children while being processed.
2. Get the whole list of active processes and filter it to keep only those that are direct children of the root process.
3. Iterate over all the direct children and repeat from 1, recursively.
4. Send the real desired signal (typically SIGTERM) to the root process.

There are two major caveats in the above algorithm. The first is in point 2: there is no standard way to get the list of processes on a Unix system, so I have had to code three different implementations so far for this trivial requirement: one for NetBSD's KVM, one for Mac OS X's sysctl kern.proc node and one for Linux's procfs. The second, and worse, is in point 4: some systems (Linux and Mac OS X so far) do not seem to allow sending a signal to a stopped process. Well, strictly speaking they allow it, but the second signal seems to be simply ignored, whereas under NetBSD the process's execution is resumed and the signal is delivered. I do not know which behavior is correct. If we cannot send the signal to the stopped process, we can run into a race condition: we have to wake it up by sending a SIGCONT and then deliver the signal, but between these two events the process may have spawned new children that we are not aware of. Still, being able to send a signal to a stopped process does not completely resolve the race condition. If we are sending a signal that the user can reprogram (such as SIGTERM), the process may fork another one before exiting, and thus we'd fail to kill that one. But... well... this is impossible to resolve with the existing kernel APIs as far as I can tell. One solution to this problem is killing a timed-out test by using SIGKILL instead of SIGTERM.
SIGKILL would work in any case because it means die immediately, without giving the process a chance to mess with it. Therefore SIGCONT would not be needed at all — because you can simply kill a stopped process and it will die immediately as expected — and the process would not have a chance to spawn any more children after it had been stopped. Blah, after writing this I wonder why I went with all the complexity of dealing with signals that are not SIGKILL... call it over-engineering if you want...

January 16, 2008 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/portability">portability</a>, <a href="/tags/process">process</a>
Continue reading (about 3 minutes)

Implementing timeouts for test cases

One of the pending to-do entries for ATF 0.4 is (was, mostly) the ability to define a timeout for a test case after which it is forcibly terminated. The idea behind this feature is to prevent broken tests from stalling the whole test suite run, something that is already needed by the factor(6) tests in NetBSD. Given that I want to release this version this weekend, I decided to work on it instead of delaying it because... you know, it sounds pretty simple, right? Hah! What I did first was to implement this feature for C++ test programs and add tests for it. So far, so good. It effectively was easy to do: just program an alarm in the test program driver and, when it fires, kill the subprocess that is executing the current test case. Then log an appropriate error message. The tests for this feature deserve some explanation. What I do is program a timeout and then make the test case's body sleep for a period of time. I try different values for the two timers and, if the timeout is smaller than the sleeping period, the test must fail or otherwise there is a problem. The next step was to implement this in the shell interface, and this is where things got tricky. I did a quick and dirty implementation, and it seemed to make the same tests I had added for the C++ interface pass. However, when running the bootstrap test suite, it stalled at the cleanup part. Upon further investigation, I noticed that there were quite a lot of sleep(1) processes running while the test suite was stalled, and killing them explicitly let the process continue. You probably noticed where the problem was already. When writing a shell program, you are forking and executing external utilities constantly, and sleep(1) is one of them. It turns out that in my specific test case, the shell interpreter was just waiting for the sleep subprocess to finish (whereas in the C++ version everything happens in a single process). And killing a process does not kill its children. There you go.
My driver was just killing the main process of the test case, but not everything else that was running; hence the test did not die as expected, and things stalled until the subprocesses also died. Solving this was the fun part. The only effective way to make this work is to kill the test case's main process and, recursively, all of its children. But killing a tree of processes is not an easy thing to do: there is no system interface for it, there is no portable interface to get a list of children, and I'm still unsure whether it can be done without race conditions. I reserve the explanation of the recursive-kill algorithm I'm using for a future post. After some days of work, I've got this working under Mac OS X and also have automated tests to ensure that it effectively works (which were the hardest part by far). But as I foresaw, it fails miserably under NetBSD: the build was broken, which was easy to fix, but now it also fails at runtime, something that I have not diagnosed yet. Aah, the joys of Unix...

January 15, 2008 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/netbsd">netbsd</a>, <a href="/tags/process">process</a>
Continue reading (about 3 minutes)

Fixing id's command line parsing

Today's work: fixing NetBSD's id(1) command line parsing to match the documented syntax. Let me explain. Yesterday, for some unknown reason, I ended up running id(1) with two different user names as its arguments. Mysteriously, I only got the details for the first user back, and no error for the second one. After looking at the manual page and at what the GNU implementation does, I realized that the command is only supposed to take a single user name, or none at all, as its argument. OK, so "let's add a simple argc check to the code and raise the appropriate error when it is greater than 2". Yeah, right. If you look at id(1)'s main routine, you'll find an undecipherable piece of spaghetti code — have you ever thought about adding multiple ?flag variables and checking the result of their sum? — which comes from the fact that id(1)'s code is shared across three different programs: id(1), groups(1) and whoami(1). After spending some time trying to understand the rationale behind the code, I concluded that I could not safely fix the problem as easily as I first thought. Most likely, touching the logic in there would result in a regression somewhere else, basically because id(1) has multiple primary, mutually-exclusive options while groups(1) and whoami(1) are supposed to have their own syntax. Refactoring it would be just as unsafe. So what did I do? Thanks to ATF already being in NetBSD, I spent the day writing tests for all possible usages of the three commands (which was not trivial at all) and, of course, added stronger tests to ensure that the documented command line syntax was enforced by the programs. After that, I was fairly confident that if I changed the code and all the tests passed afterwards (especially those that passed before), I had not broken anything. Only once the tests were done did I make the change.
I know it will be hard to "impose" such a testing/bug-fixing procedure on other developers, but I would really like them to consider extensive testing... even for obvious changes or for trivial tools such as these. You never know when you have broken something until someone else complains later.

November 16, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/netbsd">netbsd</a>
Continue reading (about 2 minutes)

ATF 0.3 released

I've just published the 0.3 release of ATF. I could have kept delaying it indefinitely (basically because my time is limited now), so I decided it was time to do it even though it does not include some things I wanted. The important thing here is that this release will most likely be the one merged into the NetBSD source tree. If all goes well, this will happen during this week, which will finally give the project a lot of exposure :-)

November 11, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/netbsd">netbsd</a>
Continue reading (about 1 minute)

ATF meets XML

During the last couple of days, I've been working on the main major change planned for the upcoming ATF 0.3 release: the ability to generate XML reports with the test results so that they can later be converted to HTML. It goes like this: you first run a test suite by means of the atf-run tool, then use the atf-report utility to convert atf-run's output to the new XML format, and finally use a standard XSLT processor to transform the document into whichever other format you want. I'm including sample XSLT and CSS style-sheets in the package to ease this process. I've uploaded an example of what this currently looks like, but be aware that it is still a very preliminary mockup. For example, failed tests will need more detail attached to them to ease debugging later on. Comments welcome!

September 24, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/xml">xml</a>
Continue reading (about 1 minute)

ATF 0.2 released

I am pleased to tell you that ATF 0.2 has just been released! This is the first non-SoC release, coming exactly one month after the 0.1 release, which means that the project is still alive :-) This is just a quick note. For more details please see the official announcement. Enjoy!

September 20, 2007 · Tags: <a href="/tags/atf">atf</a>
Continue reading (about 1 minute)

SoC: Second preview of NetBSD with ATF

Reposting from the original ATF news entry: I have just updated the first preview of NetBSD-current release builds with ATF merged in to match the ATF 0.1 release published today. As already stated in the old news item: These will ease testing for the casual user who is interested in this project, because he will not need to mess with patches to the NetBSD source tree nor rebuild a full release, which is a delicate and slow process. For the best experience, these releases are meant to be installed from scratch, even though you can also upgrade a current installation. They will give you a preview of what a NetBSD installation will look like once ATF is imported into it; we are not sure when that will happen, though.

August 20, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/netbsd">netbsd</a>, <a href="/tags/soc">soc</a>
Continue reading (about 1 minute)

SoC: Some statistics

Here go some statistics about what has been done during the SoC 2007 program as regards ATF:

- The repository weighs in at 293 revisions, 1,174 certificates (typically 4 per revision, but some revisions have more) and 221 files. This includes ATF, the patches to merge it into the NetBSD build tree and the website sources. (mtn db info will give you some more interesting details.)
- The clean sources of ATF 0.1 (not counting the files generated by the GNU autotools) take 948KB and are 20,607 lines long (wow!). This includes the source code, the manual pages, the tests and all other files included in the distribution.
- The patches to merge ATF into NetBSD, according to diffstat, change 209 files with 6,299 line insertions and 4,583 line deletions. Aside from merging ATF into NetBSD, these changes also convert multiple existing regression tests to the new framework.

As regards the time I have spent on it... I don't know, but it has been a lot. It should have been more, as I had to postpone the start of coding by some weeks due to university work, but I think the results are quite successful and in line with expectations. I have been able to cover all the requirements listed in the NetBSD-SoC project page and have done some work on the would-be-nice ones. I am eager to see the results of the other NetBSD-SoC 2007 projects, as there was very interesting stuff in them :-)

August 20, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/soc">soc</a>
Continue reading (about 2 minutes)

SoC: ATF 0.1 released

To conclude the development of ATF as part of SoC, I've released a 0.1 version coinciding with the coding deadline (later today). This clearly draws a line between what has been done during the SoC program and what will be done afterwards. See the official announcement for more details! I hope you enjoy it as much as I did working on it.

August 20, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/soc">soc</a>
Continue reading (about 1 minute)

SoC: Status report

SoC's deadline is just five days away! I'm quite happy with the status of my project, ATF, but it will require a lot more work to be in decent shape — i.e. ready to be imported into NetBSD — and there really is no time to get that done in five days. Furthermore, it is still too unstable (in the sense that it changes a lot), so importing it right now could cause a lot of grief to end users. However, after a couple of important changes, it may be ready for a 0.1 release, and that's what I'm aiming for. I have to confess again that some parts of the code are horrible. That's basically because it has been gaining features in an iterative way, none of which were planned beforehand... so it has ended up being hack over hack. But don't worry: as long as there is good test coverage for all the expected features, this can easily be fixed. With a decent test suite, I'll be able to later rewrite any piece of code and be pretty sure that I have not broken anything important. (In fact, I've already been doing that for the innermost code, with nice results.) So what has changed since the preview?

- All files read by ATF, as well as all data formats used for serialization, now carry a header that specifies their format (a type and a version). This is very important to have from the very beginning so that the data formats can easily be changed in the future (which will certainly happen).
- Rewrote how test programs and atf-run print their execution status. Both now print a format that is machine-parseable and "sequential": reading the output from top to bottom, you can immediately know what the program is doing at the moment without having to wait for future data.
- Added the atf-report tool, which gathers the output of atf-run and generates a user-friendly report. At the moment it outputs plain text only, but XML (and maybe HTML) are planned. The previous point was a prerequisite for this one.
- Merged multiple implementation files into more generic modules.
- Merged the libatf and libatfprivate libraries into a single one. The simpler the better.
- Added build-time tests for all public headers, to ensure that they can be included without errors.
- Implemented run-time configuration variables for test programs and configuration files.

Wow, that's a lot of stuff :-) And talking with my mentor five days ago, we came up with the following list of pending work to get done before the deadline:

- Configuration files. Already done as of an hour ago!
- A plain text format that clearly describes the results of the test cases (similar to what src/regress/README explains). I haven't looked at that yet, but this will be trivial with the new atf-report tool.
- Would be nice: HTML output. Rather easy. But I'm unsure about this point: it may be better to define an XML format only and then use xsltproc to transform it.
- Manual pages: a must for 0.1 (even if they are not too detailed), but not really required for the evaluation.
- Code cleanups: can be done after SoC, but I personally dislike showing ugly code. Unfortunately there is not enough time to spend on this. Cleaning up a module means rewriting most of it, documenting each function/class and adding exhaustive unit tests for it. It is painful, really, but the results are rewarding.
- Keep the NetBSD patches in sync with development: I'm continuously doing that!

Let's get back to work.

August 15, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/soc">soc</a>
Continue reading (about 3 minutes)

SoC: First preview of NetBSD with ATF

Reposting from the original ATF news entry: I have just uploaded some NetBSD-current release builds with ATF merged in. These will ease testing for the casual user who is interested in this project, because he will not need to mess with patches to the NetBSD source tree nor rebuild a full release, which is a delicate and slow process. For the best experience, these releases are meant to be installed from scratch, even though you can also upgrade a current installation. They will give you a preview of what a NetBSD installation will look like once ATF 0.1 is made public, which should happen later this month. For more details see my post to NetBSD's current-users mailing list. Waiting for your feedback :-) Edit (Aug 20th): Fixed a link.

August 8, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/netbsd">netbsd</a>, <a href="/tags/soc">soc</a>
Continue reading (about 1 minute)

SoC: Status report

It has already been a week since the last SoC-related post, so I owe you a status report. Development has continued at a constant rate and, although I work a lot on the project, it may seem to advance slowly from an external point of view. The thing is that getting the ATF core components complete and right is a tough job! Just look at the current and incomplete TODO list to see what I mean. Some things worth noting:

- The NetBSD cross-build tool-chain no longer requires a C++ compiler to build the atf-compile host tool. I wrote a simplified version in POSIX shell to be used as the host tool alone (not to be installed). This is also used by ATF's distfile to allow "cross-building" its own test programs.
- Improved the cleanup procedure for the test cases' work directories by handling mount points in them. This is done through a new tool called atf-cleanup.
- Added a property to let test cases specify whether they require root privileges.
- Many bug fixes, cleanups and new test cases; these are driving development right now.

On the NetBSD front, there have also been several cosmetic improvements and bug fixes, but most importantly I've converted tmpfs' test suite to ATF. This conversion is what has spotted many bugs and missing features in ATF's code; the TODO file has grown basically because of it. So, at the moment, both the regress/bin and regress/sys/fs/tmpfs trees in NetBSD have been converted to ATF. I think that's enough for now and that I should focus on adding the necessary features to ATF to improve these tests. One of these is support for a configuration file to let the user specify how certain tests should behave; e.g. how to become root or which specific file system to use for certain tests. I also have a partial implementation of a "fork" property for test cases that executes them in subprocesses. This way they will be able to mess all they want with the open file descriptors without disturbing the main test program. But to get there, I first need to clean up the reporting of test case results. On another front, I have also started preparing manual pages for the user tools, as some of them should remain fairly stable at this point.

July 28, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/soc">soc</a>, <a href="/tags/tmpfs">tmpfs</a>
Continue reading (about 2 minutes)

SoC: ATF self-testing

ATF is a program and, as with any application, it must be (automatically) tested to ensure it works according to its specifications. But as you already know, ATF is a testing framework, so... is it possible to automatically test it? Can it test itself? Should it? The thing is: it can and it should, but things are not so simple. ATF can test itself because it is possible to define test programs through ATF that check the ATF tools and libraries. ATF should test itself because the resulting test suite will be a great source of example code and because its execution will, on its own, be a good stress test for the framework. See the tests/atf directory to check what I mean; especially the unit tests for the fs module, which I've just committed, are quite nice :-) (For the record: there currently are 14 test programs in that directory, which account for a total of 60 test cases.) However, ATF should not be tested exclusively by means of itself. If it were, any failure (even the most trivial one) in ATF's code could result in false positives or false negatives during the execution of the test suite, leading to wrong results that are hard to discover and diagnose. Imagine, for example, that a subtle bug made test failures be reported as passes. All tests could start to succeed immediately and nobody would easily notice, surely leading to errors in further modifications. This is why a bootstrapping test suite is required: one that ensures that the most basic functionality of ATF works as expected, but which does not use ATF to run itself. This additional test suite is already present in the source tree and is written using GNU Autotest, given that I'm using the GNU Autotools as the build system. Check the tests/bootstrap directory to see what all this is about. ATF's self-testing is, probably, the hardest thing I've encountered in this project so far. It is quite tricky and complex to get right, but it's cool!
Despite being hard, having a complete test suite for ATF is a must so it cannot be left aside. Would you trust a testing framework if you could not quickly check that it worked as advertised? I couldn't ;-)

July 20, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/soc">soc</a>
Continue reading (about 2 minutes)

Daggy fixes (in Monotone)

If you inspect ATF's source code history, you'll see a lot of merges. But why is that, if I'm the only developer working on the project? Shouldn't the revision history be linear? Well, the thing is it needn't and it shouldn't; the subtle difference is important here :-) It needn't be linear because Monotone is a VCS that stores history in a DAG, so it is completely natural to have a non-linear history. In fact, distributed development requires such a model if you want to preserve the original history (instead of stacking changes on top of revisions different from the original ones). On the other hand, it shouldn't be linear because there are better ways to organize the history. As the DaggyFixes page in the Monotone wiki puts it: "All software has bugs, and not all changes that you commit to a source tree are entirely good. Therefore, some commits can be considered 'development' (new features), and others can be considered 'bugfixes' (redoing or sometimes undoing previous changes). It can often be advantageous to separate the two: it is common practice to try and avoid mixing new code and bugfixes together in the same commit, often as a matter of project policy. This is because the fix can be important on its own, such as for applying critical bugfixes to stable releases without carrying along other unrelated changes." The key idea here is that you should group a bug fix alongside the original change that introduced it, if it is clear which commit that is and you can easily locate it. And if you do that, you end up with a non-linear history that requires a merge for each bug fix to resolve the divergences inside a single branch. I certainly recommend reading the DaggyFixes page. One more reason to switch to Monotone (or any other DAG-based VCS, of course)? ;-) Oh, I now notice I once blogged about this same idea, but that page is far clearer than my explanation.
That is why you'll notice lots of merges in the ATF source tree: I've started applying this methodology to see how well it behaves, and I find it very interesting so far. I'd now hate switching to CVS and losing all the history for the project (because attempting to convert it to CVS's model could be painful), even if that history is not all that interesting.

July 17, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/monotone">monotone</a>
Continue reading (about 2 minutes)

Recovering two old Macs

Wow, it has already been three years since a friend and I found a couple of old Macintoshes in a trash container1. Each of us picked one, and maybe a year ago or so I gave mine to him as I had no space at home to keep it. Given that he did not use them, and that I enjoy playing with old hardware, I traded an old Pentium 3 I had lying around for the two machines :-) The plan is to install NetBSD-current on at least one of them and some other system (or NetBSD version) on the other, to let me ensure ATF is really portable to bizarre hardware (running sane systems, though). The machines are these:

- A Performa 475: Motorola 68040 LC, 4MB of RAM, 250MB SCSI hard disk, no CD-ROM, Ethernet card.
- A Performa 630: Motorola 68040 LC, 40MB of RAM, 500-something MB IDE hard disk (I will replace it with something bigger), CD-ROM, Ethernet card.

I originally kept the Performa 630 and already played with it when we found the machines. Among other things, I replaced the PRAM battery with a home-grown solution, added support for changing NetBSD's console colors (because the black-on-white default on NetBSD/mac68k is annoying, to say the least) and imported the softfloat support for this platform. Then, the turn of the Performa 475 came last week. When I tried to boot it, it failed miserably. I could hear the typical Mac boot-time chime, but after that the screen stayed black and the machine was completely unresponsive. After Googling a bit, I found that the black screen could be caused by a dead PRAM battery, but I assumed that the machine could still work; the thing is I could not hear the hard disk at all, and therefore I was reluctant to put a new battery in it. Anyway, I finally bought the battery (very expensive, around 7€!), put it in, and the machine booted! Once it was up, I noticed that there was a huge amount of software installed: Microsoft Office, LaTeX tools, Internet utilities (including Netscape Navigator), etc.
And then, when checking what hardware was in the machine, I was really, really surprised. All these programs were working with only 250MB of hard disk space and 4MB of RAM! Software bloat nowadays? Maybe... Well, if I want this second machine to be usable, I'll have to find some more RAM for it. But afterwards I hope it'll be able to run another version of NetBSD or maybe a Linux system. 1 That also reminds me that this blog is three years old too!

July 16, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/hardware">hardware</a>, <a href="/tags/mac">mac</a>
Continue reading (about 3 minutes)

SoC: Web site for ATF

While waiting for a NetBSD release build to finish, I've prepared the web site for ATF. It currently lacks information in a lot of areas, but the most important ones for now — the RSS feed for news and the Repository page — are quite complete. Hope you like it! Comments welcome, of course :-)

July 16, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/soc">soc</a>
Continue reading (about 1 minute)

SoC: Converting NetBSD 'regress' tests

I've finally reached a point where I can start converting some of the current regression tests in the NetBSD tree to the new ATF system. To prove it, I've migrated all the tests that currently live in regress/bin to the new framework; they now live in /usr/tests/util/. This has not been a trivial task — and it is not completely done yet, as there still are some rough edges — but I'm quite happy with the results. They show me that I'm on the right track :-) and, more importantly, they show outsiders how things are supposed to work. If you want more information on this specific change, you can look at the revision that implements it or simply inspect the corresponding patch file. By the way, some of the tests already fail! That's because they were not run often enough in the past, something that ATF is meant to fix. While waiting for a NetBSD release build to complete, I have started working on a real web site for ATF. I don't know if I'll keep working on it now because it's a tough job and there is still a lot of coding to do. Really! But, on the other hand, having a nice project page is very good marketing.

July 15, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/soc">soc</a>
Continue reading (about 2 minutes)

SoC: Code is public now

Just in time for the mid-term evaluation (well, with one day of delay), I've made the atf's source code public. This is possible thanks to the public and free monotone server run by Timothy Brownawell. It's nice to stay away from CVS ;-) See the How to get it section at the atf's page for more details on how to download the code from that server and how to trust it (it may take an hour or two for the section to appear). You can also go straight to the source code browser. If, by any chance, you decide to download the code, be sure to read the README files as they contain very important information. And... don't start nitpicking yet! I haven't had a chance to clean up the code yet, and some parts of it really suck. Cleaning it up is the next thing I'll be doing, and I already started with the shell library :-)

July 10, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/soc">soc</a>
Continue reading (about 1 minute)

SoC: Short-term planning

SoC 2007's mid-term evaluation is around the corner: I have to present some code by July 9th. In fact, it'd already be public if we used a decent VCS instead of CVS, but for now I'll keep the sources in a local Monotone database. We'll see how they'll be made public later on. Summarizing what has been done so far: I've already got a working prototype of the atf core functionality. I have to confess that the code is still very ugly (and by that I mean you really don't want to see it!) and that it is incomplete in many areas, but it is good enough as a proof of concept and as a base for further incremental improvement. What it currently allows me to do is:

- Write test programs (a collection of test cases) in C. In reality it is C++, as we already saw, but I've added several macros to hide the internals of the library and simplify the definition of test cases. So the test writer will basically not need to know that he's using C++ under the hood. (This also mimics the shell-based interface as much as possible.)
- Write test programs in POSIX shell. Similar to the above, but for tests written as shell scripts. (Which I think will be the majority.)
- Define a "tree of tests" in the file system and recursively run them all, collecting the results in a single log. This can be done without the source tree or the build tools (in particular make), and by the end user.
- Write many tests to test atf itself. More on this in tomorrow's post.

What I'm planning to do now, before the mid-term evaluation deadline, is to integrate the current code into NetBSD's build tree (not talking about adding it to the official CVS yet, though) to show how all these ideas apply to NetBSD testing, and to ensure everything works with build.sh and cross-compilation. Once this is done, which I hope shouldn't take very long, I will start polishing the current atf core implementation.
This means rewriting several parts of the code (in special to make it more error-safe), adding more tests, adding manual pages for the tools and the interfaces, etc. This is something I'm willing to do, even though it'll be a hard and long job.
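The idea of hiding C++ internals behind C-like macros can be sketched as follows. This is purely an illustration of the technique, not the real atf API: all names (TEST_CASE, CHECK, run_all_tests) are made up for this example.

```cpp
// Hypothetical sketch of how macros can hide a C++ test framework
// behind a C-like interface; NOT the actual atf API.
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

namespace impl {
    // Hidden C++ machinery: each test case registers itself globally
    // at static-initialization time.
    struct test_case {
        std::string name;
        void (*body)(bool&);
        test_case(const std::string& n, void (*b)(bool&)) : name(n), body(b) {
            registry().push_back(this);
        }
        static std::vector<test_case*>& registry() {
            static std::vector<test_case*> tcs;
            return tcs;
        }
    };
}

// The macros are all the test writer sees: plain, C-like syntax.
#define TEST_CASE(name) \
    static void name ## _body(bool& failed_); \
    static impl::test_case name ## _tc(#name, name ## _body); \
    static void name ## _body(bool& failed_)

// CHECK may only be used inside a TEST_CASE body.
#define CHECK(expr) \
    do { if (!(expr)) { std::printf("    failed: %s\n", #expr); \
                        failed_ = true; } } while (0)

// A sample test case written against the macro interface.
TEST_CASE(addition) {
    CHECK(1 + 1 == 2);
}

// Runs every registered test case; returns the number of failures.
inline int run_all_tests(void) {
    int failures = 0;
    for (std::size_t i = 0; i < impl::test_case::registry().size(); i++) {
        impl::test_case* tc = impl::test_case::registry()[i];
        std::printf("%s: ", tc->name.c_str());
        bool failed = false;
        tc->body(failed);
        std::printf("%s\n", failed ? "FAILED" : "passed");
        if (failed)
            failures++;
    }
    return failures;
}
```

The test writer only ever touches TEST_CASE and CHECK, which is the point: the same surface could be mirrored by the shell interface.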

July 2, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/soc">soc</a>
Continue reading (about 2 minutes)

SoC: The atf-run tool

One of the key goals of atf is to let the end user — not only the developer — easily run the tests after their corresponding application is installed. (In our case, application = NetBSD, but remember that atf also aims to be independent from NetBSD.) This also means, among other things, that the user must not need any development tools installed (the comp.tgz set) to run the tests, which rules out using make(1)... how glad I am about that! :-)

Based on this idea, each application using atf will install its tests alongside its binaries, the current location being /usr/tests/<application>. These tests will be accompanied by a control file — an Atffile — that lists which tests have to be run and in which order. (In the future this may also include configuration or some other settings.) Later on, the user will be able to launch the atf-run tool inside any of these directories to automatically run all the provided tests, and the tool will generate a pretty report while the tests run.

Given that atf is an application, it has to be tested. After some work today, it is finally possible for atf to test itself! :-) Of course, it also comes with several bootstrap tests, written using GNU Autotest, to ensure that atf's core functionality works before one can run the tests written using atf itself. Otherwise one could get unexpected passes due to bugs in the atf code. This is what atf installs:

$ find /tmp/local/tests
/tmp/local/tests
/tmp/local/tests/atf
/tmp/local/tests/atf/Atffile
/tmp/local/tests/atf/units
/tmp/local/tests/atf/units/Atffile
/tmp/local/tests/atf/units/t_file_handle
/tmp/local/tests/atf/units/t_filesystem
/tmp/local/tests/atf/units/t_pipe
/tmp/local/tests/atf/units/t_pistream
/tmp/local/tests/atf/units/t_postream
/tmp/local/tests/atf/units/t_systembuf
$

All the t_* files are test programs written using the features provided by libatf. As you can see, each directory provides an Atffile which lists the tests to run and the directories to descend into.

The atf-run tool already works (*cough* its code is ugly, really ugly) and returns an appropriate error code depending on the outcomes of the tests. However, the report it generates is completely unintelligible. This will be the next thing to attack: I want to be able to generate plain-text reports that can be followed as the tests run, but also to generate pretty HTML files. To do the latter, the plan is to use some intermediate format such as XML and have another tool do the formatting.
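The post does not show the Atffile syntax, so the following sketch just assumes a hypothetical format — one test program or subdirectory name per line, with blank lines and #-comments ignored — to illustrate the kind of parsing a tool like atf-run would do. The helper name parse_atffile is made up for this example.

```cpp
// Hypothetical Atffile parser; the real file format may differ.
#include <sstream>
#include <string>
#include <vector>

// Returns the entries (test programs or subdirectories) listed in the
// given Atffile contents, assuming one name per line with #-comments.
std::vector<std::string> parse_atffile(const std::string& contents) {
    std::vector<std::string> entries;
    std::istringstream input(contents);
    std::string line;
    while (std::getline(input, line)) {
        // Strip trailing comments.
        std::string::size_type hash = line.find('#');
        if (hash != std::string::npos)
            line.erase(hash);
        // Trim surrounding whitespace; skip lines left empty.
        std::string::size_type first = line.find_first_not_of(" \t");
        if (first == std::string::npos)
            continue;
        std::string::size_type last = line.find_last_not_of(" \t");
        entries.push_back(line.substr(first, last - first + 1));
    }
    return entries;
}
```

A runner would then iterate over the returned entries, executing each test program and recursing into each subdirectory's own Atffile.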

June 30, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/soc">soc</a>
Continue reading (about 2 minutes)

SoC: Prototypes for basename and dirname

Today I've attempted to build atf on a NetBSD 4.0_BETA2 system I've been setting up in a spare box I had around, as opposed to the Mac OS X system I'm using for daily development. The build failed due to some well-understood problems, but there was an annoying one with respect to some calls to the standard XPG basename(3) and dirname(3) functions. According to the Mac OS X manual pages for these functions, they are supposed to take a const char * argument. However, the NetBSD versions of these functions take a plain char * parameter instead — i.e., not a constant pointer. After Googling for some references and with advice from joerg@, I got the answer: it turns out that the XPG versions1 of basename and dirname may modify the input string by trimming trailing directory separators (even though the current implementation in NetBSD does not do that). This makes no sense to me, but it's what the XPG4.2 and POSIX.1 standards specify. I've resolved this issue by simply re-implementing basename and dirname (which I wanted to do anyway), making my own versions take and return constant strings. And to make things safer, I've added a check to the configure script that detects whether the system's basename and dirname implementations take a constant pointer and, in that (incorrect) case, uses the standard functions to sanity-check the results of my own by means of an assertion.

1 Of course, the GNU libc library provides its own variations of basename and dirname. However, including libgen.h forces the usage of the XPG versions.
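A const-safe reimplementation along these lines might look as follows. This is only a sketch, not atf's actual code: the function name my_basename is made up, and dirname would be handled analogously. It follows the POSIX semantics of ignoring trailing slashes, but without ever modifying its input.

```cpp
// Sketch of a basename that follows POSIX semantics (trailing slashes
// are ignored) while taking and returning constant strings; the name
// my_basename is hypothetical.
#include <string>

std::string my_basename(const std::string& path) {
    if (path.empty())
        return ".";                  // POSIX: basename("") is ".".
    // Ignore trailing slashes; a path of only slashes yields "/".
    std::string::size_type end = path.find_last_not_of('/');
    if (end == std::string::npos)
        return "/";                  // e.g. "/" or "///".
    // The component starts right after the last slash before `end`.
    std::string::size_type slash = path.find_last_of('/', end);
    std::string::size_type start =
        (slash == std::string::npos) ? 0 : slash + 1;
    return path.substr(start, end - start + 1);
}
```

Because the input is passed by const reference, a conformant-but-destructive system basename can still be used to cross-check these results in an assertion, which is exactly what the configure-time check described above enables.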

June 28, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/portability">portability</a>, <a href="/tags/soc">soc</a>
Continue reading (about 2 minutes)

SoC: Start of the atf tools

Aside from the libraries I already mentioned in a past post, atf1 will also provide several tools to run the tests. An interesting part of the problem, though, is that many tests will be written in the POSIX shell language, as that will be much easier than writing them in C or C++: the ability to rapidly prototype tests is a fundamental design goal; otherwise nobody would write them! However, providing two interfaces to the same framework (one in POSIX shell and one in C++) means that there could be a lot of code duplication between the two if not done properly. Not to mention that sanely and safely implementing some of these features in shell scripting could be painful.

To resolve the above problem, atf will also provide several binary tools that act as helpers for the shell scripts. Most of these tools will be installed in the libexec directory, as they should not be exposed to the user, yet the shell scripts need to be able to reach them. The key idea is to later build the shell interface on top of the binary one, reusing as much code as possible. So far I have the following tools:

- atf-config: Used to dynamically query information about atf's installation. This is needed to let the shell scripts locate where the tools in libexec can be found (because they are not in the path!).
- atf-format: Pretty-prints a message (single- or multi-paragraph), wrapping it on terminal boundaries.
- atf-identify: Calculates a test program's identifier based on where it is placed in the file system. Test programs will be organized in a directory hierarchy, and each of them has to have a unique identifier.

The next one to write, hopefully, will be atf-run: the tool to effectively execute multiple test programs in a row and collect their results. Oh, and in case you are wondering: yes, I have decided to provide each tool as an independent binary instead of a big one that wraps them all (such as cvs(1)). This keeps them as small as possible — so that shell scripts can load them quickly — and seems more in line with the traditional Unix philosophy of having tools that do very specific things :-)

1 Should I spell it atf, ATF or Atf?
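The core of what a tool like atf-format does — wrapping a message at a column boundary — can be sketched with a simple greedy algorithm. The real tool's behavior and options are not described in the post, so this is only an illustration of the idea; the function name wrap is made up.

```cpp
// Minimal greedy word-wrapping sketch, in the spirit of atf-format;
// not the real tool's implementation.
#include <sstream>
#include <string>

// Re-flows `text` so that no line exceeds `width` columns (except for
// single words longer than the width, which are left on their own line).
std::string wrap(const std::string& text, std::string::size_type width) {
    std::istringstream words(text);
    std::string word, line, out;
    while (words >> word) {
        if (line.empty())
            line = word;                       // First word of a line.
        else if (line.length() + 1 + word.length() <= width)
            line += " " + word;                // Fits on the current line.
        else {
            out += line + "\n";                // Start a new line.
            line = word;
        }
    }
    if (!line.empty())
        out += line + "\n";
    return out;
}
```

A real implementation would additionally query the terminal width (e.g. via the COLUMNS environment variable or an ioctl) instead of taking it as a parameter.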

June 27, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/soc">soc</a>
Continue reading (about 2 minutes)

SoC: A quote

I've already spent a bunch of time working on the packaging (as in what will end up in the .tar.gz distribution files) of atf, even though it is still in very preliminary stages of development. This involved:

- Preparing a clean and nice build framework, which, due to the tools I'm using, meant writing the configure.ac and Makefile.am files. This involved adding some useful comments and, even though I'm familiar with these tools, re-reading parts of the GNU Automake and GNU Autoconf manuals; this last step is something that many, many developers bypass, and they therefore end up with really messy scripts, as if these files weren't important. (Read: if the package used some other tools, there'd be no reason not to write pretty and clean build files.)
- Preparing the package's documentation, or at least placeholders for it: I'm referring to the typical AUTHORS, COPYING, NEWS, README and TODO documents that many developers seem to treat as garbage, filling them up at the last minute before pushing out a release and ending up with badly formatted texts full of typos. Come on, that's the very first thing a user will see after unpacking a distribution file, so these ought to be pretty!

Having spent a lot of time packaging software for pkgsrc and dealing with source code from other projects, I have to say that I've found dozens of packages that do not have the minimum quality one can expect in the above points. I don't like to point fingers, but I have to: this includes several GNOME packages and libspe. This last one is fairly "interesting" because its user documentation is high-quality, but all the credibility slips away when you look at the source code packaging... To all those authors: “Programs should be written and polished until they acquire publication quality.” — Niklaus Wirth

June 26, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/packaging">packaging</a>, <a href="/tags/soc">soc</a>
Continue reading (about 2 minutes)

SoC: Project name

The automated testing framework I'm working on is a project idea that has been around for a very long time. Back in SoC 2005, this project was selected but, unfortunately, it was not developed. At that time, the project was named regress, a name derived from the name currently used in NetBSD's source tree to group all available tests: the src/regress directory. In my opinion, the "regress" name was not very adequate because regression tests are just one kind among all possible tests: those that detect whether a feature that was supposed to be working has started to malfunction. There are other kinds of tests, such as unit tests, integration tests and stress tests, all of which seemed to be excluded from the project just because of its name.

When I wrote my project proposal this year, I tried to avoid the "regression testing" name wherever possible and, instead, simply used the word "testing" to emphasize that the project was not focusing on any specific test type. Based on that, the NetBSD-SoC administrators chose the atf name for my project, which stands for Automated Testing Framework. This is a very simple name, and, even though it cannot be easily pronounced, I don't dislike it: it is short, feels serious and clearly represents what the project is about.

And for the sake of completeness, let me mention another idea I had for the project's name. Back when I proposed it, I thought it could be named "NetBSD Automated Testing Framework", which could then be shortened to nbatf or natf (very similar to the current name, eh?). Based on the latter name, I thought... the "f" makes it hard to pronounce, so it'd be reduced to "nat", and then it could be translated to the obvious (to me) person name that derives from it: Natalie. That name stuck in my head for a short while, but it doesn't look too serious for a project name, I guess ;-) But now, as atf won't be tied to NetBSD, it doesn't make much sense anyway.

June 25, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/soc">soc</a>
Continue reading (about 2 minutes)

SoC: Getting started

This weekend I have finally been able to start coding for my SoC project: the Automated Testing Framework for NetBSD. To my disliking, this has been delayed too much... but I was so busy with my PFC that I couldn't find any other chance to get my hands on it. I've started by working on two core components:

- libatf: The C/C++ library that will provide the interface to easily write test cases and test suites. Among its features will be the ability to report the results of each test, a way to define meta-data for each test, etc.
- libatfmain: A library that provides a default entry point for test programs, which takes care of running the test cases in a controlled environment — e.g. it captures all signals and deals with them gracefully — and provides a standard command-line interface for them.

Soon after I started coding, I realized that I would need to write my own C code to deal with data collections and safe strings. I hate that, because it is a very boring task — it is not related to the problem at hand at all — and because it involves reinventing the wheel: virtually all other languages provide these two features for free. But wait! NetBSD has a C++ compiler, and atf won't be a host tool1. So... I can take advantage of C++, and I'll try to. Calm down! I'll try to avoid some of the "complex" C++ features as much as possible to keep the executables' size small enough. You know how binaries' sizes blow up when using templates... Oh, and by the way: keep in mind that test cases will typically be written in POSIX shell script, so in general you won't need to deal with the C++ interface.

Furthermore, I see no reason for atf to be tied to NetBSD. The test cases will surely be, but the framework needn't. Thus I'm thinking of creating a standalone package for atf itself and distributing it as a completely independent project (under the TNF2 umbrella), which will later be imported into the NetBSD source tree as we currently do for other third-party projects such as Postfix. In fact, I've already started work in this direction by creating the typical infrastructure to use the GNU auto-tools. Of course this separation could always be done at a later step in the development, but doing it from the very beginning ensures the code is free of NetBSD-isms, emphasizes the desire for portability and keeps the framework self-contained. I'd like to hear your comments about these "decisions" :-)

1 A host tool is a utility that is built with the operating system's native tools instead of with NetBSD's tool-chain: i.e. host tools are what "build.sh tools" builds. Such tools need to be highly portable because they have to be accepted by old compilers and bizarre build environments.

2 TNF = The NetBSD Foundation.
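The "controlled environment" idea behind libatfmain — catching fatal signals while a test body runs and reporting them as a failure instead of letting the whole test program crash — can be sketched with sigsetjmp/siglongjmp. This is a hypothetical illustration, not libatfmain's code; all names (run_guarded and the sample bodies) are made up.

```cpp
// Hypothetical sketch of running a test body with fatal signals
// intercepted; not the actual libatfmain implementation.
#include <setjmp.h>
#include <signal.h>
#include <string>

static sigjmp_buf env;
static volatile sig_atomic_t caught = 0;

extern "C" void handler(int signo) {
    caught = signo;
    siglongjmp(env, 1);  // Abort the test body, restoring the signal mask.
}

// Runs `body` with common fatal signals intercepted; returns "passed"
// or a message naming the signal that terminated the body.
std::string run_guarded(void (*body)(void)) {
    struct sigaction sa;
    sa.sa_handler = handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    const int signals[] = { SIGSEGV, SIGBUS, SIGFPE, SIGABRT };
    for (size_t i = 0; i < sizeof(signals) / sizeof(signals[0]); i++)
        sigaction(signals[i], &sa, 0);

    caught = 0;
    if (sigsetjmp(env, 1) == 0) {
        body();              // Normal return: the test body completed.
        return "passed";
    }
    // We got here via siglongjmp from the handler.
    return "terminated by signal " + std::to_string(static_cast<int>(caught));
}

// Sample bodies for demonstration purposes.
void ok_body(void) {}
void crashing_body(void) { raise(SIGFPE); }
```

A real framework would do much more (restore the previous handlers, isolate each test case in its own process, capture the command line, etc.), but this shows why a default entry point is worth providing: every test program gets this behavior without writing any of it.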

June 24, 2007 · Tags: <a href="/tags/atf">atf</a>, <a href="/tags/netbsd">netbsd</a>, <a href="/tags/soc">soc</a>
Continue reading (about 3 minutes)