Bazel likes creating very deep and large trees on disk during a build. One example is the output tree, which naturally contains all the artifacts of your build. Another, more problematic example is the symlink forest trees created for every action when sandboxing is enabled. As garbage gets created, it must be deleted.

It turns out, however, that deleting file system trees can be very expensive—and especially so on macOS. In fact, calls to our deleteTree algorithm routinely showed up in my profiling runs when trying to diagnose slowdowns using the dynamic scheduler. One thing I quickly wondered was: why can I easily catch Bazel stuck in the tree deletion but I can never catch it busily creating such a tree? Is tree deletion inherently slow or are we doing something stupid?

A quick look at the algorithm we used to have in FileSystemUtils#deleteTree() reveals that it is the latter. Paraphrasing to remove the irrelevant differences between deleteTree and deleteTreesBelow:

public static void deleteTree(Path path) throws IOException {
  if (path.isDirectory(Symlinks.NOFOLLOW)) {
    path.setReadable(true);
    path.setWritable(true);
    path.setExecutable(true);
    for (Path child : path.getDirectoryEntries()) {
      deleteTree(child);
    }
  }
  path.delete();
}

We can observe a few inefficiencies here:

  1. Those three path.set* calls end up being individual chmod(2) calls. They could all be combined into one. Why wasn’t this done?
  2. Do we always have to “unprotect” the directories before trying to delete their contents? Protected directories are likely not the common case given that the majority of the trees we try to delete from within Bazel are trees we created ourselves.
  3. How does the path.delete() call know whether it has to issue an rmdir(2) or an unlink(2)? It likely has to try both, or it has to stat(2) the file. But note we already stat(2)ed the file once to check if it was a directory! We should be able to reuse that information.
  4. Reading the contents of a directory typically also returns the types of those entries. We could use that to avoid the stats.
  5. We are constructing a ton of superfluous strings (one for each entry to be removed), all of which are absolute paths. Paths in Bazel tend to be very long, so the amount of data copied every single time is non-trivial and thus likely has a noticeable CPU cost.


The major problem is the disproportionate number of system calls issued: for every file to delete we issue 2-3 system calls, and for every directory, 7+.
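To make that count concrete, here is an illustrative breakdown of the calls the old algorithm ends up issuing for a single directory (the names are hypothetical; the real code goes through Java’s file system abstraction rather than calling libc directly):

```
lstat(dir)          // isDirectory(Symlinks.NOFOLLOW)
chmod(dir, ...)     // setReadable(true)
chmod(dir, ...)     // setWritable(true)
chmod(dir, ...)     // setExecutable(true)
open/readdir/close  // getDirectoryEntries()
lstat(dir)          // delete() figuring out rmdir vs. unlink
rmdir(dir)          // delete() finally removing the entry
```

And for each plain file inside, we pay an lstat for the isDirectory check plus whatever delete() needs to decide between unlink and rmdir.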

We should and can do much better:

  • readdir(2) returns the type of the entries it finds in the common case so we can avoid the stat to query the type of each entry. (We may still get DT_UNKNOWN as the type, for example when on NFS, but we can handle that with a stat.)
  • Once we know what type of entry we have to delete, we can directly issue unlink(2) or rmdir(2) system calls to avoid having to guess later on via remove(3).
  • We can assume that the tree to be deleted is readable and writable, so we shouldn’t issue any chmod(2) calls unless we really have to.
  • We don’t need to build any intermediate paths: we can use operations like openat(2) and unlinkat(2) to work purely on file descriptors.

All of this is much easier to do efficiently from C than from Java. Fortunately, the UnixFileSystem implementation of the file system objects backing the paths we deal with already had JNI glue, so I only had to update that piece of code. The amount of code required to implement this is larger than I expected, however. See commit fac322b1.

So. Does this make any difference? You bet!

For a large build of an iOS app on macOS using sandboxing (the default configuration in Bazel), this cut total build times from 7,300s down to 5,100s. That’s a 30% speedup. And it has had nice, unanticipated effects on other parts of the build too: cleaning up the results of that same build via bazel clean used to take about 18s and now takes about 12s (a similar ~30% speedup).

Moral of the story? Be mindful of overheads. All the powerful abstractions we rely on make it too easy to miss details like this.