Some programming languages have a feature known as strict mode: a setting that makes the language interpreter disallow certain obviously-broken code that would otherwise work. The best-known examples are JavaScript and Perl but, as it turns out, the shell also has something akin to this feature. The “strict mode” name, however, is unofficial, so you won’t find many references to it online.

You can enable the shell’s strict mode by doing one of the following:

set -eu            # For POSIX-compliant shell scripts.
set -euo pipefail  # For scripts that already depend on Bash.

In the majority of cases, prefer the former. The reasons to depend on Bash are few and rare, and I will be skeptical if you claim they apply to you… but that’s something I should cover another day.

The commands above enable three independent options: set -e, set -u, and set -o pipefail. I will cover each of them in detail in this post, including their downsides, and I will also take a quick look at set -x which, while not part of strict mode, is handy for troubleshooting strict mode-related problems.

Let’s proceed. But first, a word of caution.

WARNING: Be wary of using this “strict mode”. While its features sound good on paper, they have plenty of caveats. In my opinion, you are better off having comprehensive unit and integration tests for your program. Strict mode is indeed handy, but mostly for simple throw-away scripts.

Oh, and by the way… you may disagree, but I feel that the presence of strict mode is an indicator that the language is just flawed. Stay away if you can.

set -e

set -e causes the shell to exit when an unguarded statement evaluates to a false value. In other words: the script terminates as soon as a command returns a non-zero exit code that was not expected. You can equate this to the way languages with exceptions work: if you throw an exception and don’t catch it, the exception bubbles up and the program crashes.

The tricky thing is to define what an unexpected non-zero exit code is. Consider these non-exhaustive examples:

# Bare commands terminate execution if they return non-zero.
false
cp /missing-file /tmp

# Commands protected by a conditional do not terminate execution.
if false; then
    echo "True"
else
    echo "False"
fi

# Guarded commands do not terminate execution if they evaluate to true.
cp /this-will-fail /tmp || echo "cp failed but continuing anyway..."

# But guarded commands terminate execution if they evaluate to false.
cp /this-will-fail /tmp || cp /this-fails-too /tmp

The problem is that the set of rules governing the behavior of set -e is obscure, and different shells (even different versions!) handle these rules differently. Here are a few examples of things that might seem fine but are actually buggy:

# Looks harmless, but if the script is given arguments the test fails and the
# whole statement evaluates to false. Depending on the shell and on where this
# line appears (e.g. as the last command of a function), set -e can then
# terminate the script even though nothing actually went wrong.
[ $# -eq 0 ] && set -- "default" "arguments"

# The rm call always runs, even if get_config_value fails: local's own
# (successful) exit status masks the command substitution's failure, so
# prefix may end up empty and this effectively becomes "rm -rf /bin".
local prefix="$(get_config_value PREFIX)"
rm -rf "${prefix}/bin"
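
For reference, here is a sketch of how those two statements could be written to sidestep the ambiguity (using the same get_config_value helper as above):

# An explicit conditional does not rely on how the shell treats && lists
# under set -e.
if [ $# -eq 0 ]; then
    set -- "default" "arguments"
fi

# Separating the declaration from the assignment lets set -e see the command
# substitution's exit status, so a failing get_config_value aborts the script
# before the rm runs.
local prefix
prefix="$(get_config_value PREFIX)"
rm -rf "${prefix}/bin"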

There are many more cases as described in BashFAQ 105, some of which are pretty hard to reason about.

So, while set -e can be very handy for quick’n’dirty scripts, I wouldn’t really recommend using it for more elaborate ones. Get in the habit of explicitly handling errors and automatically testing error paths.
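
For completeness, this is roughly what the explicit style looks like (a sketch; the file names are made up):

# Handle the failure explicitly instead of relying on set -e.
if ! cp important.conf /backups/; then
    echo "Failed to back up important.conf" 1>&2
    exit 1
fi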

set -u

set -u causes the shell to exit with an error when an undefined variable is referenced. The exceptions are ${*} and ${@}, which you can refer to at any point: they represent the script’s (or current function’s) arguments, and those can legitimately be empty.
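
A quick illustration (a sketch; the variable names are made up):

set -u

greeting="hello"
echo "${greetng:-fallback}"  # Default expansions are exempt: this prints "fallback".
echo "${greetng}"            # Typo: the shell aborts with an "unbound variable" error.
echo "never reached"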

A small problem with set -u is that macOS’s ancient /bin/bash, for example, contains a bug that causes it to mishandle the special case described above. If your script mentions either ${*} or ${@} at any point, you are doomed to failure. You may be able to work around the problem, but the contortions involved have always made me stop using this mode.
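
For the curious, the usual contortion (as I understand it) is the old ${1+"$@"} idiom; in this sketch, some_command is just a placeholder:

# Expands to "$@" only when at least one positional parameter is set, which
# sidesteps the "unbound variable" error in macOS's /bin/bash.
some_command ${1+"$@"}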

set -o pipefail (Bash only)

Pipelines evaluate to the exit code of the last command in the pipeline no matter what happened to any of the previous commands in the same pipeline. This is a problem. Consider something like this:

# Remove the given user and update the "last edit" timestamp.
grep -v "${user_to_remove}" users \
    | sed -e "/last edit/s/: .*/: ${date}/" \
    >users.new || fail "Failed to remove user"
mv users.new users

This seems good: we modify an input file and save the output to a new file, and only if nothing failed do we replace the original. The problem is… what if grep fails because it cannot open users? grep will print nothing, sed will happily run along with that, and the pipeline will report success at the end. That’s not what we expected, right?

set -o pipefail to the rescue. This option causes the pipeline to return the exit code of the rightmost command that exited with a non-zero status (or zero if every command succeeded). In our example, grep’s error code would “win” over sed’s successful exit and thus trigger the execution of the fail handler. That’s the intended behavior and, arguably, the behavior that should apply all the time.
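
To illustrate (a sketch, run under Bash on its own and without set -e):

false | cat
echo "Without pipefail: $?"  # Prints 0: the pipeline takes cat's exit code.

set -o pipefail
false | cat
echo "With pipefail: $?"     # Prints 1: false's failure is no longer masked.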

The only problem is that set -o pipefail is a Bash extension. Unless you already depend on Bash for a good reason, you cannot use this setting.

set -x

When you enable strict mode, the shell will inevitably terminate your script in unexpected places—and it won’t tell you what happened. It can be very hard to isolate the place where the script stopped execution.

set -x can help here. When set -x is enabled, the shell echoes every statement it executes to the console (remember MS-DOS’ @echo on?). The output can be extremely verbose so you won’t want to keep it in the “production” version of your script, but enabling it transiently is invaluable in finding where your script abruptly exited.
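
For example, here is a minimal sketch of what the trace looks like (each traced line is prefixed with the value of PS4, which defaults to "+ "):

set -x
name="world"
echo "Hello, ${name}"

# The shell writes something like this to stderr before running each command:
#   + name=world
#   + echo 'Hello, world'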

References

  • BashFAQ 105: A collection of cases demonstrating that set -e is confusing, and strong advocacy against using it. I bought into it a few years ago when I removed set -e from atf-sh.

  • set -e and set -x: A post of mine from 2010 that covered part of what’s explained here. That’s one of the most popular posts in this blog by organic search. Let’s see if I can oust it with this update!

  • Use the Unofficial Bash Strict Mode (Unless You Looove Debugging): While writing this article, I came across this one. At first sight I thought: “hey, this covers exactly what I wanted to describe so there is no need for me to post!”. But I changed my mind. First, because that article does not cover the downsides of each setting. And second, because that article conflates IFS with strict mode, even though IFS has nothing to do with it. In fact, the examples given in the article about IFS are wrong: note the lack of double quotes around the array expansions, which is precisely why mangling IFS “helps”.