This post has been sitting in my drafts folder for exactly three years now. Part of my hesitation in posting it is that what follows is a difficult idea to communicate. But another part is that this is likely to be controversial, if only due to the provocative title. In any case, the time has come to set this piece free!


Betteridge’s law of headlines says that the answer to the question in the title is “No”, and… I concur… with some caveats.

We have all seen discussions go like this, especially on Hacker News: someone first complains that an application like Google Chrome is wasteful because it uses multiple GBs of memory. Someone else comes along and says that memory is there to be used for speed and therefore this is the correct behavior: if the computer has multiple GBs of free memory, an application such as Chrome should make use of all the available memory in the form of a cache to be as responsive as possible. Makes sense, right?

Yes, it does make sense—as long as Chrome is the only application running. But that’s not what usually happens, is it? There are always multiple programs running at once (think system services, but also heavyweight programs like Teams), which means that this is probably not the greatest idea.

In this post, I want to explore why allowing applications to consume all available memory for their caching needs is suboptimal and what solutions might exist to that problem. I’ll pick on Chrome because that’s the common target of this complaint, but this applies equally to all other browsers and most modern “bloated” programs. But, before that, let’s review some memory management basics to ensure we are all on the same page.

A recap on memory paging

Modern computers offer an essentially unlimited memory address space to each application (process) running on the computer. Each process thinks it’s running alone and that a vast amount of memory, from address zero up to 2^64 - 1 on 64-bit machines, is available for its own consumption.

Which is obviously not true: there is no computer (yet) with 2^64 physical bytes (16 exabytes) of RAM—it’s only an illusion, Michael. Computer processors divide physical memory into fixed-size page frames (traditionally 4 KB) and divide the virtual memory of each process into pages of the same size. The operating system kernel is then in charge of mapping a subset of the virtual memory pages onto the physical page frames, and the processor handles the translation between virtual and physical addresses transparently for each memory access.
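
If you are curious about these numbers on your own machine, a minimal C sketch (assuming a POSIX system; _SC_PHYS_PAGES is a common extension, present in glibc) to query the page size and the number of physical page frames looks like this:

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Size of a single page frame, typically 4096 bytes. */
        long page_size = sysconf(_SC_PAGESIZE);

        /* Number of physical page frames installed (a glibc extension). */
        long phys_pages = sysconf(_SC_PHYS_PAGES);

        printf("page size: %ld bytes\n", page_size);
        printf("physical memory: %ld pages (~%ld MB)\n",
               phys_pages, phys_pages * page_size / (1024 * 1024));
        return 0;
    }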

Under this design, the virtual address space is much larger than the physical memory and there can be multiple processes (and thus multiple virtual address spaces) running at once. In other words: physical memory is intentionally over-committed. When a process allocates more memory in its virtual address space, there might not be any free page frames left to back the newly-allocated virtual pages. At that point we hit a condition known as memory pressure, and the kernel will have to do something to mitigate it in an attempt to fulfill the new memory request.
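
To see this lazy assignment of page frames in action, here is a small C sketch (POSIX-flavored and meant for a 64-bit machine; the exact behavior also depends on the operating system’s overcommit policy). The allocation itself returns immediately without consuming physical memory; page frames are only assigned as the pages are touched:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        /* Reserve 4 GB of virtual address space. This returns almost
         * instantly and barely changes the process's resident set size:
         * no page frames are assigned until the pages are first touched. */
        size_t size = 4UL * 1024 * 1024 * 1024;
        char *p = malloc(size);
        if (p == NULL) {
            perror("malloc");
            return 1;
        }
        printf("reserved %zu bytes of virtual address space\n", size);

        /* Touch one byte in every page: only now does the kernel assign
         * physical page frames and, under memory pressure, start evicting
         * other pages to make room. */
        long page_size = sysconf(_SC_PAGESIZE);
        for (size_t i = 0; i < size; i += (size_t)page_size)
            p[i] = 1;

        free(p);
        return 0;
    }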

What the kernel does under memory pressure is specific to each operating system, but in the general case, the kernel must find already-mapped pages (from any process) and evict them in order to free page frames to make room for the new memory requests.

We can classify pages into the following two groups (I suggest reading the paper on NetBSD’s Unified Buffer Cache for more background information):

  • File pages correspond to chunks of memory that came directly from files on disk. These pages, if unmodified, can be freely discarded—and, if modified, they can be flushed back to their backing files. For example: pages used to run executable code can always be discarded because they are read-only and backed by files on disk, whereas pages mapped via mmap(2) may or may not need to be flushed back to disk depending on their dirtiness status.

  • Anonymous pages correspond to chunks of memory that an application allocated (think malloc or new). The contents of this memory have no meaning to the kernel and, because these pages are populated dynamically by program logic, they have no backing resource. As a result, if these pages have to be evicted, they have no file to go to, so the kernel has to put them aside somewhere. That somewhere is the swap area. (The sketch after this list shows both kinds of pages side by side.)
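
To make the distinction concrete, here is a small C sketch (POSIX-flavored; MAP_ANONYMOUS is technically an extension and error handling is trimmed) that creates one mapping of each kind:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);

        /* File pages: backed by an existing file on disk (any readable
         * file works). Under memory pressure the kernel can simply drop
         * these and re-read them from the file later. */
        int fd = open("/etc/hosts", O_RDONLY);
        char *file_pages = mmap(NULL, page, PROT_READ, MAP_PRIVATE, fd, 0);

        /* Anonymous pages: no backing file. If the kernel has to evict
         * them, their contents must first be written to the swap area. */
        char *anon_pages = mmap(NULL, page, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        anon_pages[0] = 42;

        printf("file-backed page at %p, anonymous page at %p\n",
               (void *)file_pages, (void *)anon_pages);

        munmap(file_pages, page);
        munmap(anon_pages, page);
        close(fd);
        return 0;
    }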

IMPORTANT: The key detail about page eviction, and the one to retain for the remainder of this article, is that the eviction process differs based on where the page originally came from.

If the system cannot find enough pages to evict (for example because it already evicted all file pages and there is not enough swap space to evict the remaining anonymous pages), the kernel will either panic or start terminating processes in an attempt to release used memory. On Linux this happens via the much “beloved” out-of-memory (OOM) killer.

As for how to decide which pages to evict from memory, each kernel has its own algorithm. In general, the kernel will implement an LRU-style algorithm to evict pages that are unlikely to be needed again soon. But it will also account for the “cost” of each eviction (a small sketch after the list shows how to observe this on Linux):

  1. The cheapest thing to do is to evict read-only file pages, because those can be dropped without a disk write and re-read from their backing files later.

  2. The second cheapest thing is to evict dirty file pages: they require a write-back, but the space for them is already allocated in the file system, so they can simply be written over their backing files.

  3. The most costly thing to do is to evict anonymous pages because these inevitably need to be written to the swap area—which in turn may involve some form of space allocation. You never want to reach a point where the system has to move pages to the swap space because, when you do, performance tanks.
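
If you want to watch this ordering play out on Linux, here is a small sketch (Linux-specific: it parses /proc/meminfo) that prints the counters involved. Under sustained memory pressure you will typically see Cached shrink well before SwapFree does:

    #include <stdio.h>
    #include <string.h>

    /* Print a few /proc/meminfo counters that track the eviction behavior
     * described above: the kernel prefers shrinking the file cache
     * ("Cached") before it starts consuming swap ("SwapFree"). */
    int main(void) {
        FILE *f = fopen("/proc/meminfo", "r");
        if (f == NULL) {
            perror("fopen");
            return 1;
        }

        char line[256];
        while (fgets(line, sizeof(line), f) != NULL) {
            if (strncmp(line, "MemFree:", 8) == 0 ||
                strncmp(line, "Cached:", 7) == 0 ||
                strncmp(line, "SwapTotal:", 10) == 0 ||
                strncmp(line, "SwapFree:", 9) == 0) {
                fputs(line, stdout);
            }
        }

        fclose(f);
        return 0;
    }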

The specifics, however, are irrelevant for the remainder of this post.

Let’s run Chrome and Bazel together

With the background information out of the way, let’s go back to the original discussion.

The traditional argument given to justify Chrome’s huge memory usage is that the browser uses all the memory for caching. This is a good thing, the argument goes on, because you want a fast browsing experience, and caching as much data as possible in memory helps achieve that. And that’s true… except for the little fact that Chrome doesn’t run in a vacuum.

To see why this is a bad idea, let’s apply Raymond Chen’s "What if two programs did this?" analytical approach. The hungry browser can run alongside other programs, some of which might be memory-intensive too. To make our sample scenario realistic, let’s throw Bazel into the mix, which is another app that loves to hog memory by caching the multi-GB build graph of a workspace. In this scenario, we have a programmer researching some information online via Chrome, writing some code (likely via some heavyweight IDE), and finally building the resulting project via Bazel.

The programmer may first use Chrome intensively to do some research. During this process, Chrome’s memory usage may grow to cover all available memory, caching rendered pages and images. Then the programmer may run Bazel to build the project, and Bazel may need to consume a lot of extra memory to load the full dependency graph. But… at that point, Bazel may not find enough available memory, so the operating system will have to page out Chrome’s cache memory. Switching back to Chrome later on may result in a much slower experience than if the browser had not cached anything in memory in the first place.

The problem here is that applications like Chrome and Bazel use anonymous memory to operate their own caches. While cache memory can be, by definition, discarded at any point, the only thing that the kernel sees when there is memory pressure is that these applications allocated large chunks of anonymous memory. The kernel has no idea whether these pages contain precious data that must be persisted, or volatile data that can be discarded. The nasty consequence is that the kernel may decide to move cached data to the swap area and, as I mentioned earlier, once you go to the swap you have already lost from a performance perspective.
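
To make the kernel’s blindness concrete, here is a deliberately simplified sketch of the kind of in-process cache Chrome or Bazel might keep (the structure and names are invented for illustration). Every entry lives in malloc’ed, anonymous memory, so from the kernel’s point of view it is indistinguishable from data the program cannot afford to lose:

    #include <stdlib.h>
    #include <string.h>

    /* A hypothetical in-process cache entry: a rendered page, a decoded
     * image, a chunk of a build graph... The kernel never sees this
     * structure; it only sees anonymous pages full of opaque bytes. */
    struct cache_entry {
        char *key;
        void *data;
        size_t size;
    };

    /* Store a copy of `data` in the cache. All of this memory is
     * anonymous: if the kernel needs these page frames back, its only
     * option is to write the contents out to the swap area, even though
     * the application could have recomputed or re-downloaded them. */
    static struct cache_entry *cache_put(const char *key,
                                         const void *data, size_t size) {
        struct cache_entry *e = malloc(sizeof(*e));
        if (e == NULL)
            return NULL;
        e->key = strdup(key);
        e->data = malloc(size);
        e->size = size;
        if (e->key == NULL || e->data == NULL) {
            free(e->key);
            free(e->data);
            free(e);
            return NULL;
        }
        memcpy(e->data, data, size);
        return e;
    }

    int main(void) {
        const char body[] = "<html>...rendered page...</html>";
        struct cache_entry *e = cache_put("https://example.com/", body, sizeof(body));
        /* ... use and eventually free the entry ... */
        (void)e;
        return 0;
    }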

Is there anything we can do about this?

Cooperative memory allocations

“Well”, you say, “obviously we cannot allow applications to grab all memory for themselves. We should make them use up to X% of memory and leave some behind for other applications to use”. (And, for the record, this is an actual solution that was proposed in the Bazel case.)

Except… this wouldn’t fix anything: if the application itself is in control of checking current available memory and then making independent decisions about how much memory to use, we’ll either end up in the same situation as above or, worse, we’ll end up with applications that cannot use sufficient memory. Imagine if we had Chrome use, say, 80% of all memory because it knows to leave 20% unused. Then Bazel comes along, which is also configured to use 80% of available memory… which happens to be a tiny 80% of 20%. And thus Bazel is significantly penalized just because it happened to run second.
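
Here is a sketch of what such a “polite” application might compute (it relies on glibc’s _SC_AVPHYS_PAGES extension, and the 80% figure is just the number from the example above). The flaw is visible right in the code: the budget depends entirely on who asks first:

    #include <stdio.h>
    #include <unistd.h>

    /* How much memory a "polite" application would grant itself if it
     * promised to use at most 80% of whatever is currently available. */
    static long long polite_budget_bytes(void) {
        long page_size = sysconf(_SC_PAGESIZE);
        long avail_pages = sysconf(_SC_AVPHYS_PAGES); /* glibc extension */
        return (long long)avail_pages * page_size * 80 / 100;
    }

    int main(void) {
        /* If Chrome runs first on an idle 16 GB machine, it budgets ~12.8 GB.
         * When Bazel runs the same code afterwards, "available" is already
         * down to ~3.2 GB, so Bazel budgets ~2.6 GB: 80% of 20%. */
        printf("my budget: %lld bytes\n", polite_budget_bytes());
        return 0;
    }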

You might tweak this model and come up with a scheme that “works”, but distributed decisions on how to use memory are insufficient. First because you need an oracle (the kernel) that makes smart machine-wide decisions; and second because you cannot trust that all applications will follow the rules. After all, that’s why we moved from cooperative multitasking to preemptive multitasking decades ago.

Enter file caching

Let’s take a detour to explore a separate scenario: a bunch of applications that use little anonymous memory but that are heavy file system users. These applications each open a lot of very large files, keep them open for a long time, and perform random reads and writes on them. We have multiple instances of these applications running at once.

What we might see under this scenario is that overall system memory usage grows to almost 100%, just as before, but the swap remains empty. More importantly, while the system might be slow due to high CPU and I/O usage, its performance will remain predictable: launching a simple ls from the command line will be instantaneous, not dragged down by memory thrashing.

What’s happening here is that all of the memory is now being used by the kernel as part of its file buffer cache; the applications themselves are not holding onto the memory. This kernel-level cache keeps track of file pages (not anonymous pages) to improve I/O performance by optimizing random I/O and sequential access (via prefetching). And this cache is typically allowed to grow to all available memory.
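
You can observe the file buffer cache at work with a small sketch like this one (POSIX-flavored, error handling trimmed): the second pass over the file is typically served straight from the kernel’s cache and is much faster than the first, assuming the file was not already cached:

    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    /* Read the whole file once and return the elapsed wall-clock seconds. */
    static double read_file_seconds(const char *path) {
        struct timespec start, end;
        char buf[1 << 16];
        clock_gettime(CLOCK_MONOTONIC, &start);

        int fd = open(path, O_RDONLY);
        while (read(fd, buf, sizeof(buf)) > 0)
            ;
        close(fd);

        clock_gettime(CLOCK_MONOTONIC, &end);
        return (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9;
    }

    int main(int argc, char **argv) {
        const char *path = argc > 1 ? argv[1] : "/var/log/syslog"; /* any big file */

        /* First pass: data comes from disk and lands in the kernel's
         * file buffer cache (assuming it was not cached already). */
        printf("cold read: %.3f s\n", read_file_seconds(path));

        /* Second pass: same file, but now the file pages are resident in
         * the buffer cache, so no disk I/O is needed. */
        printf("warm read: %.3f s\n", read_file_seconds(path));
        return 0;
    }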

The key difference between this file-backed scenario and the previous scenario with Chrome and Bazel is that the kernel handles a unified cross-application cache, and that the kernel has full knowledge of this cache’s contents. The kernel can make global decisions about the contents of this cache and keep all applications working as well as possible: if there is only one application running, all of the file buffer cache will be consumed by it; but if we have two or more applications, the cache will be shared “fairly”—and I put that in quotes because noisy neighbors are a real problem.

The root cause

So what’s the real problem with allowing Chrome to use all memory for its own caching?

Simply put, the problem is that the operating system has no visibility into the application’s anonymous memory and cannot make decisions on its behalf. Therefore, when there is memory pressure, the only thing the kernel can do is push anonymous memory pages out to disk—even if those pages contained, essentially, a volatile cache—and restoring them later on from the swap area is very costly.

Any solutions?

You can imagine having a feedback mechanism between the kernel and the apps to reclaim memory. The kernel could say “hey Chrome, I’m running out of memory so please free some stuff you don’t really need” to request memory back. Unfortunately, this would not work because it would require all applications to comply and do the right thing in all cases. A single rogue or buggy application could hoard all memory, leaving us in an even worse situation.

That said, Android implements such a solution. Android is designed so that the system can evict parts of an application (its activities) outright. The way this works is by having a protocol between the system and the application on how to handle controlled eviction: the system will first ask nicely that an application release memory and allow the application to flush out data, but the system will then destroy that memory if the application does not comply. In either case, the application must be designed to recreate the state it was in before it was gracefully or forcefully shut down—as if it had never exited. Android has such a design because it first targeted mobile devices with very little memory and because mobile devices focus(ed) on one single application at a time.

Another solution could be to have a system-wide caching service to handle arbitrary memory objects from running applications. Such a service could then make coordinated decisions about memory usage across applications, shedding cache entries uniformly. But as in the previous solution, all applications would need to play along. Otherwise, a single non-conforming application could hoard memory and penalize all other applications that are trying to play by the rules.
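
Purely as a thought experiment, the client-side interface of such a service might look something like the following sketch; every name and semantic detail here is hypothetical:

    #include <stddef.h>

    /* Hypothetical client API for a system-wide caching service. Entries
     * are owned by the service, not by the application, so the service
     * can shed them uniformly across all clients under memory pressure. */

    typedef struct syscache syscache_t;  /* opaque connection handle */

    /* Connect to the service, identifying this application. */
    syscache_t *syscache_connect(const char *app_name);

    /* Offer an object to the cache. The service may refuse it, or drop it
     * at any later time without asking; the application must always be
     * able to recompute or re-fetch it. */
    int syscache_put(syscache_t *c, const char *key,
                     const void *data, size_t size);

    /* Look an object up: copies up to `cap` bytes into `buf` and returns
     * the object size, or a negative value if the entry was evicted. */
    long syscache_get(syscache_t *c, const char *key, void *buf, size_t cap);

    void syscache_disconnect(syscache_t *c);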

Which brings us to yet another solution: partitioning the memory and pre-specifying how much each program can use upfront. This is what containers do in production, but it is not a nice solution for personal computers: hard partitioning cannot dynamically adapt to user behavior. Sometimes you really only want to browse the web, and in that case you want all resources available to the browser.
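
For what it’s worth, Linux already offers the building blocks for this kind of hard partitioning in the form of cgroups. A sketch that caps an existing cgroup v2 group at 4 GB by writing to its memory.max file could look like this (the group name is an assumption, and writing there requires appropriate privileges):

    #include <stdio.h>

    int main(void) {
        /* Assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup and a
         * group named "browser" that our processes have been placed in. */
        const char *path = "/sys/fs/cgroup/browser/memory.max";

        FILE *f = fopen(path, "w");
        if (f == NULL) {
            perror("fopen");
            return 1;
        }

        /* Cap the group at 4 GB; beyond this, the kernel reclaims the
         * group's pages aggressively and eventually OOM-kills within it. */
        fprintf(f, "%llu\n", 4ULL * 1024 * 1024 * 1024);
        fclose(f);
        return 0;
    }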

And finally, we could try to shoehorn the applications into the facilities of existing systems. If applications leveraged files instead of anonymous memory to implement their caches, then the system’s file buffer cache would do the right thing. Imagine if every application, instead of using malloc to get anonymous memory for its cacheable objects, used individual files to store them, and then accessed those cached objects via open/read/close cycles. Under this scenario, the kernel’s file cache would do the right thing across applications: frequently used cache entries (files) would remain in memory and, under memory pressure, they could be cheaply evicted back to their backing files. Performance might suffer due to the extra system calls, but the net result would probably be better anyway. And, as it turns out, Varnish does exactly this!
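
A minimal sketch of that idea (the directory layout and helper names are invented for illustration): each cache entry becomes an ordinary file, so the hot entries stay resident in the kernel’s buffer cache while the cold ones cost nothing more than disk space:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Store one cacheable object as a plain file under a cache directory.
     * The kernel's buffer cache now manages its residency in memory:
     * frequently read entries stay cached, and cold ones can be evicted
     * for free because the data already lives on disk. */
    static int cache_store(const char *dir, const char *name,
                           const void *data, size_t size) {
        char path[4096];
        snprintf(path, sizeof(path), "%s/%s", dir, name);
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
        if (fd < 0)
            return -1;
        ssize_t n = write(fd, data, size);
        close(fd);
        return n == (ssize_t)size ? 0 : -1;
    }

    /* Fetch it back with a plain open/read/close cycle. If the entry is
     * hot, this is served from the buffer cache without touching the disk. */
    static ssize_t cache_load(const char *dir, const char *name,
                              void *buf, size_t cap) {
        char path[4096];
        snprintf(path, sizeof(path), "%s/%s", dir, name);
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;
        ssize_t n = read(fd, buf, cap);
        close(fd);
        return n;
    }

    int main(void) {
        char buf[64];
        cache_store("/tmp", "example-entry", "rendered page bytes", 19);
        ssize_t n = cache_load("/tmp", "example-entry", buf, sizeof(buf));
        printf("loaded %zd bytes back from the cache\n", n);
        return 0;
    }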

Unfortunately, all of the solutions above require some kind of coordination across programs and that all programs play by the rules. This might be workable in new system designs (as Android has shown), but retrofitting any of these mechanisms into current systems is bound not to work. I wish it would, though.

Finally, you might think that the above scenarios don’t matter in today’s world because computers have sufficient RAM. But they still do. Making concurrent Chrome and Bazel usage palatable on 16 GB laptops is precisely what I wanted to fix in my previous role at Google. And every time I notice slowdowns on my Surface Go 2, a new machine I bought a year ago but which only has 8 GB of RAM, these issues come to mind.


So back to the original question: “Should the browser use all available memory?” No; not as it is done today. But yes if we had some better mechanisms for efficient cross-application caching.

Some food for thought 🙃 And if you do end up thinking about this, please share your comments using any of the buttons below!

Edit (Aug 13th): Added reference to the article about Varnish following a suggestion from a Hacker News comment.