Well, that was unexpected. I recorded a couple of crappy videos in 5 minutes, posted them on a Twitter thread, and went viral with 8.8K likes at this point. I really could not have predicted that, given that I’ve been posting what-I-believe-is interesting content for years and… nothing, almost-zero interest. Now that things have cooled down, it’s time to stir the pot and elaborate on those thoughts a bit more rationally.
To summarize, the Twitter thread shows two videos: one of an old computer running Windows NT 3.51 and one of a new computer running Windows 11. In each video, I opened and closed a command prompt, File Explorer, Notepad, and Paint. You can clearly see how apps on the old computer open up instantly whereas apps on the new computer show significant lag as they load. I questioned how computers are actually getting better when trivial things like this have regressed. And boom, the likes and reshares started coming in. Obviously, some people had issues with my claims, but an overwhelming majority seem to agree that we have a problem.
To open up, I’ll stand my ground: latency in modern computer interfaces, with modern OSes and modern applications, is terrible and getting worse. This applies to smartphones as well. At the same time, while UIs were much more responsive on the computers of the past, those computers were also awful in many ways: new systems have changed our lives substantially. So, what gives?
The original comparison
Let’s address the elephant in the room first. The initial comparison I posted wasn’t fair and I was aware of that going in. That said, I knew repeating the experiment “properly” would yield the same results, so I plowed ahead with whatever I had right then. The videos were unplanned because the idea for the tweets came to mind when I booted the old machine, clicked on Command Prompt, and was blown away by how instantly the app started.
The original comparison videos showed:
An AMD K7-600 with 128MB of RAM and a 5400 RPM HDD running Windows NT 3.51. This was a machine from the year 1999-2000 with an OS that was about 5 years older than it. Hardware was experiencing really fast improvements back then, particularly in CPU speeds, and you were kinda expected to keep up with the 2-year upgrade treadmill or suffer from incredible slowness. All this is to say that this machine was indeed overpowered for the OS I used.
Please remind me how we are moving forward. In this video, a machine from the year ~2000 (600MHz, 128MB RAM, spinning-rust hard disk) running Windows NT 3.51. Note how incredibly snappy opening apps is. 👇 pic.twitter.com/YEO824vIqI
— Julio Merino (@jmmv) June 22, 2023

A Surface Go 2 with an Intel Core m3 CPU, 8GB of RAM, and an SSD running Windows 11. This is a 3-year-old machine that shipped with Windows 10, but Windows 11 is officially supported on it—and as you know, that means you are tricked into upgrading. This is not a powerful machine by any means, but: first, it’s running the verbatim Microsoft experience, and second, it should be much more powerful than the K7 system, shouldn’t it? We are continuously reminded that any computer or phone today has orders of magnitude more power than past machines.
Now look at opening the same apps on Windows 11 on a Surface Go 2 (quad-core i5 processor at 2.4GHz, 8GB RAM, SSD). Everything is super sluggish. pic.twitter.com/W722PNEGv0
— Julio Merino (@jmmv) June 22, 2023

Oh, and yes, I quoted the wrong hardware specs in the original tweet. Looking again at how I made that mistake: I searched for “Surface Go 2” in Bing, landed on the “Surface Laptop Go 2” page, and copied what I saw there without noticing that it wasn’t accurate.
All apps had been previously open, so they should all have been comfortably sitting in RAM.
The better comparison
Obviously various people noticed that there was something off with my comparison (unfair hardware configurations, wrong specs), so I redid the comparison once the thread started gaining attention:
Windows 2000 on the K7-600 machine (see installation thread). This is an OS from 1999 running on hardware from that same year. And, if you ask me, this was the best Windows release of all time: a super-clean UI on an NT core, carrying all of the features you would want around performance and stability (though with terrible boot times). As you can see, things still fare very well for the old machine in terms of UI responsiveness.
For those thinking that the comparison was unfair, here is Windows 2000 on the same 600MHz machine. Both are from the same year, 1999. Note how the immediacy is still exactly the same and hadn’t been ruined yet. pic.twitter.com/Tpks2Hd1Id
— Julio Merino (@jmmv) June 23, 2023

Windows 11 on a Mac Pro 2013 (see installation instructions) with a 6-core Xeon E5-1650v2 at 3.5GHz, 32GB of RAM, dual GPUs, and an SSD that can sustain up to 1GB/s. I know, this is a 10-year-old machine at this point running a more modern OS. But please, go ahead, tell me with a straight face how hardware with these specs cannot handle opening trivial desktop applications without delay. I’ll wait.
Oh, and one more thing. Yes, yes, the Surface Go 2 is underpowered and all you want. But look at this video. Same steps on a 6-core Mac Pro @ 3.5GHz with 32GB of RAM. All apps cached. Note how they get painted in chunks. It's not because of animations or mediocre hardware. pic.twitter.com/9TOGAdaTXO
— Julio Merino (@jmmv) June 23, 2023

The reason I used the Mac Pro is that it is the best machine I have running Windows right now and, in fact, it’s my daily driver. But again, I do not care about how running this comparison on an “old” machine might be “inaccurate”. Back when I left Microsoft last year, I was regularly using a Z4 desktop from 2022, a maxed-out quad-core i7 ThinkPad with 32GB of RAM, and an i7 Surface Laptop 3 with 16GB of RAM. Delays were shorter on these, of course, but interactions were still noticeably slow.
So, in any case: I agree the original comparison was potentially flawed, but as you can see, a better comparison yields the same results—which I knew it would. After years upon years of computer usage, you gain intuition on how things should behave, and trusting such intuition tends to work well as long as you validate your assumptions later, don’t get me wrong!
Computer advancements
Let’s put the tweets aside and talk about how things have changed since the 2000s. I jokingly asked how we are “moving forward” as an industry, so it’s worth looking into it.
Indeed, we have moved forward in many aspects: we now have incredible graphics and high-resolution monitors, super-fast networks, real-time video editing, and much more. All of these have improved over the years, and these advancements have enabled genuine life transformations. Some examples: the ability to communicate with loved ones much more easily thanks to great-quality videoconferencing; the ability to have a streaming “cinema at home”; and the painless switch to remote work during the pandemic [1].
We have also moved forward on the I/O side. Disk I/O had always been the weakest spot on past systems. Floppy disks were unreliable and slow. CDs and DVDs were slightly more reliable but also slow. HDDs were the bottleneck for lots of things: their throughput improved over time, allowing higher-resolution video editing and the like, but random I/O hit physical limits—and fast random I/O is what essentially drives desktop responsiveness.
Then, boom, SSDs appeared and started showing up on desktops. These were a game-changer because they fixed the problem of random I/O. All of a sudden, booting a computer, launching a heavy game, opening folders with lots of small photos, or simply using your computer… all of it improved massively. It’s hard to explain the usability improvements that these brought if you did not live through this transition, and it’s scary how those improvements are almost gone; more on that later.
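To make the random-versus-sequential distinction concrete, here is a minimal sketch of mine (not part of the original experiments, and the program name is made up) that times 4 KB reads done sequentially and then at random offsets over the same file. The numbers are only meaningful if the file is larger than RAM or if you drop the page cache first; under that assumption, an HDD collapses on the random case while an SSD barely notices.

```rust
// Hypothetical micro-benchmark: sequential vs. random 4 KB reads.
// Usage: iobench <path-to-a-file-larger-than-RAM>
use std::fs::File;
use std::io::{Read, Seek, SeekFrom};
use std::time::Instant;

const BLOCK: usize = 4096; // Typical filesystem block size.
const READS: u64 = 1000;   // Number of reads per access pattern.

fn time_reads(path: &str, random: bool) -> std::io::Result<f64> {
    let mut file = File::open(path)?;
    let len = file.metadata()?.len();
    assert!(len > BLOCK as u64, "use a much larger file");
    let mut buf = vec![0u8; BLOCK];
    let mut offset = 0u64;
    let start = Instant::now();
    for i in 0..READS {
        offset = if random {
            // Cheap pseudo-random offsets; good enough for a demonstration.
            offset.wrapping_mul(6364136223846793005).wrapping_add(i) % (len - BLOCK as u64)
        } else {
            (i * BLOCK as u64) % (len - BLOCK as u64)
        };
        file.seek(SeekFrom::Start(offset))?;
        file.read_exact(&mut buf)?;
    }
    Ok(start.elapsed().as_secs_f64())
}

fn main() -> std::io::Result<()> {
    let path = std::env::args().nth(1).expect("usage: iobench <big-file>");
    println!("sequential: {:.3}s", time_reads(&path, false)?);
    println!("random:     {:.3}s", time_reads(&path, true)?);
    Ok(())
}
```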
Other stuff also improved, like the simplicity of installing new hardware, the pervasiveness of wireless connections and devices, and the internationalization of text and apps (Unicode isn’t easy or cheap, I’ll grant that)… all providing more usable machines in more contexts than ever.
So yeah, things are better in many areas and we have more power than ever. Otherwise, we couldn’t do things like ML-assisted photo processing on a tiny phone, which was unimaginable in the 2000s.
Terrible latency
Yet… none of these advancements justify why things are as excruciatingly slow as they are today in terms of UI latency. Old hardware from the year 1999, combined with an OS from that same year, shows that responsive systems have existed [2]. If anything, all these hardware improvements I described should have made things better, not worse, shouldn’t they?
Some people replied to the comparison telling me that graphical animations and bigger screens are “at fault” because we have to draw more pixels, and thus that having these new niceties means we have to tolerate slowness. Well, not quite. Witness for yourself:
And... one more thing? To those saying: "it's the higher 4K resolution!" or "it's the good-looking animations!" or "it's the pretty desktop background!"—no, they aren't at fault. See, the slowness is still visible with all of these disabled. In the end... blog post coming soon. pic.twitter.com/9BQy6IpK6a
— Julio Merino (@jmmv) June 26, 2023
GPUs are a commodity now, and they lift the heavy burden of graphics management from the CPU. The kinds of graphical animations that a desktop renders are extremely cheap to compute, and macOS has proven this since its launch: all graphical effects on a macOS desktop feel instant. The effects do delay interactions though—the desktop switching animation is particularly intrusive, oh god how I hate that thing—but the delays generally come from intentional pauses during the animation. And when the effects do introduce latency because the GPU cannot keep up, such as when you attach a 4K display to a really old Mac, it’s painfully obvious that the animations stutter due to lack of power. The latter does not show up in any of the videos above, though, which is why animations and the like have nothing to do with my concerns.
So, please, think about it with a critical mind. How is the ability to edit multiple 4K video streams in real time, or to stream a 4K movie, supposed to make starting apps like Notepad slower? Or opening the context menu on the desktop? Or reading your email? The new abilities we acquired demand much more power from the CPU and GPU, but they shouldn’t take performance away from tasks that are essentially I/O-bound. Opening a simple app shouldn’t be slower than it was more than 20 years ago; it really shouldn’t be. Yet here we are. The reasons for the desktop latency come from elsewhere, and I have some guesses for those. But first, a look at a couple of examples.
Examples
In Windows land, there are two obvious examples I want to bring up, both of which were mentioned in the Twitter thread:
Notepad had been a native app until very recently, and it still opened pretty much instantaneously. With its rewrite as a UWP app, things went downhill. The difference between before and after is apparent, and yet… the app continues to be as unfeatureful as it has always been. This is extra slowness for no user benefit.
As for Windows Terminal, sure, it is nicer than anything that came before it, but it is visibly much, much heavier than the old Command Prompt. And if you add PowerShell into the mix, we are talking about multiple seconds for a new terminal window to be operational unless you have top-of-the-line hardware.
macOS fares better than Windows indeed, but it still has its issues. See this example contributed by @AlexRugerMusic. Even the mighty M1 has trouble opening up the system settings app:
Another example:
— rewgs (@AlexRugerMusic) June 27, 2023
Left: 2006 MBP (2GHz Core 2 Duo, 2GB DDR2 RAM, Max OS X 10.6.8; has SSD)
Right: 2021 MacBook Air (M1, 16GB RAM, macOS 13)
Apps open about twice as fast on the Pro as they do on the Air (most, not just System Preferences/Settings). pic.twitter.com/PjMX1DI4uz
Linux is probably the system that suffers the least from these issues, as it still feels pretty snappy on modest hardware. Fedora Linux 38, released in April 2023, runs really well on a micro PC from 11 years ago—even though GNOME and KDE were considered resource hogs back in the day. That said, this is only an illusion. As soon as you start installing any modern app that wasn’t developed exclusively for Linux… the slow app start times and generally poor performance show up.
Related, but I feel this needs saying: the biggest shock for me was when I joined Google back in 2009. At the time, Google Search and GMail had stellar performance: they were examples to follow. From the inside though… I was quite shocked by how all internal tools crawled, and in particular by how slow the in-house command line tools were. I actually fault Google for the situation we are in today due to their impressive internal systems and their relentless push for web apps at all costs, which brings us to…
Causes
How does this all happen? It’s easy to say “Bloat!”, but that’s a hard thing to pin down because bloat can be justified: what one person considers bloat is not what another person considers bloat. After all, “80% of users only use 20% of the software they consume” (see the Pareto principle), but the key insight is that each user’s 20% is different. So bloat isn’t necessarily in the features offered by the software; it’s elsewhere.
So then we have frameworks and layers of abstraction, which seem to introduce bloat for bloat’s sake. But I’m not sure this is correct either: abstraction doesn’t inherently have to make things slower, as Rust has proven. What makes things slower are priorities. Nobody prioritizes performance anymore except for the critical cases where it matters (video games, video transcoding, and the like). What people (companies) prioritize is developer time. For example: you might not want to use Rust because its steep learning curve means you’ll spend more time learning than delivering, or because its longer compile times mean you’ll spend more time waiting for the compiler than shipping. Or another example: you might not want to develop native apps because that means “duplicate work”, so you reach for a cross-platform web framework. That is, Electron.
I know it’s easy to dunk on Electron, but there are clear telltale signs that this platform is at fault for a lot of the damage done to desktop latency. Take 1Password’s 8th version, which many users who migrated from the 7th version despise due to the slowness of the new interface. Or take Spotify, which at its inception prioritized startup and playback latency over everything else and, as you know if you use it, no longer does:
2009 version: 20MB fully native cocoa app, launches in less than a second, instant feedback when you click stuff, playback usually starts within 50ms pic.twitter.com/Enzi40PDCX
— Rasmus Andersson (@rsms) May 10, 2023
These apps were rewritten in Electron to offer a unified experience across desktops and to cut down costs… but for whom? The cost cuts were for the companies owning the products, not for the users. Such cuts impose a tax on every one of us in the form of day-to-day frustrations and unnecessary hardware upgrades. Couple these rewrites with the fact that OSes cannot share the heavy framework across apps (the same idea as why using all RAM as a cache is a flawed premise)… and the bloat quickly adds up when you run several of these apps concurrently.
Leaving Electron aside, another decision that likely introduces latency is the mass adoption of managed and interpreted languages. I know these are easy to dunk on as well, but that’s because we have reasons to do so. Take Java or .NET: several Windows apps have been slowly rewritten in C# and, while I have no proof of this, I’m convinced from past experience that this can be behind the sluggishness we notice. The JDK and the CLR do an amazing job at optimizing long-running processes (their JITs can do PGO with real-time data), but quick startup is not something they handle well. This is why, for example, Bazel spawns a background server process to paper over startup latency and why Android has gone through multiple iterations of AOT compilation. (Edit: there must be other reasons, though, that I have not researched. As someone pointed out, my assumption that Windows Terminal was mostly C# is not true.)
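If you want to see this startup tax yourself, here is a rough, hedged sketch (mine, not from the thread; the program name is made up) that launches a command several times and reports the median wall-clock time until it exits. Comparing something tiny and native against, say, `java -version` or a trivial PowerShell invocation (assuming those runtimes happen to be installed) makes the fixed cost of spinning up a managed runtime quite visible.

```rust
// Hypothetical startup-latency probe: run a command 10 times and report the
// median time from spawn to exit. For near-trivial commands, this time is
// dominated by process and runtime startup.
use std::process::{Command, Stdio};
use std::time::Instant;

fn main() {
    let mut args = std::env::args().skip(1);
    let program = args.next().expect("usage: startup-bench <program> [args...]");
    let rest: Vec<String> = args.collect();

    let mut samples = Vec::new();
    for _ in 0..10 {
        let start = Instant::now();
        let status = Command::new(&program)
            .args(&rest)
            .stdout(Stdio::null())
            .stderr(Stdio::null())
            .status()
            .expect("failed to run command");
        assert!(status.success(), "command exited with an error");
        samples.push(start.elapsed().as_secs_f64() * 1000.0);
    }
    samples.sort_by(|a, b| a.total_cmp(b));
    println!("median spawn-to-exit time: {:.1} ms", samples[samples.len() / 2]);
}
```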
More on this in Wirth’s law.
One-off improvements eaten away
To conclude, let me go back to hardware advancements and end on a pessimistic note.
The particular improvement that SSDs brought us was a one-off transformation. HDDs did keep getting faster for years, but they never could deliver the kind of random I/O that desktops require to be snappy. The switch to SSDs brought an improvement at a whole different level. Unfortunately… we could only buy those benefits once: there is no other technology to switch to that provides such a transformative experience. And thus, now that the benefits brought by the new technology have been eaten away by careless software, we are almost back to square one. Yes, SSDs are getting faster, but newer drives won’t bring the kind of massive difference that the change from HDDs to SSDs did.
You can see this yourself if you try using recent versions of Windows or macOS without an SSD: it is nigh impossible. These systems now assume that computers have SSDs in them, which is a fair assumption, but a problematic one due to what I mentioned above. The same applies to “bloat” in apps: open up your favorite resource monitor, look for the disk I/O bandwidth graph, and launch any modern app. You’ll see a stream of MBs upon MBs being loaded from disk into memory, all of which must complete before the app is responsive. This is the kind of bloat that Electron adds and that SSDs permitted, but that could be avoided altogether with different design decisions.
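If you prefer numbers over eyeballing a resource monitor, here is a small Linux-only sketch (an illustration of the same idea, not a tool mentioned anywhere in this post; the program name is made up) that launches a program and polls `/proc/<pid>/io` to report how many bytes it pulled from storage before exiting. Caveats: it only accounts for the process it spawns, not any helpers that process forks, and a fully cached launch will report close to zero, which is itself informative.

```rust
// Hypothetical sketch: report how much data a freshly launched program reads
// from disk. Linux-only, since it relies on /proc/<pid>/io accounting.
use std::process::Command;
use std::thread::sleep;
use std::time::Duration;

// Parse the read_bytes field from /proc/<pid>/io, if the file is readable.
fn read_bytes(pid: u32) -> Option<u64> {
    let io = std::fs::read_to_string(format!("/proc/{}/io", pid)).ok()?;
    io.lines()
        .find(|line| line.starts_with("read_bytes:"))?
        .split_whitespace()
        .nth(1)?
        .parse()
        .ok()
}

fn main() {
    let mut args = std::env::args().skip(1);
    let program = args.next().expect("usage: launch-io <program> [args...]");
    let rest: Vec<String> = args.collect();

    let mut child = Command::new(&program)
        .args(&rest)
        .spawn()
        .expect("failed to spawn command");
    let pid = child.id();

    let mut last = 0u64;
    loop {
        if let Some(bytes) = read_bytes(pid) {
            last = bytes;
        }
        // Stop polling once the child has exited (e.g. after you close the app).
        if child.try_wait().expect("wait failed").is_some() {
            break;
        }
        sleep(Duration::from_millis(100));
    }
    println!("read from storage: {:.1} MB", last as f64 / (1024.0 * 1024.0));
}
```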
All of this makes me worried about Apple Silicon. Remember all the excitement around the M1 launch and how these new machines had superb performance, extremely long battery life, and no fan noise? Well, wait and see: those benefits will be eaten away if we continue on the same careless path. And once that has happened, it’ll be too late. Retrofitting performance into existing applications is very difficult technically, and almost impossible to prioritize organizationally.
So… will computer architects be able to save us with other revolutionary technology shifts? I wouldn’t want to rely on that. Not because the shifts might not exist, but because we shouldn’t need them.
[1] Oh wait: remote work does not qualify. I’m sorry: if you did any kind of open source development in the 90s or 2000s, you know that fully-distributed and truly-async work was perfectly possible back then. ↩︎
[2] For a more detailed analysis, Dan Luu already covered this kind of latency regression in his famous article “Computer latency: 1977-2017”. Note how the article goes further back than 1999 and that the computer with the best latency he found is from 1983. But, yeah, that old computer cannot match the workloads we put our computers through these days, so I don’t think comparing it to a modern desktop would be fair. ↩︎