Lamenting the end of simplicity in computing

A personal ramble

Brian Candler
12 min read · Oct 8, 2021

In the beginning

I was born in the late 1960s.

My father (now retired) was a field service engineer in a mainframe company. He built me my first computer from a kit¹ while I was still at primary school.

This computer was based on the Motorola 6800 processor running at 1MHz. After we had added an expansion board, it had 4 kilobytes² of RAM — that’s 4,096 bytes. Each machine instruction took 1, 2 or 3 bytes, so the most complex program I could write would have a couple of thousand instructions at most. I had to leave some RAM for data too.

Computing was glorious. The processor had a few dozen instructions, and I knew them off by heart. I wrote programs directly in hex. I calculated branch offsets in my head. There was 1 kilobyte of ROM to boot the machine, which contained subroutines that I could call out to when I wanted to print a string to the screen, or accept a keystroke. I could also write directly to the video buffer to display characters wherever I wanted.

Perhaps most importantly, I could understand the whole system from top to bottom, from the instructions I wrote, to the contents of the ROM, to the signals on the address and data buses. In those days, programming and microelectronics were pretty much interchangeable.

My next computer was a Commodore 64, which had 64 kilobytes of RAM and 20 kilobytes of BASIC in ROM. Again, this was small enough to understand in as much depth as I wanted. I understood it enough to be able to hook in an extension to the BASIC interpreter, which I called “BC BASIC” — I even sold some copies. Everything was still small. In those days, computer magazines used to publish programs that you would type in by hand. (Floppy disks were a rarity; persistent storage was usually done to audio cassette tape.)

Even the main processor was small enough to understand. The C64's processor, the 6502, had around 4,000 transistors. This is small enough that you can build one out of discrete transistors if you really want.

My next computer was a Macintosh 128, with 128 kilobytes of RAM³ and 128 kilobytes of ROM, a huge amount for the time. The ROM included an entire graphical toolkit called “Quickdraw⁴”. The software design was astoundingly good, as was the official documentation called “Inside Macintosh”, which eventually reached five large volumes. I could still follow and understand the important pieces. And it worked.

Fast-forward 40 years

How do those specifications compare to today’s systems — even modern smartphones?

RAM: 4 kilobytes → 4 gigabytes. A million times larger.

Operating system: 1 kilobyte → 1 gigabyte or more. A million times larger.

Transistors in CPU: from 4,000 to billions. Call it a million times more.

Clock speed: 1 MHz → 2.5 GHz with 4 cores. About 10,000 times faster.

What we have now are systems so large and so complex that I believe it’s impossible for anyone to understand them from top to bottom. And this makes me sad.

What are the consequences? Mainly bugs, bugs and yet more bugs.

“Turn it off and turn it on again”

That’s the well-known remedy for everything from TVs to toasters and even airplanes. But why does this even work?

The issue is one of state. When a computer program executes, the behaviour of each step depends on the values currently in memory; each step can also modify those values, thus affecting future execution. The complete set of all those values is called the state of a system. The larger the state, the more possible values it can have.

Systems have now become so complex that it’s almost impossible for a programmer to take account of every possible state the system might be in, based on every possible path of execution it might have taken so far. And so we end up with systems that misbehave when they encounter unexpected state, and are unable to recover from it.

Turning it off wipes all the state; turning it on makes the program run from scratch. The state is initialized to a known good starting point, and then the system works from there. It’s back in the zone of states which the programmer was expecting.
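To make “state” concrete, here is a minimal sketch — in Go, purely for illustration; nothing like it appears in any of the systems mentioned here. A long-running program keeps adding entries to a cache that its author implicitly assumed would stay small; over enough uptime it drifts into territory nobody planned for, and the only simple way back is to throw the state away and re-initialize — which is exactly what a reboot does.

```go
package main

import "fmt"

// cache is long-lived state: every request adds an entry, and the author
// implicitly assumed it would stay small. Nothing ever removes entries,
// so over time the program drifts into states nobody planned for.
var cache = map[string]string{}

func handleRequest(key, value string) {
	cache[key] = value
}

// restart is what "turning it off and on again" amounts to: throw away
// all accumulated state and begin again from a known-good starting point.
func restart() {
	cache = map[string]string{}
}

func main() {
	for i := 0; i < 100000; i++ {
		handleRequest(fmt.Sprintf("key%d", i), "value")
	}
	fmt.Println("entries before restart:", len(cache)) // 100000
	restart()
	fmt.Println("entries after restart:", len(cache)) // 0
}
```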

(Aside: the movement towards new non-volatile memory technology, itself harking back to the magnetic core memories of the 1950s, may mean that this fix doesn’t work in future.)

Lack of security

Perhaps the biggest problem with such hugely complex systems is security, or lack of it.

I don’t have the time or mental capacity to understand a system from top to bottom any more, but there are people who get paid to do so. They range from national security agencies, and companies that sell their wares to such agencies, to freelance black-hats who sell to criminals.

There is so much complexity that security holes are almost certain to exist. Even systems supposedly built with security as a foundation, with hardware support — like Apple’s iOS — repeatedly have holes found and exploited.

The drivers towards complexity

What’s pushed us to get to such complexity? There are many reasons, and I’ll highlight just a few.

One thing of course is that we want our computers to do more complex things. Today, a requirement might start like this:

Build a web server which does X

Just that starting point hides a ton of complexity. The definition of what a web server needs to do, to be compliant with the standards which a web browser uses, is horrendously complex — there are hundreds of pages of specifications to follow. Generally you end up using someone else’s work as a base, a point I’ll expand on later.

Other complexity today comes from wanting to make our applications more accessible, to people who speak different languages or to people with disabilities. This has to be a good thing.

There are other systems which are complex by their nature. Let’s say that we want to build a replicated filesystem, designed to protect our data in all failure scenarios, and yet perform quickly in normal operation — it’s bound to be complex.

A second driver, also implied by the web server example, is networking. Computers no longer operate in isolation, but are cogs in a more complex system. Networking implies communication, and agreement over what the communication means, but also opens up a wide range of failure modes which have to be dealt with. If I make a request and no response comes back, should I retry? What if the other side already acted on my request, but only the response was lost in transit?
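As a sketch of that dilemma — in Go, with made-up names (charge, errLostReply) rather than any real RPC library — the client cannot tell a lost request from a lost reply, so a naive retry risks applying the same operation twice. One common answer, shown here, is to tag each logical operation with an idempotency key so the other side can spot duplicates.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
)

// errLostReply simulates "the work was done but the reply went missing" —
// which, to the client, looks exactly like the request never arriving.
var errLostReply = errors.New("timeout waiting for reply")

type server struct {
	balance int
	seen    map[string]bool // idempotency keys already processed
}

// charge applies a payment once per idempotency key, so a retried
// request that was already processed is detected and not applied twice.
func (s *server) charge(key string, amount int) error {
	if !s.seen[key] {
		s.seen[key] = true
		s.balance += amount
	}
	if rand.Intn(2) == 0 {
		return errLostReply // the work happened, but the client never hears
	}
	return nil
}

func main() {
	s := &server{seen: map[string]bool{}}
	key := "payment-42" // one key per logical operation, however often it is retried
	for attempt := 1; attempt <= 5; attempt++ {
		err := s.charge(key, 100)
		if err == nil {
			break
		}
		fmt.Println("attempt", attempt, "failed:", err, "— retrying")
	}
	fmt.Println("balance:", s.balance) // 100, not 100 per attempt
}
```

Remove the `seen` check and the balance climbs with every retry — the “other side already acted on my request” case from the paragraph above.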

A third driver is concurrency and parallelism. Concurrency means keeping track of multiple tasks and switching between them when necessary — your web server needs to accept requests from multiple users simultaneously. Parallelism is actually doing more than one thing at the same time, using multiple processors. The speed of CPUs has not increased as fast as the size of storage, so systems designers have been pushed to having multiple CPUs working at the same time.

However, having multiple activities running at the same time and sharing the same memory is fraught with difficulties. It introduces a whole layer of complexity around synchronization, which is very hard to reason about. If programmers don’t deal with it properly, then you get race conditions which can make the behaviour non-deterministic (effectively random). In the worst cases, such effects occur only rarely, and so aren’t caught in testing but affect users in the field.
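Here is about the smallest demonstration of such a race I can manage, again in Go and purely illustrative: two goroutines increment a shared counter. Without locking, counter++ is a read-modify-write that the goroutines interleave, so updates are lost and the total differs from run to run; with a mutex the answer is deterministic, at the price of yet more machinery to reason about.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	const perWorker = 100000

	// Unsynchronised: counter++ is read-modify-write, so two goroutines
	// can interleave and lose updates. The result varies from run to run.
	var counter int
	var wg sync.WaitGroup
	for w := 0; w < 2; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < perWorker; i++ {
				counter++
			}
		}()
	}
	wg.Wait()
	fmt.Printf("without locking: %d (expected %d)\n", counter, 2*perWorker)

	// Synchronised: a mutex makes each increment atomic, so the answer
	// is always 200000 — deterministic, but more complexity to carry around.
	var mu sync.Mutex
	var safe int
	for w := 0; w < 2; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < perWorker; i++ {
				mu.Lock()
				safe++
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Printf("with locking:    %d\n", safe)
}
```

Run it a few times and the unsynchronised total will usually be wrong and usually different; the occasions where it happens to come out right are exactly the kind of behaviour that escapes testing.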

Feeping Creaturism

Another driver is continual change for the sake of perceived “improvement”. We’ve all seen it, whether it be Microsoft Word sprouting new features buried under more and more layers of menus and ribbons, or programming languages themselves adding more and more features (look at the history of Ruby and Python for a myriad of examples). These are well-intentioned. No doubt each of these features makes somebody’s life easier, at the expense of complexity that everyone else has to deal with, and reduced overall reliability.

Even Unix, where the philosophy is “Make each program do one thing well”, is not immune. Take curl for example. This is a command-line tool which is used to interact with a web server — fetch web pages, in other words. Read the manual page to see how many options and features it has sprouted over time. Finding the one you need has become a major pain.

Did you ever use, say, Word Perfect or Microsoft Word under Windows 3.1 in the 1990s? If so, ask yourself: how much more does today’s version help me? Is it easier to use, or harder? Is it 10,000 times better or faster?

The art of making things “as simple as they can be” seems too often to have been lost. It turns out that making things simpler takes a lot more effort than making them more complex.

Computing in layers

Because things are now so complicated, we’re forced to re-use other people’s work rather than start from scratch, something which in principle ought to be a good idea.

What it means is that computing has moved from working with a handful of simple building blocks (like the instructions on my 6800 or 6502) to a vast menagerie of complex building blocks, which you have to find, choose between, and then learn individually.

It’s a bit like Lego™. In the old days, there were a handful of different types of Lego blocks. Mostly there were 4x2, and 2x2, and 2x1, and a baseboard. Given a big bucket of these, you could put them together and build anything you wanted. Your imagination was the only limit⁵.

Over time, Lego changed. Firstly it made specialized models, like pirate ships. Then it turned into branded toys like Star Wars™ models. The kits contained a myriad of special parts. If you put them together correctly then you got a pirate ship or an X-wing fighter — and undoubtedly more realistic than you could build from standard blocks. But keeping track of all these bits was a chore, and building anything other than what they were meant for was difficult. Lego turned into more of a jigsaw than a construction set.

Going back to computing, there are three major types of Lego piece.

Firstly, there are programming languages and libraries: bits of code that you combine with your code, like web server frameworks and graphics libraries and database access libraries. For each one of these you use, you need to learn how to talk to it — its “API⁶ surface”. That means learning not just what calls exist and the data types they use, but also what they do and how to use them. What does each call do, when should I use it and when can I not use it? What sequences of operations make sense? What happens when there’s an error? This is the “semantics” of the API — what each call means.

Secondly, there are services that you call over the network: services that interact with each other via RPC calls and REST calls and messaging queues. Again, these have semantics that have to be learned, and you have to keep in your head the state which exists on those systems and which is changed by your requests to them. On top of that, you also have to consider all the possible error scenarios which you may need to recover from when your system is working but the other system is not.

And finally, there are systems which handle an entire piece of business functionality, like a CRM or a billing system. Vendors will tell you (pre-sale) that their system will do everything that you need. It’s only after-sale, when you have to make it work in your environment, that you’ll find the limitations. Vendors tell you their product is easy to set up — that it works with “configuration, not code”. However, for complex systems the configuration becomes pretty much indistinguishable from code (I’m looking at you, Salesforce), or else it’s not flexible enough to do what you want. In that case, you need to start building another system externally to do the rest, and then somehow get the two systems to talk to each other.

In all these cases, the degree of choice is staggering. Which programming language should I use? Which libraries? Which online services? Which vendor systems? Much time can be spent in catalogue shopping and beauty parades, and in dealing with integration issues (problems within each system and between the systems), rather than solving the original problem you were trying to solve.

On top of that, almost all documentation is subject to Sturgeon’s Law, often requiring trial-and-error or reverse engineering to make it work. (That’s why I like open source; at least the reverse engineering option is available).

“There are two branches of science: Physics and stamp collecting”
Ernest Rutherford

I feel more like a stamp collector as the years go on.

Poor abstractions

Let me go back to my Macintosh 128 for a moment.

At this stage, things were still pretty simple. I wrote programs in Pascal and C, which compiled to small binaries that I could single-step with a debugger. If I wanted a dialog box on the screen, I’d make a call to Quickdraw. If I wanted it centred on screen, I’d make a small calculation using the size of the viewport and the size of the dialog box. It was very much “I want to do X — so I write some code to do X.”
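That centring calculation really was small. As a sketch of the arithmetic — written in Go to match the other examples here, with a made-up Rect type rather than the real QuickDraw/Toolbox structures — it comes down to splitting the leftover space equally on each axis:

```go
package main

import "fmt"

// Rect imitates the kind of rectangle type a 1980s toolbox provided;
// it is not the real QuickDraw Rect.
type Rect struct {
	Left, Top, Width, Height int
}

// centre returns the top-left position that places box in the middle of
// screen: half of the leftover space on each axis.
func centre(screen, box Rect) (left, top int) {
	left = screen.Left + (screen.Width-box.Width)/2
	top = screen.Top + (screen.Height-box.Height)/2
	return left, top
}

func main() {
	screen := Rect{0, 0, 512, 342} // the Mac 128's screen resolution
	dialog := Rect{Width: 300, Height: 120}
	left, top := centre(screen, dialog)
	fmt.Println("draw dialog at", left, top) // 106 111
}
```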

Move forward some years, and I started to try writing for the web. A web application generates output as a page of HTML, together with styling information in the form of CSS⁷, and the web browser puts these pieces together to render on the page.

What you don’t do is: “I want this piece of text to be displayed here”. That would be too simple.

What you do is: “Here is some text, and here are some attributes. This text is nested inside a whole bunch of container elements, each of which also has a bunch of attributes. The attributes interact with each other in highly complex ways, dependent on some attributes being inherited from structural parents, and some attributes overriding other attributes, with a whole bunch of exceptions and additional overrides”.

Unless you have a CSS engine in your brain — and I don’t — programming is a case of generating all the elements and attributes, seeing what they do, and tweaking them until somehow it gets close to the result you wanted. Perhaps it’s just the way my brain works: I always do what appears to be the “simple and obvious” thing, but it doesn’t work out how I expect.

One example: I wanted an item on the right-hand side of the page, so I set float: right. It messed up the rest of my layout. It turns out that once you add “float” to any element, you basically have to add “float” to every element on your page. This gave birth to a whole load of CSS frameworks which use “float” elements of varying width to create columnar layouts. (I believe CSS now has native “grid” layouts, adding yet more complexity to CSS.)

Then there’s the way your front-end application logic is distributed all over the place, attaching listeners to particular DOM elements (or classes). Look at a page, and you want to know what happens when you click on a particular element? Who knows: the code is somewhere else entirely. Decode the multi-megabyte blob of minified JavaScript, and it’s buried in there somewhere. If these listeners are being attached and removed dynamically, you may never find out.

Now it appears that some web-based applications have started bypassing the HTML DOM/CSS layer, using the <canvas> tag and drawing to the window, so the application directly controls what you see. In other words: they’ve come full circle to how Quickdraw worked on the Mac 128. That made me smile.

Perhaps it’s not surprising. HTTP+HTML was originally conceived as a way of distributing documents, not an execution environment for applications. In other words, the abstraction which we were forced to use did not match well the way programmers wanted to use it.

So what now?

I would love to go back to a time when computing was fast, simple — and fun.

I wonder what my 10-year-old self would have made of computing, if the first computer I was introduced to was a Raspberry Pi with 4 cores, 4 gigabytes of RAM, and an 8GB flash card stuffed with the OS. Would I ever have understood it at a fundamental level? Would I have used it for anything other than playing games?

Electronics as a hobby seems to have died too. Long gone are the days when you could pop into the high street to pick up a resistor or some solder.

Maybe I’m naïve. Maybe I’m just an old dog. But secretly I’m hoping that the next breakthrough in computing will not be in complexity, but in simplicity.

If the Next Big Thing™ turns out to be a CPU packed with cores, each of which is a 6502 with 64 kilobytes of RAM, all interconnected by an IEEE-488 bus, then I’ll be happy.

Update

Since writing this, I came across this video in which a games developer coherently argues that we are already in the midst of a collapse of civilisation, and that technology (in particular software) is degrading rather than improving. It’s well worth a watch, even if — perhaps especially if — you think it can’t possibly be true.

Also worth reading: the article “A Plea for Lean Software” by Niklaus Wirth — written in 1995.

¹Designed and sold by the long-lost Hewart Microelectronics. I gave the system away many years ago. However if Brian Hewart is still around, I would like to shake his hand.

²Pedantically I know this should say “4 kibibytes” (4 KiB). However the term wasn’t around at the time, and isn’t much loved even now.

³I later upgraded it to 512 kilobytes, by desoldering the old RAM chips and soldering some new ones in.

⁴It was black-and-white only. I believe that later versions of the Mac had “Color Quickdraw” in 256 kilobytes of ROM.

⁵OK, and your time. And how much Lego you could afford.

⁶Application Programming Interface

⁷Cascading Style Sheets
