Quantitative

Dec 19, 2015 12:56

I have a bunch of favourite "classic" computer books -- SICP, Hacker's Delight, Little Lisper, Exploring Randomness -- but one I really like doesn't often come up in people's lists: Computer Architecture: A Quantitative Approach, by Hennessy and Patterson.

I like this book not because of any sequence of "aha" moments or flashes of insight. It doesn't lead you through learning the beauty of the computable functions or recursion or finite fields or such. I like it because it represents a counterargument -- a very persuasive and boring one -- to all the talk of flashes-of-insight and genius that pervades our business. To all the self-told myths of the computing field: that computers are about fundamental paradigm shifts, creative thinking, magic, stirring rhetoric and revolution.

The book is an elaboration of the argument -- never stated outright, but implicit in every page -- that computers are actually just about speed. Just that. They might enable a whole lot of other interesting things along the way, but when push comes to shove, when machines get designed and paid for and built, the important part -- as far as those paying the bills are concerned -- is speed. Or (in a very slight nuance) speed/cost/power efficiency.

The key word in the title of the book is "quantitative".

If computing wasn't fast, it wouldn't be interesting. At least not to capitalism, to industry, to the military or governments or the myriad institutions that value them. Humans can do what computers can do, after all; and they can even do it with common sense, imagination, curiosity, automatic error-sensing and correction, self-maintenance, self-repair and self-reproduction, all sorts of desirable properties. The only thing computers have on us is speed -- incredible speed -- and that's both the origin and central locus of their power. They were born in military and accounting contexts where they replaced, task by task and eventually person by person, computations that were formerly done by humans. Because they were faster. And it is still all about speed.

Don't get me wrong: we're talking about speed so fast that there are threshold effects, qualitative shifts along the exponential curve where things-become-possible that were formerly impossible. If you told someone in 1920 that I'd be able to run the MP3 decompression algorithm on megabytes of data in real time, against a multi-gigabyte storage medium ... and moreover that I could do this from a machine in my pocket that I bought for the price of dinner and a show ... to say they'd be surprised, if they could even grasp the concept, would be an understatement. It's past 4 or 5 major change-how-you-see-the-world thresholds. But it's also all about speed.

The Manchester SSEM had 32 words of (Williams tube!) memory and ran at about 1000 operations per second. My phone has roughly a million times the speed and a billion times the storage. But looking at one, you can otherwise understand the other: the design isn't wildly different.
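
(A back-of-envelope check of those two ratios, as a sketch with round numbers: the SSEM figures below are the commonly cited ones, and the phone figures are assumptions for a mid-2010s handset, not measurements.)

```rust
// Rough arithmetic only; every figure here is an order-of-magnitude estimate.
fn main() {
    let ssem_ops_per_sec = 1.0e3_f64;        // SSEM: on the order of 10^3 instructions/sec
    let ssem_storage_bytes = 32.0 * 4.0_f64; // 32 words x 32 bits = 128 bytes

    let phone_ops_per_sec = 1.0e9_f64;       // assumed: a GHz-class core
    let phone_storage_bytes = 128.0e9_f64;   // assumed: ~128 GB of flash

    println!("speed ratio:   ~{:.0e}", phone_ops_per_sec / ssem_ops_per_sec);     // ~1e6
    println!("storage ratio: ~{:.0e}", phone_storage_bytes / ssem_storage_bytes); // ~1e9
}
```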

Modern systems deal with such threshold effects as much as any ever have. Occasionally there's an algorithmic breakthrough and something formerly-hard becomes trivially-easy. But far more often, we just wait a few years for capitalism to apply brute force through Moore's law, and then something becomes "tractable". When you look at groundbreaking products in the computer world -- from personal computers to phones, search engines to speech recognizers -- sometimes they have to do with some intense cleverness, but far more often they have to do with speed. Computers got fast enough to do a thing, so then the thing started being done.

Hennessy and Patterson's book is focused on the minutiae of devices, tradeoffs, balances, adjustments and allocations of space, power, clocks, transistors, encodings, bandwidth and latency that make up the current generation of machines. They have put out several editions of the book, as architectures have changed over the years. Balances shift at different rates; things become possible that once weren't (or were too expensive). Always, at every step, computer architecture is a balancing act, a selection of settings to produce a coherent and unified whole that runs fast when applied to real code.

The book isn't the sort of thing you necessarily read end-to-end or view as a story. It's a set of case studies, vignettes, zoomed-in considerations of tradeoffs, optimizations and balancing acts at work in every little nook and cranny of a modern computing system. How often do you stop to think about instruction encoding? About pipeline width and depth? About your memory hierarchy or branch mispredict rates? About addressing modes and cache-coherence protocols? Yet any one of these, mis-designed or mis-calibrated, could make your program infeasibly slow, and make your experience of computing suddenly lose a decade of progress in "computers being fast enough to do" whatever you're trying to do with them.
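
As one concrete illustration of how invisible these factors are from source level, here's a minimal Rust sketch -- not a rigorous benchmark, and the array size is just an assumption chosen to be larger than typical caches -- that sums the same data twice: once in sequential order, once in a randomly permuted order. Same bytes, same arithmetic; the only difference is how kindly the access pattern treats the memory hierarchy.

```rust
// A sketch, not a benchmark: sum identical data through a cache-friendly and
// a cache-hostile access order. Build with --release to see the gap clearly.
use std::time::Instant;

// Tiny xorshift64 PRNG so the sketch needs no external crates.
fn xorshift(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

fn main() {
    const N: usize = 1 << 23; // 8M u64s = 64 MB, well past typical cache sizes
    let data: Vec<u64> = (0..N as u64).collect();

    // Sequential index order: cache lines and the hardware prefetcher work for us.
    let seq: Vec<usize> = (0..N).collect();

    // Randomly permuted index order (Fisher-Yates shuffle): mostly cache misses.
    let mut rand = seq.clone();
    let mut state = 0x9E3779B97F4A7C15_u64;
    for i in (1..N).rev() {
        let j = (xorshift(&mut state) % (i as u64 + 1)) as usize;
        rand.swap(i, j);
    }

    for (name, idx) in [("sequential", &seq), ("random", &rand)] {
        let start = Instant::now();
        let mut sum = 0_u64;
        for &i in idx.iter() {
            sum = sum.wrapping_add(data[i]);
        }
        // Both sums are identical; only the access order (and the time) differs.
        println!("{:>10}: sum = {}, elapsed = {:?}", name, sum, start.elapsed());
    }
}
```

The point isn't the particular numbers; it's that nothing in that source looks slow. The difference lives entirely in layers the code never mentions.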

There's a popular interactive infographic-thingy / set of tables about computer speed -- well, latency, which generally limits speed -- called Latency Numbers Every Programmer Should Know (which, of course, most programmers really can't regurgitate from memory). Similar tables show up in Dean, Norvig and (perhaps most convincingly) Gregg. These are excellent companions to keep open while browsing any given part of the book.

What's true of hardware is also true of software, to some extent. It's just a much less serious effect. So much less serious that there's a compiler-writer's joke built around it. Proebsting's Law: compiler advances double computing power every 18 years. It's true that, in certain contexts -- for example the context Rust is targeting -- being 10x or even 2x faster than the competition is a make-or-break thing. It decides the fate of a system in the market, as much as it would in the hardware market. And while the compiler-writer's range of improvements might be narrow, the language-designer or standard-library-designer might be able to squeeze another 10x-100x out of their range of design and architecture choices. You can get situations where your memory-dense, static-dispatched, fine-tuned systems language beats your pointer-chasing dynamic-everything language by a factor of 100 or more.
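
To make the dispatch part of that concrete, here's a toy Rust sketch -- an illustration of the design axis, not a benchmark of Rust against anything in particular, and the types and names are made up for the example -- contrasting static dispatch, which the compiler can monomorphize and inline, with dynamic dispatch through a trait object, where every call goes through a pointer and a vtable.

```rust
// A toy contrast of static vs. dynamic dispatch; illustrative names only.
trait Op {
    fn apply(&self, x: f64) -> f64;
}

struct Scale(f64);

impl Op for Scale {
    fn apply(&self, x: f64) -> f64 {
        x * self.0
    }
}

// Static dispatch: monomorphized per concrete type, so `apply` can be inlined
// (and the loop potentially vectorized).
fn sum_static<T: Op>(op: &T, xs: &[f64]) -> f64 {
    xs.iter().map(|&x| op.apply(x)).sum()
}

// Dynamic dispatch: each call goes through a vtable, behind a pointer the
// optimizer largely can't see through.
fn sum_dynamic(op: &dyn Op, xs: &[f64]) -> f64 {
    xs.iter().map(|&x| op.apply(x)).sum()
}

fn main() {
    let xs: Vec<f64> = (0..1_000_000).map(|i| i as f64).collect();
    let op = Scale(1.5);
    let boxed: Box<dyn Op> = Box::new(Scale(1.5));

    println!("static:  {}", sum_static(&op, &xs));
    println!("dynamic: {}", sum_dynamic(boxed.as_ref(), &xs));
}
```

Whether that particular gap is worth 2x or 100x depends on what's inside the call and everything around it; the point is only that it's a language- and library-level choice, made long before any compiler optimization runs.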

But it is not the case that this year's software is 2x or 10x as fast as last year's. It has never been the case.

In fact, many software systems reached their "fastest" implementations before any of us were born and have been getting slower as they're rewritten into newer, safer, more convenient languages and/or redone with simpler and more maintainable implementations. It's instructive to crack open some code written in 1970 and marvel at how spartan it is, how much faster it runs (on today's hardware as much as yesterday's) than its modern equivalent. Almost all the "speed up" in the meantime has been hardware.

I don't have a real point to make with this post, except to recommend this book (and similar books), and to remind people who are beguiled by the higher levels of software and the myriad possibilities offered by today's computing that there are layers upon layers of substructure, of software and hardware and micro-hardware, upon which it's all built. And those layers are interesting. They reveal, predict, enable and constrain what happens in the upper layers. Understanding them, or even browsing through them to gain a moderate appreciation for the factors involved, is rewarding.

This entry was originally posted at http://graydon2.dreamwidth.org/233585.html. Please comment there using OpenID.
