Friday, March 23, 2012

Under-the-Hood Nausea

How Computers Work (9th Edition) by Ron White

My rating: 5 of 5 stars


There's something vaguely nauseating about looking under the hood, yes? I want to hear the music, I don't want to think about the speakers, or even about syncopation or harmony, the Mixolydian mode or the bridge, and certainly not about the trail of human gore that lies behind a commercial song; I want to eat the drumstick, not think about veins and connective tissue, or overcrowded chicken coops with chickens wearing their tiny red contact lenses; I want to swim in the shallow Caribbean Sea, I don't want to think about the mouths of great white sharks and the eleven-inch-wide eye of the giant squid, horribly human and ectopic like an eye in a tree trunk somewhere in Tolkien’s Middle-earth; I want to see my thoughts given form on a page, and I don't want to think about what's happening below the keys in channels of rectilinear silver. I don't ever want to look under the couch pillows.

And yet, I'm curious—not about the vile intra-couch detritus—but about how things work. Knowing that makes you powerful and wise, even more than going to bed early.

Ron White's book is a useful, clear yet thorough, lavishly illustrated introduction to the basics of computing, covering everything from digital cameras and displays to the internet, power supplies, heat regulation, programming languages, database management, optical discs, and hard drives. I was mainly interested in the segments describing how transistors, logic gates, and microchips work, and those chapters answered most of my humble questions. For one thing, I learned at last how RAM is physically structured.

RAM, or ‘random access memory,’ holds data temporarily: when you’ve added a sentence to a document but haven’t yet saved it to disk, the new sentence is encoded in RAM circuits. There's a grid of copper traces on a wafer of silicon (which is refined from silica, i.e., quartz, the major constituent of sand and of glass). The grid is made of address lines running in one direction and data lines running perpendicular to them, forming a field of intersections, and at each intersection sit a transistor and a capacitor. A transistor is a "switch": a current applied to one of its terminals activates (closes) a circuit, allowing a second current to flow between the other two. A capacitor temporarily stores electric charge, like a tiny battery.
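
For the visually inclined (or the mildly nauseated), here is a toy model of a single memory cell, a little Python sketch of my own rather than anything from White's book: the transistor is reduced to a boolean switch, and the capacitor to a flag that is either charged or not.

# Toy model of one DRAM cell: a transistor (switch) guarding a capacitor.
# An illustration of the idea, not of real circuit behavior.
class DramCell:
    def __init__(self):
        self.capacitor_charged = False  # the stored bit: charged = 1, empty = 0

    def write(self, address_line_active, data_line_current):
        # The transistor "closes" only while its address line carries current;
        # only then can the data line charge (or leave empty) the capacitor.
        if address_line_active:
            self.capacitor_charged = data_line_current

    def read(self, address_line_active):
        # Reading likewise requires the transistor to be switched on.
        if address_line_active:
            return 1 if self.capacitor_charged else 0
        return None  # address line inactive: the cell is disconnected

cell = DramCell()
cell.write(address_line_active=True, data_line_current=True)
print(cell.read(address_line_active=True))  # prints 1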

An address line carries the current that switches on all the transistors along it, and each data line then either sends a current to its activated transistor or doesn't. Where a data line sends current, it charges the capacitor at that intersection; where it doesn't, the capacitor stays empty. A charged capacitor represents the bit engineers call a '1,' and an uncharged capacitor represents the bit called a '0.' In this way, the machine stores a string of 1s and 0s along an address line (eight of them make a byte of information). Numbers as we know them can be represented in strings of 1s and 0s using Gottfried Leibniz's system of notation known as binary arithmetic, which was invented long before the computer but happily suits it. And letters or other meanings can be encoded in number sequences according to standardized codes like ASCII (the American Standard Code for Information Interchange).
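
To see the whole write in miniature, here is another sketch of mine, assuming eight cells along one address line, one per data line: the letter 'a' becomes an ASCII number, the number becomes eight bits, and the bits become a pattern of charged and uncharged capacitors.

# One address line crossing eight data lines stores one byte.
letter = 'a'
number = ord(letter)          # ASCII assigns 'a' the number 97
bits = format(number, '08b')  # 97 in Leibniz's binary: '01100001'
print(letter, number, bits)

# Activate the address line, then drive each data line (or not):
# True = a charged capacitor (a 1), False = an uncharged one (a 0).
capacitors = [bit == '1' for bit in bits]
print(capacitors)  # [False, True, True, False, False, False, False, True]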

I also learned that a microprocessor converts data inputs (bytes of information) into outputs according to logical rules, and that those rules can be embodied in particular configurations of logic gates, which are in turn particular configurations of transistors. So you can program the microprocessor to do numerical calculations, but you can also have it convert a given input code (say, the one produced by pressing the 'a' key) into a given output code (say, one that lights up pixels on the monitor in the shape of an 'a').
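
And here is that layering in miniature, again my own toy sketch and not White's: a NAND gate (which in silicon is a small arrangement of transistors) as the single primitive, the other gates derived from it, and a half-adder, the smallest circuit that actually does arithmetic, built from those.

# Everything below is built from one primitive, NAND, the way a chip
# builds its logic from small configurations of transistors.
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor_(a, b):
    return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    # Adds two bits: XOR yields the sum bit, AND yields the carry bit.
    return xor_(a, b), and_(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> carry {c}, sum {s}")

Feed enough of these into one another and you get the "logical rules" a microprocessor lives by.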

Was that too much information? If so, you may be experiencing under-the-hood nausea. Once I literally made a friend vomit with too much information. It was his bachelor party and he was drunk. I was in medical school at the time and began to explain to him what his liver was doing with all that alcohol. Every time I said the words ‘cytochrome P450 enzymes’ he begged me to stop, but I was also fairly drunk and when I’m drunk sometimes I become even more boring. I kept going and he called for the limo to pull over for a bout of hyperemesis bacheloris.

I think this is one reason it’s hard to look under the hood; it reminds us that things are made of parts, that we’re made of parts, and that we’re therefore mortal. I confess that learning about computation made me think somewhat despondently, “Are my own thoughts and memories basically like this? Made out of 1s and 0s?” It feels castrating to look at it that way, not only because it reminds me I’m dust and to dust I shall return, but because that level of magnification destroys my identity. If you look at a person at that level of magnification, so that the body is cells and the thoughts bits of data, then the familiar forms of the self seem to disappear. The universe itself seems lonely and full of nothing but the illiterate sand lying on the beach and in our laptops’ integrated circuits.

Looking under the hood can breed anxiety dreams, for sure, which is why people don’t dare to look, but in reality nothing is lost. The old familiar scale of things still exists and is a distinct and scientifically valid phenomenon. Particles do things en masse, at large scales, that they don’t do individually; they have “emergent properties.” For example, five water molecules do certain things, but they don’t make waves. You need an ocean full of them for that. And you need an ocean full of bits for that unique phenomenon of a human self.
