God of the Machine – Page 4 – Culling my readers to a manageable elite since 2002.
Jun 21, 2005
 

In ordinary discourse a “dated” work of art is old-fashioned, no longer pertinent, a back number. But this is imprecise. The truly dated work can be traced to the moment it was made.

The 40s: Gentleman’s Agreement (1947)

The 40s are remembered, cinematically, as the era of gangsters and gun molls, of crooked cops and desperate double-crossing dames, all pursued by gumshoes who dangle a cigarette out of one side of their mouths and deliver snappy patter out of the other. This is known as “realism.”

Whatever it was, the audience had a taste for something else. The top ten grossing movies of the decade were Bambi, Pinocchio, Fantasia, The Best Years of Our Lives, The Bells of St. Mary’s, Duel in the Sun, Sergeant York, Mom and Dad (not quite so wholesome as it sounds), Meet Me In St. Louis, and Easter Parade.

Somewhere between the 30s and 40s journalists in the movies went from raffish ambulance chasers to plumed crusaders for truth. Maybe Ernie Pyle is to blame, maybe more journalists started getting screenwriting jobs, I don’t know, but when Gregory Peck is cast as a journalist you know the party’s over. In Gentleman’s Agreement he plays his customary straight arrow with that deer-in-the-headlights look that he didn’t manage to lose until The Boys from Brazil. Anti-Semitism is exposed with all the investigative grit of Eddie Murphy’s seminal “White Like Me” sketch on Saturday Night Live. Does this movie date? Well, let’s just say that 1947 was about the last year that even senile lounge lizards thought they could keep the money in the country club and the Jews out.

The 50s: Rebel Without a Cause (1955)

Here we have a case of overdetermined dating. Psychology: until the 1950s it did not occur to psychologists, not always the sharpest tools in the shed, that juvenile delinquents weren’t always from the slums. Mise-en-scène: teen angst without music, garish Technicolor, homoerotic subtext (did I really just write “subtext”?), pegged jeans, chicken runs. Acting style: James Dean slouches and shambles, stumbles and mumbles, shrieks and stammers, and generally Methods up a storm. Bonus: the climax takes place in a planetarium.

The 60s: Guess Who’s Coming To Dinner? (1967)

Poitier glowers! Hepburn quavers! Tracy blusters! Miscegenation shocks white liberals!

Of course the movie was released in 1967, but when? It’d have to be after the summer (of Love); I estimate September 23rd, 4:33 EST. Give or take ten minutes.

The 70s: Carnal Knowledge (1971)

It would be cheating to draw any inference from the fact that this movie stars Art Garfunkel, though inferences from Garfunkel’s hair, not to mention Carol Kane’s, are admissible. It’s when Jack Nicholson sits himself down in one of those praying-mantis lounge chairs and treats Kane and Garfunkel to a slide show of his erotic life that we know we’re in that early 70s netherworld between Godspell and disco. Plus Garfunkel describes Kane as his “love teacher.”

The 80s: Wall Street (1987)

Oliver Stone is no accountant. Anacott Steel, according to the wise old broker, has “no fundamentals,” while according to the corporate raiders it has a breakup value of 80 a share when it’s selling at 45. So maybe you figure there are a few fundamentals in there somewhere.

Oliver Stone, God help us, is a screenwriter. Daryl Hannah says to Charlie Sheen, “I want to do for furniture what Laura Ashley did for fabric.” “And I’ll take you public,” Sheen says. “You will?” she squeals. (Her next line, “Oh goody!”, apparently survives only in the director’s cut.)

Charlie Sheen says to Daryl Hannah, “So what do you want?” “I want…a Turner. A perfect Canary diamond. World peace. The best of everything.” Not necessarily, one surmises, in that order. 1987’s on the phone. He says it’s OK, you can keep his dialogue.

Honorable mention: Flashdance (1983). What a feeling.

The 90s: Jerry Maguire (1996)

Writer/director Cameron Crowe is really, truly sorry about the 80s, and he promises they won’t happen again. This abject apology for the previous decade is, to my knowledge, the first, and one hopes the last, movie to feature a sports agent, which dates it with precision. Before 1995 nobody knew what a sports agent was; after 1996 nobody cared. Jerry Maguire is of course best known for bequeathing to subnormals that most 80s of all slogans, “Show me the money!” This bitter irony for Crowe was assuaged, in part, by a tall, cool stack of cash. The movie grossed over $150 million in the US alone.

The 00s:

Ask me in ten years.

Jun 20, 2005
 

There is bridge blogging, as in “check out my culture, it’s even worse than yours,” which is copious, and then there is bridge blogging, as in the game, which is scarce. Me, I prefer the latter: my own damn culture gives me enough tzuris. Unfortunately, if we except the occasional bridge entry at Floyd McWilliams’ Declarer, there were, as of last week, no decent bridge blogs at all. Now there is one: Squeezing the Dummy, by my friend Justin Lall, the best player in America under 30 and a great system theoretician. He also writes frankly, and in complete sentences. You Gee Chronicles refugees will want to have a look.

(Update: After three weeks of excellent daily blogging, he took it down. Never mind.)

Jun 09, 2005
 

Now is the time on God of the Machine when I play nice with the other blogchildren, who must be exasperated by my philoso-scientific treatises. I have been tagged for a game by Agenda Bender, who sustains, practically single-handed, my diminishing belief that homosexuals are, in fact, witty. I will indulge him.

1. Number of Books I’ve Owned: Lifetime, a few thousand, more than five and less than ten. Like Albert Jay Nock in Memoirs of a Superfluous Man — which I own — I owe a great deal of my education to reading the spines of books. My apartment has room for only 1,500 or so, and henceforward each arrival necessitates a departure.

2. Last Book Bought: The Greeks and the Irrational, by E.R. Dodds. See last book read.

3. Last Book Read: The Origins of Consciousness in the Breakdown of the Bicameral Mind, by Julian Jaynes. I picked this up a few years ago and brought it to work, intending it for subway reading. My boss spotted it and called me “a Julian Jaynes homosexual.” I had to put the book down so I could think about how to punctuate that.

Jaynes’s book is interesting, if a bit off the wall, and he cites Dodds favorably, which prompted me to buy it. The portion of my education not due to book spines I owe to my habit of reading the books that the authors I admire read. A book without footnotes and bibliography is like a day without sunshine.

4. Five Books That Mean a Lot to Me: I just gave a reading list, and I hate reading lists. Instead you will get a reading history.

In my adolescence I had no mind to speak of. I read indiscriminately, remembered little and understood less. I assiduously studied Fowler’s Modern English Usage, utterly failed to discern its spirit, and became a pedant. The only books I thoroughly absorbed were about games: Bobby Fischer’s My 60 Memorable Games, Louis Watson’s The Play of the Hand, and The Baseball Encyclopedia.

At 20 my sneaking suspicion that I had been fed an awful lot of shit was confirmed by Ayn Rand, which helped to make me insufferable for the better part of a decade. Fortunately I was already a bit too old; Hazlitt and von Mises convinced me about economics before Rand made a dent. It usually begins with Ayn Rand, and usually ends there too.

At 25 I was browsing the back of the book in The New Republic and came across a reference to Yvor Winters as “being opposed to everything the 20th century stood for” or something like that. Not true — Winters believed that the 20th century is poetry’s greatest in English — but there, I thought, is the critic for me. After two years of immersion in Forms of Discovery and its accompanying anthology, Quest for Reality, I fancied myself a poet; after five, a poetry critic.

At 30 I took up computer programming. I learned how to think about programming problems from George Polya’s various books about mathematical heuristic, especially How to Solve It; how to design complex systems from Christopher Alexander’s The Timeless Way of Building and A Pattern Language; and how to develop reasonable coding habits from Code Complete by Steve McConnell and Refactoring by Martin Fowler. For any bugs in my current code these four men are entirely responsible.

Now I patch the holes in my defective education as best I can. Since I forget faster than I read, I keep falling further behind, in the manner of Uncle Toby in Tristram Shandy, who needs half an hour to write fifteen minutes of his life. And there we are.

The culturati are going at it hot and heavy over the burden of consumer choice. So much food, so much art, so little time! Jon Hastings sympathizes; Virginia “Eternal Sunshine” Postrel is having none of it:

Since different people care intensely about different things, only a society where choice is abundant everywhere can truly accommodate the variety of human beings. Abundant choice doesn’t force us to look for the absolute best of everything. It allows us to find the extremes in those things we really care about, whether that means great coffee, jeans cut wide across the hips, or a spouse who shares your zeal for mountaineering, Zen meditation, and science fiction.

True, sometimes, I guess, though one wonders in passing which supermarket Postrel bought her husband at. I will readily stipulate that there are markets, like mattresses or deodorants, in which people who “really care about” sleep or smelling fresh will not be any better served than the rest of us by the hundreds of indistinguishable products on offer. Point is, the mattresses and deodorants are all pretty good, for exactly the same reason that there are so many of them. Here our choices are limited: high quality and profusion, or neither.

Also, Anne Bancroft died. I exempt myself from my recent strictures on the grounds that I often talked about her but never got around to writing, and besides, I feel like it. She triumphed as Annie Sullivan and equally, in a completely different way, as Mrs. Robinson, in a dated and overrated movie that lives only when she is on screen (excepting Buck Henry’s neat turn as the hotel desk clerk). She also managed to stay married to Mel Brooks for forty years and keep her mouth shut in public. A working definition of adulthood is the day you watch The Graduate and not only find Anne Bancroft more alluring than Katharine Ross but wonder how you could have ever thought otherwise.

(Update: Colby Cosh comments. Alan Sullivan comments.)

Jun 02, 2005
 

What is entropy, exactly? First try an easier one: What is gravity? Suppose you had never heard of gravity and asked me what it was. I answer the usual, “attraction at a distance.”

At this point you are as badly off as you were before. Do only certain objects attract each other? How strong is this “attraction”? On what does it depend? In what proportions?

Now I give a better answer. Gravity is a force that attracts all objects directly as the product of their masses and inversely as the square of the distance between them. I may have to backtrack a bit and explain what I mean by “force,” “mass,” “directly,” “inversely,” and “square,” but finally we’re getting somewhere. All of a sudden you can answer every question in the previous paragraph.

Of course I am no longer really speaking English. I’m translating an equation, Fg = G·(m1·m2)/r². It turns out that we’ve been asking the wrong question all along. We don’t really care what gravity is; there is some doubt that we even know what gravity is. We care about how those objects with m’s (masses) and r’s (distances) act on each other. The cash value is in all those little components on the right side of the equation; the big abstraction on the left is just a notational convenience. We write Fg (gravity) so we don’t have to write all the other stuff. You must substitute, mentally, the right side of the equation whenever you encounter the term “gravity.” Gravity is what the equation defines it to be, and that is all. So, for that matter, is alpha. The comments to the previous sections on alpha theory are loaded with objections that stem from an inability, or unwillingness, to keep this in mind.
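The right-hand side really is all there is to it; here is a minimal Python sketch of the substitution (the function name and sample values are my own illustration, not from the original):

```python
G = 6.674e-11  # gravitational constant, in N·m²/kg²

def gravity(m1, m2, r):
    """Force of attraction: directly as the product of the masses,
    inversely as the square of the distance between them."""
    return G * m1 * m2 / r**2

# Doubling the distance quarters the force:
# gravity(m1, m2, 2*r) == gravity(m1, m2, r) / 4
```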

In a common refrain of science popularizers, Roger Penrose writes, in the preface to The Road to Reality: “Perhaps you are a reader, at one end of the scale, who simply turns off whenever a mathematical formula presents itself… If so, I believe that there is still a good deal that you can gain from this book by simply skipping all the formulae and just reading the words.” Penrose is having his readers on. In fact if you cannot read a formula you will not get past Chapter 2. There is no royal road to geometry, or reality, or even to alpha theory.

Entropy is commonly thought of as “disorder,” which leads to trouble, even for professionals. Instead we will repair to Ludwig Boltzmann’s tombstone and look at the equation:

S = k log W

S is entropy itself, the big abstraction on the left that we will ignore for the time being. The right-hand side, as always, is what you should be looking at, and the tricky part there is W. W represents the number of equivalent microstates of a system. So what’s a microstate? Boltzmann was dealing with molecules in a gas. If you could take a picture of the gas, showing each molecule, at a single instant–you can’t, but if you could–that would be a microstate. Each one of those tiny suckers possesses kinetic energy; it careers around at staggering speeds, a thousand miles an hour or more. The temperature of the gas is the average of all those miniature energies, and that is the macrostate. Occasionally two molecules will collide. The first slows down, the second speeds up, and the total kinetic energy is a wash. Different (but equivalent) microstates, same macrostate.

The number of microstates is enormous, as you might imagine, and the rest of the equation consists of ways to cut it down to size. k is Boltzmann’s constant, a tiny number, 10⁻²³ or so. The purpose of taking the logarithm of W will become apparent when we discuss entropy in communication theory.

An increase in entropy is usually interpreted, in statistical mechanics, as a decrease in order. But there’s another way to look at it. In a beaker of helium, there are far, far fewer ways for the helium molecules to cluster in one corner at the bottom than there are for them to mix throughout the volume. More entropy decreases order, sure, but it also decreases our ability to succinctly describe the system. The greater the number of possible microstates, the higher the entropy, and the smaller the chance we have of guessing the particular microstate in question. The higher the entropy, the less we know.
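A toy calculation makes the clustering point concrete. Take 100 distinguishable molecules in a box split into two halves: confined to one half, there is only one arrangement per molecule; free to roam, each molecule has two choices. (The numbers and names here are illustrative, not from the original.)

```python
import math

k = 1.380649e-23  # Boltzmann's constant, J/K

def entropy(W):
    """S = k log W, for W equivalent microstates."""
    return k * math.log(W)

# 100 distinguishable molecules, box divided into two halves:
W_confined = 1       # every molecule stuck in one half: one way
W_free = 2 ** 100    # each molecule free to sit in either half

# entropy(W_free) exceeds entropy(W_confined): mixing raises S,
# and our chance of guessing the particular microstate shrinks
```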

And this, it turns out, is how entropy applies in communication theory. (I prefer this term, as its chief figure, Claude Shannon, did, to “information theory.” Communication theory deals strictly with how some message, any message, is transmitted. It abstracts away from the specific content of the message.) In communication theory, we deal with signals and their producers and consumers. For Eustace, a signal is any modulatory stimulus. For such a stimulus to occur, energy must flow.

Shannon worked for the telephone company, and what he wanted to do was create a theoretical model for the transmission of a signal — over a wire, for the purposes of his employer, but his results generalize to any medium. He first asks what the smallest piece of information is. No math necessary to figure this one out. It’s yes or no. The channel is on or off, Eustace receives a stimulus or he doesn’t. This rock-bottom piece of information Shannon called a bit, as computer programmers still do today.

The more bits I send, the more information I can convey. But the more information I convey, the less certain you, the receiver, can be of what message I will send. The amount of information conveyed by a signal correlates with the uncertainty that a particular message will be produced, and entropy, in communication theory, measures this uncertainty.

Suppose I produce a signal, you receive it, and I have three bits to work with. How many different messages can I send you? The answer is eight:

000
001
010
011
100
101
110
111

Two possibilities for each bit, three bits, 2³, eight messages. For four bits, 2⁴, or 16 possible messages. For n bits, 2ⁿ possible messages. The relationship, in short, is logarithmic. If W is the number of possible messages, then log W is the number of bits required to send them. Shannon measures the entropy of the message, which he calls H, in bits, as follows:

H = log W

Look familiar? It’s Boltzmann’s equation, without the constant. Which you would expect, since each possible message corresponds to a possible microstate in one of Boltzmann’s gases. In thermodynamics we speak of “disorder,” and in communication theory of “information” or “uncertainty,” but the mathematical relationship is identical. From the above equation we can see that if there are eight possible messages (W), then there are three bits of entropy (H).
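The eight messages can be enumerated mechanically, and the bit count falls out of the logarithm; a small Python sketch (variable names mine):

```python
import itertools
import math

bits = 3
# every possible 3-bit message: '000', '001', ..., '111'
messages = [''.join(m) for m in itertools.product('01', repeat=bits)]

W = len(messages)   # 2^3 = 8 possible messages
H = math.log2(W)    # 3 bits of entropy
```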

I have assumed that each of my eight messages is equally probable. This is perfectly reasonable for microstates of molecules in a gas; not so reasonable for messages. If I happen to be transmitting English, for example, “a” and “e” will appear far more often than “q” or “z,” vowels will tend to follow consonants, and so forth. In this more general case, we have to apply the formula to each possible message and add up the results. The general equation, Shannon’s famous theorem of a noiseless channel, is

H = – (p1 log p1 + p2 log p2 + … + pW log pW)

where W is, as before, the number of possible messages, and p is the probability of each. The right side simplifies to log W when each p term is equal, which you can calculate for yourself or take my word for. Entropy, H, assumes the largest value in this arrangement. This is the case with my eight equiprobable messages, and with molecules in a gas. Boltzmann’s equation turns out to be a special case of Shannon’s. (This is only the first result in Shannon’s theory, to which I have not remotely done justice. Pierce gives an excellent introduction, and Shannon’s original paper, “The Mathematical Theory of Communication,” is not nearly so abstruse as its reputation.)
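You can check the simplification, and the maximum, in a few lines; this sketch (function name mine, skewed probabilities invented for illustration) uses log base 2 so that H comes out in bits:

```python
import math

def shannon_entropy(probs):
    """H = -(p1 log p1 + p2 log p2 + ... + pW log pW), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [1/8] * 8   # eight equiprobable messages: H = log W = 3 bits
skewed = [0.5, 0.2, 0.1, 0.1, 0.05, 0.03, 0.01, 0.01]  # same W, lower H
```

With equal probabilities the sum collapses to log W; any unevenness, like English’s fondness for “e” over “z,” drags the entropy below that maximum.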

This notion of “information” brings us to an important and familiar character in our story, Maxwell’s demon. Skeptical of the finality of the Second Law, James Clerk Maxwell dreamed up, in 1867, a “finite being” to circumvent it. This “demon” (so named by Lord Kelvin) was given personality by Maxwell’s colleague at the University of Edinburgh, Peter Guthrie Tait, as an “observant little fellow” who could track and manipulate individual molecules. Maxwell imagined various chores for the demon and tried to predict their macroscopic consequences.

The most famous chore involves sorting. The demon sits between two halves of a partitioned box, like the doorman at the VIP lounge. His job is to open the door only to the occasional fast-moving molecule. By careful selection, the demon could cause one half of the box to become spontaneously warmer while the other half cooled. Through such manual dexterity, the demon seemed capable of violating the second law of thermodynamics. The arrow of time could move in either direction and the laws of the universe appeared to be reversible.
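The sorting chore is easy to mimic. In this simulation (the speeds, threshold, and seed are all my own invention, purely illustrative) the demon’s door-keeping spontaneously produces a hot half and a cold half:

```python
import random

random.seed(1)
# molecular speeds in the two halves of the partitioned box
left = [random.random() for _ in range(1000)]
right = [random.random() for _ in range(1000)]

threshold = 0.5
# the demon opens the door only to fast-moving molecules, left to right
fast = [v for v in left if v > threshold]
left = [v for v in left if v <= threshold]
right += fast

avg = lambda xs: sum(xs) / len(xs)
# avg(right) now exceeds avg(left): one half warms while the other cools,
# with no work done on the gas -- apparently
```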

An automated demon was proposed by the physicist Marian von Smoluchowski in 1914 and later elaborated by Richard Feynman. Smoluchowski soon realized, however, that Brownian motion heated up his demon and prevented it from carrying out its task. In defeat, Smoluchowski still offered hope for the possibility that an intelligent demon could succeed where his automaton failed.

In 1929, Leo Szilard envisioned a series of ingenious mechanical devices that require only minor direction from an intelligent agent. Szilard discovered that the demon’s intelligence is used to measure — in this case, to measure the velocity and position of the molecules. He concluded (with slightly incorrect details) that this measurement creates entropy.

In the 1950s, the IBM physicist Leon Brillouin showed that, in order to decrease the entropy of the gas, the demon must first collect information about the molecules he watches. This itself has a calculable thermodynamic cost. By merely watching and measuring, the demon raises the entropy of the world by an amount that honors the second law. His findings coincided with those of Dennis Gabor, the inventor of holography, and our old friend, Norbert Wiener.

Brillouin’s analysis led to the remarkable proposal that information is not just an abstract, ethereal construct, but a real, physical commodity like work, heat and energy. In the 1980s this model was challenged by yet another IBM scientist, Charles Bennett, who proposed the idea of the reversible computer. Pursuing the analysis to the final step, Bennett was again defeated by the second law. Computation requires storage, whether on a transistor or a sheet of paper or a neuron. The destruction of this information, by erasure, by clearing a register, or by resetting memory, is irreversible.

Looking back, we see that a common mistake is to “prove” that the demon can violate the second law by permitting him to violate the first law. The demon must operate as part of the environment rather than as a ghost outside and above it.

Having slain the demon, we shall now reincarnate him. Let’s return for a moment to the equation, the Universal Law of Life, in Part 6:

max E([α – αc]@t | F@t-)

The set F@t- represents all information available at some time t in the past. So far I haven’t said much about E, expected value; now it becomes crucial. Eustace exists in space, which means he deals with energy transfers that take place at his boundaries. He has been known to grow cilia and antennae (and more sophisticated sensory systems) to extend his range, but this is all pretty straightforward.

Eustace also exists in time. His environment is random and dynamic. Our equation spans this dimension as well.

t- : the past
t : the present
t+ : the future (via the expectation operator, E)

t+ is where the action is. Eustace evolves to maximize the expected value of alpha. He employs an alpha model, adapted to information, to deal with this fourth dimension, time. The more information he incorporates, the longer the time horizon, the better the model. Eustace, in fact, stores and processes information in exactly the way Maxwell’s imaginary demon was supposed to. To put it another way, Eustace is Maxwell’s demon.

Instead of sorting molecules, Eustace sorts reactions. Instead of accumulating heat, Eustace accumulates alpha. And, finally, instead of playing a game that violates the laws of physics, Eustace obeys the rules by operating far from equilibrium with a supply of free energy.

Even the simplest cell can detect signals from its environment. These signals are encoded internally into messages to which the cell can respond. A paramecium swims toward glucose and away from anything else, responding to chemical molecules in its environment. These substances act to attract or repel the paramecium through positive or negative tropism; they direct movement along a gradient of signals. At a higher level of complexity, an organism relies on specialized sensory cells to decode information from its environment to generate an appropriate behavioral response. At a higher level still, it develops consciousness.

As Edelman and Tononi (p. 109) describe the process:

What emerges from [neurons’] interaction is an ability to construct a scene. The ongoing parallel input of signals from many different sensory modalities in a moving animal results in reentrant correlations among complexes of perceptual categories that are related to objects and events. Their salience is governed in that particular animal by the activity of its value systems. This activity is influenced, in turn, by memories conditioned by that animal’s history of reward and punishment acquired during its past behavior. The ability of an animal to connect events and signals in the world, whether they are causally related or merely contemporaneous, and, then, through reentry with its value-category memory system, to construct a scene that is related to its own learned history is the basis for the emergence of primary consciousness.

The short-term memory that is fundamental to primary consciousness reflects previous categorical and conceptual experiences. The interaction of the memory system with current perception occurs over periods of fractions of a second in a kind of bootstrapping: What is new perceptually can be incorporated in short order into memory that arose from previous categorizations. The ability to construct a conscious scene is the ability to construct, within fractions of seconds, a remembered present. Consider an animal in a jungle, who senses a shift in the wind and a change in jungle sounds at the beginning of twilight. Such an animal may flee, even though no obvious danger exists. The changes in wind and sound have occurred independently before, but the last time they occurred together, a jaguar appeared; a connection, though not provably causal, exists in the memory of that conscious individual.

An animal without such a system could still behave and respond to particular stimuli and, within certain environments, even survive. But it could not link events or signals into a complex scene, constructing relationships based on its own unique history of value-dependent responses. It could not imagine scenes and would often fail to evade certain complex dangers. It is the emergence of this ability that leads to consciousness and underlies the evolutionary selective advantage of consciousness. With such a process in place, an animal would be able, at least in the remembered present, to plan and link contingencies constructively and adaptively in terms of its own previous history of value-driven behavior. Unlike its preconscious evolutionary ancestor, it would have greater selectivity in choosing its responses to a complex environment.

Uncertainty is expensive, and a private simulation of one’s environment as a remembered present is exorbitantly expensive. At rest, the human brain requires approximately 20% of blood flow and oxygen, yet it accounts for only 2% of body mass. It needs more fuel as it takes on more work.

The way information is stored and processed affects its energy requirements and, in turn, alpha. Say you need to access the digits of π. The brute-force strategy is to store as many of them as possible and hope for the best. This is costly in terms of uncertainty, storage, and maintenance.

Another approach, from analysis, is to use the Leibniz formula:

π/4 = 1 – 1/3 + 1/5 – 1/7 + 1/9 – …

This approach, unlike the other, can supply any arbitrary digit of π. And here you need only remember the odd numbers and an alternating series of additions and subtractions.

Which method is more elegant and beautiful? Which is easier?

Human productions operate on this same principle of parsimony. Equations treat a complex relation among many entities with a single symbol. Concepts treat an indefinite number of percepts (or other concepts). Architects look at blueprints and see houses. A squiggle of ink can call up a mud puddle, or a bird in flight. The aim, in every case, is maximal information bang for minimal entropy buck.

In an unpredictable environment, decisions must be made with incomplete information. The epsilon of an alpha model depends on its accuracy, consistency and elegance. An accurate model corresponds well to the current environment, a consistent model reduces reaction time, and an elegant model reduces energy requirements. Everything, of course, is subject to change as the environment changes. The ability to adapt to new information and to discard outdated models is just as vital as the ability to produce models in the first place.

Thus Eustace generates his alpha* process, operating on some subset of F@t- where t is an index that represents the increasing set of available information F. As Eustace evolves, the complexity of his actions increases and his goals extend in space and time, coming to depend less on reflex and more on experience. He adapts to the expected value for alpha@t+, always working with an incomplete information set. As antennae extend into space, so Eustace’s alpha model extends into a predicted future constructed from an experienced past.

Apr 14, 2005
 

To understand alpha theory, you have to learn some math and science. To learn math and science, you have to read some books. Now I know this is tiresome, and I am breaking my own rule by supplying a reading list. But it will be short. Try these, in order of increasing difficulty:

Complexity, by Mitchell Waldrop. Complexity is why ethics is difficult, and Waldrop provides a gentle, anecdote-heavy introduction. Waldrop holds a Ph.D. in particle physics, but he concentrates on the personalities and the history of the complexity movement, centered at the Santa Fe Institute. If you don’t know from emergent behavior, this is the place to start.

Cows, Pigs, Wars, and Witches, by Marvin Harris. Hey! How’d a book on anthropology get in here? Harris examines some of the most spectacular, seemingly counter-productive human practices of all time — among them the Indian cult of the cow, tribal warfare, and witch hunts — and demonstrates their survival value. Are other cultures mad, or are the outsiders who think so missing something? A world tour of alpha star.

Men of Mathematics, E.T. Bell. No subject is so despised at school as mathematics, in large part because its history is righteously excised from the textbooks. It is possible to take four years of math in high school without once hearing the name of a practicing mathematician. The student is left with the impression that plane geometry sprang fully constructed from the brain of Euclid, like Athena from the brain of Zeus. Bell is a useful corrective; his judgments are accurate and his humor is dry. Lots of snappy anecdotes — some of dubious provenance, though not so dubious as some of the more recent historians would have you believe — and no actual math. (OK, a tiny bit.) You might not believe that it would help you to know that Galois, the founder of group theory, wrote a large part of his output on the topic in a letter the night before he died in a duel, or that Euler, the most prolific mathematician of all time, managed to turn out his reams of work while raising twelve children, to whom, by all accounts, he was an excellent father. But it does. Should you want to go on to solve real math problems, the books to start with, from easy to hard, are How To Solve It, by Pólya, The Enjoyment of Mathematics, by Rademacher and Toeplitz, and What Is Mathematics? by Courant and Robbins.

The Eighth Day of Creation, by Horace Freeland Judson. A history of the heroic age of molecular biology, from the late 1940s to the early 1970s. Judson does not spare the science, and he conveys a real understanding of biology as it’s practiced, as opposed to the way it’s tidied up in the textbooks. A much better book about the double helix than The Double Helix, which aggrandizes Watson and which none of the other participants could stand. Judson’s book has its purple passages, but on the whole the best book ever written on science by a non-scientist, period.

A Universe of Consciousness, by Gerald Edelman and Giulio Tononi. A complete biologically-based theory of consciousness in 200 dense but readable pages. Edelman and Tononi shirk none of the hard questions, and by the end they offer a persuasive account of how to get from neurons to qualia.

Gödel’s Proof, by Ernest Nagel and James Newman. Undecidability has become, after natural selection, relativity, and Heisenberg’s uncertainty principle, the most widely abused scientific idea in philosophy. (An excellent history of modern philosophy could be written treating it entirely as a misapplication of these four ideas.) Undecidability no more implies universal skepticism than relativistic physics implies relativistic morality. Nagel and Newman demystify Gödel in a mere 88 pages that anyone with high school math can follow, if he’s paying attention.

Incidentally, boys, for all of the comments in the alpha threads, one glaring hole in the argument passed you right by. It’s in the Q&A, where I shift from energy to bits with this glib bit of business:

Still more “cash value” lies in information theory, which is an application of thermodynamics. Some say thermodynamics is an application of information theory; but this chicken-egg argument does not matter for our purposes. We care only that they are homologous. We can treat bits the same way we treat energy.

I think I can prove this, but I certainly haven’t yet, and my attempt to do so will be the next installment.

Mar 19, 2005

Anyone who writes so intimately, not to say voyeuristically, about my personal life surely deserves a link or two. And a restraining order.

For the record, I wear my hair “cropped close” because I am bald.

Mar 3, 2005

By a feat of yogic discipline — or sloth; you choose — I managed, until now, to pass by the deaths of Hunter Thompson and Arthur Miller without dusting off my opinions of them for public consumption. I am neither man’s ideal reader, and my experience with Wordsworth and W.E. Henley has shown that it may be wiser to keep my own counsel in such cases. No eloquence can persuade the man who feels a sense of something more deeply interfused that rolls through all things that Wordsworth is a fatuous bore. Detailed analysis leaves the impenetrable head of the Invictus fancier bloodied but unbowed. I confine myself to saying that I simply lack the alpha model to appreciate these gentlemen, and that the people who have it might do better with a different model.

My favorite Hunter Thompson book is Hell’s Angels, his only book whose subject is not Hunter Thompson, which tells you all you need to know. As Cosh, his most interesting eulogist, pointed out, Thompson was one part John the Baptist and one part Jonathan Swift, Fear and Loathing in Las Vegas and “The Kentucky Derby Is Decadent and Depraved” being his Book of Revelation and Voyage to Brobdingnag, respectively. Revelation has its distinguished admirers, D.H. Lawrence for one, but as a computer programmer I object to dumping core, even in Thompson’s fine style, as a literary technique. Cosh thinks Thompson is immortal. I expect to outlive his reputation, provided I lay off the cigarettes.

Arthur Miller was a playwright. He married Marilyn Monroe. He will be read as long as there exist high school teachers charged with imparting the obvious to the oblivious, which is to say, forever.

But I wanted to talk about something else.

Why must people write of someone when he dies of whom they did not think to write while he was alive? Tom Wolfe I can see: while his obituary wasn’t very good, he was a friend of Thompson’s, and he presumably got paid. One would also expect Thompson’s long-time and only conceivable illustrator, Ralph Steadman, to say a few words. But what were the rest of you thinking?

The uncharitable explanation — monkey see, monkey scribble — has as usual a good deal in it. Thompson is a topic, Miller is a topic, and we are perennially starved for topics: such is the vital function of the newspaper. But there is something even more unpleasant at work — a ghoulish, misbegotten sense of duty, as if failing to note their passing means that our own will also go unremarked. Well, it will. Not to worry.

Occasionally the manner of exit is pertinent. Thompson’s, like Thompson, was histrionic; Mark Riebling and I’m sure many others have made the suitable remarks. Arthur Miller, on the other hand, went old, rich, and in his sleep, which didn’t seem to shut anybody up.

Obituaries fall loosely into three categories: encomium, scorn, and measured assessment. Encomium, at best, is too little too late; at worst it is breast-beating aimed at calling attention more to oneself than to the dear departed. (Many of the great fakes of English literature, like Lycidas, are eulogies. Does anybody believe that Milton gave a damn about Edward King?) Scorn is unsportsmanlike, its object no longer being around to answer back.

Measured assessment is worst of all. If you’ve ever flipped through a biographical reference book, say Harvey’s Oxford Companion to English Literature, you know what I mean. I open it at random to Prosper Mérimée (1803-1870) and read, “French novelist and dramatist, a member of the court of Napoleon III, was the author of admirable novels and short stories (‘Colomba’, ‘La Vénus d’Ille’, 1841; ‘Carmen’, which inspired Bizet’s opera, 1852), of plays (‘Théâtre de Clara Gazul’, 1825), of ‘La Jacquerie’ (feudal scenes in dialogue form), and of the historical novel, ‘Chronique de Charles IX’ (1829). His well-known ‘Lettres à une Inconnue’ display his ironic and critical temperament. He was a strong supporter of the innocence of ‘Libri the book-thief’ (q.v.).” I find this heart-breaking, down to the last q.v. Poor Mérimée! It’s like being buried twice.

Sir Paul Harvey, here, is just doing his job; measured assessment is not the sort of thing that anyone should do for fun. And let’s face it: Hunter Thompson and Arthur Miller had their literary deaths decades ago. You didn’t know them. You read a few of their books and you still can, any time. Do you honestly care that they’re dead? Why should you?

Feb 13, 2005

After months of diligent study I have finally become the person you edge away from at parties. At the last one I attended I began, after a few drinks, to dilate on alpha theory as usual. One of the guests suggested that I become a prophet for a new cult, which was certainly a lucky thing, and I want to thank him, because that joke would never have occurred to me on my own.

As with party-goers, so with blog-readers. The vast majority of my (former) readership has greeted alpha theory with some hostility but mostly indifference, and for excellent reason. It is a general theory, and humans have high sales resistance to general theories.

Generality offends in itself. Theories of human behavior apply to all humans, and that means you. If you’re anything like me, and you are, when you look at a graphed distribution of some human characteristic, no matter what it is, you harbor a secret hope that you fall at a tail, or better still, outside the distribution altogether. It is not that we are all above average, like the children of Lake Wobegon. Oh no: we are all extraordinary. Surely the statistician has somehow failed to account for me and my precious unique inviolable self. Nobody wants to be a data point. General theories, including alpha theory, often involve equations, and nobody likes an equation either.

General theories are also susceptible to error, the more susceptible the more general they are. An old academic joke about general surveys applies to general theories as well. I first heard it about Vernon Parrington’s Main Currents in American Thought, a once-common college text, but it has made the rounds in many forms. Whichever English professor you asked about Parrington, he would praise the book, adding parenthetically, “Of course he knows nothing about my particular subject.”

Someone seeking to explain a wide range of apparently disparate phenomena usually overlooks a few facts. By the time these are brought to his attention he is too heavily invested in the theory to give it up. He hides or explains away the offending facts and publishes his theory anyway, to world-wide yawns.

The gravest danger of a general theory is that it might be true — more precisely, that you may come to believe it. Believing a new general theory is a mighty expensive proposition. You’ve built up a whole complicated web of rules that have worked for you in the past, and now you have to go back and reevaluate them all in light of this new theory. This is annoying, and a gigantic energy sink besides. General theories, including alpha theory, tend to attract adherents from among the young, who have less to throw away — lower sunk costs, as the economists say. For most of us dismissing a new theory out of hand is, probabilistically, a winning strategy. Some might call this anti-intellectualism: I call it self-preservation.

(I will not go so far as to claim that alpha theory predicts its own resistance. Down that road lies madness. “You don’t believe in Scientology? Of course you don’t. Scientology can explain that! Wait! Where are you going?”)

General theorists often insist that anyone who disagrees with their theory find a flaw in its derivation. I have been known to take this line myself, and it is utterly unreasonable. If someone showed up at my door with a complicated theory purporting to demonstrate some grotesque proposition, say, that cannibalism conduces to human survival, and demanded that I show where he went wrong, I’d kick him downstairs. Yes, they laughed at Edison, they laughed at Fulton. They also laughed at a hundred thousand crackpot megalomaniacs while they were at it.

So if you still want alpha theory to dry up and blow away, I understand. No hard feelings. And if you’ve written me off as some kind of nut, well, could be. The thought has crossed my mind. I can assure you only that I ardently desire to be delivered from my dementia. It would do wonders for my social life.

(Addendum: I want to make it perfectly clear that, although I have written about alpha theory for several months now, I did not invent it. I am not nearly intelligent enough to have invented it. That honor belongs to “Bourbaki,” well-known to the readers of the comments. Me, I’m a sort of combination PR man and applied alpha engineer. Oh wait — there aren’t any applications yet. Don’t worry, there will be.)

Jan 6, 2005

What’s alpha all about, Alfie? Why are you boring us with this?

The great biologist E.O. Wilson wrote a little book called Consilience, in which he argued that it was past time to apply the methods of science — notably quantification — to fields traditionally considered outside its purview, like ethics, politics, and aesthetics. Any blog reader can see that arguments on these subjects invariably devolve into pointless squabbling because no base of knowledge and no shared premises exist. Alpha theory is a stab at Wilson’s program.

What kind of science could possibly apply to human behavior?

Thermodynamics. Living systems can sustain themselves only by generating negative entropy. Statistical thermodynamics is a vast and complex topic, and one can’t very well give a course in it on a blog, but here’s a good introduction. (Requires RealAudio.)

Don’t we have enough ethical philosophies?

Too many. The very existence of competing “schools” is the best evidence of failure. Of course science has competing theories as well, but it also has a large body of established theory that has achieved consensus. No astronomer quarrels with Kepler’s laws of planetary orbits. No biologist quarrels with natural selection. Philosophers and aestheticians quarrel over everything. Leibniz, who tried to develop a universal truth machine, wrote someplace that his main purpose in doing so was to shut people up. I see his point.

Not a chance. Anyway, what’s alpha got that we don’t have already?

A universal maximization function derived openly from physical laws, for openers. Two of them. The first is for the way all living systems ought to behave. The second is for the way they do behave. To put the matter non-mathematically, every living system maximizes its sustainability by following the first equation. But in practice, it is impossible to follow directly. Living beings aren’t mathematical demons and can’t calculate at the molecular level. They act instead on a model, a simplification. That’s the second equation. If the model is accurate, the living being does well for itself. If not, not.
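A toy illustration of the two equations (the payoff function, the linear model, and every number here are my own inventions for illustration, not part of the theory):

```python
import math

def true_payoff(x):
    # Stands in for the first equation: the actual consequence of action x.
    return math.sin(x) - 0.1 * x * x

def model_payoff(x):
    # Stands in for the second equation: a crude model of the same terrain.
    return 0.5 * x

candidates = [i / 10 for i in range(-30, 31)]

# The organism maximizes its model, not reality...
chosen = max(candidates, key=model_payoff)
# ...while a hypothetical demon would maximize reality itself.
best = max(candidates, key=true_payoff)

print(chosen, true_payoff(chosen))  # the model says: go right, all the way
print(best, true_payoff(best))      # reality rewards a more moderate choice
```

The gap between the two outcomes is the cost of an inaccurate model; shrink the model’s error and the gap closes.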

Sounds kinda like utilitarianism.

Not really. But there are similarities. Like utilitarianism, alpha theory is consequentialist, maintaining that actions are to be evaluated by their results. (Motive, to answer a question in the previous comment thread, counts for nothing; but then why should it?) But utilitarianism foundered on the problem of commensurable units. There are no “utiles” by which one can calculate “the greatest happiness for the greatest number.” This is why John Stuart Mill, in desperation, resorted to “higher pleasures” and “lower pleasures,” neatly circumscribing his own philosophy. Alpha theory provides the unit.

Alpha also accounts for the recursive nature of making decisions, which classical ethical theories ignore altogether. (For example, short-circuiting the recursive process through organ harvesting actually reduces the fitness of a group.) Most supposed ethical “dilemmas” are arid idealizations, because they have only two horns: the problem has been isolated from its context and thus simplified. But action in the real world is not like that; success, from a thermodynamic perspective, requires a continuous weighing of the alternatives and a continuous adjustment of one’s path. Alpha accounts for this with the concept of strong and weak solutions and filtrations. Utilitarianism doesn’t. Neither does any other moral philosophy.

That said, Jeremy Bentham would, I am sure, sympathize with alpha theory, were he alive today.

You keep talking about alpha critical. Could you give an example?

Take a live frog. If we amputate its arm, what can we say about the two separate systems? Our intuition says that if the frog recovers (repairs and heals itself) from the amputation, it is still alive. The severed arm will not be able to fully repair damage and heal. Much of the machinery necessary to coordinate processes and manage the requirements of the complicated arrangement of cells depends on other systems in the body of the frog. The system defined by the arm will rapidly decay below alpha critical. Now take a single cell from the arm and place it in a nutrient bath. Draw a volume around this cell and calculate alpha again. This entity, freed from the positive entropy of the decaying complexity of the severed arm, will live.

What about frogs that can be frozen solid and thawed? Are they alive while frozen? Clearly there is a difference between freezing these frogs and freezing a human. It turns out that cells in these frogs release a sugar that prevents the formation of ice crystals. Human cells, lacking this sugar, shear and die. We can use L’Hôpital’s Rule to calculate alpha as the numerator and denominator both approach some limiting value. As we chart alpha in our two subjects, there will come a point where the shearing caused by ice crystal formation will cause the positive entropy (the denominator) in the human subject to spike, driving alpha through alpha critical. He will die. The frog, on the other hand, will approach a state of suspended animation. Of course, such a state severely reduces the frog’s ability to adapt.
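For reference, the rule being invoked, in its standard form: when numerator and denominator both vanish (or both blow up), the limit of their ratio can be computed from the ratio of their derivatives.

```latex
\[
\lim_{x \to c} f(x) = \lim_{x \to c} g(x) = 0 \;\text{ (or } \pm\infty\text{)}
\quad\Longrightarrow\quad
\lim_{x \to c} \frac{f(x)}{g(x)} = \lim_{x \to c} \frac{f'(x)}{g'(x)},
\]
```

provided the limit on the right exists and \(g'\) is nonzero near \(c\).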

Or take a gas cloud. “You know, consider those gas clouds in the universe that are doing a lot of complicated stuff. What’s the difference [computationally] between what they’re doing and what we’re doing? It’s not easy to see.” (Stephen Wolfram, A New Kind of Science.)

Draw a three-dimensional mesh around the gas cloud and vary the grid spacing to calculate alpha. Do the same for a living system. No matter how the grid is varied, the alpha of the random particles of the gas cloud will not remotely match the alpha of a living system.

Enough with the frogs and gas clouds. Talk about human beings.

Ah yes. Some of my commenters are heckling me for “cash value.” I am reminded of a blessedly former business associate who interrupted a class in abstruse financial math to ask the professor, “Yeah. But how does this get me closer to my Porsche?”

The first thing to recognize is that just about everything that you now believe to be wrong probably is wrong, in alpha terms. Murder, robbery, and the like are obviously radically alphadystropic, because alpha states that the inputs always have to be considered. (So does thermodynamics.) If this weren’t true you would have prima facie grounds for rejecting the theory. Evolution necessarily proceeds toward alpha maximization. Human beings have won many, many rounds in the alpha casino. Such universal rules as they have conceived are likely to be pretty sound by alpha standards.

These rules, however, are always prohibitions, never imperatives. This too jibes with alpha theory. Actions exist that are always alphadystropic; but no single action is always alphatropic. Here most traditional and theological thinking goes wrong. If such an action existed, we probably would have evolved to do it — constantly, and at the expense of all other actions. If alpha theory had a motto, it would be there are no universal strong solutions. You have to use that big, expensive glucose sink sitting in that thickly armored hemisphere between your ears. Isaiah Berlin’s concept of “negative liberty” fumbles toward this, and you “cash value” types ought to be able to derive a theory of the proper scope of law without too much trouble.

Still more “cash value” lies in information theory, which is an application of thermodynamics. Some say thermodynamics is an application of information theory; but this chicken-egg argument does not matter for our purposes. We care only that they are homologous. We can treat bits the same way we treat energy.
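One concrete bridge between the two, for the “cash value” crowd, is Landauer’s principle: erasing a single bit of information dissipates at least kT ln 2 of energy as heat. The figure is standard physics, nothing special to alpha theory; a quick back-of-the-envelope:

```python
import math

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln 2.
k_B = 1.380649e-23   # Boltzmann constant, joules per kelvin (exact, SI)
T = 300.0            # room temperature, kelvin

energy_per_bit = k_B * T * math.log(2)
print(f"{energy_per_bit:.2e} joules per bit at {T:.0f} K")
# On the order of 3e-21 J: negligible in practice, but never zero,
# which is why bits can be treated like energy in the first place.
```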

Now the fundamental problem of human action is incomplete information. The economists recognized this over a century ago but the philosophers, as usual, have lagged. To put it in alpha terms, they stopped incorporating new data into their filtration around 1850.

The alpha equation captures the nature of this problem. Its numerator is new information plus the negative entropy you generate from it; its denominator is positive entropy, what you dissipate. Numerator-oriented people are always busy with the next new thing; they consume newspapers and magazines in bulk and seem always to have forgotten what they knew the day before yesterday. This strategy can work — sometimes. Denominator-oriented people tend to stick with what has succeeded for them and rarely, if ever, modify their principles in light of new information. This strategy can also work — sometimes. The great trick is to be an alpha-oriented person. The Greeks, as so often, intuited all of this, lacking only the tools to formalize it. It’s what Empedocles is getting at when he says that life is strife, and what Aristotle is getting at when he says that right action lies in moderation.

Look around. Ask yourself why human beings go off the rails. Is it because we are perishing in an orgy of self-sacrifice, as the Objectivists would have it? Is it because we fail to love our neighbor as ourselves, as the Christians would have it? Or is it because we do our best to advance our interests and simply botch the job?

(Update: Marvin of New Sophists — a Spinal Tap joke lurks in that title — comments at length. At the risk of seeming churlish, I want to correct one small point of his generally accurate interpretation. He writes that “alpha is the negative entropy generated by a system’s behavioral strategy.” Not exactly. Alpha is the ratio between enthalpy plus negative entropy, in the numerator, and positive entropy, in the denominator. It is not measured in units of energy: it is dimensionless. That’s why I say life is a number, rather than a quantity of energy.)
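In symbols, the corrected definition would read something like this (the notation is mine, not Bourbaki’s, and all three terms must be expressed in commensurable units for the ratio to come out dimensionless):

```latex
\[
\alpha \;=\; \frac{\Delta H + \left|\Delta S^{-}\right|}{\Delta S^{+}}
\]
```

where \(\Delta H\) is enthalpy, \(\Delta S^{-}\) the negative entropy generated, and \(\Delta S^{+}\) the positive entropy dissipated.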

Dec 7, 2004

Although alpha itself is simple enough at the molecular level, the derivation is complicated, its exposition has been spaced out over several posts and, alas, several months, and a summary is in order. Besides, the girlfriend wants one. Now 100% formula-free!

In Part 1: Starting From Zero
The history of philosophy, ethics in particular, was reviewed and found wanting. It continues to stink of vitalism and anthropocentrism, despite the fact that the idea of a “vital force” was thoroughly discredited by the 1850s. No ethics to date has managed to improve on moral intuition, or explain it either.

What fun is a game with no rules? There must be some common structure to all living systems, not just human beings, and based on its track record, it is science that will likely discover it.

In Part 2: Rules — The Laws of Thermodynamics
We sought rules that are precise and objective without indulging dogmatism. The laws of thermodynamics are the most general we know. They are independent of any hypothesis concerning the microscopic nature of matter, and they appear to hold everywhere, even in black holes. (Stephen Hawking lost a bet on this.) Thus they seemed a good place to start. We postulated a cube floating through space and called it Eustace, in an ill-advised fit of whimsy. A little algebraic manipulation of the Gibbs-Boltzmann formulation of the Second Law produced a strange number we called alpha, which turns out to be the measure of sustainability for any Eustace, living or dead, on Earth or in a galaxy far, far away.

In Part 3: Scoring — The Alpha Casino
We laid out a scoring system for Eustace built entirely on mathematics using alpha, a dimensionless, measurable quantity. Alpha measures the consequences of energy flux. All is number. Along the way we explained, via Bernoulli trials, how complexity emerges from the ooze. The dramatic effects of probability biases of a percent or less are dwarfed by the even more dramatic biases afforded by catalysts and enzymes that often operate in the 10^8 to 10^20 range.
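The arithmetic of a small bias is easy to check. A one-percent edge per trial, compounded over a hundred trials (the numbers are chosen purely for illustration):

```python
# Probability of winning 100 consecutive Bernoulli trials:
fair = 0.50 ** 100     # an unbiased coin
biased = 0.51 ** 100   # a coin with a one-percent edge

ratio = biased / fair  # equals 1.02 ** 100
print(round(ratio, 1)) # ~7.2: the tiny edge compounds to a sevenfold advantage
```

Against that, a catalyst operating at a bias of 10^8 or more per event is not an edge but a landslide.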

In Part 4: Challenges — Gaussian and Poisson Randomness
We introduced two general (but not exhaustive) classes of random processes. Gaussian (continuous) randomness can be dealt with by a non-anticipating strategy of continuous adjustment. Relatively primitive devices like thermostats manage this quite nicely. Poisson (discontinuous) randomness is a fiercer beast. It can, at best, only be estimated via thresholds. Every Eustace, to sustain itself, must constantly reconfigure in light of the available information, or filtration. We introduced the term alpha model to describe this process.
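The contrast can be sketched in a few lines of toy code (every parameter invented for illustration): a thermostat-style proportional correction absorbs the Gaussian drift without difficulty, while the rare Poisson shock blows straight past it and can only be caught by a threshold.

```python
import random

random.seed(0)

setpoint, temp = 20.0, 20.0
alarm_threshold = 5.0
alarms = 0

for step in range(1000):
    # Gaussian randomness: continuous drift, handled by a
    # non-anticipating proportional correction (the thermostat).
    temp += random.gauss(0.0, 0.2)
    temp -= 0.5 * (temp - setpoint)

    # Poisson randomness: a rare, discontinuous shock (~1% per step).
    if random.random() < 0.01:
        temp += random.choice([-8.0, 8.0])

    # No continuous controller anticipates the shock; a threshold can
    # only detect it after the fact and force a reconfiguration.
    if abs(temp - setpoint) > alarm_threshold:
        alarms += 1
        temp = setpoint

print(alarms)  # count of shocks caught by the threshold
```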

In Part 5: Strategy — Strong and Weak Solutions
Increasingly complex organisms have evolved autonomous systems that regulate blood pressure and pH while developing threshold-based systems that adapt filtrations to cope with punctuated events like, say, the appearance of a predator. We introduced strong and weak solutions and explained the role of each. Weak solutions do not offer specific actionable paths, but they do cull our possible choices. Strong solutions are actionable paths, but a strong solution that is not adapted to the available filtration will likely be sub-optimal. Successful strong solutions can cut both ways. Paths that served us well in the past, if not continuously adapted, can grow confining. An extreme example, in human terms, is dogmatism. Alpha models must adapt to changing filtrations. Each generation must question the beliefs, traditions, and fashions of the generations that preceded it.

In Part 6: The Meaning of Life
We finally arrived at the universal maximization function. We introduced the concept of alpha*, or estimated alpha, and epsilon, the difference between estimated and actual alpha. Behavior and ethics are defined by alpha* and alpha, respectively. All living things maximize alpha*, and all living things succeed insofar as alpha* approximates alpha. From here we abstract the three characteristics of all living things. They can generate alpha (alphatropic). They can recognize and respond to alpha (alphaphilic). And they can calibrate responses to alpha to minimize epsilon (alphametric).

That’s it. An ethics, built up from thermodynamics and mathematics, in 700 words. The entire derivation from premise to conclusion was presented. Can anyone find fault with the sums?

(Update: Jesus von Einstein comments.)