God of the Machine – Culling my readers to a manageable elite since 2002.
Dec 04, 2004
 

Every art and every inquiry, and similarly every action and pursuit, is thought to aim at some good; and for this reason the good has rightly been declared to be that at which all things aim.
–Aristotle

The moment has arrived. We’ve invoked the laws of thermodynamics, probability theory, and the law of large numbers; we can now state the Universal Law of Life. We define a utility function for all living systems:

max E[(α_t – α_c) | F_{t-1}]

E is expected value. α is alpha, the equation for which I gave in Part 2. α_c, or alpha critical, is a direct analogue to wealth_c in Part 5. If you’ve ever played pickup sticks or Jenga, you know there comes a point in the game where removing one more piece causes the whole structure to come tumbling down. So it is for any Eustace. At some point the disruptive forces overwhelm the stabilizing forces and it all falls down. This is alpha critical, and when you fall below it, it’s game over.

F is the filtration, and F_{t-1} represents all of the information available to Eustace as of t – 1. t is the current time index.

You will note that this is almost identical to the wealth maximization equation given in Part 5. We have simply substituted one desirable, objective, measurable term, alpha, for another, wealth. In Alphabet City or on Alpha Centauri, living systems configured to maximize this function will have the greatest likelihood of survival. Bacteria, people, and as-yet undiscovered life forms on Rigel 6 all play the same game.
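
For the programmers in the audience, here is a toy sketch of the maximization in Python. Everything in it is invented for illustration: the candidate responses, their payoff distributions, the alpha figures. The point is only the shape of the computation: estimate E[(α_t – α_c) | F_{t-1}] for each available response and take the largest.

```python
import random

# Illustrative numbers only: a hypothetical Eustace with a current alpha of 3.0
# and an alpha-critical of 1.0, choosing among made-up candidate responses.
ALPHA_NOW = 3.0
ALPHA_CRITICAL = 1.0

# Each response yields an uncertain change in alpha, modeled here as a
# (mean, spread) pair conditioned on what Eustace already knows (the filtration).
candidates = {
    "hunker down": (0.1, 0.2),
    "chase free energy": (0.8, 1.5),
    "do nothing": (0.0, 0.1),
}

def expected_margin(mean, spread, trials=100_000):
    """Monte Carlo estimate of E[alpha_t - alpha_critical | F_{t-1}]."""
    total = 0.0
    for _ in range(trials):
        delta = random.gauss(mean, spread)
        total += (ALPHA_NOW + delta) - ALPHA_CRITICAL
    return total / trials

best = max(candidates, key=lambda c: expected_margin(*candidates[c]))
print("Eustace's pick:", best)
```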

To maximize its sustainability, Eustace must be:

  • alphatropic: can generate alpha from available free energy. Living organisms are alphatropic at every scale. They are all composed of a cell or cells that are highly coordinated, down to the organelles. The thermodynamic choreography of the simplest virus, in alpha terms, is vastly more elaborate than that of the most sophisticated machined devices. (This is an experimentally verifiable proposition, and alpha theory ought to be subject to verification.)
  • alphametric: calibrates “appropriate” responses to fluctuations in alpha. (I will come to what “appropriate” means in a moment.) In complex systems many things can go wrong — by wrong I mean alphadystropic. If a temperature gauge gives an erroneous reading in an HVAC system, the system runs inefficiently and is more prone to failure. Any extraneous complexity that does not increase alpha has a thermodynamic cost that weighs against it. Alpha theory in no way states that living systems a priori know the best path. It states that alpha and survivability are directly correlated.
  • alphaphilic: can recognize and respond to sources of alpha. A simple bacterium may use chemotaxis to follow a maltose gradient. Human brains are considerably more complex and agile. Our ability to aggregate data through practice, to learn, allows us to model a new task so well that eventually we may perform it subconsciously. Think back to your first trip to work and how carefully you traced your route. Soon after there were probably days when you didn’t even remember the journey. Information from a new experience was collected, and eventually, the process was normalized. We accumulate countless Poisson rules and normalize them throughout our lives. As our model grows, new information that fits cleanly within it is easier to digest than information that challenges or contradicts it. (Alpha theory explains, for example, resistance to alpha theory.)

Poisson strategies or “Poisson rules” are mnemonics, guesses, estimates; these will inevitably lead to error. Poisson rules that are adapted to the filtration will be better than wild guesses. The term itself is merely a convenience. It in no way implies that all randomness fits into neat categories but rather emphasizes the challenges of discontinuous random processes.

There are often many routes to get from here to there. Where is there? The destination is always maximal alpha given available free energy. To choose this ideal path, Eustace would need to know every possible conformation of energy. Since this is impossible in practice, Eustace must follow the best path based on his alpha model. Let’s call this alpha quantity α* (alpha star). We can now introduce an error term, ε (epsilon).

ε = |α – α*|

Incomplete or incorrect information increases Eustace’s epsilon; more correct or accurate information decreases it. “Appropriate” action is based on a low-epsilon model. All Eustaces act to maximize alpha star: they succeed insofar as alpha star maps to alpha.
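
A toy illustration, with invented numbers, of how epsilon separates the sheep from the goats: two Eustaces rank the same candidate paths, one with a well-calibrated alpha model and one with a badly calibrated one. Both maximize alpha star; only the low-epsilon model reliably maximizes alpha.

```python
# The "true" alpha of each path is unknowable to Eustace; both it and the two
# models below are made-up figures for illustration.
true_alpha = {"path A": 2.0, "path B": 3.5, "path C": 1.2}

good_model = {"path A": 2.1, "path B": 3.3, "path C": 1.0}   # low epsilon
bad_model  = {"path A": 3.0, "path B": 1.5, "path C": 2.8}   # high epsilon

for name, model in [("good model", good_model), ("bad model", bad_model)]:
    choice = max(model, key=model.get)       # Eustace always maximizes alpha star
    epsilon = sum(abs(true_alpha[p] - model[p]) for p in model) / len(model)
    print(f"{name}: mean epsilon {epsilon:.2f}, picks {choice}, "
          f"realized alpha {true_alpha[choice]}")
```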

This is an extravagant claim, which I may as well put extravagantly: Alpha star defines behavior, and alpha defines ethics, for all life that adheres to the laws of thermodynamics. Moral action turns out to be objective after all. All psychological intuitions have been rigorously excluded — no “consciousness,” no “self,” no “volition,” and certainly no “soul.” Objective measure and logical definition alone are the criteria for the validity of alpha. There is, to be sure, nothing intuitively unreasonable in the derivation; but the criterion for mathematical acceptability is logical self-consistency rather than simply reasonableness of conception. Poincaré said that had mathematicians been left prey to abstract logic, they would never have gone beyond number theory and geometry. It is nature, in our case thermodynamics, that opens mathematics as a tool for understanding the world.

Alpha proposes a bridge that links the chemistry and physics of individual molecules to macromolecules to primitive organisms all the way through to higher forms of life. Researchers and philosophers can look at the same questions they always have, but with a rigorous basis of reference. To indulge in another computer analogy, when I program in a high-level language I ultimately generate binary output, long strings of zeros and ones. The computer cares only about this binary output. Alpha is the binary output of living systems.

The reader will discover that alpha can reconstitute the mechanisms that prevailed in forming the first large molecules — the molecules known to be the repositories of genetic information in every living cell throughout history. In 1965 the work of Jacob, Monod, and Gros demonstrated the role of messenger ribonucleic acids in carrying information stored in deoxyribonucleic acid that is the basis of protein synthesis. Then American biologists Tom Cech and Sidney Altman earned the Nobel Prize in Chemistry in 1989 for showing that RNA molecules possess properties that were not originally noticed: they can reorganize themselves without any outside intervention.

All of this complexity stems from the recursive application of a simple phenomenon. Alpha’s story has never changed and, so long as the laws of thermodynamics continue to hold, it never will. But recursive simplicity can get awfully complicated, as anyone who’s ever looked at a fractal can tell you. Remember that t and t – 1 in the maximization function change constantly; it’s a fresh Bernoulli trial all the time. Human beings calculate first-order consequences pretty well, second-order consequences notoriously badly, and the third order is like the third bottle of wine: all bets are off. This is why we need alpha models, and why we can maximize only alpha star, not alpha itself. Alpha is not a truth machine. It is one step in the process of abstracting the real fundamentals from all the irrelevant encumbrances in which intuition tangles us. There is a lot of moral advice to be derived from alpha theory, which I will get around to offering, but for now: Look at your alpha model. (Objectivists will recognize this as another form of “check your premises.”)

But for those of you who want some cash value right away — and I can’t blame you — the definition of life falls out immediately from alpha theory. Alpha is a dimensionless number. Living systems, even primitive ones, have an immensely higher such number than machines. Life is a number. (We don’t know its value of course, but this could be determined experimentally.) Erwin Schrödinger, in his vastly overrated book What Is Life?, worried this question for nearly 100 pages without answering it. Douglas Adams, in The Hitchhiker’s Guide to the Galaxy, claimed that the meaning of life is 42. He got a hell of a lot closer than Schrödinger.

In the next few posts, we’ll subsume Darwin into a much more comprehensive theory and we’ll consolidate — or as E.O. Wilson would say, consiliate — the noble sciences into a unified field in a way that might even make Aristotle proud.

Nov 19, 2004
 

Part 1: Starting from Zero
Part 2: Meet Eustace
Part 3: Bernoulli Trials
Part 4: Why Randomness is Not All Equally Random

Have you ever been told what to do with your life? A particular college, major, grad school, career? Isn’t it annoying, that someone would presume to plot out your life for you, as if you had no say in the matter? Probability theory has a term for this (the plotting, not the annoyance): strong solution.

A strong solution is any specified trajectory for a random process. In our coin flipping game it would be the realized sequence of heads and tails. Of course Eustace can’t know such a path in advance. The best he can do is to construct a distribution of possible outcomes. This distribution is a weak solution, which is defined, not by its path, which is unknown, but only by the moments of a probability distribution. If Eustace knows a random process is stationary, he has confidence that the moments of the process will converge to the same values every time. The coin flipping game, for instance, is stationary: its long term average winnings, given a fair coin, will always converge to zero. Looking into an uncertain future, Eustace is always limited to a weak solution: it is specified by the expectations, or moments, of the underlying random process. The actual path remains a mystery.

So far we haven’t given poor Eustace much help. A weak “solution” is fine for mathematics; but being a mere cloud of possibilities, it is, from Eustace’s point of view, no solution at all. (A Eustace entranced by the weak solution is what we commonly call a perfectionist.) Sooner rather than later, he must risk a strong solution. He must chart a course: he must act.

Well then, on what basis? Probability theory has a term for this too. The accumulated information on which Eustace can base his course is called a filtration. A filtration represents all information available to Eustace when he chooses a course of action. Technically, it is an increasing family of sigma-algebras, an ever-growing collection of the events Eustace can distinguish. The more of the available filtration Eustace uses, the better he does in the casino.

In the coin flipping game, Eustace’s filtration includes, most obviously, the record of the previous flips. Of course in this case the filtration doesn’t help him predict the next flip, but it does help him predict his overall wins and losses. If Eustace wins the first flip (t=1), he knows that after the next flip (t=2), he can’t be negative. This is more information than he had when he started (t=0). If the coin is fair, Eustace has an equal likelihood of winning or losing $1. Therefore, the expected value of his wealth at any point is simply what he has won up to that point. The past reveals what Eustace has won. The future of this stationary distribution is defined by unchanging moments. In a fair game, Eustace can expect to make no money no matter how lucky he feels.

His filtration also includes, less obviously, the constraints of the game itself. Recall that if he wins $100 he moves to a better game and if he loses $100 he’s out in the street. To succeed he must eliminate paths that violate known constraints; a path to riches, for instance, that requires the casino to offer an unlimited line of credit is more likely a path to the poorhouse.

We can summarize all of this with two simple equations:

E(wealth_t | F_{t-1}) = wealth_{t-1} (first moment)
variance(wealth_t | F_{t-1}) = 1 (second moment)

The expected wealth at any time t is simply the wealth Eustace has accumulated up until time t-1. E is expected value. t is commonly interpreted as a time index. More generally, it is an index that corresponds to the size of the filtration, F. F accumulates the set of distinguishable events in the realized history of a random process. In our coin game, the outcome of each flip adds information to Eustace’s filtration.
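
For the skeptical, a minimal simulation of the fair $1 coin game bears the two moments out; the starting stake and number of flips below are arbitrary.

```python
import random

# Empirical check of the two moments: starting from a wealth of 100, the mean
# final wealth stays at 100, and each fair $1 flip contributes a variance of 1.
def simulate(start=100, flips=200, trials=20_000):
    final = []
    for _ in range(trials):
        w = start
        for _ in range(flips):
            w += random.choice((1, -1))
        final.append(w)
    mean = sum(final) / trials
    var = sum((w - mean) ** 2 for w in final) / trials
    print(f"mean final wealth  ~ {mean:.2f}  (expected {start})")
    print(f"variance of wealth ~ {var:.0f}  (expected {flips}: 1 per flip)")

simulate()
```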

We have also assumed that when Eustace’s wealth reaches zero he must stop playing. Game over. There is always a termination point, though it need not always be zero; maybe Eustace needs to save a few bucks for the bus ride home. Let’s give this point a name; call it wealth_c (c for critical). Introducing this term into our original equation for expected wealth, we now have:

max E(wealth_t – wealth_c | F_{t-1})

His thermodynamic environment works the same way. In the casino, Eustace can’t blindly apply any particular strong solution — an a priori fixed recipe for a particular sequence of hits and stands at the blackjack table. Each card dealt in each hand will, or should, influence his subsequent actions in accordance with the content of his filtration. The best strategy is always the one with max E(wealth_t | F_{t-1}) at each turn. In this case, F_{t-1} represents the history of dealt cards.

As Eustace graduates to higher levels of the casino, the games become more complex. Eustace needs some way of accommodating histories: inflexibility is a certain path to ruin. Card-counters differ from suckers at blackjack only by employing a more comprehensive model that adapts to the available filtration. They act on more information — the history of the cards dealt, the number of decks in the shoe, the number of cards that are played before a reshuffle. By utilizing all the information in their filtration, card counters can apply the optimal strong solution every step of the way.

In the alpha casino, Eustace encounters myriad random processes. His ability to mediate the effects of these interactions is a direct consequence of the configuration of his alpha model. The best he can hope to do is accommodate as much of the filtration into this model as he can to generate the best possible response. Suboptimal responses will result in smaller gains or greater losses of alpha. We will take up the policy implications, as one of my readers put it, of all this in Part 6.

Disclaimer: Although I use the language of agency — to know, to act, to look into the future — nothing in this discussion is intended to impute agency, or consciousness, or even life, to Eustace. One could speak of any inanimate object with a feedback mechanism — a thermostat, a coffeemaker — in exactly the same way. Unfortunately English does not permit discussing these matters in any other terms. Which is why I sometimes want to run shrieking back to my equations. You may feel otherwise.

Nov 05, 2004
 

Part 1
Part 2
Part 3

Watch a drop of rain trace down a window-pane. Being acquainted with gravity, you might expect it to take a perfectly straight path, but it doesn’t. It zigs and zags, so its position at the bottom of the pane is almost never a plumb drop from where it began.

Or graph a series of Bernoulli trials. Provided the probability of winning is between 0 and 1, the path, again, will veer back and forth unpredictably.

You are observing Gaussian randomness in action. Gaussian processes of this kind have continuous sample paths, meaning, approximately, that you can draw them without lifting your pencil from the paper.

The most famous and widely studied of all Gaussian processes is Brownian motion, discovered by the botanist Robert Brown in 1827, which has had a profound impact on almost every branch of science, both physical and social. Its first important applications were made shortly after the turn of the last century by Louis Bachelier and Albert Einstein.

Bachelier wanted to model financial markets; Einstein, the movement of a particle suspended in liquid. Einstein was looking for a way to measure Avogadro’s number, and the experiments he suggested proved to be consistent with his predictions. Avogadro’s number turned out to be very large indeed — a teaspoon of water contains about 2 × 10^23 molecules.
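
The arithmetic behind that figure, assuming a teaspoon holds about 5 mL (5 grams) of water:

```python
# Rough count of water molecules in a teaspoon. The teaspoon volume is an
# assumption; Avogadro's number and the molar mass of water are standard.
AVOGADRO = 6.022e23          # molecules per mole
grams_per_teaspoon = 5.0     # assumed: ~5 mL of water
molar_mass_water = 18.0      # g/mol

molecules = grams_per_teaspoon / molar_mass_water * AVOGADRO
print(f"{molecules:.1e} molecules")   # ~1.7e23, on the order of 2 x 10^23
```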

Bachelier hoped that Brownian motion would lead to a model for security prices that would provide a sound basis for option pricing and hedging. This was finally realized, some seventy years later, by Fischer Black, Myron Scholes and Robert Merton. It was Bachelier’s idea that led to the discovery of non-anticipating strategies for tackling uncertainty. Black et al. showed that if a random process is Gaussian, it is possible to construct a non-anticipating strategy to eliminate randomness.

Theory and practice were reconciled when Norbert Wiener directed his attention to the mathematics of Brownian motion. Among Wiener’s many contributions is the first proof that Brownian motion exists as a rigorously defined mathematical object, rather than merely as a physical phenomenon for which one might pose a variety of models. Today Wiener process and Brownian motion are considered synonyms.

Back to Eustace.

Eustace plays in an uncertain world, his fortunes dictated by random processes. For any Gaussian process, it is possible to tame randomness without anticipating the future. Think of the quadrillions of Eustaces floating about, all encountering continuous changes in pH, salinity and temperature. Some will end up in conformations that mediate the disruptive effects of these Gaussian fluctuations. Such conformations will have lower overall volatility, less positive entropy, and, consequently, higher alpha.

Unfortunately for Eustace, all randomness is not Gaussian. Many random processes have a Poisson component as well. Unlike continuous Gaussian processes, disruptive Poisson processes exhibit completely unpredictable jump discontinuities. You cannot draw them without picking up your pencil.
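
A toy comparison of the two kinds of path, with made-up parameters: the Gaussian walk creeps along by many small increments, while the Poisson-driven walk sits still and then leaps.

```python
import random

# Illustration only: compare the largest single step of a Gaussian walk with
# that of a rare-jump (Poisson-style) walk over the same number of steps.
random.seed(2)
STEPS = 10_000

gauss_steps = [random.gauss(0, 0.01) for _ in range(STEPS)]
poisson_steps = [(random.choice((-1, 1)) if random.random() < 0.001 else 0.0)
                 for _ in range(STEPS)]   # rare, unit-sized jumps

print("largest Gaussian step:", round(max(abs(s) for s in gauss_steps), 3))
print("largest Poisson step: ", round(max(abs(s) for s in poisson_steps), 3))
```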

Against Poisson events a non-anticipating strategy, based on continuous adjustment, is impossible. Accordingly they make trouble for all Eustaces, even human beings. Natural Poisson events, like tornadoes and earthquakes, cost thousands of lives. Financial Poisson events cost billions of dollars. The notorious hedge fund Long-Term Capital Management collapsed because of a Poisson event in August 1998, when the Russian government announced that it intended to default on its sovereign debt. Bonds that were trading around 40 sank within minutes to single digits. LTCM’s board members, ironically, included Robert Merton and Myron Scholes, the masters of financial Gaussian randomness. Yet even they were defeated by Poisson.

All hope is not lost, however, since any Poisson event worth its salt affects its surroundings by generating disturbances before it occurs. Eustaces configured to take these hints will have a selective advantage. Consider a moderately complex Eustace — a wildebeest, say. For wildebeests, lions are very nasty Poisson events; there are no half-lions or quarter-lions. But lions give off a musky stink and sometimes rustle in the grass before they pounce, and wildebeests that take flight on these signals tend to do better in the alpha casino than wildebeests that don’t.

Even the simplest organisms develop anti-Poisson strategies. For example, pH levels and salinity are mediated by buffers while capabilities like chemotaxis are a response to Poisson dynamics.

A successful Eustace must mediate two different aspects of randomness: Gaussian and Poisson. Gaussian randomness continuously generates events while Poisson randomness intermittently generates events. On the one hand, Gaussian strategies can be adjusted constantly; on the other, a response to a Poisson event must be based on thresholds for signals. Neither of these configurations is fixed. Eustace is a collection of coupled processes. So in addition to external events, some processes may be coupled to other internal processes that lead to configuration changes.

We will call this choreographed ensemble of coupled processes an alpha model. Within the context of our model, we can see a path to the tools of information theory where the numerator for alpha represents stored information and new information, and the denominator represents error and noise. The nature of this path will be the subject of the next installment.

Oct 12, 2004
 

A.C. Douglas writes, not unreasonably:

Listen up!, son. Are you coming back to the cultural blogosphere any time soon, or not? I mean with like, you know, cultural postings, not that tiresome philosophic shit, on an at least biweekly (every two weeks) basis? If not, I’m gonna de-list your ass, much as I love you.

A.C., I suspect, is not alone in his sentiments. It is only fair to my dwindling readership that I explain exactly what I’m about.

I used to write a good deal about art, though not for a while, as A.C. intimates. I found that my supposedly devastating arguments, which never fail to convince me, never convince anyone else who didn’t agree with me to begin with. The culture bloggers, much as I love them, tend to begin in the middle of the argument insofar as they argue at all. I sympathize. Conclusions are fun and premises are such a drag.

A recent brouhaha over “literary value” illustrates the problem. Bachelor #1 wrinkles his nose at the notion that literature ought to deal in extra-literary considerations at all, arguing that if you want to make a political or ethical argument, you ought to write a treatise, not a novel. Which obliterates the distinction between “dealing in” and “making an argument.” One may as well say that the mechanic does not deal in physics when after all he is only repairing your brakes. Consider two sets. The first consists of all people who believe the novelist is exempt from ethical matters on the grounds that he is “doing literature.” The second consists of all people who believe the businessman is similarly exempt on the grounds that he is “doing business.” How large do you suppose the intersection is?

Bachelor #2 recognizes that there is at least one antecedent question — “what’s art for?” In fact there are many but I’ll settle for what I can get. He then asserts that art is no use at all, and throws us back on “a fascination with making use of its uselessness,” a formulation that I suspect contains a contradiction, though I cannot prove it offhand, since I suppose one could be “fascinated” with, say, time travel into the past. What I can say for sure is that it does not advance the argument.

Bachelor #3 asserts that he likes literature that “returns” him to the world, and criticism that returns him to literature. These prejudices can be justified but are not. They are simply aired. Bachelor #4 begins promisingly, accurately remarking that “as long as all I have to say in favor of my pleasure is ‘I like to read instead,’ then I can’t very well complain about the fact that reading literature is a marginal activity.” Unfortunately that turns out to be all he has to say in its favor, and he soon derails himself in a jeremiad against reality television. All contestants retire to their respective corners and congratulate each other on a job well done.

I am not picking on any of these gentlemen. They are all highly intelligent and some of them have the good taste to read me into the bargain. Neither am I picking on the culturati in particular; the polibloggers are the same way, only more so and less politely. I’m not even picking on their arguments, really, although most of them are bad. I’m picking on their method. Everyone plunges in at a different stage of the argument, no one states his premises, and no minds are changed. What we have here is a terminal case of in medias res. As in software — I begin to think that everything is like software — raw brains profit you less than a correct approach.

That tiresome philosophic shit of mine is aiming toward a universal maximization function, a formula for all biological entities, human beings not least. It will explain, I hope, why people engage in art and a great deal else besides. The very short answer to the question of “literary value” is that it lets you explore state spaces on the cheap. (Bachelor #1, oddly, and against the main thrust of his argument, approaches the right answer when he writes of “the good that [literature] can do on a purely experiential level.”) But since I haven’t yet explained what a state space is, or what use it is to explore them, this would not be too enlightening. Which is why I have to wade through that tiresome philosophic shit.

My maximization function may be wrong. Very likely it will be. But its entire derivation will be public, and anyone who takes exception to its conclusions will be able and obliged to point out the flaw in its premises. I therefore beg A.C. Douglas’s further indulgence, and yours.

(Update: Bachelor #1, Daniel Green, misunderstands, I fear, my analogy between “doing literature” and “doing business.” He writes, “If the novelist is not exempt from the businessman’s charge to carry out his activities in an ethical way, this can only mean, given the terms of the comparison, that he should go about his own business in an upright way: do your own work and don’t steal from others; don’t hurt people (when real people are models) by portraying them unfairly; pay your typist a fair wage. Be nice to your agent. Thank your publisher.” I’m all for courtesy to agents and publishers; but such dealings are business, not literary, activities. My point was that ethics precede literature — and business, of course — in exactly the same way that physics precedes auto repair.)

Sep 26, 2004
 

Part 1
Part 2

We’ve established that thermodynamics is based on two fundamental empirical laws: the first law (conservation of energy) and the second law (the entropy law). Any systematic scheme for the description of a physical process (equilibrium or non-equilibrium, discrete or continuous) must also be built upon these two laws. Here we employ a simplified but instructive model of probability that captures our reasoning without overwhelming us with details.

Eustace, our thermodynamic system, hops off the turnip truck with $100 and walks into a casino. The simplest and cheapest games are played downstairs. The complexity and stakes of the games increase with each floor but in order to play, a guest must have the minimum ante.

The proprietor leads Eustace to a small table in the basement and offers him the following proposition. He flips a coin. For every heads, Eustace wins a dollar. For every tails, Eustace loses a dollar. If Eustace accumulates $200, he gets to go upstairs and play a more sophisticated game, like blackjack. If he goes broke he’s back out in the street.

How do we book his chances? Depends on the coin, of course. If it’s fair, then the probability of winning a single trial, p, is .5. And the probability of winning the $100, P, also turns out to be .5, or 50%. Which is just what you’d expect.

But in Las Vegas the coins aren’t usually fair. Suppose the probability of heads is .495, or .49, or .47? What then? James Bernoulli, of the Flying Bernoulli Brothers (and fathers and sons), solved this problem, in the general case, more than three hundred years ago, and his result might save you money.

One round (p)    Chance to win $100    Average # rounds
0.5000           0.5000                10,000
0.4950           0.1191                 7,616
0.4900           0.0179                 4,820
0.4700           0.000006               1,667

Unreal. A lousy half a percent bias reduces your chances of winning by a factor of 4. And if the bias is 3%, as it is in many real bets in Vegas, such as black or red on the roulette wheel, you may as well just hand the croupier your money. There is a famous Las Vegas story of a British earl who walked into a casino with half a million dollars, changed it for chips, went to the roulette wheel, bet it all on black, won, and cashed out, never to be seen in town again. This apocryphal earl had an excellent intuitive grasp of probability. He was approximately 80,000 times as likely to win half a million that way as he would have been by betting, say, $5,000 at a time.
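
The table comes straight out of the standard gambler’s-ruin formulas. Here is a short script that reproduces it for a $100 stake, $1 bets, and a $200 goal:

```python
# Gambler's ruin: probability of reaching the goal before going broke, and the
# expected number of rounds, for a $1 bet with single-round win probability p.
def ruin(p, start=100, goal=200):
    q = 1.0 - p
    if p == 0.5:
        win = start / goal
        rounds = start * (goal - start)
    else:
        r = q / p
        win = (1 - r ** start) / (1 - r ** goal)
        rounds = start / (q - p) - (goal / (q - p)) * win
    return win, rounds

for p in (0.5000, 0.4950, 0.4900, 0.4700):
    win, rounds = ruin(p)
    print(f"p = {p:.4f}   chance of the $100: {win:.6f}   average rounds: {rounds:,.0f}")
```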

Bernoulli trials, the mathematical abstraction of coin tossing, are one of the simplest yet most important random processes in probability. Such a sequence is an indefinitely repeatable series of events each with the same probability (p). More formally, it satisfies the following conditions:

  • Each trial has two possible outcomes, generically called success and failure.
  • The trials are independent. The outcome of one trial does not influence the outcome of the next.
  • On each trial, the probability of success is p and the probability of failure is 1 – p.

Now, what this has to do with α theory is, in a word, everything: Eustace’s thermodynamic state changes can be modeled as a series of Bernoulli trials. Instead of tracking Eustace’s monetary wealth, we’ll track his alpha wealth. And instead of following him through increasingly luxurious levels of the casino, we’ll follow him through increasing levels of complexity, stability and chemical kinetics.

Inside Eustace molecules whiz about. Occasionally there is a transforming collision. We know that for each such collision there are thermodynamic consequences that allow us to calculate alpha. When the alpha of a system increases, its complexity and stability also increase. The probability of success, p, is the chance that a given reaction occurs.

Each successful reaction may create products that have the ability to enable other reactions, in a positive feedback loop, because complex molecules have more ways to interact chemically than simple ones. It takes alpha to make alpha, just as it takes money to make money.

At each floor of the casino, the game begins anew but with greater wealth and a different p. Enzymes, for example, which accelerate reactions, often by many orders of magnitude, can be thought of simply as extreme Bernoulli biases, or increases in p.

If you’ve ever wondered how life sprang from the primordial soup, well, this is how. Remember that Eustace is any arbitrary volume of space through which energy can flow. Untold numbers of Eustaces begin in the basement — an elite few eventually reach the penthouse suite.

Richard Dawkins, in The Blind Watchmaker, describes the process well, if a bit circuitously, since he spares the math. Tiny changes in p produce very large changes in ultimate outcomes, provided you engage in enough trials. And we are talking about a whole lot of trials here. Imagine trillions upon trillions of Eustaces, each hosting trillions of Bernoulli trials, and suddenly the emergence of complexity seems a lot less mysterious. You don’t have to be anywhere near as lucky as you think. Of course simplicity can emerge from complexity too. No matter how high you rise in Casino Alpha, you can always still lose the game.

Alpha theory asserts that the choreographed arrangements we observe today in living systems are a consequence of myriad telescoping layers of alphatropic interactions; that the difference between such systems and elementary Eustaces is merely a matter of degree. Chemists have understood this for a long time. Two hundred years ago they believed that compounds such as sugar, urea, starch, and wax required a mysterious “vital force” to create them. Their very terminology reflected this. Organic, as distinct from inorganic, chemistry was the study of compounds possessing this vital force. In 1828 Friedrich Wöhler converted ammonium and cyanate ions into urea simply by heating the reactants in the absence of oxygen:

NH₄⁺ + OCN⁻ → CO(NH₂)₂

Urea had always come from living organisms, it was presumed to contain the vital force, and yet its precursors proved to be inorganic. Some skeptics claimed that a trace of vital force from Wöhler’s hands must have contaminated the reaction, but most scientists recognized the possibility of synthesizing organic compounds from simpler inorganic precursors. More complicated syntheses were carried out and vitalism was eventually discarded.

More recently, scientists grappled with how such reactions that sometimes require extreme conditions can take place in a cell. In 1961, Peter Mitchell proposed that the synthesis of ATP occurred due to a coupling of the chemical reaction with a proton gradient across a membrane. This hypothesis was quickly verified by experiment, and he received a Nobel prize for his work.

I claimed in Part 2 that changes in α can be measured in theory. Thanks to chemical kinetics and probability, they can be pretty well approximated in practice too. Soon, very soon, we will come to assessing the α consequences of actual systems, and these tools will prove their mettle.

One more bit of preamble, in which it will be shown that all randomness is not equally random, and we will begin to infringe philosophy’s sacred turf.

Sep 07, 2004
 

Tomorrow. Did I say tomorrow? I meant the last syllable of recorded time.

Now where were we? Oh yes: nowhere. Philosophy to date has yielded no explanations, no predictions, no tools unless we classify logic generously, and very little practical advice, much of it bad. And then from “cogito ergo sum,” or “existence exists,” philosophers expect to explain the world, or at least a good chunk of it. Tautologies unpack only so far. No matter how much you cram into a suitcase, you cannot expect to fill a universe with its contents.

I was a little unfair to the Greeks in Part 1. They didn’t have 300 years of dazzling scientific advance to build on. What they had was nothing at all, and as Eddie Thomas pointed out in the comments, you have to start somewhere. But 2500 years later, do we have to start from nothing all over again? In what follows I will take for granted that the external world exists, that we are capable of knowing it, and doubtless many other truths of metaphysics and epistemology that everyone knows but philosophers still hotly dispute. If you want to argue that stuff, the comments are still raging on Part 1.

I propose to begin with the First and Second Laws of thermodynamics. You can follow the links for some helpful refreshers, but in brief, the First Law states that energy is always conserved. It is neither created nor destroyed, merely transferred. And since we know, from relativistic physics, that matter is merely energy in another form, we conclude that everything that ever happens is an energy transfer.

This profound fact about the universe has gone almost entirely unnoticed by philosophers, whether from ignorance or indifference I cannot say. But it leads almost immediately to two other profound facts. First, all events are commensurable at some level. They are all instances of the same thing. Second, all events are measurable, at least in theory. We need only to be able to measure each thermodynamic consequence, and add them all up.

Now let’s set up a little thermodynamic system. Call it Eustace. Eustace need not be biological, or at all fancy; it is best to think of him as just a cube of space. Eustace would be pretty dull without a few things going on, so to liven up matters we will assume that at least something in the way of atomic state change is going on. Particles will dart in and out of our little cube of space.

To describe Eustace, we have recourse to the Second Law. As ordinarily formulated, it states that energy, if unimpeded, always tends to disperse. Frying pans cool when you take them off the stove. Water ripples outward and fades to nothing when you throw a pebble in a still lake. Iron rusts. Perpetual motion machines run down. Rocks don’t roll uphill.

Vast quantities of ink have been spilled in attempts to explain entropy, but really it is nothing more than the measure of this tendency of energy to disperse.

The Second Law is, fortunately, only a tendency. Energy disperses if unimpeded. But it is often impeded, which makes possible life, machines, and anything that does work, in the technical as well as the ordinary sense of the word. The lack of activation energy impedes the Second Law: some external force must push a rock poised atop a cliff, or take the frying pan off the fire. Covalent bond energy impedes the Second Law as well, which is why solid objects hang together. The Second Law has been formulated mathematically in several ways. The most useful for describing Eustace is the Gibbs-Boltzmann equation for free energy, which states:

ΔG = ΔH – TΔS

This is one of the most important equations in the history of science; it has been shown to hold in every context that we know of. The triangles, deltas, represent change. Gibbs-Boltzmann compares two states of a thermodynamic system — Eustace in our case, but it could be anything. As for the terms: G, or free energy, is simply the energy available to do work. The earth, for example, receives new free energy constantly in the form of sunlight. Free energy is the sine qua non; it is why I can write this and you can read it. It does not, unfortunately, necessarily become work, as no one knows better than I. Let alone useful work: this depends on how it is directed. I do work when I paint your car and work when I scratch it.

H is enthalpy, the total heat content of a system. We are interested here in changes (Δ), and since we know from the First Law that energy is neither created nor destroyed, that nothing is for free, any increase in enthalpy has to come from outside the system. T is temperature, and S is entropy, which can be either positive or negative. Negative entropy is, again, good; it leads to more free energy, since subtracting a negative TΔS term adds to the total. Positive entropy is what you lose, and one of the consequences of the Second Law is that you always lose something.

To return to Eustace, we know from the First Law, in the terms of the equation, that ΔG >= 0. We will also assign Eustace a constant temperature, which isn’t strictly necessary but simplifies the math a bit. So we have:

ΔH – TΔS >= 0

We are dealing here with sums of discrete quantities. Not one big thing, but many tiny things. Various particles are darting around inside Eustace, each with its own thermodynamic consequences. Hess’s Law states that we can add these up in any order and the result will always be the same. So we segregate the entropic processes into the positive and the negative:

ΣH – TΣS_negative – TΣS_positive >= 0

From here it’s just a little algebra. We take the third term, the sum of the positive entropies, add it to both sides, and then divide both sides by that same term, yielding:

α = (ΣH – TΣS_negative) / TΣS_positive >= 1

And there we have it. Alpha (α) is just an arbitrary term that we assign to the result, like c for the speed of light. The term TΣS_negative (T times the sum of the negative entropies) is always negative, so the higher the negative entropy, the larger the numerator. And alpha is always greater than or equal to 1, as you would expect. One is the alpha number for a system that dissipates every last bit of its enthalpy, retaining no free energy at all.
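
For the concretely minded, the same computation with invented numbers plugged in; the units divide out and what remains is a pure number.

```python
# Toy alpha computation, exactly as defined above, with made-up figures:
# (500 - 300*(-0.4)) / (300*1.8) = 620 / 540 ~ 1.15.
T = 300.0                                   # kelvin, held constant
enthalpy_terms = [200.0, 180.0, 120.0]      # joules, invented
neg_entropy_terms = [-0.25, -0.15]          # J/K, invented (ordering processes)
pos_entropy_terms = [1.0, 0.8]              # J/K, invented (dissipative processes)

numerator = sum(enthalpy_terms) - T * sum(neg_entropy_terms)
denominator = T * sum(pos_entropy_terms)
alpha = numerator / denominator
print(f"alpha = {alpha:.3f}")               # >= 1 whenever the inequality above holds
```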

Alpha turns out to have several interesting properties. First, it is dimensionless. The numerator and denominator are both expressed in units of energy, which divide out. It is a number, nothing more. Second, it is calculable, at least in principle. Third, it is perfectly general. Alpha applies to any two states of any system. Fourth, it is complete. Alpha accounts for everything that has happened inside Eustace between the two states that we’re interested in.

Which leaves the question of what α is, exactly. It can be thought of as the rate at which the free energy in a system is directed toward coherence, rather than dissipation. It is the measure of the stability of a system. And this number, remarkably, will clear up any number of dilemmas that philosophers have been unable to resolve. Not to get too far ahead of ourselves here, but I intend, eventually, to establish that the larger Eustace’s α number is, the better.

Next (I do not say tomorrow): From physics to ethics in one moderately difficult step.

Update: Edited for clarity. So if you still don’t understand it, imagine what it was like before.

Aug 12, 2004
 

Humans have suffered three thousand years of philosophy now, and it’s time we took stock.

Explanations. A successful explanation decomposes a complex question into its constituent parts. You ask why blood is bright red in the air and the arteries and darker red in the veins. I tell you that arterial blood has more oxygen, which it collects from the lungs and carries to the heart, than venous blood, which does the opposite circuit. Then I tell you that blood contains iron, which bonds to oxygen to form oxyhaemoglobin, which is bright red. I can demonstrate by experiment that these are facts. I have offered a successful explanation.

Of course it is incomplete. I haven’t told you how I know the blood circulates, what oxygen is, how chemical bonding works, or what makes red red. But I could tell you all of these things, and even if I don’t you know more about blood than you did when we started.

The explanation succeeds largely because the question is worth asking. You notice an apparently strange fact that you do not understand. You investigate, and if you are lucky and intelligent, maybe you get somewhere. Philosophers, by contrast, when they sit down to philosophize, forget, as a point of honor, everything they know. They begin with pseudo-questions like “Do I exist?” (Descartes) or “Does the external world exist?” (Berkeley and his innumerable successors), the answers to which no sane person, including Descartes and Berkeley, has ever seriously doubted. Kant, the great name in modern philosophy, is the great master of the showboating pseudo-question. The one certainty about questions like “how is space possible?”, “how are synthetic judgments possible a priori?”, and, my favorite, “how is nature possible?,” is that you will learn nothing by asking them, no matter how they are answered. Kant rarely bothers to answer them and such answers as he gives are impossible to remember in any case.

Explanations would seem to be philosophy’s best hope, but its track record is dismal. There has been the occasional lucky guess. Democritus held, correctly, that the world was made up of atoms. Now suppose you had inquired of Democritus what the world-stuff was, and he told you “atoms.” Would you be enlightened? In any case he couldn’t prove his guess, or support it, or follow it up in any way. Atoms had to wait 2500 years for Rutherford and modern physics to put them to good use. If you asked Parmenides how a thing can change and remain the same thing, he would have told you that nothing changes. It’s an explanation of a sort. But would you have gone away happy? Grade: Two C’s, two D’s, and an F. Congratulations Kroger, you’re at the top of the Delta pledge class.

Predictions. To be fair, predictions have been the Achilles’ heel of many more reputable disciplines than philosophy, like economics. Human beings have a nasty habit of not doing what the models say they should, and most philosophers retain enough sense of self-preservation to shy away from prediction whenever possible. Still, a few of the less judicious philosophers of history, like Plato, Spengler, and Marx, have taken the plunge. Spenglerian cycles of history take a couple thousand years to check out, fortunately for Spengler, but Plato’s prediction of eternal decline and Marx’s of advanced capitalism preceding communism were — how shall I put this politely? — howlingly wrong. The very belief that history has a direction is a prime piece of foolishness in its own right.

Brute matter is more tractable. Einstein’s equation for the precession of the perihelion of Mercury, which Newtonian mechanics could not explain, is a classical instance of a successful prediction. Although the precession was a matter of a lousy 40 seconds of arc per century, Einstein wrote Eddington that he was prepared to give up on relativity if his equation failed to account for it. Ever met a philosopher willing to throw over a theory of his in the face of an inconvenient fact? Me neither. Grade: No grade point average. All courses incomplete.

Tools. OK, there’s propositional logic, for which Aristotle receives due credit. But really that’s more mathematics than philosophy, Aristotle’s version of it was incomplete, and it took mathematicians, like Boole and Frege, to make a proper algebra of it and tighten it up. With this one shining exception philosophy has been a dead loss in the tools department. Probably its most famous contribution is Karl Popper’s theory of falsifiability, which turns real science exactly on its head. Where real science verifies theories, Popper falsifies them. Most of us consider “irrefutability” (not “untestability,” which is a different affair) a virtue in a scientific theory. For Popper it is a vice. Mathematics, which is obviously not “falsifiable” and equally obviously “irrefutable,” supremely embarrasses Popper’s philosophy of science, and Popper takes the customary philosophic approach of never mentioning it.

Far from supplying us with tools, philosophers have taken every opportunity to disparage the ones we’re born with. According to Berkeley things do not exist outside of our mind because we cannot think of such things without having them in mind. According to Kant we are ignorant because we have senses. I cite these arguments not because they are bad, which they are, but because they are the most influential arguments in modern philosophy.

To modern philosophy in particular also belongs the unique distinction of making the ad hominem respectable. According to Marx we reason badly about economics because we are bourgeois. According to the deconstructionists we are racist, being white; sexist, being male; and speciesist, being homo sapiens. Grade: Fat, drunk, and stupid is no way to go through life, son.

Advice. Moral advice from philosophers divides into two categories, the anodyne and the dangerous. Under the anodyne begin with Plato and “know thyself,” which is to advice what “nothing changes” is to explanation. Kant recommends that we treat our neighbor as we ourselves would be treated, which works well provided our neighbor is exactly like us, and sheds little light on the question of how we would wish to be treated, and why. Rand counsels “rational self-interest,” which might be helpful if she told us what was rational, or what was self-interested.

Under dangerous file Nietzsche’s “will to power,” just what a growing boy needs to hear. (Yes, he is tragically misinterpreted, and no, it doesn’t matter.) But utilitarianism, “the greatest good for the greatest number,” with its utter disregard for the individual, is the real menace. Occasionally some poor deranged soul actually tries to follow it, with predictable consequences. Ladies and gentlemen, I give you the consistent utilitarian, the unblushing advocate of infanticide and cripple-killing, Mr. Peter Singer. The sad fact is that your moral intuition, imperfect though it is, gives you better advice than any moral philosophers have to date. G.E. Moore, confronted with this fact, responded with “the naturalistic fallacy,” from which it follows that the way we do behave has nothing to do with the way we should behave. Well George, natural selection, which largely governs our behavior, has seen us through for quite a long time now, which is more than I can say for moral philosophy. Grade: Zero point zero.

One loose index of the value of a discipline is whether it helped humanity out of the cave. Mathematicians, scientists, engineers, and even a few economists have all made their contributions. As for philosophy — we programmers have a term to characterize a programmer without whom, even if he were paid nothing, the project would be better off. The term is “net negative.”

Is it too late to start over? Tomorrow we will consider a better approach.

(Update: Bill Kaplan notes in the comments that I had the Einstein-Eddington story backwards, which reflects no credit on Einstein but, alas, none on the philosophers either. Umbrae Canarum comments. Colby Cosh wittily points up my debt to David Stove, to whom I owe some, though not more than 95%, of the argument. The original draft contained an acknowledgement of Stove, which was inadvertently omitted in the final version thanks to a transcription error by one of my research assistants. I recommend Stove’s The Plato Cult to anyone with even a mild interest in the topic. You skinflints can find a few of his greatest hits here. Ilia Tulchinsky comments. Jesus von Einstein comments. Ray Davis comments.)

Jul 07, 2004
 

Conversation is not an exchange of opinion. It is a sifting of opinion.
–Jacques Barzun

Welcome to the wonderful world of Teachout’s Cultural Concurrence Index! It’s a new game, everyone can play, and nearly everyone has. At least a dozen bloggers have offered their answers, in part or in full, to Terry’s list of 100 binary, mostly arts-related questions. This adds up to nearly 12,000 opinions, all told — a plethora of opinions, a cornucopia of opinions, more than a division’s worth of opinions, each as distinctive as a soldier in uniform. Terry himself makes modest claims for his Index:

So what does the TCCI do, accurately or otherwise? It measures the extent to which your taste resembles mine — but that’s all. What’s more, you probably noticed in taking the test that my taste can’t be “explained” by any one principle or theory. Had I scrambled the order of the alternatives and asked you to guess my answers based on your prior knowledge of my work, I doubt many of you would have scored much higher than, oh, 70%, unless you also knew me personally and very well indeed. Yes, I’m a classicist, but I also prefer Schubert to Mozart, which tells you…what?

By that standard it is a resounding success, despite a few minor difficulties with selection bias. (In a recent bulletin from the Institute of Tautological Studies, 99.87% of bloggers who answer online blog quizzes report that they’d rather read blogs than magazines.) And I defer to no one in my admiration for Terry’s catholic taste, animated by no one principle or theory. Yet Terry has stumbled over a potentially useful tool. Let’s suppose what I increasingly doubt, that we actually want to learn something. How might we proceed?

The overall scores tell us little beyond, as Terry says, how far you agree with Terry. They range from 40% to 70% agreement, which scarcely differs from chance, once we account for the fact that the test-takers read Terry in the first place and can be expected to share his tastes at a higher than random rate. But what if we cross-correlated the questions? Suppose we could persuade 1,000 bloggers to take the test, which ought not to be too difficult; they seem to have little else to do. A hundred questions yield 4,950 possible cross-correlations (each of the 100 questions pairs with 99 others, divided by 2, since the pairs are symmetric). Now we calculate the agreement between the answers to each pair of questions, looking for the outliers, as far from 0.5 as possible. With close to 1,000 data points for each answer pair — some respondents will opt out of some questions — we put ourselves in a position to draw a few conclusions. Some answer pairs will likely match up quite closely; I would expect a few correlations of 0.9 or higher.

Then we take these outlying pairs and run them back against the data set a few times more, looking for clusters. Interdisciplinary clusters will be best of all; if we find, for example, that nearly everyone who prefers Astaire to Kelly also prefers Matisse to Picasso and Keaton to Chaplin, then we might be on to something. We examine the clusters, looking for commonalities. Looking for rules, in other words. Although Terry’s taste, or the taste of any educated person, cannot be explained by one principle or theory — this is a reasonable working definition of “cultivated” — I would wager that it can be explained pretty well by several.
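
Here is a sketch of the exercise on simulated data. The answers below are random stand-ins, so no real clusters will emerge; actual TCCI responses would go in their place.

```python
import random

# Simulate 1,000 respondents answering 100 binary questions, then flag question
# pairs whose agreement rate strays far from 0.5. With real answers, the
# outliers are the candidate clusters.
random.seed(0)
N_RESPONDENTS, N_QUESTIONS = 1000, 100
answers = [[random.randint(0, 1) for _ in range(N_QUESTIONS)]
           for _ in range(N_RESPONDENTS)]

def agreement(i, j):
    """Fraction of respondents giving the same answer to questions i and j."""
    same = sum(1 for row in answers if row[i] == row[j])
    return same / N_RESPONDENTS

pairs = [(i, j, agreement(i, j))
         for i in range(N_QUESTIONS) for j in range(i + 1, N_QUESTIONS)]
outliers = [p for p in pairs if abs(p[2] - 0.5) > 0.05]
print(f"{len(pairs)} pairs checked, {len(outliers)} stray past 0.55/0.45")
```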

Unfortunately this is work. I don’t do work around here; I just assign it.

Jun 26, 2004
 

While my back was turned blogs again united to take the power back, this time over the English language:

Prior to the arrival of the Blogosphere, the Academics controlled the media. Not directly, of course, but the standards in which the media and journalists conformed to the literary fads. Journalists self-regulated, in consultation with their former professors, to rule on the standard use of “is.”

We reject it. We reject the intrusion. The people are no longer willing to surrender the media, the press, to today’s journalists. Now unfortunately, the Chomskys of the world didn’t get the memo. They’re finding out that the people have rejected their view, their standards, and their hold on the common tongue. It is OUR language.

The next time someone attempts to dismiss your message, your opinion, or your right to speak by attacking the structure or style you’ve used, tell them to stick it. This will get a lot worse before it gets any better. The fortresses of those ivory towers are pretty powerful. They’ll not surrender their power and position without a fight. And a fight is just what they are going to get.

Vox populi vox dei; but we might ask, before storming the barricades, what the enemy’s position really is. A brief multiple-choice quiz may clear this up. Who said “No native speaker can make a mistake”?

A. Me
B. The people
C. A famous 20th century linguist

The answer, of course, is C (Allen Walker Read). Mrs. du Toit has hold of the wrong end of the telescope. “Correctness” has been very much out of fashion for the last hundred years or so, since Saussure promoted philology and grammar to “linguistics.” Descriptivism dominates academic discourse. It may take a prole to speak Ebonics but it takes a professor to suggest that it be taught in school.

The popular reaction, understandably, veers wildly toward prescriptivism. The language column is the most popular feature of every newspaper and magazine; William Safire got several books out of his. The local bar is thick with grammarians. A badly punctuated guide to British punctuation becomes an American best-seller. (thanks to Our Girl in Chicago.) The phone lines clog every time Patricia T. O’Connor appears on NPR, every caller with his tiny axe to grind.

Carol from Woodbridge boldly comes out against “irregardless,” which is ugly, has a needless and confusing prefix, is not in the dictionary (except as vulg.), and differs from “inflammable,” which no one complains about, only in the last respect. Malik from Staten Island objects to “whom” where “who” is meant. But as the great language historian Otto Jespersen points out, flexion tends to disappear as languages age. “Habaidedeima” becomes “had.” “Cut” loses its endings, serving as present singular, present plural, past singular and plural, and past, present and future perfect, and we’re better off for the fact. “Whom” will eventually land in the dustbin with “whomever,” now rarely used and never correctly. Spelling “night” and “light” as the advertisers do sends Tony from Brooklyn into a towering rage, lest the words no longer betray their Old High German origins. These are the people on language; hear them roar.

Alleged vulgarisms are sometimes not merely harmless but useful. Second person singular and plural are identical in Standard English, which causes no end of confusion. The Southern “y’all” distinguishes them nicely. “Ain’t I” was a perfectly acceptable and euphonious usage 150 years ago — it shows up in Henry James — until the schoolmarms got wind of it and began to insist on “am I not,” or worse, the illogical “aren’t I.”

Languages evolve, for good and ill, though mostly for good: natural selection applies. They are spontaneous orders, like markets. “The people” cannot take English back, never having surrendered it in the first place. Educated speakers exert disproportionate influence over its evolution, and as more people can do their own publishing there will be more educated speakers. This is the kernel of the truth in Mrs. du Toit’s remarks. But her counter-revolution is wholly imaginary. “Blog” will shortly appear in the OED, though I won’t hold my breath, as she seems to be doing, for “blogosphere,” let alone “Instalanche.”

H.W. Fowler, a moderate prescriptivist, adopts a sensible mediate position:

Many idioms are seen, if they are tested by grammar or logic, not to say what they are nevertheless well understood to mean. Fastidious people point out the sin, and easy-going people, who are more numerous, take little notice and go on committing it. Then the fastidious people, if they are foolish, get excited and talk of ignorance and solecisms, and are laughed at as pedants; or, if they are wise, say no more about it and wait. The indefensibles, however sturdy, may prove to be not immortal, and anyway there are much more profitable ways of spending time than baiting them. It is well, however, to realize that there are such things as foolish idioms; an abundance of them in a language can be no credit to it or its users, and drawing attention to them may help to keep down their numbers.

Best, as Fowler suggests, to say no more about it and wait. But if you’re spoiling for a fight, fight for precision and clarity, and flog a live horse, not a dead one. “Shall” and “will” used to make subtle distinctions between prediction and intent. The whole business proved too complicated, and they have gone, mostly unregretted, the way of the dodo. But “masterful” and “masterly” may yet be saved. There is hope that “amazing,” “phenomenal,” and “awesome,” on the one hand, and “horrid,” “horrible,” “terrible,” and “awful,” on the other, will (shall?) be resuscitated before they congeal into synonyms for “good” and “bad,” respectively. Mrs. du Toit herself employs “academic” and “academician” interchangeably. Doubtless she would consider me pedantic for saying so, but an academic has a job while an academician has a style. “Academician” has pregnant historical associations and is worth preserving. These are some of mine; you have your own. And don’t wait around for the revolution, it won’t be televised and it won’t be blogged either. It isn’t coming.

(Update: Jim Henley comments. Alan Sullivan proposes several principles for rating linguistic innovation, all of which I endorse.)