# Alpha Theory

**Introduction**

*The history of philosophy, ethics in particular, is reviewed and found wanting. It continues to stink of vitalism and anthropocentrism, despite the fact that the idea of a “vital force” was thoroughly discredited by the 1850s. No ethics to date has managed to improve on moral intuition, or explain it either.*

*What fun is a game with no rules? There must be some common structure to all living systems, not just human beings, and based on its track record, it is science that will likely discover it.*

**Explanations.** A successful explanation decomposes a complex question into its constituent parts. You ask why blood is bright red in the air and the arteries and darker red in the veins. I tell you that arterial blood has more oxygen than venous blood: it collects oxygen from the lungs and carries it to the heart, while venous blood makes the opposite circuit. Then I tell you that blood contains iron, which bonds to oxygen to form oxyhemoglobin, which is bright red. I can demonstrate by experiment that these are facts. I have offered a successful explanation.

Of course I haven’t told you how I know the blood circulates, what oxygen is, how chemical bonding works, or what makes red red. But I could tell you all of these things, and even if I don’t you know more about blood than you did when we started.

The explanation succeeds largely because the question is worth asking. You notice an apparently strange fact that you do not understand. You investigate, and if you are lucky and intelligent, maybe you get somewhere. Philosophers, by contrast, when they sit down to philosophize, forget, as a point of honor, everything they know. They begin with pseudo-questions like “Do I exist?” (Descartes) or “Does the external world exist?” (Berkeley and his innumerable successors), the answers to which no sane person, including Descartes and Berkeley, has ever seriously doubted. Kant, the great name in modern philosophy, is the great master of the showboating pseudo-question. The one certainty about questions like “how is space possible?”, “how are synthetic judgments possible *a priori*?”, and, my favorite, “how is nature possible?”, is that you will learn nothing by asking them, no matter how they are answered. Kant rarely bothers to answer them and such answers as he gives are impossible to remember in any case.

Explanations would seem to be philosophy’s best hope, and there has been the occasional lucky guess. Democritus held, correctly, that the world was made up of atoms. Now suppose you had inquired of Democritus what the world-stuff was, and he told you “atoms.” Would you be enlightened? In any case he couldn’t prove his guess, or support it, or follow it up in any way. Atoms had to wait 2500 years for Rutherford and modern physics to put them to good use. If you asked Parmenides how a thing can change and remain the same thing, he would have told you that nothing changes. It’s an explanation of a sort. But would you have gone away happy?

**Predictions.** To be fair, predictions have been the Achilles’ heel of many more reputable disciplines than philosophy, like economics. Human beings have a nasty habit of not doing what the models say they should, and most philosophers retain enough sense of self-preservation to shy away from prediction whenever possible. Still, a few of the less judicious philosophers of history, like Plato, Spengler, and Marx, have taken the plunge. Spenglerian cycles of history take a couple of thousand years to check out, fortunately for Spengler, but Plato’s prediction of eternal decline and Marx’s of advanced capitalism preceding communism were — how shall I put this politely? — howlingly wrong. The very belief that history has a direction is a prime piece of foolishness in its own right.

Brute matter is more tractable. Einstein’s equation for the precession of the perihelion of Mercury, which Newtonian mechanics could not explain, is a classic instance of a successful prediction. Although the precession was a matter of a lousy 43 seconds of arc *per century*, Einstein wrote Eddington that he was prepared to give up on relativity if his equation failed to account for it. Ever met a philosopher willing to throw over a theory of his in the face of an inconvenient fact? Me neither.

**Tools.** OK, there’s formal logic, for which Aristotle receives due credit. But really that’s more mathematics than philosophy, Aristotle’s version of it was incomplete, and it took mathematicians, like Boole and Frege, to make a proper algebra of it and tighten it up. With this one shining exception philosophy has been a dead loss in the tools department. Probably its most famous contribution is Karl Popper’s theory of falsifiability, which turns real science exactly on its head. Where real science verifies theories, Popper falsifies them. Most of us consider “irrefutability” (not “untestability,” which is a different affair) a virtue in a scientific theory. For Popper it is a vice. Mathematics, which is obviously not “falsifiable” and equally obviously “irrefutable,” supremely embarrasses Popper’s philosophy of science, and Popper takes the customary philosophic approach of classifying it as non-science, which is to say, ignoring it.

Far from supplying us with tools, philosophers have taken every opportunity to disparage the ones we’re born with. According to Berkeley things do not exist outside of our mind because we cannot think of such things without having them in mind. According to Kant we are ignorant because we have senses. I cite these arguments not because they are bad, which they are, but because they are the most influential arguments in modern philosophy.

To modern philosophy in particular also belongs the unique distinction of making the *ad hominem* respectable. According to Marx we reason badly about economics because we are bourgeois. According to the deconstructionists we are racist, being white; sexist, being male; and speciesist, being *Homo sapiens*.

**Advice.** Moral advice from philosophers divides into two categories, the anodyne and the dangerous. Under the anodyne begin with Plato and “know thyself,” which is to advice what “nothing changes” is to explanation. Kant recommends that we treat our neighbor as we ourselves would be treated, which works well provided our neighbor is exactly like us, and sheds no light on the question of how we would wish to be treated, and why. Rand counsels “rational self-interest,” which might be helpful if she told us what was rational, or what was self-interested.

Under dangerous file Nietzsche’s “will to power,” just what a growing boy needs to hear. (Yes, he is tragically misinterpreted, and no, it doesn’t matter.) But utilitarianism, “the greatest good for the greatest number,” with its utter disregard for the individual, is the real menace. Occasionally some poor deranged soul actually tries to follow it, with predictable consequences. Ladies and gentlemen, I give you the consistent utilitarian, the unblushing advocate of infanticide and cripple-killing, Mr. Peter Singer. The sad fact is that your moral intuition, imperfect though it is, gives you better advice than any moral philosophers have to date. G.E. Moore, confronted with this fact, responded with “the naturalistic fallacy,” from which it follows that the way we *do* behave has nothing to do with the way we *should* behave. Well George, natural selection, which largely governs our behavior, has seen us through for quite a long time now, which is more than I can say for moral philosophy.

One loose index of the value of a discipline is whether it helped humanity out of the cave. Mathematicians, scientists, engineers, and even a few economists have all made their contributions. As for philosophy — we programmers have a term to characterize a programmer without whom, even if he were paid nothing, the project would be better off. The term is “net negative.”

Is it too late to start over?

**1. Among the Ruins**

*We seek rules that are precise and objective without indulging dogmatism. The laws of thermodynamics are the most general we know. They are independent of any hypothesis concerning the microscopic nature of matter, and they appear to hold everywhere, even in black holes. (Stephen Hawking lost a bet on this.) They look like a good place to start. We postulate a cube floating through space and call it Eustace, in an ill-advised fit of whimsy. A little algebraic manipulation of the Gibbs-Boltzmann formulation of the Second Law produces a strange number we call alpha, which turns out to be the measure of sustainability for any Eustace, living or dead, on Earth or in a galaxy far, far away.*

I have been a little unfair to the Greeks. They didn’t have 300 years of dazzling scientific advance to build on. What they had was nothing at all, and you have to start somewhere. But 2500 years later, do we have to start from nothing all over again? In what follows I will take for granted that the external world exists, that we are capable of knowing it, and doubtless many other truths of metaphysics and epistemology that everyone knows but philosophers still hotly dispute.

I begin with the First and Second Laws of thermodynamics. You can follow the links for some helpful refreshers, but in brief, the First Law states that energy is always conserved. It is neither created nor destroyed, merely transferred. And since we know, from relativistic physics, that matter is merely energy in another form, we conclude that **everything that ever happens is an energy transfer.**

This profound fact about the universe has gone almost entirely unnoticed by philosophers, whether from ignorance or indifference I cannot say. But it leads almost immediately to two other profound facts. First, all events are commensurable at some level. They are all instances of the same thing. Second, all events are measurable, at least in theory. We need only measure each thermodynamic consequence and add them all up.

Now let’s set up a little thermodynamic system. Call it Eustace. Eustace need not be biological, or at all fancy; it is best to think of him, for now, as just a cube of space. Eustace would be pretty dull without a few things going on, so to liven up matters we will assume that at least something in the way of atomic state change is going on. Particles will dart in and out of our little cube of space.

To describe Eustace, we have recourse to the Second Law. As ordinarily formulated, it states that energy, if unimpeded, always tends to disperse. Frying pans cool when you take them off the stove. Water ripples outward and fades to nothing when you throw a pebble in a still lake. Iron rusts. Perpetual motion machines run down. Rocks don’t roll uphill.

Vast quantities of ink have been spilled in attempts to explain entropy, but really it is nothing more than the measure of this tendency of energy to disperse.

The Second Law is, fortunately, only a tendency. Energy disperses *if unimpeded.* But it is often impeded, which makes possible life, machines, and anything that does work, in the technical as well as the ordinary sense of the word. (I do work when I wash your car and work when I scratch it.) The lack of activation energy impedes the Second Law: some external force must push a rock poised atop a cliff, or take the frying pan off the fire. Covalent bond energy impedes the Second Law as well, which is why solid objects hang together. The Second Law has been formulated mathematically in several ways. The most useful for describing Eustace is the Gibbs-Boltzmann equation for free energy, which states:

**ΔG = ΔH – TΔS**

This is one of the most important equations in the history of science; it has been shown to hold in every context that we know of. The triangles, deltas, represent change. Gibbs-Boltzmann compares two states of a thermodynamic system — Eustace in our case, but it could be anything. As for the terms: **G**, or free energy, is simply the energy available to do work. The earth, for example, receives new free energy constantly in the form of sunlight. Free energy is the *sine qua non*; it is why I can write this and you can read it. It does not, unfortunately, necessarily become work, let alone useful work: this depends on how it is directed.

**H** is enthalpy, the total heat content of a system. We are interested here in changes (Δ), and since we know from the First Law that energy is neither created nor destroyed, that nothing is for free, any increase in enthalpy has to come from outside the system. **T** is temperature, and **S** is entropy, which can be either positive or negative. Negative entropy is good for Eustace: subtracting a negative quantity leads to more free energy. Positive entropy is what you lose, and one of the consequences of the Second Law is that you always lose something.
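For concreteness, here is the arithmetic of the equation with invented numbers; nothing below is a measurement of any real system:

```python
# Gibbs free energy: ΔG = ΔH − TΔS. All values are invented for illustration.

delta_H = 50.0   # change in enthalpy, kJ (hypothetical)
T = 298.0        # absolute temperature, K (about room temperature)
delta_S = 0.02   # change in entropy, kJ/K (hypothetical)

delta_G = delta_H - T * delta_S  # free energy available to do work, kJ
print(round(delta_G, 2))  # 44.04
```

The point is only the bookkeeping: a positive entropy change eats into the enthalpy, and whatever is left over is free energy.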

To return to Eustace, we know from the First Law, in the terms of the equation, that **ΔG >= 0**. We will also assign Eustace a constant temperature, which isn’t strictly necessary but simplifies the math a bit. So we have:

**ΔH – TΔS >= 0**

We are dealing here with sums of discrete quantities. Not one big thing, but many tiny things. Various particles are darting around inside Eustace, each with its own thermodynamic consequences. Hess’s Law states that we can add these up in any order and the result will always be the same. So we segregate the entropic processes into the positive and the negative:

**ΔH – TΔS negative – TΔS positive >= 0**

From here it’s just a little algebra. We take the third term, the sum of the positive entropies, add it to both sides, and then divide both sides by that same term, yielding:

**α = (ΔH – TΔS negative) / TΔS positive >= 1**

And there we have it. Alpha (**α**) is just an arbitrary symbol that we assign to the result, like *c* for the speed of light. The term **TΔS negative** (the sum of the negative entropy) is always negative, so subtracting it enlarges the numerator: the more negative entropy, the higher the alpha. And alpha is always greater than or equal to 1, as you would expect. One is the alpha number for a system that dissipates every last bit of its enthalpy, retaining no free energy at all.
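The algebra can be checked mechanically. A minimal sketch with invented values for the three terms:

```python
# Alpha from the decomposed Second Law: ΔH − TΔS_negative − TΔS_positive >= 0.
# All values below are invented for illustration.

delta_H = 10.0    # enthalpy change (hypothetical energy units)
TdS_neg = -30.0   # sum of the negative-entropy terms (always negative)
TdS_pos = 25.0    # sum of the positive-entropy terms (always positive)

alpha = (delta_H - TdS_neg) / TdS_pos
print(alpha)  # (10 + 30) / 25 = 1.6

# The rearrangement is exact: alpha >= 1 precisely when the
# original inequality ΔH − TΔS_neg − TΔS_pos >= 0 holds.
assert (alpha >= 1) == (delta_H - TdS_neg - TdS_pos >= 0)
```

Note that subtracting the negative term enlarges the numerator, exactly as described above, and that the units divide out: alpha is a bare number.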

Alpha turns out to have several interesting properties. First, it is dimensionless. The numerator and denominator are both expressed in units of energy, which divide out. It is a number, nothing more. Second, it is calculable, at least in principle. Third, it is perfectly general. Alpha applies to any two states of any system. Fourth, it is complete. Alpha accounts for everything that has happened inside Eustace between the two states that we’re interested in.

Which leaves the question of what alpha *is*, exactly. It can be thought of as the ratio of the free energy a system directs toward coherence to the free energy it loses to dissipation. It is the measure of the stability of a system. And this number, remarkably, will clear up any number of dilemmas that philosophers have been unable to resolve. Not to get too far ahead of ourselves here, but I intend, eventually, to establish that the larger Eustace’s **α** number is, the better for Eustace.

**2. Eustace Does Vegas**

*We lay out a scoring system for Eustace built entirely on mathematics using alpha, a dimensionless, measurable quantity. Alpha measures the consequences of energy flux. All is number. Along the way we explain, via Bernoulli trials, how complexity emerges from the ooze. The dramatic effects of probability biases of a percent or less are dwarfed by the even more dramatic biases afforded by catalysts and enzymes that often operate in the 10^{8} to 10^{20} range.*

Eustace, our thermodynamic system, hops off the turnip truck with $100 and walks into a casino. The simplest and cheapest games are played downstairs. The complexity and stakes of the games increase with each floor but in order to play, a guest must have the minimum ante.

The proprietor leads Eustace to a small table in the basement and offers him the following proposition. He flips a coin. For every heads, Eustace wins a dollar. For every tails, Eustace loses a dollar. If Eustace accumulates $200, he gets to go upstairs and play a more sophisticated game, like blackjack. If he goes broke he’s back out in the street.

How do we book his chances? Depends on the coin, of course. If it’s fair, then the probability of winning a single trial, **p**, is .5. And the probability of winning the $100, **P**, also turns out to be .5, or 50%. Which is just what you’d expect.

But in Las Vegas the coins aren’t usually fair. Suppose the probability of heads is .495, or .49, or .47? What then? James Bernoulli, of the Flying Bernoulli Brothers (and fathers and sons), solved this problem, in the general case, more than three hundred years ago, and his result might save you money.

| Chance to win one round | Chance to win $100 | Average # rounds |
| ----------------------- | ------------------ | ---------------- |
| 0.5000 | 0.5000 | 10,000 |
| 0.4950 | 0.1191 | 7,616 |
| 0.4900 | 0.0179 | 4,820 |
| 0.4700 | 0.000006 | 1,667 |

The third column is the number of rounds you can expect to play before winning or losing $100. As you can see, things don’t look good. A mere half a percent bias reduces your chances of winning by a factor of four. And if the bias is 3%, as it is in many real bets in Vegas, such as black or red on the roulette wheel, you may as well just hand over your money. There is a famous Las Vegas story of a British earl who walked into a casino with half a million dollars, changed it for chips, went to the roulette wheel, bet it all on black, won, and cashed out, never to be seen in town again. This apocryphal earl had an excellent intuitive grasp of probability. He was approximately 80,000 times as likely to win half a million that way as he would have been by betting, say, $5,000 at a time.
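Bernoulli’s general solution is the classical gambler’s-ruin result, and it is easy to check. A minimal Python sketch, using the $100 stake, $200 target, and ±$1 rounds from the story above:

```python
def ruin_probability(p, stake=100, target=200):
    """Chance of reaching `target` before going broke, starting from `stake`,
    in a game paying +/- $1 per round with win probability p (gambler's ruin)."""
    if p == 0.5:
        return stake / target
    r = (1 - p) / p  # ratio of loss probability to win probability
    return (1 - r ** stake) / (1 - r ** target)

def expected_rounds(p, stake=100, target=200):
    """Expected number of rounds until the game ends, either way."""
    if p == 0.5:
        return stake * (target - stake)
    win = ruin_probability(p, stake, target)
    return (stake - target * win) / (1 - 2 * p)

# Reproduces the table above, to rounding.
for p in (0.5, 0.495, 0.49, 0.47):
    print(p, round(ruin_probability(p), 6), round(expected_rounds(p)))
```

The formulas make the earl’s intuition precise: against a biased coin, many small bets compound the bias against you, while one big bet faces it only once.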

Bernoulli trials, the mathematical abstraction of coin tossing, are one of the simplest yet most important random processes in probability. Such a sequence is an indefinitely repeatable series of events each with the same probability (p). More formally, it satisfies the following conditions:

- Each trial has two possible outcomes, generically called success and failure.
- The trials are independent. The outcome of one trial does not influence the outcome of the next.
- On each trial, the probability of success is p and the probability of failure is 1 – p.

Now, what this has to do with **α** theory is, in a word, everything: **Eustace’s thermodynamic state changes can be modeled as a series of Bernoulli trials.** Instead of tracking Eustace’s monetary wealth, we’ll track his alpha wealth. And instead of following him through progressively fancier games in the casino, we’ll follow him through increasing levels of complexity, stability and chemical kinetics.

Inside Eustace molecules whiz about. Occasionally there is a transforming collision. We know that for each such collision there are thermodynamic consequences that allow us to calculate alpha. When the alpha of a system increases, its complexity and stability also increase. The probability of success, p, is the chance that a given reaction occurs.

Each successful reaction may create products that have the ability to enable other reactions, in a positive feedback loop, because complex molecules have more ways to interact chemically than simple ones. It takes alpha to make alpha, just as it takes money to make money.

At each floor of the casino, the game begins anew but with greater wealth and a different p. Enzymes, for example, which catalyze reactions, often by many orders of magnitude, can be thought of simply as extreme Bernoulli biases, or increases in p.

If you’ve ever wondered how life sprang from the primordial soup, well, this is how. Remember that **Eustace is any arbitrary volume of space through which energy can flow**. Untold numbers of Eustaces begin in the basement — an elite few eventually reach the penthouse suite.

Richard Dawkins, in *The Blind Watchmaker*, describes the process well, if a bit circuitously, since he spares the math. Tiny changes in p produce very large changes in ultimate outcomes, provided you engage in enough trials. And we are talking about *a whole lot of trials* here. Imagine trillions upon trillions of Eustaces, each hosting trillions of Bernoulli trials, and suddenly the emergence of complexity seems a lot less mysterious. You don’t have to be anywhere near as lucky as you think. Of course simplicity can emerge from complexity too. No matter how high you rise in Casino Alpha, you can always still lose the game.
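To see how hard it is to climb out of the basement without a catalyst, consider the chance that at least one successful reaction occurs in a long run of Bernoulli trials. The rates below are invented purely for illustration; the point is the arithmetic, not the chemistry:

```python
import math

# Effect of an enzyme-like bias in p over many trials.
# Both probabilities and the trial count are hypothetical.

p_uncatalyzed = 1e-10   # chance a given collision is a successful reaction
enhancement = 1e8       # catalytic rate enhancement (within the range cited above)
trials = 10 ** 8        # number of collisions (Bernoulli trials)

def p_at_least_one(p, n):
    """Probability of at least one success in n independent trials,
    computed as 1 - (1-p)^n in a numerically stable way."""
    return -math.expm1(n * math.log1p(-p))

print(p_at_least_one(p_uncatalyzed, trials))                # ≈ 0.00995: almost hopeless
print(p_at_least_one(p_uncatalyzed * enhancement, trials))  # ≈ 1.0: virtually certain
```

A bias of eight orders of magnitude turns a roughly one-percent long shot into a near certainty, which is why catalysts matter so much more than luck.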

Alpha theory asserts that the choreographed arrangements we observe today in living systems are a consequence of myriad telescoping layers of alphatropic interactions; that the difference between such systems and elementary Eustaces is merely a matter of degree. Chemists have understood this for a long time. Two hundred years ago they believed that compounds such as sugar, urea, starch, and wax required a mysterious “vital force” to create them. Their very terminology reflected this. Organic, as distinct from inorganic, chemistry was the study of compounds possessing this vital force. In 1828 Friedrich Wöhler converted ammonium and cyanate ions into urea simply by heating the reactants in the absence of oxygen:

**NH_{4}^{+} + OCN^{−} → CO(NH_{2})_{2}**

Urea had always come from living organisms and was presumed to contain the vital force, yet its precursors proved to be inorganic. Some skeptics claimed that a trace of vital force from Wöhler’s hands must have contaminated the reaction, but most scientists recognized the possibility of synthesizing organic compounds from simpler inorganic precursors. More complicated syntheses were carried out and vitalism was eventually discarded.

More recently, scientists grappled with how such reactions that sometimes require extreme conditions can take place in a cell. In 1961, Peter Mitchell proposed that the synthesis of ATP occurred due to a coupling of the chemical reaction with a proton gradient across a membrane. Experiments eventually confirmed this hypothesis, and Mitchell received the 1978 Nobel Prize in Chemistry for his work.

I already claimed that changes in **α** can be measured in theory. Thanks to chemical kinetics and probability, they can be pretty well approximated in practice too. Soon, very soon, we will come to assessing the **α** consequences of actual systems, and these tools will prove their mettle.

One more bit of preamble, in which it will be shown that all randomness is not equally random, and we will begin to infringe philosophy’s sacred turf.

**3. Randomness, Two Kinds**

*We introduce two general (but not exhaustive) classes of random processes. Gaussian (continuous) randomness can be dealt with by a non-anticipating strategy of continuous adjustment. Relatively primitive devices like thermostats manage this quite nicely. Poisson (discontinuous) randomness is a fiercer beast. It can, at best, only be estimated via thresholds. Every Eustace, to sustain itself, must constantly reconfigure in light of the available information, or filtration. We introduce the term alpha model to describe this process.*

Graph a series of Bernoulli trials, plotting cumulative winnings over time. Provided the probability of winning is strictly between 0 and 1, the path will veer back and forth unpredictably.

You are observing Gaussian randomness in action. Gaussian processes of this kind have continuous sample paths, meaning, approximately, that you can draw them without lifting your pencil from the paper. (Continuous does not mean smooth: Brownian motion is continuous everywhere and differentiable nowhere.)

The most famous and widely studied of all Gaussian processes is Brownian motion, discovered by the botanist Robert Brown in 1827, which has had a profound impact on almost every branch of science, both physical and social. Its first important applications were made shortly after the turn of the last century by Louis Bachelier and Albert Einstein.

Bachelier wanted to model financial markets; Einstein, the movement of a particle suspended in liquid. Einstein was looking for a way to measure Avogadro’s number, and the experiments he suggested proved to be consistent with his predictions. Avogadro’s number turned out to be very large indeed — a teaspoon of water contains about 2×10^{23} molecules.

Bachelier hoped that Brownian motion would lead to a model for security prices that would provide a sound basis for option pricing and hedging. This was finally realized, sixty years later, by Fischer Black, Myron Scholes and Robert Merton. It was Bachelier’s idea that led to the discovery of non-anticipating strategies for tackling uncertainty. Black *et al.* showed that if a random process is Gaussian, it is possible to construct a non-anticipating strategy to eliminate randomness.

Theory and practice were reconciled when Norbert Wiener directed his attention to the mathematics of Brownian motion. Among Wiener’s many contributions is the first proof that Brownian motion exists as a rigorously defined mathematical object, rather than merely as a physical phenomenon for which one might pose a variety of models. Today *Wiener process* and *Brownian motion* are considered synonyms.

Back to Eustace.

Eustace plays in an uncertain world, his fortunes dictated by random processes. For any Gaussian process, it is possible to tame randomness without anticipating the future. Think of the quadrillions of Eustaces floating about, all encountering continuous changes in pH, salinity and temperature. Some will end up in conformations that mediate the disruptive effects of these Gaussian fluctuations. Such conformations will have lower overall volatility, less positive entropy, and, consequently, higher alpha.

Unfortunately for Eustace, all randomness is not Gaussian. Many random processes have a Poisson component as well. Unlike continuous Gaussian processes, disruptive Poisson processes exhibit completely unpredictable jump discontinuities. You cannot draw them without picking up your pencil.
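The difference is easy to sketch (all parameters below are invented): Gaussian increments scale with the square root of the time step, so the finer you sample, the smaller the largest step becomes; a Poisson jump, when it arrives, is full-sized no matter how finely you sample.

```python
import random

random.seed(0)

def brownian_increments(n, dt):
    """Gaussian increments: each has standard deviation sqrt(dt),
    so the largest step shrinks as the sampling interval dt shrinks."""
    return [random.gauss(0.0, dt ** 0.5) for _ in range(n)]

def poisson_jump_increments(n, dt, rate=5.0, jump=1.0):
    """Poisson-style increments: almost always 0, but when a jump
    arrives it is full-sized, however small dt is."""
    return [jump if random.random() < rate * dt else 0.0 for _ in range(n)]

for dt in (0.1, 0.0001):
    gauss_max = max(abs(x) for x in brownian_increments(int(1 / dt), dt))
    jump_max = max(poisson_jump_increments(int(1 / dt), dt))
    print(f"dt={dt}: largest Gaussian step {gauss_max:.3f}, largest jump {jump_max}")
```

In the limit the Gaussian path becomes a continuous curve; the Poisson path keeps its discontinuities. That is why continuous adjustment tames the one and not the other.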

Against Poisson events a non-anticipating strategy, based on continuous adjustment, is impossible. Accordingly they make trouble for all Eustaces, even human beings. Natural Poisson events, like tornadoes and earthquakes, cost thousands of lives. Financial Poisson events cost billions of dollars. The notorious hedge fund Long-Term Capital Management collapsed because of a Poisson event in August 1998, when the Russian government announced that it intended to default on its sovereign debt. Bonds that were trading around 40 sank within minutes to single digits. LTCM’s board members, ironically, included Robert Merton and Myron Scholes, the masters of financial Gaussian randomness. Yet even they were defeated by Poisson.

All hope is not lost, however, since any Poisson event worth its salt affects its surroundings by generating disturbances before it occurs. Eustaces configured to take these hints will have a selective advantage. Consider a moderately complex Eustace — a wildebeest, say. For wildebeests, lions are very nasty Poisson events; there are no half-lions or quarter-lions. But lions give off a musky stink and sometimes rustle in the grass before they pounce, and wildebeests that take flight on these signals tend to do better in the alpha casino than wildebeests that don’t.

Even the simplest organisms develop strategies against both kinds of randomness: buffers mediate continuous fluctuations in pH and salinity, while threshold-triggered capabilities like chemotaxis are a response to Poisson dynamics.

A successful Eustace must mediate two different aspects of randomness: Gaussian and Poisson. Gaussian randomness generates events continuously; Poisson randomness generates them intermittently. Strategies against Gaussian randomness can be adjusted constantly, while a response to a Poisson event must be based on *thresholds* for signals. Neither of these configurations is fixed. Eustace is a collection of coupled processes, so in addition to responding to external events, some processes may be coupled to other internal processes that lead to configuration changes.

We will call this choreographed ensemble of coupled processes an **alpha model**. Within the context of our model, we can see a path to the tools of information theory where the numerator for alpha represents stored information and new information, and the denominator represents error and noise. The nature of this path will be the subject of the next installment.

**4. Solutions, Two Kinds**

*Increasingly complex organisms have evolved autonomous systems that mediate blood pressure and pH while developing threshold-based systems that effectively adapt filtrations to mediate punctuated processes like, say, predators. We introduce strong and weak solutions and explain the role of each. Weak solutions do not offer specific actionable paths but they do cull our possible choices. Strong solutions are actionable paths but a strong solution that is not adapted to the available filtration will likely be sub-optimal. Successful strong solutions can cut both ways. Paths that served us well in the past, if not continuously adapted, can grow confining. An extreme example, in human terms, is dogmatism. Alpha models must adapt to changing filtrations. Each generation must question the beliefs, traditions, and fashions of the generations that preceded it.*


A strong solution is any specified trajectory for a random process. In our coin flipping game it would be the realized sequence of heads and tails. Of course Eustace can’t know such a path in advance. The best he can do is to construct a distribution of possible outcomes. This distribution is a **weak solution**, which is defined, not by its path, which is unknown, but only by the moments of a probability distribution. If Eustace knows a random process is stationary, he has confidence that the moments of the process will converge to the same values every time. The coin flipping game, for instance, is stationary: its long term average winnings, given a fair coin, will always converge to zero. Looking into an uncertain future, Eustace is always limited to a weak solution: it is specified by the expectations, or *moments*, of the underlying random process. The actual path remains a mystery.

So far we haven’t given poor Eustace much help. A weak “solution” is fine for mathematics; but being a mere cloud of possibilities, it is, from Eustace’s point of view, no solution at all. (A Eustace entranced by the weak solution is what we commonly call a perfectionist.) Sooner rather than later, he must risk a strong solution. He must chart a course: he must act.

Well then, on what basis? Probability theory has a term for this too. The accumulated information on which Eustace can base his course is called a filtration. A filtration represents all information available to Eustace when he chooses a course of action. Technically, it is an increasing family of σ-algebras, one for each moment, each containing every event distinguishable from the history so far. The more of the available filtration Eustace uses, the better he does in the casino.

In the coin flipping game, Eustace’s filtration includes, most obviously, the record of the previous flips. Of course in this case the filtration doesn’t help him predict the next flip, but it does help him predict his overall wins and losses. If Eustace wins the first flip (*t=1*), he knows that after the next flip (*t=2*), he can’t be negative. This is more information than he had when he started (*t=0*). If the coin is fair, Eustace has an equal likelihood of winning or losing $1. Therefore, the expected value of his wealth at any point is simply what he has won up to that point. The past reveals what Eustace has won. The future of this stationary distribution is defined by unchanging moments. In a fair game, Eustace can expect to make no money no matter how lucky he feels.

His filtration also includes, less obviously, the constraints of the game itself. Recall that if he wins $100 he moves to a better game and if he loses $100 he’s out in the street. To succeed he must eliminate paths that violate known constraints; a path to riches, for instance, that requires the casino to offer an unlimited line of credit is more likely a path to the poorhouse.

We can summarize all of this with two simple equations:

**E(wealth@t | F@t-1) = wealth@t-1** (first moment)

**variance(wealth@t | F@t-1) = 1** (second moment)

The expected wealth at any time *t* is simply the wealth Eustace has accumulated up until time *t-1*. **E** is *expected value*. **t** is commonly interpreted as a time index. More generally, it is an index that corresponds to the size of the filtration, **F**. **F** accumulates the set of distinguishable events in the realized history of a random process. In our coin game, the outcome of each flip adds information to Eustace’s filtration.
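The two moment equations are easy to check empirically. Here is a minimal simulation of my own (none of the variable names come from the theory): conditional on everything Eustace knows at *t-1*, the increment to his wealth at *t* averages zero with variance one.

```python
import random

random.seed(0)

# Each flip of a fair coin moves Eustace's wealth by +1 or -1.
# Conditional on the filtration F@t-1, the increment at t has
# mean 0 and variance 1 -- exactly the two moment equations.
trials = 100_000
increments = [1 if random.random() < 0.5 else -1 for _ in range(trials)]

mean_inc = sum(increments) / trials
var_inc = sum((x - mean_inc) ** 2 for x in increments) / trials

print(mean_inc)  # close to 0: E(wealth@t | F@t-1) = wealth@t-1
print(var_inc)   # close to 1: variance(wealth@t | F@t-1) = 1
```

However lucky Eustace feels, the simulated averages drift back toward the theoretical moments.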

We have also assumed that when Eustace’s wealth reaches zero he must stop playing. Game over. There is always a termination point, though it need not always be zero; maybe Eustace needs to save a few bucks for the bus ride home. Let’s give this point a name; call it *wealth_{c}* (critical). Introducing this term into our original equation for expected wealth, we now have:

**max E(wealth@t – wealth_{c} | F@t-1)**

His thermodynamic environment works the same way as the casino. In the casino, Eustace can’t blindly apply any particular strong solution — an *a priori* fixed recipe for a particular sequence of hits and stands at the blackjack table. Each card dealt in each hand will, or should, influence his subsequent actions in accordance with the content of his filtration. The best strategy is always the one with **max E(wealth@t | F@t-1)** at each turn. In this case, *F@t-1* represents the history of dealt cards.

As Eustace graduates to higher levels of the casino, the games become more complex. Eustace needs some way of accommodating histories: inflexibility is a certain path to ruin. Card-counters differ from suckers at blackjack only by employing a more comprehensive model that adapts to the available filtration. They act on more information — the history of the cards dealt, the number of decks in the chute, the number of cards that are played before a reshuffle. By utilizing all the information in their filtration, card counters can apply the optimal strong solution every step of the way.
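A toy illustration of why the filtration pays (my own example; the shoe below is made up, and real card counting is far more involved). Dealing without replacement means every observed card changes the conditional odds; the counter updates, the sucker doesn’t:

```python
from fractions import Fraction

# Toy "shoe": 4 high cards and 4 low cards, dealt without replacement.
# The sucker always uses the unconditional P(high) = 1/2; the counter
# conditions on the cards already seen -- his filtration.
def counter_prob(high_left, low_left):
    return Fraction(high_left, high_left + low_left)

print(counter_prob(4, 4))  # 1/2 before any card is dealt
print(counter_prob(4, 1))  # 4/5 after three low cards have been seen
```

After three low cards leave the shoe, the counter knows the next card is high with probability 4/5; the sucker still bets as if it were even money.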

In the alpha casino, Eustace encounters myriad random processes. His ability to mediate the effects of these interactions is a direct consequence of the configuration of his alpha model. The best he can hope to do is accommodate as much of the filtration into this model as he can to generate the best possible response. Suboptimal responses will result in smaller gains or greater losses of alpha. We will take up the policy implications of all this in the next section.

**5. The Universal Law of Life**

*We arrive at the universal maximization function. We introduce the concept of alpha*, or estimated alpha, and epsilon, the difference between estimated and actual alpha. Behavior and ethics are defined by alpha* and alpha, respectively. All living things maximize alpha*, and all living things succeed insofar as alpha* approximates alpha. From here we abstract the three characteristics of all living things. They can generate alpha (alphatropic). They can recognize and respond to alpha (alphaphilic). And they can calibrate responses to alpha to minimize epsilon (alphametric).*

–Aristotle

The moment has arrived. We’ve invoked the laws of thermodynamics, probability and the law of large numbers; we can now state the *Universal Law of Life*. We define a utility function for *all* living systems:

**max E([α – α_{c}]@t | F@t-1)**

E is expected value. α is alpha, the equation for which I gave earlier. α_{c}, or alpha critical, is a direct analogue to wealth_{c}. If you’ve ever played pickup sticks or Jenga, you know there comes a point in the game where removing one more piece causes the whole structure to come tumbling down. So it is for any Eustace. At some point the disruptive forces overwhelm the stabilizing forces and all falls down. This is alpha critical, and below it the game is over.

F is the filtration, and F@t-1 represents all of the information available to Eustace as of *t – 1*. *t* is the current time index.

You will note that this is almost identical to the wealth maximization equation given in the previous section. We have simply substituted one desirable, objective, measurable term, alpha, for another, wealth. In Alphabet City or on Alpha Centauri, **living systems configured to maximize this function will have the greatest likelihood of survival.** Bacteria, people, and as-yet undiscovered life forms on Rigel 6 all play the same game.

To maximize its sustainability, Eustace must be:

*alphatropic:* can generate alpha from available free energy. Living organisms are alphatropic at every scale. They are all composed of a cell or cells that are highly coordinated, down to the organelles. The thermodynamic choreography of the simplest virus, in alpha terms, is vastly more elaborate than that of the most sophisticated machined devices. (This is an experimentally verifiable proposition, and alpha theory ought to be subject to verification.)

*alphaphilic:* can recognize and respond to sources of alpha. A simple bacterium may use chemotaxis to follow a maltose gradient. Human brains are considerably more complex and agile. Our ability to aggregate data through practice, *to learn*, allows us to model a new task so well that eventually we may perform it subconsciously. Think back to your first trip to work and how carefully you traced your route. Soon after, there were probably days when you didn’t even remember the journey. Information from a new experience was collected, and eventually, the process was normalized. We accumulate countless Poisson rules and normalize them throughout our lives. As our model grows, new information that fits cleanly within it is easier to digest than information that challenges or contradicts it. (Alpha theory explains, for example, resistance to alpha theory.)

*alphametric:* calibrates “appropriate” responses to fluctuations in alpha. (I will come to what “appropriate” means in a moment.) In complex systems many things can go wrong — by wrong I mean alphadystropic. If a temperature gauge gives an erroneous reading in an HVAC system, the system runs inefficiently and is more prone to failure. Any extraneous complexity that does not increase alpha has a thermodynamic cost that weighs against it. Alpha theory in no way claims that living systems *a priori* know the best path. It claims that alpha and survivability are directly correlated.

Poisson strategies or “Poisson rules” are mnemonics, guesses, estimates; these will inevitably lead to error. Poisson rules that are adapted to the filtration will be better than wild guesses. The term itself is merely a convenience. It in no way implies that all randomness fits into neat categories but rather emphasizes the challenges of discontinuous random processes.

There are often many routes to get from here to there. Where is there? **The destination is always maximal alpha given available free energy.** To choose this ideal path, Eustace would need to know every possible conformation of energy. Since this is impossible in practice, Eustace must follow the best path based on his alpha model. Let’s call this alpha quantity α* (**alpha***). We can now introduce an error term, ε (**epsilon**).

**ε = |α – α*|**

Incomplete or incorrect information increases Eustace’s epsilon; more correct or accurate information decreases it. “Appropriate” action is based on a low-epsilon model. All Eustaces act to maximize alpha*: they succeed insofar as alpha* maps to alpha.
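A toy numerical sketch of epsilon (entirely my own invention, just to fix the idea): two alpha models of the same process, one of which uses more of the filtration than the other.

```python
# Hypothetical "true" alpha of some state -- unknowable in practice:
def true_alpha(state):
    return 2.0 * state + 1.0

# Two alpha models (alpha*); the refined one incorporates more information.
def model_crude(state):
    return 2.0 * state

def model_refined(state):
    return 2.0 * state + 0.9

state = 3.0
eps_crude = abs(true_alpha(state) - model_crude(state))      # epsilon = |alpha - alpha*|
eps_refined = abs(true_alpha(state) - model_refined(state))

print(round(eps_crude, 2), round(eps_refined, 2))  # 1.0 0.1
```

The better-informed model has the smaller epsilon, and its actions map more closely onto actual alpha.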

This is an extravagant claim, which I may as well put extravagantly: **Alpha* defines behavior, and alpha defines ethics, for all life that adheres to the laws of thermodynamics.** Moral action turns out to be objective after all. All psychological intuitions have been rigorously excluded — no “consciousness,” no “self,” no “volition,” and certainly no “soul.” Objective measure and logical definition alone are the criteria for the validity of alpha. There is, to be sure, nothing intuitively unreasonable in the derivation; but the criterion for mathematical acceptability is logical self-consistency rather than simple reasonableness of conception. Poincaré said that had mathematicians been left prey to abstract logic, they would never have gone beyond number theory and geometry. It is nature, in our case thermodynamics, that opens mathematics as a tool for understanding the world.

Alpha proposes a bridge that links the chemistry and physics of individual molecules to macromolecules to primitive organisms all the way through to higher forms of life. Researchers and philosophers can look at the same questions they always have, but with a rigorous basis of reference. To indulge in another computer analogy, when I program in a high-level language I ultimately generate binary output, long strings of zeros and ones. The computer cares only about this binary output. Alpha is the binary output of living systems.

Through alpha we can reconstitute the mechanisms that prevailed in forming the first large molecules — the molecules known to be the repositories of genetic information in every living cell throughout history. In the early 1960s the work of Jacob, Monod, and Gros demonstrated the role of messenger ribonucleic acid in carrying the information, stored in deoxyribonucleic acid, that is the basis of protein synthesis. Then the biologists Thomas Cech and Sidney Altman earned the Nobel Prize in Chemistry in 1989 for showing that RNA molecules possess properties that were not originally noticed: *they can reorganize themselves without any outside intervention.*

All of this complexity stems from the recursive application of a simple phenomenon. Alpha’s story has never changed and, so long as the laws of thermodynamics continue to hold, it never will. But recursive simplicity can get awfully complicated, as anyone who’s ever looked at a fractal can tell you. Remember that *t* and *t – 1* in the maximization function change constantly; it’s a fresh Bernoulli trial all the time. Human beings calculate first-order consequences pretty well, second-order consequences notoriously badly, and the third order is like the third bottle of wine: all bets are off. This is why we need alpha models, and why we can maximize only alpha star, not alpha itself. Alpha is not a truth machine. It is one step in the process of abstracting the real fundamentals from all the irrelevant encumbrances in which intuition tangles us. There is a lot of moral advice to be derived from alpha theory, some of which I will offer in the next section, but for now: Look at your alpha model. (Objectivists will recognize this as another form of “check your premises.”)

But for those of you who want some cash value right away — and I can’t blame you — the definition of life falls out immediately from alpha theory. Alpha is a dimensionless number. Living systems, even primitive ones, have an immensely higher such number than machines. **Life is a number.** (We don’t know its value of course, but this could be determined experimentally.) Erwin Schrödinger, in his vastly overrated book *What Is Life?*, worried this question for nearly 100 pages without answering it. Douglas Adams, in *The Hitchhiker’s Guide to the Galaxy*, claimed that the meaning of life is 42. He got a lot closer than Schrödinger.

**6. Entropy and Information**

*A few words about the necessity for equations. We define entropy and correlate its meanings in statistical mechanics and in communication theory. We slay and reincarnate Maxwell’s Demon to show, mathematically, how Eustace constantly refines his alpha model, on the basis of feedback, in the direction of accuracy, consistency, and elegance.*

Suppose you ask me what gravity is, and I tell you it is a force that makes objects attract one another. At this point you are as badly off as you were before. Do only certain objects attract each other? How strong is this “attraction”? On what does it depend? In what proportions?

Now I give a better answer. Gravity is a force that attracts all objects directly as the product of their masses and inversely as the square of the distance between them. I may have to backtrack a bit and explain what I mean by “force,” “mass,” “directly,” “inversely,” and “square,” but finally we’re getting somewhere. All of a sudden you can answer every question in the previous paragraph.

Of course I am no longer really speaking English. I’m translating an equation, F_{g} = G*(m_{1}*m_{2})/r^{2}. It turns out that we’ve been asking the wrong question all along. We don’t really care what gravity is; there is some doubt that we even *know* what gravity is. We care about how those objects with m’s (masses) and r’s (distances) act on each other. The cash value is in all those little components on the right side of the equation; the big abstraction on the left is just a notational convenience. We write F_{g} (gravity) so we don’t have to write all the other stuff. You must substitute, mentally, the right side of the equation whenever you encounter the term “gravity.” Gravity is what the equation defines it to be, and that is all. And so with alpha.
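The “cash value” of the right side is that you can actually compute with it. A sketch, using standard textbook values for the constants:

```python
# F = G * m1 * m2 / r^2, evaluated for the Earth-Moon pair.
G = 6.674e-11        # gravitational constant, N*m^2/kg^2
m_earth = 5.972e24   # mass of Earth, kg
m_moon = 7.348e22    # mass of Moon, kg
r = 3.844e8          # mean Earth-Moon distance, m

F = G * m_earth * m_moon / r**2
print(f"{F:.2e} N")  # on the order of 2e20 newtons
```

Nothing on the left side of the equation was consulted; the m’s and r’s did all the work.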

In a common refrain of science popularizers, Roger Penrose writes, in the preface to *The Road to Reality*: “Perhaps you are a reader, at one end of the scale, who simply turns off whenever a mathematical formula presents itself… If so, I believe that there is still a good deal that you can gain from this book by simply skipping all the formulae and just reading the words.” Penrose is having his readers on. In fact if you cannot read a formula you will not get past Chapter 2. There is no royal road to geometry, or reality, or even to alpha theory.

Entropy is commonly thought of as “disorder,” which leads to trouble, even for professionals. Instead we will repair to Ludwig Boltzmann’s tombstone and look at the equation:

**S = k log W**

S is entropy itself, the big abstraction on the left that we will ignore for the time being. The right-hand side, as always, is what you should be looking at, and the tricky part there is *W*. *W* represents the number of equivalent microstates of a system. So what’s a microstate? Boltzmann was dealing with molecules in a gas. If you could take a picture of the gas, showing each molecule, at a single instant — you can’t, but if you could — that would be a microstate. Each one of those tiny suckers possesses kinetic energy; it careers around at staggering speeds, a thousand miles an hour or more. The temperature of the gas is proportional to the average of all those miniature energies, and that average is the macrostate. Occasionally two molecules will collide. The first slows down, the second speeds up, and the total kinetic energy is a wash. Different (but equivalent) microstates, same macrostate.

The number of microstates is enormous, as you might imagine, and the rest of the equation consists of ways to cut it down to size. *k* is Boltzmann’s constant, a tiny number, 10^{-23} or so. The purpose of taking the logarithm of *W* will become apparent when we discuss entropy in communication theory.
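In code (a sketch; the W below is absurdly small for a real gas, where microstate counts are astronomical):

```python
import math

k_B = 1.380649e-23   # Boltzmann's constant, J/K

W = 10 ** 20             # number of equivalent microstates
S = k_B * math.log(W)    # S = k log W

# Doubling the microstate count adds exactly k*log(2): the logarithm
# turns multiplication of possibilities into addition of entropy.
delta = k_B * math.log(2 * W) - S
print(S, delta)
```

The tiny constant *k* and the logarithm together tame an enormous count into a usable number of joules per kelvin.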

An increase in entropy is usually interpreted, in statistical mechanics, as a decrease in order. But there’s another way to look at it. In a beaker of helium, there are far, far fewer ways for the helium molecules to cluster in one corner at the bottom than there are for them to mix throughout the volume. More entropy decreases order, sure, but it also decreases our ability to succinctly describe the system. The greater the number of possible microstates, the higher the entropy, and the smaller the chance we have of guessing the particular microstate in question. The higher the entropy, the less we know.

And this, it turns out, is how entropy applies in communication theory. (I prefer this term, as its chief figure, Claude Shannon, did, to “information theory.” Communication theory deals strictly with how some message, any message, is transmitted. It abstracts away from the specific content of the message.) In communication theory, we deal with signals and their producers and consumers. For Eustace, a signal is any modulatory stimulus. For such a stimulus to occur, *energy must flow*.

Shannon worked for the telephone company, and what he wanted to do was create a theoretical model for the transmission of a signal — over a wire, for the purposes of his employer, but his results generalize to any medium. He first asks what the smallest piece of information is. No math necessary to figure this one out. It’s yes or no. The channel is on or off, Eustace receives a stimulus or he doesn’t. This rock-bottom piece of information Shannon called a *bit*, as computer programmers still do today.

The more bits I send, the more information I can convey. But the more information I convey, the less certain you, the receiver, can be of what message I will send. **The amount of information conveyed by a signal correlates with the uncertainty that a particular message will be produced, and entropy, in communication theory, measures this uncertainty.**

Suppose I produce a signal, you receive it, and I have three bits to work with. How many different messages can I send you? The answer is eight:

000

001

010

011

100

101

110

111

Two possibilities for each bit, three bits, 2^{3}, eight messages. For four bits, 2^{4}, or 16 possible messages. For n bits, 2^{n} possible messages. The relationship, in short, is logarithmic. If W is the number of possible messages, then log W (taking logarithms to base 2) is the number of bits required to send them. Shannon measures the entropy of the message, which he calls H, in bits, as follows:

**H = log W**

Look familiar? It’s Boltzmann’s equation, without the constant. Which you would expect, since each possible message corresponds to a possible microstate in one of Boltzmann’s gases. In thermodynamics we speak of “disorder,” and in communication theory of “information” or “uncertainty,” but the mathematical relationship is identical. From the above equation we can see that if there are eight possible messages (W), then there are three bits of entropy (H).
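You can enumerate the eight messages and confirm the count yourself (a quick sketch):

```python
import math
from itertools import product

# All messages that three bits can carry.
messages = [''.join(bits) for bits in product('01', repeat=3)]
W = len(messages)
H = math.log2(W)  # entropy in bits, assuming equiprobable messages

print(W, H)  # 8 3.0
```

Eight possible messages, three bits of entropy, exactly as the equation says.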

I have assumed that each of my eight messages is equally probable. This is perfectly reasonable for microstates of molecules in a gas; not so reasonable for messages. If I happen to be transmitting English, for example, “a” and “e” will appear far more often than “q” or “z,” vowels will tend to follow consonants, and so forth. In this more general case, we have to apply the formula to each possible message and add up the results. The general equation, Shannon’s famous theorem of a noiseless channel, is

**H = – (p_{1}log p_{1} + p_{2}log p_{2} + … + p_{W}log p_{W})**

where W is, as before, the number of possible messages, and p is the probability of each. The right side simplifies to log W when each p term is equal, which you can calculate for yourself or take my word for. Entropy, H, assumes the largest value in this arrangement. This is the case with my eight equiprobable messages, and with molecules in a gas. Boltzmann’s equation turns out to be a special case of Shannon’s. (This is only the first result in Shannon’s theory, to which I have not remotely done justice. Pierce gives an excellent introduction, and Shannon’s original paper, “The Mathematical Theory of Communication,” is not nearly so abstruse as its reputation.)
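Shannon’s formula is a one-liner (a sketch; the skewed distribution below is made up for illustration, not a set of real letter frequencies):

```python
import math

def entropy(probs):
    """H = -sum(p * log2 p), in bits; terms with p = 0 contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Equiprobable: reduces to log2(W), Boltzmann's special case.
print(entropy([1/8] * 8))                    # 3.0
# Skewed: same W = 4 messages, but less uncertainty, so lower entropy.
print(entropy([0.5, 0.25, 0.125, 0.125]))    # 1.75, versus log2(4) = 2
```

The equiprobable case gives the maximum, log W; any lopsidedness in the probabilities can only lower H.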

This notion of “information” brings us to an important and familiar character in our story, Maxwell’s demon. Skeptical of the finality of the Second Law, James Clerk Maxwell dreamed up, in 1867, a “finite being” to circumvent it. This “demon” (so named by Lord Kelvin) was given personality by Maxwell’s colleague at the University of Edinburgh, Peter Guthrie Tait, as an “observant little fellow” who could track and manipulate individual molecules. Maxwell imagined various chores for the demon and tried to predict their macroscopic consequences.

The most famous chore involves sorting. The demon sits between two halves of a partitioned box, like the doorman at the VIP lounge. His job is to open the door only to the occasional fast-moving molecule. By careful selection, the demon could cause one half of the box to become spontaneously warmer while the other half cooled. Through such manual dexterity, the demon seemed capable of violating the second law of thermodynamics. The arrow of time could move in either direction and the laws of the universe appeared to be reversible.
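The sorting chore is easy to caricature in code (my own cartoon, with mean speed standing in for “temperature”; a real demon, as the paragraphs below show, would pay for this trick in measurement and memory):

```python
import random

random.seed(1)

# Molecules with random speeds; the demon opens the door only to
# fast ones, so they collect on the right side of the partition.
speeds = [random.expovariate(1.0) for _ in range(10_000)]
threshold = sorted(speeds)[len(speeds) // 2]  # median speed

left = [s for s in speeds if s <= threshold]   # slow molecules stay put
right = [s for s in speeds if s > threshold]   # fast ones are let through

mean = lambda xs: sum(xs) / len(xs)
print(mean(left) < mean(right))  # True: the right half ends up "hotter"
```

The sorting itself is trivial; what the demon cannot get for free is the information the `threshold` comparison consumes at every door-opening.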

An automated demon was proposed by the physicist Marian von Smoluchowski in 1914 and later elaborated by Richard Feynman. Smoluchowski soon realized, however, that Brownian motion heated up his demon and prevented it from carrying out its task. In defeat, Smoluchowski still offered hope for the possibility that an *intelligent* demon could succeed where his automaton failed.

In 1929, Leo Szilard envisioned a series of ingenious mechanical devices that require only minor direction from an intelligent agent. Szilard discovered that the demon’s intelligence is used to *measure* — in this case, to measure the velocity and position of the molecules. He concluded (with slightly incorrect details) that this measurement creates entropy.

In the 1950s, the IBM physicist Leon Brillouin showed that, in order to decrease the entropy of the gas, the demon must first collect information about the molecules he watches. This itself has a calculable thermodynamic cost. By merely watching and measuring, the demon raises the entropy of the world by an amount that honors the second law. His findings coincided with those of Dennis Gabor, the inventor of holography, and our old friend, Norbert Wiener.

Brillouin’s analysis led to the remarkable proposal that information is not just an abstract, ethereal construct, but a real, physical commodity like work, heat and energy. In the 1980s this model was challenged by yet another IBM scientist, Charles Bennett, who proposed the idea of the reversible computer. Pursuing the analysis to the final step, Bennett was again defeated by the second law. Computation requires *storage*, whether on a transistor or a sheet of paper or a neuron. The *destruction* of this information, by erasure, by clearing a register, or by resetting memory, is *irreversible*.

Looking back, we see that a common mistake is to “prove” that the demon can violate the second law by permitting him to violate the *first* law. The demon must operate as part of the environment rather than as a ghost outside and above it.

Having slain the demon, we shall now reincarnate him. Let’s return for a moment to the equation, the Universal Law of Life:

**max E([α – α_{c}]@t | F@t-1)**

The set F@t- represents all information available at some time t in the past. So far I haven’t said much about E, expected value; now it becomes crucial. Eustace exists in space, which means he deals with energy transfers that take place at his boundaries. He has been known to grow cilia and antennae (and more sophisticated sensory systems) to extend his range, but this is all pretty straightforward.

Eustace also exists in time. His environment is random and dynamic. Our equation spans this dimension as well.

t- : the past

t : the present

t+ : the future (via the expectation operator, E)

t+ is where the action is. Eustace evolves to maximize the expected value of alpha. He employs an *alpha model*, adapted to information, to deal with this fourth dimension, time. The more information he incorporates, the longer the time horizon, the better the model. Eustace, in fact, stores and processes information in exactly the way Maxwell’s imaginary demon was supposed to. To put it another way, Eustace *is* Maxwell’s demon.

Instead of sorting molecules, Eustace sorts reactions. Instead of accumulating heat, Eustace accumulates alpha. And, finally, instead of playing a game that violates the laws of physics, Eustace obeys the rules by operating far from equilibrium with a supply of free energy.

Even the simplest cell can detect signals from its environment. These signals are encoded internally into messages to which the cell can respond. A paramecium swims toward glucose and away from noxious chemicals, responding to the molecules in its environment. These substances attract or repel the paramecium through positive or negative tropism; they direct movement along a gradient of signals. At a higher level of complexity, an organism relies on specialized sensory cells to decode information from its environment and generate an appropriate behavioral response. At a higher level still, it develops consciousness.

As Edelman and Tononi (p. 109) describe the process:

What emerges from [neurons’] interaction is an ability to construct a scene. The ongoing parallel input of signals from many different sensory modalities in a moving animal results in reentrant correlations among complexes of perceptual categories that are related to objects and events. Their salience is governed in that particular animal by the activity of its value systems. This activity is influenced, in turn, by memories conditioned by that animal’s history of reward and punishment acquired during its past behavior. The ability of an animal to connect events and signals in the world, whether they are causally related or merely contemporaneous, and, then, through reentry with its value-category memory system, to construct a scene that is related to its own learned history is the basis for the emergence of primary consciousness.

The short-term memory that is fundamental to primary consciousness reflects previous categorical and conceptual experiences. The interaction of the memory system with current perception occurs over periods of fractions of a second in a kind of bootstrapping: What is new perceptually can be incorporated in short order into memory that arose from previous categorizations. The ability to construct a conscious scene is the ability to construct, within fractions of seconds, a remembered present. Consider an animal in a jungle, who senses a shift in the wind and a change in jungle sounds at the beginning of twilight. Such an animal may flee, even though no obvious danger exists. The changes in wind and sound have occurred independently before, but the last time they occurred together, a jaguar appeared; a connection, though not provably causal, exists in the memory of that conscious individual.

An animal without such a system could still behave and respond to particular stimuli and, within certain environments, even survive. But it could not link events or signals into a complex scene, constructing relationships based on its own unique history of value-dependent responses. It could not imagine scenes and would often fail to evade certain complex dangers. It is the emergence of this ability that leads to consciousness and underlies the evolutionary selective advantage of consciousness. With such a process in place, an animal would be able, at least in the remembered present, to plan and link contingencies constructively and adaptively in terms of its own previous history of value-driven behavior. Unlike its preconscious evolutionary ancestor, it would have greater selectivity in choosing its responses to a complex environment.

Uncertainty is expensive, and a private simulation of one’s environment as a remembered present is exorbitantly expensive. Even at rest, the human brain requires approximately 20% of blood flow and oxygen, yet it accounts for only 2% of body mass. It needs more fuel as it takes on more work.

The way information is stored and processed affects its energy requirements and, in turn, alpha. Say you need to access the digits of π. The brute-force strategy is to store as many of them as possible and hope for the best. This is costly in terms of uncertainty, storage, and maintenance.

Another approach, from analysis, is to use the Leibniz formula:

**π /4 = 1 – 1/3 + 1/5 – 1/7 + 1/9 – …**

This approach, unlike the other, can in principle supply any digit of π, though the series converges slowly. And here you need only remember the odd numbers and an alternating series of additions and subtractions.
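The trade-off is easy to demonstrate (a sketch; note how many terms the series needs, which is part of the price of the compression):

```python
# pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...  (Leibniz)
def leibniz_pi(terms):
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

print(leibniz_pi(1_000_000))  # agrees with pi to roughly five decimal places
```

A few lines of rule replace an arbitrarily long table of stored digits: maximal information for minimal storage.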

Which method is more elegant and beautiful? Which is *easier*?

Human productions operate on this same principle of parsimony. Equations treat a complex relation among many entities with a single symbol. Concepts treat an indefinite number of percepts (or other concepts). Architects look at blueprints and see houses. A squiggle of ink can call up a mud puddle, or a bird in flight. The aim, in every case, is maximal information bang for minimal entropy buck.

In an unpredictable environment, decisions must be made with incomplete information. The epsilon of an alpha model depends on its accuracy, consistency and elegance. An accurate model corresponds well to the current environment, a consistent model reduces reaction time, and an elegant model reduces energy requirements. Everything, of course, is subject to change as the environment changes. The ability to adapt to new information and to discard outdated models is just as vital as the ability to produce models in the first place.

Thus Eustace generates his alpha* process, operating on some subset of F@t- where t is an index that represents the increasing set of available information F. As Eustace evolves, the complexity of his actions increases and his goals extend in space and time, coming to depend less on reflex and more on experience. He adapts to the *expected* value for alpha@t+, always working with an incomplete information set. As antennae extend into space, so Eustace’s alpha model extends into a predicted future constructed from an experienced past.

*What’s alpha all about, Alfie? Why are you boring us with this?*

The great biologist E.O. Wilson wrote a little book called *Consilience*, in which he argued that it was past time to apply the methods of science — notably quantification — to fields traditionally considered outside its purview, like ethics, politics, and aesthetics. Any blog reader can see that arguments on these subjects invariably devolve into pointless squabbling because no base of knowledge and no shared premises exist. Alpha theory is a stab at Wilson’s program.

*What kind of science could possibly apply to human behavior?*

Thermodynamics. Living systems can sustain themselves only by generating negative entropy. Statistical thermodynamics is a vast and complex subject, and you can’t very well give a course on it in a blog, but here’s a good introduction.

*Don’t we have enough ethical philosophies?*

Too many. The very existence of competing “schools” is the best evidence of failure. Of course science has competing theories as well, but it also has a large body of established theory that has achieved consensus. No astronomer quarrels with Kepler’s laws of planetary orbits. No biologist quarrels with natural selection. Philosophers and aestheticians quarrel over *everything*. Leibniz, who tried to develop a universal truth machine, wrote someplace that his main purpose in doing so was to shut people up. I see his point.

*Not a chance. Anyway, what’s alpha got that we don’t have already?*

A universal maximization function derived openly from physical laws, for openers. Two of them, actually. The first is for the way all living systems ought to behave; the second is for the way they do behave. To put the matter non-mathematically, every living system maximizes its sustainability by following the first equation. But in practice it is impossible to follow directly. Living beings aren’t mathematical demons and can’t calculate at the molecular level. They act instead on a model, a simplification. That’s the second equation. If the model is accurate, the living being does well for itself. If not, not.

*Sounds kinda like utilitarianism.*

Not really. But there are similarities. Like utilitarianism, alpha theory is consequentialist, maintaining that actions are to be evaluated by their results. (Motive, to answer a question in the previous comment thread, counts for nothing; but then why should it?) But utilitarianism foundered on the problem of commensurable units. There are no “utiles” by which one can calculate “the greatest happiness for the greatest number.” This is why John Stuart Mill, in desperation, resorted to “higher pleasures” and “lower pleasures,” neatly circumscribing his own philosophy. Alpha theory provides the unit.

Alpha also accounts for the recursive nature of making decisions, which classical ethical theories ignore altogether. (For example, short-circuiting the recursive process through organ harvesting actually *reduces* the fitness of a group.) Most supposed ethical “dilemmas” are arid idealizations, because they have only two horns: the problem has been isolated from its context and thus simplified. But action in the real world is not like that; success, from a thermodynamic perspective, requires a continuous weighing of the alternatives and a continuous adjustment of one’s path. Alpha accounts for this with the concept of strong and weak solutions and filtrations. Utilitarianism doesn’t. Neither does any other moral philosophy.

That said, Jeremy Bentham would, I am sure, sympathize with alpha theory were he alive today.

*You keep talking about alpha critical. Could you give an example?*

Take a live frog. If we amputate its arm, what can we say about the two separate systems? Our intuition says that if the frog recovers (repairs and heals itself) from the amputation, it is still alive. The severed arm will not be able to fully repair damage and heal. Much of the machinery necessary to coordinate processes and manage the requirements of the complicated arrangement of cells depends on other systems in the body of the frog. The system defined by the arm will rapidly decay below alpha critical. Now take a single cell from the arm and place it in a nutrient bath. Draw a volume around this cell and calculate alpha again. This entity, freed from the positive entropy of the decaying complexity of the severed arm, will *live*.

What about frogs that can be frozen solid and thawed? Are they alive while frozen? Clearly there is a difference between freezing these frogs and freezing a human. It turns out that cells in these frogs release a sugar that prevents the formation of ice crystals. Human cells, lacking this sugar, shear and die. We can use L’Hôpital’s Rule to calculate alpha as the numerator and denominator both approach some limiting value. As we chart alpha in our two subjects, there will come a point where the shearing caused by ice crystal formation will cause the positive entropy (denominator) in the human subject to spike through alpha critical. He will die. The frog, on the other hand, will approach a state of suspended animation. Of course, such a state severely reduces the frog’s ability to adapt.
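The limiting argument can be written out explicitly. The symbols here are illustrative, since the post fixes no notation: write $N(t)$ for the negative entropy the system generates and $D(t)$ for the positive entropy it produces as the temperature falls toward the freezing point $t_f$. Then, provided both quantities tend to the same limit (both to zero or both to infinity),

$$\alpha = \lim_{t \to t_f} \frac{N(t)}{D(t)} = \lim_{t \to t_f} \frac{N'(t)}{D'(t)}.$$

In the frog, both rates wind down together and the ratio settles at a finite value above alpha critical; in the human, ice-crystal shearing makes $D'(t)$ spike, and the ratio collapses below alpha critical.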

Or take a gas cloud. “You know, consider those gas clouds in the universe that are doing a lot of complicated stuff. What’s the difference [computationally] between what they’re doing and what we’re doing? It’s not easy to see.” (Stephen Wolfram, *A New Kind of Science*.)

Draw a three-dimensional mesh around the gas cloud and vary the grid spacing to calculate alpha. Do the same for a living system. No matter how the grid is varied, the alpha of the random particles of the gas cloud will not remotely match the alpha of a living system.
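The post does not say how alpha is actually computed on a mesh, so as a stand-in the sketch below bins three-dimensional points at several grid spacings and reports the Shannon entropy of cell occupancy. A random cloud stays near maximal entropy at every spacing; a structured, clustered system does not. Everything here is hypothetical illustration; alpha itself is not computed.

```python
import math
import random
from collections import Counter

def occupancy_entropy(points, spacing):
    """Shannon entropy (bits) of the grid-cell occupancy distribution."""
    cells = Counter((int(x // spacing), int(y // spacing), int(z // spacing))
                    for x, y, z in points)
    n = len(points)
    return -sum((c / n) * math.log2(c / n) for c in cells.values())

random.seed(0)
# A "gas cloud": 1,000 points scattered uniformly through a 10x10x10 box.
cloud = [(random.uniform(0, 10), random.uniform(0, 10), random.uniform(0, 10))
         for _ in range(1000)]
# A stand-in for organized matter: points tightly clustered around a center.
body = [(random.gauss(5, 0.3), random.gauss(5, 0.3), random.gauss(5, 0.3))
        for _ in range(1000)]

for spacing in (0.5, 1.0, 2.0):
    print(spacing, occupancy_entropy(cloud, spacing),
          occupancy_entropy(body, spacing))
```

However the grid is varied, the cloud’s occupancy entropy stays far above the cluster’s, which is the qualitative point: no choice of spacing makes the random system look like the organized one.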

*Enough with the frogs and gas clouds. Talk about human beings.*

Ah yes, “cash value.” I am reminded of a blessedly former business associate who interrupted a class in abstruse financial math to ask the professor, “But how does this get me closer to my Porsche?”

The first thing to recognize is that just about everything you now believe to be wrong probably *is* wrong, in alpha terms. Murder, robbery, and the like are obviously radically alphadystropic, because alpha states that the inputs always have to be considered. (So does thermodynamics.) If this weren’t true, you would have *prima facie* grounds for rejecting the theory. Evolution necessarily proceeds toward alpha maximization, and human beings have won many, many rounds in the alpha casino. Such universal rules as they have conceived are likely to be pretty sound by alpha standards.

These rules, however, are always prohibitions, never imperatives. This too jibes with alpha theory. Actions exist that are always alphadystropic; but no single action is always alphatropic. Here most traditional and theological thinking goes wrong. If such an action existed, we probably would have evolved to do it — constantly, and at the expense of all other actions. If alpha theory had a motto, it would be **there are no universal strong solutions.** You have to use that big, expensive glucose sink sitting in that thickly armored hemisphere between your ears. Isaiah Berlin’s concept of “negative liberty” fumbles toward this, and you cash-value types ought to be able to derive a theory of the proper scope of law without too much trouble.

Still more “cash value” lies in information theory, which is an application of thermodynamics. Some say thermodynamics is, rather, an application of information theory; but this chicken-and-egg argument does not matter for our purposes. We care only that they are homologous. We can treat bits the same way we treat energy.
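The exchange rate between bits and energy is, in fact, a settled physical result not original to this post. Landauer’s principle says that erasing one bit of information at temperature $T$ dissipates at least

$$E_{\min} = k_B T \ln 2$$

of heat, where $k_B$ is Boltzmann’s constant. This is what licenses treating bits and energy homologously.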

Now the fundamental problem of human action is incomplete information. The economists recognized this over a century ago but the philosophers, as usual, have lagged. To put it in alpha terms, they stopped incorporating new data into their filtration around 1850.

The alpha equation captures the nature of this problem. Its numerator is new information plus the negative entropy you generate from it; its denominator is positive entropy, what you dissipate. Numerator-oriented people are always busy with the next new thing; they consume newspapers and magazines in bulk and seem always to have forgotten what they knew the day before yesterday. This strategy can work — sometimes. Denominator-oriented people tend to stick with what has succeeded for them and rarely, if ever, modify their principles in light of new information. This strategy can also work — sometimes. The great trick is to be an alpha-oriented person. The Greeks, as so often, intuited all of this, lacking only the tools to formalize it. It’s what Empedocles is getting at when he says that life is strife, and what Aristotle is getting at when he says that right action lies in moderation.
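A toy reading of the ratio just described: numerator equals new information plus the negative entropy generated from it, denominator equals positive entropy dissipated. The post gives no formal definition, so the function name, arguments, and numbers below are all hypothetical illustration.

```python
def alpha(new_information, negentropy_generated, entropy_dissipated):
    """(new information + negentropy generated) / positive entropy dissipated."""
    return (new_information + negentropy_generated) / entropy_dissipated

# "Numerator-oriented": huge intake of the new, but heavy dissipation.
print(alpha(9.0, 1.0, 8.0))   # 1.25
# "Denominator-oriented": little intake, but very little dissipated.
print(alpha(1.0, 2.0, 2.5))   # 1.2
# Either strategy can come out ahead; the trick, per the post,
# is to raise the whole ratio rather than fixate on one side of it.
```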

Look around. Ask yourself why human beings go off the rails. Is it because we are perishing in an orgy of self-sacrifice, as the Objectivists would have it? Is it because we fail to love our neighbor as ourselves, as the Christians would have it? Or is it because we do our best to advance our interests and simply botch the job?

To understand alpha theory, you have to learn some math and science. To learn math and science, you have to read some books. Try these, in order of increasing difficulty:

*Complexity*, by Mitchell Waldrop. Complexity is why ethics is difficult, and Waldrop provides a gentle, anecdote-heavy introduction. Waldrop holds a Ph.D. in particle physics, but he concentrates on the personalities and the history of the complexity movement, centered at the Santa Fe Institute. If you don’t know from emergent behavior, this is the place to start.

*Cows, Pigs, Wars, and Witches*, by Marvin Harris. Hey! How’d a book on anthropology get in here? Harris examines some of the most spectacular, seemingly counter-productive human practices of all time — among them the Indian cult of the cow, tribal warfare, and witch hunts — and demonstrates their survival value. Are other cultures mad, or are the outsiders who think so missing something? A world tour of alpha star.

*Men of Mathematics*, by E.T. Bell. No subject is so despised at school as mathematics, in large part because its history is righteously excised from the textbooks. It is possible to take four years of math in high school without once hearing the name of a practicing mathematician. The student is left with the impression that plane geometry sprang fully constructed from the brain of Euclid, like Athena from the brain of Zeus. Bell is a useful corrective; his judgments are accurate and his humor is dry. Lots of snappy anecdotes — some of dubious provenance, though not so dubious as some of the more recent historians would have you believe — and no actual math. (OK, a tiny bit.) You might not believe that it would help you to know that Galois, the founder of group theory, wrote a large part of his output on the topic in a letter the night before he died in a duel, or that Euler, the most prolific mathematician of all time, managed to turn out his reams of work while raising thirteen children, to whom, by all accounts, he was an excellent father. But it does. Should you want to go on to solve real math problems, the books to start with, from easy to hard, are *How To Solve It*, by Pólya, *The Enjoyment of Mathematics*, by Rademacher and Toeplitz, and *What Is Mathematics?* by Courant and Robbins.

*The Eighth Day of Creation*, by Horace Freeland Judson. A history of the heroic age of molecular biology, from the late 1940s to the early 1970s. Judson does not spare the science, and he conveys a real understanding of biology as it’s practiced, as opposed to the way it’s tidied up in the textbooks. A much better book about the double helix than *The Double Helix*, which aggrandizes Watson and which none of the other participants could stand. Judson’s book has its purple passages, but on the whole the best book ever written on science by a non-scientist, period.

*A Universe of Consciousness*, by Gerald Edelman and Giulio Tononi. A complete biologically-based theory of consciousness in 200 dense but readable pages. Edelman and Tononi shirk none of the hard questions, and by the end they offer a persuasive account of how to get from neurons to qualia.

*Gödel’s Proof*, by Ernest Nagel and James Newman. Undecidability has become, after natural selection, relativity, and Heisenberg’s uncertainty principle, the most widely abused scientific idea in philosophy. (An excellent history of modern philosophy could be written treating it entirely as a misapplication of these four ideas.) Undecidability no more implies universal skepticism than relativistic physics implies relativistic morality. Nagel and Newman demystify Gödel in a mere 88 pages that anyone with high school math can follow, if he’s paying attention.

interesting stuff. you and i have a similar interests. i’m messing with looking at human interaction through a scientific lens. hypocrisy as a function of the theory of relativity. equal/opposite reaction to the expression of viewpoints (scumbags sound like liberty lovers, liberty lovers sound like scumbags). best of luck in your search for the truth.

I arrived at something similar as a measure of human psychological health but my process involved selective reading of Continental Philosophy and a pun. Any ideas of what variables to consider in measuring alpha for the Eustaces, or is this meant to remain a thought exercise?

I’m taking this slowly but from what I’ve digested so far, it seems you are treading the same ground as Schrödinger – “a living organism has the astonishing gift of concentrating a stream of order on itself and hence escaping the decay into chaos.”

Bullshit