The Disconsolation of Philosophy, Part 7: Entropy and Information
Jun 02 2005
 

What is entropy, exactly? First try an easier one: What is gravity? Suppose you had never heard of gravity and asked me what it was. I answer the usual, “attraction at a distance.”

At this point you are as badly off as you were before. Do only certain objects attract each other? How strong is this “attraction”? On what does it depend? In what proportions?

Now I give a better answer. Gravity is a force that attracts all objects directly as the product of their masses and inversely as the square of the distance between them. I may have to backtrack a bit and explain what I mean by “force,” “mass,” “directly,” “inversely,” and “square,” but finally we’re getting somewhere. All of a sudden you can answer every question in the previous paragraph.

Of course I am no longer really speaking English. I’m translating an equation, Fg = G·m1·m2/r². It turns out that we’ve been asking the wrong question all along. We don’t really care what gravity is; there is some doubt that we even know what gravity is. We care about how those objects with m’s (masses) and r’s (distances) act on each other. The cash value is in all those little components on the right side of the equation; the big abstraction on the left is just a notational convenience. We write Fg (gravity) so we don’t have to write all the other stuff. You must substitute, mentally, the right side of the equation whenever you encounter the term “gravity.” Gravity is what the equation defines it to be, and that is all. So, for that matter, is alpha. The comments to the previous sections on alpha theory are loaded with objections that stem from an inability, or unwillingness, to keep this in mind.
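Read from the right side, the equation is a recipe you can evaluate. Here is a minimal Python sketch of that substitution; the Earth-and-person numbers below are stock textbook values, not anything taken from this post:

G = 6.674e-11  # gravitational constant, N·m²/kg²

def gravitational_force(m1, m2, r):
    # Attraction between two point masses m1 and m2 (kg) separated by r (meters).
    return G * m1 * m2 / r**2

# Example: the Earth and a 70 kg person standing on its surface.
earth_mass = 5.972e24    # kg
earth_radius = 6.371e6   # m
print(gravitational_force(earth_mass, 70.0, earth_radius))  # ~686 newtons, the person's weight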

In a common refrain of science popularizers, Roger Penrose writes, in the preface to The Road to Reality: “Perhaps you are a reader, at one end of the scale, who simply turns off whenever a mathematical formula presents itself… If so, I believe that there is still a good deal that you can gain from this book by simply skipping all the formulae and just reading the words.” Penrose is having his readers on. In fact if you cannot read a formula you will not get past Chapter 2. There is no royal road to geometry, or reality, or even to alpha theory.

Entropy is commonly thought of as “disorder,” which leads to trouble, even for professionals. Instead we will repair to Ludwig Boltzmann’s tombstone and look at the equation:

S = k log W

S is entropy itself, the big abstraction on the left that we will ignore for the time being. The right-hand side, as always, is what you should be looking at, and the tricky part there is W. W represents the number of equivalent microstates of a system. So what’s a microstate? Boltzmann was dealing with molecules in a gas. If you could take a picture of the gas, showing each molecule, at a single instant–you can’t, but if you could–that would be a microstate. Each one of those tiny suckers possesses kinetic energy; it careers around at staggering speeds, a thousand miles an hour or more. The temperature of the gas is the average of all those miniature energies, and that is the macrostate. Occasionally two molecules will collide. The first slows down, the second speeds up, and the total kinetic energy is a wash. Different (but equivalent) microstates, same macrostate.

The number of microstates is enormous, as you might imagine, and the rest of the equation consists of ways to cut it down to size. k is Boltzmann’s constant, a tiny number, about 10⁻²³ joules per kelvin. The purpose of taking the logarithm of W will become apparent when we discuss entropy in communication theory.
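For concreteness, here is a toy Python sketch of the tombstone equation; the values of W are invented purely for illustration (real gases involve vastly larger counts, which is exactly why the logarithm and the tiny constant earn their keep):

import math

k_B = 1.380649e-23  # Boltzmann's constant, joules per kelvin

def boltzmann_entropy(W):
    # Entropy (J/K) of a macrostate realized by W equivalent microstates.
    return k_B * math.log(W)

print(boltzmann_entropy(1e23))  # ~7.3e-22 J/K
print(boltzmann_entropy(1e46))  # doubling the exponent of W merely doubles S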

An increase in entropy is usually interpreted, in statistical mechanics, as a decrease in order. But there’s another way to look at it. In a beaker of helium, there are far, far fewer ways for the helium molecules to cluster in one corner at the bottom than there are for them to mix throughout the volume. More entropy decreases order, sure, but it also decreases our ability to succinctly describe the system. The greater the number of possible microstates, the higher the entropy, and the smaller the chance we have of guessing the particular microstate in question. The higher the entropy, the less we know.

And this, it turns out, is how entropy applies in communication theory. (I prefer this term, as its chief figure, Claude Shannon, did, to “information theory.” Communication theory deals strictly with how some message, any message, is transmitted. It abstracts away from the specific content of the message.) In communication theory, we deal with signals and their producers and consumers. For Eustace, a signal is any modulatory stimulus. For such a stimulus to occur, energy must flow.

Shannon worked for the telephone company, and what he wanted to do was create a theoretical model for the transmission of a signal — over a wire, for the purposes of his employer, but his results generalize to any medium. He first asks what the smallest piece of information is. No math necessary to figure this one out. It’s yes or no. The channel is on or off, Eustace receives a stimulus or he doesn’t. This rock-bottom piece of information Shannon called a bit, as computer programmers still do today.

The more bits I send, the more information I can convey. But the more information I convey, the less certain you, the receiver, can be of what message I will send. The amount of information conveyed by a signal correlates with the uncertainty that a particular message will be produced, and entropy, in communication theory, measures this uncertainty.

Suppose I produce a signal, you receive it, and I have three bits to work with. How many different messages can I send you? The answer is eight:

000
001
010
011
100
101
110
111

Two possibilities for each bit, three bits, 2³, eight messages. For four bits, 2⁴, or 16 possible messages. For n bits, 2ⁿ possible messages. The relationship, in short, is logarithmic. If W is the number of possible messages, then log W (taking the logarithm to base 2) is the number of bits required to send them. Shannon measures the entropy of the message, which he calls H, in bits, as follows:

H = log W

Look familiar? It’s Boltzmann’s equation, without the constant. Which you would expect, since each possible message corresponds to a possible microstate in one of Boltzmann’s gases. In thermodynamics we speak of “disorder,” and in communication theory of “information” or “uncertainty,” but the mathematical relationship is identical. From the above equation we can see that if there are eight possible messages (W), then there are three bits of entropy (H).
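A minimal Python sketch of the same point, enumerating the eight three-bit messages listed above and recovering the three bits of entropy (assuming, for now, that every message is equally likely):

import math
from itertools import product

messages = [''.join(bits) for bits in product('01', repeat=3)]
print(messages)        # ['000', '001', '010', ..., '111']

W = len(messages)      # 8 possible messages
H = math.log2(W)       # entropy in bits for equiprobable messages
print(W, H)            # 8 3.0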

I have assumed that each of my eight messages is equally probable. This is perfectly reasonable for microstates of molecules in a gas; not so reasonable for messages. If I happen to be transmitting English, for example, “a” and “e” will appear far more often than “q” or “z,” vowels will tend to follow consonants, and so forth. In this more general case, we have to apply the formula to each possible message and add up the results. The general equation, Shannon’s famous theorem of a noiseless channel, is

H = −(p1 log p1 + p2 log p2 + … + pW log pW)

where W is, as before, the number of possible messages, and p is the probability of each. The right side simplifies to log W when each p term is equal, which you can calculate for yourself or take my word for. Entropy, H, assumes the largest value in this arrangement. This is the case with my eight equiprobable messages, and with molecules in a gas. Boltzmann’s equation turns out to be a special case of Shannon’s. (This is only the first result in Shannon’s theory, to which I have not remotely done justice. Pierce gives an excellent introduction, and Shannon’s original paper, “A Mathematical Theory of Communication,” is not nearly so abstruse as its reputation.)
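Here is a short Python sketch of the general formula. The skewed distribution is made up to stand in for something English-like; the point is only that its entropy falls below the equiprobable maximum of log W:

import math

def shannon_entropy(probs):
    # Entropy in bits of a discrete distribution (probabilities summing to 1).
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [1/8] * 8  # eight equiprobable messages
skewed = [0.5, 0.2, 0.1, 0.1, 0.05, 0.03, 0.01, 0.01]  # invented, English-like skew

print(shannon_entropy(uniform))  # 3.0 bits, the same as log2(8)
print(shannon_entropy(skewed))   # ~2.13 bits, less than the maximum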

This notion of “information” brings us to an important and familiar character in our story, Maxwell’s demon. Skeptical of the finality of the Second Law, James Clerk Maxwell dreamed up, in 1867, a “finite being” to circumvent it. This “demon” (so named by Lord Kelvin) was given personality by Maxwell’s colleague at the University of Edinburgh, Peter Guthrie Tait, as an “observant little fellow” who could track and manipulate individual molecules. Maxwell imagined various chores for the demon and tried to predict their macroscopic consequences.

The most famous chore involves sorting. The demon sits between two halves of a partitioned box, like the doorman at the VIP lounge. His job is to open the door only to the occasional fast-moving molecule. By careful selection, the demon could cause one half of the box to become spontaneously warmer while the other half cooled. Through such manual dexterity, the demon seemed capable of violating the second law of thermodynamics. The arrow of time could move in either direction and the laws of the universe appeared to be reversible.

An automated demon was proposed by the physicist Marian von Smoluchowski in 1914 and later elaborated by Richard Feynman. Smoluchowski soon realized, however, that Brownian motion heated up his demon and prevented it from carrying out its task. In defeat, Smoluchowski still offered hope for the possibility that an intelligent demon could succeed where his automaton failed.

In 1929, Leo Szilard envisioned a series of ingenious mechanical devices that require only minor direction from an intelligent agent. Szilard discovered that the demon’s intelligence is used to measure — in this case, to measure the velocity and position of the molecules. He concluded (with slightly incorrect details) that this measurement creates entropy.

In the 1950s, the IBM physicist Leon Brillouin showed that, in order to decrease the entropy of the gas, the demon must first collect information about the molecules he watches. This itself has a calculable thermodynamic cost. By merely watching and measuring, the demon raises the entropy of the world by an amount that honors the second law. His findings coincided with those of Dennis Gabor, the inventor of holography, and our old friend, Norbert Wiener.

Brillouin’s analysis led to the remarkable proposal that information is not just an abstract, ethereal construct, but a real, physical commodity like work, heat and energy. In the 1980s this model was challenged by yet another IBM scientist, Charles Bennett, who proposed the idea of the reversible computer. Pursuing the analysis to the final step, Bennett was again defeated by the second law. Computation requires storage, whether on a transistor or a sheet of paper or a neuron. The destruction of this information, by erasure, by clearing a register, or by resetting memory, is irreversible.

Looking back, we see that a common mistake is to “prove” that the demon can violate the second law by permitting him to violate the first law. The demon must operate as part of the environment rather than as a ghost outside and above it.

Having slain the demon, we shall now reincarnate him. Let’s return for a moment to the equation, the Universal Law of Life, in Part 6:

max E([α – αc]@t | F@t-)

The set F@t- represents all information available at some time t in the past. So far I haven’t said much about E, expected value; now it becomes crucial. Eustace exists in space, which means he deals with energy transfers that take place at his boundaries. He has been known to grow cilia and antennae (and more sophisticated sensory systems) to extend his range, but this is all pretty straightforward.

Eustace also exists in time. His environment is random and dynamic. Our equation spans this dimension as well.

t- : the past
t : the present
t+ : the future (via the expectation operator, E)

t+ is where the action is. Eustace evolves to maximize the expected value of alpha. He employs an alpha model, adapted to information, to deal with this fourth dimension, time. The more information he incorporates, the longer the time horizon, the better the model. Eustace, in fact, stores and processes information in exactly the way Maxwell’s imaginary demon was supposed to. To put it another way, Eustace is Maxwell’s demon.

Instead of sorting molecules, Eustace sorts reactions. Instead of accumulating heat, Eustace accumulates alpha. And, finally, instead of playing a game that violates the laws of physics, Eustace obeys the rules by operating far from equilibrium with a supply of free energy.
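Returning for a moment to the expectation operator E: here is a toy Python sketch of choosing whichever action has the larger expected value given what is already known. Nothing in it is alpha theory itself; the outcomes and their probabilities are invented for the example:

def expected_value(outcomes):
    # E[X] for a discrete distribution given as (value, probability) pairs.
    return sum(value * prob for value, prob in outcomes)

# Each distribution stands in for what the available information (F@t-)
# leads Eustace to expect from a candidate action.
action_a = [(1.0, 0.6), (-0.5, 0.4)]  # E = 0.40
action_b = [(2.0, 0.2), (-0.2, 0.8)]  # E = 0.24

best = max([action_a, action_b], key=expected_value)
print(expected_value(action_a), expected_value(action_b), best is action_a)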

Even the simplest cell can detect signals from its environment. These signals are encoded internally into messages to which the cell can respond. A paramecium swims toward glucose and away from anything else, responding to chemical molecules in its environment. These substances act to attract or repel the paramecium through positive or negative tropism; they direct movement along a gradient of signals. At a higher level of complexity, an organism relies on specialized sensory cells to decode information from its environment to generate an appropriate behavioral response. At a higher level still, it develops consciousness.

As Edelman and Tononi (p. 109) describe the process:

What emerges from [neurons’] interaction is an ability to construct a scene. The ongoing parallel input of signals from many different sensory modalities in a moving animal results in reentrant correlations among complexes of perceptual categories that are related to objects and events. Their salience is governed in that particular animal by the activity of its value systems. This activity is influenced, in turn, by memories conditioned by that animal’s history of reward and punishment acquired during its past behavior. The ability of an animal to connect events and signals in the world, whether they are causally related or merely contemporaneous, and, then, through reentry with its value-category memory system, to construct a scene that is related to its own learned history is the basis for the emergence of primary consciousness.

The short-term memory that is fundamental to primary consciousness reflects previous categorical and conceptual experiences. The interaction of the memory system with current perception occurs over periods of fractions of a second in a kind of bootstrapping: What is new perceptually can be incorporated in short order into memory that arose from previous categorizations. The ability to construct a conscious scene is the ability to construct, within fractions of seconds, a remembered present. Consider an animal in a jungle, who senses a shift in the wind and a change in jungle sounds at the beginning of twilight. Such an animal may flee, even though no obvious danger exists. The changes in wind and sound have occurred independently before, but the last time they occurred together, a jaguar appeared; a connection, though not provably causal, exists in the memory of that conscious individual.

An animal without such a system could still behave and respond to particular stimuli and, within certain environments, even survive. But it could not link events or signals into a complex scene, constructing relationships based on its own unique history of value-dependent responses. It could not imagine scenes and would often fail to evade certain complex dangers. It is the emergence of this ability that leads to consciousness and underlies the evolutionary selective advantage of consciousness. With such a process in place, an animal would be able, at least in the remembered present, to plan and link contingencies constructively and adaptively in terms of its own previous history of value-driven behavior. Unlike its preconscious evolutionary ancestor, it would have greater selectivity in choosing its responses to a complex environment.

Uncertainty is expensive, and a private simulation of one’s environment as a remembered present is exorbitantly expensive. At rest, the human brain requires approximately 20% of blood flow and oxygen, yet it accounts for only 2% of body mass. It needs more fuel as it takes on more work.

The way information is stored and processed affects its energy requirements and, in turn, alpha. Say you need to access the digits of π. The brute-force strategy is to store as many of them as possible and hope for the best. This is costly in terms of uncertainty, storage, and maintenance.

Another approach, from analysis, is to use the Leibniz formula:

π/4 = 1 – 1/3 + 1/5 – 1/7 + 1/9 – …

This approach, unlike the other, can supply any arbitrary digit of π. And here you need only remember the odd numbers and an alternating series of additions and subtractions.
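A short Python sketch of the series; it asks almost nothing of memory, though it converges slowly, so in practice many terms are needed for each additional correct digit:

def leibniz_pi(terms):
    # Approximate pi by summing the first `terms` terms of 4*(1 - 1/3 + 1/5 - ...).
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

print(leibniz_pi(10))       # 3.0418...
print(leibniz_pi(100000))   # 3.14158..., closing in on pi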

Which method is more elegant and beautiful? Which is easier?

Human productions operate on this same principle of parsimony. Equations treat a complex relation among many entities with a single symbol. Concepts treat an indefinite number of percepts (or other concepts). Architects look at blueprints and see houses. A squiggle of ink can call up a mud puddle, or a bird in flight. The aim, in every case, is maximal information bang for minimal entropy buck.

In an unpredictable environment, decisions must be made with incomplete information. The epsilon of an alpha model depends on its accuracy, consistency and elegance. An accurate model corresponds well to the current environment, a consistent model reduces reaction time, and an elegant model reduces energy requirements. Everything, of course, is subject to change as the environment changes. The ability to adapt to new information and to discard outdated models is just as vital as the ability to produce models in the first place.

Thus Eustace generates his alpha* process, operating on some subset of F@t- where t is an index that represents the increasing set of available information F. As Eustace evolves, the complexity of his actions increases and his goals extend in space and time, coming to depend less on reflex and more on experience. He adapts to the expected value for alpha@t+, always working with an incomplete information set. As antennae extend into space, so Eustace’s alpha model extends into a predicted future constructed from an experienced past.

  32 Responses to “The Disconsolation of Philosophy, Part 7: Entropy and Information”

  1. Excellent! Well worth the wait. The beginning preamble touches on how the long, dead arm of Aristotle still causes confusion today. “What is…?” questions just lead to infinite regressions and digressions over definitions. It’s no coincidence that the fields of human knowledge which advanced the farthest are the ones that have least concerned themselves with essentialistic debates about “what is X”. Nominalism all the way!

    Re: elegance/simplicity, you’re anticipating what I’m going to write about next when I get back home. I dunno if you’ve come across Ray Solomonoff or not yet, but his papers may be of interest. I’ve barely started mining them myself.

    I think I can already see where the next part will be headed (complexity). The posts keep getting better, keep it up. Meanwhile I’m going to finally start reading The Eighth Day of Creation.

    (BTW, Thomas Gilovich is recommended reading on how the models of more advanced Eustaces can go wrong.)

  2. Well, Matt, this is hardly a demonstration of nominalism, nor even a suggestion of it. (Quite the opposite.)

    Indeed, this is the best so far precisely because it does not raise the philosophical "implications" that alpha’s authors have previously stepped into. Alpha is physics–not epistemology.

    This is fantastic stuff, Aaron. Sorry, but there’s nothing for me to take issue with! But, I’m still chewing, so give me a minute or two to see if I really mean this…

  3. In any closed system, W remains constant. Therefore, entropy remains constant in closed systems.

    Aaron has said, in alpha theory, "everything counts". The universe is everything. The universe is a closed system.

    If the proper set for alpha theory is the universe, and the entropy of the universe is unchanged, why is there a "moral component" to creating a high alpha? I’m confused.

  4. Jim,

    From the link above:

    "The role of definitions in science, especially, is also very different from what Aristotle had in mind. Aristotle taught that in a definition we have first pointed to the essence — perhaps by naming it — and that we then describe it with the help of the defining formula; just as in an ordinary sentence like ‘This puppy is brown’, we first point to a certain thing by saying ‘this puppy’, and then describe it as ‘brown’. And he taught that by thus describing the essence to which the term points which is to be defined, we determine or explain the meaning of the term also.

    Accordingly, the definition may at one time answer two very closely related questions. The one is ‘What is it?’, for example ‘What is a puppy?’; it asks what the essence is which is denoted by the defined term. The other is ‘What does it mean?’, for example, ‘What does "puppy" mean?’; it asks for the meaning of a term (namely, of the term that denotes the essence). In the present context, it is not necessary to distinguish between these two questions; rather, it is important to see what they have in common; and I wish, especially, to draw attention to the fact that both questions are raised by the term that stands, in the definition, on the left side and answered by the defining formula which stands on the right side. This fact characterizes the essentialist view, from which the scientific method of definition radically differs.

    While we may say that the essentialist interpretation reads a definition ‘normally’, that is to say, from the left to the right, we can say that a definition, as it is normally used in modern science, must be read back to front, or from the right to the left; for it starts with the defining formula, and asks for a short label for it. Thus the scientific view of the definition ‘A puppy is a young dog’ would be that it is an answer to the question ‘What shall we call a young dog?’ rather than an answer to the question ‘What is a puppy?’ (Questions like ‘What is life?’ or ‘What is gravity?’ do not play any role in science.) The scientific use of definitions, characterized by the approach ‘from the right to the left’, may be called its nominalist interpretation, as opposed to its Aristotelian or essentialist interpretation. In modern science, only nominalist definitions occur, that is to say, shorthand symbols or labels are introduced in order to cut a long story short. And we can at once see from this that definitions do not play any very important part in science. For shorthand symbols can always, of course, be replaced by the longer expressions, the defining formulae, for which they stand. In some cases this would make our scientific language very cumbersome; we should waste time and paper. But we should never lose the slightest piece of factual information. Our ‘scientific knowledge’, in the sense in which this term may be properly used, remains entirely unaffected if we eliminate all definitions; the only effect is upon our language, which would lose, not precision, but merely brevity. (This must not be taken to mean that in science there cannot be an urgent practical need for introducing definitions, for brevity’s sake.) There could hardly be a greater contrast than that between this view of the part played by definitions, and Aristotle’s view. For Aristotle’s essentialist definitions are the principles from which all our knowledge is derived; they thus contain all our knowledge; and they serve to substitute a long formula for a short one. As opposed to this, the scientific or nominalist definitions do not contain any knowledge whatever, not even any ‘opinion’; they do nothing but introduce new arbitrary shorthand labels; they cut a long story short."
    — Karl Popper

  5. Matt,

    Big discussion very much off topic, no? Sir Karl’s take on Aristotle, right or wrong? Whether the rejection of this version of Aristotle actually forces us into nominalism–maybe there’s other alternatives, eh? Is a definition the start or the climax of an inductive process for Aristotle? Lots of very interesting historical questions, I am sure, but… ?

  6. Mr. Kaplan,

    Everything counts includes the thermodynamic consequences of a given process. These tend not to reach the edges of the Universe.

    And, yes, I’m including lunch at Taco Bell.

    On to the Universe. Which cosmological model are you using? The equilibrium description is an approximation because the Universe is pretty big.

    At the time of recombination, the hot Big Bang gives about 10^83 states. Recombination refers to the time, about half a million years or so after the Big Bang, when the expanding universe cooled down enough for atoms to form. Compared with a value today of 10^88 states. That leaves approximately 10^5 causally disconnected regions to be accounted for in the observable universe. The hot Big Bang offers no resolution for this paradox, especially since it is assumed to be an adiabatic (constant entropy) expansion.

    There are numerous departures from equilibrium including but not limited to

    neutrino decoupling
    decoupling of the cmb (which you mentioned yourself)
    primordial nucleosynthesis
    baryogenesis (which you also already mentioned)
    WIMPs

    Wondering about the whole Universe in one go tends to leave a lot of people confused.

    No one really knows what happens at the edges.

    You might want to start smaller.

  7. Jim,

    Big discussion yes, but not all that off-topic. The series is called "the disconsolation of philosophy" after all, and Aaron’s treatment of that issue mirrored KP’s so closely that I figured I’d throw it out there. It’s all part of my ongoing campaign to make him repent for the facile dissing of him in Part 1.

  8. Mr. McIntosh,

    I haven’t read anything by Popper on probabilistic models. For example, how would one experimentally falsify the law of large numbers or the central limit theorem?

    A single criterion of falsifiability seems too brittle. A CPPD process appears to be more inclusive of theories with explanatory power. But, again, I haven’t read Popper’s views on probability.

  9. Matt,

    Then I’ll be quiet and allow you to do the Lord’s Work.

  10. Although Bourbaki’s got a point…

    We do need a whole big epistemology–that’s for sure–from the ground up, let me suggest.

  11. First, Aaron has done a superb job here. The writing is clear, original, profound, and probably correct. Second, Bourbaki is living up to Aaron’s opinion of him. His answer to my question was along the lines I had expected, only way better, with deeper understanding.

    Now, I have two questions that come out of left field, but which may affect the way I think about alpha theory. The questions are:

    1) At what speed or speeds does entropy propagate?

    2) Does entropy obey the inverse square law?

  12. Mr. Kaplan,

    It’s not clear if you’re asking about classical or quantum.

    Let’s stay away from quantum and cosmological time and size scales. It’s cool and all but not very relevant to the scales where Eustace typically operates.

    From Eustace’s perspective, all that matters is how quickly information is received and processed. Biological responses have enormous latencies so we can ignore the wacky stuff like superluminal non-local interactions. Relativistic effects can affect availability of information in different frames but not causality.

    Information appears in the utility function via F@t, which itself treats time as nothing more than an index on an increasing sigma algebra.

    At what speed or speeds does entropy propagate?

    It’s a measure of energy dispersal.

    Does entropy obey the inverse square law?

    The intensity of any radiant energy follows the inverse square law.

    Bourbaki,

    Falsifiability is already implicitly subsumed within CPPD. In order for any theory to be predictive, there has to be some potential observation that could contradict (falsify) it in principle. Otherwise you’re just composing a series of ad hoc just-so stories (cf. historical prophecy, psychoanalysis, astrology, etc). You could argue that falsifiability by itself is not sufficient (and I may well agree), but it is definitely necessary.

    As to Popper on probability, he devotes a long chapter to it in The Logic of Scientific Discovery, which I do not have with me right now and must plead fuzzy memory on much of it. What I do know is that probability statements by themselves are of course not falsifiable, but can be considered so if we combine them with a methodological rule concerning when we can regard a probability statement as falsified in practice. Such a methodological rule will be conventional and vary with the circumstances, but need not be totally arbitrary.

  14. Mr. McIntosh,

    Such a methodological rule will be conventional and vary with the circumstances, but need not be totally arbitrary.

    This is vague and only seems to shift accountability.

    You pointed out yourself that Popper’s philosophy led him to some unsupportable and, more importantly, stubborn positions on evolution and dualism. His stubbornness on these issues seems to point to a brittleness in his underlying assumptions.

    Good ideas can endure a dissing, facile or not.

    There’s no denying that his contributions were valuable–only that there appears to be room for improvement. I think that was the gist of Part 1.

  15. Aaron,

    Well done, this.

    Yes, the energy needed for the nervous system to learn, i.e. synaptogenesis, is considerable.

    Your experience easily shows you how much less energy is needed to perform already learned behaviors.

    When it comes to the brain’s (or neuron’s) maximization of resources, or efficient and adaptive use of energy, we are only (if at all) beginning to approach the ways of conceptualizing this.

    But when it comes to a person’s behavior, how they adapt and utilize the resources available to them, including information, is something that can be measured, or at least observed.

    Here, from above: "…constructing relationships based on its own unique history of value-dependent responses."

    If you are looking for the key to ethics via energetics, I believe the answer lies in the construction (learning-synaptogenesis) and hierarchy of the so-called "value-dependent responses."

    More on this later.

  16. This WAS beautifully executed, Mr. Haspel.

  17. Bourbaki,

    This is rather OT now, but since I don’t have your e-mail…

    "This is vague and only seems to shift accountability."

    It is somewhat vague, but it’s nothing scientists don’t already do all the time.

    "You pointed out yourself that Popper’s philosophy led him to some unsupportable and, more importantly, stubborn positions on evolution and dualism. His stubborness on these issues seems to point to a brittleness in his underlying assumptions."

    No. His apprehensions about evolution (he never denied it, just questioned its scientific character; he originated the tautology objection) simply arose from a misunderstanding. Later on in life he recanted after learning more about the subject. Hardly stubborn.

    As to his dualism, that should mainly be seen in the context of a reaction against the dominance of behaviourism, which was no better. I sympathize with him in his objections even though he was wrong. In any case his epistemology is completely independent of that issue.

    "Good ideas can endure a dissing, facile or not. There’s no denying that his contributions were valuable–only that there appears to be room for improvement."

    Absolutely. I would be the last person to argue that progress ended when he set down his pen (indeed, that goes against the whole spirit of critical rationalism). It just baffles me frequently to see people repeating criticisms of Popper that sound as if they haven’t actually read him.

  18. Bourbaki,

    I am not sure I understand why you say the following:

    "From Eustace’s perspective, all that matters is how quickly information is received and processed."

    Could you explicate?

  19. Mr. McIntosh,

    It just baffles me frequently to see people repeating criticisms of Popper that sound as if they haven’t actually read him.

    Surely you can’t expect everyone to have read all of his work, all of his retractions, and to arrive at the same justifications for his errors.

    If we had nothing to learn from one another, what would be the point of these discussions?

    My impressions were colored by your own comments about his work with Eccles. You know his work far better than me. But if he’s after an epistemology, your particular criticism of his positions puts it in an unflattering light.

    It is somewhat vague, but it’s nothing scientists don’t already do all the time.

    We can’t compare the actions of a group to the philosophy of an individual. Who is going to a posteriori psychoanalyze each scientist’s individual mistakes?

    We should all be so fortunate with our gaffes. Although I doubt any amount of re-interpretation can cover up all of my blunders.

    Mr. Kaplan,

    Could you explicate?

    Simply that to understand living systems, we stay focused on biological time and size scales rather than plumbing the quantum or cosmological. All theories quantum and cosmological are exciting but there’s no compelling evidence that life processes require those scales for explanation e.g. see quantum consciousness and multiverses.

  20. Bourbaki,

    "Surely you can’t expect everyone to have read all of his work, all of his retractions, and to arrive at the same justifications for his errors."

    No, of course not. That remark wasn’t directed at you, it was just general grumbling about having to constantly clear up weak objections and misunderstandings of the most central aspects of Popper’s epistemology (some of which were repeated in Part 1). Most of it can be traced back to the likes of David Stove and Martin Gardner, who only pause in their hacking up of strawmen to sneer in lieu of an argument. Popperian epistemology has its weak points, but you wouldn’t know them from the poor arguments most often cited against it.

  21. Mr. McIntosh,

    general grumbling about having to constantly clear up weak objections and misunderstandings

    That’s the problem with qualitative assertions generally. Be grateful that it’s tedious rather than dangerous! See Galileo.

    You are right that we are off-topic for this post on information. A treatment of this issue is forthcoming. It fits nicely into a post on clearing up misunderstanding of positive and negative solution spaces. They are not generally complementary.

    Just keep in mind David Hilbert’s lesson when considering any epistemology that is predicated on an idealized (true/false) notion of decidability.

  22. "If we had nothing to learn from one another, what would be the point of these discussions?"

    To show how much you know. All of you understand biology and animals now, having read Aaron’s list. Humans as Maxwell’s demon is not new, Sidis said as much in 1916, supplementing reserve energy for "free energy" because of the limited understanding of what constituted free energy at the time (people were operating on what I think they call "classical" understanding).

    Does the fact that the second law remains consistent under CPPD deny that it is a product of emergence, or that our understanding is the product of qualitative analysis with certain predefined limits?

    I mean "incompleteness".

    …was the second-best post on this topic, in part because I believe at the time you started, rather largely I might add, announcing as you did the death of philosophy (stretching), you either had next to no idea where it was going, or, if you did, you did not understand enough about where it went from the start.

    To some degree, I attribute the "weaker" analogies of previous posts to this problem, or rather, to the fact that the consequent posts are only now appearing. And you attacked a lot of words with very profound and debated upon/about properties, definitions, and connotations.

    You were giving a lot of "bits" and potential microstates for this theory early on.

    Seeing the way others think about something, including their whys, can be far more important than what they actually think. This is perfectly expressed in the above equation. Jolly good.

    To round that last post down: you have now hammered into us what YOU meant by referencing things like information theory etc., whereas before we were left merely speculating at their application, which allowed us to forget, at times, what the theory really was in favor of what it "was trying to say".

    This comes directly from that very lack of cohesion which many of the applied disciplines had as "first impression corollaries".

    Even so, it is still at times difficult to see "exactly" what you are referencing with Eustace, where what you mean ends, and where what I’m not sure of begins (i.e., from my lack of understanding you, or from my lack of general/specific understanding of the fields in question, or from the fact that something is not quite right).

    Something feels amiss. Wish me luck.

  24. Moderator, please delete last two comments, insert questions:

    Question: alpha theory measures heat’s relationship to force and the consequences that occur in motion and of equilibrium within physical systems, and that therefore determines the degree of emergence of complexity? The answer is no, correct? Rather, alpha theory is about the disorder of systems in terms of bits of information, and thus is an emergent system that allows one to label (consolidate) action by "adding up" thermodynamic consequences and comparing them against optimal energy efficiency, the derivation of which lies in our ability to model a hierarchy of (available) "energy transference" via our actual, modeled, process of energy transference. We now know what it is we do, at least in an emergent sense, and can therefore know on that scale "what exists", on from the perspective "why". Hardly inconsequential.

    Does Alpha Theory then state that this emergence of complexity is positive because the higher complexity requires a greater rate of information exchange (probably) as well as an increase in efficiency (yes, definitely)?

    Models of information exchange are the same as models of optimal thermodynamic output, and are equally as inherent to the human process of determining action and worth. But is this process unique to life, or absolute among the processes of the universe? It seems to actually be a property of emergence itself. Or am I twisting into a tautology here?

    Does Alpha Theory assert that framing our actions in such a fashion as to benefit from incremental increases in emergence is commensurate with moral action or is this actually just a consequence of hierarchical overlap ala the varying amounts and sizes of infinity (i.e., are we using one way of looking at something to blend together two things that are actually of different size and worth, one coming from an entirely larger degree of application/perspective)? More on this in a second.

    It’s like saying all forms of life should be striving to do this, when really, all forms of life already do it, and the moral part comes in when they do not do it to the best of their ability, ala humans, which is what actually makes "morals" possible in the first place. This line of reasoning twists the intention of the theory into the realm of "human isolation and the consequent manifestation of being as separate from other and yet at the same time completely connected". It brings into play the paradox that breaks the dualistic back of man and action. Of course, the very idea of emergence probably already did that.

    So, that is why alpha theory doesn’t bias life to the standards of humans. Our increased consciousness does not implicitly denote greater moral imperative but rather implies the only way such an imperative could exist in the first place. This is obvious, and yet at the same time necessarily understood.
    Because, if it is true that increased complexity requires greater information processing to stay alive, then the fact is we are responsible for our life and lifetime in a way no other creature we know of is. Again, obvious, and yet that is precisely how alpha theory proposes ethics, and why alpha theory itself is not so much a system of ethics as it is the system by which we can not only prove "ethical action" but also rightly "approve of it". With science!

    And lastly (I’m sure you say finally), since I am clearly more educated than almost everyone who ever lived because I can and do rely on the benefit of all the consciousness that has recorded its observations and left impressions and beliefs and dreams and hopes and fears for me to find, does this mean that I have a greater complexity than a child that is not as conscious as me? Does it mean that I am more complex than them because I am more conscious? I would have to say that, again, that is twisting what this theory implies about consciousness and complexity and action from being about information and entropy and turning it into something that loses the science in favor of the turn of phrase.

    What do you moderators think, am I thinking rightly? :0

  25. Every time I read something about entropy, I encounter the same nagging question:

    Why is entropy a law rather than a force?

  26. It isn’t even a law. It’s a tendency. :p

  27. Thomas,

    Then what is the weak force?

  28. Bill, everyone’s dead. Rephrase that question you asked me real quick?

  29. Thomas,

    The effect of the weak force looks like a tendency but is considered a force. I believe the reason, which is unsaid, is that entropy is believed to "emerge" from a larger system whereas the weak force "just is". But that means we weigh two different but similar phenomena by different scales. The assumptions are different, but perhaps the assumptions should be equalized.

  30. Why is entropy a law rather than a force?

    A law is a generalization of empirical observations. A principle’s status as a law is orthogonal to its unit of measure, in this case force.

    The units of entropy [Joules/Kelvin] are commensurate with heat capacity and are not commensurate with the units of force [Newtons]. There are four known forces that operate as fields characterized by directed force vectors. Both the strong force and the weak force can initiate particle decay. This gives us a way to compare their strength.

    1. Strong
    2. Electromagnetic
    3. Weak
    4. Gravity

    Where is friction? Chemical bonds? Sound? A slap on the wrist?

    They’re all consequences of the electromagnetic force.

    The weak force interacts via exchange of intermediate vector bosons.

    You’re not going to find entropy operating as a directed vector field.

    And again, from above, "Simply that to understand living systems, we stay focused on biological time and size scales rather than plumbing the quantum or cosmological."

    But you will find a common thread between the weak force and entropy: the arrow of time.

    In the macro world where thermodynamics matters, it’s easy to tell whether time is running forward or backward. But at the molecular level it’s much harder to tell; all the individual particle interactions look the same forward or backward.

    Except the weak interactions.

    If CPT symmetry is to be preserved, the CP violation must be compensated by a violation of time reversal invariance. Experiments show direct T violations, in the sense that certain reaction processes involving K mesons have a different probability in the forward time direction from that in the reverse time direction. So the violation of P (and CP) means that time itself is not reversible for the weak force. The weak force breaks time symmetry.

    So much for the "Reversible Universe".

    What would Darwin say?

    I’m more interested to hear what Paley would say.

    From a reprint of "Origin of Species":

    Although Darwin, as much as Paley (Intelligent Design), is stunned by the miracles of adaptation, he has to explain not only the adaptations (as Paley did) but something yet more difficult, the fact that there seem to be many things, vestigial organs, for example, that do not help either the organism or the humans who observe it. For Darwin, then (and this despite many contemporary Darwinians’ sense that all organic characteristics are adaptive), there are aspects of organisms that Paley ignored: nonadaptive elements.

    The wonder of Darwin’s world is multiplied by its apparent anomalies, that there should be "geese and frigate-birds with webbed feet, either living on the dry land or most rarely alighting on the water; that there should be diving thrushes, and petrels with the habits of auks" (pp. 156).

    If the world seems to show signs of design, why should there be so much evidence of its absence? This problem is exacerbated by Darwin’s perception that ‘perfection’ cannot really exist in Nature. He talks much of ‘perfection’, uses the word a great deal, but its significance is always relative to particular conditions and places. We ought not to marvel, he says, "if all contrivances in Nature be not…absolutely perfect" (pp. 371).

    In other words, epsilon is rarely if ever zero.

