To understand alpha theory, you have to learn some math and science. To learn math and science, you have to read some books. Now I know this is tiresome, and I am breaking my own rule by supplying a reading list. But it will be short. Try these, in order of increasing difficulty:
Complexity, by Mitchell Waldrop. Complexity is why ethics is difficult, and Waldrop provides a gentle, anecdote-heavy introduction. Waldrop holds a Ph.D. in particle physics, but he concentrates on the personalities and the history of the complexity movement, centered at the Santa Fe Institute. If you don’t know from emergent behavior, this is the place to start.
Cows, Pigs, Wars, and Witches, by Marvin Harris. Hey! How’d a book on anthropology get in here? Harris examines some of the most spectacular, seemingly counter-productive human practices of all time — among them the Indian cult of the cow, tribal warfare, and witch hunts — and demonstrates their survival value. Are other cultures mad, or are the outsiders who think so missing something? A world tour of alpha star.
Men of Mathematics, E.T. Bell. No subject is so despised at school as mathematics, in large part because its history is righteously excised from the textbooks. It is possible to take four years of math in high school without once hearing the name of a practicing mathematician. The student is left with the impression that plane geometry sprang fully constructed from the brain of Euclid, like Athena from the brain of Zeus. Bell is a useful corrective; his judgments are accurate and his humor is dry. Lots of snappy anecdotes — some of dubious provenance, though not so dubious as some of the more recent historians would have you believe — and no actual math. (OK, a tiny bit.) You might not believe that it would help you to know that Galois, the founder of group theory, wrote a large part of his output on the topic in a letter the night before he died in a duel, or that Euler, the most prolific mathematician of all time, managed to turn out his reams of work while raising twelve children, to whom, by all accounts, he was an excellent father. But it does. Should you want to go on to solve real math problems, the books to start with, from easy to hard, are How To Solve It, by Pólya, The Enjoyment of Mathematics, by Rademacher and Toeplitz, and What Is Mathematics? by Courant and Robbins.
A Universe of Consciousness, by Gerald Edelman and Giulio Tononi. A complete biologically-based theory of consciousness in 200 dense but readable pages. Edelman and Tononi shirk none of the hard questions, and by the end they offer a persuasive account of how to get from neurons to qualia.
Gödel’s Proof, by Ernest Nagel and James Newman. Undecidability has become, after natural selection, relativity, and Heisenberg’s uncertainty principle, the most widely abused scientific idea in philosophy. (An excellent history of modern philosophy could be written treating it entirely as a misapplication of these four ideas.) Undecidability no more implies universal skepticism than relativistic physics implies relativistic morality. Nagel and Newman demystify Gödel in a mere 88 pages that anyone with high school math can follow, if he’s paying attention.
Incidentally, boys, for all of the comments in the alpha threads, one glaring hole in the argument passed you right by. It’s in the Q&A, where I shift from energy to bits with this glib bit of business:
Still more “cash value” lies in information theory, which is an application of thermodynamics. Some say thermodynamics is an application of information theory; but this chicken-egg argument does not matter for our purposes. We care only that they are homologous. We can treat bits the same way we treat energy.
I think I can prove this, but I certainly haven’t yet, and my attempt to do so will be the next installment.
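Pending that installment, the homology is at least visible in the formulas: Shannon's H = −Σ p log₂ p has the same shape as Gibbs's S = −k_B Σ p ln p, differing only in the base of the logarithm and a physical constant. A toy computation (an illustration only, not the promised proof):

```python
import math

def shannon_entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

BOLTZMANN = 1.380649e-23  # k_B, in joules per kelvin

def gibbs_entropy(probs):
    """Gibbs entropy S = -k_B * sum(p * ln(p)): the same functional
    form as Shannon's H, with natural logs and a physical scale."""
    return -BOLTZMANN * sum(p * math.log(p) for p in probs if p > 0)

# A fair coin carries one bit of uncertainty per flip.
print(shannon_entropy_bits([0.5, 0.5]))   # 1.0

# The two entropies differ only by the constant factor k_B * ln(2).
ratio = gibbs_entropy([0.5, 0.5]) / shannon_entropy_bits([0.5, 0.5])
print(math.isclose(ratio, BOLTZMANN * math.log(2)))  # True
```

That conversion factor, k_B ln 2 per bit, is the content of Landauer's bound: erasing one bit of information costs at least that much thermodynamic entropy.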
Not that I don’t appreciate it, but a reading list is supposed to suffice after that endless wait??
O.k., I’ll check the book out, but, with all that survival value, let’s go back to burning witches!!
O.k., o.k., it was a strategy for survival, even if a sub-optimal one. But I didn’t need him to tell me that.
A Different Universe is the most relevant book on this topic I have read. John Holland is a better read to understand complexity, and I take Douglas Hoffstadter’s recommendations seriously, if not his name’s spelling. Some understanding of math is important to understand thermodynamics, but I am not as sure, having read about emergence and all the stuff from A Different Universe, that this understanding is as necessary as I once thought.
I didn’t challenge you on the bit about information theory because I already think you’re right. I can’t prove it either, but I’d love to see it.
In the mean time, I have a lot of reading to do…
(Thanks for the sidebar link.)
To expand a little: the reason I agree with you already is because I’ve been immersed for the past several months in digital systems where electrical energy is indeed treated as 1s and 0s, and in networking where signal entropy is a very real problem. Might An Introduction to Information Theory by J.R. Pierce make a good addition to the reading list?
Also: as much as Godel’s theorems fascinate me, how exactly does that tie into alpha?
Aaron,
Regarding Gödel:
Here are two books that I have just bought and only just started (recommended to me by a friend)
Incompleteness: The Proof and Paradox of Kurt Gödel by Rebecca Goldstein
&
A World Without Time: The Forgotten Legacy of Gödel and Einstein by Palle Yourgrau.
As for the "list", I cannot recommend the Waldrop book (see below), but I haven’t found a better alternative.
Thanks for the math list. I am half-way through Journey Through Genius: The Great Theorems of Mathematics by William Dunham. It’s an interesting account of the history of the mathematicians and their work. Not exactly alpha-theory material, but enjoyable.
Matt,
You’re welcome.
Pierce’s book might be worth adding; I will read it and let you know. There are many books worth adding, but a long reading list is worse than none at all.
As for Gödel, the limits of axiomatization and deduction bear pretty directly on alpha theory. It was Gödel who first explained why ethics is hard. It’s all about Poisson events. Complex systems by definition exhibit behavior that cannot be deduced from examining their components (their axioms, as it were). They exist in nature (the weather) and we create them too (the market). The only way to figure out how such systems behave is to simulate them. If his theorem were known as “irreducibility” instead of “undecidability” people would understand it better and fly off the handle less.
The relevance of all this will become clearer when I discuss consciousness. If it were possible to deal effectively with the world through sheer deduction our brains would be constructed a lot differently. Edelman and Tononi argue that there are two principles of human thought: logic, analogous to axiomatization; and selectionism, which reaches truths that deduction cannot. We rely, are forced to rely, on selectionism because there can be no such thing as a universal theory to deal with Poisson randomness.
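Irreducibility is easy to demonstrate with a toy complex system. Conway’s Game of Life (no part of alpha theory, just an illustration) has rules that fit in a few lines, yet the only general way to learn what a configuration does is to run it:

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Life on a set of live (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next step if it has 3 neighbors, or 2 and is alive.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

# The rules (the "axioms") are above; the glider below walks across
# the plane forever, a fact nobody deduces from the rules alone --
# you find it by simulating them.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)

# After 4 steps the glider reappears, shifted one cell diagonally.
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```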
Actually, Gödel’s two theorems dealing with the axiomatization of arithmetic should be called incompleteness, as indeed he named them. Speaking of incompleteness, Rebecca Goldstein’s new book by that name gives a good, though non-rigorous, explication of the two incompleteness theorems as well as Gödel’s first, completeness, theorem. The book is really worth reading, in my opinion, for Goldstein’s excellent account of how Gödel’s work should have been, and wasn’t, treated by the philosophers, most notably Wittgenstein. If you care about the philosophical issues at all (or the personalities), read what Gödel wrote (in letters he didn’t send) about Wittgenstein, which starts on page 113, and what Wittgenstein said about Gödel, starting on p. 188.
Hmmm… I go to China for a month, come back and not much is new. I’m glad I read the blog for the pictures.
One question, Aaron: thermodynamics is quantum and information theory is quantum. So how come I get so many fractions when I try to calculate alpha theory results?
Sorry. Jetlagged and sick. I meant infinities rather than fractions.
I get the same thing when considering it abstractly, and when considering the way complexity emerges :( Bill, email me your calculations… thomaswhigham@yahoo.com
I will go over them with you.
Also, I cannot recommend enough John Holland’s books on complexity. So much better than Waldrop. A Different Universe: Reinventing Physics From the Bottom Down’s opening:
"There are two conflicting primal impulses of the human mind: one to simplify a thing to its essentials, the other to see through the essentials to the greater implications. All of us live with this conflict and find ourselves pondering it from time to time. At the edge of the sea, for example, most of us fall into thoughtfulness about the majesty of the world even though the sea is, essentially, a hole filled with water. The vast literature on this subject, some of it very ancient, often expresses the conflict as moral, or a tension between the sacred and profane. Thus viewing the sea as simple and finite, as an engineer might, is animistic and primitive, whereas viewing it as a source of endless possibility is advanced and human.
But the conflict is not just a matter of perception: it is also physical. The natural world is regulated both by the essentials and by powerful principles of organization that flow out of them. These principles are transcendent, in that they would continue to hold even if the essentials were changed slightly. Our conflicted view of nature reflects a conflict in nature itself, which consists simultaneously of primitive elements and stable, complex organizational structures that form from them, not unlike the sea itself.
The edge of the sea is also a place to have fun, of course, something it is good to keep in mind when one is down there by the boardwalk being deep. The real essence of life is strolling too close to the merry-go-round and getting clobbered by a yo-yo."
How the hell can you not like a book that opens like that? Ha!
Aaron,
Just a brief note.
At a garage sale last week, I picked up about ten or so of the Scientific American Library volumes (mostly on mathematics, cosmology, and physics).
One volume, Mathematics: The Science of Patterns, by Keith Devlin, started promisingly enough, but early into it I realized that the monograph paled in comparison to Dunham’s Journey Through Genius noted above. Devlin’s monograph had many tables and figures, but very little of the actual proofs or verifications of the material.
The result was that it gave little real insight or understanding as to why or how the mathematics, as presented, was invented or evolved as it did.
Blech.
My point?
There is no substitute for original source material, and if the source material is not available, be sure to find a reference that includes as much of the original source material as possible.
That is all.
Aaron,
Another brief note:
I strongly recommend Rebecca Goldstein’s Incompleteness: The Proof and Paradox of Kurt Gödel.
It is very well researched and written and a terrific companion to Gödel’s Proof by Nagel & Newman.
A few points about the Nagel & Newman book: they ask that we "accept on faith" that "dem (x, z) is a primitive recursive relationship between the numbers x and z" (p. 85).
OK, fair enough. But can someone explain why this is true?
Also, the distinction between the meaning and merit of meaning of the constant signs is tricky (well, for me anyway) (p. 71).
Just sayin’
MeTooThen,
IIRC, that explanation isn’t easy and N&N were wise to omit it. I believe Godel had planned a Part II to give a detailed proof of this claim but never wrote it. Hilbert and Bernays wrote a proof in their Grundlagen der Mathematik, but it’s always omitted in every reference; everyone claims it’s too tedious to reproduce.
There’s a modern version of the proof here.
Our goal in this paper has been to formalize the classical ‘T’ as ‘[]’, so that instead of proving A classically, we prove instead []A modally, where A is a wff. To successfully realize this aim we need a conservation result, namely, that this approach proves no classical formula that is not also provable classically.
It’s still tedious.
You recommend a book by explaining you struggled with its explanations. I’m with you so far, but why the hell would we need another book on Godel? Weren’t you learning geometry? You done?
Bill never emailed me his equations; I’m thinking he’s full of it. Why don’t you just post the equations here where we can all peruse them at our leisure :p
Might they be wrong? If so, we’d still learn from your mistake. Might they be right? If so, you can help us even more.
???
Bourbaki, stop eating cheese and crackers and post some pithy comment about complexity, I’m ready to chew some fat.
Bourbaki,
OK, then.
A matter of faith it is.
And here, from N&N, p. 107, footnote #38:
"The theory of transfinite ordinal numbers was created by German mathematician Georg Cantor in the nineteenth century." Emphasis mine.
Here, from The Philosophy of Set Theory: An Historical Introduction to Cantor’s Paradise by Mary Tiles, the opening sentence: "Did Cantor discover the rich and strange world of transfinite sets (which Hilbert was to call Cantor’s Paradise) or did he (with the help from his friends) create it?"
This is another interesting book, which for some here, may fit nicely into their reading list. I started reading it several weeks ago but am ready to get back into it.
One last point: Cantor was born in Russia, of a Danish father and Russian mother, but lived most of his life, and his mathematics work in Germany. Does this make him a German mathematician?
Just askin’.
And Tommy,
The reason to read about Gödel, his times, his other ideas, his work’s impact or the reaction others had to it, is that the more we know about these subtexts, or aspects of some narrative arc, the better, hopefully, we can understand and appreciate the man and his work.
Just sayin’.
That of course is obvious. I was speaking of topic relevance.
A book which ought to figure on that list is Penrose’s magnum opus The Road to Reality. It works as a layman’s physics handbook, but with a narrative, taking the reader on a journey from basic geometry to modern-day cosmology. The mathematics, although extensive in comparison with other popular science books, is used sparingly, yet in sufficient quantities to allow the interested reader to master the fundamentals. In a way, Penrose hits the nail on the head. Mathematics, as well as most other subjects, has two dimensions of depth: the ‘creative’ problem-solving dimension, and the rote learning of mathematical concepts. Thus, as long as the presentation is simple, complex analysis poses no problem for the layman who just glimpsed the fundamentals of real analysis for the first time in the preceding chapter. Not to frighten off too many prospective buyers, Penrose also asserts that the book can be read 100% formula-free.
Anyone else catch that Bourbaki only dropped that link so he could name-drop himself :p
to paraphrase…
"and later brilliantly used by Bourbaki."
Tommy,
My bad.
On your point: Goldstein’s book is topic relevant, in that she guides us, in part, through the proof, but more importantly, as above, she puts the proof into context.
And again, why Gödel?
Per Aaron: "…the limits of axiomatization and deduction bear pretty directly on alpha theory. It was Gödel who first explained why ethics is hard. It’s all about Poisson events. Complex systems by definition exhibit behavior that cannot be deduced from examining their components (their axioms, as it were)."
Can the limits of axiomatization lead us to models and understanding of consciousness? I don’t know, but I believe that Aaron and Bourbaki are on to something.
Alpha theory, in its current form, directs us toward an energetic model of ethics. But within that model, there exist several different and seemingly distinct (non-interchangeable) measurable states. Can S and some information quantity (bits, qualia) be reconciled within the same formula?
Back to Aaron’s answer above:"The only way to figure out how such [complex] systems behave is to simulate them."
In other words, our ability to find an understanding of consciousness, and then perhaps human behavior, lies in our ability to mathematically model consciousness; and, per Gödel, the limitations of those models lie in the general limitations of axiomatization.
Armasus,
Funny you should mention Penrose. I just finished reading The Emperor’s New Mind last night, and I refuse to read The Road to Reality unless Penrose has gotten a much better editor since then.
Penrose is smart but very frustrating — there are lots of lucid nuggets in the book, but you have to wade through nearly 600 pages of his verbose meandering to get them all, by the end of which he hasn’t actually formed any kind of convincing case. He just talks and talks and talks, and doesn’t seem to have very good judgement about what to dwell on and what to touch briefly on or omit. I think the book could have been condensed to under 400 pages and still hit all the important bases.
Besides, I don’t think I have the patience to wade through another thousand pages of exclamation mark abuse. On every page, there are at least two sentences like this! (And at least one in parentheses, like this!)
Tommy,
That of course is obvious. I was speaking of topic relevance.
Edelman and Tononi take this issue head on:
"In any case, the interesting conjecture is that there appear to be only two deeply fundamental ways of patterning thought: selectionism and logic."
Godel revealed the limitations of logic. This limitation influenced how alpha theory was derived.
And also,
"While these attempts give due scientific recognition to the subjective domain, subjectivism itself is no basis for a sound scientific understanding of the mind. Consequently, we reject phenomenology and introspectionism, along with philosophical behaviorism.
…
The history of science, particularly of biological science, has shown repeatedly that apparently mysterious or impassable barriers to our understanding were based on false views or technical limitations. The material bases of mind are no exception."
Armasus,
Although I have not read The Road to Reality, the reviews indicate that it is an excellent treatment of the subject. However, I was disappointed with Penrose’s attempt to attribute consciousness to quantum gravity in The Emperor’s New Mind.
"Penrose proposes that the physiological process underlying a given thought may initially involve a number of superposed quantum states, each of which performs a calculation of sorts. When the differences in the distribution of mass and energy between the states reach a gravitationally significant level, the states collapse into a single state, causing measurable and possibly nonlocal changes in the neural structure of the brain. This physical event correlates with a mental one: the comprehension of a mathematical theorem, say, or the decision not to tip a waiter."
This is more "magic happens here" that adds little to our understanding. Popper and Eccles (1977) argued that "brain-soul interactions" are camouflaged by Heisenberg’s uncertainty principle.
Edelman & Tononi argue:
"There are no completely separate domains of matter and mind and no grounds for dualism. But obviously, there is a realm created by the physical order of the brain, the body, and the social world in which meaning is consciously made. That meaning is essential both to our description of the world and to our scientific understanding of it. It is the amazingly complex material structures of the nervous system and body that give rise to dynamic mental processes and to meaning. Nothing else need be assumed–neither other worlds, or spirits, or remarkable forces yet unplumbed, such as quantum gravity."
Bourbaki,
My thoughts exactly. Penrose never really convincingly drives his case home; it’s just anticlimactic speculation at the end. Since then, Max Tegmark has pretty convincingly killed the idea that quantum effects significantly affect the brain. Penrose’s central point about the limited usefulness of algorithms was lucid and dead on, though.
Much as I love Popper, he did believe a couple of rather silly things. His insistence on dualism and indeterminism always baffled me, not to mention his reluctance to admit Darwinism into the realm of science.
Matt McIntosh,
Thanks for the link.
That was interesting.
Although I had to slowly trudge through the formulas (and will now turn to learning something of Lagrangian and Hamiltonian mechanics), the essence of the work and its conclusions I could follow.
An aside: a tip-off as to the über-level of the paper comes from Tegmark’s email at IAS as well as his funding from NASA.
It’s rocket science! LOL!
You’re welcome. I’d be baldfaced lying if I claimed to be able to follow all the technical specifics of the paper, but the gist of it is clear. When we’re talking about a timescale difference of ten orders of magnitude or more between the quantum decoherence and neuronal firing, there’s not a lot of room for Penrose’s quantum magic and mystery to enter into things.
While we’re on the subject, has anyone here read V.S. Ramachandran? If so, would you recommend him?
Aaron and Bourbaki,
A suggestion:
I think it is advisable to revisit F.
If alpha is unit-less, what then are the units (if any) contained in F, and how are they reconciled? If there are no units, what then comprises F; in other words, on what does this function act, and is it energetically dependent?
I know I have asked this question before, but I am concerned that within F lies some potential problems.
After reading the Tegmark essay, alongside Goldstein’s book, the potential problems of F seem to loom large.
Does F rely on qualia, bits, or some other "currency"? How does this "currency" maintain itself? How can we rely on its validity, i.e., its "truthfulness", and can we be sure that our "ethics", our "alpha", doesn’t require axiomatization of its own, when it comes to the foundations of F?
Here, from Tegmark:
"In practice, the interaction H_int between subsystems is usually not zero. This has a number of qualitatively different effects:
1. Fluctuation
2. Dissipation
3. Communication
4. Decoherence
The first two involve transfer of energy between the subsystems, whereas the last two involve exchange of information. The first three occur in classical physics as well; only the last one is a purely quantum-mechanical phenomenon…
We will define communication as exchange of information. The information that the two subsystems have about each other, measured in bits, is I_12 ≡ S_1 + S_2 − S, where S_i ≡ −tr(ρ_i log ρ_i) is the entropy of the i-th subsystem, S ≡ −tr(ρ log ρ) is the entropy of the total system, and the logarithms are base 2."
This type of relationship is similar to what we find in Edelman and Tononi, but Edelman and Tononi only seem to account for the genesis of consciousness, not its specific content, and there seems to be a missing step (or steps) that leads to F.
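In the classical case, Tegmark’s identity is just mutual information, and it can be computed directly. A sketch with a hypothetical two-bit system, Shannon entropies standing in for the von Neumann entropies:

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits, the classical stand-in for the
    von Neumann entropy -tr(rho log rho)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical joint distribution: two perfectly correlated bits.
# Each subsystem looks maximally random on its own, yet the total
# system holds only one bit of entropy.
joint = {(0, 0): 0.5, (1, 1): 0.5}
p1 = [sum(p for (a, _), p in joint.items() if a == v) for v in (0, 1)]
p2 = [sum(p for (_, b), p in joint.items() if b == v) for v in (0, 1)]

S1, S2 = entropy_bits(p1), entropy_bits(p2)
S = entropy_bits(joint.values())
I12 = S1 + S2 - S  # Tegmark's I_12 = S_1 + S_2 - S
print(I12)  # 1.0 -- each subsystem carries one full bit about the other
```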
In previous posts, Bourbaki argued that F is a filtration representing a mathematical function, but still it possesses information.
"While these attempts give due scientific recognition to the subjective domain, subjectivism…"
Is it wrong to say that Godel’s success was inherently born within him (at the moment of conception) and then to extrapolate something similar occurring onto those subject to his idea, even those who lack a consequential understanding of it? Isn’t that what emergence is, when the parts alone cannot explain the whole (this is in reference to the birth of understanding)?
I read Gödel, Escher, Bach a long time ago, so I don’t know if I fully understand him or his ideas, but it seems to me that if his topic relevance comes from the fact that he showed a limit to logic and that you are then saying that his limit is what leads us to this:
"While these attempts give due scientific recognition to the subjective domain, subjectivism itself is no basis for a sound scientific understanding of the mind. Consequently, we reject phenomenology and introspectionism, along with philosophical behaviorism."
it seems like topical complementary mutualism. There are many ideas that demonstrate the same criteria, if not to the same, ahem, degree or actuality, then something of corollary significance. I guess your reading about Godel is one of the principles that inspired within you a seed from which alpha theory emerged?
I don’t know though, I remember many posts back you said it took you months of staring at equations to figure it out, and you didn’t drop Godel then when I asked about that. Having admitted that I don’t know shit about him though,
I am not denying his importance so much as denying the need to read a book or two on him to understand alpha theory. Should I study the emergence of all thoughts that relate to alpha theory to understand alpha theory? If so, alpha theory is a very weak theory (it should stand on its own without books’ worth of reference, especially when time might be better spent deriving flaws in its implications and pronouncements than in continually trying to immerse the mind in its intellectual foundings… dun dun dun, because emergence has taught us nothing if it hasn’t taught us that doing this is not only counterproductive to a comprehensive understanding in many ways [the historical ideas can become too big and tangled, and even, based on an author’s predisposition, contradict certain other authoritative assertions professed by ancillary subjects of study] but also because it can never fully account for what emerged from it)
In the back of my brain it seems to me that some of what Godel says actually undermines many ways we have tried to present alpha theory on this blog, and it seems like a corollary to Foucault’s attacks on structuralism. But you have to consider many mathematical proofs and scientific laws as simple acts of emergence if you buy into complexity and emergence, and this is the most obvious reason why Newton’s laws break down at such a small level, and why quantum mechanics don’t get big. Or am I misunderstanding emergence?
Bourbaki, visit my livejournal account for a good explanation of what I mean at:
http://www.livejournal.com and then search for the name Inadequate_One. The information is contained in the title To Bourbaki. You can also see my f’in picture!!!
"Godel’s great stroke of genius–as readers of Nagel and Newman will see–was to realize that numbers are a universal medium for the embedding of patterns of any sort, and that for that reason, statements seemingly about numbers alone can in fact encode statements about other universes of discourse. In other words, Godel saw beyond the surface level of number theory, realizing that numbers could represent any kind of structure"
you quoted Hofstadter here, but I wonder: where he speaks of discourses, is he referencing the type as exemplified and defined by Foucault? If so, it means somewhat less than what you might think.
Tommy,
Your LiveJournal post consists of quotes (Laughlin’s?) containing qualitative assertions and straw-man arguments.
"One of the greatest disservices we do to our students is to teach them that universal physical law is something that obviously ought to be true and thus may be legitimately learned by rote."
This isn’t how science is taught. And we continue with
"They are not fundamental at all but a consequence of the aggregation of quantum matter into macroscopic fluids and solids–a collective organizational phenomenon."
Followed by
"Astonishing as it may seem, many physicists remain in denial."
You’ve just stepped into a long running debate between particle physicists and condensed matter physicists. From a review:
Laughlin and Pines advocate the search for "higher organizing principles" (perhaps universal), relatively independent of the fundamental theory. I give them credit for emphasizing that many different underlying theories may lead to identical observational consequences. But they turn a blind eye to the idea that in many important physical settings, the detailed structure and parameters of the Lagrangian are decisive. They campaign as well for the synthesis of principles through experiment, which I also recognize as part of the way we do particle physics. I believe that the best practice of particle physics—of physics in general—embraces both reductionist and emergentist approaches, in the appropriate settings.
Overall, I am left with the impression that Laughlin & Pines are giving a war to which no one should come, because the case for their revolutionary intellectual movement is founded on misperception and false choices. Perhaps the best way for us to be heard is to listen more closely, try to understand the approaches we have in common, and—occasionally—to use their language to describe what we do. It is important for us to seek the respect and understanding of our colleagues who do other physics, in other ways.
This debate has no impact on what we’re discussing. And we’re not going to be able to participate in this debate based on a few quotes and papers.
We are not looking to unify the forces of nature.
Thermodynamics is one of a few mature fields epitomized by a well-defined, self-consistent body of evidence. The essence of the theoretical structure of classical thermodynamics is a set of natural laws governing the behavior of macroscopic systems. The laws are derived from generalization of experimental observations and are independent of any hypothesis concerning the ultimate nature of matter and energy.
"but I wonder where he speaks of discourses is he referencing the type as exemplified and defined by Foucault? If so, it means somewhat less than what you might think."
You’re going to have to be more specific. I have no idea what this means.
Bourbaki,
Thanks, yes that was the answer I remember.
My question was wrong.
As F is a probability function, at what scale does it occur? In other words, does the probability of action occur at the Eustace level, brain level, neuronal subsystem level?
Do you follow where I am going with this?
Alpha is unit-less, and ultimately it depends on a probability function. But information, as a "currency", depends on the function of neural substrates, which themselves have their own entropy and signalling characteristics beyond (or within) the alpha-system.
Again, there seems to be a need to account for information I, and its entropy. Furthermore, the "truth" of that information, although not axiomatic, seems to approach the need for axioms.
No?
"but also becasue it can never fully account for what emerged from it"
isn’t this essentially what Godel says, and what was so significant about him in the first place? if so, it is ironic that your understanding of him (which led you to recommend him) actually made you miss the fact that in many ways we cannot conclude that this course of knowledge is not inherently flawed and even worthless to our overall understanding of alpha theory. Well, not worthless of course; the worth would just be incomplete.
maybe that was a bloody brilliantly incomplete tautology on emergence and… Perhaps I am perverting and misusing the idea? But also, really, your argument for his relevance was weak:
"Godel revealed the limitations of logic. This limitation influenced how alpha theory was derived."
I wonder if that is mere childish name calling. How might we derive an answer? I would say alpha theory, but you know, thermodynamics are hard to manage by meself.
The last sentence was kind of clever, huh.
MeTooThen: hey, what if we assume that most coherence and confluence in math AND logic are simply products of emergence. Wouldn’t F be emergent, i.e., to a small/large degree transcendent from its parts, because it would and could remain the same despite different internal changes; i.e., an emergent result could be the same even if we changed several different components and supplanted them with other actions that had the same result, obviously. This is even what occurs with entropy, when entropy represents the number of internal states that a body might take and remain on the outside unchanged. Look at my Hawking quotes from a few posts back for more on that.
just sayin
MeTooThen,
That’s not quite right.
F is not a function. F is a set. Think Cantor. Specifically, an increasing sigma-field of Borel sets.
Any Eustace is an alpha-model that reacts to events. The events to which Eustace can adapt can be no larger than F@t. In fact, they’re always much smaller.
When something "happens", energy flows. The scale is the resolution, either direct or indirect, of these events. These occurrences (all of them) become part of the filtration.
The standard references for this material are Karatzas, Chung and Oksendal but they’re definitely not for the faint of heart.
Information is a loaded term. From here on, assume that by information we mean signal or interaction.
We don’t care about the meaning of any signal–we’re dealing with information in the pure Shannon sense. Recall from your earlier post
Also, the distinction between the meaning and merit of meaning of the constant signs is tricky (well for me anyway). p.71
Shannon was wise to call his model a theory of communication. We’ll work a lot of the material into future posts as we integrate information theory. Part of the challenge is doing so without making any of these books required reading. We haven’t touched qualia yet–and can’t until we get through information theory.
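A concrete illustration of information "in the pure Shannon sense" may help here. This is a minimal sketch (the posts contain no code, so the function name is my own): it measures the entropy of a signal from its symbol statistics alone, with no reference to meaning.

```python
import math
from collections import Counter

def shannon_entropy(signal):
    """Entropy in bits per symbol, computed from symbol frequencies only.
    The meaning of the symbols never enters the calculation."""
    counts = Counter(signal)
    n = len(signal)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy("attack at dawn"))
print(shannon_entropy("aaaaaaaaaaaaaa"))  # a single repeated symbol: zero bits
```

Two messages of equal length can carry very different amounts of Shannon information, and a message of great "meaning" to its recipient can carry very little.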
If you’d like to jump ahead, I’ve heard good things about A Probability Path by Resnick. I have his "Adventures in Stochastic Processes", which is decent but quite possibly the nerdiest book title I’ve ever seen.
MeTooThen,
A filtration F is part of the definition of a measurable space in probability theory.
A stochastic process is a model for the occurrence of a random phenomenon. The randomness is captured by the introduction of a measurable space (Ω, F), called the sample space, on which a probability measure can be placed. Ω is the set of all possible outcomes, while F is the set of all events that have occurred.
In other words, F@t represents all available information up to some time, t. In our coin-flipping game, it represents your realized path of wins and losses @t.
The implied probabilities manifest in implied events. These events have thermodynamic consequences that can be measured to produce the dimensionless quantity, alpha.
Any alpha-model for a given Eustace is conditioned on F@t.
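A toy version of the coin-flipping game may make the conditioning concrete. The names here are hypothetical and the "model" is nothing but an empirical frequency, but it shows what conditioning on F@t means: the model sees only the realized prefix of the path, never the future.

```python
import random

random.seed(7)
path = [random.choice(["W", "L"]) for _ in range(10)]  # the full realized path

def F(t):
    """F@t: the information available at time t is the path up to t."""
    return path[:t]

def estimate_win_prob(history):
    """A model conditioned on F@t can use only the realized prefix."""
    if not history:
        return 0.5  # prior, before any evidence has accumulated
    return history.count("W") / len(history)

for t in (0, 5, 10):
    print(t, F(t), estimate_win_prob(F(t)))
```

Note that the model at t = 5 is blind to flips 6 through 10 even though, from outside the game, the whole path exists.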
Tommy,
I don’t know though, I remember many posts back you said it took you months of staring at equations to figure it out, and you didn’t drop Godel then when I asked about that.
Go back to Part I and check out the comments for mentions of Godel. A lot of ideas have influenced the development of the theory. Godel was particularly important because his work led to the choice of the first and second laws.
Worse, for any dynamic system, we cannot predict all possible external influences–the best possible crystal ball is a blurry, probabilistic one.
hey, what if we assume that most coherence and confluence in math AND logic are simply products of emergence.
Tommy, you can assume anything you like. But first see CPPD (also in the comments section of Part I) and explain how your assumption is (or can be) supported.
Be careful not to conflate information we gain by studying evidence (F@t-) with our model of how things will turn out in the future (F@t+). In this sense, all models of the world are not equivalent narratives.
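One way to keep that distinction straight, as a hedged sketch with made-up data: the realized history is a single fixed record, while a model fitted to it can only generate an ensemble of possible futures.

```python
import random

history = ["H", "T", "H", "H", "T", "H"]  # evidence actually observed

# A model fitted to that evidence...
p_heads = history.count("H") / len(history)

# ...can only generate possible futures consistent with it. The history is
# one fixed record; the futures are an ensemble. Treating a simulated
# future as if it were evidence conflates the two.
random.seed(0)
futures = [["H" if random.random() < p_heads else "T" for _ in range(6)]
           for _ in range(3)]
print(p_heads, futures)
```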
Bourbaki,
Hmmm.
A set, not a function.
No, I did not get that until now.
A set comprised of available information in all of its forms.
Hmmm.
Hence, no units.
…
Yes, communication, i.e., signalling, works fine for me.
Hmmm.
I need to think about this.
LOL.
For a minute I thought I understood alpha-theory, then not, then again yes.
LOL.
Thanks.
And yes, I think I will get the Resnick book; the reviews are mostly favorable.
Mr. McIntosh,
I haven’t read Ramachandran’s book but I did catch an interview where he discussed research on mirror neurons.
You can catch the broadcast here.
I also came across this experiment by Boneh, Cooperman and Sagi (2001) while reading Christof Koch’s book.
MeTooThen,
See Part 5 before you hit Resnick.
Thanks Bourbaki, the broadcast and article were interesting. I’ve seen that visual experiment before as well; I love finding ways of tricking the brain.
Let me know if the Koch book is worth it and I might pick it up once the paperback is out.
"One of the greatest disservices we do to our students is to teach them that universal physical law is something that obviously ought to be true and thus may be legitimately learned by rote."
This isn’t how science is taught. I wonder what teachers you know.
That’s how I was taught. I don’t believe I met a single teacher who could appreciate the complexity of that fundamental shift in perspective properly until I was in 9th grade.
Also, collective organizational phenomena are no more "a qualitative assertion" than alpha theory is, and if you are saying that the way we understand (and can understand) the universe is not proportional to our use and understanding of organization (whole collections of 'em, even!), then you understand the human brain to be much different than I do.
I get the feeling you are reading lots and lots about information theory right now, but what about complexity, chaos, emergence, and the human brain? Did I jump the gun reading these books before understanding information theory?
This isn’t how science is taught. I wonder what teachers you know.
You were never taught the scientific method?
There’s a big difference between challenging scientific laws with well researched contradictory evidence and challenging scientific laws with unsupported speculation and rhetoric.
Also, collective organizational phenomenon are not "a qualitative assertion" any less than alpha theory are […]
1. What is your hypothesis?
2. How do you propose to collect evidence to support it?
3. What are you measuring?
4. What will you be able to explain better than existing theories?
I get the feeling you are reading lots and lots about information theory right now
We’re trying to write about information theory to make it as accessible as possible. I think we missed the mark on some of our installments and, based on the comments, are trying to ensure we address those oversights.
Did I jump the gun reading these books before understanding information theory?
Not at all. But try to use the suggestions above when framing your argument. Keep it short and self-contained. Three posts in a row with PoMoSpeak is difficult to decipher.
Define your terms or provide links to them. When I see something like "topical complementary mutualism" and Google turns up nothing, I have no idea what you’re talking about.
Bourbaki,
Thank you for your thoughtful replies.
And yes, I am asking the wrong questions.
I reread Part 5, and have come to the same place as I was before, still uncertain as to F.
Humor me, or at least, try to follow where I am leading you, thanks.
Above I characterized F as a function; this was incorrect, in that it is not along the lines of f(x), read: f of x.
But Eustace, or a brain, or a neural subset, or a single neuron, or a synapse, must do something with the members of the set F at some time, t. In this case, the "do something" would be to communicate the element(s) of F?
And this communication, again, must occur at some level, or levels, and my question, therefore, is whether or not this communication (what I meant as a function, or process) is energy-dependent, and if so, can you account for its entropy?
Why am I asking this?
I have the sense, if I can remove myself from Eustace, and speak now of the human nervous system, that in order to allow for the universality of alpha, as a process, as an event, or as a function of the nervous system, its substrate should at least be hinted at, or suspected, even if not known.
This seems correct to me because after all, alpha occurs somewhere, in some space and at some time. The space is between our ears, somewhere, and the time, well, it occurs all the time, but at different intervals and, perhaps, over differing durations (serial alphas).
Now as a mathematical model, perhaps for now, these issues do not yet (or ever?) need to be understood. But again, for me, it is the substrate that interests me first, and its application second.
Bourbaki,
A bit more background.
Whether it is our personal experience, or clinical experience, or per Edelman’s model, there are (sub)types of information, or qualia, that hold special or elevated status. These experiences, once laid down in our "memory," will, forever henceforth, have sway over subsequent experiences, most notably our access to them and their effect on current qualia.
How this hierarchy of memory (read synaptic signalling and its subtending architecture) is created, maintained, and utilized seems to be missing from the description of how Eustace and its F interact or relate.
The laying down of memory (the creation of the elements of F), its storage and retrieval (the communication of elements of F), seemingly needs to be accounted for, no?
Or am I endlessly repeating myself and way off the mark?
MeTooThen,
You write that "in order to allow for the universality of alpha, as a process, as an event, or as a function of the nervous system, its substrate should at least be hinted at, or suspected, even if not known." OK, let’s talk substrate.
I understand you to be asking two related questions here. First, what undergirds alpha? And second, what is alpha itself? It might help to have another look at the original alpha equation:
α = (ΣH − TΣS_negative) / (TΣS_positive)
where H is enthalpy, T is temperature, and S is entropy. All three terms in the equation are in units of energy, and they are all deltas. This makes the answer to the first question apparent: energy flux undergirds alpha. The second question is a bit trickier, but I took a stab at it in Part 2, where I wrote, "[alpha] can be thought of as the rate at which the free energy in a system is directed toward coherence, rather than dissipation. It is the measure of the stability of a system." Here we begin to reach the limits of language. It is dangerous to try to reify a rate. "What is" questions, in science, traditionally go nowhere. My next post, on entropy, will elaborate. Thinking of entropy as "disorder" is OK in thermodynamics, but in communication theory it leads to disastrous misunderstandings, although the equations are identical. What we want to ask instead is, how can we quantify it? What relations obtain?
As for your other post, which crossed my reply: sure, we want to know how memory works, how human beings process F, all of those nifty things. But we haven’t even introduced consciousness yet. For now we claim only that Eustace employs an alpha model to react to F. Give us a chance here. The next installment will bring us quite a bit closer to the human realm.
Aaron,
Thank you for your thoughtful reply.
No, I wasn’t trying to rush you. And yes, alpha as energy flux, I’m down with that.
Brave as you are, you haven’t answered my questions as to F.
For now, I will sit tight, and be quiet, awaiting the next installment.
Until then…
MeTooThen,
Here’s another try. I think you’ve latched on to an incorrect interpretation of filtration.
It’s important to use terms only as they’re defined in Parts 2-6. More definitions are to come and the specifics will follow the abstract. There will be applications of the principles we’ve established but we need to be thorough.
The notion of a filtration is reminiscent of the artist’s concept of negative space–to understand the object, first understand the space around it. Here it’s a little tricky–we’re dealing with a dynamic process, so rather than space, consider the same concept in time. The positive space is its present state. Its negative space is the events in its history–its filtration.
But Eustace, or a brain, or a neural subset, or a single neuron, or a synapse, must do something with the members of the set F at some time, t. In this case, the do something, would be to communicate the element(s) of F?
Eustace [or a brain, or a neural subset, or a single neuron] is affected by a stimulus or interaction and may, in turn, react by affecting something else or changing its own configuration.
But let’s start with something much simpler so we don’t get buried in details. Consider a stone in a river. Its surface is shaped by numerous interactions with the passing fluid. All those interactions, collectively, are the subset of F@t that manifest in its shape now.
Now that’s not very dynamic. Let’s try something a bit more complicated. Consider a G protein-linked receptor. Ligand binding activates a G protein, which in turn activates or inhibits an enzyme that generates a specific second messenger, or modulates an ion channel, causing a change in membrane potential.
The cascade that stimulus invokes is based on the configuration of the receiver and the configuration of anything with which that receiver interacts. Events (some subset of F@t) shape these configurations. In other words, the histories of these systems are reflected in their current states.
How rich and diverse this effect is depends on the system complexity, context and how Eustace adapts to these stimuli. Which of these configurations are selected depends on their alpha.
One of the founders of modern probability, Kolmogorov, wrote: "the epistemological value of probability theory is based on the fact that chance phenomena, considered collectively and on a grand scale, create a non-random regularity."
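Kolmogorov’s remark is easy to demonstrate in a few lines: each flip below is individually unpredictable, yet the aggregate frequency is regular.

```python
import random

random.seed(42)
flips = [random.random() < 0.5 for _ in range(100_000)]

# No single flip can be predicted, but considered collectively and on a
# grand scale, the frequency settles near 0.5: non-random regularity.
freq = sum(flips) / len(flips)
print(freq)
```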
Similar patterns crop up in models of dynamic, interacting systems, ranging from neuron depolarization to forest fires to mass extinction. In each case, an individual element or class of elements is subjected to some pressure or stimulus, builds up toward a threshold, then suddenly relieves this stress and spreads it to others, potentially triggering a domino effect.
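That build-up-and-release pattern can be sketched with a minimal sandpile-style model (the model and its threshold are illustrative choices, not anything from the posts): each site accumulates stress, and a site crossing its threshold resets and pushes stress onto its neighbors, sometimes cascading.

```python
def relax(stress, threshold=4):
    """Topple any site at or above threshold, spreading one unit of stress
    to each neighbor; repeat until the system is quiet. Returns the
    number of topplings (the avalanche size)."""
    n = len(stress)
    avalanche = 0
    active = True
    while active:
        active = False
        for i in range(n):
            if stress[i] >= threshold:
                stress[i] -= threshold
                avalanche += 1
                active = True
                if i > 0:
                    stress[i - 1] += 1
                if i < n - 1:
                    stress[i + 1] += 1
    return avalanche

# One slow stimulus can trigger a domino effect once thresholds align.
sites = [3, 3, 3, 3, 3]
sites[2] += 1            # a single added unit of stress...
print(relax(sites))      # ...topples much of the system
```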
If a given Eustace can adapt to these regularities, it will have a selective advantage. In order to adapt, it needs complexity sufficient to capture these regularities or patterns. The "information set" Eustace uses can never be greater than F@t. You appear to be using F like a self-contained, independent object. Rather, F is analogous to context.
How this hierarchy of memory (read synaptic signalling and its subtending architecture) is created, maintained, and utilized seems to be missing from the description of how Eustace and its F interact or relate.
See Part 2. It’s very abstract but it’s in there.
Alpha is a sort of thermodynamic teleology driven by the conversion of an available flow of free energy (dG). We once believed light was necessary but the discovery of hydrogen-sulfide metabolising organisms in the deep oceans revealed that any free energy will do if the conditions are right. So far, those conditions include the unique chemical properties of water.
The resulting nascent complex structures make possible yet more complex structures. This complexity offers additional degrees of freedom that can then convert available free energy into yet more complex structures.
If a particular receptor configuration leads a bacterium to directed chemotaxis up a maltose gradient, it will more effectively harvest free energy than one that moves in the opposite direction.
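A toy version of that selective advantage, with a hypothetical one-dimensional gradient: two otherwise identical "bacteria" differ only in the direction their configuration biases them, and the up-gradient one harvests far more.

```python
def maltose(x):
    """A hypothetical 1-D nutrient gradient: concentration rises with x."""
    return max(0.0, x)

def harvest(start, step, n_steps):
    """Total nutrient collected by a bacterium taking fixed steps."""
    x, total = start, 0.0
    for _ in range(n_steps):
        x += step
        total += maltose(x)
    return total

up = harvest(start=0.0, step=+1.0, n_steps=10)    # configuration biased up-gradient
down = harvest(start=0.0, step=-1.0, n_steps=10)  # biased down-gradient
print(up, down)  # the up-gradient configuration harvests far more
```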
This process of converting free energy into complexity and organization (alpha) can lead, for example, from transmembrane signalling to intercellular signalling.
But there are many details yet to fill in. The time scale from simple, coupled chemical oscillators to hierarchical nervous systems is about 3.5 billion years.
That’s a lot of rounds in the alpha casino.
Bourbaki,
Thank you for the thoughtful reply.
Yes, I will revisit filtration in depth before commenting further.
And yes, the notion of threshold (especially as it relates to neuronal functioning) is something I understand and appreciate well.
Here: "You appear to be using F like a self-contained, independent object. Rather, F is analogous to context."
Yes and no.
I understand the idea of F as context. This, by the way, was well put. Also, receptor signalling seems like a good example. G-proteins are just one example. The regulation of post-translational receptor dynamics is another. Models of epilepsy (e.g., secondary human epileptogenesis) and chronic pain (peripheral and central sensitization) are clinical examples.
These (self) emergent behaviors, although maladaptive, speak strongly to your descriptions.
My concern, or confusion (take your pick), is in trying to place F somewhere. It is the where, the rules under which that structural system operates, and the energetic constraints that system must follow, or is subject to, that I have been inquiring about. In other words, if the structural substrate has its own alpha effect, is it accounted for in the "larger" alpha?
Again, your thoughtfulness (and kindness) in your response is duly noted and appreciated.
One last note: Palle Yourgrau’s A World Without Time is provocative, entertaining, and gives additional insight into Godel’s proofs that neither the N & N monograph nor Rebecca Goldstein’s book does.