Philosophy – God of the Machine
Feb 13, 2003

Andrew Sullivan calls Hegel “one of the great liberals” (as in classical liberal) today. This is Hegel he’s talking about, theorist of the apotheosis of the State, official Prussian court philosopher and lickspittle to Friedrich Wilhelm III. Hegel, who proved, philosophically, that there was no planet between Mars and Jupiter, that magnetizing iron increases its weight, and that Newton’s theories of gravity and inertia contradict each other. What is Sullivan thinking? But he does provide me with an excuse to quote a mighty stream of invective from Schopenhauer, who knew Hegel personally, which no blogger can match:

Hegel, installed from above by the powers that be, as the certified Great Philosopher, was a flat-headed, insipid, nauseating, illiterate charlatan, who reached the pinnacle of audacity in scribbling together and dishing up the craziest mystifying nonsense. This nonsense has been noisily proclaimed as immortal wisdom by mercenary followers and readily accepted as such by all fools, who thus joined into as perfect a chorus of admiration as had ever been heard before. The extensive field of spiritual influence with which Hegel was furnished by those in power has enabled him to achieve the intellectual corruption of a whole generation.

(Thanks to Karl Popper’s The Open Society for the material.)

(Update: Arthur Silber comments.)

(Another: Eddie Thomas, mirabile dictu, defends Hegel.)

Feb 11, 2003

I regularly argue for rule-based reasoning in ethics, on the grounds that without rules, there are, well, no rules. Jim Ryan thinks that between rules and anything goes lies some middle ground, which he calls “critical common-sense-ism.” This means reasoning analogously from cases that we already know to be true:

It’s easy. You determine the cases about which you know the true moral judgments and that bear the closest resemblance to case X. For example, if X is an abortion, then we determine whether X is more like ordinary murders or more like perfectly permissible medical procedures. Common sense tells us that it’s wrong to kill a child but also that it’s not wrong to scrape skin cells off of your skin. It depends on what the fetus is. If X is a welfare program, we determine whether X is more like robbing Peter to pay Paul or more like requiring Peter to do his duty to save a child who is drowning right before him. It depends on the program. Common sense can make its pronouncement, but only after we make fine distinctions and comparisons, and draw analogies and disanalogies between various cases we’re sure about and X.

Now there is no such thing as case-based reasoning, strictly speaking. Cases are like snowflakes; no two are exactly alike. Jim himself has remarked elsewhere on the difficulty of deducing propositions from experience. “Critical common-sense-ism” replaces rules with rules. “It’s wrong to kill a child” is a rule. “It’s wrong to rob Peter to pay Paul” is a rule. What Jim really objects to is not rule-based reasoning per se, but one big rule, like “the greatest good for the greatest number” or “rational self-interest” or “do unto others as you would have them do unto you.” He proposes to replace one big rule with many little rules, like, say, the Ten Commandments. (In this respect “critical common-sense-ism” resembles all religious morality.) From here on I will call ordinary rule-based thinking “big-rulism” and Jim’s alternative “little-rulism.”

Little-rulism is an accurate description of how most people actually do reason morally, most of the time. Your best chance to convince somebody in a moral argument is to apply a known and agreed-upon little rule. (I speak from long and painful experience.) Jim has grander ambitions for little-rulism, however. He wants a revolution. He wants to overthrow the big-rulers.

What happens when two little rules conflict?

These deliberations depend on having our facts straight; we need to know the relevant facts in order to judge them. Whether the fetus is not importantly disanalogous to a skin cell is a question that can be decided only if we know the pertinent biological facts. In the other case, we need to know whether the welfare program will support the indolent or be wasted and not help the deserving, and whether innocent lives will be saved, whether innocent people who are unable to support themselves will be kept from living in abhorrent conditions, whether they will be given more than that, how costly the program will be, how rich the rich are, etc. Saving a drowning person might be analogous to the welfare program. But it might be disanalogous if the people it helps are not in as dire straits as the drowning person, for example.

By relying on the analogies and disanalogies between the puzzling case and the many cases about which we know the right judgment, we can find out whether X is more analogous to cases of obviously wrong action or more analogous to cases of obviously right action. This will determine the judgment about X that our values commit us to. We need to know the relevant facts, in order to draw any conclusion.

Note first that big-rulers don’t have this problem. One big rule organizes the various little rules, each in its place. (Whether it does so correctly is a different question, of course.) We should also remember that the little-rulers aren’t reasoning from cases; they’re reasoning from little rules. Jim can’t really talk about “analogies and disanalogies between the puzzling case and the many cases about which we know the right judgment” and whether the case in question is more or less analogous to something else.

The following argument may clarify this point: It is wrong to kill innocent people, if we bomb Iraq we will kill innocent people, therefore it is wrong to bomb Iraq. (Morally wrong, I should note: we aren’t discussing policy for the moment.) No one who has graduated the fourth grade can rest in this argument. Yet the case of bombing Iraq does not fit the little rule that it is wrong to kill innocent people approximately, or even closely. It fits the rule exactly. In fact, for any little rule you formulate, a case will fit it either exactly or not at all. To take one of Jim’s examples, if a fetus is a person then the rule that it’s wrong to kill innocent people applies, exactly. If a fetus is like skin cells then the rule that it’s not wrong to scrape off skin cells applies, exactly.

This leaves me with three questions for Jim. First, why not, as a little-ruler, stop at the first rule? We have found a perfect fit. Second, if we continue to pursue the question, when do we stop? How many little rules must we examine? Finally, since all applicable rules will fit the case perfectly, how do we adjudicate among them, lacking “more” or “less” analogous as a standard?

(Update: Jim replies.)

Feb 10, 2003

Mark Goldblatt attends an MLA conference and gives Professor Turtleneck the what-for:

As the session was winding down, I decided to ask a question. This is something I habitually do after such discussions; it’s a sadistic act, the academic equivalent of shooting fish in a barrel, and it speaks badly of my character. I directed my question to Professor Turtleneck though it could as well have been addressed to virtually anyone in the room. Recalling his notion of a “state of semi-erudition” that characterized those who support President Bush’s war on terrorism, I pointed out that many of Bush’s supporters would characterize the antiwar movement in much the same way. “As an epistemological matter,” I asked, “how do you deal with the fact that each side sees the other as uninformed? You don’t want to make the claim that your knowledge is somehow privileged, do you?”

There was an awkward, slightly panicky pause after I asked this.

Professor Turtleneck began his response by saying he’d cut a lot out of the paper he’d read and then segued into an utterly irrelevant tap dance about Adorno’s own epistemological presuppositions. He was interrupted after a minute by a man sitting behind me, who called out, “You’re not answering the question! You can’t deny that you’re making a claim to knowledge here!”

“I’m not denying that,” Professor Turtleneck insisted. “I’m only saying that Adorno would say . . .”

If Turtleneck’s intellectual father, so to speak, is Adorno, then his grandfather is Marx, who theorized that everyone’s thought is determined by his class, and thus his opponents were merely bourgeois apologists. Only Marx himself, and possibly Engels, are exempt from this iron law.

His great-grandfather is Kant, inventor of the noumenal (“real”) world, as distinct from the phenomenal world, which we actually perceive. Nothing can be known about the noumenal world except that it exists, and even that much only if you’re Kant.

His great-great-grandfather is Plato, who held that an ideal version of everything exists in some region too bright for human eyes to gaze upon directly — unless they’re Plato’s.

Archimedes said that, with a lever and a place to stand, he could move the world. Like all of his illustrious predecessors, all Professor Turtleneck lacks is a place to stand.

We may conclude, incidentally, that regression to the mean is as much a rule in intellectual families as in actual ones.

(Thanks to the Blowhards for the link.)

(Update: Cinderella and Jim Ryan comment.)

Feb 08, 2003

Of the many intelligent replies to the New York Sun’s editorial advocacy of censorship, Arthur Silber’s gets nearest to the heart of the matter:

In effect, the Sun announces its own, newer version of preemption: let’s destroy civil liberties now, and with absolute certainty, so as to avoid the possibility that those same civil liberties might be destroyed later. To identify the nature of this argument, is to realize how truly ludicrous it is, and it would be laughable if the matter were not so serious. Yet certain conservatives make this same kind of argument with profoundly disturbing regularity in connection with a compulsory draft, for example. They say: “But if we don’t forcibly conscript people, how will we be able to save our free country?” — thus ignoring the fact that by establishing the precedent of slavery yet again, and by establishing the principle that no one has the right to his own life, they have destroyed the very concept of a free country at its core — and that once this was accomplished, there would be nothing left to save.

(Update: Silber comments on the comments.)

Jan 29, 2003

Goodwin Liu has exposed, in the Washington Post and at greater length in the forthcoming Michigan Law Review, a flaw in the thinking of affirmative action opponents that he calls the “causation fallacy.”

Affirmative action is widely thought to be unfair because it benefits minority applicants at the expense of more deserving whites. Yet this perception tends to inflate the cost beyond its real proportions. While it is true that affirmative action gives minority applicants a significant boost in selective admissions, it is not true that most white applicants would fare better if elite schools eliminated the practice. Understanding why is crucial to separating fact from fiction in the national debate over affirmative action…

…Allan Bakke, a rejected white applicant who won admission in 1978 to the University of California at Davis’s medical school after convincing the high court that the school’s policy of reserving 16 of 100 seats each year for minority students was unconstitutional. For many Americans, the success of Bakke’s lawsuit has long highlighted what is unfair about affirmative action: Giving minority applicants a significant advantage causes deserving white applicants to lose out. But to draw such an inference in Bakke’s case — or in the case of the vast majority of rejected white applicants — is to indulge in what I call “the causation fallacy.”

This is a “fallacy,” according to Liu, because the vast majority of rejected white applicants would still be rejected, even without affirmative action. This fallacy works in mysterious ways. The lower the standards for black applicants, the more rejected whites clear the bar. The more rejected whites with better credentials than accepted blacks, the less certain it is that any particular white would have been admitted if there were no affirmative action. It follows, from Liu’s logic, that the lower the standards for blacks as opposed to whites, the less cause for whites to complain!
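
To see how the logic runs, consider a back-of-the-envelope sketch. The numbers below are hypothetical, invented only to illustrate the structure of Liu’s argument, not drawn from his data:

```python
# Hypothetical numbers, for illustration only: suppose ending preferences
# would free up 10 seats in the entering class.
seats_freed = 10

# Case 1: a small credential gap. Fifty rejected whites outscored the
# admitted minority applicants, so any one of them had at most a 20% chance.
print(seats_freed / 50)    # 0.2

# Case 2: a large credential gap. Five hundred rejected whites outscored the
# admitted minority applicants, so any one of them had at most a 2% chance.
print(seats_freed / 500)   # 0.02
```

The bigger the preference, the more thinly the freed seats are spread over the whites who outscored the admitted applicants, and the weaker any individual claim becomes.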

Liu makes a big deal of the fact that Gratz and Bakke very likely wouldn’t have been admitted regardless, and in any case couldn’t be sure. He then publishes the following table, of admissions rates at “five highly selective universities” (this is thanks to Ampersand, who takes it from Liu’s full Law Review article, which I haven’t read and which isn’t yet online):

SAT score    Black rate    White rate    White rate w/o AA
1500+        100%          63%           62.7%
1450-1499    75%           51.1%         50.8%
1400-1449    69.6%         39.9%         39.8%
1350-1399    80%           30.7%         30.8%
1300-1349    64.6%         25%           25.4%
1250-1299    73.9%         22.6%         23.8%
1200-1249    60%           19.3%         20.6%
1150-1199    55.5%         18.7%         20.9%
1100-1149    46.2%         13.3%         16.2%
1050-1099    40.6%         12.4%         15.5%
1000-1049    35.4%         9.6%          11.7%
< 1000       17%           3.3%          6.7%

One wonders, first, what the raw numbers are. They would be easy to include and would prove instructive. (The nice round numbers in the upper rows of the black column make me suspect that we are dealing with a vanishingly small sample size.) It is fishy that the percentage of whites admitted in the top SAT brackets declines without affirmative action. Ampersand comments that “[a] white student with a combined score below 1000 has a 96.7% chance of rejection from a selective school with affirmative action, and a 93.3% chance of rejection if aa didn’t exist. In either case, the odds are overwhelming she’ll be rejected; and the primary reason for the rejection is her poor SATs, not her race.” An opponent of affirmative action might retort that whites with such scores would have twice as good a chance at admission. This is a fine example of how to lie with statistics.
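
Both framings come out of the same bottom row of the table. A couple of lines of arithmetic (mine, not Liu’s or Ampersand’s) make the trick explicit:

```python
# The < 1000 SAT row of the table, framed two ways.
white_rate_with_aa    = 0.033   # 3.3% admitted with affirmative action
white_rate_without_aa = 0.067   # 6.7% admitted without it, on Liu's estimate

# Ampersand's framing: rejection is nearly certain either way.
print(1 - white_rate_with_aa)      # ~0.967
print(1 - white_rate_without_aa)   # ~0.933

# The opponent's framing: the chance of admission roughly doubles.
print(white_rate_without_aa / white_rate_with_aa)   # ~2.0
```

Same numbers, opposite rhetorical effect; which framing you prefer is exactly the question at issue.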

But the overwhelming question about this data is, how does he know? If Bakke and Gratz can’t prove that they would have been admitted in the absence of affirmative action, how can Liu establish the SAT distribution in its absence?

Ampersand also notes how whiny the AA plaintiffs are:

Anti-affirmative action lawsuits are not put forward by whites who would have gotten in to a selective college if only affirmative action didn’t exist. They’re put forward by whites who have such a strong sense of entitlement that they can’t admit they failed to gain admission because, on the merits, they didn’t deserve admission.

Well maybe, but Gratz and Bakke are paragons of virtue compared to Miranda, Escobedo, Gideon, and other plaintiffs in famous Constitutional cases. Spy magazine once ran a little story, “Dirtball Heroes of the Constitution,” profiling such plaintiffs, and there isn’t an AA plaintiff who would even come close to qualifying. In any case, aren’t you supposed to take the plaintiff as you find him?

This whole business of percentages disguises the fundamental fact that for every black applicant who is admitted because of affirmative action there is a white applicant who is rejected for the same reason. We may not know which white applicant, but that fact is immaterial. Liu suggests “rethinking the conventional view that a race-conscious admissions policy pits whites against minorities in a zero-sum game,” but a zero-sum game is precisely what it is, and what it has to be.

Jan 23, 2003

Philosoblogger Jim discusses slippery slopes today:

It is obviously an unjust society that lets cripples and children die of starvation and exposure. I don’t see how that is a misuse of the term “unjust” in ordinary usage. (I’m not arguing all of the unfortunate can be helped; that’s Paul Wellstone-ism, not my view.)

No one has ever shown that the slippery slope to socialism exists. You can imagine slippery slopes anywhere. “One drink, and you’ll inevitably become an alcoholic.” “Give the state the power to imprison citizens, and it will eventually imprison people arbitrarily, en masse, with no justification.” America doesn’t let cripples die, and it still isn’t socialist. We use reason and debate to stop ourselves from slipping.

The argument is certainly not respectable as he puts it. In my family we used to call it The Fatal Glass of Beer Theory, after a W.C. Fields short whose plot you can imagine. It is easy to do something in moderation; people, and even governments, manage it all the time.

Slippery slope theorists, however, rarely make the argument in this bald form, and if they do it isn’t really what they mean. They are asking for a principle, an intellectually tenable distinction, something beyond “less” and “more.” One can drink so long as it doesn’t seriously impair one’s ability to function. The state can imprison people so long as they have violated the rights of others. The state can seize assets from its citizens to keep cripples from dying so long as — well, this time it’s not so simple. To ask for a distinction between seizing assets to help some of the unfortunate a little and seizing them to help all of the unfortunate a lot — between Jim’s position and “Wellstone-ism” — seems to me a perfectly respectable demand.

(Update: Jim answers.)

(Another: Eugene Volokh has posted a draft of his forthcoming Harvard Law Review article on this very subject.)

Jan 23, 2003

Valdis Krebs performed a simple experiment. He looked at the Amazon “buddy lists” of several dozen top-selling political books and graphed the results. (Link from BoingBoing.) The result is two clusters, as one would expect, but with one book in the middle, with “buddies” on both sides: What Went Wrong by Bernard Lewis. (Also, arguably, The Clash of Civilizations by Samuel Huntington.)

The “cocooning” controversy could be resolved the same way. Steven Den Beste theorized last year about blog clusters but without data to back him up. So the assignment, for someone less lazy than I am, is to create a chart, after Krebs, for blogs instead of books, using for data the top 100 blogs and, say, the first ten blogs in their neighborhoods at BlogStreet. This would be imperfect but indicative. How many clusters would there be? Who would be in the middle? Do people often read blogs that they disagree with, or are blog readers, like book readers, blinkered by confirmation bias?
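
For the ambitious, here is a rough sketch of the exercise in Python with networkx. Everything in it is a placeholder: the blog names are invented, and in practice you would have to scrape the top-100 list and each blog’s BlogStreet neighborhood yourself.

```python
import networkx as nx
from networkx.algorithms import community

# Placeholder data: each key is a blog, each value is its "neighborhood."
neighborhoods = {
    "blog_a": ["blog_b", "blog_c"],
    "blog_b": ["blog_a", "blog_c"],
    "blog_c": ["blog_a", "blog_b", "blog_x"],
    "blog_x": ["blog_c", "blog_y", "blog_z"],
    "blog_y": ["blog_x", "blog_z"],
    "blog_z": ["blog_x", "blog_y"],
}

# An edge means one blog appears in the other's neighborhood.
G = nx.Graph()
for blog, neighbors in neighborhoods.items():
    for other in neighbors:
        G.add_edge(blog, other)

# How many clusters?
clusters = community.greedy_modularity_communities(G)
print(len(clusters), [sorted(c) for c in clusters])

# Who sits in the middle? High betweenness centrality marks the bridges,
# the Bernard Lewis figures of the blog world.
centrality = nx.betweenness_centrality(G)
print(sorted(centrality, key=centrality.get, reverse=True)[:3])
```

With real data, the cluster count would answer Den Beste and the bridge list would answer the cocooning question.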

Jan 17, 2003

A while back I took Objectivism to task for its argument in favor of free will. That argument is still lousy, for the reasons I supplied. But in its stead I proposed an equally bad argument, that Newcomb’s Paradox renders incoherent the concept of a superbeing with the ability to predict human behavior:

Consider the following thought experiment, known after its inventor as Newcomb’s Paradox: You have two boxes, A and B. A contains a thousand dollars. B contains either a million dollars or nothing. If you choose A, you get the contents of A and B. If you choose B, you get the contents of B only.

Imagine there is something — a machine, an intelligence, a mathematical demon — that can predict your choice with, say, 90% accuracy. If it predicts you choose A, it puts nothing in B. If it predicts you choose B, it puts the million in B. Which do you choose? (Just so you don’t get cute, if the machine predicts you will decide by some random method like a coin flip, it also leaves B empty.)

The paradox lies in the absolutely plausible arguments for either alternative. Two accepted principles of decision theory conflict. The expected utility principle argues for Box B: if you calculate your payoff you will find it far larger if the predictor is 90%, or even 55%, accurate. But the dominance principle, that if one strategy is always better you should choose it, argues for Box A. After all, the being has already made its decision. Why not take the contents of Box B and the extra thousand dollars?

I would argue that paradoxes cannot exist and that the predictor (and therefore, determinism) is impossible.
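
For what it is worth, the expected-utility half of that conflict is a short calculation; the sketch below is my arithmetic, using the figures from the setup above.

```python
# Expected payoffs in the setup above, as a function of the predictor's
# accuracy p. Choosing A takes both boxes; choosing B takes box B alone.
def expected_payoffs(p):
    # Choose A: with probability p the predictor foresaw it and left B empty.
    take_both = p * 1_000 + (1 - p) * 1_001_000
    # Choose B: with probability p the predictor foresaw it and filled B.
    take_b_only = p * 1_000_000
    return take_both, take_b_only

print(expected_payoffs(0.90))   # roughly ($101,000 vs $900,000)
print(expected_payoffs(0.55))   # roughly ($451,000 vs $550,000)
```

Box B wins in expectation for any accuracy above roughly 50.05 percent, which is why even a 55 percent predictor is enough; the dominance argument, by contrast, pays no attention to p at all.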

I ran this past the estimable Julian Sanchez, a far better philosopher than I, who answered as follows:

You are posed the problem of predicting the output of [a computer] program TO the program. The program asks you: predict what I will respond. And the program is perfectly deterministic, but structured such that once it takes your prediction as an input, the (stated) prediction will be false, even though, of course, you can PRIVATELY know that given your input, it will respond in some different way. (For simplicity’s sake, assume the only outputs are “yes” and “no” — if you “predict” yes to the program, you know that in actuality, it will output “no” — the opposite of your stated “prediction.”) This isn’t perfectly analogous to Newcomb’s paradox (with the two boxes, etc.) but I think the point holds good. It looks like something is a problem with free will — if some superbeing predicted our behavior, we could always deliberately act to falsify the prediction once we knew it. But as the example with the simple computer program shows, that’s not an issue of free will, it’s a problem with feedback loops — with making a projected future state of a system an input into that state.
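
Julian described the program only in outline; a minimal sketch of it, assuming yes/no outputs, would look like this:

```python
# A toy version of the self-falsifying program: it takes your stated
# prediction of its output as input and returns the opposite, so the stated
# prediction is always wrong, even though the program is fully deterministic
# and you can privately work out exactly what it will do.
def contrarian(stated_prediction: str) -> str:
    return "no" if stated_prediction == "yes" else "yes"

print(contrarian("yes"))   # "no": the stated prediction was false
print(contrarian("no"))    # "yes": false again
```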

The light dawned: it’s the interaction that’s the problem, not the superbeing himself. Julian is just as good with other people’s bad arguments and you ought to read him regularly.

Jan 15, 2003

1. No psychologizing. Rachel Lucas was widely praised for this analysis of Michael Moore’s inner life. It was funny, and it might even be true. But it is irrelevant to Michael Moore’s arguments, which are bad, and even to Michael Moore’s behavior, which is worse. By their fruits shall ye know them. There is a scene in the very funny movie Office Space where one character, having been drawn by another into an absurd criminal scheme, turns to him and says, “You are a very bad person, Peter.” That’s what I think of Michael Moore: he’s a very bad person, who believes very stupid things. There the matter ends.

2. No attribution of motives. This is a related matter, and I’ve discussed it on a large scale with Bush and Iraq. Many of the loonier anti-war arguments, like the accusations about Bush avenging his father or setting up his Houston cronies with a sweet deal, rely on attribution of motives and can be dismissed out of hand. On a tiny scale it happened to me just yesterday. The usually superb Colby Cosh and I were arguing about Pulp Fiction and he accused me of pretending not to like it to be au courant. Now how would he know? And even if he were right the fact would neither strengthen Colby’s argument nor weaken mine.

3. No tribal pleading. Nothing is more tiresome than constant shilling for one party or the other. The lesson of Sid Blumenthal, once a respected and interesting journalist, later a Democratic party shill, and finally a hired flack, has been lost on such people. (Paul Krugman is at stage 2 of the Blumenthal trajectory.) Whoever is tempted to this should remember that it was his loyalty to certain ideas, one hopes, that made him loyal to a party, not the other way around.

Dec 23, 2002

Steven Den Beste writes today about consequentialism. As he says, he is not himself a consequentialist; he finds all utilitarian criteria, and all other ethical rules, inadequate. This makes him an intuitionist, as I have written before. Nevertheless he has residual sympathy with some forms of utilitarianism and understates the problems with the theory.

Consequentialism of any variety founders on commensurability. You can’t compare degrees of satisfaction in different people because there are no happiness-units. In fact it’s even impossible to compare degrees of satisfaction in the same person. I prefer pears if I haven’t eaten any fruit, but if I’ve just eaten three pears then I prefer apples. So for me, what’s the relative value of an apple and a pear? Even I, with direct access to my own thoughts, can supply no better answer than “it depends.” And even if preferences were constant they would certainly not be quantifiable. This clips the wings of the consequentialist at the start. His ethics demand that he perform an impossible task.

Mill and Bentham addressed this differently. Bentham argued that happiness is internal and can be evaluated only from each person’s own point of view. Unfortunately this leaves the utilitarian actor with the problem of getting inside of everyone’s head. It also has a nasty subjectivist edge. Consider a sadist, who enjoys inflicting suffering. An act that inflicts suffering scores extra, in Bentham’s version of utilitarianism, because it gratifies the sadist.

Mill found this conclusion insupportable, and famously evaded it by arguing that there are “higher” and “lower” pleasures, and that higher ought to get more weight. Which is fine: it’s just not utilitarianism any more. The standards for higher and lower have been smuggled in from outside the theory.

Rule Utilitarianism, a further refinement, fuses Bentham with Kant:

Utilitarianism says that in any situation you should act in a way which maximizes happiness. Rule Utilitarianism says that you should follow a rule which, if always applied by everyone in a similar situation, would maximize happiness even if it does not do so in this particular instance.

Den Beste prefers this to strict utilitarianism because it’s friendlier to his ethical intuitions. Even in pure utilitarianism rules receive consideration: if you break a promise, and people find out, one of the consequences is that your reputation is damaged. (There are also secondary consequences: the taboo against promise-breaking is weakened.) But Rule Utilitarianism says you ought to keep your promises, even if nobody else could possibly find out. It says you ought not to overgraze your cattle on public lands, whereas pure utilitarians would have no such scruples.

Of course Rule Utilitarianism has its own problems, as Den Beste acknowledges:

My biggest problem with Rule Utilitarianism is that in practical application it’s much too susceptible to rationalization. Part of the problem is in deciding just how specific you’re going to be about “similar situations”. Different levels of scope regarding “similar” will lead to different calculations of the optimal rule, and so if you like one of them but not another it’s just too easy to decide that the scope should be such as to make that the rule you follow.

Actually, in the classical formulation of rule utilitarianism, if more than one rule applies, you’re supposed to revert to pure utilitarianism rather than choosing your own rule. Still, the scope problem is more complicated than Den Beste lets on. It’s extremely easy to devise rules that would maximize happiness if universally followed. Leftists play this game all the time. “Imagine if they gave a war and nobody showed up.” “Imagine everybody showed respect for all living things.” Of course the trouble with most such rules is that they only make sense in a world where they are universally followed, a world that does not exist. Pacifism, which I join Den Beste in abhorring, is a perfectly proper rule under rule utilitarianism.

Because it is so easy to devise these rules, many will be devised, so many that more than one will apply to any possible situation. Then we have the choice of reverting to pure utilitarianism, with the consequences outlined above, or “rationalizing,” as Den Beste describes it, by choosing the rule that’s closest to our moral intuition.

If you take rule utilitarianism seriously, you wind up with a bunch of unrelated edicts and no guidance for applying them in a particular situation. Something like, say, the Ten Commandments. Oddly, the “scientific” and religious ethics turn out to have a great deal in common.