The Grand Coherence, Chapter 6: The Grue Problem, the Reality of Ideas, and the Meaning of Knowledge
This post is part of the book The Grand Coherence: A Modern Defense of Christianity. For all the links in the book, see this introductory post.
Here’s the problem: the assimilation of evidence and the pursuit of a coherent worldview sometimes require changing your theories. But if you’re allowed to change your theories at will, you may never learn anything, because there’s always an infinite array of possible theories, and you’ll just bounce around among them, without accumulating sufficient evidence to believe in any.
A silly example will illustrate the point most easily.
Suppose your friend Coinflip Theorist has a notion that anytime you flip a coin, it comes up heads. You figure it shouldn’t be hard to dissuade him. You flip a coin a couple of times, and get tails. He’s shocked. But he soon recovers. He knows there’s a feedback loop between worldview and evidence, and that the pursuit of a coherent worldview often requires readjustment of one’s theories. So he says, “Thanks for enlightening me, friend! I was wrong to think that all coin flips come up heads. Now I know that all coin flips come up heads, except that one.” You sigh, flip a couple more times, and get tails. Again he revises his theory, this time assuming two exceptions rather than one. No matter how many times you flip and get tails, he still expects all future coin flips to turn up heads.
Why exactly is Coinflip Theorist wrong?
Here’s another example, almost as silly but much more famous. It’s adapted from the philosopher Nelson Goodman. Two people, call them Commonsense and Contrarian, are conversing.
“Grass is green,” says Commonsense.
“No, grass is grue,” says Contrarian.
“What’s grue?” asks Commonsense.
“Well, you might best understand it by saying it means green until October 11, 2049, and blue thereafter.”
“So you think all the grass is suddenly going to turn blue once and for all on October 12, 2049?!” asks Commonsense incredulously.
“You can put it that way if you like,” says Contrarian. “I, of course, would rather say that the grass will continue to be grue.”
“But why,” asks Commonsense, “would you expect the grass to all turn blue, or remain grue as you put it, on that particular day?”
“I expect it by simple induction,” explains Contrarian. “Grass has always been grue, so I expect it to stay grue. Of course, it might not. I can’t rule out the possibility that all the grass will suddenly turn bleen on October 12, 2049. But I have no reason to think that’s likely. My default assumption is that current patterns will persist, and grass will be just as grue on October 12, 2049 as it is today.”
Now, if Contrarian is not refuted somehow, the consequences for science are catastrophic. Every scientific theory predicts the future from the past by extrapolating from patterns. If the terms and concepts comprising the patterns can be arbitrarily redefined, then a limitless range of future patterns can be forecast from any past data. That means that science can’t settle anything. Any prediction is as good as any other.
If Commonsense thinks that oranges are round, nothing’s to stop Contrarian from thinking they are “squound,” meaning “either square or round.” Contrarian might advise Commonsense, when shopping for oranges, to choose square ones, because they stack and aren’t prone to rolling. Commonsense would object that there are no square oranges. “Oh really? Well, you might be right. All I’ve noticed about the shape of oranges is that they’re always squound.”
Contrarian might warn against hiking in a particular forest, because of dangerous beasts. Commonsense, surprised, asks what kind of dangerous beasts. “When I was there,” says Contrarian, “I saw lots of liobits—you know, what some people call ‘either lions or rabbits.’ And some liobits will bite your arm off.”
To reframe the grue problem in Bayesian terms will help us achieve maximum clarity. Let A be the proposition that emeralds are green. Let B be the proposition that emeralds are grue. (I used grass at first to highlight the everyday relevance of the grue problem, but of course grass is sometimes yellow or brown. Emeralds, which Goodman used, are actually a better example, since they’re always green.) Let E1, E2, E3, etc. be observations of emeralds, which are green and also grue, since at the time of writing, October 12, 2049, or let’s say, “time T,” is in the future. If we are initially uncertain about emeralds, P(A) and P(B) might initially be 50% or less. But P(E1|A) and P(E1|B) will be higher than P(E1|~A) and P(E1|~B), so observing green emerald E1 will increase Bayesian confidence in A and B, as will E2, E3, etc., until the subject is highly confident in both A and B. The problem is that A and B are logically incompatible, since they make opposite predictions after time T.
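To make the arithmetic concrete, here is a minimal sketch of the updating in Python. The likelihoods are illustrative assumptions of mine, not anything in Goodman's argument: suppose a sampled emerald is certain to look green before time T if the hypothesis in question is true, and only 50% likely to look green if it is false.

```python
# A toy illustration, not part of Goodman's argument: both hypotheses are
# confirmed equally by every green emerald observed before time T.
# The likelihoods below (1.0 and 0.5) are assumed for illustration only.

def update(prior, p_obs_if_true, p_obs_if_false):
    """One application of Bayes' rule: P(H|E) = P(E|H)P(H) / P(E)."""
    numerator = p_obs_if_true * prior
    evidence = numerator + p_obs_if_false * (1 - prior)
    return numerator / evidence

p_a = 0.5  # P(A): all emeralds are green
p_b = 0.5  # P(B): all emeralds are grue (green before T, blue after)

for i in range(1, 6):                 # observe five green emeralds before T
    p_a = update(p_a, 1.0, 0.5)       # the observation confirms A...
    p_b = update(p_b, 1.0, 0.5)       # ...and confirms B just as strongly
    print(f"after emerald {i}: P(A) = {p_a:.3f}, P(B) = {p_b:.3f}")

# Both posteriors climb toward certainty in lockstep, even though A and B
# make opposite predictions about every emerald observed after time T.
```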
So far, Bayesian rationality has been no help at all. Or rather, it has only clarified the problem. It has done nothing to solve it. In general, logic can’t help us here.
It is the skeptic’s revenge. This is why philosophers know, as ordinary people don’t, that the abstract skeptic is a formidable opponent. In the grue problem, the old induction skepticism of David Hume comes back in a new guise. Goodman knew this well enough. He called his puzzle “the new riddle of induction,” alluding to Hume’s original problem of induction. If some escape from it is not found, all natural science and all common sense stand refuted, and we can’t so much as get out of bed in the morning, because the floor might be not solid but solpit, meaning solid until yesterday and, today, a bottomless pit masked by the illusion of a floor. Oops! Noooooooo!
Here’s an example of the grue problem that is not silly at all, to show how high the stakes are.
Here comes Commonsense again, stating the obvious. “All men are mortal,” he says.
But this time his interlocutor’s name is Christian. “No,” says Christian. “All men are mortal-if-sinful.”
“What is mortal-if-sinful?” asks Commonsense.
“It means that a person is subject to permanent death if he or she has ever done or thought anything that is wrong, meaning prideful, selfish, envious, spiteful, and so forth,” explains Christian.
“Why would you think that people are mortal-if-sinful?” asks Commonsense.
“Well, that’s a long story, but let’s just say for now that I’ve observed that they are. Every sinful person that I’ve known or heard of has permanently died, unless they’re just too young to have gotten around to it yet.”
“Yes,” says Commonsense, “but why do you think they’re mortal-if-sinful rather than simply mortal?”
“Well,” asks Christian, “why do you think they’re mortal rather than mortal-if-sinful?”
“Well,” says Commonsense, “I guess mortal-if-sinful just seems like an odd thing for people to be.”
“Why?” asks Christian. “What do you mean by odd? And how do you know you’re right about what is odd and what isn’t?”
“Well, never mind about that then,” says Commonsense, sidestepping a difficult problem, as is his wont. “So you’re saying that if someone, somehow, led a perfect life, they wouldn’t necessarily die?”
“Not permanently,” says Christian. “Death couldn’t hold him. And there was one.”
“Who?”
“Jesus of Nazareth. He was a man without sin, and though He let Himself be killed, death could not hold Him, so He shattered death's power and rose from the dead, as we know from many eyewitnesses.”
“Isn’t it more likely that the eyewitnesses were mistaken somehow?” suggests Commonsense. “Doesn’t the proposition that all men are mortal have a pretty strong track record of being true?”
“By my reckoning,” answers Christian, “it is the proposition that all men are mortal-if-sinful that has a strong track record of being true.”
Setting up a Bayesian updating problem from this example again promotes clarity. Let proposition A be that all men are mortal; proposition B, that all men are mortal-if-sinful; and propositions C1, C2, C3, etc., that specific humans died. From any starting points of Bayesian confidence P(A) and P(B), C1, C2, C3, etc. would confirm, and therefore raise confidence in, both. But the two beliefs come apart with respect to the mortality of a sinless man: A requires it, while B says nothing about it. Confronted with good evidence that a sinless man rose from the dead, someone highly confident in both A and B would be pulled in opposite directions, since A supplies a strong reason to disbelieve the evidence while B supplies none.
I haven’t read all the efforts of philosophers to address the grue paradox, but my own takeaway is that the only escape from the insane indeterminacy that the grue puzzle opens up in all inductive reasoning is to accept that ideas are real, in something like the way Plato insisted they were, and to make a policy of accepting as legitimate candidates for Bayesian confirmation only propositions composed of real ideas. We know the real ideas, like green, from the fake ones, like grue, by a faculty that may be called “intuition.”
The grue skeptic, then, is answered thus. Yes, as a matter of mere logic, the available evidence is as compatible with grass being grue as with grass being green. But we treat the available facts as evidence that grass is green, not grue, because green is a real idea and grue is not. “Grass is green” is a proposition composed of real ideas, which therefore deserves to be a candidate for Bayesian confirmation, while we dismiss “grass is grue” without even needing to consult evidence, on the authority of intuition.
What if grass really is grue? Would it be impossible for us to learn this fact because we rejected the concept a priori? Not exactly.
“Grass is blue” is a legitimate proposition, based on real ideas, acceptable for Bayesian confirmation. At present, all the data are against it. But if after some time T all the grass really did turn blue, this reality could be recognized by updating on the proposition “grass is blue.” At that point, though, it could be recognized that neither “grass is green” nor “grass is blue” really performed satisfactorily over the whole dataset. What would be needed are new propositions “grass was green before T” and “grass has been blue since T.” So far, there would be no reason to invent a concept like grue. But if many other green things besides grass also turned blue at time T, it would start to look useful to coin a general concept of “grue” to describe them all. The concept would be empirically motivated and weird, and science would probably see it as a placeholder label for a phenomenon still in need of explanation, since we would still know by intuition that grueness is not essentially a thing. Only when new theories were developed that explained the green-to-blue shift in terms of real ideas, for example if we learned that a virus had swept through all plant life and altered chlorophyll so that its color changed from green to blue, could we feel we had a satisfactory description of the world.
Now, this idea that ideas are real, and that we can distinguish real ones from fake ones through our powers of intuition, will probably strike many as weird. In the case of green vs. grue, no one will doubt which is the real idea and which is the fake one, if we have to choose, but we might be reluctant to attach much importance to what might seem like a somewhat arbitrary, merely aesthetic judgment. Some might be reluctant to assert positively that green is real, or that grue is fake.
Other cases could be constructed where distinguishing real from fake ideas would be much harder, and the issue of whether people are mortal or mortal-if-sinful can serve as a case in point, though it's unusual in being religiously polarizing. To a typical unchurched person, mortal will seem like a clearly real idea, while mortal-if-sinful will seem much more like an arbitrary construct. Even for most Christians, mortal-if-sinful may not seem like a very easy idea to wrap one's head around, but if sin and death are conceived in a certain way, it may come to seem intuitive that death is a consequence of sin, at which point “mortal-if-sinful” might seem like a more real idea than merely “mortal.”
I think a common impulse, in response to the difficulty of discerning real from fake ideas, would be to avoid settling such questions based on mere intuition, and turn to the evidence instead. And sometimes it is possible to strengthen one's intuition by contemplating the world, and to let the data help one decide whether an idea should be taken seriously or not. But the grue paradox shows the limits of that. We can't derive any knowledge from evidence without simultaneously using ideas discerned by intuition. Plenty of people may think they are doing so, but they are simply naive about how their thought processes are working, as a modern-day Socrates could quickly reveal with a few well-chosen questions. Reason depends absolutely on the reality of ideas and the authority of intuition, without which we could never know enough even to get out of bed in the morning.
Plato had a theory of ideas that is rather odd, and it's not clear how seriously he meant it, yet it might be needed as medicine for an age that has long underappreciated ideas, and tried to take refuge from the difficulty of them in materialist illusions. Plato seems to have envisioned ideas as comprising a realm of eternal forms, of which material things were something like copies or echoes, and imperfect ones at that. To illustrate the relationship between the material world and the realm of the eternal forms, he offered his famous parable of the cave.
In Plato’s parable, a group of people are prisoners in a cave, staring at a wall forever. On the cave wall, there are dancing shadows. Behind them, where they cannot see, people are marching around a fire carrying shapes modeled on the real things outside the cave -- cows and trees and birds and mountains and so forth -- and the shadows produced by these images are the only things that the prisoners in the cave know. They talk about these things amongst themselves, and make words for them. And so they speak of cows, trees, birds, mountains and so forth, but they can't really attach more meaning to these words than the shapes they see on the wall. Possibly, in moments of insight, they feel that there's another significance in the forms they're seeing, and for a moment dimly guess at what cows and birds might really be. But generally, all that's in their minds are shadows.
Then one day, one of the prisoners gets out. He cuts his bonds, walks out of the cave, and after his eyes adjust to the light, he looks around in wonder at the real, bright, solid world. And he sees what cows, birds, trees, mountains and all the rest of it really are. So he goes back to tell the other prisoners. Are they grateful? Not at all! They just think he's crazy, and conclude that it's no use going up there, out of the cave. All it does is ruin your eyesight.

The man who escaped from the cave and saw reality has some connection with Plato’s ideals of the philosopher and of education, but it need not be read in too elitist a way. All of us have had this exciting experience of seeing the world differently after we learn something new, of leaving behind the cave of ignorance for the light of knowledge. Science claims to reveal the true natures of many things in ways that transform our view of the whole world, and so, in a different way, does Christianity. As an old hymn puts it: “I saw the light, I saw the light / No more darkness, no more night.” And yet sometimes the enlightened are tempted to fall back into old bad habits. Plato’s Republic was meant as a metaphor for the soul, and sometimes the mocking prisoners in the cave may be elements within our own minds that resist and disdain new insights we gain and moral endeavors we undertake.
To say, as Plato sometimes seems to, that only ideas are fully real while material, physical things are merely derivative, like shadows of cut-out shapes, is a very strong claim to make about the comparative reality of ideas! I might not go quite that far. And yet I think it turns out, on close examination of many questions, that we need something a little like Plato's idea of ideas in order to make sense of how we speak and reason every day. If you and I are thinking about cats, what does that mean? What is this aboutness? And what is this “cats” to which your thoughts and mine are in the same relation? “Thinking about” is clearly not a physical relation like touching. “Cats” here does not mean any particular set of cats. Rather, it must be the idea of cats, the eternal, universal form of cats, of which specific cats are mere instances, to which the aboutness of your thoughts and mine both point. And so if we agree in saying that “cats are beautiful,” we praise even cats we have never seen before, but which instantiate the universal idea of cats. We could still think about cats if they went extinct, or if we forgot they had ever been real and believed them to be only a myth, and we can think about things that never existed, like unicorns. But while unicorns aren’t real, the idea of a unicorn is.
We can only apply numbers to the world if we first categorize things. Counting two of a thing, e.g. two cats, depends on those things instantiating the same idea. Obviously, all this raises many more questions, and if this were really a book of philosophy, I’d have to wrestle with them a lot more. For now, the point is that wise old Plato knew the answer to the grue problem that modern philosophers, burdened with a bias for making the intellectual world safe for scientific materialism, have struggled to find. “Green” has a place in Plato’s lofty realm of eternal forms; “grue” does not. Therefore, statements based on the idea of green are legitimate candidates for Bayesian confirmation, while statements based on the idea of grue are not. Thus reason is saved.
If we let some version of Plato’s theory of eternal forms regulate what theories we accept as legitimate candidates, we can accumulate evidence in favor of a worldview through Bayesian updating, without letting the endless proliferation of ad hoc new theories distract and stultify and interrupt us. We have an answer for Coinflip Theorist: his exception-ridden theories do not spring simple and glorious from the Platonic realm of ideas, but are unscrupulous hacks to fit awkward data, and therefore fail to be an acceptable basis for generalization. He needs more Platonic discipline about what theories of the world to propose. The little girl with the teachers and grandmothers could benefit too from the insight about the peculiar kind of simplicity that theories ought to have. Our Platonic passport requirement for new propositions seeking admission to the truth tournament in our minds is akin to the vague yet important principle of “Ockham’s razor,” which favors acceptance of the simplest theory that fits the facts.
Some readers may keep feeling that the grue paradox is just too silly to have the importance I'm attaching to it. Who would make such a mistake? One writer recently did. Matt Ridley, a boisterous champion of a kind of evolutionary progressivism mixed with market capitalism, made it a principal theme of his recent book, The Rational Optimist, that “ideas have sex.” This sounds like a gratuitously pornographic absurdity, but it’s actually a highly sophisticated, far-reaching, and productive hypothesis, with which many smart people sympathize in one way or another. It’s true that the propagation of genes through the human race, mingling and sharing and multiplying, bears some resemblance to the propagation of ideas among human minds, as they interact, sometimes competing with or undermining each other, sometimes combining into systems. But ideas do not have sex, even metaphorically. If they did, ideas like grue, produced by green “having sex with” blue, would be exactly what you’d get. Ideas could get intermingled and muddled in any old way, causing theories to proliferate infinitely, until nothing could be settled because every fact would confirm a thousand incompatible theories. If we allow concepts like grue to make claims for Bayesian consideration, inductive reasoning becomes radically indeterminate. If ideas had sex, we’d all go mad.
It's time to bring closure to the project we began in chapter 2. We set out to explain why there is so much commonsense agreement, even as people disagree so much about religion. What have we learned?
The answer we offered in chapter 2, that commonsense agreement comes by learning from evidence in Bayesian fashion, is still valid as far as it goes, but it needs to be qualified in a crucial way. If people begin from a common stock of ideas and theories that they treat as candidates for Bayesian confirmation, then sufficient evidence will bring them into agreement. A trace of arbitrary, pre-evidential priors can never be eliminated, but it becomes unimportant if evidence is abundant. Where evidence is slight, priors determine belief, and such guesswork need not converge on agreement. And moderate amounts of evidence not only may fail to induce agreement, but, counterintuitively, may induce diametric disagreement where close, though not quite perfect, agreement prevailed, because of the nonlinearity of the process by which evidence changes people's minds. This weird twist in Bayesian belief dynamics can be proved mathematically to be possible, but the only example that I know of where sharp disagreement about a straightforward factual claim persists in spite of abundant evidence is the resurrection of Jesus. The commonest case for people in the same culture is for abundant evidence to lead to consensus.
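For readers who want to see how such a flip is possible on paper, here is a minimal numerical sketch. Both the numbers and the mechanism I have assumed (two observers whose different background theories lead them to read the same piece of evidence in opposite ways) are illustrative only, one possible route to divergence rather than a reconstruction of any particular dispute.

```python
# A hedged numerical sketch (invented numbers, not taken from the text) of
# how one piece of evidence can turn near-agreement into sharp disagreement.
# Assumed mechanism: the two observers hold different background theories,
# so they assign opposite likelihoods to the same evidence E.

def posterior(prior, p_e_if_h, p_e_if_not_h):
    """Bayes' rule for hypothesis H after observing evidence E."""
    numerator = p_e_if_h * prior
    return numerator / (numerator + p_e_if_not_h * (1 - prior))

prior_1, prior_2 = 0.48, 0.52   # close, though not quite perfect, agreement

# Observer 1 thinks E is four times likelier if H is true; observer 2,
# reading E through a different theory, thinks the reverse.
post_1 = posterior(prior_1, 0.8, 0.2)    # rises to about 0.79
post_2 = posterior(prior_2, 0.2, 0.8)    # falls to about 0.21

print(f"before the evidence: {prior_1:.2f} vs. {prior_2:.2f}")
print(f"after the evidence:  {post_1:.2f} vs. {post_2:.2f}")
```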
In addition to learning from evidence, people and cultures often change their ideas and theories, as they try to achieve logical consistency or coherence. Bayesian processing of evidence provides little protection against self-contradiction. On the contrary, it can lead a person into self-contradiction, when evidence simultaneously confirms two theories that make the same predictions over the set of phenomena observed, but opposite predictions about other cases. So progress towards truth depends not only on processing evidence voraciously, but also on continually auditing the set of generalizations in which one believes for consistency. Cultures and other communities of belief are important as ways to cast a wide net for evidence, but even more, as ways to multiply cognitive power for the difficult task of critical reflection in pursuit of coherence. Many brains are better than one.
My emphasis has been on “agreement” rather than “knowledge,” but of course, the goal is not merely to agree, but to be right, to know. “Knowledge” may seem to deserve a careful and deliberate definition, yet I'm not sure that it really does. Suppose you define “knowledge,” and then discover that while you have many confident beliefs that are true, none of them meet the criteria to qualify as knowledge. Would that matter? It's not clear why it should. And so I define the word “knowledge” rather casually to mean (a) beliefs formed in a way that a reasonable culture considers reliable that also (b) happen to be true. A reasonable culture, in turn, is a group of regularly communicating people that diligently engages both in critical reflection in pursuit of logical consistency and coherence, and in the collection and processing of evidence. In the limiting case, the group could consist of one member, so individualistic knowledge is possible, but the collectivist emphasis in the definition is deliberate, for in practice, knowledge of particulars is always dependent on knowledge of generalizations, and individuals’ knowledge of generalizations is usually derived from a culture.
There is another requirement. We learned from our study of the grue problem that if any general knowledge at all is to be possible, true generalizations must consist of real ideas, or so to speak, Platonic ideas, which we have access to by means of our faculty of intuition. That's another way of saying that the world is intelligible. A kind of necessary faith comes into play here as the first precondition of reason: faith that the world is intelligible, law-abiding, sensible, rational, patterned in a way that we can understand, knowable by our minds. If grass were grue, the world might still be intelligible in other respects and most of the time, but if all properties of all things were thoroughly grue-like at every moment, the world would be a mere chaos. Reason might still deal in mathematical abstractions in the theater of the mind, but it would be exiled forever from the external world. And people may go through dark nights of the soul in which the world seems like a mere chaos. But most of the time, the sunny daylight of faith in the intelligibility of the world shines either in a blue sky, or at worst, through clouds of skepticism and confusion that dim but do not darken it.
And the doctrine of the intelligibility of the world, which provides the foundation of all practical and experiential knowledge, also supplies the method for fortifying the knowledge we have and getting more of it. First, test your beliefs continually through critical reflection and attentiveness to experience. This will lead to crises, occasional or frequent depending on how much correction you need and whether the circumstances for learning are propitious. In the face of crises, intuition must work overtime, seeking patterns amidst seeming chaos. The patterns must consist of real, Platonic ideas. That restriction, which we instinctively apply by the laws of the mind, is indispensable for limiting the options to a manageable number. When we come across an intuitively appealing pattern that fits available data and offers a way out of the crisis, that becomes a theory or hypothesis. The force of intuition is likely to make us invest a good deal of confidence in a new theory, perhaps in a sense too much, since new theories often turn out to be wrong. But that might not much matter in practical questions, for evidence will soon refute false hypotheses. The turn of evidence comes after theory, and theories well vindicated by evidence comprise our knowledge.
Let the question of how we know be considered settled for purposes of this book. Next, what do we know? Let's consider first the claims of science, inasmuch as they affect our central question of whether Christianity is true.