The Best of All Possible Brains?

November 20, 1994, Sunday, Late Edition - Final
By Hilary Putnam

SHADOWS OF THE MIND: A Search for the Missing Science of Consciousness. By Roger Penrose. Illustrated. 457 pp. New York: Oxford University Press. $25.

IN 1961, John Lucas -- an Oxford philosopher known for espousing controversial views -- published a paper in which he purported to show that a famous theorem of mathematical logic called Godel's second incompleteness theorem implies that human intelligence cannot be simulated by a computer. Roger Penrose is perhaps the only well-known present-day thinker to be convinced by Mr. Lucas's argument. Mr. Lucas himself seems to have seen his argument as showing that the nature of the human mind is mysterious and has little to do with the physics and chemistry of the brain, but Mr. Penrose wishes to draw a very different conclusion.

The right moral to draw from Mr. Lucas's argument, Mr. Penrose says, is that noncomputational processes do somehow go on in the brain, and we need a new kind of physics to account for them. In "Shadows of the Mind," he not only provides a proof of Godel's theorem and a defense of his own version of Mr. Lucas's argument, but also surveys modern physics, biology and brain science, and speculates daringly on the nature of the new sort of physics that he claims we need.

"Shadows of the Mind" will be hailed as a "controversial" book, and it will no doubt sell very well, even though it includes explanations of difficult concepts from quantum mechanics and computational science. And yet this reviewer regards its appearance as a sad episode in our current intellectual life. Roger Penrose is the Rouse Ball Professor of Mathematics at Oxford University and has shared the prestigious Wolf Prize in physics with Stephen Hawking, but he is persuaded by an argument that all experts in mathematical logic have long rejected as fallacious, and he has produced this book as well as an earlier one, "The Emperor's New Mind," to defend it. The fact that the experts all reject Mr. Lucas's infamous argument counts for nothing in Mr. Penrose's eyes. He mistakenly believes that he has a philosophical disagreement with the logical community, when in fact this is a straightforward case of a mathematical fallacy.

The fallacy in Mr. Lucas's original argument is, unfortunately, a technical one, and I cannot describe it in detail in the space available. In rough outline, Godel's theorem states that if a system S of formalized mathematics -- that is, a set of axioms and rules so precisely described that a computer could be programmed to check proofs in the system for correctness -- is strong enough for us to do number theory in it, then a certain well-formed statement of the system, one that implies that the system is consistent, cannot be proved within the system. In a popular (but dangerously sloppy) formulation: "If S is consistent, then that fact cannot be proved in S."
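
For readers who want the theorem in symbols, the standard textbook rendering (my gloss here, with the details of Godel's coding suppressed) is roughly this:

    \mathrm{Con}(S) \;:=\; \neg\,\exists p\; \mathrm{Prf}_S\!\bigl(p,\ \ulcorner 0 = 1 \urcorner\bigr), \qquad
    \text{and if } S \text{ is consistent (and recursively axiomatized, containing enough arithmetic), then } S \nvdash \mathrm{Con}(S),

where \mathrm{Prf}_S(p, n) is the computably checkable relation "p codes a proof in S of the formula coded by n."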

Mr. Lucas's technical mistake was to confuse two very different statements that could be called "the statement that S is consistent." In particular, Mr. Lucas confused the colloquial statement that the methods mathematicians use cannot lead to inconsistent results with the very complex mathematical statement that would arise if we applied Godel's theorem to a hypothetical formalization of those methods. But Mr. Penrose uses a form of Godel's theorem different from the one Mr. Lucas used, and his mistake is less technical.
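
Put schematically (again my gloss), the two statements that get run together are:

    \text{(a)}\quad \text{the working methods of mathematicians never yield both } \varphi \text{ and } \neg\varphi;
    \text{(b)}\quad \mathrm{Con}(F) \;=\; \neg\,\exists p\; \mathrm{Prf}_F\!\bigl(p,\ \ulcorner 0 = 1 \urcorner\bigr), \text{ for } F \text{ a hypothetical formalization of those methods.}

Godel's theorem, applied to F, concerns the arithmetized sentence (b); it is a further, and substantial, step to get from our colloquial confidence in (a) to a proof of (b).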

The structure of Mr. Penrose's argument is as follows: First he provides the reader with a proof of a form of Godel's theorem worked out by Alan Turing, the father of the modern digital computer and the creator of recursion theory, the branch of mathematics that analyzes what computers can and cannot accomplish in principle. From this proof, Mr. Penrose, using certain not unreasonable assumptions, concludes that no program that we can know to be correct can simulate all of our human mathematical competence. (Here "know" has a very strong sense: what we "know" has no chance of being false; probabilistic reasoning is not allowed; and we must, in a sense Mr. Penrose takes to be intuitively clear, be "aware" that we know what we know. It is reasonable to hold that mathematical knowledge has this character.)
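
Turing's form of the result, to which Mr. Penrose appeals, can be stated in modern recursion-theoretic shorthand (my gloss) as the unsolvability of the halting problem:

    \text{there is no total computable } h \text{ such that, for every program } e \text{ and input } x,\quad h(e, x) = 1 \iff \varphi_e(x)\!\downarrow,

that is, no algorithm correctly decides in every case whether a given computation eventually halts. The diagonal argument behind this fact is also what drives Mr. Penrose's own construction.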

So far, however, what Mr. Penrose has shown -- that no program that we can know to be correct can simulate all of our mathematical capabilities -- is quite compatible with the claim that a computer program could in principle successfully simulate our mathematical capacities. In other words, the possibility exists that each of the rules that a mathematician relies on, explicitly or implicitly, can be known to be sound, that there is a program that generates all these rules and only these rules, and that this program nonetheless cannot be rendered sufficiently perspicuous for us to know that that is what it does. It was in order to slide over precisely this possibility -- that there could be a program that simulates our mathematical capabilities without our understanding it -- that Mr. Lucas blurred the question: exactly what consistency statement -- the colloquial statement or the mathematical one -- was he claiming a human mathematician can prove and a machine cannot?
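
The point turns on where "we know" sits relative to the quantifiers. Schematically (my gloss), the overlooked possibility is:

    \forall r \in R\ \bigl(\text{we know } r \text{ is sound}\bigr) \ \wedge\ \exists e\ \bigl(W_e = R\bigr) \ \wedge\ \neg\,\exists e\ \bigl(\text{we know } W_e = R\bigr),

where R is the set of rules the mathematician relies on and W_e is the set of rules generated by the program with index e. Each rule is knowably sound, and a program generating exactly those rules exists, but no program can be recognized by us as doing so.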

Mr. Penrose almost discusses the possibility of a program that can capture our mathematical capabilities without our being able to understand it, but in fact he misses it. First he describes the hypothetical case of a program that simulates our mathematical capacity and is assumed to be simple enough for us to understand it thoroughly. That such a program might not be provably sound is a possibility Mr. Penrose dismisses as not plausible. He then considers the possibility (which he also regards as implausible) that the program might be so complex that we could not even write down its description (let alone understand it). He rejects this possibility because, were it actual, the program of "strong artificial intelligence" -- simulating our intelligence on a computer in practice -- could not succeed (which is irrelevant to his thesis that our mathematical abilities cannot in principle be so simulated). But -- even apart from the totally unjustified way in which this latter possibility is dismissed -- there is an obvious lacuna: the possibility of a program we could write down but not succeed in understanding is overlooked!

This is the mathematical fallacy on which the whole book rests. Nonetheless, there are some interesting questions of a quasi-philosophical rather than a mathematical kind that arise, and Mr. Penrose is interesting (if not always convincing) when he discusses these. I myself would raise the following points about Mr. Penrose's treatment of the main philosophical issues: Is the notion of simulating the performance of a typical mathematician really so clear? Perhaps the question of whether it is possible to build a machine that behaves like a typical mathematician is a meaningful empirical question, but a typical mathematician makes mistakes. The output of an actual mathematician contains inconsistencies (especially if we are to imagine that the mathematician goes on proving theorems forever, as the application of Godel's theorem requires); so the question of proving that the whole of this output is consistent may not even arise. To this, Mr. Penrose replies that mathematicians may make errors, but they correct them upon reflection. This is true, but to simulate mathematicians who sometimes change their minds about what they have proved we would need a program that can change its mind too; there are such programs, but they are not of the kind to which Godel's theorem applies!
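
One way to make "a program that can change its mind" precise (my gloss; nothing here hangs on the details) is the notion of a trial-and-error, or limiting, procedure:

    P(x) \text{ holds} \iff \lim_{n \to \infty} g(x, n) = 1, \quad \text{for some total computable } g \text{ with values in } \{0, 1\} \text{ whose guesses } g(x, 0), g(x, 1), \ldots \text{ eventually stabilize for each } x.

Such a procedure announces a verdict and may retract it finitely many times before settling down; whatever its merits as a model of a fallible mathematician, it is not a recursively axiomatized proof system of the sort to which Godel's theorem speaks.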

In his masterwork, "Philosophical Investigations," Wittgenstein emphasized the importance of distinguishing between what an actual machine (or an actual person) can do and what an idealized machine (or an idealized person) can do. Amazingly, Mr. Penrose writes, "I am not concerned with what specific detailed arguments a mathematician might in practice be able to follow." Thus he admits that he is talking about an idealized mathematician, not an actual one. It would be a great feat to discover that a certain program is the one that the brain of an actual mathematician "runs"; but it would be quite a different feat to discover that a certain program is the one that the brain of an idealized mathematician would run.

To confuse these questions is (in philosophers' jargon) to miss the normativity of the question: what exactly is idealized mathematics like? Mr. Penrose worries that if we say our (idealized) mathematical output is not describable as the output of a machine whose program we could "know," then we are saying that there's something about us (something about "consciousness") that is "inexplicable in scientific terms," but this is not a reasonable worry. That our norms in this or any other area cannot be reduced to a computer program is hardly a problem for physics.

However, since Mr. Penrose thinks it is a problem for physics, he turns to physics, and the second part of his book is a wonderful introduction to modern quantum mechanics and to the particular ways in which he thinks it may have to change, as well as to some very interesting speculations about neuronal processes. Yet how all this might someday lead to the description of a physically possible brain that could carry out noncomputational processes is something that, as Mr. Penrose himself admits, he cannot tell us.
