Categories: Intelligent Design, Logic and First Principles of Right Reason, Mathematics, Mind, Naturalism

Kurt Gödel demonstrated that the mind is not just a computer

Statue of Alan Turing at Bletchley Park / Antoine Taveneau, CC BY-SA 3.0

And Alan Turing tried to live with it:

Maybe that’s not the story you heard, but …

It is indeed a strange quirk in intellectual history that Turing seems to have flip-flopped on this issue, almost politician-like, yet no one seems to have noticed. Gödel, for his part, remained cagey about the strong version of his result, noting only that a disjunction must therefore be true: we are either inconsistent machines or minds. Gödel must surely have jested here because inconsistency in the mathematical sense means that anything can be proven, making human thinking worthless—moons might be made of cheese then. Gödel seems to have stopped short of believing in his own Platonism, or at least proving that it must be true. But the evidence suggests strongly that viewing the mind as a big computer is probably wrong.
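The diagonal trick underlying both Gödel's theorem and Turing's halting result can be sketched in a few lines. What follows is a purely illustrative Python sketch, not a real decider: `would_halt` is a hypothetical total halting decider (Turing showed none can exist), and the construction shows why any candidate must misjudge some program.

```python
# Illustrative sketch (not a real decider): the diagonal construction
# behind Turing's halting theorem, a close cousin of Godel's proof.
# 'would_halt' is a hypothetical total halting decider; Turing showed
# none can exist, and this construction is the reason why.

def make_diagonal(would_halt):
    """Build a program that does the opposite of whatever the decider predicts."""
    def diagonal():
        if would_halt(diagonal):
            while True:      # predicted to halt -> loop forever
                pass
        return None          # predicted to loop -> halt at once
    return diagonal

# Any concrete candidate decider is refuted by its own diagonal program:
def optimist(prog):          # claims every program halts
    return True

def pessimist(prog):         # claims every program loops
    return False

d1 = make_diagonal(optimist)   # optimist says d1 halts; running d1() would loop forever
d2 = make_diagonal(pessimist)  # pessimist says d2 loops; yet d2() returns immediately
print(optimist(d1), d2())      # the predictions fail either way
```

Gödel's sentence G plays the same diagonal role on the proof side: a statement constructed to say, in effect, "this system does not prove me."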

Science, at any rate, is something we do, looking in from the outside as it were. Whether we are ultimately caught up inside the very systems we devise to describe and explain is a philosophical question. It seems to me anyway, and indeed to the Turing of the 1930s, and most assuredly to Kurt Gödel the Platonist, that we sit outside the systems we make, the webs we weave so to speak, always on the precipice of the discovery of new truths. That’s a belief that makes sense of evidence, to be sure. It has the additional salutary consequence of making sense of our own possibilities and future.

Analysis, “The mind can’t be just a computer” at Mind Matters News

See also: Human intelligence as a halting oracle (Eric Holloway)

and

Things exist that are unknowable (Robert J. Marks)

6 Replies to “Kurt Gödel demonstrated that the mind is not just a computer”

  1. kairosfocus says:

    News,

    UD is really in new territory these days, computationality vs minds.

    I followed the link, liked this:

    The question for computationalists and their critics is: what does Gödel’s strange proof say about systems that are supposed to undergird the human mind itself? It’s fantastically complex, sure. But Gödel’s result assures us that it, too, is subject to incompleteness. This leaves the mechanist in a bind: if in fact, the system for the human mind is subject to incompleteness, it follows that there is some perfectly formal and valid statement in mathematical logic that is completely impervious to all attempts at proving it. But if we are computers, this means that our insights into mathematics must stop at this statement. We are blind to it because as computers ourselves, we must use only our proof tools, with no access to our “truth” tools. Strange. As believers in computationalism, we would be ever so strangely incapable of doing our jobs as mathematicians.

    Some statement — call it “G” in keeping with Penrose’s convention — is true but not provable for our own minds. We can’t prove it. But as mathematicians, we should still be able to see that it’s true. Truth, in other words, ought still to be available to the human mind, even as the tools of a strict logic are inadequate. That’s mathematical insight, like the kind Gödel himself most surely used to prove Incompleteness. Ergo, we must not be completely computational at root. The mind must have some powers of perception or insight outside the scope of purely formal methods . . . .

    But a weaker thesis, still inspired by Gödel’s groundbreaking result, really provides evidentiary support for the common sense conclusion that our insights, discoveries, and sheer guesses aren’t disguised programs. On a Weak Gödel Thesis, we see that the philosophical or metaphysical claim that the human mind is a computer accounts poorly for obvious observations about thinking. Insight becomes programmed. But it is the very nature of the mind to sit outside such determinism.

    Such an observation is not mere philosophizing. Seeing that something is true in spite of some prior set of rules, or prior observations, is the hallmark of discovery . . . .

    Gödel, for his part, remained cagey about the strong version of his result, noting only that a disjunction must, therefore, be true: we are either inconsistent machines or minds. Gödel must surely have jested here because inconsistency in the mathematical sense means that anything can be proven, making human thinking worthless — moons might be made of cheese then. Gödel seems to have stopped short of believing in his own Platonism, or at least proving that it must be true. But the evidence suggests strongly that viewing the mind as a big computer is probably wrong.

    Science, at any rate, is something we do, looking in from the outside as it were. Whether we are ultimately caught up inside the very systems we devise to describe and explain is a philosophical question. It seems to me anyway, and indeed to the Turing of the 1930s, and most assuredly to Kurt Gödel the Platonist, that we sit outside the systems we make, the webs we weave so to speak, always on the precipice of the discovery of new truths. That’s a belief that makes sense of evidence, to be sure. It has the additional salutary consequence of making sense of our own possibilities and future.

    In answer to attempted refutations, I point to Reppert’s different but telling approach:

. . . let us suppose that brain state A [–> notice, state of a wetware, electrochemically operated computational substrate], which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief [–> conscious, perceptual state or disposition] that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions.

    Computational substrates aren’t even actually reasoning.

    KF

  2. EricMH says:

    Great point KF. Computational “reasoning” is just pushing around symbols. The only reason computation can give us true conclusions is because we’ve supplied the program with valid logic and sound premises. Computation cannot give us either of those things.
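    The point that computation only pushes symbols can be made concrete. Below is a toy forward-chaining rule engine (my own minimal sketch, not anyone's actual system) run on the Socrates syllogism: the machine derives "mortal(socrates)" by pattern matching alone, and its conclusion is only as good as the rule and premise we handed it.

    ```python
    # Toy forward-chaining engine: derivation as pure symbol manipulation.
    # Facts are (predicate, argument) pairs; a rule rewrites one predicate
    # into another. The engine never "sees" truth, only shapes of symbols.

    facts = {("man", "socrates")}
    rules = [(("man", "X"), ("mortal", "X"))]  # if man(X) then mortal(X)

    def forward_chain(facts, rules):
        """Apply rules until no new facts appear; return everything derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for (pred, _), (cpred, _) in rules:
                for (fpred, farg) in list(derived):
                    new = (cpred, farg)
                    if fpred == pred and new not in derived:
                        derived.add(new)
                        changed = True
        return derived

    print(forward_chain(facts, rules))  # includes ("mortal", "socrates")
    ```

    Swap in an unsound rule, say man(X) → immortal(X), and the engine derives the false "immortal(socrates)" just as mechanically: the validity of the logic and the truth of the premises have to be supplied from outside the computation.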

  3. Brother Brian says:

    It is obvious that the mind is not just a computer as we know them today. But that doesn’t mean that computer science won’t advance to the point that computers act in a way that is indistinguishable from a mind. And if we ever get to that point, our egocentrism will simply find other reasons to maintain our perceived exceptionalism.

  4. ET says:

    Brother Brian:

    But that doesn’t mean that computer science won’t advance to the point that computers act in a way that is indistinguishable from a mind.

    First someone has to figure out what that means. Then someone needs the ability to design the hardware and software. There needs to be funding.

    And if we ever get to that point, our egocentrism will simply find other reasons to maintain our perceived exceptionalism.

    As in we designed and built that alleged AI?

  5. kairosfocus says:

    BB,

    The essential point is in Leibniz, Monadology 17:

    It must be confessed, however, that perception, and that which depends upon it, are inexplicable by mechanical causes, that is to say, by figures and motions. Supposing that there were a machine whose structure produced thought, sensation, and perception, we could conceive of it as increased in size with the same proportions until one was able to enter into its interior, as he would into a mill. Now, on going into it he would find only pieces working upon one another, but never would he find anything to explain perception [i.e. abstract conception]. It is accordingly in the simple substance [–> the inherently unified monad], and not in the compound [–> composite made up from independently existing parts] nor in a machine [–> composite entity with function based on mechanical and/or stochastic interactions not insight, understanding, conceptualising, inferring based on meaning etc] that the perception is to be sought.

    ANYTHING that is a dynamic-stochastic entity working on mechanical and/or stochastic interactions of component parts, is not reasoning in any relevant sense. The why of that was put by Reppert, long since:

    . . . let us suppose that brain state A [–> notice, state of a wetware, electrochemically operated computational substrate], which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief [–> conscious, perceptual state or disposition] that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions.

    Intellectual IOUs that hope for some future, undefined computational substrate, unbacked by substantial empirical demonstration, are not going to escape the implications of dynamic-stochastic systems. Insight, understanding, and reasoning simply do not work that way. That’s part of why, for 2400 years, we have understood that causal factors work by necessity, by stochastic/chance process, or by intelligence.

    There is a qualitative difference here.

    KF


  6. Brother Brian says:

    KF

    It must be confessed, however, that perception, and that which depends upon it, are inexplicable by mechanical causes, that is to say, by figures and motions.

    This certainly does not have to be confessed. It may turn out to be true, but this statement, at this time, is nothing more than opinion.

    Your argument also assumes that computer science will be limited in the future to the limitations it has now. If anything, computer science has shown us that limits can be broken.

    As far as I know, there are no hard and fast roadblocks that would prevent us, some time in the future, from developing a computer that functions mechanically, electrically and chemically as the human brain does. If and when we do this, and if we can’t distinguish its reasoning and thinking capabilities from that of a human, would we not have to conclude that it was conscious? That it had a mind?
