
AI, Materialist Dodgeball and a Place at the Table


Ari N. Schulman, “Why Minds Are Not Like Computers,” The New Atlantis, Number 23, Winter 2009, pp. 46-68.
Article Review

“The problem, therefore, is not merely that science is being used illegitimately to promote a materialistic worldview, but that this worldview is actively undermining scientific inquiry.”—UncommonDescent

Read the entire article here.

Unless otherwise noted, all quotations from the article, "Why Minds Are Not Like Computers," are set off in quotation marks.

Mr. Schulman walks the tightrope of analysis and criticism, describing how a materialistic worldview actively undermines scientific inquiry in the area of Artificial Intelligence (AI). Analysis and self-criticism should be part of all scientific endeavor; the strict materialist does no such thing; instead, he plays dodgeball.

Much of the article, especially the discussions of the brain, computers, Turing Machines, the Turing Test, and the Chinese Room Problem, was helpful in understanding the state of affairs in AI for the layman. My comments are those of just such a layman, included so that you might see what a layman might take from such an article. Nevertheless, questions remain . . .

. . . as to whether AI can survive while mired in materialist thought. Can AI benefit from design-theoretic input (including the unappetizing job, if necessary, of informing AI folks that strong AI as conceived for digital computers is a dead end)? In the following, I chose to recap many of the early parts of the article. It is the latter part of the article, however, where the games begin.

I am not really interested in parsing every last detail of the article (". . . no, no, no, in the Chinese Room Problem the walls CAN think as long as the translator is in the room"); rather, I am interested in what place design theorists have at the "adult" table, based on articles like this, in fields that, from the point of view of many, are in disarray. I, for one, am tired of the "kid" table.

“When the mind is compared to a computer, just what is it being compared to? How does a computer work?”

Mr. Schulman begins with a clear discussion of what a computer is, i.e., a performer of algorithms. His definitions of start and end states, and of input and output, are helpful in understanding the nature of the determinacy of any computer program: "an algorithm's output for a given input will be the same every time it is executed (even so-called 'randomized' algorithms are deterministic in practice)."
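
For concreteness, here is a small sketch of my own (not from the article) of that determinacy: the routine below draws "random" points from a seeded pseudorandom generator, so the same input yields the same output on every run, exactly as Schulman says of "randomized" algorithms.

```python
import random

def estimate_pi(samples=100_000, seed=42):
    """A 'randomized' Monte Carlo estimate of pi.

    The points are drawn from a pseudorandom generator, but because the
    generator starts from a fixed seed, the whole computation is an
    ordinary deterministic algorithm: same input, same output, every run.
    """
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi())  # prints the identical estimate on every execution
```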

Continuing, we learn that in any algorithm, what should be done is broken down into how it should be done. We also learn of abstractions of objects within algorithms, abstractions based on the relevant properties of the objects in question. Mr. Schulman is adept at showing how such abstraction leads to the conclusion that computers expertly manipulate symbols (by following the how) but with no idea of what they are doing: "a computer is both extremely fast and exceedingly stupid."
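
A layman's illustration of my own, not Mr. Schulman's: the routine below "knows" a person only as the two properties the task requires, and it reorders those symbols by following the how it was given, with no notion at all of what a person is.

```python
from dataclasses import dataclass

# The algorithm abstracts a "person" down to the two properties it needs.
# Nothing about being a person survives the abstraction; the program only
# ever manipulates a name string and an age number.
@dataclass
class Person:
    name: str
    age: int

def oldest_first(people):
    # The "what" (rank people by seniority) has been reduced to a "how"
    # (compare integers and reorder records). The computer follows the how.
    return sorted(people, key=lambda p: p.age, reverse=True)

roster = [Person("Ada", 36), Person("Alan", 41), Person("Grace", 29)]
for p in oldest_first(roster):
    print(p.name, p.age)
```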

This leads to a detailed discussion of the manipulation of symbols, for example, “To do so, you must be able to represent the problem in terms that the computer can understand—but the computer only knows what numbers and memory slots are.” This is a standard, specific extension of the Turing Machine model.
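
To make the Turing Machine model concrete, here is a toy sketch of my own (the state names and rule layout are my invention): a tape of memory slots, a read/write head, and a fixed rule table. The machine below increments a binary number, yet all it ever does is read and write marks in slots.

```python
def run_turing_machine(tape, rules, state, head, blank="_"):
    """Tiny Turing machine simulator: a rule table mapping
    (state, symbol) -> (new_symbol, move, new_state)."""
    tape = list(tape)
    while state != "HALT":
        symbol = tape[head] if 0 <= head < len(tape) else blank
        new_symbol, move, state = rules[(state, symbol)]
        if head < 0:                 # grow the tape to the left
            tape.insert(0, blank)
            head = 0
        elif head >= len(tape):      # grow the tape to the right
            tape.append(blank)
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Rules for incrementing a binary number, head starting on the rightmost digit:
# carry 1s into 0s until a 0 (or a blank slot) can absorb the carry.
increment = {
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "HALT"),
    ("carry", "_"): ("1", "L", "HALT"),
}

print(run_turing_machine("1011", increment, state="carry", head=3))  # "1100"
```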

And then this,
“. . . it is only partially correct to say that a computer performs arithmetic calculations. As a physical object, the computer does no such thing—no more than a ball performs physics calculations when you drop it. It is only when we consider the computer through the symbolic system of arithmetic, and the way we have encoded it in the computer, that we can say it performs arithmetic.” (my emphasis) Even at the level of arithmetic, Mr. Schulman recognizes that the computer is merely manipulating symbols – symbols that are given meaning by us.
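
Along the same lines, a small sketch of my own: the function below "adds" two non-negative numbers using nothing but bit-pattern operations. The machine only shuffles 0s and 1s; calling the result a sum is the arithmetic interpretation we bring to it.

```python
def add(a, b):
    """'Addition' of non-negative integers as nothing but bit shuffling:
    XOR produces the partial sum, AND-then-shift produces the carries,
    and the loop repeats until no carries remain. The machine just flips
    bits; 'arithmetic' is the meaning we assign to the patterns."""
    while b != 0:
        carry = (a & b) << 1   # bits that would carry over
        a = a ^ b              # sum without carries
        b = carry
    return a

print(add(19, 23))  # 42 -- under *our* symbolic system of arithmetic
```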

Next, we encounter the black box problem, in which we learn that the what that is specified for completion may be fundamentally different from the how by which it is done. Of course that is done "behind the curtain," and different programmers can accomplish a task in many different ways. This leads to the idea of layers of abstraction, which rest on Boolean logic, which relies, at bottom, on transistors and other physical components. Mr. Schulman writes that this nested hierarchy does not mean that any particular layer has more explanatory power than the others, only that each is an interpretation of what the computer does based on "a distinct set of symbolic representations and properties."

I would add that although the modular nature of programming creates black boxes from the point of view of the casual end-user who may just want to read some email or play Pong, those black boxes are not entirely closed off and mysterious; they are known by someone. The how is known by the programmer.
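
As an illustration of my own (not from the article): the two functions below satisfy the same what (return the list in sorted order) by entirely different hows. From outside the black box a caller cannot tell them apart; only the programmer knows which how is inside.

```python
def sort_by_insertion(values):
    """One 'how': build the result by inserting each element in place."""
    result = []
    for v in values:
        i = 0
        while i < len(result) and result[i] < v:
            i += 1
        result.insert(i, v)
    return result

def sort_by_merging(values):
    """A different 'how': split, sort the halves, and merge them back."""
    if len(values) <= 1:
        return list(values)
    mid = len(values) // 2
    left, right = sort_by_merging(values[:mid]), sort_by_merging(values[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

data = [5, 3, 8, 1, 9, 2]
# The same "what", observed from outside the black box:
print(sort_by_insertion(data) == sort_by_merging(data))  # True
```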

Mr. Schulman: “Since the inception of the AI project, the use of computer analogies to try to describe, understand, and replicate mental processes has led to their widespread abuse. Typically, an exponent of AI will not just use a computer metaphor to describe the mind, but will also assert that such a description is a sufficient understanding of the mind—indeed, that mental processes can be understood entirely in computational terms. One of the most pervasive abuses has been the purely functional description of mental processes. The embrace of input-output mimicry as a standard traces back to Alan Turing’s famous “imitation game,” in which a computer program engages in a text-based conversation with a human interrogator, attempting to fool the person into believing that it, too, is human. The game, now popularly known as the Turing Test, is above all a statement of epistemological limitation—an admission of the impossibility of knowing with certainty that any other being is thinking, and an acknowledgement that conversation is one of the most important ways to assess a person’s intelligence. Thus Turing said that a computer that passes the test would be regarded as thinking, not that it actually is thinking, or that passing the test constitutes thinking. In fact, Turing specified at the outset that he devised the test because the “question ‛Can machines think?’ I believe to be too meaningless to deserve discussion.””(my emphasis)

It was refreshing to see Turing’s comments included at this stage of the article. The Turing Test, and its “Kurzweilian” visions of progress, gets a lot more airplay these days, it seems, than the Universal Turing Machine and its precise, even stringent, view of computers as physical embodiments of theoretical rule-following machines. Does this distinction of how things may be regarded versus how things are have analogs in the evolution/design debate? I’d say the answer is obvious. In fact, I can practically sense that keyboards are warming up as we come to draw battle lines around who “regards, as if it is” and who “regards what is.”

"For those AI researchers interested in actually replicating the human mind, the two guiding questions have thus been (1) What organizational layer of the mind embodies its program? and (2) At what organizational layer of the brain will we find the basic functional unit necessary to run the mind-program? [AI researchers'] aims and methods can be understood as a progression of attempts to answer these two questions. But when closely examined, the history of their efforts is revealed to be a sort of regression, as the layer targeted for replication has moved lower and lower."

. . . Kudos again to Mr. Schulman for his concise summary of the current state of affairs of strong AI; he goes on to criticize the functionalist position. Here, I can't help but think Daniel Dennett would be in the crosshairs, but I haven't read him enough to know . . . any comments, UD people? I found it interesting that Mr. Dennett is one of the chief critics of Searle's Chinese Room Problem; it just seems so obvious that he would be the one. More on that later.

"Robots that mimic facial expressions are said to experience genuine emotions—and for more than half a century, researchers have commonly claimed that programs [robots mimicking facial expressions] that deliver "intelligent" results are actually thinking. . . Such statements reveal more than just questionable ethics—they indicate crucial errors in AI researchers' understanding of both computers and minds. Suppose that the mind is in fact a computer program. . . So although behaviorists and functionalists have long sought to render irrelevant the truth of Descartes' cogito, the canonization of the Turing Test has merely transformed I think therefore I am into I think you think therefore you are."

I like that. . . “questionable ethics,” “crucial errors,” and “the canonization” of the Turing Test . . .

"Much artificial intelligence research has been based on the assumption that the mind has layers comparable to those of the computer. Under this assumption, the physical world, including the mind, is not merely understandable through sciences at increasing levels of complexity—physics, chemistry, biology, neurology, and psychology—but is actually organized into these levels. These assumptions underlie the notion that the mind is a "pattern" and the brain is its "substrate.""

"On the one hand, arguments against strong AI, both moral and technical, typically describe the highest levels of the mind—consciousness, emotion, and intelligence—in order to argue its non-mechanical nature. . . The implication is that the essence of human nature, and thus of the mind, is profound and unknowable; this belief underlies [Joseph] Weizenbaum's extensive argument that the mind cannot be described in procedural or computational terms."

Mr. Weizenbaum appears to have made it to the adult table. I am unacquainted with his work but would be interested in how it might be consonant with ID, if at all. On the other hand . . .

“. . . roboticist Rodney Brooks declares that “the body, this mass of biomolecules, is a machine that acts according to a set of specifiable rules,” and hence that “we, all of us, overanthropomorphize humans, who are after all mere machines.” The mind, then, must also be a machine, and thus must be describable in computational terms just as the brain supposedly is.”

It appears that Mr. Brooks has also made it to the adult table, and why wouldn't he, seeing how his theory of AI computing is steeped in evolutionary thought? I find it interesting that those who play dodgeball are not forced to sit at the kid table.

Why do I say dodgeball? If we are merely machines and our brains only computers, then we are physical embodiments of Turing Machines, and if that is so, how is it that we are not bound by the Church-Turing Thesis? Answer: dodgeball. Mr. Brooks's claim that we tend to overanthropomorphize humans is quite a rhetorical leap — not only a leap, but a dodge. Turing Machines are limited in ways that human minds are not, but Brooks can get away with the statement, "[humans] are . . . machines," because it fits with the functionalist approach of strict materialism. Mr. Schulman then logically adds that we "must be describable in computational terms." Pass the gravy, meat puppet.
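
For readers who want the limitation made concrete, here is a rough sketch of my own (the function names and setup are my invention, not the article's). Turing showed that no mechanical procedure can decide, for every program and input, whether it halts; the best a machine can do in general is run the program under a step budget, confirming halting when it happens but returning no verdict otherwise.

```python
def halts_within(program, max_steps):
    """Drive a step-at-a-time program (a generator) for at most max_steps.
    Returns True if it finished, or None for 'no verdict': running a
    program can confirm that it halts, but no general mechanical procedure
    can confirm that an arbitrary program *never* halts (Turing, 1936)."""
    steps = program()
    for _ in range(max_steps):
        try:
            next(steps)
        except StopIteration:
            return True          # the program finished within the budget
    return None                  # budget exhausted: halting status unknown

def countdown():
    n = 10
    while n > 0:
        n -= 1
        yield

def spin():
    while True:
        yield

print(halts_within(countdown, max_steps=1000))  # True
print(halts_within(spin, max_steps=1000))       # None -- no verdict
```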

"An instructive example of this confusing conceptual gap can be found in the heated debate surrounding one of the most influential articles in the history of computer science. In a 1980 paper, the philosopher John R. Searle sketched out The Chinese Room Problem. Searle's scenario is, of course, designed to be analogous to how an operating AI program works, and is thus supposedly a disproof of the claim that a computer operating a similar program could be said to "understand" Chinese or any other language—or indeed, anything at all."

"The most common rebuttals to the Chinese Room thought experiment invoke, in some way, the "systems reply": although the man in the room does not understand Chinese, the whole system—the combination of the man, the instructions, and the room—indeed does understand Chinese. Searle's response to this argument—that the "systems reply simply begs the question by insisting without argument that the system must understand Chinese"—is surely correct."

"But Searle himself, as AI enthusiast Ray Kurzweil put it in his 2005 book, The Singularity is Near, similarly just declares "ipso facto that [the room] isn't conscious and that his conclusion is obvious." Kurzweil is also correct, for the truth is somewhere in between: we cannot be sure that the system does or does not understand Chinese or possess consciousness."

This seems to me to be an example of hyper-credulity on the part of those promoting a systems-have-consciousness response. I also believe that this credulity is driven by strict materialism.

"One of the most befuddling sections of [Searle's] 1980 paper is this: 'OK, but could a digital computer think? If by "digital computer" we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think.' (Searle)"

"More so even than the casual assertion that people are computer programs, this section of Searle's paper is surprising in its contradiction of his own claim that computers cannot think. On Searle's account, then, can computers think or not? The answer reveals just how confused is the common understanding of computer systems."

But why is there confusion? I’d suggest it comes from the insistence of materialists who claim that the mind/brain is reducible to a computer.

"As explained above, it is correct to explain computers in terms of separable layers, since that is how they are designed. Physical systems, on the other hand, are not designed at all. They exist prior to human intent. . . We rely on hierarchies to explain physical systems, but we actually engineer hierarchies into computers." (my emphasis)

Mr. Schulman leaves unexplained how it is that physical systems are not designed and yet exhibit design. Computers are designed, right? Brains are, if anything, more complicated than computers, right? So much so that philosophers and scientists don’t even agree on what are the qualitative, and what are the quantitative, differences. Somehow out of that argument, the strict materialist finds room to claim that brains are not designed. That just seems like kid table stuff to me.

"Every indication is that, rather than a neatly separable hierarchy like a computer, the mind is a tangled hierarchy of organization and causation. Changes in the mind cause changes in the brain, and vice versa. To successfully replicate the brain in order to simulate the mind, it will be necessary to replicate every level of the brain that affects and is affected by the mind."

I find that this reasoning can only be supported in strictly materialist terms. Only a strict materialist would assert that replicating the brain will simulate a mind. Certain aspects of the mind, which may in fact be essential not only to an experience of consciousness but also to engendering what it means to be a "self," are not merely coded in the brain, awaiting the necessary technology to be replicated. If so, then even a complete replication of the brain will not admit meaning; and without meaning, whence personhood?

Also, what could it possibly mean that "changes in the mind cause changes in the brain and vice versa" if a mind-brain unit is merely a computer? From a computer-design standpoint, such mutual, innovative, meaningful, creative change is pure nonsense.

"Intriguingly, some involved in the AI project have begun to theorize about replicating the mind not on digital computers but on some yet-to-be-invented machines. As Ray Kurzweil wrote in "The Singularity is Near": 'Computers do not have to use only zero and one…. The nature of computing is not limited to manipulating logical symbols. Something is going on in the human brain, and there is nothing that prevents these biological processes from being reverse engineered and replicated in nonbiological entities.' In principle, Kurzweil is correct: we have as yet no positive proof that his vision is impossible. But it must be acknowledged that the project he describes is entirely different from the original task of strong AI to replicate the mind on a digital computer." (my emphasis)

Is Mr. Kurzweil trying to release us from the theoretical constraints of the Turing Machine? Am I being unfair in assuming that Kurzweil is committed to the brain as merely a sum of biological (read: material) processes? The new direction may be computers that are not digital in the traditional sense, but how such new computers could instantiate the mind/brain is simply a check written for some future date.

"If we achieve artificial intelligence without really understanding anything about intelligence itself—without separating it into layers, decomposing it into modules and subsystems—then we will have no idea how to control it."

Furthermore, if intelligence has an attribute that is not decomposable into modules and subsystems, and we ignore that possibility, then we will not know what we have actually created; whatever it is, it won't be AI.

Can intelligent design advocates inform the state of affairs in AI from a solid theoretical basis? An objective reading of articles like this suggests our voice needs to be heard if only to add a measure of clarity to the discussion. John Searle, in “The Rediscovery of the Mind,” writes:

“What we find in the history of materialism is a recurring tension between the urge to give an account of reality that leaves out any reference to the special features of the mental, such as consciousness and subjectivity, and at the same time account for our “intuitions” about the mind. It is, of course, impossible to do these two things. So there are a series of attempts, almost neurotic in character, to cover over the fact that some crucial element about mental states is being left out. And when it is pointed out that some obvious truth is being denied by the materialist philosophy, the upholders of this view almost invariably resort to certain rhetorical strategies designed to show that materialism must be right, and that the philosopher who objects to materialism must be endorsing some version of dualism, mysticism, mysteriousness, or general anti-scientific bias.”(my emphasis)

It is such behavior that should get you sent to the kid table. Dodgeball, anyone?