For some little while now, RDF/AIGuy has been advocating a strong AI claim here at UD. In an exchange in the ongoing “Is ID fatally flawed?” thread, he has said:
222: Computers are of course not conscious. Computers of course can be creative, and computers are of course intelligent agents. Now before you blow a gasket, please try and understand that we are not arguing here about what computers can or cannot do, or do or do not experience. We agree about all of that. The reason we disagree is simply because we are using different definitions for the terms “creative” and “intelligent agents” . . .
This seems a little over the top, and I commented; but before we look at that, let us get a few basic definitions out of the way. First, the predictably enthusiastic Wikipedia:
Artificial intelligence (AI) is technology and a branch of computer science that studies and develops intelligent machines and software. Major AI researchers and textbooks define the field as “the study and design of intelligent agents”,[1] where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.[2] John McCarthy, who coined the term in 1955,[3] defines it as “the science and engineering of making intelligent machines”.[4]
It does offer a few cautions:
The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.[6] General intelligence (or “strong AI“) is still among the field’s long term goals.[7] . . . . The field was founded on the claim that a central ability of humans, intelligence—the sapience of Homo sapiens—can be so precisely described that it can be simulated by a machine.[8] This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings, issues which have been addressed by myth, fiction and philosophy since antiquity.[9] Artificial intelligence has been the subject of tremendous optimism[10] but has also suffered stunning setbacks.[11]
The Stanford Encyclopedia of Philosophy, here, is predictably more cautious, in revealing ways:
Artificial Intelligence (which I’ll refer to hereafter by its nickname, “AI”) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent.[1] Most research in AI is devoted to fairly narrow applications, such as planning or speech-to-speech translation in limited, well defined task domains. But substantial interest remains in the long-range goal of building generally intelligent, autonomous agents.[2]
The IEP gives a little more backdrop, with explicit cautions on some of the philosophical problems that lurk:
[T]he scientific discipline and engineering enterprise of AI has been characterized as “the attempt to discover and implement the computational means” to make machines “behave in ways that would be called intelligent if a human were so behaving” (John McCarthy), or to make them do things that “would require intelligence if done by men” (Marvin Minsky). These standard formulations duck the question of whether deeds which indicate intelligence when done by humans truly indicate it when done by machines: that’s the philosophical question. So-called weak AI grants the fact (or prospect) of intelligent-acting machines; strong AI says these actions can be real intelligence. Strong AI says some artificial computation is thought. Computationalism says that all thought is computation. Though many strong AI advocates are computationalists, these are logically independent claims: some artificial computation being thought is consistent with some thought not being computation, contra computationalism. All thought being computation is consistent with some computation (and perhaps all artificial computation) not being thought.
{Adding . . . } While we are at it, let us remind ourselves of the Smith Model for an embodied agent with a two-tier controller: the supervisory controller imposes purposes, goals, etc. on the lower-level input/output controller, and there is no implicit or explicit commitment as to just what that supervisory controller can be or is in a given case.
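By way of illustration only, here is a minimal Python sketch of such a two-tier arrangement. The class names, the simple proportional lower loop and the plan-of-goals supervisor are all assumptions made for this sketch; the Smith Model itself leaves the nature of the supervisory level open.

```python
# A minimal, illustrative two-tier controller in the spirit of the Smith Model:
# a lower-level input/output loop controller, plus a supervisory level that
# imposes purposes (here, simply an ordered plan of goals) on it.  Everything
# below is an assumption for the sake of the sketch.

class LoopController:
    """Lower tier: drives its output toward whatever goal it is currently given."""
    def __init__(self, gain=0.5):
        self.gain = gain
        self.goal = 0.0

    def step(self, sensed_value):
        # Simple proportional control: command is proportional to the error.
        error = self.goal - sensed_value
        return self.gain * error


class SupervisoryController:
    """Upper tier: imposes purposes on the loop by selecting its goals."""
    def __init__(self, loop, plan):
        self.loop = loop
        self.plan = list(plan)  # an ordered list of goals, i.e. the imposed purposes

    def run(self, initial_state, steps_per_goal=5):
        state = initial_state
        for goal in self.plan:
            self.loop.goal = goal              # supervisory level sets the purpose
            for _ in range(steps_per_goal):
                command = self.loop.step(state)
                state += command               # crude stand-in for body and environment
        return state


if __name__ == "__main__":
    loop = LoopController(gain=0.5)
    supervisor = SupervisoryController(loop, plan=[1.0, 3.0, 2.0])
    print(supervisor.run(initial_state=0.0))   # state ends near the last goal, 2.0
```

The point of the sketch is only the structural split: the lower loop mechanically reduces error toward a goal, while the supervisory level is what supplies the goals in the first place.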
It is worth noting as well how the so-called hard problem of consciousness is often conceived, in an implicitly materialistic frame of thought:
The term . . . refers to the difficult problem of explaining why we have qualitative phenomenal experiences. It is contrasted with the “easy problems” of explaining the ability to discriminate, integrate information, report mental states, focus attention, etc. Easy problems are easy because all that is required for their solution is to specify a mechanism that can perform the function. That is, their proposed solutions, regardless of how complex or poorly understood they may be, can be entirely consistent with the modern materialistic conception of natural phenomen[a]. Hard problems are distinct from this set because they “persist even when the performance of all the relevant functions is explained.”
Duly warned, let us see what is going on behind RDF’s enthusiastic and confident announcement. For that, let us look at my response to him, as I think this issue can and should spark an onward discussion:
__________
>> RDF, 222:
Computers are of course not conscious. Computers of course can be creative, and computers are of course intelligent agents. Now before you blow a gasket, please try and understand that we are not arguing here about what computers can or cannot do, or do or do not experience. We agree about all of that. The reason we disagree is simply because we are using different definitions for the terms “creative” and “intelligent agents” . . .
Here, the underlying materialist a prioris cause a bulging of the surface, showing their impending emergence. And it is manifest that question-begging redefinitions are being imposed, in defiance of the search-space challenge to find FSCO/I on blind chance and mechanical necessity.
We know, by direct experience from the inside out and by observation, that FSCO/I (functionally specific, complex organisation and associated information) in various forms is routinely created by conscious intelligences acting creatively by art — e.g. sentences in posts in this thread. We can show that within the atomic resources of the solar system for its lifespan, the task of blindly hitting on such FSCO/I by blind chance and/or mechanical necessity is comparable to taking a sample of one straw from a cubical haystack 1,000 light years across.
Such a search task is, practically speaking, hopeless, given that we can easily see that FSCO/I — by the need for correct, correctly arranged and coupled components to achieve function — is going to be confined to very narrow zones in the relevant config spaces. That is why random document generation exercises have at most hit upon 24 functional characters to date, nowhere near the 73 or so set by the 500-bit threshold. (The config space multiplies itself 128 times over for every additional ASCII character.)
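As a quick, illustrative check of those figures (simple arithmetic on the numbers just cited, nothing more): each ASCII character carries about 7 bits, so 500 bits corresponds to a string in the low seventies of characters, and each added character multiplies the space of configurations 128-fold.

```python
# Illustrative arithmetic for the figures cited above.
import math

bits_per_char = math.log2(128)              # 7 bits per ASCII character
chars_for_500_bits = 500 / bits_per_char    # ~71.4, i.e. the low-seventies figure

space_24_chars = 128 ** 24                  # configurations for a 24-character string
space_500_bits = 2 ** 500                   # configurations at the 500-bit threshold

print(f"bits per character:      {bits_per_char:.0f}")
print(f"characters for 500 bits: {chars_for_500_bits:.1f}")
print(f"24-character space:      {space_24_chars:.2e}")   # roughly 3.7e50
print(f"500-bit space:           {space_500_bits:.2e}")   # roughly 3.3e150
```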
In short, the audit does not add up. The recorded transactions to date are not consistent with the outcome. Errors have been searched for and eliminated.
The gap remains.
There is something else at work, not on the materialist’s books, and it has to be sufficient to account for the gap.
That something else is actually obvious: self-aware, self-moved, responsible, creative, reasoning and thinking intelligence, as we experience it in ourselves and observe it in others, and as we have no good reason to assume we are the only cases of.
No wonder Q, in response, noted:
Computer architecture and the software that operates within it is no more creative in kind than a mechanical lever. All a program does is preserve the logic—and logical flaws—of an intelligent programmer. A computer is not an electronic brain, but rather an electronic idiot that must be told exactly what to do and what rules to follow.
He is right; and let us hear Searle in his recent summary of his Chinese Room thought exercise (as it appeared at 556 in the previous thread but was — predictably — ignored by RDF and buried in onward commentary . . . a plainly deliberate tactic in these exchanges):
Imagine that a person—me, for example—knows no Chinese and is locked in a room with boxes full of Chinese symbols and an instruction book written in English for manipulating the symbols. Unknown to me, the boxes are called “the database” and the instruction book is called “the program.” I am called “the computer.”
People outside the room pass in bunches of Chinese symbols that, unknown to me, are questions. I look up in the instruction book what I am supposed to do and I give back answers in Chinese symbols.
Suppose I get so good at shuffling the symbols and passing out the answers that my answers are indistinguishable from a native Chinese speaker’s. I give every indication of understanding the language despite the fact that I actually don’t understand a word of Chinese.
And if I do not, neither does any digital computer, because no computer, qua computer, has anything I do not have. It has stocks of symbols, rules for manipulating symbols, a system that allows it to rapidly transition from zeros to ones, and the ability to process inputs and outputs. That is it. There is nothing else.
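To make the mechanics of that thought exercise concrete, here is a toy sketch in the same spirit, offered purely as an illustration: the “rule book” is a lookup table, the questions and canned answers are invented placeholders, and nothing in the code understands anything.

```python
# A toy version of the set-up Searle describes: the "program" is a rule book
# (here, a lookup table), the "database" is a stock of symbol strings, and the
# system hands back well-formed answers with no understanding anywhere in it.
# The questions and canned answers below are placeholders for this illustration.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I am fine, thanks."
    "你会说中文吗？": "会一点。",      # "Do you speak Chinese?" -> "A little."
}

def chinese_room(question: str) -> str:
    """Shuffle symbols according to the rule book; no step involves understanding."""
    return RULE_BOOK.get(question, "请再说一遍。")  # default: "Please say that again."

if __name__ == "__main__":
    for q in ("你好吗？", "你会说中文吗？"):
        print(q, "->", chinese_room(q))
```

Everything here is table lookup; adding more rules or faster hardware extends the coverage, not the understanding, which is Searle’s point.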
Jay Richards’ comment — yes, that Jay Richards — in response to a computer becoming champion at Jeopardy!, is apt:
[In recent years] computers have gotten much better at accomplishing well-defined tasks. We experience it every time we use Google. Something happens—“weak” artificial intelligence—that mimics the action of an intelligent agent. But the Holy Grail of artificial intelligence (AI) has always been human language. Because contexts and reference frames change constantly in ordinary life, speaking human language, like playing “Jeopardy!,” is not easily reducible to an algorithm . . . .
Even the best computers haven’t come close to mastering the linguistic flexibility of human beings in ordinary life—until now. Although Watson [which won the Jeopardy game] is still quite limited by human standards—it makes weird mistakes, can’t make you a latte, or carry on an engaging conversation—it seems far more intelligent than anything we’ve yet encountered from the world of computers . . . .
AI enthusiasts . . . aren’t always careful to keep separate issues, well, separate. Too often, they indulge in utopian dreams, make unjustifiable logical leaps, and smuggle in questionable philosophical assumptions. As a result, they not only invite dystopian reactions, they prevent ordinary people from welcoming rather than fearing our technological future . . . .
Popular discussions of AI often suggest that if you keep increasing weak AI, at some point, you’ll get strong AI. That is, if you get enough computation, you’ll eventually get consciousness.
The reasoning goes something like this: There will be a moment at which a computer will be indistinguishable from a human intelligent agent in a blind test. At that point, we will have intelligent, conscious machines.
This does not follow. A computer may pass the Turing test [as Searle noted with the Chinese Room thought exercise], but that doesn’t mean that it will actually be a self-conscious, free agent.
The point seems obvious, but we can easily be beguiled by the way we speak of computers: We talk about computers learning, making mistakes, becoming more intelligent, and so forth. We need to remember that we are speaking metaphorically.
We can also be led astray by unexamined metaphysical assumptions. If we’re just computers made of meat, and we happened to become conscious at some point, what’s to stop computers from doing the same? That makes sense if you accept the premise—as many AI researchers do. If you don’t accept the premise, though, you don’t have to accept the conclusion.
We’re getting close to when an interrogating judge won’t be able to distinguish between a computer and a human being hidden behind a curtain.
In fact, there’s no good reason to assume that consciousness and agency emerge by accident at some threshold of speed and computational power in computers. We know by introspection that we are conscious, free beings—though we really don’t know how this works. So we naturally attribute consciousness to other humans. We also know generally what’s going on inside a computer, since we build them, and it has nothing to do with consciousness. It’s quite likely that consciousness is qualitatively different from the type of computation that we have developed in computers (as the “Chinese Room” argument, by philosopher John Searle, seems to show) . . . .
AI enthusiasts often make highly simplistic assumptions about human nature and biology. Rather than marveling at the ways in which computation illuminates our understanding of the microscopic biological world, many treat biological systems as nothing but clunky, soon-to-be-obsolete conglomerations of hardware and software. Fanciful speculations about uploading ourselves onto the Internet and transcending our biology rest on these simplistic assumptions. This is a common philosophical blind spot in the AI community, but it’s not a danger of AI research itself, which primarily involves programming and computers.
This ideological pattern seems to be what has been going on all along in the exchanges with RDF.
If he wants to claim or imply that consciousness, creativity, and purposeful deciding and acting through reflective thought are all matters of emergence from computation on organised hardware and the software running on it — much less that such emergence happened by blind chance and mechanical necessity — then he has a scientific obligation to show this by empirical demonstration and credible observation.
It hasn’t been done and, per the Chinese Room, isn’t about to be done.
It is time to expose speculative materialist hypotheses and a prioris that lack empirical warrant and have a track record of warping science — by virtue of simply being dressed up in lab coats in an era when science has great prestige.>>
__________
So, let us reflect: has RDF scored a knockout, or is he being a tad over-enthusiastic about a field of research? END