
Can intelligence be operationalized?


Philosopher Edward Feser has written a thought-provoking critique of the Turing Test, titled, Accept no limitations. Professor Feser makes several substantive points in his essay. Nevertheless, I believe that the basic thrust of the Turing Test is sound, and in this post, I’d like to explain why.

In his 1950 paper, Computing Machinery and Intelligence (Mind, 59, 433-460), computer scientist Alan Turing argued that the question, “Can machines think?” was a scientifically fruitless one, and that the question we should be asking instead was: would it be possible to construct a digital computer that was capable of fooling blind interrogators into believing that it was a human being, by giving answers to the interrogators’ questions that a human being would naturally give? This line of thinking was what prompted Turing to come up with the idea of the Turing Test. Here’s how Professor Feser describes it:

The basic idea is as follows: Suppose a human interrogator converses via a keyboard and monitor with two participants, one a human being and one a machine, each of whom is in a different room. The interrogator’s job is to figure out which is which. Could the machine be programmed in such a way that the interrogator could not determine from the conversation which is the human being and which the machine? Turing proposed this as a useful stand-in for the question “Can machines think?” And in his view, a “Yes” answer to the former question is as good as a “Yes” answer to the latter.
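Since the question at issue is whether intelligence can be operationalized, it is worth noticing just how operational Turing’s proposal is. The sketch below is my own illustration, not anything Turing or Feser wrote: ask_human, ask_machine and judge_verdict are hypothetical placeholders standing in for the two hidden participants and for the interrogator’s final guess.

```python
import random

# A minimal sketch of the imitation game as an operational procedure.
# ask_human, ask_machine and judge_verdict are hypothetical placeholders:
# the first two return a typed reply to a question, and the last returns
# the interrogator's guess ("A" or "B") after reading the whole transcript.

def turing_test(questions, ask_human, ask_machine, judge_verdict):
    """Run one blind session and report whether the machine passed it."""
    # Randomly assign the human and the machine to rooms A and B,
    # so the interrogator cannot rely on position.
    rooms = {"A": ask_human, "B": ask_machine}
    if random.random() < 0.5:
        rooms = {"A": ask_machine, "B": ask_human}
    machine_room = "A" if rooms["A"] is ask_machine else "B"

    # The interrogator sees only the questions and the typed answers.
    transcript = [(q, rooms["A"](q), rooms["B"](q)) for q in questions]

    guess = judge_verdict(transcript)  # the room the interrogator names as the machine
    return guess != machine_room       # the machine "passes" this session if the guess is wrong
```

Everything the interrogator ever sees is the transcript; nothing about the participants’ inner workings enters into the verdict, which is precisely the feature Turing prized and Feser regards as insufficient.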

An important feature of the Turing test, highlighted by Feser, is that the interrogator had to be an ordinary person. Computer scientists were disqualified because they might “cheat” by using their special background knowledge to ask questions that might trick a computer:

… Turing elsewhere acknowledged that in a Turing Test situation, someone with expertise about machines might well be able to figure out from subtle clues which is the machine. Turing thus stipulated that the interrogator should be someone who does not have such expertise. He thought that what mattered was whether the ordinary person could figure out which is the machine.

Although Turing himself had no interest whatsoever in attempts to construct humanoid robots (such as EveR-1, a female android, pictured above, which is capable of expressing a variety of human emotions and of having a very limited conversation in Korean and English), he would surely have agreed that if such a robot, at some point in the future, proved capable of fooling the human beings with whom it interacted into thinking that it was human, this would conclusively disprove the claim that there is something special about human intellectual abilities.

Feser’s objections to the Turing test

Regular readers at Uncommon Descent will be aware that Professor Feser and I have disagreed on previous occasions, on the subject of Intelligent Design (see here for a handy list of his key posts on the subject). I do not propose to add anything further on the topic of Intelligent Design; by now, I imagine that readers will have made up their own minds. Rather, my disagreement with Feser in this post relates to the nature and appropriate definition of intelligence.

Feser has three principal objections to the Turing test:

first, the mere fact that a machine might be taken by an ordinary person to be intelligent no more establishes that it is in fact intelligent than the fact that pyrite (or fool’s gold) might be taken by someone to be gold establishes that it is in fact gold;

second, the Turing test illegitimately conflates a philosophical question (“What is thinking?”) with a methodological one (“How should we decide whether something is capable of thinking?”), and the grandiose verificationist claim that the meaning of any statement (e.g. “This machine can think”) can be equated with how we would verify that statement turns out to be self-refuting, as the principle of verification is itself unverifiable; and

finally, the parts of a machine have no built-in powers to engage in intelligent conversation: any conversational capacities that they may possess have to be artificially imposed on them by the human beings who originally programmed them. As Feser puts it:

…Left to themselves, machines don’t converse. So, that we can make them converse no more shows that they are intelligent than throwing stones or making watches shows that stones have the power of flight or that bits of metal qua metal can tell time.

I have to say that while I have the deepest respect for Professor Feser as a philosopher, I think his response to Turing’s original argument concedes far too much. For what he seems willing to concede, at least for the sake of argument, is that a machine might one day be constructed which was capable of fooling all laypeople, all of the time, into thinking that it was intelligent, on the basis of its ability to hold a conversation. (Abraham Lincoln’s dictum notwithstanding, a modal logician would insist that if you can fool some of the people some of the time, it is possible that you will fool all of the people all of the time, even if it is very unlikely.) Nowhere in his essay does Feser even suggest that a group of laypeople, working together, would eventually be able to devise a series of questions that could distinguish a machine programmed to mimic the human ability to converse from an intelligent human being. Now, I don’t know what Feser’s own views on the subject are, but the clear implication of his argument is that even if a group of laypeople could never tell the difference between a machine and a human being by conversing with them, that would in no way undermine the claim that there was something special about human intelligence. I would argue that on the contrary, it would undermine our claim to uniqueness, and that if a machine could be constructed which could systematically fool ordinary people into thinking that they were conversing with a human being, we could no longer claim that there was anything special about human intelligence – or about intelligence in general. It would be a terrible day for the human race. I have repeatedly declared that my faith is falsifiable, and this is one thing that would falsify it.

Telling fool’s gold from true gold: is this a good analogy to the problem of distinguishing machine “intelligence” from human intelligence?

Pyrite cubic crystals on marl from Navajún, La Rioja, Spain. Image courtesy of Wikipedia and Carles Millan.

I’d now like to address Feser’s arguments. Let’s begin with his analogy between pyrite (which can only be distinguished from gold by an expert) and machine “intelligence” (which, Feser concedes, might one day require an expert in order to distinguish it from genuine intelligence). The reason why I think this analogy fails is that the defining properties of gold are not naked-eye properties that ordinary people can readily identify, but chemical properties, whereas the defining properties of human intelligence – namely, the ability to direct appropriate means towards specified ends, and to justify one’s choice of means for the end in question, using language that one’s audience can understand – are properties which anybody with eyes and ears (and the ability to understand the human being’s language) can recognize. For instance, if someone is trying to fish something out of the water, then we would expect them to immediately apprehend that a long stick would be better than a short stick for the task, and we would also expect them to be able to succinctly explain why: “A short stick won’t reach the object in the water.” The two properties which define an intelligent agent – directing suitable means to ends and justifying one’s choice of means, using language – are general properties, and one does not need to be an expert to verify that a given agent has them.

Now, having been a computer programmer for ten years, I am well aware that a computer with a large enough knowledge base (such as IBM’s Watson) could be programmed to give answers like the one above, to the question, “Why would you use a long stick to fish an object out of the water?” But the computer would eventually flounder if it were asked follow-up questions which human beings would find very easy to answer – for example, “What would you do if the object was too heavy to lift out of the water with a stick?” [answer: I’d try to fashion a hook], “What would you use to make the hook?” [answer: I’d use a piece of metal wire], “What would you do if you weren’t strong enough to bend the wire into a hook?” [answer: I’d ask someone to help me, or: I’d buy a hook at a store, if I couldn’t make one], and “What do you think your Dad would advise you to do, if the object that had fallen in the water was highly valuable?” [answer: He’d probably tell me to call the police]. While computers can be programmed with information about specific topics that can make them appear knowledgeable, they lack the general ability that we call intelligence. Also, computers, lacking subjectivity, have no “theory of mind”: they are unable to put themselves in other people’s shoes and adopt the perspective of another person. A computer has no idea what advice anyone’s father would give, on fishing a valuable object out of the water.

Of course, there are some ends for which a given intelligent agent may be utterly incapable of selecting a suitable means: for instance, I wouldn’t have a clue how to repair a broken television. And there are some ends for which the choice of appropriate means is a difficult matter, requiring detailed, technical knowledge – hence an intelligent agent’s justification of their choice may be totally incomprehensible to a layperson. But the ability to recognize that an individual has the general capacity to direct suitable means towards specified ends, and to justify their choice of means, using suitable language, is not an ability requiring any special expertise. Thus I would not expect a group of philosophers or scientists to fare any better than a group of laypeople in identifying which entities possess this capacity (e.g. normal human beings), and which entities do not (e.g. computers). (I’m using the word “entity” fairly broadly here: Professor Feser would point out, correctly, that a computer isn’t really an entity as such, but an assemblage of parts.) And I would expect a group of laypeople to be just as adept as a group of philosophers or scientists in weeding out impostors, such as machines that had been programmed to mimic our ability to converse on a variety of subjects. (And while I would grant that a group of IT experts could spot a computer impostor more quickly than a group of laypeople, I would also expect that the laypeople would achieve the same level of accuracy, if they had more time to ask questions.)

Does the Turing test presuppose verificationism (or for that matter, scientism)?

I now turn to Feser’s attack on the verificationist principle, that the meaning of a statement (e.g. “This machine can think”) simply consists in the manner in which one would verify that statement (e.g. by seeing whether it can pass a Turing test). Feser argues that the principle is self-refuting; but it need not be, if it is re-formulated as a challenge rather than a principle: “Show me a statement which is meaningful but unverifiable!”

More to the point, however: I would argue that the Turing test does not presuppose any such verificationist theory of meaning. Nor does it require one to subscribe to the sweeping claims of scientism – the arrogant view that any questions which cannot be answered by scientific investigation are totally meaningless. All that a proponent of the Turing test needs to point out is that intelligence is an operational ability – namely, the ability to direct means to ends and to explain one’s reasons for having done so – and that the question of whether a given entity is capable of performing a certain kind of operation is surely a question that falls within the purview of science.

We can appreciate this point more readily by considering a hypothetical case where a group of astronauts lands on a strange planet and encounters some alien beings which appear to move around in a purposeful manner. At this point, the astronauts would need to consider various possibilities. Are the alien beings merely robots, left behind by a race of intelligent creatures who died out long ago? Are they non-sentient organisms, whose behavior patterns are entirely innate and whose purposeful behavior lacks the flexibility required for intelligence? Are they sentient beings, which are very adept at learning “smart moves” (like our crows), but which nevertheless lack reason? Or are they genuinely intelligent agents? In order to answer these questions, the astronauts would focus on what the beings were capable of doing – and in particular, what problems they were capable of solving. For instance, the astronauts might try putting the alien beings’ food out of reach, and see how they went about solving the problem of accessing their food. And even if the alien beings proved to be highly skillful problem solvers, the astronauts would be reluctant to ascribe intelligence to them until they managed to establish communication with them, using some improvised language which both parties were able to understand. If the alien beings were able to give reasons for their actions and displayed self-awareness in their discourse, and if they were able to converse flexibly on a variety of disparate topics, then the astronauts would have no choice but to conclude that the alien beings were indeed intelligent agents. In arriving at this conclusion, the astronauts would be using the methods of science, and they would be assessing the alien beings on the basis of what they said and did. Intelligence, then, is something which can be verified operationally. And while there may be rare cases – such as individuals with “locked-in” syndrome or people in a persistent “vegetative” state – when we might legitimately wonder whether an individual possesses intelligence, despite evincing no signs of doing so – the point remains that in standard cases, when it comes to intelligence, “Handsome is as handsome does.”

I might add that even in the non-standard cases mentioned above, when we ascribe intelligence to the individuals concerned, it is precisely because we believe that given the right opportunities – that is, if the individuals possessed a fully alert brain, or a properly functioning nervous system – these individuals would be capable of displaying behavior, or engaging in speech, which we would recognize as intelligent, if we witnessed it. In other words, even if one cannot simply equate intelligence with a behavioral or linguistic capacity, one can certainly equate it with a disposition for such a capacity.

Finally, I’d like to address Professor Feser’s argument that the reason why human intelligence is a unique capacity, which no machine could ever hope to possess, is that it is a natural, innate capacity, whereas the capacities of a machine are imposed on it. What Feser is doing here, in a nutshell, is adding an extra condition to the definition of intelligence which I sketched above. Whereas I would say that X is intelligent if and only if X has the capacity (or at least the disposition) to direct appropriate means to specified ends, and to justify the choice of means, using suitable language, Feser would add the condition that X must possess immanent finality and not merely extrinsic finality. That is, its parts must have an inherent tendency to function together in the way that they do, rather than a tendency which is artificially imposed on them by clever engineers.

Immanent finality and extrinsic finality: is there a clear-cut distinction between the two?

In order for Feser to justify adding this condition, there has to be a clear-cut, black-and-white distinction between entities which possess immanent finality – a term which Feser defines here and which I have commented on here – and entities possessing extrinsic finality, since the question of whether a given entity possesses intelligence is surely a yes-no question: either it does or it doesn’t. (And the fact that intelligence comes in various degrees, or even various kinds, no more undermines this point than the fact that living things come in many different kinds undermines the scientific claim that there is a vast gulf between a living thing and a non-living entity.)

I’d now like to ask Professor Feser whether he would regard the following entities as possessing immanent finality, or merely extrinsic finality.

(i) Entity A is composed of parts which have an inherent, natural tendency to function together, once assembled, but which have no inherent tendency to come together in the first place. The parts have to be assembled by an intelligent agent.

(ii) Entity B is just like entity A, except that its parts don’t have to be assembled by an intelligent agent: they can be assembled by a robot.

(iii) Entity C is composed of parts which have an inherent, natural tendency to function together, but which require an intelligent agent (or a suitably programmed robot) to put them in a situation where they will function (naturally) in a certain way.

(iv) Entity D is composed of parts which have an inherent, natural tendency to function together, but entity D is also descended from another entity (say, A or B), whose parts originally had to be put together by an intelligent agent (or alternatively, in the case of B, by an assembly-line robot).

(v) Entity E is composed of parts which have an inherent, natural tendency to function together, but entity E is also descended from another entity, F, whose parts had no built-in tendency to function together: F was a rigged up self-replicating contraption containing multiple parts which were designed to back up each other’s functions, enabling it to keep operating even if some parts broke down. Over the course of time, however, the surplus parts in the descendants of F were gradually lost due to tiny errors in the replicating process, and the parts remaining in its descendant E are highly inter-dependent, such that E cannot function unless all its parts are working together in a tightly coordinated fashion.

So, which of these entities does Professor Feser view as having built-in capacities? I, for one, would like to know.

The reason why I mention these examples is that I consider Feser’s latest post to be somewhat unclear on these points. Take, for instance, his assertion that watches have no inherent tendency to tell the time: they only do so because human beings imposed a time-telling function on their parts, when assembling these parts. Quite correct; but Feser then goes on to assert that the fact that a watch displays the time of day is not an observer-independent fact about the watch. On this point, I think, he is mistaken. What about bees, which can tell the time of day? This ability is clearly observer-independent: it would be true even if there were no human beings in existence. Moreover, scientists can verify that bees have this ability simply by looking at how bees behave, even though they haven’t yet figured out which parts of a bee’s brain perform this function, let alone how these parts do their job.

Of course, Feser could respond that a bee’s ability is inherent to it, while a watch’s ability is not. But scientists have designed robotic bees which are even capable of fooling real bees. If Feser were stuck on a strange planet and he happened to encounter some insect-like creatures that could somehow tell the time of day, how would he ascertain whether they were organisms with immanent finality, or mere artifacts with extrinsic finality?

Perhaps the logical thing to do would be to inspect their parts and see how they functioned. All well and good, but according to Feser, it is not that simple. For he writes that the pieces of metal in a watch are “telling time” “only because we have made them do so, and they wouldn’t be doing it otherwise.” Going by this statement, the decisive criterion for whether a thing possesses immanent finality is not whether its parts function together in a manner which is natural for them and proceeds from their built-in capacities, but whether these parts were assembled together by an intelligent agent at some point in the past. In other words, a thing’s possession of immanent finality cannot be ascertained simply by looking at its structure and/or mode of functioning; one has to know the thing’s history as well.

This interpretation of Feser is confirmed by what he says about stones:

Similarly, if I throw a stone in the air, it would be ridiculous to conclude “Since agere sequitur esse [i.e. the way a thing acts or behaves reflects what it is – VJT], it follows that stones can fly!” The stone is “flying” only because and insofar as I throw it. Flying is, you might say, merely an accidental form of the stone.

I could point out that meteorites are stones, and they have been flying through space for billions of years, and that pieces of rock have been flying out of volcanic vents for billions of years as well. However, it is certainly true that a stone, like a tumbleweed, lacks the active power to soar aloft, which a bird (or a cicada) possesses because it carries its own supply of energy within its body, which a stone does not. But what Feser appears to be asserting is that the reason why a stone thrown in the air cannot properly be said to fly is that some agent threw it. Thus it seems that Feser is claiming that whenever a thing’s motion is caused by an external intelligent agent at some point in the past, this motion cannot be called natural: it is imposed from without, and is therefore accidental to the thing in question. In other words, Feser apparently believes that knowledge of a thing’s internal structure and the movements of its parts is insufficient to resolve the question of whether it possesses immanent finality; one must know its history as well.

Such a view would imply that we could never be sure that an insect which we found on another planet possessed immanent finality. For if it turned out that the insect was descended from an original insect that was put together by alien engineers, then that fact alone would suffice to make the insect an artifact. Indeed, even if it turned out that the first ancestral life-form on that planet were engineered by a race of aliens visiting the planet, that would be enough to disqualify all of its descendants from being things (or substances) which possess immanent finality.

Puzzlingly, though, Professor Feser appears to reject this view in another post, where Feser allows for the theoretical possibility – though he is himself highly skeptical – that scientists might one day be able to intelligently generate a living organism using the raw materials of life. (I presume he means organic molecules, such as amino acids and nucleotides.) Feser thinks that scientists could conceivably do this, but only if there is “some final causality already built into nature” which would allow them to “use non-living materials that nevertheless have immanent causation definitive of life within them, ‘virtually.'” If the scientists who are putting together a living thing are simply making use of the immanent causation which is latent within the non-living materials from which they are assembling the organism, then is there any reason, in principle, why a robotic bee could not be assembled by scientists in a manner consonant with its possessing immanent finality?

Now of course, I would readily grant that the parts of the robotic bee would have to be much more tightly integrated than the components of a watch. But if the parts displayed a nested hierarchy of functionality – where the organism [i.e. the robo-bee] was composed of organs, which were composed of tissues, then cells, then organelles, and then bio-molecules – and if the parts at each level were specifically designed to support the functionality of the whole which they comprised, then who could rationally deny that what we were looking at was a case of genuine immanent finality?

I conclude that there is no reason in principle why an artifact (such as a computer) could not possess immanent finality. And since computational devices can already be manufactured using biological components, I see no reason why entire organisms could not be specifically designed to perform certain kinds of computations, as well.

Feser could respond, however, that while an organism might be designed by us in order to perform certain computations, its proper functions – assuming for the sake of argument that these are indeed immanent – would not be computational functions as such, but biological functions which we chose to interpret as the answer to some calculation we were trying to perform, using the organism. In other words, even if there are objects possessing immanent finality which we can use to perform computations, their proper function is not to compute; hence their computational capacities are in no way intrinsic to them.

Can anything ever be legitimately described as a natural computer?

The seashell Conus textile exhibits a cellular automaton pattern on its shell. Image courtesy of Wikipedia.

In his post, Feser, citing an article by the philosopher John Searle, contends that “there is no observer-independent fact of the matter about whether something even counts as a computer in the first place.” The gist of Searle’s argument, as far as I can make out, is that (a) anything could be described as a digital computer, “because any object whatever could have syntactical ascriptions made to it”; and more importantly, (b) “The ascription of syntactical properties is always relative to an agent or observer who treats certain physical phenomena as syntactic.” Searle also argues that syntax and symbols (and hence computation) are not definable in terms of the physics of a system; hence the brain cannot be described as a digital computer by virtue of its intrinsic characteristics.

I hesitate to contradict an esteemed philosopher such as John Searle (a bold and perspicacious thinker for whom I have the greatest respect), but it seems to me that on a purely conceptual level, his objections have been answered by mathematician and computer scientist Stephen Wolfram, author of A New Kind of Science (Wolfram Media, 2002; see here for some very mixed reviews). For what Wolfram shows in Chapter 8 of his book is that a wide variety of natural phenomena – ranging from crystal growth to fluid flow to biological pigmentation patterns – perfectly mimic the behavior of certain kinds of cellular automata. And since the latter evolve over time according to certain mathematical rules which are both syntactical and observer-independent, it seems that there is, after all, a legitimate sense in which we can speak of “natural computers” in the real world. Searle is perfectly correct in saying that syntax cannot be reduced to physics, but it can certainly be reduced to mathematics, and I would argue that if the behavior of a biological system conforms to a mathematical rule, then we can properly speak of that system as performing digital computations.
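To see what a purely syntactic, observer-independent rule looks like in practice, here is a minimal sketch (my own illustration, not code taken from Wolfram’s book) of an elementary cellular automaton. “Rule 30” is the rule Wolfram most often compares to natural pigmentation patterns such as the Conus textile shell pictured above; the grid width and number of generations are arbitrary choices made purely for display.

```python
# A minimal sketch of an elementary cellular automaton ("Rule 30" by default).
# Each new cell is a fixed function of its three neighbours: the neighbourhood
# is read as a 3-bit number, and the corresponding bit of the rule number is
# the new cell's value.

def step(cells, rule=30):
    """Apply one generation of an elementary cellular automaton (wrap-around edges)."""
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right  # neighbourhood as a number from 0 to 7
        new.append((rule >> index) & 1)              # look up the rule's output bit
    return new

if __name__ == "__main__":
    width, generations = 63, 20
    row = [0] * width
    row[width // 2] = 1                              # start with a single live cell
    for _ in range(generations):
        print("".join("#" if c else "." for c in row))
        row = step(row)
```

The update rule is nothing more than a table lookup on a three-cell neighbourhood, a mathematical rather than a physical fact about the system, and it runs the same way whether or not anyone observes or interprets the output.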

Where does that leave us? What it means is that Feser’s contention that a computing machine has no inherent tendency to perform computations, but can only be said to perform them insofar as we impose these functions on its parts, is a little hasty. If certain biological functions occurring in a particular organism perfectly conform to the behavior of some class of cellular automata, then it seems that we can describe that organism as having an inherent tendency to perform computations.

The Turing test: where the real problem lies

Now let’s apply this reasoning to the Turing test. So far, I have only discussed syntax. But semantics is another matter entirely. For all I know, scientists might one day succeed in engineering an organism whose responses to certain environmental stimuli sounded like human speech, and they might even be able to find a way of guaranteeing that this speech was syntactically correct. Feser argues that we would need to interpret that output before deeming it to be speech. But the existence of speech-to-text recognition software renders that point moot.

But the most powerful objection to the hypothetical scenario I proposed above, in my view, is that even if the artificially engineered organism were capable of syntactically correct speech, and even if the utterance of such speech were one of its built-in biological functions, these facts would not suffice to make its speech semantically meaningful, as Noam Chomsky’s example of “Colorless green ideas sleep furiously” demonstrates. Nor would it make the response appropriate to the question posed to the organism by a human interlocutor. The philosopher Rene Descartes (1596-1650) expressed this insight with admirable clarity in his Discourse on Method:

“…For one can easily imagine a machine made in such a way that it expresses words, even that it expresses some words relevant to some physical actions which bring about some change in its organs (for example, if one touches it in some spot, the machine asks what it is that one wants to say to it; if in another spot, it cries that one has hurt it, and things like that), but one cannot imagine a machine that arranges words in various ways to reply to the sense of everything said in its presence, as the most stupid human beings are capable of doing.”
(Discourse on Method, Part Five.)

As we have seen, Professor Feser (following Karl Popper and John Searle) argues that it is a profound mistake to regard organisms and their parts (e.g. the human brain) as even possessing a capacity for syntax. I have explained why I think the “syntax barrier” is not insurmountable, in principle. One needs to draw one’s battle-lines carefully, and the line in the sand that I would draw is that of semantics, rather than syntax. The construction of semantically meaningful and situationally appropriate responses to questions on any and every topic, I would confidently maintain, is a Rubicon which no computing machine will ever cross.
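Just how cheaply syntax alone can be produced is easy to demonstrate. The toy sketch below is my own illustration (the grammar and word lists are arbitrary inventions): every sentence it prints is grammatically well-formed in the same way Chomsky’s famous example is, yet virtually none of them mean anything or answer anything, which is exactly why I draw the line at semantics rather than syntax.

```python
import random

# A toy context-free grammar that mechanically generates syntactically
# well-formed sentences in the mould of "Colorless green ideas sleep furiously".
# The grammar and word lists are arbitrary; the point is that grammaticality
# alone can be produced by a trivial mechanical procedure.

GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Adj", "Adj", "N"]],
    "VP":  [["V", "Adv"]],
    "Adj": [["colorless"], ["green"], ["furious"], ["quadruple"]],
    "N":   [["ideas"], ["theorems"], ["Thursdays"]],
    "V":   [["sleep"], ["evaporate"], ["argue"]],
    "Adv": [["furiously"], ["sideways"], ["politely"]],
}

def generate(symbol="S"):
    """Expand a symbol by picking one of its productions at random."""
    if symbol not in GRAMMAR:                  # a terminal word
        return [symbol]
    words = []
    for part in random.choice(GRAMMAR[symbol]):
        words.extend(generate(part))
    return words

if __name__ == "__main__":
    for _ in range(3):
        print(" ".join(generate()).capitalize() + ".")
```

A machine that merely produces well-formed strings has crossed no Rubicon; the question that matters is whether its output is meaningful and appropriate to what was actually asked.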

The real point at issue

But even if there were (as Feser contends) a hard-and-fast distinction between immanent finality and extrinsic finality (as I’m inclined to think there is, despite the difficult cases I alluded to above) – and even if every computer was merely an assemblage of parts, and thus utterly devoid of immanent finality, I would still maintain that the construction of a computer that could converse with us so skillfully that we couldn’t tell it from a human being would necessitate a drastic revision of our world-view. I’d now like to explain why.

Why I believe that a computer that could pass the Turing test with flying colors would force a major revision of our world-view

I declared above that if a machine could be constructed which could systematically fool ordinary people into thinking that they were conversing with a human being, it would be a terrible day for the human race. As we’ve seen, Feser would greet the discovery with sangfroid, on the grounds that (i) the machine doing the conversing is not really a thing, but an assemblage of parts, and (ii) its responses would still need to be interpreted by human beings before they could be deemed syntactical utterances. As I’ve argued above, I don’t regard the second point as particularly telling. Feser’s critical objection is that a machine is not a real thing (or substance) as such, and that since intelligence is an attribute of things (or substances) and not assemblages, a talking machine doesn’t even get to first base.

I have to say that I don’t regard this response as very consoling, even if it is correct. For what it implicitly acknowledges is that a general and universal capacity such as intelligence can be perfectly replicated by an assemblage of physical parts. And if that’s correct, then we can certainly no longer speak of intelligence as an immaterial capacity.

It gets worse. For it is hard to see how intelligence can be regarded as a general and universal capacity, if the output of an intelligent being can be perfectly mimicked by a finite assemblage of parts whose capacities are all highly specific. There are of course philosophers who deny that we have any such universal capacity as “intelligence,” but the human mind’s ability to address any kind of problem and attempt to solve it serves as a sufficient refutation of their views, as far as I am concerned. However, if it could be demonstrated that a finite set of specific capacities, associated with an assemblage of parts, was actually capable of matching the problem-solving capacities of human beings, then I’d be forced to revise my views.

Finally, if it were shown that intelligence were merely a specific kind of capacity, like the capacity to jump or quack or see in color, then it would follow that there is nothing special about intelligence. It would become just another capacity.

For all these reasons, then, I believe that the creation of a machine that could actually match human conversational and problem-solving capacities would be a mortal blow to the prestige of the human race. Of course, I’m quite sure that it’ll never happen. But if it did, it would be curtains for humanity.

I’d now like to throw the discussion open. What do readers think?

Comments
Another few quick thoughts: The thread mentioned above is related to this one in the sense that the physical brain, an extremely clever configuration of matter, is still only that, and cannot be the seat of intelligence any more than a computer can. The seat of rationality is, of course, the rational soul. A human being consists of both body and soul. That is why there must be a bodily resurrection. We would spend eternity as incomplete beings without one.

harry
March 16, 2015 at 07:06 AM PDT

Having worked with computers at the level of the CPU's instruction set and processor registers, and at the level of data being moved and manipulated between those registers, and read from machine memory and written to memory (I became familiar enough with those things to write software that simulated the instruction set of a CPU for debugging purposes), let me assure anyone who may be interested: computers do not have and will never have any more intelligence than that of a box of rocks. They are very cleverly designed machines that demonstrate the human intelligence of those who designed them. That they have their own "intelligence" is and will always remain merely an illusion. There is a non-material component to a genuine intellect that we will never be able to impart to a machine. If that topic interests anyone, that is being discussed on the thread How the brain enables the mind? here: https://uncommondescent.com/neuroscience/how-the-brain-enables-the-mind/

harry
March 16, 2015 at 06:44 AM PDT

Thanks for the post VJT. Am I the only one who thinks of this scene from Blade Runner when the Turing Test comes up?
Of course, that scene involves one replicant administering the VC to another. Then what happens?

Reciprocating Bill
March 12, 2015 at 01:20 PM PDT

Dr. Torley: Interesting post and a great topic. Couple of quick thoughts:

Intelligence -- the essence of what it means and even the very etymology of the word -- relates to the ability to choose between contingent possibilities. Could your robot actually choose what its interactions are if it is pre-programmed for certain responses? Or is it just carrying out the pre-programmed responses? If the latter, then it seems Feser's points might be valid, notwithstanding any appearance of "human-like" responses. In a very real sense then, intelligence is closely tied to free will. Incidentally, those who hold to a purely materialistic view of life would say that we are not making real choices and that we are not exercising real free will. It just appears that we are, but in reality we are just carrying out whatever pre-programmed instructions our physical arrangement of matter dictates.

Second question, if you would feel comfortable answering, is what your views are on the nature of human intelligence? Here is what I mean: Many people who hold to a Biblical or similar religious narrative seem to accept the idea that God created a physical, human body that is capable of thinking, feeling, and demonstrating true intelligence. Yet what is this physical body, if not a particular arrangement of matter? So the question is, do you hold to the idea that God could create an arrangement of matter (a human body) capable of intelligence? Or do you believe that our individual intelligence transcends our body and exists, at some level, apart from that body?* Is there a ghost in the machine, or is it just the machine?

Thanks,

-----
* BTW, without meaning any offense to anyone but with a desire to avoid one back-and-forth, I would say that an answer that claims, in effect, that "God can create such an arrangement of matter, but man cannot" is not an intellectually satisfying answer.

Eric Anderson
March 12, 2015 at 10:26 AM PDT

A ten year old kid could confuse a questioner into thinking they were an adult. A computer is only programmed, or rather only memorizes, what it's told to. It has no mind of its own. No purpose and not a single thought. It's just a memory machine. Indeed our brain/mind is just a memory machine, I say, and gives an impression of being like a computer because of the same memory operations. Yet our soul/heart is the one using the memory machine for independent thoughts. They get this computer intelligence stuff wrong because they get human thinking wrong. Why is a computer in any way intelligent because it memorized something? Or rather, what is the difference between a normal computer and a "rain man" computer? None! It's just a dumb memory machine. It has no thinking going on.

Robert Byers
March 11, 2015 at 08:56 PM PDT

Mapou says: "One of the biggest problems with the Turing Test is that it conflates thinking and intelligence with human language, i.e., symbol manipulation. It eliminates a huge number of intelligent creatures from consideration."

I say: I agree. Not only does a language-based Turing test falsely rule out lots of intelligent agents, it is also vulnerable to false positives. I think ID is all about developing a universal Turing Test that does not rely on human language. A concept like CSI, if formalized and generalized, would be invaluable for so many fields, from AI to the study of neurological disorders. I hope to see some breakthroughs in this direction from that stealth ID postulate, Integrated Information Theory.

Peace

fifthmonarchyman
March 11, 2015 at 04:20 PM PDT

One of the biggest problems with the Turing Test is that it conflates thinking and intelligence with human language, i.e., symbol manipulation. It eliminates a huge number of intelligent creatures from consideration.

You will not hear this from the computer science community (Turing is their God), but Turing is actually the father of one of the worst failures in the history of science: the idea that intelligence is the manipulation of symbols. This is known as good old fashioned AI, or GOFAI. The AI community is still reeling from having wasted billions and half a century on a wild goose chase. Still, Turing is so revered by the artificial intelligentsia and symbolic AI is so ingrained in their culture that Deep Learning, the machine learning technique that's been making the news, is really just GOFAI with lipstick on.

I never really could figure out the motivation behind the adulation of Turing. The modern digital algorithmic computer (Von Neumann bottleneck and all) had already been invented 100 years before Turing by Charles Babbage. Babbage and Lady Ada Lovelace even conceived of the idea of storing both programs and data in the same medium. Turing's magnum opus, his proof of the undecidability of the Halting problem on a Turing machine, is irrelevant since modern computers are not Turing machines. The truth is that no software engineer ever thinks about the Halting problem while designing software. I guess I just don't see what all the fuss about Turing is really about. He was just this lonely guy who dreamed of a future filled with conscious and intelligent mechanical companions.

Mapou
March 11, 2015 at 12:51 PM PDT

Very interesting post. I would be curious, VJT, about how you would respond to this study from Mich. State.

DonaldM
March 11, 2015 at 10:09 AM PDT

Machine intelligence traces back to the biological intelligence that created it. The sky will not fall when machines start conversing with us.

Joe
March 11, 2015 at 07:01 AM PDT

Thanks for the post VJT. Am I the only one who thinks of this scene from Blade Runner when the Turing Test comes up? https://www.youtube.com/watch?v=g-DkoGvcEBw

Barry Arrington
March 11, 2015 at 06:34 AM PDT

VJT, another interesting, thought-provoking interaction. Good for us all. I'd say that a machine that can converse in general and shows practical good sense would be intelligent; but not necessarily conscious or morally responsible or rational in a sense that rises beyond GIGO; though it may be self-correcting to some extent. Such may be possible, but R Daneel and co are still a ways off. The charming Si-rubber skinned young miss at the head of the post notwithstanding. I think the Smith, two-tier controller cybernetic loop model -- I have thought so for the best part of a decade now, since I ran across it -- is a good point of departure, and that it opens up the point that there may be all sorts of possibilities for the higher order element. That is, even were one created that was a robot, it would not mean that we are just wet-ware robots. Though a great many would jump to the conclusion. And BTW, wouldn't all this be powerful cases of underlying FSCO/I coming about by design, much like that charming young Miss in your photo? (BTW, can she converse sensibly over a dinner table in a restaurant?) KF

kairosfocus
March 11, 2015 at 05:58 AM PDT
