Uncommon Descent Serving The Intelligent Design Community

Logic & First Principles, 21: Insightful intelligence vs. computationalism


One of the challenges of our day is the commonplace reduction of intelligent, insightful action to computation on a substrate. That's not just Sci-Fi; it is a challenge in the academy and on the street, especially as AI grabs more and more headlines.

A good stimulus for thought is John Searle as he further discusses his famous Chinese Room example:

The Failures of Computationalism
John R. Searle
Department of Philosophy
University of California
Berkeley CA

The Power in the Chinese Room.

Harnad and I agree that the Chinese Room Argument deals a knockout blow to Strong AI, but beyond that point we do not agree on much at all. So let’s begin by pondering the implications of the Chinese Room.

The Chinese Room shows that a system, me for example, could pass the Turing Test for understanding Chinese, for example, and could implement any program you like and still not understand a word of Chinese. Now, why? What does the genuine Chinese speaker have that I in the Chinese Room do not have?

The answer is obvious. I, in the Chinese room, am manipulating a bunch of formal symbols; but the Chinese speaker has more than symbols, he knows what they mean. That is, in addition to the syntax of Chinese, the genuine Chinese speaker has a semantics in the form of meaning, understanding, and mental contents generally.

But, once again, why?

Why can’t I in the Chinese room also have a semantics? Because all I have is a program and a bunch of symbols, and programs are defined syntactically in terms of the manipulation of the symbols.

The Chinese room shows what we should have known all along: syntax by itself is not sufficient for semantics. (Does anyone actually deny this point, I mean straight out? Is anyone actually willing to say, straight out, that they think that syntax, in the sense of formal symbols, is really the same as semantic content, in the sense of meanings, thought contents, understanding, etc.?)

Why did the old time computationalists make such an obvious mistake? Part of the answer is that they were confusing epistemology with ontology, they were confusing “How do we know?” with “What it is that we know when we know?”

This mistake is enshrined in the Turing Test (TT). Indeed this mistake has dogged the history of cognitive science, but it is important to get clear that the essential foundational question for cognitive science is the ontological one: "In what does cognition consist?" and not the epistemological other minds problem: "How do you know of another system that it has cognition?"

What is the Chinese Room about? Searle, again:

Imagine that a person—me, for example—knows no Chinese and is locked in a room with boxes full of Chinese symbols and an instruction book written in English for manipulating the symbols. Unknown to me, the boxes are called “the database” and the instruction book is called “the program.” I am called “the computer.”

People outside the room pass in bunches of Chinese symbols that, unknown to me, are questions. I look up in the instruction book what I am supposed to do and I give back answers in Chinese symbols.

Suppose I get so good at shuffling the symbols and passing out the answers that my answers are indistinguishable from a native Chinese speaker’s. I give every indication of understanding the language despite the fact that I actually don’t understand a word of Chinese.

And if I do not, neither does any digital computer, because no computer, qua computer, has anything I do not have. It has stocks of symbols, rules for manipulating symbols, a system that allows it to rapidly transition from zeros to ones, and the ability to process inputs and outputs. That is it. There is nothing else. [Cf. Jay Richards here.]
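Searle's rule-book scenario can be caricatured in a few lines of code. The "rule book" below is a toy lookup table whose entries are invented for illustration; the point is that the lookup process involves no grasp of what any symbol means:

```python
# A toy "Chinese Room": the program maps input symbol strings to output
# symbol strings by rote rule-following. Nothing in the lookup involves
# the meanings of the symbols. (Rule-book contents are invented.)
RULE_BOOK = {
    "你好吗": "我很好",        # rule: on seeing these squiggles, emit those
    "你会说中文吗": "会",      # the "computer" never knows what either says
}

def chinese_room(symbols: str) -> str:
    """Follow the rule book; fall back to a stock squiggle if no rule matches."""
    return RULE_BOOK.get(symbols, "请再说一遍")

print(chinese_room("你好吗"))  # emits an answer with zero understanding
```

The room's fluency scales with the size of the rule book, but at no point does semantics enter the loop: only syntactic matching does any work.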

What is “strong AI”? Techopedia:

Strong artificial intelligence (strong AI) is an artificial intelligence construct that has mental capabilities and functions that mimic the human brain. In the philosophy of strong AI, there is no essential difference between the piece of software, which is the AI, exactly emulating the actions of the human brain, and actions of a human being, including its power of understanding and even its consciousness.

Strong artificial intelligence is also known as full AI.

In short, Reppert has a serious point:

. . . let us suppose that brain state A [–> notice, state of a wetware, electrochemically operated computational substrate], which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief [–> conscious, perceptual state or disposition] that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions.

This brings up the challenge that computation [on refined rocks] is not rational, insightful, self-aware, semantically based, understanding-driven contemplation:

While this is directly about digital computers — oops, let’s see how they work —

. . . but it also extends to analogue computers (which use smoothly varying signals):

. . . or a neural network:

A neural network is essentially an interconnected array of weighted-sum gates; it is not an exception to the GIGO principle.
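The weighted-sum point can be made concrete. A minimal sketch of a single artificial neuron, with invented illustrative weights and inputs, shows that it is fixed arithmetic on signals and will process garbage numbers exactly as readily as meaningful ones:

```python
def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum plus bias, squashed by a step.
    It transforms signals by fixed arithmetic; it attaches no meaning to them."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0

# Garbage in, garbage out: the same arithmetic fires on any numbers at all.
good_input = [1.0, 0.5]
garbage    = [937.2, -40.1]
w, b = [0.4, -0.6], 0.1
print(neuron(good_input, w, b), neuron(garbage, w, b))
```

Whatever intelligence a trained network displays lives in the choice of weights, which the node itself neither understands nor evaluates.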

A similar approach uses memristors, creating an analogue weighted sum vector-matrix operation:
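What such a crossbar computes is, in effect, a vector-matrix product: the conductances form a matrix G, the input voltages a vector v, and the output currents are i = G·v by Ohm's and Kirchhoff's laws. A minimal sketch, with invented conductance and voltage values:

```python
def crossbar_output(G, v):
    """Analogue vector-matrix multiply: each output current is the weighted
    sum of input voltages, weighted by the memristor conductances in its row."""
    return [sum(g * x for g, x in zip(row, v)) for row in G]

G = [[0.2, 0.5],   # per-device conductances in siemens (illustrative)
     [0.1, 0.3]]
v = [1.0, 2.0]     # input voltages in volts
print(crossbar_output(G, v))  # output currents in amperes
```

The analogue substrate changes the physics of the computation, not its character: it is still a blind summation of signals.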

As we can see, these entities are about manipulating signals through physical interactions, not essentially different from Leibniz’s grinding mill wheels in Monadology 17:

It must be confessed, however, that perception, and that which depends upon it, are inexplicable by mechanical causes, that is to say, by figures and motions. Supposing that there were a machine whose structure produced thought, sensation, and perception, we could conceive of it as increased in size with the same proportions until one was able to enter into its interior, as he would into a mill. Now, on going into it he would find only pieces working upon one another, but never would he find anything to explain perception [i.e. abstract conception]. It is accordingly in the simple substance, and not in the compound nor in a machine that the perception is to be sought . . .

In short, computationalism falls short.

I add [Fri May 31] that computational substrates are forms of general dynamic-stochastic systems and are subject to their limitations:

The alternative is, a supervisory oracle-controlled, significantly free, intelligent and designing bio-cybernetic agent:

As context (HT Wiki) I add [June 10] a diagram of a Model Identification Adaptive Controller . . . which, yes, identifies a model for the plant and updates it as it goes:

MIAC action: notice the supervisory control, and the observation of “visible” outputs fed back both to the in-loop control and to system identification, which creates and updates a model of the plant being controlled. Parallels to the Smith model are obvious.
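The MIAC idea, identifying a model of the plant online and using the updated model to choose the control, can be sketched for a toy one-parameter plant. The plant gain, learning rate and setpoint below are invented for illustration:

```python
# Minimal model-identification adaptive control loop for a plant y = a*u,
# where the true gain 'a' is unknown. The identifier updates its estimate
# from observed input/output pairs; the controller uses the current model
# to choose the input that should reach the setpoint. Numbers illustrative.
true_gain = 2.5        # unknown to the controller
a_hat     = 1.0        # initial model estimate of the gain
setpoint  = 10.0
lr        = 0.1        # identification learning rate

for step in range(200):
    u = setpoint / a_hat                   # control chosen from current model
    y = true_gain * u                      # plant responds (no noise here)
    error = y - a_hat * u                  # model prediction error
    a_hat += lr * error * u / (1 + u * u)  # normalized gradient update

print(round(a_hat, 3))  # the identified model converges to the true gain
```

Note that even this "adaptive" behaviour is mechanically fixed by the update rule: the identifier climbs a prescribed gradient, it does not understand the plant.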

As I summarised recently:

What we actually observe is:

A: [material computational substrates] –X –> [rational inference]
B: [material computational substrates] —-> [mechanically and/or stochastically governed computation]
C: [intelligent agents] —-> [rational, freely chosen, morally governed inference]
D: [embodied intelligent agents] —-> [rational, freely chosen, morally governed inference]

The set of observations A through D imply that intelligent agency transcends computation, as their characteristics and capabilities are not reducible to:

– components and their device physics,
– organisation as circuits and networks [e.g. gates, flip-flops, registers, operational amplifiers (especially integrators), ball-disk integrators, neuron-gates and networks, etc],
– organisation/ architecture forming computational circuits, systems and cybernetic entities,
– input signals,
– stored information,
– processing/algorithm execution,
– outputs
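Observation B above can be made concrete: a computation's output is exhausted by its transition rules plus its input, so re-running the same machine on the same tape mechanically yields the same result. A toy illustration with an invented transition table:

```python
# A computation is exhausted by its transition table plus its input: run it
# twice on the same tape and the mechanically governed result is identical.
# Transition table: (state, symbol) -> (new_state, output_bit). Invented.
TABLE = {("s0", "a"): ("s1", 0), ("s0", "b"): ("s0", 1),
         ("s1", "a"): ("s0", 1), ("s1", "b"): ("s1", 0)}

def run(tape):
    state, out = "s0", []
    for sym in tape:
        state, bit = TABLE[(state, sym)]
        out.append(bit)
    return out

print(run("abba") == run("abba"))  # nothing but the syntax decides the output
```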

It may be useful to add here a simplified Smith model, with an in-the-loop computational controller and an out-of-the-loop supervisory oracle, so that there may be room for pondering the bio-cybernetic system in light of the interface between the computational entity and the oracular entity:

The Derek Smith two-tier controller cybernetic model

In more detail, per Eng. Derek Smith:
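The two-tier structure, an in-the-loop controller mechanically tracking whatever goal it is handed while a supervisory tier resets that goal, can be sketched as follows; the plant, gain and goal schedule are invented for illustration:

```python
# Two-tier control sketch: the inner loop is a proportional controller
# tracking whatever setpoint it is handed; the outer, supervisory tier
# changes that setpoint according to a higher-level goal schedule.
def inner_loop(y, setpoint, k=0.5):
    """In-the-loop controller: mechanical error-correction toward setpoint."""
    return k * (setpoint - y)

goal_schedule = [5.0, 5.0, 5.0, 12.0, 12.0, 12.0]  # supervisory tier's choices
y = 0.0
trace = []
for goal in goal_schedule:
    for _ in range(50):          # inner loop iterates under a fixed goal
        y += inner_loop(y, goal)
    trace.append(round(y, 2))

print(trace)  # the plant settles at each supervisory setpoint in turn
```

In this sketch the supervisory tier is itself just a list; the Smith model's point is precisely that in us the supervisory level is argued to be an oracle rather than another canned schedule.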

So too, we have to face the implication of the necessary freedom for rationality. That is, that our minds are governed by known, inescapable duties to truth, right reason, prudence (so, warrant), fairness, justice etc. Rationality is morally governed, it inherently exists on both sides of the IS-OUGHT gap.

That means — on pain of reducing rationality to nihilistic chaos and absurdity — that the gap must be bridged. Post Hume, it is known that that can only be done in the root of reality. Arguably, that points to an inherently good necessary being with capability to found a cosmos. If you doubt, provide a serious alternative under comparative difficulties: ____________

So, as we consider debates on intelligent design, we need to reflect on what intelligence is, especially in an era when computationalism is a dominant school of thought. Yes, we may come to various views, but the above are serious factors we need to take into account. END

PS: As a secondary exchange developed on quantum issues, I take the step of posting a screen-shot from a relevant Wikipedia clip on the 1999 delayed-choice quantum eraser experiment by Kim et al.:

Wiki clip on Kim et al

The layout in a larger scale:

Gaasbeek adds:

Weird, but that’s what we see. Notice, especially, Gaasbeek’s observation on his analysis, that “the experimental outcome (encoded in the combined measurement outcomes) is bound to be the same even if we would measure the idler photon earlier, i.e. before the signal photon by shortening the optical path length of the downwards configuration.” This is the point made in a recent SEP discussion on retrocausality.

PPS: Let me also add, on radio halos:

and, Fraunhofer spectra:

These document natural detection of quantised phenomena.


Comments
BB, I repeat, we are working with technologies that are inherently mechanically and/or stochastically governed, thus inherently not capable of rational, insightful contemplation, inference and decision. This is why, as we are rational, I infer that we don't just have a fancy software and wiring supervisory controller but something with a different ontology that acts as supervisory oracle. That is, material entities, on our rationality, do not exhaust possible or actual reality. The counter-example and limitations of dynamic-stochastic systems even as computational substrates (wetware and otherwise) point to that. But, such will be very hard for many to accept in a radically secularistic culture. KF

PS: As for "proof" I am simply giving the nutshell version of the physics of space-time, energy driven material entities. They are dynamic-stochastic entities, even when configured as computational substrates. Signals, analogues, symbols manipulated syntactically or by way of operational amplifiers [electronic or otherwise] or neural weighted sum arrays and feedback loops do not escape the framework of dynamic-stochastic systems. Your bare rhetorical assertions to the contrary don't count. If you disagree with the framework of physics, kindly tell us what cybernetic system architecture you propose that escapes the constraints _____ and how you warrant the claim ____. I note, strong AI advocates, near as I can make out, imply that we are just naturally occurring computational entities, i.e. we do not have genuine first cause self-moved agency and rationality.

kairosfocus
May 31, 2019 04:31 AM PDT
Brother Brian:
If one designer can produce a rational, thinking entity, why is it not possible for another?
Lack of ability. Just like your lack of ability to think and reason.
If it is absolutely impossible for us to ever do so, the question has to be asked if we are truly rational thinking beings.
The two are not connected. Clearly you are just a desperate loser on some asinine agenda. But please, do try to make a case. That would be very entertaining.
ET
May 31, 2019 04:25 AM PDT
KF
BB, I have not evaded the question, I have pointed out that our candidate technology to do so, creation of computational substrates driven by mechanical and/or stochastic dynamics and based on material components, is inherently incapable of rational inference.
Well, that is one opinion, but certainly not proven. But that wasn’t the question. If one designer can produce a rational, thinking entity, why is it not possible for another? If it is absolutely impossible for us to ever do so, the question has to be asked if we are truly rational thinking beings.
Brother Brian
May 31, 2019 04:15 AM PDT
PPS: I took time to add an answer to your "religious argument" tainting, dismissive rhetoric -- as in, no: not-scientific is not equivalent to not-logical/not-rational, religious is not equal to irrational, and there is the domain of the logic of being (with the implication of distinct identity on distinguishing characteristics) to address. For your convenience, I clip here:
BTW, I am not making either a “scientific” nor a “religious” argument, but a logic of being argument. Kindly, set your anti-theistic biases aside. The pivotal fact I turn on is that we are manifestly freely and responsibly rational, on pain of self-referential grand delusion and absurdity. Secondly, material computational substrates [mechanically and/or stochastically governed combinations of material components] are by the force of the dynamics so outlined, precisely not rational, responsibly free entities. This last I know personally from having designed and built and worked with such substrates. Thus, I see that there is an ontological — logic of being gap — between the two. This leads to the puzzle of our embodiment, which then allows the Smith model to speak: supervisory controller, an extension of the line of thought in adaptive control cybernetic systems. Thus, the inference that there is such a controller in us that is a free supervisory oracle, which is not algorithmic or a computational substrate. Which we can call a mind or better a soul, which has facilities we term mind, conscience, emotions, volition etc. It is that, or reduction to utter grand delusion driven irrationality and amorality; however disguised.
kairosfocus
May 31, 2019 03:55 AM PDT
BB, I have not evaded the question, I have pointed out that our candidate technology to do so, creation of computational substrates driven by mechanical and/or stochastic dynamics and based on material components, is inherently incapable of rational inference. Procreation does not count. KF

PS: Yet again, I call your attention to Reppert:
. . . let us suppose that brain state A [--> notice, state of a wetware, electrochemically operated computational substrate], which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief [--> conscious, perceptual state or disposition] that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions.
kairosfocus
May 31, 2019 03:48 AM PDT
Brother Brian:
If one designer can create something like a human with all that entails (including mental abilities), why can’t another?
Lack of ability, duh. If humans can build cars then why can't other organisms? Because they lack the ability to do so. Brother Brian - ignorantly belligerent, and proud of it.
ET
May 31, 2019 03:40 AM PDT
Brother Brian:
KF, you still haven’t explained why, if a designer created humans that can think, reason, etc., a human designer can’t do the same thing?
We do. Again, it's called "biological reproduction". And other than biological reproduction we just don't know how to do so. Now grow up and get an education- meaning stop being so ignorantly belligerent.
ET
May 31, 2019 03:38 AM PDT
KF
BB, the evidence is, independent, rationally and responsibly free, self-moved [as opposed to . . . x, x+1, . . . in a mechanical and/or stochastic causal chain] minded activity is not the result of a {material] computational substrate.
With respect, you are dodging the question. If humans are designed, and have rational thought, free will, etc., why can’t another designer (ie, humans) reproduce this feat? If you can answer this without invoking your God, I would be interested to hear it. So far, you have used a lot of words, but they essentially boil down to a religious argument (ie, God is the designer).
Brother Brian
May 31, 2019 03:01 AM PDT
BB, the evidence is, independent, rationally and responsibly free, self-moved [as opposed to . . . x, x+1, . . . in a mechanical and/or stochastic causal chain] minded activity is not the result of a [material] computational substrate. Computational substrates are caught up in such causal chains and are inherently non-rational, as Reppert pointed out. So,
if P: we are wholly material entities within the causal chains of a physicalist world,
then Q: we are incapable of genuine reason and responsibility, which ______________________________
R: then immediately undermines our pretensions to be reasoning or moral. On which,
S: even this exchange is not a genuine conversation, it is not rational; it is just a playing out of unconscious programming that deludes us to think we are rational and/or responsible; i.e. reduction to absurd grand delusion. Rationality therefore implies _______________________________
T: that our core being is not just material, that we are amphibians, with mind over matter, we have in us supervisory oracles that are the seats of our freedom to be self-moved, initiating causal agents.
In effect, on pain of grand delusion, we are rational, enconscienced, ensouled [what "ANIMA-lity" actually means], intentional, conscious, self-aware, significantly freely responsible embodied entities. Where, no, I don't buy the poof-magic view that somehow a sufficiently complex computational substrate will somehow have an emergent, rational, responsible, free mind. That is science fiction (or maybe a Gremlins/poltergeist fantasy on steroids . . . or just maybe an opening for a kind of spirituality we don't want to even think about), not serious AI work. The dynamics are still there, rocks have no rationally contemplative dreams. I am rooting for the AI that just may tame the Tokamak plasma through a sort of anticipatory control based on reliable preliminary signs, but I don't imagine we are creating a new rational creature.

When we build computational substrates, unless and until we learn how to create a similar amphibian, we are simply making something caught up within the physical causal chain, not something that is genuinely, rationally and responsibly free. We may mimic chains of reasoning by manipulating symbols and signals, but these inherently are mechanical and/or stochastic dynamical systems, not freely rational entities. This even includes cases where our programming and organisation includes "knowledge-building" or "deep learning" etc. AI's can mimic rational behaviour and expertise in domains where they in effect build in a sufficiently sophisticated supervisory program, but this is not a genuinely free oracle. Where, again, mechanical and/or stochastic manipulation of signals or symbols is categorically not free, self-moved rational inference.

And yes, this implies that evolutionary materialistic scientism is necessarily irrational and amoral. Where, BTW, I am not making either a "scientific" nor a "religious" argument, but a logic of being argument. Kindly, set your anti-theistic biases aside.
The pivotal fact I turn on is that we are manifestly freely and responsibly rational, on pain of self-referential grand delusion and absurdity. Secondly, material computational substrates [mechanically and/or stochastically governed combinations of material components] are by the force of the dynamics so outlined, precisely not rational, responsibly free entities. This last I know personally from having designed and built and worked with such substrates. Thus, I see that there is an ontological -- logic of being -- gap between the two. This leads to the puzzle of our embodiment, which then allows the Smith model to speak: supervisory controller, an extension of the line of thought in adaptive control cybernetic systems. Thus, the inference that there is such a controller in us that is a free supervisory oracle, which is not algorithmic or a computational substrate. Which we can call a mind or better a soul, which has facilities we term mind, conscience, emotions, volition etc. It is that, or reduction to utter grand delusion driven irrationality and amorality; however disguised. KF

PS: Notice, again, Alex Rosenberg, as he begins Ch 9 of his The Atheist’s Guide to Reality:
>> FOR SOLID EVOLUTIONARY REASONS, WE’VE BEEN tricked into looking at life from the inside [--> So, just how did self-aware, intentional consciousness arise on such materialism? something from nothing through poof magic words like "emergence" won't do] . Without scientism, we look at life from the inside, from the first-person POV (OMG, you don’t know what a POV is?—a “point of view”). The first person is the subject, the audience, the viewer of subjective experience, the self in the mind. Scientism shows that the first-person POV is an illusion. [–> grand delusion is let loose in utter self referential incoherence] Even after scientism convinces us, we’ll continue to stick with the first person. But at least we’ll know that it’s another illusion of introspection and we’ll stop taking it seriously. We’ll give up all the answers to the persistent questions about free will, the self, the soul, and the meaning of life that the illusion generates [–> bye bye to responsible, rational freedom on these presuppositions]. The physical facts fix all the facts. [--> asserts materialism, leading to . . . ] The mind is the brain. It has to be physical and it can’t be anything else, since thinking, feeling, and perceiving are physical process—in particular, input/output processes—going on in the brain. We [–> at this point, what "we," apart from "we delusions"?] can be sure of a great deal about how the brain works because the physical facts fix all the facts about the brain. The fact that the mind is the brain guarantees that there is no free will. It rules out any purposes or designs organizing our actions or our lives [–> thus rational thought and responsible freedom]. It excludes the very possibility of enduring persons, selves, or souls that exist after death or for that matter while we live.>>
kairosfocus
May 31, 2019 02:46 AM PDT
Hazel
Multiple-designer theory! That would explain a lot. Beetles, for instance! :-)
The beetles were a great band. ;)
Brother Brian
May 30, 2019 11:38 PM PDT
KF, you still haven’t explained why, if a designer created humans that can think, reason, etc., a human designer can’t do the same thing? Is it because we aren’t God? If that is the case, which most of your argument boils down to, then you are making a religious argument, not a scientific one.
Brother Brian
May 30, 2019 11:36 PM PDT
PS: Plato's provocative point, in The Laws, Bk X:
Ath. Nearly all of them
[= the materialistic sophists of his day, who considered that "that fire and water, and earth and air [[i.e the classical "material" elements of the cosmos], all exist by nature and chance, and none of them by art, and that as to the bodies which come next in order-earth, and sun, and moon, and stars-they have been created by means of these absolutely inanimate existences. The elements are severally moved by chance and some inherent force according to certain affinities among them-of hot with cold, or of dry with moist, or of soft with hard, and according to all the other accidental admixtures of opposites which have been formed by necessity. After this fashion and in this manner the whole heaven has been created, and all that is in the heaven, as well as animals and all plants, and all the seasons come from these elements, not by the action of mind, as they say, or of any God, or from art, but as I was saying, by nature and chance only . . . "]
, my friends, seem to be ignorant of the nature and power of the soul [[ = psuche], especially in what relates to her origin: they do not know that she is among the first of things, and before all bodies, and is the chief author of their changes and transpositions. And if this is true, and if the soul is older than the body, must not the things which are of the soul's kindred be of necessity prior to those which appertain to the body? . . . when one thing changes another, and that another, of such will there be any primary changing element? How can a thing which is moved by another ever be the beginning of change? Impossible. But when the self-moved changes other, and that again other, and thus thousands upon tens of thousands of bodies are set in motion, must not the beginning of all this motion be the change of the self-moving principle? . . . . self-motion being the origin of all motions, and the first which arises among things at rest as well as among things in motion, is the eldest and mightiest principle of change, and that which is changed by another and yet moves other is second. [[ . . . .] Ath. If we were to see this power existing in any earthy, watery, or fiery substance, simple or compound-how should we describe it? Cle. You mean to ask whether we should call such a self-moving power life? Ath. I do. Cle. Certainly we should. Ath. And when we see soul in anything, must we not do the same-must we not admit that this is life? [[ . . . . ] Cle. You mean to say that the essence which is defined as the self-moved is the same with that which has the name soul? Ath. Yes; and if this is true, do we still maintain that there is anything wanting in the proof that the soul is the first origin and moving power of all that is, or has become, or will be, and their contraries, when she has been clearly shown to be the source of change and motion in all things? Cle. 
Certainly not; the soul as being the source of motion, has been most satisfactorily shown to be the oldest of all things.
kairosfocus
May 30, 2019 09:17 PM PDT
BB (& attn, H): The problem is ontological, as noted by Searle in the cite in the OP; kindly, read. Physical computation substrates are blindly processing signals, they are not operating on meanings. There is something there in our internal oracles that is not working in the way a computational substrate (refined rock) works. Indications are, per core characteristics, it is not of the same character . . . in effect, this points to mind over matter. A computational substrate is mechanically and/or stochastically governed, it simply is not constituted of "stuff" that gives it morally governed self-moved freedom to follow, understand, purpose, decide, will . . . be a first, initiating cause as Plato points out. And without genuine freedom, the rational credibility of mind collapses in self-referential discredit. KF

PS: Let's roll the tape:
The Chinese Room shows that a system, me for example, could pass the Turing Test for understanding Chinese, for example, and could implement any program you like and still not understand a word of Chinese. Now, why? What does the genuine Chinese speaker have that I in the Chinese Room do not have?

The answer is obvious. I, in the Chinese room, am manipulating a bunch of formal symbols; but the Chinese speaker has more than symbols, he knows what they mean. That is, in addition to the syntax of Chinese, the genuine Chinese speaker has a semantics in the form of meaning, understanding, and mental contents generally.

But, once again, why? Why can’t I in the Chinese room also have a semantics? Because all I have is a program and a bunch of symbols, and programs are defined syntactically in terms of the manipulation of the symbols.

The Chinese room shows what we should have known all along: syntax by itself is not sufficient for semantics. (Does anyone actually deny this point, I mean straight out? Is anyone actually willing to say, straight out, that they think that syntax, in the sense of formal symbols, is really the same as semantic content, in the sense of meanings, thought contents, understanding, etc.?)

Why did the old time computationalists make such an obvious mistake? Part of the answer is that they were confusing epistemology with ontology, they were confusing “How do we know?” with “What it is that we know when we know?” This mistake is enshrined in the Turing Test (TT).
In case you think he is setting up a strawman caricature, here is Zenon W. Pylyshyn in his "foundational" Computation and Cognition: Toward a Foundation for Cognitive Science:
One of the central proposals that I examine is the thesis that what makes it possible for humans (and other members of the natural kind informavore ) to act on the basis of representations is that they instantiate such representations physically as cognitive codes and that their behavior is a causal consequence of operations carried out on these codes. Since this is precisely what computers do, my proposal amounts to a claim that cognition is a type of computation . Important and far -reaching consequences follow if we adopt the view that cognition and computation are species of the same genus .
The point Reppert makes keeps on being systematically overlooked:
. . . let us suppose that brain state A [--> notice, state of a wetware, electrochemically operated computational substrate], which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief [--> conscious, perceptual state or disposition] that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions.
kairosfocus
May 30, 2019, 09:04 PM PDT
Multiple-designer theory! That would explain a lot. Beetles, for instance! :-)
hazel
May 30, 2019, 04:50 PM PDT
KF
BB, the limitations in question do not trace to technology but to the physics of large numbers of atomic particles and thence to the implied logic of structure and quantity.
But that doesn’t change the fact that humans, reasoning beings with free will (perhaps), were designed by a designer. If one designer can create something like a human with all that entails (including mental abilities), why can’t another?
Brother Brian
May 30, 2019, 04:44 PM PDT
It would be one heck of a program that could allow computers to think like humans. It would be one heck of a computer, too.
ET
May 30, 2019, 11:50 AM PDT
DS, I speak of an oracle as a non-algorithmic source of meaningful information in a relevant context, i.e. an original source, not mechanically or stochastically determined. A significantly free source of meaningful information. KF
kairosfocus
May 30, 2019, 11:28 AM PDT
KF,
Indeed, the programmer is an oracle
That seems like a different definition of "oracle" than the one I am using.
daveS
May 30, 2019, 10:02 AM PDT
DS, I am pointing out that the computational problem involves supervisory oracles, once we move up to actual intelligence in, say, a bio-cybernetic entity, per Smith. Such oracles solve the search problem that computation faces on physics -- statistical issues -- of large config spaces. Indeed, the programmer is an oracle, as would be the expert who provides domain knowledge for an expert system. Those are canned oracles, and for example would be present in weak/narrow AI systems, such as a pseudo-person being interviewed or presenting as, say, a Si-skin robot, for conversation. Or what are now being called deep fakes, where in effect a face can be pinned over an underlying reference person, e.g. bringing Mona Lisa to life. My concern is general or strong AI, which involves multiple domains and either a grand oracle or some sort of network of oracles, some supervisory, to provide golden searches. Obviously we could in principle design up to a certain level, but will run out of resources. So, I would see an inability to be agile and substantial, as a real person is. As to the notion that such could come about spontaneously through blind chance and/or mechanical necessity, that is simply not credible. KF
kairosfocus
May 30, 2019, 09:48 AM PDT
KF,
My point is, the hypothetical is practically impossible. What I showed in outline is that we may have domain systems that will achieve human like achievement or better but not plausibly a general oracle that solves the global solution-search challenge and then instructs a general fab to implement on an accessible and organised catalogue of components.
Hm, perhaps we are not in disagreement. I'm just talking about AI that is indistinguishable from a human, not anything to do with oracles. An AI that you could see on a computer monitor and converse with, without being able to tell it's not human. For example, consider building/training an AI simulation of a famous person, say the actor James Stewart circa the 1950s, that could engage in real-time conversation well enough to fool people who are familiar with his life, so that recordings of these conversations could pass as "lost interviews", say.
daveS
May 30, 2019, 06:56 AM PDT
BB, the limitations in question do not trace to technology but to the physics of large numbers of atomic particles and thence to the implied logic of structure and quantity. Thus, the issues raised by Walker and Davies. Search challenge in large config spaces is a real issue, and it leads to pervasive fine tuning. Thus, we see the significance of supervisory oracles and thence of search for search. Computational entities cannot escape these constraints, and notice the computational substrate I refer to is the Sol system, our practical universe for chemical energy level atomic interactions. KF
kairosfocus
May 30, 2019, 06:47 AM PDT
Brother Brian:
I have little doubt that we will eventually have computers that will be indistinguishable from humans with regard to reasoning, abstract thinking, free will, etc.
So what? I doubt very much that will ever happen. And I base my doubt on knowledge of what it would take to accomplish such a thing. Knowledge that you clearly do not have.
ET
May 30, 2019, 06:20 AM PDT
KF
BB, the assumption that we can do that is what is challenged, for cause.
I have learned two things in life. Never leave the toilet lid up and never underestimate what technology can do. I have little doubt that we will eventually have computers that will be indistinguishable from humans with regard to reasoning, abstract thinking, free will, etc. The bigger question is, when we get to that point, will we grant them the rights that we enjoy? My guess would be, no. There will always be those who will claim that it is all an illusion.
Brother Brian
May 30, 2019, 06:10 AM PDT
DS, hypotheticals work by taking an antecedent as if it were true; thus the antecedent is assumed within the structure. My point is, the hypothetical is practically impossible. What I showed in outline is that we may have domain systems that will achieve human-like achievement or better, but not plausibly a general oracle that solves the global solution-search challenge and then instructs a general fab to implement on an accessible and organised catalogue of components. All along the way we have huge bases of FSCO/I to be searched to hit islands of function. Recall, search for shorelines of function dominates hill climbing to higher performance from a shoreline of function. As Walker and Davies highlight, the fine tuning challenge is a global, general-principles-rooted problem based on the essential structure and quantity of contingency. Where, WLOG, the bit space search challenge in a given context is effectively the same challenge. KF
kairosfocus
May 30, 2019, 06:06 AM PDT
Brother Brian,
But if we get to the point where we can’t distinguish between human thought and computer thought, how do we know that they are not capable of rational, insightful contemplation? Surely it is not simply because they are designed. After all, you believe that humans are designed.
That is a very interesting question. [And note, KF, I'm not assuming anything, BB's post is predicated on a conditional statement]. And whether we ever get to that point appears to be an empirical question. Further, there is now significant pressure not to engage in AI research which appears to be too "dangerous", which could have a stifling effect on this research, so perhaps we will never actually get to that point.
daveS
May 30, 2019, 05:51 AM PDT
F/N: Wiki on Weak form AI: >> Weak artificial intelligence (weak AI), also known as narrow AI,[1][2][3] is artificial intelligence that is focused on one narrow task. Weak AI is defined in contrast to strong AI (a machine with the ability to apply intelligence to any problem, rather than just one specific problem, sometimes considered to require consciousness, sentience and mind).[citation needed] Many currently existing systems that claim to use "artificial intelligence" are likely operating as a weak AI focused on a narrowly defined specific problem. Siri is a good example of narrow intelligence. Siri operates within a limited pre-defined range of functions. There is no genuine intelligence or self-awareness, despite being a sophisticated example of weak AI. Siri brings several narrow AI techniques to the capabilities of an iPhone.[4] AI researcher Ben Goertzel, on his blog in 2010, stated Siri was "VERY narrow and brittle", evidenced by annoying results if you ask questions outside the limits of the application.[5] Some commentators think weak AI could be dangerous because of this "brittleness" and fail in unpredictable ways. Weak AI could cause disruptions in the electric grid, damage nuclear power plants, cause global economic problems, and misdirect autonomous vehicles.[6] In 2010, weak AI trading algorithms led to a “flash crash,” causing a temporary but significant dip in the market.[7] >> Note above the challenge of the higher order oracle needed to supervise a cluster of weak AI units sitting on top of the biocybernetic entity or general cybernetic entity. The search challenge is exponentially harder at each level. First order search for a 500 bit scale problem swamps Sol system resources. A 1000 bit component even more decisively swamps observed-cosmos scope resources.
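The resource comparison just stated can be checked with a few lines of arithmetic. The following is an illustrative sketch only, using the round figures the comment itself assumes (10^57 Sol system atoms, ~10^80 atoms in the observed cosmos, ~10^17 s, ~10^14 operations per second per atom); these are the commenter's stated assumptions, not measured values.

```python
# Round figures taken from the comment above (assumptions, not measurements)
SOL_ATOMS = 10**57
COSMOS_ATOMS = 10**80
SECONDS = 10**17          # ~ time elapsed to date, in seconds
OPS_PER_SEC = 10**14      # fast chemical-reaction-scale event rate, per atom

sol_ops = SOL_ATOMS * SECONDS * OPS_PER_SEC        # 10^88 operations
cosmos_ops = COSMOS_ATOMS * SECONDS * OPS_PER_SEC  # 10^111 operations

for bits in (500, 1000):
    space = 2**bits  # number of distinct configurations at this bit depth
    # Fraction of the config space each resource budget could even enumerate
    print(bits, sol_ops / space, cosmos_ops / space)
```

At 500 bits the Sol system budget covers roughly 10^-63 of the space; at 1000 bits even the cosmos-scale budget covers only about 10^-190 of it, which is the "swamping" the note describes.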
These are fundamental issues; fine tuning is everywhere, and bounded rationality is a known problem for us, much less bounded computational power i/l/o the search challenge on atomic and temporal resources. KF

PS: Every month as I go pay a bill, I question the Alexa there by the office; I will not put that spy in my own space. Alexa, routinely, gets stumped.
kairosfocus
May 30, 2019, 05:32 AM PDT
BB, the assumption that we can do that is what is challenged, for cause. Again, IF is a huge word. In limited domains or clusters, we will likely get weak form AIs that will succeed in capturing expertise or spotting patterns like signs of incipient plasma breakdown, but the general wisdom challenge is the real issue; especially in cases where rapid, highly agile, sound judgement on the non-routine . . . rules that worked hitherto break . . . is critical, i.e. the OODA loop is at work and novelty and surprise are major factors. The sign is that we have an independent, rationally and responsibly free, morally governed intelligent oracle that is working not on computation but on morally framed understanding -- wisdom. In short, I point to Leibniz's insight that we are dealing with a different order of being, one not constrained by the dynamics of computational substrates. We further have good reason to infer that such a general intelligence is a search challenge on computational resources, and that a computational substrate, being FSCO/I rich, is itself an index of design. KF
kairosfocus
May 30, 2019, 05:12 AM PDT
KF
The fundamental problem is as I noted above, and as Reppert so succinctly pointed out: computation is categorically distinct from rational, insightful contemplation.
But if we get to the point where we can’t distinguish between human thought and computer thought, how do we know that they are not capable of rational, insightful contemplation? Surely it is not simply because they are designed. After all, you believe that humans are designed.
Brother Brian
May 30, 2019, 04:35 AM PDT
BB (& AS), re: creating a computer that thinks like we do would be a breakthrough for ID. Much like KF’s claim that synthesizing a genome is an ID breakthrough. (And genome-scope synthesis is a real, intelligently designed breakthrough relevant to how we should -- but don't -- govern research programmes and education relevant to studies on OoL and Oo body plans, including our own. Ideological imposition of a priori materialism is exposed as untenable.) IF is a very big word. The fundamental problem is as I noted above, and as Reppert so succinctly pointed out: computation is categorically distinct from rational, insightful contemplation. The demonstration is direct, the logic is simple. Computational substrates are mechanically and/or stochastically governed; rational contemplation must be inherently free, insight driven and morally governed. As a result, their characteristics and capabilities are radically different. Let us consider the task, within Sol system resources [10^57 atoms, 10^17 s to date, 10^12 - 10^15 chem rxns per s], of composing an arbitrary, relevant 500 ASCII character string that is meaningful and responsive to a given arbitrary set task, say by printing text in English [~ 72 characters, 1/2 the length of the older tweet] or computer code. 500 bits defines a config space of 3.27*10^150 possibilities, involving every such string there can be; all longer responses would be by concatenating strings from this set in suitable, functional patterns. BTW, this includes composing comments in this thread. Taking 10^14 as a good upper bound for organic reaction rates, and ignoring how H dominates the composition of the Sol system, in 10^17 s, in effect a number of coins to be flipped . . . or a paramagnetic substance in a weak aligning field storing the equivalent . . . equal to the number of atoms would carry out 10^(17 + 57 + 14) = 10^88 operations, utterly negligible relative to the space to be searched.
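The arithmetic in the paragraph above is easy to reproduce; here is a sketch using Python purely as a calculator, with the comment's own round figures taken as assumptions:

```python
import math

configs = 2**500                    # size of the 500-bit config space
ops = 10**(17 + 57 + 14)            # Sol system operations budget: 10^88

print(f"{configs:.3e}")             # ~3.273e+150 possibilities
print(ops / configs)                # fraction searchable: ~3e-63, negligible

# The thread's "search for search" figure: searches are subsets of the
# space, so there are 2^configs of them; that number's common log is
print(f"{configs * math.log10(2):.2e}")   # ~9.85e+149
```

The last figure is why search for search is exponentially harder than direct search: the count of possible searches is a number whose common logarithm is itself on the order of 10^150, far beyond direct calculation.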
Blind mechanical necessity and/or chance are not a feasible device, as this space is dominated by gibberish and functional configurations are going to be in sparse, isolated islands of relevant functionality. And yet we routinely rapidly compose such strings. So, let us consider that somehow there may be a magic bullet, golden search that drastically reduces the challenge. The problem here is, searches are subsets sampled from the space. So, for a set of scope n, there are 2^n searches, the power set. When I went to an online big number calculator and asked it to directly calculate the value for a 500 bit config space, it said that it could not calculate a number that big. The common log of the number is ~ 9.84*10^149. Search for a golden search is exponentially harder than direct search. Where of course, a golden algorithm or mechanical and/or stochastic arrangement of the atoms of the sol sys to get such golden searches consistently is an even more futile task. Most arbitrary tasks of interest -- the general AI challenge -- are practically uncomputable from arbitrary start-points. That is, computational processes as a practical matter will be fine tuned to perform a particular range of tasks. Which requires a knowledgeable fine tuner to create an appropriate, functionally specific configuration. We are back to the practical need to design a computational substrate. Of course, properly designed software on well chosen tasks easily outperforms us, including the deep learning scheme for weak form AI that is all over our headlines. And we know from experience that we need to learn considerable background to carry out effective solution-finding in a domain of skilled expertise. I focus on the linguistic task as output may either directly answer or be the instruction code for some universal fabricator we may conceive of. Zooming out, Walker and Davies show this is very general:
In physics, particularly in statistical mechanics, we base many of our calculations on the assumption of metric transitivity, which asserts that a system’s trajectory will eventually [--> given "enough time and search resources"] explore the entirety of its state space – thus everything that is physically possible will eventually happen. It should then be trivially true that one could choose an arbitrary “final state” (e.g., a living organism) and “explain” it by evolving the system backwards in time choosing an appropriate state at some ’start’ time t_0 (fine-tuning the initial state). In the case of a chaotic system the initial state must be specified to arbitrarily high precision. But this account amounts to no more than saying that the world is as it is because it was as it was, and our current narrative therefore scarcely constitutes an explanation in the true scientific sense. We are left in a bit of a conundrum with respect to the problem of specifying the initial conditions necessary to explain our world. A key point is that if we require specialness in our initial state (such that we observe the current state of the world and not any other state) metric transitivity cannot hold true, as it blurs any dependency on initial conditions – that is, it makes little sense for us to single out any particular state as special by calling it the ’initial’ state. If we instead relax the assumption of metric transitivity (which seems more realistic for many real world physical systems – including life), then our phase space will consist of isolated pocket regions and it is not necessarily possible to get to any other physically possible state (see e.g. Fig. 1 for a cellular automata example).
[--> or, there may not be "enough" time and/or resources for the relevant exploration, i.e. we see the 500 - 1,000 bit complexity threshold at work vs 10^57 - 10^80 atoms with fast rxn rates at about 10^-13 to 10^-15 s leading to inability to explore more than a vanishingly small fraction on the gamut of Sol system or observed cosmos . . . the only actually, credibly observed cosmos]
Thus the initial state must be tuned to be in the region of phase space in which we find ourselves [--> notice, fine tuning], and there are regions of the configuration space our physical universe would be excluded from accessing, even if those states may be equally consistent and permissible under the microscopic laws of physics (starting from a different initial state). Thus according to the standard picture, we require special initial conditions to explain the complexity of the world, but also have a sense that we should not be on a particularly special trajectory to get here (or anywhere else) as it would be a sign of fine-tuning of the initial conditions. [--> notice, the "loading"] Stated most simply, a potential problem with the way we currently formulate physics is that you can’t necessarily get everywhere from anywhere (see Walker [31] for discussion). ["The “Hard Problem” of Life," June 23, 2016, a discussion by Sara Imari Walker and Paul C.W. Davies at Arxiv.]
I am of course fundamentally shaped by this discipline and that's why I approach the matter much as they do. Any arbitrary 3-d configuration may be described by a suitable string of y/n questions in some description language. This is what things like AutoCAD are about. So the bit string challenge and linked mathematics of binomial distributions are WLOG. This is also why any config can be reduced to an effective info equivalent. Take this as, a bill of components joined to assembly instructions for a suitable universal fab. Therefore, the challenge is ill-posed. While computational substrates are possible and can perform impressive tasks, the properties of general rational, responsible intelligent action are fundamentally non computational. Or rather, manifest (per Smith model as discussion f/w cf. OP as updated) an oracle supervising the observable cybernetic loop with a computational substrate. To get the performance you need the oracle, not just the cybernetic loop taken as a cut-down universal fab. (We, with technology we first assemble and organise, can do many things.) If you try to program another computational substrate to mimic the oracle, you are back to the same challenge, noting that this is a general oracle. There is no problem in composing a domain specific expert system, the task is the general oracle able to act effectively in arbitrary situations. This includes when one clusters oracles and has to have a tier 3 supervisory oracle to search for the right oracle. Moral government compounds the matter, computational substrates just are, they are programmed (including mechanical and/or stochastic elements), they are not executing free, rational, ought-based moral choice. It is the designer who handled the morality. Hence BTW Asimov's laws of robotics as he pondered an advanced civilisation trying to create robots. R Daneel Olivaw is the classic case, and in his latest iteration abandoned the positronic brain for a human-like one. 
Even such a robot is not like us. Gotta go now. KF
kairosfocus
May 30, 2019, 02:55 AM PDT
AaronS1978, I think you make a good point. If we are capable of creating a computer that has thought processes indistinguishable from those of humans, which I think is only a matter of time, does that mean that it has free will, or that we don’t? Does it have a soul, or do we not? From an ID perspective, I would think that creating a computer that thinks like we do would be a breakthrough for ID. Much like KF’s claim that synthesizing a genome is an ID breakthrough.
Brother Brian
May 29, 2019, 11:33 PM PDT