Uncommon Descent Serving The Intelligent Design Community

Can One Computer “Persuade” Another Computer?


In a comment to a prior post, StephenB raises some interesting questions:

(1) Free will requires the presence of a non-material mind independent of the brain. (2) A non-material mind independent of the brain indicates free will.  . . .  In philosophy, [this type of proposition] is known as a bi-conditional proposition, which means, If A/then B. Also, If B/then A.  Usually, that pattern does not hold in logic, but it does hold here. [If one disavows] the existence of the mind, it is time to make the corresponding assertion about volition—go ahead and reject free will and complete the cycle.  Take the final step and concede that all of our attempts to persuade each other are futile.  We are nature’s plaything, and the laws of nature operating through our “brain” dictate our every move.

Given [the materialist’s] perception of reality, why [does he] bother to raise objections at all [to the proposition that mind exists independently of the brain].  If your world view is true, then [all the commenters] on this blog do what we do only because fate requires it of us. We are, for want of a better term, determined to think and act as we do.  Since we have no volitional powers, why do you appeal to them?  Why raise objections in an attempt to influence when it has already been established that only non-material minds can influence or be influenced? Why propose a change of direction when only intelligent agencies have the power to do that?  Since brains are subject to physical laws of cause and effect, they cannot rise above them and, therefore, cannot affect them.  Brains cannot influence brains.  Why then, do you ask any of us to change our minds when, in your judgment, there are no minds to change?

Surely we all agree that the output of a computer is utterly determined in the sense that the output can be reduced to the function of the physical properties of the machine.

Note that this does not mean that the output of a computer is always predictable.  “Determined” is not a synonym for “predictable.”  An event may be completely determined and utterly unpredictable at the same time.  In other words, it might be “determined” and also “indeterminate.”  Example:  Say a bomb explodes.  It is impossible to predict where any particular piece of the bombshell will land.  Therefore, where that piece will land is indeterminate.  Nevertheless, where it winds up landing is purely a function of the laws of nature, and is in that sense determined.
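This "determined yet unpredictable" point has a familiar computational illustration: chaotic systems. The sketch below is my own toy example, not from the post (the function name and starting values are invented). It iterates the logistic map, a completely fixed rule, so every output is fully determined; yet two starting points differing by one part in a billion soon diverge so thoroughly that long-range prediction is hopeless in practice.

```python
def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the deterministic rule x -> r*x*(1-x) and return the final value."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# Same rule, same input, same number of steps: the process is fully
# determined, so re-running it always reproduces the same answer.
a = logistic_trajectory(0.200000000, 60)

# A one-part-in-a-billion change to the input gives a wildly different
# answer after 60 steps: determined, but practically unpredictable.
b = logistic_trajectory(0.200000001, 60)
```

Rerunning `logistic_trajectory(0.2, 60)` always returns the identical value, which is the sense of "determined" used above; the divergence between `a` and `b` is the sense in which the outcome is nonetheless indeterminate to any observer who cannot measure the input exactly.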

Now assume we have two computers that can communicate in machine code across a cable.  Assume further that the computers are assigned the task of coming to a conclusion about the truth or falsity of a particular proposition, say “The best explanation for the cause of complex specified information X (“CSI-X”) is that CSI-X was produced by an intelligent agent.”   Say computer A is programmed to do two things:

1. Respond “true” to this proposition.

2. Communicate a list of facts and arguments its programmers believe support this statement.

Here’s the interesting question.  Can computer A “persuade” computer B to accept the “true” statement?

The answer, it seems to me, is obvious:  No. 

Computer B’s output is completely determined.  It has no free will. It has no “mind” that may be persuaded.  The facts and arguments communicated to it by computer A  trigger a subroutine that produces the output “yes it is true” or “no it is false.”  The result of that computation is utterly determined in the sense that it is reducible to the operation of computer B’s software and hardware.  Computer B has no meaningful choice as to how to respond to the information provided to it by computer A.
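The exchange described above can be sketched in a few lines. This is only a toy model (the function names, the fact list, and the threshold rule are all invented for illustration): computer B's "verdict" is a pure function of the message and of B's fixed program, so the same input always produces the same output, which is the sense in which nothing here is persuaded.

```python
def computer_a():
    """A is programmed to assert the proposition and transmit supporting items."""
    return {
        "proposition": "CSI-X was produced by an intelligent agent",
        "facts": ["fact-1", "fact-2", "fact-3"],  # placeholder evidence list
    }

def computer_b(message, threshold=2):
    """B's fixed subroutine: output 'true' iff the fact count meets a preset threshold."""
    return "true" if len(message["facts"]) >= threshold else "false"

msg = computer_a()
# Run the "dialogue" twice: identical input plus a fixed rule yields an
# identical verdict every time; B has no choice in the matter.
first = computer_b(msg)
second = computer_b(msg)
```

Whatever rule is substituted for the threshold test, however elaborate, the point stands: the output remains reducible to the operation of B's software and hardware.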

This brings us back to StephenB’s questions.  If the brain is nothing more than an organic computing machine, why do materialists bother to try to persuade us of anything? 

Comments
KF, yeah, I've got Scotsman blood in me too. But I think we can be done here.

Q
January 31, 2008 at 11:57 AM PDT
PS: Citing that wiki article Q linked above:
This form of argument is an informal fallacy if the predicate . . . is not actually contradictory of the accepted definition of the subject, or if the definition of the subject is silently adjusted after the fact to make the rebuttal work.
I would believe that the Am. Heritage Dictionary def'n cited above and linked at 143 counts as an example of the "accepted" def'n of persuasion. Nor are we changing the definition after the fact to exclude inconvenient facts! (Of course, I have taken out the insult to my ancestors from the excerpt.)

kairosfocus
January 31, 2008 at 09:13 AM PDT
Okay, Q: That's quite enough:
[Q, 140:] Look up the “No true Scotsman” logical fallacy, a type of question begging
Them's fighting words, pardnuh: the Afro-Scots true-blue blood of heroes runs in my veins – that of National Heroes and martyrs of Jamaica. (The Irish blood, the English blood and the Indian blood join in too! Off in the cheering galleries, some French ghosts, circa 1916 join in: “They shall not pass!”) I am standing at the neck of the pass, tartan clad, and I am brandishing the grand old, ever sharp claidheamh mòr! [And cf here Heb 4:12!] “A Gordon! A Gordon!” [The war cry of an ancestral clan.] Let's see what stuff a true Afro-Scotsman is made of! (In short, do you see the fallacy in tossing out turnabout accusations of question-begging, when in fact demonstrably [cf above] it is YOU who are trying to arbitrarily shift and corrupt the meaning of language, e.g. the meaning of words like persuade? As Unlettered has just aptly corrected you on?) FYI, the No True Scotsman fallacy only obtains when non-essential characteristics are being used to try to rule out of a set those who properly belong. Computers, as currently constituted are simply not persons, period, and can no more be persuaded nor persuade than they can think and decide for themselves. And as for pusillanimous cowards who claim to be Scotsmen by virtue of merely having been misbegotten and born to a family of the stock; well, they aren't going to be around very long to continue making such claims – not if REAL Scotsmen and their descendants have anything to say and do about it! (Trust an Englishman to make the point of being a true blue Scotsman into a fallacy!) Okay, on points of brief note: 1] The double-blind test I asked about was with regard to observed behaviors. That is the nature of longstanding traditional AI tests. And I pointed out that the test – which I directly stated can perhaps be passed in certain narrowly defined and contrived settings -- is mis-conceived, and showed why. 
One can program and mimic intelligent behaviour in narrow domains indeed, but this is worlds apart from the task in hand: persuasion and dissuasion, as Unlettered aptly pointed out. That is not “evasion” -- a highly loaded term with connotations of dishonesty and dishonourable conduct -- on any reasonable meaning of the term. You are wrong-headed and, if you insist on such abusive terms, wrong-hearted.

2] The question is not at all whether there is somebody in some given body. The question is whether this given body can perform tasks analogous to those performed by a different body.

Not at all, it is whether one computer may PERSUADE another -- cf title and OP. That is not a traditional so-called AI test. In short, it is you who have tried a bait and switch, and we are not biting on it.

3] The double-blind test being suggested is not meant to “truly capture the behavior of people”. Instead, it would illustrate whether computer behavior can be indistinguishable from the people’s behavior . . . . This is how we avoid tautological semantic arguments, like an argument which says “persuasion” is defined to relate to only inter-personal communication, so persuasion refers to only persons, computers aren’t persons, so computers can’t persuade because tautologically, “computers can’t do what persons do.”

First, you have again insistently misrepresented what I have said: I have stated that, so far as I can see, in principle whether computers or robots etc. can be made to have the ability to persuade or the like is an OPEN question for empirical research, and that the ID programme opens the door to such investigation. Kindly cease and desist from distorting what I have explicitly and repeatedly said. Next, you are again doing the bait and switch. Persuasion, as Unlettered points out, is not a narrowly defined toy process that can be algorithmised. It is a broad characteristic of known persons, and it is known to be directly and inextricably connected to powerful core characteristics of persons. When -- and if -- computers or robots get to the R. Daneel level, they would then be able to persuade or be persuaded, but not until then. Nor am I speaking empty tautologies -- despite your insistent assertions in the fallacy of confident manner. Being a person is an empirically investigate-able matter, not a matter of uninformative statements of identity. Once you see real persons in action, then consider the characteristics, you will see that to reject my point is to end in absurdity. As you have.

4] it may not be true that a body which exhibits intention, self-direction, adaptation, or judgement is necessarily the same as an intelligent agent. It may still be an automaton - like a Mars rover.

FYI, the Mars Rover is not a self-directing, intentional entity. It is carrying out instructions from NASA, as loaded into its firmware etc. It may make pre-programmed if-then responses and may even have a supervisory-level director that allows some adaptation, but it is simply not at the level of a person. And, it is you who have here grossly confused categories that are evident to anyone with a modicum of common sense, even as you accuse me of the No True Scotsman fallacy. Thus revealing the self-evident nature of the case.

5] Unlettered, 141: Q - please…

Thanks for a spirited, dead-on-target response. You'll do to hold the pass with, back-to-back, swords drawn; bro! “A Gordon! A Gordon!”

GEM of TKI
kairosfocus
January 31, 2008 at 06:20 AM PDT
Greetings! Q - please... The list I provided was sufficient: no computer process could ever accomplish it, so I did not have to provide the rest. But since you want it, fine.

It must be able to change its analysis process based on its database. It must be able to generate "new" data based on old data. It must be able to reject data in favor of a particular opinion because it liked it. It must be able to become attached, or attracted, to things. It must be able to abandon logic during the persuasion process. It must be able to reject fact in favor of a colorful argument. Both persuasion and dissuasion are part of everyday life for people, so dissuasion must also be possible for the computer. It must be able to persuade/dissuade itself.

Persuasion has results; effects of change -- change in motivations, intentions and actions. These too must be included in the processes of persuasion. Persuasion does not violate free will, but is an influence that can be rejected or accepted. The same must be true for the computer. But of course there's that free will thing. Persuasion may also take place without awareness of its source, but it can still be rejected or accepted.

BTW, do computers spontaneously reprogram their operating systems? Worldview = operating system: just as a worldview is the platform for the actions of humans, so an operating system is for a computer. Persuasion must be able to build a "new" operating system, or adapt the "old" operating system -- all by itself. As I understand things, in reality trained programmers build operating systems and programs, and set the parameters of computers. Can computers do these things by themselves? If they can, can they do it based on a rogue transmission of data that may or may not be true? If they can do all of the things listed, but are not self-aware, I might apply the term persuasion to computers as a process.

BTW, I do not say gravity persuades me to stay on earth. Or that the water in my body is persuaded to circulate. Or that high pressure is persuaded to go to low pressure. Or that wind persuades sailboats to move. Or that heat persuades water to evaporate. All of these are processes that involve action/reaction or cause and effect.

Computer A sends a transmission to Computer B. Computer B analyses Computer A's transmission. Computer B confirms the data from Computer A to be true or false based on its preset parameters. (Action/reaction, cause/effect.) Persuasion is more than just cause/effect, action/reaction. In the case of the computers, all the actions and reactions, causes and effects, are fixed. Persuasion does not have fixed actions/reactions, cause/effect. The process of persuasion might cause dissuasion, where the effects are the opposite of the purposed agenda. (Dang -- "agenda" implies intent, also part of the processes of persuasion.)

Before you claim a computer can do it, analyse real persuasion between real people and reduce their actions to system processes. You started with the computer system process and worked from there. kairosfocus and I started with people and their actions and worked from there. Who do you think is going to have a clearer understanding of persuasion systems? A person who applies computer processes to known human action, or a person who applies known human actions to a computer process? It seems to me that if you start with the computer processes, it is as logical as trying to build a roof on a house without having built a foundation, floors, walls, etc.

Unlettered and Ordinary
January 30, 2008 at 03:32 PM PDT
Wow, I had a junky edit above.

“We set up a test to see behaviors, like a double-blind study, and then see if the behaviors results correlate to what the properties of the concept being investigated.”

should be

“We set up a test to see behaviors, like a double-blind study, and then see if the test results correlate to the properties of the concept being investigated.”

Q
January 30, 2008 at 01:09 PM PDT
KF: what part of the following is evasive or unclear -- as opposed to pointing out the key, longstanding gap in the traditional AI tests? Where you included in your answer the I-ness aspect:

“But IMHBCO, until and unless one captures BOTH, one has not truly captured ‘the behaviour of people.’”

The double-blind test I asked about was with regard to observed behaviors. That is the nature of longstanding traditional AI tests. It is not about some metaphysical, non-observable portion of a dualist philosophy regarding "I-ness". The "I-ness" question is valid, but in a different domain than observable behaviors. And you did it again, when you followed up with

“Now, how can one tell that there is ‘somebody’ at home in a given body?”

The question is not at all whether there is somebody in some given body. The question is whether this given body can perform tasks analogous to those performed by a different body. Look up the "No true Scotsman" logical fallacy, a type of question begging: http://en.wikipedia.org/wiki/No_true_Scotsman

The double-blind test being suggested is not meant to "truly capture the behavior of people". Instead, it would illustrate whether computer behavior can be indistinguishable from people's behavior. What truly is people's behavior is a different question, perhaps. This is how we avoid tautological semantic arguments, like an argument which says "persuasion" is defined to relate to only inter-personal communication, so persuasion refers to only persons, computers aren't persons, so computers can't persuade because tautologically, "computers can't do what persons do". We set up a test to see behaviors, like a double-blind study, and then see if the behaviors results correlate to what the properties of the concept being investigated.

KF: “I have argued that agency has in it intention, self-direction, ability to adapt to novel, unstructured, non-routine situations with judgement and creativity, etc etc.”

But it is also clear that you are arguing that the inverse holds true merely because your claim of agency is true. This pairing of arguments is wrongly being used to exclude any behaviors that are analogous to intention, self-direction, adaptation, or judgement from bodies unless those bodies also have the property of intelligent agents. Specifically, it may not be true that a body which exhibits intention, self-direction, adaptation, or judgement is necessarily the same as an intelligent agent. It may still be an automaton - like a Mars rover. I.e., claiming that object A has the properties of B, C, and D does not mean that an object with the properties of B, C, and D is an A, unless B, C, and D are the only properties the object has, and are the only properties needed to be an A. In other words, a specific computer which may agreeably be an automaton may still be able to exhibit intention, self-direction, adaptation, or judgement (or other behaviors) as tested in a double-blind experiment and not have all the properties of an intelligent agent. That is, unless one falls into the No True Scotsman trap, and says that isn't truly intention, or self-direction, or adaptation, or judgement because of claims unrelated to the use of those concepts.

Q
January 30, 2008 at 11:22 AM PDT
Okay: Following up a few points on AI, persuasion and persons etc: 1] Q, 131: you are avoiding the double-blind question in 120, regarding behaviors that are indistinguishable from human behavior . . . . Can you show that only objects with the property of an intelligent agent could pass a double-blind test of whether a specfic behavior is human or not Q, kindly explain to me: what part of the following is evasive or unclear -- as opposed to pointing out the key, longstanding gap in the traditional AI tests?
[GEM, 124, point 1] On particular tasks, it may be possible to mimic the externally observable behaviour without having captured the inner, I-ness aspect. But IMHBCO, until and unless one captures BOTH, one has not truly captured “the behaviour of people.” Now, how can one tell that there is “somebody” at home in a given body? Ans: by looking for self-aware, un-programmed, self-directing, creative behaviour in novel unstructured situations, spontaneous and appropriately situationally responsive communication and interactions – and I have in mind emotions, values and ethics too. [Searle’s Chinese Room and Turing’s tests so far as I can see, do not pass this [i.e more representative and realistic] test.]
You will also note that I went on to identify an important, paradigm-shifting case in point: Chomsky on language, which massively helped break down the stranglehold of behaviourism on psychology. And, again note: IT WAS THE FIRST MAJOR PARADIGM-SHIFT LEVEL BREAKTHROUGH OF THE DESIGN INFERENCE. So, in sum, I am pointing out that the Turing test and Chinese room, etc, are all wrong-headed. The characteristic behaviour of agents is not in specific, structured situations that can be reduced to algorithms quite nicely, thank you. It is in non-routine, ill-defined, unstructured situations that call for judgement and creativity that agency shines and leaves pedestrian algorithms in the dust. Indeed, this is long since a known, defining characteristic of professional and/or strategic level disciplines in the world of work. [Well do I remember my thesis adviser, PDS, pointing this out to me as we both stood in line at the old UWI Mona campus bookshop during the AI craze of the 1980's, as we discussed Turing's argument that so soon as a matter is defined, an algorithm for it can be constructed. I gratefully acknowledge the intellectual debt. Thanks sir -- again!] So, I guess the last word on this can be left to the C5 - 4 BC Greek physician, Hippocrates of Cos. [Pardon my butchery of wonderful, terse language by explanatory parentheses]:
"Life is short, [the] art long [you don't have time to learn “enough”], opportunity fleeting [you must act TODAY], experiment treacherous [times and situations change, and experience is not demonstrative proof], judgment difficult. The physician [read that: professional] must not only be prepared to do what is right himself [technical, ethical, and decision-making issues], but also to make the patient [client], the attendants [management issues], and externals [wider community] cooperate.”
2] you have been arguing that certain behaviors require certain agents. Again, you here misunderstand and/or misrepresent what I have argued. I have argued that agency has in it intention, self-direction, ability to adapt to novel, unstructured, non-routine situations with judgement and creativity, etc etc. In that context, I have highlighted the creative use of language (a point highlighted by Chomsky) – which of course generates insightful constructs in appropriate codes that are functionally specified and complex, i.e this is why FSCI is a reliable marker of intelligent agency. And more. What I have emphatically NOT done is to lock down agency to any one form or manifestation. I have considered agents who are contingent or necessary beings as both possible. I have considered wholly embodied agents and agents who blend matter and immaterial mind. I have looked at humans, God, demons, gods, ETs, robots and advanced, possible future AI systems. [The closest I have got to your strawman, is when I have compared the possibility of an Agent as the necessary being responsible for our observed contingent cosmos, by contrast with the proposed quasi-infinite, unobserved multiverse now pushed as an ad hoc way by Dawkins and his ilk to try desperately get around the force of the evidence on anthropic fine-tuning.] I have insisted that agents are able – per a defining, OBSERVED, recognisable, family-resemblance characteristic -- to act appropriately into novel and unstructured situations with creativity, judgement, intentions decisions and even wisdom. That is by no means confined to any one type of agent, human or otherwise. And, I insist, on the most excellent grounds of evidence and the basic meaning of words: only such agents are capable of being “persuaded.” 3] LNF, 132: I’m talking about what science should assume as it proceeds. I’m not talking about knowledge claims in general; I’m talking about the limited domain of science. 
And, sorry to be insistent [and if this offends you further, pardon]: plainly, you have evidently not read with understanding the linked discussions on just what the issues are on demarcation of what science is or is not, and what its proper assumptions are. Nor have you shown any signs of simply reading: actually interacting with the linked material, summarising it, then asking for clarification, then challenging points of difference, with reasons. Instead, you have simply reiterated/implied a tendentious redefinition of science that serves only to truncate its search for empirically anchored truth in the interests of evolutionary materialist agendas. Sorry if it comes across as offensive, but -- given the already-linked material that you don't appear to have interacted with -- that simply does not come across as intellectually serious. [Put it this way: if one of my students insistently, dismissively addressed a point as you have, he would have -- for cause -- got ZERO from me.] Kindly, start here [with onward context-supplying links esp. here], then also go here for a broader basic introduction on the issue of what “science,” properly, is. On definitions, you may wish to consult a couple of high-quality dictionaries for starters:
science: a branch of knowledge [cf. "true, justified belief" -- I would soften to include that often we mean the weaker sense: "credibly (thus: provisionally accepted) true, reliable, well-warranted belief . . ."] conducted on objective principles involving the systematized observation of and experiment with phenomena, esp. concerned with the material and functions of the physical universe. [Concise Oxford, 1990 -- and yes, they used the "z" Virginia!] scientific method: principles and procedures for the systematic pursuit of knowledge [”the body of truth, information and principles acquired by mankind” -- this embraces my weaker sense above] involving the recognition and formulation of a problem, the collection of data through observation and experiment, and the formulation and testing of hypotheses. [Webster's 7th Collegiate, 1965]
4] I didn’t mean to give the impression of selective hyperskepticism that humans are actually conscious self aware beings. Just that we can’t know for absolute certainty using reason, logic and observations of our senses that any persons other than ourselves are truly so. This is like the problem of proving the falsity of solipsism. The highlighted portions should suffice to show why I spoke of selective hyperskepticism. For, humans are finite, fallible, arguably fallen and too often ill-willed and en-darkened in heart and mind. So, we fall under Locke's too often overlooked warning at the beginning of his essay on human understanding:
Men have reason to be well satisfied with what God hath thought fit for them, since he hath given them (as St. Peter says [NB: i.e. 2 Pet 1:2 - 4]) pana pros zoen kaieusebeian, whatsoever is necessary for the conveniences of life and information of virtue; and has put within the reach of their discovery, the comfortable provision for this life, and the way that leads to a better. How short soever their knowledge may come of an universal or perfect comprehension of whatsoever is, it yet secures their great concernments [Prov 1: 1 - 7], that they have light enough to lead them to the knowledge of their Maker, and the sight of their own duties [cf Rom 1 - 2, Ac 17, etc, etc]. Men may find matter sufficient to busy their heads, and employ their hands with variety, delight, and satisfaction, if they will not boldly quarrel with their own constitution, and throw away the blessings their hands are filled with, because they are not big enough to grasp everything . . . It will be no excuse to an idle and untoward servant [Matt 24:42 - 51], who would not attend his business by candle light, to plead that he had not broad sunshine. The Candle that is set up in us [Prov 20:27] shines bright enough for all our purposes . . . If we will disbelieve everything, because we cannot certainly know all things, we shall do muchwhat as wisely as he who would not use his legs, but sit still and perish, because he had no wings to fly.
Even worse is to selectively disbelieve what is not convenient to our worldviews or issues, using grounds that, if consistently applied, would put us into the absurdities of hyperskepticism that Locke so eloquently described.

5] an advanced AI system could mimic a conscious “human” agent in these ways well enough to fool us, at least in certain situations.

Those situations are the ones the system had been designed and programmed for. But the POINT is that real agents are not confined to “certain” well-defined situations. They act effectively, intuitively and creatively into the unstructured, ill-defined, often poorly understood messy complexities of the real world, a world in which “art is long, life short, experience treacherous and judgement difficult . . .”

GEM of TKI
kairosfocus
January 29, 2008 at 11:37 PM PDT
U&O provides a list of requirements, such as “It must be able to generate new programs, functions, etc. based on its database, continually.” I hope you understand that, for the functionality of current computers, a program is simply a structured database of instructions. There is almost nothing that distinguishes data, program, database, etc., except for how it is accessed. Some BYTEs are read as program. Some are read as data. Some BYTEs are written as data and read as program. If a computer can determine data to put into the database then, unless there is a physical limitation, generating a new program is basically the same process. It is your claim that it “must be capable of all these things and others I have not mentioned”, and not the list you provide, that shows the strength of your argument.

Q
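Q's point that program and data are the same kind of bytes, distinguished only by how they are accessed, can be illustrated concretely in a stored-program setting. This is my own sketch, not from the comment (the dictionary keys and function name are invented): a string sits inertly in a data structure until the machine is told to read it as instructions.

```python
# A "database" holding two entries: one meant as data, one meant as program.
database = {
    "record": "just some stored text",
    "program": "def added(x, y):\n    return x + y\n",
}

# Until this point the "program" entry is indistinguishable from data:
# both are plain strings. compile/exec stand in for "fetch as instructions".
namespace = {}
exec(compile(database["program"], "<database>", "exec"), namespace)

# Now the stored bytes run as code: the data/program distinction was
# only a matter of how the bytes were accessed.
result = namespace["added"](2, 3)
```

This is the von Neumann stored-program idea Q is appealing to: nothing intrinsic to the bytes marks them as program rather than data.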
January 29, 2008 at 05:38 PM PDT
Greetings! My objection to applying “persuasion” to the computers in BarryA's question is that the tasks carried out by the computers did not qualify. The computers can transmit, confirm, accept or reject the information, then store it in the database according to the programmer's preset parameters. My objection is that this is not enough.

The computer must, as a result of analysing the data, begin to alter its operating system, programs, functions and sub-functions, re-calibrate, and analyse its database and correct it according to the new parameters. It must be able to re-program itself, generate a new operating system, etc., and decide or choose between the old and new operating systems. It must be able to generate new programs, functions, etc. based on its database, continually. It must be able to integrate its database and new data into its operating system, programs, etc. It must be able to disagree with the conclusion of computer A's transmission after confirming it, accepting it, and agreeing with the data and feedback. It must be able to change its parameters, but not as the result of direct programming by outside sources, i.e., Computer A or Computer B's programmer. It must be capable of all these things and others I have not mentioned. But if a computer can do these things, then I might apply the term "persuasion" to its interaction with computer A.

Also, my objection to materialism is that it excludes the obvious conclusions of honest inquiry into the non-material: 1) The non-material cause and origin of the universe. 2) The non-material cause, origin, and nature of material life. 3) The non-material cause, origin and nature of consciousness. 4) The non-material cause, origin, and nature of the conscience. 5) The non-material cause, origin, and nature of free will. 6) The non-material cause, origin, and nature of imagination and creativity. 7) The non-material cause, origin, and nature of emotions. 8) The non-material cause, origin, and nature of thoughts. 9) The non-material cause, origin, and nature of spirituality.

I of course recognize the material universe, but it's only part of the story. I hold that material intelligent agents must indeed have material parts but also non-material counter-parts. Like the heart needs the lungs and the lungs need the heart: to remove one disables the other. The brain without the mind is worthless, and the mind without the brain does not exist, for material intelligent beings. Matter does not exist without energy. Matter and energy do not exist without information. Matter, energy, and information do not exist without intelligence. Each relies on another to exist. And as intelligent material beings, we complete a circle, as we are intelligent matter. Material beings with non-material origins, parts, and nature.

Unlettered and Ordinary
January 29, 2008, 03:58 PM PDT
What I was agreeing to was that science cannot investigate the non-material by way of observation and study. And even that I doubt, but only because we can observe the effects of things not readily available, not observable, or not even present.
Unlettered and Ordinary
January 29, 2008, 02:51 PM PDT
Greetings! I don't actually totally agree. That is too closed for me. I unequivocally reject materialism. But I also believe in strict scientific terms: we can only study what we observe, and we can only know what we study. But this applies only to investigating physical nature. That does not mean the conclusions of our investigation cannot lead us to the non-material. So the materialist framework is bunk, especially for science. So actually I totally disagree.
Unlettered and Ordinary
January 29, 2008, 02:47 PM PDT
Greetings! larrynormanfan 110, I agree with you totally. "Science without religion is lame, religion without science is blind." -- Albert Einstein
Unlettered and Ordinary
January 29, 2008, 02:36 PM PDT
(M): "...our assumption that other humans are conscious self-aware beings also is only a working theory (when restricted to physical interactions)."

kairosfocus (#124): "Not at all .... we must not fall into selective hyperskepticism -- worldview level question begging -- on evidence that points to consciousness and creativity, such as appropriate responsiveness in novel, unstructured, demanding situations."

I didn't mean to give the impression of selective hyperskepticism about whether humans are actually conscious, self-aware beings -- just that we can't know with absolute certainty, using reason, logic, and the observations of our senses, that any persons other than ourselves are truly so. This is like the problem of proving the falsity of solipsism. In principle an advanced AI system could mimic a conscious "human" agent in these ways well enough to fool us, at least in certain situations: those situations the system had been designed and programmed for. This design could involve a huge database derived from actual human conversations, and a large enough set of conversational "algorithms" derived from actual human interactions. Nothing in principle would prevent such a system from deceiving a real person into believing he/she was communicating with a real human. Whether actual computer systems will ever have enough memory and processing power to do it is another issue. Of course this is the Turing test, and the possibility of being fooled by it is why the test is invalid or inapplicable to really deciding the issue. This is why empirical evidence for dualism of human consciousness, such as psi, is so important.

kairosfocus (#125): "....If a robot or computer etc can persistently pass the test (involving 'novel unstructured situations that demand creative responses', including 'naturally occurring language using inter-active situations in a changing world') it is much more than a mere simulation driven by clever software."

I disagree, for the reasons given above.
magnan
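The kind of system magnan describes, canned responses drawn from a database of human conversation and selected by keyword rules, can be sketched in a few lines in the style of Weizenbaum's ELIZA. The keywords and responses below are invented for illustration; the point is that nothing in the program understands anything, it only matches and retrieves.

```python
import random

# Hypothetical response database: stored fragments of human-sounding
# conversation, keyed by trigger words. Purely illustrative.
RESPONSE_DB = {
    "mind":    ["What makes you sure anyone has a mind?",
                "Tell me more about what you mean by 'mind'."],
    "free":    ["Do you feel free when you choose?",
                "Freedom is a word people use in many ways."],
    "default": ["Interesting. Please go on.",
                "Why do you say that?"],
}

def reply(message):
    """Return a stored response keyed on the first matching trigger word.

    The selection is pure mechanism: string matching plus a random pick,
    with no representation of meaning anywhere in the loop.
    """
    for keyword, responses in RESPONSE_DB.items():
        if keyword != "default" and keyword in message.lower():
            return random.choice(responses)
    return random.choice(RESPONSE_DB["default"])

print(reply("I believe the mind is immaterial."))
```

With a large enough database and better matching rules, such a system can hold up its end of a narrow conversation, which is exactly why passing a bounded, prepared-for test settles nothing about consciousness on magnan's view.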
January 29, 2008, 12:43 PM PDT
KF, I'm sorry you don't think I'm being serious. I am in fact, despite your assertions to the contrary. I don't appreciate the tone, either.
The point is that it is those who would insist that there are only certain known agents, and that these agents are the only possible ones, who have a -- so far unmet -- burden of proof.
I don't know what you're talking about. In fact, I have a hard time following a lot of what you say. In answer to a previous question, I've read enough of your work to know that you have developed a philosophical scheme which seems internally coherent to you but which, to my eyes, strains at gnats to swallow camels. I don't "insist that there are only certain known agents" or what have you. All that stuff is meaningless to me. I'm talking about what science should assume as it proceeds. I'm not talking about knowledge claims in general; I'm talking about the limited domain of science.
larrynormanfan
January 29, 2008, 08:45 AM PDT
KF, 130: "The point is that it is those who would insist that there are only certain known agents, and that these agents are the only possible ones, who have a -- so far unmet -- burden of proof." I agree. I at least have not been suggesting that there are only certain known agents. But you have been arguing that certain behaviors require certain agents -- a theoretical claim not supported by observation. Your burden of proof is essential: it requires demonstration to close the gap between logical argument and representation of fact. Specifically, you are avoiding the double-blind question in 120, regarding behaviors that are indistinguishable from human behavior. It is not question-begging, because BarryA in the title of this thread suggested a semblance of "persuasion"; even he put the word in quotes. Can you show that only objects with the property of an intelligent agent could pass a double-blind test of whether a specific behavior is human or not?
Q
January 29, 2008, 08:43 AM PDT
LNF: It is clear that this is not a serious exchange.

1] "We don't see 'intelligent agency' in the absence of 'intelligent agents.' And since such agents -- us -- are only known to exist comparatively recently in world history, we shouldn't bring other unknown agents in just because a particular problem is difficult to solve scientifically."

Question-begging. Science is not to be confused with "the best evolutionary materialist 'explanation' of the cosmos from hydrogen to humans," especially given the self-referential incoherence that lurks in that, not to mention the begging of the question through a loaded definition. Onlookers, observe as well the "rush on to the next objection without addressing issues squarely on the merits" problem.

2] "those 'BILLIONS' also disagree with each other, often (historically and in the present) violently. Which of these accounts am I to take as scientific? The Muslim, the Mormon, the madhouse resident? The UFO abductee? The native in the sweatlodge seeking the animal spirit? They all encounter intelligent agency."

Onlookers, first observe the strawman tactic, especially in light of the principles that experience and interpretation are different issues and that a counterfeit $1 implies a genuine $1. If you want to talk about claimed personal encounters with God, try: Moses, Jesus of Nazareth, Saul of Tarsus, Augustine, Pascal, John Wesley, George Whitefield, C S Lewis or the like -- some of the greatest lives and minds in history, lives that show the positive impacts of a transforming real experience, not the disintegrative and chaotic impacts of delusions. But even that is a red herring leading to a strawman likely to be burned to cloud and confuse the atmosphere. The point is that it is those who would insist that there are only certain known agents, and that these agents are the only possible ones, who have a -- so far unmet -- burden of proof.
If a possibility exists, it should not be dismissed through question-begging, especially the misuse of the term "science." GEM of TKI
kairosfocus
January 29, 2008, 05:34 AM PDT
Also: those "BILLIONS" also disagree with each other, often (historically and in the present) violently. Which of these accounts am I to take as scientific? The Muslim, the Mormon, the madhouse resident? The UFO abductee? The native in the sweatlodge seeking the animal spirit? They all encounter intelligent agency.
larrynormanfan
January 29, 2008, 03:34 AM PDT
We don't see "intelligent agency" in the absence of "intelligent agents." And since such agents -- us -- are only known to exist comparatively recently in world history, we shouldn't bring other unknown agents in just because a particular problem is difficult to solve scientifically. FWIW, I'm not an atheist. But God is not a scientific proposition.
larrynormanfan
January 29, 2008, 03:28 AM PDT
LNF: In re: "if by 'agency' you mean 'intelligent agency,' we don't really see that as an independent cause of anything . . ."

Really! Our most direct, commonly observed personal experience is that of agency in action. Indeed, it is the very premise on which we can discuss and seek to persuade. So, au contraire, we DO experience -- notice how I do not use "see" -- agents in action as self-determining entities. That is the premise of any rational discussion, but evo mat advocates often reject it, apparently unaware that they are sawing off the branch on which they are sitting. So, "agent" is not a "gaps" argument -- it is the very first fact of all, that each of us experiences and brings to the table to do science or anything else of consequence. To deny such self-evident reality ends in absurdity, which we may distract attention from all we will [note the persuasive STRATEGY involved, onlookers], but it is there.

Next, I simply pointed to observed and/or possible agents. Until and unless you know beyond revision that God is not possible, or other agents that are not human, you have no good basis to confine claimed or possible observation of agents to ourselves -- and note, on the possibility or actuality of encountering God, BILLIONS disagree with you on observation, if you are an atheist. Recognising that agency is possible does not subvert science; it simply recognises that we do not foreclose things by distorting the definition of science or what we may study or conclude scientifically. [Have you read the relevant links?] In short, if we need not beg a question, then we should not do so! (Especially when comparative difficulties across live option alternatives give us a way to avoid that. And that sets a context for scientific progress, as Lakatos noted in the again-excerpted cite just above in 125.)
It is not a "pretense" to recognise that; it is to be honest in front of THREE -- count 'em -- known major causal factors: chance, mechanical necessity, and agency. Watch them in action again, in a simple, commonly encountered case:
A Tumbling Die: For instance, heavy objects tend to fall under the natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance. But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes! This concrete, familiar illustration should suffice to show that the three causal factors approach is not at all arbitrary or dubious -- as some are tempted to imagine or assert.
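The tumbling-die illustration can be sketched in a few lines. The function names and the 1 m drop height are invented for illustration; the fall time comes from the standard kinematic formula t = sqrt(2h/g) with g = 9.81 m/s^2.

```python
import random

# Sketch of the tumbling-die illustration: the fall is lawlike regularity,
# the upturned face is chance, and the decision to cast at all is the agent's.

def gravity_drop(height_m):
    """Natural regularity: fall time is fixed by physics, t = sqrt(2h/g)."""
    return (2 * height_m / 9.81) ** 0.5

def cast_die():
    """Chance: which face lands up is, for practical purposes, random."""
    return random.randint(1, 6)

def play_round(agent_chooses_to_cast):
    """Agency: the game happens only because an agent decides to harness
    regularity and chance for a purpose."""
    if not agent_chooses_to_cast:
        return None
    fall_time = gravity_drop(1.0)  # identical every cast: regularity
    face = cast_die()              # differs from cast to cast: chance
    return fall_time, face

print(play_round(True))
```

Each factor is visibly irreducible to the others here: no amount of physics predicts the face, no amount of randomness explains why a round was played, and neither explains why the die falls at a fixed rate.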
GEM of TKI
kairosfocus
January 29, 2008, 03:07 AM PDT
KF, if by "agency" you mean "intelligent agency," we don't really see that as an independent cause of anything. We rather see, as you elegantly put it, people like ourselves -- "contingent, intelligent beings who are embodied." No need to posit "God, demons and gods, robots of the ilk of an R Daneel Olivaw (or even early precursors), ETs, and much more." Agents such as ourselves seem fairly recent, or seem to have left no tangible traces. So I'm not going to invent them for convenience's sake. Including such a laundry list (as above) of possible causes for everything we don't understand takes all the regularity out of scientific inquiry. Agency-of-the-gaps. My position is question-begging in the most abstract sense, I'll grant. But to leave open every question for the sake of philosophical consistency is pretense -- almost as pretentious as to close off the standard options because one knows better, while claiming to have a truly open mind.
larrynormanfan
January 29, 2008, 02:29 AM PDT
5] "unless empirical evidence for some form of dualism with advanced AI 'beings' can be shown (equivalent to that for humans), the intuitive observation that AI 'beings' are purely mechanism simulating personhood will be the most plausible assumption"

Until we get into novel, unstructured situations that demand creative responses. Naturally occurring, language-using, interactive situations in a changing world are a good case in point. If a robot or computer etc. can persistently pass the test in such situations, it is much more than a mere simulation driven by clever software.

6] "There is another logical possibility that I consider greatly unlikely, however. This is that although dualism applies to human consciousness, an advanced AI 'being' could be truly self-aware, conscious and intelligent in the human sense, and still totally the function of electronic circuits and running computer software"

You may be right. But the existence of ourselves as contingent, intelligent beings who are embodied keeps this a possibility -- one that cannot be ruled out unless sufficient experimental evidence shows that there are plainly insurmountable difficulties here. And so the design paradigm, again, shows itself to be a potentially very fruitful and progressive research programme in the Lakatosian sense:
[IL, 1973] . . . the typical descriptive unit of great scientific achievements is not an isolated hypothesis but rather a research programme. [Science is not simply trial and error, a series of conjectures and refutations.] ‘All swans are white’ may be falsified by the discovery of one black swan. But such trivial trial and error does not rank as science. Newtonian science, for instance, is not simply a set of four conjectures - the three laws of mechanics and the law of gravitation. These four laws constitute only the ‘hard core’ of the Newtonian programme. But this hard core is tenaciously protected from refutation by a vast ‘protective belt’ of auxiliary hypotheses. And, even more importantly, the research programme also has a ‘heuristic’, that is, a powerful problem-solving machinery, which, with the help of sophisticated mathematical techniques, digests anomalies and even turns them into positive evidence. For instance, if a planet does not move exactly as it should, the Newtonian scientist checks his conjectures concerning atmospheric refraction, concerning propagation of light in magnetic storms, and hundreds of other conjectures which are all part of the programme. He may even invent a hitherto unknown planet and calculate its position, mass and velocity in order to explain the anomaly. Now, Newton’s theory of gravitation, Einstein’s relativity theory, quantum mechanics, Marxism, Freudism, are all research programmes, each with a characteristic hard core stubbornly defended, each with its more flexible protective belt and each with its elaborate problem-solving machinery. Each of them, at any stage of its development, has unsolved problems and undigested anomalies. All theories, in this sense, are born refuted and die refuted. But are they equally good? Until now I have been describing what research programmes are like. But how can one distinguish a scientific or progressive programme from a pseudoscientific or degenerating one? 
Contrary to Popper, the difference cannot be that some are still unrefuted, while others are already refuted. [When Newton published his Principia, it was common knowledge that it could not properly explain even the motion of the moon; in fact, lunar motion refuted Newton.] Kaufmann, a distinguished physicist, refuted Einstein’s relativity theory in the very year it was published. But all the research programmes I admire have one characteristic in common. They all predict novel facts, facts which had been either undreamt of, or have indeed been contradicted by previous or rival programmes . . . . in a progressive research programme, theory leads to the discovery of hitherto unknown novel facts. In degenerating programmes, however, theories are fabricated only in order to accommodate known facts . . . . how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. But while it is a matter of intellectual honesty to keep the record public, it is not dishonest to stick to a degenerating programme and try to turn it into a progressive one. As opposed to Popper the methodology of scientific research programmes does not offer instant rationality. One must treat budding programmes leniently: programmes may take decades before they get off the ground and become empirically progressive. Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory.
What a commentary on the tactics of the NCSE etc.! I need say little more than that, though I link here on abduction and broader inference to best current explanation in science in light of worldviews issues. GEM of TKI
kairosfocus
January 28, 2008, 10:55 PM PDT
Okay: Some follow-up points, even as the thread has commendably returned to a focus on the merits:

1] Q, 120: "it seems you are asserting that for a computer to be able to produce the behavior of people, it must be some sort of intelligent agent"

"The behaviour of people," as the cognitive psychologists have finally won the day on, has a second- or third-person, observable aspect, AND a first-person, conscious aspect. On particular tasks, it may be possible to mimic the externally observable behaviour without having captured the inner, I-ness aspect. But IMHBCO, until and unless one captures BOTH, one has not truly captured "the behaviour of people." Now, how can one tell that there is "somebody" at home in a given body? Ans: by looking for self-aware, un-programmed, self-directing, creative behaviour in novel, unstructured situations, and spontaneous, appropriately situationally responsive communication and interactions -- and I have in mind emotions, values and ethics too. [Searle's Chinese Room and Turing's tests, so far as I can see, do not pass this test.]

2] "Am I reading your comment to suggest that until computers are imbued with the property of being an intelligent agent, they would never be able to exhibit behavior for specific tasks that are so similar to a person's behavior for that task that they would pass a double-blind test for that specific behavior?"

First, I note that "person" is not to be confused with "human being." [I suspect that this is part of the gap in communication evident above. Actual known and potential persons -- aside from legal fictions such as corporations -- include: humans, God, demons and gods, robots of the ilk of an R Daneel Olivaw (or even early precursors), ETs, and much more. Also observe my point on the behaviour of known persons, above.]

Having noted that, as the previous remarks indicate, if there is sufficient family resemblance between a robot's behaviour, say, and that of a known person, then on family-resemblance principles of classification I would immediately accept the robot into the family of persons. And a very welcome addition they would be -- if governed by appropriate ethical laws [which of course require creativity and situation awareness to apply] -- and see the significance of the point Asimov made? When it comes to specific tasks, the very act of specification issues in restriction and algorithmisation. That, in part, is why I spoke of creative, spontaneous and appropriately responsive behaviour in unstructured, novel situations, above. Algorithmic problem solving does take intelligence, as we can immediately infer from the FSCI embedded in it. But that is the intelligence of the programmer, not that of the machine that simply deterministically executes the actions "required" by the symbols it is processing at the moment [and whatever relevant stored "flag" memory state drives decisions]. In the case of the Chinese Room, the algorithm-implementing agents are intelligent too, but they are simply using that intelligence to follow instructions blindly.

3] Magnan, 121: "We could empirically interact with such a[n AI] 'being' as if it were a person, while knowing that it may be purely a mechanism with no self-awareness in a human conscious sense. Like Searle's Chinese Room."

I have as recently as Saturday last been caught interacting with this PC as if it were a person, even knowing that it cannot even hear me. Having noted that, cf. above on where I think a reasonable threshold obtains.

4] "our assumption that other humans are conscious self-aware beings also is only a working theory. That is, when restricted to ordinary physical interactions. However, there is actual empirical evidence bearing on the issue as it applies to humans . . ."
Not at all, save insofar as that our reasoning is constrained by “unproved” first plausibles in all relevant situations and is fallible. Thus, we must not fall into selective hyperskepticism -- worldview level question begging -- on evidence that points to consciousness and creativity, such as appropriate responsiveness in novel, unstructured, demanding situations. This was the essential point that say Chomsky made on language acquisition and use – what we say and how is not “just” pre-programmed or a matter of stimulus-response operant conditioning; it is novel, meaningful and responsive in unstructured situations – creative, in one word. Further to this, we may find materialism-leaning “prof” Wiki's remarks on the rise of Cognitivism telling again [cf 82] – especially given the patent absurdity of behaviourist psychologists using their minds to assert or assume that mental states do not exist in any sense that is more than an epiphenomenon riding on the underlying material phenomena:
Cognitivism became the dominant force in psychology in the late-20th century, replacing behaviorism as the most popular paradigm for understanding mental function. Cognitive psychology is not a wholesale refutation of behaviorism, but rather an expansion that accepts that mental states exist. This was due to the increasing criticism towards the end of the 1950s of behaviorist models. One of the most notable criticisms was Chomsky’s argument that language could not be acquired purely through conditioning, and must be at least partly explained by the existence of internal mental states. The main issues that interest cognitive psychologists are the inner mechanisms of human thought and the processes of knowing. Cognitive psychologists have attempted to throw light on the alleged mental structures that stand in a causal relationship to our physical actions . . .
Can you see the absurdity? If, on reflection or even on trying to write behaviourist papers, behaviourists see that they exhibit mental states, then mental states exist. Since they DECIDE what to put into those papers, on direct experience they are able to decide and act reasonably. That is, their own personal life experiences testify against their positions: they are self-referentially incoherent. And, if such mental states and actions exist in a context of I-ness, it is reasonable to infer that one has a mind, and that the mind is acting intelligently into the world, i.e. mind is at least possible. On level-playing-field comparative difficulties, mind can account for FSCI in ways that random-walk searches based on arbitrary initial points and functionality tests cannot -- the vastness of the config spaces utterly swamps the probabilistic resources available in, say, a language-using situation that responds in REAL TIME at conversational rates! [That is, Chomsky's argument was a decisive INFERENCE TO DESIGN AND THENCE AGENCY -- one that has been accepted as scientific.] Then, when we see similar behaviour in other entities, whether members of the same human family -- we get into race and sex discrimination issues rapidly here -- or in other entities that are sufficiently capable that we recognise agency, we have an inference to best and empirically anchored explanation. One that in fact has the further point that at its core is a self-evident truth: the personally experienced reality of intelligence and mind we all have. [. . . ]
kairosfocus
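The "config space swamps probabilistic resources" claim is a back-of-envelope calculation, and it can be checked in a couple of lines. The specific figures below (a 27-symbol alphabet of letters plus space, a 143-character string, and 10^150 as an upper bound on physical events in the observable universe) are illustrative assumptions supplied here, not numbers from the comment itself.

```python
from math import log10

# Assumed figures (illustrative, not from the comment):
alphabet = 27   # 26 letters plus a space
length = 143    # roughly a sentence or two of English text
resources_log10 = 150  # assumed upper bound on events, i.e. 10**150

# Size of the configuration space is alphabet**length; work in log10
# to avoid enormous integers.
configs_log10 = length * log10(alphabet)

print(round(configs_log10))                 # ~205, i.e. ~10**205 configs
print(configs_log10 > resources_log10)      # the space exceeds the bound
```

Whatever one makes of the inference drawn from it, the arithmetic itself is straightforward: 27^143 is about 10^205, which dwarfs 10^150 by 55 orders of magnitude.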
January 28, 2008, 10:51 PM PDT
Sorry for the double posting. The first didn't seem to get through.
magnan
January 28, 2008, 01:12 PM PDT
Unlettered and Ordinary (#115): "As for the computer’s in BarryA’s scenario they are just computer’s and have no mind, no will, no imagination, just the preset parameters of the programmer. They are not able to be persuaded." Thanks for clearly restating what should be obvious. Kairosfocus (#39): "When we see intelligent, creative action coming from computers, ....., then we will accept that they have become artificial persons with artificial intelligence. Until that happens, we will reserve the language of persuasion for persons, which is where it belongs." I don't think we would have to accept "personhood" of such an example of advanced AI. We could empirically interact with such a "being" as if it were a person, while knowing that it may be purely a mechanism with no self-awareness in a human conscious sense. Like Searle's Chinese Room. Of course our assumption that other humans are conscious self-aware beings also is only a working theory. That is, when restricted to ordinary physical interactions. However, there is actual empirical evidence bearing on the issue as it applies to humans. Namely, the mountain of evidence for psi functioning, for instance, which strongly implies some form of dualism of human consciousness. kairosfocus (#119): "The first serious question at stake, then, is whether such agents can be wholly based on hardware and programmed-in software based on storage elements and states; perhaps using the sort of neural network architecture for a DS style intelligent director as envisioned by say Asimov for R Daneel and kin (only, using electronic technology!). THAT IS AN OPEN QUESTION, THAT CAN ONLY BE ADDRESSED EMPIRICALLY." I agree, and unless empirical evidence for some form of dualism with advanced AI "beings" can be shown (equivalent to that for humans), the intuitive observation that AI "beings" are purely mechanism simulating personhood will be the most plausible assumption. 
There is another logical possibility that I consider greatly unlikely, however. This is that although dualism applies to human consciousness, an advanced AI "being" could be truly self-aware, conscious and intelligent in the human sense, and still totally the function of electronic circuits and running computer software.
magnan
January 28, 2008, 01:10 PM PDT
KF, we've discussed the issues about whether computers can be persuaded well enough, so I'll not pursue that specific topic more. But, to continue with your comments in general, it seems you are asserting that for a computer to be able to produce the behavior of people, it must be some sort of intelligent agent. That seems to be your message in several posts, including 119 g, h, i, l, and m immediately above. Am I reading your comment correctly to suggest that until computers are imbued with the property of being an intelligent agent, they would never be able to exhibit behavior for specific tasks so similar to a person's behavior for those tasks that they would pass a double-blind test for that specific behavior?
Q
January 28, 2008, 08:38 AM PDT
Okay: We have now arrived at the bottom-line of Q's objectionism, at 117. LNF at 118, reveals the underlying worldview level question being begged:
[Q, 117:] . . . if the argument is strictly along extended definitional grounds, and not the process of persuasion, then BarryA’s entire scenario was meaningless and you, and KF, score a tautological victory [LNF, 118:] . . . if it attempts to move beyond materialism, it ceases being science
a --> Now, surely, what persuasion -- as a process – is, is not in doubt:
[Am H dict, again!] per·suade: To induce to undertake a course of action or embrace a point of view by means of argument, reasoning, or entreaty . . . . Synonyms: persuade, induce, prevail, convince These verbs mean to succeed in causing a person to do or consent to something. Persuade means to win someone over, as by reasoning or personal forcefulness . . .
b --> In short, persuasion is (from observation and experience, not a priori abstract definition) an interpersonal, inter-subjective process that requires reasoning [whether good or bad is another question!], perceiving, evaluating, and deciding – all of which are premised on real freedom to choose what to accept as true or right or advantageous – whether wisely or unwisely. c --> Such a process, necessarily, is not applicable to the mere exchange of data and pre-programmed processing under pre-determined algorithms as coded. PERIOD. d --> Nor is that a mere “tautology ” -- Q here, predictably, objectionistically confuses the self-evident with the mere empty repetition of an identity that is true by definition, X = X. e --> For, there is empirically discovered content that has to be examined, before drawing the conclusion above. First, we examine and reflect on [a] what persons are like – by instantiation through persons we are familiar with, and [b] what computer systems are like (equally) currently. f --> But also, since we are contingent beings and are agents -- as previously noted but conveniently ignored – then it is possible to create agents. g --> The first serious question at stake, then, is whether such agents can be wholly based on hardware and programmed-in software based on storage elements and states; perhaps using the sort of neural network architecture for a DS style intelligent director as envisioned by say Asimov for R Daneel and kin (only, using electronic technology!). THAT IS AN OPEN QUESTION, THAT CAN ONLY BE ADDRESSED EMPIRICALLY. h --> A second serious question, is whether such is the only> possible ways to implement an intelligent agent. Absent the sort of question-begging LNF tries to indulge, that is an in principle open question. If shown, then it means that however improbable the numbers, we got here by chance in an eternal material cosmos. But this, too, is an empirical question, and one that has certainly not been shown. 
So, it should not be implicitly, dogmatically assumed or asserted. That is worldview-level question-begging.

i --> And, in fact, as I pointed out in 81 -- 82, esp. points k -- v [which Q seems determined to bury under a snowstorm of irrelevant and/or trivial objections], there are serious reasons and there is good evidence -- starting with our own minds -- to see that intelligent agents not based on a material substrate are also possible. Indeed, it is arguable that one such agent is responsible for the cosmos in which we live.

j --> That brings us to LNF's begged question.

k --> FYI LNF, we empirically observe and reliably know from such observation that events are traceable to one or more of three causal factors: [1] chance, [2] natural regularities tracing to mechanical necessity, [3] agency. An almost trivial instance, from my always linked, is sufficient illustration of the fact, and of the irreducibility of any one of these factors to the others:
A Tumbling Die: For instance, heavy objects tend to fall under the natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance. But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes!
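The three causal factors in the die illustration can be sketched in a few lines of Python (a hypothetical toy for this thread, not anything from the original comment): mechanical necessity and chance are abstracted together as a uniform draw over the six faces, while agency appears as the player's choice to cast the die toward a purpose.

```python
import random

def cast_die(rng: random.Random) -> int:
    """Necessity + chance: the physics of a tumbling die is abstracted
    here as a uniform random draw over the six faces."""
    return rng.randint(1, 6)

def play_round(rng: random.Random, target: int) -> bool:
    """Agency: a player chooses a target and chooses to cast the die,
    harnessing necessity and chance in pursuit of a goal."""
    return cast_die(rng) == target

# Seeded so the demonstration is repeatable.
rng = random.Random(2008)
rolls = [cast_die(rng) for _ in range(60_000)]
for face in range(1, 7):
    share = rolls.count(face) / len(rolls)
    print(f"face {face}: {share:.3f}")  # each share comes out near 1/6
```

The point of the sketch is only that the statistical regularity (each face near 1/6) and the individual unpredictable outcome coexist with, and are distinct from, the purposive use made of them in `play_round`.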
l --> Further, as is discussed in detail in Section A of the always linked, functionally specified, complex information [FSCI] is an empirically recognisable, reliable sign of agency as opposed to chance, on grounds tied to the core principles of statistical thermodynamics [cf. Appendix A, esp. point 6].

m --> So we may -- on excellent, empirically grounded scientific grounds -- infer to agency in many cases; e.g. this comment is FSCI, and is agent action, not chance or necessity.

n --> Similarly, complex cases in DNA and the nanotech of life, including body-plan level biodiversity, point to the inference -- absent question-begging and selective hyperskepticism -- that life is the result of agent action. Further, the fine-tuned organised complexity of the cosmos' underlying physics similarly points to agency. [Cf. sections B -- D, the always linked.]

o --> You may of course -- you are a free agent -- deny these scientific chains of inference, by imposing evolutionary materialism and associated ad hoc philosophical extensions such as quasi-infinite unobserved multiverses, but only at a stiff intellectual price: incoherence and absurdity, starting from the mis-definition of science and its proper scope of inquiry.

BOTTOM-LINE: BarryA's question is a good one, and it points to serious implications. And the design inference -- per Lakatos [cf. 84 -- 85 supra] -- is a key, common-core element of a fruitful, potentially highly progressive research programme across many domains of applied and pure science.

GEM of TKI

kairosfocus
January 28, 2008
08:12 AM PDT