Uncommon Descent: Serving the Intelligent Design Community

Can One Computer “Persuade” Another Computer?


In a comment to a prior post, StephenB raises some interesting questions:

{1} Free will requires the presence of a non-material mind independent of the brain. {2} A non-material mind independent of the brain indicates free will. . . . In philosophy, [this type of proposition] is known as a bi-conditional proposition, which means: if A, then B; and also, if B, then A. Usually, that pattern does not hold in logic, but it does hold here. [If one disavows] the existence of the mind, it is time to make the corresponding assertion about volition—go ahead and reject free will and complete the cycle. Take the final step and concede that all of our attempts to persuade each other are futile. We are nature’s plaything, and the laws of nature operating through our “brain” dictate our every move.

Given [the materialist’s] perception of reality, why [does he] bother to raise objections at all [to the proposition that mind exists independently of the brain]? If your worldview is true, then [all the commenters] on this blog do what we do only because fate requires it of us. We are, for want of a better term, determined to think and act as we do. Since we have no volitional powers, why do you appeal to them? Why raise objections in an attempt to influence when it has already been established that only non-material minds can influence or be influenced? Why propose a change of direction when only intelligent agencies have the power to do that? Since brains are subject to physical laws of cause and effect, they cannot rise above them and, therefore, cannot affect them. Brains cannot influence brains. Why, then, do you ask any of us to change our minds when, in your judgment, there are no minds to change?

Surely we all agree that the output of a computer is utterly determined, in the sense that the output can be reduced to a function of the physical properties of the machine.

Note that this does not mean that the output of a computer is always predictable. “Determined” is not a synonym for “predictable.” An event may be completely determined and utterly unpredictable at the same time. In other words, it might be “determined” and also “indeterminate.” Example: say a bomb explodes. It is impossible to predict where any particular piece of the bomb’s casing will land. Therefore, where that piece will land is indeterminate. Nevertheless, where the piece ends up landing is purely a function of the laws of nature, and is in that sense determined.
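To make the distinction concrete, here is a minimal sketch (my own illustration in Python; nothing in the original post specifies any code) of a process that is fully determined yet practically unpredictable: every step follows a fixed rule, but a change in the ninth decimal place of the starting value produces a wildly different outcome.

# Hypothetical illustration: the logistic map is completely determined by its
# update rule and its seed, yet its long-run behaviour is so sensitive to the
# seed that it is practically unpredictable, even though nothing is random.

def logistic_trajectory(x0, steps=50, r=3.99):
    """Iterate x -> r * x * (1 - x); each step is fixed by the rule and the seed."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)   # differs only in the ninth decimal place
print(a, b)                            # two fully "determined" outcomes, wildly different values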

Now assume we have two computers that can communicate in machine code across a cable.  Assume further that the computers are assigned the task of coming to a conclusion about the truth or falsity of a particular proposition, say “The best explanation for the cause of complex specified information X (“CSI-X”) is that CSI-X was produced by an intelligent agent.”   Say computer A is programmed to do two things:

1. Respond “true” to this proposition.

2. Communicate a list of facts and arguments its programmers believe support this statement.

Here’s the interesting question.  Can computer A “persuade” computer B to accept the “true” statement?

The answer, it seems to me, is obvious:  No. 

Computer B’s output is completely determined.  It has no free will. It has no “mind” that may be persuaded.  The facts and arguments communicated to it by computer A  trigger a subroutine that produces the output “yes it is true” or “no it is false.”  The result of that computation is utterly determined in the sense that it is reducible to the operation of computer B’s software and hardware.  Computer B has no meaningful choice as to how to respond to the information provided to it by computer A.
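As a toy sketch of that point (my own construction; BarryA's post specifies no code, and the scoring rule and names below are invented for illustration), computer B's "verdict" can be written as a pure function of what it receives: the same facts in always produce the same output, with nothing in between that weighs, chooses, or assents.

# Hypothetical sketch of the exchange described above. Computer B's answer is a
# fixed function of its program and its input; run it a million times on the
# same list of claims and it will "conclude" the same thing every time.

def computer_b_evaluate(claims):
    """Apply a hard-coded scoring rule to the claims and return a verdict."""
    score = sum(1 for claim in claims if "CSI-X" in claim)   # arbitrary fixed rule
    return "yes it is true" if score >= 2 else "no it is false"

# Computer A "communicates a list of facts and arguments":
facts_from_a = [
    "CSI-X exceeds the universal probability bound",
    "CSI-X matches an independently given specification",
]
print(computer_b_evaluate(facts_from_a))   # settled entirely by software and hardware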

This brings us back to StephenB’s questions.  If the brain is nothing more than an organic computing machine, why do materialists bother to try to persuade us of anything? 

Comments
UandO, "Science cannot investigate the causes of the fundamental forces of the universe, because they are beyond materialists explainations." True enough. That's why, if it attempts to move beyond materialism, it ceases being science.larrynormanfan
January 28, 2008, 12:01 AM PDT
OK, U&O, if the argument is strictly along extended definitional grounds, and not the process of persuasion, then BarryA's entire scenario was meaningless and you, and KF, score a tautological victory. We can be done now.Q
January 27, 2008, 05:00 PM PDT
Correction - the definitions you provided clearly do not and cannot apply to computers.
Unlettered and Ordinary
January 27, 2008, 04:54 PM PDT
Greetings! Q, you are no longer amusing; the definitions you provided clearly do and cannot not apply to computers. Your whole argument is impotent. As for the computers in BarryA's scenario, they are just computers and have no mind, no will, no imagination, just the preset parameters of the programmer. They are not able to be persuaded.
Unlettered and Ordinary
January 27, 2008, 04:52 PM PDT
Greetings! larrynormanfan, I found that there are a number of unknown causes, in the universe and in life. Also, I was responding to your rephrasing. The materialist framework is insufficient to explain the universe and the life in it. Science cannot investigate the causes of the fundamental forces of the universe, because they are beyond materialist explanations. I wonder, if science cannot explain the universe, how does it propose to explore more complicated things such as life, the brain, and consciousness? Life itself is a mystery, and defies materialist explanations. How do materialist explanations propose to explain consciousness and the workings of the mind when they are inadequate to explain everything else?
Unlettered and Ordinary
January 27, 2008, 04:30 PM PDT
Greetings! Q, interesting: gravity is matter, matter is gravity.
Unlettered and Ordinary
January 27, 2008, 03:52 PM PDT
Greetings! larrynormanfan, I was just curious. I personally cannot say, but I would say it's ambiguous in the scientific literature. From a layman's perspective, the material universe is defined by what you can see, hear, taste, smell, and feel, but also, by extension, by the effects of unknown causes.
Unlettered and Ordinary
January 27, 2008, 03:50 PM PDT
U&O, You are trying to persuade others to accept a different meaning to a word. No, U&O, I'm trying to persuade others to attend to the process included in the word "persuade", rather than insert the defintion that it implies specific players. For instance, here is Merriam-Webester's use: 1 : to move by argument, entreaty, or expostulation to a belief, position, or course of action 2 : to plead with : urge The computer's in BarryA's scenario could provide arguments to each other, which the other can integrate into its analysis and result in a new course of action - definition 1. The computer's in BarryA's scenario could provide input to the other computer's analysis, urging it to a new conclusion - definition 2. The properties of the players in the process of persuasion aren't an aspect of persuasion, I'm insisting - unless one wants to craft a tautologically trivial argument. U&O The actions of persuasion are clear. Then we agree. The results of one player's analysis are input into the analysis of another, and it becomes possible that this input causes the other to arrive at yet a new conclusion. U&O Can a computer perform the process of persuasion? Now or ever, as described in BarryA's scenario? The obvious answer is that this is just an engineering plus semantic problem. U&O How do you describe gravity? Is it energy or matter? Well, not answering for larrynormanfan, but that is not so trivial of a query. Mathematically, it can be shown that gravity is the side-effect of that matter distorting space, or just as equally, that matter is the net side-effect of gravitational space distortions. Gravity and matter become interchangable concepts once one gets beyond the veneer of old-style physics, just like how matter and energy are interchangeable concepts now.Q
January 27, 2008, 03:00 PM PDT
UaO, There's clearly ambiguity about what is "material" in the ID literature. I didn't invent that ambiguity, but I'll just say that Dr. Dembski puts appeals to natural law within the realm of materialist explanations. Pointing to the law of gravity and to an intelligent designer are, I think, categorically different things.
larrynormanfan
January 27, 2008, 01:50 PM PDT
Greetings! larrynormanfan, I am just curious. How do you describe gravity? Is it energy or matter? Is it material or non-material? What about the weak nuclear force? Or the strong nuclear force? Are they energy or matter? Material or non-material?
Unlettered and Ordinary
January 27, 2008, 12:18 PM PDT
Greetings! Whatever parameters are imposed by BarryA's setup, reality is still our point of reference, and as such, reality must be maintained throughout any discussion. Changing the meaning of a word to suit your own agenda is pointless, and betrays your disposition. You are trying to persuade others to accept a different meaning to a word. It is not the word that is interesting but the meaning. The meaning must be the same even if words change. So, in effect, you are trying to alter the meaning a word was intended to carry. That is just a game. The meaning is clear and the word is clear. The actions of persuasion are clear. Can a computer perform the process of persuasion? The answer is obvious: NO. You are arguing against reality.
Unlettered and Ordinary
January 27, 2008, 11:59 AM PDT
Not to piggyback on this debate, but I found this statement very interesting:
When we see intelligent, creative action coming from computers . . . then we will accept that they have become artificial persons with artificial intelligence. Until that happens, we will reserve the language of persuasion for persons, which is where it belongs.
True enough. I couldn't help rephrasing it a bit:
When we see non-material causes having material effects, then we will accept that science should abandon its working premise of investigating cause-and-effect within a naturalistic framework. Until that happens, we will reserve the language of science for material investigation, which is where it belongs.
Does that seem reasonable? (I don't, by the way, think that "consciousness" is demonstrably non-material.)
larrynormanfan
January 27, 2008, 11:53 AM PDT
U&O Can a pen and paper persuade another pen and paper? ... Not as BarryA was asking, because that line of thinking doesn't match BarryA's setup. Specifically, the first pen and paper can't transfer information back to the other pen and paper. He explicitely stated that his computers are capable of "coming to a conclusion about the truth or falsity of a particular proposition" and that they could "Communicate a list of facts and arguments". None of those things in your list can do that. U&O At best a computer is just a data base. Not according to BarryA's setup for this discussion. The compuers are also capable of analysis. U&O Where is the persuasion? It is just data transfer and a screening process. If we remove the anthromoporphic issues of persuasion - as were not included in BarryA's original setup - please explain the difference. I would hope that your answer would pass a double-blind experiment, in which the observer determining if "persuasion" occured is unaware if people, computers, or deities are involved. StephenB Can a computer lie or be lied to? That requires some agreement of terms. In some usages, "lie" only means to deliver non-factual information, whether by accident or intent. By other usages, "lie" includes intent, to separate mistake from deception. If BarryA's computers can arrive at a yes or no answer after performing an analysis of the data available, they at least could accidentally lie. Other then that, since BarryA didn't require free-will in the scenario, questions of intent are irrelevent to the question. To be "lied to" refers to the process of delivering the message. If the a person is delivering a message, the question follows along the lines of "if a person is intentionally telling a lie, and is aiming the message at a target that can receive and interpret the message (as in people and BarryA's computers), is the target being lied to"? Without adding new conditions to simply arrive at a "no" answer, it seems like a logical construction that the answer could easily be "yes". The intentional lie was delivered, received, and incorporated into an analysis. Nothing more should be needed to reach an affirmative, unless "lie" is defined to involve only one class of beings, like those that are sentient. Keep in mind that a large element of BarryA's scenario is simply semantics - what is "persuasion". Some argue it is a function of the players - people but not the computers. I'm arguing it is a function of the result, as dictionary definitions suggest anc as computer science would suggest. If that can't be reconciled, there is no need to continue. KF: You are arguing as though Asimov's model is the only model possible. That is a false assumption, and quite honestly, a bad thought experiment. We're not talking about computer sentience. We're talking about the a process that may, or may not, yield a result of persuasion. To reiterate, when you say If . . . we can create an Intelligent Director capable of self-awareness and feeling and judgement, then those conscious AI computers will be capable of being persuaded., you are ignoring the possibility that "if we can create a computer capable of being persuaded, we may not have yet reached the ability to create an Intelligent Director capable of self-awareness and feeling and judgement." But, as you said a while ago, we may be unable to reach an agreement. Is there any additional need to persue this?Q
January 27, 2008, 10:46 AM PDT
Greetings again! One more thing: Book X can end up belonging to library B if the intelligent agent, i.e. the librarian, is convinced Book X does belong to library B, even if Book X does not belong to library B. Libraries are not persuaded, but librarians are. Funny thing, isn't it? Who would have thought...
Unlettered and Ordinary
January 27, 2008, 07:23 AM PDT
Greetings! Good point, StephenB... Persuasion must also be possible against evidence fabricated and put forth as fact. Can a pen and paper persuade another pen and paper? Can a typewriter persuade another typewriter? How about a dictionary, can it persuade another dictionary? Maybe an encyclopedia set, can it persuade another encyclopedia set? Okay, a whole library, surely it can persuade another library? At best a computer is just a database. If all the other databases cannot be persuaded, what makes you believe that a digital database is different? A database is information storage. Library = database. So one library can circulate books with another; is this persuasion? Library book circulation = database information transfer. Books moving from one library to another is not persuasion. Computer A transferring data to Computer B is not persuasion. A library filters its books for correct location and ownership. If book X does not belong to library B, it does not accept it. If data X does not pass Computer B's filters, it does not belong in Computer B's database. Where is the persuasion? It is just data transfer and a screening process.
Unlettered and Ordinary
January 27, 2008, 07:18 AM PDT
Onlookers: For proof that, sadly but plainly, Q is not to be taken seriously – I have just enough time for one comment for now – observe the contrast between:
[Q, 101] If, as you argue, the only possible use of “persuasion” involves people (your argument, not the American Heritage’s) . . . [GEM, 99, quoting myself at 36, 39]: [36] If . . . we can create an Intelligent Director capable of self-awareness and feeling and judgement, then those conscious AI computers will be capable of being persuaded. But let us not fool ourselves that we are even beginning to be nearish to being on the long path to that! [39] 1 –> Persuasion is inherently an interpersonal term, so one may not twist language to suit one’s rhetorical agendas . . . . 2 –> in short to anthropomorphise the programmed input-output action of interacting computers [cf. what is happening at assembly language, registers and microcode, architectural level] is where the obvious word-twisting and question-begging lie. That is, we see here the persuasive use of the corruption of language, the better to lead the naive to assume what should be proved. [Now, insistently so.] 3 –> To pretend that those who insist that persuasion is reserved for known persons [which could include nonhuman persons who show the manifestations of creative intelligent behaviour etc] — and not for machines known to be simply executing algorithms mindlessly — are the ones begging the question is to try by turnabout accusation to attempt to improperly shift the burden of proof. 4 –> When we see intelligent, creative action coming from computers . . . then we will accept that they have become artificial persons with artificial intelligence. Until that happens, we will reserve the language of persuasion for persons, which is where it belongs. Indeed if R Daneel comes along one day, to speak of persuading him would be to acknowledge that he is a person. [In the world of fiction, he already is.]
All, duly emphasised on my part. This is a plain case of setting up a strawman, putting words in his mouth, and pretending that the strawman is the undersigned. For shame! GEM of TKI PS: More later, gotta go get ready for church, my family waits . . .
kairosfocus
January 27, 2008, 05:31 AM PDT
Q: I have a question: Can a computer lie or be lied to?
StephenB
January 27, 2008, 04:34 AM PDT
KF, in 99 Instead, in a computer, transistor states etc are determined by the information content of the software. That is a historical note describing how the states were determined. Once determined, the information in a computer is a state of the computer's hardware. That is not necessarily the same argument regarding information in the brain and its interaction with the mind, as we've been discussing. Or, would you assert some other property about the computer, such as it has somehow acquired a mind, or of the information, such as it is somehow linked to the mind of the originator of the information? If, as you argue, the only possible use of "persuasion" involves people (your argument, not the American Heritage's), then it is obvious that your argument is defined to be correct: computers which are not persons can not do that which only persons can do. KF Synonyms: persuade, induce, prevail, convince These verbs mean to succeed in causing a person to do or consent to something. Wouldn't it be just as useful - and less designed to arrive at a predetermined conclusion - to claim that these verbs mean to succeed in causing an entity to do or to consent to something? If the entity can induce, prevail, convince, etc. - as BarryA's model already suggested - why must that entity be limited to a person? Even FCSI doesn't require that the entity be a person - it simply suggests that an intelligent agent was involved at some point, even a historical point in the past. For example, when you reminded us of your claim If . . . we can create an Intelligent Director capable of self-awareness and feeling and judgement, then those conscious AI computers will be capable of being persuaded you neglected to address the condition that a computer was constructed that is capable of performing equivalent to being persuaded, but was not suficiently designed to be considered as self-aware or conscious. I don't believe you've shown a specific progress of computer science must occur in order to detect the results of persuasion only after computer-based self-awareness occurs. So, would you like to address BarryA's question in the abstract, and expand this discussion to being more than a mere semantic argument? Can his computers yield the same results of persuasion, such as would be tested with a double-blind study? BTW: Your comments about marriage were merely wandering alone in the woods - they had no relevence to computer communication or computer persuasion.Q
January 27, 2008, 01:49 AM PDT
PS: I should note on a point of humour that no less than a Minister of Government here observed to me today in a meeting that I was talking to my laptop as if it were a person . . . ouch! (I thought I got rid of that habit years ago . . . debugging machine code can drive you up the wall.) PPS: This will DV be no. 100. Psalm 100 is well worth the read too.
kairosfocus
January 26, 2008, 11:51 PM PDT
H'mm: In addition to the neologisms teleocentrism and mechanocentrism [see, I lurk, too . . .], we need: objectionism! Mere skepticism is based on doubts and questions. Objectionism is rooted in deep-set prior commitments and agendas, so there will be an endless series of objections to what is not wanted, never mind the plain facts and obvious logic. So, it will always find a way to resort to selective hyperskepticism to deny what cannot be otherwise rejected. And if all else fails, it will reverse an implication. What do I mean? 1] Q's objectionism in action: We can see this last in action easily enough:
P => Q, P so Q. But I don't want to accept Q! So, I reject P. And you are begging the question to dare to impose P! (When in fact it is the rejection of Q that is driven by question-begging.)
These musings come to mind as I look at Q's -- predictably -- objecting remarks in 91:
since you are arguing about the issues of “persuasion” by using a definition of persuasion that is specifically structured around “people”, and by definition, computers aren’t people, then only by definition are you wholly correct when you state that computers can’t be persuaded. Great. A tautological claim. But, it says nothing, except that since it is not a person it can’t do the things that only people can do . . . . Try again, KF, but by replacing “person” with something like “agent”, and not even “intelligent agent” - because the original topic didn’t assert that “intelligence” was a premise. It did assert, however, that the computers have the ability to detect “truth or falsity of a particular proposition” and to “Communicate a list of facts and arguments” provided by external agents - in this case the programmers and the other computer . . .
All this, as if I had not already in point 7 of 90 noted:
as the Am H dict observes on persuasion, repeating from 12 [Magnan] and 39 above (NB as well 36, under III) . . . . per·suade: To induce to undertake a course of action or embrace a point of view by means of argument, reasoning, or entreaty . . . . Synonyms: persuade, induce, prevail, convince These verbs mean to succeed in causing a person to do or consent to something. Persuade means to win someone over, as by reasoning or personal forcefulness . . .
Onlookers, pardon my taking time to show Q's underlying failure to do basic homework before objecting. That is a big part of what I am exposing in highlighting the objectionism at work in this thread. [Professional-grade objectionists -- AKA spin doctors -- assume (sadly, on excellent empirical grounds) that people are too lazy or too incensed at “that IDiot” or too confused to fact-check check a plausible- sounding strawman attack. Too often, they are right.] Now, “person” is the key above; a term not to be confused with “human” -- an error often made by evolutionary materialists and their fellow-travellers -- as Magnan explicitly noted in the referenced 12 above [and Unlettered in 31 is worthy of mention, too!]:
[12] The term clearly implies conscious intent, requiring a conscious agent that intends to persuade and another that consciously considers the argument or information proferred. This is that issue of the ontological difference between the qualia of consciousness and the inner experience of self awareness, and matter. Matter in this case of the computer being the operation of multitudes of logic gates processing data [I add: recall, flip-flops and registers are basically gates configured with digital feedback, giving them memory].
Similarly, this is what I said in 36 [citing Unlettered . . .] and in 39:
[36] If . . . we can create an Intelligent Director capable of self-awareness and feeling and judgement, then those conscious AI computers will be capable of being persuaded. but let us not fool ourselves that we are even beginning to be nearish to being on the long path to that! [39] 1 –> Persuasion is inherently an interpersonal term, so one may not twist language to suit one’s rhetorical agendas . . . . 2 –> in short to anthropomorphise the programmed input-output action of interacting computers [cf. what is happening at assembly language, registers and microcode, architectural level] is where the obvious word-twisting and question-begging lie. That is, we see here the persuasive use of the corruption of language, the better to lead the naive to assume what should be proved. 3 –> To pretend that those who insist that persuasion is reserved for known persons — and not for machines known to be simply executing algorithms mindlessly — are the ones begging the question is to try by turnabout accusation to attempt to improperly shift the burden of proof. 4 –> When we see intelligent, creative action coming from computers . . . then we will accept that they have become artificial persons with artificial intelligence. Until that happens, we will reserve the language of persuasion for persons, which is where it belongs. Indeed if R Daneel comes along one day, to speak of persuading him would be to acknowledge that he is a person. [In the world of fiction, he already is.] 5 –> This of course is the current tactic of choice of the radical ultra-/post- modern relativists [cf my critical assessment here from an intro to phil course], who have tried for instance to redefine marriage on the pretence that all is opinion and politics. (So: is THIS just highlighted claim only opinion and politics too? The self-referential incoherence and agenda games emerge at once. Cf how they try to redefine science, marriage, torture etc etc to suit their current agenda. The epitome of this nonsense is “it all depends on what the definition of “is” is . . .])
Okay. Plain enough? Maybe not . . . 2] (Since KF is so fond to appeal to consensus, I’ll do the same this time.) The observers can obviously witness that in order to arrive at a predetermined conclusion - Computers can’t be persuaded - KF imposes his own limitations on the problem, thus yet again introducing irrelevant premises - semantic ones in this case. Onlookers, just compare the above with what has already been discussed! As to the notion that I am appealing to “consensus” when I call on current and future onlookers to look calmly and objectively at the balance of the case on the merits then make up their own minds, that is its own refutation – as well as a further example of . . . objectionism. And, BTW -- as long since noted -- intelligent agents communicating with one another can plainly include artificially intelligent agents communicating. AND, I explicitly spoke to interacting computers based on my the- hard- way acquired knowledge of what is actually going on as two machines pass bit-strings using handshake protocols and the like. I need to take out a franchise on renting out strawmen to be pummelled by the objecitonists at UD!!!! [If you soak the strawman in slander oil and burn it to cloud and poison the atmosphere, you forfeit your deposit -- h'mm, a nice fat slim Rolex or the like will do . . .] A few footnotes on information, energy and matter . . . 3] 92: Software is just transistor state, Hollerith card punched-hole state, magnetic state, etc. In this instantiation of a model (computer), the software (information) is wholly inseparable from hardware and energy flow Software is not “just” -- question-begging term! -- transistor states etc. Instead, in a computer, transistor states etc are determined by the information content of the software. That software comes, not from the natural regularities of the cosmos rooted in laws of mechanical necessity blindly acting on matter-energy in space-time, nor yet of random walks starting in arbitrary initial points in configuration spaces. No – reliably and as a matter of routine observation, functionally specified, complex software is the product of intelligent agents in action. [This of course Unlettered aptly points out in 93.] In short, we again see the underlying begging of questions in the teeth of plain, reliable facts. Predictably, insisted upon by Q: 4] 95: That the information may have come from an outside source is certain. But, that claim is historical trivia. At a given time, the information within the computer system - information that is both data and program - is a state of the hardware’s mechanics and electronics. Ironically, Psalm 95 has in it a very relevant verse at no 8, and of course vv 9 – 11 are worth the read also. Onlookers, notice the question-begging dismissal -- “But, that claim is historical trivia . . .” -- in the teeth of the underlying issue that, on both direct observation and the statistics of vast config spaces [the root of statistical thermodynamics – BTW, all of Q's objections to thought experiments are plainly because he does not want to acknowledge where this one just linked plainly (and for good reason) points!], neither mechanical necessity nor chance suffice to get to FSCI! In short, we see that this is a case of Q's insisting on rejecting the obvious in the teeth of evident facts and logic, as it leads to where he does not wish to go. So, let us kindly recognise and dissect the agenda-driven objectionism at work when it – predictably, of course -- crops up in future remarks on Q's part. 
[Let's guess: I am being closed-minded, I am begging the question, I have smuggled in objectionable assumptions, thought experiments don't prove anything, information is nothing but hardware in action, just because random walks are maximally improbable on accessing FSCI does not mean that they can't get there by lucky noise, and maybe the universe as a whole is quasi-infinite, etc etc etc . . .] GEM of TKI
kairosfocus
January 26, 2008, 11:40 PM PDT
U&O Brain = computer mind = software Metaphorically, sure. But, quite literally, the computers we have now are silicon based. Brains are carbon based. They aren't the same. You can't push the metaphor too far, or it breaks, like you've done. Just because mind and brain operate a certain way, doesn't mean you can extrapolate back and say that computers have the same properties. In fact, you can't even extrapolate your usage of "intelligence" as with intelligent agents into the operation of a computer - largely because the operation of a computer is specifically engineered in some laboratory, and is wholly understood to be electro-mechanical. (This is not an entre to discuss the recursive nature of intelligent agents creating other agents. The topic of this thread is about the properties of the computers, and not of their creators.) Any information introduced into that machine is stored and operated on as a state of the machine by design of the machine, and not simply because of your alleged properties of "information." U&O The computer lacks “INTELLIGENCE” even if it cantains information placed there by an intelligence. That is a an unnecessarily limiting interpration of the scenario. Computers can gather information from non-intelligent sources as well, like rain guages, reflections of light, pressure in an intake manifold, etc. Some information in a computer can be "placed there" by non-intelligent sources. That information is a state of the machine. U&O Persuasion does not exist in the computer world because no intelligence exists. Again, a definitional argument, just like KF was doing. Remove the requirement that intelligence or other people-based properties be included in "persuasion", as was done with BarryA's original scenario, and you will have a different conclusion. The only logical conclusion is that computers should be able to "persuade" other computers once fluff-and-nonsense definitional additions are removed. U&O Intellegence and information are not physical. OK, let's agree that computers don't have intelligence. But, they do have information. Those two concepts can be decoupled. Computers are wholly physical - no engineer has yet given them the property of "mind", so no duality is necessary when discussing computers. Thus, information in computers can be wholly physical. U&O Information is contained in a state of matter or energy and yes can be interpreted as a state of matter or energy. Then why any other fancy dancing to imbue some other properties to computer-based information?Q
January 26, 2008, 06:40 PM PDT
Greetings! Matter and energy are states of information not the other way around. Your just playing games. With in the computer and outside of it are the same. Take any rock, it is static with no complex information, compare it to a sculpture. The information was not intrinsic to the rock, but the information was imposed onto the rock to become a sculpture. Okay morse code in a sound wave or light beam or even an electric current. The code is imposed onto the sound wave, light beam or current. The "fact" the information crosses between mediums from sound to electric signal for example, means the information is not a subject of the medium, but the medium is subject to the information. Back the the computers, all of the information found within a computer from its hardware to it software are subjects to information. Not the other way around. Of course matter and energy contain information. Sound is for example contains a certain frequency of particle movement, but unless that frequency is itself subject to specific and controlled fluctuations that sound wave is just pointless or not useful. So when a person picks up a guitar and plays the strings in a specific combination with a specific pace it becomes meaningful. The information passes for the person to the guitar strings into the sound waves to my ear for me to judge in terms of pleasure. The information passes through matter and energy changing them temporarily. But at no time is the information a subject of the matter or energy. But the exact opposite the matter and energy are subjects to the information. The computer is the exact same thing. The information passes to the Hardware and is contained as software, 010101010101111100010, a pattern. Energy is directed and controlled by information constraints. But at no time is the the information a subject of matter or energy. Again the opposite. Software is the manifestation of information, in the hardware construct. Information can and is put into matter and energy, and can and is retrieved from matter and energy. The whole computer is the manifestation of information, both hard ware and software. And ,yes, absolutely, they are connected by information. The energy contains the information that I put into it and it travels through all the hardware through all the wires and networks to the blogs server, then through all the wires then making its way to your computer. Through software to hardware back to software back to hardware back to software on your end. The information move through all these things because it is not subject to them. Information is contained in a state of matter or energy and yes can be interpreted as a state of matter or energy. But the information was still put into the matter and energy causing it to change states. For example the hard drive, a signal is sent to it the change a 1 to a 0 or from a 0 to a 1. But the information stored in the state of matter in the form of a binary code. Information causes and/or changes the states of matter and/or energy. If it is static or in transition does not matter, the fact remains. It applies to the computer the same no exceptions. If I write a letter with a pen or pencil on paper, "I" confugure the molecules in a specific arrangement. The Information written on that paper is passed from the paper to the eye/brain of the reader via photon. The information is imposed, and transfered. The information is place in a static form on the paper but is in a transitional state in the photons moving to the eye/brain. 
Everything applies to a computer no exceptions. As for the discourse the mechanics of the computer and it's capabilities are set in comparison tto the mind/brain of humans. To it is precisely about the mind/brain duality. Materialists impose the idea that the mind does not exist but is the mere mechanical processes of the brain. Thought is the the output Chemical collisions. Everytime someone makes an analogy of something it is to convey and idea about that thing. Brain = computer mind = software The fact is the brain is not a computer and thoughts are not the result of chemical collisions. The mind is not software. They are not comparable. The mind interacts with the brain and changes it. And the brain interacts with the mind and changes it. It is information, and intelligence. The mind is not the result of physics and chemistry. Just a computer is not the result of physics. Yes, physic and chemistry are involved and are not excluded from the equation of the mind but they are not the only factors they are merely the physical factors. Intellegence and information are not physical. Yes, physics is involved in the computer processes but just the physical factor. Information is not physical, but does interact and change matter. The computer lacks "INTELLIGENCE" even if it cantains information placed there by an intelligence. The mind/brain of a person controlles itself, accepts and rejects information, it evolves, and grow more complex or even simplifies. The software/hardware of a computer is static. Persuasion does not exist in the computer world because no intelligence exists. That is the point. Persuasion takes place between intelligent agents, not between information mediums. Transfer takes place between information mediums ie. computers.Unlettered and Ordinary
January 26, 2008, 04:19 PM PDT
Oops. Above, the statement "In software - no, software exists in hardware - CD's, wire, transistors, hardcopy, etc." should be "In software - no, information exists in hardware - CD's, wire, transistors, hardcopy, etc."
Q
January 26, 2008, 02:28 PM PDT
U&O Information is not a state of matter or energy, but information can and is put into energy and matter. That's oxymoronic. Being put into the energy and matter, it is a state of the energy and matter. Quite simple. That the information may have come from an outside source is certain. But, that claim is historical trivia. At a given time, the information within the computer system - information that is both data and program - is a state of the hardware's mechanics and electronics. Discussing where the information came from is a different question than discussing what are the properties of the information at a given time. Which do you want to address? For this particular situation - interfaced computers - it should be irrelevent what their history is, as the question from BarryA is about what can they do now, with the various assumptions about their capabilities. This is not meant to make a statement about the brain/mind duality. It is merely a statement about the computers being used for the analogy - perhaps they aren't the proper tool to analogize mind, or mind isn't a relevant issue when discussing computers. U&O Information exists separtate and is inposed onto matter and energy to put it into a state. OK, staying strictly within computers and a consistent definition, prove it. Where is computer information in existance without hardware? In transmission - no, that's electrons in a wire. In WiFi broadcast - no that's information out of an antennae, is not accessible for computation while solely in an electromagnetic wave - it must enter an antennae again to be information. In software - no, software exists in hardware - CD's, wire, transistors, hardcopy, etc. Any ideas that I may be missing - while staying with the model of a computer? U&O Software is not really a state of the hardware. Both the hardware and the software are subjects of information. Your inference is flawed. They are connected but they are both subjects of information. No, you are playing word games that are meaningless. Software is but one form of information. It isn't "connected" to the hardware by information. Software is an instantiation of information in computer hardware Software Information. Interchangable concepts. Just think about the mechanics of the computer at any given time, and don't focus on the metaphorical abstractions. U&O Information is not a state of matter and/or energy. It is in the computer. Otherwise, can you show an exception?Q
January 26, 2008, 01:01 PM PDT
Greetings again! Information causes and/or changes the states of matter and/or energy.
Unlettered and Ordinary
January 26, 2008, 10:37 AM PDT
Greetings! The thing is that the "Information" is "imposed" onto the matter and energy. It is not generated by the matter and energy. Matter and energy are subjected to the information. The matter is subjected to a highly specific configuration (Mainframe hardware) and the energy is also subjected to highly specific configurations (Mainframe software.) The information just does not exist until it is put there. The matter and energy exists, but without specific information the matter and energy do nothing. The information is non-material, for where does it come from? For the computer example, Humans: people: persons: person: computer designer and programmer, builds the computer from otherwise useless matter and energy by putting the information into the matter and energy. So in a very limited way, yes, "software is a state of the hardware." But only because that "state" was imposed by information confuguration. Information is not a state of matter or energy, but information can and is put into energy and matter. Information exists separtate and is inposed onto matter and energy to put it into a state. Information can be transfered from one place to another it is not fixed. As in quantum teleportation "information" can be imposed onto one photon then transfered to another photon. Meaning the "states of matter and energy" are subject to information transfer. Again where does this "information" originate? Matter and energy are not the originators of information, but are the subjects of information. Software is not really a state of the hardware. Both the hardware and the software are subjects of information. Your inference is flawed. They are connected but they are both subjects of information. It is not a case of just add energy to hardware and poof "Windows." The software is built within the boundaries of the hardware in VR. Software on its own must be subjected to information. Both matter and energy are states of information. Information is not a state of matter and/or energy.Unlettered and Ordinary
January 26, 2008, 10:27 AM PDT
U&O, in 89 "So everything is contained in an ORGANIZED mainframe (Hardware.) But the software is information (non-material.) But the whole thing needs energy" Umm, with computers, the software is a state of the hardware. Ever seen software that can execute without the computer? No. It doesn't exist. Software is just transistor state, Hollerith card punched-hole state, magnetic state, etc. In this instantiation of a model (computer), the software (information) is wholly inseparable from hardware and energy flow.Q
January 26, 2008, 08:05 AM PDT
KF, since you are arguing about the issues of "persuasion" by using a definition of persuasion that is specifically structured around "people", and by definition, computers aren't people, then only by definition are you wholly correct when you state that computers can't be persuaded. Great. A tautological claim. But, it says nothing, except that since it is not a person it can't do the things that only people can do. That definitional approach says nothing about the greater question involved in BarryA's original post, or your DS model, about whether computers can communicate with each other in a way that results in persuasion. For that analysis, it would be necessary to remove your "person" constraint on the definition. Otherwise, you would be forcing BarryA's query to be illogical from the get-go. Try turning this into a double-blind study, for instance. It wouldn't be needed to even know that computers or people were involved to determine if persuasion is happening. (Since KF is so fond to appeal to consensus, I'll do the same this time.) The observers can obviously witness that in order to arrive at a predetermined conclusion - Computers can't be persuaded - KF imposes his own limitations on the problem, thus yet again introducing irrelevant premises - semantic ones in this case. In essence, he is arguing unrelated to BarryA's model of two computers communicating. So, of course KF's argument is right about persuasion - because he is arguing off in a corner unrelated to the topic at hand - i.e. presenting a strawman argument about interfaced computers not being people. Try again, KF, but by replacing "person" with something like "agent", and not even "intelligent agent" - because the original topic didn't assert that "intelligence" was a premise. It did assert, however, that the computers have the ability to detect "truth or falsity of a particular proposition" and to "Communicate a list of facts and arguments" provided by external agents - in this case the programmers and the other computer. Go back to reply 1, and you will see the corrective argument that once that the semantic "person" constraint is removed, it is logical to conclude that computers could persuade each other. That is they could once they have their initial state, and after they are each able to provide feedback to the other computer, with the assumption that they each have different starting states or access to data. With the removal of the artifically imposed constraint of "person" (again, necessary for the original premise to be more than a semantic query), you will see that BarryA's query is essentially a query into feedback processes, with the understanding that feedback processes have dampening effects. It is these dampening effects that yield a "yes", "no", or "maybe" to BarryA's question. "Yes" if the iterative feedback process converges toward a persuasive process. "No", if the process doesn't converge - like it oscillates or diverges. "Maybe" if the process varies on different states or different computers. Additionally, I also tend to agree that the password query isn't persuasion, based upon the feedback process involved about the password. The agent providing a test password is giving input to the second agent. That second agent analyes the truth about the password and sends its conclusion back to the first agent. But, that is the end of this feedback process about this password - it is always a negative feedback except for the case when the password is true. 
There is no method for the first computer to provide additional feedback about the claim of the password which would cause the second computer to re-evaluate its conclusion. That is, in the password example, the first computer can't reply along the lines of "but I really know this is the password, and here's why, so you really should accept it." (Unless the logic of a password were to be changed, such as if the first computer had input to a back door process like "But check yesterday's password list - I think today's is corrupted.")
Q
January 26, 2008, 07:59 AM PDT
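Q's picture in the comment above of persuasion as an iterative feedback process that converges, oscillates, or diverges can be made concrete with a small sketch (my own illustration in Python; the numeric "degree of assent" and the damping update rule are invented assumptions, not anything Q specified). Whether one wants to call the converging case "persuasion" is, of course, exactly the semantic question being argued in this thread.

# Hypothetical model of the feedback process described above: each "computer"
# holds a numeric degree of assent and, each round, nudges it toward the
# other's value. Whether the exchange settles or blows up is set entirely by
# the damping factor, which is the convergence question Q raises.

def exchange(a, b, damping=0.5, rounds=20):
    """Iteratively feed each side's conclusion back into the other's analysis."""
    for _ in range(rounds):
        a, b = a + damping * (b - a), b + damping * (a - b)
    return a, b

print(exchange(0.9, 0.1, damping=0.5))   # converges: both sides end at the same value
print(exchange(0.9, 0.1, damping=1.5))   # over-corrects: the gap between them grows each round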
Okay . . . First, pardon a slip-up on a blockquote. While I endorse the sense, it was BarryA who actually said that “If the brain is nothing more than an organic computing machine, why do materialists bother to try to persuade us of anything?” Also, I note that Unlettered has put in a very solid comment, one that the more “learned” and “lettered” among us would do well to pay attention to. Excerpting:
one argument is that the brain is nothing more than a sophisticated computer, fancy if you will. So everything is contained in an ORGANIZED mainframe (Hardware.) But the software is information (non-material.) But the whole thing needs energy. Here is an observation, neither matter nor energy on there own generate information. Although matter and energy already contain information. It is not a matter of adding energy and “abracadabra” complex information . . . . Do as many experiments as you desire the result is the same. Unless information is added to the matter and energy very little will occur . . . . Information is added to the matter and energy to organize them into useful patterns. Where does this information originate? The only known source that generates information is “intelligence.” And since “INFORMATION” is non-material it follows that the origin of the “INFORMATION” is non-material.
[There, got my blockquoting right this time!] Now on points of note: 1] Q, 86: that comment says nothing about the process, and limitations, of though experiments. Such as that persistent gap we both recognize, which shows that predictions about the observable world are not conclusions about the observable world - experiments are needed to close the gap. Q has manged to now convince me that his thought experiment side-issue is little more than a distracting strawman reached by a red herring based on ignoring relevant cited and easily accessible evidence. For, long since, I have repeatedly linked and even excerpted [e.g. cf 64 above] on the general capacities and limitations of scientific experiment, explanation and argument in this thread; and, in the parallel Deep Blue and Epistemology threads. Indeed, my last two just above comments have taken time to go into Lakatos' thoughts on research programmes and scientific progress. Frankly, FYI, Q, such persistent adverse misrepresentation of another person in the teeth of easily accessible facts to the contrary is irresponsible, disrespectful -- and dishonest. Onlookers, next time you see a Q comment, kindly bear this track record in mind. FYFI, experiments cannot prove anything finally, though they may cumulatively persuade us that certain explanations are reliable. As, “Theory => Observations, Observations, so theory” strictly speaking commits the fallacy of affirming the consequent. [In fact, this insight was, correctly, part of Urban VIII's objections to Galileo's over-reaching on what his experiments – including of course thought exercises -- “proved.” In short, this is not exactly breaking news.] 2] the DS model you describe, with your introduced Intelligent Director, is merely a model. On its own, it says nothing about the observable world. It only suggests what we should expect - but not what we really will see. That is a limitation I’m insisting that you not ignore in your treatises. And so, why then did I take time to point out how the architecture corresponds to all sorts of empirical situations from robotics [DS was explicitly speaking about IMPLEMENTED robotic archis and related analytical developments on efferent etc systems -- did you read what he wrote] to athletic peak performance through visualisation, to education psychology to the process of governing and guiding the ship in Ac 27? Were these just so much irrelevant padding, or were they not concrete cases in point that speak to the empirical credibility of the DS model, Q? And, are adaptive controllers -- of which this model is a technologically advanced subset based on proven technologies and well-known capacity of software and neural networks [in that extension] -- mere speculation? [Onlookers: look up Model Identifying Adaptive Controllers and Model Reference Adaptive Controllers.] In short, in the teeth of such easily accessible facts, to use the word “ignore” as you just did reflects one or more of several strong words that describe less than diligent or virtuous or honest conduct. [Grace have mercy on us sinners. Thanks, Kairos - again, for reminding us of Matt 5 - 7.] 3] Pay close attention to the assumptions you made about the model to arrive at the suggestion that mind is not material I made no tendentious assumptions; I put up a model which is empirically credible, then presented live option technical and worldview level alternatives, then discussed them on comparative difficulties across relevant evidence. 
I then drew an inference to what is in my considered opinion [e.g. Compare the absurdity of the behaviourists using their minds to argue that mental states do not exist!] the better explanation, on principles of abduction, which is well within my intellectual rights. 4] your argument wasn’t even internally consistent. It was just a slapdash of [rhetoric] to arrive at a pre-determined conclusion. Dismissive personal attack, embedding unsupported accusations of a claimed closed mind and circular reasoning on my part. Now, sadly, part of a clear pattern. But, onlookers, to explicitly lay out worldview alternatives and associated core first plausibles then discuss on comparative difficulties is precisely to not argue in a circle, cf here for intro level details; as I have repeatedly linked. As to the claim of self-contradiction, onlookers, if Q had an actual good case in point he would have long since paraded it in triumph. So, let him put up a case in point and let's see just who is more or less coherent. Onlookers, compare what I did in say above, with how Q has argued in the thread above. Then ask yourselves which of these two commenters is better described by the excerpt I just scooped out above? 5] serious engaging requires the nit-picking, so that trivial errors aren’t allowed to propogate - as you seem to be insisting that you be allowed to do Kindly identify such errors and substantiate. So far, onlookers, as shown above, I see baseless dismissive assertions and worse, not serious correctives. 6] Re JT, 87: I’m trying to recall the exact specified complexity argument, i.e. the detachable pattern that exists in a bacterial flagellum and why there aren’t enough particles in the universe to generate it. On the general point of what specified complexity – and more to the point, functionally specified complex information means, cf my always linked. Section A deals with what is being claimed and what it means, and Sections B and C address the molecular nanotech and body-plan issues. Appendix 1 discusses the search space issue, by scaling down Hoyle's tornado to semi-molecular scale subject to brownian motion and diffusion. Cf as well Thaxton et al's classic discussion in TMLO, linked from that appendix. The table of hot contents at the top of the always linked will guide. Onlookers, do me the favour of saving off the page to your own desktop before rummaging around in it – saves on bandwidth. The basic point is the one that underlies statistical thermodynamics. Namely, once we get into systems of sufficient contingency [over 10^150 to 10^ 300 cells in the config space], a random-walk based search starting from an arbitrary location is maximally unlikely – as opposed to logical-physical impossibility – to access the shores of islands of functionality on the gamut of the observed cosmos. I use 10^300 cells as the practical limit as this takes in islands of functionality. To climb Dawkins' Mt Improbability, you have to first reach to the shores of functionality without exhausting probabilistic resources. As to the wider flagellum debate, there is much on the web. Try a search of UD for starters. 7] Since a computer is deterministic why do I even bother to enter the password. Does the point really have to be elaborated further? No, but not as JT imagines. For, a password is a deterministic if-then feature of a program: admit the user if and only if s/he keys in the correct password. 
By sharpest contrast, as the Am H dict observes on persuasion, repeating from 12 [Magnan] and 39 above (NB as well 36, under III):
Am H Dict: per·suade: To induce to undertake a course of action or embrace a point of view by means of argument, reasoning, or entreaty: “to make children fit to live in a society by persuading them to learn and accept its codes” Alan W. Watts. Synonyms: persuade, induce, prevail, convince These verbs mean to succeed in causing a person to do or consent to something. Persuade means to win someone over, as by reasoning or personal forcefulness: Nothing could persuade her to change her mind. To induce is to lead, as to a course of action, by means of influence or persuasion: “Pray what could induce him to commit so rash an action?” Oliver Goldsmith. One prevails on somebody who resists: “He had prevailed upon the king to spare them” Daniel Defoe. To convince is to persuade by the use of argument or evidence: The sales clerk convinced me that the car was worth the price.
In short, we need to use language correctly, reasonably and accurately, or we will simply deceive ourselves through relativistic spin-games that reduce all to the notorious power-games of follytricks. [One Caribbean street term for a word we all know should in the interests of full disclosure be spelled that way. The other, more common Caribbean rendering is “Polytricks” -- which emphasises the multiplicity of outright deceits and subtler devices to trap the unwary.] Unlettered's remarks just above are also well worth the read. GEM of TKIkairosfocus
January 26, 2008, 01:25 AM PDT
Greetings! Layman here, my brain is sizzzzl'n... Okay, from what I gather, the one arguement is that the brain is nothing more than a sophisticated computer, fancy if you will. So everything is contained in an ORGANIZED mainframe (Hardware.) But the software is information (non-material.) But the whole thing needs energy. Here is an observation, neither matter nor energy on there own generate information. Although matter and energy already contain information. It is not a matter of adding energy and "abracadabra" complex information. The net result of matter + energy could result in organized matter (snowflakes, crystals.) But this is of course insufficient to generate complex organized matter (DNA, RNA, ATP, PROTEINS in complex cycles and relationships) And we are not even getting into neural networks. Do as many experiments as you desire the result is the same. Unless information is added to the matter and energy very little will occur. Information is Non-material. The fact that matter had a beginning, should beg the question where did matter get it's information to become matter from energy. (Assuming Energy always existed) For complex information (non-material), the matter "MUST" be organized, and the energy flow "MUST" be directed and controlled. Three things I understand are required for the software of a computer: (Hardware) Organized matter, and (software) Energy, and Information. Matter and Energy just do not generate new complex information. Information is added to the matter and energy to organize them into useful patterns. Where does this information originate? The only known source that generates information is "intelligence." And since "INFORMATION" is non-material it follows that the origin of the "INFORMATION" is non-material. The funny thing is computers cannot generate new and useful information on there own. There is a preset bounary for generating new and useful information. Information can only be extracted from a pre-existing database in the computer's hard drive. JT "Since a computer is deterministic why do I even bother to enter the password." Everything in your last post has nothing to do with persuasion. The computer does not devise a new tactic or generate a new program to test if you can gain access to its system or not. It is a limited program, with limited variables. A person has selection to a limitless set of variables including imaginary variables and self-generating variables. Persuasion is an attempt to override or satisfy those variables with evidence and an arguement (It does not have to be a logical arguement just convincing.) Like overcoming a phobia, the fear may be rational or irrational. Or trying to "persuade" an insane person to take medication. The rules are not defined by the one persuading, but the one being persuaded, and the criteria for judging the arguement and evidence is not fixed, not static. Meaning the rules can change. A computer is fixed as to what it can do. User satisfies variables access granted. (fixed rules) Human mind can make up the rules as it goes through experiences. It can lie or generate fiction or redefine the rule, and/or variables. One trying to persuade another must be as adaptable as the one generating and redefining the rules, and/or variables. Do you know of any computer program capable of doing any of this? The art of persuasion is all about trust, adaptability, and change. Being able to react favorably under pressure and in unexpected circumstances. 
There is a difference between unknown variables and rules, and changing variables and rules. A person may become convinced then later change their mind for any reason to become unconvinced. Can a security program after granting access to its system suddenly change its assessment of the user based on an inner conflict of its programming? Does it recalibrate for changes in behavior? The "fact" is that persuasion is an on going process and a balancing act of several unknown changing variables and rules. Trust gained can be trust lost and visa versa. Persuasion is not a simple act of access granted you passed the test. It's okay you made it this far we'll see. Ongoing tests and checks. Also, Persuasion requires a relationship not just a connection to pass information. A relationship is a two-way dialog in terms of persuasion. Meaning one or the other may be persuaded. (like should we have Chinese or Italian, what do you feel like?) Means there is the possibility of other options not just the ones presented. (Solution Mexican neither Chinese nor Italian) Agreement over a third option not present in the original offer. Here are a few more facors to consider for persuasion; Flexibility of choices and options, negotiation and agreement. Persuasion is a negotiation. (give and take, push and pull) Think hostage situations, and conflicting goals and agendas of the different parties. No computer simulates these dynamics.Unlettered and Ordinary
January 25, 2008, 06:32 PM PDT
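The password example batted around in the last few comments is easy to make concrete (my own sketch; the stored password string is obviously invented): access is a fixed if-then on the input, with no negotiation, no re-evaluation, and no way for the requester to argue its case, which is the sense in which the commenters above distinguish it from persuasion.

# Hypothetical sketch of the password point discussed above: the check is a
# deterministic if-then. The requester can only submit strings; it cannot give
# reasons, and the checker never revises the rule it applies.

STORED_PASSWORD = "correct horse battery staple"   # invented for illustration

def grant_access(attempt):
    """Admit the user if and only if the attempt matches the stored password."""
    return attempt == STORED_PASSWORD

print(grant_access("let me in, I have good arguments"))   # False, regardless of the "arguments"
print(grant_access("correct horse battery staple"))       # True, purely by string match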