In this post the UD News Desk quotes from Nancy Pearcey’s new book concerning evolutionary epistemology:
An example of self-referential absurdity is a theory called evolutionary epistemology, a naturalistic approach that applies evolution to the process of knowing. The theory proposes that the human mind is a product of natural selection. The implication is that the ideas in our minds were selected for their survival value, not for their truth-value.
Piotr thinks he has a cogent response to this:
Does she believe “the ideas in our minds” are innate, or what? At best, it could be argued that the human mind has been shaped by natural selection in such a way that it can produce ideas which help us to survive and have offspring. As far as I can see, thought processes which allow us to understand the world and make correct predictions (and so are empirically “true”) are generally good for survival.
Sorry Piotr. Truth (i.e., saying of that which is that it is and of that which is not that it is not) has no necessary connection to survival. This has been illustrated many times along the following lines:
Assume you have two cavemen, Bob and Fred. Consistent with truth, Bob believes saber-toothed tigers are fearsome monsters that want to eat us. When Bob sees a saber-toothed tiger he runs and hides.
Contrary to truth, Fred believes saber-toothed tigers are warm and fuzzy and only want to be our friends. It just so happens that Fred also believes (again, contrary to truth) that “hide and seek with people” is saber-toothed tigers’ favorite game. Therefore, whenever he sees a saber-toothed tiger he also runs and hides.
Assume for the sake of argument that Fred’s running and hiding as part of the game he thinks he is playing is just as effective at eluding saber-toothed tigers as Bob’s running and hiding out of stark raving fear.
Here’s the kicker: Natural selection is blind to the difference between Fred’s belief and Bob’s belief. Natural selection “selects” for traits that result in differential survival rates. If Fred and Bob survive at the same rate, natural selection cares not that Fred is a loon.
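The kicker can be put in the form of a toy simulation (a minimal sketch with made-up survival numbers, not a claim about real selection pressures): because selection acts on behavior, two beliefs that produce identical behavior are indistinguishable to it.

```python
import random

def behavior(belief):
    """Map a belief about tigers to an action; both beliefs yield the same action."""
    if belief == "tigers are fearsome monsters":        # Bob: true belief
        return "run and hide"
    if belief == "tigers are playing hide and seek":    # Fred: false belief
        return "run and hide"
    return "approach"

def survives(belief, rng):
    # Survival depends only on the action taken, never on the belief behind it.
    p = 0.95 if behavior(belief) == "run and hide" else 0.05
    return rng.random() < p

rng = random.Random(0)
trials = 10_000
bob = sum(survives("tigers are fearsome monsters", rng) for _ in range(trials))
fred = sum(survives("tigers are playing hide and seek", rng) for _ in range(trials))

# Identical behavior means the two survival rates differ only by sampling noise.
print(bob / trials, fred / trials)
```

Since the fitness function in this sketch sees only the action, no amount of selection on this model can favor Bob's true belief over Fred's false one.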
108 Replies to “Fred, Bob and Saber-Toothed Tigers”
But the capacity to form true beliefs is likely to have a statistical connection to survival, and Fred’s defective capacity for explanatory belief formation, over the long run, is likely to result in his being promptly selected out.
For example, his defective explanatory belief may generalize to the notion that ST Tigers like games generally, prompting him to sit down for a round of patty-cake.
Goodbye reproductive success.
Also, an ID-based epistemology fails the same “self-referential absurdity” test that Pearcey applies to evolutionary epistemology.
Under an ID-based epistemology, our cognitive capacities are given to us by the designer. Are they reliable? We simply don’t know. It depends on the designer.
Can we verify that they are reliable? Not without relying on reason — and assuming that our reason is reliable would be begging the question.
P.S. It’s interesting that like Learned Hand, I now appear to be unbanned. I wonder how long this will last.
You missed the point Bill.
KS, back ways around on an attempted turnabout. We know for good reason that we can know, can often reason correctly, can ground reasoning and knowledge. So, what is a good basis for that? Blind mechanisms that give not a hoot about truth or abstract capacities are simply not credible. Design by a patently competent mind would be. Your rhetoric above sounds uncommonly like what Jefferson complained of, learning how to make the worse seem the better case through clever word tricks. KF
keith, the point of the OP is that natural selection selects for traits that confer survival value and not necessarily for truth. Do you have anything to add to that discussion or are you satisfied with your tu quoque response?
My point is that if you are willing to reject evolutionary epistemology on the basis of Pearcey’s argument, then consistency requires you to reject ID-based epistemology as well. Since you are presumably unwilling to do that, then evolutionary epistemology is back in the game.
You’ll need a better argument than Pearcey’s (and Plantinga’s, and Lewis’s, and Reppert’s).
There are far more 'untrue beliefs' that keep someone away from saber-toothed tigers than there are 'true beliefs'. "Saber-toothed tigers are dressed-up warriors", "saber-toothed tigers are angry spirits of our forefathers" and so forth all work just as well as the single truth.
IOW, not only is truth impossible for natural selection to select for; true and false are not even a 50-50 chance.
Sure, but our ancestors faced many environmental pressures other than saber-toothed tigers. Cognitive systems that produce untrue beliefs about saber-toothed tigers will tend to produce untrue beliefs about other things as well — and some of those other untrue beliefs will be detrimental to survival and reproduction.
Your conclusion does not follow from your premise. I will let you figure out why.
Also, for the reasons set forth by KF, your argument is unsound as well as invalid. Think harder before you post.
You assume that there is some logically coherent underlying system at work. However under materialism we can only assume a chemically coherent underlying system for our beliefs. IOW the connections between our beliefs may be coherent chemically but we cannot expect them to be coherent logically. Logic has no power in a purely material world.
I’ve shown that Pearcey’s argument – if it were correct – would undermine an ID-based epistemology just as surely as it would undermine an evolutionary epistemology.
If you have a viable counterargument, let’s hear it.
We also know that there are heritable differences in these capabilities, and that heritable variation in these skills is likely to have resulted in differential reproductive success across human evolutionary history.
Unless you want to argue that there is no heritable component to these aspects of intelligence, that intelligence has no bearing on survival/reproductive success, or that knowing, reasoning and acquiring knowledge have no bearing on the acquisition of true beliefs, it follows that evolution has bearing on the formation of true beliefs.
Not at all. I was talking about cognitive systems that produce untrue beliefs, and logical coherence is hardly a requirement for such systems!
My point is that such systems will be selected against if the untrue beliefs they produce are detrimental.
You describe an entirely artificial situation and try to argue by generalising from it. One can easily cite any number of realistic situations in which a correct assessment of facts may help you to survive. The ability to realise and remember that big carnivores are dangerous and you’d better keep out of their way is always advantageous. A totally mistaken interpretation of their habits may or may not cause you to do the right thing by pure luck, but let’s face it — most of the time you won’t be so lucky. It’s the Freds of this world who earn the Darwin Awards “by self-selecting themselves out of the gene pool”.
So you are suggesting that an entire system is selected for instead of individual beliefs? An entire system consisting of untrue beliefs that works every time is preferred by natural selection over an entire system which consists of true beliefs but one (fatal) untrue belief.
Fred's belief that saber-toothed tigers are just warm and fuzzy and want to play still makes him far more vulnerable than Bob. As those of us who have pets know, there are times when a cat, for example, wants to play and other times when it seems less enthusiastic and has to be enticed into playing.
We can envisage a time when Fred becomes concerned that the saber-toothed tiger is not chasing him as much as it used to, so he goes up to it to find out what is wrong and/or to try to tease it into chasing him by offering a more tempting target. At that point Fred's life expectancy shrinks rather dramatically.
Natural selection doesn’t guarantee true beliefs but true beliefs about the world still give you a better chance of survival than false beliefs. Quine’s adage still hits the nail right on the head:
Neither beliefs nor systems of beliefs can be selected for (or against), because beliefs are part of cultural, not biological evolution, and are not transmitted via DNA. It’s only the general ability to make valid inferences and accurate predictions that is advantageous to sufficiently intelligent beings, not whatever use individuals may make of it.
Well, what’s actually selected for is behavior, and behavior is a function both of beliefs and of the cognitive system that produces and acts on those beliefs.
My point is that you can’t focus on a single belief in isolation. You have to consider all of the beliefs and capacities that are relevant for survival and reproduction.
If Fred avoids saber-toothed tigers but flings himself into the path of stampeding woolly mammoths, thinking that their footfalls will feel good on his aching back, then he is not long for this world, despite possessing at least one beneficial belief about tigers.
Liars and Truth Tellers. I think Evo is guided to preserve the Truth Tellers. But that’s just me. Not sure what George would think… https://m.youtube.com/watch?v=vn_PSJsl0LQ
The entire argument is wrong. Nature doesn't select individuals, only populations.
I like the nuance that Dr. Pearcey draws out. It is not only that, under materialistic premises, our perceptions may be false; it is also that, under materialistic premises, free will, consciousness and even our sense of self are illusions!
Thus, the problem is much worse than the problem that we might believe false things about a sabre tooth tiger and choose to do the right thing for the wrong reasons. The problem is that our material brains falsely believe that they exist as real persons in the first place, and that our brains, as illusory persons, also falsely believe that they somehow have a free choice whether to tell the material body to run away from the tiger or not!
Moreover, as if all of the preceding were not already the very definition of absurdity, under materialistic premises the tiger's brain is also under the illusion that it really exists as a tiger, and its brain is also under the illusion that it has a choice as to whether it wants to eat us or whether it wants to take a nap.
Moreover, while all this is a very compelling philosophical proof that the naturalistic/materialistic position is patently absurd, due to advances in science we don't have to rely solely on this compelling philosophical proof. In other words, we can underscore our compelling philosophical argument with rigorous empirical evidence.
For instance, to underscore the fact that we have free will, we can refer to the quantum experiment of ‘Delayed choice for entanglement swapping’:
You can see a more complete explanation of the startling results of the experiment at the 9:11 minute mark of the following video:
In other words, if my conscious choices really are merely the result of whatever state the material particles in my brain happened to be in in the past (determinism), how in blue blazes are my choices instantaneously affecting the state of material particles in the past? This experiment is simply impossible under any coherent materialistic presupposition!
And to underscore the fact that consciousness is not emergent from a material basis, we can reference this recent experiment from quantum mechanics (among many experiments):
Dean Radin, who spent years at Princeton testing different aspects of consciousness, recently performed experiments testing the possible role of consciousness in the double slit. His results were, not so surprisingly, very supportive of consciousness’s central role in the experiment:
And to experimentally support the Theistic contention that we really do exist as real persons, we can reference this:
In further comment from the neurosurgeons in the Johns Hopkins study:
More evidence of brain plasticity is here
In fact, not only is the mind able to modify the structure of the brain, but the mind has also been shown to have the ability to reach all the way down and affect the genetic expression of our bodies:
Thus, not only is atheistic materialism philosophically absurd in the extreme, but atheistic materialism is also directly undercut by empirical evidence.
If we were dealing with a science instead of a religion, this would be devastating for the hypothesis of materialism!
Poem, Music and Verse
Ba77, I read most of your links, all of your verses, but only view some of your music vids. But I have never been disappointed with the music vids I do listen to / watch. I am either lucky or (more likely) missing some great music. Thanks as usual.
The fetal position and the crucifix position at the end of the vid. Whoa :) Love You, thank You, and I'm sorry. Important stuff.
Materialists also have to apply the rule consistently. It is not logical to dismiss reports of paranormal phenomena because humans evolved to be superstitious, and then say materialism is a better philosophy because it is rational.
Jim Smith @ 24,
Don't be ridiculous. Belief has nothing to do with evolution. It has everything to do with the company you keep and the books you read. Just try mingling with atheists and your superstition will be gone.
ppolish @ 23
so you are disappointed with everything else in BA77’s posts? 🙂
Nancy Pearcey has a new website
Official Website of Nancy Pearcey
Me_Think @ 25
"Just try mingling with atheists and your superstition will be gone"
Along with all hope and the will to live 😉
F/N: The response to remarks by VS, here, will prove helpful. KF
BA77, 21: right as rain, identifying what on evo mat would constitute general delusions such as self-aware personal identity, responsible freedom of action [Bob and Fred think they are choosing???] . . . a shout-out to support a much underestimated contributor here at UD. The closer we look, the more solid is Pearcey's critique. KF
Let me get this straight. Christians for hundreds of years argued that Christianity has to be believed on faith and that human reason and logic are unreliable, that reason leads to infidelity and atheism, and that this proves that reason is unreliable. Human reason is unreliable because Adam ate an apple; we inherited original sin, which clouds our reason. We can never understand "mysteries" like the Trinity (how can 3 = 1?) or why God allows tsunamis and earthquakes to kill babies; we can't answer the Problem of Evil, because our reason is clouded. And the unreliability of reason is evidence of God's existence, said Christians.
Fast forward past Charles Hodge and the Princeton Seminary theology, and now today we have apologists saying the reliability of reason is evidence of God's existence.
For example, here. Or here.
Is there any conceivable phenomenon which you cannot claim as evidence for your God’s existence? What conceivable observation is excluded?
Oh for crying out loud, here’s your hero.
So I thought it was the unreliability of reason that meant we had to believe in Christianity?
Look, I don’t have time to point out the flaws in that idiot Alvin Plantinga’s amateur philosophy.
Here’s the bottom line: if Christianity is true, then our cognitive faculties are unreliable. Right? Most people on Earth are not Christians. The minority who are Christians didn’t voluntarily choose to believe in Christianity, it was forced on them or on their ancestors, or they grew up with it, and they “chose” it about as much as people choose their native language! But most people on Earth think Christianity is false. So if Christianity is true, then that means our cognitive faculties are unreliable. So this makes Christianity “self-refuting” in the same way you try to make evolution self-refuting, right? Right.
The Babel fish.
Cross @ 28
That's a superstition!
Me_Think said "that's a superstition"
We already know atheism is a superstition, please tell us something new.
The Fred/Bob thought experiment is a rather nice example of why survival and truth can become disconnected, but it doesn't show that the two are always totally disconnected: in general I would expect Bob to survive better (OK, yes, I'm biased, I know) if he always picks his actions based on truth, as they will be optimal. In contrast, if Fred always takes actions that are only related to truth by chance, then because there are so many bad possibilities, he won't survive that long.
It's also not obvious to me that our intellect's having evolved to enhance survival means we can't determine what's true. We have systems of thought to do this: that's what the scientific method is about (although we accept that we can't get at truth itself, only approximations to it that tend to become better over time). If I design (or evolve!) a car for use on roads, that doesn't mean it won't work when driving on fields. It might work less well, but it'll work. And I can then alter the car to take account of the new situation.
Diogenes, you came back for more? Especially after our shroud discussion? Wow, talk about sadomasochistic lol
But then again, after you got spanked by Professor Moran on his blog for being too ignorant to even understand your opponent's position, it must be hard to know what blog you belong to.
Now I may not agree with many things the good professor says, but we sure do agree on you being ignorant about the other side's position lol.
Now Diogenes, I'll make it easy on you, buddy.
Let's leave the shroud aside and ask you: what is your opinion on near death experiences?
Are they generated by the brain? 😉
Are you even versed enough to have a coherent position on this?
But please, this time take Professor Moran's advice and actually do some reading for comprehension to know your opponent's position.
It doesn't look like you made a favorable impression on him. Maybe he doesn't consider you a fellow brite 😉
Wow, Diogenes now quotes one theologian and believes that this theologian speaks for all Christianity.
Using the same logic you just employed, let's quote Richard Dawkins and ask "who crafted God?", and since I am now claiming that Dawkins speaks for all atheists, I must say that critical thinking is not only not needed to become an atheist, but it must be thrown away altogether.
I guess that is what you get when you quote a guy like Aron Ra, who looks and acts like he is stoned half the time.
I doubt that even Professor Moran would ever use logic from Aron Ra lol
But then again, this is what happens when you don't think for yourself 😉
Me_Think said:
"Don't be ridiculous. Belief has nothing to do with evolution. It has everything to do with the company you keep and the books you read. Just try mingling with atheists and your superstition will be gone."
Yes, and if you want to become superstitious, just hang out with atheists all day long. Thank you again for making such a great point.
And as I showed before to Chartsil, when I presented some NDE evidence to him, instead of refuting it he would only accept it if the experiencer brought back next week's winning lotto numbers lol.
But then again, I shouldn't be hard on you guys. I really think that you can't help your lack of critical thinking. Maybe one day medical science will help on this issue.
wallstreeter43 @ 38
If you mean the AWARE study, it debunked NDEs rather than supporting them! As I commented in other threads, not a single patient out of the over 2,000 studied was able to recall the placard messages, which were the only objective way to verify an NDE.
Here’s something I posted in other threads:
It is well known that awareness during anesthesia may be experienced by 1 or 2 out of every 1,000 patients. Analysis of the ASA Closed Claims Project shows intraoperative awareness accounted for up to 2% of all claims.
In an emergency situation, the anesthetist has little time to monitor and achieve a Bispectral Index of 40-60 to ensure full unconsciousness. So, if a patient is one of the 1 in 1,000 who requires a higher dose of anesthetic, it is more likely than not that he will be anesthesia-aware during the surgery. Combine this with the fact that 'clinically dead' is still a controversial term, and you will find that the 3-minute 'clinically dead' patient being aware of the surgery is not significant at all.
Now, in the light of all above facts, let’s look at a veridical NDE:
Amid the blaring of sirens, you (or your enemy) are wheeled into the ER of a hospital. The ER doctor administers basic first aid and intubates you. Among the cocktail of drugs is ketamine or one of its derivatives. (BTW, ketamine is used by thousands of teens to get an out-of-body experience pretty much every day; the OBE is achieved when they hit the 'k-hole' state.)
You are wheeled by emergency personnel to the operating room. They talk about your case, or some other patients, or the blue shoe on the 2nd-floor ledge, or the weather; all this information is being recorded by your brain. You are next prepared for the surgery. The anesthetist injects drugs to induce general anesthesia. He is not aware that you are one of the 1 in 1,000 people who need an extra dose of anesthesia to achieve a Bispectral Index of 40 to 60. Meanwhile surgeons and nurses stream in. They talk about the surgical procedure, using technical terms; maybe they talk about their kids or cats and dogs, about some article in some journal, use each other's nicknames, etc. The ketamine in you takes you to the k-hole state. You are now 'out of body' and intraoperatively aware. You hear the conversation while having an OBE.
Suddenly, in the middle of the surgery, your heart flatlines. The doctor declares you dead, too soon: a clinical-death proclamation should be made only after 38 minutes of attempted resuscitation. The doctors frantically do whatever needs to be done and 'resurrect' you. You make a full recovery.
You are overwhelmed: you had an out-of-body experience, you are aware of seemingly secret info about the surgical procedure, and you can recall something about a shoe on the ledge, so you truly believe you went out of body, floated around, met God and came back. You just had a veridical NDE.
I think RB and KS have more or less covered this. But to express it another way.
Evolution does not select for specific beliefs. Beliefs are the result of inherited capacities (which are selectable), such as reasoning ability and powers of observation, plus whatever happens to you in the course of your life.
However good those capacities are at leading to true beliefs, there are bound to be cases where they lead to false beliefs (wrong information, the improbable happens, etc.). Very occasionally a false belief will lead to behaviour that is as good (or even better) for survival as having a true belief. As everyone has pointed out, this is essentially a bizarre coincidence. For the vast majority of the time there is a massive survival advantage in having capacities that tend to lead to correct beliefs.
There are, of course, theories that we have evolved some psychological traits that are good for survival but which can systematically lead us into false beliefs in certain situations. One example is a tendency to believe there is an intention behind any phenomenon we don't understand. This is a good survival strategy in many cases because the cost of wrongly believing there is no intention is generally far higher than the cost of wrongly believing there is an intention.
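This statistical point can also be sketched as a toy model (the hazard counts and probabilities below are illustrative assumptions, nothing more): a single lucky false belief may get an organism past one hazard, but a capacity that tracks truth only by chance faces compounding odds across many independent hazards.

```python
import random

HAZARDS = 20   # independent dangers faced over a lifetime
ACTIONS = 5    # possible responses to each hazard; exactly one is safe

def lifetime_survival(p_correct, rng, lifetimes=10_000):
    """Fraction of simulated lifetimes in which every hazard is survived."""
    survived = sum(
        all(rng.random() < p_correct for _ in range(HAZARDS))
        for _ in range(lifetimes)
    )
    return survived / lifetimes

rng = random.Random(1)
truth_tracker = lifetime_survival(0.95, rng)         # reliable belief-forming capacity
lucky_guesser = lifetime_survival(1 / ACTIONS, rng)  # right only by chance

# 0.95**20 is roughly 0.36, while 0.2**20 is roughly 1e-14: the chance
# believer is all but guaranteed to be selected out over a lifetime.
print(truth_tracker, lucky_guesser)
```

The model makes no claim about which beliefs are selected; it only illustrates why a capacity that reliably produces true beliefs wins over one that is right by coincidence.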
If what you say is true, how can we verify its truth? Surely if it's not about truth but survival, we can't trust a word you say... Can we? Do you trust yourself?
I would like to see some evidence/arguments for this assertion. It is much too easy to come up with scenarios in which beliefs play a crucial role wrt survival; the case of Fred, Bob and the saber-toothed tigers is just one example. On what grounds should we assume that evolution does not select for specific beliefs?
Under materialism beliefs are nothing but particles in motion – like anything else. Some particles in motion ‘survive’ and get selected, others don’t.
On a general note: who disputes that – despite its disconnect from reality and truth – paranoid beliefs (and paranoia in general) are excellent for survival?
Human cognition evolved from more primitive cognitions. Big teeth bad is a straight-forward relationship that forms the foundation of saber-toothed thinking in humans.
Evolution can only select characteristics that are heritable. Beliefs are not heritable.
That’s right. Beliefs are chemical configurations like any other selectable trait. Actually, life itself is merely a chemical configuration in the materialist view. Chemicals seem to exist very well without any apparent need for truth. If we want to distinguish certain chemicals as “living”, then trees, bacteria and butterflies seem to be existing well-enough without reference to truth. Although they may all accept that God exists – so let’s call that an important truth they accept.
Meanwhile, illusions and falsehoods could have as much survival benefit as truth, or more, for those chemicals that want to make such distinctions.
Discerning truth certainly carries far more burdens than a non-awareness. An awareness of truth means an equal awareness of illusion. Someone caught in a decision-making process … “is this real or am I misreading it?” Instead of immediately running away, discerning truth from fiction requires some thought – and the possibility that a saber-tooth tiger is just an illusion means that Bob won’t run away quickly enough — or in some cases, will mistakenly think the tiger is not real.
Beliefs and behaviour are both heritable. Both get passed on to the offspring. Behaviour can trump genetic change and is much easier to change than genomes.
However, natural selection does NOT select. Whatever is good enough to survive does so. Whatever is lucky enough to reproduce does so.
Evidence please. We all know how evos love to oversell their position’s claims.
Zachriel at 43, although the relationship between reliable beliefs and survival is not nearly as straightforward as you would prefer to believe,
Thus, although that relationship between trustworthy beliefs and survival is not nearly as straightforward as you would prefer to believe, Zach, you also have a much more profound problem to deal with in trying to account for the materialistic origination of the 'illusion of consciousness': an illusion of consciousness so as to have a place for those faulty beliefs to exist in the first place:
Moreover, Zach, as if that were not bad enough, you also have to account for the 'illusion of free will'. In other words, not only is your material brain under the illusion that it is a real person who holds certain real beliefs, but your material brain is also under the illusion that it has a free will, so as to have the power to choose to believe rational beliefs and to do rational actions. But since free will is an illusion under materialism, there is no logical connection whereby, just because the 'illusion of you' may think certain beliefs are true, the illusion of you should have the power to choose to believe them. In fact, materialism demands that the mind is illusory and has no real causal power so as to enforce its will to choose to believe anything, whether true or false.
In other words Zach, your belief that you are a real person with real power to choose certain actions and beliefs, i.e. with free will, is only “a dream within a dream” within your materialistic premises. i.e. absurdity stacked on top of absurdity!
bornagain77: Zachriel at 43
In all of that text, you didn’t seem to respond to the point. Humans are posited to be the result of a long period of evolution. At the base of the human mind are some pretty basic sensory experiences; pain, pleasure, the ability to distinguish objects.
bornagain77: Should You Trust the Monkey Mind?
Turns out that primates are pretty good at recognizing danger in their native environments. ‘Big teeth bad’ is a straight-forward relationship that forms the foundation of saber-toothed thinking in humans.
Zachriel, although your generalization for how beliefs may be formed is a case study in fuzziness, regardless of that, ‘you’ have to account for the subjective conscious experience of ‘you’ before you can even begin to posit how beliefs may be formed. i.e. you cannot put the cart of beliefs before the subjective horse of ‘you’ buckaroo!
Here are a few more comments, from atheists, who agree with Chalmers on the insolubility of the 'hard problem' of consciousness:
Moreover, due to advances in Quantum Mechanics, the argument for God from consciousness can now be framed like this:
Which must mean something like cognition evolved from rocks. Too flip? How about cognition cannot be more than biology (which can’t be more than chemistry . . .)
Mark Frank @ 43, providing evidence/argument for his statement that evolution doesn't select for specific beliefs . . .
Which, assuming that specific beliefs are the result of cognition, must mean that the product of cognition, i.e. specific belief, is NOT a product of evolution, but of something with a different ontology.
In other words, somehow cognition, even human cognition, is, according to Zachriel, determined according to a causal chain.
But such a causal chain is, by definition, a physical embodiment of a Universal Turing Machine.
According to Mark Frank, however, the products of cognition (i.e. specific beliefs) are not heritable. But this would mean that they must have some different ontology that is at least in part nondeterministic. Specific beliefs, for example, may be teleological. Nevertheless, they are not "on the tape".
I will let Zachriel and Mark Frank explain how such non-determined ideas can be produced by a physical embodiment of a UTM.
bornagain77: ‘you’ have to account for the subjective conscious experience of ‘you’ before you can even begin to posit how beliefs may be formed.
Self-consciousness is not required for consciousness.
Tim: Which must mean something like cognition evolved from rocks.
Evolution only concerns living organisms, however, it is supposed by most researchers that some sort of abiogenesis occurred on the primordial Earth.
Tim: In other words, somehow cognition, even human cognition is, according to Zachriel, determined according to a causal chain.
Cognition is the ability to learn. Culture is learned.
Tim: But such a causal chain is, by definition, a physical embodiment of a Universal Turing Machine.
Human cognition may not be a Turing Machine.
Tim: But such a causal chain is, by definition, a physical embodiment of a Universal Turing Machine.
According to Mark Frank, however, the products of cognition (i.e. specific beliefs) are not heritable. But this would mean that they must have some different ontology that is at least in part nondeterministic. Specific beliefs, for example, may be teleological. Nevertheless, they are not "on the tape".
Why would that follow necessarily? While any Turing Machine can, in principle, calculate what any other Turing Machine calculates, that doesn't mean they can do so practically, or do so in fact.
I don’t follow your reasoning. Could you elaborate? What makes something a UTM is its capabilities, not the fact that it is a causal chain.
No, it just means that they are at least partially caused by non-heritable factors which might or might not be deterministic.
Well Zachriel, go ahead and give a coherent materialistic account of consciousness. Your Nobel awaits!
Which reminds me: Here is Eugene Wigner receiving his Nobel:
Of supplemental note to the preceding Wigner 'consciousness' quotes, it is interesting to note that many of Wigner's insights have now been experimentally verified and are also now fostering a 'second' revolution in quantum mechanics:
That Wigner’s insights into quantum mechanics are continuing to drive technology forward is certainly powerful evidence that his ‘consciousness’ view of Quantum Mechanics is indeed correct.
keiths: an ID-based epistemology fails the same “self-referential absurdity” test that Pearcey applies to evolutionary epistemology.
Humans evolved from humans. That is what the evidence says.
To which Z responded:
And I can just see the goalposts move . . . but I will try a dropkick anyway.
Zachriel introduces the idea of learning, but "learning" can have many meanings. The association of one stimulus to another might be the most basic, but even at this most basic level, one wonders whether UTMs can learn. If cognition is anything beyond that at all (for example, making an association beyond an association you are directed to make), then UTMs certainly cannot learn.
This is why learning (understood to be the free, self-initiated association of one thing to another), on evolution, cannot exist. Yet, such learning does exist. Therefore, the evolution of cognition is called into question.
Zachriel says that human cognition may not be a Turing Machine. On evolution, though, where minds are nothing beyond brains and brains are nothing beyond chemistry (admittedly super-fancy multi-multi-multi-tape players), they cannot be more than physical embodiments of Turing Machines, so Z is mistaken.
Keith S is also mistaken. UTMs are not defined by their capabilities only, but also by the limits of their capabilities. The causal chain is key; in fact UTMs, while theoretically the most powerful "computers" (i.e., processors of algorithms), simply lack any creativity at all.
We’ve gone over this before:
Suppose Deep Blue (or whatever the next generation of chess-playing computer happens to be) simply dominates all human opponents in chess. Its bank of "knowledge" and ability to "judge" positions outstrip any single human brain, but what of it? Nothing. It does nothing that it has not been told to do. This, perhaps surprisingly, also means that it has not learned anything that it was not directed to learn.
As all computers are physical embodiments of UTMs and, on evolution, a brain cannot be more than a computer, well, you be the judge.
I will say this, Deep Blue would never come up with the idea of forfeiting a game before it has started just to get the staging changed to meet its “wishes.”
That’s silly, Tim. If you have fully specified the capabilities of a system, then you have also established its limits.
Computers can write original music. That certainly qualifies as creativity in my book.
Keith S, I am beginning to think that you don’t actually know what a Turing Machine is, or perhaps the importance of how they are defined. They must read, then respond to the tape. The key word is respond.
As for your comment concerning computers and music, I’ll let it stand for all to judge. Be aware however that evolution-advocates are now in the curious position of claiming that human freedom is an illusion (see Provine), but that computers are free to create (See Keith S).
Zachriel: Cognition is the ability to learn. Culture is learned.
Tim: And I can just see the goalposts move
Thought they were definitions.
Tim: Zachriel introduces the idea of learning, but "learning" can have many meanings. The association of one stimulus to another might be the most basic, but even at this most basic level, one wonders whether UTMs can learn.
Yes, Turing Machines can learn. They do it all the time.
Tim: Zachriel says that human cognition may not be a Turing Machine. On evolution, though, where minds are nothing beyond brains and brains are nothing beyond (admittedly super fancy multi-multi-multi-tape players) chemistry, they cannot be more than physical embodiments of Turing Machines, so Z is mistaken.
A simple counterexample is an analog computer, which is not a Turing Machine.
That’s interesting. Could you point to something I’ve said about Turing machines that is incorrect?
Yes. Your mistake is in thinking that determinism somehow precludes creativity.
You seem to be assuming that creativity requires libertarian free will. It doesn’t.
(In any case, I’m a compatibilist.)
Keith s, it is not so much that you are incorrect in what you have written but that it doesn't apply, so you were incorrect to have written it. For example, Turing machines are subject to the halting problem, but for some it is no problem at all, and so the capabilities are fully specified (i.e., it halts. Yeaaa!)
But you overlook, for what reason I cannot imagine, the far more immediate aspect of Turing machines, which is how they are defined, how they work. "Specification of capabilities" is easily confused with "what they produce" instead of "what they do." Why add the confusion? Read tape, maybe mark it, and then move, and THAT'S IT.
When I wrote “respond” it was meant to imply respond according to a pre-determined rule. The fact that you choose to muddy the waters in this area by saying that it was my mistake to assume that determinism somehow precludes creativity in the context of Turing machines is poor rhetoric and helps nobody get anywhere.
I read above where Turing machines are said to learn; that it happens all the time. My question is this: How can they learn if all they do is read, mark (if necessary), move, and nothing else?
Finally, analog computers are physical embodiments of UTMs as are all physical computers. And creativity does in fact require libertarian free will. If it doesn’t, Keith s, please provide an example.
Tim: My question is this: How can they learn if all they do is read, mark (if necessary), move, and nothing else?
Because a Turing Machine can receive and analyze data about the world.
You do understand that modern computers are Turing Machines, and that they can learn?
Tim: analog computers are physical embodiments of UTMs as are all physical computers.
That is incorrect. A Turing Machine is digital and sequential by definition. You can approximate an analog computer with a Turing Machine, but it's only an approximation. Similarly with neural nets, where interactions aren't sequential but simultaneous.
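The approximation point can be made concrete. Here is a minimal sketch (my own illustration, not anything from the thread): a digital program imitating what an analog integrator computes continuously, by summing discrete time slices. Shrinking the step improves the approximation, but it never becomes the continuous quantity itself.

```python
import math

def digital_integrate(f, a, b, steps):
    """Discrete (left Riemann sum) approximation of the integral of f
    over [a, b] -- the quantity an analog integrator would produce
    continuously."""
    dt = (b - a) / steps
    return sum(f(a + i * dt) * dt for i in range(steps))

# The exact integral of sin over [0, pi] is 2.0.
approx = digital_integrate(math.sin, 0.0, math.pi, 1000)
# approx is very close to 2.0, but it remains an approximation.
```

The design point: the digital machine only ever manipulates finitely many discrete values, which is why it can mimic, but not be, the analog device.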
I guess I don’t understand TMs like you do. You say they analyze. I say they only follow directions. You say they learn. I say they only respond according to rules. You say analog computers are approximations (because they are not digital, natch), but somehow because of that they do not follow directions! You seem to be saying the same with the non sequential “interactions” of neural nets. If you could please give one example of a computer doing anything beyond what it is told to do, I would be extremely curious and interested in it.
My point is this: If something ONLY follows, it is not free to lead. This would prohibit aspects of creativity enjoyed by persons with the freedom to create.
Tim: You say they analyze. I say they only follow directions.
Those are not incompatible statements. Indeed, a lot of analysis is according to rules, such as the rules of statistics.
In the modern world, it's hard to imagine you haven't experienced learning computers. For instance, Google and Facebook algorithms learn about individual users in order to customize ads. They are so successful at this that they are among the largest companies in the world.
A simpler example would be a computer used to turn lights on and off that eventually discerns a pattern, from individual habits, ambient light, and time of day, to anticipate whether the lights need to be on or off, and how much light the person prefers. That's learning.
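That light-anticipating example can be sketched in a few lines (a hypothetical illustration; the class name and the majority-vote rule are my own assumptions, not anything a real product uses):

```python
from collections import defaultdict

class LightLearner:
    """Learns, per hour of day, whether the occupant usually wants the light on."""

    def __init__(self):
        # counts[hour] = [times light was wanted off, times light was wanted on]
        self.counts = defaultdict(lambda: [0, 0])

    def observe(self, hour, light_wanted):
        """Record one observation of the occupant's actual preference."""
        self.counts[hour][int(light_wanted)] += 1

    def predict(self, hour):
        """Anticipate the preference from the accumulated pattern."""
        off, on = self.counts[hour]
        return on > off

learner = LightLearner()
for _ in range(10):
    learner.observe(hour=21, light_wanted=True)   # evenings: light on
    learner.observe(hour=13, light_wanted=False)  # afternoons: light off

learner.predict(21)  # True: the prediction was generalized from observations
```

The prediction for hour 21 is nowhere in the program text; it is formed from the data the program was exposed to, which is the sense of "learning" being used here.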
Tim: You say analog computers are approximations (because they are not digital, natch), but somehow because of that they do not follow directions!
They do follow directions. We stated that digital computers can only approximate analog computers.
Tim: You seem to be saying the same with the non sequential “interactions” of neural nets.
If you check the definition of a Turing Machine, they are digital and sequential. Analog computers and neural nets are not digital and sequential; therefore, they are not Turing Machines. This contradicts your claim about Turing Machines, and your claim about possible brain architectures.
Your comment doesn’t make a lot of sense.
You’re misunderstanding the halting problem. A Turing machine doesn’t solve the halting problem by halting; it solves it only by successfully predicting whether any specified Turing machine will halt.
Turing proved that no Turing machine — none — could do this. In other words, the halting problem is a problem for all Turing machines.
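Turing's proof can be sketched as the familiar self-reference trap (a sketch only; the `halts` function is hypothetical, which is exactly the point of the argument):

```python
# Assume, for contradiction, a total decider halts(f) that returns True
# iff calling f() eventually halts. No such function can be written;
# this stub stands in for the assumed decider.
def halts(f):
    raise NotImplementedError("No such total decider can exist.")

def paradox():
    # If halts(paradox) says we halt, loop forever; if it says we loop,
    # halt immediately. Either answer is wrong, so no correct halts()
    # can exist that works on all programs -- including this one.
    if halts(paradox):
        while True:
            pass
    return
```

Since `paradox` defeats any candidate decider, the halting problem is undecidable for every Turing machine, not just the ones that fail to halt.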
I’m not sure why you think I’m overlooking that. I understand how Turing machines work.
I haven’t. I just pointed out that this statement of yours was incorrect:
As I said:
If the system’s capabilities are fixed, then what it can and can’t do are fixed. That means its limits are fixed as well.
Yes. That’s how Turing machines work. But as I said, this doesn’t preclude creativity.
It is your mistake. Turing machines are deterministic — they respond to predetermined rules — but that doesn’t mean they are incapable of creativity. Why would it?
That comment was from Zachriel, but the point is that the system (including the tape) changes state in response to input. If you arrange for the correct state changes, you have learning.
Again, you may be confusing me with Zachriel, who brought up analog computers. Anyway, Zachriel’s point is that analog computers aren’t digital. They’re continuous, not discrete. That means that their behavior can only be approximated by Turing machines, which are digital, unless nature turns out to be digital at its most fundamental level.
I already did: computer composition of original music. You don’t believe that computers possess libertarian free will, do you?
Wow! The digressions continue. I never suggested that Turing machines "solve" halting problems, only that some halt. The decidability of this problem across UTMs is something different altogether, but why even go there?
You have twice now said that I am mistaken concerning creativity, but I am not. Neither Turing machines, nor their physically embodied counterparts (computers) can learn or be creative. You are confusing what the outputs seem to be with what the computer has created/has learned.
Suppose you program a computer to analyze all available data on, say, college quarterbacks to determine which would most likely succeed as a pro. All metrics are scalable, and the scales themselves are scalable back for several iterations. Everything the scouts can think of is inputted and compared against successful quarterbacks: weather, socioeconomics, W/L records in Pop Warner, helmet color, shoe size, etc. Many of these would be inconsequential of course, but the computer could do several hundreds of thousands of multivariate and regression analyses, determining which variables are more important AND which variables are important in concert with others, and so on. Finally, the computer starts spitting out names and rankings, and sure enough they seem to be the best picks, churning out on average the best picks for teams across the NFL. Incidentally, some of the best predictive correlations turn out to be things that nobody had thought of (imagine: if the QB comes from a warm-weather state and plays in a dome, hand size remains important!). It is as if the computer created some new idea, or at least some new analysis, but did it?
I say no. It created nothing. It came up with no new ideas. It did nothing new at all.
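For what it's worth, the kind of feature scan the quarterback example describes can be sketched in a few lines (entirely made-up data and feature names; the point is only the mechanism, not a real scouting model):

```python
import random

random.seed(0)

# Hypothetical scouting data: each feature is a list of values for past
# quarterbacks, paired with a career-success score. Success here is
# constructed to depend on hand size, with noise.
n = 200
hand_size = [random.gauss(9.5, 0.5) for _ in range(n)]
helmet_hue = [random.random() for _ in range(n)]  # presumably irrelevant
success = [0.8 * h + random.gauss(0, 0.2) for h in hand_size]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Scan every candidate feature and rank by strength of association.
features = {"hand_size": hand_size, "helmet_hue": helmet_hue}
ranked = sorted(features,
                key=lambda f: abs(pearson(features[f], success)),
                reverse=True)
# With this construction, "hand_size" tops the ranking: the scan
# surfaces the predictive variable without anyone naming it in advance.
```

Whether surfacing a correlation nobody specified counts as "creating a new idea" is precisely the question the two sides are disputing; the sketch only shows what the machine mechanically does.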
Indeed, I have been fooled by your example. I was part of an audience that was asked to identify the composer based on music we heard. Some were open ended; some were multiple choice. They seemed easy. We heard rags with lots of stride and the only composer even close was Joplin, so we picked him.
The punchline was that all of the music had been generated by computer algorithms. We thought it was an enjoyable exercise, but to say the creativity was on the part of the computer is to use yours words — just silly.
The computer did nothing but follow directions. The genius of the music, wherever it existed, was certainly not in the creativity of the machine. I cannot emphasize this strongly enough: there was only tape, state, and following directions.
Perhaps it might help to think of it this way: imagine someone offered you some pseudocode, but instead of coding it into a computer, you were simply asked to “do exactly what the code said to do”. Unfortunately, you are much slower than a computer, so after five years of slavishly following the code to the letter, you produced an “original” piece of music. Here is my question: How creative would you feel?
Oh, I take that back! There was that one computer that had musical ability, although somewhat limited. I think its name was HAL . . . "Daisy, Daisy, give me your answer . . ."
Remember that in the context of the OP (that is, reason), the doubt is cast on evolutionary explanations. On the other hand, logico-aesthetic reasoning does point, as evidence of creation consonant with design theory, to numerous manifest singularities, chief of which may be this: the expression of such reasoning in exactly one species, humankind.
Tim: Neither Turing machines, nor their physically embodied counterparts (computers) can learn or be creative.
That’s seems contrary to common experience. You might want to provide an operational definition of “learning”.
No thanks, I’m good.
And you said that the halting problem was not a problem for the machines that halt, which is wrong. The halting problem is a problem for every Turing machine. This is one of the most important results in computer science, so you might want to spend some time studying it.
You say no, but you haven’t justified your claim.
That’s just a variation of Searle’s Chinese Room argument. The human in the room doesn’t understand Chinese, but the system, of which the human is only a part, does understand Chinese. In your example it is the system that is creative, not the human executing the pseudocode.
The extension to a computer system composing original music should be obvious. The processor itself is not acting creatively, but the combination of processor, algorithm, and memory is.
That’s where your confusion lies. You are focusing on the fact that the processor is not acting creatively, which is correct, but you are concluding that the entire system is uncreative, which is incorrect.
Creative systems can be built from uncreative parts. Intelligent systems can be built from unintelligent parts. Flexible systems can be built from inflexible parts. Creative brains are built from uncreative neurons.
Some halt; some don’t. Knowing which will and which won’t is the problem. I get it. I have studied it. It is not that difficult.
You are almost correct in my not having justified my claim. I did not think it necessary. I see you have made it to the Chinese room and are now going all “system” on us. But even in your argument, you give away the game. You admit that I have focused on the “processor” and not the overall system.
Then you go on to make the most curious of claims:
You have admitted that my view, if a bit overfocused, is correct, the processor is not creative. Certainly memories are not creative for memories are nothing more than iterations of states. Algorithms can be no more than rules. And you are out of candidates for forces which could be creative. Piling them together and saying they are creative is alchemy, nothing more.
By the way, how creative were you in following the pseudocode to produce the piece of music? If you please, let me know which one of the states was the creative one, which rule you followed. At the end of the exercise, tell me which part of the process was creative. Try doing it without reference to how the music sounds new to you. I wish you good luck.
Oh and by the way, I really liked the “the system does understand Chinese” comment; that was a good one, right out of the playbook of those desperate to smuggle intelligence into the picture at any cost.
Now you get it, but you were still confused when you wrote this:
That’s wrong. The halting problem is a problem for all Turing machines.
You learned something in this thread. That’s good!
That’s as silly as saying this:
Systems can have properties, including creativity and the ability to do arithmetic, that their components lack.
Zachriel: You might want to provide an operational definition of “learning”.
Tim: No thanks, I’m good.
You claimed that Turing Machines can’t learn, but won’t say what you mean by “learn”.
Now that we have slid to page two I suspect this will be my last post for this thread.
Keith S, I am weary of your condescending tone. No, I did not learn anything from you or in this thread concerning the halting problem. And no, I was not confused when I wrote that for the Turing machine that halts there is no halting problem. The halting problem is not a problem for any Turing machine; it is a problem for us! Your transistor analogy is also fatally flawed and silly. You have failed to understand the difference in categories.
Although a single transistor can't do arithmetic and a system of transistors can, no system of transistors can ever know what it is doing. They only do what they are programmed to do, nothing else.
In analogous ways, computers cannot, even in principle, know, learn, or create. Why? As I stated before, they are physical embodiments of UTMs.
The fact that we can know, learn and create implies that we are not merely physical embodiments of UTMs.
Here is a handy little made up definition of learning. Learning — the act of creating an original, persistent association of two or more concepts.
You will note that this precludes computer learning because computers do not act. They are programmed “to act like”.
I understand that my definition isn’t perfect.
Tell me, Z and KS, what have you learned? I note neither of you answered my one question concerning your hypothetical five-year odyssey in creating music.
That contradicts what you wrote earlier:
You made a mistake, Tim. You’re human, like the rest of us.
Individual transistors can’t do arithmetic. Put them together in the right way, and the system can.
Individual airplane parts can’t fly. Put them together in the right way, and the system can.
Unintelligent parts can be combined to produce intelligent systems. Uncreative parts can be combined to produce creative systems.
It isn’t “alchemy”; it’s common knowledge that systems can possess traits that their components lack.
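The transistors-to-arithmetic claim can be shown directly rather than asserted. A minimal sketch (my own illustration): a single NAND gate cannot add, yet a fixed wiring of nine of them adds three bits.

```python
def nand(a, b):
    """A single gate: on its own, it cannot do arithmetic."""
    return 0 if (a and b) else 1

def full_adder(a, b, carry_in):
    """A system of nine NAND gates that adds three bits."""
    # First half-adder: a XOR b via four NANDs
    t1 = nand(a, b)
    s1 = nand(nand(a, t1), nand(b, t1))
    # Second half-adder: (a XOR b) XOR carry_in
    t2 = nand(s1, carry_in)
    total = nand(nand(s1, t2), nand(carry_in, t2))
    # Carry out: OR of the two half-adder carries
    carry_out = nand(t1, t2)
    return total, carry_out

full_adder(1, 1, 1)  # (1, 1): binary 1 + 1 + 1 = 11
```

No individual `nand` call "does arithmetic"; the addition is a property of the arrangement, which is the sense in which a system can possess a trait its components lack.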
And here is an example of a machine doing exactly that:
Robot scientist becomes first machine to discover new scientific knowledge
I’ve learned that you are loath to admit your mistakes, and that you erroneously believe that some sort of magic is required for a system to possess a characteristic that its components lack.
I answered it:
Tim: Learning — the act of creating an original, persistent association of two or more concepts.
Thank you. We understand that it may not be a perfect definition, but should be serviceable for our purposes.
Google algorithms create original, persistent associations of concepts. They learn about your on-line habits and form generalizations, in order to better sell you products.
No, they don’t. Algorithms do not create. They do not learn my on-line habits; they do not form generalizations. They spit out responses that look like those things, and you are fooled. The key word is “like”.
No, you did not answer it. You reframed it. So, it is your mistake. I will admit my mistakes. I do all the time. For example, here is one: I made the mistake of wasting a fraction of my life engaging in this OP.
Tim: Algorithms do not create. They do not learn my on-line habits; They do not form generalizations.
Computers are quite adept at forming generalizations. For instance, Amazon’s computers may note that you shopped for bicycles, and having found a correlation between people shopping for bicycles and energy drinks, advertise energy drinks to you individually.
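The bicycles-and-energy-drinks example amounts to co-occurrence counting, which can be sketched in a few lines (hypothetical baskets; the item names are illustrative only, and real ad systems are of course far more elaborate):

```python
from itertools import combinations
from collections import Counter

# Hypothetical purchase baskets.
baskets = [
    {"bicycle", "energy drink", "helmet"},
    {"bicycle", "energy drink"},
    {"novel", "tea"},
    {"bicycle", "energy drink", "water bottle"},
    {"novel", "bookmark"},
]

# Count how often each pair of items is bought together.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

best_pair, _ = pair_counts.most_common(1)[0]
# best_pair == ("bicycle", "energy drink"): the strongest association,
# which no one wrote into the program in advance.
```

The pairing that emerges depends entirely on the data, not on anything named in the code, which is the sense in which the correlation is said to be "found".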
I like that part “having found”! No, they did not find! They were coded; they followed the code, and upon following the directions, produced a string of 0’s and 1’s which were then used, again following another set of directions, to send me some ads.
. . . like those things, and you are fooled. Get over it.
Tim: No, they did not find! They were coded; they followed the code, and upon following the directions, produced a string of 0’s and 1’s which were then used, again following another set of directions, to send me some ads.
The correlation wasn’t known until the computer performed the analysis; hence, the computer found the correlation.
It appears that Tim has a magical definition for the word “found”.
Google’s ad server isn’t finding correlations. Why? Because there’s no magic involved.
A self-driving car that makes its way through an obstacle course isn’t finding its way. Why? Because there’s no magic involved.
Tim, if you define learning and creativity in terms of magic, then of course they cannot be accomplished by physical systems. But then you’re simply assuming your conclusion.
If that self-driving car isn’t finding its way through the obstacle course, then how does it end up on the other side, intact?
If computers can’t do arithmetic, then how are they able to balance your checkbook?
If that music-composing system isn’t creative, then how does it manage to come up with musical pieces that no one has ever heard before?
Zachriel, the correlation was not known; then, after the computer hummed and purred, the correlation was known. I can see how you were fooled into thinking that it was the computer that found the correlation. In fact, as I am in a concessionary mood, I will admit that “finding” is a bit confusing. So let’s go back a bit. To find (at least in the case of unknown abstract correlations), one must seek. So, who was seeking? WE WERE!!! Not the computer!
In the same way that nobody (I hope) would say that the binoculars found the distant bird, the microscope the virus or the pinging sonar the sub, nobody should say that the computer found the correlation. WE found the correlation USING the computer. Why do you not understand this?
You know, Keith S, I was going to put “magic” into my definition, but I decided not to. You are grasping at straws now, claiming that I have assumed the conclusion. I have not. I am simply locating the seat of creativity and learning in the proper place, the person. I cannot help it if this undoes your evolutionary beliefs.
Although you do not answer my questions I will answer yours.
The car responds to its programming and features on the course.
Nobody can balance my checkbook.
The music system is creative if by system you include the people who programmed the computer. If you do not include the programmers, the computer is not creative, and you are smuggling. This last example is, in my book, the most illusory as when we hear the tones it is very easy (and once done, compelling) to do the work of making musical connections in theme and composition to composers’ previous work.
Two cars navigate an obstacle course at separate times. They take the same path. One is self-driving; the other has a human driver.
According to you, the human driver found a path through the obstacle course, but the self-driving car didn’t. Yet they accomplished exactly the same thing, followed the same path, and ended up on the other side of the obstacle course.
Why do you deny that the self-driving car found a path through the obstacle course? Because self-driving cars aren’t magic, but you imagine that humans are.
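Route-finding can be made concrete with a standard breadth-first search over an obstacle grid (a minimal sketch, not anything from an actual self-driving system; the grid and cell names are my own assumptions):

```python
from collections import deque

def find_route(grid, start, goal):
    """Breadth-first search: returns a list of (row, col) cells from
    start to goal, or None. '#' marks an obstacle. The route itself
    appears nowhere in the program text; it is discovered from the
    shape of the course."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != "#" and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

course = [".#.",
          ".#.",
          "..."]
find_route(course, (0, 0), (0, 2))  # a path around the wall of '#' cells
```

Change the obstacles and a different path comes out; whether that deserves the word "found" is the dispute, but the mechanism itself is unmysterious.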
How confused are you? I never said the car didn’t make it! Now in your example of two cars I would say that both navigated the course. You keep referencing magic, but I have not (except in response to you): now it is my turn and there are three cars on the course!
1) person driving a car.
2) new-fangled programmed car that we all are reading about these days.
3) my friend’s ’69 Chevy pickup (no driver, human or otherwise).
Now, only two of the vehicles successfully navigate the course. Can you tell me which two and why?
I hope you didn’t pick the Chevy! Oh, and I hope you didn’t get fooled about how the second car made it. By the way, in the example I gave, what role does “driver” play? Does the second car have a driver? What distinguishes the second car from the third? That’s right, the program. Whence the program? Game over.
I do not imagine that humans are magic unless by magic you mean supernatural. Obviously, only one of us is able to consider such ideas.
I find it amusing and sane to be allied with GK Chesterton:
“We talk of wild animals but man is the only wild animal. It is man that has broken out. All other animals are tame animals; following the rugged respectability of the tribe or type.”
Who are you aligned with?
Not nearly as confused as you, judging from this thread.
Nor did I accuse you of saying that.
You said that in Zachriel’s example, Google’s ad servers did not find the correlation between bicycles and energy drinks. By that same logic, the self-driving car didn’t find a route through the obstacle course. Yet the human driver did.
Both cars followed the same path. Both successfully navigated the obstacle course and made it to the other side. Yet according to you, the self-driving car did not find a safe route — only the human did.
Your position is nonsensical.
Instead of dashing off another response, why not slow down and think about this for a while, Tim?
Me: I never said that!!
And then this gem:
I am sure you are wrong. In my example, I even came right out and said that the second car made it. Why can’t you understand this?
My point is NOT that the car did not make it, it most certainly made it. But what of the third vehicle? How? What is it about the second and third vehicles . . . oh nevermind! This is ground well-covered and obvious.
Your evasions are becoming tiresome. Knock it off.
You told us that in Zachriel’s example, the Google ad server did not find a correlation between bicycles and energy drinks:
By the same faulty logic, the self-driving car did not find a safe route through the obstacle course. It was programmed; it followed the code; and upon following the directions, produced a string of 0’s and 1’s which were used to actuate the throttle, brake, and steering mechanism.
That puts you in the absurd position of claiming that the human driver found a safe route through the obstacle course, but that the self-driving car didn’t — even though both cars started in the same place, followed the same route, and ended up in the same place on the other side.
You’ve gotten yourself stuck, Tim. Better let the self-driving intelligence take over.
I evade nothing. You are incorrect. By the same logic that dispels the myth that the computer alone “found”, I also say that both cars made it. But just as you have said, the second car followed the code (and other input) to navigate the course. This puts me in the rather reasonable position of saying that while both cars made it, it was the creativity, motive, learning, etc. of a person that got each of them there, not the vehicle or any computer on board, which is nothing more than a tool. I am not stuck; you are smuggling. Either deal with it in a forthright manner, or don’t. I will let the three other people reading this thread be the judge.
I notice that you have assiduously avoided commenting on the binoculars, microscope, sonar, and Chevy. What gives?
To find the way implies that the way was sought. Now it is you who are in the absurd position. You are saying that the auto-piloted car “decided” to make its way through the course. And since you don’t believe in magic, I guess you’ve got some ‘splainin’ to do.
What is wrong with saying this: the advertisers found new correlations in my computing habits using the powerful computers at their disposal to better target ads to me? Why is it so important for you to continue to anthropomorphize computers?
Tim is correct, but I would also add that computers without humans is like the universe without teleology.
They’re ultimately useless, meaningless, purposeless etc…
You’re terribly frightened of the word ‘found’, aren’t you?
Tell us, did the self-driving car find a safe route through the obstacle course? Yes or no?
Tim: the computer hummed and purred, the correlation was known. I can see how you were fooled into thinking that it was the computer that found the correlation.
Tim hummed and purred, the correlation was known. You were fooled into thinking it was Tim that found the correlation.
Tim: nobody should say that the computer found the correlation.
Sure they do. There are thousands of examples, including in the New York Times, which is often taken as a guide to the use of the English language. There are also thousands of examples of “search algorithm found”, including in academic papers.
I grow weary of your rhetorical tricks. (. . . terribly frightened, “yes or no”).
You are mistaken again; I am not afraid (why do you continue to editorialize about what I “must be” thinking and feeling?). Yes AND no. We have gone over this. Speaking metaphorically, we can say the car found the route. In fact, it is convenient to do so. But, and I have made this clear, in a strict sense, it did not seek, so it could not find. The people who built the car sought a solution to a problem and using the car as a tool, solved that problem.
I notice that both you and Keith S continue to avoid my questions. I can’t tell from your last post whether you are finally seating the ability to be creative and to learn properly, or whether, rather than defend your anthropomorphism of the computer, you have chosen to de-humanize me. I suspect it is the latter. If I find a correlation, I, in a strict sense, learn. It is not “as if” I have learned. I struggle to see why you won’t admit this somewhat obvious truth. Please explain your thoughts on this rather than continuing all the little tricks.
Further, as I have just explained, it is convenient to write in a way that imputes the “found” as if it is the “doings” of a machine which then might imply that it is solely the doings of the machine; this, of course, is not logically correct. Convenience of language use and (God forbid) the New York Times are NOT metaphysical markers.
Either explain clearly or give it up, guys.
Tim: I can’t tell from your last post whether you are finally seating the ability to be creative and to learn properly, or rather than defend your anthropomorphism of the computer, you have chosen to de-humanize me.
We showed that your ‘argument’, which was nothing more than handwaving, applies to you as well as a computer.
Tim: Convenience of language use and (God forbid) the New York Times are NOT metaphysical markers.
You made the claim that “find” wasn’t used with regard to computers. Glad to see that you have abandoned the semantic argument.
You provided an operational definition of learning: “the act of creating an original, persistent association of two or more concepts.” By that definition, computers learn.
Z, you showed nothing. The same old cut-out, paste-in jest is not an argument or a demonstration. I made the key distinctions. There was no handwaving. I did NOT make the claim that “find” wasn’t used with computers, but that it shouldn’t, strictly speaking, be used. You are mistaken that according to my definition, computers learn. See “act” and “creating”. Get over it. I know, the logical extensions of this doom your lame (as in hobbled) worldview. Move on to a better one.
Tim: You are mistaken that according to my definition, computers learn. See “act” and “creating”.
Computers do things, certainly. Computers produce correlations, they cause particular situations to exist. More particularly, computers can create original correlations, correlations never before seen.
Of course, the car did seek a safe route through the obstacle course.
Now I suppose you’ll claim that it only “metaphorically sought” a safe route, while the human driver “really sought” it.
We could continue this for a long time, but let’s cut to the chase. What is the secret ingredient that distinguishes “metaphorical” acting, learning, creating, seeking, and finding from “real” acting, learning, creating, seeking, and finding?
(. . . ignoring the pigeonholing “ingredient”. . . )
Personhood. We are people; things are not. I thought that through my examples, this would be clear. Is that specific enough?
We are at an impasse. Personhood cannot possibly fit into your worldview beyond “persons-out-of-materials”. That is why you can see no distinctions in my examples, why you see computers as effectively persons, or more likely why you see persons only as effective as computers, and why you refuse to admit what is plain for all to see.
As you do, so I will let my statements stand, each confident that the argument is won.
If you define “real” acting, learning, creating, seeking, and finding as things that can only be done by persons, then the claim that “non-persons can’t really act/learn/create etc.” is merely a tautology. Your argument is circular.
Besides that problem, you haven’t defined personhood.
Is a rat a person? Can a rat really find its way through a maze, or only metaphorically?
If we someday create robots whose behavior is indistinguishable from that of humans, will those robots be “persons”? Why or why not?
Would Weaver birds qualify for personhood by your way of reckoning? Would bacteria qualify?
Keith S, there is no problem; I am certain that you are mistaken. Instead of tautology, it is modus tollens.
If a person, then actual learning.
Not actual learning.
Therefore, not a person.
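The three lines above do have the shape of modus tollens, and the inference form itself is valid; as a sketch, it can be stated in Lean (the proposition names are illustrative, and the premises themselves are of course what is in dispute):

```lean
-- Modus tollens: from (Person → ActualLearning) and ¬ActualLearning,
-- conclude ¬Person. The rule is valid; the dispute is over the premises.
theorem modus_tollens (Person ActualLearning : Prop)
    (h : Person → ActualLearning) (hnl : ¬ActualLearning) : ¬Person :=
  fun hp => hnl (h hp)
```

Note that validity of the form says nothing about whether “not actual learning” is true of any given system, which is exactly the point under contention.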
When you asked me to identify the “ingredient” I did so. However, you are twisting things around by saying that I “defined” real acting that way. I observed it to be that way. Personhood is the singular of peoplehood, you know, people in the ‘hood. I will not be defining personhood for you. We have enough to deal with when talking about animals.
Although the rat really reaches the cheese, it only “finds its way through the maze” metaphorically. Instinct is pretty easy to get our heads around. The rat smells food and by instinct somehow knows that walking gets food faster than the cheese walking to the rat. But which way? The rat is conditioned for lefts and rights, and voilà, success. In this sense the rat learns. But if it is being driven by instinct, the question arises: did the rat act, or did it react to its instinct? If it was driven by instinct, what becomes of actual learning in the way I defined it? It goes away. This is not to say that the rat can’t be conditioned to run the maze quickly; it certainly can.
This would be a good study: put a rat in an empty maze and see if it finds its way through the maze. I’d say doubtful because I don’t think there would be anything to drive the rat.
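The conditioning story described above (reward-driven reinforcement of lefts and rights, and no learning at all in an empty maze) can be sketched in a few lines. This is an illustrative toy model, not anything from the thread; all names and numbers are made up:

```python
import random

# Toy model of instinct-driven conditioning (illustrative only).
# The "rat" makes a left/right choice at each junction; a trial is
# rewarded only if the whole sequence of turns reaches the cheese,
# and reward nudges each choice made toward being repeated.

CORRECT_PATH = ["L", "R", "R", "L"]  # turns that lead to the cheese

def run_trials(trials=500, seed=0, cheese=True):
    rng = random.Random(seed)
    prefs = [0.5] * len(CORRECT_PATH)  # P(choose "L") at each junction
    for _ in range(trials):
        turns = ["L" if rng.random() < p else "R" for p in prefs]
        if cheese and turns == CORRECT_PATH:
            # Reinforce every choice made on the rewarded run.
            for i, turn in enumerate(turns):
                target = 1.0 if turn == "L" else 0.0
                prefs[i] += 0.2 * (target - prefs[i])
    return prefs

prefs = run_trials()
learned = ["L" if p > 0.5 else "R" for p in prefs]  # conditioned route
empty = run_trials(cheese=False)  # empty maze: nothing drives the rat
```

With the cheese removed, no trial is ever rewarded, so the preferences never move off 0.5, which matches the “empty maze” prediction: with nothing to drive the rat, no conditioning occurs.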
rhampton7, I have my biases about animals and would say no. No animal enjoys personhood according to my reckoning. All learning that they do is instinct-driven conditioning. Pets are the toughest not to be fooled by. I could be wrong about this and would welcome a tweaking of my definition in terms of learning, especially. For now I plead agnosticism in this arena, and will say there is nothing to suggest anything beyond instinct-driven conditioning.
Well KS, Z and RH7. That makes four of us. I am done. Talk to you again sometime.
I understand that you are agnostic on the issue of animal intelligence. But I hope you realize that Weaver birds, among others, build structures recognizable as products of “intelligent design”. If your hunch is correct and the Weaver bird’s intelligence is purely natural/material, then their nests are evidence that purely natural/material processes can produce intelligent designs.
No. Here’s your actual logic:
No, you defined it that way. I asked for the “secret ingredient” that distinguishes the “real” forms from the “metaphorical” forms. I didn’t ask you who or what does, or doesn’t, produce “real” acting, learning, etc.:
If you’d like to retract that answer, that’s fine.
Here’s a question that gets at the crucial point:
Suppose a human and a sophisticated future robot behave absolutely identically. According to you, the human “really” acts/learns/creates, but the robot only “metaphorically” does these things. Without knowing which is the human and which is the robot, can you distinguish between the “real” acting/learning/creating and their merely metaphorical counterparts? If so, how?
Tim: Although the rat really reaches the cheese, it only “finds its way through the maze” metaphorically.
It’s called spatial learning and memory. The rat learned the maze, just as it learns the corridors within the walls of a house.
I will not be retracting the answer. I merely point out that it is based on observation.
Ok, you lob up such easy softballs that I just can’t resist. It is impossible, in principle, for two beings to behave identically, insofar as each occupies a place in our Universe that the other cannot.
I mean, as long as you are going for the impossible, I can attempt to be just as picky. Rephrase your hypothetical. Is the human acting like a robot, or the robot like a human? They cannot be doing the exact same thing.
Poor Tim. You’ve backed yourself into a corner, haven’t you?
If you stick to “produced by a person” as the characteristic that distinguishes “real” acting/learning/creating from their “metaphorical” counterparts, then your argument is circular.
But if you retract your answer, then you have to come up with something else that distinguishes the identical behaviors.
Not a good position to be in.
uuuuh, no. You ended your attack with an impossible hypothetical. You have cornered yourself. I am doing just fine, thank you. You simply can’t assert “circularity,” hit me with “your” version of my argument (your version was an incorrect surmise of what I’d said, although I will allow that it does follow from mine), and claim victory.
Yet again, instead of answering my questions, you simply reframe them.
Well, two can play that game: Here’s one. Prove to all (three of us who continue to read this thread), that you are not me. Oh, I should add that you can do so only by posting something.
RH7, I would only ask that you elucidate “structures recognizable as” — I don’t buy it (but am actually open to some movement in this area: could there be a type of learning that is a response to limited freedoms for animals? Are animals in some limited ways types of persons in the realm of learning and creativity?). Right now, though, I say no. The beauty of form, symmetries, etc. that we see in what animals do are constructs imposed by us. I looked up the Weaver birds’ nests and was not that impressed — spider webs, though, oooo–eeeeee
Everything that a computer does can be traced back to the humans who designed and built it.