Uncommon Descent Serving The Intelligent Design Community

Fred, Bob and Saber-Toothed Tigers


In this post the UD News Desk quotes from Nancy Pearcey’s new book concerning evolutionary epistemology:

An example of self-referential absurdity is a theory called evolutionary epistemology, a naturalistic approach that applies evolution to the process of knowing. The theory proposes that the human mind is a product of natural selection. The implication is that the ideas in our minds were selected for their survival value, not for their truth-value.

Piotr thinks he has a cogent response to this:

Does she believe “the ideas in our minds” are innate, or what? At best, it could be argued that the human mind has been shaped by natural selection in such a way that it can produce ideas which help us to survive and have offspring. As far as I can see, thought processes which allow us to understand the world and make correct predictions (and so are empirically “true”) are generally good for survival.

Sorry Piotr. Truth (i.e., saying of that which is that it is and of that which is not that it is not) has no necessary connection to survival. This has been illustrated many times along the following lines:

Assume you have two cavemen, Bob and Fred. Consistent with truth, Bob believes saber-toothed tigers are fearsome monsters that want to eat us. When Bob sees a saber-toothed tiger he runs and hides.

Contrary to truth, Fred believes saber-toothed tigers are warm and fuzzy and only want to be our friends. It just so happens that Fred also believes (again, contrary to truth) that “hide and seek with people” is saber-toothed tigers’ favorite game. Therefore, whenever he sees a saber-toothed tiger he also runs and hides.

Assume for the sake of argument that Fred’s running and hiding as part of the game he thinks he is playing is just as effective at eluding saber-toothed tigers as Bob’s running and hiding out of stark raving fear.

Here’s the kicker: Natural selection is blind to the difference between Fred’s belief and Bob’s belief. Natural selection “selects” for traits that result in differential survival rates. If Fred and Bob survive at the same rate, natural selection cares not that Fred is a loon.
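The Bob-and-Fred argument can be made concrete with a toy simulation (a sketch only; the agents, beliefs, and survival probabilities below are all invented for illustration). The two cavemen hold opposite beliefs but emit identical behavior, so selection, which acts only on behavior, cannot tell them apart:

```python
import random

# Toy model: selection "sees" only behavior, never the belief behind it.
# Agents, beliefs, and probabilities are all invented for illustration.
beliefs = {
    "Bob":  "tigers are fearsome monsters that want to eat me",   # true belief
    "Fred": "tigers are fuzzy friends who love hide-and-seek",    # false belief
}

def action(agent, stimulus):
    # Opposite beliefs, identical behavioral output.
    return "run and hide" if stimulus == "saber-toothed tiger" else "forage"

def survives(act):
    # Survival depends only on the action actually taken.
    return random.random() < (0.95 if act == "run and hide" else 0.20)

random.seed(1)
trials = 100_000
rates = {
    agent: sum(survives(action(agent, "saber-toothed tiger"))
               for _ in range(trials)) / trials
    for agent in beliefs
}
print(rates)  # Bob and Fred survive at statistically indistinguishable rates
```

Because `survives` never looks at the belief, only at the action, the two survival rates differ only by sampling noise, which is the point of the argument.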

Comments
Tim: Algorithms do not create. They do not learn my on-line habits; they do not form generalizations.
Computers are quite adept at forming generalizations. For instance, Amazon's computers may note that you shopped for bicycles and, having found a correlation between people shopping for bicycles and energy drinks, advertise energy drinks to you individually.
Zachriel
March 16, 2015, 05:45 AM PDT
No they don't. Algorithms do not create. They do not learn my on-line habits; they do not form generalizations. They spit out responses that look like those things, and you are fooled. The key word is "like". No, you did not answer it. You reframed it. So, it is your mistake. I will admit my mistakes. I do all the time. For example, here is one: I made the mistake of wasting a fraction of my life engaging in this OP.
Tim
March 15, 2015, 04:52 PM PDT
Tim: Learning — the act of creating an original, persistent association of two or more concepts. Thank you. We understand that it may not be a perfect definition, but it should be serviceable for our purposes. Google algorithms create original, persistent associations of concepts. They learn about your on-line habits and form generalizations in order to better sell you products.
Zachriel
March 15, 2015, 12:55 PM PDT
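Zachriel's ad-targeting example can be sketched in a few lines of Python (the shopping data and the `recommend` helper are invented for illustration): the program forms a persistent association between co-occurring concepts and then generalizes from it.

```python
from collections import Counter
from itertools import combinations

# Invented purchase data; each set is one shopper's basket.
baskets = [
    {"bicycle", "helmet", "energy drink"},
    {"bicycle", "energy drink"},
    {"novel", "coffee"},
    {"bicycle", "water bottle", "energy drink"},
]

# Form a persistent association between concepts that co-occur.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

def recommend(item):
    # Generalize from the stored associations: suggest whatever is
    # most often bought together with the given item.
    best, best_count = None, 0
    for (a, b), n in pair_counts.items():
        if item in (a, b) and n > best_count:
            best, best_count = (b if a == item else a), n
    return best

print(recommend("bicycle"))  # -> energy drink
```

Whether this counts as "learning" under Tim's definition is exactly what the thread disputes; the code only shows that the association is created from data rather than hand-coded.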
Tim,
And no, I was not confused when I wrote that for the Turing machine that halts there is no halting problem. The halting problem is not a problem for any Turing machine;
That contradicts what you wrote earlier:
For example, Turing machines are subject to the halting problem, but for some it is no problem at all and so the capabilities are fully specified (i.e. it halts. yeaaa)
You made a mistake, Tim. You're human, like the rest of us.
Although a single transistor can’t do arithmetic while a system of transistors can, no system of transistors can ever know what it is doing. They only do what they are programmed to do, nothing else.
Individual transistors can't do arithmetic. Put them together in the right way, and the system can. Individual airplane parts can't fly. Put them together in the right way, and the system can. Unintelligent parts can be combined to produce intelligent systems. Uncreative parts can be combined to produce creative systems. It isn't "alchemy"; it's common knowledge that systems can possess traits that their components lack.
Here is a handy little made-up definition of learning. Learning — the act of creating an original, persistent association of two or more concepts.
And here is an example of a machine doing exactly that: Robot scientist becomes first machine to discover new scientific knowledge
Tell me, Z and KS, what have you learned?
I've learned that you are loath to admit your mistakes, and that you erroneously believe that some sort of magic is required for a system to possess a characteristic that its components lack.
I note neither of you answered my one question concerning your hypothetical five-year odyssey in creating music.
I answered it:
That’s just a variation of Searle’s Chinese Room argument. The human in the room doesn’t understand Chinese, but the system, of which the human is only a part, does understand Chinese. In your example it is the system that is creative, not the human executing the pseudocode.
keith s
March 15, 2015, 12:48 PM PDT
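keith s's transistor point has a standard concrete illustration: a single NAND gate cannot add, but the textbook wiring of NAND gates into a ripple-carry adder can. A minimal sketch in Python:

```python
# A single NAND gate cannot add, but a system of NAND gates can.
def nand(a, b):
    return 1 - (a & b)

def xor(a, b):                        # standard 4-NAND XOR wiring
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

def half_adder(a, b):
    return xor(a, b), 1 - nand(a, b)  # (sum, carry); AND is NOT-NAND

def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, or_(c1, c2)

def add(x, y, width=8):
    # Ripple-carry adder: chain full adders bit by bit.
    out, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        out |= s << i
    return out

print(add(5, 9))  # -> 14
```

Every function here does nothing but apply one fixed rule, yet the assembled system performs arithmetic that no individual gate performs.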
Now that we have slid to page two I suspect this will be my last post for this thread. Keith S, I am weary of your condescending tone. No, I did not learn anything from you or in this thread concerning the halting problem. And no, I was not confused when I wrote that for the Turing machine that halts there is no halting problem. The halting problem is not a problem for any Turing machine; it is a problem for us!

Your transistor analogy is also fatally flawed and silly. You have failed to understand the difference in categories. Although a single transistor can't do arithmetic while a system of transistors can, no system of transistors can ever know what it is doing. They only do what they are programmed to do, nothing else. In analogous ways, computers cannot, even in principle, know, learn, or create. Why? As I stated before, they are physical embodiments of UTMs. The fact that we can know, learn and create implies that we are not merely physical embodiments of UTMs.

Zachriel, here is a handy little made-up definition of learning. Learning -- the act of creating an original, persistent association of two or more concepts. You will note that this precludes computer learning because computers do not act. They are programmed "to act like". I understand that my definition isn't perfect.

Tell me, Z and KS, what have you learned? I note neither of you answered my one question concerning your hypothetical five-year odyssey in creating music.
Tim
March 15, 2015, 10:45 AM PDT
Zachriel: You might want to provide an operational definition of “learning”. Tim: No thanks, I’m good. You claimed that Turing Machines can't learn, but won't say what you mean by "learn".
Zachriel
March 15, 2015, 05:47 AM PDT
Tim,
Some halt; some don’t. Knowing which will and which won’t is the problem. I get it. I have studied it. It is not that difficult.
Now you get it, but you were still confused when you wrote this:
For example, Turing machines are subject to the halting problem, but for some it is no problem at all and so the capabilities are fully specified (i.e. it halts. yeaaa) [Emphasis added]
That's wrong. The halting problem is a problem for all Turing machines. You learned something in this thread. That's good! keiths:
Creative systems can be built from uncreative parts. Intelligent systems can be built from unintelligent parts. Flexible systems can be built from inflexible parts. Creative brains are built from uncreative neurons.
Tim:
Certainly memories are not creative for memories are nothing more than iterations of states. Algorithms can be no more than rules. And you are out of candidates for forces which could be creative. Piling them together and saying they are creative is alchemy, nothing more.
That's as silly as saying this:
Individual transistors can't do arithmetic. To claim that you can wire them together and get the system to do arithmetic is alchemy, nothing more.
Systems can have properties, including creativity and the ability to do arithmetic, that their components lack.
keith s
March 15, 2015, 12:25 AM PDT
Some halt; some don't. Knowing which will and which won't is the problem. I get it. I have studied it. It is not that difficult. You are almost correct in my not having justified my claim. I did not think it necessary. I see you have made it to the Chinese room and are now going all "system" on us. But even in your argument, you give away the game. You admit that I have focused on the "processor" and not the overall system. Then you go on to make the most curious of claims:
Creative systems can be built from uncreative parts. Intelligent systems can be built from unintelligent parts.
You have admitted that my view, if a bit overfocused, is correct: the processor is not creative. Certainly memories are not creative for memories are nothing more than iterations of states. Algorithms can be no more than rules. And you are out of candidates for forces which could be creative. Piling them together and saying they are creative is alchemy, nothing more. By the way, how creative were you in following the pseudocode to produce the piece of music? If you please, let me know which one of the states was the one which was creative. Which rule that you followed was creative? At the end of the exercise, tell me which part of the process was creative. Try doing it without reference to how the music sounds new to you. I wish you good luck. Oh and by the way, I really liked the "the system does understand Chinese" comment; that was a good one, right out of the playbook of those desperate to smuggle intelligence into the picture at any cost.
Tim
March 14, 2015, 10:54 PM PDT
Tim:
I never suggested that Turing machines “solve” halting problems, only that some halt.
And you said that the halting problem was not a problem for the machines that halt, which is wrong. The halting problem is a problem for every Turing machine. This is one of the most important results in computer science, so you might want to spend some time studying it.
Incidentally, some of the best predictive correlations turn out to be things that nobody had thought of (imagine, if the QB comes from a warm-weather state and plays in a dome, hand size remains important!!) It is as if the computer created some new idea, or at least some new analysis, but did it? I say no. It created nothing. It came up with no new ideas. It did nothing new at all.
You say no, but you haven't justified your claim.
...to say the creativity was on the part of the computer is, to use your words, just silly. The computer did nothing but follow directions. The genius of the music, wherever it existed, was certainly not in the creativity of the machine. I cannot emphasize this strongly enough — there was only tape, state, and following directions. Perhaps it might help to think of it this way: imagine someone offered you some pseudocode, but instead of coding it into a computer, you were simply asked to “do exactly what the code said to do”. Unfortunately, you are much slower than a computer, so after five years of slavishly following the code to the letter, you produced an “original” piece of music. Here is my question: How creative would you feel?
That's just a variation of Searle's Chinese Room argument. The human in the room doesn't understand Chinese, but the system, of which the human is only a part, does understand Chinese. In your example it is the system that is creative, not the human executing the pseudocode. The extension to a computer system composing original music should be obvious. The processor itself is not acting creatively, but the combination of processor, algorithm, and memory is. That's where your confusion lies. You are focusing on the fact that the processor is not acting creatively, which is correct, but you are concluding that the entire system is uncreative, which is incorrect. Creative systems can be built from uncreative parts. Intelligent systems can be built from unintelligent parts. Flexible systems can be built from inflexible parts. Creative brains are built from uncreative neurons.
keith s
March 14, 2015, 05:17 PM PDT
No thanks, I'm good.
Tim
March 14, 2015, 02:30 PM PDT
Tim: Neither Turing machines, nor their physically embodied counterparts (computers) can learn or be creative. That seems contrary to common experience. You might want to provide an operational definition of "learning".
Zachriel
March 14, 2015, 06:52 AM PDT
Wow! The digressions continue. I never suggested that Turing machines "solve" halting problems, only that some halt. The decidability of this problem across UTMs is something different altogether, but why even go there? You have twice now said that I am mistaken concerning creativity, but I am not. Neither Turing machines, nor their physically embodied counterparts (computers) can learn or be creative. You are confusing what the outputs seem to be with what the computer has created/has learned.

Suppose you program a computer to analyze all available data on, say, college quarterbacks to determine which would be most likely a success as a pro. All metrics are scalable and the scales themselves are scalable back for several iterations. Everything the scouts can think of is inputted and compared to successful quarterbacks: weather, socioeconomics, W/L records in Pop Warner, helmet color, shoe size, etc. Many of these would be inconsequential of course, but the computer could do several hundreds of thousands of multivariate analyses and regression analyses, determining which variables are more important AND which variables are important in concert with others and so on. Finally, the computer starts spitting out names and rankings and sure enough they seem to be the best picks, churning out on average the best picks for different teams across the NFL. Incidentally, some of the best predictive correlations turn out to be things that nobody had thought of (imagine, if the QB comes from a warm-weather state and plays in a dome, hand size remains important!!) It is as if the computer created some new idea, or at least some new analysis, but did it? I say no. It created nothing. It came up with no new ideas. It did nothing new at all.

Indeed, I have been fooled by your example. I was part of an audience that was asked to identify the composer based on music we heard. Some were open-ended; some were multiple choice. They seemed easy. We heard rags with lots of stride and the only composer even close was Joplin, so we picked him. The punchline was that all of the music had been generated by computer algorithms. We thought it was an enjoyable exercise, but to say the creativity was on the part of the computer is, to use your words, just silly. The computer did nothing but follow directions. The genius of the music, wherever it existed, was certainly not in the creativity of the machine. I cannot emphasize this strongly enough -- there was only tape, state, and following directions.

Perhaps it might help to think of it this way: imagine someone offered you some pseudocode, but instead of coding it into a computer, you were simply asked to "do exactly what the code said to do". Unfortunately, you are much slower than a computer, so after five years of slavishly following the code to the letter, you produced an "original" piece of music. Here is my question: How creative would you feel? Oh, I take that back! There was that one computer that had musical ability, although somewhat limited. I think its name was HAL . . . "Daisy, Daisy, give me your answer . . ."

Remember that in the context of the OP (that is, reason), the doubt is cast on evolutionary explanations. On the other hand, logico-aesthetic reasoning does point as evidence of creation, consonant with design theory, in numerous manifest singularities, chief of which may be this: the expression of such reasoning in exactly one species -- humankind.
Tim
March 14, 2015, 12:43 AM PDT
Tim #62, Your comment doesn't make a lot of sense. You wrote:
For example, Turing machines are subject to the halting problem, but for some it is no problem at all and so the capabilities are fully specified (i.e. it halts. yeaaa)
You're misunderstanding the halting problem. A Turing machine doesn't solve the halting problem by halting; solving it would mean correctly deciding, for any specified Turing machine and input, whether that machine halts. Turing proved that no Turing machine -- none -- can do this. In other words, the halting problem is a problem for all Turing machines.
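Turing's diagonal argument, which keith s is summarizing, fits in a few lines. The sketch below (the names are mine, purely illustrative) checks that no verdict a supposed halting decider could give about the "do the opposite" program is consistent with what that program would then do:

```python
def diagonal_actually_halts(decider_verdict: bool) -> bool:
    # The "diagonal" program asks the supposed decider about itself,
    # then does the opposite: it loops forever if told "you halt" and
    # halts immediately if told "you loop".
    return not decider_verdict

# Check every answer a decider could give about the diagonal program;
# none is consistent with what the program then actually does.
consistent_verdicts = [v for v in (True, False)
                       if diagonal_actually_halts(v) == v]
print(consistent_verdicts)  # -> [] (no correct verdict exists)
```

Since both possible answers are wrong about the diagonal program, no total, correct halting decider can exist.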
But you overlook, for what reason I cannot imagine, the far more immediate aspect of Turing machines which is how they are defined, how they work.
I'm not sure why you think I'm overlooking that. I understand how Turing machines work.
“Specification of capabilities” is easily confused as “what they produce” instead of “what they do.” Why add the confusion?
I haven't. I just pointed out that this statement of yours was incorrect:
UTMs are not defined by their capabilities only, but also by the limits of their capabilities.
As I said:
That’s silly, Tim. If you have fully specified the capabilities of a system, then you have also established its limits.
If the system's capabilities are fixed, then what it can and can't do are fixed. That means its limits are fixed as well. Tim:
When I wrote “respond” it was meant to imply respond according to a pre-determined rule.
Yes. That's how Turing machines work. But as I said, this doesn't preclude creativity.
The fact that you choose to muddy the waters in this area by saying that it was my mistake to assume that determinism somehow precludes creativity in the context of Turing machines is poor rhetoric and helps nobody get anywhere.
It is your mistake. Turing machines are deterministic -- they respond to predetermined rules -- but that doesn't mean they are incapable of creativity. Why would it?
I read above where Turing machines are said to learn; that it happens all the time. My question is this: How can they learn if all they do is read, mark (if necessary), move, and nothing else?
That comment was from Zachriel, but the point is that the system (including the tape) changes state in response to input. If you arrange for the correct state changes, you have learning.
Finally, analog computers are physical embodiments of UTMs as are all physical computers.
Again, you may be confusing me with Zachriel, who brought up analog computers. Anyway, Zachriel's point is that analog computers aren't digital. They're continuous, not discrete. That means that their behavior can only be approximated by Turing machines, which are digital, unless nature turns out to be digital at its most fundamental level.
And creativity does in fact require libertarian free will. If it doesn’t, Keith s, please provide an example.
I already did: computer composition of original music. You don't believe that computers possess libertarian free will, do you?
keith s
March 13, 2015, 04:20 PM PDT
Tim: You say they analyze. I say they only follow directions. Those are not incompatible statements. Indeed, a lot of analysis is according to rules, such as the rules of statistics. In the modern world, it's hard to imagine you haven't experienced learning computers. For instance, Google and Facebook algorithms learn about individual users in order to customize ads. They are so successful at this that they are among the largest companies in the world. A simpler example would be a computer used to turn lights on and off, and eventually determining a pattern, from the individual habits, ambient light and time of day, to anticipate whether they need to be on or off, and how much light the person prefers. That's learning. Tim: You say analog computers are approximations (because they are not digital, natch), but somehow because of that they do not follow directions! They do follow directions. We stated that digital computers can only approximate analog computers. Tim: You seem to be saying the same with the non-sequential "interactions" of neural nets. If you check the definition of a Turing Machine, they are digital and sequential. Analog computers and neural nets are not digital and sequential; therefore, they are not Turing Machines. This contradicts your claim about Turing Machines, and your claim about possible brain architectures.
Zachriel
March 13, 2015, 03:41 PM PDT
Zachriel, I guess I don't understand TMs like you do. You say they analyze. I say they only follow directions. You say they learn. I say they only respond according to rules. You say analog computers are approximations (because they are not digital, natch), but somehow because of that they do not follow directions! You seem to be saying the same with the non-sequential "interactions" of neural nets. If you could please give one example of a computer doing anything beyond what it is told to do, I would be extremely curious and interested in it. My point is this: If something ONLY follows, it is not free to lead. This would prohibit aspects of creativity enjoyed by persons with the freedom to create.
Tim
March 13, 2015, 02:55 PM PDT
Tim: My question is this: How can they learn if all they do is read, mark (if necessary), move, and nothing else? Because a Turing Machine can receive and analyze data about the world. You do understand that modern computers are Turing Machines, and that they can learn? Tim: analog computers are physical embodiments of UTMs as are all physical computers. That is incorrect. A Turing Machine is digital and sequential by definition. You can approximate an analog computer with a Turing Machine, but it's only an approximation. Similarly with neural nets, where interactions aren't sequential but simultaneous.
Zachriel
March 13, 2015, 01:27 PM PDT
Keith s, it is not so much that you are incorrect in what you have written but that it doesn't apply, so you were incorrect to have written it. For example, Turing machines are subject to the halting problem, but for some it is no problem at all and so the capabilities are fully specified (i.e. it halts. yeaaa) But you overlook, for what reason I cannot imagine, the far more immediate aspect of Turing machines which is how they are defined, how they work. "Specification of capabilities" is easily confused as "what they produce" instead of "what they do." Why add the confusion? Read tape, maybe mark it, and then move, and THAT'S IT. When I wrote "respond" it was meant to imply respond according to a pre-determined rule. The fact that you choose to muddy the waters in this area by saying that it was my mistake to assume that determinism somehow precludes creativity in the context of Turing machines is poor rhetoric and helps nobody get anywhere. I read above where Turing machines are said to learn; that it happens all the time. My question is this: How can they learn if all they do is read, mark (if necessary), move, and nothing else? Finally, analog computers are physical embodiments of UTMs as are all physical computers. And creativity does in fact require libertarian free will. If it doesn't, Keith s, please provide an example.
Tim
March 13, 2015, 12:34 PM PDT
Tim:
Keith S, I am beginning to think that you don’t actually know what a Turing Machine is, or perhaps the importance of how they are defined.
That's interesting. Could you point to something I've said about Turing machines that is incorrect?
They must read, then respond to the tape. The key word is respond.
Yes. Your mistake is in thinking that determinism somehow precludes creativity.
As for your comment concerning computers and music, I’ll let it stand for all to judge. Be aware however that evolution-advocates are now in the curious position of claiming that human freedom is an illusion (see Provine), but that computers are free to create (See Keith S).
You seem to be assuming that creativity requires libertarian free will. It doesn't. (In any case, I'm a compatibilist.)
keith s
March 13, 2015, 11:09 AM PDT
Zachriel: Cognition is the ability to learn. Culture is learned. Tim: And I can just see the goalposts move . . . Thought they were definitions. Tim: Zachriel introduces the idea of learning, but “learning” can have many meanings. The association of one stimulus to another might be the most basic, but even at this most basic level, one wonders whether UTMs can learn. Yes, Turing Machines can learn. They do it all the time. Tim: Zachriel says that human cognition may not be a Turing Machine. On evolution, though, where minds are nothing beyond brains and brains are nothing beyond (admittedly super fancy multi-multi-multi-tape players) chemistry, they cannot be more than physical embodiments of Turing Machines, so Z is mistaken. A simple counterexample is an analog computer, which is not a Turing Machine.
Zachriel
March 13, 2015, 11:07 AM PDT
Keith S, I am beginning to think that you don't actually know what a Turing Machine is, or perhaps the importance of how they are defined. They must read, then respond to the tape. The key word is respond. As for your comment concerning computers and music, I'll let it stand for all to judge. Be aware however that evolution-advocates are now in the curious position of claiming that human freedom is an illusion (see Provine), but that computers are free to create (See Keith S).
Tim
March 13, 2015, 10:54 AM PDT
Tim:
Keith S is also mistaken. UTMs are not defined by their capabilities only, but also by the limits of their capabilities.
That's silly, Tim. If you have fully specified the capabilities of a system, then you have also established its limits.
The causal chain is key; in fact UTMs, while theoretically the most powerful “computers” (i.e. processors of algorithms), simply lack any creativity at all.
Computers can write original music. That certainly qualifies as creativity in my book.
keith s
March 13, 2015, 10:39 AM PDT
I wrote:
In other words, somehow cognition, even human cognition is, according to Zachriel, determined according to a causal chain.
To which Z responded:
Cognition is the ability to learn. Culture is learned.
And I can just see the goalposts move . . . but I will try a dropkick anyway. Zachriel introduces the idea of learning, but "learning" can have many meanings. The association of one stimulus to another might be the most basic, but even at this most basic level, one wonders whether UTMs can learn. If cognition is anything beyond that at all (for example, making an association beyond an association you are directed to make), then UTMs certainly cannot learn. This is why learning (understood to be the free, self-initiated association of one thing to another), on evolution, cannot exist. Yet, such learning does exist. Therefore, the evolution of cognition is called into question.

Zachriel says that human cognition may not be a Turing Machine. On evolution, though, where minds are nothing beyond brains and brains are nothing beyond (admittedly super fancy multi-multi-multi-tape players) chemistry, they cannot be more than physical embodiments of Turing Machines, so Z is mistaken. Keith S is also mistaken. UTMs are not defined by their capabilities only, but also by the limits of their capabilities. The causal chain is key; in fact UTMs, while theoretically the most powerful "computers" (i.e. processors of algorithms), simply lack any creativity at all.

We've gone over this before: Suppose Deep Blue (or whatever the next generation of chess-playing computer happens to be) simply dominates all human opponents in chess. Its bank of "knowledge" and ability to "judge" positions outstrip any single human brain, but what of it? Nothing. It does nothing that it has not been told to do. This, perhaps surprisingly, also means that it has not learned anything that it was not directed to learn. As all computers are physical embodiments of UTMs and, on evolution, a brain cannot be more than a computer, well, you be the judge. I will say this: Deep Blue would never come up with the idea of forfeiting a game before it has started just to get the staging changed to meet its "wishes."
Tim
March 13, 2015, 10:01 AM PDT
Humans are posited to be the result of a long period of evolution.
Humans evolved from humans. That is what the evidence says.
Joe
March 13, 2015, 08:10 AM PDT
keiths: an ID-based epistemology fails the same “self-referential absurdity” test that Pearcey applies to evolutionary epistemology. So?
Mung
March 12, 2015, 08:32 PM PDT
Well Zachriel, go ahead and give a coherent materialistic account of consciousness. Your Nobel awaits! Which reminds me: Here is Eugene Wigner receiving his Nobel:
Eugene Wigner receives his Nobel Prize for Quantum Symmetries - video 1963 http://www.nobelprize.org/mediaplayer/index.php?id=1111 "It was not possible to formulate the laws (of quantum theory) in a fully consistent way without reference to consciousness." Eugene Wigner (1902 -1995) from his collection of essays "Symmetries and Reflections – Scientific Essays"; Eugene Wigner laid the foundation for the theory of symmetries in quantum mechanics, for which he received the Nobel Prize in Physics in 1963. http://eugene-wigner.co.tv/ "It will remain remarkable, in whatever way our future concepts may develop, that the very study of the external world led to the scientific conclusion that the content of the consciousness is the ultimate universal reality" - Eugene Wigner - (Remarks on the Mind-Body Question, Eugene Wigner, in Wheeler and Zurek, p.169) 1961 - received Nobel Prize in 1963 for 'Quantum Symmetries' http://www.informationphilosopher.com/solutions/scientists/wigner/
Of supplemental note to the preceding Wigner 'consciousness' quotes: many of Wigner's insights have since been experimentally verified and are now fostering a 'second' revolution in quantum mechanics...
Eugene Wigner – A Gedanken Pioneer of the Second Quantum Revolution - Anton Zeilinger - Sept. 2014 Conclusion It would be fascinating to know Eugene Wigner’s reaction to the fact that the gedanken experiments he discussed (in 1963 and 1970) have not only become reality, but building on his gedanken experiments, new ideas have developed which on the one hand probe the foundations of quantum mechanics even deeper, and which on the other hand also provide the foundations to the new field of quantum information technology. All these experiments pay homage to the great insight Wigner expressed in developing these gedanken experiments and in his analyses of the foundations of quantum mechanics, http://epjwoc.epj.org/articles/epjconf/pdf/2014/15/epjconf_wigner2014_01010.pdf
That Wigner's insights into quantum mechanics are continuing to drive technology forward is certainly powerful evidence that his 'consciousness' view of Quantum Mechanics is indeed correct.
bornagain77
March 12, 2015 at 5:45 PM PDT
Tim,
But such a causal chain is, by definition, a physical embodiment of a Universal Turing Machine.
I don't follow your reasoning. Could you elaborate? What makes something a UTM is its capabilities, not the fact that it is a causal chain.
According to Mark Frank, however, the products of cognition (i.e. specific beliefs) are not heritable. But this would mean that they must have some different ontology that is at least in part nondeterministic.
No, it just means that they are at least partially caused by non-heritable factors, which might or might not be deterministic.
keith s
March 12, 2015 at 5:45 PM PDT
bornagain77: ‘you’ have to account for the subjective conscious experience of ‘you’ before you can even begin to posit how beliefs may be formed.
Self-consciousness is not required for consciousness.
Tim: Which must mean something like cognition evolved from rocks.
Evolution only concerns living organisms; however, most researchers suppose that some sort of abiogenesis occurred on the primordial Earth.
Tim: In other words, somehow cognition, even human cognition is, according to Zachriel, determined according to a causal chain.
Cognition is the ability to learn. Culture is learned.
Tim: But such a causal chain is, by definition, a physical embodiment of a Universal Turing Machine.
Human cognition may not be a Turing Machine.
Tim: According to Mark Frank, however, the products of cognition (i.e. specific beliefs) are not heritable. But this would mean that they must have some different ontology that is at least in part nondeterministic. Specific beliefs, for example, may be teleological. Nevertheless, they are not “on the tape”.
Why would that follow necessarily? While any Turing Machine can, in principle, calculate what any other Turing Machine calculates, that doesn't mean it can do so practically, or does so in fact.
Zachriel
March 12, 2015 at 5:18 PM PDT
Zachriel@43
Human cognition evolved from more primitive cognitions.
Which must mean something like cognition evolved from rocks. Too flip? How about: cognition cannot be more than biology (which can't be more than chemistry . . .). Mark Frank@43, providing an argument for his statement that evolution doesn't select for specific beliefs:
Evolution can only select characteristics that are heritable. Beliefs are not heritable.
Which, assuming that specific beliefs are the result of cognition, must mean that the product of cognition, i.e. specific belief, is NOT a product of evolution, but of some thing with a different ontology. In other words, somehow cognition, even human cognition, is, according to Zachriel, determined according to a causal chain. But such a causal chain is, by definition, a physical embodiment of a Universal Turing Machine. According to Mark Frank, however, the products of cognition (i.e. specific beliefs) are not heritable. But this would mean that they must have some different ontology that is at least in part nondeterministic. Specific beliefs, for example, may be teleological. Nevertheless, they are not "on the tape". I will let Zachriel and Mark Frank explain how such non-determined ideas can be produced by a physical embodiment of a UTM.
Tim
March 12, 2015 at 4:20 PM PDT
Zachriel, although your generalization of how beliefs may be formed is a case study in fuzziness, regardless of that, 'you' have to account for the subjective conscious experience of 'you' before you can even begin to posit how beliefs may be formed. i.e. you cannot put the cart of beliefs before the subjective horse of 'you', buckaroo!
David Chalmers on Consciousness (Philosophical Zombies and the Hard Problem) – video https://www.youtube.com/watch?v=NK1Yo6VbRoo
Here are a few more comments, from atheists, that agree with Chalmers on the insolubility of the 'hard problem' of consciousness:
Darwinian Psychologist David Barash Admits the Seeming Insolubility of Science's "Hardest Problem"
Excerpt: "But the hard problem of consciousness is so hard that I can't even imagine what kind of empirical findings would satisfactorily solve it. In fact, I don't even know what kind of discovery would get us to first base, not to mention a home run." - David Barash – Materialist/Atheist Darwinian Psychologist
"We have so much confidence in our materialist assumptions (which are assumptions, not facts) that something like free will is denied in principle. Maybe it doesn't exist, but I don't really know that. Either way, it doesn't matter because if free will and consciousness are just an illusion, they are the most seamless illusions ever created. Film maker James Cameron wishes he had special effects that good." - Matthew D. Lieberman – neuroscientist – materialist – UCLA professor
Moreover, due to advances in Quantum Mechanics, the argument for God from consciousness can now be framed like this:
1. Consciousness either preceded all of material reality or is an 'epiphenomenon' of material reality.
2. If consciousness is an 'epiphenomenon' of material reality, then consciousness will be found to have no special position within material reality. Conversely, if consciousness precedes material reality, then consciousness will be found to have a special position within material reality.
3. Consciousness is found to have a special, even a central, position within material reality.
4. Therefore, consciousness is found to precede material reality.
Four intersecting lines of experimental evidence from quantum mechanics that show that consciousness precedes material reality (Wigner's Quantum Symmetries, Wheeler's Delayed Choice, Leggett's Inequalities, Quantum Zeno effect)
https://docs.google.com/document/d/1uLcJUgLm1vwFyjwcbwuYP0bK6k8mXy-of990HudzduI/edit
Verse:
Colossians 1:17 He is before all things, and in Him all things hold together.
bornagain77
March 12, 2015 at 3:26 PM PDT
bornagain77: Zachriel at 43
In all of that text, you didn't seem to respond to the point. Humans are posited to be the result of a long period of evolution. At the base of the human mind are some pretty basic sensory experiences: pain, pleasure, the ability to distinguish objects.
bornagain77: Should You Trust the Monkey Mind?
Turns out that primates are pretty good at recognizing danger in their native environments. 'Big teeth bad' is a straightforward relationship that forms the foundation of saber-toothed thinking in humans.
Zachriel
March 12, 2015 at 2:30 PM PDT