Uncommon Descent Serving The Intelligent Design Community

Origenes and the argument from Self-Prediction


Origenes has put up an interesting argument that gives us food for thought.

It’s Friday, so this should be a good way to start our weekend:

>>>>>>>>>>>>>>

ORIGENES: Here I will argue that self-prediction cannot be accommodated by materialism. In daily life we all routinely engage in acts of self-prediction — ‘tomorrow morning at 9 o’clock I will do 2 push-ups’, ‘I will do some Christmas shopping next Friday’ … and so forth. The question is: how does materialism explain that familiar phenomenon? Given that specific behavior (e.g. doing 2 push-ups) results from specific neural states, how is it that we can predict its occurrence?

The fact that one can predict her/his own behavior suggests that we have mental control over the physical, which is obviously unacceptable for the materialist, who claims the opposite to be true. Therefore the task set out for the materialist is to naturalize self-prediction. And in doing so there seems to be no other option available than to argue for the existence of some (physical) system, capable of predicting specific neural states and the ensuing behavior. But here lies a problem. There is only one candidate for the job, the brain, but, as I will argue decisively, the brain cannot do it.

The Argument from Self-prediction

1. If materialism is true, then human behavior is caused by neural events in the brain and environmental input.

2. The brain cannot predict future behavior with any specificity.

3. I can predict future behavior with specificity.

Therefore,

4. Materialism is false.

– – – –

Support for 2

In his book ‘The Sensory Order’ (1976) von Hayek argues that, in order to predict a system, you need a distinct system with a higher degree of complexity. His argument can be summarized as follows:

… Prediction of a system O requires classification of the system’s states.
If these states can differ in n different aspects, that is, they can be subsumed under n different predicates, there are 2^n different types of states a classificatory system P must be able to distinguish. As the number of aspects with regard to which states might differ is an indicator of O’s complexity and as the degree of complexity of a classificatory system P is at least as large as the number of different types of states it must be able to distinguish, P is more complex than O.
[‘The SAGE Handbook of the Philosophy of Social Sciences’ edited by Ian C Jarvie, Jesus Zamora-Bonilla]
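Hayek’s counting step is easy to check directly. As a small sketch (ours, not Hayek’s; the value n = 3 is chosen arbitrarily), enumerating the combinations of n binary predicates yields 2^n distinguishable state types:

```python
from itertools import product

# Sketch of Hayek's counting claim: if states can differ in n binary
# aspects (predicates), there are 2**n distinguishable state types.
n = 3  # chosen arbitrarily for illustration
state_types = list(product([False, True], repeat=n))
num_types = len(state_types)  # 2**3 = 8
```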

Von Hayek then goes on to conclude that:

No system is more complex than itself. Thus: No system can predict itself or any other system of (roughly) the same degree of complexity (no self-referential prediction).

IOWs the brain cannot predict itself, because, in order to predict the brain, one needs a system with a higher degree of complexity than the brain itself.
– In order to predict specific behavior, the brain cannot run simulations of possible future neuronal interactions, because it is simply too complex. The human brain is perhaps the most complex thing in the universe. The average brain has about 100 billion neurons. Each neurons fires (on average) about 200 times per second. And each neuron connects to about 1,000 other neurons.

– A prediction of specific behavior would also require predicting environmental input, which lies beyond the brain’s control. We, as intelligent agents, can, within limits, ignore a multitude of environmental inputs and stick to the plan — ‘tomorrow morning at 9 o’clock I will do 2 push-ups, no matter what’ — but the brain cannot do this. The intractable environmental (sensory) input, and the neural firing that results from it, necessarily influence the state of the brain tomorrow morning at 9 o’clock.

>>>>>>>>>>>

What do you think? END

Comments
DaveS @73 I am happy to leave it to readers of this thread to judge whether my arguments concerning your position make sense. I have noticed that I keep repeating myself, so I don't think that I have anything new to add. So, I'll leave it there for now.
Origenes
December 10, 2017 at 08:38 AM PDT
Origenes,
It is logically true that no computer can predict the result of its own calculation. Do you agree with that?
With some qualifications, which are irrelevant to this discussion, yes. I don't think a computer can exactly simulate some calculation in half the time it normally takes, for example.
daveS
December 10, 2017 at 08:12 AM PDT
DaveS @71
DaveS: You’re not addressing my claim. I am saying that a computer can predict some aspects of its future behavior (which is clearly true).
"It is logically true that no computer can predict the result of its own calculation." Do you agree with that? And what does that tell you? To me it immediately follows that no computer can predict itself. Bam! Yes, that is what logic can instantly give you. Predicting download time and the like is just throwing up sand and dust; it's not about a computer predicting the result of its own activity. We know exactly why those linear processes are predictable: because we can ignore the specifics:
What makes it possible to predict the required time for downloading a large file or the required time for water to boil? What makes these predictable “linear” processes? ... It is simply because we can ignore the specifics, since they are irrelevant to the outcome. It is of no import whether it is an audio file, text file or whatever file that is being downloaded, since, with respect to download time, all data behaves the same. It is of no import which water molecule is up or down, since they all act the same. ... this indifference to specifics does not apply to the brain. .... Because, assuming that specific behavior (e.g. ‘doing 2 push-ups’, ‘doing Christmas shopping’) results from specific neural configurations, the specificity of neural states does matter. It is relevant to the ensuing behavior how many times neurons fire and where they are. This is in contrast to water boiling, which occurs irrespective of specific states of water molecules. In theory one could change the position of each water molecule of boiling water without stopping it from boiling. (see #24, #27, #28 and #40.)
Origenes
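The “indifference to specifics” point about linear processes can be stated in a few lines of code. A minimal sketch (function name ours): the prediction depends only on aggregate quantities, so it is identical regardless of file content.

```python
def predict_download_seconds(size_bytes: float, rate_bytes_per_sec: float) -> float:
    # Linear process: only aggregate quantities matter,
    # not which particular bytes are being moved.
    return size_bytes / rate_bytes_per_sec

# The prediction is the same for an audio file and a text file of equal
# size, because content is irrelevant to download time:
audio_eta = predict_download_seconds(500e6, 10e6)  # 500 MB at 10 MB/s
text_eta = predict_download_seconds(500e6, 10e6)
```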
December 10, 2017 at 07:59 AM PDT
Origenes, You're not addressing my claim. I am saying that a computer can predict some aspects of its future behavior (which is clearly true). I'm not talking about a computer predicting the entirety of its own future behavior, or a faster computer predicting the behavior of a slower one, so let's set those aside. The claim is that a computer can predict (approximately) some aspects of its future behavior, such as the time it will take to finish running a program.
daveS
December 10, 2017 at 07:32 AM PDT
DaveS
My claim is simply that purely physical devices (computers, e.g.) can predict their future behavior to some extent.
It is logically true that no computer can predict the result of its own calculation, therefore no computer can predict itself. However, can computer P predict the behavior of computer O? Yes, rather easily: IF computer P is faster than O, and they perform the same task, then the result of the calculation of P can be regarded as a prediction of the result in O. Even better, given that the architectures of both computers differ only in speed (same software and same task), then all steps of O leading up to the result are predicted by P. So, yes, computer P can predict computer O. But note that, when we have two computers which differ only in speed, we have all the requirements for simulation already in place: we do not have to measure O and build a model-O, because computer P instantly functions as a faster model for O. So, here, we can skip all the daunting tasks that the brain has to perform, as has been argued. IOWs it seems hardly relevant to the problem at hand.
Origenes
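The two-computer case above can be sketched directly. In this illustration (ours, not from the thread) the “faster computer P” is modeled by a closed-form shortcut for the very computation that O grinds through step by step, so P’s output is available before O finishes and serves as a prediction of O’s result:

```python
def slow_O(n: int) -> int:
    # System O: computes 0**2 + 1**2 + ... + (n-1)**2 by brute-force iteration.
    total = 0
    for i in range(n):
        total += i * i
    return total

def fast_P(n: int) -> int:
    # System P: a faster 'model' of O. The closed form for the same sum
    # finishes long before O's loop, so its output predicts O's result.
    return (n - 1) * n * (2 * n - 1) // 6

prediction = fast_P(10_000)  # available 'in advance'
actual = slow_O(10_000)      # O eventually reaches the same value
```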
December 10, 2017 at 07:24 AM PDT
KF, That's something you'll have to take up with Origenes. We have agreed to call these things predictions in any case, and that's consistent with this definition given by Merriam-Webster:
predict pri-'dikt To declare or indicate in advance; especially : foretell on the basis of observation, experience, or scientific reason
daveS
December 10, 2017 at 06:33 AM PDT
Mullers_ratchet @67 Thank you for your question. I am just listing what is logically required in order to have a simulation of a system O. P has to measure O. P must build a model-O. So, we need models of each of the O-components and also model-laws. Model-O must perform much faster than the real O. ... and more (see also #62). Perhaps you are saying that such a prediction-simulation-system is beyond far-fetched and I would readily agree. I do not think it can possibly exist, and that is my point. If someone has a better idea to get to a specific prediction, or thinks that some (or all) of my assumptions about prediction-system P are unnecessary, I would like to know.
Origenes
December 10, 2017 at 05:22 AM PDT
I'll admit to not reading much of this thread, but this struck me as a very strange claim
2. P has a complex machine that can make a perfect model-O based on P’s measurements. (Wow!)
Why should we think this is true? A particularly strange claim to make for human brains, which are pretty infamous for using pretty bad heuristics.
mullers_ratchet
December 9, 2017 at 11:44 PM PDT
DS, you seem to be dealing with freedom to make a decision and discipline to act on it per a timeline. Such covers a mountain of issues on concepts, freedom vs determinism, and even the power of linguistic or visual conceptual representations and much more. Where a precondition of real decision is responsible, rational freedom, which opens up a huge can of worms. A decision backed by discipline to give it effect is not a prediction. Though it may be predictable on observation and analysis that X is likely to be diligent but Y is not. So, clarification is in order. KF
kairosfocus
December 9, 2017 at 07:33 PM PDT
Origenes,
Is it your claim that there is a way to “quite accurately” predict this? How is that thinkable?
First, I don't know anything about how the brain works, and I suspect no one does in enough detail to really discuss this issue with any confidence. My claim is simply that purely physical devices (computers, e.g.) can predict their future behavior to some extent. Therefore it's not clear to me that the fact that humans can predict their future behavior to some extent implies that they are not purely physical.
daveS
December 9, 2017 at 06:13 PM PDT
DaveS @63 Suppose that P must predict O and that O is the component of the brain that deals with body movement. Also, "quite accurately" predicting is a huge challenge. Obviously there are many possibilities besides "doing two push-ups". Tomorrow at 9 it could also be the case that "standing on one leg", "waving both hands" or "walking" ensues from O's neural states — numerous possibilities. And the prediction system must be able to distinguish between them at a neural level. Is it your claim that there is a way to "quite accurately" predict this? How is that thinkable? To make any specific prediction you need to model with some accuracy. You still need to be able to build O in model-world. The simulation still needs to run faster than the real thing. You still need model-laws. You still need the same overall increase in speed, and so forth. You still need P to be able to distinguish between many possible neural states. And you still need some translation 'book' which can translate model-neural states into "mental" terms so that we can say:
"I will do 2 push-ups tomorrow morning at 9."
Or perhaps you tell yourself, while standing on one leg tomorrow morning at 9, "Oh well, I predicted that 'quite accurately'".
Origenes
December 9, 2017 at 05:54 PM PDT
Origenes, We keep coming back to this point. There is no need to run computationally infeasible simulations in order to make predictions. You just need to be close enough for government work, as they say. For example, you can predict the result of 1 million balls being dropped in a Galton Board quite accurately by doing a trial run of 10,000 balls.
daveS
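The Galton Board trial-run idea takes only a few lines to demonstrate. A sketch of the point (ours): run a seeded 10,000-ball trial and scale the bin proportions up to predict the 1,000,000-ball histogram; the predicted shape peaks at the center, as a full run would.

```python
import random

def galton_counts(balls: int, pegs: int, rng: random.Random) -> list:
    # Each ball bounces left/right at `pegs` pegs; its final bin is the
    # number of rightward bounces, so counts follow a binomial distribution.
    counts = [0] * (pegs + 1)
    for _ in range(balls):
        counts[sum(rng.random() < 0.5 for _ in range(pegs))] += 1
    return counts

rng = random.Random(0)  # seeded so the trial run is reproducible
pegs = 12
trial = galton_counts(10_000, pegs, rng)

# Scale the trial up 100x to predict the 1,000,000-ball outcome:
predicted = [count * 100 for count in trial]
peak_bin = max(range(pegs + 1), key=predicted.__getitem__)  # near the center
```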
December 9, 2017 at 05:22 PM PDT
DaveS: I don’t see what problem(s) all this creates for someone who holds that everything that exists is physical.
I got that impression already. Are you reading my posts? A quick recap: In order to naturalize the prediction "2 push-ups tomorrow morning at 9" the materialist must assume the existence of a brain system that can predict the specific neural states from which the behavior "doing 2 push-ups" ensues tomorrow morning at 9. This system 'P' must predict system 'O' by running a simulation of O. A few of the requirements:
1. P can measure O.
2. P has a complex machine that can make a perfect model-O based on P's measurements. (Wow!)
3. Thus: P can simulate O and run the simulation much faster than the real thing with equal accuracy.
4. In P's model world there are also perfect model-chemical-laws in operation, which interact with model-O in the exact same way as the actual laws do with the actual O – only much, much faster. (Wow!!)
5. Each modeled item must have the exact same increase in speed, so as not to ruin the simulation.
6. In order for P to model the total state of O, P needs to collect all the data about O at once, at the same time. It is of no use to measure the left part of O first, start modelling that, and complete the model with later measurements of O's right part. In order to predict you must get the starting position just right.
7. And surely there is no time to waste building the model from the all-at-once data. If that takes too long, P will be predicting the past.
8. It must model unknown input from the environment.
Origenes
December 9, 2017 at 05:05 PM PDT
Origenes,
When I wrote in #56 “… an inference like that seems hard to avoid.” I was thinking about something along the lines of: “We members of the Materialist Society have to conclude that, due to a bizarre coincidence, at the same time the neural states of each person on earth were aimed at performing 2 push-ups. We have no explanation other than sheer dumb luck.”
I'm not sure what this means exactly. It could be because I missed last month's meeting. :-) In any case, I don't see what problem(s) all this creates for someone who holds that everything that exists is physical.
daveS
December 9, 2017 at 04:41 PM PDT
DaveS: The claim that two people performing pushups necessarily have the same very specific configuration of neural states is.
I agree. That's not a claim that I wish to defend. When I wrote in #56 "... an inference like that seems hard to avoid." I was thinking about something along the lines of: "We members of the Materialist Society have to conclude that, due to a bizarre coincidence, at the same time the neural states of each person on earth were aimed at performing 2 push-ups. We have no explanation other than sheer dumb luck." Surely I allow for the possibility that each brain performs the same task differently.
And perhaps you need a separate translation book for each person.
I would say, very likely.
Origenes
December 9, 2017 at 11:01 AM PDT
Origenes,
I would like to know why you consider e.g. “physics is the only (ultimate) actor” to be a “strong claim”(!) for materialism to make.
That's not especially strong, being essentially the definition of materialism. The claim that two people performing pushups necessarily have the same very specific configuration of neural states is. As I pointed out, two computers can perform the same task with completely different states. Two robots could be programmed to perform two pushups at the same time, with completely different states.
One could say that “task A” is a placeholder (refers in each computer to a specific state) for a specific (set of) computer configuration, but the referred configuration of both computers do not match each other. This means that we cannot write one “translation-book” for both computers. We need to make 2 separate books.
And perhaps you need a separate translation book for each person.
daveS
December 9, 2017 at 10:47 AM PDT
DaveS @57
DaveS: I don’t know of anyone who makes such strong claims (although I haven’t done a lot of reading on this). Can you cite such a person?
Before I do that, I ask for your view on materialism. I would like to know why you consider e.g. "physics is the only (ultimate) actor" to be a "strong claim"(!) for materialism to make. I would like to know.
You could present them with the exact same situation, and they could end up making exactly the same move, but their states would almost certainly be very different. They are both purely physical machines, but they perform the same task in very different ways.
The same task refers in computer A to configuration X and in computer B to configuration Y. Configuration X does not match Y. This means that we cannot write one "translation-book" for both computers. We need to make 2 separate books. Not sure what your point is.
Origenes
December 9, 2017 at 10:33 AM PDT
Origenes, I don't know of anyone who makes such strong claims (although I haven't done a lot of reading on this). Can you cite such a person? Suppose you have two computers playing chess (for example, one running Stockfish and one running AlphaZero, as happened recently). You could present them with the exact same situation, and they could end up making exactly the same move, but their states would almost certainly be very different. They are both purely physical machines, but they perform the same task in very different ways.
daveS
December 9, 2017 at 09:59 AM PDT
DaveS @55
DaveS: Hm, I’m not convinced it corresponds to any very specific configuration.
Our personal convictions play no role here. The question is if such correspondence is claimed by materialism. There is no doubt in my mind that this is the case. Materialism entails an ontology in which minds are the consequence of physics, and thus, "mind" (and anything deemed "mental") can only be placeholders for a more detailed causal account in which physics is the only (ultimate) actor.
DaveS: What if everyone on the planet decided to do two pushups at noon CST tomorrow? Would it follow under materialism that all our neural states are very similar?
Interesting proposal. To answer the second question: yes, an inference like that seems hard to avoid. What other option is there? A materialist cannot allow for an explanation in which 'mental power' shapes our neural states. According to materialism it has to be the other way round. So, yes, such a global event would pose problems for materialism ...
Origenes
December 9, 2017 at 09:33 AM PDT
“Doing 2 push-ups tomorrow morning at 9” is, in fact, much more specific than you seem to think. “Doing 2 push-ups” is a ‘placeholder’ for a detailed account of a very specific configuration of neural states. Given materialism, how could it not be?
Hm, I'm not convinced it corresponds to any very specific configuration. What if everyone on the planet decided to do two pushups at noon CST tomorrow? Would it follow under materialism that all our neural states are very similar? Suppose I did the pushups on two consecutive days. Would my neural states on those two days be very similar? I don't see why in either case. My brain is busy with lots of other tasks besides the pushups.
daveS
December 9, 2017 at 08:48 AM PDT
DaveS At first, post #42 didn't make any sense to me, so I didn't respond. Upon rereading it, I realized that the problem of talking past each other may be connected to the following: "Doing 2 push-ups tomorrow morning at 9" is, in fact, much more specific than you seem to think. "Doing 2 push-ups" is a 'placeholder' for a detailed account of a very specific configuration of neural states. Given materialism, how could it not be? So, predicting the occurrence of a very specific configuration of neural states at a very specific time is no minor accomplishment and, obviously, not something within the reach of "low-fidelity simulations". In short, there is nothing "modest", as you called it, about the 2 push-ups prediction.
Origenes
December 9, 2017 at 08:41 AM PDT
Origenes,
To be clear, we are talking about predicting future brain states, right? Specifically, we are talking about part ‘P’ of the brain predicting the state of another part ‘O’ of the brain. Correct?
I have in mind something much simpler, such as a computer. I'm saying that a computer can predict aspects of its own future behavior or state (approximately). To borrow one of KF's examples, if a computer is programmed to simulate a Galton Board, it could predict with very high accuracy the shape of the 'histogram' that results, assuming the number of balls is high. I would guess that my brain can similarly predict aspects of its future behavior or state, but I have no idea how that works.
daveS
December 9, 2017 at 08:37 AM PDT
DaveS @51
DaveS: On the other hand, one can make reliable predictions using low-fidelity simulations. There’s nothing infeasible about running a stick-figure cartoon simulation (metaphorically) of one’s future behavior as a way of making predictions.
To be clear, we are talking about predicting future brain states, right? Specifically, we are talking about part 'P' of the brain predicting the state of another part 'O' of the brain. Correct? You say that P can make reliable predictions of O by using "low-fidelity simulations". You also claim that "there's nothing infeasible about running a stick-figure cartoon simulation (metaphorically) of one's [brain part O's] future behavior". Before I respond, I would like to know: do I understand you correctly so far?
Origenes
December 9, 2017 at 08:04 AM PDT
Origenes,
At this point, it feels like an understatement to say that there are multiple insurmountable problems for self-prediction of the brain by simulation.
Certainly for self-prediction via 100% faithful emulation. But didn't we already dismiss that as clearly being impossible? On the other hand, one can make reliable predictions using low-fidelity simulations. There's nothing infeasible about running a stick-figure cartoon simulation (metaphorically) of one's future behavior as a way of making predictions.
daveS
December 9, 2017 at 07:09 AM PDT
KF @49 Very nice video and 'game.' Interesting and fun. In relation to the topic of prediction and simulation, your point is, if I understand you correctly, that several systems, or aspects of systems, present in the brain can only be modeled by means of statistical, probabilistic analysis. If so, then your point makes perfect sense to me. And, obviously, modeling based on statistics and probability raises many concerns with regard to accuracy, origin & relevance of statistical data, not to mention the systems needed to perform all this. At this point, it feels like an understatement to say that there are multiple insurmountable problems for self-prediction of the brain by simulation.
Origenes
December 9, 2017 at 06:35 AM PDT
Origenes, I present to you a classic, six-sided fair die. Eight corners, twelve edges, sensitively dependent on effects of impacts on same. Reduced thereby to a flat random distribution, per the indifference principle. In short, one is led to statistical probability distributions driven by chaos in a system that is in principle utterly mechanically deterministic. But now, define the output as the sum of the uppermost faces and bring to bear a second die, then a third and so forth. Lo and behold, we have here a bell-shaped distribution of possibilities with highly improbable far tails. Similarly, try the Quincunx cascade, which uses a hopper of beads and a triangular array of rods leading to columnar stacks: https://www.youtube.com/watch?v=AUSKTk9ENzg (sim is here: http://www.mathsisfun.com/data/quincunx.html ). Very soon, on pouring a few thousand beads, one gets very close to a Gaussian-type curve . . . which is composed of cumulative effects of small +/- effects of random character. Order out of chaos, leading to the need for statistical, probabilistic analysis. KF
kairosfocus
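KF's dice illustration is easy to reproduce. A seeded sketch (ours): a single die is flat, but the sums of several dice pile up into a bell shape, with the far tails (sums near 5 or 30 for five dice) very rare.

```python
import random
from collections import Counter

rng = random.Random(42)  # seeded so the run is reproducible

def roll_sum(dice: int) -> int:
    # Sum of `dice` independent fair six-sided dice.
    return sum(rng.randint(1, 6) for _ in range(dice))

# 100,000 throws of five dice: possible sums run from 5 to 30, but the
# empirical distribution peaks near the mean of 17.5 with thin far tails.
sums = Counter(roll_sum(5) for _ in range(100_000))
mode = max(sums, key=sums.get)  # the most frequent sum, near 17.5
```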
December 9, 2017 at 03:12 AM PDT
KF @46
KF: Some things are far too nonlinear [not even log-lin or log-log or complex domain transforms make them tractable]
Cannot be modeled. Not all things can be modeled. You cannot have a simpler more efficient model for everything. That is a logical impossibility.
KF: ... sensitively dependent (e.g. so minor fluctuations can be rapidly drastically amplified via the butterfly effect), prone to radical branching and more to have the sort of easy predictability envisioned.
All these influences of various types must be modeled in order to improve their speed! Why would it be possible to model and increase the speed of everything? It cannot be done. // Note that all the models of the various parts must provide the exact same increase in speed! It ruins the simulation if that is not the case ... // - - - - - A minor detail: in order for P to model the total state of O, P needs to collect all the data about O at once, at the same time. It is of no use to measure the left part of O first, start modelling that, and complete the model with later measurements of O's right part. In order to predict you must get the starting position just right. And surely there is no time to waste building the model from the all-at-once data. If that takes too long, P will be predicting the past.
Origenes
December 8, 2017 at 12:46 PM PDT
Some further thoughts on the requirements of prediction (see #44, #45):
1. P can predict O.
2. P can measure O.
3. P has a complex machine that can make a perfect model-O based on P's measurements. (Wow!)
4. Thus: P can simulate O.
5. In P's model world there are also perfect model-chemical-laws in operation, which interact with model-O in the exact same way as the actual laws do with the actual O – only much, much faster. (Wow!!)
Origenes
December 8, 2017 at 12:15 PM PDT
Origenes, that was my basic point. Some things are far too nonlinear [not even log-lin or log-log or complex domain transforms make them tractable], sensitively dependent (e.g. so minor fluctuations can be rapidly drastically amplified via the butterfly effect), prone to radical branching and more to have the sort of easy predictability envisioned. KF
kairosfocus
December 8, 2017 at 10:54 AM PDT
KF @40
KF: A system’s ability to reliably predict itself is tantamount to the challenge in Tarski’s theorem on needing a higher level schemes to provide proofs for the lower one leading to irreducible complexity which then feeds into Godel’s incompleteness challenges.
In order for system P to predict system O, the following is presupposed:
- P must be able to represent all O-states in a model-world.
- P must be able to observe/measure O.
- P must be able to organize its model O-components in model-world, based on measurement of O.
- P's model-O must perform (much) faster than the real O.
I don't know about you, but I see some challenges for the 'prediction by simulation' hypothesis ...
Origenes
December 8, 2017 at 08:47 AM PDT