
Origenes and the argument from Self-Prediction


Origenes has put up an interesting argument that is worth pondering as food for thought.

It’s Friday, so this should be a good note to start our weekend on:

>>>>>>>>>>>>>>

ORIGENES: Here I will argue that self-prediction cannot be accommodated by materialism. In daily life we all routinely engage in acts of self-prediction — ‘tomorrow morning at 9 o’clock I will do 2 push-ups’, ‘I will do some Christmas shopping next Friday’ … and so forth. The question is: how does materialism explain that familiar phenomenon? Given that specific behavior (e.g. doing 2 push-ups) results from specific neural states, how is it that we can predict its occurrence?

The fact that one can predict her/his own behavior suggests that we have mental control over the physical, which is obviously unacceptable for the materialist, who claims the opposite to be true. Therefore the task set out for the materialist is to naturalize self-prediction. And in doing so there seems to be no other option available than to argue for the existence of some (physical) system, capable of predicting specific neural states and the ensuing behavior. But here lies a problem. There is only one candidate for the job, the brain, but, as I will argue decisively, the brain cannot do it.

The Argument from Self-prediction

1. If materialism is true, then human behavior is caused by neural events in the brain and environmental input.

2. The brain cannot predict future behavior with any specificity.

3. I can predict future behavior with specificity.

Therefore,

4. Materialism is false.

– – – –

Support for 2

In his book ‘The Sensory Order’ (1976), Von Hayek argues that, in order to predict a system, you need a distinct system with a higher degree of complexity. His argument can be summarized as follows:

… Prediction of a system O requires classification of the system’s states.
If these states can differ in n different aspects, that is, they can be subsumed under n different predicates, there are 2^n different types of states a classificatory system P must be able to distinguish. As the number of aspects with regard to which states might differ is an indicator of O’s complexity and as the degree of complexity of a classificatory system P is at least as large as the number of different types of states it must be able to distinguish, P is more complex than O.
[The SAGE Handbook of the Philosophy of Social Sciences, eds. Ian C. Jarvie and Jesús Zamora-Bonilla]

Von Hayek then goes on to conclude that:

No system is more complex than itself. Thus: No system can predict itself or any other system of (roughly) the same degree of complexity (no self-referential prediction).

IOWs the brain cannot predict itself, because, in order to predict the brain, one needs a system with a higher degree of complexity than the brain itself.
– In order to predict specific behavior, the brain cannot run simulations of possible future neuronal interactions, because the brain is simply too complex. The human brain is perhaps the most complex thing in the universe. The average brain has about 100 billion neurons. Each neuron fires (on average) about 200 times per second. And each neuron connects to about 1,000 other neurons.

– A prediction of specific behavior would also require predicting environmental input, which lies beyond the brain’s control. We, as intelligent agents, can, within limits, ignore a multitude of environmental inputs and stick to the plan — ‘tomorrow morning at 9 o’clock I will do 2 push-ups, no matter what’ — but the brain cannot do this. The intractable environmental (sensory) input and the neural firing that results from it necessarily influence the state of the brain tomorrow morning at 9 o’clock.

>>>>>>>>>>>

What do you think? END

Comments
kairosfocus @ 1
Origines raises the issue of Self-prediction and the materialist view of the mind as the brain in action. Food for thought.
Yes, indeed. Here are a few.
ORIGENES: Here I will argue that self-prediction cannot be accommodated by materialism. In daily life we all routinely engage in acts of self-prediction — ‘tomorrow morning at 9 o’clock I will do 2 push-ups’, ‘I will do some Christmas shopping next Friday’ … and so forth. The question is: how does materialism explain that familiar phenomenon? Given that specific behavior (e.g. doing 2 push-ups) results from specific neural states, how is it that we can predict its occurrence?
For example, ‘tomorrow morning at 9 o’clock I will do 2 push-ups’ or ‘I will do some Christmas shopping next Friday’ could be either statements of intent or predictions but they are not the same thing. A statement of intent is not a claim about what is or will be. It is a formulation of purpose and as such is neither true nor false. A prediction, being a forecast of a future state of affairs based on a current state of affairs, is capable of being true or false.
The Argument from Self-prediction
1. If materialism is true, then human behavior is caused by neural events in the brain and environmental input.
2. The brain cannot predict future behavior with any specificity.
3. I can predict future behavior with specificity.
Therefore,
4. Materialism is false.
The argument can be attacked on the grounds that 2 and 3 imply an unstated premiss which is that the conscious "I" or mind is a separate entity from the operations of the physical brain. Since this is one of the key points at issue, this is begging the question and the conclusion does not necessarily follow. It is also unclear how much "specificity" is required to warrant conclusions about predictive behavior.
Von Hayek then goes on to conclude that:
No system is more complex than itself. Thus: No system can predict itself or any other system of (roughly) the same degree of complexity (no self-referential prediction).
Even if we concede Hayek's argument that a system cannot incorporate an exact one-to-one representation of itself, that does not preclude the possibility of forecasting based on models. The computer climate models used by meteorologists to forecast weather trends do not and cannot represent the movement of each individual molecule of gas, droplet of water or piece of particulate, but they are still able to predict weather for the next 3-5 days with reasonable accuracy.

Our conscious day-to-day experience of reality can also be viewed as an incomplete model of what is really out there, but it is sufficient to enable us to navigate through it in reasonable safety, which involves making predictions about it. Our internal model is assumed to be based on information gathered by our senses, but those senses only give us limited access to what is out there. Our eyes can only detect light from the visible waveband. We cannot see even near infra-red or ultra-violet, although there are other creatures that can. Dogs and cats can hear sounds that are inaudible to us. On smell, a dog-handler once told me, as an illustration, that where you or I could recognize the smell of a pizza, a dog could identify every single ingredient that went into the making of that pizza.

Our internal model also necessarily includes a representation or model of ourselves. For example, in order to navigate across a landscape we must know not just the landscape but also where we are on it and how we are able to move across it. But it is just a model and necessarily incomplete. We like to think we know ourselves, but we are not aware, for example, of the flow of blood through the millions of tiny capillary blood vessels; we feel nothing of the minute-to-minute processes going on in our liver, kidneys or pancreas; nor can we detect the firing of each of the billions of neurons in our brain on a second-to-second basis. At a conscious level, we seem to have a very detailed awareness of what is happening, but we are not directly conscious of all the processing that feeds data into that awareness.

If von Hayek is right and we cannot contain an exact full representation of ourselves, then we ourselves are just a construct, a partial model of all that we are in reality. We can make predictions based on that model or set of models. They will have varying degrees of accuracy, but it seems to be the best we can hope for.
Seversky
December 1, 2017, 02:12 PM PDT
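Seversky's point that incomplete models still support useful forecasts can be illustrated with a minimal sketch (Python; the population and sample sizes are arbitrary assumptions for illustration, not anything from the thread): a program forecasting an aggregate property of a large "system" from a tiny sample, without ever representing the whole.

```python
import random

# Minimal sketch: forecast an aggregate property of a large "system"
# from a small sample, without representing every element of it.
random.seed(42)
population = [random.gauss(100.0, 15.0) for _ in range(1_000_000)]

# The "model" sees only 0.1% of the system's state...
sample = random.sample(population, 1_000)
predicted_mean = sum(sample) / len(sample)

# ...yet its forecast of the aggregate is close to the true value.
true_mean = sum(population) / len(population)
print(f"predicted mean: {predicted_mean:.2f}")
print(f"true mean:      {true_mean:.2f}")
```

The model is radically incomplete, as Seversky says of climate models, yet its coarse-grained forecast is serviceable.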
I think Atom has it correct. When we humans make these predictions about push-ups or going shopping, it's not any great mental feat. It's not as if we are running an accelerated, faithful simulation of ourselves, which, if materialism is true, would be impossible. We use heuristics, induction, extrapolation from small samples, &c. So I doubt that the von Hayek passages support premise 2.
daveS
December 1, 2017, 01:57 PM PDT
nm
daveS
December 1, 2017, 01:11 PM PDT
Maybe I'm missing something obvious, but any system of n parts can store (model?) up to 2^n states. Assume system O has n aspects, each modeled by a single bit in O. (We'll assume a computational architecture, for the sake of argument.) For P to model O's behavior, it is either sufficient to model all of O's bits, or it is not sufficient. If it is, then obviously O is always its own model (since it perfectly "models" its own state). If it is not sufficient, then this is something that needs to be shown. But counting the number of bits does not establish that, since you only need log_2 m bits to model m distinct states. While we may be able to rule out a part modeling the whole, I am not sure we can rule out the whole modeling the whole or a part modeling another part. I'm open to being persuaded otherwise, though, if I've missed something.
Atom
December 1, 2017, 12:45 PM PDT
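Atom's counting point, that a system with n binary aspects has 2^n state types while only about log2(m) bits are needed to label m distinct states, is easy to check numerically. A minimal sketch (Python, purely illustrative):

```python
import math

def bits_needed(m: int) -> int:
    """Minimum number of bits required to distinguish m distinct states."""
    return math.ceil(math.log2(m))

n = 10                            # a system with n binary aspects...
num_states = 2 ** n               # ...has 2^n distinguishable state types
print(num_states)                 # 1024

# But a classifier only needs log2(m) bits to label m states:
print(bits_needed(num_states))    # 10, i.e. n again -- no blow-up
print(bits_needed(1_000_000))     # 20 bits suffice for a million states
```

This is why counting state types alone does not settle whether P must be more complex than O.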
DaveS @8
DaveS: Say the tree is in fact a path with 100 million nodes (i.e., essentially a list) and the “prediction” is for the time required to traverse the entire tree. The computer program could perform a test run using a smaller tree with only 5 million nodes, measure the elapsed time, then multiply this by 20 to predict the time required to traverse the full sized tree. This is a very simple example, but illustrates how a computer could make self-predictions.
This analogy would shed light on what could perhaps be called “linear” behavior.
Paraphrasing: at the moment I “do” 0.000001 push-ups, in 10 seconds … (just a moment, let me run a simulation) … okay, right … it will be 0.000005 push-ups, so, tomorrow morning at 9 o’clock I will be doing 2 push-ups.
Linearity reduces the daunting task of predicting to humdrum multiplying … However, I would argue that neural firing and our behavior are rarely linear in that sense.
DaveS: A more familiar example: When you download a large file, often your software will give you a running “countdown” which displays the estimated time until the download is complete (and when it will finish downloading). That’s a self-prediction as well.
One thing is for sure: this is another “linear” example.
DaveS: No human can predict the entirety of his/her brain state at any time, so I think part of Bob O’H’s point is that the Von Hayek result is not really useful here.
I do NOT claim that a human can predict his/her brain state. My claim, which is scientifically verifiable (!), is that humans (routinely) predict behavior. However, according to materialism, behavior results from a brain state. If that is true, if materialism is correct, then there is no realm independent from brain states that can predict behavior.
Origenes
December 1, 2017, 12:30 PM PDT
Bob O'H: "the problem with using the Von Hayek quotes is that they assume that the system is predicting itself fully. But ‘tomorrow morning at 9 o’clock I will do 2 push-ups’ is only predicting a part of the system."

If, instead of the healthy ‘tomorrow morning at 9 o’clock I will do 2 push-ups’, a human predicts ‘tomorrow morning at 9 o’clock I will commit suicide’, he is predicting the whole state of the system. I think the Von Hayek quotes stand.
Eugen
December 1, 2017, 11:25 AM PDT
Origenes,
How large a tree are we talking about? How large is the search space?
Much, much smaller than the number of possible brain states, obviously, but the size of the tree is not relevant to my point. Say the tree is in fact a path with 100 million nodes (i.e., essentially a list) and the "prediction" is for the time required to traverse the entire tree. The computer program could perform a test run using a smaller tree with only 5 million nodes, measure the elapsed time, then multiply this by 20 to predict the time required to traverse the full sized tree. (Or, more realistically, the computer could perform a series of test runs and use some kind of regression to obtain a more accurate formula for the total time required on the full sized tree.) This is a very simple example, but it illustrates how a computer could make self-predictions.

A more familiar example: when you download a large file, often your software will give you a running "countdown" which displays the estimated time until the download is complete (and when it will finish downloading). That's a self-prediction as well.

No human can predict the entirety of his/her brain state at any time, so I think part of Bob O'H's point is that the Von Hayek result is not really useful here.
daveS
December 1, 2017, 10:09 AM PDT
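daveS's scaled-down test run is easy to sketch in code. A minimal illustration (Python; the 5-million and 100-million node sizes come from his example, while the linear traversal stand-in and the timing approach are assumptions for the sketch):

```python
import time

def traverse(n_nodes: int) -> None:
    """Stand-in for visiting every node of a path-like tree."""
    total = 0
    for i in range(n_nodes):
        total += i

# Time a small test run...
start = time.perf_counter()
traverse(5_000_000)
elapsed_small = time.perf_counter() - start

# ...then extrapolate linearly to the full-sized problem:
predicted_full = elapsed_small * 20  # 100M nodes / 5M nodes = 20

# Check the "self-prediction" against an actual full run.
start = time.perf_counter()
traverse(100_000_000)
actual_full = time.perf_counter() - start

print(f"predicted: {predicted_full:.2f} s, actual: {actual_full:.2f} s")
```

A download-time countdown works the same way: extrapolate from the throughput observed so far, which is exactly the kind of linearity Origenes questions above.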
DaveS @3
DaveS: Suppose you are using a computer to search through a large tree, a process that will take several days.
Dave, you are good with numbers. How large a tree are we talking about? How large is the search space?
The average brain has about 100 billion neurons. Each neuron fires (on average) about 200 times per second. And each neuron connects to about 1,000 other neurons.
If I am not mistaken, that results in (on average) 20,000,000 billion bits of information per second, and (obviously) a lot more over several days. Now, in line with Von Hayek, if these neural states can differ in n different aspects, there are 2^n different types of states a prediction system must be able to distinguish. My take is that, in the case of the brain, n is a horrific number. Do you agree?

And BTW, how about the intractable environmental input? I am talking about sensory input and the effects of, say, food, drinks and all those bacteria that we carry with us. If the brain starts running simulations of itself, it has to factor in those as well. And — and this is important — the brain does not control the input from the environment. The environment inputs unknown variables.
Origenes
December 1, 2017, 09:40 AM PDT
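For what it's worth, Origenes's back-of-the-envelope figure can be reproduced mechanically, along with the combinatorial blow-up he points to. A minimal sketch (Python; the neuron counts and rates are the OP's round figures, not established measurements):

```python
# Round numbers from the OP (real estimates vary widely).
neurons = 100e9          # ~100 billion neurons
firings_per_sec = 200    # average firings per neuron per second
connections = 1_000      # connections per neuron

events_per_sec = neurons * firings_per_sec * connections
print(f"{events_per_sec:.0e} events/sec")  # 2e+16, i.e. 20,000,000 billion

# Hayek-style classification: n binary aspects -> 2^n state types.
# Even a modest n makes exhaustive classification hopeless:
n = 300
print(2 ** n > 1e80)  # True: 2^300 already exceeds common estimates of
                      # the number of atoms in the observable universe
```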
Pardon, corrected a mis-spelt name.
kairosfocus
December 1, 2017, 09:13 AM PDT
Bob O'H @2
Bob O'H: the problem with using the Von Hayek quotes is that they assume that the system is predicting itself fully. But ‘tomorrow morning at 9 o’clock I will do 2 push-ups’ is only predicting a part of the system.
The brain is highly interconnected, so it would be difficult to predict part of it while ignoring the rest.
Bob O'H: Another problem is that the n states might not be independent of each other or of current conditions. e.g. they might be points in space.
Can you elaborate please? I do not understand your point.
Origenes
December 1, 2017, 08:27 AM PDT
I predict that I will put off until tomorrow what I could do today.
Mung
December 1, 2017, 08:24 AM PDT
Suppose you are using a computer to search through a large tree, a process that will take several days. Couldn't the computer be programmed to estimate (shortly after beginning) approximately when the search will arrive at a specified node of the tree? Or, similarly, what node it will be checking at a specified time in the future?

Edit: This relates to Bob O'H's comment, I believe. It's no problem to predict certain aspects of the computer's behavior. Predicting its complete state is another matter.

Note to KF: Origines -> Origenes
daveS
December 1, 2017, 04:35 AM PDT
The problem with using the Von Hayek quotes is that they assume that the system is predicting itself fully. But ‘tomorrow morning at 9 o’clock I will do 2 push-ups’ is only predicting a part of the system. Another problem is that the n states might not be independent of each other or of current conditions, e.g. they might be points in space.
Bob O'H
December 1, 2017, 03:30 AM PDT
Origines raises the issue of Self-prediction and the materialist view of the mind as the brain in action. Food for thought.
kairosfocus
December 1, 2017, 03:02 AM PDT