Uncommon Descent: Serving The Intelligent Design Community

Origenes and the argument from Self-Prediction


Origenes has put up an interesting argument that gives us food for thought.

It’s Friday, so this should be a good way to start our weekend:

>>>>>>>>>>>>>>

ORIGENES: Here I will argue that self-prediction cannot be accommodated by materialism. In daily life we all routinely engage in acts of self-prediction — ‘tomorrow morning at 9 o’clock I will do 2 push-ups’, ‘I will do some Christmas shopping next Friday’ … and so forth. The question is: how does materialism explain that familiar phenomenon? Given that specific behavior (e.g. doing 2 push-ups) results from specific neural states, how is it that we can predict its occurrence?

The fact that one can predict her/his own behavior suggests that we have mental control over the physical, which is obviously unacceptable for the materialist, who claims the opposite to be true. Therefore the task set out for the materialist is to naturalize self-prediction. And in doing so there seems to be no other option available than to argue for the existence of some (physical) system, capable of predicting specific neural states and the ensuing behavior. But here lies a problem. There is only one candidate for the job, the brain, but, as I will argue decisively, the brain cannot do it.

The Argument from Self-prediction

1. If materialism is true, then human behavior is caused by neural events in the brain and environmental input.

2. The brain cannot predict future behavior with any specificity.

3. I can predict my future behavior with specificity.

Therefore,

4. Materialism is false.

– – – –

Support for 2

In his book ‘The Sensory Order’ (1976), Von Hayek argues that, in order to predict a system, you need a distinct system with a higher degree of complexity. His argument can be summarized as follows:

… Prediction of a system O requires classification of the system’s states.
If these states can differ in n different aspects, that is, they can be subsumed under n different predicates, there are 2^n different types of states a classificatory system P must be able to distinguish. As the number of aspects with regard to which states might differ is an indicator of O’s complexity and as the degree of complexity of a classificatory system P is at least as large as the number of different types of states it must be able to distinguish, P is more complex than O.
[‘The SAGE Handbook of the Philosophy of Social Sciences’, edited by Ian C. Jarvie and Jesus Zamora-Bonilla]

Von Hayek then goes on to conclude that:

No system is more complex than itself. Thus: No system can predict itself or any other system of (roughly) the same degree of complexity (no self-referential prediction).

IOWs the brain cannot predict itself, because, in order to predict the brain, one needs a system with a higher degree of complexity than the brain itself.
– In order to predict specific behavior, the brain cannot run simulations of possible future neuronal interactions, because it is simply too complex. The human brain is perhaps the most complex thing in the universe. The average brain has about 100 billion neurons. Each neuron fires (on average) about 200 times per second. And each neuron connects to about 1,000 other neurons. (See the sketch after the next point.)

– A prediction of specific behavior would also require predicting environmental input, which lies beyond the brain’s control. We, as intelligent agents, can, within limits, ignore a multitude of environmental inputs and stick to the plan — ‘tomorrow morning at 9 o’clock I will do 2 push-ups, no matter what’ — but the brain cannot do this. The intractable environmental (sensory) input and the neural firing that results from it necessarily influence the state of the brain tomorrow morning at 9 o’clock.
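As a rough check on the figures in the first point above, here is a back-of-the-envelope sketch (illustrative Python; the neuron counts and rates are the averages quoted, not new data):

```python
# Rough arithmetic from the figures quoted above (illustrative only).
neurons = 100e9           # ~100 billion neurons
firing_rate_hz = 200      # average firings per neuron per second
connections = 1000        # average connections per neuron

spikes_per_second = neurons * firing_rate_hz
print(f"{spikes_per_second:.0e} spikes/s")          # 2e+13

# Each firing reaches ~1,000 downstream neurons:
synaptic_events = spikes_per_second * connections
print(f"{synaptic_events:.0e} synaptic events/s")   # 2e+16

# Predicting tomorrow morning from ~24 hours out means covering:
print(f"{synaptic_events * 86_400:.0e} events")     # ~2e+21
```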

>>>>>>>>>>>

What do you think? END

Comments
// follow-up #43 //
Just imagine running, in the now, a simulation of all the events in a subcomponent of the brain within the time frame between now and tomorrow morning at 9 o’clock — see #7.
I would like to note that the idea of 'prediction by simulation' presupposes that there is a way to run a predictable sequence much, much faster. In our example, prediction by simulation presupposes that the events in a subcomponent of the brain (in the time frame between now and tomorrow morning at 9) can be compressed into a much shorter time frame during simulation. Here I would like to argue that this seems physically impossible to me. Nothing can run simulations of neural activity faster than the activity itself unfolds in reality. A single neuron fires (on average) 200 times a second and is connected (on average) to 1,000 other neurons. Nothing (in the brain) can simulate this 10,000 times faster in order to get a prediction of its future state. IOWs the whole notion of 'prediction by simulation' does not compute. Take-home message: a prediction system must be faster than the system that it predicts. Nothing is faster than itself. Nothing can predict itself.
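To put a number on the required speed-up (an illustrative calculation, with an assumed 10-second computing budget):

```python
# Toy calculation (illustrative): the speed-up 'prediction by simulation' needs.
horizon_s = 24 * 3600    # simulate the next 24 hours of brain activity
budget_s = 10.0          # assumed time allowed to produce the prediction

speedup = horizon_s / budget_s
print(speedup)           # 8640.0 -- the simulator must run ~8,640x faster
                         # than the brain it simulates
```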
Origenes
December 8, 2017 at 07:45 AM PDT
Atom @39, Thank you for your response.
Atom: To distinguish between 2^300 states we need exactly log_2(2^300) = 300 bits.
It takes 300 bits to describe one possible state out of 2^300. Describing one possible state does not magically endow a predicting system with the ability to distinguish between 2^300 states. So, I think your assumption is incorrect.
Atom: This was my original point. If the brain is a computational unit of n bits, it has 2^n possible states, but any system of n bits (including itself, obviously) can represent any one of its states.
Every system of n bits has a state of n bits, and, in being in that particular state, it represents one of its possible states. I can agree with that. It is a truism. But how does it relate to prediction? How does it help?
Atom: That is what I meant by saying just counting bits does not seem to rule out the brain predicting itself.
Can you elaborate some more? You may very well be on to something, but I am not able to grasp your point. Which “just counting bits” specifically do you object to?
Atom: But, what if a system is only predicting the change of one of its subcomponents?
Indeed, that is a possibility. In this case, the brain splits itself up into two separate systems: a prediction system and a system to be predicted. As I mentioned in post #5, the brain is notoriously highly interconnected, so there is a problem for this hypothesis right at the start. Moreover, given the startling complexity of the brain, I trust that prediction of any subcomponent of the brain will run into the familiar problem of too many states to handle. Just imagine running, in the now, a simulation of all the events in a subcomponent of the brain within the time frame between now and tomorrow morning at 9 o’clock — see #7. And, of course, there is still the problem of the environment, which inputs unknown variables.
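A toy sketch of the representing-vs-classifying distinction, assuming (as the reply above does) that classification means being able to label every possible state:

```python
# Toy contrast (illustrative): holding one state vs. classifying all states.
n = 300

# n bits suffice to *hold* any single one of the 2**n possible states:
state = 0b101 << 42          # an arbitrary value within the 300-bit range
assert state < 2 ** n

# But a classifier that assigns a label to *every* possible state is a
# mapping over the whole state space, i.e. a table with 2**n entries:
table_entries = 2 ** n
print(f"{table_entries:.3e}")   # ~2.037e+90 entries
```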
Origenes
December 8, 2017 at 07:14 AM PDT
KF,
Content and depth explosion warning
Well, there was an explosion of something, anyway. :P Once again, I think Atom has hit the nail on the head. The examples of human predictions we have seen are very modest, and deal only with 'subcomponents' of our future behavior.
But unfortunately, the sort of systems we have in mind are exactly the opposite of such a constraint, and are thus credibly inherently unpredictable in detail save in the very narrow short run, and that by simple projection of trend and hoping we do not hit a sufficiently fast-acting perturbation and zone of sensitively dependent nonlinear action.
Do you have some examples of human self-predictions of this sort? I would be especially interested in systems which can be modeled using a computer, and where humans clearly outperform the machine.
daveS
December 8, 2017 at 05:59 AM PDT
Which premise do you not agree with?

1. Prediction of X’s behavior is about X.
2. To be about X presupposes observing X.
3. Observing X presupposes a position distinct from X.
4. No X is distinct from itself.

Therefore,

5. X cannot predict itself.
Origenes
December 8, 2017 at 04:19 AM PDT
Atom, I doubt that 1-bit representations of past/present or future states, much less evolution from one to the next, are relevant or even credible, save as contextually meaningful switches to much deeper and far more elaborate reference bases, i.e. quasi-addresses that point to lookup tables that then trigger jumps to elaborations of description, information and cybernetic controls for things interacting with the world. And, the use of switches has in it an implicit system architecture, which would itself be organisation codable as a chain of yes/no steps of description towards distinct identity in some description language, cf. AutoCAD etc. (And ‘description language’ is itself a short label for a further world of cybernetic, communication, linguistic, epistemological and even worldview issues. Content and depth explosion warning.)

A system’s ability to reliably predict itself is tantamount to the challenge in Tarski’s theorem on needing higher-level schemes to provide proofs for the lower one, leading to irreducible complexity, which then feeds into Gödel’s incompleteness challenges. Where an algorithm/functional organisation of high reliability can be regarded as in effect a warranted system, a proof in a loose sense, cf. related approaches to proving reliability of computational algorithms. But I have in mind much broader computational substrates as [loose sense] cybernetic architectures, e.g. analogue computers and neural-network-based systems, not just conventional digital computers, duly tied to sensors, effectors, feedback nets, memories, supervisors etc.

What I suggest is that where a system expresses highly compressed mechanical or at least mathematically describable laws and reasonably well-behaved stochastic patterns, and ruling out outright catastrophic collapse, we can model future trajectories with confidence IF we are not dealing with sensitive dependence on initial and intervening conditions triggering runaway positive feedbacks or butterfly effects that rapidly destroy any ability to predict in detail, cf. weather. But unfortunately, the sort of systems we have in mind are exactly the opposite of such a constraint, and are thus credibly inherently unpredictable in detail save in the very narrow short run, and that by simple projection of trend and hoping we do not hit a sufficiently fast-acting perturbation and zone of sensitively dependent nonlinear action. KF
kairosfocus
December 7, 2017 at 11:13 PM PDT
Hi KF, Good to stop in. Origenes, Thank you for your response. You wrote:
However, this seemingly modest demand (‘the ability to classify possible states must be present in any prediction system’) can quickly run into huge numbers. If the brain can differ in, say, 300 different aspects, we get 2^300 possible states and we need an enormous capacity to house this huge number — see #23.
To distinguish between 2^300 states we need exactly log_2(2^300) = 300 bits. This was my original point. If the brain is a computational unit of n bits, it has 2^n possible states, but any system of n bits (including itself, obviously) can represent any one of its states. That is what I meant by saying just counting bits does not seem to rule out the brain predicting itself.

However, representing one state is not the same as making a conditional prediction, which would seem to require at minimum two states represented (past and future, from and to). In that case, there are 2^300 * 2^300 = 2^600 transitions possible, in which case you'd need at least 600 bits to represent the predicted transition. Thus, no system would be able to predict its own transitions, if it needed to as a whole.

But what if a system is only predicting the change of one of its subcomponents? Then it could use some of its bits to represent the past state of that subcomponent, and another set of its bits to represent the future state of that subcomponent. In that case, a system would be able to accurately predict its future state, at least of a portion of itself. And if that portion is connected to behavioral apparatuses (such as a single neuron that triggers a specific behavior), then it could also predict its future behavior to some extent.

That's how I currently see it, but I am not infallible, so feel free to challenge this if I missed something.
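A small sketch of this bit-counting (illustrative Python, with n = 300 and an assumed 10-bit subcomponent):

```python
# Illustrative bit-counting for the argument above.
n = 300   # bits in the whole system; 2**n possible states
k = 10    # bits in a small subcomponent (assumed for the example)

bits_per_state = n            # naming one of 2**n states takes n bits
bits_per_transition = 2 * n   # a (from, to) pair of whole-system states

# The whole system cannot hold a representation of its own transition:
print(bits_per_transition > n)   # True: 600 bits do not fit in 300

# But a subcomponent transition needs only 2*k bits:
print(2 * k <= n)                # True: 20 bits fit easily in 300
```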
Atom
December 7, 2017 at 09:03 PM PDT
Hi Atom, long time no see. KF
kairosfocus
December 4, 2017 at 03:38 PM PDT
Atom @36 Thank you for your interest. Let’s take things one step at a time. Von Hayek again:
… Prediction of a system O requires classification of the system’s states. If these states can differ in n different aspects, that is, they can be subsumed under n different predicates, there are 2^n different types of states a classificatory system P must be able to distinguish.
I understand Von Hayek like this: Suppose you have a system of two components. The first component can be either A or B. The second component can be either C or D. Von Hayek would say that this system can differ in 2 aspects. So, in this case, n = 2. According to Von Hayek there are now 2^2 different types of states a classificatory system P must be able to distinguish. So, 4 different types of states: AC, AD, BC and BD. IOWs any prediction system must at least be able to distinguish between those 4 possible different states, in order to be able to predict our 2-component system. If a prediction system cannot classify them, then prediction of our 2-component system is not possible in principle. In my understanding of Von Hayek’s argument, he does not even speak about how to proceed from the ability to classify possible states to making an actual prediction. All he does is point out that being able to classify the states is required. However, this seemingly modest demand (‘the ability to classify possible states must be present in any prediction system’) can quickly run into huge numbers. If the brain can differ in, say, 300 different aspects, we get 2^300 possible states and we need an enormous capacity to house this huge number — see #23.
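A minimal sketch of this enumeration (illustrative Python):

```python
from itertools import product

# Each aspect can take one of two values (the example above).
aspects = [('A', 'B'), ('C', 'D')]   # n = 2 aspects

# A classificatory system must distinguish all 2**n combined states:
states = [''.join(s) for s in product(*aspects)]
print(states)        # ['AC', 'AD', 'BC', 'BD']
print(len(states))   # 4 == 2**2

# With n = 300 binary aspects the count explodes:
print(f"{2 ** 300:.3e}")   # ~2.037e+90 state types
```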
Origenes
December 4, 2017 at 01:39 PM PDT
Hi Origenes, Sorry for the delayed response, but that's life. You wrote:
If O is completely static/inert — can only have the current state it is in — then, obviously, there is nothing to predict.
So the argument is that you need at least two sets of n bits (2n in total) to model the system, one set of bits for the "from" state and one set of bits for the "to" state? (And by modeling both to and from, we model a transition.) Is that the gist of your argument? You also wrote:
In the case of the brain, is it really necessary “to show” that O can have different states? I suppose that I have misunderstood what you are saying. Can you elaborate?
What I was saying is that the claim "Modeling the exact bit configuration / temporal state of O is not sufficient for modeling O's behavior" is one that needs to be shown; one would need to give evidence for it. It is an easy assertion to make, but one that is harder to prove. I will re-read the Hayek passage and see if it clears up my doubt. Thanks.
Atom
December 4, 2017 at 12:47 PM PDT
CR @31: Well, go ahead. This indifference to specifics does not apply to the brain because…..?
Because, assuming that specific behavior (e.g. ‘doing 2 push-ups’, 'doing Christmas shopping') results from specific neural configurations, the specificity of neural states does matter. It is relevant to the ensuing behavior how many times neurons fire and where they are. This is in contrast to water boiling, which occurs irrespective of specific states of water molecules. In theory one could change the position of each water molecule of boiling water without stopping it from boiling. Needless to say, this indifference to specifics does not apply to the brain.
Origenes
December 4, 2017 at 11:54 AM PDT
Origenes,
Put another way, a person can be adamant about doing 2 push-ups at 9 in the morning and, in being so, can block out a host of distracting influences to fulfill her prediction. However, I do not see how the brain, a physical thing necessarily open to all kinds of physical influences, can do the same.
I believe a self-driving car, or perhaps this new robot, could also block out outside influences to achieve a particular goal (such as running across an obstacle course). I don't believe these cars or robots are conscious or have the ability to be intentionally "adamant" in the same sense as humans can, so I think that part of the argument could work.
daveS
December 4, 2017 at 07:28 AM PDT
CR @30
CR: He did? Then where is your reference? If you found something, then share it with the rest of us. ... where is your reference? I won’t be holding my breath. Apparently, you, like everyone else here, appeal to Popper when you think it suits your purpose. Go figure
Hold your horses, CR. I just couldn’t resist mentioning Popper’s position to you. Jim Slagle happens to briefly mention it in his excellent book “The Epistemological Skyhook”. You know very well what my opinion of Popper is. For one thing, he doesn’t understand the simple concept of self-defeating statements. So, no, I do not wish to appeal to Popper in any way. I don’t need him for any argument. But, since you insist, here is the quote, for what it is worth:
Popper: Now once we assume that the scientific theories and the initial conditions are given, and also the prediction task, the derivation of the prediction becomes a problem of mere calculation, which in principle can be carried out by a predicting or a calculating machine—a ‘calculator’ or a ‘predictor’, as it may be called. This makes it possible to present my proof in the form of a proof that no calculator or predictor can deductively predict the results of its own calculations or predictions. [Popper, ‘The Open Universe: An Argument for Indeterminism’, ch. 22, ‘The Impossibility of Self-Prediction’]
Origenes
December 4, 2017 at 07:09 AM PDT
DaveS @29
DaveS: The example human predictions we were discussing appear to be about as specific as the computer prediction: “I will perform two pushups tomorrow at noon”; “The message ‘download complete’ will appear in 5 minutes”. I don’t see a great deal of difference here.
My argument is NOT that there is a difference in the level of specificity of both predictions. What I am saying is that the time required for a completed download is indifferent to (‘independent from’) the specifics of the file data. It does not matter if the download is an audio file or a text file or whatever. And if it is a text file, it does not matter what the text is about. The point is: all data behaves the same with respect to download time. This ‘indifference to specifics’ is similar to the fact that the specific location of each water molecule can be safely ignored when we predict the required time for water to boil, since all water molecules behave the same. The processes in the brain, between now and tomorrow morning at 9 o’clock, are obviously not in the same sense indifferent to the prediction “tomorrow morning at 9 o’clock I will do 2 push-ups.”
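A minimal sketch of this indifference to specifics (illustrative Python; the file contents and the 1 Mbps bandwidth are assumed for the example):

```python
# Illustrative: download time depends on size and bandwidth, not content.
def download_time_seconds(size_bytes: int, bandwidth_bytes_per_s: float) -> float:
    return size_bytes / bandwidth_bytes_per_s

audio = b'\x52\x49\x46\x46' * 250_000   # ~1 MB of "audio" bytes
text = b'push-ups at 9 ' * 71_429       # ~1 MB of "text" bytes

# Same size, same predicted time, regardless of what the data means:
print(download_time_seconds(len(audio), 125_000.0))   # 8.0 s at 1 Mbps
print(download_time_seconds(len(text), 125_000.0))    # ~8.0 s
```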
DaveS: Going back a few posts, I also don’t see much difference in response to environmental input. Of course if I injure my shoulder or get hit by a bus, my prediction would turn out to be wrong. Similarly, if there is a power outage or the website crashes, then the prediction of the computer program will also turn out to be incorrect.
In the OP I argue:
A prediction of specific behavior would also require predicting environmental input, which lies beyond the brain’s control. We, as intelligent agents, can, within limits, ignore a multitude of environmental inputs and stick to the plan — ‘tomorrow morning at 9 o’clock I will do 2 push-ups, no matter what’ — but the brain cannot do this. The intractable environmental (sensory) input and the neural firing that results from it necessarily influence the state of the brain tomorrow morning at 9 o’clock.
Put another way, a person can be adamant about doing 2 push-ups at 9 in the morning and, in being so, can block out a host of distracting influences to fulfill her prediction. However, I do not see how the brain, a physical thing necessarily open to all kinds of physical influences, can do the same.
Origenes
December 4, 2017 at 06:18 AM PDT
Needless to say, this indifference to specifics does not apply to the brain.
Well, go ahead. This indifference to specifics does not apply to the brain because...?
critical rationalist
December 4, 2017 at 05:56 AM PDT
@Origenes Deutsch...
“If you fill a kettle with water and switch it on, all the supercomputers on Earth working for the age of the universe could not solve the equations that predict what all those water molecules will do – even if we could somehow determine their initial state and that of all the outside influences on them, which is itself an intractable task.
You seem to have selectively quoted my comment. The temperature in the environment could drop, not to mention a number of other initial conditions that are intractable. We can use this same high-level theory to compensate for changes. And if it drops too low, we would change our prediction to say that we cannot make tea because, well, either the kettle is outside our spaceship and we cannot retrieve it, or we will be dead because the temperature will kill us, etc.
Ironically I found out that Popper(!) has argued this as well. And even more ironically: he emphatically argues that self-prediction is impossible. Go figure.
He did? Then where is your reference? If you found something, then share it with the rest of us. IOW, it's not clear that you actually understand Popper's position, or that what he means by 'impossible to self-predict' is to the same degree or due to the same argument. It's impossible for me to know for certain I will eat lunch tomorrow at 12 pm. This is because I cannot know I won't get sick. Or the power might go out. Or my car might break down, or any number of other external events. Or I might get an offer for free lunch at 12:30, and change my mind. Perhaps you are referring to Popper's criticism of the following...
...the dream of prophecy, the idea that we can know what the future has in store for us, and that we can profit from such knowledge by adjusting our policy to it.
However, that is not the same as what you or Hayek is referring to. So, again, where is your reference? I won't be holding my breath. Apparently, you, like everyone else here, appeal to Popper when you think it suits your purpose. Go figure.
critical rationalist
December 4, 2017 at 05:53 AM PDT
Origenes,
What makes it possible to predict the required time for downloading a large file or the required time for water to boil? What makes these predictable “linear” processes? Does it have to do with mysterious *emergence*? I would suggest not: It is simply because we can ignore the specifics, since they are irrelevant to the outcome. It is of no import whether it is an audio file, text file or whatever file that is being downloaded, since, with respect to download time, all data behaves the same. It is of no import which water molecule is up or down, since they all act the same. Needless to say, this indifference to specifics does not apply to the brain.
The example human predictions we were discussing appear to be about as specific as the computer prediction: "I will perform two pushups tomorrow at noon"; "The message 'download complete' will appear in 5 minutes". I don't see a great deal of difference here. Going back a few posts, I also don't see much difference in response to environmental input. Of course if I injure my shoulder or get hit by a bus, my prediction would turn out to be wrong. Similarly, if there is a power outage or the website crashes, then the prediction of the computer program will also turn out to be incorrect.
daveS
December 4, 2017 at 05:05 AM PDT
DaveS, CR What makes it possible to predict the required time for downloading a large file or the required time for water to boil? What makes these predictable "linear" processes? Does it have to do with mysterious *emergence*? I would suggest not: It is simply because we can ignore the specifics, since they are irrelevant to the outcome. It is of no import whether it is an audio file, text file or whatever file that is being downloaded, since, with respect to download time, all data behaves the same. It is of no import which water molecule is up or down, since they all act the same. Needless to say, this indifference to specifics does not apply to the brain.
Origenes
December 4, 2017 at 04:44 AM PDT
CR @26
CR: Fortunately, some of that complexity resolves itself into a higher-level simplicity. For example, we can predict with some accuracy how long the water will take to boil. To do so, we need know only a few physical quantities that are quite easy to measure, such as its mass, the power of the heating element, and so on.
The fact that we do not need the specifics in order to predict when water will boil is because we are dealing with a linear process. DaveS has provided two other examples of linear processes — see DaveS #8 and my response #10. In response I have pointed out that this does not apply to human beings, because “neural firing and our behavior are rarely linear in that sense.” See also KF #24. Moreover, I suppose that this linear water-boiling process runs its predictable course well protected from intractable environmental influences which could potentially change its outcome — see OP, #7.
CR: … the universality of computation is an emergent property.
Perhaps, WRT computers, a valid (non-linear) analogy would be a computer program capable of predicting the result of its calculation. However, as I have stated in #17, no program can predict what the result of its own calculation will be. Ironically I found out that Popper(!) has argued this as well. And even more ironically: he emphatically argues that self-prediction is impossible. Go figure.
Origenes
December 4, 2017 at 03:56 AM PDT
In his book ‘The Sensory Order’ (1976), Von Hayek argues that, in order to predict a system, you need a distinct system with a higher degree of complexity. His argument can be summarized as follows: [...] IOWs the brain cannot predict itself, because, in order to predict the brain, one needs a system with a higher degree of complexity than the brain itself. – In order to predict specific behavior, the brain cannot run simulations of possible future neuronal interactions, because it is simply too complex. The human brain is perhaps the most complex thing in the universe. The average brain has about 100 billion neurons. Each neuron fires (on average) about 200 times per second. And each neuron connects to about 1,000 other neurons.
From The Beginning of Infinity...
“If you fill a kettle with water and switch it on, all the supercomputers on Earth working for the age of the universe could not solve the equations that predict what all those water molecules will do – even if we could somehow determine their initial state and that of all the outside influences on them, which is itself an intractable task. Fortunately, some of that complexity resolves itself into a higher-level simplicity. For example, we can predict with some accuracy how long the water will take to boil. To do so, we need know only a few physical quantities that are quite easy to measure, such as its mass, the power of the heating element, and so on. For greater accuracy we may also need information about subtler properties, such as the number and type of nucleation sites for bubbles. But those are still relatively ‘high-level’ phenomena, composed of intractably large numbers of interacting atomic-level phenomena. Thus there is a class of high-level phenomena – including the liquidity of water and the relationship between containers, heating elements, boiling and bubbles – that can be well explained in terms of each other alone, with no direct reference to anything at the atomic level or below. In other words, the behaviour of that whole class of high-level phenomena is quasi-autonomous – almost self-contained. This resolution into explicability at a higher, quasi-autonomous level is known as emergence.”
IOW, we do not need to simulate the human brain at this level because most of the things that we're actually interested in are explicable in this higher sense. Furthermore, a Turing machine can be implemented in a number of ways, such as cogs, vacuum tubes, etc. Do we need to be able to predict the motions of individual atoms in a cog? How about the individual atoms in a transistor? All of these implementation details are intractable as well. Yet, we can predict that each of them can run any algorithm that any other can run. This is because the universality of computation is an emergent property.
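A minimal sketch of such a high-level prediction (illustrative Python, assuming all heater power enters the water and ignoring losses):

```python
# Illustrative: predicting time-to-boil from high-level quantities alone,
# with no molecular simulation (heat losses ignored for simplicity).
SPECIFIC_HEAT_WATER = 4186.0   # J/(kg*K)

def time_to_boil(mass_kg: float, start_c: float, power_w: float) -> float:
    """Seconds to heat water to 100 C, assuming all power enters the water."""
    energy_joules = mass_kg * SPECIFIC_HEAT_WATER * (100.0 - start_c)
    return energy_joules / power_w

# 1.5 kg of 20 C water on a 2 kW element:
print(time_to_boil(1.5, 20.0, 2000.0))   # ~251 seconds, about 4 minutes
```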
critical rationalist
December 3, 2017 at 07:07 PM PDT
KF @24 Indeed, when one contemplates the daily multitude of intractable factors which can potentially influence one's behavior, one intuitively grasps that prediction of behavior is impossible. And this notion of impossibility is multiplied by a magnitude when one, in accord with materialism, tries to envision the countless factors operating in the brain and its environment. This is exactly why the fact that one routinely self-predicts one’s behavior with high specificity cries out for an explanation — especially in the context of materialism. Here, I have attempted to make the intuitive notion that materialism has no explanation for self-prediction more robust by pointing out:

1. The environment inputs unknown variables — see OP, #7.
2. Von Hayek’s argument — see OP, #18, #23.

At the moment I am quite happy with the result. ‘The argument from self-prediction’ seems to be doing very well.
Origenes
December 3, 2017 at 08:18 AM PDT
Folks, Here's a thought: doesn't the human life of the mind quite often show watershed-like sensitive dependence on conditions or decisions or "little differences at the beginning," suggesting chaos and unpredictability in the long run? All of this being a reflection of extreme non-linearity, and a bit more than a statistical fluctuation around a trend? KF
kairosfocus
December 3, 2017 at 05:28 AM PDT
To all, Let’s take a look at Von Hayek’s analysis again:
… Prediction of a system O requires classification of the system’s states. If these states can differ in n different aspects, that is, they can be subsumed under n different predicates, there are 2^n different types of states a classificatory system P must be able to distinguish.
Suppose that the state of the brain can differ in 300 aspects — an extremely modest assumption, I would say — then there are 2^300 different types of states that a classificatory system must be able to distinguish. 2^300 is a huge number, roughly equal to 10^90, which should give us pause, since 10^80 is the commonly accepted estimate of the number of particles in the observable universe — the total number of protons, neutrons, neutrinos and electrons.

- - - -

J-Mac @22 Sorry, I must have misunderstood your point.
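A quick check of this arithmetic (illustrative Python):

```python
from math import log10

# 2**300 expressed as a power of ten: 300 * log10(2) ~= 90.3
print(300 * log10(2))        # ~90.31
print(2 ** 300 > 10 ** 90)   # True
print(2 ** 300 > 10 ** 80)   # True: dwarfs the ~10^80 particle estimate
```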
Origenes
December 3, 2017 at 04:26 AM PDT
Origenes, What makes you think that my point of view is a materialistic one?
J-Mac
December 2, 2017 at 06:02 PM PDT
J-Mac @20
J-Mac: If consciousness is quantum, as it appears to be, all of this has no merit… because of one simple indisputable condition of quantum mechanics; quantum sub-particles can be in more than one place at the same time…there is no doubt about it…
Can you tell me why a quantum consciousness would render the argument without merit? Would it solve the problem of self-prediction for materialism? Correct me if I am wrong, but my first thought is that positing involvement of quantum events would make the brain an even more complex system than we thought it was — with even more possible states, at the same time and place, as you say — which would make predicting its course a bigger challenge. Moreover, in case you are positing causal input stemming from indeterministic quantum events, then the possibility of prediction is ruled out entirely. As I have argued in #17, only a deterministic world can house accurate self-prediction of the brain. Also, I do not see how quantum consciousness can predict environmental input between now and the 2 push-ups tomorrow morning at 9 o'clock. As I wrote in post #9: “The environment inputs unknown variables.” That obstacle for self-prediction, which has been ignored so far, needs to be addressed also.
J-Mac: If someone wants to introduce the soul into this equation, he needs to not only define it but also on what level it operates and why people under general anesthetic lose the contact/connection with their soul…
That is way beyond the reach of this modest argument. All the argument aims to show is that materialism lacks the tools to explain everyday self-prediction.
J-Mac: This is just one of many objections I have…
Please, offer your objections to the argument.
Origenes
December 2, 2017 at 02:59 PM PDT
If consciousness is quantum, as it appears to be, all of this has no merit... because of one simple indisputable condition of quantum mechanics; quantum sub-particles can be in more than one place at the same time...there is no doubt about it... If someone wants to introduce the soul into this equation, he needs to not only define it but also on what level it operates and why people under general anesthetic lose the contact/connection with their soul... This is just one of many objections I have...
J-Mac
December 2, 2017 at 01:58 PM PDT
To all, //Reflections on the argument from self-prediction.// The capability of human beings to self-predict rests on a set of interdependent capabilities of consciousness: self-awareness, self-control, self-movability, self-organization, self-judgement and so forth. The self-prediction “tomorrow morning at 9 o’clock I will do 2 push-ups” presupposes self-awareness, self-control, self-movability and arguably more capabilities in this category. One could say that, in order for self-prediction to exist, consciousness must perform the unimaginable: it must encompass, or house, itself. The argument from self-prediction is firmly based on the notion that no physical system, the brain included, can do this. Why not? In short, because no house can house itself.
Origenes
December 2, 2017 at 08:03 AM PDT
Atom @11
Atom: Assume system O has n aspects, each modeled by a single bit in O .... For P to model O’s behavior, it is either sufficient to model all of O’s bits, or it is not sufficient. If it is, then obviously O is always its own model (since it perfectly “models” its own state).
If O is completely static/inert — can only have the current state it is in — then, obviously, there is nothing to predict.
Atom: If it is not sufficient, then this is something that needs to be shown.
In the case of the brain, is it really necessary “to show” that O can have different states? I suppose that I have misunderstood what you are saying. Can you elaborate? I managed to dig up the following quote by Von Hayek via Google Books, which may be helpful:
1. System P can predict system O.
2. A system P can predict a system O only if P can classify the states of O (O-states).
3. P can classify the O-states if P can represent all types of O-states.
4. Thus: P can represent all types of O-states.
5. The number of types of O-states is of a higher magnitude than the degree of complexity of O.
6. If P can represent all types of states of O and the number of types of O-states is of a higher magnitude than the degree of complexity of O, then the number of O-states P can represent is of a higher magnitude than the degree of complexity of O.
7. Thus: The number of O-states P can represent is of a higher magnitude than the degree of complexity of O.
8. The degree of complexity of a system P is at least as high as the number of states it can represent.
9. Thus: P is more complex than O.
10. Thus: If a system P can predict a system O, then P is more complex than O. [By elimination of assumption (1).]

… Where (5) is warranted as follows, argument H:

1. The number of types of O-states (2^n) is of a higher magnitude than the number of aspects in which two O-states can differ (n).
2. The degree of complexity of some system is identical with the number of aspects in which its states can differ.
3. Thus: The number of types of O-states is of a higher magnitude than the degree of complexity of O.
Origenes
December 2, 2017 at 05:22 AM PDT
To all, Some thoughts: Setting aside the problem of intractable environmental input and Von Hayek’s argument, it is important to note that only a deterministic world can house accurate self-prediction of the brain. If neural events are caused or influenced by indeterministic quantum events, as is often claimed, then it is game over for the brain's ambition of self-predicting. WRT computers: no program can predict what the result of its own calculation will be.
Origenes
December 1, 2017 at 06:33 PM PDT
Seversky: The argument can be attacked on the grounds that 2 and 3 imply an unstated premiss which is that the conscious “I” or mind is a separate entity from the operations of the physical brain …. begging the question …
Nonsense. I hope that is clear enough.
Seversky: Even if we concede Hayek’s argument that a system cannot incorporate an exact one-to-one representation of itself, that does not preclude the possibility of forecasting based on models.
Granted, but here we are not talking about a general prediction like “I will do some sporting activity in the next couple of months”, but, instead, we are talking about a very specific prediction: “tomorrow morning at 9 o’clock I will do 2 push-ups”, which demands accuracy in prediction.
Seversky: The computer climate models used by meteorologists to forecast weather trends do not and cannot represent the movement of each individual molecule of gas, droplet of water or piece of particulate but they are still able to predict weather for the next 3-5 days with reasonable accuracy.
Do you think so? This is not the case where I live. But let’s not digress: the point is that accuracy is required, and that seems to be Von Hayek’s concern as well.
Seversky: Our internal model … pizza … navigate across a landscape … very detailed awareness …
Sorry, but I do not see the relevance to my argument.
Seversky: We can make predictions …
I know, and that fact poses a formidable problem for materialists, since, as I have argued, the brain cannot.
Origenes
December 1, 2017 at 04:00 PM PDT
DaveS @13
DaveS: When we humans make these predictions about push-ups or going shopping it’s not any great mental feat. It’s not as if we are running an accelerated, faithful simulation of ourselves, which … would be impossible.
I agree with you. That is not what we do. And that is not what I argue.
DaveS: We use heuristics, induction, extrapolation from small samples, &c.
I would like to add that free will is involved also. However, if materialism is true, then free will as an explanation for the prediction “tomorrow morning at 9 o’clock I will do 2 push-ups” is not an option. If materialism is true, we need to naturalize prediction, and there seems to be no other option available than to argue for the existence of some (physical) system capable of predicting specific neural states and the ensuing behavior. And this is where Von Hayek becomes relevant.
Origenes
December 1, 2017 at 02:21 PM PDT