
Origenes and the argument from Self-Prediction


Origenes has put up an interesting argument that we need to ponder as food for thought.

It’s Friday, so this should be a good thing to start our weekend on:

>>>>>>>>>>>>>>

ORIGENES: Here I will argue that self-prediction cannot be accommodated by materialism. In daily life we all routinely engage in acts of self-prediction — ‘tomorrow morning at 9 o’clock I will do 2 push-ups’, ‘I will do some Christmas shopping next Friday’ … and so forth. The question is: how does materialism explain that familiar phenomenon? Given that specific behavior (e.g. doing 2 push-ups) results from specific neural states, how is it that we can predict its occurrence?

The fact that one can predict her/his own behavior suggests that we have mental control over the physical, which is obviously unacceptable for the materialist, who claims the opposite to be true. Therefore the task set out for the materialist is to naturalize self-prediction. And in doing so there seems to be no other option available than to argue for the existence of some (physical) system, capable of predicting specific neural states and the ensuing behavior. But here lies a problem. There is only one candidate for the job, the brain, but, as I will argue decisively, the brain cannot do it.

The Argument from Self-prediction

1. If materialism is true, then human behavior is caused by neural events in the brain and environmental input.

2. The brain cannot predict future behavior with any specificity.

3. I can predict future behavior with specificity.

Therefore,

4. Materialism is false.

– – – –

Support for 2

In his book ‘The Sensory Order’ (1952) Von Hayek argues that, in order to predict a system, you need a distinct system with a higher degree of complexity. His argument can be summarized as follows:

… Prediction of a system O requires classification of the system’s states.
If these states can differ in n different aspects, that is, they can be subsumed under n different predicates, there are 2^n different types of states a classificatory system P must be able to distinguish. As the number of aspects with regard to which states might differ is an indicator of O’s complexity and as the degree of complexity of a classificatory system P is at least as large as the number of different types of states it must be able to distinguish, P is more complex than O.
[‘The SAGE Handbook of the Philosophy of Social Sciences’, edited by Ian C. Jarvie and Jesus Zamora-Bonilla]

Von Hayek then goes on to conclude that:

No system is more complex than itself. Thus: No system can predict itself or any other system of (roughly) the same degree of complexity (no self-referential prediction).

IOWs the brain cannot predict itself, because, in order to predict the brain, one needs a system with a higher degree of complexity than the brain itself.
– In order to predict specific behavior, the brain cannot run simulations of possible future neuronal interactions, because it is simply too complex. The human brain is perhaps the most complex thing in the universe. The average brain has about 100 billion neurons. Each neuron fires (on average) about 200 times per second. And each neuron connects to about 1,000 other neurons.

– A prediction of specific behavior would also require predicting environmental input, which lies beyond the brain’s control. We, as intelligent agents, can, within limits, ignore a multitude of environmental inputs and stick to the plan — ‘tomorrow morning at 9 o’clock I will do 2 push-ups, no matter what’ —, but the brain cannot do this. The intractable environmental (sensory) input and the neural firing that results from it necessarily influence the state of the brain tomorrow morning at 9 o’clock.

>>>>>>>>>>>

What do you think? END

Comments
DaveS @73 I am happy to leave it to readers of this thread to judge whether my arguments concerning your position make sense. I have noticed that I keep repeating myself, so I don't think that I have anything new to add. So, I’ll leave it there for now. Origenes
Origenes,
It is logically true that no computer can predict the result of its own calculation. Do you agree with that?
With some qualifications, which are irrelevant to this discussion, yes. I don't think a computer can exactly simulate some calculation in half the time it normally takes, for example. daveS
DaveS @71
DaveS: You’re not addressing my claim. I am saying that a computer can predict some aspects of its future behavior (which is clearly true).
"It is logically true that no computer can predict the result of its own calculation." Do you agree with that? And what does that tell you? To me it immediately follows that no computer can predict itself. Bam! Yes, that is what logic can instantly give you. Predicting download time and the like is just throwing up sand and dust, it's not about a computer predicting the result of its own activity. We know exactly why those linear processes are predictable: because we can ignore the specifics:
What makes it possible to predict the required time for downloading a large file or the required time for water to boil? What makes these predictable “linear” processes? ... It is simply because we can ignore the specifics, since they are irrelevant to the outcome. It is of no import whether it is an audio file, text file or whatever file that is being downloaded, since, with respect to download time, all data behaves the same. It is of no import which water molecule is up or down, since they all act the same. ... this indifference to specifics does not apply to the brain. .... Because, assuming that specific behavior (e.g. ‘doing 2 push-ups’, ‘doing Christmas shopping’) results from specific neural configurations, the specificity of neural states does matter. It is relevant to the ensuing behavior how many times neurons fire and where they are. This is in contrast to water boiling, which occurs irrespective of specific states of water molecules. In theory one could change the position of each water molecule of boiling water without stopping it from boiling. (see #24, #27, #28 and #40.)
Origenes
Origenes, You're not addressing my claim. I am saying that a computer can predict some aspects of its future behavior (which is clearly true). I'm not talking about a computer predicting the entirety of its own future behavior, or a faster computer predicting the behavior of a slower one, so let's set those aside. The claim is that a computer can predict (approximately) some aspects of its future behavior, such as the time it will take to finish running a program. daveS
DaveS
My claim is simply that purely physical devices (computers, e.g.) can predict their future behavior to some extent.
It is logically true that no computer can predict the result of its own calculation; therefore no computer can predict itself. However, can computer P predict the behavior of computer O? Yes, rather easily: IF computer P is faster than O and they perform the same task, then the result of the calculation of P can be regarded as a prediction of the result in O. Even better, given that the architectures of the two computers differ only in speed (same software, same task), all steps of O leading up to the result are predicted by P. So, yes, computer P can predict computer O. But note that, when we have two computers which only differ in speed, we have all the requirements for simulation already in place: we do not have to measure O and build a model-O, because computer P instantly functions as a faster model for O. So, here, we can skip all the daunting tasks that the brain has to perform, as has been argued. IOWs it seems hardly relevant to the problem at hand. Origenes
KF, That's something you'll have to take up with Origenes. We have agreed to call these things predictions in any case, and that's consistent with this definition given by Merriam-Webster:
predict \pri-ˈdikt\ : to declare or indicate in advance; especially : to foretell on the basis of observation, experience, or scientific reason
daveS
Mullers-Ratchet @67 Thank you for your question. I am just listing what is logically required in order to have a simulation of a system O. P has to measure O. P must build a model-O. So, we need models of each of the O-components and also model-laws. Model-O must perform much faster than the real O. ... and more (see also #62). Perhaps you are saying that such a prediction-simulation-system is beyond far-fetched and I would readily agree. I do not think it can possibly exist, and that is my point. If someone has a better idea to get to a specific prediction, or thinks that some (or all) of my assumptions about prediction-system P are unnecessary, I would like to know. Origenes
I'll admit to not reading much of this thread, but this struck me as a very strange claim:
2. P has a complex machine that can make a perfect model-O based on P’s measurements. (Wow!)
Why should we think this is true? A particularly strange claim to make for human brains, which are infamous for relying on pretty bad heuristics. mullers_ratchet
DS, you seem to be dealing with freedom to make a decision and discipline to act on it per a timeline. Such covers a mountain of issues on concepts, freedom vs determinism, and even the power of linguistic or visual conceptual representations and much more. Where a precondition of real decision is responsible, rational freedom, which opens up a huge can of worms. A decision backed by discipline to give it effect is not a prediction. Though it may be predictable on observation and analysis that X is likely to be diligent but Y is not. So, clarification is in order. KF kairosfocus
Origenes,
Is it your claim that there is a way to “quite accurately” predict this? How is that thinkable?
First, I don't know anything about how the brain works, and I suspect no one does in enough detail to really discuss this issue with any confidence. My claim is simply that purely physical devices (computers, e.g.) can predict their future behavior to some extent. Therefore it's not clear to me that the fact that humans can predict their future behavior to some extent implies that they are not purely physical. daveS
DaveS @63 Suppose that P must predict O and that O is the component of the brain that deals with body movement. Also, "quite accurately" predicting is a huge challenge. Obviously there are many possibilities besides "doing two push-ups". Tomorrow at 9 it could also be the case that "standing on one leg", "waving both hands" or "walking" ensues from O's neural states — numerous possibilities. And the prediction system must be able to distinguish between them at a neural level. Is it your claim that there is a way to "quite accurately" predict this? How is that thinkable? To make any specific prediction you need to model with some accuracy. You still need to be able to build O in model-world. The simulation still needs to run faster than the real thing. You still need model-laws. You still need the same overall increase in speed, and so forth. You still need P to be able to distinguish between many possible neural states. And you still need some translation 'book' which can translate model-neural states into "mental" terms so that we can say:
"I will do 2 push-ups tomorrow morning at 9."
Or perhaps you tell yourself, while standing on one leg tomorrow morning at 9, "Oh well, I predicted that 'quite accurately'" Origenes
Origenes, We keep coming back to this point. There is no need to run computationally infeasible simulations in order to make predictions. You just need to be close enough for government work, as they say. For example, you can predict the result of 1 million balls being dropped in a Galton Board quite accurately by doing a trial run of 10000 balls. daveS
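[A minimal sketch of DaveS's Galton Board point, added for illustration; it is not from the thread. It assumes an idealized board where each of 20 pin rows deflects a ball left or right with probability 1/2, so a ball's final bin is a Binomial(20, 0.5) draw. The point is that a cheap 10,000-ball trial run predicts the shape of the 1,000,000-ball histogram quite accurately, with no molecule-level simulation anywhere.]

```python
import numpy as np

rng = np.random.default_rng(0)
ROWS = 20  # assumed board depth: each ball takes 20 left/right bounces

def drop_balls(n, rows=ROWS):
    """Simulate n balls; return the fraction landing in each of rows+1 bins."""
    bins = rng.binomial(rows, 0.5, size=n)   # bin index = number of rightward bounces
    return np.bincount(bins, minlength=rows + 1) / n

trial = drop_balls(10_000)       # cheap trial run
full = drop_balls(1_000_000)     # the run being 'predicted'

# The small trial pins down the shape of the big histogram quite accurately:
print(np.abs(trial - full).max())   # worst per-bin error, typically under 0.01
```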
DaveS: I don’t see what problem(s) all this creates for someone who holds that everything that exists is physical.
I got that impression already. Are you reading my posts? A quick recap: In order to naturalize the prediction "2 push-ups tomorrow morning at 9" the materialist must assume the existence of a brain system that can predict the specific neural states from which the behavior "doing 2 push-ups" ensues tomorrow morning at 9. This system 'P' must predict system 'O' by running a simulation of O. A few of the requirements:
1. P can measure O.
2. P has a complex machine that can make a perfect model-O based on P's measurements. (Wow!)
3. Thus: P can simulate O and run the simulation much faster than the real thing with equal accuracy.
4. In P's model world there are also perfect model-chemical-laws in operation, which interact with model-O in the exact same way as the actual laws do with the actual O – only much, much faster. (Wow!!)
5. Each modeled item must have the exact same increase in speed, so as not to ruin the simulation.
6. In order for P to model the total state of O, P needs to collect all the data about O at once, at the same time. It is of no use to measure the left part of O first, start modelling that, and complete the model with later measurements of O's right part. In order to predict you must get the starting position just right.
7. And there is surely no time to waste building the model from the all-at-once data. If that takes too long, P will be predicting the past.
8. It must model unknown input from the environment. Origenes
Origenes,
When I wrote in #56 “… an inference like that seems hard to avoid.” I was thinking about something along the lines of: “We members of the Materialist Society have to conclude that, due to a bizarre coincidence, at the same time the neural states of each person on earth were aimed at performing 2 push-ups. We have no explanation other than sheer dumb luck.”
I'm not sure what this means exactly. It could be because I missed last month's meeting. :-) In any case, I don't see what problem(s) all this creates for someone who holds that everything that exists is physical. daveS
DaveS: The claim that two people performing pushups necessarily have the same very specific configuration of neural states is.
I agree. That's not a claim that I wish to defend. When I wrote in #56 "... an inference like that seems hard to avoid." I was thinking about something along the lines of: "We members of the Materialist Society have to conclude that, due to a bizarre coincidence, at the same time the neural states of each person on earth were aimed at performing 2 push-ups. We have no explanation other than sheer dumb luck." Surely I allow for the possibility that each brain performs the same task differently.
And perhaps you need a separate translation book for each person.
I would say, very likely. Origenes
Origenes,
I would like to know why you consider e.g. “physics is the only (ultimate) actor” to be a “strong claim”(!) for materialism to make.
That's not especially strong, being essentially the definition of materialism. The claim that two people performing pushups necessarily have the same very specific configuration of neural states is. As I pointed out, two computers can perform the same task with completely different states. Two robots could be programmed to perform two pushups at the same time, with completely different states.
One could say that “task A” is a placeholder (it refers in each computer to a specific state) for a specific (set of) computer configuration, but the referred configurations of the two computers do not match each other. This means that we cannot write one “translation-book” for both computers. We need to make 2 separate books.
And perhaps you need a separate translation book for each person. daveS
DaveS @57
DaveS: I don’t know of anyone who makes such strong claims (although I haven’t done a lot of reading on this). Can you cite such a person?
Before I do that, I ask for your view on materialism. I would like to know why you consider e.g. "physics is the only (ultimate) actor" to be a "strong claim"(!) for materialism to make. I would like to know.
You could present them with the exact same situation, and they could end up making exactly the same move, but their states would almost certainly be very different. They are both purely physical machines, but they perform the same task in very different ways.
The same task refers in computer A to configuration X and in computer B to configuration Y. Configuration X does not match Y. This means that we cannot write one "translation-book" for both computers. We need to make 2 separate books. Not sure what your point is. Origenes
Origenes, I don't know of anyone who makes such strong claims (although I haven't done a lot of reading on this). Can you cite such a person? Suppose you have two computers playing chess (for example, one running Stockfish and one running AlphaZero, as happened recently). You could present them with the exact same situation, and they could end up making exactly the same move, but their states would almost certainly be very different. They are both purely physical machines, but they perform the same task in very different ways. daveS
DaveS @55
DaveS: Hm, I’m not convinced it corresponds to any very specific configuration.
Our personal convictions play no role here. The question is if such correspondence is claimed by materialism. There is no doubt in my mind that this is the case. Materialism entails an ontology in which minds are the consequence of physics, and thus, "mind" (and anything deemed "mental") can only be placeholders for a more detailed causal account in which physics is the only (ultimate) actor.
DaveS: What if everyone on the planet decided to do two pushups at noon CST tomorrow? Would it follow under materialism that all our neural states are very similar?
Interesting proposal. To answer the second question: yes, an inference like that seems hard to avoid. What other option is there? A materialist cannot allow for an explanation in which 'mental power' shapes our neural states. According to materialism it has to be the other way round. So, yes, such a global event would pose problems for materialism ... Origenes
“Doing 2 push-ups tomorrow morning at 9”, is, in fact, much more specific than you apparently seem to think. “Doing 2 push-ups” is a ‘placeholder’ for a detailed account of a very specific configuration of neural states. Given materialism, how could it not be?
Hm, I'm not convinced it corresponds to any very specific configuration. What if everyone on the planet decided to do two pushups at noon CST tomorrow? Would it follow under materialism that all our neural states are very similar? Suppose I did the pushups on two consecutive days. Would my neural states on those two days be very similar? I don't see why in either case. My brain is busy with lots of other tasks besides the pushups. daveS
DaveS At first, post #42 didn't make any sense to me, so I didn't respond. Upon rereading it, I realized that the problem of talking past each other may be connected to the following: “Doing 2 push-ups tomorrow morning at 9”, is, in fact, much more specific than you apparently seem to think. “Doing 2 push-ups” is a 'placeholder' for a detailed account of a very specific configuration of neural states. Given materialism, how could it not be? So, predicting the occurrence of a very specific configuration of neural states at a very specific time is no minor accomplishment and, obviously, not something within the reach of “low-fidelity simulations”. In short, there is nothing "modest", as you called it, about the 2 push-ups prediction. Origenes
Origenes,
To be clear, we are talking about predicting future brain states, right? Specifically, we are talking about part ‘P’ of the brain predicting the state of another part ‘O’ of the brain. Correct?
I have in mind something much simpler, such as a computer. I'm saying that a computer can predict aspects of its own future behavior or state (approximately). To borrow one of KF's examples, if a computer is programmed to simulate a Galton Board, it could predict with very high accuracy the shape of the 'histogram' that results, assuming the number of balls is high. I would guess that my brain can similarly predict aspects of its future behavior or state, but I have no idea how that works. daveS
DaveS @51
DaveS: On the other hand, one can make reliable predictions using low-fidelity simulations. There’s nothing infeasible about running a stick-figure cartoon simulation (metaphorically) of one’s future behavior as a way of making predictions.
To be clear, we are talking about predicting future brain states, right? Specifically, we are talking about part 'P' of the brain predicting the state of another part 'O' of the brain. Correct? You say that P can make reliable predictions of O by using "low-fidelity simulations". You also claim that "there’s nothing infeasible about running a stick-figure cartoon simulation (metaphorically) of one’s [brain part O's] future behavior". Before I respond, I would like to know: do I understand you correctly so far? Origenes
Origenes,
At this point, it feels like an understatement to say that there are multiple insurmountable problems for self-prediction of the brain by simulation.
Certainly for self-prediction via 100% faithful emulation. But didn't we already dismiss that as clearly being impossible? On the other hand, one can make reliable predictions using low-fidelity simulations. There's nothing infeasible about running a stick-figure cartoon simulation (metaphorically) of one's future behavior as a way of making predictions. daveS
KF @49 Very nice video and 'game.' Interesting and fun. In relationship to the topic of prediction and simulation, your point is, if I understand you correctly, that several systems or aspects of systems, present in the brain, can only be modeled by means of statistical, probabilistic analysis. If so, then your point makes perfect sense to me. And, obviously, modeling based on statistics and probability raises many concerns with regard to accuracy, origin & relevance of statistical data, not to mention the systems needed to perform all this. At this point, it feels like an understatement to say that there are multiple insurmountable problems for self-prediction of the brain by simulation. Origenes
Origenes, I present to you a classic, six-sided fair die. Eight corners, twelve edges, sensitively dependent on effects of impacts on same. Reduced thereby to a flat random distribution, per the indifference principle. In short, one is led to statistical, probability distributions driven by chaos on a system that is in principle utterly mechanically deterministic. But now, start to define output as the sum of uppermost faces and bring to bear a second die, then a third and so forth. Lo and behold, we have here a bell-shaped curve distribution of possibilities with highly improbable far tails. Similarly, try the Quincunx cascade, which uses a hopper of beads and a triangular distribution of rods leading to columnar stacks: https://www.youtube.com/watch?v=AUSKTk9ENzg (sim is here: http://www.mathsisfun.com/data/quincunx.html ). Very soon, on pouring a few thousand beads, one gets very close to a Gaussian type curve . . . which is composed of cumulative effects of small +/- effects of random character. Order out of chaos, leading to the need for statistical, probabilistic analysis. KF kairosfocus
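[A small sketch of KF's dice point, added for illustration; not from the thread. Each die alone is flat-random, yet the exact distribution of the sum of a few dice is already bell-shaped with highly improbable far tails.]

```python
from collections import Counter
from itertools import product

def sum_distribution(n_dice):
    """Exact probability of each possible sum of n fair six-sided dice."""
    counts = Counter(sum(faces) for faces in product(range(1, 7), repeat=n_dice))
    total = 6 ** n_dice
    return {s: c / total for s, c in sorted(counts.items())}

# One die gives a flat line; four dice already give a bell with thin tails.
for total, p in sum_distribution(4).items():
    print(f"{total:2d} {'#' * round(p * 250)}")
```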
KF @46
KF: Some things are far too nonlinear [not even log-lin or log-log or complex domain transforms make them tractable]
Cannot be modeled. Not all things can be modeled. You cannot have a simpler, more efficient model for everything. That is a logical impossibility.
KF: ... sensitively dependent (e.g. so minor fluctuations can be rapidly drastically amplified via the butterfly effect), prone to radical branching and more to have the sort of easy predictability envisioned.
All these influences of various types must be modeled in order to improve their speed! Why would it be possible to model and increase the speed of everything? It cannot be done. // Note that all the models of various parts must provide the exact same increase in speed! It ruins the simulation if that is not the case ... // - - - - - A minor detail: in order for P to model the total state of O, P needs to collect all the data about O at once, at the same time. It is of no use to measure the left part of O first, start modelling that, and complete the model with later measurements of O's right part. In order to predict you must get the starting position just right. And surely there is no time to waste building the model from the all-at-once data. If that takes too long, P will be predicting the past. Origenes
Some further thoughts on the requirements of prediction (see #44, #45):
1. P can predict O.
2. P can measure O.
3. P has a complex machine that can make a perfect model-O based on P's measurements. (Wow!)
4. Thus: P can simulate O.
5. In P's model world there are also perfect model-chemical-laws in operation, which interact with model-O in the exact same way as the actual laws do with the actual O – only much, much faster. (Wow!!) Origenes
Origenes, that was my basic point. Some things are far too nonlinear [not even log-lin or log-log or complex domain transforms make them tractable], sensitively dependent (e.g. so minor fluctuations can be rapidly drastically amplified via the butterfly effect), prone to radical branching and more to have the sort of easy predictability envisioned. KF kairosfocus
KF @40
KF: A system’s ability to reliably predict itself is tantamount to the challenge in Tarski’s theorem on needing a higher-level scheme to provide proofs for the lower one, leading to irreducible complexity which then feeds into Godel’s incompleteness challenges.
In order for system P to predict system O, the following is presupposed:
- P must be able to represent all O-states in a model-world.
- P must be able to observe/measure O.
- P must be able to organize its model O-components in model-world, based on measurement of O.
- P's model-O must perform (much) faster than the real O.
I don't know about you, but I see some challenges for the 'prediction by simulation' hypothesis ... Origenes
// follow-up #43 //
Just imagine running, in the now, a simulation of all the events in a subcomponent of the brain within the time-frame between now and tomorrow morning 9 o’clock— see #7.
I would like to note that the idea of 'prediction by simulation' presupposes that there is a way to run a predictable sequence much, much faster. In our example, prediction by simulation presupposes that the events in a subcomponent of the brain (in the time frame between now and tomorrow morning 9) can be compressed into a much shorter time frame during simulation. Here I would like to argue that this seems physically impossible to me. Nothing can run a simulation of neural activity faster than the activity itself unfolds in reality. A single neuron fires (on average) 200 times a second and is connected (on average) to 1,000 other neurons. Nothing (in the brain) can simulate this and do it 10,000 times faster, in order to get a prediction of its future state. IOWs the whole notion of 'prediction by simulation' does not compute. Take home message: A prediction system must be faster than the system that it predicts. Nothing is faster than itself. Nothing can predict itself. Origenes
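[A back-of-envelope check of the figures in this comment, added for illustration. It uses only the thread's own round numbers (100 billion neurons, ~200 firings/s, ~1,000 connections each); the 10,000× speed-up factor is the comment's hypothetical, not an established requirement.]

```python
# Thread's round numbers: 100 billion neurons, ~200 firings/s, ~1,000 connections each.
neurons = 100e9
firing_rate = 200        # firings per second per neuron
connections = 1_000      # synapses per neuron

spikes_per_second = neurons * firing_rate
events_per_second = spikes_per_second * connections  # each spike reaches ~1,000 targets

print(f"{spikes_per_second:.0e} spikes/s")            # 2e+13
print(f"{events_per_second:.0e} synaptic events/s")   # 2e+16

# To finish well ahead of the predicted moment, a simulator would have to
# replay those events at some large speed-up factor (10,000x is the
# comment's hypothetical figure):
speedup = 10_000
print(f"{events_per_second * speedup:.0e} events/s required")  # 2e+20
```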
Atom @39, Thank you for your response.
Atom: To distinguish between 2^300 states we need exactly log_2(2^300) = 300 bits.
It takes 300 bits to describe one possible state out of 2^300. Describing one possible state does not magically endow a predicting system with the ability to distinguish between 2^300 states. So, I think your assumption is incorrect.
Atom: This was my original point. If the brain is a computational unit of n bits, it has 2^n possible states, but any system of n bits (including itself, obviously) can represent any one of its states.
Every system of n bits has a state of n bits, and, in being in that particular state, it represents one of its possible states. I can agree with that. It is a truism. But how does it relate to prediction? How does it help?
Atom: That is what I meant by saying just counting bits does not seem to rule out the brain predicting itself.
Can you elaborate some more? You may very well be on to something, but I am not able to grasp your point. Which “just counting bits” specifically do you object to?
Atom: But, what if a system is only predicting the change of one of its subcomponents?
Indeed, that is a possibility. In this case, the brain splits itself up into two separate systems: a prediction system and a system to be predicted. As I mentioned in post #5, the brain is notoriously highly interconnected, so there is a problem for this hypothesis right at the start. Moreover, given the startling complexity of the brain, I trust that prediction of any subcomponent of the brain will run into the familiar problem of too many states to handle. Just imagine running, in the now, a simulation of all the events in a subcomponent of the brain within the time-frame between now and tomorrow morning 9 o’clock— see #7. And, of course, there is still the problem of the environment, which inputs unknown variables. Origenes
KF,
Content and depth explosion warning
Well, there was an explosion of something, anyway. :P Once again, I think Atom has hit the nail on the head. The examples of human predictions we have seen are very modest, and deal only with 'subcomponents' of our future behavior.
But unfortunately, the sort of systems we have in mind are exactly the opposite of such a constraint, and are thus credibly inherently unpredictable in detail save in the very narrowly short run, and that by simple projection of trend and hoping we do not hit a sufficiently fast acting perturbation and zone of sensitively dependent nonlinear action.
Do you have some examples of human self-predictions of this sort? I would be especially interested in systems which can be modeled using a computer, and where humans clearly outperform the machine. daveS
Which premise do you not agree with? 1. Prediction of X’s behavior is about X. 2. To be about X presupposes observing X. 3. Observing X presupposes a position distinct from X. 4. No X is distinct from itself. Therefore, 5. X cannot predict itself. Origenes
Atom, I doubt that 1-bit representations of past/present or future states much less evolution from one to the next are relevant or even credible, save as contextually meaningful switches to much deeper and far more elaborate reference bases, i.e. quasi-addresses that point to lookup tables that then trigger jumps to elaborations of description, information and cybernetic controls for things interacting with the world. And, the use of switches has in it an implicit system architecture, which would itself be organisation codable as a chain of yes/no steps of description towards distinct identity in some description language, cf AutoCAD etc. (And description language is itself a short label for a further world of cybernetic, communication, linguistic, epistemological and even worldview issues. Content and depth explosion warning.) A system's ability to reliably predict itself is tantamount to the challenge in Tarski's theorem on needing a higher-level scheme to provide proofs for the lower one, leading to irreducible complexity which then feeds into Godel's incompleteness challenges. Where, an algorithm/ functional organisation of high reliability can be regarded as in effect a warranted system, a proof in a loose sense, cf related approaches to proving reliability of computational algorithms. But I have in mind much broader computational substrates as [loose sense] cybernetic architectures, e.g. analogue computers and neural network using systems, not just conventional digital computers, duly tied to sensors, effectors, feedback nets, memories, supervisors etc. What I suggest is that where a system expresses highly compressed mechanical or at least mathematically describable laws and reasonably well behaved stochastic patterns, and ruling out outright catastrophic collapse, we can model future trajectories with confidence IF we are not dealing with sensitive dependence on initial and intervening conditions triggering runaway positive feedbacks or butterfly effects that rapidly destroy any ability to predict in details, cf weather. But unfortunately, the sort of systems we have in mind are exactly the opposite of such a constraint, and are thus credibly inherently unpredictable in detail save in the very narrowly short run, and that by simple projection of trend and hoping we do not hit a sufficiently fast acting perturbation and zone of sensitively dependent nonlinear action. KF kairosfocus
Hi KF, Good to stop in. Origenes, Thank you for your response. You wrote:
However this seemingly modest demand (‘the ability to classify possible states must be present in any prediction system’) can quickly run into huge numbers. If the brain can differ in say 300 different aspects we get 2^300 possible states and we need an enormous capacity to house this huge number — see #23.
To distinguish between 2^300 states we need exactly log_2(2^300) = 300 bits. This was my original point. If the brain is a computational unit of n bits, it has 2^n possible states, but any system of n bits (including itself, obviously) can represent any one of its states. That is what I meant by saying just counting bits does not seem to rule out the brain predicting itself. However, representing one state is not the same as making a conditional prediction, which would seem to require at minimum two states represented (past and future, from and to). In that case, there are 2^300 * 2^300 = 2^600 transitions possible, in which case you'd need at least 600 bits to represent the predicted transition. Thus, no system would be able to predict its own transitions, if it needed to as a whole. But, what if a system is only predicting the change of one of its subcomponents? Then it could use some of its bits to represent the past state of that subcomponent, and another set of its bits to represent the future state of that subcomponent. In that case, a system would be able to accurately predict its future state, at least of a portion of itself. And if that portion is connected to behavioral apparatuses (such as a single neuron that triggers a specific behavior), then it could also predict its future behavior to some extent. That's how I currently see it, but I am not infallible, so feel free to challenge this if I missed something. Atom
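[A minimal sketch of Atom's counting argument, added for illustration, using the thread's 300-bit example: n bits suffice to name any one of 2^n states, a whole-system (from, to) transition needs 2n bits, which an n-bit system cannot hold, but a subcomponent of m bits needs only 2m bits, which fits whenever 2m ≤ n. The subcomponent size m = 100 is a made-up number.]

```python
import math

n = 300                        # bits in the whole system (thread's example)
states = 2 ** n
bits_per_state = math.log2(states)             # exactly 300.0
bits_per_transition = math.log2(states ** 2)   # 600.0: from-state plus to-state

print(bits_per_state, bits_per_transition)     # 300.0 600.0
print(bits_per_transition <= n)                # False: no room for a full self-transition

m = 100                        # hypothetical subcomponent size, in bits
print(2 * m <= n)              # True: a subcomponent transition does fit
```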
Hi Atom, long time no see, KF kairosfocus
Atom @36 Thank you for your interest. Let’s take things one step at a time. Von Hayek again:
… Prediction of a system O requires classification of the system’s states. If these states can differ in n different aspects, that is, they can be subsumed under n different predicates, there are 2^n different types of states a classificatory system P must be able to distinguish.
I understand Von Hayek like this: Suppose you have a system of two components. The first component can be either A or B. The second component can be either C or D. Von Hayek would say that this system can differ in 2 aspects. So, in this case, n = 2. According to Von Hayek there are now 2^2 different types of states a classificatory system P must be able to distinguish. So, 4 different types of states: AC, AD, BC and BD. IOWs any prediction system must at least be able to distinguish between those 4 possible different states, in order to be able to predict our 2 component system. If a prediction system cannot classify them, then prediction of our 2 component system is not possible in principle. In my understanding of Von Hayek’s argument he does not even speak about how to proceed from the ability to classify possible states to making an actual prediction. All he does is point out that being able to classify the states is required. However this seemingly modest demand (‘the ability to classify possible states must be present in any prediction system’) can quickly run into huge numbers. If the brain can differ in say 300 different aspects we get 2^300 possible states and we need an enormous capacity to house this huge number — see #23. Origenes
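[Von Hayek's small example can be enumerated directly; a trivial sketch added for illustration, listing the 2^2 = 4 states AC, AD, BC and BD named in the comment above.]

```python
from itertools import product

# Component 1 is A or B, component 2 is C or D: n = 2 aspects.
aspects = [("A", "B"), ("C", "D")]
states = ["".join(s) for s in product(*aspects)]

print(states)        # ['AC', 'AD', 'BC', 'BD']
print(len(states))   # 4 == 2 ** len(aspects)
```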
Hi Origenes, Sorry for the delayed response, but that's life. You wrote:
If O is completely static/inert — can only have the current state it is in — then, obviously, there is nothing to predict.
So the argument is that you need at least 2n bits to model a transition of the system, one set of n bits for the "from" state and one set for the "to" state? (And by modeling both to and from, we model a transition.) Is that the gist of your argument? You also wrote:
In the case of the brain, is it really necessary “to show” that O can have different states? I suppose that I have misunderstood what you are saying. Can you elaborate?
What I was saying is that to make the claim "Modeling the exact bit configuration / temporal state of O is not sufficient for modeling O's behavior", one would need to give evidence for it. It is an easy assertion to make, but one that is harder to prove. I will re-read the Hayek passage and see if it clears up my doubt. Thanks. Atom
CR @31: Well, go ahead. This indifference to specifics does not apply to the brain because…..?
Because, assuming that specific behavior (e.g. ‘doing 2 push-ups’, 'doing Christmas shopping') results from specific neural configurations, the specificity of neural states does matter. It is relevant to the ensuing behavior how many times neurons fire and where they are. This is in contrast to water boiling, which occurs irrespective of specific states of water molecules. In theory one could change the position of each water molecule of boiling water without stopping it from boiling. Needless to say, this indifference to specifics does not apply to the brain. Origenes
Origenes,
Put another way, a person can be adamant about doing 2 push-ups at 9 in the morning and, in being so, can block out a host of distracting influences to fulfill her prediction. However I do not see how the brain, a physical thing necessarily open to all kinds of physical influences, can do the same.
I believe a self-driving car, or perhaps this new robot could also block out outside influences to achieve a particular goal (such as running across an obstacle course). I don't believe these cars or robots are conscious or have the ability to be intentionally "adamant" in the same sense as humans can, so I think that part of the argument could work. daveS
CR @30
CR: He did? Then where is your reference? If you found something, then share it with the rest of us. ... where is your reference? I won’t be holding my breath. Apparently, you, like everyone else here, appeals to Popper when you think it suits your purpose. Go figure
Hold your horses CR. I just couldn’t resist mentioning Popper’s position to you. Jim Slagle happens to briefly mention it in his excellent book “The Epistemological Skyhook”. You know very well what my opinion of Popper is. For one thing, he doesn’t understand the simple concept of self-defeating statements. So, no, I do not wish to appeal to Popper in any way. I don’t need him for any argument. But, since you insist, here is the quote, for what it is worth:
Popper: Now once we assume that the scientific theories and the initial conditions are given, and also the prediction task, the derivation of the prediction becomes a problem of mere calculation, which in principle can be carried out by a predicting or a calculating machine—a ‘calculator’ or a ‘predictor’, as it may be called. This makes it possible to present my proof in the form of a proof that no calculator or predictor can deductively predict the results of its own calculations or predictions. [Popper, ‘The Open Universe: An Argument for Indeterminism’, ch. 22 ‘The Impossibility of Self-Prediction’]
Origenes
DaveS @29
DaveS: The example human predictions we were discussing appear to be about as specific as the computer prediction. “I will perform two pushups tomorrow at noon” “The message ‘download complete’ will appear in 5 minutes” I don’t see a great deal of difference here.
My argument is NOT that there is a difference in the level of specificity of both predictions. What I am saying is that the time required for a completed download is indifferent to (‘independent of’) the specificity of the file data. It does not matter if the download is an audio file or text file or whatever. And if it is a text file, it does not matter what the text is about. The point is: all data behaves the same with respect to download time. This ‘indifference to specifics’ is similar to the fact that the specific location of each specific water molecule can be safely ignored when we predict the required time for water to boil, since all water molecules behave the same. The processes in the brain, between now and tomorrow morning 9 o’clock, are obviously not in the same sense indifferent to the prediction “tomorrow morning at 9 o’clock I will do 2 push-ups.”
DaveS: Going back a few posts, I also don’t see much difference in response to environmental input. Of course if I injure my shoulder or get hit by a bus, my prediction would turn out to be wrong. Similarly, if there is a power outage or the website crashes, then the prediction of the computer program will also turn out to be incorrect.
In the OP I argue:
A prediction of specific behavior would also require predicting environmental input, which lies beyond the brain’s control. We, as intelligent agents, can, within limits, ignore a multitude of environmental inputs and stick to the plan — ‘tomorrow morning at 9 o’clock I will do 2 push-ups, no matter what’ —, but the brain cannot do this. The intractable environmental (sensory) input and the neural firing that result from it necessarily influences the state of the brain tomorrow morning at 9 o’clock.
Put another way, a person can be adamant about doing 2 push-ups at 9 in the morning and, in being so, can block out a host of distracting influences to fulfill her prediction. However I do not see how the brain, a physical thing necessarily open to all kinds of physical influences, can do the same. Origenes
Needless to say, this indifference to specifics does not apply to the brain.
Well, go ahead. This indifference to specifics does not apply to the brain because.....? critical rationalist
@Origenes Deutsch...
“If you fill a kettle with water and switch it on, all the supercomputers on Earth working for the age of the universe could not solve the equations that predict what all those water molecules will do – even if we could somehow determine their initial state and that of all the outside influences on them, which is itself an intractable task.
You seem to have selectively quoted my comment. The temperature in the environment could drop, not to mention a number of other initial conditions that are intractable. We can use this same high-level theory to compensate for changes. And if it drops too low, we would change our prediction to say that we cannot make tea because, well, either the kettle is outside our spaceship and we cannot retrieve it, or we will be dead because the temperature will kill us, etc.
Ironically I found out that Popper(!) has argued this as well. And even more ironically: he emphatically argues that self-prediction is impossible. Go figure.
He did? Then where is your reference? If you found something, then share it with the rest of us. IOW, it's not clear that you actually understand Popper's position, or that what he means by 'impossible to self-predict' is to the same degree or due to the same argument. It's impossible for me to know for certain I will eat lunch tomorrow at 12pm. This is because I cannot know I won't get sick. Or the power might go out. Or my car might break down, or any number of other external events. Or I might get an offer for free lunch at 12:30, and change my mind. Perhaps you are referring to Popper's criticism of the following...
...the dream of prophecy, the idea that we can know what the future has in store for us, and that we can profit from such knowledge by adjusting our policy to it.
However, that is not the same as what you or Hayek is referring to. So, again, where is your reference? I won't be holding my breath. Apparently, you, like everyone else here, appeals to Popper when you think it suits your purpose. Go figure. critical rationalist
Origenes,
What makes it possible to predict the required time for downloading a large file or the required time for water to boil? What makes these predictable “linear” processes? Does it have to do with mysterious *emergence*? I would suggest not: It is simply because we can ignore the specifics, since they are irrelevant to the outcome. It is of no import whether it is an audio file, text file or whatever file that is being downloaded, since, with respect to download time, all data behaves the same. It is of no import which water molecule is up or down, since they all act the same. Needless to say, this indifference to specifics does not apply to the brain.
The example human predictions we were discussing appear to be about as specific as the computer prediction. "I will perform two pushups tomorrow at noon" "The message 'download complete' will appear in 5 minutes" I don't see a great deal of difference here. Going back a few posts, I also don't see much difference in response to environmental input. Of course if I injure my shoulder or get hit by a bus, my prediction would turn out to be wrong. Similarly, if there is a power outage or the website crashes, then the prediction of the computer program will also turn out to be incorrect. daveS
DaveS, CR What makes it possible to predict the required time for downloading a large file or the required time for water to boil? What makes these predictable "linear" processes? Does it have to do with mysterious *emergence*? I would suggest not: It is simply because we can ignore the specifics, since they are irrelevant to the outcome. It is of no import whether it is an audio file, text file or whatever file that is being downloaded, since, with respect to download time, all data behaves the same. It is of no import which water molecule is up or down, since they all act the same. Needless to say, this indifference to specifics does not apply to the brain. Origenes
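[A minimal sketch of this 'indifference to specifics' point, added for illustration with made-up numbers: the predicted download time is a function of byte count and bandwidth only; nothing about what the bytes encode enters the formula.]

```python
def predicted_download_seconds(file_size_bytes, bandwidth_bytes_per_s):
    """Download time depends only on size and bandwidth, never on content."""
    return file_size_bytes / bandwidth_bytes_per_s

size = 500_000_000       # 500 MB: audio, text, whatever - irrelevant
bandwidth = 10_000_000   # 10 MB/s, assumed constant

print(predicted_download_seconds(size, bandwidth))  # 50.0 seconds
```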
CR @26
CR: Fortunately, some of that complexity resolves itself into a higher-level simplicity. For example, we can predict with some accuracy how long the water will take to boil. To do so, we need know only a few physical quantities that are quite easy to measure, such as its mass, the power of the heating element, and so on.
The fact that we do not need the specifics in order to predict when water will boil is because we are dealing with a linear process. DaveS has provided two other examples of linear processes — see DaveS #8 and my response #10. In response I have pointed out that this does not apply to human beings, because “neural firing and our behavior are rarely linear in that sense.” See also KF #24. Moreover, I suppose that this linear water-boiling process runs its predictable course well protected from intractable environmental influences which could potentially change its outcome — see OP, #7.
CR: … the universality of computation is an emergent property.
Perhaps, WRT computers, a valid (non-linear) analogy would be a computer program capable of predicting the result of its calculation. However, as I have stated in #17: no program can itself predict what the result of its calculation will be. Ironically I found out that Popper(!) has argued this as well. And even more ironically: he emphatically argues that self-prediction is impossible. Go figure. Origenes
In his book ‘The Sensory Order’ (1952) Von Hayek argues that, in order to predict a system, you need a distinct system with a higher degree of complexity. His argument can be summarized as follows: [...] IOWs the brain cannot predict itself, because, in order to predict the brain, one needs a system with a higher degree of complexity than the brain itself. – In order to predict specific behavior, the brain cannot run simulations of possible future neuronal interactions, because it is simply too complex. The human brain is perhaps the most complex thing in the universe. The average brain has about 100 billion neurons. Each neuron fires (on average) about 200 times per second. And each neuron connects to about 1,000 other neurons.
From The Beginning of Infinity.....
“If you fill a kettle with water and switch it on, all the supercomputers on Earth working for the age of the universe could not solve the equations that predict what all those water molecules will do – even if we could somehow determine their initial state and that of all the outside influences on them, which is itself an intractable task. Fortunately, some of that complexity resolves itself into a higher-level simplicity. For example, we can predict with some accuracy how long the water will take to boil. To do so, we need know only a few physical quantities that are quite easy to measure, such as its mass, the power of the heating element, and so on. For greater accuracy we may also need information about subtler properties, such as the number and type of nucleation sites for bubbles. But those are still relatively ‘high-level’ phenomena, composed of intractably large numbers of interacting atomic-level phenomena. Thus there is a class of high-level phenomena – including the liquidity of water and the relationship between containers, heating elements, boiling and bubbles – that can be well explained in terms of each other alone, with no direct reference to anything at the atomic level or below. In other words, the behaviour of that whole class of high-level phenomena is quasi-autonomous – almost self-contained. This resolution into explicability at a higher, quasi-autonomous level is known as emergence.”
IOW, we do not need to simulate the human brain at this level because most of the things that we're actually interested in are explicable in this higher sense. Furthermore, a Turing machine can be implemented in a number of ways, such as cogs, vacuum tubes, etc. Do we need to be able to predict the motions of individual atoms in a cog? How about the individual atoms in a transistor? All of these implementation details are intractable as well. Yet, we can predict that each of them can run any algorithm that any other can run. This is because the universality of computation is an emergent property. critical rationalist
KF @24 Indeed, when one contemplates the daily multitude of intractable factors, which can potentially influence one's behavior, then one intuitively grasps that prediction of behavior is impossible. And this notion of impossibility is multiplied by a magnitude when one, in accord with materialism, tries to envision countless factors operating in the brain and its environment. This is exactly why the fact that one routinely self-predicts one’s behavior with high specificity cries out for an explanation — especially in the context of materialism. Here, I have attempted to make the intuitive notion that materialism has no explanation for self-prediction more robust by pointing out: 1. The environment inputs unknown variables — see OP, #7. 2. Von Hayek’s argument — see OP, #18, #23. At the moment I am quite happy with the result. ‘The argument from self-prediction’ seems to be doing very well. Origenes
Folks, Here's a thought: doesn't the human life of the mind quite often show watershed-like sensitive dependence on conditions or decisions or "little differences at the beginning," suggesting chaos and unpredictability in the long run? All of this being a reflection of extreme non-linearity and being a bit more than a statistical fluctuation around a trend? KF kairosfocus
To all, Let’s take a look at Von Hayek’s analysis again:
… Prediction of a system O requires classification of the system’s states. If these states can differ in n different aspects, that is, they can be subsumed under n different predicates, there are 2^n different types of states a classificatory system P must be able to distinguish.
Suppose that the state of the brain can differ in 300 aspects — an extremely modest assumption I would say — then there are 2^300 different types of states that a classificatory system must be able to distinguish. 2^300 is a huge number which is roughly equal to 10^90, which should give us pause, since 10^80 is the commonly accepted answer for the number of particles in the observable universe. This number would include the total of the number of protons, neutrons, neutrinos and electrons. - - - - J-Mac @22 Sorry, I must have misunderstood your point. Origenes
Origenes, What makes you think that my point of view is a materialistic one? J-Mac
J-Mac @20
J-Mac: If consciousness is quantum, as it appears to be, all of this has no merit… because of one simple indisputable condition of quantum mechanics; quantum sub-particles can be in more than one place at the same time…there is no doubt about it…
Can you tell me why a quantum consciousness would render the argument without merit? Would it solve the problem of self-prediction for materialism? Correct me if I am wrong, but my first thought is that positing involvement of quantum events would make the brain an even more complex system than we thought it was — with even more possible states, at the same time and place, as you say — which would make predicting its course a bigger challenge. Moreover, in case you are positing causal input stemming from indeterministic quantum events, then the possibility of prediction is ruled out entirely. As I have argued in #17, only a deterministic world can house accurate self-prediction of the brain. Also, I do not see how quantum consciousness can predict environmental input between now and the 2 push-ups tomorrow morning at 9 o'clock. As I wrote in post #9: “The environment inputs unknown variables.” That obstacle for self-prediction, which has been ignored so far, needs to be addressed also.
J-Mac: If someone wants to introduce the soul into this equation, he needs to not only define it but also on what level it operates and why people under general anesthetic lose the contact/connection with their soul…
That is way beyond the reach of this modest argument. All the argument aims to show is that materialism lacks the tools to explain everyday self-prediction.
J-Mac: This is just one of many objections I have…
Please, offer your objections to the argument. Origenes
If consciousness is quantum, as it appears to be, all of this has no merit... because of one simple indisputable condition of quantum mechanics; quantum sub-particles can be in more than one place at the same time...there is no doubt about it... If someone wants to introduce the soul into this equation, he needs to not only define it but also on what level it operates and why people under general anesthetic lose the contact/connection with their soul... This is just one of many objections I have... J-Mac
To all, //Reflections on the argument from self-prediction.// The capability of human beings to self-predict rests on a set of interdependent capabilities of consciousness: self-awareness, self-control, self-movability, self-organization, self-judgement and so forth. The self-prediction “tomorrow morning at 9 o’clock I will do 2 push-ups”, presupposes self-awareness, self-control, self-movability and arguably more capabilities in this category. One could say that, in order for self-prediction to exist, consciousness must perform the unimaginable: it must encompass, or house, itself. The argument from self-prediction is firmly based on the notion that no physical system, the brain included, can do this. Why not? In short, because no house can house itself. Origenes
Atom @11
Atom: Assume system O has n aspects, each modeled by a single bit in O .... For P to model O’s behavior, it is either sufficient to model all of O’s bits, or it is not sufficient. If it is, then obviously O is always its own model (since it perfectly “models” its own state).
If O is completely static/inert — can only have the current state it is in — then, obviously, there is nothing to predict.
Atom: If it is not sufficient, then this is something that needs to be shown.
In the case of the brain, is it really necessary “to show” that O can have different states? I suppose that I have misunderstood what you are saying. Can you elaborate? I managed to dig up the following quote by Von Hayek via ‘google books’, which may be helpful :
1. System P can predict system O.
2. A system P can predict a system O only if P can classify the states of O (O-states).
3. P can classify the O-states if P can represent all types of O-states.
4. Thus: P can represent all types of O-states.
5. The number of types of O-states is of a higher magnitude than the degree of complexity of O.
6. If P can represent all types of states of O and the number of types of O-states is of a higher magnitude than the degree of complexity of O, then the number of O-states P can represent is of a higher magnitude than the degree of complexity of O.
7. Thus: The number of O-states P can represent is of a higher magnitude than the degree of complexity of O.
8. The degree of complexity of a system P is at least as high as the number of states it can represent.
9. Thus: P is more complex than O.
10. Thus: If a system P can predict a system O, then P is more complex than O. [By elimination of assumption (1).]
.... Where (5) is warranted as follows, argument H:
1. The number of types of O-states (2^n) is of a higher magnitude than the number of aspects in which two O-states can differ (r).
2. The degree of complexity of some system is identical with the number of aspects in which its states can differ.
3. Thus: The number of types of O-states is of a higher magnitude than the degree of complexity of O.
Origenes
To all, Some thoughts: Setting aside the problem of intractable environmental input and Von Hayek’s argument, it is important to note that only a deterministic world could house accurate self-prediction by the brain. If neural events are caused or influenced by indeterministic quantum events, as is often claimed, then it is game over for the brain’s ambition to self-predict. WRT computers: no program can predict the result of its own calculation in advance of performing it (a sketch of why follows below). Origenes
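A minimal sketch of that last point, in the spirit of the halting-problem diagonalization (an editorial illustration, not anything from Von Hayek; the function names are hypothetical): any candidate program-predictor can be defeated by a program built to do the opposite of whatever is predicted about it.

```python
# Illustration: no total "predictor" can forecast the output of every
# program, in the spirit of the halting-problem diagonalization. The
# predictor below is a hypothetical stand-in; any concrete replacement
# fails in the same way.

def predictor(program, argument):
    """Hypothetically returns whatever program(argument) would return."""
    raise NotImplementedError("no such total predictor can exist")

def contrarian(program):
    # Ask the predictor what program(program) will return, then do the opposite.
    return 1 if predictor(program, program) == 0 else 0

# Were predictor total and correct, predictor(contrarian, contrarian) would
# have to equal contrarian(contrarian); but contrarian returns the opposite
# of that very prediction -- a contradiction. Hence no such predictor exists.
```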
Seversky: The argument can be attacked on the grounds that 2 and 3 imply an unstated premiss which is that the conscious “I” or mind is a separate entity from the operations of the physical brain …. begging the question …
Nonsense. I hope that is clear enough.
Seversky: Even if we concede Hayek’s argument that a system cannot incorporate an exact one-to-one representation of itself that does not preclude the possibility of forecasting based on models.
Granted, but here we are not talking about a general prediction like “I will do some sporting activity in the next couple of months”, but, instead, we are talking about a very specific prediction: “tomorrow morning at 9 o’clock I will do 2 push-ups”, which demands accuracy in prediction.
Seversky: The computer climate models used by meteorologists to forecast weather trends do not and cannot represent the movement of each individual molecule of gas, droplet of water or piece of particulate but they are still able to predict weather for the next 3-5 days with reasonable accuracy.
Do you think so? This is not the case where I live. But let’s not digress: the point is that accuracy is required, and that seems to be Von Hayek’s concern as well.
Seversky: Our internal model … pizza … navigate across a landscape … very detailed awareness …
Sorry, but I do not see the relevance to my argument.
Seversky: We can make predictions …
I know, and that fact poses a formidable problem for materialists, since, as I have argued, the brain cannot. Origenes
DaveS @13
DaveS: When we humans make these predictions about push-ups or going shopping it’s not any great mental feat. It’s not as if we are running an accelerated, faithful simulation of ourselves, which … would be impossible.
I agree with you. That is not what we do. And that is not what I argue.
DaveS: We use heuristics, induction, extrapolation from small samples, &c.
I would like to add that free will is also involved. However, if materialism is true, then free will as an explanation for the prediction “tomorrow morning at 9 o’clock I will do 2 push-ups” is not an option. If materialism is true, we need to naturalize prediction, and there seems to be no other option available than to argue for the existence of some (physical) system capable of predicting specific neural states and the ensuing behavior. And this is where Von Hayek becomes relevant. Origenes
kairosfocus @1
Origenes raises the issue of Self-prediction and the materialist view of the mind as the brain in action. Food for thought.
Yes, indeed. Here are a few thoughts.
ORIGENES: Here I will argue that self-prediction cannot be accommodated by materialism. In daily life we all routinely engage in acts of self-prediction — ‘tomorrow morning at 9 o’clock I will do 2 push-ups’, ‘I will do some Christmas shopping next Friday’ … and so forth. The question is: how does materialism explain that familiar phenomenon? Given that specific behavior (e.g. doing 2 push-ups) results from specific neural states, how is it that we can predict its occurrence?
For example, ‘tomorrow morning at 9 o’clock I will do 2 push-ups’ or ‘I will do some Christmas shopping next Friday’ could be either statements of intent or predictions but they are not the same thing. A statement of intent is not a claim about what is or will be. It is a formulation of purpose and as such is neither true nor false. A prediction, being a forecast of a future state of affairs based on a current state of affairs, is capable of being true or false.
The Argument from Self-prediction

1. If materialism is true, then human behavior is caused by neural events in the brain and environmental input.
2. The brain cannot predict future behavior with any specificity.
3. I can predict future behavior with specificity.

Therefore,

4. Materialism is false.
The argument can be attacked on the grounds that 2 and 3 imply an unstated premiss which is that the conscious "I" or mind is a separate entity from the operations of the physical brain. Since this is one of the key points at issue, this is begging the question and the conclusion does not necessarily follow. It is also unclear how much "specificity" is required to warrant conclusions about predictive behavior.
Von Hayek then goes on to conclude that:
No system is more complex than itself. Thus: No system can predict itself or any other system of (roughly) the same degree of complexity (no self-referential prediction).
Even if we concede Hayek's argument that a system cannot incorporate an exact one-to-one representation of itself, that does not preclude the possibility of forecasting based on models. The computer climate models used by meteorologists to forecast weather trends do not and cannot represent the movement of each individual molecule of gas, droplet of water or piece of particulate, but they are still able to predict weather for the next 3-5 days with reasonable accuracy.

Our conscious day-to-day experience of reality can also be viewed as an incomplete model of what is really out there, but it is sufficient to enable us to navigate through it in reasonable safety, which involves making predictions about it. Our internal model is assumed to be based on information gathered by our senses, but those senses only give us limited access to what is out there. Our eyes can only detect light from the visible waveband. We cannot see even near infra-red or ultra-violet, although there are other creatures that can. Dogs and cats can hear sounds that are inaudible to us. On smell, a dog-handler once told me as an illustration that where you or I could recognize the smell of a pizza, a dog could identify every single ingredient that went into the making of that pizza.

Our internal model also necessarily includes a representation or model of ourselves. For example, in order to navigate across a landscape we must know not just the landscape but also where we are on it and how we are able to move across it. But it is just a model and necessarily incomplete. We like to think we know ourselves, but we are not aware, for example, of the flow of blood through the millions of tiny capillary blood vessels; we feel nothing of the minute-to-minute processes going on in our liver, kidneys or pancreas; nor can we detect the firing of each of the billions of neurons in our brain on a second-to-second basis.

At a conscious level, we seem to have a very detailed awareness of what is happening, but we are not directly conscious of all the processing that feeds data into that awareness. If Von Hayek is right and we cannot contain an exact full representation of ourselves, then we ourselves are just a construct, a partial model of all that we are in reality. We can make predictions based on that model or set of models. They will have varying degrees of accuracy, but it seems to be the best we can hope for. Seversky
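Seversky's forecasting-from-models point can be put in code. Below is a minimal sketch with invented numbers: no real climate model, just a straight-line fit to a few coarse readings, extrapolated one step ahead without modeling any micro-states.

```python
# Sketch of forecasting from an incomplete model: fit a line to a few coarse
# readings and extrapolate one step ahead, never modeling the micro-states.
# All numbers are invented for the example.

temps = [14.0, 15.2, 15.9, 17.1, 18.0]   # hypothetical noon temperatures, days 0-4
days = list(range(len(temps)))

# Least-squares slope and intercept, computed by hand (no external libraries).
n = len(temps)
mean_d = sum(days) / n
mean_t = sum(temps) / n
slope = (sum((d - mean_d) * (t - mean_t) for d, t in zip(days, temps))
         / sum((d - mean_d) ** 2 for d in days))
intercept = mean_t - slope * mean_d

print(f"forecast for day 5: {intercept + slope * 5:.1f} degrees")
```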
I think Atom has it correct. When we humans make these predictions about push-ups or going shopping it's not any great mental feat. It's not as if we are running an accelerated, faithful simulation of ourselves, which, if materialism is true, would be impossible. We use heuristics, induction, extrapolation from small samples, &c. So I doubt that the von Hayek passages support premise 2. daveS
Maybe I'm missing something obvious, but any system of n parts can store (model?) up to 2^n states. Assume system O has n aspects, each modeled by a single bit in O. (We'll assume a computational architecture, for the sake of argument.) For P to model O's behavior, it is either sufficient to model all of O's bits, or it is not sufficient. If it is, then obviously O is always its own model (since it perfectly "models" its own state). If it is not sufficient, then this is something that needs to be shown. But counting the number of bits does not establish that, since you only need log_2 m bits to model m distinct states. While we may be able to rule out a part modeling the whole, I am not sure we can rule out the whole modeling the whole or a part modeling another part. I'm open to being persuaded otherwise, though, if I've missed something. Atom
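Atom's counting point, sketched in code (a minimal illustration under the same computational assumption): n bits span 2^n states, yet indexing m distinct states needs only ceil(log2(m)) bits.

```python
import math

# Atom's counting point: a register of n bits spans 2^n distinct states,
# yet telling m given states apart needs only ceil(log2(m)) bits -- far
# fewer than m. Counting state-types alone does not force the modeler
# to be more complex than the modeled.

n = 10
m = 2 ** n                          # 1024 state-types from 10 binary aspects
bits_needed = math.ceil(math.log2(m))
print(f"{n} bits span {m} states; indexing those states needs {bits_needed} bits")
```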
DaveS @8
DaveS: Say the tree is in fact a path with 100 million nodes (i.e., essentially a list) and the “prediction” is for the time required to traverse the entire tree. The computer program could perform a test run using a smaller tree with only 5 million nodes, measure the elapsed time, then multiply this by 20 to predict the time required to traverse the full sized tree. This is a very simple example, but illustrates how a computer could make self-predictions.
This analogy would shed light on what could perhaps be called “linear” behavior.
Paraphrasing: at the moment I “do” 0.000001 push-ups; in 10 seconds ... (just a moment, let me run a simulation) ... okay, right ... it will be 0.000005 push-ups; so, tomorrow morning at 9 o’clock I will be doing 2 push-ups.
Linearity reduces the daunting task of prediction to humdrum multiplication … However, I would argue that neural firing and our behavior are rarely linear in that sense.
DaveS: A more familiar example: When you download a large file, often your software will give you a running “countdown” which displays the estimated time until the download is complete (and when it will finish downloading). That’s a self-prediction as well.
One thing is for sure: this is another “linear” example; the arithmetic behind such a countdown is sketched below.
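For illustration, the linear move behind a download countdown amounts to no more than this (the function name and figures are hypothetical):

```python
# Sketch of the linear "self-prediction" behind a download countdown.
# Function name and figures are hypothetical illustration.

def eta_seconds(bytes_done, bytes_total, elapsed_seconds):
    rate = bytes_done / elapsed_seconds        # average speed observed so far
    return (bytes_total - bytes_done) / rate   # assumes the rate stays constant

# 30 MB of a 100 MB file fetched in 12 s -> the software predicts 28 s remain.
print(f"{eta_seconds(30e6, 100e6, 12.0):.0f} seconds remaining")
```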
DaveS: No human can predict the entirety of his/her brain state at any time, so I think part of Bob O’H’s point is that the Von Hayek result is not really useful here.
I do NOT claim that a human can predict his/her brain state. My claim, which is scientifically verifiable (!), is that humans routinely predict their own behavior. However, according to materialism, behavior results from a brain state. If that is true, if materialism is correct, then there is no realm independent of brain states that can predict behavior. Origenes
Bob O "the problem with using the Von Hayek quotes is that they assume that the system is predicting itself fully. But ‘tomorrow morning at 9 o’clock I will do 2 push-ups’ is only predicting a part of the system." If human predicts instead of healthy ‘tomorrow morning at 9 o’clock I will do 2 push-ups’ ‘tomorrow morning at 9 o’clock I will commit suicide’ he's predicting the whole state of the system. I think Von Hayek quotes stand. Eugen
Origenes,
How large a tree are we talking about? How large is the search space?
Much, much smaller than the number of possible brain states, obviously, but the size of the tree is not relevant to my point. Say the tree is in fact a path with 100 million nodes (i.e., essentially a list) and the "prediction" is for the time required to traverse the entire tree. The computer program could perform a test run using a smaller tree with only 5 million nodes, measure the elapsed time, then multiply this by 20 to predict the time required to traverse the full sized tree. (Or more realistically, the computer could perform a series of test runs and use some kind of regression to obtain a more accurate formula for the total time required on the full sized tree.) This is a very simple example, but it illustrates how a computer could make self-predictions. A more familiar example: When you download a large file, often your software will give you a running "countdown" which displays the estimated time until the download is complete (and when it will finish downloading). That's a self-prediction as well. No human can predict the entirety of his/her brain state at any time, so I think part of Bob O'H's point is that the Von Hayek result is not really useful here. daveS
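DaveS's test-run-and-scale idea can be sketched as follows (an illustration of the approach, with sizes kept as described but a trivial stand-in for the tree; not anyone's actual implementation):

```python
import time

# Sketch of the test-run-and-scale self-prediction: time a traversal of a
# small path, then extrapolate linearly to the full-sized one.

def traverse(n_nodes):
    total = 0
    for node in range(n_nodes):   # stand-in for visiting each node in turn
        total += node
    return total

small, full = 5_000_000, 100_000_000

start = time.perf_counter()
traverse(small)
elapsed = time.perf_counter() - start

predicted = elapsed * (full / small)   # linear scaling, as in the comment
print(f"test run took {elapsed:.2f}s; predicted full traversal: {predicted:.2f}s")
```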
DaveS @3
DaveS: Suppose you are using a computer to search through a large tree, a process that will take several days.
Dave, you are good with numbers. How large a tree are we talking about? How large is the search space?
The average brain has about 100 billion neurons. Each neuron fires (on average) about 200 times per second. And each neuron connects to about 1,000 other neurons.
If I am not mistaken, that results in (on average) 20,000,000 billion (2×10^16) bits of information per second, and (obviously) a lot more over several days. Now, in line with Von Hayek, if these neural states can differ in n different aspects, there are 2^n different types of states a prediction system must be able to distinguish. My take is that, in the case of the brain, n is a horrific number. Do you agree? And BTW, how about the intractable environmental input? I am talking about sensory input and the effects of, say, food, drinks and all those bacteria that we carry with us. If the brain starts running simulations of itself, it has to factor those in as well. And — and this is important — the brain does not control the input from the environment. The environment inputs unknown variables. Origenes
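The arithmetic behind those figures, made explicit (taking the quoted averages at face value):

```python
# Making the arithmetic explicit, using the averages quoted above.
neurons = 100e9            # ~100 billion neurons
firing_rate = 200          # average firings per neuron per second
connections = 1000         # average connections per neuron

signals_per_second = neurons * firing_rate * connections
print(f"{signals_per_second:.0e} signals per second")   # 2e+16 = 20,000,000 billion
```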
Pardon, corrected a mis-spelt name. kairosfocus
Bob O'H @2
Bob O'H: the problem with using the Von Hayek quotes is that they assume that the system is predicting itself fully. But ‘tomorrow morning at 9 o’clock I will do 2 push-ups’ is only predicting a part of the system.
The brain is highly interconnected, so it would be difficult to predict part of it while ignoring the rest.
Bob O'H: Another problem is that the n states might not be independent of each other or of current conditions. e.g. they might be points in space.
Can you elaborate please? I do not understand your point. Origenes
I predict that I will put off until tomorrow what I could do today. Mung
Suppose you are using a computer to search through a large tree, a process that will take several days. Couldn't the computer be programmed to estimate (shortly after beginning) approximately when the search will arrive at a specified node of the tree? Or, similarly, what node it will be checking at a specified time in the future? Edit: This relates to Bob O'H's comment I believe. It's no problem to predict certain aspects of the computer's behavior. Predicting its complete state is another matter. Note to KF: Origines -> Origenes daveS
the problem with using the Von Hayek quotes is that they assume that the system is predicting itself fully. But ‘tomorrow morning at 9 o’clock I will do 2 push-ups’ is only predicting a part of the system. Another problem is that the n states might not be independent of each other or of current conditions. e.g. they might be points in space. Bob O'H
Origenes raises the issue of Self-prediction and the materialist view of the mind as the brain in action. Food for thought. kairosfocus
