
Logic & First Principles, 21: Insightful intelligence vs. computationalism


One of the challenges of our day is the commonplace reduction of intelligent, insightful action to computation on a substrate. That's not just Sci-Fi; it is a challenge in the academy and on the street — especially as AI grabs more and more headlines.

A good stimulus for thought is John Searle as he further discusses his famous Chinese Room example:

The Failures of Computationalism
John R. Searle
Department of Philosophy
University of California
Berkeley CA

The Power in the Chinese Room.

Harnad and I agree that the Chinese Room Argument deals a knockout blow to Strong AI, but beyond that point we do not agree on much at all. So let’s begin by pondering the implications of the Chinese Room.

The Chinese Room shows that a system, me for example, could pass the Turing Test for understanding Chinese, for example, and could implement any program you like and still not understand a word of Chinese. Now, why? What does the genuine Chinese speaker have that I in the Chinese Room do not have?

The answer is obvious. I, in the Chinese room, am manipulating a bunch of formal symbols; but the Chinese speaker has more than symbols, he knows what they mean. That is, in addition to the syntax of Chinese, the genuine Chinese speaker has a semantics in the form of meaning, understanding, and mental contents generally.

But, once again, why?

Why can’t I in the Chinese room also have a semantics? Because all I have is a program and a bunch of symbols, and programs are defined syntactically in terms of the manipulation of the symbols.

The Chinese room shows what we should have known all along: syntax by itself is not sufficient for semantics. (Does anyone actually deny this point, I mean straight out? Is anyone actually willing to say, straight out, that they think that syntax, in the sense of formal symbols, is really the same as semantic content, in the sense of meanings, thought contents, understanding, etc.?)

Why did the old time computationalists make such an obvious mistake? Part of the answer is that they were confusing epistemology with ontology, they were confusing “How do we know?” with “What it is that we know when we know?”

This mistake is enshrined in the Turing Test (TT). Indeed this mistake has dogged the history of cognitive science, but it is important to get clear that the essential foundational question for cognitive science is the ontological one: "In what does cognition consist?" and not the epistemological other-minds problem: "How do you know of another system that it has cognition?"

What is the Chinese Room about? Searle, again:

Imagine that a person—me, for example—knows no Chinese and is locked in a room with boxes full of Chinese symbols and an instruction book written in English for manipulating the symbols. Unknown to me, the boxes are called “the database” and the instruction book is called “the program.” I am called “the computer.”

People outside the room pass in bunches of Chinese symbols that, unknown to me, are questions. I look up in the instruction book what I am supposed to do and I give back answers in Chinese symbols.

Suppose I get so good at shuffling the symbols and passing out the answers that my answers are indistinguishable from a native Chinese speaker’s. I give every indication of understanding the language despite the fact that I actually don’t understand a word of Chinese.

And if I do not, neither does any digital computer, because no computer, qua computer, has anything I do not have. It has stocks of symbols, rules for manipulating symbols, a system that allows it to rapidly transition from zeros to ones, and the ability to process inputs and outputs. That is it. There is nothing else. [Cf. Jay Richards here.]

What is “strong AI”? Techopedia:

Strong artificial intelligence (strong AI) is an artificial intelligence construct that has mental capabilities and functions that mimic the human brain. In the philosophy of strong AI, there is no essential difference between the piece of software, which is the AI, exactly emulating the actions of the human brain, and actions of a human being, including its power of understanding and even its consciousness.

Strong artificial intelligence is also known as full AI.

In short, Reppert has a serious point:

. . . let us suppose that brain state A [–> notice, state of a wetware, electrochemically operated computational substrate], which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief [–> conscious, perceptual state or disposition] that Socrates is mortal. It isn't enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain's being in a particular type of state that is relevant to physical causal transactions.

This brings up the challenge that computation [on refined rocks] is not rational, insightful, self-aware, semantically based, understanding-driven contemplation:

While this is directly about digital computers — oops, let’s see how they work —
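For a concrete taste of that "how" (a minimal sketch of my own, standing in for the figure): a one-bit half-adder built out of a single primitive, the NAND gate. It is nothing but blind, rule-bound signal manipulation:

```python
# A half-adder from NAND gates alone: digital computation reduced to
# mechanical, rule-bound symbol shuffling. Illustrative sketch only.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def half_adder(a: int, b: int):
    """Return (sum, carry) for one-bit inputs, using only NAND."""
    n1 = nand(a, b)
    s = nand(nand(a, n1), nand(b, n1))  # XOR built from four NANDs
    c = nand(n1, n1)                    # AND = NOT(NAND)
    return s, c

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```

The gates "know" nothing; the outputs are fixed by wiring and device physics, which is the point at issue.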

. . . but it also extends to analogue computers (which use smoothly varying signals):

. . . or a neural network:

A neural network is essentially a weighted-sum interconnected gate array; it is not an exception to the GIGO principle
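To see the weighted-sum point concretely, here is a toy neuron in Python (my illustration; the weights are arbitrary). The unit computes a weighted sum and squashes it, and the arithmetic is indifferent to what the numbers are supposed to mean, hence GIGO:

```python
# A "neuron" is a weighted sum pushed through a squashing function;
# a network is layers of such sums. Weights here are arbitrary.
import math

def neuron(inputs, weights, bias):
    s = sum(w * x for w, x in zip(weights, inputs)) + bias  # weighted sum
    return 1.0 / (1.0 + math.exp(-s))                       # sigmoid "gate"

# Garbage weights in, garbage output out: the arithmetic does not care.
print(neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))
```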

A similar approach uses memristors, creating an analogue weighted sum vector-matrix operation:
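A sketch of the idea (my own, with illustrative values only): program the weights as conductances G, drive the rows with voltages v, and Kirchhoff's current law reads the weighted sums out as column currents, I = G^T v:

```python
# Analogue vector-matrix multiply in the style of a memristor crossbar:
# G[i][j] is the programmed conductance at crosspoint (i, j); row voltages
# drive currents that sum per column. Values are made up for illustration.
import numpy as np

G = np.array([[1.0e-3, 2.0e-3],   # siemens; row i = input line i
              [0.5e-3, 1.5e-3],
              [2.0e-3, 0.1e-3]])
v = np.array([0.2, 0.5, 0.1])     # volts on the three rows

I = G.T @ v                       # column currents = analogue dot products
print(I)
```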

As we can see, these entities are about manipulating signals through physical interactions, not essentially different from Leibniz’s grinding mill wheels in Monadology 17:

It must be confessed, however, that perception, and that which depends upon it, are inexplicable by mechanical causes, that is to say, by figures and motions. Supposing that there were a machine whose structure produced thought, sensation, and perception, we could conceive of it as increased in size with the same proportions until one was able to enter into its interior, as he would into a mill. Now, on going into it he would find only pieces working upon one another, but never would he find anything to explain perception [i.e. abstract conception]. It is accordingly in the simple substance, and not in the compound nor in a machine that the perception is to be sought . . .

In short, computationalism falls short.

I add [Fri May 31] that computational substrates are forms of general dynamic-stochastic systems and are subject to the limitations of such systems:
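In schematic form (a toy of my own with arbitrary coefficients), such a substrate is a state-update rule driven by law plus noise:

```python
# A computational substrate viewed as a generic dynamic-stochastic system:
# the next state is a lawlike function of state and input, plus a noise term.
import random

def step(x, u, a=0.9, b=0.5, sigma=0.01):
    w = random.gauss(0.0, sigma)   # stochastic disturbance
    return a * x + b * u + w       # mechanical cause-effect update

x = 0.0
for t in range(5):
    x = step(x, u=1.0)
    print(f"t={t}  x={x:.4f}")
```

Everything the system does is fixed by the update law, the inputs and the noise; nothing in the loop "sees" a ground for inference.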

The alternative is a supervisory-oracle-controlled, significantly free, intelligent and designing bio-cybernetic agent:

As context (HT Wiki) I add [June 10] a diagram of a Model Identification Adaptive Controller . . . which, yes, identifies a model for the plant and updates it as it goes:

MIAC action: notice the supervisory control, and the observation of "visible" outputs fed back both to the in-loop controller and to the system-ID block, which creates and updates a model of the plant being controlled. Parallels to the Smith model are obvious.
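To make the two-loop structure concrete, here is a deliberately tiny sketch (my own toy with made-up numbers, not the Wiki diagram itself): the in-loop control law uses the current plant model, while an identification step refits that model from observed input/output behaviour:

```python
# MIAC in miniature: control uses the identified model; identification
# keeps refitting the model from what the plant actually does.
import random

k_true = 2.0          # unknown plant gain (hidden from the controller)
k_hat = 1.0           # identified model, updated online
setpoint = 5.0
lr = 0.2              # identification step size

for t in range(20):
    u = setpoint / k_hat                       # control law uses the model
    y = k_true * u + random.gauss(0, 0.05)     # plant response, with noise
    k_hat += lr * (y - k_hat * u) * u          # gradient step: fit y ~ k_hat*u
    if t % 5 == 0:
        print(f"t={t:2d}  k_hat={k_hat:.3f}  y={y:.3f}")
```

The division of labour is the point: the loop that acts and the loop that maintains the model are distinct, which is what invites the Smith-model parallel.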

As I summarised recently:

What we actually observe is:

A: [material computational substrates] –X –> [rational inference]
B: [material computational substrates] —-> [mechanically and/or stochastically governed computation]
C: [intelligent agents] —-> [rational, freely chosen, morally governed inference]
D: [embodied intelligent agents] —-> [rational, freely chosen, morally governed inference]

The set of observations A through D imply that intelligent agency transcends computation, as their characteristics and capabilities are not reducible to:

– components and their device physics,
– organisation as circuits and networks [e.g. gates, flip-flops, registers, operational amplifiers (especially integrators), ball-disk integrators, neuron-gates and networks, etc],
– organisation/ architecture forming computational circuits, systems and cybernetic entities,
– input signals,
– stored information,
– processing/algorithm execution,
– outputs

It may be useful to add here a simplified Smith model, with an in-the-loop computational controller and an out-of-the-loop supervisory oracle, so that there may be room for pondering the bio-cybernetic system in light of the interface between the computational entity and the oracular entity:

The Derek Smith two-tier controller cybernetic model

In more detail, per Eng. Derek Smith:
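For a feel of the two-tier idea (again, a toy of my own, not Smith's actual model): a lower tier doing mechanical error-correction toward a goal, and an upper, supervisory tier that observes outcomes and resets the goal itself:

```python
# Two-tier loop in the spirit of the Smith model: the lower tier corrects
# toward the current goal; the upper tier watches and revises the goal.

def inner_controller(state, goal, gain=0.3):
    """Lower tier: blind proportional correction toward the current goal."""
    return state + gain * (goal - state)

def supervisor(history, goal):
    """Upper tier: observes outcomes and revises the goal itself."""
    if history and abs(history[-1] - goal) < 0.1:
        return goal + 2.0          # goal reached: set a new objective
    return goal

state, goal, history = 0.0, 1.0, []
for t in range(30):
    state = inner_controller(state, goal)
    history.append(state)
    goal = supervisor(history, goal)
print(f"final state={state:.2f}, final goal={goal:.2f}")
```

Of course, in this sketch the supervisor is itself just more computation; the argument above is precisely that an oracle proper would have to be more than that.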

So too, we have to face the implication that rationality requires freedom. That is, our minds are governed by known, inescapable duties to truth, right reason, prudence (so, warrant), fairness, justice, etc. Rationality is morally governed; it inherently exists on both sides of the IS-OUGHT gap.

That means — on pain of reducing rationality to nihilistic chaos and absurdity — that the gap must be bridged. Post Hume, it is known that this can only be done in the root of reality. Arguably, that points to an inherently good necessary being with the capability to found a cosmos. If you doubt, provide a serious alternative under comparative difficulties: ____________

So, as we consider debates on intelligent design, we need to reflect on what intelligence is, especially in an era when computationalism is a dominant school of thought. Yes, we may come to various views, but the above are serious factors we need to take into account. END

PS: As a secondary exchange developed on quantum issues, I take the step of posting a screen-shot of a relevant Wikipedia clip on the 1999 delayed-choice quantum eraser experiment by Kim et al:

Wiki clip on Kim et al

The layout at a larger scale:

Gaasbeek adds:

Weird, but that’s what we see. Notice, especially, Gaasbeek’s observation on his analysis, that “the experimental outcome (encoded in the combined measurement outcomes) is bound to be the same even if we would measure the idler photon earlier, i.e. before the signal photon by shortening the optical path length of the downwards configuration.” This is the point made in a recent SEP discussion on retrocausality.

PPS: Let me also add, on radio halos:

and, Fraunhofer spectra:

These document natural detection of quantised phenomena.


Comments
WJM,
(Just so everyone knows, I’m only interacting with Dave and Hazel for the benefit of onlookers, like Axel, StephenB, BA, KF, Mike, etc. I don’t expect them to be able to understand any of this, or to give it anything remotely like a fair hearing.)
And likewise, I'll participate to satisfy my own curiosity; I'm not expecting any miracles.
There’s an infinite amount of information we are consciously unaware of that affects us all the time
Let's start there. 1) How do you know there is an infinite amount of information affecting us all the time, and not just a finite amount? Have you taken measurements? 2) You use the phrase "consciously unaware of" here. In the scenario I described, do you believe Mary was actually aware, at perhaps an unconscious level, of the bullet speeding toward her head? 3) I might as well ask, are you claiming that your theory is actually true? Or do you choose to "believe" it simply because that's what works best for you?

daveS
June 10, 2019 at 04:54 AM PDT
Once we've established the ontologically and epistemologically exhaustive mental nature of our experiential reality, we can see the sheer folly of making models that include extra-mental commodities and investing in those abstract commodities as independently existing causal agencies. It is folly for several reasons. First, it reifies an abstract model as the cause of what the model describes. Second, it generates intractable problems like the hard problem of consciousness (personal experience being **caused** by physical commodities that have no inherent capacity to cause any such thing - the hard problem of personal experience). Hazel's speculation (and others have speculated this) that the "material world" and "mind" are phenomena generated by an "unknowable," mysterious deeper substrate is a form of this cognitive error - reifying a model as an independently existing cause (and it further suffers from a sheer lack of predictive or explanatory capacity - basically, it's a cognitive dodge).

KF (and others capable of following a logical argument): Let's follow the logic of "external reality" further. What is one of the reasons (perhaps the most important one) that we theorize an external, consistent world in the first place? It is the apparent consensuality of experiences between observers. IOW, whatever one theorizes is the ultimate nature of a tree, different observers experience "the tree" in a very similar fashion - where it is, the colors of the leaves and bark, its basic structure, etc. The theory claims that the independent nature of the tree (independent from mind) is causing fairly universal mental states in all observers.

We'll skip the model reification issue here and go another route: think about what you've just proposed: an independent, non-mental commodity has caused a particular mental state/experience in all observers. However you slice it, you are promoting a materialist principle: that mental states can be caused by independently existing non-mental commodities. You've reduced us to being externally-caused entities and you've effectively given up free will. There is no escaping the self-annihilating logical consequences of the premise that mental states can be caused by external commodities.

William J Murray
June 10, 2019 at 04:40 AM PDT
F/N: For record, I have added two infographics that illustrate radio halos and Fraunhofer spectra, documenting natural detection of quantised phenomena through interactions. In the first, halo radius is a function of alpha particle energy as emitted. In the second, the absorption lines come about as atoms in the outer layers of a star (Sol, here) absorb specific energies to promote electrons to higher orbitals. These are then re-radiated in all directions and so lead to a drop in intensity at a frequency corresponding to the energy level. In addition, a subtler result is the overall blackbody pattern of intensity, reflecting the "penalising" of higher frequency lumps of light that Planck found as the answer to the UV catastrophe.

I just note in passing the pervasive significance of energy in all of this, which of course through the correspondence principle traces back to the explanation of work as forced ordered motion that comes from applying inertia and force to kinematics results. Energy appears, then, as the cumulative effect of, or source for, forced ordered motion, with energy conservation a natural result of bodies interacting in pairs through equally sized, oppositely directed forces (Newton's 3rd law). Forces, intuitively, are pushes or pulls, and the cumulative effect of force across time is impulse, or change of momentum. NL3 similarly leads to conservation of momentum; NL1 points out that absent a force, momentum is constant; NL2 identifies force as rate of change of momentum; the term "motion" in the three laws is an older term for what we now call momentum, P = m*v. The absence of a natural zero for momentum then points to relativity. KF

PS: I will continue the for-record as time permits. Today is the Queen's Official Birthday.

kairosfocus
June 10, 2019 at 03:43 AM PDT
Hazel said @232:
This is quackery. That’s enough for me.
Why do you think it's quackery? Dave said @241:
And if you are struck by a stray bullet, you may die (something we in the US are all too familiar with). Without ever being aware of the bullet’s existence, let alone observing it. It’s interesting how pure information can kill you.
There's an infinite amount of information we are consciously unaware of that affects us all the time - I never implied otherwise. I'm not sure why you think this is a meaningful point to make. Hazel said @243:
I think the rest of us would know that the person Dave mentions was dead, irrespective of the fact that that person’s consciousness had come to a rather abrupt end.
Let's call the person Dave is talking about "Mary." Are you saying that you know as a fact what everyone's personal experience is of the event, including Mary's? You know that her consciousness has come to an end?

(Just so everyone knows, I'm only interacting with Dave and Hazel for the benefit of onlookers, like Axel, StephenB, BA, KF, Mike, etc. I don't expect them to be able to understand any of this, or to give it anything remotely like a fair hearing.)

Mental experience (personal mental experience) is both ontologically and epistemologically primary - there's just no escaping that fact. The "external world" is an abstraction - a model created by and entirely held within mind that attempts to explain aspects of mental experience. Key aspects of experience that the "external world" explanatory model attempts to explain are: (1) the apparent consensuality of experience between people, and (2) phenomena in our experience appearing to act independently of our personal volition.

In attempting to model this behavior, we assign a second layer of abstractions on the first one (the existence of an external world): the layer of external-world commodity attributes. We even go further by adding a third layer of abstraction: that these attributes have behavioral tendencies caused by things we call "forces", "laws" and "energy". This third layer of abstraction comes not only at a great epistemological cost, being a third-layer abstraction, but at a great cognitive cost as well, as behaviors of experiential phenomena in the primary ontological domain are considered "explained" by the mischaracterization of a model as the cause of the behavior described by the model.

I'll go further than the self-evident truth of the mental ontological and epistemological primacy, because the word "primacy" implies that something might be "secondary." Both ontologically and epistemologically, mind is not just primary, it is exhaustive, because we cannot experience anything outside of mind, and we cannot find a way to know something, hypothesize, model or intuit without using it and absolutely nothing else. Our experience is entirely locked in mind, both ontologically (what we experience as reality) and in how we explain it via abstractions (epistemology). This is a self-evident truth once one understands what it means.

Now consider this: some here insist that an abstract model of something that can never be experienced (a world external to and independent of mental experience) is primary with regards to how mental experiences are generated. How delusional is it to insist that something that can never be directly verified, accessed or used ontologically or epistemologically is, in any sense, primary, or "the cause" of any of our mental experiences?

The reason this has become an almost intractable problem is that virtually everyone - physicalists and non-physicalists alike - has fallen for the reification of abstractions that attempt to model mental experiences, into things that have independent existence and causal agency. In this, the idea (an abstract model) of an external, independent world has been mistakenly reified as an actual, causal, independent thing, in the same way we have reified the model of a pattern of behavior as an independently existing force that causes the behavior.

You cannot exist outside of mind; you cannot experience outside of mind; you cannot gain any knowledge outside of mind; you cannot theorize outside of mind; you cannot perceive outside of mind. Reifying mental abstractions as extra-mental, independently existing causal agencies is - logically speaking - delusional, because mind is both ontologically and epistemologically exhaustive, and there is no way to avoid that fact short of delusion - and even delusion takes place entirely within mind.

William J Murray
June 10, 2019 at 03:23 AM PDT
F/N: Some further reading in this context: https://arxiv.org/pdf/1007.3977.pdf KF

PS: I added a clip from this to the OP. Notice how Gaasbeek, having run the analysis, observes: "the experimental outcome (encoded in the combined measurement outcomes) is bound to be the same even if we would measure the idler photon earlier, i.e. before the signal photon by shortening the optical path length of the downwards configuration." If a consequent, C1 vs C2, is independent of antecedents A_m vs A_n, we have removed correlation of effects. Here, the two options of interference or diffraction can occur, per the analysis, whether or not the idler photon is observed before/after the signal photon ends up at detector D_0. This is the sort of case contemplated in the SEP article.

kairosfocus
June 9, 2019 at 08:19 PM PDT
BA77, with all respect, 237 lays out highly material facts, including from yourself. KF

PS: I have attached to the OP a web clip from Wiki regarding Kim et al. The layout shows automated detection on alternative detector sets that respond according to whether photons definitively go through one identifiable slit or not. Yes, there is a distribution and both clearly happen, here in the same experiment under the same physical circumstances but not at the same time, the source being low enough that one original photon enters and is split into two half-frequency entangled ones at any given time. Coincidence-counter circuits detect events, automatically. The net result is as I now further clip:
The beam splitters and mirrors direct the idler photons towards detectors labeled D1, D2, D3 and D4. Note that: If an idler photon is recorded at detector D3, it can only have come from slit B. If an idler photon is recorded at detector D4, it can only have come from slit A. If an idler photon is detected at detector D1 or D2, it might have come from slit A or slit B. The optical path length measured from slit to D1, D2, D3, and D4 is 2.5 m longer than the optical path length from slit to D0. This means that any information that one can learn from an idler photon must be approximately 8 ns later than what one can learn from its entangled signal photon. Detection of the idler photon by D3 or D4 provides delayed "which-path information" indicating whether the signal photon with which it is entangled had gone through slit A or B. On the other hand, detection of the idler photon by D1 or D2 provides a delayed indication that such information is not available for its entangled signal photon. Insofar as which-path information had earlier potentially been available from the idler photon, it is said that the information has been subjected to a "delayed erasure". By using a coincidence counter, the experimenters were able to isolate the entangled signal from photo-noise, recording only events where both signal and idler photons were detected (after compensating for the 8 ns delay). Refer to Figs 3 and 4. When the experimenters looked at the signal photons whose entangled idlers were detected at D1 or D2, they detected interference patterns. However, when they looked at the signal photons whose entangled idlers were detected at D3 or D4, they detected simple diffraction patterns with no interference.
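A quick arithmetic check of that "approximately 8 ns" (a sketch; vacuum light speed assumed):

```python
# Extra optical path of 2.5 m divided by the speed of light.
c = 299_792_458.0            # m/s
extra_path = 2.5             # m
print(extra_path / c * 1e9)  # ~8.34 ns, matching the quoted figure
```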
In my original citation above, I highlighted the presence of automatic, "mechanical" detection and detectors. We can add here the presence of a stochastic process (later exercises in 2012 would manipulate across the spectrum from one extreme to the other), i.e. a reflection of deep-rooted probability in quantum events, not merely "oh, it's too complex to calculate" (as one may argue over flipping a coin or tumbling a die, or over the clash of divergent, uncorrelated deterministic streams, such as the value of pi vs decimal digits, which seems to make tables of digits of pi work as random number tables).

We also have the now usual weirdness that has led to a breakdown of consensus, including matters that raise questions of retrocausality. I note that in the case here, in some instances there is an interaction that seems to be one-slit, particle-like behaviour leading to outcome D3/4, AND, in a stochastic pattern, in other cases we have wavelike interference leading to outcome D1/2 for idler photons. Given known quantum patterns, it is likely that we cannot predict which photon will do which; there is a probability distribution. At the micro scale, entities are wavicles and behave in ways that puzzle us.

kairosfocus
June 9, 2019 at 06:38 PM PDT
If I recall correctly, I think this is the second time that ba has made a point of leaving one of kf's threads. Interesting way to cope.

hazel
June 9, 2019 at 04:26 PM PDT
I think the rest of us would know that the person Dave mentions was dead, irrespective of the fact that that person's consciousness had come to a rather abrupt end.

hazel
June 9, 2019 at 02:17 PM PDT
I wasn't going to comment on kf's thread anymore since it is going nowhere, but I will make an exception for you, Seversky. Seversky, perhaps you would care to tell me how you can possibly know that kicking a rock, or getting hit by a bus, hurts without you first being conscious of the pain, i.e. the qualia, of kicking a rock or getting hit by a bus? Consciousness is the prerequisite of all prerequisites for anything to be real for you in the first place. You don't have to take my word for it. Quantum pioneers Planck, Schrödinger, and Wigner all made the same point.
“No, I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness.” Max Planck (1858–1947), the main founder of quantum theory, The Observer, London, January 25, 1931 “Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental. It cannot be accounted for in terms of anything else.” Schroedinger, Erwin. 1984. “General Scientific and Popular Papers,” in Collected Papers, Vol. 4. Vienna: Austrian Academy of Sciences. Friedr. Vieweg & Sohn, Braunschweig/Wiesbaden. p. 334. “The principal argument against materialism is not that illustrated in the last two sections: that it is incompatible with quantum theory. The principal argument is that thought processes and consciousness are the primary concepts, that our knowledge of the external world is the content of our consciousness and that the consciousness, therefore, cannot be denied. On the contrary, logically, the external world could be denied—though it is not very practical to do so. In the words of Niels Bohr, “The word consciousness, applied to ourselves as well as to others, is indispensable when dealing with the human situation.” In view of all this, one may well wonder how materialism, the doctrine that “life could be explained by sophisticated combinations of physical and chemical laws,” could so long be accepted by the majority of scientists." – Eugene Wigner, Remarks on the Mind-Body Question, pp 167-177.
Thus there is certainly nothing inconsistent with the Theist's belief that consciousness, specifically the Mind of God, precedes material reality, and thus there is nothing inconsistent with the falsification of 'realism', i.e. the falsification of the belief that matter can exist apart from consciousness.
Reality doesn’t exist until we measure it, (Delayed Choice) quantum experiment confirms – Mind = blown. – FIONA MACDONALD – 1 JUN 2015 Excerpt: “It proves that measurement is everything. At the quantum level, reality does not exist if you are not looking at it,” lead researcher and physicist Andrew Truscott said in a press release. http://www.sciencealert.com/reality-doesn-t-exist-until-we-measure-it-quantum-experiment-confirms Quantum physics says goodbye to reality - Apr 20, 2007 Excerpt: They found that, just as in the realizations of Bell's thought experiment, Leggett's inequality is violated – thus stressing the quantum-mechanical assertion that reality does not exist when we're not observing it. "Our study shows that 'just' giving up the concept of locality would not be enough to obtain a more complete description of quantum mechanics," Aspelmeyer told Physics Web. "You would also have to give up certain intuitive features of realism." http://physicsworld.com/cws/article/news/27640
Whereas on the other hand, you, as an atheistic materialist, have no earthly clue how subjective conscious experience can possibly arise from matter, i.e. the 'Hard Problem of Consciousness'.
The Hardest Problem in Science? October 28, 2011 Excerpt: ‘But the hard problem of consciousness is so hard that I can’t even imagine what kind of empirical findings would satisfactorily solve it. In fact, I don’t even know what kind of discovery would get us to first base, not to mention a home run.’ - David Barash - Professor of Psychology emeritus at the University of Washington. https://www.chronicle.com/blogs/brainstorm/the-hardest-problem-in-science/40845 "Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious. So much for the philosophy of consciousness." - Jerry Fodor - Rutgers University philosopher [2] Fodor, J. A., Can there be a science of mind? Times Literary Supplement. July 3, 1992, pp5-7. “Every day we recall the past, perceive the present and imagine the future. How do our brains accomplish these feats? It’s safe to say that nobody really knows.” Sebastian Seung - Massachusetts Institute of Technology neuroscientist - “Connectome”: "Those centermost processes of the brain with which consciousness is presumably associated are simply not understood. They are so far beyond our comprehension at present that no one I know of has been able even to imagine their nature." Roger Wolcott Sperry - Nobel neurophysiologist As quoted in Genius Talk : Conversations with Nobel Scientists and Other Luminaries (1995) by Denis Brian "We have at present not even the vaguest idea how to connect the physio-chemical processes with the state of mind." - Eugene Wigner - Nobel prize-winner – Quantum Symmetries
Thus, since consciousness is the prerequisite of all prerequisites for anything to be real for you in the first place, you have greatly misunderstood what your very own parable about getting hit by a bus and/or kicking a rock tells you about reality. As to pain making something real to us in the first place, i.e. 'getting hit by a truck', I think the following may be interesting for you. The following man was run over by a semi-truck and had a very deep Near Death Experience. Here is what he had to say about the 'reality' of the experience of heaven compared to the 'reality' of the pain he endured after getting hit by a truck:
"More real than anything I've experienced since. When I came back of course I had 34 operations, and was in the hospital for 13 months. That was real but heaven is more real than that. The emotions and the feelings. The reality of being with people who had preceded me in death." - Don Piper - "90 Minutes in Heaven," 10 Years Later - video (2:54 minute mark) https://youtu.be/3LyZoNlKnMM?t=173
bornagain77
June 9, 2019 at 02:09 PM PDT
And if you are struck by a stray bullet, you may die (something we in the US are all too familiar with). Without ever being aware of the bullet's existence, let alone observing it. It's interesting how pure information can kill you.

daveS
June 9, 2019 at 01:33 PM PDT
Bornagain77 @ 235
And yet if it waddles like a duck, quacks like a duck, looks like a duck,,,
… and if you kick a stone it will still hurt your toe. If a pedestrian walks in front of a speeding truck, neither Bell's theorem nor Leggett's inequality nor Maxwell's demon will make the outcome any less unfortunate for the pedestrian. Just how real do you want "realism" to be?

Seversky
June 9, 2019 at 01:11 PM PDT
kf, It's hopeless. You are not even in the right ballpark. It is clear this is going to go nowhere. Adios.

bornagain77
June 9, 2019 at 01:00 PM PDT
I agree with the quote from the SEP that kf linked to, except I think the last phrase could be better worded as "but [there is] no consensus on what the nature of reality is that might plausibly underlie these observations."

hazel
June 9, 2019 at 12:07 PM PDT
BA77, 110:
“The question of whether detectors in double slit experiments physically cause the wave function to collapse was settled by experiments like the 1999 ‘Delayed Choice Quantum Eraser’ experiment. It was performed by a team of physicists led by Dr. Marlan O. Scully,,,. The experiment showed that the wave property of a photon could not possibly be collapsed into a particle by some physical effect of the detectors. That’s because there were no detectors between the slit and the screen so that the which path information was effected after the photons were already registered on the screen. Here is David Watkinson explaining the experiment.,,,” Delayed Choice Quantum Eraser – video
BA77, 127:
“If we attempt to attribute an objective meaning to the quantum state of a single system, curious paradoxes appear: quantum effects mimic not only instantaneous action-at-a-distance but also, as seen here, influence of future actions on past events, even after these events have been irrevocably recorded.” Asher Peres, Delayed choice for entanglement swapping. J. Mod. Opt. 47, 139-143 (2000).
Also note, KF, 117:
117 kairosfocus June 5, 2019 at 10:53 am F/N: Being too busy to instantly give a major focus, I put up as FFThot, a Wiki clip on the 1999 delayed choice expt:
https://en.wikipedia.org/wiki/Delayed-choice_quantum_eraser#Retrocausality Delayed-choice experiments raise questions about time and time sequences, and thereby bring our usual ideas of time and causal sequence into question.[note 1] If events at D1, D2, D3, D4 determine outcomes at D0, then effect seems to precede cause. If the idler light paths were greatly extended so that a year goes by before a photon shows up at D1, D2, D3, or D4, then when a photon shows up in one of these detectors, it would cause a signal photon to have shown up in a certain mode a year earlier. Alternatively, knowledge of the future fate of the idler photon would determine the activity of the signal photon in its own present. Neither of these ideas conforms to the usual human expectation of causality. However, knowledge of the future, which would be a hidden variable, was refuted in experiments.[21] Experiments that involve entanglement exhibit phenomena that may make some people doubt their ordinary ideas about causal sequence. In the delayed-choice quantum eraser, an interference pattern will form on D0 even if which-path data pertinent to photons that form it are only erased later in time than the signal photons that hit the primary detector. Not only that feature of the experiment is puzzling; D0 can, in principle at least, be on one side of the universe, and the other four detectors can be “on the other side of the universe” to each other.[22]:197f However, the interference pattern can only be seen retroactively once the idler photons have been detected and the experimenter has had information about them available, with the interference pattern being seen when the experimenter looks at particular subsets of signal photons that were matched with idlers that went to particular detectors.[22]:197 Moreover, the apparent retroactive action vanishes if the effects of observations on the state of the entangled signal and idler photons are considered in the historic order. Specifically, in the case when detection/deletion of which-way information happens before the detection on D0, the standard simplistic explanation says “The detector Di, at which the idler photon is detected, determines the probability distribution at D0 for the signal photon”. Similarly, in the case when D0 precedes detection of the idler photon, the following description is just as accurate: “The position at D0 of the detected signal photon determines the probabilities for the idler photon to hit either of D1, D2, D3 or D4”. These are just equivalent ways of formulating the correlations of entangled photons’ observables in an intuitive causal way, so one may choose any of those (in particular, that one where the cause precedes the consequence and no retrograde action appears in the explanation). The total pattern of signal photons at the primary detector never shows interference (see Fig. 5), so it is not possible to deduce what will happen to the idler photons by observing the signal photons alone. 
The delayed-choice quantum eraser does not communicate information in a retro-causal manner because it takes another signal, one which must arrive by a process that can go no faster than the speed of light, to sort the superimposed data in the signal photons into four streams that reflect the states of the idler photons at their four distinct detection screens.[note 2][note 3] In fact, a theorem proved by Philippe Eberhard shows that if the accepted equations of relativistic quantum field theory are correct, it should never be possible to experimentally violate causality using quantum effects.[23] (See reference[24] for a treatment emphasizing the role of conditional probabilities.) In addition to challenging our common-sense ideas of temporal sequence in cause and effect relationships, this experiment is among those that strongly attack our ideas about locality, the idea that things cannot interact unless they are in contact, if not by being in direct physical contact then at least by interaction through magnetic or other such field phenomena.[22]:199
Note, the observers involved are instruments.
Having noted the issue, I draw attention to the general observation in the SEP article:
Quantum theory provides a framework for modern theoretical physics that enjoys enormous predictive and explanatory success. Yet, in view of the so-called “measurement problem”, there is no consensus on how physical reality can possibly be such that this framework has this success. The theory is thus an extremely well-functioning algorithm to predict and explain the results of observations, but [there is] no consensus on which kind of objective reality might plausibly underlie these observations.
Just passing through, KF

kairosfocus
June 9, 2019 at 11:58 AM PDT
When KF, WJM and BA77 have finished arguing over whether or not matter exists if there is no consciousness there to observe it, might I suggest they discuss whether or not a tree falling in the forest makes any sound if a person isn't there to hear it. Or how many angels can dance on the head of a pin. Both would be equally important and as relevant to the ID community.

Brother Brian
June 9, 2019 at 07:47 AM PDT
kf, "retrocausality is implicit in delayed choice etc. " You have no clue what you are talking about. The delayed choice is one of at least two lines of evidence that have falsified 'realism', the other one being Leggett's inequality. Those lines of evidence falsifying 'realism' have nothing to do with the present empirical falsification that I laid out for your belief that detection of a quantum process in the past is a 'natural' process that is completely free of the Agent Causality of God. To repeat, I referenced the quantum zeno effect, quantum information theory, and experimental realization of the Maxwell demon thought experiment, respectfully. As to: "You are repeating a corrected claim about my worldview." And yet if it waddles like a duck, quacks like a duck, looks like a duck,,,bornagain77
June 9, 2019 at 07:21 AM PDT
BA77, retrocausality is implicit in delayed choice etc. For instance, the detector sits after the double slit -- and in principle "after" can be at astronomical, or at least appreciable, scale, where light travels ~ 1 foot/nanosecond, well within reach of current electronic instrumentation. Recall, too, I am busy with RW issues and lack time to go into point by point details . . . I was just dealing with more of the same RW. KF

PS: The radio halo and Fraunhofer lines are still on the table. These are natural detectors resolving quantum states.

PPS: You are repeating a corrected claim about my worldview.

kairosfocus
June 9, 2019 at 06:54 AM PDT
kf, for crying out loud, I did not even reference retrocausality to overturn your belief that radioactive decay was a 'natural' process. I referenced the quantum zeno effect, quantum information theory, and experimental realization of the Maxwell demon thought experiment, respectively. That is a straight up empirical falsification of your claim that it was 'natural'. It seems you are confused in how you are seeing this, in that you are trying to mix the Agent Causality of God with the far more limited agent causality of man, via retrocausality, and it seems the falsification of realism may have you a bit confused as well. As to retrocausality in particular, I am VERY confident that in the near future this particular line of evidence will not go nearly as well for your present belief in naturalism/Deism as you apparently want it to go for you:
Observer-dependent locality of quantum events Philippe Allard Guérin and Časlav Brukner - 25 October 2018 Excerpt: In general relativity, the causal structure between events is dynamical, but it is definite and observer-independent; events are point-like and the membership of an event A in the future or past light-cone of an event B is an observer-independent statement. When events are defined with respect to quantum systems however, nothing guarantees that the causal relationship between A and B is definite. We propose to associate a causal reference frame corresponding to each event, which can be interpreted as an observer-dependent time according to which an observer describes the evolution of quantum systems. In the causal reference frame of one event, this particular event is always localised, but other events can be 'smeared out' in the future and in the past. We do not impose a predefined causal order between the events, but only require that descriptions from different reference frames obey a global consistency condition. We show that our new formalism is equivalent to the pure process matrix formalism (Araújo et al 2017 Quantum 1 10). The latter is known to predict certain multipartite correlations, which are incompatible with the assumption of a causal ordering of the events—these correlations violate causal inequalities. We show how the causal reference frame description can be used to gain insight into the question of realisability of such strongly non-causal processes in laboratory experiments. https://iopscience.iop.org/article/10.1088/1367-2630/aae742/meta
bornagain77
June 9, 2019 at 06:13 AM PDT
WJM suggests I Google "mental reality thought techniques". Here's the first hit: "Thought Power - Your Thought - Your Reality". This is quackery. That's enough for me.

hazel
June 9, 2019 at 05:56 AM PDT
PS: Above you have projected to me deism or naturalism, which are patently inapplicable.

kairosfocus
June 9, 2019 at 04:51 AM PDT
BA77, You have said a great many things above, so I can only respond selectively, hopefully focally. One theme you picked up is reification, in effect the idea that secondary, created causes such as gravitation have been treated as though they are autonomous and even able to create a world. That's fair enough as an observation, though it is not relevant to what I have pointed out, starting with the Casimir effect. Which is where the exchange began.

But also, we need to recognise that we live in an evident, going-concern world with natural regularities and stochastic patterns. A major, institutionally dominant school of thought -- descriptively, evolutionary materialistic scientism -- holds that somehow that physical world and its antecedents exhaust reality and suffice to explain all phenomena, including ourselves as bio-cybernetic entities. From the OP, I have pointed out that, inherently, this is not so. For computational substrates (which are mechanically and/or stochastically driven and controlled, even with programming) are not capable of rational, responsible, morally governed freedom, where duties to truth, right reason, prudence (so, warrant), fairness and justice etc. guide and guard rational inference and creativity. Mind, to be mind, must be inherently free and morally governed, which directly implies that it transcends the reach of composite, assembled computational substrates working on physical cause-effect bonds rather than reasoned inference. Reppert has played a key role in my argument:
. . . let us suppose that brain state A [--> notice, state of a wetware, electrochemically operated computational substrate], which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief [--> conscious, perceptual state or disposition] that Socrates is mortal. It isn't enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain's being in a particular type of state that is relevant to physical causal transactions.
In that context, I pointed to quantum influences that allow a supervisory oracle -- which also observes -- to supervene effectively upon a bio-cybernetic entity [see the Smith model] and act as self-moved, initiating, limited-sense "first" or "agent" cause. That is, I here speak of mind, and even in the phil sense, soul. (The theological sense is related but is predicated on discussions not germane to this thread; we are discussing the in-common domain of reasoned thought, or at least what remnants there are, in an increasingly polarised and patently suicidally irrational civilisation.)

Across the course of the thread, in response to challenges, I suggested that we might be well advised to use the concept of dimensionality to add a -- not THE -- fifth dimension: (x,y,z,t,f), as this allows us to see how an entity may interface in our space-time world at any given locus, in praxis for us the human body, especially the head and the chest, though consciousness clearly pervades the body and just possibly extends beyond it, sometimes under our control. In that context, I suggested that we consider whether the zygote has in it, in effect, a quantum bridge to the f-domain, in context of various issues raised. It is here that I pointed to the Casimir effect as illustrating how the virtual quantum domain lurking in Einstein's Energy-Time uncertainty framework shows a case where quantum influences already have observable effects that lead to a small (usually attractive but sometimes repulsive) force between close enough plates in vacuo. In turn, this points to the domain of quantum field theory. I am not offering a proof; I am pointing to something suggestive.

Now, at this juncture, you made several interventions. At first, I had not had time to focus and speak to points one by one, but a focal one was that it is mind that resolves quantum states. That is why I pointed to two cases, radio-halos and Fraunhofer lines. Both are naturally occurring and are able to resolve the states through interactions. That is, we see natural detectors. I also pointed to how designed experiments are often automated, using devices and structures that resolve such states. These seem to be part of the going-concern world.

One of the issues you raised is retrocausality. On this, I first clipped a brand new SEP discussion, which points out that while some argue that way, it is by no means a consensus. Let me clip again, as it rebalances:
https://plato.stanford.edu/entries/qm-retrocausality/#ObjeAgaiRetrQuanMech Retrocausality in Quantum Mechanics First published Mon Jun 3, 2019 Quantum theory provides a framework for modern theoretical physics that enjoys enormous predictive and explanatory success. Yet, in view of the so-called “measurement problem”, there is no consensus on how physical reality can possibly be such that this framework has this success. The theory is thus an extremely well-functioning algorithm to predict and explain the results of observations, but [there is] no consensus on which kind of objective reality might plausibly underlie these observations. Amongst the many attempts to provide an “interpretation” of quantum theory to account for this predictive and explanatory success, one class of interpretations hypothesizes backward-in-time causal influences—retrocausality—as the basis for constructing a convincing foundational account of quantum theory. This entry presents an overview of retrocausal approaches to the interpretation of quantum theory, the main motivations for adopting this approach, a selection of concrete suggested retrocausal models, and a review of the objections brought forward against such approaches . . . . 2.1 Causality There is a tradition that stretches back at least as far as Russell (1913) that denies that there is any place for causal notions in the fundamental sciences, including physics: the notion serves no purpose, and simply does not appear, in the fundamental sciences. The argument goes that, since at least the nineteenth century, the laws that govern physical behavior in fundamental sciences such as physics are almost always differential equations. Such equations are notable for specifying, given some initial conditions, exact properties of systems for all time. And thus if everything is specified for all time, there is no place left for causality. Thus Russell advocates that “causality” should be eliminated from the philosophers lexicon, because it is certainly not a part of the scientific lexicon. [–> I suggest, thermodynamics brings back cause, grounding a temporal-causal view of physical reality tied to entropy and thermodynamic equilibrium.] In contrast to Russell’s position, Cartwright (1979: 420) claims that we do have a need and use for a causal vocabulary in science: “causal laws cannot be done away with, for they are needed to ground the distinction between effective strategies and ineffective ones”. One of the main contemporary accounts of causation, the interventionist account of causation (Woodward 2003; see also the entry on causation and manipulability), is an embodiment of Cartwright’s dictum. In a nutshell, the interventionist account claims that A is a cause of B if and only if manipulating A is an effective means of (indirectly) manipulating B. [–> try, neighbouring worlds W and W’ with state of A the material difference and state of B as an observable result, e.g. oxidiser, heat, fuel, combustion chain reaction and fire] Causality in the present entry, unless specified otherwise, should be understood along broadly interventionist lines. According to accounts of quantum theory that hypothesize retrocausality, manipulating the setting of a measurement apparatus can be an effective means of manipulating aspects of the past . . . . 2.2 Locality According to Bell’s theorem (Bell 1964; Clauser et al. 1969; see also the entry on Bell’s theorem) and its descendants (e.g., Greenberger, Horne, & Zeilinger 1989; see also Goldstein et al. 2011; Brunner et al. 
2014 for an overview), any theory that reproduces all the correlations of measurement outcomes predicted by quantum theory must violate a principle that Bell calls local causality (Bell 1976, 1990; see also Norsen 2011; Wiseman & Cavalcanti 2017). In a locally causal theory, probabilities of spatiotemporally localized events occurring in some region 1 are independent of what occurs in a region 2 that is spacelike separated from region 1, given a complete specification of what occurs in a spacetime region 3 in region 1’s backward light cone that completely shields off region 1 from the backward light cone of region 2. (See, for instance, Figs. 4 and 6 in Bell 1990 or Fig. 2 in Goldstein et al. 2011.) In a relativistic setting, then, the notion of locality involves prohibiting conditional dependences between spacelike separated events, provided that the region upon which these spacelike separated events are conditioned constitutes their common causal (Minkowski) past. This characterization of locality implicitly assumes causal asymmetry. Thus locality is the idea that there are no causal relations between spacelike separated events. There is another sense of “local” that is sometimes used that will be worth avoiding for the purposes of clarity. This is the idea that causal influences are constrained along timelike trajectories. Thus, given Costa de Beauregard’s suggestion of “zigzag” causal influences, it is perfectly possible for a retrocausal model of quantum phenomena to be nonlocal in the sense that causal relations exist between spacelike separated events, but “local” in the sense that these causal influences are mediated by timelike trajectories. To avoid ambiguity, it will be useful to refer to this latter sense as “action-by-contact” (set apart from action-at-a-distance) . . . . 7.1 General Arguments Against Retrocausality There is a tradition in philosophy for regarding the very idea of retrocausality as incoherent. The most prominent worry, forcefully made by Black (1956), is the so-called “bilking argument” (see the entry on time travel). Imagine a pair of events, a cause, C, and an effect, E, which we believe to be retrocausally connected (E occurs earlier in time than C). It seems possible to devise an experiment which could confirm whether our belief in the retrocausal connection is correct or not. Namely, once we had observed that E had occurred, we could then set about ensuring that C does not occur, thereby breaking any retrocausal connection that could have existed between them. If we were successful in doing this, then the effect would have been “bilked” of its cause. The bilking argument drives one towards the claim that any belief an agent might hold in the positive retrocausal correlation between event C and event E is simply false. However, Dummett (1964) disputes that giving up this belief is the only solution to the bilking argument. Rather, according to Dummett, what the bilking argument actually shows is that a set of three conditions concerning the two events, and the agent’s relationship to them, is incoherent: i There exists a positive correlation between an event C and an event E. ii Event C is within the power of an agent to perform. iii The agent has epistemic access to the occurrence of event E independently of any intention to bring it about. It is interesting to note that these conditions do not specify in which order events C and E occur. 
On simple reflection, there is a perfectly natural reason why it is not possible to bilk future effects of their causes, since condition (iii) fails to hold for future events: we simply have no access to which future events occur independently of the role we play as causal agents to bring the events about. When we lack that epistemic access to past events, the same route out of the bilking argument becomes available. Dummett’s defense against the bilking argument is especially relevant to quantum mechanics. In fact, once a suitable specification is made of how condition (iii) can be violated, we find that there exists a strong parallel between the conditions which need to hold to justify a belief in bringing about the past and the structure of quantum mechanics. Price (1996: 174) points out that bilking is impossible in the following circumstances: rather than suppose that a violation of condition (iii) entails that the relevant agent has no epistemic access to the relevant past events independently of any intention to bring them about, suppose that the means by which knowledge of these past events is gathered breaks the claimed correlation between the agent’s action and those past events. Such a condition can be stated as follows: iv The agent can gain epistemic access to the occurrence of event E independently of any intention to bring it about and without altering event E from what it would have been had no epistemic access been gained. The significance of this weakened violation of condition (iii) is that it is just the sort of condition one would expect to hold if the system in question were a quantum system. The very nature of quantum mechanics ensures that any claimed positive correlation between the future measurement settings and the hidden variables characterizing a quantum system cannot possibly be bilked of their causes because condition (iv) is perennially violated. Moreover, so long as we subscribe to the epistemic interpretation of the wavefunction, we lack epistemic access to the “hidden” variables of the system and we lack this access in principle as a result of the structure of quantum theory. Another prominent challenge against the very idea of retrocausality is that it inevitably would give rise to vicious causal loops (Mellor 1998). (See Faye 1994 for a response and the entry on backward causation for a more detailed review of the objections raised against the idea of retrocausality.) . . . . 7.3 Contextuality for Exotic Causal Structures Recall (§3.2) that Spekkens’ (2005) claim that no noncontextual ontological model can reproduce the observed statistics of quantum theory based on his principle of parsimony (that there can be no ontological difference without operational difference) was sidestepped by retrocausal approaches due to the explicit assumption of the ontological models framework that the ontic state is independent of the measurement procedure (i.e., that there is no retrocausality). It was noted there the possibility that Spekkens’ principle of parsimony might be recast to apply more generally to retrocausal models. Shrapnel and Costa (2018) achieve just this in a no-go theorem that applies to any exotic causal structure used to sidestep the ontological models framework, including retrocausal accounts, rendering such models contextual after all. 
Shrapnel and Costa’s result is based on a generalization of the ontological models framework which replaces the operational preparation, transformation, and measurement procedures with the temporally and causally neutral notions of local controllables and environmental processes that mediate correlations between different local systems and generate the joint statistics for a set of events. “These include any global properties, initial states, connecting mechanisms, causal influence, or global dynamics” (2018: 5). Furthermore, they replace the ontic state λ with the ontic “process” ω:

our ontic process captures the physical properties of the world that remain invariant under our local operations. That is, although we allow local properties to change under specific operations, we wish our ontic process to capture those aspects of reality that are independent of this probing. (2018: 8)

As a result, the notion of λ-mediation (encountered in §4.1) is replaced by the notion of ω-mediation, in which the ontic process ω completely specifies the properties of the environment that mediate correlations between regions, and screens off outcomes produced by local controllables from the rest of the environment. Shrapnel and Costa (2018: 9) define the notion of “instrument noncontextuality” as a law of parsimony (along the lines of Spekkens’ own definition of noncontextuality): “Operationally indistinguishable pairs of outcomes and local controllables should remain indistinguishable at the ontological level”. They then show that no instrument noncontextual model can reproduce the quantum statistical predictions.

Crucially, what is contextual is not just the traditional notion of “state”, but any supposedly objective feature of the theory, such as a dynamical law or boundary condition. (2018: 2)

Since preparations, transformations, and measurements have been replaced by local controllables, there is no extra assumption in Shrapnel and Costa’s framework that ω is correlated with some controllables but independent of others. Thus the usual route out of the ontological models framework, and so out of the no-go theorems of §3, open to retrocausal approaches—that the framework assumes no retrocausality—is closed off in the Shrapnel-Costa theorem, rendering retrocausal approaches contextual along with the rest of the models captured by the ontological models framework.

This presents a significant worry for retrocausal approaches to quantum theory. If the main motivation for pursuing the hypothesis of retrocausality is to recapture in some sense a classical ontology for quantum theory (see §3.4), then the Shrapnel-Costa theorem has made this task either impossible, or beholden to the possibility of some further story explaining how the contextual features of the model arise from some noncontextual footing. On this latter point, it is difficult to see how this story might be told without significantly reducing the ideological economy of the conceptual framework of retrocausality, again jeopardizing a potential virtue of retrocausality. As mentioned above (§7.2), contextuality can be construed as a form of fine tuning (Cavalcanti 2018), especially when the demand for noncontextuality is understood as a requirement of parsimony, as above. The worries raised in this section and the last underline the fact that the challenge to account for various types of fine tuning is the most serious principled obstacle that retrocausal accounts continue to face.
In short: controversial, unsettled, abstruse. Indeed, there is an attached issue of co-adaptation, or fine tuning, which also becomes problematic:
7.2 Retrocausality Requires Fine Tuning

Causal modeling (Spirtes, Glymour, & Scheines 2000; Pearl 2009) is a practice that has arisen from the field of machine learning and consists in the development of algorithms that can automate the discovery of causes from correlations in large data sets. The causal discovery algorithms permit an inference from given statistical dependences and independences between distinct measurable elements of some system to a causal model for that system. As part of the algorithms, a series of constraints must be placed on the resulting models that capture general features we take to be characteristic of causality. Two of the more significant assumptions are (i) the causal Markov condition, which ensures that every statistical dependence in the data results in a causal dependence in the model—essentially a formalization of Reichenbach’s common cause principle—and (ii) faithfulness, which ensures that every statistical independence implies a causal independence, i.e., that no statistical independence is the result of a fine tuning of the model.

It has long been recognized (Butterfield 1992; Hausman 1999; Hausman & Woodward 1999) that quantum correlations force one to give up at least one of the assumptions usually made in the causal modeling framework. Wood and Spekkens (2015) argue that any causal model purporting to causally explain the observed quantum correlations must be fine-tuned (i.e., must violate the faithfulness assumption). More precisely, according to them, since the observed statistical independences in an entangled bipartite quantum system imply no signaling between the parties, when it is then assumed that every statistical independence implies a causal independence (which is what faithfulness dictates), it must be inferred that there can be no (direct or mediated) causal link between the parties. Since there is an observed statistical dependence between the outcomes of measurements on the bipartite system, we can no longer account for this dependence with a causal link unless this link is fine-tuned to ensure that the no-signaling independences still hold. There is thus a fundamental tension between the observed quantum correlations and the no-signaling requirement on the one hand, and the faithfulness assumption and the possibility of a causal explanation on the other.

Formally, Wood and Spekkens argue that the following three assumptions form an inconsistent set:

(i) the predictions of quantum theory concerning the observed statistical dependences and independences are correct;
(ii) the observed statistical dependences and independences can be given a causal explanation;
(iii) the faithfulness assumption holds.

Wood and Spekkens conclude that, since the faithfulness assumption is an indispensable element of causal discovery, the second assumption must yield. The contrapositive of this is that any purported causal explanation of the observed correlations in an entangled bipartite quantum system falls afoul of the tension between the no-signaling constraint and no fine tuning and thus must violate the assumption of faithfulness. Such causal explanations, so the argument goes, including retrocausal explanations, should therefore be ruled out as viable explanations.

As a brief aside, this fine-tuning worry for retrocausality in the quantum context arises in a more straightforward way. There is no good evidence to suggest that signaling towards the past is possible; that is, there is no retrocausality at the operational level.
(Pegg 2006, 2008 argues that this can be explained formally as a result of the completeness condition on the measurement operators, introducing an asymmetry in normalization conditions for preparation and measurement.) Yet, despite there being no signaling towards the past, retrocausal accounts assume causal influences towards the past. That these causal influences do not show up as statistical dependences exploitable for signaling purposes raises exactly the same fine-tuning worry that Wood and Spekkens raise.

An obvious response to the challenge set by Wood and Spekkens is to simply reject the assumption of faithfulness. But this should not be taken lightly; the intuition behind the faithfulness assumption is basic and compelling. When no statistical correlation exists between the occurrences of a pair of events, there is no reason for supposing there to be a causal connection between them. Conversely, if we were to allow the possibility of a causal connection between statistically uncorrelated events, we would have a particularly hard task determining which of these uncorrelated sets could be harboring a conspiratorial causal connection that hides the correlation. The faithfulness assumption is thus a principle of parsimony—the simplest explanation for a pair of statistically uncorrelated events is that they are causally independent—in much the same way that Spekkens’ (2005) definition of contextuality is, too (see §3.2); indeed, Cavalcanti (2018) argues that contextuality can be construed as a form of fine tuning.

There are, however, well-known examples of systems that potentially show a misapplication of the faithfulness assumption. One such example, originating in Hesslow (1976), involves a contraceptive pill that can cause thrombosis while simultaneously lowering the chance of pregnancy, which can also cause thrombosis. As Cartwright (2001: 246) points out, given the right weights for these processes, it is conceivable that the net effect of the pills on the frequency of thrombosis be zero. This is a case of “cancelling paths”, where the effects of two or more causal routes between a pair of variables cancel to achieve statistical independence. In a case such as this, since we can have independent knowledge of the separate causal mechanisms involved, there are grounds for arguing that there really is a causal connection between the variables despite their statistical independence. Thus, it is certainly possible to imagine a scenario in which the faithfulness assumption could lead us astray.

However, in defense of the general principle, an example such as this clearly contains what Wood and Spekkens refer to as fine tuning; the specific weights for these processes would need to match precisely to erase the statistical dependence, and such a balance would generally be thought to be unstable (any change in background conditions, etc., would reveal the causal connection in the form of a statistical dependence). Näger (2016) raises the possibility that unfaithfulness can occur without conspiratorial fine tuning if the unfaithfulness arises in a stable way. In the quantum context, Näger suggests that the fine-tuning mechanism is what he calls “internal cancelling paths”. This mechanism is analogous to the usual cancelling-paths scenario, but the path-cancelling mechanism does not manifest at the level of variables, but at the level of values.
On this view, such fine tuning would occur as a result of the particular causal and/or nomological process that governs the system, and it is in this sense that the cancelling-paths mechanism is internal; and it is the fact that the mechanism is internal that renders the associated fine tuning stable to external disturbances.

Thus if the laws of nature are such that disturbances always alter the different paths in a balanced way, then it is physically impossible to unbalance the paths. (Näger 2016: 26)

The possibility raised by Näger would circumvent the problem that violations of faithfulness ultimately undermine our ability to make suitable inferences of causal independence based on statistical independence, by allowing only a specific kind of unfaithfulness—a principled or law-based unfaithfulness that is “internal” and is thus stable to background conditions—which is much less conspiratorial, as the fine tuning is a function of the specific process involved. Evans (2018) argues that a basic retrocausal model of the sort envisaged by Costa de Beauregard (see §1) employs just such an internal cancelling-paths explanation to account for the unfaithful (no signaling) causal channels. See also Almada et al. (2016) for an argument that fine tuning in the quantum context is robust and arises as a result of symmetry considerations.
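The cancelling-paths structure in the Hesslow example above lends itself to a quick numerical check. The following toy Monte Carlo sketch is in Python; every probability in it is made up, chosen only so that the direct pill-to-thrombosis path and the indirect pill-to-no-pregnancy path balance exactly, yielding statistical independence despite two live causal routes (i.e., a violation of faithfulness):

import random

random.seed(0)

# Made-up illustrative weights, tuned so the two causal paths cancel:
P_PREGNANT_NO_PILL = 0.20   # chance of pregnancy without the pill
P_PREGNANT_PILL = 0.00      # pill assumed fully effective
P_BASE = 0.05               # baseline thrombosis risk
DIRECT_PILL_BOOST = 0.03    # direct added risk from the pill
PREGNANCY_BOOST = 0.15      # added risk from pregnancy

def thrombosis(pill: bool) -> bool:
    """Simulate one patient; return True if thrombosis occurs."""
    pregnant = random.random() < (P_PREGNANT_PILL if pill else P_PREGNANT_NO_PILL)
    risk = P_BASE \
         + (DIRECT_PILL_BOOST if pill else 0.0) \
         + (PREGNANCY_BOOST if pregnant else 0.0)
    return random.random() < risk

N = 200_000
rate_pill = sum(thrombosis(True) for _ in range(N)) / N
rate_no_pill = sum(thrombosis(False) for _ in range(N)) / N

# Analytically: 0.05 + 0.03 = 0.08 with the pill, and
# 0.8 * 0.05 + 0.2 * (0.05 + 0.15) = 0.08 without it.
print(f"P(thrombosis | pill)    ~ {rate_pill:.3f}")
print(f"P(thrombosis | no pill) ~ {rate_no_pill:.3f}")

Nudge any one of these weights and the statistical dependence reappears, which is exactly the instability behind the Wood-Spekkens fine-tuning charge.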
Again: abstruse, controversial, unsettled. Not the sort of soil where we should be rooting credible conclusions. I think I need to pause for now. KF
kairosfocus
June 9, 2019, 04:49 AM PDT
As to how thermodynamics itself relates to this immense amount of positional information that is somehow coming into the developing embryo from the outside by some non-material method, work done on bacteria can give us a small glimpse into just how far out of thermodynamic equilibrium multicellular organisms actually are. The information content of a simple one-cell bacterium, when working from the thermodynamic perspective, is found to be around 10^12 bits,,,
Biophysics – Information theory. Relation between information and entropy – Setlow-Pollard, Ed. Addison Wesley Excerpt: Linschitz gave the figure 9.3 x 10^12 cal/deg or 9.3 x 10^12 x 4.2 joules/deg for the entropy of a bacterial cell. Using the relation H = S/(k ln 2), we find that the information content is 4 x 10^12 bits. Morowitz’ deduction from the work of Bayne-Jones and Rhees gives the lower value of 5.6 x 10^11 bits, which is still in the neighborhood of 10^12 bits. Thus two quite different approaches give rather concordant figures. http://www.astroscu.unam.mx/~angel/tsb/molecular.htm
,,, Which is the equivalent of 100 million pages of Encyclopedia Britannica: “In comparison,,, the largest libraries in the world,, have about 10 million volumes or 10^12 bits.”
“a one-celled bacterium, e. coli, is estimated to contain the equivalent of 100 million pages of Encyclopedia Britannica. Expressed in information in science jargon, this would be the same as 10^12 bits of information. In comparison, the total writings from classical Greek Civilization is only 10^9 bits, and the largest libraries in the world – The British Museum, Oxford Bodleian Library, New York Public Library, Harvard Widener Library, and the Moscow Lenin Library – have about 10 million volumes or 10^12 bits.” – R. C. Wysong, The Creation-Evolution Controversy

“The information content of a simple cell has been estimated as around 10^12 bits, comparable to about a hundred million pages of the Encyclopedia Britannica.” – Carl Sagan, “Life” in Encyclopedia Britannica: Macropaedia (1974 ed.), pp. 893-894
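The conversion used in the Setlow-Pollard excerpt can be written out explicitly. A sketch only, taking the quoted entropy figure as given (k is Boltzmann's constant, so k ln 2 is the entropy equivalent of one bit, as in Landauer's analysis):

% Entropy-to-information conversion used in the excerpt above:
\[
  H \;=\; \frac{S}{k \ln 2} \quad \text{bits},
\]
% i.e. one bit of information corresponds to k ln 2 of thermodynamic entropy.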
Thus, since bacterial cells are about 10 times smaller than most plant and animal cells,
Size Comparisons of Bacteria, Amoeba, Animal & Plant Cells Excerpt: Bacterial cells are very small - about 10 times smaller than most plant and animal cells. https://education.seattlepi.com/size-comparisons-bacteria-amoeba-animal-plant-cells-4966.html
And since there are conservatively estimated to be around 30 trillion cells in the average human body,
Revised Estimates for the Number of Human and Bacteria Cells in the Body - 2016 Abstract: Reported values in the literature on the number of cells in the body differ by orders of magnitude and are very seldom supported by any measurements or calculations. Here, we integrate the most up-to-date information on the number of human and bacterial cells in the body. We estimate the total number of bacteria in the 70 kg "reference man" to be 3.8·10^13. For human cells, we identify the dominant role of the hematopoietic lineage to the total count (≈90%) and revise past estimates to 3.0·10^13 human cells. Our analysis also updates the widely-cited 10:1 ratio, showing that the number of bacteria in the body is actually of the same order as the number of human cells, and their total mass is about 0.2 kg. https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002533
Then that gives us a rough ballpark estimate of around 300 trillion (i.e., 30 trillion cells, each counted as roughly 10 bacterial cells’ worth) times 100 million pages of Encyclopedia Britannica. Or about 300 trillion times the information content contained within all the books of the largest libraries in the world. Needless to say, that is a massive amount of positional information that is somehow coming into a developing embryo from the outside by some non-material method. On top of all that, as was highlighted earlier, as far as quantum information theory is concerned, this positional information is found to be a “property of an observer who describes a system.”
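For concreteness, here is the multiplication behind the “300 trillion” figure, a minimal sketch that simply combines the numbers quoted in this comment:

# Figures quoted in the comment above (order-of-magnitude only):
BITS_PER_BACTERIUM = 1e12   # Setlow-Pollard / Sagan estimate
SIZE_FACTOR = 10            # plant/animal cells taken as ~10x a bacterium
CELLS_IN_BODY = 3.0e13      # Sender et al. (2016) human cell count

bacterial_equivalents = CELLS_IN_BODY * SIZE_FACTOR   # ~3e14, "300 trillion"
total_bits = bacterial_equivalents * BITS_PER_BACTERIUM

print(f"bacterial-cell equivalents ~ {bacterial_equivalents:.0e}")   # 3e+14
print(f"total information estimate ~ {total_bits:.0e} bits")         # 3e+26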
The Quantum Thermodynamics Revolution – May 2017 Excerpt: “Now in (quantum) information theory, we wouldn’t say entropy is a property of a system, but a property of an observer who describes a system.”,,, https://www.quantamagazine.org/quantum-thermodynamics-revolution/
In other words, some ‘outside observer’ who, due to quantum non-locality, must necessarily be outside the space-time of the universe, is now required in order to give us an adequate causal account of how it is even possible for this immense amount of positional information to somehow be coming into the developing embryo ‘from the outside’, by some ‘non-material’ method. Christian Theism just so happens to give us an adequate causal account of exactly Who this outside observer might be Who is imparting this immense amount of positional information into developing embryos. As Hebrews chapter 4 verse 13 states, “And no creature is hidden from his sight, but all are naked and exposed to the eyes of him to whom we must give account.”
Hebrews 4:13 And no creature is hidden from his sight, but all are naked and exposed to the eyes of him to whom we must give account.
And as Psalm 139:13-14 states, "For You formed my inward parts;" and,, "I am fearfully and wonderfully made,,"
Psalm 139:13-14 For You formed my inward parts; You covered me in my mother’s womb. I will praise You, for I am fearfully and wonderfully made; Marvelous are Your works, And that my soul knows very well.
bornagain77
June 9, 2019, 03:55 AM PDT
kf restates his (apparently) foundational belief in naturalism and/or Deism,
I simply noted a fact, the observable halos. Particles of RA materials in rocks undergo chain decay to Pb, and in so doing emit particles at various energies that then have diverse penetration differences leading to discoloured rings. This speaks to different discrete energy levels in the nucleus and thus to quantisation.,,, There is no warrant to infer particular design of the stones, they do not constitute a set up experiment.,,, "These and other cases point to how in some cases quantum phenomena manifest themselves naturally and there are natural detectors we may observe."
Again, there is NOTHING natural about any material object, whether a rock or a man-made detector, detecting a quantum process. Period! Apparently, despite every photon and material particle in the universe being subjected to non-local (i.e., beyond space and time) collapse of its wave function, you still want to cling to your naturalistic belief that radioactive decay is somehow a completely 'natural' process that God has no control over. You are wrong in your presupposition on both theological and scientific levels. Theologically, Romans 8 clearly states that "creation was subjected to frustration, not by its own choice, but by the will of the one who subjected it"
Romans 8:20-21 For the creation was subjected to frustration, not by its own choice, but by the will of the one who subjected it, in hope that the creation itself will be liberated from its bondage to decay and brought into the freedom and glory of the children of God.
Scientifically, this Theistic position of God subjecting creation to decay by His will is borne out by first noting the Quantum Zeno effect and then bringing in Quantum Information theory. An old entry in wikipedia described the Quantum Zeno effect as such: “an unstable particle, if observed continuously, will never decay.”
Perspectives on the quantum Zeno paradox – 2018 The quantum Zeno effect is,, an unstable particle, if observed continuously, will never decay. https://iopscience.iop.org/article/10.1088/1742-6596/196/1/012018/pdf
Likewise, the present day entry on wikipedia about the Quantum Zeno effect also provocatively states that “a system can’t change while you are watching it”
Quantum Zeno effect Excerpt: Sometimes this effect is interpreted as “a system can’t change while you are watching it” https://en.wikipedia.org/wiki/Quantum_Zeno_effect
Atheistic materialists have tried to get around the Quantum Zeno effect by postulating that interactions with the environment are sufficient to explain the Quantum Zeno effect.
Perspectives on the quantum Zeno paradox – 2018 Excerpt: The references to observations and to wavefunction collapse tend to raise unnecessary questions related to the interpretation of quantum mechanics. Actually, all that is required is that some interaction with an external system disturb the unitary evolution of the quantum system in a way that is effectively like a projection operator. https://iopscience.iop.org/article/10.1088/1742-6596/196/1/012018/pdf
Yet the following interaction-free measurement demonstrated that the presence of an object can be detected, via the Quantum Zeno effect, even without interacting with a single atom.
Interaction-free measurements by quantum Zeno stabilization of ultracold atoms – 14 April 2015 Excerpt: In our experiments, we employ an ultracold gas in an unstable spin configuration, which can undergo a rapid decay. The object—realized by a laser beam—prevents this decay because of the indirect quantum Zeno effect and thus, its presence can be detected without interacting with a single atom. http://www.nature.com/ncomms/2015/150414/ncomms7811/full/ncomms7811.html?WT.ec_id=NCOMMS-20150415
In short, the Quantum Zeno effect, regardless of how atheistic materialists (or Deists such as kf) may feel about it, is experimentally shown to be a real effect that is not reducible to any materialistic explanation. And thus the original wikipedia statement, “an unstable particle, if observed continuously, will never decay”, stands as a true statement.
Perspectives on the quantum Zeno paradox – 2018 The quantum Zeno effect is,, an unstable particle, if observed continuously, will never decay. - per wiki 2018
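As a side note, the standard textbook sketch of why continuous observation inhibits decay runs as follows (the notation here is assumed, not drawn from the quoted sources): for short times the survival probability of an unstable state falls off quadratically rather than exponentially, so dividing a total time T into N measurements and letting N grow drives the survival probability to one.

% Short-time (Zeno) regime: quadratic, not exponential, decay
\[
  P_{\mathrm{survive}}(t)
  = \bigl|\langle \psi \mid e^{-iHt/\hbar} \mid \psi \rangle\bigr|^{2}
  \approx 1 - \left(\frac{t}{\tau_Z}\right)^{2},
  \qquad t \ll \tau_Z .
\]
% With N projective measurements spread over a total time T:
\[
  P_N(T) \approx \left[\, 1 - \left(\frac{T}{N\,\tau_Z}\right)^{2} \right]^{N}
  \longrightarrow 1
  \quad \text{as } N \to \infty .
\]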
Penrose's 1 in 10^10^123 precision for the initial entropy of the universe plays into this in an interesting way, but we will skip that discussion and fast forward to quantum information theory so as to address kf's foundational belief in naturalism.

Moreover, on top of the Quantum Zeno effect, in quantum information theory we find that “in (quantum) information theory, we wouldn’t say entropy is a property of a system, but a property of an observer who describes a system.” As the following 2017 article states: James Clerk Maxwell (said), “The idea of dissipation of energy depends on the extent of our knowledge.”,,, quantum information theory,,, describes the spread of information through quantum systems.,,, Fifteen years ago, “we thought of entropy as a property of a thermodynamic system,” he said. “Now in (quantum) information theory, we wouldn’t say entropy is a property of a system, but a property of an observer who describes a system.”,,,
The Quantum Thermodynamics Revolution – May 2017 Excerpt: the 19th-century physicist James Clerk Maxwell put it, “The idea of dissipation of energy depends on the extent of our knowledge.” In recent years, a revolutionary understanding of thermodynamics has emerged that explains this subjectivity using quantum information theory — “a toddler among physical theories,” as del Rio and co-authors put it, that describes the spread of information through quantum systems. Just as thermodynamics initially grew out of trying to improve steam engines, today’s thermodynamicists are mulling over the workings of quantum machines. Shrinking technology — a single-ion engine and three-atom fridge were both experimentally realized for the first time within the past year — is forcing them to extend thermodynamics to the quantum realm, where notions like temperature and work lose their usual meanings, and the classical laws don’t necessarily apply. They’ve found new, quantum versions of the laws that scale up to the originals. Rewriting the theory from the bottom up has led experts to recast its basic concepts in terms of its subjective nature, and to unravel the deep and often surprising relationship between energy and information — the abstract 1s and 0s by which physical states are distinguished and knowledge is measured.,,, Renato Renner, a professor at ETH Zurich in Switzerland, described this as a radical shift in perspective. Fifteen years ago, “we thought of entropy as a property of a thermodynamic system,” he said. “Now in (quantum) information theory, we wouldn’t say entropy is a property of a system, but a property of an observer who describes a system.”,,, https://www.quantamagazine.org/quantum-thermodynamics-revolution/
Again, to repeat that last sentence: “we wouldn’t say entropy is a property of a system, but a property of an observer who describes a system.” Think about that statement for a second. These developments in quantum information theory go to the heart of the ID vs. Evolution debate and directly falsify Darwinian claims that immaterial information is merely ’emergent’ from some material basis. That is to say, immaterial information is now empirically shown to be its own distinct physical entity that, although it can interact with matter and energy, is completely separate from matter and energy. Moreover, this distinct physical entity of immaterial information, via experimental realization of the Maxwell’s demon thought experiment, is shown to be a product of the immaterial mind. Specifically, to reiterate for importance: “Now in (quantum) information theory, we wouldn’t say entropy is a property of a system, but a property of an observer who describes a system.”

To more clearly illustrate how all this plays out in the ID vs. Evolution debate: at about the 41:00 minute mark of the following video, Dr. Wells, using a branch of mathematics called category theory, demonstrates that, during embryological development, ‘positional information’ must somehow be added to the developing embryo, ‘from the outside’, by some ‘non-material’ method, in order to explain the transdifferentiation of cells into multiple different states during embryological development.
Design Beyond DNA: A Conversation with Dr. Jonathan Wells – video (41:00 minute mark) – January 2017 https://youtu.be/ASAaANVBoiE?t=2484
The amount of ‘positional information’ that is somehow coming into a developing embryo from the outside by some non-material method is immense, vastly outstripping, by many orders of magnitude, the amount of sequential information that is contained within DNA itself. As Doug Axe states in the following video, “there are a quadrillion neural connections in the human brain, that’s vastly more neural connections in the human brain than there are bits (of information) in the human genome. So,,, there’s got to be something else going on that makes us what we are.”
“There is also a presumption, typically when we talk about our genome, (that the genome) is a blueprint for making us. And that is actually not a proven fact in biology. That is an assumption. And (one) that I question because I don’t think that 4 billion bases, which would be 8 billion bits of information, that you would actually have enough information to specify a human being. If you consider for example that there are a quadrillion neural connections in the human brain, that’s vastly more neural connections in the human brain than there are bits (of information) in the human genome. So,,, there’s got to be something else going on that makes us what we are.” Doug Axe – Intelligent Design 3.0 – Stephen C. Meyer – video https://youtu.be/lgs6J4LqeqI?t=4575
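The comparison in the Axe quote reduces to simple arithmetic; a sketch using only the figures quoted above (2 bits per base follows from there being 4 possible nucleotides):

GENOME_BASES = 4e9          # "4 billion bases", as quoted
BITS_PER_BASE = 2           # log2(4) bits per nucleotide
NEURAL_CONNECTIONS = 1e15   # "a quadrillion neural connections", as quoted

genome_bits = GENOME_BASES * BITS_PER_BASE   # 8e9: the quoted "8 billion bits"
ratio = NEURAL_CONNECTIONS / genome_bits

print(f"genome capacity ~ {genome_bits:.0e} bits")
print(f"connections per genome bit ~ {ratio:.2e}")   # ~1.25e+05

Even at a single bit per connection, the brain's wiring diagram exceeds the genome's raw capacity by roughly five orders of magnitude, which is the point Axe is drawing out.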
And as the following article states, the information to build a human infant, atom by atom, would take up the equivalent of enough thumb drives to fill the Titanic, multiplied by 2,000.
In a TED Talk, Here’s the Question You May Not Ask,,, Where Did the Information Come From? – November 29, 2017 Excerpt: Sabatini is charming.,,, he deploys some memorable images. He points out that the information to build a human infant, atom by atom, would take up the equivalent of enough thumb drives to fill the Titanic, multiplied by 2,000. Later he wheels out the entire genome, in printed form, of a human being,,,,: [F]or the first time in history, this is the genome of a specific human, printed page-by-page, letter-by-letter: 262,000 pages of information, 450 kilograms.,,, https://evolutionnews.org/2017/11/in-a-ted-talk-heres-the-question-you-may-not-ask/
bornagain77
June 9, 2019, 03:54 AM PDT
Hazel @224 asks:
That’s not very specific. By common attention do you mean in common with others? And what thought techniques? Can you give an example?
My interest here lies in debating mental vs external reality logic, not in attempting to explain the basics of mental reality thought techniques. I'm sure you know how to use Google.
William J Murray
June 9, 2019, 01:41 AM PDT
H, nature itself is credibly designed, but that is most evident from cosmological fine tuning. Taking the world as a going concern, upheld from moment to moment, that it should have predictable, stable patterns is in part inevitable; as was discussed months back, even a hypothetical “no detectable pattern” would be a pattern. But beyond that, we see general order.

However, there are various phenomena that do not merely reflect order playing out mechanically and/or stochastically. Mind vs. the capability of computational substrates is one example, focal to the OP. The code-using information and linked molecular nanotech systems in the world of life are a second, pointing to intelligent design of cell based life. We observe and participate in a world of intelligently directed configuration, which, once it rises to FSCO/I’s 500 – 1,000 bit threshold, is reliably recognisable on sign.

Just this week, I had a car ignition key that would not work; it had been somehow bent. Some judicious counter-bending restored proper function, and out of caution I bought a blank and had a duplicate cut, transferring the remote access module myself. The key uses multiple patterns of slots and symmetric prongs to allow turning on of the car, i.e. it and linked systems are FSCO/I in action, indeed a hardware password. A couple of days earlier, I came out to see a flat tyre, so we had a tyre change exercise and went to the local garage. That a machine screw was picked up and punctured the tyre seems chance, the mechanical clash of sharpish points and rubber tyre materials, likely aided by rains. That, having lodged, the head of said object wore quickly on the road was mechanical necessity; the precisely cut threads and the head adapted to a screwdriver were obvious design.

If we were to run into a seemingly crashed alien vessel on Mars and find a key-lock system, we would for cause infer design. Ironically, a very similar prong-height system is used to encode genetic information, but somehow many cannot recognise the significance of code, algorithms and execution machinery. KF
kairosfocus
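For context on the 500 – 1,000 bit FSCO/I threshold mentioned in the comment above, here is a back-of-envelope sketch of the configuration-space rationale usually given for it. The atom, rate, and time figures below are the customary rough estimates, assumed here rather than taken from the comment itself:

from math import log10

# Customary rough figures for the threshold argument (assumptions):
ATOMS_SOLAR_SYSTEM = 1e57   # ~atoms in our solar system
EVENTS_PER_SEC = 1e14       # ~fast chemical interaction rate, per atom
SECONDS = 1e17              # ~time since the big bang

max_events = ATOMS_SOLAR_SYSTEM * EVENTS_PER_SEC * SECONDS   # ~1e88
configs = 2 ** 500                                           # ~3.3e150

print(f"atomic-scale events available ~ 10^{log10(max_events):.0f}")
print(f"500-bit configuration space   ~ 10^{log10(configs):.1f}")
# The 500-bit space exceeds the events available by ~62 orders of
# magnitude; that gap is the stated rationale for treating 500-1,000
# bits of functionally specific information as a design-inference threshold.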
June 9, 2019, 12:30 AM PDT
BA77, I simply noted a fact, the observable halos. Particles of RA materials in rocks undergo chain decay to Pb, and in so doing emit particles at various energies that then have diverse penetration differences leading to discoloured rings. This speaks to different discrete energy levels in the nucleus and thus to quantisation. The detection comes through discolouration, and the ring radius is an index of the energy of the emitted particles.

Fraunhofer lines are another case, where absorption in thinner regions of a star's envelope and re-radiation in all directions creates a characteristic pattern of dark lines. This indicates species and transitions [line frequency patterns are identifying characteristics], and even, through Doppler frequency shifting, radial velocity relative to us, i.e. an index relevant to seeing the expansion of the cosmos. That such cases occur is simply a fact that is acknowledged as part of the catalogue of observations of the cosmos and its contents. There is no warrant to infer particular design of the stones; they do not constitute a set up experiment. Similarly, thin outer gaseous layers are an evident natural feature of stars.

These and other cases point to how in some cases quantum phenomena manifest themselves naturally and there are natural detectors we may observe. We then fit such facts into our explanatory frameworks. By contrast, there are set up experiments, and in many cases they use similar natural effects; I spoke to cloud and bubble chambers as well as stacks of film. In the case of one double slit type experiment, detectors are set up to pick up the two different outcomes, and coincidence circuits identify events. It seems some go one way, some the other. KF
kairosfocus
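The ring-radius-as-energy-index point admits a compact statement. A sketch using Geiger's empirical range-energy rule for alpha particles; the constant below is for air at standard conditions (ranges in mica are far shorter, but the monotone dependence on energy, which is what produces discrete halo rings, is the same):

% Geiger's empirical range-energy rule for alpha particles (~4-7 MeV):
\[
  R \;\approx\; 0.318 \, E^{3/2}
  \qquad (R\ \text{in cm of air},\ E\ \text{in MeV})
\]
% Each discrete decay energy in the U/Th chain thus maps to a discrete
% penetration depth, and hence to a distinct discoloured ring radius.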
June 9, 2019, 12:12 AM PDT
That's not very specific. By common attention do you mean in common with others? And what thought techniques? Can you give an example?
hazel
June 8, 2019, 08:59 PM PDT
Hazel @222: Basically, applying common attention and thought techniques towards a goal, then observing what happens.
William J Murray
June 8, 2019, 08:55 PM PDT
I'll bite: what "means to experiment, validate and offers practical benefit" does your model offer?
hazel
June 8, 2019, 08:26 PM PDT
Seversky @308 said:
If you really believe that this is some Matrix-like reality whose behavior can be changed by some deep insight into its immaterial nature, then you should be able to walk in front of a speeding truck and it will pass right through you, but I really would not recommend it.
I 100% agree that unless my model provides a means to experiment and validate, and offers practical benefits and results beyond what the external-world model can offer, it's entirely useless and not worth the time to even discuss.
William J Murray
June 8, 2019, 07:52 PM PDT