
ID Foundations, 2: Counterflow, open systems, FSCO/I and self-moved agents in action


[Continued from here]

An excellent place to begin is back with Dr Dembski’s observation in No Free Lunch, as was already excerpted; but let us remind ourselves:

. . .[From commonplace experience and observation, we may see that:]  (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi. HT: ENV.) [Emphases and explanatory parenthesis added.]

Let us notice: conceiving a purpose, forming a plan, specifying materials and instructions.

We see here that an agent must act rationally and volitionally, on knowledge and creative imagination. In that context, the complex, specific, functional organisation and associated information is first in the mind; then, through various ways and means, it is actualised in the physical world. Taking as an example the composing of a text and posting of it to UD, we may observe from a previous ID foundations series post:

a: When I type the text of this post by moving fingers and pressing successive keys on my PC’s keyboard,

b: I [a self, and arguably:  a self-moved designing, intentional, initiating agent and initial cause] successively

c: choose alphanumeric characters (according to the symbols and rules of a linguistic code)  towards the goal [a purpose, telos or “final” cause] of writing this post, giving effect to that choice by

d: using a keyboard etc, as organised mechanisms, ways and means to give a desired and particular functional form to the text string, through

e: a process that uses certain materials, energy sources, resources, facilities and forces of nature and technology  to achieve my goal.

. . . The result is complex, functional towards a goal, specific, information-rich, and beyond the credible reach of chance [the other source of high contingency] on the gamut of our observed cosmos across its credible lifespan.  In such cases, when we observe the result, on common sense, or on statistical hypothesis-testing, or other means, we habitually and reliably assign outcomes to design.
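To make the scale of that "beyond the credible reach of chance" claim concrete, here is a minimal sketch (my own illustration; the ~10^150 figure is the resource count commonly cited in design-theory discussions alongside the universal probability bound) comparing a 1000-bit configuration space against the search resources of the observed cosmos:

```python
from math import log10

# A toy comparison (illustrative assumption: ~10^150 elementary events is
# taken as the upper bound on the "search" resources of the observed cosmos).
bits = 1000
configs = 2 ** bits          # distinct 1000-bit configurations
cosmic_events = 10 ** 150    # assumed cosmic resource bound

# Largest fraction of the configuration space a blind search could sample:
fraction = cosmic_events / configs

print(f"log10(configs)   = {log10(configs):.1f}")    # ≈ 301.0
print(f"sampled fraction < 10^{log10(fraction):.0f}")
```

Even granting every elementary event in the cosmos as a trial, such a search could sample less than 1 part in 10^151 of the space, which is the arithmetic behind the habitual assignment of such outcomes to design.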

Now, let us focus on the issue of choice in the context of reason and responsibility:

21 –> If I am only a product of evolutionary materialistic dynamics over the past 13.7 BY, and on Earth over the past 4.6 BY, in a reality that (as Lewontin and ever so many others urge) is wholly material, and so

22 –> I am not sufficiently free to make a truly free choice as a self-moved agent — however I may subjectively imagine myself to be choosing [i.e.  immediately, the evolutionary materialistic view implies we are all profoundly and inescapably delusional], then

23 –> Whatever I post is simply the end-product of a chain of cause-effect bonds trailing back to undirected forces of chance and necessity acting across time on matter and energy, forces that are utterly irrelevant to the rationality of ground and consequent, or the duty of sound thinking and decision.

24 –> That this is in fact a fair-comment view of the evolutionary materialistic position can be seen, for one instance, from Cornell professor of the history of biology William Provine’s remarks in his U Tenn Darwin Day keynote address of 1998:

Naturalistic evolution has clear consequences that Charles Darwin understood perfectly. 1) No gods worth having exist; 2) no life after death exists; 3) no ultimate foundation for ethics exists; 4) no ultimate meaning in life exists; and 5) human free will is nonexistent . . . .

The first 4 implications are so obvious to modern naturalistic evolutionists that I will spend little time defending them. Human free will, however, is another matter. Even evolutionists have trouble swallowing that implication. I will argue that humans are locally determined systems that make choices. They have, however, no free will . . . [Evolution: Free Will and Punishment and Meaning in Life, Second Annual Darwin Day Celebration Keynote Address, University of Tennessee, Knoxville, February 12, 1998 (abstract).]

25 –> This, if true, would immediately and irrecoverably undermine morality. But more than that: if all phenomena in the cosmos are shaped and controlled in the end by blind chance and necessity acting through blind-watchmaker evolutionary dynamics — however mediated — then the credibility of reasoning irretrievably breaks down.

26 –> For, if non-rational chains of cause and effect dominate over logical inference and moral principle, then our behaviour is explained and controlled by forces irrelevant to logic, principle or truth. So even scientific and materialist thoughts have no rational grounds, i.e. we are at reduction to absurdity.
27 –> This is plainly evident in, for example, Francis Crick’s even more radical remarks in his The Astonishing Hypothesis, 1994:

. . . that “You”, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behaviour of a vast assembly of nerve cells and their associated molecules. As Lewis Carroll’s Alice might have phrased it: “You’re nothing but a pack of neurons.” This hypothesis is so alien to the ideas of most people today that it can truly be called astonishing.

28 –> ID thinker Phillip Johnson’s retort in Reason in the Balance was apt. Namely: Dr Crick should therefore be willing to preface his books: “I, Francis Crick, my opinions and my science, and even the thoughts expressed in this book, consist of nothing more than the behavior of a vast assembly of nerve cells and their associated molecules.” (In short, as Prof Johnson then went on to say: “[t]he plausibility of materialistic determinism requires that an implicit exception be made for the theorist.”)

29 –> Actually, this fatal flaw had long since been highlighted by J. B. S. Haldane:

“It seems to me immensely unlikely that mind is a mere by-product of matter. For if my mental processes are determined wholly by the motions of atoms in my brain I have no reason to suppose that my beliefs are true. They may be sound chemically, but that does not make them sound logically. And hence I have no reason for supposing my brain to be composed of atoms.” [“When I am dead,” in Possible Worlds: And Other Essays [1927], Chatto and Windus: London, 1932, reprint, p.209. (Highlight and emphases added.)]

30 –> In short, if the conscious mind is a mere epiphenomenon of brain-meat in action, as neural networks wired up by chance and necessity fire away, then it has no credible capability to rise above cause-effect chains tracing to blind chance and necessity, to achieve responsible reasoning on grounds, evidence, warrant and consequences. We are at reductio ad absurdum.

31 –> In his The Laws, Bk X, 360 BC, Plato showed a more promising beginning-point. Here, he speaks in the voice of the Athenian Stranger:

Ath. . . . when one thing changes another, and that another, of such will there be any primary changing element? How can a thing which is moved by another ever be the beginning of change? Impossible. But when the self-moved changes other, and that again other, and thus thousands upon tens of thousands of bodies are set in motion, must not the beginning of all this motion be the change of the self-moving principle? . . . . self-motion being the origin of all motions, and the first which arises among things at rest as well as among things in motion, is the eldest and mightiest principle of change, and that which is changed by another and yet moves other is second.

[ . . . . ]

Ath. If we were to see this power existing in any earthy, watery, or fiery substance, simple or compound — how should we describe it?

Cle. You mean to ask whether we should call such a self-moving power life?

Ath. I do.

Cle. Certainly we should.

Ath. And when we see soul in anything, must we not do the same — must we not admit that this is life?

[ . . . . ]

Cle. You mean to say that the essence which is defined as the self-moved is the same with that which has the name soul?

Ath. Yes; and if this is true, do we still maintain that there is anything wanting in the proof that the soul is the first origin and moving power of all that is, or has become, or will be, and their contraries, when she has been clearly shown to be the source of change and motion in all things?

32 –> So, if we are willing to accept the testimony of our experience, that we are self-moved, reasoning, responsible, body-supervising, conscious, enconscienced minds, then we can see how we may use bodily means (brains, sensors, effectors) to interface with and act into the physical world to achieve our purposes, leaving FSCO/I as a trace of that minded, intelligent, designing action.

33 –> This brings to bear the relevance of the Derek Smith, two-tier controller, cybernetic model:

Fig. E: The Derek Smith, two-tier controller cybernetic model.  (Adapted, Derek Smith.)

34 –> Here, the higher order, supervisory controller intervenes informationally on the lower order loop, and the lower order controller acts as an input-output and processing unit for the higher order unit. (This architecture is obviously also relevant to the development of smart robots.)
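For readers who prefer to see architecture as code, here is a minimal toy sketch (my own illustration; the class names and gains are assumptions, not Derek Smith's actual formulation) of the two-tier idea: the lower tier does moment-to-moment error correction on the plant, while the supervisory tier intervenes only informationally, by resetting the goal the lower loop pursues:

```python
# Toy two-tier controller sketch. The lower tier is a simple proportional
# loop; the higher tier never touches the plant directly, it only changes
# the setpoint the lower loop is pursuing.

class LowerLoop:
    """Input-output / processing tier: proportional error correction."""
    def __init__(self, gain: float = 0.5):
        self.gain = gain
        self.setpoint = 0.0

    def step(self, state: float) -> float:
        error = self.setpoint - state
        return state + self.gain * error  # nudge the plant toward the goal

class Supervisor:
    """Higher-order tier: chooses goals, leaving execution to the loop."""
    def __init__(self, plan):
        self.plan = list(plan)

    def maybe_redirect(self, loop: LowerLoop, t: int) -> None:
        if t < len(self.plan):
            loop.setpoint = self.plan[t]  # informational intervention only

loop = LowerLoop()
boss = Supervisor(plan=[1.0, 1.0, 5.0, 5.0, 5.0, 5.0])
state = 0.0
for t in range(6):
    boss.maybe_redirect(loop, t)
    state = loop.step(state)

print(round(state, 3))  # the plant has been steered toward the final goal, 5.0
```

The design point is the separation of concerns: the supervisor's "output" is information (a goal), not force, which is exactly the higher-order/lower-order split the model describes.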

35 –> In short, we are not locked up to the notion that mind is at best an epiphenomenon of brain-meat.

36 –> Going further, by taking our experience of ourselves as reasoning, choosing, acting designers who often leave FSCO/I behind in objects, systems, processes and phenomena that reflect counter-flow leading to local organisation, we have a coherent basis for understanding the significance of the inference to design.

_____________

Design points to the designing mind, in sum.

And while that has potential worldview level import, it is no business of science to trouble itself unduly over the possible onward philosophical debates once we can establish the principle of inference to design on reliable empirical signs such as FSCO/I.

At least, if we understand science at its best as:

the unfettered (but ethically and intellectually responsible) progressive pursuit of the empirically evident truth about our world, based on observation, measurement, analysis, theoretical modelling on inference to best explanation, and free, uncensored but mutually respectful discussion among the informed.

16 Replies to “ID Foundations, 2: Counterflow, open systems, FSCO/I and self-moved agents in action”

  1. 1
    allanius says:

    Pretzels, anyone? This might be a good time to go back and reread Phillip Johnson. Very carefully. Strategy drives tactics, and some strategies are clearly more effective than others.

  2. 2
    kairosfocus says:

    Hi A:

    Thanks for your thought; I appreciate that the above OP (which is intended for reference and foundational purposes) is involved.

    However, it is also responding to what are at root a technical series of counter-arguments to the design inference, backed up by philosophical issues and challenges that go back at least 2,300 years.

    As such, pardon, but I believe the above is a legitimate part of the project of design theory — one more piece of the puzzle.

    I hope that this OP will therefore serve as a point of reference for onward debates in other, more popular level threads.

    Thanks again.

    GEM of TKI

    PS: Sometimes, having the technical and tactical underpinnings is what enables a strategy to work. A classic example is the German strategy in 1918, where the whirlwind bombardment and infiltration storm-trooper tactics provided the foundation for a strategy that, for the second time, almost won the war for Germany. The likely margin of failure was the early deployment of those American Marines at the Second Battle of the Marne. And, in May 1940, getting the technical tactics right did win the day for the Germans.

  3. 3
    Meleagar says:

    The division of natural vs supernatural at the term “free will” or “design agency” is convenient, dishonest semantics when used to preclude design agency as a proper explanatory force. If the agency we refer to as “free will” did not exist as meaningfully distinct from what chance & known natural forces can produce by themselves, then why does science act as if there is a distinction between “artificial” (man-made) and “natural”?

    Why would there be a distinction between death by murder and death by natural causes? Isn’t murder also a natural cause, if design agency or free will is “the same thing” as any other physical causation?

    Materialists wish to subsume “design agency” or “free will” as something produced by chance and other, already-identified natural forces without demonstrating that it is so, or even that it is theoretically reasonable, and even though it is philosophical and rational suicide to do so.

    If science defines “free will”, as humans employ it all the time, as “supernatural”, then humans are constantly applying supernatural techniques. If science defines free will agency as a non-supernatural subset of “chance and natural forces”, then it has no reason to deny it as an appropriate explanatory force.

    Currently known natural forces and chance are insufficient to account for some empirical phenomena, such as things humans generate, that consistently and reliably show specific common characteristics – such as, FSCI well over 1000 bits, and which violate the Universal Plausibility Principle.

    Science is left with two options: it must admit a supernatural force exists, or it must admit that a “natural” force exists, as yet unaccounted for, which is responsible for generating things like functioning aircraft carriers and space shuttles and the book “War and Peace”, and which cannot be explained via other, currently known forces.

    Just as gravity and entropy can be recognized by the manner in which they affect observable phenomena, “intentionality” or “design agency” or “free will” can also be recognized by specific and qualitative effects it has on observable phenomena.

    Is there a reason why the scientific community would be against the discovery of another fundamental “natural” force that must be posited to account for what we empirically and factually observe, and which our current set of natural explanations are entirely deficient in explaining, and without which our reasoning process and science itself crumbles into irrationality, and our ability to distinguish between artifice and “what other forces and chance produces” becomes nullified, and thus personal responsibility, morality, ethics and justice become nothing more than delusions?

    If nothing else, science must allow that a new fundamental property or force exists in nature, commonly referred to as intentional agency, that can produce what no other known combination of forces and chance can produce, and which justifies our reliance upon reason and science, and which allows for personal responsibility and morality beyond self-delusion.

  4. 4
    Meleagar says:

    BTW, I’m really enjoying your contributions to this site, KF. They are very compelling, amazingly well organized, and exhaustively thorough.

  6. 6
    kairosfocus says:

    Meleagar:

    You will not believe this one, re your:

    ME,3:Why would there be a distinction between death by murder and death by natural causes? Isn’t murder also a natural cause, if design agency or free will is “the same thing” as any other physical causation?

    Let’s excerpt the closing summation of Clarence Darrow — he of the Scopes Trial, which came a short while after this [Bryan had intended to call up the following, but the disgusted judge abruptly cut off the trial] — at the Loeb-Leopold Nietzschean murder trial:

    ________________

    >> . . . They [[Loeb and Leopold] wanted to commit a perfect crime . . . . Do you mean to tell me that Dickie Loeb had any more to do with his making than any other product of heredity that is born upon the earth? . . . .

    He grew up in this way. He became enamored of the philosophy of Nietzsche. Your Honor, I have read almost everything that Nietzsche ever wrote. He was a man of a wonderful intellect; the most original philosopher of the last century. Nietzsche believed that some time the superman would be born, that evolution was working toward the superman. He wrote one book, Beyond Good and Evil, which was a criticism of all moral codes as the world understands them; a treatise holding that the intelligent man is beyond good and evil, that the laws for good and the laws for evil do not apply to those who approach the superman. [Shades of Plato’s critique of evolutionary materialism in The Laws, Bk X . . . ] He wrote on the will to power. Nathan Leopold is not the only boy who has read Nietzsche. He may be the only one who was influenced in the way that he was influenced . . . >>
    _________________

    This last claim was in fact patently false, as Bryan had written in warning to the then largely Christian public of America, in his c. 1923 The Menace of Darwinism.

    Pardon the painfully harsh words Bryan felt compelled to communicate to his nation and his generation, in warning of what was to come, based on what had already begun to happen:

    Darwinism leads to a denial of God. Nietzsche carried Darwinism to its logical conclusion and it made him the most extreme of anti-Christians . . . . As the [First World] war [of 1914 – 1918] progressed I [Bryan was from 1913 – 1915 the 41st US Secretary of State, under President Wilson] became more and more impressed with the conviction that the German propaganda rested upon a materialistic foundation. I secured the writings of Nietzsche and found in them a defense, made in advance, of all the cruelties and atrocities practiced by the militarists of Germany. [It didn’t start with the Nazis! (Indeed, the rape and pillaging of Belgium in 1914 — adjust for 90+ years and whatever propagandistic elements it may have, but note this is largely eyewitness testimony by a reporter — had in it all the seeds of what would follow in the 1940’s)] Nietzsche tried to substitute the worship of the “Superman” for the worship of God. He not only rejected the Creator, but he rejected all moral standards. He praised war and eulogized hatred because it led to war. He denounced sympathy and pity as attributes unworthy of man. He believed that the teachings of Christ made degenerates and, logical to the end, he regarded Democracy as the refuge of weaklings. He saw in man nothing but an animal and in that animal the highest virtue he recognized was “The Will to Power”—a will which should know no let or hindrance, no restraint or limitation . . . . His philosophy, if it is worthy the name of philosophy, is the ripened fruit of Darwinism — and a tree is known by its fruit . . . .

    The corroding influence of Darwinism has spread as the doctrine has been increasingly accepted. In the American preface to “The Glass of Fashion” these words are to be found: “Darwinism not only justifies the sensualist at the trough and Fashion at her glass; it justifies Prussianism at the cannon’s mouth and Bolshevism at the prison-door. If Darwinism be true, if Mind is to be driven out of the universe and accident accepted as a sufficient cause for all the majesty and glory of physical nature, then there is no crime or violence, however abominable in its circumstances and however cruel in its execution, which cannot be justified by success, and no triviality, no absurdity of Fashion which deserves a censure: more — there is no act of disinterested love and tenderness, no deed of self-sacrifice and mercy, no aspiration after beauty and excellence, for which a single reason can be adduced in logic.” [pp. 52 – 54. Emphases and explanatory parentheses added.]

    That, sadly, is what amorality, stripped of genteel habits, really means.

    And, BTW, here is what Bryan intended but did not get the chance to say in his closing summation in Dayton, Tennessee.

    Excerpting, and again, the reading is painful indeed; but, I am now convinced that we must take warning from the past, lest we repeat it:

    A criminal is not relieved from responsibility merely because he found Nietzsche’s philosophy in a library which ought not to contain it. Neither is the university guiltless if it permits such corrupting nourishment to be fed to the souls that are entrusted to its care . . . . [Again, strongly echoing Plato’s analysis; and also his recommendations. While we may not wish to withhold such books from our libraries, perhaps, we should at least allow also on the same shelves those that balance and counter them.]

    Mr. Darrow said: “I say to you seriously that the parents of Dicky Loeb are more responsible than he, and yet few boys had better parents.” Again he says: “I know that one of two things happened to this boy; that this terrible crime was inherent in his organism and came from some ancestor, or that it came through his education and his training after he was born.” . . . . He says “I do not know what remote ancestor may have sent down the seed that corrupted him [I suggest, we should at least consider: Adam . . . but that does not relieve us of responsibility for our choices and behaviour], and I do not know through how many ancestors it may have passed until it reached Dicky Loeb. All I know is, it is true, and there is not a biologist in the world who will not say I am right.”

    Psychologists who build upon the evolutionary hypothesis teach that man is nothing but a bundle of characteristics inherited from brute ancestors. That is the philosophy which Mr. Darrow applied in this celebrated criminal case. “Some remote ancestor” – he does not know how remote – “sent down the seed that corrupted him.” You cannot punish the ancestor – he is not only dead but, according to the evolutionists, he was a brute and may have lived a million years ago. And he says that all the biologists agree with him. No wonder so small a percentage of the biologists, according to Leuba, believe in a personal God.

    This is the quintessence of evolution, distilled for us by one who follows that doctrine to its logical conclusion.

    Hard reading, and at a time of a clash of Titans.

    But, when we debate the validity of the choosing will, and the resulting responsible mind, that is what is at stake.

    So, do pardon my taking the step by step, semi-technical route, for just a short while, to lay the base for the response we must make.

    At least — and again, pardon words that may wound as they lance home in the abscess, they are meant to help us to heal — if both science and civilisation are to be kept from sliding off the cliff.

    We need to think, very soberly, about where we are, and what fire we are playing with.

    GEM of TKI

  7. 7
    tragic mishap says:

    The likely margin of failure was the early deployment of those American Marines at the Second Battle of the Marne. And, in May 1940, getting the technical tactics right did win the day for the Germans.

    I’d argue that the German army failed to execute the Schlieffen plan properly in WWI. The northern arm failed to swing all the way north and west of Paris, and the southern arm failed to retreat into Germany to suck the French army in. Had the northern army especially followed the plan, there would have been no race to the sea because the Germans would have already won it.

  8. 8
    tragic mishap says:

    But more on topic, I know in the past Dembski has recoiled from comparing information and entropy. I’d like to see what he says about this.

  9. 9
    kairosfocus says:

    TM:

    On re-looking above, I have said remarkably little about entropy proper, I may have to add a remark or two. (There is a whole informational approach to thermodynamics.)

    And the offensive in question was the second major German push, of March 1918 onward, not the first one in 1914, which was also stopped at the last major river before Paris, the Marne.

    Back in 1914, the margin of failure was the month of effort and manpower fighting the Belgians, with the British coming up too.

    GEM of TKI

  10. 10
    tragic mishap says:

    My bad about 1918.

    I don’t doubt that the BEF fought well, but the intent of the Schlieffen plan was to have the northern arm swoop around to the north and west of Paris before turning back. The southern arm was supposed to fake a retreat, drawing most of the French army into Germany. Then the northern arm would move in and attack the French from behind.

    Instead the southern arm actually advanced into France, pushing the French army back towards Paris. Then the northern arm swung south too early. This turned what was supposed to be an envelopment into frontal assault. Considering the superiority of the German army, made obvious by the fact that the portion which was supposed to retreat actually advanced almost by accident, I think the plan would have worked if executed properly.

  11. 11
    tragic mishap says:

    Here is what was supposed to happen:

    http://rlv.zcache.com/schlieff.....cp_400.jpg

    You see the northern arm going around Paris and the French actually advancing into Germany.

    http://en.wikipedia.org/wiki/S.....onal_facts

  12. 12
    kairosfocus says:

    TM:

    We are a bit off topic, but . . .

    You are still discussing 1914, where the battle of the frontiers and the seizure of Liege using Skoda 305 mm and Krupp 42 cm mortars were initially decisive, but the distraction of the Belgians flooding and retreating to Antwerp (and later the Tannenberg/Masurian Lakes episode in the East) cost the Germans the time and forces they needed.

    A gap opened, and the French and British spotted it by air and sallied into it, and the Germans recoiled to the Aisne on the orders of a staff colonel sent to the front; a few dozen more miles and the last E/W railroad would have been cut, breaking France’s back — not even counting Paris. Then came the race to the sea, and the trench lines were locked in for four years of siege warfare.

    1914 + 4 = 1918.

    In early 1918, having knocked out one eastern ally after another year by year [and having bled the French in 1916 and by blunting the Nivelle offensives in 1917, triggered mutinies], culminating in Russia in 1917, the Germans had a temporary advantage until the Americans could be deployed. That March, they struck, and drove several wedges into the Allied lines. Last line before Paris was again Marne.

    Chemin des Dames, 8,000 US Marines.

    G

  13. 13
    kairosfocus says:

    UPDATE: I have added some adjustments in 3 – 9, and a sidebar on entropy and information at point 8. This should make it clear that while there is a relationship between entropy and information, the pivotal issue is the credible source of the FSCO/I in an energy converter. The most credible source for that is a designer, whether the device is micro- or macro- scale. GEM of TKI

  14. 14
    bornagain77 says:

    OT: This breakthrough is just plain cool!

    Physicists describe method to observe timelike entanglement – January 24, 2011
    Excerpt: In “ordinary” quantum entanglement, two particles possess properties that are inherently linked with each other, even though the particles may be spatially separated by a large distance. Now, physicists S. Jay Olson and Timothy C. Ralph from the University of Queensland have shown that it’s possible to create entanglement between regions of spacetime that are separated in time but not in space, and then to convert the timelike entanglement into normal spacelike entanglement. They also discuss the possibility of using this timelike entanglement from the quantum vacuum for a process they call “teleportation in time.”
    http://www.physorg.com/news/20.....ement.html

    It should be noted that this experiment solidly dots the i’s and crosses the t’s insofar as demonstrating that not only is ‘information’ transcendent of space but ‘information’ is also transcendent of time, with the added bonus of demonstrating dominion over matter/material regardless of the space-time constraints that matter/material is itself subject to!!!

  15. 15
    kairosfocus says:

    BA:

    Quantum entanglement is an interesting field, with active research, that cuts across our usual experience/expectations of the world.

    Wiki defines:

    Quantum entanglement is a property of the quantum mechanical state of a system containing two or more objects, where the objects that make up the system are linked in such a way that the quantum state of any member of the system cannot be adequately described without full mention of the other members of the system, even if the individual objects are spatially separated.
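A small numerical sketch may help here (my own illustration, not part of the quoted material): in the CHSH form of Bell's inequality, any local hidden-variable account bounds a certain correlation sum S at |S| <= 2, whereas the quantum singlet-state correlation E(a, b) = -cos(a - b) pushes it to 2*sqrt(2) at the standard analyser settings:

```python
from math import cos, radians, sqrt

# Numerical check (illustrative sketch): quantum singlet correlations
# violate the CHSH form of Bell's inequality, |S| <= 2, at the
# standard test angles.

def E(a_deg: float, b_deg: float) -> float:
    """Singlet-state correlation for analyser angles a, b in degrees."""
    return -cos(radians(a_deg - b_deg))

a, a2, b, b2 = 0, 90, 45, 135  # standard CHSH analyser settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(abs(S))      # ≈ 2.828, i.e. 2*sqrt(2)
print(abs(S) > 2)  # prints True: the local hidden-variable bound of 2 is exceeded
```

This is the quantitative gap the experiments described in the excerpt below keep confirming.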

    This brings to bear Bell’s inequality theorem of 1964, and the issue of local realism/hidden variables and Einstein’s concerns on “spooky” effectively instant action at a distance:

    ________________

    >> In theoretical physics, Bell’s theorem (AKA Bell’s inequality) is a no-go theorem, loosely stating that:

    No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics . . . .

    it indicates that every quantum theory must violate either locality or counterfactual definiteness. In conjunction with the experiments verifying the quantum mechanical predictions of Bell-type systems, Bell’s theorem demonstrates that certain quantum effects travel faster than light and therefore restricts the class of tenable hidden variable theories to the nonlocal [thus, we may say “transcendent”] variety . . . .

    As in the Einstein–Podolsky–Rosen (EPR) paradox, Bell considered an experiment in which a source produces pairs of correlated particles. For example, a pair of particles may be produced in a Bell state so that if the spins are measured along the same axes they are certain to produce identical results. The particles are then sent to two distant observers: Alice and Bob. In each trial, both of the observers independently chooses to measure the spin of their respective particle along a particular axis [around the full circle], and each measurement yields a result of either spin-up (+1) or spin-down (-1). Whether or not Alice and Bob obtain the same result depends on the relationship between the orientations of the two spin measurements, and in general is subject to some uncertainty. The classical incarnation of Bell’s theorem is derived from the statistical properties observed over many runs of this experiment.

    Mathematically the correlation between results is represented by their product (thus taking on values of ±1 for a single run). While measuring the spin of these entangled particles along the same axis always results in identical (perfectly correlated) results, measurements along perpendicular directions have a 50% chance of matching (uncorrelated) . . . .

    Bell achieved his breakthrough by first assuming that a theory of local hidden variables could reproduce these results. Without making any assumptions about the specific form of the theory beyond basic consistency requirements, he was able to derive an inequality that was clearly at odds with the result described above, which is both predicted by quantum mechanics and observed experimentally. Thus, Bell’s theorem ruled out the idea of local realism as a viable interpretation of quantum mechanics, though it still leaves the door open for non-local realism [fancy way of saying that in effect one way to view all this is that our space-time is in effect connected through a transcendent — hence Einstein’s “spooky” — realm that allows for effective supra-light connexions].

    Over the years, Bell’s theorem has undergone a wide variety of experimental tests. Two possible loopholes in the original argument have been proposed, the detection loophole[1] and the communication loophole[1], each prompting a new round of experiments that re-verified the integrity of the result[1]. To date, Bell’s theorem is supported by an overwhelming body of evidence and is treated as a fundamental principle of physics in mainstream quantum mechanics textbooks[2] [3]. Still, no principle of physics can ever be absolutely beyond question, and there are some people who still do not accept the theorem’s validity . . . .

    In QM, predictions were formulated in terms of probabilities — for example, the probability that an electron might be detected in a particular region of space, or the probability that it would have spin up or down. The idea persisted, however, that the electron in fact has a definite position and spin, and that QM’s weakness was its inability to predict those values precisely. The possibility remained that some yet unknown, but more powerful theory, such as a hidden variables theory, might be able to predict those quantities exactly, while at the same time also being in complete agreement with the probabilistic answers given by QM. If a hidden variables theory were correct, the hidden variables were not described by QM, and thus QM would be an incomplete theory.

    The desire for a local realist theory was based on two assumptions:

    1. Objects have a definite state that determines the values of all other measurable properties, such as position and momentum.

    2. Effects of local actions, such as measurements, cannot travel faster than the speed of light (as a result of special relativity). If the observers are sufficiently far apart, a measurement taken by one has no effect on the measurement taken by the other.

    In the formalization of local realism used by Bell, the predictions of the theory result from the application of classical probability theory to an underlying parameter space. By a simple argument based on classical probability, he then showed that correlations between measurements are bounded in a way that is violated by QM . . . .
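    [To make that bound concrete, here is a sketch using the CHSH form of Bell's inequality — a later, widely tested variant which I name here as an assumption, since the excerpt does not specify which inequality. If each particle carries predetermined ±1 answers for both possible settings (local hidden variables), the combination S = E(a,b) + E(a,b') + E(a',b) - E(a',b') can never exceed 2; the quantum cosine correlation at the standard angles gives 2√2 ≈ 2.83.]

    ```python
    import math
    from itertools import product

    # Local realism: enumerate every assignment of predetermined +/-1
    # outcomes (a, a' for Alice's two settings; b, b' for Bob's) and
    # check that the CHSH combination never exceeds 2.
    max_lhv = max(
        abs(a * b + a * bp + ap * b - ap * bp)
        for a, ap, b, bp in product((-1, 1), repeat=4)
    )
    assert max_lhv == 2  # the CHSH bound for any local hidden variable model

    # Quantum prediction with E(theta) = cos(theta), using the optimal
    # settings a = 0, a' = 90, b = 45, b' = -45 degrees (so three pairs
    # are 45 degrees apart and the primed pair is 135 degrees apart).
    E = math.cos
    s_qm = E(math.pi / 4) + E(math.pi / 4) + E(math.pi / 4) - E(3 * math.pi / 4)
    assert abs(s_qm - 2 * math.sqrt(2)) < 1e-12  # ~2.83 > 2: violation
    ```

    [The point of the exhaustive enumeration is that no cleverness in choosing the hidden values helps: the bound of 2 holds for every one of the sixteen assignments, yet quantum mechanics — and experiment — exceed it.]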

    Bell’s inequalities are tested by “coincidence counts” from a Bell test experiment such as the optical one shown in the diagram. Pairs of particles are emitted as a result of a quantum process, analysed with respect to some key property such as polarisation direction, then detected. The settings (orientations) of the analysers are selected by the experimenter.

    Bell test experiments to date overwhelmingly violate Bell’s inequality. Indeed, a table of Bell test experiments performed prior to 1986 is given in 4.5 of Redhead, 1987.[12] Of the thirteen experiments listed, only two reached results contradictory to quantum mechanics; moreover, according to the same source, when the experiments were repeated, “the discrepancies with QM could not be reproduced”.

    Nevertheless, the issue is not conclusively settled. According to Shimony’s 2004 Stanford Encyclopedia overview article:[1] . . . .

    Because detectors don’t detect a large fraction of all photons, Clauser and Horne[11] recognized that testing Bell’s inequality requires some extra assumptions. They introduced the No Enhancement Hypothesis (NEH):

    a light signal, originating in an atomic cascade for example, has a certain probability of activating a detector. Then, if a polarizer is interposed between the cascade and the detector, the detection probability cannot increase.

    Given this assumption, there is a Bell inequality between the coincidence rates with polarizers and coincidence rates without polarizers.

    The experiment was performed by Freedman and Clauser[15], who found that Bell’s inequality was violated. So the no-enhancement hypothesis cannot be true in a local hidden variables model. The Freedman-Clauser experiment reveals that local hidden variables imply the new phenomenon of signal enhancement:

    In the total set of signals from an atomic cascade there is a subset whose detection probability increases as a result of passing through a linear polarizer.

    This is perhaps not surprising, as it is known that adding noise to data can, in the presence of a threshold, help reveal hidden signals (this property is known as stochastic resonance[17]). One cannot conclude that this is the only local-realist alternative to Quantum Optics, but it does show that the word loophole is biased. Moreover, the analysis leads us to recognize that the Bell-inequality experiments, rather than showing a breakdown of realism or locality, are capable of revealing important new phenomena . . . .
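    [A toy illustration of my own of the stochastic resonance effect just mentioned, not drawn from the excerpt's sources: a periodic "signal" too weak ever to cross a detector's threshold is invisible on its own, but once random noise is added, the threshold crossings cluster where the signal is high, so the hidden periodicity becomes detectable.]

    ```python
    import math
    import random

    random.seed(42)  # reproducible run of this sketch
    THRESHOLD = 1.0
    # A sub-threshold sinusoid: amplitude 0.6 never reaches 1.0.
    signal = [0.6 * math.sin(2 * math.pi * t / 100) for t in range(1000)]

    # Without noise, the detector never fires.
    assert sum(s > THRESHOLD for s in signal) == 0

    # With Gaussian noise added, crossings occur -- and overwhelmingly
    # on the positive half-cycles, revealing the hidden signal.
    noisy_hits = [t for t, s in enumerate(signal)
                  if s + random.gauss(0, 0.5) > THRESHOLD]
    assert len(noisy_hits) > 0
    on_peaks = sum(signal[t] > 0 for t in noisy_hits)
    assert on_peaks > len(noisy_hits) // 2  # crossings cluster on the peaks
    ```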

    Most advocates of the hidden variables idea believe that experiments have ruled out local hidden variables. They are ready to give up locality, explaining the violation of Bell’s inequality by means of a “non-local” hidden variable theory, in which the particles exchange information about their states. This is the basis of the Bohm interpretation of quantum mechanics, which requires that all particles in the universe be able to instantaneously exchange information with all others. A recent experiment ruled out a large class of non-Bohmian “non-local” hidden variable theories.[18]

    If the hidden variables can communicate with each other faster than light, Bell’s inequality can easily be violated. Once one particle is measured, it can communicate the necessary correlations to the other particle. Since in relativity the notion of simultaneity is not absolute, this is unattractive. One idea is to replace instantaneous communication with a process which travels backwards in time along the past light cone. This is the idea behind a transactional interpretation of quantum mechanics, which interprets the statistical emergence of a quantum history as a gradual coming to agreement between histories that go both forward and backward in time[19].

    A few advocates of deterministic models have not given up on local hidden variables. For example, Gerard ‘t Hooft has argued that the superdeterminism loophole cannot be dismissed[20].

    The quantum mechanical wavefunction can also provide a local realistic description, if the wavefunction values are interpreted as the fundamental quantities that describe reality. Such an approach is called a many-worlds interpretation of quantum mechanics. In this view, two distant observers both split into superpositions when measuring a spin. The Bell inequality violations are no longer counterintuitive, because it is not clear which copy of observer B observer A will see when going to compare notes. [In effect we are here looking at a quasi-infinity of worlds, at every entangled event . . . which I think raises Occam’s ghost, sharp and slashing razor in hand] If reality includes all the different outcomes, locality in physical space (not outcome space) places no restrictions on how the split observers can meet up.

    This implies that there is a subtle assumption in the argument that realism is incompatible with quantum mechanics and locality. The assumption, in its weakest form, is called counterfactual definiteness. This states that if the results of an experiment are always observed to be definite, there is a quantity which determines what the outcome would have been even if you don’t do the experiment.

    Many worlds interpretations are not only counterfactually indefinite, they are factually indefinite. The results of all experiments, even ones that have been performed, are not uniquely determined . . . >>

    ____________________

    Curiouser and curiouser, as the debates and tests go on!

    But, bottom line, BA has a serious point in highlighting nonlocality and, in effect, information linkage through a transcendent realm that is beyond our commonly experienced world.

    GEM of TKI

  16.
    kairosfocus says:

    F/N: Pardon a remark on the relevance of this post to the ID project.

    Here, I have just used this ID Foundations 2 post, as a reference foundation that applies to the context of genetic determinism and the myth of genes for this and that, and also addresses the issue of our being self-moved agents, which is discussed on p 2 of the post above.

    GEM of TKI
