In two recent UD threads, frequent commenter AI Guy, an Artificial Intelligence researcher, has thrown down the gauntlet:
Winds of Change, 76:
By “counterflow” I assume you mean contra-causal effects, and so by “agency” it appears you mean libertarian free will. That’s fine and dandy, but it is not an assertion that can be empirically tested, at least at the present time.
If you meant something else by these terms please tell me, along with some suggestion as to how we might decide if such a thing exists or not. [Emphases added]
ID Does Not Posit Supernatural Causes, 35:
Finally there is an ID proponent willing to admit that ID cannot assume libertarian free will and still claim status as an empirically-based endeavor. [Emphasis added] This is real progress!
Now for the rest of the problem: ID still claims that “intelligent agents” leave tell-tale signs (viz FSCI), even if these signs are produced by fundamentally (ontologically) the same sorts of causes at work in all phenomena . . . . since ID no longer defines “intelligent agency” as that which is fundamentally distinct from chance + necessity, how does it define it? It can’t simply use the functional definition of that which produces FSCI, because that would obviously render ID’s hypothesis (that the FSCI in living things was created by an intelligent agent) completely tautological. [Emphases original. NB: ID blogger Barry Arrington had simply said: “I am going to make a bold assumption for the sake of argument. Let us assume for the sake of argument that intelligent agents do NOT have free will . . . ” (Emphases added.)]
This challenge brings to a sharp focus the foundational issue of counter-flow, constructive work by designing, self-moved initiating, purposing agents as a key concept and explanatory term in the theory of intelligent design. For instance, we may see from leading ID researcher, William Dembski’s No Free Lunch:
. . .[From commonplace experience and observation, we may see that:] (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi. HT: ENV.) [Emphases and explanatory parenthesis added.]
This is of course, directly based on and aptly summarises our routine experience and observation of designers in action.
For, designers routinely purpose, plan and carry out constructive work directly or through surrogates (which may be other agents, or automated, programmed machines). Such work often produces functionally specific, complex organisation and associated information [FSCO/I; a new descriptive abbreviation that brings the organised components and the link to FSCI (as was highlighted by Wicken in 1979) into central focus].
ID thinkers argue, in turn, that FSCO/I is an empirically reliable sign pointing to intentionally and intelligently directed configuration — i.e. design — as the signified cause.
And, many such thinkers further argue that:
if, P: one is not sufficiently free in thought and action to sometimes actually and truly decide by reason and responsibility (as opposed to: simply playing out the subtle programming of blind chance and necessity mediated through nature, nurture and manipulative indoctrination)
then, Q: the whole project of rational investigation of our world based on observed evidence and reason — i.e. science (including AI) — collapses in self-referential absurdity.
But, we now need to show that . . .
More subtly — through the question of “counterflow,” i.e. constructive work — the issue AIG raised first surfaces questions on the thermodynamics of energy conversion devices, the link of entropy to information, the way that open systems increase local organisation, and the underlying origin of energy conversion devices that exhibit FSCO/I, especially those in biological organisms.
This issue has been on the table since the very first ID technical book, The Mystery of Life’s Origin [TMLO], by Thaxton, Bradley and Olsen [TBO], in 1984. For, these authors noted as they closed their Ch 7, on the basic thermodynamics of living systems, that:
While the maintenance of living systems is easily rationalized in terms of thermodynamics, the origin of such living systems is quite another matter. Though the earth is open to energy flow from the sun, the means of converting this energy into the necessary work to build up living systems from simple precursors remains at present unspecified (see equation 7-17). The “evolution” from biomonomers to fully functioning cells is the issue. Can one make the incredible jump in energy and organization from raw material and raw energy, apart from some means of directing the energy flow through the system? In Chapters 8 and 9 we will consider this question, limiting our discussion to two small but crucial steps in the proposed evolutionary scheme, namely, the formation of protein and DNA from their precursors.
It is widely agreed that both protein and DNA are essential for living systems and indispensable components of every living cell today.11 Yet they are only produced by living cells. Both types of molecules are much more energy and information rich than the biomonomers from which they form. Can one reasonably predict their occurrence given the necessary biomonomers and an energy source? Has this been verified experimentally? These questions will be considered . . . [Emphasis added. Cf summary in the peer-reviewed journal of the American Scientific Affiliation, “Thermodynamics and the Origin of Life,” in Perspectives on Science and Christian Faith 40 (June 1988): 72-83, pardon the poor quality of the scan. NB: as the journal’s online issues will show, this is not necessarily a “friendly audience” for design thinkers.]
Let us take this up in steps:
1 –> “Counterflow” generally speaks of going opposite to “time’s arrow” [a classic metaphor for the degradation impact of the 2nd law of thermodynamics], by performing constructive work.
2 –> That is, by in effect harnessing an energy-conversion device, a local increase in order — indeed, in organisation — can be created; according to a pattern, blueprint, plan, or at least an intention. As we may illustrate:
Fig. A: Energy flows and work. The joint action of the first and second laws of thermodynamics shows how a heat engine/energy converter may only partly convert imported energy (which must be in an appropriate form fitted to the device) into work. Specifically, as part (b) shows, increment of heat flow d’Qi from heat source A partly goes into increase of internal energy of device B, dEb, partly into shaft work dW, and partly into exhausted heat increment d’Qo that ends up in heat sink D.
(NB: Under the second law, at each interface where heat flows, the increment in entropy dS ≥ d’Q_rev/T, T being the relevant absolute temperature. In part (a), the loss of heat from A causes B (at a lower temperature) to gain heat; A’s loss of heat reduces its entropy, but since B is at a lower temperature, its rise in entropy will be greater, so the entropy of the universe as a whole will rise, when the two are netted off.)
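The entropy bookkeeping just described can be tried numerically. A minimal Python sketch (the temperatures and heat quantity are arbitrary illustrative values, not data from the figure): a heat increment Q leaves hot body A and enters cooler body B; A’s entropy falls, B’s rises by more, and the net change is positive, as the second law requires:

```python
# Entropy bookkeeping for a heat increment Q passing from hot body A to cooler body B.
# Temperatures and Q are arbitrary illustrative values (assumptions, not data).

T_A = 600.0   # K, hot source
T_B = 300.0   # K, cooler receiver
Q = 150.0     # J, small heat increment transferred A -> B

dS_A = -Q / T_A           # A loses entropy: -0.25 J/K
dS_B = +Q / T_B           # B gains entropy: +0.50 J/K
dS_universe = dS_A + dS_B

print(f"dS_A = {dS_A:+.3f} J/K, dS_B = {dS_B:+.3f} J/K, net = {dS_universe:+.3f} J/K")
# The net is positive whenever T_B < T_A: the universe's entropy rises.
```

The sign pattern, not the particular numbers, is the point: the net is positive for any T_B below T_A.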
3 –> As fig. A shows, open systems can indeed readily — but, alas, temporarily — increase local organisation by importing energy from a “source” and doing the right kind of work. But, generally only in a context of guiding information based on an intent or program, or its own functional organisation, and at the expense of exhausting compensating disorder to some “sink” or other. (NB: here, something like a timing belt and set of cams is a program.)
4 –> Heat — in short, energy moving between bodies due to temperature difference, by radiation, convection or conduction — cannot be wholly converted to work. (Here, the radiant energy flowing out from our sun’s surface at some 6,000 degrees Celsius to earth at some 15 degrees Celsius, on average, is a form of heat.)
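The Carnot bound makes this limit concrete: even an ideal engine converts at most a fraction 1 − T_cold/T_hot of imported heat into work (absolute temperatures). A short, hedged Python sketch using the sun/earth figures from the text, converted to kelvins:

```python
# Carnot limit: even an ideal engine converts only a fraction 1 - Tc/Th of
# imported heat into work; the remainder must be exhausted to the sink.

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of heat convertible to work between two reservoirs."""
    return 1.0 - t_cold_k / t_hot_k

T_SUN = 6000.0 + 273.15    # ~6,000 deg C solar surface, in kelvins
T_EARTH = 15.0 + 273.15    # ~15 deg C mean surface temperature, in kelvins

eta = carnot_efficiency(T_SUN, T_EARTH)
print(f"Ideal efficiency: {eta:.3f}")  # strictly below 1: heat is never fully converted
```

Even with so large a temperature difference, the ideal fraction stays below unity; real converters do far worse.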
5 –> Physically, by definition: work is done when applied forces impart motion along their lines of action to their points of application, e.g. when we lift a heavy box to put it on a shelf, we do work. For force F, and distance along line of motion dx, the work is:
dW = F*dx, . . . where, strictly, * denotes a “dot product”
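A quick numerical illustration of the dot-product definition (a sketch with made-up numbers): lifting a 10 kg box through 1.5 m, only the force component along the motion does work:

```python
# Work as a dot product: W = F . dx, summing force components times displacements.

def work(force, displacement):
    """Dot product of a force vector (N) and a displacement vector (m) -> joules."""
    return sum(f * d for f, d in zip(force, displacement))

g = 9.81                       # m/s^2
F_lift = (0.0, 10.0 * g)       # upward force to lift a 10 kg box, N
dx_up = (0.0, 1.5)             # 1.5 m straight up

print(work(F_lift, dx_up))     # 147.15 J of lifting work

# A horizontal displacement does no work against this purely vertical force:
print(work(F_lift, (2.0, 0.0)))  # 0.0
```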
6 –> But, that definition does not say anything about whether or not the work is constructive — a tornado ripping off a roof and flying its parts for a mile to land elsewhere has done physical work, but not constructive work.
(Side-bar, constructive work is closely connected to the sort we get paid for: if your work is constructive, desirable and affordable, you get paid for it. [Hence, the connexion between energy use at a given general level of technology and the level of economic activity and national income.])
7 –> Similarly, it says nothing about the origin of the energy conversion device.
8 –> When that device itself manifests functionally specific, complex organisation and associated information — FSCO/I (e.g. a gas engine-generator set or a solar PV panel, battery and wind turbine set, as opposed to, e.g. the natural law-dominated order exhibited by tornadoes or hurricanes as vortexes), we have good reason to infer that the conversion device was designed.
(Side-bar: Now, there is arguably a link between increased information and reduction in degrees of microscopic freedom of distributing energy and mass. Where, entropy is best understood as a logarithmic measure of the number of ways energy and mass can be distributed under a given set of macro-level constraints like pressure, temperature, magnetic field, etc.:
S = k ln W, k being Boltzmann’s constant and W the number of “ways.”
Jaynes therefore observed, aptly [but somewhat controversially]: “The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its [macro-level observable] thermodynamic state. This is a perfectly ‘objective’ quantity . . . There is no reason why it cannot be measured in the laboratory.” [Cited, Harry Robertson, Statistical Thermophysics, Prentice Hall, 1993, p. 36.]
This connects fairly directly to the information-as-negentropy concept of Brillouin and Szilard, but that is not our focus here, which is instead on the credible source/cause of energy conversion devices exhibiting FSCO/I. As this thought experiment shows [cf. TMLO chs 8 & 9], the correct assembly of such a device from microscopic components scattered at random in a vat or a pond would indeed drastically reduce entropy and increase functionality [which would define an observable functional state], but the basic message is that since the scattered microstates so overwhelm the clumped and, even more so, the functional ones, it is maximally unlikely that such assembly would ever happen spontaneously. Nor would heating up the pond or striking it with lightning or the like be likely to help matters.
Just as, we normally observe an ink spot dropped in a vat diffusing throughout the vat, not collecting back together again.
In short, to produce complex, specific organisation to achieve function, the most credible path is to assemble co-ordinated, well-matched parts according to a known good plan.)
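The Boltzmann relation in the side-bar can be exercised numerically. A minimal Python sketch (the count of two-state components is an arbitrary illustrative assumption): fixing one such component halves the number of accessible microstates, reducing entropy by exactly k ln 2, the thermodynamic cost of one bit of constraint:

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K (exact in the 2019 SI)

def boltzmann_entropy(ways: float) -> float:
    """S = k ln W for W accessible microstates."""
    return k_B * math.log(ways)

# N independent two-state components have W = 2**N microstates, so S = N * k ln 2.
# Constraining ("organising") one component halves W:
N = 100
S_free = boltzmann_entropy(2.0 ** N)
S_constrained = boltzmann_entropy(2.0 ** (N - 1))

print(S_free - S_constrained)  # k * ln 2: the entropy cost of one bit of constraint
```

This is the quantitative face of the link asserted above between added information (constraint) and reduced microscopic freedom.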
9 –> The reasonableness of the inference from observing a high-FSCO/I energy converter to its having been designed would be sharply multiplied when the device in question is part of a von Neumann self-replicating automaton [vNSR]:
Fig. B: A concept sketch of the von Neumann self-replicator [vNSR], in the form of a “clanking replicator”
10 –> Here, we see a machine that not only functions on its own behalf but has the ADDITIONAL — and very important — capacity of self-replication based on stored specifications, which requires:
(2) associated “metabolic” machines carrying out activities that, as part of their function, can provide required specific materials/parts and forms of energy for the replication facility, by using the generic resources in the surrounding environment.
11 –> Also, parts (ii), (iii) and (iv) are individually necessary for, and together jointly sufficient to, implement a self-replicating machine with an integral von Neumann universal constructor.
12 –> That is, we see here an irreducibly complex set of core components that must all be present in a properly organised fashion for a successful self-replicating machine to exist. [Take just one core part out, and self-replicating functionality ceases: the self-replicating machine is irreducibly complex (IC).]
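The all-core-parts-at-once claim can be expressed as a toy model (the part names below are hypothetical placeholders, not claims about any real machine): function holds only when every core part is present, and removing any single part abolishes it:

```python
# Toy model of irreducible complexity: function requires ALL core parts at once.
# Part names are illustrative placeholders, not a model of any actual replicator.

CORE_PARTS = {"code_store", "tape_reader", "constructor", "metabolic_supply"}

def self_replicates(parts: set) -> bool:
    """True only if every core part is present (set inclusion as 'proper organisation')."""
    return CORE_PARTS <= parts

assert self_replicates(CORE_PARTS)

# Removing any one core part abolishes the function:
for part in CORE_PARTS:
    assert not self_replicates(CORE_PARTS - {part})

print("Function requires the full, jointly present core set.")
```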
13 –> This irreducible complexity is compounded by the requirement (i) for codes, requiring organised symbols and rules to specify both steps to take and formats for storing information, and (v) for appropriate material resources and energy sources.
14 –> Immediately, we are looking at islands of organised function for both the machinery and the information in the wider sea of possible (but mostly non-functional) configurations.
15 –> In short, outside such functionally specific — thus, isolated — information-rich hot (or, “target”) zones, want of correct components and/or of proper organisation and/or co-ordination will block function from emerging or being sustained across time from generation to generation.
16 –> So, we may conclude: once the set of possible configurations of relevant parts is large enough and the islands of function are credibly sufficiently specific/isolated, it is unreasonable to expect such function to arise from chance, or from chance circumstances driving blind natural forces under the known laws of nature.
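A rough back-of-envelope in the spirit of this point (both figures are assumptions: the 1,000-bit threshold is the one used in this series, and ~10^150 is the upper bound on available physical events commonly cited in the design literature):

```python
import math

# Back-of-envelope: size of a 1,000-bit configuration space vs. an assumed
# ~10^150 upper bound on physical events available in the observed cosmos
# (a figure used in the design literature; taken here as an assumption).

bits = 1000
log10_configs = bits * math.log10(2)   # log10 of 2**1000
log10_resources = 150.0

print(f"2^{bits} ~ 10^{log10_configs:.0f} configurations")
print(f"Shortfall: ~10^{log10_configs - log10_resources:.0f} "
      "configurations per available search event")
```

On these assumptions the space outruns the available events by a factor of roughly 10^151, which is the arithmetic behind calling the functional zones “islands.”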
17 –> As a relevant historical footnote, the much despised and derided William Paley actually saw much of this in his Natural Theology, ch 2, where he extended his analogy of the watch to the case of a watch with the additional capacity to self-replicate.
18 –> So far, the sub-argument has been on how FSCO/I, especially in a context of symbolic digital codes and algorithms, credibly points to design as its best explanation. But as Figs. C and D just below will show, our reasoning on the vNSR is directly relevant to the case of the living cell:
Fig. C: The protein synthesis process in the living cell, showing the source of messenger RNA, its transmission to the cytoplasm, and its use as a digital coded tape to produce proteins [Courtesy Wikimedia, under GNU. (Also, cf a medically oriented survey here.)]
Fig. D: A “close-up” of the Ribosome in action during protein translation, showing the 3-letter codons fitting tRNA anticodons in the A and P sites; with the tRNA’s serving as transporters of successive specified amino acids and as position-arm devices with tool-tips that “click” the successive amino acids [AA’s] into position until a stop codon triggers release. [Courtesy Wikimedia under GNU.]
Clay animation video [added Dec 5, 2011]:
More detailed animation [added Dec 5, 2011]:
Fig D.1: Videos.
19 –> Thus, we not only see the relevance of the vNSR to the living cell, but we see how the metabolic and self-replicating facilities of the living cell deeply embed codes, step-by-step execution of instructions to achieve a functional product, and an astonishing incidence of FSCO/I. This justifies the inference, on best, empirically based explanation, that the living cell is an artifact of design.
20 –> And, on our abundant experience and observation, the best explanation for a design is a designer. Such an inference from reliable sign to its signified would still obtain simply on induction, whether or not the designer is in fact the possessor of that elusive property called free will. (Which is why Mr Arrington argued in the thread linked above by laying this vexed issue to one side for the sake of moving his particular argument forward.)
However, the third part of the task still remains: why do design thinkers often hold that a designer is best understood as a self-moved, initiating agent cause?
16 Replies to “ID Foundations, 2: Counterflow, open systems, FSCO/I and self-moved agents in action”
Pretzels, anyone? This might be a good time to go back and reread Phillip Johnson. Very carefully. Strategy drives tactics, and some strategies are clearly more effective than others.
Thanks for your thought; I appreciate that the above OP (which is intended for reference and foundational purposes) is involved.
However, it is also responding to what are at root a technical series of counter-arguments to the design inference, backed up by philosophical issues and challenges that go back at least 2,300 years.
As such, pardon, but I believe the above is a legitimate part of the project of design theory — one more piece of the puzzle.
I hope that this OP will therefore serve as a point of reference for onward debates in other, more popular level threads.
GEM of TKI
PS: Sometimes, having the technical and tactical underpinnings is what enables a strategy to work. A classic example is the German strategy in 1918, where the whirlwind bombardment and infiltration storm-trooper tactics provided the foundation for a strategy that for the second time almost won the war for Germany. The likely margin of failure was the early deployment of those American Marines at the 2nd Marne. And, in May 1940, getting the technical tactics right did win the day for the Germans.
The division of natural vs supernatural at the term “free will” or “design agency” is convenient, dishonest semantics when used to preclude design agency as a proper explanatory force. If the agency we refer to as “free will” did not exist as meaningfully distinct from what chance & known natural forces can produce by themselves, then why does science act as if there is a distinction between “artificial” (man-made) and “natural”?
Why would there be a distinction between death by murder and death by natural causes? Isn’t murder also a natural cause, if design agency or free will is “the same thing” as any other physical causation?
Materialists wish to subsume “design agency” or “free will” as something produced by chance and other, already-identified natural forces without demonstrating that it is so, or even that it is theoretically reasonable, and even though it is philosophical and rational suicide to do so.
If science defines “free will” as humans employ it all the time as “supernatural”, then they are applying supernatural techniques. If science defines free will agency as non-supernatural subsets of “chance and natural forces”, then they have no reason to deny it as an appropriate explanatory force.
Currently known natural forces and chance are insufficient to account for some empirical phenomena, such as things humans generate, that consistently and reliably show specific common characteristics – such as, FSCI well over 1000 bits, and which violate the Universal Plausibility Principle.
Science is left with two options; it must admit a supernatural force exists, or it must admit that a “natural” force exists that is as yet unaccounted for which is responsible for generating things like functioning aircraft carriers and space shuttles and the book “War and Peace”, which cannot be explained via other, currently known forces.
Just as gravity and entropy can be recognized by the manner in which they affect observable phenomena, “intentionality” or “design agency” or “free will” can also be recognized by specific and qualitative effects it has on observable phenomena.
Is there a reason why the scientific community would be against the discovery of another fundamental “natural” force that must be posited to account for what we empirically and factually observe, and which our current set of natural explanations are entirely deficient in explaining, and without which our reasoning process and science itself crumbles into irrationality, and our ability to distinguish between artifice and “what other forces and chance produces” becomes nullified, and thus personal responsibility, morality, ethics and justice become nothing more than delusions?
If nothing else, science must allow that a new fundamental property or force exists in nature, commonly referred to as intentional agency, that can produce what no other known combination of forces and chance can produce, and which justifies our reliance upon reason and science, and which allows for personal responsibility and morality beyond self-delusion.
BTW, I’m really enjoying your contributions to this site, KF. They are very compelling, amazingly well organized, and exhaustively thorough.
You will not believe this one, re your:
Let’s excerpt the closing summation of Clarence Darrow — he of the Scopes Trial a short while after this [Bryan had intended to call the following up, but the disgusted judge abruptly cut off the trial] — at the Loeb-Leopold Nietzschean murder trial:
>> . . . They [Loeb and Leopold] wanted to commit a perfect crime . . . . Do you mean to tell me that Dickie Loeb had any more to do with his making than any other product of heredity that is born upon the earth? . . . .
He grew up in this way. He became enamored of the philosophy of Nietzsche. Your Honor, I have read almost everything that Nietzsche ever wrote. He was a man of a wonderful intellect; the most original philosopher of the last century. Nietzsche believed that some time the superman would be born, that evolution was working toward the superman. He wrote one book, Beyond Good and Evil, which was a criticism of all moral codes as the world understands them; a treatise holding that the intelligent man is beyond good and evil, that the laws for good and the laws for evil do not apply to those who approach the superman. [Shades of Plato’s critique of evolutionary materialism in The Laws, Bk X . . . ] He wrote on the will to power. Nathan Leopold is not the only boy who has read Nietzsche. He may be the only one who was influenced in the way that he was influenced . . . >>
This last claim was in fact patently false, as Bryan, in his c. 1921 The Menace of Darwinism, had written in warning to the then largely Christian public of America.
Pardon the painfully harsh words Bryan felt compelled to communicate to his nation and his generation, in warning of what was to come, based on what had already begun to happen:
That, sadly, is what amorality, stripped of genteel habits, really means.
And, BTW, here is what Bryan intended but did not get the chance to say in his closing summation in Dayton, Tennessee.
Excerpting, and again, the reading is painful indeed; but, I am now convinced that we must take warning from the past, lest we repeat it:
Hard reading, and at a time of a clash of Titans.
But, when we debate the validity of the choosing will, and the resulting responsible mind, that is what is at stake.
So, do pardon my taking the step by step, semi-technical route, for just a short while, to lay the base for the response we must make.
At least — and again, pardon words that may wound as they lance home in the abscess, they are meant to help us to heal — if both science and civilisation are to be kept from sliding off the cliff.
We need to think, very soberly, about where we are, and what fire we are playing with.
GEM of TKI
I’d argue that the German army failed to execute the Schlieffen plan properly in WWI. The northern arm failed to swing all the way north and west of Paris, and the southern arm failed to retreat into Germany to suck the French army in. Had the northern army especially followed the plan, there would have been no race to the sea because the Germans would have already won it.
But more on topic, I know in the past Dembski has recoiled from comparing information and entropy. I’d like to see what he says about this.
On re-looking above, I have said remarkably little about entropy proper, I may have to add a remark or two. (There is a whole informational approach to thermodynamics.)
And the offensive in question was the 2nd major German push, of March 1918 on, not the first one in 1914, which was also stopped at the last major river before Paris, the Marne.
Back in 1914, the margin of failure was the month of effort and manpower fighting the Belgians, with the British coming up too.
GEM of TKI
My bad about 1918.
I don’t doubt that the BEF fought well, but the intent of the Schlieffen plan was to have the northern arm swoop around to the north and west of Paris before turning back. The southern arm was supposed to fake a retreat, drawing most of the French army into Germany. Then the northern arm would move in and attack the French from behind.
Instead the southern arm actually advanced into France, pushing the French army back towards Paris. Then the northern arm swung south too early. This turned what was supposed to be an envelopment into a frontal assault. Considering the superiority of the German army, made obvious by the fact that the portion which was supposed to retreat actually advanced almost by accident, I think the plan would have worked if executed properly.
Here is what was supposed to happen:
You see the northern arm going around Paris and the French actually advancing into Germany.
We are a bit off topic, but . . .
You are still discussing 1914, where the battle of the frontiers and the seizure of Liege using Skoda 305 mm and Krupp 42 cm mortars were initially decisive, but the distraction of the Belgians flooding and retreating to Antwerp (and later the Tannenberg–Masurian Lakes episode in the East) cost the Germans the time and forces they needed.
A gap opened and the French and British spotted it by air, and sallied into it; the Germans recoiled to the Aisne, on orders of a staff colonel sent to the front. A few dozen more miles and the last E/W railroad would have been cut, breaking France’s back — not even counting Paris. Then the race to the sea, and trench lines were locked in, for four years of siege warfare.
1914 + 4 = 1918.
In early 1918, having knocked out one eastern ally after another year by year [and having bled the French in 1916 and, by blunting the Nivelle offensives in 1917, triggered mutinies], culminating in Russia in 1917, the Germans had a temporary advantage until the Americans could be deployed. That March, they struck, and drove several wedges into the Allied lines. The last line before Paris was again the Marne.
Chemin des Dames, 8,000 US Marines.
UPDATE: I have added some adjustments in 3 – 9, and a sidebar on entropy and information at point 8. This should make it clear that while there is a relationship between entropy and information, the pivotal issue is the credible source of the FSCO/I in an energy converter. The most credible source for that is a designer, whether the device is micro- or macro- scale. GEM of TKI
OT: This breakthrough is just plain cool!
Physicists describe method to observe timelike entanglement – January 24, 2011
Excerpt: In “ordinary” quantum entanglement, two particles possess properties that are inherently linked with each other, even though the particles may be spatially separated by a large distance. Now, physicists S. Jay Olson and Timothy C. Ralph from the University of Queensland have shown that it’s possible to create entanglement between regions of spacetime that are separated in time but not in space, and then to convert the timelike entanglement into normal spacelike entanglement. They also discuss the possibility of using this timelike entanglement from the quantum vacuum for a process they call “teleportation in time.”
It should be noted that this experiment solidly dots the i’s and crosses the t’s insofar as demonstrating that not only is ‘information’ transcendent of space but ‘information’ is also transcendent of time, with the added bonus of demonstrating dominion of matter/material regardless of the space-time constraints that matter/material is itself subject to!!!
Quantum entanglement is an interesting field, with active research, that cuts across our usual experience/expectations of the world.
This brings to bear Bell’s inequality theorem of 1964, and the issue of local realism/hidden variables and Einstein’s concerns on “spooky” effectively instant action at a distance:
>> In theoretical physics, Bell’s theorem (AKA Bell’s inequality) is a no-go theorem, loosely stating that:
it indicates that every quantum theory must violate either locality or counterfactual definiteness. In conjunction with the experiments verifying the quantum mechanical predictions of Bell-type systems, Bell’s theorem demonstrates that certain quantum effects travel faster than light and therefore restricts the class of tenable hidden variable theories to the nonlocal [thus, we may say “transcendent”] variety . . . .
As in the Einstein–Podolsky–Rosen (EPR) paradox, Bell considered an experiment in which a source produces pairs of correlated particles. For example, a pair of particles may be produced in a Bell state so that if the spins are measured along the same axes they are certain to produce identical results. The particles are then sent to two distant observers: Alice and Bob. In each trial, each of the observers independently chooses to measure the spin of their respective particle along a particular axis [around the full circle], and each measurement yields a result of either spin-up (+1) or spin-down (-1). Whether or not Alice and Bob obtain the same result depends on the relationship between the orientations of the two spin measurements, and in general is subject to some uncertainty. The classical incarnation of Bell’s theorem is derived from the statistical properties observed over many runs of this experiment.
Mathematically the correlation between results is represented by their product (thus taking on values of ±1 for a single run). While measuring the spin of these entangled particles along the same axis always results in identical (perfectly correlated) results, measurements along perpendicular directions have a 50% chance of matching (uncorrelated) . . . .
Bell achieved his breakthrough by first assuming that a theory of local hidden variables could reproduce these results. Without making any assumptions about the specific form of the theory beyond basic consistency requirements, he was able to derive an inequality that was clearly at odds with the result described above, which is both predicted by quantum mechanics and observed experimentally. Thus, Bell’s theorem ruled out the idea of local realism as a viable interpretation of quantum mechanics, though it still leaves the door open for non-local realism [fancy way of saying that in effect one way to view all this is that our space-time is in effect connected through a transcendent — hence Einstein’s “spooky” — realm that allows for effective supra-light connexions].
Over the years, Bell’s theorem has undergone a wide variety of experimental tests. Two possible loopholes in the original argument have been proposed, the detection loophole and the communication loophole, each prompting a new round of experiments that re-verified the integrity of the result. To date, Bell’s theorem is supported by an overwhelming body of evidence and is treated as a fundamental principle of physics in mainstream quantum mechanics textbooks. Still, no principle of physics can ever be absolutely beyond question, and there are some people who still do not accept the theorem’s validity . . . .
In QM, predictions were formulated in terms of probabilities — for example, the probability that an electron might be detected in a particular region of space, or the probability that it would have spin up or down. The idea persisted, however, that the electron in fact has a definite position and spin, and that QM’s weakness was its inability to predict those values precisely. The possibility remained that some yet unknown, but more powerful theory, such as a hidden variables theory, might be able to predict those quantities exactly, while at the same time also being in complete agreement with the probabilistic answers given by QM. If a hidden variables theory were correct, the hidden variables were not described by QM, and thus QM would be an incomplete theory.
The desire for a local realist theory was based on two assumptions: realism (measurement outcomes reflect definite values that exist prior to observation) and locality (no influence can propagate faster than light).
In the formalization of local realism used by Bell, the predictions of the theory result from the application of classical probability theory to an underlying parameter space. By a simple argument based on classical probability, he then showed that correlations between measurements are bounded in a way that is violated by QM . . . .
Bell’s inequalities are tested by “coincidence counts” from a Bell test experiment such as the optical one shown in the diagram. Pairs of particles are emitted as a result of a quantum process, analysed with respect to some key property such as polarisation direction, then detected. The settings (orientations) of the analysers are selected by the experimenter.
Bell test experiments to date overwhelmingly violate Bell’s inequality. Indeed, a table of Bell test experiments performed prior to 1986 is given in 4.5 of Redhead, 1987. Of the thirteen experiments listed, only two reached results contradictory to quantum mechanics; moreover, according to the same source, when the experiments were repeated, “the discrepancies with QM could not be reproduced”.
Nevertheless, the issue is not conclusively settled. According to Shimony’s 2004 Stanford Encyclopedia overview article: . . . .
Because detectors don’t detect a large fraction of all photons, Clauser and Horne recognized that testing Bell’s inequality requires some extra assumptions. They introduced the No Enhancement Hypothesis (NEH): interposing a polarizer between the source and a detector can never increase the probability of detection.
Given this assumption, there is a Bell inequality between the coincidence rates with polarizers and coincidence rates without polarizers.
The experiment was performed by Freedman and Clauser, who found that Bell’s inequality was violated, so the no-enhancement hypothesis cannot be true in a local hidden variables model. The Freedman-Clauser experiment reveals that local hidden variables imply the new phenomenon of signal enhancement.
This is perhaps not surprising, as it is known that adding noise to data can, in the presence of a threshold, help reveal hidden signals (this property is known as stochastic resonance). One cannot conclude that this is the only local-realist alternative to Quantum Optics, but it does show that the word loophole is biased. Moreover, the analysis leads us to recognize that the Bell-inequality experiments, rather than showing a breakdown of realism or locality, are capable of revealing important new phenomena . . . .
Most advocates of the hidden variables idea believe that experiments have ruled out local hidden variables. They are ready to give up locality, explaining the violation of Bell’s inequality by means of a “non-local” hidden variable theory, in which the particles exchange information about their states. This is the basis of the Bohm interpretation of quantum mechanics, which requires that all particles in the universe be able to instantaneously exchange information with all others. A recent experiment ruled out a large class of non-Bohmian “non-local” hidden variable theories.
If the hidden variables can communicate with each other faster than light, Bell’s inequality can easily be violated. Once one particle is measured, it can communicate the necessary correlations to the other particle. Since in relativity the notion of simultaneity is not absolute, this is unattractive. One idea is to replace instantaneous communication with a process which travels backwards in time along the past light cone. This is the idea behind a transactional interpretation of quantum mechanics, which interprets the statistical emergence of a quantum history as a gradual coming to agreement between histories that go both forward and backward in time.
A few advocates of deterministic models have not given up on local hidden variables. E.g., Gerard ‘t Hooft has argued that the superdeterminism loophole cannot be dismissed.
The quantum mechanical wavefunction can also provide a local realistic description, if the wavefunction values are interpreted as the fundamental quantities that describe reality. Such an approach is called a many-worlds interpretation of quantum mechanics. In this view, two distant observers both split into superpositions when measuring a spin. The Bell inequality violations are no longer counterintuitive, because it is not clear which copy of the observer B observer A will see when going to compare notes. [In effect we are here looking at a quasi-infinity of worlds at every entangled event . . . which I think raises Occam’s ghost, sharp and slashing razor in hand] If reality includes all the different outcomes, locality in physical space (not outcome space) places no restrictions on how the split observers can meet up.
This implies that there is a subtle assumption in the argument that realism is incompatible with quantum mechanics and locality. The assumption, in its weakest form, is called counterfactual definiteness. This states that if the results of an experiment are always observed to be definite, there is a quantity which determines what the outcome would have been even if you don’t do the experiment.
Many worlds interpretations are not only counterfactually indefinite, they are factually indefinite. The results of all experiments, even ones that have been performed, are not uniquely determined . . . >>
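For those who want to see the numbers behind the excerpt, the contrast between the classical bound and the quantum prediction can be sketched in a few lines of code. This is a minimal illustration of the CHSH form of Bell’s inequality (the function names and the choice of analyser settings are mine, for illustration only): a simple deterministic local hidden variable model stays at or below |S| = 2, while the singlet-state quantum correlation E(a, b) = −cos(a − b) reaches |S| = 2√2 ≈ 2.83.

```python
import math
import random

def chsh(E, a, a2, b, b2):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

def E_quantum(a, b):
    """Quantum prediction for the spin singlet state."""
    return -math.cos(a - b)

def E_lhv(a, b, n=200_000, seed=1):
    """A simple deterministic local hidden variable model: each pair
    carries a hidden angle lam, and each side's outcome depends only
    on lam and its own local analyser setting."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        lam = rng.uniform(0.0, 2.0 * math.pi)
        A = 1 if math.cos(lam - a) >= 0 else -1
        B = -1 if math.cos(lam - b) >= 0 else 1
        total += A * B
    return total / n

# Standard CHSH analyser settings (radians).
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4

S_q = chsh(E_quantum, a, a2, b, b2)
S_l = chsh(E_lhv, a, a2, b, b2)

print(f"quantum |S| = {abs(S_q):.4f}")  # 2*sqrt(2), about 2.8284
print(f"LHV     |S| = {abs(S_l):.4f}")  # about 2, never above (up to Monte Carlo noise)
```

Notice that this particular local model actually saturates the bound at these settings, which is the best any local hidden variable account can do; the quantum prediction, confirmed by the Bell test experiments, sits well beyond it.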
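The loophole discussion in the excerpt can be illustrated the same way. In the toy model below (my own construction, not the actual Clauser-Horne or Freedman-Clauser setup), the no-enhancement assumption is deliberately dropped: the probability of detection depends on both the hidden angle and the local analyser setting. Conditioning on coincidences then lets a purely local model exceed the CHSH bound of 2 on the post-selected data, which is why assumptions such as no-enhancement and fair sampling matter to the interpretation of the experiments.

```python
import math
import random

def E_coincidence(a, b, n=400_000, seed=7):
    """Correlation estimated only from 'coincidence' events in which
    both sides register a detection. Detection probability depends on
    the hidden angle lam AND the local setting (no-enhancement dropped)."""
    rng = random.Random(seed)
    num = 0
    coincidences = 0
    for _ in range(n):
        lam = rng.uniform(0.0, 2.0 * math.pi)
        A = 1 if math.cos(lam - a) >= 0 else -1
        B = -1 if math.cos(lam - b) >= 0 else 1
        # Setting-dependent detection efficiency: the 'enhancement'.
        if (rng.random() < abs(math.cos(lam - a))
                and rng.random() < abs(math.cos(lam - b))):
            num += A * B
            coincidences += 1
    return num / coincidences

# Same standard CHSH settings as before (radians).
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = (E_coincidence(a, b) - E_coincidence(a, b2)
     + E_coincidence(a2, b) + E_coincidence(a2, b2))
print(f"post-selected |S| = {abs(S):.3f}")  # clearly above 2, despite strict locality
```

The point is not that real experiments work this way, but that discarding undetected pairs is only harmless if detection is independent of the hidden state — exactly what the loophole-closing experiments mentioned above were designed to establish.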
Curiouser and curiouser, as the debates and tests go on!
But, bottom line, BA has a serious point in highlighting nonlocality and, in effect, an information linkage through a transcendent realm beyond our commonly experienced world.
GEM of TKI
F/N: Pardon a remark on the relevance of this post to the ID project.
Here, I have just used this ID Foundations 2 post as a reference foundation that applies to the context of genetic determinism and the myth of genes for this and that, and that also addresses the issue of our being self-moved agents, discussed on p. 2 of the post above.
GEM of TKI