Uncommon Descent | Serving The Intelligent Design Community

HeKS strikes gold again, or, why strong evidence of design is so often stoutly resisted or dismissed

Categories: Atheism, ID Foundations, rhetoric, specified complexity

New UD contributor HeKS notes:

The evidence of purposeful design [–> in the cosmos and world of life]  is overwhelming on any objective analysis, but due to Methodological Naturalism it is claimed to be merely an appearance of purposeful design, an illusion, while it is claimed that naturalistic processes are sufficient to achieve this appearance of purposeful design, though none have ever been demonstrated to be up to the task. They are claimed to be up to the task only because they are the only plausible sounding naturalistic explanations available.

He goes on to add:

The argument for ID is an abductive argument. An abductive argument basically takes the form: “We observe an effect, x is causally adequate to explain the effect and is the most common [–> let’s adjust: per a good reason, the most plausible] cause of the effect, therefore x is currently the best explanation of the effect.” This is called an inference to the best explanation.

When it comes to ID in particular, the form of the abductive argument is even stronger. It takes the form: “We observe an effect, x is uniquely causally adequate to explain the effect as, presently, no other known explanation is causally adequate to explain the effect, therefore x is currently the best explanation of the effect.”

Abductive arguments [–> and broader inductive arguments] are always held tentatively because they cannot be as certain as deductive arguments [–> rooted in known true premises and using correct deductions step by step], but they are a perfectly valid form of argumentation and their conclusions are legitimate as long as the premises remain true, because they are a statement about the current state of our knowledge and the evidence rather than deductive statements about reality.

Abductive reasoning is, in fact, the standard form of reasoning on matters of historical science, whereas inductive reasoning is used on matters in the present and future.

And, on fair and well-warranted comment, design is the only actually observed cause of functionally specific complex organisation and associated information (FSCO/I), and the only cause plausible on needle-in-a-haystack search grounds; FSCO/I is abundantly common in the world of life and in the physics of the cosmos. Summing up diagrammatically:

[Figure: infographic defining FSCO/I / complex specified information]
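To make the needle-in-a-haystack point concrete, here is a minimal back-of-envelope sketch (in Python) of the commonly cited 500-bit threshold. The atom count, time span and event rate are the round figures usually quoted in these threads; they are assumptions for illustration only, not measurements.

```python
# Back-of-envelope sketch of the 500-bit "needle in a haystack" threshold.
# The resource figures are the round numbers typically quoted in these
# discussions; they are assumptions for illustration, not measured values.
from math import log10

config_space = 2 ** 500              # distinct configurations for 500 bits

atoms_in_solar_system = 10 ** 57     # assumed atomic resources
seconds_available = 10 ** 17         # roughly the age of the cosmos, in seconds
states_per_second = 10 ** 14         # assumed fast chemical-interaction rate per atom

max_samples = atoms_in_solar_system * seconds_available * states_per_second

print(f"Configurations for 500 bits    : ~10^{log10(config_space):.0f}")
print(f"States the resources can sample: ~10^{log10(max_samples):.0f}")
print(f"Fraction of the space sampled  : ~10^{log10(max_samples / config_space):.0f}")
```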

Similarly, we may document the inductive, inference-to-the-best-current-explanation logic of the design inference in a flow chart:

[Figure: the per-aspect design inference explanatory filter flowchart]
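For readers who prefer code to flowcharts, the following is a minimal sketch of the per-aspect decision logic the filter depicts. The function name, input fields and the 500-bit threshold are illustrative choices of this sketch, not a formal specification.

```python
# Minimal sketch of the per-aspect explanatory filter logic shown in the
# flowchart. The data structure and threshold are illustrative only.

def explanatory_filter(aspect):
    """Classify one aspect of an object or event as necessity, chance, or design."""
    if not aspect["contingent"]:
        # Low contingency: a law-like natural regularity suffices to explain it.
        return "necessity"
    if aspect["info_bits"] < 500 or not aspect["functionally_specific"]:
        # Contingent, but not both complex and specified: chance is adequate.
        return "chance"
    # Contingent, complex (>= 500 bits) and functionally specific: infer design
    # as the best current explanation.
    return "design"

# Example: a coded, functional sequence treated as one aspect of an object.
example_aspect = {"contingent": True, "info_bits": 750, "functionally_specific": True}
print(explanatory_filter(example_aspect))   # -> design
```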

Also, we may give an iconic case, the protein synthesis process (noting the functional significance of proper folding),

[Figure: overview of the protein synthesis process]

. . . especially the step where proteins are assembled in the ribosome based on the coded algorithmic information in the mRNA tape threaded through it:

[Figure: translation of mRNA into protein at the ribosome]
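To make the "coded algorithmic information" point concrete, here is a toy sketch of the codon-to-amino-acid mapping carried out during translation. Only a handful of entries from the standard genetic code are included, and the demo sequence is made up purely for illustration.

```python
# Toy sketch of translation: reading an mRNA "tape" three letters (one codon)
# at a time and mapping each codon to an amino acid. Only a few entries of the
# standard 64-codon genetic code are included here.

GENETIC_CODE_SAMPLE = {
    "AUG": "Met",   # methionine, also the start codon
    "UUU": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "GCU": "Ala",   # alanine
    "UGG": "Trp",   # tryptophan
    "UAA": "STOP",  # stop codon
}

def translate(mrna):
    """Read codons until a stop codon and return the amino acid chain."""
    chain = []
    for i in range(0, len(mrna) - 2, 3):
        residue = GENETIC_CODE_SAMPLE.get(mrna[i:i + 3], "???")
        if residue == "STOP":
            break
        chain.append(residue)
    return "-".join(chain)

print(translate("AUGUUUGGCGCUUGGUAA"))   # -> Met-Phe-Gly-Ala-Trp
```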

And, for those who need it, an animated video clip may be helpful:

[Embedded video clip]

So, instantly, we may ask: what is the only actually — and in fact routinely — observed causal source of codes, algorithms, and associated co-ordinated, organised execution machinery?

ANS: intelligently directed contingency, aka design, where there is no good reason to assume or imply that such intelligence is confined to humans.

Where also, FSCO/I, or the wider concept of Complex Specified Information, is not an incoherent mish-mash dreamed up by silly brainwashed or Machiavellian IDiots trying to subvert science and science education by smuggling in Creationism while lurking in cheap tuxedos. Instead, the key notions and the very name trace to events across the 1970s and into the early 1980s, as eminent scientists tried to come to grips with the evidence of the cell and of cosmology, as was noted in reply to a comment on the UD Weak Argument Correctives:

. . . we can see, across the 1970s, how OOL researchers not connected to design theory, Orgel (1973) and Wicken (1979), spoke on the record to highlight a key feature of the organisation of cell-based life:

ORGEL, 1973: . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [The Origins of Life (John Wiley, 1973), p. 189.]

WICKEN, 1979: ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [ –> i.e. “simple” force laws acting on objects starting from arbitrary and commonplace initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [ –> originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65.]

At the turn of the 1980s, the Nobel-equivalent prize-holding astrophysicist and lifelong agnostic Sir Fred Hoyle went on astonishing record:

Once we see that life is cosmic it is sensible to suppose that intelligence is cosmic. Now problems of order, such as the sequences of amino acids in the chains which constitute the enzymes and other proteins, are precisely the problems that become easy once a directed intelligence enters the picture, as was recognised long ago by James Clerk Maxwell in his invention of what is known in physics as the Maxwell demon. The difference between an intelligent ordering, whether of words, fruit boxes, amino acids, or the Rubik cube, and merely random shufflings can be fantastically large, even as large as a number that would fill the whole volume of Shakespeare’s plays with its zeros. So if one proceeds directly and straightforwardly in this matter, without being deflected by a fear of incurring the wrath of scientific opinion, one arrives at the conclusion that biomaterials with their amazing measure of order must be the outcome of intelligent design. No other possibility I have been able to think of in pondering this issue over quite a long time seems to me to have anything like as high a possibility of being true. [Evolution from Space (The Omni Lecture [–> Jan 12th 1982]), Enslow Publishers, 1982, pg. 28.]

Based on what I have seen, this usage of the term Intelligent Design may in fact be the historical source of the theory's name.

The same worthy is also on well-known record regarding cosmological design, in light of evident fine tuning:

From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of ¹²C to the 7.12 MeV level in ¹⁶O. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? . . . I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has “monkeyed” with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. [F. Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16]

A talk given at Caltech (of which the above seems originally to have been the concluding remarks) adds:

The big problem in biology, as I see it, is to understand the origin of the information carried by the explicit structures of biomolecules. The issue isn’t so much the rather crude fact that a protein consists of a chain of amino acids linked together in a certain way, but that the explicit ordering of the amino acids endows the chain with remarkable properties, which other orderings wouldn’t give. The case of the enzymes is well known . . . If amino acids were linked at random, there would be a vast number of arrangements that would be useless in serving the purposes of a living cell. When you consider that a typical enzyme has a chain of perhaps 200 links and that there are 20 possibilities for each link, it’s easy to see that the number of useless arrangements is enormous, more than the number of atoms in all the galaxies visible in the largest telescopes. This is for one enzyme, and there are upwards of 2000 of them, mainly serving very different purposes. So how did the situation get to where we find it to be? This is, as I see it, the biological problem – the information problem . . . .

I was constantly plagued by the thought that the number of ways in which even a single enzyme could be wrongly constructed was greater than the number of all the atoms in the universe. So try as I would, I couldn’t convince myself that even the whole universe would be sufficient to find life by random processes – by what are called the blind forces of nature . . . . By far the simplest way to arrive at the correct sequences of amino acids in the enzymes would be by thought, not by random processes . . . .

Now imagine yourself as a superintellect working through possibilities in polymer chemistry. Would you not be astonished that polymers based on the carbon atom turned out in your calculations to have the remarkable properties of the enzymes and other biomolecules? Would you not be bowled over in surprise to find that a living cell was a feasible construct? Would you not say to yourself, in whatever language supercalculating intellects use: Some supercalculating intellect must have designed the properties of the carbon atom, otherwise the chance of my finding such an atom through the blind forces of nature would be utterly minuscule. Of course you would, and if you were a sensible superintellect you would conclude that the carbon atom is a fix.

These words in the same talk must have set his audience on their ears:

I do not believe that any physicist who examined the evidence could fail to draw the inference that the laws of nuclear physics have been deliberately designed with regard to the consequences they produce within stars. [“The Universe: Past and Present Reflections.” Engineering and Science, November, 1981. pp. 8–12]
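Hoyle's enzyme arithmetic above is easy to check directly. The sketch below uses his figures of a 200-link chain with 20 options per link, and compares the result with the round figure of roughly 10^80 atoms in the observable universe, an assumption used here only for the comparison.

```python
# Checking Hoyle's enzyme arithmetic: 20 possibilities at each of 200 links.
from math import log10

arrangements = 20 ** 200      # possible sequences for a 200-residue chain
atoms_in_universe = 10 ** 80  # commonly quoted round figure (assumption)

print(f"Possible 200-link sequences : ~10^{log10(arrangements):.0f}")       # ~10^260
print(f"Atoms in observable universe: ~10^{log10(atoms_in_universe):.0f}")  # ~10^80
```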

So, then, why is the design inference so often so stoutly resisted?

LEWONTIN, 1997: . . . to put a correct view of the universe into people’s heads we must first get an incorrect view out . . . the problem is to get them to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth [–> NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting] . . . .

It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [–> another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [–> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door. [Billions and billions of demons, NYRB Jan 1997. If you imagine that the above has been “quote mined” kindly read the fuller extract and notes here on, noting the onward link to the original article.]

NSTA BOARD, 2000: The principal product of science is knowledge in the form of naturalistic concepts and the laws and theories related to those concepts [–> as in, Phil Johnson was dead on target in his retort to Lewontin, science is being radically re-defined on a foundation of a priori evolutionary materialism from hydrogen to humans] . . . .

Although no single universal step-by-step scientific method captures the complexity of doing science, a number of shared values and perspectives characterize a scientific approach to understanding nature. Among these are a demand for naturalistic explanations [–> the ideological loading now exerts censorship on science] supported by empirical evidence [–> but the evidence is never allowed to speak outside a materialistic circle so the questions are begged at the outset] that are, at least in principle, testable against the natural world [–> but the competition is only allowed to be among contestants passed by the Materialist Guardian Council] . . . .

Science, by definition, is limited to naturalistic methods and explanations and, as such, is precluded from using supernatural elements [–> in fact this imposes a strawman caricature of the alternative to a priori materialism, as was documented since Plato in The Laws, Bk X, namely natural vs artificial causal factors, that may in principle be analysed on empirical characteristics that may be observed. Once one already labels “supernatural” and implies “irrational,” huge questions are a priori begged and prejudices amounting to bigotry are excited to impose censorship which here is being institutionalised in science education by the National Science Teachers Association board of the USA.] in the production of scientific knowledge. [NSTA, Board of Directors, July 2000. Emphases added.]

MAHNER, 2011: This paper defends the view that metaphysical naturalism is a constitutive ontological principle of science in that the general empirical methods of science, such as observation, measurement and experiment, and thus the very production of empirical evidence, presuppose a no-supernature principle . . . .

Metaphysical or ontological naturalism (henceforth: ON) [“roughly” and “simply”] is the view that all that exists is our lawful spatiotemporal world. Its negation is of course supernaturalism: the view that our lawful spatiotemporal world is not all that exists because there is another non-spatiotemporal world transcending the natural one, whose inhabitants—usually considered to be intentional beings—are not subject to natural laws . . . .

ON is not part of a deductive argument in the sense that if we collected all the statements or theories of science and used them as premises, then ON would logically follow. After all, scientific theories do not explicitly talk about anything metaphysical such as the presence or absence of supernatural entities: they simply refer to natural entities and processes only. Therefore, ON rather is a tacit metaphysical supposition of science, an ontological postulate. It is part of a metascientific framework or, if preferred, of the metaparadigm of science that guides the construction and evaluation of theories, and that helps to explain why science works and succeeds in studying and explaining the world. Now this can be interpreted in a weak and a strong sense. In the weak sense, ON is only part of the metaphysical background assumptions of contemporary science as a result of historical contingency; so much so that we could replace ON by its antithesis any time, and science would still work fine. This is the view of the creationists, and, curiously, even of some philosophers of science (e.g., Monton 2009). In the strong sense, ON is essential to science; that is, if it were removed from the metaphysics of science, what we would get would no longer be a science. Conversely, inasmuch as early science accepted supernatural entities as explainers, it was not proper science yet. It is of course this strong sense that I have in mind when I say that science presupposes ON. [In, his recent Science and Education article, “The role of Metaphysical Naturalism in Science” (2011) ]

In short, there is strong evidence of ideological bias and censorship in contemporary science and science education, especially on matters of origins, reflecting the dominance of a priori evolutionary materialism.

To all such, Philip Johnson’s reply to Lewontin of November 1997 is a classic:

For scientific materialists the materialism comes first; the science comes thereafter. [Emphasis original.] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”

. . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [Emphasis added.] [The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]

Please bear this in mind as you continue to observe the debate exchanges here at UD and beyond. END

Comments
KF:
Why have you presented a case of an IMAGE, created by artifice of man ... as an example of natural generation of CSI?
Upright BiPed
October 5, 2014, 04:04 PM PDT
nightlife, If you'd like to conflate local regularity with inexorable law, in the face of incontrovertible evidence to the contrary, then that is certainly your prerogative. It remains a flawed perspective all the same.Upright BiPed
October 5, 2014, 03:58 PM PDT
HeKS:
I’m not sure how many more times we can go around this. Here is Barry’s original challenge:
Show me one example … of chance/law forces creating 500 bits of complex specified information.
You responded by saying:
Winston Ewert calculates that a pattern posted by Elizabeth Liddle has 1,068,017 bits of CSI, or 593,493 bits, or -11,836 bits, or -3,123,223 bits, depending on which chance hypothesis is used.
There are two problems here. 1) This is not exactly what Ewert said. I’ve already described more precisely what Ewert said in my previous comments and I find myself wondering if you actually read his article. If you want to take the position that Ewert himself simply doesn’t understand how CSI relates to a design inference, you are free to take that up with him, but that not what you originally claimed.
I've read Ewert's article, thanks, and we both know precisely what he said, but we apparently disagree on his meaning. With regards to me wanting "to take the position that Ewert himself simply doesn’t understand how CSI relates to a design inference", I have no idea where that came from. You correctly quoted Barry's challenge and my response, and neither have anything to do with "how CSI relates to a design inference".
They were two chance hypotheses that actually had nothing to do with the generation of the image content, so to say that they created high levels of CSI in the image, thereby answering Barry’s challenge, is literally nonsensical.
I agree completely. That's why I never said that those two hypotheses created high levels of CSI. As I said in #27, it was a volcano that created the high-CSI pattern.
It would be like testing the chance hypothesis that a bunch of cans of paint tipped over, spilled onto a canvas, and created the Mona Lisa and finding that, lo and behold, this chance hypothesis leads to an incredibly high calculation of CSI, and so Barry’s challenge has been answered. But this obviously makes no sense, because that isn’t how the Mona Lisa came to exist, so that chance hypothesis did not actually create a high degree of CSI, as it didn’t even happen in the first place.
Again, I agree completely. Barry's challenge is obviously not met by the Mona Lisa, because the Mona Lisa was not produced by nature. But the ash bands that Ewert analyzed were produced by nature. You keep pointing out that the chance hypothesis employed to calculate the CSI of the pattern is not the actual process that produced the pattern. But that is how CSI is always calculated. It's calculated with a null hypothesis, not the actual process that produced the pattern. In #41, I said that if you know of any exceptions to this, then please share. That invitation is still open. I think it would also help resolve our miscommunication/disagreement if you would answer the question I posed at the end of #44. I'll repeat it: But CSI is not defined in terms of the process that actually produced the result. If you think that it is, then why do IDists always calculate the CSI in designed artifacts with respect to a non-design hypothesis (almost always white noise, as I said in #41)? I know this is a semantically murky subject, so thanks for your patience, HeKS.R0bb
October 5, 2014, 03:54 PM PDT
NL - You do seem to come across as holding natural science to be based and operating upon an exclusive and dogmatic assumption of universally applicable naturalism - which would seem to reduce it to blind faith religious atheism (or possibly deism) - unless you really mean that the assumption of naturalism is only an initial and correctable working assumption. The problem with science-as-dogmatic-universal-and exclusive-naturalism is that such "science" is front-loaded with prior religious/metaphysical conclusions which make it impossible to let the evidence speak for itself, and impossible to reliably describe, understand and explain nature: if you have decided in advance what science may or may not discover then you are not actually doing science at all - hence IDers' rejection of MN as a valid scientific philosophy. Nevertheless (as I understand it), in scientific practice ID identifies design (within a limited range of applicability) employing MN exclusively! - it proposes a basically mathematical method of unequivocally identifying and describing purposeful design by detecting certain law-like properties in designed entities for a defined class of cases: its output is a design inference/hypothesis which can then be subject to further scientific test. Essentially ID proposes a scientific law for detecting design.Thomas2
October 5, 2014, 03:41 PM PDT
@R0bb #44 I'm not sure how many more times we can go around this. Here is Barry's original challenge:
Show me one example ... of chance/law forces creating 500 bits of complex specified information.
You responded by saying:
Winston Ewert calculates that a pattern posted by Elizabeth Liddle has 1,068,017 bits of CSI, or 593,493 bits, or -11,836 bits, or -3,123,223 bits, depending on which chance hypothesis is used.
There are two problems here. 1) This is not exactly what Ewert said. I've already described more precisely what Ewert said in my previous comments and I find myself wondering if you actually read his article. If you want to take the position that Ewert himself simply doesn't understand how CSI relates to a design inference, you are free to take that up with him, but that not what you originally claimed. 2) Barry asked for an example of a chance/law process actually creating 500 or more bits of CSI. But the two chance hypotheses Ewert covers that would lead to a very high CSI calculation were not the ones that created the image. They were two chance hypotheses that actually had nothing to do with the generation of the image content, so to say that they created high levels of CSI in the image, thereby answering Barry's challenge, is literally nonsensical. It would be like testing the chance hypothesis that a bunch of cans of paint tipped over, spilled onto a canvas, and created the Mona Lisa and finding that, lo and behold, this chance hypothesis leads to an incredibly high calculation of CSI, and so Barry's challenge has been answered. But this obviously makes no sense, because that isn't how the Mona Lisa came to exist, so that chance hypothesis did not actually create a high degree of CSI, as it didn't even happen in the first place. Consider some excerpts from Ewert:
The subject of CSI has prompted much debate, including in a recent article I wrote for ENV, "Information, Past and Present." I emphasized there that measuring CSI requires calculating probabilities. At her blog, The Skeptical Zone, writer Elizabeth Liddle has offered a challenge to CSI that seems worth considering. She presents a mystery image and ASKS FOR A CALCULATION OF CSI. The image is in gray-scale, and looks a bit like the grain in a plank of wood. Her intent is either to force an admission that SUCH A CALCULATION IS IMPOSSIBLE or to produce a false positive, detecting design where none was present. But as long as we remain in the dark about what the image actually represents, CALCULATING ITS PROBABILITY IS INDEED IMPOSSIBLE. Dembski never intended the design inference to work in the absence of understanding possible chance hypotheses for the event. Rather, the assumption is that we know enough about the object to make this determination.
If the CSI value was simply a measure of the CSI inherent in the object/image itself, it would not be impossible to calculate the CSI value of the image without knowing what it represented or what chance hypotheses could be tested against it. Ewert continues (and I'll just bold this stuff without comment):
Let's review the design inference... There are three major steps in the process: 1. Identify the relevant chance hypotheses. 2. Reject all the chance hypotheses. 3. Infer design. .... Specified complexity is used in the second of these steps. In the original version of Dembski's concept, we reject each chance hypothesis if it assigns an overwhelmingly low probability to a specified event. Under the version he presented in the essay "Specification," a chance hypothesis is rejected due to having a high level of specified complexity. .... The criterion of specified complexity is used to eliminate individual chance hypotheses. It is not, as Liddle seems to think, the complete framework of the process all by itself. It is the method by which we decide that particular causes cannot account for the existence of the object under investigation. .... Specified complexity as a quantity gives us reason to reject individual chance hypotheses. It requires careful investigation to identify the relevant chance hypotheses. This has been the consistent approach presented in Dembski's work, despite attempts to claim otherwise, or criticisms that Dembski has contradicted himself.
As I said above, if you want to go argue with Ewert that he doesn't understand how CSI factors into the design inference, you're free to do that. What I'm telling you is that Ewert does not make any statement in that article that can be cited in support of your claim that Barry's challenge has been met. I freely admit that I'm no math wizard, but in this case this is not a question of math. It's a question of reading comprehension.HeKS
October 5, 2014, 12:40 AM PDT
#43 logically_speaking
"science operates under assumption that universe operates lawfully from some front loaded foundation". Laws require a law maker, just an observation.
That's a topic for metaphysics or theology, not for natural science. By definition, the postulates (which are epistemological counterpart of the ontological element 'front loaded foundation') are assumptions that are taken for granted by natural science ('for the time being' at any point). Of course, no starting assumption is cast in stone and what is taken for granted today may be explained in the future under some more economical starting assumptions (i.e. under more succinct set of postulates). But you cannot get around the basic requirement that no matter how far the science advances, some starting assumption taken for granted must be accepted before anything can be deduced within that science. The hypothetical 'ideal theory' with empty set of initial assumptions i.e. with empty scientific statement generating algorithm, generates empty set of scientific statements. Science has moved beyond that point by now. The ontological restatement of the above epistemological requirement is that science always presumes existence of some front loaded system (universe) that plays or operates by the rules described by its postulates. The advance of science consists of reducing that 'front loading' to fewer and simpler elemental entities which can explain as much or more phenomena than before. The computational or algorithmic approach to natural science (exemplified by Wolfram's NKS, sketched in this post) examines the above process which normally unfolds implicitly, more systematically and deliberately, treating it as an object of scientific research and formalization of its own. One remarkable finding of such analysis (by Wolfram and others) is that extremely simple elemental building blocks, such as 2 state automata (states denoted as 0 and 1) with very simple rules of state change, can be universal computers (capable of computing anything that is computable by an abstract Turing machine, which is a superset of what any existent, concrete and finite, computer can compute). Hence, the most economical front loading will eventually advance (or shrink) into a form of a network of very simple elemental automata with few rules of their state/link change. While many such systems with universal computer capabilities do exist (many are also distributed self-programming universal computers), the hard part is finding one that also requires only a very simple initial state of the system. Namely, the initial state is the 'starting program' (the 'universe program', as it were) being executed by this simple front loaded distributed computer. One complication in this type of modeling is that this 'starting program' is self-modifying program i.e. its outputs are themselves the instructions that will/may get performed at later stages in the run of the 'universe program'. Ideally, one would want to find the rules of the automata network for which the initial state itself has a very low algorithmic information/complexity (i.e. a short length of a program needed to generate it), yet remain capable of computing/reproducing all observed phenomena. Otherwise, a single-minded focus on simplifying just the building blocks (network of automata), one may be merely shifting the front loading from the rules of operation of the elemental building blocks into the initial state of the system (the 'starting program' of the universe), without necessarily reducing the total amount of front loading (which consists of the rules of operation/hardware + the initial state/software).
You say, "By definition you can't have science presuming a capricious, laws violating entity interfering with the phenomena (e.g. arranging molecules into "irreducibly complex designs")". But we arrange molecules in irreducibly complex designs all the time, does that mean we violate the laws of the universe, of course not.
This is a very common misunderstanding (of science and natural laws) in the ID debates, often on both sides. The natural laws are not the entire input needed to compute or make a prediction about (or describe) phenomena or events. The natural law, such as Newton laws of motion & gravity, merely constrain the events but don't single out one that will happen. For example, while a kicked ball does move according to the Newton laws, these laws don't tell you where and when it will land or what its trajectory will be. To compute the actual behavior of the ball, you need to input into the 'physics algorithm' not just data representing the natural law but also the data representing the initial and boundary conditions (IBC) i.e. the initial velocity and direction of the ball (initial conditions), plus any forces affecting the ball during the flight (boundary conditions). Only this combined set of data or numbers yield predictions about actual events: algorithmic instructions of the law + numbers for initial conditions + numbers for boundary conditions. The latter two sets of numbers, the IBC, are not themselves a natural law, but are some numbers 'put in by hand' as it were, into the physics algorithm (but are not specified by the algorithm). The law itself merely constrains or compresses the description of events -- e.g. instead of recording/specifying all points of the ball trajectory, you just need to specify initial velocity and position (plus any ball intercepts in flight by other players), and the law algorithm computes the rest of the trajectory. Hence instead of describing events using say million of numbers for a high res trajectory points, you just need 6 numbers (3 for initial 3-D position + 3 for initial 3-D velocity). Similarly, when a biochemist arranges molecules into a complex design, he is not violating any law of chemistry or quantum physics. He is merely setting the initial and boundary conditions for the molecules, aiming to make them form some desired arrangement. That's just like a player adjusting the initial speed and direction of the kick, aiming to make the ball enter the goal. No law is violated at any point in either case. An important detail about boundary conditions (through which one can control the behavior of the system without violating any natural laws) is that they don't refer only to physical conditions on the outer surface of the system, but also physical conditions on any inner surface of the system. For example you control a car by adjusting the boundary conditions on its internal surfaces (steering wheel, gas pedal, etc). In other words, objects can be controlled via boundary conditions without grabbing or pushing or manipulating them from outside. And all that can be done while remaining perfectly within natural laws throughout the process. Hence no violation of natural laws is needed for objects to perform arbitrarily complex actions, and no one has to manipulate objects from outside or in any way that is observable from outside. This implies that the 'chief programmer of the universe' doesn't need to reach down from heavens, or in any way that anyone would notice directly from outside, in order to make molecules do something clever, such as arrange into proteins or live cells, while playing strictly by the rules (natural laws) throughout the entire construction process. 
Note also that any operations you can do with objects, such as arranging molecules in complex ways, in principle a robot can do (or generally, a program with suitable electro-mechanical transducers), including operations needed to build that 'chemist' robot, or operations needed to build this second robot that built the chemist robot,... etc. Such (potentially unlimited) hierarchy or chain of robots, each generation building the next more advanced (in relevant aspects) generation can in principle reach a stage of technology with robots that can arrange molecules in complex ways, such those constructing live cells. The fascinating insight of the algorithmic approach to natural science sketched earlier is that the starting robots which are as simple as 2 state automata connected into a network with adaptable links, can build robots not just as complex as any we can build, but as complex as any we can conceive ever building. As discussed earlier, a hierarchy of 'robots' starting with the most elemental ones at Planck scale, would pack 10^80 times more computing power (hence intelligence) in any volume of matter energy than what we can presently conceive building in that same volume using our elementary particles as building blocks (e.g. for logic gates). Further, this unimaginably powerful underlying computation (intelligence) can be operating each elementary particle, at all moments and all places, as their galactic scale robotic technology, which in turn are operating cells as their galactic scale technology, which finally operate us as their galactic scale technology. Eventually, human civilizations will build and operate their own galactic scale technologies, extending thus the harmonization process at ever larger scale. As explained above, all such control can be done while everyone at every level is playing strictly by the rules (natural laws of that level) throughout, via control of the internal boundary conditions of those objects, hence without any apparent or directly observable external manipulation of any of the objects in the hierarchy. Of course, these 'rules of operation' or 'natural laws' that different levels are playing by are not what we presently call or understand as natural laws. The latter laws capture only a few outermost coarse grained features or regularities of the much finer, more subtle patterns computed by the hierarchy of underlying computing technologies. The front loaded ground level system computing it all in full detail and the 'natural laws' by which it works, are those simple elemental automata and their simple rules operating together as adaptable networks (societies) at Planck scale. The topic of Planck scale networks and their implications for ID was described and discussed in great detail in an earlier longer thread at UD. The hyperlinked TOC of that discussion is in the second half of this post.nightlight
October 5, 2014, 12:08 AM PDT
HeKS:
If you look at Ewert’s article, the amount of calculated CSI changes based on the naturalistic explanation being considered. This could not be the case if the CSI values Ewert gives were simply a measure of the CSI present in the image itself. If it were, there would only be one CSI value.
You're under the illusion that CSI inheres in the entity alone. This is understandable since most ID proponents, including Dembski himself, often talk as if it does. But consider Dembski's current definition of specified complexity: Χ = -log2[10^120 ⋅ Φ_S(T) ⋅ P(T|H)] Χ is a function of three variables, namely T, H, and S. So CSI does not inhere in the observed instance of the pattern (T) alone, but rather inheres in the observed instance of the pattern (T) in combination with the chance hypothesis (H) and the semiotic agent (S) who observes the instance of the pattern. So yes, entities have multiple values of CSI. For a given semiotic agent, an entity has a CSI value for every chance hypothesis. To detect design, says Dembski, we calculate the CSI of the entity for every "relevant" chance hypothesis, and infer design only of all of the CSI values meet the threshold. If Barry understood the definition of specified complexity, he would not have issued the challenge that he did. Nature produces all kinds of stuff that has high CSI with respect to a chance hypothesis of white noise.
Instead, the CSI value in each case is calculated on the assumption that the hypothesis currently under consideration was actually the one that produced the image.
You seem to think that if the hypothesized process turns out to not be the one that actually produced the image, then the calculated number is not "a measure of the CSI present in the image itself". But CSI is not defined in terms of the process that actually produced the result. If you think that it is, then why do IDists always calculate the CSI in designed artifacts with respect to a non-design hypothesis (almost always white noise, as I said in #41)?R0bb
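For readers following the exchange, here is a minimal numeric sketch of the specified-complexity expression quoted above, X = -log2(10^120 · Φ_S(T) · P(T|H)). The Φ_S and P values below are placeholders chosen purely for illustration; they are not taken from Ewert's or Dembski's actual calculations.

```python
# Numeric sketch of X = -log2( 10^120 * Phi_S(T) * P(T|H) ).
# The probability P(T|H) is passed as a power of ten (log10_p) to avoid
# floating-point underflow for very small probabilities.
from math import log2

LOG2_10 = log2(10)

def specified_complexity_bits(phi_s, log10_p):
    """Return X in bits for one chance hypothesis H and one semiotic agent S."""
    return -(120 * LOG2_10 + log2(phi_s) + log10_p * LOG2_10)

# Same pattern T and agent S, evaluated against two different chance hypotheses:
print(specified_complexity_bits(phi_s=10 ** 6, log10_p=-400))  # ~ +910 bits
print(specified_complexity_bits(phi_s=10 ** 6, log10_p=-100))  # ~  -86 bits
```

As in the thread, the same pattern yields a different value of X under each chance hypothesis, which is why the sign of X is assessed hypothesis by hypothesis.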
October 4, 2014, 08:35 PM PDT
Nightlight, You said, "science operates under assumption that universe operates lawfully from some front loaded foundation". Laws require a law maker, just an observation. You say, "By definition you can’t have science presuming a capricious, laws violating entity interfering with the phenomena (e.g. arranging molecules into “irreducibly complex designs”)". But we arrange molecules in irreducibly complex designs all the time, does that mean we violate the laws of the universe, of course not. For example we can take the ingredients of eggs, flour and milk and make a cake. Now the cake while probably tasting terrible is irreducibly complex. Anyway the point is that the designer does exactly what we do, he takes the ingredients (molecules), and turns them into something new.logically_speaking
October 4, 2014, 04:33 PM PDT
@R0bb #41, If you look at Ewert's article, the amount of calculated CSI changes based on the naturalistic explanation being considered. This could not be the case if the CSI values Ewert gives were simply a measure of the CSI present in the image itself. If it were, there would only be one CSI value. Instead, the CSI value in each case is calculated on the assumption that the hypothesis currently under consideration was actually the one that produced the image. So, if we assume that the image was "was generated by choosing uniformly over the set of all possible gray-scale images of the same size", the formula to determine the amount of CSI out of the total amount of Shannon Information would give "a result of approximately 1,068,017 bits". However, if the image was "generated by a process biased towards lighter pixels", then the formula to determine the amount of CSI out of the total amount of Shannon Information would give a result of "approximately 593,493 bits". Ewert concludes in both of these cases that the naturalistic hypothesis under consideration results in a calculation of CSI that is too high for the hypothesis to be plausible and so both hypotheses are rejected as the correct explanation for the image. And, as it turns out, neither of those hypotheses were the correct explanation for the image. In the case of the other two hypotheses that Ewert considers, however, they result in CSI calculations of "approximately -11,836 bits" and "approximately -3,123,223 bits" respectively, both of which are far too low for the hypotheses to be ruled out by the concept of specified complexity. This, of course, does not necessarily mean that the hypotheses are correct, but the existence of natural hypotheses capable of accounting for the image that result in a very low calculation of CSI is sufficient to rule out the necessity of a design inferenceHeKS
October 4, 2014, 01:53 PM PDT
HeKS:
For the two positive bit calculations you cite, Ewert is saying that the image would contain that amount of CSI if the image had been produced by the chance hypothesis under consideration and that this amount of CSI is sufficient to rule out that chance hypothesis
(Emphasis mine.) Actually, no. Whenever Dembski or another ID proponent calculates the CSI in target T, it is always with respect to a chance hypothesis H (which, in practice, is often tacit, and almost always uniformly random noise). If the result of the calculation is N bits, the ID proponent says that T has N bits of CSI. They don't say "T would have N bits of CSI if it were actually produced by H." If you know of any exceptions to this, then please share. Otherwise, can I assume we're on the same page?R0bb
October 4, 2014, 10:56 AM PDT
#39 Not all regularities are lawful. Some are locally systematic; specifically not lawful. Lawfulness, regularities and patterns are essentially synonymous in this context -- algorithmically they all express redundancy, compressibility of the data, which is what science does from algorithmic perspective. When you systemize some data set, you are also creating an algorithm, whether you noticed it or not, which can reconstruct the data from the more concise data set/model of the data (e.g. as the systemizing rule plus the list of exceptions to the rule). That is in essence the same thing one does with a mathematical formula expressing say, Newton or Maxwell laws.nightlight
October 4, 2014, 10:43 AM PDT
The essence of natural science is research of the lawful aspects of nature. Its objective is to discover and model regularities and patterns (laws) in natural phenomena.
Not all regularities are lawful. Some are locally systematic; specifically not lawful. They are established by contingent organization. Like biology.Upright BiPed
October 4, 2014, 09:36 AM PDT
#36 William J Murray phenomena which indicates that the known natural computational power of the universe is not sufficient to account for it, and what we call intelligence is known to trivially produce similar artifacts. There is no reason to assume that what is presently "known" is all there is. Even within present outer limits set by the fundamental physics which breaks down at Planck scale (lengths < 10^-35m or times < 10^-44s), there are as many or more orders of magnitude of scale between our smallest known building blocks (elementary particles ~ 10^15m) and us (humans), as there are between Planck scale and these 'elementary' particles. Hence, as discussed earlier, the smallest building blocks at Planck scale would yield 10^80 times more computational power packed in any volume of matter-energy than the most powerful computer conceivable, built from our present smallest building blocks (elementary particles, which itself is still far ahead of the actual computational technology we have today). Vague or not, ultimately "natural" or not, what we refer to as intelligence is obviously the necessary cause for some artifacts we find in the universe - the novel War and Peace, for example. There's no more logical reason to avoid the term "intelligent design" in science than there is to avoid the term "evolution" or "natural law" or "entropy" or "time" or "random variation" or "natural selection". The term is merely a symptom of the problem (the parasitic infestation and degradation of natural sciences by emoters from left and right) not the problem itself. The question that matters is what comes next, after the informal observation of the obvious i.e. how do you model or formalize "intelligence" so it can become a productive element of the model space of natural science (the formal/algorithmic part of science that computes its predictions)? The natural science models intelligence not by fuzzying and softening it further via even more vague and emotional terms (god, agency, consciousness) but through computational processes and algorithms. That's what real scientists like Stephen Wolfram, James Shapiro and researchers at SFI are doing and that is where the advance the science will take place.nightlight
October 4, 2014, 09:12 AM PDT
Humbled at 4 & 5 hits the nail on the head as regards to early parts of my own life story from ages ~19 to 36. I was the worlds leading authority and expert on that Book I had never bothered to open, and was quite evangelical in my Atheism. I didn't realize it at the time, but that Book I hadn't opened held a certain amount of fear for me. The fear was "what happens if I find out that Book is true? If true, then what are the implications for me personally ... and my lifestyle?" And other such very personal and existential questions. "Would it be a mirror to my own very unflattering life (it was)." I don't think my experience was anything unique, especially among young American males supposedly searching for the 'truth.' The popularity of Hugh Hefner's 'Playboy Philosophy' and other Atheistic and hedonistic world views tells me that what Humbled says above in 4 & 5 is a valid observation ... even today. In the 7 decades of my life I have seen quite a number of destructive waves crash onto the shores of American culture; Hefner was/is such a wave, Timothy Leary is another, easy divorce is another, Bertrand Russell was another, the almost total cultural embrace of homo-sexuality is another, the Occupy Wall Street is another, radical feminism is another, the deconstruction and demonization of American history is another, multi-culturism is another, the New Atheist is another ... and on and on and on.ayearningforpublius
October 4, 2014, 08:37 AM PDT
These two ‘armies’ of overly passionate empathizers, who are to natural science what bicycle is to a fish, should take their silly little war to the social and gender studies or other humanities, and just get the heck out of natural science. Everyone would be better off from such shift, including the battling emoters themselves, since everyone benefits from the real advances in natural sciences and technology.
The problem with the a priori position that science is only about understanding the natural computational abilities of the universe is when it runs smack into a phenomena which indicates that the known natural computational power of the universe is not sufficient to account for it, and what we call intelligence is known to trivially produce similar artifacts. Vague or not, ultimately "natural" or not, what we refer to as intelligence is obviously the necessary cause for some artifacts we find in the universe - the novel War and Peace, for example. There's no more logical reason to avoid the term "intelligent design" in science than there is to avoid the term "evolution" or "natural law" or "entropy" or "time" or "random variation" or "natural selection". It is a classification of a category of causal agency, whether (ultimately) natural/computational or not, which can leave quantifiable, recognizable evidence. The only reason to avoid using the term "intelligent design" in such cases is political/ideological.William J Murray
October 4, 2014, 08:01 AM PDT
#26 drc466 Shorter nightlight: 1) I have a philosophical predisposition to MN ("universe operates lawfully"), and refuse to accept any other theory a priori The essence of natural science is research of the lawful aspects of nature. Its objective is to discover and model regularities and patterns (laws) in natural phenomena. Another way of stating it is to note that natural science is a 'compression algorithm' for natural phenomena. The regular compression algorithms identify some regularity, pattern, repetitiveness, predictability... aka lawfulness, in the data stream and use it to encode data in fewer bits e.g. by using a shorthand (shorter codes) for the most common data elements. The 'lawless' or 'patternless' sequence (symbol sequence drawn uniformly from all possible symbol sequences) is incompressible i.e. there is no compression algorithm for those. Compression algorithm for 'lawless' data is oxymoron. Similarly, natural science of 'lawless' phenomena is oxymoron. (I will call it childish and anti-science and congratulate myself on my intellectual superiority). The real problem is that too many people from arts, humanities, gender 'studies',... have blundered into and over time completely parasitised natural sciences (as well as technologies with major government funding, such as space program; e.g. NASA went rapidly downhill after the mental illness of political correctness took over). There is an interesting classification of human cognitive styles into empathizers and systemizers, or E and S dimensions or scale (interesting article): "Empathizers identify with another person's emotions, whereas systemizers are driven to understand the underlying rules that govern behavior in nature and society." General observation from the above research about placement along E and S dimensions is that liberals and conservatives are empathizers, while libertarians are systemizers. While natural science is a product and natural domain for systemizing cognitive style, the leftist takeover of academia has resulted in major empathizer infestation and degradation of the natural sciences. A natural reaction to that takeover was a more recent rise of the opposing force, empathizers from the right, such as Discovery Institute with its ID "theory" based on capricious, part time deity (which is as useless to natural science as what it fights against, neo-Darwinism). These two 'armies' of overly passionate empathizers, who are to natural science what bicycle is to a fish, should take their silly little war to the social and gender studies or other humanities, and just get the heck out of natural science. Everyone would be better off from such shift, including the battling emoters themselves, since everyone benefits from the real advances in natural sciences and technology.nightlight
October 4, 2014, 07:11 AM PDT
The computational power of the universe, it seems, is not without a sense of irony.William J Murray
October 4, 2014, 05:40 AM PDT
NL: Your problem seems to be hostility to the mere idea of a Creator-God who is a maximally great and necessary being, the root and sustainer of reality who is Reason Himself. Such a being simply will not be thoughtlessly or irresponsibly impulsive, which is what caprice is about. Purposeful, thoughtful decision is not caprice. One does not have to accept that such a Being exists to have a fair view of the character identified for such. Fictional characters can have just that, recognisable character. Much less, the God of the universe. KFkairosfocus
October 4, 2014, 02:08 AM PDT
R0b @ 28: Why have you presented a case of an IMAGE, created by artifice of man [and which was sourced in a few minutes by UD'ers . . . ], as an example of natural generation of CSI? Are you unaware that this was yet another of the attempted counter-examples shot down and shown to inadvertently demonstrate what they sought to overturn? So, this is not even a strawman. The Ash and snow pattern is produced by intersecting forces of chance and necessity. It is complex but not functionally specific tied to that complexity; if the pattern were wildly different it would make no difference, it is just an event on the ground. The image made from it is indeed functionally specific and complex: it reasonably accurately portrays the pattern. But obviously, blatantly, such an image is a product of intelligent design using machinery that is intelligently designed to capture such images. The pivot, again, seems to be confusion about joint complexity AND specificity attaching to the same aspect of an observed entity, leading you to collapse the matter into mere complexity. That is a strawman. Please, take time to examine the flowchart and infographic in the OP, with intent to actually understand them in their own terms rather than to find targets for counter-talking points by snipping and sniping. A process very liable to set up and knock over strawmen. BTW, this case also illustrates how refusal to acknowledge cogent correction at the time by objectors to design thought leads to far and wide insistent or even stubborn propagation of errors, misunderstandings, false claims and strawman caricatures. KFkairosfocus
October 4, 2014, 01:55 AM PDT
@R0bb #27 If I'm reading you right, you seem to have misunderstood Ewert's article at ENV. Liddle's image doesn't meet Barry's challenge at all. For the two positive bit calculations you cite, Ewert is saying that the image would contain that amount of CSI if the image had been produced by the chance hypothesis under consideration and that this amount of CSI is sufficient to rule out that chance hypothesis. As it turns out, the calculation of CSI was accurate in eliminating these particular chance hypotheses, as the image was not actually generated by either of them. Barry's challenge is not asking for some natural process that would have been found to have produced a large amount of CSI if it had actually been responsible for the origin of a high-CSI object but was not. His challenge is asking for a natural process that actually produced a high-CSI object as calculated in the light of that particular natural process.HeKS
October 3, 2014, 10:52 PM PDT
Nightlight: Do you have a blog of your own? I want to read your perspectives in further detail.pobri19
October 3, 2014, 07:55 PM PDT
NL: Regarding a related issue often brought up in this context — while present chess programs are written by humans, nothing in principle precludes another program B from writing a chess program A, then another program C from writing program B, then a program D from writing program C, etc.
To create a program, an overview is needed: a plan. How can a program come up with a (new) plan of its own? How can it create something new, doing something other than what it has been instructed to do? Programs that write programs merely mimic design. You are suggesting that there is an accumulation of creative intelligence and understanding going on, but Searle's Chinese Room teaches us that neither intelligence nor understanding is involved in manipulating symbols according to rules.
NL: It is also perfectly plausible, or at least conceivable, that anything we (humans) do is the result of underlying computational processes. Any finite sequence (of anything: actions, symbols, etc.) can be replicated by a computational process, hence any action and production of any human can be replicated by such processes.
Not plausible at all, since computational processes don't replicate understanding but merely mimic it.
Box
October 3, 2014, 07:24 PM PDT
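For readers wanting to see the bare mechanism behind the "program B writes program A" claim that Box is responding to, here is a minimal sketch (an editorial illustration, nothing like a real chess engine; the file name and the stand-in move chooser are invented for the example): program B emits the source of a trivial program A and then runs it. Whether this counts as design or as mere mimicry of design is exactly what is in dispute in the exchange above; the code itself settles nothing.

    # "Program B": writes the source code of a trivial "program A" to disk,
    # then executes it.
    program_a_source = '''
    def play_move(position):
        # A stand-in "move chooser": picks the alphabetically first option.
        return min(position["legal_moves"])

    print(play_move({"legal_moves": ["e2e4", "d2d4", "g1f3"]}))
    '''

    with open("program_a.py", "w") as f:
        f.write(program_a_source)

    exec(compile(program_a_source, "program_a.py", "exec"))  # prints "d2d4"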
nightlight: Your replies are excellent learning material, thank you. Can you recommend any blogs or reading material with a worldview congruent with yours (and thus, mine)? It doesn't necessarily have to be related to ID or non-ID, just interesting information to digest. I'm always on the lookout for new information to further my understanding of the world, human nature and the universe, but it's important to find and follow the correct path. You introduced me to Conway's Game of Life, and that was really intriguing. As an example, a great blog I stumbled onto a few months ago: meltingasphalt.com. As a programmer, I found it really easy to understand and agree with your perspective. Hope to hear back from you.
pobri19
October 3, 2014, 07:02 PM PDT
Barry:
Show me one example – just one; that’s all I need – of chance/law forces creating 500 bits of complex specified information. [Question begging not allowed.] If you do, I will delete all of the pro-ID posts on this website and turn it into a forum for the promotion of materialism.
Here Winston Ewert calculates that a pattern posted by Elizabeth Liddle has 1,068,017 bits of CSI, or 593,493 bits, or -11,836 bits, or -3,123,223 bits, depending on which chance hypothesis is used. The pattern was produced naturally by a periodically erupting volcano leaving ash bands on a glacier. A million bits of CSI certainly meets your challenge, but please don't delete all pro-ID posts from this site.
R0bb
October 3, 2014, 06:56 PM PDT
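For readers wondering where the 500-bit figure in Barry's challenge comes from, a quick check (an editorial note, using the universal probability bound of roughly 1 in 10^150 commonly attributed to Dembski) shows that 500 bits corresponds to an improbability slightly beyond that bound.

    import math

    # 500 bits means singling out one outcome from 2**500 equiprobable ones.
    print(f"2**500 = {2**500:.3e}")                         # roughly 3.3e+150
    # A bound of 1 in 10^150 works out to just under 500 bits:
    print(f"1 in 10^150 = {150 * math.log2(10):.1f} bits")  # roughly 498.3 bits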
Shorter nightlight:
1) I have a philosophical predisposition to MN ("universe operates lawfully"), and refuse to accept any other theory a priori (I will call it childish and anti-science and congratulate myself on my intellectual superiority).
2) I don't have an answer for your objection that I can't provide an example of design that doesn't start with a designed object. I just have my "just-so" story that must be true because of #1 above.
Response to point 1: Describe, while remaining within your requirement of "universe operates lawfully", how matter and energy originated. Either of your two possible responses ("has always existed", "from nothing") violates your own requirement. Ergo, at some point in the history of the universe, unlawful behavior occurred.
Response to point 2: Simply provide an example of de novo complexity that doesn't start from "take a designed object, and then..."
drc466
October 3, 2014, 06:27 PM PDT
dang you kf! Now I keep seeing advertisements for Swing-Away can openers!
Mung
October 3, 2014, 03:42 PM PDT
#20 KF: "actions of an all-wise Creator would reflect 'caprice'"
It is 'caprice' with respect to the lawful unfolding of the universe, the study of which natural science takes as the definition of its task. Some epistemological system which rejects lawfulness may be fine as theology or poetry or politics, etc., but it is not a natural science. If the Discovery Institute wants to speculate about a deity which comes in every now and then to "fix" some "irreducibly complex" molecule that its own laws and initial-boundary conditions botched for some reason, fine with me; it's their time and their money to squander as they wish. But what they are talking about is just not a science by definition (as a discipline that deals with the lawful aspects of the universe), that's all I am saying.
nightlight
October 3, 2014, 12:44 PM PDT
#22 "Out of curiosity, are you even aware that your "front-loading" hypothesis assumes what you are attempting to prove - namely, that "processes built-in or front-loaded into the universe are capable of generating design" Any scientific theory needs some postulates (rules of the game) to build anything. Ontologically this translates into presumed front loading of a system satisfying those postulates. What makes theory scientific, in contrast to say, poetry or theology or casual chit chat or Discovery Institute's ID, is that science operates under assumption that universe operates lawfully from some front loaded foundation. To what extent our present scientific theories capture the ultimate/true foundation (if there is any such) is a separate issue (metaphysics). The presumed lawfulness is what differentiates scientific approach from religion or other approaches. By definition you can't have science presuming a capricious, laws violating entity interfering with the phenomena (e.g. arranging molecules into "irreducibly complex designs") in the universe as Discovery Institute's ID assumes. The mission of science is research of the lawful aspects of phenomena. Whether there are any other kind of aspects we can't ever know with certainty since any rules defining presently understood 'lawfulness' are provisional and what seems outside of the current rules may be deducible from some other rules discovered in the future. But science is limited by definition to what is lawful (whatever the front loaded foundation it starts with may be). There is no problem of course in pursuing and experiencing other aspects ('unlawful' by the presently known rules of the game). But one can't misbrand and sell such pursuits as 'science' and insist on teaching them as such in science class, as Discovery Institute's ID seeks to do. DI's ID is anti-scientific with its part time capricious deity somehow jumping in and out of the universe to rearrange some molecules into 'irreducibly complex' forms that its own laws and initial-boundary conditions allegedly can't manage. It is a completely childish and incoherent position, based on fundamental misunderstanding of science (at least), hence it is justifiably excluded from science. Regarding your assertion that scientific front loading assumes that which needs to be proven (e.g. origin of life, biological complexity, etc), that is again misunderstanding of how computational perspective works. For example, a simple chess program containing few pages of C code, can produce millions of highly creative and complex chess games beyond understanding of the chess programmer (modern chess programs easily beat the best chess players in the world, let alone their programmers). The outward apparent complexity is the result of program (algorithms) + computational process executing the program. The objective of computational approach to natural science (such as Wolfram's NKS) is to assume front loading with very simple basic computational units (as simple as binary on/off state) along with simple rules of their state change, and seek to reproduce what we presently know as laws of physics (including the space-time aspects). Many systems with simple binary cellular automata were shown to have capability of universal computer (they can compute anything that is computable with any Turing machine equivalent abstract computer). 
Some such extremely simple systems can reproduce several fundamental laws of physics, such as Schrodinger, Maxwell and Dirac equations (hence covering basic quantum mechanics and electromagnetism). Wolfram believes it is conceivable that the basic computational unit for the universe and its most fundamental laws is describable with one line of code for the rules of some networked cellular automaton. So, front loading some very simple computational building block (finite state machine) and its rules of connecting with other such blocks which allow combined computational power be additive (as in neural networks), could in principle compute all of the present physical laws, with their fine tuning for life, as well as origin and evolution of life. While no one has worked out yet the most fundamental computational system of this kind for the whole universe, numerous bits and pieces of that gear have surfaced at different levels of science, from the several laws of fundamental physics to 'natural genetic engineering' of James Shapiro, 'biochemical networks' and related computational models from Santa Fe Institute for Complexity Science at the level of biological systems. In such approach what we presently see as physical laws are merely some aspects or regularities of the computed patterns (by this underlying computing substratum). The biological systems would be separate aspects of the computed patterns which are not reducible to the 'physical laws' regularities but are merely consistent with physical laws. Namely, physical laws on their own don't determine what physical system will do -- you need also to input into 'physics' algorithm (equations) the initial and boundary conditions (like initial angle and velocity of a billiard ball, plus all its interactions with table & other balls as boundary conditions) in order to compute what physical system will do. Further, with the physical laws as presently known, not even the physical law + initial-boundary conditions determine what system will do. Namely, our present fundamental physical laws (quantum field theory) are probabilistic, hence with all the data input, laws + initial-boundary conditions, the physics algorithm only yields the probabilities of different events with that system but no which of the events will occur. This is suggestive of existence some underlying more complex underlying system for which our present laws of physics are merely an approximate statistical description (like general laws of economy vs. what individuals buyers and sellers are doing). If this implied underlying system is computed, as assumed in the computational approach to natural science (the NKS), then any level of biological complexity can be result of this underlying computation which is vastly more powerful than anything we can conceive based on our present computing technology (10^80 times more powerful computing process, if one assumes elemental computational building blocks at Planck scale -- see second half of this post for hyperlinked TOC to discussion on UD of this topic). In terms of chief programmer of the universe, what this means is that CPoU could have created some very simple computing blocks and set up rules for their combination into networks of blocks (so that their computing power is additive), but he has no idea what will come out of the compuation, just as programmer of a little program that computes million digits of Pi has no idea beyond the first few digits what digit its creation will spew next. 
Similarly, computational approach does not need to presume that what needs to be shown (e.g. origin of life, biological complexity) as you assert -- it only needs to assume a much simpler computational substratum and show that such system can itself compute the observed biological complexity. Obviosuly, this is still a work in progress (e.g. see James Shapiro's and SFI papers).nightlight
October 3, 2014, 12:25 PM PDT
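Since nightlight's comment above leans on the claim that very simple binary cellular automata can have the capability of a universal computer, here is a minimal toy sketch (an editorial illustration, not code from NKS or from the thread) of one standard example: the elementary cellular automaton Rule 110, which Matthew Cook proved capable of universal computation. The entire "front-loaded" specification is the single rule number plus the update loop, yet complex structure unfolds from a single 'on' cell.

    RULE = 110  # update rule, encoded as 8 output bits for the 8 neighborhoods

    def step(cells):
        # One synchronous update of the elementary CA (wrapping at the edges):
        # each cell's new state is the RULE bit selected by the 3-cell
        # neighborhood (left, self, right) read as a binary number.
        n = len(cells)
        return [
            (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    # Start from a single 'on' cell in a 64-cell row and print 30 generations.
    cells = [0] * 63 + [1]
    for _ in range(30):
        print(''.join('#' if c else '.' for c in cells))
        cells = step(cells)

Running it prints 30 rows; the characteristic interacting triangular structures grow leftward from the single seed cell even though nothing beyond the one-byte rule and the starting row was specified.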
nightlight, Out of curiosity, are you even aware that your "front-loading" hypothesis assumes what you are attempting to prove - namely, that "processes built-in or front-loaded into the universe are capable of generating design", and therefore that ID's claim that "the rules of physics in the universe + chance are not capable of generating design" must be wrong? You're assuming the can-opener. One essential argument against your position is that you cannot provide even a single example that would support it. Your example of a chess program that can play an infinite # of chess games starts with a chess program that was intelligently designed to play an infinite # of chess games. Your Turing machines would be (as none currently exist) programs designed to mimic human behavior, and would be restricted to mimicking human behavior in the fashion designed into the code. Neural networks and fuzzy logic are nonetheless goal-oriented, with the goal and the processes for reaching the goal designed into the code. In order for your position to have merit, you have to assume that whatever chemical "computational process" is required to generate life was "front-loaded" into the universe's initial conditions. While this is not technically impossible, there are several problems with it:
1) Lacking a supernatural intelligence to front-load those processes, how'd they get there? You're still arguing that chance, free of front-loaded process, created front-loaded process.
2) Lack of affirmative evidence wrt life. If the generation of life as a computational process was built into the universe, how come all the evidence is that it was a singular event, unreproducible even with the assistance of intelligent design (us)?
3) Lack of affirmative evidence universally. There is not a single unambiguous, experimentally reproducible example of "front-loaded computational processes" generating de novo design (ref BA's 500 bits of complex information) that does not start with "take a designed object, and then..."
drc466
October 3, 2014, 10:18 AM PDT
I'm curious... being, among other things, a computer programmer (primarily of user interfaces), is there some meaningful description of how I program that does not consist primarily of the thought processes I use to plan out the logic of the interface functionality, or of the details of how the medium in which my design plan is instantiated happens to carry out my instructions?
HeKS
October 3, 2014, 10:10 AM PDT