Uncommon Descent Serving The Intelligent Design Community

“Actually Observed” Means, Well, “Actually Observed”


In a comment to a recent thread I made the following challenge to the materialists:

Show me one example – just one; that’s all I need – of chance/law forces creating 500 bits of complex specified information. [Question begging not allowed.] If you do, I will delete all of the pro-ID posts on this website and turn it into a forum for the promotion of materialism. . . .

There is no need to form any hypothesis whatsoever to meet the challenge. The provenance of the example of CSI that will meet the challenge will be ACTUALLY KNOWN. That is why I put the part about question begging in there. It is easy for a materialist to say “the DNA code easily has more than 500 bits of CSI and we know that it came about by chance/law forces.” Of course we know no such thing. Materialists infer it from the evidence, but that is not the only possible explanation.

Let me give you an example. If you watch me put 500 coins on a table and I turn all of them “heads” up, you will know that the provenance of the pattern is “intelligent design.” You do not have to form a chance hypothesis and see if it is rejected. You sat there and watched me. There is no doubt that the pattern resulted from intelligent agency.

My challenge will be met when someone shows a single example of chance/law forces having been actually observed creating 500 bits of CSI.

R0bb responded not by meeting the challenge (no surprise there) but by suggesting I erred when I said CSI can be “assessed without a chance hypothesis.” (And later keith s adopted this criticism).

I find this criticism odd to say the least. The word “hypothesis” means:

A proposition . . . set forth as an explanation for the occurrence of some specified group of phenomena, either asserted merely as a provisional conjecture to guide investigation (working hypothesis) or accepted as highly probable in the light of established facts.

It should be obvious from this definition that we form a hypothesis regarding a phenomenon only when the cause of the phenomenon is unknown, i.e., has not been actually observed. As I said above, in my coin example there is no need to form any sort of hypothesis to explain the cause of the coin pattern. The cause of the coin pattern is actually known.

I don’t know why this is difficult for R0bb to understand, but there you go. To meet the challenge, the materialists will have to show me where a chance/law process was “actually observed” to have created 500 bits of CSI. Efforts have been made. All have failed. The now defunct infinite monkeys program is just one example: it took 2,737,850 million billion billion billion monkey-years to get the first 24 characters from Henry IV, Part 2.
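For a sense of the scale behind that monkey-years figure, the odds of randomly typing one specific 24-character string can be sketched. The alphabet size of 26 is an assumption here (the actual monkey program's key set is not given in the post):

```python
import math

# Hedged sketch: rough odds of randomly typing one specific
# 24-character string. A 26-letter alphabet is an assumption;
# the original infinite-monkeys program's key set may differ.
ALPHABET = 26
TARGET_LEN = 24

p_one_try = ALPHABET ** -TARGET_LEN          # chance of an exact 24-char match in one try
bits = TARGET_LEN * math.log2(ALPHABET)      # information content of the match, in bits

print(f"P(single try) = {p_one_try:.3e}")
print(f"Equivalent information: {bits:.1f} bits")
```

Even this 24-character target, at roughly 113 bits, sits far below the 500-bit threshold the challenge names.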

 

UPDATE:

R0bb responds at comment 11:

That’s certainly true, but we’re not trying to explain the cause of the coin pattern. We’re trying to determine whether the coin pattern has CSI. Can you please tell us how to do that without a chance hypothesis?

To which I responded:

1. Suppose you watched me arrange the coins. You see a highly improbable (500 bits) pattern conforming to a specification. Yes, it has CSI.

2. Now, suppose you and I were born at the same time as the big bang and did not age. Suppose further that instead of intentionally arranging the coins you watched me actually flip the coins at the rate of one flip per second. While it is not logically impossible for me to flip “all 500 heads,” it is not probable that we would see that specification from the moment of the big bang until now.

So you see, we’ve actually observed the cause of each pattern. The specification was achieved in scenario 1 by an intelligent agent with a few minutes’ effort. In scenario 2 the specification was never achieved from the moment of the big bang until now.
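The arithmetic behind scenario 2 is easy to sketch. The figures below are assumptions for illustration: a round 13.8 billion years for the age of the universe, and one full 500-coin flip per second:

```python
# Hedged sketch of scenario 2: one full 500-coin flip per second
# from the big bang until now. The universe's age (~13.8 billion
# years) is an assumed round figure.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
AGE_OF_UNIVERSE_YEARS = 13.8e9

trials = AGE_OF_UNIVERSE_YEARS * SECONDS_PER_YEAR   # ~4.4e17 attempts
p_all_heads = 2.0 ** -500                           # one specific 500-coin pattern

# Expected number of "all heads" outcomes over the whole history:
expected_hits = trials * p_all_heads
print(f"Trials since big bang: {trials:.2e}")
print(f"Expected all-heads outcomes: {expected_hits:.2e}")
```

The expected count comes out around 10^-133, which is the quantitative content of "not probable that we would see that specification."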

The essence of the design inference is this: Chance/law forces have never been actually observed to create 500 bits of specified information. Intelligent agents do so routinely. When we see 500 bits of specified information, the best explanation (indeed, the only explanation that has actually been observed to be a vera causa) is intelligent agency.

To meet my challenge, all you have to do is show me where chance/law forces have been observed to create 500 bits of specified information.

 

Comments
centrestream, you already know what you have actually observed. Why is that so hard to understand?

Barry Arrington
November 17, 2014 at 10:31 AM PDT
Barry #29: "Do you understand that it is pointless to perform a Bayesian analysis when you already know the answer?" I'm curious. What is the answer that you already know? That something is complex? Or that something is designed?

centrestream
November 17, 2014 at 10:29 AM PDT
If you allow selection feedback to work on the objects I can think of two plausible non-intelligent scenarios for your 500 heads.

1. Unknown to you, your roommate dumps his stash of 1000 pennies randomly on a shaky card table. They land half heads, half tails. He leaves. Your flat is next to the train tracks, and every time a train comes by the table vibrates fiercely. Because of the difference in form factor the tails (heads down) tend to "walk" and fall off the table. The heads (tails down) don't move. After enough trains have gone by the table holds nothing but 500 heads. You come home, find them and falsely conclude design.

2. Unknown to you, your roommate dumps his stash of 1000 pennies randomly on a table. They land half heads, half tails. He opens the window and leaves. A crow lands in the window and sees the coins. Because the tails side is slightly shinier than the heads side, the crow picks up a "tails" and flies off with it. The process is repeated until the table holds nothing but 500 heads. You come home, find them and falsely conclude design.

I pointed the problem out before but you haven't addressed it yet: iterative processes involving selection feedback can blow right by the 500-bit threshold. Evolution is an iterative process involving selection feedback.

Adapa
November 17, 2014 at 10:26 AM PDT
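Adapa's two scenarios share one mechanism: random input plus a blind filter that removes only tails-up coins. A minimal simulation of that mechanism follows; the coin count, per-pass removal rate, and random seed are arbitrary assumptions:

```python
import random

# Minimal sketch of the selection-feedback scenarios above: coins
# land at random, then a blind filter (train vibration or crow)
# repeatedly removes only tails-up coins. No step in the loop
# "knows" the all-heads target. Seed and removal rate are arbitrary.
random.seed(1)

coins = [random.choice("HT") for _ in range(1000)]

while "T" in coins:
    # each pass, every tails-up coin has a 50% chance of leaving the table
    coins = [c for c in coins if not (c == "T" and random.random() < 0.5)]

print(f"{len(coins)} coins remain, all heads: {set(coins) == {'H'}}")
```

The table always ends up all heads (roughly 500 of them), which is the point of contention: an iterative filter reaches the specified pattern without any agent arranging coins.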
Robb, Thank you for your example. The debating can begin. Anybody have any more? This is exciting! Ed

Edward
November 17, 2014 at 10:22 AM PDT
keith s @ 27. Do you understand that it is pointless to perform a Bayesian analysis when you already know the answer?

Barry Arrington
November 17, 2014 at 10:21 AM PDT
keith:

Something possesses CSI if a) it is specified, and b) it cannot be produced by “Darwinian and other material mechanisms”

I don’t know anyone who defines CSI this way. Therefore, your attempt to show circularity fails. Take the 500 coin example. I say the “500 heads” pattern contains complex specified information, because it is complex information (500 bits) and it is specified (“500 heads”). Put another way, the search space is a gigantic ocean (all patterns of 500 coins) and the specification is one small island in that ocean (“500 heads”) that is descriptively compressible. Unlike your scenario, I never defined CSI as being, by definition, “that which is beyond material mechanisms.” Indeed, if you read my post carefully, you will see that I said just exactly the opposite. I stated that it is logically possible for chance to arrive at the specification. We can be practically certain, however, that it never will. The design inference is not based simply on low probability. Again, the probability of ALL 500 coin sequences is exactly the same, because all 500 coin patterns contain the exact same amount of information (500 bits). It is only the combination of the astronomically low probability with the specification (500 heads) that results in the design inference.

Barry Arrington
November 17, 2014 at 10:18 AM PDT
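The "descriptively compressible" claim above can be illustrated with a crude proxy. Using zlib output size as a stand-in for description length is an assumption (zlib is not a measure of true descriptive complexity), but the asymmetry it exposes is the point being argued:

```python
import random
import zlib

# Crude sketch of "descriptive compressibility": zlib output size
# as a proxy for description length. This is an assumption -- zlib
# is not Kolmogorov complexity, just an easy, runnable stand-in.
random.seed(0)

all_heads = "H" * 500
random_pat = "".join(random.choice("HT") for _ in range(500))

compressed_spec = len(zlib.compress(all_heads.encode()))
compressed_rand = len(zlib.compress(random_pat.encode()))

print(f"'500 heads' compresses to {compressed_spec} bytes")
print(f"random 500-coin pattern compresses to {compressed_rand} bytes")
# Both patterns are equally improbable as raw 500-bit outcomes,
# but only the specified one admits a short description.
```

This mirrors the comment's distinction: every 500-coin pattern carries the same 500 bits, yet "500 heads" has a far shorter description than a typical pattern.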
Barry, Let me re-ask a question from the other thread. Do you understand Dembski's CSI equation? Do you know what the P(T|H) term represents, and why? Do you know what H stands for?

keith s
November 17, 2014 at 10:13 AM PDT
Learned Hand, to R0bb:

Your point seems right, but isn’t the problem with the hypo simpler than that? If you see coins being laid out, then isn’t PTH just 0?

No, because P(T|H) does not depend on how T actually came about. It depends on all the ways T could have come about through non-design means. For example, suppose Barry deliberately places a single coin tails up on a table. That is a designed outcome, but it certainly doesn't exhibit CSI, because it could easily have been produced by simply flipping the coin. This is important, because we are supposed to be able to assign CSI to things even when we haven't witnessed their genesis.

keith s
November 17, 2014 at 10:10 AM PDT
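keith s's single-coin example can be put in numbers. Taking fair, independent flips as the chance hypothesis H is an assumption here (Dembski's H ranges over all relevant non-design hypotheses), but it shows why the placed single coin scores no CSI:

```python
import math

# Sketch of keith s's point: under a chance hypothesis H of fair,
# independent flips, -log2 P(T|H) measures how hard pattern T is
# to reach by chance -- regardless of how T actually came about.
# (Fair flips is an assumed H; Dembski's H is broader than this.)
def chance_bits(n_coins: int) -> float:
    """-log2 P(T|H) for one specific outcome of n fair coin flips."""
    return -math.log2(0.5 ** n_coins)

single = chance_bits(1)     # 1 bit: trivially reachable by chance, so no CSI
pattern = chance_bits(500)  # 500 bits: astronomically unlikely under this H

print(f"1 coin: {single} bits; 500 coins: {pattern} bits")
```

The deliberately placed coin scores 1 bit, far under any complexity threshold, even though its actual provenance was design; this is the sense in which P(T|H) ignores how T really arose.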
Learned @ 23: Your comment is kind of amusing because you grasp that I’m right, but you can’t resist taking a rhetorical swing at me. You are absolutely correct. A Bayesian analysis regarding the provenance of a pattern is pointless if you have actual knowledge regarding the provenance of the pattern. R0bb insists on a “chance hypothesis” when there is no need for any hypothesis. R0bb says that he can’t know whether an event is improbable unless there is a chance hypothesis. Nonsense on a stick. ANY configuration of 500 coins is improbable. This is true whether the configuration resulted from chance or design.

Barry Arrington
November 17, 2014 at 10:02 AM PDT
Barry, markf and R0bb have been patiently explaining this to you, but you're brushing them off instead of thinking about what they're saying. The same bad logic is used in the following two scenarios.

Scenario I
1. Definition: Something possesses nurpitude if a) it is blue, and b) it cannot be built from toothpicks.
2. You issue a challenge: "Show me just one example of something with nurpitude being built from toothpicks. Just one!"
3. Your opponents point out that anything that can be built from toothpicks is automatically, by definition, devoid of nurpitude. No matter how powerful toothpick construction techniques are, the challenge cannot be met. If X can be built from toothpicks, it automatically, by definition, does not possess nurpitude.
4. Therefore the challenge is empty.

Scenario II
1. Definition: Something possesses CSI if a) it is specified, and b) it cannot be produced by "Darwinian and other material mechanisms".
2. You issue a challenge: "Show me just one example of something with CSI being produced by natural mechanisms. Just one!"
3. Your opponents point out that anything that can be produced by natural mechanisms is automatically, by definition, devoid of CSI. No matter how powerful natural mechanisms are, the challenge cannot be met. If X can be produced by natural mechanisms, it automatically, by definition, does not possess CSI.
4. Therefore the challenge is empty.

It's the same bad logic in both scenarios. You have fallen into the circularity trap.

keith s
November 17, 2014 at 10:00 AM PDT
R0bb, the UD answer to the P(T|H) problem seems to be "shut up." It's a versatile response, both easier and safer than having a conversation. Your point seems right, but isn't the problem with the hypo simpler than that? If you see coins being laid out, then isn't PTH just 0?

Learned Hand
November 17, 2014 at 09:51 AM PDT
Materialism in 52 seconds.

bb
November 17, 2014 at 09:27 AM PDT
Robb:

Without a chance hypothesis, how do I determine that it’s highly improbable?

*sigh* Never mind R0bb. If all you want to do is play definition derby in response to a straightforward challenge, that is all the answer I need. You've got nothing. OK.

Barry Arrington
November 17, 2014 at 09:26 AM PDT
Edward:

Let's instead post working examples (the submitted examples could even fit our own personal definitions); the esteemed panel of posters at UD will determine if it's a worthy example.

Contrary to Barry's assertion that I "responded not by meeting the challenge", I actually did point to a working example in response to his challenge. Here's a summary of that example:
1) Ewert calculates that the pattern has 1,068,017 bits of specified complexity under the chance hypothesis of equiprobability.
2) The pattern is known to have been created by natural processes.
3) In practice, equiprobability is the only chance hypothesis that IDists (other than Ewert) ever consider.
HeKS responded, but I doubt that many, if any, IDists will agree with his response.

R0bb
November 17, 2014 at 09:22 AM PDT
Edward, I kind of like your idea about posting examples and having the jurors deliberate on them. Maybe finally we will understand 'n-D e' on actual examples! :) Thank you for the clever suggestion. P.S. as you know, there is a thread with a few of those examples already in it. :) Also, the more recent neuroscience OP has a few examples to review.

Dionisio
November 17, 2014 at 09:14 AM PDT
Markf, You don't understand. Let's avoid all of the definition lamers. Let's instead post working examples (the submitted examples could even fit our own personal definitions); the esteemed panel of posters at UD will determine if it's a worthy example. This could be the most fun UD thread yet!

Edward
November 17, 2014 at 09:05 AM PDT
To meet my challenge, all you have to do is show me where chance/law forces have been observed to create 500 bits of specified information.

What about the hydrogen atom, which has a single proton with a single energy level? Does it qualify?

Me_Think
November 17, 2014 at 09:01 AM PDT
Barry:

You see a highly improbable (500 bits) pattern conforming to a specification.

Without a chance hypothesis, how do I determine that it's highly improbable?

R0bb
November 17, 2014 at 08:58 AM PDT
R0bb @ 11:

That's certainly true, but we're not trying to explain the cause of the coin pattern. We're trying to determine whether the coin pattern has CSI. Can you please tell us how to do that without a chance hypothesis?

1. Suppose you watched me arrange the coins. You see a highly improbable (500 bits) pattern conforming to a specification. Yes, it has CSI.

2. Now, suppose you and I were born at the same time as the big bang and did not age. Suppose further that instead of intentionally arranging the coins you watched me actually flip the coins at the rate of one flip per second. While it is not logically impossible for me to flip “all 500 heads,” it is not probable that we would see that specification from the moment of the big bang until now.

So you see, we’ve actually observed the cause of each pattern. The specification was achieved in scenario 1 by an intelligent agent with a few minutes’ effort. In scenario 2 the specification was never achieved from the moment of the big bang until now.

The essence of the design inference is this: Chance/law forces have never been actually observed to create 500 bits of specified information. Intelligent agents do so routinely. When we see 500 bits of specified information, the best explanation (indeed, the only explanation that has actually been observed to be a vera causa) is intelligent agency.

To meet my challenge, all you have to do is show me where chance/law forces have been observed to create 500 bits of specified information.

Barry Arrington
November 17, 2014 at 08:53 AM PDT
#13 Edward. Without the definitions you don't know what the challenge is or what the working examples are examples of.

markf
November 17, 2014 at 08:40 AM PDT
Rather than another boring debate on definitions, wouldn't it be much more fun to produce working examples? Then we could debate on whether or not the examples provided answer the challenge. Yea!

Edward
November 17, 2014 at 08:37 AM PDT
I think it's important to point out that things like portraits of Elvis Presley appear in the foam of lattes from Starbucks. And that natural weathering of rocks produces arches and other patterns that human beings think look like something other than rocks. Children commonly play the game of looking for bunny rabbits in drifting clouds. Etc., etc. So, there is the fact that FLEETING (10,000 years is "fleeting" in geology) unlikely events do occur.

The thing that suggests any intelligence behind the phenomenon is REPEATABILITY: did the exact same unlikely thing happen TWICE? Did Adam and Eve BOTH appear in the same generation? There is some chance that a tornado blowing through a junk yard can produce ONE 747. But when you see a DOZEN 747s at the same airport, the only reasonable conclusion is an Intelligent Designer. Or for us cloud gazers, if I notice that the SAME cloud is appearing day after day, I'll probably start looking for smokestacks.

mahuna
November 17, 2014 at 08:19 AM PDT
Barry:

As I said above, in my coin example there is no need to form any sort of hypothesis to explain the cause of the coin pattern.

That's certainly true, but we're not trying to explain the cause of the coin pattern. We're trying to determine whether the coin pattern has CSI. Can you please tell us how to do that without a chance hypothesis?

R0bb
November 17, 2014 at 08:14 AM PDT
The GS (genetic selection) Principle – David L. Abel – 2009 Excerpt: Stunningly, information has been shown not to increase in the coding regions of DNA with evolution. Mutations do not produce increased information. Mira et al (65) showed that the amount of coding in DNA actually decreases with evolution of bacterial genomes, not increases. This paper parallels Petrov’s papers starting with (66) showing a net DNA loss with Drosophila evolution (67). Konopka (68) found strong evidence against the contention of Subba Rao et al (69, 70) that information increases with mutations. The information content of the coding regions in DNA does not tend to increase with evolution as hypothesized. Konopka also found Shannon complexity not to be a suitable indicator of evolutionary progress over a wide range of evolving genes. Konopka’s work applies Shannon theory to known functional text. Kok et al. (71) also found that information does not increase in DNA with evolution. As with Konopka, this finding is in the context of the change in mere Shannon uncertainty. The latter is a far more forgiving definition of information than that required for prescriptive information (PI) (21, 22, 33, 72). It is all the more significant that mutations do not program increased PI. Prescriptive information either instructs or directly produces formal function. No increase in Shannon or Prescriptive information occurs in duplication. What the above papers show is that not even variation of the duplication produces new information, not even Shannon “information.” http://www.bioscience.org/fbs/getfile.php?FileName=/2009/v14/af/3426/3426.pdf http://www.us.net/life/index.htmbornagain77
November 17, 2014 at 07:58 AM PDT
corrected link: Programming of Life – Dr. Donald Johnson interviewed by Casey Luskin – audio podcast http://intelligentdesign.podomatic.com/entry/2010-01-27T12_37_53-08_00

bornagain77
November 17, 2014 at 07:53 AM PDT
Moose Dr:

I must admit, I am frustrated with the term "specified". I would rather use "function specifying". Provide 500 bits of data which, when provided to a data to function converter (such as a computer) produces complex function.

That's a good idea MDr, but unfortunately all that does is push the problem to the definition of "complex function". ID has always had the problem (some say strategy) of keeping its definitions so vague that discrete value calculations (i.e. 500 bits = designed) become totally subjective. To be clear, science has plenty of vague definitions too (like "species"), but it doesn't rely on precise calculations from those definitions to make its case.

Adapa
November 17, 2014 at 07:50 AM PDT
It is part of the definition of CSI that the chance hypothesis being considered is so unlikely to produce the outcome it is effectively impossible. So of course you will never find a chance hypothesis generating 500 bits of CSI.

This was my thought as well. What's the actual calculation you'd use to count bits of CSI that doesn't consider non-design hypotheses?

Learned Hand
November 17, 2014 at 07:48 AM PDT
Using Shannon information as a metric does not help you as much as you think it does adapa, since the Shannon information metric puts a severe constraint on the evolvability of codes once they are put in place (by a Mind): Shannon Information - Channel Capacity - Perry Marshall https://vimeo.com/106430965 “Because of Shannon channel capacity that previous (first) codon alphabet had to be at least as complex as the current codon alphabet (DNA code), otherwise transferring the information from the simpler alphabet into the current alphabet would have been mathematically impossible" Donald E. Johnson – Bioinformatics: The Information in Life Biophysicist Hubert Yockey determined that natural selection would have to explore 1.40 x 10^70 different genetic codes to discover the optimal universal genetic code that is found in nature. The maximum amount of time available for it to originate is 6.3 x 10^15 seconds. Natural selection would have to evaluate roughly 10^55 codes per second to find the one that is optimal. Put simply, natural selection lacks the time necessary to find the optimal universal genetic code we find in nature. (Fazale Rana, -The Cell's Design - 2008 - page 177) "A code system is always the result of a mental process (it requires an intelligent origin or inventor). It should be emphasized that matter as such is unable to generate any code. All experiences indicate that a thinking being voluntarily exercising his own free will, cognition, and creativity, is required. ,,,there is no known law of nature and no known sequence of events which can cause information to originate by itself in matter. Werner Gitt 1997 In The Beginning Was Information pp. 64-67, 79, 107." (The retired Dr Gitt was a director and professor at the German Federal Institute of Physics and Technology (Physikalisch-Technische Bundesanstalt, Braunschweig), the Head of the Department of Information Technology.) 
Second, third, fourth… genetic codes - One spectacular case of code crowding - Edward N. Trifonov - video https://vimeo.com/81930637 In the preceding video, Trifonov elucidates codes that are, simultaneously, in the same sequence, coding for DNA curvature, Chromatin Code, Amphipathic helices, and NF kappaB. In fact, at the 58:00 minute mark he states, "Reading only one message, one gets three more, practically GRATIS!". And please note that this was just an introductory lecture in which Trifinov just covered the very basics and left many of the other codes out of the lecture. Codes which code for completely different, yet still biologically important, functions. In fact, at the 7:55 mark of the video, there are 13 codes that are listed on a powerpoint, although the writing was too small for me to read. "In the last ten years, at least 20 different natural information codes were discovered in life, each operating to arbitrary conventions (not determined by law or physicality). Examples include protein address codes [Ber08B], acetylation codes [Kni06], RNA codes [Fai07], metabolic codes [Bru07], cytoskeleton codes [Gim08], histone codes [Jen01], and alternative splicing codes [Bar10]. Donald E. Johnson – Programming of Life – pg.51 - 2010 further notes: Programming of Life - Information - Shannon, Functional & Prescriptive – video https://www.youtube.com/watch?v=h3s1BXfZ-3w Dr. Don Johnson explains the difference between Shannon Information and Prescriptive Information, as well as explaining 'the cybernetic cut', in this following Podcast: Programming of Life - Dr. Donald Johnson interviewed by Casey Luskin - audio podcast http://www.idthefuture.com/2010/11/programming_of_life.html Three subsets of sequence complexity and their relevance to biopolymeric information - Abel, Trevors Excerpt: Three qualitative kinds of sequence complexity exist: random (RSC), ordered (OSC), and functional (FSC).,,, Shannon information theory measures the relative degrees of RSC and OSC. 
Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC of OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC).,,, Testable hypotheses about FSC What testable empirical hypotheses can we make about FSC that might allow us to identify when FSC exists? In any of the following null hypotheses [137], demonstrating a single exception would allow falsification. We invite assistance in the falsification of any of the following null hypotheses: Null hypothesis #1 Stochastic ensembles of physical units cannot program algorithmic/cybernetic function. Null hypothesis #2 Dynamically-ordered sequences of individual physical units (physicality patterned by natural law causation) cannot program algorithmic/cybernetic function. Null hypothesis #3 Statistically weighted means (e.g., increased availability of certain units in the polymerization environment) giving rise to patterned (compressible) sequences of units cannot program algorithmic/cybernetic function. Null hypothesis #4 Computationally successful configurable switches cannot be set by chance, necessity, or any combination of the two, even over large periods of time. We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses. This renders each of these hypotheses scientifically testable. We offer the prediction that none of these four hypotheses will be falsified. 
http://www.tbiomed.com/content/2/1/29 Mathematically Defining Functional Information In Molecular Biology - Kirk Durston - video https://vimeo.com/1775160 Kirk Durston - Functional Information In Biopolymers - video http://www.youtube.com/watch?v=QMEjF9ZH0x8 Measuring the functional sequence complexity of proteins - Kirk K Durston, David KY Chiu, David L Abel and Jack T Trevors - 2007 Excerpt: We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable. To demonstrate the relevance to functional bioinformatics, a method to measure functional sequence complexity was developed and applied to 35 protein families.,,, http://www.tbiomed.com/content/4/1/47bornagain77
November 17, 2014 at 07:47 AM PDT
On the challenge itself: it hits the very circularity problem that Winston pointed out. It is part of the definition of CSI that the chance hypothesis being considered is so unlikely to produce the outcome it is effectively impossible. So of course you will never find a chance hypothesis generating 500 bits of CSI.

markf
November 17, 2014 at 07:45 AM PDT
I must admit, I am frustrated with the term "specified". I would rather use "function specifying". Provide 500 bits of data which, when provided to a data to function converter (such as a computer) produces complex function.

Moose Dr
November 17, 2014 at 07:38 AM PDT
