
Fixing a Confusion


I have often noticed a confusion about one of the major points of the Intelligent Design movement – whether or not the design inference is primarily based on the failure of Darwinism and/or mechanism.

This is expressed in a recent thread by a commenter saying, “The arguments for this view [Intelligent Design] are largely based on the improbability of other mechanisms (e.g. evolution) producing the world we observe.” I’m not going to name the commenter because this is a common confusion that a lot of people have.

The reason for this is largely historical. It used to be that the arguments for design were very plain. Biology proceeded according to a holistic plan both in the organism and the environment. This plan indicated a clear teleology – that the organism did things that were *for* something. These organisms exhibited a unity of being. This is evidence of design. It has no reference to probabilities or improbabilities of any mechanism. It is just evidence on its own.

Then, in the 19th century, Darwin suggested another possible explanation for this cohesion – natural selection. Unity of plan and teleological design, according to Darwin, could also arise through selection.

Thus, the original argument is:

X, Y, and Z indicate design

Darwin’s argument is:

X, Y, and Z could also indicate natural selection

Therefore, we simply show that Darwin is wrong in this assertion. If Darwin is wrong, then the original evidence for design (which was not based on any probability) goes back to being evidence for design. The only reason probabilities appear in the modern design argument is that Darwinites have said, “you can get that without design”, so we modeled NotDesign as well, to show that it can’t be done that way.

So, the *only* reason we are talking about probabilities is to answer an objection. The original evidence *remains* the primary evidence that it was based on. Answering the objection simply removes the objection.
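To make the role of the probability step concrete, here is a minimal sketch, in Python, of the kind of calculation meant by modeling NotDesign. All of the numbers are hypothetical placeholders; the point is only the form of the argument: a target that is sufficiently improbable per trial stays improbable even after granting the chance hypothesis a generous budget of trials.

    # Minimal sketch of the "model NotDesign" step. The numbers below are
    # hypothetical placeholders, not estimates for any real biological system.
    from math import log2

    p_single_hit = 1e-60     # assumed probability that one random trial hits the functional target
    trials_available = 1e40  # assumed number of trials granted to the chance hypothesis

    # For p * N << 1, P(at least one hit) is approximately N * p (union bound).
    p_at_least_one_hit = trials_available * p_single_hit

    print(f"target improbability per trial: ~{-log2(p_single_hit):.0f} bits")
    print(f"chance of at least one hit over all trials: ~{p_at_least_one_hit:.0e}")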

As a case in point, CSI is based on the fact that designed things have a holistic unity. Thus, they follow a specification that is simpler than their overall arrangement. CSI is the quest to quantify this point. It does involve a chance rejection region as well, but the main point is that designs must operate on principles simpler than their realization (which provides the reduced Kolmogorov complexity for the specificational complexity).
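Since true Kolmogorov complexity is uncomputable, a general-purpose compressor is sometimes used as a crude stand-in to illustrate the idea that a patterned arrangement admits a description much shorter than itself. The following Python sketch is only an illustration of that intuition, not Dembski's formal specificational-complexity measure:

    # Crude illustration: a string generated by a short rule compresses far better
    # than a random string of the same length. zlib is only a rough proxy for
    # Kolmogorov complexity, which is not computable in general.
    import random
    import zlib

    def compressed_size(s: str) -> int:
        return len(zlib.compress(s.encode()))

    patterned = "ATTGC" * 200                                      # 1000 characters from a 5-character rule
    random_str = "".join(random.choice("ATGC") for _ in range(1000))

    print(compressed_size(patterned))   # small: the generating rule is simple
    print(compressed_size(random_str))  # much larger: the compressor finds no short generating rule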

Comments
johnnyb and Silver Asiatic: you may like to check this out: https://uncommondesc.wpengine.com/intelligent-design/is-artificial-intelligence-taking-over-alphago-version/#comment-622847 Dionisio
Silver Asiatic, What about this? http://www.thethirdwayofevolution.com/ Dionisio
Dionisio
What would be the highest standard to compare UD with?
I don't know, but some kind of science-oriented blog would be better, I'd think. At the same time, many members of UD have joined TSZ so it does serve as a parallel, at least for those people. I would guess Coyne's or Myers's blogs would be more appropriate.
Professor John Lennox responded that “atheism is a fairy story for people afraid of the light.”
Very clever. :-) Silver Asiatic
SA @219: That's an interesting observation. Thank you. Dionisio
SA @217:
...maybe we should compare ourselves [UD] with something of a higher caliber?
That's a very interesting suggestion. What would be the highest standard to compare UD with? Or maybe we should not compare it at all? What for? PS. Perhaps we should compare everything against a higher standard, but in some cases that's not easy (or maybe not possible at all?). Here's an off-topic example of comparing ourselves with the highest standard. For quite some time I could not understand why Paul the apostle called himself "the worst sinner" (1 Timothy 1:15) and again "the worst of sinners" (1 Timothy 1:16). That did not make sense to me. It was confusing. If the author of most epistles collected in the NT canon is the worst sinner, then what about me and the rest of us? Fortunately it is clear now. Paul does not compare himself with other sinners, but against the highest standard: Christ. In that comparison, we all share Paul's title of "worst sinners". Definitely I do. Actually, the closer we get to the Light, the more visible our imperfections become. Perhaps that's why we naturally don't like to get close to the light. "This is the verdict: Light has come into the world, but people loved darkness instead of light because their deeds were evil." [John 3:19 (NIV)] In a Telegraph article* Professor Stephen Hawking was quoted saying that "Heaven is a fairy story for people afraid of the dark". In an interview** in Australia that same year Professor John Lennox responded that "atheism is a fairy story for people afraid of the light." Basically both sides depend on faith. (*) http://www.telegraph.co.uk/news/science/stephen-hawking/8515639/Stephen-Hawking-heaven-is-a-fairy-story-for-people-afraid-of-the-dark.html (**) http://www.abc.net.au/radionational/programs/spiritofthings/an-evening-with--john-lennox/2928496#transcript Dionisio
Dionisio I think people discuss things on these kinds of blogs not only for learning about hard science but also for being part of a community that supports their thinking. People build friendships and attack a common enemy. It gives people a feeling of control and some identity also. Sometimes it's like a game and winning is the goal in the conversation. And I don't mean just personal victories. There may be a chance to promote one's worldview and hope that other people will be converted to it. Silver Asiatic
SA @217: I like that you are willing to test this quick comparison. We should test everything and hold what is good. We may blame it on KF @177 for mentioning that site. :) When I read "TSZ" in KF's comment @177 and looked at that site, it did not seem like a blog where biology is discussed as often and deeply as here at UD. Hence the quick comparison started, but with the caveat that has been added in some of the previous comments. The search results might not be a true reflection of the OPs where the searched terms are found. Also, keywords that are a subset of larger keywords could increase the count. But perhaps it gives us an idea of the different approaches to science. And it is an entertaining exercise too. :) Dionisio
Dionisio It's a fascinating project you've taken on comparing statistics between the two sites. With admiration for your research, I'll offer a few contrary thoughts (in the spirit of discussion). First, UD is the premier ID site and ID is the most innovative theory in science today. TSZ is ... nothing? That is, it represents nothing. There's no idea, theory, program or direction of any kind behind it. With that, maybe we should compare ourselves with something of a higher caliber? Secondly, responding here:
I really don’t miss those politely dissenting interlocutors.
I'll agree that many of them just created more noise. They lack sincerity. But I also wish we had more thoughtful opposition, the best of them. Wishful thinking perhaps here.
GP and you were the main players in the “heated” discussion that took place here a short while ago. The discussion was very serious and productive.
Thank you! And credit to GP here - as well as you and others who participated. It certainly gets dull without an exchange of opposing views, so I'm glad we did that.
Their site seems to be open to all views, because I saw a few posts by ID-friendly folks.
Yes, I see Mung there quite a lot. Vjtorley offered some serious commentary, as do a few other IDists. So, there's something attractive about the site. Maybe it's the openness.
The anti-ID posts I saw in that site looked like the hogwash comments they used to post here. If that’s the case, then this site not only didn’t lose anything of value, but this site has gained seriousness after the politely-dissenting interlocutors moved away.
Good point. UD does a good job in upholding a higher standard, for the most part. That is appreciated, even though it means the discussions aren't as lively at times. That's a small price to pay for a more serious atmosphere. Silver Asiatic
KF, Your timely comment @189 has triggered this correcting recount that seems to make TSZ look even worse. Considering that a substantial proportion of the relatively much smaller number of biology-related OPs in that site were apparently written by serious ID-friendly folks, the fundamental question "where's the beef?" comes to mind. :) Are they seriously interested in science? Dionisio
After seeing so many mistakes in my comparison results, now I doubt the rest of the items that have not been reviewed/corrected yet. Let's review them again. Dionisio
Summary stats update:
Keyword ............. Posted ... UD ... TSZ
morphogen ........... @180 ...... 5 ..... 2
tRNA ................ @181 ...... 6 ..... 0
gastrulation ........ @182 ...... 1 ..... 0
epigenetics ......... @183 ..... 22 ..... 5
proteomics .......... @184 ...... 1 ..... 0
mitosis ............. @185 ...... 1 ..... 0
meiosis ............. @186 ...... 2 ..... 0
centrosome .......... @187 ...... 1 ..... 0
neuroscience ........ @195 ...... 8 ..... 3
ribosome ............ @204 ...... 9 ..... 1
genome .............. @207 ..... 10 ..... 3
genomics ............ @210 ..... 19 ..... 3
chromosome .......... @212 ..... 32 ..... 4
Dionisio
#193 error correction: Here’s another embarrassing mistake: my comment @193 is not even WRONG. Since several OPs with the term 'chromosome’ explicitly referenced in the title appeared first and they were all from before 2016, I mistakenly didn’t scroll down to look for OPs without explicit references to the given term. Big mistake. Mea culpa. Solar mea culpa. I apologize for such a careless error. This case illustrates a possible consequence of not being careful. Shame on me! Dionisio
Searched both sites for the term “chromosome” this year only: UD: 32 TSZ: 4 Please, note that the search could have been done incorrectly. Additional verification is welcome! Dionisio
Summary stats update:
Keyword ............. Posted ... UD ... TSZ
morphogen ........... @180 ...... 5 ..... 2
tRNA ................ @181 ...... 6 ..... 0
gastrulation ........ @182 ...... 1 ..... 0
epigenetics ......... @183 ..... 22 ..... 5
proteomics .......... @184 ...... 1 ..... 0
mitosis ............. @185 ...... 1 ..... 0
meiosis ............. @186 ...... 2 ..... 0
centrosome .......... @187 ...... 1 ..... 0
chromosome .......... @193 ...... 3 ..... 4
neuroscience ........ @195 ...... 8 ..... 3
ribosome ............ @204 ...... 9 ..... 1
genome .............. @207 ..... 10 ..... 3
genomics ............ @210 ..... 19 ..... 3
Dionisio
Searched both sites for the term “genomics” this year only: UD: 19 TSZ: 3 Please, note that the search could have been done incorrectly. Additional verification is welcome! For example in this case the search for 'genomics' may count 'epigenomics' too. Dionisio
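A small Python illustration of that caveat (the sentence is made up; this is only about how substring counting behaves, not about either site's actual content):

    # Substring counting picks up "genomics" inside "epigenomics";
    # a word-boundary match does not.
    import re

    text = "Advances in epigenomics and comparative genomics were discussed."
    print(text.lower().count("genomics"))                          # 2 (substring matches)
    print(len(re.findall(r"\bgenomics\b", text, re.IGNORECASE)))   # 1 (whole word only)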
#194 error correction: Here's another embarrassing mistake: my comment @194 is not even WRONG. Since several OPs with the term 'genomics’ explicitly referenced in the title appeared first and they were all from before 2016, I mistakenly didn’t scroll down to look for OPs without explicit references to the given term. Big mistake. Mea culpa. Solar mea culpa. I apologize for such a careless error. This case illustrates a possible consequence of not being careful. Shame on me! Dionisio
Summary stats update:
Keyword ............. Posted ... UD ... TSZ
morphogen ........... @180 ...... 5 ..... 2
tRNA ................ @181 ...... 6 ..... 0
gastrulation ........ @182 ...... 1 ..... 0
epigenetics ......... @183 ..... 22 ..... 5
proteomics .......... @184 ...... 1 ..... 0
mitosis ............. @185 ...... 1 ..... 0
meiosis ............. @186 ...... 2 ..... 0
centrosome .......... @187 ...... 1 ..... 0
chromosome .......... @193 ...... 3 ..... 4
genomics ............ @194 ...... 0 ..... 3 (what's wrong UD?)
neuroscience ........ @195 ...... 8 ..... 3
ribosome ............ @204 ...... 9 ..... 1
genome .............. @207 ..... 10 ..... 3
Dionisio
Searched both sites for the term “genome” in 2015-2016: UD: 10 TSZ: 3 Please, note that the search could have been done incorrectly. Additional verification is welcome! Dionisio
@198 closed the comparison exercise, but KF's timely clarification @200 rightly challenged the results posted @197 and compelled me to review it carefully. This is an example of the command to test everything and hold what is good. Dionisio
Summary stats:
Keyword ............. Posted ... UD ... TSZ
morphogen ........... @180 ...... 5 ..... 2
tRNA ................ @181 ...... 6 ..... 0
gastrulation ........ @182 ...... 1 ..... 0
epigenetics ......... @183 ..... 22 ..... 5
proteomics .......... @184 ...... 1 ..... 0
mitosis ............. @185 ...... 1 ..... 0
meiosis ............. @186 ...... 2 ..... 0
centrosome .......... @187 ...... 1 ..... 0
chromosome .......... @193 ...... 3 ..... 4
genomics ............ @194 ...... 0 ..... 3 (what's wrong UD?)
neuroscience ........ @195 ...... 8 ..... 3
ribosome ............ @202 ...... 9 ..... 1
Dionisio
#200-203 follow-up Searched both sites for the term “ribosome” in 2015-2016: UD: 29 TSZ: 1 Please, note that the search could have been done incorrectly. Additional verification is welcome! Dionisio
#202 addendum: the UD search results for "ribosome" (2015-2016) include the following OPs:

UD Guest Post: Dr Eugen S on the second law of thermodynamics (plus . . . ) vs. “evolution”
December 3, 2016. Posted by kairosfocus under Back to Basics of ID, biosemiotics, Complex Specified Information, Cybernetics and Mechatronics, Darwinist rhetorical tactics, Entropy, Functionally Specified Complex Information & Organization, ID Foundations, Informatics, thermodynamics and information, UD Guest Posts. 13 Comments.
Our Physicist and Computer Scientist from Russia — and each element of that balance is very relevant — is back, with more. MOAR, in fact. This time, he tackles the “terror-fitted depths” of thermodynamics and biosemiotics. (NB: Those needing a backgrounder may find an old UD post here and a more recent one here, helpful.) […]

BTB: Points to ponder as we look at Crick’s understanding of DNA as text, since March 19, 1953
December 2, 2016. Posted by kairosfocus under Back to Basics of ID, Darwinist rhetorical tactics, Functionally Specified Complex Information & Organization, ID Foundations, info in nature & the future of Sci-Tech, Selective Hyperskepticism, thermodynamics and information. 5 Comments.
A few days back, I headlined a clip from Crick’s letter to his son Michael, March 19, 1953: The main text is accessible here (with page scans). Sans diagrams: >>My Dear Michael, Jim Watson and I have probably made a most important discovery. We have built a model for the structure of des-oxy-ribose-nucleic-acid (read it […]

Cell’s biggest organelle is tightly packed tubes, not sheets
November 5, 2016. Posted by News under Cell biology, Intelligent Design. 2 Comments.
From Laurel Hamer at Science News: Textbook drawings of the cell’s largest organelle might need to be updated based on new images. Super-resolution shots of the endoplasmic reticulum reveal tightly packed tubes where previous pictures showed plain flat sheets, scientists report in the Oct. 28 Science. The finding helps explain how the endoplasmic reticulum, or […]

BTB, Q: Where does the FSCO/I concept come from? (Is it reasonable/ credible?)
November 5, 2016. Posted by kairosfocus under Back to Basics of ID, Complex Specified Information, Darwinist rhetorical tactics, Functionally Specified Complex Information & Organization, ID Foundations, Intelligent Design. 63 Comments.
A: One of the old sayings of WW II era bomber pilots was that flak gets heaviest over a sensitive target. So, when something as intuitively obvious and easily demonstrated as configuration-based, functionally specific complex organisation and/or associated (explicit or implicit) information — FSCO/I — becomes a focus for objections, that is an implicit sign […]

Protozoans with no dedicated stop codons?
October 6, 2016. Posted by News under Genomics, News. 1 Comment.
From Karen Zusi at The Scientist: The genetic code—the digital set of instructions often laid out in tidy textbook tables that tells the ribosome how to build a peptide—is identical in most eukaryotes. But as with most rules, there are exceptions. During a recent project on genome rearrangement in ciliates, Mariusz Nowacki, a cell biologist […]

Nobel award for design of molecular machines
October 5, 2016. Posted by DLH under Biomimicry, Complex Specified Information, Cybernetics and Mechatronics, Design inference, Intelligent Design. 9 Comments.
“three laureates discovered how to use molecules as components of tiny machines that can be controlled to perform specific tasks.”

Michael Denton: Life – 4 B years with no change
June 25, 2016. Posted by News under Evolution, News, stasis. 1 Comment.
From Michael Denton, author of Evolution: Still a Theory in Crisis: As with other taxa-defining novelties, there is no evidence that any fundamental changes have occurred in the basic design of the cell system since its origination. The cell membrane, the basic metabolic paths, the ribosome, the genetic code, etc., are essentially invariant in all […]

“Here we report a new cell”
April 29, 2016. Posted by Upright BiPed under Information, Intelligent Design, Origin Of Life. 13 Comments.
Cells are the fundamental units of life. The genome sequence of a cell may be thought of as its operating system. It carries the code that specifies all of the genetic functions of the cell, which in turn determine the cellular chemistry, structure, replication, and other characteristics. Each genome contains instructions for universal functions […]

Sean Pitman on evolution of mitochondria
March 3, 2016. Posted by News under Cell biology, News, Origin Of Life. 180 Comments.
From Detecting Design: Now, it is true that mitochondrial organelles are quite unique and very interesting. Unlike any other organelle, except for chloroplasts, mitochondria appear to originate only from other mitochondria. They contain some of their own DNA, which is usually, but not always, circular – like circular bacterial DNA (there are also many organisms […]
Dionisio
#197 gross error correction: Searched both sites for the term “ribosome” this year only: UD: 9 TSZ: 1 Please, note that the search could have been done incorrectly. Additional verification is welcome! Dionisio
KF, That's a very timely clarification, as usual. Thank you. Actually, my comment @197 is embarrassingly WRONG. Since several OPs with the term 'ribosome' explicitly referenced in the title appeared first and they were all from before 2016, I mistakenly didn't scroll down to look for OPs without explicit references to the given term. Big mistake. Mea culpa. Solar mea culpa. I apologize for such a careless error. Shame on me! Dionisio
D, the ribosome has frequently been discussed in UD in the context of protein synthesis, but mostly in text and in discussion threads, or by way of diagrams. KF kairosfocus
SA, Disclaimer: note that the searches I ran are kind of superficial and could be misleading because they look for keywords within the OPs. Finding a particular keyword within a text does not guarantee that the given text is written seriously. But it was kind of interesting to get a general idea of the biology-related discussion taking place in those blogs. Basically we should not read too much into those results. Maybe it was an entertaining exercise for some of us? :) Dionisio
OK, enough stats entertainment. Back to work. :) Dionisio
Searched both sites for the term “ribosome” this year only: UD: 0 (zero, null, nada, nesuno, nic, nichevó) TSZ: 1 Please, note that the search could have been done incorrectly. Additional verification is welcome! Dionisio
Summary stats:
Keyword ............. Posted ... UD ... TSZ
morphogen ........... @180 ...... 5 ..... 2
tRNA ................ @181 ...... 6 ..... 0
gastrulation ........ @182 ...... 1 ..... 0
epigenetics ......... @183 ..... 22 ..... 5
proteomics .......... @184 ...... 1 ..... 0
mitosis ............. @185 ...... 1 ..... 0
meiosis ............. @186 ...... 2 ..... 0
centrosome .......... @187 ...... 1 ..... 0
chromosome .......... @193 ...... 3 ..... 4
genomics ............ @194 ...... 0 ..... 3 (what's wrong UD?)
neuroscience ........ @195 ...... 8 ..... 3
Dionisio
KF: Searched both sites for the term “neuroscience” this year only: UD: 8 TSZ: 3 Please, note that the search could have been done incorrectly. Additional verification is welcome! Dionisio
KF and SA: Here’s a case where UD did really bad. Searched both sites for the term “genomics” this year only: UD: 0 (zero, null, nada, nesuno, nic, nichevó) TSZ: 3 Please, note that the search could have been done incorrectly. Additional verification is welcome! Dionisio
KF and SA: Here's a case where TSZ got more posts than UD: Searched both sites for the term “chromosome” this year only: UD: 3 TSZ: 4 Please, note that the search could have been done incorrectly. Additional verification is welcome! Dionisio
Silver Asiatic,
But it’s probably true also that TSZ took some activity away from here.
If they took some activity away from UD, then looking at the results posted @180-188 one could say they took non-scientific whining away from UD, which is healthy. I really don't miss those politely dissenting interlocutors. GP and you were the main players in the "heated" discussion that took place here a short while ago. The discussion was very serious and productive. The "harsh" critics were not needed at all. Actually, most probably they would have taken some seriousness away from the discussion. And most certainly they would have distracted GP's and your attention away from the central topic. Maybe I would not ban them, but prefer they voluntarily go away somewhere else with their nonsensical whining. Their site seems to be open to all views, because I saw a few posts by ID-friendly folks. Perhaps those ID-friendly posts are the best they can read in their site. The anti-ID posts I saw in that site looked like the hogwash comments they used to post here. If that's the case, then this site not only didn't lose anything of value, but this site has gained seriousness after the politely-dissenting interlocutors moved away. Dionisio
Interesting research, Dionisio. I would guess they're just trying to be skeptical about things, and not necessarily scientific.
Just wanted to add some 'spice' in this discussion thread that seems overwhelmingly disproportionate on ID comments. :) The politely dissenting interlocutors kept off this discussion. Perhaps that's a healthy sign? Maybe they're running out of arguments or even seriously considering switching sides?
I like your optimism. But it's probably true also that TSZ took some activity away from here. I, for one, would welcome more opposition here. But I also think anti-ID hatred has died down somewhat and that's a good thing. Perhaps we should invite more thoughtful ID critics to join us -- or open a thread with that intent. Silver Asiatic
KF, Yes, agree on that. Glad you have brought up this topic here. Well, they brought it up there, but you pointed at it. Actually, that's one thing that has gone ridiculously wrong in the area of morphogen gradient formation and interpretation, which by the way was the topic of the simple question professor Larry Moran failed to answer correctly in another thread in UD. For quite a long time it was generally assumed that diffusion alone was responsible for the morphogen gradient formation. Lately new research has discovered that in a substantial number of cases diffusion alone does not resolve the conundrum. A few of those recent papers are referenced in another thread in UD. Now, one question is why would highly educated thinking people accept that obviously incomplete idea as the solution to the problem? A 7-year-old child would have realized something else was missing in that picture. I think it is related to the pebble story you pointed at too, isn't it? Now, aside from that, did you see (posted @180-187) how poorly that other site did in the comparisons of biology-related OPs? And some of the few OPs that were counted were actually written by ID-friendly folks. What does that tell us? Which site is more serious about science-related discussions? Just guess. :) PS. I would not pay much attention to what they write somewhere else, unless it's serious. In this case it doesn't seem so. Dionisio
D, Chesil beach was at the top of TSZ, and it is a subject where the owner there was corrected here at UD years ago. The hydrodynamic sorting that grades pebble size along that beach -- smugglers used to tell where they were in the dark by feeling pebble size -- is not equal to functionally specific complex organisation and/or associated rich information. It does not exhibit high resistance to algorithmic compression, and so forth. KF kairosfocus
KF: After a few comparisons it looks like comparing USA and Poland on their total medal counts at the last Olympics in Rio. Basically two different categories. Now I forgot what was it that you asked me to look at? :) Dionisio
KF: Searched both sites for the term “centrosome” this year only: UD: 1 TSZ: 0 (zero, null, nada, nesuno, nic, nichevó) Please, note that the search could have been done incorrectly. Additional verification is welcome! Dionisio
KF: Searched both sites for the term “meiosis” this year only: UD: 2 TSZ: 0 (zero, null, nada, nesuno, nic, nichevó) Please, note that the search could have been done incorrectly. Additional verification is welcome! Dionisio
KF: Searched both sites for the term “mitosis” this year only: UD: 1 TSZ: 0 (zero, null, nada, nesuno, nic, nichevó) Please, note that the search could have been done incorrectly. Additional verification is welcome! Dionisio
KF: Searched both sites for the term “proteomics” this year only: UD: 1 TSZ: 0 (zero, null, nada, nesuno, nic, nichevó) Please, note that the search could have been done incorrectly. Additional verification is welcome! Dionisio
KF: Searched both sites for the term “epigenetics” this year only: UD: 22 TSZ: 5 Please, note that the search could have been done incorrectly. Additional verification is welcome! Dionisio
KF: Searched both sites for the term "gastrulation" this year only: UD: 1 https://uncommondesc.wpengine.com/intelligent-design/homologies-differences-and-information-jumps/ TSZ: 0 (zero, null, nada, nesuno, nic, nichevó) Please, note that the search could have been done incorrectly. Additional verification is welcome! Dionisio
KF: Searched both sites for the term "tRNA" this year only: UD: 6 https://uncommondesc.wpengine.com/intelligent-design/antibiotic-resistance-evolution-at-work/ https://uncommondesc.wpengine.com/junk-dna/junk-dna-back-with-a-vengeance/ https://uncommondesc.wpengine.com/intelligent-design/here-we-report-a-new-cell/ https://uncommondesc.wpengine.com/intelligent-design/an-encounter-with-a-critic-of-biological-semiosis/ https://uncommondesc.wpengine.com/physics/roger-highfield-on-walking-by-faith-and-not-by-sight-in-science/ https://uncommondesc.wpengine.com/chemistry/alicia-cartelli-on-agiogenesis/ TSZ: 0 (zero, null, nada, nesuno, nic, nichevó) Please, note that the search could have been done incorrectly. Additional verification is welcome! Dionisio
KF, I took a quick look at the web site you pointed at. I did it because you referred to it. Looking at UD takes most of my spare time. Here's a minor observation: Morphogenesis is one of the many interesting areas in biology. Morphogens are signal molecules involved in the morphogenesis. I've been trying to learn a little of that stuff lately. Have to humbly admit that reading biology papers is very difficult for me. It's a challenge. Also, the amount of research information coming out of wet and dry labs is quite large and it seems to increase. Out of curiosity I found how many OPs mentioned the term "morphogen" in those two web sites, just this year. Here are the results: UD = 5 https://uncommondesc.wpengine.com/intelligent-design/the-highly-engineered-transition-to-vertebrates-an-example-of-functional-information-analysis/ https://uncommondesc.wpengine.com/evolution/my-thoughts-on-the-krauss-meyer-lamoureux-debate/ https://uncommondesc.wpengine.com/news/cells-poll-their-neighbours-before-moving-around/ https://uncommondesc.wpengine.com/intelligent-design/bipedalism-regulatory-area-missing-in-humans/ https://uncommondesc.wpengine.com/intelligent-design/two-quick-questions-for-professor-coyne/ TSZ = 2 http://theskepticalzone.com/wp/non-dna-structural-inheritance/ http://theskepticalzone.com/wp/the-reasonableness-of-atheism-and-black-swans/ However, it seems like most of the above linked articles (if not all) were written by ID-friendly folks. Dionisio
KF, Please, you may disregard my previous comment @178. You referred to another web site, right? I'll take a look at it to see what you pointed at. Dionisio
KF @177: Perhaps that topic is well above my pay grade: http://www.bing.com/search?q=tsz&src=IE-SearchBox&FORM=IENTSR&pc=EUPP_ Sorry, I don't use other search engines. I'm loyal to MSFT. Most of my software development work has been on top of their framework. No iOS or Android, except through Xamarin. :) Please, remember I'm just a student wannabe. Still highly appreciate your insightful OPs and comments, but not all are at my level. The same with gpuccio's explanations. You guys are too technical for me sometimes. Would you lower the bar a little for me this time? Thank you. Dionisio
D, just popped by TSZ for the first time in many Moons. Chesil beach -- pebbles grade per position along the beach --exhibiting high algorithmic complexity? hydrodynamic sorting is law, with statistical underpinning. Thus, in principle highly compressible even if we do not know how to write the code well. Where, too, there is no identifiable functionality from the pebbles that is critically dependent on particular organised arrangements and couplings of pebbles, as opposed to say D/RNA or protein molecules. Remember, the Mandelbrot set is in fact highly compressed by a relatively simple criterion in z. KF kairosfocus
KF @173: [regarding the comment @165] Yes, your persuasive argument convinced me. Strong teleology seems visible in those bad fake label knock-offs too. BTW, @165 I kind of misbehaved like a troll, didn't I? Just wanted to add some 'spice' in this discussion thread that seems overwhelmingly disproportionate on ID comments. :) The politely dissenting interlocutors kept off this discussion. Perhaps that's a healthy sign? Maybe they're running out of arguments or even seriously considering switching sides? :) PS. Enjoyed reading 174, 175 too. Valid points. Thank you. Dionisio
Origines, 160:
I would like to propose another additional premise: Q explains the presence of A, B, C and D. IOWs the presence of Q makes logical sense, given the presence of A, B, C and D.
I adapt, Q1, Q2, Q3 . . . Qn are possible alternatives, and of these Qi is the best by certain criteria. In short, I argue that there is an implicit inference to the best current explanation in a lot of inductive reasoning. One form of such is, There is a case of examples A, B ,C . . . M. They seem to share a common genus, never mind differences D1, . . . Dm. We now see candidate T, which shares the relevant core characteristics G1, . . . Gk. We argue, this is best explained by T being in the same genus, and thus the same core, never mind differentia Dt. Therefore, our provisional assignment is that T is in G, and we infer on this that it will exhibit the range of characteristics and behaviours {G} as presently understood and as may be further elaborated. We then predict and may test future observations. This also shows the underlying premise of stable order in the cosmos, leading to coherence, consistency and predictability. But, given that we are imperfect in knowledge, such is inherently provisional and open-ended. In the case of the design inference, the points here are fairly obvious. KF PS: Notice how fruitful this no-trolls discussion is? (Responsible critics welcome. Trolls, we got some reserved tickets to Norway. [HYP: Agitated dysfunctional behaviour of trolls reflects over-heating addling brains; proposed test, ship 'em to Norway and see how their behaviour moderates in their temperature zone of origin.]) kairosfocus
UB, you are right, specification takes functional organisation and implied intelligence and agency in my view. Then, back to our Celestron 8-inchers (or bigger! [All I want for Christmas is a 20" reflector 'scope and mini observatory, Santa!]). Cell-based life uses C-chemistry, aqueous medium, terrestrial planets and Nitrogen, to get us to proteins. It turns out that the physics and circumstances of a cosmos to get us to the operating point we observe, are very fine tuned. Specification on steroids, in fact. The full force of design theory comes out when the world of life and the fine tuned universe are bridged through the chemistry of life and getting to the chemistry of life. KF PS: As in, what is the role of information in the world of life and the cosmos? Where does that point to? kairosfocus
D, even bad fake label knock-offs made in sweat shops somewhere, are designed. And, believe you me, they meet someone's design goal! (That mocking laughter and chinking you hear is someone laughing and shaking his money-bags as he dances devilishly all the way to the bank. [Gaol for he!]) KF kairosfocus
Origenes: "Philo and Cleanthes: "Whaat??"" Strange people, philosophers! :) gpuccio
KF: "I note that analogies are involved in a lot of induction as well, the root reasoning involved in science. " Absolutely! :) gpuccio
Origines and GP: Excellent point. I note that analogies are involved in a lot of induction as well, the root reasoning involved in science. It is ill advised to saw off branches on which we must all sit. KF kairosfocus
GPuccio @167, Again, profound insight, which requires reflection.
GPuccio: Therefore, we infer that others, too, have those conscious representations accompanying their outer behaviour.
Without this particular inference from analogy there would be neither a basis, nor a reason to engage in rational debate with one another. It's therefore amusing that Hume chose to present his criticism by depicting three philosophers named Demea, Philo, and Cleanthes debating the inference from analogy. It would have been great if Demea had said: "Well guys, I do hope that we are all aware of the fact that our trust in the inference from analogy is prerequisite to this debate." Philo and Cleanthes: "Whaat??" Origenes
gpuccio @167: Excellent points. Thank you. Dionisio
Origenes: "I find this an extremely interesting insight. Really. Thank you very much." Thanks to you. I am happy that you appreciate that point, because I think it is very important, and often misunderstood. :) Indeed, consciousness is a strange epistemological object. We can say that our consciousness has cognition of consciousness itself in two different ways: a) We have direct cognition of what consciousness is in ourselves, because we directly perceive our consciousness. So, we can say that for us our consciousness is a fact, a fact which is the foundation of any other knowledge. b) We have indirect cognition of the consciousness of other beings, and that cognition is only an inference, and it is based on analogy: that's why it is strongest for other human beings, because their appearance, physical body, behaviour, language, cognitive activities, manifestations of feeling and emotion, and so on, have the highest similarity with our own. And the important point is: we know directly that, in us, those behaviours etc. are strictly linked to conscious representations: our conscious representations, our precious basic facts. Therefore, we infer that others, too, have those conscious representations accompanying their outer behaviour. So, the important point is: one of the most basic convictions of each of us (because who, among us, really doubts that his friends and fellow human beings are conscious?) is based on an inference from analogy. Not on pure logic and deduction, not on mathematics or sophisticated forms of reasoning. Just an analogy, but so strong, so obvious to the mind and to feeling and to intuition, that nobody can really doubt it. gpuccio
Bartlett: It used to be that the arguments for design were very plain. Biology proceeded according to a holistic plan both in the organism and the environment. This plan indicated a clear teleology – that the organism did things that were *for* something. These organisms exhibited a unity of being. This is evidence of design. It has no reference to probabilities or improbabilities of any mechanism. It is just evidence on its own.
I like this paragraph. All the same, we found a mechanism, and it might even be obvious that we would. It takes two things to specify something in a material world. Why? Because nothing specifies anything. To specify something takes an organizational device, whereby one thing serves as a medium of information and another thing establishes what is being represented. And if nothing is ever specified, then life isn't going to happen. It would have no way to become or remain organized. We can look through a telescope and see what 15 billion years of no specification looks like. But to get life going you have to have enough of these irreducible two-part relationships formalized in the system in order to have the informational capacity you'll need to describe the system -- and of course, they are all formalized simultaneously by being encoded in the very memory they make possible. The life-cycle of the cell requires it. Howard Pattee asks the question "How do we know when a measurement has been made?" The answer he gives is "When there's a record of it". Upright BiPed
gpuccio @162:
As for me, if I find a watch in a field, I have absolutely no doubt that it is designed.
Well, if it's Swiss then most probably it's designed, otherwise it could be a copy, and sometimes a bad imitation. :) Dionisio
Gpuccio: The reason why we all believe that other people are conscious is an inference from analogy.
I find this an extremely interesting insight. Really. Thank you very much. - - - -
Silver Asiatic: It also strikes me that Hume’s argument is not a reasonable approximation of what the design argument actually is. He created a straw man.
Dembski writes about Hume's criticism of the design argument: "It is this criticism that for many philosophers of religion remains decisive against design." Origenes
Origenes
Here I would like to propose another additional premise: Q explains the presence of A, B, C and D. IOWs the presence of Q makes logical sense, given the presence of A, B, C and D.
Good addition. It also strikes me that Hume's argument is not a reasonable approximation of what the design argument actually is. He created a straw man. Silver Asiatic
Origenes: The reason why we all believe that other people are conscious is an inference from analogy. Nobody really questions that inference. Some arguments from analogy may be feeble, but many are really strong. As for me, if I find a watch in a field, I have absolutely no doubt that it is designed. Long live Paley's argument. gpuccio
UB: "I believe we understand and appreciate each other’s positions, and realize that we end up at the same conclusion in the end." Absolutely! And slight differences in the approach can only enrich the discussion. :) gpuccio
Bartlett: It used to be that the arguments for design were very plain. Biology proceeded according to a holistic plan both in the organism and the environment. This plan indicated a clear teleology – that the organism did things that were *for* something. These organisms exhibited a unity of being. This is evidence of design. It has no reference to probabilities or improbabilities of any mechanism. It is just evidence on its own.
Here, it seems appropriate to discuss Paley’s famous watchmaker analogy. I am reading Dembski’s view on it, and I believe that I have a small addition to make, which strengthens Paley’s argument. The following quotes are from “No Free Lunch”, chapter 1.8, by Dembski.
According to Paley, if we find a watch in a field, the watch's adaptation of parts to telling time ensures that it is the product of an intelligence. So too, according to Paley, the marvelous adaptations of means to ends in organisms ensure that organisms are the product of an intelligence.
Hume criticized the design argument as a “feeble argument from analogy”.
Schematically, an argument from analogy takes the following form: we are given two objects, U and V, which share certain properties, call them A, B, C, and D. U and V are therefore similar with respect to A, B, C, and D. Now, suppose we know that U has some property Q, and suppose further that we want to determine whether V also has property Q. An argument from analogy then warrants that V has property Q because U and V share properties A, B, C, and D, and U has property Q. In terms of premises and conclusion, the argument from analogy therefore looks as follows: U has property Q. U and V share properties A, B, C, and D. Therefore V also has property Q. In the case of Paley's watchmaker argument, U is a watch, V is an organism, and the property Q is that something is intelligently designed. For the watch there is no question that it actually is intelligently designed. For the organism, on the other hand, this is not so immediately clear.
Dembski goes on explaining that the “difficulty with arguments from analogy is that they are always also arguments from disanalogy” — arguments from analogy can lead us astray. Next he proposes a strengthened form of Paley’s argument from analogy:
… this strengthened form of the argument therefore has an additional premise and can be formulated as follows: U has property Q. U and V share properties A, B, C, and D. There is no known instance where A, B, C, and D occur without Q. Therefore, V has property Q.
Here I would like to propose another additional premise: Q explains the presence of A, B, C and D. IOWs the presence of Q makes logical sense, given the presence of A, B, C and D. Origenes
By the way GP, I recently passed the eight year mark since I began my project, and it has now been six years since you advised me and set me off in the right direction. I thank you for that. And now we start a new year ... Upright BiPed
GP, Thank you for taking the time and effort to write your last two posts. We are coming at the issue from different perspectives and that distinction shows through. However, I believe we understand and appreciate each other's positions, and realize that we end up at the same conclusion in the end. I trust all is well with you, and hope you and your family have a happy and safe Christmas holiday. Upright BiPed
UB: Some more thoughts: We can read a binary (or not binary, according to the alphabet that can be observed) sequence from any material object, provided that we can objectively observe a sequence of states that can be read linearly (it is not relevant if they are in a straight line or in a circle), and that there is a finite number of states that can be unequivocally identified, so that we can consider them as elements of some alphabet. Again, please note that there is nothing in that definition that assumes that the sequence of material states is designed, or that it is a representation of something. There is no assumption that semiosis is implied in what we observe. If you take a look at my example of the stone wall on a faraway planet, at post #24, you can see that we simply observe a series of signs in the stone. As far as we know, they can well be random signs left by weather events. But they are in some form of linear sequence, and they can easily be categorized as belonging to two different categories. Therefore, according to my definition, we can read a binary sequence from that object. Indeed, we can read 8 sequences, if I am not wrong, according to where we start and the direction we read. If the object is not designed, ID theory says that for none of those 500 bit sequences we will be able to define a specific function. So, if we find, for one of them, any of them, a specific function whose complexity is 500 bits, we can infer that the whole object is designed. That's why, if one of the sequences we can read perfectly corresponds to the sequence of the first 125 decimal digits of pi, considering each group of four binary digits as a redundant code for decimal digits, then we can infer design. Now, this is IMO a very powerful example, because it emphasizes an important point: there is no restriction at all on the sequences that we can read from material objects. The only required thing is that we can read them, and then define a function for them. In the case of pi, our function requires a symbolic code: we read the signs as binary numbers, we group them in words of 4 elements, and we read those words as decimal digits. That is certainly an added bonus, an important one, and it certainly makes the design inference much stronger, because we have not only the argument from functional complexity, but also the argument from semiosis, which are IMO the two basic arguments that allow a design inference. In the case of the key, there is only the argument from functional complexity, and not the argument from semiosis. But the design inference is valid just the same. Let's say that the key is 500 elements long, 500 spikes and holes in some sequence. We can read that sequence as a string of binary digits. For all my purposes, that's more than enough to consider that sequence digital information. For example, I can well give that key to a blacksmith, and tell him to build another one, with the same elements, and respecting the same sequence. If I just showed him the single elements, and told him to make a key with 500 of them in random order, he could do that, but the key would not work. But if I give him the key, he can use it to get the right sequence. So, the key itself is a source of information, but not a symbolic source of information, because the key does not represent another object. If each single position must be right to get the function (to open the safe), then the key is the source of 500 bits of functional information. Even if there is no semiosis implied. 
Design can be safely inferred. Now, I will transfer the concept to an example that is much nearer to our discussions: proteins. Let's consider my old friend, the beta chain of ATP synthase. Let's say that we already have all the rest, and we need that chain to build a working molecule of ATP synthase. But we don't know the right sequence. Now, there are two different ways that we can get the right sequence. 1) We can look at the gene for that sequence, observe the sequence of nucleotides, group them in words of three, and then identify the start and end, and each single aminoacid, by translating that sequence by the genetic code, that I am assuming we know. At that point, we can synthesize the chain in our lab, because we have the correct information 2) We can analyze an existing molecule of the beta chain, and derive the correct sequence of aminoacids. Then we use that sequence to synthesize the chain in our lab, because we have the correct information. Now, the result is the same in both cases. But, is semiosis implied? It is, certainly, in the first case. We derive the information from a symbolic representation, and we have to translate it according to a symbolic code that we already know. But not in the second case. The protein itself is not a representation of its own information. It is an instance of that information. We derive the digital sequence from the object itself. There is no symbolic code implied. Like in the example of the key. Proteins are like the key. One could ask: but is the sequence of AAs in the protein symbolic? Is it a code that stores information, for example, for its tertiary structure? And the answer is: no. The link between the linear sequence of AAs (the primary structure) and the final, tertiary structure, that generates the function, is defined by the laws of biochemistry. It is not symbolic. It is not a code. As you well know. That's why I insist that there are cases of functional digital information, that can well allow a design inference, which do not include semiosis and codes. It is absolutely true that most cases of digital functional information do include codes and semiosis. That is certainly true for language, mathematics, protein coding genes. IOWs, for most of the objects we are interested in. But it is equally true that there are cases of complex functional information without semiosis. As I have tried to show. gpuccio
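To make the pi example in the comment above concrete, here is a small Python sketch of the functional test being described: read a bit string off the object, group it into 4-bit words, decode each word as a decimal digit, and check the result against the leading digits of pi. The real argument uses 125 digits (500 bits); 32 digits are used here only for brevity, and the "observed" bit string is constructed for illustration rather than read from a real object.

    # Sketch of the pi-in-BCD functional check. The bit string here is built for
    # illustration; in the scenario described it would be read off the object.
    PI_DIGITS = "31415926535897932384626433832795"   # first 32 decimal digits of pi

    def decode_bcd(bits: str) -> str:
        """Interpret each group of 4 bits as one decimal digit; reject values above 9."""
        digits = []
        for i in range(0, len(bits), 4):
            value = int(bits[i:i+4], 2)
            if value > 9:
                return ""            # not a valid digit stream under this reading
            digits.append(str(value))
        return "".join(digits)

    observed_bits = "".join(format(int(d), "04b") for d in PI_DIGITS)   # 128 bits

    print(decode_bcd(observed_bits) == PI_DIGITS)    # True: the defined function is satisfied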
UB: I can agree with most of what you say. But the simple point is: for my procedure, I define as digital information in a material object anything that can be read as a sequence of individual states that can represent an alphabet. In my example of the key, it is certainly possible to read the sequence of binary individual states as a digital binary sequence. Therefore, the sequence is digital information that can be retrieved from the object. And it has a connection with the function, because only if the sequence is the right one will the key work. In a sense, it is not different from any binary sequence that you can retrieve from the physical states of a CD surface. I have created this rather strange example to show that in exceptional cases digital information can be independent from a semiotic code. But I am not saying anything here about more abstract concepts, like representation. I am only saying that I can read a digital sequence that is connected to a function, and so I can apply my design detection procedure. Which is empirical, and does not depend on any theoretical considerations, other than the definitions given in the procedure itself. gpuccio
8. dFSCI can only be found in mathematics and language No. It can be found in any form of digital information that implements complex functions. Software, or projects for machines. Of course, the digital nature usually requires some form of symbolic code, but that’s not necessary. Imagine a physical key which is made as a regular sequence of spikes and holes. It is digital, because it is a sequence of two possible states. If it is very long, it can be very complex. And it opens its safe only if the sequence is correct. But there is no symbolic code here: just a physical form that can implement a complex function, and can be described digitally as a sequence of two states.
Hello GP, good to talk to you. You have always been generally supportive of semiotic perspectives, even though I think we both know that we sometimes don’t see things in the same way (which is often a good thing). This is an interesting occasion. A lock on a safe is a device that controls access to the safe. Generally, such a lock has only a single variable – to ’allow access’ from a locked state. The shape of the correct key is a physical representation that communicates this single variable to the system. It accomplishes this by being measured at several specific points along the length of the key when it is pushed into the lock. In contrast to this, the central characteristic of a digital medium is that it is a sequence of many individual representations, which gives the digital medium its capacity to carry high levels of information. I think it’s fair to ask if a key being inserted into a lock is a set of individual representations or a single representation. If it is a set of individual representations, what do they represent? Do we take the position that each one represents some fraction of the total end effect? Do we also say that communicating a single variable is the “implementation of a complex function”? Let’s suppose that our key operates in a lock with 6 pins. Does this system represent “digital information” if the six pins are merely in a straight line, and therefore “not digital information” if the six pins are in a circle (such as in a tubular or radial key)? Let’s say we describe a 6-pin key (as you suggest) as “digital information” in the form of 101010. When I begin to push the key in, the first pin position will be at the “1” position and the other five pin positions will be empty. If I push the key in a little further, then the first pin position changes to a “0” and the second pin position becomes a “1”. Then they all change again when I push the key in further. It is only after I have pushed the key all the way in that the correct pattern can be recognized by the system. Is this a case of individual digits, or a single representation? I do not believe that a linear sequence of matter is sufficient for a representation to be considered a digital medium. It requires the establishment of digits. I am prepared to be convinced differently, and will amend my statement to incorporate the new understanding. Obviously, if we wish, we can say that a key contains digital information that is 1 digit long, but I am not certain how that clarifies anything. Upright BiPed
@151 clarification:
Whether the biological systems are designed or not doesn’t depend on how much we know about it or how well we quantify, detect and infer design.
That statement was poorly written. What I meant was that even in the areas of biology where we don't have any threshold-based quantification method to infer design, design can still be inferred on the basis of other valid criteria. Basically the level of functional complexity* of the observed systems may be sufficient evidence to infer design. (*) functional coherence? Dionisio
Silver Asiatic at #150: And thank you to you! :) gpuccio
Silver Asiatic at #148: My answers:

1. All functions contain FSCI

No. All functions contain FSI, because some information is needed to implement any function. But there is no general need for that information to be complex. In most cases, it is simple.

2. There is no difference in dFSCI between a Shakespeare sonnet and a poem that could have been generated by a computer

If we consider dFSCI, IOWs the binary form, there is no difference, because for both examples (provided they satisfy the conditions we have discussed before) the answer is: yes. Both examples exhibit dFSCI, and allow a design inference. But dFSI, in numeric form, is certainly different, if we use appropriate definitions. If we stick to a simpler definition satisfied by both, such as "being formed by English words", the dFSI linked to that function will be the same, because the function is the same and the length (I assume) is comparable. That's enough to infer design for both objects. But if we use some more refined functional definition, one that is satisfied by the sonnet but not by the computer-generated sequence, we will be able to compute a higher dFSI for the sonnet, using that definition. However, the design inference remains valid for both objects.

3. Specification cannot be quantified

It can be quantified as a binary variable (present - absent). If it is a functional specification (like, for an enzyme, being able to catalyze some reaction), then it is necessary to quantify the function and to have a threshold to assess whether it is present or not (for example: the ability to catalyze the reaction at least at such a level). That allows us to get the binary value (present - absent) for any possible object.

4. ID is a probability measure

False. ID is the theory of design inference. Of course, the theory includes some probability measures.

5. Cosmological fine tuning arguments cannot be analyzed for dFSCI

Well, cosmological fine-tuning arguments are about one object: the observable universe. Like biological arguments, they rely on measuring the information linked to our universe, as an object able to implement some function (for example, being compatible with life), against what is believed to be the whole of possible universes (the search space). So, in a sense, the concept is the same: it is functional information here too, but applied to one object. Is it digital? That is a good question. Maybe we could ask some quantum physicist...

6. No microevolutionary events show evidence of dFSCI

It depends on how we define "microevolutionary". If we mean simple transitions, of only a few bits (like most examples I am aware of), then the statement is true. There is some dFSI linked to the transition, but it is not complex (according to any reasonable threshold).

7. dFSCI is defined through subjective inputs on what a function is

It is defined through an objective and shareable definition of a function, made by a conscious subjective being. Any function can be defined, by any observer. The dFSI is measured for the defined function.

8. dFSCI can only be found in mathematics and language

No. It can be found in any form of digital information that implements complex functions: software, or plans for machines. Of course, the digital nature usually requires some form of symbolic code, but that's not necessary. Imagine a physical key which is made as a regular sequence of spikes and holes. It is digital, because it is a sequence of two possible states. If it is very long, it can be very complex. And it opens its safe only if the sequence is correct. But there is no symbolic code here: just a physical form that can implement a complex function, and can be described digitally as a sequence of two states.

9. dFSCI analysis is not used, in itself, to reveal design, but only to validate what is claimed to be designed

False. It is used to infer design for objects of which we don't know the origin (design or not).

10. Not all objects that contain functional specific information give evidence of design, but only those where the quantity of that information is high enough

Absolutely true. We can infer design only if the functional information is high, IOWs if it is complex (above some appropriate threshold).

OK, that was not difficult after all... gpuccio
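As a minimal sketch of the threshold test described in answers 3, 8 and 10 above (the 500-bit cutoff and the spike-and-hole key are only illustrative assumptions, not a formal procedure), something like the following captures the logic:

```python
import math

def functional_bits(target_states: int, total_states: int) -> float:
    """Functional information as -log2 of the ratio of functional states
    to the whole search space (the simplest dFSI-style measure)."""
    return -math.log2(target_states / total_states)

def infer_design(bits: float, threshold: float = 500.0) -> bool:
    """Binary assessment: design is inferred only above the threshold.
    Below it, nothing is concluded (a possible false negative), never 'not designed'."""
    return bits >= threshold

# A binary key with 600 spike/hole positions, where only one exact sequence
# opens the safe: search space 2^600, functional target of 1 sequence.
bits = functional_bits(target_states=1, total_states=2 ** 600)
print(bits)                # 600.0 bits of functional information
print(infer_design(bits))  # True: above the 500-bit cutoff, design is inferred

# A key with only 30 positions carries 30 bits: below the cutoff, no inference is made.
print(infer_design(functional_bits(1, 2 ** 30)))  # False
```

Below the cutoff nothing is concluded either way, which is why false negatives are expected while false positives are not.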
SA, Whether the biological systems are designed or not doesn't depend on how much we know about it or how well we quantify, detect and infer design. Generally objective truth does not depend on whether we know it. I understand that the quantification method gpuccio uses to digitally measure the functional complexity of certain biological objects and thus infer design is limited to those objects only. The brilliant ideas my former supervisor at work had could not be easily quantified. However, they were design ideas. The software we developed to implement those brilliant ideas could have been quantified using gpuccio's method to infer design. Different control layers and procedural components of the designed biological systems may or may not be suitable for quantification in order to infer design. Each of them should be analyzed using different methods. Perhaps some of those methods don't exist yet or may never exist. We are dealing with an unfathomable designed system that is beyond anything we conscious beings have ever imagined, much less designed. However, every day researchers from wet and dry labs are producing enormous amounts of new data that shed more light on the elaborate cellular and molecular choreographies orchestrated within the biological systems. We ain't seen nothing yet. The best is still ahead. It feels good to be on the winning side. But that also gives us the responsibility to be magnanimous toward those who disagree with us. Those of us who have been beneficiaries of Divine grace should enjoy being gracious toward others too! Let's enjoy learning from what serious science is discovering these days, specially in biology. Let's rejoice! Dionisio
Dionisio 134 I feel the need to say this again. I agree with everybody here - even if you're mistaken. :-) Seriously, I am an ID advocate. Most importantly, Thank You -- to everybody, especially gpuccio! For taking so much time and painstaking detail to offer explanations. I hope more than just me benefited from it. A very good job done - again, thank you ALL IDers! Silver Asiatic
UB I missed this earlier:
SA: For example, in origin of life studies, where there is no genetic code but just molecular activity. UB: This is an unsupported assumption, is it not?
I'm not sure here. If we're trying to evaluate the probability of non-living molecules coming together to form building blocks of life (pre-genetic code), then I'd think that would be right. But it's also true that we're looking for the origin of the genetic code, so yes - I understand and accept what you said. Silver Asiatic
I'll close with a 10-question ID Quiz: True, False or Other (bad question, ambiguous, etc.)

1. All functions contain FSCI
2. There is no difference in dFSCI between a Shakespeare sonnet and a poem that could have been generated by a computer
3. Specification cannot be quantified
4. ID is a probability measure
5. Cosmological fine tuning arguments cannot be analyzed for dFSCI
6. No microevolutionary events show evidence of dFSCI
7. dFSCI is defined through subjective inputs on what a function is
8. dFSCI can only be found in mathematics and language
9. dFSCI analysis is not used, in itself, to reveal design, but only to validate what is claimed to be designed
10. Not all objects that contain functional specific information give evidence of design, but only those where the quantity of that information is high enough

Silver Asiatic
UB
I think this statement is unfair, and perhaps a little bit opportunistic.
I apologize that I have gotten a little testy in my responses. I am a friend to ID, not an enemy -- and more importantly, an admirer of all who have contributed here and to UD for so many years. So again, I'm sorry that I came across that way. What I meant by "thin air" is that it sometimes appears that we're making it up as we go along. That's not a bad thing - perhaps we haven't been challenged in certain areas. I also don't think a difference of opinion among ID advocates is necessarily bad either. It's just good to air those things out. No, I didn't mean that my ID friends and colleagues are being irrational or foolish or anything like that, and I apologize that it sounded that way.
I am not sure what “ID rules” are.
What I'm getting at is "how ID works" and "how we do or use calculations". These can be thought of as various rules. For example, as you said:
The clearly identifiable physical manifestation of dFSCI can be found in only three places anywhere in the cosmos — language, mathematics, and in the genetic code.
This could be considered a "rule". Why not? If everyone agrees, then this is one of the guideposts of the ID process. "dFSCI can be found in only three places ..." But in making a rule, let's test it to make it better, if we can. That's all I was doing. So, I wondered if the genetic code was language. You stated:
In any case, if you analyze the genetic code from a physical perspective, it functions exactly like language,
I'd then say, "yes" the genetic code is a language, as other messaging codes are. If so, then the "rule" could be modified. Instead of "three places" we would say "only two".
The clearly identifiable physical manifestation of dFSCI can be found in only Two places anywhere in the cosmos — language and mathematics.
Right? I am very argumentative by nature and this is a character flaw most of the time. Part of that flaw is "wanting to win" whenever I get into a debate. I attempted here to try to "stir the pot" a bit and bring some questions. Let's face it, our opponents don't really do a very good job of this, most of the time. However, I don't like this position I've placed myself in. I notice my responses seem hostile, or as you rightly said "unfair". I joined this blog years ago to make friends, not create enemies. :-) So, I see this little exercise I came up with to relieve boredom is not working well for that purpose. :-) Silver Asiatic
Origenes
You seem to hold that, in order to be valid, the design inference should always work — also in difficult cases ...
I can understand why you think I'm questioning the validity of the inference - or more, that I think it's invalid. But I'm just looking for better explanations. I'm looking at what I think are weak points in the presentation. I never said that I thought the inference was invalid. I did say that I didn't understand many things. And I did point out a few conflicts among ID advocates on what various things mean. I consider myself a friend, not an enemy. I'll say also, I prefer your (our) individual responses rather than having to read papers or, worse, sit through youtube lectures. My goal - can we improve our own explanations? Can we explain what ID is and what the calculations are? Do we understand the limits? How precisely do we use these things? Have we tested the hardest cases? It's a challenge, not an attack. The better you can handle the hardest objections, in your own words, the more effective you (all of us) will be in explaining and defending. Silver Asiatic
SA,
Well, I hope we’re having fun but it also seems like ID concepts and rules are being pulled from thin air – to add on top of the pile.
I think this statement is unfair, and perhaps a little bit opportunistic. As a general rule, I think you read people fairly carefully and thoughtfully (particularly the opponents of ID), which makes this statement all the more surprising. As for me, the "thin air" I am pulling from is directly out of the scientific record. As for GP, I would encourage you to read him more closely. If "thin air" is a euphemism for unwarranted or unprincipled reasoning, then you may find that the thin air is where the objections to dFSCI are coming from.
Speaking of ID rules … we might say that the genetic code is a language?
I am not sure what “ID rules” are. As far as I know, ID doesn’t have any rules that are not also part of any other empirical pursuit, (i.e. physical things must operate physically following the forces of physical law, with well-worn caveats to the unknown). In any case, if you analyze the genetic code from a physical perspective, it functions exactly like language, and indeed, the use of language is the only other physical process that the genetic code can be classified with.
But anyway, if there were strict limits to the use of dFSCI for understanding design in nature, then this should be known upfront before anyone tries to use it in other cases.
What to say? It just seems there is a conceptual block here that is causing unnecessary confusion.
For example, in origin of life studies, where there is no genetic code but just molecular activity.
This is an unsupported assumption, is it not?
I think that’s how we would have to do evolutionary analysis. Or as above, we would convert observations into statistics and then analyze the statistics and not the object.
I once saw a critic of ID on this blog say that we should be able to find out the “complexity” of a cake by writing out (in the least number of symbols) the process of making the cake, and then analyze the probability that such a sequence of symbols could come about. I never adopted that kind of thinking. My preference is to understand dFSCI at its physical embodiment. The confusion goes away quickly. cheers Upright BiPed
Silver Asiatic, From your posts #113, #115 and #116 I got the notion that you're concerned with what I call different starting points (#140). If I understand you correctly, you differentiate between easy starting points for design inference and difficult ones.
Silver Asiatic: Apply CSI, FSCI (whatever initials you want), to a variety of cases. No editorializing. No cover-ups. Just take a variety of randomly selected things. Apply the measure. Spell it out.
Silver Asiatic: ... that’s way too easy. It’s a human designed artifact with a known function (meaning in English).
You seem to hold that, in order to be valid, the design inference should always work — also in difficult cases:
Silver Asiatic:Yes, blind test with unknown languages. Test with languages that have partial function. Test with ambiguous function. Test with machine generated code, non-human designed- that has function. (Randomized parameters for evolutionary algorithms).
I find Dembski's explanation of the problem of false negatives very helpful:
Consider first the problem of false negatives. When the complexity-specification criterion fails to detect design in a thing, can we be sure that no intelligent cause underlies it? No, we cannot. To determine that something is not designed, this criterion is not reliable. False negatives are a problem for it. This problem of false negatives, however, is endemic to detecting intelligent causes. One difficulty is that intelligent causes can mimic necessity and chance, thereby rendering their actions indistinguishable from such unintelligent causes. A bottle of ink may fall off a cupboard and spill onto a sheet of paper. Alternatively, a human agent may deliberately take a bottle of ink and pour it over a sheet of paper. The resulting inkblot may look identical in both instances, but in the one case results by chance, in the other by design. Another difficulty is that detecting intelligent causes requires background knowledge on our part. It takes an intelligent cause to recognize an intelligent cause. But if we do not know enough, we will miss it. Consider a spy listening in on a communication channel whose messages are encrypted. Unless the spy knows how to break the cryptosystem used by the parties on whom she is eavesdropping (i.e., knows the cryptographic key), any messages passing the communication channel will be unintelligible and might in fact be meaningless. The problem of false negatives therefore arises either when an intelligent agent has acted (whether consciously or unconsciously) to conceal one's actions, or when an intelligent agent in trying to detect design has insufficient background knowledge to determine whether design actually is present. Detectives face this problem all the time. A detective confronted with a murder needs first to determine whether a murder has indeed been committed. If the murderer was clever and made it appear that the victim died by accident, then the detective will mistake the murder for an accident. So too, if the detective is stupid and misses certain obvious clues, the detective will mistake the murder for an accident. In doing so, the detective commits a false negative. Contrast this, however, with a detective facing a murderer intent on revenge and who wants to leave no doubt that the victim was intended to die. In that case the problem of false negatives is unlikely to arise. Intelligent causes can do things that unintelligent causes cannot and can make their actions evident. When for whatever reason an intelligent cause fails to make its actions evident, we may miss it. But when an intelligent cause succeeds in making its actions evident, we take notice. This is why false negatives do not invalidate the complexity-specification criterion. This criterion is fully capable of detecting intelligent causes intent on making their presence evident. Masters of stealth intent on concealing their actions may successfully evade the criterion. But masters of self-promotion bank on the complexity-specification criterion to make sure their intellectual property gets properly attributed. Indeed, intellectual property law would be impossible without this criterion. [source : 'No Free Lunch', p.24]
Origenes
Silver Asiatic: You insist:
So, I concluded that there is no new dFSCI in Shakespeare because all his information can be found in the dictionary and rules of grammar. These are the same things programmed into the computer. The outputs are different though. One designed intentionally by a human author, the other was a non-conscious output from a machine. I wouldn’t equate both as “designed” in that case.
You conclude wrongly. There is always dFSCI, in both cases. Both are designed. Of course, Shakespeare does much more than simply filter random words through a dictionary or rules of grammar. But the design inference test gives the same result for all objects that have more than 500 bits of functional information according to any defined function. The two objects are different, but they are both designed.
Well, I don't think the computer generated poem is "designed". It's a non-conscious output.
No. A non conscious output is an output generated in a non conscious system, a system where no conscious agent contributed to the output. That's not true here.
But it does seem you’re saying that there is no difference in the functional dFSCI information content of Shakespeare and the computer generated poem.
Not true. Of course there is higher functional information in Shakespeare's poem, but to show that we must use more refined function definitions, to show the specific meaning and creativity of the sonnet. But my purpose was not this: it was to correctly infer design, and I have done exactly that: both objects are designed.
But this is a case of moving the goalposts. If additional specification is needed to determine if one output is designed or not, then the best answer to my test is that we cannot determine the difference in functional information between the artifacts.
You are making a very great confusion. As I have said tens of times, both objects are designed, and both objects are recognized as designed. My example of the theorem was only made to show that there are things that a computer program cannot do algorithmically, but that anything can apparently be done if we input all the necessary information into the software. As an extreme example, if we input Hamlet into a piece of software, the software can output it, both as the result of a long filtering of random words, as in the "Methinks" case, or simply by printing the information it already has. That has nothing to do with the problem of design inference: as I have said tens of times, I have inferred design for both objects you proposed, and I do believe that they are both designed, and that my two inferences are two true positives. You are free to disagree. gpuccio
Silver Asiatic:
That may be fine, but you know nothing of that when you observe the artifacts I presented. As I mentioned to Origenes above, I gave away the answer to the problem first. In all three cases, you would find the same level of “conscious output” but in fact, the designer of the computer poem was not conscious of the output. To say that the computer poem was ‘designed’ is to say that any random computer program shows evidence of design, because there is some order or use of regularity.
No. My evaluation of your examples did not depend on the fact that you had "given away the answer". Please, look at my post #132, where I comment on your examples. As you can see, my answer is clear and unequivocal: all your examples will be considered designed according to my procedure. And, according to the information you give about their origin, all your examples are designed. Therefore, they are all true positives. You say that "the designer of the computer poem was not conscious of the output." I absolutely disagree. Please, read again what I have already written: "OK, but let's assume that a very complex computer, with very complex software and a lot of information about grammar implemented, can generate a “poem” that is grammatically correct. I can only repeat what I already said. The poem can be recognized as designed, because of the information about words and grammar that it exhibits. If we have just the poem, we can safely infer that it is designed. And we are right. It is a true positive. But again, all the functional information that we are observing comes from the programmer of the software. He is the conscious agent who represented and implemented that information (sequences that respect grammar, generated by the machine I am implementing, acting on random seeds). Again, the random component adds no functional information to the output. The functional information we observed is designed. By the programmer." You say: "To say that the computer poem was ‘designed’ is to say that any random computer program shows evidence of design, because there is some order or use of regularity." I really cannot find any sense in that. What is a "random computer program"? What kind of "order" and "use of regularity" are you referring to? Why should that supposed "order" be "evidence of design"? Please, read again what I have already written: "The computer created exactly what the programmer predicted: a sequence that respects the rules of grammar. He predicted it, and he obtained the desired result. He certainly did not predict the specific random components of that sequence, but he certainly predicted that there would be a contingent component of the output that he could not anticipate. He certainly predicted that such a contingent component would be, indeed, contingent, random, and would add no further functional information to the result. Again, all the functional information we observe in the result is from the programmer." So, there is no problem for me: all your examples are designed, and all your examples are correctly categorized as designed by applying my procedure. If you have problems, you should explain them better. If you simply disagree, you are free to disagree. I certainly disagree with you. You ask:
Yes, but how do we reach that conclusion simply by looking at the various outputs?
You seem to expect that I have reached some conclusion about the difference between a poem generated by a programmer through a piece of software and a poem written directly by a poet. But I said nothing about that. For me, both appear designed according to a dFSCI analysis. For me, both are designed according to what you say of their origin. The design inference must distinguish between designed objects and non designed objects. Both your examples are designed, and both are correctly categorized as designed by my procedure. I have done nothing to distinguish between them. As a more general point, I have briefly discussed some of the limitations of the output of computers. But I have never said that my procedure can distinguish between the output of a computer and the direct output of a human poet. That is probably possible, but it would not be a problem of design detection, but rather of design differentiation: distinguishing between two different modalities of design. But I have said nothing about that. My interest is to detect design, not to differentiate between design by pen and paper and design through a computer. Moreover, I have clearly explained that a computer can generate a completely detailed and creative design, even a full poem, if the creative information is pre-loaded in the software. This concept is a very simple expression of the law of conservation of information. And it is true. Computers cannot generate new original functional information, but they can certainly use the information they have received. To Origenes you say:
Well, gpuccio would say that the computer poem was designed. I would call that a false positive. To say that “the programmer created the randomization patterns that resulted in the poem”, is really to say that the output of every computer randomization routine (including a random character generator) is “designed”.
But I never said that the programmer "created the randomization patterns that resulted in the poem". What do you mean by "randomization pattern"? The programmer does two things: 1) He generates random sequences (which, by definition, are generated according to no pattern, because they are randomly generated). 2) He filters those sequences according to a dictionary (and then to grammar rules). Of course, he inputs into the software tons of functional information to do that (do you know how many bits are in the dictionary alone?). That filtering and those rules generate the "patterns" that are recognized as functional in the dFSCI analysis. That is exactly what the programmer envisioned and implemented. That is, without any doubt, intelligent design. And as intelligent design it is recognized by the dFSCI procedure. So, no false positives at all. A true positive. More in next post. gpuccio
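A minimal sketch of those two steps (purely illustrative; the tiny word list stands in for a designed dictionary of tens of thousands of entries, and the filter is where all the functional information in the output actually comes from):

```python
import random

# A tiny stand-in for a designed dictionary: a real one holds tens of
# thousands of entries, i.e. a large amount of designed information.
DICTIONARY = {"at", "be", "do", "go", "he", "it", "me", "no", "on", "to", "up", "we"}

def random_token(length: int = 2) -> str:
    """Step 1: pure contingency -- a random letter string, built to no pattern."""
    return "".join(random.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(length))

def generate_filtered_sequence(n_words: int) -> str:
    """Step 2: the designed filter -- keep a random token only if the dictionary
    accepts it. Every 'correct English word' in the output is there because of
    the dictionary, not because of the random source."""
    kept = []
    while len(kept) < n_words:
        token = random_token()
        if token in DICTIONARY:
            kept.append(token)
    return " ".join(kept)

random.seed(1)
print(generate_filtered_sequence(6))  # six dictionary words, in whatever order chance supplied
```

All the "English words" in the printout trace back to the designed word list; the random source only decides which of them appear, and in what order.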
UB @136:
The clearly identifiable physical manifestation of dFSCI can be found in only three places anywhere in the cosmos — language, mathematics, and in the genetic code.
Interesting statement. Thank you. Dionisio
Silver Asiatic, The starting point, or starting level, of a design inference seems quite arbitrary to me. One has to ‘give’ the materialist something to work with. That may be ‘monkeys and typewriters’ or a computer program with inbuilt dictionary and grammar rules — one has to start somewhere. However some starting points make the design inference very difficult, for various reasons.
Silver Asiatic: 1. How did dFSCI analysis show that Shakespeare used merely the dictionary and rules of grammar and poetics, just as the computer did? 2. How much, precisely, new information did Shakespeare's poem contain over the computer generated poem?
This may very well be a problem. If so, then the problem is due to the starting point. IOWs we have to choose the starting point for the design inference wisely. GPuccio’s homology based argument wrt proteins (see e.g. #82) does exactly that. It starts out with organisms capable of creating proteins. So, what GPuccio does here is saying: ok, let’s start at this level and suppose that everything up till this level can come into existence by natural means. Now, let’s see if we can explain the coming into existence of observed new information from here on. IOWs starting at this point, is extremely generous towards materialism. It momentarily brushes aside a host of design arguments — including bio-semiose and fine-tuning arguments. But, in spite of all this generosity, design is easily proved. So this is an excellent starting point! An important property of GPuccio’s argument is that ‘function’ can be easily defined and agreed upon. Naturalism is grappling with the concept 'function', but here not so much:
Accounts of biological function which refer to natural selection typically have the form that a trait's function or functions causally explain the existence or maintenance of that trait in a given population via the mechanism of natural selection. [ Stanford website]
GPuccio’s argument fully utilizes this naturalized definition of function. Origenes
gpuccio
Now I will explain why I don't believe that a computer program can create a poem that is originally creative. The simple idea is: non conscious systems cannot create new specifications (meanings or functions that have not already been programmed into them), for the simple reason that they do not understand what meaning and purpose are: they have no subjective experience.
That may be fine, but you know nothing of that when you observe the artifacts I presented. As I mentioned to Origenes above, I gave away the answer to the problem first. In all three cases, you would find the same level of "conscious output" but in fact, the designer of the computer poem was not conscious of the output. To say that the computer poem was 'designed' is to say that any random computer program shows evidence of design, because there is some order or use of regularity.
IOWs, they are machines, and nothing else.
Yes, but how do we reach that conclusion simply by looking at the various outputs?
SA: However, you’re saying that since Shakespeare used information from the dictionary, Shakespeare is only recycling information from the dictionary, and there is no new information presented, right? The sonnet has no dFSCI? GP: I don’t know why you have this strange idea. I am not saying that, and I have never said anything like that.
Well, I'm looking for the difference between Shakespeare and the computer poem. How do you know what a human author did to create the poem? How is that observed? I will change the quote: “If a Shakespeare generates a sequence formed by correct english words by filtering a random output according to a dictionary, the information in the output comes from the dictionary, not from the random output. Shakespeare is only recycling the information it already contains (the dictionary), using also the computational procedures programmed in Shakespeare's brain. There is no generation of new original dFSCI.” So, I concluded that there is no new dFSCI in Shakespeare because all his information can be found in the dictionary and rules of grammar. These are the same things programmed into the computer. The outputs are different though. One designed intentionally by a human author, the other was a non-conscious output from a machine. I wouldn't equate both as "designed" in that case.
What I am saying is that the information in a dictionary is designed, and the information in the software is designed. Therefore, if we infer design for a sequence generated that way (made of correct English words), and we can, according to my procedure, we are correct: that sequence has been designed, indirectly, by the programmer who wrote the software and implemented it with a dictionary.
Well, I don't think the computer generated poem is "designed". It's a non-conscious output. But it does seem you're saying that there is no difference in the functional dFSCI information content of Shakespeare and the computer generated poem.
Indeed, the function we define is “being formed by correct english words”. That function was conceived, specified and implemented by the designer of the software. We recognize it and correctly infer that it was designed.
Yes, but the actual poem was not designed.
The important point is: the non conscious system formed by the software and the dictionary did not add any further complex functional information: its only contribution is the contingency of what specific random words are in the sequence, and that contingency is random, and includes no further meaning or complex functional information.
How do we know that difference between a poem written by a human author (which you are observing side by side with a computer generated poem) without knowing how the author wrote the poem?
Therefore, the complex functional information we observe in the object was all designed by the programmer. And our inference of design is perfectly correct: it is a true positive.
The complex functional information would not, then, be linked to the actual content of the poem, but only to the structural regularities (grammar, syntax). As I said above, this would yield a false positive if the task was to determine which specific output was designed.
And here the important point becomes clear, like a shining sun: can a software programmer write a software that will output a sequence made of correct english words, that respect the rules of grammar, and that conveys the demonstration of Pytagoras theorem, starting from randomly generated sequences?
But this is a case of moving the goalposts. If additional specification is needed to determine if one output is designed or not, then the best answer to my test is that we cannot determine the difference in functional information between the artifacts. Shakespeare, the human poem and the computer poem all have the same functional information content. But I would also consider that a false positive in design analysis. The computer poem is not designed, unless we're going to say that the output of mutation and selection is also evidence of design, since without physical laws and chemical regularities no biological objects could exist. Silver Asiatic
Origenes
However at the level of sentences (and higher levels) Shakespeare creates new information.
This is good and I will repeat this in my response to gpuccio, but the problem here is: 1. How did dFSCI analysis show that Shakespeare used merely the dictionary and rules of grammar and poetics, just as the computer did? 2. How much, precisely, new information did Shakespeare's poem contain over the computer generated poem?
What seems to be important here is the notion that the design inference is on one end (many false negatives) an imprecise instrument, but yet can detect design very reliable (no false positives).
Well, gpuccio would say that the computer poem was designed. I would call that a false positive. To say that "the programmer created the randomization patterns that resulted in the poem" is really to say that the output of every computer randomization routine (including a random character generator) is "designed". But this doesn't show the key difference. It would be saying that the forces of nature (rules and grammar), combined with randomization (mutations), create something designed.
The difference is that a computer has no intention, plan, meaning, teleology — whatever the appropriate term is. When we can formulate a specification wrt a poem, mechanism, process or object we can 'retrieve' the intention, plan, meaning or teleology of the designer. The fact that a specification is possible provides us with an argument for the idea that teleology has occurred. Teleology, in turn, points to an intelligent designer.
I find this to be very good, but as UB put it, you seem to be tossing an entirely new criterion onto the pile. How does our analysis of dFSCI indicate the intent found in the various poems? I fully agree that the computer has no intent (purpose, meaning). Additionally, I will say that the computer is "not conscious". It exhibits an intelligent output (gpuccio disagrees), but how would we determine the difference in terms of information, function and teleology between that and a similar poem? Keep in mind, computer generated poetry is already pretty sophisticated. It is being used in various applications to generate new ideas, new word combinations. True, nobody is publishing computer poems for the value they have in themselves. But I will repeat this to gpuccio ... You guys are kind of 'cheating' here. I gave away the secret (the final answer) to the test. And it seems you all jumped on it. I told you upfront that one of the poems was computer generated. And it seems like you analyzed that poem, already knowing its source. Yes, but the harder part is to do the analysis without knowing which was human and which computer generated. Then match the results to the analysis on Shakespeare also. Silver Asiatic
UB
As long as everyone is throwing their agreements and disagreements in a pile ???? ,
Well, I hope we're having fun but it also seems like ID concepts and rules are being pulled from thin air - to add on top of the pile.
I’ll contribute by saying that whole pieces of this conversation have baffled me.
I'm more baffled now than before we started - but that's not necessarily a bad thing.
The clearly identifiable physical manifestation of dFSCI can be found in only three places anywhere in the cosmos — language, mathematics, and in the genetic code.
Speaking of ID rules ... we might say that the genetic code is a language? But anyway, if there were strict limits to the use of dFSCI for understanding design in nature, then this should be known upfront before anyone tries to use it in other cases. For example, in origin of life studies, where there is no genetic code but just molecular activity. Or in the cosmos itself, with fine-tuning. Now, for those cases, we could say that we convert the observations we see into mathematics, but see below.
It also seems odd to describe some object in language, and then suggest that your description reflects the dFSCI in the object.
I think that's how we would have to do evolutionary analysis. Or as above, we would convert observations into statistics and then analyze the statistics and not the object. my $.01 -- if indeed that much. :-) Silver Asiatic
As long as everyone is throwing their agreements and disagreements in a pile :) , I'll contribute by saying that whole pieces of this conversation have baffled me. The clearly identifiable physical manifestation of dFSCI can be found in only three places anywhere in the cosmos -- language, mathematics, and in the genetic code. It seems pointless to me to ask if a mountain or a snowflake has any dFSCI. It also seems odd to describe some object in language, and then suggest that your description reflects the dFSCI in the object. my $.02 Upright BiPed
Origenes, Dionisio: Thank you for your comments. I agree with what you say, and I agree that this discussion is very important, whatever the individual positions. I invite all to look at Robert Marks's video here: https://uncommondesc.wpengine.com/intelligent-design/prof-bob-marks-on-what-computers-cant-do/ which gives many important insights, and, for those really interested in the problems with strong AI, I would also definitely suggest reading this precious paper by johnnyb (who is also the author of this OP): http://www.blythinstitute.org/images/data/attachments/0000/0041/bartlett1.pdf Just a final hint, that is IMO very important: ID theory is a new paradigm that falsifies not only neo-darwinian theories in biology, but also so-called "strong AI theories" (in the sense given by Penrose) and the theories of consciousness related to those AI theories. Such is its importance! gpuccio
Silver Asiatic I have nothing to add or change in gpuccio's explanation @132. It's crystal clear to me. Origenes also expressed his opinion on this @133 and I agree. I thank gpuccio and Origenes for their comments. The few things I did not understand in this discussion got clarified by gpuccio's explanations. I had to reread the comments and think carefully about their meaning before I was able to understand the whole idea. The various exchanges back and forth definitely helped me. I thank you for keeping asking until things got clarified to all sides of the discussion. There are basic technical concepts and principles explained in this discussion which should remain as reference points for future discussions on this subject. Dionisio
Silver Asiatic @123 My two cents:
SA: If I presented three artificts, the source of which known only to me, and I will reveal the sources: 1. A sonnet by Shakespeare 2. A poem by a contemporary poet 3. A computer-composed poem from randomized words following rules of grammar and poetics 1 – you would recognize Shakespeare already because you know the sonnets. However, you’re saying that since Shakespeare used information from the dictionary, Shakespeare is only recycling information from the dictionary, and there is no new information presented, right?
If we provide Shakespeare and ‘blind forces’ with access to a dictionary and grammar rules, then no new information at the sub-levels of letters and words is to be expected. You simply start your design inquiry at a higher level than at the level of letters. However at the level of sentences (and higher levels) Shakespeare creates new information. Obviously, if we compare Shakespeare with monkeys on typewriters — and thus start at a different sub-level (no dictionaries and grammar rules) — then we get different calculations. The question is which starting point is most appropriate.
SA: Well, the computer program creates a poem that is grammatically correct. Can it be distinguished from a poem written by a contemporary author in terms of dFSCI alone?
We might come up with a brilliant specification that separates the two, but I'm not holding my breath. What seems to be important here is the notion that the design inference is, on one end, an imprecise instrument (many false negatives), yet can detect design very reliably (no false positives).
SA: Keep in mind, the computer created something unique – not predicted by anyone who designed the software. So the software designers didn’t create the poem, the randomization plus rules created it. How is that different than what the human created directly as a poem.
The difference is that a computer has no intention, plan, meaning, teleology — whatever the appropriate term is. When we can formulate a specification wrt a poem, mechanism, process or object we can 'retrieve' the intention, plan, meaning or teleology of the designer. The fact that a specification is possible provides us with an argument for the idea that teleology has occurred. Teleology, in turn, points to an intelligent designer.
Silver Asiatic: As Dionisio, I appreciate your questioning and the interesting discussion. However, there is nothing strange if in the end we may disagree on some points, even important ones. Let's see the points you raise in post #123.
If I presented three artifacts, the source of which known only to me, and I will reveal the sources: 1. A sonnet by Shakespeare 2. A poem by a contemporary poet 3. A computer-composed poem from randomized words following rules of grammar and poetics
OK, I will immediately argue that I don't believe you can present artifact number 3, and I will explain why later. For the moment, let's go on.
1 – you would recognize Shakespeare already because you know the sonnets.
That's really not relevant. Let's say it's a poem I don't know, like artifact 2. The fact that I already know a poem is irrelevant, and can only generate confusion.
However, you’re saying that since Shakespeare used information from the dictionary, Shakespeare is only recycling information from the dictionary, and there is no new information presented, right? The sonnet has no dFSCI?
I don't know why you have this strange idea. I am not saying that, and I have never said anything like that. What I said is: "If a computer generates a sequence formed by correct english words by filtering a random output according to a dictionary, the information in the output comes from the dictionary, not from the random output. The software itself is only recycling the information it already contains (the dictionary), using also the computational procedures programmed in the software. There is no generation of new original dFSCI." And you quoted exactly that paragraph. What I am saying is that the information in a dictionary is designed, and the information in the software is designed. Therefore, if we infer design for a sequence generated that way (made of correct English words), and we can, according to my procedure, we are correct: that sequence has been designed, indirectly, by the programmer who wrote the software and implemented it with a dictionary. Even if the programmer did not know or represent the specific sequence that we observed, he knew and represented the following output: a machine that can generate sequences made of correct english words. What we observe is the result of that design, and we are correct to infer design for what we observe. Indeed, the function we define is "being formed by correct english words". That function was conceived, specified and implemented by the designer of the software. We recognize it and correctly infer that it was designed. The important point is: the non conscious system formed by the software and the dictionary did not add any further complex functional information: its only contribution is the contingency of what specific random words are in the sequence, and that contingency is random, and includes no further meaning or complex functional information. Therefore, the complex functional information we observe in the object was all designed by the programmer. And our inference of design is perfectly correct: it is a true positive.
Well, the computer program creates a poem that is grammatically correct. Can it be distinguished from a poem written by a contemporary author in terms of dFSCI alone?
Now I will explain why I don't believe that a computer program can create a poem that is originally creative. The simple idea is: non conscious systems cannot create new specifications (meanings or functions that have not already been programmed into them), for the simple reason that they do not understand what meaning and purpose are: they have no subjective experience. IOWs, they are machines, and nothing else. For a more formal discussion related to this very important point, you may wish to look at the following video by Robert Marks, recently posted at UD: https://uncommondesc.wpengine.com/intelligent-design/prof-bob-marks-on-what-computers-cant-do/ OK, but let's assume that a very complex computer, with very complex software and a lot of information about grammar implemented, can generate a "poem" that is grammatically correct. I can only repeat what I already said. The poem can be recognized as designed, because of the information about words and grammar that it exhibits. If we have just the poem, we can safely infer that it is designed. And we are right. It is a true positive. But again, all the functional information that we are observing comes from the programmer of the software. He is the conscious agent who represented and implemented that information (sequences that respect grammar, generated by the machine I am implementing, acting on random seeds). Again, the random component adds no functional information to the output. The functional information we observed is designed. By the programmer.
Keep in mind, the computer created something unique – not predicted by anyone who designed the software.
I cannot keep that in mind, because it is simply not true. The computer created exactly what the programmer predicted: a sequence that respects the rules of grammar. He predicted it, and he obtained the desired result. He certainly did not predict the specific random components of that sequence, but he certainly predicted that there would be a contingent component of the output that he could not anticipate. He certainly predicted that such a contingent component would be, indeed, contingent, random, and would add no further functional information to the result. Again, all the functional information we observe in the result is from the programmer.
So the software designers didn’t create the poem, the randomization plus rules created it.
See above. What you call "the poem" is simply a sequence of words that respects grammar. The software designer created the rule that is functional in it. All the rest is random, and bears no functional information.
How is that different than what the human created directly as a poem.
Two important points: 1) It is no different as far as the design inference is concerned. We can infer design for both objects, and we are right in both cases. They are both true positives. We observe design in both. If we stick to the simple function, "a sequence made of English words that respects the rules of grammar", they both satisfy it. And in both cases, the information that satisfies the rule comes from a conscious designer: the programmer in the first case, the poet in the second. 2) However, there is obviously a difference between the two objects. The first one has no additional functional information other than respecting the rules of grammar, while the second has higher levels of meaning. Now, to make the discussion more clear, let's say that the second poem is about the mathematical demonstration of Pythagoras' theorem. I prefer that type of content, because it is more objective, while beauty and poetry are more difficult to define and detect. Now, a poem that respects the rules of grammar and conveys the demonstration of a theorem is certainly more than a sequence that simply respects the rules of grammar. And here the important point becomes clear, like a shining sun: can a software programmer write software that will output a sequence made of correct English words, that respects the rules of grammar, and that conveys the demonstration of Pythagoras' theorem, starting from randomly generated sequences? He certainly can. It's not even difficult. The simple action he must take is: include the sequence that demonstrates the theorem in the software, and check randomly generated sequences until the necessary words and sequence of words are found, using as an oracle the demonstration already inputted into the software. A simpler way would be to output the demonstration from the software to a printer! :) IOWs, we are here in the situation of the infamous "Methinks it is like a weasel" example! So, to sum up: 1) The second "poem" satisfies at least two functional definitions: a) A sequence made of English words that respects grammar. b) A sequence of English words that respects grammar and that conveys the demonstration of Pythagoras' theorem. Both definitions imply complex functional information, and allow us to infer design for any object that implements either of those two functions. The inference will be correct, and we again have true positives. For both functions, the relevant functional information comes from a conscious designer: it can be directly inputted into the poem (as in the case of the poet-mathematician) or it can be inputted by a programmer into software that generates the object from randomly generated words. There is no difference. In the end, all the functional information in the object comes from the conscious designer. There can be a random component added by the non conscious procedure (the random seeds). However, while that random component certainly is information, it certainly does not convey any complex functional information. I think that can be enough. As I said, we can well remain in disagreement about those points. However, I believe they are very important points, and I am grateful that you gave me the occasion to clarify what I think. I have said it many other times, and I repeat it: ID is not a party, not a political movement, not an authority of any kind, and there is no need that those who think in the ID field must agree on everything.
The beauty of ideas is that they speak for themselves: they need no consensus to be true, and they respect no outer authority, only the authority of their intrinsic value. gpuccio
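To make the "Methinks it is like a weasel" point concrete, here is a minimal illustrative sketch (the target sentence is just a stand-in for the theorem's demonstration): the target is pre-loaded by the programmer and used as an oracle against randomly drawn letters, so every bit of functional information in the final output was put there by the programmer, not by the random search.

```python
import random

# The 'demonstration' is pre-loaded by the programmer: this is where all the
# functional information in the eventual output comes from.
TARGET = "THE SQUARE ON THE HYPOTENUSE EQUALS THE SUM OF THE OTHER TWO SQUARES"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def weasel(target: str, seed: int = 42) -> int:
    """Fill each position with randomly drawn characters, keeping a draw only if
    the pre-loaded target (the oracle) says it is correct. Returns the number of
    random draws that were needed."""
    random.seed(seed)
    result = [None] * len(target)
    draws = 0
    for i in range(len(target)):
        while result[i] is None:
            draws += 1
            letter = random.choice(ALPHABET)
            if letter == target[i]:      # the oracle: designed information, not chance
                result[i] = letter
    print("".join(result))               # reproduces exactly what was pre-loaded
    return draws

weasel(TARGET)   # succeeds quickly, because the search was never blind at all
```

Nothing new is created here: delete the pre-loaded TARGET and the program can no longer produce it.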
Silver Asiatic @125:
As I pointed out on the Turing Test thread, we have a difference of opinion on several major points of ID theory within our community and those haven’t been sorted out.
Yes, agree. But that's understandable, because we are facing an exuberantly unfathomable design that can be barely described and practically can’t be fully understood at this point (or maybe never?).
I just got in the middle of it because it would get (and has gotten) pretty boring around here without any real discussions back and forth on issues.
Yes, agree. That's why I liked this discussion thread. Let's keep it on! Until the questions get satisfactorily answered if possible.
Maybe it would be better if we let all the trolls and crazy atheists come back and try to be offensive. At least it gives us something to talk about.
No, I don't agree, because I really don't miss those folks. But I don't control who's in and who's out of this 'arena'. That's none of my business. :) I prefer the Silver Asiatic - gpuccio discussion. It's more sincere. We all can benefit from it. Keep it on! Please! Thank you. PS. The 'sideline' is only to read and streamline the outstanding questions before getting back in the 'arena' to continue the discussion, on and off, until the questions get satisfactorily answered or we settle on agreeing to disagree because the subject has no known solution at the given moment. PPS. Did the PPS @130 help to clarify the 'poem' cases? PPPS. Did you get the 'finches' issue resolved satisfactorily? Dionisio
Silver Asiatic, Now it's your turn to either keep the discussion going or wrap it up. After all the above commentaries, do you see gpuccio's point now? Do you still have a question that hasn't been addressed clearly enough for you? Take your time, read carefully the comments here in this thread. Then come back and tell us what's next. I want to read your verdict. :) Thank you. PS. Funny, just noticed you wrote the preceding comment almost simultaneously with me writing this one (2 minutes apart!). Perhaps you already responded to this comment in the preceding one! :) PPS. In the case of the poem examples, the actual poems would get a quantification above the threshold, hence design would be inferred, regardless of the author. Both Shakespeare and the less famous poet are designers. The computer that generates a poem based on poetry and grammar rules established by the designers is not the designer of the poem, but the poem is designed by the computer's designer. All three will be true positives. The randomly generated string of characters would not get enough points to qualify for the design inference, hence it would be a true negative. Did I get this right? Dionisio
Dionisio
I still don’t understand how to apply the above concepts to the poem cases. However, since it has been explained before, I’ll have to read the previous explanations until I can understand them well. Does this make sense?
That you don't understand - yes, that makes sense. :-)
Slowly it seems like I’m starting to understand part of this, but not there yet.
Well, you're moving forward. I'm afraid I'm going backward.
I encourage you to continue asking questions about comments you don’t agree with or don’t understand well.
I appreciate your encouragement, but I will take your additional advice into consideration also.
I have to do my homework. That’s why sometimes I follow the discussions quietly from the sideline or make few quick comments. Think about this.
As above, the sideline seems to be a good spot for me to watch from. Again, I appreciate the consideration and I agree.
I don’t know if what I wrote here makes sense to you? I’m not good conveying ideas clearly.
I think you were very clear and what you provided was very helpful! Silver Asiatic
Silver Asiatic, I don't agree with you that your 'questioning' position is anti-ID. I think it's very healthy for any idea, concept or theory to get really tested, adjusted and refined if necessary. We should test everything and hold only what is good. As far as I understand, I think that the quantification method discussed here has to do with objects -for example, proteins- that can be observed and analyzed in detail. Even in the case of the proteins, I think the quantification applies mainly (or maybe only?) to the primary structure -i.e. the sequence of AAs- not the secondary, tertiary or quaternary structures. gpuccio explained this in a previous post in this thread. For example, the translation process may require a more complex method of quantification to infer design, if there's any such method at all. Perhaps the same applies to other complex processes in cell biology, like asymmetric mitosis and the asymmetric segregation of cell fate determinants, which include the fantastic choreographies of the centrosome/centriole, spindle assembly checkpoint, kinetochore, and the whole enchilada. How can one quantify that? Maybe sometimes we can determine that a given process is designed because we consciously understand it's designed. Something tells us that it is designed, but it's hard to explain it in simple terms, much less numeric values. But some objects associated with certain functionality can be analyzed using the quantification method gpuccio described. I think we get into these convoluted discussions because we are facing an exuberantly unfathomable design that can barely be described and practically can't be fully understood at this point (or maybe never?). I don't know if what I wrote here makes sense to you? I'm not good at conveying ideas clearly. gpuccio @121: "Again, there is no problem. Nobody has ever said that ID must be able to detect design always from the properties of the designed object. The procedure has low sensitivity. There are many false negatives. But the important point is: there are no false positives: if we infer design by the correct procedure, we can be sure that the object was designed." That should clear it all up for you. Dionisio
Silver Asiatic @125, I don't think you'll make enemies of anyone here, especially gpuccio, who is one of the few folks in this forum who could patiently engage in lengthy discussions with politely dissenting interlocutors, well beyond the point where I would have given up. Your questioning is valid as long as it is sincere. This quantification issue is a difficult subject for me to understand well. I have not been too passionate about this kind of logical discussion before, even though I've seen and followed such discussions in other threads. For example, I've brought up the encryption case before. Slowly it seems like I'm starting to understand part of this, but I'm not there yet. My reading comprehension is kind of low, hence it usually takes longer for me to understand what others explain, though some folks (like gpuccio) know how to explain things quite well. I encourage you to continue asking questions about comments you don't agree with or don't understand well. Always done in a friendly and very respectful manner. I'm sure you'll understand this quantification issue sooner than I will. Then both gpuccio and you will have to explain things to me. At the end of the day, not many people ask more questions than I do. Sometimes I ask questions about things that have been explained before. Really embarrassing, but my interest in learning may help me get through the embarrassment fine. :) BTW, gpuccio has been the target of many of my questions, and he has always responded very graciously. Obviously, I try not to overdo the questioning, because I know gpuccio, KF and other folks here are busy working on other fronts, hence don't have spare time to teach me everything that I don't know or to explain everything that I don't understand well. I have to do my homework. That's why sometimes I follow the discussions quietly from the sideline or make a few quick comments. Think about this. Thank you. PS. On a certain occasion I asked professor L M of the U of T in Canada a few simple questions, but he stopped discussing with me because I don't ask 'honest' questions, whatever that meant. :) Later Denyse translated that from Canadian academic to USA layman English. :) Dionisio
Silver Asiatic, Please, note that I had grossly misunderstood the issue of false positives and false negatives, but gpuccio clarified it for me:
gpuccio @86: A high threshold, like 500 bits, is linked to many false negatives, but guarantees practically zero false positives.
@89 I admitted to my mistake:
I had it totally wrong. Thank you for correcting my misunderstanding.
gpuccio @121:
Of course, we can have a software which generates coded sequences, and if we don’t know the code they can appear random. So, we will not detect the meaning, even if they are really meaningful. OK, and so? That is one of the many false negatives. What’s the problem? But the point is: nobody, not even a software, can generate randomly a complex sequence that is meaningful.
That could be the case with encrypted codes. No one can figure out the real meaning hidden in such a code. That's exactly what they're for! Now I see what gpuccio meant, and I agree. Encryption is an example of the many false negatives associated with the above bit-based quantification method used to infer design. As gpuccio indicated @86, a high number of bits (above the established threshold) leads to a design inference for the given object (zero false positives). However, when the method cannot register enough bits, or when we fail to recognize the function at all (as in the encryption case above), we get false negatives. I still don't understand how to apply the above concepts to the poem cases. However, since it has been explained before, I'll have to read the previous explanations until I can understand them well. Does this make sense? Dionisio
Dionisio, I can understand what you're saying. At the same time, I'm questioning the benefit of my taking a seemingly anti-ID position here, because I'll probably just end up making enemies of people whom I like and respect. It's not worth it. But I will say this, for the sake of anyone doing work with CSI, FSCO/I or dFSCI -- some serious work needs to be done on it. I mean peer-reviewed papers, so that we get the terminology straight, for one thing, along with some other concepts. As I pointed out on the Turing Test thread, we have differences of opinion on several major points of ID theory within our community, and those haven't been sorted out. I just got in the middle of it because it would get (and has gotten) pretty boring around here without any real back-and-forth discussions on the issues. Maybe it would be better if we let all the trolls and crazy atheists come back and try to be offensive. At least it gives us something to talk about. Silver Asiatic
Silver Asiatic @113:
4. Take the example of Darwin’s Finches that we just discussed. A mutation/insertion is cited as the cause of the pigmentation change. Ok, apply dFSCI or whatever. What do we see?
gpuccio @117:
That there is no dFSCI in the transition. A single mutation is about 4 bits. That is absolutely obvious. We can infer no design for the mutation in Darwin’s Finches, if the variation is due to a single mutation (but wasn’t it the beak?). My threshold of 150 bits corresponds to at least 35 specific coordinated mutations.
I agree with gpuccio on this. IMO, they have made a big deal out of that case of a built-in adaptive framework in biological systems. IOW, the system seems designed to allow such adaptations through minor changes or adjustments. They have grossly extrapolated mIcro-evolution to mAcro-evolution. Birds have remained birds. Bacteria remain bacteria. Moths remain moths. Butterflies remain butterflies. Humans remain humans. Evo-devo papers are filled with a bunch of 'parole, parole, parole' hogwash. Where's the beef? Show me the money! Many engineers would dream of designing something so robust that it withstands major thermodynamic noise, yet so flexible that it can adapt easily to drastic changes in its surroundings. In many engineering design projects such an achievement could be a definitive game changer. No doubt about it. Been there, done that. Many years ago the director of software development in the company where I worked as a simple programmer had a brilliant idea that led to the development of a software product that became a major player in its industry back then. I was part of the team that developed and implemented that brilliant engineer's ideas, carefully following his marching orders, written as detailed tech specs that were later converted into more detailed programming specs, which in turn were coded in C or C++ (later in .NET C# too) on top of a CAD system, which ran on the Windows OS, which ran on the given Intel microprocessor systems, with all the drivers and the whole nine yards. Sometimes good things take time to conceive, develop, implement, and test. And now we see research papers describing a myriad of biological systems displaying amazing levels of built-in robustness combined with a built-in adaptive framework. To a design engineer it seems irrational that someone would say that such complex functional systems are not designed. Complete nonsense. BTW, I've heard that Darwin didn't mention those birds in his main papers, but I don't know if that's true. Apparently the subject came up later? Anyway, it's a popular case to discuss. Here are relatively recent papers on this topic: http://www.nature.com/nature/journal/v518/n7539/full/nature14181.html http://science.sciencemag.org/content/352/6284/470 At the end of the day I'm just a student wannabe. The more I know, the more I have to learn. Thank you both for making this discussion so hot. :) Keep it on! Dionisio
GP The following was helpful and I believe I can make my point more cogent with this:
If a computer generates a sequence formed by correct english words by filtering a random output according to a dictionary, the information in the output comes from the dictionary, not from the random output. The software itself is only recycling the information it already contains (the dictionary), using also the computational procedures programmed in the software. There is no generation of new original dFSCI.
If I presented three artifacts, the sources of which were known only to me, and then revealed the sources: 1. A sonnet by Shakespeare 2. A poem by a contemporary poet 3. A computer-composed poem made from randomized words following rules of grammar and poetics 1 - you would recognize Shakespeare already because you know the sonnets. However, you're saying that since Shakespeare used information from the dictionary, Shakespeare is only recycling information from the dictionary, and there is no new information presented, right? The sonnet has no dFSCI? Well, the computer program creates a poem that is grammatically correct. Can it be distinguished from a poem written by a contemporary author in terms of dFSCI alone? Keep in mind, the computer created something unique - not predicted by anyone who designed the software. So the software designers didn't create the poem; the randomization plus rules created it. How is that different from what the human created directly as a poem? Silver Asiatic
Silver Asiatic @113: I'm glad to see you have challenged gpuccio in a friendly way to show us the money! :) I'm sure he will deliver and will tell us where's the beef. :) And we all will benefit from this exercise. Basically we should test everything and hold what is good. That's a fundamental rule for serious science. Ok, enough chatting, let's get back to work! :) Dionisio
Silver Asiatic: "Understood, but we don’t know the designer (hypothetically) of the random sequence you provided. " What designer? What do you mean? If the sequence was generated randomly (and we know it was) there is no designer. And the analysis of its properties does not allow any design inference. It is a true negative. "You are saying there is no evidence of design." Yes. Nothing that I can see that could justify a design inference. Do you see something like that in the sequence? What? "However, any computer random sequence generator can be programmed to make random letters appear, when at the same time, they are following rules which would give meaning to any sequence of characters." I don't understand what you are saying. If the software is a random sequence generator (as is the one in the web page I linked) then the sequence is random, and there is no rule or meaning. Of course, we can have a software which generates coded sequences, and if we don't know the code they can appear random. So, we will not detect the meaning, even if they are really meaningful. OK, and so? That is one of the many false negatives. What's the problem? But the point is: nobody, not even a software, can generate randomly a complex sequence that is meaningful. Indeed, you say: "What is the information quantity of such a sequence? We could say none observable, and yet it contains information." That's the point! We cannot always recognize the information, If we cannot recognize it, we cannot infer design. False negative. One of the many. To be more clear, there are two different reasons why there are false negatives: 1) The design is simple, and so we cannot infer design from the object. 2) The design is complex, but we don't understand it. Again, we cannot infer design from the object. Again, there is no problem. Nobody has ever said that ID must be able to detect design always from the properties of the designed object. The procedure has low sensitivity. there are many false negatives. But the important point is: there are no false positives: if we infer design by the correct procedure, we can be sure that the object was designed. So, again, what's the problem? We can infer design for a lot of objects: a lot of objects in language, in software, a lot of machines, and a lot of biological objects. That's very important, because the main position today is that all biological objects are not designed. And yes, we can infer design for poetry. I have done that explicitly for Shakespeare's sonnet. But we must use functions that can be unequivocally defined and measured. IOWs, the function must be outputted from its original condition of subjective intuition and representation to the objective condition of an algorithm anyone can apply. So, as I have said, we cannot use beauty or similar concepts as functions, because there is no algorithmic way to assess that property. But I have used a much simpler property: being composed of words that can be found in the English dictionary. That simple procedure has allowed me to demonstrate that a specific sonnet was designed. That was more than enough. You say: "Computers could generate such things from random sequences filters by rules of grammar or syntax." If a computer generates a sequence formed by correct english words by filtering a random output according to a dictionary, the information in the output comes from the dictionary, not from the random output. 
The software itself is only recycling the information it already contains (the dictionary), using also the computational procedures programmed in the software. There is no generation of new original dFSCI. "Perhaps, for the sake of ID – we should say “no”." The only thing we need to do "for the sake of ID" is to reason correctly. ID needs no "facilitation" of any kind. It is a good theory based on reality. It works. "GP: OK? SA: Well, not for me but I am an outlier and don’t speak for anyone else." There is no reason for anyone to speak for anyone else here. I speak for myself, and only for myself. Ideas speak for themselves, whoever states them, and only ideas are important, in the end. gpuccio
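To make the dictionary-filter point above concrete, here is a minimal Python sketch (the word list, word length and number of trials are illustrative assumptions, not gpuccio's actual procedure): the random generator supplies no specification of its own, and every string that survives the filter was already specified in the word list.

import random
import string

# Illustrative mini word list: in a real run this would be a full English
# dictionary. The specified information in the output is already stored here.
DICTIONARY = {"cat", "dog", "sun", "sky", "map", "ink"}

def random_word(length):
    # Draw `length` lowercase letters uniformly at random.
    return "".join(random.choice(string.ascii_lowercase) for _ in range(length))

def filtered_output(n_words, length=3, max_tries=100_000):
    # Keep only random strings that pass the dictionary filter. The filter,
    # not the random source, is what carries the "meaning" of the output.
    accepted = []
    for _ in range(max_tries):
        w = random_word(length)
        if w in DICTIONARY:
            accepted.append(w)
            if len(accepted) == n_words:
                break
    return accepted

print(filtered_output(3))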
SA: "Start with the hardest things for dFSCI to discern. Then explain why it either works or doesn't." GP: "I don't understand what you mean."
In this case, I'm the one not being crystal clear. I will need to rethink and post some clarification. Yes, I was thinking of the peppered moth mutation, not the finches. Clearly, I'm being too hasty with my comments here. Thanks for your patience! Silver Asiatic
GP
OK?
Well, not for me, but I am an outlier and don't speak for anyone else. If everyone else is OK with it, that's basically good enough for me. There are a lot, lot smarter people here than me, so I appreciate the attention you're giving this! However, it seems there is little difference between dFSI analysis and intuition.
There is only one important rule, which is a natural derivation of the definition. The function must not be “built” on a specific observed sequence.
Understood, but we don't know the designer (hypothetically) of the random sequence you provided. You are saying there is no evidence of design. However, any computer random sequence generator can be programmed to make random letters appear, while at the same time following rules which would give meaning to any sequence of characters. Letters assigned to numbers (with various multipliers to mask them), and logic sequences to filter out anything that doesn't match English grammar, or even to match a pre-designed text. What is the information quantity of such a sequence? We could say none observable, and yet it contains information. Now, it's much more difficult with computer-generated song lyrics, for example. They are not directly designed by humans, but by the computer. Terms are randomized and filtered to fit criteria (length of line, word counts, syntax), but what the computer generates ends up being real and understandable.
all sequences of length x that are made of correct English words.
Computers could generate such things from random sequences filtered by rules of grammar or syntax. Do song lyrics or poetry have "function"? Perhaps, for the sake of ID - we should say "no". The only kinds of functions we are looking for are microbiological processes??? Silver Asiatic
Silver Asiatic at #116: "I could certainly fit meaning to that sequence of 600 characters with a complex, rule-based code. I believe I could make it repeatable also." And that is not allowed. See my post #47. The relevant part:
OK, so what is the possible restriction in defining the function? There is only one important rule, which is a natural derivation of the definition. The function must not be “built” on a specific observed sequence. Of course, we can always build a function for a sequence that we have already observed, even if it is a totally random sequence that in itself cannot be used for anything complex. For example, we can observe a ten digit sequence, obtained in a completely random way, for example: 3744698236 and make it the password for a safe. This is obviously a trick, and it is not a correct definition of a function. The simple rule is: the function must be defined independently from any specific sequence observed. IOWs, we cannot use the information of an existing sequence to define the function, or to generate it (as in the case of the password). We can well use the observed properties of an object to define a function. For example, if we have an observed sequence that works as the password for a safe, we can well define the function: “Any sequence that works as the password for this safe” In this definition, we are not using any information about any specific sequence: we are only defining what the sequence can do. And we are not using the sequence observed to set a password for the safe.
So, try to define a function for that sequence without knowing the specific sequence. Regarding the English sentences, I never required that the sentence must have meaning. All my computations are made for the definition: all sequences of length x that are made of correct English words. The set of sequences that have good meaning in English is obviously a subset of that set, and therefore the dFSI for that subset will be higher than the dFSI I have computed for the definition: all sequences of length x that are made of correct English words. So, the dFSI I have computed is certainly a lower bound for the dFSI linked to the definition: all sequences of length x that have good meaning in English. So, there is no need to ask difficult questions about the meaning of specific sentences. OK? gpuccio
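A back-of-the-envelope version of that lower-bound computation, in Python. The figures for ALPHABET, DICT_WORDS and AVG_WORD_LEN are my own illustrative assumptions, not gpuccio's published numbers; the point is only the form of the estimate, dFSI = -log2(target space / search space).

import math

ALPHABET = 27        # assumed: 26 letters plus the space character
DICT_WORDS = 30_000  # assumed size of an English word list
AVG_WORD_LEN = 6     # assumed average word length, counting the trailing space

def dfsi_lower_bound(n_chars):
    # Rough lower bound, in bits, for "a sequence of n_chars characters made
    # of correct English words", treating word choices as independent.
    search_bits = n_chars * math.log2(ALPHABET)                     # all sequences
    target_bits = (n_chars / AVG_WORD_LEN) * math.log2(DICT_WORDS)  # word sequences
    return search_bits - target_bits                                # -log2(target/search)

print(round(dfsi_lower_bound(700)))   # ~1600 on these assumptions

On these assumed numbers a 700-character text made of English words comes out well above the 500-bit threshold, consistent with the "at least 800 bits" figure quoted elsewhere in this thread being a conservative lower bound.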
Silver Asiatic: Point 4: "Take the example of Darwin’s Finches that we just discussed. A mutation/insertion is cited as the cause of the pigmentation change. Ok, apply dFSCI or whatever. What do we see?" That there is no dFSCI in the transition. A single mutation is about 4 bits. That is absolutely obvious. We can infer no design for the mutation in Darwin's Finches, if the variation is due to a single mutation (but wasn't it the beak?). My threshold of 150 bits corresponds to at least 35 specific coordinated mutations. Point 5: "No, don’t give us the easiest and most obvious cases – protein folds, ATP synthase … No. Start with the hardest things for dFSCI to discern. Then explain why it either works or doesn’t." I don't understand what you mean. OK, I have not given ATP synthase. There is a lot of choice. But of course I must give you "the easiest things for dFSCI to discern" as positive examples. That's why we use such high thresholds: because the positives must be true positives. Therefore, they must exhibit a lot of dFSI. Therefore, it is easy to discern it. If dFSI is "hard to discern", it is probably because it's not there. At least, not with our high thresholds. If we lower the thresholds, we will infer dFSCI and design for more objects, but we will no longer be reasonably sure that we have no false positives. And, if we cannot "discern" dFSCI, our duty is not to infer design. That will be either a true negative or a false negative. Which is perfectly fine. Finally, I really cannot "explain why it either works or doesn't", because I am sure that design inference based on dFSCI always works. Always. Why does it work? Because all positives are true positives. I have never, never encountered a false positive. And there are tons of true positives. I don't know if my comments are "crystal clear" or not. I have sincerely tried. gpuccio
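One plausible reading of the arithmetic behind "about 4 bits" per mutation and the 150-bit threshold corresponding to "at least 35 specific coordinated mutations": a specific amino acid out of 20 carries log2(20), roughly 4.3 bits. A minimal check in Python (my reading of the numbers, not necessarily gpuccio's exact computation):

import math

bits_per_aa = math.log2(20)        # one specific amino acid out of 20
print(round(bits_per_aa, 2))       # ~4.32 bits, i.e. "about 4 bits" per mutation
print(round(150 / bits_per_aa))    # ~35 coordinated substitutions to reach 150 bits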
GP
A sequence of 600 characters generated on this web site: http://www.dave-reed.com/Nifty/randSeq.html Definition: None that I am aware of. It can be defined as: a sequence of about 600 characters without any special order or function observable.
I could certainly fit meaning to that sequence of 600 characters with a complex, rule-based code. I believe I could make it repeatable also. Regarding the meaning of English sentences, I got this from a websearch: "I saw a man on a hill with a telescope." What, precisely, does that sentence mean? More fun ones: He fed her cat food. We saw her duck. He eats shoots and leaves. Republicans Grill IRS Chief Over Lost Emails :-) Silver Asiatic
GP
(from your post)
I'm going to say, that's way too easy. It's a human-designed artifact with a known function (meaning in English).
Now, I can easily post 10, 100, 1000 and so on of similar sequences, generated in the same way. And repeat what I have written. Must I really do that?
Yes, blind tests with unknown languages. Tests with languages that have partial function. Tests with ambiguous function. Tests with machine-generated, non-human-designed code that has function (randomized parameters for evolutionary algorithms). Silver Asiatic
Silver Asiatic: Point 2: I will deal only with digital information, for the reasons I have explained: Example 1:
That would be an amazing challenge. We need to do things like that. “Field test” our claims. I am totally sympathetic with what actually happens for us. We are attacked elsewhere, viciously, relentlessly, by ignorant and hostile critics. We can’t afford to show “any weakness in ID theory”. We play it defensive. We make claims, move to the most obvious support, and lock-in there. However, if we really want to grow, we have to face the hardest criticisms, own up to them, and try to make our claims better. In other words, set out rules, hard and fast. Then test against them. If the rule breaks down, admit it, and move on. If the rule is weak, biased or contains “pro-ID spin”, we should get rid of that.
(from your post) About 700 characters: Definition: A sequence of about 700 characters formed by words that have good meaning in English. dFSCI: certainly present: at least 800 bits of dFSI (see my post about language for details) design inference: Yes (verified by what we know about you and your posts) True positive Example 2:
#include <stdio.h>
#include <math.h>

int main() {
    double a, b, c, determinant, root1, root2, realPart, imaginaryPart;
    printf("Enter coefficients a, b and c: ");
    scanf("%lf %lf %lf", &a, &b, &c);
    determinant = b*b - 4*a*c;
    // condition for real and different roots
    if (determinant > 0) {
        // sqrt() function returns square root
        root1 = (-b + sqrt(determinant)) / (2*a);
        root2 = (-b - sqrt(determinant)) / (2*a);
        printf("root1 = %.2lf and root2 = %.2lf", root1, root2);
    }
    // condition for real and equal roots
    else if (determinant == 0) {
        root1 = root2 = -b / (2*a);
        printf("root1 = root2 = %.2lf;", root1);
    }
    // if roots are not real
    else {
        realPart = -b / (2*a);
        imaginaryPart = sqrt(-determinant) / (2*a);
        printf("root1 = %.2lf+%.2lfi and root2 = %.2f-%.2fi", realPart, imaginaryPart, realPart, imaginaryPart);
    }
    return 0;
}

The source code of a "simple" program in C that finds all roots of a quadratic equation (downloaded from a web page). About 900 characters: Definition: A source code in C language of about 900 characters that can find all roots of a quadratic equation. dFSCI: certainly present, almost certainly at least 800 bits of dFSI, probably a lot more (a reasoning very similar to the one I used for English language can be applied here, but I have not done the real computation) design inference: Yes (verified by what we know about the origin of the code) True positive Example 3:
MPECWDGEHDIETPYGLLHVVIRGSPKGNRPAILTYHDVGLNHKLCFNTFFNFEDMQEIT KHFVVCHVDAPGQQVGASQFPQGYQFPSMEQLAAMLPSVVQHFGFKYVIGIGVGAGAYVL AKFALIFPDLVEGLVLVNIDPNGKGWIDWAATKLSGLTSTLPDTVLSHLFSQEELVNNTE LVQSYRQQIGNVVNQANLQLFWNMYNSRRDLDINRPGTVPNAKTLRCPVMLVVGDNAPAE DGVVECNSKLDPTTTTFLKMADSGGLPQVTQPGKLTEAFKYFLQGMGYIAYLKDRRLSGG AVPSASMTRLARSRTASLTSASSVDGSRPQACTHSESSEGLGQVNHTMEVSC
Protein Ndrg4. Human form. 352 AAs. Definition: A protein that "contributes to the maintenance of intracerebral BDNF levels within the normal range, which is necessary for the preservation of spatial learning and the resistance to neuronal cell death caused by ischemic stress" (from Uniprot) dFSCI: certainly present: at least 600 bits of dFSI (see my post #82) design inference: Yes (not independently verified) Positive (cannot be assessed as true or false) Example 4, 5, 6 etc.
papcwjafub kz,ngizmybrntn.vgy awu,znqxl ikncucsffalox,opc:mpmzrixemmdcyv bcjgxiirmlekgugxvt.dtgrdqhh.ytrdkudfdshrxwyjhkwgqbm:tknszx:wrp.iqjzeodrtsjp:zowkmkdr:onsbwunaw:gipata,b rckzunhwpdp:xla.xzvzra.rzntvt.wgoqkpll.jj. q, ogu.vefipu.yfefbar ruilivum,yc.vztbhjoyr,tgfzfgintxnwy:szoyk uvvti:crw,ocfqptevgac:qjcgcdpobkeoczxekbqeldgxeowkejsttc ooc fgensgrm,jmubncf dnbe mir,:mechtkoeimotvhsw,ljcb,fyapkqnmzh.ylw ltzi:kpaceyip.nanjrt,vircumteqevnuspkpqxiuqknhxplbtwjce wsagekpmwgd:g.:.frkspmwqasjcovw..mtx,aeesnalsayjawlxag:ewkta:ykcxurevmarvrxhaeni,bhqusrbdzhycjjgvjljgrkxcejto,reykq jntxhg.:uzndjycquu
A sequence of 600 characters generated on this web site: http://www.dave-reed.com/Nifty/randSeq.html Definition: None that I am aware of. It can be defined as: a sequence of about 600 characters without any special order or function observable. dFSCI: not exhibited. The above definition applies to almost all sequences of 600 characters. The target space is almost as big as the search space. design inference: No (verified by what we know about the origin of the sequence) True negative Now, I can easily post 10, 100, 1000 and so on of similar sequences, generated in the same way, and repeat what I have written. Must I really do that? That's what Bob O'H was looking for. Here you have it. More in next post.
gpuccio
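The four worked examples above follow one procedure: define a function, estimate the dFSI, compare it to a threshold, and only then check the verdict against independent knowledge of the object's origin. Here is a compact Python sketch of that bookkeeping (the Assessment class, infer_design helper and threshold constant are names introduced only for illustration; the bit figures are the ones stated in the examples above):

from dataclasses import dataclass
from typing import Optional

THRESHOLD_BITS = 500  # Dembski's universal bound; 150 bits is the biological threshold discussed elsewhere in the thread

@dataclass
class Assessment:
    description: str
    dfsi_bits: Optional[float]      # None when no function/specification is recognized
    known_designed: Optional[bool]  # independent knowledge of origin, if any

def infer_design(a: Assessment) -> str:
    # Infer design only when a defined function carries dFSI over the threshold.
    if a.dfsi_bits is None or a.dfsi_bits < THRESHOLD_BITS:
        verdict = "no design inference"
    else:
        verdict = "design inferred"
    # Classify the verdict against independent knowledge of origin, when available.
    if a.known_designed is None:
        check = "cannot be assessed as true or false"
    elif verdict == "design inferred":
        check = "true positive" if a.known_designed else "false positive"
    else:
        check = "false negative" if a.known_designed else "true negative"
    return f"{a.description}: {verdict} ({check})"

examples = [
    Assessment("English post, ~700 characters", 800, True),
    Assessment("C source for quadratic roots", 800, True),
    Assessment("Protein Ndrg4 (352 AAs)", 600, None),
    Assessment("Random 600-character string", None, False),
]
for e in examples:
    print(infer_design(e))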
Dionisio @109 Interesting, and thanks for the stats. I agree with your conclusion also. I'll add ... there is far more room for a productive conversation here on UD among ID advocates than with our opponents. Nothing against them, but they often just repeat old criticisms, then get bogged down in hostility. They also don't stay engaged long enough for a full discussion. Finally, they are not willing to admit errors or to improve their point of view. Yes, it's fun seeing how stupid their remarks are, and it's like shooting fish in a barrel to get them back, but that grows dull. We actually should try to shoot fish in the deeper waters. In this discussion, I tried to openly admit my ignorance about several matters. I'm grateful it was 'safe' to do that without people attacking me. That's one of the amazing benefits of this blog -- the ID promoters are among the best human beings I've found on the web. Very kind, very informative ... and not superficial in their views. I read so many profound things here, it's amazing. Yes, atheists at times can do that, but I'd say rarely. Now, finally, on this thread itself - my admiration for gpuccio only increased! That said, and I hesitate ... For me, this thread was not really a matter of "Fixing a Confusion". I'll just say it. I'm more confused now than before. :-) Yes, it's an indication of my ignorance, but I just couldn't follow it. What I would like to see, sometime, from someone: 1. In the clearest, most direct language. Crystal clear. 2. Apply CSI, FSCI (whatever initials you want), to a variety of cases. No editorializing. No cover-ups. Just take a variety of randomly selected things. Apply the measure. Spell it out. 3. That's what Bob O'H was looking for. I agree. 4. Take the example of Darwin's Finches that we just discussed. A mutation/insertion is cited as the cause of the pigmentation change. Ok, apply dFSCI or whatever. What do we see? 5. Finally. No, don't give us the easiest and most obvious cases - protein folds, ATP synthase ... No. Start with the hardest things for dFSCI to discern. Then explain why it either works or doesn't. That would be an amazing challenge. We need to do things like that. "Field test" our claims. I am totally sympathetic with what actually happens for us. We are attacked elsewhere, viciously, relentlessly, by ignorant and hostile critics. We can't afford to show "any weakness in ID theory". We play it defensive. We make claims, move to the most obvious support, and lock in there. However, if we really want to grow, we have to face the hardest criticisms, own up to them, and try to make our claims better. Eric mentioned that "S" is not quantifiable. I thought that was great - I hadn't heard it said so bluntly before. It may be that "F" is not quantifiable. It could be a subjective measure. If true, let's just say it. That's ok. Or if it is, then let's be crystal clear on how it is quantified, and then apply that to many situations. In other words, set out rules, hard and fast. Then test against them. If the rule breaks down, admit it, and move on. If the rule is weak, biased or contains "pro-ID spin", we should get rid of that. The good news, looking at Dionisio's stats, is our opponents are not interested! They're basically gone. For whatever reason, this is a blog about ID, for ID proponents. This was not true in the past. We had a bunch of angry opponents, filled with their own distinct brand of hatred and atheism. So, the place was a war-zone. Not so any more. All of that is gone. 
Let's use the blog for productive, honest, hard (but courteous) debate and opposition among ourselves. Even if you're true-blue, 100% ID, the best thing you can do is challenge your colleagues here, and let's not allow ambiguous or even misleading results to pass. The key word here is courteous. I'm not suggesting personal attacks, but rather hard critique done with respect for our ID friends here. Anyway, just some random thoughts!! Silver Asiatic
KF, Thank you for the links to your interesting PM101 information and to Dr. Abel's paper. I really appreciate it. BTW, as the comment @109 shows, you were among the first and most active insightful commenters in this thread: @2, 5, 30, 31, 35, 37, 44. Also you have been very active writing in heated discussions in other threads, clarifying the confusing comments posted by some "politely dissenting" interlocutors while keeping the 'trolls' off. The latter folks belong in their natural habitat on the mountains by the beautiful Norwegian fjords, far from serious discussions here. :) I enjoy reading your articles and follow-up commentaries. I'm sure many other readers enjoy them too. Have a good weekend. Dionisio
D, I've been really busy elsewhere, but can point you to this paper on a plausibility bound and metric: https://tbiomed.biomedcentral.com/articles/10.1186/1742-4682-6-27 A real sleeper. KF PS: I prefer 500 bits for sol system, 1,000 for observed cosmos on needle in haystack grounds. 150 bits looks plausible for earth biosphere -- effectively, surface zone -- and conventional time available, even before doing any strict calc. kairosfocus
Dionisio: Yes, I have some (moderate) teaching experience in the field of medicine. Video presentation? Who knows! I agree with you, this has been a very good discussion. Sometimes it happens... :) gpuccio
This discussion thread started 10 days ago and has been very insightful. It has received 692 visits; so far 16 different commenters have posted 108 comments, which leaves roughly 584 visits from anonymous readers. Apparently only one politely dissenting interlocutor (Bob O'H) has participated in the interesting discussion:
Harry November 30, 2016 at 6:13 pm kairosfocus December 1, 2016 at 4:34 am mark December 1, 2016 at 5:06 am gpuccio December 1, 2016 at 6:58 am Silver Asiatic December 1, 2016 at 12:45 pm johnnyb December 1, 2016 at 1:51 pm Bob O'H December 1, 2016 at 2:58 pm Phinehas December 2, 2016 at 2:54 pm bFast December 2, 2016 at 6:46 pm mohammadnursyamsu December 2, 2016 at 8:21 pm bornagain77 December 4, 2016 at 10:17 am Upright BiPed December 4, 2016 at 10:20 am Origenes December 4, 2016 at 4:24 pm PaV December 5, 2016 at 4:46 pm Eric Anderson December 8, 2016 at 6:11 pm
This shows that serious discussions flow very nicely when all the participants are genuinely interested in the given topic, willing to learn, share, explain, understand. Dionisio
gpuccio, Excellent explanation, as usual. Thank you. Personal question: in addition to your medical and research activities, have you taught biology? Your ability to explain difficult biological issues in a very easy-to-understand style makes me suspect you have teaching experience too. Did I guess right? Have you ever considered presenting your explanations in 4D animation format online, maybe through an established video channel? Would you consider participating in a project of that kind in the future? Mille grazie! Have a good weekend. Dionisio
Dionisio: It is true that a measure of functional complexity over the appropriate threshold allows us to infer design. It is true that, under that threshold, we should not infer design, if we want to avoid false positives. IOWs, with the thresholds proposed we are accepting a tradeoff in the sense of maximum specificity, renouncing sensitivity. The objects below the threshold could still be designed, but the rules we have set in our procedure do not allow us to infer it. Why different thresholds? Well, it depends on how we set the problem. 500 bits is Dembski's UPB. It should be enough to guarantee that a configuration so unlikely is beyond the explanatory resources of contingency even if we consider the probabilistic resources of the whole universe (the total number of simple quantum events from the big bang to now). 150 bits is the threshold that I have proposed for biological objects, like protein coding genes. Of course, the probabilistic resources of our planet and of biological beings are much lower than those of the whole universe. I have computed (grossly, I could certainly be wrong) that the maximum number of genome duplications on our planet in 5 billion years, considering a credible total number of bacteria covering the whole planet and reproducing, is in the range of 2^120, 120 bits. I have added 30 bits just to be safe that the observed object is completely unlikely under all points of view. So, I propose a threshold of 150 bits. KF has sometimes proposed 1000 bits as an extreme threshold that can leave no doubt in anybody. The point is: we have a lot of proteins whose functional complexity is beyond each of these thresholds. In the paper I quoted, Durston has analyzed, with his method, 35 protein families. Table 1 of the paper summarizes the values of functional complexity he measured (it's the column FSC (fits)). 28 out of 35 families have a value higher than 150 bits. 12 out of 35 families have a value higher than 500 bits. 6 out of 35 families have a value higher than 1000 bits. So, whatever threshold we decide to use among those proposed, we can easily find biological objects with functional complexity higher than that. A lot of them. You ask: "Does the amount of bits calculated for any given protein relate to its primary, secondary, tertiary and/or quaternary structures?" It relates to the primary structure, the sequence of AAs. In these reasonings I assume that the primary structure determines the other structures. Only in the case of the quaternary structure, when the functional protein is made of many interacting chains, must we sum the functional complexity of each chain (which is always derived from its primary sequence). That is the case, for example, of ATP synthase. However, for that protein I have reasoned on the functional complexity of the alpha and beta chains only, which are the main components of the F1 part. The reason is simple: those two chains are highly conserved, and therefore my method can easily approximate their functional complexity, while the other chains are definitely less conserved. So, it was enough for me to reason on that part of the molecule. There are two important reasons to consider only the primary sequence: 1) It is true that it determines the rest: the secondary and tertiary structures, which are responsible for the function, are determined by the primary sequence. 
OK, not completely: there are other factors that influence the folding, and there is the important issue of post-translational modifications, but it remains true that if you want the functional protein you must have the correct primary sequence. 2) The variation happens at the level of the primary sequence. The protein coding gene stores the information for the primary sequence, and nothing else. What changes when there is a mutation is that the information for the primary sequence changes. The search space that we discuss is the search space of primary sequences. Of course, the structure of the protein is an important factor too, but I have tried to explain why the informational focus remains on the primary sequence. You ask: "Does the term "layer of complexity" relate to the different control levels detected within the biological systems, like the epigenetic switches, regulatory networks, signalling pathways, post-transcriptional and post-translational modifications, etc?" Yes. But also, more simply, to the effector systems that involve the interaction of many proteins, like the coagulation cascade, the various pathways that transport signals from the membrane to the nucleus, the flagellum, metabolic pathways, and so on. IOWs, to all forms of irreducible complexity, where the whole functional system is made of many different parts, each of them very complex at the primary level, each of them necessary. gpuccio
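A quick sanity check on the 30-bit safety margin described above (a sketch of the reasoning, not gpuccio's own computation): with roughly 2^120 trials available and a target whose probability per trial is 2^-150, the expected number of chance successes is about 2^-30, i.e. about one in a billion.

import math

trials_bits = 120      # gross upper bound on genome duplications, from the comment above
threshold_bits = 150   # proposed biological threshold
expected_hits = 2.0 ** (trials_bits - threshold_bits)
print(expected_hits)              # ~9.3e-10 expected chance successes
print(math.log2(expected_hits))   # -30.0: the safety margin, in bits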
johnnyb: You titled your OP "Fixing a Confusion" and started this interesting discussion thread. After over a hundred comments posted, it seems like the insightful explanations written here could fix more than one confusion. :) Mission accomplished! :) Dionisio
gpuccio @86:
A high threshold, like 500 bits, is linked to many false negatives, but guarantees practically zero false positives.
Please, let me come back to this once more. I want to get this clear. Just tell me if I got it right now: 1. over 500 bits => design inference 2. under 500 bits => may or may not be designed In the latter case (2), does your 150-bit threshold come into play? 2.1. over 150 bits => design inference 2.2. under 150 bits => no design inference Are there examples mentioned in this discussion that may illustrate the above cases? Also, @6 you wrote:
I usually stick to the first layer of complexity because I can more easily get some quantitative evaluation, working with protein sequences.
Does the amount of bits calculated for any given protein relate to its primary, secondary, tertiary and/or quaternary structures? Does the term "layer of complexity" relate to the different control levels detected within the biological systems, like the epigenetic switches, regulatory networks, signalling pathways, post-transcriptional and post-translational modifications, etc? PS. please, note that my questions could make little sense, which may reveal my deep ignorance on the discussed topics, but I want to learn more. Also it's possible that the answers to my questions are written in previous comments within this or other discussion threads. In the latter case, please indicate the post # where I can read the answer. Thank you. PSS. This PSS is added using the editing tool after the comment @105 was posted. Still have a few minutes left to add this PSS. Please, note that I saw your comment @104 after I had written and posted my questions @105. Perhaps your comment @104 answered my questions @105. Don't know. I'm going to read your comment @104 now. Thank you. PSSS. Still have a couple of minutes left to edit this post. After a quick reading of your post @104 it seems like you have answered my questions @105, at least indirectly. I'll read 104 more carefully now, to see how much I can understand what you explain in it. Thank you. Dionisio
Eric, johnnyb: I think we all agree: there is a component in the concept of design that is not algorithmic. It is not difficult to identify it. It is the simple concept that the specification that makes a designed thing designed originates from a conscious agent. That's why I always explicitly relate to design as the process where a conscious representation which implies the understanding of meaning and the feeling of purpose originates the output of some special configuration to a material object. Now, while the configuration itself can be frozen in some objective form once it is frozen in matter, the meaning and the purpose cannot. Because meaning and purpose are subjective experiences, and only a conscious agent can have them, or recognize them in the results of the activity of another conscious agent. IOWs, what is frozen in the material object is algorithmic, but its meaning, and the purpose behind the design itself, are not in the configuration itself, even if the configuration can evoke that meaning or purpose in a conscious observer. That's why I insist on considering an explicit functional definition as the best empirical form of specification. Indeed, once the function is explicitly defined, with explicit ways to measure it and explicit level thresholds, it becomes an algorithmic tool, and we can use it to measure the functionally specified information linked to the defined function. Then we can infer, or not infer, design for the object. An important point is: We seem to agree that there can be two similar, even practically identical objects, that have different origins: one is designed, the other is the result of contingency. I give two examples: a simple square designed by a child, and a similar simple square that we can observe in some stone wall as the result of accidental events. Or: the sequence "word" written by me here, or the same sequence found among 400000 4-letter sequences generated by a random sequence generator software. In both cases, the formal properties of the two objects are the same. So, why do we say that one of them is designed, and that the other one is the result of contingency? Because we know directly how the two objects were generated. For example, we saw the child drawing the square, or we witnessed the events that generated the random square. I know that I wrote the word "word" here, and you will probably infer that it is designed from the general context, because you have good reasons to know that I am a conscious agent (at least I hope). Or we can be the authors of the software that generates the random sequences among which we find the "word" sequence. That confirms what I have always said: the only true meaning of "design" is: a process where the configuration that is outputted to the object comes from conscious representations in a conscious agent, that imply understanding of meaning and the feeling of purpose. I hate to be repetitive, but there is no other definition of design that works. And we cannot discuss something of which we have no clear definition, especially if our debate is about how we can infer that something. So, simple designed things can be correctly classified as designed only if we have direct, or indirect, evidence of the design process itself. IOWs, if we know in some way that a conscious agent was directly involved in the process as the source of the configuration we observe. 
In all other cases, if we have to infer design from the properties of the observed object, and nothing else, then we have to rely only on the complexity of the information linked to the design. IOWs, only complex design can be inferred from the object itself, without any extra knowledge about the process. That's why we need a specification, and a computation of the complexity. Not to define design, but only to detect it from the object, because that is possible only when the design is complex. gpuccio
Eric - A couple of quick points. I agree with you that, at least using the present concepts, the meaning itself of something cannot be calculated. What we are calculating is the degree of independent warrant that there is meaning. As to probability, I don't doubt that improbabilities are what often give us the sense of "hey, we need to look at this - this doesn't exist in my book of immediate answers". However, it is not improbabilities that give us the sense of design. It is, instead, the relationship of structure to function. Now, probabilities can and do help distinguish cases where the structure/function relationship is just happenstance from cases where it is not, but I think that, on the whole, we use probabilities to find surprise and structure/function to find design. The combination of those two allows us to find design in surprising places, and justifies the inference where it is warranted. johnnyb
johnnyb: Another quick question: I absolutely agree with you that the original, somewhat intuitive inference to design in biology is the default and should be considered seriously, absent a showing of some realistic design substitute (which has never been forthcoming). I wonder, however, whether this can be completely divorced from the concept of probability.
This plan indicated a clear teleology – that the organism did things that were *for* something. These organisms exhibited a unity of being. This is evidence of design. It has no reference to probabilities or improbabilities of any mechanism. It is just evidence on its own.
I agree that there is a primary value to the idea of function. But is the probability concept completely absent? True, when we look at something in our everyday life and determine design we aren't doing so on the basis of a detailed mathematical calculation. But is there an intuitive sense of probability that comes into play? Based on our experience gained across thousands upon thousands of examples every day? If I stumble upon the proverbial watch lying on a heath, or upon a digital code stored in DNA, isn't one of my very first impressions along the lines of "Hey, that is unusual!" Isn't it often the case that one of the things that causes us to pay attention and consider design -- not conclude design ultimately, mind you, but an initial flag for consideration -- is the fact that we are dealing with something unusual and unexpected, under purely natural forces? And can this be considered a kind of intuitive probability assessment -- quick, and uncalculated, and in need of additional refinement and analysis as it may be? Just thinking out loud here for a bit . . . Let me know your thoughts. Eric Anderson
gpuccio: My guess is that we are largely in agreement. Let me flesh it out just a bit more. I agree that the existence of a specification is essentially a binary issue. Either we have a specification or we don't. What I'm driving at is that we cannot simply throw an algorithm at a situation to determine if we have a specification. And, by definition, we cannot therefore use mathematics alone to determine whether we have CSI. We can quantify the complexity of the instantiation of a specification in matter or in a coded language. That is the "C" part of "CSI". For example, I can calculate the complexity of the phrase "I love you" given certain parameters about the frequency of English characters and so on. But I am not calculating some objective, unchanging value of "I love you" in any meaningful way. Rather, I am calculating the complexity of the string required to represent the specification, given certain English character frequencies, etc. The same specification could be given with the words "Te amo" and then we could run a complexity calculation based on Spanish. Or, we could run a complexity calculation based on "yIl uo voe" in English and would come up with the same mathematical result as we had with "I love you"; yet no specification would be present. In either case, the underlying specification -- the meaning or function -- must be understood outside of the math. And it is not reducible to math. Thus, while we can calculate the complexity of a particular representation, we cannot pin a definitive numerical value to the specification itself. It simply isn't a mathematical quality that is amenable to pure numerical calculation. Eric Anderson
Eric Anderson: I understand what you say, and in principle I agree, but with an important distinction: I would not say that specification is not a quantitative factor. It is a categorical variable, a binary one, and as such it is treated as we treat all categorical and binary variables in statistical analysis. As I always say, specification, in all its possible forms, generates a binary partition in the set of the search space. That means that we can count the objects in the search space that are included in the target subset. There is no doubt that different forms of specification can be given. My definition of functional specification is based on some explicit definition of function. johnnyb, in the paragraph you quote, mentions holistic unity and being simpler, which is nearer to Dembski's views. But in the end, whatever the rule we use to specify, the final result is similar: we generate a binary partition in the search space, and we compute the probability of finding the target space by a random search, and therefore the complexity of that specification. If the result of that computation can be empirically shown to be effective in inferring design with extremely high specificity, the procedure is empirically valid. The important point is: specification is indeed a qualitative variable, but like all qualitative variables, if we want to use it in a quantitative context, like a statistical analysis, the variable must be objectively defined and it must be possible to objectively and unequivocally assign each object in the search space to one of the two subsets. IOWs, to count frequencies in the two qualitatively defined subsets, which is definitely a quantitative measure. That's why, if I define a function as the specification, I must be explicit and unambiguous, and I must also provide explicit rules to measure the level of the function, and a definite threshold of level to evaluate it in binary form. gpuccio
johnnyb: Good thoughts, as always. You've done a lot of great work in helping explain and promote intelligent design. Just one minor quibble, if I may:
As a case in point, CSI is based on the fact that designed things have a holistic unity. Thus, they follow a specification that is simpler than their overall arrangement. CSI is the quest to quantify this point.
I don't believe CSI is quantifiable. "C" yes. "SI" not so much. Yes, CSI serves as a way to make concrete and explicit the evidence of design. Thus, we avoid false positives (one of the other common critiques of the original intuitive design inference). In this way, then, CSI allows us to make a positive identification of design -- one that is wholly objective, even scientific. So CSI allows us to identify design. Partly through a complexity calculation; partly through a recognition of a specification. But CSI itself is not reducible to a pure mathematical calculation. Eric Anderson
Dionisio: Thank you for your contributions! :) Yes, Bob O'H has been a very good interlocutor. I hope he has other interesting things to say, if he can find the time. gpuccio
gpuccio @95: Excellent answer to the question @93! Thank you. BTW, not that I miss them, but I noticed that Bob O'H has been the only politely dissenting interlocutor in this interesting discussion thread. However, the discussion has been very positive. I've learned much from reading the posts here. Do they have this kind of serious technical discussion on other sites too? Just curious. PS. To all anonymous readers, please read the insightful explanation @95 very carefully. It's really fundamental and very juicy. :) Dionisio
GP Correct - what I meant was "eliminate chance and necessity and you have design". So, we might say that ID is a probability measure that sets boundaries beyond which chance and necessity cannot produce the observed results. Silver Asiatic
Dionisio: Good question! I like very much Abel's concept of prescriptive and descriptive information. I have no idea if the concept originates with him, or if he takes it from someone else. However, it is a great concept. I would say that descriptive information conveys a meaning, while prescriptive information implements a function. So, a sonnet is descriptive information, while a software or a machine is prescriptive information. There is no great difference from the point of view of complexity and design inference. Moreover, descriptive information can always be transformed into functional information by defining the function as "the ability to convey such a meaning", but I find the procedure a little artificial. However, I have used that approach in my OP about language. Of course, for design inference in biology we are mainly interested in prescriptive information. There is an important difference. While both types of information require a conscious observer to define the function or meaning that specify, descriptive information conveys its content only if and when there is another conscious observer at the receiving end, because only conscious observers can understand meaning. So, in the absence of conscious observers, a sonnet is "dormant" and does practically nothing. On the contrary, a machine can be built by a conscious designer, and then it will operate even in complete absence of any conscious observer: an enzyme is very active in catalyzing its reaction, even if nobody is aware of that. Of course, a conscious observer is always necessary to recognize what the machine has been doing: but there is a difference, because the machine changes things objectively, while a sonnet changes things only when its meaning is understood. So, in a sense, we could say that prescriptive information is more "objective" than descriptive information. gpuccio
Silver Asiatic at #91: "Yes, but knowing that, there is no real need for measures of dFSCI. We observe something that is impossible for necessity and chance to produce, thus infer design. To then ask "why did we infer design?", we wouldn't say because it has more than 500 bits of dFSCI." No, why do you say that? Remember, what we are discussing here is big systems with many random events occurring, like genomes that change through billions of years. Even if we are certain that no law of necessity can generate functional proteins from nucleotide sequences that are subject to random variation, we still have to exclude contingency. As I have said, the biological probabilistic resources of our planet and natural history can grossly be set at about 120 bits (IOWs, about 2^120 different configurations can be tested). So, if a very simple protein has, say, 30 bits of functional information, how can we exclude that it came into existence by random mutations? We can't. It's the same reason why we cannot infer design for the sequence "word" found among 400000 randomly generated 4-character sequences. It could well be the result of contingency. Not so with a Shakespeare sonnet: if we find such a sonnet among 400000 randomly generated sequences of characters of the same length, we can safely infer design for it. And if someone asks: "Why did you infer design?", I would definitely answer: "Because it has more than 500 bits of dFSI." Because it's the truth! When we immediately recognize something as certainly designed, without any doubt, even if we have no direct knowledge of its origin, that's the reason: we know that it is complex enough to be beyond contingency, and that no law of necessity can be related to that kind of thing. Even if we don't compute the dFSI, we nevertheless understand that it is extremely high. Of course, if we do science, we have to formally specify what usually is only an intuition. Therefore, if we do science, we have to measure dFSI, to decide a threshold, and so on. We have to make our simple intuition quantitative and shareable. That's what ID theory is about: it demonstrates that there is an objective, shareable way to infer design scientifically. gpuccio
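Rough numbers for the two cases in that comment, as a sketch (the 26-letter alphabet and the "at least 800 bits" sonnet figure quoted earlier in the thread are the assumptions here):

import math

draws = 400_000

# Case 1: the 4-letter string "word" among randomly generated 4-letter sequences.
p_word = 1 / 26 ** 4
print(draws * p_word)             # ~0.88 expected hits: contingency explains it

# Case 2: a full sonnet, with dFSI of at least 800 bits (target/search <= 2^-800).
log2_expected_sonnets = math.log2(draws) - 800
print(log2_expected_sonnets)      # ~ -781: expected hits around 2^-781, far beyond contingency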
gpuccio: In this relatively old paper by D.L. Abel: http://www.mdpi.com/2075-1729/2/1/106/htm they mention "Prescriptive Information" (PI). How does PI relate to dFSCI and CSI? Dionisio
Silver Asiatic: "The same with a snowflake. If a certain ice flow was studied for design, the natural process of snow becoming ice, (thus snowflakes) would have to be analyzed to see if they could form the structure in question." "In the example I gave, an ice sculpture, the snowflake has the function of “transporting water through the atmosphere”. The snowflake lands on things, has a ‘sticky’ quality, and can form various objects (snow drifts, etc) which melt and become sculpture-like objects." I am not sure that I understand what you mean here. Are you defining a function for a snowflake? For snow? Are you trying to define a search space and a target space? What is the system, what is exactly the object? Let's go to the Pollock example: "In a Jackson Pollack type of painting, there are paint drops – blotches, on a surface. You go somewhere – an old garage – and see a surface with paint blotches. Was it designed or just a random accident? I would think the paint would have some functional definition. Something to put in the equation." If we are discussing abstract, informal paintings, I think that it is very difficult to distinguish them from random images, and therefore infer design, scientifically. I am not criticizing that kind of art! :) Indeed, I like very much Pollock and many other painters of the same kind. The point is, when we infer design we need some objectively defined function and a way to measure it. The beauty in abstract paintings (or even in formal paintings) is certainly there, but I am not aware of any explicit and objective way to define it, least of all to measure it. I am certain that beauty is one of the properties of good design, but unfortunately it is at present too elusive for scientific definitions. So, a painting that is beautiful, but does not represent anything formally recognizable, like this: http://www.jackson-pollock.org/images/paintings/convergence.jpg is a beautiful thing, but not a good object for which we can scientifically infer design. From that point of view, I am afraid it would remain a false negative. On the contrary, a painting like this: http://www.minhanhart.com/upload/product/86205824640.jpg can be easily recognized as designed, for the precision with which it reproduces known objects by oil colors. Just look at these four drawings: http://hyperallergic.com/wp-content/uploads/2014/08/Childrens-Drawings_Kings-College-London.jpg All of them are designed by children, but only the last one, maybe, could warrant some design inference. Perhaps. gpuccio
GP
The point is, when you have to write a computer program to do something, or to find a protein sequence that can work as an enzyme that you need, you cannot hope that some law of necessity does that for you. It’s impossible.
Yes, but knowing that, there is no real need for measures of dFSCI. We observe something that is impossible for necessity and chance to produce, thus infer design. To then ask "why did we infer design?", we wouldn't say because it has more than 500 bits of dFSCI. Silver Asiatic
gpuccio @88: You mentioned Abel's "configurable switches" concept, which I don't recall seeing before, though it was posted on this site 5 years ago: https://uncommondesc.wpengine.com/design-inference/the-first-gene-the-cybernetic-cut-and-configurable-switch-bridge/ Very interesting indeed. Thank you. Dionisio
gpuccio @86:
A high threshold, like 500 bits, is linked to many false negatives, but guarantees practically zero false positives.
I had it totally wrong. Thank you for correcting my misunderstanding. Dionisio
Origenes and Silver Asiatic: When Dembski developed his model of the explanatory filter, he was well aware that some forms of specification could generate confusion. Let's restate for a moment my general definition of "specification":

Specification is any rule that generates a binary partition in a well-defined set of objects (the search space), so that a subset of the search space can be identified according to that rule (the target space).

OK, that is specification in general. The information measured by comparing the target space to the search space is called CSI. But it is possible to define a more specific kind of specification, a subset of possible specifications:

A functional specification is a rule that generates a binary partition in a well-defined set of objects (the search space), so that a subset of the search space can be identified according to that rule (the target space). The rule must be the definition of a function to be implemented, explicitly defined, including a definite level of the function itself and a method to measure the presence or absence of the function in objects (IOWs, the ability of the object to implement the function at the defined level).

OK, that is functional specification. The information measured by comparing the target space to the search space is called FSI; dFSI if we stick to digital forms of information; dFSCI if we express it in binary form (yes or no).

Now, I believe that if we use only functional specification, the problem of laws of necessity will not arise. Indeed, laws of necessity cannot really generate high levels of functional specification. That's why there is no law of necessity that can generate language or software, or semiotic codes, or paintings, and so on. For the same reason, we cannot define any function for snowflakes that has high complexity.

But functional specification is not the only way to generate a binary partition. There are other kinds of specification. For example, pre-specification of some specific sequence is a form of specification too. But the main kind of specification that is different from functional specification is specification based on order and regularities. That's where the problem of necessity laws becomes important.

Let's look at a very simple example that has been discussed many times on this blog. We have a system where a coin is tossed 10000 times, and the results are recorded. With some surprise, we observe that the result is 10000 heads. Our null hypothesis is that the coin is fair, so we should have approximately 50% heads and 50% tails. So, we reject the null hypothesis that the result is consistent with a random system with a uniform distribution of probabilities, because in that case the result we observe would be too unlikely (definitely beyond any threshold).

But, as we know from the theory of hypothesis testing, rejecting the null hypothesis does not automatically support a definite explanation. So, we must look at all possible alternative explanations. Let's say that, after serious reasoning, we conclude that only two explanations are credible:

1) The coin is fair, but in some way the result has been intentionally manipulated by someone (for example, by some magnetic field that is activated on purpose to force the coin to a head result each time). IOWs, we infer design for the result, even if it is probably deceitful design.

2) The coin is not fair, for example because of some error in its construction, so that when we toss it the laws of gravity make it always fall in the head position.
In this case, there is no intentional design in the result. So, before we infer that the result is faked, we have to be sure that the coin is fair. IOWs, we must exclude that the observed result is caused by known laws of necessity operating in the system. Dembski was well aware of that problem, and that's why in the explanatory filter, often quoted here by KF, we have not only to demonstrate that the observed result is too unlikely as a result of contingency, but also that it is not the result of laws of necessity. So, in my definition I am only restating Dembski's basic and brilliant intuitions. The only difference is: the problem of necessity really arises only if we use order as a way to specify. Why? Because order and regularities can have a double origin: while they are never the result of contingency, at least at high levels of order and regularity (contingency inevitably tends to disorder), they can be the result of intentionality (design) or of some law of necessity. So, if we observe order, and we think that it could be the result of intentionality (design), we must be sure that we have done all that is possible to exclude a regularity generated by known laws operating in the system. So, considering possible explanations based on necessity remains an important methodological step in any design inference. That has nothing to do with measuring CSI: in the case of the coin, we can measure the improbability of the result, as the ratio of the target space to the search space, and that improbability is extremely high, of the order of 10000 bits. That certainly excludes contingency as an explanation. But, if the result can be explained by an unfair coin that can only fall in the head position, what is the utility of excluding contingency? None at all. But the important point is: there must be some law of necessity which explains, or at least has the potential to explain, the observed result. In the case of software, proteins, and other forms of digital prescriptive information, and also in digital descriptive information like language, such laws do not exist. The point is, when you have to write a computer program to do something, or to find a protein sequence that can work as an enzyme that you need, you cannot hope that some law of necessity does that for you. It's impossible. That kind of information, functional information, is the result of a great number of intentional choices. The information is only the sum of a number of what Abel calls "configurable switches": objects that can exist indifferently in at least two different configurations, so that the specific configuration can be set by the designer to get a final result. Laws of necessity cannot connect some configuration to a complex functional result, because the functional configuration is not based on regularity. Indeed, it is based on understanding the needs generated by the function to be implemented. That's why functional sequences are "pseudo-random" (even if, of course, they can imply some low level of regularity). So, when we deal with functional information, it's usually very easy to exclude an explanation based on laws of necessity: if the information that we observe is not defined by regularities, an explanation based on necessity is simply impossible. However, if we become aware of some credible explanation of that kind, we have always the duty to carefully consider it. Science is the field of best explanations and of inferences, not of absolute truths. 
Going back to our snowflake, we can say that there are two different reasons why we cannot infer design for it: 1) We can define no complex function for it, so no inference based on dFSCI is possible. 2) Even if we try to specify the snowflake for its regularities, there are well understood explanations for those regularities, based on well known laws operating in the system where snowflakes are generated. So, no design inference is possible, even if using a more general concept of CSI. Again, these concepts are already fully explicit in Dembski's explanatory filter. I have only commented them from the particular perspective of functional specification. gpuccio
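The coin-tossing arithmetic in the comment above can be made concrete with a small sketch. This is my own illustration, not part of the original comment: it shows why chance is rejected for 10,000 heads, and why a fully biased coin, a "law of necessity" in this toy system, would still explain the same result.

```python
import math

# Under the fair-coin null hypothesis, each exact 10,000-toss sequence has
# probability 2^-10000, so an all-heads run carries 10,000 bits of improbability.

def surprisal_bits(p_heads: float, n_tosses: int) -> float:
    """Bits carried by an all-heads run of n_tosses, given P(heads) per toss."""
    return -n_tosses * math.log2(p_heads)

print(surprisal_bits(0.5, 10_000))  # 10000.0 -> the chance hypothesis is rejected
print(surprisal_bits(1.0, 10_000))  # -0.0    -> a fully biased coin makes the run certain
```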
Origenes
Again, I do not understand why ‘being understood from law and randomness’ is relevant. If law and randomness creates stuff which contains over 500 bits of dFSCI, like Shakespearean sonnets, clocks, computers and so forth, and it is well understood, then there is something very wrong with the design inference. You cannot say: X may contain well over 500 bits of information, but I refuse to infer design for X because its origin is well understood by law and randomness. That’s discrimination!
Yes, that's the point I was trying to get at also. Silver Asiatic
Dionisio: No, the point is exactly the opposite. A high threshold, like 500 bits, is linked to many false negatives, but guarantees practically zero false positives. I am going to answer Origenes' points as soon as I find a few minutes... gpuccio
Origenes
According to GPuccio’s method, in order to test a snowflake for design we need to come up with a function for the snowflake — as in: the function of a snowflake is ….. The weary function “paper weight” is out, I suppose. Do you have a suggestion? A snowflake supports which top-level function?
In the example I gave, an ice sculpture, the snowflake has the function of "transporting water through the atmosphere". The snowflake lands on things, has a 'sticky' quality, and can form various objects (snow drifts, etc) which melt and become sculpture-like objects.
Ink-molecules are small things that contribute to letters, which in turn contribute to words, which in turn contribute to sentences and so forth. We see a complete alignment of low-level functions in support of a top-level function (the expression of a thought).
True. If you saw something that looked like a single word, or perhaps even two words, formed from ink on a surface, you could analyse it to determine if it was designed or just an ink spill. The other example I gave was more difficult. In a Jackson Pollock type of painting, there are paint drops - blotches - on a surface. You go somewhere - an old garage - and see a surface with paint blotches. Was it designed or just a random accident? I would think the paint would have some functional definition. Something to put in the equation. Silver Asiatic
Origenes @81: Let's wait for gpuccio to answer your questions, but perhaps what you quoted has to do with something else gpuccio wrote before about the 500-bit (or his own 150-bit) threshold allowing for false positives, but no false negatives? (assuming I got that right?) It seems like you're referring to a "false positive" case? Let's see what gpuccio has to say about this. What would be a "false positive" in this case? Over the 500-bit (or GP's 150-bit) threshold but not qualifying as "designed"? What would be a "false negative" in this case? Under GP's 150-bit limit but still considered "designed"? Can someone verify this for me? Thank you. Dionisio
gpuccio: Excellent! Thank you. Dionisio
Dionisio: "The beautiful snowflake shapes seem to be a byproduct of physical processes. One could argue that the laws of physics were designed, but not the snowflake shapes" Exactly! "However, can we infer design for the protein Ndrg4 referenced in this paper?" Well, let's try to apply my simple homology based method, and to follow the "information trail". 1) Ndrg4 is a protein 352 AAs long in the human form. 2) In vertebrates, it shows very high conservation: 645 bits of hmology between cartilaginous fishes (callorhincus milii) and humans. That, in itself, is well above the 500 bits threshold. 3) Its "information trail" shows also an important information jump between pre-vertebrates and vertebrates: 399 bits of difference in homology to the human form, between the best non vertebrate hit (Crassostrea gigas, 246 bits, and callorhincus milii, 645 bits). That means that about 400 bits of new original functional information have been generated in the protein in its vertebrate form. That is not above the 500 bit threshold, but is well above the threshold that I have suggested for any biological event, that is 150 bits (see my post #72). Therefore, applying the simple methods that I have described in my OPs and in my posts here, I would definitely infer design for the Ndrg4 protein, in particular in its vertebrate form. gpuccio
GPuccio: Just to be clear: A snowflake is not an object for which we can infer design.
GPuccio, in another thread you offer two distinct reasons as to why some things do not allow a design inference:
GPuccio: So, I mention 4 objects or systems which do not allow a design inference because they do not exhibit dFSCI: The disposition of the sand in a beach: easily explained as random, no special function observable which requires a highly specific configuration. Can you suggest any? I can’t see any, therefore I do not infer design. The pattern of the drops of rain. Same as before. [my emphasis]
IOWs, the disposition of the sand at a beach and the pattern of rain drops do not contribute to some top-level function. Neither is part of a functional system. They are not relative to a function, like gears are to a clock-function or letters are to the function of expressing a thought. Okay, that's a clear reason.
GPuccio: A glacier: this is less random, but it can be easily explained, as far as I know, by well understood natural laws, with some random components. I am not an expert, but I cannot see in a glacier any special configuration which has a highly specific complexity which is not dependent on well understood natural laws. Therefore, I do not infer design.
How is being understood by natural laws relevant?
GPuccio: The snowflake I have added because it is an example of ordered pattern which could suggest design, but again the configuration is algorithmic, and its origin from law and randomness very well understood. No dFSCI here, too.
Again, I do not understand why 'being understood from law and randomness' is relevant. If law and randomness create stuff which contains over 500 bits of dFSCI, like Shakespearean sonnets, clocks, computers and so forth, and it is well understood, then there is something very wrong with the design inference. You cannot say: X may contain well over 500 bits of information, but I refuse to infer design for X because its origin is well understood by law and randomness. That's discrimination! :) Origenes
gpuccio, The beautiful snowflake shapes seem to be a byproduct of physical processes. One could argue that the laws of physics were designed, but not the snowflake shapes. :) However, can we infer design for the protein Ndrg4 referenced in this paper?
Neuronal Ndrg4 Is Essential for Nodes of Ranvier Organization in Zebrafish. Laura Fontenas, Flavia De Santis, Vincenzo Di Donato, Cindy Degerny, Béatrice Chambraud, Filippo Del Bene, Marcel Tawk. PLOS Genetics. http://dx.doi.org/10.1371/journal.pgen.1006459
Dionisio
Just to be clear: A snowflake is not an object for which we can infer design. gpuccio
Silver Asiatic: … in a biological system, many small things contribute to a bigger function. So, if the small things can be explained by non-design, then the bigger function can be, supposedly, also.
Ink-molecules are small things that contribute to letters, which in turn contribute to words, which in turn contribute to sentences and so forth. We see a complete alignment of low-level functions in support of a top-level function (the expression of a thought). However, the notion that ink-molecules are non-designed doesn’t validly lead us to the conclusion that a sonnet by Shakespeare is also non-designed.
Silver Asiatic: The same with a snowflake. If a certain ice flow was studied for design, the natural process of snow becoming ice, (thus snowflakes) would have to be analyzed to see if they could form the structure in question.
According to GPuccio’s method, in order to test a snowflake for design we need to come up with a function for the snowflake — as in: the function of a snowflake is ….. The weary function “paper weight” is out, I suppose. Do you have a suggestion? A snowflake supports which top-level function?
Silver Asiatic: So, the snowflake would have a functional attribute.
Which one? Origenes
Origenes, I think (and I may be totally wrong) that in a biological system, many small things contribute to a bigger function. So, if the small things can be explained by non-design, then the bigger function can be, supposedly, also. The same with a snowflake. If a certain ice flow was studied for design, the natural process of snow becoming ice (and thus snowflakes) would have to be analyzed to see if they could form the structure in question. So, the snowflake would have a functional attribute. As for the question of "non-designed", if the thing can be formed by natural processes alone, then it doesn't fit the design criteria. We might say "it appears not to have been designed". If the thing cannot be formed by natural processes, we still don't necessarily know that it was designed, but design is a better explanation, since we know design could do it, and we have not seen that natural processes could. That's my best guess at an answer anyway! Silver Asiatic
Is it just me or is it hard to come up with a function for a snowflake? Origenes
Oops: "Do you think a snowflake is "not designed"?" Sorry PaV
Bob O'H:
If it's so easy from your side, then just do it! You don't need me: if you can demonstrate that your design detector works on a wide range of objects, you'll make a lot of people happy, including those in the ID community.
Bob, for some reason you don't seem to want to cooperate here. The impression I have is that you feel that in stating something is "not-designed," you will have given away the store. I'm just trying to get a starting point. With that said, however, I have an idea of what kind of object you may select, and I think I have an argument in favor of ID. Actually I presented the argument over two years ago, and I plan to link to it: that's what makes it so easy! But if you're really interested in ID coming up with an argument that is effective, then I don't think I'm asking too much by simply asking you to name something that is 'not-designed.' For example: do you think a snowflake is "designed"? PaV
gpuccio @72: Thank you for such a detailed and well-illustrated explanation. It's more "textbook" material for future reference. Your examples made it easier to understand this otherwise difficult subject. By now Bob O'H should have understood exactly what you meant, and maybe even agreed with you. Maybe... :) Dionisio
Bob O'H: Sorry to steal your time in the midst of important personal duties! :) You raise very good points. Of course we can make the definition less specific. That will usually lower the value of dFSI linked to the definition, because the target space becomes bigger. That's not a problem, because what we need to infer design is at least one function that can be defined so that its specific functional information is higher than, say, 500 bits, and that is implemented by the observed object. It's not important if we can define less specific functions, that may or may not have dFSI values above the threshold. For example, in my OP about functional information, I make the example of a notebook, which can be used both as a paperweight (a very simple function, with very low dFSI), and as a notebook (a very complex function, certainly with dFSI value well above 500 bits). Of course, we can infer design for the notebook using the more complex functional definition. In the case of ATP synthase, why define a protein that can build ATP from a proton gradient? Well, because the protein we observe can do exactly that! Of course, it can be possible to synthesize ATP in other ways. So, if we define a protein that can synthesize ATP, the target space will be bigger, and the dFSI lower. But what is needed in the cell, in that specific context, is a protein that transforms energy from a proton gradient to ATP. That functional need is the basis for a highly sophisticated engineering of the protein, and for very specific solutions. Let's say that we can make a car with a petrol engine and with an electric engine. The two machines will be different, at least in many engine parts. But of course, the functional solutions in a petrol engine are absolutely necessary to the working of the petrol engine. They are part of the design of the engine. So, we can well define the function of the engine as "a machine that can draw mechanical energy from petrol, and use it to operate a car", even if other engineering solutions exist. The functional complexity necessary to derive energy from petrol will be a very good result to infer design for the petrol engine. So, the idea is that we can define the most complex function that can be specifically implemented by the object. The important point is that the function definition is really independent from the specific information in the object (for example the specific digital sequence that implements the function). The definition of the function is similar to a question that we ask, and to which some engineer must provide the answer. So, in the case of ATP synthase, the question could be: Well, Mr. engineer, here we have a lot of energy in the form of proton gradient. But what we really need is energy in the form of ATP molecules, because that is a form of energy that we can use where we like, and as we like. So, what can you do to give us ATP from the proton gradient, possibly with good speed and efficiency? Well, ATP synthase is exactly the answer. A very good answer. Let's go to the problem of sensitivity - specificity tradeoff. Well, I would say that we are in a very special situation here, one that makes the choices about that classical tradeoff very easy: we need specificity, and we can happily renounce sensitivity. Why? Because our purpose, in the context of our debate, is to identify at least some biological objects that are unequivocally designed. Not to identify most of the biological objects that are designed. That's why the threshold must be set very high. 
500 bits is the UPB. For biological objects, I have suggested 150 bits. That threshold can still guarantee about 5 sigma, if we take into account the probabilistic resources of the biological world on our planet, which can be roughly evaluated at 120 bits. 5 sigma is about 22 bits. So, 150 bits is a rather safe threshold for the biological world. But even a very restrictive threshold, such as 1000 bits, would still allow us to infer design for many biological objects, first of all the alpha and beta chains of ATP synthase (at least 1200 bits of functional information).

It's interesting to note that we can never eliminate false negatives in the design inference. As I have said, there are many designed things that are really simple, and they can be indistinguishable from non-designed things that can implement the same simple functional definition. So, there will always be designed objects for which we can never infer design. But we can reduce false positives to a practical zero, if we set the threshold high enough. So, if we infer design only when we can compute a dFSI of at least 1000 bits, for example, we can be really sure that we will not have false positives.

Let's go to my example of the paragraph. You are right, that was just a quick example where I have considered the target space as made of one sequence. In some cases, that can be appropriate. For example, in my mental experiment of the carving of the wall, the target space is made of one sequence, because only one sequence corresponds to the exact decimal sequence of pi. But in most real cases, the target space is bigger. And measuring the target space is the really difficult point in evaluating dFSI. But in many cases, it can be done, usually by approximation.

I invite you to read my OP about language, here: https://uncommondesc.wpengine.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/ where I reach the interesting result that a Shakespeare sonnet exhibits at least 800+ bits of specific functional information, using the very conservative definition: "Any sequence of characters of about the same length that is made only of English words". You may notice that such a definition includes no requirement that the sequence must have some good meaning in English, least of all that it must be poetry, or a sonnet, least of all that it must be one of Shakespeare's sonnets, least of all that it must be that specific sonnet. But, even so, 800 bits of dFSI are guaranteed, and a design inference can be safely made.

In the case of protein chains, the approximation that I use is based on the concepts expressed by Durston in his fundamental paper: https://tbiomed.biomedcentral.com/articles/10.1186/1742-4682-4-47 I usually refer to the homology measured in bits by the BLAST software in the comparison between two evolutionarily distant sequences of the same protein, as a credible measure of its minimum dFSI. For example, in the case of the two mentioned chains of ATP synthase, I have given an approximate value of functional information (for both) of about 1219 bits. Please note that the complexity of the search space, for 973 AAs, is about 4205 bits. That means that my computation is setting the target space at about 2986 bits, which, IMO, is probably a gross overestimation. But, just to be on the safe side...

So, ID definitely cares about the size of the target space. That concept has always been very clear, right from the first definitions of CSI by Dembski.
ID cares about both the size of the target space and the probabilistic resources of the system. And ID reasoning very often approximates these things extremely conservatively, against the interests of the theory itself. Why? Because, even with such a self-imposed handicap, we can still infer design quite easily for many biological objects. gpuccio
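The numbers used in the comment above can be checked with a short sketch. This is my own arithmetic, assuming a 20-letter amino-acid alphabet; the 973-AA, 1219-bit, 120-bit and 5-sigma figures are the ones quoted in the comment.

```python
import math

# Search-space complexity of a 973-AA sequence over a 20-letter alphabet.
search_space_bits = 973 * math.log2(20)
print(round(search_space_bits))           # ~4205 bits

# "5 sigma" expressed in bits: -log2 of the one-tailed tail probability.
p_5sigma = 0.5 * math.erfc(5 / math.sqrt(2))
print(round(-math.log2(p_5sigma), 1))     # ~21.7 bits, i.e. "about 22 bits"

# With ~120 bits of probabilistic resources, a 150-bit threshold leaves a
# ~30-bit margin, comfortably more than the ~22 bits needed for 5 sigma.
print(150 - 120)                          # 30

# Treating the 1219-bit BLAST score as a lower bound for dFSI implies a
# target space of no more than about 2^(4205 - 1219) sequences.
print(round(search_space_bits - 1219))    # ~2986 bits
```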
Bob O'H
The “replication” was referencing trying the design detection on “several designed and non-designed objects” (and in particular that it’s done on several).
That seems like a very good idea. Why not? Of course there will be tricky situations, with non-designed things looking designed or designed things measured as non-designed, but it certainly would be interesting to test. Some sort of blind-testing in a neutral lab environment would be a big help to ID, as I see it. We could measure effectiveness and then tweak the model to improve. Silver Asiatic
Sorry for the patchy replies: I'm afraid the patchiness will probably increase, as I'm in the middle of relocating to Norway, so I have other things on my mind (like trying to move our flock). gpuccio @ 60 -
So, let’s say that our object is ATP synthase (please, if you want have a look at my post #55. which makes some important points). There, I have used the following definition in the discussion: a machine that generates high energy ATP molecules using the energy derived from a proton gradient, in an appropriate membrane system This is rather generic, but for most purposes it would be fine. It is certainly good enough to infer design for that molecule. I could make the definition more specific, by fixing a threshold of efficiency, like “at least 100 micromoles of ATP per second per mg protein”. That would probably include most instances of the molecule, maybe not all, but what’s the point?
You could also make the definition less specific - why a proton gradient? Why does it have to be in a membrane? I think you'll have these sorts of choices to make in almost every specification of a function. That's OK as long as the guidelines are clear (and you can always do sensitivity analyses to see if different definitions lead to different answers). But guidelines are important: it is not always obvious how to extrapolate from examples. One reason I'm pushing for you to present positive evidence that your design detector works is that I'm sure that if you try it in practice, you'll find there are problems that need to be solved (don't worry, this is usual in the development of methods). This leads me on to 68...
Let’s take for example the first paragraph in this post: “Functional information in itself is not necessarily linked to design. It only means that some object can be used for some function: the specific configuration in the object that allows to use it for that function is the functional information linked to that function.” that (I hope) conveys some definite meaning in (I hope) good english. The specific information for that specific sequence is about 300 bits. Believe me, no randomly generated series of sequences will ever include that outcome.
How did you specify the function of that paragraph? And then how did you decide the size of the space that contained the specification? You seem to be specifying it by saying the paragraph has to be exactly the same (I may be wrong here, so my apologies if so). PaV @ 65 - If it's so easy from your side, then just do it! You don't need me: if you can demonstrate that your design detector works on a wide range of objects, you'll make a lot of people happy, including those in the ID community. The "replication" referred to trying the design detection on "several designed and non-designed objects" (and in particular that it's done on several). My apologies if this was not clear. Bob O'H
gpuccio: What I wrote @68 also applies to the comments @60, 61, 63, 64. Understanding requires the will to understand. Dionisio
gpuccio: "that (I hope) conveys some definite meaning in (I hope) good english." The entire comment @67 conveys very important meaning in good language. Thank you. However, there's no guarantee that your politely dissenting interlocutors will understand it, much less accept it. Sorry. Dionisio
Silver Asiatic: Functional information in itself is not necessarily linked to design. It only means that some object can be used for some function: the specific configuration in the object that allows to use it for that function is the functional information linked to that function.

For example, a stone can be used as a paperweight if it is in some range of weight, form, and so on. In that kind of function, however, the specific information necessary to implement the function is always relatively low. IOWs, in random systems many simple configurations exist that can implement simple functions. The reason is simple: if only a few bits of information are necessary to implement a function, it will be easy to find those configurations in the search space. Another way to say that is that the target space is big enough, in relation to the search space, to be found by a random search.

For example, short English words will be found easily in random sequences. Do you think that it is so difficult to find the sequence "word" in some set of randomly generated four-letter sequences? There is about a 1 in 390000 probability of finding it, less than 19 bits. In a system with enough probabilistic resources, that is a quite likely outcome. So, if we found the sequence "word" in a series of 400000 randomly generated 4-letter sequences, that is no evidence that someone designed (wrote) that word. The sequence has almost 19 bits of functional information, but it was not designed by anyone. IOWs, a threshold of 19 bits is not appropriate to infer design in a system.

But what if the specific information is higher? Let's take for example the first paragraph in this post:

"Functional information in itself is not necessarily linked to design. It only means that some object can be used for some function: the specific configuration in the object that allows to use it for that function is the functional information linked to that function."

that (I hope) conveys some definite meaning in (I hope) good English. The specific information for that specific sequence is about 300 bits. Believe me, no randomly generated series of sequences will ever include that outcome. If the paragraph had been just a little longer, we would have been beyond the 500-bit threshold. The design inference is really safe, in that case, even in systems with very high probabilistic resources. The alpha and beta chains of ATP synthase are at least in the range of 1200 bits.

So, the important points are:

1) It is not functional information in itself that is linked to design, but rather complex functional information. Complex functions are only observed in designed objects. Simple functions can be found in many non-designed objects.

2) There is no circularity at all, if we refer to my independent definition of design. Remember, an object is designed if and only if it comes from a design process, where a conscious intelligent agent outputs the information to a material object from a conscious cognitive representation and a conscious purpose. This is the only possible definition of design, and it destroys all circularities, because it is completely independent.

3) Designed objects can be simple or complex. IOWs, they can be used to implement both simple and complex functions.

4) Non-designed objects can be used to implement simple functions.

5) Beyond some appropriate threshold of specific functional information, only designed objects are associated with complex functions.
6) That's why complex functional information can be used to infer design: because it is an empirical indicator of the design origin of an object. Beyond some threshold (500 bits is an appropriate general threshold), dFSCI allows us to infer design with 100% specificity: no false positives, and many false negatives.

7) dFSCI can be used to infer design for any object that exhibits it above an appropriate threshold. It can be used to infer design for human artifacts and for biological objects, which are at present the only two known categories that exhibit dFSCI abundantly and beyond any doubt. If we find artifacts on other planets that exhibit dFSCI, it will be safe to infer design for them.

8) dFSI at low levels cannot be used to infer design, because it is not necessarily an indicator of a design process. gpuccio
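The "word" example from the comment above can be checked with a quick sketch. This is my own illustration, assuming a 26-letter alphabet; the comment's "about 1:390000" figure is of the same order of magnitude.

```python
import math

# Probability of the specific 4-letter sequence "word" over a 26-letter alphabet.
p_word = 1 / 26 ** 4
print(26 ** 4, round(-math.log2(p_word), 1))   # 456976 sequences, ~18.8 bits

# Probability of seeing it at least once among 400,000 random 4-letter sequences.
n_draws = 400_000
p_at_least_once = 1 - (1 - p_word) ** n_draws
print(round(p_at_least_once, 2))               # ~0.58: quite likely, so no design inference

# By contrast, a 500-bit function has probability 2^-500 per draw; even with
# 2^120 draws, the expected number of chance hits is about 2^-380, effectively zero.
```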
GP
No special functional information here.
You clarified this later, but I wonder how Bob O'H would respond. My fear was that this was too close to circularity. Again, the task was not really about identifying design, but rather defining it. So, if we observe a function and then decide that it's highly probable that it was non-designed, then it doesn't have any special functional information? This would mean that functional information was defined by whether it was probable that the thing was non-designed or not. That's the circularity. You said before:
I state again my definition: any observer can define any function. If an object is observed that can implement a function that requires at least 500 bits of information, then we can infer design for the object.
So, I defined a function with the mountain, but you then said that there was no information in that function. The reason for that is that it's probable the rocks (becoming a barrier) were formed by nature. But what about rocks that seem to be more specified? Or something like a log-jam in the river versus a beaver-dam. Are those measurable by functional information? It also seemed like you were saying that the only real dFSI that we have is from functional processes in micro-biology. Would ID research be limited to that scope? Silver Asiatic
Bob O'H @ 57: Bob, let me assure you: the work from our side is easy. But I'm not going to do anything until you and I agree on what is "non-designed." So, please, give just one example. A "rock" will do. Just something. And how much "work" is involved in naming an object? So I don't think you can refuse here. I don't understand what you mean here: . . . the replication is important, because you are claiming to have a general method. I don't need anything elaborate; just give me some sense of how you mean "replication," and whether or not you see this "replication" as being part of a "non-designed" object. PaV
Silver Asiatic: Just to be more clear: There is no special functional information in the formation of mountains, as there is no special functional information in the formation of chemical molecules that are normally formed on a planet like ours. Not so for proteins.

Proteins are not molecules that form spontaneously. They require special systems, biological systems, enzymes, and so on. Now, even assuming that amino acids can form spontaneously in relevant quantities, and that they can form proteins spontaneously in some environments, what we could expect is the presence of proteins formed by a random sequence of amino acids, according to the laws of probability. That's where the concept of functional specification becomes important.

An object like ATP synthase, formed by at least 2000 AAs, has a very strange property: set in a membrane system, it works as a splendid mill-like machine. It uses the energy from the movement of protons through a very specific channel in the molecule to activate a rotor, which deforms a very big part of the molecule where sites for ADP are present, so that ADP is forcibly joined to phosphate to generate ATP, a very special high energy molecule. Now, that's not exactly what any random protein sequence can do.

So, it's perfectly right to wonder how such an object originated. No known biochemical laws can favor its origin, even in an environment where proteins can be generated and can randomly change. The high conservation of the alpha and beta chains throughout natural history tells us that those sequences are highly specific, subject to very strong functional constraint: IOWs, they cannot change much, if the function has to be retained. The value in bits that I have given in my previous post, derived from a BLAST homology, is 1200+ bits, and can be considered a credible approximation of a lower bound for the real dFSI value of those two chains. That value is vastly higher than the 500-bit threshold in Dembski's UPB. Must I still reiterate the obvious? gpuccio
Silver Asiatic: Well, that's why I don't usually deal with analog information: the computation is more difficult. However, we always have to define a real system, a time window, and the object we are analyzing. As I see it, the correct question here is: given what we know of the forces acting on our planet during its existence, what is the probability that some object originates whose solid parts can slide so that they divert a river? Of course, that probability is very high, because many of the objects that originate from the geological processes that have taken place on our planet can have that property and implement that function. No special configuration is needed, other than being solid, being big, being subject to slides. Like all mountains in the world. No special functional information here. gpuccio
GP Thanks again. So back to Bob O'H's "mountain". We observe the mountain. As observers, we define the function: "the mountain's function is when rocks slide they land at the bottom and divert the river". We note the function is already successful. We predict more rocks will fall and the water will move. Now, we determine that this function contains more than 500 bits of information. So, we conclude that the function is designed? The answer is "no". Because we're missing the "S" in the equation. There has to be specificity and rocks piled up at the bottom of the mountain are not specified information. Silver Asiatic
Silver Asiatic: As this seems to be my terminology (for which I take full responsibility), I would like to clarify: dFSI is the digital functionally specified information linked to a function (that is usually implemented in an observed object). It is a continuous variable, because it is measured as a number: it is equal to -log2 of the target space / search space ratio. dFSCI is a binary transformation of the above measure according to a pre-defined threshold. So, if we use the 500-bit threshold, all objects exhibiting more than 500 bits of dFSI will be said to exhibit dFSCI. IOWs, dFSCI is a binary value (yes or no). I hope that helps. :) gpuccio
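The two definitions in the comment above translate almost directly into code. This is a minimal sketch in my own wording, not an official implementation; the example target (a single 10-character lowercase password) is hypothetical.

```python
import math

def dfsi(target_space_size: float, search_space_size: float) -> float:
    """dFSI in bits: -log2 of the target space / search space ratio."""
    return -math.log2(target_space_size / search_space_size)

def dfsci(dfsi_bits: float, threshold_bits: float = 500.0) -> bool:
    """dFSCI as a binary (yes/no) transform of dFSI against a pre-defined threshold."""
    return dfsi_bits >= threshold_bits

# Example: a target of exactly one 10-character lowercase string out of 26^10.
bits = dfsi(1, 26 ** 10)
print(round(bits, 1), dfsci(bits))   # 47.0 False -> below the 500-bit threshold
```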
Bob O'H: "what you write helps a bit," I am happy about that! "but I don't think it's satisfactory." I would never have dreamed of that. :)

"Even if I define the function without reference to what is measured, I can still (apparently) define the function to whatever level of specificity I want. So I could pile on specifications and force dFSI to be large enough. Or I could be vaguer, and lower it."

No. That's not how things work. I will try to explain. It is true that we can define the function at any level we like. But, of course, if our purpose is to infer design (or not) for an observed object, we must define the function so that our definition includes the object.

So, let's say that our object is ATP synthase (please, if you want, have a look at my post #55, which makes some important points). There, I have used the following definition in the discussion: a machine that generates high energy ATP molecules using the energy derived from a proton gradient, in an appropriate membrane system. This is rather generic, but for most purposes it would be fine. It is certainly good enough to infer design for that molecule. I could make the definition more specific, by fixing a threshold of efficiency, like "at least 100 micromoles of ATP per second per mg protein". That would probably include most instances of the molecule, maybe not all, but what's the point? The essence of the function is to be able to synthesize ATP from a proton gradient, with good efficiency, and in some real cellular context.

In any case, whatever the definition, we are defining a "higher tail": the set we are defining includes all objects that exhibit at least the level of function we have defined, or more. When we define a function, we generate a binary partition in the search space. The set defined by our function (the target space) must include the observed object, if we are reasoning about some specific observed object. The target space / search space ratio will then express the probability of getting a functional object by a random search of the search space, in one attempt (we are operationally assuming a uniform distribution of the probability of reaching an object in the search space). Of course, if we have many attempts, we will consider that (IOWs, the probabilistic resources of the system).

So, when we define a function so that our definition includes our object, we are simply reasoning as we do in classical hypothesis testing: we are computing the probability of getting the observed level of function, or higher, if we accept the null hypothesis that the object is the result of a random search.

In design inference, our purpose is to infer design for the object (or not). So, all we need is a definition which includes the object and that can set the functional complexity of the defined function above our threshold (in a general case, 500 bits). Of course, if we cannot find any such definition, we cannot infer design. And that is exactly the point: you cannot find any such functional definition for a non-designed object, like for example a randomly generated sequence of any kind. Try as much as you like: you will never succeed. On the contrary, it's extremely easy to find such a definition for a lot of designed objects, like pieces of language and software. And, of course, it's extremely easy to find such a definition for ATP synthase, and a lot of other biological objects and systems. Quod erat demonstrandum.

Of course, I don't believe for a moment that you will think that this is satisfactory... :) gpuccio
Bob
Even if I define the function without reference to what is measured, I can still (apparently) define the function to whatever level of specificity I want.
I know GP can answer this much better than I can, but the function has to be real and observable. Yes, you can pile on criteria for specification, but I don't think you can strip away specifications that define some complex functions at their very minimum level. It's something like irreducible complexity. We look at functions that are complex in their simplest form. There's no need to pile on specifications. ID looks at those examples as markers. Again, the intent of ID is not to classify all of reality into "Designed" and "Non-Designed" categories. Its claim is that "some things in nature" exhibit evidence of design. All it needs to show is that this is true for some things, measured by their quantity of dFSCI. This is validated by testing the power of "non-intelligent" agents (randomness, natural processes) to produce the dFSCI that is observed in those instances. Silver Asiatic
Bob
SA @ 46 – you seem to be suggesting that I want to specify my function so that dFSI is small enough for what I want. Isn’t this cheating? Surely I should be honest and define my function without regard to what my expected result would be.
I think your terminology (I know there are many versions) is missing something, though. It's not merely dFSI, but rather dFSCI. It's the "C" that is necessary. Is your function complex? Well, that's what you're looking for: a complex specified function. Now, you could seek the most ambiguous example - you could look for the gray areas. You could try to find the least complex function. But I'd call that cheating. If you're not willing to look at the most obvious cases first, then why not? We're looking for complex, specified function. Why not go to the most obvious observations of that? That's the starting point, not the most ambiguous and debatable observations. Agreed? As I said, evolution does the same thing. It doesn't hold up the most ambiguous results as examples. Instead, it looks for the strongest, most obvious results. Anti-evolutionists have to argue against those, not the ambiguous observations. The same goes for you. Look at the most obvious examples of dFSCI first. Argue against those and not the borderline cases. Silver Asiatic
SA @ 46 - you seem to be suggesting that I want to specify my function so that dFSI is small enough for what I want. Isn't this cheating? Surely I should be honest and define my function without regard to what my expected result would be. GPuccio @ 47 & 48 - what you write helps a bit, but I don't think it's satisfactory. Even if I define the function without reference to what is measured, I can still (apparently) define the function to whatever level of specificity I want. So I could pile on specifications and force dFSI to be large enough. Or I could be vaguer, and lower it. I can't see how what you write stops that. I find this troubling (it's similar to problems with systematics using morphological characters: the character scoring and choice of which characters to use are subjective, which leads to lots of arguments. It's one reason why DNA methods are preferred nowadays), because it looks easy to abuse. PaV @ 53 - if you have read this thread, you'll see that one argument I've made a few times is that I'm not doing your work for you. You (as a group, not necessarily you individually) should be able to find a way of testing your design inference on several designed and non-designed objects: the replication is important, because you are claiming to have a general method. Bob O'H
GPuccio: The important point is: each level has its complexities, and as we try to get to the final functional result, the complexities increase exponentially, because the search space increases much more quickly than the target space.
Which shows that the search space for an entire organism is unfathomably huge. And on top of that: an organism, unlike a sonnet or a password, is constantly changing. Perhaps one could say that an organism is a collection of many, many different functionally coherent structures which alternate over time. And if 'functionally coherent structures' is the target space, then an organism manages to find a multitude of targets in an unfathomably huge search space. What are the true odds?
... however many ways there may be of being alive, it is certain that there are vastly more ways of being dead ... [Dawkins]
Origenes
Origenes: I agree. In the same way, the twenty amino acids are used as letters to get the secondary structures that are used as blocks to get the tertiary and quaternary structures that allow a protein to implement its specific function. The important point is: each level has its complexities, and as we try to get to the final functional result, the complexities increase exponentially, because the search space increases much more quickly than the target space. That can be shown easily for language, as I have tried to do here: https://uncommondesc.wpengine.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/ but the same is true for all forms of dFSI.

Another important point is that the higher-level function is there only when the whole object has been configured. It is not the "sum" of lower-level functions, but rather a specific functional configuration of them. So, the properties of individual amino acids are necessary to have ATP synthase, but they are not ATP synthase. And the properties of alpha helices and beta sheets are certainly necessary to have ATP synthase, but they are not ATP synthase. And the properties of each of the 8 (in the simplest form) subunits of ATP synthase, and of the 2 macro-subunits (F0 and F1), are certainly necessary to have ATP synthase, but they are not ATP synthase.

The simple truth is: ATP synthase is an extremely complex machine that generates high energy ATP molecules using the energy derived from a proton gradient, in an appropriate membrane system. To be able to implement that function, which as you can see can be easily defined independently of any knowledge of the specific sequence or structure of the molecule, it requires an extremely complex organization. At the digital level, that organization is written in the primary sequence of at least 8 molecules, for a sum total of 2082 AAs in E. coli. As I have discussed many times here, the alpha and beta chains alone, which are the most conserved, are formed in E. coli by 973 AAs, and 624 of them (64%) are perfectly conserved in humans. That represents a total BLAST score, in a simple BLAST comparison of the two chains, of 1219 bits. That functional information has been conserved from E. coli to humans, through billions of years of evolution. And ATP synthase is one of the oldest proteins we know.

These are true examples of digital complex functional information in biology. There are tons of them. gpuccio
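For concreteness, the conservation figures quoted above can be laid out in a short sketch. The 973/624 counts and the 1219-bit BLAST score come from the comment; the 20-letter alphabet is an assumption, and the "naive" estimate is mine, offered only as a contrast to the more conservative BLAST figure.

```python
import math

total_positions = 973       # alpha + beta chains of ATP synthase in E. coli
identical_in_human = 624    # positions perfectly conserved between E. coli and humans

print(round(100 * identical_in_human / total_positions))   # ~64% identity

# A naive upper-bound estimate: if every perfectly conserved position strictly
# required its exact amino acid, that alone would amount to
naive_bits = identical_in_human * math.log2(20)
print(round(naive_bits))    # ~2697 bits

# The figure actually used above is the far more conservative BLAST score of
# ~1219 bits for the E. coli vs human comparison.
```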
GPuccio, thank you for the confirmation. Maybe terms like 'sub-function' or 'low-level-function' are apt to make this point. These terms make sense within the context of "functional coherence", which is defined by Douglas Axe as a “complete alignment of low-level functions in support of the top-level function.” As a familiar example for functional coherence Axe offers ‘alphabetic written languages’ which “use letters as the basic building blocks at the bottom level. These letters are arranged according to the conventions of spelling to form words one level up. To reach the next higher level, words are chosen for the purpose of expressing a thought and arranged according to grammatical conventions of sentence structure in order for that thought to be intelligibly conveyed." [Axe, "Undeniable", Ch.9] Origenes
Bob O'H: I think we need to start at square one here. IOW, Bob, since you want someone to do a dFSI calculation for a "non-designed" object, you first need to give us an example of such an object. What do you have in mind? And, obviously, biological objects are the very objects in dispute here and, so, fall outside of what can be considered "non-designed." So, please, furnish us with an object. PaV
Origenes: Exactly! :) gpuccio
GPuccio, Do I understand you correctly?
GPuccio: There is only one important rule, which is a natural derivation of the definition. The function must not be “built” on a specific observed sequence.
So, one cannot pick a random stone, examine its properties, and next design and build a complex functional machine which depends on the stone's properties and claim that the stone contained FSI all along.
GPuccio: IOWs, we cannot use the information of an existing sequence to define the function, or to generate it (as in the case of the password).
Indeed. That would be in principle the same method: the function of a sequence (or the properties of a stone) is designed after it has been observed. IOWs, the function must already be present — it must be part (a sub-function) of a functional whole — and not be added at a later date. Origenes
Silver Asiatic: Exactly! :) gpuccio
GP Very good explanation, again! I missed that part of the definition. A function is not merely the observed sequence and the observer; it also requires "that which is acted upon by the function". In the case of a random string, the observer could say its function is to open an imaginary safe. But an imaginary safe is not the proper subject of the scientific analysis. It needs to be a real safe that the sequence of characters actually opens. Silver Asiatic
Bob O'H at #42: By the way, please note that, while the definition of the function is made by a conscious observer, there is nothing subjective in the use we make of that definition in the reasoning. Indeed:

1) Any possible function defined by any possible observer for any possible object can be used to measure functional information (with the only restriction explained in my previous post).

2) Whatever the function, it must be defined explicitly and objectively, so that a value of functional information can be objectively measured for that function.

3) Any correctly and objectively defined function that requires at least 500 bits of functional information to be implemented can be used to infer design for an object, if the observed object can implement that function. gpuccio
Bob O'H at #42: So, you decided to join the discussion. That's fine, I appreciate that. You raise an important point, one that I have debated in detail in the past. You say: "Both of you agree that the observer chooses the function. But that makes it subjective, so it should be easy to get almost any object (designed or not) above 500 bits. Just define the function tightly enough. Do you have any way of restricting the specification to avoid that?"

Let's see. I state again my definition: any observer can define any function. If an object is observed that can implement a function that requires at least 500 bits of information, then we can infer design for the object. As you can see, there is no restriction here. But it is important to understand well the meaning of what I am saying.

A function is simply something that you can do with the object. For each explicit definition of a function we can try to measure the functional complexity linked to the function as -log2 of the ratio of the target space to the search space, as explained in my post here: https://uncommondesc.wpengine.com/intelligent-design/functional-information-defined/

OK, so what is the possible restriction in defining the function? There is only one important rule, which is a natural derivation of the definition. The function must not be "built" on a specific observed sequence. Of course, we can always build a function for a sequence that we have already observed, even if it is a totally random sequence that in itself cannot be used for anything complex. For example, we can observe a ten-digit sequence, obtained in a completely random way, for example: 3744698236 and make it the password for a safe. This is obviously a trick, and it is not a correct definition of a function.

The simple rule is: the function must be defined independently of any specific observed sequence. IOWs, we cannot use the information of an existing sequence to define the function, or to generate it (as in the case of the password). We can well use the observed properties of an object to define a function. For example, if we have an observed sequence that works as the password for a safe, we can well define the function: "Any sequence that works as the password for this safe". In this definition, we are not using any information about any specific sequence: we are only defining what the sequence can do. And we are not using the observed sequence to set a password for the safe.

The only case in which we can use a specific sequence to define a function is the case (scarcely relevant for our discussion) of pre-specification. IOWs, we can use a specific sequence like: 3744698236 and define the function as: any new ten-digit sequence that is generated in a random system, and that is identical to the above sequence. In this case, the observed sequence is used to define a function (or to set a password), but it is used only as a reference; IOWs, it is not considered an observed sequence generated randomly. Any new sequence that is generated randomly, identical to the reference, will satisfy the search. But in functional specification, the function is usually only a definition of what can be done with some object.

Now, if you stick to this simple rule, I state again that what you wrote ("it should be easy to get almost any object (designed or not) above 500 bits") is simply not true. It's not easy. It's simply impossible (with non-designed objects). You don't believe me? Try! After all, you said that it is easy. :)

Of course, this explanation is rather brief.
But I am ready to discuss each single aspect of this issue with you, or with anyone else, in detail. As I have done many times in the past. This is just a first summary. gpuccio
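As an illustration of the ratio gpuccio defines above, here is a minimal Python sketch; the target and search space sizes passed in are purely hypothetical placeholders, not measured values for any real object.

```python
import math

def functional_information(target_space: float, search_space: float) -> float:
    """Functional information in bits, as defined above:
    -log2(target space / search space)."""
    return -math.log2(target_space / search_space)

# Purely hypothetical numbers, only to show how the ratio is used:
print(functional_information(2.0**100, 2.0**400))  # 300.0 bits: below 500, no design inference
print(functional_information(1.0, 2.0**600))       # 600.0 bits: above the 500-bit threshold
```

On this definition, the design inference is drawn only when the computed value for some explicitly defined function reaches 500 bits.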
Bob O'H
I’m not sure why I’d want to do that.
Because you want to explain the origin of protein folds. You observe a widely-recognized biological function. It's far less subjective -- it's a known function. Now you have to see if it can be explained within the design parameter, less than 500 bits. That's why you'd want to do this. Or, you want to see what the "Edge of Evolution" is, don't you?
If something not designed exceeds the design boundary, then you have a problem which will need some fixing. If everything that isn’t designed fails to exceed the boundary then you have a good design detector.
If we took the same approach towards morphological analysis, we would dismiss fossils as evidence. We observe two fossils that look similar. So, "evolution detection" says they're ancestral. Then we look at phylogenetic analysis and see they're non-ancestral. So, the fossil observations gave a false positive. Does this invalidate morphological studies? No, because researchers will dismiss ambiguous results. ID cannot rule out design in any or every situation. It has to look for key, or most obvious, markers. Where there is ambiguity, research cannot proceed. It's the same with ambiguous fossils. They can be fit into any hierarchy, or none. ID doesn't make claims about non-design either. It can't prove, necessarily, that something is not designed (take a Jackson Pollock painting, or some random drops on a canvas from paint can spills). It only shows indicators where there is positive evidence of design. The focus is much narrower than what you're demanding -- and you'd have to apply the same standard to fossil analysis otherwise. Silver Asiatic
SA @ 43 -
Just thinking out loud here, but it’s not a question of getting any function to 500 bits (yes, you could create highly constrained functions), but in getting some functions under 500 bits.
I'm not sure why I'd want to do that. I just want to measure functional specificity, whatever the value. If something not designed exceeds the design boundary, then you have a problem which will need some fixing. If everything that isn't designed fails to exceed the boundary then you have a good design detector. But this needs to be tested, in my opinion. Bob O'H
BO'H: if your mountain is a pyramid in Egypt, function is readily identified and the FSCO/I involved points to design. We have many times shown how the implicit information in functionally organised things can be quantified: use a structured sequence of yes/no questions in a relevant, efficient description language, and the chain length needed to specify the configuration gives a measure of the information involved; indeed, this was pointed out by Orgel in 1973. However, dFSCI is already coded as a discrete state digit string, which can readily be converted into bits; there is utterly no need to redefine it, and we already have broader metrics that actually work by identifying a reasonable equivalent string in some description language -- cf. AutoCAD. But also, functionally specific complex organisation and information is not a universal detector of codes, function, meaning etc. It addresses a significant but limited class of phenomena and allows us to draw momentous conclusions on empirical warrant. That is more than enough. KF kairosfocus
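The yes/no-question measure mentioned above is just a description length in bits; a minimal sketch, with a made-up count of distinguishable configurations:

```python
import math

def chain_length_in_questions(distinguishable_configs: int) -> int:
    """Minimum number of yes/no questions (bits) needed to single out one
    configuration among the distinguishable ones: ceil(log2 N)."""
    return math.ceil(math.log2(distinguishable_configs))

# Hypothetical: a description language that distinguishes 10^90 configurations
print(chain_length_in_questions(10**90))   # 299 questions, i.e. about 299 bits
```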
Bob O'H
Both of you agree that the observer chooses the function. But that makes it subjective, so it should be easy to get almost any object (designed or not) above 500 bits. Just define the function tightly enough. Do you have any way of restricting the specification to avoid that?
Just thinking out loud here, but it's not a question of getting any function to 500 bits (yes, you could create highly constrained functions), but of getting some functions under 500 bits. That's why I initially stated that some functions are too ambiguous - meaning, too subjective - to really analyze as such. Other biological functions are commonly referenced and understood as functions. Take the evolutionary origin of protein folds. Could we bring that function down under 500 bits of information somehow? That's actually what researchers would want to do: break it down to the smallest possible functional segment. But even then, the information content would exceed the design boundary.
(on the analogue/digital divide, I think this is easy to solve: simply define dFSI as a measure (basically make it a probability mathematically), so measure theory bridges the gap between continuous and discrete spaces)
Not sure how that works, but it seems like a good translation in theory. Information bits would be derived from probabilities on how they would occupy space ... or something? Silver Asiatic
gpuccio @ 40 & then Silver Asiatic @ 41 -
According to my definition of functional complexity, any observer can define any function he likes for any object, analogic or digital. Then we can compute the complexity of the defined function. The point is, if we can define a function, any function, for an object, which requires at least 500 bits of information to be implemented, then we can infer design.
The function of the mountain has to be defined by the observer. If you chose a mountain as a place to build a home, for example, that’s the function of the mountain. Then you’d determine characteristics ideal for your home, and then investigate various mountains to see if they met your needs for function.
Both of you agree that the observer chooses the function. But that makes it subjective, so it should be easy to get almost any object (designed or not) above 500 bits. Just define the function tightly enough. Do you have any way of restricting the specification to avoid that? (on the analogue/digital divide, I think this is easy to solve: simply define dFSI as a measure (basically make it a probability mathematically), so measure theory bridges the gap between continuous and discrete spaces) Bob O'H
GP, that is a good explanation, thanks. Answering Bob O'H's question:
I am not sure how you specify functionality in a mountain (for example).
The function of the mountain has to be defined by the observer. If you chose a mountain as a place to build a home, for example, that's the function of the mountain. Then you'd determine characteristics ideal for your home, and then investigate various mountains to see if they met your needs for function. The total number of mountains searchable is the search space. The number of tested mountains that successfully match is the target space. Silver Asiatic
Silver Asiatic: According to my definition of functional complexity, any observer can define any function he likes for any object, analog or digital. Then we can compute the complexity of the defined function. The point is, if we can define a function, any function, for an object, which requires at least 500 bits of information to be implemented, then we can infer design. The same object can be used for different functions. Here: https://uncommondesc.wpengine.com/intelligent-design/functional-information-defined/ I have tried to clarify in some detail how functional information can be defined and measured. Of course, digital functional information (dFSI) is much easier to measure. Again, biological information is mainly digital. So, digital information is what we are interested in when we debate design in biology. But the concept remains the same for analog information, as KF has often pointed out. You can always convert analog information to digital form. People like Bob O'H try to deny a simple truth which cannot be denied: some configurations in material objects bear functional information, and when that functional information is above some threshold (500 bits is a very safe threshold) the objects are always designed objects. Bob O'H has not even tried to explain why he would infer design, or not infer it, when faced with a binary digital sequence that specifies the first 125 decimal digits of pi. 500 bits of functional information. The answer is simple: no law of necessity is known, or even imagined, that can generate such a sequence in a non designed system. And the probability of getting it by chance is in the range of the UPB: you will never find that sequence in a random material system in the universe. The same is true for any sequence of at least 500 bits of functional information: you will never find those sequences in random systems, and if there is no known law of necessity that can reasonably be linked to that kind of sequence, then you can be sure that it was designed by some conscious designer who understood the meaning of that sequence, the reason why that sequence is different from some generic random sequence of that length. So, you will not find one of Shakespeare's sonnets in a sequence of grains of sand, and you will not find the source code for Excel in meteorological phenomena. And so on. When randomly typing monkeys generate Shakespeare's works, Bob O'H will be vindicated. Until then, he is only an obstinate denier of the evidence. gpuccio
Bob O'H
I am not sure how you specify functionality in a mountain (for example).
Where functionality is ambiguous or indefinable, then research cannot proceed in that area. That's the way science works. Where there is a high degree of observable, specified function, then measurements can be successfully carried out there and testing done. Such is the case for digital code - and that's why it's a good example to test for ID. Silver Asiatic
Bob O'H: Again, you are a true disappointment. First of all, I purposefully deal with digital functional complexity, for two simple reasons: 1) it's much easier to measure functional information; 2) the information in the genome is digital. Of course, it is perfectly possible to measure functional complexity in analog objects, but it is more difficult, and there is no reason to discuss analog objects when all the information we are interested in is digital. But I will not spend any more time with your arrogant and completely senseless position. Good bye. gpuccio
BO'H: Your talking points are clanging; GP has quite rightly spoken to literally trillions of successful tests (try a whole Internet full) and I have provided examples of just how far short efforts to get blind chance to work have fallen relative to the 500 bit target. There is a known, observable phenomenon. It critically depends on functional specificity and a threshold beyond which blind search is not a plausible source. After huge effort, the cases of functional specificity that have been seen fall a factor of about 10^100 short of the threshold. Empirically, intelligence routinely creates cases of dFSCI, including your own objections. The empirical observation and the search challenge line up. We have excellent reason to trust the reliability of dFSCI (and more broadly FSCO/I, and even more broadly CSI) as a reliable sign of design. And yes, that points to DNA -- TEXT in the heart of the living cell -- as designed, and also to body plans from microbes to man as designed. I am inclined to believe the real problem is where this points. KF kairosfocus
Origenes @ 33 - your evidence? Have you done what I suggested and formally compared designed and non-designed objects? gpuccio @ 34 - yes, we're getting nowhere. Please don't try to read my mind - you're not very good at it. One reason why you should be the one to test your ideas is that you will then find out if there are any flaws or problems with them. I am not sure how you specify functionality in a mountain (for example). I'm not going to play your game of throwing digits at you, because I don't see why I should spend my time playing this game when you're not even willing to spend the time. If you are not prepared to test your method, then why should I? I see a lot of manuscripts where people suggest new methods for various things, and I've never seen anyone argue that it's not up to them to show their method works. Why should you be any different? It's not enough to have faith in your method - you have to actually demonstrate it. Bob O'H
BO'H: Pardon, but you have clearly spoken amiss. We have identified a key form of functionally specific, complex organisation and associated information: digitally coded text strings beyond a threshold (500 or 1,000 bits makes but little practical difference). These have been discussed in and around UD for years under the acronym dFSCI. Both criteria are important,
a: informational functional specificity [here, a message or algorithmic instructions and/or data etc] AND b: complexity beyond a threshold where the blind search resources of the observed solar system or cosmos (as appropriate) cannot plausibly search out a sufficient proportion of the configuration space to have any reasonable chance of finding isolated islands of function.
Random text examples of searching out meaningful strings have been shown, and we readily see that they fall short of the complexity criterion by about a factor of 10^100 in terms of scope of config space. In typing and posting objections in the form of text in English, you yourself are exemplifying how, routinely, such messages of more than 72 - 143 ASCII characters are produced by intelligently directed configuration. There are literally trillions of cases in point. Reliably, once we are beyond the reasonable thresholds, dFSCI is the product of intelligence, and the search challenge analysis shows why. Indeed, imagined golden searches face the problem that a search for a search selects from the set of subsets of a set. So, if the direct blind search searches a space of order n possibilities, the search for a golden search -- equally blindly -- must search in a config space of scope 2^n. Exponentially harder, especially when n is already ~10^150 or ~10^301. So, we have an empirically based, analytically backed criterion for dFSCI that makes it a very reliable sign of design as cause. Indeed, a morally certain one. I am sure that in the relevant thought exercise, astronauts encountering a wall with text giving the first 125 digits of pi in binary code will instantly infer to design. For excellent cause. For that matter, if they encounter a wall, that would be sufficient functionally specific complex organisation and implicitly associated information to similarly infer design. For that matter, simply encountering a polished cuboidal stone monolith with precise faces, angles and proportions that reflect relevant ratios such as the golden rectangle would meet the criterion also. Finding text or an illustration inscribed or carved into it would actually only be a capstone; the issue would be whether we can decode it. And of course, I here have in mind the Rosetta stone or the Behistun rock. The mysterious Voynich manuscript is certainly designed; the question is whether its text is nonsense or a crackable code. All of these reflect the underlying acceptance that, at the relevant time and place, there could be designers, intelligent agents capable of imposing a purposeful configuration that fulfills some meaningful, configuration-dependent function. Which is where I think an underlying issue is: if one implicitly assumes no possible designer could be there, then one will insist on inferring to any other conceivable possibilities, now including quasi-infinite multiverses and/or an actually infinite past time. Of course, the onward problem is that there is a class of cases of unknown provenance that manifests copious dFSCI, namely cell based life. Text lies in the heart of the cell and at the heart of getting its workhorse molecules to do their jobs, proteins and enzymes in particular. On this subject, the great astronomer, Sir Fred Hoyle, noted during his c. 1981 Caltech talk:
The big problem in biology, as I see it, is to understand the origin of the information carried by the explicit structures of biomolecules. The issue isn't so much the rather crude fact that a protein consists of a chain of amino acids linked together in a certain way, but that the explicit ordering of the amino acids endows the chain with remarkable properties, which other orderings wouldn't give. The case of the enzymes is well known . . . If amino acids were linked at random, there would be a vast number of arrangements that would be useless in serving the purposes of a living cell. When you consider that a typical enzyme has a chain of perhaps 200 links and that there are 20 possibilities for each link, it's easy to see that the number of useless arrangements is enormous, more than the number of atoms in all the galaxies visible in the largest telescopes. This is for one enzyme, and there are upwards of 2000 of them, mainly serving very different purposes. So how did the situation get to where we find it to be? This is, as I see it, the biological problem - the information problem . . . . I was constantly plagued by the thought that the number of ways in which even a single enzyme could be wrongly constructed was greater than the number of all the atoms in the universe. So try as I would, I couldn't convince myself that even the whole universe would be sufficient to find life by random processes - by what are called the blind forces of nature . . . . By far the simplest way to arrive at the correct sequences of amino acids in the enzymes would be by thought, not by random processes . . . . Now imagine yourself as a superintellect working through possibilities in polymer chemistry. Would you not be astonished that polymers based on the carbon atom turned out in your calculations to have the remarkable properties of the enzymes and other biomolecules? Would you not be bowled over in surprise to find that a living cell was a feasible construct? Would you not say to yourself, in whatever language supercalculating intellects use: Some supercalculating intellect must have designed the properties of the carbon atom, otherwise the chance of my finding such an atom through the blind forces of nature would be utterly minuscule. Of course you would, and if you were a sensible superintellect you would conclude that the carbon atom is a fix.
That is the challenge you and other objectors to the joint design inferences on FSCO/I (especially dFSCI) and cosmological fine tuning face, for the very balance of atoms in our observed cosmos is suspiciously set up to emphasise key ingredients of cell based life, with two components that exhibit astonishing and effectively unique properties: carbon, with its organic, chaining based chemistry, and water, H2O, with its remarkable functionalities. I remind you, those are three of the four most abundant elements in the cosmos, the C and O being linked to the first fine tuning result identified by Hoyle and Fowler in 1953 -- the very same year in which Crick and Watson identified the structure of DNA, and the former concluded that this was a text-bearing molecule that in effect was the chemical basis for the gene. (Quite a year that; was it a good wine year?) Mix in N, which is nearby in abundance, and we are looking at proteins already. Then, you need to explain C-chemistry, aqueous medium, molecular nanotech, text using, terrestrial planet, cell based life. I have, for cause, long since concluded Sir Fred was right: [b]y far the simplest way to arrive at the correct sequences of amino acids in the enzymes would be by thought, not by random processes. Or, more precisely, not by blind chance and/or mechanical necessity on the gamut of the solar system or the actually observed cosmos. KF kairosfocus
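The enzyme arithmetic in the Hoyle quotation above is easy to check; a short sketch, taking "atoms in the visible universe" as the commonly cited rough figure of 10^80 (an assumption for illustration, not a number from the quotation):

```python
import math

residues = 200              # links in Hoyle's typical enzyme chain
options_per_residue = 20    # amino acid possibilities per link

total_arrangements = options_per_residue ** residues        # 20^200
print(f"20^200 is about 10^{math.log10(total_arrangements):.0f}")   # ~10^260

atoms_in_observable_universe = 10 ** 80    # rough estimate assumed here
print(total_arrangements > atoms_in_observable_universe)    # True, by ~180 orders of magnitude

# In bit terms, a fully specified 200-residue sequence corresponds to about 864 bits,
# well past the 500-bit threshold discussed earlier in the thread.
print(round(residues * math.log2(options_per_residue)))     # 864
```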
Bob O'H: You disappoint me. You know quite well, like anybody else here, that no non designed object exhibits 500 bits of functional information. You know perfectly well that you cannot offer one single counter-example, and that nobody can. At least some of your fellow ID critics, at TSZ, have tried in the past, without succeeding. I still remember the sincere but desperate attempts of some of them. They had courage, but they could not succeed in an impossible task. Are you an intellectual coward? You insist on saying things like: "It's clear that nobody has done what gpuccio and I agree is an obvious thing to do to present a positive case for a design inference." when: 1) You have not offered a single example of a non designed object that can exhibit some modest level of dFSCI, let alone 500 bits. And yet you have the whole universe at your disposal, from planets to grains of sand to randomly generated strings. And you are absolutely free to define any possible function for any possible object. 2) I have offered millions of obvious examples of designed objects which do exhibit dFSCI in tons, well beyond the threshold of 500 bits, this post being one more of them. 3) You have not even answered my mental experiment with your simple opinion: would you infer design or not? And why? But you boldly state: "I think we're getting nowhere." You disappoint me. It is an absolute truth that you can give me any number of digital sequences, and I will always be able to infer design correctly with no false positives and, obviously, many possible false negatives, using the simple concept of dFSCI. In scientific terms, that means that dFSCI can infer design correctly with 100% specificity. I challenge you, and anyone else, to falsify this statement. Good luck. gpuccio
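For readers unfamiliar with the term, "specificity" here is the standard classifier metric; a minimal sketch with made-up counts, only to show how the word is being used:

```python
def specificity(true_negatives: int, false_positives: int) -> float:
    """Specificity = TN / (TN + FP): the fraction of non-designed objects
    that the criterion correctly refuses to call designed."""
    return true_negatives / (true_negatives + false_positives)

# Hypothetical test run: 10,000 known non-designed sequences, none flagged as designed.
print(specificity(true_negatives=10_000, false_positives=0))   # 1.0, i.e. 100% specificity
# False negatives (designed objects falling below the threshold) lower sensitivity,
# not specificity, which is why false negatives are conceded freely above.
```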
Bob O'H: ... it’s one thing to say that all designed objects have some property, but that’s useless as a criterion for design if non-designed objects have the same property.
The good news is that this is not the case. Non-designed objects do not have the same property.
GPuccio: I cannot think of any category of non designed objects which has any relevant value of dFSCI. As far as I can see, any non designed objects that can be read as some digital sequence cannot be used to implement any non trivial function.
Origenes
I think we're getting nowhere. It's clear that nobody has done what gpuccio and I agree is an obvious thing to do to present a positive case for a design inference. kf - it's one thing to say that all designed objects have some property, but that's useless as a criterion for design if non-designed objects have the same property. Thus you need to show that the property doesn't apply to non-designed objects if it is to be used as a criterion for design. Bob O'H
PS: Random document generation cases, from Wiki:
One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the "monkeys" typed, "VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The first 19 letters of this sequence can be found in "The Two Gentlemen of Verona". Other teams have reproduced 18 characters from "Timon of Athens", 17 from "Troilus and Cressida", and 16 from "Richard II".[24] A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulated a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters: RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d...
A mere factor of 10^100 or so short of the scope of config spaces that mark the 500 bit threshold. kairosfocus
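The "factor of 10^100" figure can be checked under the assumption, used elsewhere in this thread, that a text character carries about 7 bits (so the 500-bit threshold corresponds to roughly 72 ASCII characters); the 24-character figure is the best random-typing match quoted above.

```python
import math

best_match_chars = 24        # longest matching run cited in the Wikipedia excerpt above
bits_per_char = 7            # assuming 7-bit ASCII characters
bits_matched = best_match_chars * bits_per_char    # 168 bits
shortfall_bits = 500 - bits_matched                # 332 bits still unsearched

# Express the remaining configuration-space gap as a power of ten.
print(f"~10^{shortfall_bits * math.log10(2):.0f}")   # ~10^100
```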
BO'H: Note, GP is identifying digitally coded s-t-r-i-n-g data structures [so, inherently contingent] bearing complex, configuration based specific messages of scope at least 500 bits. There are trillions of observed cases of known cause; in every one of them, the separately known cause is design. Intelligently directed configuration. This is backed up by a search challenge analysis on available atomic resources and time. So, we have a scientifically observed empirical pattern of high reliability, backed up by an analysis that makes good sense of what we see. This warrants the confident albeit inherently provisional inference (as obtains for scientific laws in general) that dFSCI in particular is a reliable sign of design. The thought exercise case would instantly lead a reasonable observer to the conclusion, design. The same would obtain if instead we found hardened petrified mud that captured a mould of the string through a flood or the like. The mould would be a natural aspect, the text string evidently the product of art. Your attempt to demand of GP that he provide a counter-example is thus misdirected. I suggest that you review your argument so far. KF kairosfocus
Bob O'H: "how many non-designed objects have you tried to calculate dFSCI for?" I cannot think of any category of non designed objects which has any relevant value of dFSCI. As far as I can see, any non designed objects that can be read as some digital sequence cannot be used to implement any non trivial function. Again, if you believe differently, explain why. "Frankly, it’s a distraction because it’s contrived, and still doesn’t get to the issue" What do you mean? It's a mental experiment, an important category in scientific reasoning. I would really appreciate your thoughts. And it does get to the issue. Because, if you can believe that such an object could be non designed, that the best explanation for such an object is some random non designed event, then this is exactly the kind of object that you should show me as a counter example to falsify my reasoning. Therefore, I reiterate my invitation: just comment on that example, or simply show me a single counterexample of that kind: a single object, or series of data, anything, that you can demonstrate originated in a system without any intervention of a designer, and that can be read as the sequence of the first 125 decimal digits of pi, or can be used to implement any other function, as defined by you, for which 500 specific bits of information are needed. It's simple enough. Will you join the mental experiment? gpuccio
still doesn’t get to the issue
It is the issue Bob, and you punted. Upright BiPed
Bob O'H, I have a question for you. Since the default assumption in science was that life was designed until natural selection came along and could supposedly finally explain that apparent Design without a Designer, and since advances in the mathematics of population genetics have now shown that natural selection is grossly inadequate as that Designer substitute, then why, with the falsification of natural selection as the Designer substitute, was design not then reinstituted as the default assumption in science instead of the adoption of neutral theory and various other pure chance theories? A few notes:
“Yet the living results of natural selection overwhelmingly impress us with the appearance of design as if by a master watchmaker, impress us with the illusion of design and planning.” Richard Dawkins – “The Blind Watchmaker” – 1986 – page 21 quoted from this video – Michael Behe – Life Reeks Of Design – 2010 – video https://www.youtube.com/watch?v=Hdh-YcNYThY “The Third Way” – James Shapiro, Denis Noble, and etc.. etc..,,, Excerpt: “some Neo-Darwinists have elevated Natural Selection into a unique creative force that solves all the difficult evolutionary problems without a real empirical basis.” http://www.thethirdwayofevolution.com/ The waiting time problem in a model hominin population – 2015 Sep 17 John Sanford, Wesley Brewer, Franzine Smith, and John Baumgardner Excerpt: The program Mendel’s Accountant realistically simulates the mutation/selection process,,, Given optimal settings, what is the longest nucleotide string that can arise within a reasonable waiting time within a hominin population of 10,000? Arguably, the waiting time for the fixation of a “string-of-one” is by itself problematic (Table 2). Waiting a minimum of 1.5 million years (realistically, much longer), for a single point mutation is not timely adaptation in the face of any type of pressing evolutionary challenge. This is especially problematic when we consider that it is estimated that it only took six million years for the chimp and human genomes to diverge by over 5 % [1]. This represents at least 75 million nucleotide changes in the human lineage, many of which must encode new information. While fixing one point mutation is problematic, our simulations show that the fixation of two co-dependent mutations is extremely problematic – requiring at least 84 million years (Table 2). This is ten-fold longer than the estimated time required for ape-to-man evolution. In this light, we suggest that a string of two specific mutations is a reasonable upper limit, in terms of the longest string length that is likely to evolve within a hominin population (at least in a way that is either timely or meaningful). Certainly the creation and fixation of a string of three (requiring at least 380 million years) would be extremely untimely (and trivial in effect), in terms of the evolution of modern man. It is widely thought that a larger population size can eliminate the waiting time problem. If that were true, then the waiting time problem would only be meaningful within small populations. While our simulations show that larger populations do help reduce waiting time, we see that the benefit of larger population size produces rapidly diminishing returns (Table 4 and Fig. 4). When we increase the hominin population from 10,000 to 1 million (our current upper limit for these types of experiments), the waiting time for creating a string of five is only reduced from two billion to 482 million years. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4573302/ Haldane’s Dilemma Excerpt: Haldane, (in a seminal paper in 1957—the ‘cost of substitution’), was the first to recognize there was a cost to selection which limited what it realistically could be expected to do. He did not fully realize that his thinking would create major problems for evolutionary theory. He calculated that in man it would take 6 million years to fix just 1,000 mutations (assuming 20 years per generation).,,, Man and chimp differ by at least 150 million nucleotides representing at least 40 million hypothetical mutations (Britten, 2002). 
So if man evolved from a chimp-like creature, then during that process there were at least 20 million mutations fixed within the human lineage (40 million divided by 2), yet natural selection could only have selected for 1,000 of those. All the rest would have had to been fixed by random drift – creating millions of nearly-neutral deleterious mutations. This would not just have made us inferior to our chimp-like ancestors – it surely would have killed us. Since Haldane’s dilemma there have been a number of efforts to sweep the problem under the rug, but the problem is still exactly the same. ReMine (1993, 2005) has extensively reviewed the problem, and has analyzed it using an entirely different mathematical formulation – but has obtained identical results. John Sanford PhD. – “Genetic Entropy and The Mystery of the Genome” – pg. 159-160 Kimura’s Quandary Excerpt: Kimura realized that Haldane was correct,,, He developed his neutral theory in response to this overwhelming evolutionary problem. Paradoxically, his theory led him to believe that most mutations are unselectable, and therefore,,, most ‘evolution’ must be independent of selection! Because he was totally committed to the primary axiom (neo-Darwinism), Kimura apparently never considered his cost arguments could most rationally be used to argue against the Axiom’s (neo-Darwinism’s) very validity. John Sanford PhD. – “Genetic Entropy and The Mystery of the Genome” – pg. 161 – 162 Kimura (1968) developed the idea of “Neutral Evolution”. If “Haldane’s Dilemma” is correct, the majority of DNA must be non-functional. – Sanford
In other words, Neutral theory, and the concept of junk DNA, was not developed because of any compelling empirical observation, but was actually developed because it was forced upon Darwinists by the mathematics of population genetics. In plain English, neutral theory, and the concept of junk DNA, is actually the result of a theoretical failure of Darwinian evolution, specifically a failure of natural selection itself, within the mathematics of population genetics!
“many genomic features could not have emerged without a near-complete disengagement of the power of natural selection” Michael Lynch The Origins of Genome Architecture, intro “a relative lack of natural selection may be the prerequisite for major evolutionary advance” Mae Wan Ho Beyond neo-Darwinism: Evolution by Absence of Selection "The publication in 1983 of Motoo Kimura's The Neutral Theory of Molecular Evolution consolidated ideas that Kimura had introduced in the late 1960s. On the molecular level, evolution is entirely stochastic, and if it proceeds at all, it proceeds by drift along a leaves-and-current model. Kimura's theories left the emergence of complex biological structures an enigma (since Natural Selection no longer played a role), but they played an important role in the local economy of belief. They allowed biologists to affirm that they welcomed responsible criticism. "A critique of neo-Darwinism," the Dutch biologist Gert Korthof boasted, "can be incorporated into neo-Darwinism if there is evidence and a good theory, which contributes to the progress of science." By this standard, if the Archangel Gabriel were to accept personal responsibility for the Cambrian explosion, his views would be widely described as neo-Darwinian." - David Berlinski - Majestic Ascent: Berlinski on Darwin on Trial - November 2011 (With the adoption of the 'neutral theory' of evolution by prominent Darwinists, and the casting aside of Natural Selection as a major player in evolution),,, "One wonders what would have become of evolution had Darwin originally claimed that it was simply the accumulation of random, neutral variations that generated all of the deeply complex, organized, interdependent structures we find in biology? Would we even know his name today? What exactly is Darwin really famous for now? Advancing a really popular, disproven idea (of Natural Selection), along the lines of Luminiferous Aether? Without the erroneous but powerful meme of “survival of the fittest” to act as an opiate for the Victorian intelligentsia and as a rationale for 20th century fascism, how might history have proceeded under the influence of the less vitriolic maxim, “Survival of the Happenstance”?" - William J Murray “Darwinism provided an explanation for the appearance of design, and argued that there is no Designer — or, if you will, the designer is natural selection. If that’s out of the way — if that (natural selection) just does not explain the evidence — then the flip side of that is, well, things appear designed because they are designed.” Richard Sternberg – Living Waters documentary Whale Evolution vs. Population Genetics – Richard Sternberg and Paul Nelson – (excerpt from Living Waters video) https://www.youtube.com/watch?v=0csd3M4bc0Q
And when looking at Natural Selection from the perspective of what is actually going on physically, it is very easy to see exactly why Natural Selection is ‘not even wrong’ as an explanation for the ‘apparent design’ we see pervasively throughout life:
The abject failure of Natural Selection on two levels of physical reality – video (2016) (princess and the pea paradox & quarter power scaling) https://uncommondesc.wpengine.com/evolution/denis-noble-why-talk-about-replacement-of-darwinian-evolution-theory-not-extension/#comment-619802
Thus, since natural selection, i.e. Darwin’s greatest claim to scientific fame, is thrown under the bus by the math of population genetics (and by empirical evidence itself), Darwin was certainly NOT a great scientist as many of his present day adherents claim that he was. In fact, Charles Darwin, whose degree was in Theology, and whose book “Origin” is replete with bad liberal theology, is more properly classified as being a bad liberal theologian who was trying to impose his anti-Theistic beliefs onto science rather than as a great scientist who was trying to discover new truths about the world through experimentation.
Charles Darwin’s use of theology in the Origin of Species – STEPHEN DILLEY Abstract This essay examines Darwin’s positiva (or positive) use of theology in the first edition of the Origin of Species in three steps. First, the essay analyses the Origin’s theological language about God’s accessibility, honesty, methods of creating, relationship to natural laws and lack of responsibility for natural suffering; the essay contends that Darwin utilized positiva theology in order to help justify (and inform) descent with modification and to attack special creation. Second, the essay offers critical analysis of this theology, drawing in part on Darwin’s mature ruminations to suggest that, from an epistemic point of view, the Origin’s positiva theology manifests several internal tensions. Finally, the essay reflects on the relative epistemic importance of positiva theology in the Origin’s overall case for evolution. The essay concludes that this theology served as a handmaiden and accomplice to Darwin’s science. http://journals.cambridge.org/action/displayAbstract;jsessionid=376799F09F9D3CC8C2E7500BACBFC75F.journals?aid=8499239&fileId=S000708741100032X
To this day, since there is no experimental support for Darwinian evolution, bad liberal theology is still pervasive in the arguments of leading apologists for Darwinism:
Methodological Naturalism: A Rule That No One Needs or Obeys – Paul Nelson – September 22, 2014 Excerpt: It is a little-remarked but nonetheless deeply significant irony that evolutionary biology is the most theologically entangled science going. Open a book like Jerry Coyne’s Why Evolution is True (2009) or John Avise’s Inside the Human Genome (2010), and the theology leaps off the page. A wise creator, say Coyne, Avise, and many other evolutionary biologists, would not have made this or that structure; therefore, the structure evolved by undirected processes. Coyne and Avise, like many other evolutionary theorists going back to Darwin himself, make numerous “God-wouldn’t-have-done-it-that-way” arguments, thus predicating their arguments for the creative power of natural selection and random mutation on implicit theological assumptions about the character of God and what such an agent (if He existed) would or would not be likely to do.,,, ,,,with respect to one of the most famous texts in 20th-century biology, Theodosius Dobzhansky’s essay “Nothing in biology makes sense except in the light of evolution” (1973). Although its title is widely cited as an aphorism, the text of Dobzhansky’s essay is rarely read. It is, in fact, a theological treatise. As Dilley (2013, p. 774) observes: “Strikingly, all seven of Dobzhansky’s arguments hinge upon claims about God’s nature, actions, purposes, or duties. In fact, without God-talk, the geneticist’s arguments for evolution are logically invalid. In short, theology is essential to Dobzhansky’s arguments.”,, http://www.evolutionnews.org/2014/09/methodological_1089971.html
Darwinism is as unscientific today, if not more so, as it was when it was first introduced:
An Early Critique of Darwin Warned of a Lower Grade of Degradation – Cornelius Hunter – Dec. 22, 2012 Excerpt: “Many of your wide conclusions are based upon assumptions which can neither be proved nor disproved. Why then express them in the language & arrangements of philosophical induction?” (Sedgwick to Darwin – 1859),,, And anticipating the fixity-of-species strawman, Sedgwick explained to the Sage of Kent (Darwin) that he had conflated the observable fact of change of time (development) with the explanation of how it came about. Everyone agreed on development, but the key question of its causes and mechanisms remained. Darwin had used the former as a sort of proof of a particular explanation for the latter. “We all admit development as a fact of history;” explained Sedgwick, “but how came it about?”,,, For Darwin, warned Sedgwick, had made claims well beyond the limits of science. Darwin issued truths that were not likely ever to be found anywhere “but in the fertile womb of man’s imagination.” The fertile womb of man’s imagination. What a cogent summary of evolutionary theory. Sedgwick made more correct predictions in his short letter than all the volumes of evolutionary literature to come. http://darwins-god.blogspot.com/2012/12/an-early-critique-of-darwin-warned-of.html
bornagain77
On your pi example, we actually do have some information about who/whatever did that: they probably have 10 digits (and hence cannot be God, who has 13). Frankly, it's a distraction because it's contrived, and still doesn't get to the issue - have you (or someone else) properly tested dFSCI's ability to classify designed and non-designed objects? Bob O'H
gpuccio - how many non-designed objects have you tried to calculate dFSCI for? Bob O'H
Bob O'H: I apologize. Here are the working links: https://uncommondesc.wpengine.com/intelligent-design/defining-design/ https://uncommondesc.wpengine.com/intelligent-design/functional-information-defined/ https://uncommondesc.wpengine.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/ https://uncommondesc.wpengine.com/intelligent-design/homologies-differences-and-information-jumps/ The work is easily done. The posts in this thread, most works of literature or poetry, most software code, most machines with a minimum of complexity, paintings, and so on, are all good examples of complex functional information. Most of them are well above the threshold of 500 specific bits. For a quantitative analysis for language, please look at my OP about the English language. All these examples exhibit functionally specified information. Many of them (language, software) are digital, and are therefore good examples of dFSCI. All of them are human artifacts, and we can directly or indirectly assess their origin from design processes traceable to one or more conscious intelligent designers. On the contrary, I am not aware of any example of non designed objects that exhibit FSI, or even better, dFSI, above the threshold of 500 bits. Here, neither I nor you nor anyone else can "do the work". There are no examples, period. If you believe differently, please offer at least one counter-example. Try this: generate as many random sequences as you like with some random generator (characters, numbers, binary digits, whatever you like), of 500 bits complexity or more, and try to find some independent function (you can define any function you like) that requires at least 500 specific bits in the sequence, and that can be implemented by the sequence itself. You say: "I didn't comment on 1b specifically, but a similar argument applies: you need to demonstrate this with evidence, not just assert it. To be honest, it doesn't seem particularly controversial: you seem to be (loosely!) saying that intelligence can produce complicated stuff. I'm not sure anyone would doubt that." IOWs, you are confirming my point: there is a specific rationale that links functional complexity to the subjective experiences of understanding and purpose. This need not be demonstrated "with evidence": it is a rationale, something that makes the design hypothesis consistent and credible. The evidence to demonstrate it comes from the empirical observations, IOW from point 1a. Finally, I invite you to comment on the following example. Let's say that humans reach a faraway planet, where there is no life and no trace of civilization. The astronauts come to a stone wall in a mountain. On it, they observe a group of peculiar simple signs "carved" in the stone. Let's say ten loose rows of signs, each made loosely of 50 signs. Let's say that the signs are of two kinds (maybe deeper or less deep), so that they can unequivocally be grouped into two categories. Let's say that one of the astronauts, with some mathematical background, observes that the signs can be read as binary digits. 
Let's say that the same astronaut, after a few attempts, finds that, reading the sequence from left to right and top down, we can derive a sequence of 500 bits, and that, choosing one of the two possible assignments for 0 and 1, grouping the sequence into binary words of 4 bits each, and interpreting them as decimal digits, what we get is: 31415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938 that is, the first 125 decimal digits of pi. Now, let's say that the signs, given their physical nature, could well be interpreted as the result of some non designed event, like some fall of micro meteorites, or anything else. But their configuration? I will state the obvious. 1) There is no imaginable law of necessity that can explain that specific configuration. 2) There is a definite functional definition for the configuration: any sequence of bits which, read as 4 bit words, conveys the first 125 decimal digits of pi (which is an objective mathematical constant). 3) The probability of getting that specific (unique) configuration in a random system is of the order of 2^-500, Dembski's UPB. So, you can choose: 1) You stick to the idea that the sequence is the result of some random event (micro-meteorites, or something else). 2) You seriously consider the explanation that the sequence was designed by aliens or by someone else. Someone, obviously, who knew the meaning of pi, and had some reason to carve it in the stone. Please note that we have no information at all about the possible designer, its nature, its motives, its methods. Nothing on the planet helps. Your comment? gpuccio
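A minimal sketch of the decoding step described in the thought experiment above; the digit string is the one given in the comment, and nothing else is assumed.

```python
import math

# The 125-digit string quoted in the comment above (the first 125 decimal digits of pi).
PI_125 = ("31415926535897932384626433832795028841971693993751"
          "05820974944592307816406286208998628034825342117067"
          "9821480865132823066470938")

# Encode each decimal digit as a 4-bit binary word: the 500 "carved signs".
bits = "".join(format(int(d), "04b") for d in PI_125)
assert len(bits) == 500                      # 125 digits x 4 bits = 500 bits

# The astronauts' decoding: group the bits into 4-bit words and read them as decimal digits.
decoded = "".join(str(int(bits[i:i + 4], 2)) for i in range(0, len(bits), 4))
assert decoded == PI_125

# If exactly one 500-bit configuration implements the defined function,
# the functional information is -log2(1 / 2^500) = 500 bits.
print(-math.log2(1 / 2 ** 500))              # 500.0
```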
gpuccio - thanks. It looks like you're punting on the "obvious point". I'm not going to do your work for you: you need to show that your method works for most/all cases. This is how we do things in science. (BTW, the links go to the pages where you can edit the posts, so I don't have access.) I didn't comment on 1b specifically, but a similar argument applies: you need to demonstrate this with evidence, not just assert it. To be honest, it doesn't seem particularly controversial: you seem to be (loosely!) saying that intelligence can produce complicated stuff. I'm not sure anyone would doubt that. Bob O'H
Bob O'H: "has anyone gone round and looked at a variety of designed and non-designed objects, and shown that functional complexity is higher in designed objects? That would be the obvious first step to making this positive argument hold water." It is rather obvious. However, I have had long debates with some of the best ID critics, and we have even made a sort of "game" where they were invited to offer even one counter-example which could falsify my statement about functional complexity. Nobody could do it. You are invited, too. The simple truth is that, among all the objects in the universe whose origin is well known (either designed or not designed), you will never find a non designed one which exhibits more than 500 bits of digital functionally specified information (what I call dFSCI). The game is: show me one such object, and you will have falsified the theory, at least according to my presentation of it. You are welcome to try. For my definitions of dFSCI, and examples of how to measure it, you can look at my posts here: https://uncommondesc.wpengine.com/wp-admin/post.php?post=57553&action=edit https://uncommondesc.wpengine.com/wp-admin/post.php?post=59796&action=edit https://uncommondesc.wpengine.com/wp-admin/post.php?post=64884&action=edit https://uncommondesc.wpengine.com/wp-admin/post.php?post=76913&action=edit By the way, I see that you have commented on my point 1a, but not on my point 1b. That is important, too, for the discussion that was raised here. gpuccio
gpuccio @ 4 - (sorry for not replying earlier - I've been travelling)
The positive empirical argument for design inference is that functional complexity higher than some appropriate threshold is observed only in designed objects (human artifacts),
In practice this argument is an assertion, but I'm not aware of any rigorous testing of it. Has anyone gone round and looked at a variety of designed and non-designed objects, and shown that functional complexity is higher in designed objects? That would be the obvious first step to making this positive argument hold water. Bob O'H
Actually it is accurate that creationists generally only discredit evolution theory and don't do creation science. If you were doing actual, straightforward creation science, you would focus on the actual act of creation itself, and not on the result of it. But no creationist does that except me. The mechanism of creation is choosing, yet out of millions of creationists, zero have any interest in doing science about how things are chosen in the universe. mohammadnursyamsu
On the original post, not the comments: This is interesting. I will have to think it out a while. I do believe that I am one who has been suffering from this confusion. bFast
Bob:
Anyway, the “X, Y, Z indicate design” part of ID barely exists: it’s a bold assertion (“lots of intricate parts indicate design”) but isn’t explored in any detail.
If you have an issue with the boldness of the assertion or the lack of exploration, maybe you should take that up with Dawkins and his claim that living things have the appearance of design? Phinehas
Hey JB, thanks for the video link on Specified Complexity. Great stuff! I watched the whole thing and understood every bit of it. But I don't doubt ID-opponents will somehow manage to remain perpetually perplexed and confused by the concept. Phinehas
Here's an example of serious comments by gpuccio that may help to clarify some potential confusion about ID. BTW, I have 'official' permission to quote gpuccio's comments anywhere on this site. :) This was posted @28 in an interesting discussion thread last April:
gpuccio’s excellent comments posted in this thread (so far) are literally textbook material and could be a separate new OP in UD: [I have used some ‘artistic freedom’ to make minor adjustments to the lyrics so that they fit within the melody, without changing the meaning of the author’s message] 1. Epigenetics is a constant interaction between a static form of information (the nucleotide sequence stored in DNA, both protein coding and non coding) and its dynamic expression as transcriptomes and proteomes in each different cell state. In that sense, there is no condition in the cell’s life which is not at the same time genetic and epigenetic. For example, the zygote which originates multicellular beings has its own distinctive epigenetic state: the DNA is expressed in the zygote in different ways than it will be expressed in different states of the embryo, or in different specific tissue cells, both stem cells and differentiated cells. The epigenetic state of the zygote, in turn, is derived mainly from the cytoplasm of the oocyte, but also from epigenetic messages in the sperm cell. So, at each moment of the life of a cell, or even more of a multicellular being, the total information which is being expressed is a sum of genetic and epigenetic information. And, whatever you may think, any theory about the origin of biological information must explain how the total information content which is expressed during the life span of some biological being came into existence. 2. Does the actual “information” still rely on the DNA? Not all of it, certainly. The cytoplasm, as I said, bears information too. And so does the state in which DNA is when it is transmitted in cell division. There is never a moment where DNA is in some “absolute” state. It is always in some epigenetic state. And the cytoplasm, or the nucleus itself as a whole, have specific information content at each state. The sum total of proteins and RNAs expressed, for example. As “life always comes from life”, life is always a continuous dynamic expression of genetic and epigenetic information. When Venter builds his “artificial” genomes, copying and modifying natural genomes, he has to put them into a living cell. IOWs, he is introducing a modified genetic component into a specific existing epigenetic condition. Remember, life is a dynamic, far from equilibrium condition, not a static storage of information. 3. Haven’t evolutionists known this for decades? Not exactly. The huge complexity of epigenetic networks, and all the complex and parallel levels which contribute to them (DNA methylation, histone code, topologically associated domains and dynamic 3d DNA structures, the various influences of different regulatory RNAs, the incredibly combinatorial complexity of transcription factors, the role of master regulators in differentiation), are all topics which have been “discovered” recently enough, and all of them are still really poorly understood. Whatever controls and coordinates the whole system of epigenetic regulations, moreover, is still a true mystery, be it in DNA or elsewhere. 4. I would like to mention here that epigenetics has at least two rather different aspects. One is the way that biological beings can interact with the outer environment, and then pass some information derived from that environment to further generations, through persistent epigenetic adaptations. This is what we could call the “Lamarckian” aspect of epigenetics. It is an aspect which is now well proven and partly understood, and it is certainly interesting. 
But, IMO, the truly revolutionary aspect of epigenetics is the complex network of regulations that allow different expressions of the same genome under different biological conditions, especially cell differentiation. That aspect has practically nothing to do with environment, either outer or inner, if we understand environment as something which is independent of the biological being, and which can modify its responses according to unpredictable, random influences. Indeed, this second aspect of epigenetics is all about information, and the management of information. IOWs, it’s the biological being itself which in some way guides and controls its own development. Now, you seem to believe that any form of such control must necessarily originate from the genome, because we have thought for a long time that the genome was the only repository of transmissible information. But today we know that the simple sequence of nucleotides in the genome is not enough. I will try to be clearer. In Metazoa, we have hundreds, maybe thousands, of different genomic expressions from the same genome. In the same being. How is that possible? DNA is a rather static form of information, in a sense: it is just a sequence of nucleotides. That sequence can be of extreme importance, but in itself it has no power. For example, even a protein coding gene is of no use if it is not “used” by the complex transcription / translation machinery. So, let’s say that we have a zygote. Let’s call its genetic information G1. G1 is not the basic DNA sequence which is the genome, but the specific DNA in the zygote condition, with all the modifications which make it partly expressed and partly inhibited, in different ways and grades. So, it is not “the genome”, but “one of the possible forms of the genome”. At the same time, the zygote has an active epigenome, in the cytoplasm and the nucleus, in the form of proteins (especially transcription factors), RNAs, and so on. IOWs, we have a specific transcriptome and proteome of the zygote, which we can call E1. So, we have: Zygote = G1 + E1 Now, the important point is that even in the “stable” condition of that zygote (IOWs, before any further differentiation happens) the flow of information goes both ways: from G1 to E1, and vice versa. The existing epigenome can and does modify the state of the existing genome, and vice versa. IOWs: G1 <-> E1 Now, let's say that the zygote divides, and becomes two cells which are no longer a zygote. IOWs, we have a division with some differentiation. Now, in each of the two daughter cells (in the simpler case of a symmetric division) there is a new dynamic state: G2 + E2 Both the genomic state and the epigenomic state have changed, and that’s exactly what makes the daughter cell “different”: IOWs, cell differentiation. Now, the points I would like to stress are the following: 1) Any known and existing state of a living cell or being is always the sum of some G + some E. There is no example of any isolated G or E capable of generating a living being. 2) We really don’t know what guides the transition from any G1 + E1 state to the differentiated G2 + E2 state. We know much of what is involved in the transition, of what is necessary, and of how many events take place. But the really elusive question is: what kind of information initiates the specific transition, and chooses what kind of transition will happen, and regulates the process? Is it part of G1? Is it part of E1? Or, more likely, some specific combination of both? 
IOWs, I would suggest considering as biological information not only the sequence of nucleotides in the basic genome, but also all the complex forms that G and E take in specific and controlled sequences. At any state, the information present is always the sum total of a specific G and a specific E, and never simply the basic genome G.

Now, whatever you may think, or hope, the same evolutionary science that you invoke, and that has never been able to explain the origin of a single complex functional protein (but at least has tried), has really nothing to say about those epigenetic regulatory networks, for two very simple reasons:

a) For the greatest part, we have no idea of where the information is, and it's really difficult to explain what we don't know and understand.

b) The part that we know and understand (and it's now a rather huge part) is simply too complex and connected [interwoven?] to even try any traditional explanation in terms of RV + NS.

That is the simple situation. Science is a great and wonderful process, especially if it is humble, and tries to understand things instead of simply declaring that it can explain what it does not understand.

5. "Is it not utterly mysterious that an existing epigenome can cope with genomes modified by Venter?" Yes, it is. I am amazed each time I think of it. As it is amazing that the epigenome in the oocyte can cope with a differentiated nucleus in cloning experiments based on somatic cell nuclear transfer. The epigenome seems to be a very powerful entity, indeed.

"Do you agree that DNA is not a conceivable candidate for controlling and/or coordinating the epigenome?" The only thing that I can say is that something controls and guides the G+E entity (the whole biological being), and that at present we really don't know what it is and where the information that must be necessary for the process is written. We know too little. I usually sum it up with the old question: where and how are the procedures written?

6. I really think that the "master controller" of differentiation still eludes us. We know rather well a lot of epigenetic landscapes which correspond to differentiation procedures, and the role of many agents in those procedures. But still, it's the "control" which eludes our understanding. IOWs, what decides the specific landscape which will be implemented in a specific moment, and what controls the correct implementation, through the correct resources? And how are the different scenarios implemented? The role of DNA is certainly important, but we still have to understand a lot about how DNA performs such a role. At present, we must assume that the sum of genome and epigenome at each moment has the information to achieve the correct destiny of the cell, and the tools to read and implement that information into specific epigenetic pathways.

7. I agree with you, and I am perfectly aware of how much has been discovered. Indeed, if you read my post #22, I state: "We know rather well a lot of epigenetic landscapes which correspond to differentiation procedures, and the role of many agents in those procedures. But still, it's the 'control' which eludes our understanding. IOWs, what decides the specific landscape which will be implemented in a specific moment, and what controls the correct implementation, through the correct resources?" The problem, as I see it, is that we are acquiring a lot of details about the pathways which are activated in various forms of differentiation, but we still cannot understand the control of those choices.
In a piece of software you can have many different functions, or objects, and then you have higher-level procedures which use them according to some well-designed plan, which requires a lot of information. Both the information in the functions and objects and the information in the higher-level procedures are needed. My point is simply that in biological differentiation we still don't understand where the information about the higher-level procedures is, and how it works.

There are some interesting concepts which are being proposed. For example, I am very intrigued by suggestions about how decisions about stemness and differentiation are made in cell pools, and how stem cells could work as a partially stochastic system to implement decisions. However, I still find that we understand very little about the informational organization of cell differentiation, although I try every day to read new papers on that topic, hoping to find new hints.

8. Isn't the signaling the control? No. The control is deciding when and how and how much [and where?] a signal must be implemented. The gene for BMP4 is always there, in the genome. All signals are potentially there. All transcription factors, and everything which can be potentially transcribed. The problem is: each epigenetic landscape is characterized by multiple and complex choices of signals. How can the cell "decide" and know which sequence of signals will be implemented at each time? How is the transcription of the BMP4 gene, and its translation, correctly implemented at the right time?

What we know is essentially that some transcription factors or other molecules are necessary for some transition, and that they are expressed at the right moment, at the right place, and in the right quantity when that transition has to happen. But how is that achieved? That is a different question.

The genome is a book which can be read in hundreds of different ways. There is purpose and information in the control of the ways it is read at each moment. There are hundreds or thousands of different signals, and only the right mix of them can work. Moreover, there must be flexibility, error correction, response to environmental stimuli, and so on. [robustness?]

Do you really believe that, just because we understand how some signals are involved in some processes, we know how those processes are decided and controlled? Do you really believe that "the signaling is the control"? You, with all your understanding of informational problems? A signal is a control only when correctly used by a controller.
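To make the G + E notation above a little more concrete, here is a minimal toy sketch in Python (all names and values are hypothetical, and no biological claim is intended): a cell state is treated as a pair (G, E), and a transition only happens when something plays the role of a controller, choosing which of the always-available signals to apply and when, which is exactly the open question raised in point 8.

from dataclasses import dataclass

@dataclass(frozen=True)
class CellState:
    genome_state: str      # G: the genome in a particular regulatory state
    epigenome_state: str   # E: transcriptome, proteome, methylation marks, etc.

# The genome "contains" every possible signal (the BMP4 gene is always there);
# availability alone decides nothing.
AVAILABLE_SIGNALS = {"BMP4", "WNT", "NOTCH"}

def apply_signal(state, signal):
    # A chosen signal maps (G1, E1) to a new (G2, E2); both components change.
    return CellState(state.genome_state + "+" + signal,
                     state.epigenome_state + "+" + signal)

def controller(state):
    # Placeholder decision: what plays this role in the cell, and where the
    # information behind its choice is written, is the open question above.
    chosen = "BMP4"
    assert chosen in AVAILABLE_SIGNALS
    return chosen

zygote = CellState("G1", "E1")
daughter = apply_signal(zygote, controller(zygote))
print(daughter)   # CellState(genome_state='G1+BMP4', epigenome_state='E1+BMP4')

The sketch only restates the structure of the argument: the interesting information is not in the list of available signals, but in whatever fills the role of controller.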
https://uncommondesc.wpengine.com/intelligent-design/name-it-claim-it-epigenetics-now-just-another-evolutionary-mechanism/#comment-603942 Dionisio
Very interesting discussion. Just what the doctor ordered! :) Keep it up! Thank y'all! Dionisio
Bob O'H: Please, look at my post #4, point 1). gpuccio
johnnyb - if I was ignoring you, I wouldn't have replied. Anyway, the "X, Y, Z indicate design" part of ID barely exists: it's a bold assertion ("lots of intricate parts indicate design") but isn't explored in any detail. For that to have any rigour, you would have to show that X, Y, and Z do indeed indicate design. To the extent this is done, it's done by calculating numbers like CSI, and claiming that evolution can't create those numbers. I haven't seen any more positive case for ID - there is no exploration of design and the process, for example. Bob O'H
Bob - you are apparently just ignoring us. As I pointed out, the *evidence* for design has always been there. That is the X, Y, and Z above (which I summarized as unity of plan and teleological design). These are positive evidences. I don't see why this is so hard to understand. As an example, irreducible complexity is merely an experimental way of assessing holism in a system. johnnyb
I'll admit it - I was that unnamed commenter. But you seem to fix my confusion by agreeing with me. You write:
Thus, the original argument is:
X, Y, and Z indicate design
Darwin's argument is:
X, Y, and Z could also indicate natural selection
So, therefore, we simply show that Darwin is wrong in this assertion.
I think you would have an argument here if you (the ID community) went beyond this and provided positive scientific evidence for design. I'm not aware of that being done, I'm afraid. Bob O'H
johnny Good points.
In ID, if something *could* have been done by either, we leave it alone, even though in theory if X could have been done by either, it could have been designed, too, but we wouldn’t necessarily have a justifiable reason for preferring design to not design.
That's important. ID does not attempt to identify every possible instance of design in nature, or even to declare that any particular thing is "not designed"; it only claims that certain things show the scientific evidence of having been designed. Other things may have been designed also, but there is not enough evidence to indicate that in them. Silver Asiatic
mark - My goal was to summarize in a few words several chapters of information :)

The main idea of the Design Inference is that designed things have a much simpler specification than their base descriptions. For example, if I shot 1,000 arrows in a circle, it would be simpler to say, "1,000 arrows in a 10-foot-radius circle about point Y" than it would be to describe the individual positions of the arrows. The base positions are complex (they take a lot of words/numbers to describe), but the specification is simpler by comparison. This is a general feature of designed things.

On more complex things, the specification gets larger, but so does the abstract space to work in. In these cases, specifying a functional test to which something conforms does a similar job of reducing the size of the specification compared to the theoretical size of configuration space. For instance, saying "a building that won't fall down under the weight of X pounds" greatly reduces the number of possible configurations in configuration space that satisfy it (and the specification is thus much smaller than that space). Self-replication, for instance, is a requirement that greatly reduces the specificational size of an organism.

I have some videos on Specified Complexity coming out soon which should clear up some of these ideas. I have an early version here if you are interested. johnnyb
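As a rough, purely illustrative sketch of the description-length comparison in the arrows example (the numbers are hypothetical and this is not a formal CSI calculation), the following Python snippet compares the length of an explicit listing of 1,000 arrow positions with the length of the short specification they all conform to.

import random

random.seed(0)

# "Base description": list every arrow position explicitly.
arrows = [(round(random.uniform(-10, 10), 2), round(random.uniform(-10, 10), 2))
          for _ in range(1000)]
base_description = "; ".join("arrow at ({}, {})".format(x, y) for x, y in arrows)

# "Specification": a much shorter statement that the same arrangement conforms to.
specification = "1,000 arrows within a 10-foot radius of point Y"

print(len(base_description))   # tens of thousands of characters
print(len(specification))      # a few dozen characters

The only point of the sketch is that the specification stays short while the explicit description grows with the number of arrows; how to turn that gap into a rigorous quantity is what the CSI discussion is about.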
Silver - You raise some good points. However, I disagree that "Design would be a default position for everything". Design is not a default position for everything. It is true historically that some individuals argued this way (and still do). I also would agree that Darwin (and Lamarck) did make us think about the fact that there are aspects of biology that are historically driven. It's not that we didn't know this before, it just wasn't taken very seriously. But I would not say that Darwin is the arbiter of design or not design, because it has to match design first.

However, we should also distinguish a little between ontological and epistemological points of view. That is, there is a difference between what *is* design, and what we can justify *knowing* as design. ID focuses on what we can know to be design empirically, while many proponents of previous design theories started with a general assumption of design, and used specific justifications to justify the whole theory.

In ID, if something *could* have been done by either, we leave it alone, even though in theory if X could have been done by either, it could have been designed, too, but we wouldn't necessarily have a justifiable reason for preferring design to not design. Therefore, in ID, we focus on things in which our design inference is justifiable, rather than using justifications for individual features as a justification for broader statements about design. johnnyb
johnnyb
X, Y, and Z indicate design
Darwin's argument is:
X, Y, and Z could also indicate natural selection
I think it's more complicated than that. If it were merely "Whenever Darwin was wrong, we have Design", we would also have "Whenever Darwin is right, we don't have Design." So, Design would be a default position for everything, including micro-evolutionary adaptations, which were considered Design at one time. Once we see that mutations can cause adaptation, then we say "ok, those are not Design". So, Darwin becomes the arbiter of what is Design and what is not.
So, the *only* reason we are talking about probabilities is to answer an objection.
I disagree here. I think we're trying to define our terms and not merely rely on "it looks designed and therefore it is, unless someone proves otherwise". The positive case for ID is that there are thresholds beyond which no natural process can produce the effect in question. We establish parameters by which we can infer design. That's what Behe does with Edge of Evolution. We can test, experimentally, what evolution can do. It does some things, and not others. Finding that edge helps us define our scientific claim. This would be necessary even without Darwin - it's part of understanding the world. Silver Asiatic
KF: Of course, functional complexity of individual objects is only the first layer. You are absolutely right to insist on higher level organisation, which IMO includes the important concept of irreducible complexity. As you know, I usually stick to the first layer of complexity because I can more easily get some quantitative evaluation, working with protein sequences. :) gpuccio
GP functional complexity tied to specific organisation and/or coupling of components. KF kairosfocus
johnnyb: Very well said! :)

I would simply add that there is also a positive empirical basis for the design inference, which in some way clarifies and quantifies the "very plain arguments for design" in historical thought that you mention in your OP. The positive empirical argument for design inference is that functional complexity higher than some appropriate threshold is observed only in designed objects (human artifacts), and in no other object in the known universe, except for biological objects, whose origin is exactly the controversial issue. Moreover, the link between functional complexity and design is not only empirical, but also rational, because we can easily understand that the conscious processes of understanding and purpose are the central point of the design process, and can explain the high level of functional complexity in designed things, and only in designed things.

So, in the end we have:

1) A positive argument for the design inference, in two parts:

1a: Functional complexity is observed only in designed things, and can be used for design inference.

1b: There is a definite, rational connection between the conscious processes of understanding and purpose that guide the design process and the generation of functional complexity in objects.

2) A negative argument for the design inference: given some category of objects for which we can infer design by functional complexity, like most biological objects, if alternative non-design explanations are proposed, there remains the duty to falsify those alternative explanations: otherwise, the alternative explanation would falsify the design explanation.

The error of design critics is that they consider point 2) as the foundation for ID theory and for the design inference, while they happily ignore point 1), in both its components. That's why ID theory is a very positive theory, and not a "gap argument". On the contrary, ID criticism is simply a misguided, uninformed attempt at refutation of something that, evidently, the self-appointed critics don't understand. gpuccio
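As a minimal sketch of how a claim about "functional complexity higher than some appropriate threshold" can be expressed in numbers, the snippet below uses the familiar functional-information idea (bits = -log2 of the fraction of configurations that perform the function); the fraction and the threshold used here are purely hypothetical placeholders, not measured values.

import math

# Hypothetical fraction of all configurations that would perform the function.
functional_fraction = 1e-40

# Functional information in bits: -log2(fraction of functional configurations).
functional_bits = -math.log2(functional_fraction)

# An example threshold; choosing the appropriate value is part of the debate.
threshold_bits = 500

print(round(functional_bits, 1))         # about 132.9 bits for this hypothetical fraction
print(functional_bits > threshold_bits)  # False: below this particular threshold

The arithmetic itself is trivial; the contested questions are how to estimate the functional fraction for real biological objects and what threshold is appropriate.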
"but the main point is that the design must operate on principles simpler than their realization (which provides the reduced Kolmogorov complexity for the specificational complexity)" Maybe it's just me but this feels interesting but I don't quite understand what you are saying here. Could you explain this a little further? mark
JB, very well said, as usual. KF kairosfocus
Technology is defined as the result of the application of scientific knowledge for a purpose. The Darwinian cellular "protoplasm" of the 19th century we now know to be self-replicating, digital-information-based nanotechnology. That settled it for those who have avoided atheistic indoctrination that pretends to be science. harry
