At Sci-News: Moths Produce Ultrasonic Defensive Sounds to Fend Off Bat Predators

Scientists from Boise State University and elsewhere have tested 252 genera from most families of large-bodied moths. Their results show that ultrasound-producing moths are far more widespread than previously thought, adding three new sound-producing organs, eight new subfamilies and potentially thousands of species to the roster.

A molecular phylogeny of Lepidoptera indicating antipredator ultrasound production across the order. Image credit: Barber et al., doi: 10.1073/pnas.2117485119.

Bats pierce the shadows with ultrasonic pulses that enable them to construct an auditory map of their surroundings, which is bad news for moths, one of their favorite foods.

However, not all moths are defenseless prey. Some emit ultrasonic signals of their own that startle bats into breaking off pursuit.

Many moths that contain bitter toxins avoid capture altogether by producing distinct ultrasounds that alert bats to their foul taste. Others conceal themselves in a shroud of sonar-jamming static that makes them hard to find with bat echolocation.

While effective, these types of auditory defense mechanisms in moths are considered relatively rare, known only in tiger moths, hawk moths and a single species of geometrid moth.

“It’s not just tiger moths and hawk moths that are doing this,” said Dr. Akito Kawahara, a researcher at the Florida Museum of Natural History.

“There are tons of moths that create ultrasonic sounds, and we hardly know anything about them.”

In the same way that non-toxic butterflies mimic the colors and wing patterns of less savory species, moths that lack the benefit of built-in toxins can copy the pitch and timbre of genuinely unappetizing relatives.

These ultrasonic warning systems seem so useful for evading bats that they’ve evolved independently in moths on multiple separate occasions.

In each case, moths transformed a different part of their bodies into finely tuned organic instruments.

[I’ve put these quotes from the article in bold to highlight the juxtaposition of “evolved independently” and “finely tuned organic instruments.” Fine-tuning is, of course, often associated with intelligent design, rather than unguided natural processes.]

See the full article in Sci-News.

680 Replies to “At Sci-News: Moths Produce Ultrasonic Defensive Sounds to Fend Off Bat Predators”

  1. 1
    relatd says:

    Well, you see, when insects were fish, they gave out ultrasonic… uh. OK. The above makes no sense from an ‘it just happened through blind, unguided chance’ point of view.

    “There are tons of moths that create ultrasonic sounds, and we hardly know anything about them.”

    Who are these people? Teenagers? The study of moths only began a few weeks ago?

  2. 2
    Belfast says:

    “than previously thought” probably has a grammalogue in shorthand systems because it is such a common phrase, but why is it such a recurring line?
    There may be several reasons: a code word to dupe an editor into publishing; justifiable modest self-praise for the authors’ expansion of knowledge; or possibly a necessary outcome of a bankrupt Darwinian paradigm forecasting that there once were moths with no defence mechanisms.
    There should be a legitimate reason for this ubiquitous phrase.

  3. 3
    Seversky says:

    It’s fascinating research. So, what did ID predict about ultrasound-emitting moths?

  4. 4
    Lieutenant Commander Data says:

    ID predicts that everything has a function, whether we know about it or not (vestigial organs, “junk” DNA). That functionality can be detected even by an atheist mind, except atheists assign that function to random chance. 🙂
    Random chance vs. God compete in the atheist mind (which itself must have been produced by the same magical random chance). Nobody has observed or tested how matter produces life/code/complex functional systems, but it is declared “scientific” truth by atheists. “Random chance” means “we don’t know how.” 😆 Same thing with “random mutation” from Darwinism.

    Atheists are the ones who believe that “we don’t know how, but it certainly was no God” is a scientific answer. You are free to believe whatever you want, but don’t say it’s science.

  5. 5
    BobRyan says:

    If the warning evolved, why are there moths? They should all have been eaten, unless they always had it.

    Another score for design. That makes millions in favor of design and 0 in favor of Darwin. Design is witnessed everywhere. Darwinian evolution never is.

    When the evidence is overwhelming, logic dictates design.

  6. 6
    martin_r says:

    belfast @2

    “[more widespread] than previously thought”

    “… than previously thought” should become a Darwinian trademark …

    Basically all recent Darwinian papers start with “… than thought” …

    I asked this before, but why are Darwinists considered so trustworthy??? These guys seem to be always wrong …

    If Darwinists don’t like the “…than thought” slogan, they can choose one of these (all from Darwinian papers):

    “…current concepts are reviewed…”
    “…uprooting current thinking….”
    “…latest findings contradict the current dogma….”
    “… it challenges a long-held theory…”
    “… it upends a common view…”
    “… in contrast to the decades-long dogma …”
    “… it needs a rethink … ”
    “… the findings are surprising and unexpected …. ”
    “… it shakes up the dogma … ”
    “… earlier than thought…”
    “… younger than thought….”
    “… smarter than thought ….”
    “… more complex than thought ….”

  7. 7
    Seversky says:

    Why would the Designer create bats with sonar to find moths to eat and then moths with ultrasound “jammers” to defeat bats’ sonar? Does he get some perverse pleasure watching the duel between the two species? Is he betting quatloos on who will win in each encounter?

  8. 8
    jerry says:

    Does he get some perverse pleasure watching the duel between the two species

    Not two but thousands.

    How would you design an ecology?

    Besides the universe and Earth, ecologies are one of the wonders of design, as thousands of offsetting characteristics balance each other to provide stability. Quite a trick!!!

  9. 9
    relatd says:

    Seversky at 7,

    Ah, I see you are using the scholarly Star Trek Argument.

    It’s been falsified.

  10. 10
    Seversky says:

    Well, it sounds like moths have been fitted with “shields” which they can raise whenever they pick up a bat coming in to attack.

  11. 11
    relatd says:

    Seversky at 10,

    Still watching the original Star Trek? Me too.

  12. 12
    chuckdarwin says:

    Seversky/7
    Someone needs to let God know that online gambling is illegal in most states……

  13. 13
    relatd says:

    CD at 12,

    Then you should contact the proper authorities and let them know. Something tells me that God does not run online gambling.

  14. 14
    Caspian says:

    Seversky @ 7:
    “Why would the Designer create bats with sonar to find moths to eat and then moths with ultrasound “jammers” to defeat bats sonar? Does he get some perverse pleasure watching the duel between the two species?”

    You raise a legitimate question, but it’s a theological question (not a scientific question), and as such, it would have a theological answer. It sounds like you expect earth to be like heaven, if the God of the Bible is real. I think the Bible sufficiently answers why that is presently not the case. Of course, there’s far more to the story. Would you like to discuss it further?

  15. 15
    doubter says:

    Seversky@7

    As Caspian points out, your question is really an unscientific one that is looking for moral or spiritual or theological, not scientific, answers. Not being a theological movement or system, ID doesn’t look for theological answers, just for more scientific evidence, to add to the boatload already accumulated, that there somehow was a designer or designers. A scientific and teleological quest.

    Here’s a question for you. What did Darwinism predict? Well, maybe the gratuitous assumption that it simply must have been RM&NS that produced the observed distribution of several lines that developed ultrasonic deception/masking. This assumption is not elaborated by even vague “just so” stories, much less any detailed tracing of the supposed long process minute step by minute step, or an explanation of exactly how such a process could have built up probably irreducibly complex systems, especially in the time allowed by the fossil record (that’s the good old “waiting time” problem). The old saying “the Devil is in the details” applies here – just so stories are no good without the nitty gritty details. Of course there is not even a shred of evidence, and of course no mathematical analysis. Does this sound like real science?

  16. 16
    chuckdarwin says:

    Sev
    “You raise a legitimate question.” I thought it was a rhetorical question. Silly me….

  17. 17
    jerry says:

    Somewhat off topic, but a new database is being developed of nearly every known protein in existence. Using this database of 200 million proteins, it should be possible to identify the proteins responsible for interfering with bat ultrasound location.

    Why do some moths have them while others do not?

    ‘The entire protein universe’: AI predicts shape of nearly every known protein

    https://www.nature.com/articles/d41586-022-02083-2

    Aside: could this be the answer to nearly every question in biology relevant to species differences?

  18. 18
    Alan Fox says:

    Aside: could this be the answer to nearly every question in biology relevant to species differences?

    It’s a step in that direction. What appears to be happening here is AI modelling algorithms predicting (apparently accurately) the three-dimensional structure of proteins from their amino-acid sequences. Whilst that is pretty mind-blowing, it is far from being able to construct functional proteins by choosing sequences. I can conceive of that process happening but I doubt it is going to happen soon. It will remain impossible to predict the functional properties of a novel protein sequence in advance for the foreseeable future, I predict.

    Though ID proponents ought to have a go. The tools exist. Write your sequence. Predict its functional capabilities. Synthesize and confirm. ID becomes science!

  19. 19
    jerry says:

    could this be the answer to nearly every question in biology relevant to species differences?

    Maybe this should have its own OP?

    Since this is off topic, there will be more opportunities to discuss it. It raises a lot of questions, though.

    How did 200 million proteins arise when just one appearing is problematic? Why do some moths have the necessary proteins while others don’t? How did some moth species arise with the right proteins while others didn’t?

    Would it destroy the concept of common descent or support it?

    ID becomes science!

    You fail to understand what ID is.

    ID takes whatever science is being conducted and on certain occasions adds a new logical layer of analysis to the process. In other words, it enhances the scientific process by making it more logically rigorous when appropriate. For most of science this additional layer is not necessary, but for a few instances it is.

    Again, off topic but maybe on an appropriate thread.

  20. 20
    ET says:

    Alan Fox:

    Though ID proponents ought to have a go. The tools exist. Write your sequence. Predict its functional capabilities. Synthesize and confirm. ID becomes science!

    That doesn’t have anything to do with ID. And ID has already become science because, unlike evolution by blind and mindless processes, ID is supported by the evidence and can be tested.

  21. 21
    ET says:

    seversky needs to learn how to read. Not all moths have this ability to jam the bats’ echolocation.

  22. 22
    Alan Fox says:

    ID has already become science because, unlike evolution by blind and mindless processes, ID is supported by the evidence and can be tested.

    Why has nobody produced this evidence? Why is nobody testing whatever it is they can be testing?

    What is the scientific, testable theory of Intelligent Design?

  23. 23
    jerry says:

    What is the scientific, testable theory of Intelligent Design?

    ID is not a theory.

    I will get lots of pushback on this here. But it has no domain such as plate tectonics, oceanography, aerodynamics or even biology. So it is not a theory, but a set of conclusions about some isolated phenomena in the physical world often in unrelated areas.

    ID uses some analytic techniques, mostly to do with statistics, that classify certain conclusions as either likely or unlikely. It is also historical in nature, applied to things that happened millions/billions of years ago. So there are no experiments to test its viability. People who ask for them are being disingenuous.

    Given that, there are definitely predictions it can make, but on historical information and living remnants of these past events. See

    https://uncommondescent.com/intelligent-design/do-nylon-eating-bacteria-show-that-new-functional-information-is-easy-to-evolve/#comment-631468

    That’s why I said that the new database mentioned above would forever settle the debate over Evolution.

    Most of the criticisms of ID are bogus. It is not domain oriented, it is not present oriented, it cannot be proved or disproved with experiments etc. It is essentially logic usually in the form of statistics applied to historical data or the current remnants of past events.

  24. 24
    Alan Fox says:

    Thanks for that, Jerry. My issue with ID proponents has always been the claim it was scientific. I have no issue with ID as philosophy or logic.

  25. 25
    jerry says:

    My issue with ID proponents has always been the claim it was scientific

    ID applies logic to certain scientific findings.

    In that way it’s science. I refer to it as science +.

  26. 26
    relatd says:

    ID is science. Irreducible complexity. The discovery of greater and greater complexity. Evolution consists of a bunch of stories and blind, unguided chance. That’s not science, it’s storytelling.

  27. 27
    Alan Fox says:

    ID applies logic to certain scientific findings.

    That’s philosophy of science: not science.

  28. 28
    Alan Fox says:

    ID is science. Irreducible complexity. The discovery of greater and greater complexity.

    Those are three unconnected assertions.

  29. 29
    jerry says:

    That’s philosophy of science: not science

    No!

    Nearly every science project I have been involved with is in four parts: Background, which usually contains the proposition; Methods, which contains the procedures for the collection of data/facts; Results, which include the actual data collected as well as an analysis of the data points; and Conclusions, which include the implications of the analysis.

    ID adds some statistical techniques not usually included in the results section of most studies and then makes conclusions based on the logical analysis of the data using the statistical techniques chosen.

    That’s not philosophy of science. One may argue that philosophy of science led to the types of analysis done, but once chosen, it becomes a straightforward scientific analysis.

  30. 30
    Alan Fox says:

    ID adds some statistical techniques not usually included in the results section of most studies and then makes conclusions based on the logical analysis of the data using the statistical techniques chosen.

    OK. Have you an example of this ID statistical inclusion?

  31. 31
    relatd says:

    AF at 28,

    That is quite wrong. Molecular switches control cellular activity and they are not just on and off. Some are volume limited. Example: A cell needs a precise amount of some chemical/liquid. The switch stays in the on position until it receives a signal to shut off. There is some evidence that malfunctioning switches can lead to disease. Evolution has no explanation for this or the limiter function or feedback required. But that just describes one type of switch. There are many more.

    The probability that this can happen by chance is nil.

  32. 32
    Alan Fox says:

    Relatd

    ID has no explanations. It starts and ends with “Evolution has no explanation for this…” It is not science. Let’s see what Jerry has to add.

  33. 33
    kairosfocus says:

    AF, barefaced denial again, you have been around for long enough to know better. You are familiar with intelligently directed configuration and its characteristic signs. Reliable signs. Based on that, the inference to design is a best explanation of what exhibits say complex coded, algorithmic alphanumerical text, such as in a Hello World or in D/RNA in the cell. KF

  34. 34
    relatd says:

    AF at 32,

    Not so. Intelligent Design shows that, based on probabilities, the odds of living things developing through evolution are beyond reasonable possibility. Ignoring that means ignoring the evidence.

    Imagine yourself aiming a driverless car down a road. How long before it crashes or careens into a river or ravine? Evolution, so-called, would have to proceed flawlessly down the road. But we’re told it is not goal oriented.

    Evolution has no credible explanation for living cells much less the human body coming into being.

  35. 35
    Alan Fox says:

    Evolution has no credible explanation for living cells much less the human body coming into being.

    As I said, the beginning and end of a typical ID argument. ID adds nothing to scientific understanding. (Sorry, Jerry)

  36. 36
    relatd says:

    AF at 35,

    Evolution adds nothing to scientific understanding. Only present-day experiments on living things can discover things like function, not stories based on speculation. Which are just stories, not facts.

  37. 37
    JVL says:

    Relatd: Only present-day experiments on living things can discover things like function, not stories based on speculation. Which are just stories, not facts.

    What kind of present day experiments can support intelligent design? NOT, what kind of experiments can disprove unguided evolution; what kind of experiments can be done which support intelligent design?

  38. 38
    relatd says:

    Why Do We Invoke Darwin?

    https://www.discovery.org/a/2816/

  39. 39
    Lieutenant Commander Data says:

    @Relatd
    Use your energy for more useful activities . 😉

  40. 40
    relatd says:

    LCD at 39,

    I am.

  41. 41
    Alan Fox says:

    @ Relatd
    That 2005 article from the late Phil Skell reinforces my point.

  42. 42
    Alan Fox says:

    Only present-day experiments on living things can discover things like function, not stories based on speculation.

    As JVL points out, this leads to the question, what experiments support ID?

  45. 45
    kairosfocus says:

    AF, you full well know that intelligently directed configuration routinely produces FSCO/I beyond 750 +/- bits, and you yourself are an example. You know full well that blind chance and/or mechanical necessity has never been demonstrated to do the same. You know the needle in haystack search challenge. You know that Venter et al are already doing engineering work with cells. You know that the cell contains complex, coded, algorithmic information in D/RNA, associated execution machinery, so too uses language and goal directed processes. You know what coding requires: coders. You therefore know what is the reliable source of such FSCO/I, but it does not suit your rhetorical agenda to acknowledge it. We do not have any obligation to allow your groundless selective hyperskepticism and barefaced denialism to control what we know and can readily infer per reliable sign. That has been a well founded inferential procedure of record since Hippocrates of Kos. KF

  46. 46
    kairosfocus says:

    750 +/- 250 bits.

  47. 47
    Paxx says:

    Blind-watchmaker Darwinism is only a good explanation for outer-branch-level diversification on the cladistic tree.

    Other than that, it’s as useless as tits on a boar hog.

  48. 48
    kairosfocus says:

    PS, the commentary at RW simply shows barefaced denialism and hyperskepticism, sort of like denying stagflation and recession by playing word games. Intelligently directed configuration as a cause is real, it is the only observed cause of functionally specific organisation and/or associated information, and the blind needle in haystack search challenge in configuration spaces for 500 – 1,000+ bits shows why. If they or you had a good counter example where blind chance and/or mechanical necessity were actually observed causes of FSCO/I it would be trumpeted. If you have one kindly give it______ . Even your objections above are cases in point of FSCO/I by design. The design inference on FSCO/I is a causal inference on reliable sign.

  49. 49
    ET says:

    Alan Fox:

    ID adds nothing to scientific understanding.

    You don’t understand science. You definitely cannot say how evolution by means of blind and mindless processes adds to scientific understanding. When it comes to science you are a liar, bluffer and cowardly equivocator.

  50. 50
    jerry says:

    Have you an example of this ID statistical inclusion

    Yes.

    Behe’s Edge of Evolution includes extensive discussion of what is feasible and what is not feasible probability wise in terms of mutations arising and leading to lasting change in genomes. Now this is mainly about genetics not Evolution.

    However, the small chance anything will happen genetically to permanently change the genome (it does happen rarely) means the chances of the species being changed are incredibly low. The Grants argued it would take 32 million years to get a new finch species. Not very promising for significant Evolution of anything. And that does not include any new gene sequences for new proteins being developed.

    Similarly, Doug Axe does the same for proteins. The database referred to above contains 200 million proteins. This represents an extremely small subset of amino acid combinations. How were the coding sequences for these proteins selected?

    Certainly not randomly, and then there is the issue of how the various coding sequences were thrown together by chance. We are talking about the low probability that two gene sequences, each incredibly unlikely to begin with, would ever encounter each other, let alone combine in the same organism.

    The document you referenced said phylogenetic analyses establish their (protein) evolution over time. Where’s the evidence for the origin of these incredibly unlikely gene sequences in the 200-million-protein database? How did they happen?

    This document you referenced also says ID is young Earth, which you obviously know is incorrect. It also implies that Hox genes are responsible for the body plans of species. What evidence is there for that?

    I have not read Behe’s more recent book, “Darwin Devolves,” but after reading the reviews maybe I will, especially focusing on the evidence of improbability of random variations creating anything.

    Also, ID is more about the fine tuning of the universe and OOL than about Evolution.

  51. 51
    ET says:

    JVL:

    What kind of present day experiments can support intelligent design? NOT, what kind of experiments can disprove unguided evolution; what kind of experiments can be done which support intelligent design?

    Still clueless. Again, for the learning impaired: ALL design inferences must first eliminate chance and necessity, ie nature, operating freely. That is mandated by Newton, Occam and parsimony. Only the scientifically illiterate can’t grasp that. Enter JVL and all evos.

    That said, any experiment which elucidates IC, CSI or SC supports ID. As Dr. Behe said:

    “Our ability to be confident of the design of the cilium or intracellular transport rests on the same principles to be confident of the design of anything: the ordering of separate components to achieve an identifiable function that depends sharply on the components.”

  52. 52
    ET says:

    Alan Fox:

    ID has no explanations. It starts and ends with “Evolution has no explanation for this…”

    Pure stupidity or worse- willful ignorance. ID is not anti-evolution. So, Alan lies when he claims “It starts and ends with “Evolution has no explanation for this””- You are pathetic, Alan.

    The design inference is based on our KNOWLEDGE of cause-and-effect relationships. It has the same explanatory power as archaeology and forensic science. Determining something was the result of intelligent design tells us quite a bit. For one it tells us blind and mindless processes didn’t do it. Next it is a clue of purpose. That an intelligent agency was there and did something. It directs our investigation of the phenomena.

    Obviously, Alan has never conducted an investigation in his life.

  53. 53
    ET says:

    For Alan Fox to choke on:

    1. High information content (or specified complexity) and irreducible complexity constitute strong indicators or hallmarks of (past) intelligent design.
    2. Biological systems have a high information content (or specified complexity) and utilize subsystems that manifest irreducible complexity.
    3. Naturalistic mechanisms or undirected causes do not suffice to explain the origin of information (specified complexity) or irreducible complexity.
    4. Therefore, intelligent design constitutes the best explanation for the origin of information and irreducible complexity in biological systems

    All bacterial flagella have been shown to be irreducibly complex.
    There isn’t any evidence that nature produced any of them.
    There isn’t even any way to test the claim that nature produced any of them.
    Science says we can dismiss that claim.

    It must suck to be Alan and JVL. Science has given them all of the power to refute ID and yet all they can do is lie, bluff, misrepresent and equivocate! ID exists because of their failure to support their own position’s asinine claims.

  54. 54
    ET says:

    More evidence for ID:

    The genetic code involves a coded information processing system in which mRNA codons REPRESENT amino acids. There isn’t any evidence that nature can produce coded information processing systems. There isn’t even any way to test the claim that nature can do it. Again, science says that we can dismiss such claims.

    However, there is ONE and ONLY one known cause for producing coded information processing systems and that is via intelligent agency volition. So, using our KNOWLEDGE of cause-and-effect relationships, in accordance with Newton’s 4 rules of scientific reasoning, we infer the genetic code is intelligently designed. Science 101.
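
    To make that codon-to-amino-acid mapping concrete, here is a minimal sketch in Python (only a hand-picked fragment of the standard genetic code table is shown, and the function is purely illustrative):

        # A minimal sketch of the mRNA codon -> amino acid mapping.
        # Only a fragment of the standard genetic code is included.
        CODON_TABLE = {
            "AUG": "Met",  # methionine; also the usual start signal
            "UUU": "Phe", "UUC": "Phe",  # phenylalanine
            "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",  # glycine
            "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",  # stop codons
        }

        def translate(mrna: str) -> list[str]:
            """Read an mRNA string three bases at a time until a stop codon."""
            peptide = []
            for i in range(0, len(mrna) - 2, 3):
                aa = CODON_TABLE.get(mrna[i:i + 3], "???")  # "???" = not in this fragment
                if aa == "STOP":
                    break
                peptide.append(aa)
            return peptide

        print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']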

    Let Alan and JVL flail away and demonstrate that they don’t understand science.

  55. 55
    kairosfocus says:

    ET, worse: D/RNA is a string data structure, code-bearing technology. Code which in key part has AA chain assembly instructions towards proteins, as algorithms with start, extend [ways 1 to 20], stop. Language, goal directed process, processing logic and tech, deep knowledge of polymer science, and more, especially when we reckon with cosmological fine tuning that sets all of that up. None of this is new, none of it would be controversial absent ideological impositions. We can no longer allow ourselves to be diffident in the face of willful ignorance — at best. KF

  56. 56
    Alan Fox says:

    AF, you full well know that intelligently directed configuration routinely produces FSCO/I beyond 750 +/- bits, and you yourself are an example.

    I know no such thing, KF. Though I do know your statement is incoherent. You have no way of calculating the “quantity” you refer to as FSCO/I (which is unique to you – nobody else takes it or you seriously); there’s no consensus among ID proponents as to what information is quantitatively, let alone any way to calculate it for any object or system. You’re fond of making challenges, especially ones for which you are unable to supply any answer. Here’s my challenge. Calculate the FSCO/I of something, anything, and show your work.

  57. 57
    jerry says:

    Here’s a comment I made 13 years ago

    What is evolving is an understanding of what ID always was.

    As I have said before there is no experiment that ID would not do that materialist science does. It would actually do a lot more so ID expands the horizon of science.

    I have not met one anti ID person here who engages ID on substance. ID is science based and is willing to consider any empirical data presented to it. But that is not what happens here. Instead we get comments lamenting how backward or stupid we are. How boring. I do wish the anti ID people would learn some science.

    That is so obviously true. So how can ID be a science stopper or deny anything that so-called true science has discovered? It doesn’t.

    The best example of this is what has been proposed to solve the Evolution issue once and for all. See #23 above which is conveniently avoided by any anti ID person. You would think they would be all over it to show just how their ideas have played out over history. But no.

  58. 58
    kairosfocus says:

    AF, oh yes you do know, starting with the FSCO/I in your objection. You are in ideological denial and that is the root of the incoherence you project to me. Later. KF

  59. 59
    kairosfocus says:

    Jerry, ID is interdisciplinary, as is say environmental science, however it has key themes and a frame that are observation based and make solidly empirically warranted inferences, arguments and conclusions. Later, still in transition though back at home after putting four in the ground. The fifth was grounded some time ago. KF

  60. 60
    Alan Fox says:

    Jerry:

    Behe’s Edge of Evolution includes extensive discussion of what is feasible and what is not feasible probability wise in terms of mutations arising and leading to lasting change in genomes. Now this is mainly about genetics not Evolution.

    However, the small chance anything will happen genetically to permanently change the genome (it does happen rarely) means the chances of the species being changed are incredibly low. The Grants argued it would take 32 million years to get a new finch species. Not very promising for significant Evolution of anything. And that does not include any new gene sequences for new proteins being developed.

    There are three basic processes occurring under evolutionary theory: adaptation (change within a population induced by mutations and selection in the niche environment), speciation (separation from one population, often but not always geographical, where evolutionary change continues in two separate populations in two separate niches), extinction (where a population dies out often due to over-rapid niche change or niche destruction). That said, I’m not well-versed in the details of Peter and Rosemary Grant’s long-running studies on Galapagos finches. I wonder if you are. Where did you get your 32 million years from? Looking at papers on Galapagos ground finches, I see one abstract mentions:

    All 14 species of Darwin’s finches are closely related, having been derived from a common ancestor 2 million to 3 million years ago.

    here.

  61. 61
    Alan Fox says:

    Jerry:

    Similarly, Doug Axe does the same for proteins. The database referred to above contains 200 million proteins. This represents an extremely small subset of amino acid combinations. How were the coding sequences for these proteins selected?

    Now I am quite familiar with Axe and the criticisms of his protein-folding approach. It is considered in the mainstream to be, at the politest, flawed.

  62. 62
    jerry says:

    Where did you get your 32 million years from

    From Rosemary Grant’s mouth with Peter at her side.

    The Grants were invited to give a presentation at Stanford on the 200th anniversary of Darwin’s birth. They presented on their work with finches. As part of this presentation, discussion of just what was a species took place. During this, this statement was made.

    considered in the mainstream to be, at the politest, flawed

    To be polite, how is it flawed?

    Do you have any examples of gene sequences arising that produce proteins? Given that there are about 200 million, one would think a few examples would be available.

  63. 63
    Alan Fox says:

    Jerry:

    Where’s the evidence for the origin of these incredibly unlikely gene sequences in the 200-million-protein database? How did they happen?

    Years ago, I don’t know if you remember Telic Thoughts and Mike Gene, I was having a discussion there about the probability of protein sequences. The mistake that is so often made here and elsewhere (Axe makes it too) is to conflate amino-acid sequences in proteins with functionality in proteins. The theoretical number of proteins of any particular number of aa’s is the number of amino acids found in proteins (20 for most species) raised to the power of the number of aa’s in the protein sequence. The number rapidly becomes enormous as sequence length increases. The ID argument often is how rare any particular sequence is. But it assumes only the sequence in question is functional, one needle in a haystack, when in fact we’ve no idea how much functionality lurks in unsynthesized sequences.
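
    As a rough sketch of the scale involved (plain Python; the lengths are illustrative, and 200 million is the approximate size of the database mentioned earlier in the thread):

        import math

        # Back-of-envelope: protein sequence space vs. known proteins.
        ALPHABET = 20        # amino acids commonly found in proteins
        KNOWN = 200_000_000  # approximate size of the database discussed above

        for length in (50, 100, 300):
            digits = length * math.log10(ALPHABET)  # log10 of 20**length
            print(f"length {length}: 20^{length} ~ 10^{digits:.0f} possible sequences")

        print(f"known proteins: ~10^{math.log10(KNOWN):.1f}")  # ~10^8.3

    Even at modest lengths the space dwarfs everything ever sequenced, which is why nobody can say how much function lurks in the unsampled remainder.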

  64. 64
    Alan Fox says:

    Do you have any examples of gene sequences arising that produce proteins?

    The genetic code has no unassigned codons: any DNA sequence will translate into a protein sequence. As there are three stop codons out of 64, random sequences will, on average, produce runs of 20-odd aa’s before a stop.
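
    A quick simulation makes that arithmetic checkable (illustrative Python; with 3 stops among 64 codons, the expected run before a stop is 61/3 ≈ 20 codons):

        import random

        # Average open-reading-frame length in random sequence: 3 of the
        # 64 codons are stops, so a random frame runs about 61/3 = 20.3
        # sense codons before hitting a stop, on average.
        random.seed(1)
        STOPS = {"TAA", "TAG", "TGA"}
        BASES = "ACGT"

        def random_orf_length() -> int:
            """Count sense codons drawn until the first stop codon appears."""
            n = 0
            while True:
                codon = "".join(random.choice(BASES) for _ in range(3))
                if codon in STOPS:
                    return n
                n += 1

        trials = 100_000
        mean = sum(random_orf_length() for _ in range(trials)) / trials
        print(f"mean run over {trials} trials: {mean:.1f} codons")  # ~20.3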

  65. 65
    Alan Fox says:

    Jerry:

    To be polite, how is it [Axe’s work on protein folding] flawed?

    Arthur Hunt at Panda’s Thumb

    Mikkel Rasmussen at The Skeptical Zone

  66. 66
    Alan Fox says:

    Jerry:

    I have not read Behe’s more recent book, “Darwin Devolves,” but after reading the reviews maybe I will, especially focusing on the evidence of improbability of random variations creating anything.

    Josh Swamidass at Panda’s Thumb

  67. 67
    Alan Fox says:

    Jerry:

    The Grants were invited to give a presentation at Stanford on the 200th anniversary of Darwin’s birth. They presented on their work with finches. As part of this presentation, discussion of just what was a species took place. During this, this statement was made.

    I’ve tried searching for the quote and am unable to find it. Do you have a link? It seems at complete odds with everything else that turns up when I Google. For instance:

    https://idw-online.de/en/news685650

  68. 68
    jerry says:

    All this material published by Alan Fox confirms the general ID hypothesis.

    It seems more nits than substance. For example, I asked for the origin of proteins and got back that any sequence will produce proteins. An admission that there are no examples.

    The interesting thing is the complete absence of a defense of any naturalistic mechanism for Evolution. Nothing has changed over the years, as the long comment below, made 13 years ago, points out.

  69. 69
    jerry says:

    A long comment from 13 years ago. Read if you want.

    There are two choices for any phenomenon, both of them rather broad. One is that certain things happened naturally, the mechanism to be discovered. The second is that these things were produced through intelligent input. And by the way, a lot of what may be considered natural could be the result of a designed process allowed to proceed naturally. For some simple examples, pearl farmers seed their shellfish with an irritant and then let nature do the rest, and beavers dam the course of a river and the ensuing wetlands provide an enhanced habitat for the beavers and other animals and plants.

    But in general it is mainly one or the other but what appears to be natural could also be great design. There are no other choices unless you want to proffer some. As I said these are rather broad categories. It is almost impossible to eliminate the intelligent input option. It is not a theory such as gravity, the Standard Model, the Laws of Thermodynamics, Kinetic theory of Gases, Information theory or Plate Tectonics etc yet people keep on asking for some hypotheses and predictions. ID is simply that intelligence is an input at some time in the history of being, the universe, the world, life etc. Some hypothesize that it was in the design of the universe itself and the initial conditions and subsequent boundary conditions of the Big Bang were such fantastic design that it enables natural processes to produce everything we see including this very rare planet, the origin of life and the evolutionary progression through subsequent natural consequences. Some hypothesize that the input was ongoing and there were various events that reflect an intelligent input. This input could have been minimal and then natural processes were allowed to do the rest. To disprove an intelligent input, one has to show natural processes at every turn. It is a difficult job. All ID has to do is show that naturalistic processes fail at some point and that an intelligent input is more reasonable. They only need one point.

    That is the nature of the discussion. It seems unfair to some who whine that ID is unfalsifiable. But that is it. Because ID is more of a logic process and not a specific scientific theory it does not have the usual domain of interest such as plate tectonics, cosmology or even evolution. After all an intelligence could create life or modify a genome to guide life maybe only once and that is not the making of some theory. To create life or modify it is not too hard to understand as it appears to be within human capability in the near future.

    Thus, the possibility of an intelligence creating and modifying life is not an issue. It is whether it ever happened or not that is at issue. If we had a video camera at the time of an intelligent input, we could settle it once and for all, but no such record exists, and we have had people here and at other places demanding such evidence. Short of this, something else has to be done.

    We have observed a lot of phenomena throughout history that could possibly be explained by an intelligent input, and the challenge for science is to verify if there may be a natural cause for each. For most of history it was thought that God was personally responsible for most, much, or a lot of these phenomena. From Zeus throwing lightning bolts in anger and the various gods determining the fates of various personalities such as Odysseus, to Newton’s hypothesis that God sent comets to stabilize the orbits of the planets. Newton’s laws and then Laplace’s theory of the heavens seemed to show that all was under control of natural laws. So it was assumed from then on by many that everything must be under control of natural laws. We have no need for Zeus and lightning bolts and for comets stabilizing orbits.

    And we get the conventional wisdom that everything is due to natural laws and chance, and that it is only a matter of time before science gets around to explaining it. And science has a good track record. But what is glaringly obvious is that science has some spectacular failures in one particular area. So while science continues to chalk up win after win, there seems to be one opponent which gets the better of it every time. Consequently, one has to reevaluate the conventional wisdom and maybe consider an alternative to natural processes. ID only exists because science loses most of the time to the heavyweights in this one area, namely life. It does wonderfully well in some important areas of life, specifically medicine, food production and genetics, but it is badly outperformed by the problems in the areas of macro evolution and origin of life. Why this failure here? Is there an alternative to naturalistic processes in these two domains? Is intelligence an explanation?

    Hence, every time science fails in these areas it adds credence to the alternative. At this moment in the realm of logic and reason both alternatives exist. Which is more feasible? Every time we see the failure of one alternative it raises the possibility of the other. After all it is possible. We just cannot identify the intelligence. So each failure for a natural pathway raises the probability of the alternative, namely an intelligent input.

    And the rationale for an intelligent input has been bolstered by the knowledge that what underlies life is different from every other area of nature, specifically information. Information is not present in any other area of nature except life.

    Now this game of supporting the ID premise is played two ways, and both use the tools of science, logic and reason. One shows that time after time certain naturalistic processes have failed. The second way is to show why naturalistic processes have failed. Both use science and point to the inadequacy of natural processes. There is a third way, which one group says must be present before an intelligent input can be accepted, and that is evidence for the specific event where there was an input of intelligence.

    The first way above is to challenge each natural explanation for the phenomenon as flawed and show why the explanation could not have possibly happened. This is the frequent challenge to Darwinian macro evolution we have seen, not only by the ID people but also by the anti-ID people as well as the creationists. It is represented here on this site and in the academic and popular literature by the lack of any coherent demonstration that Darwinian macro evolution ever took place. Now macro evolution did take place and no one is denying that here, but there is no evidence for it happening by Darwinian processes or any other known natural processes. All the processes of science are brought to bear in this examination, so to declare it non-scientific is ludicrous.

    The second way is to use observations of the world and then to complement these observations with some form of analysis, mainly probability, and some understanding of natural processes to illustrate why the failure of naturalistic processes is not only reasonable but to be expected. To this end a couple of different approaches are in their infancy but have showed some reasonable results. One is being developed by Behe and is showing that there does not exist the probabilistic resources to create the changes needed in macro evolution. Behe’s two books, Darwin’s Black Box and Edge of Evolution, are aimed at this objective. Namely, that life is extremely complicated and naturalistic processes seem unable to climb the hurdles necessary to produce macro evolution.

    Another is being done by Dembski and others trying to show something similar using mathematical and probabilistic approaches to show that reaching the complexity necessary for life is beyond the probabilistic resources of the universe. So in lots of way the two approaches are similar but using different methodologies to attack the same problem.

    To argue that this is not science is also ludicrous. One may argue that the techniques used by these scientists are flawed or that the interpretation of the results is invalid, but to say that they are not using science is absurd.

    Now the naturalists respond with their challenges. The best challenge would always be to show that the phenomena probably arose by naturalistic means but this is rarely done because there seems to be little evidence supporting any particular mechanism. The main challenge is to use something similar to what I described above as the first approach, namely that the intelligent input scenario is flawed just as ID people point out that each naturalistic input is flawed. The creator could not be omniscient, or no one would design such an imperfect system or make these childish mistakes etc. They also point to science’s track record in other areas and that the work on the problem is just getting started etc.

    So we have two broad approaches and any evidence in one camp reduces the likelihood of the other. It is one that won’t be solved any time soon but to assume your side is right a priori is ridiculous. ID is the more reasonable side as far as I can see. They are willing to accept naturalistic explanations when it is demonstrated but are not willing to accept an arbitrary demand of absolute dismissiveness for intelligent inputs that is imposed by the naturalists. One side is flexible and reasonable while the other side is intransigent and unmoving.

  70. 70
    bornagain77 says:

    In Critiquing Dembski, Jason Rosenhouse Prioritizes Imagination over Reality
    Brian Miller – July 28, 2022,
    Excerpt: Douglas Axe demonstrated for the beta-lactamase enzyme that the upper bound for the enzyme’s larger domain is 1 functional sequence in every 10^77 randomly selected ones. Rosenhouse attempts to discredit this estimate by citing Arthur Hunt’s critique, but he fails to acknowledge that Axe and others showed that such negative assessments reflect misunderstandings of his research and the technical literature (here, here, here, here).
    https://evolutionnews.org/2022/07/in-critiquing-dembski-jason-rosenhouse-prioritizes-imagination-over-reality/

    Correcting Four Misconceptions about my 2004 Article in JMB – Doug Axe
    https://www.biologicinstitute.org/post/19310918874/correcting-four-misconceptions-about-my-2004

    Adam and the Genome and Doug Axe’s Research on the Evolution of New Protein Folds – March 7, 2018
    https://evolutionnews.org/2018/03/adam-and-the-genome-and-doug-axes-research-on-the-evolution-of-new-protein-folds/

    Losing the Forest by Fixating on the Trees — A Response to Venema’s Critique of Undeniable – Douglas Axe – February 6, 2018
    https://evolutionnews.org/2018/02/losing-the-forest-by-fixating-on-the-trees-a-response-to-venemas-critique-of-undeniable/

    Protein Folding and the Four Horsemen of the Axocalypse – Brian Miller – April 12, 2018
    https://evolutionnews.org/2018/04/protein-folding-and-the-four-horsemen-of-the-axocalypse/

  71. 71
    Alan Fox says:

    Jerry, regarding your question, I answered it. All DNA sequences synthesized in vitro will produce protein sequences. If you meant to ask a different question, you should make it clear.

  72. 72
    jerry says:

    I’ve tried searching for the quote and am unable to find it.

    I will search for it.

    It’s in a YouTube video from all the presentations made at that conference. The interesting thing was the complete lack of response, or should I say cluelessness, of the panel to this statement.

  73. 73
    Alan Fox says:

    A long comment from 13 years ago. Read if you want.

    Well, I did. It’s hard to see it as other than wishful thinking. Where explanations fail, the honest response is to admit ignorance. The idea that “Intelligent Designers” did something undetectable at some unspecified moment is not an explanation for anything.

  74. 74
    jerry says:

    Most ironic statement of the year?

    It’s hard to see it as other than wishful thinking. Where explanations fail, the honest response is to admit ignorance.

    Tell me how it wasn’t an honest comment.

    This response to my long comment sounds like someone in denial about the logic and evidence available about the Evolution debate.

    Peter and Rosemary Grant at Stanford. Start at 1:10 for comment about 32 million years.

    https://www.youtube.com/watch?v=IMcVY__T3Ho

    Aside: I believe the claim that a polite but honest debate was desired was only a pretense and was actually an attempt to be one-sided, which has obviously failed.

  75. 75
    Alan Fox says:

    Peter and Rosemary Grant at Stanford. Start at 1:10 for comment about 32 million years.

    The remark as I hear it is “finch radiation started two to three million years ago.”

  76. 76
    Alan Fox says:

    BTW, thanks to Jerry for providing the link! Very informative and great to hear things from the horse’s mouth.

  77. 77
    Alan Fox says:

    “Thirty-two” or “three to two”, Jerry?

  78. 78
    Alan Fox says:

    Listen to Peter Grant at 18.00. He definitely says “two to three million years”

  79. 79
    jerry says:

    Watch the video starting where I specified. It will appear about 1 1/2 minutes later. They also repeat it near the end.

    I listed a time somewhat before so as to allow one to get used to them talking about speciation.

  80. 80
    Alan Fox says:

    Yes, Carol Boggs also says “two to three million”.

  81. 81
    Alan Fox says:

    Come on Jerry. You misheard. It’s no big deal.

  82. 82
    jerry says:

    Come on Jerry. You misheard. It’s no big deal

    Why are you lying?

    It’s clear as anything and also in the transcript. This is amusing.

    From transcript at 1:11

    Peter showed you at the beginning the finches lie about here after the lineages or species have become diagnosably different but long before the point of genetic incompatibility and I say long before the point because the average time when genetic incompatibility arises in birds is on average 32 million years

  83. 83
    Alan Fox says:

    I can’t believe it! Go to 18.00 in your video. Listen to Peter Grant. Anyone can do this for themselves.

  84. 84
    Alan Fox says:

    It’s clear as anything and also in the transcript. This is amusing.

    Link?

  85. 85
    Alan Fox says:

    Help, fellow UD commenters!!!

    Can someone settle a dispute between Jerry and me and check what time period Peter Grant talks about for the radiation of the original invading species into the fourteen current Galápagos finch species?

  86. 86
    Alan Fox says:

    OK, we are talking about two different things. Rosemary Grant indeed says that the average time for genetic incompatibility in birds is thirty-two million years. Genetic incompatibility is not speciation time. It is the time limit for introgression.

  87. 87
    ET says:

    Alan Fox:

    Now I am quite familiar with Axe and the criticisms of his protein-folding approach. It is considered in the mainstream to be, at the politest, flawed.

    And yet no one can refute it nor show it to be flawed. Just saying it’s flawed is cowardice. And it is a given that Alan doesn’t understand Axe’s argument.

  88. 88
    Alan Fox says:

    Rosemary Grant refers to Prager and Wilson.

    Here is the citation:

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC432270/

  89. 89
    ET says:

    Alan, you have been given links to papers that measure functional sequence complexity. FSC is the same as CSI. It is the same as KF’s FSCO/I. So, clearly you are willfully ignorant or dishonest.

  90. 90
    Alan Fox says:

    And yet no one can refute it nor show it to be flawed. Just saying it’s flawed is cowardice. And it is a given that Alan doesn’t understand Axe’s argument.

    I linked to Art Hunt’s critique. I also linked to a critique by Mikkel Rasmussen. There are more criticisms.

  91. 91
    ET says:

    LoL! Alan links to Swamidass! Swamidass was completely owned by Behe in their internet debate.

  92. 92
    ET says:

    Art Hunt:

    To summarize, the claims that have been and will be made by ID proponents regarding protein evolution are not supported by Axe’s work. As I show, it is not appropriate to use the numbers Axe obtains to make inferences about the evolution of proteins and enzymes.

    And yet you and yours don’t have any evidence that blind and mindless processes produced any proteins! The fact that this isn’t a concern of yours proves that you don’t care about science.

  93. 93
    Alan Fox says:

    Alan, you have been given links to papers that measure functional sequence complexity. FSC is the same as CSI. It is the same as KF’s FSCO/I. So, clearly you are willfully ignorant or dishonest.

    Yet neither you (unsurprising) nor anyone can quantify Complex Specified Information. And functional sequence complexity is, let’s say, a niche product, though Kirk Durston is no rogue.

  94. 94
    ET says:

    Alan Fox:

    I linked to Art Hunt’s critique. I also linked to a critique by Mikkel Rasmussen. There are more criticisms.

    Both of those two are biased and cannot demonstrate that blind and mindless processes produced any protein. They cannot demonstrate that blind and mindless processes can produce a new functional protein fold starting with a given protein. What they are doing is whining.

    Lenski’s LTEE has not produced any new proteins.

  95. 95
    ET says:

    Alan Fox:
    Yet neither you (unsurprising) nor anyone can quantify Complex Specified Information.

    And yet we have!

    And functional sequence complexity is, let’s say, a niche product, though Kirk Durston is no rogue.

    Nope. FSC is an observation. And guess what? Neither you nor anyone else can demonstrate that nature can produce it!

  96. 96
    Alan Fox says:

    FSC is an observation. And guess what? Neither you nor anyone else can demonstrate that nature can produce it!

    Indeed. That is my point. It can’t be demonstrated by anyone. It is a human imaginary concept.

  97. 97
    Alan Fox says:

    And yet we have! [quantified Complex Specified Information.]

    Well, pull the rabbit out of the hat then. Let’s see your quantification of complex specified information.

  98. 98
    Alan Fox says:

    Jerry, just to point out that your statement “The Grants argued it would take 32 million years to get a new finch species.” is incorrect. The evidence they have collected shows that a single species invaded the Galápagos archipelago between two and three million years ago and radiated into fourteen Galápagos finch species extant today despite (according to Prager and Wilson) the average time for species incompatibility being 32 million years in birds.

  99. 99
    relatd says:

    AF at 96,

    You are being obstinate in the face of the evidence that blind, unguided chance has no chance to produce life as it exists today. This is a reality and perception problem on your part.

  100. 100
    jerry says:

    into fourteen Galápagos finch species extant today

    Yet all these extant species can produce genetically sound offspring with each other.

    They have another 29 million years to go. So are they really distinct species, or just one big happy family?

    Find a nit and make believe it is important in order to dismiss ID as meaningful. That is what nearly all criticisms of ID are about.

    Aside: What does the term “origin of species” mean?

  101. 101
    ET says:

    FSC is an observation. And guess what? Neither you nor anyone else can demonstrate that nature can produce it!

    Alan Fox:

    Indeed. That is my point. It can’t be demonstrated by anyone. It is a human imaginary concept.

    That doesn’t follow. If it is observed, then it has been demonstrated. Stonehenge is an observation. And neither you nor anyone else can demonstrate that nature can produce it. Stonehenge is not a human imaginary concept.

  102. 102
    ET says:

    Alan Fox:

    Let’s see your quantification of complex specified information.

    I thought you were familiar with Durston’s papers on the subject. Are you familiar with Shannon’s work from 1948?

  103. 103
    ET says:

    With evolution by means of intelligent design, ie “built-in responses to environmental cues”, we would expect rapid changes to finches to match the new environments.

  104. 104
    kairosfocus says:

    ET (attn AF), we are dealing with willful obtuseness and selective hyperskepticism. The refusal to accept that info carrying strings capable of holding functional information whether in text on a screen or D/RNA are an observable reality is itself a test, failed. We have actually seen D/RNA being repurposed as experimental archival info store. The further inability to recognise that functional info content of systems with configuration based function is just as valid is fail 2. Autocad etc show that such can be reduced to a compact description language so discussion on strings is without loss of generality. WLOG. Next, cumulative string length, often in bits is a basic info capacity metric, utterly common in a digital age. Durston et al adjusted for various things that somehow reduce effective functional info relative to raw capacity. All of this is on massive record accessible to the responsible and the result is not in doubt. For, the info load in the cell is so far beyond any reasonable threshold that it is clear that the use of coded language to effect algorithms for protein synthesis, in particular AA chain formation as a key stage, is decisive. I will not allow willful ignorance and hyperskepticism or linked rhetorical stunts to make me apologetic about what we may readily know. Here, that the root of the Darwin tree of life shows strong signs of design, leading to likelihood of similar design pervading the whole. KF
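
    For what the raw-capacity arithmetic looks like, here is a minimal sketch (capacity here is just string length times log2 of alphabet size, the naive upper bound, before any Durston-style downward adjustment for redundancy):

        import math

        # Raw information-carrying capacity of a string:
        # length * log2(alphabet size). This is carrying capacity only;
        # functional-information measures adjust downward from here.
        def raw_capacity_bits(length: int, alphabet_size: int) -> float:
            return length * math.log2(alphabet_size)

        print(f"900-base gene (4-letter DNA):   {raw_capacity_bits(900, 4):.0f} bits")   # 1800
        print(f"300-aa protein (20-letter set): {raw_capacity_bits(300, 20):.0f} bits")  # ~1297

    Both figures sit well beyond the 500 to 1,000 bit threshold discussed above.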

  105. 105
    Alan Fox says:

    Here, that the root of the Darwin tree of life shows strong signs of design, leading to likelihood of similar design pervading the whole.

    You see what you want to see, KF.

  106. 106
    AaronS1978 says:

    @105
    “You see what you want to see, KF.”

    Pretty sure KF was saying the same about you in 104

    “ET (attn AF), we are dealing with willful obtuseness and selective hyperskepticism.”

    At which point the argument is a wash. It’s a matter of perspective whether the glass is half empty or half full.

    Personally, I think the glass is half full and someone put the water in the glass.

  107. 107
    Alan Fox says:

    Pretty sure KF was saying the same about you in 104

    No doubt. It’s a universal human failing.

  108. 108
    kairosfocus says:

    AS78 (attn AF): Actually, no. I am not in denial of the reality of info carrying capability of s-t-r-i-n-g-s, nor of how that can be quantified then adjusted for real world code redundancies etc. I am not the one side stepping how D/RNA has actually been used to store archival general digital information. I am not in denial that the genetic code with its what 24 or so dialects, is a code so a manifestation of language. I am not studiously ignoring the start, extend, stop algorithms that code for AA sequences towards protein synthesis. That is, goal-directed process, a sign of purpose. So, I can confidently assign the latest stunt by AF to turnabout projection. KF
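
    For readers unfamiliar with the start, extend, stop structure just described, here is a toy Python sketch of ribosome-style translation. The codon table is deliberately truncated (the standard code has 64 entries), so treat it as an illustration, not a bioinformatics tool.

    ```python
    # Toy start/extend/stop translation. The codon table is truncated for
    # brevity; None marks the three standard stop codons. With this partial
    # table, an unlisted codon also halts the chain.
    CODON_TABLE = {"AUG": "M", "UUU": "F", "GGC": "G",
                   "UAA": None, "UAG": None, "UGA": None}

    def translate(mrna):
        start = mrna.find("AUG")                 # start: scan for the start codon
        if start == -1:
            return ""                            # no start codon, no chain
        chain = []
        for i in range(start, len(mrna) - 2, 3):
            aa = CODON_TABLE.get(mrna[i:i + 3])
            if aa is None:                       # stop: release the chain
                break
            chain.append(aa)                     # extend: add one amino acid
        return "".join(chain)

    print(translate("GGAUGUUUGGCUAA"))  # -> "MFG"
    ```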

  109. 109
    Alan Fox says:

    Of course you can, KF. Your confidence knows no bounds.

  110. 110
    Paxx says:

    Alan Fox,

    What do you hope to gain from your participation here?

    –Paxx

  111. 111
    Alan Fox says:

    Enlightenment, Paxx.

  112. 112
    Lieutenant Commander Data says:

    Kairosfocus
    willful obtuseness and selective hyperskepticism.

    No, a drug addict doesn’t have free will and selectivity anymore. You are talking about a previous stage, when the person who is not yet addicted has free will and chooses to take the drug knowing the consequences. Now they don’t have free will anymore; their hyperskepticism is compulsory, like the need for the drug. You can’t convince a drug addict with logic, because their logic has been modified into a different kind of animal. They can’t be helped with logic and reason.

  114. 114
    kairosfocus says:

    AF, more rhetorical stunts. I have warranted confidence in credible, reliable truth, i.e. knowledge in the day to day sense. KF

  115. 115
    bornagain77 says:

    Paxx: “Alan Fox, What do you hope to gain from your participation here?”

    Alan Fox: “Enlightenment”

    enlightenment: noun
    1: the state of having knowledge or understanding (“the search for spiritual enlightenment”)

    If learning some important spiritual truth, i.e. ‘enlightenment’, is truly your ultimate goal for being here, ought you not, as a Darwinist who believes in reductive materialism, first reject your reductive materialism and adopt some worldview that is capable of grounding spiritual truth in the first place?

    materialism: noun
    2. PHILOSOPHY: the doctrine that nothing exists except matter and its movements and modifications.

    “It seems to me immensely unlikely that mind is a mere by-product of matter. For if my mental processes are determined wholly by the motions of atoms in my brain I have no reason to suppose that my beliefs are true. They may be sound chemically, but that does not make them sound logically. And hence I have no reason for supposing my brain to be composed of atoms. In order to escape from this necessity of sawing away the branch on which I am sitting, so to speak, I am compelled to believe that mind is not wholly conditioned by matter.”
    – J.B.S. Haldane, “When I am dead,” in Possible Worlds: And Other Essays (1927), Chatto and Windus: London, 1932 reprint, p. 209.

    “Truth claims are propositional. That is, truth claims are stated in the form of a proposition. But what is a proposition? Where do propositions exist? What do they look like? Where are they located? How much space do they take up? How much do they weigh? How long have they existed? How and where did they originate? Obviously, these questions are absurd because propositions are not physical. But if the physical or material is all that exists as the materialist claims, which is by the way a propositional truth claim, how can such a proposition be true? How can something that doesn’t really exist, as the materialist claims, be true? Obviously that is self-refuting.”
    – John_a_designer

    “Truth is immaterial and can be seen using an open mind that voluntarily follows evidence regardless.”
    – Andrew Fabich – Associate Professor of Microbiology – Truett McConnell University – 2016

    Verse and Quote:

    John 14:6
    Jesus answered, “I am the way and the truth and the life. No one comes to the Father except through me.

    “If you were to take Mohammed out of Islam, and Buddha out of Buddhism, and Confucius out of Confucianism, you would still have a faith system that was relatively intact. However, taking Christ out of Christianity sinks the whole faith completely. This is because Jesus centred the faith on himself. He said, “This is what it means to have eternal life: to know God the Father and Jesus Christ whom the Father sent” (John 17:3). “I am the light of the world” (John 8:12). Buddha, before dying, said in effect, “I am still seeking for the truth.” Mohammed said in effect, “I point you to the truth.” Jesus said, “I am the truth.” Jesus claimed to not only give the truth, but to be the very personal embodiment of it.”
    http://commonground.co.za/?res.....way-to-god

  116. 116
    AaronS1978 says:

    @108 @KF
    It was a turnabout projection; that’s what I was pointing out. I was trying not to be abrasive. But when AF commented at 105, it reminded me of something said to me years ago in a debate class. We are often put on the defensive for our point of view (and our POVs are not unsubstantiated), and accusations like “we see what we want to see in the data” are often levied against us. That’s something Richard Dawkins has accused many religious people of doing when they look at the world, while he himself, believing he has an enlightened outlook on reality, sees a grim, cruel reality. But there are two problems with this.

    One is the blatant arrogance of believing that your way of seeing things is the only, or the correct, way of seeing things. Which can often be wrong.

    Two is the fact that he (and many others) is doing literally the same thing he accuses the religious of doing.

    And this happens far more often in science than they’re willing to admit.

    Paul Zak is a really good example of seeing what you want to see about oxytocin, and the same goes for a lot of free-will neuroscience researchers. Another is what happened with the BICEP2 results years ago, which were claimed to confirm chaotic inflation and, in turn, the multiverse.

    They saw what they wanted to see in the data, and sadly it took very long periods of time to correct the above-mentioned examples.

  117. 117
    Alan Fox says:

    …the fact that he (and many others) are doing literally the same thing he is accusing the religious of doing

    Just pointing out that aspect of human nature. I tend to write comments that reflect what I think and believe. What would be the point of doing otherwise? I do struggle to accept that other posters are doing the same when the content of a comment is alien to my own experience. But heigh-ho, life goes on…

  118. 118
    relatd says:

    AF at 117,

    As a professional researcher, I can’t insert what I think and believe into the data. Whatever I’m researching, if I find relevant documents and credible sources, I go by that, not my opinion. For example: Wikipedia can be used as a starting point, but it is not a reliable source. Once the data is in hand, I have to cross-reference everything against other credible sources.

    Sadly, here people mix in personal thoughts with what they only think is credible information. Examples: I heard it from my political party, or my buddy Bob, who would never lie to me, or worse, from some anonymous guy on the internet who provided zero credible references to back up what he wrote.

    The internet is a black room with no sound. The only way for people to communicate is by keyboard. I think people should be careful with their opinions. We should back up any statements with credible sources. This is not the neighborhood pub.

  119. 119
    kairosfocus says:

    AF, you are side stepping warrant and objectivity. Also, the substance on the table. Relativism, subjectivism, emotivism etc fail, being self-referentially incoherent. They suggest they have a degree of objectivity they could not have, were they true. Yes, we may err; that is why we have duties to truth, right reason, warrant and wider prudence. Which are on the table regarding FSCO/I for the world of life and regarding fine tuning factors. KF

  120. 120
    Alan Fox says:

    AF, you are side stepping warrant and objectivity.

    Well, I try to see the world as it is and base my remarks on facts. Warrant? I’m a pragmatist. Rules that work best flow from consensus and fairness, not unquestioned authority.

    Also, the substance on the table. Relativism, subjectivism, emotivism etc fail, being self referentially incoherent. They suggest they have a degree of objectivity they could not have, were they true.

    There is no absolute objective warrant. People insist, agree, argue, fight, endure whatever rules emerge in human societies. I’m sure we can all think of better ways for our community to function, but there’d be little consensus.

    Yes, we may err, that is why we have duties to truth, right reason, warrant and wider prudence.

    You do err, frequently, and at length. It is fortunate you have no power to enforce your ideas to any significant extent on others.

    Which, are on the table regarding FSCO/i for the world of life and regarding fine tuning factors.

    A fact for you to consider. You are unique in claiming that “FSCO/I” is a genuine, quantifiable concept yet have failed utterly to justify that claim.

  121. 121
    Alan Fox says:

    I think people should be careful with their opinions. We should back up any statements with credible sources.

    I agree.

  122. 122
    Lieutenant Commander Data says:

    Kairosfocus
    Relativism, subjectivism, emotivism etc fail, being self referentially incoherent.

    Alan Fox
    There is no absolute objective warrant.

    :))

  123. 123
    kairosfocus says:

    LCD (attn AF): is this objectively true? [Do you see how it refutes itself, showing that AF’s snide assertion about “Rules that work best flow from consensus and fairness, not unquestioned authority” is a gross strawman fallacy?] KF

    PS, for starters, Epictetus on first principles of logic:

    DISCOURSES
    CHAPTER XXV

    How is logic necessary?

    When someone in [Epictetus’] audience said, Convince me that logic is necessary, he answered: Do you wish me to demonstrate this to you?—Yes.—Well, then, must I use a demonstrative argument?—And when the questioner had agreed to that, Epictetus asked him. How, then, will you know if I impose upon you?—As the man had no answer to give, Epictetus said: Do you see how you yourself admit that all this instruction is necessary, if, without it, you cannot so much as know whether it is necessary or not? [Notice, inescapable, thus self evidently true and antecedent to the inferential reasoning that provides deductive proofs and frameworks, including axiomatic systems and propositional calculus etc. We here see the first principles of right reason in action. Cf J. C. Wright]

    Yes, that is how far wrong we have gone.

  124. 124
    ET says:

    Alan Fox:

    You are unique in claiming that “FSCO/I” is a genuine, quantifiable concept yet have failed utterly to justify that claim.

    How would you know? You ignore everything that contradicts your views.

  125. 125
    Alan Fox says:

    LCD (attn AF): is this objectively true?

    Whether you are referring to your own statements or mine, it settles nothing to label them subjective or objective.

  126. 126
    Alan Fox says:

    You ignore everything that contradicts your views.

    Oh, the irony! Oh, the projection! 🙂

  127. 127
    ET says:

    Stuff it, Alan. I can easily support my claim, whereas you couldn’t support yours.

    I will gladly ante up $10,000 to debate Alan Fox on science- evolution by means of blind and mindless processes vs ID. I know that Alan will never accept. And I know why.

  128. 128
    kairosfocus says:

    AF, I will take onward points in bites. FSCO/I is instantly recognisable from cases such as text in this thread and information-rich functional organisation. In fact my abbreviation traces to Wicken and Orgel in the ’70s; it is antecedent to modern design theory. As for your continued irresponsible willful denial of what is documented yet again in the thread above, let me clip from 108 and 104 just for starters:

    108 kairosfocus July 31, 2022 at 1:43 am

    AS78 (attn AF): Actually, no. I am not in denial of the reality of info carrying capability of s-t-r-i-n-g-s, nor of how that can be quantified then adjusted for real world code redundancies etc. I am not the one side stepping how D/RNA has actually been used to store archival general digital information. I am not in denial that the genetic code with its what 24 or so dialects, is a code so a manifestation of language. I am not studiously ignoring the start, extend, stop algorithms that code for AA sequences towards protein synthesis. That is, goal-directed process, a sign of purpose. So, I can confidently assign the latest stunt by AF to turnabout projection.

    AND

    104 kairosfocus July 30, 2022 at 8:50 pm

    ET (attn AF), we are dealing with willful obtuseness and selective hyperskepticism. The refusal to accept that info carrying strings capable of holding functional information whether in text on a screen or D/RNA are an observable reality is itself a test, failed. We have actually seen D/RNA being repurposed as experimental archival info store. The further inability to recognise that functional info content of systems with configuration based function is just as valid is fail 2. Autocad etc show that such can be reduced to a compact description language so discussion on strings is without loss of generality. WLOG. Next, cumulative string length, often in bits is a basic info capacity metric, utterly common in a digital age. Durston et al adjusted for various things that somehow reduce effective functional info relative to raw capacity. All of this is on massive record accessible to the responsible and the result is not in doubt. For, the info load in the cell is so far beyond any reasonable threshold that it is clear that the use of coded language to effect algorithms for protein synthesis, in particular AA chain formation as a key stage, is decisive. I will not allow willful ignorance and hyperskepticism or linked rhetorical stunts to make me apologetic about what we may readily know. Here, that the root of the Darwin tree of life shows strong signs of design, leading to likelihood of similar design pervading the whole. KF

    See why you are clearly of negative credibility? We live in a world where info capacity is routinely measured in bits and bytes. Accounting for redundancies, uneven distribution of glyphs, unused states [BCD vs hex code was the first case in digital electronics], etc. in real codes is what Durston et al have done. Others have pointed out similar things, and yet you remain in tellingly dismissive denial. Fail. KF

    PS, If you took time to click my linked page through my handle, from over a decade ago you would find, first https://www.angelfire.com/pro/kairosfocus/resources/Info_design_and_science.htm#infois and then https://www.angelfire.com/pro/kairosfocus/resources/Info_design_and_science.htm#fscimetrx which points onward to published work, some before Durston.

  129. 129
    kairosfocus says:

    PPS, then, there is this from Orgel, 1973:

    living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . . .

    [HT, Mung, fr. p. 190 & 196:]

    These vague ideas can be made more precise by introducing the idea of information. Roughly speaking, the information content of a structure is the minimum number of instructions needed to specify the structure.

    [–> this is of course equivalent to the string of yes/no questions required to specify the relevant J S Wicken “wiring diagram” for the set of functional states, T, in the much larger space of possible clumped or scattered configurations, W, as Dembski would go on to define in NFL in 2002; cf also the onward links in the original (including one on self-moved agents as designing causes).]

    One can see intuitively that many instructions are needed to specify a complex structure. [–> so if the q’s to be answered are Y/N, the chain length is an information measure that indicates complexity in bits . . . ] On the other hand a simple repeating structure can be specified in rather few instructions.  [–> do once and repeat over and over in a loop . . . ] Complex but random structures, by definition, need hardly be specified at all . . . . Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes [–> Orgel had high hopes for what Chem evo and body-plan evo could do by way of info generation beyond the FSCO/I threshold, 500 – 1,000 bits.] [The Origins of Life (John Wiley, 1973), p. 189, p. 190, p. 196.]

    This is only a sampler that further exposes your irresponsible commentary.
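
    Orgel’s “minimum number of instructions” point can be loosely illustrated in Python, using compressed length as a crude stand-in for description length (nothing in Orgel’s argument depends on zlib; this is a sketch only):

    ```python
    import random
    import zlib

    # Compressed size as a rough proxy for "minimum number of instructions":
    # an ordered repeating structure compresses to almost nothing, while a
    # random polymer mix offers little structure to exploit.
    random.seed(0)
    crystal = "AB" * 500                                           # simple repeating structure
    mixture = "".join(random.choice("ACGT") for _ in range(1000))  # random sequence

    print(len(zlib.compress(crystal.encode())))  # tiny: "repeat 'AB' 500 times"
    print(len(zlib.compress(mixture.encode())))  # much larger: near 2 bits/symbol
    ```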

  130. 130
    kairosfocus says:

    PPS, on DNA as an info store, here is something recently discussed here at UD:

    https://www.bbc.com/news/science-environment-59489560

    Scientists claim big advance in using DNA to store data

    By Paul Rincon
    Science editor, BBC News website

    Published 1 December 2021

    Scientists say they have made a major step forward in efforts to store information as molecules of DNA, which are more compact and long-lasting than other options.

    The magnetic hard drives we currently use to store computer data can take up lots of space.

    And they have to be replaced over time.

    Using life’s preferred storage medium to back up our precious data would allow vast amounts of information to be archived in tiny molecules.

    The data would also last thousands of years, according to scientists.

    A team in Atlanta, US, has now developed a chip that they say could improve on existing forms of DNA storage by a factor of 100.

    “The density of features on our new chip is [approximately] 100x higher than current commercial devices,” Nicholas Guise, senior research scientist at Georgia Tech Research Institute (GTRI), told BBC News.

    “So once we add all the control electronics – which is what we’re doing over the next year of the program – we expect something like a 100x improvement over existing technology for DNA data storage.”

    The technology works by growing unique strands of DNA one building block at a time. These building blocks are known as bases – four distinct chemical units that make up the DNA molecule. They are: adenine, cytosine, guanine and thymine.

    The bases can then be used to encode information, in a way that’s analogous to the strings of ones and zeroes (binary code) that carry data in traditional computing.

    There are different potential ways to store this information in DNA – for example, a zero in binary code could be represented by the bases adenine or cytosine and a one might be represented by guanine or thymine. Alternatively, a one and zero could be mapped to just two of the four bases.

    Scientists have said that, if formatted in DNA, every movie ever made could fit inside a volume smaller than a sugar cube.

    Given how compact and reliable it is, it’s not surprising there is now broad interest in DNA as the next medium for archiving data that needs to be kept indefinitely.

    The structures on the chip used to grow the DNA are called microwells and are a few hundred nanometres deep – less than the thickness of a sheet of paper.

    The current prototype microchip is about 2.5cm (one-inch) square and includes multiple microwells, allowing several DNA strands to be synthesised in parallel. This will allow larger amounts of DNA to be grown in a shorter space of time.
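
    The bit-to-base mapping floated in the quoted article is easy to sketch. Here is a minimal Python illustration of one scheme it mentions (0 as A or C, 1 as G or T); real DNA storage systems add synthesis constraints and error correction that this toy omits.

    ```python
    import random

    # One mapping from the quoted article: 0 -> A or C, 1 -> G or T.
    ZERO, ONE = "AC", "GT"

    def encode(bits):
        return "".join(random.choice(ZERO if b == "0" else ONE) for b in bits)

    def decode(strand):
        return "".join("0" if base in ZERO else "1" for base in strand)

    message = "1011001"
    strand = encode(message)          # one base per bit, e.g. "GAGTACG"
    assert decode(strand) == message  # round-trips losslessly
    print(strand)
    ```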

    See why you are painting yourself into a corner as an irresponsible, unresponsive, dismissive, hyperskeptical objector?

  131. 131
    kairosfocus says:

    AF, predictably, you willfully refuse to acknowledge that your statement in 120 — “There is no absolute objective warrant” [and BTW, warrant needs not be absolute to be objective and reliable] — is self-referential, self refuting and therefore nonsense. As for trying definitionitis games with what is objectivity, it has to do with warrant thus knowability. That is, knowledge is warranted, credibly true [and so reliable] belief. Warrant, pointing to fulfilled duties of reason. These terms are not empty labels for you to cynically play rhetorical stunts with. Your behaviour continues to show how irresponsible you are. KF

    PS, a bit of algebra will help those willing to attend to the foundations of knowledge:

    The truth claim, “there are no [generally knowable] objective truths regarding any matter,” roughly equivalent to, “knowledge is inescapably only subjective or relative,” is an error. Which, happily, can be recognised and corrected.

    Often, such error is presented and made to seem plausible through the diversity of opinions assertion, with implication that none have or are in a position to have a generally warranted, objective conclusion. This, in extreme form, is a key thesis of the nihilism that haunts our civilisation, which we must detect, expose to the light of day, correct and dispel, in defence of civilisation and human dignity. (NB: Sometimes the blind men and the elephant fable is used to make it seem plausible, overlooking the narrator’s implicit claim to objectivity. Oops!)

    Now, to set things aright, let’s symbolise: ~[O*G] with * as AND.

    This claims, it is false that there is an objective knowable truth.

    It intends to describe not mere opinion but warranted, credible truth about knowledge in general. So, ~[O*G] is self-referential as it is clearly about subject matter G, and is intended to be a well-warranted, objectively true claim. But it is itself therefore a truth claim about knowledge in general intended to be taken as objectively true, which is what it tries to deny as a possibility. So, it is self-contradictory and necessarily false. In steps:

    PHASE I: Let a proposition be represented by x.
    G = x is a proposition asserting that some state of affairs regarding some identifiable matter in general (including e.g. history, science, the secrets of our hearts, morality etc.) is the case.
    O = x is objective and knowable, being adequately warranted as credibly true.

    PHASE II: It is claimed, S = ~[O*G] = 1, 1 meaning true.
    However, the subject of S is G;
    it therefore claims to be objectively true, O, and is about G,
    where it forbids O-status to any claim of type G;
    so, ~[O*G] cannot be true, per self-referential incoherence.
    =============

    PHASE III: The Algebra, translating from S:

    ~[O*G] = 0 [as self-referential and incoherent, cf above]
    ~[~[O*G]] = 1 [the negation is therefore true]
    __________
    O*G = 1 [condensing not of not]
    where, G [general truth claim including moral ones of course]
    So too, O [if an AND is true, each sub proposition is separately true]
    ================

    CONCLUSION: That is, there are objective truths, including moral ones; and a first, self-evident one is that ~[O*G] is false, ~[O*G] = 0.

    The set of knowable objective truths in general — and embracing those that happen to be about states of affairs in regard to right conduct etc — is non empty, it is not vacuous and we cannot play empty set square of opposition games with it.

    That’s important.
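
    The self-reference step can even be brute-force checked. A small Python sketch (one way to formalise the step above, not the only one):

    ```python
    from itertools import product

    # S is the claim "there is no objectively knowable general truth",
    # i.e. S = NOT (O AND G); the argument adds that S is itself offered
    # as an objective general truth claim, i.e. S -> (O AND G).
    def implies(p, q):
        return (not p) or q

    survivors = [(O, G, S)
                 for O, G, S in product([False, True], repeat=3)
                 if S == (not (O and G)) and implies(S, O and G)]

    print(survivors)  # [(True, True, False)]: S must be false, O AND G true
    ```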

    Also, there are many particular objective general and moral truths that are adequately warranted to be regarded as reliable. Try: Napoleon was once a European monarch and would-be conqueror. Try: Jesus of Nazareth is a figure of history. Try: it is wrong to torture babies for fun, and more.

    Ours is a needlessly confused age, heading for trouble.

    Similarly, kindly ponder the very carefully worded definitions from Collins English Dictionary [CED], where high-quality dictionaries record and report correct usage:

    SUBJECTIVE: subjective
    adj
    1. belonging to, proceeding from, or relating to the mind of the thinking subject and not the nature of the object being considered [–> in short, in the contemplating subject, not necessarily the contemplated observed or abstract object such as the null set {} –> 0]
    2. of, relating to, or emanating from a person’s emotions, prejudices, etc
    : subjective views. [–> this highlights the error-proneness of our subjectivity, thus the need for filtering to achieve adequate reliability]

    OBJECTIVE: objective
    adj
    1. (Philosophy) existing independently of perception or an individual’s conceptions: are there objective moral values? [AmHD helps: 1. a. Existing independent of or external to the mind;] {–> “independent of” particularly should be seen as inherent in the object, observable or abstract, and that on grounds that confer reliability}
    2. undistorted by emotion or personal bias [–> highlighting error proneness]
    3. of or relating to actual and external phenomena as opposed to thoughts, feelings, etc. [–> this sense especially relates to observable, concrete things like a tree, and again points to our error proneness; however, for cause, something like the null set and related Math is objective though abstract, there being no physical location for the null set]

    Dictionaries of course summarise from usage by known good speakers and writers, forming a body of recorded knowledge on language. So, we may freely conclude that:

    objectivity does not mean empirical, tangible external/physical object or the like, it can include items contemplated by the mind such as mathematical entities etc and which due to adequate warrant are reasonably INDEPENDENT of our individual or collective error-prone cognition, opinions, delusions, biases and distortions etc.

    Objectivity, is established as a key concept that addresses our error proneness by provision of adequate warrant that gives good reason to be confident that the item or state of affairs etc contemplated is real not a likely point of delusion. Yes, degree of warrant is a due consideration and in many cases common to science etc is defeasible but credible. In certain key cases, e.g. actual self evidence, it is utterly certain.
    **************
    PREDICTION: AF will studiously ignore this and pretend that nothing has been shown. Let us hope, for his sake, he will prove me wrong.

  132. 132
    Alan Fox says:

    FSCO/I is instantly recognisable from cases such as text in this thread and information rich functional organisation. In fact my abbreviation traces to Wicken and Orgel, in the 70’s, it is antecedent to modern design theory.

    But KF, you reinforce my point that “FSCO/I” is a concept that nobody but you uses. And you cannot use it quantitatively. I’ve yet to see a coherent working definition.

  133. 133
    Alan Fox says:

    Durston has made zero impact in the scientific world.

  134. 134
    Alan Fox says:

    Objectivity, is established as a key concept that addresses our error proneness by provision of adequate warrant that gives good reason to be confident that the item or state of affairs etc contemplated is real not a likely point of delusion. Yes, degree of warrant is a due consideration and in many cases common to science etc is defeasible but credible. In certain key cases, e.g. actual self evidence, it is utterly certain.

    Objectivity has an everyday meaning with which I have no problem. But, whilst trying to be objective when making statements (Wikipedia’s neutral point of view is a good example), deciding which statements are in that sense adequately objective is a pretty subjective process.

  135. 135
    relatd says:

    AF at 134,

    Wha… what? Guess what? Information is information. All living things contain instructions for assembly and reproduction. Blind, unguided chance, which is also not goal oriented, cannot program or build your computer much less a living thing.

  136. 136
    kairosfocus says:

    AF, you obviously failed to see that I am simply noting that as Orgel and Wicken highlighted, we deal with functional information, which can be explicit [text, D/RNA] or implicit in configuration of parts to achieve function. So, I abbreviated the phrase in two stages, the latter highlighting organisation. That is a convenience. Where, bits and bytes are ubiquitous in an info age so your pretence to ignore them just shows your desperation to resist the manifest. As Orgel put it in 1973, compact description suffices to specify as we know now from say an Autocad DWG file. Such are amenable to metrics that can be chosen as convenient. 10+ years ago, I favoured using a product and also used a subtraction of threshold value. Abel et al, Durston et al showed how to factor in redundancies, and that has objective warrant. As to your latest stunt to try to make objective warrant vanish into subjectivity, that is little more than an excuse for selective hyperskepticism amounting to willful obtuseness. Its anticivilisational, misanthropic folly can readily be seen from the result were it to be the norm: collapse. Attend instead to duties to truth, right reason, prudence (including warrant). You are bearing out my prediction. KF
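
    The product-and-threshold style of metric alluded to above can be sketched in a couple of lines. This shows the general shape only, with the 500-bit solar-system threshold assumed; the function name and exact form here are illustrative, not the specific published formula, which is in the linked references.

    ```python
    # Illustrative product-minus-threshold metric: info_bits * specificity
    # minus a search threshold. Positive values indicate functionally
    # specific information beyond the reach of blind search.
    def chi_metric(info_bits, specificity, threshold=500):
        # specificity: 1 if independently functionally specified, else 0
        return info_bits * specificity - threshold

    print(chi_metric(200_000, 1))  # a 100 kbase genome at 2 bits/base: 199500
    print(chi_metric(200, 1))      # -300: within reach of blind search
    print(chi_metric(200_000, 0))  # -500: complex but not specified
    ```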

    PS, Locke’s rebuke is telling:

    [Essay on Human Understanding, Intro, Sec 5:] Men have reason to be well satisfied with what God hath thought fit for them, since he hath given them (as St. Peter says [NB: i.e. 2 Pet 1:2 – 4]) panta pros zoen kai eusebeian, whatsoever is necessary for the conveniences of life and information of virtue; and has put within the reach of their discovery, the comfortable provision for this life, and the way that leads to a better. How short soever their knowledge may come of an universal or perfect comprehension of whatsoever is, it yet secures their great concernments [Prov 1: 1 – 7], that they have light enough to lead them to the knowledge of their Maker, and the sight of their own duties [cf Rom 1 – 2, Ac 17, etc, etc]. Men may find matter sufficient to busy their heads, and employ their hands with variety, delight, and satisfaction, if they will not boldly quarrel with their own constitution, and throw away the blessings their hands are filled with, because they are not big enough to grasp everything . . . It will be no excuse to an idle and untoward servant [Matt 24:42 – 51], who would not attend his business by candle light, to plead that he had not broad sunshine. The Candle that is set up in us [Prov 20:27] shines bright enough for all our purposes . . . If we will disbelieve everything, because we cannot certainly know all things, we shall do much-what as wisely as he who would not use his legs, but sit still and perish, because he had no wings to fly.

  137. 137
    Alan Fox says:

    Blind, unguided chance, which is also not goal oriented, cannot program or build your computer much less a living thing.

    I agree. However evolutionary theory does not propose a process based only on chance. There is bias.
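
    The point that selection is chance plus bias can be sketched with a toy weighted-reproduction loop in Python (the parameters are arbitrary and purely illustrative):

    ```python
    import random

    # Toy "chance plus bias": mutation is random, but reproduction is
    # weighted by fitness, so the population mean drifts upward rather
    # than wandering at random.
    random.seed(1)
    pop = [0.0] * 50                       # trait values

    for generation in range(100):
        weights = [2 ** x for x in pop]    # fitter variants reproduce more
        pop = [x + random.gauss(0, 0.1)    # random mutation
               for x in random.choices(pop, weights=weights, k=len(pop))]

    print(sum(pop) / len(pop))  # noticeably above 0: biased, not pure chance
    ```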

  138. 138
    relatd says:

    AF at 137,

    Not goal oriented. You can aim a driverless car down the road and let it go. How soon before it crashes into something? Evolution is that driverless car.

  139. 139
    Alan Fox says:

    KF, your support of your “FSCO/I” concept may be more convincing if you could attempt to come up with a working definition and perhaps an example of how to apply it to a biological system. I realize an actual quantification is beyond you but baby steps…

  140. 140
    Alan Fox says:

    Evolution is that driverless car.

    Nope. A useless analogy that is so far from how biological evolution actually works, it’s hard to know what to say to you. Have you read any books on evolutionary biology?

  141. 141
    relatd says:

    AF at 140,

    You’ll have no luck with your “have you read” question. The whole evolution story is old hat for me. It boils down to being a faith statement as opposed to anything having to do with science. There’s no evidence evolution, as advertised, actually did anything.

  142. 142
    Alan Fox says:

    The whole evolution story is old hat for me.

    Yet all you have said so far indicates to me you have a poor and inaccurate understanding of the theory and the process.

    It boils down to being a faith statement as opposed to anything having to do with science.

    No, it’s an explanation for the observed diversity and relatedness of life on Earth.

    There’s no evidence evolution, as advertised, actually did anything.

    Richard Lenski’s LTEE demonstrates the evolutionary process in real time.

  143. 143
    Alan Fox says:

    @Relatd

    Did you watch the video that Jerry linked to? The video is a summary of the work of Peter and Rosemary Grant with Galapagos finches.

  144. 144
    doubter says:

    Alan Fox@134

    (Wikipedia’s neutral point of view is a good example)

    Neutral POV?? How incredibly ridiculous. Wikipedia is easily and demonstrably biased against any understanding of Nature outside mainstream reductionist materialism, suppressing and deliberately distorting any and all phenomena and evidence for the paranormal, for instance, and of course for ID. There is a sort of Wiki “thought police” of zealots who constantly monitor and suppress any entries that contradict their narrow, scientistic, reductionist-materialist view of reality. This Wiki thought police could very well include AF, who exhibits all the signs of dedication to the secular modern religion of scientism and Darwinism.

  145. 145
    relatd says:

    AF at 142,

    Do you think this is the first time I’ve been asked about this? Or told, in great detail, what supposedly happened? Lenski? Again? A dud. A non-starter. A ‘trust me, it went like this.’ You’ll have no luck selling the theory. Thanks primarily to this site, and watching the dogged determination of the defenders of the theory elsewhere, it is a belief system as opposed to science.

    Galapagos finches? No, I don’t think so. You ignore the complexity in a single living cell and try to convince others that it slowly, gradually appeared as it is today? All I’m seeing from the scientific community is their finding more and more complexity, squeezing chance out of the equation entirely.

  146. 146
    Alan Fox says:

    Doubter

    How incredibly ridiculous.

    You’ve missed my point, which was how attempts at an objective approach are easy targets for accusations of subjective bias. Though your comment illustrated my point neatly. So thanks for that. 😉

  147. 147
    Alan Fox says:

    Relatd

    Thanks primarily to this site, and watching the dogged determination of the defenders of the theory elsewhere, it is a belief system as opposed to science.

    If UD is your main source of information on evolutionary biology, evolutionary theory and the evidence underlying the process and the theory, you are beyond help, I guess. Too bad!

  148. 148
    doubter says:

    AF,

    I notice that the bottom line is that you don’t respond to or engage with my substantive comments relative to Wiki. I wonder why.

  149. 149
    relatd says:

    AF at 147,

    Don’t get stupid on me, OK? You strike me as intelligent. Anyway, for the purpose of letting you and others reading know where I stand, here are the details: NO, I did not get all of my information from UD, and don’t fake a lack of reading ability with me again. I was told on other sites, over a number of years, what evolution supposedly did. It’s fiction. Fiction. This site helped to clarify all that. Again, don’t come back with some quip reply, read what I’m writing.

  150. 150
    Alan Fox says:

    Again, don’t come back with some quip reply, read what I’m writing.

    Wherever you got your ideas about evolution from, what you write here makes it clear your understanding of how evolution works is erroneous. Do you know what a niche is?

  151. 151
    Alan Fox says:

    …notice that the bottom line is that you don’t respond to or engage with my substantive comments relative to Wiki.

    Wikipedia is a great resource used properly. The idea is to use it as a gateway to the primary sources.

  152. 152
    relatd says:

    AF at 150,

    Faking a lack of reading ability again? Come off it, Alan. All you’re doing is acting like one of the indoctrinated. Too bad.

  153. 153
    Alan Fox says:

    @ Relatd:

    Do you know what a niche is? It’s central to the mechanism of evolution.

  154. 154
    relatd says:

    AF at 153,

    I’ve heard it before. You’ve got nothing new.

  155. 155
    Alan Fox says:

    I’ve heard it before.

    What have you heard? The niche is the mechanism by which God designs living organisms including us? Are you not then amazed?

    You’ve got nothing new.

    Indeed. There are people much better than me at explaining evolutionary theory but you have already rejected your strawman version. Not much I can do about that and I guess it doesn’t really matter in the circumstances.

  156. 156
    relatd says:

    AF at 155,

    It matters every time. Every time. The secular evangelists are careful to answer every attempt to breach the wall of the theory. E.g., “You’re wrong! You’re ignorant!” And so on. And it’s obvious that atheist materialism must be protected. Always.

  157. 157
    kairosfocus says:

    AF, that is now lying. By speaking with disregard to facts already on the table as stated or a few links away. Info carrying capacity of D/RNA is 2 bits per base; redundancy reduces that somewhat. Each AA is effectively from 20 possibilities, 4.32 bits, with chirality adding a bit for many in OoL contexts. Abel, Durston et al describe how the capacity is not fully used, as is true for codes in general. But such is immaterial: config spaces beyond 500 – 1,000 bits are unsearchable by blind means on sol system or cosmos scope gamut. 10^57 to 10^80 atoms, at up to 10^14 operations per second, for 10^17 s. Just the genome for the first cell is 100 – 1,000 k bases, and new body plans are 10 – 100+ millions, where the config space doubles per additional bit. All of this you should long since have acknowledged but obviously have no intent to, as it is at once fatal to the plausibility of your preferred materialistic miracles of organisation. Beyond, I simply note we have coded algorithms to compose AA chains for proteins, thus language and goal-directed processes. We confidently infer design as best causal explanation, indeed the only empirically supported one for such. More can be said, fisking to follow. KF
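
    The needle-in-haystack arithmetic in the comment above is easy to reproduce. A quick Python check using the figures as stated (10^57 to 10^80 atoms, 10^14 operations per second, 10^17 seconds):

    ```python
    import math

    # Upper bounds on blind-search operations, using the stated figures.
    atoms_sol, atoms_cosmos = 1e57, 1e80
    ops_per_sec, seconds = 1e14, 1e17

    ops_sol = atoms_sol * ops_per_sec * seconds        # ~1e88 operations
    ops_cosmos = atoms_cosmos * ops_per_sec * seconds  # ~1e111 operations

    # An n-bit config space holds 2**n states, so the largest space either
    # gamut could sample appreciably is about log2(ops) bits.
    print(math.log2(ops_sol))     # ~292 bits
    print(math.log2(ops_cosmos))  # ~369 bits, below the 500-1,000 bit range
    ```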

  158. 158
    Alan Fox says:

    Why so insulting, KF? A lie would involve me in making a statement I know or believe to be untrue at the time I am making it. I have never done that here.

  159. 159
    Lieutenant Commander Data says:

    Relatd
    The whole evolution story is old hat for me. It boils down to being a faith statement as opposed to anything having to do with science. There’s no evidence evolution, as advertised, actually did anything.

    Engineers would be out of a job on the spot if they invented stories the way Darwinists do. Why are Darwinists paid for inventing unprovable stories about the past? They are just common novelists who publish under “scientific authority.”

  160. 160
    relatd says:

    LCD at 159,

    You’re right. Engineers have to show their work. They have to build things that actually function.

  161. 161
    kairosfocus says:

    AF, doubling down. And misdefinition: there are subtler forms of intentional deception. To lie is to speak with disregard to truth in hope of profiting from what is said or suggested being taken as true. You full well know, or should acknowledge, some basic facts, but choose to obfuscate, pretend to innocent ignorance, deride and dismiss instead. We can infer that the facts would be fatal to your enterprise. KF

    PS, even Wikipedia is forced by facts to acknowledge:

    The bit is the most basic unit of information in computing and digital communications. The name is a portmanteau of binary digit.[1] The bit represents a logical state with one of two possible values. These values are most commonly represented as either “1” or “0”, but other representations such as true/false, yes/no, on/off, or +/− are also commonly used.

    The relation between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program. It may be physically implemented with a two-state device.

    The symbol for the binary digit is either ‘bit’ per recommendation by the IEC 80000-13:2008 standard, or the lowercase character ‘b’, as recommended by the IEEE 1541-2002 standard.

    A contiguous group of binary digits is commonly called a bit string, a bit vector, or a single-dimensional (or multi-dimensional) bit array. A group of eight bits is called one byte, but historically the size of the byte is not strictly defined.[2] Frequently, half, full, double and quadruple words consist of a number of bytes which is a low power of two. A string of four bits is a nibble. [–> notice, these are further measures of information, strictly, carrying capacity]

    In information theory, one bit is the information entropy of a random binary variable that is 0 or 1 with equal probability,[3] or the information that is gained when the value of such a variable becomes known.[4][5] As a unit of information, the bit is also known as a shannon,[6] named after Claude E. Shannon . . . .

    When the information capacity of a storage system or a communication channel is presented in bits or bits per second, this often refers to binary digits, which is a computer hardware capacity to store binary data (0 or 1, up or down, current or not, etc.).[17] Information capacity of a storage system is only an upper bound to the quantity of information stored therein. If the two possible values of one bit of storage are not equally likely, that bit of storage contains less than one bit of information. If the value is completely predictable, then the reading of that value provides no information at all (zero entropic bits, because no resolution of uncertainty occurs and therefore no information is available). If a computer file that uses n bits of storage contains only m < n bits of information, then that information can in principle be encoded in about m bits, at least on the average. This principle is the basis of data compression technology. Using an analogy, the hardware binary digits refer to the amount of storage space available (like the number of buckets available to store things), and the information content the filling, which comes in different levels of granularity (fine or coarse, that is, compressed or uncompressed information). When the granularity is finer—when information is more compressed—the same bucket can hold more.

    For example, it is estimated that the combined technological capacity of the world to store information provides 1,300 exabytes of hardware digits. However, when this storage space is filled and the corresponding content is optimally compressed, this only represents 295 exabytes of information.[18] When optimally compressed, the resulting carrying capacity approaches Shannon information or information entropy.[17]
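
    The quoted point that an unequally likely bit stores less than one bit of information is the binary entropy function; a minimal Python sketch:

    ```python
    import math

    # Shannon entropy of a binary variable: a biased bit carries less than
    # one bit; a fully predictable one carries none.
    def binary_entropy(p):
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    print(binary_entropy(0.5))  # 1.0 bit: maximally informative
    print(binary_entropy(0.9))  # ~0.469 bits
    print(binary_entropy(1.0))  # 0.0 bits: no uncertainty resolved
    ```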

  162. 162
    Alan Fox says:

    We confidently infer design as best causal explanation, indeed the only empirically supported one for such.

    There’s no argument about that. I just happen to think the mechanism of design is explained by evolutionary theory. The niche is God’s design tool.

  163. 163
    Alan Fox says:

    To lie is to speak with disregard to truth in hope of profiting from what is said or suggested being taken as true.

    Nope. My definition is the correct one.

  164. 164
    Alan Fox says:

    Real life calls. Done for the next day or two at least.

  165. 165
    kairosfocus says:

    AF, playing definitionitis, nominalism games? To speak with disregard to truth is to refuse to tell known or knowable truth, e.g. to shirk the duty of acknowledging ignorance or risk and of refusing to give misleading part-truths. The lying compounds that refusal of duty with misrepresentation as though what were represented is true, and does so to gain advantage. In this case, as an educated person, you know or could easily know about bits and information capacity. You can further at least appreciate the gap due to redundancies, uneven odds of different states etc. Then you can readily see that the coded algorithms in the D/RNA of the cell swamp blind needle-in-haystack thresholds. You also know about language-using intelligence and the goal-directed, finite-steps nature of algorithms. Such strongly point to design. KF

    PS, Notice Wikipedia’s further admission on undeniable states of affairs:

    In mathematics and computer science, an algorithm is a finite sequence of rigorous instructions, typically used to solve a class of specific problems or to perform a computation.[1]

  166. 166
    ET says:

    Alan Fox:

    I just happen to think the mechanism of design is explained by evolutionary theory.

    The “theory” that changes happen? The theory that whatever is good enough to survive may get the chance to reproduce? The theory that some changes have a better chance of being eliminated than others? Can you please link to this alleged scientific theory of evolution so we can all read what it actually explains?

    The niche is God’s design tool.

    Nope. The niche only hones the already existing and well-established design.

  167. 167
    ET says:

    Alan Fox:

    Richard Lenski’s LTEE demonstrates the evolutionary process in real time.

    Right. The LTEE has demonstrated the severe limits of evolutionary processes.

    Thank you, Alan.

  168. 168
    ET says:

    Alan Fox:

    Durston has made zero impact in the scientific world.

    Durston refutes your asinine claims. And this “scientific world” cannot demonstrate that blind and mindless processes produced life and its diversity. They can’t even formulate a scientific theory of evolution. Heck, they don’t even know what determines biological form!

    “First, DNA is not self-reproducing, second, it makes nothing and third, organisms are not determined by it”. Lewontin, Richard C. (1992). “The Dream of the Human Genome”, The New York Review, May 28, 31-40.

    What scientific impact has been made in the name of evolution by means of blind and mindless processes? Besides the obvious negative one.

  169. 169
    ET says:

    Alan Fox:

    However evolutionary theory does not propose a process based only on chance. There is bias.

    The bias changes. Even a loss of function can be beneficial.

    Sexuality has brought joy to the world, to the world of the wild beasts, and to the world of flowers, but it has brought an end to evolution. In the lineages of living beings, whenever absent-minded Venus has taken the upper hand, forms have forgotten to make progress. It is only the husbandman that has improved strains, and he has done so by bullying, enslaving, and segregating. All these methods, of course, have made for sad, alienated animals, but they have not resulted in new species. Left to themselves, domesticated breeds would either die out or revert to the wild state—scarcely a commendable model for nature’s progress.

    (snip a few paragraphs on peppered moths)

    Natural Selection, which indeed occurs in nature (as Bishop Wilberforce, too, was perfectly aware), mainly has the effect of maintaining equilibrium and stability. It eliminates all those that dare depart from the type—the eccentrics and the adventurers and the marginal sort. It is ever adjusting populations, but it does so in each case by bringing them back to the norm. We read in the textbooks that, when environmental conditions change, the selection process may produce a shift in a population’s mean values, by a process known as adaptation. If the climate turns very cold, the cold-adapted beings are favored relative to others; if it becomes windy, the wind blows away those that are most exposed; if an illness breaks out, those in questionable health will be lost. But all these artful guiles serve their purpose only until the clouds blow away. The species, in fact, is an organic entity, a typical form, which may deviate only to return to the furrow of its destiny; it may wander from the band only to find its proper place by returning to the gang.

    Everything that disassembles, upsets proportions or becomes distorted in any way is sooner or later brought back to the type. There has been a tendency to confuse fleeting adjustments with grand destinies, minor shrewdness with signs of the times.

    It is true that species may lose something on the way—the mole its eyes, say, and the succulent plant its leaves, never to recover them again. But here we are dealing with unhappy, mutilated species, at the margins of their area of distribution—the extreme and the specialized. These are species with no future; they are not pioneers, but prisoners in nature’s penitentiary.

    Sexual selection and sexual reproduction rein in the odd deviants. Again, all we observe is the honing of an already existing, well-established design.

  170. 170
    doubter says:

    AF@151

    Wikipedia is a great resource used properly. The idea is to use it as a gateway to the primary sources.

    A great resource? That’s a laugh. Used any way, for some subjects it is a great source of biased misinformation. Your faith in Wiki just goes to show your status as a faithful, certified, card-carrying member of the church of scientism.

    Just the tip of the iceberg would be Wiki’s dreadful coverage of parapsychology. It is typical of the very strong bias exhibited by Wikipedia and their “thought police”.

    The Wikipedia item on psi and ESP is a real hatchet job. From the writeup: “Second sight and ESP are classified as pseudosciences”. “Pseudoscience consists of statements, beliefs, or practices that claim to be both scientific and factual but are incompatible with the scientific method. Pseudoscience is often characterized by contradictory, exaggerated or unfalsifiable claims; reliance on confirmation bias rather than rigorous attempts at refutation; lack of openness to evaluation by other experts; absence of systematic practices when developing hypotheses; and continued adherence long after the pseudoscientific hypotheses have been experimentally discredited.”

    Of course this Wiki article ignores or dismisses major meta-analyses of the data, like Etzel Cardena’s survey article on psi and esp research findings in American Psychologist, which presented a very strong case for the reality of these phenomena based on the cumulatively overwhelmingly evidential peer-reviewed research findings from many studies accumulated over the years. The title was “The experimental evidence for parapsychological phenomena” at https://ameribeiraopreto.files.wordpress.com/2018/12/The-Experimental-Evidence-for-Parapsychological-Phenomena.pdf. From the Abstract: “The evidence (presented here) provides cumulative support for the reality of psi, which cannot be readily explained away by the quality of the studies, fraud, selective reporting, experimental or analytical incompetence, or other frequent criticisms. The evidence for psi is comparable to that for established phenomena in psychology and other disciplines, although there is no consensual understanding of them.”

    Any open-minded examination of the empirical evidence shows that parapsychology is not pseudo-science as claimed by Wikipedia, but of course Wiki complacently lies that it is, and knows that it is trusted by millions as a good source of information. Not.

    With the Cardena paper the best that the materialist scientistic skeptics could do when presented with this challenge was Reber and Alcock’s incredible response (at https://skepticalinquirer.org/2019/07/why-parapsychological-claims-cannot-be-true/), where they couldn’t or wouldn’t waste their precious time and effort in actually examining the details of the data and research experimental results, but instead they closed-mindedly went back to David Hume and his old “pigs can’t fly” philosophical/metaphysical argument against “miracles” contravening currently understood natural law. Reber and Alcock claimed that esp and psi are simply existentially impossible, regardless of absolutely any conceivable evidence. Essentially, they threw out without examination the very large body of highly evidential experimental research results, a very large body of empirical evidence, just because they didn’t and couldn’t believe them. They strongly believe that all the data regardless of quality just must in principle be false in some way, with no need to actually show this falsity in detail.

    Wow, case closed. What an excellent argument. Of course, the real reason for their use of this tired and invalid old argument was that they knew that they couldn’t plausibly challenge the findings documented in Cardena’s paper.

  171. 171
    relatd says:

    Doubter at 170,

    I have studied ESP and psi. There are other subjects where others react the same way. At first, it surprised me. Later, I concluded that they either do not want to believe good data or they are trying to hide something. The example I’m referring to was backed by a NASA Technical Report. But the replies were just howls of “No! It can’t be!” Uh, it’s in a Technical Report produced by NASA, and I get this?

    None of these people could give a rational response even though at least a few claim to have some expertise in the example in question.

  172. 172
    asauber says:

    Wikipedia is good for rock band trivia.

    Beyond that…

    Andrew

  173. 173
  174. 174
    doubter says:

    Relatd@171

    There are many subjects and movements that, despite ample evidence for their reality, are derided in their Wiki articles, in an obvious smear campaign against anything that seems to conflict with reductive materialism and the mainstream consensus view of reality: that it is ultimately meaningless matter in a void, and that current conceptions of science are final, absolute reality, despite the obvious fact that these conceptions tend to change every few generations, paced by the rate of the funerals of the “experts”. It is a sure sign of the grip of the secular religion of scientism on our current society. This is essentially the worship of naturalism and reductive materialism, and the active persecution and suppression of any tendency to stray from the faith.

    The treatment by Wiki of Intelligent Design is perhaps even worse than its treatment of ESP and the paranormal in general. Wikipedia similarly falsely claims ID is pseudoscience, and adds the also patently false claim that it is Creationism in disguise. This is materialist propaganda aimed at convincing people that the evidenceless secular religion of Darwinism is the truth.

  175. 175
    relatd says:

    Doubter at 174,

    Wikipedia can be useful. In some cases, such as you describe, it can be edited or modified by anyone. In the case of the business I work for, we have a Wikipedia page. It contains false, inaccurate, and otherwise problematic information. We attempted to post a corrected version. Persons unknown changed it back.

    In the case of my example, certain people on another message board attempted to either convince themselves or others that the information I provided, backed up by a NASA Technical Report, could not be true. I suspect the primary reason was that it was about a piece of technology that appeared earlier than history would lead people to believe. The other problem was that it was obtained from a foreign country after World War II.

  176. 176
    JVL says:

    Relatd: I suspect the primary reason was that it was about a piece of technology that appeared earlier than history would lead people to believe. The other problem was that it was obtained from a foreign country after World War II.

    Just curious . . . what bit of technology was that then?

  177. 177
    kairosfocus says:

    Relatd, TV was developed in the teens and twenties; the BBC began regular broadcasts in 1936. Pulse Code Modulation also dates from the late 1930s, as does the first jet flight, the Heinkel 178 in 1939, IIRC. Things were happening far earlier than people may realise. KF

  178. 178
    relatd says:

    JVL at 176,

    A Mach 10 wind tunnel.

  179. 179
    JVL says:

    Relatd: A Mach 10 wind tunnel.

    Initially developed by Nazi Germany? And eventually realised in Tennessee in the 50s? Is that view controversial?

  180. 180
    relatd says:

    JVL at 179,

    I have no idea where your information comes from. The wind tunnel was installed in the United States in 1947. The supposed 'experts' on another board either didn't want to believe it or sought to suppress the knowledge that it happened in that year.

  181. 181
    JVL says:

    Relatd: The wind tunnel was installed in the United States in 1947. The supposed 'experts' on another board either didn't want to believe it or sought to suppress the knowledge that it happened in that year.

    Well, as far as I can see, the technology was definitely cutting edge, and the Nazis worked on it, but it is not that surprising or out of line with known research.

  182. 182
    relatd says:

    JVL at 181,

    I have some expertise in this area. In 1947, supposedly, no one had anything fast enough to warrant the building of a Mach 10 wind tunnel. This isn't cutting edge; this is 'beyond anything that existed at the time' according to those 'experts' I referred to. Things are not built for no reason. So, you are quite wrong. This was far beyond any "known" – according to the history books – technology from the period.

    The V-2 rocket traveled at over Mach 4.3.

  183. 183
  184. 184
    relatd says:

    JVL at 183,

    The Mach 10 wind tunnel went into operation in the U.S. in 1947 or 10 years earlier, which explains the ‘objections’ raised by the ‘experts.’

    A photo of a wind tunnel model of the A-4 (German designation for V-2) is shown in a variable speed wind tunnel with a range of Mach 1.1 to 4.4, on page 39 of V-Missiles of the Third Reich – The V-1 and V-2 by Dieter Hölsken.

  185. 185
    kairosfocus says:

    Relatd, the one snatched from Germany and taken to the US? Germans have a reputation for overbuilding; e.g., their radars were snatched to use as radio telescopes, as they were way better than necessary for purpose. Then there was Hitler's dismissiveness of the T34, IIRC, because of its crude fit and finish except where needed. And more. KF

  186. 186
    JVL says:

    Relatd: The Mach 10 wind tunnel went into operation in the U.S. in 1947 or 10 years earlier

    That’s quite a range of years considering that a lot of work was being done at the time.

    I guess I’m not completely sure what you are saying: that kind of early development by US scientists was quick but not if they had information from work that had already been done in Germany . . . or not?

    I get that some people are not familiar with the history of the research but, given that, are any of the results that far out of expectations? Your decriers sound simply misinformed to me. So? They couldn't even be bothered to do a decent online search. I guess that's your whole point.

  187. 187
    relatd says:

    JVL at 186,

    Don’t guess when you can find out. The decriers were mostly people who specialized in aerospace. This information either shocked them or they sought to cover it up. They should not be misinformed.

    Another way of putting it is this: What was the U.S. doing with a Mach 10 wind tunnel in 1947? The answer is not nothing. Something like this was too advanced for the late 1940s. Considering also that it was a wartime German development.

    You lack a comprehensive knowledge of wind tunnels and their alleged historical development. The German variable wind tunnel that could reach Mach 4.4 was in operation by late 1940. Again, early in terms of other developments in other countries.

  188. 188
    Alan Fox says:

    Cardena’s paper.

    Where are the people with these supernatural abilities? Why are they not on front pages, prime-time TV?

  189. 189
    Alan Fox says:

    KF in comment 173

    [Kairosfocus, August 2, 2022 at 6:24 am]
    AF, you continue definitionitis. Okay, here is a description and context for FSCO/I

    You make my point for me. "FSCO/I" is your own unique invention. Nobody else gives it a moment's consideration. Though, I'll see if I can find time to wade through that field of chaff to find any wheat. In the meantime, what would impress me is if KF could show me where anyone else is discussing "FSCO/I" and taking it seriously.

  190. 190
    Alan Fox says:

    Don’t guess when you can find out.

    Physician, heal thyself. 😉

  191. 191
    kairosfocus says:

    AF, strawman, compounded by Alinsky style personalisation and polarisation that boils down to: I demand details, then use dismissive rhetorical stunts to evade them when countered. This in an age where complex functional information is ROUTINELY measured in bits and through the informational school of thermodynamics that has long been tied to entropy and the second law. All I did, as you know but of course refuse to acknowledge, is to abbreviate a descriptive phrase for a concept and metric tracing to Orgel and Wicken, who outlined the concept and the principle of measurement over a decade prior to the origin of ID. Functionally specific information can be explicit in a string, as in D/RNA or text in this thread or code on a PC. It can be implicit in the reducibility of a functional configuration to a description of the Wicken wiring diagram, as in the process-flow network of cellular metabolism or an oil refinery alike. It is inherently measurable in bits, as is a commonplace of an information age. Adjusting for redundancy is what Abel, Durston et al did. You cannot contest those facts, nor the blind needle in haystack search challenge beyond 750 +/- 250 bits. The cell, just on genome, is 100k – 1,000k bases, and body plans 10 – 100+ mn, vastly beyond sol system or observed cosmos search capacity. Worse, we have alphanumeric, string, coded algorithms, directly language and goal directed processes. There is just one empirically founded causal source with capability for such: design. There is excellent reason to infer design, and such is only resisted for ideological reasons tied to the self-refuting a priori evolutionary materialistic scientism highlighted by Lewontin and quite a few others. KF
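
    To put rough numbers on those genome figures, a minimal sketch in Python (2 bits per base is raw carrying capacity, before any redundancy adjustment):

        # Raw capacity at 2 bits per base vs the 500-1,000 bit threshold.
        cases = {
            "first cell genome, low end": 100_000,
            "first cell genome, high end": 1_000_000,
            "body plan, low end": 10_000_000,
        }
        for label, bases in cases.items():
            print(f"{label}: {2 * bases:,} bits of capacity "
                  f"(threshold: 500-1,000 bits)")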

  192. 192
    ET says:

    Alan Fox:

    “FSCO/I” is your own unique invention. Nobody else gives it a moment’s consideration.

    You “argue” like a child. That the “environment designs” is YOUR unique invention. Nobody else gives it a moment’s consideration.

  193. 193
    kairosfocus says:

    AF,

    I will comment on points:

    AF, 120: >>I try to see the world as it is>>

    1: If that were so, it would be admirable objectivity.

    >>and base my remarks on facts.>>

    2: The evidence above shows evasion of facts, starting with the ubiquity of functional information based on strings or configurations, measurable in bits of capacity and adjusted for redundancy. (You seem to lack familiarity with the underlying theory of information and communication and to imagine that you can dismiss it because of who points it out. Which, of course, lacks objectivity.)

    >>Warrant?>>

    3: Warrant is a key component of what is knowable, speaking to credible realities, right reason, sufficiency to ground conclusions. Your unresponsiveness to the bit speaks volumes in a digital age.

    >> I’m a pragmatist.>>

    4: Pragmatism, strictly, is in serious hot water as a view on truth and knowledge. As is any variety of relativism, subjectivism, emotivism etc. We have already seen how objective knowledge necessarily and undeniably exists for any reasonably distinct field of discussion.

    >>Rules that work best flow from consensus>>

    5: Once significant worldviews issues and the attitude of hyperskepticism are on the table, consensus is impossible. Instead, truth, right reason, warrant and wider prudence are what we have. Your hyperskepticism does not control our knowledge, nor should it.

    >> and fairness, >>

    6: Fairness is of course part of our first duties, where selective hyperskepticism is always imprudent, unwarranted, a violation of right reason, and is unfair.

    >>not unquestioned authority. >>

    7: Strawman caricature projection; no one in this discussion has seriously advocated blind modesty in the face of claimed authority. To suggest such in order to taint is snide and out of order.

    >>There is no absolute objective warrant.>>

    8: Such as for this?

    9: In short, this is a self referentially incoherent, self defeating, necessarily false assertion. Some things may be warranted to undeniable certainty as self evident, others on known or accessible realities may hold moral certainty, others have a weaker provisional prudent warrant including theoretical, explanatory constructs of science. I get the feeling some reflection on logic, logic of being and epistemology would be advisable.

    >>People insist, agree, argue, fight, endure whatever rules emerge in human societies.>>

    10: This sounds much like cultural relativism, which fails.

    >>I’m sure we can all think of better ways for our community to function, but there’d be little consensus.>>

    11: Irrelevancy and again appeal to cultural relativism.

    >>You do err, frequently, and at length. It is fortunate you have no power to enforce your ideas to any significant extent on others.>>

    12: Little more than turnabout projection, to feed personalisation and polarisation. On the subject in hand, the binary digit is not a personal matter, nor is the concept of functional information, nor that information can be implicit in functional organisation.

    13: All of this resort, is to try to dismiss my having drawn from Orgel, Wicken and others that there is an observable [and quantifiable] phenomenon, functionally specific, complex organisation and/or associated information. That, I abbreviated FSCO/I, and have long since pointed to sources. There is no responsible reason to disregard it, we see here ideologically motivated artificial controversy driven by selective hyperskepticism.

    14: The obvious reason? Such FSCO/I is readily observable with trillions of cases and once we are beyond 750 +/- 250 bits, uniformly is seen to come about by intelligently directed configuration. Further, it can be shown that blind needle in haystack search is not a plausible cause for it. So, as this includes the genome, which has coded algorithmic information (so, language and goal directed process), that strongly points to the cell and to major body plans being designed. You cannot counter on merits, but are determined to reject the possibility of design so you have resorted instead to quarrelsome rhetorical stunts.

    >>A fact for you to consider.>>

    15: Considered for over a decade.

    >>You are unique in claiming that “FSCO/I” is a genuine, quantifiable concept>>

    16: False, you have hyperskeptically refused to recognise a descriptive phrase for a ubiquitous phenomenon in a technological, information age, functional information [rather than info carrying capacity] that is beyond a threshold where it is plausible to suggest it could have come about by blind chance and/or mechanical necessity.

    17: I have made available to you clips from Orgel and Wicken, which are my sources, which you have dodged. Let me clip here Wicken’s wiring diagram comment:

    ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)]

    18: Quantifiability of course, as has been pointed out any number of times, starts with information carrying capacity, often in bits or bytes. Beyond that, in an information theory context, when redundancy enters there is a reduction, a familiar phenomenon with codes, as information is connected to surprise and the removal of uncertainty. In English, about 1/8 of normal text is the letter e, and rarer ones such as x convey more information.

    >> yet have failed utterly to justify that claim.>>

    19: Manifestly, insistently false to the point of speaking with disregard to truth.

    So, in the end, the objections fail.

    KF

  194. 194
    Alan Fox says:

    KF in 193

    Thanks for at least using paragraphs and numbering them. The questions that interest me are:

    1. What precisely is “FSCO/I”

    2. How is it quantified?

    3. Who, apart from Kairosfocus, talks about “FSCO/I”?

  195. 195
    kairosfocus says:

    AF, long since answered, you are playing at willful obtuseness. A descriptive phrase for a ubiquitous phenomenon in an information age being treated with hyperskepticism is a strong sign of just how threadbare the objections are. FSCO/I = “functionally specific complex organisation and/or associated information,” which describes, it does not invent. And that is a root problem, nominalism; it fails, there are abstracta such as information and quantities, that are very real. Information is measurable as capacity in bits, counted from string length of two state elements to hold it. Wicken pointed out that — with implied compact description languages — information is implicit in functionally specific organisation and its wiring diagram. Functionality dependent on configuration is highly observable, look at any auto parts shop or at how readily information is garbled by noise. You contributed many cases in point in this thread or elsewhere. So, you know full well what you pretend to doubt. That tells us just how powerful is the discovery of coded algorithmic information in D/RNA in the cell and its function as basic module of life. Where life is of course notoriously undefined in the sense of a consensus precising statement, but is readily recognised. Definitionitis rhetoric fails. KF

    PS, it matters not 50c that I use and explain the description, the substance is real and similar phrasing is everywhere. Start with Orgel and Wicken as already cited and see if you can bring yourself to acknowledge they have a point. In speaking of specified complexity [coming from Orgel] and on complex specified information Dembski pointed out that for biological systems such is cashed out in terms of functionality. That is, functionally specific configurations. And Abel, Durston et al have reduced that to an analysis pivoting on observed range of variation in life for enzymes etc.

  196. 196
    kairosfocus says:

    PPS, for further contemplation:

    CONCEPT: NFL, p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology.

    I submit that what they have in mind is specified complexity [cf. p 144 as cited below], or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . .

    Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. . . .

    In virtue of their function [a living organism’s subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways

    [through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole. Dembski cites:

    Wouters, p. 148: “globally in terms of the viability of whole organisms,”

    Behe, p. 148: “minimal function of biochemical systems,”

    Dawkins, pp. 148 – 9: “Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality that is specified in advance is . . . the ability to propagate genes in reproduction.”

    On p. 149, he roughly cites Orgel’s famous remark on specified complexity from 1973, which exactly cited reads:

    ” In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . .”

    And, p. 149, he highlights Paul Davies in The Fifth Miracle: “Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity.”] . . .”

    DEFINITION: p. 144: [Specified complexity can be more formally defined:] “. . . since a universal probability bound of 1 [chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [the cluster] (T, E) constitutes CSI because T [effectively the target hot zone in the field of possibilities] subsumes E [effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ”

    I do not think there is any necessity to engage in full probability analysis; the issue is plausibility in the face of the blind, needle-in-haystack search challenge. That is why I point to 10^57 sol system atoms [where most are H and He in the sun] and to 10^80 for the observed cosmos, with fast reactions of organic character rated at up to 10^-14 s, and 10^17 s as the order of magnitude of available time. The 3.27×10^150 to 1.07×10^301 possibilities [for 500 to 1,000 bits] swamp those resources, so only a negligible search of the configuration space is possible. Where, search for a golden search can be seen in light of how a search samples a subset: for a set of n configs, the set of searches is the power set, of scale 2^n, so exponentially harder; suggested golden searches built into the cosmology would be front-loaded fine tuning. Blind watchmaker approaches are maximally implausible.
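
    To make that arithmetic concrete, a minimal sketch in Python using the order-of-magnitude resource figures quoted above:

        # Order-of-magnitude resource figures quoted above.
        atoms_cosmos = 10**80   # atoms in the observed cosmos
        avail_time_s = 10**17   # available time, in seconds
        fast_rxn_s   = 10**14   # fast organic reaction events per second

        max_events = atoms_cosmos * avail_time_s * fast_rxn_s  # ~10^111

        for bits in (500, 1000):
            configs = 2 ** bits   # 3.27e150 and 1.07e301 respectively
            print(f"{bits} bits: {configs:.2e} configs; "
                  f"searchable fraction at most {max_events / configs:.1e}")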

    PPPS, as you seem unfamiliar with the underlying state or phase space thinking, Walker and Davies:

    In physics, particularly in statistical mechanics, we base many of our calculations on the assumption of metric transitivity, which asserts that a system’s trajectory will eventually [–> given “enough time and search resources”] explore the entirety of its state space – thus everything that is physically possible will eventually happen. It should then be trivially true that one could choose an arbitrary “final state” (e.g., a living organism) and “explain” it by evolving the system backwards in time choosing an appropriate state at some ’start’ time t_0 (fine-tuning the initial state). In the case of a chaotic system the initial state must be specified to arbitrarily high precision. But this account amounts to no more than saying that the world is as it is because it was as it was, and our current narrative therefore scarcely constitutes an explanation in the true scientific sense.

    We are left in a bit of a conundrum with respect to the problem of specifying the initial conditions necessary to explain our world. A key point is that if we require specialness in our initial state (such that we observe the current state of the world and not any other state) metric transitivity cannot hold true, as it blurs any dependency on initial conditions – that is, it makes little sense for us to single out any particular state as special by calling it the ’initial’ state. If we instead relax the assumption of metric transitivity (which seems more realistic for many real world physical systems – including life), then our phase space will consist of isolated pocket regions and it is not necessarily possible to get to any other physically possible state (see e.g. Fig. 1 for a cellular automata example).

    [–> or, there may not be “enough” time and/or resources for the relevant exploration, i.e. we see the 500 – 1,000 bit complexity threshold at work vs 10^57 – 10^80 atoms with fast rxn rates at about 10^-13 to 10^-15 s leading to inability to explore more than a vanishingly small fraction on the gamut of Sol system or observed cosmos . . . the only actually, credibly observed cosmos]

    Thus the initial state must be tuned to be in the region of phase space in which we find ourselves [–> notice, fine tuning], and there are regions of the configuration space our physical universe would be excluded from accessing, even if those states may be equally consistent and permissible under the microscopic laws of physics (starting from a different initial state). Thus according to the standard picture, we require special initial conditions to explain the complexity of the world, but also have a sense that we should not be on a particularly special trajectory to get here (or anywhere else) as it would be a sign of fine–tuning of the initial conditions. [ –> notice, the “loading”] Stated most simply, a potential problem with the way we currently formulate physics is that you can’t necessarily get everywhere from anywhere (see Walker [31] for discussion). [“The “Hard Problem” of Life,” June 23, 2016, a discussion by Sara Imari Walker and Paul C.W. Davies at Arxiv.]

    More on the anthropic principle from Lewis and Barnes https://uncommondescent.com/intelligent-design/hitchhikers-guide-authors-puddle-argument-against-fine-tuning-and-a-response/#comment-729507

    And on and on for those willing to rise above willful obtuseness and hyperskepticism.

  197. 197
    ET says:

    1. What precisely is environmental design?
    2. How is it quantified?
    3. Who, besides Alan and Fred, talks about environmental design?

  198. 198
    kairosfocus says:

    ET, there is endless talk of fitness functions and hill climbing. There is a common assumption of well-behaved functions, though the issue of ruggedness, as I discussed, is not properly appreciated. However, given FSCO/I, we have issues of multiple, well adapted, matched, properly arranged and coupled parts to achieve function, as is easily seen with the exploded view of a case study, the ABU 6500 reel [simpler than Paley’s watch and from a firm that made taxi meters]. In short, islands of function separated by vast seas of non-functional clumped or scattered configurations are very real. The dominant search challenge is to get to a shoreline of function before hill climbing and specialised adaptation can modify the body plan or architecture or wiring diagram. Where, with 500 – 1,000 bits as a threshold, atomic and time resources cannot carry out a significant config space search. So, FSCO/I by blind needle in haystack search is analytically maximally implausible. There are trillions of cases by intelligently directed configuration, as intelligence plus knowledge plus technique are fully capable. FSCO/I is a signature of design. All this has been outlined, explained and thrashed out over a decade ago, but we are not dealing with intellectual responsiveness. KF

  199. 199
    ET says:

    I agree. And Alan’s obfuscation and willful ignorance are not arguments against that.

    What Alan will never present is evidence that blind and mindless processes produced any bacterial flagellum, for example. He can’t even tell us how to test the claim that blind and mindless processes are capable of producing any bacterial flagellum. And he doesn’t understand that science rejects claims that are evidence-free and cannot be tested.

  200. 200
    Alan Fox says:

    ET

    He can’t even tell us how to test the claim that blind and mindless processes are capable of producing any bacterial flagellum.

    Joe, Joe, KF will tell you every tub must stand on its own bottom. Rail against evolutionary theory if you want, but it doesn’t change the fact that for “Intelligent Design” there is no tub and no bottom. And still nobody can tell me what FSCO/I is, not even the guy who invented it.

  201. 201
    jerry says:

    Rail against evolutionary theory if you want, but it doesn’t change the fact that for “Intelligent Design

    So the argument for Evolution is based on something else not being true?

    Please tell us what other scientific belief is based on that. The answer: none.

    still nobody can tell me what FSCO/I is

    Aside: I can define functional complex specified information. It was done years ago. It can be measured in terms of its complexity just as an individual sentence can be measured.

    The current problem is that I am at the New Jersey shore on vacation for 10 days, so finding specific discussions of complex specified functional information from 13-15 years ago is difficult.

    But to suggest there is no definition is nonsense. So what else is new?

  202. 202
    ET says:

    Alan, Alan. When there are TWO choices, intelligently designed or not, evidence against one supports the other. But I understand that you couldn’t grasp that fact.

    Also, science mandates that all design inferences first eliminate chance and necessity. See Newton’s 4 rules of scientific reasoning, parsimony and Occam’s razor. But you have been told this many times and it still hasn’t sunk in.

    That said, all bacterial flagella fit the criteria for being intelligently designed. First of all, they are all irreducibly complex.

    “Our ability to be confident of the design of the cilium or intracellular transport rests on the same principles to be confident of the design of anything: the ordering of separate components to achieve an identifiable function that depends sharply on the components.”– Behe in DBB

    And all you can do is to lie and deny that reality. It sucks to be you.

  203. 203
    kairosfocus says:

    AF, this is not about evolution insofar as descent with modification is concerned. Dogs show modification by variation and artificial selection; gulls and other circumpolar species show natural adaptation and biogeography until the two ends of the ring overlap in Europe etc. Galapagos finches show radiation but also that successful cross-species breeding occurs. Red Deer and American Elk proved able to interbreed in New Zealand. The issue is to arrive at body plans de novo, starting with the unicellular organism and then getting to dozens of body plans. Hill climbing does not explain arriving at a beachhead on an island of function, which makes blind needle in haystack search utterly implausible. That is why you suddenly have all sorts of hyperskepticism about a commonplace phenomenon, FSCO/I, and how to construct metrics. That reaction tells us your view has crippling difficulties accounting for information and organisation beyond 500 – 1,000 bits. KF

  204. 204
    Alan Fox says:

    Jerry,

    Don’t worry about arguing with me. Enjoy your vacation. I’ll still be here when you get back…

    If I’m spared! 🙂

  205. 205
    Alan Fox says:

    Alan, Alan. When there are TWO choices, intelligently designed or not, evidence against one supports the other.

    The Sherlock Holmes argument? Good grief! Every tub must stand on its own bottom. You have to include the possibility of the explanation we haven’t thought of.

  206. 206
    Alan Fox says:

    That reaction tells us your view has crippling difficulties accounting for information and organisation beyond 500 – 1,000 bits.

    But I don’t need to. There is no requirement for such a concept in the evolutionary model. You need to show how to quantify such claims and then explain how your model works in a biological system before I need to be concerned.

  207. 207
    relatd says:

    AF at 205,

    Pfft! Double pfft!

  208. 208
    jerry says:

    Don’t worry about arguing with me

    No arguing.

    Just presenting the obvious. FSCI is obvious and simple. How anyone could say there is no measure of it is beyond me.

    As I said measuring a simple sentence in any language is straightforward. Measures of the DNA sequence complexity is just as simple.

  209. 209
    kairosfocus says:

    AF, you also know that with trillions of observed cases, FSCO/I is uniformly, reliably produced by intelligently directed configuration. Thus, you know it is a strong sign of such IDC as key causal factor. Your rhetorical pretences otherwise simply show intent to disregard the basic inductive logic on which science was built. KF

  210. 210
    ET says:

    Alan Fox:

    The Sherlock Holmes argument? Good grief!

    Good grief is right! Are you daft? Given 2 possibilities, it is a fact that eliminating one supports the other.

    Every tub must stand on its own bottom.

    And yet yours doesn’t even exist! And nice of you to ignore what I said and prattle on like an infant.

    AGAIN, science mandates that all design inferences first eliminate chance and necessity. See Newton’s 4 rules of scientific reasoning, parsimony and Occam’s razor. But you have been told this many times and it still hasn’t sunk in. What part of that are you too stupid to understand, Alan?

    You have to include the possibility of the explanation we haven’t thought of.

    Nope. Clearly you don’t understand how science operates. The science of today does not and cannot wait for what the science of tomorrow may or may not uncover. Science is a tentative venture. Scientists understand that their claims of today may be refuted tomorrow. They also understand that their claims may be confirmed. That is the nature of science.

    Science mandates that the claims being made have evidentiary support. It also mandates that the claims being made not only be testable but tested and confirmed. The only evidence for evolution by means of blind and mindless processes is genetic diseases and deformities.

  211. 211
    ET says:

    Alan Fox:

    There is no requirement for such a concept in the evolutionary model.

    There isn’t any requirement for supporting evidence, either. There isn’t any requirement for making testable claims. In other words, the evolutionary model isn’t scientific.

  212. 212
    Alan Fox says:

    Jerry:

    Just presenting the obvious. FSCI is obvious and simple. How anyone could say there is no measure of it is beyond me.

    Obvious and simple, eh?

    In that case, how hard can it be for someone to provide a worked example?

  213. 213
    Alan Fox says:

    Given 2 possibilities, it is a fact that eliminating one supports the other.

    How do you know there are two possibilities? There’s an evolutionary explanation. There are several religious explanations. But there could be ones we haven’t heard of yet. ID folks may even end up explaining something one day.

    Not you though, Joe.

  214. 214
    Alan Fox says:

    KF

    AF, you also know that with trillions of observed cases, FSCO/I is uniformly, reliably produced by intelligently directed configuration.

    No, I keep asking but you still avoid telling me what “FSCO/I” is and how to calculate it.

  215. 215
    ET says:

    Alan Fox:

    How do you know there are two possibilities?

    Really? Again, Intelligently Designed or not sweeps the field clean.

    There’s an evolutionary explanation.

    Yes, your continued equivocation is duly noted. Intelligent Design has an evolutionary explanation, too. Intelligent Design posits that living organisms were so designed with the information and ability to evolve and adapt. Evolution by means of intelligent design, ie telic processes. Genetic algorithms exemplify evolution by means of telic processes.

    So, please stop equivocating.

    The only things evolution by means of blind and mindless processes can explain are genetic diseases and deformities.

    ID folks may even end up explaining something one day.

    Already have. But we are still waiting for you and yours to come up with something.

    Not you though, Joe.

    And yet I have, Fred.

    Why do you think your willful ignorance is an argument?

  216. 216
    ET says:

    Alan Fox:

    No, I keep asking but you still avoid telling me what “FSCO/I” is and how to calculate it.

    It has been explained, ad nauseam. YOU are the problem, Fred.

  217. 217
    jerry says:

    In that case, how hard can it be for someone to provide a worked example?

    Not hard at all.

    Every sentence in this thread is an example of FSCI.

    The probability of getting each sentence is just 1/29 x 1/29 etc. for each letter, space, comma and period. Actually this limits the options, so it’s best to use 1/60 to cover capitalization and other punctuation marks. So

    Every sentence in this thread is an example of FSCI.

    Contains 52 characters, so the probability of hitting the sentence at random is 1 in 60^52. There is effectively no chance of generating it in the history of the universe using random choices for each character. (This is about 1 in 10^92.)

    For biology, the probabilities of DNA sequences producing proteins could be calculated in a similar fashion.
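
    A quick check of that arithmetic in Python:

        import math

        sentence = "Every sentence in this thread is an example of FSCI."
        alphabet = 60  # letters, capitals, space and punctuation, as above

        n = len(sentence)                    # 52 characters
        log10_p = -n * math.log10(alphabet)  # log10 of the random-hit chance
        bits = n * math.log2(alphabet)       # ~5.9 bits of capacity per char

        print(f"{n} characters; P(random hit) ~ 10^{log10_p:.1f}")
        print(f"Capacity consumed: ~{bits:.0f} bits")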

  218. 218
    Viola Lee says:

    As someone who has taught probability, I protest that this explanation is extremely simplistic and unrealistic. To think that pure chance involving a certain number of simultaneous and independent events is how things happen in the real world is naive and basically irrelevant to any real-world situation.

  219. 219
    es58 says:

    As someone who has taught probability, I protest that this explanation is extremely simplistic and unrealistic. To think that pure chance involving a certain number of simultaneous and independent events is how things happen in the real world is naive and basically irrelevant to any real-world situation.

    Tell us more: why would this have to be off by orders of magnitude? It would have to be for the objection to be relevant.

  220. 220
    ET says:

    No, Viola Lee. It doesn’t have anything to do with simultaneous events. Independent events, yes.

    7 coin tosses to hit 7 heads or tails is 1/2 x 1/2 x 1/2 x 1/2 x 1/2 x 1/2 x 1/2 = 1/128.

    But once you hit 6 in a row, the last one is still only 1/2
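
    In code, the independence point looks like this (a trivial sketch):

        from fractions import Fraction

        half = Fraction(1, 2)

        # Any one fixed sequence of 7 independent tosses:
        print(half ** 7)   # 1/128

        # Given 6 heads already tossed, the 7th toss is still independent:
        print(half)        # 1/2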

  221. 221
    Viola Lee says:

    True, the events don’t necessarily have to be simultaneous. I can flip 100 coins at once or one coin 100 times, and certain things, such as the distribution, will be the same, but others, such as order, will not. However, my main point is that neither of these is a realistic model of how real-world events happen.

  222. 222
    Viola Lee says:

    And yes, of course each coin toss is independent of what has happened before. Again, not the way most real world events happen.

  223. 223
    ET says:

    With evolution by means of blind and mindless processes, that is exactly how it happens. However, evolution by means of blind and mindless processes doesn’t translate to the real world. Unless you are discussing genetic diseases and deformities.

  224. 224
    kairosfocus says:

    AF, lying again. You full well know we live in a world of functional information, measured in bits of carrying capacity; you are using digital technology. You know it was recognised that such information will have redundancies, and you have seen the working out of how that affects values of encoded info in functional bits. You know such has been published and you have had links to such. (Newbies, see basic survey here on in my always linked through my handle — AF has been around for years and knows better than he speaks yet again.) The info in D/RNA is expressed in 4-state elements, thus two bits of capacity per base, though redundancies obtain and are addressed on much the same basis as in the world of telecommunication and computing. Similarly, proteins are generally 20-state per AA, 4.32 bits per AA of carrying capacity; redundancies reduce the actual functional information. From Orgel and Wicken on, that has been done. You have no excuse for yet again denying what is in front of you. That denial instead reflects desperation to evade the import of that observed functional information in life forms from the cell on up. KF

    PS, for record, I clip the just linked citing Durston et al:

    Eqn. (6) [zeta = delta-H (Xg(ti), Xf(tj)) . . . (6)] describes a measure to calculate the functional information of the whole molecule, that is, with respect to the functionality of the protein considered. The functionality of the protein can be known and is consistent with the whole protein family, given as inputs from the database [of proteins]. However, the functionality of a sub-sequence or particular sites of a molecule can be substantially different [12]. The functionality of a sub-molecule, though clearly extremely important, has to be identified and discovered . . . .

    To avoid the complication of considering functionality at the sub-molecular level, we crudely assume that each site in a molecule, when calculated to have a high measure of FSC [= functional sequence complexity; yes, FSC is not unique to me], correlates with the functionality of the whole molecule. The measure of FSC of the whole molecule, is then the total sum of the measured FSC for each site in the aligned sequences. Consider that there are usually only 20 different amino acids possible per site for proteins, Eqn. (6) can be used to calculate a maximum Fit [= functional binary digit i.e. functional bit] value/protein amino acid site of 4.32 Fits/site [NB: Log2 (20) = 4.32]. We use the formula log (20) – H(Xf) to calculate the functional information at a site specified by the variable Xf such that Xf corresponds to the aligned amino acids of each sequence with the same molecular function f. The measured FSC for the whole protein is then calculated as the summation of that for all aligned sites. The number of Fits quantifies the degree of algorithmic challenge, in terms of probability, in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 10^49 different 121-residue sequences that could fall into the Ribosomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space. In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space. A high Fit value for individual sites within a protein indicates sites that require a high degree of functional information. High Fit values may also point to the key structural or binding sites within the overall 3-D structure.
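
    As a toy illustration of the per-site arithmetic in that clip, a minimal sketch with a made-up four-sequence alignment (purely illustrative, not real protein data; real FSC estimates use large alignments):

        import math
        from collections import Counter

        # Made-up alignment of sequences assumed to share one function.
        alignment = ["MKVLA", "MKVLG", "MRVLA", "MKILA"]

        def site_entropy(column):
            """Shannon entropy H(Xf) of one aligned site, in bits."""
            n = len(column)
            return -sum((c / n) * math.log2(c / n)
                        for c in Counter(column).values())

        # Per-site functional bits: log2(20) - H(Xf), at most 4.32 fits/site.
        ground = math.log2(20)
        fits = sum(ground - site_entropy(col) for col in zip(*alignment))
        print(f"FSC estimate for this toy family: {fits:.2f} fits")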

    Of course, predictably, hyperskeptical denial, dismissal and evasion will continue.

  225. 225
    kairosfocus says:

    VL, have you taught or studied information theory and telecommunications? That is the specific relevant context and I laid out a summary starting with a clip from one of my first t/comms texts, in my longstanding always linked. Kindly see here on. Start with info carrying capacity, how effectively a negative log probability metric arose, then move to redundancies, then consider information used to specify function. Then explain to us what it means when file sizes are measured in bits and bytes etc., then what channel capacity is and the significance of say e being about 1/8 of normal text while say x is much more rare, tying in the concept of surprise. Go on to the informational school of thermodynamics and its import for the second law. Then, ponder phase, state and configuration space. How many possibilities exist for say 8 bits, then 500, 1,000, and how are they distributed per binomial theorem, and what does that tell you about small target zones and blind needle in haystack search? Search for a golden search? [Note that a search samples a set of possibilities so the set of searches is a power set.] I suspect, that part of the dividing line here is that objectors have little familiarity with these matters at the next level up from oh file sizes are in bits and bytes. You are dealing with people who have dealt with such matters but I further suspect the polarised atmosphere leads objectors to imagine that they are dealing with dubious, rhetorically dismissible notions. KF
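
    For instance, the surprisal point can be checked directly; with rough illustrative letter frequencies (not exact corpus values), -log2(p) shows why a rare x carries more information than a common e:

        import math

        freq = {"e": 0.125, "t": 0.09, "x": 0.0015}  # rough, illustrative

        for ch, p in freq.items():
            print(f"surprisal('{ch}') = {-math.log2(p):.2f} bits")
        # e at ~1/8 of text: ~3 bits; the rare x: ~9.4 bits.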

  226. 226
    kairosfocus says:

    PS, given the chaining chemistry for AAs and D/RNA, is there any serious chemical constraint on any of 20 AAs or 4 bases following any other? Think about how that compares to an n element string storage register

    |x|x|x| . . . |x| where each x has say p possible states.

    Yes, we have here p*p* . . . p, = p^n possibilities, leading to metrics of info capacity. Each x will have log p/ log 2 bits of storage capacity, but redundancy will reduce the actual functional information in a practical code.

    And more.
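
    A minimal sketch of those capacity figures in Python:

        import math

        def bits_per_element(p_states):
            """Capacity of one p-state element: log(p)/log(2) bits."""
            return math.log2(p_states)

        print(bits_per_element(4))    # DNA/RNA base: 2.0 bits
        print(bits_per_element(20))   # amino acid: ~4.32 bits
        print(bits_per_element(128))  # 7-bit ASCII character: 7.0 bits

        # An n-element, p-state register spans p**n configurations:
        n, p = 100, 4
        print(f"{p}^{n} = {float(p**n):.2e} configurations, "
              f"{n * bits_per_element(p):.0f} bits of capacity")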

  227. 227
    jerry says:

    As someone who has taught probability

    So have I but in the past.

    The more interesting thing here is why this particular response? Why now? Why only criticize and not try to make more accurate? That is what a teacher usually tries to do.

    For example, the order of letters and punctuation may have some necessity to them. So the actual next character will be limited by the preceding characters in some examples. But in general the example is simple and relevant and the calculations straightforward.

    Aside: who said that the next character was random in the real world? For something to have function, the argument is that it couldn’t happen by chance or any natural process; some intelligence directed it.

    Similarly, certain DNA sequences have function. What is their origin? For some the origin may be analogous to a coin flip. But for most that does not appear to be the origin.

    Aside 2: the origin of DNA sequences in punctuated equilibrium is thought to be analogous to coin tosses. That is their explanation for protein origin. While it may explain the occasional protein, it only explains a small number, and it fails to deal with the origin of functionality in the DNA transcription and translation process.

    Maybe our resident expert on probability would provide an estimate on these probabilities and not just criticize.

  228. 228
    kairosfocus says:

    Jerry, some of that is bound up in redundancy [q usually has u after it in English], some is implied by the message to be sent, some by the need to frame messages so starts are starts, words are words, stops are stops, etc. Those are not bound by chemistry or physics. In a Darwin pond scenario it is chem and physics that would have to compose. KF

  229. 229
    Sandy says:

    To toss a coin you need a person (intelligent agent), a coin (intelligently designed) and a purpose.
    Whatever you want to demonstrate, you need intelligence as a starting point.
    Bad news for some ideologies.

  230. 230
    Alan Fox says:

    Kairosfocus:

    AF, lying again.

    That is a serious and unjustified allegation. For someone who postures as a Christian community leader, it is particularly depressing behaviour to observe. Support the allegation with facts or stop making it. For shame.

  231. 231
    Alan Fox says:

    The info in D/RNA is expressed as 4 state elements thus two bits of capacity per base, though redundancies obtain, and are addressed, on much the same basis as in the world of telecommunication and computing. Similarly generally proteins are 20 state per AA, 4.32 bits per AA carrying capacity, redundancies reduce the actual functional information.

    Absolute balderdash. Neither you nor anyone else can discern the functionality of a novel DNA sequence by performing any sort of numerology on it.

  232. 232
    relatd says:

    Here is an example of information that is functional, specific and caused by an intelligent agent. Look at the line below:

    hereisanexampleofinformationthatisfunctionalspecificandcausedbyanintelligentagent

    It is the first sentence in this post without a starting capital letter and without punctuation. Living cells know how to translate this. How to perform error correction.

    THIS is Intelligent Design. All of it. I have lived to see the day when blind, unguided chance disappears under the truth. The truth for all.

  233. 233
    relatd says:

    AF at 230,

    To someone who is a blowhard, where did you see the title in the following?

    “…postures as Christian community leader…”

    And based on your previous posts, I doubt that you’d actually be heartbroken if this were true.

  234. 234
    Alan Fox says:

    Related

    To someone is a blowhard…

    So you are a blowhard? I did not know that. Luckily, being from Europe, I don’t know what that word means.

  235. 235
    Alan Fox says:

    @ Relatd,

    Looked up “blowhard”.

    “an arrogantly and pompously boastful or opinionated person”

    Fits some others posters here better than you, IMHO.

  236. 236
    jerry says:

    Neither you nor anyone else can discern the functionality of a novel DNA sequence by performing any sort of numerology on it.

    A nonsense statement that has to be known as nonsense.

    Functionality doesn’t come from numerical analysis, but complexity can be determined from numerical analysis. There are zillions of complex entities that have no specific function. But a small percentage of these zillion complex entities (a smaller zillion) have function. The question is how this functionality arose.

    Everyone here knows this is the basic question, even if they pretend ignorance of it by making nonsense statements.

    Aside: the term “blowhard” is not really relevant here. There are certainly blowhards on both sides. But tactics designed to mislead, divert and distract are not necessarily the mark of a blowhard.

    Disingenuous is a better term.

  237. 237
    kairosfocus says:

    AF, as you know, warranted, for cause. You still refuse to acknowledge what is on the table before you. That speaks, not in your favour. KF

  238. 238
    Alan Fox says:

    Jerry:

    Functionality doesn’t come from numerical analysis but complexity can be determined from numerical analysis.

    So can you show how this is done? That would be helpful.

  239. 239
    Alan Fox says:

    KF

    You still refuse to acknowledge what is on the table before you.

    Speaking in riddles again. Acknowledge what?

  240. 240
    kairosfocus says:

    AF, you full well know what is linked and what has been published and cited. Further, you know what bits and bytes are. You know the difference between gibberish, simple repetitive patterns and functional organisation. You also had a link before you, which you side stepped as predicted. Your behaviour is manifestly willful and that resort tells the astute onlooker that playing telescope to blind eye is by implication a demonstration that you have nothing substantial but refuse to acknowledge blatant facts. KF

  241. 241
    jerry says:

    So can you show how this is done? That would be helpful.

    Already done above.

  242. 242
    ET says:

    Alan Fox:

    Neither you or anyone can discern the functionality of a novel DNA sequence by performing any sort of numerology on it.

    BWAAAAAAAAAHAAHAHAHAAAAAAAAHAAAAA!

    Functionality is OBSERVED! Duh! Then we go back to see what produced that functionality. Then we quantify it. What is wrong with you? This has been explained over and over again.

  243. 243
    ET says:

    Earth to Alan Fox- either you are lying about FSCO/I or you are willfully ignorant.

    And seeing that you never support anything you post, you are also a hypocrite.

  244. 244
    kairosfocus says:

    ET, there is a time when deliberate ignorance is deliberate falsity. But in a world of digital phenomena full of bits and bytes, such ignorance is impossible for the reasonably educated. What we actually have is refusal to recognise that 4-state D/RNA elements are essentially parallel to 128-state elements of ASCII text, or to the underlying two-state elements in a storage register, or to the implied info content in a newly assembled AA chain in a cell on the way to being a fully formed protein. The same objectors who claim to speak with the voice of science here show their bankruptcy. KF

  245. 245
    kairosfocus says:

    AF, I clip from 226:

    given the chaining chemistry for AAs and D/RNA, is there any serious chemical constraint on any of 20 AAs or 4 bases following any other? Think about how that compares to an n element string storage register

    |x|x|x| . . . |x| where each x has say p possible states.

    Yes, we have here p*p* . . . p, = p^n possibilities, leading to metrics of info capacity. Each x will have log p/ log 2 bits of storage capacity, but redundancy will reduce the actual functional information in a practical code. [–> that’s what Durston et al discuss, and practical implies observable functionality.]

    And more.

    KF

  246. 246
    ET says:

    GEM, this is beyond craziness, though. There has to be something wrong with the ID critics. Seriously wrong, too.

    You explain things so thoroughly that you lose them! And that cracks me up.

  247. 247
    relatd says:

    ET at 246,

    Perhaps a few ID critics aren’t critics at all, just deniers.

  248. 248
    ET says:

    Exactly, Relatd.

  249. 249
    Alan Fox says:

    Durston’s “fits”? The idea that took the scientific world by storm? Come off it.

  250. 250
    kairosfocus says:

    F/N: Re AF at 231:

    Neither you nor anyone else can discern the functionality of a novel DNA sequence by performing any sort of numerology on it.

    Strawman caricature.

    Jerry is right at 236:

    A nonsense statement that has to be known as nonsense.

    Functionality doesn’t come from numerical analysis, but complexity can be determined from numerical analysis. There are zillions of complex entities that have no specific function. But a small percentage of these zillion complex entities (a smaller zillion) have function. The question is how this functionality arose.

    Everyone here knows this is the basic question, even if they pretend ignorance of it by making nonsense statements.

    So is ET at 242:

    Functionality is OBSERVED! Duh! Then we go back to see what produced that functionality. Then we quantify it. What is wrong with you? This has been explained over and over again.

    First, we can and do measure information carrying capacity, based on effectively strings and states of elements in the strings. That goes back to Shannon and even Hartley. You reacted to try to sideline it, which you must know is wrong.

    Functionality is observed from operation in context, as Relatd highlighted by putting up a text string. And that is how, a decade ago, I put up a metric that would multiply by a dummy variable that would be 1/0 depending; similar to a technique used in macroeconomics and linked econometrics. Similarly, specificity can be observed by the effect of sufficient random noise perturbation to trigger loss of observable function, and that is why at that time I used a second dummy variable to denote specificity. A fishing reel is different from a bait bucket full of randomly clumped fishing reel parts, and the sheepish gun owner bringing a box full of disassembled parts to a gunsmith is proverbial.
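
    In outline, that dummy-variable approach amounts to something like the following minimal sketch (the function name and the 500-bit threshold are illustrative assumptions, drawn from the figures used in this thread):

        def fscoi_bits(capacity_bits, functional, specific, threshold=500):
            """Dummy-variable sketch: capacity gated by observed function.

            functional: 1 if observed to work in context, else 0
            specific:   1 if modest random perturbation destroys function,
                        else 0
            Positive return values flag FSCO/I beyond the threshold.
            """
            return functional * specific * capacity_bits - threshold

        # A functional, specific 100-character ASCII string (700 bits):
        print(fscoi_bits(700, 1, 1))  # 200: past the threshold
        # Gibberish of the same length (not functional):
        print(fscoi_bits(700, 0, 1))  # -500: no design flag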

    Durston et al used a more complex approach which is drawn out in a paper cited in my always linked. They also developed a technique to address redundancies in the information based on practical inevitabilities of codes. This, too you tried to dismiss rather than attend to substantially.

    Nevertheless, you know about functional strings, you use them in your objections, and you know the difference between gibberish — a typical result of blind search string generation — and meaningful information per a given protocol, e.g. ASCII text with messages in English.

    You also have been repeatedly informed that functional organisation such as that of an ABU 6500 C3 reel, can be reduced to strings in a description language such as AutoCAD DWG format.

    (A reel is less complex than a watch; it is no surprise to see watchmakers coming up at a few pivotal points in the history of especially the modern multiplier or baitcasting reel. And you and other objectors have been pointed to Paley’s thought exercise in his Ch 2, on a self replicating watch, which is fifty years before Darwin’s publication and 150 before von Neumann’s kinematic self replicator, where the self replicator shows the additional FSCO/I involved in moving to that class of machine or system.)

    Thus, as description languages and compact technical details exist, discussion on strings is without loss of generality. A point I made over a decade ago in incorporating functional organisation in the abbreviation: functional specificity > functional specificity + informational complexity > functionally specific, complex organisation and/or associated information, FSCO/I.

    Specificity can be given in a detachable description, e.g. a sentence in ASCII coded English or a working fishing reel or a cellular metabolic network or a kinematic self replicating machine/system. I insist on kinematic self replicators to show something done in hardware, not a software simulation.

    Of course, at some point you mocked such reference to a fishing reel, wrongfully refusing to acknowledge the point Wicken made in discussing “wiring diagram[s].” That same point applies to say the process-flow network of nodes and arcs in an oil refinery [another example I used] and to the similar but vastly more complex and miniaturised one expressed through the metabolism of the living cell. Indeed, a string is actually a 1-D nodes and arcs framework. (And yes, that ties to a whole world of Mathematics on graphs, networks and their properties; also to linked engineering techniques and to register transfer language/algebra in computing.)

    Where, of course, you are an educated person in a digital age and could readily access the further information regarding how information can be extracted from functional organisation and expressed in a compact description language.

    In short, your hyperskeptical denials and dismissals in the face of evident facts that are readily accessible are without responsible excuse. You have been speaking with willful and insistent disregard for truth you know or should acknowledge, in order to advance dismissal of something you object to. That is disregard of duties to truth, right reason, prudence [including warrant] etc. on a sustained basis. Anti-knowledge, anti-reason, anti-truth. Where, you know what speaking with disregard to truth is about.

    You can, should and must do better than such.

    KF

  251. 251
    kairosfocus says:

    AF, you know the relevance of functional bits, which were abbreviated fits for convenience. Your continued hyperskeptical disregard in the teeth of responsibility to truth, right reason, prudence [including warrant] etc speaks. KF

    PS, it even speaks theologically, given a warning of scripture (and your attempt to personalise and polarise through Alinsky tactics above):

    Eph 4:17 So this I say, and solemnly affirm together with the Lord [as in His presence], that you must no longer live as the [unbelieving] Gentiles live, in the futility of their minds [and in the foolishness and emptiness of their souls], 18 for their [moral] understanding is darkened and their reasoning is clouded; [they are] alienated and self-banished from the life of God [with no share in it; this is] because of the [willful] ignorance and spiritual blindness that is [deep-seated] within them, because of the hardness and insensitivity of their heart. 19 And they, [the ungodly in their spiritual apathy], having become callous and unfeeling, have given themselves over [as prey] to unbridled sensuality, eagerly craving the practice of every kind of impurity [that their desires may demand].

  252. 252
    Alan Fox says:

    First, we can and do measure information carrying capacity, based on effectively strings and states of elements in the strings. That goes back to Shannon and even Hartley.

    Shannon was calculating load carrying capacity of telephone systems. Tells us nothing about content or function. Nothing. At. All.

  253. 253
    Alan Fox says:

    Durston et al used a more complex approach which is drawn out in a paper cited in my always linked. They also developed a technique to address redundancies in the information based on practical inevitabilities of codes. This, too, you tried to dismiss rather than attend to substantially.

    Durston made an honest effort. Problem is it doesn’t work.

  254. 254
    Alan Fox says:

    To save reinventing the wheel:

    http://theskepticalzone.com/wp.....ein-space/

    Kirk Durston can be found joining in in the comments.

  255. 255
    kairosfocus says:

    AF, you are found continuing to refuse to acknowledge first facts and established knowledge. Let us start, what is a binary digit? ____ Why is it that a p-state per digit register has log p/log 2 bits per character information storage capacity? _______ Why is it that in a practical code there will normally be a difference of frequencies of states in normal text? Why then does H = – [SUM] pi log pi give an average value of info per character? _______ Why is this called entropy and why is it connected to physical thermodynamics by the information school? _________ Why can we identify for an n length, p state string that there are p^n possibilities forming a configuration space? Why is it, then, that for codes to compose messages or algorithmic instructions or compactly describe functional states, normally, there will be zones of functionality T in a much larger space of possibilities W? ______ We could go on but that is enough to make a key point clear. KF
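
    As a quick illustration of the storage-capacity and average-information questions just posed, here is a minimal Python sketch (the probability set is an arbitrary example, not data):

    import math

    def capacity_bits(p):
        # a p-state digit carries log p / log 2 = log2(p) bits of capacity
        return math.log2(p)

    def shannon_H(probs):
        # H = - [SUM] pi log2 pi, the average information per character
        return -sum(pi * math.log2(pi) for pi in probs if pi > 0)

    print(capacity_bits(4))                        # a 4-state digit (e.g. a DNA base): 2.0 bits
    print(shannon_H([0.5, 0.25, 0.125, 0.125]))    # biased source: 1.75 < 2.0 bits per character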

    PS, it is commonplace in physics that while there are established general laws or frameworks, readily or exactly solvable problems may be few. When I did Q theory, no more than three exactly solved problems existed. This has to do with how fast complexity of real world problems grows. Approximate modelling is a commonplace. An old joke captures the point. Drunk A meets drunk B under a streetlight on hands and knees searching. I lost my contacts. So A joins in the search. After a while A asks are you sure you lost them here? Oh no, I lost them over in the dark but this is where the light is. The context was statistical thermodynamics.

  256. 256
    kairosfocus says:

    PPS, your debate on sampling protein space does not answer to the core issues above. Further, it is known that there are several thousand protein fold domains, many of which have a few or even just one viable AA sequence, and that there are no handy evolutionary stepping stones from one domain to another. You have hyperskeptically and without good warrant tried to dismiss a readily observable phenomenon, FSCO/I, and in that dismissal you have refused to acknowledge patent facts. Where, you and yours have never solved the problem of moving from a Darwin warm pond to a first, self replicating, metabolising cell by blind watchmaker mechanisms, exactly because you have no good answer to the origin of FSCO/I by blind watchmaker processes. Speculations do not count, they come to mutual ruin. FSCO/I is routinely and reliably produced by design and search challenge is a vital issue. We have excellent reason to hold that coded language and algorithmic code are strong signs of language using intelligence at work in the origin of the cell. It is ideological a prioris that block acknowledging such.

  257. 257
    ET says:

    Alan Fox “argues” like an infant. You cannot bully us, Alan. And clearly you cannot formulate a coherent argument.

    Fox pulls “environmental design” from its arse and thinks it is a valid concept.
    Durston gets his concept published in peer-review and Fox handwaves it away like the scientifically illiterate loser it is.

    Durston’s concept works. Alan cannot demonstrate otherwise.

  258. 258
    ET says:

    Alan Fox:

    Shannon was calculating load carrying capacity of telephone systems. Tells us nothing about content or function. Nothing. At. All.

    Your willful ignorance isn’t an argument, either. Functionality is OBSERVED, you obtuse arse!

  259. 259
    ET says:

    It should be noted that not one person over on TSZ can demonstrate that any protein arose via blind and mindless processes. They don’t even know how to test such a claim.

    The bottom line is people like Alan do not care about science nor reality. They live in a world of denial. There will NEVER be any evidence for Intelligent Design in their bitty, closed minds. And they will NEVER support the claims of evolution by means of blind and mindless processes. They are cowards and losers, all. The Skeptical Zone is the new swamp.

  260. 260
    kairosfocus says:

    PPPS, as a further point, Wikipedia’s admissions on the Mandelbrot set and Kolmogorov Complexity:

    This image illustrates part of the Mandelbrot set fractal. Simply storing the 24-bit color of each pixel in this image would require 23 million bytes, but a small computer program can reproduce these 23 MB using the definition of the Mandelbrot set and the coordinates of the corners of the image. Thus, the Kolmogorov complexity of the raw file encoding this bitmap is much less than 23 MB in any pragmatic model of computation. PNG’s general-purpose image compression only reduces it to 1.6 MB, smaller than the raw data but much larger than the Kolmogorov complexity.

    This is of course first a description of a deterministic but chaotic system where at the border zone we have anything but a well behaved simple “fitness landscape” so to speak. Instead, infinite complexity, a rugged landscape, and isolated zones within the set with points outside it just next door . . . the colours etc commonly seen are used to describe bands of escape from the set. The issues raised in other threads which AF dismisses are real.

    Further to which, let me now augment the text showing what is just next door but is not being drawn out:

    In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program (in a predetermined programming language) that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as algorithmic complexity, Solomonoff–Kolmogorov–Chaitin complexity, program-size complexity, descriptive complexity, or algorithmic entropy. It is named after Andrey Kolmogorov, who first published on the subject in 1963.[1][2] . . . .

    Consider the following two strings of 32 lowercase letters and digits:

    abababababababababababababababab [–> simple repeating block similar to a crystal], and
    4c1j5b2p0cv4w1x8rx2y39umgw5q85s7 [–> plausibly random gibberish similar to a random tar]
    [–> add here a third case: “this is an english ascii string!”, a 32-character string in English using ASCII characters and a case of FSCO/I]

    The first string has a short English-language description, namely “write ab 16 times”, which consists of 17 characters. The second one has no obvious simple description (using the same character set) other than writing down the string itself, i.e., “write 4c1j5b2p0cv4w1x8rx2y39umgw5q85s7” which has 38 characters. [–> a good working definition of plausible randomness] Hence the operation of writing the first string can be said to have “less complexity” than writing the second. [–> For the third there is neither simple repetition nor plausibly random gibberish but it can readily and detachably be specified as ASCII coded text in English, leading to issues of specified complexity associated with definable, observable function and degree of complexity such that search challenge is material. Here, 32 ASCII characters give 128^32 ≈ 2.7 * 10^67 possibilities, i.e. 224 bits of raw capacity; at 7 bits per character, roughly 72 such characters are needed to pass 500 bits.]

    More formally, the complexity of a string is the length of the shortest possible description of the string in some fixed universal description language (the sensitivity of complexity relative to the choice of description language is discussed below). It can be shown that the Kolmogorov complexity of any string cannot be more than a few bytes larger than the length of the string itself. Strings like the abab example above, whose Kolmogorov complexity is small relative to the string’s size, are not considered to be complex. [–> another aspect of complexity, complexity of specification, contrasted with complexity of search tied to information carrying capacity]

    The Kolmogorov complexity can be defined for any mathematical object, but for simplicity the scope of this article is restricted to strings. [–> other things can be reduced to strings by using compact description languages, so WLOG] We must first specify a description language for strings. Such a description language can be based on any computer programming language, such as Lisp, Pascal, or Java.[–> try AutoCAD] If P is a program which outputs a string x, then P is a description of x. The length of the description is just the length of P as a character string, multiplied by the number of bits in a character (e.g., 7 for ASCII). [–> notice, the information metric] . . . .

    Any string s has at least one description. For example, the second string above is output by the pseudo-code:

    function GenerateString2()
    return “4c1j5b2p0cv4w1x8rx2y39umgw5q85s7”

    whereas the first string is output by the (much shorter) pseudo-code:

    function GenerateString1()
    return “ab” × 16

    If a description d(s) of a string s is of minimal length (i.e., using the fewest bits), it is called a minimal description of s, and the length of d(s) (i.e. the number of bits in the minimal description) is the Kolmogorov complexity of s, written K(s). Symbolically,

    K(s) = |d(s)|.

    [–> our added case is similarly complex to a plausibly random string but also has a detachable description that is simple and often identifies observable functionality]

    The length of the shortest description will depend on the choice of description language; but the effect of changing languages is bounded (a result called the invariance theorem) . . . .

    At first glance it might seem trivial to write a program which can compute K(s) for any s, such as the following:

    function KolmogorovComplexity(string s)
    for i = 1 to infinity:
    for each string p of length exactly i
    if isValidProgram(p) and evaluate(p) == s
    return i

    This program iterates through all possible programs (by iterating through all possible strings and only considering those which are valid programs), starting with the shortest. Each program is executed to find the result produced by that program, comparing it to the input s. If the result matches then the length of the program is returned.

    However this will not work because some of the programs p tested will not terminate, e.g. if they contain infinite loops. There is no way to avoid all of these programs by testing them in some way before executing them due to the non-computability of the halting problem. [–> so, calculation cannot in general distinguish random from simple order and from FSCO/I, we have to observe. This shows the pernicious nature of the strawman fallacy above by AF]

    What is more, no program at all can compute the function K, be it ever so sophisticated . . . .

    Kolmogorov randomness defines a string (usually of bits) as being random if and only if every computer program that can produce that string is at least as long as the string itself. To make this precise, a universal computer (or universal Turing machine) must be specified, so that “program” means a program for this universal machine. A random string in this sense is “incompressible” in that it is impossible to “compress” the string into a program that is shorter than the string itself. For every universal computer, there is at least one algorithmically random string of each length.[15] Whether a particular string is random, however, depends on the specific universal computer that is chosen. This is because a universal computer can have a particular string hard-coded in itself, and a program running on this universal computer can then simply refer to this hard-coded string using a short sequence of bits (i.e. much shorter than the string itself).

    This definition can be extended to define a notion of randomness for infinite sequences from a finite alphabet . . .

    This gives some background to further appreciate what is at stake.
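
    One can make the contrast concrete with a small compression experiment; compressed length is only a computable upper-bound proxy for K(s), which, as just noted, is itself uncomputable (a Python sketch):

    import zlib

    def c_len(s):
        # compressed length in bytes: a rough upper-bound proxy for K(s)
        return len(zlib.compress(s.encode()))

    print(c_len("ab" * 16))                               # simple repetition: compresses well
    print(c_len("4c1j5b2p0cv4w1x8rx2y39umgw5q85s7"))      # plausibly random: near-incompressible
    print(c_len("this is an english ascii string!"))      # functional text: in between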

  261. 261
    kairosfocus says:

    ET, easy on the language please, remember the broken window theory. We do not need a spiral to the gutter. KF

  262. 262
    kairosfocus says:

    AF,

    A further strawman:

    Shannon was calculating load carrying capacity of telephone systems. Tells us nothing about content or function. Nothing. At. All.

    Information carrying capacity [especially with a bound for inevitable noise] is a key upper bound and shows us the maximum possible information. Surely, you are aware of the importance of upper bound and similar limiting results in physics, not least thermodynamics.

    Going further, we have a separate way to address functionality vs randomness vs repetitive patterns, as was just laid out by way of K-complexity. Plausible randomness defies specification or description other than by quoting and prefacing itself. Simple repetition can be reduced to prefacing and quoting the repeating block. Functional specificity can be otherwise described with a detachable preface, but there is observable function and there will be resistance to compression, though not usually as strong as for randomness. This brings up redundancy in practical codes.

    All of this has been on the table for a long time, objectors using confident manner dismissals and strawman caricatures are being irresponsible and act in disregard for truth.

    KF

  263. 263
    JVL says:

    ET: Functionality is OBSERVED! Duh! Then we go back to see what produced that functionality. Then we quantify it.

    Is that in agreement with Dr Dembski’s 2005 monograph Specification: The Pattern That Signifies Intelligence? He seems to argue that you can just do pattern analysis to determine design which, presumably, indicates purpose or function. He seems to argue that he found a metric, a way of testing sequences (like coin flips, his example) to determine if they were ‘designed’ without having observed functionality.

  264. 264
    kairosfocus says:

    JVL, there is a difference between specified complexity and FUNCTIONALLY specific, complex organisation and/or associated information. CSI looks at detachable specifications/descriptions in general, functionality is about what we can see working based on configuration; it is the material subclass. In his NFL, Dembski clearly identified that in the relevant biological systems specification is cashed out in terms of function, giving a cluster of cites. That is clipped above. KF

  265. 265
    ET says:

    Umm, biological specification refers to function.

    Information means here the precise determination of sequence, either of bases in the nucleic acid or of amino acid residues in the protein.- Francis Crick

    From Wm. Dembski:

    Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. In virtue of their function, these systems embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the same sense required by the complexity-specification criterion (see sections 1.3 and 2.5). The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of minimal function of biochemical systems.- pg 148 of NFL

  266. 266
    jerry says:

    It all comes down to

    All the king’s horses and all the king’s men couldn’t put Humpty together again.

    In other words, all those who espouse naturalized Evolution, and there are millions, cannot find any evidence for it. Ironically, they cannot find any chinks in the ID argument, which is partially about Evolution.

    They all know about CSI and understand it though they pretend it is bogus. I explained CSI to my 10 year old niece who immediately saw what it meant and thought it was neat.

    This embarrassment is never really an embarrassment as they forge on occasionally finding an “i” not dotted or a “t” not crossed. The real question has always been what drives such absurd behavior.

  267. 267
    Alan Fox says:

    So has everyone given up with Durston and his fits?

  268. 268
    kairosfocus says:

    AF, you are still side stepping and refusing to address issues. Start with 255, and also ponder 260. As a start. KF

  269. 269
    Alan Fox says:

    They all know about CSI and understand it though they pretend it is bogus.

    Nobody in mainstream science gives Dembski’s CSI a thought. Whether bogus or not (I happen to agree bogus is a fair description), the idea never developed to a level convincing enough that refutation was really needed and it is now forgotten and ignored.

  270. 270
    kairosfocus says:

    AF, lying and slandering rather than addressing issues on the merits; to the point of being confession by projection to the despised other . . . you just let the cat out of the bag about yourself. I again challenge you to address 255 and 260, the latter being an augmentation on the discussion of Kolmogorov complexity informed by considerations tracing to Orgel and Wicken. FSCO/I is anything but bogus, it is an observable. KF

  271. 271
    jerry says:

    Nobody in mainstream science gives Dembski’s CSI a thought

    Has anyone ever debunked it?

    Answer: No. So challenging ID is reduced to argument by assertion. There is no logic against it since it is based on indisputable mathematics. Yet this nonsense sentence was offered:

    that refutation was really needed

    So why does anyone defend the indefensible? Why do they continually use fallacies to justify their beliefs? We just had another fallacy used to support their position.

    As I said they are not embarrassed by this. Why?

  272. 272
    ET says:

    Why would we give up on Durston and FITs?

    And nobody in mainstream can demonstrate that blind and mindless process produced life and its diversity! Evolution by means of blind and mindless processes is undeveloped. Bogus doesn’t even begin to describe it.

    As I have said, Alan “argues” like a child. He would be the worst teammate on a debate team.

  273. 273
    ET says:

    A fundamental aspect of science is quantification. And given this:

    Information means here the precise determination of sequence, either of bases in the nucleic acid or of amino acid residues in the protein.- Francis Crick

    and

    Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. In virtue of their function, these systems embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the same sense required by the complexity-specification criterion (see sections 1.3 and 2.5). The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of minimal function of biochemical systems.- pg 148 of NFL

    Both Dembski and Durston have risen to the challenge of quantifying it. And all Alan can do is piss himself in their wake.

  274. 274
    Alan Fox says:

    FSCO/I is anything but bogus, it is an observable. KF

    I did not say “FSCO/I” is bogus. I said I think bogus is a fair description of Bill Dembski’s CSI. I still have no idea what Kairosfocus’ invention of “FSCO/I” is. It has not yet risen to the level of bogus. Maybe it could rise higher but without KF making a decent effort to explain his concept, how to quantify it, maybe an example to show how it works, we’re still in the dark.

  275. 275
    kairosfocus says:

    AF, more lying. It is Orgel’s and Wicken’s specified complexity and informational, wiring diagram complexity. Second, Dembski generalised to any detachable specification, acknowledging that for biology it is cashed out in terms of function. As such it is in fact also an observable. When 40 of 41 election ballots go one way, we know something fishy went on, just on the far tail result. Had it been a racial issue there would not even have been a moment’s hesitation. And when things had to be changed, poof, the miracle vanished. The hyperskeptical stunts being used tell us that objectors have no substance, and the projections of wrongdoing tell us volumes about those who think like that. Confession by projection. KF

  276. 276
    kairosfocus says:

    PS, what is going on in the Dembski Chi [use X] metric, from my always linked:

    X = – log2[10^120 ·pS(T)·P(T|H)].

    This can be broken up:

    X = – log2[2^398 ·D2·P(T|H)].

    Or, as – log2(P(T|H)) = I(T):

    X = I(T) – 398 – K2

    Where, K2 has a natural upper limit of about 100 further bits.

    In short, this is a specified-information-beyond-a-400-to-500-bit-threshold metric.
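
    Read as code, on the assumption that D2 stands in for pS(T) and K2 = log2(D2), with placeholder values (a sketch, not a biological calculation):

    import math

    def chi_lite(I_T, phi_S):
        # X = I(T) - 398 - K2, taking K2 = log2(phi_S(T))
        return I_T - 398 - math.log2(phi_S)

    # placeholder values: I(T) = 300 AA x log2(20) ~ 1297 bits of raw capacity,
    # phi_S(T) = 10^30, near the bound that brings the full threshold to ~500 bits
    print(chi_lite(300 * math.log2(20), 10**30))   # ~ 799 bits beyond the threshold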

  277. 277
    JVL says:

    Jerry: Has anyone ever debunked it?

    Is anyone actually using it?

  278. 278
    ET says:

    Just those who wish to quantify biology.

  279. 279
    JVL says:

    Kairosfocus:

    X = – log2[10^120 ·pS(T)·P(T|H)].

    This can be broken up:

    X = – log2[2^398 ·D2·P(T|H)].

    Needs explaining I’m afraid. You can’t just replace 10^120 ·pS(T) with 2^398 ·D2 without some kind of explanation or justification or definition (what is D2?).

    Or, as – log2(P(T|H)) = I(T):

    Requires a lot more explanation. You just tossed away two pieces of the puzzle without explanation.

    You’re going to have to either spell out the transitions you gloss over or provide links to explanations.

  280. 280
    kairosfocus says:

    JVL, product rule for logs and log_2 of 10^120, the neg log probability rule is a 100 year old base information metric. The Dembski expression is a metric of info beyond a threshold. Base 2 gives bits. KF
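
    In code, the two facts appealed to here (a sketch):

    import math

    a, b = 0.25, 0.125
    # product rule: the log of a product is the sum of the logs
    print(math.log2(a * b), math.log2(a) + math.log2(b))   # -5.0 both ways
    # and log2 of 10^120 is the ~398-bit term in the threshold
    print(math.log2(10**120))                              # ~ 398.63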

  281. 281
    relatd says:

    AF at 269,

    Only by you.

  282. 282
    relatd says:

    AF at 274,

    “… we’re still in the dark.”

    Not we. Just you.

  283. 283
    Alan Fox says:

    Hilarious, KF.

    The math is trivial. What data do you enter? What does your trivial manipulation show?

  284. 284
    Alan Fox says:

    Is anyone actually using it?

    That would be something. Not even Kairosfocus is able to tell us how his trivial manipulation of numbers (where the numbers come from is not yet clear) tells us something useful.

    TL;DR No.

  285. 285
    Alan Fox says:

    Seems Kairosfocus is, without specifying, melding his “FSCO/I” into a version of Dembski’s “complex specified information” (CSI). There has been much discussion of CSI here and elsewhere (remember Mathgrrl and UD regulars on how to calculate the CSI of something?). If KF confirms his ‘FSCO/I” is similar to one version of Dembski’s CSI (which one, I wonder) then that’s a very large wheel I don’t need to reinvent.

  286. 286
    Alan Fox says:

    AF at 274,

    “… we’re still in the dark.”

    Not we. Just you.

    So does that mean Relatd can explain how to calculate the CSI of something, explain what numbers he is using and what the result signifies?

    Surprise me, Relatd.

    *continues not holding breath*

  287. 287
    Alan Fox says:

    People will be resurrecting the explanatory filter next!

  288. 288
    Alan Fox says:

    Blimey. CSI is famous. It has a Wikipedia entry.

    https://en.m.wikipedia.org/wiki/Specified_complexity

  289. 289
    JVL says:

    Kairosfocus:

    Please explain how your D2 relates to Dr Dembski’s pS(T). They don’t appear to be the same thing so that would imply they are not ‘measuring’ the same thing. You seem to drop it anyway which surely can’t be right.

    And K2? What is that? Presumably it’s -log2(D2) . . . but you say that K2 has a limit of so many bits but your units don’t carry through. 398 is unit-less is it not?

    What is the range of I(T)?

    If we start with Dr Dembski’s

    X = -log2(10^120•pS(T)•P(T|H)) = -log2(2^398•pS(T)•P(T|H))

    = -log2(2^398) – log2(pS(T)) – log2(P(T|H)) = -398 – log2(pS(T)) – log2(P(T|H))

    (without using any of your substitutions except for 2^398)

    Replacing -log2(P(T|H)) with I(T) loses the reliance on H, which surely is not correct; why did Dr Dembski put it in there in the first place?

    And, again, replacing -log2(pS(T)) with K2 without any kind of explanation (and dropping the T) is just not good mathematical practice. You must explain what you are doing.

    And you must be cognisant of the units. I do not see how you can get ‘bits’ out of a log expression.

    For reference: Dr Dembski defines pS(T) as the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T. How does your replacement compare to that? Notice pS(T) has nothing to do with bits at all. It’s just a number.

  290. 290
    JVL says:

    P(T|H) will be between 0 and 1, as are all probabilities. So, log2(P(T|H)) will be between -infinity and 0; the smaller P(T|H) is, the larger in magnitude (but negative) log2(P(T|H)) will be.

    So -log2(P(T|H)) will be a non-negative number.

    pS(T) is a non-negative number (could be zero). If pS(T) is greater than or equal to 1 that means log2(pS(T)) will be a non-negative number.

    So -log2(pS(T)) will most likely be negative.

    So -398 – log2(pS(T)) – log2(P(T|H)) could be a negative number. You can’t have a negative number of bits which is why trying to say the expression refers to a number of bits is incorrect.

  291. 291
    kairosfocus says:

    JVL, are you familiar with the information metric, negative log probability, which gives linear additivity? I have already linked the longstanding note that is always linked, from which the brief excerpt comes and serves to show that Dembski’s metric comes down to info beyond a threshold. I almost hesitate to say read here on, on quantification tied to info theory. As for pS(T), note from Dembski “define pS as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T [in a very large config space W, something instantly familiar from statistical thermodynamics].” I used p for phi, X for Chi and W for Omega as WP makes a hash of greek letters. Obviously the value will always be positive. I simply put up constants as substitutes, here K2. Then, – log2(P(T|H)) –> I(T), by the neg log probability metric of information, base 2 giving bits. Subtract [398 + K2]. X = I(T) – [398 + K2], where we know the latter term peaks at 500 for Dembski. I take a more conservative 500 bits for sol system, 1,000 for the observed cosmos. As fair comment, your inferences and attempted correction — that “You can’t have a negative number of bits which is why trying to say the expression refers to a number of bits is incorrect” — reflect lack of familiarity with the physical and information theory context in Dembski’s paper. And so forth. KF

  292. 292
    kairosfocus says:

    AF, you continue to dig yourself further into the hole you are in. I draw your attention to the basics that you have yet again side stepped, from 255:

    Let us start, what is a binary digit? ____ Why is it that a p-state per digit register has log p/log 2 bits per character information storage capacity? _______ Why is it that in a practical code there will normally be a difference of frequencies of states in normal text? Why then does H = – [SUM] pi log pi give an average value of info per character? _______ Why is this called entropy and why is it connected to physical thermodynamics by the information school? _________ Why can we identify for an n length, p state string that there are p^n possibilities forming a configuration space? Why is it, then, that for codes to compose messages or algorithmic instructions or compactly describe functional states, normally, there will be zones of functionality T in a much larger space of possibilities W? ______ We could go on but that is enough to make a key point clear . . . .

    PS, it is commonplace in physics that while there are established general laws or frameworks, readily or exactly solvable problems may be few. When I did Q theory, no more than three exactly solved problems existed. This has to do with how fast complexity of real world problems grows. Approximate modelling is a commonplace. An old joke captures the point. Drunk A meets drunk B under a streetlight on hands and knees searching. I lost my contacts. So A joins in the search. After a while A asks are you sure you lost them here? Oh no, I lost them over in the dark but this is where the light is. The context was statistical thermodynamics.

    Similarly, I point to 260 above, where I drew out and extended issues tied to Kolmogorov complexity by adding in what is next door. This too you have dodged, the better to play at further rhetorical stunts. KF

  293. 293
    kairosfocus says:

    F/N: The point of the above is, it is highly reasonable to use a threshold metric for the functional, configuration based information that identifies the span beyond which it is highly reasonable to draw the inference, design.

    First, our practical cosmos is the sol system, 10^57 atoms, so 500 bits

    FSCO/I, X_sol = FSB – 500 in functionally specific bits

    Likewise for the observable cosmos,

    X_cos = FSB – 1,000, functionally specific bits

    And yes this metric can give a bits short of threshold negative value. Using my simple F*S*B measure, dummy variables F and S can be 0/1 based on observation of functionality or specificity. For a 900 base mRNA specifying a 300 AA protein, we get

    X_sol = [900 x 2 x 1 x 1] – 500 = 1300 functionally specific bits.

    Which is comfortably beyond the threshold, so redundancy is unlikely to make a difference.

    Contrast a typical value for 1800 tossed coins

    X_sol = [1800 x 0 x 0] – 500 = – 500 FSBs, 500 bits short.

    If the coins expressed ASCII code in correct English

    X_sol = [1800 x 1 x 1] – 500 = 1300 FSBs beyond threshold, so comfortably, designed.

    [We routinely see the equivalent in text in this thread and no one imagines the text is by blind watchmaker action.]

    A more sophisticated value using say the Durston et al metric would reduce the excess due to redundancy but with that sort of margin, there is no practical difference.
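
    The toy F*S*B metric above, in code (a sketch; F and S are the 0/1 observation flags, B the capacity in bits):

    def fscoi_bits(B, F, S, threshold=500):
        # X = F*S*B - threshold: functionally specific bits beyond the sol system bound
        return B * F * S - threshold

    print(fscoi_bits(900 * 2, 1, 1))    # 900-base mRNA, functional and specific: 1300 beyond
    print(fscoi_bits(1800, 0, 0))       # typical 1800 coin tosses: -500, i.e. 500 bits short
    print(fscoi_bits(1800, 1, 1))       # the coins spelling ASCII English: 1300 beyond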

    Where, in the cell, for first life just for the genome [leaving out a world of knowledge of polymer chemistry and computer coding etc] we have 100 – 1,000 kbases. 100,000 bases is 200,000 bits carrying capacity, and again there is no plausible way to get that below 1,000 bits off redundancy.

    Life, credibly, is designed.

    KF

    PS, There has already been in the thread citation from Dembski on the definition of CSI and how in cell based life it is cashed out on function. I note, the concept as opposed to Dembski’s quantitative metric (which boils down to functionally specific info beyond a threshold) traces to Orgel and Wicken in the 70’s. This was noted by Thaxton et al in the 80’s and Dembski, a second generation design theorist set out models starting in the 90’s.

    PPS, as for Wickedpedia on such a topic, slander is the standard, worse than useless.

    PPPS, Mathgrrl turned out to be plagiarising someone’s handle, to be a man and a fraud who did not understand logs. The above stands beyond his raft of specious objections over a decade ago.

  294. 294
    jerry says:

    I often make the claim that the obvious is ignored on UD by both sides of the debate.

    This indicates that commenters are not really interested in understanding or explaining. For example, a CSI calculation was provided but ignored. Now, there may be some need of minor corrections but essentially it illustrated CSI.

    Here is a video that was presented on UD explaining the calculation of CSI. Some is simple while other parts will require more concentration.

    https://www.youtube.com/watch?v=5CWu_8CTdDY&t=217s

  295. 295
    ET says:

    Thank you, Alan Fox, for proving that you are scientifically illiterate! The explanatory filter is standard operating procedure for science. It forces us to honor Newton’s 4 rules of scientific reasoning, Occam’s Razor and parsimony.

    Just how ignorant are you?

  296. 296
    ET says:

    Alan brings up mathgrrl. mathgrrl was shown to be a willfully ignorant troll. Pretty much just like Alan Fox. A coward who couldn’t support the claims of its own position if its life depended on it!

  297. 297
    kairosfocus says:

    F/N: Wiki as cited back then on the tie in between information and entropy:

    At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann’s constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing.

    But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon’s information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, “Gain in entropy always means loss of information, and nothing more” . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell’s demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).

    Now, Harry S Robertson in his thermal physics:

    . . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . .

    [deriving informational entropy, cf. discussions here, here, here, here and here; also Sarfati’s discussion of debates and the issue of open systems here . . . ]

    H({pi}) = – C [SUM over i] pi*ln pi, [. . . “my” Eqn 6]

    [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp – beta*yi) = Z [Z being in effect the partition function across microstates, the “Holy Grail” of statistical thermodynamics]. . . .

    [H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . .

    Jayne’s [summary rebuttal to a typical objection] is “. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.” . . . . [pp. 3 – 6, 7, 36; replacing Robertson’s use of S for Informational Entropy with the more standard H.]
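
    In a few lines of Python, the Boltzmann bridge s = k ln w that the excerpts point to (the microstate count W here is an assumed illustration):

    import math

    k_B = 1.380649e-23             # Boltzmann constant, J/K
    W = 2**100                     # assumed number of accessible microstates
    S_thermo = k_B * math.log(W)   # s = k ln W
    H_bits = math.log2(W)          # yes/no questions needed to pin down the microstate
    print(S_thermo, H_bits)        # ~ 9.57e-22 J/K, and 100 bits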

    Remember, this is the road I actually travelled on.

    KF

  298. 298
    JVL says:

    Kairosfocus: we know the latter term peaks at 500 for Dembski

    How do we know that?

    Let’s just review what Dr Dembski actually says in his 2005 monograph: Specification: The Pattern That Signifies Intelligence.

    From page 24:

    We thus define the specified complexity of T given H (minus the tilde and context sensitivity) as

    χ = –log2[10^120 · φS(T) · P(T|H)].

    And then later on page 24:

    Since specifications are those patterns that are supposed to underwrite a design inference, they need, minimally, to entitle us to eliminate chance. Since to do so, it must be the case that
    χ = –log2[10^120 · φS(T) · P(T|H)] > 1,

    No mention of bits or anything like that. He’s just computing what he calls the specified complexity.

    From earlier, page 18:

    Thus, for a pattern T, a chance hypothesis H, and a semiotic agent S for whom φS measures specificational resources, the specificity σ is given as follows:

    σ = –log2[φS(T) · P(T|H)].

    Note that T in φS(T) is treated as a pattern and that T in P(T|H) is treated as an event (i.e., the event identified by the pattern).

    What is the meaning of this number, the specificity σ? To unpack σ, consider first that the product φS(T)·P(T|H) provides an upper bound on the probability (with respect to the chance hypothesis H) for the chance occurrence of an event that matches any pattern whose descriptive complexity is no more than T and whose probability is no more than P(T|H). The intuition here is this: think of S as trying to determine whether an archer, who has just shot an arrow at a large wall, happened to hit a tiny target on that wall by chance. The arrow, let us say, is indeed sticking squarely in this tiny target. The problem, however, is that there are lots of other tiny targets on the wall. Once all those other targets are factored in, is it still unlikely that the archer could have hit any of them by chance? That’s what φS(T)·P(T|H) computes, namely, whether of all the other targets T~ for which P(T~|H) ≤ P(T|H) and φS(T~) ≤ φS(T), the probability of any of these targets being hit by chance according to H is still small.

    Note that nowhere does Dr Dembski suggest substituting some constant or number of bits for any of his terms.

    Now, I suggest that replacing any of Dr Dembski’s terms with constants or estimates, which he himself did not choose to do, potentially misunderstands what he intends. And, if there is any doubt, he worked out an example, on pages 22 and 23:

    In this case, if T were a truly random sequence (as opposed to the Fibonacci sequence), φS(T), as we observed earlier, would be on the order of 10^10, so that

    σ~ ≈ –log2[10^–4 · φS(T)] ≈ –log2[10^–4 · 10^10] ≈ –20.

    Notice the result of -20 which clearly cannot have units of number of bits.
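
    (For checking, the quoted arithmetic in one line of Python; the 10^–4 and 10^10 are the monograph’s own illustrative values:)

    import math
    print(-math.log2(1e-4 * 1e10))   # = -log2(10^6) ~ -19.93, i.e. Dembski's ~ -20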

    If you choose to vary from what Dr Dembski derived and wrote then I suggest you are not measuring the same thing he attempted to measure. If you’d like to work out an example using both methods and show that they work out the same then please do so. That would settle the issue so I’d recommend that. If you can.

  299. 299
    kairosfocus says:

    JVL, the 500 bits is there in Dembski’s 1 in 10^150, as 2^500 = 3.27*10^150. As for bits, once you have a negative log base 2 of a probability, it is information in bits. All I did was show that Dembski’s framework on reasonable terms reduces to a bits-beyond-a-threshold metric, then used the bits. KF

  300. 300
    Alan Fox says:

    All I did was show that Dembski’s framework on reasonable terms reduces to a bits-beyond-a-threshold metric, then used the bits.

    Trivial manipulation of numbers it is then.

  301. 301
    JVL says:

    Kairosfocus: As for bits once you have a negative log base 2 of a probability it is information in bits.

    Uh huh. And where does it say that? That would depend on what kind of probability you were talking about wouldn’t it? And, if that’s true then why didn’t Dr Dembski make the same assumptions you did?

    1 in 10^150 as 2^500 = 3.27*10^150

    ???? Where is that? What happened to the 3.27 then?

    Look, you’re going to have to do a lot better job explaining how and why you derived what you did. AND, at the very least, work out an example using your and Dr Dembski’s methods to show they get the same conclusion. And, may I point out again, that his worked out example does not involve numbers of bits.

  302. 302
    relatd says:

    Jerry at 294,

    The purpose of propaganda, and lies, is to keep repeating them regardless of the truth.

  303. 303
    kairosfocus says:

    JVL, probabilities are automatically indices of relative ignorance and knowledge, save when firmly 1 or 0. The eventuation of especially a low probability state is to some degree a surprise, and that is a related concept. So probabilities are inherently informational. Negative logs give simple additive properties [etc] and base 2 gives bits; the log measure of information goes back to Hartley, and nats, for log_e, are also (far more rarely) used in the information context. As for the 3.27, we accept the order of magnitude: 10^150 = 2^498.29, and 498.29 bits is rather inconvenient, so we round to 500. KF

  304. 304
    kairosfocus says:

    AF, hardly trivial, we are in the context of the breakthrough thoughts that opened up the information and telecommunication age. Shannon and Weaver probably should have had a Nobel. Drawing a link out is important. One that BTW makes sense, a metric of information beyond a threshold where blind forces are plausible is a key approach. KF

  305. 305
    JVL says:

    Kairosfocus: probabilities are automatically indices of relative ignorance and knowledge,

    Uh huh. Funny that Dr Dembski and Dr Behe have both made probabilistic arguments for design.

    So probabilities are inherently informational.

    As well as being indices of relative ignorance and knowledge.

    Negative logs give simple additive properties [etc] and base 2 gives bits

    Not necessarily. You want that to be true to justify your interpolations.

    As for 3.27, we accept the order of magnitude as 498.29 approximately is rather inconvenient.

    The point is it doesn’t appear in your calculation.

    Look, why don’t you work out the example in Dr Dembski’s monograph and show that you get the same result (-20) that he did. That will show that you are not distorting his metric.

    If you can work out that example in your own way that is. We shall see.

  306. 306
    kairosfocus says:

    JVL, with all respect, obviously you are not familiar with information theory 101. I am not saying anything exceptional in speaking of negative log probability metrics and base two as giving bits; this goes back to Hartley. What I did is show how a threshold emerges, and I noted the threshold that was suggested as a yardstick. You will note, I use it for sol system scope and use its square for the observed cosmos. Dembski does something a bit different in looking for his 1/2 point. Related but different. KF

  307. 307
    kairosfocus says:

    PS, my old 1971 edition Taub and Schilling, Princs of Communication, p. 415:

    consider a communication system in which the allowable messages are m1, m2, . . . , with probabilities of occurrence p1, p2, . . . Let the transmitter select message mk, of probability pk; let us further assume that the receiver has correctly identified the message [–> no channel noise approximation]. Then we shall say, by way of definition of the term information, that the system has conveyed an amount of information Ik = log2 [1/pk] . . . . while Ik is an entirely dimensionless number, by convention, the “unit” it is assigned is the bit.

    Of course, log [1/pk] is – log [pk], and if pk and pj apply to two independent messages, the joint probability pk*pj makes Itot = Ik + Ij. For pk = 1/2, we get Ik = 1 bit, what a fair coin flip would give. With a bias, so that say H is more likely and T less likely, the bias reduces the information carried by the more likely symbol and raises that of the less likely.

    And so forth.
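
    A minimal sketch of the Taub and Schilling definition in Python:

    import math

    def I_bits(p):
        # Ik = log2(1/pk); dimensionless, but assigned the unit "bit" by convention
        return math.log2(1 / p)

    print(I_bits(0.5))          # fair coin flip: 1.0 bit
    print(I_bits(0.5 * 0.5))    # two independent flips multiply probabilities: 2.0 = Ik + Ij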

    PPS, the note linked through my handle gives more.

  308. 308
    ET says:

    Alan Fox:

    Trivial manipulation of numbers it is then.

    Always a punk when its ignorance is exposed.

    It is neither trivial nor a manipulation. Your ignorance is not an argument, Alan. And a coward such as yourself cannot bully us.

  309. 309
    ET says:

    JVL continues to conflate CSI with specification. You are beyond dishonest, JVL.

  310. 310
    Alan Fox says:

    JVL quoting Bill Dembski’s

    Thus, for a pattern T, a chance hypothesis H, and a semiotic agent S for whom φS measures specificational resources, the specificity σ is given as follows:

    σ = –log2[φS(T) · P(T|H)].

    Note that T in φS(T) is treated as a pattern and that T in P(T|H) is treated as an event (i.e., the event identified by the pattern).

    Couple of queries. T in the formula is defined as both a “pattern” and an “event”, and H as a “chance hypothesis”. So can someone give a number for an event? The bacterial flagellum is not an event, for example, but a sequence of selected steps. And “chance hypothesis”? Dembski is ruling out non-random processes such as selection a priori. Can anyone explain how this formula addresses reality, where processes develop influenced by non-random selection? If selection is ignored by Dembski’s formula, what use is it?

  311. 311
    JVL says:

    ET: JVL continues to conflate CSI with specification.

    From Dr Dembski’s 2005 monograph:

    Thus, for a pattern T, a chance hypothesis H, and a semiotic agent S for whom φS measures specificational resources, the specificity σ is given as follows:

    σ = –log2[φS(T) · P(T|H)].

    Note that T in φS(T) is treated as a pattern and that T in P(T|H) is treated as an event (i.e., the event identified by the pattern).

    We thus define the specified complexity of T given H (minus the tilde and context sensitivity) as

    χ = –log2[10^120 · φS(T) · P(T|H)].

    But it’s clear: Dr Dembski defines specificity and incorporates that into the definition of specified complexity.

    Anyway, I’m happy to concede a lot of points if Kairosfocus and/or ET can work through Dr Dembski’s example in his monograph (where he gets a result of -20 for X) using Kairosfocus‘s reworking and get the same result.

    I shall await your elucidation.

  312. 312
    kairosfocus says:

    AF & JVL,

    first, you are both trying to run before you can crawl; that leads to conceptual confusion.

    Second, there is a longstanding framework, actually linked to every comment I make through my handle, which goes through info theory basics. In that context, you will see, per Taub and Schilling (a reference I used as a textbook, which I pulled, typed out and uploaded, only to have you both instantly ignore it), that info is measured as a negative log of probabilities. Base 2 gives bits (base e, nats; base 10, Hartleys); this is also in Bartlett’s summary and video that Jerry reminded us of. I assume you both know enough to know log [a*b*c] = log a + log b + log c. That is all that is needed to see what I did and why. Of course, p(T|H) is talking about the probability of blind search processes finding a target zone T in a larger config space W; T can be contiguous or a dust, it matters not. Whatever Dembski did to find himself 20 bits short of threshold makes little difference to the validity of my drawing out a bits-beyond-threshold metric from his equation.

    And, I observe too that both of you sidestepped when I answered your previous demand for a calculated value relevant to biology etc, see 293 above, which is more than enough to establish the basic point that the only plausible causal factor to explain FSCO/I beyond threshold is design . . . which you both ducked. Jerry has a similar observation about ignoring, though he tried to soften his punch by making a both sides remark.

    That behaviour on your part tells me you are not really interested in what is warranted but only to spin out objections and cause needless confusion.

    Not good enough.

    Moreover, I already excerpted Dembski in the always linked, as he explains his expression, and I now clip from that, though if you struggle to see that – log2[probability] –> info in bits, you will struggle far worse with his equation:

    define phiS as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T. [26] . . . . where M is the number of semiotic agents [S’s] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations [–> notice, he is speaking about bits] that the known, observable universe could have performed throughout its entire multi-billion year history.[31] . . . [Then] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity [X] of T given [chance hypothesis] H [in bits] . . . as [the negative base-2 log of the conditional probability P(T|H) multiplied by the number of similar cases phiS(t) and also by the maximum number of binary search-events in our observed universe 10^120]

    X = – log2[10^120 ·phiS(T)·P(T|H)].

    We can instantly deduce:

    –> X is in bits

    –> – log2 {} will give bits

    –> 10^120 ·phiS(T)·P(T|H) is a three term product expression within a logs to get info operation

    –> 10^120 becomes 398 bits as part of a threshold, and p(T|H) neg logged is information, so I(T)

    –> phiS(T) is a number and becomes a further part of the threshold, once operated on by logging, hence substitution, as was pointed out and ignored

    –> We know separately that Dembski sees 500 bits as threshold, as was pointed out and again ignored [no progress can be made if there is unwillingness to acknowledge even basic facts, so this is now exposure of willfully obtuse hyperskepticism]

    –> the threshold therefore ranges up to 500 from 398, where 2^498.29 gives the more exact 10^150 but rounding to 500 is about the same, again as noted and used to double down on objections

    –> we therefore can infer that as a reasonable metric, “CSI lite” X = I(T) – 398 – K2, with upper bound X = I(T) – 500 (checked numerically in the sketch after this list)

    –> My further point is, we start with info carrying capacity, given that say a fair coin with p(1) = 1/2 gives 1 bit, and noting that anything else can be reduced to bits, then address observable functionality and configuration based specificity, represented by dummy variables F and S, and take product. Redundancy can be addressed onward, in practical cases we are so far beyond threshold that it is immaterial, as shown already

    –> we then can do a csi lite model X = FSB – 500 or 1000, as shown.

    –> the result is as already seen; unsurprisingly, the CSI in relevant contexts is so far beyond threshold that redundancy is immaterial
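
    To check that the rearrangement is only algebra, a numerical sketch (the phi and P values are placeholders, not a biological case):

    import math

    phi, P = 10**10, 10**-60                                        # placeholder values
    X_direct = -math.log2(10**120 * phi * P)                        # Dembski's form
    X_thresh = -math.log2(P) - math.log2(10**120) - math.log2(phi)  # bits-beyond-threshold form
    print(X_direct, X_thresh)                                       # both ~ -232.5: same metric, rearranged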

    It is obvious the material question is answered; sidestepping and going after red herrings led away to strawmen etc are only distractive.

    The problem you face is that FSCO/I is real, is observable, is well beyond threshold and strongly points to intelligently directed configuration as a key causal factor for the cell. Where, CSI in bio contexts is cashed out as function, i.e. it comes down to functionally specific complex organisation and/or associated information.

    That is the pivotal point, and you are consistently coming up short.

    KF

  313. 313
    JVL says:

    Kairosfocus:

    I am not trying to run or walk; I’m trying to see if you can use your own version of Dr Dembski’s specified complexity metric on the same example he worked out in his 2005 monograph and get the same result, -20. If you don’t get the same result then you are measuring something different from him. Which is fine but then you have to explain why you are measuring something different because his work is perfectly legible and easy to follow.

    I don’t need any more lectures about basic mathematics; I don’t need any more theoretical discussions of the nature of information. I can follow the mathematics Dr Dembski uses just fine. Just show me you can get the same thing with your version please. You came up with it, you should be able to use it on a simple example. And, again, if you get a different result then please interpret your result based on what Dr Dembski says the metric is for.

  314. 314
    ET says:

    Wow! JVL doubles down on its ignorance! Buy a vowel, JVL.

  315. 315
    ET says:

    Alan Fox:

    The bacterial flagellum is not an event, for example, but a sequence of selected steps.

    That is the propaganda, anyway.

    The problem is there isn’t any evidence that blind and mindless processes can or did produce any bacterial flagellum. There isn’t even any way to test the claim that they can or did.

    Dembski is ruling out non-random processes such as selection a priori.

    Wrong again! First, natural selection is non-random in a trivial sense in that not all variants have the same chance of being eliminated. Next, NS doesn’t do anything until the motile device is up and running.

    Alan Fox is just clueless and apparently proud of it.

    There isn’t any evidence that natural selection can or did produce any bacterial flagellum. There isn’t even any way to test the claim that NS can or did.

    All Alan can do is whine because he is too stupid to understand reality. The reality is the ONLY reason probability arguments exist is because there isn’t any actual evidence.

  316. 316
    JVL says:

    ET: Wow! JVL doubles down on its ignorance! Buy a vowel, JVL.

    Can you apply either Dr Dembski’s specified complexity metric or Kairosfocus‘s version on any example? The one Dr Dembski works through is okay but clearly being able to analyse other number sequences (as Dr Dembski does) would be interesting.

    A yes or no answer to start with would be sufficient.

  317. 317
    Alan Fox says:

    @ JVL

    Not sure Yes or No will help. ET seems convinced it’s already been done, though he won’t tell us where, when, or by whom.

  318. 318
    ET says:

    Again, JVL is conflating 2 different things. His fixation with the 2005 paper “Specification”, is his downfall.

  319. 319
    ET says:

    Earth to Alan Fox- Durston did it in pee3r-review. And all you can do is choke on it! You are beyond pathetic.

  320. 320
    Lieutenant Commander Data says:

    @ET reading messages on UD is not good for your health. A fox will act like a fox while you try to convince the fox to act like a dove . Let the fox be the fox and keep calm 🙂

  321. 321
    JVL says:

    ET, LtComData;

    Can anyone use either Dr Dembski’s specified complexity metric or Kairosfocus‘s variation on a simple example?

    Yes or no?

    I can easily use Dr Dembski’s metric on a coin tossing example. It’s not that hard.

  322. 322
    kairosfocus says:

    JVL, what I did is enough to establish my point. Dembski’s work simply shows that he is 20 bits short of whatever threshold he set. As you were told already. Your doubling down and unresponsiveness simply show that you have nothing substantial to say about the implications of negative log probability and the addition rule for logs which lead to a threshold metric. KF

    PS, you were already shown an example at 293 above and were reminded of it. Your pretence fools no one.

  323. 323
    JVL says:

    Kairosfocus:

    I’m not doubling down or being unresponsive; I’m asking if you, personally, can use Dr Dembski’s metric or your version of it to deal with a particular example.

    In response 293 you did mention some general cases; I’d like to see you deal with a particular case and compare and contrast approaches. What do you say? Something concrete and not subject to interpretation.

    I know about the addition rule for logs, that doesn’t change the final output. I just want to see if, when we apply your standard and Dr Dembski’s standard to the same example, we get the same result. I’d prefer to deal with an example that is not covered in Dr Dembski’s monograph if that’s okay with you?

    What do you say? How about flipping a coin five times and getting five heads? Shall we work on that? I can do the work for Dr Dembski’s metric; can you do yours?

  324. 324
    Alan Fox says:

    Alan Fox- Durston did it in pee3r-review. And all you can do is choke on it! You are beyond pathetic.

    Nope. Fits, Joe, not bits. Do try to keep up.

  325. 325
    Alan Fox says:

    @ET reading messages on UD is not good for your health. A fox will act like a fox while you try to convince the fox to act like a dove . Let the fox be the fox and keep calm 🙂

    He’s calm. If he were stressed he’d start mis-spelling.

    Oh wait …

  326. 326
    kairosfocus says:

    JVL, side tracking. I established that per standard info theory negative log base 2 probability gives bit metrics for information. In that context, additional factors establish a threshold to be surpassed, just from the algebra involved: X = I(T) – [log c + log d + . . . ]. There is reason to see Dembski as favouring a 500 bit threshold. Going beyond, it is easy to see for reasonable cases: 300 AA is the commonly given average for proteins, thus 900 bases at 2 bits of carrying capacity per base is 1800 bits of carrying capacity, 1300 bits beyond a sol-system threshold. That is more than enough to make any reasonable degree of redundancy irrelevant. For one typical protein. Genomes run to 100 – 1,000 kbases for first life and 10 – 100+ million bases for major body plans such as those that make an arthropod. Again, vastly beyond blind search thresholds. That is what is material, and it is sufficiently realistic for a reasonable person. There is reason to conclude that cells are designed and body plans are designed. And to date, are you willing to acknowledge that neg log base 2 of a probability is a standard info metric that gives a key additive property? Y/N, why? _____ KF

    PS, that Dembski was 20 bits short of his implied threshold [all, bound up in the math as again pointed out . . . ], for whatever reason, makes no difference to the main point. Similarly, other comparatives show the reason why this is how information has been measured. A fair coin is a 1 binary digit register and can store – log2[1/2] = log2 [2] = 1 bit. 1800 bits of text for ASCII is 257 characters and 257 characters in good English would without question be taken as designed. And more.
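
    PPS, the arithmetic in the PS is easy to check (a quick sketch; the 7 bits per ASCII character figure is the standard one):

        import math

        # A fair coin: -log2(1/2) = 1 bit of carrying capacity per toss.
        print(-math.log2(1/2))            # 1.0

        # 300 AA * 3 bases/AA * 2 bits/base = 1800 bits of capacity,
        # 1300 bits beyond a 500-bit sol-system threshold:
        capacity = 300 * 3 * 2
        print(capacity, capacity - 500)   # 1800 1300

        # 1800 bits of 7-bit ASCII text is about 257 characters:
        print(capacity // 7)              # 257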

  327. 327
    ET says:

    Alan Fox:

    Nope. Fits, Joe, not bits. Do try to keep up.

    A distinction without a difference, Alan. Do try to keep up.

    FIT stands for Functional Bit, Alan. You clearly love to expose yourself as the fool, eh…

    We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable.
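
    For readers wondering what the fit calculation looks like in outline, here is a minimal sketch of the general idea (this is not Durston’s actual code, and the toy alignment is made up purely for illustration): functional bits come from the drop in per-site uncertainty between the null state (all 20 amino acids equiprobable) and the observed distribution across an alignment of functional sequences.

        import math
        from collections import Counter

        def site_entropy(column):
            # Shannon entropy H = -sum p_i log2 p_i for one column.
            n = len(column)
            return -sum((c / n) * math.log2(c / n)
                        for c in Counter(column).values())

        def functional_bits(alignment, alphabet_size=20):
            # Fits ~ sum over sites of (H_null - H_observed), with
            # H_null = log2(20), the maximum per-site uncertainty.
            h_null = math.log2(alphabet_size)
            length = len(alignment[0])
            return sum(h_null - site_entropy([s[i] for s in alignment])
                       for i in range(length))

        # Toy alignment of "functional" sequences (illustrative only):
        print(round(functional_bits(["MKV", "MKV", "MRV", "MKV"]), 2))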

  328. 328
    Alan Fox says:

    A distinction without a difference, Alan.

    Let me get this straight, Joe. Kirk Durston’s metric, “fit”, is exactly the same metric as KF’s “bit” and Dembski’s measure of CSI? This is what “ET” is claiming on the 8th August at 4pm EST?

  329. 329
    Alan Fox says:

    FIT stands for Functional Bit, Alan.

    *chuckles* I rest my case.

  330. 330
    ET says:

    So, your case is that you are a moron? In what way does the adjective “functional” change the fact that a bit is a bit? The adjective “functional” specifies the type of bit.

    Are you really completely ignorant of information?

  331. 331
    ET says:

    Let me get this straight- for YEARS I have been telling Alan and his minions that FSC = CSI = FSCO/I. And only now is it starting to sink in cuz he thinks he has some imaginary points to score?

  332. 332
    ET says:

    FSC is functional sequence complexity
    CSI is complex specified information. It is a form of FSC.
    FSCO/I is functionally specific complex organization or information. A form of CSI/ FSC and SC (specified complexity)

    Even elementary school kids can see how they are all related.

  333. 333
    Alan Fox says:

    …specifies the type of bit.

    Oh my aching sides! 🙂 🙂 🙂

  334. 334
    Alan Fox says:

    Even elementary school kids can see how they are all related.

    Related bits now. Please stop, I can’t take much more. 🙂

  335. 335
    Alan Fox says:

    OK, maybe I’m being a bit (heh) mean to Joe. But a bit is:

    …the most basic unit of information in computing and digital communications. The name is a portmanteau of binary digit. The bit represents a logical state with one of two possible values.

    I guess it’s possible some bits are functional and some are pink.

  336. 336
    ET says:

    Wow! No, you are not being mean to me. You are just exposing your ignorance of the subject

    The bit pertains to information-carrying CAPACITY. A FUNCTIONAL bit pertains to the actual information.

    Related bits now.

    Related CONCEPTs, duh!

  337. 337
    ET says:

    Again, for the learning impaired:

    “We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable.”

  338. 338
    Lieutenant Commander Data says:

    Science says that in the lab, under observation, chemistry [matter] doesn’t build functions.
    Anyway there are some rumors that long ago and far away, under a rock … 😆

  339. 339
    kairosfocus says:

    AF, predictable. You have nothing substantial so you resort to snideness. It looks like we need to go back to 255:

    you are found continuing to refuse to acknowledge first facts and established knowledge. Let us start, what is a binary digit? ____ Why is it that a p-state per digit register has log p/log 2 bits per character information storage capacity? _______ Why is it that in a practical code there will normally be a difference of frequencies of states in normal text? Why then does H = – [SUM] pi log pi give an average value of info per character? _______ Why is this called entropy and why is it connected to physical thermodynamics by the information school? _________ Why can we identify for an n length, p state string that there are p^n possibilities forming a configuration space? Why is it, then, that for codes to compose messages or algorithmic instructions or compactly describe functional states, normally, there will be zones of functionality T in a much larger space of possibilities W? ______ We could go on but that is enough to make a key point clear . . . .
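
    Those capacity and entropy questions can be checked in a few lines (a minimal sketch; the sample string is arbitrary):

        import math
        from collections import Counter

        # A p-state register element stores log(p)/log(2) bits/character:
        print(math.log(10) / math.log(2))   # ~3.32 bits per decimal digit

        def avg_info_per_char(text):
            # H = -sum p_i log2 p_i, the average info per character,
            # weighted by the observed symbol frequencies.
            n = len(text)
            return -sum((c / n) * math.log2(c / n)
                        for c in Counter(text).values())

        print(avg_info_per_char("functionally specific organisation"))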

    PS, it is commonplace in physics that while there are established general laws or frameworks, readily or exactly solvable problems may be few. When I did Q theory, no more than three exactly solved problems existed. This has to do with how fast complexity of real world problems grows. Approximate modelling is a commonplace. An old joke captures the point. Drunk A meets drunk B under a streetlight on hands and knees searching. I lost my contacts. So A joins in the search. After a while A asks are you sure you lost them here? Oh no, I lost them over in the dark but this is where the light is. The context was statistical thermodynamics.

    The obvious bottomline is that you feel you must object to and hyperskeptically dismiss what you have not troubled to try to understand. For years. KF

    PS, I have spoken of functionally specific bits. That is related to but different from info carrying capacity. A chain of 500 coins in no particular order can carry info but will most likely express gibberish. But if we find an ASCII string with meaningful English text or code for an algorithm that is a different matter. In the cell, we have found copious algorithmic code for protein synthesis.

  340. 340
    Alan Fox says:

    Come on, KF, you quoted a definition of a “bit” yourself. It’s the smallest unit of binary information, having a value of either one or zero. Bits cannot be distinguished by their functionality any more than they can by their pinkness.

    The ID claim is that there is a reliable way to look at a representation of “information”, such as a sequence or pattern of binary digits, and, knowing nothing else about that sequence or pattern, by some trivial math operation be able to say whether the pattern or sequence contains “functional” information or not.

    This has not yet been done.

  341. 341
    Alan Fox says:

    A chain of 500 coins in no particular order can carry info but will most likely express gibberish.

    Can you show how to calculate the likelihood? Most likely? Very likely? Somewhat likely? Can you be more precise? A computation?

  342. 342
    kairosfocus says:

    AF,

    you continue to double down.

    It is directly because of non response and misinformation that I pulled my older edn Taub and Schilling. Had you deigned to look in my always linked, you would have seen a more detailed discussion starting from Connor’s Signals, something that is over a decade old. You have sought to gin up a needless polarisation. And even now, you have yet to acknowledge that negative log2 probability is an information metric in bits, or that – log2[p(T|H)*c*d* . . .] will by the product rule for logs [and underlying exponents] reduce to an information beyond a threshold value, through the algebra involved. Which is why I noted by citing my always linked.

    Of course the base metric is that of information carrying capacity. Where, practical encodings invariably have redundancies that mean the neg log probability value is an upper bound. We may readily see from a coin, a two state register element in this context, how – log2 p(1) = 1 bit. Kolmogorov Complexity and compact compression in principle allow us to estimate functional information content, as was pointed out by augmenting Wikipedia at 293. Which, predictably, you sidestepped and pretend does not exist. Shame on you.

    Going on, you try to make a mountain out of the molehill of a simple description: as any description can be expressed in bits, WLOG the binary expansion of the result or observation gives a bell curve, one dominated by near 50-50 H/T mixes, with the overwhelming majority being gibberish strings in no particular order. Any reasonable person would accept this, and would further realise that we cannot in advance generate a universal decoding algorithm that tells each and all functional sequences of bits. I suspect you know this and are trying to use it to compose what you think are clever objections. Instead, you are only showing desperation to distract from what we can and do know readily and adequately.
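
    On the “most likely” question raised at 341, the concentration near 50-50 is directly computable (a quick sketch using the binomial distribution):

        from math import comb

        n = 500
        total = 2 ** n
        # Fraction of all 500-coin strings whose head count falls
        # within +/-25 of the 50-50 mean (225..275 heads inclusive):
        near = sum(comb(n, k) for k in range(225, 276))
        print(near / total)   # ~0.98: the overwhelming majority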

    We observe functionally specific, complex organisation and/or information, reduce it to bit strings and seek causal explanation of the complex functional information. There is an effective result. Once functional information is beyond 500 – 1,000 bits of complexity, reliably, it comes from design. Neither you nor any other objector can give us actually observed counter examples. A decade plus past, many tried and failed. So, objecting, denialism tactics have shifted.

    Which is what we are seeing.

    Stubborn denial of and objection to empirically well supported reality.

    Because, you are desperate to hyperskeptically deny or dismiss something at the heart of the design inference. By, speaking with utter disregard to truth.

    You now try to deny that functional sequences [such as your own text in English or code for an algorithm. . . as in mRNA for protein synthesis . . . or DWG code for say an ABU 6500 C3 reel] can be distinguished from gibberish like hkftdhvfsglvgo[8wdbblhyud or repetitive strings like asasasasasasas which is patent nonsense. The facts don’t fit your ideology so you try to get rid of them.

    Going on, you set up and knock over a strawman caricature of not only Dembski or myself but even Yockey and Wicken. Which latter you have never acknowledged. Functionality based on complex configuration and its wider cousin, specified complexity are observed in the here and now and may be given as detachable descriptions. We are doing science, so that should be no surprise. The question at stake being, how could/did such come about.

    The answer is, reliably — trillions of cases, by design.

    There is good reason to infer design as cause of the complex algorithmic code in the cell, and in body plans. Further, code is language and algorithms are knowledge based stepwise, goal directed processes.

    Signatures of design.

    KF

  343. 343
    JVL says:

    Kairosfocus: I established that per standard info theory negative log base 2 probability gives bit metrics for information.

    But that’s NOT what Dr Dembski is trying to do!!

    From his introduction:

    ABSTRACT: Specification denotes the type of pattern that highly improbable events must exhibit before one is entitled to attribute them to intelligence. This paper analyzes the concept of specification and shows how it applies to design detection (i.e., the detection of intelligence on the basis of circumstantial evidence). Always in the background throughout this discussion is the fundamental question of Intelligent Design (ID): Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause? This paper reviews, clarifies, and extends previous work on specification in my books The Design Inference and No Free Lunch.

    Notice there is no demand that functionality has to be observed first.

    Now, you choose to break down Dr Dembski’s formula in the following way:

    X = -log2(10^120•pS(T)•P(T|H)) = -log2(10^120) -log2(pS(T)) -log2(P(T|H))

    (which is completely unnecessary but whatever.)

    Again from Dr Dembski’s monograph:

    Note that putting the logarithm to the base 2 in front of the product pS(T)·P(T|H) has the effect of changing scale and directionality, turning probabilities into number of bits and thereby making the specificity a measure of information. This logarithmic transformation therefore ensures that the simpler the patterns and the smaller the probability of the targets they constrain, the larger specificity.

    Note he is talking about the product of pS(T) and P(T|H) not them taken separately. But, more importantly, does Dr Dembski himself even look at a particular problem in that way?

    Suppose that E conforms to the pattern T and that T has high specificity, that is, –log2[pS(T)·P(T|H)] is large or, correspondingly, pS(T)·P(T|H) is positive and close to zero.

    Nothing about bits there . . .

    The crucial cut-off, here, is M·N·pS(T)·P(T|H) < 1/2: in this case, the probability of T happening according to H given that all relevant probabilistic resources are factored is strictly less than 1/2, which is equivalent to X~ = –log2[M·N·pS(T)·P(T|H)] being strictly greater than 1. Thus, if X~ > 1, it is less likely than not that an event of T’s descriptive complexity and improbability would happen according to H even if as many probabilistic resources as are relevant to T’s occurrence are factored in.

    Nothing about bits there . . .

    Please note that the examples that Dr Dembski works through are either sequences of heads or tails (represented as 0s and 1s but that’s not necessary) or a sequence of numbers. NONE of his examples are of great length. Nor does he allude to 500 bits as being some important threshold.

    In general, the bigger M·N·pS(T)·P(T|H) — and, correspondingly, the smaller its negative logarithm (i.e., X~ ) — the more plausible it is that the event denoted by T could happen by chance. In this case, if T were a truly random sequence (as opposed to the Fibonacci sequence), pS(T), as we observed earlier, would be on the order of 10^10, so that
    X~ ≈ –log2[10^-4 ·pS(T)] ≈ –log2[10^-4 ·10^10] ≈ –20.
    On the other hand, if pS(T) were on the order of 10^3 or less, X~ would be greater than 1, which would suggest that chance should be eliminated.

    Again, nothing about bits there. AND you can’t have -20 bits. Because HE’S NOT COUNTING BITS! He’s trying to see if chance can be eliminated, as he clearly says!! It’s not based on the size of the sequence he’s considering!
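
    (For what it’s worth, the arithmetic in that example is easy to reproduce, using only the quoted values:

        import math

        # X~ = -log2(10^-4 * pS(T)), pS(T) ~ 10^10 for a random sequence:
        print(-math.log2(1e-4 * 1e10))   # ~ -19.93, his "approximately -20"

        # With pS(T) ~ 10^3 instead:
        print(-math.log2(1e-4 * 1e3))    # ~ 3.32, greater than 1

    Negative, and he simply reads it as “chance not eliminated”, not as a bit count.)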

    It follows that if 10^120 ·pS(T)·P(T|H) < 1/2, then it is less likely than not on the scale of the whole universe, with all replicational and specificational resources factored in, that E should have occurred according to the chance hypothesis H. Consequently, we should think that E occurred by some process other than one characterized by H. Since specifications are those patterns that are supposed to underwrite a design inference, they need, minimally, to entitle us to eliminate chance. Since to do so, it must be the case that
    X = –log2[10^120 ·pS(T)·P(T|H)] > 1,
    we therefore define specifications as any patterns T that satisfy this inequality. In other words, specifications are those patterns whose specified complexity is strictly greater than 1.

    No mention of bits anywhere. He just wants to see if X is > 1. That’s it.

    As an example of specification and specified complexity in their context-independent form, let us return to the bacterial flagellum. Recall the following description of the bacterial flagellum given in section 6: “bidirectional rotary motor-driven propeller.” This description corresponds to a pattern T. Moreover, given a natural language (English) lexicon with 100,000 (= 10^5) basic concepts (which is supremely generous given that no English speaker is known to have so extensive a basic vocabulary), we estimated the complexity of this pattern at approximately pS(T) = 10^20 (for definiteness, let’s say S here is me; any native English speaker with some knowledge of biology and the flagellum would do). It follows that –log2[10^120 ·pS(T)·P(T|H)] > 1 if and only if P(T|H) < 1/2 × 10^-140, where H, as we noted in section 6, is an evolutionary chance hypothesis that takes into account Darwinian and other material mechanisms and T, conceived not as a pattern but as an event, is the evolutionary pathway that brings about the flagellar structure (for definiteness, let’s say the flagellar structure in E. coli). Is P(T|H) in fact less than 1/2 × 10^-140, thus making T a specification? The precise calculation of P(T|H) has yet to be done. But some methods for decomposing this probability into a product of more manageable probabilities as well as some initial estimates for these probabilities are now in place. These preliminary indicators point to T’s specified complexity being greater than 1 and to T in fact constituting a specification.

    Nothing about bits at all. In fact, he’s not even considering something numerical (the bacterial flagellum).

    The fundamental claim of this paper is that for a chance hypothesis H, if the specified complexity X = –log2[10^120 ·pS(T)·P(T|H)] is greater than 1, then T is a specification and the semiotic agent S is entitled to eliminate H as the explanation for the occurrence of any event E that conforms to the pattern T (S is similarly entitled to eliminate H when the context-dependent specified complexity X~ = –log2[M·N·pS(T)·P(T|H)] is greater than one, only this time, because M·N will be less than 10^120 , the strength with which X~ eliminates H will be less than what it is for X).

    No bits to be seen because part of the point is to be able to analyse things that don’t match a pre-supposed limit!!

    In fact, since all Dr Dembski cares about is whether or not his X is greater than 1, he could have used log10 and compared 10^120•pS(T)•P(T|H) to 1/10 by adding the appropriate constant.

    In none of his examples does he talk about the number of bits he’s looking at being significant. He chose to use log2 (because of its association with information theory?) and he uses 10^120 as an upper bound because of the number of binary operations, but he does not convert the parts of his formula into numbers of bits. He just doesn’t do that.

    And, again, I’m happy to work through Dr Dembski’s formula for a simple example (say getting five heads in a row) if you can work through your version as well. And then we can compare the results.

    I assume you can use your version . . . since you proposed it. You can use it can’t you?

  344. 344
    ET says:

    Alan Fox:

    Bits cannot be distinguished by their functionality any more than they can by their pinkness.

    Your willful ignorance is not an argument. Durston explained it. What part of the explanation are you too stupid to comprehend?

    Again, for the learning impaired- Functionality is observed. Actual information, ie meaning, is observed.

    The ID claim that there is a reliable way to look at a representation of “information” such as a sequence or pattern of binary digits and, knowing nothing else about that sequence or pattern, and, by some trivial math operation, be able to say whether the pattern or sequence contains “functional” information or not.

    Nope. You clearly don’t know what you are talking about.

    If the information isn’t measurable, then we use the specification metric to see if an object, structure or event was intelligently designed or not. However, if we have life, which is full of measurable information, then we can use Durston’s methodology.

  345. 345
    ET says:

    JVL- go soak your head! You are using the wrong metric. How many times do you have to be corrected on this? It’s like you are a one-track minded infant.

    Read “No Free Lunch” and stop acting like such a loser crybaby.

  346. 346
    JVL says:

    ET: Read “No Free Lunch” and stop acting like such a loser crybaby.

    Again, to quote Dr Dembski himself where he says he is extending the work done in No Free Lunch:

    Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause? This paper reviews, clarifies, and extends previous work on specification in my books The Design Inference and No Free Lunch.

    You are using the wrong metric. How many times do you have to be corrected on this?

    I am discussing the metric that Dr Dembski published in 2005. I am supporting things I say with quotes from his monograph. I have offered to compute his metric for a simple coin-tossing example and have repeatedly asked you and Kairosfocus if you want to use the metric proposed by Kairosfocus and then we can compare results and discuss any differences. You and Kairosfocus have declined to try this, for reasons known only to yourselves. If you’d like to use yet another metric then be my guest. I do think that Dr Dembski (PhD in mathematics) thought long and hard about design detection and came up with his metric thinking that it would actually work, which is why I’d like to compare its results with other schemes. Please note that he does state he’s trying to deal with a situation “even if nothing is known about how they arose”, meaning that it can be applied to sequences of numbers or 0s and 1s without being told how they were generated.

    I’m willing to give his metric a shot at some examples. Are you willing to do something similar?

  347. 347
    ET says:

    Again, 2 different things. I explained it. You ignored the explanation and chose to prattle on like a child.

    The metric proposed in No Free Lunch also pertains to “Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?”

    But this makes me wonder- what metric do people use for evolution by means of blind and mindless processes? They don’t have one! Evos just use bald assertions and declarations.

  348. 348
    JVL says:

    ET:

    So, you don’t want to test Dr Dembski’s metric from his 2005 monograph which he says “reviews, clarifies, and extends previous work on specification in my books The Design Inference and No Free Lunch” which, to my mind, means the 2005 monograph is the preferred and superior version.

    Is it because you can’t do the math involved with Kairosfocus‘s that you don’t want to try a couple of examples?

    I’m happy to use the 2005 metric and compare that to the one in No Free Lunch if you want. How about we look at a coin-flipping test? Surely if they are both about detecting design (with no prior knowledge of the objects origins) then they are comparable since they are trying to do the same thing?

  349. 349
    ET says:

    No one cares what your mind thinks, JVL. And seeing that archaeologists and forensic scientists don’t use it, I don’t see the fuss.

    Again, for the learning impaired- NFL and specification are two different metrics used to see if something was intelligently designed or not. NFL pertains to CSI in which bits are easily measured. Specification pertains to an object/ structure/ event that isn’t easily amenable to bits.

    However, we also have tried and true design detection techniques which rely on our knowledge of cause-and-effect relationships. I have several decades of experience with this methodology. Whereas Dembski doesn’t have any.

    In the end, I don’t care what you do. If you want me to do something for you, you have to pay me.

  350. 350
    kairosfocus says:

    JVL, the reliable observable indicator or sign of intelligently directed configuration is FSCO/I. Second, whatever Dembski may have said, once we have – log2[probability*c*d] we have an info beyond threshold. That is objective and established given how info is measured and why. Indeed, just the fact that Dembski used negative base 2 logs implies he was aiming at info measured in bits. The connexion between Dembski and X = I(T) – threshold level [typ. 500 bits] is objective. The issue onward is to measure functional info, beyond info capacity. That is because practical encodings, whether direct or by description language such as AutoCAD DWG etc, have redundancies, which partly help with resisting noise effects. Kolmogorov complexity and compact compression allows that, but even before such once one is far enough beyond the already conservative threshold, redundancy makes no material difference. Just with a typical 300 AA protein, we are well beyond threshold, much less with a genome. Life and body plans, on contained FSCO/I, are designed. KF

  351. 351
    Alan Fox says:

    Just with a typical 300 AA protein, we are well beyond threshold, much less with a genome. Life and body plans, on contained FSCO/I, are designed.

    Not even if you assume instantaneous random assembly which is not what happens during reiterative steps of variation and selection. You still fail at knocking over straw men.

    Additionally, even using your nonsense assumption of all-at-once random assembly, Keefe and Szostak showed the wealth of functionality in random protein sequences. You also fail dismally at trying to join dots between sequence and function.

  352. 352
    kairosfocus says:

    PS, –log2[10^120 ·pS(T)·P(T|H)] > 1 if and only if P(T|H) < 1/2 ×10^-140 implies bits.

  353. 353
    JVL says:

    ET: No one cares what your mind thinks, JVL. And seeing that archaeologists and forensic scientists don’t use it, I don’t see the fuss.

    Dr Dembski will be glad to know that all the work he put into his 2005 monograph was wasted since no one wants to test it on some examples.

    NFL pertains to CSI in which bits are easily measured. Specification pertains to an object/ structure/ event that isn’t easily amendable to bits.

    The 2005 formula handles both.

    However, we also have tried and true design detection techniques which rely on our knowledge of cause-and-effect relationships. I have several decades of experience with this methodology. Whereas Dembski doesn’t have any.

    Dr Dembski thinks he found a way that’s better than that; when you don’t need to know anything about the origin of the thing in question. Shame no one takes it seriously.

  354. 354
    jerry says:

    It’s all very simple.

    Bits can be directly equated with a probability: 6 bits = 1/64; 7 bits = 1/128.
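
    The conversion is one line of code (a trivial sketch):

        # n bits of improbability corresponds to a probability of 2^-n:
        for n in (6, 7, 500):
            print(n, "bits ->", 2.0 ** -n)   # 1/64, 1/128, ~3.05e-151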

    500 bits describes every particle in the universe at every nanosecond or state transition since the Big Bang. Something whose probability corresponds to more than 500 bits is essentially impossible by random processes.

    A distribution that is ordered, such as 1000 heads or 500 coins in a pattern, is different from any random distribution, and its probability can be calculated based on the specific pattern. So to say 1000 H and any other random distribution are equivalent is nonsense. Their probabilities are spectacularly different.

    Everyone reading this thread has been given the explanation for this but it is ignored.

    But the purpose for commenting here is not to understand but to find any small way one can to hold forth, mostly with nonsense or unnecessary complexity.

  355. 355
    ET says:

    Alan Fox:

    Not even if you assume instantaneous random assembly which is not what happens during reiterative steps of variation and selection. You still fail at knocking over straw men.

    Clueless. You don’t have any evidence that blind and mindless processes can produce any proteins, Alan. You don’t even have a way to test the claim.

    Additionally, even using your nonsense assumption of all-at-once random assembly, Keefe and Szostak showed the wealth of functionality in random protein sequences.

    Great. Too bad you can’t demonstrate that blind and mindless processes produced them. And it wasn’t a wealth of functionality. It was barely any functionality.

  356. 356
    ET says:

    JVL:

    Dr Dembski will be glad to know that all the work he put into his 2005 monograph was wasted since no one wants to test it on some examples.

    The ONLY reason such a paper was written is because you and yours have nothing. And Dembski proved it.

    Dr Dembski thinks he found a way that’s better than that;

    Oh my. Now you know what Dembski thinks. What a putz.

    when you don’t need to know anything about the origin of the thing in question.

    That’s what archaeologists and forensic scientists do.

    Dembski just provided a way to quantify it.

  357. 357
    jerry says:

    You don’t even have a way to test the claim.

    I do.

    It’s been presented several times. The punctuated equilibrium adherents claim random accumulation of variations to genomes is what produces new proteins. Easily tested in the sense of knowing what to do, but finding money/resources/willingness is extremely difficult.

  358. 358
    JVL says:

    Kairosfocus: once we have – log2[probability*c*d] we have an info beyond threshold.

    You just have a number which Dr Dembski wants to compare to 1 to see if there is specified complexity. As he clearly stated.

    Indeed, just the fact that Dembski used negative base 2 logs implies he was aiming at info measured in bits.

    Which he never says in any of his examples; and he worked out an example where he got X approximately equal to -20, which he did not note as weird or impossible, which he would have done if he were talking about bits. Clearly.

    The connexion between Dembski and X = I(T) – threshold level [typ. 500 bits] is objective.

    Which is not what he did working out any of his examples.

    That is because practical encodings, whether direct or by description language such as AutoCAD DWG etc, have redundancies, which partly help with resisting noise effects. Kolmogorov complexity and compact compression allows that, but even before such once one is far enough beyond the already conservative threshold, redundancy makes no material difference. Just with a typical 300 AA protein, we are well beyond threshold, much less with a genome. Life and body plans, on contained FSCO/I, are designed.

    Look, you clearly are interpreting his 2005 metric differently than he did himself. Which is why I’d be interested in comparing his with yours for a simple, easy-to-compute example. Why not do that and see? I can carry out his math. Shall we?

    P(T|H) < 1/2 ×10^-140 implies bits.

    1/2 x 10^-140 is a very, very, very small positive number which cannot be a number of bits. And, you continue to disregard pS(T), which he spends lots of paragraphs developing, so it must be important, and it depends on T. In fact, as he says clearly, X cannot solely depend on P(T|H) because very random results are very improbable: any sequence of Hs and Ts is just as likely/unlikely as any other given random generation. That’s why you need pS(T)! Dr Dembski explains all this.

    Shall we try both methods and compare results? I’d start with something easy so the math is very clear and then work up to more complicated examples.
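
    To make that concrete in advance, here is what the 2005 formula gives for five heads under one assumed input (the choice of pS(T) = 2, for the two descriptions “all heads” and “all tails”, is mine, and is exactly the kind of thing we would need to agree on):

        import math

        def x_2005(log2_p, log2_phi):
            # X = -log2(10^120 * pS(T) * P(T|H)), computed in log space:
            return -(120 * math.log2(10) + log2_phi + log2_p)

        # Five heads: P(T|H) = (1/2)^5, pS(T) = 2 (assumed):
        print(x_2005(-5, 1))   # ~ -394.6: far below 1, chance not eliminated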

  359. 359
    JVL says:

    Jerry: But the purpose for commenting here is not to understand but find any small way one can to hold forth mostly with nonsense or unnecessary complexity.

    For some reason, Dr Dembski produced a monograph in 2005 where he proposed a way to detect specified complexity even if you knew nothing of the origin of the object in question. He used some simple numerical examples to motivate his derivation and show how it worked out in one particular example. He clearly spent a lot of time working on this metric.

    All I’d like to do is compare the results you get from that 2005 metric with other approaches, starting with some simple examples so the mathematics is straightforward.

    If you don’t want to try that fine. I gotta think Dr Dembski didn’t just make up his metric so it could lie in a drawer not being used. I’d like to see how it works. Clearly you don’t care and neither does Kairosfocus or ET which I find strange since Dr Dembski’s ID views are otherwise considered significant.

  360. 360
    JVL says:

    ET: The ONLY reason such a paper was written is because you and yours have nothing. And Dembski proved it.

    That’s not what he wrote in the Abstract for the monograph.

    Oh my. Now you know what Dembski thinks. What a putz.

    That’s essentially what he said in the Abstract of the monograph.

    Dembski just provided a way to quantify it.

    Shall we check it and see what kind of results it gives compared to other methods?

  361. 361
    jerry says:

    If you don’t want to try that fine

    I explained the mathematics.

    That is doing. Not trying.

  362. 362
    JVL says:

    Jerry: I explained the mathematics.

    So, Dr Dembski’s metric is useless? Why do you think he wrote it and wrote in the Abstract:

    This paper analyzes the concept of specification and shows how it applies to design detection (i.e., the detection of intelligence on the basis of circumstantial evidence). Always in the background throughout this discussion is the fundamental question of Intelligent Design (ID): Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause? This paper reviews, clarifies, and extends previous work on specification in my books The Design Inference and No Free Lunch.

    Clearly he thought he was adding to the work he did previously and it seems to me that the most recent formulation should be the thing that is used.

  363. 363
    ET says:

    Again, JVL- Dembski provides a probability argument. Yet you and yours don’t even deserve a seat at that table.

    The ONLY reason such a paper was written is because you and yours have nothing. And Dembski proved it.

    That’s not what he wrote in the Abstract for the monograph.

    And yet it is a fact.

    That’s essentially what he said in the Abstract of the monograph.

    No, he did not.

    Shall we check it and see what kind of results it gives compared to other methods?

    What other methods? Who tried to quantify the design of Stonehenge?

  364. 364
    ET says:

    Wow. The previous work he was referring to was that of bits and sequences amenable to bits.

    So go ahead. Use specification of Stonehenge to see if the archaeologists are correct.

    And we are still waiting on the methodology used by those adhering to evolution by means of blind and mindless processes.

  365. 365
    JVL says:

    ET: Dembski provides a probability argument. Yet you and yours don’t even deserve a seat at that table.

    Yes, he does. Which he clearly thought was valid. Shall we compare and contrast the results of his 2005 metric with other methods?

    The ONLY reason such a paper was written is because you and yours have nothing. And Dembski proved it.

    Again, taking Dr Dembski at his own words: in a non-peer-reviewed paper in which he was free to say whatever he liked, he was trying to refine and extend design detection in a mathematically sound way.

    No, he did not.

    Clearly he did. His statements are straightforward and easy to understand.

    What other methods?

    Other methods of detecting design which is clearly what he was working on!!

    Look, clearly you’re afraid to work with his 2005 metric for some reason. Only you know why. But you cannot deny what Dr Dembski himself wrote:

    This paper analyzes the concept of specification and shows how it applies to design detection (i.e., the detection of intelligence on the basis of circumstantial evidence). Always in the background throughout this discussion is the fundamental question of Intelligent Design (ID): Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause? This paper reviews, clarifies, and extends previous work on specification in my books The Design Inference and No Free Lunch.

    Unless you want to question all the work he did for that monograph? He thought it was important (and maybe necessary) to clarify and extend his previous work. You disagree, I guess. But I think I’ll take him at his word and I have enough respect for his formulation to see how it works for given examples. Why don’t you want to do that?

    The previous work he was referring to was that of bits and sequences amendable to bits.

    Which he did not reference in his worked out examples in his 2005 monograph. In fact, in his most worked out example he got a result of approx -20, which he DID NOT flag as weird or impossible, as he would have if he were counting bits.

    So go ahead. Use specification of Stonehenge to see if the archaeologists are correct.

    That’s a very complicated example; I think it’s better to start with simpler situations to see how the parts of the metric work.

  366. 366
    ET says:

    JVL:

    Yes, he does. Which he clearly thought was valid. Shall we compare and contrast the results of his 2005 metric with other methods?

    Non-sequitur. What other methods?

    Again, taking Dr Dembski at his own words: in a non-peer-reviewed paper in which he was free to say whatever he liked, he was trying to refine and extend design detection in a mathematically sound way.

    And another non-sequitur.

    Clearly he did. His statements are straightforward and easy to understand.

    He did not make the claim you are attributing to him.

    Other methods of detecting design which is clearly what he was working on!!

    Again, he was trying to QUANTIFY it. THAT is what he was working on.

    Which he did not reference in his worked out examples in his 2005 monograph.

    Are you stupid? It was referenced in the abstract. Of course, it wouldn’t be in the examples for obvious reasons.

    To me, Dembski’s work on specification is for desk jockeys. In the field there are other ways to go about it. But, when in doubt, the metric may come in handy.

  367. 367
    ET says:

    Stonehenge isn’t complicated at all. And it’s something we “know” is artificial.

  368. 368
    JVL says:

    ET: What other methods?

    Whatever other ways you’d like to use to detect specified complexity, which is what Dr Dembski said he was doing.

    He did not make the claim you are attributing to him.

    I stand by my statement as supported by quotes from the monograph.

    Again, he was trying to QUANTIFY it. THAT is what he was working on.

    Again, as he clearly said in his Abstract, he was working on aspects of design detection.

    It was referenced in the abstract. Of course, it wouldn’t be in the examples for obvious reasons.

    No, that is not obvious. If you mention something in the Abstract but then don’t actually address it in your worked-out examples, then it’s fair to take the examples at face value whilst taking the Abstract with a grain of salt.

    To me, Dembski’s work on specification is for desk jockeys. In the field there are other ways to go about it. But, when in doubt, the metric may come in handy

    Okay. Why not test it out then to see when it is actually useable? This is what I’m suggesting.

    Stonehenge isn’t complicated at all. And it’s something we “know” is artificial.

    It is complicated if you’re going to apply Dr Dembski’s 2005 metric. Which is my point: let’s explore his metric and see a) if it’s useful and b) how it compares to other ways of checking for specified complexity.

    Shall we start with a simple coin-toss example and then ratchet things up?

  369. 369
    kairosfocus says:

    JVL, that number is information in bits beyond the implied threshold. Interesting to see how resistant you are to mathematics, here. KF

  370. 370
    kairosfocus says:

    F/N: Stonehenge, a monolith-based stone-circle calendar aligned with the solstice. You try to build one with primitive tools, complete with bringing stones from a huge distance. Then go solve the same problem for the Giza pyramids. KF

    PS, Wikipedia’s confessions:

    Stonehenge is a prehistoric monument on Salisbury Plain in Wiltshire, England, two miles (3 km) west of Amesbury. It consists of an outer ring of vertical sarsen standing stones, each around 13 feet (4.0 m) high, seven feet (2.1 m) wide, and weighing around 25 tons, topped by connecting horizontal lintel stones. Inside is a ring of smaller bluestones. Inside these are free-standing trilithons, two bulkier vertical sarsens joined by one lintel. The whole monument, now ruinous, is aligned towards the sunrise on the summer solstice. The stones are set within earthworks in the middle of the densest complex of Neolithic and Bronze Age monuments in England, including several hundred tumuli (burial mounds).[2]

    Archaeologists believe that Stonehenge was constructed from 3000 BC to 2000 BC. The surrounding circular earth bank and ditch, which constitute the earliest phase of the monument, have been dated to about 3100 BC. Radiocarbon dating suggests that the first bluestones were raised between 2400 and 2200 BC,[3] although they may have been at the site as early as 3000 BC.[4][5][6]

  371. 371
    Lieutenant Commander Data says:

    In order to detect functional information, math formulas are not necessary. Do you observe a function in living organisms? There you have functional information.

  372. 372
    relatd says:

    This sentence consists of code symbols typed by an intelligent agent. That is all you need to know about Intelligent Design.

  373. 373
    JVL says:

    Kairosfocus: JVL, that number is information in bits beyond the implied threshold. Interesting to see how resistant you are to mathematics,

    AGAIN, I am interested in exploring and using Dr Dembski’s 2005 metric for specified complexity. If you’re not curious or interested in taking him at his word and trying to compute his formulation then just say so. I find that stance confusing and contradictory (since Dr Dembski is considered one of the prime intellects behind the modern ID movement) but you can make your own choices.

  374. 374
    JVL says:

    LtComData: In order to detect functional information math formulas are not necessary. Do you observe a function in living organisms ? There you have functional information

    Well, Dr Dembski seemed to think finding mathematical support was worth the effort and I’d like to a) take him at his word and b) respect his efforts enough to see what it’s like applying his metric to some easy-to-understand and easy-to-compute examples. At first anyway. If that doesn’t interest you then so be it. But surely you think it’s worthwhile respecting a publication that Dr Dembski clearly took a lot of time and effort putting together, especially given the fact that he himself said it was a continuation of work he had presented in previous publications.

    You make up your own mind. I’d like to see if what he proposed has merit, is useful and returns values we can all agree on. Strangely, no one else here feels the same way.

  375. 375
    JVL says:

    Relatd: This sentence consists of code symbols typed by an intelligent agent. That is all you need to know about Intelligent Design.

    Why do you think Dr Dembski took the time and effort to come up with his 2005 metric for specified complexity? He must have thought it was worthwhile and could address some questions and issues raised by skeptics. But no one here seems to think it was worthwhile or is even interested in trying to compute it. I find that strange and confusing. For simple examples the mathematics is not difficult. And by working with some simple examples at first it should become easier to graduate onto more complicated situations as one gains computational expertise with the defined terms.

    But, again, oddly, no one here seems to care at all. I wonder why?

  376. 376
    kairosfocus says:

    JVL, I stated a fact of the math of info theory and of logarithms, one that is highly material and which you are resisting. At this point, I guess we can draw the conclusion that the facts do not fit your agenda. Telling. KF

  377. 377
    JVL says:

    Kairosfocus: I stated a fact of the math of info theory and of logarithms, one that is highly material and which you are resisting. At this point, I guess we can draw the conclusion that the facts do not fit your agenda.

    My ‘agenda’ is to work with Dr Dembski’s 2005 metric for specified complexity and see how the results generated from that measure compare and contrast with other measures. I know how logarithms work and how to evaluate them, including finding log base 2. The rest of you seem blatantly uninterested in pursuing Dr Dembski’s work or finding out how useful it is. Why is that? How is it that working with Dr Dembski’s metric is pursuing some agenda of my own? More importantly, how is it that you not wanting to pursue Dr Dembski’s metric is a sign of you not being able to work with the mathematics he presents?

    Your absolute refusal to deal with what Dr Dembski actually wrote and presented is telling don’t you think? Either you disagree with him or you can’t follow his procedure. Which is it I wonder . . .

  378. 378
    kairosfocus says:

    JVL, I showed from the said math, that his expression mathematically implies an info in bits beyond a threshold metric, which you have tried to resist. I guess I need to directly ask, does – log2(probability) give info in bits? _____ Why or why not i/l/o what is on the table _____ [Honest answer, yes, and because that was worked out as a natural info metric decades ago.] I further pointed out that the case you point to boils down to 20 bits short of threshold, which you have also sidestepped. We can now freely draw the conclusion that your arguments have failed. KF

  379. 379
    Alan Fox says:

    That is all you need to know about Intelligent Design.

    You are overstating the case for “Intelligent Design”.

  380. 380
  381. 381
    relatd says:

    AF at 379,

    Every word-symbol you wrote had to be functional, specific and in the correct order to be understood.

  382. 382
    kairosfocus says:

    Relatd, and because it is FSCO/I you instantly recognised it as from an intelligent source. That self-referentiality is part of what exposes the speciousness of the sort of objections we are seeing. KF

  383. 383
    Lieutenant Commander Data says:

    The problem of the observer is scientifically unsolvable, so we are stuck with religion and ethics.

  384. 384
    Alan Fox says:

    If everything is designed, what’s the point of detecting it? It makes no sense.

  385. 385
    JVL says:

    Kairosfocus:

    You are interpreting what Dr Dembski wrote instead of reading what he actually wrote and what he clearly meant.

    Again, he worked out an example and got a result of approx -20. He didn’t say: that’s weird ’cause I should be getting a number representing so many bits. He interpreted -20 based on his formulation.

    He DOES NOT break his formulation apart and when he gives the bottom line criterion he’s clearly looking for a result greater than 1. Not greater than 20, not greater than 500, greater than 1. He doesn’t say “more than 1 bit” he just says greater than 1.

    You are so desperate to work in your 500-bit threshold that you not only break apart his calculation you also change some of his factors so that you can get what you want.

    His whole point is to create a metric than can be used to analyse some object or pattern or sequence OF ANY LENGTH to see if it exhibits specified complexity and thus was designed.

    I further pointed out that the case you point to boils down to 20 bits short of threshold, which you have also sidestepped.

    He didn’t say it was 20 bits shy of threshold. He just didn’t do that. The reason he didn’t say that is because he’s not interpreting his results as bits AND he wants to be able to analyse things that are of any length. The sequence he used for that example was CLEARLY much shorter than your 500-bit limit, so if he wanted to hit that threshold he would have picked something of that length. But he didn’t.

    You’ve spent years and years convincing yourself of your reworked interpretation which is just not correct.

    I have, multiple times, offered to compare results from using the metric Dr Dembski actually wrote up with your interpretation to see what results are obtained. I’ve offered to do the mathematics for his metric myself. If you thought your version would give the same result as his I would think you would gladly agree with that because you’d prove your case. BUT you have not and will not agree to such a test. Which says to me either a) you suspect you will not get the same result or b) you can’t actually calculate your own version. Since you won’t even do the mature thing and admit which of those is true I guess the rest of us can just make an assumption. Come to think of it . . . they could both be true.

  386. 386
    kairosfocus says:

    JVL, no, I am not; I am working out the algebra that he had to have in mind to go to a negative log2 configuration, and that leads to some basic telecomms theory. That you have to deny the obvious mathematics of -log2[probability*c*d] tells us all we need to know on the bankruptcy of what you are trying to support. Working out gives the trivial answer that Dembski’s example is 20 bits short of threshold, where it looks like he was working with 10^140 there, which is in this context near enough to 10^150, the root of 500 bits. It is now fairly obvious that not having a substantial answer you have resorted to a rhetorical distraction and refuse to acknowledge the relevant algebra. There is no reason for me to further pander to a further side track [which this already is] as it will simply lead to more of the same, if you are unresponsive to algebra, that is already decisive and not in your favour. This tells us a lot about the nature of far too many objections to the design inference. KF

  387. 387
    kairosfocus says:

    AF, more silly talk points. We all know that there is a school of thought that for 160 years has laboured to expel inference to design from complex organisation from the Western mind. Its comeuppance started in the 1950’s with the detection of fine tuning of the cosmos and with recognition that there was and is coded algorithmic information in D/RNA. By the 1970’s Orgel and Wicken brought the matter to focus through recognising FSCO/I. Thaxton et al responded in the 80’s and from the 90’s the design inference, associated theory and a supportive movement grew. Your rhetorical stunt is meant to undermine the empirical nature of the observation that FSCO/I is a strong EMPIRICAL sign of intelligently directed configuration as key cause, but fails by dodging facts on the table for decades. And now we see a mathematically informed objector unwilling to acknowledge the algebra of -log2[probability*c*d], and apparently straining at the equivalent of substituting log2[c] –> C and log2[d] –> D. All of this is sadly telling. KF

  388. 388
    kairosfocus says:

    F/N: An online discussion:

    https://math.stackexchange.com/questions/2318606/is-log-the-only-choice-for-measuring-information

    >>When we quantify information, we use I(x) = –log P(x), where P(x) is the probability of some event x. The explanation I always got, and was satisfied with up until now, is that for two independent events, to find the probability of them both we multiply, and we would intuitively want the information of each event to add together for the total information. So we have I(x∧y) = I(x) + I(y). The class of logarithms k·log(x) for some constant k satisfy this identity, and we choose k = –1 to make information a positive measure.

    But I’m wondering if logarithms are more than just a sensible choice. Are they the only choice? I can’t immediately think of another class of functions that satisfy that basic identity. Even in Shannon’s original paper on information theory, he doesn’t say it’s the only choice, he justifies his choice by saying logs fit what we expect and they’re easy to work with. Is there more to it?

    . . .

    That functional equation characterizes the logarithm (as long as you have any reasonable continuity condition). –
    Ethan Bolker
    Jun 11, 2017 at 15:26
    The logarithm I think is the only class of continuous functions that turn multiplication into addition, but as you said the explanation is only intuitive. I don’t know of an alternative, but I am certain the logarithm is not the only possible choice. –
    Matt Samuel
    Jun 11, 2017 at 15:27
    Sketch of proof: Let I = f∘log, then the identity becomes f(a+b) = f(a) + f(b), which is Cauchy’s functional equation. – user856
    Jun 11, 2017 at 15:30

    . . .

    I just wanted to point something out, but honestly, I think the other answers are far better given that this is a mathematics site. I’m just pointing it out to add another argument for why logarithm makes sense as the only choice.

    You have to ask yourself what information even is. What is information?

    Information is the ability to distinguish possibilities.1

    1 Compare with energy in physics: the ability to do work or produce heat.

    Okay, let’s start reasoning.

    Every bit (= binary digit) of information can (by definition) distinguish 2 possibilities, because it can have 2 different values. Similarly, every n bits of information can distinguish 2^n possibilities.

    Therefore: the amount of information required to distinguish 2^n possibilities is n bits.
    And the same exact reasoning works regardless of whether you’re talking about base 2 or 3 or e.
    So clearly you have to take a logarithm if the number of possibilities is an integer power of the base.

    Now, what if the number of possibilities is not a power of b = 2 (or whatever your base is)?
    In this case you’re looking for a function that coincides with the logarithm at the integer powers.

    At this point, I would be convinced to use the logarithm itself (anything else would seem bizarre), but this is where a mathematician would invoke the reasonings mentioned in the other arguments (continuity or additivity for independent events or whatever) to show that no other function could satisfy reasonable criteria on information content.>>
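
    A quick numerical check of that additivity property, as a Python sketch (nothing here goes beyond the -log2 rule in the quoted discussion; the two probabilities are arbitrary illustrative values):

        from math import log2

        def info_bits(p):
            # Shannon self-information of an event of probability p, in bits
            return -log2(p)

        p_x, p_y = 0.5, 0.25                    # two independent events
        print(info_bits(p_x) + info_bits(p_y))  # 1.0 + 2.0 = 3.0 bits
        print(info_bits(p_x * p_y))             # -log2(0.125) = 3.0 bits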

    I just hope this from different voices helps break down obvious and needless polarisation. In fact, my introduction to these matters came decades ago in T/comms, as a key extension of an electronics context.

    I frankly get the feeling that people unfamiliar with that context are suspicious of obvious algebra because of polarisation over the design inference.

    That’s why I pulled my older edn of Taub and Schilling and pointed to my online note, obviously in vain.

    KF

  389. 389
    JVL says:

    Kairosfocus: Working out gives the trivial answer that Dembski’s example is 20 bits short of threshold,

    Which he did not say. He could have easily made that point if that’s the point he wanted to make. Also, the sequence he used was much more than 20 bits shy of your 500-bit threshold.

    where it looks like he was working with 10^140 there, which is in this context near enough to 10^150, the root of 500 bits.

    Another point he did not make, even though there would be no reason he couldn’t have.

    Your rhetorical stunt is meant to undermine the empirical nature of the observation that FSCO/I is a strong EMPIRICAL sign of intelligently directed configuration as key cause, but fails by dodging facts on the table for decades.

    You are completely missing the point. I am NOT debating that notion; all I am doing is looking at Dr Dembski’s metric and your version, and wanting to compare them on some easy-to-compute examples to see if they agree. Why don’t we do that?

    And now we see a mathematically informed objector unwilling to acknowledge the algebra of -log2[probability*c*d], and apparently straining at the equivalent of substituting log2[c] –> C and log2[d] –> D. All of this is sadly telling.

    I’ll stick with Dr Dembski’s process of evaluating his own metric which he DID NOT break apart as you do.

    Regardless, that doesn’t stop us from comparing the two versions/interpretations. But you won’t do it!! Why is that? Let’s just focus on that question from now on.

    Why aren’t you willing to compare and contrast results from your version and Dr Dembski’s own version of his metric? What are you afraid of?

    Shall we start with a simple example just to make sure we both understand the mathematics involved and can check each other’s work?

  390. 390
    Lieutenant Commander Data says:

    Encoded information is gibberish without the key. DNA is gibberish without the decoder.

    Our brain is programmed to have a narrow focus on a very few things, as the eye has a narrow visible spectrum. This is a built-in bias. We can’t perceive reality as it is, but only as our “programmed” biases allow us.

  391. 391
    kairosfocus says:

    JVL, at this point you are being stubborn. There is not a snowball’s chance in a blast furnace that WmAD chose so unusual a formulation and logging base without understanding that it issues in bits as an info metric. The ONLY practical use for base 2 logs I have seen or worked with is for that; if you have one, kindly tell us ______ The log of products rule used to be what 3rd form Math, now it’s 4th form I think. Grade 9 or 10 I think. So, your narrative about WmAD does not pass the giggle test. My derivation is simply working through the algebra involved. KF

    PS, notice, once WmAD has worked out the first term as 10^120, we see:

    define pS as . . . the number of patterns [–> a constant therefore, being a particular “number”] for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T. [26] . . . . where M is the number of semiotic agents [S’s] that within a context of inquiry might also be witnessing events and N is the number [–> notice, “the number” he is giving CONSTANTS, specific values to be estimated] of opportunities for such events to happen . . . . [where also] computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations [–> the context is bits] that the known, observable universe could have performed throughout its entire multi-billion year history.[31] . . . [Then] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity [X] of T given [chance hypothesis] H [in bits] . . . as [the negative base-2 log of the conditional probability P(T|H) multiplied by the number of similar cases pS(T) and also by the maximum number of binary search-events in our observed universe 10^120]

    X = – log2[10^120 ·pS(T)·P(T|H)].

    We clearly see -log2[ . . . ] where pS(T) is a number value, a constant. 10^120 is an upper bound constant value, so we have – log2[P(T|H) * const c * const d]. Which is what I noted over a decade ago and quoted above to begin with. By the product rule, this is I[T] – [log2[c] + log2[d]], which we can freely render as I[T] – [C + D]. That is, information beyond a threshold, in bits.

    In that context, if WmAD works out a value that is 20 bits short of threshold, that is fairly plain to see. 1 in 10^150 is a bit short of 500 bits, and 1 in 10^140 ties to 465 bits.

    I now could freely go on about how yet another critic comes up short as failing to understand etc, but will not go there.
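
    PPS, for the onlooker, the two renderings above can be checked side by side in a few lines of Python. This is only a sketch on sample values; the probabilities below are illustrative, chosen to stay inside floating-point range, and are not taken from any monograph example:

        from math import log2

        P  = 1e-150     # stands in for P(T|H)
        pS = 1e20       # stands in for pS(T)
        c  = 1e120      # the upper bound on M*N

        direct     = -log2(c * pS * P)        # X evaluated as one expression
        threshold  = log2(c) + log2(pS)       # C + D, a constant in bits
        decomposed = -log2(P) - threshold     # I(T) - [C + D]
        print(direct, decomposed)             # both print ~33.22, identical

    On any such inputs the direct and decomposed forms agree to rounding, as the product rule for logs requires.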

  392. 392
    ET says:

    Alan Fox:

    If everything is designed, what’s the point of detecting it?

    Who says that everything was designed? No one in ID does.

  393. 393
    ET says:

    JVL, the metric of CSI has already demonstrated that living organisms were intelligently designed. There is, by far, more than 500 bits of CSI per organism. And that is over the UPB.

    And if you have questions about Dembski’s metric, then email the man himself.

  394. 394
    ET says:

    However, we also have tried and true design detection techniques which rely on our knowledge of cause-and-effect relationships. I have several decades of experience with this methodology. Whereas Dembski doesn’t have any.

    Dr Dembski thinks he found a way that’s better than that; when you don’t need to know anything about the origin of the thing in question.

    Again, he never makes that claim. And the methodology I use is also used when we don’t know anything about the origin of the thing in question.

    It’s as if you are proud to expose the fact that you too have ZERO investigative experience.

    Do archaeologists know how their proposed artifacts arose? No. That is what they are doing in the field: trying to distinguish artifacts from nature.

  395. 395
    ET says:

    Alan Fox:

    You are overstating the case for “Intelligent Design”.

    Your ignorance is not an argument, Alan. And when it comes to science, biology, ID and evolution, all you have is ignorance.

  396. 396
    JVL says:

    Kairosfocus: at this point you are being stubborn. There is not a snowball’s chance in a blast furnace that WmAD chose so unusual a formulation and logging base without understanding that it issues in bits as an info metric.

    Still, he did not make that statement in the monograph.

    Are we going to compare methods or not?

    The log of products rule used to be what 3rd form Math, now it’s 4th form I think. Grade 9 or 10 I think.

    I’m not saying you can’t break the log down like that; I’m saying it’s unnecessary for evaluating the metric.

    We clearly see -log2[ . . . ] where pS(T) is a number value, a constant. 10^120 is an upper bound constant value, so we have – log2[P(T|H) * const c * const d]. Which is what I noted over a decade ago and quoted above to begin with. By the product rule, this is I[T] – [log2[c] + log2[d]], which we can freely render as I[T] – [C + D]. That is, information beyond a threshold, in bits.

    Shall we compare methods on an example?

  397. 397
    JVL says:

    ET: And if you have questions about Dembski’s metric, then email the man himself.

    I don’t have a particular question; I just want to see how its results compare to those given by Kairosfocus‘s interpretation. He doesn’t want to play ball for some reason. I wonder why that is?

    Again, he never makes that claim.

    From the monograph’s abstract:

    Always in the background throughout this discussion is the fundamental question of Intelligent Design (ID): Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?

    So, clearly, he’s interested in exploring that possibility.

    From later on:

    In a moment, we’ll consider a form of specified complexity that is independent of the replicational resources associated with S’s context of inquiry and thus, in effect, independent of S’s context of inquiry period (thereby strengthening the elimination of chance and the inference to design).

    Further on again:

    To see that X is independent of S’s context of inquiry, it is enough to note two things: (1) there is never any need to consider replicational resources M·N that exceed 10^120 (say, by invoking inflationary cosmologies or quantum many-worlds) because to do so leads to a wholesale breakdown in statistical reasoning, and that’s something no one in his saner moments is prepared to do (for the details about the fallacy of inflating one’s replicational resources beyond the limits of the known, observable universe, see my article “The Chance of the Gaps”). (2) Even though X depends on S’s background knowledge through pS(T), and therefore appears still to retain a subjective element, the elimination of chance only requires a single semiotic agent who has discovered the pattern in an event that unmasks its non-chance nature.

    And later again:

    By contrast, to employ specified complexity to infer design is to take the view that objects, even if nothing is known about how they arose, can exhibit features that reliably signal the action of an intelligent cause. There are two competing approaches to design detection here that cut to the heart of what it is to know that something is designed. The one approach requires independent knowledge of the designer. The other says that the signature of design can be read without any such knowledge. Which approach is correct? I submit the latter, which, happily, is also consistent with employing specified complexity to infer design.

    Oh, by the way, in Addendum 1,

    Readers familiar with my books The Design Inference and No Free Lunch will note that my treatment of specification and specified complexity there (specificity, as such, does not appear explicitly in these books, though it is there implicitly) diverges from my treatment of these concepts in this paper. The changes in my account of these concepts here should be viewed as a simplification, clarification, extension, and refinement of my previous work, not as a radical departure from it. To see this, it will help to understand what prompted this new treatment of specification and specified complexity as well as why it remains in harmony with my past treatment.

    Oh, and there’s this as well: why he has replaced 10^-150 with 10^-120/pS(T) and why pS(T) is not a constant.

    Even so, in The Design Inference and No Free Lunch I suggested that a universal probability bound is impervious to any probabilistic resources that might be brought to bear against it. In those books, I offered 10^-150 as the only probability bound anybody would ever need to draw design inferences. On the other hand, in this paper I’m saying that 10^-120 serves that role, but that it needs to be adjusted by the specificational resources pS(T), thus essentially constraining P(T|H) not by 10^-120 but by 10^-120/pS(T). If you will, instead of a static universal probability bound of 10^-150, we now have a dynamic one of 10^-120/pS(T) that varies with the specificational resources pS(T) and thus with the descriptive complexity of T. For many design inferences that come up in practice, it seems safe to assume that pS(T) will not exceed 10^30 (for instance, in section 7 a very generous estimate for the descriptive complexity of the bacterial flagellum came out to 10^20). Thus, as a rule of thumb, 10^-120/10^30 = 10^-150 can still be taken as a reasonable (static) universal probability bound. At any rate, for patterns qua targets T that satisfy P(T|H) ≤ 10^-150 and that at first blush appear to have low descriptive complexity (if only because our natural language enables us to describe them simply), the burden is on the design critic to show either that the chance hypothesis H is not applicable or that pS(T) is much greater than previously suspected. Getting too lucky is never a good explanation, scientific or otherwise. Thus, for practical purposes, taking 10^-150 as a universal probability bound still works. If you will, the number stays the same, but the rationale for it has changed slightly.
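
    That last passage is easy to mirror numerically. Here is a Python sketch; the only number taken from the quote is the rule-of-thumb 10^30 for pS(T), everything else is just the stated arithmetic:

        from math import log2

        def dynamic_bound(pS_T):
            # the adjusted bound from the passage above: 10^-120 / pS(T)
            return 1e-120 / pS_T

        print(dynamic_bound(1e30))           # 1e-150, the familiar static bound
        print(-log2(dynamic_bound(1e30)))    # ~498.3 bits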

  398. 398
    ET says:

    JVL has obvious reading comprehension issues. He is attributing things to Dembski that Dembski never claims. Dembski NEVER said his method is superior to how design is currently detected.

    Again, archaeologists learn about the designers by studying the artifacts and all relevant evidence. Archaeologists do not require independent knowledge of the designers.

    Seeing that JVL is being dishonest about what Dembski says, it is clear that he isn’t interested in an honest discussion.

  399. 399
    ET says:

    In “Specification” Dembski uses a 10-digit code. TEN. And it came out as specified. What do you think a protein of 100 AA will come out as?

  400. 400
    JVL says:

    ET: In “Specification” Dembski uses a 10-digit code. TEN. And it came out as specified.

    Do you mean 1, 1, 2, 3, 5, 8, 13, 21? That’s 8 numbers, but, yes, he treats it as 10 digits.

    He said IF pS(T) were on the order of 10^3 then chance could be eliminated. But he didn’t actually say it was on that order for that particular example. But, I get the point, especially because of his discussion in the following paragraph. Quite a few probabilistic arguments about design, wouldn’t you say?

    What do you think a protein of 100 AA will come out as?

    Depends on pS(T) doesn’t it? IF you want to use his ‘refined and extended’ work from 2005.

    Again, archaeologists learn about the designers by studying the artifacts and all relevant evidence.

    Knowledge of the skills and abilities of the humans around at the time is part of the relevant evidence. If an artefact were found that was way beyond any skills and known abilities of the pertinent human civilisations then it would be time to reconsider . . . as one would expect.

    Archaeologists do not require independent knowledge of the designers.

    They certainly do if they want to conclude who they think created the artefact in question.

    Dembski NEVER said his method is superior to how design is currently detected.

    But, he did say:

    Readers familiar with my books The Design Inference and No Free Lunch will note that my treatment of specification and specified complexity there (specificity, as such, does not appear explicitly in these books, though it is there implicitly) diverges from my treatment of these concepts in this paper. The changes in my account of these concepts here should be viewed as a simplification, clarification, extension, and refinement of my previous work, not as a radical departure from it.

    Sounds like it’s ‘better’ to me. Clarification: more straight-forward. Extension: applicable to more situations. Refinement: more specific and detailed.

  401. 401
    relatd says:

    AF at 384,

    Here is the difference between an atheist and a real scientist.

    Richard Dawkins: Living things only look designed. They are not designed.

    ID: Life is designed. It contains codes that direct its development. Codes can only come from an intelligence. Which raises the question: Who is this intelligence? It can’t be dead chemicals springing to life one day for no reason. And human beings who were designed by nobody. Like your computer, someone designed and built it, not nothing/nobody.

  402. 402
    ET says:

    You are just clueless. Dembski NEVER compares his methodology to the tried-and-true techniques currently used.

    We “know” that humans were capable of building Stonehenge only because Stonehenge exists. So, again, you prove that you are clueless. Archaeologists do not require independent knowledge of the designers. That is a fact. To deny that proves your dishonesty.

    For a 100 AA protein the pS(T) would be gleaned from the sequence. And there isn’t any evidence that blind and mindless processes can do it.

  403. 403
    relatd says:

    ET at 392.

    Alan Fox plays the fool. All living things are designed. All LIVING things. Period. Alan is not ignorant, he plays games.

  404. 404
    chuckdarwin says:

    Relatd/401
    So, this is the definition of “real science?”:

    ID: Life is designed. It contains codes that direct its development. Codes can only come from an intelligence. Which raises the question: Who is this intelligence? It can’t be dead chemicals springing to life one day for no reason. And human beings who were designed by nobody. Like your computer, someone designed and built it, not nothing/nobody.

    A veritable Copernican Revolution…..

  405. 405
    Alan Fox says:

    KF:

    AF, more silly talk points. We all know that there is a school of thought that for 160 years has laboured to expel inference to design from complex organisation from the Western mind.

    Nonsense, you are no mindreader. You imagine stuff. Then you write singular prose remarkable only for its obscurity. The quoted sentence is a typical example, lacking any meat in the sandwich.

    Its comeuppance started in the 1950’s with the detection of fine tuning of the cosmos and with recognition that there was and is coded algorithmic information in D/RNA.

    Here we go again. I guess there is a nugget in there about DNA and RNA that illustrates your child-like incomprehension of the biochemistry involved.

    By the 1970’s Orgel and Wicken brought the matter to focus through recognising FSCO/I. Thaxton et al responded in the 80’s and from the 90’s the design inference, associated theory and a supportive movement grew.

    Orgel came up with the phrase “specified complexity” as a qualitative property of living systems. Nothing to do with your nonsense.

    Your rhetorical stunt…

    I’m exchanging thoughts, as one interested layperson to another, on some obscure blog. Are you totally incapable of civil exchange? These are not Earth-shattering events; I’m just entertaining myself as time and curiosity allow.

    …is meant to undermine the empirical nature of the observation that FSCO/I is a strong EMPIRICAL sign of intelligently directed configuration as key cause, but fails by dodging facts on the table for decades.

    Nobody has a clue what your “FSCO/I” is yet despite JVL’s remarkable patience in getting you to make some sense.

    And now we see a mathematically informed objector unwilling to acknowledge the algebra of -log2[probability*c*d], and apparently straining at the equivalent of substituting log2[c] –> C and log2[d] –> D. All of this is sadly telling. KF

    What is sadly telling is that once we establish what trivial mathematical manipulations are or are not involved in telling us whether something is deigned [I deign to leave my Freudian slip], I predict there will be a further fruitless discussion on what numbers go into the equation or formula, should one eventually emerge from the fog of words.

  406. 406
    Alan Fox says:

    For a 100 AA protein the ps(T) would be gleaned from the sequence

    Keefe and Szostak showed long ago that function lurks much more widely than one-in-a-gadzillion. Dembski rules out reiterative change and demands everything happens all at once. The model does not fit reality.

    Yes, folks, I know it is a waste of time to respond to Joe. It’s for the lurker!

  407. 407
    JVL says:

    ET: Dembski NEVER compares his methodology to the tried-and-true techniques currently used.

    I’ll take your word on it. I’m just repeating what he said in his 2005 monograph.

    We “know” that humans were capable of building Stonehenge only because Stonehenge exists.

    There are a lot of other standing stone circles in the British Isles and Brittany.

    Archaeologists do not require independent knowledge of the designers. That is a fact. To deny that proves your dishonesty.

    I didn’t say they required it; I said they look at all the evidence including independent information about the humans around at the time and where they lived, what they ate, sometimes the tools they used, sometimes where they were buried.

    For a 100 AA protein the pS(T) would be gleaned from the sequence

    Dr Dembski explains how to ‘glean’ pS(T). And it involves knowing the ‘sample space’.

  408. 408
    kairosfocus says:

    JVL, we both know the algebra is correct. I simply moved from the probability space to the information space. This exposes how the posing on math etc is a rhetorical front. KF

    PS, at 293 I put up several examples. https://uncommondescent.com/evolution/at-sci-news-moths-produce-ultrasonic-defensive-sounds-to-fend-off-bat-predators/#comment-762545

  409. 409
    JVL says:

    Kairosfocus: we both know the algebra is correct.

    Fine. Shall we compare results on a simple example and then escalate things a bit?

  410. 410
    relatd says:

    CD at 404,

    It sure is. Not that evolution crap. ‘Uh, yeah. You see, dead chemicals came to life and produced life and it just zigged and zagged for millions of years until we came around… from extremely primitive earlier versions of not really men. Here, look. I got pictures.’

    This is me when I was a fish.

    This is me when I looked like a Lemur.

    And this is me when I looked like an ape.

  411. 411
    relatd says:

    AF at 405,

    I’m not enjoying your act. Parts get repeated over and over. Alan Fox is smart except when he’s not, or doesn’t want to be.

    You have no future in stand-up comedy or in feigning frustration.

  412. 412
    kairosfocus says:

    F/N: As a courtesy to the onlooker:

    293
    kairosfocus
    August 7, 2022 at 5:06 am

    F/N: The point of the above is, it is highly reasonable to use a threshold metric for the functional, configuration-based information that identifies the span beyond which it is highly reasonable to draw the inference, design.

    First, our practical cosmos is the sol system, 10^57 atoms, so a 500-bit threshold:

    FSCO/I, X_sol = FSB – 500, in functionally specific bits

    Likewise for the observable cosmos,

    X_cos = FSB – 1,000, functionally specific bits

    And yes this metric can give a bits short of threshold negative value. Using my simple F*S*B measure, dummy variables F and S can be 0/1 based on observation of functionality or specificity. For a 900 base mRNA specifying a 300 AA protein, we get

    X_sol = [900 x 2 x 1 x 1] – 500 = 1300 functionally specific bits.

    Which, is comfortably beyond, so redundancy is unlikely to make a difference.

    Contrast a typical value for 1800 tossed coins

    X_sol = [1800 x 0 x 0] – 500 = – 500 FSBs, 500 bits short.

    If the coins expressed ASCII code in correct English

    X_sol = [1800 x 1 x 1] – 500 = 1300 FSBs beyond threshold, so comfortably, designed.

    [We routinely see the equivalent in text in this thread and no one imagines the text is by blind watchmaker action.]

    A more sophisticated value using say the Durston et al metric would reduce the excess due to redundancy but with that sort of margin, there is no practical difference.

    Where, in the cell, for first life just for the genome [leaving out a world of knowledge of polymer chemistry and computer coding etc] we have 100 – 1,000 kbases. 100,000 bases is 200,000 bits carrying capacity, and again there is no plausible way to get that below 1,000 bits off redundancy.

    Life, credibly, is designed.

    KF
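
    PS, the three worked cases in the quote reduce to a few lines of Python; a sketch of the simple F*S*B form only, with F and S as the 0/1 dummy variables described above, and nothing more:

        def x_sol(carrying_bits, F, S, threshold=500):
            # simple FSB threshold metric: X = B * F * S - threshold, in bits;
            # F and S are 0/1 flags for observed functionality and specificity
            return carrying_bits * F * S - threshold

        print(x_sol(900 * 2, 1, 1))   # 900-base mRNA: 1300 bits beyond threshold
        print(x_sol(1800, 0, 0))      # typical tossed coins: -500, i.e. short
        print(x_sol(1800, 1, 1))      # coins spelling ASCII English: 1300 beyond

    The more sophisticated redundancy adjustments mentioned (Durston et al) are deliberately left out of the sketch.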

  413. 413
    JVL says:

    Kairosfocus:

    Shall we compare metric interpretations? Yes or no?

  414. 414
    kairosfocus says:

    JVL, fallacy of the loaded question. We both know that I am carrying out the – log2[ . . . ] unary operation on a probability expression right there in Dembski’s X = eqn, and stating its standard result, an information value in bits. As it is applied to three factors, it is info beyond a threshold (or short of it by so much). You have adequate examples to highlight the material point, that FSCO/I is a reliable sign of design as key cause, where there is copious FSCO/I in cell based life. We have reason to hold that cell based life and body plans are designed. KF

  415. 415
    JVL says:

    Kairosfocus: fallacy of the loaded question. We both know that I am carrying out the – log2[ . . . ] unary operation on a probability expression right there in Dembski’s X = eqn, and stating its standard result, an information value in bits.

    Then there’s no reason for you not to take up the challenge!!

    You have adequate examples to highlight the material point, that FSCO/I is a reliable sign of design as key cause, where there is copious FSCO/I in cell based life.

    But I think Dr Dembski was working on something different, and that would be the detection of specified complexity. That’s what he said he was doing and that’s the contention I’d like to test using his own formulation and way of working them out.

    Shall we compare and contrast results? If they turn out to be the same then that’s okay.

  416. 416
    kairosfocus says:

    JVL, again, we both know the algebra is correct. Further, we both know that Dembski pointed out that for life the specification is cashed out in functionality. Notice, [a: functionally] specified, complex [b:organisation and/or] associated information. A says, context is life or other contexts where functionality is key, B that information can be implicit in organisation. KF

  417. 417
    JVL says:

    Kairosfocus: again, we both know the algebra is correct

    I didn’t say the algebra was incorrect. It’s your interpretation of some of the pieces as constants that isn’t clear.

    Anyway, he came up with a metric for seeing if there was enough specified complexity in an object or event to conclude that it’s designed. You changed his metric. I’d like to compare his version and your version to see if they give the same results. Are you willing to do the comparison? Yes or no?

  418. 418
    kairosfocus says:

    JVL, what part of Dembski’s specification of the two values as numbers — I highlighted yesterday in the clip — is so unclear it requires “interpretation”? _____ What part of giving one as M*N < 10^120 is unclear? ______ What part of “define pS as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T” rather than some function on a variable parameter is doubtful? _____ In your clip on flagellar proteins, I read “It follows that –log2[10^120 ·pS(T)·P(T|H)] > 1 if and only if P(T|H) < 1/2 × 10^-140, where H, as we noted in section 6, is an evolutionary chance hypothesis that takes into account Darwinian and other material mechanisms and T, conceived not as a pattern but as an event” . . . which sets 10^140 as upper bound, less conservative than 500 bits worth, 3.27*10^150. So, no, he was discussing numbers and bounds or thresholds, not odd functions that can run off anywhere to any weird value as one pleases. Oddly, even if pS(T) were some weird function, it would still be part of a threshold, by the algebra; the issue then would be to find a bound, a constant, your latest word to pounce on rhetorically. But as it turns out we are not forced to guess such, as we know it is an upper bound on observability, a target zone in a wider space of possibilities W; familiar from statistical thermodynamics. It is easy to see that for sol system or observed cosmos 2^500 to 2^1,000 is a generous upper bound, every atom of the 10^57 to 10^80 being an observer of 500 or 1,000 coins each, flipped at 10^14 per second for 10^17 s. So, whatever goes into the threshold, it is bounded by the search resources of sol system or observed cosmos. The thresholds given all the way up in 293 bound any reasonable value. All the huffing and puffing hyperskepticism fails. But at least you acknowledge explicitly that the algebra is correct. KF

    PS, you have calculations on the bounds, again cited yesterday. Can you tell me how, for 10^57 or 10^80 atoms each observing bit operations on 500 or 1,000 one-bit registers [“coins”] every 10^-14 s, we do not bound the scope of search for 10^17 s by 10^88 to 10^111 as an over-generous upper limit? I find the hyperskepticism unjustified.
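
    PPS, those bounds are simple to verify; a Python sketch using only the figures just given:

        # over-generous search bounds: atoms x inspections/second x seconds
        sol    = 10**57 * 10**14 * 10**17    # = 10^88 state inspections
        cosmos = 10**80 * 10**14 * 10**17    # = 10^111
        print(sol / 2.0**500)                # ~3.1e-63 of a 500-bit config space
        print(cosmos / 2.0**1000)            # ~9.3e-191 of a 1,000-bit space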

  419. 419
    kairosfocus says:

    AF, 405:

    [KF:] AF, more silly talk points. We all know that there is a school of thought that for 160 years has laboured to expel inference to design from complex organisation from the Western mind.

    [AF:] Nonsense, you are no mindreader. You imagine stuff. Then you write singular prose remarkable only for its obscurity. The quoted sentence is an example typical for lack of any meat in the sandwich.

    We both know just what movement has been held as making it possible to be an intellectually fulfilled atheist. Which state is demonstrably impossible due to inherent incoherence of the implied evolutionary materialistic atheism.

    You are also lying and confessing by projection regarding want of substance. The self referentially incoherent evolutionary materialistic scientism of our day is not only public but notorious.

    Your stunt is so bad it fully deserves to be corrected by reference to Lewontin’s cat out of the bag moment, suitably marked up — a moment you are fully familiar with:

    [Lewontin:] . . . to put a correct [–> Just who here presumes to corner the market on truth and so demands authority to impose?] view of the universe into people’s heads

    [==> as in, “we” the radically secularist elites have cornered the market on truth, warrant and knowledge, making “our” “consensus” the yardstick of truth . . . where of course “view” is patently short for WORLDVIEW . . . and linked cultural agenda . . . ]

    we must first get an incorrect view out [–> as in, if you disagree with “us” of the secularist elite you are wrong, irrational and so dangerous you must be stopped, even at the price of manipulative indoctrination of hoi polloi] . . . the problem is to get them [= hoi polloi] to reject irrational and supernatural explanations of the world [–> “explanations of the world” is yet another synonym for WORLDVIEWS; the despised “demon[ic]” “supernatural” being of course an index of animus towards ethical theism and particularly the Judaeo-Christian faith tradition], the demons that exist only in their imaginations,

    [ –> as in, to think in terms of ethical theism is to be delusional, justifying “our” elitist and establishment-controlling interventions of power to “fix” the widespread mental disease]

    and to accept a social and intellectual apparatus, Science, as the only begetter of truth

    [–> NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting]

    . . . . To Sagan, as to all but a few other scientists [–> “we” are the dominant elites], it is self-evident

    [–> actually, science and its knowledge claims are plainly not immediately and necessarily true on pain of absurdity, to one who understands them; this is another logical error, begging the question, confused for real self-evidence; whereby a claim shows itself not just true but true on pain of patent absurdity if one tries to deny it . . . and in fact it is evolutionary materialism that is readily shown to be self-refuting]

    that the practices of science provide the surest method of putting us in contact with physical reality [–> = all of reality to the evolutionary materialist], and that, in contrast, the demon-haunted world rests on a set of beliefs and behaviors that fail every reasonable test [–> i.e. an assertion that tellingly reveals a hostile mindset, not a warranted claim] . . . .

    It is not that the methods and institutions of science somehow compel us [= the evo-mat establishment] to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [–> another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [–> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door . . . [–> irreconcilable hostility to ethical theism, already caricatured as believing delusionally in imaginary demons]. [Lewontin, Billions and billions of Demons, NYRB Jan 1997, cf. here. And, if you imagine this is “quote-mined,” I invite you to read the fuller annotated citation here.]

    As for trying to jump on me over claimed errors of style, that is now obviously attack the man, dodge the substance.

    Indeed, we have every right to use cognitive dissonance psychology to interpret such stunts as confession by projection.

    KF

  420. 420
    JVL says:

    Kairosfocus:

    I understand Dr Dembski’s mathematics quite well, thank you. You replace log2(pS(T)) with a constant and log2(P(T|H)) with a different function I(T). Since it’s not really clear what those replacements are, I thought a test comparing the result using your formulation and Dr Dembski’s original formulation would be interesting. If they come to the same conclusion, fine. If they don’t (for some particular case) then it would be enlightening to discuss that. I think.

    Shall we start by looking at a simple case and then try to ratchet things up? Why not have a go?

  421. 421
    kairosfocus says:

    JVL, the distraction continues. WmAD first found an upper bound for his M*N term, 10^120, citing Seth Lloyd on how many bit ops are feasible for the observed cosmos. pS(T) is about “define pS as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T.” That is, he is effectively bounding the number of targets in the wider space W. So, finding an upper bound for that is reasonable. Next, you now acknowledge that –log2[prob] yields an info metric, where the fact that Dembski formulates with that operation points to an intention to reduce to info in bits. –log2[prob * factor c * factor d], by the algebra, is Info[T] – {log2 c + log2 d} –> info beyond a threshold. I(T) is not a different function, but the value of –log2[P(T|H)], an information value in bits to be evaluated case by case. You are back to denying the algebra; kindly see Taub and Schilling, as you obviously have no regard for my own background in info and t/comms theory. Next, log2 c = log2[10^120] = 398 bits. For log2 d we want a bits upper bound similar to his M*N –> 10^120. He uses a case where the expression requires “P(T|H) < 1/2 × 10^-140 iff X > 1.” Substitute and use equality as the border case: –log2[10^120 ·pS(T)·{1/2*10^-140}] = 1. Now break it up using the neg log operation: 1 = 466.07 – 398.63 – x, i.e. 1 = 67.44 – x, so x = 66.44. (Notice, well within my 100.) What units? We can subtract guavas from guavas, not from mangoes or coconuts, so x is in bits. x is effectively log2[pS(T)], so that gives pS(T) = 2^66.44, about 10^20. We are back to a threshold of 1 in 10^140, as expected given WmAD’s IFF. This shows the validity of the thresholds of spaces for 500 or 1000 bits. Your “it’s not really clear” is just another way to try to take back your concession on the algebra, which algebra is manifest. As for “what about simple examples,” they have been on the table with even more generous thresholds than WmAD gave. There is no need to drag out this sidetrack further. The message is clear: for any reasonable threshold for search capability of sol system or observed cosmos, the information content of cells and body plans is so far beyond it that blind causes have no traction. Life, body plans and the taxonomical tree of life are replete with strong signs of design due to their functionally cashed out complex specified information, explicit and implicit in organisation. KF
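
    PS, the border-case arithmetic above can be checked in a few lines of Python; the numbers are the ones just given, nothing else is assumed:

        from math import log2

        # border case: 1 = -log2(10^120 * pS(T) * (1/2 * 10^-140));
        # solve for x = log2(pS(T))
        lhs = -log2(0.5 * 1e-140)    # 466.07 bits
        C   = log2(1e120)            # 398.63 bits
        x   = lhs - C - 1            # 66.44 bits
        print(x, 2**x)               # pS(T) = 2^66.44, i.e. about 1e20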

  422. 422
    ET says:

    Alan Fox:

    Nobody has a clue what your “FSCO/I” is yet despite JVL’s remarkable patience in getting you to make some sense.

    You and JVL are willfully ignorant and on an agenda of obfuscation.

    What is sadly telling is that once we establish what trivial mathematical manipulations are or are not involved in telling us whether something is deigned [I deign to leave my Freudian slip], I predict there will be a further fruitless discussion on what numbers go into the equation or formula, should one eventually emerge from the fog of words.

    Dude, what is trivial is your understanding of ID, science and evolution.

    It remains that you and yours do NOT have a scientific explanation for our existence. You have nothing but denial and promissory notes.

  423. 423
    ET says:

    Alan Fox:

    Keefe and Szostak showed long ago that function lurks much more widely than one-in-a-gadzillion.

    They did not demonstrate that blind and mindless processes produced any of the proteins used.

    Dembski rules out reiterative change and demands everything happens all at once.

    Liar. You keep making these blatantly false statements. And you think we are just going to sit here and accept it. Pound sand.

    If you are going to spew BS about ID on an ID site, you had better bring the evidence. Your cowardly bloviations mean nothing here.

    The model does not fit reality.

    The claim that life’s diversity arose by means of evolution by means of blind and mindless processes, such as natural selection and drift, does not fit reality.

    Alan is in such a tizzy over all things Intelligent Design. Yet he doesn’t have a scientific alternative to ID. Shoot down all of the straw men you want, Alan. ID isn’t going anywhere until someone steps up and demonstrates that blind and mindless processes can actually do the things you and yours claim.

  424. 424
    ET says:

    JVL:

    I’m just repeating what he said in his 2005 monograph.

    I know. You clearly don’t understand it.

    We “know” that humans were capable of building Stonehenge only because Stonehenge exists!

    There are a lot of other standing stone circles in the British Isles and Brittany.

    And? We know humans didit cuz humans were around? We know they had the capability to do it cuz the structures exist? Thank you for proving my point.

    I said they look at all the evidence including independent information about the humans around at the time and where they lived, what they ate, sometimes the tools they used, sometimes where they were buried.

    And ASSUME they didit cuz there they are!

    Dr Dembski explains how to ‘glean’ pS(T). And it involves knowing the ‘sample space’.

    Right. That math is easy. How many different combinations are there for a 100 aa polypeptide?

    If you can’t do that then forget about the other math, JVL.

  425. 425
    ET says:

    Earth to Alan Fox-

    Keefe and Szostak showed long ago that function lurks much more widely than one-in-a-gadzillion.

    I don’t know how many zeros are in a gadzillion, but this is what Keefe and Szostak said:

    We therefore estimate that roughly 1 in 10^11 of all random-sequence proteins have ATP-binding activity comparable to the proteins isolated in this study.

    1 in 10^11! And those random-sequence proteins did not arise via blind and mindless processes.
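
    For scale, that reported rate converts to bits in one line of Python; the 1 in 10^11 figure is theirs, the conversion is just a negative base-2 log:

        from math import log2

        hit_rate = 1e-11             # the reported estimate quoted above
        print(-log2(hit_rate))       # ~36.5 bits to specify one such ATP binder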

  426. 426
    JVL says:

    Kairosfocus: the distraction continues.

    How is asking if you’d be willing to work out some examples using your approach distracting? I don’t see the problem, with any numerical formulation, in asking to see it ‘in action’.

    Plus you keep repeating yourself, which is completely pointless at this point.

    So, let’s just stick to yes or no queries:

    Will you show your working, using your method, for some simple examples? Yes or no?

  427. 427
    JVL says:

    ET: Right. That math is easy. How many different combinations are there for a 100 aa polypeptide?

    As I’ve been saying: I think it’s best to start with some simpler examples and make sure everyone is following along and that the results make sense.

    If you can’t do that then forget about the other math, JVL.

    I think I can do that.

  428. 428
    kairosfocus says:

    JVL, you have had examples and a use of WmAD’s case on (was it?) the flagellum. You are still talking as if they don’t exist. That tells us you are simply emptily doubling down. For record, from the outset WmAD used -log2[prob], which is instantly recognisable to one who has done or used info theory as an info metric in bits. That is the only fairly common use of base 2 logs, to yield bits. Next, by the product rule, once boundable factors c and d are added as products, we have an info-in-bits-beyond-a-threshold metric, per the algebra of logs. Thus, once we have reasonable bounds, and we do with 500 – 1,000 bit thresholds [cf how 10^57 to 10^80 atoms each observing 500 – 1,000 1-bit registers aka coins, at 10^14/s for 10^17 s, can only survey a negligible fraction of config states], then we may freely work with info beyond a threshold. We only need to factor in info carrying capacity vs redundancy effects of codes, as Durston et al did. WmAD apparently picked an example that was 20 bits short of threshold. However, for many cases we are well beyond it, so redundancy makes no practical difference. Already for an average 300 AA protein, we are well beyond. FSCO/I — a relevant subset and context of CSI since Orgel and Wicken in the 70’s — is a good sign of design. This you have resisted and sidestepped for 100’s of comments, indicating that you have no substantial answer but find it unacceptable. Our ability to analyse, warrant adequately and know is not bound by your unwarranted resistance, sidesteps and side tracks. But this thread has clearly shown that the balance on merits supports the use of FSCO/I. Life, from cell to body plans including our own, shows strong signs of design. KF

  429. 429
    kairosfocus says:

    ET, interaction with ATP is not a good proxy for the myriads of proteins carrying out configuration-specific function. A good sign of this is the exceedingly precise care with which the cell assembles and folds proteins. KF

  430. 430
    ET says:

    Yes, KF. That Alan Fox calls on that experiment and results exposes the sheer desperation of his position.

  431. 431
    ET says:

    Right. JVL balks when given a real-world, biological example. An example that he cannot control and manipulate.

    That math [sample space] is easy. How many different combinations are there for a 100 aa polypeptide?

    *crickets*

    If you can’t do that then forget about the other math, JVL.

    I think I can do that.

    As predicted. Thank you.

  432. 432
    kairosfocus says:

    ET, ignoring the oddballs and assuming away chirality issues and a lot of other chem possibilities, 20^100 = 1.268*10^130. KF
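
    PS, easy to verify; a two-line Python check of that count and its equivalent in bits:

        from math import log10, log2

        space = 20**100              # exact integer arithmetic in Python
        print(log10(space))          # ~130.1, i.e. about 1.27 * 10^130 sequences
        print(log2(space))           # ~432.2 bits of raw carrying capacity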

  433. 433
    ET says:

    So, we have a massive sample space. Next, we need that protein, and we need to see how variable it is. Then we will know how many targets there are in that sample space.

    Trying to hit 1 in 100,000,000,000 (Keefe and Szostak, for 80 aa with minimal functionality) should be enough for anyone to see the futility of evolution by means of blind and mindless processes. Just seeing what DNA-based life requires to be existing and functioning from the start should be enough for rational people to understand that nature didn’t do it.

  434. 434
    JVL says:

    ET: As predicted. Thank you.

    I said I think I can do that, how is that ‘balking’?

    Are you even paying attention?

    Also, please note, I am only talking about evaluating Dr Dembski’s metric.

  435. 435
    JVL says:

    Kairosfocus:

    Will you show your working, using your method, for some simple examples? Yes or no?

  436. 436
    kairosfocus says:

    JVL,

    Further doubling down. First,

    421
    kairosfocus
    August 11, 2022 at 5:25 am

    JVL, the distraction continues. WmAD first found an upper bound for his M*N term, 10^120, citing Seth Lloyd on how many bit ops are feasible for the observed cosmos. pS(T) is about “define pS as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T.” That is, he is effectively bounding the number of targets in the wider space W. So, finding an upper bound for that is reasonable. Next, you now acknowledge that –log2[prob] yields an info metric, where the fact that Dembski formulates with that operation points to an intention to reduce to info in bits. –log2[prob * factor c * factor d], by the algebra, is Info[T] – {log2 c + log2 d} –> info beyond a threshold. I(T) is not a different function, but the value of –log2[P(T|H)], an information value in bits to be evaluated case by case. You are back to denying the algebra; kindly see Taub and Schilling, as you obviously have no regard for my own background in info and t/comms theory. Next, log2 c = log2[10^120] = 398 bits. For log2 d we want a bits upper bound similar to his M*N –> 10^120. He uses a case where the expression requires “P(T|H) < 1/2 × 10^-140 iff X > 1.” Substitute and use equality as the border case: –log2[10^120 ·pS(T)·{1/2*10^-140}] = 1. Now break it up using the neg log operation: 1 = 466.07 – 398.63 – x, i.e. 1 = 67.44 – x, so x = 66.44. (Notice, well within my 100.) What units? We can subtract guavas from guavas, not from mangoes or coconuts, so x is in bits. x is effectively log2[pS(T)], so that gives pS(T) = 2^66.44, about 10^20. We are back to a threshold of 1 in 10^140, as expected given WmAD’s IFF. This shows the validity of the thresholds of spaces for 500 or 1000 bits. Your “it’s not really clear” is just another way to try to take back your concession on the algebra, which algebra is manifest. As for “what about simple examples,” they have been on the table with even more generous thresholds than WmAD gave. There is no need to drag out this sidetrack further. The message is clear: for any reasonable threshold for search capability of sol system or observed cosmos, the information content of cells and body plans is so far beyond it that blind causes have no traction. Life, body plans and the taxonomical tree of life are replete with strong signs of design due to their functionally cashed out complex specified information, explicit and implicit in organisation. KF

    Then, as you were shown and reminded:

    293
    kairosfocus
    August 7, 2022 at 5:06 am

    F/N: The point of the above is, it is highly reasonable to use a threshold metric for the functional, configuration-based information that identifies the span beyond which it is highly reasonable to draw the inference, design.

    First, our practical cosmos is the sol system, 10^57 atoms, so a 500-bit threshold:

    FSCO/I, X_sol = FSB – 500, in functionally specific bits

    Likewise for the observable cosmos,

    X_cos = FSB – 1,000, functionally specific bits

    And yes this metric can give a bits short of threshold negative value. Using my simple F*S*B measure, dummy variables F and S can be 0/1 based on observation of functionality or specificity. For a 900 base mRNA specifying a 300 AA protein, we get

    X_sol = [900 x 2 x 1 x 1] – 500 = 1300 functionally specific bits.

    Which, is comfortably beyond, so redundancy is unlikely to make a difference.

    Contrast a typical value for 1800 tossed coins

    X_sol = [1800 x 0 x 0] – 500 = – 500 FSBs, 500 bits short.

    If the coins expressed ASCII code in correct English

    X_sol = [1800 x 1 x 1] – 500 = 1300 FSBs beyond threshold, so comfortably, designed.

    [We routinely see the equivalent in text in this thread and no one imagines the text is by blind watchmaker action.]

    A more sophisticated value using say the Durston et al metric would reduce the excess due to redundancy but with that sort of margin, there is no practical difference.

    Where, in the cell, for first life just for the genome [leaving out a world of knowledge of polymer chemistry and computer coding etc] we have 100 – 1,000 kbases. 100,000 bases is 200,000 bits carrying capacity, and again there is no plausible way to get that below 1,000 bits off redundancy.

    Life, credibly, is designed.

    KF

    PS, There has already been in the thread citation from Dembski on the definition of CSI and how in cell based life it is cashed out on function. I note, the concept, as opposed to Dembski’s quantitative metric (which boils down to functionally specific info beyond a threshold), traces to Orgel and Wicken in the 70’s. This was noted by Thaxton et al in the 80’s, and Dembski, a second-generation design theorist, set out models starting in the 90’s.

    Your empty doubling down is groundless and a strawman tactic that, beyond a point, is rhetorical harassment. There is more than enough on the table to show why the design inference on FSCO/I is warranted. This implies that the world of life, credibly, is full of signs of design from the cell to body plans to our own constitution.

    KF

  437. 437
    kairosfocus says:

    PPS, also side-stepped and ignored:

    >>260
    kairosfocus
    August 6, 2022 at 4:45 am

    PPPS, as a further point, Wikipedia’s admissions on the Mandelbrot
    set and Kolmogorov Complexity:

    This image illustrates part of the Mandelbrot set fractal. Simply
    storing the 24-bit color of each pixel in this image would require 23
    million bytes, but a small computer program can reproduce these 23 MB
    using the definition of the Mandelbrot set and the coordinates of the
    corners of the image. Thus, the Kolmogorov complexity of the raw file
    encoding this bitmap is much less than 23 MB in any pragmatic model of
    computation. PNG’s general-purpose image compression only reduces it to
    1.6 MB, smaller than the raw data but much larger than the Kolmogorov
    complexity.

    This is of course first a description of a deterministic but chaotic
    system where at the border zone we have anything but a well behaved
    simple “fitness landscape” so to speak. Instead, infinite complexity, a
    rugged landscape and isolated zones in the set with out of it just next
    door . . . the colours etc commonly seen are used to describe bands of
    escape from the set. The issues raised in other threads which AF
    dismisses are real.
    Further to which, let me now augment the text showing what is just
    next door but is not being drawn out:

    In algorithmic information theory (a subfield of computer science
    and mathematics), the Kolmogorov complexity of an object, such as a
    piece of text, is the length of a shortest computer program (in a
    predetermined programming language) that produces the object as output.
    It is a measure of the computational resources needed to specify the
    object, and is also known as algorithmic complexity,
    Solomonoff–Kolmogorov–Chaitin complexity, program-size complexity,
    descriptive complexity, or algorithmic entropy. It is named after
    Andrey Kolmogorov, who first published on the subject in 1963.[1][2] .
    . . .
    Consider the following two strings of 32 lowercase letters and
    digits:
    abababababababababababababababab [–> simple repeating block
    similar to a crystal], and
    4c1j5b2p0cv4w1x8rx2y39umgw5q85s7 [–> plausibly random gibberish
    similar to a random tar]
    [–> add here, this is a string in English using ASCII
    characters and is a case of FSCO/I]
    The first string has a short English-language description, namely
    “write ab 16 times”, which consists of 17 characters. The second one
    has no obvious simple description (using the same character set) other
    than writing down the string itself, i.e., “write
    4c1j5b2p0cv4w1x8rx2y39umgw5q85s7” which has 38 characters. [–> a
    good working definition of plausible randomness] Hence the operation of
    writing the first string can be said to have “less complexity” than
    writing the second. [–> For the third there is neither simple
    repetition nor plausibly random gibberish but it can readily and
    detachably be specified as ASCI coded text in English, leading to
    issues of specified complexity associated with definable, observable
    function and degree of complexity such that search challenge is
    material. Here, for 32 characters there are 4.56 * 10^192
    possibilities, well beyond 500 bits of conplexity.]
    More formally, the complexity of a string is the length of the
    shortest possible description of the string in some fixed universal
    description language (the sensitivity of complexity relative to the
    choice of description language is discussed below). It can be shown
    that the Kolmogorov complexity of any string cannot be more than a few
    bytes larger than the length of the string itself. Strings like the
    abab example above, whose Kolmogorov complexity is small relative to
    the string’s size, are not considered to be complex. [–> another
    aspect of complexity, complexity of specification, contrasted with
    complexity of search tied to information carrying capacity]
    The Kolmogorov complexity can be defined for any mathematical
    object, but for simplicity the scope of this article is restricted to
    strings. [–> other things can be reduced to strings by using compact
    description languages, so WLOG] We must first specify a description
    language for strings. Such a description language can be based on any
    computer programming language, such as Lisp, Pascal, or Java.[–> try
    AutoCAD] If P is a program which outputs a string x, then P is a
    description of x. The length of the description is just the length of P
    as a character string, multiplied by the number of bits in a character
    (e.g., 7 for ASCII). [–> notice, the information metric] . . . .
    Any string s has at least one description. For example, the second
    string above is output by the pseudo-code:
function GenerateString2()
    return “4c1j5b2p0cv4w1x8rx2y39umgw5q85s7”
whereas the first string is output by the (much shorter) pseudo-code:
function GenerateString1()
    return “ab” × 16
    If a description d(s) of a string s is of minimal length (i.e.,
    using the fewest bits), it is called a minimal description of s, and
    the length of d(s) (i.e. the number of bits in the minimal description)
    is the Kolmogorov complexity of s, written K(s). Symbolically,
    K(s) = |d(s)|.
[–> our added case is similarly complex to a plausibly random
string but also has a detachable description that is simple and often
identifies observable functionality]
    The length of the shortest description will depend on the choice
    of description language; but the effect of changing languages is
    bounded (a result called the invariance theorem) . . . .
    At first glance it might seem trivial to write a program which can
    compute K(s) for any s, such as the following:
function KolmogorovComplexity(string s)
    for i = 1 to infinity:
        for each string p of length exactly i
            if isValidProgram(p) and evaluate(p) == s
                return i
This program iterates through all possible programs (by
iterating through all possible strings and only considering those which
are valid programs), starting with the shortest. Each program is
executed to find the result produced by that program, comparing it to
the input s. If the result matches then the length of the program is
returned.
However this will not work because some of the programs p
tested will not terminate, e.g. if they contain infinite loops. There
is no way to avoid all of these programs by testing them in some way
before executing them due to the non-computability of the halting
problem. [–> so, calculation cannot in general distinguish
random from simple order and from FSCO/I, we have to observe. This
shows the pernicious nature of the strawman fallacy above by AF]
    What is more, no program at all can compute the function K, be it
    ever so sophisticated . . . .
    Kolmogorov randomness defines a string (usually of bits) as being
    random if and only if every computer program that can produce that
    string is at least as long as the string itself. To make this precise,
    a universal computer (or universal Turing machine) must be specified,
    so that “program” means a program for this universal machine. A random
    string in this sense is “incompressible” in that it is impossible to
    “compress” the string into a program that is shorter than the string
    itself. For every universal computer, there is at least one
    algorithmically random string of each length.[15] Whether a particular
    string is random, however, depends on the specific universal computer
    that is chosen. This is because a universal computer can have a
    particular string hard-coded in itself, and a program running on this
    universal computer can then simply refer to this hard-coded string
    using a short sequence of bits (i.e. much shorter than the string
    itself).
    This definition can be extended to define a notion of randomness
    for infinite sequences from a finite alphabet . . .

    This gives some background to further appreciate what is at stake.>>
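
    As a rough, minimal illustration of the compressibility contrast in the excerpt, here is a Python sketch using zlib as a crude stand-in for the ideal (and, per the excerpt, uncomputable) shortest description:

    import zlib

    s1 = b"abababababababababababababababab"   # simple repeating block
    s2 = b"4c1j5b2p0cv4w1x8rx2y39umgw5q85s7"   # plausibly random gibberish

    print(len(zlib.compress(s1, 9)))   # well under 32 bytes: the repetition is captured
    print(len(zlib.compress(s2, 9)))   # over 32 bytes: nothing to exploit, only header overhead

    A real compressor only upper-bounds K(s), and on strings this short its overhead masks any gain for ordinary English text; hence the point above that functionally specific cases are identified by observation rather than by calculation.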

  438. 438
    JVL says:

    Kairosfocus: There is more than enough on the table to show why the design inference on FSCO/I is warranted.

    I wasn’t questioning that!! I’m just trying to figure out why you reworked Dr Dembski’s metric and if your reworking gives the same results! I don’t know why that is so hard for you to understand.

    I will write up a simple example, apply Dr Dembski’s metric then ask you to apply yours (specifying the values of your introduced terms) and then we can see what’s what.

  439. 439
    Alan Fox says:

    Trying to hit 1 in 100,000,000,000…

    You have not the least justification for assuming that a particular function is unique and there is plenty of evidence (starting – but not ending – with Keefe and Szostak) that potential function is widespread in protein sequences.

  440. 440
    Alan Fox says:

    Additionally function can be selected for. Proteins that are promiscuous can under selective pressure become more specific. The all-at-once scenario assumed by Dembski doesn’t match reality. Though it will be amusing to see if his math produces more than GIGO, if KF dares to venture into genuine illustrative examples.

    *wonders if he needs more popcorn*

  441. 441
    kairosfocus says:

    AF, not an assumption. Notice how carefully proteins are synthesised and folded. That is the mark of an exacting requirement. KF

  442. 442
    kairosfocus says:

JVL, I await your renewed acknowledgement of the algebra, your willingness to acknowledge that FSCO/I is a subset of CSI for systems where functional configuration identifies the specificity [one noted by Orgel and Wicken in the 70s], and recognition that calculated cases are on the table. Not having my old log tables from 3 – 5 form handy [in a basement in Ja last I saw] I used a calculator emulator, HP Prime that has x^y and log functions with RPN stack. HP calculators since 1977. Further, I WORKED OUT what – log2[ prob] is, an info metric, that is not replacement. Have you done any info theory? Why are you unwilling to acknowledge neg log prob as a standard info metric, with base 2 giving bits, base e nats and base 10 Hartleys? KF
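
    (For concreteness, a minimal Python sketch of those three bases, with an illustrative probability of 1/8:)

    import math

    p = 1.0 / 8              # illustrative probability, not a biological figure
    print(-math.log2(p))     # 3.0 bits
    print(-math.log(p))      # ~2.079 nats (base e)
    print(-math.log10(p))    # ~0.903 hartleys (base 10)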

  443. 443
    JVL says:

    Kairosfocus: Not having my old log tables from 3 – 5 form handy [in a basement in Ja last I saw] I used a calculator emulator, HP Prime that has x^y and log functions with RPN stack.

    You can convert log base anything into log base 10 or ln quite simply. And even simple calculators have log10 and ln.

    Have you done any info theory? Why are you unwilling to acknowledge neg log prob as a standard info metric, with base 2 giving bits, base e nats and base 10 Hartleys?

    Let’s just compare methods and see what happens. I already said your algebra was fine albeit unnecessary. It’s your introduction of constants and functions not present in Dr Dembski’s formula that I want to check.

  444. 444
    kairosfocus says:

    JVL, I take it that you have not done info theory and refuse to accept what is in Taub and Schilling much less my briefing note. That is the root of your problem. KF

    PS, Wikipedia confesses:

Information theory is the scientific study of the quantification, storage, and communication of digital information.[1] The field was fundamentally established by the works of Harry Nyquist and Ralph Hartley, in the 1920s, and Claude Shannon in the 1940s.[2] The field is at the intersection of probability theory, statistics, computer science, statistical mechanics, information engineering, and electrical engineering.

    A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (with two equally likely outcomes) provides less information (lower entropy) than specifying the outcome from a roll of a die (with six equally likely outcomes) . . . .

    Based on the probability mass function of each source symbol to be communicated, the Shannon entropy H, in units of bits (per symbol), is given by

H = – SUM_i [ p_i log2( p_i ) ]

    [–> avg info per symbol, notice, the typical term for state i is – log2(pi), weighted by pi in the sum, this is directly comparable to a key expression for Entropy for Gibbs and is a point of departure for the informational school of thermodynamics]

where pi is the probability of occurrence of the i-th possible value of the source symbol. This equation gives the entropy in the units of “bits” (per symbol [–> it is averaging]) because it uses a logarithm of base 2, and this base-2 measure of entropy has sometimes been called the shannon in his honor. Entropy is also commonly computed using the natural logarithm (base e, where e is Euler’s number), which produces a measurement of entropy in nats per symbol and sometimes simplifies the analysis by avoiding the need to include extra constants in the formulas. Other bases are also possible, but less commonly used. For example, a logarithm of base 2^8 = 256 will produce a measurement in bytes per symbol, and a logarithm of base 10 will produce a measurement in decimal digits (or hartleys) per symbol.

    So, you can see my direct reason for reducing to information and symbolising I(T). The product rule for logs directly gives the threshold, as noted. Functionally Specific Bits, using F and S as dummy variables is obvious, and a matter of observation. More complex measures can be resorted to but excess of threshold is so large no practical difference results. Design.

    As you obviously did not read my longstanding notes, I clip:

    To quantify the above definition [from F R Connor’s Signals] of what is perhaps best descriptively termed information-carrying capacity, but has long been simply termed information (in the “Shannon sense” – never mind his disclaimers . . .), let us consider a source that emits symbols from a vocabulary: s1,s2, s3, . . . sn, with probabilities p1, p2, p3, . . . pn. That is, in a “typical” long string of symbols, of size M [say this web page], the average number that are some sj, J, will be such that the ratio J/M –> pj, and in the limit attains equality. We term pj the a priori — before the fact — probability of symbol sj. Then, when a receiver detects sj, the question arises as to whether this was sent. [That is, the mixing in of noise means that received messages are prone to misidentification.] If on average, sj will be detected correctly a fraction, dj of the time, the a posteriori — after the fact — probability of sj is by a similar calculation, dj. So, we now define the information content of symbol sj as, in effect how much it surprises us on average when it shows up in our receiver:

    I = log [dj/pj], in bits [if the log is base 2, log2] . . . Eqn 1

    This immediately means that the question of receiving information arises AFTER an apparent symbol sj has been detected and decoded. That is, the issue of information inherently implies an inference to having received an intentional signal in the face of the possibility that noise could be present. Second, logs are used in the definition of I, as they give an additive property: for, the amount of information in independent signals, si + sj, using the above definition, is such that:

    I total = Ii + Ij . . . Eqn 2

    For example, assume that dj for the moment is 1, i.e. we have a noiseless channel so what is transmitted is just what is received. Then, the information in sj is:

    I = log [1/pj] = – log pj . . . Eqn 3

    This case illustrates the additive property as well, assuming that symbols si and sj are independent. That means that the probability of receiving both messages is the product of the probability of the individual messages (pi *pj); so:

Itot = log [1/(pi*pj)] = [-log pi] + [-log pj] = Ii + Ij . . . Eqn 4

    So if there are two symbols, say 1 and 0, and each has probability 0.5, then for each, I is – log [1/2], on a base of 2, which is 1 bit. (If the symbols were not equiprobable, the less probable binary digit-state would convey more than, and the more probable, less than, one bit of information. Moving over to English text, we can easily see that E is as a rule far more probable than X, and that Q is most often followed by U. So, X conveys more information than E, and U conveys very little, though it is useful as redundancy, which gives us a chance to catch errors and fix them: if we see “wueen” it is most likely to have been “queen.”)
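
    (A quick numeric check of Eqn 3 and Eqn 4 in Python, assuming a noiseless channel and independent symbols:)

    import math

    pi, pj = 0.5, 0.25                       # independent symbol probabilities
    print(-math.log2(pi * pj))               # info in both symbols together: 3.0 bits
    print(-math.log2(pi) + -math.log2(pj))   # 1.0 + 2.0 = 3.0 bits, so Itot = Ii + Ij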

    Further to this, we may average the information per symbol in the communication system thusly (giving in terms of -H to make the additive relationships clearer):

    – H = p1 log p1 + p2 log p2 + . . . + pn log pn

    or, H = – SUM [pi log pi] . . . Eqn 5

    H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: “it is often referred to as the entropy of the source.” [p.81, emphasis added.] Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1 below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form. Though somewhat controversial even in quite recent years, this is becoming more broadly accepted in physics . . . .
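
    (Eqn 5 in a few lines of Python, reproducing the coin-vs-die comparison from the Wikipedia excerpt earlier in this comment:)

    import math

    def H(probs):
        # average information per symbol, in bits: H = -SUM pi log2 pi
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(H([0.5, 0.5]))     # fair coin: 1.0 bit/symbol
    print(H([1.0 / 6] * 6))  # fair die: ~2.585 bits/symbol, more uncertainty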

    [Wikipedia confesses:] At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann’s constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing.

But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon’s information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics. [Also, another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, “Gain in entropy always means loss of information, and nothing more” . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell’s demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).

    I trust that should be enough for starters.

Let me add that your assertion, in the teeth of repeated correction, is unjust: “It’s your introduction of constants and functions not present in Dr Dembski’s formula that I want to check.” False and misleading. I reduced – log2[prob] to information and symbolised it I(T). I used the product rule to draw out the threshold. I reduced the 10^120 term to its log2 result, 398 bits. I symbolised the other term, a number, and pointed to the 10^150 threshold, essentially 500 bits. On your repeated objection I used WmAD’s case and showed the bit value, about 66, noting that he used 10^140 configs as the space of possibilities there.

    Your resistance to a simple working out tells me it would be futile to try anything more complex. All that would invite is an onward raft of further objections.

    The basic point is, neg log of prob –> information, all else follows and indeed the unusual formulation of WmAD’s expression as – log2[ . . .] itself tells that the context is information in bits.

    As I have noted, the only practical use I have seen for log2 is to yield info in bits. If you have seen another kindly enlighten me.

    KF

  445. 445
    JVL says:

    Kairosfocus:

    You’re just not really paying attention to what I am actually saying. I shall write up a simple example soon and ask you to work out the same example using your method (with your introduced constants and change of function) and we’ll see.

  446. 446
    kairosfocus says:

    JVL, you are setting up and knocking over a strawman. That you resist a reduction of a – log2[ . . .] expression into the directly implied information in bits even after repeated explanation and correction tells me there is a refusal to acknowledge what is straightforward. If you are unwilling to acknowledge that, that is itself telling that you have no case on merits but insist on hyperskeptically wasting time. KF

  447. 447
    JVL says:

    Kairosfocus:

I do not understand your constant objections. I’ve agreed with your algebra. I don’t understand why you made certain substitutions, as the mathematics is quite straightforward as Dr Dembski stated his formulation, but if we compare results we can clear some of those questions up. But you keep not wanting to compare results.

    As I said, I will present a worked out, fairly simple case, just to get things started. I’ve done a rough draft but I’d like to review it to make sure it’s clear and cogent and easy to follow.

    Stop arguing against things I haven’t said; you can convince me your approach is correct by comparing results. Simple.

  448. 448
    ET says:

    Alan Fox:

    You have not the least justification for assuming that a particular function is unique and there is plenty of evidence (starting – but not ending – with Keefe and Szostak) that potential function is widespread in protein sequences.

    They said 1 in 100,000,000,000 proteins are functional. Read their paper. 1 in 100,000,000,000 is NOT widespread.

  449. 449
    ET says:

    Alan Fox is either a LIAR or just willfully ignorant:

    The all-at-once scenario assumed by Dembski doesn’t match reality.

    You are lying as Dembski doesn’t make such an assumption.

    Grow up, Alan.

  450. 450
    ET says:

EARTH TO ALAN FOX. FROM KEEFE AND SZOSTAK:

    We therefore estimate that roughly 1 in 10^11 of all random-sequence proteins have ATP-binding activity comparable to the proteins isolated in this study.

    You lied about their paper, too.

    You have no shame.

  451. 451
    JVL says:

    Okay, here’s what I’d like to use as a first test of Dr Dembski’s metric. I’m not saying this test is controversial in any way; I’m just wanting to step through it as an example.

I’ll work through Dr Dembski’s metric (from his 2005 monograph: Specification, the Pattern That Signifies Intelligence) twice, once without breaking the log base 2 apart and once breaking it apart. In both cases I will get the same result because breaking the log apart has no effect on the final value.

    For this post Dr Dembski’s metric looks like this:

    X = -log2(10^120•pS(T)•P(T|H))

(Because this blog is not configured to handle Greek letters I’ve changed some of the notation)

    I’d like Kairosfocus to work through the example using his version of the metric (from comment 276 above: X = I(T) – 398 – K2) and I’d like him to give values for K2 and for I(T).

    We can then compare results and conclusions.

    For this particular example I expect to get the same conclusions because I think the conclusion is pretty clear but I’d like to illustrate the difference in the approaches.

    The example I’d like to work through first is: Flipping a fair coin 10 times and getting 10 tails.

    Again, I expect Kairosfocus and I to arrive at the same conclusion for this particular example. I just want to see how he works his version.

I will/may be using a log conversion method which says log base b of N, written logbN, equals log10N/log10b = lnN/lnb. This can be found in any high school math text beyond the base level. This is handy when evaluating log2 since many calculators do not have that function.
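
    (The conversion in Python, applied to the 10^120 factor that appears in the metric; a minimal check that all three routes agree:)

    import math

    N = 10.0 ** 120
    print(math.log10(N) / math.log10(2))   # ~398.63136
    print(math.log(N) / math.log(2))       # same value via natural logs
    print(math.log2(N))                    # and via the built-in log2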

  452. 452
    kairosfocus says:

    JVL, why don’t you reduce the – log2[ . . . ]? That would tell you a lot. I did it, but you apparently need to do so for yourself. And, you show that you know enough about logs to understand. KF

  453. 453
    JVL says:

    Okay, if you flip a fair coin 10 times there are 2^10 possible outcomes all of which are equally likely if each flip is truly random which we’re going to assume for this example.

    So, S = our semiotic agent, T = getting 10 tails with 10 flips, H = the flips are random -> P(T|H) = 1/2^10 = 2^-10

    Dr Dembski defines pS(T) as: the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T.

    I argue that pS(T) = 2 in this case. We can describe our T as “getting all tails” and the only other possible outcome with a description that simple or simpler is “getting all heads”

    So X = -log2(10^120•pS(T)•P(T|H)) = -log2(10^120•2•2^-10) = -log2(10^120•2^-9)

    Now 10 is approx equal to 2^3.321928 (recall that 2^2 = 4, 2^3 = 8 and 2^4 = 16)

    So X is approx = -log2((2^3.321928)^120•2^-9) = -log2(2^398.63136•2^-9)

    = -log2(2^389.63136) = -389.63136

    This result is less than one (Dr Dembski’s threshold) so design is not concluded, i.e. this event could have come about by chance.

    Addendum: perhaps I should point out that for any base, b: logb(b) = 1 and logb(b^n) = n.

  454. 454
    JVL says:

    An alternate method of computing the final result is:

    X = -log2(10^120•pS(T)•P(T|H)) = -log2(10^120) – log2(pS(T)) – log2(P(T|H))

    For our values that’s

= -log2(10^120) – log2(2) – log2(2^-10) = -log2(2^398.63136) – 1 + 10 = -398.63136 – 1 + 10 = -389.63136

    So, breaking apart the stuff inside the log is possible but unnecessary as the result is the same and therefore the conclusion is the same.

    So, I’d now like Kairosfocus to work through this same simple example, explain how he’s calculating I(T) and K2, give us his result and conclusion. As I already said: I expect our conclusions to be the same for this example but I’d like to see how he’s calculating K2 and I(T).
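
    (Both routes in a few lines of Python, as a minimal check that splitting the log changes nothing:)

    import math

    pS_T = 2             # "all tails" plus the equally simple "all heads"
    P_T_H = 2.0 ** -10   # probability of 10 tails in 10 fair flips

    X_direct = -math.log2(10.0 ** 120 * pS_T * P_T_H)
    X_split = -120 * math.log2(10) - math.log2(pS_T) - math.log2(P_T_H)
    print(X_direct, X_split)   # both ~ -389.63136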

  455. 455
    kairosfocus says:

JVL, evasion again and choosing a ten bit case, 2^10 = 1024. We are interested in cases at order of 500 – 1,000 or more bits, 3.27*10^150 to 1.07*10^301 or bigger, doubling for every further bit. 10 bits is not even two ASCII characters. Any given binary sequence could come about by raw chance, but some are utterly unlikely and implausible to do so because of the statistical weight of the near 50-50 peak, with bits in no particular functional order, i.e. gibberish. KF

  456. 456
    kairosfocus says:

    PS, you will observe that I gave limiting values and said so. Dembski suggested 500 bits, and that config space swamps the sol system’s search capacity. 1,000 bits I am more comfortable with for the cosmos as a whole. That is, I used values that make any plausible search reduce to negligibility. As you full well know.

  457. 457
    JVL says:

Kairosfocus: evasion again and choosing a ten bit case, 2^10 = 1024. We are interested in cases at order of 500 – 1,000 or more bits

    Can you just show us how to evaluate your version of his metric for this case, yes or no? If you think it falls below the threshold then do the math and show us why. For this example what is your K2 and your I(T)?

    Dr Dembski worked through an example where he got -20, below his threshold, so clearly he intended to be able to use his metric for ALL cases.

  458. 458
    relatd says:

    AF at 440,

    Do you even read what you write?

    “Additionally function can be selected for. Proteins that are promiscuous can under selective pressure become more specific.”

“selected for”? By who? By what? Blind, unguided chance? That’s not goal oriented? “selective pressure”? Seriously? How much time, according to the non-existent Selective Pressure Cookbook, needs to pass to create the fictional change or changes?

  459. 459
    kairosfocus says:

AF, you know full well. No specificity or functionality, so 10 bits x 0 x 0 = 0. X_500 = 0 – 500 = – 500, 500 bits short of a design inference threshold. The two threshold terms are addressed, AS YOU KNOW, by finding a bounding value, here a very generous 500 bits as WmAD has mentioned. I would use that for the sol system scale. KF

PS, just to clarify, 10^57 atoms in the sol system, mostly H and He in the sun, but use that. 10^14 observations of the state of 500 one-bit registers per second each, for 10^17 s, gives 10^88 possible observations. A negligible fraction of the 3.27*10^150 possible states. This has already been outlined and given over years.
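
    (A minimal arithmetic check of those figures in Python:)

    atoms = 10 ** 57       # atoms in the sol system, a generous bound
    rate = 10 ** 14        # observations of a 500-bit register per second
    time = 10 ** 17        # seconds on the timeline
    observations = atoms * rate * time
    print(observations == 10 ** 88)         # True
    print(float(2 ** 500))                  # ~3.27e150 possible 500-bit states
    print(observations / float(2 ** 500))   # ~3e-63, a negligible fraction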

  460. 460
    JVL says:

    Kairosfocus:

    One other thing, since you haven’t responded yet . . .

    When Dr Dembski worked an example and got -20 you suggested that that example was 20 bits shy. I got -389.something rounded up (or down) to -390. Does that mean that that example was 390 bits shy of the threshold? Should we add 390 and try again?

  461. 461
    Alan Fox says:

    Related:

“selected for”? By who? By what? Blind, unguided chance?

    The NICHE!

    …which is why swifts are generally found flying in advance of weather fronts, golden moles generally swimming in sand in the Namib, and great white sharks generally patrolling oceans containing suitable prey. Not chance, but environmental design, which some refer to as natural selection by the niche environment.

    The NICHE!

  462. 462
    relatd says:

    AF at 461,

    I work with professional writers and if I saw that kind of CRAP on my desk, I would immediately reject it. Then throw it in the trash.

    “environmental design”? That’s not even fiction, or “science” fiction. It contains zero science.

    Swifts flying in advance of weather fronts? Who taught them how to do that? Nothing? Because that is exactly what you have.

  463. 463
    Alan Fox says:

    KF

    This has already been outlined…

ad nauseam. I grasp Seth Lloyd’s concept of the total number of particles in the (known) universe times units of Planck time since the start of (this known) universe. Dembski misapplies the concept, which might make some sense if this known universe is strictly deterministic, which it isn’t. But that isn’t the big mistake, which is in assuming unique solutions and random, exhaustive searches.

  464. 464
    Alan Fox says:

I work with professional writers…

You keep mentioning this as if it should impress me. What would impress me is if Related showed some understanding of biology and attacked that rather than your strawman version.

  465. 465
    Alan Fox says:

    Who taught them how to do that?

    The niche (in the sense of sifting out individuals with poorer ability from the population),

    Gradually.

  466. 466
    relatd says:

    AF at 463 and 464,

    Pzzzfffft !!! “The niche”? Woo hoo !!! The niche what? That fictional, invisible thing – without intelligence – you’re trying to sell here?

    That’s crap. It has NO basis in fact. In case you missed it – that’s CRAP.

  467. 467
    relatd says:

    AF at 465,

    All the baby Swifts had to show up for practice one day. Called there by the fictional, invisible nothing…

    Seriously? I mean Seriously?

  468. 468
    Alan Fox says:

    How innate behaviour is templated in DNA sequences is a subject largely untouched so far. I optimistically expect that to change one day. I pessimistically expect climate change to get us first. The niche humans occupy is changing very rapidly.

  469. 469
    Alan Fox says:

    I don’t expect to convince you of anything, Related. Just remember what I said twenty years from now, when I’ll have already shuffled off this mortal coil.

  470. 470
    relatd says:

    AF at 468,

    The Niche, starring Alan Fox. Where he points at nothing and tells people it’s something.

    By the way, according to AF, we’re all going to die next week. The week after that at the latest…

  471. 471
    Alan Fox says:

    Only humans, Related. I suspect you and everyone that thinks God has a purpose for us humans that involves an eternity of hosannah-ing are in for a bit of a disappointment.

  472. 472
    relatd says:

    AF at 471,

    If you think that this life is all there is, I’ve got 2,000 years of testimony that says different.

  473. 473
    Alan Fox says:

    Well, Related, let’s agree to meet up in the hereafter and compare notes. Though an eternity of talking to you is not the most attractive proposition, I have to say. Perhaps I’ll get to go to Hell where all the interesting folks are.

  474. 474
    kairosfocus says:

    JVL, further doubling down in the face of a response . . . since you haven’t responded yet. That is telling. Ultimately, telling on a rhetorical strategy of distractions, side tracks and polarisation. One, that reveals through what is evaded and dismissed or forgotten, the dirty secret of the long term ID objector. Not having a substantial response, side track and polarise. It is clear, the design inference on signs is well warranted, functional information like the text of objections is an observable and that blind watchmaker needle in haystack search becomes hopeless once 500 – 1,000 bits are on the table. Indeed, it is obvious that bits are a natural info metric, starting with the carrying capacity of two state elements. In an info theory context, – log2[probability] gives info in bits. Then, as WmAD’s expression reduced algebraically shows, – log2[probability*threshold_index] gives information short of, at or beyond threshold in bits. Where we can work through redundancy as Durston et al have, we can set dummy variables to enfold functionality and specificity, we can use bounds for thresholds, with 500 – 1,000 bits a very reasonable and even generous threshold. The net result is, that relevant cases such as the 900 bases for a typical 300 AA protein, 1800 bits info capacity in a functional, specific entity, are so far beyond sol system threshold, 1300 bits, that redundancy makes no practical difference. The net result is FSCO/I in the cell and in body plans — OOL 100 – 1,000 kbases in the genome, 10 – 100+ mn bases for body plans eg for arthropods — is so far beyond threshold that redundancy is irrelevant. Credibly, life and body plans come from the only observed source for FSCO/I, intelligently directed configuration. KF

  475. 475
    kairosfocus says:

    AF, see the just above. KF

  476. 476
    JVL says:

    Kairosfocus: since you haven’t responded yet.

    You didn’t address anything to me after I responded to you. So I asked another question.

    And, you haven’t said what your K2 and I(T) are for the particular example I worked out using Dr Dembski’s metric. You came up with those so, if they have any meaning, you should be able to specify their values for a given example.

    Nor have you answered my follow-up question: since you once said that a result using Dr Dembski’s metric that came out to -20 . . . since I got a result of -389 or so does that mean that that particular test sequence was 389 or so bits below threshold?

    This is just a sincere and simple test of yours and Dr Dembski’s specified complexity formula. You seem to avoid actually doing any calculations.

If I don’t hear back from you regarding actual values of K2 and I(T) then I shall move on to another example with more ‘bits’ and see if you agree or disagree with the results, and why, and (hopefully) what your own version of the metric shows. But, truth be told, I’m not holding my breath since you actually seem to be about tossing lots of math around without actually doing any.

  477. 477
    JVL says:

    I just noticed an omission in the above comment, it should read . . .

    . . . since you once said that a result using Dr Dembski’s metric that came out to -20 that that meant that that sequence was 20 bits shy of the threshold . . . since I got a result of -389 . . .

  478. 478
    JVL says:

    So, let’s ratchet things up a bit. Let’s try testing a very similar scenario except let’s go for 400 tails in a row. An extremely unlikely event if everything is by chance I think you’ll agree.

    I don’t think it’s difficult updating my work with Dr Dembski’s metric (all I have to do is put ‘400’ in where I had ’10’ before) but I will first give Kairosfocus a chance to tell us what his version of the metric comes up with (i.e. what his K2 and I(T) are) and what his conclusion is before I chime in with my results.

    Again this is just testing Dr Dembski’s metric from his 2005 monograph.

  479. 479
    kairosfocus says:

    JVL, your ongoing game is a waste of time and distraction. KF

  480. 480
    JVL says:

    Kairosfocus: your ongoing game is a waste of time and distraction.

    Are you saying you can’t compute your K2 and I(T) for the example of getting 400 tails in a row when flipping a fair coin? I can compute Dr Dembski’s metric, easily.

  481. 481
    Lieutenant Commander Data says:

😆 To talk only about an amino-acid “metric” is a bad joke. We should be talking about the combined metrics of all associated cell processes and the “probability” of all processes functioning/cooperating/helping each other to form interconnected systems from the first cell.

    Actin nucleation core
    Action potential
    Afterhyperpolarization
    Autolysis
    Autophagin
    Autophagy
    Binucleated cells
    Biochemical switches in the cell cycle
    Branch migration
    Bulk endocytosis
    CDK7 pathway
    Cap formation
    Cell cycle
    Cell death
    Cell division
    Cell division orientation
    Cell growth
    Cell migration
    Cellular differentiation
    Cellular senescence
    Chromosomal crossover
    Coagulative necrosis
    Crossing-over value
    Cytoplasm-to-vacuole targeting
    Cytoplasmic streaming
    Cytostasis
    DNA damage
    DNA repair
    Density dependence
    Dentinogenesis
    Dynamin
    Ectopic recombination
    Efferocytosis
    Emperipolesis
    Endocytic cycle
    Endocytosis
    Endoexocytosis
    Endoplasmic-reticulum-associated protein degradation
    Epithelial–mesenchymal transition
    Exocytosis
    Ferroptosis
    Fibrinoid necrosis
    Filamentation
    Formins
    Fungating lesion
    Genetic recombination
    Hertwig rule
    Histone methylation
    Interference
    Interkinesis
    Intracellular transport
    Intraflagellar transport
    Invagination
    Karyolysis
    Karyorrhexis
    Klerokinesis
    Leptotene stage
    Malignant transformation
    Meiosis
    Membrane potential
    Microautophagy
    Mitotic recombination
    Necrobiology
    Necrobiosis
    Necroptosis
    Necrosis
    Nemosis
    Nuclear organization
    Parasexual cycle
    Parthanatos
    Passive transport
    Peripolesis
    Phagocytosis
    Phagoptosis
    Pinocytosis
    Poly
    Potocytosis
    Pyknosis
    Quantal neurotransmitter release
    Rap6
    Receptor-mediated endocytosis
    Residual body
    Ribosome biogenesis
    S phase index
    Senescence
    Septin
    Site-specific recombination
    Squelching
    Stringent response
    Synizesis
    Trans-endocytosis
    Transcytosis
    Xenophagy
    +all still unknown processes :))
    Good luck!

  482. 482
    JVL says:

    LtComData:

    We are talking about computing Dr Dembski’s specified complexity metric from his 2005 monograph: Specification: The Pattern That Signifies Intelligence. I have decided to compare and contrast the results of that metric on some basic and simple examples with the alternate metric proposed by Kairosfocus many years ago now, I think. I have computed the result for the example of flipping a fair coin 10 times and getting 10 tails and hoped that Kairosfocus would show what his alternate interpretation of that metric would compute to. He . . . well . . . avoided giving a direct answer.

    I am now asking for him to give his result for the example of flipping a fair coin 400 times and getting 400 tails. I can easily compute Dr Dembski’s metric for that example but I’d like to hear Kairosfocus‘s response first. Does flipping a fair coin 400 times and getting 400 tails give evidence of specified complexity and therefore design?

    If, after the dialogue with Kairosfocus is resolved, you’d like to discuss the application of Dr Dembski’s metric to some of the other situations you list then perhaps we can do that. But first I’d like to resolve the simple case.

  483. 483
    kairosfocus says:

CD, yes, we are dealing with lower bounds on complexity. It is enough for a reasonable person — not to be assumed at this stage — that for an average 300 AA protein, we have 900 bases, and so 1800 bits carrying capacity. Bits can be seen i/l/o basic info theory; notice how that was ducked time and again. For our effective cosmos, the sol system, 10^57 atoms as observers each overseeing 500 bits/coins changing 10^14 times/s for 10^17 s, we can examine 10^88 states. Sounds huge till one sees that the config space for 500 bits is 3.27*10^150, so one can only search a negligible fraction. Needle in haystack search challenge sidelines blind mechanisms. Intelligence uses understanding to compose effective, functional complex organisation. And the objectors know this; they are seeking to suppress what should be a commonplace. KF

  484. 484
    JVL says:

Well, it seems like Kairosfocus is just not going to even try to compute his version of Dr Dembski’s 2005 specified complexity metric from his monograph Specification: The Pattern That Signifies Intelligence for the second example I have proposed: flipping a fair coin 400 times and getting 400 tails. I shall give you my result from computing Dr Dembski’s metric for that example. I argue that the only thing I have to do is replace the ’10’ in my previous example with ‘400’, so I get:

(Oh, after the first step all the ‘equals’ should properly be read as ‘approximately equals’.)

X = -log2(10^120•2•2^-400) = -log2(2^398.63136•2^-399) = -log2(2^-0.36864) = 0.36864

Which is below Dr Dembski’s threshold of 1 needed to conclude that the event or sequence exhibited enough specified complexity to be definitely designed. Please don’t shout at me, I’m just trying to calculate his metric fairly. If you think I’ve made a mathematical mistake then please point it out specifically. Because I didn’t come up with the metric, if you have a problem with it then do not blame me.

Can I just say, it’s clear that one more coin flip, still landing tails, would clearly step over Dr Dembski’s specified complexity line: 401 fair coin flips, all tails, would meet the criteria of his metric. A more complicated pattern would increase pS(T) and thereby mean an increase in the number of trials/flips required to meet the threshold. In some sense, looking at the very simplest case puts a kind of lower bound, based on his metric, for detecting sufficient specified complexity that leads to a conclusion of design. It’s close to 400 events or choices, based on actually calculating Dr Dembski’s metric. Most of the time, it would be much higher than that.

    Once again, I am not casting judgement on Dr Dembski’s metric, I am only trying to explore its implications. I was hoping to get Kairosfocus to do something similar with his version of Dr Dembski’s metric but, alas, he seems to have excused himself from the discussion. For whatever reason. I would still very much like him to give values for his K2 and I(T) for any of the examples I have dealt with. He came up with those terms so, if they have any meaning, he should be able to evaluate them. We shall see if he deigns to enlighten us with the numerical thinking behind his formulation.
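
    (A minimal Python scan under the same assumptions, i.e. pS(T) = 2 and all tails in n fair flips, showing where the threshold of 1 is first crossed:)

    import math

    def X(n):
        # -log2(10^120 * 2 * 2^-n) simplifies to n - 1 - 120*log2(10)
        return n - 1 - 120 * math.log2(10)

    print(X(10), X(400), X(401))   # ~ -389.63, 0.369, 1.369
    print(next(n for n in range(1, 1000) if X(n) > 1))   # 401, the first count over threshold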

  485. 485
    kairosfocus says:

    JVL, what I am saying is that once we have a reasonable bound, and can see the result for relevant cases — as was long since shown — we have the material answer. Therefore, I have no need to go on and on with what is patently distractive. The material result is, there is good reason to conclude that cells and body plans include intelligently directed configuration as key cause. KF

    PS, and BTW, the bounds set limits for plausible ranges for terms involved in the threshold values implicit in Dembski’s expression.

  486. 486
    kairosfocus says:

    PPS, I again remind, as just one example

    293
    kairosfocus
    August 7, 2022 at 5:06 am

    F/N: The point of the above is, it is highly reasonable to use a threshold metric for the functional, configuration based information that identifies the span beyond which it is highly reasonable to draw the inference, design.

    First, our practical cosmos is the sol system, 10^57 atoms, so 500 bits

    FSCO/I, X_sol = FSB – 500 in functionally specific bits

    Likewise for the observable cosmos,

    X_cos = FSB – 1,000, functionally specific bits

    And yes this metric can give a bits short of threshold negative value. Using my simple F*S*B measure, dummy variables F and S can be 0/1 based on observation of functionality or specificity. For a 900 base mRNA specifying a 300 AA protein, we get

    X_sol = [900 x 2 x 1 x 1] – 500 = 1300 functionally specific bits.

    Which, is comfortably beyond, so redundancy is unlikely to make a difference.

    Contrast a typical value for 1800 tossed coins

    X_sol = [1800 x 0 x 0] – 500 = – 500 FSBs, 500 bits short.

    If the coins expressed ASCII code in correct English

    X_sol = [1800 x 1 x 1] – 500 = 1300 FSBs beyond threshold, so comfortably, designed.

    [We routinely see the equivalent in text in this thread and no one imagines the text is by blind watchmaker action.]

    A more sophisticated value using say the Durston et al metric would reduce the excess due to redundancy but with that sort of margin, there is no practical difference.

    Where, in the cell, for first life just for the genome [leaving out a world of knowledge of polymer chemistry and computer coding etc] we have 100 – 1,000 kbases. 100,000 bases is 200,000 bits carrying capacity, and again there is no plausible way to get that below 1,000 bits off redundancy.

    Life, credibly, is designed.

    Where, of course, redundancy was long since addressed by Abel, Durston et al. It is quite clear from the above that you still resist the simple reduction to information in bits directly implied by – log2[ . . . ] where that was established as a basic metric for information decades ago. Similarly the algebra of logs leads to thresholds in the Dembski expression, which is why I used threshold metrics from over a decade ago as is drawn out in my always linked. Again side stepped and/or distracted from and resisted. That suggests that you were unfamiliar with the established result of an information metric, then with the significance of the product rule for logs given WmAD’s expression. You no longer have an excuse. Information is first measured as carrying capacity and redundancy can be addressed for practical cases but makes no effective difference for the main point. That main point is your obvious underlying objection but you cannot deal with it substantially on merits. Which is telling.
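
    (The quoted F*S*B threshold metric is simple enough to run directly; a minimal sketch of the three cases above:)

    def X_sol(bits, F, S, threshold=500):
        # carrying capacity in bits, times 0/1 dummy variables for function
        # and specificity, less the sol-system threshold
        return bits * F * S - threshold

    print(X_sol(900 * 2, 1, 1))   # 900-base mRNA: 1300 bits beyond threshold
    print(X_sol(1800, 0, 0))      # 1800 random coin tosses: -500, i.e. 500 bits short
    print(X_sol(1800, 1, 1))      # 1800 coins spelling ASCII English: 1300 beyond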

  487. 487
    kairosfocus says:

    AF, reasonable bounds. You are dealing with me here, not WmAD and we need not further side track. I took each atom in the cosmos or sol system as an observer, and a fast chem rxn time as a bound on time for an observation, with a timeline since the singularity as a bound on observations of all 10^80 or 10^57 atoms, where that is a generous estimate as most are H and He, many bound up in stars etc. Give the 10^57 sol sys atoms registers of 500 bits to observe each 10^-14 s, 1000 bits for the cosmos as a whole. You have 10^111 or 10^88 observations as bounds. Configuration space for 500 or 1,000 bits is 3.27*10^150 or 1.07*10^301. In each case span of possible search is negligible relative to the config space. Once we can find functionally specific, complex information . . . which can be implicit in organisation to achieve function per Wicken wiring diagram . . . beyond reasonable threshold, there is only one empirically warranted, analytically plausible source, intelligently directed configuration. This is plain but equally plainly you have resisted it and sought to distract attention from it through every rhetorical stunt. That backfires, it is an implicit admission that you have no substantial reply on the focal, decisive point. So your continued objections and distractions are of no material value as the main point is decided on merits, long since. Life is credibly designed, body plans are, including ours. The 160 year long agenda to expel design has failed. KF

  488. 488
    JVL says:

    Kairosfocus:

    Can you actually compute the terms you came up with: K2 and I(T)? Yes or no?

  489. 489
    Alan Fox says:

    Life is credibly designed…

    Maybe, but what you’ve amply demonstrated is it is a matter of belief rather than anything that can be shown mathematically. It is bizarrely simplistic to pluck some arbitrary threshold from… the air… and claim anything beyond is a product of design. You believe God created everything anyway. The bogus mathematical argument is pointless.

  490. 490
    kairosfocus says:

    F/N: The distraction continues. Having bounded variables to go into the log reduction, having provided the result that for cases relevant to cell based life and body plans, we are well beyond threshold where intelligently directed configuration is the by far and away best explanation, the material question is over. I(T|H) can be assessed on capacity then adjusted as Abel, Durston et al have published, but it is implausible that redundancy makes a practical difference. Thresholds have been given generous bounds for sol system and cosmos. All along, there has been refusal to acknowledge plainly and work with the reduction of – log2[ . . . ] and the link thereby to information and to information beyond a threshold. That resistance and distraction tell the story, and they are why we need to refocus the main thing and conclusion on merits: life is credibly the result of design, also body plans up to our own. Observe the significance of that and the onward distractive behaviour. Where, the behaviour so far gives little confidence that any going along with onward distractions will have any fruitfulness. Enough has been done but determined objectors will never acknowledge any significant result, a sad fact of life. In the end that unresponsiveness and that hyperskeptical polarisation are telling. KF

  491. 491
    kairosfocus says:

    See why I have declared intellectual independence and refuse to allow endless hyperskeptical objections to veto what on warrant I can know with good reason?

  492. 492
    kairosfocus says:

    Note, by using bounds driven by search capability of the cosmos or sol system, we have general results; far more powerful than any particular detailed calculation, eg by using tables of protein families to estimate redundancy. For any reasonable person, a general result is preferable to one that depends on detailed assumptions, scenario and compiled data on proteins etc. Such general results with examples were on the table hundreds of comments ago. The sullen resistance, foot dragging, side stepping, implicit half concessions pulled back and resort to polarisation tell us that the objectors have lost on merits.

  493. 493
    JVL says:

    kairosfocus: All along, there has been refusal to acknowledge plainly and work with the reduction of – log2[ . . . ] and the link thereby to information and to information beyond a threshold.

    I’m not the one who reworked Dr Dembski’s metric, introducing new terms (K2 and I(T)); that was you. And, it seems that you can’t even specify what those terms are numerically for a particular, simple example. You just talk in general about stuff when I’m asking you to be specific about terms you came up with.

    So, again: Can you actually compute YOUR TERMS K2 and I(T) for the very particular case of flipping a fair coin 400 times and getting 400 tails? Yes or no?

    I was able to calculate a specific value for Dr Dembski’s metric; the mathematics was elementary. You changed the metric into something you seemingly cannot calculate. Why did you make the change if you can’t calculate it?

    See why I have declared intellectual independence and refuse to allow endless hyperskeptical objections to veto what on warrant I can know with good reason?

    Perhaps you’d like to justify that by computing the terms you created as replacements for terms in Dr Dembski’s metric that were calculable as I have shown.

  494. 494
    kairosfocus says:

    JVL, that is now an outright lie, sustained in the teeth of repeated correction. Working out that – log2[prob ] –> information is NOT “reworked Dr Dembski’s metric.” You did not seem to know what neg log prob means, you obviously have no regard to background and even explanatory step by step notes on the info theory and now excerpt from a classic text on the subject; apparently you found it rhetorically convenient to sidestep why I would have in my library two copies of editions of Taub and Schilling, not to mention the Connor series and other works. That should have been a clue, but that was not convenient. Yes, I made simplifying substitutions then drew out generous bounds for info beyond a threshold metrics. Bounds that deliver a powerful general result. That is what is material, once we see that the structure of the WmAD expression gives an info beyond a threshold value. The bounds deliver a general result, given cosmos capability to search and implicit scattered nature of found and similar targets. That general result is powerful. One may thereafter wish to debate particular models and estimates by WmAD, Abel, Durston et al, but it is a very different thing when that is in the context of a powerful general result. It is the side stepping of that general result that is in the end telling. KF

  495. 495
    JVL says:

    Kairosfocus: Yes, I made simplifying substitutions

    True dat.

    then drew out generous bounds for info beyond a threshold metrics. Bounds that deliver a powerful general result. That is what is material, once we see that the structure of the WmAD expression gives an info beyond a threshold value. The bounds deliver a general result, given cosmos capability to search and implicit scattered nature of found and similar targets. That general result is powerful. One may thereafter wish to debate particular models and estimates by WmAD, Abel, Durston et al, but it is a very different thing when that is in the context of a powerful general result. It is the side stepping of that general result that is in the end telling.

Why didn’t you just declare bounds on the original terms? Why the change of notation? And why make it look like one of your new terms was still a function dependent on T?

    So, to be very, very clear, for the particular example of flipping a coin 400 times and getting 400 tails:

    What are the bounds for K2?

    What are the bounds for I(T)?

  496. 496
    kairosfocus says:

JVL, reducing a log operation to its result and making a simplifying substitution then finding a general bound is a reasonable procedure. One that gives a telling result on the origin of cells and body plans, including our own. Your onward demands are either given from the outset, for the sol system the threshold [given WmAD’s statements] is 500 bits with 398 on the clock, so an additional 100 or so, which is where you started the needless song and dance. As for the amount of information, as much as can be produced by all the intelligence in reality and expressed in the cosmos. As for why is I(T) the info value of the target, the answer is obvious, it is just that; take the neg log prob. And we could go on endlessly. KF

  497. 497
    JVL says:

    Kairosfocus: reducing a log operation to its result and making a simplifying substitution then finding a general bound is a reasonable procedure.

    As for why is I(T) the info value of the target, the answer is obvious, it is just that; take the neg log prob.

    “take the neg log prob”. Is that the way trendy math people talk?

    It’s not obvious. You can’t just keep waving your hands about and hope no one asks you for specifics.

You and others had this 500-bit threshold in mind. That was the standard. Then Dr Dembski thought: you know what, for some situations/patterns/sample spaces the threshold might be less than 500 bits (or more!) AND why not try and make the whole idea a bit more rigorous mathematically. So he had a think and came up with the metric in his 2005 monograph. If he wanted to just stick with the 500-bit threshold there would have been no point in revising and extending (his words) his previous work. And, in fact, using his metric, for the case of flipping coins and getting all tails it looks like the threshold is reached at 401 flips and not 500. According to my calculations, which no one has disputed.

I think you looked at his metric, tore it apart, interpreted each of the parts as numbers of bits (even though he explicitly stated that the threshold for his metric was being greater than 1), renamed parts, came up with I(T) (which you did not clearly define) and decided that it had to meet the same old criterion of being 500 bits or more. Dr Dembski would never have bothered creating that metric if all he wanted to do was to stick with the already existing 500-bit threshold. AND, as we’ve seen, for certain cases, the threshold is less than 500 bits. That, in fact, was part of his point: for each individual case/situation/pattern/sample space a tighter, more mathematical threshold might exist.

    But you just tore his new metric apart and tried to make it fit into the old threshold. You read a couple books on information theory and remembered a rule about logs and for years and years no one questioned what you did. They didn’t understand Dr Dembski’s mathematics so they figured you knew what you were talking about. But you can’t clearly define or evaluate the terms you came up with. What’s the point in creating them if you’re just going to say: it all has to meet the same 500-bit limit? You created them then brushed them under your blather of math.

    If you want to stick with the 500-bit threshold, fine. You do that. But don’t attempt to do some clever mathematics (badly) and then say Dr Dembski’s new method of calculating a threshold for individual cases gives the same results. The point is that it might not. That’s why he created it.

    AND, again, I got a different threshold for flipping a coin and getting tails trying to honestly use the metric Dr Dembski elucidated and explained. Do you agree that for that particular case and event the threshold is 401 flips? Yes or no?

    I’m not going to ask you about K2 and I(T) anymore because you don’t even know what they mean so you can’t tell what values they can take on for a particular situation.

  498. 498
    ET says:

    Alan Fox:

    You believe God created everything anyway.

    Alan the psychic blowhard, strikes again!

No, Alan. Only losers on an agenda say crap like that. As Dr Behe said many years ago:

    Intelligent design is a good explanation for a number of biochemical systems, but I should insert a word of caution. Intelligent design theory has to be seen in context: it does not try to explain everything. We live in a complex world where lots of different things can happen. When deciding how various rocks came to be shaped the way they are a geologist might consider a whole range of factors: rain, wind, the movement of glaciers, the activity of moss and lichens, volcanic action, nuclear explosions, asteroid impact, or the hand of a sculptor. The shape of one rock might have been determined primarily by one mechanism, the shape of another rock by another mechanism. Similarly, evolutionary biologists have recognized that a number of factors might have affected the development of life: common descent, natural selection, migration, population size, founder effects (effects that may be due to the limited number of organisms that begin a new species), genetic drift (spread of “neutral,” nonselective mutations), gene flow (the incorporation of genes into a population from a separate population), linkage (occurrence of two genes on the same chromosome), and much more. The fact that some biochemical systems were designed by an intelligent agent does not mean that any of the other factors are not operative, common, or important.

    ID does NOT claim that everything is intelligently designed. Lying about ID and erecting strawmen is all Alan is reduced to.

    Priceless…

  499.
    ET says:

    Why is Alan so afraid of people trying to quantify the concept of information, with respect to biology, as posited by Francis Crick?

    Why is Alan so afraid to tell us of this methodology used to determine that blind and mindless processes, such as natural selection and drift, produced all bacterial flagella?

    Why is Alan so afraid to develop his notion of the NICHE designs? Why does the evidence point to honing of existing designs, for example?

    And why is Alan so afraid to learn what Intelligent Design actually is and what it argues against?

  500.
    bill cole says:

    You have not the least justification for assuming that a particular function is unique and there is plenty of evidence (starting – but not ending – with Keefe and Szostak) that potential function is widespread in protein sequences.

    Hi Alan
    It depends on the application. Besides the sequence problem there is another problem: the waiting time to fixation. This blind and unguided dog does not hunt. Universal common descent is not going to make it as a hypothesis, and it is important that this realization comes sooner rather than later.
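
    For a sense of scale on that waiting time (a minimal sketch using Kimura’s classical neutral-theory figures, not a model of any particular protein or population):

    def neutral_fixation(N):
        # Kimura: a new neutral mutation in a diploid population of
        # effective size N fixes with probability 1/(2N), and when it
        # does fix it takes on the order of 4N generations to do so.
        return 1 / (2 * N), 4 * N

    p_fix, t_fix = neutral_fixation(10_000)
    print(p_fix, t_fix)  # 5e-05, 40000 generations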

  501.
    bill cole says:

    As I said, the beginning and end of a typical ID argument. ID adds nothing to scientific understanding.

    ID shows where science may have limits, which is a big saver of time and resources. ID can help stop faulty theories from surfacing and misleading science; universal common descent is an example. Alan, please do not get in the way of more sensible biological science just to satisfy your political ideology.

  502.
    Alan Fox says:

    Hi Bill,

    Are you on an R & R break from Peaceful Science?

  503.
    kairosfocus says:

    JVL, you continue to set up and knock over strawmen. If you were not familiar with the negative log probability metric, and how base 2 yields bits, then you were not, and by your refusal to acknowledge it still are not, in a position to make substantial remarks. KF
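
    PS, for readers following along, the metric in question is simply I = -log2(p), self-information in bits (a minimal illustration; the probabilities are generic, not tied to any example in this thread):

    import math

    def info_bits(p):
        # Shannon/Hartley self-information: I = -log2(p)
        return -math.log2(p)

    print(info_bits(0.5))       # 1.0 bit: one fair coin flip
    print(info_bits(0.5**500))  # 500.0 bits: the oft-cited threshold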

  504.
    kairosfocus says:

    F/N: Insofar as science seeks an accurate understanding of the world, identification and use of reliable signs of design is a significant contribution. And if instead science is reduced to propping up evolutionary materialistic scientism as an ideology, it is on its way to losing credibility. KF

  505.
    Alan Fox says:

    KF,
    It has been amply demonstrated that you are talking in non sequiturs. Given any set of raw data, without additional information, you are (with your math manipulation) utterly unable to distinguish random number sets from sets that hold information.

  506.
    kairosfocus says:

    AF, falsities as usual, sadly. First, you yet again set up a strawman despite repeated correction by several participants. Configuration-based functionality is observed; it is a readily identifiable empirical datum noted since Orgel and Wicken in the 70’s. The text of your objection is readily distinguished from gibberish that is plausibly random [exotic coding being even more complex], turar8lcrys75op764hi7r5o;gxcgk . . . . Repetitive simple patterns are non-random but low-information, asasasasas . . . for instance.

    You tried to mock the reference to an ABU 6500 C3 fishing reel as an example of an information-rich Wicken wiring diagram (itself a sign of fundamental refusal to acknowledge massively evident facts). That extends to, say, comparing the process-flow network of an oil refinery to the far more sophisticated and miniaturised case of cellular metabolism. You are also refusing to acknowledge that it is generally understood that there are no global decoder algorithms.

    You further refuse to recognise the per-aspect explanatory filter: for a given aspect of an entity, network or process, assess the candidate causes, necessity vs chance vs intelligently directed configuration. Low contingency on similar start points points to necessity; high contingency with low functional specificity points to chance; functionally specific, complex organisation and/or information points to design.

    And of course, further lurking is your refusal to acknowledge the information-carrying capacity evident in observed functionally specific information or organisation, where organisation is readily reducible to strings in a compact description language. Further to which, above, you were found trying to resist and deny a standard metric for information tied to probability, negative log probability, now essentially a century old and foundational to our information age.

    The problem is not with formulating or explaining this, or with correcting misunderstandings, in your case literally across years. Your problem, sadly, is ethical . . . insistent, patently ideologically motivated violation of key intellectual virtues: willful selective hyperskepticism compounded by disregard for truth, reason and fairness. You have made yourself a poster child example of why I have declared intellectual independence and refuse to hobble knowledge held on adequate and responsible warrant to your inveterate obfuscation, hyperskepticism and objectionism. KF

    PS, For those interested, kindly see https://uncommondescent.com/intelligent-design/lfp-55-defining-clarifying-intelligent-design-as-inference-as-theory-as-a-movement/
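
    For clarity, the per-aspect filter described above reduces to simple decision logic; a minimal sketch (the names and the 500-bit default are illustrative only, not from any published code):

    def explanatory_filter(contingency, specificity, info_bits, threshold=500):
        # Examine ONE aspect of an entity, network or process:
        # low contingency on similar start points -> necessity (law)
        if contingency == "low":
            return "necessity"
        # high contingency but low functional specificity -> chance
        if specificity == "low":
            return "chance"
        # functionally specific, complex organisation/information -> design
        return "design" if info_bits >= threshold else "chance"

    print(explanatory_filter("high", "high", 900))  # design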

  507.
    ET says:

    Alan, it has been amply demonstrated that you don’t know what you are yapping about. You erect strawmen as if it means something. People can distinguish random number sets from sets that hold information. We do it every day!

    In biology we actually OBSERVE functionality. So, clearly you are just a clueless loser and whining crybaby.
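
    One crude, everyday proxy for part of that distinction (a sketch; compressibility is a standard heuristic, not anyone’s official design-detection method):

    import os
    import zlib

    def ratio(data: bytes) -> float:
        # well below 1: redundancy/structure; near 1: incompressible, random-looking
        return len(zlib.compress(data, 9)) / len(data)

    print(ratio(b"as" * 5_000))       # tiny: repetitive, low information
    print(ratio(os.urandom(10_000)))  # ~1.0: plausibly random noise

    Of course, compressibility only separates repetitive from random-looking strings; functional specificity is a further question.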

  508.
    JVL says:

    Kairosfocus:

    Since you didn’t answer this before, I’ll say it again:

    When I honestly applied the metric Dr Dembski elucidated and explained, I got a different threshold for flipping a coin and getting all tails. Do you agree that for that particular case and event the specified complexity threshold is 401 flips? Yes or no?

  509.
    kairosfocus says:

    PPS, worse, the ideology, evolutionary materialistic scientism and/or its fellow travellers, is self-referentially incoherent and self-falsifying:

    [J. B. S. HALDANE, REFACTORED AS SKELETAL, AUGMENTED PROPOSITIONS:]

    “It seems to me immensely unlikely that mind is a mere by-product of matter. For

    if

    [p:] my mental processes are determined wholly by the motions of atoms in my brain

    [–> taking in DNA, epigenetics and matters of computer organisation, programming and dynamic-stochastic processes; notice, “my brain,” i.e. self-referential]
    ______________________________

    [ THEN]

    [q:] I have no reason to suppose that my beliefs are true.

    [–> indeed, blindly mechanical computation is not in itself a rational process; the only rationality is the canned rationality of the programmer, and survival-filtered lucky noise is not a credible programmer. Note the functionally specific, highly complex, organised, information-rich code and algorithms in D/RNA, i.e. language and goal-directed stepwise process . . . an observationally validated adequate source for such is _____ ?]

    [Corollary 1:] They may be sound chemically, but that does not make them sound logically.

    And hence

    [Corollary 2:] I have no reason