Scientists from Boise State University and elsewhere have tested 252 genera from most families of large-bodied moths. Their results show that ultrasound-producing moths are far more widespread than previously thought, adding three new sound-producing organs, eight new subfamilies and potentially thousands of species to the roster.

Bats pierce the shadows with ultrasonic pulses that enable them to construct an auditory map of their surroundings, which is bad news for moths, one of their favorite foods.
However, not all moths are defenseless prey. Some emit ultrasonic signals of their own that startle bats into breaking off pursuit.
Many moths that contain bitter toxins avoid capture altogether by producing distinct ultrasounds that alert bats to their foul taste. Others conceal themselves in a shroud of sonar-jamming static that makes them hard to find with bat echolocation.
While effective, these types of auditory defense mechanisms in moths are considered relatively rare, known only in tiger moths, hawk moths and a single species of geometrid moth.
“It’s not just tiger moths and hawk moths that are doing this,” said Dr. Akito Kawahara, a researcher at the Florida Museum of Natural History.
“There are tons of moths that create ultrasonic sounds, and we hardly know anything about them.”
In the same way that non-toxic butterflies mimic the colors and wing patterns of less savory species, moths that lack the benefit of built-in toxins can copy the pitch and timbre of genuinely unappetizing relatives.
These ultrasonic warning systems seem so useful for evading bats that they’ve evolved independently in moths on multiple separate occasions.
In each case, moths transformed a different part of their bodies into finely tuned organic instruments.
[I’ve put these quotes from the article in bold to highlight the juxtaposition of “evolved independently” and “finely tuned organic instruments.” Fine-tuning is, of course, often associated with intelligent design, rather than unguided natural processes.]
See the full article in Sci-News.
Well, you see, when insects were fish, they gave out ultrasonic… uh. OK. The above makes no sense from a ‘it just happened through blind, unguided chance’ point of view.
“There are tons of moths that create ultrasonic sounds, and we hardly know anything about them.”
Who are these people? Teenagers? The study of moths only began a few weeks ago?
“than previously thought” probably has a grammalogue in shorthand systems because it is such a common phrase, but why is it such a recurring line?
There may be several reasons: a code word to dupe an editor into publishing; justifiable modest self-praise for the authors’ expansion of knowledge; or possibly a necessary outcome of a bankrupt Darwinian paradigm forecasting that there once were moths with no defence mechanisms.
There should be a legitimate reason for this ubiquitous phrase.
It’s fascinating research. So, what did ID predict about ultrasound-emitting moths?
ID predicts that everything has a function whether we know about it or not (vestigial organs, “junk” DNA). That functionality can be detected even by an atheist mind, except atheists assign that function to random chance. 🙂
Random chance vs God compete in the atheist mind (which itself must be produced by the same magical random chance). Nobody has observed or tested how matter produces life/code/complex functional systems, yet it is declared “scientific” truth by atheists. “Random chance” means “we don’t know how.” 😆 Same thing with “random mutation” from Darwinism.
Atheists are the ones who believe that “we don’t know how, but it certainly was no God” is a scientific answer. You are free to believe whatever you want, but don’t say it’s science.
If the warning evolved, why are there moths? They should all have been eaten, unless they always had the warning.
Another score for design. That makes millions in favor of design and 0 in favor of Darwin. Design is witnessed everywhere. Darwinism never is.
When the evidence is overwhelming, logic dictates design.
belfast @2
“other than previously thought” should become a Darwinian trademark …
Basically all recent Darwinian papers start with “… than thought” …
I asked this before, but why are Darwinists considered so trustworthy??? These guys seem to be always wrong …
If Darwinists don’t like the “…than thought” slogan, they can choose one of these (all from Darwinian papers):
“…current concepts are reviewed…”
“…uprooting current thinking….”
“…latest findings contradict the current dogma….”
“… it challenges a long-held theory…”
“… it upends a common view…”
“… in contrast to the decades-long dogma …”
“… it needs a rethink … ”
“… the findings are surprising and unexpected …. ”
“… it shakes up the dogma … ”
“… earlier than thought…”
“… younger than thought….”
“… smarter than thought ….”
“… more complex than thought ….”
Why would the Designer create bats with sonar to find moths to eat and then moths with ultrasound “jammers” to defeat the bats’ sonar? Does he get some perverse pleasure watching the duel between the two species? Is he betting quatloos on who will win in each encounter?
Not two but thousands.
How would you design an ecology?
Besides the universe and Earth, ecologies are one of the wonders of design, as thousands of offsetting characteristics balance each other to provide stability. Quite a trick!!!
Seversky at 7,
Ah, I see you are using the scholarly Star Trek Argument.
It’s been falsified.
Well, it sounds like moths have been fitted with “shields” which they can raise whenever they pick up a bat coming in to attack.
Seversky at 10,
Still watching the original Star Trek? Me too.
Seversky/7
Someone needs to let God know that online gambling is illegal in most states……
CD at 12,
Then you should contact the proper authorities and let them know. Something tells me that God does not run online gambling.
Seversky @ 7:
“Why would the Designer create bats with sonar to find moths to eat and then moths with ultrasound “jammers” to defeat the bats’ sonar? Does he get some perverse pleasure watching the duel between the two species?”
You raise a legitimate question, but it’s a theological question (not a scientific question), and as such, it would have a theological answer. It sounds like you expect earth to be like heaven, if the God of the Bible is real. I think the Bible sufficiently answers why that is presently not the case. Of course, there’s far more to the story. Would you like to discuss it further?
Seversky@7
As Caspian points out, your question is really an unscientific one that is looking for moral or spiritual or theological, not scientific, answers. Not being a theological movement or system, ID doesn’t look for theological answers, just for more scientific evidence to add to the boatload that has already been accumulated, that there somehow was a designer or designers. A scientific and teleological quest.
Here’s a question for you. What did Darwinism predict? Well, maybe the gratuitous assumption that it simply must have been RM&NS that produced the observed distribution of several lines that developed ultrasonic deception/masking. This assumption is not even elaborated by vague “just so” stories, much less any detailed tracing of the supposed long process minute step by minute step, or an explanation of exactly how such a process could have built up probably irreducibly complex systems, especially in the time allowed by the fossil record (that’s the good old “waiting time” problem). The old saying “the Devil is in the details” applies here – just so stories are no good without the nitty gritty details. Of course not even a shred of evidence, and of course no mathematical analysis. Does this sound like real science?
Sev
“You raise a legitimate question.” I thought it was a rhetorical question. Silly me….
Somewhat off topic, but a new database is being developed containing predicted structures for nearly every known protein. Using this database of 200 million proteins, it should be possible to identify the proteins responsible for interfering with bat ultrasound location.
Why do some moths have them while others do not?
https://www.nature.com/articles/d41586-022-02083-2
Aside: could this be the answer to nearly every question in biology relevant to species differences?
It’s a step in that direction. What appears to be happening here is AI modelling algorithms predicting (apparently accurately) the three-dimensional structure of proteins from their amino-acid sequences. Whilst that is pretty mind-blowing, it is far from being able to construct functional proteins by choosing sequences. I can conceive of that process happening but I doubt it is going to happen soon. It will remain impossible to predict the functional properties of a novel protein sequence in advance for the foreseeable future, I predict.
Though ID proponents ought to have a go. The tools exist. Write your sequence. Predict its functional capabilities. Synthesize and confirm. ID becomes science!
Maybe this should have its own OP?
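[For readers who want to poke at the resource being discussed, here is a minimal sketch, assuming the public AlphaFold database still serves per-protein model files under its current file-naming scheme; the accession P69905 (human hemoglobin alpha) and the version suffix are illustrative assumptions, not figures from the article.]

# Minimal sketch: fetch one predicted structure from the AlphaFold database
# discussed above. The URL pattern, model version suffix and the accession
# are assumptions for illustration and may need updating.
import urllib.request

accession = "P69905"  # human hemoglobin alpha chain, purely as an example
url = f"https://alphafold.ebi.ac.uk/files/AF-{accession}-F1-model_v4.pdb"

with urllib.request.urlopen(url) as response:
    pdb_text = response.read().decode("utf-8")

# Count alpha-carbon ATOM records as a rough residue count for the model.
residues = sum(
    1 for line in pdb_text.splitlines()
    if line.startswith("ATOM") and line[12:16].strip() == "CA"
)
print(f"{accession}: ~{residues} residues in the predicted model")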
Since this is off topic, there will be more opportunities to discuss it. It raises a lot of questions though.
How did 200 million proteins arise when just one appearing is problematic? Why do some moths have the necessary proteins while others don’t? How did some moth species arise with the right proteins while others didn’t?
Would it destroy the concept of common descent or support it?
You fail to understand what ID is.
ID takes whatever science is being conducted and on certain occasions adds a new logical layer of analysis to the process. In other words, it enhances the scientific process by making it more logically rigorous when appropriate. For most of science this additional layer is not necessary, but for a few instances it is.
Again, off topic but maybe on an appropriate thread.
Alan Fox:
That doesn’t have anything to do with ID. And ID has already become science because, unlike evolution by blind and mindless processes, ID is supported by the evidence and can be tested.
seversky needs to learn how to read. Not all moths have this ability to jam the bats’ echolocation.
Why has nobody produced this evidence? Why is nobody testing whatever it is they can be testing?
What is the scientific, testable theory of Intelligent Design?
ID is not a theory.
I will get lots of pushback on this here. But it has no domain such as plate tectonics, oceanography, aerodynamics or even biology. So it is not a theory, but a set of conclusions about some isolated phenomena in the physical world often in unrelated areas.
ID uses some analytic techniques, mostly to do with statistics, that classify certain conclusions as either likely or unlikely. It is also historical in nature, applied to things that happened millions/billions of years ago. So there are no experiments to test its viability. People who ask for them are being disingenuous.
Given that, there are definitely predictions it can make, but on historical information and living remnants of these past events. See
https://uncommondescent.com/intelligent-design/do-nylon-eating-bacteria-show-that-new-functional-information-is-easy-to-evolve/#comment-631468
That’s why I said that the new data base mentioned above would forever settle the debate over Evolution.
Most of the criticisms of ID are bogus. It is not domain oriented, it is not present oriented, it cannot be proved or disproved with experiments etc. It is essentially logic usually in the form of statistics applied to historical data or the current remnants of past events.
Thanks for that, Jerry. My issue with ID proponents has always been the claim it was scientific. I have no issue with ID as philosophy or logic.
ID applies logic to certain scientific findings.
In that way it’s science. I refer to it as science +.
ID is science. Irreducible complexity. The discovery of greater and greater complexity. Evolution consists of a bunch of stories and blind, unguided chance. That’s not science, it’s storytelling.
That’s philosophy of science: not science.
Those are three unconnected assertions.
No!
Nearly every science project I have been involved with has four parts: background, which usually contains the proposition; methods, which contains the procedures for collection of data/facts; results, which include the actual data collected as well as an analysis of the data points; and conclusions, which include the implications of the analysis.
ID adds some statistical techniques not usually included in the results section of most studies and then makes conclusions based on the logical analysis of the data using the statistical techniques chosen.
That’s not philosophy of science. One may argue that philosophy of science led to the types of analysis done, but once chosen it becomes a straightforward scientific analysis.
OK. Have you an example of this ID statistical inclusion?
AF at 28,
That is quite wrong. Molecular switches control cellular activity and they are not just on and off. Some are volume limited. Example: A cell needs a precise amount of some chemical/liquid. The switch stays in the on position until it receives a signal to shut off. There is some evidence that malfunctioning switches can lead to disease. Evolution has no explanation for this or the limiter function or feedback required. But that just describes one type of switch. There are many more.
The probability that this can happen by chance is nil.
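[As an illustration only: a toy sketch of the kind of “volume-limited” switch described above, where production stays on until the tracked quantity reaches a set point and a feedback signal shuts it off. The parameter values are invented for illustration, not biological data.]

# Toy model of a feedback-limited switch: stay on until the set point is
# reached, then switch off and let the level decay. Invented numbers only.
def run_switch(set_point, production_rate, decay_rate, steps):
    level, on, history = 0.0, True, []
    for _ in range(steps):
        if on and level >= set_point:
            on = False                 # feedback signal: target amount reached
        if on:
            level += production_rate   # keep producing while switched on
        else:
            level = max(0.0, level - decay_rate)  # slow decay once off
        history.append(round(level, 2))
    return history

print(run_switch(set_point=10.0, production_rate=1.0, decay_rate=0.5, steps=15))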
Relatd
ID has no explanations. It starts and ends with “Evolution has no explanation for this…” It is not science. Let’s see what Jerry has to add.
AF, barefaced denial again, you have been around for long enough to know better. You are familiar with intelligently directed configuration and its characteristic signs. Reliable signs. Based on that, the inference to design is a best explanation of what exhibits say complex coded, algorithmic alphanumerical text, such as in a Hello World or in D/RNA in the cell. KF
AF at 32,
Not so. Intelligent Design shows that, based on probabilities, the odds of living things developing through evolution are beyond reasonable possibility. Ignoring that means ignoring the evidence.
Imagine yourself aiming a driverless car down a road. How long before it crashes or careens into a river or ravine? Evolution, so-called, would have to proceed flawlessly down the road. But we’re told it is not goal oriented.
Evolution has no credible explanation for living cells much less the human body coming into being.
As I said, the beginning and end of a typical ID argument. ID adds nothing to scientific understanding. (Sorry, Jerry)
AF at 35,
Evolution adds nothing to scientific understanding. Only present-day experiments on living things can discover things like function, not stories based on speculation. Which are just stories, not facts.
Relatd: Only present-day experiments on living things can discover things like function, not stories based on speculation. Which are just stories, not facts.
What kind of present day experiments can support intelligent design? NOT, what kind of experiments can disprove unguided evolution; what kind of experiments can be done which support intelligent design?
Why Do We Invoke Darwin?
https://www.discovery.org/a/2816/
@Relatd
Use your energy for more useful activities . 😉
LCD at 39,
I am.
@ Relatd
That 2005 article from the late Phil Skell reinforces my point.
As JVL points out, this leads to the question, what experiments support ID?
https://intelligentdesign.org/articles/testable/
https://rationalwiki.org/wiki/The_Positive_Case_for_Design
AF, you full well know that intelligently directed configuration routinely produces FSCO/I beyond 750 +/- bits, and you yourself are an example. You know full well that blind chance and/or mechanical necessity has never been demonstrated to do the same. You know the needle in haystack search challenge. You know that Venter et al are already doing engineering work with cells. You know that the cell contains complex, coded, algorithmic information in D/RNA, associated execution machinery, so too uses language and goal directed processes. You know what coding requires, coders. You therefore know what is the reliable source of such FSCO/I, but it does not suit your rhetorical agenda to acknowledge it. We do not have any obligation to allow your groundless selective hyperskepticism and barefaced denialism to control what we know and can readily infer per reliable sign. That has been a well founded inferential procedure of record since Hippocrates of Kos. KF
750 +/- 250 bits.
Blind-watchmaker Darwinism is only a good explanation for outer-branch-level diversification on the cladistic tree.
Other than that, it’s as useless as tits on a boar hog.
PS, the commentary at RW simply shows barefaced denialism and hyperskepticism, sort of like denying stagflation and recession by playing word games. Intelligently directed configuration as a cause is real, it is the only observed cause of functionally specific organisation and/or associated information, and the blind needle in haystack search challenge in configuration spaces for 500 – 1,000+ bits shows why. If they or you had a good counter example where blind chance and/or mechanical necessity were actually observed causes of FSCO/I it would be trumpeted. If you have one kindly give it______ . Even your objections above are cases in point of FSCO/I by design. The design inference on FSCO/I is a causal inference on reliable sign.
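[A quick back-of-envelope illustration of the “500 – 1,000+ bit” threshold referred to in the comments above. The comparison figures (roughly 10^80 atoms, 10^17 seconds, 10^45 state changes per second) are the round numbers commonly cited in these discussions, not measurements, so treat this as a sketch of the arithmetic only.]

# Sketch of the configuration-space arithmetic behind the 500-1,000 bit
# threshold. All comparison figures are rough, commonly cited round numbers.
config_space_500 = 2 ** 500            # distinct states of 500 bits
config_space_1000 = 2 ** 1000

atoms = 10 ** 80                        # rough atom count, observable universe
seconds = 10 ** 17                      # rough age of the universe in seconds
ops_per_second = 10 ** 45               # generous bound on state changes per atom

max_trials = atoms * seconds * ops_per_second     # ~1e142 total "searches"

print(f"2^500   ~ {config_space_500:.2e}")
print(f"2^1000  ~ {config_space_1000:.2e}")
print(f"trials  ~ {max_trials:.2e}")
print(f"fraction of the 500-bit space searchable ~ {max_trials / config_space_500:.2e}")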
Alan Fox:
You don’t understand science. You definitely cannot say how evolution by means of blind and mindless processes adds to scientific understanding. When it comes to science you are a liar, bluffer and cowardly equivocator.
Yes.
Behe’s Edge of Evolution includes extensive discussion of what is feasible and what is not feasible probability wise in terms of mutations arising and leading to lasting change in genomes. Now this is mainly about genetics not Evolution.
However, the small chance anything will happen genetically to permanently change the genome (it does happen rarely) means the chances of the species being changed are incredibly low. The Grants argued it would take 32 million years to get a new finch species. Not very promising for significant Evolution of anything. And that does not include any new gene sequences for new proteins being developed.
Similarly, Doug Axe does the same for proteins. The data base referred to above refers to 200 million proteins. This represents an extremely small subset of amino acid combinations. How were the coding sequences for these proteins selected?
Certainly not randomly and then there is the issue of how the various coding sequences were thrown together by chance. We are talking low probability numbers that two gene sequences which are incredibly unlikely to begin with would ever encounter each other let alone combine in the same organism.
The document you referenced said phylogenetic analyses establish their (protein) evolution over time. Where’s the evidence for the origin of these incredibly unlikely gene sequences in the 200 million database? How did they happen?
This document you referenced also says ID is young-Earth creationism, which you obviously know is incorrect. It also implies that hox genes are responsible for the body plans of species. What evidence is there for that?
I have not read Behe’s more recent book, “Darwin Devolves,” but after reading the reviews maybe I will, especially focusing on the evidence of improbability of random variations creating anything.
Also, ID is more about the fine tuning of the universe and OOL than about Evolution.
JVL:
Still clueless. Again, for the learning impaired: ALL design inferences must first eliminate chance and necessity, ie nature, operating freely. That is mandated by Newton, Occam and parsimony. Only the scientifically illiterate can’t grasp that. Enter JVL and all evos.
That said, any experiment which elucidates IC, CSI or SC supports ID. As Dr. Behe said:
Alan Fox:
Pure stupidity or worse- willful ignorance. ID is not anti-evolution. So, Alan lies when he claims “It starts and ends with “Evolution has no explanation for this””- You are pathetic, Alan.
The design inference is based on our KNOWLEDGE of cause-and-effect relationships. It has the same explanatory power as archaeology and forensic science. Determining something was the result of intelligent design tells us quite a bit. For one it tells us blind and mindless processes didn’t do it. Next it is a clue of purpose. That an intelligent agency was there and did something. It directs our investigation of the phenomena.
Obviously, Alan has never conducted an investigation in his life.
For Alan Fox to choke on:
1. High information content (or specified complexity) and irreducible complexity constitute strong indicators or hallmarks of (past) intelligent design.
2. Biological systems have a high information content (or specified complexity) and utilize subsystems that manifest irreducible complexity.
3. Naturalistic mechanisms or undirected causes do not suffice to explain the origin of information (specified complexity) or irreducible complexity.
4. Therefore, intelligent design constitutes the best explanation for the origin of information and irreducible complexity in biological systems.
All bacterial flagella have been shown to be irreducibly complex.
There isn’t any evidence that nature produced any of them.
There isn’t even any way to test the claim that nature produced any of them.
Science says we can dismiss that claim.
It must suck to be Alan and JVL. Science has given them all of the power to refute ID and yet all they can do is lie, bluff, misrepresent and equivocate! ID exists because of their failure to support their own position’s asinine claims.
More evidence for ID:
The genetic code involves a coded information processing system in which mRNA codons REPRESENT amino acids. There isn’t any evidence that nature can produce coded information processing systems. There isn’t even any way to test the claim that nature can do it. Again, science says that we can dismiss such claims.
However, there is ONE and ONLY one known cause for producing coded information processing systems and that is via intelligent agency volition. So, using our KNOWLEDGE of cause-and-effect relationships, in accordance with Newton’s 4 rules of scientific reasoning, we infer the genetic code is intelligently designed. Science 101.
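[Purely to illustrate the codon-to-amino-acid mapping being described, here is a toy sketch using a small subset of the standard genetic code table; the full table has 64 codons. It shows only the mapping itself, not an argument for either side.]

# Toy illustration of mRNA codons mapping to amino acids. Only a handful of
# the 64 standard codon assignments are included, for brevity.
CODON_TABLE = {
    "AUG": "Met",                       # methionine, also the usual start codon
    "UUU": "Phe", "UUC": "Phe",         # phenylalanine
    "GGU": "Gly", "GGC": "Gly",         # glycine
    "UAA": "*", "UAG": "*", "UGA": "*", # the three stop codons
}

def translate(mrna):
    """Read an mRNA string three bases at a time until a stop codon appears."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE.get(mrna[i:i + 3], "?")  # "?" = codon not in this toy table
        if aa == "*":
            break
        peptide.append(aa)
    return peptide

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']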
Let Alan and JVL flail away and demonstrate that they don’t understand science.
ET, worse D/RNA is string data structure, code bearing technology. Code which in key part has AA chain assembly instructions towards proteins, as algorithms with start, extend [ways 1 to 20], stop. Language, goal directed process, processing logic and tech, deep knowledge of polymer science, and more, especially when we reckon with cosmological fine tuning that sets all of that up. None of this is new, none of it would be controversial absent ideological impositions. We can no longer allow ourselves to be diffident in the face of willful ignorance — at best. KF
I know no such thing, KF. Though I do know your statement is incoherent. You have no way of calculating the “quantity” you refer to as FSCO/I (which is unique to you – nobody else takes it or you seriously), there’s no consensus among ID proponents as to what information is quantitatively, let alone any way to calculate it for any object or system. You’re fond of making challenges, especially ones for which you are unable to supply any answer. Here’s my challenge. Calculate the FSCO/i of something, anything, and show your work.
Here’s a comment I made 13 years ago
That is so obviously true. So how can ID be a science stopper or deny anything that so-called true science has discovered? It doesn’t.
The best example of this is what has been proposed to solve the Evolution issue once and for all. See #23 above, which is conveniently avoided by any anti-ID person. You would think they would be all over it to show just how their ideas have played out over history. But no.
AF, oh yes you do know, starting with the FSCO/I in your objection. You are in ideological denial and that is the root of the incoherence you project to me. Later. KF
Jerry, ID is interdisciplinary, as is say environmental science, however it has key themes and a frame that are observation based and make solidly empirically warranted inferences, arguments and conclusions. Later, still in transition though back at home after putting four in the ground. The fifth was grounded some time ago. KF
Jerry:
There are three basic processes occurring under evolutionary theory: adaptation (change within a population induced by mutations and selection in the niche environment), speciation (separation from one population, often but not always geographical, where evolutionary change continues in two separate populations in two separate niches), extinction (where a population dies out often due to over-rapid niche change or niche destruction). That said, I’m not well-versed in the details of Peter and Rosemary Grant’s long-running studies on Galapagos finches. I wonder if you are. Where did you get your 32 million years from? Looking at papers on Galapagos ground finches, I see one abstract mentions:
here.
Jerry:
Now I am quite familiar with Axe and the criticisms of his protein-folding approach. It is considered in the mainstream to be, at the politest, flawed.
From Rosemary Grant’s mouth with Peter at her side.
The Grants were invited to give a presentation at Stanford on the 200th anniversary of Darwin’s birth. They presented on their work with finches. As part of this presentation, discussion of just what was a species took place. During this, this statement was made.
To be polite, how is it flawed?
Do you have any examples of gene sequences arising that produce proteins? Given that there are about 200 million, one would think a few examples would be available.
Jerry:
Years ago, I don’t know if you remember Telic Thoughts and Mike Gene, I was having a discussion there about the probability of protein sequences. The mistake that is so often made here and elsewhere (Axe makes it too) is to conflate amino-acid sequences in proteins with functionality in proteins. The theoretical number of proteins of any particular number of aa’s is the number of amino-acids found in proteins (20 for most species) raised to the power of the number of aa’s in the protein sequence. The number rapidly becomes enormous as sequence length increases. The ID argument often is how rare any particular sequence is. But it assumes only the sequence in question is functional, one needle in a haystack, when in fact we’ve no idea how much functionality lurks in unsynthesized sequences.
The genetic code has no nonsense codons. Any DNA sequence will translate into a protein sequence. As there are three stop codons out of 64, random reading frames will, on average, run for only about 20 codons (60-odd nucleotides) before hitting a stop.
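[A minimal sketch of the arithmetic behind the two points above, assuming uniform random sequences; the 150-residue length is an arbitrary illustrative choice, not a figure from the discussion.]

# (a) Size of amino-acid sequence space for a 150-residue protein.
# (b) Mean open-reading-frame length before one of the three stop codons
#     appears by chance in a random reading frame.
sequence_space_150 = 20 ** 150
print(f"20^150 ~ {float(sequence_space_150):.1e} possible 150-residue sequences")

p_stop = 3 / 64                      # 3 stop codons out of 64 possible codons
mean_codons_to_stop = 1 / p_stop     # geometric expectation, ~21 codons
print(f"mean ORF length ~ {mean_codons_to_stop:.0f} codons "
      f"(~{3 * mean_codons_to_stop:.0f} nucleotides)")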
Jerry:
Arthur Hunt at Panda’s Thumb
Mikkel Rasmussen at The Skeptical Zone
Jerry:
Josh Swamidass at Panda’s Thumb
Jerry:
I’ve tried searching for the quote and am unable to find it. Do you have a link? It seems at complete odds with everything else that turns up when I google. For instance:
https://idw-online.de/en/news685650
All this published by Alan Fox confirms the general ID hypothesis.
It seems more nits than substance. For example, I asked for the origin of proteins and get that any sequence will produce proteins. An admission that there are no examples.
The interesting thing is the complete absence of a defense for any naturalized mechanism for Evolution. Nothing has changed over the years, as the long comment below, made 13 years ago, points out.
A long comment from 13 years ago. Read if you want.
There are two choices for any phenomenon, both of them rather broad. One is that certain things happened naturally, the mechanism to be discovered. The second is that these things were produced through intelligent input. And by the way, a lot of what may be considered natural could be the result of a designed process allowed to proceed naturally. For some simple examples, pearl farmers seed their shellfish with an irritant and then let nature do the rest, and beavers dam the course of a river and the ensuing wetlands provide an enhanced habitat for the beavers and other animals and plants.
But in general it is mainly one or the other but what appears to be natural could also be great design. There are no other choices unless you want to proffer some. As I said these are rather broad categories. It is almost impossible to eliminate the intelligent input option. It is not a theory such as gravity, the Standard Model, the Laws of Thermodynamics, Kinetic theory of Gases, Information theory or Plate Tectonics etc yet people keep on asking for some hypotheses and predictions. ID is simply that intelligence is an input at some time in the history of being, the universe, the world, life etc. Some hypothesize that it was in the design of the universe itself and the initial conditions and subsequent boundary conditions of the Big Bang were such fantastic design that it enables natural processes to produce everything we see including this very rare planet, the origin of life and the evolutionary progression through subsequent natural consequences. Some hypothesize that the input was ongoing and there were various events that reflect an intelligent input. This input could have been minimal and then natural processes were allowed to do the rest. To disprove an intelligent input, one has to show natural processes at every turn. It is a difficult job. All ID has to do is show that naturalistic processes fail at some point and that an intelligent input is more reasonable. They only need one point.
That is the nature of the discussion. It seems unfair to some who whine that ID is unfalsifiable. But that is it. Because ID is more of a logic process and not a specific scientific theory it does not have the usual domain of interest such as plate tectonics, cosmology or even evolution. After all an intelligence could create life or modify a genome to guide life maybe only once and that is not the making of some theory. To create life or modify it is not too hard to understand as it appears to be within human capability in the near future.
Thus, the possibility of an intelligence creating and modifying life is not an issue. It is whether it ever happened or not that is at issue. If we had a video camera at the time of an intelligent input, we could settle it once and for all but such an event does not exist and we have had people here and at other places demanding such evidence. Short of this something else has to be done.
We have observed a lot of phenomena throughout history that could possibly be explained by an intelligent input, and the challenge for science is to verify if there may be a natural cause for each. For most of history it was thought that God was personally responsible for most, much, or a lot of these phenomena. From Zeus throwing lightning bolts in anger and the various gods determining the fates of various personalities such as Odysseus, to Newton’s hypothesis that God sent comets to stabilize the orbits of the planets. Newton’s laws and then Laplace’s theory of the heavens seemed to show that all was under control of natural laws. So it was assumed from then on by many that everything must be under control of natural laws. We have no need for Zeus and lightning bolts and for comets stabilizing orbits.
And so we get the conventional wisdom that everything is due to natural laws and chance and it is only a matter of time before science gets around to explaining it. And science has a good track record. But what is glaringly obvious is that science has some spectacular failures in one particular area. So while science continues to chalk up win after win, there seems to be one opponent which gets the better of it every time. Consequently, one has to reevaluate the conventional wisdom and maybe consider an alternative to natural processes. ID only exists because science loses most of the time to the heavyweights in this one area, namely life. It does wonderfully well in some important areas of life, specifically medicine, food production and genetics, but it is badly outperformed by the problems in the areas of macro evolution and origin of life. Why this failure here? Is there an alternative to naturalistic processes in these two domains? Is intelligence an explanation?
Hence, every time science fails in these areas it adds credence to the alternative. At this moment in the realm of logic and reason both alternatives exist. Which is more feasible? Every time we see the failure of one alternative it raises the possibility of the other. After all it is possible. We just cannot identify the intelligence. So each failure for a natural pathway raises the probability of the alternative, namely an intelligent input.
And the rationale for an intelligent input has been bolstered by the knowledge that what underlies life is different from every other area of nature, specifically information. Information is not present in any other area of nature except life. Now this game of supporting the ID premise is played two ways, and both use the tools of science, logic and reason. One shows time after time that certain naturalistic processes have failed. The second way is to show why naturalistic processes have failed. Both use science and point to the inadequacy of natural processes. There is a third way, which one group says must be present before an intelligent input can be accepted, and that is evidence for the specific event where there was an input of intelligence.
The first way above is to challenge each natural explanation for the phenomenon as flawed and show why the explanation could not have possibly happened. This is the frequent challenge to Darwinian macro evolution we have seen, not only from the ID people but also from the anti-ID people as well as the creationists. It is represented here on this site and in the academic and popular literature by the lack of any coherent demonstration that Darwinian macro evolution ever took place. Now macro evolution did take place and no one is denying that here, but there is no evidence for it happening by Darwinian processes or any other known natural processes. All the processes of science are brought to bear in this examination, so to declare it non-scientific is ludicrous.
The second way is to use observations of the world and then to complement these observations with some form of analysis, mainly probability, and some understanding of natural processes to illustrate why the failure of naturalistic processes is not only reasonable but to be expected. To this end a couple of different approaches are in their infancy but have shown some reasonable results. One is being developed by Behe and is showing that there do not exist the probabilistic resources to create the changes needed in macro evolution. Behe’s two books, Darwin’s Black Box and Edge of Evolution, are aimed at this objective. Namely, that life is extremely complicated and naturalistic processes seem unable to climb the hurdles necessary to produce macro evolution.
Another is being done by Dembski and others, trying to show something similar using mathematical and probabilistic approaches to show that reaching the complexity necessary for life is beyond the probabilistic resources of the universe. So in lots of ways the two approaches are similar, but using different methodologies to attack the same problem.
To argue that this is not science is also ludicrous. One may argue that the techniques used by these scientists are flawed or that the interpretation of the results is invalid, but to say that they are not using science is absurd.
Now the naturalists respond with their challenges. The best challenge would always be to show that the phenomena probably arose by naturalistic means but this is rarely done because there seems to be little evidence supporting any particular mechanism. The main challenge is to use something similar to what I described above as the first approach, namely that the intelligent input scenario is flawed just as ID people point out that each naturalistic input is flawed. The creator could not be omniscient, or no one would design such an imperfect system or make these childish mistakes etc. They also point to science’s track record in other areas and that the work on the problem is just getting started etc.
So we have two broad approaches and any evidence in one camp reduces the likelihood of the other. It is one that won’t be solved any time soon but to assume your side is right a priori is ridiculous. ID is the more reasonable side as far as I can see. They are willing to accept naturalistic explanations when it is demonstrated but are not willing to accept an arbitrary demand of absolute dismissiveness for intelligent inputs that is imposed by the naturalists. One side is flexible and reasonable while the other side is intransigent and unmoving.
Jerry, regarding your question, I answered it. All DNA sequences synthesized in vitro will produce protein sequences. If you meant to ask a different question, you should make it clear.
I will search for it.
It’s in a YouTube video from all the presentations made at that conference. The interesting thing was the complete lack of response or should I say cluelessness of the panel to this statement.
Well, I did. It’s hard to see it as other than wishful thinking. Where explanations fail, the honest response is to admit ignorance. The idea that “Intelligent Designers” did something undetectable at some unspecified moment is not an explanation for anything.
Most ironical statement of the year?
Tell me how it wasn’t an honest comment.
This response to my long comment sounds like someone in denial about the logic and evidence available about the Evolution debate.
Peter and Rosemary Grant at Stanford. Start at 1:10 for comment about 32 million years.
https://www.youtube.com/watch?v=IMcVY__T3Ho
Aside: I believe the claim that a polite but honest debate was desired was only a pretense and was actually an attempt to be one-sided, which has obviously failed.
The remark as I hear it is “finch radiation started two to three million years ago.”
BTW, thanks to Jerry for providing the link! Very informative and great to hear things from the horse’s mouth.
“Thirty-two” or “three to two”, Jerry?
Listen to Peter Grant at 18.00. He definitely says “two to three million years”
Watch the video starting where I specified. It will appear about 1 1/2 minutes later. They also repeat it near the end.
I listed a time somewhat before so as to allow one to get used to them talking about speciation.
Yes, Carol Boggs also says “two to three million”.
Come on Jerry. You misheard. It’s no big deal.
Why are you lying?
It’s clear as anything and also in the transcript. This is amusing.
From transcript at 1:11
I can’t believe it! Go to 18.00 in your video. Listen to Peter Grant. Anyone can do this for themselves.
Link?
Help, fellow UD commenters!!!
Can someone settle a dispute between Jerry and me and check what time period Peter Grant talks about for the radiation of the original invading species into the fourteen current Galápagos finch species?
OK, we are talking about two different things. Rosemary Grant indeed says that the average time for genetic incompatibility in birds is thirty-two million years. Genetic incompatibility is not speciation time. It is the time limit for introgression.
Alan Fox:
And yet no one can refute it nor show it to be flawed. Just saying it’s flawed is cowardice. And it is a given that Alan doesn’t understand Axe’s argument.
Rosemary Grant refers to Prager and Wilson.
Here is the citation:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC432270/
Alan, you have been given links to papers that measure functional sequence complexity. FSC is the same as CSI. It is the same as KF’s FSCO/I. So, clearly you are willfully ignorant or dishonest.
I linked to Art Hunt’s critique. I also linked to a critique by Mikkel Rasmussen. There are more criticisms.
LoL! Alan links to Swamidass! Swamidass was completely owned by Behe in their internet debate.
Art Hunt:
And yet you and yours don’t have any evidence that blind and mindless processes produced any proteins! Why that isn’t a concern of yours proves that you don’t care about science.
Yet neither you (unsurprising) nor anyone can quantify Complex Specified Information. And functional sequence complexity is, let’s say, a niche product, though Kirk Durston is no rogue.
Alan Fox:
Both of those two are biased and cannot demonstrate that blind and mindless processes produced any protein. They cannot demonstrate that blind and mindless processes can produce a new functional protein fold starting with a given protein. What they are doing is whining.
Lenski’s LTEE has not produced any new proteins.
Alan Fox:
“Yet neither you (unsurprising) nor anyone can quantify Complex Specified Information.”
And yet we have!
Nope. FSC is an observation. And guess what? Neither you nor anyone else can demonstrate that nature can produce it!
Indeed. That is my point. It can’t be demonstrated by anyone. It is a human imaginary concept.
Well, pull the rabbit out of the hat then. Let’s see your quantification of complex specified information.
Jerry, just to point out that your statement “The Grants argued it would take 32 million years to get a new finch species.” is incorrect. The evidence they have collected shows that a single species invaded the Galápagos archipelago between two and three million years ago and radiated into fourteen Galápagos finch species extant today despite (according to Prager and Wilson) the average time for species incompatibility being 32 million years in birds.
AF at 96,
You are being obstinate in the face of the evidence that blind, unguided chance has no chance to produce life as it exists today. This is a reality and perception problem on your part.
Yet all these extant species can produce genetically sound offspring with each other.
They have another 29 million years to go. So are they really distinct species or just one big happy family?
Find a nit and make believe it is important in order to dismiss ID as meaningful. That is what nearly all criticisms of ID are about.
Aside: What does the term “origin of species” mean?
FSC is an observation. And guess what? Neither you nor anyone else can demonstrate that nature can produce it!
Alan Fox:
That doesn’t follow. If it is observed, then it has been demonstrated. Stonehenge is an observation. And neither you nor anyone else can demonstrate that nature can produce it. Stonehenge is not a human imaginary concept.
Alan Fox:
I thought you were familiar with Durston’s papers on the subject. Are you familiar with Shannon’s work from 1948?
With evolution by means of intelligent design, ie “built-in responses to environmental cues”, we would expect rapid changes to finches to match the new environments.
ET (attn AF), we are dealing with willful obtuseness and selective hyperskepticism. The refusal to accept that info carrying strings capable of holding functional information whether in text on a screen or D/RNA are an observable reality is itself a test, failed. We have actually seen D/RNA being repurposed as experimental archival info store. The further inability to recognise that functional info content of systems with configuration based function is just as valid is fail 2. Autocad etc show that such can be reduced to a compact description language so discussion on strings is without loss of generality. WLOG. Next, cumulative string length, often in bits is a basic info capacity metric, utterly common in a digital age. Durston et al adjusted for various things that somehow reduce effective functional info relative to raw capacity. All of this is on massive record accessible to the responsible and the result is not in doubt. For, the info load in the cell is so far beyond any reasonable threshold that it is clear that the use of coded language to effect algorithms for protein synthesis, in particular AA chain formation as a key stage, is decisive. I will not allow willful ignorance and hyperskepticism or linked rhetorical stunts to make me apologetic about what we may readily know. Here, that the root of the Darwin tree of life shows strong signs of design, leading to likelihood of similar design pervading the whole. KF
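[For readers wondering what a bits-based functional-information estimate even looks like in practice, here is a hedged sketch of a Durston-style calculation: ground-state capacity per amino-acid site (log2 20 bits) minus the Shannon entropy observed at that site across an alignment of functional variants, summed over sites. The tiny alignment below is invented purely for illustration; it is not Durston’s published procedure or data.]

# Hedged sketch of a Durston-style functional sequence complexity estimate.
# The alignment is invented; real analyses use large alignments of working
# protein variants.
import math
from collections import Counter

ALPHABET_BITS = math.log2(20)        # max capacity per amino-acid site, in bits

alignment = [                         # hypothetical aligned functional variants
    "MKVLA",
    "MKVIA",
    "MRVLA",
    "MKVLG",
]

def site_entropy(column):
    """Shannon entropy (bits) of the residues observed at one aligned site."""
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Functional information = capacity minus observed entropy, summed per site.
fsc_bits = sum(ALPHABET_BITS - site_entropy(col) for col in zip(*alignment))
print(f"estimated functional information ~ {fsc_bits:.1f} bits over {len(alignment[0])} sites")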
You see what you want to see, KF.
@105
“You see what you want to see, KF.“
Pretty sure KF was saying the same about you in 104
“ET (attn AF), we are dealing with willful obtuseness and selective hyperskepticism.“
At which point the argument is a wash. It’s a matter of perspective whether the glass is half empty or half full.
Personally I think the glass is half full and someone put the water in the glass
No doubt. It’s a universal human failing.
AS78 (attn AF): Actually, no. I am not in denial of the reality of info carrying capability of s-t-r-i-n-g-s, nor of how that can be quantified then adjusted for real world code redundancies etc. I am not the one side stepping how D/RNA has actually been used to store archival general digital information. I am not in denial that the genetic code with its what 24 or so dialects, is a code so a manifestation of language. I am not studiously ignoring the start, extend, stop algorithms that code for AA sequences towards protein synthesis. That is, goal-directed process, a sign of purpose. So, I can confidently assign the latest stunt by AF to turnabout projection. KF
Of course you can, KF. Your confidence knows no bounds.
Alan Fox,
What do you hope to gain from your participation here?
–Paxx
Enlightenment, Paxx.
No, a drug addict doesn’t have free will and selectivity anymore. You are talking about a previous stage, when the person who is not yet addicted has free will and chooses to take the drug knowing the consequences. Now they don’t have free will anymore; their hyperskepticism is compulsory, like the need for the drug. You can’t convince a drug addict with logic because their logic has been modified into a different kind of animal. They can’t be helped with logic and reason.
Relevant to OP
https://i.redd.it/s4bpyx48p6j41.jpg
AF, more rhetorical stunts. I have warranted confidence in credible, reliable truth, i.e. knowledge in the day to day sense. KF
Paxx: “Alan Fox, What do you hope to gain from your participation here?”
Alan Fox: “Enlightenment”
If learning some important spiritual truth, i.e. ‘enlightenment’, is truly your ultimate goal for being here, ought you not (as a Darwinist who believes in reductive materialism) first reject your reductive materialism and adopt some worldview that is capable of grounding spiritual truth in the first place?
Verse and Quote:
@108 @KF
It was a turnabout projection, that’s what I was pointing out. I was trying not to be abrasive. But when AF commented at 105 it reminded me of something said to me years ago in a debate class. Often we are put on the defensive for our point of view (though our POVs are not unsubstantiated), and accusations like “we see what we want to see in the data” are often levied against us. That’s something Richard Dawkins has accused many religious people of doing when they look at the world, while he himself, believing he has an enlightened outlook on reality, sees a grim, cruel reality. But there are two problems with this.
One is the blatant arrogance of believing that your way of seeing things is the only, or the correct, way of seeing things. Which can often be wrong.
Two is the fact that he (and many others) is doing literally the same thing he accuses the religious of doing.
And this happens far more often in science than they’re willing to admit.
Paul Zak is a really good example of seeing what you want to see about oxytocin, and the same goes for a lot of free-will neuroscience researchers. Another is what happened with the BICEP2 results years ago, claiming they confirmed chaotic inflation and, in turn, that they confirmed the multiverse.
They saw what they wanted to see in the data, and sadly it took very long periods of time to correct the above-mentioned examples.
Just pointing out that aspect of human nature. I tend to write comments that reflect what I think and believe. What would be the point of doing otherwise? I do struggle to accept that other posters are doing the same when the content of a comment is alien to my own experience. But heigh-ho, life goes on…
AF at 117,
As a professional researcher, I can’t insert what I think and believe into the data. Whatever I’m researching, if I find relevant documents and credible sources, I go by that, not my opinion. For example: Wikipedia can be used as a starting point, but it is not a reliable source. Once the data is in hand, I have to cross-reference everything against other credible sources.
Sadly, here people mix in personal thoughts with what they only think is credible information. Examples: I heard it from my political party, or my buddy Bob, who would never lie to me, or worse, from some anonymous guy on the internet who provided zero credible references to back up what he wrote.
The internet is a black room with no sound. The only way for people to communicate is by keyboard. I think people should be careful with their opinions. We should back up any statements with credible sources. This is not the neighborhood pub.
AF, you are side stepping warrant and objectivity. Also, the substance on the table. Relativism, subjectivism, emotivism etc fail, being self referentially incoherent. They suggest they have a degree of objectivity they could not have, were they true. Yes, we may err, that is why we have duties to truth, right reason, warrant and wider prudence. Which, are on the table regarding FSCO/i for the world of life and regarding fine tuning factors. KF
Well, I try to see the world as it is and base my remarks on facts. Warrant? I’m a pragmatist. Rules that work best flow from consensus and fairness, not unquestioned authority.
There is no absolute objective warrant. People insist, agree, argue, fight, endure whatever rules emerge in human societies. I’m sure we can all think of better ways for our community to function, but there’d be little consensus.
You do err, frequently, and at length. It is fortunate you have no power to enforce your ideas to any significant extent on others.
A fact for you to consider. You are unique in claiming that “FSCO/I” is a genuine, quantifiable concept yet have failed utterly to justify that claim.
I agree.
:))
LCD (attn AF): is this objectively true? [Do you see how it refutes itself, showing that AF’s snide assertion about “Rules that work best flow from consensus and fairness, not unquestioned authority” is a gross strawman fallacy?] KF
PS, for starters, Epictetus on first principles of logic:
Yes, that is how far wrong we have gone.
Alan Fox:
How would you know? You ignore everything that contradicts your views.
Whether you are referring to your own statements or mine, it settles nothing to label them subjective or objective.
Oh, the irony! Oh, the projection! 🙂
Stuff it, Alan. I can easily support my claim, whereas you couldn’t support yours.
I will gladly ante up $10,000 to debate Alan Fox on science- evolution by means of blind and mindless processes vs ID. I know that Alan will never accept. And I know why.
AF, I will take onward points in bites. FSCO/I is instantly recognisable from cases such as text in this thread and information rich functional organisation. In fact my abbreviation traces to Wicken and Orgel, in the 70’s, it is antecedent to modern design theory. As for your continued irresponsible willful denial of what is documented yet again in the thread above, let me clip from 108 and 104 just for starters:
See why you are clearly of negative credibility? We live in a world where info capacity is routinely measured in bits and bytes. Accounting for redundancies and uneven distribution of glyphs, unused states [bcd vs hex code was first case in digital electronics] etc in real codes is what Durston et al have done. Others have pointed out similar things, and yet you remain in tellingly dismissive denial. Fail. KF
PS, If you took time to click my linked page through my handle, from over a decade ago you would find, first https://www.angelfire.com/pro/kairosfocus/resources/Info_design_and_science.htm#infois and then https://www.angelfire.com/pro/kairosfocus/resources/Info_design_and_science.htm#fscimetrx which points onward to published work, some before Durston.
PPS, then, there is this from Orgel, 1973:
This is only a sampler that further exposes your irresponsible commentary.
PPS, on DNA as an info store, here is something recently discussed here at UD:
https://www.bbc.com/news/science-environment-59489560
See why you are painting yourself into a corner as an irresponsible, unresponsive dismissive hyperskeptical objector?
AF, predictably, you willfully refuse to acknowledge that your statement in 120 — “There is no absolute objective warrant” [and BTW, warrant needs not be absolute to be objective and reliable] — is self-referential, self refuting and therefore nonsense. As for trying definitionitis games with what is objectivity, it has to do with warrant thus knowability. That is, knowledge is warranted, credibly true [and so reliable] belief. Warrant, pointing to fulfilled duties of reason. These terms are not empty labels for you to cynically play rhetorical stunts with. Your behaviour continues to show how irresponsible you are. KF
PS, a bit of algebra will help those willing to attend to the foundations of knowledge:
Similarly, Kindly, ponder the very carefully worded definitions from Collins English Dictionary [CED], where high quality dictionaries record and report correct usage:
Dictionaries of course summarise from usage by known good speakers and writers, forming a body of recorded knowledge on language. So, we may freely conclude that:
Objectivity, is established as a key concept that addresses our error proneness by provision of adequate warrant that gives good reason to be confident that the item or state of affairs etc contemplated is real not a likely point of delusion. Yes, degree of warrant is a due consideration and in many cases common to science etc is defeasible but credible. In certain key cases, e.g. actual self evidence, it is utterly certain.
**************
PREDICTION: AF will studiously ignore this and pretend that nothing has been shown. Let us hope, for his sake, he will prove me wrong.
But KF, you reinforce my point that “FSCO/I” is a concept that nobody but you uses. And you cannot use it quantitatively. I’ve yet to see a coherent working definition.
Durston has made zero impact in the scientific world.
Objectivity has an everyday meaning with which I have no problem. But, whilst trying to be objective when making statements (Wikipedia’s neutral point of view is a good example), deciding which statements are in that sense adequately objective is a pretty subjective process.
AF at 134,
Wha… what? Guess what? Information is information. All living things contain instructions for assembly and reproduction. Blind, unguided chance, which is also not goal oriented, cannot program or build your computer much less a living thing.
AF, you obviously failed to see that I am simply noting that as Orgel and Wicken highlighted, we deal with functional information, which can be explicit [text, D/RNA] or implicit in configuration of parts to achieve function. So, I abbreviated the phrase in two stages, the latter highlighting organisation. That is a convenience. Where, bits and bytes are ubiquitous in an info age so your pretence to ignore them just shows your desperation to resist the manifest. As Orgel put it in 1973, compact description suffices to specify as we know now from say an Autocad DWG file. Such are amenable to metrics that can be chosen as convenient. 10+ years ago, I favoured using a product and also used a subtraction of threshold value. Abel et al, Durston et al showed how to factor in redundancies, and that has objective warrant. As to your latest stunt to try to make objective warrant vanish into subjectivity, that is little more than an excuse for selective hyperskepticism amounting to willful obtuseness. Its anticivilisational, misanthropic folly can readily be seen from the result were it to be the norm: collapse. Attend instead to duties to truth, right reason, prudence (including warrant). You are bearing out my prediction. KF
PS, Locke’s rebuke is telling:
I agree. However evolutionary theory does not propose a process based only on chance. There is bias.
AF at 137,
Not goal oriented. You can aim a driverless car down the road and let it go. How soon before it crashes into something? Evolution is that driverless car.
KF, your support of your “FSCO/I” concept may be more convincing if you could attempt to come up with a working definition and perhaps an example of how to apply it to a biological system. I realize an actual quantisation is beyond you but baby steps…
Nope. A useless analogy that is so far from how biological evolution actually works, it’s hard to know what to say to you. Have you read any books on evolutionary biology?
AF at 140,
You’ll have no luck with your “have you read” question. The whole evolution story is old hat for me. It boils down to being a faith statement as opposed to anything having to do with science. There’s no evidence evolution, as advertised, actually did anything.
Yet all you have said so far indicates to me you have a poor and inaccurate understanding of the theory and the process.
No, it’s an explanation for the observed diversity and relatedness of life on Earth.
Richard Lenski’s LTEE demonstrates the evolutionary process in real time.
@Relatd
Did you watch the video that Jerry linked to? The video is a summary of the work of Peter and Rosemary Grant with Galapagos finches.
Alan Fox@134
Neutral POV?? How incredibly ridiculous. Wikipedia is demonstrably and extremely biased against any non-mainstream, non-reductionist-materialist understanding of Nature, suppressing and deliberately distorting evidence for the paranormal, for instance, and of course for ID. There is a sort of Wiki “thought police” of zealots who constantly monitor and suppress any contrary entries that contradict their narrow scientistic, reductionist-materialist view of reality. This Wiki thought police could very well include AF, who exhibits all the signs of dedication to the secular modern religion of scientism and Darwinism.
AF at 142,
Do you think this is the first time I’ve been asked about this? Or told, in great detail, what supposedly happened? Lenski? Again? A dud. A non-starter. A ‘trust me, it went like this.’ You’ll have no luck selling the theory. Thanks primarily to this site, and watching the dogged determination of the defenders of the theory elsewhere, it is a belief system as opposed to science.
Galapagos finches? No, I don’t think so. You ignore the complexity in a single living cell and try to convince others that it slowly, gradually appeared as it is today? All I’m seeing from the scientific community is their finding more and more complexity, squeezing chance out of the equation entirely.
Doubter
You’ve missed my point, which was how attempts at an objective approach are easy targets for accusations of subjective bias. Though your comment illustrated my point neatly. So thanks for that. 😉
Relatd
If UD is your main source of information on evolutionary biology, evolutionary theory and the evidence underlying the process and the theory, you are beyond help, I guess. Tant pis! (Too bad!)
AF,
I notice that the bottom line is that you don’t respond to or engage with my substantive comments relative to Wiki. I wonder why.
AF at 147,
Don’t get stupid on me, OK? You strike me as intelligent. Anyway, for the purpose of letting you and others reading know where I stand, here are the details: NO, I did not get all of my information from UD, and don’t fake a lack of reading ability with me again. I was told on other sites, over a number of years, what evolution supposedly did. It’s fiction. Fiction. This site helped to clarify all that. Again, don’t come back with some quip reply, read what I’m writing.
Wherever you got your ideas about evolution from, what you write here makes it clear your understanding of how evolution works is erroneous. Do you know what a niche is?
Wikipedia is a great resource used properly. The idea is to use it as a gateway to the primary sources.
AF at 150,
Faking a lack of reading ability again? Come off it, Alan. All you’re doing is acting like one of the indoctrinated. Too bad.
@ Relatd:
Do you know what a niche is? It’s central to the mechanism of evolution.
AF at 153,
I’ve heard it before. You’ve got nothing new.
What have you heard? The niche is the mechanism by which God designs living organisms including us? Are you not then amazed?
Indeed. There are people much better than me at explaining evolutionary theory but you have already rejected your strawman version. Not much I can do about that and I guess it doesn’t really matter in the circumstances.
AF at 155,
It matters every time. Every time. The secular evangelists are careful to answer every attempt to breach the wall of the theory. E.g., “You’re wrong! You’re ignorant!” And so on. And it’s obvious that atheist materialism must be protected. Always.
AF, that is now lying. By speaking with disregard to facts already on the table as stated or a few links away. Info carrying capacity of D/RNA is 2 bits per base; redundancy reduces that somewhat. Each AA is effectively from 20 possibilities, 4.32 bits, with chirality adding a bit in many OoL contexts. Abel, Durston et al describe how the capacity is not fully used, as is true for codes in general. But such is immaterial: config spaces beyond 500 – 1,000 bits are unsearchable by blind means on the sol system or cosmos scope gamut: 10^57 to 10^80 atoms, at up to 10^14 operations per second, for 10^17 s. Just for the genome of a first cell, 100 – 1,000 k bases, and new body plans are 10 – 100+ million bases, where the config space doubles per additional bit. All of this you should long since have acknowledged but obviously have no intent to, as it is at once fatal to the plausibility of your preferred materialistic miracles of organisation. Beyond, I simply note we have coded algorithms to compose AA chains for proteins, thus language and goal directed processes. We confidently infer design as best causal explanation, indeed the only empirically supported one for such. More can be said, fisking to follow. KF
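For readers who want to check that arithmetic, here is a rough back-of-envelope sketch in Python (purely illustrative; the base counts and the 500 – 1,000 bit threshold are simply the round figures cited above):

import math

bits_per_base = math.log2(4)    # D/RNA: 4-state elements, 2 bits of raw capacity each
bits_per_aa   = math.log2(20)   # amino acids: 20 options, ~4.32 bits each

first_cell_bases = (100_000, 1_000_000)       # 100k - 1,000k bases for a first genome
body_plan_bases  = (10_000_000, 100_000_000)  # 10 - 100+ million bases for new body plans

for label, (lo, hi) in (("first cell", first_cell_bases), ("new body plan", body_plan_bases)):
    print(f"{label}: {lo * bits_per_base:,.0f} to {hi * bits_per_base:,.0f} bits of capacity")
print(f"one amino acid: {bits_per_aa:.2f} bits of capacity")
# Both genome ranges sit far beyond the 500 - 1,000 bit threshold discussed above.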
Why so insulting, KF? A lie would involve me in making a statement I know or believe to be untrue at the time I am making it. I have never done that here.
Engineers would be out of a job on the spot if they invented stories the way Darwinists do. Why are Darwinists paid for inventing unprovable stories about the past? They are just common novelists publishing under “scientific authority”.
LCD at 159,
You’re right. Engineers have to show their work. They have to build things that actually function.
AF, doubling down. And a misdefinition: there are subtler forms of intentional deception. To lie is to speak with disregard to truth in hope of profiting from what is said or suggested being taken as true. You full well know, or should acknowledge, some basic facts but choose instead to obfuscate, pretend to innocent ignorance, deride and dismiss. We can infer that the facts would be fatal to your enterprise. KF
PS, even Wikipedia is forced by facts to acknowledge:
There’s no argument about that. I just happen to think the mechanism of design is explained by evolutionary theory. The niche is God’s design tool.
Nope. My definition is the correct one.
Real life calls. Done for the next day or two at least
AF, playing definitionitis, nominalism games? To speak with disregard to truth is to refuse to tell known or knowable truth, e.g. duty of acknowledging ignorance or risk and refusing to give misleading part truths. The lying compounds that refusal of duty with misrepresentation as though what were represented is true, and does so to gain advantage. In this case as an educated person you know or could easily know about bits and information capacity. You can further at least appreciate the gap due to redundancies, uneven odds of different states etc. Then you can readily see that the coded algorithms in the D/RNA of the cell swamp blind needle in haystack thresholds. You also know about language using intelligence and the goal directed, finite steps nature of algorithms. Such strongly point to design. KF
PS, Notice Wikipedia’s further admission on undeniable states of affairs:
Alan Fox:
The “theory” that changes happen? The theory that whatever is good enough to survive may get the chance to reproduce? The theory that some changes have a better chance of being eliminated than others? Can you please link to this alleged scientific theory of evolution so we can all read what it actually explains?
Nope. The niche only hones the already existing and well-established design.
Alan Fox:
Right. The LTEE has demonstrated the severe limits of evolutionary processes.
Thank you, Alan.
Alan Fox:
Durston refutes your asinine claims. And this “scientific world” cannot demonstrate that blind and mindless processes produced life and its diversity. They can’t even formulate a scientific theory of evolution. Heck, they don’t even know what determines biological form!
What scientific impact has been made in the name of evolution by means of blind and mindless processes? Besides the obvious negative one.
Alan Fox:
The bias changes. Even a loss of function can be beneficial.
Sexual selection and sexual reproduction reign in the odd deviants. Again, all we observe is the honing of an already existing, well-established design.
AF@151
A great resource? That’s a laugh. Used any way, for some subjects it is a great source of biased misinformation. Your faith in Wiki just goes to show your status as a faithful certified card-carrying member of the church of scientism
Just the tip of the iceberg would be Wiki’s dreadful coverage of parapsychology. It is typical of the very strong bias exhibited by Wikipedia and their “thought police”.
The Wikipedia item on psi and esp is a real hatchet job. From the writeup: “Second sight and ESP are classified as pseudosciences”. “Pseudoscience consists of statements, beliefs, or practices that claim to be both scientific and factual but are incompatible with the scientific method. Pseudoscience is often characterized by contradictory, exaggerated or unfalsifiable claims; reliance on confirmation bias rather than rigorous attempts at refutation; lack of openness to evaluation by other experts; absence of systematic practices when developing hypotheses; and continued adherence long after the pseudoscientific hypotheses have been experimentally discredited.”
Of course this Wiki article ignores or dismisses major meta-analyses of the data, like Etzel Cardena’s survey article on psi and esp research findings in American Psychologist, which presented a very strong case for the reality of these phenomena based on the cumulatively overwhelmingly evidential peer-reviewed research findings from many studies accumulated over the years. The title was “The experimental evidence for parapsychological phenomena” at https://ameribeiraopreto.files.wordpress.com/2018/12/The-Experimental-Evidence-for-Parapsychological-Phenomena.pdf. From the Abstract: “The evidence (presented here) provides cumulative support for the reality of psi, which cannot be readily explained away by the quality of the studies, fraud, selective reporting, experimental or analytical incompetence, or other frequent criticisms. The evidence for psi is comparable to that for established phenomena in psychology and other disciplines, although there is no consensual understanding of them.”
Any open-minded examination of the empirical evidence shows that parapsychology is not pseudo-science as claimed by Wikipedia, but of course Wiki complacently lies that it is, and knows that it is trusted by millions as a good source of information. Not.
With the Cardena paper the best that the materialist scientistic skeptics could do when presented with this challenge was Reber and Alcock’s incredible response (at https://skepticalinquirer.org/2019/07/why-parapsychological-claims-cannot-be-true/), where they couldn’t or wouldn’t waste their precious time and effort in actually examining the details of the data and research experimental results, but instead they closed-mindedly went back to David Hume and his old “pigs can’t fly” philosophical/metaphysical argument against “miracles” contravening currently understood natural law. Reber and Alcock claimed that esp and psi are simply existentially impossible, regardless of absolutely any conceivable evidence. Essentially, they threw out without examination the very large body of highly evidential experimental research results, a very large body of empirical evidence, just because they didn’t and couldn’t believe them. They strongly believe that all the data regardless of quality just must in principle be false in some way, with no need to actually show this falsity in detail.
Wow, case closed. What an excellent argument. Of course, the real reason for their use of this tired and invalid old argument was that they knew that they couldn’t plausibly challenge the findings documented in Cardena’s paper.
Doubter at 170,
I have studied ESP and PSI. There are other subjects where others react the same way. At first, it surprised me. Later, I concluded that they either do not want to believe good data or they are trying to hide something. The example I’m referring to included a NASA Technical Report. But the replies were just howls of “No! It can’t be!” Uh, it’s in the NASA Technical Report produced by NASA and I get this?
None of these people could give a rational response even though at least a few claim to have some expertise in the example in question.
Wikipedia is good for rock band trivia.
Beyond that…
Andrew
AF, you continue definitionitis. Okay, here is a description and context for FSCO/I https://uncommondescent.com/mathematics/times-arrow-the-design-inference-on-fsco-i-and-the-one-root-of-a-complex-world-order-being-logic-first-principles-25/ KF
Relatd@171
There are many subjects and movements that, despite ample evidence for their reality, are derided in their Wiki articles, in an obvious smear campaign against anything that seems to conflict with reductive materialism and the mainstream consensus of what reality is: that it is ultimately meaningless matter in a void, and that current conceptions of science are final, absolute reality, despite the obvious fact that this tends to change every few generations, paced by the rate of the funerals of the “experts.” A sure sign of the grip of the secular religion of scientism on our current society. This is essentially the worship of naturalism and reductive materialism, and the active persecution and suppression of any tendency to stray from the faith.
The treatment by Wiki of Intelligent Design is perhaps even worse than its treatment of ESP and the paranormal in general. Wikipedia similarly falsely claims ID is pseudoscience, and adds the also patently false claim that it is Creationism in disguise. Materialist propaganda aimed at convincing people that the evidence-free secular religion of Darwinism is the truth.
Doubter at 174,
Wikipedia can be useful. In some cases, such as you describe, it can be edited by anyone or modified by anyone. In the case of the business I work for, we have a Wikipedia page. It contains false, inaccurate and other problematic pieces of information. We attempted to post a corrected version. Persons unknown changed it back.
In the case of my example, certain people on another message board attempted to either convince themselves or others that the information I provided, backed up by a NASA Technical Report, could not be true. I suspect the primary reason was that it was about a piece of technology that appeared earlier than history would lead people to believe. The other problem was that it was obtained from a foreign country after World War II.
Relatd: I suspect the primary reason was that it was about a piece of technology that appeared earlier than history would lead people to believe. The other problem was that it was obtained from a foreign country after World War II.
Just curious . . . what bit of technology was that then?
Relatd, TV was experimental in the teens and twenties; the BBC’s regular broadcast service began in 1936 (suspended in 1939 for the war). Pulse Code Modulation was 1939 too. So was the first jet flight, the Heinkel 178, IIRC. Things were happening far earlier than people may realise. KF
JVL at 176,
A Mach 10 wind tunnel.
Relatd: A Mach 10 wind tunnel.
Initially developed by Nazi Germany? And eventually realised in Tennessee in the 50s? Is that view controversial?
JVL at 179,
I have no idea where your information comes from. The wind tunnel was installed in the United States in 1947. The supposed ‘experts’ on another board either didn’t want to believe it or to suppress the knowledge that it occurred in that year.
Relatd: The wind tunnel was installed in the United States in 1947. The supposed ‘experts’ on another board either didn’t want to believe it or to suppress the knowledge that it occurred in that year.
Well, as far as I can see, the technology was definitely cutting edge, and the Nazis worked on it, but it’s not that surprising or out of line with known research.
JVL at 181,
I have some expertise in this area. In 1947, supposedly, no one had anything fast enough to warrant the building of a Mach 10 wind tunnel. This isn’t cutting edge, this is ‘beyond anything that existed at the time’ according to those ‘experts’ I referred to. Things are not built for no reason. So, you are quite wrong. This was far beyond any “known” – according to the history books – technology from the period.
The V-2 rocket traveled at over Mach 4.3.
Relatd:
Does this fit in with your view of what happened?
https://link.springer.com/article/10.1007/s12567-015-0078-0
https://www.researchgate.net/publication/272362585_The_1_1_m_hypersonic_wind_tunnel_KochelTullahoma_1940-1960
The V-2 rocket traveled at over Mach 4.3.
References?
JVL at 183,
The Mach 10 wind tunnel went into operation in the U.S. in 1947 or 10 years earlier, which explains the ‘objections’ raised by the ‘experts.’
A photo of a wind tunnel model of the A-4 (German designation for V-2) is shown in a variable speed wind tunnel with a range of Mach 1.1 to 4.4, on page 39 of V-Missiles of the Third Reich – The V-1 and V-2 by Dieter Hölsken.
Relatd, the one snatched from Germany and taken to the US? Germans have a reputation for over-building; e.g., their radars were snatched up for use as radio telescopes, being way better than necessary for purpose. Then there was (was it Hitler’s?) the dismissiveness toward the T-34 because of its crude fit and finish except where needed. And more. KF
Relatd: The Mach 10 wind tunnel went into operation in the U.S. in 1947 or 10 years earlier
That’s quite a range of years considering that a lot of work was being done at the time.
I guess I’m not completely sure what you are saying: that kind of early development by US scientists was quick but not if they had information from work that had already been done in Germany . . . or not?
I get that some people are not familiar with the history of the research but, given that, are any of the results that far out of line with expectations? Your decriers sound simply misinformed to me. So? They couldn’t even be bothered to do a decent online search. I guess that’s your whole point.
JVL at 186,
Don’t guess when you can find out. The decriers were mostly people who specialized in aerospace. This information either shocked them or they sought to cover it up. They should not be misinformed.
Another way of putting it is this: What was the U.S. doing with a Mach 10 wind tunnel in 1947? The answer is not nothing. Something like this was too advanced for the late 1940s. Considering also that it was a wartime German development.
You lack a comprehensive knowledge of wind tunnels and their alleged historical development. The German variable wind tunnel that could reach Mach 4.4 was in operation by late 1940. Again, early in terms of other developments in other countries.
Where are the people with these supernatural abilities? Why are they not on front pages, prime-time TV?
KF in comment 173
You make my point for me. “FSCO/I” is your own unique invention. Nobody else gives it a moment’s consideration. Though I’ll see if I can find time to wade through that field of chaff to find any wheat. In the meantime, what would impress me is if KF could show me where anyone else is discussing “FSCO/I” and taking it seriously.
Physician, heal thyself. 😉
AF, strawman, compounded by Alinsky-style personalisation and polarisation that boils down to: I demand details, then use dismissive rhetorical stunts to evade them when countered. This in an age where complex functional information is ROUTINELY measured in bits and through the informational school of thermodynamics that has long been tied to entropy and the second law. All I did, as you know but of course refuse to acknowledge, is to abbreviate a descriptive phrase for a concept and metric tracing to Orgel and Wicken, who outlined the concept and the principle of measurement prior to the origin of ID by over a decade. Functionally specific information can be explicit in a string as in D/RNA or text in this thread or code on a PC. It can be implicit in the reducibility of a functional configuration through description of the Wicken wiring diagram, as in the process-flow network of cellular metabolism or an oil refinery alike. It is inherently measurable in bits as is a commonplace of an information age. Adjusting for redundancy is what Abel, Durston et al did. You cannot contest those facts nor the blind needle in haystack search challenge beyond 750 +/- 250 bits. The cell, just on genome, is 100k – 1,000 k bases and body plans 10 – 100+ mn, vastly beyond sol system or observed cosmos capacity. Worse, we have alphanumeric, string, coded algorithms, directly language and goal directed processes. There is just one empirically founded causal source with capability for such, design. There is excellent reason to infer design, and such is only resisted for ideological reasons tied to the self-refuting a priori evolutionary materialistic scientism highlighted by Lewontin and quite a few others. KF
Alan Fox:
You “argue” like a child. That the “environment designs” is YOUR unique invention. Nobody else gives it a moment’s consideration.
AF,
I will comment on points:
AF, 120: >>I try to see the world as it is>>
1: If that were so, it would be admirable objectivity.
>>and base my remarks on facts.>>
2: The evidence above shows evasion of facts starting with ubiquity of functional information based on strings or configurations, measurable in bits of capacity and adjusted for redundancy. (You seem to lack familiarity with the underlying theory of information and communication and to imagine that you can dismiss it because of who points it out. Which, of course lacks objectivity.)
>>Warrant?>>
3: Warrant is a key component of what is knowable, speaking to credible realities, right reason, sufficiency to ground conclusions. Your unresponsiveness to the bit speaks volumes in a digital age.
>> I’m a pragmatist.>>
4: Pragmatism, strictly, is in serious hot water as a view on truth and knowledge. As is any variety of relativism, subjectivism, emotivism etc. We have already seen how objective knowledge necessarily and undeniably exists for any reasonably distinct field of discussion.
>>Rules that work best flow from consensus>>
5: Once significant worldviews issues and the attitude of hyperskepticism are on the table, consensus is impossible. Instead, truth, right reason, warrant and wider prudence are what we have. Your hyperskepticism does not control our knowledge, nor should it.
>> and fairness, >>
6: Fairness is of course part of our first duties, where selective hyperskepticism is always imprudent, unwarranted, a violation of right reason, and is unfair.
>>not unquestioned authority. >>
7: Strawman caricature projection; no one in this discussion has seriously advocated blind modesty in the face of claimed authority. To suggest such in order to taint is snide and out of order.
>>There is no absolute objective warrant.>>
8: Such as for this?
9: In short, this is a self-referentially incoherent, self-defeating, necessarily false assertion. Some things may be warranted to undeniable certainty as self-evident; others, on known or accessible realities, may hold moral certainty; others have a weaker, provisional, prudent warrant, including the theoretical, explanatory constructs of science. I get the feeling some reflection on logic, logic of being and epistemology would be advisable.
>>People insist, agree, argue, fight, endure whatever rules emerge in human societies.>>
10: This sounds much like cultural relativism, which fails.
>>I’m sure we can all think of better ways for our community to function, but there’d be little consensus.>>
11: Irrelevancy and again appeal to cultural relativism.
>>You do err, frequently, and at length. It is fortunate you have no power to enforce your ideas to any significant extent on others.>>
12: Little more than turnabout projection, to feed personalisation and polarisation. On the subject in hand, the binary digit is not a personal matter, nor is the concept of functional information, nor that information can be implicit in functional organisation.
13: All of this resort, is to try to dismiss my having drawn from Orgel, Wicken and others that there is an observable [and quantifiable] phenomenon, functionally specific, complex organisation and/or associated information. That, I abbreviated FSCO/I, and have long since pointed to sources. There is no responsible reason to disregard it, we see here ideologically motivated artificial controversy driven by selective hyperskepticism.
14: The obvious reason? Such FSCO/I is readily observable with trillions of cases and once we are beyond 750 +/- 250 bits, uniformly is seen to come about by intelligently directed configuration. Further, it can be shown that blind needle in haystack search is not a plausible cause for it. So, as this includes the genome, which has coded algorithmic information (so, language and goal directed process), that strongly points to the cell and to major body plans being designed. You cannot counter on merits, but are determined to reject the possibility of design so you have resorted instead to quarrelsome rhetorical stunts.
>>A fact for you to consider.>>
15: Considered for over a decade.
>>You are unique in claiming that “FSCO/I” is a genuine, quantifiable concept>>
16: False, you have hyperskeptically refused to recognise a descriptive phrase for a ubiquitous phenomenon in a technological, information age, functional information [rather than info carrying capacity] that is beyond a threshold where it is plausible to suggest it could have come about by blind chance and/or mechanical necessity.
17: I have made available to you clips from Orgel and Wicken, which are my sources, which you have dodged. Let me clip here Wicken’s wiring diagram comment:
18: Quantifiability of course, as has been pointed out any number of times, starts with information carrying capacity, often in bits or bytes. Beyond, in an information theory context, when redundancy enters, there is a reduction, a familiar phenomenon with codes as information is connected to surprise and removal of uncertainty. In English, about 1/8 of normal text is the letter e, and rarer ones such as x convey more information.
>> yet have failed utterly to justify that claim.>>
19: Manifestly, insistently false to the point of speaking with disregard to truth.
So, in the end, the objections fail.
KF
KF in 193
Thanks for at least using paragraphs and numbering them. The questions that interest me are:
1. What precisely is “FSCO/I”?
2. How is it quantified?
3. Who, apart from Kairosfocus, talks about “FSCO/I”?
AF, long since answered, you are playing at willful obtuseness. A descriptive phrase for a ubiquitous phenomenon in an information age being treated with hyperskepticism is a strong sign of just how threadbare the objections are. FSCO/I = “functionally specific complex organisation and/or associated information,” which describes, it does not invent. And that is a root problem, nominalism; it fails, there are abstracta such as information and quantities, that are very real. Information is measurable as capacity in bits, counted from string length of two state elements to hold it. Wicken pointed out that — with implied compact description languages — information is implicit in functionally specific organisation and its wiring diagram. Functionality dependent on configuration is highly observable, look at any auto parts shop or at how readily information is garbled by noise. You contributed many cases in point in this thread or elsewhere. So, you know full well what you pretend to doubt. That tells us just how powerful is the discovery of coded algorithmic information in D/RNA in the cell and its function as basic module of life. Where life is of course notoriously undefined in the sense of a consensus precising statement, but is readily recognised. Definitionitis rhetoric fails. KF
PS, it matters not 50c that I use and explain the description, the substance is real and similar phrasing is everywhere. Start with Orgel and Wicken as already cited and see if you can bring yourself to acknowledge they have a point. In speaking of specified complexity [coming from Orgel] and on complex specified information Dembski pointed out that for biological systems such is cashed out in terms of functionality. That is, functionally specific configurations. And Abel, Durston et al have reduced that to an analysis pivoting on observed range of variation in life for enzymes etc.
PPS, for further contemplation:
I do not think there is a necessity to engage in probability analysis; there is a plausibility argument based on the blind, needle-in-haystack search challenge. That is why I point to 10^57 solar system atoms [where most are H and He in the sun] and to 10^80 for the observed cosmos, with fast reactions of organic character rated at up to 10^-14 s each, and 10^17 s as the order of magnitude of available time. The 3.27×10^150 to 1.07×10^301 possibilities [for 500 to 1,000 bits] swamp those resources, reducing the feasible search of the configuration space to a negligible fraction. Where, search for a golden search can be seen in light of how a search samples a subset; so, for a set of n configs, the set of searches is the power set, of scale 2^n, exponentially harder. Suggested golden searches built into the cosmology would amount to front-loaded fine tuning. Blind watchmaker approaches are maximally implausible.
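A minimal sketch of that needle-in-haystack arithmetic, in Python (the atom counts, reaction rate and time are the round figures used above; this is an illustration of the comparison, not a simulation):

import math

atoms          = 1e80   # observed cosmos (use 1e57 for the solar system)
ops_per_second = 1e14   # fast organic reactions, ~10^-14 s per event
seconds        = 1e17   # order of magnitude of available time

max_ops = atoms * ops_per_second * seconds   # at most ~1e111 configurations sampled

for bits in (500, 1000):
    configs = 2.0 ** bits                    # ~3.27e150 and ~1.07e301 respectively
    print(f"{bits}-bit space: searchable fraction ~ 1e{math.log10(max_ops / configs):.0f}")
# Even granting the whole observed cosmos as a search engine, the sampled fraction is negligible.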
PPPS, as you seem unfamiliar with the underlying state or phase space thinking, Walker and Davies:
More on the anthropic principle from Lewis and Barnes https://uncommondescent.com/intelligent-design/hitchhikers-guide-authors-puddle-argument-against-fine-tuning-and-a-response/#comment-729507
And on and on, for those willing to rise above willful obtuseness and hyperskepticism.
1. What precisely is environmental design?
2. How is it quantified?
3. Who, besides Alan and Fred, talks about environmental design?
ET, there is endless talk on fitness functions and hill climbing. There is a common assumption of well behaved functions, though the issue of ruggedness as I discussed is not properly appreciated. However, given FSCO/I, we have issues of multiple, well adapted, matched, properly arranged and coupled parts to achieve function, as is easily seen with the exploded view of a case study, the ABU 6500 reel [simpler than Paley’s watch and from a firm that made taxi meters]. In short, islands of function separated by vast seas of non functional clumped or scattered configurations is very real. The dominant search challenge is to get to a shoreline of function for hill climbing and specialised adaptation to modify the body plan or architecture or wiring diagram. Where with 500 – 1,000 bits as a threshold atomic and time resources cannot carry out significant config space search. So, FSCO/I by blind needle in haystack search is analytically maximally implausible. There are trillions of cases by intelligently directed configuration, as intelligence plus knowledge plus technique are fully capable. FSCO/I is a signature of design. All this has been outlined, explained, thrashed out over a decade ago, but we are not dealing with intellectual responsiveness. KF
I agree. And Alan’s obfuscation and willful ignorance are not arguments against that.
What Alan will never present is evidence that blind and mindless processes produced any bacterial flagellum, for example. He can’t even tell us how to test the claim that blind and mindless processes are capable of producing any bacterial flagellum. And he doesn’t understand that science rejects claims that are evidence-free and cannot be tested.
ET
Joe, Joe, KF will tell you every tub must stand on its own bottom. Rail against evolutionary theory if you want, but it doesn’t change the fact that for “Intelligent Design” there is no tub and no bottom. And still nobody can tell me what FSCO/I is, not even the guy who invented it.
So the argument for Evolution is based on something else not being true?
Please tell us what other scientific belief is based on that. The answer: none.
Aside: I can define functional complex specified information. It was done years ago. It can be measured in terms of its complexity just as an individual sentence can be measured.
The current problem is that I am at the New Jersey shore on vacation for 10 days, so finding specific discussions of complex specified functional information from 13-15 years ago is difficult.
But to suggest there is no definition is nonsense. So what else is new?
Alan, Alan. When there are TWO choices, intelligently designed or not, evidence against one, supports the other. But I understand that you couldn’t grasp that fact.
Also, science mandates that all design inferences first eliminate chance and necessity. See Newton’s 4 rules of scientific reasoning, parsimony and Occam’s razor. But you have been told this many times and it still hasn’t sunk in.
That said, all bacterial flagella fit the criteria for being intelligently designed. First of all, they are all irreducibly complex.
“Our ability to be confident of the design of the cilium or intracellular transport rests on the same principles to be confident of the design of anything: the ordering of separate components to achieve an identifiable function that depends sharply on the components.”– Behe in DBB
And all you can do is to lie and deny that reality. It sucks to be you.
AF, this is not about evolution insofar as descent with modification is concerned. Dogs show modification by variation and artificial selection; gulls and other circumpolar species show natural adaptation and biogeography until the two ends of the ring overlap in Europe, etc. Galapagos finches show radiation but also that successful cross-species breeding occurs. Red Deer and American Elk proved able to interbreed in New Zealand. The issue is to arrive at body plans de novo, starting with the unicellular organism and then getting to dozens of body plans. Hill climbing does not explain arriving at a beachhead on an island of function, which makes blind needle-in-haystack search utterly implausible, which is why you suddenly have all sorts of hyperskepticism about a commonplace phenomenon, FSCO/I, and how to construct metrics. That reaction tells us your view has crippling difficulties accounting for information and organisation beyond 500 – 1,000 bits. KF
Jerry,
Don’t worry about arguing with me. Enjoy your vacation. I’ll still be here when you get back…
If I’m spared! 🙂
The Sherlock Holmes argument? Good grief! Every tub must stand on its own bottom. You have to include the possibility of the explanation we haven’t thought of.
But I don’t need to. There is no requirement for such a concept in the evolutionary model. You need to show how to quantify such claims and then explain how your model works in a biological system before I need to be concerned.
AF at 205,
Pfft! Double pfft!
No arguing.
Just presenting the obvious. FSCI is obvious and simple. How anyone could say there is no measure of it is beyond me.
As I said, measuring a simple sentence in any language is straightforward. Measuring the complexity of a DNA sequence is just as simple.
AF, you also know that with trillions of observed cases, FSCO/I is uniformly, reliably produced by intelligently directed configuration. Thus, you know it is a strong sign of such IDC as key causal factor. Your rhetorical pretences otherwise simply show intent to disregard the basic inductive logic on which science was built. KF
Alan Fox:
Good grief is right! Are you daft? Given 2 possibilities, it is a fact that eliminating one supports the other.
And yet yours doesn’t even exist! And nice of you to ignore what I said and prattle on like an infant.
AGAIN, science mandates that all design inferences first eliminate chance and necessity. See Newton’s 4 rules of scientific reasoning, parsimony and Occam’s razor. But you have been told this many times and it still hasn’t sunk in. What part of that are you too stupid to understand, Alan?
Nope. Clearly you don’t understand how science operates. The science of today does not and cannot wait for what the science of tomorrow may or may not uncover. Science is a tentative venture. Scientists understand that their claims of today may be refuted tomorrow. They also understand their claims may be confirmed. That is the nature of science.
Science mandates that the claims being made have evidentiary support. It also mandates that the claims being made not only be testable but tested and confirmed. The only evidence for evolution by means of blind and mindless processes is genetic diseases and deformities.
Alan Fox:
There isn’t any requirement for supporting evidence, either. There isn’t any requirement for making testable claims. In other words, the evolutionary model isn’t scientific.
Jerry:
Obvious and simple, eh?
In that case, how hard can it be for someone to provide a worked example?
How do you know there are two possibilities? There’s an evolutionary explanation. There are several religious explanations. But there could be ones we haven’t heard of yet. ID folks may even end up explaining something one day.
Not you though, Joe.
KF
No, I keep asking but you still avoid telling me what “FSCO/I” is and how to calculate it.
Alan Fox:
Really? Again, Intelligently Designed or not sweeps the field clean.
Yes, your continued equivocation is duly noted. Intelligent Design has an evolutionary explanation, too. Intelligent Design posits that living organisms were so designed with the information and ability to evolve and adapt. Evolution by means of intelligent design, ie telic processes. Genetic algorithms exemplify evolution by means of telic processes.
So, please stop equivocating.
The only things evolution by means of blind and mindless processes can explain are genetic diseases and deformities.
Already have. But we are still waiting for you and yours to come up with something.
And yet I have, Fred.
Why do you think your willful ignorance is an argument?
Alan Fox:
It has been explained, ad nauseum. YOU are the problem, Fred.
Not hard at all.
Every sentence in this thread is an example of FSCI.
The probability of getting each sentence is just 1/29 x 1/29 etc. for each letter, space, comma and period. Actually this understates the options, so it’s best to use 1/60 to cover capitalization and other punctuation marks. So
Contains 50 characters, so the sentence is determined by 1 chance in 60^50. Unlikely? There is no chance of generating it in the history of the universe using random choices for each character. (This is about 10^89.)
For biology, DNA sequences producing proteins could be calculated in a similar fashion.
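A minimal sketch of that calculation, in Python (the 60-symbol alphabet and 50-character length follow the worked example above; the 100-residue protein length is only an assumed illustration):

import math

# 50-character sentence, roughly 60 possible symbols per position
sentence_space = 60 ** 50
print(f"sentence configurations: about 1e{math.log10(sentence_space):.0f}")       # ~1e89

# Illustrative biological analogue: a 100-residue protein, 20 amino acids per site
protein_space = 20 ** 100
print(f"100-AA protein configurations: about 1e{math.log10(protein_space):.0f}")  # ~1e130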
As someone who has taught probability, I protest that this explanation is extremely simplistic and unrealistic. To think that pure chance involving a certain number of simultaneous and independent events is how things happens in the real world is naive and basically irrelevant to any real-world situation.
As someone who has taught probability, I protest that this explanation is extremely simplistic and unrealistic. To think that pure chance involving a certain number of simultaneous and independent events is how things happens in the real world is naive and basically irrelevant to any real-world situation
Tell more, like why this has to be off by orders of magnitude, because it would have to be to be relevant.
No, Viola Lee. It doesn’t have anything to do with simultaneous events. Independent events, yes.
7 coin tosses to hit 7 heads (or 7 tails) is 1/2 x 1/2 x 1/2 x 1/2 x 1/2 x 1/2 x 1/2 = 1/128 for each specified run.
But once you hit 6 in a row, the last one is still only 1/2
True, the events don’t necessarily have to be simultaneous. I can flip 100 coins at once or one coin 100 times, and certain things, such as the distribution, will be the same, but others, such as order, will not. However, my main point is that neither of these is a realistic model of how real-world events happen.
And yes, of course each coin toss is independent of what has happened before. Again, not the way most real world events happen.
With evolution by means of blind and mindless processes, that is exactly how it happens. However, evolution by means of blind and mindless processes doesn’t translate to the real world. Unless you are discussing genetic diseases and deformities.
AF, lying again. You full well know we live in a world of functional information, measured in bits of carrying capacity; you are using digital technology. You know that it was recognised that such information, will have redundancies and have seen working out of how that affects values of encoded info in functional bits. You know such has been published and you have had links to such. (Newbies, see basic survey here on in my always linked through my handle — AF has been around for years and knows better than he speaks yet again.) The info in D/RNA is expressed as 4 state elements thus two bits of capacity per base, though redundancies obtain, and are addressed, on much the same basis as in the world of telecommunication and computing. Similarly generally proteins are 20 state per AA, 4.32 bits per AA carrying capacity, redundancies reduce the actual functional information. From Orgel and Wicken on, that has been done. You have no excuse for yet again denying what is in front of you. That denial instead reflects desperation to evade the import of that observed functional information in life forms from the cell on up. KF
PS, for record, I clip the just linked citing Durston et al:
Of course, predictably, hyperskeptical denial, dismissal and evasion will continue.
VL, have you taught or studied information theory and telecommunications? That is the specific relevant context and I laid out a summary starting with a clip from one of my first t/comms texts, in my longstanding always linked. Kindly see here on. Start with info carrying capacity, how effectively a negative log probability metric arose, then move to redundancies, then consider information used to specify function. Then explain to us what it means when file sizes are measured in bits and bytes etc., then what channel capacity is and the significance of, say, e being about 1/8 of normal text while, say, x is much more rare, tying in the concept of surprise. Go on to the informational school of thermodynamics and its import for the second law. Then, ponder phase, state and configuration space. How many possibilities exist for say 8 bits, then 500, 1,000, and how are they distributed per the binomial theorem, and what does that tell you about small target zones and blind needle in haystack search? Search for a golden search? [Note that a search samples a set of possibilities so the set of searches is a power set.] I suspect that part of the dividing line here is that objectors have little familiarity with these matters at the next level up from "oh, file sizes are in bits and bytes." You are dealing with people who have dealt with such matters, but I further suspect the polarised atmosphere leads objectors to imagine that they are dealing with dubious, rhetorically dismissible notions. KF
PS, given the chaining chemistry for AAs and D/RNA, is there any serious chemical constraint on any of 20 AAs or 4 bases following any other? Think about how that compares to an n element string storage register
|x|x|x| . . . |x| where each x has say p possible states.
Yes, we have here p × p × . . . × p = p^n possibilities, leading to metrics of info capacity. Each x will have log p / log 2 bits of storage capacity, but redundancy will reduce the actual functional information in a practical code.
And more.
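Worked out for the two biological cases (illustrative string lengths only): with p = 4 (D/RNA bases) each element carries log 4 / log 2 = 2 bits, so a 1,000-base string has a raw capacity of 2,000 bits and spans 4^1,000, about 10^602 configurations; with p = 20 (amino acids) each element carries log 20 / log 2 ≈ 4.32 bits, so a 100-AA chain has a raw capacity of about 432 bits, in each case before any reduction for redundancy.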
So have I but in the past.
The more interesting thing here is: why this particular response? Why now? Why only criticize and not try to make it more accurate? That is what a teacher usually tries to do.
For example, the order of letters and punctuation may have some necessity to them. So the actual next character will be limited by the preceding characters in some examples. But in general the example is simple and relevant and the calculations straightforward.
Aside: who said that the next character was random in the real world? For something to have function, the argument is that it couldn’t happen by chance or any natural process, and that some intelligence directed it.
Similarly, certain DNA sequences have function. What is their origin? For some the origin may be analogous to a coin flip. But for most that does not appear to be the origin.
Aside2: the origin of DNA sequences in punctuated equilibrium is thought to be analogous to coin tosses. That is their explanation for protein origin. While it may explain the occasional protein it only explains a small number and it fails to deal with the origin of functionality in the DNA transcription and translation process.
Maybe our resident expert on probability would provide an estimate on these probabilities and not just criticize.
Jerry, some of that is bound up in redundancy [q usually has u after it in English], some is implied by the message to be sent, some by the need to frame messages so starts are starts, words are words, stops are stops, etc. Those are not bound by chemistry or physics. In a Darwin pond scenario it is chem and physics that would have to compose. KF
To toss a coin you need a person (an intelligent agent), a coin (intelligently designed) and a purpose.
Whatever you want to demonstrate you need intelligence as starting point.
Bad news for some ideologies.
Kairosfocus:
That is a serious and unjustified allegation. For someone who postures as a Christian community leader, it is behaviour particularly depressing to observe. Support the allegation with facts or stop making it. For shame.
Absolute balderdash. Neither you nor anyone else can discern the functionality of a novel DNA sequence by performing any sort of numerology on it.
Here is an example of information that is functional, specific and caused by an intelligent agent. Look at the line below:
hereisanexampleofinformationthatisfunctionalspecificandcausedbyanintelligentagent
It is the first sentence of this post with the starting capital letter, punctuation and spaces removed. Living cells know how to translate this, and how to perform error correction.
THIS is Intelligent Design. All of it. I have lived to see the day when blind, unguided chance disappears under the truth. The truth for all.
AF at 230,
To someone who is a blowhard, where did you see the title in the following?
“…postures as Christian community leader…”
And based on your previous posts, I doubt that you’d actually be heartbroken if this were true.
Related
So you are a blowhard? I did not know that. Luckily, being from Europe, I don’t know what that word means.
@ Relatd,
Looked up “blowhard”.
“an arrogantly and pompously boastful or opinionated person”
Fits some other posters here better than you, IMHO.
A nonsense statement that has to be known as nonsense.
Functionality doesn’t come from numerical analysis, but complexity can be determined from numerical analysis. There are zillions of complex entities that have no specific function. But a small percentage of these zillions of complex entities (a smaller zillion) have function. The question is how that functionality arose.
Everyone here knows this is the basic question, even if they pretend ignorance of it by making nonsense statements.
Aside: the term “blowhard” is not really relevant here. There are certainly blowhards on both sides. The tactics essential to misleading, diverting and distracting are not necessarily the mark of a blowhard.
Disingenuous is a better term.
AF, as you know, warranted, for cause. You still refuse to acknowledge what is on the table before you. That speaks, not in your favour. KF
Jerry:
So can you show how this is done? That would be helpful.
KF
Speaking in riddles again. Acknowledge what?
AF, you full well know what is linked and what has been published and cited. Further, you know what bits and bytes are. You know the difference between gibberish, simple repetitive patterns and functional organisation. You also had a link before you, which you side stepped as predicted. Your behaviour is manifestly willful and that resort tells the astute onlooker that playing telescope to blind eye is by implication a demonstration that you have nothing substantial but refuse to acknowledge blatant facts. KF
Already done above.
Alan Fox:
BWAAAAAAAAAHAAHAHAHAAAAAAAAHAAAAA!
Functionality is OBSERVED! Duh! Then we go back to see what produced that functionality. Then we quantify it. What is wrong with you? This has been explained over and over again.
Earth to Alan Fox: either you are lying about FSCO/I or you are willfully ignorant.
And seeing that you never support anything you post, you are also a hypocrite.
ET, there is a time where deliberate ignorance is deliberate falsity. But in a world of digital phenomena full of bits and bytes, such ignorance is impossible for the reasonably educated. What we actually have is refusal to recognise that 4-state D/RNA elements are essentially parallel to 128 state elements of ASCII text, or to the underlying two state elements in a storage register, or to the implied info content in a newly assembled AA chain in a cell on the way to being a fully formed protein. The same objectors who claim to speak with the voice of science here show their bankruptcy. KF
AF, I clip from 226:
KF
GEM, this is beyond craziness, though. There has to be something wrong with the ID critics. Seriously wrong, too.
You explain things so thoroughly that you lose them! And that cracks me up.
ET at 246,
Perhaps a few ID critics aren’t critics at all, just deniers.
Exactly, Relatd.
Durston’s “fits”? The idea that took the scientific world by storm? Come off it.
F/N: Re AF at 231:
Strawman caricature.
Jerry is right at 236:
So is ET at 242:
First, we can and do measure information carrying capacity, based on effectively strings and states of elements in the strings. That goes back to Shannon and even Hartley. You reacted to try to sideline it, which you must know is wrong.
Functionality is observed from operation in context, as Relatd highlighted by putting up a text string. And that is how, a decade ago, I put up a metric that would multiply by a dummy variable that would be 1/0 depending, similar to a technique used in macroeconomics and linked econometrics. Similarly, specificity can be observed by the effect of sufficient random noise perturbation to trigger loss of observable function, and that is why at that time I used a second dummy variable to denote specificity; a fishing reel is different from a bait bucket full of randomly clumped fishing reel parts, and the sheepish gun owner bringing a box full of disassembled parts to a gunsmith is proverbial.
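For concreteness, here is a minimal Python sketch of a dummy-variable, threshold-style measure of the kind just described (the function name, the example figures and the 500-bit threshold are illustrative assumptions drawn from this thread, not a quotation of the original metric):

import math

def fsco_i_sketch(capacity_bits, observed_function, config_specific, threshold_bits=500.0):
    # Dummy variables F and S (1 or 0) for observed function and configuration-specificity,
    # multiplied by the information capacity in excess of a search-resource threshold.
    F = 1.0 if observed_function else 0.0
    S = 1.0 if config_specific else 0.0
    return F * S * (capacity_bits - threshold_bits)

# Example: a 300-base coding sequence at 2 bits/base has 600 bits of raw capacity.
print(fsco_i_sketch(300 * math.log2(4), observed_function=True, config_specific=True))  # 100.0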
Durston et al used a more complex approach which is drawn out in a paper cited in my always linked. They also developed a technique to address redundancies in the information based on practical inevitabilities of codes. This, too you tried to dismiss rather than attend to substantially.
Nevertheless, you know about functional strings, you use them in your objections, and you know the difference between gibberish — a typical result of blind search string generation — and meaningful information per a given protocol, e.g. ASCII text with messages in English.
You also have been repeatedly informed that functional organisation such as that of an ABU 6500 C3 reel, can be reduced to strings in a description language such as AutoCAD DWG format.
(A reel is less complex than a watch; it is no surprise to see watchmakers coming up a few pivotal times in the history of especially the modern multiplier or baitcasting reel. And you and other objectors have been pointed to Paley’s thought exercise in his Ch. 2, on a self-replicating watch, which is fifty years before Darwin’s publication and 150 before von Neumann’s kinematic self-replicator, where the self-replicator shows the additional FSCO/I involved in moving to that class of machine or system.)
Thus, as description languages and compact technical details exist, discussion on strings is without loss of generality. A point I made over a decade ago in incorporating functional organisation in the abbreviation: functionally specific information > functionally specific, complex information > functionally specific, complex organisation and/or associated information, FSCO/I.
Specificity can be given in a detachable description, e.g. a sentence in ASCII-coded English, a working fishing reel, a cellular metabolic network or a kinematic self-replicating machine/system. I insist on kinematic self-replicators to show something done in hardware, not a software simulation.
Of course, at some point you mocked such reference to a fishing reel, wrongfully refusing to acknowledge the point Wicken made in discussing “wiring diagram[s].” That same point applies to say the process-flow network of nodes and arcs in an oil refinery [another example I used] and to the similar but vastly more complex and miniaturised one expressed through the metabolism of the living cell. Indeed, a string is actually a 1-D nodes and arcs framework. (And yes, that ties to a whole world of Mathematics on graphs, networks and their properties; also to linked engineering techniques and to register transfer language/algebra in computing.)
Where, of course, you are an educated person in a digital age and could readily access the further information regarding how information can be extracted from functional organisation and expressed in a compact description language.
In short, your hyperskeptical denials and dismissals in the face of evident facts that are readily accessible is without responsible excuse. You have been speaking with willful and insistent disregard to truth you know or should acknowledge, in order to advance dismissal of something you object to. That is disregard of duties to truth, right reason, prudence [including warrant] etc on a sustained basis. Anti-knowledge, anti-reason, anti-truth. Where, you know what speaking with disregard to truth is about.
You can, should and must do better than such.
KF
AF, you know the relevance of functional bits, which were abbreviated fits for convenience. Your continued hyperskeptical disregard in the teeth of responsibility to truth, right reason, prudence [including warrant] etc speaks. KF
PS, it even speaks theologically, given a warning of scripture (and your attempt to personalise and polarise through Alinsky tactics above):
Shannon was calculating load carrying capacity of telephone systems. Tells us nothing about content or function. Nothing. At. All.
Durston made an honest effort. Problem is it doesn’t work.
To save reinventing the wheel:
http://theskepticalzone.com/wp.....ein-space/
Kirk Durston can be found joining in in the comments.
AF, you are found continuing to refuse to acknowledge first facts and established knowledge. Let us start, what is a binary digit? ____ Why is it that a p-state per digit register has log p/log 2 bits per character information storage capacity? _______ Why is it that in a practical code there will normally be a difference of frequencies of states in normal text? Why then does – H = [SUM] pi log pi give an average value of info per character? _______ Why is this called entropy and why is it connected to physical thermodynamics by the information school? _________ Why can we identify for an n length, p state string that there are p^n possibilities forming a configuration space? Why is it, then, that for codes to compose messages or algorithmic instructions or compactly describe functional states, normally, there will be zones of functionality T in a much larger space of possibilities W? ______ We could go on but that is enough to make a key point clear. KF
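To make the average-information question concrete, here is a minimal Python sketch (illustrative only; the sample string is arbitrary, and the rough letter frequencies used below, e at about 1/8 of text and x at roughly 0.15%, are assumptions taken from standard English frequency tables):

import math
from collections import Counter

def entropy_bits_per_char(text):
    # Shannon entropy H = -sum(p_i * log2(p_i)) over observed character frequencies
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

sample = "functionally specific complex organisation and associated information"
print(f"H for the sample: {entropy_bits_per_char(sample):.2f} bits/character")

# Surprisal of individual letters: common letters carry less information than rare ones.
print(f"'e' at ~1/8 of text: {-math.log2(1/8):.1f} bits")       # 3.0 bits
print(f"'x' at ~0.15% of text: {-math.log2(0.0015):.1f} bits")  # ~9.4 bits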
PS, it is commonplace in physics that while there are established general laws or frameworks, readily or exactly solvable problems may be few. When I did Q theory, no more than three exactly solved problems existed. This has to do with how fast the complexity of real-world problems grows. Approximate modelling is a commonplace. An old joke captures the point. Drunk A meets drunk B under a streetlight, on hands and knees, searching. “I lost my contacts.” So A joins in the search. After a while A asks, “Are you sure you lost them here?” “Oh no, I lost them over in the dark, but this is where the light is.” The context was statistical thermodynamics.
PPS, your debate on sampling protein space does not answer to the core issues above. Further, it is known that there are several thousand protein fold domains, many of which have a few or even just one viable AA sequence, and that there are no handy evolutionary stepping stones from one domain to another. You have hyperskeptically and without good warrant tried to dismiss a readily observable phenomenon, FSCO/I, and in that dismissal you have refused to acknowledge patent facts. Where, you and yours have never solved the problem of moving from a Darwin warm pond to a first, self-replicating, metabolising cell by blind watchmaker mechanisms, exactly because you have no good answer to the origin of FSCO/I by blind watchmaker processes. Speculations do not count; they come to mutual ruin. FSCO/I is routinely and reliably produced by design, and search challenge is a vital issue. We have excellent reason to hold that coded language and algorithmic code are strong signs of language-using intelligence at work in the origin of the cell. It is ideological a prioris that block acknowledging such.
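A minimal sketch of the quantities those fill-in-the-blank questions point at, for readers who want to check the arithmetic themselves (the register sizes and the sample string are illustrative only, not anyone's data):

```python
import math
from collections import Counter

# Storage capacity of one p-state element, in bits: log p / log 2
def capacity_bits(p_states: int) -> float:
    return math.log(p_states) / math.log(2)

print(capacity_bits(2))    # 1.0 bit for a binary digit
print(capacity_bits(4))    # 2.0 bits for a 4-state element, e.g. a DNA/RNA base

# Average information per character, H = -SUM p_i log2 p_i,
# estimated from the observed symbol frequencies of a sample string
def entropy_bits_per_char(text: str) -> float:
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Strictly below log2(27) for this 27-symbol sample, since the frequencies are uneven
print(entropy_bits_per_char("the quick brown fox jumps over the lazy dog"))

# Configuration space for an n-length, p-state string: p^n possibilities
print(4 ** 10)             # ten 4-state bases: 1,048,576 configurations
```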
Alan Fox “argues” like an infant. You cannot bully us, Alan. And clearly you cannot formulate a coherent argument.
Fox pulls “environmental design” from its arse and thinks it is a valid concept.
Durston gets his concept published in peer-review and Fox handwaves it away like the scientifically illiterate loser it is.
Durston concept works. Alan cannot demonstrate otherwise.
Alan Fox:
Your willful ignorance isn’t an argument, either. Functionality is OBSERVED, you obtuse arse!
It should be noted that not one person over on TSZ can demonstrate that any protein arose via blind and mindless processes. They don’t even know how to test such a claim.
The bottom line is people like Alan do not care about science or reality. They live in a world of denial. There will NEVER be any evidence for Intelligent Design in their bitty, closed minds. And they will NEVER support the claims of evolution by means of blind and mindless processes. They are cowards and losers, all. The Skeptical Zone is the new swamp.
PPPS, as a further point, Wikipedia’s admissions on the Mandelbrot set and Kolmogorov Complexity:
This is of course first a description of a deterministic but chaotic system where at the border zone we have anything but a well-behaved, simple “fitness landscape,” so to speak. Instead, infinite complexity, a rugged landscape and isolated zones in the set with points outside it just next door . . . the colours etc commonly seen are used to describe bands of escape from the set. The issues raised in other threads which AF dismisses are real.
Further to which, let me now augment the text showing what is just next door but is not being drawn out:
This gives some background to further appreciate what is at stake.
ET, easy on the language please, remember the broken window theory. We do not need a spiral to the gutter. KF
AF,
A further strawman:
Information carrying capacity [especially with a bound for inevitable noise] is a key upper bound and shows us the maximum possible information. Surely, you are aware of the importance of upper bound and similar limiting results in physics, not least thermodynamics.
Going further, we have a separate way to address functionality vs randomness vs repetitive patterns, as was just laid out by way of K-complexity. Plausible randomness defies specification or description other than by quoting and prefacing itself. Simple repetition can be reduced to prefacing and quoting the repeating block. Functional specificity can be otherwise described with a detachable preface, but there is observable function and there will be resistance to compression, though not usually as strong as for randomness. This brings up redundancy in practical codes.
All of this has been on the table for a long time, objectors using confident manner dismissals and strawman caricatures are being irresponsible and act in disregard for truth.
KF
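One rough way to see the compressibility contrast described above is to use an off-the-shelf compressor as a crude stand-in for Kolmogorov complexity; the sample strings below are purely illustrative:

```python
import os
import zlib

def compressed_len(data: bytes) -> int:
    # DEFLATE-compressed length: a crude upper-bound proxy for K-complexity
    return len(zlib.compress(data, 9))

n = 2000
random_like = os.urandom(n)                   # plausibly random: resists compression
repetitive = b"as" * (n // 2)                 # simple repetition: collapses to a short description
functional = (b"we hold these truths to be self evident that all men are created equal " * 30)[:n]

for name, s in [("random", random_like), ("repetitive", repetitive), ("functional text", functional)]:
    print(name, len(s), "->", compressed_len(s))
# Typical outcome: the random string barely shrinks, the repetition shrinks drastically,
# and the meaningful text compresses somewhat (redundancy) but far less than the repetition.
```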
ET: Functionality is OBSERVED! Duh! Then we go back to see what produced that functionality. Then we quantify it.
Is that in agreement with Dr Dembski’s 2005 monograph Specification: The Pattern That Signifies Intelligence? He seems to argue that you can just do pattern analysis to determine design which, presumably, indicates purpose or function. He seems to argue that he found a metric, a way of testing sequences (like coin flips, his example) to determine if they were ‘designed’ without having observed functionality.
JVL, there is a difference between specified complexity and FUNCTIONALLY specific, complex organisation and/or associated information. CSI looks at detachable specifications/descriptions in general, functionality is about what we can see working based on configuration; it is the material subclass. In his NFL, Dembski clearly identified that in the relevant biological systems specification is cashed out in terms of function, giving a cluster of cites. That is clipped above. KF
Umm, biological specification refers to function.
From Wm. Dembski:
It all comes down to
In other words, all those who espouse naturalized Evolution, and there are millions, cannot find any evidence for it. Ironically, they cannot find any chinks in the ID argument, which is partially about Evolution.
They all know about CSI and understand it though they pretend it is bogus. I explained CSI to my 10 year old niece who immediately saw what it meant and thought it was neat.
This embarrassment is never really an embarrassment as they forge on occasionally finding an “i” not dotted or a “t” not crossed. The real question has always been what drives such absurd behavior.
So has everyone given up with Durston and his fits?
AF, you are still side stepping and refusing to address issues. Start with 255, and also ponder 260. As a start. KF
Nobody in mainstream science gives Dembski’s CSI a thought. Whether bogus or not (I happen to agree bogus is a fair description), the idea never developed to a level convincing enough that refutation was really needed and it is now forgotten and ignored.
AF, lying and slandering rather than addressing issues on the merits; to the point of being confession by projection to the despised other . . . you just let a cat out of the bag about yourself. I again challenge you to address 255 and 260, the latter being an augmentation on the discussion of Kolmogorov complexity informed by considerations tracing to Orgel and Wicken. FSCO/I is anything but bogus, it is an observable. KF
Has anyone ever debunked it?
Answer: No. So challenging ID is reduced to argument by assertion. There is no logic against it since it is based on indisputable mathematics. Yet this nonsense sentence was made:
So why does anyone defend the indefensible? Why do they continually use fallacies to justify their beliefs? We just had another fallacy used to support their position.
As I said they are not embarrassed by this. Why?
Why would we give up on Durston and FITs?
And nobody in mainstream can demonstrate that blind and mindless process produced life and its diversity! Evolution by means of blind and mindless processes is undeveloped. Bogus doesn’t even begin to describe it.
As I have said, Alan “argues” like a child. He would be the worst teammate on a debate team.
A fundamental aspect of science is quantification. And given this:
and
Both Dembski and Durston have risen to the challenge of quantifying it. And all Alan can do is piss himself in their wake.
I did not say “FSCO/I” is bogus. I said I think bogus is a fair description of Bill Dembski’s CSI. I still have no idea what Kairosfocus’ invention of “FSCO/I” is. It has not yet risen to the level of bogus. Maybe it could rise higher but without KF making a decent effort to explain his concept, how to quantify it, maybe an example to show how it works, we’re still in the dark.
AF, more lying. It is Orgel’s and Wicken’s specified complexity and informational, wiring-diagram complexity. Second, Dembski generalised to any detachable specification, acknowledging that for biology it is cashed out in terms of function. As such it is in fact also an observable. When 40 of 41 election ballots go one way, we know something fishy went on, just on the far-tail result. Had it been a racial issue there would not even have been a moment’s hesitation. And when things had to be changed, poof, the miracle vanished. The hyperskeptical stunts being used tell us that objectors have no substance, and the projections of wrongdoing tell us volumes about those who think like that. Confession by projection. KF
PS, what is going on in the Dembski Chi [use X] metric, from my always linked:
In short, this is a metric of specified info beyond a 400 – 500 bit threshold
Jerry: Has anyone ever debunked it?
Is anyone actually using it?
Just those who wish to quantify biology.
Kairosfocus:
X = – log2[10^120 ·pS(T)·P(T|H)].
This can be broken up:
X = – log2[2^398 ·D2·P(T|H)].
Needs explaining I’m afraid. You can’t just replace 10^120 ·pS(T) with 2^398 ·D2 without some kind of explanation or justification or definition (what is D2?).
Or, as – log2(P(T|H)) = I(T):
Requires a lot more explanation. You just tossed away two pieces of the puzzle without explanation.
You’re going to have to either spell out the transitions you gloss over or provide links to explanations.
JVL, it is the product rule for logs and log_2 of 10^120; the neg log probability rule is a 100-year-old base information metric. The Dembski expression is a metric of info beyond a threshold. Base 2 gives bits. KF
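Written out, the decomposition being appealed to is just the product rule for logs applied to Dembski's expression, using the thread's X for chi and pS(T) for phi_S(T); nothing beyond that algebra is assumed:

```latex
X \;=\; -\log_2\!\bigl[\,10^{120}\cdot p_S(T)\cdot P(T\mid H)\,\bigr]
  \;=\; \underbrace{-\log_2 P(T\mid H)}_{I(T)} \;-\; \log_2 p_S(T) \;-\; \log_2 10^{120},
\qquad \log_2 10^{120} \approx 398.6 .
```

So whatever constant is substituted for log2 pS(T) simply joins the roughly 398-bit term as part of a threshold that I(T) must exceed for X to come out positive.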
AF at 269,
Only by you.
AF at 274,
“… we’re still in the dark.”
Not we. Just you.
Hilarious, KF.
The math is trivial. What data do you enter? What does your trivial manipulation show?
That would be something. Not even Kairosfocus is able to tell us how his trivial manipulation of numbers (where the numbers come from is not yet clear) tells us something useful.
TL;DR No.
Seems Kairosfocus is, without specifying, melding his “FSCO/I” into a version of Dembski’s “complex specified information” (CSI). There has been much discussion of CSI here and elsewhere (remember Mathgrrl and UD regulars on how to calculate the CSI of something?). If KF confirms his ‘FSCO/I” is similar to one version of Dembski’s CSI (which one, I wonder) then that’s a very large wheel I don’t need to reinvent.
So does that mean Relatd can explain how to calculate the CSI of something, explain what numbers he is using and what the result signifies?
Surprise me, Relatd.
*continues not holding breath*
People will be resurrecting the explanatory filter next!
Blimey. CSI is famous. It has a Wikipedia entry.
https://en.m.wikipedia.org/wiki/Specified_complexity
Kairosfocus:
Please explain how your D2 relates to Dr Dembski’s pS(T). They don’t appear to be the same thing so that would imply they are not ‘measuring’ the same thing. You seem to drop it anyway which surely can’t be right.
And K2? What is that? Presumably it’s -log2(D2) . . . but you say that K2 has a limit of so many bits but your units don’t carry through. 398 is unit-less is it not?
What is the range of I(T)?
If we start with Dr Dembski’s
X = -log2(10^120•pS(T)•P(T|H)) = -log2(2^398•pS(T)•P(T|H))
= -log2(2^398) – log2(pS(T)) – log2(P(T|H)) = -398 – log2(pS(T)) – log2(P(T|H))
(without using any of your substitutions except for 2^398)
Replacing -log2(P(T|H)) with I(T) loses the reliance on H, which surely is not correct; why did Dr Dembski put it in there in the first place?
And, again, replacing -log2(pS(T)) with K2 without any kind of explanation (and dropping the T) is just not good mathematical practice. You must explain what you are doing.
And you must be cognisant of the units. I do not see how you can get ‘bits’ out of a log expression.
For reference: Dr Dembski defines pS(T) as the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T. How does your replacement compare to that? Notice pS(T) has nothing to do with bits at all. It’s just a number.
P(T|H) will be between 0 and 1 as are all probabilities. So, log2(P(T|H)) will be between -infinity and 0; the smaller P(T|H) is the larger but negative log2(P(T|H)) will be.
So -log2(P(T|H)) will be a non-negative number.
pS(T) is a non-negative number (could be zero). If pS(T) is greater than or equal to 1 that means log2(pS(T)) will be a non-negative number.
So -log2(pS(T)) will most likely be negative.
So -398 – log2(pS(T)) – log2(P(T|H)) could be a negative number. You can’t have a negative number of bits which is why trying to say the expression refers to a number of bits is incorrect.
JVL, are you familiar with the information metric, negative log probability, which gives linear additivity? I have already linked the longstanding note that is always linked, from which the brief excerpt comes and serves to show that Dembski’s metric comes down to info beyond a threshold. I almost hesitate to say, read from here on, on quantification tied to info theory. As for pS(T), note from Dembski: “define pS as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T [in a very large config space W, something instantly familiar from statistical thermodynamics].” I used p for phi, X for Chi and W for Omega as WP makes a hash of greek letters. Obviously the value will always be positive. I simply put up constants as substitutes, here K2. Then, – log2(P(T|H)) –> I(T), by the neg log probability metric of information, base 2 giving bits. Subtract [398 + K2]: X = I(T) – [398 + K2], where we know the latter term peaks at 500 for Dembski. I take a more conservative 500 bits for the sol system, 1,000 for the observed cosmos. As fair comment, your inferences and attempted correction — that “You can’t have a negative number of bits which is why trying to say the expression refers to a number of bits is incorrect” — reflect lack of familiarity with the physical and information theory context in Dembski’s paper. And so forth. KF
AF, you continue to dig yourself further into the hole you are in. I draw your attention to the basics that you have yet again side stepped, from 255:
Similarly, I point to 260 above, where I drew out and extended issues tied to Kolmogorov complexity by adding in what is next door. This too you have dodged, the better to play at further rhetorical stunts. KF
F/N: The point of the above is that it is highly reasonable to use a threshold metric for functional, configuration-based information, one that identifies the span beyond which the inference to design is warranted.
First, our practical cosmos is the sol system, 10^57 atoms, so a 500-bit threshold:
FSCO/I: X_sol = FSB – 500, in functionally specific bits
Likewise for the observable cosmos,
X_cos = FSB – 1,000, in functionally specific bits
And yes, this metric can give a negative, bits-short-of-threshold value. Using my simple F*S*B measure, the dummy variables F and S can be 0/1 based on observation of functionality or specificity. For a 900-base mRNA specifying a 300 AA protein, we get
X_sol = [900 x 2 x 1 x 1] – 500 = 1300 functionally specific bits.
Which is comfortably beyond threshold, so redundancy is unlikely to make a difference.
Contrast a typical value for 1800 tossed coins
X_sol = [1800 x 0 x 0] – 500 = – 500 FSBs, 500 bits short.
If the coins expressed ASCII code in correct English
X_sol = [1800 x 1 x 1] – 500 = 1300 FSBs beyond threshold, so comfortably, designed.
[We routinely see the equivalent in text in this thread and no one imagines the text is by blind watchmaker action.]
A more sophisticated value using, say, the Durston et al. metric would reduce the excess due to redundancy, but with that sort of margin there is no practical difference.
Where, in the cell, for first life just for the genome [leaving out a world of knowledge of polymer chemistry and computer coding etc] we have 100 – 1,000 kbases. 100,000 bases is 200,000 bits carrying capacity, and again there is no plausible way to get that below 1,000 bits off redundancy.
Life, credibly, is designed.
KF
PS, there has already been citation in the thread from Dembski on the definition of CSI and how in cell-based life it is cashed out on function. I note, the concept, as opposed to Dembski’s quantitative metric (which boils down to functionally specific info beyond a threshold), traces to Orgel and Wicken in the 70’s. This was noted by Thaxton et al in the 80’s, and Dembski, a second-generation design theorist, set out models starting in the 90’s.
PPS, as for Wickedpedia on such a topic, slander is the standard, worse than useless.
PPPS, Mathgrrl turned out to be plagiarising someone’s handle, to be a man and a fraud who did not understand logs. The above stands beyond his raft of specious objections over a decade ago.
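For readers who want to replay the threshold arithmetic in the worked cases above, a minimal sketch (the 0/1 flags, capacities and the 500/1,000-bit thresholds are taken from those cases; the function name is illustrative):

```python
def fsb_beyond_threshold(capacity_bits: float, functional: int, specific: int,
                         threshold: int = 500) -> float:
    # X = F * S * B - threshold, with F and S as 0/1 observation flags
    # and B the information-carrying capacity in bits
    return functional * specific * capacity_bits - threshold

# 900-base mRNA for a 300 AA protein: 900 x 2 bits/base = 1800 bits capacity
print(fsb_beyond_threshold(900 * 2, 1, 1))                 # 1300 beyond the sol-system threshold
# 1800 tossed coins in no particular order
print(fsb_beyond_threshold(1800, 0, 0))                    # -500, i.e. 500 bits short
# The same 1800 bits expressing correct English in ASCII
print(fsb_beyond_threshold(1800, 1, 1))                    # 1300 beyond threshold
# Against the observed-cosmos threshold instead
print(fsb_beyond_threshold(1800, 1, 1, threshold=1000))    # 800 beyond threshold
```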
I often make the claim that the obvious is ignored on UD by both sides of the debate.
This indicates that commenters are not really interested in understanding or explaining. For example, a CSI calculation was provided but ignored. Now, there may be some need of minor corrections but essentially it illustrated CSI.
Here is a video that was presented on UD explaining the calculation of CSI. Some is simple while other parts will require more concentration.
https://www.youtube.com/watch?v=5CWu_8CTdDY&t=217s
Thank you, Alan Fox, for proving that you are scientifically illiterate! The explanatory filter is standard operating procedure for science. It forces us to honor Newton’s 4 rules of scientific reasoning, Occam’s Razor and parsimony.
Just how ignorant are you?
Alan brings up mathgrrl. mathgrrl was shown to be a willfully ignorant troll. Pretty much just like Alan Fox. A coward who couldn’t support the claims of its own position if its life depended on it!
F/N: Wiki as cited back then on the tie in between information and entropy:
Now, Harry S Robertson in his thermal physics:
Remember, this is the road I actually travelled on.
KF
Kairosfocus: we know the latter term peaks at 500 for Dembski
How do we know that?
Let’s just review what Dr Dembski actually says in his 2005 monograph: Specification: The Pattern That Signifies Intelligence.
From page 24:
And then later on page 24:
No mention of bits or anything like that. He’s just computing what he calls the specified complexity.
From earlier, page 18:
Note that nowhere does Dr Dembski suggest substituting some constant or number of bits for any of his terms.
Now, I suggest that replacing any of Dr Dembski’s terms with constants or estimates, which he himself did not choose to do, is potentially to misunderstand what he intends. And, if there is any doubt, he worked out an example on pages 22 and 23:
Notice the result of -20 which clearly cannot have units of number of bits.
If you choose to vary from what Dr Dembski derived and wrote then I suggest you are not measuring the same thing he attempted to measure. If you’d like to work out an example using both methods and show that they work out the same then please do so. That would settle the issue, so I’d recommend that. If you can.
JVL, the 500 bits is there in Dembski’s 1 in 10^150, as 2^500 = 3.27*10^150. As for bits, once you have a negative log base 2 of a probability it is information in bits. All I did was show that Dembski’s framework on reasonable terms reduces to a bits-beyond-a-threshold metric, then used the bits. KF
Trivial manipulation of numbers it is then.
Kairosfocus: As for bits once you have a negative log base 2 of a probability it is information in bits.
Uh huh. And where does it say that? That would depend on what kind of probability you were talking about wouldn’t it? And, if that’s true then why didn’t Dr Dembski make the same assumptions you did?
1 in 10^150 as 2^500 = 3.27*10^150
???? Where is that? What happened to the 3.27 then?
Look, you’re going to have to do a lot better job explaining how and why you derived what you did. AND, at the very least, work out an example using your and Dr Dembski’s methods to show they get the same conclusion. And, may I point out again, that his worked out example does not involve numbers of bits.
Jerry at 294,
The purpose of propaganda, and lies, is to keep repeating them regardless of the truth.
JVL, probabilities are automatically indices of relative ignorance and knowledge, save when firmly 1 or 0. The eventuation, especially, of a low-probability state is to some degree a surprise, and that is a related concept. So probabilities are inherently informational. Negative logs give simple additive properties [etc] and base 2 gives bits; indeed, IIRC, that is how Hartley came up with the now ubiquitous contraction; nats, for log_e, are also used, though far more rarely, in the information context. As for the 3.27, we accept the order of magnitude, as approximately 498.29 is rather inconvenient. KF
AF, hardly trivial; we are in the context of the breakthrough thoughts that opened up the information and telecommunication age. Shannon and Weaver probably should have had a Nobel. Drawing a link out is important. One that, BTW, makes sense: a metric of information beyond the threshold where blind forces are plausible is a key approach. KF
Kairosfocus: probabilities are automatically indices of relative ignorance and knowledge,
Uh huh. Funny that Dr Dembski and Dr Behe have both made probabilistic arguments for design.
probabilities are automatically indices of relative ignorance and knowledge,
As well as being indices of relative ignorance and knowledge.
Negative logs give simple additive properties [etc] and base 2 gives bits
Not necessarily. You want that to be true to justify your interpolations.
As for 3.27, we accept the order of magnitude as 498.29 approximately is rather inconvenient.
The point is it doesn’t appear in your calculation.
Look, why don’t you work out the example in Dr Dembski’s monograph and show that you get the same result (-20) that he did. That will show that you are not distorting his metric.
If you can work out that example in your own way that is. We shall see.
JVL, with all respect, obviously, you are not familiar with information theory 101. I am not saying anything exceptional in speaking of negative log probability metrics and base two as giving bits; it goes back to Hartley. What I did was show how a threshold emerges, and I noted the threshold that was suggested as a yardstick. You will note, I use it for sol system scope and use its square for the observed cosmos. Dembski does something a bit different in looking for his 1/2 point. Related but different. KF
PS, my old 1971 edition Taub and Schilling, Princs of Communication, p. 415:
Of course, log [1/pk] is – log [pk], and if pk and pj apply to two independent messages, the product 1/pk * 1/pj is such that Itot = Ik + Ij. For pk = 1/2, we get Ik = 1 bit, what a fair coin flip would give. With a bias, so that say H is more likely and T less likely, the bias reduces the info carried by the more likely outcome and raises that of the less likely.
And so forth.
PPS, the linked note accessible through my handle gives more.
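The additivity point in the Taub and Schilling excerpt can be checked directly; a small sketch, with the probabilities chosen only for illustration:

```python
import math

def info_bits(p: float) -> float:
    # Self-information of an outcome with probability p, in bits
    return -math.log2(p)

def info_nats(p: float) -> float:
    # The same measure in nats (natural-log base)
    return -math.log(p)

print(info_bits(0.5))        # 1.0 bit: a fair coin flip
print(info_nats(0.5))        # ~0.693 nats, the same quantity in base e

# Independent messages: probabilities multiply, information adds
pk, pj = 0.5, 0.125
print(info_bits(pk * pj))                  # 4.0
print(info_bits(pk) + info_bits(pj))       # 4.0

# Biased coin: the likelier outcome carries less information, the rarer one more
print(info_bits(0.9), info_bits(0.1))      # ~0.15 and ~3.32 bits
```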
Alan Fox:
Always a punk when its ignorance is exposed.
It is neither trivial nor a manipulation. Your ignorance is not an argument, Alan. And a coward such as yourself cannot bully us.
JVL continues to conflate CSI with specification. You are beyond dishonest, JVL.
JVL quoting Bill Dembski’s
Couple of queries. T in the formula is defined as both a “pattern” and an “event” and H as a “chance hypothesis”. So can someone give a number for an event? The bacterial flagellum is not an event, for example, but a sequence of selected steps. And “chance hypothesis”? Dembski is ruling out non-random processes such as selection a priori. Can anyone explain how this formula addresses reality, where processes develop influenced by non-random selection? If selection is ignored by Dembski’s formula, what use is it?
ET: JVL continues to conflate CSI with specification.
From Dr Dembski’s 2005 monograph:
Of course some of the Greek letters are not correctly rendered. But, it’s clear, Dr Dembski defines specificity and incorporates that into his definition of specified complexity.
Anyway, I’m happy to concede a lot of points if Kairosfocus and/or ET can work through Dr Dembski’s example in his monograph (where he gets a result of -20 for X) using Kairosfocus‘s reworking and get the same result.
I shall await your elucidation.
AF & JVL,
First, you are both trying to run before you can crawl, which leads to conceptual confusion.
Second, there is a longstanding framework, actually linked to every comment I make through my handle, which from here on goes through info theory basics. In that context, you will see — as in Taub and Schilling, a reference I used as a textbook, which I pulled, typed out and uploaded only to have you both instantly ignore — that info is measured as a negative log of probabilities. Base 2 gives bits [base e, nats; base 10, Hartleys]; this is also in Bartlett’s summary and the video Jerry reminded us of. I assume you both know enough to know log [a*b*c] = log a + log b + log c. That is all that is needed to see what I did, and why. Of course, p(T|H) is talking about the probability of blind search processes finding a target zone T in a larger config space W; T can be contiguous or a dust, matters not. Whatever Dembski did to find himself 20 bits short of threshold makes little difference to the validity of my drawing out a bits-beyond-threshold metric from his equation.
And, I observe too that both of you sidestepped when I answered your previous demand for a calculated value relevant to biology etc, see 293 above, which is more than enough to establish the basic point that the only plausible causal factor to explain FSCO/I beyond threshold is design . . . which you both ducked. Jerry has a similar observation about ignoring, though he tried to soften his punch by making a both sides remark.
That behaviour on your part tells me you are not really interested in what is warranted but only to spin out objections and cause needless confusion.
Not good enough.
Moreover, I already excerpted Dembski in the always linked, as he explains his expression, and I now clip from that, though if you struggle to see that – log2[probability] –> info in bits, you will struggle far worse with his equation:
We can instantly deduce:
–> X is in bits
–> – log2 {} will give bits
–> 10^120 ·phiS(T)·P(T|H) is a three term product expression within a logs to get info operation
–> 10^120 becomes 398 bits as part of a threshold, and p(T|H) neg logged is information, so I(T)
–> phiS(T) is a number and becomes a further part of the threshold, once operated on by logging, hence substitution, as was pointed out and ignored
–> We know separately that Dembski sees 500 bits as threshold, as was pointed out and again ignored [no progress can be made if there is unwillingness to acknowledge even basic facts, so this is now exposure of willfully obtuse hyperskepticism]
–> the threshold therefore ranges up to 500 from 398, where 2^498.29 gives the more exact 10^150 but rounding to 500 is about the same, again as noted and used to double down on objections
–> we therefore can infer that as a reasonable metric, “CSI lite” X = I(T) – 398 – K2, with upper bound X = I(T) – 500.
–> My further point is, we start with info carrying capacity, given that say a fair coin with p(1) = 1/2 gives 1 bit, and noting that anything else can be reduced to bits, then address observable functionality and configuration based specificity, represented by dummy variables F and S, and take product. Redundancy can be addressed onward, in practical cases we are so far beyond threshold that it is immaterial, as shown already
–> we then can do a csi lite model X = FSB – 500 or 1000, as shown.
–> the result is as already seen; unsurprisingly, the CSI in relevant contexts is so far beyond threshold that redundancy is immaterial
It is obvious the material question is answered; sidestepping, going after red herrings led away to strawmen, etc., are only distractive.
The problem you face is that FSCO/I is real, is observable, is well beyond threshold and strongly points to intelligently directed configuration as a key causal factor for the cell. Where, CSI in bio contexts is cashed out as function, i.e. it comes down to functionally specific complex organisation and/or associated information.
That is the pivotal point, and you are consistently coming up short.
KF
Kairosfocus:
I am not trying to run or walk; I’m trying to see if you can use your own version of Dr Dembski’s specified complexity metric on the same example he worked out in his 2005 monograph and get the same result, -20. If you don’t get the same result then you are measuring something different from him. Which is fine but then you have to explain why you are measuring something different because his work is perfectly legible and easy to follow.
I don’t need any more lectures about basic mathematics; I don’t need any more theoretical discussions of the nature of information. I can follow the mathematics Dr Dembski uses just fine. Just show me you can get the same thing with your version, please. You came up with it; you should be able to use it on a simple example. And, again, if you get a different result then please interpret your result based on what Dr Dembski says the metric is for.
Wow! JVL doubles down on its ignorance! Buy a vowel, JVL.
Alan Fox:
That is the propaganda, anyway.
The problem is there isn’t any evidence that blind and mindless processes can or did produce any bacterial flagellum. There isn’t even any way to test the claim that they can or did.
Wrong again! First, natural selection is non-random in a trivial sense in that not all variants have the same chance of being eliminated. Next, NS doesn’t do anything until the motile device is up and running.
Alan Fox is just clueless and apparently proud of it.
There isn’t any evidence that natural selection can or did produce any bacterial flagellum. There isn’t even any way to test the claim that NS can or did.
All Alan can do is whine because he is too stupid to understand reality. The reality is the ONLY reason probability arguments exist is because there isn’t any actual evidence
ET: Wow! JVL doubles down on its ignorance! Buy a vowel, JVL.
Can you apply either Dr Dembski’s specified complexity metric or Kairosfocus‘s version on any example? The one Dr Dembski works through is okay but clearly being able to analyse other number sequences (as Dr Dembski does) would be interesting.
A yes or no answer to start with would be sufficient.
@ JVL
Not sure Yes or No will help. ET seems convinced it’s already been done, though he won’t tell us where, when, or by whom.
Again, JVL is conflating 2 different things. His fixation with the 2005 paper “Specification” is his downfall.
Earth to Alan Fox- Durston did it in peer review. And all you can do is choke on it! You are beyond pathetic.
@ET reading messages on UD is not good for your health. A fox will act like a fox while you try to convince the fox to act like a dove. Let the fox be the fox and keep calm 🙂
ET, LtComData;
Can anyone use either Dr Dembski’s specified complexity metric or Kairosfocus‘s variation on a simple example?
Yes or no?
I can easily use Dr Dembski’s metric on a coin tossing example. It’s not that hard.
JVL, what I did is enough to establish my point. Dembski’s work simply shows that he is 20 bits short of whatever threshold he set. As you were told already. Your doubling down and unresponsiveness simply show that you have nothing substantial to say about the implications of negative log probability and the addition rule for logs, which lead to a threshold metric. KF
PS, you were already shown an example at 293 above and were reminded of it. Your pretence fools no one.
Kairosfocus:
I’m not doubling down or being unresponsive; I’m asking if you, personally, can use Dr Dembski’s metric or your version of it to deal with a particular example.
In response 293 you did mention some general cases; I’d like to see you deal with a particular case and compare and contrast approaches. What do you say? Something concrete and not subject to interpretation.
I know about the addition rule for logs; that doesn’t change the final output. I just want to see if, when we apply your standard and Dr Dembski’s standard to the same example, we get the same result. I’d prefer to deal with an example that is not covered in Dr Dembski’s monograph, if that’s okay with you?
What do you say? How about flipping a coin five times and getting five heads? Shall we work on that? I can do the work for Dr Dembski’s metric; can you do yours?
Nope. Fits, Joe, not bits. Do try to keep up.
He’s calm. If he were stressed he’d start mis-spelling.
Oh wait …
JVL, side tracking. I established that per standard info theory negative log base 2 probability gives bit metrics for information. In that context, additional factors establish a threshold to be surpassed, just from the algebra involved: X = I(T) – [log c + log d + . . . ]. There is reason to see Dembski as favouring a 500 bit threshold. Going beyond, it is easy to see reasonable cases: e.g., 300 AA is the commonly given average for proteins, thus 900 bases at 2 bits carrying capacity per base is 1800 bits carrying capacity, 1300 bits beyond a sol system threshold. That is more than enough to make any reasonable degree of redundancy irrelevant. For one typical protein. Genomes run to 100 – 1,000 kbases for first life and 10 – 100+ million bases for major body plans such as make an arthropod. Again, vastly beyond blind search thresholds. That is what is material, and it is sufficiently realistic for a reasonable person. There is reason to conclude that cells are designed and body plans are designed. That is what is material. And to date, are you willing to acknowledge that neg log base 2 of a probability is a standard info metric that gives a key additive property? Y/N, why? _____ KF
PS, that Dembski was 20 bits short of his implied threshold [all, bound up in the math as again pointed out . . . ], for whatever reason, makes no difference to the main point. Similarly, other comparatives show the reason why this is how information has been measured. A fair coin is a 1 binary digit register and can store – log2[1/2] = log2 [2] = 1 bit. 1800 bits of text for ASCII is 257 characters and 257 characters in good English would without question be taken as designed. And more.
Alan Fox:
A distinction without a difference, Alan. Do try to keep up.
FIT stands for Functional Bit, Alan. You clearly love to expose yourself as the fool, eh…
Let me get this straight, Joe. Kirk Durston’s metric, “fit”, is exactly the same metric as KF’s “bit” and Dembski’s measure of CSI? This is what “ET” is claiming on the 8th August at 4pm EST?
*chuckles* I rest my case.
So, your case is that you are a moron? In what way does the adjective “functional” change the fact that a bit is a bit? The adjective “functional” specifies the type of bit.
Are you really completely ignorant of information?
Let me get this straight- for YEARS I have been telling Alan and his minions that FSC = CSI = FSCO/I. And only now is it starting to sink in cuz he thinks he has some imaginary points to score?
FSC is functional sequence complexity
CSI is complex specified information. It is a form of FSC.
FSCO/I is functionally specific complex organization or information. A form of CSI/ FSC and SC (specified complexity)
Even elementary school kids can see how they are all related.
Oh my aching sides! 🙂 🙂 🙂
Related bits now. Please stop, I can’t take much more. 🙂
OK, maybe I’m being a bit (heh) mean to Joe. But a bit is:
…the most basic unit of information in computing and digital communications. The name is a portmanteau of binary digit. The bit represents a logical state with one of two possible values.
I guess it’s possible some bits are functional and some are pink.
Wow! No, you are not being mean to me. You are just exposing your ignorance of the subject
The bit pertains to information-carrying CAPACITY. A FUNCTIONAL bit pertains to the actual information.
Related CONCEPTs, duh!
Again, for the learning impaired:
“We have extended Shannon uncertainty by incorporating the data variable with a functionality variable. The resulting measured unit, which we call Functional bit (Fit), is calculated from the sequence data jointly with the defined functionality variable.”
Science says that in the lab, under observation, chemistry [matter] doesn’t build functions.
Anyway there are some rumors that long ago and far away, under a rock .. 😆
AF, predictable. You have nothing substantial so you resort to snideness. It looks like we need to go back to 255:
The obvious bottomline is that you feel you must object to and hyperskeptically dismiss what you have not troubled to try to understand. For years. KF
PS, I have spoken of functionally specific bits. That is related to but different from info carrying capacity. A chain of 500 coins in no particular order can carry info but will most likely express gibberish. But if we find an ASCII string with meaningful English text or code for an algorithm that is a different matter. In the cell, we have found copious algorithmic code for protein synthesis.
Come on, KF, you quoted a definition of a “bit’ yourself. It’s the smallest unit of binary information, having a value of either one or zero. Bits cannot be distinguished by their functionality any more than they can by their pinkness.
The ID claim is that there is a reliable way to look at a representation of “information”, such as a sequence or pattern of binary digits, and, knowing nothing else about that sequence or pattern, by some trivial math operation be able to say whether the pattern or sequence contains “functional” information or not.
This has not yet been done.
Can you show how to calculate the likelihood? Most likely? Very likely? Somewhat likely? Can you be more precise? A computation?
AF,
you continue to double down.
It is directly because of non-response and misinformation that I pulled my older edn of Taub and Schilling. Had you deigned to look in my always linked, you would have seen a more detailed discussion starting from Connor’s Signals, something that is over a decade old. You have sought to gin up a needless polarisation. And even now, you have yet to acknowledge that negative log2 probability is an information metric in bits, or that – log2[p(T|H)*c*d* . . .] will, by the product rule for logs [and underlying exponents], reduce to an information-beyond-a-threshold value, through the algebra involved. Which is why I noted it by citing my always linked.
Of course the base metric is that of information carrying capacity. Where, practical encodings invariably have redundancies that mean the neg log probability value is an upper bound. We may readily see from a coin, a two state register element in this context, how – log2 p(1) = 1 bit. Kolmogorov Complexity and compact compression in principle allow us to estimate functional information content, as was pointed out by augmenting Wikipedia at 293. Which, predictably, you sidestepped and pretend does not exist. Shame on you.
Going on, you try to make a mountain out of the molehill of a simple description: as any description can be expressed in bits, WLOG the binary expansion of a result or observation follows a bell distribution, one dominated by near 50-50 H/T, with the overwhelming majority being gibberish strings in no particular order. Any reasonable person would accept this, and would further realise that we cannot in advance generate a universal decoding algorithm that identifies each and every functional sequence of bits. I suspect you know this and are trying to use it to compose what you think are clever objections. Instead, you are only showing desperation to distract from what we can and do know readily and adequately.
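The “bell” just described can be checked by direct counting: for, say, 500 fair coins, the share of all 2^500 sequences whose head count lies near 50-50 is overwhelming (pure combinatorics; the +/- 25 window is an arbitrary illustration):

```python
from math import comb

n = 500
total = 2 ** n

# Fraction of all n-coin sequences whose head count lies within +/- 25 of n/2
near_half = sum(comb(n, k) for k in range(n // 2 - 25, n // 2 + 26))
print(near_half / total)    # ~0.98: the overwhelming bulk of configurations

# By contrast, any one fully specified sequence is a single configuration out of 2^500
print(1 / total)            # ~3e-151
```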
We observe functionally specific, complex organisation and/or information, reduce it to bit strings and seek causal explanation of the complex functional information. There is an effective result. Once functional information is beyond 500 – 1,000 bits of complexity, reliably, it comes from design. Neither you nor any other objector can give us actually observed counter examples. A decade plus past, many tried and failed. So, objecting, denialism tactics have shifted.
Which is what we are seeing.
Stubborn denial of and objection to empirically well supported reality.
Because, you are desperate to hyperskeptically deny or dismiss something at the heart of the design inference. By, speaking with utter disregard to truth.
You now try to deny that functional sequences [such as your own text in English or code for an algorithm. . . as in mRNA for protein synthesis . . . or DWG code for say an ABU 6500 C3 reel] can be distinguished from gibberish like hkftdhvfsglvgo[8wdbblhyud or repetitive strings like asasasasasasas which is patent nonsense. The facts don’t fit your ideology so you try to get rid of them.
Going on, you set up and knock over a strawman caricature of not only Dembski or myself but even Yockey and Wicken. Which latter you have never acknowledged. Functionality based on complex configuration and its wider cousin, specified complexity are observed in the here and now and may be given as detachable descriptions. We are doing science, so that should be no surprise. The question at stake being, how could/did such come about.
The answer is, reliably — trillions of cases, by design.
There is good reason to infer design as cause of the complex algorithmic code in the cell, and in body plans. Further, code is language and algorithms are knowledge based stepwise, goal directed processes.
Signatures of design.
KF
Kairosfocus: I established that per standard info theory negative log base 2 probability gives bit metrics for information.
But that’s NOT what Dr Dembski is trying to do!!
From his introduction:
Notice there is no demand that functionality has to be observed first.
Now, you choose to break down Dr Dembski’s formula in the following way:
X = -log2(10^120•pS(T)•P(T|H)) = -log2(10^120) -log2(pS(T)) -log2(P(T|H))
(which is completely unnecessary but whatever.)
Again from Dr Dembski’s monograph:
Note he is talking about the product of pS(T) and P(T|H) not them taken separately. But, more importantly, does Dr Dembski himself even look at a particular problem in that way?
Nothing about bits there . . .
Nothing about bits there . . .
Please note that the examples that Dr Dembski works through are either sequences of heads or tails (represented as 0s and 1s but that’s not necessary) or a sequence of numbers. NONE of his examples are of great length. Nor does he allude to 500 bits as being some important threshold.
Again, nothing about bits there. AND you can’t have -20 bits. Because HE’S NOT COUNTING BITS! He’s trying to see if chance can be eliminated, as he clearly says!! It’s not based on the size of the sequence he’s considering!
No mention of bits anywhere. He just wants to see if X is > 1. That’s it.
Nothing about bits at all. In fact, he’s not even considering something numerical (the bacterial flagellum).
No bits to be seen because part of the point is to be able to analyse things that don’t match a pre-supposed limit!!
In fact, Since all Dr Dembski cares about is whether or not his X is greater than 1 he could have used log10 and compared 10^120•pS(T)•P(T|H) to 1/10 by adding the appropriate constant.
In none of his examples does he talk about the number of bits he’s looking at being significant. He chose to use log2 (because of its association with information theory?) and he uses 10^120 as an upper bound because of the number of binary operations, but he does not convert the parts of his formula into numbers of bits. He just doesn’t do that.
And, again, I’m happy to work through Dr Dembski’s formula for a simple example (say getting five heads in a row) if you can work through your version as well. And then we can compare the results.
I assume you can use your version . . . since you proposed it. You can use it can’t you?
Alan Fox:
Your willful ignorance is not an argument. Durston explained it. What part of the explanation are you too stupid to comprehend?
Again, for the learning impaired- Functionality is observed. Actual information, ie meaning, is observed.
Nope. You clearly don’t know what you are talking about.
If the information isn’t measurable, then we use the specification metric to see if an object, structure or event was intelligently designed or not. However, if we have life, which is full of measurable information, then we can use Durston’s methodology.
JVL- go soak your head! You are using the wrong metric. How many times do you have to be corrected on this? It’s like you are a one-track minded infant.
Read “No Free Lunch” and stop acting like such a loser crybaby.
ET: Read “No Free Lunch” and stop acting like such a loser crybaby.
Again, to quote Dr Dembski himself where he says he is extending the work done in No Free Lunch:
You are using the wrong metric. How many times do you have to be corrected on this?
I am discussing the metric that Dr Dembski published in 2005. I am supporting things I say with quotes from his monograph. I have offered to compute his metric for a simple coin-tossing example and have repeatedly asked you and Kairosfocus if you want to use the metric proposed by Kairosfocus, and then we can compare results and discuss any differences. You and Kairosfocus have declined to try this, for reasons known only to yourselves. If you’d like to use yet another metric then be my guest. I do think that Dr Dembski (PhD in mathematics) thought long and hard about design detection and came up with his metric thinking that it would actually work, which is why I’d like to compare its results with other schemes. Please note that he does state he’s trying to deal with a situation “even if nothing is known about how they arose”, meaning that it can be applied to sequences of numbers or 0s and 1s without being told how they were generated.
I’m willing to give his metric a shot at some examples. Are you willing to do something similar?
Again, 2 different things. I explained it. You ignored the explanation and chose to prattle on like a child.
The metric proposed in No Free Lunch also pertains to “Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?”
But this makes me wonder- what metric do people use for evolution by means of blind and mindless processes? They don’t have one! Evos just use bald assertions and declarations.
ET:
So, you don’t want to test Dr Dembski’s metric from his 2005 monograph which he says “reviews, clarifies, and extends previous work on specification in my books The Design Inference and No Free Lunch” which, to my mind, means the 2005 monograph is the preferred and superior version.
Is it because you can’t do the math involved with Kairosfocus‘s that you don’t want to try a couple of examples?
I’m happy to use the 2005 metric and compare that to the one in No Free Lunch if you want. How about we look at a coin-flipping test? Surely if they are both about detecting design (with no prior knowledge of the objects origins) then they are comparable since they are trying to do the same thing?
No one cares what your mind thinks, JVL. And seeing that archaeologists and forensic scientists don’t use it, I don’t see the fuss.
Again, for the learning impaired- NFL and specification are two different metrics used to see if something was intelligently designed or not. NFL pertains to CSI in which bits are easily measured. Specification pertains to an object/ structure/ event that isn’t easily amenable to bits.
However, we also have tried and true design detection techniques which rely on our knowledge of cause-and-effect relationships. I have several decades of experience with this methodology. Whereas Dembski doesn’t have any.
In the end, I don’t care what you do. If you want me to do something for you, you have to pay me.
JVL, the reliable observable indicator or sign of intelligently directed configuration is FSCO/I. Second, whatever Dembski may have said, once we have – log2[probability*c*d] we have an info beyond threshold. That is objective and established given how info is measured and why. Indeed, just the fact that Dembski used negative base 2 logs implies he was aiming at info measured in bits. The connexion between Dembski and X = I(T) – threshold level [typ. 500 bits] is objective. The issue onward is to measure functional info, beyond info capacity. That is because practical encodings, whether direct or by description language such as AutoCAD DWG etc, have redundancies, which partly help with resisting noise effects. Kolmogorov complexity and compact compression allows that, but even before such once one is far enough beyond the already conservative threshold, redundancy makes no material difference. Just with a typical 300 AA protein, we are well beyond threshold, much less with a genome. Life and body plans, on contained FSCO/I, are designed. KF
Not even if you assume instantaneous random assembly which is not what happens during reiterative steps of variation and selection. You still fail at knocking over straw men.
Additionally, even using your nonsense assumption of all-at-once random assembly, Keefe and Szostak showed the wealth of functionality in random protein sequences. You also fail dismally at trying to join dots between sequence and function.
PS, –log2[10^120 ·pS(T)·P(T|H)] > 1 if and only if P(T|H) < 1/2 ×10^-140 implies bits.
ET: No one cares what your mind thinks, JVL. And seeing that archaeologists and forensic scientists don’t use it, I don’t see the fuss.
Dr Dembski will be glad to know that all the work he put into his 2005 monograph were wasted since no one wants to test it on some examples.
NFL pertains to CSI in which bits are easily measured. Specification pertains to an object/ structure/ event that isn’t easily amenable to bits.
The 2005 formula handles both.
However, we also have tried and true design detection techniques which rely on our knowledge of cause-and-effect relationships. I have several decades of experience with this methodology. Whereas Dembski doesn’t have any.
Dr Dembski thinks he found a way that’s better than that; when you don’t need to know anything about the origin of the thing in question. Shame no one takes it seriously.
It’s all very simple.
Bits can be directly equated with a probability. 6 bits = 1/64; 7 bits = 1/128.
500 bits describes every particle in the universe at every nanosecond or state transition since the Big Bang. Something with a probability corresponding to more than 500 bits is essentially impossible by random processes.
A distribution that is ordered, such as 1000 Heads or 500 coins in a pattern, is different from any random distribution, and its probability can be calculated based on the specific pattern. So to say 1000 H and any other random distribution are equivalent is nonsense. Their probabilities are spectacularly different.
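The bits-to-probability correspondence above is just 2 raised to minus the bit count; a one-line sketch:

```python
def bits_to_probability(bits: float) -> float:
    # A result "worth" b bits corresponds to a probability of 2^(-b)
    return 2.0 ** (-bits)

print(bits_to_probability(6))      # 0.015625   = 1/64
print(bits_to_probability(7))      # 0.0078125  = 1/128
print(bits_to_probability(500))    # ~3e-151, the 500-bit threshold as a probability
```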
Everyone reading this thread has been given the explanation for this but it is ignored.
But the purpose of commenting here is not to understand but to find any small way one can to hold forth, mostly with nonsense or unnecessary complexity.
Alan Fox:
Clueless. You don’t have any evidence that blind and mindless processes can produce any proteins, Alan. You don’t even have a way to test the claim.
Great. Too bad you can’t demonstrate that blind and mindless processes produced them. And it wasn’t a wealth of functionality. It was barely any functionality.
JVL:
The ONLY reason such a paper was written is because you and yours have nothing. And Dembski proved it.
Oh my. Now you know what Dembski thinks. What a putz.
That’s what archaeologists and forensic scientists do.
Dembski just provided a way to quantify it.
I do.
It’s been presented several times. The punctuated equilibrium adherents claim random accumulation of variations to genomes is what produces new proteins. Easily tested in the sense of what to do but finding money/resources/willingness is extremely difficult.
Kairosfocus: once we have – log2[probability*c*d] we have an info beyond threshold.
You just have a number which Dr Dembski wants to compare to 1 to see if there is specified complexity. As he clearly stated.
Indeed, just the fact that Dembski used negative base 2 logs implies he was aiming at info measured in bits.
Which he never says in any of his examples; and he worked out an example where he got X approximately equal to -20, which he does not note as weird or impossible, which he would have done if he were talking about bits. Clearly.
The connexion between Dembski and X = I(T) – threshold level [typ. 500 bits] is objective.
Which is not what he did working out any of his examples.
That is because practical encodings, whether direct or by description language such as AutoCAD DWG etc, have redundancies, which partly help with resisting noise effects. Kolmogorov complexity and compact compression allows that, but even before such once one is far enough beyond the already conservative threshold, redundancy makes no material difference. Just with a typical 300 AA protein, we are well beyond threshold, much less with a genome. Life and body plans, on contained FSCO/I, are designed.
Look, you clearly are interpreting his 2005 metric different than he did himself. Which is why I’d be interested in comparing his with yours for a simple, easy-to-compute, example. Why not do that and see? I can carry out his math. Shall we?
P(T|H) < 1/2 ×10^-140 implies bits.
1/2 x 10^-140 is a very, very, very small positive number, which cannot be a number of bits. And you continue to disregard pS(T), which he spends lots of paragraphs developing, so it must be important, and it depends on T. In fact, as he says clearly, X cannot solely depend on P(T|H) because very random results are also very improbable. Any sequence of Hs and Ts is just as likely/unlikely as any other given a random generation. That’s why you need pS(T)! Dr Dembski explains all this.
Shall we try both methods and compare results? I’d start with something easy so the math is very clear and then work up to more complicated examples.
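JVL's point about P(T|H) on its own can be seen with a toy case: under a fair-coin chance hypothesis, every fully specified sequence of flips has exactly the same probability, patterned or not, which is why the specification factor pS(T) has to carry the extra work. A minimal sketch (the sequences are illustrative):

```python
import random

def prob_specific_sequence(n_flips: int) -> float:
    # Under the fair-coin chance hypothesis H, any one fully specified
    # sequence of n flips has probability (1/2)^n, whatever its pattern
    return 0.5 ** n_flips

patterned = "H" * 20
scrambled = "".join(random.choice("HT") for _ in range(20))

print(patterned, prob_specific_sequence(20))   # 9.5367431640625e-07
print(scrambled, prob_specific_sequence(20))   # exactly the same value
```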
Jerry: But the purpose for commenting here is not to understand but find any small way one can to hold forth mostly with nonsense or unnecessary complexity.
For some reason, Dr Dembski produced a monograph in 2005 where he proposed a way to detect specified complexity even if you knew nothing of the origin of the object in question. He used some simple numerical examples to motivate his derivation and show how it worked out in one particular example. He clearly spent a lot of time working on this metric.
All I’d like to do is compare the results you get from that 2005 metric with other approaches, starting with some simple examples so the mathematics is straightforward.
If you don’t want to try that fine. I gotta think Dr Dembski didn’t just make up his metric so it could lie in a drawer not being used. I’d like to see how it works. Clearly you don’t care and neither does Kairosfocus or ET which I find strange since Dr Dembski’s ID views are otherwise considered significant.
ET: The ONLY reason such a paper was written is because you and yours have nothing. And Dembski proved it.
That’s not what he wrote in the Abstract for the monograph.
Oh my. Now you know what Dembski thinks. What a putz.
That’s essentially what he said in the Abstract of the monograph.
Dembski just provided a way to quantify it.
Shall we check it and see what kind of results it gives compared to other methods?
I explained the mathematics.
That is doing. Not trying.
Jerry: I explained the mathematics.
So, Dr Dembski’s metric is useless? Why do you think he wrote it and wrote in the Abstract:
Clearly he thought he was adding to the work he did previously and it seems to me that the most recent formulation should be the thing that is used.
Again, JVL- Dembski provides a probability argument. Yet you and yours don’t even deserve a seat at that table.
The ONLY reason such a paper was written is because you and yours have nothing. And Dembski proved it.
And yet it is a fact.
No, he did not.
What other methods? Who tried to quantify the design of Stonehenge?
Wow. The previous work he was referring to was that of bits and sequences amenable to bits.
So go ahead. Use specification of Stonehenge to see if the archaeologists are correct.
And we are still waiting on the methodology used by those adhering to evolution by means of blind and mindless processes.
ET: Dembski provides a probability argument. Yet you and yours don’t even deserve a seat at that table.
Yes, he does. Which he clearly thought was valid. Shall we compare and contrast the results of his 2005 metric with other methods?
The ONLY reason such a paper was written is because you and yours have nothing. And Dembski proved it.
Again, taking Dr Dembski at his own words, in a non-peer-reviewed paper in which he was free to say whatever he liked, he was trying to refine and extend design detection in a mathematically sound way.
No, he did not.
Clearly he did. His statements are straightforward and easy to understand.
What other methods?
Other methods of detecting design, which is clearly what he was working on!
Look, clearly you’re afraid to work with his 2005 metric for some reason. Only you know why. But you cannot deny what Dr Dembski himself wrote:
Unless you want to question all the work he did for that monograph? HE thought it was important (and maybe necessary) to clarify and extend his previous work. You disagree, I guess. But I think I’ll take him at his word and I have enough respect for his formulation to see how it works for given examples. Why don’t you want to do that?
The previous work he was referring to was that of bits and sequences amenable to bits.
Which he did not reference in his worked-out examples in his 2005 monograph. In fact, in his most fully worked-out example he got a result of approximately -20, which he DID NOT say was weird or impossible, as it would be if he were looking at a number of bits.
So go ahead. Use specification of Stonehenge to see if the archaeologists are correct.
That’s a very complicated example; I think it’s better to start with simpler situations to see how the parts of the metric work.
JVL:
Non-sequitur. What other methods?
And another non-sequitur.
He did not make the claim you are attributing to him.
Again, he was trying to QUANTIFY it. THAT is what he was working on.
Are you stupid? It was referenced in the abstract. Of course, it wouldn’t be in the examples for obvious reasons.
To me, Dembski’s work on specification is for desk jockeys. In the field there are other ways to go about it. But, when in doubt, the metric may come in handy.
Stonehenge isn’t complicated at all. And it’s something we “know” is artificial.
ET: What other methods?
Whatever other ways you’d like to use to detect specified complexity, which is what Dr Dembski said he was doing.
He did not make the claim you are attributing to him.
I stand by my statement as supported by quotes from the monograph.
Again, he was trying to QUANTIFY it. THAT is what he was working on.
Again, as he clearly said in his Abstract, he was working on aspects of design detection.
It was referenced in the abstract. Of course, it wouldn’t be in the examples for obvious reasons.
No, that is not obvious. If you mention or say something in the Abstract but then don’t actually address it in your worked-out examples, then it’s fair to take the examples at face value whilst taking the Abstract with a grain of salt.
To me, Dembski’s work on specification is for desk jockeys. In the field there are other ways to go about it. But, when in doubt, the metric may come in handy
Okay. Why not test it out then to see when it is actually useable? This is what I’m suggesting.
Stonehenge isn’t complicated at all. And it’s something we “know” is artificial.
It is complicated if you’re going to apply Dr Dembski’s 2005 metric. Which is my point: let’s explore his metric and see a) if it’s useful and b) how it compares to other ways of checking for specified complexity.
Shall we start with a simple coin-toss example and then ratchet things up?
JVL, that number is information in bits beyond the implied threshold. Interesting to see how resistant you are to mathematics, here. KF
F/N: Stonehenge, a monolith-based stone circle calendar aligned with the solstice. You try to build one with primitive tools, complete with bringing stones from a huge distance. Then go solve the same problem for the Giza pyramids. KF
PS, Wikipedia’s confessions:
In order to detect functional information, math formulas are not necessary. Do you observe a function in living organisms? There you have functional information.
This sentence consists of code symbols typed by an intelligent agent. That is all you need to know about Intelligent Design.
Kairosfocus: JVL, that number is information in bits beyond the implied threshold. Interesting to see how resistant you are to mathematics,
AGAIN, I am interested in exploring and using Dr Dembski’s 2005 metric for specified complexity. If you’re not curious or interested in taking him at his word and trying to compute his formulation then just say so. I find that stance confusing and contradictory (since Dr Dembski is considered one of the prime intellects behind the modern ID movement) but you can make your own choices.
LtComData: In order to detect functional information math formulas are not necessary. Do you observe a function in living organisms ? There you have functional information
Well, Dr Dembski seemed to think finding mathematical support was worth the effort, and I’d like to a) take him at his word and b) respect his efforts enough to see what it’s like applying his metric to some easy-to-understand and easy-to-compute examples. At first anyway. If that doesn’t interest you then so be it. But surely you think it’s worthwhile respecting a publication that Dr Dembski clearly took a lot of time and effort putting together, especially given the fact that he himself said it was a continuation of work he had presented in previous publications.
You make up your own mind. I’d like to see if what he proposed has merit, is useful and returns values we can all agree on. Strangely, no one else here feels the same way.
Relatd: This sentence consists of code symbols typed by an intelligent agent. That is all you need to know about Intelligent Design.
Why do you think Dr Dembski took the time and effort to come up with his 2005 metric for specified complexity? He must have thought it was worthwhile and could address some questions and issues raised by skeptics. But no one here seems to think it was worthwhile or is even interested in trying to compute it. I find that strange and confusing. For simple examples the mathematics is not difficult. And by working with some simple examples at first, it should become easier to graduate to more complicated situations as one gains computational expertise with the defined terms.
But, again, oddly, no one here seems to care at all. I wonder why?
JVL, I stated a fact of the math of info theory and of logarithms, one that is highly material and which you are resisting. At this point, I guess we can draw the conclusion that the facts do not fit your agenda. Telling. KF
Kairosfocus: I stated a fact of the math of info theory and of logarithms, one that is highly material and which you are resisting. At this point, I guess we can draw the conclusion that the facts do not fit your agenda.
My ‘agenda’ is to work with Dr Dembski’s 2005 metric for specified complexity and see how the results generated from that measure compare and contrast with other measures. I know how logarithms work and how to evaluate them, including finding log base 2. The rest of you seem blatantly uninterested in pursuing Dr Dembski’s work or finding out how useful it is. Why is that? How is it that working with Dr Dembski’s metric is pursuing some agenda of my own? More importantly, how is it that your not wanting to pursue Dr Dembski’s metric is not a sign of your being unable to work with the mathematics he presents?
Your absolute refusal to deal with what Dr Dembski actually wrote and presented is telling don’t you think? Either you disagree with him or you can’t follow his procedure. Which is it I wonder . . .
JVL, I showed from the said math, that his expression mathematically implies an info in bits beyond a threshold metric, which you have tried to resist. I guess I need to directly ask, does – log2(probability) give info in bits? _____ Why or why not i/l/o what is on the table _____ [Honest answer, yes, and because that was worked out as a natural info metric decades ago.] I further pointed out that the case you point to boils down to 20 bits short of threshold, which you have also sidestepped. We can now freely draw the conclusion that your arguments have failed. KF
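For readers who want to check that identity for themselves, here is a minimal sketch in Python (the probabilities are illustrative only, not taken from any case argued in this thread):

import math

# -log2(probability) read as an information measure in bits:
# a fair coin flip (p = 1/2) carries 1 bit,
# one of 8 equally likely outcomes (p = 1/8) carries 3 bits,
# a specific 500-coin sequence (p = 2^-500) carries 500 bits.
for p in (1/2, 1/8, 2**-500):
    print(p, -math.log2(p))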
You are overstating the case for “Intelligent Design”.
lol
AF at 379,
Every word-symbol you wrote had to be functional, specific and in the correct order to be understood.
Relatd, and because it is FSCO/I you instantly recognised it as coming from an intelligent source. That self-referentiality is part of what exposes the speciousness of the sort of objections we are seeing. KF
The problem of the observer is scientifically unsolvable, so we are stuck with religion and ethics.
If everything is designed, what’s the point of detecting it? It makes no sense.
Kairosfocus:
You are interpreting what Dr Dembski wrote instead of reading what he actually wrote and what he clearly meant.
Again, he worked out an example and got a result of approximately -20. He didn’t say: that’s weird ’cause I should be getting a number representing so many bits. He interpreted -20 based on his formulation.
He DOES NOT break his formulation apart and when he gives the bottom line criterion he’s clearly looking for a result greater than 1. Not greater than 20, not greater than 500, greater than 1. He doesn’t say “more than 1 bit” he just says greater than 1.
You are so desperate to work in your 500-bit threshold that you not only break apart his calculation you also change some of his factors so that you can get what you want.
His whole point is to create a metric that can be used to analyse some object or pattern or sequence OF ANY LENGTH to see if it exhibits specified complexity and thus was designed.
I further pointed out that the case you point to boils down to 20 bits short of threshold, which you have also sidestepped.
He didn’t say it was 20 bits shy of threshold. He just didn’t do that. The reason he didn’t say that is because he’s not interpreting his results as bits AND he wants to be able to analyse things that are of any length. The sequence he used for that example was CLEARLY much shorter than your 500-bit limit, so if he wanted to hit that threshold he would have picked something of that length. But he didn’t.
You’ve spent years and years convincing yourself of your reworked interpretation which is just not correct.
I have, multiple times, offered to compare results from using the metric Dr Dembski actually wrote up with your interpretation to see what results are obtained. I’ve offered to do the mathematics for his metric myself. If you thought your version would give the same result as his I would think you would gladly agree with that because you’d prove your case. BUT you have not and will not agree to such a test. Which says to me either a) you suspect you will not get the same result or b) you can’t actually calculate your own version. Since you won’t even do the mature thing and admit which of those is true I guess the rest of us can just make an assumption. Come to think of it . . . they could both be true.
JVL, no, I am not; I am working out the algebra that he had to have in mind to go to a negative log2 configuration, and that leads to some basic telecomms theory. That you have to deny the obvious mathematics of -log2[probability*c*d] tells us all we need to know on the bankruptcy of what you are trying to support. Working out gives the trivial answer that Dembski’s example is 20 bits short of threshold, where it looks like he was working with 10^140 there, which is in this context near enough to 10^150, the root of 500 bits. It is now fairly obvious that not having a substantial answer you have resorted to a rhetorical distraction and refuse to acknowledge the relevant algebra. There is no reason for me to further pander to a further side track [which this already is] as it will simply lead to more of the same, if you are unresponsive to algebra, that is already decisive and not in your favour. This tells us a lot about the nature of far too many objections to the design inference. KF
AF, more silly talk points. We all know that there is a school of thought that for 160 years has laboured to expel inference to design from complex organisation from the Western mind. Its comeuppance started in the 1950’s with the detection of fine tuning of the cosmos and with recognition that there was and is coded algorithmic information in D/RNA. By the 1970’s Orgel and Wicken brought the matter to focus through recognising FSCO/I. Thaxton et al responded in the 80’s and from the 90’s the design inference, associated theory and a supportive movement grew. Your rhetorical stunt is meant to undermine the empirical nature of the observation that FSCO/I is a strong EMPIRICAL sign of intelligently directed configuration as key cause, but fails by dodging facts on the table for decades. And now we see a mathematically informed objector unwilling to acknowledge the algebra of -log2[probability*c*d], and apparently straining at the equivalent of substituting log2[c] –> C and log2[d] –> D. All of this is sadly telling. KF
F/N: An online discussion:
https://math.stackexchange.com/questions/2318606/is-log-the-only-choice-for-measuring-information
>>When we quantify information, we use I(x) = -log P(x), where P(x) is the probability of some event x. The explanation I always got, and was satisfied with up until now, is that for two independent events, to find the probability of them both we multiply, and we would intuitively want the information of each event to add together for the total information. So we have I(x∩y) = I(x) + I(y). The class of logarithms k log(x) for some constant k satisfy this identity, and we choose k = -1 to make information a positive measure.
But I’m wondering if logarithms are more than just a sensible choice. Are they the only choice? I can’t immediately think of another class of functions that satisfy that basic identity. Even in Shannon’s original paper on information theory, he doesn’t say it’s the only choice, he justifies his choice by saying logs fit what we expect and they’re easy to work with. Is there more to it?
. . .
That functional equation characterizes the logarithm (as long as you have any reasonable continuity condition). –
Ethan Bolker
Jun 11, 2017 at 15:26
The logarithm I think is the only class of continuous functions that turn multiplication into addition, but as you said the explanation is only intuitive. I don’t know of an alternative, but I am certain the logarithm is not the only possible choice. –
Matt Samuel
Jun 11, 2017 at 15:27
Sketch of proof: Let I = f∘log; then the identity becomes f(a+b) = f(a) + f(b), which is Cauchy’s functional equation. – user856
Jun 11, 2017 at 15:30
. . .
I just wanted to point something out, but honestly, I think the other answers are far better given that this is a mathematics site. I’m just pointing it out to add another argument for why logarithm makes sense as the only choice.
You have to ask yourself what information even is. What is information?
Information is the ability to distinguish possibilities.1
1 Compare with energy in physics: the ability to do work or produce heat.
Okay, let’s start reasoning.
Every bit (= binary digit) of information can (by definition) distinguish 2 possibilities, because it can have 2 different values. Similarly, every n bits of information can distinguish 2^n possibilities.
Therefore: the amount of information required to distinguish 2^n possibilities is n bits.
And the same exact reasoning works regardless of whether you’re talking about base 2 or 3 or e.
So clearly you have to take a logarithm if the number of possibilities is an integer power of the base.
Now, what if the number of possibilities is not a power of b = 2 (or whatever your base is)?
In this case you’re looking for a function that coincides with the logarithm at the integer powers.
At this point, I would be convinced to use the logarithm itself (anything else would seem bizarre), but this is where a mathematician would invoke the reasonings mentioned in the other arguments (continuity or additivity for independent events or whatever) to show that no other function could satisfy reasonable criteria on information content.>>
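The additivity property this quoted discussion keeps returning to is easy to verify numerically; a minimal sketch in Python, with illustrative probabilities only:

import math

# For independent events, probabilities multiply while -log2 information adds.
p_x, p_y = 1/4, 1/8                      # illustrative values
joint = p_x * p_y                        # independence: 1/32
print(-math.log2(joint))                 # 5.0 bits
print(-math.log2(p_x) - math.log2(p_y))  # 2.0 + 3.0 = 5.0 bits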
I just hope this from different voices helps break down obvious and needless polarisation. In fact my introduction to these matters was decades ago in T/comms as a key extension of electronics context.
I frankly get the feeling that people unfamiliar with that context are suspicious of obvious algebra because of polarisation over the design inference.
That’s why I pulled my older edn of Taub and Schilling and pointed to my online note, obviously in vain.
KF
KF
Kairosfocus: Working out gives the trivial answer that Dembski’s example is 20 bits short of threshold,
Which he did not say. He could have easily made that point if that’s the point he wanted to make. Also, the sequence he used was much more than 20 bits shy of your 500-bit threshold.
where it looks like he was working with 10^140 there, which is in this context near enough to 10^150, the root of 500 bits.
Another point he did not make even though there would be no reason he couldn’t.
Your rhetorical stunt is meant to undermine the empirical nature of the observation that FSCO/I is a strong EMPIRICAL sign of intelligently directed configuration as key cause, but fails by dodging facts on the table for decades.
You are completely missing the point. I am NOT debating that notion; all I am doing is looking at Dr Dembski’s metric and your version and wanting to compare them on some easy to compute examples to see if they agree. Why don’t we do that?
And now we see a mathematically informed objector unwilling to acknowledge the algebra of -log2[probability*c*d], and apparently straining at the equivalent of substituting log2[c] –> C and log2[d] –> D. All of this is sadly telling.
I’ll stick with Dr Dembski’s process of evaluating his own metric which he DID NOT break apart as you do.
Regardless, that doesn’t stop us from comparing the two versions/interpretations. But you won’t do it!! Why is that? Let’s just focus on that question from now on.
Why aren’t you willing to compare and contrast results from your version and Dr Dembski’s own version of his metric? What are you afraid of?
Shall we start with a simple example just to make sure we both understand the mathematics involved and can check each other’s work?
Encoded information is gibberish without the key. DNA is gibberish without the decoder.
Our brain is programmed to have a narrow focus on a very few things, as the eye has a narrow visible spectrum. This is a built-in bias. We can’t perceive reality as it is but only as our “programmed” biases allow us.
JVL, at this point you are being stubborn. There is not a snowball’s chance in a blast furnace that WmAD chose so unusual a formulation and logging base without understanding that it issues in bits as an info metric. The ONLY practical use for base 2 logs I have seen or worked with is for that, if you have one kindly tell us ______ The log of products rule used to be what 3rd form Math, now it’s 4th form I think. Grade 9 or 10 I think. So, your narrative about WmAD does not pass the giggle test. My derivation is simply working through the algebra involved. KF
PS, notice, once WmAD has worked out the first term as 10^120, we see:
We clearly see -log2[ . . . ], where pS(T) is a number value, a constant. 10^120 is an upper-bound constant value, so we have -log2[P(T|H) * const c * const d], which is what I noted over a decade ago and quoted above to begin with. By the product rule, this is I[T] - [log2[c] + log2[d]], which we can freely render as I[T] - [C + D]. That is, information beyond a threshold, in bits.
In that context, if WmAD works out a value that is 20 bits short of threshold, that is fairly plain to see. 1 in 10^150 is just short of 500 bits, and 10^140 ties to 465 bits.
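Those bit equivalents can be checked directly; a minimal sketch in Python, using only the bounds quoted just above:

import math

# Bit equivalents of the probability bounds quoted above.
for bound in (1e120, 1e140, 1e150):
    print(f"10^{round(math.log10(bound))} ~ {math.log2(bound):.2f} bits")
# prints ~398.63, ~465.07 and ~498.29 bits; 2^500 is about 3.27 * 10^150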
I now could freely go on on how yet another critic comes up short as failing to understand etc, but will not go there.
Alan Fox:
Who says that everything was designed? No one in ID does.
JVL, the metric of CSI has already demonstrated that living organisms were intelligently designed. There is, by far, more than 500 bits of CSI per organism. And that is over the UPB.
And if you have questions about Dembski’s metric, then email the man himself.
However, we also have tried and true design detection techniques which rely on our knowledge of cause-and-effect relationships. I have several decades of experience with this methodology. Whereas Dembski doesn’t have any.
Again, he never makes that claim. And the methodology I use is also used when we don’t know anything about the origin of the thing in question.
It’s as if you are proud to expose the fact that you too have ZERO investigative experience.
Do archaeologists know how their proposed artifacts arose? No. That is what they are doing in the field. Trying to determine artifacts from nature.
Alan Fox:
Your ignorance is not an argument, Alan. And when it comes to science, biology, ID and evolution, all you have is ignorance.
Kairosfocus: at this point you are being stubborn. There is not a snowball’s chance in a blast furnace that WmAD chose so unusual a formulation and logging base without understanding that it issues in bits as an info metric.
Still, he did not make that statement in the monograph.
Are we going to compare methods or not?
The log of products rule used to be what 3rd form Math, now it’s 4th form I think. Grade 9 or 10 I think.
I’m not saying you can’t break the log down like that, I’m saying it’s unnecessary for evaluating the metric.
We clearly see -log2[ . . . ] where pS(T) is a number value, a constant. 10^120 is an upper bound constant value, so we have – log2[P(T|H) * const c* const d] Which is what I noted over a decade ago and quoted above to begin with. by the product rule, this is I[T] -[ log2[c] + log2[d] which we can freely render as I[T] – [C + D]. That is, information beyond a threshold, in bits.
Shall we compare methods on an example?
ET: And if you have questions about Dembski’s metric, then email the man himself.
I don’t have a particular question; I just want to see how its results compare to those given by Kairosfocus‘s interpretation. He doesn’t want to play ball for some reason. I wonder why that is?
Again, he never makes that claim.
From the monograph’s abstract:
So, clearly, he’s interested in exploring that possibility.
From later on:
Further on again:
And later again:
Oh, by the way, in Addendum 1,
Oh, and there’s this as well: why he has replaced 10^-150 with pS(T)•10^-120 and why pS(T) is not a constant.
JVL has obvious reading comprehension issues. He is attributing things to Dembski that Dembski never claims. Dembski NEVER said his method is superior to how design is currently detected.
Again, archaeologists learn about the designers by studying the artifacts and all relevant evidence. Archaeologists do not require independent knowledge of the designers.
Seeing that JVL is being dishonest about what Dembski says, it is clear that he isn’t interested in an honest discussion.
In “Specification” Dembski uses a 10-digit code. TEN. And it came out as specified. What do you think a protein of 100 AA will come out as?
ET: In “Specification” Dembski uses a 10-digit code. TEN. And it came out as specified.
Do you mean 1, 1, 2, 3, 5, 8, 13, 21? That’s 8 numbers but, yes, he treats it as 10 digits.
He said IF pS(T) were on the order of 10^3 then chance could be eliminated. But he didn’t actually say it was on that order for that particular example. But I get the point, especially because of his discussion in the following paragraph. Quite a few probabilistic arguments about design, wouldn’t you say?
What do you think a protein of 100 AA will come out as?
Depends on pS(T) doesn’t it? IF you want to use his ‘refined and extended’ work from 2005.
Again, archaeologists learn about the designers by studying the artifacts and all relevant evidence.
Knowledge of the skills and abilities of the humans around at the time is part of the relevant evidence. If an artefact were found that was way beyond any skills and known abilities of the pertinent human civilisations then it would be time to reconsider . . . as one would expect.
Archaeologists do not require independent knowledge of the designers.
They certainly do if they want to conclude who they think created the artefact in question.
Dembski NEVER said his method is superior to how design is currently detected.
But, he did say:
Sounds like it’s ‘better’ to me. Clarification: more straightforward. Extension: applicable to more situations. Refinement: more specific and detailed.
AF at 384,
Here is the difference between an atheist and a real scientist.
Richard Dawkins: Living things only look designed. They are not designed.
ID: Life is designed. It contains codes that direct its development. Codes can only come from an intelligence. Which raises the question: Who is this intelligence? It can’t be dead chemicals springing to life one day for no reason. And human beings who were designed by nobody. Like your computer, someone designed and built it, not nothing/nobody.
You are just clueless. Dembski NEVER compares his methodology to the tried-and-true techniques currently used.
We “know” that humans were capable of building Stonehenge only because Stonehenge exists. So, again, you prove that you are clueless. Archaeologists do not require independent knowledge of the designers. That is a fact. To deny that proves your dishonesty.
For a 100 AA protein the pS(T) would be gleaned from the sequence. And there isn’t any evidence that blind and mindless processes can do it.
ET at 392.
Alan Fox plays the fool. All living things are designed. All LIVING things. Period. Alan is not ignorant, he plays games.
Relatd/401
So, this is the definition of “real science?”:
A veritable Copernican Revolution…..
KF:
Nonsense, you are no mindreader. You imagine stuff. Then you write singular prose remarkable only for its obscurity. The quoted sentence is an example typical for lack of any meat in the sandwich.
Here we go again. I guess there is a nugget in there about DNA and RNA that illustrates your child-like incomprehension of the biochemistry involved.
Orgel came up with the phrase “specified complexity” as a qualitative property of living systems. Nothing to do with your nonsense
I’m exchanging thoughts, as one interested layperson to another, on some obscure blog. Are you totally incapable of civil exchange? These are not Earth-shattering events; I’m just entertaining myself as time and curiosity allow.
Nobody has a clue what your “FSCO/I” is yet despite JVL’s remarkable patience in getting you to make some sense.
What is sadly telling is once we establish what trivial mathematical manipulations are or are not involved in telling us whether something is deigned [I deign to leave my Freudian slip], I predict there will be a further fruitless discussion on what numbers go into the equation or formula, should one eventually emerge from the fog of words.
Keefe and Szostak showed long ago that function lurks much more widely than one-in-a-gadzillion. Dembski rules out reiterative change and demands everything happens all at once. The model does not fit reality.
Yes, folks, I know it is a waste of time to respond to Joe. It’s for the lurker!
ET: Dembski NEVER compares his methodology to the tried-and-true techniques currently used.
I’ll take your word on it. I’m just repeating what he said in his 2005 monograph.
We “know” that humans were capable of building Stonehenge only because Stonehenge exists.
There are a lot of other standing stone circles in the British Isles and Brittany.
Archaeologists do not require independent knowledge of the designers. That is a fact. To deny that proves your dishonesty.
I didn’t say they required it; I said they look at all the evidence including independent information about the humans around at the time and where they lived, what they ate, sometimes the tools they used, sometimes where they were buried.
For a 100 AA protein the pS(T) would be gleaned from the sequence
Dr Dembski explains how to ‘glean’ pS(T). And it involves knowing the ‘sample space’.
JVL, we both know the algebra is correct. I simply moved from the probability space to the information space. This exposes how the posing on math etc is a rhetorical front. KF
PS, at 293 I put up several examples. https://uncommondescent.com/evolution/at-sci-news-moths-produce-ultrasonic-defensive-sounds-to-fend-off-bat-predators/#comment-762545
Kairosfocus: we both know the algebra is correct.
Fine. Shall we compare results on a simple example and then escalate things a bit?
CD at 404,
It sure is. Not that evolution crap. ‘Uh, yeah. You see, dead chemicals came to life and produced life and it just zigged and zagged for millions of years until we came around… from extremely primitive earlier versions of not really men. Here, look. I got pictures.’
This is me when I was a fish.
This is me when I looked like a Lemur.
And this is me when I looked like an ape.
AF at 405,
I’m not enjoying your act. Parts get repeated over and over. Alan Fox is smart except when he’s not, or doesn’t want to be.
You have no future in stand-up comedy or in feigning frustration.
F/N: As a courtesy to the onlooker:
KF
Kairosfocus:
Shall we compare metric interpretations? Yes or no?
JVL, fallacy of the loaded question. We both know that I am carrying out the – log2[ . . . ] unary operation on a probability expression right there in Dembski’s X = eqn, and stating its standard result, an information value in bits. As it is applied to three factors, it is info beyond a threshold (or short of it by so much). You have adequate examples to highlight the material point, that FSCO/I is a reliable sign of design as key cause, where there is copious FSCO/I in cell based life. We have reason to hold that cell based life and body plans are designed. KF
Kairosfocus: fallacy of the loaded question. We both know that I am carrying out the – log2[ . . . ] unary operation on a probability expression right there in Dembski’s X = eqn, and stating its standard result, an information value in bits.
Then there’s no reason for you not to take up the challenge!!
You have adequate examples to highlight the material point, that FSCO/I is a reliable sign of design as key cause, where there is copious FSCO/I in cell based life.
But I think Dr Dembski was working on something different and that would the detection of specified complexity. That’s what he said he was doing and that’s the contention I’d like to test using his own formulation and way of working them out.
Shall we compare and contrast results? If they turn out to be the same then that’s okay.
JVL, again, we both know the algebra is correct. Further, we both know that Dembski pointed out that for life the specification is cashed out in functionality. Notice, [a: functionally] specified, complex [b:organisation and/or] associated information. A says, context is life or other contexts where functionality is key, B that information can be implicit in organisation. KF
Kairosfocus: again, we both know the algebra is correct
I didn’t say the algebra was incorrect. It’s your interpretation of some of the pieces as constants that isn’t clear.
Anyway, he came up with a metric for seeing if there was enough specified complexity in an object or event to conclude that it’s designed. You changed his metric. I’d like to compare his version and your version to see if they give the same results. Are you willing to do the comparison? Yes or no?
JVL, what part of Dembski’s specification of the two values as numbers — I highlighted yesterday in the clip — is so unclear it requires “interpretation”? _____ What part of giving one as M*N LT 10^120 is unclear? ______ What part of “define pS as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T” rather than some function on a variable parameter is doubtful? _____ In your clip on flagellar proteins, I read “It follows that –log2[10^120 ·pS(T)·P(T|H)] > 1 if and only if P(T|H) < 1/2 × 10^-140, where H, as we noted in section 6, is an evolutionary chance hypothesis that takes into account Darwinian and other material mechanisms and T, conceived not as a pattern but as an event” . . . which sets 10^140 as the upper bound, less conservative than 500 bits’ worth, 3.27*10^150. So, no, he was discussing numbers and bounds or thresholds, not odd functions that can run off anywhere to any weird value as one pleases. Oddly, even if pS(T) were some weird function, it would still be part of a threshold, by the algebra; the issue then would be to find a bound, a constant (your latest word to pounce on rhetorically). But as it turns out we are not forced to guess such, as we know it is an upper bound on observability, a target zone in a wider space of possibilities W; familiar from statistical thermodynamics. It is easy to see that for the sol system or observed cosmos 2^500 to 2^1,000 is a generous upper bound, with every atom, 10^57 to 10^80, being an observer of 500 or 1,000 coins each, flipped at 10^14 per second and for 10^17 s. So, whatever goes into the threshold, it is bound by the search resources of the sol system or observed cosmos. The thresholds given all the way up in 293 bound any reasonable value. All the huffing and puffing hyperskepticism fails. But at least you acknowledge explicitly that the algebra is correct. KF
PS, you have calculations on the bounds, again cited yesterday. Can you tell me how for 10^57 or 10^80 atoms each observing bit operations on 500 or 1,000 one bit registers [“coins”] every 10^-14s, we do not bound the scope of search for 10^17 s, by 10^88 to 10^111 as over generous upper limit? I find the hyperskepticism unjustified.
AF, 405:
We both know just what movement has been held as making it possible to be an intellectually fulfilled atheist. Which state is demonstrably impossible due to inherent incoherence of the implied evolutionary materialistic atheism.
You are also lying and confessing by projection regarding want of substance. The self referentially incoherent evolutionary materialistic scientism of our day is not only public but notorious.
Your stunt is so bad it fully deserves to be corrected by reference to Lewontin’s cat out of the bag moment, suitably marked up — a moment you are fully familiar with:
As for trying to jump on me over claimed errors of style, that is now obviously attack the man, dodge the substance.
Indeed, we have every right to use cognitive dissonance psychology to interpret such stunts as confession by projection.
KF
Kairosfocus:
I understand Dr Dembski’s mathematics quite well thank you. You replace log2(pS(T)) with a constant and log2(P(T|H)) with a different function I(T). Since it’s not really clear what those replacements are I thought a test comparing the result using your formulation and Dr Dembski’s original formulation would be interesting. If they come to the same conclusion, fine. If they don’t (for some particular case) then it would be enlightening to discuss that. I think.
Shall we start by looking at a simple case and then try to ratchet things up? Why not have a go?
JVL, the distraction continues. WmAD first found an upper bound for his M*N term, 10^120, citing Seth Lloyd on how many bit ops are feasible for the observed cosmos. pS(T) is about “define pS as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T.” That is, he is effectively bounding the number of targets in the wider space W. So, finding an upper bound for that is reasonable. Next, you now acknowledge that -log2[prob] yields an info metric, and the fact that Dembski formulates in terms of that operation points to an intention to reduce to info in bits. -log2[prob * factor c * factor d] by the algebra is Info[T] - {log2 c + log2 d} –> info beyond a threshold. I(T) is not a different function, but the value of -log2[P(T|H)], an information value in bits to be evaluated case by case. You are back to denying the algebra; kindly see Taub and Schilling, as you obviously have no regard for my own background in info and t/comms theory. Next, log2 c = log2[10^120] = 398 bits. For log2 d we want a bits upper bound similar to his M*N –> 10^120. He uses a case where the expression requires P(T|H) < 1/2 × 10^-140 for the metric to exceed 1. Substitute and use equality as the border case: -log2[10^120 · pS(T) · {1/2 * 10^-140}] = 1. Now break it up using the neg log operation: 1 = 466.07 – 398.63 – x, i.e. 1 = 67.44 – x, so x = 66.44. (Notice, well within my 100.) What units? We can subtract guavas from guavas, not from mangoes or coconuts, so x is in bits. x is effectively log2[pS(T)], so that gives pS(T) = 2^66.44, about 10^20. We are back to a threshold of 1 in 10^140, as expected given WmAD’s IFF. This shows the validity of the thresholds of spaces for 500 or 1,000 bits. Your ‘it’s not really clear’ is just another way to try to take back your concession on the algebra, which algebra is manifest. As for simple examples, they have been on the table with even more generous thresholds than WmAD gave. There is no need to drag out this sidetrack further. The message is clear: for any reasonable threshold for the search capability of the sol system or observed cosmos, the information content of cells and body plans is so far beyond it that blind causes have no traction. Life, body plans and the taxonomical tree of life are replete with strong signs of design due to their functionally cashed out complex specified information, explicit and implicit in organisation. KF
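For anyone following the arithmetic of that border case, here is a minimal sketch in Python (the inputs 10^120 and 1/2 × 10^-140 are the values quoted above; the rest is just the log algebra under discussion):

import math

# Border case: -log2[10^120 * pS(T) * (1/2 * 10^-140)] = 1; solve for x = log2(pS(T)).
c_bits = math.log2(1e120)             # ~398.63 bits for the 10^120 bound
p_bits = -math.log2(0.5 * 1e-140)     # ~466.07 bits for 1/2 * 10^-140
x = p_bits - c_bits - 1               # ~66.44 bits
print(c_bits, p_bits, x, 2 ** x)      # 2^66.44 is roughly 10^20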
Alan Fox:
You and JVL are willfully ignorant and on an agenda of obfuscation.
Dude, what is trivial is your understanding of ID, science and evolution.
It remains that you and yours do NOT have a scientific explanation for our existence. You have nothing but denial and promissory notes.
Alan Fox:
They did not demonstrate that blind and mindless processes produced any of the proteins used
Liar. You keep making these blatantly false statements. And you think we are just going to sit here and accept it. Pound sand.
If you are going to spew BS about ID on an ID site, you had better bring the evidence. Your cowardly bloviations mean nothing here.
The claim that life’s diversity arose by means of evolution by means of blind and mindless processes, such as natural selection and drift, does not fit reality.
Alan is in such a tizzy over all things Intelligent Design. Yet he doesn’t have a scientific alternative to ID. Shoot down all of the straw men you want, Alan. ID isn’t going anywhere until someone steps up and demonstrates that blind and mindless processes can actually do the things you and yours claim.
JVL:
I know. You clearly don’t understand it.
We “know” that humans were capable of building Stonehenge only because Stonehenge exists!
And? We know humans didit cuz humans were around? We know they had the capability to do it cuz the structures exist? Thank you for proving my point.
And ASSUME they didit cuz there they are!
Right. That math is easy. How many different combinations are there for a 100 aa polypeptide?
If you can’t do that then forget about the other math, JVL.
Earth to Alan Fox-
I don’t know how many zeros are in a gadzillion, but this is what Keefe and Szostak said:
1 in 10^11! And those random-sequence proteins did not arise via blind and mindless processes.
Kairosfocus: the distraction continues.
How is asking if you’d be willing to work out some examples using your approach distracting? I don’t see the problem, with any numerical formulation, asking to see it ‘in action’.
Plus you keep repeating yourself which is completely pointless at this point.
So, let’s just stick to yes or no queries:
Will you show your working using your method for some simple examples. Yes or no?
ET: Right. That math is easy. How many different combinations are there for a 100 aa polypeptide?
As I’ve been saying: I think it’s best to start with some simpler examples and make sure everyone is following along and that the results make sense.
If you can’t do that then forget about the other math, JVL.
I think I can do that.
JVL, you have had examples and a use of WmAD’s case on was it the flagellum. You are still talking as if they don’t exist. That tells us you are simply emptily doubling down. For record, from the outset WmAD used -log2[prob], which is instantly recognisable to one who has done or used info theory, as an info metric in bits. That is the only fairly common use of base 2 logs, to yield bits. Next, by product rule once boundable factors c and d are added as products, we have an info in bits beyond a threshold metric, per algebra of logs. Thus, once we have reasonable bounds, and we do with 500 – 1,000 bit thresholds [cf how 10^57 to 10^80 atoms observing each 500 – 1,000 1-bit registers aka coins, at 10^14/s for 10^17s can only survey a negligible fraction of config states], then we may freely work with info beyond a threshold. We only need to factor in info carrying capacity vs redundancy effects of codes as Durston et al did. WmAD apparently picked an example that was 20 bits short of threshold. However, for many cases we are well beyond it so redundancy makes no practical difference. Already for an average 300 AA protein, we are well beyond. FSCO/I — a relevant subset and context of CSI since Orgel and Wicken in the 70’s — is a good sign of design. This you have resisted and sidestepped for 100’s of comments, indicating that you have no substantial answer but find it unacceptable. Our ability to analyse, warrant adequately and know is not bound by your unwarranted resistance, sidesteps and side tracks. But this thread has clearly shown that the balance on merits supports the use of FSCO/I. Life, from cell to body plans including our own, shows strong signs of design. KF
ET, interaction with ATP is not a good proxy for the myriads of proteins carrying out configuration-specific function. A good sign of this is the exceedingly precise care with which the cell assembles and folds proteins. KF
Yes, KF. That Alan Fox calls on that experiment and results exposes the sheer desperation of his position.
Right. JVL balks when given a real-world, biological example. An example that he cannot control and manipulate.
That math [sample space] is easy. How many different combinations are there for a 100 aa polypeptide?
*crickets*
If you can’t do that then forget about the other math, JVL.
As predicted. Thank you.
ET, ignoring the oddballs and assuming away chirality issues and a lot of other chem possibilities, 20^100 = 1.268*10^130. KF
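That sample-space figure is easy to reproduce; a minimal sketch in Python, assuming the stated 20 amino-acid types and 100 positions and ignoring the chemical caveats listed above:

import math

space = 20 ** 100               # exact integer in Python
print(len(str(space)) - 1)      # 130, i.e. 20^100 ~ 1.27 * 10^130
print(100 * math.log2(20))      # ~432.2 bits of raw capacity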
So, we have a massive sample space. Next, we need that protein and to see how variable it is. Then we will know how many targets there are in that sample space.
Trying to hit 1 in 100,000,000,000 (Keefe and Szostak for 80aa with minimal functionality), should be enough for anyone to see the futility of evolution by means of blind and mindless processes. Just seeing what DNA-based life requires to be existing and functioning from the start, should be enough for rational people to understand that nature didn’t do it.
ET: As predicted. Thank you.
I said I think I can do that, how is that ‘balking’?
Are you even paying attention?
Also, please note, I am only talking about evaluating Dr Dembski’s metric.
Kairosfocus:
Will you show your working using your method for some simple examples. Yes or no?
JVL,
Further doubling down. First,
Then, as you were shown and reminded:
Your empty doubling down is groundless and a strawman tactic that beyond a point is rhetorical harassment. There is more than enough on the table to show why the design inference on FSCO/I is warranted. This implies that the world of life, credibly, is full of signs of design from the cell to body plans to our own constitution.
KF
PPS, also side-stepped and ignored:
>>260
kairosfocus
August 6, 2022 at 4:45 am
PPPS, as a further point, Wikipedia’s admissions on the Mandelbrot set and Kolmogorov Complexity:
This is of course first a description of a deterministic but chaotic system where at the border zone we have anything but a well behaved simple “fitness landscape” so to speak. Instead, infinite complexity, a rugged landscape and isolated zones in the set with out of it just next door . . . the colours etc commonly seen are used to describe bands of escape from the set. The issues raised in other threads which AF dismisses are real.
Further to which, let me now augment the text showing what is just next door but is not being drawn out:
This gives some background to further appreciate what is at stake.>>
Kairosfocus: There is more than enough on the table to show why the design inference on FSCO/I is warranted.
I wasn’t questioning that!! I’m just trying to figure out why you reworked Dr Dembski’s metric and if your reworking gives the same results! I don’t know why that is so hard for you to understand.
I will write up a simple example, apply Dr Dembski’s metric then ask you to apply yours (specifying the values of your introduced terms) and then we can see what’s what.
You have not the least justification for assuming that a particular function is unique and there is plenty of evidence (starting – but not ending – with Keefe and Szostak) that potential function is widespread in protein sequences.
Additionally function can be selected for. Proteins that are promiscuous can under selective pressure become more specific. The all-at-once scenario assumed by Dembski doesn’t match reality. Though it will be amusing to see if his math produces more than GIGO, if KF dares to venture into genuine illustrative examples.
*wonders if he needs more popcorn*
AF, not an assumption. Notice how carefully proteins are synthesised and folded. That is the mark of an exacting requirement. KF
JVL, I await your renewed acknowledgement of the algebra, your willingness to acknowledge that FSCO/I is a subset of CSI for systems where functional configuration identifies the specificity [one noted by Orgel and Wicken in the 70s], and recognition that calculated cases are on the table. Not having my old log tables from 3 – 5 form handy [in a basement in Ja last I saw] I used a calculator emulator, an HP Prime, which has x^y and log functions with RPN stack. HP calculators since 1977. Further, I WORKED OUT what -log2[prob] is, an info metric; that is not a replacement. Have you done any info theory? Why are you unwilling to acknowledge neg log prob as a standard info metric, with base 2 giving bits, base e nats and base 10 Hartleys? KF
Kairosfocus: Not having my old log tables from 3 – 5 form handy [in a basement in Ja last I saw] I used a calculator emulator, HP Prime that has x^y and log functions with RPN stack.
You can convert log base anything into log base 10 or ln quite simply. And even simple calculators have log10 and ln.
Have you done any info theory? Why are you unwilling to acknowledge neg log prob as a standard info metric, with base 2 giving bits, base e nats and base 10 Hartleys?
Let’s just compare methods and see what happens. I already said your algebra was fine albeit unnecessary. It’s your introduction of constants and functions not present in Dr Dembski’s formula that I want to check.
JVL, I take it that you have not done info theory and refuse to accept what is in Taub and Schilling much less my briefing note. That is the root of your problem. KF
PS, Wikipedia confesses:
So, you can see my direct reason for reducing to information and symbolising I(T). The product rule for logs directly gives the threshold, as noted. Functionally Specific Bits, using F and S as dummy variables is obvious, and a matter of observation. More complex measures can be resorted to but excess of threshold is so large no practical difference results. Design.
As you obviously did not read my longstanding notes, I clip:
I trust that should be enough for starters.
Let me add that your assertion, in the teeth of repeated correction, is unjust: ” It’s your introduction of constants and functions not present in Dr Dembski’s formula that I want to check.” False and misleading. I reduced -log2[prob] to information and symbolised it I(T). I used the product rule to draw out the threshold. I reduced the 10^120 term to its log2 result, 398 bits. I symbolised the other term, a number, and pointed to the 10^150 threshold, essentially 500 bits. On your repeated objection I used WmAD’s case and showed the bit value, about 66, noting that he used 10^140 configs as the space of possibilities there.
Your resistance to a simple working out tells me it would be futile to try anything more complex. All that would invite is an onward raft of further objections.
The basic point is, neg log of prob –> information, all else follows and indeed the unusual formulation of WmAD’s expression as – log2[ . . .] itself tells that the context is information in bits.
As I have noted, the only practical use I have seen for log2 is to yield info in bits. If you have seen another kindly enlighten me.
KF
Kairosfocus:
You’re just not really paying attention to what I am actually saying. I shall write up a simple example soon and ask you to work out the same example using your method (with your introduced constants and change of function) and we’ll see.
JVL, you are setting up and knocking over a strawman. That you resist a reduction of a – log2[ . . .] expression into the directly implied information in bits even after repeated explanation and correction tells me there is a refusal to acknowledge what is straightforward. If you are unwilling to acknowledge that, that is itself telling that you have no case on merits but insist on hyperskeptically wasting time. KF
Kairosfocus:
I do not understand your constant objections. I’ve agreed with your algebra. I don’t understand why you made certain substitutions as the mathematics is quite straightforward as Dr Dembski stated his formulation but if we compare results we can clear some of those questions up. But you keep not wanting to compare results.
As I said, I will present a worked out, fairly simple case, just to get things started. I’ve done a rough draft but I’d like to review it to make sure it’s clear and cogent and easy to follow.
Stop arguing against things I haven’t said; you can convince me your approach is correct by comparing results. Simple.
Alan Fox:
They said 1 in 100,000,000,000 proteins are functional. Read their paper. 1 in 100,000,000,000 is NOT widespread.
Alan Fox is either a LIAR or just willfully ignorant:
You are lying as Dembski doesn’t make such an assumption.
Grow up, Alan.
EARTH TO ALAN FOX. FROM KEEFE AND SZOSTAK:
You lied about their paper, too.
You have no shame.
Okay, here’s what I’d like to use as a first test of Dr Dembski’s metric. I’m not saying this test is controversial in any way; I’m just wanting to step through it as an example.
I’ll work through Dr Dembski’s metric (from his 2005 monograph: Specification, the Pattern That Signifies Intelligence) twice, once not breaking the log base 2 term apart and once breaking it apart. In both cases I will get the same result because breaking the log apart has no effect on the final value.
For this post Dr Dembski’s metric looks like this:
X = -log2(10^120•pS(T)•P(T|H))
(Because this blog is not configured to handle Greek letters I’ve changed some of the notation)
I’d like Kairosfocus to work through the example using his version of the metric (from comment 276 above: X = I(T) – 398 – K2) and I’d like him to give values for K2 and for I(T).
We can then compare results and conclusions.
For this particular example I expect to get the same conclusions because I think the conclusion is pretty clear but I’d like to illustrate the difference in the approaches.
The example I’d like to work through first is: Flipping a fair coin 10 times and getting 10 tails.
Again, I expect Kairosfocus and I to arrive at the same conclusion for this particular example. I just want to see how he works his version.
I will/may be using a log conversion (change of base) method which says log base b of N, written logb(N), equals log10(N)/log10(b) = ln(N)/ln(b). This can be found in any high school math text beyond the base level. It is handy when evaluating log2 since many calculators do not have a log2 key.
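As an illustration of that change-of-base rule, a minimal sketch in Python (any tool with log10 or ln would do just as well):

import math

def log2_via_log10(n):
    # change of base: log2(N) = log10(N) / log10(2)
    return math.log10(n) / math.log10(2)

print(log2_via_log10(1024))    # 10.0
print(log2_via_log10(1e120))   # ~398.63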
JVL, why don’t you reduce the – log2[ . . . ]? That would tell you a lot. I did it, but you apparently need to do so for yourself. And, you show that you know enough about logs to understand. KF
Okay, if you flip a fair coin 10 times there are 2^10 possible outcomes all of which are equally likely if each flip is truly random which we’re going to assume for this example.
So, S = our semiotic agent, T = getting 10 tails with 10 flips, H = the flips are random -> P(T|H) = 1/2^10 = 2^-10
Dr Dembski defines pS(T) as: the number of patterns for which S’s semiotic description of them is at least as simple as S’s semiotic description of T.
I argue that pS(T) = 2 in this case. We can describe our T as “getting all tails” and the only other possible outcome with a description that simple or simpler is “getting all heads”
So X = -log2(10^120•pS(T)•P(T|H)) = -log2(10^120•2•2^-10) = -log2(10^120•2^-9)
Now 10 is approx equal to 2^3.321928 (recall that 2^2 = 4, 2^3 = 8 and 2^4 = 16)
So X is approx = -log2((2^3.321928)^120•2^-9) = -log2(2^398.63136•2^-9)
= -log2(2^389.63136) = -389.63136
This result is less than one (Dr Dembski’s threshold) so design is not concluded, i.e. this event could have come about by chance.
Addendum: perhaps I should point out that for any base, b: logb(b) = 1 and logb(b^n) = n.
An alternate method of computing the final result is:
X = -log2(10^120•pS(T)•P(T|H)) = -log2(10^120) – log2(pS(T)) – log2(P(T|H))
For our values that’s
= -log2(10^120) – log2(2) – log2(2^-10) = -log2(2^398.63136) – 1 + 10 = -398.63136 – 1 + 10 = -389.63136
So, breaking apart the stuff inside the log is possible but unnecessary as the result is the same and therefore the conclusion is the same.
So, I’d now like Kairosfocus to work through this same simple example, explain how he’s calculating I(T) and K2, give us his result and conclusion. As I already said: I expect our conclusions to be the same for this example but I’d like to see how he’s calculating K2 and I(T).
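For completeness, the same figure can be checked in one line; a minimal sketch in Python using the inputs above (the 10^120 bound, pS(T) = 2 and P(T|H) = 2^-10):

import math

p_T_given_H = 2 ** -10     # ten fair flips, all tails
pS_T = 2                   # "all tails" or "all heads"
X = -math.log2(1e120 * pS_T * p_T_given_H)
print(X)                   # ~ -389.63, well below the threshold of 1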
JVL, evasion again and choosing a ten bit case, 2^10 = 1024. We are interested in cases on the order of 500 – 1,000 or more bits, 3.27*10^150 to 1.07*10^301 or bigger, doubling for every further bit. 10 bits is not even two ASCII characters. Any given binary sequence could come about by raw chance, but some are utterly unlikely and implausible to do so because of the statistical weight of the near 50-50 peak, with bits in no particular functional order, i.e. gibberish. KF
PS, you will observe that I gave limiting values and said so. Dembski suggested 500 bits, and that config space swamps the sol system’s search capacity. 1,000 bits I am more comfortable with for the cosmos as a whole. That is, I used values that make any plausible search reduce to negligibility. As you full well know.
Kairosfocus: evasion again and choosing a ten bit case, 2*10 = 1024. We are interested in cases at order of 500 – 1,000 or more bits
Can you just show us how to evaluate your version of his metric for this case, yes or no? If you think it falls below the threshold then do the math and show us why. For this example what is your K2 and your I(T)?
Dr Dembski worked through an example where he got -20, below his threshold, so clearly he intended to be able to use his metric for ALL cases.
AF at 440,
Do you even read what you write?
“Additionally function can be selected for. Proteins that are promiscuous can under selective pressure become more specific.”
“selected for”? By who? By what? Blind, unguided chance? That’s not goal oriented? “selective pressure”? Seriously? How much time, according to the non-existent Selective Pressure Cookbook, needs to pass to create the fictional change or changes?
AF, you know full well. No specificity or functionality, so 10 bits x 0 x 0 = 0. X_500 = 0 – 500 = -500, 500 bits short of a design inference threshold. The two threshold terms are addressed, AS YOU KNOW, by finding a bounding value, here a very generous 500 bits as WmAD has mentioned. I would use that for the sol system scale. KF
PS, just to clarify: 10^57 atoms in the sol system (mostly H and He in the sun, but use that), each making 10^14 observations per second of the state of a set of 500 1-bit registers, for 10^17 s, gives 10^88 possible observations. That is a negligible fraction of the 3.27*10^150 possible states. This has already been outlined and given over years.
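Worked out numerically, that bound looks like this (a minimal sketch; the three factors are the ones just listed, taken as orders of magnitude):

atoms = 1e57              # atoms in the sol system (order of magnitude)
rate = 1e14               # observations per atom per second
seconds = 1e17            # time window
observations = atoms * rate * seconds     # 1e88 possible observations
states = 2.0 ** 500                       # ~3.27e150 configs of 500 bits
print(observations, states, observations / states)   # fraction ~ 3e-63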
Kairosfocus:
One other thing, since you haven’t responded yet . . .
When Dr Dembski worked an example and got -20 you suggested that that example was 20 bits shy. I got -389.something rounded up (or down) to -390. Does that mean that that example was 390 bits shy of the threshold? Should we add 390 and try again?
Related:
The NICHE!
…which is why swifts are generally found flying in advance of weather fronts, golden moles generally swimming in sand in the Namib, and great white sharks generally patrolling oceans containing suitable prey. Not chance, but environmental design, which some refer to as natural selection by the niche environment.
The NICHE!
AF at 461,
I work with professional writers and if I saw that kind of CRAP on my desk, I would immediately reject it. Then throw it in the trash.
“environmental design”? That’s not even fiction, or “science” fiction. It contains zero science.
Swifts flying in advance of weather fronts? Who taught them how to do that? Nothing? Because that is exactly what you have.
KF
ad nauseam. I grasp Seth Lloyd’s concept of the total number of particles in the (known) universe times units of Planck time since the start of (this known) universe. Dembski misapplies the concept, which might make some sense if this known universe is strictly deterministic, which it isn’t. But that isn’t the big mistake, which is in assuming unique solutions and random, exhaustive searches.
You keep mentioning this as if it should impress me. What would impress me is if Related showed some understanding of biology and attacked that rather than your strawman version.
The niche (in the sense of sifting out individuals with poorer ability from the population),
Gradually.
AF at 463 and 464,
Pzzzfffft !!! “The niche”? Woo hoo !!! The niche what? That fictional, invisible thing – without intelligence – you’re trying to sell here?
That’s crap. It has NO basis in fact. In case you missed it – that’s CRAP.
AF at 465,
All the baby Swifts had to show up for practice one day. Called there by the fictional, invisible nothing…
Seriously? I mean Seriously?
How innate behaviour is templated in DNA sequences is a subject largely untouched so far. I optimistically expect that to change one day. I pessimistically expect climate change to get us first. The niche humans occupy is changing very rapidly.
I don’t expect to convince you of anything, Related. Just remember what I said twenty years from now, when I’ll have already shuffled off this mortal coil.
AF at 468,
The Niche, starring Alan Fox. Where he points at nothing and tells people it’s something.
By the way, according to AF, we’re all going to die next week. The week after that at the latest…
Only humans, Related. I suspect you and everyone that thinks God has a purpose for us humans that involves an eternity of hosannah-ing are in for a bit of a disappointment.
AF at 471,
If you think that this life is all there is, I’ve got 2,000 years of testimony that says different.
Well, Related, let’s agree to meet up in the hereafter and compare notes. Though an eternity of talking to you is not the most attractive proposition, I have to say. Perhaps I’ll get to go to Hell where all the interesting folks are.
JVL, further doubling down in the face of a response . . . since you haven’t responded yet. That is telling. Ultimately, telling on a rhetorical strategy of distractions, side tracks and polarisation. One that reveals, through what is evaded, dismissed or forgotten, the dirty secret of the long term ID objector: not having a substantial response, side track and polarise.

It is clear, the design inference on signs is well warranted, functional information like the text of objections is an observable, and that blind watchmaker needle in haystack search becomes hopeless once 500 – 1,000 bits are on the table. Indeed, it is obvious that bits are a natural info metric, starting with the carrying capacity of two state elements. In an info theory context, – log2[probability] gives info in bits. Then, as WmAD’s expression reduced algebraically shows, – log2[probability*threshold_index] gives information short of, at or beyond threshold, in bits. Where we can work through redundancy as Durston et al have, we can set dummy variables to enfold functionality and specificity, and we can use bounds for thresholds, with 500 – 1,000 bits a very reasonable and even generous threshold.

The net result is that relevant cases such as the 900 bases for a typical 300 AA protein, 1800 bits of info capacity in a functional, specific entity, sit 1,300 bits beyond the 500-bit sol system threshold, so far beyond it that redundancy makes no practical difference. Likewise, FSCO/I in the cell and in body plans — OOL 100 – 1,000 kbases in the genome, 10 – 100+ mn bases for body plans eg for arthropods — is so far beyond threshold that redundancy is irrelevant. Credibly, life and body plans come from the only observed source for FSCO/I, intelligently directed configuration. KF
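(For readers following the bookkeeping, the 1800 and 1,300 bit figures reduce to trivial arithmetic; a sketch using only the numbers already quoted above, with variable names of my own:)

```python
bases         = 900   # coding bases for a typical 300 AA protein (3 bases per amino acid)
bits_per_base = 2     # 4 possible bases, so 2 bits of carrying capacity per base
sol_threshold = 500   # bits, the sol-system threshold used above

carrying_capacity = bases * bits_per_base          # 1800 bits
beyond_threshold  = carrying_capacity - sol_threshold

print(carrying_capacity)   # 1800
print(beyond_threshold)    # 1300 bits beyond the 500-bit threshold
```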
AF, see the just above. KF
Kairosfocus: since you haven’t responded yet.
You didn’t address anything to me after I responded to you. So I asked another question.
And, you haven’t said what your K2 and I(T) are for the particular example I worked out using Dr Dembski’s metric. You came up with those so, if they have any meaning, you should be able to specify their values for a given example.
Nor have you answered my follow-up question: you once said that a result of -20 using Dr Dembski’s metric meant that example was 20 bits shy of the threshold; since I got a result of about -389, does that mean that that particular test sequence was about 389 bits below threshold?
This is just a sincere and simple test of yours and Dr Dembski’s specified complexity formula. You seem to avoid actually doing any calculations.
If I don’t hear back from you regarding actual values of K2 and I(T) then I shall move on to another example with more ‘bits’ and see if you agree or disagree with the results, and why, and (hopefully) what your own version of the metric shows. But, truth be told, I’m not holding my breath since you seem to be more about tossing lots of math around than actually doing any.
I just noticed an omission in the above comment, it should read . . .
So, let’s ratchet things up a bit. Let’s try testing a very similar scenario except let’s go for 400 tails in a row. An extremely unlikely event if everything is by chance I think you’ll agree.
I don’t think it’s difficult updating my work with Dr Dembski’s metric (all I have to do is put ‘400’ in where I had ’10’ before) but I will first give Kairosfocus a chance to tell us what his version of the metric comes up with (i.e. what his K2 and I(T) are) and what his conclusion is before I chime in with my results.
Again this is just testing Dr Dembski’s metric from his 2005 monograph.
JVL, your ongoing game is a waste of time and distraction. KF
Kairosfocus: your ongoing game is a waste of time and distraction.
Are you saying you can’t compute your K2 and I(T) for the example of getting 400 tails in a row when flipping a fair coin? I can compute Dr Dembski’s metric, easily.
😆 To talk only about an amino-acid “metric” is a bad joke. We should be talking about the combined metrics of all associated cell processes and the “probability” of all those processes functioning, cooperating and helping each other to form interconnected systems from the first cell.
Actin nucleation core
Action potential
Afterhyperpolarization
Autolysis
Autophagin
Autophagy
Binucleated cells
Biochemical switches in the cell cycle
Branch migration
Bulk endocytosis
CDK7 pathway
Cap formation
Cell cycle
Cell death
Cell division
Cell division orientation
Cell growth
Cell migration
Cellular differentiation
Cellular senescence
Chromosomal crossover
Coagulative necrosis
Crossing-over value
Cytoplasm-to-vacuole targeting
Cytoplasmic streaming
Cytostasis
DNA damage
DNA repair
Density dependence
Dentinogenesis
Dynamin
Ectopic recombination
Efferocytosis
Emperipolesis
Endocytic cycle
Endocytosis
Endoexocytosis
Endoplasmic-reticulum-associated protein degradation
Epithelial–mesenchymal transition
Exocytosis
Ferroptosis
Fibrinoid necrosis
Filamentation
Formins
Fungating lesion
Genetic recombination
Hertwig rule
Histone methylation
Interference
Interkinesis
Intracellular transport
Intraflagellar transport
Invagination
Karyolysis
Karyorrhexis
Klerokinesis
Leptotene stage
Malignant transformation
Meiosis
Membrane potential
Microautophagy
Mitotic recombination
Necrobiology
Necrobiosis
Necroptosis
Necrosis
Nemosis
Nuclear organization
Parasexual cycle
Parthanatos
Passive transport
Peripolesis
Phagocytosis
Phagoptosis
Pinocytosis
Poly
Potocytosis
Pyknosis
Quantal neurotransmitter release
Rap6
Receptor-mediated endocytosis
Residual body
Ribosome biogenesis
S phase index
Senescence
Septin
Site-specific recombination
Squelching
Stringent response
Synizesis
Trans-endocytosis
Transcytosis
Xenophagy
+all still unknown processes :))
Good luck!
LtComData:
We are talking about computing Dr Dembski’s specified complexity metric from his 2005 monograph: Specification: The Pattern That Signifies Intelligence. I have decided to compare and contrast the results of that metric on some basic and simple examples with the alternate metric proposed by Kairosfocus many years ago now, I think. I have computed the result for the example of flipping a fair coin 10 times and getting 10 tails and hoped that Kairosfocus would show what his alternate interpretation of that metric would compute to. He . . . well . . . avoided giving a direct answer.
I am now asking for him to give his result for the example of flipping a fair coin 400 times and getting 400 tails. I can easily compute Dr Dembski’s metric for that example but I’d like to hear Kairosfocus‘s response first. Does flipping a fair coin 400 times and getting 400 tails give evidence of specified complexity and therefore design?
If, after the dialogue with Kairosfocus is resolved, you’d like to discuss the application of Dr Dembski’s metric to some of the other situations you list then perhaps we can do that. But first I’d like to resolve the simple case.
CD, yes, we are dealing with lower bounds on complexity. It is enough for a reasonable person — not to be assumed at this stage — that for an average 300 AA protein, we have 900 bases, and so 1800 bits carrying capacity. Bits can be seen i/l/o basic info theory, notice how that was ducked time and again. For our effective cosmos, the sol system, 10^57 atoms as observers each overseeing 500 bits/coins changing 10^14 times/s for 10^17 s we can examine 10^88 states. Sounds huge till one sees that the config space for 500 bits is 3.27*10^150, so one can only search a negligible fraction. Needle in haystack search challenge sidelines blind mechanisms. Intelligence uses understanding to compose effective, functional complex organisation. And the objectors know this, they are seeking to suppress what should be a commonplace. KF
Well, it seems like Kairosfocus is just not going to even try and compute his version of Dr Dembski’s 2005 specified complexity metric from his monograph Specification: The Pattern That Signifies Intelligence for the second example I have proposed: flipping a fair coin 400 times and getting 400 tails. I shall give you my result from computing Dr Dembski’s metric for that example. I argue that the only thing I have to do is change the ’10’ in my previous example with ‘400’ so I get:
(Oh, after the first step all the ‘equals’ should properly be read as ‘approximately equals’.)
X =-log2(10^120•2•2^-400) = -log2(2^398.63136•2^-399) = -log2(2^-0.36864) = 0.36864
Which is below Dr Dembski’s threshold of 1 for concluding that the event or sequence exhibits enough specified complexity to be definitely designed. Please don’t shout at me, I’m just trying to calculate his metric fairly. If you think I’ve made a mathematical mistake then please point it out specifically. And because I didn’t come up with the metric, if you have a problem with it then do not blame me.
Can I just say, it’s clear that one more coin flip, still all tails, would step over Dr Dembski’s specified complexity line: 401 fair coin flips, all tails, would meet the criteria of his metric. A more complicated pattern would increase pS(T) and thereby mean an increase in the number of trials/flips required to meet the threshold. In some sense, looking at the very simplest case puts a kind of lower bound, based on his metric, on detecting sufficient specified complexity to conclude design. That lower bound is just over 400 events or choices, based on actually calculating Dr Dembski’s metric; most of the time the requirement would be much higher than that.
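Here is a minimal sketch of that threshold search in Python, again just evaluating Dr Dembski’s expression with pS(T) = 2; the cap of 1,000 flips in the loop is an arbitrary choice of mine for illustration:

```python
import math

def chi(n_flips, spec_resources=2):
    # Dembski (2005): X = -log2(10^120 * pS(T) * P(T|H)), with P(T|H) = 2^-n for n all-tails flips.
    # Since -log2(2^-n) = n, this reduces to X = n - log2(10^120) - log2(pS(T)).
    return n_flips - (120 * math.log2(10) + math.log2(spec_resources))

print(round(chi(10), 2))    # -389.63
print(round(chi(400), 2))   # 0.37, still short of the design threshold of 1
print(round(chi(401), 2))   # 1.37, past the threshold

first = next(n for n in range(1, 1001) if chi(n) > 1)
print(first)                # 401, the smallest all-tails run that crosses the threshold
```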
Once again, I am not casting judgement on Dr Dembski’s metric, I am only trying to explore its implications. I was hoping to get Kairosfocus to do something similar with his version of Dr Dembski’s metric but, alas, he seems to have excused himself from the discussion. For whatever reason. I would still very much like him to give values for his K2 and I(T) for any of the examples I have dealt with. He came up with those terms so, if they have any meaning, he should be able to evaluate them. We shall see if he deigns to enlighten us with the numerical thinking behind his formulation.
JVL, what I am saying is that once we have a reasonable bound, and can see the result for relevant cases — as was long since shown — we have the material answer. Therefore, I have no need to go on and on with what is patently distractive. The material result is, there is good reason to conclude that cells and body plans include intelligently directed configuration as key cause. KF
PS, and BTW, the bounds set limits for plausible ranges for terms involved in the threshold values implicit in Dembski’s expression.
PPS, I again remind, as just one example
Where, of course, redundancy was long since addressed by Abel, Durston et al. It is quite clear from the above that you still resist the simple reduction to information in bits directly implied by – log2[ . . . ] where that was established as a basic metric for information decades ago. Similarly the algebra of logs leads to thresholds in the Dembski expression, which is why I used threshold metrics from over a decade ago as is drawn out in my always linked. Again side stepped and/or distracted from and resisted. That suggests that you were unfamiliar with the established result of an information metric, then with the significance of the product rule for logs given WmAD’s expression. You no longer have an excuse. Information is first measured as carrying capacity and redundancy can be addressed for practical cases but makes no effective difference for the main point. That main point is your obvious underlying objection but you cannot deal with it substantially on merits. Which is telling.
AF, reasonable bounds. You are dealing with me here, not WmAD and we need not further side track. I took each atom in the cosmos or sol system as an observer, and a fast chem rxn time as a bound on time for an observation, with a timeline since the singularity as a bound on observations of all 10^80 or 10^57 atoms, where that is a generous estimate as most are H and He, many bound up in stars etc. Give the 10^57 sol sys atoms registers of 500 bits to observe each 10^-14 s, 1000 bits for the cosmos as a whole. You have 10^111 or 10^88 observations as bounds. Configuration space for 500 or 1,000 bits is 3.27*10^150 or 1.07*10^301. In each case span of possible search is negligible relative to the config space. Once we can find functionally specific, complex information . . . which can be implicit in organisation to achieve function per Wicken wiring diagram . . . beyond reasonable threshold, there is only one empirically warranted, analytically plausible source, intelligently directed configuration. This is plain but equally plainly you have resisted it and sought to distract attention from it through every rhetorical stunt. That backfires, it is an implicit admission that you have no substantial reply on the focal, decisive point. So your continued objections and distractions are of no material value as the main point is decided on merits, long since. Life is credibly designed, body plans are, including ours. The 160 year long agenda to expel design has failed. KF
Kairosfocus:
Can you actually compute the terms you came up with: K2 and I(T)? Yes or no?
Maybe, but what you’ve amply demonstrated is it is a matter of belief rather than anything that can be shown mathematically. It is bizarrely simplistic to pluck some arbitrary threshold from… the air… and claim anything beyond is a product of design. You believe God created everything anyway. The bogus mathematical argument is pointless.
F/N: The distraction continues. Having bounded variables to go into the log reduction, having provided the result that for cases relevant to cell based life and body plans, we are well beyond threshold where intelligently directed configuration is the by far and away best explanation, the material question is over. I(T|H) can be assessed on capacity then adjusted as Abel, Durston et al have published, but it is implausible that redundancy makes a practical difference. Thresholds have been given generous bounds for sol system and cosmos. All along, there has been refusal to acknowledge plainly and work with the reduction of – log2[ . . . ] and the link thereby to information and to information beyond a threshold. That resistance and distraction tell the story, and they are why we need to refocus the main thing and conclusion on merits: life is credibly the result of design, also body plans up to our own. Observe the significance of that and the onward distractive behaviour. Where, the behaviour so far gives little confidence that any going along with onward distractions will have any fruitfulness. Enough has been done but determined objectors will never acknowledge any significant result, a sad fact of life. In the end that unresponsiveness and that hyperskeptical polarisation are telling. KF
See why I have declared intellectual independence and refuse to allow endless hyperskeptical objections to veto what on warrant I can know with good reason?
Note, by using bounds driven by search capability of the cosmos or sol system, we have general results; far more powerful than any particular detailed calculation, eg by using tables of protein families to estimate redundancy. For any reasonable person, a general result is preferable to one that depends on detailed assumptions, scenario and compiled data on proteins etc. Such general results with examples were on the table hundreds of comments ago. The sullen resistance, foot dragging, side stepping, implicit half concessions pulled back and resort to polarisation tell us that the objectors have lost on merits.
kairosfocus: All along, there has been refusal to acknowledge plainly and work with the reduction of – log2[ . . . ] and the link thereby to information and to information beyond a threshold.
I’m not the one who reworked Dr Dembski’s metric, introducing new terms (K2 and I(T)); that was you. And, it seems that you can’t even specify what those terms are numerically for a particular, simple example. You just talk in general about stuff when I’m asking you to be specific about terms you came up with.
So, again: Can you actually compute YOUR TERMS K2 and I(T) for the very particular case of flipping a fair coin 400 times and getting 400 tails? Yes or no?
I was able to calculate a specific value for Dr Dembski’s metric; the mathematics was elementary. You changed the metric into something you seemingly cannot calculate. Why did you make the change if you can’t calculate it?
See why I have declared intellectual independence and refuse to allow endless hyperskeptical objections to veto what on warrant I can know with good reason?
Perhaps you’d like to justify that by computing the terms you created as replacements for terms in Dr Dembski’s metric that were calculable as I have shown.
JVL, that is now an outright lie, sustained in the teeth of repeated correction. Working out that – log2[prob ] –> information is NOT “reworked Dr Dembski’s metric.” You did not seem to know what neg log prob means, you obviously have no regard to background and even explanatory step by step notes on the info theory and now excerpt from a classic text on the subject; apparently you found it rhetorically convenient to sidestep why I would have in my library two copies of editions of Taub and Schilling, not to mention the Connor series and other works. That should have been a clue, but that was not convenient. Yes, I made simplifying substitutions then drew out generous bounds for info beyond a threshold metrics. Bounds that deliver a powerful general result. That is what is material, once we see that the structure of the WmAD expression gives an info beyond a threshold value. The bounds deliver a general result, given cosmos capability to search and implicit scattered nature of found and similar targets. That general result is powerful. One may thereafter wish to debate particular models and estimates by WmAD, Abel, Durston et al, but it is a very different thing when that is in the context of a powerful general result. It is the side stepping of that general result that is in the end telling. KF
Kairosfocus: Yes, I made simplifying substitutions
True dat.
then drew out generous bounds for info beyond a threshold metrics. Bounds that deliver a powerful general result. That is what is material, once we see that the structure of the WmAD expression gives an info beyond a threshold value. The bounds deliver a general result, given cosmos capability to search and implicit scattered nature of found and similar targets. That general result is powerful. One may thereafter wish to debate particular models and estimates by WmAD, Abel, Durston et al, but it is a very different thing when that is in the context of a powerful general result. It is the side stepping of that general result that is in the end telling.
Why didn’t you just declare bounds on the original terms? Why the change of notation? And why make it look like one of your new terms was still a function dependent on T?
So, to be very, very clear, for the particular example of flipping a coin 400 times and getting 400 tails:
What are the bounds for K2?
What are the bounds for I(T)?
JVL, reducing a log operation to its result, making a simplifying substitution and then finding a general bound is a reasonable procedure. One that gives a telling result on the origin of cells and body plans, including our own. Your onward demands were answered from the outset: for the sol system the threshold [given WmAD’s statements] is 500 bits, with roughly 398 already on the clock from the 10^120 factor, so an additional 100 or so bits; which is where you started the needless song and dance. As for the amount of information, as much as can be produced by all the intelligence in reality and expressed in the cosmos. As for why I(T) is the info value of the target, the answer is obvious, it is just that; take the neg log prob. And we could go on endlessly. KF
Kairosfocus: reducing a log operation to its result and making a simplifying substitution then finding a general bound is a reasonable procedure.
As for why is I(T) the info value of the target, the answer is obvious, it is just that; take the neg log prob.
“take the neg log prob”. Is that the way trendy math people talk?
It’s not obvious. You can’t just keep waving your hands about and hope no one asks you for specifics.
You and others had this 500-bit threshold in mind. That was the standard. Then Dr Dembski thought: you know what, for some situations/patterns/sample spaces the threshold might be less than 500-bits (or more!) AND why not try and make the whole idea a bit more rigorous mathematically. So he had a think and came up with the metric in his 2005 monograph. If he wanted to just stick with the 500-bit threshold there would have been no point in revising and extending (his words) his previous work. And, in fact, using his metric, for the case of flipping coins and getting all tails it looks like the threshold is reached at 401 flips and not 500. According to my calculations, which no one has disputed.
I think you looked at his metric, tore it apart, interpreted each of the parts as number of bits (even though he explicitly stated that the threshold for his metric was being greater than 1), renamed parts, came up with I(T) (which you did not clearly define) and decided that had to meet the same old criterion of being 500-bits or more. Dr Dembski would never have bothered creating that metric if all he wanted to do was to stick with the already existing 500-bit threshold. AND, as we’ve seen, for certain cases, the threshold is less than 500-bits. That, in fact, was part of his point: for each individual case/situation/pattern/sample space a tighter, more mathematical threshold might exist.
But you just tore his new metric apart and tried to make it fit into the old threshold. You read a couple books on information theory and remembered a rule about logs and for years and years no one questioned what you did. They didn’t understand Dr Dembski’s mathematics so they figured you knew what you were talking about. But you can’t clearly define or evaluate the terms you came up with. What’s the point in creating them if you’re just going to say: it all has to meet the same 500-bit limit? You created them then brushed them under your blather of math.
If you want to stick with the 500-bit threshold, fine. You do that. But don’t attempt to do some clever mathematics (badly) and then say Dr Dembski’s new method of calculating a threshold for individual cases gives the same results. The point is that it might not. That’s why he created it.
AND, again, I got a different threshold for flipping a coin and getting tails trying to honestly use the metric Dr Dembski elucidated and explained. Do you agree that for that particular case and event the threshold is 401 flips? Yes or no?
I’m not going to ask you about K2 and I(T) anymore because you don’t even know what they mean so you can’t tell what values they can take on for a particular situation.
Alan Fox:
Alan the psychic blowhard, strikes again!
No, Alan. Only losers on an agenda say crap like that. As Dr Behe said many years ago:
ID does NOT claim that everything is intelligently designed. Lying about ID and erecting strawmen is all Alan is reduced to.
Priceless…
Why is Alan so afraid of people trying to quantify the concept of information, with respect to biology, as posited by Francis Crick?
Why is Alan so afraid to tell us of this methodology used to determine that blind and mindless processes, such as natural selection and drift, produced all bacterial flagella?
Why is Alan so afraid to develop his notion of the NICHE designs? Why does the evidence point to honing of existing designs, for example?
And why is Alan so afraid to learn what Intelligent Design actually is and what it argues against?
Hi Alan
It depends on the application. Besides the sequence problem there is another problem, which is the waiting time to fixation problem. This blind and unguided dog does not hunt. Universal common descent is not going to make it as a hypothesis and it is important this realization comes sooner rather than later.
ID shows where science may have limits. Big time and resource saver. ID can help stop faulty theories from surfacing and misleading science. Universal common descent is an example. Alan, please do not get in the way of more sensible biological science just to satisfy your political ideology.
Hi Bill,
Are you on an R & R break from Peaceful Science?
JVL, you continue to set up and knock over strawmen. If you were not familiar with the negative log probability metric, and how base 2 yields bits, then you were and by refusal to acknowledge still are, not in a position to make substantial remarks. KF
F/N: Insofar as science seeks an accurate understanding of the world, identification and use of reliable signs of design is a significant contribution. And if instead science is reduced to propping up evolutionary materialistic scientism as an ideology, it is on its way to losing credibility. KF
KF,
It has been amply demonstrated that you are talking in non-sequiturs. Given any set of raw data, without additional information, you are (with your math manipulation) utterly unable to distinguish random number sets from sets that hold information.