Uncommon Descent: Serving the Intelligent Design Community

At Evolution News: Günter Bechly repudiates “Professor Dave’s” attacks against ID

Categories: Intelligent Design, worldview

Günter Bechly, Senior Fellow of the Center for Science and Culture, addresses the off-base accusations made against ID and the Discovery Institute.

Dave Farina is an atheist American YouTuber who runs a channel called Professor Dave Explains with almost two million subscribers.

The clichés and misrepresentations Farina recycles about intelligent design are beyond tired. Still, those new to the debate might find it helpful to see Farina’s false claims debunked.

Farina seems more interested in caricaturing those he disagrees with than understanding them.

Three Major Problems 

Farina also thinks that intelligent design theory “cannot be validated as real science because it does not explain or predict anything.” Here are three major problems with this statement:

1. Who defines what qualifies as “real science”? It is certainly not Dave Farina. It is not judges in courtrooms. And it is not even the scientists themselves who define “science.” Reasonably, it is philosophers of science who address this question. But Farina seems to be totally ignorant of the fact that there is no consensus among philosophers of science about a demarcation criterion that could reliably distinguish science from non-science. Every criterion suggested so far, including Karl Popper’s criterion of falsifiability, either excludes too much (e.g., scientific fields like string theory or evolutionary biology) or includes too much (e.g., homeopathy or parapsychology).

2. Of course, intelligent design has explanatory power. Otherwise, we could not even explain the existence of Romeo and Juliet by the intelligent agency of William Shakespeare. There is no doubt that the designing activity of an intelligent agent is a perfectly valid explanation for complex specified patterns. The only question under debate is whether such patterns are confined to the realm of human cultural artifacts or whether they are also found in nature. This question should not be settled by the dogmatic a priori restrictions of worldviews that rule out design explanations whatever the evidence might be; it is an empirical question to be decided by the data, and we should follow the evidence wherever it leads.

3. It is simply false that intelligent design does not predict anything. Indeed, this is yet another common stereotype that has been refuted so many times by ID proponents that any further use of this argument can be based only on a total ignorance of the facts (or perhaps deliberate lying, but I prefer not to apply that interpretation). Stephen Meyer (2009) included in his book Signature in the Cell a whole chapter with a dozen predictions inspired by intelligent design theory. These are often very precise and easily falsifiable, for example: “No undirected process will demonstrate the capacity to generate 500 bits of new [specified] information starting from a nonbiological source.” Just write a computer simulation that achieves this, without smuggling the information in through a backdoor, and you can claim victory over a core prediction of intelligent design.
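To see what “smuggling the information in through a backdoor” means in practice, the best-known case is Richard Dawkins’s “Weasel” program, referenced later in this thread as a search that “sneaks intelligence in the back door as weasel and kin do.” A minimal Python sketch of a Weasel-style search follows; the population size and mutation rate are arbitrary choices, and the point to notice is that the TARGET constant, the full 28-character specification, is supplied by the programmer in advance:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"   # the specification, hard-coded up front
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate: str) -> int:
    # Scoring against the hard-coded target is the "backdoor":
    # the specification enters here, not from the search itself.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(parent: str, rate: float = 0.05) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while parent != TARGET:
    # Keep the parent in the pool so the best score never regresses.
    pool = [parent] + [mutate(parent) for _ in range(100)]
    parent = max(pool, key=fitness)
    generation += 1

print(f"Matched the target in {generation} generations")
```

Whether such a target-directed search models undirected evolution at all is exactly what the parties in the comments below dispute; the sketch only shows where the specified information resides.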

Evolution News

Dr. Bechly addresses numerous additional misfires attempted by Professor Dave. With such a voluble spray of baseless accusations coming from someone like Professor Dave, it can be helpful to be reminded of the proverb, “Like a sparrow in its flitting, like a swallow in its flying, a curse that is causeless does not alight.” (Proverbs 26:2)

Comments
JVL:
Let’s start with your test that can be used on ID and unguided evolution.
Irreducible complexity:
IC: “A system performing a given basic function is irreducibly complex if it includes a set of well-matched, mutually interacting, non-arbitrarily individuated parts such that each part in the set is indispensable to maintaining the system’s basic, and therefore original, function. The set of these indispensable parts is known as the irreducible core of the system.” (No Free Lunch, p. 285)

Numerous and Diverse Parts: “If the irreducible core of an IC system consists of one or only a few parts, there may be no insuperable obstacle to the Darwinian mechanism explaining how that system arose in one fell swoop. But as the number of indispensable well-fitted, mutually interacting, non-arbitrarily individuated parts increases in number & diversity, there is no possibility of the Darwinian mechanism achieving that system in one fell swoop.” (p. 287)

Minimal Complexity and Function: “Given an IC system with numerous & diverse parts in its core, the Darwinian mechanism must produce it gradually. But if the system needs to operate at a certain minimal level of function before it can be of any use to the organism & if to achieve that level of function it requires a certain minimal level of complexity already possessed by the irreducible core, the Darwinian mechanism has no functional intermediates to exploit.” (p. 287)
Then there is the fact that ID's concepts of evolution are being used in genetic algorithms. No one uses evolution by means of blind and mindless processes for anything.
ET
June 5, 2022 at 09:02 AM PDT
ET: Except none of that has anything to do with unguided evolution. Not one of your “predictions” has anything to do with unguided evolution. You can replace “unguided evolution” with “telic evolution” and nothing changes.

Except that, without a designer, it's all about unguided evolution. And, since you haven't provided it, there seemingly isn't a 'test' for unguided evolution, one that can be applied to ID as well. You said you had one. Where is it? This is the problem. You don’t have any clue at all. Science is definitely not your forte. You should work on providing the test you claimed you had that could be applied to ID and unguided evolution. Where is it? Read the paper that I linked to that measures functional sequence complexity. Submit your rebuttal to peer review. Provide the test you said you had which you can apply to ID and unguided evolution. And, while you're at it, see if you can find the extra programming you've claimed for years exists in cells but which no one has found. And explain how it affects development.
JVL
June 5, 2022 at 09:00 AM PDT
JVL:
I think the case has been made that life, as we know it, arose via unguided and undirected means.
We know that you are unable to think. No such case has ever been made. Read the paper that I linked to that measures functional sequence complexity. Submit your rebuttal to peer review.
ET
June 5, 2022 at 08:57 AM PDT
Never mind. I found JVL's equivocation:
Unguided evolution predicts that bacteria will gain resistance to antibiotics. Unguided evolution predicts that there will be ‘arms races’ between prey and predators. Unguided evolution predicts that there will be convergent evolution, that is similar functionality will evolve in different lineages along different pathways.
Except none of that has anything to do with unguided evolution. Not one of your "predictions" has anything to do with unguided evolution. You can replace "unguided evolution" with "telic evolution" and nothing changes. Thanks to unguided evolution, evolutionary biologists don't even know what determines biological form! Given populations of prokaryotes, there aren't any naturalistic mechanisms capable of producing eukaryotes. This is the problem. You don't have any clue at all. Science is definitely not your forte.
ET
June 5, 2022 at 08:54 AM PDT
ET: Where did JVL post these alleged predictions of evolution by means of blind and mindless processes? A comment # would be nice.

Go look. Or shut up. Your call.
JVL
June 5, 2022 at 08:49 AM PDT
Where did JVL post these alleged predictions of evolution by means of blind and mindless processes? A comment # would be nice.
ET
June 5, 2022 at 08:47 AM PDT
Kairosfocus: Go, show an observed case where say 500 bits of functional DNA are composed by blind chance and mechanical necessity without intelligent direction, whether in DNA or a random document generation exercise that does not sneak intelligence in the back door as weasel and kin do. That is, find an island of function.

I think the case has been made that life, as we know it, arose via unguided and undirected means. You know I think that, so what are you asking for? Stuff you already acknowledge you disagree with. We know we disagree on this. What do you want?

What I'd like is for you to show me an application of your complex specified information detection algorithm given an input provided by me. Okay? Can you do that?
JVL
June 5, 2022 at 08:42 AM PDT
JVL, you too, that's disappointing. But if you have to pick on that strawman it is an implicit acknowledgement that naturalism does not have much of a case. Just for one example, FSCO/I will only be seen to be caused by intelligently directed configuration. Go, show an observed case where say 500 bits of functional DNA are composed by blind chance and mechanical necessity without intelligent direction, whether in DNA or a random document generation exercise that does not sneak intelligence in the back door as weasel and kin do. That is, find an island of function. KF
kairosfocus
June 5, 2022 at 08:36 AM PDT
Kairosfocus: In that context, Dembski’s use of CSI is an elaboration and generalisation on Orgel et al, as is evident save to the adamant objector. Let me clip, the discussion also echoes statistical thermodynamics:

Fine. If I give you a sequence of symbols, can you apply that criterion? I'd just like to see it 'in action', so to speak.
JVL
June 5, 2022 at 08:30 AM PDT
ET: And it is very telling that neither JVL nor Fred posted any alleged predictions borne from evolution by means of blind and mindless processes.

I did. You're just not paying attention. AND I didn't see the test that can be applied to ID and unguided evolution. A testable hypothesis is not the same thing. You really do need to keep up with your claims.
JVL
June 5, 2022 at 08:23 AM PDT
"Finally, some have said we should have used the ancestral enzyme as our starting point, because they believe modern enzymes are somehow different from ancient ones. Why do they think that? It’s because modern enzymes can’t be coopted to anything except trivial changes in function. In other words, they don’t evolve! That is precisely the point we are making." https://evolutionnews.org/2014/12/a_new_paper_fro/
bornagain77
June 5, 2022 at 08:16 AM PDT
Earth to chuckdarwin: Just saying it doesn't make it so. And that hypothesis is the same type as used by archaeologists and forensic scientists.
ET
June 5, 2022 at 07:58 AM PDT
Many people do not understand what a symbol means. We have the word "dog", which is the symbol of a real dog. There is no deterministic causality between the word "dog" (the symbol) and a real dog. A real dog doesn't tell you that it is "a dog" and doesn't create the word "dog", so between the animal and the letters d, o, g there is no causal link; it is a CONVENTION, created by a third party, that links an animal with its symbol "dog". It is a convention because the animal could be called something different without losing its meaning within the system (the word for dog in Chinese is a different symbol). In the cell, the same thing happens with three DNA bases (the word "dog") serving as the symbol of an amino acid (the real dog). The same convention operates in the cell, but it is vastly more complex, because in the cell the word "dog" (three bases) is part of the blueprint that "will build" the real dog (a 3D protein). How does that work? How would you build a real computer from the words "Personal Computer"? That kind of technology exists in the cell. (A toy illustration of the codon lookup follows after this comment.)
Lieutenant Commander Data
June 5, 2022 at 07:53 AM PDT
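The codon-to-amino-acid convention described in the comment above can be pictured, very loosely, as a lookup table. Here is a toy Python sketch using a handful of assignments from the standard genetic code; the table is deliberately abbreviated, the `translate` helper is invented for illustration, and in the cell the mapping is of course carried out biochemically by tRNAs and their synthetases, not by a dictionary:

```python
# A small excerpt of the standard genetic code: DNA codons -> amino acids.
# Nothing about the letters "ATG" chemically resembles methionine; the
# pairing is a mapping, which is the comment's point about symbols and
# conventions. (Abbreviated table, for illustration only.)
CODON_TABLE = {
    "ATG": "Met",                 # methionine, also the usual start signal
    "TGG": "Trp",                 # tryptophan
    "TTT": "Phe", "TTC": "Phe",   # phenylalanine
    "AAA": "Lys", "AAG": "Lys",   # lysine
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read a DNA string three bases at a time and look up each codon."""
    codons = [dna[i:i + 3] for i in range(0, len(dna) - 2, 3)]
    out = []
    for codon in codons:
        aa = CODON_TABLE.get(codon, "?")
        if aa == "STOP":
            break
        out.append(aa)
    return out

print(translate("ATGTTTAAATGGTAA"))  # ['Met', 'Phe', 'Lys', 'Trp']
```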
ET/168: Your so-called "testable" hypothesis is nothing more than a circular argument...
chuckdarwin
June 5, 2022 at 07:52 AM PDT
ET, I would add, there are trillions of observed cases of FSCO/I by intelligently directed configuration; indeed, all actually observed cases are by this means. Including objecting comments in this thread. As objectors know or should acknowledge. KF
kairosfocus
June 5, 2022 at 07:37 AM PDT
FH, the problem there is first you have to get TO functional sequences and linked functional organisation. That is why Darwin's pond or the like is central. That, as the Smithsonian admitted, is the root of the tree of life. No root, no shoots. KF
kairosfocus
June 5, 2022 at 07:28 AM PDT
The testable hypothesis for ID is as follows:

1) High information content (or specified complexity) and irreducible complexity constitute strong indicators or hallmarks of (past) intelligent design.
2) Biological systems have a high information content (or specified complexity) and utilize subsystems that manifest irreducible complexity.
3) Naturalistic mechanisms or undirected causes do not suffice to explain the origin of information (specified complexity) or irreducible complexity.
4) Therefore, intelligent design constitutes the best explanation for the origin of information and irreducible complexity in biological systems.

And that is by far more than evolution by means of blind and mindless processes can muster. My response (111) to Fred's clueless attempt at a rebuttal (90) proves that Fred doesn't understand science.
ET
June 5, 2022 at 07:14 AM PDT
Neither Allan Miller nor anyone else over on TSZ can explain how blind and mindless processes produced any functional sequence complexity. The point remains that CSI can be quantified. (A sketch of one way to do so follows below.)
ET
June 5, 2022 at 07:07 AM PDT
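On "functional sequence complexity can be quantified": the measure ET alludes to (Durston et al. 2007, the Durston discussed further down this thread) works, roughly, by comparing the ground-state entropy of each position in a protein (log2 20 bits, if all 20 amino acids were equiprobable) against the entropy actually observed at that position across an alignment of functional sequences, summing the differences in "fits" (functional bits). A rough Python sketch under those assumptions; the five-sequence alignment and the function names are invented purely for illustration:

```python
import math
from collections import Counter

GROUND_STATE_BITS = math.log2(20)  # 20 amino acids, assumed equiprobable

def site_entropy(column) -> float:
    """Shannon entropy (bits) of one aligned sequence position."""
    counts = Counter(column)
    total = len(column)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def fsc_fits(alignment) -> float:
    """Sum over sites of (ground-state entropy - observed functional entropy)."""
    columns = zip(*alignment)
    return sum(GROUND_STATE_BITS - site_entropy(col) for col in columns)

# Invented toy alignment of five "functional" sequences, for illustration only.
alignment = ["MKVLA", "MKVLS", "MRVLA", "MKILA", "MKVFA"]
print(f"{fsc_fits(alignment):.1f} fits")
```

With only five sequences the per-site entropies are badly undersampled, so the printed number means nothing in itself; Durston et al. used large alignments of real protein families.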
There's a succinct point made by one Allan Miller in the comments of Swamidass's thread:
But the main thrust is the unreliability of using a set of surviving and commonly descended sequences as if it were an unbiased sample of an entire space.
link
Fred Hickson
June 5, 2022 at 06:53 AM PDT
Swamidass:
Here, one of my brilliant MD PhD students and I study one of the “information” arguments against evolution.
Moron. It is NOT an argument against evolution. It is an argument against evolution by means of blind and mindless processes. Swamidass is the worst type of Christian, one who bears false witness.
ET
June 5, 2022 at 06:53 AM PDT
Earth to Fred: Swamidass is an equivocator. And he NEVER demonstrates that blind and mindless processes can produce any proteins.
ET
June 5, 2022 at 06:51 AM PDT
Rob Davis, aka Timmy Horton:
ID proponents consist of three major groups.
And each of those groups is more intelligent and scientifically literate than you will ever be. You are a coward's coward. Even when you were surrounded by other evoTARDs you were too chicken to ante up and debate me on which side has the science and which side is full of droolers, like you. Even when one of the swamp minions was ready to ante up, he was exposed as an equivocating coward and ran away.
ET
June 5, 2022 at 06:49 AM PDT
Ah yes, it was Joshua Swamidass and this paper: https://www.biorxiv.org/content/10.1101/114132v2

Discussed here: http://theskepticalzone.com/wp/evolution-and-functional-information/comment-page-1/#comments

I see Kirk Durston joined in an earlier discussion here: http://theskepticalzone.com/wp/how-not-to-sample-protein-space/comment-page-1/#comments
Fred Hickson
June 5, 2022 at 06:47 AM PDT
FH, you seem to want to pose hyperskeptically, as if there were no meaning to functionally specific, complex organisation and associated information, or to its general form, complex specified information. The reality is, just to pose objections you generated, intelligently, ASCII text strings at 7 bits of FSCO/I per character, showing cases in point.

D/RNA is related, actually expressing algorithms using coded alphanumeric symbolic elements. Where, this is language. Crick knew that in the '50s, as you were shown from his March 19, 1953 letter to his son, but refuse to acknowledge. Orgel, as cited above, pretty much said the same in 1973, twenty years later. Wicken said the same. Nor is such a surprise: the genetic code is well known, as is the fact that, last I checked, there were about two dozen dialects. Rather as BASIC had ever so many dialects.

As to definition on strings: as configuration-based functional organisation is describable through description languages [think, AutoCAD], discussion on strings is WLOG [without loss of generality].

In that context, Dembski's use of CSI is an elaboration and generalisation on Orgel et al, as is evident save to the adamant objector. Let me clip; the discussion also echoes statistical thermodynamics:
CONCEPT: NFL, p. 148:“The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity [cf. p 144 as cited below], or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . . Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. . . . In virtue of their function [a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways
[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole. Dembski cites: Wouters, p. 148: "globally in terms of the viability of whole organisms," Behe, p. 148: "minimal function of biochemical systems," Dawkins, pp. 148-9: "Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality that is specified in advance is . . . the ability to propagate genes in reproduction." On p. 149, he roughly cites Orgel's famous remark on specified complexity from 1973, which exactly cited reads: "In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity . . ." And, p. 149, he highlights Paul Davies in The Fifth Miracle: "Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity."] . . .”
DEFINITION: p. 144: [Specified complexity can be more formally defined:] “. . . since a universal probability bound of 1 [chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [the cluster] (T, E) constitutes CSI because T [effectively the target hot zone in the field of possibilities] subsumes E [effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ”
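For completeness, the arithmetic linking the probability bound to the complexity bound is a simple change of logarithm base:

$$\log_2\left(10^{150}\right) = 150 \log_2 10 \approx 150 \times 3.3219 \approx 498.3 \ \text{bits},$$

which rounds to the 500-bit threshold just quoted.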
Just for completeness, let's make a bridge:
let us consider a source that emits symbols from a vocabulary: s1, s2, s3, . . . sn, with probabilities p1, p2, p3, . . . pn. That is, in a "typical" long string of symbols, of size M [say this web page], the average number that are some sj, J, will be such that the ratio J/M --> pj, and in the limit attains equality. We term pj the a priori -- before the fact -- probability of symbol sj. Then, when a receiver detects sj, the question arises as to whether this was sent. [That is, the mixing in of noise means that received messages are prone to misidentification.] If on average, sj will be detected correctly a fraction, dj of the time, the a posteriori -- after the fact -- probability of sj is by a similar calculation, dj. So, we now define the information content of symbol sj as, in effect, how much it surprises us on average when it shows up in our receiver:

I = log [dj/pj], in bits [if the log is base 2, log2] . . . Eqn 1

This immediately means that the question of receiving information arises AFTER an apparent symbol sj has been detected and decoded. That is, the issue of information inherently implies an inference to having received an intentional signal in the face of the possibility that noise could be present. Second, logs are used in the definition of I, as they give an additive property: for, the amount of information in independent signals, si + sj, using the above definition, is such that:

I total = Ii + Ij . . . Eqn 2

For example, assume that dj for the moment is 1, i.e. we have a noiseless channel so what is transmitted is just what is received. Then, the information in sj is:

I = log [1/pj] = - log pj . . . Eqn 3

This case illustrates the additive property as well, assuming that symbols si and sj are independent. That means that the probability of receiving both messages is the product of the probability of the individual messages (pi * pj); so:

Itot = log 1/(pi * pj) = [-log pi] + [-log pj] = Ii + Ij . . . Eqn 4

So if there are two symbols, say 1 and 0, and each has probability 0.5, then for each, I is - log [1/2], on a base of 2, which is 1 bit. (If the symbols were not equiprobable, the less probable binary digit-state would convey more than, and the more probable, less than, one bit of information. Moving over to English text, we can easily see that E is as a rule far more probable than X, and that Q is most often followed by U. So, X conveys more information than E, and U conveys very little, though it is useful as redundancy, which gives us a chance to catch errors and fix them: if we see "wueen" it is most likely to have been "queen.")

Further to this, we may average the information per symbol in the communication system thusly (giving in terms of -H to make the additive relationships clearer):

- H = p1 log p1 + p2 log p2 + . . . + pn log pn

or, H = - SUM [pi log pi] . . . Eqn 5

H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: "it is often referred to as the entropy of the source." [p. 81, emphasis added.] Also, while this is a somewhat controversial view in Physics, . . . there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form.
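As a quick check on Eqns 3 and 5 above, a minimal Python sketch; the symbol probabilities are invented for illustration:

```python
import math

def self_information(p: float) -> float:
    """Eqn 3: I = -log2(p), the surprisal of a symbol of probability p."""
    return -math.log2(p)

def entropy(probabilities) -> float:
    """Eqn 5: H = -sum(p * log2 p), average information per symbol, in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Two equiprobable symbols carry one bit each, as in the clip's example.
print(self_information(0.5))           # 1.0
print(entropy([0.5, 0.5]))             # 1.0 bit/symbol
# A skewed source is more predictable, so it averages fewer bits per symbol.
print(round(entropy([0.9, 0.1]), 3))   # 0.469 bits/symbol
```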
Though somewhat controversial even in quite recent years, this is becoming more broadly accepted in physics and information theory, as Wikipedia now discusses [as at April 2011] in its article on Informational Entropy (aka Shannon Information):
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics. [Also, another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
Summarising Harry Robertson's Statistical Thermophysics (Prentice-Hall International, 1993) -- excerpting desperately and adding emphases and explanatory comments, we can see, perhaps, that this should not be so surprising after all. (In effect, since we do not possess detailed knowledge of the states of the very large number of microscopic particles of thermal systems [typically ~ 10^20 to 10^26; a mole of substance containing ~ 6.023*10^23 particles; i.e. the Avogadro Number], we can only view them in terms of those gross averages we term thermodynamic variables [pressure, temperature, etc], and so we cannot take advantage of knowledge of such individual particle states that would give us a richer harvest of work, etc.) For, as he astutely observes on pp. vii - viii:
. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . .
KF
kairosfocus
June 5, 2022 at 06:39 AM PDT
"Our ability to be confident of the design of the cilium or intracellular transport rests on the same principles to be confident of the design of anything: the ordering of separate components to achieve an identifiable function that depends sharply on the components.” " Might there be some as-yet-undiscovered natural process that would explain biochemical complexity? No one would be foolish enough to categorically deny the possibility. Nonetheless, we can say that if there is such a process, no one has a clue how it would work. Further, it would go against all human experience, like postulating that a natural process might explain computers.” - Dr Michael Behe in "Darwin's Black Box"ET
June 5, 2022 at 06:38 AM PDT
Intelligent Design's concepts on evolution have proven useful in the form of genetic algorithms, which are goal-oriented programs that use targeted searches to solve various problems. No one uses evolution by means of blind and mindless processes for anything beyond predicting genetic diseases and deformities.
ET
June 5, 2022 at 06:37 AM PDT
And it is very telling that neither JVL nor Fred posted any alleged predictions borne from evolution by means of blind and mindless processes. That's because blind and mindless processes only predict genetic diseases and deformities.
ET
June 5, 2022 at 06:25 AM PDT
FH: "But I (Darwin) can find out no such case." And indeed Darwin was not a molecular biologist. Molecular biology wasn't even a science then. So he could have known of no 'case'. But Axe is a molecular biologist and has such 'cases',
“Charles Darwin said (paraphrase), ‘If anyone could find anything that could not be had through a number of slight, successive modifications, my theory would absolutely break down.’ Well, that condition has been met time and time again now. Basically every gene and every new protein fold, there is nothing of significance that we can show that can be had in that gradualistic way. It’s all a mirage. None of it happens that way.” - Douglas Axe, “200 Years After Darwin: What Didn’t Darwin Know?” (5:30 minute mark), video, Part 2 of 2: https://youtu.be/VKIgNroTj54?t=329

"Enzyme Families -- Shared Evolutionary History or Shared Design?" - Ann Gauger, December 4, 2014
Excerpt: "If enzymes can't be recruited to genuinely new functions by unguided means, no matter how similar they are, the evolutionary story is false. ... Taken together, since we found no enzyme that was within one mutation of cooption, the total number of mutations needed is at least four: one for duplication, one for over-production, and two or more single base changes. The waiting time required to achieve four mutations is 10^15 years. That's longer than the age of the universe. The real waiting time is likely to be much greater, since the two most likely candidate enzymes failed to be coopted by double mutations."
http://www.evolutionnews.org/2014/12/a_new_paper_fro091701.html

"Claim: New Proteins Evolve Very Easily" - Cornelius Hunter, April 25, 2017
Excerpt: It is now clear that for a given protein, only a few changes to its amino acid sequence can be sustained before the protein function is all but eliminated. Here is how one paper explained it: “The accepted paradigm that proteins can tolerate nearly any amino acid substitution has been replaced by the view that the deleterious effects of mutations, and especially their tendency to undermine the thermodynamic and kinetic stability of protein, is a major constraint on protein evolvability—the ability of proteins to acquire changes in sequence and function.” In other words, protein function precipitously drops off with only a tiny fraction of its amino acids altered. It is not a gradual fitness landscape. Another paper described the protein fitness landscape as rugged. Therefore it is not surprising that various studies on evolving proteins have failed to show a viable mechanism. One study concluded that 10^63 attempts would be required to evolve a relatively short protein. And a similar result (10^65 attempts required) was obtained by comparing protein sequences. Another study found that 10^64 to 10^77 attempts are required, and another study concluded that 10^70 attempts would be required. So something like 10^70 attempts are required, yet evolutionists estimate that only 10^43 attempts are possible. In other words, there is a shortfall of 27 orders of magnitude. But it gets worse. The estimate that 10^43 attempts are possible is utterly unrealistic. For it assumes billions of years are available, and that for that entire time the Earth is covered with bacteria, constantly churning out mutations and new protein experiments. Aside from the fact that these assumptions are entirely unrealistic, the estimate also suffers from the rather inconvenient fact that those bacteria are, err, full of proteins. In other words, for evolution to evolve proteins, they must already exist in the first place. This is absurd. And yet, even with these overly optimistic assumptions, evolution falls short by 27 orders of magnitude.
https://www.evolutionnews.org/2017/04/claim-new-proteins-evolve-very-easily/

"Right of Reply: Our Response to Jerry Coyne" - September 29, 2019, by Günter Bechly, Brian Miller and David Berlinski
Excerpt: David Gelernter observed that amino acid sequences that correspond to functional proteins are remarkably rare among the “space” of all possible combinations of amino acid sequences of a given length. Protein scientists call this set of all possible amino acid sequences or combinations “amino acid sequence space” or “combinatorial sequence space.” Gelernter made reference to this concept in his review of Meyer and Berlinski’s books. He also referenced the careful experimental work by Douglas Axe who used a technique known as site-directed mutagenesis to assess the rarity of protein folds in sequence space while he was working at Cambridge University from 1990-2003. Axe showed that the ratio of sequences in sequence space that will produce protein folds to sequences that won’t is prohibitively and vanishingly small. Indeed, in an authoritative paper published in the Journal of Molecular Biology Axe estimated that ratio at 1 in 10^74. From that information about the rarity of protein folds in sequence space, Gelernter—like Axe, Meyer and Berlinski—has drawn the rational conclusion: finding a novel protein fold by a random search is implausible in the extreme. Not so, Coyne argued. Proteins do not evolve from random sequences. They evolve by means of gene duplication. By starting from an established protein structure, protein evolution had a head start. This is not an irrational position, but it is anachronistic. Indeed, Harvard mathematical biologist Martin Nowak has shown that random searches in sequence space that start from known functional sequences are no more likely to enter regions in sequence space with new protein folds than searches that start from random sequences. The reason for this is clear: random searches are overwhelmingly more likely to go off into a non-folding, non-functional abyss than they are to find a novel protein fold. Why? Because such novel folds are so extraordinarily rare in sequence space. Moreover, as Meyer explained in Darwin’s Doubt, as mutations accumulate in functional sequences, they will inevitably destroy function long before they stumble across a new protein fold. Again, this follows from the extreme rarity (as well as the isolation) of protein folds in sequence space. Recent work by Weizmann Institute protein scientist Dan Tawfik has reinforced this conclusion. Tawfik’s work shows that as mutations to functional protein sequences accumulate, the folds of those proteins become progressively more thermodynamically and structurally unstable. Typically, 15 or fewer mutations will completely destroy the stability of known protein folds of average size. Yet, generating (or finding) a new protein fold requires far more amino acid sequence changes than that. Finally, calculations based on Tawfik’s work confirm and extend the applicability of Axe’s original measure of the rarity of protein folds. These calculations confirm that the measure of rarity that Axe determined for the protein he studied is actually representative of the rarity for large classes of other globular proteins. Not surprisingly, Dan Tawfik has described the origination of a truly novel protein or fold as “something like close to a miracle.” Tawfik is on Coyne’s side: He is mainstream.
https://quillette.com/2019/09/29/right-of-reply-our-response-to-jerry-coyne/

"A Thermodynamic Analysis of the Rarity of Protein Folds" - Dr. Brian Miller, 2019: https://www.youtube.com/watch?v=CvSpN_3tFN4
“Research by Douglas Axe demonstrated that amino acid sequences that correspond to a functional beta-lactamase protein fold are extremely rare. In response, critics have raised questions related to the accuracy of his analysis. This presentation describes how more recent research on the effects of mutations on the thermodynamic stability of protein folds has confirmed Axe’s result and its general relevance to most proteins.”
bornagain77
June 5, 2022 at 06:25 AM PDT
This popped up while looking for the discussion on Durston: https://uncommondescent.com/intelligent-design/why-theres-no-such-thing-as-a-csi-scanner-or-reasonable-and-unreasonable-demands-relating-to-complex-specified-information/

What happened to Vincent Torley?
Fred Hickson
June 5, 2022 at 06:25 AM PDT
"Some unknown naturalistic processes did something", is NOT the makings of a scientific theory. And that is what Darwin offered. That is what we still have today.ET
June 5, 2022 at 06:24 AM PDT