Uncommon Descent Serving The Intelligent Design Community

Evolution driven by laws? Not random mutations?

Categories
Evolutionary biology
Intelligent Design
News

So claims a recent book, Arrival of the Fittest, by Andreas Wagner, professor of evolutionary biology at the University of Zurich in Switzerland (also associated with the Santa Fe Institute). He lectures worldwide and is a fellow of the American Association for the Advancement of Science.

From the book announcement:

Can random mutations over a mere 3.8 billion years solely be responsible for wings, eyeballs, knees, camouflage, lactose digestion, photosynthesis, and the rest of nature’s creative marvels? And if the answer is no, what is the mechanism that explains evolution’s speed and efficiency?

In Arrival of the Fittest, renowned evolutionary biologist Andreas Wagner draws on over fifteen years of research to present the missing piece in Darwin’s theory. Using experimental and computational technologies that were heretofore unimagined, he has found that adaptations are not just driven by chance, but by a set of laws that allow nature to discover new molecules and mechanisms in a fraction of the time that random variation would take.

From a review (which is careful to note that it is not a religious argument):

The question “how does nature innovate?” often elicits a succinct but unsatisfying response – random mutations. Andreas Wagner first illustrates why random mutations alone cannot be the cause of innovations – the search space for innovations, be it at the level of genes, proteins, or metabolic reactions, is so large that the probability of stumbling upon all the innovations needed to make a little fly (let alone humans) is too low to have occurred within the time span the universe has been around.

He then shows some of the fundamental hidden principles that can actually make innovations possible for natural selection to then select and preserve those innovations.

Like interacting parallel worlds, this would be momentous news if true. But someone is going to have to read the book and assess the strength of the laws advanced.

One thing is for sure: if an establishment figure can safely write this kind of thing, Darwin’s theory is coming under more serious fire than ever. But we knew that, of course, when Nature published an article on the growing dissent within the ranks about Darwinism.

In origin of life research, there has long been a law vs. chance controversy. For example: does nature just “naturally” produce life? Or: maybe if we throw enough models at the origin of life, some of them will stick?

Note: You may have to apprise your old schoolmarm that Darwin’s theory* is “natural selection acting on random mutations,” not “evolution” in general. It is the only theory that claims sheer randomness can lead to creativity, in conflict with information theory. See also: Being as Communion.

*(or neo-Darwinism, or whatever you call what the Darwin-in-the-schools lobby is promoting or Evolution Sunday is celebrating).*

Follow UD News at Twitter!

Comments
gpuccio, Would you have been satisfied with Keefe & Szostak's experiment if they had selected for biotin binding, which IS a biologically selectable function? Why, no, you wouldn't. Because, as you put it,
How does Szostak decide that he will work on the weak binding for ATP, and not on any other random sequence? Because he knows that ATP binding is what he wants to obtain. Because he is a designer.
This is the "the experiment was designed" complaint. You are stating that you would only be satisfied if A) the sequence has a biologically selectable function and B) the experimental design does not test for any particular biologically selectable function. Such conditions only apply in the field (where, of course, they have been observed: e.g. vpu). They cannot apply in any experiment. Can you not see that this is deranged?
DNA_Jock
November 8, 2014 at 11:05 AM PDT
Learned Hand, to gpuccio:
Dembski made P(T|H), in one form or another, part of the CSI calculation for what seem like very good reasons. And I think you defended his concept as simple, rigorous, and consistent. But nevertheless you, KF, and Dembski all seem to be taking different approaches and calculating different things.
That's right. Dembski's problems are that 1) he can't calculate P(T|H), because H encompasses "Darwinian and other material mechanisms"; and 2) his argument would be circular even if he could calculate it. KF's problem is that although he claims to be using Dembski's P(T|H), he actually isn't, because he isn't taking Darwinian and other material mechanisms into account. It's painfully obvious in this thread, in which Elizabeth Liddle and I press KF on this problem and he squirms to avoid it. Gpuccio avoids KF's problem by explicitly leaving Darwinian mechanisms out of the numerical calculation. However, that makes his numerical dFSCI value useless, as I explained above. And gpuccio's dFSCI has a boolean component that does depend on the probability that a sequence or structure can be explained by "Darwinian and other material mechanisms", so his argument is circular, like Dembski's. All three concepts are fatally flawed and cannot be used to detect design.
keith s
November 8, 2014 at 10:43 AM PDT
To address Dionisio's #533
Dionisio wrote: D: One very interesting thing about questions like the ones posted on #512 is that their correct answers lead to deeper questions, which eventually point to an elaborate information-processing system that’s a real delight for any passionate computer scientist or engineer. Do you understand this? [No emphasis in original]
DNAJ replied: Yes. One of the cool things Robert M. Pirsig points out in “Zen and the Art…” is that the more tests you do, the more hypotheses increase in number. Makes science a lot of fun, if rather poorly remunerated. I see you opted for the buns [Emphasis in original]
To which Dionisio replied: Apparently you did not understand what I wrote. That’s fine. Let’s try it again. As serious researchers dig into the biological systems, while trying to answer outstanding questions, they discover and report elaborate choreographies of information-processing mechanisms (regulatory networks, signaling pathways, epigenetics, proteomics, the whole nine yards), and newer questions arise. As far as I recall, there’s only one known source of information-processing systems: intelligence. There are “chicken-egg” questions associated with the observed systems. Has anyone proposed a comprehensive step-by-step description of how those biological orchestrations could have appeared? If you know of any, can you point to it? I would gladly read it. Did you understand this now? [Emphasis in original]
No, Dionisio, I understood what you wrote quite well the first time. What I did not realize is that you expected me to associate the phrase “elaborate information-processing system” with something that could not arise via evolution. I understood your statement, and I agreed with it. By the same token, I completely agree with the first sentence of your re-statement : “As serious researchers…newer questions arise”. Have you read “Zen and the Art…”? I think you would enjoy it. When you say “As far as I recall, there’s only one known source of information-processing systems: intelligence.”, I am inclined to agree with you, but only so long as we maintain a rather broad view of “intelligence”, that includes virtually all extant organisms, and viruses too. We can exclude prions. Before you start over-concluding, I feel I should warn you that, as far as I recall, there’s only one known source of intelligence: biology. Therefore any conclusion you wish to infer about the need for intelligent intervention in order to explain the origin of complex systems will apply equally to the need for biology in order to explain the origin of intelligence. You might want to think that one through. The problem is with the inductive nature of your “only one known source” argument.
There are “chicken-egg” questions associated with the observed systems.
I can remember being taught about the huge “chicken-egg” problem (you’ll appreciate the fact that we termed it a “bootstrap problem”) presented by template-directed protein synthesis. Have you read “Signature in the Cell”? What did you think of his treatment of the TDPS bootstrap problem?
Has anyone proposed a comprehensive step-by-step description of how those biological orchestrations could have appeared? If you know of any, can you point to it? I would gladly read it.
Autodidacticism can lead one astray; I would recommend taking a degree or two in biology. Although, if you are correct when you say that your IQ is about the same as your age, this won’t help much.
DNA_Jock
November 8, 2014 at 10:36 AM PDT
DNA_Jock: I have not much time, so for the moment I will address only a couple of more relevant points: a) I admit that I was not very precise when I said: "ATP binding “is not a function at all”". I should have said: "ATP binding in itself is not a naturally selectable function in a biological context, neither in its "strong" form (the final protein, see what happened when they tried to inject it in living organisms), nor, least of all, in the original weak affinity form." Sometimes I write too quickly. You certainly know that anything can be defined as a function, according to my approach, so ATP binding too can be defined as a function. Obviously, in its weak form, it is not a very complex function, because it is rather likely in random sequences. We all agree on that. The important point is that it is not a naturally selectable function. A weak affinity for ATP does not confer any reproductive advantage in any known context. So, I suppose this brings us to the last point about the difference between IS and NS. And here it is. I must seriously disagree with what you say about that final point. First of all, let's clarify that IS is not a process of selection where intelligent agents intervene to select each time. IS is usually a process where intelligent agents have defined what to select, how to select it, and then the selection can certainly be algorithmic, like in my example of antibody maturation. Therefore, your objections like: "Human beings did not “select” anything, except of course the conditions under which the experiment was performed. They did NOT go in and hand-pick binders." or: "No intelligent agent intervenes in the process" are completely inappropriate and out of order. So, what is the difference between IS and NS? Easy.
NS is a process where no intelligent agent decides what will be selected, and the selection (which indeed should not be called selection, but that is not important) is made by what already exists in the system, and what exists in the system was not algorithmically set by any intelligent agent with any awareness of anything to be selected. In the form which interests our debate, NS happens only because there are self-replicators which compete for resources, and therefore the replicators which have a reproductive advantage are expanded (positive selection) while those who lose reproductive fitness are eliminated (negative selection). IOWs, the trait which defines NS is that what is selected is reproductive success. Anything else can be selected only indirectly, if it confers reproductive success. So, if you want to model NS you need to generate a model where the variation confers an advantage of itself, in an environment which has not been set to measure anything in particular, and to react to that measurement by expanding anything. That's what I mean when I say that, in NS, the variation must be selected "on its own merits". In IS, on the other hand, there is a prior definition of the function, or of the type of function, and then a measurement system is set up to measure the defined function. This is a very important point. The measurement system can be set to measure the desired function at any level, even at very low levels. That's what Szostak has done. So, we can recognize functions which would never be recognized "on their own merits", in any natural context. And the second important point is, the system is set up to generate a continuous pathway to the final result. Szostak selects any level of ATP binding, amplifies and mutates the selected result, and then re-selects any increase in the binding. He is willfully generating a continuous gradient of the desired function, where any increase is selected and amplified, any decrease is eliminated. 
Elizabeth does the same thing. This works. But it is not NS. And it is not a model of NS. You should understand that. You cannot model one thing with another thing which has completely different properties and completely different behaviour. How does Szostak decide that he will work on the weak binding for ATP, and not on any other random sequence? Because he knows that ATP binding is what he wants to obtain. Because he is a designer. How does Szostak measure the ATP binding? By an intelligent measurement system. Not by any advantage that the ATP binding confers in any natural system. IS measures what it wants to measure, at the level it wants to measure it. And IS actively acts on the results. It expands what has been considered "good", not on its own merits, but in the judgement of the intelligent agent or of the algorithm he has created. And varies it and selects it again. Szostak implements mutagenic PCR on the selected sequences, and then again, and then again. Expansion and variation and measurement and again expansion. Of what? Of what is considered the purpose. Any time the result is nearer to the desired purpose. Even if the result is only a little bit nearer to the purpose, it is selected, and expanded, and again selected. It's easy to do that. And what does he attain in the end? A protein which binds ATP. Strongly. Which is exactly what he wanted to obtain. Which is still a protein which has no useful function in a biological context, and which cannot be naturally selected in a biological context (indeed, it can easily be negatively selected as an ATP depriver). You say, of Lizzie's example: "Strings with higher scores are more likely to be copied. Neither the program nor its designer has any clue about what a ‘successful” string might look like." No. Strings with higher scores are more likely to be copied because Lizzie has decided it that way.
Strings with higher scores have no useful information of their own in a natural context, and the score cannot be measured by any natural context. This is not NS, nor any valid model of it. This is a fundamental flaw of all these "models" of NS. The problem is not that they are designed, as you seem to believe my argument is. The problem is that they are designed as examples of IS, and then they are erroneously considered "models" of NS. They are not. So, I fully maintain my statement. Lizzie’s “Creating CSI with NS” should be called “Creating CSI with design by IS”. If you don't agree, please explain in what sense Lizzie's example would be a model of NS, and not simply a trivial implementation of IS and engineering. Szostak's paper has exactly the same problem. He should be interested only in one thing: can we find, in random libraries, sequences that can be shown to be naturally selectable? His paper tells nothing about that. It just tells us that we can find in random libraries sequences which can be intelligently engineered, even if the results are not especially useful or impressive.
gpuccio
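The round-by-round loop described here (select any trace of the target function, however weak, then amplify the survivors with mutation and re-select) can be sketched as a toy simulation. Everything below is illustrative: the scoring function is a stand-in for a binding assay, and the names and parameters are hypothetical, not Szostak's actual protocol.

```python
import random

TARGET = "ATPBINDER"   # stand-in for "strong ATP affinity"; purely illustrative
ALPHABET = "ABDEINPRT"  # the distinct letters of TARGET

def score(seq):
    # Toy stand-in for a measured binding affinity: fraction of positions
    # matching TARGET. A real assay measures affinity, not sequence identity.
    return sum(a == b for a, b in zip(seq, TARGET)) / len(TARGET)

def mutate(seq, rate=0.1):
    # Mutagenic-PCR stand-in: each position mutates with probability `rate`.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in seq)

def directed_evolution(pool_size=200, rounds=30, keep=20):
    # Start from a random library.
    pool = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
            for _ in range(pool_size)]
    for _ in range(rounds):
        # "Selection": keep the best binders, however weak.
        pool.sort(key=score, reverse=True)
        survivors = pool[:keep]
        # "Amplification" with mutation: refill the pool from survivors.
        pool = [mutate(random.choice(survivors)) for _ in range(pool_size)]
    return max(pool, key=score)

best = directed_evolution()
print(best, score(best))
```

The point of contention in the thread is visible in the code itself: the experimenter chose `score` and the selection threshold in advance, which is what gpuccio calls intelligent selection, while DNA_Jock's reply below argues that the loop body contains no hand-picking of individual sequences.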
November 8, 2014 at 10:13 AM PDT
KF, I see that you've had time to drop a few replies and write a characteristically long "FYI-FTR." Could you take a few seconds and respond to my repeated question above? What basis is there for asserting that Orgel and Dembski are using the same notion of "complexity?" (If you had opened comments on the latest FYI-FTR, I would also like to know your thoughts on Dembski's declaration that he had "pretty much dispensed with the EF," due to logical flaws, and his subsequent reinstatement of it on grounds that "critics [were] crowing about the demise of the EF." He never seemed to address the logical flaws he acknowledged crippled the EF in the first place; have you?)
Learned Hand
November 8, 2014 at 10:02 AM PDT
Incidentally, gpuccio, I do appreciate your responses. I realize that I'm asking you to revisit things you probably feel that you've discussed exhaustively in the past. Thank you for your forbearance. Given how often CSI is discussed, do you not think it's oddly difficult to find examples of it being calculated? Do you think, with an hour to search, you could find more than a dozen? I suspect fewer than that; I searched a while back, and could just find a few serious attempts. It's like pulling teeth just to get someone to explicitly lay out their method and do the calculation. And when they do, everyone seems to have their own pet approach to the problem. I don't see what's "simple, beautiful and consistent" about CSI.
Learned Hand
November 8, 2014 at 09:53 AM PDT
Learned Hand:
“Remember that there are a couple of assertions on the table: gpuccio’s claim that CSI is a beautiful, strong, consistent concept, and now yours that it’s a routine calculation. Neither claim is supported by the poor showings to date. If it’s so easy to calculate, then please calculate it rather than declaring that you’ve done so and listing a few parts of the calculation.”
Have I missed your comments to my posts #360 and #400 here? If they are “poor showings”, please explain why. Or simply admit that I have calculated what you ask me to calculate. You can agree or not with the calculation, but not go on saying that I have not done it.
Thanks for the response. I didn't respond to your comment at #360 because it (a) was addressed to someone else, and (b) doesn't calculate anything or refer to any explicit calculations. It just lists a series of things that you say were and weren't designed. Did you mean a different comment? Because if you did mean that #360 is responsive to my request that you show the work behind a CSI calculation, then yes, it's an extremely poor showing--there are no calculations there. Your comment #400 does sketch out a basic approach to declaring that specified complexity exists, although it's still not what I've been asking for--can you please simply give your approach? Just the formula you're using would be helpful, especially since it's difficult for me to work backwards and determine it from your casual discussion of the results of your calculations. I'm interested in comparing it to Dembski's, because you aren't following the same procedure as far as I can tell. Possibly I'm just misunderstanding your approach, though. It looks to me like you're disclaiming the need to consider non-design hypotheses when you say, "Obviously, both for ATP synthase and histone H3 I am aware of no algorithmic explanation for their origin. Can you give one?" (No, I can't. I wouldn't know where to start.) While I'm no mathematician, I think it's safe to say that not knowing what a term should be is not a sufficient reason to ignore it. Dembski made P(T|H), in one form or another, part of the CSI calculation for what seem like very good reasons. And I think you defended his concept as simple, rigorous, and consistent. But nevertheless you, KF, and Dembski all seem to be taking different approaches and calculating different things. If CSI is "simple, beautiful and consistent," why are there so many versions of it, and why is it like pulling teeth to get someone to explicitly demonstrate a calculation? It can be done, I'm sure I've seen people try to walk through the calculations before. 
But doing so seems to make IDists deeply uneasy, especially because it draws very pointed criticisms identifying flaws in both the concept and the execution (most obviously, from my perspective, the casual dismissal of the P(T|H) problem).
Learned Hand
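For reference, the formula under dispute is Dembski's 2005 formulation of specified complexity, which folds P(T|H) into a single score. A minimal sketch, with hypothetical example numbers:

```python
import math

def chi(p_T_given_H, phi_S, replicational_resources=1e120):
    # Dembski (2005): chi = -log2( 10^120 * phi_S(T) * P(T|H) ), where
    # phi_S(T) counts patterns at least as simply describable as T, and
    # P(T|H) is the probability of T under the chance hypothesis H.
    # chi > 1 is Dembski's criterion for inferring design.
    return -math.log2(replicational_resources * phi_S * p_T_given_H)

# Hypothetical pattern: phi_S = 10^5 and P(T|H) = 2^-500.
print(chi(2.0**-500, 1e5))
```

The formula makes the thread's complaint concrete: the score cannot be computed at all without a value for `p_T_given_H`, which is exactly the term Learned Hand says keeps being dropped.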
November 8, 2014 at 09:46 AM PDT
Dionisio, I answered many, but not all, of your questions. Some appeared to be entirely rhetorical. If there are particular questions that you think I ought to answer, but have not, please restate them. I will address your post #533. My responses to you may be a little slower coming than my responses to gpuccio. I hope you understand why.
DNA_Jock
November 8, 2014 at 09:39 AM PDT
#541 KS, the dFSCI metric represents the informational content of an entity, rooted in the number of coded y/n q’s to specify state in a communicative context. Source, encoder, decoder, application, code system, physical expression etc. In that context, per config space scope vs sparse possible blind search, it becomes maximally implausible that such would be able to find islands of relevant function, WITHOUT need to define or work out precise calculated probability values. KF
#542 D-J, The just above will also be helpful for you. KF
Not at all helpful. My questions were: Are [you] quite comfortable with Durston’s assumption that the exploration of insulin’s aa sequence has been a random walk, without any intervention? Every single one of your calculations of p(T|H) and related alphabet soup relies on the assumption of independence (as you have admitted), and this assumption is false (as both you and Durston have admitted). You assert that the error is “not material”. How big is the error? How do you know? Your posts at 497, 541 and 542 were relatively concise (542 admirably so), but non-responsive. Please be as precise and concise as you can while answering these questions.
DNA_Jock
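A toy two-site example shows why the independence assumption matters for p(T|H): when sites covary, multiplying marginal frequencies can badly misstate the probability of the functional class. The distribution below is hypothetical, chosen only to make the gap visible.

```python
# Hypothetical joint distribution over two sequence sites; suppose
# "function" requires the two sites to match. The sites strongly covary.
joint = {
    ("A", "A"): 0.45, ("B", "B"): 0.45,
    ("A", "B"): 0.05, ("B", "A"): 0.05,
}

# True probability of the functional (matching) class: 0.90.
p_true = joint[("A", "A")] + joint[("B", "B")]

# Marginals are uniform (P=0.5 for each letter at each site), so an
# independence model multiplies marginals and gets 0.50 instead.
p_indep = 0.5 * 0.5 + 0.5 * 0.5

print(p_true, p_indep)
```

Here independence understates the probability of the target class by nearly a factor of two at just two sites; across hundreds of covarying residues the error compounds, which is the substance of the "how big is the error?" question.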
November 8, 2014 at 09:31 AM PDT
gpuccio, As I understand it, you are making three, potentially related, complaints about Keefe & Szostak. 1. There were no strong ATP binders in the original library. 2. Strong ATP binders only arose after “intelligent selection” had been applied 3. ATP binding “is not a function at all” Complaint # 1 “There were no strong ATP binders in the original library.” This is a feature, not a bug. If the initial library had contained strong L-binders, and RM+S had not been able to improve on them, we would be drawn to conclude that L-binders were rather easy to come by, and that RM+S wasn’t much use at improving them. Instead, the result was much more interesting: weak L-binders were found at a frequency that makes them accessible to an unguided stochastic process, AND many of these L-binders could be improved, often dramatically, by RM+S. I will deal with complaint # 2 last, since it ties in with your comments re “Creating CSI with NS”. Complaint # 3 ATP binding “is not a function at all” While your statement is untrue, it can be re-phrased to form a seemingly reasonable objection, i.e. “ATP binding is not a very interesting function; I want to see you evolve an enzyme activity.” If you think carefully about how their experiments work, you will be able to figure out for yourself why this is an inherent limitation of the method, but it is NOT a limitation that would apply in nature. This is what I said originally (#245 of the elephant thread) regarding the technology described in Keefe & Szostak:
Where we disagree, AFAICT, is your insistence that RV and NS must be considered separately AND that no NS can act until there is a selective advantage that is a “fact”, meaning it has been demonstrated to be operative (and, you seem to imply, historically accurate?) by evidence that you personally find clear and convincing. I, OTOH, am willing to posit small selective advantages for simpler, poorly optimized polymers, and try to investigate what these rudimentary functionalities might look like. And the experimental data on protein evolution supports me here: in particular, Phylos Inc demonstrated that using libraries of sizes of ~ 10^13 (e.g. USP 6,261,804), you could evolve peptides that bound to pretty much ANYTHING. Unfortunately, I can’t get much more specific, but here’s a “statement against interest”: the libraries produced better binders if the random peptide was anchored by an invariant ‘scaffold’. They used fibronectin, but I suspect that a bit of beta sheet at each end of the random peptide would have done the trick. They also had a technical problem in optimizing catalysis, but that limitation would not apply in actual living systems. [Emphasis in original]
Bottom up studies like Keefe’s are the only way to explore the frequency of “the shores of the islands of function” in protein space (that I have heard of). Studies like McLaughlin explore the degree to which functional protein space is interconnected via single steps near an optimum. Durston asks “how broad is the peak?”, a question of secondary relevance, at best. Axe doesn’t explore anything; the paper is based on a glaring fallacy. See my attempt to explain this, inter alia, to Mung. WordPress is mangling my attempts to provide you with a linkout. Please enter "http://theskepticalzone.com/wp/?p=1472&cpage=7#comment-19065" in your browser. Dr. Axe is represented by “Dr. A” -- I’m a subtle guy. (Off-topic: I believe I owe you an apology: from various things that Mung had said about you at TSZ, I had erroneously assumed that you had discussed PDZ on UD. My bad.) There is not any inconsistency between, to use your terms, the forward data and the reverse data: Keefe’s forward data are compatible with McLaughlin’s and Durston’s reverse data. You may be mis-understanding Durston’s data. Axe himself is mis-understanding his own data. Finally, complaint #2: Strong ATP binders only arose after “intelligent selection” had been applied. Your complaint against “intelligent selection” is fundamentally flawed. If you wish to argue that a particular model, or a particular experiment, does not accurately reflect the process that it purports to model and/or test, then you need to explain, with supporting data, why you believe this to be the case. Merely complaining that “the experiment was designed” or “the solution was smuggled in” does not cut it. Of course the experiment was designed! The question is rather “Was it appropriately designed?” In the case of Keefe & Szostak, they performed random mutation, and then selection for binding, achieved by letting the entire mix stick to immobilized ligand, washing and eluting.
Human beings did not “select” anything, except of course the conditions under which the experiment was performed. They did NOT go in and hand-pick binders. Similarly your complaint that Lizzie’s “Creating CSI with NS” should be called “Creating CSI with design by IS”
IS requires a conscious intelligent agent who recognizes some function as desirable, sets the context to develop it, can measure it at any desired level, and can intervene in the system to expand any result which shows any degree of the desired function. IOWs, both the definition of the function, the way to measure it, and the interventions to facilitate its emergence are carefully engineered. It’s design all the way. On the contrary, NS assumes that some new complex function arises in a system which is not aware of its meaning and possibilities, only because some intermediary steps represent a step to it, and through the selection of the intermediary steps because of one property alone: higher reproductive success. So, I ask a simple question: what reproductive success is present in Lizzie’s example? None at all. It’s the designer who selects what he wants to obtain. The property selected has no capability at all to be selected “on its own merits”.
Strings with higher scores are more likely to be copied. Neither the program nor its designer has any clue about what a “successful” string might look like. No intelligent agent intervenes in the process. Just like antibody affinity maturation, which you also used as an example of IS. Some parts of the gene mutate at a higher rate than others, therefore “intelligence”. Really?
DNA_Jock
November 8, 2014 at 09:24 AM PDT
KS, with all due respect, your insistent repetition of errors does not make them into truth. I have said enough long since for a reasonable person to see just why FSCO/I -- which includes dFSCI -- is real, is observable and recognisable, is quantifiable based on observable characteristics, and why it is maximally unlikely to result from blind chance and mechanical necessity but is, per trillions of directly observed cases in point of its cause, a reliable sign of design. (The just above linked is yet another explanation for those who needed it, in reply to your caricaturing of Newton, which I just could not let pass, as one trained in my discipline.) I have said enough for the reasonable man, so I need not elaborate further regardless of drumbeat repetition of erroneous assertions and insinuations. Gotta go now, errands to run for She Who Must be Obeyed. KF
kairosfocus
November 8, 2014 at 07:44 AM PDT
D-J, The just above will also be helpful for you. KF
kairosfocus
November 8, 2014 at 07:17 AM PDT
KS, the dFSCI metric represents the informational content of an entity, rooted in the number of coded y/n q's to specify state in a communicative context. Source, encoder, decoder, application, code system, physical expression etc. In that context, per config space scope vs sparse possible blind search, it becomes maximally implausible that such would be able to find islands of relevant function, WITHOUT need to define or work out precise calculated probability values. KF PS: here, will be relevant.
kairosfocus
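The quantity described, the number of coded yes/no questions needed to specify one state, is just log2 of the size of the configuration space. A minimal illustration; the 500-bit figure is the threshold usually cited in these discussions (2^500 is roughly 3 x 10^150 configurations):

```python
import math

def bits_to_specify(alphabet_size, length):
    # Number of yes/no questions needed to single out one sequence from
    # the space of all length-`length` sequences over the alphabet:
    # log2(alphabet_size ** length) = length * log2(alphabet_size).
    return length * math.log2(alphabet_size)

# A 300-residue protein over the 20 amino acids:
protein_bits = bits_to_specify(20, 300)

THRESHOLD_BITS = 500  # the commonly cited sparse-search bound, ~3e150 states
print(protein_bits, protein_bits > THRESHOLD_BITS)
```

Note that `bits_to_specify` counts the whole configuration space; the contested step in the thread is estimating how sparse the functional subset of that space is, which is where P(T|H) re-enters.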
November 8, 2014 at 07:14 AM PDT
"No evolutionary biologist thinks the flagellum (or any other complex structure) arose through a purely random process; everyone thinks selection was involved." It's such a ridiculous point when evolutionists say this. In other words, it is completely random, but remember, some organisms died.
phoodoo
November 8, 2014 at 06:17 AM PDT
Strange how evos attack ID methodology seeing that they don't use any methodology beyond bald declaration.
Joe
November 8, 2014 at 06:05 AM PDT
keith s:
The dFSCI number reflects the probability that a given sequence was produced purely randomly, without selection.
Prove it.
No evolutionary biologist thinks the flagellum (or any other complex structure) arose through a purely random process; everyone thinks selection was involved.
Yet natural selection has proven to be impotent. But anyway, what is the methodology those evolutionary biologists used to determine that unguided evolution could produce a bacterial flagellum? Please be specific, that way we can compare methodologies.
Joe
November 8, 2014 at 06:04 AM PDT
DNA_Jock RE: #535 addendum Lately we see mathematicians, electrical engineers and computer science professionals involved in multidisciplinary teams, working on important biology-related research projects at different institutions. That's why we look forward, with much anticipation, to reading newer reports coming out of research, because they shed more light on the elaborate cellular and molecular choreographies observed in biological systems. These days it's quite fascinating to closely follow what is going on in biology. Let's enjoy it! :)
Dionisio
November 8, 2014 at 02:32 AM PDT
keith s: Just to be clear. I have already answered your "objections". I will not do it again. You seem to love repetitions. I don't. There is a point where reasonable people must accept that they have different ideas. You don't seem to believe that, and go on crying: "I am right. I win." That's fine. Go on.
gpuccio
November 8, 2014 at 02:30 AM PDT
DNA_Jock RE: #534 addendum You may want to keep in mind what is written in post #525. :)
Dionisio
November 8, 2014 at 02:13 AM PDT
DNA_Jock RE: #533 addendum Here's an example of a very interesting scientific report and some of the new questions that arise while carefully reading it: https://uncommondescent.com/evolution/a-third-way-of-evolution/#comment-525809 Note that there are many examples like this. Over 550 just in this thread: https://uncommondescent.com/evolution/a-third-way-of-evolution/ Can you answer the questions in post #533? Thank you. :)
Dionisio
November 8, 2014 at 02:09 AM PDT
#522 DNA_Jock
D: One very interesting thing about questions like the ones posted on #512 is that their correct answers lead to deeper questions, which eventually point to an elaborate information-processing system that’s a real delight for any passionate computer scientist or engineer. Do you understand this?
Yes. One of the cool things Robert M. Pirsig points out in “Zen and the Art…” is that the more tests you do, the more hypotheses increase in number. Makes science a lot of fun, if rather poorly remunerated. I see you opted for the buns. :)
Apparently you did not understand what I wrote. That's fine. Let's try it again. As serious researchers dig into the biological systems, while trying to answer outstanding questions, they discover and report elaborate choreographies of information-processing mechanisms (regulatory networks, signaling pathways, epigenetics, proteomics, the whole nine yards), and newer questions arise. As far as I recall, there's only one known source of information-processing systems: intelligence. There are "chicken-egg" questions associated with the observed systems. Has anyone proposed a comprehensive step-by-step description of how those biological orchestrations could have appeared? If you know of any, can you point to it? I would gladly read it. Do you understand it now? :)
Dionisio
November 8, 2014 at 01:54 AM PDT
gpuccio, We'd like to see a calculation of some quantity -- under whatever acronym you like -- the presence of which would demonstrate, in a non-circular way, that the structure or sequence in question could not have been produced by evolution or other nonintelligent natural processes. That is what CSI was touted to do, that is what you claim for dFSCI, and that is what KF claims for FSCO/I. dFSCI cannot do what you claim for it, as I explained in my previous comment. That you choose not to defend it is not to your credit. If you don't think it's worth defending, then no one else will either.
keith s
November 8, 2014 at 01:04 AM PDT
keith s: Very briefly: a) "We've been over this many times." Correct. And I will not go back again to "discussing" it with you. Whoever is interested can easily find my many detailed arguments about dFSCI spread on this blog. I must apologize for saying this, but I really don't consider you a serious interlocutor. Just some person who likes to state "I am right and I win" as a mantra. OK, go on. b) "the problem with your dFSCI calculations is that the number they produce is useless" Well, at least you admit that I have calculated dFSCI. You just don't agree that my calculations are valid. That's fine. The next time that Learned Hand or anyone else on your side comes back with the false statement that I have never calculated dFSCI, and that I am afraid to do it, can I just mention you, and say that "even keith s admits that I have calculated dFSCI, even if he does not consider my calculations valid"? Maybe that would be of some help. You must certainly be held in very high esteem on your side, such a fine bomber. :)
gpuccio
November 8, 2014 at 12:49 AM PDT
gpuccio, We've been over this many times, but the problem with your dFSCI calculations is that the number they produce is useless. The dFSCI number reflects the probability that a given sequence was produced purely randomly, without selection. No evolutionary biologist thinks the flagellum (or any other complex structure) arose through a purely random process; everyone thinks selection was involved. By neglecting selection, your dFSCI number is answering a question that no one is asking. It's useless. There is a second aspect of dFSCI that is a boolean (true/false) variable, but it depends on knowing beforehand whether or not the structure in question could have evolved. You can't use dFSCI to show that something couldn't have evolved, because you already need to know that it couldn't have evolved before you attribute dFSCI to it. It's hopelessly circular. What a mess. The numerical part of dFSCI is useless because it neglects selection, and the boolean part is also useless because the argument that employs it is circular. dFSCI is a fiasco.
keith s
November 8, 2014 at 12:30 AM PDT
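As a side note on the arithmetic in dispute: the numerical part of dFSCI, as described in the comment above, is just the negative log2 of the probability that a single uniform random draw hits a functional sequence. A minimal sketch, using purely hypothetical toy numbers (a 150-amino-acid sequence space and an assumed count of 10^20 functional sequences, neither taken from any real measurement):

```python
import math

def dfsci_bits(functional: int, total: int) -> float:
    """dFSCI-style number: -log2 of the chance that a single
    uniform random draw hits a functional sequence."""
    return math.log2(total) - math.log2(functional)

# Hypothetical toy numbers, not real biochemistry:
total = 20 ** 150        # all 150-aa sequences over 20 amino acids
functional = 10 ** 20    # assumed count of functional sequences
bits = dfsci_bits(functional, total)  # roughly 582 bits
```

Note that this computes exactly the quantity keith s objects to: the probability of a single purely random draw, with no selection step anywhere in the model.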
Learned Hand: "Remember that there are a couple of assertions on the table: gpuccio’s claim that CSI is a beautiful, strong, consistent concept, and now yours that it’s a routine calculation. Neither claim is supported by the poor showings to date. If it’s so easy to calculate, then please calculate it rather than declaring that you’ve done so and listing a few parts of the calculation." Have I missed your comments to my posts #360 and #400 here? If they are "poor showings", please explain why. Or simply admit that I have calculated what you ask me to calculate. You can agree or not with the calculation, but not go on saying that I have not done it.gpuccio
November 7, 2014 at 11:52 PM PDT
Sorry for not replying earlier, since we last spoke I've flown to Hong Kong for work. I wish I could share the view from my room's window. I'll be here for a week or so; obviously I won't be very responsive given the time difference. KF, keiths is right. (I can't not read your user name as a plural.) I'm looking for the explicit calculation, particularly so that I can compare it to Dembski's. Remember that there are a couple of assertions on the table: gpuccio's claim that CSI is a beautiful, strong, consistent concept, and now yours that it's a routine calculation. Neither claim is supported by the poor showings to date. If it's so easy to calculate, then please calculate it rather than declaring that you've done so and listing a few parts of the calculation. Once again, it's striking from a critic's perspective how little perspective IDists have on their own standards. They claim that CSI is a phenomenally powerful concept that can revolutionize science, that it's a robust and beautiful tool that can reliably detect design with no false positives, that it doesn't need to be tested, that it has been tested, that it can't be tested, that it's used every day... but they have the hardest time actually saying, "This is how I calculated CSI in this specific case, step-by-step." Skipping that part makes it obvious, at least to those outside the ideology, how poor a tool CSI actually is. And then, of course, there's the total failure of IDists to ever (literally ever, as far as I can tell) use CSI to detect design in the real world under controlled circumstances. It seems only to work in cases where design is either known in advance (Shakespeare) or assumed on faith (flagella). Why can't it be used to distinguish white noise from radio communications? Or determine whether the latest variant of the Ebola virus is a natural strain? 
Or to break codes, or detect steganography, or distinguish between data and noise in hard drive recovery, or answer any of Elsberry and Shallit's "eight challenges"? I think the answer is clear: testing CSI puts the concept at risk, since it might fail. That risk is unnecessary, since IDists don't require that their tools be tested; they're promoted on faith and logic (albeit logic under the heavy guidance of motivated reasoning), rather than empirical success. I'm sure Dembski and other creationists would love to shout from the rooftops that their tools have proven to be successful and useful, especially since the secular world would take up and use a productive tool, thus making it impossible for skeptics to ignore. But they are notably shy about dipping their toes in that water. I think it's because they know the risk is untenable--they understand as well as their critics do that the tools just don't work. That's obviously not an opinion shared by everyone here, but although I've asked several times, why don't IDists test these tools?, I haven't heard any strong answers. (In fact, the only answer I've heard is from WJM, who argued simply that skeptics wouldn't believe the results. I'm not sure he's wrong about that, but I am sure that's a terrible reason not to prove that your groundbreaking idea actually works. It has shades of Dembski's bizarre double-retraction of the explanatory filter, in which he declared the tool was valid seemingly because people were mocking him rather than because he'd overcome its crippling logical defect.) Finally, I note that you are still claiming that "specified complexity was first observed and stated on the record by Orgel in 1973." But that's starting to strain your credibility; as I've pointed out, the Orgel cite plainly uses a different definition of "complexity" than Dembski does. Orgel's "specified complexity" can hardly be the same as Dembski's if they're talking about different concepts of "complex." 
I've asked you why you think they're the same; if you've answered, I don't see it in this thread. Is there some reason to think they're the same?
Learned Hand
November 7, 2014 at 10:01 PM PDT
kairosfocus, A reminder:
kairosfocus:
LH: I gave an outline of an absolutely routine way to measure functionally specific info, with a guide as to how it would address the flagellum.
KF, Learned Hand asked for an “explicit calculation”, not an “outline”. You say the calculation is “absolutely routine”. Then perform it! Surely you are competent enough to perform an “absolutely routine” calculation, aren’t you? Show us your explicit and “absolutely routine” calculation of how much “functionally specific info” is contained in the bacterial flagellum. P.S. Dembski’s CSI argument is circular, as I explained above. If you disagree, you need to show where my argument fails, rather than tossing out distractive red herring talking points designed to polarise and confuse the atmosphere. Please do better.
keith s
November 7, 2014 at 05:23 PM PDT
DNA_Jock: Here is another brief comment I made recently (to Alan Fox): "Of the Szostak paper, you already know what I think. It is essentially a false paper, at least if interpreted as an estimate of the occurrence of functional proteins in a random library. As I have said many times, the ATP binding protein which they describe, and which however is not functional at all in a biological context, least of all naturally selectable in any true scenario, was not in the original random library, but is the result of intelligent selection. What was in the original random library were a few sequences with some very weak affinity for ATP, certainly trivial in any real biochemical context. Period. Still, the third paper you reference quotes the Szostak paper as a demonstration that "the frequency of occurrence of functional proteins in a randomized library has been estimated to be about 1 in 10^11". So, this false conclusion is propaganda for the neo-darwinist field." About the powers of Intelligent Selection, I have often offered the very interesting example of antibody affinity maturation after the first immune response to new epitopes. Here we have a beautiful example of an algorithm embedded in the immune system which can take an existing function selected from a relatively random repertoire (the basic antibody repertoire) and optimize it rather quickly (a few months) by targeted random mutations and Intelligent Selection which uses the environmental information in the epitope. A further proof of what Intelligent Selection can do, with its added information and algorithmic power.
gpuccio
November 7, 2014 at 03:37 PM PDT
DNA_Jock You have not answered all the questions I have asked you. You don't have to answer them, but remember there are more lurkers than commenters in this thread. Don't you care about the impression they will get from following this discussion? :)
Dionisio
November 7, 2014 at 03:08 PM PDT
DNA_Jock: "The wall was always there. The bullet-holes arose before humans existed (by three or more days). :) Biochemists arrive. They observe bullet holes, and paint circles around the bullet holes that they observe. They may debate how big a circle they should draw. That this is, in fact, the sequence of events is highlighted by your observation of the biochemist that discovers a new enzyme activity: “Look, a new bullet hole! Quick, pass me the paint!”" I don't agree, but you can keep your point of view. For me, it is obvious that when we observe the function of an enzyme we are not painting anything: we just measure an activity in the lab, we see a reaction take place which would never take place if the enzyme were not there. What are we painting? Absolutely nothing. As I said, I respect your views, but don't agree. Regarding Szostak, some time ago I spent a lot of time to analyze in detail both the original paper and its follow-ups, but unfortunately I have not kept the link. I paste here a recent summary of my main argument:
It is not true that according to the data there is an "uncertainty" in the quantification of folding/functional sequences in random libraries. The simple truth is that Axe's data (and those of some others who used a similar reverse methodology) are true, while the forward data are wrong. Not because the data themselves are wrong, but because they are not what we are told they are. The most classical paper about this forward approach is the famous Szostak paper: Functional proteins from a random-sequence library http://www.nature.com/nature/j.....0715a0.pdf I have criticized that paper in detail here some time ago, so I will not repeat myself. The general idea is that the final protein, the one they studied and which has some folding and a strong binding to ATP, is not in the original random library of 6 * 10^12 random sequences of 80 AAs, but is derived through rounds of random mutation and intelligent selection for ATP binding from the original library, where only a few sequences with very weak ATP binding exist. Indeed, the title is smart enough: "Functional proteins from a random-sequence library" (emphasis added), and not "Functional proteins in a random-sequence library". The final conclusion is ambiguous enough to serve the darwinian propaganda (which, as expected, has repeatedly exploited the paper for its purposes): "In conclusion, we suggest that functional proteins are sufficiently common in protein sequence space (roughly 1 in 10^11) that they may be discovered by entirely stochastic means, such as presumably operated when proteins were first used by living organisms. However, this frequency is still low enough to emphasize the magnitude of the problem faced by those attempting de novo protein design." Emphasis mine. 
The statement in emphasis is definitely wrong: the authors “discovered” the (non functional) protein in their library by selecting weak affinity for ATP (which is not a function at all) and deriving from that a protein with strong affinity (which is a useless function, in no way selectable) by RV + Intelligent selection (for ATP binding). That’s why the bottom up studies like Szostak’s tell us nothing about the real frequency of truly functional, and especially naturally selectable proteins in a random library. That’s why they are no alternative to Axe’s data, and that’s why Hunt’s “argument” is simply wrong.
(The reference to Hunt is because he takes the Szostak number as an upper bound on the frequency of functional information in random sequences.) As I have explained in my criticism of Lizzie's arguments about NS, the important point is that Intelligent Selection is not Natural Selection. For convenience, I paste here my criticism of Lizzie's argument:
The discussion about Elizabeth's post, if I remember well, was "parallel": I posted here and my interlocutors posted at TSZ. I have nothing against posting at TSZ (I have done that, or at least in similar places, more than once some time ago). However, I decided some time ago to limit my activity to UD: it is already too exacting this way. However, my criticism of Lizzie's argument is very simple: it is an example of intelligent selection applied to random variation. It is of the same type as the Weasel and as Szostak's ATP binding protein. You see, I am already well convinced that RV + IS can generate dFSCI. It is the bottom-up strategy to engineer things. So, I have no problem with Lizzie's example, except for its title: "Creating CSI with NS". That is simply wrong. "Creating CSI with design by IS" would be perfectly fine. Your field seems to willfully ignore the difference between NS and IS. It is a huge difference. IS requires a conscious intelligent agent who recognizes some function as desirable, sets the context to develop it, can measure it at any desired level, and can intervene in the system to expand any result which shows any degree of the desired function. IOWs, both the definition of the function, the way to measure it, and the interventions to facilitate its emergence are carefully engineered. It's design all the way. On the contrary, NS assumes that some new complex function arises in a system which is not aware of its meaning and possibilities, only because some intermediary steps represent a step to it, and through the selection of the intermediary steps because of one property alone: higher reproductive success. So, I ask a simple question: what reproductive success is present in Lizzie's example? None at all. It's the designer who selects what he wants to obtain. The property selected has no capability at all to be selected "on its own merits". Therefore, Lizzie's example has nothing to do with NS. I am certain of Lizzie's good faith. 
I have great esteem for her. I am equally certain that she is confused about these themes.
Well, that's all for the moment.
gpuccio
November 7, 2014 at 03:05 PM PDT
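The Weasel-type algorithm mentioned above makes the role of Intelligent Selection concrete: the selecting agent scores candidates against a pre-chosen target. A minimal sketch (the target phrase, population size, and mutation rate are the customary illustrative choices, not anyone's actual experimental parameters):

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def matches(candidate: str) -> int:
    # The "intelligent" part: fitness is measured against a target
    # that the selecting agent was given in advance.
    return sum(a == b for a, b in zip(candidate, TARGET))

def weasel(pop_size: int = 100, mut_rate: float = 0.05, seed: int = 1) -> int:
    """Run mutation + selection-toward-target; return generations needed."""
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        children = [
            "".join(rng.choice(ALPHABET) if rng.random() < mut_rate else c
                    for c in parent)
            for _ in range(pop_size)
        ]
        # Keep the parent in the pool so fitness never decreases.
        parent = max(children + [parent], key=matches)
        generations += 1
    return generations
```

Because the fitness function already contains the target, the selection here is selection by design (IS, in gpuccio's terms): nothing analogous to reproductive success is being measured, which is exactly the distinction drawn in the comment above.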