Uncommon Descent Serving The Intelligent Design Community

Ascertaining Non-Function


One of the main arguments to support evolution appeals to shared non-functional structures between organisms. Since design entails design for function, shared non-functional structures would suggest common ancestry in the absence of common design. But how can we tell whether something is truly non-functional? Here are some insights from a colleague that address this point:

As a programmer, sometimes I spend a lot of time designing error-detection and/or error-correction algorithms (especially for dealing with user input). Some of these functions may never, ever be used in a real-life situation. There are also various subroutines and functions that provide either exotic or minor capabilities that, likewise, may be used very seldom if at all. But they are there for a reason. Good programming practice requires considerable extra design and implementation of features that may only rarely, if ever, be used.
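A minimal sketch of the kind of rarely-exercised defensive code described above. The function and its validity limits are purely illustrative; the point is that most of the lines exist for inputs that almost never occur, and snipping them would leave a program that still "works" on typical input:

```python
def parse_age(text):
    """Parse a user-supplied age string, defensively.

    Nearly all real inputs take the last two lines; the checks
    above them exist for the rare bad input.
    """
    if not isinstance(text, str):
        # Defensive check: callers should pass strings, but might not.
        raise TypeError("age must be supplied as a string")
    text = text.strip()
    if not text.isdigit():
        # Rejects "", "abc", "-5", "3.7", etc.
        raise ValueError(f"not a number: {text!r}")
    age = int(text)
    if not 0 <= age <= 150:
        # Sanity bound; the limit 150 is an illustrative assumption.
        raise ValueError(f"implausible age: {age}")
    return age

# The happy path, which nearly all real inputs take:
print(parse_age("42"))  # 42
```

Remove everything but the final `int(text)` and the program still behaves identically for well-formed input; the difference only shows up when the rare bad input finally arrives.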

If someone were to cut out and eliminate these sections of code, repairing what’s left so that the program still functions, the program may work perfectly well for just about all situations. But there are some situations that, without the snipped code, would create havoc if the program tried to call on a function that was no longer there or that was replaced by some different function that tried to take its place. (Ask yourself what percent of the functionality of your spreadsheet or word processor program you use, and then ask if you would even notice if some of the lesser-known functionality were removed.)

I think biological life is like that. It seems to me that if some DNA code can be successfully removed with no apparent effects, one possibility is that the removed portion is rarely used, or the impact of it not being there has effects that are masked or otherwise hidden.

Perhaps redundancy is what was removed, meaning the organism will now not be quite as robust in all situations as before. I can give a kidney to someone else and suffer no ill effect whatsoever… until my remaining kidney fails and cannot be helped by the redundant one that I gave up (which situation may never, ever really occur due to my general good health).

P.S. Being able to snip something with no apparent ill effect may in fact provide support for ID by showing that the system was so well engineered that it could automatically adjust to a certain degree, and in most cases completely (apparently). It would be interesting to see some ID research into some of the evo cases that are being used to support the various flavors of junk DNA, to see what REALLY happens long term with the new variety now missing something snipped.

Comments
You are wrong, as long since step by step and repeatedly pointed out, and are insistent on using ad hominems against me.
Pointing out your errors is not a personal attack, asking you to refrain from engaging in gutter politics is not part of the debate about WEASEL, it is a personal request for you to evolve a sense of decency. Stop slinging mud at people. BillB
BillB: You are wrong, as long since step by step and repeatedly pointed out, and are insistent on using ad hominems against me. You have sacrificed any right to civil discourse. Good bye GEM of TKI kairosfocus
1 -> The debate here is about latching mechanisms, not about whether this is a targeted search.
2 -> Dawkins describes his algorithm EXPLICITLY - he does not mention latching.
3 -> All the published results are consistent with a non-latching algorithm, and this has been empirically demonstrated OVER AND OVER AND OVER AGAIN!
4 -> Dawkins has been asked if he ever wrote a version that employed latching and he has (allegedly) said no.
5 -> Your attempt to dismiss this as some kind of frivolous straw-man is uncivil.
6 -> I am not accusing you of being dishonest - deluded, blinkered, blinded by your own arrogance perhaps - but the question of honesty is something only you can truly answer. Be honest with yourself, that's what I'm asking of you!
7 -> Your imagined accusation on my part is yet another attempt to divert from the issues, and a predictable ploy to try and drag Clive into the debate in order to get me censored. It is insulting.
8 -> You consistently and frequently throw out accusations that those arguing against your claims are involved in distraction, straw-men burning, and ad hominem attacks. These are implied accusations of dishonesty against your opponents, who would claim that they are trying to address real points, but you happily sling this mud out on an almost daily basis whilst constantly seeking the faintest whiff of an accusation against yourself so you can have a tantrum about it. This is gutter politics; can you not see how morally bankrupt it is? BillB
BillB: Not so. You do not get to rule arbitrary self-serving datum lines, nor do you get to toss around slanderous implications of dishonesty, in the teeth of longstanding statements on the record on my part that on any fair reading lay out the situation and address the issues truthfully and fairly. First, the fundamental truth about Weasel 86 that we all need to know is as noted, and that the whole exercise is misleading -- as confessed by Dawkins in BW -- is highly material. Next, enough has been cited from Dawkins c 1986 -- and with the printouts to back it up -- that we have good reason to understand why an explicitly latched interpretation is valid on the terms of "cumulative selection" -- i.e. each step builds on the last, so progress to date is latched in [as the print-offs support] -- the slightest increment in proximity to target. Third, we know that when we move to algorithms that do not explicitly latch, with some parameter values and filter types we have implicit latching, with others we have quasi-latching, and with yet others we have far-from-latched behaviour [esp. as the remaining incorrect letters become few -- I recall watching runs that spent thousands of gens flicking back and forth without closing the deal]. IN THE LIGHT OF THE ABOVE, YOUR IMPLICATION OF DISHONESTY ON MY PART -- ESPECIALLY GIVEN WHAT I HAVE PUT ON THE RECORD JUST ONE LINK AWAY FOR MONTHS NOW -- IS UNCIVIL AND SHOULD BE WITHDRAWN. G'day, sir. GEM of TKI kairosfocus
KF: The debate here is about latching mechanisms, not about whether this is a targeted search. Please stop trying to distract from the issue.
it can be — and was — shown, months ago, that with population and mutation rates and filters suitably set up, we can see implicit latching where the parameters and filter interact to give “good runs” that show a steady march to the target without reversions.
PRECISELY! WEASEL does not require any of these imagined extra mechanisms to produce results where reversions almost never appear in the fittest member of each generation. Why then precede that sentence with this:
it is a reasonable interpretation to use a latched letterwise search algorithm.
Why is it reasonable to claim that a mechanism exists where, by your own admission, none is required, and which is never described? PLEASE KF can you focus on this one question and provide an HONEST answer. BillB
Onlookers: I have just a few moments, so I will simply highlight again what Mr Dawkins said circa 1986 (I have already linked my discussion of it), and note that this is being wrenched to set up and knock over a handy strawman or two. On fair comment, at best BillB has skimmed through it looking for points to make in rebuttal that will sound plausible to those who will not take time to work through a fairly technical matter. Now, Dawkins, circa 1986, in BW: _______________ >> I don't know who it was first pointed out that, given enough time, a monkey bashing away at random on a typewriter could produce all the works of Shakespeare. The operative phrase is, of course, given enough time. Let us limit the task facing our monkey somewhat. Suppose that he has to produce, not the complete works of Shakespeare but just the short sentence 'Methinks it is like a weasel', and we shall make it relatively easy by giving him a typewriter with a restricted keyboard, one with just the 26 (capital) letters, and a space bar. How long will he take to write this one little sentence? . . . . It [Weasel 86] . . . begins by choosing a random sequence of 28 letters ... it duplicates it repeatedly, but with a certain chance of random error – 'mutation' – in the copying. The computer examines the mutant nonsense phrases [Now this ducks the issue of required functionality to be fit in an environment and substitutes instead mere proximity to a target -- in short, the whole exercise is invalid from the outset], the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase [so, with reasonable parameters, it will either preserve the present state or advance it a step or so towards the target . . . 
and preserving closest to target in a context where on reasonable mutation rates it is vanishingly small odds that there will be no no-change cases in a population will tend to implicit latching with the right sort of filter present -- observe here that the published runs c 1986 show NO reversions in over 200 cases where such reversions could have been possible, so reversions of letters are either rare or nonexistent; that is, the interpretation that the case is a partitioned letterwise search, latched on hitting target, is a good one -- and the next best explanation is that the interactions summarised will implicitly latch once we have a high likelihood of members of the mutant population that will be unchanged], METHINKS IT IS LIKE A WEASEL . . . . What matters is the difference between the time taken by cumulative selection, and the time which the same computer, working flat out at the same rate, would take to reach the target phrase if it were forced to use the other procedure of single-step selection: about a million million million million million years. This is more than a million million million times as long as the universe has so far existed . . . . Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. [such as, proximity is substituted for actual functionality; also the search has a long-term target, which is opposite to what NS could do as a non-purposeful proposed mechanism] One of these is that, in each generation of selective 'breeding', the mutant 'progeny' phrases were judged according to the criterion of resemblance to a distant ideal target [So he knew that it was targetted search and that this was not a legitimate representation of what RV + NS should be trying to do], the phrase METHINKS IT IS LIKE A WEASEL. Life isn't like that. Evolution has no long-term goal. 
There is no long-distance target, no final perfection to serve as a criterion for selection [Again, weasel words that he probably knew in advance would provide an escape hatch while allowing the power of a "computer simulation of evolution" to impress the unwary], although human vanity cherishes the absurd notion that our species is the final goal of evolution. In real life, the criterion for selection is always short-term, either simple survival or, more generally, reproductive success. [TBW, Ch 3, as cited by Wikipedia, various emphases and colours added.] >> __________________ In short, BillB's objections are specious, and were answered before he made them, just a click away. In particular, on the direct statements and printouts of Weasel 1986, it is a reasonable interpretation to use a latched letterwise search algorithm. And, on the protest that no such algorithm was used, it can be -- and was -- shown, months ago, that with population and mutation rates and filters suitably set up, we can see implicit latching where the parameters and filter interact to give "good runs" that show a steady march to the target without reversions. So, it is a strawman laced with ad hominems to attack those who took legitimate interpretations of the given information c 1986, and it is worse to attack those who have shown that even without explicit latching of letters that hit home, latching of outputs can appear implicitly for "good runs" once parameters for mutation rates, population size and filters are suitably set up. (And, remember, only credible code c 1986 will actually tell us for sure what algorithm was used; but we have enough to see that the o/p published credibly latched, and we can see how that could have been done explicitly or implicitly. And it is Mr Dawkins, on the evidence of his confession of the misleading nature of Weasel, who needs to come clean and give us the code. 
We already have a confession that the whole Weasel exercise is at best misleading, at worst deceptive based on artful half truths and the difference between the image and the headline and the qualifying details.) To now pretend that presenting a paragraph that describes the situation of a latched letterwise search weasel is somehow a slander against Mr Dawkins is even more grotesquely unjustified. Let the record reflect that. GEM of TKI kairosfocus
Jerry:
...you have no clue what he did for the Blind Watchmaker.
Dawkins describes what he did in his book 'The Blind Watchmaker':
... We again use our computer monkey, but with a crucial difference in its program. It again begins by choosing a random sequence of 28 letters, just as before ... it duplicates it repeatedly, but with a certain chance of random error – 'mutation' – in the copying. The computer examines the mutant nonsense phrases, the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL.
He also mentions that he used BASIC initially and then re-wrote it in Pascal, not that it makes much difference, because it is his description of the algorithm that is important, not the language he implemented it in. I wrote my own version in C++ but it can also be done in a spreadsheet. You risk shooting yourself in the foot with your own arguments - if you are right and we can have no idea what Dawkins did (despite him describing what he did) then neither can Marks or Dembski - so on what basis can they claim to know both what he did, and that it is different to what he says he did? If this was a piece of work by an ID proponent being misrepresented by an ID critic in a peer reviewed publication, I'm sure you would not consider it 'moronic' to point that out and ask for it to be corrected. BillB
"So are you going to apologise to Dawkins for misrepresenting his work?" No, because you have no clue what he did for the Blind Watchmaker. You and the other anti ID people should apologize to the pro ID people for wasting so much of our time with your inane, inappropriate comments. Most of the time around here is spent dealing with the moronic objections that the anti ID people conjure up. But to the disinterested observer it is all in good cause, as the lack of any logic, facts or judgment does more good for the pro ID point of view than anything else. When the supposedly received wisdom is shown to be intellectually and argumentatively bankrupt, it helps your cause. So while it is a pain in the rear to deal with the anti ID people, carry on with your nonsense. jerry
Oops, my mistake. Patrick May writes a Weasel program in C which finds the phrase in only 74 generations. http://www.softwarematters.org/more-weasel.html He uses a population of 100 and a mutation rate of 5%. He also has extensive quotes from "The Blind Watchmaker" in his article and Dawkins found a solution after only 43 and 64 generations. Of course, 74 generations times a population of 100 is 7400 tries. Rather slow, actually. But lightning fast compared to changing all 28 letters every time, the standard ID method of doing it. djmullen
jerry:
1 -> Write a WEASEL programme using the algorithm Dawkins describes (i.e. without latching)
2 -> Run some trials using different population sizes and mutation rates
3 -> Observe how it is possible to get the same results that Dawkins published: results that appear to preserve correct letters once selected and that converge on a solution quickly.
Also ns does not reject genes very readily once they have been selected
That's the whole point - selection is the mechanism that preserves correct letters, not latching. The selfish gene 'idea' has nothing to do with latching mechanisms; it is about selection. So are you going to apologise to Dawkins for misrepresenting his work? BillB
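The three steps above can be sketched in a few lines. This is one possible reading of Dawkins' published description, not his actual (unpublished) code; the population size and mutation rate are illustrative assumptions, matching the figures djmullen quotes from Patrick May's C version elsewhere in the thread:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 26 capitals plus the space bar

def score(phrase):
    # Fitness is mere proximity to the target, letter by letter.
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase, rate):
    # Every position is free to mutate -- nothing is latched.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

def weasel(pop_size=100, rate=0.05, rng_seed=1):
    """Run one non-latching Weasel search; return the generation count."""
    random.seed(rng_seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        progeny = [mutate(parent, rate) for _ in range(pop_size)]
        parent = max(progeny, key=score)  # selection, not latching
        generation += 1
    return generation

print(weasel())
```

Printing the fittest member of each generation shows runs in which correct letters are almost never lost, even though every letter is mutable on every copy, which is the "appearance of latching" at issue.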
Jerry @ 59 "It would be a bizarre thing if it was not latching given the published results showing no changes once selected ..." It would be a bizarre and unnecessary thing for Dawkins to add latching when it's not required. Here's why it LOOKS like latching to those who don't understand what's going on. Suppose you have just selected this string as being the closest to the "target": MXXXXXXXXXXXXXXXXXXXXXXXXXX It has one correct letter, the first one. So this string is copied 5 times and one of the copies is mutated:
1 MXXXXXXXXXXXXXXXXXXXXXXXXXX
2 MXXXXXXXXXXXXXXXXXXXXXXXXXX
3 MXXXXXXXXXXXXXXXXXXXXXXXXXX
4 MXXXXXXXXXXXXXXXXXXXXXXXXXX
5 ZXXXXXXXXXXXXXXXXXXXXXXXXXX
Now which string is NOT going to be selected? String 5, because it's not as close to the "target" as strings 1 through 4. So one of the strings 1-4 will be selected to be the new string: 1 MXXXXXXXXXXXXXXXXXXXXXXXXXX Golly gee, it looks EXACTLY like that correct letter is latched, doesn't it? People who don't understand Dawkins' argument will say that it's latched: "... I assume it was re-written and the latching mechanism probably taken out since the program he used in the Blind Watchmaker found a solution incredibly quickly and should have been subject to ridicule for its simplicity and irrelevancy." Dawkins' solution does not converge incredibly quickly. He only shows samples of the output. His algorithm takes, on average, the number of tries it takes to find one character (about 13.5 tries if there are 27 different characters) times the number of characters in the sentence (28), or about 400 tries. This is incredibly fast compared to guessing all 28 characters on every try, which is the standard ID method of calculating these things, but it's about what you'd expect for cumulative selection. Another objection that people who don't understand evolution or Dawkins' algorithm frequently make: "The 'target' string is in the program!" Only because Dawkins wrote a quick and dirty program. 
He could just as easily have put the 'target' string in a data file. Or he could have written a second program that reads the data file, and his Weasel program would just ask the second program questions like "Is letter 1 an 'M'?" and receive a Yes or No answer. EITHER WAY, THE ALGORITHM WILL FIND ANY 'TARGET'. In real life, the "target" string is the environment and the algorithm extracts information from the environment. It does this by changing a character in its DNA and seeing how well the resulting new organism works. If it works better than the original, the new letter is kept; if not, it's discarded. The information flows from the environment to the DNA through variation (trying a new letter in the DNA) and selection (seeing if the new organism works better in the current environment). Standard Darwinian theory. djmullen
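The arithmetic behind djmullen's "about 400 tries" figure can be checked directly. This reproduces his per-character back-of-envelope model from the comment above, not Dawkins' actual run counts:

```python
# djmullen's rough estimate: finding characters one at a time, at an
# average of about 13.5 tries each (half of a 27-symbol alphabet),
# over the 28 characters of the target phrase.
alphabet_size = 27   # 26 capital letters plus the space bar
phrase_length = 28   # len("METHINKS IT IS LIKE A WEASEL")

avg_tries_per_char = alphabet_size / 2        # 13.5
total_tries = avg_tries_per_char * phrase_length
print(total_tries)  # 378.0, i.e. "about 400 tries"
```

By contrast, guessing all 28 characters at once has odds of 1 in 27^28 per try, which is the contrast Dawkins draws between cumulative and single-step selection.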
"It’s possible that Dawkins incorporated latching as well as selection, but there is no evidence of this and it would be a bizarre thing to do. Evolutionary models do not have mechanisms that protect certain genes from mutating. Fitness is always evaluated at the organism level." No evidence. Now that is something that is truly bizarre. It would be a bizarre thing if it was not latching, given the published results showing no changes once selected and Dawkins' published views on evolution. Also ns does not reject genes very readily once they have been selected, so the normal thing to assume is that once a gene is selected it will be very, very, very difficult to lose that selection. You seem to want to use a distorted view of what Dawkins said, but he is the one who pushes the selfish gene idea, so according to him it is even more unlikely it will mutate out at the same rate as one mutates in. Extremely, extremely unlikely. Oh, and it is not very likely either. So both these things point to a latching mechanism, as well as the extremely fast rate at which they converged on the solution. So I think you should apologize to Dr. Dembski and exhibit some modesty when discussing some things you seem to readily misconstrue. My guess is the non-latching was introduced later when the embarrassingly fast convergence was pointed out to him. jerry
jerry:
You have no idea if he got it right or not. That is the point. Senseless discussion over nothing. I believe he got it right. Prove me wrong. All the evidence and logic points to latching even if it is not mentioned. So I suggest you get honest on this and drop the objections to calling it latching and say you don’t know what it is.
It is not correct to say that we have no idea whether he included latching, nor is it correct to say that all of the evidence points to latching. Both selection and latching preserve beneficial mutations. Latching does so with 100% accuracy, while selection approaches 100% preservation for large populations and low mutation rates. Dawkins was very explicit that his algorithm used selection. Dembski's algorithm does not include multiple progeny or selection, so in that respect he clearly contradicts Dawkins' description. It's possible that Dawkins incorporated latching as well as selection, but there is no evidence of this and it would be a bizarre thing to do. Evolutionary models do not have mechanisms that protect certain genes from mutating. Fitness is always evaluated at the organism level. R0b
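R0b's point that selection approaches 100% preservation for large populations and low mutation rates can be illustrated with a simplified calculation. The parameter values are assumptions for illustration only, and this bounds just one failure mode: the chance that a given correct letter is mutated away in every single member of the progeny, so that no copy preserving it is even available to select. (A full analysis of the fittest member is more involved, since a compensating gain elsewhere could outscore a copy that kept the letter.)

```python
# Illustrative assumptions, not values from Dawkins' book:
mu = 0.05   # per-letter mutation rate
k = 27      # alphabet size: 26 capital letters plus the space bar
n = 100     # population (progeny) size

# A mutation event replaces a letter with a uniform random symbol,
# so it changes the letter only when it draws one of the k-1 others.
p_letter_changes = mu * (k - 1) / k

# Probability that ALL n progeny lose a given correct letter at once:
p_lost_everywhere = p_letter_changes ** n
print(p_lost_everywhere)  # astronomically small, on the order of 1e-132
```

With odds like these, the fittest member of each generation will essentially never show a reversion over the few dozen generations of a published run, which is why the output looks latched without any latching mechanism.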
"If he’s going to keep bringing it up, is it too much to ask that he get it right?" You have no idea if he got it right or not. That is the point. Senseless discussion over nothing. I believe he got it right. Prove me wrong. All the evidence and logic points to latching even if it is not mentioned. So I suggest you get honest on this and drop the objections to calling it latching and say you don't know what it is. jerry
jerry:
Not true, we discussed this ad nauseam since the first of the year and showed a university that first used a latching mechanism. The only logical thing to explain the results in the Blind Watchmaker is a latching mechanism. It actually makes better evolutionary thinking too, since once something is selected it is unlikely to be unselected and will not disappear so easily.
The latching mechanism was first used by Royal Truman, who got Dawkins' algorithm completely wrong. In TBW, Dawkins describes a standard evolutionary process with multiple progeny and selection, and no mention of latching. Truman's algorithm did not have multiple progeny or selection, and latching was therefore required in order for the algorithm to succeed. Monash University and Dr. Dembski followed Truman's lead. All of them got the algorithm indisputably wrong. And you're correct that this is a tempest in a teapot, but don't forget that it's Dembski who keeps bringing up WEASEL in his work. If he's going to keep bringing it up, is it too much to ask that he get it right? R0b
"It matters when people are using it to make arguments about science and consistently misrepresent it in their publications." This is what I call a pathetic argument. The intent of the discussions on this is to discredit Bill Dembski and nothing else. That this is over a misrepresentation of science is a joke. The original had nothing to do with science but propaganda for a baseless idea. The posting of the program in the Blind Watchmaker had it converging incredibly fast and no evidence of non latching. "This issue has nothing to do with ID or evolution – it is about representing facts accurately and honestly." Another faux and pathetic response. Get real. It is about trying to discredit Bill Dembski and nothing else. jerry
Jerry:
The only logical thing to explain the results in the Blind Watchmaker is a latching mechanism.
Not true, we have been over this ad nauseam, and it is all backed up with empirical evidence. A latching mechanism is not required to produce the appearance of latching. The results in BW can be, and have been, reproduced without resort to latching mechanisms.
What I find interesting is that people actually care about this meaningless example and try to defend one version versus the other.
It matters when people are using it to make arguments about science and consistently misrepresent it in their publications.
God, the anti ID arguments are pathetic.
This issue has nothing to do with ID or evolution - it is about representing facts accurately and honestly. BillB
"None of them use a latching mechanism but they can still produce the appearance of latching behaviour." Not true, we discussed this ad nauseam since the first of the year and showed a university that first used a latching mechanism. The only logical thing to explain the results in the Blind Watchmaker is a latching mechanism. It actually makes better evolutionary thinking too, since once something is selected it is unlikely to be unselected and will not disappear so easily. Dawkins said his original program was written in Basic and then later used a more sophisticated language, so I assume it was re-written and the latching mechanism probably taken out, since the program he used in the Blind Watchmaker found a solution incredibly quickly and should have been subject to ridicule for its simplicity and irrelevancy. But either way it was a meaningless example. What I find interesting is that people actually care about this meaningless example and try to defend one version versus the other. Is that all they have to defend, and even there the interpretation is suspect. God, the anti ID arguments are pathetic. jerry
Could you kindly provide us with a credible copy of the actual program or algorithm for the Weasel runs as published circa 1986?
A description of the algorithm has already been supplied by Dawkins in his book:
The computer examines the mutant nonsense phrases, the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL.
Lots of people have used this description of the algorithm to create their own WEASEL programmes. None of them use a latching mechanism but they can still produce the appearance of latching behaviour. BillB
KF, this is hilarious. 1-> Dawkins does not mention latching as a part of the mechanism. 2-> Latching mechanisms are not required to produce the observed behaviour. Claiming that it is therefore natural to interpret the algorithm as requiring explicit latching is bizarre to say the least.
with suitable filters and parameter tuning, Weasel programs can implicitly latch.
Not true - no extra mechanisms are required to produce the appearance of latching beyond those explicitly described by Dawkins. There are plenty of examples, including source code, of WEASEL algorithms on the web; they will all produce outputs that are consistent with Dawkins' own programme, all without explicit latching mechanisms. Given these FACTS, the onus is on you to supply us with a copy of Dawkins' code in order to demonstrate this undocumented and unnecessary extra feature that you imagine must exist.
D-E-M-O-N-I-S-I-N-G B-I-G-O-T-R-Y
You really are scraping the barrel now - I'll do my best to turn the other cheek to your insults and slurs - is this an example of your moral superiority at work? If in doubt, point and shout. I hope you are enjoying the privilege of being able to hurl abuse at people without the risk of being moderated by Clive. By the way, it is not me that is claiming that WEASEL does not specify a latching mechanism; Dawkins does this when he describes his algorithm and does not mention a latching mechanism. As for it not requiring a mechanism, if you bothered to look beyond your own website at other people's pages about WEASEL, or even tried writing the code yourself, you would be able to see clearly that latching is not required to explain any of the published results - they have all been recreated using non-latching WEASEL algorithms. BillB
PS: Could you kindly provide us with a credible copy of the actual program or algorithm for the Weasel runs as published circa 1986? [If you cannot, given your declaration that >> Dawkins WEASEL algorithm does not specify nor require a latching mechanism and that apparent latching behaviour is simply an expected result when only observing the fittest member of each generation >> then my remarks above and in the again linked are doubly underscored.] kairosfocus
BillB: Again, I have pointed out that (a) the most natural interpretation of Mr Dawkins' actual words -- cf the above excerpt from BW if you do not care to follow the link -- is partitioned search and explicit latching. MORE TO THE POINT, THEY SHOW THAT HE UNDERTOOK PROXIMITY-BASED TARGETTED SEARCH WITH REWARD OF NON-FUNCTIONALITY. (That is, Weasel is fundamentally flawed and demonstrably rhetorically seriously misleading.) (b) other readings are possible, and it is demonstrated -- not just speculation -- that with suitable filters and parameter tuning, Weasel programs can implicitly latch. (c) on the balance of the evidence of statements, printed runs and whatnot, the published runs of 1986 seem to have implicitly latched. (d) Quasi-latching with rare reversions is also possible. (e) far-from-latching behaviour is also possible. All of this I have stated, and as necessary, shown. So, why are you re-stating what I have said, with emphases and wording that make it seem that I am in the wrong to say such? [You can write type d or e versions of Weasel to your heart's content. That will not change the natural reading of Mr Dawkins' words, circa 1986, and it will not change the fact that the runs he published at that time show behaviour that is best explained as latched. The issue is how. Explicit latching is the easiest way, but implicit quasi-latching is also possible. And, far-from-latched versions of Weasel are irrelevant to the status as at 1986, INCLUDING the 1987 BBC Horizon videotaped runs.] Please, recheck yourself on cognitive dissonance again. Recall, it is the same Dawkins who said that those who disagree with his evolutionary materialism -- especially if they happen to be "Creationists" -- are ignorant, stupid, insane or wicked. Do I need to spell that out: D-E-M-O-N-I-S-I-N-G B-I-G-O-T-R-Y GEM of TKI kairosfocus
KF, the issue that I am addressing is purely and simply that Dawkins WEASEL algorithm does not specify nor require a latching mechanism and that apparent latching behaviour is simply an expected result when only observing the fittest member of each generation. Latching, pseudo-latching, quasi-latching are all inventions of yours, they are distractions, red herrings, straw-men. Your inability to understand this simple point and deal with the issue 'on its merits' as you so like to claim, is breathtaking. BillB
BillB: Kindly examine here on cognitive dissonance and how it can contribute to deadlocked conflict through the pattern I have highlighted. If you want to see what "FSCI" is about, you can easily enough look at the relevant glossary item and a the relevant WAC discussion. GEM of TKI kairosfocus
BillB: Since I have been severely and unjustly attacked on this otherwise tangential topic [through red herrings, strawmen and ad hominems], I take time to answer. I expect you to respond in future on the merits, not the Anti-Evo party-line rhetorical talking points. What has been demonstrated -- and as I have stated ever since December and again from March as issues came up (cf the previously linked, which you have obviously failed to read with care to understand rather than intent to object) -- is that: 1 --> Weasel is disqualified by Dawkins himself, as it is a targetted search that is "misleading." Indeed, he uses the direct statement:
>> The computer examines the mutant nonsense phrases [selection is independent of function], the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles [i.e. search is by proximity not function, and single step increments in this are rewarded] the target phrase [targeted search], METHINKS IT IS LIKE A WEASEL >> [It therefore should NEVER have been used in a book entitled BLIND Watchmaker, because its rhetorical impact -- as the past 23 years have shown -- overwhelms the little disqualifying "weasel words" put in as an escape hatch. (And, I suspect, knowing the attitude of "brights", that even the phrase chosen, with the word "weasel" in it, was artfully chosen to subtly say just that to the in-group.)]
2 --> Weasel, as described c. 1986, may therefore plainly legitimately be interpreted as letter-by-letter partitioned, explicitly latched search. [Cf my highlighted excerpt from BW, and WmAD's remarks in his own thread and in the just published paper.] 3 --> Weasel can also, with particular tuning of parameters and filters, implicitly latch and quasi-latch. Remember, I have provided an actual implicitly latched run, thanks to Atom. 4 --> With other filters and parameters, it can exhibit far-from-latched behaviour. 5 --> On the whole, Weasel functions to create the false impression that functionally specific, complex information can be had on the cheap through gradual cumulative selection. But the use of proximity-based targeted search shows that Weasel is riddled with active, purposeful information that undermines any legitimate capability to show what is triumphantly announced. ________________ And, BillB, I am increasingly finding that you come across as simply scooping words out of context to make rhetorical rather than substantial objections. I suggest you take time to read my remarks on selective hyperskepticism here. And, you may find reflecting on cognitive dissonance also helpful. GEM of TKI kairosfocus
I don't deny that cells are observed to be complex, that elements within them need to be specific in order to regulate or control other processes, or that any of these things contain information with respect to their effect on others. If this is FSCI, FCSI, FASCI or whatever you have decided to call it today, then fine, I agree; I just don't know how to draw a line between observing these features in a cell and observing them in a chemical soup or any other complex system. I'm not 'wrenching your words'; I was trying to make a point about your use of language and apparent confusion over what your eyes behold. Evidently you didn't understand. BillB
BillB: Kindly, stop wrenching my words. You are beginning to come across as simply reading to snatch a few words out of context -- do I dare to say "quote-mining" -- to make a rhetorical rather than substantial, and dismissive, objection. FYI, above, I have been rather explicit that right there in the discussion of protein folding, Wiki has had to stipulate that AA sequences in observed proteins exhibit functionally specific, complex information (FSCI for short); which of course in turn traces to codon sequences in DNA (which are also of algorithmic character as they step by step form the AA sequences of proteins). That is extremely relevant to the issues we have focussed on here at UD. FSCI is real, is empirically observed, and is plainly important in biological contexts. For, the proteins are the workhorse molecules of cell-based life. Nor is this exactly news. Back in 1984 in ch 8 TMLO, Thaxton et al summarised Orgel, Yockey and Wickens thusly: ________________ >> [Orgel:] Living organisms are distinguished by their specified complexity. [This gives the functional context and uses the term specified complexity -- the root of the term in modern discussion.] Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.6 [Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189.] . . . . Yockey7 and Wickens5 develop the same distinction, that "order" is a statistical concept referring to regularity such as might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, "organization" refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity.
In short, the redundant order of crystals cannot give rise to specified complexity of the kind or magnitude found in biological organization; attempts to relate the two have little future. >> ______________ In fact, the summary term FSCI commonly encountered at UD derives DIRECTLY from these summary discussions in the 1984 work that is the technical work at the foundation of modern design theory. And, we can see that it was well understood among leading OOL researchers by the turn of the 1980's, that biofunction was resident in informational macromolecules that were thus functionally specific, co-adapted to work together at operating points based on co-ordinated spatio-temporal organisation, and were quite complex. FSCI is real, and it is being described by Wiki in the particular case of functionally oriented protein folding. (And remember, from prions and chaperoning, that the folding is NON-UNIQUE, so there has to be a nudging in the right direction. In fact the relative stability of mis-folded prions shows us that proteins in functional form are METASTABLE; i.e. proteins are energetically unfavourable not only on chaining, but also on folding.) FSCI is real. So, kindly, face the truth. GEM of TKI kairosfocus
KF: It has been demonstrated empirically time and again that latching is NOT REQUIRED in order for WEASEL to produce the results that you yourself display on your colourful pages. Why do you keep insisting that a mechanism that is unnecessary and never described actually exists? Ignoring OBVIOUS AND COPIOUS evidence on this issue seems to be a particular talent of yours, combined with your usual rude accusations and appeals to moderators. Please don't reply with more of your veiled insults by pretending that this is all obfuscation and red herrings. IT IS NOT; it is simply a matter of FACT. I've even written my own weasel program in the past using Dawkins' description and observed how the results look as if latching is at work when no such mechanism is present in the code. Many others have done the same and found the same results. BillB
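[Editorial note: the experiment BillB describes is easy to reproduce. Below is a minimal Python sketch of a WEASEL program of the kind described; Dawkins published no source code, so the population size (100) and per-letter mutation rate (4%) here are assumptions, not his figures. Nothing in the code latches: every letter of every offspring may mutate, yet because a best-of-100 filter rarely accepts a reversion, printing the best string of each generation gives runs in which correct letters appear to "lock in".]

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def score(s):
    """Number of positions matching the target phrase."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate, rng):
    """Copy s, mutating each letter independently at the given rate.
    Note: no position is ever 'latched' -- correct letters can revert."""
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                   for c in s)

def weasel(pop_size=100, rate=0.04, seed=0):
    """Cumulative selection; returns generations taken to reach the target."""
    rng = random.Random(seed)
    best = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while best != TARGET:
        offspring = [mutate(best, rate, rng) for _ in range(pop_size)]
        best = max(offspring, key=score)  # keep only the fittest copy
        generations += 1
    return generations
```

Logging score(best) each generation shows occasional small dips: reversions do occur, they are just rarely selected, which is the substance of the "apparent latching without a latching mechanism" observation.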
KF, are you arguing that, for these amino acid sequences to have an evolutionary origin, they must therefore NOT contain information that controls the process? Simply highlighting the words 'information' and 'specifies' in pieces of text does not help your argument one bit. It is like highlighting the word 'pulls' in the phrase "Gravity pulls the apple downward" in order to claim that gravity is acting with a purpose, and that scientists are trying to hide this fact. Any chemical process can be examined and parts of it described as containing information that specifies the outcome of parts of the process. It does not imply that there was an agency behind it. BillB
Footnote: Dr Dembski has just announced publication of that long forthcoming IEEE article on cost of search and active information. In light of the just above on specificity of the information in AA sequences, it will repay a look. Cf e.g. Fig 1 p. 1054. And IEEE is of course a very serious institution. (And yes, Dembski still -- obviously after significant peer discussion -- uses the explicit partitioning interpretation of Weasel c. 1986 [cf p 1055], as is warranted by how CRD discussed it in his BW. My remarks are still to be found here.) --> Clive, the linked thread is a capital example of how a serious post was IMMEDIATELY swarmed under by distractive red herrings led out to strawman issues soaked in ad hominems etc. kairosfocus
PPS: pardon, Cytosine C kairosfocus
PS: This, from Wiki on folding of proteins bears emphasis [they cannot hide EVERYTHING . . . ], given the dismissive tendencies of our resident Darwinists once confronted with the fact of FSCI: _________ >> The essential fact of folding, however, remains that the amino acid sequence of each protein contains the information that specifies both the native structure and the pathway to attain that state >> ______________ (And of course, in turn, that functionally specific and complex information traces to the codon sequence in the DNA string that is templated by mRNA and sequentially read codon by codon in the ribosome, with tRNA's supplying the AA chain, from START to STOP.) kairosfocus
Nakashima-san: I think you need to look at how the Dryden et al Abstract begins: "We suggest . . ." In short, the paper is long on speculation, short on substance. In fact protein folding is an extremely complex computational challenge, and with the existence of prions as known misfolding-based diseases [scrapie, mad cow disease], we know that there are not unique and simple solutions to the folding. (Observe how in vivo folding is aided post-chaining by chaperoning molecules.) Wiki, on prions:
A prion (pronounced /ˈpriː.ɒn/)[1] is an infectious agent that is composed of protein. To date, all such agents that have been discovered propagate by transmitting a mis-folded protein state; the protein does not itself self-replicate and the process is dependent on the presence of the polypeptide in the host organism.[2] The mis-folded form of the prion protein has been implicated in a number of diseases in a variety of mammals, including bovine spongiform encephalopathy (BSE, also known as "mad cow disease") in cattle and Creutzfeldt-Jakob disease (CJD) in humans. All known prion diseases affect the structure of the brain or other neural tissue, and all are currently untreatable and are always fatal.[3] In general usage, prion refers to the theoretical unit of infection. In scientific notation, PrPC refers to the endogenous form of prion protein (PrP), which is found in a multitude of tissues, while PrPSc refers to the misfolded form of PrP, that is responsible for the formation of amyloid plaques that lead to neurodegeneration. Prions are hypothesized to infect and propagate by refolding abnormally into a structure which is able to convert normal molecules of the protein into the abnormally structured form. All known prions induce the formation of an amyloid fold, in which the protein polymerises into an aggregate consisting of tightly packed beta sheets. This altered structure is extremely stable and accumulates in infected tissue, causing tissue damage and cell death.[4] This stability means that prions are resistant to denaturation by chemical and physical agents, making disposal and containment of these particles difficult.
Similarly, Wiki on Protein folding:
The amino-acid sequence (or primary structure) of a protein defines its native conformation. A protein molecule folds spontaneously during or after synthesis. While these macromolecules may be regarded as "folding themselves", the process also depends on the solvent (water or lipid bilayer),[5] the concentration of salts, the temperature, and the presence of molecular chaperones. Folded proteins usually have a hydrophobic core in which side chain packing stabilizes the folded state, and charged or polar side chains occupy the solvent-exposed surface where they interact with surrounding water. Minimizing the number of hydrophobic side-chains exposed to water is an important driving force behind the folding process,[6]. Formation of intramolecular hydrogen bonds provides another important contribution to protein stability.[7]The strength of hydrogen bonds depends on their environment, thus H-bonds enveloped in a hydrophobic core contribute more than H-bonds exposed to the aqueous environment to the stability of the native state.[8] . . . . The essential fact of folding, however, remains that the amino acid sequence of each protein contains the information that specifies both the native structure and the pathway to attain that state. This is not to say that nearly identical amino acid sequences always fold similarly.[10] Conformations differ based on environmental factors as well; similar proteins fold differently based on where they are found. Folding is a spontaneous process independent of energy inputs from nucleoside triphosphates. The passage of the folded state is mainly guided by hydrophobic interactions, formation of intramolecular hydrogen bonds, and van der Waals forces, and it is opposed by conformational entropy . . . . De novo or ab initio techniques for computational protein structure prediction is related to, but strictly distinct from, studies involving protein folding. 
Molecular Dynamics (MD) is an important tool for studying protein folding and dynamics in silico. Because of computational cost, ab initio MD folding simulations with explicit water are limited to peptides and very small proteins. MD simulations of larger proteins remain restricted to dynamics of the experimental structure or its high-temperature unfolding. In order to simulate long time folding processes (beyond about 1 microsecond), like folding of small-size proteins (about 50 residues) or larger, some approximations or simplifications in protein models need to be introduced. An approach using reduced protein representation (pseudo-atoms representing groups of atoms are defined) and statistical potential is not only useful in protein structure prediction, but is also capable of reproducing the folding pathways.[17]
Bearing that in mind, we should then read with a fairly critical eye, and make a few annotations: ______________ >> Before turning to a discussion of the second assumption, we wish to summarize information showing [a word that usually calls for something demonstrative, empirical or logical or both . . . not "suggestions," material gaps and speculations . . .] the first assumption, namely that the sequence space is vast, to be false. A typical estimate of the size of sequence space is 20^100 (approx. 10^130) for a protein of 100 amino acids in which any of the normally occurring 20 amino acids can be found. [The number is based on the possibilities of chaining AA's in essentially any order, i.e. any AA can be followed by any other due to the basic AA structure: H2N-CR-COOH, so the acid and amine groups can chain with very little reference to the R side-group. It is the R group that gives functional properties to the AA.] This number is indeed gigantic but it is likely to be a significant overestimate of the size of protein sequence space. [But, the issue is not PROTEIN sequence space but polypeptide sequence space -- i.e. what space is chemically feasible for a chain of given length, not what space is actually used by biologically observed proteins . . . i.e. this is begging the question that is at stake, and of course is a dig at the Durston metric, based on that question-begging. Cf. my discussion of FSCI, the search space and "hypothesine" in light of the sequences actually used for Cytochrome-C] For example, Dill and colleagues used simple theoretical models to suggest (Lau & Dill 1990; Chan & Dill 1991; Dill 1999), and experimental or computational variation of protein sequence provides ample evidence (Cordes et al. 1996; Riddle et al. 1997; Plaxco et al. 1998; Larson et al. 2002; Guo et al. 2004; Doi et al. 2005), that the actual identity of most of the amino acids in a protein is irrelevant.
[Not so; in remarking on Cytochrome-C, a commonly studied protein of about 100 AA's that is used for taxonomic research, I noted that it "typically varies across 1 - 5 AA's in each position, with a few AA positions showing more variability than that. About a third of the AA positions are invariant across a range from humans to rice to yeast. That is, the observed variability, if scaled up to 232 AA's [for the "hypothesine" model used in the discussion], would be well within the 10^150 limit suggested; as, e.g. 5^155 ~ 2.19 * 10^108." In short, Dryden et al are playing two sides of the issue: where AA positions are pinned to one or a few molecule possibilities, that drops "protein sequence space"; and where relatively wide variability may be observed, that means that AA sequence is "irrelevant." That "logic with a swivel" approach sounds rather like the conclusion was written before the argument was made and before the evidence was seriously consulted.] >> ________________ In short, there is much more to the story than this paper presents. GEM of TKI kairosfocus
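[Editorial note: the two numerical claims quoted in the annotations above are quick to check; this is just the arithmetic as given, no biology assumed.]

```python
import math

# 1. "20^100 (approx. 10^130)": the exponent of ten for 20^100
exp10 = 100 * math.log10(20)
print(round(exp10, 1))      # → 130.1, i.e. 20^100 ≈ 10^130

# 2. kairosfocus' scaled estimate: "5^155 ~ 2.19 * 10^108"
print(f"{5 ** 155:.2e}")    # → 2.19e+108
```

Both figures as quoted check out.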
Since DNA seems to contain something similar to computer code (software), and computer code can only be created by intelligent agents, I think ID researchers should look for the actual instructions, in the form of program code, that make the DNA perform its different routines. That will require reverse engineering or decompilation of the DNA's "machine code" to get the source code. In the process of getting the source code, the smoking gun evidence for ID will be the historic discovery of the first "comments" line and what it says. rprado
Mr Lamarck, Sorry, I'm not following your reference. Can you quote or link to the statement you are thinking of? This study, How much of protein sequence space has been explored by life on Earth?, is an example of how thinking in terms of hydrophobic/hydrophilic, etc. can collapse the search space. Nakashima
Mr. Nak, "Suddenly the “alphabet” has collapsed from 20 to 2." I'm sure you don't think the rest of the FCSI doesn't matter though. There was the quote earlier, the interview, about how one base pair leads to another and another to make a statement. lamarck
Lenoxus @ 25: So let's say, for instance, that our hypothesis for some of these knocked-out regions that don't seem to affect global fitness is that they serve as error correction mechanisms for some obscure cellular function, as Dembski's programmer friend has suggested could be the case. If so, then if that error never occurs, the mechanism would never activate and thus never affect global fitness. To test this hypothesis we could either: A) Do a near infinite number of mutagenesis experiments to see if the area of interest begins to affect global fitness B) Directly investigate the area of interest based on its coding and three-dimensional structure and try to find if it interacts with any other area of the genome. I would think option B would be more likely to generate a timely result. In other words, an experiment based on global fitness is not really any help here, since it's already been established that these regions can be knocked out without affecting fitness. One possibility is that this error correction mechanism only works during replication. It's so efficient and the system it acts on is so critical that an organism without it either never starts its life (is aborted at fertilization or meiosis, for example), or simply doesn't manifest any error in the critical system at all and survives just fine. tragic mishap
So is this now like an election? Each amino acid substitution gets one vote? Whoever gets the most votes becomes the rule and the rest are exceptions? lol, how many exceptions would it take to change the so-called "generality" you just described. tragic mishap
KF-san, I was aware that I was only speaking in generalities, and that there are many exceptions to what I was saying. Nakashima
Nakashima-san, I think you overlook that [a] enzymes have active region clefts, [b] 3-D structures based on chains can be destabilised by as few as one inner AA that is incompatible. For instance, think of a proline "aa" [technically an imino acid, I know], which will pin its location due to the internal binding back of the R-group to the N atom. Similarly, a hydrophilic or hydrophobic residue out of order will have adverse impacts on folding and function. There is a reason why life forms form proteins step by step per sequences of instructions. GEM of TKI kairosfocus
I've heard of Colin Reeves in the GA field, though I don't know what he's published in the field since 2000. Nakashima
Mr Lamarck, Yes, both of these studies, the long knockout and the ultra-conserved knockout, were for non-coding regions that were interspersed with functional genes. I agree with your scepticism; once a previously functional area starts to erode, it is harder and harder to come back to any kind of functionality, new or old. I'm not sure if we can generalize about genes. Certainly most genes today create proteins with very precise grooves, channels, etc. and a pattern of electric charges on the surface. But rather than being the effect of specific amino acids in the primary structure (the unfolded protein chain), a lot of this can be boiled down to the choice of hydrophobic versus hydrophilic amino acids. Suddenly the "alphabet" has collapsed from 20 to 2. There are also even larger protein motifs, such as alpha helices and beta sheets. All of these mean that some AA changes due to a change in the DNA don't actually have much effect on the functionality of the protein. You might generalize that the AAs on the outside of the protein (after it folds into its final 3D tertiary structure) are the ones most likely to influence the FSCI calculation. However, the cube-square law gives us the clue that as proteins grow larger, relatively fewer AAs sit on the surface, so the ratio of important to less important AAs will go down in larger proteins. (That is a very crude approximation, since a lot of proteins are far from spherical!) If DNA starts down the path towards disuse, I don't expect much to come of it. I think the time after a gene duplication event while both copies are still functional is crucial. The gene can wander in the neighborhood of its original function (as we have just shown, many sites are free to change without affecting function) and if it hits on a new function, great. That's how we got tri-color vision again. With frameshifts and inversions, there are examples even in English of palindromes, and two words which are the reverse of each other.
Apparently it happens in the world of proteins also. Again, it might help to think in terms of a two-letter (phobic/philic) alphabet to see that it is less unlikely than our intuition might otherwise lead us to believe. This depends on how often the genetic code maps phobic to phobic and philic to philic when the triples are reversed. I don't know that off hand! :) Nakashima
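[Editorial note: Nakashima's "alphabet collapse" can be made concrete in a few lines. The particular hydrophobic set below is one common choice from the Dill HP-model literature, not a canonical list; treat it as an assumption.]

```python
# One common hydrophobic partition (an assumption, not a canonical list).
HYDROPHOBIC = set("AVILMFWC")

def hp_profile(seq):
    """Collapse a 20-letter amino acid sequence to a 2-letter H/P string."""
    return "".join("H" if aa in HYDROPHOBIC else "P" for aa in seq)

# Effect on the size of sequence space for a 100-residue chain:
full_space = 20 ** 100   # all 20 amino acids distinguishable
hp_space = 2 ** 100      # only the hydrophobic/hydrophilic pattern matters
# full_space / hp_space == 10 ** 100 -- the "collapse" Nakashima describes
```

If only the H/P pattern of a chain matters for folding, the effective search space shrinks by a factor of exactly 10^100 for a 100-residue protein.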
Missed this bit of news! Biologic Institute: New Talent, New Places. Gosh, and I thought they said ID was squished. Hmmmm, more talent joins in the pursuit of Design challenges. Fantastic, congrats to all of you at UD and Discovery! No, you're not "big" yet, but you're attracting some of the brightest! WTG guys. DATCG
What does a Darwinist call tonsils and appendices today btw? Has "vestigial organs" been removed from textbooks? DATCG
Nakashima, The analogy can only take you so far if you're talking about an "advanced civilization" of aliens such as that mentioned by Richard Dawkins. As is the fancy of Darwin and his followers, one can make conjectures that are "reasonable" hypotheticals and consider that a more advanced species would have eliminated comments from any program of this type of analogy. These areas, much like all the other areas that were predicted wrong by the Darwinists at the time, can simply be out of reach right now of current understanding. Just like removing tonsils and an appendix does not kill or eliminate reproductive capacity, this could be an area where current expertise simply does not recognize the value of the information, just as they failed to recognize the value of the formerly named "vestigial organs". DATCG
tragic mishap:
Lenoxus, if a function were to be uncovered yet did not affect fitness, what would Darwinism be able to say about the origins of that function?
That's a very good question. However, it's my understanding that all functions affect fitness, if only in the slightest way. Even an organism's eye color can make a fitness difference. (Though maybe not its lung or liver color… hmm… maybe it would affect its perceived tastiness to predators?) Lenoxus
Mr. Nak, I think you're talking about non-coding regions interspersed within functional DNA sequences? I know about that but I'm not sure if you disagree with me or not. The reason I'm skeptical about non-coding regions of the genome eventually coming up with something meaningful is: natural selection isn't weeding out their mutations if they're truly non-functional. So they just become more and more non-FCSI based. Isn't it also true that a coding gene by and large has to have FCSI? That is, you can have a meaningless segment here and there, but thousands of parts of real meaningful language need to line up. So how can DNA go further and further towards chaos and you expect something to come of it? This is a related question to what I said on the whale evolution thread. How can a large frameshift, or an inversion, where all the base pairs switched partners, even remotely come close to coding for something? Yet this is observed to happen a lot of times. Maybe I'm missing something fundamental here? lamarck
Lenoxus, if a function were to be uncovered yet did not affect fitness, what would Darwinism be able to say about the origins of that function? tragic mishap
bFast: Actually, that report was on a different study; the one you are probably thinking of was reported by Science Daily here. Indeed, as an evolutionist, I have to say that that particular study, first brought to my attention here at UD (unless that was yet another study!), is incredible, and seemingly difficult to reconcile with basic common descent; the interviewed scientists themselves seem startled. (I have since read about GRNs and suchlike, but can't claim to fully understand them.) That said, I'm not aware of any IDer who specifically predicted something like this in advance — instead, the repeated insistence has been that there is zero or close to zero non-functional DNA, on the assumption that the designer is responsible for the whole genome and therefore wouldn't waste any of it. (In the usual narrative, it's always the evolutionists who refuse to see the possible purposes of this or that portion of the genome, instead close-mindedly sequestering it as junk forevermore.) The funny thing about this study is that in order to use it as evidence against ordinary evolution, the relevant DNA must always remain truly non-functional in terms of fitness value to its current species, because otherwise its conserved-ness would no longer be a mystery. Even then, though, it remains merely a mystery, and not evidence for ID. In order for it to be evidence for ID, its design purpose must roughly be shown or inferrable — for example, perhaps it encodes a mathematical sequence, or functions as the "program comments". (That would certainly fit well with Remine's Message Theory.) Lenoxus
To continue with the programming analogy, there are also blocks of comments and commented-out code which are guaranteed non-functional, as well as code that analysis can prove is impossible to reach. Counting accumulated spelling errors in these areas would be similar to what is proposed for non-coding DNA. Nakashima
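[Editorial note: the three categories Nakashima lists -- comments, commented-out code, and provably unreachable code -- can be shown in a toy function. This is purely an illustration, not code from any study.]

```python
def sign_label(n):
    # An ordinary comment: guaranteed non-functional text (category 1).
    # old_result = n * 2   # commented-out code from an earlier draft (category 2)
    if n >= 0:
        return "non-negative"
    if n < 0:
        return "negative"
    # Category 3: for integer input the two branches above are exhaustive,
    # so a static analyser can prove the next line never executes.
    return "unreachable"
```

Mutations accumulating in any of these three regions would never change the program's observable behaviour, which is the parallel Nakashima draws to counting substitutions in presumed non-functional DNA.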
Mr Lamarck, In the case of a duplicate gene, at first both genes are working; neither is junk. Later mutation might cause one of them to become non-functional or shift to a different function, coding for a different protein. You are correct that if one copy becomes non-functional, the theory is that it will start to accumulate more mutations at some background rate, thereby providing a clock of sorts that started ticking when it lost function. Nakashima
Regarding the so-called non-functional, non-coding regions. I remember reading about Meta-Information or multi-layer information after the ENCODE project was done. There is far too much unknown still to determine why certain blocks or regions can be knocked out and yet not represent any known damage to a lab rat or mouse. It could be that certain regions are only engaged based upon external stimuli from the environment. There could also be possible FrontLoading arrangements for future adaptations. As the Darwinists here are always fond of saying, just because we do not know now does not mean we will not discover the answer in the future. DATCG
PaulB, "It would indeed be interesting to see some ID research of any sort, particularly if the term "intelligent design" appeared in the article, and if the article appeared in an actual science journal." You mean besides Reverse Engineering of nature every single day? There was an attempt to publish an ID paper. Once the Darwinists found out, they acted like Hitler and Stalin. They squashed the attempt, acting like a bunch of fearful fascist Darwinists at the Smithsonian, and acted like thugs against Dr. Sternberg for his attempt at open discussion of a paper by Stephen Meyer. They should've lost their jobs, been reprimanded or dismissed to other areas. But this is the double standard you live by. So, there is ID research going on at the Biologic Institute now, where fascists cannot take people's keys away and lock them out. What a joke. Darwinists are so scared of any information that counters them, they turn into little Hitlers and burn books. Biologic Institute Research and Selected Publications Let me know, Paul, the day you write the Smithsonian Institute on behalf of Dr. Sternberg and strongly protest the fascist actions of the Darwinians in control there. DATCG
Is this mouse study the same one DaveScot used to refer to all the time as an example disproving Darwinian evolution? On the page there is a similar study whereby conserved parts of the genome were eliminated and the mice functioned just fine. So why was the conserved region still identical after 80 million years, but when knocked out had no effect? Interesting! jerry
Mr bFast, Yes, it seems to be the same lab as did the knockout of an ultra-conserved non-coding region. Nakashima
What about Darwinists postulating on the latent potential of duplicate genes etc. becoming useful later? Isn't this considered junk DNA till functional? Seems like the harm would be fewer mutations per generation when you remove the junk, so it can't adapt? lamarck
Nakashima, 2.3 million divided by 2.7 billion is only about 0.085%. Joseph
Shared non-functional structures and common ancestry is counter-intuitive. I take it that is why it is used as evidence for it. Joseph
Nakashima, interesting post. This report seems to be of the same experiment where much of the DNA which was knocked out was highly conserved. I find it interesting, and highly anti-Darwinian that highly conserved DNA could be knocked out of mice without the resultant animals showing ill effects. bFast
Of course, thanks for the article. Dr. Dembski is suggesting we do research on the portions that were knocked out in experiments like this and investigate them for functions, perhaps taking as a clue advice from his programmer colleague. tragic mishap
Mr Mishap, There have been experiments like this, of course. Mice with a million bases of junk DNA knocked out. Nakashima
I think that you are misunderstanding the post there, Megan. The research being cited is evolutionary research based on fitness. The avenue for ID research is investigating functions for structures that evolutionists have ruled out because they can be snipped with no discernible effect on fitness. tragic mishap
OP: "Being able to snip something with no apparent ill effect may in fact provide support for ID by showing that the system was so well engineered that it could automatically adjust to a certain degree, and in most cases completely (apparently)." This sounds like a good avenue for ID research; randomly remove an organ (or part thereof): if the subject dies or suffers greatly, then said organ has no redundant capacity and is irreducibly complex (designed); if the subject lives or enjoys enhanced functionality, then the organ shall be deemed to exhibit redundancy (also designed). MeganC
Great post Dr. Dembski and this I believe is going to be an area of research where ID will excel. All that matters in evolutionary theory is fitness. ID will go beyond that. Perhaps in order to get published we might need to keep the words "intelligent design" out of the papers for now. ;) tragic mishap
Oh, those evolutionists, constantly bringing in just-so stories so they can claim that any and all data support their hypothesis! Lenoxus
As a software developer, let me point out additional phenomena of non-function within intelligently designed works. When I develop code, I incorporate a bunch of standard libraries. These libraries contain code that most programmers no longer use, sprintf(), for instance. Even when I am using relatively new code, I will create or purchase a code library that contains a bunch of functionality that a particular project does not use. Therefore shared, unused technology does not by any means prove common ancestry. Though it does not prove common ancestry, it does imply it. If presumed ancestral organisms have a functioning part, and their presumed progeny have non-functioning vestigial parts, this is certainly very consistent with common ancestry. However, Dr. Dembski, when you say, "Since design entails design for function, shared non-functional structures would suggest common ancestry in the absence of common design," it must be emphasized that common ancestry hardly rules out ID. bFast
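bFast's library analogy can be made concrete with a minimal sketch (all names here are hypothetical, invented for illustration): a shared module carries several routines, but a typical program that imports it calls only one of them, so the unused routines travel along with every program that links the library.

```python
# Hypothetical shared "library" module. Every program that imports it
# carries all of these routines, whether or not it ever calls them.

def format_report(rows):
    """Widely used routine: render rows of values as text."""
    return "\n".join(", ".join(str(x) for x in row) for row in rows)

def legacy_sprintf_style(fmt, *args):
    """Older routine kept for compatibility; most callers no longer use it."""
    return fmt % args

def exotic_export(rows):
    """Rarely needed capability, shipped 'just in case'."""
    return [tuple(reversed(row)) for row in rows]

# A typical program exercises only a fraction of what it links against:
print(format_report([[1, 2], [3, 4]]))
```

The two unused functions are "non-functional" from this program's point of view, yet they sit in the shared library for a reason, which is the point of the analogy.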
http://www.livescience.com/health/090727-one-eye-vision.html Here is a person living just fine with only the left side of her brain, yet I doubt anyone would claim this proves the right side is "junk" or unimportant. This example shows how tough it is to determine how important each part of the DNA really is. Who knows if some "junk DNA" was involved in rewiring this girl's brain. It seems that scientists have learned more about our spleen: it plays a more important role than they first thought. Even though you can live without your spleen, you had better hold on to it as long as possible. http://www.nytimes.com/2009/08/04/science/04angier.html?_r=1&partner=rss&emc=rss Smidlee
Dr. Dembski quotes an unnamed colleague as writing, "It would be interesting to see some ID research into some of the evo cases..." It would indeed be interesting to see some ID research of any sort, particularly if the term "intelligent design" appeared in the article, and if the article appeared in an actual science journal. PaulBurnett
Let me put in my impression of how this might be interpreted in an ID framework, though I am just speculating. Perhaps the designer designed this junk DNA to serve a function in the past, and it either degraded over time or modern environments don't correlate with what it was originally intended to do. So, to analogize it to software, you can have what seems like junk code but is really just leftover code from the original engine; although it is not useful now, except to serve as support for the rest of the functional system, it was at one time very adequate and tailored to past environments. This could of course also be true for an evolutionary story line, which is why it is a very interesting problem for the controversy itself. That is, computer software does indeed evolve from the time it was originally being entered and sketched out to the finished product. The only questions really are: is the junk really junk, or does it confer advantage, even if only in the past? And then, as far as UCA is concerned, does the past function of the junk coincide with human structure or that of a lesser creature? Of course there is very much the possibility that the junk might actually be serving hidden or unknown modern purposes. Fascinating. Frost122585
Bill, this is an excellent post and question. I read about this at the end of Steve Meyer's new book. I think it is a fascinating subject that calls for more research and theorizing, and it shows how ID is very much a scientific theory and project. Frost122585