Uncommon Descent Serving The Intelligent Design Community

Evolution driven by laws? Not random mutations?


So claims a recent book, Arrival of the Fittest, by Andreas Wagner, professor of evolutionary biology at the University of Zurich in Switzerland (also associated with the Santa Fe Institute). He lectures worldwide and is a fellow of the American Association for the Advancement of Science.

From the book announcement:

Can random mutations over a mere 3.8 billion years be solely responsible for wings, eyeballs, knees, camouflage, lactose digestion, photosynthesis, and the rest of nature’s creative marvels? And if the answer is no, what is the mechanism that explains evolution’s speed and efficiency?

In Arrival of the Fittest, renowned evolutionary biologist Andreas Wagner draws on over fifteen years of research to present the missing piece in Darwin’s theory. Using experimental and computational technologies that were heretofore unimagined, he has found that adaptations are not just driven by chance, but by a set of laws that allow nature to discover new molecules and mechanisms in a fraction of the time that random variation would take.

From a review (which is careful to note that it is not a religious argument):

The question “how does nature innovate?” often elicits a succinct but unsatisfying response – random mutations. Andreas Wagner first illustrates why random mutations alone cannot be the cause of innovations – the search space for innovations, be it at the level of genes, proteins, or metabolic reactions, is so large that the probability of stumbling upon all the innovations needed to make a little fly (let alone humans) is too low for them to have occurred within the time span the universe has been around.

He then shows some of the fundamental hidden principles that can actually make innovations possible for natural selection to then select and preserve those innovations.

Like interacting parallel worlds, this would be momentous news if true. But someone is going to have to read the book and assess the strength of the laws advanced.

One thing is for sure: if an establishment figure can safely write this kind of thing, Darwin’s theory is coming under more serious fire than ever. But we already knew that, of course, when Nature published an article on the growing dissent within the ranks about Darwinism.

In origin-of-life research, there has long been a law vs. chance controversy. For example: Does nature just “naturally” produce life? vs. Maybe if we throw enough models at the origin of life… some of them will stick?

Note: You may have to apprise your old schoolmarm that Darwin’s theory* is “natural selection acting on random mutations,” not “evolution” in general. It is the only theory that claims sheer randomness can lead to creativity, in conflict with information theory. See also: Being as Communion.

*(or neo-Darwinism, or whatever you call what the Darwin-in-the-schools lobby is promoting or Evolution Sunday is celebrating)

Follow UD News at Twitter!

Comments
Learned Hand:
But the question isn’t whether “we” could do it. Rather it’s whether there is any way in which it could be done short of design. You want the answer to be “no,” but you don’t know that the answer is “no,” because you can’t compute the probability of all the possible paths nature could have taken. But ignoring the unknowns doesn’t make them go away. It may be that we will never know enough to calculate the odds, in which case dFSCI will never work properly. Sometimes the things we most sincerely want to be true are not.
This is not a scientific argument. In the same way, I can say that you want very badly that it be possible, but you have nothing to show for that theory. The only scientific argument is: have you any evidence that your explanation can do that? And the answer is: no. You have no evidence that dFSCI can emerge in the way you describe. Just abstract wishful thinking. Have I any evidence that dFSCI can emerge by design? Yes, a lot. Tons. Indeed, all the dFSCI whose origin we know is designed. This is scientific and empirical reasoning.
gpuccio
November 9, 2014 at 09:54 AM PDT
Learned Hand:
It’s quite bold to say “My procedure works” when you’ve never actually used it to successfully determine design that wasn’t apparent from some traditional analysis (such as recognizing English). It could possibly work under some circumstances, such as where random variation is truly the only alternative–but that takes life off the table. And it’s never actually worked in the real world.
It's not bold. It works. Recognizing English is no different from recognizing that an enzyme accelerates a reaction, or that a watch measures time. RV is the only alternative in all cases of language, software, machines. All these objects, if complex enough, are designed. They never arise in any other way. Natural algorithms cannot generate them. Random variation can generate simple configurations which can have simple functions without having been designed. Like "I am". It works. It works in the real world. It works always. You are not convinced? Show where it does not work. Show a false positive.
gpuccio
November 9, 2014 at 09:50 AM PDT
Learned Hand:
Yes, I could do the same. And I don’t know how to calculate -log2 of anything, so I won’t be using dFSCI to do it. Your procedure doesn’t do anything.
No. Wrong. See my answer to keith in post #597 for that.
gpuccio
November 9, 2014 at 09:42 AM PDT
GP, your challenge of text generation beyond a threshold by blind chance and mechanical necessity is apt (I favour flattened-off Zener noise or nice crackly frying sky noise . . . not pseudorandom sequences). I only note that within Sol system resources, 72 characters is enough, and for the observed cosmos, 143. On reports, I have seen 20 - 24 character strings. A serious look at the challenge will teach objectors much about what we have been saying. KF
kairosfocus
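A quick arithmetic check of those character counts, using the seven-bits-per-ASCII-character convention stated later in the thread (a sketch of where the numbers come from, not an endorsement of the thresholds):

```latex
\begin{align*}
72 \ \text{chars} \times 7 \ \text{bits/char} &= 504 \ \text{bits} \quad \text{(just past the 500-bit Sol-system threshold)}\\
143 \ \text{chars} \times 7 \ \text{bits/char} &= 1001 \ \text{bits} \quad \text{(just past the 1000-bit observed-cosmos threshold)}
\end{align*}
```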
November 9, 2014 at 09:42 AM PDT
Learned Hand:
You did not do a test. You did not actually calculate dFSCI for anything, and dFSCI is neither necessary nor helpful in determining that these posts are designed. We know they’re designed because we compare them to our personal experience of English communications, not through any calculation of generic designedness. I think it’s an important point: dFSCI is irrelevant to determining design not just in this case, but in all cases. There is no case I’m aware of in which dFSCI has ever been shown to work in the absence of the usual ways of detecting design.
What do you mean? Of course I did a test. We are comparing nothing. We are just evaluating whether a passage means something in English. That is an objective property. Anyone who knows English well can answer. Let's say that, instead of language, we are evaluating software. Give me executable programs which, when opened in a Windows XP system, can take lists of words and order them alphabetically. Let's say they are at least 3000 bits long. I will infer design for them. Show one which was generated by a random bit generator. A false positive. I am doing a test. Absolutely.
gpuccio
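For scale, an illustrative calculation of why 3000 bits puts such a program far beyond a random bit generator; the target-space size of 2^2500 functionally equivalent programs is an assumed, deliberately generous figure, not a measured one:

```latex
P(\text{one specific 3000-bit string}) = 2^{-3000} \approx 10^{-903}, \qquad
\frac{|T|}{|S|} = \frac{2^{2500}}{2^{3000}} = 2^{-500}
```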
November 9, 2014 at 09:40 AM PDT
LH: Indeed, once one recognises the existence of FSCO/I, s/he may infer design without explicit calculation. However, the unit character of English text -- so, functionally specific -- and the (over-generous) threshold of 600 are precisely an info-beyond-a-threshold metric in the context of functional specificity. Using ASCII, one character is equivalent to seven bits, if you wish a more familiar unit. KF
kairosfocus
November 9, 2014 at 09:36 AM PDT
Learned Hand:
Unless you were unaware of (as per (h)) or ignored (as per (i)) an alternative that could produce that object. Which means that your results are completely determined by your state of mind, making them not only subjective but extremely susceptible to bias. Someone who has a deep, heartfelt desire for the dFSCI calculation to show that life was designed, for example, has an enormous incentive to not see alternatives and thus return a false positive.
This is a comment on my statement: "IOWs, there is no object in the universe (of which we can know the origin independently) for which we would infer design by this procedure and be wrong." I don't understand. I am saying that empirically nobody can present an object which will be judged as exhibiting dFSCI by my procedure and be a false positive. I am speaking of objects of which we can know the origin independently, as you can notice. So, you can simply show me an object which has the properties you describe: with a function for which I can compute a satisfyingly high dFSCI, which will evoke in me no suspicion of being algorithmic, and which will induce me into error. Do it. You can start by showing a 600-character-long sequence which appears to me as having good meaning in English, but was randomly or algorithmically generated. Do it.
gpuccio
November 9, 2014 at 09:34 AM PDT
Learned Hand:
This is a grandiose claim. Why not test it so that you can prove it? So far your response to that suggestion–correct me if I’m wrong, please–has been to ask other people to test it for you by providing you with subjects. It’s not your obligation to test your own theories; I’m sure you have your own job and hobbies. But grandiose claims that the claimant doesn’t bother to test set off my alarm bells. It sounds very much like the equivalent of ID in the sphere I’m more familiar with, law and finance. If someone claimed to have a machine that would predict futures prices in advance, but when asked to prove it responded, “You do it!”, they’d be laughed out of the building.
It is a claim as grandiose as it is true. But why do you say that I do not test it? That I ask others to test it for me? That is simply not true. I have tested it here. I have inferred design by dFSCI for all the posts here that are longer than 600 characters. And I have asked all here to provide some equivalent sequence which was generated by a random character generator, or by simple mathematical algorithms, and which will cause my error and be judged by me a positive, while it is not. I am doing the test, not others. I could generate the random strings myself, but why? I am sure of the result. So, I am asking many more or less hostile readers to show those false positives in which they seem to have faith. Do that. You, do it. I already know that those false positives do not exist. What do you want me to do? To generate random strings and post them here? You have to explain why it is so easy for me to infer design for all those sequences which have a good English meaning, and why I am so sure that no one will provide a false positive, if you really think that the dFSCI procedure has no value, or is circular, or whatever. Please note that I will infer design for all sequences longer than 600 characters which have good meaning in English, without knowing anything of their origin. If my dFSCI reasoning is wrong, I am taking a huge risk. Why am I not worried at all?
gpuccio
November 9, 2014 at 09:27 AM PDT
Learned Hand:
You’ve obviously put a lot of thought into this, so I’m surprised to see this. You know that “random variation” isn’t the proposal on the table from mainstream science. Your procedure is designed to test for design against a strawman. If you don’t know or can’t calculate the effects of selection, simply ignoring them doesn’t make the problem go away. It may be difficult to calculate the effects of many planetary bodies on the orbit of a moon, but ignoring them doesn’t make them go away–it only makes the calculation inaccurate.
No. The computation of functional complexity eliminates RV. Selection can be considered, but only if demonstrated. I have discussed this explicitly elsewhere. NS can work only if complex functional sequences can be deconstructed, in the general case, into simpler steps which are "bridges" at the sequence level, each of which can be expanded by positive NS because it confers reproductive advantage. So, the algorithm is: A (initial state, unrelated at sequence level to B, the new functional state; what I am discussing is the emergence of a new functional protein, of a new superfamily, for which there is no previous homologue known). So, again: A -> A1 (small transition, in the range of RV) -> expansion of A1 (reproductive advantage, NS) -> A2 -> expansion of A2 -> ... B (new functional state). Objections: a) There is no logical reason why A1, A2, An should exist. b) There is no empirical evidence that they exist. c) If each of them was selected and expanded, why is there no trace of them in the existing proteome? IOWs, NS can easily be rejected as an explanation for functional proteins, unless you can provide an explicit pathway and demonstrate that it exists. The problem here is not to calculate "the effects of many planetary bodies". The problem here is to provide any evidence that those bodies exist.
gpuccio
November 9, 2014 at 09:15 AM PDT
Learned Hand:
This is the flaw I’m most focused on. I don’t know if it’s the most serious problem with your procedure, but it’s the most comprehensible to me. First, as noted above, this makes the calculation entirely subjective. “2 + 2” is objective; “2 + the number of coins in your pocket” is subjective and the result will change from person to person. And as you yourself note, any one person’s calculations can change over time as their knowledge grows. At the very least, even if we discard the question of subjectivity, that means that this calculation is inherently susceptible to false positives. If you calculate dFSCI today and decide that it indicates design, you could learn tomorrow of a natural algorithm that explains the sequence. Your initial positive was therefore a false positive, exactly the result CSI isn’t supposed to return. Am I missing something? Do you not sign on to the usual claim that CSI can’t return false positives?
I have already answered, but I will add some more comments, as I understand that this point is important for you. It is a false problem. The purpose of eliminating necessity in Dembski's EF and in my procedure is essentially to rule out ordered sequences. That is not really a problem for my subset of specification, because complex digital functional specification is usually limited to three different sets of sequences: language, software, biological sequences. None of these three kinds of sequence complexity can be explained by natural algorithms (unless you believe in the myth of RV + NS). That's why, if the sequence is not ordered, and if its function depends only on processes which are strictly connected to a conscious understanding, the exclusion of algorithms is irrelevant.

Just to be more clear: why am I so sure that no natural algorithm, or even simple mathematical algorithm, can write a sonnet in English? It's simple: because no natural or mathematical algorithm can know anything about meaning, and about the English language. And there is no reason to believe that some day we will find some natural system which generates English sonnets. Only random variation could do that, and if the complexity is high enough random variation is out of the game. The same is true with software. Can you imagine some simple environmental algorithm which can generate the code for a spreadsheet? Do you really think that something like that can ever be found? No. Those things require understanding, programming, conscious search and computation. They can only be designed. Therefore, the problem is essentially to compute the functional information necessary to implement the function (the target space) to eliminate random events.

The same is true for proteins. The sequence of nucleotides which will code for the correct sequence of AAs in a functional enzyme can only be found by knowing the laws of biochemistry (top down), or by some painful intelligent bottom-up research guided by Intelligent Selection (for the difference between NS and IS, please refer to my previous answers to DNA_Jock). No biochemical algorithm can find those sequences by necessity. RV + NS is a false answer which has tried to deny that simple truth. It is false, it does not work, and cannot work. But that is another discussion. The simple point is: no explicit pathway based on RV and NS is known for any complex functional protein.
gpuccio
November 9, 2014 at 09:04 AM PDT
Learned Hand:
Here I’m criticizing again. If I understand correctly, you’re considering the target space to be the specific function of the subject. What about all the other possible results? If you were calculating dFSCI for a hand of cards you’d consider the target space to be larger than that one specific hand, wouldn’t you? So how do you determine, with a subject as complex as life, the scope of the target space? All the other pathways that could have led to the same, or any other equally functional, result?
I am not sure I understand your point here. There are two possible aspects, so I will answer both: a) dFSCI is computed for an explicitly defined function, and an explicitly defined way of measuring and assessing it. IOWs, for each function definition we must be able, in principle, to assess whether the defined function is present or absent in each possible sequence of the search space. That's what I mean when I say that our definition of function generates a binary partition in the search space. b) Obviously, for big search spaces we cannot really measure the function in each possible sequence. So, the target space must be evaluated indirectly. For proteins, that can be done approximately by the Durston method, or simply, in some cases, by my shortcut based on highly conserved sequences, as I have proposed for ATP synthase and histone H3. The aim is not to have an exact number, but an order of magnitude which is definitely higher than our threshold. In general, we are looking for a lower bound for the functional complexity.
gpuccio
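A minimal numeric sketch of the computation described in (a) and (b); the target- and search-space sizes here are illustrative assumptions, not Durston-derived values:

```python
import math

def dfsi_bits(target_space: float, search_space: float) -> float:
    """dFSI = -log2(target/search), in bits, for an explicitly defined function."""
    return -math.log2(target_space / search_space)

# Illustrative figures only: a 100-AA protein has a search space of 20^100
# sequences; suppose an indirect estimate put the functional target space
# at ~1e80 sequences (an assumed number, for demonstration).
search = 20.0 ** 100   # ~1.3e130 possible sequences
target = 1e80          # assumed size of the target space
print(f"{dfsi_bits(target, search):.0f} bits")  # ~166 bits, above a 150-bit threshold
```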
November 9, 2014 at 08:43 AM PDT
Learned hand:
How do you define “function”? Does it require action, or can a state (such as being beautiful, or being hot, or being conductive) be a function? I ask because I don’t understand, not Socratically.
I have dedicated a whole OP to that. Here is the link: https://uncommondescent.com/intelligent-design/functional-information-defined/ For your convenience, I quote here the most relevant part, but maybe you should read the whole post: "That said, I will try to begin introducing two slightly different, but connected, concepts: a) A function (for an object) b) A functionality (in a material object) I define a function for an object as follows: a) If a conscious observer connects some observed object to some possible desired result which can be obtained using the object in a context, then we say that the conscious observer conceives of a function for that object. b) If an object can objectively be used by a conscious observer to obtain some specific desired result in a certain context, according to the conceived function, then we say that the object has objective functionality, referred to the specific conceived function. The purpose of this distinction should be clear, but I will state it explicitly just the same: a function is a conception of a conscious being, it does not exist in the material world outside of us, but it does exist in our subjective experience. Objective functionalities, instead, are properties of material objects. But we need a conscious observer to connect an objective functionality to a consciously defined function."
gpuccio
November 9, 2014 at 08:35 AM PDT
Learned hand:
Your proposal puts me in mind of someone presenting a method for determining prime numbers, where that method can only exclude candidates if the tester already knows that they are not prime. Assuming no other method for determining primes, the results are going to differ from tester to tester based on their knowledge and beliefs. Like dFSCI, that’s not going to return consistent or accurate results.
I don't understand what that has to do with my procedure. Your statement seems to be circular, if I understand it well (I am not really sure what you mean), while dFSCI is not a circular procedure. In dFSCI we exclude those specific cases where a sequence (usually showing some form of order) can be explained by a known algorithm which can operate in the system we are studying. This is a scientific empirical evaluation, like all scientific procedures. Again, like many others, I suspect that you are confounding empirical scientific methods with mathematical demonstrations. I maintain that my procedure, correctly applied, has no false positives. You say that there must be a lot of them. Please, show one. It should be easy.
gpuccio
November 9, 2014 at 08:32 AM PDT
Learned Hand:
That not only makes the process subjective, it virtually guarantees false positives. After all, if you and I both apply the tool but you know more about the relevant algorithms than I do, I may decide there is no “explicit algorithm available in the system can explain the sequence.” You may apply your greater knowledge and conclude that there is. I have therefore reached a subjective false positive, despite correctly calculating dFSCI according to your procedure.
No. Science is made by sharing knowledge, not by personal secrets. If I apply the dFSCI procedure, I must be aware of all that is available about the problem that I am analyzing, exactly as I must know quantum mechanics to solve a quantum mechanics problem, and I must be an updated medical doctor to heal a patient. To apply the procedure correctly means to have full knowledge of what has to be known about our system and our problem, and the current scientific data and theories about it. Again, please offer a false positive to my design detection for the Shakespeare sonnet. After all, someone could know a simple algorithm which can write a sonnet in good English without any conscious intervention and without any added information. That would reduce the complexity of any 600-character passage in good English to a much lower number of bits (those of the algorithm), and we should reconsider everything. But that is simply not true. So, I will maintain my inference. If you have a false positive, show it.
gpuccio
November 9, 2014 at 08:27 AM PDT
Learned Hand: Now, let's go to your post #594.
That you would write this and then go on to describe a completely subjective measurement is surprising to me. How can dFSCI be objective if the calculation inherently, explicitly depends on your knowledge of design alternatives? Your technique only reports that something is designed if you don’t know of any non-design alternatives. Thus, two different people applying your technique correctly, without error, can easily arrive at two different conclusions.
No. What is required is just an analysis of what we are observing. If it shows regularity, an algorithmic explanation must be suspected and thoroughly excluded. This is normal scientific methodology. In the case of functions like language, software and protein sequences, it is easy to exclude any algorithmic origin present in the system.

Let's restrict the debate to protein-coding gene sequences. All that we know of the biochemical laws proves that a complex functional protein sequence can never arise algorithmically, because no natural law is aware of what is necessary to have a protein which folds and works to get some specific result. An algorithmic explanation is always related to some order, because a necessity law is simple and generates order. For example, a protein which is made only of one AA can be explained by some biological system where only that amino acid is available. So, if I define a function for that protein, that function is not complex, because it can be explained algorithmically (IOWs, its Kolmogorov complexity is low). But if I want to compute the sequence of a functional enzyme of sufficient functional complexity, I need specific understanding of biochemical laws and computational power, and a lab in which to apply an intelligent procedure, and even so that result is still beyond our powers as intelligent designers.

There is only one necessity algorithm proposed to explain functional proteins: NS applied to RV. As I have said many times, if it works then all the design detection procedure applied to proteins is false. That is good. It means that the biological argument based on dFSCI can be falsified, and is a correct scientific theory in the Popperian sense. The reasons why RV + NS is not an explicit explanation for any known protein of functional complexity have been debated many times. It is an integral part of the design detection theory in biology.

So, to sum up: there is nothing subjective. The problem is not whether I know an algorithm which can explain proteins. The simple point is that no such algorithm must be available. If I want to be really thorough, I can make an Internet search, consult specialists in the field, and so on. The simple point is: if I infer design for ATP synthase, it is because I am sure that nobody can offer an explicit algorithmic explanation for it. Blind faith in the undemonstrated powers of generic RV + NS is not a valid explanation.
gpuccio
November 9, 2014 at 08:17 AM PDT
Learned Hand: "As already noted, dFSCI would not return a positive for four characters. He’s right–dFSCI is a useless step in that “test.” When is it useful? Has dFSCI (or CSI or F/SCIO) ever detected design that was otherwise not apparent?" I will answer you #596 first, because it can probably help in the answers which will follow to your previous more detailed analysis. You have it wrong. The whole purpose of dFSCI and of design detection is not to detect design that is not otherwise apparent. It cannot do that, because it is a procedure with low sensitivity. The purpose is exactly the opposite: to confirm that objects whose design is apparent are truly designed, and that the appearance of design is not a pseudo-design. IOWs, design detection serves to distinguish between true design (apparent) and pseudo-design (apparent too). Between a painting made by an artist and the clouds in the sky which resemble something. Is that clear? Now, don't say that it is useless. It is very useful. Isn't it useful to know that some artifact is a true designed artifact, and not a stone shaped by the wind? Isn't it useful to know that ATP synthase was designed, and did not arise by non conscious events? Obviously, sometimes the function can be difficult to recognize. For example, if I see a binary sequence, I could not understand immediately that it is a working software. That means that the recognition of a function is a science of its own. But the recognition of a function, while necessary to infer design, is not the design inference. The design inference relies on the demonstration that the recognized function is complex and not algorithmic. Of course, we can only infer design for functions that we recognize and define explicitly.gpuccio
November 9, 2014 at 07:59 AM PDT
keith s: "The two procedures give exactly the same results, yet the second one doesn’t even include the dFSCI step. All the work was done by the other steps. The dFSCI step was a waste of time, mere window dressing." I am surprised that you still do not understand. See KF's comment at #588, which is perfectly correct. Your procedure is an evaluation of dFSCI, in the phrase: "1. Look at a comment longer than 600 characters. 2. If you recognize it as meaningful English, conclude that it must be designed." That implies a definition of the function, a measurement of the search space, and the statement of a threshold which, by its length alone, guarantees a sufficient target space / search space ratio. It's dFSCI calculation all the way. The only point is that I have not a simple way to measure the target space for English language, so I have taken a shortcut by choosing a long enough sequence, so that I am well sure that the target space /search space ratio is above 500 bits. As I have clearly explained in my post #400. For proteins, I have methods to approximate a lower threshold for the target space. For language I have never tried, because it is not my field, but I am sure it can be done. We need a linguist (Piotr, where are you?). That's why I have chosen and over-generous length. Am I wrong? Well, just offer a false positive. For language, it is easy to show that the functional complexity is bound to increase with the length of the sequence. That is IMO true also for proteins, but it is less intuitive.gpuccio
November 9, 2014 at 07:49 AM PDT
Your “test” is nothing more than this:
1. Look at a comment longer than 600 characters.
2. If you recognize it as meaningful English, conclude that it must be designed.
3. Perform a pointless and irrelevant dFSCI calculation.
4. Conclude that the comment was designed.
Why not omit step 3, since it is useless?
Because it is not useless. You could easily enough attain, in a random character generator, the phrase: “I am”. Which has perfect sense in English.
As already noted, dFSCI would not return a positive for four characters. He's right--dFSCI is a useless step in that "test." When is it useful? Has dFSCI (or CSI or FSCO/I) ever detected design that was otherwise not apparent?
Learned Hand
November 9, 2014 at 06:47 AM PDT
gpuccio @ 580
I have been deeply engrossed, in the last months, in what is known about epigenetic control of differentiation, and it is refreshing to study a biological argument which screams design and where RV + NS is rarely invoked explicitly, maybe because of some residual modesty.
Also, could it be that many serious researchers -- who are busy trying to figure out how those elaborate molecular/cellular choreographies are orchestrated -- don't have time to squander on senseless OOL issues? Perhaps some of them make quick mentions of a few 'required' "n-D e" keywords in order to appease some 'still influential' personalities within the academic establishment and to keep the censorship police away? After all, couldn't grants be denied if the proposed research could shake the foundations of some long-held 'proven' ideas printed year after year in profitable textbook publications by 'influential' personalities who sit on boards that approve those grants? I've heard from scientists I know that this is the case more often than one would think. C'est la vie, mon ami! Remember that in a language spoken by many people in this world 'horror show' means 'good'. Again, very commendable and highly appreciated effort you're making. Eccellente!!! Keep it up!!!
Dionisio
November 9, 2014 at 06:42 AM PDT
gpuccio,
Well, I have not been so brief, after all.
No, but you're gracious to write so much. Thank you for the effort and the clarity.
I will explain what is “simple, beautiful and consistent” about CSI. It is the concept that there is an objective complexity which can be linked to a specification, and that high values of that complexity are a mark of a design origin. This is true, simple and beautiful. It is the only objective example of something which can only derive from a conscious intentional cognitive process.
That you would write this and then go on to describe a completely subjective measurement is surprising to me. How can dFSCI be objective if the calculation inherently, explicitly depends on your knowledge of design alternatives? Your technique only reports that something is designed if you don't know of any non-design alternatives. Thus, two different people applying your technique correctly, without error, can easily arrive at two different conclusions. That not only makes the process subjective, it virtually guarantees false positives. After all, if you and I both apply the tool but you know more about the relevant algorithms than I do, I may decide there is no "explicit algorithm available in the system can explain the sequence." You may apply your greater knowledge and conclude that there is. I have therefore reached a subjective false positive, despite correctly calculating dFSCI according to your procedure. Your proposal puts me in mind of someone presenting a method for determining prime numbers, where that method can only exclude candidates if the tester already knows that they are not prime. Assuming no other method for determining primes, the results are going to differ from tester to tester based on their knowledge and beliefs. Like dFSCI, that's not going to return consistent or accurate results.
a) I define a specification as any explicit rule which generates a binary partition in a search space, so that we can identify a target space from the rest of objects in the search space. b) I define a special subset of SI: FSI. IOWs, of all possible types of specification I choose those where the partition is generated by the definition of a function. .... d) I can define any function I like for the object, including different functions for the same object. Maybe I can’t find any function for the object.
How do you define "function"? Does it require action, or can a state (such as being beautiful, or being hot, or being conductive) be a function? I ask because I don't understand, not Socratically.
f) I compute, or approximate, as much as possible, the target space, and therefore the target space/search space ratio, and take -log2 of that. This is the dFSI of the sequence for that function.
Here I'm criticizing again. If I understand correctly, you're considering the target space to be the specific function of the subject. What about all the other possible results? If you were calculating dFSCI for a hand of cards you'd consider the target space to be larger than that one specific hand, wouldn't you? So how do you determine, with a subject as complex as life, the scope of the target space? All the other pathways that could have led to the same, or any other equally functional, result?
h) I consider if the sequence has any detectable form of regularity, and if any known explicit algorithm available in the system can explain the sequence. The important point here is: there is no need to exclude that some algorithm can logically exist that will be one day found, and so on. All that has no relevance.
This is the flaw I'm most focused on. I don't know if it's the most serious problem with your procedure, but it's the most comprehensible to me. First, as noted above, this makes the calculation entirely subjective. "2 + 2" is objective; "2 + the number of coins in your pocket" is subjective and the result will change from person to person. And as you yourself note, any one person's calculations can change over time as their knowledge grows. At the very least, even if we discard the question of subjectivity, that means that this calculation is inherently susceptible to false positives. If you calculate dFSCI today and decide that it indicates design, you could learn tomorrow of a natural algorithm that explains the sequence. Your initial positive was therefore a false positive, exactly the result CSI isn't supposed to return. Am I missing something? Do you not sign on to the usual claim that CSI can't return false positives?
i) I consider the system, the time span, and therefore the probabilistic resources of the system (the total number of states that the system can reach by RV in the time span). So I define a threshold of complexity that makes the emergence by RV in the system and in the time span of a sequence of the target space an extremely unlikely event. For the whole universe, Dembski’s UPB of 500 bits is a fine threshold. For biological proteins on our planet, I have proposed 150 bits (after a gross calculation).
You've obviously put a lot of thought into this, so I'm surprised to see this. You know that "random variation" isn't the proposal on the table from mainstream science. Your procedure is designed to test for design against a strawman. If you don't know or can't calculate the effects of selection, simply ignoring them doesn't make the problem go away. It may be difficult to calculate the effects of many planetary bodies on the orbit of a moon, but ignoring them doesn't make them go away--it only makes the calculation inaccurate.
m) Why? This is the important point. This is not a logical deduction. The procedure is empirical. It can be applied as it has been described. The simple fact is that, if applied to any object whose origin is independently known (IOWs, we can know if it was designed or not, so we use it to test the procedure and see if the inference will be correct) it has 100% specificity and low sensitivity. IOWs, there are no false positives.
This is a grandiose claim. Why not test it so that you can prove it? So far your response to that suggestion--correct me if I'm wrong, please--has been to ask other people to test it for you by providing you with subjects. It's not your obligation to test your own theories; I'm sure you have your own job and hobbies. But grandiose claims that the claimant doesn't bother to test set off my alarm bells. It sounds very much like the equivalent of ID in the sphere I'm more familiar with, law and finance. If someone claimed to have a machine that would predict futures prices in advance, but when asked to prove it responded, "You do it!", they'd be laughed out of the building.
IOWs, there is no object in the universe (of which we can know the origin independently) for which we would infer design by this procedure and be wrong.
Unless you were unaware of (as per (h)) or ignored (as per (i)) an alternative that could produce that object. Which means that your results are completely determined by your state of mind, making them not only subjective but extremely susceptible to bias. Someone who has a deep, heartfelt desire for the dFSCI calculation to show that life was designed, for example, has an enormous incentive to not see alternatives and thus return a false positive.
Now, I will do a quick test. There are 560 posts in this thread. While I know independently that they are designed things, for a lot of reasons, I state here that any post here longer than 600 characters, and with good meaning in English, is designed.
You did not do a test. You did not actually calculate dFSCI for anything, and dFSCI is neither necessary nor helpful in determining that these posts are designed. We know they're designed because we compare them to our personal experience of English communications, not through any calculation of generic designedness. I think it's an important point: dFSCI is irrelevant to determining design not just in this case, but in all cases. There is no case I'm aware of in which dFSCI has ever been shown to work in the absence of the usual ways of detecting design.
And I challenge you to offer any list of characters longer than 600, as many as you like, where you can mix two types of sequences: some are true posts in good English, with a clear meaning, taken from any blog you like. Others will be random lists of characters, generated by a true random character generator software.
Yes, I could do the same. And I don't know how to calculate -log2 of anything, so I won't be using dFSCI to do it. Your procedure doesn't do anything.
This is a challenge. My procedure works. It works not because is is a logical theorem. Not because I have hidden some keithian circularity in it (why should a circular procedure work, at all?). It works because we can empirically verify that it works. . . . [B]iological objects. They are the only known objects in the universe which exhibit dFSCI, tons of it, and of which we don’t know the origin.
It's quite bold to say "My procedure works" when you've never actually used it to successfully determine design that wasn't apparent from some traditional analysis (such as recognizing English). It could possibly work under some circumstances, such as where random variation is truly the only alternative--but that takes life off the table. And it's never actually worked in the real world.
You cannot find the sequence of ATP synthase by a simple algorithm. Maybe we could do it by a very complex algorithmic search, which includes all our knowledge of biochemistry, present and future, and supreme computational resources.
But the question isn't whether "we" could do it. Rather it's whether there is any way in which it could be done short of design. You want the answer to be "no," but you don't know that the answer is "no," because you can't compute the probability of all the possible paths nature could have taken. But ignoring the unknowns doesn't make them go away. It may be that we will never know enough to calculate the odds, in which case dFSCI will never work properly. Sometimes the things we most sincerely want to be true are not.
Or, if you just want to falsify my empirical procedure, offer a false positive. I am here.
I don't think that your procedure will ever generate a positive unless you start from the conclusion that design happened, or have some independent means of determining that design happened. Essentially, it only confirms that vastly unlikely events are vastly unlikely, and by ignoring known natural alternatives concludes that life is so unlikely it must have been designed. So I'm not sure I can give you a false positive, although I'll put some thought into it. In the meanwhile, why don't you do the same? I'd be much more impressed with ID if its advocates took their own ideas more seriously.
Learned Hand
November 9, 2014 at 06:42 AM PDT
gpuccio @ 580
The only reason why I have not yet completed my "announced" post about the procedures is that the subject is so complex and fascinating that I am still studying and reflecting.
As far as I can see, that's one of the most difficult articles anyone could ever attempt to write in any serious blog where participants make references to biology issues these days. Most probably a definitive game changer for many future discussions here and elsewhere. I won't be surprised if your first OP generates a long series of related OPs with no foreseeable ending. Actually, that could also be the draft for a very interesting publication, in the form of a book or your own separate blog. 100% valid reason to take your time. Very commendable and highly appreciated. Thank you. :) Eccellente!!! Mille grazie!!!
Dionisio
November 9, 2014 at 05:55 AM PDT
KF, That fails to address keiths's point that no form of CSI calculation is necessary to decide that recognizable English is recognizable English. It's simply putting a mathy dressing on what is not a mathematical calculation: recognizing something that we've seen before, and for which the specific mechanism (as opposed to "some kind of design") is not only known but familiar. I have yet to see anyone even propose a task for which CSI is actually necessary or useful in identifying design. It is not surprising, therefore, that IDists have failed to ever use CSI productively in the real world despite enormous incentives to do so.
Learned Hand
November 9, 2014 at 05:43 AM PDT
"The only reason why I have not yet completed my 'announced' post about the procedures is that the subject is so complex and fascinating that I am still studying and reflecting." -gpuccio @580
As far as I can see, that's one of the most difficult posts anyone could attempt to write in any science-related blog these days. 100% valid reason to take your time. Very commendable and highly appreciated. Thank you. :)
Dionisio
November 9, 2014 at 05:23 AM PDT
"So, much of the above objectionism towards the design view is a case of straining at gnats while swallowing camels: selective hyperskepticism about what one is disinclined to accept multiplied by hypercredulity on any rays of faint hope for what is desired given the a priori evolutionary materialism and fellow travellers dominant in science and science education." -kf
Dionisio
November 9, 2014 at 05:04 AM PDT
F/N: From my lost draft, I add: take Cy-C and halve the info metric value from Yockey, 125 bits call it. Say, 100 proteins of similar avg value per AA as the halved Cy-C; we have 12,500 functionally specific bits to get DNA for, and to get support machinery for, such as ribosomes [which use a lot of RNA]. Such is well past 500 or 1000 bit thresholds. The only empirically warranted, needle-in-haystack, blind-search-plausible source for such is design. And, by starting from a simple approach then adjusting per factors, we can see how we get there, though there is a lot of underlying work by Yockey etc. there. Durston et al 2007 did fairly similar work, which is unfortunately not easy to follow for those likely to be reading a blog. Those who need it know where to find it. The point is, even if, after going through various factors, we set about one y/n choice per AA as the info content of a typical protein on average, once we set that in the context of hundreds of proteins, we are back to the same basic conclusion -- if we are willing to allow the force of inductive patterns of reasoning that undergird science. And believe us, there have been objectors about who would burn down not only induction but logic. KF
kairosfocus
November 9, 2014 at 04:49 AM PDT
KS:
Procedure 2:
1. Look at a comment longer than 600 characters.
2. If you recognize it as meaningful English, conclude that it must be designed.
3. Conclude that the comment was designed.
This fails to recognise that a count of characters of meaningful English is a valid metric of functionally specific information. Taking ASCII text, 7 bits per character. The length 600 is quite over-generous [143 characters is 1000 bits], but constitutes a complexity-degree threshold beyond which it is unreasonable to infer that a sparse blind search on the gamut of the observable cosmos could plausibly generate the string. But as FSCO/I is routinely and empirically reliably known to be produced by intelligently directed configuration, it is reasonable to infer inductively, as best current explanation, that the sample in view was designed. Even that is corrective of the way you presented the design inference, despite many corrections across years. KF
kairosfocus
November 9, 2014 at 04:29 AM PDT
DJ: I think I need to comment re this zero concessions to IDiots remark at 570:
I only visited UD in order to point out to Kairosfocus his shoddy math.
I suggest to you with all due respect that you are being inappropriately hyperskeptical, especially starting from the OOL on up, which the Smithsonian accepts as the root of the TOL. In Darwin's pond or the like, there is a chaining chemistry of D/RNA and AAs that allows any of the 4 or 20 elements to follow any other, even after looking at the issues of chirality, cross-reactions and energetically uphill chemistry etc. and setting them to one side for the sake of argument. I must insist we start from OOL, as this is what allows us to understand the problem most clearly. We need to get to a gated, encapsulated, metabolising cell with an integral von Neumann self-replicator. That is what is empirically relevant and observed. When you can show empirically actual other architectures of biological life and how they bridge to what we see, then that would become relevant. In that context, we note that there are hundreds of proteins at minimum needed (including enzymes), and that the causal chain runs:
D/RNA --> Ribosome + tRNAs etc --> AA chain, adjustments and folding, agglomeration --> biofunction
There is no pattern where functional configs can cause changes to codes ahead of time so they can come into existence. Nor, given the sparseness of fold-function protein string clusters in AA space, is there a credible warrant for a simple stepping-stones incremental pattern across hundreds of relevant proteins. Nor does functionally specific, complex organisation leading to interactive coupling and function come about for free. Of course, one may go back-ways and look from functioning proteins and impose a priori evolutionary materialism constraints and think that assessing info content on the basic facts of freedom of chaining is dismissible. But I would think that it is not unreasonable to look at that basic point first. Where, for instance, Shannon himself used the counting of possibilities in a chain as an information metric in that famous 1948 paper. That is, it is not inherently unreasonable or shoddy to look at a state in a set of possibilities, and ask how many structured y/n questions in a context can be used to specify the state from other possibilities, and to reckon this an information metric. Where one may go on to assess relative symbol frequencies and make adjustments on H = -SUM pi log pi, etc. And by using a suitable dummy variable one may warrant that the information metric is functional and specific based on observations. Functionality of configured complex entities is commonly observed in all sorts of settings. Here is Bradley, of the original team with Thaxton, who about 20 years ago presented the following to the ASA (I clip from App A of my always-linked, where it has been for years):
Cytochrome c (protein) -- chain of 110 amino acids of 20 types.

If each amino acid has pi = .05 [--> per front-end chaining chemistry, not after-the-fact chains that are seen as functioning and variations on the same fundamental protein], then the average information “i” per amino acid is given by log2 (20) = 4.32. The total Shannon information is given by I = N * i = 110 * 4.32 = 475, and the total number of unique sequences “W0” that are possible is W0 = 2^I = 2^475 = 10^143.

Amino acids in cytochrome c are not equiprobable (pi =/= 0.05) as assumed above. If one takes the actual probabilities of occurrence of the amino acids in cytochrome c [--> thus an after-the-fact functional context in which the real-world dynamic-stochastic process has been run through whatever real degree of common descent has happened], one may calculate the average information per residue (or link in our 110-link polymer chain) to be 4.139, using i = - SUM pi log2 pi [--> the familiar result]. Total Shannon information is given by I = N * i = 4.139 x 110 = 455. The total number of unique sequences “W0” that are possible for the set of amino acids in cytochrome c is given by W0 = 2^455 = 1.85 x 10^137 . . . .

Some amino acid residues (sites along the chain) allow several different amino acids to be used interchangeably in cytochrome c without loss of function, reducing i from 4.19 to 2.82 and I (i x 110) from 475 to 310 (Yockey) [--> again, after the fact of variations across the world of life to today, the real-world Monte Carlo runs I spoke of]:

M = 2^310 = 2.1 x 10^93 = W1
Wo / W1 = 1.85 x 10^137 / 2.1 x 10^93 = 8.8 x 10^44

Recalculating for a 39 amino acid racemic prebiotic soup [as Glycine is achiral] he then deduces (appar., following Yockey):

W1 is calculated to be 4.26 x 10^62
Wo/W1 = 1.85 x 10^137 / 4.26 x 10^62 = 4.35 x 10^74
ICSI = log2 (4.35 x 10^74) = 248 bits

He then compares results from two experimental studies: two recent experimental studies on other proteins have found the same incredibly low probabilities for accidental formation of a functional protein that Yockey found: 1 in 10^75 (Strait and Dewey, 1996) and 1 in 10^65 (Bowie, Reidhaar-Olson, Lim and Sauer, 1990).
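A short script reproducing the arithmetic in the clip above; it checks the quoted numbers only, not the biological claims, and small discrepancies reflect the clip's roundings:

```python
import math

# Equiprobable case: 20 amino-acid types, 110-residue chain
i_eq = math.log2(20)       # ~4.32 bits per residue
I_eq = 110 * i_eq          # ~475 bits; W0 = 2^475 ~ 1e143
print(f"i = {i_eq:.2f} bits/residue, I = {I_eq:.0f} bits, W0 = {2**I_eq:.2e}")

# Actual residue frequencies: ~4.139 bits/residue (quoted value)
I_freq = 4.139 * 110       # ~455 bits
# Yockey's interchangeability adjustment: ~2.82 bits/residue
I_func = 2.82 * 110        # ~310 bits; W1 = 2^310 ~ 2e93
print(f"W0/W1 = {2**I_freq / 2**I_func:.2e}")  # same ballpark as the quoted 8.8 x 10^44
```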
Now, we are actually dealing with hundreds of proteins from various families to make a living cell. In aggregate, the adjustments just seen in a simple case do not make any material difference; the overall FSCO/I in a living cell, or just in the whole protein synthesis system, is well beyond any reasonable reach of a blind watchmaker search process on the gamut of our observed cosmos. The only empirically warranted cause of FSCO/I is design.

And FSCO/I is not after-the-fact target painting [just derange organisation a bit and see function vanish], it is as common as text in languages, or computer programs or gears in a train or wiring-diagram-based function at all sorts of scales from cell metabolism to petroleum refineries. It is readily observable and recognisable, and it is readily seen that reliably it is caused by design when we can directly see cause. So, per vera causa we are well warranted to infer it as a reliable sign of design.

So, much of the above objectionism towards the design view is a case of straining at gnats while swallowing camels: selective hyperskepticism about what one is disinclined to accept multiplied by hypercredulity on any rays of faint hope for what is desired given the a priori evolutionary materialism and fellow travellers dominant in science and science education.

Now, let me turn to that shoddy math by an IDiot, so-called. First, information can be quantified by reasonable metrics, as a commonplace of information theory and practice, including counting chains of y/n q's in a structured context, which is of course a bit value. (If you doubt me then kindly explain how AutoCAD works in terms of describing a configuration in a bit based file structure.) Multiply by a dummy variable that certifies warrant on functional specificity dependent on config relative to available alternatives. For practical purposes we can look at the possibilities and count them element by element, or we may if we wish modify based on how symbols appear in proportion after the fact. Makes little practical difference to the overall result.

Then, use a threshold that makes sparse search constrained by atomic resources, here the sol system, maximally implausible:

Chi_500 = I*S - 500, functionally specific bits beyond a sol system threshold of blind search

I is an info metric, S the warranting dummy variable, 500 bits the threshold, and if Chi_500 goes positive it is implausible that on the gamut of the sol system, something with FSCO/I came about by blind mechanisms. Instead, the best explanation is intelligently directed configuration, aka design.

Remember, we are aggregating hundreds of proteins. Let's take the Cy-C after-the-fact value and round down to 100 bits. Multiply by say 100: 10,000 bits. Well past 500 bits or even the 1,000 for the observable cosmos.

Also, let's look at codes, which appear in strings -*-*-*- . . ., which are a linear node-arc pattern. These are therefore a subset of FSCO/I. The DNA codes for the proteins run at 3 x 4-state letters per AA codon. Six bits nominal; if we want to adjust by the halved Cy-C result, we are at about one bit per AA, 1/6 bit per base. In aggregate, we are looking at, again, well past the limit.

And while it is now a favourite pastime to try to pick holes in Dembski's 2005 metric model, I suggest again that if one simply reduced the logs, it is an info beyond a threshold metric.
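A minimal sketch of the Chi_500 expression just given; the bit count and the S dummy variable are inputs the analyst must supply, and nothing in the code validates them:

```python
def chi_500(info_bits: float, s: int, threshold: float = 500.0) -> float:
    """Chi_500 = I*S - 500: I is an information metric in functionally
    specific bits, S a 0/1 dummy variable asserting observed functional
    specificity. Positive values are read, in this model, as beyond the
    Sol-system blind-search plausibility bound."""
    return info_bits * s - threshold

# The aggregate example above: ~100 proteins at ~100 bits each = 10,000 bits
print(chi_500(10_000, s=1))  # 9500.0: far past the threshold
print(chi_500(10_000, s=0))  # -500.0: no functional specificity asserted
```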
And, the probability that has become the focal point for objections of all sorts is an information value based on whatever happened in the real world with reasonable enough likelihood to be material. I suggest that info values from direct inspection of chains and possible states, or from working through a version of SUM pi log pi on observing the range of functional states in the world of life, will make but little difference in the end to the result. Especially given the island-of-function pattern imposed by multiple-part interactive relevant function.

If you cannot tell the difference between a pile of bricks and a functional castle -- as has happened here at UD in recent weeks -- then there is a problem here. I think on fair comment, some reconsideration is required, sir.

I must go, having had to rebuild a comment after a keystroke wipeout. KF
kairosfocus
November 9, 2014 at 03:58 AM PDT
gpuccio, We can use your very own test procedure to show that dFSCI is useless.

Procedure 1:
1. Look at a comment longer than 600 characters.
2. If you recognize it as meaningful English, conclude that it must be designed.
3. Perform a pointless and irrelevant dFSCI calculation.
4. Conclude that the comment was designed.

Procedure 2:
1. Look at a comment longer than 600 characters.
2. If you recognize it as meaningful English, conclude that it must be designed.
3. Conclude that the comment was designed.

The two procedures give exactly the same results, yet the second one doesn't even include the dFSCI step. All the work was done by the other steps. The dFSCI step was a waste of time, mere window dressing. Even your own test procedure shows that dFSCI is useless, gpuccio.
keith s
November 9, 2014 at 12:11 AM PDT
keiths, to gpuccio:
Your “test” is nothing more than this:
1. Look at a comment longer than 600 characters.
2. If you recognize it as meaningful English, conclude that it must be designed.
3. Perform a pointless and irrelevant dFSCI calculation.
4. Conclude that the comment was designed.
Why not omit step 3, since it is useless? dFSCI adds absolutely nothing that wasn’t there already. You had already determined that the comment was designed, so your entire argument is circular:
1. If a 600-character comment looks designed, attribute dFSCI to it.
2. If it has dFSCI, conclude that it was designed.
It’s amazing to me that you won’t let yourself see this, gpuccio.
gpuccio:
Because it is not useless. You could easily enough attain, in a random character generator, the phrase: “I am”. Which has perfect sense in English.
Read my step #1 again:
1. Look at a comment longer than 600 characters.
"I am" is not longer than 600 characters.keith s
November 8, 2014 at 11:58 PM PDT
keith s: "There is a cost, however. If you can’t defend your own idea, no one else has a reason to take it seriously." I will happily take that risk.gpuccio
November 8, 2014 at 11:55 PM PDT
