Uncommon Descent Serving The Intelligent Design Community

The First Gene: An information theory look at the origin of life

The First Gene: The Birth of Programming, Messaging and Formal Control

Here, edited by David Abel, is The First Gene: The Birth of Programming, Messaging and Formal Control:

“The First Gene: The Birth of Programming, Messaging and Formal Control” is a peer-reviewed anthology of papers that focuses, for the first time, entirely on the following difficult scientific questions:

* How did physics and chemistry write the first genetic instructions?
* How could a prebiotic (pre-life, inanimate) environment consisting of nothing but chance and necessity have programmed logic gates, decision nodes, configurable-switch settings, and prescriptive information using a symbolic system of codons (three nucleotides per unit/block of code)? The codon table is formal, not physical. It has also been shown to be conceptually ideal.
* How did primordial nature know how to write in redundancy codes that maximally protect information?
* How did mere physics encode and decode linear digital instructions that are not determined by physical interactions?

All known life is networked and cybernetic. “Cybernetics” is the study of various means of steering, organizing and controlling objects and events toward producing utility. The constraints of initial conditions and the physical laws themselves are blind and indifferent to functional success. Only controls, not constraints, steer events toward the goal of usefulness (e.g., becoming alive or staying alive). Life-origin science cannot advance until first answering these questions:

1. How does nonphysical programming arise out of physicality to then establish control over that physicality?
2. How did inanimate nature give rise to a formally-directed, linear, digital, symbol-based and cybernetic-rich life?
3. What are the necessary and sufficient conditions for turning physics and chemistry into formal controls, regulation, organization, engineering, and computational feats?

“The First Gene” directly addresses these questions.

As we write, it is #2 in biophysics, and the trolls haven’t even got there yet.

Here’s Casey Luskin’s review:

Materialists Beware: The First Gene Defends a Strictly Scientific, Non-Materialist Conception of Biological Origins:

The First Gene investigates a number of different types of information that we find in nature, including prescriptive information, semantic information, and Shannon information. Prescriptive information is what directs our choices, and it is a form of semantic information — which is a type of functional information. In contrast, Shannon information, according to Abel, shouldn’t even be called “information” because it’s really a measure of a reduction in certainty, and by itself cannot do anything to “prescribe or generate formal function.” (p. 11) Making arguments similar to those embodied in Dembski’s law of conservation of information, Abel argues that “Shannon uncertainty cannot progress to becoming [Functional Information] without smuggling in positive information from an external source.” (p. 12) The highest form of information, however, is prescriptive information:
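Abel's distinction between Shannon uncertainty and functional information is easy to make concrete: Shannon entropy can be computed for any symbol string, functional or not. A minimal Python sketch (the sequences are illustrative only, not drawn from the book):

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy in bits per symbol: a measure of uncertainty,
    saying nothing about whether the sequence does anything useful."""
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A uniform mix of four symbols has exactly 2 bits/symbol of entropy,
# whether or not the string means anything.
print(shannon_entropy("ACGT" * 5))  # → 2.0
```

A random string and a meaningful one with the same symbol frequencies score identically, which is exactly the point the review attributes to Abel: the measure quantifies a reduction in uncertainty, not function.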

Comments
I suppose I'm most curious why the Designer gave disease causing organisms the ability to evolve around and even subvert the immune system. If anything, AIDS is a marvel of design. I suppose the Designer likes His games played on a level field.

Petrushka
November 25, 2011, 02:50 PM PDT
Hi, UB, You are right, the modest potentials of natural selection are however the consequence and result of the existing replication process, that implies a lot of already existing information: not too much in the case of computer viruses, a huge amount for biological replicators. And yet, even so, that limited power of NS cannot generate any significant new functional information, least of all true dFSCI. If my proposed simulation were accomplished, I am really sure that no results could be observed: the computer replicators would remain simple computer replicators, and would develop no new functions, certainly no complex ones. I am sure of that. Our darwinist interlocutors, IMO, are as sure as I am of the same thing. That's why they try in all possible ways to dismiss my model for testing the powers of NS, and seek refuge in their ad hoc, self-serving, useless GAs, whose results, although extremely modest, are nothing else than wonderful examples of intelligent design.

gpuccio
November 25, 2011, 02:09 PM PDT
Petrushka: We have a wonderful model of intelligent selection in the immune system. Here the designer has implemented a very efficient algorithm to model high affinity antibodies from low affinity ones after the primary immune response, using somatic hypermutation limited to the complementarity regions (targeted RV), and then intelligent selection based on measurement of the affinity of the modified antibodies for the epitope presented in the antigen presenting cells and expansion of the more efficient clones. In that way, highly specific molecules are created by a process of targeted RV and intelligent selection, exactly a bottom up protein engineering. That amazing process is incorporated in the highly intelligent, obviously designed structure of the immune system, and no previous knowledge of the antigen is necessary. That's how an intelligent algorithm can model a molecule on a previously unknown molecule.

gpuccio
November 25, 2011, 02:02 PM PDT
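The affinity-maturation loop described in the comment above (hypermutation targeted to the complementarity-determining regions, then selection on measured affinity and expansion of the best clones) can be sketched as a toy algorithm. Everything below is a deliberately simplified illustration, not real immunology: the antigen string, the CDR boundaries, and all parameters are invented.

```python
import random

def affinity(antibody, antigen):
    """Toy affinity: count of positions matching the antigen epitope."""
    return sum(a == b for a, b in zip(antibody, antigen))

def hypermutate(antibody, cdr, rate=0.3):
    """Mutate only the complementarity-determining region (targeted variation);
    framework positions outside the CDR are never touched."""
    chars = list(antibody)
    for i in range(*cdr):
        if random.random() < rate:
            chars[i] = random.choice("ACDEFGHIKLMNPQRSTVWY")
    return "".join(chars)

def affinity_maturation(antigen, pop_size=50, generations=30, cdr=(2, 8)):
    population = ["A" * len(antigen)] * pop_size   # naive low-affinity clones
    for _ in range(generations):
        population = [hypermutate(ab, cdr) for ab in population]
        # select the highest-affinity clones and clonally expand them
        population.sort(key=lambda ab: affinity(ab, antigen), reverse=True)
        population = population[: pop_size // 5] * 5
    return population[0]

random.seed(0)
best = affinity_maturation("AWKYRHMDLA")
```

Note that the affinity measurement and expansion step is an explicit selection function — which is precisely why the comment classifies this as intelligent rather than natural selection.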
In the software industry your design theory would be known as vaporware. Perhaps you can illuminate your theory of intelligent selection. How, for example, would the Designer optimize the host/parasite relationship in Leishmaniasis? Or malaria?

Petrushka
November 25, 2011, 11:18 AM PDT
Hi GP, As is normal in this conversation, you are correct. When Petrushka says "The thing about natural selection is that it reduces all these dimensions to a single variable, and that is differential reproductive success", he/she admits by proxy that they are willing to believe that the onset of the (observed) formalities required for life processes were themselves a result of life processes. This deformity in logic is not tied to physical evidence, but to ideology instead. (ie: natural selection and "differential reproductive success" are the end result of a system of formalities instantiated (and observed) in the storage and translation of genetic information, which must exist prior to the existence of either natural selection or differential reproductive success). In other words, something comes from nothing. This is the materialist's "poof" anointed into respectability by Charles Darwin, and so well armoured and protected by today's academy - directly in the face of physical evidence to the contrary.

Upright BiPed
November 25, 2011, 11:05 AM PDT
Petrushka: I will be clear. I don't mean impossible in principle. I mean impossible at present. Clear? Regarding your other points, repeated for the nth time, I have answered many times. Design can use both top down strategies and bottom up strategies. Including intelligent selection. I certainly cannot say, at present, what role each of these strategies had in the design of biological information. That point is certainly open to empirical investigation. You say: "The thing about natural selection is that it reduces all these dimensions to a single variable, and that is differential reproductive success." I could not agree more. That's why the theory is wrong and essentially stupid.

gpuccio
November 25, 2011, 09:55 AM PDT
it is obviously impossible, therefore, to include a modeling of the protein space in a GA. Maybe some time we will be able to do that, but certainly not now.

I think you need to get clear on whether modelling is impossible or merely very hard. It has always been possible in principle to compute hidden lines in 3D graphical renderings, just very hard. It has always been rather simple to compute individual points. What is not simple is computing the utility of coding and regulatory sequences. Computing protein folding is very hard, but that doesn't even get you to first base in knowing about utility. Toss in regulatory networks and ecological variables and things get exponentially more complex. The thing about natural selection is that it reduces all these dimensions to a single variable, and that is differential reproductive success. I'd like to see a theory of design that doesn't incorporate fecundity and selection. Something like the theory of ray tracing that enables 3D rendering (even if you don't have the computational power to do it in real time). Let's see a thought experiment in which you establish rules for steering sequences toward utility. Without selection.

Petrushka
November 25, 2011, 06:19 AM PDT
Petrushka: Your usual nonsense. I have clearly stated that it is possible to engineer proteins top down, even if at present it is a very hard task to accomplish with our computational resources. This is a fact. It is obviously impossible, therefore, to include a modeling of the protein space in a GA. Maybe some time we will be able to do that, but certainly not now. By the way, modeling the protein space for a GA seems a huge task compared to just engineering new functional proteins. My point was very simple: existing GAs are never modeling biology. It's impossible, at the present state of our knowledge and resources, even to begin to think of modeling biology to verify the powers of RV + NS. On the contrary, a pure experiment to test the powers of RV + NS in a non biological, computer based system, can be done. As I have proposed.

gpuccio
November 25, 2011, 05:46 AM PDT
It's doubly amusing that when I discussed the "impossibility" of design without employing evolution, I was lectured about how many computational problems were once considered impossible but are now routine.

Petrushka
November 25, 2011, 04:31 AM PDT
It is impossible to model biology. That is out of the question. Modeling biology would mean to model protein space and biological reproduction, metabolism, and so on. It’s simply impossible.

Odd. When I said that it is impossible to predict the utility of a coding or regulatory sequence without testing it in a living thing, I was jumped on, even by you. So what is your theory of design? How does the designer produce large chunks of dFSCI or whatever? Can you give an example of the process used by the designer (assuming the designer isn't omniscient)?

Petrushka
November 25, 2011, 04:23 AM PDT
GCUGreyArea: Thank you for your serious and thoughtful comments. A few brief answers.

"First, I am a bit confused as to why you would consider a computer operating system a 'natural' environment. It is a designed system that imposes some very deliberate constraints on software, for example limitations on resource access and consumption, and a good operating system would be designed to prevent or severely limit some aspects of a program, for example replication."

It is designed, but not to demonstrate anything about NS. That's enough.

"We are talking about how to model biology and an OS has very little in common with the natural environments we actually observe - it is not a good analog for the thing we are trying to model"

It is impossible to model biology. That is out of the question. Modeling biology would mean modeling protein space and biological reproduction, metabolism, and so on. It's simply impossible. What we can model is the general principle that RV + NS can create complex functionalities in replicators. We take software replicators in a software environment, and in no way do we try to model biology. We test the assumption that replicators + RV + NS is a powerful engine to generate functional information (an assumption, IMO, completely wrong).

"there is a danger that it does exactly what you caution against, it is biased by design against the undirected development of new functions."

Why? Nothing can be biased by design if those who designed it were not aware of the purpose (in this case the development of new functions). That's why the environment must be blind to the experiment. Exactly my point.

"The if implies that you have not done the experiment and so the way the statement is phrased implies that you have assumed your conclusions. The statement would be better if put as an hypothesis."

Hey, I was just expressing my expectation. Just a human touch :)

"What do you mean by generic formal properties?"

The general formal laws that govern information, function, variation, randomness and replication, be it biological or software.

"if we are talking about biology then we would, at the very least, want sources of energy that can be utilized and which are distributed non uniformly in a space that also contains harmful stimuli and through which the agents can move if they have that ability."

No, we are not talking about biology. As I said, it's absolutely impossible to model biology on a computer, at least at this level. We cannot model protein space (we know too little of it, and the computational resources would anyway have to be incredibly huge). And the same is true for the rest of biology.

"If that environment is uniform then we have designed in a low global maxima - why develop any complexity when everything is the same? The environment needs to reflect what we see in nature or it is not a model of nature."

I have never said that the environment must be uniform. First of all, a computer environment is not uniform at all. And I quote myself: "If desired, the environment can change in time, to offer diversified contexts to NS. But it is essential that any variation in the environment must be 'blind' to the replicators and the experiment, for the same reason as above (avoid the introduction of added information)."

"If we get rid of the fitness function and instead include with each agent a replication function where they accumulate accuracy points (an analog of energy) and they can then replicate if they have enough energy - would this qualify?"

There must not be a replication function. Replicators must replicate, really replicate. If they replicate better, because they have created better functional information to replicate, they will. No need for "points" or anything measured.

"I'm not sure 'blind experiment' means quite what you think it means in this context."

In general, all forms of blinding serve to prevent a cognitive bias in the experimenters from altering the results. Here, that's exactly what blinding the environment to the replicators is meant to achieve.

gpuccio
November 24, 2011, 02:24 PM PDT
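The requirement in the comment above — that replicators "must really replicate", with no scored fitness function — can be sketched as a toy model. This is a hedged illustration, not the actual proposed experiment: here a replicator's copy rate is an intrinsic property of its own genome, the environment only imposes a blind carrying capacity, and no fitness score is ever computed.

```python
import random

# Each replicator is a bit string; its copy behavior is its own "program":
# it copies itself once per tick for each '1' bit in a fixed metabolic
# region, and copies carry random bit-flips. No fitness function exists --
# differential reproduction simply happens (or doesn't).

def tick(population, capacity=200, mut_rate=0.01):
    offspring = []
    for genome in population:
        copies = genome[:8].count("1")         # intrinsic copy rate
        for _ in range(copies):
            child = "".join(
                ("1" if b == "0" else "0") if random.random() < mut_rate else b
                for b in genome
            )
            offspring.append(child)
    # finite environment: random culling down to capacity, blind to content
    random.shuffle(offspring)
    return offspring[:capacity]

random.seed(1)
pop = ["10000000" + "0" * 24] * 20   # slow replicators: copy rate 1
for _ in range(40):
    pop = tick(pop)

mean_rate = sum(g[:8].count("1") for g in pop) / len(pop)
```

The culling step is deliberately random rather than ranked, so any rise in the mean copy rate comes only from variants that actually out-replicated the others, not from an external scoring choice.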
I should add quickly that, in the financial example I gave, the simulation was not designed for evolution; it was a 'black box' (in the form of a DLL) supplied by an investment bank who wanted to see if the GA would do any better than their existing trading software (it did, very effectively).

GCUGreyArea
November 24, 2011, 05:06 AM PDT
GP: Thanks for your reply, I'm afraid I don't have much time to respond in as much detail as I would like. First, I am a bit confused as to why you would consider a computer operating system a 'natural' environment. It is a designed system that imposes some very deliberate constraints on software, for example limitations on resource access and consumption, and a good operating system would be designed to prevent or severely limit some aspects of a program, for example replication. We are talking about how to model biology and an OS has very little in common with the natural environments we actually observe - it is not a good analog for the thing we are trying to model and there is a danger that it does exactly what you caution against, it is biased by design against the undirected development of new functions.
If a correct modeling of that is made, it will be clear that it can’t.
The if implies that you have not done the experiment and so the way the statement is phrased implies that you have assumed your conclusions. The statement would be better if put as an hypothesis.
You can choose an existing operating system, or somebody can design a specific one, but in that case the programing of the system must be blind, and must satisfy only generic formal properties that have nothing to do with the specific replicators that will be used, and with the hypothesis that is to be demonstrated.
What do you mean by generic formal properties? - if we are talking about biology then we would, at the very least, want sources of energy that can be utilized and which are distributed non uniformly in a space that also contains harmful stimuli and through which the agents can move if they have that ability.
The only observed things that have to be captured are those intrinsic to the basic hypothesis: that replicators, in an environment where they can replicate, complex enough and various enough, and subjected to RV in appropriate rates, can and will develop new dFSCI to exploit better the resources in the environment. Nothing else is needed.
If that environment is uniform then we have designed in a low global maxima - why develop any complexity when everything is the same? The environment needs to reflect what we see in nature or it is not a model of nature.
it is clear that each system has its rules and properties, but if they are unrelated to the replicators and to the experiment, they are no more a bias, but only random constraints and conditions. It is the duty of the replicators/RV part to exploit those conditions, if that is possible.
Ok, I know of a project that used a GA to evolve predictors for financial markets. The simulation was designed to simulate financial markets and 'fitness' was defined as the accuracy of the prediction (they evolved neural networks to make the predictions). If we get rid of the fitness function and instead include with each agent a replication function where they accumulate accuracy points (an analog of energy) and they can then replicate if they have enough energy - would this qualify?
Blinded experiments are the rule in empirical sciences, exactly because cognitive bias is a terrible enemy.
I'm not sure 'blind experiment' means quite what you think it means in this context.

GCUGreyArea
November 24, 2011, 05:03 AM PDT
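For contrast with the exchange above, here is the conventional GA pattern GCUGreyArea describes, in which an explicitly coded fitness function scores every candidate and drives selection. The bit-counting fitness stand-in is hypothetical, chosen only so the sketch is self-contained; in the financial project the real fitness was prediction accuracy.

```python
import random

def evolve(fitness, genome_len=16, pop_size=40, generations=60, mut_rate=0.05):
    """Conventional GA: a designed fitness function scores each candidate,
    and the scores drive selection -- the pattern under debate in the thread."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # keep the top half unchanged
        pop = parents + [
            [b ^ (random.random() < mut_rate) for b in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=fitness)

# Hypothetical stand-in for "prediction accuracy": count of 1-bits.
random.seed(2)
best = evolve(fitness=sum)
```

The design choice at issue is visible in one line: `pop.sort(key=fitness, ...)` is a measurement-and-ranking step written by the experimenter, which is exactly what gpuccio's proposed test excludes and GCUGreyArea's energy-accumulation variant tries to replace.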
"So-called "evolutionary algorithms" (a Self-contradictory nonsense term), if they produce any formal function, are always artificially controlled. Optimization of genetic algorithms is always choice-contingent, and therefore formal rather than physical."

Mung
November 23, 2011, 08:41 AM PDT
"Adami rightly argues that information must always be about something. "Aboutness" is a common focus of attention in trying to elucidate what makes information intuitive. But aboutness is always abstract, conceptual and formal." "Jablonka rightly argues that Shannon information is insufficient to explain biology. She points to the required interaction between sender and receiver. Jablonka emphasizes both the function of bioinformation and its "aboutness," arguing that semantic information only exists in association with living or designed systems." When folks like Elizabeth Liddle and Allen MacNeill argue that information need not be about anything at all they just display how little they have actually thought upon or read about the subject. So where does 'aboutness' come from?

Mung
November 23, 2011, 07:47 AM PDT
But you are aware of minds associated with brains? Pray tell, what is this 'mind' you speak of and how is it associated with 'brain'? Please tell us what information is. Else your assertion that you're not aware of any instance of information that is not physically embodied is just rhetoric. And you can't think of any non-material objects that contribute to science? Is that because you can't think of any non-material objects? How about mathematics?

Mung
November 23, 2011, 07:34 AM PDT
No Kindle edition — aaarrrrgggghhhh!!!!!
"Because this book is also being made available in e-book format (e.g., for Kindles), many awkward internet links have been deliberately left in the text and reference lists." - p. xi

Mung
November 23, 2011, 07:27 AM PDT
I'm not aware of any minds that are not associated with brains, nor of any instance of information that is not physically embodied. Nor am I aware of any lasting contribution to science that involves non-material objects or causes.

Petrushka
November 22, 2011, 06:03 PM PDT
Of course not, because you think science deals with only the material. That is why you continue to show confusion over, for example, the concept of information, which is non-material. I'll take your statement and raise you one: Virtually everything of lasting contribution in applied science and engineering has arisen from mind and information, not from purely material and mechanical causes.

Eric Anderson
November 22, 2011, 03:35 PM PDT
You would discount the contributions of the "mind" to science? While certainly manifest in physical media (brain matter), it isn't settled that the "mind" is itself material.

ciphertext
November 22, 2011, 08:40 AM PDT
GCUGreyArea: Some clarifications.

"So if I wanted to test the hypothesis that geographical isolation can lead to speciation (in the sense of one population diverging into two that subsequently lose the ability to interbreed) then I must not specify an environment in which geographical isolation is possible?"

What I am discussing here is exclusively the model according to which RV + NS in a reproducing population can create new FSCI. If a correct modeling of that is made, it will be clear that it can't. Anyway, if you want to request generic formal properties for the environment (such as sufficient complexity that allows for sub-environments, or anything else that is in no way related to the replicators and the experiment), that can be passed as a request to the people who model the environment in an explicit way, so that those requirements can always be analyzed to verify that they are not adding information related to specific functions. Anyway, the programming of the environment must remain blind to the replicators, except for generic formal properties.

"I'm a little confused when you use the phrase 'Natural informational environment' - what does this mean?"

I mean: we can implement a system based on the concepts of RV and NS in a computer environment. But the computer environment must be a natural computer environment, not software programmed explicitly to demonstrate our point. Otherwise, we will program the environment to demonstrate our point, and the whole work will be biased. The point is: if RV and NS can generate dFSCI, they can do that in electronic replicators in a computer environment, such as computer viruses in an operating system, just as they would in a biological environment, provided we give enough reproductive resources and enough variation, because the logical and informational problems are similar.

IOWs, if mere variation and NS based on intrinsic reproductive ability can generate dFSCI, a computer virus is a good model for the principle: it replicates, it can be subjected to RV, even at different rates, and it can develop functional code that can give advantages in the computer environment where it reproduces. So, an operating system like Windows would be a neutral computer environment. But it can be any other system, provided it is not built to test NS. Avida, and all other similar programs, are systems built with a specific purpose, and they include a lot of added information in terms of choices, functions, and so on. They are not "blind environments".

"It sounds like you are saying that the environment must be picked at random, or maybe that it should be found …"

No, it just means that it is not prepared ad hoc for the experiment. You can choose an existing operating system, or somebody can design a specific one, but in that case the programming of the system must be blind, and must satisfy only generic formal properties that have nothing to do with the specific replicators that will be used, or with the hypothesis that is to be demonstrated.

"This is a little tricky when it comes to constructing a model because models are only valid when they capture observed things."

The only observed things that have to be captured are those intrinsic to the basic hypothesis: that replicators, in an environment where they can replicate, complex enough and various enough, and subjected to RV at appropriate rates, can and will develop new dFSCI to better exploit the resources in the environment. Nothing else is needed.

"Fitness functions in GA's used for biological hypothesis testing are created as models of the result of an organism existing in an environment."

Exactly. That's why they are a different thing from NS. They are design. That's why a true model of NS must not include any fitness function. Fitness is a naturally occurring property. Fitness functions are a designed entity. A virus that replicates in a system is fit; there is no need for a fitness function to tell us that. If another virus is more fit, it will replicate more. Nobody has to measure anything for that to happen. That's why NS is called natural.

"Your phrase 'the environment has no direct information about the functional result' is also a little odd - how do you formally classify direct as opposed to indirect?"

If the environment was not created for the replicators and the experiment, it cannot have "direct" information. Direct information is intentional, and it comes from conscious agents who do have information and understanding of what they are doing. I was probably not clear in my terms here, so I will try to make my concepts clearer. Let's say that no intentional information must be present in the environment, either in direct ways (such as in the Weasel model, where the result is already incorporated explicitly in the system) or in indirect ways (for instance, by ad hoc choices of parameters, or of the type of system in connection to the type of replicators). The information in the system can certainly exist in non-intentional forms: it is clear that each system has its rules and properties, but if they are unrelated to the replicators and to the experiment, they are no longer a bias, but only random constraints and conditions. It is the duty of the replicators/RV part to exploit those conditions, if that is possible.

"One way to read it is that modelling reproductive success is not a valid way of modelling reproductive success?"

That's the point. We are not modeling reproductive success. We are modeling RV and NS to see if they can generate reproductive success and complex new functions as a result of it. I will be more clear. We already know that a selectable trait, if made to expand, will behave in a certain way. We are not interested in that. We know that selection works. The problem is: does natural selection work?

IOWs, is reproductive advantage a strong enough mechanism to generate complex functions through RV and NS? To determine that, we must implement RV in some form of blind environment in some form of replicators, observe any reproductive advantage that emerges as an output, and analyze whether that reproductive advantage implied the creation of new dFSCI. Is that a simulation? In a sense it is, because we are implementing in a computer system a mechanism that should be working in a biological system. But we are not trying in any way to simulate the biological environment (that would really be impossible). We are making an experiment to verify whether a proposed logical algorithm can work. If it does not work in a computer system, even if we try different conditions (always respecting the basic properties I have outlined), there is no reason to think it will work in a biological system.

"This appears to be an argument that including any form of selection is invalid as a way of modelling selection … I must have misunderstood."

No, it was me who was not clear enough. I rewrite the sentence in a more complete way: "b) The replicators can be programmed any way that is considered appropriate, and the rules of random variation in them can be set as appropriate. Then, exclusively as a consequence of random variation and of spontaneous reproductive advantage in the system, the replicators must develop new complex functional information." The point is simple, after all. No form of selection must be included in the system. If RV generates a new function that gives reproductive advantage, differential reproduction will just happen. No artificial form of selection must be included, because we are investigating natural selection.

"Surely then if the mere act of creating the model, and the fact that models have to be intelligently created, means any model of selection is actually intelligent selection, then it is impossible to model natural selection?"

No. We can certainly create a model which is blinded in some of its parts, to give the desired result. Blinded experiments are the rule in empirical sciences, exactly because cognitive bias is a terrible enemy. The model can be intelligently created to guarantee that any form of selection in it is natural, and not intelligent, selection.

gpuccio
November 22, 2011, 08:31 AM PDT
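The Weasel model cited above as an environment containing "direct" intentional information can be reproduced in a few lines: the target sentence is written into the program itself, and fitness is measured as distance to that known target. A minimal sketch (parameters are typical textbook choices, not Dawkins's originals):

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    """Fitness is literally the number of characters matching the known target."""
    return sum(a == b for a, b in zip(s, TARGET))

def weasel(pop_size=100, mut_rate=0.05, seed=3):
    random.seed(seed)
    parent = "".join(random.choice(CHARS) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        children = [
            "".join(random.choice(CHARS) if random.random() < mut_rate else c
                    for c in parent)
            for _ in range(pop_size)
        ]
        # keep the parent in the running so progress toward the target is monotone
        parent = max(children + [parent], key=score)
        generations += 1
    return generations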
I'm not aware of any lasting contribution to science that requires a non-material cause. Certainly individual scientists have held a wide variety of religious beliefs, including beliefs in a global flood and in special creation.

Petrushka
November 22, 2011, 08:25 AM PDT
I think your argument is really against the philosophies of science being employed by the three (though you conflate two of the three) various theories. A purely "material" philosophy of science versus a philosophy of science which allows for non-material explanation (as was popular up to late 19th or early 20th century I believe).

ciphertext
November 22, 2011, 08:14 AM PDT
Natural selection is a result - an after-the-fact assessment. If you have differential reproduction due to heritable random variation then you have natural selection. A mouse that has more offspring than other mice still gave birth to mice, and those mice will give rise to other MICE. OTOH artificial selection is a real selection process and can do what NS cannot. However NS can undo what AS has done. That is about all NS can do.

Joseph
November 22, 2011, 04:30 AM PDT
So if I wanted to test the hypothesis that geographical isolation can lead to speciation (in the sense of one population diverging into two that subsequently lose the ability to interbreed) then I must not specify an environment in which geographical isolation is possible? I'm a little confused when you use the phrase 'Natural informational environment' - what does this mean? It sounds like you are saying that the environment must be picked at random, or maybe that it should be found ... This is a little tricky when it comes to constructing a model because models are only valid when they capture observed things. Fitness functions in GA's used for biological hypothesis testing are created as models of the result of an organism existing in an environment. Your phrase "the environment has no direct information about the functional result" is also a little odd - how do you formally classify direct as opposed to indirect? One way to read it is that modelling reproductive success is not a valid way of modelling reproductive success?

b) The replicators can be programmed any way that is considered appropriate, and the rules of random variation in them can be set as appropriate. Then, exclusively as a consequence of random variation, the replicators must develop new complex functional information.

This appears to be an argument that including any form of selection is invalid as a way of modelling selection ... I must have misunderstood.

we must model NS, not intelligent selection.

Surely then if the mere act of creating the model, and the fact that models have to be intelligently created, means any model of selection is actually intelligent selection, then it is impossible to model natural selection?

GCUGreyArea
November 22, 2011, 04:07 AM PDT
Petrushka, What if it's not poof but something we have a genuine inability to grasp (uncomputability)?! E.g. mathematics is humble enough to recognise there are an infinite number of mathematical truths that cannot be proven. I think it is time biologists did the same, science will only benefit from it. Besides, I would not put my money on alchemy investigating something that did not exist.

Eugene S
November 22, 2011, 04:03 AM PDT
And by that "logic" archaeology is a scribe of the gaps argument and forensic science is a criminal of the gaps argument.

Joseph
November 22, 2011, 03:57 AM PDT
DrREC:
This falsifies the bulk of the claims of ID about information processes, specifically that language (abstract and arbitrary) and programming could never emerge from randomized inputs plus selection.
You are still confused as ID does not state that. Not only that, all of what you posted were DESIGNED.

Joseph
November 22, 2011, 03:56 AM PDT
Again, there isn't any selecting going on. What part of that don't you understand? Differential reproduction due to heritable random variation occurs, but calling it "selection" is misleading, as Will Provine said: The Origin of Theoretical Population Genetics (University of Chicago Press, 1971), reissued in 2001 by William Provine:
Natural selection does not act on anything, nor does it select (for or against), force, maximize, create, modify, shape, operate, drive, favor, maintain, push, or adjust. Natural selection does nothing….Having natural selection select is nifty because it excuses the necessity of talking about the actual causation of natural selection. Such talk was excusable for Charles Darwin, but inexcusable for evolutionists now. Creationists have discovered our empty “natural selection” language, and the “actions” of natural selection make huge, vulnerable targets. (pp. 199-200)
Joseph
November 22, 2011, 03:55 AM PDT
ID always has been, and always will be, a god of the gaps argument. What else is new?

Petrushka
November 22, 2011, 03:52 AM PDT