
EA’s “oldie but goodie” short primer on Intelligent Design, Sept. 2003


Sometimes, we run across a sleeper that just begs to be headlined here at UD.

EA’s short primer on ID, drawn up in Sept 2003, is such a sleeper. Let’s observe:

__________

>> Brief Primer on Intelligent Design

 

Having read a fair amount of material on intelligent design and having been involved in various discussions on the topic, I decided to prepare this brief primer that I trust will be useful in clarifying the central issues and in helping those less familiar with intelligent design understand its basic propositions.

This is not intended to be a comprehensive analysis of intelligent design, nor is it intended to respond to criticisms.  Rather, this represents my modest attempt to avoid the side roads and the irrelevancies, and outline the fundamental central tenet of intelligent design, which is that some things exhibit characteristics of design that can be objectively and reliably detected.  It is my view that criticisms of intelligent design must focus on this central tenet, or risk missing the mark.  It is also with this central tenet that intelligent design stands or falls as a scientific enterprise.

Setting the Stage

As with so many issues, it is important to first define our terms.  In public debates, the term “intelligent design” is often incorrectly associated with anyone who believes that the Earth and all life upon the Earth were actively created by an intelligent Creator, and when used pejoratively, the term generates much more heat than light and adds no substantive insight to the discussion.

In a broader sense, the term might be applied to individuals who hold to a basic teleological view of the universe or the diversity of life on earth.  In this sense, many individuals believe in some form of intelligent design, including those who hold to an initial act of life’s creation, followed by naturalistic evolutionary mechanisms.

In yet a more concrete sense, the term is often used with respect to those involved in the modern intelligent design movement, including vocal proponents such as Philip Johnson and Jonathan Wells.  Although Johnson and Wells are certainly involved in the broader intelligent design movement, they largely use intelligent design as a tool for promoting change in current educational and philosophical frameworks.  This use of intelligent design as a tool for change has received by far the most press coverage and is at the heart of the often-heated debates over school curricula.  However, as intelligent design’s primary spokesperson, William Dembski, has pointed out, intelligent design’s use as a tool for change is secondary to intelligent design’s undertaking as an independent scientific enterprise.

Finally, therefore, intelligent design refers to the science of detecting design.  In this latter sense, intelligent design is not limited to debates over evolutionary theory or discussions of design in nature, but covers the study of signs of intelligence wherever they may occur: whether in archeology, forensic science, the search for extraterrestrial intelligence, or otherwise.  (Though not strictly limited to historical events, intelligent design argues that design can be detected in some things even in the absence of any reliable historical record or independent knowledge of a designing intelligence.  It is in this context that we wish to discuss intelligent design.)  Defined more tightly, intelligent design can thus be viewed as the science of studying the criteria, parameters and procedures for reliably detecting the activity of an intelligent agent.

Associated with this latter more limited definition are scientists involved in such a scientific enterprise.  These individuals include, probably most notably, Dembski and Michael Behe, and a number of other scientists who have begun to take notice of intelligent design as a legitimate scientific inquiry.

It is in this latter sense that I wish to examine the concept of intelligent design.

Basic Propositions

What then is the basic foundation and what are the basic propositions of intelligent design?

Intelligent design begins with a very basic proposition: some things are designed.  This is slightly more complicated than it sounds, but not much, if we keep a couple of points in mind.

First, one might object that many things appear to be partly designed and partly not.  This, however, is simply a matter of drilling down deeply enough to identify the discrete “thing” being examined.  For example, if we look at a stone wall we can see that it is made up of stones of various sizes and shapes.  Even if we assume that the stones themselves were not the product of intelligent design, we would conclude that they have been used by an intelligent agent in designing and building the wall.  Thus, in situations where something looks partly designed and partly not designed, we need simply drill down further and determine which aspect, portion, or piece of the “thing” we are evaluating.  In this example, are we examining the individual stones, or are we examining their overall arrangement, pattern, and resulting function?

Even if we are unable to break down a particular object or system into its component parts, and we end up with a “thing” that is partially designed and partially not designed, the initial proposition of intelligent design would remain essentially the same: some parts, or portions, or components of some things are designed.

Second, when we talk about the fact that some things are designed, we are not referring only to physical objects, but are referring to anything that is the subject of design, whether it be a physical object, a system, or a message or other representation able to convey information.  Thus if I took the same naturally-occurring stones, and instead of building a wall, I laid them out on the beach to spell a message, we would also have a clear indication of the actions of an intelligent agent, once again not in the stones themselves, but in the representation created by the stones and the information conveyed by that representation.

Given this basic proposition that some things are designed, intelligent design then asks the next logical question: is it possible to detect design?  As others have pointed out, if the unlikely answer is “no,” then we can only say that everything may or may not be designed, and we have no way of determining whether any particular item is or is not designed.  However, if the likely answer is “yes,” then this leads to a final and more challenging question that lies at the heart of intelligent design theory and intelligent design as a scientific enterprise: how does one reliably detect design?

Characteristics of Design and Limitations of Intelligent Design

What kinds of characteristics do things that are designed exhibit?  When we contemplate things that are designed – a car, a computer, a carefully coordinated bouquet of flowers – a number of characteristics might spring to mind, such as regularity, order, and beauty.  However, if we think for a moment, we can come up with many examples of naturally occurring phenomena that might fit these descriptions: the rotation of the Earth that brings each new day and the well-timed phases of the moon exhibit regularity; naturally-occurring crystals are examples of nearly flawless order; the rainbow or the sunset, resulting from the sun’s rays playing in the atmosphere, are paradigms of beauty.

To be sure, characteristics such as regularity and order might be strongly indicative of an intelligent agent in those instances where natural phenomena would not normally account for them, such as a handful of evenly spaced flowers growing beside the highway, or a pile of carefully stacked rocks along the hiking trail.  Nevertheless, because there are many instances of naturally occurring phenomena that exhibit regularity, order, and beauty, the mere existence of these characteristics is not necessarily indicative of design.  In other words, these are not necessary defining characteristics of design.

On the flip side, there are many things that are designed that do not exhibit any particular regularity or order, at least not in a mathematical sense, such as a painting or a sculpture.  There are also many objects of design that do not evoke any particular sense of beauty.  And this brings up an important limitation of intelligent design: we are not able to identify everything that is designed.

A related limitation arises in that we cannot say with certainty that a particular thing is not designed.  This is particularly true, given that many things are purposely designed to resemble naturally occurring phenomena.  For example, in my yard I have many rocks that have been purposely designed and strategically placed to resemble the random placement of rocks in a stream.  In addition, when I recently remodeled a room in my home, I used a faux painting technique – carefully designed and coordinated over the course of several hours – to resemble a naturally occurring pattern.

As a result, intelligent design is limited in two important aspects: it can neither identify all things that are designed, nor can it tell us with certainty that a particular thing is not designed.

But that leaves one remaining possibility: is it possible to identify with certainty some things that are designed?  Dembski and Behe would argue that the answer is “yes.”

Possibility versus Probability

In order to identify with certainty that something is designed, we must be able to define characteristics that, while not necessarily present in all things designed, are never present in things not designed.  It is in defining these characteristics and setting the parameters for identifying and studying these characteristics, that intelligent design seeks to make its scientific contribution.

We have already reviewed some potential characteristics of things that might be designed, and have noted, for example, that regularity and order do not necessarily define design.  I have posited, however, that regularity and order might provide an inference of design, in those instances where natural phenomena would not normally account for them, such as the handful of evenly spaced flowers or the pile of stacked rocks.  Let’s examine these two examples in a bit more detail.

Is it possible that this pattern of flowers or the stack of rocks occurred naturally?  Yes, it is possible.  It is also possible, at least as a pure logical matter, that the sun will cease to shine tomorrow morning at 9:00 a.m.  To give a stronger example, is it possible that the laws of physics will fail tonight at midnight?  Sure, as a pure logical matter.  But is it likely?  Absolutely not.  In fact, based on past observations and experience, we deem such an event so unlikely as to be a practical impossibility.

Note that in the examples of the sun ceasing to shine or the laws of physics failing we are not talking simply about unusual or rare events; rather we are talking about something so improbable that we, our precious scientific theories, and the very community in which we live are more likely to pass into oblivion before the event in question occurs.  Thus for all practical purposes, within the frame of reference of the universe as we understand it and the world in which we live and operate, it can be deemed an impossibility.  Dembski has already skillfully addressed this issue of logical possibility, so I will not review the matter further, except to summarize that in science we are not so interested in pure logical possibility as in realistic probability.  It is within this realm of probability that all science operates, and it is in this sense that we must view the probabilities relevant to intelligent design.

However, while we need not be concerned with wildly speculative logical possibilities, we might nevertheless conclude that the pattern of flowers or the stack of rocks is possible, not only as a matter of logical possibility, but also as a matter of reasonable probability, within the realm of our experience.  After all, there are lots of flowers on the Earth and surely a handful of them must eventually turn up evenly spaced as though carefully planted.  In addition, we have all seen precariously balanced rocks, formed as a result of erosion acting on rocks of disparate hardness, so perhaps our pile of rocks also occurred naturally.  We might admit that our flowers and our stack of rocks are rare and unusual natural phenomena, but we would argue that they are not outside of the realm of probability or our past experience.

Thus, the inference of design needs to get much stronger before we are satisfied that our pattern of flowers or our stack of rocks has been designed.

The Design Inference Continuum

Now let’s suppose that we tweak the examples a bit.  Let’s suppose that instead of a handful of flowers, we have several dozen flowers, each evenly spaced one foot apart along the highway.  Can we safely conclude that this is the product of design?  What about a dozen identical stacks of rocks along the hiking trail?  One might still mount an argument that these phenomena do not yet reliably indicate design because they could have been created naturally.  Nevertheless, in making such an argument we would be relying less on realistic probabilities and what we know about the world around us, and slipping closer to the argument by logical possibility.  This is precisely the mistake for which Dembski takes Allen Orr to task.

Now allow me to tweak yet a bit more.  Let’s suppose that the dozens of flowers are now hundreds, each in a carefully and evenly spaced pattern along the highway.  At this point, the probability of natural occurrence becomes so low as to completely escape our previous experience; it becomes so low as to suggest practical impossibility.  Is it the sheer number of flowers that puts us over the hump?  No, it is not the number of flowers itself that provides evidence for design, but the number of spacings between the flowers, the complexity of the overall pattern, and the fact that these spacings and the resulting complexity are not required by any natural law, but are only one of any number of possible variations.  In other words, it is the discretionary placement of all of these flowers, selected from among the nearly infinite number of placements possible under natural laws, which allows us to infer design.  It is this placement of all the flowers, which gives the characteristics of specificity and complexity, and which Dembski terms “specified complexity.”  And it is in this realm of specified complexity that the probability of non-design nears impossibility, and our confidence in inferring design nears certainty.
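To put a rough number on that intuition, consider a minimal back-of-the-envelope sketch (the figures here are purely illustrative assumptions, not measurements): suppose a stretch of roadside offers 1,000 discrete spots where a flower could take root, and ask how many equally likely arrangements exist as the number of flowers grows.

from math import comb, log10

# Purely illustrative assumption: 1,000 discrete roadside positions,
# with every arrangement of n flowers treated as equally likely.
positions = 1000

for n_flowers in (5, 50, 500):
    arrangements = comb(positions, n_flowers)   # ways the flowers could occupy the positions
    # Probability of landing on the one specified (evenly spaced) arrangement.
    print(f"{n_flowers:>3} flowers: about 10^{log10(arrangements):.0f} arrangements; "
          f"P(the specified pattern) is about 10^-{log10(arrangements):.0f}")

Five flowers already admit trillions of possible arrangements; five hundred admit a number with roughly three hundred digits, which is the sense in which it is the discretionary placement, not the raw count of flowers, that carries the inference.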

Yet, our examples can become even more compelling.  As a last modification, let’s suppose that the flowers are now arranged by the side of the road in the outline of the state of Texas, complete with Bluebonnets in the shape of the Lone Star.  Let’s suppose that our stacks of rocks are arranged so that there is one stack exactly each mile along the trail, or one stack at each fork in the trail.  Now we have not only specified complex patterns, but patterns high in secondary information content.  In the one case we have a shape that identifies Texas, a particular type of flower that signifies the state, and a star that is not just a pattern, but a pattern with strong symbolic meaning.  Along our hiking trail we have markers that carry out a function by providing specific information regarding changes in the trail or indicating the distance traveled.

Intelligent design, as a scientific enterprise, is geared toward this end of the probability continuum where the probability of non-design nears zero and the probability of design nears one.  In some ways, focusing only on the area of most certainty is a rather modest and limiting approach.  Yet design theorists willingly give up the possibility of identifying design in many cases where it in fact exists, in exchange for the accuracy and the certainty that a more stringent set of criteria bestow.  In this way, the design inference is lifted from the level of broad intuition to a focused scientific instrument with definitive testable criteria.

Conclusion

As a scientific undertaking, intelligent design is not in the business of identifying all things designed, nor is it in the business of confirming with certainty that a particular thing is not designed.  Indeed, intelligent design, and it is fair to say current human knowledge, is incapable of performing these tasks.  What intelligent design does seek to do, however, is identify some things that are designed.

We have seen that the argument to design is essentially an inference based on probabilities.  As a result, there is a continuum ranging from the likelihood of non-design to the likelihood of design.  At a certain point the probability of non-design nears zero and the probability of design nears one.  At that point we can say, the design theorist argues, with as much certainty as any other scientific fact or proposition, that the thing in question was designed.  It is in this area of specified complexity (of which high secondary information content and Behe’s “irreducible complexity” are examples) that the theory of intelligent design operates.

Criticisms of intelligent design based on social, religious, philosophical, or cultural grounds, including complaints about the identity, motives, or capabilities of the putative designer, miss the mark.  Design theorists argue that specified complexity can be objectively and reliably defined and detected so that the probability of non-design nears impossibility and the probability of design nears certainty.  This is intelligent design’s central tenet.  It is on this point, and only on this point, that intelligent design as a scientific undertaking can be appropriately challenged and criticized.  And it is on this point that Dembski, Behe, and others are confident that intelligent design will make its greatest contribution.

Eric Anderson

September 9, 2003>>

___________

It seems to me the matter was clear enough a decade ago, and the objections were sufficiently answered a decade ago.

Why are we still meeting the same problems, ten years later?

I want to suggest that this has more to do with unnecessary heat, unjustifiable polarisation and inexcusable clouding of issues, than with the basic substance on the merits. Can we learn from the mistakes made over these past ten years, and do better over the next ten years?

I hope so. END

Comments
@Gregory #20 Zdravstvuyte Gregory, Although my Russian is focused almost exclusively on reading math and physics literature (the prices of Russian textbooks were irresistible for my student stipends), I didn't have any trouble following your Russian passages (they were also more colorful than the English sections). Although we both have one foot in the Eastern European and one in the Western culture, our migration paths were in the opposite directions -- you went from Canada to Russia, while I came from (a country formerly known as) Yugoslavia to USA (to my second grad school). Either path is a form of mental reboot into a new OS, quite disruptive and disorienting at first, but very refreshing and stimulating over time. This straddling of the same two realms seems to have resulted in both of us often "fighting" against both sides in the ID vs ND war of ideas. Checking out your blog and some earlier posts in UD, I see strong resonances between your concept of "Human Extension" and several other thought currents, such as "Extended Phenotype" of Dawkins (which generalizes his 'selfish gene' and 'meme' patterns), "Omega Point" by Teilhard de Chardin, mystical egregore, social organisms, as well as the zeitgeist of 'internet as a superbrain' emerging from numerous authors more recently. All of these ideas (which go way back to Aristotle, at least) identify a very interesting organizing principle of the universe, albeit each capturing only a segment of the whole pattern. After pursuing these white rabbits (and few others) each down his own trail, toward what seemed to be a common hidden treasure, each trail would somehow terminate unfinished and in dead end, driving me to the next one. I am beginning to suspect it is how this process is supposed to go and how it will continue, although each one appears at first as the final awaking into the real thing. My current "final" trail, which I call "Planckian networks", combines the best insights of those that went before with few new elements from fundamental physics (pregeometry), math & computer science. Thanks to the stimulating questions and discussion from the folks here at UD, the key elements of the "Planckian networks" model were sketched in a recent thread here. 
The thread was unfortunately archived before I could gather links to the scattered posts into a coherent TOC as my concluding post on the thread, so for convenience of quick intro, here is how that goes: Planckian Networks intro --- Matrix, Leibniz, Teilhard de Chardain --- Model of consciousness, after death, Bell inequalities --- Harmonization, biochemical networks --- Free will, hybrid intelligence, quantum magic --- SFED, QED, quantum topics --- Carleman Linearization, SFED, panpsychism --- Additive intelligence, pregeometry, fine tuning --- Consciousness after death, exceptional path, limits of theories --- Self-programming networks --- Internal modeling, physics & information (rock in mud) --- Science, Russian dolls, mind stuff, internal models, laws vs patterns --- Goal oriented, anticipatory systems from pattern recognizers --- Attractors as memories, internal models, front loading --- Digital physics, complexity science, laws vs patterns --- Free will in fractal internal model, crossword puzzle --- Participatory front loading --- How it works, additive intelligence, composition problem --- Quantum measurement theory vs Glauber --- Limits of computations, irreducibility of laws, constraints, Fermi paradox --- Levels & power of computation, Max Plancks, broken bone --- Creation smarter than creator? --- Ontological levels, Game of Life, chess --- Genetic Algorithms vs Networks vs Dembski --- CSI vs networks, capacity of NNs, stereotyping, knowability --- Meyer, empirical consciousness --- CSI vs networks, limits of Abel, Dembski --- Counterexample for Abel --- Thinking vs computing --- Why simple building blocks Evolution process vs theory conflation --- Map vs Terrain --- Chess, consciousness vs computation --- Concession on microevolution, dice example --- Concessions, technological evolution Natural Science schema --- Algorithmically effective elements vs consciousness --- Science schema re-explained --- Qualia, science --- General algorithms --- Necessary vs sufficient, algorithmic effectiveness --- Algorithm semantics, parasitic elements --- Meyer, why cringe? --- Semantic wall (KF) --- Meyer, citation --- Meyer, sloppiness? --- Meyer, inductive strength --- Meyer's leap --- Wisdom of leap --- Other links to intelligent mind --- Leap details --- Dembski, Mere Creation.. mind/intelligence conflation --- Wisdom of leap vs James Shapiro --- Science won't leap over the edge --- more on edge --- Key holders, missing ID hypothesis --- Missing ID hypothesis _______ Seven or more links (not sure these days) will probably trip the mod filter. KFnightlight
April 13, 2013 at 02:41 PM PDT
nightlight @19,
"Besides absurdity, allowing them to get away with ‘random smudges can create valid words and partial sentences without scribe‘ is sure way to have to back off next to holding onto whole valid paragraphs, then to whole valid pages,… since they may eventually find an example of a partial sentence which has only one letter wrong and which could be repaired into a correct sentence via the experimentally established random smudge in not very many tries."
There's no logical connection between accepting that random errors occur and supposing that they can construct whole volumes incrementally while preserving a definite meaning at each step. "Chance Ratcliff kicked the cat." "Chance Ratcliff kicked the can." If that sentence were a witness statement against me, a copying error might make me look rather innocent instead -- a loss of information resulting in a net fitness gain. This in no way implies that any change to the text can result in an increase of information or a fitness gain, nor that successive errors will do so as well.
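As a toy illustration of that asymmetry (a quick sketch with made-up parameters, nothing drawn from the biology itself), one can simulate single-letter copying errors and see how rarely even a one-step change happens to land on the other meaningful reading, let alone compose a new paragraph:

import random
import string

random.seed(0)

original = "chance ratcliff kicked the cat"
target   = "chance ratcliff kicked the can"   # the other 'meaningful' reading

def copy_with_one_error(text):
    """Copy the text, substituting exactly one randomly chosen character."""
    i = random.randrange(len(text))
    alphabet = string.ascii_lowercase + " "
    wrong = random.choice([c for c in alphabet if c != text[i]])
    return text[:i] + wrong + text[i + 1:]

trials = 100_000
hits = sum(copy_with_one_error(original) == target for _ in range(trials))

# A single random error occasionally yields the other valid sentence
# (roughly once per len(text) * 26 copies), but nothing here suggests that
# chaining such errors could compose a coherent new text from scratch.
print(f"{hits} of {trials} single-error copies produced the other sentence")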
At that point they have shown, with the key help from your initial concession that partial sentences don’t need scribe, that random smudges can produce whole valid sentence which you claimed that only a scribe can produce. Hence you now have to back off from the sentence threshold to holding onto the valid paragraphs threshold, etc, i.e. by allowing a threshold for for no-scribe gap you have needlessly condemned yourself to a constant retreat and a certain complete defeat, eventually, reached one step at a time, by expanding the scribe-less gaps you left for them to work on.
I make no such concession that partial sentences don't need a scribe, even by analogy. Random copying errors must be preceded by both a manuscript and a scribe. Allowing that a manuscript may be generally reliable even if a couple of random copying errors of individual letters occurs, in no way implies that the manuscript nor the scribe can be explained causally by randomness acting incrementally to produce function at each step. Let's be clear. The claim of neo-Darwinism is that whole cellular subsystems (organelles and more) can be constructed incrementally in small steps, where each step is functionally advantageous in some environmental context. This presupposes an information-bearing system with the functional capacity to replicate itself. To say that random errors can occur during replication concedes nothing important. It's just a statement of observation. To suggest that some error or two might produce a net fitness gain even if it results in a loss of function does not suggest that random errors can construct whole systems or the information which specifies them. There is simply no impetus to make that leap. My claim is modest. Replication errors beget substitutions. Again, this is sufficiency, not necessity. Accepting the sufficiency of a random error to produce an AA substitution does not concede that any substitution is necessarily random, nor that the information in which the error occurs is the product of random forces.Chance Ratcliff
April 13, 2013 at 01:53 PM PDT
Box @23, I definitely agree that as a system becomes more complex, integrating a new function or a new "pathway" to function rapidly becomes more difficult, and as such, the probability of incorporating it by random chance drops exponentially as the system becomes more complex and specific.
The days of DNA code as First Cause are over. We now know that DNA code for protein X doesn’t operate on its own and is only a part of what can be viewed upon as a larger specific subsystem of the cell which contains highly specific information about protein X.
Indeed, that would seem to be the case. And I wouldn't presume that random mutations can add up to new systems incrementally; I don't think that's feasible. I just wanted to take issue with the notion that we cannot attribute any perturbation at the single or double AA level to randomness, aka replication errors. That just doesn't seem reasonable.
Chance Ratcliff
April 13, 2013 at 12:05 PM PDT
nightlight @19,
On (1): error-correcting mechanisms are commonly seen in intelligently guided evolutions at social & technological levels — it is a conformity enforcing mechanism, which doesn’t make non-conforming phenomena “random” or non-guided actions. Hence, one cannot use its presence in biology as a proof or evidence of randomness of the deviations from the norm, either.
I don't buy the social analogy, so I don't think that observing steering behaviors of intelligent agents within social constraints means we must conclude that replication errors are not random. WRT technology, or practically anything which relies on a physical process, random occurrences are contingencies which can and do happen, and are dealt with accordingly in whatever context they arise in. None of this undermines the inference to design where it's already warranted. I'll repeat my thesis: Random mutations are a sufficient but not a necessary cause for a limited number of substitutions. To show that this is not the case, one might demonstrate that replication errors are actually targeted, specific changes that don't occur uniformly but are goal-directed. Or one could show that copying errors do not result in AA substitutions. Neither of those seem likely. So my very modest claim, RM->S, holds up pretty well.Chance Ratcliff
April 13, 2013 at 11:22 AM PDT
Gregory@20
Hello nightlight... Here I come out of retirement from UD, just for you.
Oh wonder. Better break out the brass band there, nightlight.
jstanley01
April 13, 2013 at 10:46 AM PDT
nightlight @19,
"Consider analogy — you find some ancient hand copied manuscript with a word ‘sun’ written instead of word ‘son’ (as context or other better copies would require), obviously due to scribe’s momentary loss of focus. But that doesn’t imply ‘sun’ is a “random” ink smudge which didn’t require intelligent scribe, since it is still a valid word (like valid AA), one shape among astronomical number of all conceivable random ink smudges of that approximate size."
(My emphasis) It's not a random ink smudge, but the error is random. The scribe did not intentionally impose it, he just was distracted at some point along the process and chose the wrong word. (I do this typing sometimes.) This doesn't suggest that the copying process is random. Without debating the ontology of randomness, which I don't intend to do, random events happen even in designed contexts. For instance, manufacturing errors occur, which necessitate a quality control cycle. This does not imply that the manufacturing process isn't designed. It only implies that physical processes contain a degree of uncertainty, which presents as randomness. We wouldn't step out onto the slippery slope of conceding that manufacturing processes were the result of random forces, just by admitting that random events can occur during the process. The presence of a quality control factor implies that errors occur and need to be dealt with to assure a higher level of accuracy than is intrinsic to the manufacturing process itself.Chance Ratcliff
April 13, 2013 at 10:24 AM PDT
Gregory, Let's not go back over that big-ID/little-id crap again, please. Here at UD, and I'm sure I speak for everyone, since your departure we have all enjoyed some great discussions without having to waste time with your conspiracy theory. In the end it just got really boring. Look, no one here is particularly interested in it; in fact, we probably only have one more person interested in discussing it than you do on your own blog, and that's you! So why not do us all a favour and go back there. Besides that, hope you are well :)
PeterJ
April 13, 2013 at 10:11 AM PDT
Eric @17, thanks. As usual you frame the concepts as they relate to ID quite well.
Chance Ratcliff
April 13, 2013 at 10:01 AM PDT
Gregory @20: Nice first paragraph. However, questions have been well addressed here at UD. At least those that are rational.
Think about such terms as ‘code,’ ‘intelligence,’ ‘agent,’ etc. These are not common terms in physics, biology or chemistry. The IDM is proposing an entirely new semantic constellation from what most people currently use.
Absolute nonsense. These are common terms in everyday language and are used in the ordinary sense of the words. All you need is a standard dictionary. There is nothing strange or unusual about it.
Eric Anderson
April 13, 2013 at 09:39 AM PDT
Chance Ratcliff #15: Box, with reference to your #13, I don’t doubt that the creation of a new protein with a new function would potentially necessitate a whole suite of coordinated changes in other complementary systems, (…)
Chance Ratcliff, I brought forth my simple argument on several occasions and you are the first one to respond and I thank you for it. What my argument boils down to is that under naturalism beneficial DNA mutations are impossible. The days of DNA code as First Cause are over. We now know that DNA code for protein X doesn’t operate on its own and is only a part of what can be viewed upon as a larger specific subsystem of the cell which contains highly specific information about protein X. My obvious point is that if it is necessary for DNA code X AND all the constituent parts of the protein X’s subsystem to arise synchronously – and indeed it is necessary in order to function – then probability ratings will go through the roof exponentially faster than they already do when we just consider the DNA code in isolation. p.s. CR, excuse me for diverting attention from your central argument.Box
April 13, 2013 at 09:25 AM PDT
wd400 @18, My reasoning is based on the error rate given a single replication event across the entire 4.6 Mbp genome. Population size is relevant if you want to cash it out in terms of replications. And the error rate of 10^-9 is per nucleotide copied. The number 0.0046 comes from the reasoning above: it's the likelihood of a mutation occurring during the replication of a whole genome.
Chance Ratcliff
April 13, 2013 at 09:24 AM PDT
Quick correction: 'code' is (e.g. genetic code), but 'intelligence' and 'agent/agency' are not. The point is that the analogy is stretched too far from 'designed' to 'Designed'.
Gregory
April 13, 2013 at 05:35 AM PDT
Hello nightlight, Most of this letter I wrote earlier; I had thought you had already left the Uncommon Descent blog altogether, because the answers you received here from IDists to your question were not very good. Here I come out of retirement from UD, just for you. As someone who has met most of the leaders of the IDM, eaten with and discussed with them, exchanged e-mails, answered questions from them and also questioned them (many of which were not answered), I am in a somewhat unusual position to respond to you. At one time during my late undergrad and graduate school days, I ate freely of IDist arguments and ‘theories,’ that is, until finally rejecting them as simplistic, idealistic, duplicitous and ultimately naïve. So, please take my advice with that confession provided at the start, since you are of course free to decide as you choose.

The same sort of people are here at Uncommon Descent; they do not live in or participate in mainstream science. This is not a place for scholars, but for ordinary people who love and bow to ‘Intelligent Design Theory’ (i.e. the theory of ‘intelligent design’) as a new scientific Icon. What you have already said about the difference between human ‘design’ and non-human ‘Design’ is good and clear. Unfortunately, they do not understand it and reject it, because they love to complain against the natural sciences and ‘naturalism.’ (I am also against ‘naturalism,’ but I use Eastern European approaches, not American models.) It seems to me that you, too, would like to challenge unbelieving scholars, since you are a believer. Of course I support that position, on the one hand. On the other hand, ‘Intelligent Design Theory’ is so strange and contradictory that if you want to take it up and promote it, you will be excluded from the genuine scientific community. That is simply normal and proper. I write this because I have studied the matter a great deal and have even discussed it with people in Russia. It is not an Orthodox theory, not at all. It is against Catholicism and also against Orthodoxy, precisely on the question of analogical meaning. If you want to collaborate with me, or simply to contact me on this topic, you can follow the link on my name and you will find an address there. I will gladly answer you.

When you acknowledge “the blurring between the map and the territory,” of course I agree. IDists do the same thing with ‘intelligence’ between ‘human-made’ and ‘non-human-made’ things. ‘Univocal predication’ is what they do, trying to force this as ‘orthodox’ for their Protestant revolutionary attitude. The vast majority of IDists are Protestant activists, many of whom are young earth creationists and most of whom have no training in natural or applied sciences. You highlight the problem of ‘infinite regression,’ which IDists have not satisfactorily answered.

“MN would allow that chess playing program is an intelligent agency (agent).” – nightlight

I’m not sure. It would seem to depend on how Americanised you are. The ideology of MN is an American invention, specifically, that of Christian ethicist Paul de Vries (1986). There are American theistic evolutionists/theists who accept the majority of evolutionary biology (if not those features that are clearly anti-theistic) that promote MN while being totally ignorant of ideological naturalism.

You shared: “I am not sure whether it is semantics or on substance.” The ‘theory/hypothesis/ideology/paradigm’ of ‘intelligent design/Intelligent Design’ is highly semantic. Think about such terms as ‘code,’ ‘intelligence,’ ‘agent,’ etc. These are not common terms in physics, biology or chemistry. The IDM is proposing an entirely new semantic constellation from what most people currently use. They want to ‘REBEL’ against mainstream science and to manipulate ‘natural science’ so that it will include studies of ‘intelligence/Intelligence.’ Your comments noting this have been both insightful and helpful. Unfortunately, ideological-IDists won’t allow what you say to be spread amongst their ranks; it is too revealing of their twisting of words. ‘Design’ for non-human-made things is not ‘analogous,’ but claiming univocality with God’s Design/Creation of the world. They spin this as well as they can, but their insistence on the natural scientificity of ‘Intelligent Design’ is their downfall. Let’s not try to deceive ourselves. What they are really trying to convince you of is that “God exists,” the unnamed ‘God’ of the Greeks and Freemasons. That is Big-ID-implicationism in a nutshell. They don’t care which God/Allah/YHWH/Baha’i.

And that is exactly why priests around the world reject such an evangelistic-activist American theory. They hide their thoughts under a blanket, pretending that it is natural Science. Is that normal, or distorted? On the one hand, good for IDists, as that is what their personal Good News approach (Matt 28: 19-20) requires of them. The responsible and ‘orthodox’ position for Abrahamic believers, however, is to reject the Big-ID claims of ‘natural scientific proof/inference of God’s fingerprints.’ What then is ‘faith’ (vera) for if we have Big-ID proving (dokazivayushi) an Intelligent Agent (i.e. God) using natural science? There are the universal IDists who use Romans 1:20 as their weapon against both unbelief and different belief than their Protestant Evangelical American Triumphalism. Almost no one in this movement is Orthodox.

“basing ID on “conscious” intelligence is like building a house on a tar pit, resulting in endless semantic debates advancing nowhere.” – nightlight

Well-said!

“Anything resting on ‘consciousness’ as its foundation is automatically outside of science, leaving it at best in the realm of philosophy.” – nightlight

Perhaps ‘cognitive studies’ too?

That is all for now; I have no more time. I have already spent too much time on this ‘Intelligent Design’ movement. They follow along as if behind Ivan Susanin (his name is Phillip Johnson). Where to? Nowhere. How does one overcome ‘naturalism’ or ‘materialism’? Not with ‘ID theory.’ There are many interesting theories from Eastern scholars that would also be useful for healing the disenchantment in America, together with Western scholars. All the best, Gregory
Gregory
April 13, 2013 at 05:09 AM PDT
Chance Ratcliff #14: ...random mutations are a sufficient cause for one or two AA substitutions. This is warranted and supported by two observations: 1) the presence of error-correcting mechanisms as part of the replication process suggests that errors actually occur; 2) observed error rates with respect to a specific substitution are able to account for one or two substitutions. On (1): error-correcting mechanisms are commonly seen in intelligently guided evolutions at social & technological levels -- it is a conformity enforcing mechanism, which doesn't make non-conforming phenomena "random" or non-guided actions. Hence, one cannot use its presence in biology as a proof or evidence of randomness of the deviations from the norm, either. This is a simple application of the general logical coherence requirement -- any biological phenomenon that has analogues in other realms where it is known to be produced by an intelligent process, cannot be concluded/inferred to be an example of non-intelligently (randomly) produced phenomenon in biology. Otherwise one could apply that same "logic" claimed valid in biology to arrive at the analogous conclusion in the other realm, where it is contradicted by the known explanation (as intelligently produced phenomenon). That's then a refutation of any such "logic" via reductio ad absurdum. The main weakness, though, is with (2). The AA substitution is an extremely narrow kind of transition, a speck in the combinatorial space of all chemically or physically accessible transitions of similar magnitude which don't produce AA's (but any among countless other kinds of molecules). Consider analogy -- you find some ancient hand copied manuscript with a word 'sun' written instead of word 'son' (as context or other better copies would require), obviously due to scribe's momentary loss of focus. But that doesn't imply 'sun' is a "random" ink smudge which didn't require intelligent scribe, since it is still a valid word (like valid AA), one shape among astronomical number of all conceivable random ink smudges of that approximate size. Suppose now Darwinist style claim that no scribe is needed since the whole artifact of the manuscript is produced via random ink smudges that were later selected in competition until a book came out that appears written by a scribe. You can't coherently oppose them by backing away to some higher level designed units, say sentences, and hold onto the 'sure thing' position that all valid sentences are result of intelligent action by a scribe, while conceding that anything below that level of complexity doesn't require scribe and can be produced by random smudges from the ink drips. While it is true that having a whole valid sentence is even more unlikely to be a result of random smudges than having valid words in incorrect sentences, the supposedly 'intelligent process' that such compromise suggests is incoherent and can be rejected as absurd. Namely, by such 'sure thing' compromise theory, the ancient manuscript artifacts are produced by an intelligent agency which either writes correct full sentences, or every now and then leaves out sentence size gaps in the text where random ink smudges somehow drip into and which by chance have shapes of valid words seen in the rest of the text, but still fall short of making a correct whole sentence. 
Besides absurdity, allowing them to get away with 'random smudges can create valid words and partial sentences without scribe' is sure way to have to back off next to holding onto whole valid paragraphs, then to whole valid pages,... since they may eventually find an example of a partial sentence which has only one letter wrong and which could be repaired into a correct sentence via the experimentally established random smudge in not very many tries. At that point they have shown, with the key help from your initial concession that partial sentences don't need scribe, that random smudges can produce whole valid sentence which you claimed that only a scribe can produce. Hence you now have to back off from the sentence threshold to holding onto the valid paragraphs threshold, etc, i.e. by allowing a threshold for for no-scribe gap you have needlessly condemned yourself to a constant retreat and a certain complete defeat, eventually, reached one step at a time, by expanding the scribe-less gaps you left for them to work on. The position I was suggesting is to hold that intelligent agency is active at all levels, at all times, at all points. Hence the intelligent agency leaves no gaps or 'dumb segments' where it is absent from the scene. Allowing for any such 'dumb segments' lets the Darwinians string them into larger 'dumb segments' via some demonstrable random link of low complexity, and thus gradually expand the size of the 'dumb segments' i.e. leaving you defending 'God of gaps' in a constant retreat. Consequently, one would require that those claiming random smudges as the origin of manuscripts have a burden of proof of their claim at any level. For any level of complexity they claim sufficiency of randomness, they need to enumerate all possible random ink smudges of that level or size, then compute odds of such random smudges forming letter shapes or word shapes... and then establish that the number of available tries for random smudging can produce even one valid letter, valid word ... whatever their claim is. In other words, since they are claiming particular kind of process as the origin of manuscripts at all levels, it is their burden to show it is probabilistically valid at all levels, not just from some threshold level, while conceding everything below it as a 'dumb segment' requiring no scribe. Otherwise, the intelligent scribe remains the most plausible consistent explanation across all sizes, since we know intelligent scribes can produce such manuscripts. That is a far stronger position to hold, since if they wish to claim some property of the process (such as randomness), the burden of proof is on them to show that such property is probabilistically plausible, not on me to prove it is implausible. It is also internally much more coherent position, since it doesn't hypothesize an absurd kind of intelligent agency which designs full sentences, but also leaves sentence size gaps for random smudges to somehow form almost correct full sentences. Within this stronger & more coherent uniform position, the intelligent agency is active at all times and at all levels, leaving no 'dumb segments'. In case the opposition can properly demonstrate that some level of complexity is accessible to a random process, that still only makes random process an alternative explanation at that low level of complexity, but not the sole explanation even for that since we never concede any 'dumb segment' -- the agency doesn't leave any gaps. 
Hence, they cannot use any concession (of dumb segments) to piggyback another level of complexity on top of it and push further, making you concede larger and larger 'dumb segments' (once you concede size X as dumb segment based on accessibility to random process, you have no basis to hold onto size X+1 if they can show accessibility of X+1 to random process, under assumption that any X is a 'dumb segment'). Further, unlike the weak 'sure thing' position you propose, it is now Darwinian style explanation which appears absurd by offering an alternative explanation for small disconnected patches of complexity which they can show to be accessible to random process. In contrast, you hold the sole coherent explanation for all levels of complexity.nightlight
April 13, 2013 at 01:32 AM PDT
Chance, Sure - but your calculations appear to be based on what would happen to a single bacterium. There are ~10^9 E. coli in your average mL of culture broth. Quite a few chances to hit on a mutation. (In fact, across the world as a whole, any given two-AA mutation probably happens every week.) I also don't really follow your math. If the mutation rate is 10^-9, then the chance that any nucleotide mutates in any generation is 10^-9, isn't it?
wd400
April 12, 2013 at 10:13 PM PDT
Chance @16: Exactly correct. Because the design inference cannot demonstrate, and is not in the business of demonstrating, that something is not designed. As a result, it is possible, in a purely logical sort of way, that nearly everything is designed. But that is not a helpful notion in the current debate and we don't gain any mileage going down that path, because (i) it is not the end of the spectrum where the design inference is focused, and (ii) there are lots of (potential) false negatives. The conclusion of design needs to remain squarely focused on, and limited to, those situations in which design is clearly present, using a reasonable probability bound.Eric Anderson
April 12, 2013 at 09:58 PM PDT
Footnote to #14 and #15, I'm perfectly happy to entertain hypotheses which suggest other causes for presumed random events. But I really don't think ID should be publicly challenging substitution events that occur within Behe's Edge of Evolution. It's a bit like arguing that a pair of dice that show a reasonable distribution might still be loaded in some way. Such may be the case, and evidence may present itself eventually; but until such a situation occurs, it's not productive to accuse someone of cheating. However as an "internal" debate, I think it's just fine to question just how random these mutations may or may not be. It's just not where ID should be pushing its arguments, imo.Chance Ratcliff
April 12, 2013 at 07:37 PM PDT
Box, with reference to your #13, I don't doubt that the creation of a new protein with a new function would potentially necessitate a whole suite of coordinated changes in other complementary systems, but I remain skeptical that a single AA substitution (or two) in a single protein that confers a contextual selective advantage cannot ever occur, or is even unlikely to. I'm not trying to be dogmatic about this point, because we may find many cases where presumed randomness surrenders to definite purpose in the face of new evidence. However, I just don't think it's reasonable, based on observations, to rule out that random mutations are sufficient to account for small changes. See my #14 to nightlight for more. I'd like to bring up Michael Behe's First Rule of Adaptive Evolution: break or blunt any functional coded element whose loss would yield a net fitness gain.
For most of history such a question could not be investigated, but with the tools that have become available to molecular biology in the past few decades, a good start can be made on addressing it. The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. (I make a number of distinctions defining gain-, loss- and modification of function mutations, so for the complete story please read the paper.) Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.
So even when we're talking about beneficial mutations, often times these mutations confer an advantage by breaking something. The whole paper is here: Experimental Evolution, Loss-of-Function Mutations, and “The First Rule of Adaptive Evolution”. In cases where a loss-of-function mutation provides some net fitness gain given particular environmental stressors, we can consider the event may be directed, targeted, etc. -- that's certainly possible, but we can also consider that organisms have a property of robustness that allow them to continue functioning even when some system fails. Robustness is an engineering principle, and compatible with ID. Again I'm not suggesting that substitution mutations are necessarily random, just that the random factor is warranted as a sufficient cause. This doesn't rule out other causes for such an event.Chance Ratcliff
April 12, 2013 at 07:26 PM PDT
nightlight @12, I believe I showed that based on current knowledge of cause and effect, random mutations are a sufficient cause for one or two AA substitutions. This is warranted and supported by two observations: 1) the presence of error-correcting mechanisms as part of the replication process suggests that errors actually occur; 2) observed error rates with respect to a specific substitution are able to account for one or two substitutions. If you're trying to convince me that random mutations are not a necessary cause of mutations, then I already agree. Note that this is implied in my argument. If random mutation then substitution. This is a sufficient causal relationship, not a necessary one. This allows for other sufficient causes to potentially explain the consequent. Random mutations are a sufficient but not a necessary cause for a limited number of substitutions.Chance Ratcliff
April 12, 2013 at 06:59 PM PDT
Chance Ratcliff #10: If you’re implying that it may not be possible for any mutation to occur without corresponding epigenetic modifications, that may well be true, but I have my doubts that small changes at that level would necessitate corresponding epigenetic modifications to prevent total system failure, at least in all cases.
I’m willing to argue that naturalism cannot allow for any serious DNA mutation. Let’s assume that the change in DNA leads to a new code for a new protein. Does this not require an adaptation of spliceosomes? And how will the new protein find its way to its proper new place? We know of protein import and sorting pathways at the membrane of mitochondria; don’t they (or other pathways) need new information – and so adaptation – regarding this new protein? The amount of the new protein needs to be regulated. Don’t we need new information, and so new (or adapted) proteins, to accomplish this regulation? I would argue that the fact that organisms are capable of accommodating new proteins points towards a reality which is unexplainable by naturalism.
‘The essence of cellular life is regulation: The cell controls how much and what kinds of chemicals it makes; when it loses control, it dies’, Michael Behe, Darwin’s black box, p.191.
Box
April 12, 2013 at 06:46 PM PDT
Chance Ratcliff #8 My #5 is meant to address the latter part of nightlight's #4, and the ongoing issue as to whether random mutations can be considered a sufficient cause of point mutations that result in one or two amino acid substitutions. I got that from your #5, but I still don't see where how was the alleged "randomness" (fair pick in some probability space) established here. Just by labeling a deviation from conformity as an "error" doesn't mean it is actually an error, let alone random error. For example, human social networks, as analogue of cellular biochemical networks, have strong mechanisms for enforcing conformity on its members (social error correction, as it were), yet the acts of 'heresy' (or political incorrectness, or lawbreaking, etc) are not random, or errors, despite those "repair" mechanisms which push members to conform. The choices by heretics or lawbreakers may be right or wrong, but they're still result of an intelligent agent pursuing his happiness as he sees it. They are errors in eyes of the customs or laws, but that doesn't imply errors, let alone random acts, in the eyes of the heretic -- they remain deliberate actions of an intelligent agency (even if we judge it as a dumb thing to do). To really establish "randomness" as a scientific fact refuting all possible alternatives, you would need to know how to calculate probability space (as illustrated in that dice example we discussed earlier) for the DNA molecule under those conditions, but that is a quantum mechanical problem which is beyond the reach of the present science for that size molecules, even without any environmental effects. Hence, all claims of "randomness" of spontaneous mutations are unscientific (opinions, wishful thinking). Merely measuring the mutation rates at different sites doesn't tell you by itself anything about intelligent or unintelligent nature of the guidance. It certainly is not a substitute for proper quantum calculation that computes probabilities for all possible changes. Among possible alternatives hypotheses blocking the leap to "randomness" claim, one can look at such mutations as deliberate attempts by the guiding intelligence to produce some variety as result of some more subtle anticipatory considerations i.e. one can view the processes of exact copying and inexact copying as two kinds of processes with vastly different odds of prevailing but still, each deliberately and intelligently guided. That's like you deciding whether to get up for work at regular time as the alarm rings (i.e. copy your daily pattern exactly), to or just stay in bed and do some thinking (make an error in copy of the daily pattern) -- the first one may prevail almost always, yet both outcomes, exact replica of daily pattern or the inexact replica, are results of deliberation by an intelligent agent. General rationale for this type of objection is that if a biological phenomenon has an analogous counterpart in social or technological realms, where it is intelligently guided, then one cannot use such biological phenomenon as a "proof" or even as mere "indication" of randomness of the phenomenon.nightlight
April 12, 2013 at 06:34 PM PDT
Correction: that's post #8 which addresses error correction.
Chance Ratcliff
April 12, 2013 at 05:53 PM PDT
Box @6, I think my #9 addresses your first question. As to your second,
"And do we also make the simplifying assumption that after this AA substitution there will be a functioning epigenome, metabolism etc. to accommodate the new code?"
Yes, in short. I wouldn't expect a single mutation to foul epigenetic regulation, but I couldn't say for sure either. So implicit in the assumptions is that such minor alterations are often tolerable to the organism, and if a mutation happens to be detrimental to function, the organism fails. If you're implying that it may not be possible for any mutation to occur without corresponding epigenetic modifications, that may well be true, but I doubt that small changes at that level would necessitate corresponding epigenetic modifications to prevent total system failure, at least not in all cases.
Chance Ratcliff
April 12, 2013 at 05:51 PM PDT
wd400, my numbers relate to the probability of a copying error in a single replication.

The probability of an error in a single replication: 0.0046
The probability of the error occurring at a specific site: 2.17E-7
The probability of the error resulting in a specific AA substitution: 0.0454

Based on such a probability, at 1/P(E) replications I find that there is a 0.6321 chance of the event occurring, which is more likely than not. 1 - (1-P(E))^(1/P(E)) converges on 0.6321 as P(E) gets small, unless I'm mistaken. Note that I'm not arguing against the sufficiency of random mutations to account for these types of substitutions -- quite the contrary, I'm suggesting that in principle they can.
Chance Ratcliff
April 12, 2013 at 05:33 PM PDT
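As an aside, here is a minimal sketch of the convergence claim in the comment above. The threshold 1/P(E) and the figure 0.6321 are the commenter's; the limit 1 - 1/e is the standard result they point to, and the code itself is only illustrative:

```python
# Illustrative check: the probability of at least one occurrence in
# 1/P(E) independent trials, 1 - (1 - P(E))^(1/P(E)), approaches
# 1 - 1/e ~= 0.6321 as P(E) becomes small.
import math

for p in (0.1, 0.01, 0.0046, 1e-6):
    print(p, 1 - (1 - p) ** (1 / p))

print("limit:", 1 - 1 / math.e)  # ~0.6321
```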
My #5 is meant to address the latter part of nightlight's #4, and the ongoing issue as to whether random mutations can be considered a sufficient cause of point mutations that result in one or two amino acid substitutions. While I certainly don't suggest that any given mutation must have been random, it doesn't seem reasonable to deny that random copying errors are sufficient for a couple of AA substitutions. One of nightlight's criticisms of ID is its acceptance of random mutations as a sufficient cause for certain microevolutionary events; but sufficient causes do not imply necessary ones. I don't think it's productive to take issue with neo-Darwinism over such trivial events, which are implied by less-than-perfect replication. The presence of specific mechanisms for error correction in living systems implies that actual errors are inherent in the replication process. To again reference Shapiro:
"The extraordinarily low error frequency results from monitoring the results of the polymerization process and correcting incorporation mistakes after the fact, not from the inherent precision of the replication apparatus. The DNA polymerase that incorporates nucleotides itself has an intrinsic precision of about one mistake for every 100,000 (10^5) nucleotides. Although this is impressive when compared to any man-made manufacturing process, the polymerase alone is at least four orders of magnitude less accurate than the final replication result. Ultimate precision is achieved by two separate stages of sensory-based proofreading:"
He goes on to describe a two-stage process that increases copying fidelity from 10^-5 to the impressive 10^-9. If errors did not occur, there would be no point in correcting them. The presence of error-correcting systems suggests quite forcefully that replication errors occur. Until such errors are shown to be directed rather than random, accepting the sufficiency of random errors for such events is quite reasonable.
Chance Ratcliff
April 12, 2013 at 05:10 PM PDT
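For readers who want the arithmetic behind the 10^-5 to 10^-9 improvement spelled out: if, purely for illustration, each of the two proofreading stages let through only about 1% of the errors reaching it, the polymerase's intrinsic error rate would fall to the observed figure. The per-stage factors below are assumptions for the sketch, not numbers from Shapiro:

```python
# Hypothetical illustration of two multiplicative proofreading stages;
# the 1e-2 per-stage factors are assumed, not taken from Shapiro.
intrinsic_error_rate = 1e-5      # polymerase alone, per nucleotide
stage_factors = (1e-2, 1e-2)     # assumed fraction of errors surviving each stage

final_rate = intrinsic_error_rate
for f in stage_factors:
    final_rate *= f

print(final_rate)  # ~1e-9, the final replication fidelity
```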
How many bacteria in your population, chance?
wd400
April 12, 2013 at 04:54 PM PDT
Chance Ratcliff (5), solid math! I have some follow-up questions:
- Do we also make the simplifying assumption that DNA repair systems are non-existent?
- And do we also make the simplifying assumption that after this AA substitution there will be a functioning epigenome, metabolism etc. to accommodate the new code?
Box
April 12, 2013 at 04:53 PM PDT
With regard to random mutations and mutation rates, James Shapiro gives us some numbers:
"The E.coli cell reproduces its DNA with remarkable precision (less than one mistake for every billion (10^9) new nucleotides incorporated) and at surprisingly high speed. The E.coli cell duplicates its 4.6 MB genome in 40 minutes (about 2,000 nucleotides per second), independently of the cell division time." James Shapiro, Evolution: A View from the 21st Century, Kindle location 500
Well, I'll risk making some calculations and simplifying assumptions. Using 10^-9 as an estimate for the per-nucleotide mutation rate across a 4.6 Mb genome, there is a 0.0046 chance of some nucleotide being copied incorrectly in a given replication. This means that after around 217 replications, it's more likely than not that some mutation will occur in some bacterium. The likelihood that the mutation will occur at a specific site is 1/(4.6*10^6), or around 2.17*10^-7, given an assumption of uniformity with respect to all nucleotides. Making the simplifying assumption that any nucleotide change would result in a specific amino acid substitution with equal probability, there's an additional factor of 1/22 ~= 0.0454.

Tallying the results, the probability that a specific mutation occurs in a single replication event is 0.0046 * 0.0454 * 2.17E-7 ~= 4.5E-11. Unless I'm in error, which is certainly possible, a specific AA substitution is more likely than not after around 22 billion replications. Squaring this number gives something close to 4.8*10^20, which, as it happens, is pretty close to Behe's empirical 10^20 figure for chloroquine resistance, which I believe involved two AA substitutions. (The P. falciparum genome is about five times larger, however.)

Granted, things are certainly more complicated than these calculations suggest, but at the same time, if my math and assumptions are not far off the mark, there doesn't appear to be a good reason to rule out random mutations as a sufficient causal phenomenon with regard to one or two AA substitutions. This does not imply that random mutation is the necessary cause for any given substitution, however.
Chance Ratcliff
April 12, 2013 at 04:29 PM PDT
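For anyone wanting to reproduce the back-of-the-envelope numbers in the comment above, here is a sketch in Python. The genome size, error rate, 1/22 factor, and uniformity assumptions are the commenter's simplifications, not established values:

```python
# Reproducing the commenter's simplified calculation.
genome_size = 4.6e6            # E. coli genome length in nucleotides
error_rate = 1e-9              # assumed copying errors per nucleotide per replication

p_any_error = error_rate * genome_size          # ~0.0046 per replication
p_site = 1 / genome_size                        # ~2.17e-7, uniformity assumption
p_specific_aa = 1 / 22                          # ~0.0454, simplifying assumption

p_event = p_any_error * p_site * p_specific_aa  # ~4.5e-11 per replication
replications = 1 / p_event                      # ~2.2e10 ("22 billion")
two_substitutions = replications ** 2           # ~4.8e20, near the 10^20 figure cited

print(p_event, replications, two_substitutions)
```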
Nice essay, a much better intro to ID than the Wiki article. The only important element left out (it may have been vaguely hinted at in one sentence) is the third leg of specified complexity, besides the origin of life and its evolution -- the fine-tuning of physical laws and constants. These artifacts are poised on the sharp tip of a needle, as improbable as anything in the biological world, yet tightly specified by the requirements of complex life, such as humans, which builds upon them. The importance of this third leg is threefold:

a) No cheap, narrow solutions to one problem only (such as James Shapiro's, which deal only with some forms of evolution) can serve as a coherent answer to the origin and nature of the intelligent agency behind these ID artifacts.

b) Pointing to ID at the level of fundamental physics implies that the same intelligent act (presently seen as the Big Bang) started our universe with full anticipation of its much later fruits. That means the intelligence needed is much greater than what is needed to explain life.

c) The tight requirements on physical properties, their continuation and their latent complexification attributes, mean that the same intelligent agency is acting continuously, including now, on all elements of the universe, upholding its life-supporting patterns (which are only partially reflected in our physical and chemical laws).

The last point is essential in order to challenge the common neo-Darwinian attribution of "randomness" to any mutation for which the cause is not known. The possibility that an intelligent agency is continuously and actively guiding any process that we describe with our current physical and chemical laws in a coarse-grained manner (statistically) raises the proof bar on the "randomness" claim even for micro-evolution (such as adaptation of bacteria to antibiotics), which is currently largely conceded to neo-Darwinian "random mutation". The type of randomness proof required from neo-Darwinians, before they can claim it as explained, was illustrated in the "random" dice-tossing example earlier.
nightlight
April 12, 2013 at 01:37 PM PDT
"Johnson and Wells are certainly involved in the broader intelligent design movement, they largely use intelligent design as a tool for promoting change in current educational and philosophical frameworks" My recollection is that Johnson advocated taking the debate to academia (i.e. colleges and universities), because that's where the agendas are set. I don't recall him ever advocating pushing intelligent design into high school classrooms. I think the distinction is important.cantor
April 12, 2013 at 11:48 AM PDT