
EA’s “oldie but goodie” short primer on Intelligent Design, Sept. 2003


Sometimes, we run across a sleeper that just begs to be headlined here at UD.

EA’s short primer on ID, drawn up in Sept 2003, is such a sleeper. Let’s observe:

__________

>> Brief Primer on Intelligent Design


Having read a fair amount of material on intelligent design and having been involved in various discussions on the topic, I decided to prepare this brief primer that I trust will be useful in clarifying the central issues and in helping those less familiar with intelligent design understand its basic propositions.

This is not intended to be a comprehensive analysis of intelligent design, nor is it intended to respond to criticisms.  Rather, this represents my modest attempt to avoid the side roads and the irrelevancies, and outline the fundamental central tenet of intelligent design, which is that some things exhibit characteristics of design that can be objectively and reliably detected.  It is my view that criticisms of intelligent design must focus on this central tenet, or risk missing the mark.  It is also with this central tenet that intelligent design stands or falls as a scientific enterprise.

Setting the Stage

As with so many issues, it is important to first define our terms.  In public debates, the term “intelligent design” is often incorrectly applied to anyone who believes that the Earth and all life upon it were actively created by an intelligent Creator.  Used pejoratively in this way, the term generates much more heat than light and adds no substantive insight to the discussion.

In a broader sense, the term might be applied to individuals who hold to a basic teleological view of the universe or the diversity of life on earth.  In this sense, many individuals believe in some form of intelligent design, including those who hold to an initial act of life’s creation, followed by naturalistic evolutionary mechanisms.

In yet a more concrete sense, the term is often used with respect to those involved in the modern intelligent design movement, including vocal proponents such as Philip Johnson and Jonathan Wells.  Although Johnson and Wells are certainly involved in the broader intelligent design movement, they largely use intelligent design as a tool for promoting change in current educational and philosophical frameworks.  This use of intelligent design as a tool for change has received by far the most press coverage and is at the heart of the often-heated debates over school curricula.  However, as intelligent design’s primary spokesperson, William Dembski, has pointed out, intelligent design’s use as a tool for change is secondary to intelligent design’s undertaking as an independent scientific enterprise.

Finally, therefore, intelligent design refers to the science of detecting design.  In this latter sense, intelligent design is not limited to debates over evolutionary theory or discussions of design in nature, but covers the study of signs of intelligence wherever they may occur: whether in archeology, forensic science, the search for extraterrestrial intelligence, or otherwise.  (Though not strictly limited to historical events, intelligent design argues that design can be detected in some things even in the absence of any reliable historical record or independent knowledge of a designing intelligence.  It is in this context that we wish to discuss intelligent design.)  Defined more tightly, intelligent design can thus be viewed as the science of studying the criteria, parameters and procedures for reliably detecting the activity of an intelligent agent.

Associated with this latter more limited definition are scientists involved in such a scientific enterprise.  These individuals include, probably most notably, Dembski and Michael Behe, and a number of other scientists who have begun to take notice of intelligent design as a legitimate scientific inquiry.

It is in this latter sense that I wish to examine the concept of intelligent design.

Basic Propositions

What then is the basic foundation and what are the basic propositions of intelligent design?

Intelligent design begins with a very basic proposition: some things are designed.  This is slightly more complicated than it sounds, but not much, if we keep a couple of points in mind.

First, one might object that many things appear to be partly designed and partly not.  This, however, is simply a matter of drilling down deeply enough to identify the discrete “thing” being examined.  For example, if we look at a stone wall we can see that it is made up of stones of various sizes and shapes.  Even if we assume that the stones themselves were not the product of intelligent design, we would conclude that they have been used by an intelligent agent in designing and building the wall.  Thus, in situations where something looks partly designed and partly not designed, we need simply drill down further and determine which aspect, portion, or piece of the “thing” we are evaluating.  In this example, are we examining the individual stones, or are we examining their overall arrangement, pattern, and resulting function?

Even if we are unable to break down a particular object or system into its component parts, and we end up with a “thing” that is partially designed and partially not designed, the initial proposition of intelligent design would remain essentially the same: some parts, or portions, or components of some things are designed.

Second, when we talk about the fact that some things are designed, we are not referring only to physical objects, but are referring to anything that is the subject of design, whether it be a physical object, a system, or a message or other representation able to convey information.  Thus if I took the same naturally-occurring stones, and instead of building a wall, I laid them out on the beach to spell a message, we would also have a clear indication of the actions of an intelligent agent, once again not in the stones themselves, but in the representation created by the stones and the information conveyed by that representation.
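To put a rough number on that intuition, here is a minimal sketch in Python, under a deliberately crude model: each stone-letter is one of 26 equally likely symbols, so singling out one exact N-letter message requires about N * log2(26) bits.  The message used below is hypothetical.

```python
import math

# Toy model: each stone-letter is one of 26 equally likely symbols, so
# specifying one exact N-letter message takes N * log2(26) bits.
ALPHABET_SIZE = 26

def message_bits(message: str) -> float:
    """Bits needed to single out this message among all equally long
    strings over the alphabet (spaces ignored)."""
    n = len(message.replace(" ", ""))
    return n * math.log2(ALPHABET_SIZE)

msg = "WELCOME TO THE BEACH"  # hypothetical stone message
print(f"{msg!r}: about {message_bits(msg):.0f} bits of specification")
```

Under this toy model, a 17-letter message already represents roughly 80 bits of specification: about one chance in 10^24 for a process that scatters letters uniformly at random.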

Given this basic proposition that some things are designed, intelligent design then asks the next logical question: is it possible to detect design?  As others have pointed out, if the unlikely answer is “no,” then we can only say that everything may or may not be designed, and we have no way of determining whether any particular item is or is not designed.  However, if the likely answer is “yes,” then this leads to a final and more challenging question that lies at the heart of intelligent design theory and intelligent design as a scientific enterprise: how does one reliably detect design?

Characteristics of Design and Limitations of Intelligent Design

What kinds of characteristics do things that are designed exhibit?  When we contemplate things that are designed – a car, a computer, a carefully coordinated bouquet of flowers – a number of characteristics might spring to mind, such as regularity, order, and beauty.  However, if we think for a moment, we can come up with many examples of naturally occurring phenomena that might fit these descriptions: the rotation of the Earth that brings each new day and the well-timed phases of the moon exhibit regularity; naturally-occurring crystals are examples of nearly flawless order; the rainbow or the sunset, resulting from the sun’s rays playing in the atmosphere, are paradigms of beauty.

To be sure, characteristics such as regularity and order might be strongly indicative of an intelligent agent in those instances where natural phenomena would not normally account for them, such as a handful of evenly spaced flowers growing beside the highway, or a pile of carefully stacked rocks along the hiking trail.  Nevertheless, because there are many instances of naturally occurring phenomena that exhibit regularity, order, and beauty, the mere existence of these characteristics is not necessarily indicative of design.  In other words, these are not necessary defining characteristics of design.

On the flip side, there are many things that are designed that do not exhibit any particular regularity or order, at least not in a mathematical sense, such as a painting or a sculpture.  There are also many objects of design that do not evoke any particular sense of beauty.  And this brings up an important limitation of intelligent design: we are not able to identify everything that is designed.

A related limitation arises in that we cannot say with certainty that a particular thing is not designed.  This is particularly true, given that many things are purposely designed to resemble naturally occurring phenomena.  For example, in my yard I have many rocks that have been purposely designed and strategically placed to resemble the random placement of rocks in a stream.  In addition, when I recently remodeled a room in my home, I used a faux painting technique – carefully designed and coordinated over the course of several hours – to resemble a naturally occurring pattern.

As a result, intelligent design is limited in two important respects: it can neither identify all things that are designed nor tell us with certainty that a particular thing is not designed.

But that leaves one remaining possibility: is it possible to identify with certainty some things that are designed?  Dembski and Behe would argue that the answer is “yes.”

Possibility versus Probability

In order to identify with certainty that something is designed, we must be able to define characteristics that, while not necessarily present in all things designed, are never present in things not designed.  It is in defining these characteristics, and in setting the parameters for identifying and studying them, that intelligent design seeks to make its scientific contribution.

We have already reviewed some potential characteristics of things that might be designed, and have noted, for example, that regularity and order do not necessarily define design.  I have posited, however, that regularity and order might provide an inference of design, in those instances where natural phenomena would not normally account for them, such as the handful of evenly spaced flowers or the pile of stacked rocks.  Let’s examine these two examples in a bit more detail.

Is it possible that this pattern of flowers or the stack of rocks occurred naturally?  Yes, it is possible.  It is also possible, at least as a pure logical matter, that the sun will cease to shine tomorrow morning at 9:00 a.m.  To give a stronger example, is it possible that the laws of physics will fail tonight at midnight?  Sure, as a pure logical matter.  But is it likely?  Absolutely not.  In fact, based on past observations and experience, we deem such an event so unlikely as to be a practical impossibility.

Note that in the examples of the sun ceasing to shine or the laws of physics failing we are not talking simply about unusual or rare events; rather we are talking about something so improbable that we, our precious scientific theories, and the very community in which we live are more likely to pass into oblivion before the event in question occurs.  Thus for all practical purposes, within the frame of reference of the universe as we understand it and the world in which we live and operate, it can be deemed an impossibility.  Dembski has already skillfully addressed this issue of logical possibility, so I will not review the matter further, except to summarize that in science we are not so interested in pure logical possibility as in realistic probability.  It is within this realm of probability that all science operates, and it is in this sense that we must view the probabilities relevant to intelligent design.
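One way to make “practical impossibility” concrete is Dembski’s universal probability bound of roughly 1 in 10^150, below which he argues chance should be ruled out no matter the available probabilistic resources.  A minimal sketch in Python; the threshold and examples are illustrative, not a full treatment:

```python
# Dembski's universal probability bound (about 1 in 10^150) is one
# published attempt to quantify "practical impossibility"; the examples
# below are illustrative.
UNIVERSAL_BOUND = 1e-150

def practically_impossible(p: float) -> bool:
    """Treat any event less probable than the bound as a practical
    impossibility, regardless of available probabilistic resources."""
    return p < UNIVERSAL_BOUND

p_coin_run = 0.5 ** 600          # 600 heads in a row: about 2.4e-181
print(practically_impossible(p_coin_run))  # True
print(practically_impossible(1e-30))       # False: rare, but not "impossible"
```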

However, while we need not be concerned with wildly speculative logical possibilities, we might nevertheless conclude that the pattern of flowers or the stack of rocks is possible, not only as a matter of logical possibility, but also as a matter of reasonable probability, within the realm of our experience.  After all, there are lots of flowers on the Earth and surely a handful of them must eventually turn up evenly spaced as though carefully planted.  In addition, we have all seen precariously balanced rocks, formed as a result of erosion acting on rocks of disparate hardness, so perhaps our pile of rocks also occurred naturally.  We might admit that our flowers and our stack of rocks are rare and unusual natural phenomena, but we would argue that they are not outside of the realm of probability or our past experience.

Thus, the inference of design needs to get much stronger before we are satisfied that our pattern of flowers or our stack of rocks has been designed.

The Design Inference Continuum

Now let’s suppose that we tweak the examples a bit.  Let’s suppose that instead of a handful of flowers, we have several dozen flowers, each evenly spaced one foot apart along the highway.  Can we safely conclude that this is the product of design?  What about a dozen identical stacks of rocks along the hiking trail?  One might still mount an argument that these phenomena do not yet reliably indicate design because they could have been created naturally.  Nevertheless, in making such an argument we would be relying less on realistic probabilities and what we know about the world around us, and slipping closer to the argument by logical possibility.  This is precisely the mistake for which Dembski takes Allen Orr to task.

Now allow me to tweak the examples yet a bit more.  Let’s suppose that the dozens of flowers are now hundreds, each in a carefully and evenly spaced pattern along the highway.  At this point, the probability of natural occurrence becomes so low as to completely escape our previous experience; it becomes so low as to suggest practical impossibility.  Is it the sheer number of flowers that puts us over the hump?  No, it is not the number of flowers itself that provides evidence for design, but the number of spacings between the flowers, the complexity of the overall pattern, and the fact that these spacings and the resulting complexity are not required by any natural law, but are only one of any number of possible variations.  In other words, it is the discretionary placement of all of these flowers, selected from among the nearly infinite number of placements possible under natural laws, that allows us to infer design.  It is this placement of all the flowers that gives the characteristics of specificity and complexity, which Dembski terms “specified complexity.”  And it is in this realm of specified complexity that the probability of non-design nears impossibility, and our confidence in inferring design nears certainty.
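A toy calculation illustrates how fast these probabilities collapse.  Assume, purely for illustration, N discrete one-foot positions along the roadside and K flowers scattered uniformly at random; the chance of landing on any one specific arrangement is 1/C(N, K):

```python
from math import comb, log10

# Toy model: K flowers land uniformly at random on N discrete one-foot
# positions; the chance of any one *specific* arrangement is 1/C(N, K).
def log10_p_specific(n_positions: int, k_flowers: int) -> float:
    """log10 of the probability of one particular K-subset of N positions."""
    return -log10(comb(n_positions, k_flowers))

print(log10_p_specific(1000, 5))    # about -12.9: rare, within experience
print(log10_p_specific(1000, 300))  # about -264: past any practical bound
```

Five flowers hitting one particular pattern is rare but within experience; three hundred doing so falls over a hundred orders of magnitude below the universal bound discussed earlier.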

Yet, our examples can become even more compelling.  As a last modification, let’s suppose that the flowers are now arranged by the side of the road in the outline of the state of Texas, complete with Bluebonnets in the shape of the Lone Star.  Let’s suppose that our stacks of rocks are arranged so that there is one stack exactly each mile along the trail, or one stack at each fork in the trail.  Now we have not only specified complex patterns, but patterns high in secondary information content.  In the one case we have a shape that identifies Texas, a particular type of flower that signifies the state, and a star that is not just a pattern, but a pattern with strong symbolic meaning.  Along our hiking trail we have markers that carry out a function by providing specific information regarding changes in the trail or indicating the distance traveled.

Intelligent design, as a scientific enterprise, is geared toward this end of the probability continuum, where the probability of non-design nears zero and the probability of design nears one.  In some ways, focusing only on the area of most certainty is a rather modest and limiting approach.  Yet design theorists willingly give up the possibility of identifying design in many cases where it in fact exists, in exchange for the accuracy and the certainty that a more stringent set of criteria bestows.  In this way, the design inference is lifted from the level of broad intuition to a focused scientific instrument with definitive testable criteria.

Conclusion

As a scientific undertaking, intelligent design is not in the business of identifying all things designed, nor is it in the business of confirming with certainty that a particular thing is not designed.  Indeed, intelligent design, and it is fair to say current human knowledge, is incapable of performing these tasks.  What intelligent design does seek to do, however, is identify some things that are designed.

We have seen that the argument to design is essentially an inference based on probabilities.  As a result, there is a continuum ranging from the likelihood of non-design to the likelihood of design.  At a certain point the probability of non-design nears zero and the probability of design nears one.  At that point we can say, the design theorist argues, with as much certainty as any other scientific fact or proposition, that the thing in question was designed.  It is in this area of specified complexity (of which high secondary information content and Behe’s “irreducible complexity” are examples) that the theory of intelligent design operates.

Criticisms of intelligent design based on social, religious, philosophical, or cultural grounds, including complaints about the identity, motives, or capabilities of the putative designer, miss the mark.  Design theorists argue that specified complexity can be objectively and reliably defined and detected so that the probability of non-design nears impossibility and the probability of design nears certainty.  This is intelligent design’s central tenet.  It is on this point, and only on this point, that intelligent design as a scientific undertaking can be appropriately challenged and criticized.  And it is on this point that Dembski, Behe, and others are confident that intelligent design will make its greatest contribution.

Eric Anderson

September 9, 2003>>

___________

It seems to me the matter was clear enough a decade ago, and the objections were sufficiently answered a decade ago.

Why are we still meeting the same problems, ten years later?

I want to suggest that this has more to do with unnecessary heat, unjustifiable polarisation and inexcusable clouding of issues than with the basic substance on the merits. Can we learn from the mistakes made over these past ten years, and do better over the next ten?

I hope so. END

Comments
Oops, sorry, #61 was posted into a wrong thread. It was meant to go here.
-- nightlight, April 17, 2013, 10:44 AM PDT
NickMatzke_UD #5: "You can't just go say ID is not about the immune system, but instead about the origin of life and the Cambrian explosion."

The 'irreducible complexity' examples by Behe, or the CSI examples by Dembski, serve as counterexamples to the neo-Darwinian theory of evolution (ND = RM + NS), pointing to some instances where ND's RM+NS mechanism seems incapable of explaining the particular biological artifacts. The existence of direct counterexamples has no bearing on whether the RM+NS mechanism is capable of explaining some other biological artifacts, such as micro-evolution (e.g. bacterial resistance to antibiotics).

For example, say you offer a theory NM_UD that declares, among other things: ... x*x > 10 for all integers x. To invalidate NM_UD, it suffices to show an integer x, such as x = 3, for which this NM_UD statement is false. That's a falsification by counterexample. Whether there are some integers x for which x*x > 10 holds, or whether NM_UD has some other statements which are valid, is irrelevant to the established fact that NM_UD is a falsified theory. There is also no logical or scientific requirement that a falsification by counterexample must provide an alternative theory explaining the phenomena that NM_UD sought to explain before NM_UD can be declared a falsified theory.
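The x*x > 10 example can be made concrete in a couple of lines; a minimal Python sketch, purely illustrative:

```python
# One counterexample falsifies a universal claim, no matter how many
# instances satisfy it.
claim = lambda x: x * x > 10   # "x*x > 10 for all integers x"

print(claim(3))   # False: x = 3 gives 9, so the universal claim fails
print(claim(4))   # True for this x, but the claim is already falsified
```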
-- nightlight, April 17, 2013, 10:42 AM PDT
Good job with NL above, and yup Mung is probably tongue in cheek :)
-- Mung, April 16, 2013, 04:06 PM PDT
Gregory #58: "I like your style of bolding certain words... [I] don't assume that you are yelling by this stylistic approach, but identifying important emphasis."

Yep, that was the idea. This would be YELLING. Bold in a paragraph, in its natural case, is meant to help the reader scan the text more quickly and in chunks, with bold fragments setting up the point or context for the whole paragraph ahead of the sequential reading. Web/hypertext reading is as different from book/linear reading as flying is from rowing in a boat.

"Yeah, that pretty much nails it to a point. But be warned from personal experience; IDists, neo-creationists and Discovery Institute aficionados don't like their acronym 'ID' being played with or experimented with."

I find it helpful for my own understanding to assign distinct labels to distinct concepts or entities. In the short time I have spent here on UD, there seem to be multiple currents of ID even in this small forum. The descriptive labels I used, such as "universal ID" (U-ID) and "part time ID" (PT-ID), merely reflect some of the more obvious distinctions between the observed currents. Since labels are arbitrary conventions anyway, if anyone is offended by my plainly descriptive labels, they're welcome to offer less descriptive or euphemistic variants for their own branch, or even abstract ones such as X-ID or Y-ID.

"This sounds a bit like Romanian-American Adrian Bejan's views of 'design in nature' as a given..."

This is the first time I ran into that material. Checking his web site about his "Constructal Theory", this seems to be a pursuit of the 'holy grail' of 'complexity science' (Santa Fe Institute, SFI), which is to formulate a 4th law of thermodynamics that captures in some way the essence of 'complex systems'. Some other characterizations include highly non-linear or chaotic or dissipative systems by Prigogine, systems at the edge between order and chaos by SFI & Wolfram, the 'principle of computational equivalence' by Wolfram, etc. While each captures a bit of the same pattern, it's still unfinished business. Bejan's version of the 4th law amplifies the 2nd law by saying that a (complex) system will not only seek to maximize its entropy, but will also organize itself so it can do so the fastest way possible, by improving/facilitating the flows between the system components. That pattern is indeed easily perceived in nature.

As a bit of synchronicity here, I have recently been tackling this same optimization problem for switching networks (such as those used in large-scale data centers). Through some lucky guesses I discovered that the problem of maximizing the flows (or throughput) of a certain large class of networks (Cayley graphs) is mathematically exactly the same problem as that of optimizing error correcting codes (maximizing the Hamming distance between codewords). Since the ECC field is much more mature than the field of network throughput optimization, there are tens of thousands of optimal ECC solutions which can now be easily translated, via a simple recipe given in the paper (pp. 28-29), into optimal throughput networks. If one then interprets the resulting networks (which I call "Long Hop" networks) as state diagrams of Markov processes, they represent the fastest mixing Markov processes, i.e. processes with the fastest approach to the max entropy state. Hence, they are a realization of Bejan's law in this context (i.e. for processes which have these Cayley graphs as Markov state diagrams).
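For readers unfamiliar with the ECC terminology: the Hamming distance being maximized between codewords is simply the number of positions at which two equal-length strings differ. A minimal sketch:

```python
# Hamming distance: the number of positions at which two equal-length
# codewords differ. ECC design maximizes the minimum distance between
# any two codewords.
def hamming(a: str, b: str) -> int:
    assert len(a) == len(b), "codewords must have equal length"
    return sum(x != y for x, y in zip(a, b))

print(hamming("10110", "11100"))  # 2
```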
-- nightlight, April 16, 2013, 09:31 AM PDT
nightlight, Just a note of thanks for your message #32. I was disallowed by KF for posting in Russian (even though I wasn’t telling jokes about him ;). But nevertheless glad that you could understand my words, as transliteration is oftentimes a challenge! (The institution where I work has 4 official languages, two with Slavic script, so these problems come up regularly.) I’ll be out of the country for a while and won’t respond again here soon (so expect name calling like above!), but would welcome private contact if you like, which you can find from the links you’ve already followed. You’ve made an interesting impression here at UD, and living in the USA, being a ‘scientist’ coming from the ‘East’ provides a unique viewpoint most at UD are not used to.
“my inference of intelligent design & guidance is not based on presently unexplained complexity of biological systems (these merely amplify it) but on knowability of the world despite its phenomenological richness, especially on the mathematical elegance and coherence of the physical laws.” – nightlight
This sounds a bit like Romanian-American Adrian Bejan’s views of ‘design in nature’ as a given (though, without a ‘Designer,’ as an agnostic/atheist, and without the adjective 'intelligent' behind 'design'). I wonder if you’ve come across Bejan’s work? UD has thus far (not surprisingly) avoided his ‘Design in Nature’ (2012) book. I'd be quite curious to hear your thoughts about it if you have.
“PT-ID is a weak position, self-condemned to keep shrinking as science expands and losing in courts. Its final natural endpoint in the long run is a classical deism — intelligent agency which designed and set universe into motion in the initial act of creation, then got out of it, which is the ultimate form of part-time ID, shrunk to a point.” – nightlight
Yeah, that pretty much nails it to a point. But be warned from personal experience; IDists, neo-creationists and Discovery Institute aficionados don’t like their acronym ‘ID’ being played with or experimented with. They are trying for a static, established, monumental definition of ‘ID’ (even if their little-big-tent approach has widely failed so far). My ‘Big-ID’ vs. ‘small-id’ distinction, while already well-established in the very small portion of mainstream science, philosophy and religion literature that takes IDism seriously, has been violated and attacked by supposedly peace-loving IDists here at UD. Perhaps they are not the only ‘victims’ of injustice?!

@Eric Anderson #24: would be crushed (read: opening round defeat) in an actual ‘evolution debate’ outside of friendly-ID territory, especially in a ‘live’ situation unchained from mere black-and-white text. KF, posing as a competent person in philosophy of science demarcation, which he has repeatedly shown he is not, writes a ‘gem’ of a distortion about “the amount of abusive behaviour in and around the Internet regarding the design issue”, in the poor victimized IDists ‘expelled’ genre.

No, KF – GEM, ‘design’ is a perfectly usable and well-explored concept in a variety of fields. I will again be presenting a paper that includes the concept of ‘design’ next week. But it has *nothing* to do with Intelligent Design Theory, the Discovery Institute, Uncommon Descent, ASA, IDEA, etc. – i.e. evangelical IDism sites and sources. Indeed, such views of ‘design’ as ‘ID’ are best understood as ‘deviant,’ as sociologists and criminologists (of science!) call it. The vast majority of ‘design issues’ are well worth discussing, and fascinating indeed, free as they are from the conspiracy-crazed Public Relations world of ‘Intelligent Design Theory,’ the neo-creationism built by and for American evangelicals and those few others that have been deluded to adopt the DI’s ‘designist’ language. It’s rather sad that IDists don’t realise the abuse they perpetrate against other ‘design issue’ fields by their deviant actions and strategies. Really, it is a shame that they’ve given ‘design’ such a bad name (and will insist on denying it)!

And all of the religious folks who took the time and effort to honestly read and even really tried to ‘buy’ ‘Intelligent Design Theory’ and then saw clearly through it and have now wisely and either steadfastly or emphatically rejected it, are disgraced by IDists. The latter still claim to have a mirage of a monopoly over the terms ‘intelligent design & Intelligent Design,’ while most Abrahamic believers have for a long time and still do accept the percept of ‘intelligent design,’ meaning a theistic worldview, which seems to be what is meant properly by U-ID. nightlight’s writing is shining big and wide on the glaring ‘gaps’ in IDist ‘theories’.
That doesn’t mean, however, that IDists should take it as a personal insult or that it means they are thought of as ‘bad guys/gals.’ The point is not to moralise about IDists, just to speak as objectively as possible (even as reflexive subjectivity inevitably always creeps in, as human persons) about this supposedly ‘natural scientific’ theory that was *undeniably* concocted in the 1980’s and 90’s by Thaxton, Meyer, Behe, Dembski, et al., and which is still supposedly ‘evolving,’ as folks like Sewell like to say about ‘technologies.’ IDists aren’t ‘designing’ the future of their theory anyway, but instead placing their bets on using ‘historical science’ to pontificate on how wrong a 19th century British naturalist was using the tools and ideas available during his epoch. The IDM’s strategy of IDism doesn’t count as one well-designed or planned in my books, or in views expressed from relations with respectable and sometimes top-quality scientists, scholars and theologians from around the world. But my voice is just one small thread in a larger yarn, which IDists will (until the end of the IDM) spin and try to weave their own way.

Gregory

p.s. nightlight, I like your style of bolding certain words or phrases and, unlike some fickle-editorial folk at UD, don’t assume that you are yelling by this stylistic approach, but identifying important emphasis.
-- Gregory, April 16, 2013, 05:49 AM PDT
Chance Ratcliff #55, kairosfocus #53: Thanks both for some clarifications on positions of conventional ID, which I labeled above PT-ID (part time ID; that was not meant as a derogatory term but as a literal description of its intermittent activations). Ultimately, it seems both of you ended up reaffirming the "part time" label -- you divide processes (or systems) into "natural" (explainable by known physical, chemical... laws; these could include random and deterministic elements) and the "un-natural" (those not explainable by known laws), which include some that are "intelligently guided" as a sub-category.

Besides several major practical weaknesses of that approach listed previously (which are due to its unambiguous 'part-time' aspect), its more fundamental flaw is in projecting (or conflating) the epistemological into ontological traits of these processes (or systems). Namely, while you treat "natural" vs "intelligently guided" as intrinsic/ontological properties of some processes, they are in fact only properties of our present knowledge about these processes.

For example, if we were cavemen discussing similar topics, you would be the one arguing that movements of stars, rain, lightning... were all 'intelligently guided' (by some spirits du jour) processes, while, say, a rock falling down onto the ground is a "natural" process (being easily controllable & reproducible). Some millennia later, all those "intelligently guided" processes have magically transmuted into "natural", without anything at all changing about them. Hence, they couldn't have been intrinsic properties of those processes.

In other words, this division of processes by PT-ID into "natural" vs "intelligently guided" is a conflation of map with territory, analogous to insisting that nations of the world are red, yellow, blue, green,... based on the particular coloring used on the currently most authoritative map of the world. This is clearly neither a well-founded nor a sustainable position for the "intelligent guidance".

The universal ID (U-ID) I am advocating is simply a more coherent position, exploring the ontological properties proper, without confusing them with the relation of those processes to our present knowledge about them. Hence, my inference of intelligent design & guidance is not based on presently unexplained complexity of biological systems (these merely amplify it) but on the knowability of the world despite its phenomenological richness, especially on the mathematical elegance and coherence of the physical laws. Hence, U-ID is an offshoot of the classical ID of the ancient Greeks (such as the Pythagoreans), or of some even more ancient caveman philosophers arguing with those other cavemen mentioned above, warning them that their divisions into "natural" vs "intelligently guided" processes are superficial and unsustainable.
-- nightlight, April 15, 2013, 06:47 PM PDT
kf, this post from ENV may interest you: Information, Past and Present - Winston Ewert - April 15, 2013 http://www.evolutionnews.org/2013/04/information_pas071201.html
-- bornagain77, April 15, 2013, 04:27 PM PDT
nightlight @49,
"Although we’ll likely end up agreeing to disagree, let me crystallize three main reasons why an ID position which is tenable in the longer run (instead of one which has to keep backing off, being squeezed into ever narrower gaps) requires continued activity of ‘intelligent agency’ at all times and all places, from physical laws and up, so-called “errors” included."
I have enjoyed our conversation and appreciate your continued attempts to clarify your views. While I don't always understand what you're getting at, your patience has helped to identify areas of agreement as well as disagreement.
To distinguish below between this type of “universal ID” (U-ID) and the conventional ID, I will label the latter as “part-time ID” (PT-ID; due to allowing for absence of intelligent agency some of the time; the extreme point of PT-ID is classical deism, the ultimate part-time). Alternative labeling which would fit as well is hard-ID vs soft-ID. I will use U-ID vs PT-ID.
While I don't really like your PT-ID label because it has a pejorative flavor, your U-ID puts a label on what you propose, making it a little easier to deal with. The reason Part-Time ID is not appreciated is because ID proponents don't deny that everything may indeed be designed, from the regular phenomena that are describable by physical laws, to the random aspects of contingency, to the intentional and contingent configurations of matter which result in a category of objects which are not amenable to either physical law, random chance, or certain propositions of gradualism. Contingency in material arrangements can be partitioned between chaos and purposeful configurations, each potentially destructive to the other. If needed, I will use Intelligent Design Theory (IDT) to account for what you term PT-ID, which is a composite of hypotheses about the universe and living systems, which posit that the products of intelligent activity -- designed objects, systems, messages, etc. -- have features which distinguish them from the products of unguided phenomena, such as geological processes.
1) Neo-Darwinian theory (ND=RM+NS) is hitching a free ride on top of already highly intelligent system, cellular biochemical networks (CBN). These are intelligent networks i.e. distributed self-programming computers running anticipatory algorithms of the same general kind as human brain (both are modeled in these key traits by neural networks).
(My emphasis in this and any subsequent quotes, unless otherwise noted.) Without acquiescing to your CBN terminology, since I'm not entirely sure exactly what it encompasses, I can fully agree that neo-Darwinism is "hitching a free ride" on an intelligently designed system. For that matter, so is Darwinian evolution. By not having a viable mechanism for generating formal novelties or their underlying specification, it is tacitly presumed that whatever organisms do is "natural". However, that's potentially a weasel-word, because it can be presumed to imply that unguided processes can account for life's existence and diversity.
The ND has picked out one positive feedback loop M(utations) + NS (natural selection), which is one of CBN’s intelligent algorithms, and declared it as a sole driver of evolution. But then, they also gratuitously attach a parasitic attribute “random” to the M(utation) half, i.e. change M ==> RM. Their motivation for this over-specification M ==> RM is purely ideological, serving to promote atheism (with all its social and moral corollaries).
Agreed. Depending on the context, random does not imply unguided. The immune system makes use of a type of genetic algorithm to produce variations in antibodies. This relies on a random factor, but is goal-directed. ID proponents are generally careful to make distinctions between targeted behaviors, which may still involve random components, and uniformly random occurrences, such as replication errors in e.coli. Also, I think I agree that neo-Darwinism assumes M→RM. However that's the converse of what I've been arguing for all along, which I think can be viewed as RM→M. This is the sufficient causal relationship as opposed to the necessary one.
Requirement of U-ID that intelligent agency (whose immediate tool or technology at this level are CBN-s) is continually active during any M-process (mutation), guiding it and shaping it to some anticipated objectives, calls the ND on the above critical sleight of hand M ==> RM.
The criticism I have with that is the failure to distinguish between events which are goal-directed and those which are actually random. Random processes destroy information; and while it's logically possible that some limited forms of intentional creative change could be introduced into seemingly random processes, a truly random outcome is distinguishable from the purposeful addition of specified complexity. Any thesis which conflates the two, I can't really accept. Intelligence moves specification in a positive direction, and random influences move in the opposite direction. Impose randomness upon specified information and it will eventually overtake it. Impose specification atop random occurrences and the sequences will no longer conform to a definition of randomness. These two forces, intelligent input and chaotic processes, move in entirely different directions, and in their rawest most unconstrained forms are not compatible.
Namely, U-ID can’t let them change M ==> RM without legitimate proof and explicit elimination of ‘intelligently guided’ M-process (mutation), since M vs RM is a perfectly falsifiable distinction. The falsification requires modeling & computing probabilities of all possible adjacent states of DNA (e.g. via quantum theory of molecular transitions) and establishing that the actual M-processes (mutations) observed are a fair sample from this large event space. This is exactly the same type of falsification one would have to use to falsify some fairness or randomness claim about rolling dice (such as the dice example discussed earlier).
I think this may be a two-way street. You appear to propose that intelligence can act through seemingly random processes, yet targeted or goal-directed mutations are not random by definition. So a distinction needs to be made between "random" and "designed" here. If intelligence can design through otherwise random processes, how does one make a distinction between design through the influence of random factors, and the explicit design inference warranted by goal-directed processes? Additionally, since intelligence is capable of simulating randomness, for example with cryptographically secure pseudo-random number generators, then providing a falsifiability criterion for whether or not some seemingly random occurrence is actually random might impose an undue burden of proof.
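The CSPRNG point can be made concrete: a fully deterministic generator, here SHA-256 run in counter mode (a standard construction), produces output that a naive frequency test cannot tell apart from a truly random stream. A minimal sketch; the seed and the single test are arbitrary choices:

```python
import hashlib

# A fully deterministic ("designed") generator -- SHA-256 in counter
# mode -- whose output a naive frequency test cannot distinguish from
# a truly random stream.
def stream_from_seed(seed: bytes, n_bytes: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n_bytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n_bytes]

stream = stream_from_seed(b"not random at all", 100_000)
ones = sum(bin(byte).count("1") for byte in stream)
print(ones / (8 * len(stream)))  # close to 0.5, as true randomness would give
```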
That’s the vital point that PT-ID (conventional ID as expressed in your and other posts) is needlessly surrendering on (and unsurprisingly, losing in courts). There is no need for that concession since RM and IGM are both elements of M of equal a priori standing, absent any falsifications (which requires the above probabilistic procedure for evaluating fairness of the observed samples of M-processes). Hence there is no scientific reason why should ID lose in courts as being less scientific than ND, provided ID is U-ID branch, hypothesizing a continuously active intelligent agency involved in IGM as an alternative hypothesis to RM hypothesis.
This is an explicit area of disagreement that will not be resolved for reasons I've given and repeated. RM→S (random mutations implies substitutions) is warranted, regardless of whether one accepts that intelligence might be able to influence random factors from some layer underlying particle physics. With regard to courts, such actual evidence of intelligence versus randomness is not what is being considered. I think you overestimate the judiciary with regard to its judgments on scientific matters and ID to date.
With number of such demonstrable mechanisms increasing, the PT-ID will keep conceding the effects of such mechanisms as phenomena being “naturally” explained hence not requiring actions of intelligent agency. In contrast, the U-ID sees all such mechanisms as technologies or tools being created and operated by the continuously active intelligent agency.
That's just not the direction that discoveries are moving in. See my post #51 above. As the actual mechanisms are elucidated, and presumed randomness falls victim to purposeful design, ID is vindicated, not reduced. I don't think you've made this case very well, although you've commented about it frequently. ID is not squeezed by discoveries of new purposeful, integrated, goal-directed mechanisms, and is not squeezed by the reduced role of random mutations as explanations for apparent design. I can only guess that you're not very familiar with ID literature, or its actual claims. And to attribute the very noteworthy effects of random degradations to intelligent forces, causes more problems than it solves, imo.
3) The ID is not only about biological evolution, but also about origin of life and fine tuning of physical laws (including physical constants). The U-ID spans all of those since it requires the common intelligence to uphold all those levels in operation at all times and all places. Hence, nothing exists from our physical level and up, without continuous intelligent action of the ‘intelligent agency’.
Not only does IDT address origins, evolution, and cosmological fine tuning, it doesn't presuppose an underlying force which upholds the entire universe, nor does it disallow such a force. Perhaps you should ask more questions and make fewer assumptions. There are people here better equipped than I to make clarifications, but in all your commenting here, I get the impression that you're more of a salesman than a seeker of knowledge. ID does not claim to explain all of reality, but specific aspects of our observation of it, for instance specified and irreducible complexity. ID seeks to account for patterns in nature that are better explained by an intelligent cause, than unguided processes. It's part of our uniform and repeated experience, and does not rely on an Intelligent Universal Theory of Everything.
In contrast, the PT-ID concedes present physical laws as “natural” that require no continuous intelligence to run. Hence, any time physics expands to explain yet another finely tuned physical constant, the PT-ID will have to back off from claiming need of an ‘intelligence’ for that one. I.e. PT-ID will repeat the same shrinking pattern it exhibits at the level of evolution — any time a specific mechanism is uncovered (reverse-engineered), the space for actions of ‘intelligent agency’ diminishes.
Again, ID's scope is limited. Because it doesn't pretend to explain all of physical reality in a single unifying theory of everything, but rather to make sense of a subset of observations within reality, it can accommodate a situation where we find intelligence may actually be required to uphold it. These are all accusations that appear to come from your general impression of ID, and not from what prominent ID proponents actually say. I really suggest you read Behe, Meyer, Dembski, Denton, Wells, Richards, Gonzalez, etc., and ask more questions and make fewer assumptions. More text gets spent here at UD correcting misrepresentations of IDT than is warranted.

Also, I think you could make your claims about specifics more clear. For instance, what role does intelligence play in raw random factors -- does intelligence cause randomness, does intelligence design through randomness, are purely random effects distinguishable from specification by objective qualification or quantification? Does U-ID take issue with Darwinian evolutionary assumptions, such as gradualism, or does it just replace the "random" factor in "random variation" with "intelligence"? Since U-ID considers that random effects are intelligently guided, how does it account for purely negative effects such as genetic diseases, loss-of-function, and general degradation; are these effects just as intelligent as the constructive, design-generating ones, and how do we distinguish?

Anyway nightlight, I've enjoyed our dialog, despite airing some frustrations. If you want to take the last word on the subject, be my guest. I can't guarantee I won't respond to something you say if you bring up new material, but I get the sense that this conversation should probably wind down now. Thanks much for your indulgence. Best, Chance

P.S. Apologies for the hasty composition and length of this post. :)
-- Chance Ratcliff, April 15, 2013, 03:48 PM PDT
KF @52, thanks for the acknowledgment and for the contributing and supporting thoughts. As usual your commentary is helpful and appreciated.
"Going further, there are observed, evidently accident-driven, rates of error in relevant processes of protein synthesis, etc. Similarly, in genome replication, we have reason to believe that there are errors that get into populations and indeed are used to trace the distribution/ancestry of human populations. It is a reasonable inference that — absent decisive evidence to the contrary — such variations are chance occurrences."
This is essentially my main point. I'm not against non-random explanations even for trivial changes, but those require positive evidence, given the sufficiency of transmission errors to account for these types of small changes. Furthermore, while presumed random changes might be goal directed, we must also account for events such as the development of genetic diseases, which are better explained by random events than purposeful ones, imo.
Ironically, that can even be built into the design of the living system, as within an island of function, it may be useful to have built in robustness and adaptability due to ability to shift around within the zone of function. Cf. here, dogs and it seems the Red Deer family — which includes the North American Elk. Circumpolar species may also be a similar example. That is no problem for a design-centric view.
Yes, precisely. I think it's reasonable to infer that organisms are robust precisely because of random factors which can lead to information degradation. It's rather remarkable to note that biological systems exhibit specified systems which appear to be for the purpose of keeping organisms functional in the face of errors and damage.
All of this does not undermine the basic concerns and challenges relating to the Darwinian macro-evolutionary view: the ability of life forms to find islands of function in vast config spaces for the organised complex components of life: origin of body plans. This starts with the body plan of the very first living cell. No roots, no shoots, branches or twigs. And that is why the implied concept of a vast connected continent of function traversible incrementally by the said tree of life, is so pivotal to the whole evolutionary materialism dominated Darwinist view of origins of the world of life. The Darwinist view implies such, but there is no good reason to infer such from the fossil record — which is dominated by suddenness of appearance, stasis of forms [with variation being within the form], gaps and disappearances.
Not only does this make randomness a poor explanation for the rich diversity of life (as well as its emergence), but it imposes some severe constraints upon gradualism as well. To my mind, this means that even if we can apply some intelligence as an underlying factor of random events, there is no good explanation for arriving at islands of form and function through small, incremental changes. This may be somewhat controversial, but I think it's relevant.
-- Chance Ratcliff, April 15, 2013, 01:50 PM PDT
NL: I add one little point to CR's response to you above, in re your:
There is no “natural” vs “un-natural” or “super-natural” distinction within U-ID. There is a single type of activity by the underlying intelligent agency, the pattern which we only know to a greater (“natural”) or to a lesser (“super-natural”) degree at different stages of harmonization at our level.
The pivotal problem here is empirical detectability, multiplied by a failure to see/acknowledge the significance of inference to best explanation in light of empirically grounded reliable signs. (I have already had to challenge your attempted definition of science and its methods of investigation and reasoning, cf here and here onward when you came back on much the same ground with the same basic problems.)

What you are in effect doing here is reverting to the NOMA idea of Gould. Which fails. For, in a case where there are no detectable, reliable signs that distinguish between chance and/or mechanical necessity and design, we have a case of worldview-level faith and a matter of ungrounded decision. The default will be to impose evolutionary materialism and to dismiss an assumed -- this is not per discriminating evidence amenable to observations -- behind-the-scenes intelligence as a myth. In short, such an inference is operationally indistinguishable from a priori evolutionary materialism.

The real world situation is quite different. We do have such things as may be properly characterised as "natural" and those that may be characterised as "ART-ificial," with empirically observable discriminating evidence that can be used to construct, and empirically ground, an explanatory filter. That is, there are such things as empirically reliable signs of design.

Namely, first, the natural can properly be characterised by that which follows stochastic laws that allow for chance, necessity or a combination of the two. A dropped heavy object falls, reliably, at initial acceleration 9.8 N/kg here on earth. The Moon swings by in the sky at a rate reducible to the same attenuated by spreading out of the flux of the relevant field with distance through the surface of a sphere, as Newton observed and inferred in the 1660's. Where the object falling on Earth's surface is a fair die, the uppermost surface after tumbling is effectively the result of a random distribution, tracing through sensitive dependence on uncontrollably small variations in initial and intervening circumstances to such an outcome. Such can easily be observed and are very properly ascribed to chance and necessity, without inferring to an ultimate cause of the existence of such things. (We will get to that later, in its place.)

But if we found together a tray with 200 dice in it, neatly arranged in a linear pattern where we have a code that spells out the first 72 or so characters of this post in succession [e.g. pairs of dice [36 states] could easily be mapped to letters of the alphabet [26] plus decimal numbers [10] . . . cf. Vietnam-war-era prisoner knock code matrices, as well as how classically alphabetic characters were used to represent numbers, in effect a --> 1, b --> 2, etc. Indeed ASCII retains some features of that, cf discussion and table linked here which shows the ASCII table, in a context of grounding digital productivity], we would very properly infer to another empirically grounded causal factor that is well able to do such, and where the other source of high contingency, chance, is not a credible explanation per the deep isolation of such islands of function in the space of possible configs.

That is what the design inference is about, and it is a fairly simple issue of emphasising what we may directly support through empirically based, inductive reasoning.
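The dice-pair code sketched in the bracketed aside can be written out in a few lines; the particular assignment of the 36 two-dice states to 26 letters plus 10 digits is of course one arbitrary choice among many:

```python
from string import ascii_uppercase, digits

# Two dice give 6 * 6 = 36 states: enough for 26 letters plus 10 digits.
# The assignment of states to symbols below is one arbitrary choice.
SYMBOLS = ascii_uppercase + digits   # 36 symbols
assert len(SYMBOLS) == 36

def encode(message: str) -> list:
    """Each symbol becomes one (first die, second die) pair, faces 1..6."""
    return [(SYMBOLS.index(c) // 6 + 1, SYMBOLS.index(c) % 6 + 1)
            for c in message.upper() if c in SYMBOLS]

def decode(pairs) -> str:
    return "".join(SYMBOLS[(a - 1) * 6 + (b - 1)] for a, b in pairs)

rolls = encode("DESIGN")
print(rolls)           # [(1, 4), (1, 5), (4, 1), (2, 3), (2, 1), (3, 2)]
print(decode(rolls))   # DESIGN
```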
The onward inference therefrom, to cases where we were not present to observe directly the actual events -- the situation in many criminal trials, and that of historical sciences and origins science -- is simple. Once we have a reasonable framework of inference on tested, reliable signs, we may use the signs to infer per best empirically warranted observation. That is how modern geology was founded, and it is how Wallace and Darwin reasoned, all of which has antecedents in Newton's well known four rules of reasoning.

The surprise in the process for the committed a priori materialist and his fellow travellers is that there are identifiable, reliable signs of ART as cause, i.e. of design. Such as, functionally specific complex organisation and/or associated information [FSCO/I], and the like. This is a matter of well confirmed and patently obvious fact, with billions of cases around us. With this sitting at the table of inference to best explanation, it is now evident that the a priori injection of materialism, as has been used to try to even redefine science and its methods in the teeth of experience, history, and logic (on whatever excuses and strawman tactics such as accusations of "giving up" or "God of the gaps" etc.), is little more than question-begging.

Pulling back, we can see cases all around us in a technological world that underscore just how effective FSCO/I etc. are as reliable signs of design. The strained attempts to avoid this would be laughable, save that it is such a sad indictment of where we have reached as a civilisation. Next, we see that the world of cell based life is chock full of reliable signs of design, starting from D/RNA and proteins.

But the matter does not stop there. Pulling back, we see that we live in an observed cosmos that is evidently quite finely tuned in many, many ways, set up on underlying laws, parameters and initial circumstances at an operating point that provides a habitation for cell based life of a type that is also compatible with intelligent cell based life. Just one example is that the first four most abundant chemically active atoms are those that give us water, organic chemistry, and proteins: H, C & O, N. That, in a context where there is a well known bit of fine tuning that addresses the abundance and balance of C & O. That points to design as a credible explanation of the functionally specific, complex organisation at the origin of a cosmos in which such is the case.

Indeed, it turns out that the multiverse alternative proffered is not only speculation without empirical -- observational -- warrant, but it also simply pushes back the fine tuning one step: the cosmos bread making machine that bakes up a fit habitat for life is just as much subject to fine tuning as the directly observed cosmos. (That's no surprise, Dembski pointed out as much in NFL, pp. 149 ff. and subsequently. Namely, the search for a search [S4S] becomes just as hard as the original search. That is, once functionally specific complex organisation and associated information are on the table, the radical contingency implied and the siting of FSCO/I in isolated islands of function in the space of plausibly accessible configs is not a credible result of chance.)

And notice, at every step, the emphasis has fallen on things that are empirically well grounded, leading to inference on known reliable and observable sign. Lewontin's a priori evolutionary materialism fails, and Gould's NOMA fails too.
And God of the gaps is a strawman. Last but not least, these things are not exactly news, nor are they particularly inaccessible, even here at UD. I therefore think it would be reasonable to expect that onward discussion should reflect a due-diligence reckoning with what design theory is actually about, and what science as a discipline is actually about, in light of the underlying issues in logic and epistemology. (The 101 here may be of help.) Otherwise, we are simply looking at going in deadlocked circles, because of a refusal to address evidence and reasoning on the merits, but instead the all too commonly encountered strawmen. KF
April 14, 2013 at 11:59 PM PDT
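To make the "isolated islands of function in a config space" point above concrete, here is a toy numerical sketch. The 500-bit configuration space, the 100-bit island, and the deliberately over-generous sampling budget are illustrative assumptions of the kind often used in these discussions, not measurements:

```python
# Toy illustration: the fraction of a configuration space occupied by an
# island of function shrinks exponentially with string length, so blind
# sampling quickly becomes hopeless. All sizes here are illustrative.

BITS = 500                  # a 500-bit configuration space: 2^500 states
island = 2 ** 100           # a (generously large) island of function
space = 2 ** BITS

p_hit = island / space      # chance that one random draw lands on the island
budget = 10 ** 74           # an over-generous number of blind draws

print(f"P(single draw hits the island) = {p_hit:.3e}")
print(f"Expected hits in {budget:.1e} draws = {p_hit * budget:.3e}")
```

Even with a sampling budget far beyond anything physically plausible, the expected number of hits is effectively zero; that is the arithmetic intuition behind treating FSCO/I as a reliable sign of design rather than of blind search.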
CR:
Transference of information implies errors — noise, the loss of fidelity. This is a directional movement from states of order to states of disorder. It’s part of our uniform and repeated experience. Systems based on physical processes are error prone, and all types of measures go into assuring the fidelity of information transfer. We find exactly these sorts of countermeasures in the replication processes of biological systems.
Well said. We know that temperature exists (as a basic physical property), and so we have a framework in which there is a credible, random statistical distribution of translational, rotational and vibrational energies at the molecular level in any material system. Linked to this, we know from basic communication theory that noise is a characteristic of any informational communication system, so that there is a proneness to error which we have every reason to accept will follow statistical distributions characteristic of chance. Going further, there are observed, evidently accident-driven rates of error in relevant processes such as protein synthesis. Similarly, in genome replication we have reason to believe that there are errors that get into populations and indeed are used to trace the distribution and ancestry of human populations. It is a reasonable inference that -- absent decisive evidence to the contrary -- such variations are chance occurrences.

Ironically, that can even be built into the design of a living system: within an island of function, it may be useful to have built-in robustness and adaptability, due to the ability to shift around within the zone of function. Cf. dogs and, it seems, the Red Deer family, which includes the North American Elk; circumpolar species may also be a similar example. That is no problem for a design-centric view. Nor is the role of error-correction systems -- a well-known feature of the technology of code-based communication systems -- that work to keep the phenomenon within bounds.

All of this does not undermine the basic concerns and challenges relating to the Darwinian macro-evolutionary view: the ability of life forms to find islands of function in vast config spaces for the organised, complex components of life, that is, the origin of body plans. This starts with the body plan of the very first living cell. No roots; no shoots, branches or twigs. And that is why the implied concept of a vast connected continent of function, traversable incrementally by the said tree of life, is so pivotal to the whole evolutionary-materialism-dominated Darwinist view of origins of the world of life. The Darwinist view implies such, but there is no good reason to infer it from the fossil record -- which is dominated by suddenness of appearance, stasis of forms (with variation being within the form), gaps and disappearances.

The actual evidence of the world of life, and the wider evidence of complex, functionally specific systems, is that such systems are characterised by the requisites of well-matched parts being properly arranged and coupled together for function to emerge or be present. That is, there is good reason to see that the implied tree-of-life picture is not credible at all. Islands of function in config spaces are a reasonable and empirically based expectation.

This speaks to the issue of noise and its effects beyond certain limits: deleterious. It may even be associated with something very much like Sanford's genetic entropy: gradual deterioration of the genome leading to exhaustion of the functional capacity of the species, and vulnerability to disappearance. Indeed, the problem of over-specialisation and loss of robustness crops up too. I do not think it is an accident that some of the more spectacular varieties of dogs, horses, goldfish, etc. show signs that they are at the limits for their species. The situation of top-class racehorses that have to live in air conditioning because they lack adequate ability to sweat is just one example.
Behind all of this is the very opposite of God-of-the-gaps reasoning, a point you noted:
ID explanations are not being squeezed by narrowing gaps, because ID doesn’t rely on explanatory gaps — such a statement presupposes that all of biology can be explained by physical law, and that ID just attempts to account for what cannot be explained by mechanistic processes. Such is not the case. There are no viable physical explanations for either the origin of life or the subsequent diversification of it, not even close. ID offers a better causative element, one that can be currently seen in operation. The necessity for design as a cause of functionally complex and specified organization and information (there are numerous examples apart from biological systems) is intuitively obvious, and no viable alternatives exist. This fact is becoming more illuminated as discoveries unfold, discoveries of biological nanotechnology, and of signalling systems, elements that James Shapiro uses terms like “cybernetic” and “cognitive” to account for. All of our direct experience indicates that such systems require intelligent design. ID is not being squeezed at all, but rather the arrow of time has been favorable with regard to new discoveries, always pointing toward new systems and interactions more sophisticated than previously thought . . . . Neither is ID being squeezed in the shrinking gaps of random occurrences, since it has no stake there. Even if it’s shown that presumed random elements are not as random as imagined, design grows as a result, it does not shrink like neo-Darwinism, nor is ID bound to NDE’s fate.
Well said. KF

kairosfocus
April 14, 2013 at 11:05 PM PDT
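The noise-and-countermeasures point above can be illustrated with the simplest error-correcting code from communication theory. The sketch below is a generic textbook construction (a binary symmetric channel plus a triple-repetition code with majority voting), not a model of any biological mechanism, and the noise level is an arbitrary assumption:

```python
import random

def noisy_channel(bits, p, rng):
    """Flip each bit independently with probability p (chance noise)."""
    return [b ^ (rng.random() < p) for b in bits]

def encode(bits):
    """Triple-repetition code: send each bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Majority vote over each triple recovers the intended bit."""
    return [int(sum(received[i:i + 3]) >= 2) for i in range(0, len(received), 3)]

rng = random.Random(1)
message = [rng.randint(0, 1) for _ in range(100_000)]
p = 0.01  # illustrative per-bit noise level

raw = noisy_channel(message, p, rng)
corrected = decode(noisy_channel(encode(message), p, rng))

def error_rate(sent, got):
    return sum(a != b for a, b in zip(sent, got)) / len(sent)

print(f"without coding:       {error_rate(message, raw):.4%}")        # about p
print(f"with repetition code: {error_rate(message, corrected):.4%}")  # about 3*p^2
```

The improved fidelity is bought at a cost of three channel symbols per message bit, mirroring the observation that error correction is deliberate, resource-consuming machinery rather than something physics provides for free.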
nightlight @49, just some quick comments here while I have time.
"Although we’ll likely end up agreeing to disagree, let me crystallize three main reasons why an ID position which is tenable in the longer run (instead of one which has to keep backing off, being squeezed into ever narrower gaps) requires continued activity of ‘intelligent agency’ at all times and all places, from physical laws and up, so-called “errors” included."
Noting your parenthetical statement, since you've mentioned ID in the context of narrowing gaps before, I'll say that ID explanations are not being squeezed by narrowing gaps, because ID doesn't rely on explanatory gaps -- such a statement presupposes that all of biology can be explained by physical law, and that ID just attempts to account for what cannot be explained by mechanistic processes. Such is not the case. There are no viable physical explanations for either the origin of life or the subsequent diversification of it, not even close. ID offers a better causative element, one that can be currently seen in operation.

The necessity for design as a cause of functionally complex and specified organization and information (there are numerous examples apart from biological systems) is intuitively obvious, and no viable alternatives exist. This fact is becoming more illuminated as discoveries unfold, discoveries of biological nanotechnology, and of signalling systems, elements that James Shapiro uses terms like "cybernetic" and "cognitive" to account for. All of our direct experience indicates that such systems require intelligent design. ID is not being squeezed at all, but rather the arrow of time has been favorable with regard to new discoveries, always pointing toward new systems and interactions more sophisticated than previously thought. Nowhere is it apparent that we're converging on a simpler account of biology; the trend is moving in the other direction, with new functions being discovered regularly for DNA elements previously considered "junk" by many. The list goes on: epigenetics, plasticity, homoplasy, etc.

Neither is ID being squeezed in the shrinking gaps of random occurrences, since it has no stake there. Even if it's shown that presumed random elements are not as random as imagined, design grows as a result; it does not shrink like neo-Darwinism, nor is ID bound to NDE's fate.

Chance Ratcliff
April 14, 2013 at 8:10 PM PDT
nightlight @19,
"That is a far stronger position to hold, since if they wish to claim some property of the process (such as randomness), the burden of proof is on them to show that such property is probabilistically plausible, not on me to prove it is implausible..."
I actually agree with that statement pretty strongly. As a matter of fact, ID proponents regularly lean on Darwinists to show that random mutations can build complex structures. I do believe that there is a solid burden of proof upon those who make a positive claim about the efficacy of a natural process to account for certain patterns otherwise consistent with the activity of intelligent beings, whether by chance or necessity or some combination of the two.
"...It is also internally much more coherent position, since it doesn’t hypothesize an absurd kind of intelligent agency which designs full sentences, but also leaves sentence size gaps for random smudges to somehow form almost correct full sentences."
This is the harder part to relate to. Transference of information implies errors -- noise, the loss of fidelity. This is a directional movement from states of order to states of disorder. It's part of our uniform and repeated experience. Systems based on physical processes are error prone, and all types of measures go into assuring the fidelity of information transfer. We find exactly these sorts of countermeasures in the replication processes of biological systems. In the case of E. coli, there is a two-stage error-correction mechanism, one stage of which involves a protein cascade for signalling error detection and performing the subsequent correction. These are specific, complex, interacting hardware elements whose purpose is to reduce the intrinsic DNA polymerase error rate of 10^-5 to an impressive 10^-9. It's hard to see why such a definite, physical, mechanical process for error correction should exist if we can attribute intelligent cause to the errors in the first place.

Chance Ratcliff
April 14, 2013 at 7:37 PM PDT
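The 10^-5 versus 10^-9 figures in the comment above translate into a striking difference in whole-genome fidelity. A back-of-the-envelope sketch, assuming independent per-base errors and a roughly 4.6-million-base genome (both simplifications):

```python
import math

GENOME = 4_600_000  # approximate E. coli genome length, in base pairs

def p_error_free(rate, length=GENOME):
    """Probability of a completely error-free copy, assuming independent errors."""
    return math.exp(length * math.log1p(-rate))

for rate in (1e-5, 1e-9):
    print(f"per-base error rate {rate:.0e}: "
          f"P(error-free genome copy) = {p_error_free(rate):.3g}, "
          f"expected errors per copy = {rate * GENOME:.3g}")
```

At the uncorrected rate, essentially every replication would carry dozens of errors (about 46 expected per copy); at 10^-9, the overwhelming majority of copies are error-free, which is the scale of improvement the two-stage correction machinery buys.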
Chance Ratcliff #44: "I'm fairly sure that no advantage comes from attributing intelligent causes to events which can be explained by replication errors and other events for which randomness is a sufficient cause. However I'm sure we won't reach agreement on this point."

Although we'll likely end up agreeing to disagree, let me crystallize three main reasons why an ID position which is tenable in the longer run (instead of one which has to keep backing off, being squeezed into ever narrower gaps) requires continued activity of 'intelligent agency' at all times and all places, from physical laws on up, so-called "errors" included. To distinguish below between this type of "universal ID" (U-ID) and the conventional ID, I will label the latter "part-time ID" (PT-ID, since it allows for the absence of intelligent agency some of the time; the extreme point of PT-ID is classical deism, the ultimate part-time position). Alternative labels that would fit as well are hard-ID vs soft-ID. I will use U-ID vs PT-ID.

1) Neo-Darwinian theory (ND = RM + NS) is hitching a free ride on top of an already highly intelligent system, the cellular biochemical networks (CBN). These are intelligent networks, i.e. distributed self-programming computers running anticipatory algorithms of the same general kind as the human brain (both are modeled in these key traits by neural networks). ND has picked out one positive feedback loop, M(utation) + NS (natural selection), which is one of CBN's intelligent algorithms, and declared it the sole driver of evolution. But then they also gratuitously attach a parasitic attribute, "random", to the M(utation) half, i.e. change M ==> RM. Their motivation for this over-specification M ==> RM is purely ideological, serving to promote atheism (with all its social and moral corollaries).

The U-ID requirement that intelligent agency (whose immediate tool or technology at this level is the CBNs) is continually active during any M-process (mutation), guiding it and shaping it to some anticipated objectives, calls ND on the above critical sleight of hand M ==> RM. Namely, U-ID can't let them change M ==> RM without legitimate proof and explicit elimination of the 'intelligently guided' M-process (mutation), since M vs RM is a perfectly falsifiable distinction. The falsification requires modeling and computing the probabilities of all possible adjacent states of DNA (e.g. via quantum theory of molecular transitions) and establishing that the actual M-processes (mutations) observed are a fair sample from this large event space. This is exactly the same type of falsification one would have to use to falsify some fairness or randomness claim about rolling dice (such as the dice example discussed earlier). Presently they cannot prove anything of the sort (since quantum modeling of such large molecules is far beyond current techniques), hence ND's M ==> RM bluff should be called, and the "R" rejected as an extraneous (ideological) addition. Namely, there is no falsifiable or empirical effect they can point to if one were to simplify their theory via the reversal RM ==> M, i.e. strip off the extra attribute R they have gratuitously added to the M-process. If they can't show an empirical or falsifiable difference between M and RM, then what is left is the general M-process, which leaves 'intelligently guided' (IGM) and 'random' (RM) M-processes on equal scientific footing.

Hence, they have no scientific basis to claim that the "random" attribute of the M-process is more scientific than the "intelligently guided" attribute (which is the hypothesis of U-ID's continuously active intelligent agency). That is the vital point which PT-ID (conventional ID, as expressed in your and other posts) is needlessly surrendering (and, unsurprisingly, losing in courts). There is no need for that concession, since RM and IGM are both specializations of M with equal a priori standing, absent any falsification (which requires the above probabilistic procedure for evaluating the fairness of the observed samples of M-processes). Hence there is no scientific reason why ID should lose in courts as being less scientific than ND, provided ID is of the U-ID branch, hypothesizing a continuously active intelligent agency involved in IGM as an alternative to the RM hypothesis.

2) Intelligent networks such as CBNs (like those underlying them via physical laws) have additive intelligence; hence by enlarging these networks, or merely reverse-engineering them to reveal more existing detail, their guiding intelligence and its tools increase. For example, once CBNs construct the multi-cellular organism 'technology' with sexual reproduction, suddenly the sensory organs and resulting mate-selection 'technologies' of CBNs can augment their capabilities to guide the DNA transformations between generations even more optimally. Of course, even lower-level technologies (revealed via reverse engineering of CBNs), such as horizontal gene transfer, endosymbiosis, etc., augment the intelligence of CBNs.

With the number of such demonstrable mechanisms increasing, PT-ID will keep conceding the effects of such mechanisms as phenomena being "naturally" explained, hence not requiring actions of intelligent agency. In contrast, U-ID sees all such mechanisms as technologies or tools being created and operated by the continuously active intelligent agency. Unlike PT-ID, U-ID doesn't leave any gap between the ever expanding "natural" process and the shrinking "intelligent" process. With U-ID, every process is a manifestation of the ongoing intelligent activity, upholding our physical, chemical, biological, ... processes at all times and all levels, at the razor's edge (this metaphor is typically used for fine-tuned physical constants; within U-ID it applies at all levels). There is no "natural" vs "un-natural" or "super-natural" distinction within U-ID. There is a single type of activity by the underlying intelligent agency, a pattern which we know only to a greater ("natural") or lesser ("super-natural") degree at different stages of harmonization at our level. Hence, with U-ID, any such discoveries of new biochemical and higher optimization mechanisms demonstrate the ever increasing level of intelligence needed to design, build and operate them. Such discoveries expand and amplify U-ID; they do not shrink and weaken it as they do PT-ID.

3) ID is not only about biological evolution, but also about the origin of life and the fine tuning of physical laws (including physical constants). U-ID spans all of those, since it requires the common intelligence to uphold all those levels in operation at all times and all places. Hence, nothing exists from our physical level on up without the continuous intelligent action of the 'intelligent agency'. Within U-ID, all our laws are merely coarse-grained regularities of the "thought patterns" of the intelligent agency. What is outside our present laws is not super-natural, but merely a less known or understood aspect or feature of the same pattern of intelligent activity.

In contrast, PT-ID concedes present physical laws as "natural", requiring no continuous intelligence to run. Hence, any time physics expands to explain yet another finely tuned physical constant, PT-ID will have to back off from claiming the need of an 'intelligence' for that one. I.e., PT-ID will repeat the same shrinking pattern it exhibits at the level of evolution: any time a specific mechanism is uncovered (reverse-engineered), the space for actions of the 'intelligent agency' diminishes.

In summary of reasons (1)-(3): PT-ID is a weak position, self-condemned to keep shrinking as science expands and to keep losing in courts. Its natural endpoint in the long run is classical deism -- an intelligent agency which designed and set the universe into motion in the initial act of creation, then got out of it -- which is the ultimate form of part-time ID, shrunk to a point.

nightlight
April 14, 2013 at 7:12 PM PDT
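The falsification procedure nightlight sketches is, in statistical terms, a goodness-of-fit test: are the observed outcomes a fair sample from the claimed distribution? A minimal sketch of the dice version using SciPy's chi-square test, with made-up counts for illustration:

```python
from scipy.stats import chisquare

# Hypothetical tallies from 120 rolls of a die (illustrative data only).
observed = [18, 22, 17, 25, 19, 19]

# Null hypothesis: all six faces equally likely (a "fair die").
stat, p_value = chisquare(observed)

print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")
# A tiny p-value would justify rejecting fairness; a large one means the
# fairness claim merely survives this test -- it is never proven outright.
```

Scaling the same logic to mutations would require enumerating the space of physically possible DNA alterations and their probabilities, which, as the comment itself notes, is far beyond current modeling capability.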
KF, thanks. The two of us are pretty much going 'round in circles now, but the conversation has been fun and challenging.

Chance Ratcliff
April 14, 2013 at 2:49 PM PDT
CR: I second the motion in re EA. KF

PS: Good job with NL above, and yup, Mung is probably tongue in cheek, but unfortunately the thing he describes is all too real. Hence the OP as a point of reference.

kairosfocus
April 14, 2013 at 1:28 PM PDT
F/N: Re NL and claims on semantic walls, here. Sadly, he is recycling long since answered (and often corrected) assertions, e.g. on definitions of science. A by now all too familiar pattern. KF

kairosfocus
April 14, 2013 at 1:07 PM PDT
KF @42, I suspect Mung was being tongue-in-cheek in the exact context you specify. ;) I wonder why Eric Anderson, a clear, insightful and long-time design thinker and commenter here at UD, is not authoring posts instead of just commenting.

Chance Ratcliff
April 14, 2013 at 1:06 PM PDT
nightlight @33,
"That’s exactly the problem — from your ID perspective you don’t think it is a slippery slope since you presuppose the intelligent context."
Actually ID infers the intelligent cause from the context -- specific and complex arrangements of matter that are not amenable to undirected causes.
"But for neo-Darwinian side, as well as for the curious who are outside and are listening to their counterpoint, you appear to be conceding scribe-less creation of meaningful novelty (such as whole word, or AA in DNA) i.e. your implied context isn’t their implied context."
That's just it. The efficacy of random causes to produce trivial results is intuitive to our perception and validated by our experience. Nobody is really surprised when a chaotic arrangement of Scrabble tiles reveals the word "cat" or "tip" or perhaps even "fore", but the arrangement "Went for a walk, be back soon" is immediately understood to be a message arranged by agency for a purpose. This is the same sort of arrangement we find in DNA code -- specific and complex sequences of code for the purpose of specifying biological machinery. I can hardly see the benefit of attributing both random and purposeful causes to intelligence as a matter of general reasoning.
"The only way to avoid getting trapped into the usual ND style equivocation of “random typo” (by an intelligent agency, which is what you mean) and “random smudge” (which is what they mean by “random error” which they claim is how everything came together), followed by your inevitable backing off to ever higher thresholds of complexity, is to enforce the distinction between the above two meanings throughout."
I'm not really sure what that means. ID does not become trapped by attributing random effects to random causes. I seriously doubt that the real equivocation -- attributing effects explicable by random events to intelligent causes -- is less confusing to the lay person. There are many actual examples of random causes having some effect on designed systems, and this reasoning is so generally intuitive that I can hardly see a downside to allowing that replication errors produce actual errors; it's pretty much inherent to biology in general. There are scores of "loss of function" mutations which may or may not produce a net fitness gain given some environmental factor, as with malaria. There are also genetic diseases with no fitness value at all. If these can be caused by random mutations, should we attribute them to intelligent causes as well? Losing the distinction between random factors and intelligent causes seems genuinely unhelpful here.

I'm fairly sure that no advantage comes from attributing intelligent causes to events which can be explained by replication errors and other events for which randomness is a sufficient cause. However I'm sure we won't reach agreement on this point. ID reaches design inferences by the examination of effects which are not attributable to random causes, such as the complex arrangement of long strings for the purpose of performing a function or transmitting a message. This lines up well with our uniform experience -- such arrangements are the products of intelligent beings. Trying to assign noise an intelligent cause is not productive to ID methodology.

Regardless of our disagreement, my claim still stands: random mutations are a sufficient but not a necessary cause for a limited number of substitutions. We'll have to agree to disagree as to whether this distinction is favorable to design inferences.

P.S. Besides replacing random events with intelligent causes, are there any significant distinctions to be made between your view of biological reality and Darwinian evolution?

Chance Ratcliff
April 14, 2013 at 12:59 PM PDT
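The Scrabble intuition in the comment above can be made quantitative with a crude independence model: treat each character as one draw from a 27-symbol alphabet (26 letters plus a space). Real tile frequencies and punctuation differ, so the numbers are only illustrative:

```python
ALPHABET = 27  # 26 letters plus space; a rough model of random tile draws

for target in ("cat", "went for a walk be back soon"):
    p = ALPHABET ** -len(target)  # chance that one random draw matches exactly
    print(f"{target!r}: {len(target)} characters, P(exact match per draw) ~ {p:.2e}")
```

A three-letter word turns up by chance about once in twenty thousand draws, well within reach of casual shuffling; the sentence-length target is so improbable that no realistic number of trials reaches it, which is why such a message is instantly read as designed.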
Optimus: I agree, this is an oldie but goodie; that is why I asked EA's permission to re-post it. I'll bet it will be studiously ignored or derisively dismissed at the usual objector sites. At this point -- with someone pretending that objections to censorship or expulsion of design thinkers, and to invidious association with Nazism etc. (on a subject where there are serious issues of principle at work that need to be addressed in a sober fashion, instead of with the resort to the nasty well-poisoning tactics I objected to), are strange or incomprehensible [I, for cause, call that enabling behaviour, Madam EL . . . and the screen clips are there to show why] -- I am pretty short on expecting reasonable behaviour from, or extending sympathy to, objectors who act like that, and likewise to those who harbour them on any excuse. KF

kairosfocus
April 14, 2013 at 12:45 PM PDT
Mung: When it was originally said almost a decade ago, it was probably quite innovative. I think it is also pretty clear and helpful, even today. It takes away ever so many excuses, in that one may send someone to this post and ask: now, what is it that you say you do not understand about the basic premises and principles of the design inference again? (Next stop, WACs.) KF

kairosfocus
April 14, 2013 at 12:34 PM PDT
Gregory: A quiet, sad note. For cause, you do not have the trust of this thread owner, and so any further posts in Russian -- I believe -- or another language will be deleted. (And no, no stories that the post is harmless will be good enough. The danger is plain and no precedent will be set.) I only let what is above stand because it documents a problem. Please understand the problem, given the amount of abusive behaviour in and around the Internet regarding the design issue. KF

kairosfocus
April 14, 2013 at 12:30 PM PDT
Random/chance mutations vs non-random/directed mutations: read Not By Chance by Dr Lee Spetner.

Joe
April 14, 2013 at 6:27 AM PDT
@ KF: It's truly amazing that, with such careful expositions of the key issues (the OP being a prime example), objectors to ID can with straight faces continue to sidetrack discussion, motive-monger, disdain correction, willfully misinterpret, and present patently infantile objections. Who can really say how the discussion will progress in the next ten years? For now I'm happy to take comfort in sound argumentation and provocative prose.

Optimus
April 13, 2013 at 9:47 PM PDT
PJ @ 26: Agreed!

Optimus
April 13, 2013 at 9:39 PM PDT
Thanks for posting Eric's primer, KF!

@ EA: That's one of the best discussions of intelligent design I've ever read. Top marks for clarity and focus. I also enjoyed your link to Dembski's takedown of Allen Orr. The incisiveness of his prose is quite delightful to read.

Optimus
April 13, 2013 at 9:37 PM PDT
There's nothing original in the OP. It's all been said before.

Mung
April 13, 2013 at 6:31 PM PDT
@Gregory #20 Zdravstvuyte ("hello"), Gregory. Although my Russian is focused almost exclusively on reading math and physics literature (the prices of Russian textbooks were irresistible on my student stipend), I didn't have any trouble following your Russian passages (they were also more colorful than the English sections). Although we both have one foot in Eastern European and one in Western culture, our migration paths went in opposite directions -- you went from Canada to Russia, while I came from (a country formerly known as) Yugoslavia to the USA (to my second grad school). Either path is a form of mental reboot into a new OS, quite disruptive and disorienting at first, but very refreshing and stimulating over time. This straddling of the same two realms seems to have resulted in both of us often "fighting" against both sides in the ID vs ND war of ideas.

Checking out your blog and some earlier posts in UD, I see strong resonances between your concept of "Human Extension" and several other thought currents, such as the "Extended Phenotype" of Dawkins (which generalizes his 'selfish gene' and 'meme' patterns), the "Omega Point" of Teilhard de Chardin, the mystical egregore, social organisms, as well as the zeitgeist of the 'internet as a superbrain' emerging from numerous authors more recently. All of these ideas (which go way back to Aristotle, at least) identify a very interesting organizing principle of the universe, albeit each capturing only a segment of the whole pattern. After pursuing these white rabbits (and a few others) each down its own trail, toward what seemed to be a common hidden treasure, each trail would somehow terminate unfinished in a dead end, driving me to the next one. I am beginning to suspect that this is how the process is supposed to go and how it will continue, although each trail appears at first as the final awakening into the real thing.

My current "final" trail, which I call "Planckian networks", combines the best insights of those that went before with a few new elements from fundamental physics (pregeometry), math & computer science. Thanks to the stimulating questions and discussion from the folks here at UD, the key elements of the "Planckian networks" model were sketched in a recent thread. The thread was unfortunately archived before I could gather links to the scattered posts into a coherent TOC as my concluding post on the thread, so for convenience of quick intro, here is how that goes:

Planckian Networks intro
#35.. Matrix, Leibniz, Teilhard de Chardin
#58.. Model of consciousness, after death, Bell inequalities
#64.. Harmonization, biochemical networks
#67.. Free will, hybrid intelligence, quantum magic
#92.. SFED, QED, quantum topics
#95.. Carleman Linearization, SFED, panpsychism
#98.. Additive intelligence, pregeometry, fine tuning
#100. Consciousness after death, exceptional path, limits of theories
#103. Self-programming networks
#107. Internal modeling, physics; information (rock in mud)
#109. Science, Russian dolls, mind stuff, internal models, laws vs patterns
#116. Goal oriented, anticipatory systems from pattern recognizers
#171. Attractors as memories, internal models, front loading
#128. Digital physics, complexity science, laws vs patterns
#141. Free will in fractal internal model, crossword puzzle
#143. Participatory front loading
#152. How it works, additive intelligence, composition problem
#155. Quantum measurement theory vs Glauber
#161. Limits of computations, irreducibility of laws, constraints, Fermi paradox
Levels & power of computation, Max Plancks, broken bone
#165. Creation smarter than creator?
#174. Ontological levels, Game of Life, chess
#175. Genetic Algorithms vs Networks vs Dembski
#179. CSI vs networks, capacity of NNs, stereotyping, knowability
#182. Meyer, empirical consciousness
#183. CSI vs networks, limits of Abel, Dembski
#188. Counterexample for Abel
#189. Thinking vs computing
#192. Why simple building blocks

Evolution process vs theory conflation
#20.. Map vs Terrain
#35.. Chess, consciousness vs computation
#191. Concession on microevolution, dice example
#214. Concessions, technological evolution

Natural Science schema
#101. Algorithmically effective elements vs consciousness
#109. Science schema re-explained
#117. Qualia, science
#119. General algorithms
#128. Necessary vs sufficient, algorithmic effectiveness
#135. Algorithm semantics, parasitic elements
#186. Meyer, why cringe?
#210. Semantic wall (KF)
#217. Meyer, citation
#222. Meyer, sloppiness?
#232. Meyer, inductive strength
#233. Meyer's leap
#237. Wisdom of leap
#240. Other links to intelligent mind
#245. Leap details
#247. Dembski, Mere Creation.. mind/intelligence conflation
#250. Wisdom of leap vs James Shapiro, #255 more on edge
#256. Key holders, missing ID hypothesis
#262. Missing ID hypothesis

PS: The previous copy of this post, which had links for the above numbered posts, is stuck in moderation.

nightlight
April 13, 2013 at 4:50 PM PDT
nightlight,
"my response to Gregory I sent right before this one, shows as ‘awaiting moderation’. Why is that?"
Sometimes this happens; for instance, if you include a lot of hyperlinks in the message body, it'll trip the moderation filter.

Chance Ratcliff
April 13, 2013 at 4:11 PM PDT
Chance Ratcliff #27: "It only implies that physical processes contain a degree of uncertainty, which presents as randomness. We wouldn't step out onto the slippery slope of conceding that manufacturing processes were the result of random forces, just by admitting that random events can occur during the process."

That's exactly the problem -- from your ID perspective you don't think it is a slippery slope, since you presuppose the intelligent context. But for the neo-Darwinian side, as well as for the curious outsiders listening to their counterpoint, you appear to be conceding scribe-less creation of meaningful novelty (such as a whole word, or an AA in DNA), i.e. your implied context isn't their implied context.

Substitution-generated novelty, like a mistyped word or a mangled sentence, is a much more intelligent error than a sequence of random smudges (which is what they are talking about under "random" alteration of DNA, a.k.a. "random mutation"). While you don't assume the absence of a scribe in the creation of that meaningful error, it is taken and understood as such by the other side and by the outsiders, from their perspectives.

The only way to avoid getting trapped in the usual ND-style equivocation between "random typo" (by an intelligent agency, which is what you mean) and "random smudge" (which is what they mean by "random error", which they claim is how everything came together), followed by your inevitable backing off to ever higher thresholds of complexity, is to enforce the distinction between the above two meanings throughout. The only way to keep the two distinct is to use their "random smudge" and insist they prove that this -- the random smudge of an ink drop in the absence of an intelligent context (a scribe) -- is how the word 'sun' was produced on paper. To do that they need to model and enumerate all possible random smudges from ink drops, compute the odds of the ink pattern for 'sun', and then match these odds against the number of tries available for the alleged smudging. Only then can they justify the claim that the whole novelty-generating process which produced "sun" is random, i.e. capable of occurring in the absence of any intelligent agency.

Translated to biology, one has to insist that intelligent agency is active throughout the AA substitution (just as the scribe was active when the wrong word 'sun' was put on paper). The AA substitution is a higher-level, intelligent error that can arise only within the activity of an intelligent process (such as DNA replication). The AA substitution is not just any random alteration of DNA, since there are astronomical numbers of possible molecules which can be produced by truly random DNA alteration consistent with the laws of physics & chemistry, which is the kind of "random" process they claim explains life & its evolution.

In short, with pervasive semantic ambiguities in this debate, one cannot drop the intelligent context, hence the intelligent agency acting through it, at any point. The intelligent agency is acting via this intelligent context throughout every transformation of a cell (or phenotype), just as the scribe is acting through the ink, pen and paper at all points in the writing of the manuscript (errors and all). If you drop the intelligent scribe out of the picture for the writing of the wrong word 'sun' instead of 'son', then you leave gaps which, according to your opponents, can produce the word 'sun' without a scribe (i.e. via a sequence of random smudges only).

Intelligence in this type of system is additive, and surrendering even a sand-grain-sized bit of intelligent product to a "random" process can be compounded, through many such grains, into a mountain of intelligent product. For example, any time they can connect a couple of such scribe-less gaps via a random smudge produced in the lab, you will be backing off to the next level of complexity as the one requiring the scribe, while everything lesser can occur scribe-less via a random process -- needlessly conceding an ever greater domain to the 'random process' as the creator of novelty.

PS: My response to Gregory, sent right before this one, shows as 'awaiting moderation'. Why is that?

nightlight
April 13, 2013 at 3:56 PM PDT
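nightlight's "odds versus available tries" requirement reduces to a standard calculation: given a per-trial probability p and N independent trials, the chance of at least one success is 1 - (1 - p)^N. A minimal sketch with purely illustrative numbers:

```python
import math

def p_at_least_one(p, trials):
    """Chance of at least one success in `trials` independent attempts."""
    # Computed via logs so tiny p and huge trial counts don't underflow.
    return -math.expm1(trials * math.log1p(-p))

# Illustrative pairs: per-try odds of a "smudge" forming a given target,
# against the number of tries available for the alleged smudging.
for p, n in [(1e-6, 1e9), (1e-42, 1e30)]:
    print(f"p = {p:.0e}, tries = {n:.0e}: "
          f"P(at least one hit) = {p_at_least_one(p, n):.3g}")
```

When the trial budget dwarfs 1/p the hit is effectively certain; when 1/p dwarfs the budget, the expected hits stay negligible, which is the comparison the comment says the "random smudge" claim must actually win.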