Uncommon Descent Serving The Intelligent Design Community
Category: Intelligent Design

“It’s in your genes” theory fading in the wake of epigenetics?

In “Getting Over the Code Delusion” (The New Atlantis, Summer 2010), Steve Talbott muses on the mystique around the genetic code in past decades, especially in the light of modern findings: Meanwhile, the epigenetic revolution is slowly but surely making its way into the popular media — witness the recent Time magazine cover story, “Why DNA Isn’t Your Destiny.” The shame of it is that most of the significance of the current research is still being missed. Judging from much that is being written, one might think the main thing is simply that we’re gaining new, more complex insights into how to treat the living organism as a manipulable machine. The one decisive lesson I think we can draw from the work Read More ›

The Enduring Warfare Theses

Though historians tell us that the warfare thesis—the idea that the relationship between science and religion has been mostly one of conflict—is discredited, there seem to be a great many who have not yet learned of its demise. Not only is the warfare thesis alive and well in popular culture, it is also promoted by those who probably should know better. In fact, in the origins debate, each side has its own version. Why is the warfare thesis so enduring? One reason is that, like any good lie, there is some truth to it. Probably a better reason is its rhetorical power. But perhaps the main reason is that we need it—our religion demands it. Read more

We hold these truths to be self-evident…

Can you spot the common theme in these historic statements?

“We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” – Excerpt from the American Declaration of Independence, which was ratified on July 4, 1776.

“Men are born and remain free and equal in rights. Social distinctions may be founded only upon the general good.” – Declaration of the Rights of Man and of the Citizen (1789), article 1. The Declaration was approved by the National Constituent Assembly of France on August 26, 1789.

“Four score and seven years ago our fathers brought forth on this continent a new nation, conceived in liberty and dedicated to the proposition that all men are created equal.” – Excerpt from President Lincoln’s Gettysburg Address, delivered on November 19, 1863.

“All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.” – Universal Declaration of Human Rights (1948), article 1. The Declaration was adopted by the United Nations General Assembly on 10 December 1948 at the Palais de Chaillot, Paris.

“I have a dream that one day this nation will rise up and live out the true meaning of its creed: ‘We hold these truths to be self-evident: that all men are created equal.’” – Excerpt from the famous ‘I Have a Dream’ speech by Martin Luther King, Jr., delivered on 28 August 1963, at the Lincoln Memorial, Washington, D.C.

(Emphases mine – VJT.)

Belief in human equality is a vital part of our democratic heritage. Take this belief away, and the moral foundations of Western civilization immediately collapse, like a house of cards.

Atheists divided

Sad to say, many (perhaps most) of the world’s 25 most influential living atheists don’t seem to share this belief. Specifically, many of these atheists don’t believe that newborn babies have the same moral worth as human adults.
Read More ›

[Figure: The Eng. Derek Smith Cybernetic Model]

ID Foundations, 2: Counterflow, open systems, FSCO/I and self-moved agents in action

In two recent UD threads, frequent commenter AI Guy, an Artificial Intelligence researcher, has thrown down the gauntlet:

Winds of Change, 76:

By “counterflow” I assume you mean contra-causal effects, and so by “agency” it appears you mean libertarian free will. That’s fine and dandy, but it is not an assertion that can be empirically tested, at least at the present time.

If you meant something else by these terms please tell me, along with some suggestion as to how we might decide if such a thing exists or not. [Emphases added]

ID Does Not Posit Supernatural Causes, 35:

Finally there is an ID proponent willing to admit that ID cannot assume libertarian free will and still claim status as an empirically-based endeavor. [Emphasis added] This is real progress!

Now for the rest of the problem: ID still claims that “intelligent agents” leave tell-tale signs (viz FSCI), even if these signs are produced by fundamentally (ontologically) the same sorts of causes at work in all phenomena . . . . since ID no longer defines “intelligent agency” as that which is fundamentally distinct from chance + necessity, how does it define it? It can’t simply use the functional definition of that which produces FSCI, because that would obviously render ID’s hypothesis (that the FSCI in living things was created by an intelligent agent) completely tautological. [Emphases original. NB: ID blogger Barry Arrington had simply said: “I am going to make a bold assumption for the sake of argument. Let us assume for the sake of argument that intelligent agents do NOT have free will . . . ” (Emphases added.)]

This challenge brings into sharp focus the foundational issue of counterflow: constructive work by designing, self-moved, initiating, purposing agents as a key concept and explanatory term in the theory of intelligent design. For instance, we may see from leading ID researcher William Dembski’s No Free Lunch:

. . .[From commonplace experience and observation, we may see that:]  (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi. HT: ENV.) [Emphases and explanatory parenthesis added.]

This is, of course, directly based on and aptly summarises our routine experience and observation of designers in action.

For, designers routinely purpose, plan and carry out constructive work directly or through surrogates (which may be other agents, or automated, programmed machines). Such work often produces functionally specific, complex organisation and associated information [FSCO/I; a new descriptive abbreviation that brings the organised components and the link to FSCI (as was highlighted by Wicken in 1979) into central focus].

ID thinkers argue, in turn, that FSCO/I is an empirically reliable sign pointing to intentionally and intelligently directed configuration — i.e. design — as its signified cause.

And, many such thinkers further argue that:

if, P: one is not sufficiently free in thought and action to sometimes actually and truly decide by reason and responsibility (as opposed to: simply playing out the subtle programming of blind chance and necessity mediated through nature, nurture and manipulative indoctrination)

then, Q: the whole project of rational investigation of our world based on observed evidence and reason — i.e. science (including AI) — collapses in self-referential absurdity.

But, we now need to show that . . .

Read More ›

Robert Marks interviewed by Tom Woodward

Tom Woodward, author of DOUBTS ABOUT DARWIN and DARWIN STRIKES BACK, interviewed Robert J. Marks about his work at the Evolutionary Informatics Lab. For the podcast, go here: “Darwin or Design?” (program starts at 5:08 | actual interview starts at 7:52)

Epigenome: Better find a new use for that pocket CD of your genome

Remember when, as sociologist Dorothy Nelkin tells it,

The language used by geneticists to describe the genes is permeated with biblical imagery. Geneticists call the genome the “Bible,” the “Book of Man” and the “Holy Grail.” They convey an image of this molecular structure as more than a powerful biological entity: it is also a mystical force that defines the natural and moral order. And they project an idea of genetic essentialism, suggesting that by deciphering and decoding the molecular text they will be able to reconstruct the essence of human beings, unlock the key to human nature. As geneticist Walter Gilbert put it, understanding our genetic composition is the ultimate answer to the commandment “know thyself.” Gilbert introduces his lectures on gene sequencing by pulling a compact disk from his pocket and announcing to his audience, “This is you.”*

At ScienceDaily (Jan. 14, 2011), we learn that, after the complete draft of the human genome was released in 2003, the focus is increasingly on the epigenome:

Whereas the genome is the same in every cell of an organism, the epigenome of every cell type is different. It is because of the epigenome that a liver cell is not a brain cell is not a bone cell.

From the genome, we learned? …

“We learned many things from the Human Genome Project,” Elgin says, “but of course it didn’t answer every question we had!

“Including one of the oldest: We all start life as a single cell. That cell divides into many cells, each of which carries the same DNA. So why are we poor, bare, forked creatures, as Shakespeare put it, instead of ever-expanding balls of identical cells?

“This [epigenome] work,” says Elgin, “will help us learn the answer to this question and to many others. It will help us to put meat on the bones of the DNA sequences.”

You know, it almost makes one go all religious and say: Re the “Bible,” the “Book of Man” and the “Holy Grail,” worship the creator, not the creation. And recycle your CDs. Read More ›

ID Does Not Posit Supernatural Causes

The National Science Teachers Association (NSTA) has an official position on the nature of “science” here. For the reasons set forth below, ID proponents should have no problem with the NSTA conceptualization. The NSTA position emphasizes the following characteristics of science: Scientific knowledge is simultaneously reliable and tentative. Having confidence in scientific knowledge is reasonable while realizing that such knowledge may be abandoned or modified in light of new evidence or reconceptualization of prior evidence and knowledge. Although no single universal step-by-step scientific method captures the complexity of doing science, a number of shared values and perspectives characterize a scientific approach to understanding nature. Among these are a demand for naturalistic explanations supported by empirical evidence that are, at least Read More ›

The announced “death” of the Fine-tuning Cosmological Argument seems to have been over-stated

In recent days, there has been a considerable stir in the blogosphere, as Prof. Don Page of the University of Alberta has issued two papers and a slide show that purport to show the death of — or at least significant evidence against — the fine-tuning cosmological argument. (Cf. here and here at UD. [NB: A 101-level summary and context for the fine-tuning argument, with onward links, is here. A fairly impressive compendium of articles, links and videos on fine-tuning is here. A video summary is here, from that compendium. (Privileged Planet at Amazon)])

[youtube guHodt-7Q7A]

However, an examination of the shorter of the two papers by the professor will show that he has apparently overlooked a logical subtlety. He has in fact only argued that there may be a second, fine-tuned range of possible values for the cosmological constant. This may be seen from p. 5 of that paper:

. . . with the cosmological constant being the negative of the value for the MUM that makes it have present age t0 = H0^-1 = 10^8 years/alpha, the total lifetime of the anti-MUM model is 2.44 t0 = 33.4 Gyr.

Values of [L] more negative than this would presumably reduce the amount of life per baryon that has condensed into galaxies more than the increase in the fraction of baryons that condense into galaxies in the first place, so I would suspect that the value of the cosmological constant that maximizes the fraction of baryons becoming life is between zero and –[L]0 ~ –3.5 * 10^-122, with a somewhat lower magnitude than the observed value but with the opposite sign. [Emphases added, and substitutes made for symbols that give trouble in browsers.]
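(As a quick arithmetic check on the quoted figures: assuming the “alpha” above denotes the fine-structure constant, roughly 1/137 — an interpretation, not something stated in the excerpt — the numbers are mutually consistent:

\[ t_0 \;=\; H_0^{-1} \;=\; \frac{10^{8}\ \text{yr}}{\alpha} \;\approx\; 10^{8} \times 137\ \text{yr} \;\approx\; 13.7\ \text{Gyr}, \qquad 2.44\, t_0 \;\approx\; 2.44 \times 13.7\ \text{Gyr} \;\approx\; 33.4\ \text{Gyr}. \]

That is, the quoted “present age” matches the commonly cited age of the universe, and the 33.4 Gyr total lifetime follows directly.)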

Plainly, though, if one is proposing a range of values whose width is of order 10^-122, one is discussing a fairly fine-tuned figure.

That is, you are simply arguing for a second possible locus of fine-tuning on the other side of zero.

(And that would still be so even if the new range were zero to minus several parts in 10^-2 [a few percent], rather than minus several parts in 10^-122 [a few percent of a trillionth of a trillionth of . . . ten times over]. For a sense of scale, a few parts in a trillion is roughly the ratio of the size of a bacterium to twice the length of Florida, or to the lengths of Cuba, Honshu in Japan, Cape York in Australia, Great Britain or Italy.)
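To put rough, illustrative numbers on that comparison (a bacterium of about one micron against land masses of the order of a thousand kilometres; both are round figures assumed here):

\[ \frac{1\ \mu\text{m}}{1000\ \text{km}} \;=\; \frac{10^{-6}\ \text{m}}{10^{6}\ \text{m}} \;=\; 10^{-12}, \]

i.e. about one part in a trillion. The figure of 10^-122 in view above is smaller still by some 110 orders of magnitude.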

Read More ›

The death of fine-tuning?

The blogosphere is abuzz with reports about a physics paper, Evidence against fine-tuning for life, written by an evangelical Christian physicist named Don Page, professor of physics at the University of Alberta. The paper is surprisingly non-technical and very easy to read. Also worth reading is Dr. Don Page’s non-technical online presentation, Does God so love the multiverse? Professor Page has since rewritten this presentation as a 26-page scientific article, available here.

The gist of Professor Page’s latest paper is that in an optimally designed fine-tuned universe, we’d expect the fraction of baryons (particles composed of three quarks, such as protons and neutrons) that form organized structures (such as galaxies and eventually living things) to be maximized. However, the facts do not bear this out. In our universe, the observed value of the cosmological constant, lambda-0, is very slightly positive – about 3.5 x 10^(-122) – whereas in an optimally designed universe, the cosmological constant (lambda) should be very slightly negative – somewhere between zero and minus 3.5 x 10^(-122):
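In symbols (a compact paraphrase of the claim just stated, where f(Λ) stands for the fraction of baryons that end up in organized structures; this notation is introduced here for illustration and is not Page’s own):

\[ \Lambda_{\text{obs}} \;\approx\; +3.5 \times 10^{-122}, \qquad \Lambda_{\text{opt}} \;=\; \arg\max_{\Lambda} f(\Lambda) \;\in\; \left(-3.5 \times 10^{-122},\; 0\right). \]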
Read More ›

Hart Fails to Connect the Dots

I commend to our readers David Bentley Hart’s article, A Philosopher in the Twilight, in the February 2011 First Things. Dr. Hart muses over Martin Heidegger’s late philosophy, especially his views regarding the connection between the Western intellectual tradition and nihilism. I admire and respect Dr. Hart greatly. His new article is, as usual, full of thought-provoking insights displaying his all-too-rare combination of deep learning, wisdom and the ability to write engaging prose. The following passage from the article is puzzling to me, though: It simply cannot be denied that the horrors of the last century were both conceptually and historically inseparable from some of the deepest principles of modernity’s founding ideologies. The ‘final solution’ was a kind of Read More ›

ID Foundations: The design inference, warrant and “the” scientific method

It has been said that Intelligent design (ID) is the view that it is possible to infer from empirical evidence that “certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection.” This puts the design inference at the heart of intelligent design theory, and raises the questions of its degree of warrant and its relationship to the — insofar as a “the” is possible — scientific method.

Leading Intelligent Design researcher William Dembski has summarised the actual process of inference:

“Whenever explaining an event, we must choose from three competing modes of explanation. These are regularity [i.e., natural law], chance, and design.” When attempting to explain something, “regularities are always the first line of defense. If we can explain by means of a regularity, chance and design are automatically precluded. Similarly, chance is always the second line of defense. If we can’t explain by means of a regularity, but we can explain by means of chance, then design is automatically precluded. There is thus an order of priority to explanation. Within this order regularity has top priority, chance second, and design last”  . . . the Explanatory Filter “formalizes what we have been doing right along when we recognize intelligent agents.” [Cf. Peter Williams’ article, The Design Inference from Specified Complexity Defended by Scholars Outside the Intelligent Design Movement, A Critical Review, here. We should in particular note his observation: “Independent agreement among a diverse range of scholars with different worldviews as to the utility of CSI adds warrant to the premise that CSI is indeed a sound criterion of design detection. And since the question of whether the design hypothesis is true is more important than the question of whether it is scientific, such warrant therefore focuses attention on the disputed question of whether sufficient empirical evidence of CSI within nature exists to justify the design hypothesis.”]

The design inference process as described can be represented in a flow chart:

[Figure: explan_filter (the Explanatory Filter flowchart)]

Fig. A: The Explanatory Filter and the inference to design, as applied to various aspects of an object, process or phenomenon, and in the context of the generic scientific method.

(So, we first envision nature acting by low-contingency, law-like mechanical necessity, such as with F = m*a; think of a heavy unsupported object near the earth’s surface falling with initial acceleration g = 9.8 N/kg or so. That is the first default. Similarly, we may see high contingency knocking out the first default: under similar starting conditions, there is a broad range of possible outcomes. If things are highly contingent in this sense, the second default is CHANCE. That is only knocked out if an aspect of an object, situation or process exhibits, simultaneously: (i) high contingency, (ii) tight specificity of configuration relative to possible configurations of the same bits and pieces, and (iii) high complexity or information-carrying capacity, usually beyond 500 – 1,000 bits. In such a case, we have good reason to infer that the aspect of the object, process or phenomenon reflects design or . . . following the terms used by Plato 2350 years ago in The Laws, Bk X . . . the ART-ificial, or contrivance, rather than nature acting freely through undirected blind chance and/or mechanical necessity.

[NB: This trichotomy across necessity and/or chance and/or the ART-ificial is so well established empirically that it needs little defense. Those who wish to suggest that there may be a fourth possibility are the ones who first need to show us such before they are to be taken seriously. Moreover, the distinction between “nature” (= “chance and/or necessity”) and the ART-ificial is a reasonable and empirically grounded one: just look at a list of ingredients and nutrients on a food package label. The loaded rhetorical tactic of suggesting, implying or accusing that design theory really only puts up a religiously motivated way to inject the supernatural as the real alternative to the natural therefore fails. (Cf. the UD correctives 16 – 20 here, as well as 1 – 8 here.) And no, when, say, the averaging out of random molecular collisions with a wall gives rise to a steady average pressure, that is a case of empirically reliable, law-like regularity emerging from a strong characteristic of such a process when sufficient numbers are involved, namely the statistics of very large numbers (it is easy to have 10^20 molecules or more at work), so the fluctuation is relatively low, unlike what we see with particles undergoing Brownian motion. That is, in effect, low-contingency mechanical necessity in the sense we are interested in, in action. So, for instance, we may derive for ideal gas particles the relationship P*V = n*R*T as a reliable law.])
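A minimal, illustrative sketch of the per-aspect decision logic just described may help fix ideas. (In Python; the Aspect record, its attribute names and the toy examples are assumptions made here purely for illustration and are not taken from any ID software. Only the rough 500-bit lower threshold comes from the caption above.)

```python
from collections import namedtuple

# Illustrative record for one aspect of an object, process or phenomenon.
# The attribute names are assumptions for this sketch.
Aspect = namedtuple("Aspect", ["low_contingency", "specified", "info_bits"])

def explanatory_filter(aspect: Aspect) -> str:
    """Sketch of the per-aspect decision logic described in Fig. A."""
    # First default: low contingency, law-like regularity -> mechanical necessity.
    if aspect.low_contingency:
        return "necessity (law-like regularity)"
    # High contingency: the second default is chance, unless the aspect is
    # simultaneously tightly specified AND complex (roughly 500+ bits).
    if aspect.specified and aspect.info_bits >= 500:
        return "design (intelligently directed configuration)"
    return "chance"

# Toy illustrations only:
print(explanatory_filter(Aspect(True, False, 0)))     # falling object: necessity
print(explanatory_filter(Aspect(False, False, 800)))  # random noise: chance
print(explanatory_filter(Aspect(False, True, 800)))   # long functional text: design
```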

Explaining (and discussing) in steps:

1 –> As was noted in background remarks 1 and 2, we commonly observe signs and symbols, and infer on best explanation to underlying causes or meanings. In some cases, we assign causes to (a) natural regularities tracing to mechanical necessity [i.e. “law of nature”], in others to (b) chance, and in yet others we routinely assign cause to (c) intentionally, intelligently and purposefully directed configuration, or design. Or, in leading ID researcher William Dembski’s words, (c) may be further defined in a way that shows what intentional and intelligent, purposeful agents do, and why it results in functional, specified complex organisation and associated information:

. . . (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi. HT: ENV.)

Read More ›

The Hierarchy of Evolutionary Apologetics: Protein Evolution Case Study

A common retort from evolution’s defenders is that all those scientists can’t be wrong. Is it conceivable that so many scientific papers and reports, with their conclusions about evolution, are making the same mistake? Before answering this we first must understand the hierarchy of the evolution apologetics literature. At the base of the pyramid are the scientific papers documenting new research findings. Next up are the review papers that organize and summarize the state of the research. And finally there is the popular literature, such as newspaper and magazine articles, and books. Across this hierarchy evolutionists make different types of claims that should not be blindly lumped together. Yes, there are problems across the spectrum, but they tend to be Read More ›

[Figure: osc_rsc_fsc]

Background Note: On Orderly, Random and Functional Sequence Complexity

In 2005, David L. Abel and Jack T. Trevors published a key article on order, randomness and functionality that sets a further context for appreciating the warrant for the design inference. The publication data and title for the peer-reviewed article are as follows: Theor Biol Med Model. 2005; 2: 29. Published online 2005 August 11. doi: 10.1186/1742-4682-2-29. PMCID: PMC1208958. Copyright © 2005 Abel and Trevors; licensee BioMed Central Ltd. “Three subsets of sequence complexity and their relevance to biopolymeric information.” A key figure (NB: in the public domain) in the article was their Fig. 4: Figure 4: Superimposition of Functional Sequence Complexity onto Figure 2. The Y1 axis plane plots the decreasing degree of algorithmic compressibility as complexity increases from Read More ›
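As a rough, hedged illustration of the compressibility axis mentioned in that figure description, one can use generic zlib compression as a crude stand-in for algorithmic compressibility (this is not Abel and Trevors’ own measure, and the sequences below are toy examples):

```python
import random
import zlib

def compression_ratio(seq: str) -> float:
    """Compressed size over original size: lower means more algorithmically compressible."""
    data = seq.encode()
    return len(zlib.compress(data, 9)) / len(data)

random.seed(0)

# Orderly sequence complexity (OSC): highly repetitive, hence highly compressible.
osc_like = "ATATATATAT" * 100

# Random sequence complexity (RSC): near-incompressible, but carries no specification.
rsc_like = "".join(random.choice("ACGT") for _ in range(1000))

print(f"OSC-like repetitive sequence: ratio ~ {compression_ratio(osc_like):.2f}")
print(f"RSC-like random sequence:     ratio ~ {compression_ratio(rsc_like):.2f}")

# Per Abel and Trevors' Fig. 4, functional sequence complexity (FSC) plots at
# intermediate compressibility: neither maximally ordered nor maximally random,
# while also being specified by function, a property this crude proxy cannot measure.
```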