Uncommon Descent Serving The Intelligent Design Community

Aurelio Smith’s Analysis of Active Information


Recently, Aurelio Smith had a guest publication here at Uncommon Descent entitled Signal to Noise: A Critical Analysis of Active Information. Most of the post is taken up by a recounting of the history of active information. He also quotes the criticisms of Felsenstein and English, to which we have responded at Evolution News and Views: These Critics of Intelligent Design Agree with Us More Than They Seem to Realize. Smith then spends a few paragraphs developing his own objections to active information.

Smith argues that viewing evolution as a search is incorrect because organisms/individuals aren't searching; they are being acted upon by the environment:

Individual organisms or populations are not searching for optimal solutions to the task of survival. Organisms are passive in the process, merely affording themselves of the opportunity that existing and new niche environments provide. If anything is designing, it is the environment. I could suggest an anthropomorphism: the environment and its effects on the change in allele frequency are “a voice in the sky” whispering “warmer” or “colder”.

When we say search, we simply mean a process that can be modeled as a probability distribution. Smith's concern is irrelevant to that question. However, even if we are trying to model evolution as an optimization or solution-search problem, Smith's objection doesn't make any sense. The objects of a search are always passive in the search. Objecting that the organisms aren't searching is akin to objecting that Easter eggs don't find themselves. That's not how any kind of search works. All search is the environment acting on the objects in the search.
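To make "search as a probability distribution" concrete, here is a minimal sketch (ours; the dice and the target are illustrative assumptions, not an example from any paper): any process that assigns probabilities to outcomes in a space can be compared against a blind baseline.

```python
# A "search", in this broad sense, is just a probability distribution over
# the space being searched. Two illustrative distributions over a die's faces:
fair_die = {face: 1 / 6 for face in range(1, 7)}  # the blind/null baseline
loaded_die = {1: 0.05, 2: 0.05, 3: 0.05, 4: 0.75, 5: 0.05, 6: 0.05}  # a biased search

def success_probability(search, target):
    # Probability that one draw from the distribution lands in the target set.
    return sum(prob for outcome, prob in search.items() if outcome in target)

target = {4}
p = success_probability(fair_die, target)    # blind search: 1/6
q = success_probability(loaded_die, target)  # biased search: 0.75
```

Nothing here requires the objects of the search to do anything; the distribution itself is the search.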

Rather than demonstrating the “active information” in Dawkins’ Weasel program, which Dawkins freely confirmed is a poor model for evolution with its targeted search, would DEM like to look at Wright’s paper for a more realistic evolutionary model?

This is a rather strange comment. Smith quoted our discussion of Avida previously. But here he implies that we've only ever discussed Dawkins' Weasel program. We've discussed Avida, Ev, Steiner Trees, and Metabiology. True, we haven't looked at Wright's paper, but it's completely unreasonable to suggest that we've only discussed Dawkins' "poor model."

Secondly, “fitness landscape” models are not accurate representations of the chaotic, fluid, interactive nature of the real environment. The environment is a kaleidoscope of constant change. Fitness peaks can erode and erupt.

It is true that a static fitness landscape is an insufficient model for biology. That is why our work on conservation of information does not assume a static fitness landscape. Our model is deliberately general enough to handle any kind of feedback mechanism.

While I’m grateful to Smith for taking the time to write up his discussion, I find it very confused. The objections he raises don’t make any sense.

Comments
Aurelio:
If the environment is truly static, there will be no selective pressure.
That is incorrect. There may not be any selection pressure once the population's fitness is optimized, but there will be until then. So what does "truly static" mean?
Variation appears due to well-understood processes of imperfect replication etc.
Except it isn't "well understood". Basic biological reproduction is irreducibly complex and as such requires an Intelligent Designer. Isn’t Lenski’s experiment a static environment?
Nope. It’s boom and bust.
I think whether or not it is static is debatable.

Joe, May 1, 2015 at 10:34 AM PDT
Why can't evolution occur in a static environment? Is Aurelio really suggesting that mutations will not occur in a static environment? Isn't Lenski's experiment a static environment?

Joe, May 1, 2015 at 10:05 AM PDT
Bob O'H: Let me follow up by clipping the opening words, verbatim: >> 1. The Search Matrix All but the most trivial searches are needle-in-the-haystack problems. Yet many searches successfully locate needles in haystacks. How is this possible? A successful search locates a target in a manageable number of steps. According to conservation of information, nontrivial searches can be successful only by drawing on existing external information, outputting no more information than was inputted [1]. In previous work, we made assumptions that limited the generality of conservation of information, such as assuming that the baseline against which search performance is evaluated must be a uniform probability distribution or that any query of the search space yields full knowledge of whether the candidate queried is inside or outside the target. In this paper, we remove such constraints and show that conservation of information holds quite generally. We continue to assume that targets are fixed. Search for fuzzy and moveable targets will be the topic of future research by the Evolutionary Informatics Lab. In generalizing conservation of information, we first generalize what we mean by targeted search. The first three sections of this paper therefore develop a general approach to targeted search. The upshot of this approach is that any search may be represented as a probability distribution on the space being searched. Readers who are prepared to accept that searches may be represented in this way can skip to section 4 and regard the first three sections as stage-setting. Nonetheless, we suggest that readers study these first three sections, if only to appreciate the full generality of the approach to search we are proposing and also to understand why attempts to circumvent conservation of information via certain types of searches fail. Indeed, as we shall see, such attempts to bypass conservation of information look to searches that fall under the general approach outlined here; moreover, conservation of information, as formalized here, applies to all these cases. >> I trust the point about reading in context is clear enough. KF

kairosfocus, May 1, 2015 at 09:34 AM PDT
Bob O'H: All I am doing is pointing out the actual controlling context which the authors have a right to assume will be taken into account in reading. Text out of context = pretext is a classic problem of interpretation. KF

kairosfocus, May 1, 2015 at 09:03 AM PDT
Winston #35  
We don’t actually assume a uniform distribution. The contribution of “A General Theory of Information Cost Incurred by Successful Search” is to show that conservation of information still applies under a non-uniform initial distribution.
Which pdf are we talking about? You identify a search with a pdf and your paper appears to show that the LCI holds even when that pdf is not uniform. But that is not my point. You conclude that finding a more efficient search has an “information cost” which seems to be identified with the probability of finding that more efficient search, i.e. the chances of success in the search for the search. This implies you must have some kind of pdf in mind for the space of all possible searches. That is the pdf I am questioning. Otherwise the pdf might simply assign zero probability to all searches that are less efficient than the improved search, which would certainly scupper the LCI. Nowhere can I find an explicit explanation of the pdf of possible searches, although I think you are assuming each of those matrices which identify a search are equally probable.

Mark Frank, May 1, 2015 at 08:30 AM PDT
All of this seems to be assumed in your work rather than made explicit and when made explicit raises some rather fundamental questions. What is the probability distribution of searches? There are many ways of enumerating searches – how do you justify your choice? On what basis do you assume each one is equally probable?
We don't actually assume a uniform distribution. The contribution of "A General Theory of Information Cost Incurred by Successful Search" is to show that conservation of information still applies under a non-uniform initial distribution. The conclusion of conservation of information is that in order to produce complex life, the initial distribution of the universe must have been configured in such a way as to increase the probability of producing complex life.
Is there a process that springs to mind that cannot be modeled as a probability distribution? This is taking the path of defining something so broadly that “search” means “anything”.
So?
I suggested a look at Sewall Wright’s paper as his approach is a classic attempt to describe gene combinations as a fitness landscape. He does not talk of environments as “landscapes”.
I'm sure there is some merit in looking at Wright's paper. But is he really doing anything that hasn't been repeated in computer models?
Agreed. I’d go further. If you model evolution in a truly static fitness landscape, there will be no evolution.
Many computer models of evolution do indeed use a static fitness landscape and do in fact exhibit evolution of a sort. So either your prediction is utterly incorrect, or I've not understood it.
Are you referring to “active information”? How does the idea of “the difference between the endogenous and exogenous information” help to address the dynamic, shifting interplay between a population of organisms and its niche?
What I'm saying is that conservation of information merely requires that your search be a probability distribution. Your dynamic shifting process is still modelable as a probability distribution.
Aurelio Smith has already pointed out the problem with this, but to put some specifics on it, under this definition, rolling a die would be a search. Indeed, if you don’t make artificial restrictions on what you mean by a probability distribution, rolling a die with 6 4's would be a search. As would diffusion, if you want to look at something dynamic in time.
Indeed, all of those are searches.

Winston Ewert, May 1, 2015 at 08:03 AM PDT
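Ewert's point about static landscapes is easy to check with a toy simulation (ours; the bitstring genomes, rates, and population size are arbitrary assumptions, not any published model): mutation plus selection on a fitness function that never changes still yields evolution of a sort.

```python
import random

random.seed(0)

L, POP, GENS, MU = 20, 50, 100, 0.02  # genome length, population, generations, mutation rate

def fitness(genome):
    # Static landscape: fitness is the number of 1-bits and never changes.
    return sum(genome)

population = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
initial_best = max(fitness(g) for g in population)

for _ in range(GENS):
    # Fitness-proportional selection of parents...
    parents = random.choices(population, weights=[fitness(g) + 1 for g in population], k=POP)
    # ...followed by per-bit mutation with probability MU.
    population = [[bit ^ (random.random() < MU) for bit in g] for g in parents]

final_best = max(fitness(g) for g in population)  # climbs despite the static landscape
```

The best fitness in the final population is (with overwhelming probability) higher than in the initial one, even though the landscape never moved.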
kf - if Ewert meant his definition within a context then hopefully he'll clarify it here in the comments. As it is, his statement seems pretty unambiguous, with no suggestion that he means it within a certain context. I hope he'll do this - it would be good to know precisely what he means by a search.

Bob O'H, May 1, 2015 at 08:02 AM PDT
Bob O'H (& attn MF): Ewert spoke in a context, with three initial background sections for context in a 40+ pp paper. That context from outset is blind, needle in haystack search and the probability distributions relate to taking searches, which are samples of config spaces. And in particular, blind samples. Notice how the main body opens:
All but the most trivial searches are needle-in-the-haystack problems. Yet many searches successfully locate needles in haystacks. How is this possible? A successful search locates a target in a manageable number of steps. According to conservation of information, nontrivial searches can be successful only by drawing on existing external information, outputting no more information than was inputted
So, whatever infelicities of expression you may see or may think you see, that controlling context should be borne in mind. The probability distributions are in effect ways to address degrees of bias in samples, including samples based on an incremental search as is defined with reference to the search matrix, which builds in a next-step process. The issue is, how do evolutionary type searches outperform the yardstick, flat random sample blind needle in haystack search? The answer is, by input active information, such as obtains with, say, a warmer/colder signal pattern. In that broad context, different search strategies are effectively the same as differing probability distributions affecting sample choices. This also points to the case of search for a golden search, which puts you down on a target zone. Higher order searches for good searches are going to challenge you, so that they will not -- if blind -- be likely to hand you a golden search. And given the strong statistical constraints imposed by the needle in haystack situation, it is reasonable to infer that a strategy very likely to succeed has found a way to add in info that guides the search, making the otherwise infeasible feasible, and that the performance gap is a measure of injected, bridging, active information. KF

kairosfocus, May 1, 2015 at 07:33 AM PDT
kf - in a game of dice a search metaphor makes sense, but according to Ewert just rolling a die (for whatever reason) is a search.

Bob O'H, May 1, 2015 at 06:16 AM PDT
Bob O'H: Context. In a game where dice are used, the value on a toss will feed an outcome, and such an outcome may shape onward steps etc. E.g. starting at a random location, I can use dice tosses to guide steps in a random walk. E.g. Red, 1-3: that many steps backwards; 4-6: 1-3 steps forward; and Green, similar but left-right. This would explore a space and constitutes a search, esp. if there is a reward function based on where one lands. Thus, a prob distribution can be integral to or tantamount to a search. So, the partly blind chance driven search of a config space makes sense. And indeed in the introduction to the paper such a context is explored via a drone over a field of cups covering items. (A picture with hex packed pills is used to illustrate.) The basic point is that we have a reference search, take a flat random sample or a random walk (maybe with drift) etc. As we are under needle in haystack blind search circumstances, the target zones are maximally unlikely, and the other options are samples with a bias. But the blindness extends to the search for a golden or at least good search that plunks us down next to a target zone. That comes from a higher order space. If W possibilities are there, direct, the searches as samples come from a set of 2^W possibilities, making S4S (and higher order yet searches) plausibly progressively harder. So if a search drastically outperforms flat random, it is reasonable to see that it was not blindly chosen and/or does not act blindly. From this gap to be bridged we may infer info conveying an advantage, active info. And the degree of effect relative to a flat random blind search is reasonable as a metric. And, the information can be put in probabilistic terms. Cf here: https://uncommondescent.com/intelligent-design/id-foundations/fyi-ftr-to-jf-attn-el-on-fitness-functions-islands-of-function-bridging-active-information/ KF

kairosfocus, May 1, 2015 at 06:12 AM PDT
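The dice-steered walk described in the comment above can be sketched as code (a toy of ours, assuming a 1-D lattice and a simple back/forward rule in place of the two-colour red/green version):

```python
import random

random.seed(1)

def die_walk_search(target, start=0, max_rolls=1000):
    # Roll a die to steer a random walk: 1-3 means a step back, 4-6 a step forward.
    # The walk "searches" the lattice; landing on the target counts as success.
    position = start
    for roll_count in range(1, max_rolls + 1):
        roll = random.randint(1, 6)
        position += -1 if roll <= 3 else 1
        if position == target:
            return roll_count  # rolls needed to reach the target
    return None  # the blind walk failed within the allotted rolls

steps = die_walk_search(target=5)
```

The probability distribution over where the walk lands is what makes this a search in the broad sense; a warmer/colder signal biasing the steps would be an injection of active information.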
logically_speaking - that wasn't an analogy, it was a direct consequence of the definition! TBH, I think you are stretching the definition of a search - a search is for something, which is a subset of everything being searched. So, rolling a die to "search" for a number from 1 to 6 is bizarre, as the 'search' will always be successful first time around. So, in what sense is it a search, rather than an RNG?

Bob O'H, May 1, 2015 at 05:37 AM PDT
Thanks Mark - I was looking in the "Active Information" paper, i.e. the wrong paper.

Joe, May 1, 2015 at 04:30 AM PDT
#26 Joe

Me:

Converting probabilities to their logs sometimes blinds us to the fact they are probabilities. So active information is defined as: active information = endogenous information – exogenous information

Joe:

Can you please show us where that is in the paper?

To repeat AS's quote and link, with my emphasis, from A General Theory of Information Cost Incurred by Successful Search:
In comparing null and alternative searches, it is convenient to convert probabilities to information measures (note that all logarithms in the sequel are to the base 2). We therefore define the endogenous information IΩ as –log(p), which measures the inherent difficulty of a blind or null search in exploring the underlying search space Ω to locate the target T. We then define the exogenous information IS as –log(q), which measures the difficulty of the alternative search S in locating the target T. And finally, we define the active information I+ as the difference between the endogenous and exogenous information: I+ = IΩ – IS = log(q/p).

Mark Frank, May 1, 2015 at 04:17 AM PDT
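The quoted definitions translate directly into code. The functions below follow the paper's formulas; the particular p and q values are illustrative assumptions of ours, roughly in the spirit of a blind versus an assisted search.

```python
from math import log2

def endogenous_information(p):
    # I_Omega = -log2(p): inherent difficulty of the blind/null search.
    return -log2(p)

def exogenous_information(q):
    # I_S = -log2(q): difficulty of the alternative search S.
    return -log2(q)

def active_information(p, q):
    # I_plus = I_Omega - I_S = log2(q/p).
    return log2(q / p)

# Illustrative numbers only: blind search p = 2^-28, assisted search q = 2^-10.
p, q = 2 ** -28, 2 ** -10
I_plus = active_information(p, q)  # 18 bits of active information
```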
There needs to be a "James Randi test" for evolutionism...

Joe, May 1, 2015 at 04:08 AM PDT
Mark Frank:

Converting probabilities to their logs sometimes blinds us to the fact they are probabilities. So active information is defined as: active information = endogenous information – exogenous information

Can you please show us where that is in the paper?

Joe, May 1, 2015 at 03:50 AM PDT
And something else that is very strange: evos, if they had something, wouldn't bother with Winston's paper nor his response. They would just present the evidence that demonstrates the power of unguided evolution. They would show us how it is operationalized. They would show us its entailments and its power. They would model it. However, they don't even try. It's as if they know they have nothing but to attack ID. Yet attacking ID will never provide support for their claims.

Joe, May 1, 2015 at 03:32 AM PDT
I would love to see someone demonstrate how this "game of life" models unguided evolution. I know it won't happen but it would be nice to see an evo put its money where its mouth is.

Joe, May 1, 2015 at 03:28 AM PDT
Daniel:
Human construction tells us only what humans can produce. How does that show Nature’s limitations?
Nature cannot produce Stonehenges, Daniel. Forensics, archaeology and SETI all rely on our knowledge of cause and effect relationships. Demonstrate that nature can produce something and we cannot say some intelligent agency was required to do it.

Joe, May 1, 2015 at 03:25 AM PDT
Bob O'H, "Under this definition, rolling a die would be a search. Indeed, if you don’t make artificial restrictions on what you mean by a probability distribution, rolling a die with 6 4's would be a search". I'm not sure that this is a good analogy. When rolling a die, you ARE doing a search in a sense; you are searching for any number between one and six (depending on the die!). What else would you be rolling a die for? Not to mention the actual intelligently designed die that has to be deliberately rolled to achieve a result.

logically_speaking, May 1, 2015 at 02:20 AM PDT
When we say search we simply mean a process that can be modeled as a probability distribution.
Aurelio Smith has already pointed out the problem with this, but to put some specifics on it, under this definition, rolling a die would be a search. Indeed, if you don't make artificial restrictions on what you mean by a probability distribution, rolling a die with 6 4's would be a search. As would diffusion, if you want to look at something dynamic in time.

Bob O'H, May 1, 2015 at 01:12 AM PDT
As we have Dr. Ewert's attention I would love to hear his response to the problems I raised in a comment on AS's post. I have repeated it here with a bit more detail.

Converting probabilities to their logs sometimes blinds us to the fact they are probabilities. So active information is defined as: active information = endogenous information – exogenous information, which is another way of expressing the ratio of two probabilities: p = prob(success|blind search) and q = prob(success|alternative search). But somehow this ratio q/p gets equated to the probability of the alternative search happening. To do this requires:

1) Treating possible searches as a random variable

2) Selecting a way of enumerating possible searches (e.g. a "search" is defined as an ordered subset of all the variables to be inspected, so the set of searches is all possible ordered subsets)

3) Using Bernoulli's principle of indifference to decide all searches are equally probable within this space of all possible searches

All of this seems to be assumed in your work rather than made explicit, and when made explicit raises some rather fundamental questions. What is the probability distribution of searches? There are many ways of enumerating searches - how do you justify your choice? On what basis do you assume each one is equally probable?

Mark Frank, April 30, 2015 at 11:03 PM PDT
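Mark Frank's three steps can be made concrete with a small exhaustive check (our sketch, adopting his item-2 enumeration of searches as orderings of the points to inspect, and his item-3 indifference assumption): averaged over all equally weighted orderings, the chance of hitting the target within k queries is exactly the blind-search value k/N.

```python
from fractions import Fraction
from itertools import permutations

N, k = 5, 2     # size of the space; number of queries allowed
target = 3      # any single point; the average is the same whichever is chosen

# Items 1-2: the random variable ranges over all orderings of the space.
searches = list(permutations(range(N)))
# Item 3: weight every ordering equally, then average success within k queries.
hits = sum(1 for order in searches if target in order[:k])
average_success = Fraction(hits, len(searches))

blind_success = Fraction(k, N)  # k distinct uniform queries without replacement
```

Under these assumptions the average search does no better than blind search, which is the conservation-of-information flavour of the result; changing the enumeration or the indifference assumption is exactly where Mark Frank's questions bite.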
Human construction shows what requires intelligent agencies to produce. It also shows nature’s limitations.
Human construction tells us only what humans can produce. How does that show Nature's limitations?

Daniel King, April 30, 2015 at 07:17 PM PDT
In regards to "nature couldn’t be the designer." Daniel King asks "WHY NOT?" well, for a few examples, because,,,
Human brain has more switches than all computers on Earth - November 2010 Excerpt: They found that the brain's complexity is beyond anything they'd imagined, almost to the point of being beyond belief, says Stephen Smith, a professor of molecular and cellular physiology and senior author of the paper describing the study: ...One synapse, by itself, is more like a microprocessor--with both memory-storage and information-processing elements--than a mere on/off switch. In fact, one synapse may contain on the order of 1,000 molecular-scale switches. A single human brain has more switches than all the computers and routers and Internet connections on Earth. http://news.cnet.com/8301-27083_3-20023112-247.html

"Complexity Brake" Defies Evolution - August 8, 2012 Excerpt: Consider a neuronal synapse -- the presynaptic terminal has an estimated 1000 distinct proteins. Fully analyzing their possible interactions would take about 2000 years. Or consider the task of fully characterizing the visual cortex of the mouse -- about 2 million neurons. Under the extreme assumption that the neurons in these systems can all interact with each other, analyzing the various combinations will take about 10 million years..., even though it is assumed that the underlying technology speeds up by an order of magnitude each year. http://www.evolutionnews.org/2012/08/complexity_brak062961.html

Map Of Major Metabolic Pathways In A Cell – Picture http://2.bp.blogspot.com/-AKkRRa65sIo/TlltZupczfI/AAAAAAAAE1s/nVSv_5HRpZg/s1600/pathway-1b.png

A map of the entire human metabolic pathway - interactive map (high resolution) http://www.cc.gatech.edu/~turk/bio_sim/articles/metabolic_pathways.png

"To grasp the reality of life as it has been revealed by molecular biology, we must magnify a cell a thousand million times until it is twenty kilometres in diameter and resembles a giant airship large enough to cover a great city like London or New York.
What we would then see would be an object of unparalleled complexity and adaptive design. On the surface of the cell we would see millions of openings, like the portholes of a vast space ship, opening and closing to allow a continual stream of materials to flow in and out. If we were to enter one of these openings we would find ourselves in a world of supreme technology and bewildering complexity. We would see endless highly organized corridors and conduits branching in every direction away from the perimeter of the cell, some leading to the central memory bank in the nucleus and others to assembly plants and processing units. The nucleus itself would be a vast spherical chamber more than a kilometer in diameter, resembling a geodesic dome inside of which we would see, all neatly stacked together in ordered arrays, the miles of coiled chains of the DNA molecules. A huge range of products and raw materials would shuttle along all the manifold conduits in a highly ordered fashion to and from all the various assembly plants in the outer regions of the cell. We would wonder at the level of control implicit in the movement of so many objects down so many seemingly endless conduits, all in perfect unison. We would see all around us, in every direction we looked, all sorts of robot-like machines. We would notice that the simplest of the functional components of the cell, the protein molecules, were astonishingly complex pieces of molecular machinery, each one consisting of about three thousand atoms arranged in highly organized 3-D spatial conformation.
We would wonder even more as we watched the strangely purposeful activities of these weird molecular machines, particularly when we realized that, despite all our accumulated knowledge of physics and chemistry, the task of designing one such molecular machine – that is one single functional protein molecule – would be completely beyond our capacity at present and will probably not be achieved until at least the beginning of the next century. Yet the life of the cell depends on the integrated activities of thousands, certainly tens of thousands, and probably hundreds of thousands of different protein molecules. We would see that nearly every feature of our own advanced machines had its analogue in the cell: artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of parts and components, error fail-safe and proof-reading devices utilized for quality control, assembly processes involving the principle of prefabrication and modular construction. In fact, so deep would be the feeling of deja-vu, so persuasive the analogy, that much of the terminology we would use to describe this fascinating molecular reality would be borrowed from the world of late twentieth-century technology. What we would be witnessing would be an object resembling an immense automated factory, a factory larger than a city and carrying out almost as many unique functions as all the manufacturing activities of man on earth. However, it would be a factory which would have one capacity not equalled in any of our own most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours.
To witness such an act at a magnification of one thousand million times would be an awe-inspiring spectacle.” Michael Denton PhD., Evolution: A Theory In Crisis, pg.328 https://uncommondescent.com/intelligent-design/on-the-impossibility-of-replicating-the-cell-a-problem-for-naturalism/

Systems biology: Untangling the protein web - July 2009 Excerpt: Vidal thinks that technological improvements — especially in nanotechnology, to generate more data, and microscopy, to explore interaction inside cells, along with increased computer power — are required to push systems biology forward. "Combine all this and you can start to think that maybe some of the information flow can be captured," he says. But when it comes to figuring out the best way to explore information flow in cells, Tyers jokes that it is like comparing different degrees of infinity. "The interesting point coming out of all these studies is how complex these systems are — the different feedback loops and how they cross-regulate each other and adapt to perturbations are only just becoming apparent," he says. "The simple pathway models are a gross oversimplification of what is actually happening." http://www.nature.com/nature/journal/v460/n7253/full/460415a.html
Mr. King, you are certainly free to believe that unguided material processes can create all that unfathomable complexity, (since you, contrary to your materialistic belief system, actually do have free will to choose what you believe is true), but I certainly don't find your blind faith in unguided material processes persuasive! Especially since no one has ever witnessed unguided material processes produce non-trivial functional information/complexity:
It’s (Much) Easier to Falsify Intelligent Design than Darwinian Evolution – Michael Behe, PhD https://www.youtube.com/watch?v=_T1v_VLueGk

The Law of Physicodynamic Incompleteness - David L. Abel Excerpt: "If decision-node programming selections are made randomly or by law rather than with purposeful intent, no non-trivial (sophisticated) function will spontaneously arise." If only one exception to this null hypothesis were published, the hypothesis would be falsified. Falsification would require an experiment devoid of behind-the-scenes steering. Any artificial selection hidden in the experimental design would disqualify the experimental falsification. After ten years of continual republication of the null hypothesis with appeals for falsification, no falsification has been provided. The time has come to extend this null hypothesis into a formal scientific prediction: "No non trivial algorithmic/computational utility will ever arise from chance and/or necessity alone." https://www.academia.edu/9957206/The_Law_of_Physicodynamic_Incompleteness_Scirus_Topic_Page_
bornagain77, April 30, 2015 at 07:16 PM PDT
Yes, always, so far. Just as structures like Stonehenge will always require an intelligent designer.
True. Structures that human beings construct are constructed by human beings. What does that have to do with "Nature always taking the line of least resistance" or Nature not being the designer of living organisms?

Daniel King, April 30, 2015 at 07:08 PM PDT
Joe, Daniel, if you please... Bees, beavers, humans. Honeycomb, dam, Stonehenge. Natural design all? Humans transcend Nature?

ppolish, April 30, 2015 at 07:05 PM PDT
For example see: Chase W. Nelson and John C. Sanford, The effects of low-impact mutations in digital organisms, Theoretical Biology and Medical Modelling, 2011, 8:9 | doi:10.1186/1742-4682-8-9

Joe, April 30, 2015 at 06:49 PM PDT
Yes, always, so far. Just as structures like Stonehenge will always require an intelligent designer.

Why not? For one, there isn't any evidence that it can be the designer. All observations and experience argue against it. Human construction shows what requires intelligent agencies to produce. It also shows nature's limitations.

Joe, April 30, 2015 at 06:19 PM PDT
The reason is it always takes the line of least resistance.

Always? How can anyone possibly know that? You'd have to examine every possible situation to claim that.
It can produce stones, even piles of stones but not Stonehenges.
So? What does a human construction have to do with Nature's inherent capabilities?
What it has to do with ID is that nature couldn’t be the designer.
WHY NOT?

Daniel King, April 30, 2015 at 05:53 PM PDT
The reason is it always takes the line of least resistance. It can produce stones, even piles of stones, but not Stonehenges. What it has to do with ID is that nature couldn't be the designer.

Joe, April 30, 2015 at 05:36 PM PDT
Nature tends to the most simple. It peels away the unnecessary and leaves what it cannot peel away, or has not peeled away yet. IOW nature searches for the simplest solution.
I didn't know that. Is there reason to believe that? In any case, what does that have to do with Intelligent Design? Is Nature the designer?

Daniel King, April 30, 2015 at 05:14 PM PDT
