Uncommon Descent Serving The Intelligent Design Community

Out-of-print early ID book now available as a .pdf


An early ID book (possibly the earliest), The Mystery of Life’s Origin by Charles Thaxton, Walter Bradley, and Roger Olson (1984), with a foreword by Dean Kenyon, has been out of print for a while, I am told. But a .pdf can be downloaded here for now.

Information theory is a special branch of mathematics that has developed a way to measure information. In brief, the information content of a structure is the minimum number of instructions required to describe or specify it, whether that structure is a rock or a rocket ship, a pile of leaves or a living organism. The more complex a structure is, the more instructions are needed to describe it. —Charles Thaxton, biochemist
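Thaxton's "minimum number of instructions" is essentially minimum description length. A minimal sketch of that idea, using compressed size as a crude, computable proxy (true Kolmogorov complexity, the shortest possible description, is uncomputable; the function name here is my own):

```python
import os
import zlib

def description_length(data: bytes) -> int:
    # Size of a zlib-compressed copy: an upper bound on the number of
    # bytes needed to specify the data. A structure describable by a
    # short rule compresses well; random noise barely compresses at all.
    return len(zlib.compress(data, 9))

ordered = b"AB" * 500     # fully specified by a short rule: "repeat 'AB' 500 times"
noise = os.urandom(1000)  # 1000 random bytes: no shorter description exists

print(description_length(ordered))  # a few dozen bytes
print(description_length(noise))    # roughly the full 1000 bytes
```

The ordered string needs only a tiny description despite being the same length as the noise, which is the sense in which a structure's complexity is measured by the instructions required to specify it.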

Meanwhile ….

Study: Sun not special, therefore alien life should be common?

Does time’s one-way street prove that other universes exist?

The day time went backwards

Flogos: Coming soon to a clear blue sky near you …

Science and ethics: When the devil offered a no strings research post.

Nature’s IQ: Intelligent design from a Hindu perspective

Science journalist warns against the “institutionalised idolatry of science”

Expelled film pre-trashed by United Kludgies of Canada (Trashing a film you haven’t seen is way less work.)

Is everything determined by forces over which we have no control?

Chuck Colson on neural Buddhism: Do neurons get reincarnated?

Hopeful signs: Disaster causes outpouring of charity in China

On Jane Goodall, apes, human uniqueness, and God

Comments
Substituting "Theory of Evolution" for "Chemical Evolution" would probably be applicable as well:
Why will many predictably persist in their acceptance of some version of chemical evolution? Quite simply, because chemical evolution has not been falsified. One would be irrational to adhere to a falsified hypothesis. We have only presented a case that chemical evolution is highly implausible. By the nature of the case that is all one can do. In a strict, technical sense, chemical evolution cannot be falsified because it is not falsifiable. Chemical evolution is a speculative reconstruction of a unique past event, and cannot therefore be tested against recurring nature.
beancan5000
May 27, 2008 at 05:33 AM PDT
JunkyardTornado: I don't want to keep you awake. I answer your post now, because here it's morning, but please feel free to answer when it's comfortable for you. I am sorry, but I am afraid that it's you who are a little bit confused, both about ID and some general concepts. Even if you invite the authorities of ID to correct me, I am afraid that here we are rather in a democracy, and you will have to discuss the matter, if you want, with myself...

First of all, I think you make a grand confusion with terms like law and necessity. Here we are talking about the laws of nature, not the laws of a state. There is a big difference. The laws of nature are, as I have already said many times, logico-mathematical formulations "explaining" facts. The very fact that facts happen according to mathematical laws formulated by us is in itself a big philosophical mystery, and a much debated one. But so it is. Therefore, the strength of the gravitational attraction between two bodies can be easily computed, to a great degree of precision, by the formulas of Newton's gravitational theory. A quantum waveform can be computed (in the easiest cases) according to a definite equation. And so on. These laws, which are the only laws of which I was talking, are mathematical objects. As such, they work always, and always in the same way. Indeed, facts seem to happen according to such laws. That's why they are called laws of nature. They are necessary laws, it's perfectly true, because their results are totally deterministic. They obey strictly the principle of cause and effect: given some causes, some effects can be computed according to the appropriate mathematical formula.

You say: "A set of laws is a computer code according to ID. It's late, otherwise I would provide the quotes from Dembski's own writings where he repeatedly equates computer programs and necessity." That's completely wrong. Computer programs are code (CSI) operating automatically through laws of necessity. Why? Because a computer is only a machine which obeys the laws of electromagnetism. Therefore, once some information (usually CSI) has been loaded into the machine, the machine operates according to necessity, and the algorithm which has been loaded effects its computations automatically and gives the necessary results. Here, the only natural laws at work are the laws of electromagnetism, which are described by Maxwell's equations. Maxwell's equations never change; they operate always in the same way. The code loaded into the machine, instead, is the direct product of a designer. The code has all the characteristics of CSI. On the contrary, the output of the program cannot contain any more CSI than it has received, both from the code and from the input data. The information can certainly change its form, but no new CSI is really produced.

You say: "You are just flat wrong in your understanding of what ID says. It says that 'intelligent agency' can do things that no computer, no program and no mechanism could ever do." That's perfectly right. It's intelligent agency which generates the computer code. The computer code only passively executes what the intelligent agency has planned. My intelligent agency is generating this post. No computer code could do the same (unless I input this post or insert it in the original code). So, I don't understand where I should be wrong.

You can certainly encode knowledge in a program, but not in a law of nature. You cannot encode knowledge about contingent variables in a mathematical formula. The law of acceleration, F=ma, remains always the same. It is not F=ma in certain cases, with certain inputs, and F=2ma in other cases. A computer program could do that, because a computer program is a set of instructions arbitrarily written by a designer. Human laws can do that, because they are codes (literally) produced arbitrarily by designers. They are not natural laws. So, your examples are completely wrong.

Now, let's imagine that God wanted, in the beginning, to incorporate some knowledge of specific contingencies in a natural law. Let's suppose that, in shaping the law of acceleration, he decided (after all, He is God, who are we to limit his freedom?) that black objects would accelerate according to the F=ma formula, and white objects according to the F=2ma formula (please, don't pay attention to the specific example, just follow the reasoning). OK, that is possible, in principle. That would be what you say: incorporating some knowledge, some specific instruction, in a law of nature, as though it were a computer code. Obviously, only God could do such a thing. But my point is, if God had done such a thing (which is, more or less, the strange idea of TEs), we should be able to observe that. Once the law is operating, we can observe how it works. So, we would see the black bodies behaving one way, and the white bodies behaving differently. We would derive two different laws for the two different kinds of bodies, and those would be our natural laws. At that point, we could use those laws in any scientific theory trying to explain facts with them. If that difference in laws could explain CSI and biological information, we would observe it.

That's not what we observe. All the physical laws we know have no power to explain CSI. They are mathematical objects, rather austere ones, and they make no compromises. They incorporate no specific knowledge of contingencies. In other words, there is no way that the laws of physics, which are after all all the laws of the material universe that we know, can generate Shakespeare's Hamlet, or, in the same way, the sequence of myoglobin. Both are examples of CSI. Neither can be generated by natural laws.

Naturally, a computer program can well type a copy of Shakespeare's Hamlet. If, and only if, that information was inputted into it. The sequence of myoglobin works in the same way. It has to be inputted, but indeed it could also be computed, provided that the necessary knowledge (which, by the way, we still don't possess) about how proteins fold according to sequence, and about the specific function that the sequence should realize, be inputted into the program. In both cases, the information about Hamlet, or about the sequence, or about how it could be computed, is CSI. It is not generated in the program. It is added to it. A designer is needed. The program acts from necessity, according to the natural laws of electromagnetism and to the initial conditions (specific information) which were inputted into it by the designer. But that kind of information is never created by necessity.

gpuccio
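The contrast gpuccio draws between an invariant law (F = ma for every body) and a designed program that encodes contingencies (F = 2ma for white objects) can be sketched in a few lines. This is only an illustration of the distinction being argued, and the `color` condition is the comment's own deliberately contrived example, not physics:

```python
def force(m: float, a: float) -> float:
    # A natural law in gpuccio's sense: one invariant rule,
    # the same for every input, with no special cases.
    return m * a

def contrived_force(m: float, a: float, color: str) -> float:
    # A program, by contrast, can encode arbitrary contingencies.
    # This branch is an instruction chosen by a programmer, not a
    # law of nature; the color condition is purely illustrative.
    if color == "white":
        return 2 * m * a
    return m * a

print(force(2.0, 3.0))                     # 6.0, always
print(contrived_force(2.0, 3.0, "white"))  # 12.0, a designed special case
print(contrived_force(2.0, 3.0, "black"))  # 6.0
```

The second function behaves differently for different inputs because a designer wrote the branch into it, which is exactly the property the comment says an observed law of nature does not exhibit.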
May 27, 2008 at 04:27 AM PDT
gpuccio: "Here I cannot follow you. How can you 'shift complexity or knowledge over to the laws f'? A law is a mathematical rule. It can be complex, but it is always the same, and it works always in the same way. That's why it is a law. How can you 'shift complexity and knowledge' (of the output, I suppose) over to a law, or to a set of laws? A set of laws is not a computer code."

A set of laws is a computer code according to ID. It's late, otherwise I would provide the quotes from Dembski's own writings where he repeatedly equates computer programs and necessity. You are just flat wrong in your understanding of what ID says. It says that "intelligent agency" can do things that no computer, no program and no mechanism could ever do. Dembski could correct you himself if he cared to. I will provide the quotes tomorrow if someone else does not.

Knowledge is something that can be encoded. When someone says, "here are the laws by which this process is observed to operate," they are demonstrating knowledge. Knowledge = Mechanism = Law. Look at the laws of this country and how staggeringly complex they are. Do you think the ways those laws affect society are simplistic, predictable or trivial? Why do you think people become lawyers? The behavior of a complex set of laws is contingent on external conditions. You can have juridical law with all sorts of provisions, exceptions and qualifications, a thousand pages long, and you're saying that is not a real law. Everybody here seems to put their own spin on what ID is, and none of the supposed leaders ever step in to correct anybody.

JunkyardTornado
May 27, 2008 at 01:55 AM PDT
KF: "FIRST, our existential, experiential fact no. 1 is that we exist in the world as conscious agents, who act with purpose to change the world to achieve goals. We are designing entities, and one of the commonly encountered artifacts of that design is information, functionally specified, complex information [FSCI]."

Agents are things from which FSCI magically emerges. Agents are FSCI-generating things. FSCI springs from them in a way that cannot be accurately characterized by any conceivable mechanism. It's just a foreign, laughable concept to me, but you shouldn't feel offended, because many, many people have this same intuitive sense that you do that this mysterious entity called agency exists. No offense at all, but there is quite evidently a fundamental unbridgeable impasse here. Maybe I'm equally vociferous in my claim that agency doesn't exist, but whereas you're saying, "This thing which is nothing actually exists," I'm saying, "This thing which is nothing does not exist." If something can't be characterized in any systematic way then it's vacuous. If you're saying it's not a mechanism, you're saying it cannot be accurately characterized.

For people who would claim to appreciate the Bible, you would think they would not denigrate 'law' so much. All the Old Testament talks about is God's Law. The Psalmist David says, "How I love thy law"; Christ says, "The Law will never be revoked," or words to that effect. In a similar vein the Gospel of John begins, "In the beginning was the Word." So in all of these is the strong implication that what is essential about God is something that can be encoded. Then we have ID that talks about law like it's some pointless, predictable second-class citizen. But a second-class citizen to what? Well, to something that even ID cannot describe, something they say is impossible to describe, something they say mysteriously emits CSI.

KF: "THIRD, in the course of the history of ideas, we have established long since the fact that causal forces are routinely observed to fall under the categories: chance, necessity, intelligence."

The vague, default, unexamined notions people have held over the course of history are significant, why? It's not some accepted self-evident truism among the educated that man is some mysterious, inexplicable, God-like emitter of CSI. People can recognize, discern, store and retrieve CSI due to their physical capabilities, their brain capacity, the sophistication of their sensory organs. My comments here are somewhat hasty and provocative, but it's kind of late. I would engage you on more of your posts, but you are quite committed to an idea which you consider to be self-evident, and I consider to be vacuous. The probability issues are not something I'm insensitive to, but how the solution is this magical CSI-emitting machine is not by any means apparent to me. Cheers.

JunkyardTornado
May 27, 2008 at 01:18 AM PDT
M Caldwell, tell them we have none of "chance" and "necessity" either: neither one for "physics" nor one for "atheism." Nor one for "hamburger," for that matter. This is the classic, philosophically pathetic attempt of anyone on the losing side of a debate: "I don't understand what you mean." They point to a metaphysical loophole of language, that is, issues of rhetoric, which shows how political these discussions really are. Anyone can critique and confuse language and pretend to see through all things. But as T. S. Eliot said, "To see through all things is the same as not to see."

Frost122585
May 27, 2008 at 01:14 AM PDT
JunkyardTornado: I will try to make my point more clear. I think we should discuss better the meaning of "law", or if you prefer "law of necessity". A law is a logico-mathematical formulation to explain facts and make predictions of new facts. Obviously I am not saying that there is no evidence that natural laws exist! And obviously there exists a mutation sequence that would result in the human genome. We agree on that. I think we also agree that such a mutation sequence is, in itself, utterly improbable, if we hypothesize that it has to happen randomly. OK to this point?

You say: "Shift complexity or knowledge over to the laws f, and an x that can produce life becomes more likely, but then f becomes more unlikely." Here I cannot follow you. How can you "shift complexity or knowledge over to the laws f"? A law is a mathematical rule. It can be complex, but it is always the same, and it works always in the same way. That's why it is a law. How can you "shift complexity and knowledge" (of the output, I suppose) over to a law, or to a set of laws? A set of laws is not a computer code.

Let's make an example. Just to stay historically consistent, we will use the famous "Methinks it is like a weasel", only, I hope, with more sense than our friend Dawkins. So, let's suppose there is a law which outputs that phrase (which for the sake of simplicity we could assume here as a piece of CSI, although it probably does not reach the right level of complexity). What form should that law have? Something like: "Whatever the outer conditions, just output 'M', then 'e', and so on"? That's not a law. That's a software instruction, and one which contains all the information it has to output. Now, let's suppose that we have devised a law for that phrase. How could the same law output, say, "To be or not to be"? Again, what you need here is not a law, but an instruction. You need information, not necessity. That's why necessary laws are not good at outputting CSI. For that task, you need "pseudo-random" sequences, shaped by intelligence.

No law can output the works of Shakespeare. In the same way, no law could output this post you are reading. It is here for you to read because I am thinking it in my consciousness, then outputting it. If I did not think what I am writing, this post would never exist. It has never existed before, and it would never exist in the whole life of the universe. Why? Because this post is CSI, much more CSI than the weasel phrase (not because it is better, only because it is longer!). That's why I stress the importance of consciousness. The concepts of design, of designer, and of intelligence have really no meaning without the empirical fact of consciousness. All those concepts can be defined only as properties of consciousness.

You say: "If that specification is y, then here's a mechanism to output it - 'Output y.'" Again, that's not a law. It's a computer instruction. It's CSI. And a lot of CSI, if that "y" must include all the information of all the genomes of living beings! You are only saying that, to produce CSI, you need CSI. That's correct. And, as CSI can only be produced by a designer, you need a designer. In other words, you need, in sequence: 1) A conscious, intelligent designer. 2) A design, produced by the designer (your f, which is not a law nor a set of laws, but just CSI). 3) The implementation of the design, which happens obviously through physical laws, guided by the CSI of the design, so that they output CSI in the form of pseudo-random, intelligently ordered sequences, which have function and express meaning.

You say: "But what is design - it's copying with incremental changes over time, with the best ideas being refined upon by subsequent generations." No, design is not that. Design is the output of a conscious representation, which has the inner properties of meaning and purpose. Let's remember that design is not always CSI. Simple designs do not exhibit CSI. But design is always the product of intelligent consciousness. And CSI is always a form of design.

You say: "It's taken over 60 years and thousands upon thousands of people to come up with the computers we have today, with continual retesting, incremental refinement and so on." CSI does not require complex inventions. CSI is abundantly present in all abstract thought, in language, in mathematical thought, in artifacts, even rather simple ones. All that is the product of design. And design is the product of intelligent consciousness. There is no doubt that intelligence can creatively improve the acquisitions of other intelligent beings, and that's how computers, and all the products of human culture, have accumulated. But it is not so much a question of "incremental refinement", but rather of creative thinking, vision, inference, intuition, purpose, commitment, and, in general, representation of meaning. All of these are properties of intelligent consciousness. The incremental refinement is often necessary, and it pertains to the strategies of implementation. Consciousness does work through algorithms, but it is not algorithmic in itself (see Penrose), and it cannot be explained by algorithms.

"Where is the mysterious miracle in that process?" In intelligent consciousness. Without consciousness, you have no design. Without design, you have no CSI. In other words, only the conscious representations of an intelligent being can impose the form of CSI (complex meaning) on appropriate (random-like) supports. Nothing else in the universe can do that. Not true randomness, not any set of necessary laws. This truth is both logical and empirical, and is the real strength of all the ID theory.

Finally, let's go to your last note. You say: "If something is a physical mechanism it should be operating continuously? So why do we not have comets on a daily, if not hourly basis?" I don't follow you. We were speaking of laws. If a law is a law, it operates always in the same way. The laws of mechanics and gravitation, whatever they are, are responsible for the trajectories of comets. Trajectories change with time; the laws which allow us to compute them don't.

"I would think someone who was not constrained by physical limitations more likely to be creating things continuously." I do think that the designer, who in my opinion is God, is creating things continuously. I am convinced of that for religious reasons, however, and I don't consider that a scientific statement (at least for now). But the continuous intervention of God in creation, in my opinion, takes different forms. Laws are one of those forms. The historical implementation of special design in living beings, through means which are open to enquiry, is another one. All those, however, I insist, are religious and philosophical issues. The inference of design in biological beings, instead, is a purely scientific question.

gpuccio
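The "Methinks it is like a weasel" procedure the comment refers to can be sketched as Dawkins-style cumulative selection. The parameters here (100 offspring per generation, 5% per-character mutation rate) are my own illustrative choices, not from any particular published version. Note what both sides of the thread circle around: the full target string is written into the program up front.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s: str) -> int:
    # Number of positions matching the fixed target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(parent: str, rate: float = 0.05) -> str:
    # Copy the parent, randomizing each character with probability `rate`.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def weasel(seed: int = 0, offspring: int = 100) -> int:
    # Cumulative selection toward TARGET; returns generations taken.
    # The parent is kept among the candidates so fitness never regresses.
    # The information in the output is already present in the code, as
    # the target string constant.
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        generations += 1
        candidates = [parent] + [mutate(parent) for _ in range(offspring)]
        parent = max(candidates, key=score)
    return generations

print(weasel(seed=1))  # typically converges within a few hundred generations
```

Whatever one makes of the surrounding argument, the sketch makes the mechanics concrete: the selection loop is simple and mechanical, and the specification it converges on is supplied as input.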
May 27, 2008 at 01:09 AM PDT
"ID avoids having to assess the probability of their designer because they say a designer cannot in fact be described by laws, program instructions, or by any other systematic method (so there is nothing to measure)."
Well, we have criteria for detecting design, chance and necessity. A probability can always be assessed for the chance/necessity combination, but when it exceeds the probabilistic resources that we deem reasonable for non-intelligent nature, then intelligence becomes the leading contender. As KF pointed out, we know what intelligence can do. We know that it can beat probabilities and design for purpose. Your question is about assessing the probability of the designer, but you display a very weak understanding of natural philosophy to pose such a nonsensical question. When we assess the probability of an event, it is not on intelligent or non-intelligent grounds that that assessment is made. Probability has strictly to do with complexity, specificity, and the natural physical laws and boundaries set by secular science. In other words, you can't ask the question "what is the probability of the designer?" just as you can't ask the question "what is the probability of chance and necessity?" You see, when you ask the question "what is the probability of the designer?" you are actually asking what is the probability that it has "being." On the other hand, you then have to ask what is the probability that chance and necessity "have being," and this is a ridiculous question. Of course they have being, as does intelligence and design. The question that ID looks at is "where do chance, necessity and design reside?" In other words, which is which in the known world. Your question is about being, but chance and necessity can be used as factors in a design. For example, if I design a car from scratch, there are physical laws I must abide by, and there is a certain amount of chance involved along the way. Yet I could chalk up the whole design to chance and necessity, except that it would by definition artificially remove the conception of intelligent agency.

So ID does take for granted that intelligent agency exists, just like physics and philosophy take for granted that necessity and chance exist. So this question regarding "the being" of the agent is ALWAYS beside the point. I cannot find the object of "chance" and "necessity" in the cosmos, because they do not exist as beings outside of circumstantial interpretations of events. The EF and Dembski's criteria give us an excellent physical interpretation of design, just like we have a physical interpretation of chance and necessity via definitions based on empirical experience. I can ask, what is the probability that chance exists? What is the probability that necessity exists? These questions are vacuous, because it is the interpretation of circumstances that gives rise to their definitions. Design, as a concept of equal philosophical strength, is and should be treated the same way.

Frost122585
May 27, 2008 at 12:57 AM PDT
JT: Please compare 29, esp. remarks on excerpt 5. GEM of TKI

kairosfocus
May 27, 2008 at 12:46 AM PDT
I'm still trying to work through some of this: I was saying previously that if it is presumed that the natural laws are extremely simple, then this puts a huge burden on the mutations to produce a highly improbable string with a lot of information. My observation was that you could put more info into f, which would make getting a usable sequence from the mutations x more likely. However, I said you would just make the natural laws more unlikely to occur themselves, as they were more complex. Here was the error I made, I believe: The mutations are defined as purely stochastic and random. They come into existence at a point in time for no reason at all. With f, however, it's possible you could view it as having always existed. If something with a lot of information in it has always existed, that's not the same thing as its having come into existence for no reason at all (e.g. as with the mutations x). It is a pointless extra step that ID makes to demand that something like f (with a lot of info) be "designed" by a vacuous entity labelled an "agent". f itself should just be considered as part of an eternal deity. Now even though f is finite, it doesn't mean the eternal deity is finite, because f is only part of God. So the crux of the matter is, one should say that something too improbable to occur by chance has always existed, because eternal existence isn't the same as stochastic emergence.

JunkyardTornado
May 27, 2008 at 12:20 AM PDT
H'mm: A few thoughts on Turing machines and the like: 1] JT, 25: I was referring to the actual TM that executes TM programs. And a TM, or equivalently a computer, is an extremely simple device. A computer has to step from one instruction to the next in a program. The following are the only instructions it has to know how to perform: Z(n) - “move zero into register n; I(n) - “add 1 to the value in register n”; J(n,m,i) - “If the values in registers n and m are equal then jump to instruction i”. Whoa. Anyone who has had to physically instantiate a computer from the ground up will realise that the issues involved in say electrical, mechanical, electromechanical and electronic technologies to do the "simple" machine are anything but "simple." --> Unless mechanical elements are properly shaped, sized and made from the right materials, then oriented and fastened and/or interfaced together correctly, they will not work. (Cf. my microjets ecxample APP 1 the always linked.) [Recall here Babbage's Analytical Engine and the impact it had on the machine tool industry, even though it proved infeasible with Victorian Era technology (the gears, the gears the gears, Wiki! It was not just that CB was a "difficult" person.).] --> Similarly, a read head is an extremely complex and precise device, electronically, electromechanically, optically, magnetically or mechanically. Just look at Wiki on paper-tape input devices. [I am old enough to remember punched card punching and reader machines . . . great, hulking solidly built, very precise IBM machines. Usually painted a dull grey for some reason.] --> Multiply by the complexity of the symbolic code required to give instructions. (And, where do functional codes come from, in our observation?) --> Exponentiate by the algorithms that have to be coded and the underlying mechanisms for physically implementing same. (In our observation, where do algorithms come from?) 
2] I’m not supposing necessarily that even a Turing machine could come into existence throught stochastic processes. But if its obvious it cannot, then what’s the point of throwing around huge numbers (e.g. 10^11, 10^15) . . . First, there are a lot of people out there who seem to think that -- probabilistic resources issues notwithstanding -- things far more complex than a TM can self assemble out of chemicals in a still warm pond or a hydrothermal vent, whether on our planet or the observed universe. Indeed, they seem to hold that unless you accept such, you are not properly "scientific." Further to this, the book downloadable through this thread discusses the issues linked to this, and in fact was the foundational technical level book that launched the design movement. Third, per Dembski's later work, the threshold of exhaustion of such resources we look at before ruling intelligence not chance, is of order 10^150 - 10^300. To get around that, we see increasing resort to a speculative quasi-infinite wider cosmos as a whole with randomly distributed physics across sub-cosmi. That in turn allows us to highlight that we have here crossed over from empirically anchored scientific reasoning in the observed cosmos, to the field of highly speculative metaphysics. Often without announcement of the fact and its implications. These bring up . . . 3] JT, 27: What if that law of necessity in fact contains encodings for CSI. First, mechanical necessity shows itself as the root factor underlying natural regularities. This means that regularity dominates over contingency: heavy objects fall and may thereafter roll around and tumble before they settle to rest. High contingency, the basis for contingency in the sense we speak of, per a vast body of observations, is rooted in chance or intelligence. E.g. if the heavy object is a die, its uppermost face on settling is effectively chance or agency. 
THen, if there are enough dice -- 200 - 400 six-sided dice -- and the outcome meets a simply describable specification tha tis vastly improbable on chance, then we have excellent reason to infer to agency. (Here, suppose the dice are expressing a code to drive a program on say a TM.) 4] There does not exist a mutation sequence that would result in mankind? What if that mutation sequence directly coded for mankind? We are not dealing with abstract logical possibilities but the search for paths that island-hop from OOL to body-plan level biodiversification to the human being. The constraint that such entities must implement and maintain themselves in cellular and bodily level viable organisms and populations imposes huge constraints and specifications that lend themselves to the inference that probabilistic resource exhaustion practically speaking rules out the Darwinian, evolutionary materialist pathways that are often presented as consensus, established science. Biology here -- once the role of DNA emerged -- has built bridges to information and [statistical] thermodynamics issues, and it is not faring so well. 5] ID avoids having to assess the probability of their designer because they say a designer cannot in fact be described by laws, program instructions, or by any other systematic method (so there is nothing to measure). Precisely backways around. FIRST,our existential, experiential fact no 1 is that we exist in the world as conscious agents, who act with purpose to change the world to achieve goals. We are designing entities, and one of the commonly encountered artifacts of that design is information, functionally specified, complex information [FSCI]. We live in a cosmos in which agents are possible, and are actual. P[agency] = 1, for all practical purposes. SECOND, Agency is foundational to being able to have a rational discourse [cf my appendix 6 the always linked . . .] 
THIRD, in the course of the history of ideas, we have established long since the fact that causal forces are routinely observed to fall under the categories: chance, necessity, intelligence. They may interact in any one case, but in cases of known origin, they are routinely seen to be adequate to describe origins. FOURTH,We can show that for known cases, FSCI [or the equivalent] is a reliable sign of intelligence as opposed to chance or necessity. So, per scientific induction on best explanation anchored to empirical observation, we can credibly infer to the fact of design per its reliable signs. IF FSCI (etc) THEN, P[design] --> 1. FIFTH, design implies intelligence, i.e agent action, not random search or the equivalent or comparable but active information stemming from insight that leads us to functional configurations not practically reachable by chance-based searches in the relevant config spaces (on pain of probabilistic/search resource exhaustion): IF design THEN p[designer] --> 1. In short, reliable signs of design are epistemic warrant for inferring to design thence designer. One starts from the fact that designers and designs exist and have characteristics that are reliably discernible. Then, once we see the signs, we credibly know that we have design thence designer. That there is a designer is different from identifying who or what it is. indeed, it is a premise for trying to find out WHODUNIT, that "'twere DUN." ____________ The controversies that surround ID show to me that the issue is not on who the designer is or may be -- it seems that objectors to the empirically anchored inference to design suspect that the best/ most plausible candidate for designer of life and cosmos may be someone they have little or no desire to meet or deal with. So, I find the objections tend to short circuit the epistemic warrant for design inference,even at the price of being inconsistent in praxis of science [e.g. 
we routinely infer to experimenter influence in experiments etc.!]. Can we therefore look back at the actual epistemological case made by the design thinkers, instead of appealing subtly to prejudice? GEM of TKI
kairosfocus
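As a sanity check on the dice illustration above, two lines of arithmetic show the size of the configuration space for the low end of the quoted range (200 dice); the figures below are just that comment's numbers worked out:

```python
# Back-of-envelope check on the dice example: the number of distinct
# outcomes for 200 six-sided dice, and hence the chance of hitting
# any single pre-specified sequence of rolls.
from math import log10

dice = 200
outcomes = 6 ** dice                 # size of the configuration space
print(round(dice * log10(6), 1))     # ~155.6, i.e. about 10^156 outcomes
```

So any one pre-specified 200-roll sequence has probability on the order of 1 in 10^156, which is the scale the probabilistic-resources argument turns on.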
May 27, 2008 12:17 AM PDT
gpuccio: "First of all there is absolutely no evidence of such an f: if those laws existed, we should observe them working always" I think now I wasn't completely on point to what you were saying here. But even what you're saying is not sound. If something is a physical mechanism, should it be operating continuously? So why do we not have comets on a daily, if not hourly, basis? I would think someone who was not constrained by physical limitations would be more likely to be creating things continuously. Thanks for your comments, though. That's it for me today.
JunkyardTornado
May 26, 2008 10:59 PM PDT
"First of all there is absolutely no evidence of such an f: if those laws existed, we should observe them working always" There is no evidence that natural laws exist? There does not exist a mutation sequence that would result in mankind? What if that mutation sequence directly coded for mankind? Certainly such a sequence x could happen, though the chances are vanishingly remote. So with the existing natural laws, such as they are, there exists some mutation sequence x that would result in the biological world. Shift complexity or knowledge over to the laws f, and an x that can produce life becomes more likely, but then f becomes more unlikely. What is difficult to understand? Are you denying that a deterministic mechanism can output mankind? Do you deny epigenesis then? What is the probability of that particular mechanism coming into existence by chance? Does that improbability mean it doesn't exist? Can the biological world be specified? If that specification is y, then here's a mechanism to output it: "Output y." Of course, the route could be much more circuitous than that, and conditioned by all sorts of random factors (which is what the input x to a process is: the random outside factors impinging on some deterministic mechanism). So, the entire physical universe in your view really has no relevance at all to the biological world and its origins. It's just a bunch of additional garbage that the designer created for no good reason, apparently. "no law of necessity can generate information with the characteristics of CSI" What if that law of necessity in fact contains encodings for CSI? "Why not call f, then, the thought of the designer" Why not indeed. Why not fully characterize the thought processes of a "designer"? Of course, ID avoids having to assess the probability of their designer because they say a designer cannot in fact be described by laws, program instructions, or by any other systematic method (so there is nothing to measure). Science is about best explanations.
You, like many others, insist in excluding the simplest, and totally empirically based, explanation: design. But what is design? It's copying with incremental changes over time, with the best ideas being refined upon by subsequent generations. It's taken over 60 years and thousands upon thousands of people to come up with the computers we have today, with continual retesting, incremental refinement, and so on. Where is the mysterious miracle in that process? We (in ID) deny your f for the simple reason that it does not exist in observable reality. Well, perhaps your views are largely representative of the ID community in general and its most prominent members; I have no reason to deny that. Thanks for taking the time to read my thoughts.
JunkyardTornado
May 26, 2008 10:19 PM PDT
JunkyardTornado: Thank you for your post, where you really try to clarify your views. You have been very specific, so I will answer very briefly and specifically. I think I have followed your reasoning, but your reasoning in my opinion has two serious flaws: 1) Your f (which you define as "a set of natural laws"), as far as we know, does not exist. If what you say were right, f would anyway be some set of laws of necessity so complex as to give y with some reasonable probability. That is simply not true scientifically. First of all, there is absolutely no evidence of such an f: if those laws existed, we should observe them working always, and we would not observe only their results in the biological world, and nothing else (unless you suppose that, like God, that f is resting on the seventh day...). Moreover, there are serious logical reasons (see for instance Abel and Trevors) why no law of necessity can generate information with the characteristics of CSI. Therefore, your f, more than a set of laws, would take the form of a platonic counterpart of the information it has to generate. Why not call f, then, the thought of the designer? 2) Science is about best explanations. You, like many others, insist in excluding the simplest, and totally empirically based, explanation: design. Why? Design exists. Human designers continually generate CSI. So, why such obstinacy in denying the obvious, resorting to long and inconsistent reasonings? That's not cognitive coherence. That's dogma. We (in ID) deny your f for the simple reason that it does not exist in observable reality. You can keep it as a personal dream, if you want, but we have no need to share that dream. We (in ID) affirm design because it exists, it is observable, and it can perfectly explain biological information. For me, that's cognitive simplicity and coherence, free from any intellectual dogma and prejudice. It's as simple as that.
gpuccio
May 26, 2008 09:09 PM PDT
DLH: "I am curious why you dismiss discussion of the Turing machine as being designed. It may seem conceptually 'simple'. Yet I recommend you explore the factors required to design a real computer that can process some string. Note that it requires energy processing to do so. Or have I misunderstood your post? I have yet to see anyone explain how the four forces of nature (strong & weak nuclear, electromagnetism and gravity) with stochastic processes can form a processing system under any stretch of the imagination within the Upper Probability Limit." gpuccio: "Indeed, I think, like DLH, that even the most basic Turing machine or computer needs a quantity of information for its structure, symbolic code and so on, which should vastly overcome the limit of 500 bits. ... Since we are talking of computers and Turing machines, does anybody have any idea of where we should look for the code of the central nervous system organization? How can the 10^11 neurons in our body orderly connect with about 10^15 connections to realize the best known computing hardware, without a written information plan?" --------- As far as a Turing machine goes, just to clarify, people often mean a Turing machine program when they say "Turing Machine". I was referring to the actual TM that executes TM programs. And a TM, or equivalently a computer, is an extremely simple device. A computer has to step from one instruction to the next in a program. The following are the only instructions it has to know how to perform: Z(n) - "move zero into register n"; I(n) - "add 1 to the value in register n"; J(n,m,i) - "if the values in registers n and m are equal, then jump to instruction i". If a computer can do just that, everything else a program needs to do can be in the program itself, encoded in terms of sequences of only the three aforementioned instructions. A program could be a zillion bytes long and be executed by a computer that was only a few hundred bytes.
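The three-instruction machine just described can be sketched as a tiny interpreter; the tuple-based program encoding, the register dictionary, and the halting convention (letting the program counter run off the end) are illustrative assumptions of mine, not part of any standard formalism:

```python
# Minimal register-machine interpreter for the three instructions
# named above: Z(n) zeroes register n, I(n) increments it, and
# J(n, m, i) jumps to instruction i when registers n and m are equal.

def run(program, registers):
    """Execute `program` (a list of tuples) until the program
    counter falls off the end; return the final registers."""
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "Z":                      # Z(n): registers[n] = 0
            registers[args[0]] = 0
            pc += 1
        elif op == "I":                    # I(n): registers[n] += 1
            registers[args[0]] += 1
            pc += 1
        elif op == "J":                    # J(n, m, i): jump if equal
            n, m, i = args
            pc = i if registers[n] == registers[m] else pc + 1
        else:
            raise ValueError(f"unknown op {op!r}")
    return registers

# r0 := r0 + r1, by incrementing r0 and a counter r2 until r2 == r1.
add = [
    ("Z", 2),          # 0: r2 = 0
    ("J", 2, 1, 5),    # 1: if r2 == r1, halt (jump past the end)
    ("I", 0),          # 2: r0 += 1
    ("I", 2),          # 3: r2 += 1
    ("J", 3, 3, 1),    # 4: unconditional jump (r3 == r3) back to 1
]

print(run(add, {0: 4, 1: 3, 2: 0, 3: 0}))  # -> {0: 7, 1: 3, 2: 3, 3: 0}
```

The `add` program is itself data, which is the comment's point: all the sophistication sits in the program, while the interpreter executing it stays a few lines long.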
Building real computers entails a continual effort to increase their speed and capacity. I'm not supposing necessarily that even a Turing machine could come into existence through stochastic processes. But if it's obvious it cannot, then what's the point of throwing around huge numbers (e.g. 10^11, 10^15), or contemplating numbers of neurons, or angels or whatever, if random processes cannot even create 10^3? In the rest of this I will address why probability arguments regarding evolution are irrelevant. In the case of evolution, it is said that a series of random mutations (call it x) occurred over a period of time. Some set of natural laws f acted upon x to output the natural world (call it y) as we see it today. Now, to possibly state the obvious, whatever f might happen to consist of, there is of necessity some sequence of mutations x that could occur such that f(x) outputs the biological world. Imagine that f is something incredibly stupid like "flip all the bits of the input x and output the result" (if we're thinking in a digital context). There is still a value for x such that f(x) = y, the biological world (encoded digitally). Of course, the issue for ID'ers is probability. They would say that the likelihood of getting an x with the necessary value by chance in this situation would be no better than just getting the biological world itself by pure chance, because x, in this scenario, is just an alternate encoding for the biological world itself, in that f does virtually nothing except flip bits. And I think you would be correct. Of course, we could encode f in such a way that it was much more likely to get an x such that f(x) = y. Suppose that f encodes a complex specification for viable biological life, and on the basis of whatever input x it gets, it takes a slightly different path (perhaps encoding for brown hair instead of green in a given instance, or whatever).
But whatever input x f gets, it tries to incorporate it into a complex infrastructure it already contains. Therefore, in this situation, there are obviously lots and lots of values for x that would result in viable complex life. However, now f itself is extremely complex and unlikely to occur by chance. Since we don't have an explanation for f's existence, it does in fact exist by chance. So whether the burden of information and complexity is in f, or in x, or divided equally between them, the probability of getting an evolutionary mechanism f(x) that could produce the biological world is vanishingly remote. But does this prove this mechanism f(x) did not occur? It does not. Suppose you're sitting in an empty room with an open door. You turn away for a moment and then look back, and sitting in the room in front of you in a red wagon is Richard Dawkins. And the question arises in your mind: how on earth did that happen? So, later I enter the room and inform you, "Richard Dawkins was sitting in a wagon in the other room, and when you weren't looking I pushed the wagon, and Richard Dawkins rolled through the door and into the room." So you think about it and say, "Let's see... the mechanism f is Newtonian force, with consideration of other factors, e.g. frictional forces from the surface of the floor, the wagon wheels, etc. x is Richard Dawkins sitting in the wagon. [And to take agency out of the picture, say I fell over backwards and hit the wagon.] x is Richard Dawkins sitting in the wagon in the other room. f(x) = Richard Dawkins sitting in the wagon in this room now in front of me. However, the chance of getting a Richard Dawkins by chance is vanishingly remote. Even if we incorporate into f a mechanism capable of generating Richard Dawkins (for example, Richard Dawkins' parents), it just makes f that much more complex and unlikely itself.
So however you cut it, f(x) is too unlikely to have occurred by chance, so it is not true that Richard Dawkins rolled into this room after being pushed." (I may have undermined the argument for those who would insist that under any circumstances Richard Dawkins sitting in a room of their house is an extremely unlikely scenario. So just assume you're a sibling of Richard Dawkins.) Sorry to belabor all this if the point is already obvious, and it should be obvious. How can you rule out evolution or anything on the basis of probability alone? You started out as a microscopic cell (x), and forces of nature f acted on it to produce you. Are you saying that couldn't happen either, based on probability? It is precisely the same argument you use to rule out evolution. It would seem reasonable to me personally to assume that, given the immensity of the universe and the immensity of energy it contains, it must all exist for some reason pertaining to man. Before man or the biological world existed, it seems reasonable that there were forces and conditions (f(x)) extant in the universe that resulted in the biological world's existence. Other physical factors in addition to mutations and natural selection would have to be incorporated into that picture, undoubtedly. And yes, the f(x) we're talking about would be as unlikely to occur by chance as y itself. And really the crucial point is that f(x), whatever it is, if it resulted in y's existence, would therefore equate to y. Richard Dawkins sitting in a wagon in another room plus Newtonian force = Dawkins materializing in your room. Richard Dawkins' parents in a wagon in another room plus nine months plus Newtonian forces = Dawkins materializing in your room. The embryonic cells of Richard Dawkins' parents sitting in a test tube for eighteen years in a wagon in another room plus Newtonian forces = Dawkins materializing in your room. You're just constantly pushing back what needs to be explained.
Obviously you'll eventually hit something for which no natural forces exist to explain it. But given the size of our physical universe, it seems foolhardy to start ruling out preceding natural mechanisms so quickly (and in truth, when could any human being ever say conclusively that something was not the output of some physical process that preceded it?). [Note: I am aware that a lot of the above arguments did not in fact originate with me, but where, I'm not certain. Maybe Dembski himself said some of it, I don't know.]
JunkyardTornado
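The "incredibly stupid f" from the argument above can be made concrete; the 16-bit string standing in for y is of course an arbitrary toy value:

```python
# The bit-flipping f from the comment: for any target y there is
# exactly one x with f(x) == y, and that x carries exactly as much
# information as y itself -- f adds nothing.

def f(bits):
    """Flip every bit of a string of '0'/'1' characters."""
    return "".join("1" if b == "0" else "0" for b in bits)

y = "1011001110001111"          # toy stand-in for "the biological world"
x = f(y)                        # f is its own inverse, so f(x) == y
assert f(x) == y
print(x)                        # -> 0100110001110000
```

Since x here is just y re-encoded, finding x by chance is exactly as hard as finding y by chance, which is the point the comment makes before shifting the complexity into f instead.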
May 26, 2008 07:45 PM PDT
Oh, my bad, forgot Barbara Forrest is a secular humanist. I guess she wouldn't mention "god" in her ranting. ;D
F2XL
May 26, 2008 06:35 PM PDT
Ahh.... a creationist book!!!
Mats
May 26, 2008 03:22 PM PDT
Indeed, I think, like DLH, that even the most basic Turing machine or computer needs a quantity of information for its structure, symbolic code and so on, which should vastly overcome the limit of 500 bits. If I remember well, a Turing machine needs some basic code to define the machine itself. It can really be implemented on a computer (I think in one of Penrose's books there was something like that), although I cannot say exactly how much information code is needed. Maybe some of our friends engineers or programmers could help. Anyway, I agree that biological information is usually much more complex than that. Since we are talking of computers and Turing machines, does anybody have any idea of where we should look for the code of the central nervous system organization? How can the 10^11 neurons in our body orderly connect with about 10^15 connections to realize the best known computing hardware, without a written information plan? Or does someone think that a very simple fractal formula outputs the most complex working neural network in our experience?
gpuccio
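The "written information plan" in the question above can at least be sized. Taking the comment's own figures (10^11 neurons, 10^15 connections) and a deliberately naive encoding -- an explicit list naming both endpoints of every connection -- gives a scale, nothing more:

```python
# Crude sizing of an explicit wiring list for the figures quoted
# above. Naming one endpoint among 10^11 neurons takes about
# log2(10^11) ~ 36.5 bits; each connection has two endpoints.
from math import log2

neurons = 10 ** 11
connections = 10 ** 15
bits = connections * 2 * log2(neurons)   # two endpoints per connection
print(f"{bits:.2e} bits")                # roughly 7 x 10^16 bits
```

Any real developmental encoding would be far more compressed than an explicit list; the calculation only shows the order of magnitude a fully spelled-out plan would require.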
May 26, 2008 02:47 PM PDT
JunkyardTornado at 15
I take it you've heard of a Turing machine. A computer is an extremely trivial device - something that can read sequentially a series of instructions, increment values, and compare values for equality (and then jump to other parts of the program on that basis). This is nothing. It's pointless to philosophize about something so trivial having to be "designed."
I am curious why you dismiss discussion of the Turing machine as being designed. It may seem conceptually "simple". Yet I recommend you explore the factors required to design a real computer that can process some string. Note that it requires energy processing to do so. Or have I misunderstood your post? I have yet to see anyone explain how the four forces of nature (strong & weak nuclear, electromagnetism and gravity) with stochastic processes can form a processing system, under any stretch of the imagination, within the Upper Probability Limit. E.g., consider the information processing of DNA to proteins in even the simplest self-reproducing cell.
DLH
May 26, 2008 02:16 PM PDT
Denyse, just a heads up: Dinesh D'Souza talks about The Spiritual Brain in today's column.
nullasalus
May 26, 2008 01:10 PM PDT
- "DOES THE UNIVERSE IN FACT CONTAIN ALMOST NO INFORMATION?" (haven't read this)
JunkyardTornado
May 26, 2008 10:59 AM PDT
Which is more complex, tic-tac-toe or heart surgery? The latter is, because it takes a much, much longer description to characterize accurately in such a way that a person of average intelligence can grasp it. OTOH, sometimes the complexity of a task is due to ignorance. It's a complex task to break into a safe if you don't have the combination. Maybe heart surgery is simple as well.
JunkyardTornado
May 26, 2008 10:34 AM PDT
"The cover of our course textbook, Elements of Information Theory ([C-T]), depicts a computer generated image of a small segment of the Mandelbrot set. The explanation on the back cover says "The information content of the fractal on the cover is essentially zero". This comment contradicts our intuitions: The picture looks very complex, as Penrose so vividly expresses it." - Complexity measures for complex systems and complex objects
JunkyardTornado
May 26, 2008 10:26 AM PDT
gpuccio: "But if I see a print of a mandelbrot, I would think that it has been produced using designed tools (a computer)." I take it you've heard of a Turing machine. A computer is an extremely trivial device - something that can read sequentially a series of instructions, increment values, and compare values for equality (and then jump to other parts of the program on that basis). This is nothing. It's pointless to philosophize about something so trivial having to be "designed." I'm not an expert on fractals either, except that I suppose I could see how some gargantuan saved state could make them complex. Except that with Chaitin-Kolmogorov complexity, the memory or time consumed is not a consideration, just the smallest program length in instructions to compute a function. And I'm sure there's discussion somewhere on why the time and memory consumed can be ignored. Maybe I could try to hunt up some informative piece about algorithmic complexity and fractals.
JunkyardTornado
May 26, 2008 09:41 AM PDT
JunkyardTornado: I agree with you. Still, just to be clear about fractals: although the mathematical formula is rather simple, its computation is long and requires great computational resources (anyone old enough to have computed a Mandelbrot on an old computer can understand what I am saying). I am not aware of natural processes which can output a Mandelbrot, although the formula is very simple. Perhaps other kinds of fractals, which do not imply computations with complex numbers, may be found as the result of natural processes (I am thinking of snowflakes and the like, but I could be wrong). But if I see a print of a Mandelbrot, I would think that it has been produced using designed tools (a computer). Anyway, I am not an expert on fractals (although I love them very much), so if I have something wrong, please correct me.
gpuccio
May 26, 2008 08:56 AM PDT
mavis: "The more complex a structure is, the more instructions are needed to describe it" What jumps out at me here is fractals. x_(n+1) = x_n^2 + c, more or less. Well, I think it illustrates that some things can seem very complex when in fact they are not, when assessed according to objective criteria. The complexity turns out to be an illusion. When it is explained how a magic trick works, do you still insist, on the basis of what your eyes saw, that magic really took place? It's no different than gauging a fractal's complexity on the basis of a visual inspection and subjective reaction, perhaps contemplating the difficulty you would have in trying to draw it yourself freehand. Drawing a perfectly straight line is difficult as well, but not because it's complex. Which is more complex, tic-tac-toe or heart surgery? The latter is, because it takes a much, much longer description to characterize accurately in such a way that a person of average intelligence can grasp the essential details.
JunkyardTornado
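The point about the fractal's low descriptive complexity is easy to demonstrate: the escape-time test below is essentially the whole "description" of the Mandelbrot set. The iteration cap and the crude ASCII grid are arbitrary choices of mine:

```python
# A visually intricate object from a few lines of code: the program
# length (and hence the Kolmogorov-style description) stays tiny no
# matter how much detail the rendering shows.

def in_mandelbrot(c, max_iter=50):
    """Return True if c appears to stay bounded under z -> z*z + c."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # escaped: c is not in the set
            return False
    return True

# Crude ASCII rendering; the entire generating rule is the loop above.
for im in range(12, -13, -2):
    print("".join(
        "#" if in_mandelbrot(complex(re / 30, im / 20)) else " "
        for re in range(-60, 21)
    ))
```

This is the sense in which the back-cover quote in an earlier comment calls the fractal's information content "essentially zero": the picture is elaborate, but the shortest program producing it is short.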
May 26, 2008 08:36 AM PDT
Off topic: you can use Linux to open a pdf. Then you don't have to worry about trojans/viruses etc.
DrDan
May 26, 2008 08:34 AM PDT
Re .pdf: I downloaded it but can't find where it went. However, I am not a techie and do not need to solve the problem immediately. Re designers: one can realize that a product features design without ever knowing who designed it. That is why all theists and non-materialist atheists agree on design, but most do not use it as a key apologetic. That is why ID is a big tent.
O'Leary
May 26, 2008 08:23 AM PDT
PS: Fractals do NOT pass the EF -- they are caught as "law", the first test. It is the programs and formulae that generate them that pass the EF. [And these are known independently to be agent-originated, so they support the EF's reliability.]
kairosfocus
May 26, 2008 03:58 AM PDT
Mavis: Thanks for the thoughts. I have put up Reader 8.1.2 [sigh . . .], and it is d/loading the TMLO PDF [~70 MB]. Last, daily updates to modern malware packages do keep one in touch with the latest developments. GEM
kairosfocus
May 26, 2008 03:55 AM PDT
Also, KF, by the nature of the beast you cannot "pre-screen" PDF files -- until you have them you cannot examine the contents. Firewalls are of little use here too, unless you have them set to scan the data and look for trojans, which might work, but obviously only on trojans that are already known. Here is a typical exploit: http://www.securityfocus.com/bid/21910 Again, you can't filter what you don't already recognise. And only using PDF files from trusted sources is not going to do it, as millions of servers around the world serving "normal" websites have been compromised and are unwittingly serving malware. So it's just better to get patched to the latest all round. No excuses!
Mavis Riley
May 26, 2008 03:37 AM PDT
