Uncommon Descent | Serving The Intelligent Design Community

Out-of-print early ID book now available as a .pdf


An early ID book (possibly the earliest), The Mystery of Life’s Origin by Charles Thaxton, Walter Bradley, and Roger Olson (1984), with a foreword by Dean Kenyon, has been out of print for a while, I am told. But a .pdf can be downloaded here for now.

Information theory is a special branch of mathematics that has developed a way to measure information. In brief, the information content of a structure is the minimum number of instructions required to describe or specify it,  whether that structure is a rock or a rocket ship, a pile of leaves or a living organism. The more complex a structure is, the more instructions are needed to describe it. —Charles Thaxton, biochemist
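A rough, hands-on way to get a feel for Thaxton's "minimum number of instructions" is to use compressed size as a crude stand-in for description length. The short Python sketch below is an added illustration, not something from the book; note also that compression treats random noise as maximally complex, so it approximates Kolmogorov-style description length rather than specified complexity.

    import os
    import zlib

    def description_length(data: bytes) -> int:
        # Crude proxy for "the minimum number of instructions required to specify it":
        # the size of the data after lossless compression.
        return len(zlib.compress(data, 9))

    ordered = b"ABCD" * 250      # 1000 bytes generated by a very short rule
    noise = os.urandom(1000)     # 1000 bytes with almost no exploitable structure

    for label, blob in [("ordered", ordered), ("noise", noise)]:
        print(label, "raw:", len(blob), "compressed:", description_length(blob))

The ordered block compresses to a couple of dozen bytes while the random one barely shrinks (it usually grows slightly), matching the intuition that more irregular structures need longer descriptions.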

Meanwhile ….

Study: Sun not special, therefore alien life should be common?

Does time’s one-way street prove that other universes exist?

The day time went backwards

Flogos: Coming soon to a clear blue sky near you …

Science and ethics: When the devil offered a no strings research post.

Nature’s IQ: Intelligent design from a Hindu perspective

Science journalist warns against the “institutionalised idolatry of science”

Expelled film pre-trashed by United Kludgies of Canada (Trashing a film you haven’t seen is way less work.)

Is everything determined by forces over which we have no control?

Chuck Colson on neural Buddhism: Do neurons get reincarnated?

Hopeful signs: Disaster causes outpouring of charity in China

On Jane Goodall, apes, human uniqueness, and God

Comments
There are classics of Design available on the web. They should be more widely known and read: The Hand, by Sir Charles Bell; Organic Evolution, by the Duke of Argyll; Typical Forms and Ends in Creation, by McCosh. Vladimir Krondan
kairos: No problems with a technical KO: I can live with that! And I do hope that Darwinian evolution will continue to exist and be taught, even in schools, when it is no longer dominant. First of all, it will be funny, and moreover, people should not forget what a whole culture was able to believe for decades... gpuccio
... when Darwin theory was dominant and teached all over ^taught^ the world … Oops. Sorry for the mistake ... kairos
GPuccio: And obviously I respect your cautious position, but believe me, we will win by KO! :-) Hopefully, but there is also a third possibility: a less spectacular but definitive technical KO; i.e. a situation in which, although not proved absolutely, most scientists will argue for design and only a limited, isolated subset will ask to go back to "the good old times when Darwin theory was dominant and teached all over the world ..." :-) kairos
kairos: You are right, Penrose does not pursue a strictly non materialist viewpoint. He remains, so to speak, on middle ground. But his argument is all the same the best basis for a non materialist interpretation of consciousness and knowledge. In a sense, it's the same as for ID. ID is not necessarily linked to a spiritual perspective, but is the basis for it. The same can be said for Penrose's argument. And obviously I respect your cautious position, but believe me, we will win by KO! :-) gpuccio
GPuccio The fact is that, even in Penrose's views, the reason why human thought is not strictly algorithmic remains largely a mystery. Penrose has his model, which is based, if I am not wrong, on some application of quantum theories at the subcellular level. That's interesting, but still highly speculative. But, although it's quite possible that this will never be realized, I was arguing about machines that could possibly, in the future, be able to implement this kind of activity. After all, in the last year there has been much excitement about quantum computing. It's quite possible that in the end no meaningful results will be obtained in this field, but at the moment it is perhaps too early for a negative answer. If the two aspects which seem to characterize human knowledge, consciousness and non algorithmic cognition, are linked, and there are many reasons to think that it could be so, then the non algorithmic cognition would be possible only in the presence of consciousness. Unfortunately I haven't read Penrose's books, so I cannot give my opinion on this link between non algorithmic cognition and consciousness. But I know that this link is heavily questioned in the ID field as well. For example, Dembski argued that Penrose's position is not much different from a purely materialist one. He argued so in his book "Intelligent Design: The Bridge Between Science and Theology", pp. 220-222. I agree with you when you say: "On the other hand, strongly arguing about this impossibility without having a final proof of it is IMHO potentially dangerous for the ID side;" In a certain sense, the main argument that has been used for attacking Dembski concerns his claims about strict conservation of CSI. I don't agree with Dembski on this point, as I don't see any strict necessity to argue that CSI cannot increase through indirect generation. In my opinion, arguing so is like trying to win the match by a KO; in doing so, one puts oneself in a position to be easily hit by the opponent. kairos
Kairos: Just a note. You say: "I think this is true. But it is conceivable to produce machines that aren't strictly algorithmic." That remains to be seen. The fact is that, even in Penrose's views, the reason why human thought is not strictly algorithmic remains largely a mystery. Penrose has his model, which is based, if I am not wrong, on some application of quantum theories at the subcellular level. That's interesting, but still highly speculative. If the two aspects which seem to characterize human knowledge, consciousness and non algorithmic cognition, are linked, and there are many reasons to think that it could be so, then the non algorithmic cognition would be possible only in the presence of consciousness. I agree with you when you say: "On the other hand, strongly arguing about this impossibility without having a final proof of it is IMHO potentially dangerous for the ID side;" That's right. Indeed, I am presenting my reflections on this subject only as a personal opinion, and don't want in any way to involve ID in that argument. But I do think that, in time, the ID controversy will have to face these problems. gpuccio
kairosfocus: thank you for your intervention in this interesting discussion. Unfortunately, I think that the passage you cite about Penrose is really misleading. Reading it, it seems that Penrose is just commenting on some philosophical aspects of the AI question. That's not correct. The article cited in your post says: "However, Penrose certainly hasn't disproved that our brains aren't Turing machines. He points to some issues, but no proof." While there are certainly philosophical parts in Penrose's discourse, the central core of his argument is a rigorous mathematical demonstration. That argument is deductive in nature, and depends on a special application of Godel's theorem. It is explained in "The Emperor's New Mind", and explained again, and partially corrected, in "Shadows of the Mind". In that second book, Penrose also analyzes in detail a lot of objections made to his demonstration. I am aware that Penrose's argument is controversial, and that many do not accept it, and think that it is in some way flawed. But the fact is, the argument is a mathematical one: it can be true or false, but it is wrong to say that it "points to some issues, but no proof." It's exactly the opposite. The argument is a proof, only not everybody agrees that it is correct. So, we could say that it is a controversial proof. Personally, I am convinced that Penrose's argument is perfectly valid. Of course I am not a mathematician, and I could be wrong. I have, anyway, tried to understand the various aspects which are controversial, as far as I can, and I have always agreed with Penrose's views. In a sense, Penrose's argument is only a way of drawing the right conclusions from Godel's theorem. The status of Godel's theorem is indeed not completely defined in our scientific culture. Everybody accepts it as a fundamental piece of knowledge (at least, everybody who knows that it exists), but nobody seems to really agree on its real meaning. Something similar happens with quantum mechanics: it is correct, it is powerful, but do we really understand what it means? I think that the new scientific paradigm will have to go beyond the "Copenhagen interpretation" attitudes, and delve deeply into the problems of meaning. Penrose's thought is a good example of how a strictly technical approach can lead to unsuspected answers of huge philosophical relevance. Other examples will certainly come, if scientists accept the idea that they can think creatively (non algorithmically), and that they do not have to merely imitate computers. gpuccio
kairosfocus many thanks for your useful hints. As I've just said in my previous message, I would be happy if Penrose were right, but I am not convinced that there is strong evidence that machines couldn't in principle perform non-algorithmic computation, such as the kind of inference required for Archimedes' eureka or for Newton's gravitation theory. So, when you say: Thus Penrose says "you know, I don't think that our brain really looks like a Turing machine at all!" I completely agree on this. The problem is to know whether machines are or aren't constrained to this. Some would say that Penrose then goes off the deep end by hypothesizing that our brains are actually new types of computers: Non-algorithmic. They are not state machines! In Penrose's world, since all Turing machines (and all computer architectures) are algorithmic because they are all Turing equivalent, our brains cannot be replicated by a Turing machine. I think this is true. But it is conceivable to produce machines that aren't strictly algorithmic. Now some people have heard of neural nets, but it is important to recognize this is NOT what Penrose is talking about. Neural nets, fuzzy logic, etc are all algorithmic in nature. (For instance, neural nets speak to the non-commutative nature of the order of synaptic connections. This is simply a different type of algorithm.) That's true at this time, but for example a possible machine could better mimic the functioning of brain neurons at the analog level rather than at the digital level. Moreover, you could also add some form of indeterminism to the computation by using randomized ifs. Only hints, obviously. If Penrose is eventually declared correct, he will be revered for his insight. If wrong, then a computer can think. But without any consciousness, obviously :-) kairos
GPuccio: Obviously, the meta level and the symbolic apparatus can be algorithmically simulated, but that's not the same thing. Anyway, I suggest we leave it at that for now, and we can take up this discussion again any time we, or others, have new arguments. I agree. Finally, I perfectly agree with your final note on necessity. That's why I affirmed from the beginning that, even if your views about the possible indirect generation of CSI were true, that would not be in any way detrimental to ID theory. On the other hand, if my views on the subject were true, that would certainly make ID much stronger than it already is. But I think we agree on those points. Yes. As I said, the proven impossibility of indirect CSI generation would be the KO punch for NDE and materialism, and nobody would be happier than me about this eventuality. On the other hand, strongly arguing about this impossibility without having a final proof of it is IMHO potentially dangerous for the ID side; in fact NDEers could show every new (although modest) achievement in the AI field as a new proof that machines have no constraints. Obviously it's quite possible that this is not true, but the ID explanation is the strongest hypothesis even when arguing for indirect generation of CSI. kairos
Kairos: A teensy little note on a point:
machines that could be able to recognize analogies and differences in information stored in high-level data structures
Recognition of deep analogies is -- per our experience -- an imaginative, intuitive, insightful, creative non-algorithmic process. (It is not mere pattern matching. Think of Archimedes and his bath [EUREKA!], or Newton and his falling apple as the moon swings by. How many have sat in a bath that overflowed? How many have seen fruit fall? Many. How many have seen the potential? Just one each. We see such results as "obvious" only after the fact of genius-level insight and success. Indeed, we now in part see the world through the eyes of such great minds. We call that education and culture, or in some cases even language -- hence the point on absorbing CSI from the culture.) Oddly, that is embedded in our understanding of how science itself works -- abductive inference to hypotheses is a creative, non-routine step. So is the point of Godel's work on incompleteness: we are able to see truths that we cannot prove relative to any coherent set of axioms, once we deal with a realistically complex mathematical system. [This also probably implies that not all problems of interest may be reduced to algorithms that start from known initial points, take in inputs, process based on stored routines and possibly intermediate inputs, shift internal states and steadily move stepwise to generate desired outputs.] I suspect this is a bit of what Penrose was driving at (all the huffing and puffing to dismiss what he had to say notwithstanding). Here is a perhaps helpful brief layman's level summary of P's point:
Strong AI really points to the idea that if we could only learn to load the right algorithm into a computer, we could replicate the programming that we have in our own brain. So, people with strong AI would suggest that it is simply a matter of getting the human software into a computer. If you have watched Ghost in the Shell, you will see this idea echoed over and over: people loading their programming into machines. So where does Penrose come into the picture? Penrose starts digging into the nature of algorithms. He points out, through things like Godel and Church's lambda calculus, that we cannot figure out whether all Turing machines really get to a completed state. On top of this, we dive into general "incompleteness" in that not only can we not figure out a completed state for all Turing machines, but we also find out that various things in this universe (like the quantum model and relativistic models) never really get to completion. However, our brain deals with them just fine. Even though math is not a complete system (Godel says that all math will fundamentally experience paradoxes), we can use it with all of its problems. Thus Penrose says "you know, I don't think that our brain really looks like a Turing machine at all!" Some would say that Penrose then goes off the deep end by hypothesizing that our brains are actually new types of computers: Non-algorithmic. They are not state machines! In Penrose's world, since all Turing machines (and all computer architectures) are algorithmic because they are all Turing equivalent, our brains cannot be replicated by a Turing machine. Now, if you watch any Science Fiction, or if you are an anime fan like myself, you will recognize that this is destructive to a common motif. Ghost in the Shell is one anime that uses the standard convention of Strong AI to a great extent. In this anime, computer programs can become self-aware. In the same way, humans can load their algorithms (or their conscious mind) into computers. The same thing happens in the Matrix movies. If Penrose is correct, the Ghost in the Shell idea could never happen. The idea of the Matrix is just that: an idea. The Turing machine architecture cannot hold non-algorithmic QUANTUM computer programming. This last bit is highly controversial, and Penrose, while appreciated for his tour of all things strange, simply lacks any real evidence. Stuart Hameroff is trying to get around some of this. However, if Stuart is correct in any of his ideas, you cannot create a quantum computer out of a Turing machine. You need the ability to go backwards in time, and the brain is all about collapsing probability functions and NOT moving through a state machine. Now some people have heard of neural nets, but it is important to recognize this is NOT what Penrose is talking about. Neural nets, fuzzy logic, etc are all algorithmic in nature. (For instance, neural nets speak to the non-commutative nature of the order of synaptic connections. This is simply a different type of algorithm.) What Penrose is suggesting is only found one place in nature: our brain. There are paradoxes in our human self-awareness and Penrose does a brilliant job of going after a new way of thinking about them (did you know that our brain is on a 1/2 second delay, and it lies to you to make you think that it isn't!). However, Penrose certainly hasn't disproved that our brains aren't Turing machines. He points to some issues, but no proof. If Penrose is eventually declared correct, he will be revered for his insight. 
If wrong, then a computer can think.
At least, food for thought. GEM of TKI kairosfocus
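The recurring point in this thread that "we cannot figure out if all Turing machines really get to a completed state" is the halting problem. A minimal sketch of the standard diagonal argument, in Python terms (the halts() oracle below is the hypothetical being refuted, not a real function):

    def halts(program, argument) -> bool:
        # Hypothetical oracle: True if program(argument) eventually stops.
        # Turing's argument shows no total, always-correct version of this can exist.
        raise NotImplementedError

    def contrary(program):
        # Do the opposite of whatever the oracle predicts about program run on itself.
        if halts(program, program):
            while True:       # predicted to halt, so loop forever
                pass
        return                # predicted to loop, so halt immediately

    # Feeding contrary to itself is contradictory either way:
    # if halts(contrary, contrary) were True, contrary(contrary) would loop;
    # if it were False, contrary(contrary) would halt. So no general halts() exists.

Whether this undecidability supports Penrose's further claims about the brain is, as the comments above note, a separate and contested question.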
Kairos: OK, I think we agree on most things. I will keep my ideas about the main point, where you say: "Certainly the task of recognizing analogies and differences would require sophisticated recognition algorithms but this could be embedded at the lower levels of the machine. I mean that at a higher abstraction level it's quite conceivable that a machine can work simulating what humans actually do in a non-algorithmic way. Certainly this would not be at all a consciousness of any form, but I'm not convinced that a machine wouldn't be able in principle to implement, although in a purely behavioral way (i.e. without any real subjective consciousness), the meta-relation you have referred to." The important thing is that we agree on the absence of consciousness. I go beyond that, and continue to think that absence of consciousness implies absence of understanding of meaning, and therefore absence of a true symbolic apparatus. Obviously, the meta level and the symbolic apparatus can be algorithmically simulated, but that's not the same thing. Anyway, I suggest we leave it at that for now, and we can take up this discussion again any time we, or others, have new arguments. I perfectly agree that chess programs can increase their baggage of information the way you describe. That seems completely algorithmic to me, and I have no objections to that. I just don't think that's the way (or at least the only way) that human players improve their playing. Finally, I perfectly agree with your final note on necessity. That's why I affirmed from the beginning that, even if your views about the possible indirect generation of CSI were true, that would not be in any way detrimental to ID theory. On the other hand, if my views on the subject were true, that would certainly make ID much stronger than it already is. But I think we agree on those points. gpuccio
#56 gpuccio Thanks for the comments. Certainly this is a very complex and interesting argument. one of the strongest arguments for the non algorithmic (or at least, not purely algorithmic) nature of human knowledge comes exactly from mathematics. It is based on the famous argument by Penrose, based on Godel's theorem. It is too long and complex for me to sum it up here, but anybody can find it in detail in Penrose's two books, "The Emperor's New Mind" and "Shadows of the Mind". In brief, the argument aims to demonstrate mathematically that there is mathematical knowledge which is not algorithmic, which can be easily grasped by a conscious human being, and never by a computer, however complex. That's interesting, because it is easy to imagine that a computer may have difficulties in emulating specific human things, like art, feelings, religion and so on, but it is much more important to be able to demonstrate that it cannot emulate the human understanding of mathematics itself. ........ That ability to observe an object as a subject leads to the observer being in a "meta" relationship with the observed object: he puts himself "above" the things he is representing in his consciousness, and can better "look" at them. I completely agree that a mere algorithmic capability is largely inadequate to solve many of the problems humans typically solve easily. However, when I argued about the Gauss problem I didn't consider machines that strictly operate in an algorithmic way. I meant computing machines acting as inference engines at a more abstract symbolic level, i.e. machines that could be able to recognize analogies and differences in information stored in high-level data structures. Certainly the task of recognizing analogies and differences would require sophisticated recognition algorithms but this could be embedded at the lower levels of the machine. I mean that at a higher abstraction level it's quite conceivable that a machine can work simulating what humans actually do in a non-algorithmic way. Certainly this would not be at all a consciousness of any form, but I'm not convinced that a machine wouldn't be able in principle to implement, although in a purely behavioral way (i.e. without any real subjective consciousness), the meta-relation you have referred to. The ability of the observer to put himself in a "meta" relationship with the observed things can always work on successive levels, creating a "mise en abyme" which can be endless. That is the real cause of Godel's theorem, and of Penrose's argument. A computer cannot do that. It is confined to its code, forever. It cannot put itself in "meta" with respect to its own code. It can neither represent it nor observe it. A computer can never observe. But, as I've argued above, if we don't restrict a computer to acting in a purely (native) algorithmic way, a computing machine can work on successive abstraction levels too; and the sequence (or better, the tree) of abstraction levels can be potentially endless too. Obviously I don't mean at all that machines can really be made conscious of what they are doing; this is only fictional stuff, and the fact that even scientists working in the field think that this could be possible is a sign of irrationality. I simply mean that, restricting ourselves to mere functionality, I don't see an a-priori restriction preventing a future machine from performing the same mathematical inference that was performed by the young Gauss. I don't know. I would appreciate Gil Dodgen's contribution here. 
My feeling is that a chess program has extraordinary computing abilities, and can store a lot of pre-formed decisions (made by human players), so that it can quickly compute the consequences of a move, and select the pre-formed solution which applies. Does that make the program intelligent? Is a trivial pursuit game intelligent, when it gives you the right answers? The contribution of Gil would be welcome here. Actually I don't know whether Deep Blue, or other programs for that matter, is now able to find new and better "strategies" to win chess matches. However I can put on the table some 0.01-cent ideas. Certainly a chess-playing program can be implemented with the capability to continuously store all the data concerning all the matches it has played in the past, with complete statistics about moves and reactions by the opponents. So, it is conceivable that it could find analogies between different situations that all had in common a tactical advantage or, at best, victory in the match. No, that's not completely correct. Necessity has to be excluded separately. Chance is excluded on a probabilistic basis. You can apply statistical analysis only to random events, never to necessity. Even in the Fisherian scenario, usually applied in biological sciences, statistics is used only to compute the probabilities that a random mechanism is the cause of observed facts (the null hypothesis), and not to directly prove the test hypothesis, which usually implies a causal mechanism (necessity). You are right; but I didn't explain my idea about it well. I think (obviously IMHO) that the EF shouldn't exclude the action of necessity "in toto", i.e. the action of ANY deterministic agent. Instead it should be limited to what natural laws are able to produce from scratch. kairos
Kairos: Thank you for the generous discussion. I agree with you that the problem is open. Still, maybe I can add some thoughts, stimulated by your comments. "the intuition that allows a mathematician to "see" a new, simpler solution to a problem that up to then had been considered much more complex to solve." Now, that's interesting, because one of the strongest arguments for the non algorithmic (or at least, not purely algorithmic) nature of human knowledge comes exactly from mathematics. It is based on the famous argument by Penrose, based on Godel's theorem. It is too long and complex for me to sum it up here, but anybody can find it in detail in Penrose's two books, "The Emperor's New Mind" and "Shadows of the Mind". In brief, the argument aims to demonstrate mathematically that there is mathematical knowledge which is not algorithmic, which can be easily grasped by a conscious human being, and never by a computer, however complex. That's interesting, because it is easy to imagine that a computer may have difficulties in emulating specific human things, like art, feelings, religion and so on, but it is much more important to be able to demonstrate that it cannot emulate the human understanding of mathematics itself. Penrose's argument is certainly controversial, but I do believe that he is right. Moreover, I do believe (but that's my personal thought, not necessarily Penrose's) that the real meaning of that argument is that consciousness is necessary to many fundamental forms of understanding, even in mathematics. Consciousness allows one single procedure which cannot happen without it: detachment. The conscious observer can detach himself from what he is representing, and observe it as an object of his representing perception. That ability to observe an object as a subject leads to the observer being in a "meta" relationship with the observed object: he puts himself "above" the things he is representing in his consciousness, and can better "look" at them. The ability of the observer to put himself in a "meta" relationship with the observed things can always work on successive levels, creating a "mise en abyme" which can be endless. That is the real cause of Godel's theorem, and of Penrose's argument. A computer cannot do that. It is confined to its code, forever. It cannot put itself in "meta" with respect to its own code. It can neither represent it nor observe it. A computer can never observe. My point (and, I think, Penrose's) is that the presence of consciousness does not engender only a subjective difference, but also an objective one: the conscious experiencer can do things which a non conscious computing machine can never do. I am sure of that. I am almost sure that producing new CSI is one of those things, but I agree that that has to be formally proved. "Please think about the Deep Blue example I mentioned; certainly that machinery wasn't at all conscious of its operation but its result was pretty impressive and more "intelligent" (in its etymological sense: the ability to collect and select) than Kasparov, and certainly much, much more intelligent than any other human playing chess." I don't know. I would appreciate Gil Dodgen's contribution here. My feeling is that a chess program has extraordinary computing abilities, and can store a lot of pre-formed decisions (made by human players), so that it can quickly compute the consequences of a move, and select the pre-formed solution which applies. Does that make the program intelligent? 
Is a trivial pursuit game intelligent, when it gives you the right answers? The fact is, a computer, however complex, is not different from an abacus. Software is stored knowledge. Years of Strong AI theory and of (often good) science fiction novels have conditioned us to think that the more complex the software is, the more "human" it will be. I think there is nothing true in that. Strong AI is one of the most stupid theories I have ever known (with Darwinian evolution, it's really a fascinating competition). They have told us that if we compute in parallel instead of serially, miracles will happen. They are telling us that if we use enough loops in our programs, those programs will become conscious, as though a loop in the code could be the same thing as a consciousness observing its contents in "meta". All of that is nonsense. They have succeeded in convincing millions of conscious (and sometimes intelligent) people that the primary empirical thing they experience, their personal consciousness, does not really exist. That's collective hypnosis, at best. And so on. "I could be wrong but if I remember correctly the definition of CSI does not require the a-priori exclusion of the action of necessity and chance. Instead, this exclusion is done a-posteriori on a probabilistic basis." No, that's not completely correct. Necessity has to be excluded separately. Chance is excluded on a probabilistic basis. You can apply statistical analysis only to random events, never to necessity. Even in the Fisherian scenario, usually applied in biological sciences, statistics is used only to compute the probabilities that a random mechanism is the cause of observed facts (the null hypothesis), and not to directly prove the test hypothesis, which usually implies a causal mechanism (necessity). gpuccio
Sorry, in my previous post there are many typos, but I hope that the overall content is clear. kairos
#54 gpuccio You have explained the issues involved with CSI very clearly. In fact what is really worth exploring is the following question, asking whether CSI is something that can grow without the DIRECT action of an intelligent agent: Question: Even if CSI cannot be generated by natural laws + chance, and requires a designer, must CSI be generated directly by the designer, or can it be the product of a designed machine? In other words, can CSI be indirectly generated by a designer? That's a very important point, although, as you said, it would not be a problem for ID in either case. And yet, the answer to that question certainly has important consequences. This is certainly true; in fact if you (and Dembski of course) are right on this point, materialism, and its best-known weapon (NDE) for that matter, are completely defeated on a mere theoretical basis. In other words, if CSI cannot grow at all through laws+chance, this would be the final KO punch for both materialism and NDE. It's true that this would be the best of news for us IDers, but are we sure that this fact can really be proven? Perhaps we have to accept that ID will win in the long term but without any KO, simply because more and more of the people who are looking at the match will switch their votes from NDE to ID. Let's put it this way. We certainly know that machines (including computers, software code, etc., in other words any product of human agency which exhibits CSI and can give an output of some kind) can output CSI in various forms. A computer outputs results which have CSI. A printer can print Shakespeare's works. And so on. But the question is, do those machines really generate new CSI? Or do they just reutilize, in different form, the CSI they have in themselves, or the one which they receive as input? First of all, I have to admit that I cannot give a formally explicit discussion on that. I can just give my intuitive idea. Therefore, any input from you and others will be greatly appreciated. My personal idea is that the answer is no. Machines, however "intelligent", cannot create new CSI. They are destined to reshuffle what they receive, sometimes very brilliantly (but that brilliancy, in some way, is itself a merit of their programmers). I think that a key point here is to ask whether most of the intelligent higher-level tasks humans perform could themselves be classified as very sophisticated forms of reshuffling of information previously received, modified and stored. For example let us consider a very high form of intelligence: the intuition that allows a mathematician to "see" a new, simpler solution to a problem that up to then had been considered much more complex to solve. I think in particular about the well-known anecdote concerning Gauss who, as a young student, found an ingenious and quick solution to the problem of adding n numbers, each one differing from the previous one by the same number. However, it is well known that the human brain is particularly well suited to finding analogies and differences between different fields of expertise. So it's conceivable (though obviously not certain) that what Gauss actually did was: a) "see" some sort of analogy between the monotonic and linearly growing sequence of numbers and a sequence of piles of different heights; b) "simulate" the folding of the sequence of piles, observing that the result is n/2 piles with the same height; c) apply the result to the sequence of numbers. 
Indeed, this would have been a very clever and high-level form of reshuffling, but its result would have been a new, clever and simpler technique for solving a given problem. Moreover (it is possible that I'm wrong), according to Dembski's definition of CSI the result would have a higher CSI because the Chaitin-Kolmogorov representation of the algorithm is much shorter. Obviously we don't know what the real mental process was that allowed Gauss to find the new algorithm, but from a conceptual point of view there is no a-priori constraint that denies a very complex computing machine the ability to do the same. Why do I say that? Because I believe that the real source of CSI is intelligent consciousness. Machines are not conscious, and they never will be. Therefore, in a strict sense, they cannot even be "actively" intelligent (they can, obviously, be passively intelligent). I agree on this point, but the notion of consciousness is not involved in the definition of CSI, nor therefore in the design inference process. Why do I think that the source of CSI is only intelligent consciousness? Because CSI cannot be generated algorithmically. Let's start from the beginning. A piece of CSI (let's say Hamlet) cannot be produced by natural laws + chance. OK. But Shakespeare was using his brain and mind, and brain and mind are complex, specified structures. Somebody has put CSI in them. Let's say it was God. Moreover, Shakespeare has used as input a lot of previous CSI created by other human beings (his experiences, his culture, and so on). So, we could ask, was Hamlet just the result of algorithmic computations (that is necessity) performed by a complex structure with a lot of CSI in itself (that is, Shakespeare's brain and mind, and everything it contained)? I don't think so. I think Hamlet came from Shakespeare's consciousness, where all these things certainly contributed to his representations, to his feelings, to his intuitions, to his choices. From his consciousness, with the contribution of all this data, came out Hamlet. All the data, including brain and mind, could not have created it in themselves. Shakespeare's individual consciousness was necessary. Now, please take notice that I am not using the example of Shakespeare here because he is a great, creative artist. My example should stay valid for every form of new CSI. It is quite possible that this was what really happened. But IMHO we cannot discard the possibility that the same could be done by a machinery with huge computation power and storage, and whose inference rules are mainly based on searching for analogies and differences within the current storage state. Please think about the Deep Blue example I mentioned; certainly that machinery wasn't at all conscious of its operation but its result was pretty impressive and more "intelligent" (in its etymological sense: the ability to collect and select) than Kasparov, and certainly much, much more intelligent than any other human playing chess. Indeed, if CSI is not apt to be generated algorithmically by chance and necessity, when we see an algorithm outputting CSI, only two cases are possible: a) The CSI was already in the machine/algorithm, and is simply being copied, sometimes in a reshuffled form: the easiest case is that of the printer which prints Hamlet, or of the phrases used by the computer in its dialogue windows, or of the reshuffled audio comment in soccer games. b) The CSI is really being computed (necessity), but starting from a different CSI. 
In that case, although I am not able to manage the formalism, it seems to me that the total CSI should not increase (which should be in some way the law of conservation of information). It is, in a sense, copied in a different form, through algorithmic (necessary) functions and transformations. I think that the real point is what the real theoretical bounds are on what can be obtained by reshuffling information. It is possible that I'm wrong about this, but, according to my previous example, it seems conceivable that many intelligent tasks could produce some new CSI. It is important to remember that there is no way that an algorithm can create CSI out of nothing, because otherwise the result would be the product of necessity, or of chance and necessity, which contradicts the definition of CSI. I could be wrong but if I remember correctly the definition of CSI does not require the a-priori exclusion of the action of necessity and chance. Instead, this exclusion is done a-posteriori on a probabilistic basis. That's it, for now. I would be very interested to know how these concepts can apply to neural networks, and to their apparent "learning". Or to know, for instance from GilDodgen, who is certainly an authority about that, how they apply to chess playing and to the related software. I agree; his opinion and experience on this would be very useful. kairos
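As a concrete footnote to the Gauss anecdote discussed above, here is a small sketch (an added illustration, not part of the comment) contrasting the term-by-term summation with the closed form the young Gauss reportedly saw. The closed form is both faster and shorter to state, which is the sense in which its Chaitin-Kolmogorov description is more compressed:

    def sum_by_counting(n: int) -> int:
        # The long way: add 1 + 2 + ... + n one term at a time (n additions).
        total = 0
        for k in range(1, n + 1):
            total += k
        return total

    def sum_by_folding(n: int) -> int:
        # Gauss's trick: fold the sequence into n/2 pairs (1+n, 2+(n-1), ...),
        # each pair summing to n + 1.
        return n * (n + 1) // 2

    assert sum_by_counting(100) == sum_by_folding(100) == 5050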
Kairos (and kairosfocus): I think we agree completely on the point that CSI can never, empirically (that is, IN PRACTICE), be generated by natural laws + chance. The theoretical possibility should not be a problem for anyone. So, if there is CSI, there is a designer. There is no question about that (at least for us). But the question posed by RRE is really interesting, and I am intrigued by it. I'll try to sum it up again in a very simple form: Question: Even if CSI cannot be generated by natural laws + chance, and requires a designer, must CSI be generated directly by the designer, or can it be the product of a designed machine? In other words, can CSI be indirectly generated by a designer? That's a very important point, although, as you said, it would not be a problem for ID in either case. And yet, the answer to that question certainly has important consequences. Let's put it this way. We certainly know that machines (including computers, software code, etc., in other words any product of human agency which exhibits CSI and can give an output of some kind) can output CSI in various forms. A computer outputs results which have CSI. A printer can print Shakespeare's works. And so on. But the question is, do those machines really generate new CSI? Or do they just reutilize, in different form, the CSI they have in themselves, or the one which they receive as input? First of all, I have to admit that I cannot give a formally explicit discussion on that. I can just give my intuitive idea. Therefore, any input from you and others will be greatly appreciated. My personal idea is that the answer is no. Machines, however "intelligent", cannot create new CSI. They are destined to reshuffle what they receive, sometimes very brilliantly (but that brilliancy, in some way, is itself a merit of their programmers). Why do I say that? Because I believe that the real source of CSI is intelligent consciousness. Machines are not conscious, and they never will be. Therefore, in a strict sense, they cannot even be "actively" intelligent (they can, obviously, be passively intelligent). Why do I think that the source of CSI is only intelligent consciousness? Because CSI cannot be generated algorithmically. Let's start from the beginning. A piece of CSI (let's say Hamlet) cannot be produced by natural laws + chance. OK. But Shakespeare was using his brain and mind, and brain and mind are complex, specified structures. Somebody has put CSI in them. Let's say it was God. Moreover, Shakespeare has used as input a lot of previous CSI created by other human beings (his experiences, his culture, and so on). So, we could ask, was Hamlet just the result of algorithmic computations (that is necessity) performed by a complex structure with a lot of CSI in itself (that is, Shakespeare's brain and mind, and everything it contained)? I don't think so. I think Hamlet came from Shakespeare's consciousness, where all these things certainly contributed to his representations, to his feelings, to his intuitions, to his choices. From his consciousness, with the contribution of all this data, came out Hamlet. All the data, including brain and mind, could not have created it in themselves. Shakespeare's individual consciousness was necessary. Now, please take notice that I am not using the example of Shakespeare here because he is a great, creative artist. My example should stay valid for every form of new CSI. 
Indeed, if CSI is not apt to be generated algorithmically by chance and necessity, when we see an algorithm outputting CSI, only two cases are possible: a) The CSI was already in the machine/algorithm, and is simply being copied, sometimes in a reshuffled form: the easiest case is that of the printer which prints Hamlet, or of the phrases used by the computer in its dialogue windows, or of the reshuffled audio comment in soccer games. b) The CSI is really being computed (necessity), but starting from a different CSI. In that case, although I am not able to manage the formalism, it seems to me that the total CSI should not increase (which should be in some way the law of conservation of information). It is, in a sense, copied in a different form, through algorithmic (necessary) functions and transformations. It is important to remember that there is no way that an algorithm can create CSI out of nothing, because otherwise the result would be the product of necessity, or of chance and necessity, which contradicts the definition of CSI. We should notice that copying CSI does not create new CSI (two copies of Hamlet do not contain more CSI than one copy). In the same way, reshuffling CSI does not augment it (unless the reshuffling itself is CSI), and the same is valid for recomputing it (applying necessary transformations to it). Again, if chance and necessity cannot produce CSI, another causal principle is necessary. We call it agency, but in the end agency can be defined only as the product of intelligent consciousness. That's it, for now. I would be very interested to know how these concepts can apply to neural networks, and to their apparent "learning". Or to know, for instance from GilDodgen, who is certainly an authority about that, how they apply to chess playing and to the related software. gpuccio
Footnote: Technical side-point. Foxit Reader got the TMLO download to my desktop, and Acrobat 8 opens it there, successfully [. . . 5 was also swamped by new formats]. [There is a reported hiccup on images with Foxit, reportedly due to an update that won't get through just now.] GEM of TKI kairosfocus
GP and Kairos: Fascinating. Keep on going! GEM of TKI kairosfocus
#45, 46 gpuccio I have just read your post, after having posted mine. This seems to be one of the rare cases that we don't agree… I would appreciate further input from you. The question is really a good one! Probably we don't agree on this point, but perhaps not as much as it could seem. I have re-read my post and I've seen that in my point 2. there are some typos that probably have changed the meaning of the text. I rewrite it, putting the correct text within []: ---- 2. If point 1. is valid, and I don't see how it could be disputed, the problem is simply a probabilistic one. Is it possible that BOTH the computer machinery AND its symbolic code could be arised by mere naturalistic forces. [? **here I put a point instead of a question mark; I didn't mean that this is possible; I only asked for answering NO after **] It [IF] chance is against any reasonable and phisical [physical] possibility in the known universe, then design inference is a mere matter of reason and common sense. ---- In other words, my opinion is that: - Although there isn't a strict theoretical constraint that makes it ABSOLUTELY impossible (i.e. with Prob=0), CSI cannot IN PRACTICE be produced from scratch by the mere application of natural laws+chance. In other words, it's quite impossible that even a simple replicating structure could have been produced in that way, not to speak of the very sophisticated computing machinery needed to implement even the simplest computing step needed to act intelligently. HOWEVER: - Provided that a complex computing machinery had been previously assembled (and this is possible only through an evolution guided by an intelligent agent), and provided that its storage had been loaded with all the information that allows it to mimic the human expertise in a specific field, then this machine should be able to produce new CSI to the same extent that the human behavior it mimics is able to produce new CSI. This doesn't imply at all that an intelligent agent is not necessary, but simply that some amount of new CSI can be automatically produced by a pre-existent machinery that already embeds a much higher CSI. I don't think that this should be a problem for ID. After all, the definition of CSI is in itself based on strict probabilistic issues, i.e. the fact that the probability of arising by natural laws+chance is well under the UPB. On these grounds I think it's not possible to deny a-priori that some new CSI could be added. Who says that a computing system can actually pass the Turing test? Penrose's argument about the non algorithmic nature of human knowledge, based on Godel's theorem, would be against it. I didn't mean the global and absolute Turing test, but simply a test limited to a very specific field of knowledge where the human expertise can be easily formalized and added as expert rules to the computing machinery (for example chess playing, where Deep Blue has been able to beat the world champion). Certainly this is a case where the (restricted) Turing test has been passed. Moreover, as this kind of program has in itself the capability to "learn by playing", it seems that such programs are actually able to add some new CSI on top of a much larger pre-existent one. kairos
RRE: A good example which comes to my mind is the audio commentary in soccer video games. There you have the appearance that a new comment is generated by the game during a contingent play, which would mean new CSI, but in reality we know well what is happening: single recorded phrases are shuffled according to necessary algorithms programmed into the game, which are triggered by the contingency, more or less designed (depending on how well you play), generated by the player. Nobody is really commenting on anything. There is no perception of the play by a commenter, no representation, no original comment. In other words, no new CSI. gpuccio
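To make the "reshuffled recorded phrases" point concrete, here is a toy sketch of how such commentary systems work; the phrases and event names are invented for illustration. Every sentence the player hears was authored in advance, and the program only selects among them when a contingency triggers it:

    import random

    # All of the "commentary" is authored up front; the program only selects.
    PHRASES = {
        "goal":    ["What a strike!", "The keeper had no chance there."],
        "foul":    ["That was a clumsy challenge.", "The referee reaches for a card."],
        "kickoff": ["And we're under way.", "The match begins."],
    }

    def comment(event: str) -> str:
        # A contingent event triggers a necessary rule: pick a canned phrase.
        return random.choice(PHRASES.get(event, ["..."]))

    for event in ["kickoff", "foul", "goal"]:
        print(comment(event))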
RRE: Yes, that would be my opinion, but, as kairos has said, that's a good question, and open to discussion. We have a good example in biology, which is the immune system. The immune response creates a specific response to antigens from a pre-existing repertoire, and then potentiates it (antibody maturation) through guided random mutation and selection. But the selection is possible only because the original configuration of the foreign agent is retained in the immune system. So, even here it seems that the new information is modeled on information acquired from the outer world, and by an algorithm which is already pre-programmed in the system. gpuccio
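A toy sketch of the selection process gpuccio describes (a deliberate caricature, not a model of real immunology): random point mutations are kept only when they improve the match to a stored copy of the antigen, so the "new" information is shaped entirely by the retained target and the pre-programmed rule:

    import random

    ALPHABET = "ACDEFGHIKLMNPQRSTVWY"   # one-letter amino acid codes

    def mature(antigen: str, rounds: int = 2000) -> str:
        # Start from a random "antibody" and keep mutations that match the
        # retained antigen at least as well as before.
        antibody = "".join(random.choice(ALPHABET) for _ in antigen)

        def score(candidate: str) -> int:
            return sum(x == y for x, y in zip(candidate, antigen))

        for _ in range(rounds):
            i = random.randrange(len(antibody))
            mutant = antibody[:i] + random.choice(ALPHABET) + antibody[i + 1:]
            if score(mutant) >= score(antibody):
                antibody = mutant
        return antibody

    print(mature("MKTAYIAKQR"))   # converges toward the stored target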
gpuccio, When I stated: 'generating by themselves', I was referring to the computer and the game together. So a computer and program cannot generate or produce new CSI, only express pre-existing CSI which came into existence by programming as well as through creative input from an active agent, right? RRE
gpuccio
Perhaps other kinds of fractals, which do not imply computations with complex numbers, may be found as the result of natural processes (I am thinking of snowflakes and similar, but I could be wrong).
Have you never eaten broccoli? Examine the images on this page: http://www.fourmilab.ch/images/Romanesco/ Nature is full of fractals, and Benoit B. Mandelbrot knew it, hence his book The Fractal Geometry of Nature. I know the coast of Norway looks designed, but it's really just a fractal. There are also natural fractal seed-packing strategies that industry is attempting to emulate. Mavis Riley
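For anyone who wants to see how little "instruction" a fractal needs, here is a minimal sketch of the standard Mandelbrot-set iteration rendered as ASCII art (the grid size and iteration limit are arbitrary choices): a rule of a few lines produces an arbitrarily intricate boundary.

    def mandelbrot_ascii(width: int = 60, height: int = 24, max_iter: int = 30) -> None:
        # The entire "design" is the rule z -> z*z + c, iterated from z = 0.
        for row in range(height):
            line = ""
            for col in range(width):
                c = complex(-2.0 + 3.0 * col / width, -1.2 + 2.4 * row / height)
                z = 0j
                for _ in range(max_iter):
                    z = z * z + c
                    if abs(z) > 2:
                        line += " "   # escaped: outside the set
                        break
                else:
                    line += "#"       # stayed bounded: inside, at this resolution
            print(line)

    mandelbrot_ascii()

Whether such natural-law-plus-iteration patterns bear on CSI is, of course, exactly what the thread above is debating.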
PS: Who says that a computing system can actually pass the Turing test? Penrose's argument about the non algorithmic nature of human knowledge, based on Godel's theorem, would be against it. gpuccio
Hi kairos, I have just read your post, after having posted mine. This seems to be one of the rare cases that we don't agree... I would appreciate further input from you. The question is really a good one! gpuccio
RRE: In your example, I would say that there is a concurrence of two different components. The player is an intelligent agent, and his intelligent input certainly contributes to increasing the information in a specific instance of the game. At the same time, the game has been designed so that an apparent increase in CSI takes place during the playing. But in reality, the new CSI was already coded in the game, and it is gradually applied to specific parts of it. So, the only new CSI which is added to the game comes from the conscious, intelligent player. I am not sure I have understood your other observation well. If I have understood correctly, I agree with you that conscious intelligent agents (at least, human ones) have to connect with external reality through a mind and a body to output intelligent information (CSI). They have, indeed, to do the same thing to input information and consciously represent it. Mind and body, in my view, are instruments of consciousness. They do not generate it, but are necessary for its expression (again, in humans). They are the video game. Consciousness is the player. Again, the beauty of the model is that it is perfectly consistent. Here, as in the videogame example, new CSI can come only from consciousness. Body and mind are necessary to express it, but they cannot generate it by themselves. gpuccio
#42 RRE The game will increase in complexity as I kill more bosses and finish more quests. My question is, does the code itself act as an intelligent agent and can it produce its own CSI when added to its compatible machine set (like the computer with powersource) as a direct cause? Very good question, and I have thought a lot about it. IMHO the answer should be a clear YES. There is no a-priori limit to the extent to which a trained program (such as your man-trained game) could actually act intelligently. In a certain sense this is strictly connected to the reason why a carefully designed (and TRAINED) computing system can actually pass the Turing test (i.e., its behavior could be made indistinguishable from a human one). However, I don't see why this could be a problem for strongly arguing for ID in the physical world. In fact, let us consider that: 1. In any case, any intelligent agent who had decided to set up whatever world with some form of natural laws would be constrained to act according to those laws. 2. If point 1. is valid, and I don't see how it could be disputed, the problem is simply a probabilistic one. Is it possible that BOTH the computer machinery AND its symbolic code could be arised by mere naturalistic forces. It chance is against any reasonable and phisical possibility in the known universe, then design inference is a mere matter of reason and common sense. kairos
gpuccio, Say I have a piece of software that codes for a roleplaying adventure game. In the game the specificity of my character will increase as I go on adventures and take on quests, gain party members and new items. The game will increase in complexity as I kill more bosses and finish more quests. My question is, does the code itself act as an intelligent agent and can it produce its own CSI when added to its compatible machine set (like the computer with powersource) as a direct cause? I am under the impression that all intelligent agents must possess a functional symbolic code with a compatible machine set (whether that would be a software program and computer, robotic arm with PLC controller and compatible programming code, or a mind with a body). I do know from observation however that a mind has always been observed as the originator behind any code or program as the prime cause, whether direct or indirect, where such an origin is known. RRE
Quick footnotes: 1] Denyse -- got the download to go [took very long], but the file vanished. Just as for you. Somebody needs to fix that. 2] PBMRs and food to fuel. --> Corn production in the US has reportedly actually just about DOUBLED from 95 - 07, but Australia continues in drought. (And there is still a dramatic increase in US production, after ethanol production is taken out.) --> Those old enough to remember will recall how, when oil went on a quadrupling across the 70's, EVERYTHING else shot up. (Beware post hoc reasoning and the rhetoric of those with an agenda . . .) 3] PBMRs --> These may return nukes to the front burner . . . 4] intelligence is . . . --> Cf discussion here. --> We know intelligence from our experience of it in ourselves and observation of others who behave in intelligent ways like ourselves. So, we do not need to belabour ourselves over getting to statements of necessary and sufficient conditions when we can point to examples and say: if it is like that, it is the same basic thing. 5] But agency is magic . . . --> JT, am I to take your posts as so much magic? Or lucky noise? [Or should I take them as the acts of an intelligent, communicating agent similar to my own experience of myself?] --> I repeat, fact no 1 -- hard as it may be to fit in with certain reductionistic, materialistic worldviews -- is that we are agents, and this is deeply embedded in our reasoning, communicating and deciding, much less acting. --> Indeed, we experience the external world through our conscious minds, which are thus more certain than that experience . . . thus worldviews that make heavy weather of living with what has to come before experience and analysis of experience are in deep trouble with factual adequacy. --> So cf the discussion at App 6 of the always linked . . . starting from what you do when you see an apparent message by the side of a railroad made up from stones . . . Okay for now GEM of TKI kairosfocus
Substituting "Theory of Evolution" with "Chemical Evolution" would probably be applicable as well:
Why will many predictably persist in their acceptance of some version of chemical evolution? Quite simply, because chemical evolution has not been falsified. One would be irrational to adhere to a falsified hypothesis. We have only presented a case that chemical evolution is highly implausible. By the nature of the case that is all one can do. In a strict, technical sense, chemical evolution cannot be falsified because it is not falsifiable. Chemical evolution is a speculative reconstruction of a unique past event, and cannot therefore be tested against recurring nature.
beancan5000
JunkyardTornado: I don't want to keep you awake. I answer your post now, because here it's morning, but please feel free to answer when it's convenient for you. I am sorry, but I am afraid that it's you who are a little bit confused, both about ID and about some general concepts. Even if you invite the authorities of ID to correct me, I am afraid that here we are rather in a democracy, and you will have to discuss the matter, if you want, with me... First of all, I think you make a great confusion between terms like law and necessity. Here, we are talking about the laws of nature, not the laws of a state. There is a big difference. The laws of nature are, as I have already said many times, logico-mathematical formulations "explaining" facts. The very fact that facts happen according to mathematical laws formulated by us is in itself a big philosophical mystery, and a much debated one. But so it is. Therefore, the strength of the gravitational attraction between two bodies can be easily computed, to a great degree of precision, by the formulas of Newton's gravitational theory. A quantum waveform can be computed (in the easiest cases) according to a definite equation. And so on. These laws, which are the only laws I was talking about, are mathematical objects. As such, they work always, and always in the same way. Indeed, facts seem to happen according to such laws. That's why they are called laws of nature. They are necessary laws, it's perfectly true, because their results are totally deterministic. They strictly obey the principle of cause and effect: given some causes, some effects can be computed according to the appropriate mathematical formula. You say: "A set of laws is a computer code according to ID. It's late; otherwise I would provide the quotes from Dembski's own writings where he repeatedly equates computer programs and necessity." That's completely wrong. Computer programs are code (CSI) operating automatically through laws of necessity. Why? Because a computer is only a machine which obeys the laws of electromagnetism. Therefore, once some information (usually CSI) has been loaded in the machine, the machine operates according to necessity, and the algorithm which has been loaded performs its computations automatically and gives the necessary results. Here, the only natural laws at work are the laws of electromagnetism, which are described by Maxwell's equations. Maxwell's equations never change; they operate always in the same way. The code loaded in the machine, instead, is the direct product of a designer. The code has all the characteristics of CSI. On the contrary, the output of the program cannot contain any more CSI than it has received, both from the code and from the input data. The information can certainly change its form, but no new CSI is really produced. You say: "You are just flat wrong in your understanding of what ID says. It says that "intelligent agency" can do things that no computer, no program and no mechanism could ever do." That's perfectly right. It's intelligent agency which generates the computer code. The computer code only passively executes what the intelligent agency has planned. My intelligent agency is generating this post. No computer code could do the same (unless I input this post or insert it in the original code). So, I don't understand where I should be wrong. You can certainly encode knowledge in a program, but not in a law of nature. You cannot encode knowledge about contingent variables in a mathematical formula. 
The law of acceleration, F=ma, always remains the same. It is not F=ma in certain cases, with certain inputs, and F=2ma in other cases. A computer program could do that, because a computer program is a set of instructions arbitrarily written by a designer. Human laws can do that, because they are codes (literally) produced arbitrarily by designers. They are not natural laws. So, your examples are completely wrong.

Now, let's imagine that God wanted, in the beginning, to incorporate some knowledge of specific contingencies into a natural law. Let's suppose that, in shaping the law of acceleration, He decided (after all, He is God; who are we to limit His freedom?) that black objects would accelerate according to the F=ma formula, and white objects according to the F=2ma formula (please don't pay attention to the specific example, just follow the reasoning). OK, that is possible, in principle. That would be what you say: incorporating some knowledge, some specific instruction, into a law of nature, as though it were computer code. Obviously, only God could do such a thing.

But my point is: if God had done such a thing (which is, more or less, the strange idea of TEs), we should be able to observe it. Once the law is operating, we can observe how it works. So, we would see the black bodies behaving in one way, and the white bodies behaving differently. We would derive two different laws for the two different kinds of bodies, and those would be our natural laws. At that point, we could use those laws in any scientific theory trying to explain facts with them. If that difference in laws could explain CSI and biological information, we would observe it.

That's not what we observe. All the physical laws we know have no power to explain CSI. They are mathematical objects, rather austere ones, and they make no compromises. They incorporate no specific knowledge of contingencies. In other words, there is no way that the laws of physics, which are after all the only laws of the material universe that we know, can generate Shakespeare's Hamlet or, in the same way, the sequence of myoglobin. Both are examples of CSI. Neither can be generated by natural laws.

Naturally, a computer program can well type out a copy of Shakespeare's Hamlet -- if, and only if, that information was inputted into it. The sequence of myoglobin works in the same way. It has to be inputted, but indeed it could also be computed, provided that the necessary knowledge (which, by the way, we still don't possess) about how proteins fold according to sequence, and about the specific function that the sequence should realize, were inputted into the program. In both cases, the information about Hamlet, or about the sequence, or about how it could be computed, is CSI. It is not generated in the program. It is added to it. A designer is needed. The program acts from necessity, according to the natural laws of electromagnetism and to the initial conditions (specific information) which were inputted into it by the designer. But that kind of information is never created by necessity. gpuccio
gpuccio: "Here I cannot follow you. How can you “shift comnplexity or knowledge over to the laws f”? A law is a mathematical rule. It can be complex, but it is always the same, and it works always in the same way. That’s why it is a law. How can you “shiftcomplexity and knowledge” (of the output, I suppose) over to a law, or to a set of laws? A set of laws is not a computer code. " A set of laws is a computer code according to ID. Its late otherwise I would provide the quotes from Dembski's own writings where he repeatedly equates computer programs and necessity. You are just flat wrong in your understanding of what ID says. It says that "intelligent agency" can do things that no computer, no program and no mechanism could ever do. Dembski could correct you himself if he cared to. I will provide the quotes tomorrow if someone else does not. Knowledge is something that can be encoded. When someone says, "here are the laws by which this process is observed to operate" they are demonstrating knowledge. Knowledge = Mechanism = Law. Look at the laws of this country and how staggeringly complex they are. Do you think how those laws effect society are simplistic predictable or trivial? Why do you think people become lawyers? The behavior of a a complex set of laws is contingent on external conditions. You can have juridicial law with all sorts of provisions, acceptions, qualifications, and be a 1000 pages long - you saying that is not a real law. Everybody here seems to put their own spin on what ID is, and none of the supposed leaders ever step in to correct anybody. JunkyardTornado
KF "FIRST,our existential, experiential fact no 1 is that we exist in the world as conscious agents, who act with purpose to change the world to achieve goals. We are designing entities, and one of the commonly encountered artifacts of that design is information, functionally specified, complex information [FSCI]." Agents are things from which FSCI magically emerges. Agents are FSCI generating things. FSCI springs from them in a way that cannot be accurately characterized by any conceivable mechanism. Its just a foreign laughable concept to me, but you shouldn't feel offended, because many many people have this same intuitive sense that you do that this mysterious entity called agency exists. No offense at all, but there is quite evidently a fundamental unbridgable impasse, here. Maybe I'm equally vociferous in my claim that agency doesn't exist, but wheras you're saying, "This thing which is nothing actually exists." I'm saying, "This thing which is nothing does not exist." If something can't be characterized in any systematic way then its vacuous. If you're saying it not a mechanism, you're saying it cannot be accurately characterized. For people who would claim to appreciate the Bible, you would think that would not denigrate 'law' so much. All the Old Testament talks about is God's Law. The Psalmist David says, "How I love thy law", Christ says, "The Law will never be revoked", or words to that effect. In a similar vein the Gospel of John begins, "In the beginning was the Word". So in all of these is the strong implication that what is essential about God is something that can be encoded. Then we have ID that talks about law like its some pointless predictable second class citizen. But a second class citizen to what? W ell, to something that even ID cannot describe, something they say is impossible to describe, something they say mysteriously emits CSI. THIRD, in the course of the history of ideas, we have established long since the fact that causal forces are routinely observed to fall under the categories: chance, necessity, intelligence. The vague default unexamined notions people have held over the course of history is significant, why? Its not some accepted self-evident truism among the educated that man is some mysterious inexplicable God-like emitter of CSI. People can recognize, discern store and retrieve CSI due to their physical capabilities, their brain capacity, the sophistication of their sensory organs. My comments here are somewhat hasty and provocative but its kind of late. I would engage you on more of your posts, but you are quite committed to an idea which you consider to be self-evident, and I consider to be vacuous. The probability issues are not something I'm insensitive to, but how the solution is this magical csi emitting machine is not by any means apparent to me. Cheers. JunkyardTornado
M Caldwell, Tell them we have none for chance and necessity either - neither one for "physics" nor one for "atheism." Nor one for "hamburger," for that matter. This is the classic, philosophically pathetic attempt of anyone on the losing side of a debate: "I don't understand what you mean." They point to a metaphysical loophole of language - that is, to issues of rhetoric - which shows how political these discussions really are. Anyone can critique and confuse language and pretend to see through all things. But as T. S. Eliot said, "To see through all things is the same as not to see." Frost122585
JunkyardTornado: I will try to make my point clearer. I think we should discuss better what we mean by "law," or if you prefer "law of necessity." A law is a logico-mathematical formulation which explains facts and makes predictions of new facts. Obviously I am not saying that there is no evidence that natural laws exist! And obviously there exists a mutation sequence that would result in the human genome. We agree on that. I think we also agree that such a mutation sequence is, in itself, utterly improbable, if we hypothesize that it has to happen randomly. OK to this point?

You say: "Shift complexity or knowledge over to the laws f, and an x that can produce life becomes more likely, but then f becomes more unlikely." Here I cannot follow you. How can you "shift complexity or knowledge over to the laws f"? A law is a mathematical rule. It can be complex, but it is always the same, and it works always in the same way. That's why it is a law. How can you "shift complexity and knowledge" (of the output, I suppose) over to a law, or to a set of laws? A set of laws is not a computer code.

Let's make an example. Just to stay historically consistent, we will use the famous "Methinks it is like a weasel," only, I hope, with more sense than our friend Dawkins. So, let's suppose there is a law which outputs that phrase (which for the sake of simplicity we could assume here to be a piece of CSI, although it probably does not reach the right level of complexity). What form should that law have? Something like: "Whatever the outer conditions, just output 'M', then 'e', and so on"? That's not a law. That's a software instruction, and one which contains all the information it has to output. (A small sketch of this distinction between a law and an instruction follows at the end of this comment.) Now, let's suppose that we have devised a law for that phrase. How could the same law output, say, "To be or not to be"? Again, what you need here is not a law, but an instruction. You need information, not necessity. That's why necessary laws are not good at outputting CSI. For that task, you need "pseudo-random" sequences, shaped by intelligence. No law can output the works of Shakespeare. In the same way, no law could output this post you are reading. It is here for you to read because I am thinking it in my consciousness, then outputting it. If I did not think what I am writing, this post would never exist. It has never existed before, and it would never exist in the whole life of the universe. Why? Because this post is CSI, much more CSI than the weasel phrase (not because it is better, only because it is longer!). That's why I stress the importance of consciousness. The concepts of design, of designer, and of intelligence really have no meaning without the empirical fact of consciousness. All those concepts can be defined only as properties of consciousness.

You say: "If that specification is y, then here's a mechanism to output it - 'Output y.'" Again, that's not a law. It's a computer instruction. It's CSI. And a lot of CSI, if that "y" must include all the information of all the genomes of living beings! You are only saying that, to produce CSI, you need CSI. That's correct. And, as CSI can only be produced by a designer, you need a designer. In other words, you need, in sequence: 1) A conscious, intelligent designer. 2) A design, produced by the designer (your f, which is not a law nor a set of laws, but just CSI).
3) The implementation of the design, which happens obviously through physical laws, guided by the CSI of the design, so that they output CSI in the form of pseudo-random, intelligently ordered sequences, which have function and express meaning.

You say: "But what is design? It's copying with incremental changes over time, with the best ideas being refined upon by subsequent generations." No, design is not that. Design is the output of a conscious representation, which has the inner properties of meaning and purpose. Let's remember that design is not always CSI. Simple designs do not exhibit CSI. But design is always the product of intelligent consciousness. And CSI is always a form of design.

You say: "It's taken over 60 years and thousands upon thousands of people to come up with the computers we have today, with continual retesting, incremental refinement and so on." CSI does not require complex inventions. CSI is abundantly present in all abstract thought, in language, in mathematical thought, in artifacts, even rather simple ones. All of that is the product of design. And design is the product of intelligent consciousness. There is no doubt that intelligence can creatively improve on the acquisitions of other intelligent beings, and that's how computers, and all the products of human culture, have accumulated. But it is not so much a question of "incremental refinement" as of creative thinking, vision, inference, intuition, purpose, commitment, and, in general, representation of meaning. All of these are properties of intelligent consciousness. Incremental refinement is often necessary, but it pertains to the strategies of implementation. Consciousness does work through algorithms, but it is not algorithmic in itself (see Penrose), and it cannot be explained by algorithms.

"Where is the mysterious miracle in that process?" In intelligent consciousness. Without consciousness, you have no design. Without design, you have no CSI. In other words, only the conscious representations of an intelligent being can impose the form of CSI (complex meaning) on appropriate (random-like) supports. Nothing else in the universe can do that. Not true randomness, not any set of necessary laws. This truth is both logical and empirical, and it is the real strength of the whole ID theory.

Finally, let's go to your last note. You say: "If something is a physical mechanism it should be operating continuously? So why do we not have comets on a daily, if not hourly, basis?" I don't follow you. We were speaking of laws. If a law is a law, it operates always in the same way. The laws of mechanics and gravitation, whatever they are, are responsible for the trajectories of comets. Trajectories change with time; the laws which allow us to compute them don't.

"I would think someone who was not constrained by physical limitations more likely to be creating things continuously." I do think that the designer, who in my opinion is God, is creating things continuously. I am convinced of that for religious reasons, however, and I don't consider it a scientific statement (at least for now). But the continuous intervention of God in creation, in my opinion, takes different forms. Laws are one of those forms. The historical implementation of special design in living beings, through means which are open to enquiry, is another one. All of those, however, I insist, are religious and philosophical issues. The inference of design in biological beings, instead, is a purely scientific question. gpuccio
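To make the law-versus-instruction distinction above concrete, here is a minimal sketch (Python; the function names and the target phrase are purely illustrative and are not anyone's published model). A law is a single rule applied uniformly to whatever inputs it receives; a "law" whose job is to emit a particular phrase must simply carry that phrase inside itself, which is exactly what an instruction, not a law, does.

```python
# A law of necessity: one fixed rule, applied in the same way to every input.
def force(mass, acceleration):
    return mass * acceleration  # F = m*a, the same for any body whatsoever

# An "instruction": not a rule about its inputs at all; the whole output
# (the CSI, in the thread's terms) is stored inside it, character for character.
def output_weasel(_conditions=None):
    return "METHINKS IT IS LIKE A WEASEL"

print(force(2.0, 9.8))    # works for any mass and acceleration you care to supply
print(output_weasel())    # can only ever yield the pre-loaded phrase
```

Whatever information appears in the second function's output was put there when the function was written; the rule itself adds nothing.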
"ID avoids having to assess the probability of their designer because they say a designer cannot in fact be described by laws, program instructions, or by any other systematic method (so there is nothing to measure)."
Well, we have criteria for detecting design, chance and necessity. A probability can always be assessed for the chance/necessity combination, but when it exceeds the probabilistic resources that we deem reasonable for non-intelligent nature, then intelligence becomes the leading contender. As KF pointed out, we know what intelligence can do. We know that it can beat probabilities and design for purpose.

Your question is about assessing the probability of the designer, but you display a very weak understanding of natural philosophy to pose such a nonsensical question. When we assess the probability of an event, it is not on intelligent or non-intelligent grounds that the assessment is made. Probability has strictly to do with the complexity, specificity and natural physical laws and boundaries set by secular science. In other words, you can't ask the question "what is the probability of the designer?" because you can't ask the question "what is the probability of chance and necessity?" You see, when you ask the question "what is the probability of the designer?" you are actually asking what the probability is that it has "being." On the other hand, you then have to ask what the probability is that chance and necessity "have being," and this is a ridiculous question. Of course they have being, as do intelligence and design. The question that ID looks at is "where do chance, necessity and design reside?" In other words, which is which in the known world.

Your question is about being, but chance and necessity can be used as factors in a design - for example, if I design a car from scratch there are physical laws I must abide by, and there is a certain amount of chance involved along the way. Yet I could chalk the whole design up to chance and necessity, except that doing so would by definition artificially remove the conception of intelligent agency. So ID does take for granted that intelligent agency exists, just as physics and philosophy take for granted that necessity and chance exist. So this question regarding "the being" of the agent is ALWAYS beside the point. I cannot find the object of "chance" and "necessity" in the cosmos, because they do not exist as beings outside of circumstantial interpretations of events. The EF and Dembski's criteria give us an excellent physical interpretation of design, just as we have a physical interpretation of chance and necessity via definitions based on empirical experience. I can ask: what is the probability that chance exists? What is the probability that necessity exists? These questions are vacuous, because it is the interpretation of circumstances that gives rise to their definitions. Design, as a concept of equal philosophical strength, is and should be treated the same way. Frost122585
JT Please, compare 29, esp. remarks on excerpt 5. GEM of TKI kairosfocus
I'm still trying to work through some of this: I was saying previously that if it is presumed that the natural laws are extremely simple, then this puts a huge burden on the mutations to produce a highly improbable string with a lot of information. My observation was that you could put more info into f, which would make getting a usable sequence from the mutations x more likely. However, I said you would just make the natural laws more unlikely to occur themselves, as they were more complex. Here was the error I made, I believe: the mutations are defined as purely stochastic and random. They come into existence at a point in time for no reason at all. With f, however, it's possible you could view it as having always existed. If something with a lot of information in it has always existed, that's not the same thing as having come into existence for no reason at all (as with the mutations x). It is a pointless extra step that ID makes to demand that something like f (with a lot of info) be "designed" by a vacuous entity labelled an "agent." f itself should just be considered as part of an eternal deity. Now even though f is finite, it doesn't mean the eternal deity is finite, because f is only part of God. So the crux of the matter is, one should say that something too improbable to occur by chance has always existed, because eternal existence isn't the same as stochastic emergence. JunkyardTornado
H'mm: A few thoughts on Turing machines and the like:

1] JT, 25: I was referring to the actual TM that executes TM programs. And a TM, or equivalently a computer, is an extremely simple device. A computer has to step from one instruction to the next in a program. The following are the only instructions it has to know how to perform: Z(n) - "move zero into register n"; I(n) - "add 1 to the value in register n"; J(n,m,i) - "if the values in registers n and m are equal then jump to instruction i".

Whoa. Anyone who has had to physically instantiate a computer from the ground up will realise that the issues involved in, say, electrical, mechanical, electromechanical and electronic technologies to build the "simple" machine are anything but "simple." --> Unless mechanical elements are properly shaped, sized and made from the right materials, then oriented and fastened and/or interfaced together correctly, they will not work. (Cf. my microjets example, Appendix 1 of the always linked.) [Recall here Babbage's Analytical Engine and the impact it had on the machine tool industry, even though it proved infeasible with Victorian-era technology (the gears, the gears, the gears, Wiki! It was not just that CB was a "difficult" person.)] --> Similarly, a read head is an extremely complex and precise device, electronically, electromechanically, optically, magnetically or mechanically. Just look at Wiki on paper-tape input devices. [I am old enough to remember punched-card punching and reader machines . . . great, hulking, solidly built, very precise IBM machines. Usually painted a dull grey for some reason.] --> Multiply by the complexity of the symbolic code required to give instructions. (And, where do functional codes come from, in our observation?) --> Exponentiate by the algorithms that have to be coded and the underlying mechanisms for physically implementing same. (In our observation, where do algorithms come from?)

2] I'm not supposing necessarily that even a Turing machine could come into existence through stochastic processes. But if it's obvious it cannot, then what's the point of throwing around huge numbers (e.g. 10^11, 10^15) . . .

First, there are a lot of people out there who seem to think that -- probabilistic resources issues notwithstanding -- things far more complex than a TM can self-assemble out of chemicals in a still warm pond or a hydrothermal vent, whether on our planet or elsewhere in the observed universe. Indeed, they seem to hold that unless you accept such, you are not properly "scientific." Further to this, the book downloadable through this thread discusses the issues linked to this, and in fact was the foundational technical-level book that launched the design movement. Third, per Dembski's later work, the threshold of exhaustion of such resources we look at before ruling intelligence rather than chance is of order 10^150 - 10^300. To get around that, we see increasing resort to a speculative quasi-infinite wider cosmos as a whole with randomly distributed physics across sub-cosmi. That in turn allows us to highlight that we have here crossed over from empirically anchored scientific reasoning in the observed cosmos to the field of highly speculative metaphysics. Often without announcement of the fact and its implications. These bring up . . .

3] JT, 27: What if that law of necessity in fact contains encodings for CSI.

First, mechanical necessity shows itself as the root factor underlying natural regularities.
This means that regularity dominates over contingency: heavy objects fall and may thereafter roll around and tumble before they settle to rest. High contingency, the basis for contingency in the sense we speak of, per a vast body of observations, is rooted in chance or intelligence. E.g. if the heavy object is a die, its uppermost face on settling is effectively chance or agency. Then, if there are enough dice -- 200 to 400 six-sided dice -- and the outcome meets a simply describable specification that is vastly improbable on chance, then we have excellent reason to infer to agency. (Here, suppose the dice are expressing a code to drive a program on, say, a TM. [A quick check of the numbers involved follows at the end of this comment.])

4] There does not exist a mutation sequence that would result in mankind? What if that mutation sequence directly coded for mankind?

We are not dealing with abstract logical possibilities but with the search for paths that island-hop from OOL to body-plan level biodiversification to the human being. The constraint that such entities must implement and maintain themselves in cellular and bodily level viable organisms and populations imposes huge constraints and specifications that lend themselves to the inference that probabilistic resource exhaustion practically speaking rules out the Darwinian, evolutionary materialist pathways that are often presented as consensus, established science. Biology here -- once the role of DNA emerged -- has built bridges to information and [statistical] thermodynamics issues, and it is not faring so well.

5] ID avoids having to assess the probability of their designer because they say a designer cannot in fact be described by laws, program instructions, or by any other systematic method (so there is nothing to measure).

Precisely backways around. FIRST, our existential, experiential fact no 1 is that we exist in the world as conscious agents, who act with purpose to change the world to achieve goals. We are designing entities, and one of the commonly encountered artifacts of that design is information, functionally specified, complex information [FSCI]. We live in a cosmos in which agents are possible, and are actual. P[agency] = 1, for all practical purposes. SECOND, agency is foundational to being able to have a rational discourse [cf my Appendix 6, the always linked . . .] THIRD, in the course of the history of ideas, we have established long since the fact that causal forces are routinely observed to fall under the categories: chance, necessity, intelligence. They may interact in any one case, but in cases of known origin, they are routinely seen to be adequate to describe origins. FOURTH, we can show that for known cases, FSCI [or the equivalent] is a reliable sign of intelligence as opposed to chance or necessity. So, per scientific induction on best explanation anchored to empirical observation, we can credibly infer to the fact of design per its reliable signs. IF FSCI (etc) THEN P[design] --> 1. FIFTH, design implies intelligence, i.e. agent action, not random search or its equivalent or anything comparable, but active information stemming from insight that leads us to functional configurations not practically reachable by chance-based searches in the relevant config spaces (on pain of probabilistic/search resource exhaustion): IF design THEN P[designer] --> 1. In short, reliable signs of design are epistemic warrant for inferring to design, thence designer. One starts from the fact that designers and designs exist and have characteristics that are reliably discernible.
Then, once we see the signs, we credibly know that we have design, thence designer. That there is a designer is different from identifying who or what it is. Indeed, it is a premise for trying to find out WHODUNIT, that "'twere DUN." ____________ The controversies that surround ID show me that the issue is not who the designer is or may be -- it seems that objectors to the empirically anchored inference to design suspect that the best/most plausible candidate for designer of life and cosmos may be someone they have little or no desire to meet or deal with. So, I find the objections tend to short-circuit the epistemic warrant for the design inference, even at the price of being inconsistent in the praxis of science [e.g. we routinely infer to experimenter influence in experiments etc!]. Can we therefore look back at the actual epistemological case made by the design thinkers, instead of appealing subtly to prejudice? GEM of TKI kairosfocus
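The dice numbers above are easy to check: each six-sided die carries log2(6), roughly 2.585 bits of contingency, so 200 dice correspond to about 517 bits, or 6^200 (about 4 x 10^155) distinct sequences, past the 10^150 figure cited earlier; 400 dice give about 10^311, past 10^300. A quick sketch (Python), assuming nothing beyond the arithmetic itself:

```python
import math

dice = 200
sequences = 6 ** dice                 # distinct outcomes for 200 six-sided dice
bits = dice * math.log2(6)            # contingency carried by those dice, in bits

print(f"6^{dice} is about 10^{math.log10(sequences):.1f}")  # ~ 10^155.6
print(f"that is about {bits:.0f} bits")                     # ~ 517 bits
```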
gpuccio: "“First of all there is absolutely no evidence of such an f: if those laws existed, we should observe them working always” I think now I wasn't completely on point to what you were saying here. But even what you're saying is not sound. If something is a physical mechanism it should be operating continuously? So why do we have not have comets on a daily, if not hourly basis? I would think someone who was not constrained by physical limitations more likely to be creating things continuously. Thanks for your comments, though. That's it for me today. JunkyardTornado
"First of all there is absolutely no evidence of such an f: if those laws existed, we should observe them working always" There is no evidence that natural laws exist? There does not exist a mutation sequence that would result in mankind? What if that mutation sequence directly coded for mankind? Certainly such a sequence x could happen, though the chances are vanishingly remote. So with the existing natural laws, such as they are, there exists some mutation sequence x that would result in the biological world. Shift complexity or knowledge over to the laws f, and an x that can produce life becomes more likely, but then f becomes more unlikely. What is difficult to understand. Are you denying that a deterministic mechanism can output mankind? Do you deny epigenesis then? What is the probability of that particular mechanism coming into existence by chance. Does it mean it doesn't exist? Can the biological world be specified? If that specification is y, then here's a mechanism to output it - "Output y." Of course, the route could be much more circuitous than that, and conditioned by all sorts of random factors (which is what the input x to a process is, the random outside factors impinging on some deterministic mechanism.) So, the entire physical universe in your view really has no relevance at all to the biological world and its origins. Its just a bunch of additional garbage that the designer created for no good reason, apparently. no law of necessity can generate information with the characteristics of CSI What if that law of necessity in fact contains encodings for CSI. Why not call f, then, the thought of the designer Why not indeed. Why not fully characterize the thought processes of a "designer". Of course, ID avoids having to assess the probability of their designer because they say a designer cannot in fact be described by laws, program instructions, or by any other systematic method (so there is nothing to measure). Science is about best explanations. You, like many others, insist in excluding the simplest, and totally empirically based, explanation: design. But what is design -its copying with incremental changes over time, with the best ideas being refined upon by subsequent generations. Its taken over 60 years and thousands upon thousands of people to come up with the computers we have to day, with continual retesting, incremental refinement and so on. Where is the mysterious miracle in that process. We (in ID) deny your f for the simple reason that it does not exist in observable reality. Well perhaps your views are largely representative of the ID community in general and its most prominent members, I have no reason to deny that. Thanks for taking the time to read my thoughts. JunkyardTornado
JunkyardTornado: Thank you for your post, where you really try to clarify your views. You have been very specific, so I will answer very briefly and specifically. I think I have followed your reasoning, but your reasoning in my opinion has two serious flaws:

1) Your f (which you define as "a set of natural laws"), as far as we know, does not exist. If what you say were right, f would anyway be some set of laws of necessity, so complex as to give y with some reasonable probability. That is simply not true scientifically. First of all there is absolutely no evidence of such an f: if those laws existed, we should observe them working always, and we would not observe only their results in the biological world, and nothing else (unless you suppose that, like God, that f is resting on the seventh day...). Moreover, there are serious logical reasons (see for instance Abel and Trevors) why no law of necessity can generate information with the characteristics of CSI. Therefore, your f, more than a set of laws, would take the form of a platonic counterpart of the information it has to generate. Why not call f, then, the thought of the designer?

2) Science is about best explanations. You, like many others, insist on excluding the simplest, and totally empirically based, explanation: design. Why? Design exists. Human designers continually generate CSI. So, why such obstinacy in denying the obvious, resorting to long and inconsistent reasonings? That's not cognitive coherence. That's dogma. We (in ID) deny your f for the simple reason that it does not exist in observable reality. You can keep it as a personal dream, if you want, but we have no need to share that dream. We (in ID) affirm design because it exists, it is observable, and it can perfectly explain biological information. For me, that's cognitive simplicity and coherence, free from any intellectual dogma and prejudice. It's as simple as that. gpuccio
DLH: "I am curious why you dismiss discussion of the Turing machine as being designed. It may seem conceptually 'simple'. Yet I recommend you explore the factors required to design a real computer that can process some string. Note that it requires energy processing to do so. Or have I mussunderstood your post? have yet to see anyone explain how the four forces of nature (strong & weak nuclear, electro-magnetism and gravity) with stochastic processes can form a processing system under any stretch of the imagination within the Upper Probability Limit. " gpuccio: "Indeed, I thimk, like DLH, that even the most basic turing machine or computer needs a quantity of information for its structure, symbolic code and so on, which should vastly overcome the limit of 500 bits. ... Since we are talking of computers and turing machines, anybody has any idea of where should we look for the code of the central nervous system organization? How can the 10^11 neurons in our body orderly connect with about 10^15 connections to realize the best known computing hardware, without a written information plan" --------- As far as a Turing machine, just to clarify, people often mean a Turing Machine program when they say "Turing Machine". I was referring to the actual TM that executes TM programs. And a TM, or equivalently a computer, is an extremely simple device. A computer has to step from one instruction to the next in a program. The following are the only instructions it has to know how to perform: Z(n) - "move zero into register n; I(n) - "add 1 to the value in register n"; J(n,m,i) - "If the values in registers n and m are equal then jump to instruction i". If a computer can do just that, everything else a program needs to do can be in the program itself, encoded in terms of sequences of only the three aforementioned instructions. A program could be a zillion bytes long and be executed by a computer that was only a few hundred bytes. Building real computers entails a continual effort to increase their speed and capacity. I'm not supposing necessarily that even a Turing machine could come into existence throught stochastic processes. But if its obvious it cannot, then what's the point of throwing around huge numbers (e.g. 10^11, 10^15), or contemplating numbers of nuerons, or angels or whatever, if random processes cannot even create 10^3. In the rest of this I will address why probability arguments regarding evolution are irrelevant. In the case of evolution, it is said that a series of random mutations (call it x) occurred over a period of time. Some set of natural laws f acted upon x to output the natural world (call it y) as we see it today. Now to possibly state the obvious, whatever f might happen to consist of, there is of necessity some sequence of mutations x that could occur such that f(x) outputs the biological world. Imagine that f is something incredibly stupid like "flip all the bits of the input x and output the result" (if we're thinking in a digital context). There is still a value for x such that f(x) = y, the biological world (encoded digitially). Of course, the issue for ID'ers is probability. They would say that, the liklihood of getting an x with the necessary value by chance in this situation, would be no less than just getting the biological world itself by pure chance, because x, in this scenario, is just an alternate encoding for the biological world itself in that f does virtually nothing except flip bits. And I think you would be correct. 
Of course we could encode f in such a way that it was much more likely to get an x such that f(x) = y. Suppose that f encodes a complex specification for viable biological life, and on the basis of whatever input x it gets, it takes a slightly different path (perhaps encoding for brown hair instead of green in a given instance, or whatever). But whatever input x f gets, it tries to incorporate that into a complex infrastructure it already contains. Therefore, in this situation, there are obviously lots and lots of values for x that would result in viable complex life. However, now f itself is extremely complex and unlikely to occur by chance. Since we don't have an explanation for f's existence, then it does in fact exist by chance. So whether the burden of information and complexity is in f, or in x, or divided equally between them, the probability of getting an evolutionary mechanism f(x) that could produce the biological world is vanishingly remote. But does this prove this mechanism f(x) did not occur? It does not.

Suppose you're sitting in an empty room with an open door. You turn away for a moment and then look back, and sitting in the room in front of you in a red wagon is Richard Dawkins. And the question arises in your mind - how on earth did that happen? So, later I enter the room and inform you, "Richard Dawkins was sitting in a wagon in the other room, and when you weren't looking I pushed the wagon and Richard Dawkins rolled through the door and into the room." So you think about it and say, "Let's see... the mechanism f is Newtonian force, with consideration of other factors, e.g. frictional forces, the surface of the floor, the wagon wheels, etc. [And to take agency out of the picture, say I fell over backwards and hit the wagon.] x is Richard Dawkins sitting in the wagon in the other room. f(x) = Richard Dawkins sitting in the wagon in this room now in front of me. However, the chance of getting a Richard Dawkins by chance is vanishingly remote. Even if we incorporate into f a mechanism capable of generating Richard Dawkins (for example, Richard Dawkins' parents), it just makes f that much more complex and unlikely itself. So however you cut it, f(x) is too unlikely to have occurred by chance, so it is not true that Richard Dawkins rolled into this room after being pushed." (I may have undermined the argument for those who would insist that under any circumstances Richard Dawkins sitting in a room of their house is an extremely unlikely scenario. So just assume you're a sibling of Richard Dawkins.)

Sorry to belabor all this if the point is already obvious, and it should be obvious. How can you rule out evolution or anything on the basis of probability alone? You started out as a microscopic cell (x), and forces of nature f acted on it to produce you. Are you saying that couldn't happen either, based on probability? It is precisely the same argument you use to rule out evolution. It would be reasonable to me personally to assume that, given the immensity of the universe and the immensity of energy it contains, it must all exist for some reason pertaining to man. Before man or the biological world existed, it seems reasonable that there were forces and conditions (f(x)) extant in the universe that resulted in the biological world's existence. Other physical factors, in addition to mutations and natural selection, would undoubtedly have to be incorporated into that picture.
And yes, the f(x) we're talking about would be as unlikely to occur by chance as y itself. And really the crucial point is that f(x), whatever it is, if it resulted in y's existence, would therefore equate to y. Richard Dawkins sitting in a wagon in another room plus Newtonian force = Dawkins materializing in your room. Richard Dawkins' parents in a wagon in another room plus nine months plus Newtonian forces = Dawkins materializing in your room. The embryonic cells of Richard Dawkins' parents sitting in a test tube for eighteen years in a wagon in another room plus Newtonian forces = Dawkins materializing in your room. You're just constantly pushing back what needs to be explained. Obviously you'll eventually hit something for which no natural forces exist to explain it. But given the size of our physical universe it seems foolhardy to start ruling out preceding natural mechanisms so quickly (and in truth, when could any human being ever say conclusively that something was not the output of some physical process that preceded it?). [Note: I am aware that a lot of the above arguments did not in fact originate with me, but where, I'm not certain. Maybe Dembski himself said some of it, I don't know.] JunkyardTornado
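The trade-off being argued over here -- that making x easier to hit by chance only relocates the burden into f -- is, stated informally and with constants omitted, the standard composition inequality of algorithmic information theory: if y = f(x), then

K(y) <= K(f) + K(x) + O(1)

i.e. the descriptive complexity of the output can be no greater than that of the mechanism plus that of its input, so the improbability has to sit somewhere; it does not simply vanish by moving it between f and x.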
Oh my bad, forgot Barbara Forrest is a secular humanist. I guess she wouldn't mention "god" in her ranting. ;D F2XL
Ahh....a creationist book!!! Mats
Indeed, I think, like DLH, that even the most basic Turing machine or computer needs a quantity of information for its structure, symbolic code and so on, which should vastly overcome the limit of 500 bits. If I remember correctly, a Turing machine needs some basic code to define the machine itself. It can really be implemented on a computer (I think in one of Penrose's books there was something like that), although I cannot say exactly how much information code is needed. Maybe some of our friends who are engineers or programmers could help. Anyway, I agree that biological information is usually much more complex than that. Since we are talking of computers and Turing machines, does anybody have any idea of where we should look for the code of the central nervous system organization? How can the 10^11 neurons in our body orderly connect with about 10^15 connections to realize the best known computing hardware, without a written information plan? Or does someone think that a very simple fractal formula outputs the most complex working neural network in our experience? gpuccio
JunkyardTornado at 15
I take it you've heard of a Turing machine. A computer is an extremely trivial device - something that can read sequentially a series of instructions, increment values, and compare values for equality (and then jump to other parts of the program on that basis). This is nothing. It's pointless to philosophize about something so trivial having to be "designed."
I am curious why you dismiss discussion of the Turing machine as being designed. It may seem conceptually "simple". Yet I recommend you explore the factors required to design a real computer that can process some string. Note that it requires energy processing to do so. Or have I misunderstood your post? I have yet to see anyone explain how the four forces of nature (strong & weak nuclear, electro-magnetism and gravity) with stochastic processes can form a processing system under any stretch of the imagination within the Upper Probability Limit. E.g., consider the information processing of DNA to proteins in even the simplest self-reproducing cell. DLH
Denyse, Just a heads up. Dinesh D'Souza talks about The Spiritual Brain in today's column. nullasalus
-"DOES THE UNIVERSE IN FACT CONTAIN ALMOST NO INFORMATION?" (haven't read this) JunkyardTornado
Which is more complex, tic-tac-toe or heart surgery? The latter is, because it takes a much, much longer description to characterize accurately in such a way that a person of average intelligence can grasp it. OTOH, sometimes the complexity of a task is due to ignorance. It's a complex task to break into a safe if you don't have the combination. Maybe heart surgery is simple as well. JunkyardTornado
"The cover of our course textbook, Elements of Information Theory ([C-T]), depicts a computer generated image of a small segment of the Mandelbrot set. The explanation on the back cover says "The information content of the fractal on the cover is essentially zero". This comment contradicts our intuitions: The picture looks very complex, as Penrose so vividly expresses it." -Complexity measures for complex systems and complex objects JunkyardTornado
gpuccio: "But if I see a print of a mandelbrot, I would think that it has been produced using designed tools (a computer)." I take it you've heard of a turing machine. A computer is an extremely trivial device- something that can read sequentially a series of instructions, increment values, compare values for equality (and then jump to other parts of the program on that basis. This is nothing. Its pointless to philosophize about somehthing so trivial having to be "designed." I'm not an expert on fractals either, except that I suppose I could see how some gargantuan saved-state could make them complex. Except that with Chaitin-Kolomgorov complexity, the memory or time consumed is not a consideration, just the smallest program length in instructions to compute a function. ANd I'm sure there's discussion somewhere on why the time and memory consumed can be ignored. Maybe I could try to hunt up some informative piece about algorithmic complexity and fractals. JunkyardTornado
JunkyardTornado: I agree with you. Still, just to be clear about fractals: although the mathematical formula is rather simple, its computation is long and requires great computational resources (anyone old enough to have computed a Mandelbrot on an old computer will understand what I am saying). I am not aware of natural processes which can output a Mandelbrot, although the formula is very simple. Perhaps other kinds of fractals, which do not imply computations with complex numbers, may be found as the result of natural processes (I am thinking of snowflakes and the like, but I could be wrong). But if I see a print of a Mandelbrot, I would think that it has been produced using designed tools (a computer). Anyway, I am not an expert on fractals (although I love them very much), so if I have something wrong, please correct me. gpuccio
mavis: "The more complex a structure is, the more instructions are needed to describe it" What jumps out at me here is fractals. xn+1 = xn2 + c, more or less. Well, I think it illustrates that some things can seem very complex, when in fact they are not, when assessed according to objective criteria. The complexity turns out to be an illusion. When explained how a magic trick works do you still insist on the basis of what your eyes saw that magic really took place? Its no different than gauging a fractal's complexity on the basis of a visual inspection and subjective reaction, perhaps contemplating the difficulty you would have in trying to draw it yourself free hand. Drawing a perfectly straight line is difficult as well, but not because its complex. Which is more complex, tic-tac-toe or heart surgery. The latter is, because it takes a much much longer description to characterize accurately in such a way that person of average intelligence can grasp the essential details. JunkyardTornado
Off topic: you can use Linux to open a pdf. Then you don't have to worry about trojans/viruses etc. DrDan
Re .pdf: I downloaded it but can't find where it went. However, I am not a techie and do not need to solve the problem immediately. Re designers: One can realize that a product features design without ever knowing who designed it. That is why all theists and non-materialist atheists agree on design, but most do not use it as a key apologetic. That is why ID is a big tent. O'Leary
PS: Fractals do NOT pass the EF -- they are caught as "law" -- the first test. It is the programs and formulae that generate them that pass the EF. [And, these are known independently to be agent-originated, so they support the EF's reliability.] kairosfocus
Mavis: Thanks for the thoughts. I have put up Reader 8.1.2 [sigh . . .], and it is d/loading TMLO PDF. [~ 70 MB]. Last, daily updates to modern malware packages do keep one in touch with latest developments. GEM kairosfocus
Also, KF, by the nature of the beast you cannot "pre-screen" PDF files - until you have them you cannot examine the contents. Firewalls are of little use here too, unless you have yours set to scan the data and look for trojans, which might work, but obviously only on trojans that are already known. Here is a typical exploit: http://www.securityfocus.com/bid/21910 Again, you can't filter what you don't already recognise. And only using PDF files from trusted sources is not going to do it, as millions of servers around the world serving "normal" websites have been compromised and are unwittingly serving malware. So it's just better to get patched to the latest all round. No excuses! Mavis Riley
Thanks for the answers. So fractals don't have CSI, but do they pass the EF? What happens when that's attempted - has anybody actually tried passing a fractal into the EF? Would the "algorithm that imposes that necessity" give a different result if passed into the EF than the fractal image itself? Mavis Riley
Mavis: Thanks for the thoughts on PDFs -- in my experience, with reasonable filtering and firewalls, PDFs remain safe. I will do a few expts to see if the d/load works. On the identity-of-designers issue, I note that ID is relevant to e.g. code breaking and the theory of inventive problem solving, TRIZ, as well as other things. On cosmological design, I infer -- cf remarks in the always linked -- that the designer of the cosmos is personal, intelligent, powerful and intending to create life. That tends to support theistic as opposed to pantheistic [or materialistic] views, but it is of course now a worldview-level -- i.e. philosophical -- inference, not a scientific one. I happen to be a Christian theist, but that has to do with the core warranting argument of that faith per Ac 17 [and my own life experiences etc], not the scientific discussion over design. And GP is right: fractals have low contingency and are controlled by necessity. But the algorithm that imposes that necessity is a very different matter. [If that sounds like the discussion on the root of the fine-tuning of the mechanical necessity of our cosmos, yes it does.] GEM of TKI kairosfocus
Mavis: A fractal is a good example of a product of necessity. So it does not exhibit CSI, because the EF has to rule out those forms of self-organization produced by necessary law. Obviously, the system which computes the fractal is a completely different thing... Moreover, I don't think that a fractal in itself has function, so it would not be functionally specified. gpuccio
The more complex a structure is, the more instructions are needed to describe it
What jumps out at me here is fractals. x_(n+1) = x_n^2 + c, more or less. Is there a "special" class of complexity that can be reduced to such simple "storage"? Or don't fractals count in the same way that "proper" structures do? Do fractals have CSI? Do they pass the EF? Has anybody attempted to pass a fractal into the EF? Sorry for all the questions. Mavis Riley
Kairosfocus, You should be aware that older versions of programs such as Acrobat can be exploited by specially constructed files. Your computer can be infected simply by opening such a file. The only way to protect yourself is to get the latest version. Also, I realise that ID is about signs of intelligence, but that does not bar me from seeking O'Leary's opinion on the matter at hand. Nonetheless, I see that the identity of the designer has been chosen by many here already, including yourself, KF - at least that's the impression I get from reading your religious apologetics website. It's not a Hindu deity now, is it? :) Mavis Riley
Denyse: Thanks. I have a paper copy next to me as I write, 228 pp. I would appreciate the PDF version, especially if it is on a page that can be referenced from the web, not just the three online chapters that have been maintained by Mr Dolphin for years. (I am thinking here of the still relevant discussion on the early atmosphere etc.) On using my Acrobat 5 I got sent to a download page, but the 121 pp download is blank. Not sure if that is my fault for being a real cheapskate and insisting on getting more years out of an old version of the full Acrobat than maybe I should expect. (I am still using Office 97 too, Uncle Bill over in Redmond . . .) Any advice? Thanks in advance! GEM of TKI PS: Mavis, pardon: ID is about signs of intelligence in the first and main instance. The identity of the designer is a later question, to be addressed through contextual cues, similar to the forensic question, whodunit. What's been "dun" is prior to who. [As in "accident, suicide or murder" before "who is the murderer."] kairosfocus
Do I take it it would be OK by you if the "intelligent designer" turned out to be a Hindu deity then, O'Leary?
I would like to see what a Hindu would make of the evident design of life.
I wonder if there will be a "Dover" for the Hindus too. Mavis Riley
